arXiv ID: 2301.00666
Title: E-commerce users' preferences for delivery options
Authors: Yuki Oyama, Daisuke Fukuda, Naoto Imura, Katsuhiro Nishinari
Published: 2022-12-30
Link: http://arxiv.org/abs/2301.00666v2
# E-commerce users' preferences for delivery options

###### Abstract

Many e-commerce marketplaces offer their users fast delivery options for free to meet the increasing needs of users, imposing an excessive burden on city logistics. Therefore, understanding e-commerce users' preference for delivery options is one of the core challenges faced while designing logistics policies. To advance such understanding, this study designs and implements a stated choice survey in which respondents are faced with choice tasks among different delivery options and time slots. The survey was completed by 4,062 users from the three major metropolitan areas in Japan. To analyze the stated choice data, mixed logit models capturing users' taste heterogeneity as well as flexible substitution patterns among choice alternatives have been estimated. The model estimation results indicate that delivery attributes including fee, time, and time slot size are significant determinants of the delivery option choices. Associations between users' preferences and socio-demographic characteristics, such as age, gender, teleworking frequency and the presence of a delivery box, were also suggested. Moreover, we analyzed two willingness-to-pay measures for delivery, namely, the value of delivery time savings (VODT) and the value of time slot shortening (VOTS), and applied a semi-nonparametric approach with polynomial approximation to estimate their distributions in a data-oriented manner. Although VODT has a large heterogeneity among the respondents, the estimated median VODT is 25.6 JPY per day, implying that more than half of the respondents would wait an additional day if the delivery fee were increased by only 26 JPY; that is, they do not necessarily need a fast delivery option but often request it when it is cheap or almost free. Moreover, VOTS was found to be low, distributed with a median of 5.0 JPY per hour; that is, users do not highly value the reduction in time slot size in monetary terms. These findings on e-commerce users' preferences can help in designing levels of service for last-mile delivery to significantly improve its efficiency.
keywords: E-commerce, next day delivery, last mile delivery, delivery option choice behavior, stated preference, willingness-to-pay

## 1 Introduction

Under the existing circumstances, e-marketplaces often provide user-oriented services, with many of them offering their users next- or same-day delivery options (e.g., Klapp et al., 2020). Many users are unwilling to pay extra for fast delivery and are often not asked to pay for it (Savelsbergh and Van Woensel, 2016; Rai et al., 2019). Naturally, users choose a fast delivery option even though they may not necessarily need it. The increasing needs of e-commerce users for delivery lead to strict time constraints, imposing an excessive burden on city logistics (e.g., Hua et al., 2010). Based on an experiment conducted in London, Nockold (2001) reports that scheduled delivery with time windows costs three times as much as normal delivery, where items can be delivered at any time of the day. As a result, the workload of delivery service providers has been intensifying and is recognized today as one of the major social issues faced by many countries.

A solution to this challenge is the appropriate management of _demand_. Although optimizing operators' logistics networks and costs has been the primary focus of the literature, the behavior of e-commerce users can be one of the dominant factors in determining logistics strategies, and understanding users' behavior and preferences is one of the core challenges faced while designing urban logistics policies (Neslin et al., 2006; Holguin-Veras et al., 2017). E-retailers and delivery providers may also want to level out the over-concentration of delivery demand, thereby increasing efficiency without compromising users' satisfaction. For delivery demand management, _allocation_ and _pricing_ of delivery options/time slots have been considered representative techniques (e.g., Agatz et al., 2011, 2013; Yang et al., 2016; Yang and Strauss, 2017). The former deals with which options/time slots to offer, while the latter determines the delivery fees at which the options are offered (Klein et al., 2019). A better understanding of users' preferences regarding delivery options and their willingness to pay for fast/preferred-time delivery enhances both the allocation and the pricing of delivery options.
In other words, to appropriately design such demand management techniques, it is necessary to understand users' preferences for delivery option choice. Modeling users' choices also allows for demand prediction in reaction to changes in policy design. However, despite its importance, the literature investigating how different e-commerce users make choices among available delivery options with different attributes is limited (Garver et al., 2012; Nguyen et al., 2019). Some studies have investigated the willingness-to-pay (WTP) measure for the delivery time, which may be useful in designing a pricing strategy (Dinlersoz and Li, 2006; Hsiao, 2009; Gawor and Hoberg, 2019). Yet, to the best of our knowledge, no research has analyzed the WTP for delivery in the context of an e-retailer/marketplace offering multiple delivery options, with users making a choice among them. The complex trade-off between delivery time, charge, and slot attributes in delivery option choice utilities still needs to be investigated to appropriately determine the level-of-service and the fee of delivery options. There is also a need to understand in detail the variation in the preference for delivery options among users, because they are generally heterogeneous in terms of WTP, schedule preferences, and flexibility, and therefore, differentiation/personalization may have a significant impact on delivery demand management (Agatz et al., 2013). Furthermore, the analyst may not be able to capture taste heterogeneity based only on observed variables, and has to take into account unobserved user heterogeneity to analyze WTP measures (Hess et al., 2005, 2017).

The objective of this study is to advance the understanding of e-commerce users' preferences for delivery options and the potential impact of level-of-service design on e-commerce delivery demand. To this end, we perform a choice-based analysis of users' behavior in a context in which they are faced with a choice task among different delivery options and time slots. To explore users' taste heterogeneity in detail, we design and implement a large-scale stated preference (SP) survey in a realistic option choice context, particularly focusing on two proposed WTP measures for delivery option choice: the value of delivery time savings (VODT) and the value of delivery time slot shortening (VOTS). The former indicates how much additional delivery fee users are willing to pay to save their waiting time for their orders by one day, and the latter indicates how much they are willing to pay to shorten a delivery time slot by one hour. Since users do not know exactly when the ordered item will be delivered within the time slot, the uncertainty of the delivery timing increases with the width of the time slot. Although fast delivery and narrow time slots are preferred by users, they conversely impose strict constraints on last-mile delivery on the logistics service provider side. The concepts of VODT and VOTS enable us to quantitatively analyze the trade-off between levels-of-service and prices and thus may provide useful information for appropriately designing pricing strategies. Moreover, we incorporate both observed and unobserved heterogeneity into the analysis by employing a mixed logit (MXL) model (Train, 2009), which is an extension of the standard multinomial logit (MNL) model that flexibly captures both taste heterogeneity and substitution patterns.
We further introduce a flexible semi-nonparametric approach by Fosgerau and Mabit (2013) that does not require a distributional assumption, because no prior knowledge regarding the WTPs of our interest is available in the literature.

### Related Literature

An extensive body of literature has investigated e-commerce users' reactions to delivery in terms of satisfaction, retention, and loyalty (e.g., Ramanathan, 2010; Rao et al., 2011; Koufteros et al., 2014), highlighting the importance for e-commerce users of the delivery fee (Lewis, 2006; Lewis et al., 2006), delivery time (Rao et al., 2011; Xu et al., 2017), and time slot convenience (Agatz et al., 2011; Goebel et al., 2012). We refer the reader to Nguyen et al. (2018) for a systematic review of the literature on e-commerce users' behavior and order fulfillment. However, the literature on the analysis of e-commerce users' _choice behavior_ is limited. A choice-based analysis, particularly discrete choice analysis (e.g., Ben-Akiva et al., 1985), provides insights into how users change their behavior in reaction to service design, thereby allowing various behavioral indicators to be evaluated, which is the key to demand management for parcel delivery. Focusing on these benefits, some studies have proposed choice-based delivery management frameworks (Yang et al., 2016; Yang and Strauss, 2017; Mackert, 2019); however, the choice models in these studies are not yet sufficiently realistic due to a lack of prior understanding of users' preferences for the choice of delivery options and time slots.

Many studies on e-commerce users' choice behavior have focused on the choice between online and physical in-store shopping (e.g., Farag et al., 2007; Kollmann et al., 2012). Hsiao (2009) analyzed such choice behavior by estimating a binary logit model based on stated choice data in the book purchase context. The author investigated VODT and found that it was approximately $0.53 per day. However, the value was calculated as the trade-off between delivery time and travel cost for physical in-store shopping, not between delivery time and fee, and thus this measure cannot be used for designing a pricing scheme for delivery options. Similarly, Gawor and Hoberg (2019) analyzed VODT by conducting a choice-based conjoint analysis in the context of choice among e-retailers in an electronic marketplace offering the same product at different prices (including delivery fees) with different delivery options. They found that VODT was $3.61 per day at an aggregated average, which was much higher than that reported by Dinlersoz and Li (2006) and Hsiao (2009) in the book purchase context. The authors attributed this difference to a difference in marketplaces, since an electronics market generally offers customers more expensive products than a book market does. However, because they calculated VODT with respect to the sum of the item price and delivery fee, it is not possible to separate the effect of a change in the delivery fee from the measure. To the best of our knowledge, no research has analyzed VODT in the context of an e-marketplace offering multiple delivery options with different levels-of-service and users making a choice among them. Further, VODT in the literature is a mean estimate, meaning that users' preferences are implicitly assumed to be homogeneous.

Moreover, the literature on the delivery option choice behavior of e-commerce users is scarce. Agatz et al. (2021) recently explored the impact of green labels on time slot choices.
By estimating an MNL model, they analyzed stated choice data among non-overlapping time slots for delivery, some of which had green labels and/or price incentives. The authors found that green labels work as an incentive and are particularly effective for people who are eco-conscious. Yang et al. (2016) and Yang and Strauss (2017) also estimated MNL models of time slot choice in more general settings, but included only the sensitivity to slot price and slot dummy effects in the utility function. The effects of delivery attributes on delivery option choice behavior have not yet been sufficiently investigated (Garver et al., 2012; Nguyen et al., 2019). Most relevant to our study are the studies by Rai et al. (2019) and Nguyen et al. (2019), who investigated the delivery option choice behavior of e-commerce users. Rai et al. (2019) conducted a choice-based conjoint analysis, focusing on the delivery attributes of fee, time, reception, and return possibility, as well as the attitude of users to sustainable options. The authors found that although e-commerce users prefer free and fast delivery to their home during regular office hours, they are willing to accept more sustainable options, such as collecting their orders themselves or waiting longer, when delivery and returns are free. Nguyen et al. (2019) also performed a choice-based conjoint analysis to investigate how users value delivery attributes when selecting a delivery option for their online purchases. They found that the most important attribute for users was the delivery fee, followed by non-price attributes, which was consistent with the result of Rai et al. (2019). The authors also discussed a significant difference in users' preferences among gender and income groups. Several important aspects are still missing in the previous analysis of e-commerce users' choice behavior toward appropriate demand management. First, users' taste heterogeneity in delivery option preference has to be further investigated. Although the effects of basic demographic characteristics such as age and gender (e.g., Hsiao, 2009; Nguyen et al., 2019), different user segments (e.g., Gawor and Hoberg, 2019; Hjort et al., 2013), or specific attitudinal characteristics (e.g., Rai et al., 2019; Agatz et al., 2021) were analyzed in the literature, attributes associated with lifestyle and household can also have a significant impact on choice behavior (Goebel et al., 2012) and need to be explored. Furthermore, because the analyst may not be able to capture users' taste heterogeneity only with observed attributes, unobserved user heterogeneity should also be taken into account. Second, a detailed analysis of the WTP measures with respect to delivery attributes is missing but can be very useful in delivery demand management, particularly in designing a pricing strategy. Although some studies have investigated WTP for delivery (Dinlersoz and Li, 2006; Hsiao, 2009; Gawor and Hoberg, 2019), no research has analyzed it in the context of delivery option choices. Third, to analyze the taste heterogeneity and WTP measures in detail, advanced econometric modeling needs to be applied. The existing literature on e-commerce users' behavior or delivery demand management mostly relies on the standard MNL model (e.g., Yang et al., 2016; Yang and Strauss, 2017; Rai et al., 2019; Agatz et al., 2021), which does not capture users' taste heterogeneity or the correlation among utilities of choice alternatives. 
### Contributions and Structure of the Paper To advance the understanding of e-commerce users' delivery option preferences, we perform a discrete choice analysis in the context of users facing a choice problem among different delivery options and time slots. We enumerate our contributions to the literature below: * **Delivery option choice analysis**. This study is the first to analyze the integrated choice of a delivery option, date, and time slot offered by an e-marketplace. Available delivery options include next-day, normal, and scheduled delivery. Next-day and normal delivery options do not allow users to select a time slot, while scheduled delivery does. This choice problem is a realistic situation that e-commerce users often face when shopping online, e.g., in _Amazon_. The alternatives may exhibit complex substitution patterns and thus construct a cross-nested choice structure, which we incorporate into the analysis. * **Stated choice survey design**. To perform the choice analysis, we implement a web-based stated preference (SP) survey. It also includes a revealed preference (RP) survey on respondents' most recent experience shopping online so that they can readily imagine a realistic situation when conducting the stated choice tasks, and the item category and price are not limited. In addition to basic demographic information, lifestyle characteristics such as teleworking frequency during the COVID-19 pandemic were collected. The survey was conducted in collaboration with Yamato Holdings, which has the largest market share of parcel delivery in Japan, and obtained a valid sample of 4,062 respondents. To the best of our knowledge, this is the largest SP survey conducted on e-commerce users' delivery option choices. * **WTP measures for delivery**. In the analysis, we focus on three delivery attributes: delivery fee (option-specific price and additional charges for holiday and night slot delivery), expected delivery time, and time slot size. Based on these attributes, we calculate novel WTP measures for delivery, namely VODT and VOTS. The former indicates the amount of additional delivery fee that users are willing to pay to save their order waiting time by one day, and the latter indicates the amount that they are willing to pay to shorten the time slot size by one hour. Both are calculated purely with respect to the delivery fee, not including item price (Gawor and Hoberg, 2019) or travel cost (Hsiao, 2009). * **Users' taste heterogeneity**. The rich stated choice data allows us to incorporate detailed users' taste heterogeneity into the analysis. We explore the interactions of delivery option preferences with various user information such as demographic and household characteristics, lifestyle attributes, and their membership in some e-marketplaces. Analyzing such interactions gives an insight into users' observed heterogeneity in their preferences of options and time slots, which may be the key to differentiation in demand management (Agatz et al., 2013). We also consider the nonlinear interactions of the WTP measures with the price of the purchased item and the frequency of online shopping. * **Advanced discrete choice analysis**. For analysis of the stated choice data, we estimate an MXL model (Train, 2009), which is an extended version of the MNL model to flexibly capture unobserved taste heterogeneity and substitution patterns. We treat the WTP measures for delivery as random parameters and analyze their distributions. 
Because we do not have prior knowledge of their distributional forms, we apply a flexible semi-nonparametric approach (Fosgerau and Mabit, 2013) that does not require a distributional assumption. We further incorporate error components capturing the underlying correlations among the utilities of the alternatives, focusing on a cross-nested structure of delivery options and time slots.

The remainder of this paper is structured as follows. Section 2 describes the details of the design of the stated choice experiment. Section 3 reports the data collection and sample statistics. Section 4 introduces the MXL model of delivery option choice behavior used to analyze the stated choice data, and Section 5 presents the estimation results. In Section 6, we conclude the study.

## 2 Experimental design

We designed a web-based SP survey for e-commerce delivery option choices. The survey consists of three parts: (1) questions on the most recent online shopping experience, (2) stated choice tasks on delivery options, and (3) questions on socio-demographic characteristics and household and lifestyle attributes. Note that the RP survey in the first part was mainly conducted to establish a realistic situation for the respondents during the stated choice tasks.

### Stated choice task

For the stated choice experiment, we asked respondents to assume that they order the same item at the same price as in their most recent online shopping transaction. The date they responded to the survey was assumed to be the order date for the stated choice tasks. Figure 1 shows an example of the stated choice task. For each task, the respondent first chooses a delivery option among "next-day", "scheduled" and "normal" delivery (Q1). Respondents who chose the scheduled delivery option are asked to jointly select the date and time slot for delivery (Q2 and Q3). The available dates for scheduled delivery range from 2 to 8 days after the order date (notated as "2+" \(\sim\) "8+"). For Q2, a calendar pops up on click, and the respondent specifies the preferred delivery date. While the entire period of delivery hours in a day is fixed at 9:00–21:00 (12 hours in total), the size of a time slot varies from 2 to 4 hours across choice scenarios, resulting in three to six available slots. Figure 1 shows a case where the time slot size is 3 hours and, therefore, the number of available slots is four (Q3). For next-day delivery, the ordered item is delivered on the following day, but the respondent is not allowed to select a time slot. When respondents choose the normal delivery option, they can select neither the date nor the time slot for delivery. Note that this integrated choice task of delivery option, date, and time slot closely resembles the choice problem that e-commerce users often face when shopping online, e.g., on _Amazon_. Nevertheless, the existing literature has focused only on a single aspect, such as option choice (e.g., Rai et al., 2019; Nguyen et al., 2019) or time slot choice (e.g., Agatz et al., 2021).

### Attributes

The delivery attributes on which this study mainly focuses are fee, time, and time slot size. To characterize the delivery options, the eight attributes summarized in Table 1 were controlled in the stated choice tasks. The delivery fees are differentiated by delivery option, ranging from 300 to 600 JPY (Attributes 1-3). For scheduled delivery, an additional charge of 100 JPY can be imposed for delivery on holidays and in the latest time slot (Attributes 4-5), respectively.
The expected delivery time for normal delivery is based on the earliest possible date (Attribute 7) and the range of delivery dates (Attribute 8). The delivery time for next-day delivery is always one day, and that for scheduled delivery directly depends on the choice of the respondents, ranging from 2 to 8 days. As mentioned in the previous subsection, the size of a time slot is an attribute for scheduled delivery and varies from 2 to 4 hours (Attribute 6). The entire delivery period in a day ranges from 9:00 to 21:00, with the delivery time slots defined based on the slot size. The number of available slots, therefore, can be three, four, or six. When the size of a slot is three hours, for example, the available slots are 9:00–12:00, 12:00–15:00, 15:00–18:00, and 18:00–21:00 (as in Q3 in Figure 1).

In total, the experiment has 3 four-level, 2 three-level, and 3 two-level attributes, implying \(4^{3}\times 3^{2}\times 2^{3}=4{,}608\) profiles for a full factorial design, which is very hard to manage. Therefore, we created a fractional factorial design consisting of 400 profiles in which the main effects are orthogonal to each other. From this set of choice tasks, we randomly displayed five tasks to each respondent. We did not consider optimal designs such as D-efficient designs (Kuhfeld et al., 1994) because prior values of the parameters of the delivery option choice model are not empirically known. Moreover, Walker et al. (2018) recently showed that a random design, despite its simplicity, performs as well as any other design in terms of robustness.

**Q1. Which delivery option would you choose?** (1) Next-day Delivery (2) Scheduled Delivery (from 2 to 8 days after) (3) Normal Delivery (specified day or time are not available)

**Q2. If you chose "(2) Scheduled Delivery," please select your preferred delivery date from the calendar below.** Note: please consider today the order date.

**Q3. If you chose "(2) Scheduled Delivery," please select your preferred delivery time from the list below.** 9:00–12:00 / 12:00–15:00 / 15:00–18:00 / 18:00–21:00

Figure 1: Example of the stated choice task. Items in red indicate task attributes. This is an English translation of the survey (see Figure A.7 for the original).

## 3 Stated choice data

### Data collection

We conducted the stated choice survey from April 30 to May 14, 2021, during the COVID-19 pandemic, in collaboration with Yamato Holdings Co., Ltd., which has the largest market share of parcel delivery in Japan (42.3 % in the 2018 financial year). The respondents were among the "Kuroneko Members" (more than 50 million in Japan), registered users of Yamato Holdings' delivery service. In this survey, we randomly selected users who live within the three major metropolitan areas (Greater Tokyo, Osaka, and Nagoya) in Japan. The population of the three areas is 52 million, accounting for 41% of Japan's total population1. The survey invitation was sent to 100,000 Kuroneko Members, of whom 4,872 completed it (i.e., the response rate was 4.87%). Since five choice tasks were given to each respondent, the original sample size observed was 24,360.
Footnote 1: Source: 2015 Population Census (Statistics Bureau, Ministry of Internal Affairs and Communications) [https://www.stat.go.jp/english/data/kokusei/2015/summary.html](https://www.stat.go.jp/english/data/kokusei/2015/summary.html)

### Data cleaning

We cleaned the data as follows:

* First, we removed 635 observations in which the scheduled delivery option was chosen for the next day or for more than 8 days after the order date2. As mentioned earlier, we accept scheduled delivery only for 2-8 days after the order date. (24,360 \(\rightarrow\) 23,725)
* Then, we removed 745 observations of 151 respondents who have never experienced shopping online. To analyze the preferences for delivery options in e-commerce, this study focuses on experienced users3. (23,725 \(\rightarrow\) 22,980)
* Finally, we removed 3,120 observations of 655 respondents who chose an alternative dominated by other options once or more. Such alternatives include the normal delivery option offered with a higher delivery fee than the next-day or scheduled delivery options. This removes potential bias from responses lacking seriousness. (22,980 \(\rightarrow\) 19,860)

Footnote 2: We could not restrict the available order dates for each participant due to system specifications. Alternatively, we carefully explained that the available dates for the scheduled delivery were 2 to 8 days after the order date (i.e., survey date).

Footnote 3: This processing is also done in Nguyen et al. (2019), who also focused on respondents who have at least some experience shopping online.

After data cleaning, we finally obtained 19,860 observations of 4,062 unique respondents.

\begin{table}
\begin{tabular}{l l c c c}
\hline \hline
 & Attribute & Alternative & No. of levels & Values \\
\hline
1 & Delivery fee (JPY) & Next-day & 4 & [300, 400, 500, 600] \\
2 & Delivery fee (JPY) & Normal & 4 & [300, 400, 500, 600] \\
3 & Delivery fee (JPY) & Scheduled & 4 & [300, 400, 500, 600] \\
4 & Additional charge for holiday (JPY) & Scheduled & 2 & [\(\pm\)0, +100] \\
5 & Additional charge for the latest slot (JPY) & Scheduled & 2 & [\(\pm\)0, +100] \\
6 & Size of a slot (hrs) & Scheduled & 3 & [2, 3, 4] \\
7 & Earliest possible date (d+) & Normal & 3 & [2+, 3+, 4+] \\
8 & Range of delivery dates (days) & Normal & 2 & [2, 3] \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Attributes and attribute levels

### Sample statistics

Table 2 lists the descriptive statistics of the demographic characteristics, household information, and e-commerce attributes of the respondents, as well as their most recent online shopping transaction. Because of the COVID-19 pandemic, 26.7 % of the respondents telework once a week or more. In addition, 25.3 % of the respondents have a delivery box at their residence, which is a locker-type device where delivery persons can leave parcels even when the recipient is not at home. Regarding online shopping experience, 21.6 % of the respondents usually buy an item online once a week or more, and 46.1 % of the respondents have registered for a membership with a free-delivery privilege. Note that it was clearly explained in the stated choice experiment that all respondents were required to pay the delivery fee even if they have such a membership. We observed a wide variety in the categories and prices of the items purchased.
The information regarding the item price has been used in analyzing the WTPs, which is important given that the literature targeting different marketplaces has reported significantly different values of WTP for delivery (Dinlersoz and Li, 2006; Hsiao, 2009; Gawor and Hoberg, 2019).

### Aggregation

Out of 19,860 observations, the next-day, normal, and scheduled delivery options were chosen 8,429 (42.4 %), 4,265 (21.5 %), and 7,166 times (36.1 %), respectively. Of the scheduled deliveries, weekdays and holidays were chosen 1,327 (18.5 %) and 5,839 times (81.5 %), respectively; and the earliest, the latest, and the other time slots were chosen 3,262 (45.5 %), 2,082 (29.1 %), and 1,822 times (25.4 %), respectively. Although next-day delivery is the most popular alternative, the other options were also chosen a considerable number of times.

To see the trade-off between delivery time and fee, we plot the aggregate choice results focusing on these two attributes in Figure 2, which includes the results of next-day (the delivery time is one day) and scheduled delivery (two to eight days). Since a delivery date is not fixed for the normal delivery option, this aggregation does not include the observations choosing that option. Figure 2(a) shows the choice results based on the delivery time and fee that each option offers. Most of the respondents seem to prefer deliveries within three days after the order date. There is no major change in the number of choices from the fourth to the eighth day after the order date, which may be because, for deliveries later than three days after the order, users care more about their schedule than about the delivery time. Delivery within three days seems to be popular over a wide range of delivery fees; even at a fee of 600 JPY, the highest possible fee for next-day delivery, deliveries one to three days after the order date were chosen relatively often. Note that, due to the SP task design, fees of 500 and 600 JPY are not always expensive relative to the other available options. This can be seen in Figure 2(b), which focuses on the difference between each delivery fee and the minimum fee among the offered alternatives, instead of the pure delivery fee. It shows that the next-day delivery option was chosen 6,426 times (32.4% of the total) when it was the least expensive among the available options. That is to say, e-commerce users consider the trade-off between delivery fee and time; although some still prefer their parcels to be delivered on the next day, many of them may change their preference when the fee is not the cheapest.

## 4 Delivery option choice model

To analyze the stated choice data, we perform a discrete choice analysis of e-commerce delivery options. The alternatives for the choice problem are defined based on delivery options (next-day, scheduled, and normal), dates, and time slots. As mentioned earlier, the number of available time slots varies across choice scenarios. To analyze all samples in a unified manner, we categorize the time slots into three categories: "morning" (the earliest), "night" (the latest), and "other" slots. Note that the additional charge is consistent with this definition (Attribute 5 in Table 1). Thus, we have 21 (\(7\times 3\)) schedules for the scheduled delivery, resulting in 23 alternatives in the choice set.

\begin{table}
\begin{tabular}{l l l l}
\hline\hline
Category & Characteristic & Value & Sample frequency \\
\hline
Socio-demographic & Age & \(\leq 29\) & 2.8\% \\
 & & 30–49 & 29.9\% \\
 & & 50–69 & 58.8\% \\
 & & \(\geq 70\) & 8.5\% \\
 & Gender & Female & 41.3\% \\
 & & Male & 58.7\% \\
 & Occupation & Employee & 44.1\% \\
 & & Self-employed & 5.2\% \\
 & & Public officer & 4.6\% \\
 & & Professional (incl. healthcare worker) & 3.4\% \\
 & & Part-time job & 12.5\% \\
 & & Student & 1.0\% \\
 & & No job & 20.0\% \\
 & & Other & 9.2\% \\
 & Telework frequency & Everyday & 7.8\% \\
 & & 3–4 days per week & 9.2\% \\
 & & 1–2 days per week & 9.7\% \\
 & & 2–3 days per month & 4.1\% \\
 & & 1 day per month or less & 11.1\% \\
 & & Never & 58.1\% \\
\hline
Household & Residence & Tokyo & 29.6\% \\
 & & Other & 70.4\% \\
 & Household composition & Live alone & 19.1\% \\
 & & Couple with double income & 17.0\% \\
 & & Couple with single income & 14.9\% \\
 & & Live with children & 32.9\% \\
 & & Live with family (w/o children) & 11.4\% \\
 & & Live with friends & 0.5\% \\
 & & Other & 4.2\% \\
 & House type & Detached house & 49.1\% \\
 & & Apartment & 49.5\% \\
 & & Other & 1.4\% \\
 & Have delivery box? & Yes & 25.3\% \\
 & & No & 74.7\% \\
\hline
E-commerce attributes & E-shopping frequency & 3–4 times per week & 3.0\% \\
 & & 1–2 times per week & 18.6\% \\
 & & 2–3 times per month & 42.3\% \\
 & & Once per month or less & 36.1\% \\
 & E-commerce membership & Member w/ free-delivery privilege & 46.1\% \\
 & & Member w/o privilege & 48.8\% \\
 & & Not a member & 5.1\% \\
\hline
Recent e-shopping experience & Category of item & Books & 6.9\% \\
 & & Software & 1.0\% \\
 & & Computing devices & 8.7\% \\
 & & Grocery & 18.9\% \\
 & & Clothing & 15.0\% \\
 & & CD / DVD & 2.9\% \\
 & & Gift & 1.8\% \\
 & & Daily necessities & 6.6\% \\
 & & Toys & 2.2\% \\
 & & Electric devices & 10.5\% \\
 & & Healthcare goods & 6.1\% \\
 & & Cosmetics & 6.2\% \\
 & & Office supplies & 1.5\% \\
 & & Interior items & 2.1\% \\
 & & Other & 9.6\% \\
 & Price of item (JPY) & \(\leq 999\) & 3.9\% \\
 & & 1,000–2,999 & 20.7\% \\
 & & 3,000–4,999 & 20.3\% \\
 & & 5,000–7,499 & 12.4\% \\
 & & 7,500–9,999 & 11.2\% \\
 & & 10,000–14,999 & 10.4\% \\
 & & 15,000–19,999 & 6.1\% \\
 & & 20,000–29,999 & 4.9\% \\
 & & 30,000–49,999 & 4.9\% \\
 & & \(\geq 50,000\) & 5.2\% \\
\hline\hline
\end{tabular}
\end{table}
Table 2: Sample statistics (\(N=4062\))

### Model formulation

For the analysis, we estimate an MXL model that captures unobserved inter-individual taste heterogeneity and flexible substitution patterns. Based on random utility maximization (RUM) theory, in choice task \(t\in\{1,\ldots,T_{n}\}\), individual \(n\in\{1,\ldots,N\}\) is assumed to choose the alternative that maximizes the utility \(U_{ntj}\) defined below:
\[U_{ntj}=V(\mathbf{X}_{ntj};\mathbf{\beta}_{n})+\varepsilon_{ntj}, \tag{1}\]
where \(V\) is the systematic utility, which is a function of observed explanatory variables \(\mathbf{X}_{ntj}\), and \(\varepsilon_{ntj}\) is the unobserved utility following the extreme value distribution Type I, i.e., \(\varepsilon_{ntj}\sim\text{Gumbel}(0,\mu)\). We assume that the scale \(\mu\) is standardized to one. The preference parameters \(\mathbf{\beta}_{n}\) of individual \(n\) are assumed to be realizations from a distribution \(f(\mathbf{\beta}|\mathbf{\Omega})\), where \(\mathbf{\Omega}\) is a vector of hyperparameters.
When \(\mathbf{\beta}_{n}\) is given, the probability that choice \(y_{nt}\) of individual \(n\) in choice scenario \(t\) is \(j\) is given by the MNL model (also called the logit kernel):
\[P_{n}(y_{nt}=j|\mathbf{\beta}_{n})=\frac{\exp\{V(\mathbf{X}_{ntj};\mathbf{\beta}_{n})\}}{\sum_{j^{\prime}\in C_{nt}}\exp\{V(\mathbf{X}_{ntj^{\prime}};\mathbf{\beta}_{n})\}}, \tag{2}\]
where \(C_{nt}\) is the choice set for individual \(n\) at choice task \(t\)4. Therefore, the joint probability of the sequence of choices \(\{j_{t}\}_{t=1}^{T_{n}}\) for a given \(\mathbf{\beta}_{n}\) is
\[P_{n}\left(\mathbf{y}_{n}=\{j_{t}\}_{t=1}^{T_{n}}|\mathbf{\beta}_{n}\right)=\prod_{t=1}^{T_{n}}P_{n}(y_{nt}=j_{t}|\mathbf{\beta}_{n}). \tag{3}\]

Footnote 4: This study assumes that \(C_{nt}\) is always the same for all choice tasks of all individuals, i.e., \(C_{nt}=C\), and consists of 23 alternatives.

The MXL model marginalizes this probability over the parameter distribution \(f(\mathbf{\beta}|\mathbf{\Omega})\), obtaining the unconditional probability
\[P_{n}\left(\mathbf{y}_{n}=\{j_{t}\}_{t=1}^{T_{n}}|\mathbf{\Omega}\right)=\int_{\mathbf{\beta}}P_{n}\left(\mathbf{y}_{n}=\{j_{t}\}_{t=1}^{T_{n}}|\mathbf{\beta}\right)f(\mathbf{\beta}|\mathbf{\Omega})\,\mathrm{d}\mathbf{\beta}, \tag{4}\]
and the log-likelihood function is given by
\[LL(\mathbf{\Omega})=\sum_{n=1}^{N}\ln P_{n}\left(\mathbf{y}_{n}=\{j_{t}\}_{t=1}^{T_{n}}|\mathbf{\Omega}\right). \tag{5}\]
Since the integral in (4) is hard to compute, we estimate the MXL model by maximizing the following simulated log-likelihood function:
\[SLL(\mathbf{\Omega})=\sum_{n=1}^{N}\ln\left(\frac{1}{R}\sum_{r=1}^{R}P_{n}(\mathbf{y}_{n}=\{j_{t}\}_{t=1}^{T_{n}}|\mathbf{\beta}_{nr})\right)\approx LL(\mathbf{\Omega}), \tag{6}\]
where \(\mathbf{\beta}_{nr}\) is the \(r\)th realization (\(r\in\{1,\ldots,R\}\)) of \(n\)'s preference parameters drawn from the distribution \(f(\mathbf{\beta}|\mathbf{\Omega})\) and \(R\) is the total number of random draws.

Figure 2: Choice results with the focus on the delivery time (horizontal axis) and delivery fee (vertical axis): (a) takes the pure delivery fee, and (b) focuses on the fee difference among the offered alternatives. The numbers in the grids indicate the number of times chosen, with deeper colors representing higher numbers. Note that this aggregation includes only the observations that chose the next-day or scheduled delivery options, and therefore the total does not correspond to 19,860, the total number of observations.

### Model specification

#### 4.2.1 Utility function

This study investigates the distributions of two WTP measures for delivery, VODT \(w_{d}\) and VOTS \(w_{h}\), as well as the effects of socio-economic characteristics \(\mathbf{s}\) on delivery option choices. Therefore, instead of taking the ratio of the marginal utilities of the target attributes and the delivery fee (i.e., modeling in the preference space), we formulate the model in the WTP space (Train and Weeks, 2005), leading to a straightforward estimation of the WTP distributions. With this formulation, the WTP distribution is directly estimated in a stable manner without any restriction on the distribution of the delivery fee coefficient \(\beta_{c}\) (Scarpa et al., 2008).
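To make the WTP-space reparameterization concrete, the following identity (not an equation from the original derivation, but the standard relation implied by Train and Weeks, 2005, written with the fee, time, and slot-size variables defined in eq. (7) below) shows how the two spaces relate:
\[
\beta_{c}\mathrm{DF}+\beta_{d}\mathrm{DT}+\beta_{h}\mathrm{SS}
=\beta_{c}\left(\mathrm{DF}+\frac{\beta_{d}}{\beta_{c}}\mathrm{DT}+\frac{\beta_{h}}{\beta_{c}}\mathrm{SS}\right)
=\beta_{c}\left(\mathrm{DF}+w_{d}\mathrm{DT}+w_{h}\mathrm{SS}\right),
\]
so that VODT \(w_{d}=\beta_{d}/\beta_{c}\) and VOTS \(w_{h}=\beta_{h}/\beta_{c}\) enter the utility directly as parameters, and their distributions are estimated without forming ratios of estimated coefficients.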
The utility \(U_{ntj}\) for alternative \(j\) of individual \(n\) in choice situation \(t\) is:
\[U_{ntj}=\beta_{c,n}(\mathrm{DF}_{ntj}+w_{d,n}\mathrm{DT}_{ntj}+w_{h,n}\mathrm{SS}_{ntj})+\mathbf{\alpha}^{\prime}\mathbf{s}_{nj}+\sum_{m\in M}\delta_{jm}\xi_{m}+\varepsilon_{ntj}, \tag{7}\]
where \(\mathrm{DF}_{ntj}\), \(\mathrm{DT}_{ntj}\), and \(\mathrm{SS}_{ntj}\) are the delivery fee (JPY), expected delivery time (days), and time slot size (hours), respectively; \(\mathrm{DT}_{ntj}\) equals 1 for next-day delivery, 2–8 for scheduled delivery, and is defined as \(\mathrm{ED}_{ntj}+\mathrm{RD}_{ntj}/2\) for normal delivery, where \(\mathrm{ED}_{ntj}\) is the number of days to the earliest possible date and \(\mathrm{RD}_{ntj}\) is the range of delivery dates (see Table 1); \(\mathrm{SS}_{ntj}\) is fixed to zero for the next-day and normal delivery options. The parameter \(\mathbf{\alpha}\) is a vector of coefficients for the socio-economic characteristics. The error component \(\varepsilon\) is assumed to be independent and identically distributed (iid) extreme value across observations, i.e., \(\varepsilon\sim\mathrm{Gumbel}(0,1)\). The normal error components \(\mathbf{\xi}=\{\xi_{m}\sim\mathcal{N}(0,\sigma_{m}^{2})\}_{m\in M}\) capture the heteroskedasticity and correlations among the utilities of different delivery options, with a nesting structure \(M\) (see Section 4.2.4 for details), and \(\delta_{jm}\) equals one if alternative \(j\) belongs to nest \(m\) and zero otherwise (Train, 2009). We also assume that \(\beta_{c}\), \(w_{d}\) and \(w_{h}\) are randomly distributed.

#### 4.2.2 Observed heterogeneity

We structure the WTPs by incorporating interactions with users' socio-economic characteristics, capturing nonlinear observed taste heterogeneity across individuals. Following the continuous interaction specification used in the estimation of the value of travel time savings (Axhausen et al., 2008), the delivery fee coefficient \(\beta_{c,n}\) is structured by the price of the item \(\text{PI}_{n}\) that user \(n\) ordered5:
\[\beta_{c,n}=\hat{\beta}_{c,n}\left(\frac{\text{PI}_{n}}{\text{PI}_{\text{ref}}}\right)^{\gamma_{\text{PI}}}, \tag{8}\]
and VODT \(w_{d}\) is structured by the frequency of online shopping \(\text{FR}_{n}\),
\[w_{d,n}=\hat{w}_{d,n}\left(\frac{\text{FR}_{n}}{\text{FR}_{\text{ref}}}\right)^{\gamma_{\text{FR}}}, \tag{9}\]
where \(\hat{\beta}_{c,n}\) and \(\hat{w}_{d,n}\) are realizations from the respective distributions; \(\text{PI}_{\text{ref}}\) and \(\text{FR}_{\text{ref}}\) are the sample averages; and \(\gamma_{\text{PI}}\) and \(\gamma_{\text{FR}}\) are parameters to be estimated.

Footnote 5: This is to capture the difference in sensitivity to the delivery fee depending on the item price. Gawor and Hoberg (2019) reported such a difference between electronics and book marketplaces by comparing their results with those of Dinlersoz and Li (2006) and Hsiao (2009). However, how the item price actually affects the WTPs has never been investigated.

#### 4.2.3 Unobserved heterogeneity: distributional specification

We assume that \(\beta_{c}\) follows a log-uniform distribution, which has a shorter tail than a log-normal distribution and has been tested for the value of travel time savings in studies by Fosgerau (2006) and Hess et al. (2017). If \(y=\log(x)\) is uniformly distributed, \(x\) is defined to be log-uniformly distributed.
Therefore, the fee coefficient is given by
\[\hat{\beta}_{c}=-\exp(a+bu), \tag{10}\]
where \(u\sim\text{Uniform}(0,1)\), and \(a\) and \(b\) are the lower bound and the spread, which are parameters to be estimated.

Unlike the value of travel time savings in the transportation research field, we do not have sufficient empirical evidence for the WTPs of interest; therefore, assuming a parametric distributional form a priori may bias the understanding. After extensive testing, for the distributional specification of the WTPs \(w_{d}\) and \(w_{h}\), we apply a semi-nonparametric approach by Fosgerau and Mabit (2013) that does not require a distributional assumption and that can flexibly describe the shape of the distribution in a data-oriented manner. We transform a draw \(u\) from a base distribution using a power series
\[f(u|\boldsymbol{\eta})=\sum_{k=0}^{K}\eta_{k}u^{k}, \tag{11}\]
where \(K\) is the dimension of the polynomial expansion. We then compute random draws of the WTPs:
\[\hat{w}_{d}=f(u_{d}|\boldsymbol{\eta}_{d}), \tag{12}\]
\[\hat{w}_{h}=f(u_{h}|\boldsymbol{\eta}_{h}), \tag{13}\]
where \(\boldsymbol{\eta}_{d}=(\eta_{d,0},\ldots,\eta_{d,K_{d}})\) and \(\boldsymbol{\eta}_{h}=(\eta_{h,0},\ldots,\eta_{h,K_{h}})\) are the vectors of parameters to be estimated, which define the distributional forms of the WTPs of interest, and \(K_{d}\) and \(K_{h}\) are the dimensions of the power series for VODT and VOTS, respectively.

#### 4.2.4 Error components

We have 23 alternatives in the choice set, some of which may introduce correlations violating the independence of irrelevant alternatives (IIA) assumption of the MNL model. To capture the heteroskedasticity and cross-correlated structures among the utilities of delivery options, we introduce the error components \(\mathbf{\xi}\) into the MXL model. Figure 3 shows the cross-nested structure of the model, where we introduce eight error components, i.e., \(|M|=8\), for the delivery options (next-day, normal and scheduled) as well as for the date (holiday and non-holiday) and time slot (morning, night and other) nests; for example, the utility of scheduled delivery two days after the order is placed and in the morning slot is defined as
\[U_{nt,(2+,\text{morning})}=\cdots+\xi_{\text{scheduled}}+\xi_{\text{holiday}}+\xi_{\text{morning}}+\varepsilon_{nt,(2+,\text{morning})},\]
where \(\xi_{m}\sim\mathcal{N}(0,\sigma_{m}^{2})\), \(\forall m\in M\), and the standard deviation \(\sigma_{m}\) is to be estimated. Note that whether "2+" is a holiday or not depends on the order date; therefore, \(U_{nt,(2+,\text{morning})}\) does not necessarily contain the error component \(\xi_{\text{holiday}}\). Because we rely on panel data for model estimation, all scale parameters are identified (Walker et al., 2007).

## 5 Model estimation results

This section reports the model estimation results. Maximum simulated likelihood estimation was performed using the Apollo package in R (Hess and Palma, 2019).

### Final model specification

An extensive specification search was conducted to reach the final model. Table 3 reports the comparison of the models developed during the critical stages of the modeling procedure. First, a standard MNL model was estimated, in which no parameters were assumed to be random. Since its framework restricts substitution patterns and does not adequately account for preference heterogeneity across respondents, MXL specifications were explored.
The error components were introduced into the model (MXL 1) to capture heteroskedasticity and flexible substitution patterns across the alternatives. Furthermore, three MXL specifications were explored in terms of unobserved preference heterogeneity by assuming the delivery fee coefficient and the two WTPs to be randomly distributed. For the distributions of the WTPs, the normal and log-normal distributions were tested (MXL 2 and MXL 3), as well as the polynomial approximation of (11) (MXL 4). After a comprehensive search6, the dimensions of the polynomial approximations were finally set at \(K_{d}=4\) and \(K_{h}=2\), and uniformly distributed random numbers were used as the base \(u\) in (11). The overall goodness-of-fit of the models appears to be satisfactory, and the incorporation of both error components and random WTP parameters significantly improved the model performance (Table 3). Consequently, MXL 4 obtained the best goodness-of-fit in terms of the values of the log-likelihood, AIC, BIC and \(\bar{\rho}^{2}\); in particular, the polynomial approximations allowed the model to fit better than the normal and log-normal distributions.

Figure 3: Cross-nested structure of the choice model approximated by the error components.

### Estimation results

Table 4 enumerates the estimation results for three models: MNL, MXL 1 (error component model) and MXL 4 (error component and random parameter model with polynomial approximations). In all models, the sensitivities to the delivery attributes were estimated with statistical significance and have the expected signs. Most of the other parameters also correspond to our expectations. Below we discuss the results in detail.

#### 5.2.1 Observed heterogeneity: effects of socio-demographic characteristics

Here, we discuss the observed heterogeneity in delivery option choice preferences, captured by interactions between alternative-specific constants and socio-demographic characteristics. For the sake of representativeness in the effects of socio-demographic variables, the categories of some variables were merged before the model estimation. As such, we defined respondents who telework for 1-2 days per week or more as "teleworkers" (30.8% of the sample, see Table 2). For the subgroups of choices, normal delivery (option choice), weekday delivery (date choice), and the "other" slot (slot choice) are treated as the references for the alternative-specific constants, respectively.

**Option choice** The estimation result indicates that users who are 70 years or older tend to choose normal delivery rather than next-day and scheduled delivery, while users younger than 30 years old exhibit the opposite tendency, suggesting a clear difference in delivery option preference between age groups. Additionally, male users prefer next-day delivery but not scheduled delivery, which implies that they tend to order items just before they need them. People living in Tokyo, the largest and busiest city in Japan, prefer scheduled delivery, which allows them to select the delivery date and slot to fit their schedule. In contrast, the estimation result indicates that teleworkers do not need scheduled delivery, reflecting the higher flexibility of their schedule to receive parcels.
During the COVID-19 pandemic, many people started teleworking, and last-mile delivery may have been released from the complex time window constraints it faced before; however, non-teleworkers who prefer scheduled delivery, as well as delivery providers, may still face a problem due to the rapid growth in the total demand for last-mile delivery. Moreover, users who have a privileged membership in an e-commerce marketplace particularly prefer next-day delivery to normal delivery. Since we clearly explained that their privilege was not valid during the stated choice tasks, this result highlights the inertia of their choices, i.e., they are used to requesting fast delivery options. Finally, the result suggests a strong relationship between the presence of a delivery box and users' delivery option choice preference. If users have a delivery box installed at their homes, they do not need to worry about the timing of the receipt of items; therefore, they are willing to opt for next-day delivery and do not need scheduled delivery.

\begin{table}
\begin{tabular}{l c c c c c}
\hline \hline
 & MNL & MXL 1 & MXL 2 & MXL 3 & MXL 4 \\
\hline
Error component & No & Yes & Yes & Yes & Yes \\
Distribution of VODT & Fixed & Fixed & Normal & Log-normal & Polynomial w. \(K_{d}=4\) \\
Distribution of VOTS & Fixed & Fixed & Normal & Log-normal & Polynomial w. \(K_{h}=2\) \\
No. of parameters & 46 & 54 & 57 & 57 & 61 \\
Log-likelihood & -35,068.2 & -28,359.06 & -26,812.02 & -26,796.48 & -26,624.67 \\
AIC & 70,228.4 & 56,826.12 & 53,738.05 & 53,706.96 & 53,371.35 \\
BIC & 70,591.64 & 57,252.53 & 54,188.15 & 54,157.06 & 53,853.03 \\
\(\bar{\rho}^{2}\) & 0.4361 & 0.5437 & 0.5685 & 0.5688 & 0.5715 \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Model comparison

**Slot choice** In terms of delivery time slot preference, the analysis focused on its relationships with occupation and household characteristics, which reflect users' lifestyles. First, users living with their family do not prefer delivery in the latest slot, implying that they can afford to receive their parcels during the day, instead of in the evening when the family spends time together. The time slot choice also depends on the housing type: people who live in detached houses tend to choose slots during the day, as indicated by the negative signs of the coefficients of the morning and night slots. On holidays, users prefer to receive parcels in the morning, but not in the evening. A possible explanation is that they may want to leave home after receiving the parcels or are likely to return home late on holidays. As for occupation, users who are self-employed, work part-time, or have no job clearly do not opt for the night slot.

#### 5.2.2 Error components

The standard deviations of the heteroskedastic error components are all significantly different from zero, except the one for the normal delivery option. This suggests that there exist unobserved heterogeneity and significant substitution patterns associated with the choices of delivery options, dates, and time slots. First, the difference in the scales of the error components of next-day, scheduled, and normal delivery indicates heteroskedasticity among the option alternatives. In addition, the estimation results for the error scales of the day and slot alternatives support the cross-nested structure of Figure 3; alternatives within a common nest are correlated with each other.
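As an illustration of how the eight error components induce this cross-nested correlation pattern, the following minimal Python sketch (not the authors' Apollo/R implementation) builds the nest-incidence matrix \(\delta_{jm}\) for the 23 alternatives and simulates the shared error components; the \(\sigma_{m}\) values and the holiday calendar used here are purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# 23 alternatives: next-day, normal, and 7 dates x 3 slot categories for scheduled delivery.
slots = ["morning", "other", "night"]
alts = ["next_day", "normal"] + [f"scheduled_{d}+_{s}" for d in range(2, 9) for s in slots]

# Eight error-component nests as in Figure 3.  Which scheduled dates fall on a holiday depends
# on the order date; purely for illustration we assume days 2+ and 3+ are holidays.
holiday_days = {2, 3}
nests = ["next_day", "normal", "scheduled", "holiday", "weekday", "morning", "night", "other"]

def nest_membership(alt: str) -> set:
    """Nests that alternative `alt` belongs to (i.e., where delta_jm = 1)."""
    if alt in ("next_day", "normal"):
        return {alt}
    _, day, slot = alt.split("_")
    return {"scheduled", "holiday" if int(day.rstrip("+")) in holiday_days else "weekday", slot}

delta = np.array([[m in nest_membership(a) for m in nests] for a in alts], dtype=float)

# Hypothetical nest standard deviations sigma_m (the estimated values are those in Table 4).
sigma = rng.uniform(0.5, 2.0, size=len(nests))

# One individual's draw: xi_m ~ N(0, sigma_m^2), added to every utility in nest m.
xi = rng.normal(0.0, sigma)
shared_error = delta @ xi  # contribution of the error components to the 23 utilities

# Implied covariance between utilities: sum_m delta_im * delta_jm * sigma_m^2, so alternatives
# sharing a nest are positively correlated, which is the cross-nested substitution pattern.
cov = delta @ np.diag(sigma ** 2) @ delta.T
corr = lambda i, j: cov[i, j] / np.sqrt(cov[i, i] * cov[j, j])
print("corr(scheduled 2+ morning, scheduled 2+ other):", round(corr(2, 3), 3))
print("corr(next-day, scheduled 2+ morning):          ", round(corr(0, 2), 3))
```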
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{MNL} & \multicolumn{2}{c}{MXL 1} & \multicolumn{2}{c}{MXL 4} \\ & Estimate & Rob.std.err. & Estimate & Rob.std.err. & Estimate & Rob.std.err. \\ \hline **Delivery attributes** & & & & & & \\ _Delivery fee_ (JPY) & & & & & & \\ Fixed param. & -0.009\({}^{***}\) & 0.000 & -0.018\({}^{***}\) & 0.000 & & & \\ Random param. & & & & & & -2.232\({}^{***}\) & 0.078 \\ Lower bound (log) & & & & & & -5.036\({}^{***}\) & 0.052 \\ Spread (log) & & & & & & -0.069\({}^{***}\) & 0.026 \\ Observed heterogeneity & & & & & & & \\ Item price (JPY) & -0.106\({}^{***}\) & 0.014 & -0.089\({}^{***}\) & 0.016 & -0.069\({}^{***}\) & 0.026 \\ _Value of delivery time savings_ (JPY/day) & & & & & & \\ Fixed param. & & 44.870\({}^{***}\) & 1.724 & 26.465\({}^{***}\) & 1.158 & & \\ Random param. & & & & & & 243.651\({}^{***}\) & 0.992 \\ \(\eta_{d,0}\) & & & & & & -1,034.516\({}^{***}\) & 1.175 \\ \(\eta_{d,2}\) & & & & & & 1,061.363\({}^{***}\) & 2.815 \\ \(\eta_{d,3}\) & & & & & & 618.977\({}^{***}\) & 2.384 \\ \(\eta_{d,4}\) & & & & & & -936.408\({}^{***}\) & 2.250 \\ Observed heterogeneity & & & & & & \\ Online shopping frequency & 0.104\({}^{***}\) & 0.020 & 0.146\({}^{***}\) & 0.033 & 0.160\({}^{***}\) & 0.033 \\ _Value of time slot shortening_ (JPY/hr) & & & & & & \\ Fixed param. & 9.412\({}^{***}\) & 2.275 & 5.455\({}^{***}\) & 2.088 & & \\ Random param. & & & & & & 31.248\({}^{***}\) & 7.596 \\ \(\eta_{h,0}\) & & & & & & -120.229\({}^{***}\) & 39.598 \\ \(\eta_{h,2}\) & & & & & & 100.399\({}^{**}\) & 42.696 \\ \hline \hline \end{tabular} \end{table} Table 4: Estimation results \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{MNL} & \multicolumn{2}{c}{MXL 1} & \multicolumn{2}{c}{MXL 4} \\ & Estimate & Rob.std.err. & Estimate & Rob.std.err. & Estimate & Rob.std.err. \\ \hline **Alternative specific constant** & & & & & & \\ _Option choice_ (ref. = Normal delivery) & & & & & & \\ Next-day delivery & -0.593\({}^{***}\) & 0.067 & -0.479\({}^{***}\) & 0.102 & -1.046\({}^{***}\) & 0.142 \\ Baseline & 0.479\({}^{***}\) & 0.181 & 0.742\({}^{***}\) & 0.297 & 1.254\({}^{***}\) & 0.463 \\ Age \(>=70\) years old & -0.373\({}^{***}\) & 0.114 & -0.569\({}^{***}\) & 0.193 & -0.611\({}^{**}\) & 0.249 \\ Male & 0.177\({}^{***}\) & 0.058 & 0.297\({}^{***}\) & 0.099 & 0.175 & 0.141 \\ Resident in Tokyo & -0.028 & 0.064 & -0.096 & 0.109 & -0.128 & 0.166 \\ Free membership & 0.415\({}^{***}\) & 0.059 & 0.751\({}^{***}\) & 0.102 & 0.767\({}^{***}\) & 0.141 \\ Teleworker & 0.102 & 0.104 & 0.168 & 0.176 & 0.402 & 0.275 \\ Delivery box & 0.157\({}^{***}\) & 0.065 & 0.331\({}^{***}\) & 0.112 & 0.436\({}^{***}\) & 0.165 \\ Scheduled delivery & & & & & & \\ Baseline & -1.941\({}^{***}\) & 0.120 & -5.740\({}^{***}\) & 0.302 & -6.889\({}^{***}\) & 0.571 \\ Age \(<=29\) years old & 0.238 & 0.210 & 0.206 & 0.406 & 0.833 & 0.567 \\ Age \(>=70\) years old & -0.261\({}^{**}\) & 0.129 & -0.289 & 0.241 & -0.932\({}^{**}\) & 0.369 \\ Male & -0.070 & 0.068 & -0.332\({}^{**}\) & 0.140 & -0.562\({}^{**}\) & 0.224 \\ Resident in Tokyo & 0.153\({}^{**}\) & 0.071 & 0.259\({}^{**}\) & 0.145 & 0.610\({}^{**}\) & 0.277 \\ Free membership & 0.140\({}^{**}\) & 0.067 & 0.018 & 0.137 & -0.006 & 0.240 \\ Teleworker & -0.308\({}^{**}\) & 0.123 & -0.670\({}^{***}\) & 0.237 & -0.824\({}^{**}\) & 0.494 \\ Delivery box & -0.494\({}^{***}\) & 0.078 & -1.091\({}^{***}\) & 0.160 & -1.284\({}^{***}\) & 0.285 \\ _Day choice_ (ref. 
= Weekday) & & & & & & \\ Holiday & & & & & & \\ Baseline & 0.333\({}^{***}\) & 0.092 & 3.063\({}^{***}\) & 0.252 & 4.651\({}^{***}\) & 0.450 \\ _Slot choice_ (ref. = Other slots) & & & & & & \\ Morning slot & & & & & & \\ Baseline & 0.191 & 0.138 & 0.077 & 0.312 & -0.159 & 0.511 \\ Live with children & -0.110 & 0.097 & -0.247 & 0.220 & -0.216 & 0.384 \\ Live with family & -0.219 & 0.166 & -0.359 & 0.377 & -0.747 & 1.097 \\ Couple with double income & 0.213\({}^{*}\) & 0.116 & 0.463\({}^{*}\) & 0.272 & 0.496 & 0.411 \\ Live alone & 0.348\({}^{***}\) & 0.111 & 0.646\({}^{**}\) & 0.265 & 0.602 & 0.530 \\ Live in a detached house & -0.270\({}^{***}\) & 0.078 & -0.485\({}^{***}\) & 0.176 & -0.550 & 0.347 \\ Holiday & 0.411\({}^{***}\) & 0.115 & 0.815\({}^{***}\) & 0.176 & 0.723\({}^{**}\) & 0.243 \\ Public officer & -0.051 & 0.165 & -0.009 & 0.384 & 0.244 & 0.694 \\ Professional job & -0.053 & 0.212 & -0.420 & 0.404 & -0.526 & 0.954 \\ Self employed & -0.144 & 0.167 & -0.391 & 0.378 & -0.053 & 0.578 \\ Part-time job & -0.130 & 0.112 & -0.426\({}^{*}\) & 0.258 & -0.347 & 0.358 \\ No job & 0.095 & 0.097 & 0.205 & 0.221 & 0.306 & 0.403 \\ Night slot & & & & & & \\ Baseline & 0.520\({}^{***}\) & 0.142 & 1.169\({}^{***}\) & 0.264 & 1.765\({}^{***}\) & 0.378 \\ Live with children & 0.034 & 0.113 & 0.186 & 0.208 & 0.246 & 0.271 \\ Live with family & -0.731\({}^{***}\) & 0.211 & -1.127\({}^{***}\) & 0.373 & -1.162\({}^{**}\) & 0.539 \\ Couple with double income & 0.172 & 0.134 & 0.528\({}^{*}\) & 0.280 & 0.379 & 0.307 \\ Live alone & -0.128 & 0.140 & -0.252 & 0.255 & -0.475 & 0.318 \\ Live in a detached house & -0.168\({}^{*}\) & 0.090 & -0.289\({}^{*}\) & 0.172 & -0.434\({}^{**}\) & 0.193 \\ Holiday & 0.028 & 0.113 & -0.314\({}^{**}\) & 0.157 & -0.405\({}^{**}\) & 0.203 \\ Public officer & 0.179 & 0.171 & 0.487 & 0.305 & 0.865\({}^{*}\) & 0.511 \\ Professional job & 0.438\({}^{**}\) & 0.201 & 0.857\({}^{**}\) & 0.404 & 0.220 & 0.625 \\ Self employed & -0.629\({}^{***}\) & 0.239 & -1.174\({}^{***}\) & 0.452 & -0.982\({}^{**}\) & 0.488 \\ Part-time job & -0.257\({}^{**}\) & 0.127 & -0.619\({}^{***}\) & 0.238 & -0.768\({}^{***}\) & 0.272 \\ No job & -0.807\({}^{***}\) & 0.127 & -1.413\({}^{***}\) & 0.263 & -1.649\({}^{***}\) & 0.316 \\ ** ### Willingness-to-pay for delivery Finally, we discuss the distributions of WTPs for delivery obtained from the estimated model. In the final model, all the distributional parameters of VODT, VOTS, and the sensitivity to the delivery fee are estimated with statistical significance. The polynomial approximation enabled estimating their distributions in a data-oriented manner, without imposing a parametric distributional form a priori. The estimation results of \(\gamma_{\text{PI}}\) and \(\gamma_{\text{FR}}\), capturing the observed heterogeneity in the willingness-to-pay distributions, are consistent with our expectations. The negative sign of the power \(\gamma_{\text{PI}}\) of the item price suggests that users who order more expensive items are less sensitive to the delivery fee. The positive sign of the power \(\gamma_{\text{FR}}\) of the user's online shopping frequency indicates that the scale of VODT increases according to the frequency. #### 5.3.1 Value of delivery time savings The distribution of VODT obtained from the estimation of MXL 4 is shown in Figure 4, and its summary statistics are reported in Table 5. The VODT ranges from \(-47.937\) to \(219.445\) JPY/day7, with 3.6 % of users having negative values for delivery time savings. 
The mean and median are \(44.459\) and \(25.608\) JPY/day, respectively, which are considerably smaller than the values reported in the literature (e.g., Hsiao, 2009; Gawor and Hoberg, 2019). These results imply that, in the delivery option choice context, some users do not necessarily need fast delivery, but rather prefer delivery at their convenience, given their schedule constraints. In other words, not everyone wants next-day delivery; in the current e-commerce situation, many users choose the fast delivery option just because it is free. More than half of the users would be willing to wait an additional day if the delivery fee were increased by only 26 JPY. Nevertheless, we note that some users would highly value saving delivery time, as the maximum VODT is \(219.445\) JPY/day. Therefore, there would be a large heterogeneity in VODT in the population.

Footnote 7: The average exchange rate over the 10 years 2012–2021 is 106.1 JPY/USD. Source: International Monetary Fund (IMF) Data [https://data.imf.org/](https://data.imf.org/).

It should also be noted that in the literature VODT was analyzed only separately in different markets, e.g., $0.53 and $3.61 per day for books (Hsiao, 2009) and electronic items (Gawor and Hoberg, 2019), respectively. Because our survey did not impose any restriction on the item category, we can analyze the variation of VODT across different categories in a unified framework. Based on the estimation result, we performed an ex-post segmentation analysis of VODT, and the results in Figure 5(a) show its heterogeneity across different categories of ordered items. For software and computer-related devices, VODT indicates the highest values, followed by CD/DVDs, toys, and electronic devices. A possible explanation is that these items are not readily available in nearby stores, but users often want them urgently, thus requesting fast delivery. In fact, groceries have a lower VODT value than these items, because groceries for daily use can be obtained in nearby supermarkets. Moreover, users ordering books and cosmetics have low VODT values, suggesting that such items are relatively inexpensive and are often not needed immediately. This is also seen in the result of Figure 5(b), which shows that the more expensive the ordered item, the higher the VODT. Finally, Figure 5(c) clearly shows that users who frequently shop online have higher VODT values, that is, frequent users need fast delivery more than infrequent users.

\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & Mean & Std & Min & 25\% & 50\% & 75\% & Max & P(v < 0) \\ \hline VODT (JPY/day) & 44.459 & 43.308 & -47.937 & 18.29 & 25.608 & 60.226 & 219.445 & 0.036 \\ \hline \hline \end{tabular} \end{table} Table 5: Distribution characteristics of VODT

Figure 4: Distribution of VODT among respondents

Figure 5: VODT for different segments: focusing on (a) ordered item category, (b) ordered item price, and (c) e-shopping frequency of users.

#### 5.3.2 Value of time slot shortening

The distribution of VOTS obtained from MXL 4 is shown in Figure 6, and its statistics are reported in Table 6. The VOTS ranges from \(-3.827\) to 27.086 JPY/hour, with 4.2% of the respondents having negative values. The mean and median are 4.716 and 4.968 JPY/hour, respectively. As such, although a majority of users would be willing to pay for a shorter delivery time slot, the payment would be small.
This means that users do not highly value the reduction in time slot size in monetary terms. Since the size of a time slot (i.e., the time window constraint) has a significant impact on last-mile delivery (e.g., Nockold, 2001), this result may be an important finding for delivery demand management to improve logistics efficiency without a serious reduction in user satisfaction8.

Footnote 8: Note that in our survey the time slot size ranged from two to four hours, and a two-hour slot might not have been sufficiently tight to increase the convenience of experienced e-commerce users.

\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & Mean & Std & Min & 25\% & 50\% & 75\% & Max & P(v < 0) \\ \hline VOTS (JPY/hour) & 4.716 & 2.792 & -3.827 & 2.979 & 4.968 & 6.216 & 27.086 & 0.042 \\ \hline \hline \end{tabular} \end{table} Table 6: Distribution characteristics of VOTS

Figure 6: Distribution of VOTS among respondents

## 6 Concluding remarks

This study analyzed e-commerce users' preferences for delivery options. To this end, we designed and implemented a stated choice survey, where users were asked to indicate which option of next-day, scheduled, and normal delivery they would choose for the ordered item and, if they chose scheduled delivery, to jointly select the delivery date and time slot. The stated choice data of 4,062 users living in the three major metropolitan areas of Japan were analyzed by estimating a mixed logit model, capturing users' taste heterogeneity and substitution patterns. We also applied the semi-nonparametric approach of Fosgerau and Mabit (2013) to flexibly estimate the distributions of willingness-to-pay (WTP) for delivery attributes. The results of this study contribute to advancing the understanding of e-commerce user behavior, which plays a key role in delivery demand management as well as in designing urban logistics policies (Neslin et al., 2006; Holguin-Veras et al., 2017). Specifically, the present results suggest that delivery service attributes including fee, time, and time slot size are significant determinants of the choice of a delivery option. We found that elderly people do not tend to choose user-oriented delivery options such as next-day and scheduled delivery. The frequency of teleworking and the presence of a delivery box also have a strong relationship with a low propensity to request scheduled delivery. However, users who have a delivery box at home tend to choose next-day delivery; that is, although installing a delivery box reduces the demand for scheduled delivery and the risk of delivery failure, it can lead to an increased demand for fast delivery. The analysis also revealed that the value of delivery time savings (VODT) is widely distributed among the respondents, and its maximum is 219.4 JPY per day. Nevertheless, the median VODT is only 25.6 JPY per day, debunking the myth that everyone needs fast delivery and supporting the statement and results of Rai et al. (2019). Today many e-commerce marketplaces offer users fast delivery for free, imposing strict time constraints on urban logistics, but our results suggest that more than half of users would be willing to wait an additional day if the delivery fee increased by only 26 JPY. In terms of heterogeneity, VODT has a high value for users who frequently shop online and/or order expensive items, and varies according to the category of the item ordered.
Additionally, the value of time slot shortening (VOTS) was found to be low, distributed with a median of 5.0 JPY per hour, which means that users do not highly value the reduction in time slot size in monetary terms. Since time windows are important constraints for last-mile delivery, this result may suggest that lengthening delivery time slots would be a way to significantly improve efficiency without a serious reduction in user satisfaction. The present WTP measures were calculated purely with respect to the delivery fee, not including item price (Gawor and Hoberg, 2019) or travel cost (Hsiao, 2009); therefore, the results of this study can be used for the level-of-service design of last-mile delivery, independent of the retailer/marketplace strategy.

Given these findings on e-commerce user behavior, future work includes a study of the impact of the design of delivery attributes on the efficiency of last-mile delivery (e.g., Agatz et al., 2021). Since our model describes the choices of option, date and time slot for delivery, it would be possible to analyze both day-to-day and within-day dynamics of delivery demand and their impact on operational efficiency, combined with a multi-period vehicle routing problem (Archetti et al., 2015) or agent-based simulation (Sakai et al., 2020, 2022). Note that this study was carried out during the COVID-19 pandemic, under the declaration of a state of emergency. Although we collected detailed information on users' lifestyles, including teleworking frequency, the results may still include the effects of unobserved variables specific to the pandemic. Therefore, the comparison of users' preferences for delivery options before, during (and possibly after) the pandemic is another direction of future work that could add an important contribution to the literature (Choi, 2021; Chowdhury et al., 2021).

#### Acknowledgements

We are grateful to Yamato Holdings Co., Ltd. for sending invitation emails for our survey to their customers, the "Kuroneko Members".

#### CRediT author statement

**Yuki Oyama**: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Data Curation, Writing - Original Draft, Visualization, Supervision. **Daisuke Fukuda**: Methodology, Software, Validation, Formal analysis, Writing - Review & Editing. **Naoto Imura**: Investigation, Project administration. **Katsuhiro Nishinari**: Resources, Funding acquisition. All authors approved the final manuscript.

## Appendix A Screen image of the stated choice survey
多くのECサイトが、ユーザーに無料で高速配送オプションを提供することで、ユーザーのニーズに応えるために、都市物流に過度の負担をかける。そのため、ユーザーの配送オプションに対する好みを理解することは、物流政策の設計にとって重要な要素となる。この研究では、回答者が異なる配送オプションと時間枠の選択肢を検討するchoice taskに直面させ、日本の3つの主要都市における4,062人のユーザーがその調査に協力した。このデータを分析するために、嗜好の多様性と柔軟な代用パターンを捉えた混合logitモデルが推定された。モデルの推定結果によると、配送属性(料金、時間、時間枠のサイズ)は、配送オプションの選択に重要な要因となっている。ユーザーの嗜好と社会的年齢、性別、テレワークの頻度、配送ボックスの有無との関連性も示された。さらに、配送時間の節約(VODT)と時間枠の短縮(VOTS)
2309.04222
Offline Recommender System Evaluation under Unobserved Confounding
Off-Policy Estimation (OPE) methods allow us to learn and evaluate decision-making policies from logged data. This makes them an attractive choice for the offline evaluation of recommender systems, and several recent works have reported successful adoption of OPE methods to this end. An important assumption that makes this work is the absence of unobserved confounders: random variables that influence both actions and rewards at data collection time. Because the data collection policy is typically under the practitioner's control, the unconfoundedness assumption is often left implicit, and its violations are rarely dealt with in the existing literature. This work aims to highlight the problems that arise when performing off-policy estimation in the presence of unobserved confounders, specifically focusing on a recommendation use-case. We focus on policy-based estimators, where the logging propensities are learned from logged data. We characterise the statistical bias that arises due to confounding, and show how existing diagnostics are unable to uncover such cases. Because the bias depends directly on the true and unobserved logging propensities, it is non-identifiable. As the unconfoundedness assumption is famously untestable, this becomes especially problematic. This paper emphasises this common, yet often overlooked issue. Through synthetic data, we empirically show how na\"ive propensity estimation under confounding can lead to severely biased metric estimates that are allowed to fly under the radar. We aim to cultivate an awareness among researchers and practitioners of this important problem, and touch upon potential research directions towards mitigating its effects.
Olivier Jeunen, Ben London
2023-09-08T09:11:26
http://arxiv.org/abs/2309.04222v1
# Offline Recommender System Evaluation under Unobserved Confounding ###### Abstract. Off-Policy Estimation (OPE) methods allow us to learn and evaluate decision-making policies from logged data. This makes them an attractive choice for the offline evaluation of recommender systems, and several recent works have reported successful adoption of OPE methods to this end. An important assumption that makes this work is the absence of unobserved confounders: random variables that influence both actions and rewards at data collection time. Because the data collection policy is typically under the practitioner's control, the unconfoundedness assumption is often left implicit, and its violations are rarely dealt with in the existing literature. This work aims to highlight the problems that arise when performing off-policy estimation in the presence of unobserved confounders, specifically focusing on a recommendation use-case. We focus on policy-based estimators, where the logging propensities are learned from logged data. We characterise the statistical bias that arises due to confounding, and show how existing diagnostics are unable to uncover such cases. Because the bias depends directly on the _true_ and unobserved logging propensities, it is non-identifiable. As the unconfoundedness assumption is famously untestable, this becomes especially problematic. This paper emphasises this common, yet often overlooked issue. Through synthetic data, we empirically show how naive propensity estimation under confounding can lead to severely biased metric estimates that are allowed to fly under the radar. We aim to cultivate an awareness among researchers and practitioners of this important problem, and touch upon potential research directions towards mitigating its effects. 
or instrumental variables (Koskov and Koskov, 2017), or make assumptions about the nature of confounding variables (Koskov and Koskov, 2017; Koskov, 2017; Koskov, 2017). Analogously, instrumental variables have been leveraged to test for the unconfoundedness assumption (Koskov and Koskov, 2017), and other statistical methods have been proposed to assess the sensitivity of results to potential confounders (Koskov, 2017). Nevertheless, in the absence of additional tools, unconfoundedness is a famously _untestable_ assumption, rendering its effects especially troublesome. Focusing on the off-policy bandit setting with a guiding example in recommendation, we aim to answer: "_Can we reliably select the optimal policy from a set of competing policies, under unobserved confounding?_"

## 2. Off-policy estimation in the presence of unobserved confounders

Throughout this work, we denote random variables as \(X\), with specific instances as \(x\in\mathcal{X}\). A contextual bandit problem consists of _contexts_ \(X\) (e.g. user and context features), _actions_ \(A\) (e.g. item recommendations), and _rewards_ \(R\) (e.g. clicks, streams, revenue). Rewards are causally influenced by both contexts and actions, as illustrated by the edges in the causal graph shown in Figure 1. A contextual _policy_ \(\pi\) determines which actions are selected (or sampled), thereby inducing a probability distribution over \(A\), which is often denoted with the shorthand \(\pi(a|x)\coloneqq\mathrm{P}(A=a|X=x;\Pi=\pi)\). A policy's effectiveness is measured by the expected reward obtained when selecting actions according to that policy: \(\mathbb{E}_{x}\mathbb{E}_{a\sim\pi(\cdot|x)}\left[R\right]\). In a recommendation application, this value can be estimated by deploying the policy in an online experiment. However, since such experiments are typically costly, and we may have many policies to evaluate, we would rather obtain reward estimates by other means. Suppose there is an existing deployed policy, called the _logging policy_ \(\pi_{0}\), with which we collect a dataset \(\mathcal{D}_{0}\coloneqq\{(a_{i},r_{i})\}_{i=1}^{N}\). We will assume, as is often the case, that the logging policy, and the contextual covariates it uses, are unobservable, as indicated by the dashed nodes and edges in Figure 1. Our goal is to leverage data logged under \(\pi_{0}\) to estimate the expected reward under \(\pi\), a problem often referred to as _off-policy_ estimation (Koskov and Koskov, 2017; Koskov, 2017). One simple off-policy estimator is the _Direct Method_ (DM). DM computes the expected reward under \(\pi\) (Eq. 1) using a model \(\widehat{R}^{A}_{\mathrm{DM}}(a)\) that estimates the reward for every available action. Since we assume that contextual covariates are unavailable, the best we can do for \(\widehat{R}^{A}_{\mathrm{DM}}(a)\) is to naively count the observed rewards for every action (Eq. 2).
\[\widehat{R}_{\mathrm{DM}}(\pi)=\sum_{a\in\mathcal{A}}\widehat{R}^{A}_{\mathrm{DM}}(a)\,\pi(a)\qquad\qquad(1)\qquad\qquad\qquad\widehat{R}^{A}_{\mathrm{DM}}(a)=\frac{\sum_{(a_{i},r_{i})\in\mathcal{D}_{0}}\mathbb{1}\left\{a_{i}=a\right\}\cdot r_{i}}{\sum_{(a_{i},r_{i})\in\mathcal{D}_{0}}\mathbb{1}\left\{a_{i}=a\right\}} \tag{2}\]

Unfortunately, this estimator is biased, for two reasons: (a) it does not take into account the covariates \(X\) (i.e., the model is mis-specified), and (b) it ignores the selection bias from the logging policy \(\pi_{0}\), which influences the estimates in Eq. 2. In theory, we can bypass both the model mis-specification and selection bias problems by leveraging the _ideal_ IPS estimator (Eq. 3), which is provably unbiased. Importantly, ideal IPS requires access to both the contextual covariates and the exact action probabilities (_propensities_) under \(\pi_{0}\), which we assume are unavailable. Accordingly, we will adopt the common practice of using estimated logging propensities \(\widehat{\pi}_{0}\) for IPS (Eq. 4). As the estimated propensities cannot properly consider all covariates, this leads to unobserved confounding.

\[\widehat{R}_{\mathrm{ideal-IPS}}(\pi)=\frac{1}{|\mathcal{D}_{0}|}\sum_{(x_{i},a_{i},r_{i})\in\mathcal{D}_{0}}r_{i}\,\frac{\pi(a_{i})}{\pi_{0}(a_{i}|x_{i})}\quad(3)\qquad\qquad\qquad\widehat{R}_{\mathrm{estim-IPS}}(\pi)=\frac{1}{|\mathcal{D}_{0}|}\sum_{(a_{i},r_{i})\in\mathcal{D}_{0}}r_{i}\,\frac{\pi(a_{i})}{\widehat{\pi}_{0}(a_{i})} \tag{4}\]

Using the fact that the ideal IPS estimator is unbiased, we can quantify the bias of the estimated IPS estimator as:

\[\mathbb{E}[\widehat{R}_{\mathrm{estim-IPS}}(\pi)]-\mathbb{E}_{a\sim\pi}[R]=\mathbb{E}[\widehat{R}_{\mathrm{estim-IPS}}(\pi)]-\mathbb{E}[\widehat{R}_{\mathrm{ideal-IPS}}(\pi)]=\mathbb{E}\left[R\,\pi(A|X)\left(\frac{1}{\widehat{\pi}_{0}(A)}-\frac{1}{\pi_{0}(A|X)}\right)\right]. \tag{5}\]

Figure 1. Probabilistic Graphical Model (PGM) for our setup.

To further illustrate our point, we resort to Pearl's do-calculus framework (Pearl, 1974). What OPE methods wish to estimate is the expected value of the reward given that a new policy _intervenes_ on the action distribution. When unobserved confounders are present, this _interventional_ quantity is _not_ equal to the _observational_ quantity we can estimate from logged data: \(\mathbb{E}\left[R|A=a\right]\neq\mathbb{E}\left[R|\text{do}(A=a)\right]\). Instead, we would require the "backdoor adjustment" to obtain:

\[\mathbb{E}\left[R|\text{do}(A=a)\right]=\sum_{x\in\mathcal{X}}\mathbb{E}\left[R|A=a,X=x\right]\mathrm{P}(X=x). \tag{6}\]

It should be clear that without access to \(X\), this estimand is non-identifiable, and this problem is not easily solved.

## 3. Existing Diagnostics for Logging Propensities do Not Uncover Confounding Bias

Several diagnostics have been proposed in the literature to detect data quality issues with logged bandit feedback. In particular, they try to uncover cases where the two classical assumptions of the IPS estimator do not hold (Han and Recht, 1974; Han and Recht, 1974): (1) either the empirical action frequencies in the data do not match those implied by the logged propensities, or (2) the logging policy does not have full support over the action space. Note that the presence of unobserved confounders does _not_ automatically violate these assumptions. As a result, the diagnostics that were proposed will _not_ detect confounding bias. Logging propensities can be estimated by empirically counting logged actions, as shown in Eq. 7.
In doing so, we obtain unbiased estimates of the true marginal action probabilities. Indeed, \(\lim_{N\to\infty}\widehat{\pi}_{0}(a)=\mathrm{P}(A=a|\Pi=\pi_{0})\). Li et al. propose the use of _arithmetic_ and _harmonic_ mean tests to compare empirical action frequencies with the logging propensities (Han and Recht, 1974). As we _define_ the logging propensities to be equal to the empirical action frequencies, it should be clear that this test will trivially pass. Alternatively, London and Joachims propose to use the average importance weight as a control variate, whose expectation should equal 1 for any target policy \(\pi\) (Han and Recht, 1974). Here as well, because the marginal propensities are unbiased (Eq. 7), we can show that the control variate remains unbiased as well (Eq. 8).

\[\widehat{\pi}_{0}(a)=\frac{1}{|\mathcal{D}_{0}|}\sum_{(a_{i},r_{i})\in\mathcal{D}_{0}}\mathbb{1}\{a_{i}=a\}\;\xrightarrow[N\to\infty]{}\;\mathrm{P}(A=a|\Pi=\pi_{0})=\sum_{x\in\mathcal{X}}\mathrm{P}(A=a|X=x,\Pi=\pi_{0})\,\mathrm{P}(X=x). \tag{7}\]

Theorem 3.1.: _When unobserved confounders are present and logging propensities are estimated from empirical data (thus ignoring the confounders), the expected value of the importance weights equals 1 for any target policy:_

\[\underset{\begin{subarray}{c}x\sim\mathrm{P}(X)\\ a\sim\mathrm{P}(A|X=x,\Pi=\pi_{0})\end{subarray}}{\mathbb{E}}\left[\frac{\pi(a)}{\widehat{\pi}_{0}(a)}\right]=1.\]

Proof.:

\[\begin{aligned}\underset{\begin{subarray}{c}x\sim\mathrm{P}(X)\\ a\sim\mathrm{P}(A|X=x,\Pi=\pi_{0})\end{subarray}}{\mathbb{E}}\left[\frac{\pi(a)}{\widehat{\pi}_{0}(a)}\right]&=\sum_{a\in\mathcal{A}}\sum_{x\in\mathcal{X}}\frac{\pi(a)}{\widehat{\pi}_{0}(a)}\,\mathrm{P}(A=a|X=x,\Pi=\pi_{0})\,\mathrm{P}(X=x)\\ &=\sum_{a\in\mathcal{A}}\frac{\pi(a)}{\widehat{\pi}_{0}(a)}\sum_{x\in\mathcal{X}}\mathrm{P}(A=a|X=x,\Pi=\pi_{0})\,\mathrm{P}(X=x)\\ &\xrightarrow[N\to\infty]{}\;\sum_{a\in\mathcal{A}}\pi(a)\,\frac{\sum_{x\in\mathcal{X}}\mathrm{P}(A=a|X=x,\Pi=\pi_{0})\,\mathrm{P}(X=x)}{\sum_{x\in\mathcal{X}}\mathrm{P}(A=a|X=x,\Pi=\pi_{0})\,\mathrm{P}(X=x)}=\sum_{a\in\mathcal{A}}\pi(a)=1\quad\Box\end{aligned}\tag{8}\]

As such, existing diagnostics are unable to detect issues of unobserved confounding. This implies that the self-normalised IPS (SNIPS) estimator and its extensions, which adopt the above control variate to reduce the variance of the IPS estimator, would exhibit the same bias as estimated IPS when unobserved confounders are present (Han and Recht, 1974; Han and Recht, 1974).

## 4. Empirical validation of the effects of unobserved confounding on synthetic data

We now describe a guiding example, and provide a notebook that implements the methods described earlier at github.com/olivierjeunen/confounding-consequences-2023/. Consider a setting with two possible actions and a binary covariate \(\mathcal{X}=\{x_{0},x_{1}\}\), following the distribution in Table 1(a) (parameterised with \(\alpha\in\left[\frac{1}{2},1\right]\)). Rewards are Bernoulli-distributed (Table 1(b)). The logging policy is contextual, taking a suboptimal action with probability \(\epsilon\in[0,1]\) (Table 1(c)). We can map this to an intuitive setting: action \(a_{1}\) is of general appeal to the entire population (i.e. \(R\perp\!\!\!\perp X|A=a_{1}\)); whereas action \(a_{0}\), on the other hand, is specifically appealing to a more niche user-base (i.e. \(\mathbb{E}[R|X=x_{0},A=a_{0}]>\mathbb{E}[R|X=x_{0},A=a_{1}]\), but \(\mathsf{P}(X=x_{0})<\mathsf{P}(X=x_{1})\)). Estimates for logging propensities can be obtained by empirical counting, as in Eq. 7.
The expected value for these estimated context-independent propensities is shown in Table 1(d).

_Naive propensity estimation methods suffer from confounding bias._ We simulate an off-policy estimation setup where we wish to evaluate the deterministic policies \(\pi_{a}(a)\equiv 1\). We obtain \(N=2\cdot 10^{6}\) samples from the synthetic distribution described in Table 1, and compute the confounded estimate \(\widehat{R}_{\text{estim-IPS}}\), as well as the unobservable ideal IPS estimate \(\widehat{R}_{\text{ideal-IPS}}\). We vary both the level of selection bias \(\epsilon\) (over the x-axis) and the confounding distribution \(\alpha\) (over columns) in Fig. 2, where the y-axis shows the estimated difference in rewards from policies \(\pi_{a_{1}}\) and \(\pi_{a_{0}}\). We shade the positive and negative regions in the plot to clearly visualise when an off-policy estimator allows us to _correctly_ identify the optimal policy, and when it does not. We observe that the IPS estimator with estimated propensities fails considerably, in that it will incorrectly identify \(\pi_{a_{0}}\) as the reward-maximising policy. Only when \(\epsilon\) is sufficiently high (i.e. approaching a uniform logging policy for \(\epsilon=0.5\), and hence no confounding is present) is \(\widehat{R}_{\text{estim-IPS}}\) able to correctly identify \(\pi_{a_{1}}\). This shows that, even in simplified settings, the estimates we obtain from IPS with confounded propensities lead to misleading conclusions. Furthermore, existing diagnostics cannot detect these problems when they occur.

## 5. Conclusions & outlook

Unobserved confounders lead to biased estimates, both for DM- and IPS-based methods. This problem has received considerable attention in the research literature for general offline reinforcement learning use-cases, but the literature dealing with these issues in recommendation settings remains scarce. Our work highlights that this is problematic, especially in cases where propensities are estimated under simplifying independence assumptions. In doing so, we add to the literature identifying problematic practices that might hamper progress in the field (Bouquet et al., 2017; Goyal et al., 2018; Goyal et al., 2018; Goyal et al., 2018).
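The released notebook at the GitHub link above implements the full experiment; as a rough stand-in, the sketch below simulates the kind of two-action, binary-covariate setting described in Section 4. Since the contents of Table 1(b) are not reproduced in this excerpt, the reward probabilities used here are assumptions chosen only to match the qualitative description (action \(a_{1}\) is broadly appealing, action \(a_{0}\) appeals only to the rarer context \(x_{0}\)); the paper's exact numbers may differ.

```python
import numpy as np

rng = np.random.default_rng(42)
N, alpha, eps = 2_000_000, 0.8, 0.1       # samples, P(X=x1), logging error rate

x = rng.binomial(1, alpha, size=N)         # binary context, P(X=x1) = alpha
best = x                                   # context-optimal action: a0 for x0, a1 for x1
a = np.where(rng.random(N) < eps, 1 - best, best)   # contextual logging policy

# Assumed P(R=1 | X, A) -- placeholders matching the qualitative description only:
# a1 pays 0.5 regardless of context; a0 pays 0.9 for the niche context x0, 0.1 for x1.
p = np.where(a == 1, 0.5, np.where(x == 0, 0.9, 0.1))
r = rng.binomial(1, p)

pi0_hat = np.bincount(a, minlength=2) / N             # confounded, context-independent
pi0_true = np.where(a == best, 1.0 - eps, eps)        # true contextual propensities

for target in (0, 1):                                 # deterministic policies pi_a0, pi_a1
    hit = (a == target).astype(float)
    estim_ips = np.mean(r * hit / pi0_hat[target])
    ideal_ips = np.mean(r * hit / pi0_true)
    avg_weight = np.mean(hit / pi0_hat[target])       # control variate: ~1, flags nothing
    print(f"pi_a{target}:  estim-IPS={estim_ips:.3f}  ideal-IPS={ideal_ips:.3f}  "
          f"avg importance weight={avg_weight:.3f}")
```

With these placeholder rewards, estimated-propensity IPS ranks \(\pi_{a_{0}}\) above \(\pi_{a_{1}}\) (roughly 0.65 vs. 0.50) while ideal IPS ranks them correctly (roughly 0.26 vs. 0.50), and the average importance weight stays close to 1, mirroring the qualitative behaviour reported in Section 4 and Theorem 3.1.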
Off-Policy Estimation (OPE) methods allow decision-making policies to be learned and evaluated from logged data, which makes them an attractive choice for the offline evaluation of recommender systems, and several recent works have reported successful adoption of OPE methods to this end. An important assumption behind this is the absence of unobserved confounders: random variables that influence both actions and rewards at data collection time. Because the data collection policy is typically under the practitioner's control, this assumption is often left implicit, and its violations are rarely dealt with in the existing literature. This paper highlights the problems that arise when performing off-policy estimation in the presence of unobserved confounders, focusing specifically on a recommendation use-case, and considers policy-based estimators in which the logging propensities are learned from logged data.
2309.10257
Observation of universal dissipative dynamics in strongly correlated quantum gas
Dissipation is unavoidable in quantum systems. It usually induces decoherences and changes quantum correlations. To access the information of strongly correlated quantum matters, one has to overcome or suppress dissipation to extract out the underlying quantum phenomena. However, here we find an opposite effect that dissipation can be utilized as a powerful tool to probe the intrinsic correlations of quantum many-body systems. Applying highly-controllable dissipation in ultracold atomic systems, we observe a universal dissipative dynamics in strongly correlated one-dimensional quantum gases. The total particle number of this system follows a universal stretched-exponential decay, and the stretched exponent measures the anomalous dimension of the spectral function, a critical exponent characterizing strong quantum fluctuations of this system. This method could have broad applications in detecting strongly correlated features, including spin-charge separations and Fermi arcs in quantum materials.
Yajuan Zhao, Ye Tian, Jilai Ye, Yue Wu, Zihan Zhao, Zhihao Chi, Tian Tian, Hepeng Yao, Jiazhong Hu, Yu Chen, Wenlan Chen
2023-09-19T02:32:02
http://arxiv.org/abs/2309.10257v1
# Observation of universal dissipative dynamics in strongly correlated quantum gas ###### Abstract Dissipation is unavoidable in quantum systems. It usually induces decoherences and changes quantum correlations. To access the information of strongly correlated quantum matters, one has to overcome or suppress dissipation to extract out the underlying quantum phenomena. However, here we find an opposite effect that dissipation can be utilized as a powerful tool to probe the intrinsic correlations of quantum many-body systems. Applying highly-controllable dissipation in ultracold atomic systems, we observe a universal dissipative dynamics in strongly correlated one-dimensional quantum gases. The total particle number of this system follows a universal stretched-exponential decay, and the stretched exponent measures the anomalous dimension of the spectral function, a critical exponent characterizing strong quantum fluctuations of this system. This method could have broad applications in detecting strongly correlated features, including spin-charge separations and Fermi arcs in quantum materials. Open quantum systems and non-Hermitian physics are emerging topics in recent years, uncovering fascinating dissipation-driven phenomena and attracting considerable attention [1; 2; 3; 4]. Ultracold atomic gases are among the best testbeds for exploring dissipation effects in quantum many-body systems because of the full control of both dissipation channels and their strengths, together with the versatile tunability of many-body quantum correlations. Utilizing these advantages, novel physics different from those in closed systems have been observed in cold atom platforms, including dissipative-stabilized strong-correlated matters [5; 6; 7; 8], quantum phase transitions in superradiance [9; 10; 11; 12; 13], continuous time crystals [14; 15], and dissipation-enabled entangled states [16; 17]. However, these novel physics usually emerge in the strong dissipation regimes and dissipation-driven steady states. In contrast, in the weak dissipation regime and the short-time scale before reaching the steady state, how dissipation interplays with quantum many-body correlations is rarely studied. A weak dissipation usually does not disturb intrinsic quantum correlations at short time. However, it does cause a dynamical response, usually manifested as the decay of a physical observable in time. With constant dissipation, we often observe exponential decay for most conventional quantum states. These states are quantum phases with well-defined quasi-particles, displaying delta-function type single-particle spectral function. In contrast, if quantum fluctuation in the system is so strong that it destroys well-defined quasi-particles, the dissipative dynamics is predicted to display a stretched-exponential decay [18]. Especially for quantum critical states [19; 20; 21; 22], the spectral function exhibits a power-law divergence around the threshold \(\Delta\) in the form of \((\omega^{2}-\Delta^{2})^{-\eta}\), and the dissipative dynamics at short time follows a stretched-exponential form of \(\exp[-(t/\tau_{0})^{2\eta-1}]\)[18]. The power \(\eta\) in the spectral function is the anomalous dimension, a critical exponent manifesting strong quantum fluctuations in this system. Usually, the stronger the quantum fluctuation, the smaller the anomalous dimension \(\eta\). 
Thus, measuring the dynamics under weak dissipation provides an alternating route to access many-body correlations at equilibrium, in contrast to existing measurements in condensed matter and cold atom physics using Hermitian perturbation as a probing tool. The one-dimensional (1D) Bose gas is an optimal choice for performing measurements of strong correlation effects by using dissipation. First, 1D systems are known to display strong quantum fluctuations at low temperature and obey the divergent single-particle spectral function mentioned above [20; 21]. Moreover, its anomalous dimension is tunable by tuning interaction strength [20]. Secondly, the 1D Bose gas is integrable, and the analytical solution allows a quantitative comparison between theory and experiment. This advantage allows us to benchmark this new measurement scheme. Thirdly, 1D atomic gas can be precisely controlled, and many fascinating phenomena have been observed already [23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38]. However, all existing experiments focus on density distributions, momentum distributions, or density and spin correlation functions. The measurement of single-particle spectral function is still missing. Especially, the crucial quantity of the anomalous dimension has not been experimentally measured. In our experiment, we realize Luttinger liquid of bosonic \({}^{87}\)Rb atoms trapped in a 1D array of tubes created by a two-dimensional (2D) optical lattice (see Supplementary materials (SM), and Fig. 1**A**). These atoms, initialized in the state of \(|5S_{1/2},F=1,m_{F}=1\rangle\), are confined in a crossed dipole trap and levitated against gravity using a magnetic field gradient. By adjusting the intensity of the lattice beams, we tune the lattice depth \(U_{l}\) of each lattice beam to modify the transverse ground state size \(a_{\perp}\). This leads to the change of the 1D interaction strength \(g_{\rm 1D}=4\hbar^{2}a_{s}/[ma_{\perp}^{2}(1-1.46\frac{a_{s}}{a_{\perp}})]\), where \(m\) is the mass of an atom, and \(a_{s}\) is the three-dimensional scattering length of \({}^{87}\)Rb at 98 Bohr radii. Thus, as we change the lattice depth \(U_{l}\) from 0.52 kHz to 430 kHz, the Luttinger liquid is tuned from weakly interacting region to Tonks-Girardeau region. The mean dimensionless interaction strength \(\gamma=mg_{\rm 1D}/n\hbar^{2}\), representing the ratio of interaction energy and kinetic energy, is varied from 0.08 to 5.36, where \(n\) is the 1D atomic density. To introduce well-controlled one-body loss and avoid two-body loss caused by light-assisted collision, we set the frequency of the dissipation light to be 96 MHz blue-detuned from the transition of \(|5S_{1/2},F=1\rangle\)\(\rightarrow\)\(|5P_{3/2},F=2\rangle\) (see Fig. 1**B**). The loss introduced by such dissipation light leads to decay of the atom number in the Luttinger liquid. We fix the intensity of the dissipation light, and measure the residual atom number \(N(t)\) with respect to the duration time \(t\) of the dissipation light. As shown in Fig. 1**D**, a decay process (blue diamonds) different from the conventional exponential decay is observed. Here, we set the lattice depth \(U_{l}\) to 65 kHz with a corresponding \(\gamma\) of 1.54, and fix the intensity of the dissipation light to a saturation parameter \(s\) of 0.01. 
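As a quick numerical illustration of the interaction-strength formulas above, the sketch below evaluates \(g_{\rm 1D}\) and \(\gamma=mg_{\rm 1D}/n\hbar^{2}\) for the stated scattering length \(a_{s}=98\) Bohr radii. The transverse ground-state size \(a_{\perp}\) and the 1D density \(n\) used here are placeholder values chosen only for illustration; the experimental values at each lattice depth are those described in the text and SM.

```python
import numpy as np

hbar = 1.054571817e-34               # J*s
a0   = 5.29177210903e-11             # Bohr radius, m
m_Rb = 86.909 * 1.66053906660e-27    # mass of 87Rb, kg

a_s = 98 * a0                        # 3D scattering length stated in the text

def g_1d(a_perp):
    """1D coupling g_1D = 4*hbar^2*a_s / (m*a_perp^2*(1 - 1.46*a_s/a_perp)), as in the text."""
    return 4 * hbar**2 * a_s / (m_Rb * a_perp**2 * (1 - 1.46 * a_s / a_perp))

def gamma(a_perp, n):
    """Dimensionless interaction strength gamma = m*g_1D / (n*hbar^2)."""
    return m_Rb * g_1d(a_perp) / (n * hbar**2)

# Placeholder inputs (NOT the experimental values): a_perp = 50 nm, n = 5 atoms/um.
print(f"gamma = {gamma(50e-9, 5e6):.2f}")
```

Tightening the transverse confinement (smaller \(a_{\perp}\)) or lowering the density \(n\) increases \(\gamma\), which is how the experiment moves between the weakly interacting and Tonks-Girardeau regimes.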
In order to investigate this unconventional decay behavior, we fit the decay curve to both a stretched-exponential function and a power function, and find the stretched-exponential function a clearly better fit. Inspired by Ref. [18], we use the fitting function \[N(t)=N(0)\,\exp\left[-\left(\frac{t}{\tau_{0}}\right)^{\alpha}\right], \tag{1}\] to extract the stretched exponent \(\alpha\), the dissipation characteristic time \(\tau_{0}\), and the initial atom number \(N(0)\). \(\alpha=0.70\pm 0.04\) and \(\tau_{0}=25.3\pm 1.1\) ms are obtained from the fitting. For comparison, we also measure the loss dynamics of a three-dimensional Bose-Einstein condensate (BEC) in a crossed dipole trap, where the decay follows an exponential function with the fitted stretched exponent \(\alpha=0.99\pm 0.04\) (red circles in Fig. 1**D**).

Figure 1: **Illustration of the experiment setup.** (**A**), A BEC with \(4\times 10^{4}\) \({}^{87}\)Rb atoms is prepared in a crossed dipole trap and loaded into a 2D optical lattice made from two orthogonal laser beams. 3300 tubes of 1D Bose gas ensemble are created. A near-resonant light along the \(x\) axis is shined onto the ensemble to introduce one-body dissipation. An absorption imaging is set up along the \(x\) axis. (**B**), The dissipation light is set to 96 MHz blue-detuned from the \(|5S_{1/2},F=1\rangle\rightarrow|5P_{3/2},F=2\rangle\) transition. (**C**), A typical time-of-flight (TOF) image for atoms in the 2D array of 1D tubes. (**D**), Normalized atom number \(N(t)/N(0)\) versus dissipation-duration time \(t\) in a log-linear plot for the dissipative dynamics of the 1D quantum gas (blue diamonds) and the 3D BEC (red circles). The solid curves are fittings using the stretched-exponential function, with the fitted stretched exponents \(\alpha=0.70(4)\) (blue) and \(\alpha=0.99(4)\) (red). This shows the qualitative difference of the stretched-exponential decay versus the exponential one between the strongly correlated one-dimensional gas and the weakly interacting three-dimensional BEC. All error bars correspond to one standard deviation.

Then, we change the dissipation rate in the Luttinger liquid by tuning the intensity of the dissipation light, so that the dissipation characteristic time \(\tau_{0}\) ranges from 10 ms to 200 ms (Fig. 2**A** and **B**). At different dissipation rates, the atom-number decay fittings show consistent stretched exponents \(\alpha\), remaining the same within one standard deviation (Fig. 2**C**). These data demonstrate that the stretched-exponential decay is a universal response of the Luttinger liquid to the dissipative probe. The stretched exponent \(\alpha\) being independent of the dissipation strength indicates that this exponent measures an intrinsic equilibrium property of the Luttinger liquid. In order to explore the relation between the dissipative dynamics and the intrinsic correlations of the Luttinger liquid, we investigate how the stretched exponent \(\alpha\) changes in different interaction regimes, tuning the Luttinger liquid from a weakly interacting Bose gas to a strongly correlated Tonks-Girardeau gas (Fig. 3**A**). Here, Luttinger liquids with the dimensionless interaction strength \(\gamma\) tuned from \(0.08\) to \(5.36\) are created, where we measure the atom decay curves (Fig. 3**B**), fit the data with Eq. 1, and extract the stretched exponent \(\alpha\) at these dimensionless interaction strengths, as shown in Fig. 3**C**.
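For readers who want to reproduce this kind of analysis, a minimal sketch of the fit to Eq. 1 is given below. The decay data here are synthetic, generated with \(\alpha=0.7\) and \(\tau_{0}=25\) ms purely for illustration; the actual measurements and fitting details are those described in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(t, n0, tau0, alpha):
    """Eq. 1: N(t) = N(0) * exp[-(t / tau0)**alpha]."""
    return n0 * np.exp(-(t / tau0) ** alpha)

# Synthetic decay data for illustration only (alpha = 0.7, tau0 = 25 ms, 5% noise).
rng = np.random.default_rng(1)
t = np.linspace(0.5, 60, 25)                       # dissipation times in ms
n = stretched_exp(t, 4e4, 25.0, 0.7) * rng.normal(1.0, 0.05, t.size)

popt, pcov = curve_fit(stretched_exp, t, n, p0=(n[0], 20.0, 1.0))
n0_fit, tau0_fit, alpha_fit = popt
perr = np.sqrt(np.diag(pcov))                      # one-standard-deviation errors

print(f"alpha = {alpha_fit:.2f} +/- {perr[2]:.2f}")
print(f"tau0  = {tau0_fit:.1f} +/- {perr[1]:.1f} ms")
```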
Here, we obtain \(\alpha=0.93\pm 0.16\), \(0.77\pm 0.08\), \(0.70\pm 0.04\), and \(0.58\pm 0.06\) for \(\gamma=0.08\), \(0.72\), \(1.54\), and \(5.36\), respectively. These observations can be quantitatively understood by applying the non-Hermitian linear response theory to the Luttinger liquid [18; 21]. The theory connects the dissipative dynamics of an observable to the correlation functions of the initial equilibrium state. By calculating the linear-order perturbations and spectral functions at zero temperature (see SM), the theoretical model predicts a stretched-exponential behavior for one-body dissipation as \(N(t)\propto\exp[-(t/\tau_{0})^{2\eta-1}]\). This prediction connects the stretched exponent \(\alpha\) with the anomalous dimension \(\eta\) as \(\alpha=2\eta-1\), where \(\eta\) is determined by the Luttinger parameter \(K\) as \(\eta=1-1/\left(4K\right)\). Using the Lieb-Liniger model, both \(\eta\) and \(K\) are determined by the dimensionless interaction strength \(\gamma\) (see SM). In Fig. 3**C**, we plot the theoretically predicted values of \(\alpha=2\eta-1\) versus \(\gamma\) for the zero-temperature case (black solid curve). As the dimensionless interaction strength \(\gamma\) increases, the quantum fluctuations get stronger, and the quasi-particle description fails. This leads to a decreasing anomalous dimension \(\eta\), deviating more strongly from \(1\), and a smaller Luttinger parameter \(K\).

This zero-temperature theoretical prediction curve is not fully aligned with the experimental data due to the finite-time effect, because it is only valid at dissipation times \(t\ll h/\pi k_{B}T\), where \(h/\pi k_{B}T\sim 3\) ms for a typical temperature \(T=5\) nK (see SM). However, to guarantee reasonable data quality, experimentally we have to choose an intermediate dissipation time \(t\sim h/\pi k_{B}T\), and the fitted stretched exponent \(\alpha\) is influenced by the chosen upper bound \(t_{u}\) of the dissipation time (see Fig. S2**B**). Therefore, we extend the zero-temperature linear response theory into the finite-temperature regime with higher-order corrections, and plot the corrected theoretical curve of \(\alpha\) in Fig. 3**C** (black dashed curve) and Fig. 2**C** (grey shade). For the fittings to extract \(\alpha\), we only include the experimental data within \(t_{u}\), represented by the solid lines in both Fig. 2**B** and 3**B**. The corrected theoretical predictions agree well with our experimental data. One important thing to be noticed is that this finite-time correction is temperature-independent. We choose the interrogation time upper bound \(t_{u}\) to be inversely proportional to the temperature \(T\) of the system to keep the product of \(t_{u}\) and \(T\) constant (see SM).

Figure 2: **Dissipation dynamics at different dissipation strengths for the Luttinger liquid at dimensionless interaction strength \(\gamma=1.54\).** (A), Illustration of the dissipation process at different dissipation strengths. Arrows with different colors imply different dissipation strengths applied to the 1D Bose gas. (B), The atom number \(N(t)\) decays with respect to the dissipation time \(t\) in a log-linear plot. Data with different labels represent dissipation processes at different dissipation strengths. The solid lines are fittings to the stretched-exponential decay in Eq. 1 using data within the upper bound of the dissipation time \(t_{u}=13\) ms. (C), The fitted \(\alpha\) under different dissipation strengths characterized by \(\tau_{0}\). Vertical and horizontal error bars are the one standard deviations of the fitted \(\alpha\)-s and \(\tau_{0}\)-s, respectively. The grey shade is the theoretical prediction of the non-Hermitian linear response theory, taking both the finite-time correction and the atom-number inhomogeneity of different tubes into account. All error bars correspond to one standard deviation.

To further demonstrate that the universal dissipation dynamics is robust against temperature variation and thermal effects, we perform the measurements for Luttinger liquids with the same \(\gamma\) but different temperatures \(T\) (Fig. 4**A**). To achieve this, we vary the depth of the crossed dipole trap while simultaneously adjusting the depth of the lattice, to keep the dimensionless interaction strength \(\gamma\) constant at \(1.63\pm 0.10\). As a result, we are able to tune the initial temperature of the ensemble from 5.6 nK to 16.4 nK (see SM for temperature calibration), and show consistent \(\alpha\) values for different temperatures (Fig. 4**C**). We also rescale the dissipation time with the dissipation characteristic time \(\tau_{0}\) for the data at each temperature, normalize the atom number, and find that the different temperature groups fall on the same curve with \(\alpha=0.69\), which agrees well with the theoretical prediction of \(\alpha\) at \(\gamma=1.63\) (Fig. 4**B**). This observation demonstrates the universality of the dissipation process over a wide range of temperatures. For each parameter set, we calculate the chemical potential of the 1D tubes using the Yang-Yang equation [28], confirming that the chemical potential and the temperature are small enough compared with the vibrational frequency of the tight confinement, and thus all ensembles can be described by one-dimensional Luttinger liquids. In addition, we normalize the temperature \(T\) by the degeneracy temperature \(T_{d}=\hbar^{2}n^{2}/2mk_{B}\), and plot \(\alpha\) versus this normalized temperature in the Fig. 4**C** inset. According to the theoretical prediction of the Lieb-Liniger model at finite temperature [39; 40], the normalized temperature \(T/T_{d}\) characterizes the extent of deformation of the spectral function caused by thermal effects. When \(T/T_{d}\) is smaller than 1.5, the spectral function remains almost unchanged compared with the zero-temperature case. Notably, our finite-temperature results agree with this theoretical prediction, since the stretched exponent \(\alpha\), characterizing the spectral function, remains constant when \(T/T_{d}\) falls within the range of 0.6 to 1.6.

Figure 3: **Dissipation dynamics at different dimensionless interaction strength \(\gamma\).** (A), Illustration of the dissipation process at different dimensionless interaction strength \(\gamma\). (B), The normalized atom number \(N(t)/N(0)\) versus the rescaled time \(t/\tau_{0}\) at different \(\gamma\) in a log-linear plot. Different labels show dissipation processes under different \(\gamma\), and the solid lines are fits to Eq. 1 using data within the upper bound of the dissipation time \(t_{u}\). (C), The fitted stretched exponent \(\alpha\) as a function of \(\gamma\) in a linear-log plot. The black solid line shows the zero-temperature theoretical prediction for \(\alpha=2\eta-1\) at different \(\gamma\), while the dashed line shows the prediction with the finite-time correction.
The grey diamonds with \(\gamma=0.08\) are in the crossover of the three-dimensional and the one-dimensional Bose gas region, and the theoretical prediction based on Lieb-Liniger model is not applicable to these data. All error bars correspond to one standard deviation.** dissipative process is only decided by the intrinsic correlations, independent of any external parameters such as dissipation strength and temperature. Similar phenomenon named anomalous decay of coherence was also observed in the superfluid-Mott insulator phase transition [41]. Such anomalous decay was attributed to the diffusion dynamics in momentum space in the original paper, and later explained by the non-Hermitian linear response theory [18]. In this article, we fully benchmark the unconventional decay behavior in 1D Bose gas experimentally, and show that the predictions of the non-Hermitian linear response theory agree well quantitatively with the experimental results. These results fully support the validity of this new theory, and extend this theory from zero temperature region to finite temperature region. To conclude, we demonstrate a powerful method to utilize dissipation to probe quantum systems and observe equilibrium many-body quantum correlations. This novel dissipative probe method is used to detect the anomalous dimension in 1D Bose gas experimentally, which is hardly accessible in a closed system due to technical difficulties [36]. This method is also expected to detect other quantum correlation features in strongly correlated systems with fractional excitations, including anomalous dimensions in one-dimensional Hubbard model with spin-charge separation and Fermi arc in high-TC superconductors. **Acknowledgement** We acknowledge the inspiring discussions with H. Zhai, X. -W. Guan, and T. Giamarchi, and the technical supports from W. Zhang and Z. Zhang. **Funding**: This work is supported by National Natural Science Foundation of China (92165203, 61975092, 11974202, 12174358), National Key Research and Development Program of China (2021YFA1400904, 2021YFA0718303, 2022YFA1405300), Beijing Natural Science Foundation (Z180013) and Swiss National Science Foundation (200020-188687). Figure 4: **Dissipation dynamics at different temperature.** (**A**), Illustration of the dissipation processes at different temperature \(T\). (**B**), The normalized atom number \(N(t)/N(0)\) versus the rescaled dissipation time \(t/\tau_{0}\) in log-linear plot at different temperature. All data fall on the same stretched-exponential curve with \(\alpha=0.69\). All error bars correspond to one standard deviation. (**C**), Fitted \(\alpha\) at different temperatures. The horizontal error bars stand for the uncertainty of the temperature, while the vertical error bars represent the one-standard-deviation confidence intervals of the fittings. The grey shade is estimated by the non-Hermitian linear response theory with the finite-time correction and the atom-number inhomogeneity. The inset shows the fitted \(\alpha\) versus the rescaled temperature \(T/T_{d}\), where \(T_{d}=\hbar^{2}n^{2}/2mk_{B}\).
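The fit and the mapping from the stretched exponent to Luttinger-liquid quantities described above can be sketched in a few lines. The snippet below is an illustrative reconstruction on synthetic data only: the values of \(N_0\), \(\tau_0\), the noise level, and the fit window are placeholders rather than the experimental settings, and the real analysis additionally restricts the data to \(t\leq t_u\) and applies the finite-time correction discussed in the text. It uses \(\alpha=2\eta-1\) and \(\eta=1-1/(4K)\) to convert the fitted exponent into the anomalous dimension and the Luttinger parameter.

```python
import numpy as np
from scipy.optimize import curve_fit

# Stretched-exponential decay, Eq. 1 of the text: N(t) = N0 * exp[-(t/tau0)^alpha]
def stretched_exp(t, n0, tau0, alpha):
    return n0 * np.exp(-(t / tau0) ** alpha)

# --- Synthetic "measurement" (placeholder values, not the experimental data) ---
rng = np.random.default_rng(0)
t = np.linspace(0.5, 13.0, 30)                       # dissipation times in ms, up to t_u ~ 13 ms
true_n0, true_tau0, true_alpha = 4.0e4, 6.0, 0.70    # assumed parameters for illustration
n_obs = stretched_exp(t, true_n0, true_tau0, true_alpha)
n_obs = n_obs * (1 + 0.02 * rng.standard_normal(t.size))   # 2% relative noise

# --- Fit and propagate to the Luttinger-liquid quantities ---
popt, pcov = curve_fit(stretched_exp, t, n_obs, p0=(n_obs[0], 5.0, 0.8))
n0_fit, tau0_fit, alpha_fit = popt
alpha_err = np.sqrt(pcov[2, 2])

eta = (alpha_fit + 1) / 2          # anomalous dimension from alpha = 2*eta - 1
K = 1.0 / (4.0 * (1.0 - eta))      # Luttinger parameter from eta = 1 - 1/(4K)

print(f"alpha = {alpha_fit:.3f} +/- {alpha_err:.3f}")
print(f"eta   = {eta:.3f},  K = {K:.2f}")
```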
Dissipation is unavoidable in quantum systems. It usually leads to decoherence and alters quantum correlations. To access the information of strongly correlated quantum matter, one therefore needs to overcome or suppress dissipation. Here, however, we find the opposite: dissipation can be used as a powerful tool. Exploiting the highly controllable dissipation available in ultracold atomic systems, we observe a universal dissipative dynamics in strongly correlated one-dimensional quantum gases. The total particle number of the system follows a universal stretched-exponential decay, and the stretched exponent characterizes the anomalous dimension of the spectral function, a key quantity describing the strong quantum fluctuations of this system. This method could be widely applied to the detection of strongly correlated features, for example spin-charge separation and Fermi arcs in quantum materials.
2305.19818
Spectral Harmonics: Bridging Spectral Embedding and Matrix Completion in Self-Supervised Learning
Self-supervised methods received tremendous attention thanks to their seemingly heuristic approach to learning representations that respect the semantics of the data without any apparent supervision in the form of labels. A growing body of literature is already being published in an attempt to build a coherent and theoretically grounded understanding of the workings of a zoo of losses used in modern self-supervised representation learning methods. In this paper, we attempt to provide an understanding from the perspective of a Laplace operator and connect the inductive bias stemming from the augmentation process to a low-rank matrix completion problem. To this end, we leverage the results from low-rank matrix completion to provide theoretical analysis on the convergence of modern SSL methods and a key property that affects their downstream performance.
Marina Munkhoeva, Ivan Oseledets
2023-05-31T13:02:06
http://arxiv.org/abs/2305.19818v2
# Bridging Spectral Embedding and ###### Abstract Self-supervised methods received tremendous attention thanks to their seemingly heuristic approach to learning representations that respect the semantics of the data without any apparent supervision in the form of labels. A growing body of literature is already being published in an attempt to build a coherent and theoretically grounded understanding of the workings of a zoo of losses used in modern self-supervised representation learning methods. In this paper, we attempt to provide an understanding from the perspective of a Laplace operator and connect the inductive bias stemming from the augmentation process to a low-rank matrix completion problem. To this end, we leverage the results from low-rank matrix completion to provide theoretical analysis on the convergence of modern SSL methods and a key property that affects their downstream performance. ## 1 Introduction Self-supervised methods have garnered significant interest due to their heuristic approach to learning representations that capture the semantic information of data without requiring explicit supervision in the form of labels. Contrastive learning based methods among the former would use the repulsion among arbitrary pair of points in the batch, while non-contrastive would rely on the consistency among different views of the same image. While self-supervised representation learning becomes more ubiquitous in the wild, especially in the important domain such as medical imaging [14], the theoretical grounding of these methods would potentially help avoid the pitfalls in applications. Unsurprisingly, researchers are already trying to build theoretically sound understanding of modern self-supervised representation learning methods. The overall goal of this work is to understand self-supervised representation learning through the lens of nonlinear dimensionality reduction methods (e.g. Laplacian Eigenmaps [3]) and low-rank matrix completion problem [24]. To this end, we take on a Laplace operator perspective on learning the optimal representations in the manifold assumption. We then derive a trace maximization formulation to learn eigenfunctions of the Laplace operator of the underlying data manifold. We adopt the heat kernel based embedding map that in theory under certain conditions is an almost isometric embedding of the low-dimensional manifold into the Euclidean space. As a result, we discuss how existing several SSL methods (e.g. SimCLR [8], BarlowTwins [33], VICReg[2]) can be comprehended under this view. It is important to note that our current understanding of the topic lacks one crucial aspect. Traditional spectral methods commonly operate with complete kernel matrices, whereas our approach deals with incomplete and potentially noisy ones. The only available similarity information among examples is derived from the data augmentation process, which generates a positive pair. Meanwhile the remaining examples in the batch are either seen as negative ones (contrastive) or are not considered at all (non-contrastive). A pertinent and often overlooked question emerges: how can self-supervised learning methods effectively leverage such limited signals to converge towards meaningful representations? In response, we shed light on this matter by establishing a connection between SSL and a matrix completion problem. 
We demonstrate that these optimization problems are Lagrangian dual of each other, implying that optimizing the SSL objective simultaneously entails reconstructing the kernel matrix. We can summarize the contributions of this paper as follows: * We propose an eigen-problem objective for spectral embeddings from graphs induced by augmentations and use it to interpret modern SSL methods. * We show that SSL methods do _Laplacian-based nonlinear dimensionality reduction_ and _low-rank matrix completion_ simultaneously. We leverage theory behind matrix completion problem to provide insights on the success of SSL methods and their use in practice. * While the number of observed entries required by theory decreases with epochs, we find that the actual number is a constant and the former eventually intersects the latter. * We find a possible explanation for disparity in downstream performance of the backbone and projection outputs. ## 2 Background This work relies on the manifold hypothesis, a hypothesis that many naturally occurring high-dimensional data lie along a low-dimensional latent manifold inside the high-dimensional space. Since we can only observe a sample from the manifold in the ambient space, neither the true manifold nor its metric are available to us. Although we never explicitly work with Laplace-Beltrami operator, we still give a brief definition below to provide some grounding for the reasoning later. Laplace operatorLet \(\mathcal{M}\) be a Riemannian manifold and \(g\) be the Riemannian metric on \(\mathcal{M}\). For any smooth function \(u\) on \(\mathcal{M}\), the gradient \(\nabla u\) is a vector field on \(\mathcal{M}\). Let \(\nu\) be the Riemannian volume on \(\mathcal{M}\), \(d\nu=\sqrt{\det g}\,dx^{1}...dx^{D}\). By the divergence theorem, for any smooth functions \(u\) and \(v\) (plus smooth and compact support assumptions) \(\int_{\mathcal{M}}u\,\text{div}\nabla v\,d\nu=-\int_{\mathcal{M}}\langle \nabla u,\nabla v\rangle d\nu,\) where \(\langle\cdot,\cdot\rangle=g(\cdot,\cdot)\). The operator \(\Delta=\text{div}\nabla\) is called the Laplace-Beltrami operator of the Riemannian manifold \(\mathcal{M}\). In practice, we usually work with finite samples, and Laplace operator is typically approximated with graphs or meshes. While the latter are typically used in computational mathematics, the former find a widespread use in machine learning. Below is a brief overview of the relevant graph notions, for a detailed exposition please see [30]. Graph Laplacian and Spectral EmbeddingGiven a graph \(\Gamma=(V,E)\) with \(|V|=n\) vertices and a set of edges \(e_{ij}=(v_{i},v_{j})\) that form a weighted adjacency matrix \(\mathbf{A}_{ij}=w_{ij}\) with non-negative weight \(w_{ij}\geq 0\), whenever there is \(e_{ij}\in E\), otherwise \(0\). With a degree matrix \(\mathbf{D}=\text{diag}(\mathbf{A}\mathbf{1})\), the graph Laplacian is given by \(\mathbf{L}=\mathbf{D}-\mathbf{A}\), the corresponding random walk Laplacian is a normalization \(\mathbf{L}_{rw}=\mathbf{I}-\mathbf{D}^{-1}\mathbf{A}\). All graph Laplacians admit an eigenvalue decomposition, i.e. \(\mathbf{L}=\mathbf{U}\Lambda\mathbf{U}^{\top}\), where \(\mathbf{U}\in\mathbb{R}^{n\times n}\) contains eigenvectors in columns and a diagonal matrix \(\Lambda\in\mathbb{R}^{n\times n}\) has eigenvalues on the diagonal. Note that there is a trivial eigenpair \((0,\mathbf{1})\). 
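As a minimal illustration of the objects just defined, the sketch below builds the unnormalized Laplacian \(\mathbf{L}=\mathbf{D}-\mathbf{A}\) and the random-walk Laplacian \(\mathbf{L}_{rw}=\mathbf{I}-\mathbf{D}^{-1}\mathbf{A}\) from a toy Gaussian-kernel affinity matrix and extracts spectral embeddings from the first nontrivial eigenvectors. The affinity construction and all sizes are placeholder choices for illustration; this is the classical spectral-embedding recipe, not part of any SSL pipeline.

```python
import numpy as np

def laplacians(A):
    """Unnormalized and random-walk graph Laplacians of a weighted adjacency matrix A."""
    d = A.sum(axis=1)
    L = np.diag(d) - A
    L_rw = np.eye(len(A)) - A / d[:, None]   # I - D^{-1} A
    return L, L_rw

def spectral_embedding(A, k):
    """Rows of the first k nontrivial eigenvectors of L as point embeddings."""
    L, _ = laplacians(A)
    evals, evecs = np.linalg.eigh(L)         # ascending eigenvalues; (0, 1) is the trivial pair
    return evecs[:, 1:k + 1]                 # skip the trivial constant eigenvector

# Toy data: two noisy clusters in the plane, Gaussian-kernel affinity
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
A = np.exp(-sq_dists / 1.0)
np.fill_diagonal(A, 0.0)

Z = spectral_embedding(A, k=2)
print(Z.shape)   # (40, 2): one 2-d embedding per point
```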
[5; 4] show that the eigenvectors of the graph Laplacian of a point-cloud dataset converge to the eigenfunctions of the Laplace-Beltrami operator under uniform sampling assumption. Whenever one has an affinity matrix, a positive semidefinite pairwise similarity relations among points, the classical way to obtain embeddings for the points is to perform spectral embedding. A typical algorithm would include constructing a graph based on the affinity matrix, computing \(k\) first Figure 1: Illustrative graph of the theoretical fraction \(p^{*}\) of the observed entries required for matrix completion to succeed with high probability. As training proceeds, unmaterialized kernel matrix size \(n\) increases and \(p^{*}\), which is roughly \(\sim\log n/n\), decreases. Eventually, the actual (constant) fraction \(p\) of observed entries under self-supervised learning augmentation protocol intersects the theoretical bound. eigenvectors of its Laplacian and setting the embeddings to be the rows of the matrix \(\mathbf{U}\in\mathbb{R}^{n\times k}\) that contains the eigenvectors as columns. Matrix completionLet \(\mathbf{M}\in\mathbb{R}^{n\times n}\) is partially observed matrix with rank \(r\). Under Bernoulli model, each entry \(\mathbf{M}_{ij}\) is observed independently of all others with probability \(p\). Let \(\Omega\) be the set of observed indices. The matrix completion problem aims to recover \(\mathbf{M}\) from \(m=|\Omega|\) observations. The standard way to solve this problem is via nuclear norm minimization: \[\min_{\mathbf{X}\in\mathbb{R}^{n\times n}}\|\mathbf{X}\|_{*}\quad\text{ subject to}\quad\mathbf{X}_{ij}=\mathbf{M}_{ij}\text{ for }(i,j)\in\Omega, \tag{1}\] where \(\|\mathbf{X}\|_{*}\) is the nuclear norm of \(\mathbf{X}\), i.e. the sum of its singular values. A large body of work [7; 24; 23; 9] has succeeded in providing and enhancing of the conditions that guarantee the optimal solution \(\mathbf{X}^{*}\) to be both unique and equal to \(\mathbf{M}\) with high probability. NotationAny matrix \(\mathbf{X}\!\in\!\mathbb{R}^{n_{1}\times n_{2}}\) has a singular value decomposition (SVD) \(\mathbf{X}\!=\!\mathbf{U}\Sigma\mathbf{V}^{\top}\), where the columns of the matrix \(\mathbf{U}\!\in\!\mathbb{R}^{n_{1}\times n_{1}}\) are left singular vectors, the singular values \(\sigma_{1}\geq\sigma_{2}\geq\cdots\geq\sigma_{\min(n_{1},n_{2})}\) lie on the diagonal of a diagonal matrix \(\Sigma\in\mathbb{R}^{n_{1}\times n_{2}}\), and right singular vectors are the columns of \(\mathbf{V}\in\mathbb{R}^{n_{2}\times n_{2}}\). \(||\mathbf{X}||_{F},\;||\mathbf{X}||=\sigma_{1},\;||\mathbf{X}||_{*}=\sum_{i} \sigma_{i}\) denote Frobenius, spectral and nuclear norms of \(\mathbf{X}\), respectively. \(||\mathbf{x}||_{p}\) denotes \(p\)-th vector norm. ## 3 SSL and Spectral Embedding In this section, we will provide a construction that covers most of the self-supervised learning methods and gives SSL a novel interpretation. First, we formalize the setup from the perspective of the manifold hypothesis. Then, we make some modelling choices to yield an SSL formulation as a trace maximization problem, a form of eigenvalue problem. Finally, we describe how three well known representatives SSL methods fall under this formulation. Let \(\mathbf{X}\in\mathbb{R}^{n\times d^{\prime}}\) be a set of \(n\) points in \(\mathbb{R}^{d^{\prime}}\) sampled from a low-dimensional data manifold observed in a \(d^{\prime}\)-dimensional ambient Euclidean space. 
However, data is rarely given in the space where each dimension is meaningful, in other words \(d^{\prime}\gg d^{*}\), where \(d^{*}\) is unknown true dimensionality of the manifold. The goal of nonlinear dimensionality reduction is to find a useful embedding map into \(d\)-dimensional Euclidean space with \(d\ll d^{\prime}\). Two of the classical approaches, namely Eigenmaps [3] and Diffusion maps [10], use eigenfunctions of the graph Laplacian, an approximation of the Laplace operator associated with the data manifold. Both can be loosely described as a variant of map \[\Phi(\mathbf{x})=[\;\phi_{1}(\mathbf{x})\quad\phi_{2}(\mathbf{x})\quad\ldots \quad\phi_{d}(\mathbf{x})\;], \tag{2}\] where \(\phi_{k}\) is \(k\)-th eigenfunction of the negative Laplace operator on the underlying manifold. This type of map bears a direct connection to theoretical question how to find an embedding of certain types of manifolds into Euclidean spaces, studied in differential geometry. For instance, [6] construct embedding for a given smooth manifold by using its heat kernel. This approach has been subsequently enhanced through the application of a truncated heat kernel expansion. Furthering this trajectory, [22] explore whether and when such embedding is close to being isometric, which is desirable as isometric embedding preserves distances. Motivated by the heat kernel embedding literature, we provide a general construction for self-supervised methods in what follows. The heat kernel on a manifold can be represented as an expansion \(H(p,q,t)=\sum_{k=0}^{\infty}e^{-\lambda_{k}t}\phi_{k}(p)\phi_{k}(q)\), where \(p\) and \(q\) are some points on the manifold, \(t>0\) and \(\phi_{k}\) are normalized eigenfunctions of the Laplace operator, i.e. \(-\Delta\phi_{k}=\lambda_{k}\phi_{k}\) and \(|\phi_{k}|_{2}=1\). However, working with finite data, we can only operate with a graph approximation of the manifold, and will make use of the heat kernel construction for graphs. The latter is given by a matrix exponential \(\mathbf{H}_{t}=e^{-t\mathbf{L}}\), alternatively represented through an expansion \(\mathbf{H}_{t}=\sum_{i=0}^{n}e^{-\lambda_{k}t}\phi_{k}\phi_{k}^{T}\), where \(\mathbf{L}\) is a graph Laplacian and \((\lambda_{k},\phi_{k})\) is \(k\)-th eigenpair of \(\mathbf{L}\). Consider Laplacian EigenMaps method, it first constructs a graph and weighs the edges with a heat kernel of the Euclidean space which takes the form of a Gaussian kernel \(h_{E}(\mathbf{x},\mathbf{y},t)=\exp(-||\mathbf{x}-\mathbf{y}||_{2}^{2}/t)\) whenever the distance is less then some hyperparameter, i.e. \(||\mathbf{x}-\mathbf{y}||_{2}<\epsilon\), or \(\mathbf{y}\) is among \(k\) nearest neighbours of \(\mathbf{x}\). Next, the method proceeds with finding the eigenvectors of the graph Laplacian constructed from the thus weighted adjacency matrix \(\mathbf{W}\). Contrarily, the proposed construction acknowledges the cardinality and complexity typically associated with the modern day datasets, where it is challenging to select the cutoff distance or the number of neighbours, and impractical to compute all pairwise distances. Our ideal graph has edges reflecting the semantic similarity between points; e.g. there is an edge whenever the pair of points belong to the same class. The drawback is that this matrix is unknown. This is where augmentation comes into play as it provides a peek into only a fraction of the entries in the unmaterialized heat kernel matrix. 
Consequently, we claim that SSL implicitly solves a low-rank matrix completion problem by instantiating some of the pairwise similarity entries in \(\mathbf{H}_{t}\) via the augmentation process. However, prior to this, we need to demonstrate the generality of the perspective we have formulated. To accomplish this, we articulate a trace maximization problem. ### Trace Minimization To start, we construct an unmaterialized heat kernel matrix in accordance with the standard self-supervised learning protocol for augmentations. Given a dataset with \(N\) points, one training epoch generates \(a\) views of each point. Typically, \(a=2\) and training runs for \(n_{epochs}\) number of epochs. The whole training generates exactly \(n_{epochs}\times a\) views per original instance. As a result the total number of processed examples is \(n=N\times a\times n_{epochs}\), thereby we have that \(\mathbf{H}_{t}\in\mathbb{R}^{n\times n}\). As a rule, SSL utilises architectures with a heavy inductive bias, e.g. ResNet [17] / Vision Transformers [12] for image data. Thus, we may safely assume that the same instance views are close in the embedding space and are connected. This results in a block diagonal adjacency matrix \(\mathbf{W}\), where \(i\)-th block of ones accounts for all the views of \(i\)-th image among \(N\) original images. The fraction of observed entries equal to \(p=N\times(n_{epochs}\times a)^{2}/n^{2}=1/N\), a constant fraction for a given dataset. Note that in the ideal scenario, we would know the cluster/class affiliation for each point and would have connected same-class views with edges. However, in reality, we only know instance affiliation of each view. Let us denote the ideal scenario heat kernel matrix as \(\mathbf{H}_{t}\) and its partially observed counterpart used in reality -- \(\widehat{\mathbf{H}}_{t}\). To obtain the heat kernel matrix, we use the normalized random walk Laplacian \(\mathbf{L}_{rw}\), for which the heat kernel matrix is as before: \(\mathbf{H}_{t}=\exp(-t\mathbf{L}_{rw})=\sum_{k=0}^{n}\exp(-\lambda_{k}t) \mathbf{u}_{k}\mathbf{u}_{k}^{\top},\) where \(\mathbf{u}_{k}\) is \(k\)-th eigenvector and \(\lambda_{k}\) a corresponding eigenvalue of \(\mathbf{L}_{rw}\). To obtain spectral embeddings from an ideal \(\mathbf{H}_{t}\), one would then proceed with solving a trace minimization form of eigenvalue problem in (3), where the optimal solution is exactly the eigenvectors of \(\mathbf{L}_{rw}\), i.e. \(\mathbf{Z}^{*}=\mathbf{U}\). \[\max_{\mathbf{Z}} \operatorname{Tr}(\mathbf{Z}^{\top}\mathbf{H}_{t}\mathbf{Z})\] (3) s.t. \[\mathbf{Z}^{\top}\mathbf{Z}=\mathbf{I}, \mathbf{s.t.} \mathbf{Z}_{\theta}^{\top}\mathbf{Z}_{\theta}=\mathbf{I}_{d}, \mathbf{Z}_{\theta}^{\top}\mathbf{1}=\mathbf{0},\ ||\mathbf{Z}_{\theta}^{j}||_{2}=1\ \forall j\in[1,d]\] However, for lack of a better alternative we resort to the incomplete \(\widehat{\mathbf{H}}_{t}\) and need to learn a parameterized map \(\mathcal{F}_{\theta}(\mathbf{X})=\mathbf{Z}_{\theta}\) in (4). Apart from the specified eigenfunction pairwise orthogonality constraint, the resulting loss is comprised of implicit constraints on the trivial eigenfunction (identity) and function norm. But before we treat (3) with incomplete \(\mathbf{H}_{t}\) as matrix completion problem, we show that this formulation can be seen as a generalization for a number of existing self-supervised learning methods. Specifically, several modelling choices differentiate the resultant methods. 
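To make the objective in (3) concrete, the following sketch constructs the block-diagonal adjacency of views described above, forms a heat kernel matrix, and checks that the top-\(d\) eigenvectors maximize \(\operatorname{Tr}(\mathbf{Z}^{\top}\mathbf{H}_{t}\mathbf{Z})\) over orthonormal \(\mathbf{Z}\). For simplicity the sketch uses the symmetric normalized Laplacian instead of \(\mathbf{L}_{rw}\), so that the eigenvectors are orthogonal and the check is exact; this substitution, together with the chosen sizes and the diffusion time \(t\), is an assumption of the illustration.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

# Block-diagonal adjacency of "views": 10 instances x 4 views each, as in the SSL protocol
n_inst, n_views = 10, 4
A = np.kron(np.eye(n_inst), np.ones((n_views, n_views)))
np.fill_diagonal(A, 0.0)
n = A.shape[0]

# Symmetric normalized Laplacian (illustrative stand-in for L_rw) and heat kernel H_t = exp(-t L)
d = A.sum(axis=1)
L_sym = np.eye(n) - A / np.sqrt(np.outer(d, d))
t = 0.5
H_t = expm(-t * L_sym)

# Optimal Z for max Tr(Z^T H_t Z) subject to Z^T Z = I: the top-d eigenvectors of H_t
dim = 5
evals, evecs = np.linalg.eigh(H_t)
Z_opt = evecs[:, -dim:]

# Any other orthonormal Z gives a smaller (or equal) trace
Q, _ = np.linalg.qr(rng.standard_normal((n, dim)))
tr_opt = np.trace(Z_opt.T @ H_t @ Z_opt)
tr_rand = np.trace(Q.T @ H_t @ Q)
print(f"optimal trace  = {tr_opt:.3f}")
print(f"random Z trace = {tr_rand:.3f}")
assert tr_opt >= tr_rand - 1e-9
```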
### Self-Supervised Methods Learn Eigenfunctions of the Laplace Operator SimClrConsider the following contrastive loss, variants of which are widely adopted in self-supervised learning methods: \(L_{CL}=\sum\limits_{i,j\in+pairs}l_{ij}+l_{ji}\), where \[l_{ij}=\log\frac{e^{\kappa\langle\mathbf{z}_{i},\mathbf{z}_{j}\rangle}}{\sum \limits_{k\neq i}e^{\kappa\langle\mathbf{z}_{i},\mathbf{z}_{k}\rangle}}=\kappa \langle\mathbf{z}_{i},\mathbf{z}_{j}\rangle-\log\sum\limits_{k\neq i}e^{ \kappa\langle\mathbf{z}_{i},\mathbf{z}_{k}\rangle}, \tag{5}\] with \(\mathbf{z}_{i}=f_{\theta}(\mathbf{x}_{i})\) is an embedding of input \(\mathbf{x}_{i}\) given by function \(f\) parameterized by a neural network with learnable parameters \(\theta\), which produces unit-norm embeddings, i.e. \(||\mathbf{z}_{i}||_{2}=1\). One iteration of SimCLR takes a batch of \(N\) data points and creates \(2N\) views, augmenting each sample to obtain a positive pair (\(\mathbf{z}_{i},\mathbf{z}_{j}\)), while treating all the other samples and their augmentations as contrastive samples \(\mathbf{z}_{k}\). Let us include \(k=i\) in the sum in denominator of (5) for a moment. A goal of SimCLR is to obtain optimal representations \(\mathbf{Z}\in\mathbb{R}^{2N\times d}\) such that for each positive pair \((i,j)\), their representations will be aligned as much as possible \(\langle\mathbf{z}_{i},\mathbf{z}_{j}\rangle\to 1\). Let \(\mathbf{A}_{ij}=\exp(\kappa[\mathbf{Z}\mathbf{Z}^{\top}]_{ij})\) and \(\mathbf{D}=\text{diag}(\mathbf{A}\mathbf{1})\), then we can rewrite (5) as \[l_{ij}=\log\mathbf{A}_{ij}-\log\sum_{k=1}^{2N}\mathbf{A}_{ik}=\log\mathbf{A}_{ ij}-\log\mathbf{D}_{ii}=\log\frac{\mathbf{A}_{ij}}{\mathbf{D}_{ii}}=\log[ \mathbf{D}^{-1}\mathbf{A}]_{ij} \tag{6}\] where we are interested in the right hand side \(\log[\mathbf{D}^{-1}\mathbf{A}]_{ij}\). Let us note that \([\mathbf{D}^{-1}\mathbf{A}]_{ij}\) from (6) is a normalized adjacency of a graph \(G=(V,E)\), where node set \(V\) is the set of views. Since representations live on a unit \((d-1)\)-sphere, we have a choice of naturally arising distributions (von Mises-Fisher, spherical normal, etc) to instantiate the weighting function \(\mu(\mathbf{x}_{i},\mathbf{x}_{j})\). The model choice of SimCLR is to instantiate the weighting function with the density of the von Mises-Fisher distribution with \(\kappa>0\) (without the partition function): \[\mathbf{A}_{ij}=\exp(\kappa\mathbf{z}_{i}^{\top}\mathbf{z}_{j}), \tag{7}\] where \(\mathbf{A}_{ij}\) equals \(e^{\kappa}\), with a typical value \(\kappa=2\), whenever \(i,j\) is a positive pair, and \(e^{0}=1\) otherwise. Note that \(\log(\mathbf{D}^{-1}\mathbf{A})=\mathbf{D}^{-1}\mathbf{A}-\mathbf{I}+o(( \mathbf{D}^{-1}\mathbf{A})^{2})\approx-\mathbf{L}_{rw}\), thus in loose terms the objective in (5) maybe be seen as a minimization of a trace of a negative of the graph Laplacian by learning \(\mathbf{z}\)'s that shape \(\mathbf{A}\). Decoupled Contrastive Learning [32] can be seen as reweighting the adjacency matrix by setting \(\mathbf{A}_{ij}=w(z_{i},z_{j})\), where \(w(z_{i},z_{j})\) is a reweighting function to emphasize hard examples. For non-contrastive methods (BarlowTwins, VICReg, etc) SSL methods (where no negative examples are used, e.g. BarlowTwins, SimSiam), the respective losses could also be interpreted as learning a diffusion map. 
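As a quick numerical check of the SimCLR identity in (6) above, the snippet below draws random unit-norm embeddings, keeps the \(k=i\) term in the denominator as in the derivation, and verifies that each contrastive term equals \(\log[\mathbf{D}^{-1}\mathbf{A}]_{ij}\) with \(\mathbf{A}_{ij}=\exp(\kappa\langle\mathbf{z}_{i},\mathbf{z}_{j}\rangle)\). The embeddings and the pair \((i,j)\) are arbitrary stand-ins, not outputs of a trained network.

```python
import numpy as np

rng = np.random.default_rng(2)
kappa, n, d = 2.0, 8, 16

# Random unit-norm embeddings standing in for z_i = f_theta(x_i)
Z = rng.standard_normal((n, d))
Z /= np.linalg.norm(Z, axis=1, keepdims=True)

A = np.exp(kappa * Z @ Z.T)   # A_ij = exp(kappa <z_i, z_j>)
D = A.sum(axis=1)             # D_ii = sum_k A_ik, with the k = i term kept in the sum

i, j = 0, 1                   # a hypothetical positive pair
contrastive_term = kappa * Z[i] @ Z[j] - np.log(np.exp(kappa * Z @ Z[i]).sum())
laplacian_term = np.log(A[i, j] / D[i])   # log [D^{-1} A]_ij

print(contrastive_term, laplacian_term)
assert np.isclose(contrastive_term, laplacian_term)
```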
The key distinction with contrastive methods expresses itself in setting \(A_{ij}=0\) for view pairs \((i,j)\) that do not have shared original, thus eliminating any signal from a possibly negative pair. BarlowTwinsmethod normalizes the columns of representation matrices \(\mathbf{Z}_{a},\mathbf{Z}_{b}\in\mathbb{R}^{N\times d}\), which is the same as the function norm constraint in (4). The overall objective is as follows: \[J=\sum_{ii}(\mathbf{Z}_{a}^{\top}\mathbf{Z}_{b}-\mathbf{I})_{ii}^{2}+\alpha \sum_{i\neq j}(\mathbf{Z}_{a}^{\top}\mathbf{Z}_{b}-\mathbf{I})_{ij}^{2}, \tag{8}\] and simultaneously aims at three things: (i) enforce the closeness of the rows, i.e. positive pair embeddings, (ii) restrict the norm of the columns, i.e. the eigenfunctions, (iii) orthogonalize columns, again the eigenfunctions. The kernel matrix choice in BarlowTwins is a simple bi-diagonal adjacency matrix \(\mathbf{A}_{ij}=1\) as long as \((i,j)\) is a positive pair, and 0 otherwise. VicRegobjective is a weighted sum of _variance, covariance_ and _invariance_ terms: \[J_{var}=\sum_{k=1}^{d}\max(0,1-\sqrt{[\mathbf{Z}^{\top}\mathbf{Z}]_{kk}}), \quad J_{cov}=\sum_{k\neq l}[\mathbf{Z}^{\top}\mathbf{Z}]_{kl}^{2},\quad J_{ inv}=\sum_{ij}\mathbf{A}_{ij}\|\mathbf{Z}_{i}-\mathbf{Z}_{j}\|^{2},\] a similar formulation to the previous method, however, the choice for the adjacency matrix here is little bit different. Individual terms of this loss have separate coefficients. The choice of these hyperparameters defines the implicit adjacency matrix entries, controls the maximum function norm allowed and the trade-off between orthogonality and the trace terms in (4). ## 4 SSL and Low-Rank Matrix Completion In this section, we use low-rank matrix completion theory to show that the limited information from augmentations might be quite enough under certain conditions. First, we establish the connection between our self-supervised learning objective (4) and the low-rank matrix completion problem. We then draw practical insights from the existing convergence theory for matrix completion problem. ### Low-Rank Matrix Completion Dual We argue that the objective in (3) with a substitute incomplete kernel matrix \(\widehat{\mathbf{H}}_{t}\) implicitly contains an objective for the low-rank matrix completion problem. To show this, we first introduce the general form of the nuclear norm minimization problem with affine subspace constraint in (9). \[\min_{\mathbf{X}} \|\mathbf{X}\|_{*}\] (9) subject to \[\mathcal{A}(\mathbf{X})=\mathbf{b}, \max_{\mathbf{q}} \mathbf{b}^{\top}\mathbf{q}\] (10) subject to \[\mathcal{A}(\mathbf{X})=\mathbf{b}, \text{subject to} \|\mathcal{A}^{*}(\mathbf{q})\|\leq 1,\] The subspace is given by linear equations \(\mathcal{A}(\mathbf{X})=\mathbf{b}\) and linear operator \(\mathcal{A}:\mathbb{R}^{n\times n}\to\mathbb{R}^{p}\) can be a random sampling operator. The dual for the nuclear norm \(|\cdot|_{*}\) is the operator norm \(|\cdot|\). The problem in (9) has been initially introduced as a heuristic method for seeking minimum rank solutions to linear matrix equations, but has later been shown to have theoretical guarantees under certain conditions on the linear operator \(\mathcal{A}\)[24, 23, 27, 15]. The Lagrangian dual of (9) is given by (10), where operator \(\mathcal{A}^{*}:\mathbb{R}^{p}\to\mathbb{R}^{n\times n}\) is the adjoint of \(\mathcal{A}\). We write down an instance of (9) in (11) below to better reflect the specifics of our setting. 
Since the true underlying heat kernel matrix \(\mathbf{H}_{t}\) is observed only _partially_ with known entries indicated by a collection of index pairs \(\Omega\) induced by augmentation process, we can form a sampling symmetric matrix \(\mathbf{W}\): \(\mathbf{W}_{ij}=1\) if \((i,j)\in\Omega\), and \(0\) otherwise, indicating observed entries. Now, the incomplete kernel matrix can be written down explicitly as \(\widehat{\mathbf{H}}=\mathbf{W}\odot\mathbf{H}\), where \(\odot\) denotes Hadamard (element-wise) product. The constraint in (9) instantiates as \(\mathcal{A}(\mathbf{X})=\texttt{vec}(\mathbf{W}\odot\mathbf{X})\). \[\min_{\mathbf{X}\in\mathcal{S}_{+}} \|\mathbf{X}\|_{*}\] (11) subject to \[\texttt{vec}(\mathbf{W}\odot\mathbf{X})=\texttt{vec}(\widehat{ \mathbf{H}})\] subject to \[\mathbf{Z}^{\top}\mathbf{Z}=\mathbf{I}\] We proceed by showing that maximisation of the trace formulation in (12) embraces reconstruction of the incomplete underlying kernel matrix with entries known to us only from the augmentation process and specified by the matrix \(\mathbf{W}\). **Proposition 4.1**.: _The trace maximization problem given in (12) is a Lagrangian dual of low-rank matrix completion problem in (11)._ Proof.: First, we show that (12) is an instance of (10). Let linear operator \(\mathcal{A}(\mathbf{X}):\mathbb{R}^{n\times n}\to\mathbb{R}^{n^{2}}\) be a sampling vectorization of \(\texttt{vec}(\mathbf{W}\odot\mathbf{X})\). By trace of a matrix product, we can rewrite the objective as \(\mathrm{Tr}[\mathbf{Z}^{\top}\widehat{\mathbf{H}}\mathbf{Z}]=\mathrm{Tr}[ \widehat{\mathbf{H}}\mathbf{Z}\mathbf{Z}^{\top}]=(\texttt{vec}\widehat{ \mathbf{H}})^{\top}\texttt{vec}(\mathbf{Z}\mathbf{Z}^{\top})=\mathbf{b}^{\top }\mathbf{q}\). The adjoint operator \(\mathcal{A}^{*}(\mathbf{q})=\texttt{mat}(\mathbf{D}_{\mathbf{W}}\mathbf{q})\) acts a sampling matricization, i.e. it samples and maps \(\mathbf{q}\) back to \(\mathbb{R}^{n\times n}\): \(\mathcal{A}^{*}(\mathbf{q})=\mathbf{W}\odot\mathbf{Z}\mathbf{Z}^{\top}\), and the constraint of the dual (10) becomes \(\|\mathbf{W}\odot\mathbf{Z}\mathbf{Z}^{\top}\|\leq 1\). As \(\mathbf{Z}\) admits singular value decomposition \(\mathbf{Z}=\mathbf{U}\odot\mathbf{V}^{\top}\), the constraint in (12) \(\mathbf{Z}^{\top}\mathbf{Z}=\mathbf{I}\) implies \(\sigma_{i}(\mathbf{Z})=1\) for all \(i\). To establish equivalence of the constraints in (10) and (12), i.e. \(\|\mathbf{W}\odot\mathbf{Z}\mathbf{Z}^{\top}\|=\sigma_{1}(\mathbf{W}\odot \mathbf{Z}\mathbf{Z}^{\top})\leq 1\) given \(\mathbf{W}\) and \(\sigma_{1}(\mathbf{Z})=1\), we can use a result for singular values of Hadamard product [2], specifically, for \(k=1,2,\ldots,n\): \(\sum_{i=1}^{k}\sigma_{i}(\mathbf{A}\odot\mathbf{B})\leq\sum_{i=1}^{k}\min(c_{ i}(\mathbf{A}),r_{i}(\mathbf{A}))\sigma_{i}(\mathbf{B})\), where \(c_{i}(\mathbf{A})\) and \(r_{i}(\mathbf{A})\) are column and row lengths of \(\mathbf{A}\), and \(\sigma_{1}\geq\sigma_{2}\geq\cdots\geq\sigma_{n}\). Let \(\mathbf{A}=\mathbf{W}\) and \(\mathbf{B}=\mathbf{Z}\mathbf{Z}^{\top}\), then \(r_{i}=c_{i}=\sqrt{K}\), yielding \(\sigma_{1}(\mathbf{W}\odot\mathbf{Z}\mathbf{Z}^{\top})\leq\sqrt{K}\), where \(K=n_{epochs}\times a\) is a constant, consequently, it can be accounted for by rescaling \(\mathbf{W}\) and not affecting the maximization. Finally, since (11) is an instance of (9), we can conclude that (12) is dual to (11). By establishing this connection, we can leverage the theoretical guarantees derived from matrix completion in our subsequent analysis. 
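The recovery problem in (11) can be exercised on a toy instance. The sketch below reconstructs a low-rank, block-structured kernel from an augmentation-style symmetric mask using singular value thresholding (Cai–Candès–Shen), a standard iterative heuristic for nuclear-norm minimization; it is not the dual solver appearing in Proposition 4.1, and the threshold \(\tau\), step size \(\delta\), matrix size, and observation fraction are ad hoc illustrative choices.

```python
import numpy as np

def svt_complete(M_obs, W, tau, delta, n_iter=400):
    """Singular value thresholding for nuclear-norm matrix completion.
    M_obs: matrix holding the observed entries, W: 0/1 mask of observed positions."""
    Y = np.zeros_like(M_obs)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt      # soft-threshold the singular values
        Y += delta * W * (M_obs - X)                 # gradient step on observed entries only
    return X

rng = np.random.default_rng(3)
n, r = 120, 4
# Rank-r "ideal" kernel: block structure from r equal-size clusters
labels = np.repeat(np.arange(r), n // r)
M = (labels[:, None] == labels[None, :]).astype(float)

p = 0.3                                              # fraction of observed entries
W = (rng.random((n, n)) < p).astype(float)
W = np.maximum(W, W.T)                               # keep the observation mask symmetric

X_hat = svt_complete(M * W, W, tau=5 * n, delta=1.2 / p)
rel_err = np.linalg.norm(X_hat - M) / np.linalg.norm(M)
print(f"relative recovery error: {rel_err:.3f}")
```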
### Convergence IncoherenceStandard incoherence is the key notion in the analysis of matrix completion problem. Intuitively, incoherence characterises the ability to extract information from a small random subsample of columns in the matrix. More formally, it is defined as an extent to which the singular vectors are aligned with the standard basis. **Definition 4.2**.: (\(\mu_{0}\)-incoherence). Given matrix \(\mathbf{M}\in\mathbb{R}^{n_{1}\times n_{2}}\) with rank \(r\) and SVD \(\mathbf{M}=\mathbf{U}\Sigma\mathbf{V}^{\top}\), \(\mathbf{M}\) is said to satisfy the _standard incoherence_ condition with incoherence parameter \(\mu_{0}\) if \[\max_{1\leq i\leq n_{1}}\|\mathbf{U}^{\top}\mathbf{e}_{i}\|_{2}\leq\sqrt{\frac{ \mu_{0}r}{n_{1}}}\,\quad\max_{1\leq j\leq n_{2}}\|\mathbf{V}^{\top}\mathbf{e}_{j}\|_{2}\leq \sqrt{\frac{\mu_{0}r}{n_{2}}}\, \tag{13}\] where \(\mathbf{e}_{i}\) is the \(i\)-th standard basis vector of a respective dimension. Since the matrix we aim to recover is symmetric, \(n_{1}=n_{2}=n\) and its left, right singular and eigenvectors are identical. To the best of our knowledge, optimal sample complexity bounds for matrix recovery via nuclear norm minimisation were obtained in [9]. Specifically, given \(\mathbf{M}\) satisfies standard incoherence condition (13) with parameter \(\mu_{0}\), if uniform sampling probability \(p^{*}\geq\nicefrac{{c_{0}\mu_{0}r\log^{2}(n)}}{{n}}\) for some \(c_{0}>0\), then \(\mathbf{M}\) is the unique solution to (1) with high probability. Matrix recovery is easy when it is highly incoherent -- when information is spread more uniformly among its columns/rows, loosing a random subset of its entries is not as big of a deal as when information is concentrated in certain important columns/rows. On the other hand, high incoherence intuitively makes matrices harder when used as feature matrices in downstream tasks. This might explain why typical SSL methods rely on the output of backbone network (_representations_) rather than the output of projection head (_embeddings_). It is easy to see that downstream performance depends on the alignment of the target matrix \(\mathbf{Y}\) with the left eigenvectors of the feature matrix \(\mathbf{X}\). The target matrix \(\mathbf{Y}\) in downstream classification task is typically a very simple binary matrix, and can be shown to have low incoherence. However, whenever \(\mathbf{X}\) is obtained as the projection head output of a network learned via self-supervised learning method with a spectral embedding type objective, the incoherence of \(\mathbf{X}\) is inherently tied to the incoherence of the kernel matrix. The latter needs to have high incoherence to be recoverable. Consequently, we put forward the following proposition and find its empirical support in Section 5.2. **Proposition 4.3**.: _Projection head outputs (embeddings) yield lower performance on the downstream task due to their high incoherence. The complexity of the projection head correlates with coherence of the backbone output (representations)._ Only a fraction of total entries in \(\mathbf{H}_{t}\) is required for matrix recovery with high probability.We can further narrow down the bound on \(p^{*}\) if we consider structured matrix completion problem in the form of some side information which has a direct connection to self-supervised setup. Let the column/row space of \(\mathbf{M}\) lie in some known \(r^{\prime}\)-dimensional subspace of \(\mathbb{R}^{n}\) spanned by the columns of \(\bar{\mathbf{U}}\), \(n>r^{\prime}\geq r\). 
Then the nuclear norm minimization problem transforms into: \[\min_{\mathbf{X}}\quad\|\mathbf{X}\|_{*}\quad\text{subject to}\quad(\bar{ \mathbf{U}}\mathbf{X}\bar{\mathbf{U}}^{\top})_{ij}=\mathbf{M}_{ij},\quad(i,j) \in\Omega. \tag{14}\] In practice, we use neural network parameterisation to learn the heat kernel map, this choice inadvertently restricts the reconstructed kernel to be aligned with the column space of the network outputs, bringing the inductive bias of the architecture into picture. The optimal sampling complexity bound for (14) extends as \(p^{*}\gtrsim\nicefrac{{\mu_{0}\bar{\mu}_{0}r\bar{r}\log(\bar{\mu}_{0}\bar{r}) \log n}}{{n}^{2}}\), where \(\bar{\mu}_{0}\) and \(\bar{r}\) are the coherence and the rank of \(\bar{\mathbf{U}}\), respectively. Suppose we wanted to recover some binary adjacency matrix \(\mathbf{A}\), such that \(\mathbf{A}_{ij}=1\) if \(i,j\) belong to the same cluster, 0 otherwise. Because \(\mathbf{A}\) can be rearranged to be block-diagonal with \(r\) blocks, its coherence \(\mu_{0}(\mathbf{A})=\nicefrac{{n}}{{r}}{{n}_{min}}\), and exact reconstruction is possible provided \[p^{*}\gtrsim\nicefrac{{\bar{\mu}_{0}\bar{r}\log(\bar{\mu}_{0}\bar{r})\log n}}{ {n}_{min}}, \tag{15}\] where \(n_{min}\) is the minimal cluster size. Heat kernel matrix constructed from such \(\mathbf{A}\) will have its eigenspectrum closely resembling that of \(\mathbf{A}\), albeit smooth, yet still having same pattern in eigengaps. So we may safely adopt this bound for \(\mathbf{H}_{t}\). For balanced class datasets \(n_{min}=\nicefrac{{n}}{{c}}\), and we can immediately see that the number of required observations \(m=p^{*}n^{2}=c\bar{\mu}_{0}\bar{r}\log(\bar{\mu}_{0}\bar{r})\log n\) grows linearly with the number of classes \(c\) in the dataset. For illustrative purposes we plot the theoretical bound \(p^{*}\) on the fraction of the observed entries for a successful matrix completion from (15) in Figure 1 along with the actual fraction \(p\) of observed entries under self-supervised learning augmentation protocol to demonstrate that the latter intercepts the former given enough training epochs. To be specific, we set the size of the training dataset \(N=50\)k (CIFAR-10 size), the cluster size (number of same class examples) \(n_{min}=5000\) (\(c=10\)), the number of views \(a=2\), the number of epochs \(n_{epochs}\) range from \(1\) to \(1000\), and \(r=512\) (embedding size), assume \(\mu=20\) (which seems to be a fair estimate in light of the experiments in Section 5) and \(c_{0}=5\), a constant used to control the probability of exact matrix recovery. Based on this bound, we highlight the following factors that play important role in the success of any SSL method with spectral embedding type objective. The choice of the similarity function affects the incoherence parameter in the bound. The number of samples per class (alternatively the minimal cluster size \(n_{min}\)) should also be high enough for \(p^{*}\) to decrease rapidly. Finally, though potentially in contrast to the empirical observations (higher \(d\) on ImageNet yields higher downstream performance), the rank \(r\), effectively the dimension of embedding \(d\), should not be too large. ## 5 Experiments We first verify that the performance of the proposed representation learning is at least on par with the state-of-the-art methods. We then study the effect of the complexity of the projection head on the incoherence value and its connection to the downstream performance of the backbone against projection head outputs. 
### Comparable performance To demonstrate that our trace maximization objective is on par with existing SSL methods, we test our objective in (4) on a standard ResNet-18 backbone neural network with 3-layer MLP projection head (respective dimensions: 2048-2048-2048) and obtain comparable results on CIFAR-10 and CIFAR-100 to those state-of-art methods. For CIFAR-10 benchmark, a fine-tuned VICReg yields downstream performance of 91.45% for top-1 and 99.8% for top-5 accuracy which we match by our heat kernel embedding formulation that gives 90.94% and 99.76%, respectively, without finetuning. ### Incoherence effect on downstream performance We test whether incoherence could explain the performance disparity between the outputs of the backbone, termed _representations_, and the projection head, known as _embeddings_. We train a ResNet-18 backbone with various configurations of projection heads against our SSL objective on the CIFAR-10 dataset. The performance of the representations and embeddings of the pre-trained model is evaluated in a downstream classification task, while incoherence \(\mu\) for both candidates is estimated on the training set. Experimental details are included in the Appendix. Figure 2 (left) indicates incoherence is higher for more shallow projection heads and decreases as the number of layers in the head increase, a result we anticipated. A similar picture holds for VICReg method, albeit incoherence falls as number of projection head parameters. We estimate Figure 2: (left) For RQ loss, incoherence of the backbone outputs (_representations_) decreases with the increasing number of layers in the projection head, while incoherence of projection head outputs (_embeddings_) depends only on the affinity function adopted by the SSL method. (right) Each pair of same shape marks indicate one model. Embeddings (blue) have lower coherence (larger \(\mu\)) and yield lower downstream performance than representations (green). Corresponding projection head configurations: * \([2048\)-\(512\)-\(512]\), * \([2048\)-\(2048\)-\(512]\), * \([1024\)-\(1024\)-\(1024\)-\(1024\)], * \([2048\)-\(2048\)-\(2048]\), * \([2048\)-\(512\)-\(512\)-\(512\)], * \([1024\)-\(1024\)-\(1024\)-\(1024\)], * \([2048\)-\(512\)-\(512\)-\(512\)-\(512\)]. incoherence on the augmented training set, i.e. we compute a rank-\(d\) SVD of the representations matrix \(\mathbf{Z}\in\mathbb{R}^{50000\times d}\), with the backbone output dimension \(d=512\), and compute the standard incoherence \(\mu\) of \(\mathbf{U}\) as given by (13): \[\mu=\frac{n}{d}\max_{1\leq i\leq n}\|\mathbf{U}^{\top}\mathbf{e}_{i}\|_{2}^{2}.\] Individual models are specified by a unique shape in Figure 2 (right). The green colour signifies the position of each model's representations within the Accuracy-Incoherence plane, while the blue colour reflects the corresponding embeddings coordinates. The resulting scatter plot supports our hypothesis in Proposition 4.3 that incoherence indeed plays a crucial role in explaining the use of the backbone outputs. For successful matrix completion, high incoherence of the partially observed affinity matrix is essential. However, for the downstream performance of the representations, the opposite is preferred. The projection head functions as a disentangling buffer, enabling the representations to maintain low incoherence. Conversely, the embeddings inherit the coherence of the underlying affinity matrix. 
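The incoherence estimate used here, \(\mu=\frac{n}{d}\max_{i}\|\mathbf{U}^{\top}\mathbf{e}_{i}\|_{2}^{2}\), can be computed directly from any representation matrix via a thin SVD. The snippet below contrasts a matrix whose information is spread evenly across rows (small \(\mu\)) with one dominated by a single row (\(\mu\) approaching the maximal value \(n/d\)); both matrices are synthetic stand-ins, not outputs of the trained models discussed above.

```python
import numpy as np

def standard_incoherence(Z):
    """mu = (n/d) * max_i ||U^T e_i||_2^2 for the left singular subspace of Z, cf. Eq. (13)."""
    n, d = Z.shape
    U, _, _ = np.linalg.svd(Z, full_matrices=False)
    leverage = (U ** 2).sum(axis=1)      # leverage scores ||U^T e_i||_2^2
    return (n / d) * leverage.max()

rng = np.random.default_rng(4)
n, d = 2000, 64

# "Spread-out" features: information shared evenly across rows -> small mu
Z_spread = rng.standard_normal((n, d))

# "Spiky" features: one row dominates a direction -> mu approaches the maximum n/d
Z_spiky = Z_spread.copy()
Z_spiky[0] = 100.0 * rng.standard_normal(d)

print(f"mu (spread-out): {standard_incoherence(Z_spread):.1f}")
print(f"mu (spiky):      {standard_incoherence(Z_spiky):.1f}")
print(f"n/d upper bound: {n / d:.1f}")
```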
A connection between incoherence and downstream performance, similar to the reported above, can be seen on the models trained with VICReg objective. Again, we conduct SSL training with 1000 epochs and standard hyperparameters - we train a collection of ResNet-18 backbone models with varying projection head configurations ( * \(512\)-\(512\), \(\blacktriangle\)\(512\)-\(512\)-\(512\), \(\circ\)\(512\)-\(512\)-\(512\), \(\square\)\(1024\)-\(512\), \(\blacksquare\)\(2048\)-\(512\), \(\blacktriangle\)\(1024\)-\(1024\)-\(1024\)) on CIFAR-10. Next, we train a standard linear probe on top of either a trained backbone or backbone + projection head on a downstream classification task for 100 epochs with standard hyperparameters. The downstream performance of the trained models is measured as top-1 accuracy on the validation set with the standard set of augmentations. Figure 3_left_ panel depicts the inverse dependence of the representation incoherence on the complexity of a projection head reflected by its number of parameters. The _right_ panel is a scatter plot of the measured downstream performance versus the estimated incoherence of the representations (green) and embeddings (blue). This repeatedly provides empirical evidence that accuracy grows as incoherence decreases and representations characterised by lower incoherence compared to embeddings. Intuitively lower incoherence suggests less entanglement in the features making it easier to solve the downstream task. ## 6 Related Work Self-Supervised Representation LearningRecent success of self-supervised methods [8, 13, 2, 32, 33], especially in the domain of computer vision received great amount of attention since the learned representations ended up respecting the semantics of the data. There has been a great interest in trying to understand the inner workings of the seemingly heuristic objectives since. While there are many lenses one may take up to study the problem [18, 31, 19, 29, 28, 16], a particularly related to this work concurrent body of literature has adopted the view of the kernel or laplacian-based spectral representation learning [11, 1], which we also share in this work. We highlight our main difference to the results provided in these very recent papers. [1] does a brilliant Figure 3: (left) For VICReg loss, incoherence of representations depends on the number of parameters in the projection head. (right) Embeddings (blue) have higher incoherence (larger \(\mu\)) and yield lower downstream performance compared to representations (green). job connecting and characterizing modern SSL methods into classical existing counterparts. However, it does not answer an important question whether an incomplete a priori knowledge about the data manifold stemming from augmentations can provide a good approximation to essentially nonlinear dimensionality reduction methods such as LLE [25], MDS [20], and kernel PCA [26]. We not only show SSL methods to have an objective function similar to the objectives of classical spectral decomposition methods, e.g. LaplacianEigenmaps [3], but also try to address the problem of incomplete and noisy measurements that we get as an inductive bias during the augmentation process. We hope that this perspective via the low-matrix completion problem will yield further theoretical results on the success of self-supervised learning and practical benefits when applying these methods in the wild, e.g. in domains such as medical imaging [14] and hard sciences [21]. 
## 7 Conclusion and Future Work In this work, we make an attempt to bridge modern self-supervised methods with classical Laplacian-based dimensionality reduction methods and low-rank matrix completion, in the hope of providing theoretical insights into the recent successes of SSL methods. We show that these methods not only perform Laplacian-based nonlinear dimensionality reduction but are also able to approximate and recover a truncated version of the underlying Laplace operator given only noisy and incomplete information from the augmentation protocol, by drawing on the extensive literature and results on low-rank matrix completion. However, when working with datasets with a potentially large number of classes, one should consider whether the sample size is large enough that the minimal cluster size allows the full data matrix to be treated as low-rank; otherwise, SSL methods may fail to converge. We also identify a direct influence of the inductive bias in the parameterization of the learned map on the column space of the recovered matrix. We further hypothesize that the disparity in downstream performance between backbone and projection head outputs can be explained by the high incoherence of the latter, which is tied to the incoherence of the kernel being recovered during training; the kernel must have high incoherence to be recoverable. One possible avenue for future work stems from the notion of incoherence: exploring the incoherence properties of the different similarity or weighting functions one may use to instantiate the adjacency matrix. We hope that this work paves the way for a deeper study of the connection between self-supervised methods and classical problems such as matrix completion, yielding better practical and theoretical understanding of the various applications of SSL in different domains, not only computer vision.
Self-supervised methods have attracted attention as a seemingly heuristic approach to learning representations that respect the semantics of the data without any explicit supervision in the form of labels. A growing body of literature is being published in an attempt to build a coherent, theoretically grounded understanding of the workings of the many losses used in modern self-supervised representation learning methods. In this paper, we provide an understanding from the perspective of a Laplace operator and connect the inductive bias stemming from the augmentation process to a low-rank matrix completion problem. To this end, we leverage results from low-rank matrix completion to provide a theoretical analysis of the convergence of modern SSL methods and of a key property that affects their downstream performance.
2302.14343
New perspectives on spectroscopic factor quenching from reactions
The evolution of single-particle strengths as the neutron-to-proton asymmetry changes informs us of the importance of short- and long-range correlations in nuclei and has therefore been extensively studied for the last two decades. Surprisingly, the strong asymmetry dependence of these strengths and their extreme values for highly-asymmetric nuclei inferred from knockout reaction measurements on a target nucleus are not consistent with what is extracted from electron-induced, transfer, and quasi-free reaction data, constituting a two-decade old puzzle. This work presents the first consistent analysis of one-nucleon transfer and one-nucleon knockout data, in which theoretical uncertainties associated with the nucleon-nucleus effective interactions considered in the reaction models are quantified using a Bayesian analysis. Our results demonstrate that, taking into account these uncertainties, the spectroscopic strengths of loosely-bound nucleons extracted from both probes agree with each other and, although there are still discrepancies for deeply-bound nucleons, the slope of the asymmetry dependence of the single-particle strengths inferred from transfer and knockout reactions are consistent within $1\sigma$. Both probes are consistent with a small asymmetry dependence of these strengths. The uncertainties obtained in this work represent a lower bound and are already significantly larger than the original estimates.
Chloë Hebborn, Filomena M. Nunes, Amy E. Lovell
2023-02-28T06:31:18
http://arxiv.org/abs/2302.14343v2
# New perspectives on spectroscopic factor quenching from reactions ###### Abstract The evolution of single-particle strengths as the neutron-to-proton asymmetry changes informs us of the importance of short- and long-range correlations in nuclei and has therefore been extensively studied for the last two decades. Surprisingly, the strong asymmetry dependence of these strengths inferred from knockout reaction measurements is not consistent with what is extracted from electron-induced, transfer, and quasi-free reaction data, constituting a 15-year old puzzle. This work presents the first consistent analysis of transfer and knockout data, in which theoretical uncertainties associated with the nucleon-nucleus effective interactions used in the reaction models are quantified using a Bayesian analysis. Our results demonstrate that, taking into account these uncertainties, the asymmetry dependence of the single-particle strengths inferred from transfer and knockout reactions end up being consistent. Both probes show that these strengths decrease slowly with the asymmetry of the nucleus. The uncertainties obtained in this work represent a lower bound and are already significantly larger than the original estimates. + Footnote †: preprint: LLNL-JRNL-845215, LA-UR-23-21722 _Introduction:_ Systematic studies of nuclei along isotopic chains have revealed unexpected trends that challenge our understanding of nuclear structure [1; 2; 3]. While energy spectra hold an important component of this complex many-body puzzle, current reaction studies extract information on the composition of the nuclear wavefunction itself, and in particular the distribution of strength across various nuclear orbitals. This is expressed in terms of a spectroscopic factor (SF), proportional to the probability that the system will be found in a particular configuration. It is the evolution of this shell structure away from stability that provides unique insights on the nuclear force [2]. It is now well accepted that our best theory predictions for SFs do not match those extracted from experiment [4]. This is understood as coming from missing physics in the model; however, for two decades, nuclear physicists have grappled with the asymmetry dependence of this mismatch, quantified in the ratio \(\mathcal{R}\) between the SF extracted from experiment and that predicted by the nuclear shell model. The now famous asymmetry plot showing this ratio \(\mathcal{R}\) as a function of the difference between neutron and proton separations energies (\(\Delta S\)) has caused great controversy [4]. The asymmetry dependence of \(\mathcal{R}\) found in the analysis of knockout reactions [5; 6; 7] is not consistent with that found using other probes, namely for electron-induced [8], quasi-free [9] and transfer reactions [10; 11; 12] (see the recent review Ref. [4] for a full status). For the last two decades, many studies have attempted to understand the source of this inconsistency. Our work is one more study along these lines, although it brings a novel perspective. Evidently, SFs are not observables and are model dependent [13; 14]. The extraction of \(\mathcal{R}\) from experimental data require both a reaction model and a structure model. The analysis of knockout reactions make use of the eikonal reaction theory as well as large-scale shell model calculations [5; 15]. 
To understand the asymmetry dependence of \(\mathcal{R}\) associated with knockout observables, the validity of the shell-model SFs and the eikonal model have been thoroughly analyzed, e.g. Refs. [16; 17; 18; 19; 20; 21] discuss the importance of short- and long-range correlations for structure predictions and Refs. [22; 23; 24; 25; 26; 27] address the validity of the eikonal approximation. Similarly many studies have been done testing the validity of the theories used in the transfer analyses [28; 29; 30; 31], including benchmarks of the reaction models. Given that SFs are not observables, extra care needs to be taken to extract this information. When using different probes, it is essential to make the same assumptions so the conclusions are reliable. Equally necessary is a good understanding of the theoretical uncertainties without which any disagreement between results is rendered meaningless. One of the earlier studies did attempt to quantify the uncertainties associated with the reaction theory used in the transfer [28] however those estimates were obtained without a rigorous statistical analysis. This work offers the first consistent analysis of knockout and transfer reaction data using Bayesian statistics to quantify the theoretical uncertainties associated with the effective interactions in the models. Although in the majority of cases, the \(\mathcal{R}(\Delta S)\) plots contain only statistical errors from the experimental data, we understand there are significant uncertainties attributed to the reaction models themselves. Most noteworthy are the uncertainties associated with the phenomenological fits of the effective interactions used, the so-called _optical potentials_[32]. Bayesian analyses of elastic scattering has led to the understanding that uncertainties associated with the optical potentials are larger than previously estimated [33; 34; 35; 36]. In this work, we study the single-nucleon transfer and the single-nucleon knockout on three different Ar isotopes, for which there are data. For transfer, we reanalyze \({}^{34,36,46}\)Ar\((p,d)^{33,35,45}\)Ar(g.s.) at 33\(A\) MeV [37]. For knockout, we reanalyze \({}^{32}\)Ar+\({}^{9}\)Be\(\rightarrow\)\({}^{31}\)Ar(g.s.)+\(X\) at 65.1\(A\) MeV [38] and \({}^{34,46}\)Ar+\({}^{9}\)Be\(\rightarrow\)\({}^{33,45}\)Ar(g.s.)+\(X\) at 70\(A\) MeV [39; 40]. This set of reactions spans a wide range of \(\Delta S\) and is sufficient to quantify the asymmetry dependence of \(\mathcal{R}\). _Methodology:_ For the transfer reactions, we use the Adiabatic Wave Approximation (ADWA) [41] taking as input the nucleon-\({}^{A}\)Ar and nucleon-\({}^{A-1}\)Ar interactions at the beam energy and at half the deuteron energy, respectively. The nucleon-Ar optical potential parameters are directly sampled from the recent global parameterization KDUQ [42] from which we compute the credible intervals for the transfer angular distributions using the code nlat[43]. For knockout reactions, we use the eikonal method [15; 44; 45] and quantify the uncertainties arising from the \(n\)-\({}^{9}\)Be target interaction only. Because KDUQ is not appropriate for light targets, we follow the work done in Ref. [36], and instead we generate mock elastic angular distributions with a realistic potential [46] and assign an error of 10%. Parameter posterior distributions are obtained from the Bayesian analysis of the \(n\)-\({}^{9}\)Be target elastic scattering and are propagated to obtain credible intervals for the knockout momentum distributions. 
For the core-\({}^{9}\)Be interaction, we use the optical limit with the parameters of Ref. [47] and the density of \({}^{9}\)Be approximated by two-parameter Fermi distributions [48]. We do not include the uncertainties associated with the core-\({}^{9}\)Be interaction as there are no elastic-scattering data on these systems or realistic potential to generate mock data and it has been shown that the uncertainties arising from this interaction are less significant [36]. In both the transfer and knockout calculations, we do not include the spin-orbit force for convenience, since we expect it to have a negligible effect. Critical to both reaction calculations are the structure input: the exact same description is used for the single-particle structure of the isotopes involved. We use a Wood-Saxon potential with a radius of \(r_{R}=1.25\,(A-1)^{1/3}\) fm and diffuseness of \(a_{R}=0.65\) fm and we fit its depth to the neutron separation energy. Details concerning the relevant single-particle states used in the transfer and knockout calculations are given in Table 1. _Results:_ The transfer angular distributions are shown in Fig. 1: the normalized 68% (dark shaded blue) and 95% (light shaded blue) credible intervals are compared to the data reported in Ref. [37]. The shape of the predicted transfer angular distributions are in good agreement with experiment, corroborating the assumptions made in ADWA. The results for parallel-momentum distributions following knockout are shown in Fig. 2: the normalized 68% (dark shaded salmon) and 95% (light shaded salmon) credible intervals are compared to the data reported in Refs. [38; 39; 40]. In general, the experimental distributions are well reproduced by the eikonal model, except for the \({}^{46}\)Ar case, which exhibits a highly-asymmetric distribution. As discussed in Ref. [40], this low-momentum tail is likely due to additional, dissipative mechanisms acting in the final state of the reaction products, which are not included in the eikonal approximation. We will discuss how this impacts the extracted SF later. The parallel-momentum distributions are numerically integrated to obtain the total knockout cross sections \begin{table} \begin{tabular}{c c c c c} \hline \hline & \(S_{n}\) [MeV] & \(J^{\pi}\) & \((A-1)\) & \(J^{\pi}\) & \(nlj\) \\ \({}^{32}\)Ar & 21.60 & 5/2\({}^{+}\) & 0\({}^{+}\) & 0d5/2 \\ \({}^{34}\)Ar & 17.07 & 0\({}^{+}\) & 1/2\({}^{+}\) & 1s1/2 \\ \({}^{36}\)Ar & 15.26 & 0\({}^{+}\) & 3/2\({}^{+}\) & 0d3/2 \\ \({}^{46}\)Ar & 8.07 & 0\({}^{+}\) & 7/2\({}^{-}\) & 0f7/2 \\ \hline \hline \end{tabular} \end{table} Table 1: Properties of the single-particle wavefunction for \({}^{32,34,36,46}\)Ar: the neutron separation energy (\(S_{n}\)), the spin and parity of the nucleus (\(J^{\pi}\)), of the \(A-1\) core (\((A-1)\)\(J^{\pi}\)) and the number of nodes \(n\), the partial wave \(l\) and the spin \(j\) of the core-neutron single-particle wavefunction. shown in Table 2: the experimental total cross sections (\(\sigma_{exp}\), 2\({}^{\rm nd}\) column) are listed along with the theoretical ones, namely the diffractive-breakup contributions (\(\sigma_{dif}\), 4\({}^{\rm th}\) column), the stripping contributions (\(\sigma_{str}\), 5\({}^{\rm th}\) column) and the total predicted single-particle cross sections (\(\sigma_{sp}=\sigma_{dif}+\sigma_{str}\), 3\({}^{\rm rd}\) column). It is clear that the errors on the theoretical total knockout cross section are also mostly defined by the uncertainties in the stripping component. 
This feature is expected for well-bound nuclei (note that all cases considered in this work have neutron separation energies \(S_{n}>8\) MeV) which makes the reaction process more sensitive to the details of the optical potentials [36]. We now consider the extraction of the SFs. For transfer, we follow a similar procedure as in Ref. [28]: we extract the SF by adjusting the angular distributions to the data points around the peak. The corresponding SFs and their uncertainties are displayed in the 2\({}^{\rm nd}\) column of Table 3, along with the 1\(\sigma\) (2\(\sigma\)) errors. Our SF\({}_{trans}\) are consistent with those extracted in Ref. [28] (3\({}^{\rm rd}\) column) although they exhibit larger errors. The relative uncertainty in SF\({}_{trans}\) increases with the binding energy of the projectile, similarly to what was observed in knockout observables [36]. For knockout, we sample both the total cross section posterior distribution predicted by theory and the corresponding experimental total cross sections, assuming a normal distribution. We then extract the distribution of the SFs by taking the ratio of the experimental samples with the theoretical ones. The corresponding SF\({}_{ko}\), shown in the 4\({}^{\rm th}\) column of Table 3, are consistent with the ones extracted in the original analyses (5\({}^{\rm th}\) column) [38; 39; 40]. However, the original uncertainties for SF\({}_{ko}\) are much smaller than those we obtained here, just as was found in the transfer case. Note that the SFs extracted from knockout data on \({}^{46}\)Ar (\({}^{34}\)Ar) are consistent with the ones extracted from the transfer data within 1\(\sigma\) (2\(\sigma\)). To obtain the ratio \(\cal R\), we use previously published large-scale shell model calculations [49]: SF\({}_{SM}=4.39\) for \({}^{32}\)Ar [38], and SF\({}_{SM}=1.39,2.22,5.51\) for \({}^{34,36,46}\)Ar [37]. No uncertainties have been estimated for these predictions. As noted in earlier analyses [5; 6; 7], the shell-model SFs are significantly larger than the SFs extracted from the knockout of deeply-bound nuclei, e.g. \({}^{32,34}\)Ar, but are in agreement with those extracted from transfer. Fig. 3 contains the asymmetry dependence of the SF ratios \(\cal R\) using transfer (blue bars) and knockout (red bars) data. The thick bars represented 1\(\sigma\) and the thin bars represent 2\(\sigma\), both obtained from the credible intervals on the SFs reported in Table 3. We must point out that the results for \(\cal R\) obtained for the three discrete values of \(\Delta S\) are not necessarily consistent with a linear dependence, either for transfer or for knockout. Nevertheless, for the sake of comparison with previous studies, we fit \(\cal R\) to \({\cal R}(\Delta S)=a\Delta S+b\), for each reaction case, transfer or knockout, taking into account the 1\(\sigma\) uncertainty (blue and salmon shaded bands). The slope obtained for transfer (\(a=-0.0036\pm 0.0090\)) is consistent with the slope obtained for knockout (\(a=-0.0178\pm 0.0088\)), within uncertainties, even though there are significant differences in the values of the intercept (\({\cal R}(0)=0.79\pm 0.09\) for transfer and \({\cal R}(0)=0.55\pm 0.14\) for knockout). Our extracted slope for knockout is consistent with that extracted previously (\(a=-0.016\)[7]) however now we include the 1\(\sigma\) uncertainty coming from both the optical potentials in the theoretical analysis and the experimental errors. 
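For illustration, the two numerical steps just described, forming the SF distribution as a ratio of sampled experimental and theoretical cross sections, and fitting \(\mathcal{R}(\Delta S)=a\Delta S+b\) with 1\(\sigma\) weights, can be sketched as follows. This is a schematic example with placeholder numbers, not the values of Tables 2 and 3 nor the code used in our analysis.

```python
# Illustrative sketch (not the authors' code) of (i) the Monte Carlo ratio used to
# build an SF distribution and (ii) a weighted linear fit R(dS) = a*dS + b.
# All numbers are placeholders.
import numpy as np

rng = np.random.default_rng(1)

# (i) SF distribution from cross-section samples --------------------------------
sigma_exp = rng.normal(4.7, 0.85, size=50_000)   # placeholder experimental sigma [mb]
sigma_th = rng.normal(12.8, 1.5, size=50_000)    # stand-in for the theory posterior
sf_samples = sigma_exp / sigma_th
sf_median = np.median(sf_samples)
sf_lo, sf_hi = np.percentile(sf_samples, [16, 84])
print(f"SF = {sf_median:.2f} (+{sf_hi - sf_median:.2f} / -{sf_median - sf_lo:.2f})")

# (ii) weighted linear fit of R versus Delta S -----------------------------------
dS = np.array([-10.0, 0.0, 10.0])                # placeholder Delta S values [MeV]
R = np.array([0.75, 0.60, 0.45])                 # placeholder ratios
R_err = np.array([0.10, 0.08, 0.12])             # placeholder 1-sigma errors

# Weighted least squares for R = a*dS + b, with the parameter covariance matrix.
A = np.vstack([dS, np.ones_like(dS)]).T
W = np.diag(1.0 / R_err**2)
cov = np.linalg.inv(A.T @ W @ A)
a, b = cov @ (A.T @ W @ R)
a_err, b_err = np.sqrt(np.diag(cov))
print(f"slope a = {a:.4f} +/- {a_err:.4f}, intercept b = {b:.2f} +/- {b_err:.2f}")
```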
_Discussion:_ An optical model dependence on the overall normalization of the extracted SFs might be expected, but we have verified that the slope obtained is not dependent on the choice of the optical potential. When repeating the knockout analysis using KDUQ, the same parameterization used in the transfer analysis, we obtained a slope of \(-0.0134\pm 0.0122\) (shown in the Supplemental Material). \begin{table} \begin{tabular}{c|c|c|c|c} & \(\sigma_{exp}\) [mb] & \(\sigma_{sp}\) [mb] & \(\sigma_{dif}\) [mb] & \(\sigma_{str}\) [mb] \\ \hline \({}^{32}\)Ar & \(10.4^{+1.3}_{-1.3}\) & \(9.6^{+1.3(5.6)}_{-1.4(5.6)}\) & \(2.9^{+0.5(1.6)}_{-0.1(4.1)}\) & \(6.8^{+1.3(8.1)}_{-4.5(2.5)}\) \\ \({}^{34}\)Ar & \(4.7^{+0.9}_{-0.8}\) & \(12.8^{+2.4(7.4)}_{-0.5(6.6)}\) & \(2.4^{+1.4(7.4)}_{-1.3(6.6)}\) & \(6.8^{+1.6(7.6)}_{-4.5(5.6)}\) \\ \({}^{46}\)Ar & \(61^{+9}_{-9}\) & \(13.4^{+1.8(7.9)}_{-4.2(6.2)}\) & \(4.2^{+0.5(2.1)}_{-1.3(1.6)}\) & \(9.2^{+1.7(8.3)}_{-4.4(6.1)}\) \\ \end{tabular} \end{table} Table 2: The knockout experimental (\(\sigma_{exp}\)) and theoretical single-particle (\(\sigma_{sp}\)) cross sections along with their diffractive-breakup (\(\sigma_{dif}\)) and stripping (\(\sigma_{str}\)) contributions. The numbers are organized as \(X^{+Y(Y^{\prime})}_{-Z(Z^{\prime})}\) where \(X\) denotes the average value, and \(Y\) and \(Z\) (\(Y^{\prime}\) and \(Z^{\prime}\)) correspond respectively to the 1\(\sigma\) (2\(\sigma\)) uncertainties obtained by propagating the uncertainties due to the nucleon-\({}^{9}\)Be target interaction. Figure 2: Parallel-momentum distributions of the remaining (a) \({}^{31}\)Ar, (b) \({}^{33}\)Ar and (c) \({}^{45}\)Ar after the one-neutron knockout of \({}^{32}\)Ar, \({}^{34}\)Ar and \({}^{46}\)Ar off a \({}^{9}\)Be target at 65.1\(A\) MeV, 70\(A\) MeV and 70\(A\) MeV, respectively. The theoretical distributions were folded with the experimental resolution, and their center has been adjusted so that the average distribution reproduces the high-momentum tail of the data. The data and the experimental resolution profile were taken from Refs. [38; 39; 40]. In addition to transfer and heavy-ion knockout, \((p,pn)\) and \((p,2p)\) reactions have also been studied in this context [9]. Although a similar uncertainty analysis for those reaction channels needs to be done to make a meaningful comparison, our results do not seem inconsistent with Ref. [9]. As pointed out in Ref. [9], the asymmetry dependence extracted can change slightly when using shell-model predictions with different residual interactions and/or model spaces, or even when using different model assumptions for the geometry of the single-particle wave function. Yet, there will be no significant change in the relative uncertainties due to the optical potentials. To remove any possible dependence on the shell model and to facilitate a future comparison with results obtained from \((e,e^{\prime}p)\) measurements, we also provide the asymmetry plot when \(\mathcal{R}\) is extracted using the Independent Particle Model (IPM) occupation numbers in the Supplemental Material (the ratio \(\mathcal{R}\) deduced from the \((e,e^{\prime}p)\) data [8] relies on the IPM). The results for \(\mathcal{R}\) using the IPM do not seem inconsistent with those of Ref. [8]; however, a study of the uncertainty in that reaction probe remains to be completed. One must keep in mind that the uncertainties presented here are only a lower bound. 
First, we did not include the parametric uncertainties associated with the description of the single-particle state [35] even though we did use the exact same model in both analyses. For this reason, we expect these uncertainties to have no impact on our conclusions. Moreover, we did not quantify the uncertainties associated with the core-target potentials, used to compute knockout cross sections. More intricate is the quantification of model uncertainties. The reaction models used to interpret the measurements have approximations and therefore a complete analysis should include the errors associated with them. Although the inclusion of model uncertainties is beyond the scope of this work, there are plans to tackle this problem in the near future and it is important to identify the theory approximation in the models that are likely to be more relevant. Next, we briefly discuss this aspect. In transfer, ADWA has been benchmarked against full Faddeev calculations [28]: at \(E_{p}=33\) MeV the differences are only significant for the \({}^{36}\)Ar case but a more rigorous quantification is desirable. In knockout, the measured momentum distributions exhibit an asymmetry for \({}^{46}\)Ar that the model does not predict. The interpretation of the knockout data relies on the eikonal model, which contains two approximations: the adiabatic approximation and a core-spectator approximation. The adiabatic approximation violates energy conservation and is the cause for the symmetric parallel-momentum distributions (as seen in Fig. 2(c)). Improved models which do conserve energy are able to describe the distributions (e.g., Ref. [22]). It has been shown that the integrated cross sections produced in such models agree with those from the eikonal model, proving that this aspect does not have a strong impact on the extracted SFs [24]. The second approximation, the core-spectator approximation, assumes that the core degrees of freedom are "frozen" during the collision process. However, dissipative mechanisms associated with the removal of the nucleon have been shown to decrease the predicted cross sections [23], an effect that is more important the more bound the system is. We expect that the extracted SFs obtained from knockout data when including these dissipative effects would be larger for nuclei with large \(\Delta S\), which could explain part of the apparent discrepancies between transfer and knockout predictions in Fig. 3. Unfortunately, accounting for these dissipative effects is not trivial and requires updated reaction frameworks. Initial studies Figure 3: Ratio of the SF extracted from data and the shell-model SF (including the center-of-mass correction) as a function of the asymmetry of the nucleus \(\Delta S=S_{n}-S_{p}\). The blue error bars correspond to the SFs extracted from transfer data [37] and the red ones to the SFs extracted from knockout data [38; 39; 40]. Each error bars show the \(1\sigma\) and \(2\sigma\) uncertainties. The shaded area correspond to the \(1\sigma\) uncertainties of a linear fit of the transfer (blue) and knockout (red) error bars. \begin{table} \begin{tabular}{c||c c|c c} & SF\({}_{trans}\) & Ref. [28] & SF\({}_{ko}\) & Ref. 
\\ \hline \({}^{32}\)Ar & & \(1.2^{+0.3(0.8)}_{-0.5(0.7)}\) & \(1.1^{+0.1}_{-0.1}\)[38] \\ \({}^{34}\)Ar & \(0.91^{+0.16(0.42)}_{-0.25(0.47)}\) & \(0.92^{+0.12}_{-0.12}\) & \(0.39^{+0.08(0.23)}_{-0.16(0.25)}\) & \(0.36^{+0.07}_{-0.07}\)[39] \\ \({}^{36}\)Ar & \(2.1^{+0.2(0.8)}_{-0.4(1.3)}\) & \(2.21^{+0.49}_{-0.49}\) & & \\ \({}^{46}\)Ar & \(4.7^{+0.5(2.4)}_{-1.2(2.5)}\) & \(4.93^{+0.69}_{-0.69}\) & \(4.9^{+1.3(2.8)}_{-1.5(2.8)}\) & \(4.9^{+0.7}_{-0.7}\)[40] \\ \end{tabular} \end{table} Table 3: SFs extracted from transfer [37] and knockout data [38; 39; 40] compared with previous analyses (\(3^{rd}\) and \(5^{th}\) columns). The numbers are organized as \(X^{+Y(Y^{\prime})}_{-Z(Z^{\prime})}\) where \(X\) denotes the average value, \(Y\) and \(Z\) (\(Y^{\prime}\) and \(Z^{\prime}\)) correspond respectively to the \(1\sigma\) (\(2\sigma\)) uncertainties obtained by propagating the uncertainties due to the nucleon-nucleus interactions. in the reaction theory community are moving in this direction [50; 51] but more work is needed, including the coupling of the new frameworks with a Bayesian analysis. _Conclusions:_ In summary, we reanalyse a set of transfer and knockout data using a Bayesian framework to quantify the theoretical uncertainties due to the optical potentials, known to be one of the leading sources of uncertainties in reaction models. In the past, optical potentials uncertainties were estimated naively by comparing the results with two arbitrary parameterizations [28]. This work demonstrates that those original estimates produce uncertainties that are significantly underestimated. Most importantly, our results show that, when the optical potential uncertainties are included in a robust statistical approach, transfer and knockout reactions lead to a consistent picture, producing a similar asymmetry dependence. We thus conclude that the single-particle strength decreases slowly with the asymmetry of the nucleus. Even though theoretical uncertainties need to be quantified in the analysis of \((p,2p)\), \((p,pn)\) and \((e,e^{\prime}p)\) data to make a meaningful comparison between these probes and the present work, the slopes that we extract here do not seem inconsistent with previous analyses of those data [8; 9]. Finally, it is also clear that to infer accurate and precise information from reaction data, optical potentials need to be better constrained. Of particular relevance is their imaginary strength simulating the loss of flux from the elastic channel due to open reaction channels. To improve these phenomenological interactions, one can enforce the dispersion conditions, relating the real and imaginary parts of the interaction in the appropriate manner [52]. These dispersive potentials provide a consistent framework to describe the structure and reaction properties. The extension of the Bayesian framework in this direction is being pursued. _Acknowledgments._ C. H. would like to thank L. Moschini for sharing her code computing the nuclear eikonal phase within the optical limit approximation and A. Gade for sharing the knockout data on \({}^{34}\)Ar. Valuable discussions with T. R. Whitehead are acknowledged. C. H. would like to thank K. Kravvaris, G. Potel and C. D. Pruitt for interesting discussions. C. H. acknowledges the support of the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under the FRIB Theory Alliance award no. DE-SC0013617 and under Work Proposal no. SCW0498. A. E. L. 
acknowledges the support of the Laboratory Directed Research and Development program of Los Alamos National Laboratory. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344 and by Los Alamos National Laboratory under Contract 89233218CNA000001. F. M. N. acknowledges the support of the U.S. Department of Energy grant DE-SC0021422. This work relied on iCER and the High Performance Computing Center at Michigan State University for computational resources.
The evolution of single-particle strengths as the neutron-to-proton asymmetry changes informs us of the importance of short- and long-range correlations in nuclei, and it has therefore been studied extensively over the last two decades. Surprisingly, the strong asymmetry dependence of these strengths inferred from nucleon knockout measurements is not consistent with the values extracted from electron-induced, transfer, and quasi-free reaction data, a puzzle that has persisted for more than fifteen years. This work presents the first consistent analysis of transfer and knockout data, in which the theoretical uncertainties associated with the nucleon-nucleus effective interactions used in the reaction models are quantified using a Bayesian analysis. Our results show that, once these uncertainties are taken into account, the single-particle strengths extracted from the two probes become consistent.
2309.16626
Chemical evolution of local post-starburst galaxies: Implications for the mass-metallicity relation
We use the stellar fossil record to constrain the stellar metallicity evolution and star-formation histories of the post-starburst (PSB) regions within 45 local post-starburst galaxies from the MaNGA survey. The direct measurement of the regions' stellar metallicity evolution is achieved by a new two-step metallicity model that allows for stellar metallicity to change at the peak of the starburst. We also employ a Gaussian process noise model that accounts for correlated errors introduced by the observational data reduction or inaccuracies in the models. We find that a majority of PSB regions (69% at $>1\sigma$ significance) increased in stellar metallicity during the recent starburst, with an average increase of 0.8 dex and a standard deviation of 0.4 dex. A much smaller fraction of PSBs are found to have remained constant (22%) or declined in metallicity (9%, average decrease 0.4 dex, standard deviation 0.3 dex). The pre-burst metallicities of the PSB galaxies are in good agreement with the mass-metallicity relation of local star-forming galaxies. These results are consistent with hydrodynamic simulations, which suggest that mergers between gas-rich galaxies are the primary formation mechanism of local PSBs, and rapid metal recycling during the starburst outweighs the impact of dilution by any gas inflows. The final mass-weighted metallicities of the PSB galaxies are consistent with the mass-metallicity relation of local passive galaxies. Our results suggest that rapid quenching following a merger-driven starburst is entirely consistent with the observed gap between the stellar mass-metallicity relations of local star-forming and passive galaxies.
Ho-Hin Leung, Vivienne Wild, Michail Papathomas, Adam Carnall, Yirui Zheng, Nicholas Boardman, Cara Wang, Peter H. Johansson
2023-09-28T17:30:59
http://arxiv.org/abs/2309.16626v3
# Chemical evolution of local post-starburst galaxies: Implications for the mass-metallicity relation ###### Abstract We use the stellar fossil record to constrain the stellar metallicity evolution and star-formation histories of the post-starburst regions within 45 local post-starburst galaxies from the MaNGA survey. The direct measurement of the regions' stellar metallicity evolution is achieved by a new two-step metallicity model that allows for stellar metallicity to change at the peak of the starburst. We also employ a Gaussian process noise model that accounts for correlated errors introduced by the observational data reduction or inaccuracies in the models. We find that a majority of post-starburst regions (69% at \(>1\sigma\) significance) increased in stellar metallicity during the recent starburst, with an average increase of 0.8 dex and a standard deviation of 0.4 dex. A much smaller fraction of PSBs are found to have remained constant (22%) or declined in metallicity (9%, average decrease 0.4 dex, standard deviation 0.3 dex). The pre-burst metallicities of the post-starburst galaxies are in good agreement with the mass-metallicity relation of local star-forming galaxies. These results are consistent with hydrodynamic simulations, which suggest that mergers between gas-rich galaxies are the primary formation mechanism of local PSBs, and rapid metal recycling during the starburst outweighs the impact of dilution by any gas inflows. The final mass-weighted metallicities of the post-starburst galaxies are consistent with the mass-metallicity relation of local passive galaxies. Our results suggest that rapid quenching following a merger-driven starburst is entirely consistent with the observed gap between the stellar mass-metallicity relations of local star-forming and passive galaxies. keywords: galaxies: evolution - galaxies: abundances - galaxies: starburst - galaxies: stellar content - methods: statistical ## 1 Introduction Since the advent of the first large-scale galaxy surveys such as the 2dF Galaxy Redshift Survey (Colless et al., 2001) and the Sloan Digital Sky Survey (York et al., 2000), galaxies have been observed to fall into a bimodal distribution in photometric colours in the local Universe (Strateva et al., 2001; Baldry et al., 2004; Bell et al., 2004; Gavazzi et al., 2010). The two sub-populations are found to exhibit different distributions across many other properties, including total stellar mass (Vulcani et al., 2013), star-formation history (SFH) (Kauffmann et al., 2003), kinematics (Graham et al., 2018), stellar metallicity (Gallazzi et al., 2005; Peng et al., 2015), radial concentration (Hogg et al., 2002), and environment (Balogh et al., 2004; Gavazzi et al., 2010). The red sequence consists of quenched, mostly dispersion-dominated galaxies, whilst the blue cloud consists of star-forming, mostly rotationally-supported galaxies. The former also have higher stellar metallicity at a given stellar mass than the latter, which can be used to understand the origin of galaxy bimodality by probing the mechanisms of galaxy formation and quenching (Peng et al., 2015; Trussler et al., 2020). Metallicity is the measurement of the mass of all elements heavier than hydrogen and helium, relative to the total mass of baryons. The vast majority of metals are produced through stellar processes, including a combination of stellar nucleosynthesis, type Ia and core collapse supernovae (for a review, see Nomoto et al., 2013 and more recently Maiolino and Mannucci, 2019). 
These metals are then released into a galaxy's inter-stellar medium (ISM) through mass loss during the red giant phase in lower mass stars (\(\approx 2-8\)M\({}_{\odot}\)) and supernovae in higher mass stars (\(\gtrsim 8\)M\({}_{\odot}\)). In a closed box system (e.g. Tinsley, 1980) the recycling of this gas into new stars leads to the next generation of stars formed having a higher stellar metallicity than the previous. However, the closed box model is an unrealistic approximation of galaxies, as interactions with the medium outside the galaxy through inflows and outflows are omitted. Inflows from the galaxy's circum-galactic medium (CGM) bring in metal-poor gas, diluting the gas reservoir and lowering both the gas-phase and subsequently stellar metallicity. Outflows remove gas, slowing down star formation to produce fewer metals. Additionally, outflows that originate from stellar feedback might preferentially remove high metallicity ISM gas from systems, further strengthening the role of outflows in lowering metallicity, particularly in lower mass galaxies (Chisholm et al., 2018). Therefore, the stellar metallicity of a galaxy is a result of the net sum of three processes: enrichment through stellar processes, inflows, and outflows. These processes are key components of the baryonic cycle in galaxies, which is intrinsically linked to mechanisms that cause galaxy properties to vary with time, including the quenching of star formation. A key piece of the puzzle to understand the baryonic cycle and the evolution of galaxies is provided by higher redshift galaxy surveys such as UltraVISTA (McCracken et al., 2012). The surveys found that red quiescent galaxies grow in both number and total stellar mass since \(z=4\)(Ilbert et al., 2013; Muzzin et al., 2013), implying star-forming blue cloud galaxies must shut down (quench) their star formation to form quiescent red-sequence galaxies. However, the demographics of red and blue galaxy populations alone are unable to inform on the timescales of these quenching events: the steady growth in quenched galaxies could arise from the average over many individual galaxies with a wide range of different quenching timescales. As stars form in molecular clouds, the quenching of star formation can be achieved in two ways. The first is the complete consumption of gas following the (likely gradual) termination of the supply of cold gas into the regions of star formation. The second is the sudden heating and/or disruption of the molecular clouds due to disruptive events originating from either within or outwith the galaxy. These two processes are expected to act on different timescales (e.g. Schawinski et al., 2014), which is consistent with observational findings that quenching of star formation occurs over varying timescales, ranging from \(>5\) Gyr to \(<1\) Gyr (Heavens et al., 2004; Pacifici et al., 2016; Rowlands et al., 2018; Carnall et al., 2018). 
Mechanisms proposed for the slow termination of star formation include natural depletion of gas reservoirs over time through the gradual locking up of gas into stars, the "maintenance" of hot gas reservoirs by active galactic nucleus (AGN) feedback preventing cooling of the CGM (Croton et al., 2006), morphological quenching due to the stabilising force of a central spheroid (Martig et al., 2009; Ceverino et al., 2010), shock heating of higher mass halo gas preventing cooling of gas onto galaxies (Dekel and Birnboim, 2006), the inhibition of radial inflows of cold gas by the increase in angular momentum of accreted gas due to disc growth (Renzini et al., 2018; Peng and Renzini, 2020) and the restriction and/or stripping of galaxy gaseous envelopes by tidal forces in clusters (Balogh et al., 2000; Boselli and Gavazzi, 2006). Peng et al. (2015) and Trussler et al. (2020) have argued that slow quenching mechanisms are the main driver of intermediate and low stellar mass (\(M_{*}<10^{11}M_{\odot}\)) galaxy quenching at \(z<1\) due to the higher metallicity of quenched galaxies compared to star-forming galaxies in the local Universe. In this model, the slow decrease in cold gas supply leads to gradual quenching, which allows for star formation to continue with the remaining gas in the galaxy while a lack of continued inflow of low metallicity CGM gas brings reduced dilution effects. The combined effect enhances the metallicity of quenched galaxies with respect to star-forming galaxies. Trussler et al. (2020) further concluded that, although the decrease in gas supply is the main driver for quenching, a continuous secondary contribution from gas ejection through outflows is required to match the star-formation rates (SFRs) of local passive galaxies particularly at lower stellar masses. On the other hand, studies that analysed large scale cosmological hydrodynamical simulations have found an important contribution to the build up of the red sequence from rapidly-quenched galaxies (SIMBA, \(\approx 50\%\) contribution of total stellar mass at \(z\sim 1\): Rodriguez Montero et al., 2019; Zheng et al., 2022; IllustrisTNG, \(\approx 40\%\) of galaxies over all redshifts: Walters et al., 2022). Suggested mechanisms that could lead to this rapid quenching of star formation include feedback in the form of violent ejection of gas from the central regions of a galaxy powered by AGN outflows (Feruglio et al., 2010; Cicone et al., 2014). Stellar sources such as supernovae and stellar winds could similarly provide substantial feedback, particularly in dense starburst regions (Martin, 1998, 2005; Bolatto et al., 2013; Molero et al., 2023). In clusters, infalling star-forming satellites can experience processes such as ram pressure stripping, thermal evaporation and viscous stripping, which may be powerful enough to remove cold gas directly from star-forming regions (Boselli and Gavazzi, 2006). Several approaches have been used to measure the relative importance of various quenching mechanisms observationally. This includes, but is not limited to, fitting for the SFHs of quiescent galaxies to obtain their quenching timescales (e.g. Pacifici et al., 2016), identifying star-forming galaxies with unusually low molecular gas fractions and short depletion times (e.g. Gomez-Guijarro et al., 2022), and the aforementioned difference in mass-metallicity (MZ) relations between star-forming and quenched galaxies (Peng et al., 2015; Trussler et al., 2020). 
Despite the substantial work in recent years, the various approaches lead to conflicting results in the relative importance of fast and slow quenching mechanisms. One promising avenue towards resolving this confusion in the literature is the study of post-starburst (PSB) galaxies, which have experienced a recent (\(<2\) Gyr), rapid drop in star formation activity (e.g. Wild et al., 2020). Studying the prevalence and properties of such objects has the potential to constrain both the contribution of rapid quenching to the growth of the red sequence, as well as the physical mechanisms responsible for such rapid quenching events (e.g. Wild et al., 2009; Rowlands et al., 2018; Davis et al., 2019; Li et al., 2019; Zheng et al., 2020). Historically these were first identified as "E+A" or "K+A" galaxies due to their strong Balmer absorption lines and a lack of nebular emission lines (Dressler and Gunn, 1983). As a result of their SFH, PSSB exhibit an abundance of A and F type stars, while the shorter-lived O and B stars are largely absent, allowing the pre-burst stellar population to not be heavily outshone (see French, 2021, for a recent review). PSBs typically display compact morphologies, in both the local Universe and at higher redshifts (e.g. Almaini et al., 2017; Chen et al., 2022). Some studies have suggested that high redshift starburst galaxies such as sub-millimetre galaxies are progenitors of high-redshift PSBs (Toft et al., 2014; Wild et al., 2016, 2020; Wilkinson et al., 2021) and that low redshift starburst or ultraluminous infrared galaxies (ULIRGs) are progenitors of low-redshift PSBs (Hopkins et al., 2008; Cales et al., 2011; French et al., 2015; Pawlik et al., 2018). The initial quenching that transitions PSBs away from the starburst phase is expected to be mainly driven by stellar feedback (see e.g. Wild et al., 2009), but current-generation simulations require AGN mechanical feedback (outflows) to completely halt star formation and sustain the reduced SFR after the starburst (e.g. Zheng et al., 2020). Although PSBs account for only a minor \(<1\%\) of the galaxy population at redshift \(z\sim 0\)(Pawlik et al., 2016), the short visibility window of the spectral features means that a considerable fraction of all quenched galaxies could have passed through a PSB phase, particularly at higher redshift (Wild et al., 2009, 2016, 2020; Whitaker et al., 2012; Belli et al., 2019; Taylor et al., 2023). Therefore, PSBs provide a key testing ground to study the effects of fast quenching mechanisms. Measuring the gas-phase metallicity of PSBs is challenging due to the weakness of nebula emission lines and contamination with AGN, shock or diffuse interstellar excitation mechanisms, and can only be achieved in some cases (see Rowlands et al., 2018; Boardman et al. submitted). However, we might expect substantial chemical evolution to occur during such extreme changes in star formation rate. Given the negative radial metallicity gradients of star forming galaxies (e.g. Matteucci and Francois, 1989; Zaritsky et al., 1994), the inflow of gas required to drive the centralised starburst common to many PSBs might be expected to pull in lower metallicity gas from the outskirts of the galaxies, reducing metallicity. On the other hand, the very high star formation rates over a short period of time will lead to repeated recycling of gas released from evolved stars and a rapid build up in metals. 
Given the higher metallicity of quiescent galaxies than star-forming galaxies at given stellar mass (Gallazzi et al., 2005; Peng et al., 2015), which of these processes dominate in reality has important implications for how significantly post-starburst galaxies, and rapid quenching more generally, can contribute to the build-up of the quiescent population. A systematic characterisation of the stellar metallicity evolution of PSBs has not been attempted previously to our knowledge. In this study, we aim to measure this by taking advantage of the fact that both the pre-burst and starburst stellar population are visible in PSBs' integrated light spectrum. To draw a more direct comparison with simulations that focus on the chemical evolution in the cores of starburst galaxies, we focus this study on analysing galaxies with PSB-like centres. In Section 2, we describe our data and sample selection. In Section 3, we present our method of spectral fitting of the optical continuum through stellar population synthesis models. We test the method with both "self-consistent" and simulation-based parameter recovery in Section 4, to verify we can recover the SFH and chemical history of PSBs. We then apply the method to MaNGA galaxies, present the results in Section 5, and discuss them in Section 6. Where necessary, we assume a cosmology with \(\Omega_{M}=0.3\), \(\Omega_{\Lambda}=0.7\) and \(h=0.7\). All magnitudes are in the AB system (Oke and Gunn, 1983). We assume a Kroupa (2001) stellar initial mass function (IMF), and take solar metallicity \(Z_{\odot}=0.0142\)(Asplund et al., 2009). We re-scale all metallicity measurements quoted from the literature to this solar metallicity for direct comparison. Throughout, we denote lookback time as \(t\) and ages of the Universe as \(t^{\prime}\), such that \(t^{\prime}=t_{H}-t\) where \(t_{H}\) is the age of the Universe. ## 2 Data MaNGA (Bundy et al., 2015) is an integral field spectrograph (IFS) survey of \(\approx 10000\)\(M_{*}>10^{9}M_{\odot}\) galaxies (11273 datacubes) in the local \(z<0.2\) neighbourhood, a part of the fourth-generation Sloan Digital Sky Survey (SDSS-IV, Blanton et al., 2017) that ran from 2014 to 2020. It used the Sloan Foundation Telescope at Apache Point Observatory (Gunn et al., 2006) to collect spatially-resolved spectra by using hexagonal bundles of 19 to 127 optical fibres, depending on the apparent size of the target. The output BOSS spectrographs (Smee et al., 2013) provide high quality spectra in the wavelength range \(3622-10354\)A at a spectral resolution of \(R\sim 2000\)1. We access MaNGA data through both the web interface and the python package Marvin(Cherinka et al., 2019). Footnote 1: \(R=\lambda/\Delta\lambda_{\rm FWHM}\) For all MaNGA galaxies in the full data release DR17 (Abdurro'uf et al., 2022), we obtain redshift from the MaNGA data reduction pipeline (Law et al., 2016, 2021) and galaxy stellar mass from the NASA-Sloan Atlas (NSA_ELPETRO_MASS, a K-correction fit to elliptical Petrosian fluxes, see Blanton et al., 2011). We obtain spectral indices along with other necessary properties from the MaNGA data analysis pipeline (Westfall et al., 2019; Belfiore et al., 2019). We adjust the stellar masses from NSA for Hubble constant \(h=0.7\). Other stellar mass estimates from SDSS-MPA/JHU2 and the Wisconsin method (Chen et al., 2012) were also considered, but provided no qualitative changes to the conclusions. 
We limit the sample to \(z<0.06\) in favour of local PSBs with good spatial resolution, leaving 7971 galaxies. Footnote 2: J. Brinchmann: [http://www.mpa-garching.mpg.de/SDSS](http://www.mpa-garching.mpg.de/SDSS) and [http://home.strw.leidenuniv.nl/~jarle/SDSS/](http://home.strw.leidenuniv.nl/~jarle/SDSS/) Within each MaNGA galaxy's datacube3, spaxels4 marked with NOCOV, LOUCOV or DONOTUSE flags are removed. To identify PSB spaxels, we broadly follow the methods in Chen et al. (2019), specifically requiring the spaxels' median spectral SNR \(>8\) per pixel, strength of the H\(\delta\) Balmer absorption line after accounting for emission infilling H\(\delta_{A}>3\)A (Worthey and Otaviani, 1997), equivalent width of the H\(\alpha\) nebular emission line after accounting for underlying absorption W(H\(\alpha\)) \(<10\)A5, and \(\log{\rm W(H\alpha)}<0.23\times{\rm H}\delta_{A}-0.46\). Footnote 3: The main derived data product of the MaNGA survey; a 3D array with two spatial dimensions and one wavelength dimension. See Law et al. (2016) for details. Selecting only galaxies with a PSB spaxel fraction \(>0.05\) among all classifiable spaxels (spaxels not marked with the previous flags nor the SNR threshold that we impose), we sliced the galaxies into 3 elliptical annuli with \(0<R/R_{e}<0.5\), \(0.5<R/R_{e}<1\) and \(1<R/R_{e}<1.5\), where \(R_{e}\) is the \(r\)-band elliptical-Petrosian effective radius, using the elliptical polar distance of each spaxel from the galaxy centre. Our galaxy sample is selected to have \(>50\%\) of the inner annulus spaxels classifiable, and \(>50\%\) of these spaxels to be classified as a PSB, yielding 54 candidates. This sample selection is qualitatively similar to the Chen et al. (2019) selection of MaNGA galaxies with "central" post-starburst regions. After the removal of candidates with faulty MaNGA observations (e.g. mismatched redshift and obvious foreground stars, removed 2: 8248-6104, 8601-12703), active galactic nuclei (AGN) broad emission (removed 1: 11004-6104) and datacubes flagged as BADFLUX by the MaNGA DRP (removed 1: 8944-1902, spectrum also appears to be faulty upon visual inspection), the final sample contains 50 PSBs. They span a total stellar mass range of \(8.55<\log_{10}M_{*}/M_{\odot}<10.63\), as listed in Table 1 together with other properties. We form a stacked PSB-only spectrum for each galaxy, only including spaxel contributions from the PSB-classified spaxels. To ensure spaxel quality, we remove spaxels marked with quality flags DEADFIBER or FORESTAR in MaNGA's H\(\alpha\) emission line maps. Spectra are summed unweighted, while uncertainties are summed in quadrature. The stacking of many spaxels for each galaxy allows for a very high SNR to be reached across the full MaNGA wavelength range, with the mean SNR per pixel ranging from 95 to \(>1200\). The SNR of the sample is listed in Table 1. After correcting for Milky Way dust reddening, we further mask major nebular emission lines, residuals of strong skylines (central flux \(>5\times 10^{-16}\) erg s\({}^{-1}\) cm\({}^{-2}\) A\({}^{-1}\) in Hanuschik, 2003) and Balmer infilling. Since Pawik et al. (2018) have previously shown that stellar population synthesis models based on the MILES stellar library (Falcon-Barroso et al., 2011) showed improved recovery of the SFH of PSBs compared to models based on other libraries, we limit the spectra to rest frame \(\lambda<7500\)A to be fully within the MILES range when fitting. Figure 1 demonstrates the stacking process with two PSBs as examples. 
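As a minimal illustration of the spaxel cuts quoted above, the selection can be written as a simple boolean mask; the sketch below uses plain numpy arrays standing in for the MaNGA DAP maps (the actual analysis accesses these products through Marvin and the DAP, and the array names here are hypothetical).

```python
# Minimal sketch of the PSB spaxel cuts quoted above, using plain numpy arrays
# standing in for the MaNGA DAP maps; not the actual pipeline code.
import numpy as np

def select_psb_spaxels(snr, hdelta_a, w_halpha):
    """Boolean PSB mask following the criteria adopted here (after Chen et al. 2019).

    snr       : median spectral S/N per pixel of each spaxel
    hdelta_a  : emission-corrected H-delta_A absorption index [Angstrom]
    w_halpha  : absorption-corrected H-alpha equivalent width [Angstrom]
    """
    good = snr > 8.0
    strong_balmer = hdelta_a > 3.0
    weak_emission = w_halpha < 10.0
    # log10 W(Halpha) < 0.23 * Hdelta_A - 0.46; guard against non-positive widths
    below_line = np.log10(np.clip(w_halpha, 1e-3, None)) < 0.23 * hdelta_a - 0.46
    return good & strong_balmer & weak_emission & below_line

# Toy example on a three-spaxel "map"
snr = np.array([20.0, 50.0, 6.0])
hdelta_a = np.array([5.0, 2.0, 6.0])
w_halpha = np.array([1.5, 3.0, 0.5])
print(select_psb_spaxels(snr, hdelta_a, w_halpha))  # -> [ True False False]
```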
Limiting the spectra to rest frame \(\lambda<7500\)A potentially loses valuable constraining power on the older stellar population. Spectral information from longer wavelengths can form a longer wavelength baseline to minimise any age-dust-metallicity degeneracy (see Sections 5.1 and 6.1 of Conroy, 2013). Hence, the flux in the portions with observed frame wavelength \(7500<\lambda<9500\)A was summed into a single, artificial photometric data point, passed jointly with the trimmed spectra to the fitting framework (Section 3). However, no significant differences in the estimated stellar and dust properties were observed with or without the photometric data point; therefore, we limit our analysis to the trimmed spectra for the rest of this study. ## 3 Optical continuum spectral fitting To fully utilise the fossil record stored in the high-quality MaNGA spectra, we employ the fully Bayesian spectral energy distribution (SED) fitting code Bagpipes (Carnall et al., 2018, 2019). In this section, we describe in detail our spectral fitting procedure of the optical continuum, which includes the assumed parametric SFH model for PSBs from Wild et al. (2020) (Section 3.1). Motivated by a suite of gas-rich binary major merger simulations that create PSB signatures (Zheng et al., 2020), we introduce a novel two-step metallicity model which decouples metallicity before and after the starburst, allowing for any change in stellar metallicity during the starburst to be recovered (Section 3.2). Additionally, we employ a Gaussian process correlated noise model as an additive term to the physical model's predicted spectrum to account for correlated observational uncertainties and imperfect spectral models (Section 3.3). The sampling of the posterior surface is done using the MultiNest nested sampling algorithm (Feroz and Hobson, 2008) and its python interface (Buchner et al., 2014). As shown in Section 4 below, our two-step metallicity model also recovers SFH-related parameters more accurately. Within Bagpipes, we utilise the Bruzual and Charlot (2003) stellar population synthesis models (2016 version), and assume the initial mass function from Kroupa (2001). We apply the two-component dust attenuation law from Wild et al. (2007) and da Cunha et al. (2008), with a fixed power-law exponent \(n=0.7\) for the interstellar medium (ISM). The dust law asserts that stars younger than 10 Myr have a steeper power-law exponent \(n=1.3\) and are more attenuated than older stars by a factor \(\eta\) (\(=1/\mu\) in Wild et al., 2007; da Cunha et al., 2008), as they are assumed to be surrounded by their birth clouds. Overall, our model has 18 parameters, as listed in Table 2: 3 fixed and 15 free to be estimated. As we follow the Bayesian paradigm, prior distributions are placed on the 15 free parameters. It is important to also be aware of the imposed prior probability densities on derived physical properties, for example, specific SFR (sSFR) and mass-weighted formation age (\(t_{\rm M}\)), as they can impact the estimated galaxy properties and their uncertainties (Carnall et al., 2019). These are shown alongside SFH draws from the SFH prior in Figure 3 of Wild et al. (2020). ### The star-formation history model The star-formation history traces the rate of star formation in a galaxy and all of its progenitors back in time, typically expressed in lookback time. To model both the recent starburst and the underlying older stellar population expected in most local PSBs, we adopt the two-component parametric SFHs of Wild et al. 
(2020), which provides a good fit to combined spectra and photometry of \(z\sim 1\) PSBs: \[{\rm SFR}(t)\propto\frac{1-f_{\rm burst}}{\int\psi_{e}{\rm d}t}\times\psi_{e} (t)\big{|}_{t_{\rm form}>t>t_{\rm burst}}+\frac{f_{\rm burst}}{\int\psi_{\rm burst }{\rm d}t}\times\psi_{\rm burst}(t). \tag{1}\] This is made up of the older, exponential decay component \(\psi_{e}\) and the double power-law starburst component \(\psi_{\rm burst}\), both a function of lookback time \(t\). The lookback time when the older population began to form is denoted as \(t_{\rm form}\), while the time since the peak of the starburst is denoted as \(t_{\rm burst}\). The fraction \(f_{\rm burst}\) controls the proportion of mass formed during the starburst. The two components have the forms: \[\psi_{e}(t^{\prime}) =\exp^{\frac{t^{\prime}}{t_{e}}} \tag{2}\] \[\psi_{\rm bursts}(t^{\prime}) =\left[\left(\frac{t^{\prime}}{t^{\prime}_{\rm burst}}\right)^{ \alpha}+\left(\frac{t^{\prime}}{t^{\prime}_{\rm burst}}\right)^{-\beta}\right] ^{-1}. \tag{3}\] All times in Equations 2 and 3 are in ages of the Universe, therefore unlike \(t_{\rm burst}\), \(t^{\prime}_{\rm burst}\) in the starburst component's function represents the age of the Universe at the peak of the starburst. \(\tau_{e}\) is the older population's exponential decay timescale, while \(\alpha\) and \(\beta\) control the declining and increasing timescales of the burst respectively, with larger values corresponding to steeper slopes. The usage of the fraction \(f_{\rm burst}\) instead of parameterizing the stellar mass formed in the components individually allows for an easier application of a flat prior over \(f_{\rm burst}\). This allows for not only SFH shapes with a strong starburst, but also rapid quenching events of the existing star formation when \(f_{\rm burst}\sim 0\). \begin{table} \begin{tabular}{l l l l l l l l l} \hline Plate-IFU & MaNGA ID & RA (degrees) & Dec. (degrees) & Redshift & \(\log_{10}\mathbf{M}_{\star}\) & \multicolumn{2}{c}{PSB spach} & \multicolumn{1}{c}{\multirow{2}{*}{Number of stacked}} & \multicolumn{1}{c}{\multirow{2}{*}{Stacked mean}} \\ (1) & (2) & (3) & (4) & (5) & & (6) & (7) & \\ \hline 7961-1901 & 1-178035 & 259.53275 & 30.12902 & 0.0296 & 9.68 & 0.45 & 162 & 335.8 \\ 7964-1902 & 1-179682 & 317.42261 & 0.62777 & 0.0242 & 9.42 & 0.07 & 28 & 169.6 \\ 7965-1902 & 1-653485 & 318.50227 & 0.53509 & 0.0269 & 10.10 & 0.91 & 356 & 843.5 \\ 8080-3702 & 1-38062 & 49.22887 & -0.04201 & 0.0231 & 9.88 & 0.39 & 285 & 553.1 \\ 8081-3702 & 1-38166 & 49.94685 & 0.62382 & 0.0247 & 9.14 & 0.12 & 92 & 112.0 \\ \hline \end{tabular} \end{table} Table 1: List of 50 studied post-starburst galaxies and their properties: (1) MaNGA Plate-IFU identifier; (2) MaNGA identifier; (3) R.A. (J2000); (4) Declination (J2000); (5) Redshift; (6) \(\log_{10}\) total stellar mass fitted from K-corrected elliptical Petrosian photometric fluxes in GALEX/SDSS _FNugriz_ bands from the NSA catalogue, adjusted for \(h=0.7\); (7) Number fraction of classified PSB spacks among all spacks not marked with the \(\rm 30COV\) or \(\rm LO0K0V\) flags in MaNGA datacubes; (8) Final number of stacked spacks, after excluding spacks marked with \(\rm DEAPFIBER\) or FOREST; (9) Mean SNR of the stacked optical spectrum over the full MaNGA wavelength range. The full table is available as supplementary online material. We investigated allowing the rising starburst slope \(\beta\) to vary freely, with a similar prior to \(\alpha\). 
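For concreteness, Equations (1)-(3) can be evaluated directly on a grid of ages; the minimal sketch below is our own illustration (not code from Bagpipes or Wild et al. 2020), with the components normalised numerically on the grid and both slopes \(\alpha\) and \(\beta\) left as free arguments.

```python
# Minimal numerical sketch of the two-component SFH of Eqs (1)-(3); our own
# illustration, not Bagpipes internals. Times are ages of the Universe t' in Gyr.
import numpy as np

def sfh(t_prime, t_form_prime, t_burst_prime, tau_e, alpha, beta, f_burst):
    """Relative SFR(t') for the exponential + double-power-law model."""
    # Older component (Eq. 2 as written), active between t_form' and the burst peak.
    psi_e = np.exp(t_prime / tau_e)
    psi_e[(t_prime < t_form_prime) | (t_prime > t_burst_prime)] = 0.0
    # Burst component (Eq. 3): double power law peaking near t_burst'.
    x = t_prime / t_burst_prime
    with np.errstate(over="ignore"):  # x**(-beta) overflows far from the peak; 1/inf -> 0
        psi_b = 1.0 / (x**alpha + x**(-beta))
    # Eq. (1): weight the grid-normalised components by (1 - f_burst) and f_burst.
    dt = np.gradient(t_prime)
    return ((1.0 - f_burst) * psi_e / np.sum(psi_e * dt)
            + f_burst * psi_b / np.sum(psi_b * dt))

t_prime = np.linspace(0.1, 13.0, 2000)   # ages of the Universe [Gyr]
sfr = sfh(t_prime, t_form_prime=3.0, t_burst_prime=12.0,
          tau_e=2.0, alpha=100.0, beta=250.0, f_burst=0.3)
print("normalisation check (should be ~1):", np.sum(sfr * np.gradient(t_prime)))
```

Here \(\beta\) is kept as a free argument, as in the test described in the preceding sentence.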
However, parameter recovery tests performed using \(\mathrm{SNR}=100\), at the lower end of our observations, showed that \(\beta\) is poorly constrained in older starbursts (\(t_{\mathrm{burst}}>1\) Gyr). Therefore, we fix \(\beta=250\), consistent with the typical value found from fits to younger starbursts. A common alternative SED fitting method avoids assuming parametric forms for the star-formation history and instead allowing stars to form in fixed or variable bins in time (e.g. Cid Fernandes et al., 2005; Tojeiro et al., 2007; Iyer and Gawiser, 2017; Johnson et al., 2021). In general these models do well with smooth SFHs, but are less well suited to galaxies which undergo a rapid change in SFR, due to the need for adaptive variability of the number of time bins. However, both Pawlik et al. (2018) and Suess et al. (2022) have successfully employed such methods, often referred to as non-parametric, to fit PSBs. Suess et al. (2022) increased the number of time bins around \begin{table} \begin{tabular}{l l l l l} \hline Type & Parameter & Form & Min & Max \\ \hline SFH & \(\log_{10}(M_{*}/M_{\odot})\) & Uniform & 6 & 13 \\ & \(t_{\mathrm{form}}\) / Gyr & Uniform & 4 & 14 \\ & \(r_{e}\) / Gyr & Uniform & 0.3 & 10 \\ & \(t_{\mathrm{burst}}\) / Gyr & Uniform & 0 & 4 \\ & \(\alpha\) & \(\log_{10}\) Uniform & 0.01 & 1000 \\ & \(\beta\) & Fixed = 250 & - & - \\ & \(f_{\mathrm{burst}}\) & Uniform & 0 & 1 \\ Metallicity & \(Z_{\mathrm{old}}/Z_{\odot}\) & \(\log_{10}\) Uniform & 0.014 & 3.52 \\ & \(Z_{\mathrm{burst}}/Z_{\odot}\) & \(\log_{10}\) Uniform & 0.014 & 3.52 \\ Dust & \(AV\) / mag & Uniform & 0 & 2 \\ & birthcloud factor \(\eta\) & Uniform & 1 & 5 \\ & \(h_{\mathrm{influcloud}}\) / Gyr & Fixed = 0.01 & - & - \\ GP noise & uncorrelated amplitude \(s\) & \(\log_{10}\) Uniform & 0.1 & 10 \\ & correlated amplitude \(\sigma\) & \(\log_{10}\) Uniform & \(10^{-4}\) & 1 \\ & period/length scale \(\rho\) & \(\log_{10}\) Uniform & 0.04 & 1.0 \\ & dampening quality factor \(Q\) & Fixed = 0.49 & - & - \\ Miscellaneous & redshift & Uniform & 0.8 z & 1.2 z \\ & \(\sigma_{\mathrm{dip}}\) / km/s & \(\log_{10}\) Uniform & 40 & 4000 \\ \hline \end{tabular} \end{table} Table 2: Model priors used for fitting PSB SEDs. The parameter symbols are described in Sections 3 to 3.3, or otherwise have their usual meanings. Some parameters have prior shape \(\log_{10}\) uniform, which indicates a flat prior in uniform space \(\log(X)\sim U(\log(min),\log(max))\). Redshift is given a uniform prior ranging from 80% to 120% of the target’s MaNGA redshift (\(z\)). Note that \(\sigma_{\mathrm{dip}}\) is not the intrinsic velocity dispersion of the galaxy, as it does not account for the finite resolution of the spectral templates or observational data. Figure 1: Two typical PSBs from our sample. The top represents PSBs with the vast majority of classifiable spaxels classified as PSB, while the bottom represents PSBs with only a core PSB region. The left panels show the SDSS 3-colour image with the galaxy’s Plate-IFU marked on the top right corner. The MaNGA field of view is marked as the pink hexagon. The middle panels show the spaxel selection (broadly following Chen et al., 2019), displaying regions with no/faulty observations (transparent), with median spectral \(\mathrm{SNR}<5\) too low to be classified (grey), classified as PSB (blue) and classified as non-PSB (red). 
The right panels show the stacked observed-frame spectrum of the PSB classified spaxels (black), the stacked \(1\sigma\) observational uncertainty (red, multiplied by \(10\times\) to make visible) and spectral ranges masked during the fitting process (grey bands), including major nebular emission lines, skyline residuals and Balmer infilling. The resulting stacked spectra have a mean SNR of 274 and 482 respectively. the time of the starburst, successfully recovering the rapid rise and fall in SFR of mock PSBs. While this can provide more flexibility in theory, in practice the need to define time bins and in some cases the inclusion of some form of regularisation to smooth between time bins makes the method more model dependent than it first seems. Additionally, no code currently exists which can implement both non parametric SFHs and a Gaussian process (GP) model to account for correlated noise, which we found crucial for our fitting (see Section 3.3). Therefore, we opt for a parametric SFH approach, noting that the GP noise component is able to account for any slight imperfections in the assumed SFH. ### Two-step metallicity: insight from PSB merger simulations During integrated light SED fitting, stellar metallicity is often assumed to be constant (e.g. Onodera et al., 2012; Gallazzi et al., 2014; Carnall et al., 2018; French et al., 2018; Wild et al., 2020; Suess et al., 2022). This is done mainly to limit the dimensionality of the problem, by sacrificing the second-order effects of chemical evolution on observations when compared to that from varying SFH, especially for broad-band photometry. This work aims to explore whether this simplification can be removed, and the chemical evolution of PSBs recovered. To propose a simple yet representative metallicity evolution model for PSBs, we consult the suite of gas-rich binary major merger smoothed-particle hydrodynamics (SPH) simulations that create PSB signatures in Zheng et al. (2020). The simulations were performed using the SPH code SPHGal (Hu et al., 2014; Eisenreich et al., 2017), which is an updated version of the Gadget-3 code (Springel, 2005). SPHGal implements sub-resolution astrophysics models from Scannapieco et al. (2005, 2006), updated by Aumer et al. (2013), and includes gas cooling rates following Wiersma et al. (2009). Chemical evolution and stellar feedback from type Ia and type II supernovae, and AGB stars are accounted for (for details, see Section 3.1 of Zheng et al., 2020). The merger progenitor galaxies were set up following Johansson et al. (2009) with modifications in the SFR adapted from Lahen et al. (2018), and initial orbital configurations following Naab & Burkert (2003). The AGN feedback models are from Choi et al. (2012) and Choi et al. (2014). The galaxy models have a baryonic particle mass of \(1.4\times 10^{5}\)M\({}_{\odot}\) for both gas and stars, and a gravitational softening length of 28 pc for all baryonic particles. For our fiducial model we use the retrograde-prograde orbit merger simulation of two identical progenitor galaxies with initial gas mass fractions of \(f_{\rm gas}=0.22\) (\(2\)xS\({}_{\rm c}\)\(\sim\)\(0.07\)), simulated with mechanical black hole feedback but no radiative feedback, because it results in strong PSB spectral features. Figure 2 plots the stellar metallicity of simulation particles against their lookback times of formation, together with the simulated SFH. 
When the merger-triggered starburst occurs at \(\sim 550\) Myr in lookback time, the newly formed stars have significantly higher stellar metallicity than previous star formation due to rapid recycling of gas to form many generations of stars, and the trend settles on more than twice the pre-burst metallicity after the starburst ends. Similar patterns are seen in other gas-rich merger simulations (Perez et al., 2011; Torrey et al., 2012). We approximate the rapid metallicity increase with a step function and introduce a two-step metallicity model with the time of transition fixed at the peak of the starburst \(t_{\rm burst}\): \[Z(t)=\begin{cases}Z_{\rm old}&t>t_{\rm burst}\\ Z_{\rm burst}&t\leq t_{\rm burst}\end{cases}. \tag{4}\] Both \(t\) and \(t_{\rm burst}\) are in lookback times. The two metallicity levels \(Z_{\rm old}\) and \(Z_{\rm burst}\) are independent and have identical priors, to ensure the model is equally able to fit an increase, decrease or no change in stellar metallicity during the starburst. We experimented with several more complex metallicity evolution models: a three-step model (pre-burst, during burst, after burst); a gradual increase in metallicity prior to the burst; a two-step metallicity with scatter in the metallicity of coeval stars, following a log-normal or exponential distribution. None provided significantly improved parameter recovery, and given that we do not expect the simulations to be a perfect representation of the real Universe, we felt that any additional model complexity was not justifiable. ### Treatment of correlated errors When fitting photometric data, it is safe to assume the observational uncertainties in the bands are uncorrelated, due to individual photometric bands being observed at different time points, with different instrument set ups. However, when working with spectra consecutive pixels are not independent, due to the many processing steps involved in translating the raw spectroscopic observations into 1D spectral arrays. Following the methods in Carnall et al. (2019a, see Section 4 for a detailed discussion regarding the treatment of spectroscopic uncertainties), we introduce an additive, Gaussian process (GP) correlated noise component. As well as allowing for correlated uncertainties that stem from the standard data reduction of spectra, this component also serves to account for model-data mismatch that originates from assumptions and approximations involved at all stages of stellar population synthesis: isochrones, stellar spectral templates, SFH, chemical evolution and dust models (see Conroy, 2013, for a review). A Gaussian process (GP) can be visualised as a series of random variables along one or more continuous axes that represents some physical property. It is a general technique, that has been used to model data in various sub-fields of astronomy, including light curves of X-ray binaries and AGNs (Kelly et al., 2014), asteroseismic data of stars (Brewer & Stello, 2009; Foreman-Mackey et al., 2017), exoplanets (Barclay et al., 2015; Foreman-Mackey et al., 2017; Chakrabarty & Sengupta, 2019) and radial velocity measurements (Czekala et al., 2017), and the cosmic microwave background (Bond et al., 1999). In the case of spectral fitting, the random variables model a series of spectral flux densities along an array of wavelengths, which forms an SED. Each variable is modelled with a Gaussian distribution, such that for a dataset with \(N\) values, an N-dimensional Gaussian distribution is constructed. 
Before the variables are conditioned on the observed data, the prior mean of the Gaussian distributions is typically set as a vector of zeros. This is also adopted in this study. The covariance matrix describes the relationship between each one of the random variables with all other random variables. Each covariance is described by a kernel function that depends on the separation between two observations considering their physical properties. For an in-depth description of GP modelling, see Rasmussen & Williams (2006). For the fitting of spectra, the GP's covariance matrix allows us to quantify the correlated noise between the measured flux density of any wavelength bin with all other bins. This is useful since it can account for correlated noise on different wavelength scales, where measurements at close-by wavelength bins are expected to correlate more strongly than measurements separated by longer distances. Hence, the close-to-diagonal terms of the covariance matrix will likely have a larger magnitude than off-diagonal terms. To reduce computational time, we replace the squared exponential kernel used in Carnall et al. (2019a) with a stochastically-driven damped simple harmonic oscillator (SHOTerm), implemented through the celerite2 python package (Foreman-Mackey et al., 2017; Foreman-Mackey, 2018). The GP model of Carnall et al. (2019) used a covariance matrix describing the covariance between two wavelength bins \(j\) and \(k\): \[\mathrm{C}_{jk}(\mathbf{\Phi})=s^{2}\sigma_{j}\sigma_{k}\delta_{jk}+b^{2}\exp\left( -\frac{(\lambda_{j}-\lambda_{k})^{2}}{2l^{2}}\right)\, \tag{5}\] with parameters \(\mathbf{\Phi}=(s,b,l)\), where \(s\) scales the observational uncertainties \(\sigma_{j,k}\) on the SED fluxes, \(b\) is the amplitude of the correlated noise and \(l\) is the lengthscale of the squared exponential kernel in units of wavelength. \(\lambda_{j}\) and \(\lambda_{k}\) are the wavelengths at indices \(j\) and \(k\), and \(\delta_{jk}\) is the Kronecker delta function. The first term allows for scaling of the uncorrelated input observational noise while the second term is the GP kernel function for correlated noise. In this study, we replace the second term with the celerite SHOTerm kernel function \(K\), which is a sum of exponential terms: \[K_{\alpha}(|\lambda_{j}-\lambda_{k}|)=\sum_{m=1}^{M}a_{m}\exp\left(-c_{m}(| \lambda_{j}-\lambda_{k}|)\right)\, \tag{6}\] where \(\alpha=(\mathbf{a},\mathbf{c})\), with \(\mathbf{a}\) and \(\mathbf{c}\) vectors with elements \(a_{m}\) and \(c_{m}\) respectively. For a single exponential term of this form, the corresponding inverse covariance matrix is tri-diagonal, which can be computed with a small number of evaluations (Rybicki and Press, 1992; Kelly et al., 2011), facilitating a reduction in computation time. To allow for easier usage of the kernel function, we follow Foreman-Mackey et al. (2017) to take the Fourier transform of equation (6), with the power spectral density \[S(\omega)=\sqrt{\frac{2}{\pi}}\frac{S_{0}\omega_{0}^{4}}{(\omega^{2}-\omega_{ 0}^{2})^{2}+\omega_{0}^{2}\omega^{2}/Q^{2}} \tag{7}\] where \(\omega_{0}\) is the frequency of the undamped oscillator, \(Q\) is the quality factor of the oscillator, and \(S_{0}\) is proportional to the power of the oscillator at \(\omega=\omega_{0}\): \(S(\omega_{0})=\sqrt{2/\pi}S_{0}Q^{2}\). 
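As a concrete illustration, the likelihood implied by equation (5), with the SHOTerm kernel of equations (6)-(7) replacing the squared exponential term, can be evaluated with celerite2 roughly as follows. This is a sketch with hypothetical array names and parameter values; the actual implementation inside Bagpipes differs in detail.

```python
import numpy as np
import celerite2
from celerite2 import terms

# hypothetical wavelength grid, observed spectrum, uncertainties and model spectrum
wave = np.linspace(3800.0, 7000.0, 3000)
flux_obs = np.ones_like(wave)
flux_err = np.full_like(wave, 0.01)
flux_model = np.ones_like(wave)

# noise parameters: s rescales the uncorrelated observational uncertainties
# (first term of equation 5); S0, w0 and Q define the SHOTerm (equation 7)
s, S0, w0, Q = 3.0, 1e-6, 2.0 * np.pi / 300.0, 1.0 / np.sqrt(2.0)

kernel = terms.SHOTerm(S0=S0, w0=w0, Q=Q)         # correlated-noise kernel
gp = celerite2.GaussianProcess(kernel, mean=0.0)  # zero prior mean, as adopted here

# the diagonal term s^2 * sigma_j^2 enters through the scaled uncertainties
gp.compute(wave, yerr=s * flux_err)

# GP log-likelihood of the residual (observed minus physical model spectrum)
print(gp.log_likelihood(flux_obs - flux_model))
```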
To make the function more intuitive, celerite2 allows for the option to swap frequency \(\omega_{0}\) with period \(\rho\) via the relationship \(\rho=2\pi/\omega_{0}\), such that period \(\rho\) is proportional to a typical lengthscale at which fluxes at \(\lambda_{j}\) and \(\lambda_{k}\) correlate by a standard degree. celerite2 also allows for swapping \(S_{0}\) with the standard deviation of the GP realisations \(\sigma\) via the relationship \(\sigma=\sqrt{S_{0}\omega_{0}Q}\), such that this amplitude parameter is independent of the other parameters \(\omega_{0}\) and \(Q\). Aiming to emulate the behaviour of the squared exponential kernel, which works well on spectral fitting problems, we match its autocorrelation curves with those from the SHOTerm kernel. This process is described in Appendix A. This replacement of kernels allowed for a \(\sim 100\)-fold reduction in computational time.

## 4 Testing of fitting methods

In this section we demonstrate that the combination of the 2-component SFH model with the two-step metallicity model can recover relevant galaxy parameters when presented with spectra of PSBs, with low systematic biases. We perform two types of parameter recovery tests: a "self-consistent" test is described in Section 4.1, and a smoothed-particle hydrodynamics (SPH) simulation test is described in Section 4.2.

### Self-consistent parameter recovery

The "self-consistent" test involves generating mock PSB spectra using the functional forms for all parameters, including SFH, metallicity evolution and dust, then fitting the mock spectra with the same functional forms and spectral templates used for mock spectra generation. This setup ensures there is no model-data mismatch nor correlated errors. If the parameter recovery is successful across a large range of input values, it indicates the fitting process can recover the required parameters with minimal degeneracies when the model can perfectly describe the data. We generate spectra using Bagpipes, with wavelength range and spectral resolution identical to our real MaNGA spectra, perturbing the generated spectra with Gaussian uncorrelated errors assuming \(\mathrm{SNR}=100\), similar to the minimum SNR of our observed spectra. Typical dust and dispersion values were assumed, based on the results from our observed sample (\(A_{V}=0.6\), \(\eta=3\), \(\sigma_{\rm disp}=80\) km/s).

Figure 2: Star-formation history (bottom) and stellar metallicities (\(Z_{\star}/Z_{\odot}\)) of the stars formed (top) in the binary gas-rich major merger simulation 2xSc07 that creates PSB signatures in Zheng et al. (2020). The full SFH is shown in the inset panel, where the shaded region indicates the assumed SFH of the progenitor galaxies. Stellar metallicity increases from \(\sim\)solar levels to more than twice solar not long after the peak of the starburst at \(\sim 550\) Myr lookback time.

Since we do not inject correlated errors, there is no need to include the GP noise component during fitting. Figure 3 shows the recovery performance of a self-consistent test with mock input parameter values similar to the SPH-simulated PSB in Figure 2, using the two-step metallicity model. The left panels demonstrate that we are able to recover the input SFH to within \(1\sigma\) for nearly all lookback times. In the top left panel, the apparent mismatch between the posterior median SFH (solid orange) and input SFH (solid black) before \(z=1\) is partly a plotting artefact.
Since each posterior sample is an exponential decay function with an abrupt increase in SFR at \(t_{\rm form}\), the median SFR includes a steadily decreasing fraction of models with no star formation. Hence we also plot the SFHs of 15 posterior samples in the same panel, and show the cumulative fraction of the total stellar mass formed against the age of the Universe in the bottom left panel as an alternative visualisation. In the cumulative SFH, it is easier to see that the discrepancy between the fitted median and input curves is \(<1\sigma\). The right panels of Figure 3 show violin representations of the posterior distributions for seven key parameters, demonstrating they are recovered to within \(2\sigma\) of the input truths (solid bars). In particular, we are able to recover the difference between the pre-burst and starburst stellar metallicities with a high degree of confidence.

#### 4.1.1 Sample of self-consistent model tests

To understand whether the offsets observed between the true values and posterior median estimates in Figure 3 are systematic, we repeated the recovery test 100 times with randomly drawn input values6 from the priors in Table 2. Variations in dust and velocity dispersion are omitted due to computational time limitations (although see below for a comparison when these are included). We only fit mock spectra that are classified as PSBs under our selection criteria using H\(\delta_{\rm A}\) and W(H\(\alpha\)) (Section 2). The mean offset (median of posterior - input truth) and fitting uncertainty for all tests are listed in Table 3. Identifying parameters where the mean offset is greater than the mean uncertainty, we find a very slight average overestimation in burst age. However, this is two orders of magnitude smaller than the range of our sample's estimated burst ages (Section 5), thus this does not impact our main results.

Footnote 6: The total stellar mass formed and redshift are fixed, as these do not alter the shape of the spectrum, and varying them will not provide additional insight.

\begin{table} \begin{tabular}{l l l} \hline Parameter & Mean offset (\(\overline{\Delta}\)) & Mean \(1\sigma\) uncertainty \\ \hline \(\log_{10}({\rm M}_{*}/{\rm M}_{\odot})\) & -0.002 & 0.011 \\ \(t_{\rm form}\) / Gyr & 0.20 & 1.33 \\ \(t_{\rm burst}\) / Gyr & 0.049 & 0.037 \\ \(f_{\rm burst}\) & 0.017 & 0.022 \\ \(Z_{\rm old}\) / \(Z_{\odot}\) & -0.001 & 0.019 \\ \(Z_{\rm burst}\) / \(Z_{\odot}\) & -0.023 & 0.038 \\ \hline \end{tabular} \end{table}

Table 3: The offsets (\(\Delta\), median of posterior – input truth) and mean uncertainties from 100 self-consistent parameter recovery tests using the two-step metallicity model. The input values are randomly drawn from the priors given in Table 2, but were then checked to ensure the resulting system satisfied our PSB selection criteria. We list here the mean offset and fitting uncertainty averaged across all 100 tests. All symbols follow the definitions in Section 3.

In addition to the test shown in Figure 3, five self-consistent parameter recovery tests with dust and velocity dispersion are performed based on randomly drawn input values, including \(A_{V}\), \(\eta\) and \(\sigma_{\rm disp}\). Comparing the five test results to the 100 above that did not have the dust component and added velocity dispersion, a \(\sim 40\%\) increase in estimation uncertainty is seen across individual SFH properties
and metallicity. Despite this, with dust and velocity dispersion, the recovered values remain within \(2\sigma\) of the input truths (see Figure 3). These tests show that, in the absence of model-data mismatch and correlated noise, we can recover the input parameters of the two-step metallicity model, via integrated light MaNGA-like spectra, for a wide range of PSBs with varying stellar and metallicity histories.

Figure 3: Self-consistent parameter recovery test with the two-step metallicity model. **Left**: SFH (top) and fractional cumulative SFH (bottom), showing the input truth (black line), posterior median (solid orange line) and its \(1\sigma\) region (shaded), and 15 random draws from the posterior fit (dashed lines). **Right**: Violin plots showing posterior distributions of total stellar mass formed (\(\log_{10}M_{*}/M_{\odot}\)), extinction in V band (\(A_{V}\)), velocity dispersion (\(\sigma_{\rm disp}\)), burst mass fraction (\(f_{\rm burst}\)), age of the burst (\(t_{\rm burst}\)) and metallicity levels. The height corresponds to the distribution's density, while the central box plot marks its median (white dot), \(1\sigma\) region (thick brown bar) and \(2\sigma\) region (thin brown line). The vertical black lines indicate the input truths. In the lower right panel, the lighter and darker shaded violins correspond to the posterior of the burst and older metallicities, respectively. All parameters are estimated with accuracy within \(2\sigma\), and the metallicity change is recovered.

### SPH Simulation parameter recovery

The second parameter recovery test involved generating mock PSB spectra from the stellar particles of the SPH simulations in Zheng et al. (2020), and fitting them with our assumed models to see whether we can recover the underlying galaxy properties. Unlike in the "self-consistent" tests above, the star formation and chemical evolution history of the SPH simulations is complex, and cannot be perfectly described by the simple functional forms of our model. Additionally, stars formed coevally in the simulation can have a range of metallicities, which is not possible in our model. Thus, the mock spectra are created from galaxy properties that do not exist within the prior model space, so parameters can only be approximately recovered. Any inaccuracies and biases found during this test allow conclusions to be drawn concerning the models' performance when tasked with real data, which will exhibit similar characteristics.

While investigating the cause of the small number of self-consistent parameter recovery tests with large discrepancies between estimated and true values, we discovered that many occur as the mock galaxy's \(t_{\rm form}\) approaches the age of the Universe. The rate of change in the flux of a galaxy spectrum with time decreases with increasing stellar age, hence errors on \(t_{\rm form}\) increase for the oldest galaxies. This issue is not due to the changing metallicity, as it is seen in the parameter recovery tests of both constant and two-step metallicity models. Unfortunately, all the SPH simulations in Zheng et al. (2020) were initialised with analytic templates that began their star formation at age of the Universe \(t=0.5\) Gyr. Therefore, to enable a better insight into the recovery of star formation and metallicity histories in PSBs, we scale the age of all simulated stellar particles down by 15%, preserving the total stellar mass formed and the shape and chronology of the SFH.
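Both of these steps, the 15 per cent rescaling of particle ages and the Gaussian perturbation of the mock spectra at a fixed SNR (Section 4.1), are simple operations; the following is a minimal sketch in Python, with hypothetical array names, of how they might be carried out.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical star-particle formation ages (lookback times in Gyr);
# scaling all ages by 0.85 preserves the total mass formed and the
# shape and chronology of the SFH while shifting it to younger ages
ages_formed = rng.uniform(0.1, 13.3, size=10_000)
ages_scaled = 0.85 * ages_formed

# perturb a noiseless model spectrum with uncorrelated Gaussian errors at SNR = 100
flux_model = np.ones(3000)          # hypothetical noiseless model spectrum
sigma = flux_model / 100.0          # per-pixel 1-sigma uncertainty for SNR = 100
flux_mock = flux_model + rng.normal(0.0, sigma)
```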
The shift away from simulated SFHs that began at very low age of the Universe does not impact our results, as the typical estimated age of the Universe when star formation began for our sample (\(t^{\prime}_{\rm form}\)) is \(>3\) Gyr. Mock spectra are generated exactly as described in Section 4.1, with the same dust properties, velocity dispersion, SNR and uncorrelated Gaussian noise. Due to model-data mismatch caused by the simulations' parameters lying outside of the prior space, the GP noise component is included when fitting. Figure 4 compares the results of fitting spectra constructed from the binary merger simulation shown in Figure 2 with the constant and two-step metallicity models. Due to the model no-longer existing within the model prior space, we no longer expect perfect parameter recovery. The top left panels show that the two-step metallicity model outperforms the constant model when recovering the SFH of simulated PSBs that underwent changes in stellar metallicity. The bottom left panel shows the fitted two-step metallicity model closely follows the simulation's metallicity evolution7. Footnote 7: The sudden drop in metallicity of the simulation at 10.5 Gyr is a result of the switch from analytic progenitor galaxies to SPH simulation and is not physical. As there is no direct input "truth" corresponding to many of the fitted parameters, in the right hand violin plots we instead compare the fraction of mass formed within \(t<1.5\) Gyr (\(f_{\rm young}\)), mass-weighted mean stellar age within lookback time \(t<1.5\) Gyr (\(t_{\rm M,young}\)), and the mass-weighted mean stellar metallicity throughout the entire simulation, as well as before and after the peak SFR of the starburst. In all cases, the two-step metallicity model substantially outperforms the constant metallicity model in recovering the underlying galaxy properties. In the bottom right panel, we see that the fitted metallicity of the constant metallicity model is \(>5\sigma\) higher than the true overall mass-weighted metallicity. The over-estimation of the older stellar population's metallicity results in a redder old-star spectrum, leading to a younger fitted \(t_{\rm form}\). The failure to recover the light from old stars (formed before 6 Gyr), leads to an underestimation of total stellar mass formed by \(>0.2\) dex. On the other hand, the underestimation of the burst population's metallicity results in a bluer young-star spectrum, leading to an overestimation of the burst age to compensate. The flexibility of the two metallicity levels allows for these problems to be mitigated. As a result, the violin plots show a significantly more accurate recovery of all parameters displayed. To verify that the two-step metallicity model also enables good recovery of input true values when metallicity declines during the starburst, we artificially flip the metallicity of stellar particles in the simulation, to simulate a decrease in metallicity. We found the two-step model again results in a superior recovery of the SFH compared to the constant model. #### 4.2.1 Sample of simulation recovery tests To investigate the possible bias on recovered parameters, we expand the simulation parameter recovery test to a suite of 46 tests performed on PSB spectra predicted from the simulations in Zheng et al. (2020). All tests assumed the same dust properties, velocity dispersion, SNR and perturbation of the generated spectrum as previous tests. 
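The comparison quantities used in Figure 4 and Table 4 (\(f_{\rm young}\), \(t_{\rm M,young}\) and the mass-weighted metallicities before and after the peak of the starburst) follow directly from the simulation star particles. A schematic version, with hypothetical array names, is given below.

```python
import numpy as np

def particle_summaries(mass, age, met, t_split=1.5, t_peak=0.55):
    """Mass-weighted summary statistics from star particles.

    mass : stellar mass formed in each particle
    age  : lookback time of formation of each particle (Gyr)
    met  : stellar metallicity of each particle (Z_sun)
    t_split : lookback time separating young and old stars (1.5 Gyr)
    t_peak  : lookback time of the peak SFR of the starburst (simulation dependent)
    """
    young = age < t_split
    f_young = mass[young].sum() / mass.sum()
    t_m_young = np.average(age[young], weights=mass[young])
    z_before_peak = np.average(met[age > t_peak], weights=mass[age > t_peak])
    z_after_peak = np.average(met[age <= t_peak], weights=mass[age <= t_peak])
    return f_young, t_m_young, z_before_peak, z_after_peak
```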
We only use the simulation runs that resulted in an obvious starburst followed by rapid quenching, i.e. prograde-prograde and retrograde-prograde simulations, more gas-rich progenitors, and those with mechanical black hole feedback, but without radiative feedback. The inclusion of radiative feedback was found to be too effective at suppressing the increased star formation after the merger, leading to no/very weak PSB signatures in the resulting galaxy (see Zheng et al., 2020). Specifically, these were simulations SaSc00, SaSd00, 2xSc00, ScSd00, SaSc07, SaSd07, 2xSc07, ScSd07 and 2xSd07. The initial gas mass fractions of the progenitors are 0.17, 0.22 and 0.31 for Sa, Sc and Sd, respectively. From each of the 10 SPH simulations, we extract 10 post-burst spectra equally spaced in time from the peak of the starburst to the end of the simulation. We do this by discarding star particles formed after each time point, then inputting the remaining particles' stellar metallicities and ages (shifted to the new time of observation) into Bagpipes, which then constructs the integrated spectrum from SSPs in the same way as our models. We then measure the H\(\delta_{\rm A}\) and W(H\(\alpha\)) from the integrated spectra, and check whether they pass our selection criteria (Section 2). This results in 46 simulated PSB spectra at \(0.11-0.71\) Gyr since the peak SFR of the starburst. Similar to the example shown in Figure 2, all chosen simulations exhibit a rapid stellar metallicity increase during the starburst, leading to a much higher recent (\(t<1.5\) Gyr) mass-weighted metallicity than before the starburst (\(t>1.5\) Gyr).

The mean offset and fitting uncertainty for both constant and two-step metallicity models are presented in Table 4. The two-step metallicity model achieves less bias in SFH-related parameters (total mass formed, \(t_{\rm M,young}\) and \(f_{\rm young}\)), \(A_{V}\) and \(\sigma_{\rm disp}\) than the constant model, for a wide range of PSBs with varying SFHs and chemical evolution, ages and scatter. The two-step metallicity model returns a small mean offset in both metallicity measurements, which indicates the model is able to accurately recover the metallicity change for a broad range of simulated PSBs.

We note that among the suite of recovery tests there are several outliers with larger offsets in metallicity estimated with the two-step metallicity model than with the constant model. These are limited to models with a starburst peaking recently (0.2 - 0.4 Gyr in lookback time). For these to be selected as PSBs, they have a correspondingly rapid quenching time (e-folding timescale \(\tau\sim 50\) Myr). In this case, the two-step metallicity model can suffer from a degeneracy between the true solution and a slightly older starburst with higher burst mass, longer quenching timescales and a declining stellar metallicity. In most cases where the two-step model fails to recover the correct metallicity evolution, the constant model also suffers from a similar older, more massive starburst degeneracy, albeit to a less severe degree. PSBs with such recent starbursts are not found in our observed sample (Table 5). Therefore, we do not expect this to be a significant concern for our results.
\begin{table} \begin{tabular}{l l l l l} \hline & \multicolumn{2}{c}{Constant metallicity model} & \multicolumn{2}{c}{Two-step metallicity model} \\ \cline{2-5} Parameter & Mean offset (\(\Delta\)) & Mean \(1\sigma\) uncertainty & Mean offset (\(\Delta\)) & Mean \(1\sigma\) uncertainty \\ \hline \(\log_{10}(\mathrm{M_{*}/M_{\odot}})\) (1) & -0.186 & 0.013 & -0.048 & 0.012 \\ \(t_{\rm M,old}\) / Gyr (2) & -3.26 & 0.19 & -1.55 & 0.34 \\ \(t_{\rm M,young}\) / Gyr (3) & 0.153 & 0.009 & 0.027 & 0.015 \\ \(f_{\rm young}\) (4) & 0.108 & 0.008 & 0.019 & 0.005 \\ \(Z_{\rm M,before\ peak}\) / \(Z_{\odot}\) (5) & 0.463 & 0.041 & 0.007 & 0.020 \\ \(Z_{\rm M,after\ peak}\) / \(Z_{\odot}\) (6) & -0.940 & 0.041 & -0.276 & 0.040 \\ \(A_{V}\) / mag (7) & -0.067 & 0.021 & 0.025 & 0.009 \\ \(\sigma_{\rm disp}\) / km/s (8) & 1.48 & 0.64 & 0.21 & 0.60 \\ \hline \end{tabular} \end{table}

Table 4: The offsets (\(\Delta\), median of posterior - input truth) from 46 parameter recovery tests fitting SPH-derived PSB spectra with the constant and two-step metallicity models. We list here the mean offset and fitting uncertainty averaged across all 46 simulated spectra. Parameters (1), (7) and (8) follow the meanings in Section 3, while the other parameters are (2) mass-weighted mean stellar age before lookback time \(t=1.5\) Gyr, (3) mass-weighted mean stellar age after lookback time \(t=1.5\) Gyr, (4) fraction of mass formed within \(t<1.5\) Gyr, (5) mass-weighted mean stellar metallicity before the corresponding simulation's peak SFR of the starburst, and (6) mass-weighted mean stellar metallicity after the corresponding simulation's peak SFR of the starburst. For a wide range of SFHs, metallicity evolution and dust properties, the two-step metallicity model shows smaller offsets than the constant model.

Figure 4: Simulation-based parameter recovery test, comparing the results of fitting the constant (blue) and two-step (orange) metallicity models to mock spectra generated from the star particles of an SPH merger simulation (from Zheng et al. 2020, see Figure 2). Most panels are as per Figure 3. Panel (C) additionally presents the stellar metallicity evolution; lines and shaded regions have the same meaning as for the SFH panels. The vertical magenta line marks a lookback time of 1.5 Gyr, before and after which we calculate the fraction of mass formed within \(t<1.5\) Gyr (\(f_{\rm young}\)) and the mass-weighted mean stellar age within \(t<1.5\) Gyr (\(t_{\rm M,young}\)) shown in panels (G) and (H). In panel (I), the top-most black vertical line indicates the mass-weighted metallicity throughout the simulation, while the following two are the mass-weighted metallicity of stars formed before and after the peak of the starburst. The two-step metallicity model out-performs the constant model in the recovery of all parameters.

To understand the spectral features that drive the better parameter recovery by the two-step metallicity model, Figure 5 compares the fitted spectra from the two metallicity models. The lower three plots in each panel of Figure 5 show the fitting residuals, the residuals smoothed by a mean boxcar of width 11 pixels, and the fitted spectra's GP noise contributions. Vertical coloured regions mark the wavelengths of the Lick indices (Worthey et al., 1994; Worthey & Ottaviani, 1997) and the indices CNB and H+K from Brodie & Hanes (1986). A goodness-of-fit measurement, the reduced chi-squared value \((\chi^{2}_{\nu})\), is noted for both models in the lower panels.
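The residual diagnostics shown in Figure 5 are simple to reproduce. A minimal sketch (hypothetical array names; note that the \(\chi^{2}_{\nu}\) quoted in our figures is evaluated on the maximum likelihood posterior sample spectrum, including the GP noise) is:

```python
import numpy as np

def boxcar_smooth(residual, width=11):
    """Smooth a fitting residual with a mean boxcar of the given width (pixels)."""
    kernel = np.ones(width) / width
    return np.convolve(residual, kernel, mode="same")

def reduced_chi_squared(flux_obs, flux_model, flux_err, n_free_params=0):
    """Reduced chi-squared of a model spectrum against an observed spectrum."""
    chi2 = np.sum(((flux_obs - flux_model) / flux_err) ** 2)
    return chi2 / (flux_obs.size - n_free_params)
```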
At first sight the fitted spectra from both metallicity models appear to be very well matched with the mock spectrum, both with reduced chi-squared values close to unity. However, the smoothed residual reveals differences at all wavelengths. The two-step model's smoothed residual is smaller particularly within the iron and magnesium metallicity indices and the calcium H+K lines. This is consistent with the better fit to stellar metallicities obtained when using the two-step model. The constant metallicity model's GP noise component led to a best fit model with a prominent slope (the red-end is \(2.51^{+0.38}_{-0.36}\times 10^{-16}\) erg s\({}^{-1}\) cm\({}^{-2}\) A\({}^{-1}\) higher than the blue-end), while the physical spectrum is significantly bluer than the input mock spectrum. This could arise from a combination of incorrectly estimated dust attenuation curve, incorrect metallicity or SFH properties, leading to the larger offsets for all properties in Table 4. The two-step metallicity model's GP noise component has a much smaller amplitude at all wavelengths (amplitude \(RMS=4.2^{+8.4}_{-2.8}\times 10^{-19}\) erg s\({}^{-1}\) cm\({}^{-2}\) A\({}^{-1}\), small enough to be hidden behind the dashed line), and no overall slope. This indicates that the two-step model's higher degree of flexibility allows for a better approximation of the mock spectrum generated from simulation, and only minor corrections from the GP noise are required. To summarise, the two-step metallicity model allows for metallicity evolution during the starburst to be traced without significant estimation biases or reduction in fitting precision due to the increased complexity, while allowing for better recovery of the SFH and its parameters of the galaxy. ## 5 Results We fit all 50 PSBs with the two-step metallicity model described in Section 3, including the new GP correlated noise model. 5/50 (10%) resulted in fitted GP noise components showing obvious trends across the fitted spectral range, with amplitudes much larger than the observational uncertainty scaled by the posterior median uncorrelated noise amplitude (\(s\) in Equation 5). This can potentially indicate additional sources, such as AGN or foreground/background sources in their fitted spectra, or complex stellar/dust properties that cannot be adequately fit with the model. Due to these considerations, their results are excluded from further analysis (Plate-IFU: 7965-1902, 8080-3702, 9494-3701, 9495-3702, 9507-12704). All remaining 45 were found to have clear PSB SFHs, where rapid quenching follows an episode of strong starburst. In general, the PSB regions underwent starburst \(\sim 1\) Gyr before observation, with the youngest starburst occurring \(\approx 0.45\) Gyr ago. The starbursts have a wide range in mass fraction (\(\approx 0.06-0.94\)). The fitted SFH properties, metallicity levels, \(A_{V}\) and reduced chi-squared of the maximum likelihood posterior sample spectrum are reported in Table 5 and discussed in the following sections. All fitted SFHs and metallicity evolution are plotted in Appendix B. Figure 6 shows an example fit. In the top panel, the posterior best fit spectrum (orange) provides a visibly good fit to the observed PSB-region-stacked MaNGA spectrum (black), as is also seen by the small and normally distributed fitting residuals in the middle panel and the near unity reduced chi-squared. The posterior spectrum contains the median physical model spectrum (blue) and the additive GP noise component (bottom panel). 
The latter does not exhibit an overall slope and has amplitude comparable to the observational uncertainty scaled by the posterior median uncorrelated noise amplitude (dark blue dashed line), which are signs of a well-behaved model. In the top panel, the physical spectrum is further separated to show dust-attenuated light contributions from the starburst (lime) and the older components (red). The split is placed at the time of minimum SFR between the old population and the starburst. This PSB has a burst mass fraction of \(f_{\rm burst}=0.24^{+0.05}_{-0.03}\), and a burst age of \(t_{\rm burst}=0.61^{+0.12}_{-0.03}\) Gyr, leading to a light contribution from the starburst that dominates marginally over the more massive older population in the red end of the optical spectrum, but more significantly at bluer wavelengths.

### Most PSBs increase in stellar metallicity during starbursts

In Figure 7 we present the fitted posterior median metallicity levels before and after the starburst with 1\(\sigma\) contours to indicate posterior uncertainties. Most PSBs (31/45, 69%) lie above the diagonal constant metallicity line at \(>1\sigma\) (20/45, 44% at \(>3\sigma\)), indicating these PSBs experienced a significant increase in stellar metallicity during the starburst, many of which increased to more than \(5\times\) the original metallicity (galaxies lying above the upper dotted line). A smaller fraction (4/45, 9%) of PSBs are found to instead drop in metallicity at \(>1\sigma\) (1/45, 2% at \(>3\sigma\)), while the remaining portion (10/45, 22%) have constant metallicity within the 1\(\sigma\) errors (24/45, 53% within \(3\sigma\)). Since estimating properties of the older stellar population is usually more challenging, the pre-burst stellar metallicity tends to have larger uncertainty. This uncertainty further increases where PSBs are found to have high burst light fractions (\(f_{\rm burst,L}\)) due to heavy outshining of the older population's light contribution. We have therefore separated the sample at \(f_{\rm burst,L}=0.9\) (calculated by integrating over the full fitted wavelength range) in Figure 7. The objects with high burst light fraction are seen to cluster around the line of constant metallicity (thick dashed line) but with large uncertainty, consistent with no change in metallicity being the a priori assumption given by the two-step metallicity model with independent and identical priors on metallicities. Excluding these 13, the proportion of PSBs that experience a net positive change in stellar metallicity at \(>1\sigma\) is 82% (27/33).

### A recovered mass-metallicity relation, both before and after starburst

As reviewed in Section 1, the mass-metallicity (MZ) relation has been used in the literature to infer the dominant quenching timescales that build up the red sequence. It is interesting to compare the stellar mass and metallicity properties of our sample of PSBs to the MZ relation of local star-forming and quiescent galaxies, as the nature of PSBs being found soon after rapid quenching can provide insight into the impact of these quenching processes on chemical evolution. The top left panel of Figure 8 shows our PSB sample's mass-weighted stellar mass-metallicity relation before the starburst occurred. Also shown are the MZ relations from three studies of local galaxies: star-forming SDSS galaxies, mass-weighted (Panter et al., 2008); SDSS, all types, light-weighted (Gallazzi et al., 2005); star-forming and passive SDSS galaxies, light-weighted (Peng et al., 2015).
Our PSBs broadly follow the known MZ relation where metallicity increases with mass, especially when we consider only the more reliable lower burst light fraction galaxies (magenta dots). This indicates that prior to the starburst, the PSB progenitors are consistent with being drawn from the underlying star-forming population, exhibiting no atypical chemical properties. The top right panel of Figure 8 shows our PSB sample's mass-weighted stellar metallicity during the starburst, which shows no observable correlations with total stellar mass, suggesting starbursts disrupt the MZ relation. In the bottom left panel, we show the overall mass-weighted stellar metallicity of the PSBs, which exhibit a remarkable agreement with the light-weighted mass-metallicity relation of passive galaxies from Peng et al. (2015). It is remarkable since local PSBs, being only recently quenched quiescent galaxies, might not be expected to be representative of the galaxy population at \(z=0\). The difference between mass-weighted and light-weighted MZ relations should not affect our conclusions, since the difference is minor for quiescent galaxies (Trussler et al., 2020).

Figure 5: Comparing the spectral fitting performance of the constant (blue) and the two-step (orange) metallicity models applied to a mock PSB spectrum generated from an SPH merger simulation (Figure 4). The spectrum is split into two rows for clarity. In each row, the main panel shows the input mock spectrum (black) and the best-fit spectra from the two metallicity models (blue and orange). The lower panels show the fitting residuals, residuals smoothed by a mean boxcar of width 11 pixels, and the GP noise contribution to each metallicity model's best-fit spectral model, respectively. The orange GP noise curve has a much smaller amplitude compared to the blue curve and largely lies along the black dashed line. All \(y\)-axes have the same units. The vertical grey bars indicate masked regions where emission lines would appear in the MaNGA data. Coloured bars mark the wavelengths of common spectral indices. Reduced chi-squared values of the maximum likelihood posterior sample spectrum (including GP noise) for both fits, \(\chi^{2}_{\nu}\), are also shown. The two-step model achieves a better fit of the SPH PSB mock, particularly in the spectral regions of the iron and magnesium metallicity indices and the calcium H+K lines.

In the bottom right panel of Figure 8 we compare the difference between overall mass-weighted metallicity and the pre-burst metallicity, with the difference between passive and star-forming galaxies from Peng et al. (2015). In both cases the differences decrease with increasing \(M_{\star}\). The matching trends point towards the PSB phase as a valid process that can create the large gap found between the star-forming and passive MZ relations reported in the literature. The implications are discussed in Section 6.

## 6 Discussion

Our results show that most PSBs in our sample underwent an increase in stellar metallicity during the starburst phase, some very substantially. This indicates that the effect of stellar enrichment from the rapid recycling of gas during multiple rapid generations of star formation usually outweighs the combined effects of metal dilution from gas inflow and metal removal via outflow.
In this section, we draw together studies from the literature to explain the metallicity changes we observe in PSB galaxies, and discuss the implications of our results for the role of post-starburst galaxies in galaxy evolution more generally.

Figure 6: Example of a fitted stacked MaNGA spectrum and the contributing components (10838-1902, the top galaxy in Figure 1). **Top:** The stacked observed spectrum created by combining only spaxels classified as PSB (black), the posterior best fit spectrum and its distribution (orange line and orange shaded region), which includes contributions both from the physical model and GP noise. The physical model (blue) can be separated to show light contributions from the starburst (lime and lime shaded region) and the older components (red and red shaded region). The reduced chi-squared value of the maximum likelihood posterior sample spectrum (including GP noise), \(\chi^{2}_{\nu}\), is shown. **Middle:** The fitting residual (orange), defined as the observed stacked spectrum (black curve) minus the posterior best-fit spectrum (orange curve). The light blue line and the blue dashed line show the input observational uncertainty before and after scaling by the fitted noise scaling factor \(s\), respectively. An increase of around \(\times 3-5\) is typically required. **Bottom:** The fitted GP noise component and its distribution in orange, with blue curves as above. The majority of the fitted GP noise flux lies below the scaled observational uncertainty (blue dashed) and there is no obvious global trend; thus, this galaxy is recognised as a good fit. Note that the y-axes have the same units, but the three panels vary in scaling. In all panels, vertical grey bands indicate regions masked due to skyline residuals, strong nebular emission lines or Balmer infilling.

\begin{table} \begin{tabular}{l l l l l l l l l l l} \hline Plate-IFU & \(\log_{10}\) & \(A_{V}\) (mag) & \(\log_{10}\) SFR\({}_{100\mathrm{Myr}}\) & \(t_{\mathrm{burst}}\) (Gyr) & \(f_{\mathrm{burst}}\) & \(\tau_{1/2}\) (Myr) & \(Z_{\mathrm{old}}/Z_{\odot}\) & \(Z_{\mathrm{burst}}/Z_{\odot}\) & \(Z_{\mathrm{diff}}/Z_{\odot}\) & \(\chi^{2}_{\nu}\) \\ (1) & \(M_{\star,\mathrm{PSB}}/M_{\odot}\) (2) & (3) & (M\({}_{\odot}\) yr\({}^{-1}\)) (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) \\ \hline 7961-1901 & \(9.75^{+0.03}_{-0.03}\) & \(0.85^{+0.03}_{-0.03}\) & \(-0.65^{+0.07}_{-0.06}\) & \(1.69^{+0.09}_{-0.09}\) & \(0.81^{+0.11}_{-0.12}\) & \(483^{+56}_{-35}\) & \(1.59^{+1.07}_{-0.26}\) & \(0.95^{+0.14}_{-0.15}\) & \(-0.66^{+0.92}_{-0.15}\) & \(0.950\) \\ 7964-1902 & \(9.19^{+0.08}_{-0.04}\) & \(1.27^{+0.04}_{-0.04}\) & \(-1.45^{+0.12}_{-0.14}\) & \(1.86^{+0.29}_{-0.23}\) & \(0.61^{+0.26}_{-0.19}\) & \(499^{+12}_{-72}\) & \(1.00^{+0.72}_{-0.40}\) & \(1.27^{+0.26}_{-0.22}\) & \(0.27^{+0.61}_{-0.92}\) & \(0.965\) \\ 7965-1902 & – & – & – & – & – & – & – & – & – & 0.878 \\ 8080-3702 & – & – & – & – & – & – & – & – & – & 0.928 \\ 8081-3702 & \(8.76^{+0.06}_{-0.07}\) & \(0.45^{+0.10}_{-0.08}\) & \(-1.02^{+0.08}_{-0.08}\) & \(0.93^{+0.14}_{-0.12}\) & \(0.35^{+0.10}_{-0.08}\) & \(535^{+167}_{-110}\) & \(0.22^{+0.06}_{-0.09}\) & \(3.19^{+0.22}_{-0.40}\) & \(2.96^{+0.22}_{-0.35}\) & \(0.971\) \\ \hline \end{tabular} \end{table}

Table 5: Posterior estimated properties of 50 PSBs from the spectral fitting of stacked MaNGA spaxels. 5 PSBs marked by dashes were poorly fit and are not considered in further analysis.
Columns are (1) MaNGA Plate-IFU, (2) stellar mass within the stacked PSB spaxels, (3) ISM dust attenuation at 5500Å (\(V\) band), (4) the \(\log_{10}\) SFR within the stacked PSB spaxels averaged over the last 100 Myr, (5) time since the peak of the starburst, (6) fraction of mass formed during the starburst, (7) SFR halving timescale of the starburst, (8) stellar metallicity before the burst, (9) stellar metallicity during and after the burst, (10) change in metallicity, and (11) reduced chi-squared value of the maximum likelihood posterior sample spectrum. The full table is available as supplementary online material.

### On the implications of our results for the origin of PSBs

Galaxy mergers have been a popular suggested trigger for local PSBs, supported by the large fraction of faint tidal features or companion galaxies (Zabludoff et al., 1996; Chang et al., 2001), high shape asymmetry (Pawlik et al., 2016; Wilkinson et al., 2022), neural-network-determined post-merger classifications (Wilkinson et al., 2022) and unusual features in high spatial resolution images (Sazonova et al., 2021). Recent mergers also exhibit a higher fraction of PSBs than non-merging galaxies (Ellison et al., 2022; Li et al., 2023). Observationally, a lowering of gas-phase metallicity is observed in both starbursts and pairs of interacting galaxies (Kewley et al., 2006; Rupke et al., 2008; Ellison et al., 2008), apparently in contradiction to the results in this study. We explore this further below.

Simulations demonstrate how the disruption of the gravitational potential caused by a galaxy merger leads to strong torques that can drive a rapid gas inflow to the central regions, compress it and fuel a strong starburst (Barnes & Hernquist, 1991, 1996). Forward modelling of these simulations has consistently shown this to be a reliable way to create galaxies with post-starburst spectral features and morphologies (Bekki et al., 2005; Wild et al., 2009; Snyder et al., 2011; Davis et al., 2019; Pawlik et al., 2019; Zheng et al., 2020). Comparatively fewer studies have focused on the chemical evolution of galaxies during gas-rich merger-induced starbursts. Since the outer regions of a star-forming galaxy are typically more metal-poor (e.g. Matteucci & Francois, 1989; Zaritsky et al., 1994), the inflow of substantial gas driven by the disrupted gravitational potential would lead to a net decrease in central stellar metallicity, so long as the impact of stellar enrichment from the starburst is sufficiently weak. Initial hydrodynamic simulation studies found that gas funnelling events initially decrease the central gas-phase metallicity by diluting the existing relatively high metallicity gas, smoothing the negative radial metallicity gradients common to most star-forming galaxies (Perez et al., 2006; Rupke et al., 2010; Lahen et al., 2018), in agreement with the lowered gas-phase metallicity observed in local interacting and starburst galaxies. Torrey et al. (2012) conducted ensembles of SPH simulations of major merging pairs of star-forming galaxies, finding that the change in stellar metallicity during the resulting starburst depends on the gas fractions of the progenitor galaxies: progenitors with low gas mass fractions tend to decrease in stellar metallicity due to strong dilution from inflowing metal-poor gas during the merger, while progenitors with higher gas mass fractions tend to increase in stellar metallicity due to the stronger starburst and greater stellar enrichment. Perez et al.
(2011) found that gas-rich mergers drive a net increase in gas-phase metallicity of comparable magnitude to Torrey et al. (2012), though mainly caused by rapid increases in SFR due to fragmentation of the discs before merger-driven inflow occurs. On the other hand, the more modern simulated major mergers from Zheng et al. (2020) used in this work produce metallicity increases with only a weak trend with gas mass fractions. Orbits that produce stronger starbursts induce stronger metallicity enhancements, and minor mergers require higher gas fractions to achieve the same strength of starburst and therefore metallicity enhancement. Results are sensitive to the AGN feedback prescription used, as this impacts the strength of the starburst. Evidently there is scope for further simulation work on the chemical evolution of galaxies during gas-rich merger-induced starbursts, to understand the impact of resolution, code type, AGN and chemical evolution modelling on the final properties of the descendants. The further development of semi-analytic models (e.g. Molero et al., 2023), for the specific case of low redshift elliptical formation, may also prove fruitful.

Gas-rich galaxy mergers are not the only plausible cause of post-starburst galaxies in the local Universe. Ram-pressure stripping in dense galaxy clusters can compress the cold interstellar gas reservoir (Lee et al., 2017), potentially leading to enhanced star formation in affected galaxies (Vulcani et al., 2020; Roberts et al., 2022), followed by rapid quenching (Poggianti et al., 2019; Werle et al., 2022). Although initially identified in clusters (Dressler & Gunn, 1983), it is important to note that PSBs are predominantly located in field environments (e.g. Quintero et al., 2004; Blake et al., 2004; Goto, 2005; Wild et al., 2009; Pawlik et al., 2018). The precise enhancement in the fraction of PSBs in dense clusters is still debated and may depend critically on redshift, stellar mass, and cluster and PSB selection methods (e.g. Poggianti et al., 2009; Vergani et al., 2010; von der Linden et al., 2010; Socolovsky et al., 2018; Paccagnella et al., 2019; Wilkinson et al., 2021). Interestingly, lower stellar mass galaxies that are undergoing ram-pressure stripping are found to have elevated gas-phase metallicities compared to galaxies in both field and cluster environments of the same mass (Franchetto et al., 2020), which might be a result of the increased stellar enrichment from ram-pressure compression without a significant dilution effect from metal-poor gas inflow. This would produce a rise in metallicity after a starburst, at least qualitatively similar to the metallicity increase seen in the majority of our PSB sample.

Figure 7: Pre-burst and post-burst median posterior stellar metallicities of PSB regions in MaNGA galaxies. The right panel is a zoomed-in view of the region bound by red dashed lines in the left panel. PSBs with a less dominating burst light fraction (posterior median \(f_{\rm burst,L}<0.9\)) are shown in magenta dots, otherwise in dark green triangles. The contours correspond to 1\(\sigma\) regions (enclosing the top 39.3% of marginalised posterior probability), highlighting estimation uncertainties and degeneracies. The dashed black diagonal line marks constant stellar metallicity, while the dotted lines mark a 5\(\times\) and 0.2\(\times\) change in metallicity. Most PSB regions are found to increase in stellar metallicity during the starbursts.
To investigate further, we cross-matched our final sample of 45 well-fitted PSBs with the GEMA-VAC cluster catalogue (Argudo-Fernandez et al., 2015, version DR17), finding 11/45 to be members of rich clusters (member galaxies \(>100\)). This is around twice as high as a control sample (12.9%) of galaxies from MaNGA DR17 matched in total stellar mass and D\({}_{4000}\) stellar index at 1R\({}_{\rm e}\) (MaNGA-Pipe3D, Lacerda et al., 2022; Sanchez et al., 2022; for D\({}_{4000}\), see Bruzual, 1983). However, we do not find any observable difference in the metallicity change and post-burst metallicity distributions of PSBs that are within rich clusters, compared to those that are not. Additionally, the PSBs and controls showed no significant difference in their distribution of local density as defined by the projected density to the fifth nearest neighbour (2-sample KS test \(p=0.77\)). Therefore, the importance of environmental processes is unclear from our present sample.

Figure 8: Stellar mass-metallicity relations of PSB regions both before (upper left) and during the starburst (upper right), and overall mass-weighted (bottom left). PSBs with a less dominating burst light fraction (posterior median \(f_{\rm burst,L}<0.9\)) and a higher burst light fraction (posterior median \(f_{\rm burst,L}\geq 0.9\)) are marked with magenta dots and dark green triangles, respectively. All values are plotted against the estimated stellar mass of the whole galaxy from the NSA catalogue. Stellar mass-metallicity relations from the literature are also plotted for comparison, as indicated in the legends. The dashed black lines mark the 16th and 84th percentiles from Gallazzi et al. (2005). The bottom right panel compares the difference between overall mass-weighted and pre-burst metallicity with the difference between the passive and star-forming relations from the literature. The PSB pre-burst metallicities are found to agree with the Peng et al. (2015) star-forming relation, while the overall mass-weighted metallicities agree with the Peng et al. (2015) passive relation, suggesting the PSB phase can explain the wide gap between the two literature relations.

While we find that the majority of PSBs in our sample have undergone significant increases in metallicity, a small number have experienced a metallicity drop. The most straightforward cause of a metallicity drop is strong inflow of metal-poor gas, with the inflow triggering a starburst that either does not produce enough metals to counteract the effects of dilution, or whose metals are preferentially expelled by outflows (Chisholm et al., 2018). We verified that there was no systematic correlation between the change in metallicity and the size of the PSB regions, either in absolute size or relative to \(R_{e}\), as might occur if the different patterns in metallicity evolution were caused by notably different merger types, or different processes entirely. The uncertainty in the simulations means that we cannot rule out mergers as a plausible trigger for either set of PSBs.

### On the implications of our results for quenching pathways

The evolution of stellar metallicity of galaxies has the potential to provide insight into the relative importance of different proposed quenching mechanisms. Peng et al.
(2015) measured the mass-metallicity (MZ) relation of star-forming and passive galaxies from SDSS, finding passive galaxies to be significantly more metal-rich than star-forming galaxies with the same total stellar mass, with the gap widening for lower mass galaxies (see Fig. 8, lower right). The authors conclude that the large MZ gap rules out quenching mechanisms that act on short timescales as a major contributor to quenching. They argue this is because rapid quenching would prevent a significant increase in stellar metallicity as galaxies become passive, predicting little to no MZ gap, which is inconsistent with their observations. Instead, they favour slower mechanisms such as the strangulation of gas inflow, which allows quenching galaxies to increase their metallicity from the star-forming MZ to the passive MZ through stellar enrichment, given enough time. Trussler et al. (2020) largely agrees with Peng et al. (2015), but additionally proposes that the ejection of gas through outflows plays a minor role in quenching.

However, Figure 8 shows a good agreement between the mass-weighted metallicity evolution of our sample of PSBs and the star-forming-passive MZ gap. This indicates that the PSB phase, with a relatively short starburst and the rapid quenching that follows, is sufficient to provide the observed metallicity enhancement as galaxies move from the blue cloud onto the red sequence. Our results suggest long-term processes such as starvation are not the only viable pathways to explain the MZ gap, as has previously been suggested. This result has global implications for galaxy evolution, because both observations (e.g. Wild et al., 2009, 2016, 2020; Whitaker et al., 2012; Belli et al., 2019; Taylor et al., 2023) and simulations (e.g. Rodriguez Montero et al., 2019; Zheng et al., 2022; Walters et al., 2022) have found that PSBs and rapidly-quenched galaxies could contribute significantly to the growth of the red sequence at \(z>0.5\). Studies of the evolving stellar mass function of the red sequence have found it to grow rapidly at least until \(z=1\) (e.g. Ilbert et al., 2013; Muzzin et al., 2013), with growth slowing or stalling by \(z=1\) for \(\log_{10}(M_{*}/M_{\odot})>10.5\) galaxies (Ilbert et al., 2013; Rowlands et al., 2018). Therefore, a large fraction of the present-day red sequence likely quenched prior to \(z=1\), meaning a significant fraction of the local red sequence may have arrived there following a PSB phase. Making the leap that our local PSBs are likely undergoing similar chemical evolution to that experienced by PSBs at \(z>1\), we thus conclude that short-lived starbursts followed by rapid quenching might be a significant contributor to the observed MZ gap in local galaxies.

### On the implications of our results for the chemical evolution of starbursts

Previous detailed theoretical work on the impact of bursty star formation on metallicity, and chemical abundance patterns more generally, has focused on local and relatively weak fluctuations in star formation rate, as might have occurred within regions of the Milky Way. On small scales, such as within the solar neighbourhood or within dwarf galaxies such as the Magellanic Clouds, periods of increased efficiency of star formation (i.e. \(\epsilon=\rm SFR/M_{gas}\)) will lead to an increase in metallicity due to gas recycling and stellar enrichment (e.g. Weinberg et al., 2017; Johnson and Weinberg, 2020).
However, on global galaxy-wide scales, evidence for substantial enhancements in star formation efficiency in starbursts is still unclear, with inferred differences potentially driven by uncertainties in which CO-to-H\({}_{2}\) conversion factor to assume in different galactic environments (see Kennicutt and Evans, 2012 for a review, and Tacconi et al., 2018 for a summary of relevant results). We might expect the super-solar metallicity starbursts that we infer occurred in the recent past history of our PSB sample to be visible in analyses of the gas-phase mass-metallicity relation. The SFR dependence of the mass-metallicity relation for star-forming galaxies has been much debated in the past decade. Barrera-Ballesteros et al. (2017) use a sample of spatially resolved MaNGA galaxies to argue for no dependence of the MZ relation on star formation rate, and in particular there is no noticeable increase in metallicity at high sSFR in their data. However, our sample will represent \(<1\%\) of the star forming population so will not be captured in large numbers in blind surveys. Previous studies have suggested extreme LIRGs or ULIRGs in the local Universe as progenitors to local PSBs, with LIRGs and ULIRGs having similarly low number densities (Hopkins et al., 2008; Cales et al., 2011; French et al., 2015; Pawlik et al., 2018). The metallicity of such extreme starbursts is very difficult to estimate due to dust obscuration. A recent study by Chartab et al. (2022) used mid IR strong line metallicity diagnostics to show that gas in local ULIRGs is not metal deficient as previously reported using standard optical line diagnostics. The difference arises due to dust obscuring the more metal enhanced star forming regions, and places ULIRGs firmly on the local MZ relation. Further work is clearly needed to verify whether super solar gas can be identified robustly in extreme starburst regions of local galaxies. We searched for correlations between the stellar metallicity evolution and SFH in our PSB sample, which could further elucidate any relations between starburst properties and chemical evolution. However, the potential for sample selection effects to impact observed relations made it difficult to draw firm conclusions, and we therefore leave this to future work. ### Caveats There are a number of caveats that are worth keeping in mind with regards to our study. The most important are the fact that we fit only the spatially resolved pre-selected PSB regions of the galaxies, and the lack of alpha enhanced stellar population models. We chose to fit only the PSB regions in the galaxies in order to simplify the SFH of the integrated spectrum, improving the accuracy of our results for these regions. By selection these regions are centrally located, and therefore represent the majority of light and mass in the galaxy, but some are more inclusive of the entire galaxy than others. Systematic correlations between the spatial extent of the PSB regions and a number of fitting galaxy properties are found in our sample. In particular, galaxies with larger PSB regions tend to have lower burst mass fractions but more rapid quenching, while those with smaller PSB regions have an even spread across the whole prior range. Further work is needed to explore the relation of these regions to the wider galaxy, and whether there are correlations between chemical evolution and the PSB spatial extent. While alpha enhanced SSPs are available for older stellar populations (e.g. 
ALF, Conroy and van Dokkum, 2012; Conroy et al., 2018), and have been directly compared to Bagpipes (Carnall et al., 2022), these models are not suitable for young or intermediate age (\(\lesssim 1\) Gyr) stellar populations as found in our galaxies. Our tests on mock PSBs did not investigate the possibility of systematic uncertainties in the stellar population models, relying on the GP noise component to model them out. Further work could be done to explore the ability of the GP noise to model such rest-frame uncertainties, for example via the creation of models using one set of SSPs and fitting with a different set. Given that there are a wide variety of different spectral features that are better fit by the two-step metallicity model in Figure 5, and we do not see any evidence for the alpha elements being worse fit than the non-alpha elements, we do not believe alpha enhancement could be driving any of the results observed here.

To consider selection effects introduced by our PSB classification scheme (Section 2), we calculate the theoretical selection fraction of galaxy models within our prior space as a function of parameters of interest. This is done by first randomly drawing \(10^{6}\) mock galaxy models from the assumed SFH and metallicity priors, constructing mock spectra and measuring the spectral features H\(\delta_{\rm A}\) and W(H\(\alpha\)). The fraction of mocks classified as PSB through our classification scheme is taken as the theoretical selection fraction. Although we found slight selection trends in a variety of physical properties (e.g. both older, weaker bursts and younger, slower-decay bursts are less likely to be selected as PSBs), these trends were not of a kind that could cause the metallicity results presented here.

As shown in Figure 1, we do not apply any cut on inclination during sample selection, and both edge-on and face-on PSBs are included in our sample. To verify the effect of inclination on our results, we extracted the 2D Sersic fit axis ratio (b/a) from the NSA catalogue (Blanton et al., 2011), and found insignificant systematic correlations with our fitted galaxy properties in all cases (\(p>0.05\), Spearman ranked correlation test).

## 7 Summary and conclusions

Through selecting and stacking the post-starburst regions of 50 central PSB galaxies from the MaNGA IFU survey, we fit the resulting high-SNR (\(>100\)) stacked spectra with the Bayesian spectral energy distribution fitting code Bagpipes. Taking inspiration from a suite of binary gas-rich merger simulations that created mock PSBs, we implemented a two-step metallicity evolution model where stars formed before and during the starburst are allowed independent metallicities. We reduced the computational time to fit the high-SNR spectra by a factor of 100, by replacing the original Gaussian process kernel used in Bagpipes with a stochastically-driven damped simple harmonic oscillator (SHOTerm), implemented through the celerite2 code. After careful verification of our fitting procedure through ensembles of "self-consistent" and simulation-based parameter recovery tests, we applied our model to the stacked spectra of MaNGA PSB regions to obtain 45 well-fitted results, where for the first time the metallicity evolution of PSB galaxies with rapid SFH changes can be directly measured. Our results lead to the following main conclusions:
A majority (\(31/45,69\%\)) of the PSB regions of galaxies formed significantly more metal-rich stars during the starburst than before (average increase = 0.8 dex with standard deviation = 0.4 dex), while a smaller number of PSB regions formed stars of equal or lower metallicity (Figure 7). This suggests mechanisms that substantially raise stellar metallicity play important roles in the origin of PSBs: the effects of metal enrichment through stellar recycling outweigh those from dilution by gas inflow and metal removal by outflows. 2. This rise in metallicity during the starburst is consistent with simulations of gas rich mergers, agreeing with previous results that mergers are the leading cause of low redshift PSBs. However, we note that there is some disagreement on the impact of mergers on chemical enrichment in simulations, and more work needs to be done to corroborate the results from the Zheng et al. (2020) simulations used here. 3. A good agreement is found between the PSBs' pre-burst metallicity and star-forming mass-metallicity relations from the literature (Figure 8, top left). This is consistent with PSBs being drawn from the underlying population of star-forming disk galaxies as expected. 4. The PSBs' final mass-weighted mass-metallicity relation matches the local passive mass-metallicity relation. This suggests that the stellar metallicity evolution caused by rapid quenching following a starburst is entirely consistent with the observed gap in the stellar mass-metallicity relations between local star-forming and passive galaxies. Our results further validate the idea that rapid quenching following a starburst phase may be an important contributing pathway to the formation of the local quiescent galaxy population. In this study we have focused on galaxies with central PSB features. Further work will be required to understand the importance of these features' spatial extent and how they compare to galaxies with other PSB spatial distributions (e.g. ringed and irregular PSBs, Chen et al., 2019). The measurement of alpha enhancement in PSBs can allow for more precise timing of their starburst and quenching. Although difficult to obtain for recently quenched systems, alpha enhancement might be detectable in PSBs with older starbursts, for instance through the methods of Conroy et al. (2018). Lastly, further simulation work on the chemical evolution of galaxies during starbursts and rapid quenching is required, to understand the effects of AGN, shocks, stellar feedback, mergers/interactions and environments on chemical evolution. ## Acknowledgements We thank Dan Foreman-Mackey, Kartheik Iyer and Joel Leja for their assistance in using celerite2, Dense Basis, and Prospector, respectively. We thank Justus Neumann and Yingjie Peng for providing data. We also thank Justin Otter, Kate Rowlands and Omar Almaini and others in the UDS/PSB collaboration for useful discussions and insightful feedback. We thank Natalia Lahen for feedback on the manuscript. We thank Lath Taj Adlecen for verification of fitting methods. VW and NFB acknowledge support from STFC consolidated grant ST/V000861/1. P.H.J. acknowledges support from the European Research Council via ERC Consolidator Grant KETJU (no. 818930). H-H.L thanks Alfie Russell and Sahyadri Krishna for assistance in language and phrasing. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. 
SDSS-IV acknowledges support and resources from the Center for High Performance Computing at the University of Utah. The SDSS website is www.sdss4.org. SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics | Harvard & Smithsonian, the Chilean Participation Group, the French Participation Group, Instituto de Astrofisica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut fur Astrophysik Potsdam (AIP), Max-Planck-Institut fur Astronomie (MPIA Heidelberg), Max-Planck-Institut fur Astrophysik (MPA Garching), Max-Planck-Institut fur Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatario Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autonoma de Mexico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University. _Software:_ Astropy (Astropy Collaboration et al., 2013), Bagpipes (Carnall et al., 2018, 2019a), celerite2 (Foreman-Mackey et al., 2017; Foreman-Mackey, 2018), Dense Basis (Iyer et al., 2019), Marvin (Cherinka et al., 2019), Matplotlib (Hunter, 2007), MultiNest (Feroz & Hobson, 2008), Numpy (Harris et al., 2020), pipes_vis (Leung et al., 2021), Prospector (Johnson et al., 2021), pyMultiNest (Buchner et al., 2014), Scipy (Virtanen et al., 2020), Seaborn (Waskom, 2021). For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising. ## Data Availability All utilised MaNGA data are publicly available at the SDSS database [https://www.sdss4.org/dr17/](https://www.sdss4.org/dr17/) or through Marvin at [https://dr17.sdss.org/marvin/](https://dr17.sdss.org/marvin/). The stacked spectra and posterior samples of all 50 fitted galaxies are available at [https://doi.org/10.17630/acb496c-1c59-416e-8b73-026799a@c1ca](https://doi.org/10.17630/acb496c-1c59-416e-8b73-026799a@c1ca). Python scripts to recreate the figures in Section 5 are available at [https://github.com/HinLeung622/chemical_evolution_of_PSBs_scripts](https://github.com/HinLeung622/chemical_evolution_of_PSBs_scripts).
Using the stellar fossil record, we quantify the stellar metallicity evolution and star formation histories of post-starburst (PSB) regions, targeting 45 nearby PSB galaxies from the MaNGA survey. The direct measurement is achieved with a new two-step metallicity model that allows the stellar metallicity to change at the peak of the starburst. We also employ a Gaussian process noise model to account for correlated errors caused by the data reduction or inaccuracies of the models. We find that a majority of the PSB regions (69%, at more than 1σ confidence) increased in stellar metallicity during the recent starburst, with an average increase of 0.8 dex and a standard deviation of 0.4 dex. A smaller fraction of PSBs formed stars of unchanged (22%) or decreased (9%, with an average decrease of
2305.00391
Alternately denoising and reconstructing unoriented point sets
We propose a new strategy to bridge point cloud denoising and surface reconstruction by alternately updating the denoised point clouds and the reconstructed surfaces. In Poisson surface reconstruction, the implicit function is generated by a set of smooth basis functions centered at the octnodes. When the octree depth is properly selected, the reconstructed surface is a good smooth approximation of the noisy point set. Our method projects the noisy points onto the surface and alternately reconstructs and projects the point set. We use the iterative Poisson surface reconstruction (iPSR) to support unoriented surface reconstruction. Our method iteratively performs iPSR and acts as an outer loop of iPSR. Considering that the octree depth significantly affects the reconstruction results, we propose an adaptive depth selection strategy to ensure an appropriate depth choice. To manage the oversmoothing phenomenon near the sharp features, we propose a $\lambda$-projection method, which means to project the noisy points onto the surface with an individual control coefficient $\lambda_{i}$ for each point. The coefficients are determined through a Voronoi-based feature detection method. Experimental results show that our method achieves high performance in point cloud denoising and unoriented surface reconstruction within different noise scales, and exhibits well-rounded performance in various types of inputs. The source code is available at~\url{https://github.com/Submanifold/AlterUpdate}.
Dong Xiao, Zuoqiang Shi, Bin Wang
2023-04-30T05:25:39
http://arxiv.org/abs/2305.00391v2
# Alternately denoising and reconstructing unoriented point sets ###### Abstract We propose a new strategy to bridge point cloud denoising and surface reconstruction by alternately updating the denoised point clouds and the reconstructed surfaces. In Poisson surface reconstruction, the implicit function is generated by a set of smooth basis functions centered at the octnodes. When the octree depth is properly selected, the reconstructed surface is a good smooth approximation of the noisy point set. Our method projects the noisy points onto the surface and alternately reconstructs and projects the point set. We use the iterative Poisson surface reconstruction (iPSR) to support unoriented surface reconstruction. Our method iteratively performs iPSR and acts as an outer loop of iPSR. Considering that the octree depth significantly affects the reconstruction results, we propose an adaptive depth selection strategy to ensure an appropriate depth choice. To manage the oversmoothing phenomenon near the sharp features, we propose a \(\lambda\)-projection method, which means to project the noisy points onto the surface with an individual control coefficient \(\lambda_{i}\) for each point. The coefficients are determined through a Voronoi-based feature detection method. Experimental results show that our method achieves high performance in point cloud denoising and unoriented surface reconstruction within different noise scales, and exhibits well-rounded performance in various types of inputs. Point cloud denoising, Poisson surface reconstruction, Projection-based denoising ## 1 Introduction Point clouds are widely applied in a wide range of geometric applications. However, the real scanned point clouds obtained by the sensing technologies typically contain a certain amount of noise and outliers, which significantly reduces their shape representation capacities. When the input point clouds are unoriented (i.e., no consistently oriented normals are given), applying them for surface reconstruction becomes a challenging task. Point cloud denoising is a traditional solution for handling low-quality inputs, which has been extensively studied for more than two decades. The denoising techniques can be classified into several categories. A typical idea is to project the noisy point set onto the estimated local surfaces. However, existing projection-based methods either require consistently oriented normals for robust shape fitting (e.g., MLS-projection [1, 2]) or only considers the local shape properties within a specific neighbor scale (e.g., jet smoothing [3, 4]). In recent years, researchers have made remarkable progress in unoriented surface reconstruction such as VIPSS [5], PGR [6] and iPSR [7]. Among them, iPSR iteratively performs screened Poisson surface reconstruction [8] and updates the point normals from the surface generated by the previous iteration. The algorithm terminates when approximating the iterative fixed point or reaching the maximum number of iterations. iPSR achieves high performance for clean inputs. However, the sample positions are fixed during the iterative process, reducing its performance in point clouds with large noise. Therefore, we wonder if the point positions can also be updated to achieve a higher reconstruction quality. In this work, we bridge point cloud denoising and surface reconstruction by alternately updating the denoised point clouds and the reconstructed surfaces. Our method acts as an outer loop of iPSR. 
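As a rough preview of this outer loop (a sketch only, not the authors' implementation), the following Python snippet alternates a reconstruction step and a projection step. Here `reconstruct` and `closest_points` are assumed caller-supplied wrappers for iPSR and for the closest-point query on the reconstructed mesh, and the optional per-point coefficients `lam` correspond to the \(\lambda_{i}\) introduced later in Section 3.3.

```python
import numpy as np

def alternate_update(points, depths, reconstruct, closest_points, lam=None):
    """Alternately reconstruct a surface from the current points (iPSR) and
    pull each point towards its closest point on that surface.

    reconstruct(points, depth)      -> surface mesh          (assumed iPSR wrapper)
    closest_points(points, surface) -> (n, 3) closest points (assumed helper)
    lam: per-point blending coefficients in (0, 1]; None means full projection.
    """
    P = np.asarray(points, dtype=float)
    surface = None
    for d in depths:                       # e.g. [6, 6, 7, 7, 8] (see Section 3.2)
        surface = reconstruct(P, d)        # S^(k) = f_ipsr(P^(k), d^(k))
        Q = closest_points(P, surface)     # q_i: nearest surface point to p_i
        w = 1.0 if lam is None else np.asarray(lam)[:, None]
        P = (1.0 - w) * P + w * Q          # P^(k+1) = (1 - lambda) P^(k) + lambda q
    return P, surface
```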
Specifically, we project the noisy point set onto the surface generated by iPSR in each iteration. This idea is based on the following observations: in Poisson surface reconstruction, the reconstructed surface is generated by the isosurfacing of an implicit function spanned by a series of smooth basis centered at the octnodes. When a proper octree depth is selected, the reconstructed surface can be regarded as a good smooth approximation of the noisy point cloud. Furthermore, we notice that the depth selection will significantly affect the convergence of iPSR. Accordingly, we propose an adaptive depth selection method based on the convergence situation of iPSR, which is judged by the normal variations in the algorithm. To address the oversmoothing phenomenon in the sharp edges, we propose a \(\lambda\)-projection method by estimating the sharpness ratio and assigning an individual control coefficient for each point. The coefficients are determined based on a Voronoi-based feature estimation algorithm [9]. From the point cloud denoising viewpoint, our method belongs to the projection-based category and iteratively projects the noisy point set onto the surface generated by iPSR with an adaptive depth selection strategy and a \(\lambda\)-projection method. From the unoriented surface reconstruction viewpoint, our method iteratively updates the point positions in the algorithm (instead of updating the normals only) and achieves considerable improvements for noisy inputs. We qualitatively and quantitatively examine the efficacy of our method in point-wise noise, structured noise, outliers and real scanned noise. The experimental results indicate that our method achieves high performance in point cloud denoising and unoriented surface reconstruction tasks, and shows well-rounded performance within different shapes and noise scales. We also verify that our approach can not be substituted by simply adjusting some parameters in iPSR or other denoising and reconstructing approaches. ## 2 Related works In this section, we provide a brief review of point cloud denoising and unoriented surface reconstruction techniques and focus on the works that are most relevant to our method. ### Projection-based point cloud denoising Point cloud denoising has been extensively studied for more than two decades due to its broad applications. Han et al. [10] provide a systematic review involving a wide variety of traditional approaches. A recent survey paper [11] introduces more up-to-date techniques including many interesting learning-based methods. Projection-based denoising is the most relevant category of our method with the main idea to project the noisy point cloud onto the local surfaces approximated by adjacent points. A commonly used local fitting technique is the Moving Least Squares (MLS) surfaces. The seminar work [1] defines a \(C^{2}\) smooth surface through the projection procedure. However, the sharp features may be oversmoothened due to the smoothness property. Fleishman et al. [12] approximate the point set by the feature-preserving piecewise smooth surfaces. Each piece fits the point shape on one side of the sharp edge. APSS [13] utilizes algebraic spheres to define the MLS surfaces, which significantly improves the quality near the high curvature regions. RIMLS [2] applies the robust local kernel regression to achieve feature preserving surface approximation. Majority of the above-mentioned techniques require consistently oriented normals as inputs. 
APSS supports both unoriented and oriented point fitting. However, normals are necessary for achieving the high-fidelity quality. Except for MLS projection, Cazals et al. [3; 4] estimate local shape properties through the Taylor expansion of the local height field. Then, a denoising technique can be realized by projecting the noisy points onto the approximated local surface. The aforementioned method is named as "jet smoothing", which does not require normals as inputs. However, the local fitting is only performed within a user specific neighbor scale. Except for local surface fitting, some other approaches carry out point projection through an optimization manner. Lipman et al. [14] propose a parameterization-free locally optimal projection (LOP) operator, which generates a set of uniform distributed points to approximate the original shape. This technique is further improved by Huang et al. [15] through a robust repulsion term and the adaptive density weights. Liu et al. [16] suggest performing WLOP in an iterative manner. Preiner et al. [17] propose a continuous WLOP formulation based on the Gaussian mixture technique. This work significantly reduces the time resources of the WLOP. However, the above-mentioned methods may lack the feature preserving mechanisms and result in oversmoothing near the sharp edges. To address this issue, Huang et al. [18] propose to detect the sharp regions and carry out a point resampling operation near the sharp edges. Lu et al. [19] propose a GMM-inspired anisotropic projection strategy, which preserves the sharp features by considering filtered normals in the objective function. ### Other point cloud denoising techniques Projection is not the only means to carry out point cloud denoising. Digne et al. [20] and Zhang et al. [21] introduce the bilateral filters for 3D point set filtering. Their methods are mainly inspired by image denoising techniques. Avron et al. [22] and Sun et al. [23] propose the L1 and L0 optimization formulation for point set filtering based on the observation that many surfaces are piecewise smooth. Accordingly, the sparsity of the first order information can be applied to clean the point set. Agathos et al. [24] extend the Taubin smoothing techniques from the mesh denoising to point clouds, and implement a fast GPU version to support large scale data. Wang et al. [25] present a feature-preserving unoriented reconstruction pipeline and separate the entire task into several processes, including noise scale estimation, tangent plane detection, outlier removal, feature detection and noise smoothing. The application of deep neural networks to geometric processing has achieved significant success in recent years due to the rapid advancement of 3D deep learning. Some interesting works learn the local shape properties through point-based 3D networks and obtain considerable results in point cloud denoising and unoriented surface reconstruction [26; 27; 28; 29]. However, the above-mentioned techniques typically require abundant training data to learn the shape priors. ### Surface reconstruction from unoriented point sets Despite a long research history with a considerable number of excellent studies [30; 31; 32; 5], surface reconstruction from unoriented point clouds remains a challenging problem. The main reason is that obtaining a globally consistent normal orientation is always a non-trivial task. Some remarkable studies have emerged in recent years and achieve high-fidelity reconstruction results for clean inputs [6; 7]. 
However, the point positions are fixed during the algorithm, limiting their effects on point clouds with a certain amount of noise. This is why we propose to fill this gap by updating the point positions iteratively. In contrast with these methods, DPSR [33] proposes a differentiable Poisson solver and updates the source point cloud through the backpropagation of the Chamfer loss. However, the computational process of DPSR relies on the GPU, which results in the high resource demand and the restricted grid resolution. ## 3 Method ### Overview Given an input 3D point cloud \(P^{(0)}=\{p_{1},p_{2},...,p_{n}\}\) with noise, our method iteratively updates the reconstructed surface and the denoised point cloud. The two main operators in our algorithm are the reconstruction operator and the projection operation. We apply the iterative Poisson surface reconstruction (iPSR) [7] to carry out unoriented surface reconstruction, which takes a raw point cloud \(P\) and a user-specific octree depth \(d\) as inputs, and generates a smooth surface \(S\) directly from the point positions. This operation can be denoted as follows: \[S=f_{ipsr}(P,d). \tag{1}\] Then, the noisy points are projected onto the surface. Instead of directly projecting the points onto \(S\), we propose a \(\lambda\)-projection operator \(f_{proj}\), which takes a point cloud \(P\) and a surface \(S\) as inputs and projects each point \(p_{i}\in P\) on \(S\) with a control coefficient \(\lambda_{i}\). If \(q_{i}\) is the closest point to \(p_{i}\) on \(S\), then the projected point \(p_{i}^{\prime}\) satisfies: \[p_{i}^{\prime}=(1-\lambda_{i})p_{i}+\lambda_{i}q_{i}. \tag{2}\] This projection operation is denoted as follows: \[P^{\prime}=f_{proj}(P,S). \tag{3}\] Generally, the alternative updating process of our method can be expressed as follows: \[S^{(k)}=f_{ipsr}(P^{(k)},d^{(k)}),\qquad P^{(k+1)}=f_{proj}(P^{(k)},S^{(k)}). \tag{4}\] Our method acts as an outer loop of iPSR. Fig. 1 shows the input point cloud \(P^{(0)}\) and the reconstructed surfaces generated by our method from \(S^{(0)}\)(iPSR) to \(S^{(4)}\). It can be seen that the mesh quality is improved and the noise is cleaned through the iteration. In section 3.2, we will introduce the adaptive octree depth selection strategy to determine the depth \(d^{(k)}\) in Equation 4. In section 3.3, we will introduce how the projection coefficient \(\lambda_{i}\) is selected for each point to address the oversmoothing phenomenon near the sharp edges. ### Adaptive depth selection strategy When carrying out accurate and high-fidelity reconstruction by the traditional Poisson surface reconstruction approach [34; 8], the octree depth is always set to a large value for supporting detailed isosurfacing. In the original conception, a larger octree depth would lead to a better mesh quality, until reaching the maximum depth required for the geometric complexity. However, this situation is not the case when doing unoriented reconstruction for noisy point clouds using iPSR. Firstly, we will provide a brief review of the normal iteration process of iPSR as preliminaries. **iPSR.** Given an unoriented point cloud \(P\) and a user specific octree depth \(d\), iPSR firstly constructs an octree from \(P\) and generates a new sample set \(\mathcal{S}=\{s_{1},s_{2},...,s_{m}\}\) based on the octree Figure 1: Given the input point cloud \(P^{(0)}\), our method iteratively performs reconstruction and projection in every step. 
We show the reconstructed surfaces from \(S^{(0)}\)(iPSR) to \(S^{(4)}\) of our method. The surface quality is improved through the iteration. nodes. Each sample \(s_{i}\) is assigned with an initialized normal \(n_{i}\). The normals can be randomly initialized in the first iteration. We denote the set of normals as \(\mathcal{N}=\{n_{1},n_{2},...,n_{m}\}\). iPSR iteratively updates the sample normals \(\mathcal{N}\) from the surface generated by the last iteration. Specifically, an intermediate surface is generated by inputting \(\mathcal{S}\) and \(\mathcal{N}\) to the screened Poisson surface reconstruction [8]. Then, the normal \(n_{i}\) of the sample \(s_{i}\) is updated by the average normal of the adjacent face list near \(s_{i}\) in the intermediate surface. The algorithm terminates when the iterative fixed point is approximated or the maximum iteration is reached. Then, a consistent and high quality triangular mesh can be obtained from the last iteration. In the original work of iPSR, the octree depth is manually set to a fixed integer 10 in most experiments. However, we notice that choosing an appropriate depth for large noise inputs can bring about considerable advantages for this algorithm. When the octree depth is considerably large, the convergence of iPSR may encounter some challenges for large noise inputs and produce "lattice" surfaces, such as the middle column of Fig. 2. Reducing the octree depth can make the convergence easier and produce smoother surfaces. However, an extremely small depth will result in the loss of the shape details, as shown in the teapot mouth of Fig. 5 when doing reconstruction of depth 6. Therefore, an appropriate depth is critical for achieving a decent performance. In this work, we propose an adaptive depth selection strategy based on the convergence situation (i.e., good or bad) of iPSR. Recall that iPSR updates the sample normals in an iterative manner. In their algorithm, the average of the top \(0.1\%\) normal variations \(v\) is calculated for each iteration. This value can be applied to judge if the algorithm has well converged. The iteration process of iPSR stops when \(v<0.175\), or the maximum iteration 30 is reached. Accordingly, we also apply the normal variations to estimate the convergence situation of iPSR in the first reconstruction of our method. Specifically, we set the candidate depths to be \([d_{min},d_{max}]\) and first perform iPSR by depth \(d=d_{max}\). If iPSR converges before reaching the maximum normal iteration 30, or the average normal variations of the last five iterations are less than 0.7, then iPSR converges well at this depth and \(d^{(0)}=d\) is set. Otherwise, \(d=d-1\) and iPSR is performed again, until converges well or reaches the minimum depth \(d=d_{min}\). This approach ensures that a proper depth is chosen at the first reconstruction. The depth \(d^{(0)}\) of the first reconstruction accounts for a critical role in our algorithm, and has been determined through the above-mentioned strategy. In the subsequent reconstructions, we can set an incremental depth list for each \(d^{(0)}\in[d_{min},d_{max}]\). For instance, the depth list is set to \([6,6,7,7,8]\) for \(d^{(0)}=6\). The depth increases in the following stages because the noise magnitude becomes smaller through the alternative denoising and reconstructing process. Additional details will be specified in the experimental section. 
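A minimal sketch of this first-depth search is given below, assuming a caller-supplied `run_ipsr` wrapper that returns the per-iteration top-0.1% normal variations of iPSR at a given octree depth; the thresholds 0.175, 30 and 0.7 are the values quoted above.

```python
import numpy as np

def select_initial_depth(points, run_ipsr, d_min=6, d_max=8, max_ipsr_iters=30):
    """Pick the octree depth d^(0) for the first reconstruction.

    Start from d_max and accept a depth if iPSR converged well there: it stopped
    before the iteration cap (last variation below 0.175), or the mean of the
    last five normal variations is below 0.7. Otherwise lower the depth.
    """
    for depth in range(d_max, d_min - 1, -1):
        variations = run_ipsr(points, depth)          # assumed iPSR wrapper
        converged = len(variations) < max_ipsr_iters or variations[-1] < 0.175
        smooth_tail = np.mean(variations[-5:]) < 0.7
        if converged or smooth_tail or depth == d_min:
            return depth
```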
Once \(d_{min}\) and \(d_{max}\) are determined, and the depth list for each \(d^{(0)}\in[d_{min},d_{max}]\) is set, we can run the dataset in batches without manually adjusting the depth for each shape or reconstruction in Equation 4. Therefore, the depth selection strategy of our method is adaptive. ### \(\lambda\)-projection method In each iteration of our method with the input point cloud \(P^{(k)}\) and the reconstructed surface \(S^{(k)}=f_{ipsr}(P^{(k)},d^{(k)})\), we project each point \(p_{i}^{(k)}\in P^{(k)}\) onto the reconstructed surface \(S^{(k)}\). However, if we simply project all the points onto the surface, then the sharp edges will be smoothed into rounded corners after several iterations. Accordingly, a \(\lambda\)-projection method is proposed to avoid oversmoothing near the sharp edges. The main idea is to assign a projection coefficient \(\lambda_{i}^{(k)}\) to each point \(p_{i}^{(k)}\). If \(q_{i}^{(k)}\) is the nearest point of \(p_{i}^{(k)}\) on the surface \(S^{(k)}\), then the projected point \(p_{i}^{(k+1)}\) satisfies: \[p_{i}^{(k+1)}=(1-\lambda_{i}^{(k)})p_{i}^{(k)}+\lambda_{i}^{(k)}q_{i}^{(k)}. \tag{5}\] The coefficient \(\lambda_{i}^{(k)}\) is different for each point and mainly depends on its sharpness degree. In this work, a Voronoi-based feature estimation method [9] is used to examine the sharpness degree of all the points. Given the point cloud \(P^{(k)}=\{p_{i}^{(k)}\}_{i=1}^{n}\), [9] first computes the convolved Voronoi covariance measures from the neighborhood of each \(p_{i}^{(k)}\). Then, the sharpness ratio \(r_{i}^{(k)}\) can be computed from the eigenvalues of the covariance measure. A point whose sharpness ratio is larger than a certain threshold is considered as a feature point of the shape. Figure 2: **The effectiveness of the adaptive depth selection strategy in large noise examples. If the depth is simply set to 8, as shown in the middle column, then the resulting meshes contain "lattice" structures. By contrast, utilizing our adaptive depth selection method brings about significant improvements in these examples.** Once the sharpness ratios of the point set are calculated, the projection coefficient \(\lambda_{i}^{(k)}\) can be determined from the sharpness ratio \(r_{i}^{(k)}\). We set \(\lambda_{i}^{(k)}\) to be a continuous function of \(r_{i}^{(k)}\): \[\lambda_{i}^{(k)}=0.1+0.9\times e^{-g(r_{i}^{(k)})}, \tag{6}\] where \[g(r_{i}^{(k)})=\frac{\left(\max\{r_{i}^{(k)}-c,0\}\right)^{2}}{\sigma^{2}}. \tag{7}\] Here, \(c\) and \(\sigma\) are both user-specific parameters, which represent the threshold and the standard deviation, respectively. Equations 6 and 7 show that \(\lambda_{i}^{(k)}\in(0.1,1.0]\). If \(r_{i}^{(k)}<c\), then \(p_{i}^{(k)}\) is not regarded as a feature point, \(\max\{r_{i}^{(k)}-c,0\}=0\), and \(\lambda_{i}^{(k)}=1.0\), indicating that point \(p_{i}^{(k)}\) is projected onto the surface at \(q_{i}^{(k)}\) (\(p_{i}^{(k+1)}=q_{i}^{(k)}\)). If \(r_{i}^{(k)}>c\), then \(p_{i}^{(k)}\) is regarded as a feature point with a large degree of sharpness. As the sharpness ratio \(r_{i}^{(k)}\) increases, \(\lambda_{i}^{(k)}\) decreases, but never below the basic offset \(0.1\). \(\sigma\) controls the decreasing speed of \(\lambda_{i}^{(k)}\). Note that the sharp edges are only considered when the noise scale is not too large. In Section 3.2, we proposed an adaptive depth selection method in which large noise results in a low \(d^{(0)}\). Here, we choose a depth \(d_{sharp}\) as a threshold.
When \(d^{(k)}<d_{sharp}\), the sharp edges are not taken into account in this iteration because of the following reasons: 1) the input point cloud contains large noise; and 2) it is still in the early stage of the algorithm and the top priority of this iteration is to reduce the noise amplitude of the point cloud. When \(d^{(k)}<d_{sharp}\), we set \(\lambda_{i}^{(k)}=0.5\) for all points, which not only filters the noise but also maintains the original shape. When \(d^{(k)}\geq d_{sharp}\), Equation 6 and 7 are utilized to calculate the projection coefficient \(\lambda_{i}^{(k)}\) for each point. In Fig. 3, we show the reconstructed surface of a noisy _fan-disk_ model with and without the \(\lambda\)-projection method. The results indicate that the proposed \(\lambda\)-projection method is helpful in ameliorating the oversmoothing phenomenon near the sharp edges. For further improvements to this issue, we can carry out a point resample process near the sharp edges similar to EAR [18] and RFEPS [35]. The normal inputs of iPSR can be either random or manually initialized. Therefore, in the subsequent reconstructions of our method, we initialize the point normal of \(p_{i}^{(k+1)}\) as the surface normal at \(q_{i}^{(k)}\). In this way, the convergence speed of iPSR is significantly improved. ## 4 Experiments ### Overview In the experiments, the minimum octree depth \(d_{min}\) is set to 6 and the maximum depth \(d_{max}\) is set to 8 in the adaptive depth selection strategy of our approach. We perform five alternative updating processes. Specifically, we use \(P^{(5)}\) in Equation 4 as the denoised point cloud and \(S^{(5)}\) as the reconstructed surface. If \(d^{(0)}\) is calculated to be 6 based on the normal variations, then the depths in the subsequent five reconstructions \(S^{(1)}\) to \(S^{(5)}\) are set to \([6,6,7,7,8]\). If \(d^{(0)}=7\), then the subsequent depths are \([7,7,8,8,8]\). If \(d^{(0)}=8\), all the iterations are carried out within depth 8. The threshold \(d_{sharp}\) in the \(\lambda\)-projection is set to 8. Specifically, we only consider the sharp features when \(d=8\). We choose the following non-data-driven and data-driven approaches as baselines and conduct qualitative and quantitative comparisons with these methods. These approaches belong to different categories. **RIMLS**[2]: RIMLS applies the robust local kernels to the MLS surfaces. This approach requires consistently oriented normals as inputs. Here, the normals are estimated by PCA [36] and oriented by Hoppe et al. [37]. The RIMLS implementation of MeshLab [38] is used to conduct experiments. **WLOP**[15]: WLOP is a locally optimal projection method for point cloud consolidation, which generates an evenly distributed particle set of the original point cloud. We use the implementation of WLOP in CGAL [39]. **Jet smoothing**[3; 4]: Osculating jet is an efficient polynomial fitting technique to estimate the local surface properties. This technique can be applied to denoise the point cloud through projecting the points onto the local surface shape. We use the "jet smoothing" implementation of CGAL. **Bilateral smoothing**[39]: Bilateral filter is a useful tool for reducing the noise in point sets. We apply the CGAL implementation of the "bilateral smoothing" filter. It is worth noticing that the CGAL implementation of bilateral smoothing is combined with an edge preserving module based on the philosophy of EAR [18], which is introduced in the official documentation of CGAL. 
The bilateral smoothing of CGAL also requires normals as inputs. We estimate the normals by PCA and Hoppe et al. [37] similar to RIMLS. **PointCleanNet**[27]: PointCleanNet is a representative supervised learning approach for point cloud filtering. The network contains an outlier detector branch and a denoiser branch both based on PointNet [40]. The trained model provided by the authors are directly applied to conduct the qualitative and quantitative comparisons. ### Ablation study with iPSR Our method performs iPSR in an iterative manner. Therefore, it is necessary to validate that our method cannot be substituted Figure 3: **Reconstructed mesh of a noisy _fandisk_ model with and without the \(\lambda\)-projection strategy. The results indicate that the proposed \(\lambda\)-projection method can ameliorate the oversmoothing phenomenon near the sharp edges.** by simply adjusting some parameters in iPSR. The reconstruction results of the iPSR and ours in a noisy horse model are illustrated in Fig. 4. In Poisson surface reconstruction, "point weight" is an important parameter to control the degree of fitting the sample points. In our method, the point weight is set to 1.0 during the alternative updating process. If the point weight is reduced to 0.5, and iPSR is carried out only once, then the reconstructed mesh remains to be noisy. We also show the reconstruction of iPSR in zero point weight. Setting the point weight to zero directly in the iPSR may cause difficulties in algorithm convergence. Accordingly, we manually initialize the normals with a correct orientation. It can be seen that the zero weight iPSR cannot achieve an on par result with our method. Moreover, the ears of the horse are oversmoothened in the zero weight iPSR. Fig. 5 shows the reconstruction of our method and iPSR within different octree depths of a noisy teapot model. In our method, the adaptive depth selection strategy is applied, and the depth \(d^{(0)}\) is calculated as 8. We also show the iPSR reconstructions from octree depth 6 to 8. Although reducing the octree depth can make the reconstruction of iPSR smoother, some details are lost due to the low depth setting, for instance, the mouth of the teapot. Therefore, our method can not be substituted by simply adjusting the octree depth of iPSR. ### Comparison with other denoising approaches In this section, we compare our method with the other point cloud denoising techniques mentioned in Section 4.1 to examine the efficacy of our method. We use the _famous_ dataset complied by a recent study Points2Surf [28], which contains 22 well-known shapes such as Armadillo, Stanford Bunny and Utah teapot. These shapes are normalized with the maximum axis length to be 1.0, and 10K to 100K points are randomly sampled from each shape. Then, the randomized Gaussian noise is added to each sample point of the shape. The standard deviations (STD) are selected within five different levels from \(0.5\times 10^{-2}\) to \(2.5\times 10^{-2}\). Parameter settings are typically crucial for non-data-driven denoising approaches. During the experiments, we set the parameter "filter scale" for RIMLS to 7. In WLOP, the "representative particle number" is set to 95% of the original point set, and the "neighbor radius" is set to 0.04. Jet smoothing has only one parameter "neighbor size". "Jet-small" is used to represent the parameter setting 48 to the neighbor size. Meanwhile, "jet-large" is used to represent the parameter setting 96. 
Three parameters, namely, "neighbor size", "sharpness angle" and "iters", exist in bilateral smoothing. Increasing the sharpness angle will reduce the sharpness of the results. "Bilateral-small" is used to represent the parameter setting with neighbor size 24, sharpness angle 25 and iters 5, whilst "bilateral-large" is used to denote neighbor size 48, sharpness angle 50 and iters 5. Only one parameter "iters" exists in PointCleanNet, which is set to 5 during the experiments. In our method, the Poisson point weight is set to 1.0 in the datasets with a standard deviation less than \(2.0\times 10^{-2}\), and 0.5 otherwise. Parameters \(c\) and \(\sigma\) in Equation 7 are determined as follows. First, the sharpness ratios of all points in a shape are sorted. Then, \(c\) is set to the value at the 90% position of the sorted array, and \(\sigma\) is set to \(c/2\). We first quantitatively compare the quality of the denoised point cloud in terms of the root mean square distance-to-surface (RMSD). Let \(P\) be the denoised point cloud and \(P^{\prime}\) be a densely sampled point cloud of the ground truth mesh. Then, the RMSD value can be calculated as follows: \[RMSD(P,P^{\prime})=\sqrt{\frac{1}{N}\sum_{p_{i}\in P}\min_{p_{j}\in P^{\prime}}\|p_{i}-p_{j}\|_{2}^{2}}. \tag{8}\] The quantitative results are shown in Table 1. The RMSD values are multiplied by \(10^{3}\). The results indicate that our method exhibits high performance in all the five noise scales from \(0.5\times 10^{-2}\) to \(2.5\times 10^{-2}\). The qualitative comparisons are shown in Fig. 6. Figure 4: Reconstruction of our method in point weight 1.0 and reconstruction of the iPSR in different smaller point weights. The results indicate that our method cannot be substituted by simply adjusting the point weight of iPSR. Figure 5: Reconstruction of our method and the iPSR in different depths. The results indicate that our method cannot be substituted by simply adjusting the depth of iPSR. For jet smoothing and bilateral smoothing, we show the better-performing result of the small and large parameter settings for each shape. We annotate the RMSD value (also multiplied by \(10^{3}\)) at the bottom of each point set and colorize the point-to-surface distance from the denoised point cloud to the ground truth surface. The five examples belong to the datasets of different noise scales. The results show that the point clouds generated by our method have the lowest error. We also compare the quality of the reconstructed mesh amongst different approaches. For WLOP, bilateral smoothing, jet smoothing and PointCleanNet, we use iPSR to generate the reconstructed surfaces from the denoised point clouds provided by their algorithms. The reason is that no consistently oriented normals are required for iPSR. The L1 Chamfer distance (CD) and the normal consistency (NC) are used to measure the mesh quality. Here, L1 CD means applying the L1 sum over all sample points, rather than utilizing the L1 distance of point positions. The formula for calculating the CD value is presented as follows: \[CD(P,P^{\prime})=\frac{1}{|P|}\sum_{p_{i}\in P}\min_{p_{j}\in P^{\prime}}\|p_{i}-p_{j}\|_{2}+\frac{1}{|P^{\prime}|}\sum_{p_{j}\in P^{\prime}}\min_{p_{i}\in P}\|p_{j}-p_{i}\|_{2}, \tag{9}\] where \(P\) and \(P^{\prime}\) are point clouds uniformly sampled from the reconstructed mesh and the ground truth mesh, respectively.
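For reference, the following is a minimal NumPy/SciPy sketch of these two point-set metrics as defined in Equations 8 and 9; both inputs are dense \((N,3)\) arrays of sample positions, and nearest neighbors are found with a k-d tree.

```python
import numpy as np
from scipy.spatial import cKDTree

def rmsd(denoised, gt_samples):
    """Eq. (8): RMS of each denoised point's distance to its nearest
    neighbor in a dense sampling of the ground-truth surface."""
    d, _ = cKDTree(gt_samples).query(denoised)
    return np.sqrt(np.mean(d ** 2))

def chamfer_l1(recon_samples, gt_samples):
    """Eq. (9): symmetric L1 Chamfer distance between uniform samplings
    of the reconstructed and ground-truth meshes."""
    d_fwd, _ = cKDTree(gt_samples).query(recon_samples)
    d_bwd, _ = cKDTree(recon_samples).query(gt_samples)
    return d_fwd.mean() + d_bwd.mean()
```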
Normal consistency (NC) can also be named as the mesh cosine similarity, which calculates the average absolute normal dot product between the sample of the ground truth mesh and the nearest point in the sample of the reconstructed mesh. Table 2 quantitatively compares the mesh quality in terms of CD and NC amongst different approaches. We show the model names, noise scales, CD and NC values in the table. The CD values are also multiplied by \(10^{3}\). In each row, the red number represents the best value of amongst methods, and the blue number represents the second best. Our method achieves well-rounded performance. The meshes generated by bilateral smoothing show high NC values. However, the CD values are typically large, indicating that the filtered point positions are far from the ground truth. The results of jet smoothing exhibit Figure 6: Qualitative comparisons of the denoised point cloud. We annotate the RMSD value (multiplied by \(10^{3}\)) at the bottom of each example and colorize the point to surface distance from the denoised point cloud to the ground truth mesh. low CD values. However, the NC values of the generated surfaces are not that satisfactory. This phenomenon is also shown in Fig. 12, where we present the qualitative results of rows 3,5 and 9 in Table 2. We also colorize the error from the reconstructed mesh to the ground truth mesh with a color bar. The results of our method exhibits low error and high normal consistency. PointCleanNet achieves the lowest CD value in the "Liberty" model. But in reality, this shape is in the training set of PointCleanNet. ### Managing different situations In this section, we examine the ability of our method in managing various situations including misalignment, outliers, noisy CAD-like models and real scanned point clouds. **Misalignment** Misalignment is a typical noise category, especially for the point clouds obtained with a scanner. Here, we use the misalignment dataset provided by a recent benchmark [41] to examine the ability of our method to manage this type of structured noise. The point clouds are obtained with the Blensor simulator [42] by adding some perturbations to the camera extrinsics. Fig. 7 shows the misalignment inputs and the denoised point clouds generated by our method. The results indicate that the misalignment situation is efficiently managed by our method. **Outliers** Fig. 8 shows a point cloud including both misalignment artifacts and outliers. In this example, 1K outliers are randomly sampled within a unit cube and added to a point set with approximately 160K points. In Poisson surface reconstruction, a few outliers may not have significant influence on the global implicit function. Accordingly, our method performs a certain degree of robustness to outliers, whilst the other traditional approaches encounter some challenges to manage this situation. Jet smoothing and bilateral smoothing can filter out more outliers if a large neighbor size is utilized. For instance, setting the neighbor size of jet smoothing to 1024 can filter out most of the outliers in this case. However, the original shape will also be severely oversmoothed by such a large neighbor scale. Actually, this situation can also be managed by applying iPSR to the denoised point clouds provided by jet and bilateral smoothing, and then projecting the noisy points onto the surface. However, this operation aligns with the underlying philosophy of our method, which further confirms the validity of our concept. 
**Noisy CAD-like inputs** The CAD-like inputs always include rich sharp features. We have demonstrated in Section 3.3 and Fig. 3 that the \(\lambda\)-projection method we proposed is helpful for alleviating the oversmoothing phenomenon near the sharp edges. In this section, we mainly focus on the comparisons. We set the parameters \(c\) and \(\sigma\) in Equation 7 to be 0.11 and 0.05, respectively. Fig. 9 presents the qualitative comparisons of our method with WLOP and jet smoothing in two CAD-like models. The results illustrate that our method has a fundamental edge preserving capability with the help of the \(\lambda\)-projection. **Real scanned data** In Fig. 10, we examine the ability of our approach in handling the real scanned point clouds and compare our method with iPSR (run only once) and jet smoothing. The data are provided by [41]. The surfaces of jet smoothing are generated by feeding the denoised point clouds to iPSR. Our method achieves decent performance. Jet smoothing even aggravates the noise near the thin structures, resulting in the failure of the iPSR to converge to the correct surface for the bowl \begin{table} \begin{tabular}{c c c c c c} \hline \hline Method/STD & \(0.5\times 10^{-2}\) & \(1.0\times 10^{-2}\) & \(1.5\times 10^{-2}\) & \(2.0\times 10^{-2}\) & \(2.5\times 10^{-2}\) \\ \hline RIMLS & 2.456 & 3.121 & 4.378 & 6.469 & 9.813 \\ WLOP & 3.433 & 3.669 & 4.032 & 4.799 & 6.420 \\ Bilateral-small & 2.064 & 2.895 & 4.298 & 6.202 & 8.548 \\ Bilateral-large & 3.630 & 4.061 & 4.626 & 5.305 & 6.447 \\ Jet-small & 1.994 & 2.752 & 3.837 & 5.350 & 7.248 \\ Jet-medium & 2.807 & 3.351 & 4.012 & 4.892 & 6.072 \\ PointCleanNet & 2.518 & 3.362 & 4.267 & 5.516 & 7.210 \\ Ours & **1.975** & **2.245** & **2.592** & **3.459** & **4.253** \\ \hline \hline \end{tabular} \end{table} Table 1: **Quantitative comparisons on the famous dataset within different noise standard deviations (STD). We report the root mean square distance-to-surface (RMSD) of each method. 
The RMSD values are multiplied by \(10^{3}\).** \begin{table} \begin{tabular}{c c c c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Noise} & \multicolumn{2}{c}{RIMLS} & \multicolumn{2}{c}{WLOP} & \multicolumn{2}{c}{Bilateral-small} & \multicolumn{2}{c}{Bilateral-large} & \multicolumn{2}{c}{Jet-small} & \multicolumn{2}{c}{Jet-large} & \multicolumn{2}{c}{PointCleanNet} & \multicolumn{2}{c}{Ours} \\ \cline{2-13} & & CD\(\downarrow\) & NC\(\uparrow\) & CD\(\downarrow\) & NC\(\uparrow\) & CD\(\downarrow\) & NC\(\uparrow\) & CD\(\downarrow\) & NC\(\uparrow\) & CD\(\downarrow\) & NC\(\uparrow\) & CD\(\downarrow\) & NC\(\uparrow\) & CD\(\downarrow\) & NC\(\uparrow\) & CD\(\downarrow\) & NC\(\uparrow\) \\ \hline tortuga & \(0.5\times 10^{-2}\) & 3.98 & 0.986 & 4.33 & 0.983 & 4.04 & 0.986 & 5.40 & 0.984 & 3.48 & 0.986 & 3.51 & 0.986 & 3.47 & 0.983 & 3.47 & 0.987 \\ Utah\_teapot & \(0.5\times 10^{-2}\) & 3.76 & 0.978 & 4.52 & 0.973 & 3.68 & 0.980 & 4.60 & 0.974 & 3.57 & 0.975 & 3.65 & 0.975 & 3.61 & 0.971 & 3.46 & 0.978 \\ horse & \(1.0\times 10^{-2}\) & 4.81 & 0.977 & 6.07 & 0.973 & 3.72 & 0.981 & 5.07 & 0.983 & 3.56 & 0.977 & 3.22 & 0.981 & 3.56 & 0.976 & 3.27 & 0.985 \\ angle & \(1.0\times 10^{-2}\) & 5.79 & 0.940 & 7.78 & 0.908 & 4.07 & 0.945 & 6.15 & 0.935 & 3.90 & 0.937 & 4.42 & 0.935 & 4.22 & 0.924 & 3.47 & 0.948 \\ Armdillo & \(1.5\times 10^{-2}\) & 9.23 & 0.932 & 6.84 & 0.930 & 6.27 & 0.941 & 9.40 & 0.932 & 5.91 & 0.935 & 6.56 & 0.932 & 5.87 & 0.929 & 5.31 & 0.943 \\ xyzg\_dragon & \(1.5\times 10^{-2}\) & 14.97 & 0.829 & 9.75 & 0.814 & 5.92 & 0.860 & 8.72 & 0.839 & 5.96 & 0.852 & 7.20 & 0.844 & 6.59 & 0.837 & 5.30 & 0.864 \\ hand & \(2.0\times 10^{-2}\) & 28.94 & 0.793 & 12.23 & 0.875 & 7.56 & 0.856 & 10.75 & 0.872 & 6.54 & 0.868 & 8.38 & 0.880 & 8.12 & 0.860 & 7.01 & 0.879 \\ serapis & \(2.0\times 10^{-2}\) & 9.27 & 0.957 & 6.97 & 0.956 & 6.40 & 0.957 & 7.22 & 0.958 & 6.55 & 0.946 & 6.77 & 0.954 & 6.59 & 0.950 & 6.19 & 0.964 \\ Liberty & \(2.5\times 10^{-2}\) & 43.42 & 0.613 & 8.81 & 0.801 & 12.39 & 0.713 & 7.75 & 0.791 & 10.30 & 0.715 & 7.29 & 0.778 & 6.11 & 0.805 & 6.42 & 0.813 \\ galera & \(2.5\times 10^{-2}\) & 19.64 & 0.908 & 7.52 & 0.937 & 7.42 & 0.923 & 8.49 & 0.934 & 7.09 & 0.913 & 7.44 & 0.927 & 7.14 & 0.927 & 6.63 & 0.941 \\ \hline \hline \end{tabular} \end{table} Table 2: **Quantitative comparisons of the reconstructed mesh quality. We show the Chamfer distance (CD) and the normal consistency (NC) of each method. The CD values are multiplied by \(10^{3}\). The red color is used to represent the best value amongst all approaches, and the blue is used to denote the second best. The results indicate that our method exhibits well-rounded performance.** model. ## 5 Conclusion In this work, we propose an alternative denoising and reconstructing approach for unoriented point sets and performs iPSR in an iterative manner. An adaptive depth selection strategy is proposed to ensure that the reconstruction is carried out within an appropriate octree depth of iPSR. Moreover, we present a \(\lambda\)-projection method to address the oversmoothing phenomenon near the sharp edges during the iterative process. The experimental results show that our method exhibits high performance in point cloud denoising and surface reconstruction tasks and manages various situations. The main drawback of our method is the high time consumption compared with traditional denoising techniques. Our method performs iPSR several times and acts as an outer loop of iPSR. 
iPSR itself is an iterative method for the screened Poisson surface reconstruction. Accordingly, our method requires about 8.5min on AMD Ryzen 5 5600H CPU @ 3.3GHz to denoise and reconstruct the bowl model of Fig. 10 with approximately 0.2M points. However, one time iPSR reconstruction is also required for other denoising approaches to generate a reliable surface when no consistently oriented normals are provided. For instance, combining jet smoothing and iPSR also takes about 105s for this model. Our method is faster than the learning-based PointCleanNet, which requires 27min in the RTX 2080Ti GPU with 5 items. Furthermore, the reconstruction complexity of screened Poisson surface reconstruction is a linear function of the point number due to the application of the conforming cascade Poisson solver. Therefore, the time complexity of our method is not large relative to the point number. Fig. 11 demonstrates our reconstruction of a noisy point cloud with 1M points. The denoising process is carried out within octree depth 10. Our method can manage this example in about one hour.
We propose a new strategy that bridges point cloud denoising and surface reconstruction by alternately updating the denoised point clouds and the reconstructed surfaces. In Poisson surface reconstruction, the implicit function is generated by smooth basis functions centered at the octree nodes. When an appropriate octree depth is selected, the reconstructed surface is a good smooth approximation of the noisy point set. Our method projects the noisy point set onto the surface and alternately reconstructs and projects the point set. We use iterative Poisson surface reconstruction (iPSR) to support unoriented surface reconstruction. Our method performs iPSR iteratively and acts as an outer loop of iPSR. Since the octree depth significantly affects the reconstruction results, we propose an adaptive depth selection strategy to ensure that an appropriate depth is selected.
2309.11149
Electrostatic environment and Majorana bound states in full-shell topological insulator nanowires
The combination of a superconductor (SC) and a topological insulator (TI) nanowire was proposed as a potential candidate for realizing Majorana zero modes (MZMs). In this study, we adopt the Schr\"odinger-Poisson formalism to incorporate the electrostatic environment inside the nanowire and systematically explore its topological properties. Our calculations reveal that the proximity to the SC induces a band bending effect, leading to a non-uniform potential across the TI nanowire. As a consequence, there is an upward shift of the Fermi level within the conduction band. This gives rise to the coexistence of surface and bulk states, localized in an accumulation layer adjacent to the TI-SC interface. When magnetic flux is applied, these occupied states have different flux-penetration areas, suppressing the superconducting gap. However, this impact can be mitigated by increasing the radius of the nanowire. Finally, We demonstrate that MZMs can be achieved across a wide range of parameters centered around one applied flux quantum, $\phi_0 = h/2e$. Within this regime, MZMs can be realized even in the presence of conduction bands, which are not affected by the band bending effect. These findings provide valuable insights into the practical realization of MZMs in TI nanowire-based devices, especially in the presence of a complicated electrostatic environment.
Li Chen, Xiao-Hong Pan, Zhan Cao, Dong E. Liu, Xin Liu
2023-09-20T08:53:56
http://arxiv.org/abs/2309.11149v2
# Electrostatic environment and Majorana bound states in full-shell topological insulator nanowires ###### Abstract The combination of a superconductor (SC) and a topological insulator (TI) nanowire was proposed as a potential candidate for realizing Majorana zero modes (MZMs). In this study, we adopt the Schrodinger-Poisson formalism to incorporate the electrostatic environment inside the nanowire and systematically explore its topological properties. Our calculations reveal that the proximity to the SC induces a band bending effect, leading to a non-uniform potential across the TI nanowire. As a consequence, there is an upward shift of the Fermi level within the conduction band. This gives rise to the coexistence of surface and bulk states, localized in an accumulation layer adjacent to the TI-SC interface. When magnetic flux is applied, these occupied states have different flux-penetration areas, suppressing the superconducting gap. However, this impact can be mitigated by increasing the radius of the nanowire. Finally, We demonstrate that MZMs can be achieved across a wide range of parameters centered around one applied flux quantum, \(\phi_{0}=h/2e\). Within this regime, MZMs can be realized even in the presence of conduction bands, which are not affected by the band bending effect. These findings provide valuable insights into the practical realization of MZMs in TI nanowire-based devices, especially in the presence of a complicated electrostatic environment. ## I Introduction Majorana zero modes (MZMs), as quasi-particles at topological superconductor boundaries, have been extensively studied because of their potential applications in topological quantum computations [1; 2; 3]. The most heavily investigated experimental systems to search for MZMs are semiconductor (SM)-superconductor (SC) devices [4; 5; 6]. Despite various experimental progress been reported [7; 8; 9; 10; 11], the conclusive observation of MZMs is still lacking. A significant reason is that some trivial mechanisms can also produce similar experimental signatures [12; 13; 14; 15; 16; 17; 18; 19; 20; 21], which significantly complicated the search for MZMs. To overcome this issue, two main directions have been pursued. The first approach involves utilizing alternative detection methods providing signals that can hardly be mimicked by non-Majorana states [22; 23; 24; 25; 26]. One such method is nonlocal conductance measurements in three-terminal devices [27; 28; 29; 30; 31; 32; 33; 34; 35], which can directly detect the bulk gap closing and reopening. The second approach focuses on finding materials with high quality and unique properties that are conducive to the formation and manipulation of MZMs [36; 37; 38; 39; 40; 41; 42; 43; 44]. For instance, materials like topological insulator (TI) nanowires have been identified as potential candidates [36; 37; 38]. When a TI is made into a nanowire, quantum confinement gives rise to peculiar one-dimensional Dirac sub-bands whose energy dispersion can be manipulated by external fields. In contrast to semiconductor-based systems where the Fermi level needs to be finely tuned within a narrow gap opened by the Zeeman effect, TI nanowires offer a topological region that can extend throughout the entire bulk gap [36; 37]. In the past few years, substantial progress in the growth of TI nanowire devices have been reported [45; 46; 47; 48]. These advancements enabled the fabrication of high-quality TI nanowires with well-controlled properties [49; 50; 51]. 
Recently, proximity-induced superconductivity in TI nanowires have been experimentally reported [51; 52; 53], but the deterministic evidence of the MZMs in TI nanowire is still lacking. Meanwhile, the previous theoretical works [36; 37; 38] treat chemical potential and induced superconducting (SC) gap as as independently adjustable parameters. However, the unavoidable electrostatic effects and band bending effect at the TI-SC interface can greatly complicate the Majorana physics in the TI-SC system, as they did in the SM-SC system [54; 55; 56]. For instance, in experiments where TI are grown with SC films, the process induces charge doping from the SC to the TI, resulting in a shift of the Fermi level into the conduction band [57; 58]. This effect is undesirable since the realization of MZMs requires TIs to be bulk-insulating [36; 37; 59]. Furthermore, the band bending effect near the SC can suppress the tunability of surface states through gating [60], further complicating the control of electronic properties in TI-SC hybrid devices. These challenges and limitations motivate us to develop more realistic calculations that can accurately describe the electrostatic environment and band structures of TI nanowires, leading to a better understanding of their topological properties. In this work, we investigate the properties of a TI nanowire covered by a full-shell SC. To account for the electrostatic environment of the system, we employ the self-consistent Schrodinger-Poisson (SP) methods to calculate the electrostatic potential inside the TI nanowire. Our analysis reveals that the band bending effect at the TI-SC interface leads to a shift of the Fermi level into the conduction band, consistent with experimental observations [57, 58]. Consequently, the surface states and bulk states coexist in the system, and they are confined to an accumulation region near the TI-SC interface. Moreover, these occupied states have different flux-penetration areas, leading to a suppression of the SC gap under the application of a magnetic field. To address this issue, we propose to use a TI nanowire with a larger radius. Finally, we give a topological phase diagram and demonstrate that MZMs can be achieved over a wide range of parameters near one applied flux quantum, \(\phi_{0}=h/2e\). In this case, the presence of MZMs is independent of the strength of the band bending, eliminating the need for fine tuning of the Fermi level. These findings provide valuable insights into the phase diagram and practical realization of MZMs in TI nanowire-based devices. The paper is organized as follows. In Sec. II, we construct a model Hamiltonian in the cylinder coordinate. In Sec. III, we calculate the electrostatic potential using the Schrodinger-Poisson approach. In Sec. IV, we discuss the topological properties of the TI nanowire. Finally, we draw a discussion and conclusion in Sec. V. ## II Model Hamiltonian We consider a topological insulator (TI) nanowire coated by a full superconducting shell, as illustrated in Figure 1. The system is exposed to a magnetic field \(\mathbf{B}\) oriented along the nanowire's direction. To maintain the system's rotational symmetry, we adopt the electromagnetic vector potential \(\mathbf{A}=\frac{1}{2}(\mathbf{B}\times\mathbf{r})\). 
Subsequently, we formulate the electronic Hamiltonian of the TI in cylindrical coordinates as follows (see Appendix A): \[H_{\rm e}=H_{\rm TI}+H_{M}-e\phi(r), \tag{1}\] where \[H_{\rm TI} = M(r,\theta,z)s_{0}\sigma_{z}+D(r,\theta,z)s_{0}\sigma_{0}+A_{1}( -i\partial_{z})s_{z}\sigma_{x} \tag{2}\] \[+A_{2}P_{-\theta}s_{+}\sigma_{x}+A_{2}P_{+\theta}s_{-}\sigma_{x}.\] The Pauli matrices \(s\) and \(\sigma\) acts on spin and orbital space, respectively. \(r\), \(\theta\), and \(z\) are the cylindrical coordinates. We define \(s_{\pm}=(s_{x}\pm is_{y})/2\), \(s_{\theta}=\cos\theta s_{y}-\sin\theta s_{x}\), \(M(r,\theta,z)=m_{0}-B_{1}\partial_{z}^{2}-B_{2}(\frac{1}{2}\partial_{r}+ \partial_{r}^{2}+\frac{1}{r^{2}}\partial_{\theta}^{2})\), \(D(r,\theta,z)=C_{0}-D_{1}\partial_{z}^{2}-D_{2}(\frac{1}{r}\partial_{r}+ \partial_{r}^{2}+\frac{1}{r^{2}}\partial_{\theta}^{2})\), \(P_{\pm\theta}=-ie^{\pm i\theta}(\partial_{r}+\frac{1}{r}\partial_{\theta})\). The parameters \(m_{0}\), \(C_{0}\), \(B_{i}\), \(A_{i}\) and \(D_{i}\) with \(i=1,2\) are model parameters from _ab initio_ calculations [61]. The electrostatic potential \(\phi(r)\) arises due to the band bending effect at the interface between the TI and SC, which can be self-consistently calculated through the SP method, as we will show in Sec. III. \(H_{M}\) is the magnetic flux-induced term, which takes the form (see Appendix A): \[H_{M}=\frac{B_{2}}{r^{2}}[\Phi^{2}(r)+2iB_{2}\Phi(r)\partial_{ \theta}]s_{0}\sigma_{z}-\frac{A_{2}\Phi(r)}{r}s_{\theta}\sigma_{x}. \tag{3}\] Here, \(\Phi(r)=Br^{2}/\Phi_{0}\) represents the normalized magnetic flux with respect to the flux quantum \(\Phi_{0}=h/e\). Notably, the angular momentum operator \(\tilde{J}_{z}^{\rm e}\) commutes with \(H_{\rm e}\), with \(\tilde{J}_{z}^{\rm e}=-i\partial_{\theta}+\frac{1}{2}s_{z}\). This leads to the eigenvalue \(j_{\rm e}\) of \(\tilde{J}_{z}^{\rm e}\) taking half-integer values \(\mathbb{Z}+\frac{1}{2}\). The angular dependence of \(\tilde{H}_{e}\) can be eliminated using a unitary transformation \(U=\exp[-i(j_{e}-\frac{1}{2}s_{z})\theta]\), namely \(\tilde{H}_{e}=UH_{e}U^{\dagger}\). Consequently, \(\tilde{H}_{e}\) becomes block diagonal, expressed as: \[\tilde{H}_{\rm e}=\bigoplus_{j_{e},k_{z}}H_{\rm TI}^{j_{\rm e}}(r,k_{z}). \tag{4}\] Notably, we replace \(-i\partial_{z}\) with \(k_{z}\) because it is a good quantum number for an infinite nanowire. The explicit form of \(H_{\rm TI}^{j_{\rm e}}(r,k_{z})\) is given in Appendix A. When the electrostatic potential is absent, i.e., \(\phi(r)=0\), the energy Figure 1: A TI nanowire is covered by a full superconducting shell. The magnetic field is applied along the nanowire (\(z\) direction). The radius of the TI nanowire is \(R_{0}\). Figure 2: The profiles of (a) the electrostatic potential \(-e\phi(r)\) and (b) the charge density \(\rho(r)\) are depicted along the radial direction of the nanowire for three distinct band bending strengths \(W\). spectrum of the surface states can be approximated by the formula [62; 63]: \[E_{k_{z},j_{c}}=A_{2}\sqrt{k_{z}^{2}+\left(\frac{j_{e}-\Phi(R_{0})}{R_{0}}\right) ^{2}}. \tag{5}\] In the absence of a magnetic field, the branches \(E_{k_{z},\pm|j_{e}|}\) are doubly degenerate due to time reversal symmetry. Upon application of a magnetic field, a finite gap of \(2\delta=\frac{2A_{2}\Phi(R_{0})}{R_{0}}\) emerges between bands with \(\pm j_{e}\). For \(R_{0}=50\) nm, the surface level spacing is 8.2 meV at half flux quantum, i.e., \(\Phi(R_{0})=1/2\). 
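As a quick cross-check of Eq. (5) and of the quoted level spacing, the surface dispersion can be evaluated in a few lines. This is a minimal sketch; the value \(A_{2}\approx 4.1\) eV\(\cdot\)Å is the standard Bi\({}_{2}\)Se\({}_{3}\) parameter from the _ab initio_ fits of Ref. [61] and is an assumption of the snippet rather than an input specified in the text.

```python
import numpy as np

A2 = 0.41    # eV*nm, i.e. ~4.1 eV*Angstrom (assumed Bi2Se3 value, cf. Ref. [61])
R0 = 50.0    # nm, nanowire radius used in the text

def surface_energy(kz, j_e, phi):
    """Eq. (5): surface subband energy at momentum kz, angular momentum j_e,
    and normalized flux phi = Phi(R0)."""
    return A2 * np.sqrt(kz**2 + ((j_e - phi) / R0) ** 2)

for phi in (0.0, 0.5):
    # splitting between the j_e = +1/2 and j_e = -1/2 branches at kz = 0,
    # i.e. 2*delta = 2*A2*Phi(R0)/R0
    gap = abs(surface_energy(0.0, +0.5, phi) - surface_energy(0.0, -0.5, phi))
    print(f"Phi(R0) = {phi}: splitting = {gap * 1e3:.1f} meV")
# Phi(R0) = 0.0: splitting = 0.0 meV   (time-reversal degenerate branches)
# Phi(R0) = 0.5: splitting = 8.2 meV   (the level spacing quoted above for R0 = 50 nm)
```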
The corresponding Zeeman energy scale \(E_{z}=0.056\) meV (taking \(g\) factor for Bi\({}_{2}\)Se\({}_{3}\) is \(g\approx 4\)[64]). Therefore, the Zeeman effect is negligible compared to orbital effects in our system. ## III Electrostatic potential To compute the electrostatic potential \(\phi(r)\) self-consistently, we begin by solving the Schrodinger equation: \[H_{\rm TI}^{j_{e}}(r,k_{z})\psi_{n_{z},k_{z}}^{j_{z}}(r)=E_{n_{z},k_{z}}^{j_{e }}\psi_{n_{z},k_{z}}^{j_{e}}(r) \tag{6}\] in each \(j_{e}\) block with given \(k_{z}\). We solve it numerically on the basis of Bessel functions (see Appendix. B). Here, \(n_{z}\) is the index of the transverse modes. It is important to note that we solve the Schrodinger equation only within the TI region. This is due to the fact that the superconductor screens the electric field due to its metallic nature. As a result, throughout the self-consistent procedure, we treat the SC shell solely as a boundary condition with a band offset \(W\) at the TI-SC interface. Then the charge density with the profile \(\phi(r)\) is obtained \[\rho(r)=\frac{-e}{(2\pi)^{2}}\sum_{n_{z},j_{e}}\int dk_{z}\left[|\psi_{n_{z},k_ {z}}^{j_{e}}(r)|^{2}f_{T}-\rho_{\rm val}(r)\right], \tag{7}\] where \(f_{T}=1/\left(e^{E_{n_{z},k_{z}}^{j_{e}}/T}+1\right)\) represents the Fermi distribution. Notably, the first term on the right-hand side of Eq. (7) accounts for the charge density originating from all occupied states. To obtain the charge density of free electrons or holes, the density from the entire valence band \(\rho_{\rm val}(r)\) needs to be subtracted [60], see details in Appendix C. Finally, the electrostatic potential is determined by solving the Poisson equation in radial coordinates: \[\frac{1}{r}\partial_{r}\phi(r)+\partial_{r}^{2}\phi(r)=-\frac{\rho(r)}{\epsilon _{0}\epsilon_{r}}, \tag{8}\] where \(\epsilon_{r}\) is the relative dielectric constant of TI. The SP method is to solve Eq.(6) and Eq.(8) self-consistently (see Appendix D for details). In Figure 2, we present the distribution of the self-consistent potential \(\phi(r)\) and charge density \(\rho(r)\) for various values of \(W\). Notably, the potential gradually increases from the boundary to the interior of the TI due Figure 3: (a)-(c) Left to right: the band structure of TI nanowire with inhomogeneous potential \(\phi(r)\) when \(W=0.1,0.2,0.3\) eV, respectively. The blue (black) lines correspond to surface states (bulk states). The red dashed line represents the Fermi level. Notably, the absence of a magnetic flux maintains the doubly degenerate nature of all bands due to time reversal symmetry. (d)-(e) The density distribution of occupied states at the Fermi level in panels (a)-(c). to the charge screening effect. The contact between the TI nanowire and the SC shell induces charge doping from the SC to the TI, resulting in an upward shift of the Fermi level, which is evident in the band structure shown in Figure 3(a)-(c). When \(W\) is relatively small, the Fermi level remains within the bulk band gap, leading to the occupation of only surface states [Figure 3(a)]. As \(W\) increases up to 0.2 and 0.3 eV, the Fermi level moves into the conduction band [Figure 3(b)-(c)]. Recent _ab initio_ calculations suggest that \(W\approx 0.3\) eV in Bi\({}_{2}\)Te\({}_{3}\)-Nb hybrid systems [65]. This implies that most TI-SC nanowires naturally exhibit a Fermi level pinned within the conduction band, as demonstrated in Figure 3(c). 
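The radial Poisson step, Eq. (8), is straightforward to discretize on a uniform grid. The sketch below is a minimal finite-difference version with a regularity condition \(\partial_{r}\phi=0\) on the axis and a Dirichlet value at \(r=R_{0}\) playing the role of the interface boundary condition; the dielectric constant, the boundary value and the sample charge profile are illustrative assumptions, not the self-consistent quantities of the text.

```python
import numpy as np

eps0, eps_r = 8.854e-12, 100.0        # vacuum permittivity (F/m) and an assumed TI dielectric constant
R0 = 50e-9                            # m
N  = 400
r  = np.linspace(0.0, R0, N)
h  = r[1] - r[0]

def solve_radial_poisson(rho, phi_at_R0):
    """Solve (1/r) phi' + phi'' = -rho/(eps0*eps_r) with phi'(0) = 0, phi(R0) = phi_at_R0."""
    A = np.zeros((N, N))
    b = -rho / (eps0 * eps_r)
    A[0, 0], A[0, 1], b[0] = -1.0 / h, 1.0 / h, 0.0        # regularity at the axis
    for i in range(1, N - 1):                               # second-order interior stencil
        A[i, i - 1] = 1.0 / h**2 - 1.0 / (2.0 * h * r[i])
        A[i, i]     = -2.0 / h**2
        A[i, i + 1] = 1.0 / h**2 + 1.0 / (2.0 * h * r[i])
    A[-1, -1], b[-1] = 1.0, phi_at_R0                       # Dirichlet value at the interface
    return np.linalg.solve(A, b)

# toy electron accumulation layer of ~30 nm width near the interface (assumed profile)
rho = -1.602e-19 * 1e24 * np.exp((r - R0) / 30e-9)          # C/m^3
phi = solve_radial_poisson(rho, phi_at_R0=0.3)              # volts; 0.3 mimics W = 0.3 eV per unit charge
```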
Furthermore, the distributions of the density of states (DOS) of the wave functions at the Fermi level are shown in Figure 3(d)-(f). Remarkably, both the occupied surface states and bulk states are confined to a narrow accumulation region near the TI-SC interface [66, 67], characterized by a width of about 30 nm. The remaining part of the nanowire remains relatively insulating in the bulk. Moreover, it is observed that the surface states and bulk states exhibit distinct localizations near the TI-SC interface, as indicated by the blue and black lines in Figure 2(e)(f). This confinement of the surface states and bulk states within the accumulation region is a significant consequence arising from the electrostatic environment of the system, which has not been considered in previous theoretic works [36, 37, 38]. ## IV Topological property When considering the presence of the superconductor, the system is described by the Bogoliubov-de Gennes (BdG) Hamiltonian, which takes the form: \[H=\begin{pmatrix}H_{\mathrm{e}}&is_{y}\Delta(r)e^{in\theta}\\ -is_{y}\Delta(r)e^{-in\theta}&-H_{\mathrm{e}}^{*}\end{pmatrix}. \tag{9}\] We use a spatial dependence of the pairing amplitude in such a setup, which is given by \(\Delta(r\leq R_{0})=\Delta_{0}\exp((r-R_{0})/\xi)\)[68], \(\xi\) is superconducting coherence length in the TI. \(n\) is the superconducting phase winding number. In this work, we choose \(n=[\phi_{\mathrm{flux}}+0.5]\) where the square brackets indicate taking the closest integer smaller than it. \(\phi_{\mathrm{flux}}=BR_{0}^{2}/\phi_{0}\) representing the penetrated magnetic flux normalized by the superconducting flux quanta, \(\phi_{0}=h/2e\). The BdG Hamiltonian \(H\) satisfies \([\hat{J}_{z},H]=0\) with \(\hat{J}_{z}=-i\partial_{\theta}+\frac{1}{2}s_{z}\tau_{z}-\frac{n}{2}\tau_{z}\). And \(j\) is the eigenvalue of the total angular momentum \(\hat{J}_{z}\). Consequently, the BdG Hamiltonian can be block diagonal as: \[H=\bigoplus_{j,k_{z}}H^{j}(r,k_{z}), \tag{10}\] with \[H^{j}=\begin{pmatrix}H_{\mathrm{TI}}^{j_{\mathrm{e}}}&is_{y}\Delta(r)\\ -is_{y}\Delta(r)&-(H_{\mathrm{TI}}^{m-j_{\mathrm{e}}})^{*}\end{pmatrix}. \tag{11}\] In Fig. 4, a schematic representation of the superconducting pairing sectors for the surface states is provided, characterized by the electronic angular momentum \(j_{e}\). Notably, the pairing potential occurs between the two surface states whose total angular momentum satisfies \(j_{\mathrm{e1}}+j_{\mathrm{e2}}=n\), as indicated by the blue dashed box. As illustrated in Fig. 3(f), the occupied surface states and bulk states exhibit distinct localizations near the Figure 4: Schematic of the superconducting pairing sectors of the surface states for \(n=0\) in panel (a) and \(n=1\) in panel (b). The pairing potential occurs between two surface states whose total angular momentum satisfies \(j_{\mathrm{e1}}+j_{\mathrm{e2}}=n\), as indicated by the blue dashed box. The red upward (downward) arrows signify surface states with negative (positive) angular momentum \(j_{\mathrm{e}}\). Figure 5: The minimal gap \(\Delta_{\mathrm{min}}\) of all the occupied states as a function of the magnetic field with (a) different band bend strength \(W\) and (b) different radius \(R_{0}\). In panel (a), \(R_{0}\) is fixed to 50 nm. In panel (b), the band bending is fixed to 0.3 eV. The abscissa below panel (b) corresponds to the case with \(R_{0}=70\) nm. We choose the parameters \(\Delta_{0}=1.6\) meV [69] and \(\xi=25\) nm [68]. TI-SC interface. 
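The selection rule behind Eq. (11) follows from a one-line angular integral (spin and orbital structure suppressed, and reading the superscript \(m\) in Eq. (11) as the winding number \(n\)): the pairing entry of Eq. (9) connecting an electron harmonic \(e^{ij_{e1}\theta}\) with the complex-conjugated partner of a state carrying \(e^{ij_{e2}\theta}\) is proportional to \[\int_{0}^{2\pi}e^{-ij_{e1}\theta}\,\Delta(r)\,e^{in\theta}\,e^{-ij_{e2}\theta}\,\mathrm{d}\theta=2\pi\,\Delta(r)\,\delta_{j_{e1}+j_{e2},\,n},\] so that only pairs with \(j_{e1}+j_{e2}=n\) are coupled, which is exactly the content of the blue dashed boxes in Fig. 4.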
Consequently, these states exhibit different magnitudes of the induced superconducting gaps. To quantitatively assess this phenomenon, we define the minimum gap \(\Delta_{\rm min}\) among all occupied states. In Fig. 5(a), \(\Delta_{\rm min}\) is plotted as a function of the magnetic field for various band bending strengths \(W\). When \(W=0.1\) eV (blue lines), \(\Delta_{\rm min}\) is largest, displaying a typical Little-Parks oscillation behavior. Notably, the maximum of \(\Delta_{\rm min}\) occurs when the flux \(\phi\) slightly exceeds the integer superconducting flux quantum. This is due to the fact that the actual flux-penetration area of the states is slightly smaller than the nanowire's cross-sectional area. Furthermore, we observe a significant decrease in \(\Delta_{\rm min}\) as \(W\) increases, as depicted by the red and black lines in Fig.5(a). This behavior can be elucidated as follows: As \(W\) rises to 0.2 eV, both surface states and bulk states become occupied [Fig.3(b)]. In general, the SC gap of bulk states is smaller than that of the surface states [60]. Additionally, their difference in the flux-penetration area, \(\Delta A_{\rm phys}\), introduces a phase uncertainty \(\delta\phi=2\pi(\Delta A_{\rm phys}B/\phi_{0})\), which suppress the \(\Delta_{\rm min}\) as the magnetic field increases. As shown by the red (black) lines in Fig. 5(a), the third (second) Little-Parks oscillation peak disappears when \(W=0.2\) (0.3) eV. Thus, in comparison to the scenario where the TI nanowire is solely occupied by surface states, a significant reduction in \(\Delta_{\rm min}\) is observed when the Fermi level resides within the bulk bands. To address this challenge, we propose employing a TI nanowire with a larger radius. As illustrated in Fig. 5(b), the SC gap shows an upward trend with an increase in the nanowire's radius. This trend appears to be unexpected in the context of an intuitive understanding of the proximity effect in TI-SC slab systems, in which the induced gap in the TI typically decreases with increasing thickness [57; 70]. However, there are two key factors at play. Firstly, the presence of a full SC shell confines the occupied states to an accumulation layer near the TI-SC interface. The thickness of the accumulation layer determines the coupling strength between the TI and the SC. Notably, this accumulation layer maintains a nearly consistent thickness of approximately 30 nm, regardless of the specific radius of the TI nanowire (see Appendix E). Secondly, as the radius of the nanowire's cross-sectional area increases, the ratio between the accumulation layer and the nanowire's sectional area can be effectively reduced. As a consequence, this leads to an enhancement of \(\Delta_{\rm min}\). To characterize the topology of the TI nanowire, we calculate the the Pfaffian topological invariant \(\nu\), also called the Kitaev or Majorana number [71]. A unitary transformation is used to express the Hamiltonian \(H\) in the Majorana basis \(H_{M_{j}}\), which is also block diagonal as \[H_{Mj}(k_{z}=0,\pi) = \bigoplus_{j}H^{j}_{Mj}(k_{z}=0,\pi). \tag{12}\] Then the topological invariant \(\nu\) can be calculated in each \(j\) blocks, which takes the form [72] \[\nu = {\rm sgn}\bigg{\{}\prod_{j}\frac{{\rm Pf}[H^{j}_{Mj}(k_{z}=0)]}{ {\rm Pf}[H^{j}_{Mj}(k_{z}=\pi)]}\bigg{\}} \tag{13}\] \[= {\rm sgn}\bigg{\{}\prod_{j}\nu_{j}\bigg{\}}.\] As depicted in Fig. 4, the configuration of the superconducting pairing depends on the parity of the winding number \(n\). 
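Before turning to the role of the winding-number parity, the geometric part of the radius argument above can be made semi-quantitative with a few lines; the numbers are illustrative and use the fixed \(\sim\)30 nm accumulation-layer thickness discussed in Appendix E.

```python
import numpy as np

# Fraction of the cross-section occupied by a ~30 nm accumulation layer.  Since all
# occupied states sit somewhere inside this layer, the spread of their flux-penetration
# areas relative to the total flux through the wire is bounded by this fraction,
# which controls the phase uncertainty delta_phi = 2*pi*(Delta_A_phys * B / phi_0).
t = 30.0                                       # nm, accumulation-layer thickness (Appendix E)
for R0 in (50.0, 70.0, 100.0):                 # nm
    frac = 1.0 - (max(R0 - t, 0.0) / R0) ** 2
    print(f"R0 = {R0:5.1f} nm : accumulation layer covers {100 * frac:.0f}% of the cross-section")
# ~84% at R0 = 50 nm versus ~51% at R0 = 100 nm: the relative spread of enclosed flux
# shrinks with R0, consistent with the enhancement of Delta_min in Fig. 5(b).
```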
This feature engenders different topological properties of the TI nanowire, contingent on whether \(n\) is an even or odd integer. For the sake of clarity, let us first consider the even-\(n\) scenario, as illustrated in Fig. 4(a). The surface states with \(\pm j_{e}\) exhibit an energy splitting which plays a similar role to the Zeeman splitting in the Rashba nanowire system. Therefore, the realization of MZMs requires the fine tuning of the Fermi level. However, since the TI nanowire is fully surrounded by the SC shell, the strong screening effect in the SC shell makes it difficult to tune the Fermi level by the gate voltage. Although Fermi level control can be equivalently achieved by altering the magnitude of \(W\), it is essential to note that in practical experiments, \(W\) is a nonadjustable parameter that is determined by work function imbalance at the TI-SC interface [73]. Considering these intricate factors, the realization of MZMs with even-\(n\) appears to be difficult in our proposed framework. In the scenario where \(n\) is an odd integer, such as the case of \(n=1\), distinct behavior emerges. Here, the presence of a solitary \(j_{e}=1/2\) surface subband [Fig. 4(b)] violates the Fermion doubling theorem, leading to a topological invariant \(\nu_{j=0}=-1\)[71]. Consequently, the topological conditions require that the remaining blocks (\(j\neq 0\)) should be topologically trivial. For the \(j\neq 0\) blocks, the energy splitting \(2\delta^{{}^{\prime}}\) between the \(j_{e}\) and \(1-j_{e}\) subbands is given by \(\frac{A_{1}}{R_{0}}|1-\phi_{\rm flux}|\). When a magnetic flux of \(\phi_{\rm flux}=1\) is applied, the \(j_{e}\) and \(1-j_{e}\) subbands become perfectly degenerate, indicating a topologically nontrivial system regardless of the Fermi level's position within the bulk gap of the TI nanowire [36; 37]. Remarkably, we find that the system always remains topologically nontrivial even when the Fermi level is deep within the conduction band [Fig. 6(a)]. This finding seems to contrast with previous works that neglected the electrostatic environment and posited that achieving MZMs requires the Fermi level within the bulk gaps [36; 37; 38]. To grasp this distinction intuitively, one can apprehend it as the following. The Fermi levels in the previous works are tuned by the phenomenal parameters, i.e., the homogeneous chemical potential \(\mu\). When \(\mu\) is inside the conduction band, the bulk of the nanowire becomes metallic and the topological surface states disappear [36; 37; 59]. However, in this work, the upward shift of the Fermi level is caused by the band bending effect at the TI-SC interface, described by the electrostatic potential. Although the surface states and bulk states are both occupied, they are confined to an accumulation layer adjacent to the TI- SC interface. Remarkably, the confinement of the electrostatic potential protects the surface states, especially for \(j_{e}=1/2\) surface subbands, from hybridization with the conduction bands [Fig. 3(f)]. This finding is our central result, as it demonstrates that MZMs can be realized even in the presence of conduction bands, which are not affected by the band bending effect. Notably, our results require that the primary TI nanowire (without the effect of electrostatic potential) be bulk-insulating, which is consistent with the previous work. In addition to the topological invariant \(\nu=-1\), the realization of robust MZMs also requires large \(\Delta_{\text{min}}\). Fig. 
6(b) shows the topological phase diagram as a function of the magnetic flux \(\phi_{\text{flux}}\) and band bending strength \(W\). A large topological region with a finite SC gap exists near a single flux quantum, \(\phi_{\text{flux}}=1\). Notably, the topological phases do not depend upon the precise value of \(W\). This signifies that achieving MZMs solely demands the application of a magnetic field near the \(n=1\) region, thereby obviating the necessity for finely tuning the Fermi level. To further confirm that the system is indeed in the topological phase under such conditions, we consider a TI nanowire with finite length \(L_{z}\) in the \(z\) direction. Then we calculate the eigenvalues of each \(j\) block, as shown in Fig. 6(c). Analogous to the Caroli-de Gennes-Matricon (CdGM) states [74], we observe in-gap states with nearly equal energy separation \(\delta E\) in each \(j\) block. These CdGM analogs are confined to the TI-SC interface rather than around a vortex core [75, 76]. Notably, a pair of MZMs emerges in the \(j=0\) block because of the particle-hole symmetry. We further calculate the distribution of the DOS of MZMs in the \(L_{z}-r\) plane [Fig. 6(d)]. As we can see, the MZMs are mostly localized in the center of the top and bottom surfaces of the TI nanowire and gradually decay toward the lateral boundary. As previously mentioned, the suppression of the SC gap can be mitigated by increasing the radius \(R_{0}\). Nevertheless, the energy separation \(\delta E\) diminishes with increasing \(R_{0}\) [75], see the black line in Fig. 6(e). In order to detect and manipulate MZMs, it is requisite that \(\delta E\) far exceeds the experimental temperature. Notably, \(\delta E\) still remains approximately at \(0.064~{}\Delta_{0}\approx 0.1\) meV when \(R_{0}=100\) nm. Figure 6: (a) The topological invariant \(\nu\) as a function of the band bending strength \(W\) when \(n=1\). (b) The phase diagram as a function of magnetic flux and band bending strength. The SC gap is multiplied by the topological invariant \(\nu\), so the red regions correspond to the gapped topological phase. (c) The eigenvalues of several lowest \(j\) blocks when \(n=1\). A pair of MZMs exists in the \(j=0\) block. (d) The distribution of DOS of MZMs in the \(L_{z}-r\) plane. (e) Black line: the average energy separation of the in-gap states, \(\delta E\), decreases with increasing \(R_{0}\). Red line: the minimal gap \(\Delta_{\text{min}}\) increases with \(R_{0}\). (f) The average energy of MZMs with 30 different disorder configurations at various fluctuation strengths \(u_{0}\). The inset shows the distribution of the electrostatic potential with fluctuations \(u_{0}=5\) meV in one disorder configuration. Parameters used in each panel: (a) \(R_{0}=70\) nm, \(\phi=1.26\). (b) \(R_{0}=70\) nm. (c)(d)(f) \(R_{0}=70\) nm, \(L_{z}=1000\) nm, \(\phi=1.26\), \(W=0.3\) eV. (e) \(\phi=1.26\), \(W=0.3\) eV. Finally, we consider the disorder effect on the TI nanowire. This is important because present-day bulk-insulating TI wires are relatively dirty [49]. The charged impurities in the bulk samples can lead to fluctuations of the electrostatic potential up to several meV [77; 78; 79; 80]. In order to investigate the stability of MZMs, we add on-site fluctuations to the potential, \(\delta\phi(r)\), drawn randomly as \(\delta\phi(r)\in[-u_{0}/2,u_{0}/2]\). Notably, the MZMs remain strongly pinned to zero energy up to \(u_{0}=10\) meV [Fig. 6(f)].
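The disorder protocol of Fig. 6(f) can be mimicked by the short sketch below. To keep it self-contained, a minimal Kitaev chain is used as a stand-in for the full BdG problem of this section (an assumption made purely for illustration); the uniform sampling of \(\delta\phi\in[-u_{0}/2,u_{0}/2]\) and the averaging over 30 configurations follow the text.

```python
import numpy as np

rng = np.random.default_rng(1)
N, t, Delta, mu = 200, 1.0, 0.3, 0.5      # toy Kitaev-chain parameters (topological: |mu| < 2t)

def lowest_mode(delta_mu):
    """Smallest |E| of a disordered Kitaev-chain BdG Hamiltonian (stand-in model)."""
    h = np.diag(-(mu + delta_mu)) - t * (np.eye(N, k=1) + np.eye(N, k=-1))
    d = Delta * (np.eye(N, k=1) - np.eye(N, k=-1))          # antisymmetric pairing block
    H = np.block([[h, d], [d.T, -h.T]])
    return np.min(np.abs(np.linalg.eigvalsh(H)))

n_configs = 30                             # disorder configurations, as in Fig. 6(f)
for u0 in (0.0, 0.05, 0.1, 0.2):           # fluctuation strengths (units of t)
    e = [lowest_mode(rng.uniform(-u0 / 2, u0 / 2, N)) for _ in range(n_configs)]
    print(f"u0 = {u0:4.2f} t : <E_lowest> = {np.mean(e):.2e} t")
# Deep in the topological phase the lowest mode stays pinned near zero for all of these
# disorder strengths, mirroring the robustness reported for the MZMs above.
```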
This stability against disorder is a direct consequence of our setup's large topological phase for MZMs. ## V Discussion and conclusion This study delves into the topological characteristics of a TI nanowire covered by a full SC shell. To comprehensively account for the system's electrostatic environment, we employ the self-consistent Schrodinger-Poisson method, enabling us to compute the internal electrostatic potential within the TI nanowire. Our analysis unearths a distinctive outcome: the band bending effect at the interface between the TI and the SC induces a notable shift of the Fermi level into the conduction band. This shift, in turn, leads to the coexistence of occupied surface states and bulk states, localized within an accumulation region proximate to the TI-SC interface. This accumulation layer maintains a nearly constant thickness of approximately 30 nm, regardless of the specific radius of the TI nanowire. When magnetic flux is applied, the surface states and bulk states have different flux-penetration areas, which engenders a suppression of the superconducting gap. To address this issue, we propose to use TI nanowires with larger radii. Finally, we demonstrate that MZMs can be achieved across a wide spectrum of parameters centered around one applied flux quantum, \(\phi_{0}=h/2e\). Importantly, within this regime, MZMs can be realized even in the presence of conduction bands, which are not affected by the band bending effect. In our calculations, we have retained the rotational symmetry of the TI nanowire. This strategic choice reduces the computational cost and facilitates the treatment of the fully three-dimensional system [81]. Importantly, the topological properties of TI nanowires remain insensitive to the specific shape of the cross-section [72]. Refs. [54; 55; 56] proposed that the electrostatic environment in Rashba semiconductors has a significant effect on their topological properties. This prompts our inquiry into the electrostatic influences within TI nanowires. Indeed, in the context of TI nanowires, the role of the electrostatic effect also remains essential. Compared with bulk states in Rashba semiconductors, the surface states are more localized near the TI-SC interface, so they are more sensitive to the band bending effect. Building upon this insight, Ref. [60] demonstrated that surface states near the SC barely respond to gating, thereby constraining the tunability of the system. As has been shown theoretically, the SC in TI nanowire-based devices can either form a full shell [72] or be attached to only a few side surfaces of the TI nanowire [38]. The full-shell geometry offers a larger induced SC gap but restricts the tunability of the Fermi level through the gate voltage. Notably, our results demonstrate that the presence of MZMs remains independent of the band bending strength, thereby eliminating the need for fine tuning of the Fermi level. This signifies that achieving MZMs solely demands the application of a magnetic field near one applied flux quantum (\(\phi_{\text{flux}}=1\)), further reducing the difficulties in experimental control. ###### Acknowledgements. _Acknowledgments -_ The authors thank Chun-Xiao Liu and Fu-Chun Zhang for helpful discussions. X. Liu acknowledges the support of the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0302700) and the National Natural Science Foundation of China (NSFC) (Grant No. 12074133). Dong E.
Liu acknowledges the support of the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0302400), and the National Natural Science Foundation of China (1974198). Xiao-Hong Pan acknowledges the support of the China Postdoctoral Science Foundation (Grant No. 2023M731208). Zan Cao acknowledges the support of the National Natural Science Foundation of China (No. 12374158). Li Chen and Xiao-Hong Pan contributed equally to this work. ## Appendix A Model Hamiltonian of TI in Cylindrical Coordinates The model Hamiltonian of TI in the Cartesian coordinates takes the form [61; 64] \[H_{\text{car}}(k)=\epsilon_{0}(k)+\begin{bmatrix}M(k)&A_{1}k_{z}&0&A_{2}k_{- }\\ A_{1}k_{z}&-M(k)&A_{2}k_{-}&0\\ 0&A_{2}k_{+}&M(k)&A_{1}k_{z}\\ A_{2}k_{+}&0&A_{1}k_{z}&-M(k)\end{bmatrix}, \tag{11}\] where \(k_{\pm}=k_{x}\pm ik_{y}\), \(\epsilon_{0}(k)=C_{0}+D_{1}k_{z}^{2}+D_{2}(k_{x}^{2}+k_{y}^{2})\) and \(M(k)=M_{0}+B_{1}k_{z}^{2}+B_{2}(k_{x}^{2}+k_{y}^{2})\). To rewrite Eq. 11 in cylindrical coordinates, we can use the relation: \[\begin{bmatrix}\partial_{x}\\ \partial_{y}\\ \partial_{z}\end{bmatrix}=\begin{bmatrix}\cos\theta&-\frac{1}{r}\sin\theta& 0\\ \sin\theta&\frac{1}{r}\cos\theta&0\\ 0&0&1\end{bmatrix}\begin{bmatrix}\partial_{r}\\ \partial_{\theta}\\ \partial_{z}\end{bmatrix}. \tag{12}\] Subsequently, the TI Hamiltonian in cylindrical coordinates takes the form \[H_{\text{TI}}(r,\theta,z)=\epsilon_{0}(r,\theta,z)+ \tag{13}\] \[\begin{bmatrix}M(r,\theta,z)&-iA_{1}\partial_{z}&0&A_{2}P_{- \theta}\\ -iA_{1}\partial_{z}&-M(r,\theta,z)&A_{2}P_{-\theta}&0\\ 0&A_{2}P_{+\theta}&M(r,\theta,z)&-iA_{1}\partial_{z}\\ A_{2}P_{+\theta}&0&-iA_{1}\partial_{z}&-M(r,\theta,z)\end{bmatrix}\] where \(M(r,\theta,z)=m_{0}-B_{1}\partial_{z}^{2}-B_{2}\nabla_{\text{in}}^{2}\), \(\epsilon(r,\theta,z)=C_{0}-D_{1}\partial_{z}^{2}-D_{2}\nabla_{\text{in}}^{2}\). Here, \(\nabla_{\text{in}}^{2}=\frac{1}{r}\partial_{r}+\partial_{r}^{2}+\frac{1}{r^{2} }\partial_{\theta}^{2}\) is the Laplacian operator in the in-plane coordinates. \(P_{\pm\theta}=-ie^{\pm i\theta}(\partial_{r}\pm\frac{i}{r}\partial_{\theta})\). Now, consider a magnetic field applied along the nanowire (\(z\) direction), and choose the gauge \(\mathbf{A}=\frac{1}{2}(\mathbf{B}\times\mathbf{r})=A_{\theta}\tilde{\theta}\) with \(A_{\theta}=\frac{Br}{2}\). It is straightforward to demonstrate that the vector potential affects only \(\partial_{\theta}\): \[-i\partial_{\theta}\longrightarrow-i\partial_{\theta}-\Phi(r). \tag{21}\] Here, \(\Phi(r)=Br^{2}/\Phi_{0}\) represents the normalized magnetic flux with respect to the flux quantum \(\Phi_{0}=h/e\). Subsequently, the TI Hamiltonian changes to: \[H_{\rm TI}\longrightarrow H_{\rm TI}+H_{M}. \tag{22}\] Where \(H_{M}\) is the additional term originating from the magnetic flux, taking the form: \[H_{M}=\frac{B_{2}}{r^{2}}[\Phi^{2}(r)+2iB_{2}\Phi(r)\partial_{\theta}]s_{0} \sigma_{z}-\frac{A_{2}\Phi(r)}{r}s_{\theta}\sigma_{x}. \tag{23}\] Finally, the TI Hamiltonian with magnetic flux and electrostatic potential in cylindrical coordinates takes the form: \[H_{\rm e}=H_{\rm TI}+H_{M}-e\phi(r). \tag{24}\] Notably, we have \([H_{\rm e},\hat{J}_{z}^{\rm e}]=0\) with \(\hat{J}_{z}^{\rm e}=-i\partial_{\theta}+\frac{1}{2}s_{z}\). Importantly, the angular dependence of \(H_{e}\) can be eliminated using a unitary transformation \(\tilde{H}_{\rm e}=UH_{\rm e}U^{\dagger}\) where \(U=\exp[-i(j_{e}-\frac{1}{2}s_{z})\theta]\). 
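As a brief check of the rotational symmetry (suppressing the \(\sigma\) structure), note that the only \(\theta\)-dependent pieces of \(H_{\rm e}\) enter through the combinations \(e^{-i\theta}s_{+}\) and \(e^{+i\theta}s_{-}\): they originate from \(A_{2}P_{\mp\theta}s_{\pm}\sigma_{x}\) and, after writing \(s_{\theta}=-ie^{-i\theta}s_{+}+ie^{+i\theta}s_{-}\), also from \(H_{M}\). Using \([-i\partial_{\theta},e^{\mp i\theta}]=\mp e^{\mp i\theta}\) and \([\frac{1}{2}s_{z},s_{\pm}]=\pm s_{\pm}\), one finds \[\big[\hat{J}_{z}^{\rm e},\,e^{-i\theta}s_{+}\big]=(-1+1)\,e^{-i\theta}s_{+}=0,\qquad\big[\hat{J}_{z}^{\rm e},\,e^{+i\theta}s_{-}\big]=(+1-1)\,e^{+i\theta}s_{-}=0,\] so that \(j_{e}\) is conserved and the unitary \(U\) removes the remaining angular dependence block by block.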
Consequently, \(\tilde{H}_{\rm e}\) becomes block diagonal, expressed as: \[\tilde{H}_{\rm e}=\bigoplus_{j_{e},k_{z}}H_{\rm TI}^{j_{e}}(r,k_{z}). \tag{25}\] Notably, we replace \(-i\partial_{z}\) with \(k_{z}\) because it is a good quantum number. Then we can divide the \(H_{\rm TI}^{j_{e}}(r,k_{z})\) into three parts as: \[H_{\rm TI}^{j_{e}}(r,k_{z})=H_{r}^{j_{e}}(r)+H_{M}^{j_{e}}(r)+H_{k_{z}}^{j_{e} }(k_{z}). \tag{26}\] All the \(k_{z}\) terms are included in \(H_{k_{z}}^{j_{e}}(k_{z})\), expressed as \(H_{k_{z}}^{j_{e}}(k_{z})=(C_{0}+D_{1}k_{z}^{2})s_{0}\sigma_{0}+(m_{0}+B_{1}k_{z }^{2})s_{0}\sigma_{z}+A_{1}k_{z}s_{z}\sigma_{x}\). \(H_{M}^{j_{e}}(r)\) is the flux term obtained from the transformation \(UH_{M}U^{\dagger}\), which takes the form \[H_{M}^{j_{e}}(r) = B_{2}\frac{\Phi^{2}(r)}{r^{2}}s_{0}\sigma_{z}-\frac{2B_{2}\Phi( r)}{r^{2}}(j-\frac{1}{2}s_{z})\sigma_{z} \tag{27}\] \[- \frac{A_{2}\Phi(r)}{r}s_{y}\sigma_{x}.\] And \(H_{r}^{j_{e}}(r)\) is given by \[H_{r}^{j_{e}}(r)=\epsilon_{r}-e\phi(r)+ \tag{28}\] \[\begin{bmatrix}M^{\lambda_{j_{e}}-\frac{1}{2}}&0&P_{j_{e}}^{+}\\ 0&-M^{\lambda_{j_{e}}-\frac{1}{2}}&P_{j_{e}}^{+}&0\\ 0&P_{j_{e}}^{-}&M^{\lambda_{j_{e}}+\frac{1}{2}}&0\\ P_{j_{e}}^{-}&0&0&-M^{\lambda_{j_{e}}+\frac{1}{2}}\end{bmatrix}\] where \(\epsilon_{r}=-D_{2}(\partial_{r}^{2}+\frac{1}{r}\partial_{r}-\frac{(\lambda_{ j_{e}}+\frac{1}{2})^{2}}{r^{2}})\), \(M^{\lambda_{j_{e}}\pm\frac{1}{2}}=-B_{2}(\partial_{r}^{2}+\frac{1}{r}\partial_ {r}-\frac{(\lambda_{j_{e}}+\frac{1}{2})^{2}}{r^{2}})\), \(P_{j_{e}}^{\pm}=-iA_{2}(\partial_{r}\pm\frac{\lambda_{j_{e}}+\frac{1}{2}}{r})\). Notably, \(\lambda_{j_{e}}=j_{e}-\Phi(r)\) which can be regarded as the flux modulated angular momentum. Consequently, the blocks with the \(\pm\lambda_{j_{e}}\) have the same eigenvalues. For instance, when \(\Phi(r)=1/2\), the surface subbands in \(j_{e}=1/2\) (\(\lambda_{j_{e}}=0\)) block are gapless nondegenerate bands. While the subbands within \(j_{e}=-1/2\) (\(\lambda_{j_{e}}=-1\)) and \(j_{e}=3/2\) (\(\lambda_{j_{e}}=1\)) blocks are degenerate with the same eigenvalues. ## Appendix B Bessel Expansion In the main text, we have transformed the Hamiltonian \(H_{e}\) in the block diagonal form according to a unitary transformation \(\tilde{H}_{e}=UH_{e}U^{\dagger}\). Finally, \(\tilde{H}_{e}\) is block diagonal which can be written as \[\tilde{H}_{\rm e}=\bigoplus_{j_{e},k_{z}}H_{\rm TI}^{j_{e}}(r,k_{z}). \tag{29}\] For the numerical diagonalization of the Hamiltonian within each \(j_{e}\) block, we have employed the Bessel expansion. The Bessel functions satisfy the orthogonality relation: \[\frac{1}{(N_{q}^{m})^{2}}\int_{0}^{R_{0}}J_{m}(\alpha_{q^{\prime}}^{m}\frac{r}{ R_{0}})J_{m}(\alpha_{q}^{m}\frac{r}{R_{0}})rdr=\delta_{qq^{\prime}}, \tag{30}\] Here, \(m\) denotes the orbital angular momentum and \(\alpha_{q}^{m}\) represents the \(q\)-th zero of the \(m\)-order Bessel function \(J_{m}(x)\). The normalized factor is denoted as \(N_{q}^{m}=\frac{1}{\sqrt{2}}R_{0}J_{m+1}(\alpha_{q}^{m})\). For convenience, we introduce the normalized Bessel functions \(|J_{m}^{q}\rangle=J_{m}(\alpha_{q}^{m}\frac{r}{R_{0}})/N_{q}^{m}\). These normalized functions \(|J_{m}^{q}\rangle\) with the same \(m\) but different zeros constitute a complete orthogonal basis, suitable for Figure 7: (a) The electron band structure of TI nanowire. The green, blue, and black lines correspond to the case when \(n_{\rm tru}=25,~{}45,~{}60\), respectively. (b) Zoom view of the region depicted by the red box in panel (a). 
It is noted that the band structure converges to stable values when \(n_{\rm tru}\) is sufficiently large. In our calculations, we choose \(n_{\rm tru}=45\). Parameters used in this plot: \(\phi=0\), \(W=0.3\), \(R=50\) nm. the expansion of the Hamiltonian \(H_{\rm TI}^{j_{e}}(r,k_{z})\). Because \(|J_{m}^{g}\rangle\) have an infinite number of zeros, a truncation is needed. This truncation involves selecting a finite set of zeros, up to a truncated zero \(\alpha_{n_{\rm tru}}^{m}\). In this context, the dimension of the discrete Hamiltonian within each \(j_{e}\) block becomes \(4\times n_{\rm tru}\). When \(n_{\rm tru}\) is chosen sufficiently large, this truncation introduces minimal error within the low-energy regime [Figure 7]. ## Appendix C Charge density In our calculations, the Hamiltonian of the topological insulator (TI) is described by a four-band \(k\cdot p\) model, which includes both conduction and valence bands. In Fig. 8, we present a schematic diagram of the TI nanowire's band structure. We define \(\rho_{\rm oc}(r)\) as the occupied charge density, obtained by integrating over all occupied eigenstates, while \(\rho_{\rm val}(r)\) represents the density originating from the entire valence band. The density of free electrons or holes is given by \(\rho(r)=\rho_{\rm oc}(r)-\rho_{\rm val}(r)\). When the Fermi level is situated at the neutral point, the TI nanowire behaves as an insulator, and we have \(\rho_{\rm oc}(r)-\rho_{\rm val}(r)=0\). However, when the Fermi level is located within the conduction (valence) bands, we have \(\rho_{\rm oc}(r)-\rho_{\rm val}(r)>(<)0\), indicating electron (hole) doping. The growth of a TI with SC films induces electron doping from the SC to the TI, causing an upward shift of the Fermi level into the conduction band [58; 65]. ## Appendix D Schrodinger-Poisson Method To obtain the electrostatic potential \(\phi(r)\), we employ the Schrodinger-Poisson Method. Initially, we introduce an initial potential \(\phi_{0}(r)=0.1\) eV into the Hamiltonian \(H_{\rm TI}^{j_{e}}(r,k_{z})\) and solve the Schrodinger equation within each \(j_{e}\) block: \[H_{\rm TI}^{j_{e}}(r,k_{z})\psi_{n_{z},k_{z}}^{j_{e}}(r)=E_{n_{z},k_{z}}^{j_{e }}\psi_{n_{z},k_{z}}^{j_{e}}(r). \tag{13}\] This yields a set of eigenenergies \(E_{n_{z},k_{z}}^{j_{e}}\) and eigenstates \(\psi_{n_{z},k_{z}}^{j_{e}}(r)\). Here, \(n_{z}\) denotes the index of transverse modes. The charge density \(\rho_{1}\) with potential \(\phi_{0}(r)\) is obtained by integrating over the occupied eigenstates: \[\rho_{1}(r)=\frac{-e}{(2\pi)^{2}}\sum_{n,j_{e}}\int dk_{z}\left[|\psi_{n_{z},k _{z}}^{j_{e}}(r)|^{2}f_{T}-\rho_{\rm val}(r)\right]. \tag{14}\] Finally, a new potential \(\phi_{1}(r)\) is determined by solving the Poisson equation: \[\frac{1}{r}\partial_{r}\phi_{1}(r)+\partial_{r}^{2}\phi_{1}(r)=-\frac{\rho_{1 }(r)}{\epsilon_{0}\epsilon_{r}}. \tag{15}\] It's worth noting that \(\phi_{1}(r)\) generally deviates from the initial potential \(\phi_{0}(r)\). The discrepancy is quantified by the error: \[\sigma_{1}=\frac{\sum_{m}[\phi_{1}(r_{m})-\phi_{0}(r_{m})]}{N_{m}}. \tag{16}\] Here, \(\sigma_{1}\) is indexed by the iteration number, \(m\) denotes the site index, and \(N_{m}\) is the number of sites. The SP problem necessitates a self-consistent solution involving the iterative equations, Eq.13 and Eq.15, until the error of the \(i\)-th iteration \(\sigma_{i}\) becomes smaller than the critical value \(\sigma_{c}\). The output \(\phi_{i}(r)\) after convergence is the final self-consistent potential. 
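The outer loop just described, together with the linear mixing introduced in the next paragraph (Eq. (17)), can be summarized in a few lines. In the sketch below the two callables stand for Eq. (14) and Eq. (15) and must be supplied by the concrete solver; the toy usage at the end is only meant to exercise the loop and has no physical content.

```python
import numpy as np

def schroedinger_poisson(phi0, charge_density, solve_poisson,
                         kappa=0.1, sigma_c=1e-8, max_iter=2000):
    """Self-consistent loop of Appendix D: phi -> rho (Eq. (14)) -> phi (Eq. (15)),
    with linear mixing (Eq. (17)); stops when the mean absolute update < sigma_c."""
    phi_in = np.asarray(phi0, dtype=float)
    for i in range(max_iter):
        rho = charge_density(phi_in)
        phi_out = solve_poisson(rho)
        sigma = np.mean(np.abs(phi_out - phi_in))      # iteration error, cf. Eq. (16)
        if sigma < sigma_c:
            return phi_out, i, sigma
        phi_in = kappa * phi_out + (1.0 - kappa) * phi_in
    return phi_in, max_iter, sigma

# toy linear "screening" model, just to exercise the loop (fixed point phi = 0.4)
phi, n_iter, err = schroedinger_poisson(
    phi0=np.full(100, 0.1),
    charge_density=lambda phi: -0.5 * phi,
    solve_poisson=lambda rho: 0.2 - rho,
)
print(n_iter, err)      # converges after a few hundred mixed iterations
```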
In our approach, we utilize the linear iteration. The input potential at each iteration is a mixture of the input and output potentials from the previous iteration [56; 38]: \[\phi_{i}^{\rm in}(r)=\kappa\phi_{i-1}^{\rm out}(r)+(1-\kappa)\phi_{i-1}^{\rm in }(r). \tag{17}\] In our calculations, we set \(\kappa=0.1\) and \(\sigma_{c}=10^{-8}\) eV. The iteration error \(\sigma_{i}\) significantly diminishes as the number of iterations increases [Fig. 9(a)]. The potential convergence is observable after approximately 40 iterations, as illustrated by the black solid and dashed lines [Fig. 9(b)]. Figure 8: Schematic diagram of the three typical cases of the band structure of TI nanowire: (a) insulating, (b) electron doping, (c) hole doping. The red and gray bands correspond to TI surface states and bulk states, respectively. The blue dash line represents the Fermi level. \(\rho_{\rm oc}(r)\) is defined as the occupied charge density, which is obtained by integrating over the whole occupied eigenstates. \(\rho_{\rm val}(r)\) is the density stems from the whole valence band. The free electrons or holes is obtained by \(\rho(r)=\rho_{\rm oc}(r)-\rho_{\rm val}(r)\). Figure 9: (a) The error of Schrödinger-Poisson equations as a function of the number of iterations. (b) The distribution of the electrostatic energy \(-e\phi(r)\) as the number of iterations increases. The convergence occurs when the iterations number \(i>40\) with the error \(\sigma<10^{-7}\) eV, see the black solid and dashed lines. ## Appendix E Accumulation Layer The band bending effect-induced electrostatic potential confines the bulk states and surface states to an accumulation layer near the TI-SC interface, with a characteristic width of approximately 30 nm. Consequently, the TI nanowire can be approximately divided into two regions: the accumulation layer and the insulating core. Remarkably, we find that the accumulation layer has a fixed thickness of approximately 30 nm and doesn't increase with the radius of the TI nanowire. This can be explained by the distribution of the confinement potential [Fig. 10(a)]. For the convenience of comparison, we align the three different radii \(\phi(r)\) at the boundary of the nanowire, i.e., at the point \(r=R_{0}\). It is evident that the distribution of the three potentials near the boundary (\(r=R_{0}\)) is nearly identical, ensuring a consistent thickness for the accumulation layer [Fig. 10(b)]. Enlarging the nanowire radius will primarily increase the size of the insulating core region. Within this region, due to the absence of charge carriers, the potential remains notably flat. In a TI-SC hybrid system, a thinner accumulation layer implies a stronger coupling between the TI and the SC, thereby resulting in a more significant proximity effect. Since the thickness of the accumulation layer remains constant irrespective of the radius \(R_{0}\), this property offers an advantage in terms of flexibility in fabricating nanowires under various conditions. Furthermore, in the presence of magnetic flux, the differing flux-penetration areas between the bulk states and surface states induce a notable reduction in \(\Delta_{\mathrm{min}}\) [Fig. 5(a)]. By increasing the value of \(R_{0}\), the relative area between the accumulation layer and the nanowire can be effectively reduced. As a consequence, this leads to an enhancement of \(\Delta_{\mathrm{min}}\) [Fig. 5(b)].
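As a final consistency check of the Bessel basis of Appendix B, the orthogonality relation (30) for the normalized functions \(|J_{m}^{q}\rangle\) can be verified numerically; the sketch below uses an arbitrary small truncation and grid and is purely illustrative.

```python
import numpy as np
from scipy.special import jv, jn_zeros

R0, m, n_tru = 50.0, 0, 10                 # nm, orbital angular momentum, truncation
alpha = jn_zeros(m, n_tru)                 # first n_tru zeros of J_m
r = np.linspace(0.0, R0, 4000)

# |J_m^q> = J_m(alpha_q r / R0) / N_q  with  N_q = R0 * J_{m+1}(alpha_q) / sqrt(2)
basis = [jv(m, a * r / R0) / (R0 * jv(m + 1, a) / np.sqrt(2.0)) for a in alpha]

# Gram matrix with the radial measure r dr; should be the identity up to quadrature error
gram = np.array([[np.trapz(bi * bj * r, r) for bj in basis] for bi in basis])
print(np.allclose(gram, np.eye(n_tru), atol=1e-3))     # True
```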
The combination of a superconductor (SC) and a topological insulator (TI) nanowire has been proposed as a potential candidate for realizing Majorana zero modes (MZMs). In this study, the Schr\"odinger-Poisson formalism is employed to incorporate the electrostatic environment inside the nanowire and to systematically explore its topological properties. The calculations show that the proximity to the SC induces a band bending effect, giving rise to a non-uniform potential inside the TI nanowire. As a result, the Fermi level is shifted upward into the conduction band, and surface and bulk states coexist; this coexistence is localized in an accumulation layer adjacent to the TI-SC interface. When a magnetic field is applied, these occupied states have different flux-penetration areas, which suppresses the superconducting gap. However, this effect can be mitigated by increasing the radius of the nanowire.
2309.09706
Dislocations with corners in an elastic body with applications to fault detection
This paper focuses on an elastic dislocation problem that is motivated by applications in the geophysical and seismological communities. In our model, the displacement satisfies the Lam\'e system in a bounded domain with a mixed homogeneous boundary condition. We also allow the occurrence of discontinuities in both the displacement and traction fields on the fault curve/surface. By the variational approach, we first prove the well-posedness of the direct dislocation problem in a rather general setting with the Lam\'e parameters being real-valued $L^\infty$ functions and satisfy the strong convexity condition. Next, by considering the scenario that the Lam\'e parameters are constant and the fault curve/surface possesses certain corner singularities, we establish a local characterisation of the slip vectors at the corner points over the dislocation curve/surface. In our study the dislocation is geometrically rather general and may be open or closed. For both cases, we establish the uniqueness results for the inverse problem of determining the dislocation curve/surface and the slips.
Huaian Diao, Hongyu Liu, Qingle Meng
2023-09-18T12:18:55
http://arxiv.org/abs/2309.09706v2
# Dislocations with corners in an elastic body with applications to fault detection ###### Abstract. This paper focuses on an elastic dislocation problem that is motivated by applications in the geophysical and seismological communities. In our model, the displacement satisfies the Lame system in a bounded domain with a mixed homogeneous boundary condition. We also allow the occurrence of discontinuities in both the displacement and traction fields on the fault curve/surface. By the variational approach, we first prove the well-posedness of the direct dislocation problem in a rather general setting with the Lame parameters being real-valued \(L^{\infty}\) functions satisfying the strong convexity condition. Next, by considering the scenario that the Lame parameters are constant and the fault curve/surface possesses certain corner singularities, we establish a local characterisation of the slip vectors at the corner points over the dislocation curve/surface. In our study the dislocation is geometrically rather general and may be open or closed. For both cases, we establish the uniqueness results for the inverse problem of determining the dislocation curve/surface and the slips. **Keywords:** dislocations, elasticity, corners, slips, well-posedness, inverse problem, uniqueness. ## 1. Introduction In this study, our focus lies on the phenomenon known as elastic dislocation. An elastic dislocation refers to a surface or a crack within an elastic solid across which there are discontinuities of the elastic displacement fields. It may arise in various practical scenarios, such as a fault plane undergoing slip for a limited duration or the sliding of faces in a crack. The modeling and comprehension of interior elastic dislocations hold significant importance in the geophysical and seismological communities. Specifically, the study finds important applications in monitoring, understanding, and mitigating earthquakes and landslides. For further details and additional references on this subject, we refer to [7, 9, 14, 16, 22, 23, 27] and the related literature cited therein. Though the problem has been extensively and intensively studied in the physical literature, there is only limited theoretical understanding. Recently, Aspri et al. [2] investigated the direct and inverse problems for elastic dislocation by modeling the Earth's crust as an infinite half-space. The authors demonstrated the well-posedness of the direct problem by assuming that the elastic coefficients are Lipschitz continuous and the surface is also Lipschitz, and established the uniqueness of the fault and slip from a single measurement of surface displacement on an open set. Additional assumptions were made that the fault, with at least one corner singularity, must be a graph with respect to a given coordinate system, and the slip must be tangential to the fault. Subsequently, Aspri et al. [1, 3] considered the dislocation problem on bounded domains in 2D and 3D, respectively, where [1] considered dislocation models in anisotropic and inhomogeneous elastic media in 2D and [3] studied the dislocations in a layered isotropic elastic medium in 3D. The elastic dislocations were modeled as open, oriented fault curves/surfaces within an elastostatic system, with discontinuity in the displacement field across such fault curves/surfaces.
The uniqueness of both the fault curves/surfaces and the slip vectors can be obtained by a single passive measurement of the elastostatic field on part of the elastic solid boundary in a general scenario. Compared with [2], the results in [1, 3] do not require additional constraints on the fault and slip, except for a fixed coordinate system and the conditions that the slip field belongs to a suitable space with good extension properties and has full support in the closure of the fault surface/curve. It is pointed out that in anisotropic elastic materials, additional assumptions on the elastic coefficients are needed in order to guarantee the unique continuation property (cf. [1]). Lastly, Elena et al. [4] also studied the crack problem in a bounded domain. Motivated by the practical and theoretical studies mentioned above, we shall propose and study the elastic dislocation problem in more sophisticated and challenging setups. We confine ourselves to the elastic dislocation problem in an extremely general form, which allows the occurrence of discontinuities in both the displacement and traction fields across the fault curve/surface. Moreover, the dislocation is geometrically rather general. In fact, the fault curve/surface for describing the dislocation can be an open or closed curve/surface in our study. In this paper, we investigate both the direct and inverse dislocation problems. Our mathematical setup allows the presence of discontinuities in both the displacement and traction fields across the fault curve/surface. The direct problem is to determine the elastic displacements of the elastic transmission problem with mixed homogeneous boundary conditions (see Problem (2.6)), assuming that the dislocation curve/surface \(\mathcal{S}\), the elastic stiffness tensor \(\mathbb{C}(\mathbf{x})\), and the slips \(\mathbf{f}\) and \(\mathbf{g}\) over \(\mathcal{S}\) are known. The inverse problem is to determine the dislocation curve/surface \(\mathcal{S}\) and the slip vectors \(\mathbf{f}\) and \(\mathbf{g}\) from the measurement of the displacement field. We focus on a single measurement of displacement to study the inverse dislocation problems, which means that we only need a single pair of Cauchy data of the corresponding elastic displacement to determine \(\mathcal{S}\), \(\mathbf{f}\) and \(\mathbf{g}\). In practice, various mature technologies, such as Synthetic Aperture Radar (SAR) and Global Positioning System (GPS) arrays, can be used to obtain surface displacement measurements. For the direct problem of elastic dislocations, it is a common practice to employ a lift of the jumps to establish a variational framework to prove its well-posedness. This approach has been extensively discussed for dislocations (cf. [28]). In this paper, we mainly consider dislocation problems with corner singularities in a bounded domain. There is a substantial body of work on corner singularities for transmission problems (cf. [10, 11, 17, 21]). In addition, suitable weighted spaces have also been used to study transmission problems (cf. [19]). To keep the paper self-contained, using the variational approach, we prove the well-posedness of Problem (2.6), where there exist jumps in both the displacement and traction fields across the fault curve/surface in general scenarios. This includes a general setting where the Lame parameters are real-valued \(L^{\infty}\) functions and satisfy the strong convexity condition.
Moreover, we consider both open and closed fault curves/surfaces. As for the inverse dislocation problems, it is worth noting that there is an alternative approach and additional numerical analysis, particularly for polyhedral domains(cf. [26]). However, in the context of our study, we specifically focus on the scenario where the Lame parameters are constant, and the fault curve/surface exhibits specific corner singularities. We first establish a local characterisation of the slip vectors at the corner points over the dislocation curve/surface by analyzing the singularity formation of the elastic field locally around certain abnormal points on the fault surface in a microlocal way. In fact we utilize the so-called CGO (complex geometric optic) solutions for the underlying elastic system to achieve the corresponding characterisation of the slip vectors at the corner points, where subtle and delicate analysis is developed in our study. For both cases that the dislocation may be open or closed, we establish the uniqueness results for the inverse problem of determining the dislocation curve/surface and the slips. The paper is structured as follows. In Section 2, we introduce the mathematical setup of our study and establish the well-posedness of the dislocation problem. In Section 3, we propose some admissible assumptions about \((\mathcal{S};\mathbf{f},\mathbf{g})\) for the scenario that \(\mathcal{S}\) may be closed or open. For both cases, assuming that Lame parameters are constants and the fault curve/surface posses corner singularities, we establish a local characterization of the slip vectors at the corner points over the dislocation curve/surface. Furthermore, we also establish global uniqueness results for the inverse dislocation problem for determining the dislocation curve/surface \(\mathcal{S}\) and the slip vectors \(\mathbf{f}\) and \(\mathbf{g}\) in these two cases with additional geometrical assumption about the dislocation curve/surface. In Section 4, we derive several local results for the slip vectors \(\mathbf{f}\) and \(\mathbf{g}\) at the corner points along \(\mathcal{S}\). Section 5 is devoted to proving the uniqueness of the inverse problem presented in Section 3. ## 2. Mathematical setup and the direct problem In this section, we pay our attention to the mathematical setup of the dislocation problem and the study of the well-posedness of the direct problem. ### Mathematical setup We first introduce a geometric and mathematical setup for our study; see Fig. 1 for a schematic illustration in 2D. Let \(\lambda(\mathbf{x})\) and \(\mu(\mathbf{x})\), \(\mathbf{x}=(x_{j})_{j=1}^{n}\in\Omega\), be real-valued \(L^{\infty}\) functions, which are referred to as the Lame parameters of the elastic solid \(\Omega\). We define \(\mathbb{C}(\mathbf{x})=(C_{ijkl}(\mathbf{x}))_{i,j,k,l=1}^{n}\), \(\mathbf{x}\in\Omega\), as a four-rank tensor given by: \[\mathbb{C}(\mathbf{x}):=\lambda(\mathbf{x})\mathbf{I}\otimes\mathbf{I}+2\mu( \mathbf{x})\mathbb{I},\ \ \text{where}\ \ C_{ijkl}(\mathbf{x})=\lambda(\mathbf{x})\delta_{ij}\delta_{kl}+\mu( \mathbf{x})(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}), \tag{2.1}\] Here, \(\mathbf{I}\) and \(\mathbb{I}\) respectively represent the identity operators of the second- and fourth-rank tensors, and \(\delta_{ij}\) is the Kronecker delta. 
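Since the tensor \(\mathbb{C}\) in (2.1) is used throughout, we note that it is elementary to realize numerically. The sketch below builds \(C_{ijkl}\) for illustrative constant Lame values, checks its minor and major symmetries and the positivity implied by the strong convexity condition stated in (2.2) below, and evaluates the contraction of the form (2.3) on a sample displacement gradient.

```python
import numpy as np

n = 2
lam, mu = 1.5, 1.0                     # illustrative values with mu > 0 and 2*mu + n*lam > 0
I = np.eye(n)

# C_ijkl = lam * d_ij d_kl + mu * (d_ik d_jl + d_il d_jk)
C = (lam * np.einsum('ij,kl->ijkl', I, I)
     + mu * (np.einsum('ik,jl->ijkl', I, I) + np.einsum('il,jk->ijkl', I, I)))

# minor and major symmetries: C_ijkl = C_jikl = C_ijlk = C_klij
assert np.allclose(C, np.transpose(C, (1, 0, 2, 3)))
assert np.allclose(C, np.transpose(C, (0, 1, 3, 2)))
assert np.allclose(C, np.transpose(C, (2, 3, 0, 1)))

def contract(A):
    """The ':' operation of (2.3): (C : A)_ij = sum_kl C_ijkl A_kl."""
    return np.einsum('ijkl,kl->ij', C, A)

# positivity of (C:A):A on nonzero symmetric A (uniform strong convexity)
rng = np.random.default_rng(0)
for _ in range(5):
    A = rng.standard_normal((n, n)); A = 0.5 * (A + A.T)
    assert np.einsum('ij,ij->', contract(A), A) > 0

print(contract(np.array([[0.0, 0.3], [0.0, 0.0]])))   # stress of a trace-free shear: mu*(grad_u + grad_u^T)
```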
We assume that the Lame parameters \(\lambda(\mathbf{x})\) and \(\mu(\mathbf{x})\) satisfy the strong convexity conditions: \[\mu(\mathbf{x})>0,\quad 2\mu(\mathbf{x})+n\lambda(\mathbf{x})>0,\quad\forall \mathbf{x}\in\Omega, \tag{2.2}\] These conditions ensure the uniform strong convexity of the elastic stiffness tensor \(\mathbb{C}(\mathbf{x})\). For \(\mathbf{A}=(a_{ij})_{i,j=1}^{n}\), we define the operation ":" as: \[\mathbb{C}:\mathbf{A}=\big{(}C_{ijkl}:\mathbf{A}\big{)}_{ij}=(\Pi)_{ij},\quad \text{where}\quad\Pi_{ij}:=\sum_{k,l=1}^{n}C_{ijkl}a_{kl}. \tag{2.3}\] Let \(\mathbf{u}(\mathbf{x})=(u_{j}(\mathbf{x}))_{j=1}^{n}\), \(\mathbf{x}\in\Omega\), with each \(u_{j}(\mathbf{x})\) being a complex-valued function. We introduce the Lame operator \(\mathcal{L}\) as: \[\mathcal{L}\mathbf{u}:=\nabla\cdot(\mathbb{C}:\nabla\mathbf{u})=\mu\Delta \mathbf{u}+(\lambda+\mu)\nabla(\nabla\cdot\mathbf{u}), \tag{2.4}\] where \(\nabla\mathbf{u}:=(\partial_{j}u_{i})_{i,j=1}^{n-1}\), \(\nabla\cdot\mathbf{u}:=\sum_{j=1}^{n}\partial_{j}u_{j}\), and \(\partial_{j}u_{j}=\partial u_{j}/\partial x_{j}\). Furthermore, let \(\boldsymbol{\nu}\in\mathbb{S}^{n-1}\) be the unit normal vector, and we define the traction operator as: \[\mathcal{T}_{\boldsymbol{\nu}}(\mathbf{u})=\boldsymbol{\nu}\cdot(\mathbb{C}: \nabla\mathbf{u}). \tag{2.5}\] Let \(\mathcal{S}\subset\Omega\) be an oriented Lipschitz curve/surface, which can be open or closed. We define \(\mathcal{S}^{\pm}\) as the two sides of \(\mathcal{S}\), with \(\mathcal{S}^{+}\) representing the side where \(\boldsymbol{\nu}\) points outward. We denote the jump of a function or tensor field \(\mathbf{p}\) across \(\mathcal{S}\) as \([\mathbf{p}]_{\mathcal{S}}:=\mathbf{p}|_{\mathcal{S}}^{+}-\mathbf{p}|_{ \mathcal{S}}^{-}\), where \(\mathbf{p}|_{\mathcal{S}}^{\pm}\) represent the non-tangential limits of \(\mathbf{p}\) on \(\mathcal{S}^{\pm}\), respectively. The elastic dislocation problem that we consider allows the occurrence of discontinuities in both the displacement and traction fields, denoted by \(\mathbf{f}\) and \(\mathbf{g}\), respectively. ### Mathematical model In this paper, our main focus is on the following elastostatic system for \(\mathbf{u}\in H^{1}(\Omega\backslash\overline{\mathcal{S}})^{n}\): \[\begin{cases}\mathcal{L}\mathbf{u}(\mathbf{x})=\mathbf{0},&\mathbf{x}\in \Omega\backslash\overline{\mathcal{S}},\\ \mathcal{T}_{\boldsymbol{\nu}}\mathbf{u}\big{|}_{\Sigma_{N}}=\mathbf{0},& \mathbf{u}\big{|}_{\Sigma_{D}}=\mathbf{0},\\ \mathbf{[u]}_{\mathcal{S}}=\mathbf{f},&[\mathcal{T}_{\boldsymbol{\nu}} \mathbf{u}]_{\mathcal{S}}=\mathbf{g}.\end{cases} \tag{2.6}\] Here, \(\partial\Omega=\Sigma_{D}\cup\Sigma_{N}\) represents a Lipschitz partition of \(\partial\Omega\). To investigate both the direct and inverse dislocation problems, we introduce some relevant function spaces for the slips \(\mathbf{f}\) and \(\mathbf{g}\). Noting that the function spaces differ depending on whether \(\mathcal{S}\) is closed or open. **Class 1:** When \(\mathcal{S}\) is closed, we consider slips \(\mathbf{f}\) and \(\mathbf{g}\) that satisfy \[\mathbf{f}\in H^{\frac{1}{2}}(\mathcal{S})^{n}\quad\text{and}\quad\mathbf{g} \in H^{-\frac{1}{2}}(\mathcal{S})^{n}.\] **Class 2:** When \(\mathcal{S}\) is open, we assume that \(\mathbf{f}\) and \(\mathbf{g}\) belong to appropriate weighted spaces with a favorable extension property on such a curve/surface (see, for example, [8, 18, 20, 25]). 
Following [8, 20, 25], we introduce the following space: \[H^{\frac{1}{2}}_{00}(\mathcal{S})^{n}:=\Big{\{}\mathbf{u}\in H^{\frac{1}{2}}_ {0}(\mathcal{S})^{n};\ \varrho^{-\frac{1}{2}}\mathbf{u}\in L^{2}(\mathcal{S})^{n}\Big{\}}\] associated with the following norms \[\|\mathbf{f}\|_{H^{\frac{1}{2}}_{00}(\mathcal{S})^{n}}:=\|\mathbf{f}\|_{H^{ \frac{1}{2}}_{2}(\mathcal{S})^{n}}+\|\varrho^{-\frac{1}{2}}\mathbf{f}\|_{L^{2} (\mathcal{S})^{n}}\quad\text{ for }\mathbf{f}\,\in H^{\frac{1}{2}}_{00}( \mathcal{S})^{n}.\] Here, \(H_{0}^{\frac{1}{2}}(\mathcal{S})^{n}\) is the closure of the space of smooth functions with compact support in \(\mathcal{S}\) with respect to the \(H^{\frac{1}{2}}(\mathcal{S})^{n}\) norm. In addition, \(\mathbf{g}\) belongs to the space \(H_{0}^{-\frac{1}{2}}(\mathcal{S})^{n}\) which is the dual space of \(H_{00}^{\frac{1}{2}}(\mathcal{S})^{n}\). Let \(\varrho\in C^{\infty}(\overline{\mathcal{S}})\) denote a weight function which possesses the certain properties: 1. \(\varrho\) has the same order as the distance to the boundary, that is to say \[\lim_{\mathbf{x}\to\mathbf{x}_{0}}\frac{\varrho(\mathbf{x})}{\mathrm{d}( \mathbf{x},\partial\mathcal{S})}=d\neq 0,\,\forall\,\mathbf{x}_{0}\,\in \partial\mathcal{S},\] 2. \(\varrho(\mathbf{x})\) is positive in \(\mathcal{S}\) and \(\varrho\) vanishes on \(\partial\mathcal{S}\). We want to note that when \(\mathcal{S}\) is a curve in the 2D case, \(\partial\mathcal{S}\) corresponds to the set of endpoints of \(\mathcal{S}\). Similarly, when \(\mathcal{S}\) is a surface in the 3D case, \(\partial\mathcal{S}\) corresponds to the boundary curve of \(\mathcal{S}\). Assume that \(\mathcal{S}\) is open. Let \(\mathcal{S}\) be extended to a closed Lipschitz curve/surface \(\Gamma=\overline{\mathcal{S}}\cup\Gamma_{0}\) satisfying \(\Gamma\cap\partial\Omega=\emptyset\), where \(\Gamma_{0}\) is a curve or a surface linking with the boundary \(\partial\mathcal{S}\) satisfying \(\Gamma_{0}\cap(\mathcal{S}\backslash\partial\mathcal{S})=\emptyset\). Hence, the Lipschitz domain \(\Omega\) can be partitioned into two connected subdomains \(\Omega_{1}\) and \(\Omega_{1}^{c}=\Omega\backslash\overline{\Omega}_{1}\), where \(\partial\Omega_{1}=\Gamma\) and \(\partial\Omega_{1}^{c}=\Gamma\cup\partial\Omega\). Let \(\mathbf{f}\) be continuously extended to \(\widetilde{\mathbf{f}}\in H^{1/2}(\Gamma)^{n}\) by zero on \(\Gamma\backslash\mathcal{S}\) and \(\mathbf{g}\) be continuously extended to \(\widetilde{\mathbf{g}}\in H^{-\frac{1}{2}}(\Gamma)^{n}\) by zero on \(\Gamma\backslash\mathcal{S}\). That is to say \[\widetilde{\mathbf{f}}=\begin{cases}\mathbf{f},&\mathbf{x}\in \mathcal{S},\\ \mathbf{0},&\mathbf{x}\in\Gamma\backslash\overline{\mathcal{S}}\end{cases} \qquad\text{ and }\qquad\widetilde{\mathbf{g}}=\begin{cases}\mathbf{g},& \mathbf{x}\in\mathcal{S},\\ \mathbf{0},&\mathbf{x}\in\Gamma\backslash\overline{\mathcal{S}}.\end{cases} \tag{2.7}\] In particular, when \(\mathcal{S}\) is closed, \(\Gamma=\mathcal{S}\), \(\widetilde{\mathbf{f}}=\mathbf{f}\) and \(\widetilde{\mathbf{g}}=\mathbf{g}\). Regardless of whether \(\mathcal{S}\) is open or closed, we use \[\Omega_{1}:=\mathrm{enclose}(\mathcal{S}) \tag{2.8}\] to denote the aforementioned domain satisfying \(\partial\Omega_{1}=\Gamma\). Similarly, when \(\mathcal{S}\) is closed, let the domain \(\mathrm{enclose}(\mathcal{S})\) satisfy that \(\partial\left(\mathrm{enclose}(\mathcal{S})\right)=\mathcal{S}\). 
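For orientation, a simple admissible weight in the 2D case (an illustrative example, not taken from the cited references) is obtained for the open fault curve \(\mathcal{S}=\{(t,0):\,t\in(-1,1)\}\), with \(\partial\mathcal{S}=\{(\pm 1,0)\}\), by setting \[\varrho(\mathbf{x})=1-x_{1}^{2}=(1-x_{1})(1+x_{1}),\qquad\mathbf{x}=(x_{1},0)\in\mathcal{S}.\] Then \(\varrho>0\) in \(\mathcal{S}\), \(\varrho=0\) on \(\partial\mathcal{S}\), and near the endpoint \((1,0)\) one has \(\mathrm{d}(\mathbf{x},\partial\mathcal{S})=1-x_{1}\), so that \(\varrho(\mathbf{x})/\mathrm{d}(\mathbf{x},\partial\mathcal{S})=1+x_{1}\to 2\neq 0\); hence the two properties above are satisfied. The requirement \(\varrho^{-\frac{1}{2}}\mathbf{f}\in L^{2}(\mathcal{S})^{n}\) then forces \(\mathbf{f}\) to decay at the endpoints fast enough for the zero extension \(\widetilde{\mathbf{f}}\) in (2.7) to remain in \(H^{\frac{1}{2}}(\Gamma)^{n}\), which is the extension property alluded to above.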
We are now in a position to show the existence of a unique weak solution \(\mathbf{u}\in H_{\Sigma_{D},\Sigma_{N}}^{1}(\Omega\backslash\overline{ \mathcal{S}})^{n}\) to Problem (2.6) corresponding to the boundary data on \(\Sigma_{D}\) and \(\Sigma_{N}\). **Theorem 2.1**.: _There exists a unique solution \(\mathbf{u}\in H_{\Sigma_{D},\Sigma_{N}}^{1}(\Omega\backslash\overline{ \mathcal{S}})^{n}\) to Problem (2.6)._ Proof.: In the following, we first prove the well-posedness of Problem (2.6) for the case that \(\mathcal{S}\) is open. When \(\mathcal{S}\) is closed, the corresponding proof can be obtained in a similar way. We shall use the variational technique (cf. [15]) to verify that there exists a unique solution to this problem. From [1, Lemma 3.2] and [3, Remark 3.3], Problem (2.6) can be recast equivalently as the following PDE system for \(\mathbf{u}_{1}\in H^{1}(\Omega_{1})^{n}\) and \(\mathbf{u}_{2}\in H^{1}(\Omega_{1}^{c})^{n}\) such that \[\begin{cases}\mathcal{L}\mathbf{u}_{1}(\mathbf{x})=\mathbf{0}, \quad\mathbf{x}\in\Omega_{1},\\ \mathcal{L}\mathbf{u}_{2}(\mathbf{x})=\mathbf{0},\quad\mathbf{x}\in\Omega_{1} ^{c},\\ \mathcal{T}_{\nu}\mathbf{u}_{2}\big{|}_{\Sigma_{N}}=\mathbf{0},\,\mathbf{u}_{2} \big{|}_{\Sigma_{D}}=\mathbf{0},\\ \mathbf{u}_{2}\big{|}_{\Gamma}-\mathbf{u}_{1}\big{|}_{\Gamma}=\widetilde{ \mathbf{f}},\\ \mathcal{T}_{\nu}\mathbf{u}_{2}\big{|}_{\Gamma}-\mathcal{T}_{\nu}\mathbf{u}_{1} \big{|}_{\Gamma}=\widetilde{\mathbf{g}},\end{cases} \tag{2.9}\] where \(\widetilde{\mathbf{f}}\in H^{1/2}(\Gamma)^{n}\) and \(\widetilde{\mathbf{g}}\in H^{-1/2}(\Gamma)^{n}\) defined by (2.7) are given. Let \(\mathbf{u}_{\widetilde{\mathbf{f}}}\) be the unique solution to the following Dirichlet boundary value problem \[\mathcal{L}\,\mathbf{u}_{\widetilde{\mathbf{f}}}=\mathbf{0}\quad\text{in}\quad \Omega_{1}^{c},\quad\mathbf{u}_{\widetilde{\mathbf{f}}}=\widetilde{\mathbf{f} }\quad\text{on}\quad\Gamma,\quad\mathbf{u}_{\widetilde{\mathbf{f}}}=\mathbf{0} \quad\text{on}\quad\partial\Omega.\] Let us next consider an equivalent variational formulation of Problem (2.9): Find \(\mathbf{w}\in H^{1}_{\Sigma_{D},\Sigma_{N}}(\Omega)^{n}\) satisfying \[\int_{\Omega_{1}}(\mathbb{C}:\nabla\mathbf{w}):\nabla\overline{ \boldsymbol{\phi}}\,\mathrm{d}\mathbf{x}+\int_{\Omega_{1}^{c}}(\mathbb{C}: \nabla\mathbf{w}):\nabla\overline{\boldsymbol{\phi}}\,\mathrm{d}\mathbf{x}+ \int_{\Gamma}\widetilde{\mathbf{g}}\cdot\overline{\boldsymbol{\phi}}\, \mathrm{d}\sigma \tag{2.10}\] \[=\int_{\Sigma_{N}}\mathcal{T}_{\boldsymbol{\nu}}(\mathbf{u}_{ \widetilde{\mathbf{f}}})\cdot\overline{\boldsymbol{\phi}}\,\mathrm{d}\sigma- \int_{\Omega_{1}^{c}}(\mathbb{C}:\nabla\mathbf{u}_{\widetilde{\mathbf{f}}}): \nabla\overline{\boldsymbol{\phi}}\,\mathrm{d}\mathbf{x},\qquad\qquad\forall \boldsymbol{\phi}\in H^{1}_{\Sigma_{D},\Sigma_{N}}(\Omega)^{n}.\] With the help of the first Betti identity, one can readily verify that \(\mathbf{u}_{1}:=\mathbf{w}\big{|}_{\Omega_{1}}\) and \(\mathbf{u}_{2}:=\mathbf{w}\big{|}_{\Omega_{1}^{c}}+\mathbf{u}_{\widetilde{ \mathbf{f}}}\) satisfy Problem (2.9). Conversely, let \((\mathbf{u}_{1},\mathbf{u}_{2})\) be a solution to Problem (2.9) and set \(\mathbf{w}:=\mathbf{u}_{1}\) in \(\Omega_{1}\) and \(\mathbf{w}:=\mathbf{u}_{2}-\mathbf{u}_{\widetilde{\mathbf{f}}}\) in \(\Omega_{1}^{c}\); multiplying the equations in (2.9) by a test function and using the transmission conditions, one can directly show that \(\mathbf{w}\) belongs to \(H^{1}_{\Sigma_{D},\Sigma_{N}}(\Omega)^{n}\) and fulfils (2.10). 
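For the reader's convenience, the first Betti identity invoked above is nothing more than the divergence theorem applied to \(\mathbb{C}:\nabla\mathbf{u}\): for a Lipschitz subdomain \(D\) and sufficiently smooth fields, \[\int_{D}(\mathbb{C}:\nabla\mathbf{u}):\nabla\overline{\boldsymbol{\phi}}\,\mathrm{d}\mathbf{x}=\int_{\partial D}\mathcal{T}_{\boldsymbol{\nu}}(\mathbf{u})\cdot\overline{\boldsymbol{\phi}}\,\mathrm{d}\sigma-\int_{D}(\mathcal{L}\mathbf{u})\cdot\overline{\boldsymbol{\phi}}\,\mathrm{d}\mathbf{x},\] with \(\boldsymbol{\nu}\) the exterior unit normal to \(\partial D\). Applying this identity on \(\Omega_{1}\) and \(\Omega_{1}^{c}\) and using the transmission conditions in (2.9) is how the weak formulation (2.10) arises.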
Let \(a(\cdot,\cdot)\) and \(\mathcal{F}(\cdot)\) be defined as follows: \[a(\mathbf{w},\boldsymbol{\phi}) :=\int_{\Omega_{1}}(\mathbb{C}:\nabla\mathbf{w}):\nabla \overline{\boldsymbol{\phi}}\,\mathrm{d}\mathbf{x}+\int_{\Omega_{1}^{c}}( \mathbb{C}:\nabla\mathbf{w}):\nabla\overline{\boldsymbol{\phi}}\,\mathrm{d} \mathbf{x},\] \[\mathcal{F}(\boldsymbol{\phi}) :=-\int_{\Gamma}\widetilde{\mathbf{g}}\cdot\overline{\boldsymbol {\phi}}\,\mathrm{d}\sigma+\int_{\Sigma_{N}}\mathcal{T}_{\boldsymbol{\nu}}( \mathbf{u}_{\widetilde{\mathbf{f}}})\cdot\overline{\boldsymbol{\phi}}\, \mathrm{d}\sigma-\int_{\Omega_{1}^{c}}(\mathbb{C}:\nabla\mathbf{u}_{ \widetilde{\mathbf{f}}}):\nabla\overline{\boldsymbol{\phi}}\,\mathrm{d} \mathbf{x}.\] Hence, we can rewrite (2.10) as the problem of finding \(\mathbf{w}\in H^{1}_{\Sigma_{D},\Sigma_{N}}(\Omega)^{n}\) such that \[a(\mathbf{w},\boldsymbol{\phi})=\mathcal{F}(\boldsymbol{\phi})\qquad\text{for all}\quad\boldsymbol{\phi}\in H^{1}_{\Sigma_{D},\Sigma_{N}}(\Omega)^{n}.\] It is easy to see that \(\mathcal{F}\) is a continuous linear functional from \(H^{1}_{\Sigma_{D},\Sigma_{N}}(\Omega)^{n}\) to \(\mathbb{C}\). Combining Korn's inequality (cf. [18]) with the uniform strong convexity of \(\mathbb{C}\), we see that the bilinear form \(a(\cdot,\cdot)\) is strictly coercive and bounded. As a consequence of the Lax-Milgram Lemma, there exists a unique solution \(\mathbf{w}\in H^{1}_{\Sigma_{D},\Sigma_{N}}(\Omega)^{n}\) of (2.10) such that \[\|\mathbf{w}\|_{H^{1}_{\Sigma_{D},\Sigma_{N}}(\Omega)^{n}} \leq\Big{\|}\mathbf{w}\big{|}_{\Omega_{1}}\Big{\|}_{H^{1}(\Omega _{1})^{n}}+\Big{\|}\mathbf{w}\big{|}_{\Omega_{1}^{c}}\Big{\|}_{H^{1}(\Omega_{1} ^{c})^{n}}\leq\|\mathcal{F}\|\leq C\big{(}\|\widetilde{\mathbf{f}}\|_{H^{\frac {1}{2}}(\Gamma)^{n}}+\|\widetilde{\mathbf{g}}\|_{H^{-\frac{1}{2}}(\Gamma)^{n}} \big{)}\] \[\leq C\big{(}\|\mathbf{f}\|_{H^{\frac{1}{2}}_{00}(\mathcal{S})^{n }}+\|\mathbf{g}\|_{H^{-\frac{1}{2}}_{0}(\mathcal{S})^{n}}\big{)}.\] The above estimate immediately implies that there exists a unique solution \(\mathbf{u}\in H^{1}_{\Sigma_{D},\Sigma_{N}}(\Omega\backslash\overline{\mathcal{S}})^{n}\) to Problem (2.6), and \(\mathbf{u}\) can be estimated by \(\mathbf{f}\) and \(\mathbf{g}\) with respect to the \(H^{\frac{1}{2}}_{00}(\mathcal{S})^{n}\) norm and the \(H^{-\frac{1}{2}}_{0}(\mathcal{S})^{n}\) norm, respectively. The proof is complete. ## 3. The inverse problem and main results In this section, we are devoted to studying the uniqueness results of the inverse dislocation problem, which consists in identifying the dislocation \(\mathcal{S}\) and the slips \(\mathbf{f}\) and \(\mathbf{g}\) over \(\mathcal{S}\) from observation data on an open set \(\Sigma_{0}\subset\Sigma_{N}\). For our study, we formulate the inverse problem as \[\Lambda_{\mathcal{S};\mathbf{f},\mathbf{g}}=\mathbf{u}\big{|}_{\Sigma_{0}},\] where \(\mathbf{u}\) is the solution to Problem (2.6). That is, \(\Lambda_{\mathcal{S};\mathbf{f},\mathbf{g}}\) contains the elastic deformation data caused by the dislocation \((\mathcal{S};\mathbf{f},\mathbf{g})\) and observed on \(\Sigma_{0}\subset\Sigma_{N}\). The inverse problem we are devoted to can be formulated as \[\Lambda_{\mathcal{S};\mathbf{f},\mathbf{g}}=\mathbf{u}\big{|}_{\Sigma_{0}}\ \longrightarrow\ \mathcal{S},\,\mathbf{f},\,\mathbf{g}. 
\tag{3.1}\] The uniqueness results can be proved under some assumptions about the geometry of the dislocation \(\mathcal{S}\) and _a priori_ information about the Lame parameters \(\lambda(\mathbf{x})\) and \(\mu(\mathbf{x})\) of the elastic solid \(\Omega\), where \(\lambda(\mathbf{x})\) and \(\mu(\mathbf{x})\) are real constants and satisfy the strong convexity condition (2.2). In order to describe the geometry of \(\mathcal{S}\), we next introduce some notations for the geometric setup; see Fig. 2 for a schematic illustration. Given \(\mathbf{x}_{c}\in\mathbb{R}^{2}\) and constants \(\theta_{m}\), \(\theta_{M}\in(-\pi,\pi)\) such that \(\theta_{M}-\theta_{m}\in(0,\pi)\), we consider the following open sector \[\mathcal{K}_{\mathbf{x}_{c}}\,=\Big{\{}\mathbf{x}\in\mathbb{R}^{2}\big{|}\, \mathbf{x}=\mathbf{x}_{c}+(r\cos\theta,r\sin\theta)^{\top},\ \theta_{m}<\theta<\theta_{M},\ r>0\Big{\}} \tag{3.2}\] with boundaries \[\Gamma^{+}_{\mathbf{x}_{c}}=\Big{\{}\mathbf{x}\in\mathbb{R}^{2} \big{|}\,\mathbf{x}=\mathbf{x}_{c}+(r\cos\theta_{M},r\sin\theta_{M})^{\top},\, r>0\Big{\}}\,,\] \[\Gamma^{-}_{\mathbf{x}_{c}}=\Big{\{}\mathbf{x}\in\mathbb{R}^{2} \big{|}\,\mathbf{x}=\mathbf{x}_{c}+(r\cos\theta_{m},r\sin\theta_{m})^{\top},\, r>0\Big{\}}\,.\] The point \(\mathbf{x}_{c}\) is said to be a planar corner point with opening angle \(\theta_{M}-\theta_{m}\) and boundaries \(\Gamma^{\pm}_{\mathbf{x}_{c}}\). Let \[\mathcal{C}_{\mathbf{x}_{c},h} :=\mathcal{K}_{\mathbf{x}_{c}}\cap B_{h}(\mathbf{x}_{c}),\ \ \ \ \Gamma^{\pm}_{\mathbf{x}_{c},h}:=\Gamma^{\pm}_{\mathbf{x}_{c}}\cap B_{h}( \mathbf{x}_{c}),\] \[\Lambda^{\mathbf{x}_{c}}_{h} :=\mathcal{K}_{\mathbf{x}_{c}}\cap\partial B_{h}(\mathbf{x}_{c}), \ \ \ \Sigma_{\mathbf{x}_{c}}:=\mathcal{C}_{\mathbf{x}_{c},h}\backslash\mathcal{C}_{ \mathbf{x}_{c},h/2}, \tag{3.3}\] where \(B_{h}(\mathbf{x}_{c})\) denotes an open disk centered at \(\mathbf{x}_{c}\) of radius \(h\in\mathbb{R}_{+}\). For the sake of brevity, we use \(B_{h}\), \(\mathcal{K}\), \(\Gamma^{\pm}\), \(\mathcal{C}_{h}\), \(\Gamma^{\pm}_{h}\), \(\Lambda_{h}\) and \(\Sigma\) to represent the corresponding notations at the origin. Figure 2. Schematic illustration of a 2D corner/3D edge corner. ### Main uniqueness results Before giving the uniqueness results in Theorems 3.1-3.3, we introduce some admissible conditions about the dislocation for our subsequent study. **Definition 3.1**.: Let \(\Omega\) be a bounded Lipschitz domain in \(\mathbb{R}^{n}(n=2,3)\). We say that \((\mathcal{S};\mathbf{f},\mathbf{g})\) belongs to the admissible class \(\mathcal{T}\) if the following conditions are fulfilled: 1. In \(\mathbb{R}^{2}\), \(\mathcal{S}\subset\mathbb{R}^{2}\) is an oriented Lipschitz curve. There exists at least one planar corner point \(\mathbf{x}_{c}\) on \(\mathcal{S}\) such that \(\Gamma^{\pm}_{\mathbf{x}_{c},h}\subset\mathcal{S}\), where \(\Gamma^{\pm}_{\mathbf{x}_{c},h}=\partial\mathcal{K}_{\mathbf{x}_{c}}\cap B_{h} (\mathbf{x}_{c})\) and \(\mathcal{C}_{\mathbf{x}_{c},h}=B_{h}(\mathbf{x}_{c})\cap\mathcal{K}_{\mathbf{ x}_{c}}=B_{h}(\mathbf{x}_{c})\cap\Omega_{1}\). Here, \(\mathcal{K}_{\mathbf{x}_{c}}\) and \(B_{h}(\mathbf{x}_{c})\) are defined in (3.2) and \(\Omega_{1}=\mathrm{enclose}(\mathcal{S})\) is given in (2.8). 2. In \(\mathbb{R}^{3}\), \(\mathcal{S}\subset\mathbb{R}^{3}\) is an oriented Lipschitz surface. 
Suppose that \(\mathcal{S}\) possesses at least one 3D edge corner \(\mathbf{x}_{c}=(\mathbf{x}_{c}^{\prime},x_{3})^{\top}\in\mathbb{R}^{3}\), where \(\mathbf{x}_{c}^{\prime}\in\mathbb{R}^{2}\) is a planar corner point. In other words, for sufficiently small positive numbers \(h\) and \(M\), we have that \(\Gamma^{\pm}_{\mathbf{x}_{c}^{\prime},h}\times(-M,M)\subset\mathcal{S}\) and \(B^{\prime}_{h}(\mathbf{x}_{c}^{\prime})\times(-M,M)\cap\Omega_{1}=\mathcal{C}^ {\prime}_{\mathbf{x}_{c},h}\times(-M,M)\), where \(\Gamma^{\pm}_{\mathbf{x}_{c}^{\prime},h}\) are two edges of a sectorial corner at \(\mathbf{x}_{c}^{\prime}\) and \(B^{\prime}_{h}(\mathbf{x}_{c}^{\prime})\) is an open disk centered at \(\mathbf{x}_{c}^{\prime}\) with radius \(h\), which are defined in (3.3). The opening angle of the sectorial corner at \(\mathbf{x}_{c}^{\prime}\) is referred to as the opening angle of the corresponding 3D edge corner. 3. In \(\mathbb{R}^{2}\), let \(\mathbf{f}_{j}:=\mathbf{f}\big{|}_{\Gamma^{j}_{\mathbf{x}_{c},h}}\) and \(\mathbf{g}_{j}:=\mathbf{g}\big{|}_{\Gamma^{j}_{\mathbf{x}_{c},h}}\) satisfy \(\mathbf{f}_{j}\in C^{1,\alpha_{j}}(\Gamma^{j}_{\mathbf{x}_{c},h})^{2}\) and \(\mathbf{g}_{j}\in C^{\beta_{j}}(\Gamma^{j}_{\mathbf{x}_{c},h})^{2}\) with \(\alpha_{j},\beta_{j}\) being in \((0,1)\) and \(j=+,-\), where \(\mathbf{x}_{c}\) and \(\Gamma^{\pm}_{\mathbf{x}_{c},h}\) are the ones in (1). 4. In \(\mathbb{R}^{3}\), let \(\mathbf{f}_{j}:=\mathbf{f}\big{|}_{\Gamma^{\prime j}_{\mathbf{x}_{c}^{\prime},h }\times(-M,M)}\) and \(\mathbf{g}_{j}=\mathbf{g}\big{|}_{\Gamma^{\prime j}_{\mathbf{x}_{c}^{\prime}, h}\times(-M,M)}\) fulfill that \(\mathbf{f}_{j}\in C^{1,\alpha_{j}}(\Gamma^{\prime j}_{\mathbf{x}_{c}^{\prime}, h}\times(-M,M))^{3}\) and \(\mathbf{g}_{j}\in C^{\beta_{j}}(\Gamma^{\prime j}_{\mathbf{x}_{c}^{\prime},h} \times(-M,M))^{3}\) with \(\alpha_{j},\beta_{j}\) being in \((0,1)\) and \(j=+,-\), where \(\mathbf{x}_{c}^{\prime}\) and \(\Gamma^{\prime\pm}_{\mathbf{x}_{c}^{\prime},h}\) are the ones in (2). Furthermore, \(\mathbf{f}_{j}\) and \(\mathbf{g}_{j}\) are independent of \(x_{3}\). 5. Let \(\mathcal{V}_{\mathcal{S}}\) signify the set of 2D corners/3D edge corners of \(\mathcal{S}\). In \(\mathbb{R}^{3}\), denote \(\mathbf{g}=(\mathbf{g}^{(1,2)},g^{3})\). Either of the following assumptions \(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}\) is satisfied: Assumption \(\mathcal{A}_{1}:\forall\mathbf{x}_{c}\in\mathcal{V}_{\mathcal{S}}\), \(\mathbf{f}_{-}(\mathbf{x}_{c})\neq\mathbf{f}_{+}(\mathbf{x}_{c})\); Assumption \(\mathcal{A}_{2}:\forall\mathbf{x}_{c}\in\mathcal{V}_{\mathcal{S}}\), \(\mathbf{g}_{+}(\mathbf{x}_{c})\neq W_{\mathbf{x}_{c}}\mathbf{g}_{-}( \mathbf{x}_{c})\) if \(n=2\), and \(\mathbf{g}_{+}^{(1,2)}(\mathbf{x}_{c})\neq W_{\mathbf{x}_{c}^{\prime}}\mathbf{g}_ {-}^{(1,2)}(\mathbf{x}_{c})\) if \(n=3\), or \(g_{+}^{3}(\mathbf{x}_{c})\neq 0\) and \(g_{-}^{3}(\mathbf{x}_{c})\neq 0\), where \(W_{\mathbf{x}_{c}}=\begin{bmatrix}-\cos\theta_{\mathbf{x}_{c}}&-\sin\theta_{ \mathbf{x}_{c}}\\ -\sin\theta_{\mathbf{x}_{c}}&\cos\theta_{\mathbf{x}_{c}}\end{bmatrix}\) and \(W_{\mathbf{x}_{c}^{\prime}}=\begin{bmatrix}-\cos\theta_{\mathbf{x}_{c}^{\prime}}&- \sin\theta_{\mathbf{x}_{c}^{\prime}}\\ -\sin\theta_{\mathbf{x}_{c}^{\prime}}&\cos\theta_{\mathbf{x}_{c}^{\prime}}\end{bmatrix}\). Here, \(\theta_{\mathbf{x}_{c}}\) and \(\theta_{\mathbf{x}_{c}^{\prime}}\) denote the opening angles at the 2D corner and the 3D edge corner point \(\mathbf{x}_{c}\) of \(\mathcal{S}\), respectively. 
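Although not needed for the analysis, the algebraic structure of \(W_{\mathbf{x}_{c}}\) is easy to check numerically. The following minimal Python snippet (ours, purely illustrative, with an arbitrarily chosen angle) confirms that \(W_{\mathbf{x}_{c}}\) is a symmetric, involutive reflection with determinant \(-1\); in particular this explains why \(\det(W)\neq 0\) in the proof of Proposition 4.1 below.

```python
import numpy as np

def W(theta):
    """The matrix W_{x_c} from Assumption A_2 with opening angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[-c, -s],
                     [-s,  c]])

theta = 2.0 * np.pi / 5.0        # an arbitrary opening angle in (0, pi)
M = W(theta)

print(np.allclose(M, M.T))                  # True: W is symmetric
print(np.allclose(M @ M, np.eye(2)))        # True: W is an involution, W^2 = I
print(np.isclose(np.linalg.det(M), -1.0))   # True: det(W) = -1, hence det(W) != 0
```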
_Remark 3.1_.: The admissible conditions that \(\mathcal{S}\) possesses at least one planar corner/3D edge corner in Definition 3.1 can be easily fulfilled in generic physical scenarios. For example, \(\mathcal{S}\) is a piecewise linear fault curve/surface. In what follows, \((\mathcal{S};\mathbf{f},\mathbf{g})\) is said to be an admissible dislocation with slip vectors \(\mathbf{f}\) and \(\mathbf{g}\) if it fulfils the conditions in Definition 3.1. The uniqueness results in [1, 2] for determining \(\mathcal{S}\) only focused on the case that \(\mathcal{S}\) is open and there is only the displacement discontinuity on the fault curve/surface. The methodology developed in [1, 2] cannot deal with the case that \(\mathcal{S}\) is closed. In our study, we can handle these two situations and also allow the occurrence of discontinuities in both the displacement and traction fields on the dislocation. In Theorem 3.1, we obtain a local uniqueness result for the inverse dislocation problem (3.1) by using a single displacement measurement on \(\Sigma_{0}\), whose proof is postponed to Section 5. **Theorem 3.1**.: _Let \((\mathcal{S}_{1};\mathbf{f}^{1},\mathbf{g}^{1})\) and \((\mathcal{S}_{2};\mathbf{f}^{2},\mathbf{g}^{2})\) belong to \(\mathcal{T}\). Assume that \(\operatorname{supp}(\mathbf{g}^{i})=\operatorname{supp}(\mathbf{f}^{i})= \overline{\mathcal{S}}_{i}\) and \(\mathbf{u}_{i}\) is the unique solution to Problem (2.6) in \(H^{1}_{\Sigma_{D},\Sigma_{N}}(\Omega\backslash\overline{\mathcal{S}_{i}})\) with respect to \(\mathbf{g}=\mathbf{g}^{i}\), \(\mathbf{f}=\mathbf{f}^{i}\) and \(\mathcal{S}=\mathcal{S}_{i}\), respectively, for \(i=1,2\). If \(\mathbf{u}_{1}\big{|}_{\Sigma_{0}}=\mathbf{u}_{2}\big{|}_{\Sigma_{0}}\), then \(\mathcal{S}_{1}\Delta\mathcal{S}_{2}:=(\mathcal{S}_{1}\backslash\mathcal{S}_{ 2})\cup(\mathcal{S}_{2}\backslash\mathcal{S}_{1})\) cannot contain a planar corner/3D edge corner \(\mathbf{x}_{c}\)._ In Theorems 3.2 and 3.3, we derive the global uniqueness results for the inverse dislocation problem (3.1) on the determination of the dislocation surface \(\mathcal{S}\) and \(\mathbf{f}\), \(\mathbf{g}\) from a given single displacement measurement on \(\Sigma_{0}\). The proofs of Theorems 3.2 and 3.3 are postponed to Section 5. We first consider the case that \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\) are closed. **Theorem 3.2**.: _Let \((\mathcal{S}_{1};\mathbf{f}^{1},\mathbf{g}^{1})\) and \((\mathcal{S}_{2};\mathbf{f}^{2},\mathbf{g}^{2})\) belong to \(\mathcal{T}\), where \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\) are closed. Assume that \(\Omega_{1}=\operatorname{enclose}(\mathcal{S}_{1})\) and \(\Omega_{2}=\operatorname{enclose}(\mathcal{S}_{2})\) are two convex polygons in \(\mathbb{R}^{2}\) or two convex polyhedra in \(\mathbb{R}^{3}\), where \(\mathcal{S}_{i}=\bigcup_{k=1}^{m_{i}}\Pi_{i,k}\)\((i=1,2)\). Here \(\Pi_{i,k}\) is the \(k\)-th edge or surface of the polygon or polyhedron \(\Omega_{i}\). Let \(\mathbf{u}_{i}\) be the unique solution to Problem (2.6) in \(H^{1}_{\Sigma_{D},\Sigma_{N}}(\Omega\backslash\overline{\mathcal{S}}_{i})\) with respect to \(\mathbf{g}^{i}\) and \(\mathbf{f}^{i}\), respectively, for \(i=1,2\). If \(\left.\mathbf{u}_{1}\right|_{\Sigma_{0}}=\mathbf{u}_{2}\big{|}_{\Sigma_{0}}\), then \(\mathcal{S}_{1}=\mathcal{S}_{2}\), namely \(m_{1}=m_{2}:=m,\ \Pi_{1,k}=\Pi_{2,k}:=\Pi_{k}\)\((k=1,\dots,m)\). Furthermore, assume that \(\mathbf{f}^{i}\) and \(\mathbf{g}^{i}\) are piecewise-constant functions on \(\Pi_{k}\). 
Then we have_ \[\left.(\mathbf{f}^{1}-\mathbf{f}^{2})\right|_{\Pi_{k+1}}=\left(\mathbf{f}^{1}- \mathbf{f}^{2}\right)\big{|}_{\Pi_{k}},\ (\mathbf{g}^{1}-\mathbf{g}^{2})\big{|}_{\Pi_{k+1}}=W_{\mathbf{x}_{k}}( \mathbf{g}^{1}-\mathbf{g}^{2})\big{|}_{\Pi_{k}},\,k=1,\cdots,m-1 \tag{3.4}\] _and_ \[\left(\mathbf{f}^{1}-\mathbf{f}^{2}\right)\big{|}_{\Pi_{1}}=\left(\mathbf{f}^ {1}-\mathbf{f}^{2}\right)\big{|}_{\Pi_{m}},\quad\left(\mathbf{g}^{1}-\mathbf{ g}^{2}\right)\big{|}_{\Pi_{1}}=W_{\mathbf{x}_{m}}(\mathbf{g}^{1}-\mathbf{g}^{2}) \big{|}_{\Pi_{m}}, \tag{3.5}\] _where \(W_{\mathbf{x}_{k}}=\begin{bmatrix}-\cos\theta_{\mathbf{x}_{k}}&-\sin\theta_{ \mathbf{x}_{k}}\\ -\sin\theta_{\mathbf{x}_{k}}&\cos\theta_{\mathbf{x}_{k}}\end{bmatrix}\) is defined similarly to \(W_{\mathbf{x}_{c}}\) in Definition 3.1 and \(\theta_{\mathbf{x}_{k}}\) corresponds to the opening angle at the 2D corner/3D edge corner \(\mathbf{x}_{k}\), \(k=1,2,\cdots,m\)._ In Theorem 3.3, we investigate the unique determination of a piecewise curve or a piecewise surface \(\mathcal{S}\), where \(\mathcal{S}\) is open. Before that, we introduce the corresponding definition. **Definition 3.2**.: Suppose that \(\mathcal{S}\subset\mathbb{R}^{n}\)\((n=2,3)\) is open. Under rigid motion, let \(\mathcal{S}\subset\mathbb{R}^{2}\) be the graph of a function \(f(x_{1})\), where \(x_{1}\in[a,b]\). If \([a,b]=\cup_{i=1}^{\ell-1}[a_{i},a_{i+1}]\) with \(\ell\geq 3\), \(a_{i}<a_{i+1}\), \(a_{1}=a\) and \(a_{\ell}=b\), and \(f\) is a linear polynomial on each piece \([a_{i},a_{i+1}]\), then \(\mathcal{S}\subset\mathbb{R}^{2}\) is referred to as a piecewise curve. Under rigid motion, let \(\mathcal{S}\subset\mathbb{R}^{3}\) be the graph of a function \(f(x_{1},x_{3})\), where \((x_{1},x_{3})\in[a_{1},a_{2}]\times[b_{1},b_{2}]\). If, for any fixed \(c\in[b_{1},b_{2}]\), \(f(x_{1},c)=g(x_{1})\), where the graph of \(g(x_{1})\) is a piecewise curve as in the 2D case above, then \(\mathcal{S}\subset\mathbb{R}^{3}\) is referred to as a piecewise surface. See Fig. 3 for a schematic illustration. **Theorem 3.3**.: _Assume that \((\mathcal{S}_{1};\mathbf{f}^{1},\mathbf{g}^{1})\) and \((\mathcal{S}_{2};\mathbf{f}^{2},\mathbf{g}^{2})\) belong to \(\mathcal{T}\), where \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\) are open. Let \(\mathbf{u}_{i}\) be the unique solution to Problem (2.6) in \(H^{1}_{\Sigma_{D},\Sigma_{N}}(\Omega\backslash\overline{\mathcal{S}_{i}})\) with respect to \(\mathbf{g}^{i}\), \(\mathbf{f}^{i}\) and \(\mathcal{S}_{i}\), where \(\mathbf{f}^{i}\in H^{\frac{1}{2}}_{00}(\mathcal{S}_{i})\) and \(\mathbf{g}^{i}\in H^{-\frac{1}{2}}_{0}(\mathcal{S}_{i})\) with \(\operatorname{supp}(\mathbf{g}^{i})=\operatorname{supp}(\mathbf{f}^{i})= \overline{\mathcal{S}}_{i}\), \(i=1,2\). Suppose that the curves/surfaces \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\) are piecewise curves in \(\mathbb{R}^{2}\) or piecewise surfaces in \(\mathbb{R}^{3}\). If_ \[\mathbf{u}_{1}\big{|}_{\Sigma_{0}}=\mathbf{u}_{2}\big{|}_{\Sigma_{0}}, \tag{3.6}\] _then_ \[\mathcal{S}_{1}=\mathcal{S}_{2},\quad\mathbf{f}^{1}=\mathbf{f}^{2}\,\text{ and }\,\mathbf{g}^{1}=\mathbf{g}^{2}.\] ## 4. Local results of slips \(\mathbf{f}\) and \(\mathbf{g}\) at the corners on \(\mathcal{S}\) In this section, we shall proceed to derive several auxiliary propositions which describe the local results of slip vectors \(\mathbf{f}\) and \(\mathbf{g}\) at the corner points of \(\mathcal{S}\). 
These auxiliary results play a key role in establishing our main results in Theorems 3.1, 3.2 and 3.3. To derive these propositions, we shall introduce two kinds of the so-called CGO (complex geometrical optics) solutions satisfying different Lame/acoustic equations. ### Local results for 2D case This subsection is devoted to analyzing the local behaviours of slip vectors \(\mathbf{f}\) and \(\mathbf{g}\) around a planar corner. Next, we recall the first kind of CGO solution \(\mathbf{u}_{0}\) introduced in [6], which is given by \[\mathbf{u}_{0}(\mathbf{x})=\begin{pmatrix}\exp(-s\sqrt{z})\\ \mathrm{i}\exp(-s\sqrt{z})\end{pmatrix}:=\begin{pmatrix}u_{1}^{0}(\mathbf{x}) \\ u_{2}^{0}(\mathbf{x})\end{pmatrix}\quad\text{in}\quad\Omega,\quad\mathbf{x}=(x_ {1},x_{2})^{\top}, \tag{4.1}\] where \(z=x_{1}+\mathrm{i}\,x_{2}\), \(s\in\mathbb{R}_{+}\) and \(\Omega\cap\{(x_{1},0)\in\mathbb{R}^{2}\,|\,x_{1}\leq 0\}=\emptyset\). Here the complex square root of \(z\) is defined as \[\sqrt{z}=\sqrt{|z|}\left(\cos\frac{\theta}{2}+\mathrm{i}\sin\frac{\theta}{2} \right),\] where \(-\pi<\theta<\pi\) is the argument of \(z\). Furthermore, one can verify that \(\mathcal{L}\,\mathbf{u}_{0}=\mathbf{0}\) in \(\Omega\). Some significant properties and regularity results of the CGO solution given in (4.1) need to be reviewed, which are beneficial for the subsequent analysis. Figure 3. Schematic illustration of a piecewise curve or a piecewise surface. **Lemma 4.1**.: _[_6_, Proposition 3.1]_ _Let \(\mathbf{u}_{0}\) be given as above. Then we have the following properties_ \[\int_{\mathcal{K}}u_{1}^{0}(\mathbf{x})\mathrm{d}\mathbf{x}=6\mathrm{i}(e^{-2 \theta_{M}\mathrm{i}}-e^{-2\theta_{m}\mathrm{i}})s^{-4} \tag{4.2}\] _and_ \[\int_{\mathcal{K}}|u_{j}^{0}(\mathbf{x})||\mathbf{x}|^{\alpha}\mathrm{d}\mathbf{x} \leq\frac{2(\theta_{M}-\theta_{m})\Gamma(2\alpha+4)}{\delta_{\mathcal{K}}^{2 \alpha+4}}s^{-2\alpha-4},\quad j=1,2, \tag{4.3}\] _where \(\mathcal{K}\) is defined in Section 3, \(\alpha,h>0\), \(\delta_{\mathcal{K}}=\min\limits_{\theta_{m}<\theta<\theta_{M}}\cos\frac{ \theta}{2}\) is a positive constant._ The following critical estimate can be obtained by using the Laplace transform and the decay of the exponential function. **Lemma 4.2**.: _For any \(\alpha>0\), if \(\omega(\theta)>0\), then we have_ \[\int_{0}^{h}r^{\alpha}e^{-s\sqrt{r}\,\omega(\theta)}\,\mathrm{d}r=\mathcal{O}(s^{-2\alpha-2})\quad\text{as}\quad s\to+\infty.\] We next recall some critical lemmas about the regularity of the CGO solution \(\mathbf{u}_{0}\) defined in (4.1). **Lemma 4.3**.: _[_13_, Lemma 2.3]_ _Let \(\mathcal{C}_{\mathbf{x}_{c},h}\) be defined in (3.3) and \(\mathbf{u}_{0}\) be given in (4.1). Then \(\mathbf{u}_{0}\in H^{1}(\mathcal{C}_{\mathbf{x}_{c},h})^{2}\) and \(\mathcal{L}\,\mathbf{u}_{0}=\mathbf{0}\) in \(\mathcal{C}_{\mathbf{x}_{c},h}\). Furthermore, it holds that_ \[\left\|\mathbf{u}_{0}\right\|_{L^{2}(\mathcal{C}_{\mathbf{x}_{c},h})^{2}}\leq \sqrt{\theta_{M}-\theta_{m}}e^{-s\sqrt{\Theta}}h\] _and_ \[\left\|\left|\mathbf{x}\right|^{\alpha}\mathbf{u}_{0}\right\|_{L^{2}( \mathcal{C}_{\mathbf{x}_{c},h})^{2}}\leq s^{-2(\alpha+1)}\frac{2\sqrt{(\theta _{M}-\theta_{m})\Gamma(4\alpha+4)}}{(2\delta_{\mathcal{K}})^{2\alpha+2}},\] _where \(\Theta\in[0,h]\) and \(\delta_{\mathcal{K}}\) is defined in (4.3)._ **Lemma 4.4**.: _[_12_, Lemma 2.8]_ _Let \(\Gamma_{h}^{\pm}\) and \(u_{1}^{0}(\mathbf{x})\) be respectively defined in (3.3) and (4.1) with \(\mathbf{x}_{c}\) coinciding with the origin. 
We have_ \[\int_{\Gamma_{h}^{+}}u_{1}^{0}(\mathbf{x})\mathrm{d}\sigma=2s^{- 2}\left(\mu(\theta_{M})^{-2}-\mu(\theta_{M})^{-2}e^{-s\sqrt{h}\mu(\theta_{M})}\right.\] \[\left.-\mu(\theta_{M})^{-1}s\sqrt{h}e^{-s\sqrt{h}\mu(\theta_{M})} \right),\] \[\int_{\Gamma_{h}^{-}}u_{1}^{0}(\mathbf{x})\mathrm{d}\sigma=2s^{- 2}\left(\mu(\theta_{m})^{-2}-\mu(\theta_{m})^{-2}e^{-s\sqrt{h}\mu(\theta_{m})}\right.\] \[\left.-\mu(\theta_{m})^{-1}s\sqrt{h}e^{-s\sqrt{h}\mu(\theta_{m})} \right),\] _where \(\mu(\theta):=\cos(\theta/2)+\mathrm{i}\sin(\theta/2)=e^{\mathrm{i}\theta/2}\)._ To prove our main Theorems 3.1, 3.2 and 3.3, the next critical auxiliary proposition is needed. Since \(\mathcal{L}\) is invariant under rigid motion, in what follows we assume that the underlying corner point \(\mathbf{x}_{c}\) coincides with the origin. **Proposition 4.1**.: _Under the same setup for \(\mathcal{C}_{h}\) and \(\Gamma_{h}^{\pm}\) given in (3.3) with \(\mathbf{x}_{c}\) coinciding with the origin, let \(\mathbf{v}\in H^{1}(\mathcal{C}_{h})^{2}\) and \(\mathbf{w}\in H^{1}(\mathcal{C}_{h})^{2}\) satisfy_ \[\begin{cases}\mathcal{L}\,\mathbf{v}=\mathbf{0},&\mathcal{L}\,\mathbf{w}= \mathbf{0}&\text{in}\quad\mathcal{C}_{h},\\ \mathbf{v}-\mathbf{w}=\mathbf{f}_{+},\,\mathcal{T}_{\boldsymbol{\nu}}\, \mathbf{v}-\mathcal{T}_{\boldsymbol{\nu}}\,\mathbf{w}=\mathbf{g}_{+}&\text{on} \quad\Gamma_{h}^{+},\\ \mathbf{v}-\mathbf{w}=\mathbf{f}_{-},\,\mathcal{T}_{\boldsymbol{\nu}}\, \mathbf{v}-\mathcal{T}_{\boldsymbol{\nu}}\,\mathbf{w}=\mathbf{g}_{-}&\text{on} \quad\Gamma_{h}^{-}\end{cases}\] _with \(\mathbf{f}_{j}\in H^{\frac{1}{2}}(\Gamma_{h}^{j})^{2}\cap C^{1,\alpha_{j}}(\Gamma_ {h}^{j})^{2}\) and \(\mathbf{g}_{j}\in H^{-\frac{1}{2}}(\Gamma_{h}^{j})^{2}\cap C^{\beta_{j}}(\Gamma_ {h}^{j})^{2}\), where \(j=+,-\) and \(\alpha_{+},\alpha_{-},\beta_{+},\beta_{-}\in(0,1)\). Then we have the following continuities at the vertex point, that is to say,_ \[\mathbf{g}_{+}(\mathbf{0})=W\,\mathbf{g}_{-}(\mathbf{0})\quad\text{and}\quad \mathbf{f}_{+}(\mathbf{0})=\mathbf{f}_{-}(\mathbf{0}), \tag{4.4}\] _where \(W=\begin{bmatrix}-\cos(\theta_{M}-\theta_{m}),&-\sin(\theta_{M}-\theta_{m})\\ -\sin(\theta_{M}-\theta_{m}),&+\cos(\theta_{M}-\theta_{m})\end{bmatrix}.\)_ Proof.: Thanks to the symmetric roles of \((\Re\mathbf{v},\Re\mathbf{w})\) and \((\Im\mathbf{v},\Im\mathbf{w})\), we just have to prove that the corresponding results hold for \((\Re\mathbf{v},\Re\mathbf{w})\). By a similar argument, those results remain valid for \((\Im\mathbf{v},\Im\mathbf{w})\), hence for \((\mathbf{v},\mathbf{w})\). Due to Betti's second formula, we have the following integral identity \[\int_{\Gamma_{h}^{+}}\Re\mathbf{g}_{+}\cdot\mathbf{u}_{0}-\mathcal{ T}_{\boldsymbol{\nu}}\mathbf{u}_{0}\cdot\Re\mathbf{f}_{+}\,\mathrm{d}\sigma+ \int_{\Gamma_{h}^{-}}\Re\mathbf{g}_{-}\cdot\mathbf{u}_{0}-\mathcal{T}_{ \boldsymbol{\nu}}\mathbf{u}_{0}\cdot\Re\mathbf{f}_{-}\,\mathrm{d}\sigma\] \[=\int_{\Lambda_{h}}\mathcal{T}_{\boldsymbol{\nu}}\big{(}\Re \mathbf{v}-\Re\mathbf{w}\big{)}\cdot\mathbf{u}_{0}-\mathcal{T}_{\boldsymbol{ \nu}}\mathbf{u}_{0}\cdot\big{(}\Re\mathbf{v}-\Re\mathbf{w}\big{)}\,\mathrm{d}\sigma. 
\tag{4.5}\] Since \(\mathbf{f}_{j}\in C^{1,\alpha_{j}}(\Gamma_{h}^{j})^{2}\) and \(\mathbf{g}_{j}\in C^{\beta_{j}}(\Gamma_{h}^{j})^{2}\) for \(j=+,-\), we have the expansions as follows \[\mathbf{f}_{j}(\mathbf{x}) =\mathbf{f}_{j}(\mathbf{0})+\delta\mathbf{f}_{j}(\mathbf{x}), \quad\big{|}\delta\mathbf{f}_{j}(\mathbf{x})\big{|}\leq A_{j}|\mathbf{x}|^{1+ \alpha_{j}}, \tag{4.6}\] \[\mathbf{g}_{j}(\mathbf{x}) =\mathbf{g}_{j}(\mathbf{0})+\delta\mathbf{g}_{j}(\mathbf{x}), \quad\big{|}\delta\mathbf{g}_{j}(\mathbf{x})\big{|}\leq B_{j}|\mathbf{x}|^{ \beta_{j}}, \tag{4.7}\] where \(A_{j}\) and \(B_{j}\) are positive. From the expression of \(\mathbf{u}_{0}\), one directly computes \[\frac{\partial u_{1}^{0}}{\partial r}=-\frac{s}{2\sqrt{r}}e^{-sr^{1/2}\mu( \theta)+\mathrm{i}\frac{\theta}{2}}\quad\text{and}\quad\frac{\partial u_{1}^{ 0}}{\partial\theta}=-\frac{\mathrm{i}s\sqrt{r}}{2}e^{-sr^{1/2}\mu(\theta)+ \mathrm{i}\frac{\theta}{2}},\] where \(\mu(\cdot)\) is given in Lemma 4.4. Thus, we directly obtain \[\frac{\partial u_{1}^{0}}{\partial x_{1}}=-\frac{s}{2\sqrt{r}}e^{-sr^{1/2}\mu( \theta)-\mathrm{i}\frac{\theta}{2}}\quad\text{and}\quad\frac{\partial u_{1}^{ 0}}{\partial x_{2}}=-\frac{\mathrm{i}s}{2\sqrt{r}}e^{-sr^{1/2}\mu(\theta)- \mathrm{i}\frac{\theta}{2}}.\] Noticing that \(u_{2}^{0}(\mathbf{x})=\mathrm{i}u_{1}^{0}(\mathbf{x})\), we get \[\nabla\mathbf{u}_{0}=-\frac{s}{2\sqrt{r}}e^{-s\sqrt{r}\mu(\theta)-\frac{\theta }{2}\mathrm{i}}\begin{bmatrix}1&\mathrm{i}\\ \mathrm{i}&-1\end{bmatrix}.\] Therefore, one can prove that \[\int_{\Gamma_{h}^{+}}\mathcal{T}_{\boldsymbol{\nu}_{M}}\mathbf{u}_ {0}\,\mathrm{d}\sigma =\int_{\Gamma_{h}^{+}}-\frac{s}{2r^{\frac{1}{2}}}e^{-s\sqrt{r}\mu( \theta_{M})-\mathrm{i}\frac{\theta_{M}}{2}}\begin{bmatrix}1&\mathrm{i}\\ \mathrm{i}&-1\end{bmatrix}\cdot\begin{bmatrix}-\sin\theta_{M}\\ \cos\theta_{M}\end{bmatrix}\mathrm{d}\sigma\] \[=-\frac{\mu s}{2}e^{\mathrm{i}\theta_{M}/2}\begin{bmatrix}\mathrm{i }\\ -1\end{bmatrix}\int_{0}^{h}r^{-\frac{1}{2}}e^{-s\sqrt{r}\mu(\theta_{M})}\mathrm{d}r\] \[=-\mu se^{\mathrm{i}\theta_{M}/2}\begin{bmatrix}\mathrm{i}\\ -1\end{bmatrix}\int_{0}^{\sqrt{h}}e^{-s\,t\,\mu(\theta_{M})}\mathrm{d}t\] \[=\mu\big{(}e^{-s\,\sqrt{h}\,\mu(\theta_{M})}-1\big{)}\begin{bmatrix} \mathrm{i}\\ -1\end{bmatrix}. \tag{4.8}\] By similar arguments, we can derive \[\int_{\Gamma_{h}^{-}}\mathcal{T}_{\boldsymbol{\nu}_{m}}\mathbf{u}_{0}\, \mathrm{d}\sigma=\mu\big{(}e^{-s\sqrt{h}\mu(\theta_{m})}-1\big{)}\begin{bmatrix} -\mathrm{i}\\ 1\end{bmatrix}. 
\tag{4.9}\] From Lemma 4.4, we obtain \[\int_{\Gamma_{h}^{+}}\mathbf{u}_{0}\,\mathrm{d}\sigma=2s^{-2}\left(\mu^{-2}(\theta _{M})-\mu^{-2}(\theta_{M})e^{-s\sqrt{h}\mu(\theta_{M})}-\mu^{-1}(\theta_{M})s \sqrt{h}e^{-s\sqrt{h}\mu(\theta_{M})}\right)\begin{bmatrix}1\\ \mathrm{i}\end{bmatrix}, \tag{4.10}\] \[\int_{\Gamma_{h}^{-}}\mathbf{u}_{0}\,\mathrm{d}\sigma=2s^{-2}\left(\mu^{-2}( \theta_{m})-\mu^{-2}(\theta_{m})e^{-s\sqrt{h}\mu(\theta_{m})}-\mu^{-1}(\theta_ {m})s\sqrt{h}e^{-s\sqrt{h}\mu(\theta_{m})}\right)\begin{bmatrix}1\\ \mathrm{i}\end{bmatrix}. \tag{4.11}\] Substituting (4.8)-(4.11) into (4.5), the following integral identity holds, \[\Re\mathbf{f}_{+}(\mathbf{0})\cdot\mu\Big{(}e^{-s\sqrt{h}\mu( \theta_{M})}-1\Big{)}\begin{bmatrix}\mathrm{i}\\ -1\end{bmatrix}+\Re\mathbf{f}_{-}(\mathbf{0})\cdot\mu\Big{(}e^{-s\sqrt{h}\mu( \theta_{m})}-1\Big{)}\begin{bmatrix}-\mathrm{i}\\ +1\end{bmatrix}\] \[-\Re\mathbf{g}_{+}(\mathbf{0})\cdot\begin{bmatrix}1\\ \mathrm{i}\end{bmatrix}2s^{-2}\mu^{-2}(\theta_{M})-\Re\mathbf{g}_{-}(\mathbf{0 })\cdot\begin{bmatrix}1\\ \mathrm{i}\end{bmatrix}2s^{-2}\mu^{-2}(\theta_{m})=\sum_{j=1}^{7}R_{j}, \tag{4.12}\] where \[R_{1}=-2s^{-2}\Re\mathbf{g}_{+}(\mathbf{0})\cdot\begin{bmatrix}1 \\ \mathrm{i}\end{bmatrix}\Big{(}\mu^{-1}(\theta_{M})\,s\sqrt{h}\,e^{-s\sqrt{h}\, \mu(\theta_{M})}+\mu^{-2}(\theta_{M})\,e^{-s\sqrt{h}\,\mu(\theta_{M})}\Big{)},\] \[R_{2}=-2s^{-2}\Re\mathbf{g}_{-}(\mathbf{0})\cdot\begin{bmatrix}1 \\ \mathrm{i}\end{bmatrix}\big{(}\mu^{-1}(\theta_{m})\,s\sqrt{h}\,e^{-s\sqrt{h}\, \mu(\theta_{m})}+\mu^{-2}(\theta_{m})\,e^{-s\sqrt{h}\,\mu(\theta_{m})}\big{)},\] \[R_{3}=\int_{\Lambda_{h}}\mathcal{T}_{\boldsymbol{\nu}}\big{(} \Re\mathbf{v}-\Re\mathbf{w}\big{)}\cdot\mathbf{u}_{0}-\mathcal{T}_{ \boldsymbol{\nu}}\mathbf{u}_{0}\cdot\big{(}\Re\mathbf{v}-\Re\mathbf{w}\big{)} \,\mathrm{d}\sigma,\] \[R_{4}=-\int_{\Gamma_{h}^{+}}\Re\delta\mathbf{f}_{+}\cdot \mathcal{T}_{\boldsymbol{\nu}_{M}}\mathbf{u}_{0}\,\mathrm{d}\sigma,\quad\,R_ {5}=-\int_{\Gamma_{h}^{-}}\Re\delta\mathbf{f}_{-}\cdot\mathcal{T}_{\boldsymbol {\nu}_{m}}\mathbf{u}_{0}\,\mathrm{d}\sigma,\] \[R_{6}=\int_{\Gamma_{h}^{+}}\Re\delta\mathbf{g}_{+}\cdot\mathbf{ u}_{0}\,\mathrm{d}\sigma,\qquad\qquad R_{7}=\int_{\Gamma_{h}^{-}}\Re\delta \mathbf{g}_{-}\cdot\mathbf{u}_{0}\,\mathrm{d}\sigma.\] From the expression of \(\mu(\cdot)\) given in Lemma 4.4, it is direct to see that \(\mu^{-2}(\theta_{M})\), \(\mu^{-1}(\theta_{M})\), \(\mu^{-2}(\theta_{m})\) and \(\mu^{-1}(\theta_{m})\) are bounded. For sufficiently large \(s\), we have \[\big{|}R_{1}\big{|}=\mathcal{O}(s^{-1}e^{-c_{1}s})\quad\text{and}\quad\big{|}R _{2}\big{|}=\mathcal{O}(s^{-1}e^{-c_{2}s}), \tag{4.13}\] where \(c_{1}\) and \(c_{2}\) are positive constants not depending on \(s\). Considering the estimate of \(\delta\mathbf{f}_{+}\) in (4.6), the expression of \(\mathcal{T}_{\boldsymbol{\nu}_{M}}\mathbf{u}_{0}\) in (4.8) and Lemma 4.2, we get \[\big{|}R_{4}\big{|}\leq\,c_{4}\,s\int_{0}^{h}r^{\alpha_{+}+\frac{1}{2}}e^{-s\sqrt{r} \cos\frac{\theta_{M}}{2}}\,\mathrm{d}r=\mathcal{O}(s^{-2\alpha_{+}-2}). \tag{4.14}\] Similarly, we get the following estimates \[\big{|}R_{5}\big{|}=\mathcal{O}(s^{-2\alpha_{-}-2}),\quad\big{|}R_{6}\big{|}= \mathcal{O}(s^{-2\beta_{+}-2}),\quad\big{|}R_{7}\big{|}=\mathcal{O}(s^{- 2\beta_{-}-2}). 
\tag{4.15}\] For the term \(R_{3}\), by virtue of the Cauchy-Schwarz inequality, the trace theorem and Lemma 4.3, we obtain \[\big{|}R_{3}\big{|} \leq\|\mathbf{u}_{0}\|_{L^{2}(\Lambda_{h})^{2}}\|\mathcal{T}_{ \boldsymbol{\nu}}(\Re\mathbf{v}-\Re\mathbf{w})\|_{L^{2}(\Lambda_{h})^{2}}+\| \Re\mathbf{v}-\Re\mathbf{w}\|_{L^{2}(\Lambda_{h})^{2}}\|\mathcal{T}_{\boldsymbol {\nu}}\mathbf{u}_{0}\|_{L^{2}(\Lambda_{h})^{2}}\] \[\leq\Big{(}\|\mathbf{u}_{0}\|_{L^{2}(\Lambda_{h})^{2}}+\|\mathcal{T} _{\boldsymbol{\nu}}\mathbf{u}_{0}\|_{L^{2}(\Lambda_{h})^{2}}\Big{)}\big{\|} \mathbf{v}-\mathbf{w}\big{\|}_{H^{1}(\mathcal{C}_{h})^{2}}\] \[\leq c_{3}\big{\|}\mathbf{u}_{0}\big{\|}_{H^{1}(\mathcal{C}_{h})^{ 2}}\leq c_{3}e^{-s\sqrt{h}\,\delta_{\mathcal{K}}}, \tag{4.16}\] where \(c_{3}\) is positive and \(\delta_{\mathcal{K}}\) is defined in (4.3). Combining (4.13)-(4.16) with the identity (4.12) and letting \(s\to+\infty\), we obtain \[\begin{bmatrix}+\mathrm{i}\\ -1\end{bmatrix}\cdot\left(\Re\mathbf{f}_{+}(\mathbf{0})-\Re\mathbf{f}_{-}( \mathbf{0})\right)=0. \tag{4.17}\] Since \(\Re\mathbf{f}_{+}(\mathbf{0})\) and \(\Re\mathbf{f}_{-}(\mathbf{0})\) are real vectors in \(\mathbb{R}^{2}\), we derive that \[\Re\mathbf{f}_{+}(\mathbf{0})=\Re\mathbf{f}_{-}(\mathbf{0}). \tag{4.18}\] Substituting (4.18) into (4.12), and then multiplying the new equation by \(s^{2}\), one gets that \[\mu\,s^{2}\Re\mathbf{f}_{+}(\mathbf{0})\cdot\begin{bmatrix}\mathrm{i}\\ -1\end{bmatrix}\left(e^{-s\sqrt{h}\mu(\theta_{M})}-e^{-s\sqrt{h}\mu(\theta_{m} )}\right)-2\begin{bmatrix}1\\ \mathrm{i}\end{bmatrix}\cdot\left(\frac{\Re\mathbf{g}_{+}(\mathbf{0})}{\mu^{2 }(\theta_{M})}+\frac{\Re\mathbf{g}_{-}(\mathbf{0})}{\mu^{2}(\theta_{m})} \right)=\sum_{j=1}^{7}s^{2}\,R_{j}.\] Letting \(s\) tend to \(+\infty\), we have \[\begin{bmatrix}1\\ \mathrm{i}\end{bmatrix}\cdot\left(\Re\mathbf{g}_{+}(\mathbf{0})\mu^{-2}( \theta_{M})+\Re\mathbf{g}_{-}(\mathbf{0})\mu^{-2}(\theta_{m})\right)=0.\] Noting that \(\mu^{-2}(\theta_{M})\neq\mu^{-2}(\theta_{m})\) and \(\frac{\mu^{2}(\theta_{M})}{\mu^{2}(\theta_{m})}=e^{\mathrm{i}(\theta_{M}- \theta_{m})}\), and writing \(\Re\mathbf{g}_{+}(\mathbf{0}):=\begin{bmatrix}a_{11}\\ a_{21}\end{bmatrix}\) and \(\Re\mathbf{g}_{-}(\mathbf{0}):=\begin{bmatrix}a_{12}\\ a_{22}\end{bmatrix}\), where \(a_{ij}\in\mathbb{R}\) for \(i,j=1,2\), the above equation can be rewritten as follows \[\begin{cases}a_{11}+a_{12}\cos(\theta_{M}-\theta_{m})-a_{22}\sin(\theta_{M}- \theta_{m})=0,\\ a_{21}+a_{12}\sin(\theta_{M}-\theta_{m})+a_{22}\cos(\theta_{M}-\theta_{m})=0, \end{cases}\] which is \[\Re\mathbf{g}_{+}(\mathbf{0})=W\Re\mathbf{g}_{-}(\mathbf{0}).\] Here, \[W=\begin{bmatrix}-\cos(\theta_{M}-\theta_{m}),&-\sin(\theta_{M}-\theta_{m}) \\ -\sin(\theta_{M}-\theta_{m}),&\cos(\theta_{M}-\theta_{m})\end{bmatrix}\quad \text{and}\quad\det(W)\neq 0.\] The proof is complete. ### Local results for 3D case As discussed in Remark 4.2 in [13], the regularity result on the underlying elastic displacement around a general polyhedral corner in \(\mathbb{R}^{3}\) is challenging to obtain. Therefore, in this subsection, we shall restrict ourselves to 3D edge corners. We introduce a dimension reduction operator \(\mathcal{P}\) to study the relevant continuity properties of the slip vectors \(\mathbf{f}\) and \(\mathbf{g}\) at a 3D edge corner. In what follows, we suppose that the dislocation \(\mathcal{S}\) is a Lipschitz surface possessing at least one 3D edge corner \(\mathbf{x}_{c}=(\mathbf{x}_{c}^{\prime},x_{c}^{3})^{\top}\in\mathcal{S}\subset \mathbb{R}^{3}\). 
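Before turning to the dimension reduction, we pause for a quick numerical sanity check of the closed-form boundary integral in Lemma 4.4, which drives the estimates (4.8)-(4.11) above. The snippet below is illustrative only (it is not part of the analysis), and the parameter values are arbitrary.

```python
import numpy as np
from scipy.integrate import quad

theta_M, h, s = np.pi / 3.0, 1.0, 6.0      # arbitrary illustrative values
mu = np.exp(1j * theta_M / 2.0)            # mu(theta) = e^{i theta / 2}

def u1(r, part):
    """u_1^0 restricted to Gamma_h^+, split into real/imaginary parts for quad."""
    val = np.exp(-s * np.sqrt(r) * mu)
    return val.real if part == "re" else val.imag

numeric = (quad(u1, 0.0, h, args=("re",))[0]
           + 1j * quad(u1, 0.0, h, args=("im",))[0])

closed = 2.0 * s**-2 * (mu**-2 - mu**-2 * np.exp(-s * np.sqrt(h) * mu)
                        - mu**-1 * s * np.sqrt(h) * np.exp(-s * np.sqrt(h) * mu))

print(abs(numeric - closed))               # ~1e-12: agrees with Lemma 4.4
```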
The next definition states a dimension reduction operator, which is beneficial for deriving a crucial auxiliary proposition similar to Proposition 4.1 at a 3D edge corner. **Definition 4.1**.: Let \(\mathcal{C}^{\prime}_{\mathbf{x}_{c}^{\prime},h}\) be defined as \(\mathcal{C}_{\mathbf{x}_{c},h}\) in (3.3) with the vertex \(\mathbf{x}_{c}^{\prime}\). Write \(\mathbf{x}=(\mathbf{x}^{\prime},x_{3})^{\top}\in\mathbb{R}^{3}\) and let \(\mathbf{h}\) be a given function in the domain \(\mathcal{C}^{\prime}_{\mathbf{x}_{c}^{\prime},h}\times(-M,M)\) with \(M>0\). For any fixed \(x_{c}^{3}\in(-M,M)\) and sufficiently small \(L>0\) with \((x_{c}^{3}-L,x_{c}^{3}+L)\subset(-M,M)\), let \(\phi\in C_{0}^{\infty}((x_{c}^{3}-L,x_{c}^{3}+L))\) be a nonnegative function with \(\phi\not\equiv 0\). The dimension reduction operator \(\mathcal{P}\) is defined as follows \[\mathcal{P}(\mathbf{h})(\mathbf{x}^{\prime})=\int_{x_{c}^{3}-L}^{x_{c}^{3}+L} \phi(x_{3})\mathbf{h}(\mathbf{x}^{\prime},x_{3})\mathrm{d}x_{3},\quad\mathbf{x}^{\prime}\in\mathcal{C}^{\prime}_{\mathbf{x}_{c}^{\prime},h}.\] Before deriving the main results of this subsection, we review some important properties of this operator. **Lemma 4.5**.: _[_13_, Lemma 3.1]_ _Let \(\mathbf{h}\in H^{m}(\mathcal{C}^{\prime}_{\mathbf{x}^{\prime}_{c},h}\times(-M,M) )^{3}\), \(m=1,2\). Then_ \[\mathcal{P}(\mathbf{h})(\mathbf{x}^{\prime})\in H^{m}(\mathcal{C}^{\prime}_{ \mathbf{x}^{\prime}_{c},h})^{3}.\] _Similarly, if \(\mathbf{h}\in C^{\delta}\big{(}\overline{\mathcal{C}^{\prime}_{\mathbf{x}^{ \prime}_{c},h}}\times[-M,M]\big{)}^{3}\) with \(\delta\in(0,1)\), then_ \[\mathcal{P}(\mathbf{h})(\mathbf{x}^{\prime})\in C^{\delta}\big{(}\overline{ \mathcal{C}^{\prime}_{\mathbf{x}^{\prime}_{c},h}}\big{)}^{3}.\] Note that the three-dimensional isotropic elastic operator \(\mathcal{L}\) defined in Section 2 can be rewritten as \[\mathcal{L} =\begin{bmatrix}\lambda\Delta+(\lambda+\mu)\partial_{1}^{2}&( \lambda+\mu)\partial_{1}\partial_{2}&(\lambda+\mu)\partial_{1}\partial_{3}\\ (\lambda+\mu)\partial_{1}\partial_{2}&\lambda\Delta+(\lambda+\mu)\partial_{2 }^{2}&(\lambda+\mu)\partial_{2}\partial_{3}\\ (\lambda+\mu)\partial_{1}\partial_{3}&(\lambda+\mu)\partial_{2}\partial_{3}& \lambda\Delta+(\lambda+\mu)\partial_{3}^{2}\end{bmatrix}\] \[=\widetilde{\mathcal{L}}+\begin{bmatrix}\lambda\partial_{3}^{2}& 0&(\lambda+\mu)\partial_{1}\partial_{3}\\ 0&\lambda\partial_{3}^{2}&(\lambda+\mu)\partial_{2}\partial_{3}\\ (\lambda+\mu)\partial_{1}\partial_{3}&(\lambda+\mu)\partial_{2}\partial_{3}& \lambda\partial_{3}^{2}+(\lambda+\mu)\partial_{3}^{2}\end{bmatrix},\] where \[\widetilde{\mathcal{L}}=\begin{bmatrix}\lambda\Delta^{\prime}+(\lambda+\mu) \partial_{1}^{2}&(\lambda+\mu)\partial_{1}\partial_{2}&0\\ (\lambda+\mu)\partial_{1}\partial_{2}&\lambda\Delta^{\prime}+(\lambda+\mu) \partial_{2}^{2}&0\\ 0&0&\lambda\Delta^{\prime}\end{bmatrix}=\begin{bmatrix}\mathcal{L}_{\mathcal{ P}}&0\\ 0&\lambda\Delta^{\prime}\end{bmatrix} \tag{4.19}\] with \(\Delta^{\prime}=\partial_{1}^{2}+\partial_{2}^{2}\) being the Laplace operator with respect to the \(\mathbf{x}^{\prime}\)-variables. Here, the operator \(\mathcal{L}_{\mathcal{P}}\) is the two-dimensional isotropic elastic operator with respect to the \(\mathbf{x}^{\prime}\)-variables. In what follows, we fix \(x_{c}^{3}\in(-M,M)\) and a sufficiently small \(L>0\) such that \((x_{c}^{3}-L,x_{c}^{3}+L)\subset(-M,M)\). At this moment, we have \(\mathcal{L}=\widetilde{\mathcal{L}}\). 
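The source terms that the dimension reduction produces (see \(\mathcal{G}_{1}\), \(\mathcal{G}_{2}\) in Lemma 4.6 below) arise from moving \(x_{3}\)-derivatives onto the cut-off: since \(\phi\) is compactly supported, \(\mathcal{P}(\partial_{3}\mathbf{h})=-\int\phi^{\prime}\mathbf{h}\,\mathrm{d}x_{3}\) and \(\mathcal{P}(\partial_{3}^{2}\mathbf{h})=\int\phi^{\prime\prime}\mathbf{h}\,\mathrm{d}x_{3}\), with no boundary contributions. A minimal numerical check of the first identity is given below; the bump function and the test function are arbitrary illustrative choices of ours.

```python
import numpy as np
from scipy.integrate import quad

x3c, L = 0.0, 0.3   # centre and half-width of the cut-off interval (arbitrary)

def phi(x3):
    """Nonnegative bump in C_0^infty((x3c - L, x3c + L)), phi != 0."""
    t = (x3 - x3c) / L
    return np.exp(-1.0 / (1.0 - t * t)) if abs(t) < 1.0 else 0.0

def dphi(x3):
    """Derivative of phi (chain rule applied to the bump profile)."""
    t = (x3 - x3c) / L
    if abs(t) >= 1.0:
        return 0.0
    return phi(x3) * (-2.0 * t / (1.0 - t * t) ** 2) / L

h  = lambda x3: np.sin(3.0 * x3) + x3 ** 2          # smooth test function
dh = lambda x3: 3.0 * np.cos(3.0 * x3) + 2.0 * x3   # its x3-derivative

lhs = quad(lambda x3: phi(x3) * dh(x3), x3c - L, x3c + L)[0]   # P(d_3 h)
rhs = -quad(lambda x3: dphi(x3) * h(x3), x3c - L, x3c + L)[0]  # -int phi' h dx3
print(abs(lhs - rhs))   # ~1e-10: integration by parts with no boundary terms
```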
Since \(\mathcal{L}\) is invariant under rigid motion, in what follows we assume that \(\mathbf{x}^{\prime}_{c}=\mathbf{0}\) in \(\mathbb{R}^{2}.\) Hence let \(\mathcal{C}^{\prime}_{\mathbf{x}^{\prime}_{c},h}\) and \(\Gamma^{\pm}_{\mathbf{x}^{\prime}_{c},h}\) be defined in (3.3) with \(\mathbf{x}^{\prime}_{c}\) coinciding with the origin in 2D case. By some tedious calculations, we have the following lemma. **Lemma 4.6**.: _Under the setup about \(\mathcal{C}^{\prime}_{\mathbf{x}^{\prime}_{c},h}\) and a 3D edge corner as above, suppose that \(\mathbf{f}_{j}\in C^{1,\alpha_{j}}\big{(}\Gamma^{\prime j}_{\mathbf{x}^{ \prime}_{c},h}\times(-M,M)\big{)}^{3}\bigcap H^{\frac{5}{2}}\big{(}\Gamma^{ \prime j}_{\mathbf{x}^{\prime}_{c},h}\times(-M,M)\big{)}^{3}\) and \(\mathbf{g}_{j}\in C^{\beta_{j}}\big{(}\Gamma^{\prime j}_{\mathbf{x}^{\prime}_{ c},h}\times(-M,M)\big{)}^{3}\bigcap H^{-\frac{1}{2}}\big{(}\Gamma^{\prime j}_{ \mathbf{x}^{\prime}_{c},h}\times(-M,M)\big{)}^{3}\) do not depend on \(x_{3}\) for \(j=+,-\), where \(\alpha_{+}\), \(\alpha_{-}\), \(\beta_{+}\), \(\beta_{-}\in(0,1)\). Denote \(\mathbf{v}=(\mathbf{v}^{(1,2)},\,v_{3})^{\top}\), \(\mathbf{w}=(\mathbf{w}^{(1,2)},\,w_{3})^{\top}\), \(\mathbf{f}_{\pm}=(\mathbf{f}_{\pm}^{(1,2)},\,f_{\pm}^{3})^{\top}\) and \(\mathbf{g}_{\pm}=(\mathbf{g}_{\pm}^{(1,2)},\,g_{\pm}^{3})^{\top}\). Then the transmission eigenvalue problem for \((\mathbf{v},\mathbf{w})\in H^{1}\big{(}\mathcal{C}^{\prime}_{\mathbf{x}^{ \prime}_{c},h}\times(-M,M)\big{)}^{3}\times H^{1}\big{(}\mathcal{C}^{\prime} _{\mathbf{x}^{\prime}_{c},h}\times(-M,M)\big{)}^{3}:\)_ \[\begin{cases}\mathcal{L}\,\mathbf{v}=\mathbf{0},&\mathcal{L}\,\mathbf{w}= \mathbf{0}&\text{in}&\mathcal{C}^{\prime}_{\mathbf{x}^{\prime}_{c},h}\times(-M,M),\\ \mathbf{v}-\mathbf{w}=\mathbf{f}_{+},\,\mathcal{T}_{\boldsymbol{\nu}}\,\mathbf{v} -\mathcal{T}_{\boldsymbol{\nu}}\,\mathbf{w}=\mathbf{g}_{+}&\text{on}&\Gamma^{ \prime+}_{\mathbf{x}^{\prime}_{c},h}\times(-M,M),\\ \mathbf{v}-\mathbf{w}=\mathbf{f}_{-},\,\mathcal{T}_{\boldsymbol{\nu}}\,\mathbf{v} -\mathcal{T}_{\boldsymbol{\nu}}\,\mathbf{w}=\mathbf{g}_{-}&\text{on}&\Gamma^{ \prime-}_{\mathbf{x}^{\prime}_{c},h}\times(-M,M)\end{cases}\] can be reduced to be_ \[\begin{cases}\widetilde{\mathcal{L}}\,\mathcal{P}(\mathbf{v})(\mathbf{x}^{\prime} )=\mathcal{G}_{1}(\mathbf{x}^{\prime}),&\widetilde{\mathcal{L}}\,\mathcal{P}( \mathbf{w})(\mathbf{x}^{\prime})=\mathcal{G}_{2}(\mathbf{x}^{\prime}),&\mathbf{ x}^{\prime}\in\mathcal{C}^{\prime}_{\mathbf{x}^{\prime}_{c},h},\\ \mathcal{P}(\mathbf{v})(\mathbf{x}^{\prime})=\mathcal{P}(\mathbf{w})(\mathbf{ x}^{\prime})+\mathcal{P}(\mathbf{f}_{+})(\mathbf{x}^{\prime}),&\mathbf{x}^{\prime}\in \Gamma^{\prime+}_{\mathbf{x}^{\prime}_{c},h},\\ \mathcal{P}(\mathbf{v})(\mathbf{x}^{\prime})=\mathcal{P}(\mathbf{w})(\mathbf{ x}^{\prime})+\mathcal{P}(\mathbf{f}_{-})(\mathbf{x}^{\prime}),&\mathbf{x}^{\prime}\in \Gamma^{\prime-}_{\mathbf{x}^{\prime}_{c},h},\\ \mathcal{R}_{1}^{+}=\mathcal{R}_{2}^{+}+\mathcal{P}(\mathbf{g}_{+})(\mathbf{x }^{\prime}),&\mathbf{x}^{\prime}\in\Gamma^{\prime+}_{\mathbf{x}^{\prime}_{c}, h},\\ \mathcal{R}_{1}^{-}=\mathcal{R}_{2}^{-}+\mathcal{P}(\mathbf{g}_{-})(\mathbf{x }^{\prime}),&\mathbf{x}^{\prime}\in\Gamma^{\prime-}_{\mathbf{x}^{\prime}_{c}, h},\end{cases} \tag{4.20}\] _where_ \[\mathcal{G}_{1}=-\int_{x_{c}^{2}-L}^{x_{c}^{3}+L}\phi^{\prime\prime }(x_{3})\begin{bmatrix}\lambda\mathbf{v}^{(1,2)}(\mathbf{x})\\ (2\lambda+\mu)v_{3}(\mathbf{x})\end{bmatrix}\mathrm{d}x_{3}+(\lambda+\mu)\int_{ 
x_{c}^{2}-L}^{x_{c}^{3}+L}\phi^{\prime}(x_{3})\begin{bmatrix}\nabla v_{3}( \mathbf{x})\\ \partial_{1}v_{1}(\mathbf{x})+\partial_{2}v_{2}(\mathbf{x})\end{bmatrix} \mathrm{d}x_{3},\] \[\mathcal{R}_{1}^{+}=\begin{bmatrix}\mathcal{T}_{\boldsymbol{\nu} _{M}}\mathcal{P}(\mathbf{v}^{(1,2)})+\lambda\mathcal{P}(\partial_{3}v_{1}) \boldsymbol{\nu}_{M}\\ \mu\partial_{\boldsymbol{\nu}_{M}}\mathcal{P}(v_{3})+\mu\begin{bmatrix} \mathcal{P}(\partial_{3}v_{1})\\ \mathcal{P}(\partial_{3}v_{2})\end{bmatrix}\cdot\boldsymbol{\nu}_{M}\end{bmatrix}, \mathcal{R}_{2}^{+}=\begin{bmatrix}\mathcal{T}_{\boldsymbol{\nu}_{M}} \mathcal{P}(\mathbf{w}^{(1,2)})+\lambda\mathcal{P}(\partial_{3}w_{3}) \boldsymbol{\nu}_{M}\\ \mu\partial_{\boldsymbol{\nu}_{M}}\mathcal{P}(w_{3})+\mu\begin{bmatrix} \mathcal{P}(\partial_{3}w_{1})\\ \mathcal{P}(\partial_{3}w_{2})\end{bmatrix}\cdot\boldsymbol{\nu}_{M}\end{bmatrix} \text{on}\Gamma^{\prime+}_{\mathbf{x}^{\prime}_{c},h},\] \[\mathcal{R}_{1}^{-}=\begin{bmatrix}\mathcal{T}_{\boldsymbol{\nu} _{m}}\mathcal{P}(\mathbf{v}^{(1,2)})+\lambda\mathcal{P}(\partial_{3}v_{3}) \boldsymbol{\nu}_{m}\\ \mu\partial_{\boldsymbol{\nu}_{m}}\mathcal{P}(v_{3})+\mu\begin{bmatrix} \mathcal{P}(\partial_{3}v_{1})\\ \mathcal{P}(\partial_{3}v_{2})\end{bmatrix}\cdot\boldsymbol{\nu}_{m}\end{bmatrix},\mathcal{R}_{2}^{-}=\begin{bmatrix}\mathcal{T}_{\boldsymbol{\nu}_{m}} \mathcal{P}(\mathbf{w}^{(1,2)})+\lambda\mathcal{P}(\partial_{3}w_{3}) \boldsymbol{\nu}_{m}\\ \mu\partial_{\boldsymbol{\nu}_{m}}\mathcal{P}(w_{3})+\mu\begin{bmatrix} \mathcal{P}(\partial_{3}w_{1})\\ \mathcal{P}(\partial_{3}w_{2})\end{bmatrix}\cdot\boldsymbol{\nu}_{m}\end{bmatrix} \text{on}\Gamma^{\prime-}_{\mathbf{x}^{\prime}_{c},h}.\] _Here, \(\boldsymbol{\nu}_{M}\) and \(\boldsymbol{\nu}_{m}\) denote the exterior unit normal vector to \(\Gamma^{\prime+}_{\mathbf{x}^{\prime}_{c},h}\) and \(\Gamma^{\prime+}_{\mathbf{x}^{\prime}_{c},h}\), respectively, \(\mathcal{T}_{\boldsymbol{\nu}}\) is the two-dimensional boundary traction operator._ By applying the decomposition of \(\widetilde{\mathcal{L}}\) given in (4.19), it is direct to obtain the following results. Here, we omit the proof. 
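Before stating them, we record an elementary observation (ours, for the reader's convenience): since \(\mathbf{f}_{j}\) and \(\mathbf{g}_{j}\) in Lemma 4.6 are assumed independent of \(x_{3}\), the reduction acts on them as multiplication by a positive constant, \[\mathcal{P}(\mathbf{f}_{\pm})(\mathbf{x}^{\prime})=\Big{(}\int_{x_{c}^{3}-L}^{x_{c}^{3}+L}\phi(x_{3})\,\mathrm{d}x_{3}\Big{)}\,\mathbf{f}_{\pm}(\mathbf{x}^{\prime}),\qquad\mathcal{P}(\mathbf{g}_{\pm})(\mathbf{x}^{\prime})=\Big{(}\int_{x_{c}^{3}-L}^{x_{c}^{3}+L}\phi(x_{3})\,\mathrm{d}x_{3}\Big{)}\,\mathbf{g}_{\pm}(\mathbf{x}^{\prime}),\] so that any identity established for \(\mathcal{P}(\mathbf{f}_{\pm})(\mathbf{0})\) and \(\mathcal{P}(\mathbf{g}_{\pm})(\mathbf{0})\) transfers verbatim to \(\mathbf{f}_{\pm}(\mathbf{0})\) and \(\mathbf{g}_{\pm}(\mathbf{0})\).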
**Lemma 4.7**.: _Under the same setup in Lemma 4.6, the transmission system (4.20) is equivalent to the following two PDE systems_ \[\begin{cases}\mathcal{L}_{\mathcal{P}}\,\mathcal{P}(\mathbf{v}^{(1,2)})( \mathbf{x}^{\prime})=\mathcal{G}_{1}^{(1,2)}(\mathbf{x}^{\prime})&\text{in}& \mathcal{C}^{\prime}_{\mathbf{x}^{\prime}_{c},h},\\ \mathcal{L}_{\mathcal{P}}\,\mathcal{P}(\mathbf{w}^{(1,2)})(\mathbf{x}^{\prime})= \mathcal{G}_{2}^{(1,2)}(\mathbf{x}^{\prime})&\text{in}&\mathcal{C}^{\prime}_{ \mathbf{x}^{\prime}_{c},h},\\ \mathcal{P}(\mathbf{v}^{(1,2)})(\mathbf{x}^{\prime})=\mathcal{P}(\mathbf{w}^{(1,2 )})(\mathbf{x}^{\prime})+\mathcal{P}(\mathbf{f}_{+}^{(1,2)})(\mathbf{x}^{ \prime})&\text{on}&\Gamma^{\prime+}_{\mathbf{x}^{\prime}_{c},h},\\ \mathcal{P}(\mathbf{v}^{(1,2)})(\mathbf{x}^{\prime})=\mathcal{P}(\mathbf{w}^{(1,2 )})(\mathbf{x}^{\prime})+\mathcal{P}(\mathbf{f}_{-}^{(1,2)})(\mathbf{x}^{ \prime})&\text{on}&\Gamma^{\prime-}_{\mathbf{x}^{\prime}_{c},h},\\ \mathcal{R}_{1}^{(1,2)}=\mathcal{R}_{2}^{(1,2)}+\mathcal{P}(\mathbf{g}_{+}^{(1,2 )})(\mathbf{x}^{\prime})&\text{on}&\Gamma^{\prime+}_{\mathbf{x}^{\prime}_{c},h},\\ \mathcal{R}_{1}^{(1,2)}=\mathcal{R}_{2}^{(1,2)}+\mathcal{P}(\mathbf{g}_{-}^{(1,2 )})(\mathbf{x}^{\prime})&\text{on}&\Gamma^{\prime-}_{\mathbf{x}^{\prime}_{c},h} \end{cases} \tag{4.21}\] _and_ \[\begin{cases}\lambda\Delta^{\prime}\,\mathcal{P}(v_{3})(\mathbf{x}^{\prime})= \mathcal{G}_{1}^{(3)}(\mathbf{x}^{\prime})&\text{in}\ \ \ \mathcal{C}_{\mathbf{x}_{c}^{\prime},h}^{\prime},\\ \lambda\Delta^{\prime}\,\mathcal{P}(w_{3})(\mathbf{x}^{\prime})=\mathcal{G}_{2 }^{(3)}(\mathbf{x}^{\prime})&\text{in}\ \ \ \mathcal{C}_{\mathbf{x}_{c}^{\prime},h}^{\prime},\\ \mathcal{P}(v_{3})(\mathbf{x}^{\prime})=\mathcal{P}(w_{3})(\mathbf{x}^{\prime })+\mathcal{P}(f_{+}^{3})(\mathbf{x}^{\prime})&\text{on}\ \ \ \mathcal{I}_{\mathbf{x}_{c}^{\prime},h}^{\prime+},\\ \mathcal{P}(v_{3})(\mathbf{x}^{\prime})=\mathcal{P}(w_{3})(\mathbf{x}^{\prime })+\mathcal{P}(f_{-}^{3})(\mathbf{x}^{\prime})&\text{on}\ \ \ \mathcal{I}_{\mathbf{x}_{c}^{\prime},h}^{\prime-},\\ \partial_{\boldsymbol{\nu}_{M}}\mathcal{P}(v_{3})(\mathbf{x}^{\prime})= \partial_{\boldsymbol{\nu}_{M}}\mathcal{P}(w_{3})(\mathbf{x}^{\prime})+\frac{ 1}{\mu}\mathcal{P}(g_{+}^{3})(\mathbf{x}^{\prime})&\text{on}\ \ \ \mathcal{I}_{\mathbf{x}_{c}^{\prime},h}^{\prime+},\\ \partial_{\boldsymbol{\nu}_{m}}\mathcal{P}(v_{3})(\mathbf{x}^{\prime})= \partial_{\boldsymbol{\nu}_{m}}\mathcal{P}(w_{3})(\mathbf{x}^{\prime})+\frac{ 1}{\mu}\mathcal{P}(g_{-}^{3})(\mathbf{x}^{\prime})&\text{on}\ \ \ \mathcal{I}_{\mathbf{x}_{c}^{\prime},h}^{\prime-},\end{cases} \tag{4.22}\] _where_ \[\mathcal{G}_{1}^{(1,2)}=-\lambda\int_{x_{c}^{3}-L}^{x_{c}^{3}+L} \phi^{\prime\prime}(x_{3})\mathbf{v}^{(1,2)}(\mathbf{x})\mathrm{d}x_{3}+( \lambda+\mu)\int_{x_{c}^{3}-L}^{x_{c}^{3}+L}\phi^{\prime}(x_{3})\nabla v_{3}( \mathbf{x})\mathrm{d}x_{3},\] \[\mathcal{G}_{2}^{(1,2)}=-\lambda\int_{x_{c}^{3}-L}^{x_{c}^{3}+L} \phi^{\prime\prime}(x_{3})\mathbf{w}^{(1,2)}(\mathbf{x})\mathrm{d}x_{3}+( \lambda+\mu)\int_{x_{c}^{3}-L}^{x_{c}^{3}+L}\phi^{\prime}(x_{3})\nabla w_{3}( \mathbf{x})\mathrm{d}x_{3},\] \[\mathcal{G}_{1}^{(3)}=-(2\lambda+\mu)\int_{x_{c}^{3}-L}^{x_{c}^{3 }+L}\phi^{\prime\prime}(x_{3})v_{3}(\mathbf{x})\mathrm{d}x_{3}+(\lambda+\mu) \int_{x_{c}^{3}-L}^{x_{c}^{3}+L}\phi^{\prime}(x_{3})(\partial_{1}v_{1}+ \partial_{2}v_{2})\mathrm{d}x_{3},\] \[\mathcal{G}_{2}^{(3)}=-(2\lambda+\mu)\int_{x_{c}^{3}-L}^{x_{c}^{3 }+L}\phi^{\prime\prime}(x_{3})w_{3}(\mathbf{x})\mathrm{d}x_{3}+(\lambda+\mu) 
\int_{x_{c}^{3}-L}^{x_{c}^{3}+L}\phi^{\prime}(x_{3})(\partial_{1}w_{1}+ \partial_{2}w_{2})\mathrm{d}x_{3},\] \[\mathcal{R}_{1}^{+(1,2)}=\mathcal{T}_{\boldsymbol{\nu}_{M}} \mathcal{P}(\mathbf{v}^{(1,2)})+\lambda\mathcal{P}(\partial_{3}v_{3})\, \boldsymbol{\nu}_{M},\ \mathcal{R}_{2}^{+(1,2)}=\mathcal{T}_{\boldsymbol{\nu}_{M}} \mathcal{P}(\mathbf{w}^{(1,2)})+\lambda\mathcal{P}(\partial_{3}w_{3})\, \boldsymbol{\nu}_{M}\quad\text{on}\ \Gamma_{\mathbf{x}_{c}^{\prime},h}^{\prime+},\] \[\mathcal{R}_{1}^{-(1,2)}=\mathcal{T}_{\boldsymbol{\nu}_{m}} \mathcal{P}(\mathbf{v}^{(1,2)})+\lambda\mathcal{P}(\partial_{3}v_{3})\, \boldsymbol{\nu}_{m},\ \mathcal{R}_{2}^{-(1,2)}=\mathcal{T}_{\boldsymbol{\nu}_{m}} \mathcal{P}(\mathbf{w}^{(1,2)})+\lambda\mathcal{P}(\partial_{3}w_{3})\, \boldsymbol{\nu}_{m}\quad\text{on}\ \Gamma_{\mathbf{x}_{c}^{\prime},h}^{\prime-}.\] To obtain the continuity of \(\mathcal{P}(f_{+}^{3})\) and \(\mathcal{P}(f_{-}^{3})\), as well as of \(\mathcal{P}(g_{+}^{3})\) and \(\mathcal{P}(g_{-}^{3})\), at a 3D edge corner in the conventional or rotational sense, we also use the CGO solution \(u_{0}\) introduced in [5]. To be more precise, this CGO solution possesses a form and properties similar to those in Lemmas 4.1-4.4 (cf. [12]). For the reader's sake, we include these properties here. **Lemma 4.8**.: _[_5_, Lemma 2.2]_ _Let \(\mathbf{x}^{\prime}=(x_{1},x_{2})^{\top}=r(\cos\theta,\sin\theta)^{\top}\in \mathbb{R}^{2}\) and \(s\in\mathbb{R}_{+}\),_ \[u_{0}(s\mathbf{x}^{\prime}):=\exp(-\sqrt{sr}\mu(\theta)), \tag{4.23}\] _where \(\mu(\cdot)\) is defined in Lemma 4.4. Then \(s\longmapsto u_{0}(s\mathbf{x}^{\prime})\) decays exponentially in \(\mathbb{R}_{+}\) and_ \[\Delta^{\prime}u_{0}=0\quad\text{in}\quad\mathbb{R}^{2}\backslash\mathbb{R}_{0,-}^{2} \tag{4.24}\] _where \(\mathbb{R}_{0,-}^{2}:=\{\mathbf{x}^{\prime}\in\mathbb{R}^{2}|\mathbf{x}^{\prime}=( x_{1},x_{2});x_{1}\leq 0,x_{2}=0\}\). Moreover,_ \[\int_{\mathcal{K}_{\mathcal{P}}}u_{0}(s\mathbf{x}^{\prime})\mathrm{d}\mathbf{x}^{ \prime}=6\mathrm{i}(e^{-2\theta_{M}\mathrm{i}}-e^{-2\theta_{m}\mathrm{i}})s^{-2} \tag{4.25}\] _and for \(\alpha,\,s>0\) and \(h>0\)_ \[\int_{\mathcal{K}_{\mathcal{P}}}|u_{0}(s\mathbf{x}^{\prime})||\mathbf{x}^{\prime }|^{\alpha}\mathrm{d}\mathbf{x}^{\prime}\leq\frac{2(\theta_{M}-\theta_{m})\Gamma(2 \alpha+4)}{\delta_{\mathcal{K}_{\mathcal{P}}}^{2\alpha+4}}s^{-\alpha-2},\] \[\int_{\mathcal{K}_{\mathcal{P}}\backslash B_{h}}|u_{0}(s\mathbf{x}^{\prime})| \mathrm{d}\mathbf{x}^{\prime}\leq\frac{6(\theta_{M}-\theta_{m})}{\delta_{ \mathcal{K}_{\mathcal{P}}}^{4}}s^{-2}e^{-\frac{\sqrt{sh}}{2}\delta_{\mathcal{ K}_{\mathcal{P}}}}, \tag{4.26}\] _where \(\mathcal{K}_{\mathcal{P}}\) is defined like \(\mathcal{K}\) in Section 3 and \(\delta_{\mathcal{K}_{\mathcal{P}}}=\min\limits_{\theta_{m}<\theta<\theta_{M}} \cos\frac{\theta}{2}\) is a positive constant._ **Lemma 4.9**.: _[_12_, Lemma 2.4]_ _For any \(\alpha>0\), if \(\omega(\theta)>0\), then we have_ \[\int_{0}^{h}r^{\alpha}e^{-\sqrt{sr}\,\omega(\theta)}\,\mathrm{d}r=\mathcal{O}(s^{-\alpha-1})\quad\text{as}\quad s\to+\infty.\] **Lemma 4.10**.: _[_12_, Lemma 2.3]_ _Let \(\mathcal{C}^{\prime}_{\mathbf{x}^{\prime}_{c},h}\) be defined as in (3.3) and \(u_{0}\) be given in (4.23). Then \(u_{0}\in H^{1}(\mathcal{C}^{\prime}_{\mathbf{x}^{\prime}_{c},h})^{2}\) and \(\Delta^{\prime}u_{0}=0\) in \(\mathcal{C}^{\prime}_{\mathbf{x}^{\prime}_{c},h}\). 
Furthermore, it holds that_ \[\left\|u_{0}\right\|_{L^{2}(\mathcal{C}^{\prime}_{\mathbf{x}^{\prime}_{c},h}) ^{2}}\leq\frac{\sqrt{\theta_{M}-\theta_{m}}e^{-2\sqrt{s\Theta}\delta_{\mathcal{ K}_{\mathcal{P}}}}h^{2}}{2}\] _and_ \[\left\||\mathbf{x}^{\prime}|^{\alpha}u_{0}\right\|_{L^{2}(\mathcal{C}^{\prime }_{\mathbf{x}^{\prime}_{c},h})^{2}}^{2}\leq s^{-2(\alpha+1)}\frac{2(\theta_{M} -\theta_{m})\Gamma(4\alpha+4)}{(4\delta_{\mathcal{K}_{\mathcal{P}}})^{2\alpha +2}},\] _where \(\Theta\in[0,h]\) and \(\delta_{\mathcal{K}_{\mathcal{P}}}\) is given in Lemma 4.8._ We proceed to derive a key proposition to establish one main geometric result, which is a three-dimensional result similar to Proposition 4.1. **Proposition 4.2**.: _Consider the same setup in Lemma 4.6 with \(\mathbf{x}_{c}\) coinciding with the origin. Assume that \(\mathbf{f}_{j}\in C^{1,\alpha_{j}}\big{(}\Gamma^{\prime j}_{h}\times(-M,M) \big{)}^{3}\bigcap H^{\frac{1}{2}}\big{(}\Gamma^{\prime j}_{h}\times(-M,M) \big{)}^{3}\) and \(\mathbf{g}_{j}\in C^{\beta_{j}}\big{(}\Gamma^{\prime j}_{h}\times(-M,M)\big{)} ^{3}\bigcap H^{-\frac{1}{2}}\big{(}\Gamma^{\prime j}_{h}\times(-M,M)\big{)} ^{3}\) are independent of \(x_{3}\), where \(\alpha_{j},\beta_{j}\) are in \((0,1)\) and \(j=+,-\). Let \(\mathbf{v}\in H^{1}\left(\mathcal{C}^{\prime}_{h}\times(-M,M)\right)^{3}\) and \(\mathbf{w}\in H^{1}\left(\mathcal{C}^{\prime}_{h}\times(-M,M)\right)^{3}\) satisfy the PDE system (4.20). Let \(\mathbf{g}_{j}(\mathbf{0})=(\mathbf{g}_{j}^{(1,2)}(\mathbf{0})^{\top},g_{j}^{ 3}(\mathbf{0}))^{\top}\) for \(j=+,-\). Then we have_ \[\mathbf{f}_{+}(\mathbf{0})=\mathbf{f}_{-}(\mathbf{0}),\quad\mathbf{g}_{+}^{(1,2)}(\mathbf{0})=W\mathbf{g}_{-}^{(1,2)}(\mathbf{0})\quad\text{and}\quad g_{+} ^{3}(\mathbf{0})=g_{-}^{3}(\mathbf{0})=0, \tag{4.27}\] _where \(W=\begin{bmatrix}-\cos(\theta_{M}-\theta_{m}),&-\sin(\theta_{M}-\theta_{m}) \\ -\sin(\theta_{M}-\theta_{m}),&+\cos(\theta_{M}-\theta_{m})\end{bmatrix}\), \(\theta_{M}\) and \(\theta_{m}\) are the arguments corresponding to the boundaries \(\Gamma^{\prime+}_{h}\) and \(\Gamma^{\prime-}_{h}\) respectively._ Proof.: Similar to the proof of Proposition 4.1, we only consider the corresponding proofs for \((\Re\mathcal{P}(\mathbf{v}),\Re\mathcal{P}(\mathbf{w}))\). We follow similar arguments in the proof of Proposition 4.1 with some necessary modifications. We divide the proof into two parts. **Part I.** We first shall prove that \[\Re\mathcal{P}(\mathbf{f}_{+}^{(1,2)})(\mathbf{0})=\Re\mathcal{P}(\mathbf{f}_ {-}^{(1,2)})(\mathbf{0})\quad\text{and}\quad\Re\mathcal{P}(\mathbf{g}_{+}^{(1,2)})(\mathbf{0})=W\Re\mathcal{P}(\mathbf{g}_{-}^{(1,2)})(\mathbf{0}). \tag{4.28}\] In this part, we consider the PDE system (4.21). Noting that \(\mathbf{f}_{j}\in C^{1,\alpha_{j}}\big{(}\Gamma^{\prime j}_{h}\times(-M,M) \big{)}^{3}\) and \(\mathbf{g}_{j}\in C^{\beta_{j}}\big{(}\Gamma^{\prime j}_{h}\times(-M,M)\big{)} ^{3}\) for \(j=+,-\). 
Since \(\mathbf{f}_{j}\) and \(\mathbf{g}_{j}\) do not depend on \(x_{3}\), we have the following expansions \[\mathcal{P}(\mathbf{f}_{j})(\mathbf{x}^{\prime}) =\mathcal{P}(\mathbf{f}_{j})(\mathbf{0})+\delta\mathcal{P}(\mathbf{ f}_{j})(\mathbf{x}^{\prime}),\quad\left|\delta\mathcal{P}(\mathbf{f}_{j})( \mathbf{x}^{\prime})\right|\leq A_{j}|\mathbf{x}^{\prime}|^{1+\alpha_{j}},\] \[\mathcal{P}(\mathbf{g}_{j})(\mathbf{x}^{\prime}) =\mathcal{P}(\mathbf{g}_{j})(\mathbf{0})+\delta\mathcal{P}(\mathbf{g}_{j})( \mathbf{x}^{\prime}),\quad\left|\delta\mathcal{P}(\mathbf{g}_{j})(\mathbf{x}^{\prime}) \right|\leq B_{j}|\mathbf{x}^{\prime}|^{\beta_{j}}. \tag{4.29}\] By a series of derivations similar to those in Proposition 4.1, we deduce the following integral identity \[\mu\,\Re\mathcal{P}(\mathbf{f}_{+}^{(1,2)})(\mathbf{0})\cdot\begin{bmatrix} \mathrm{i}\\ -1\end{bmatrix}\left(e^{-s\sqrt{h}\,\mu(\theta_{M})}-1\right)+\mu\,\Re\mathcal{P }(\mathbf{f}_{-}^{(1,2)})(\mathbf{0})\cdot\begin{bmatrix}\mathrm{-i}\\ +1\end{bmatrix}\left(e^{-s\sqrt{h}\,\mu(\theta_{m})}-1\right)\] \[-2s^{-2}\Re\mathcal{P}(\mathbf{g}_{+}^{(1,2)})(\mathbf{0})\cdot \begin{bmatrix}1\\ \mathrm{i}\end{bmatrix}\mu^{-2}(\theta_{M})-2s^{-2}\Re\mathcal{P}(\mathbf{g}_ {-}^{(1,2)})(\mathbf{0})\cdot\begin{bmatrix}1\\ \mathrm{i}\end{bmatrix}\mu^{-2}(\theta_{m})=\sum_{j=1}^{8}Q_{j}, \tag{4.30}\] where \(\mu(\cdot)\) is given in Lemma 4.4. Here, the quantities \(\Re\mathbf{f}_{j}(\mathbf{0})\), \(\Re\mathbf{g}_{j}(\mathbf{0})\), \(\Re\delta\mathbf{f}_{j}\), \(\Re\delta\mathbf{g}_{j}\), \(\Re\mathbf{v}\) and \(\Re\mathbf{w}\) in \(R_{1}\)-\(R_{7}\) given by (4.12) are replaced by \(\Re\mathcal{P}(\mathbf{f}_{j}^{(1,2)})(\mathbf{0})\), \(\Re\mathcal{P}(\mathbf{g}_{j}^{(1,2)})(\mathbf{0})\), \(\delta\Re\mathcal{P}(\mathbf{f}_{j}^{(1,2)})\), \(\delta\Re\mathcal{P}(\mathbf{g}_{j}^{(1,2)})\), \(\Re\mathcal{P}(\mathbf{v}^{(1,2)})\) and \(\Re\mathcal{P}(\mathbf{w}^{(1,2)})\), respectively, for \(j=+,-\), yielding the terms \(Q_{1}\)-\(Q_{7}\). In addition, \[Q_{8}=-\int_{\mathcal{C}_{h}^{\prime}}\left(\Re\mathcal{G}_{1}^{(1,2)}( \mathbf{x}^{\prime})-\Re\mathcal{G}_{2}^{(1,2)}(\mathbf{x}^{\prime})\right) \cdot\mathbf{u}_{0}\,\mathrm{d}\mathbf{x}^{\prime}.\] Similar to Proposition 4.1, we have the following estimates \[\begin{array}{ll}\left|Q_{1}\right|=\mathcal{O}(s^{-1}e^{-c_{1}^{\prime}s}),&\left|Q_{2}\right|=\mathcal{O}(s^{-1}e^{-c_{2}^{\prime}s}),&\left|Q_{3} \right|=\mathcal{O}(e^{-c_{3}^{\prime}s}),&\left|Q_{4}\right|=\mathcal{O}(s^{ -2\alpha_{+}-2}),\\ \left|Q_{5}\right|=\mathcal{O}(s^{-2\alpha_{-}-2}),&\left|Q_{6}\right|= \mathcal{O}(s^{-2\beta_{+}-2}),&\left|Q_{7}\right|=\mathcal{O}(s^{-2\beta_{- }-2}),\end{array}\] where the constants \(c_{1}^{\prime}\), \(c_{2}^{\prime}\) and \(c_{3}^{\prime}\) do not depend on \(s\), and \(\alpha_{+},\alpha_{-},\beta_{+},\beta_{-}\in(0,1)\). From the expressions of \(\mathcal{G}_{1}\) and \(\mathcal{G}_{2}\) defined in (4.20), we write \(\Re\mathcal{G}_{1}^{(1,2)}-\Re\mathcal{G}_{2}^{(1,2)}:=\mathbf{h}_{1}+ \mathbf{h}_{2}\). Here, \(\mathbf{h}_{1}=-\int_{-L}^{+L}\phi^{\prime\prime}(x_{3})\begin{bmatrix} \lambda(\Re v_{1}-\Re w_{1})\\ \lambda(\Re v_{2}-\Re w_{2})\end{bmatrix}\,\mathrm{d}x_{3}\) and \(\mathbf{h}_{2}=(\lambda+\mu)\int_{-L}^{+L}\phi^{\prime}(x_{3})\begin{bmatrix} \partial_{1}(\Re v_{3}-\Re w_{3})\\ \partial_{2}(\Re v_{3}-\Re w_{3})\end{bmatrix}\,\mathrm{d}x_{3}\). 
By the regularity of \(\mathbf{v}\) and \(\mathbf{w}\) in \(\mathcal{C}_{h}^{\prime}\times(-M,M)\), we directly obtain that \(\Re\mathcal{G}_{1}^{(1,2)}\in H^{1}(\mathcal{C}_{h}^{\prime})^{2}\) and \(\Re\mathcal{G}_{2}^{(1,2)}\in L^{2}(\mathcal{C}_{h}^{\prime})^{2}\). By using the Cauchy-Schwarz inequality and the first equation in Lemma 4.3, we obtain \[\left|\int_{\mathcal{C}_{h}^{\prime}}(\Re\mathcal{G}_{1}^{(1,2)}-\Re\mathcal{G}_{2}^{(1,2)})\cdot\mathbf{u}_{0}\,\mathrm{d}\mathbf{x}^{\prime}\right|=\left|\int_{\mathcal{C}_{h}^{\prime}}(\mathbf{h}_{1}+\mathbf{h}_{2})\cdot\mathbf{u}_{0}\,\mathrm{d}\mathbf{x}^{\prime}\right|\] \[\quad\leq\|\mathbf{h}_{1}\|_{L^{2}(\mathcal{C}_{h}^{\prime})^{2}}\|\mathbf{u}_{0}\|_{L^{2}(\mathcal{C}_{h}^{\prime})^{2}}+\|\mathbf{h}_{2}\|_{L^{2}(\mathcal{C}_{h}^{\prime})^{2}}\|\mathbf{u}_{0}\|_{L^{2}(\mathcal{C}_{h}^{\prime})^{2}}\] \[\quad\leq C^{\prime}e^{-c_{3}^{\prime}s},\] where \(C^{\prime}>0\) and \(c_{3}^{\prime}>0\) do not depend on \(s\). Letting \(s\rightarrow+\infty\) in (4.30), we have \[\begin{bmatrix}-\mathrm{i}\\ 1\end{bmatrix}\cdot\left(\Re\mathcal{P}(\mathbf{f}_{+}^{(1,2)})(\mathbf{0})-\Re\mathcal{P}(\mathbf{f}_{-}^{(1,2)})(\mathbf{0})\right)=0.\] Noticing that \(\Re\mathcal{P}(\mathbf{f}_{+}^{(1,2)})(\mathbf{0})\) and \(\Re\mathcal{P}(\mathbf{f}_{-}^{(1,2)})(\mathbf{0})\) are real, we can prove \[\Re\mathcal{P}(\mathbf{f}_{+}^{(1,2)})(\mathbf{0})=\Re\mathcal{P}(\mathbf{f}_{-}^{(1,2)})(\mathbf{0}).\] Substituting the above equation into (4.30), then multiplying the new identity by \(s^{2}\), we get \[\mu\,s^{2}\Re\mathcal{P}(\mathbf{f}_{+}(\mathbf{0}))\cdot\begin{bmatrix}\mathrm{i}\\ -1\end{bmatrix}\left(e^{-s\sqrt{h}\mu(\theta_{M})}-e^{-s\sqrt{h}\mu(\theta_{m})}\right)-2\begin{bmatrix}1\\ \mathrm{i}\end{bmatrix}\cdot\left(\frac{\Re\mathcal{P}(\mathbf{g}_{+}(\mathbf{0}))}{\mu^{2}(\theta_{M})}+\frac{\Re\mathcal{P}(\mathbf{g}_{-}(\mathbf{0}))}{\mu^{2}(\theta_{m})}\right)=\sum_{j=1}^{8}s^{2}\,Q_{j}.\] We note that the first term on the left-hand side of the last equation is bounded by \(s^{2}e^{-c_{7}^{\prime}s}\) with \(c_{7}^{\prime}=\sqrt{h}\min\{\cos\frac{\theta_{M}}{2},\cos\frac{\theta_{m}}{2}\}>0\). As \(s\) tends to \(+\infty\), we obtain \[\begin{bmatrix}1\\ \mathrm{i}\end{bmatrix}\cdot\left(\frac{\Re\mathcal{P}(\mathbf{g}_{+}(\mathbf{0}))}{\mu^{2}(\theta_{M})}+\frac{\Re\mathcal{P}(\mathbf{g}_{-}(\mathbf{0}))}{\mu^{2}(\theta_{m})}\right)=0.\] By the same method used to prove the first equation in (4.4), we can show that the second equation in (4.28) holds. **Part II.** We shall next prove that \[\Re{\mathcal{P}}(f_{+}^{3})({\bf 0})=\Re{\mathcal{P}}(f_{-}^{3})({\bf 0})\quad\text{and}\quad\Re{\mathcal{P}}(g_{+}^{3})({\bf 0})=\Re{\mathcal{P}}(g_{-}^{3})({\bf 0})=0. \tag{4.31}\] In this part, we consider the PDE system (4.22). 
Performing similar operations to those above with the CGO solution \(u_{0}\) given in (4.23), we set up the following integral identity \[\frac{1}{\lambda}\int_{{\mathcal{C}}_{h}^{\prime}}\big{(}\Re{\mathcal{G}}_{1}^{(3)}({\bf x}^{\prime})-\Re{\mathcal{G}}_{2}^{(3)}({\bf x}^{\prime})\big{)}\,u_{0}{\rm d}{\bf x}^{\prime}=\int_{\Lambda_{h}^{\prime}}\partial_{\boldsymbol{\nu}}{\mathcal{P}}(v_{3}-w_{3})\,u_{0}-\partial_{\boldsymbol{\nu}}u_{0}\,{\mathcal{P}}(v_{3}-w_{3}){\rm d}\sigma\] \[\qquad\qquad\qquad\qquad\qquad\qquad+\int_{\Gamma_{h}^{\prime-}}\boldsymbol{\nu}_{m}\cdot\big{(}\Re{\mathcal{P}}(f_{-}^{(1,2)})({\bf x}^{\prime})+\frac{1}{\mu}\Re{\mathcal{P}}(g_{-}^{(3)})\big{)}\,u_{0}-\Re{\mathcal{P}}(f_{-}^{(3)})\,\partial_{\boldsymbol{\nu}_{m}}u_{0}{\rm d}\sigma.\] Due to the expansions of \({\bf f}_{\pm}\) and \({\bf g}_{\pm}\) in (4.29), the above integral identity can be reduced to \[\lambda\,\mu\,{\rm i}\,\Big{(}\Re{\mathcal{P}}(f_{+}^{3})(0)-\Re{\mathcal{P}}(f_{-}^{3})(0)\Big{)}-2\lambda\,s^{-1}\bigg{(}\frac{\Re{\mathcal{P}}(g_{+}^{3})(0)}{\mu^{2}(\theta_{M})}+\frac{\Re{\mathcal{P}}(g_{-}^{3})(0)}{\mu^{2}(\theta_{m})}\bigg{)}=\sum_{j=1}^{9}M_{j},\] where \[M_{1} =2\lambda\,s^{-1}\Re{\mathcal{P}}(g_{+}^{3})({\bf 0})\Big{(}\mu^{-1}(\theta_{M})\,\sqrt{sh}\,e^{-\sqrt{sh}\,\mu(\theta_{M})}+\mu^{-2}(\theta_{M})\,e^{-\sqrt{sh}\,\mu(\theta_{M})}\Big{)},\] \[M_{2} =2\lambda\,s^{-1}\Re{\mathcal{P}}(g_{-}^{3})({\bf 0})\Big{(}\mu^{-1}(\theta_{m})\,\sqrt{sh}\,e^{-\sqrt{sh}\,\mu(\theta_{m})}+\mu^{-2}(\theta_{m})\,e^{-\sqrt{sh}\,\mu(\theta_{m})}\Big{)},\] \[M_{3} =-\lambda\mu\,\int_{\Lambda_{h}^{\prime}}\partial_{\boldsymbol{\nu}}\Re{\mathcal{P}}(v_{3}-w_{3})({\bf x}^{\prime})\,u_{0}-\partial_{\boldsymbol{\nu}}u_{0}\,\,\Re{\mathcal{P}}(v_{3}-w_{3})({\bf x}^{\prime})\,{\rm d}\sigma,\] \[M_{4} =\lambda\,\mu\,{\rm i}\,\bigg{(}\frac{\Re{\mathcal{P}}(f_{-}^{3})(0)}{e^{\sqrt{sh}\,\mu(\theta_{m})}}-\frac{\Re{\mathcal{P}}(f_{+}^{(3)})(0)}{e^{\sqrt{sh}\,\mu(\theta_{M})}}\bigg{)},\,\,M_{5}=\mu\int_{{\mathcal{C}}_{h}^{\prime}}(t_{1}+t_{2})\,u_{0}{\rm d}{\bf x}^{\prime},\] \[M_{6} =\lambda\mu\,\int_{\Gamma_{h}^{\prime+}}\delta\Re{\mathcal{P}}(f_{+}^{3})({\bf x}^{\prime})\,\,\partial_{\boldsymbol{\nu}_{M}}u_{0}\,{\rm d}\sigma,\qquad M_{7}=\lambda\mu\,\int_{\Gamma_{h}^{\prime-}}\delta\Re{\mathcal{P}}(f_{-}^{3})({\bf x}^{\prime})\,\,\partial_{\boldsymbol{\nu}_{m}}u_{0}\,{\rm d}\sigma,\] \[M_{8} =-\lambda\int_{\Gamma_{h}^{\prime+}}\delta\Re{\mathcal{P}}(g_{+}^{3})({\bf x}^{\prime})\,\,u_{0}\,{\rm d}\sigma,\qquad\qquad M_{9}=-\lambda\int_{\Gamma_{h}^{\prime-}}\delta\Re{\mathcal{P}}(g_{-}^{3})({\bf x}^{\prime})\,\,u_{0}\,{\rm d}\sigma.\] Here, \[t_{1} =-(2\lambda+\mu)\Re\int_{-L}^{L}\phi^{\prime\prime}(x_{3})(v_{3}-w_{3}){\rm d}x_{3},\] \[t_{2} =(\lambda+\mu)\Re\int_{-L}^{L}\phi^{\prime}(x_{3})\big{(}\partial_{1}(v_{1}-w_{1})+\partial_{2}(v_{2}-w_{2})\big{)}{\rm d}x_{3}.\] Using the estimates listed in Lemma 4.8-Lemma 4.10, we have \[\big{|}M_{1}\big{|} ={\mathcal{O}}(e^{-q_{1}\sqrt{s}}),\quad\big{|}M_{2}\big{|}={\mathcal{O}}(e^{-q_{2}\sqrt{s}}),\quad\big{|}M_{3}\big{|}={\mathcal{O}}(s^{-1}e^{-hs/2}),\,\big{|}M_{4}\big{|}={\mathcal{O}}(e^{-q_{4}\sqrt{s}}),\] \[\big{|}M_{6}\big{|} ={\mathcal{O}}(s^{-\alpha_{+}-1}),\quad\big{|}M_{7}\big{|}={\mathcal{O}}(s^{-\alpha_{-}-1}),\quad\big{|}M_{8}\big{|}={\mathcal{O}}(s^{-\beta_{+}-1}),\qquad\big{|}M_{9}\big{|}={\mathcal{O}}(s^{-\beta_{-}-1}),\] where the above constants do not depend on \(s\). 
Using a similar technique to the estimate of \(Q_{8}\), we get \[\big{|}M_{5}\big{|}=\mathcal{O}(e^{-q_{5}s}),\quad\text{where}\quad q_{5}>0.\] Letting \(s\to+\infty\), the first equation in (4.31) clearly holds; that is, \[\Re\mathcal{P}(f_{+}^{3})(\mathbf{0})=\Re\mathcal{P}(f_{-}^{3})(\mathbf{0}).\] Substituting the above equation into the reduced integral identity and multiplying the resulting identity by \(s\), then letting \(s\to+\infty\), one obtains \[\frac{\Re\mathcal{P}(g_{+}^{(3)})(0)}{\mu^{2}(\theta_{M})}+\frac{\Re\mathcal{P}(g_{-}^{(3)})(0)}{\mu^{2}(\theta_{m})}=0.\] It is worth noting that \(\frac{\mu^{2}(\theta_{M})}{\mu^{2}(\theta_{m})}=e^{\mathrm{i}(\theta_{M}-\theta_{m})}\) and \(\theta_{M}-\theta_{m}\in(0,\pi)\). Hence, \[\Re\mathcal{P}(g_{+}^{3})(\mathbf{0})=\Re\mathcal{P}(g_{-}^{3})(\mathbf{0})=0.\] Thanks to the symmetric roles of \((\Re\mathbf{v},\Re\mathbf{w})\) and \((\Im\mathbf{v},\Im\mathbf{w})\), the two equations (4.28) and (4.31) directly lead to the results listed in (4.27). ## 5. The proofs of Theorem 3.1-Theorem 3.3 Proof of Theorem 3.1.: We prove this theorem by contradiction. Assume that there exists a planar corner/3D edge corner \(\mathbf{x}_{c}\) on \(\mathcal{S}_{1}\Delta\mathcal{S}_{2}\). Without loss of generality, we assume that \(\mathbf{x}_{c}\) coincides with the origin \(\mathbf{0}\), \(\mathbf{0}\in\mathcal{S}_{1}\) and \(\mathbf{0}\notin\mathcal{S}_{2}\). Denote \(\mathbf{w}=\mathbf{u}_{1}-\mathbf{u}_{2}\). **Case 1**: \(n=2\). We note that \(\Gamma_{h}^{\pm}=\mathcal{S}_{1}\cap B_{h}\), \(\mathcal{C}_{h}=\Omega_{1}\cap B_{h}\) and \(\mathcal{C}_{h}\cap\Omega_{2}\neq\emptyset\) for sufficiently small \(h\in\mathbb{R}_{+}\), where \(\Omega_{1}\) and \(\Omega_{2}\) are defined in a similar way as (2.8). Since \(\mathbf{u}_{1}\big{|}_{\Sigma_{0}}=\mathbf{u}_{2}\big{|}_{\Sigma_{0}}\) and \(\Sigma_{0}\subset\Sigma_{N}\), we know that \[\mathbf{w}=\mathbf{u}_{1}-\mathbf{u}_{2}=\mathcal{T}_{\boldsymbol{\nu}}\mathbf{u}_{1}-\mathcal{T}_{\boldsymbol{\nu}}\mathbf{u}_{2}=\mathbf{0}\quad\text{on}\quad\Sigma_{0}.\] Let \(\mathbf{w}^{-}\) and \(\mathbf{w}^{+}\) represent \(\mathbf{w}\big{|}_{\Omega_{1}}\) and \(\mathbf{w}\big{|}_{\Omega\setminus\overline{\Omega}_{1}}\), respectively. With the help of the unique continuation principle and the fact that \(\mathbf{u}_{2}\) is real analytic in \(B_{h}\), we readily obtain \[\mathcal{L}\,\mathbf{w}^{-}=\mathbf{0}\text{ in }\mathcal{C}_{h},\ \mathbf{w}^{-}\big{|}_{\Gamma_{h}^{j}}=\mathbf{u}_{1}\big{|}_{\Gamma_{h}^{j}}-\mathbf{u}_{2}\big{|}_{\Gamma_{h}^{j}}=-\mathbf{f}_{j}^{1},\ \mathcal{T}_{\boldsymbol{\nu}}\mathbf{u}_{1}\big{|}_{\Gamma_{h}^{j}}-\mathcal{T}_{\boldsymbol{\nu}}\mathbf{u}_{2}\big{|}_{\Gamma_{h}^{j}}=-\mathbf{g}_{j}^{1},\ j=+,-. \tag{5.1}\] From Proposition 4.1, we directly deduce that \(\mathbf{f}_{+}^{1}(\mathbf{0})=\mathbf{f}_{-}^{1}(\mathbf{0})\) and \(\mathbf{g}_{+}^{1}(\mathbf{0})=W\mathbf{g}_{-}^{1}(\mathbf{0})\), where \(W\) is given in Proposition 4.1. This contradicts the admissibility condition (5) in Definition 3.1. **Case 2**: \(n=3\). It is noted that \(\mathbf{0}=(\mathbf{0}^{\prime},0)^{\top}\in\Gamma_{h}^{\prime\pm}\times(-M,+M)\subset\mathcal{S}_{1}\) is a 3D edge corner point and \(\mathcal{C}_{h}^{\prime}\times(-M,+M)\subset\Omega_{1}\) and \(\mathcal{C}_{h}^{\prime}\times(-M,+M)\cap\Omega_{2}=\emptyset\), where \(\Gamma_{h}^{\prime\pm}\) and \(\mathcal{C}_{h}^{\prime}\) are defined in (3.3) and \(\Omega_{1}\), \(\Omega_{2}\) are given by (2.8). 
Similar to the previous case, we get \[\begin{cases}\mathcal{L}\,\mathbf{w}^{-}=\mathbf{0},&\text{in}\quad\mathcal{C}_{h}^{\prime}\times(-M,M),\\ \mathbf{w}=-\mathbf{f}_{+}^{1}&\text{on}\quad\Gamma_{h}^{\prime+}\times(-M,M),\\ \mathbf{w}=-\mathbf{f}_{-}^{1}&\text{on}\quad\Gamma_{h}^{\prime-}\times(-M,M),\\ \mathcal{T}_{\boldsymbol{\nu}_{M}}\mathbf{w}=-\mathbf{g}_{+}^{1}&\text{on}\quad\Gamma_{h}^{\prime+}\times(-M,M),\\ \mathcal{T}_{\boldsymbol{\nu}_{m}}\mathbf{w}=-\mathbf{g}_{-}^{1}&\text{on}\quad\Gamma_{h}^{\prime-}\times(-M,M).\end{cases}\] Proposition 4.2 then yields that \(\mathbf{f}_{+}^{1}(\mathbf{0})=\mathbf{f}_{-}^{1}(\mathbf{0})\), \(\mathbf{g}_{+}^{1,(1,2)}(\mathbf{0})=W\mathbf{g}_{-}^{1,(1,2)}(\mathbf{0})\) and \(g_{+}^{1,3}(\mathbf{0})=g_{-}^{1,3}(\mathbf{0})=0\), where \(W\) is given in Definition 3.1. This contradicts the admissibility condition (5) in Definition 3.1. The proof is complete. Proof of Theorem 3.2.: Firstly, we shall prove \(\mathcal{S}_{1}=\mathcal{S}_{2}\) by contradiction. Assume that \(\Omega_{1}\neq\Omega_{2}\). Since \(\Omega_{1}\) and \(\Omega_{2}\) are both convex polygons or polyhedra, there must exist a corner \(\mathbf{x}_{c}\) belonging to \(\Omega_{1}\Delta\Omega_{2}\), which contradicts Theorem 3.1; thus we have \(\Omega_{1}=\Omega_{2}\), which directly leads to \(\mathcal{S}_{1}=\mathcal{S}_{2}\). In what follows, we shall first prove (3.4) and (3.5) for the 2D case. Denote \(\hat{\Omega}:=\Omega_{1}=\Omega_{2}\) and \(\mathcal{S}:=\mathcal{S}_{1}=\mathcal{S}_{2}\). Let \(\mathbf{w}=\mathbf{u}_{1}-\mathbf{u}_{2}\). Since \(\mathbf{u}_{1}=\mathbf{u}_{2}\) on \(\Sigma_{0}\subset\Sigma_{N}\), we have \(\mathbf{w}=0\) and \(\mathcal{T}_{\boldsymbol{\nu}}\mathbf{w}=0\) on \(\Sigma_{0}\) as a direct consequence. By virtue of the unique continuation principle again, one obtains \[\mathbf{w}^{+}\big{|}_{\Gamma_{h}^{\pm}}=\mathcal{T}_{\boldsymbol{\nu}}\mathbf{w}^{+}\big{|}_{\Gamma_{h}^{\pm}}=\mathbf{0},\] where \(\mathbf{w}^{+}\) represents \(\mathbf{w}\big{|}_{\Omega\setminus\overline{\hat{\Omega}}}\). Since \((\mathcal{S};\mathbf{f}^{1},\mathbf{g}^{1})\) and \((\mathcal{S};\mathbf{f}^{2},\mathbf{g}^{2})\) are admissible, we get \[\mathcal{L}\,\mathbf{w}^{-}=\mathbf{0}\text{ in }\mathcal{C}_{h},\quad\mathbf{w}^{-}\big{|}_{\Gamma_{h}^{j}}=\mathbf{f}_{j}^{2}-\mathbf{f}_{j}^{1}\text{ and }\mathcal{T}_{\boldsymbol{\nu}}\mathbf{w}^{-}\big{|}_{\Gamma_{h}^{j}}=\mathbf{g}_{j}^{2}-\mathbf{g}_{j}^{1}\quad\text{for}\quad j=+,-, \tag{5.2}\] where \(\mathbf{w}^{-}\) signifies \(\mathbf{w}\big{|}_{\hat{\Omega}}\). From Proposition 4.1 and (5.2), we obtain the following local uniqueness \[\mathbf{f}_{+}^{2}(\mathbf{0})-\mathbf{f}_{+}^{1}(\mathbf{0})=\mathbf{f}_{-}^{2}(\mathbf{0})-\mathbf{f}_{-}^{1}(\mathbf{0})\quad\text{and}\quad\mathbf{g}_{+}^{1}(\mathbf{0})-\mathbf{g}_{+}^{2}(\mathbf{0})=W\Big{(}\mathbf{g}_{-}^{1}(\mathbf{0})-\mathbf{g}_{-}^{2}(\mathbf{0})\Big{)}.\] Furthermore, since \(\mathbf{f}_{i}\) and \(\mathbf{g}_{i}\) (\(i=1,2\)) are piecewise constant valued functions, we immediately obtain (3.4) and (3.5). By a similar method to the one used in the 2D case, we can prove (3.4) and (3.5) in the 3D case. Proof of Theorem 3.3.: The argument for proving this theorem is similar to the one used in the proof of Theorem 3.2, with only some necessary modifications. 
Suppose first that \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\) are two different piecewise curves in \(\mathbb{R}^{2}\) or piecewise surfaces in \(\mathbb{R}^{3}\). From Definition 3.2, it directly follows that \(\mathcal{S}_{1}\Delta\mathcal{S}_{2}\) contains a planar or 3D edge corner. Under the condition (3.6), adopting an argument similar to that in the proof of Theorem 3.2, this leads to a contradiction, and hence we can show that \(\mathcal{S}_{1}=\mathcal{S}_{2}\). Set \(\mathbf{w}=\mathbf{u}_{1}-\mathbf{u}_{2}\). Since \(\mathbf{u}_{1}=\mathbf{u}_{2}\) on \(\Sigma_{0}\subset\Sigma_{N}\), we have \(\mathbf{w}=\mathcal{T}_{\boldsymbol{\nu}}\mathbf{w}=\mathbf{0}\) on \(\Sigma_{0}\). By using the unique continuation property again, we conclude that \(\mathbf{w}=\mathbf{0}\) in \(\Omega\setminus\overline{\mathcal{S}}\). Hence, it directly follows that \[\mathbf{0}=[\mathbf{w}]_{\mathcal{S}_{1}}=[\mathbf{w}]_{\mathcal{S}_{2}}\,\Rightarrow\,[\mathbf{u}_{1}]_{\mathcal{S}_{1}}=[\mathbf{u}_{2}]_{\mathcal{S}_{2}}\,\text{and}\,[\mathcal{T}_{\boldsymbol{\nu}}\mathbf{u}_{1}]_{\mathcal{S}_{1}}=[\mathcal{T}_{\boldsymbol{\nu}}\mathbf{u}_{2}]_{\mathcal{S}_{2}}\,\Rightarrow\,\mathbf{f}_{1}=\mathbf{f}_{2}\,\text{and}\,\mathbf{g}_{1}=\mathbf{g}_{2}.\] The proof is complete. ## Acknowledgment The work of H. Diao is supported by National Natural Science Foundation of China (No. 12371422) and the Fundamental Research Funds for the Central Universities, JLU (No. 93Z172023Z01). The work of H. Liu was supported by the Hong Kong RGC General Research Funds (projects 11311122, 11300821 and 12301420), NSF/RGC Joint Research Fund (project N_CityU101/21) and the ANR/RGC Joint Research Fund (project A_CityU203/19). The work of Q. Meng is supported by the Hong Kong RGC Postdoctoral Fellowship (No. 9061028).
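As a brief editorial remark appended after the proofs (not part of the original text), one can check directly that the matrix \(W\) appearing in Propositions 4.1-4.2 and in the local uniqueness relations above is a reflection. Writing \(\varphi=\theta_{M}-\theta_{m}\), a direct computation gives \[W=\begin{bmatrix}-\cos\varphi&-\sin\varphi\\ -\sin\varphi&\cos\varphi\end{bmatrix},\qquad W^{\top}=W,\qquad W^{2}=\begin{bmatrix}\cos^{2}\varphi+\sin^{2}\varphi&0\\ 0&\sin^{2}\varphi+\cos^{2}\varphi\end{bmatrix}=I_{2},\qquad\det W=-1.\] Hence \(W\) is a symmetric orthogonal involution, so a relation of the form \(\mathbf{g}_{+}(\mathbf{0})=W\mathbf{g}_{-}(\mathbf{0})\) is equivalent to \(\mathbf{g}_{-}(\mathbf{0})=W\mathbf{g}_{+}(\mathbf{0})\).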
This paper focuses on an elastic dislocation problem that is motivated by applications in the geophysical and seismological communities. In our model, the displacement satisfies the Lam\'e system in a bounded domain with a mixed homogeneous boundary condition. We also allow the occurrence of discontinuities in both the displacement and traction fields on the fault curve/surface. By the variational approach, we first prove the well-posedness of the direct dislocation problem in a rather general setting where the Lam\'e parameters are real-valued $L^\infty$ functions satisfying the strong convexity condition. Next, by considering the scenario in which the Lam\'e parameters are constant and the fault curve/surface possesses certain corner singularities, we establish a local characterization of the slip vectors at the corner points over the dislocation curve/surface. In our study, the dislocation is geometrically rather general and may be open or closed. For both cases, we establish uniqueness results for the inverse problem of determining the dislocation curve/surface and the slips.
2309.06435
Early time dynamics far from equilibrium via holography
We investigate the early time dynamics of heavy ion collisions studying the time evolution of the energy-momentum tensor as well as energy-momentum correlations within a uniformly thermalizing holographic QGP. From these quantities, we suggest a far-from-equilibrium definition of shear viscosity, which is a crucial property of QCD matter as it significantly determines the generation of elliptic flow already at early times. During an exemplary initial heating phase of the holographic QGP the shear viscosity to entropy density ratio decreases down to 60%, followed by an overshoot to 110% of the near-equilibrium value, $\eta/s=1/(4\pi)$. Implications for the QCD QGP are discussed. Subsequently, we consider a holographic QGP which is Bjorken-expanding. Its energy-momentum tensor components have a known hydrodynamic attractor to which all time evolutions collapse independently of the initial conditions. Based on this, we propose a definition for a far-from-equilibrium speed of sound, and analytically compute its hydrodynamic attractor. Subjecting this Bjorken-expanding plasma to an external magnetic field and an axial chemical potential, we study the chiral magnetic effect far from equilibrium.
Matthias Kaminski, Casey Cartwright, Marco Knipfer, Michael F. Wondrak, Björn Schenke, Marcus Bleicher
2023-09-12T17:56:02
http://arxiv.org/abs/2309.06435v1
# Early time dynamics far from equilibrium via holography ###### Abstract: We investigate the early time dynamics of heavy ion collisions studying the time evolution of the energy-momentum tensor as well as energy-momentum correlations within a uniformly thermalizing holographic QGP. From these quantities, we suggest a far-from equilibrium definition of shear viscosity, which is a crucial property of QCD matter as it significantly determines the generation of elliptic flow already at early times. During an exemplary initial heating phase of the holographic QGP the shear viscosity of entropy density ratio decreases down to 60%, followed by an overshoot to 110% of the near-equilibrium value, \(\eta/s=1/(4\pi)\). Implications for the QCD QGP are discussed. Subsequently, we consider a holographic QGP which is Bjorken-expanding. Its energy-momentum tensor components have a known hydrodynamic attractor to which all time evolutions collapse independent of the initial conditions. Based on this, we propose a definition for a far from equilibrium speed of sound, and analytically compute its hydrodynamic attractor. Subjecting this Bjorken-expanding plasma to an external magnetic field and an axial chemical potential, we study the chiral magnetic effect far from equilibrium. Introduction One important practical and theoretical question is why relativistic hydrodynamics describes heavy-ion collision data far beyond its regime of applicability. In particular, hydrodynamics appears to be a valid description far away from local and global equilibrium, in the presence of large gradients, at very early times during the evolution of quark-gluon-plasma (QGP) after collisions of heavy ions or even heavy-light (Pb+p) and light-light (p+p) collisions [1]. In part, these points were confirmed in holographic plasma [2] in which numerical computation of all observables is possible at all times. Here, we report on the continued holographic exploration of the far-from-equilibrium regime of \(\mathcal{N}=4\) Super-Yang-Mills (SYM) theory. We use the holographic correspondence to compute three time-dependent quantities: the shear transport, the speed of sound, and the chiral magnetic current. ## 2 \(\eta/s\) far from equilibrium We intend to explore the early times after a heavy-ion collision during which the system is far from equilibrium. Near equilibrium, a Kubo formula relates the retarded momentum space shear correlator \(\tilde{G}_{R}^{xy,xy}=\langle T^{xy}T^{xy}\rangle\) at vanishing spatial momentum to the shear viscosity: \(\eta=-\lim_{\omega\to 0}\frac{1}{\omega}\mathrm{Im}\,\tilde{G}_{R}^{xy,xy}( \omega,\mathbf{k}=\mathbf{0})\). Here, we holographically compute \(\tilde{G}_{R}^{xy,xy}\) far from equilibrium and define a _far-from-equilibrium shear viscosity_[3] \[\eta(t_{avg})=-\lim_{\omega\to 0}\frac{1}{\omega}\mathrm{Im}\,\tilde{G}_{R}^{ xy,xy}(t_{avg},\omega,\mathbf{k}=\mathbf{0})\,, \tag{1}\] where \(t_{avg}\) is the time with which the state changes as discussed below. Thermalization of a plasma corresponds to horizon formation in the gravity dual [4], Fig. 1 (left). A far-from-equilibrium plasma state heating up over a time \(\Delta t\) is modeled [3] by the AdS\({}_{4}\) Vaidya metric \[ds^{2}=g_{\mu\nu}dx^{\mu}dx^{\nu}=\frac{1}{z^{2}}(-f(t,z)dt^{2}-2dtdz+dx^{2}+ dy^{2})\,,\quad f(t,z)=1-2G_{N}M(t)z^{3}\,, \tag{2}\] with the time coordinate \(t\), the radial AdS-coordinate \(z\), having the boundary at \(z=0\) and the horizon at \(z=1\), and Newton's gravitational constant \(G_{N}\). 
Note, that the black hole mass \(M(t)=m+m_{s}(1+\tanh(t/\Delta t))/2\) is a function of the time \(t\).1 The background metric (2) is perturbed by a metric shear perturbation, \(h_{xy}(t,z)\), which is required to solve a linearized Einstein equation. Solutions correspond to the expectation value of the energy-momentum tensor of the plasma and its source \(h_{\mu\nu}^{(0)}\) according to \(h_{\mu\nu}\sim h_{\mu\nu}^{(0)}+\langle T_{\mu\nu}\rangle\,z^{4}+\dots\). In order to obtain the retarded shear correlator \(G_{R}^{xy,xy}\), linear response theory allows to utilize a delta source: \(h_{xy}^{(0)}=\delta(\tau-t_{p})\). This yields the two-point function in terms of a one-point function at time \(t\) in presence of a delta-source at time \(t_{p}\): \(\langle T^{xy}\rangle_{\delta(t_{p})}=\int d\tau\tau G_{R}^{xy,xy}(\tau,t) \delta(\tau-t_{p})\propto G_{R}^{xy,xy}(t_{p},t)\). Assuming no dependence on spatial boundary coordinates \(x\) or \(y\), a Wigner transform now yields the representation in terms of the relative frequency \(\omega\): \(G_{R}^{xy,xy}(t_{p},t)\to G_{R}^{xy,xy}(t_{avg},t_{\mathrm{rel}})\sim\tilde{G} _{R}^{xy,xy}(t_{avg},\omega)\;e^{-i\omega t_{\mathrm{rel}}}\), where the average time is \(t_{avg}=(t_{p}+t)/2\) and the relative time is \(t_{rel}=t_{p}-t\). Near an equilibrium state, the ratio of shear viscosity to entropy density is \(\eta/s=1/(4\pi)\) in \(\mathcal{N}=4\) SYM theory [5] (black line in Fig 1, right). In Fig. 1 (right), the shear viscosity (1) is shown for an example plasma heat up starting at \(T_{c}=155\) MeV, ending at \(T_{final}=310\) MeV, rising over \(\Delta t=0.3\) fm (RHIC energies). For this example, the shear transport ratio first drops below \(60\%\), then rises above \(110\%\) of \(1/(4\pi)\).2 How typical is this behavior when changing \(\Delta t\) and \(T_{final}\)? Fig. 2 (left) shows that over a wide range of values a significant decrease below \(1/(4\pi)\) is generic. The increase above \(1/(4\pi)\) only exists for small enough \(T_{final}<6.5T_{c}\). Fig. 2 (right) shows a stark contrast between the holographic _far-from-equilibrium_ results (\(\eta/s<1/(4\pi)\)), and the _near-equilibrium_ lattice QCD and _near-equilibrium_ FRG results (suggesting \(\eta/s>1/(4\pi)\)). This may indicate that the Bayesian study [9] underestimated the elliptic flow generated at early times. Footnote 2: It is important to recall that \(\eta/s=1/(4\pi)\) is _not_ a universal lower bound [6]. ## 3 Speed of sound far from equilibrium Consider a Bjorken-expanding \(\mathcal{N}=4\) SYM plasma. At early times, thermodynamic quantities are not strictly well-defined as the plasma is far from equilibrium and has a large pressure anisotropy. Here, we propose working definitions far from equilibrium. We use the temperature definition \(T=(\epsilon/\sigma_{SB})^{1/4}\), which is sometimes called _pseudo temperature_[1]. We holographically compute the speed of sound far from equilibrium according to the proposed definition [10] \[c_{\perp}^{2}=-\frac{\partial\langle T_{x_{1}}^{x_{1}}\rangle}{\partial\langle T _{0}^{0}\rangle}\,,\quad c_{||}^{2}=-\frac{\partial\langle T_{\xi}^{\xi} \rangle}{\partial\langle T_{0}^{0}\rangle}\,, \tag{3}\] with the pseudorapidity \(\xi=\frac{1}{2}\ln[(t+x_{3})/(t-x_{3})]\), the spatial coordinates \(x_{1},x_{2},x_{3}\) and the proper time \(\tau=\sqrt{t^{2}-x_{3}^{2}}\) with the Stefan-Boltzmann constant \(\sigma_{SB}\)[10]. Similar to the previous section, a time-dependent metric provides the thermalizing plasma state. 
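To make the heat-up protocol above concrete, here is a small numerical sketch (an editorial addition, not code from the original proceedings). It evaluates the Vaidya mass function \(M(t)\) quoted above and, assuming the static black-brane relations \(z_{h}=(2G_{N}M)^{-1/3}\) (the locus where the blackening factor \(f(t,z)\) vanishes) and \(T=3/(4\pi z_{h})\) remain valid quasi-adiabatically, tracks the instantaneous horizon location and temperature during the ramp. The parameter values are illustrative placeholders only.

```python
import numpy as np

# Illustrative placeholder parameters (not the values used in the proceedings).
G_N = 1.0            # Newton's constant in AdS units
m, m_s = 1.0, 1.0    # initial mass and mass step of the Vaidya ramp
dt_ramp = 0.3        # ramp duration "Delta t"

def mass(t):
    """Vaidya mass function M(t) = m + m_s * (1 + tanh(t / dt)) / 2."""
    return m + m_s * (1.0 + np.tanh(t / dt_ramp)) / 2.0

def horizon(t):
    """Locus z_h(t) where the blackening factor f = 1 - 2 G_N M(t) z^3 vanishes."""
    return (2.0 * G_N * mass(t)) ** (-1.0 / 3.0)

def temperature(t):
    """Quasi-static Hawking temperature of the planar AdS4 brane, T = 3 / (4 pi z_h)."""
    return 3.0 / (4.0 * np.pi * horizon(t))

for ti in np.linspace(-2.0, 2.0, 9):
    print(f"t = {ti:+.2f}   M = {mass(ti):.3f}   z_h = {horizon(ti):.3f}   T = {temperature(ti):.3f}")
```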
Figure 1: Left: Thermalization in quantum field theories corresponds to horizon formation in their gravity dual description. Right: Example for a time-evolution of the far-from-equilibrium shear (1) normalized to the entropy measure \(s(t_{avg})\propto\frac{\partial S^{\rm on-shell}}{\partial T}\) based on identification of \(S^{\rm on-shell}\) with the generating functional.

However, now this plasma is expanding in the longitudinal \(x_{3}\)-direction, while isotropic and uniform in the transverse \((x_{1},x_{2})\)-plane. This complication now only allows numerical solutions for the background metric describing the time-dependent state, using [2]. It can be analytically shown [10] that the pressure anisotropy attractor [11] implies an attractor for the time-dependent speed of sound \[\mathcal{C}_{||}^{2}=\frac{1}{3}-\frac{2}{9}\left(\mathcal{A}_{0}(w)+\frac{w}{4}\frac{\partial\mathcal{A}_{0}(w)}{\partial w}\right)\,, \tag{4}\] with \(w=\tau T\) and the pressure anisotropy attractor \(\mathcal{A}_{0}(w)=(2530w-276)/(3975w^{2}-570w+120)\) [11]. This sound attractor (solid black line) is shown in Fig. 3 along with the numerically computed speed of sound in Bjorken-expanding holographic plasma, starting from various distinct initial conditions (solid colorful lines) and the hydrodynamic expectations (dashed lines). Hydrodynamic expectations coincide with the sound attractor already at very early times (\(\tau T\approx 0.5\)), indicating again a fast hydrodynamization. All initial states evolve towards the sound attractor very quickly, around \(\tau T<1\). The perpendicular speed of sound has an analogous attractor [10].

## 4 Chiral magnetic effect far from equilibrium

In the Bjorken-expanding holographic plasma described in the previous section, we introduce a chemical potential \(\mu\) and magnetic field \(B\) which both depend on time due to the Bjorken-expansion. In this setting, we compute [12] (highlighted in [13]) the time-dependent chiral magnetic current \(\langle J_{V}^{1}\rangle\) generated due to the chiral magnetic effect (CME). At distinct energies, this current first increases rapidly and then decreases more slowly, see Fig. 4. Although Fig. 4 suggests the CME to be weaker at higher energies, the accumulated charge which would be measured in the detectors indicates the opposite to be true when various parameter combinations are considered [12].

Figure 2: Left: Dependence of \(\eta/s\) on the instantaneous temperature, \(T(t_{avg})\) defined from the Hawking temperature of the black hole at each time. Shaded areas indicate the values arising from a sweep over a range of heat-up times for a fixed peak temperature. Right: These same holographic results (SYM, area enclosed by red solid curve) compared to \(1/(4\pi)\) (SYM, blue line). Theoretical QCD results are computed near equilibrium by functional renormalization group (FRG, dashed) [7] and lattice QCD (lQCD, circles) [8].

## 5 Discussion

We have computed time-dependent shear viscosity, speeds of sound, and the chiral magnetic current in holographic plasmas far from equilibrium. A small value of \(\eta/s\) at early times implies large generation of elliptic flow at early times, challenging current assumptions. In order to check the far-from-equilibrium speed of sound definition (3), the speed of sound waves is to be calculated directly from the fluctuations around Bjorken-expanding holographic plasma, using techniques from [3]. 
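Before closing, the attractor formula (4) quoted above is simple enough to evaluate directly. The following short sketch (an editorial addition, not part of the original proceedings) computes \(\mathcal{A}_{0}(w)\) and the longitudinal sound attractor \(\mathcal{C}_{||}^{2}(w)\) at a few values of \(w=\tau T\); for large \(w\) the result approaches the conformal equilibrium value \(1/3\), since \(\mathcal{A}_{0}(w)\to 0\).

```python
import sympy as sp

w = sp.symbols('w', positive=True)

# Pressure-anisotropy attractor A_0(w) quoted in the text [11].
A0 = (2530*w - 276) / (3975*w**2 - 570*w + 120)

# Longitudinal speed-of-sound attractor, Eq. (4):
# C_par^2 = 1/3 - (2/9) * ( A_0 + (w/4) * dA_0/dw )
C_par_sq = sp.Rational(1, 3) - sp.Rational(2, 9) * (A0 + (w / 4) * sp.diff(A0, w))

C_num = sp.lambdify(w, sp.simplify(C_par_sq), 'math')
for wi in (0.5, 1.0, 2.0, 5.0, 20.0):
    print(f"w = tau*T = {wi:5.1f}   C_par^2 = {C_num(wi):.4f}")
# For large w the value approaches the conformal equilibrium result 1/3.
```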
For a conclusive CME current estimate, a dynamical magnetic field interacting with the charged plasma, and a dynamically created axial imbalance need to be included. In summary, hydrodynamics performs well when its definitions are pushed beyond their limits. This may suggest that an effective field theory of fluid dynamics far from equilibrium is awaiting its construction. This work was supported by an Excellence Fellowship from Radboud University (M.F.W.), the U.S. Department of Energy grant DE-SC0012447 (C.C., M.K., M.K.), and DOE Contract No. DE-SC0012704 (B.P.S.). Figure 4: The charge current generated along the magnetic field due to the CME at distinct energies (\(\propto T^{4}\)). Figure 3: Attractor for the longitudinal speed of sound (black solid line) towards which all initial conditions (solid lines, distinct colors) evolve. Dashed: 0th (black), 1st (red), 2nd (blue) order hydrodynamic expectation.
We investigate the early time dynamics of heavy ion collisions, tracking the time evolution of the energy-momentum tensor and of energy-momentum correlations within a uniformly thermalizing holographic QGP. From these quantities, we propose a far-from-equilibrium definition of the shear viscosity, a crucial property of QCD matter since it largely determines the generation of elliptic flow already at early times. During an initial heating phase of the holographic QGP, the shear viscosity to entropy density ratio decreases down to 60% and then overshoots to 110% of the near-equilibrium value $\eta/s=1/(4\pi)$. Implications for the QCD QGP are discussed. Subsequently, we consider a holographic QGP which is Bjorken-expanding. Its energy-momentum tensor
2305.20041
Simulation and Retargeting of Complex Multi-Character Interactions
We present a method for reproducing complex multi-character interactions for physically simulated humanoid characters using deep reinforcement learning. Our method learns control policies for characters that imitate not only individual motions, but also the interactions between characters, while maintaining balance and matching the complexity of reference data. Our approach uses a novel reward formulation based on an interaction graph that measures distances between pairs of interaction landmarks. This reward encourages control policies to efficiently imitate the character's motion while preserving the spatial relationships of the interactions in the reference motion. We evaluate our method on a variety of activities, from simple interactions such as a high-five greeting to more complex interactions such as gymnastic exercises, Salsa dancing, and box carrying and throwing. This approach can be used to ``clean-up'' existing motion capture data to produce physically plausible interactions or to retarget motion to new characters with different sizes, kinematics or morphologies while maintaining the interactions in the original data.
Yunbo Zhang, Deepak Gopinath, Yuting Ye, Jessica Hodgins, Greg Turk, Jungdam Won
2023-05-31T17:13:24
http://arxiv.org/abs/2305.20041v1
# Simulation and Retargeting of Complex Multi-Character Interactions ###### Abstract. We present a method for reproducing complex multi-character interactions for physically simulated humanoid characters using deep reinforcement learning. Our method learns control policies for characters that imitate not only individual motions, but also the interactions between characters, while maintaining balance and matching the complexity of reference data. Our approach uses a novel reward formulation based on an **interaction graph** that measures distances between pairs of interaction landmarks. This reward encourages control policies to efficiently imitate the character's motion while preserving the spatial relationships of the interactions in the reference motion. We evaluate our method on a variety of activities, from simple interactions such as a high-five greeting to more complex interactions such as gymnastic exercises, Salsa dancing, and box carrying and throwing. This approach can be used to "clean-up" existing motion capture data to produce physically plausible interactions or to retarget motion to new characters with different sizes, kinematics or morphologies while maintaining the interactions in the original data. Character Animation, Interactions, Physics Simulation, Physics-based Characters, Reinforcement Learning

## 1. Introduction

physically simulated characters have been studied far less than single characters, in part because it is very challenging to learn controllers for multiple characters interacting with each other. As with a single character, balance must be maintained, but the interaction constraints also have to be solved simultaneously. Although some breakthrough results were demonstrated in recent studies (Haworth et al., 2020; Liu et al., 2022; Won et al., 2021), the complexity of the demonstrated interactions are still far from what people routinely perform in daily life. We demonstrate a novel learning-based method that provides a physics-based retargeting of complex interactions for multiple characters. More specifically, given reference motions that capture interactions between people, we learn control policies (a.k.a. controllers) of simulated characters via deep reinforcement learning that imitate not only the motion of the individuals but also the interactions between them. Our learned policies can produce plausible and semantically equivalent interactions when the sizes and kinematics of the characters are varied significantly. If the size of the simulated characters match those in the original motion capture data, the resulting motion is almost indistinguishable from the reference data and any errors from the capture process are eliminated by ensuring that the interactions are now physically plausible. To solve the challenges in learning multi-character interactions, we develop new rewards based on an **interaction graph** (IG) which measures distances between pairs of specified locations on the characters, and in particular reflects between-character distances. Rewards based on the IG enable control policies to efficiently deploy complex interactions for physically simulated characters while preserving the semantics of the interactions (i.e. spatial relationship) included in the reference data. In our formulation, manual annotation of the interaction in each motion is not necessary except for choosing a set of general interaction landmarks that work for a variety of scenarios. To show the effectiveness of our method, we record motions that include multi-person interactions at varying levels of difficulty, and test our method with motions that are composed of simple interactions such as a high-five or other greetings, as well as complex interactions such as gymnastic exercises, Salsa dancing, and box moving/throwing. We demonstrate the generality of our system by reproducing interactions not only for simulated characters with different body dimensions than the motion capture subjects, but also for a robot with a different kinematic structure. Finally, we run comparison and ablation studies that justify each choice in the system design.

## 2. Related Work

We first review papers synthesizing multi-character interaction for kinematic approaches, which inspired our dynamic formulation. 
We then review recent progress in control of physically simulated characters via deep reinforcement learning. ### Multi-character Interactions for Kinematic Characters Most approaches for creating or editing multi-character interactions among kinematic characters are data-driven methods, which means that appropriate motion capture data should be obtained in advance. A popular line of work is based on optimization, where the basic idea is to optimize individual character motions with spatio-temporal constraints (Kwon et al., 2008; Liu et al., 2006), game theory (Shum et al., 2007, 2008; Shum et al., 2012; Wampler et al., 2010) so that the optimized motions have newly synthesized interactions. These methods are suitable for synthesizing motions having sparse interactions, however, the optimization quickly becomes intractable as the complexity of the interactions increases, so it is not suitable for synthesizing dense and complex interactions. Another approach is patch-based methods, where a patch includes a short interaction of multiple characters (Hyun et al., 2013; Lee et al., 2006; Shum et al., 2008; Won et al., 2014; Yersin et al., 2009). For this work, motion capture data where multiple actors are recorded simultaneously is required. New motions can be synthesized by connecting boundaries of multiple patches, thus creating multiple interactions that were not performed together in the original data. Methods for adapting existing interactions to new environments and characters have also been studied (Al-Asqhar et al., 2013; Ho et al., 2014, 2010; Jin et al., 2018; Kim et al., 2021, 2014, 2009). The key idea is to define an interaction descriptor that encodes the spatial and temporal relationship, then to edit the motions while minimizing the semantic difference between the original motion and the edited motions where the difference is measured by the descriptor. This idea has also been used to synthesize hands interacting with objects (Zhang et al., 2021). Our state representation and reward function for deep reinforcement learning are inspired by one of these descriptor-based approaches (Ho et al., 2010), where they construct an interaction graph by connecting edges among pre-specified markers on the body surface. By utilizing deep reinforcement learning and a novel formulation to measure interaction graph similarities, our method can be applied to dynamic characters having different body shapes instead of generating kinematic interaction motions as was done in (Ho et al., 2010). ### Physically Simulated Characters and Interactions In many cases, what we refer to as _interaction_ between different characters means physical interaction where physical forces occur between the characters at contacts. By incorporating physics simulation into the motion of the characters, those physical interactions can be synthesized in a plausible manner. Multi-character interactions have been created by solving a quadratic programming problem where the equations of motion for the entire dynamical system are used as either hard or soft constraints (Mordatch et al., 2012; Otani and Bouyarmane, 2017; Vaillant et al., 2017). Although cooperative multi-character interactions could be synthesized by these methods without using reference data, the generated motions are typically slow and less-dynamic due to the quasi-static assumption in their optimization formulation, and they require frame-level specification of all interactions in advance. 
Combining deep reinforcement learning (DRL) and motion capture data has allowed several breakthroughs in learning imitation controllers (Bergamin et al., 2019; Chentanez et al., 2018; Fussell et al., 2021; Park et al., 2019; Peng et al., 2018, 2021; Won et al., 2020), learning reusable motor skills (Merel et al., 2019; Peng et al., 2019, 2022; Won et al., 2022; Yao et al., 2022), and motion tracking (Winkler et al., 2022; Ye et al., 2022). Although there also have been some studies synthesizing dynamic interactions with objects or other characters (Hawworth et al., 2020; Liu et al., 2022; Merel et al., 2020; Won et al., 2021), the complexity of the demonstrated interactions are still not comparable to what people routinely perform in daily life. In addition, each of these works developed a task-specific reward function to enforce interactions between multiple entities. In this paper, we aim to synthesize various types of spatially and temporally dense interactions for full-body humanoid characters that are physically simulated. This problem is especially challenging because the motor skills must be sophisticated enough to perform those complex interactions while remaining robust enough to maintain balance. ## 3. Method Our goal is to build controllers that enable physically simulated characters to perform complex physical interactions with each other. For each behavior, we take a reference motion capture clip representing the desired multi-character interaction and produce controllers that enable the simulated characters to mimic those interactions. Our goal is to generate character interactions that are _semantically_ similar to those present in the reference motions. To achieve this, we use multi-agent deep reinforcement learning where the states and rewards are designed based on spatial descriptors inspired by (Ho et al., 2010). Different from (Ho et al., 2010) where only kinematic motions are generated, our method can be applied to dynamic characters having dramatically different body shapes from the captured actors. ### Environment Our characters are modeled as articulated rigid body objects by following (Won et al., 2020). Each character has 22 links and 22 joints, where each joint has three degree-of-freedom and is actuated by stable-PD servos (Tan et al., 2011) given target joint angles. We used an open-source framework (Won et al., 2020) to implement and simulate our characters. ### Problem Formulation We formulate the problem as a multi-agent Markov Decision Process (MDP). Consider \(k\) controllable agents, we define the tuple \(\{S,O_{1}\cdots O_{k},A_{1}\cdots A_{k},R_{1}\cdots R_{k},T,\rho\}\) where \(S\) is the entire state of our environment, \(O_{i}\) and \(A_{i}\) are the observation and action of \(i\)-th agent, respectively. The reward function \(R_{i}:O_{i}\times A_{i}\rightarrow\mathbb{R}\) evaluates the quality of the current state and action of \(i\)-th agent, the environment is updated by the transition function \(T:S\times A_{1}\times\cdots\times A_{k}\to S\) given a set of actions performed by all the agents, and \(\rho:S\rightarrow[0,1]\) is the probability distribution of the initial states. We aim to learn a set of optimal control policies \(\{\pi_{i}|i=1\cdots k\}\) that maximizes average expected return \(\mathbb{E}\left[\sum_{t=0}^{T}\gamma^{t}r_{t,t}\right]\) for each agent, where \(\gamma\in(0,1)\) is the discount factor that prevents the sum from being infinity. 
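As a minimal illustration of the objective just stated (an editorial sketch, not code from the paper), the discounted return that each control policy maximizes can be estimated from a sampled episode as follows; the reward values and discount factor below are placeholders.

```python
def discounted_return(rewards, gamma=0.95):
    """Return sum_t gamma^t * r_t for one sampled episode of a single agent."""
    g, total = 1.0, 0.0
    for r in rewards:
        total += g * r
        g *= gamma
    return total

# Example: three steps of bounded rewards for one agent.
print(discounted_return([0.8, 0.9, 0.7], gamma=0.95))
```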
### Interaction Graph To better describe the semantics of the interaction happening between agents (or between an agent and an object) during the motion, we define the notion of an Interaction Graph (IG), a graph-based spatial descriptor where the information on interactions is stored in its vertices and edges. This idea is inspired by (Ho et al., 2010). To construct an interaction graph, we first place a collection of markers on salient locations on each character (see Figure 2). Fifteen markers are placed in total for each character, where three markers are on each limb in the vicinity of joint locations, one on the pelvis, one on the torso, and one on the head. These markers will be considered as the nodes of the graph, each of which is associated with a 6-dimensional vector \(n_{i}=(p_{i},v_{i})\in\mathbb{R}^{6}\), where \(p_{i}\in\mathbb{R}^{3}\) is the position of the vertex and \(v_{i}\in\mathbb{R}^{3}\) is the velocity of the vertex. For example, a total of 30 vertices will be used for interactions associated with two characters (see Figure 2). On every time step, we perform a Delauney Tetrahedralization over all the vertices based on the spatial distances between pairs of markers to get a compact collection of edges connecting the vertices. Each edge is assigned a feature vector \(e_{ij}=(p_{ij},v_{ij})\in\mathbb{R}^{6}\) that encodes the relative relationship between the two vertices, where \(p_{ij}=p_{j}-p_{i}\in\mathbb{R}^{3}\) and \(v_{ij}=v_{j}-v_{i}\in\mathbb{R}^{3}\) are the positional and velocity components of the edge features. The example interaction graph in Figure 2 includes both edges connecting nodes on a single character and edges connecting nodes on different characters. The edges within the character help maintain the motion quality of an individual character, while the edges between the characters act as guides for maintaining the relative position of the body parts of the two characters. Details are discussed later in section 3.4. There is a major difference between how we compare two spatial descriptors in the interaction graph and how they are compared in the Interaction Mesh (IM) in (Ho et al., 2010). We perform edge-level (i.e. distance) computation whereas IM computes volumetric deformation on a tetrahedron. We further augment the state of an edge with velocities as they are crucial for a physics simulation. Given the input reference motions clips, we build and store such an IG to capture the spatial relationship across the agents and object at each time-step. ### Reward Design We choose to measure the interaction similarity in two ways: an _edge-weighting function_ that highlights the importance of interaction regions in the graph and an _edge-similarity function_ that measures the similarity between two IGs with the same connectivity. For the following similarity measurement, we make use of two interaction graphs \(G^{sim}\) and \(G^{ref}\) with the same connectivity, one from the simulated environment, the other from the reference motion clips. The connectivity of both graphs is the same as computed on the reference motions using the above mentioned method. The interaction graph we defined is a set of spatial descriptors that encode the relative formation among the vertices in the graph. #### 3.4.1. Edge Weighting Function We are guided by the intuition that instances where two body parts are close or in contact with each other are particularly important for multi-character interactions. 
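Before turning to that weighting function, the interaction-graph construction described above (markers as nodes carrying position/velocity features, edges from a Delaunay tetrahedralization of the marker positions) can be sketched as follows. This is an editorial illustration using scipy; the function and variable names are assumptions, not the authors' code.

```python
import itertools
import numpy as np
from scipy.spatial import Delaunay

def build_interaction_graph(positions, velocities):
    """Sketch of the IG construction. positions/velocities are (N, 3) arrays of marker
    data (e.g. N = 30 for two characters with 15 markers each). Returns node features
    n_i = (p_i, v_i) and edge features e_ij = (p_j - p_i, v_j - v_i)."""
    nodes = np.concatenate([positions, velocities], axis=1)      # (N, 6)

    # Delaunay tetrahedralization of the marker positions; each simplex lists 4 vertex ids.
    tets = Delaunay(positions).simplices
    edges = set()
    for simplex in tets:
        for i, j in itertools.combinations(sorted(simplex), 2):
            edges.add((int(i), int(j)))

    edge_feats = {(i, j): np.concatenate([positions[j] - positions[i],
                                          velocities[j] - velocities[i]])
                  for (i, j) in edges}
    return nodes, edge_feats

# Toy example: 30 random markers standing in for two 15-marker characters.
rng = np.random.default_rng(0)
p = rng.normal(size=(30, 3))
v = rng.normal(scale=0.1, size=(30, 3))
nodes, edge_feats = build_interaction_graph(p, v)
print(len(nodes), "nodes,", len(edge_feats), "edges")
```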
We define a function that dynamically assigns different weights for each edge according to its relative importance to the others. More specifically, for an edge connecting vertices \(i\) and \(j\), the weight for the edge \(w_{ij}\) is defined as: \[w_{ij}=0.5*\frac{\exp\left(-k_{w}\|p_{ij}^{sim}\|\right)}{\sum_{ij}\exp\left(-k_{w} \|p_{ij}^{sim}\|\right)}+0.5*\frac{\exp\left(-k_{w}\|p_{ij}^{ref}\|\right)}{ \sum_{ij}\exp\left(-k_{w}\|p_{ij}^{ref}\|\right)}, \tag{1}\] where \(k_{w}\) controls how sensitive the weighting function is with respect to the distance of the edges. The first term gives more attention to an edge if the two nodes in the simulation are close to each other, the second term makes sure an edge in the reference motion gets more attention when its nodes stay close. In practice, we found the second term alone is enough for most of our experiments, and we only use the first term for a few examples where it improves the performance. Normalizing the weights allows our reward function to adapt to various interaction scenarios. For example, when two characters are far away from each other, the edges connecting those two characters do not contribute much to the reward while the edges connecting vertices within individual characters become important. On the other hand, when the two characters are close to each other, some of the connections between their body parts will have large weights. This adjustment based on proximity allows the body parts that are not associated with the close interactions to remain close to the original motion. #### 3.4.2. Edge Similarity Function Given two interaction graphs \(G\) and \(G^{\prime}\), we design a distance function measuring their differences. Our distance function measures position and velocity similarity of the two different formations by comparing all corresponding edges in the two graphs. #### 3.4.3. Positional Graph Similarity To compare the positional graph similarity between two graphs, we separately consider the similarity of the two graph edges connecting each individual character \(E_{self}\) (self-connections) and between characters \(E_{cross}\) (cross-connections). The discrepancy of each edge is computed as follows: \[err_{self,ij}=\|\frac{p_{ij}^{sim}-p_{T,ij}^{sim}}{\|p_{T,ij}^{sim}\|}-\frac{ p_{ij}^{ref}-p_{T,ij}^{ref}}{\|p_{T,ij}^{ref}\|}\| \tag{2}\] where \(p_{T,ij}^{sim}\) and \(p_{T,ij}^{ref}\) are edges computed from the first frame of the motion sequence for both simulation and reference motions. In all the reference motion sequences, the motion capture actors are instructed to start in a T-pose. In other words, we first compute the deviation of an edge from its corresponding T-pose edge, it is then normalized by the length of the T-pose edge. Finally, we compute the difference between the two deviations, one for the simulated characters and the other for the reference motion clips. Note that this similarity measurement is not sensitive to specific body sizes and proportions due to the normalization of the deviations. This formulation is also similar to measuring the Laplacian coordinate difference of all graph nodes between simulation and reference in that they both try to maintain the similarity of the local structure between two graphs, but our formulation gives direct measure to the edge similarity that captures the interaction. It is challenging to define a reference edge length for the cross-connections because the variance can be extremely high. 
For example, imagine that the two characters are standing 10m apart versus 0.1m. Instead, we directly penalize the difference in the edge length and direction so that the same cross-connection similarity can be applied to various motions: \[err_{cross,ij}=0.5*\frac{\|p_{ij}^{sim}-p_{ij}^{ref}\|}{\|p_{ij}^{sim}\|}+0.5*\frac{\|p_{ij}^{sim}-p_{ij}^{ref}\|}{\|p_{ij}^{ref}\|} \tag{3}\] where we normalize the difference by the lengths in the simulation and the reference clips, respectively, then average them so that the similarity becomes symmetric. This symmetry also enables the error to be used for characters having different body shapes. The total error for positional graph similarity is then the sum of the two error terms from all edges: \[err_{pos\_graph}=\sum_{ij\in E_{cross}}w_{ij}err_{cross,ij}+\sum_{ij\in E_{self}}w_{ij}err_{self,ij} \tag{4}\]

#### 3.4.4. Velocity Graph Similarity

To measure the velocity discrepancy between graphs, we simply measure the difference of the velocity of all edges in simulation and reference as: \[err_{vel\_graph}=\sum_{ij\in E_{cross}\cup E_{self}}w_{ij}\|v_{ij}^{sim}-v_{ij}^{ref}\| \tag{5}\] In contrast to the positional similarities, we observed that the velocities of the graph vertices do not vary much when modifying the body size/proportion of the simulated characters. Thus we do not perform any velocity normalization. We also do not separate the velocity similarities by edge type because we did not find any benefit in doing so.

#### 3.4.5. Final Reward Design

We define our reward function based on the errors computed from the interaction graphs. In addition, we add two more error terms measuring the tracking of the root joint and center-of-mass, which are frequently used in learning imitation controllers for physically simulated characters. As a result, our reward function is composed of four terms \[r=r_{pos\_graph}\cdot r_{vel\_graph}\cdot r_{root}\cdot r_{com} \tag{6}\] \[r_{pos\_graph}=\exp(-k_{1}*err_{pos\_graph}) \tag{7}\] \[r_{vel\_graph}=\exp(-k_{2}*err_{vel\_graph})\] \[r_{root}=\exp(-k_{3}*err_{root})\] \[r_{com}=\exp(-k_{4}*err_{com})\] where \(r_{pos\_graph}\) and \(r_{vel\_graph}\) measure the difference between the two interaction graphs, \(r_{root}\) and \(r_{com}\) encourage the tracking of the root joint and the center-of-mass projected on the ground, and \(k_{1},\cdots,k_{4}\) are the sensitivities of the terms, respectively. The errors for the tracking are defined as follows: \[err_{root}=w_{p}\|\bar{p}_{sim}-\bar{p}_{ref}\|^{2}+w_{q}\|\log(q_{sim}^{-1}\cdot q_{ref})\|^{2}+w_{v}\|\bar{v}_{sim}-\bar{v}_{ref}\|^{2}+w_{\omega}\|\omega_{sim}-\omega_{ref}\|^{2} \tag{8}\] \[err_{com}=w_{com,x}\|x_{sim}-x_{ref}\|+w_{com,\dot{x}}\|\dot{x}_{sim}-\dot{x}_{ref}\| \tag{9}\] where \(\bar{p}\) and \(\bar{v}\) are the position and velocity of the root joint excluding the height components, \(q\) and \(\omega\) are the orientation and angular velocity of the root joint, respectively, and \(x\) and \(\dot{x}\) are the center-of-mass position and velocity of the simulated character excluding their height components. \(w_{p}\), \(w_{q}\), \(w_{v}\), \(w_{\omega}\), \(w_{com,x}\), and \(w_{com,\dot{x}}\) are the relative weights of the terms.

Figure 2. Interaction Graph of the reference characters. Higher opacity on an edge indicates a higher weight for the edge when computing the reward. 
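To make these reward terms concrete, here is a compact sketch (an editorial addition; the variable names and the sensitivity values \(k_{w}\), \(k_{1}\), \(k_{2}\) used below are assumptions, not the paper's settings) of the edge weighting of Equation (1) and the graph reward factors of Equations (4)-(7), given per-edge errors computed as in Equations (2), (3) and (5).

```python
import numpy as np

def edge_weights(d_sim, d_ref, k_w=5.0):
    """Eq. (1): softmax-style weights over edges from current (sim) and reference
    edge lengths; k_w controls how sharply short edges are emphasized."""
    a = np.exp(-k_w * d_sim); a /= a.sum()
    b = np.exp(-k_w * d_ref); b /= b.sum()
    return 0.5 * a + 0.5 * b

def graph_reward(err_pos_edges, err_vel_edges, w, k1=2.0, k2=0.2):
    """Graph terms of Eqs. (4)-(7): weighted sums of per-edge errors,
    mapped to (0, 1] rewards via exponentials."""
    err_pos = np.sum(w * err_pos_edges)      # Eq. (4), with per-edge errors from Eqs. (2)-(3)
    err_vel = np.sum(w * err_vel_edges)      # Eq. (5)
    r_pos = np.exp(-k1 * err_pos)            # Eq. (7)
    r_vel = np.exp(-k2 * err_vel)
    return r_pos * r_vel                     # two of the four factors in Eq. (6)

# Toy numbers: four edges with their current/reference lengths and per-edge errors.
d_sim = np.array([0.1, 0.4, 0.8, 1.2])
d_ref = np.array([0.1, 0.5, 0.7, 1.3])
w = edge_weights(d_sim, d_ref)
print(graph_reward(np.array([0.05, 0.1, 0.2, 0.1]),
                   np.array([0.2, 0.3, 0.1, 0.4]), w))
```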
Note that we ignore the height components of the linear positions and velocities so that the relevant errors are not directly affected by the absolute size of the character. In contrast to (Ho et al., 2010), where tetrahedron volumes are used to measure similarities of meshes, our edges-based reward is more sensitive to point-to-point physical interactions. In addition, it is not trivial to design an adaptive weight function in a volume-based setting, which ensures the motion quality of the individual characters is preserved, even when characters are far apart, making our reward a good substitute for motion imitation. ### Observation and Action Spaces The observation space of our environment is inspired by the design from prior work (Won et al., 2020, 2021) where the observation of an agent \(o_{i}=(o_{sim},o_{ref})\) consists of the states of the simulated characters and objects, which are computed from the simulation and the reference motion clips. For the simulated observation space \(o_{sim}=(o_{sim,self},o_{sim,other},o_{sim,object})\), we include the position, orientation, linear and angular velocity for each link of the characters and the objects. To make sure the state is invariant to the global position and orientation of the agent, all values are transformed to the facing frame of the controlled character. The facing frame of the character is computed by projecting the global transformation of the character root to the ground. The reference observation \(o_{ref}=(o_{ref}^{0},o_{ref}^{0.05},o_{ref}^{0.15})\) contains the reference information 0, 0.05, and 0.15 seconds in the future. For each future reference observation frame \(o_{ref}^{*}=(o_{ref,self}^{*},o_{ref,other}^{*},o_{ref,object}^{*})\), we include the position, orientation, linear and angular velocity for each link of the characters and the objects in the facing frame of the reference character. Our action \(a\) is the change of pose \(\Delta q\) from the pose \(q_{ref}\) given the reference frame at each time-step. A new reference pose \(q_{ref}+\Delta q\) (i.e. a set of joint angles) is given to the stable PD servos attached to our simulated character and then joint torques are computed accordingly. ## 4. Results In this section, we show that our formulation can be applied to a variety of motions with multiple characters and objects. By dynamically adjusting the weights, our method focus on the adjustments to the motion on the physical interactions. This approach results in higher quality motion than existing work in scenarios with complex interactions. Further, our formulation is able to preserve interaction when the body size, kinematics, and skeleton of the characters differ from the reference motion sequences. ### Experiment Setup The structure of our policy follows a encoder-decoder style as presented in (Won et al., 2021), where the encoder is a fully connected neural network with two hidden layers with 256 and 128 units respectively. The encoder takes the full observation and projects it onto a 32 dimensional latent vector \(z\). The decoder is another fully connected network with two hidden layers with 256 units, and it takes as input the concatenated vector \(z_{decoder}=(o_{sim,self},z)\) and outputs the action of the policy. To speed up the learning for all of the experiments below, we pre-train an imitation policy of a single character on sequences that can be performed without a partner (e.g. high five, greetings, and push ups). 
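The encoder-decoder layout just described can be sketched as follows (an editorial illustration in PyTorch; the layer widths and the 32-dimensional latent follow the text, while the activations and the example input/output dimensions are assumptions).

```python
import torch
import torch.nn as nn

class InteractionPolicy(nn.Module):
    """Sketch of the encoder-decoder policy: the encoder maps the full observation to a
    32-d latent z; the decoder consumes (o_sim_self, z) and outputs the action, i.e. the
    per-joint offsets added to the reference pose."""
    def __init__(self, obs_dim, obs_self_dim, act_dim, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(               # two hidden layers: 256 and 128 units
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        self.decoder = nn.Sequential(               # two hidden layers with 256 units
            nn.Linear(obs_self_dim + latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, act_dim),
        )

    def forward(self, obs_full, obs_self):
        z = self.encoder(obs_full)
        return self.decoder(torch.cat([obs_self, z], dim=-1))

# Example dimensions (placeholders); 66 = 22 joints x 3 degrees of freedom.
policy = InteractionPolicy(obs_dim=800, obs_self_dim=300, act_dim=66)
action = policy(torch.zeros(1, 800), torch.zeros(1, 300))
print(action.shape)
```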
When training an interaction-graph based policy, we reuse the pre-trained decoder and allow its weights to be updated during training. The decoder is reusable because the latent dimensions are unchanged; the encoder trained alongside the pre-trained decoder is not reusable because its input dimensions differ. This design makes it easier for the policy to maintain balance in the initial phase of learning, and therefore results in faster training. The training time of a policy varies based on the difficulty of the sequence. For easier sequences, it takes about 300 million to 500 million samples to train one policy; for harder sequences, it can take more than 2 billion samples. All experiments are run using 640 CPUs and take from 3 to 9 days to train a policy, depending on the sequence difficulty.

### Human-Human Interaction
In the human-human interaction scenarios, we aim to show that our method is capable of producing imitation policies with motion quality similar to that of existing works such as (Fussell et al., 2021; Peng et al., 2018; Won et al., 2020) while better preserving the interaction. We show a variety of scenarios ranging from sparsely interacting motions to continuously interacting motions between the two human characters.

#### 4.2.1. Light Interaction
Figure 2(a) and 2(b) show light physical interactions. In _Rapper-Style Greetings_, the two characters touch their hands, elbows, shoulders, and legs in sequence to greet each other, an action which has been shown in many hip-hop music videos (Sinestesia3000 2012). In _Jumpover_, one character jumps over the other character. In these scenarios, physical interactions are of short duration and involve little physical force, and the interactions are well preserved semantically when the interacting body parts are close enough with the right timing.

#### 4.2.2. Heavy Interaction
Figure 2(a), 2(b), and 2(c) show physical interactions where significant forces occur between the two characters. The _Lift-Pushup_ example includes interactions where one character needs to lift the other character's legs while that character is performing a push-up exercise. In the first salsa dancing motion (_Salsa Grasping_), the two characters' hands are grasped together to form a loop for one character to go under. In another salsa dancing motion (_Salsa Support_), one character needs to support the other while they lean backward. This type of interaction is more challenging than the light interactions because the two simulated characters need to perform highly coordinated motions with force exchange. For example, the character performing a push-up would not be able to imitate the reference motion successfully unless his legs are grasped by the other character. Furthermore, these heavy interactions make maintaining balance difficult because significant forces are applied between the two characters. Our method was able to imitate these challenging motions successfully, as shown in Figure (a). Because our characters do not have fingers, we mimic grasping by adding weld constraints in the physics simulation. More specifically, we examine the reference motion and label a sequence of grasping windows to identify when a grasp should be present between specified body pairs at a given moment. During simulation, when a character's hand is close to a body part it should grasp at that moment, a weld constraint between the two body parts is created temporarily.
The constraint is removed when the grasping window is over, representing the character releasing their hands. Other than hand grasping, we did not use any weld constraints; all of the complex physical interactions emerged during the learning process. These results show that our formulation allows the control policies to be aware of the relative formation among various body parts regardless of the interaction duration, and to preserve those formations in the simulation.

### Human-Object Interaction
We further demonstrate that our formulation can also handle human-object interactions where the objects are passively simulated. Figures (a) and (b) show the two motions: one includes interactions where two persons are throwing and catching a small box repeatedly, and the other includes interactions where two persons are lifting and moving a large box. For both motion sequences, we place an extra marker on every vertex of the box (i.e. 8 markers in total) when constructing the interaction graph. For the edges connecting the characters to the box, we use the reward formulation in Equation 3 to measure the discrepancy between simulation and reference. In addition, we choose to remove all edges connecting the markers on the box to each other because their relative distances stay constant throughout the motion. The resulting graph is shown in Figure 12. The control policies learned with these additional markers successfully reproduce both hand-object interaction scenarios, which shows the generality of our method.

### Retargeting to different body sizes
Our graph-based formulation is robust to changes in body conditions because we compute features in a normalized manner. As a result, the interactions in the same reference motions can be applied to simulated characters that have completely different body dimensions from the actors in the reference motions. Figures (c) and (d) demonstrate motions that include light interaction. In both sequences, we scale all limbs of the yellow and blue characters by factors of 1.3 and 0.5, respectively, so the yellow character is almost 2 times taller than the blue character. The scaled characters are trained using our framework to track the reference motion. In the _Rapper-style Greeting_ motion, for example, we see the taller character deliberately bend down to reach their hand, elbow, and shoulder to the shorter character when the interaction happens, and straighten back up after the interaction is finished. Similarly, the taller character lowers their waist when the shorter character jumps over their back in the _Jumpover_ motion. Learning how to transfer forces via physical interactions is crucial to imitating motions that include heavy interaction, as in Figures (d), (e), and (f). For the _Lift-Pushup_ motion (Figure (d)), we apply a 0.5 scaling factor to all limbs of the blue character; for the _Salsa Grasping_ motion (Figure (e)), we scale the yellow character's limbs by 0.5; and for the _Salsa Support_ motion (Figure (f)), we scale the yellow character's limbs by 0.8. For this type of motion, our method allows the scaled characters to adjust their motions to preserve the interactions rather than simply mimicking the original reference motions, and therefore the semantics of the interaction are transferred successfully to the scaled characters. For example, the taller character in the _Lift-Pushup_ motion learned to bend down and reach the target grasping region to form the grasp. Finally, we also scale the characters and objects for human-object interaction scenarios.
Figure 6 shows the control policies learned successfully for the small-box throwing-and-catching and the large-box lifting-and-moving motions. For both human-object interaction motions, we scale the yellow character's limbs by 0.7.

### Non-human Characters
Our method can also transfer interactions in the reference motions to characters with different kinematic configurations. For example, if we use a robot with fewer DoFs than the reference character, our method can still make the robot create the interactions existing in the reference motions. As shown in Figure 4, we replace one of the characters with a Baxter robot composed of two industrial manipulators. Because the robot has a fixed base, we place the robot at the location where the greeting motion is conducted and place a total of eight markers on the upper body of the robot, on its head, torso, upper arms, lower arms, and end-effectors, to match those of the human character. For the human character, we keep the same 15 markers on the human body as described earlier. We then use a total of 23 markers to construct the interaction graph for training. During training, we use two separate reward functions for the character and the robot. The character receives the same reward terms as described above, while the robot only receives a reward from \(r_{pos\_graph}\) and \(r_{vel\_graph}\) because it is not mobile. In addition, we found that including the first term in Equation 1 was helpful for the robot because it is immobile. This term highlights the edge error when the robot's body parts stay close but the reference character's body is far away. Because the kinematic structure of the robot is completely different from that of the actor in the reference motion, we ask the policy to directly output the absolute target joint angles \(q\) instead of learning the deviation (i.e. \(\Delta q\)) from the reference \(q_{ref}\), for both the human character and the robot. Our framework can successfully generate animations of the Baxter robot performing greetings with a human character (Figure 4(a)) and performing a high five with another Baxter robot (Figure 4(b)). These examples demonstrate the potential of our method as an option to retarget human motion onto robots and create compelling human-robot interactions.

### Comparison
We conduct comparison and ablation studies to show the effectiveness of our graph-based formulation in reproducing complex interactions for physically simulated characters.

#### 4.6.1. Joint-based Reward
To highlight the necessity of formulating the interaction-graph-based reward, we compare our method with the commonly used joint-based reward formulation for motion imitation. For the sequences that use a joint-based reward, we apply a formulation similar to that described in (Peng et al., 2018; Won et al., 2020), which asks the policy to minimize the positional and angular differences of the joints and links between the simulation and the reference motion. In this formulation, no reward term exists to evaluate the quality of interactions between multiple characters or between characters and objects. As a result, when the simulated character has a different body configuration from the reference motion, the characters will only learn to mimic the poses in the reference motion instead of learning to adapt to the other characters (or objects) to correctly perform the interaction. Figure 7 shows a comparison for the greeting motions.
The control policies trained using a joint-based reward fail to cause the taller character to bend down to meet the shorter character. Similar behaviors are observed in the other motions for the control policies trained using the joint-based reward only. We further contrast the performance of the interaction graph and joint-based rewards on the dense interaction example. Figure 8 shows such a comparison on the _Lift-Pushup_ sequence with a scaled character. When using the interaction graph reward, the taller character actively bends forward to reach its hands to the shorter character's lower leg to form the grasping constraints and lift the shorter character. When using a joint-based reward, on the other hand, there is no reward based on the relative poses between the two characters, so the taller character cannot grasp the shorter character's leg and the interaction semantics are not preserved. Furthermore, we show that a joint-based reward also produces lower-quality motions when retargeting motions for human-object interactions. Figure 10 shows a comparison for the small box throw-and-catch motion trained with the interaction graph reward and with the joint-based reward. The two characters are able to perform the throw-and-catch motion sequence with the joint-based reward because of the additional object observation and reward described above. However, it fails to preserve the interaction semantics: the shorter character should catch the box by holding onto two opposite faces of the box instead of supporting the box from its bottom.

#### 4.6.2. Edge Weighting Function
We perform an ablation on the edge weighting function (Equation 1) to understand how it helps training selectively pay attention to more important edges and ignore irrelevant ones. Our experiments demonstrate that this design helps generate more natural-looking motions. In Figure 11, we compare the resulting policy trained with (left) and without (right) the weighting function for the greeting motion. When the edge weighting function is present, the taller character learns to bend its waist to reduce its height when greeting the shorter character. However, when all edges have the same weight during training, the taller character instead learns to walk and complete all the greetings with its legs bent at an unnatural angle. This unnatural behavior arises because the policy tries to achieve a low error on every edge of the graph regardless of the distances between the nodes.

## 5. Discussion
We demonstrated a method for simulating and retargeting complex multi-character interactions using deep reinforcement learning, where novel character-agnostic states and rewards are developed based on an _Interaction Graph_. Our formulation is applicable to a variety of interactions among people, ranging from sparse interactions (e.g. greeting, jumpover) to complex ones (e.g. exercise motions, salsa dancing), regardless of whether the body size, kinematics, or skeleton of the simulated characters are the same as those of the actors who recorded the reference motions. While we demonstrate many successful examples, there are some limitations to our method. First, there are some limitations in our reward function design. Because the action space of our policy is not directly associated with the reward function, our training usually requires more samples to converge compared to a joint-based reward function.
In addition, due to the lack of supervision on the joint angles, the motion generated by our policy can contain artifacts on joints that have little impact on the interaction. For example, the character may sometimes tilt its head or waist at an unnatural angle because this deviation from the reference does not affect the positions of the interaction graph's nodes, and therefore does not decrease the reward. Adding more markers would be an immediate remedy, but this would also increase the computational cost. Another limitation is that our controllers are imitation controllers, which cannot perform interactions that do not exist in the reference motions. Further, the controllers only work for the specific body configuration they were trained on, so one policy cannot easily be generalized to work on a character with a different body configuration. We also observe that the variability of our results is limited by the dissimilarity of the character and the difficulty of the task. With extreme scaling or drastically different skeletons, the characters could fail to imitate the interactions due to their physical limits. For example, in challenging interaction scenarios such as box throwing, our method fails when replacing one human character with a robot. We envision several future directions to reduce these limitations. For better variability, we can build a stronger motion prior that contains a larger variety of motion types. Further training on top of the motion prior could be more sample-efficient and allow the policy to explore the motion space to find a valid solution when the character shape undergoes extreme changes. To improve the generalization of our method, a better observation representation would be helpful. Currently we are using the commonly used joint-based
We present a method for reproducing complex multi-character interactions with physically simulated characters. The method constructs control policies that learn not only the individual motions but also the interactions between characters, matching the complexity of the reference data while maintaining balance. The approach uses a novel reward formulation based on an interaction graph that measures the distances between interaction landmarks. This reward encourages the control policies to efficiently imitate the characters' motions while preserving the spatial relationships of the interactions in the reference motion. We evaluated the approach on a variety of motions, from simple interactions such as high-five greetings to complex interactions. This can enable the "cleaning up" of existing motion capture data to create physically valid interactions, or the retargeting of motion to new characters that differ in size, kinematics, and morphology.
2310.04427
Generative AI in the Construction Industry: Opportunities & Challenges
In the last decade, despite rapid advancements in artificial intelligence (AI) transforming many industry practices, construction largely lags in adoption. Recently, the emergence and rapid adoption of advanced large language models (LLM) like OpenAI's GPT, Google's PaLM, and Meta's Llama have shown great potential and sparked considerable global interest. However, the current surge lacks a study investigating the opportunities and challenges of implementing Generative AI (GenAI) in the construction sector, creating a critical knowledge gap for researchers and practitioners. This underlines the necessity to explore the prospects and complexities of GenAI integration. Bridging this gap is fundamental to optimizing GenAI's early-stage adoption within the construction sector. Given GenAI's unprecedented capabilities to generate human-like content based on learning from existing content, we reflect on two guiding questions: What will the future bring for GenAI in the construction industry? What are the potential opportunities and challenges in implementing GenAI in the construction industry? This study delves into reflected perception in literature, analyzes the industry perception using programming-based word cloud and frequency analysis, and integrates authors' opinions to answer these questions. This paper recommends a conceptual GenAI implementation framework, provides practical recommendations, summarizes future research questions, and builds foundational literature to foster subsequent research expansion in GenAI within the construction and its allied architecture & engineering domains.
Prashnna Ghimire, Kyungki Kim, Manoj Acharya
2023-09-19T18:20:49
http://arxiv.org/abs/2310.04427v1
# Generative AI in the Construction Industry: Opportunities & Challenges

###### Abstract
In the last decade, despite rapid advancements in artificial intelligence (AI) transforming many industry practices, construction largely lags in adoption. Recently, the emergence and rapid adoption of advanced large language models (LLM) like OpenAI's GPT, Google's PaLM, and Meta's Llama have shown great potential and sparked considerable global interest. However, the current surge lacks a study investigating the opportunities and challenges of implementing Generative AI (GenAI) in the construction sector, creating a critical knowledge gap for researchers and practitioners. This underlines the necessity to explore the prospects and complexities of GenAI integration. Bridging this gap is fundamental to optimizing GenAI's early-stage adoption within the construction sector. Given GenAI's unprecedented capabilities to generate human-like content based on learning from existing content, we reflect on two guiding questions: What will the future bring for GenAI in the construction industry? What are the potential opportunities and challenges in implementing GenAI in the construction industry? This study delves into reflected perception in literature, analyzes the industry perception using programming-based word cloud and frequency analysis, and integrates authors' opinions to answer these questions. This paper recommends a conceptual GenAI implementation framework, provides practical recommendations, summarizes future research questions, and builds foundational literature to foster subsequent research expansion in GenAI within the construction and its allied architecture & engineering domains.

Ph.D. Student, Durham School of Architectural Engineering & Construction, University of Nebraska-Lincoln, USA; pghimire3@huskers.unl.edu
Assistant Professor, Durham School of Architectural Engineering & Construction, University of Nebraska-Lincoln, USA; kkim13@unl.edu
AI Scientist, SRI International, USA; manoj.acharya@sri.com

**Keywords:** Generative AI; Construction; AEC; OpenAI; GPT; PaLM; Llama; LLM; Fine Tuning

## 1 Introduction
In the last four decades, the field of machine learning (ML), particularly the deep learning subdomain reliant on artificial neural networks, has undergone substantial maturation, causing immense transformations across many industrial landscapes [1]. It has emerged as a powerful asset, automating procedures within the construction sector, an industry that trails behind others in both efficiency and output. However, embracing this paradigm shift faces impediments due to slow progress in managing data quality and the absence of directives for integrating domain expertise with data-centric evaluation. These challenges crystallize into three critical concerns: the disparity between a feature-rich space and limited samples, the balance between model precision and applicability, and the reconciliation of machine learning outcomes with field-specific insights [1, 2]. Here are three simple examples of these challenges: (1) A construction company has a large amount of data on the features of construction projects, but only data on a limited number of projects.
This disparity between the feature-rich space and the limited samples makes it difficult to train a machine learning model that can precisely predict the cost of construction projects. (2) An owner organization is trying to implement a machine learning model to predict the completion time of a construction project based on data it has access to, such as project value, delivery method, complexity, and materials quantity in previous projects. However, the company wants to make sure that the model is applicable to a wide range of projects, so it does not want to make the model too precise. A more precise model will be able to make more accurate predictions about the completion time of a project, but it may not be applicable to a wide range of projects; a less precise model will be more applicable to a wider range of projects, but it may not be as accurate. (3) A safety manager is using a machine learning model to predict the likelihood of a fall accident on a construction site; the manager has access to data on the weather, the type of construction, and the safety practices used on previous projects, and the model predicts that there is a 10% chance of a fall accident on the current project. However, the developed model may not be able to account for all of the factors, such as human errors and unforeseen conditions, that can contribute to an accident. Therefore, traditional machine learning algorithms are somewhat constrained in their capabilities by these limitations [3]. The rapid growth of artificial intelligence (AI), a discipline that involves developing computer systems capable of human-like cognition and actions, has enabled the advancement of sophisticated large language models (LLMs), such as GPT, PaLM, and Llama. GenAI, a subset of deep learning, leverages neural networks and can process both labeled and unlabeled data using supervised, unsupervised, and semi-supervised methods to synthesize novel content like text, images, and audio [4, 5]. An LLM is trained on existing data, constructing statistical representations to predict content. When provided prompts, generative systems output new synthesized content learned from underlying patterns. Architecturally, transformer models enable GenAI, containing encoders to process inputs and decoders to translate them into contextually relevant outputs [5]. There are four major types of GenAI models: text-to-text, text-to-image, text-to-video/3D, and text-to-task. Text-to-text models, trained to learn mappings between text pairs, accept natural language input and generate text output [6]. Text-to-image models, a recent development, are trained on image datasets paired with text captions. These models take text prompts as input and generate corresponding images as output, often using diffusion techniques [7]. Text-to-video models synthesize videos from text prompts, accepting inputs ranging from single sentences to full scripts, and outputting corresponding video representations [8]. Similarly, text-to-3D models create 3D objects that match a user's textual description. Text-to-task models are trained to execute particular tasks based on textual prompts. These models can perform diverse actions including responding to questions, conducting searches, making predictions, and carrying out requested behaviors [9]. LLMs are a type of general AI. As large pre-trained models designed for adaptability, foundation models like GPT constitute AI architectures that are trained on vast data quantities.
This enables fine-tuning to a wide range of tasks including question answering (Q&A), sentiment analysis, information extraction, image captioning, object recognition, instruction following, and more [10]. Over the past few decades, researchers in the construction domain have published articles on implementing AI and its subdomains to address industry-specific challenges. These studies demonstrate AI and machine learning applications across the construction management spectrum, including safety management [11, 12, 13, 14, 15], cost predictions [16, 17, 18, 19, 20], schedule optimization [1, 21, 22], progress monitoring [23, 24, 25, 26, 27], quality control [28, 29], supply chain management [30, 31, 32, 33], logistics management [34, 35], project risk management [36, 37, 38, 39, 40, 41], dispute resolution [42, 43], waste management [44, 45, 46], sustainability assessments [47, 48, 49, 50, 51], visualization [52, 53], and overall construction process improvements [1, 54, 55, 56, 57]. Also, there have been studies highlighting the integration of AI with Building Information Modeling (BIM) to enhance information extraction, streamline workflows, and optimize construction management efficiency [58, 59, 60, 61, 62]. Furthermore, some research studies have also emphasized the impact of integrating robotics and AI in construction, such as improvements in construction quality, safety, and project acceleration, and the mitigation of labor shortages [63, 64, 65, 66]. However, there is a noticeable gap in research on GenAI's applications, future opportunities, and adoption barriers specific to the construction industry. This gap is likely due to the recent and rapid emergence of GenAI as a novel technology for this field, resulting in a delay in research and implementation when compared to other industries that have already begun to explore and capitalize on the benefits of GenAI adoption [67, 68, 2, 69, 70, 4]. As the construction industry continues to deal with its unique challenges, there exists a vital need to bridge this research gap, uncover the untapped opportunities offered by GenAI, and address the barriers obstructing its adoption within the construction sector. With this background, in this study we seek to answer two major research questions: (1) What are the current opinions and evidence about the opportunities, potential applications, and overall challenges related to the implementation of GenAI technologies in the context of construction? and (2) What are the most important research questions to investigate in the future related to GenAI technologies in the context of construction? The remainder of this paper is arranged as follows: Section 2 summarizes our methodology. Section 3 describes various GenAI model structures and presents related work in construction. Section 4 synthesizes opinions and evidence on opportunities, summarizes potential application areas, and visualizes a conceptual implementation framework, and Section 5 examines key challenges, from technical limitations to industry challenges. Recommendations for implementation and critical research questions for investigating GenAI's unknowns in construction are discussed in Section 6. Finally, Section 7 concludes by spotlighting this study's significant findings.

## 2 Methodology
To achieve our research goals, we followed the research framework shown in Figure 1.
Given the limited literature on generative AI in construction, we conducted a non-systematic review using keywords like "Generative AI AND Construction", "Generative AI", and "Large Language Models AND Construction" in Scopus and Google Scholar. We then used the snowball method, identifying key articles and mining their references and citations to find more relevant studies. In addition, to get the most up-to-date insights, we collected construction industry professionals' perceptions of generative AI from posts on LinkedIn over the three months leading up to August 20, 2023. Using three keyword combinations - "Generative AI in construction", "#generativai #construction", and "#generativeai #aec" - we identified 32 relevant opinions comprising a total of 63,778 words. Our analysis incorporated various formats including posts, comments, polls, and articles. Articles accounted for 48% of the data, comments 34%, posts 16%, and polls 6%. To analyze this data, we utilized programming-based text mining techniques including word cloud analysis to highlight the most frequent terms, sentiment analysis to categorize opinions as positive, negative, or neutral, and frequency analysis to summarize key themes throughout the corpus. With a literature review and industry perspectives, this paper outlines potential GenAI applications in construction. A conceptual implementation framework is then proposed to implement the identified applications, along with key implementation challenges.

Figure 1: Research Framework

Furthermore, we integrated the perspectives of the authors in this study. As experts in allied disciplines related to emerging technologies such as generative AI in the built environment, the authors contribute more than a decade of combined experience in areas including AI in construction, automation in construction, and generative AI specifically.

## 3 Various GenAI Model Structures and Related Work in Construction
In recent years, researchers have increasingly focused on modifying the learning algorithms of generative AI (GenAI) models to fit specific domains and tackle industry-specific problems. The choice of which generative AI model to use depends on the specific task at hand. Based on their generative mechanism, there are five major types of GenAI models [2, 71, 72]. Generative Adversarial Networks (GAN) are often used for image generation because they can create realistic images. Variational AutoEncoders (VAE) are commonly used for text generation, as they can produce clear, grammatically correct samples by learning the original distribution of the training data. Autoregressive models are best at generating text similar to their training data, since they generate text token-by-token while conditioning on previous tokens. Diffusion models can create smooth and natural image samples by starting with noise and reversing a diffusion process. Flow-based models learn transformations between data and latent representations, enabling diverse and creative image generation. In the following subsections, we investigate the background of each model, explain their operational mechanisms including model architecture, underline any limitations, examine their relevance within the construction domain where such use cases exist, and summarize the characteristics, advantages, and disadvantages of all models.

### Generative Adversarial Network
First introduced by Goodfellow et al. in 2014, GANs are a type of deep learning model composed of two neural networks: a generator and a discriminator [73].
The generator is tasked with creating new synthetic data, while the discriminator attempts to differentiate between real and generated data. As shown in Figure 2 (a) [72], GANs are trained through an adversarial process, where the generator produces fake samples that are fed along with real samples into the discriminator. The discriminator then predicts which samples are real or fake, and loss gradients are calculated using a loss function to update both models. During training, the generator tries to fool the discriminator by improving its ability to generate realistic data [71, 74]. The format of the real and synthetic data samples can vary, as long as the neural network architectures are adapted accordingly. GANs have proven adept at generating images, video, and text that are remarkably close to actual data distributions. Their adversarial training process allows for modeling complex, multi-modal data. However, GAN training can be unstable, and finding the optimal balance between the generator and discriminator is challenging [75]. GANs have shown possibilities for a variety of applications in the construction industry. Researchers have demonstrated that GANs can generate plausible technical drawings, including floorplans, mechanical/electrical/plumbing diagrams, sectional views, and colored plans [72]. The adversarial training process allows GAN models to synthesize images that closely match the style and content of real architectural drawings across multiple domains. In another study, GANs have been applied to generate photorealistic renderings of building facades [76]. By learning from datasets of real facade images, GANs can produce synthetic views that are useful for tasks like style classification and image restoration.

Figure 2: GenAI Models

### Variational AutoEncoders
Variational Autoencoders (VAEs) are a class of generative models specifically designed to acquire a data representation in a lower-dimensional latent space. This latent space provides a compressed yet essential feature representation of the original data [81]. Kingma and Welling introduced VAEs in 2013, establishing them as a pivotal model in the field [82]. VAEs consist of two intertwined and independently parameterized components: the encoder, responsible for recognition, and the decoder, focused on generation. These components work in tandem to support each other's operations [83]. The model, comprising an encoder network \(Q_{\phi}(Z|X)\) and a decoder network \(P_{\theta}(X|Z)\), is illustrated in Figure 2 (b). VAEs are proficient in approximate inference and can be effectively trained using gradient descent methods. The encoder network, characterized by parameters \(\phi\), efficiently compresses data into the lower-dimensional latent space, mapping input data X to a continuous latent variable Z. Conversely, the decoder network, parameterized by \(\theta\), utilizes this latent variable to generate data, performing the reverse mapping from Z to reconstructed data. Both the encoder and decoder employ deep neural networks for their construction, with parameters \(\theta\) and \(\phi\), respectively [77]. VAEs are trained using variational inference, enabling the acquisition of a probabilistic distribution over the latent space. This learned distribution empowers VAEs to generate new data samples that closely resemble the training data. VAEs exhibit versatility and find applications in several domains, including data compression, image synthesis, text generation, and discovery.
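As a minimal, self-contained sketch (not tied to any of the cited studies), the following PyTorch code shows the encoder \(Q_{\phi}(Z|X)\), decoder \(P_{\theta}(X|Z)\), reparameterization step, and training loss of a basic VAE; the layer sizes and data dimensions are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Minimal VAE: encoder Q_phi(Z|X) outputs a Gaussian over the latent
    space; decoder P_theta(X|Z) reconstructs the input. Sizes are placeholders."""
    def __init__(self, x_dim=64, z_dim=8, hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    # Reconstruction error plus KL divergence to the standard normal prior
    recon = F.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```

In a construction setting such as the financial-data augmentation example discussed below, `x` would be a normalized feature vector for one project record, and new synthetic records would be generated by decoding samples drawn from the latent prior.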
Because VAEs impose assumptions about the latent space, they are less flexible than other generative models in capturing complex real-world data distributions and data sequences [84], [85]. Like other industries, construction struggles with limited access to large datasets, a major obstacle for implementing deep learning models. While several studies have investigated big data challenges, solutions are still needed to compile the requisite construction data. A recent study by Delgado & Oyedele [86] highlighted approaches to addressing limited data, including data augmentation through distortions and variants of original data, synthetic data generation with methods like VAE, and transfer learning. The study also explored using VAEs to expand financial datasets for construction projects; as financial data lacks the transformation invariance present in images, autoencoders are a promising technique. The results showed that the VAE provided more robust outputs and better represented the non-linear correlations between the variables in the financial datasets. Another study by Balmer et al. [87] presented the use of VAEs for the conceptual design of pedestrian bridges from synthetically generated data, eliminating manual and time-consuming traditional design processes. Variational AutoEncoders show promise for generating new design and construction data to address limited datasets and for facilitating advanced deep learning applications. VAEs can be used to generate new data that is similar to existing data for defect detection, extract features from sensor data for predictive maintenance, model uncertainty in construction projects for risk assessment, and generate new designs for buildings or infrastructure. VAEs can learn from data at different levels of abstraction, depending on the specific task being performed.

### Autoregressive models
An autoregressive model is a type of generative model that predicts the next token in a sequence, given the previous tokens. This means that the model is trained on a sequence of data, and it learns to predict the next token in the sequence based on the previous tokens [88]. One common architecture for an autoregressive model is a recurrent neural network (RNN), as shown in Figure 2 (c). The output at time \(t\) in an autoregressive model relies not only on the input \(x_{t}\) but also on prior inputs \(x\) from preceding time steps. Nevertheless, in contrast to an RNN, the preceding \(x\)'s are not conveyed through a hidden state; rather, they are directly supplied to the model as additional inputs [78]. Autoregressive generative models leverage the chain rule of probability to decompose the joint distribution of a sequence into conditional distributions over tokens based on their context [84], [89]. While autoregressive models are powerful density estimators, their sequential sampling is slow for high-dimensional data and requires a fixed ordering to decompose the data, which is not always straightforward [84]. A study by Elfahham [90] found that the prediction of the construction cost index using the autoregressive time series method was most accurate compared to neural network and linear regression approaches; the autoregressive technique's specialized modeling of temporal dependencies allowed it to outperform the alternatives. Autoregressive models have the potential to enable advanced analytics in construction by modeling temporal dependencies in historical data. Applications include forecasting construction costs, risk identification, schedule optimization, and automating tasks. These models capture relationships over time to predict future outcomes and empower data-driven decision-making.
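To illustrate the kind of autoregressive time-series forecasting described above (in the spirit of, but not reproducing, the cost-index study cited), the following sketch fits a small AR model by least squares on a synthetic cost index and rolls it forward; the data and lag order are purely illustrative.

```python
import numpy as np

def fit_ar(series, lags=4):
    """Fit AR(lags) coefficients (intercept first) by ordinary least squares."""
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    y = series[lags:]
    X = np.column_stack([np.ones(len(y)), X])  # add intercept column
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def forecast(series, coef, steps=4):
    """Roll the fitted AR model forward for `steps` future periods."""
    lags = len(coef) - 1
    history = list(series[-lags:])
    out = []
    for _ in range(steps):
        nxt = coef[0] + np.dot(coef[1:], history[-lags:])
        out.append(float(nxt))
        history.append(nxt)
    return out

# Synthetic cost index with an upward trend (placeholder, not real data)
rng = np.random.default_rng(0)
index = 100 + np.cumsum(0.5 + rng.normal(0, 0.3, 60))
coef = fit_ar(index, lags=4)
print(forecast(index, coef, steps=4))
```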
### Diffusion Models
Diffusion models, a type of GenAI, produce high-quality synthetic images and videos by learning to reverse an artificial diffusion process. This process involves gradually adding Gaussian noise to training data over multiple time steps, following a predefined schedule that gradually masks the original data [7], as shown in Figure 2 (d) [79]. During training, the model learns to take a noisy sample from an intermediate point within this noise schedule and subsequently predict a less noisy version of the data from the previous time step. By repeatedly applying this de-noising prediction across many time steps, the model can start from pure noise and reverse the diffusion back to a realistic generated image [91]. Though sampling is relatively slow due to the multiple required predictions, diffusion models can generate sharp and coherent outputs, especially for image generation. Their ability to condition the sampling makes them versatile and broadly applicable across computer vision tasks. Popular GenAI models like DALL-E2 and Imagen are based on the diffusion model concept [7]. Some studies underline major limitations of diffusion models, such as poor time efficiency during inference, which requires many evaluation steps, and the high computational expense of the iterative de-noising [92], [93].

### Flow-based Models
Flow-based models represent a category of GenAI models that generate synthetic outputs by framing the data generation process as a continuous normalizing flow. They work by taking noise vectors and repeatedly transforming them through a series of bijective functions, each designed to bring the distributions closer to the target data distribution. Unlike other generative models, the flow model only uses a reversible encoder to complete the model's construction, which makes the design more delicate [2], as shown in Figure 2 (e) [90]. Through these transformations, flow models can convert noise inputs into realistic generated samples. The origin of flow-based generative models dates back to the work of Dinh et al. in 2014 [94]. These models offer various advantages, including precise latent-variable inference, accurate log-likelihood evaluation, and efficiency in both inference and synthesis processes [95]. These models were further refined and extended by Dinh et al. in 2016 [96]. Flow-based models, however, pose challenges in terms of training complexity due to the need to invert networks and compute determinants, which is their primary drawback. Table 1 provides a summary of GenAI model types, their characteristics, advantages, and disadvantages; it helps in understanding and selecting a suitable generative model for specific applications.

| GenAI Model Type | Characteristics | Advantages | Disadvantages |
| --- | --- | --- | --- |
| Generative Adversarial Network (GAN) | Two neural networks, a generator and a discriminator, compete with each other to generate data. | Generates high-quality data that is indistinguishable from real data. | Unstable to train; difficult to find the right balance between the generator and the discriminator. |
| Variational AutoEncoder (VAE) | Encodes data into a latent space and then decodes it back into the original space. | Generates data that is similar to the training data. | Less flexible than GANs; lacks the ability to tackle sequential data. |

Table 1: Summary of GenAI Models
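As a small numerical illustration of the forward (noising) half of the diffusion process described above, the sketch below applies a predefined linear noise schedule to a placeholder data vector; the learned reverse de-noising network, which is the part trained in practice, is omitted, and all values are illustrative.

```python
import numpy as np

def forward_diffusion(x0, T=1000, beta_start=1e-4, beta_end=0.02, seed=0):
    """Forward process q(x_t | x_0) = N(sqrt(abar_t) * x_0, (1 - abar_t) * I)."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(beta_start, beta_end, T)   # predefined noise schedule
    alphabar = np.cumprod(1.0 - betas)
    def sample(t):
        noise = rng.normal(size=x0.shape)
        return np.sqrt(alphabar[t]) * x0 + np.sqrt(1.0 - alphabar[t]) * noise
    return sample

x0 = np.ones(8)      # placeholder "clean" data vector
q = forward_diffusion(x0)
print(q(10))         # slightly noised sample
print(q(999))        # nearly pure noise at the end of the schedule
```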
## 4 Opportunities of GenAI in Construction

### Current GenAI Applications and Developments in Construction
Recent studies using LLMs to solve construction-related problems demonstrate the long-term opportunities of GenAI in the industry. In 2023, Zheng and Fischer developed a BIM-GPT integrated framework [97] to retrieve, summarize, and answer questions from the BIM database, overcoming the challenges due to the extensive engineering required to automate complex information extraction from rich BIM models. By prompting the LLM appropriately, BIM-GPT shows how advanced integration can extract value from construction data assets. Such a pioneering idea laid early groundwork for GenAI in the AEC domain. A recent work by Prieto et al. in 2023 [98] shows the potential for large language models to automate repetitive, time-intensive construction tasks. Their study tested using ChatGPT to generate coherent schedules that logically sequence activities and meet scope requirements. Hasan et al. proposed a novel method for classifying injury narratives to identify risks and hazards in construction by fine-tuning bidirectional encoder representations from transformers (BERT) sentence-pair models [99]. The BERT-based approach was also utilized for the automatic detection of contractual risk clauses within construction specifications [100]. A study indicated that language generation applications remain limited in construction despite extensive documentation such as drawings, reports, and contract documents; these documents contain critical references for decisions but, in their current form, cannot feed intelligent systems. Generative AI technologies such as ChatGPT and BARD can enable automated synthesis of construction documents and question answering, overcoming analog barriers to unlock the value in this data [101]. In construction automation, the major challenge in maximizing robotic systems is creating efficient sequence planning for construction tasks. Current methods, including mathematical and machine learning approaches, have limitations in adapting to dynamic construction settings. To address this, a recent study introduced RoboGPT, leveraging ChatGPT's advanced reasoning for automated sequence planning in robot-based construction assembly [102]. The recent CREATE AI Act authorizing the National Artificial Intelligence Research Resource (NAIRR) indicates growing government interest in expanding AI development. By providing open access to key AI resources, NAIRR aims to catalyze innovation across sectors while also serving as a testbed for trustworthy AI practices. Though in the early stages, this initiative represents an important step toward equitable AI advancement by connecting public infrastructure to circulate capabilities more widely through academia and industry [103]. Given the rapid development and deployment of LLMs in recent years, comparing LLMs is useful for tracking progress in this fast-moving field and understanding tradeoffs between model scale and accessibility, providing an at-a-glance overview for researchers and practitioners. The training parameter size indicates the scale and potential capability of LLMs, giving users insight into model strength and infrastructure requirements. Bigger models with more parameters tend to be more powerful, are generally costlier, and need more computational resources.
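Returning to the fine-tuning studies cited above, the following is a minimal sketch, assuming a hypothetical two-example dataset with made-up hazard labels, of how an encoder model such as BERT can be fine-tuned for injury-narrative classification with the HuggingFace transformers library; it is illustrative and not the cited authors' implementation.

```python
import torch
from torch.utils.data import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# Hypothetical toy data: injury narratives and hazard labels (0 = fall, 1 = struck-by)
texts = ["worker slipped from scaffold while removing guardrail",
         "laborer struck by swinging load during crane lift"]
labels = [0, 1]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

class NarrativeDataset(Dataset):
    """Wraps tokenized narratives and labels for the Trainer."""
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

args = TrainingArguments(output_dir="./hazard-bert", num_train_epochs=3,
                         per_device_train_batch_size=8, logging_steps=10)
Trainer(model=model, args=args, train_dataset=NarrativeDataset(texts, labels)).train()
```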
The LLMs include both open-source and closed-source approaches, each with distinct implications for access, innovation, and collective development. On one hand, open-source large language models promote transparency by providing public access to critical model assets like source code, training data, and model parameters. With freely available implementation details, open source fosters collaboration, as developers and researchers can contribute to enhancing and customizing the models to align with specific needs. However, hosting and maintaining accessible open-source models incurs infrastructure costs. In contrast, closed-source LLMs are proprietary models restricted to license-holder organizations. Without access to the underlying code, the specific details of the architecture, and the training data, the algorithms of closed-source LLMs may not be known to the public. While commercial closed-source models may ensure consistent uptime through dedicated cloud resources, their lack of public transparency limits external innovation opportunities. At the same time, closed-source models carry the advantage of preserving training data privacy. Table 2 summarizes the top ten LLMs currently available and offers insights for developers and researchers to evaluate both open-source and closed-source options against capability and recency when selecting a model aligned with their priorities and constraints.

| # | LLM | Developed by | Training Parameter Size (Billion) | Release Year | Access |
| --- | --- | --- | --- | --- | --- |
| 1 | GPT-4 | OpenAI | 1000+ | 2023 | Closed |
| 2 | PaLM | Google AI | 540 | 2022 | Open |
| 3 | MT-NLG | Nvidia | 530 | 2021 | Closed |
| 4 | Llama 2 | Meta AI | 500 | 2023 | Open |
| 5 | Gopher | DeepMind | 280 | 2021 | Open |
| 6 | GPT-3.5 | OpenAI | 175 | 2022 | Closed |
| 7 | GPT-3 | OpenAI | 175 | 2020 | Closed |
| 8 | OPT | Meta AI | 175 | 2022 | Open |
| 9 | LaMDA | Google AI | 137 | 2022 | Open |
| 10 | GPT-NeoX | Microsoft | 100 | 2023 | Closed |

Table 2: Current Ten Largest LLMs [94], [95], [96], [97], [98]-[102]

### What Opportunities are Perceived by Construction Industry Practitioners?
To gain insights into construction industry professionals' perspectives on GenAI, various text analytics techniques were applied. A word cloud uncovered frequent key terms, sentiment analysis indicated overall sentiment, and an opportunities list synthesized potential application areas. This comprehensive text data analysis provides a picture of discussion topics, attitudes, and outlooks regarding the potential of integrating GenAI into the construction industry. A word cloud visualization of the LinkedIn data provides an overview of frequently mentioned terms related to generative AI in construction (Figure 3). A word cloud provides a visual representation of textual data, serving as an impactful tool for text analysis [113, 114]. We preprocessed the data by cleaning and tokenization to improve quality. Text cleaning involved formatting adjustments to improve computational readability. Tokenization segmented the text into discrete, meaningful units by isolating individual words and phrases. We then utilized the Natural Language Toolkit (NLTK) in Python to remove generic stop words and distill the corpus down to substantive terms [115, 116].
This shaped a refined dataset with reduced noise, ready for analysis. The results summarize a diverse range of terms that capture the overarching themes and trends within the dataset. The most dominant word is "ai", highlighting the increased attention on artificial intelligence technologies broadly. Notably, "generative" appears with high frequency, demonstrating awareness of this specific AI subdomain. Other common terms like "design", "data", "project", and "technology" indicate a focus on potential applications in construction processes. "ChatGPT" arises fairly often as well, suggesting this popular demo has significantly shaped industry impressions of generative AI capabilities and potential applications in construction. Numerous terms point to opportunities like "productivity", "designs", "tools", and "processes". Meanwhile, words such as "help", "need", "could", and "future" convey a sense of anticipation and speculation around GenAI's developing impacts. Taken together, the word cloud provides a snapshot of how construction professionals are engaging with the emergent GenAI phenomenon, highlighting key opportunities while also indicating uncertainty about optimal applications and next steps.

Figure 3: Word Cloud Analysis of Industry Practitioners' Opinions

Furthermore, it is important to uncover the underlying sentiments conveyed in the text. Sentiment analysis, also called opinion mining, involves using computational methods to determine the opinions, attitudes, and emotions expressed toward a subject [114, 117, 118]. Sentiment analysis classifies opinions at three levels: document level categorizes the sentiment of entire documents; sentence level determines the sentiment of each sentence; and aspect level examines deeper to categorize sentiment towards specific entity aspects [119]. In our study, we utilized the TextBlob library to quantify sentiment polarity scores, ranging from -1 to 1, revealing positive, negative, or neutral sentiment. Through preprocessing, tokenization, and model-driven analysis, we categorized each text segment. The sentiment analysis yielded a clear distribution: opinions were predominantly positive, with small and roughly equal shares of negative and neutral opinions. This outcome highlights the overwhelmingly positive sentiment within the analyzed corpus about GenAI in construction. A bar chart visualizing the proportions of positive, negative, and neutral sentiments is shown in Figure 4. Based on the analysis of people's perspectives, this study synthesizes the key themes regarding the potential opportunities of Generative AI in construction, as summarized in Table 3. First, we identified the main points and common ideas expressed across multiple perspectives in the body of the text through careful reading and analysis. Second, we synthesized these main points into a few key, overarching themes that capture the essence of the perspectives. There is consensus around Generative AI's promise to drive greater efficiency, innovation, and data-driven decision-making across the construction lifecycle. However, viewpoints diverge regarding the scale and scope of GenAI's applications, as well as the need to thoughtfully manage its integration to maximize benefits versus risks.
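The following is a minimal sketch of the preprocessing, word cloud, and TextBlob sentiment steps described above; the two-document corpus is a placeholder standing in for the collected LinkedIn posts, and the parameters are illustrative.

```python
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from textblob import TextBlob
from wordcloud import WordCloud

nltk.download("punkt")
nltk.download("stopwords")

# Placeholder corpus standing in for the collected LinkedIn posts and comments
docs = ["Generative AI could boost productivity in construction scheduling.",
        "Concerns remain about data quality and adoption barriers."]

# Preprocessing: tokenize, lowercase, and drop stop words and non-alphabetic tokens
stops = set(stopwords.words("english"))
tokens = [t.lower() for d in docs for t in word_tokenize(d)
          if t.isalpha() and t.lower() not in stops]

# Word cloud of the cleaned corpus
wc = WordCloud(width=800, height=400, background_color="white").generate(" ".join(tokens))
wc.to_file("wordcloud.png")

# Sentiment polarity in [-1, 1]; classify each document as positive, negative, or neutral
for d in docs:
    polarity = TextBlob(d).sentiment.polarity
    label = "positive" if polarity > 0 else "negative" if polarity < 0 else "neutral"
    print(f"{label:8s} ({polarity:+.2f}): {d}")
```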
Figure 4: Sentiment Analysis of Industry Practitioners' Opinions

| Perspectives: Main Points | Key Theme |
| --- | --- |
| Applying GenAI for construction documents management |  |
| Enterprise search |  |
| Data management ultimately offers time-saving benefits and increased productivity when effectively leveraged | Construction Documents and Data Management |
| For example, integrating GenAI in scheduling to identify the most effective schedule path to follow |  |
| Can help improve conversations and collaboration between project stakeholders such as contractors, designers, and owners | Question Answering (QnA) |
| Stakeholder demands for faster, affordable, and sustainable builds create opportunities for GenAI and automation to address construction's unique challenges such as repetitive tasks and unsafe work environments |  |
| AI-generated designs and plans reduce manual work, enhancing data systems for faster payments, fewer errors, and better decisions | AI-Generated Designs |
| Generative AI increases predictive capabilities, leveraging historical data for accurate project forecasting, forecasting of trends, risk assessment, and opportunity identification |  |
| Incorporating GenAI streamlines the synthesis of project data and provides avenues for automating intricate information management, such as contract-related data, thereby enhancing decision-making during the initial phases of construction |  |
| AI and modern innovations in construction address labor shortages, cost escalation, and environmental concerns, positioning the industry for a transformative future |  |
| Integrate materials assessment AI tools to support informed materials selection for improved sustainability, maximizing de-carbonization |  |
| The development of GenAI, like ChatGPT, enhances human capabilities rather than replacing jobs |  |

Table 3: Overarching Themes on Opportunities

### Potential Applications of GenAI in Construction
Generative AI shows huge potential to transform information workflows in architecture, engineering, and construction. Advanced LLMs can parse volumes of unstructured data to extract insights with new levels of ease and speed. For instance, by analyzing building codes, generative models can identify relevant requirements and produce summarized, project-specific reports for architects. This automates laborious manual reviews. Similarly, contractors can input design specifications into AI systems to automatically compile cost and schedule estimates by associating 3D models with external databases. Simple properties like material name, soil type, concrete strength, roof slope, furniture supplier, and last-changed-by, as well as complex analytical queries, become accessible to stakeholders through AI's natural language capabilities. Whether generating code requirements from regulations, connecting designs to cost data, or retrieving wind load assumptions, GenAI allows seamless information flow between physical and virtual manifestations of the built environment. The power of language models lies in their ability to comprehend, reason about, and generate knowledge. As explained through these use cases, GenAI can improve project understanding and decision-making by unlocking information trapped in unstructured data. GenAI holds vast potential to increase productivity and collaboration in the AEC industry.
In this section, based on lessons learned from the literature, people's perspectives, and the building lifecycle tasks identified in [120, 121, 122, 123], we provide potential application examples across the project lifecycle, detailing beneficiaries and appropriate GenAI model types for each, as shown in Table 4. Clearly defining the output modality generated by each AI system, whether text, image, 3D, video, or task, simplifies technical requirements for implementation. Readers can identify suitable architectures by mapping desired functionality to output types. In addition, clustering potential applications by common model families also enables knowledge transfer across use cases and highlights productive pairings of activities with generative techniques. Finally, the popular model examples of each type at the end of the table expedite the process of model selection, allowing researchers and practitioners to make quicker decisions customized to their specific application requirements and objectives.

Table 4: Potential Applications of GenAI in Different Phases of Building Lifecycle

| Phase | Potential GenAI Application | Main beneficiary | Model type based on the output |
| --- | --- | --- | --- |
| Feasibility | To generate a feasibility report | Stakeholders | text-to-text |
| Feasibility | To generate a project initiation document (PID) | Owner | text-to-text |
| Feasibility | Interactive Q&A chatbot to refine PID details | Owner | text-to-text |
| Feasibility | To create visual representations of data such as site conditions, traffic patterns, zoning laws, etc. |  |  |
| Feasibility | To predict project milestones and success criteria for different phases of the project | Stakeholders | text-to-text |
| Feasibility | To create contracts and agreements | Stakeholders | text-to-text |
| Design | To generate multiple conceptual designs based on the program requirements and communicate with the architect | Architect | text-to-task |
| Design | Animated 3D visualization of organization chart and responsibilities | Stakeholders | text-to-3D |
| Design | To automatically generate a detailed cost estimation report | Owner | text-to-text |
| Design | To associate cost/time data with building design | Contractor | text-to-text |
| Design | To extract structural design requirements | Engineer | text-to-text |
| Design | To extract MEP design requirements | Engineers | text-to-text |
| Design | To generate a permit application draft | Architect | text-to-text |
| Design | To generate a risk analysis report | Stakeholders | text-to-text |
| Design | To develop a design communication Q&A chatbot | Architect | text-to-text |
| Design | To compare the design against the building code requirements | Architect | text-to-task |
| Design | To perform complex design checking (routing analysis, etc.) | Architect | text-to-task |
& Architect & text-to-task \\
& * To select the most suitable contractors based on project-specific criteria, performance histories, and contractual considerations & Owner & text-to-text \\ \hline
Procurement & * To visualize the material delivery schedule & Logistics team & text-to-3D \\
& * To generate a request for a quotation & Procurement team & text-to-text \\
& * Identification of the optimal supplier based on variables & Project manager & text-to-text \\
& * Streamline subcontractor bidding and selection & Contractor & text-to-text \\
& * Automated inventory management & Procurement team & text-to-text \\ \hline
Construction & * To extract project information from construction documents such as dimensions, materials used, responsible person, point of contact, etc. & & \\
& * To generate new documents, e.g., proposals, reports, etc. & & \\
& * To classify \& cluster documents based on project types, internal departments, sub-contractors, project phases, document types, materials, supply chain, etc. & Contractor & text-to-text \\
& * Generating code to automate tasks & Contractor & text-to-text \\
& * Translating documents into different languages & Contractor & text-to-text \\ \hline \hline \end{tabular} \end{table} Table 4: Potential Applications of GenAI in Different Phases of Building Lifecycle

Further applications noted for the Construction and Maintenance phases include:
* To optimize the cost estimation workflow
* To help progress tracking and identify safety concerns with drone integration
* To provide customized alerts and notifications on changes
* To help quality control, such as comparing completed tasks to project specifications to identify defects and deviations
* To generate an optimal schedule path
* Searching for specific information in the data lake, shared folders, project-specific repositories, etc.
* To generate targeted safety training materials
* To generate targeted trade training materials
* To create a knowledge management system using a Q&A chatbot
* To create a work order from logs
* Generative design of replacement parts
* To generate a predictive maintenance schedule
* To generate an energy consumption report
* A chatbot to assist occupants

Pre-processing of the curated construction data may include privacy constraints and noise reduction to enhance the model's performance. The resulting fine-tuned model has knowledge customized for the construction domain. Finally, the adapted model is deployed through careful prompt engineering to query its capabilities. Users provide prompts and obtain answers or visualizations based on the fine-tuned model's specialized intelligence. This conceptual framework for fine-tuning LLMs bridges the gap between pre-trained models and enterprise-specific applications, promoting adaptability in a wide range of domains.

## 5 Challenges of GenAI Implementation in Construction

Generative AI adoption across industries is growing rapidly: the immediate integration of new technologies like ChatGPT intensifies competitive pressures on organizations, while their novelty introduces new risks [69]. Like other industries, the integration of GenAI in construction is associated with complex challenges. Therefore, it is important to understand these challenges before applying the proposed conceptual framework.
These challenges comprise various areas, including domain knowledge, the potential for hallucinations in AI-generated outputs, the crucial aspect of accuracy in AI predictions, the generalizability of AI models to new situations, the need for frequent model updates and interpretability, the cost implications of deploying generative AI, and the ethical considerations around data privacy, bias, and accountability as shown in Figure 6. Furthermore, the construction sector faces specific regulatory hurdles related to the responsible use of GenAI, prompting the need for AI skill development and training, liability determination, copyright and intellectual property concerns, and certification protocols. Addressing these multidimensional challenges requires a proactive and collaborative effort involving industry experts, policymakers, and AI researchers to ensure the safe and effective implementation of GenAI in construction practices. Figure 5: A Conceptual GenAI Implementation Framework ### Domain knowledge The construction industry poses unique difficulties in applying GenAI due to its vast domain knowledge requirements. Capturing the industry's complicated technical engineering expertise across structural, mechanical, electrical, plumbing, and project management disciplines remains challenging. Construction also relies heavily on physical situational awareness and spatial reasoning when manipulating materials and navigating dynamic job site capabilities stretching the limits of AI [36]. Consequently, construction's vast knowledge context hinders GenAI's ability to extract meaningful structure-activity relationships from industry data. However, promising avenues exist to address these knowledge gaps. For instance, large language models like GPT require fine-tuning and contextual input tailored to the construction domain in order to efficiently generate industry-specific insights [124]. Hybrid reasoning techniques combining top-down ontological, symbolic knowledge with bottom-up neural networks can be beneficial. Therefore, advancing construction-focused GenAI requires incorporating domain knowledge more seamlessly into model architecture and training. This domain knowledge infusion remains an open research area for unlocking GenAI that can meet construction's complex and ever-changing demands. ### Hallucinations Generative artificial intelligence systems face challenges with hallucination, generating convincing but false outputs due to limited knowledge [70]. These hallucinations often result from factors such as inadequate or noisy training data, a lack of contextual understanding, or imposed constraints. GenAI systems are particularly notorious for producing aesthetically pleasing yet inaccurate predictions, often with an unwarranted high level of confidence. For instance, in the context of a GenAI scheduling system, hallucinations could lead to the generation of inaccurate timelines for critical paths. In construction-focused AI, which lacks the capability to perceive and validate real-world complexities directly, there is a risk of generating hallucinatory outputs that are apart from reality. To mitigate these potentially unsafe hallucinations, several strategies can be employed. These include the use of high-quality training data, a strong grounding in engineering and construction knowledge, simulated testing to validate predictions, continuous monitoring of uncertainty, and the introduction of human oversight throughout the AI's decision-making processes. 
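As a simple illustration of the "simulated testing" and human-oversight strategies above, the sketch below checks a model-generated schedule against known bounds on task durations before it is accepted. The task names, bounds, and generated durations are hypothetical examples, not data from any real project.

```python
# Minimal sketch: validate an AI-generated schedule against known duration bounds
# before accepting it. All task names and numbers are hypothetical examples.

# Plausible duration ranges (in working days) drawn from past projects.
duration_bounds = {
    "excavation": (5, 20),
    "foundations": (10, 40),
    "structural_steel": (15, 60),
}

# Durations proposed by a generative scheduling model (hypothetical output).
generated_schedule = {
    "excavation": 12,
    "foundations": 3,        # suspiciously short, likely hallucinated
    "structural_steel": 45,
}

def flag_suspect_tasks(schedule, bounds):
    """Return tasks whose generated duration falls outside historical bounds."""
    flagged = []
    for task, days in schedule.items():
        low, high = bounds.get(task, (0, float("inf")))
        if not (low <= days <= high):
            flagged.append((task, days, (low, high)))
    return flagged

for task, days, (low, high) in flag_suspect_tasks(generated_schedule, duration_bounds):
    print(f"Review needed: '{task}' = {days} days (expected {low}-{high})")
```

Checks of this kind do not remove the need for human review, but they make hallucinated outputs easier to catch before they reach a construction plan.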
Figure 6: Challenges of GenAI in Construction ### Accuracy Ensuring accuracy is a major challenge for GenAI, as inappropriate outputs can lead to big failures. Large language models like GPT-3 show these limits, relying on minimal training data from unverified sources [125]. Lack of fundamental construction engineering knowledge, such models obtain only superficial statistical associations rather than causal basics, risking construction decisions through misguided outputs. However, techniques exist to enhance output validity. Construction-specific fine-tuning with validated datasets can align models to the complexities of the built environment. Uncertainty indicators can flag doubtful predictions needing additional verification. Simulated testing enables early correction of inaccuracies before real-world implementation [126]. Further, prompted self-improvement may allow models to iteratively refine their outputs [127]. Overall, connecting robust datasets, uncertainty metrics, simulated validation, and self-correction procedures can introduce proper engineering causality over statistics, improving construction GenAI's accuracy. Advancing fundamental reasoning capabilities remains critical for developing generative intelligent systems that meet the construction industry's need for reliable automation and decision-making. ### Generalizability Generalizability refers to the ability of a generative AI model to extend its learning beyond the specific datasets and distributions it was trained on. A GenAI system utilizing historical data may encounter issues with poor generalization, where the knowledge derived from training data in the in-sample period does not effectively apply to new, out-of-sample data in testing. Even if a model fits the training data well, its poor generalization is unusable for addressing real-world decision-making challenges [128]. For example, a model pre-trained on fixed historical data may fail to account for unexpected changes like weather delays, labor availability, or design changes. Models trained on a limited dataset, unfamiliar inputs, and lack of a casual understanding mechanism in the model are the major challenges that contribute to the generalizability problem. Collecting diverse training data and testing models on novel inputs helps the construction GenAI better generalize [129]. Leveraging simulation, causal reasoning, and common-sense checks also improves generalization by teaching strong process knowledge. And, continual learning enables adaptation to new data over time. Together these solutions improve generalization. ### Model Updates and Interpretability Model updating is a key challenge for deploying generative AI in construction. Training data can quickly become outdated as materials, methods, and regulations frequently change. Without recent data, models will miss new innovations and provide unreliable guidance. For example, an AI chatbot trained before the pandemic may overlook the impacts of supply chain disruptions and labor shortages. Regularly retraining models on new data is essential, but costly and complex at scale. Potential solutions include modular model architectures to simplify updating, simulations to generate fresh synthetic training data, and lightweight model adaptation techniques like transfer learning. However, balancing model accuracy and update will remain an obstacle. User oversight and paired human-AI collaboration are recommended when utilizing construction generative AI. 
In addition, another limitation of deep generative models is their black-box nature - the internal workings are not transparent or easily interpretable. This is problematic for critical construction applications where explainability is important [130], [131]. The opaque processes by which generative AI systems produce outputs create uncertainties around reliability and trustworthiness. Users cannot validate which parts of the model's knowledge base are being leveraged. Therefore, more research is needed to develop interpretable model architectures and training techniques, making the decision-making logic clear. Progress in the construction of explainable AI will be key to wider adoption by explaining the reasoning behind outputs and establishing confidence in the technology. ### Cost Training and operating generative AI models require significant costs, presenting challenges for widespread construction industry adoption. The training phase alone demands massive computing resources and time to produce capable generative capacity. Ongoing operating expenses also accumulate from the energy required to run large models and web-serving infrastructure [2]. For example, monthly subscription fees to access ChatGPT currently start at $20 with traffic limitations. In addition, utilizing GPT models to develop conversational apps produces additional usage costs billed per generated token [124]. Initial application development leveraging these models is expensive upfront too. The considerable resource demands and ongoing costs act as barriers, especially for smaller construction companies with limited budgets [132]. Further optimizations to reduce the computing power, energy, and data needs of generative models would support feasibility. More cost-effective scaling solutions tailored for construction use cases could also expand access. Overcoming these cost challenges requires a well-balanced approach, considering the long-term benefits of GenAI integration against the upfront investments needed to tie together its capabilities effectively. ### Ethical Challenges The adoption of generative AI models also raises ethical issues around data privacy, bias, and accountability that the construction industry must proactively address. These data-intensive models can utilize sensitive project information and personal details lacking proper consent, presenting risks of confidentiality breaches and intellectual property violations. Researchers and the industry should implement data privacy safeguards and anonymization measures. For example, OpenAI's ChatGPT explicitly acknowledges its potential to generate inaccurate information about individuals, locations, or facts, underlining the need for researchers to be aware of this limitation and ethical challenges when incorporating ChatGPT in scientific works. This includes essential considerations regarding data privacy, confidentiality, and informed consent [133]. The handling of sensitive data by ChatGPT introduces vulnerabilities that may be exploited for unauthorized access or misuse, thereby posing substantial privacy and security risks [69]. Also, the adoption of LLMs raises concerns about creating potential biases[134]. The utilization of confidential construction data like cost, schedule, safety records, contract documents, and BIM model information may potentially trespass upon intellectual property rights and give rise to ethical and legal difficulties. 
Therefore, establishing clear accountability for errors or accidents caused by AI-generated outputs remains a complex issue needing careful consideration, in order to develop ethically responsible frameworks for implementing generative AI within the construction industry. ### Construction Regulatory Challenges In the construction sector, the integration of GenAI poses several complex regulatory challenges. Successful implementation requires AI understanding, skillsets, and trainings so that industry experts can properly utilize these models. One of the major skills required is proficiency in "prompt engineering," optimizing prompts to maximize model efficacy [124], [135]. However, overreliance on automation risks in reduction of human expertise and the potential for errors in cases of AI malfunction or erroneous information provision [136]. As generative models become capable of autonomously producing comprehensive deliverables, for example, a detailed site safety plan, a serious concern emerges regarding accountability in the event of a failure. Determining liability in such instances, wherein something goes wrong, becomes a complex matter. Who bears responsibility in the event of a failure - is it the developer of the AI system, the construction company implementing it, or the safety manager who approved the final AI-generated plans? Additionally, the independent origination of new content by AI raises questions about copyrights and intellectual property. The ownership of AI-generated content requires a clear legislative definition. To maintain expertise and safety standards, construction companies could introduce certification protocols for AI training and deployment. Moreover, close cooperation between industry experts, policymakers, and AI researchers is essential to navigate these regulatory challenges. ### What Challenges are Perceived by Construction Industry Practitioners? The challenges obstructing GenAI adoption in construction are associated with both technological and human factors. A recent LinkedIn poll of 48 AEC professionals investigated the frequency of generative AI usage in their work, finding 40% have never tried it, 33% use it sometimes, 19% use it often, and 8% use it all the time[137]. This reveals that most AEC professionals are still in the early stages of generative AI adoption, though a segment has integrated these tools into their regular workflows. And, another poll of 16 AEC professionals examined whether their organizations have policies regarding the use of commercial GenAI tools, finding 63% do not, 31% do, and 6% are unsure[137]. This indicates that most companies currently lack formal guidelines on GenAI usage, presenting an opportunity to implement policies and controls given the rise of technologies like ChatGPT. The analysis of perspectives shows key themes around security, governance, awareness, and adaptation as mentioned below. Construction companies must proactively address these multifaceted challenges to unlock their potential. This requires strategic approaches customized to the construction industry's distinct needs within this rapid innovation. A thoughtful, industry-centered path can help overcome obstacles and realize GenAI's potential. * **Proactive Approach Needed**: The implementation of GenAI in construction requires a proactive approach to security and governance. Addressing these challenges is vital to unlock the potential for improved productivity and creativity during the industry's technological transformation. 
* **Strategic Adoption:** The adoption of GenAI within construction companies requires a strategic approach to manage security, risks, and governance effectively. The practical procedures allow responsible and ethical utilization while maintaining standards of security, safety, and compliance. The guidance from construction technology experts can support in setting up a successful generative AI program. * **Implementation Challenges**: GenAI systems help a comprehensive analysis of trade-offs in construction projects, including physical, financial, and sustainable aspects. However, addressing implementation challenges, such as increasing awareness and understanding, is essential to drive broader adoption and establish convincing business cases for technology investments. * **Limited Awareness**: The construction industry is facing difficulties in building an efficient business case for investments in software, hardware, training, and infrastructure due to limited awareness. These challenges related to accessing and sharing big data hinder the effectiveness of GenAI models. Moreover, regulatory and legal complexities, particularly concerning intellectual property rights, add compliance concerns when deploying GenAI in visualizations or renderings. * **Expectation of Mature Technologies**: The construction market expects mature technologies ready for immediate use, focusing on solutions designed to the industry's distinctive challenges. However, this expectation leads to a deeper exploration of automation and AI in construction, recognizing the need for specialized solutions. * **Risk Mitigation and Ethical Governance:** To effectively implement GenAI in the construction industry, it is important to apply comprehensive risk mitigation strategies. These include various measures such as data encryption, strict access controls, and secure data storage practices. Furthermore, to safeguard AI-generated outcomes, addressing intellectual property concerns through well-defined guidelines and contractual agreements is essential. * **Novelty Challenge**: Another challenge in applying GenAI lies in its novelty. For example, many traditional schedulers are familiar with long-standing tools and may hesitate to embrace newer, more advanced solutions. ## 6 Recommendations and Future Directions In section 4.3, we have explained various potential applications that serve as a foundation for future research directions. We have structured this section into two subsections: 1) recommendations: short-term and long-term adaption strategies and, 2) future research directions: major future research questions. These sections show the directions for studies aimed at facilitating the effective integration of GenAI within the industry. ### Recommendations We recommend the following short-term and long-term strategies for adapting GenAI in construction: * **Fine Tuning LLMs:** The recommended initial approach for the integration of GenAI into the construction industry involves the fine-tuning of available powerful pre-trained language models using construction-specific data. Construction companies have the opportunity to curate datasets comprising various resources such as design documents, building codes, contractual documents, technical documents, and BIM data. This data is helpful in informing the selected LLM about specialized vocabulary and contextual nuances of the construction. 
Starting with modest datasets and focusing on clearly defined tasks can simplify prompt engineering and tailor GenAI systems to construction needs; a minimal illustrative fine-tuning sketch is given at the end of this section.
* **Human Oversight:** While capable of automating tasks, GenAI systems still require human oversight to validate quality and accuracy. Model outputs should be reviewed, and feedback can be provided to improve performance. Therefore, human-in-the-loop approaches that pair AI generation with human judgment can combine the strengths of both.
* **Evaluating Business Impact:** It is recommended to assess the business impacts of GenAI using experiments measuring key performance indicators. Pilot studies could evaluate model influence on metrics such as productivity, cost, time, and risks. Continued measurement as the model integrates more data provides insight into return on investment and helps quantify the benefits of GenAI investment for the organization.
* **Developing Custom LLMs:** In the long run, collaborative efforts between the AEC industry and researchers can focus on designing specialized language model architectures for construction-related tasks. This involves compiling extensive datasets from the AEC domain. The fundamental approach is to establish a secure central data repository with contributions from construction companies and consultants. Training models on this data, with the support of AI researchers, will capture domain expertise and enable innovation.

### Future Research Directions

We present the following major future research questions for adapting GenAI in construction:
* How can we develop GenAI models that can accurately extract detailed project information from a variety of construction documents and BIM models? This could help improve productivity.
* What techniques can enable GenAI models to automatically generate feasible building designs based on requirements? Generative design could help with time and cost savings.
* How can we build AI assistants that can have natural conversations with human stakeholders to refine project details, requirements, and reports in different phases of the building lifecycle? Conversational AI could help project stakeholders.
* What GenAI techniques can enable the automated generation of 3D visualizations, videos, and images from text descriptions? This could help in better communication.
* How can we develop AI systems to accurately evaluate construction progress, safety, and quality using visual data? Computer vision integration could be key to achieving this.
* What GenAI techniques can optimize construction scheduling, logistics, and cost estimating? This could help in construction project management.
* How can we build AI assistants that can understand BIM model information, extract that information, and update BIM models based on prompts? This could help to accelerate the BIM execution process for general contractors.
* How can we integrate robotics with natural language AI to enable easy human-robot interactions? This could help enhance the usability and accessibility of robotic systems, leading to improved collaboration.
* What machine learning techniques can support accurate automatic code generation for construction tasks and changes in scope? This could help to track changes and troubleshoot issues.
* How can we build GenAI models that learn continuously from construction data to improve predictions and decision-making over time? This could help overall organizational success and future project forecasting.
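To ground the fine-tuning recommendation above, the following is a minimal sketch of supervised fine-tuning of a small pre-trained causal language model on construction question-answer pairs using the Hugging Face `transformers` and `datasets` libraries. The model name, example data, and hyperparameters are illustrative assumptions, not a recommended production setup.

```python
# Minimal sketch (illustrative only): fine-tune a small causal language model on
# construction Q&A pairs. Model name, data, and hyperparameters are assumptions.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "gpt2"  # stand-in for any pre-trained LLM suitable for fine-tuning
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# A handful of curated construction Q&A pairs (hypothetical examples).
examples = [
    {"text": "Q: What slump is typical for pumped structural concrete?\n"
             "A: Around 100-150 mm, subject to the project specification."},
    {"text": "Q: Who responds to a request for information (RFI)?\n"
             "A: Typically the design team, routed through the general contractor."},
]
dataset = Dataset.from_list(examples).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=256)
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="construction-llm", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # adapts the pre-trained weights to the construction examples
```

In practice, a construction organization would replace the toy examples with its curated document corpus and evaluate the adapted model against held-out questions before any deployment.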
## 7 Conclusion

This study makes important contributions by investigating the evolving opportunities and challenges of implementing Generative AI in the construction industry. Through a detailed literature review, we have identified the limitations of traditional AI methods and examined recent use cases of GenAI models. Using sentiment analysis and theme-based interpretation, we have also investigated industry practitioners' insights into the perceived application potential and barriers to adopting GenAI in the construction sector. Synthesizing these findings, we identified potential applications and proposed a conceptual framework to guide researchers and practitioners in implementing GenAI in construction. The mapping of different GenAI model types to various construction tasks suggested potential future applications of text-to-text, text-to-image, text-to-3D/video, and text-to-task models across the project feasibility, design, procurement, construction, and operation phases. However, our study also highlights significant implementation challenges around domain knowledge, hallucinations, model accuracy, generalizability, interpretability, cost, ethics, and regulation that must be addressed before executing the proposed framework. The recommendations provided in this study are expected to help construction stakeholders with strategies for initiating GenAI adoption and planning for long-term application while mitigating risks. The future research questions identified can direct the construction research community to focus on the practical applications of GenAI capabilities. Moreover, this study provides a strong literature foundation for realizing the capacity and challenges of GenAI in this industry. Further validation studies implementing the proposed framework and developing real construction applications would be a natural extension of this research.

## Funding

This research received no external funding.

## Conflicts of Interest

The authors declare no conflict of interest.
過去10年、人工知能(AI)の急速な進歩により多くの産業の慣行が変化したが、建設業界は遅れをとっている。近年、OpenAIのGPT、GoogleのPaLM、MetaのLlamaのような高度な大規模言語モデル(LLM)の出現と急速な採用は、大きな可能性を示し、世界中で大きな関心を呼び起こしてきた。しかし、このような勢いにもかかわらず、建設産業におけるGenerative AI(GenAI)の導入に関する研究は不足しており、研究者や実践者にとって重要な知識の空白が生じている。この空白を埋めることは、GenAIの統合の展望と複雑さを調査するうえで必要であり、建設産業におけるGenAIの早期導入を最適化するためにも必須である。既存のコンテンツから学習して人間らしいコンテンツを生成するというGenAIの能力を踏まえ、私たちは建設業界におけるGenAIの未来について、以下の2つの指針となる問いを提起する。
2309.09060
Sub-action Prototype Learning for Point-level Weakly-supervised Temporal Action Localization
Point-level weakly-supervised temporal action localization (PWTAL) aims to localize actions with only a single timestamp annotation for each action instance. Existing methods tend to mine dense pseudo labels to alleviate the label sparsity, but overlook the potential sub-action temporal structures, resulting in inferior performance. To tackle this problem, we propose a novel sub-action prototype learning framework (SPL-Loc) which comprises Sub-action Prototype Clustering (SPC) and Ordered Prototype Alignment (OPA). SPC adaptively extracts representative sub-action prototypes which are capable to perceive the temporal scale and spatial content variation of action instances. OPA selects relevant prototypes to provide completeness clue for pseudo label generation by applying a temporal alignment loss. As a result, pseudo labels are derived from alignment results to improve action boundary prediction. Extensive experiments on three popular benchmarks demonstrate that the proposed SPL-Loc significantly outperforms existing SOTA PWTAL methods.
Yueyang Li, Yonghong Hou, Wanqing Li
2023-09-16T17:57:40
http://arxiv.org/abs/2309.09060v1
# Sub-action Prototype Learning for Point-level Weakly-supervised Temporal Action Localization ###### Abstract Point-level weakly-supervised temporal action localization (PWTAL) aims to localize actions with only a single timestamp annotation for each action instance. Existing methods tend to mine dense pseudo labels to alleviate the label sparsity, but overlook the potential sub-action temporal structures, resulting in inferior performance. To tackle this problem, we propose a novel sub-action prototype learning framework (SPL-Loc) which comprises Sub-action Prototype Clustering (SPC) and Ordered Prototype Alignment (OPA). SPC adaptively extracts representative sub-action prototypes which are capable to perceive the temporal scale and spatial content variation of action instances. OPA selects relevant prototypes to provide completeness clue for pseudo label generation by applying a temporal alignment loss. As a result, pseudo labels are derived from alignment results to improve action boundary prediction. Extensive experiments on three popular benchmarks demonstrate that the proposed SPL-Loc significantly outperforms existing SOTA PWTAL methods. Weakly-supervised temporal action localization, Point-level supervision, Sub-action prototype learning ## I Introduction Temporal action localization (TAL) is an important visual task with numerous applications (e.g., anomaly detection [1] and video retrieval [2]) and has witnessed remarkable progress in the fully-supervised setting [3]. To bypass the tedious manual annotations of action boundaries, video-level weakly-supervised TAL methods [4, 5, 6] has draw increasing attention which localizes actions with only video-level class labels. However, due to the absent of explicit location supervision, they suffer from action-background confusion and drop largely behind the fully-supervised methods. To balance annotation costs and model performance, point-level weakly-supervised temporal action localization (PWTAL) is proposed, where only one single timestamp (point) is annotated within each action instance for training. To date in the literature, pioneering PWTAL methods divide the input video into a series of snippets and seek to generate dense pseudo labels to provide snippet-level supervision. SF-Net [7] mines potential action and background frames to expand point annotations. Ju et al. [8] design a differentiable mask generator. LACP [9] proposes a greedy algorithm to search for proposals with high confidences as fine-grained supervision. Nevertheless, these methods handle the information within each snippet independently, overlooking the potential temporal structures of action instances, thus can only obtain suboptimal pseudo labels and fail to learn action completeness. To illustrate the above issue, we take _"LongJump"_ in Fig. 1 as an example. Baseline is a conventional method that only performs at snippet-level. It completely capture the representative instance (e.g., the blue dashed box) but fail to detect the outlier instance (e.g., the green dashed box) with camera view change. One common detail is that a class of actions is generally involves multiple sub-actions that occur sequentially, i.e., sub-action level motion pattern, no matter how the appearance changes. Although the two action instances differ in appearance, they both consists of three ordered sub-actions: _"approach run"_ (snippet #1, #2 and #3), _"takeoff"_ (snippet #4) and _"landing"_ (snippet #5). 
Accordingly, we argue that the sub-action temporal dependency of representative instances can serve as a bridge to offer completeness guidance for outlier instances, which helps to thoroughly explore the intrinsic motion pattern of a certain action class. This observation motivates us to exploit the potential completeness clues, i.e., the representative sub-action temporal dependency, to mine intrinsic motion patterns at the sub-action level, so as to promote pseudo label generation. A primary challenge is to obtain sub-action features from representative instances with large temporal scale variation and to apply their temporal dependency to guide other instances. To this end, taking advantage of the noise robustness of prototypical features [10], we propose a novel **S**ub-action **P**rototype **L**earning framework for point-level weakly-supervised temporal action **L**ocalization (SPL-Loc), which contains two modules: Sub-action Prototype Clustering (SPC) and Ordered Prototype Alignment (OPA). SPC extracts sub-action prototypes from representative proposals and adaptively changes the prototype count. This makes the extracted prototypes temporal-aware and spatially adapted, so as to be robust enough to handle the variations caused by video noise such as camera view changes, incomplete bodies, and motion ambiguity. After partitioning the input video into multiple regions, OPA selects proposal-related prototypes to perform temporal alignment [11] with each undetermined region, which helps deliver sub-action level completeness cues among different instances. Furthermore, the alignment results can serve as online pseudo labels to provide fine-grained supervision.

Fig. 1: Comparison between localization results of baseline and SPL-Loc. The blue dashed box denotes a representative instance and the green one denotes an outlier instance. The outlier instance is hard for the baseline to detect, because the camera view has changed and only part of the athlete's body appears in snippets #1 and #2. By contrast, the representative instance, with less interference and the entire body visible, is much easier to detect.

The main contributions can be summarized as follows: 1) A unified pseudo label generation framework is designed to mine intrinsic motion patterns at the sub-action level for PWTAL. 2) Two modules, i.e., SPC and OPA, are proposed to extract sub-action prototypes and provide completeness guidance during pseudo label generation, respectively. 3) The experimental results highlight the benefits of our method, which significantly outperforms SOTA methods. ## II Methodology ### _Baseline Setup_ As shown in Fig. 2, given a \(T\)-snippet input video, a set of point annotations \(\{(y_{t_{i}}^{act},t_{i})\}_{i=1}^{N^{act}}\) is provided for all \(N^{act}\) instances, where \(t_{i}\) and \(y_{t_{i}}^{act}\) provide location and class supervision, respectively. A feature extractor is employed to encode the motion (FLOW) and appearance (RGB) information of the input video. Both RGB and FLOW features are fed into two convolutional layers to learn a \(D\)-dimensional task-specific embedding \(X\in\mathbb{R}^{T\times D}\). Subsequently, the temporal class activation sequence (TCAS) \(\mathcal{A}\in\mathbb{R}^{T\times(C+1)}\), representing the snippet-level action scores, is derived from another two convolutional layers with a sigmoid and an averaging operation. The \((C+1)\)-\(th\) dimension corresponds to the background score. Our baseline introduces video- and point-level losses for network training.
For each class, we perform the \(top\)-\(k\) pooling operation [12] to produce the video-level class score, which is supervised by a cross-entropy loss \(\mathcal{L}_{video}\). In addition, following [9], we search for pseudo background points between any two adjacent action points by applying a threshold on the background score to supplement the point annotations. A focal loss \(\mathcal{L}_{point}\) [13] is adopted to supervise the action points and the pseudo background points. The total loss of the baseline is as follows: \[\mathcal{L}_{base}=\lambda_{1}\mathcal{L}_{video}+\lambda_{2}\mathcal{L}_{point} \tag{1}\] where the \(\lambda_{*}\) are hyper-parameters that balance the loss terms. ### _Sub-action Prototype Learning_ Although the pseudo background points can supplement the sparse point annotations to some extent, the baseline still requires dense pseudo labels and suffers from incomplete predictions. Nevertheless, the proposals with high confidences produced by the baseline can still provide a decent approximation of action durations. To take full advantage of this prior knowledge, we regard the proposals with high confidences as representative proposals and introduce Sub-action Prototype Clustering (SPC) and Ordered Prototype Alignment (OPA). **Sub-action Prototype Clustering.** Recent work [14] constructs class prototypes for snippet-level relationship modeling but merely uses a single prototype to represent a whole action instance, which unavoidably drops temporal scale information. Intuitively, an action instance with a longer duration needs more prototypes to capture the variations of temporal scale and spatial content. Consequently, SPC focuses on a multiple-prototype learning strategy to adaptively change the prototype count according to the proposal duration. Formally, consider a representative proposal \(X^{p}\in\mathbb{R}^{N_{p}\times D}\) and uniformly initialized sub-action prototypes \(S^{0}\in\mathbb{R}^{N_{s}\times D}\) within \(X^{p}\), where \(N_{p}\) is the proposal length and \(N_{s}\) denotes the number of prototypes. We define the distance metric \(Dis\) by calculating the feature similarity while also considering the temporal similarity: \[Dis=\sqrt{(d_{f})^{2}+\gamma(d_{t})^{2}} \tag{2}\] where \(d_{t}\) and \(d_{f}\) denote the Euclidean distances for temporal position and features, respectively, and \(\gamma>0\) is a trade-off parameter. Afterward, we extract sub-action prototypes by iteratively computing associations and weighted updates. The process of SPC is delineated in Algorithm 1. ``` Input: proposal feature \(X^{p}\), initial sub-action prototypes \(S^{0}\) Output: final sub-action prototypes \(S^{n}\) for \(n\in\{1,2,...,L\}\) do  Compute the distance function, \(Dis=\sqrt{(d_{f})^{2}+\gamma(d_{t})^{2}}\)  Compute the association between each snippet \(X^{p}_{i}\) and sub-action prototype \(S^{n-1}_{j}\), \(A^{n}_{ij}=e^{-Dis(X^{p}_{i},S^{n-1}_{j})}\)  Update the sub-action prototypes, \(S^{n}_{j}=\frac{1}{\sum_{i}A^{n}_{ij}}\sum_{i=1}^{N_{p}}A^{n}_{ij}X^{p}_{i}\) return final sub-action prototypes \(S^{n}\) ``` **Algorithm 1** Sub-action Prototype Clustering (SPC) As discussed above, the main contribution of SPC is its adaptive perception of temporal scale and spatial content. To guarantee this adaptability, we constrain the number of prototypes according to the following adaptive criterion: \[N_{s}=\min\left[\frac{N_{p}}{r_{p}},N_{max}\right] \tag{3}\] where \(r_{p}\) indicates the average length corresponding to each initial sub-action prototype.
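The clustering loop above is essentially a soft, distance-weighted re-estimation of prototypes. The following is a minimal NumPy sketch of one way to implement it; the random features, the form of the soft association, and the chosen values of \(r_{p}\), \(\gamma\), and the prototype cap follow the implementation details reported later but are otherwise illustrative, not the authors' released code.

```python
# Minimal sketch of the SPC iteration (illustrative, not the authors' implementation).
import numpy as np

def spc(proposal_feat, n_proto, gamma=3.0, n_iters=6):
    """proposal_feat: (N_p, D) snippet features of one representative proposal."""
    n_p, _ = proposal_feat.shape
    t = np.arange(n_p, dtype=float)                       # temporal positions of snippets
    # Uniformly initialize prototypes along the proposal (feature + temporal position).
    init_idx = np.linspace(0, n_p - 1, n_proto).astype(int)
    protos, proto_t = proposal_feat[init_idx].copy(), t[init_idx].copy()

    for _ in range(n_iters):
        d_f = np.linalg.norm(proposal_feat[:, None, :] - protos[None, :, :], axis=-1)
        d_t = np.abs(t[:, None] - proto_t[None, :])       # temporal distance
        dis = np.sqrt(d_f ** 2 + gamma * d_t ** 2)        # Eq. (2)
        assoc = np.exp(-dis)                              # soft association A_ij
        w = assoc / assoc.sum(axis=0, keepdims=True)      # normalize over snippets
        protos = w.T @ proposal_feat                       # weighted prototype update
        proto_t = w.T @ t                                  # update prototype positions
    return protos

# Usage: a 20-snippet proposal with 1024-D features and an adaptive prototype count.
feat = np.random.rand(20, 1024)
n_s = min(int(np.ceil(20 / 5)), 5)                        # adaptive criterion, r_p=5, N_max=5
prototypes = spc(feat, n_s)
```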
To prevent sub-actions from being over-segmented, we set the hyper-parameter \(N_{max}\) to constrain the upper limit of \(N_{s}\). After the prototype extraction, a memory bank \(\mathcal{M}^{sub}\) is employed to store the sub-action prototypes for each class, which enables us to mine intrinsic motion patterns in an inter-video fashion. For each training epoch, only the prototypes with the \(top\)-\(k_{sub}\) confidences are retained in \(\mathcal{M}^{sub}\). **Ordered Prototype Alignment.** As shown in Fig. 2, based on the annotation sequence, the input video can be split into \(N^{act}\) undetermined regions \(\{X^{un}_{n}\}_{n=1}^{N^{act}}\) and \(N^{act}+1\) background regions \(\{X^{bkg}_{n}\}_{n=1}^{N^{act}+1}\). Each undetermined region contains an action instance. Different from [9], which directly searches for proposals within undetermined regions as pseudo labels, OPA aims to refine video features while generating pseudo labels capable of covering more complete actions by introducing dynamic time warping (DTW) [15]. Given an \(M_{n}\)-snippet undetermined region feature \(X^{un}_{n}\in\mathbb{R}^{M_{n}\times D}\) and the \(N_{n}\)-snippet proposal feature \(X^{pro}_{n}\in\mathbb{R}^{N_{n}\times D}\) searched within it, we measure the similarity between each proposal snippet and each prototype in \(\mathcal{M}^{sub}\) of the same class by calculating the cosine distance. The proposal-related prototype sequence \(P^{sub}_{n}\in\mathbb{R}^{N_{n}\times D}\) is generated by selecting the prototypes with the maximum similarity to the proposal snippets in order. Consecutively repeated prototypes are dropped to reduce computational redundancy. Assuming that undetermined regions evolve in a background-action-background order, we temporally average-pool the background regions \(X_{n}^{bkg}\) and \(X_{n+1}^{bkg}\) adjacent to \(X_{n}^{un}\) to produce the background prototypes \(P_{n}^{bkg}\in\mathbb{R}^{1\times D}\) and \(P_{n+1}^{bkg}\in\mathbb{R}^{1\times D}\), respectively. Subsequently, we obtain the ordered prototype sequence, carrying the completeness guidance information of \(X^{un}\), by concatenation: \(P_{n}^{ord}=Concate(P_{n}^{bkg},P_{n}^{sub},P_{n+1}^{bkg})\in\mathbb{R}^{(N_{n}+2)\times D}\). The cumulative distance function between the ordered prototype sequence \(P_{n}^{ord}\) and the undetermined region \(X_{n}^{un}\) is evaluated with the following recursion: \[S(i,j)=Cos(i,j)+\min\{S(i,j-1),S(i-1,j-1)\} \tag{4}\] where \(Cos(\cdot)\) represents the cosine distance and \(S(i,j)\) is evaluated on the \(i\)-\(th\) prototype of \(P_{n}^{ord}\) and the \(j\)-\(th\) snippet of \(X_{n}^{un}\). Notably, different from conventional DTW methods, we impose rigid constraints on the warping paths to guarantee that each snippet can only be aligned to a single prototype. Finally, the pseudo labels are derived from the combination of the alignment results for all undetermined regions. The snippets aligned with the sub-action prototypes are considered positive labels, while those aligned with the background prototypes are considered negative labels. After the above recursion, the distance measure between \(P_{n}^{ord}\) and \(X_{n}^{un}\) is provided by \(\phi(P_{n}^{ord},X_{n}^{un})=S(M_{n},N_{n}+2)\). It is evident that the undetermined region \(X_{n}^{un}\) should be more similar to the ordered prototype of the same class, \(P_{n}^{ord}\), than to the ordered prototype of a different class, \(\bar{P}_{n;c}^{ord}\).
To this end, we design the following contrastive loss: \[\mathcal{L}_{OPA}=-\frac{1}{N^{act}}\sum_{n=1}^{N^{act}}\log\left[\frac{\exp(-\phi(P_{n}^{ord},X_{n}^{un})/\tau)}{\sum_{c=1}^{C}\exp(-\phi(\bar{P}_{n;c}^{ord},X_{n}^{un})/\tau)}\right] \tag{5}\] where \(\tau\) denotes the temperature parameter. **Training Objective.** Following [9], a score contrastive loss \(\mathcal{L}_{PL}\) is introduced to provide fine-grained supervision through the pseudo labels. The total loss is composed as follows: \[\mathcal{L}_{total}=\mathcal{L}_{base}+\lambda_{3}\mathcal{L}_{OPA}+\lambda_{4}\mathcal{L}_{PL} \tag{6}\] where the \(\lambda_{*}\) are hyper-parameters that balance the loss terms. ### _Inference_ During inference, we apply the multi-threshold approach [16] on the TCAS to obtain proposals and use the outer-inner-contrast score [17] to calculate the confidence of each proposal. Finally, NMS [18] is performed to remove redundant proposals. Note that SPC and OPA are not performed at testing time. ## III Experiments ### _Datasets_ We evaluate SPL-Loc on three temporal action localization datasets. THUMOS-14 [19] contains 200 validation and 213 test videos, which belong to 20 action categories. The video length varies over a wide range, making this dataset extremely challenging. GTEA [20], consisting of 7 fine-grained daily kitchen actions, has 21 training videos and 7 test videos. In BEOID [21], there are 58 videos from 30 action categories. ### _Implementation Details_ In accordance with previous PWTAL methods [7, 8, 9], we report the mean average precision (mAP) at different IoU thresholds to evaluate performance. I3D [22] is employed to extract video features and is not fine-tuned, for a fair comparison. The feature dimension \(D\) is set to 1024. For all three datasets, we set \(\lambda_{1}=1\), \(\lambda_{2}=1\), \(\lambda_{3}=1.5\), \(\lambda_{4}=1\). The number of iterations \(L\) is set to 6. For THUMOS-14, we set \(r_{p}=5\), \(\gamma=3\), \(k_{sub}=10\). For GTEA and BEOID we set \(r_{p}=3\), \(\gamma=1\), \(k_{sub}=8\). ### _Ablation studies_ To evaluate the effectiveness of SPL-Loc, we conduct ablation experiments on THUMOS-14. #### III-C1 The impact of the adaptive criterion Table I reports the impact of the maximum number of prototypes \(N_{max}\). Notably, SPC degenerates to global average pooling when \(N_{max}=1\). As \(N_{max}\) increases, the prototypes' ability to perceive temporal scale is gradually enhanced. When \(N_{max}=5\), SPL-Loc achieves the optimal performance, and performance does not improve as \(N_{max}\) increases further. We infer that too large an \(N_{max}\) leads to over-segmentation for some representative actions, which does not further help to learn action completeness. We also compared our adaptive criterion with a fixed criterion (\(N_{max}=5\)) in Table II. Even with a fixed number of prototypes, our approach still exceeds the method that generates pseudo labels using only proposals. Besides, introducing the adaptive criterion can generate better pseudo labels while reducing computational redundancy.

Fig. 2: Overall architecture of SPL-Loc, which contains two modules: Sub-action Prototype Clustering (SPC) and Ordered Prototype Alignment (OPA).

#### III-C2 The contribution of each component As shown in Table III, the upper section reports the effect of the proposed method on the baseline. Even without pseudo labels, our model (id 3) still achieves an absolute gain of 1.8% in terms of average mAP.
This clearly shows that the proposed OPA is able to refine the temporal context of action instances, leading to better action-background separation and better inter-class distinction. The lower section reports the effect of the proposed method on pseudo label generation. In particular, in experiment 5, we remove SPC and pick the same number of representative snippets to perform temporal alignment at the snippet level for a fair comparison. This setting achieves only a slight improvement of 0.1%, much less than 1.5%. This is because our sub-action prototypes are able to perceive the temporal scale and spatial content and produce better pseudo labels than representative snippets. A similar phenomenon also appears in the upper section. Therefore, we argue that mining intrinsic motion patterns at the sub-action level is more important than at the snippet level. ### _Qualitative results_ As shown in Fig. 3, the method that generates pseudo labels using only proposals produces polarized TCAS scores but still fails to localize complete actions. In contrast, our SPL-Loc can produce more accurate pseudo labels and more complete localization results regardless of the video noise. ### _Comparison with SOTA methods_ As shown in Table III, with both manual labels [7] and uniform-distribution labels [23], SPL-Loc surpasses the best competitor LACP, showing its robustness to label selection. Meanwhile, SPL-Loc exceeds video-level methods by a large margin with an affordable annotation cost. Table IV summarizes the results on GTEA and BEOID. SPL-Loc still outperforms previous methods, which demonstrates its effectiveness.

Fig. 3: Visualization of pseudo labels (PL) and CAS. "Pro" denotes that only proposals are used to generate pseudo labels. (a) is a slow, long action. (b) has a long "approach run", and part of the athlete's body is missing when a camera view change occurs (highlighted with a red box).
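To connect the visualized pseudo labels back to the alignment step, the following is a minimal NumPy sketch of the ordered alignment recursion in Eq. (4). The rigid single-prototype-per-snippet constraint is expressed by allowing only "stay" or "advance" moves, the first and last snippets are assumed to map to the first and last (background) prototypes, and all inputs are random placeholders rather than real features; this is an illustration, not the authors' implementation.

```python
# Minimal sketch of the ordered prototype alignment recursion (Eq. 4), illustrative only.
import numpy as np

def cosine_dist(a, b):
    """Cosine distance between rows of a (prototypes) and rows of b (snippets)."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return 1.0 - a @ b.T                                  # shape (n_protos, n_snippets)

def ordered_alignment(ordered_protos, region_feat):
    """Align snippets of an undetermined region to an ordered prototype sequence."""
    cost = cosine_dist(ordered_protos, region_feat)
    n_protos, n_snip = cost.shape
    S = np.full((n_protos, n_snip), np.inf)
    move = np.zeros((n_protos, n_snip), dtype=int)        # 0 = stay on prototype, 1 = advance
    S[0, 0] = cost[0, 0]
    for j in range(1, n_snip):
        S[0, j] = cost[0, j] + S[0, j - 1]                # remain on the first prototype
        for i in range(1, n_protos):
            stay, adv = S[i, j - 1], S[i - 1, j - 1]      # Eq. (4): min over two moves
            move[i, j] = int(adv < stay)
            S[i, j] = cost[i, j] + min(stay, adv)
    # Backtrack: recover the prototype index assigned to each snippet.
    assign, i = [n_protos - 1], n_protos - 1
    for j in range(n_snip - 1, 0, -1):
        if move[i, j] == 1:
            i -= 1
        assign.append(i)
    return S[-1, -1], assign[::-1]

# Usage: background + 3 sub-action prototypes + background vs. a 12-snippet region.
protos = np.random.rand(5, 1024)
region = np.random.rand(12, 1024)
distance, assignment = ordered_alignment(protos, region)
```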
点レベルの弱教師あり時間的行動位置特定(PWTAL)は、各行動インスタンスに対して単一のタイムスタンプのアノテーションのみを用いて行動を位置特定することを目的とする。既存手法はラベルの疎性を緩和するために密な擬似ラベルの抽出を試みるが、潜在的なサブ行動の時間的構造を見落としており、性能が劣る。この問題に対処するため、本研究ではサブ行動プロトタイプ学習フレームワーク(SPL-Loc)を提案する。これはサブ行動プロトタイプクラスタリング(SPC)と順序プロトタイプアライメント(OPA)から構成される。SPCは、行動インスタンスの時間的スケールと空間的内容の変動を知覚できる代表的なサブ行動プロトタイプを適応的に抽出する。OPAは、時間的アライメント損失を適用することで、関連性の高いプロトタイプを選択し、擬似ラベル生成に完全性の手がかりを与える。その結果、アライメント結果から擬似ラベルが導出され、行動境界の予測が改善される。3つの主要なベンチマークでの広範な実験により、提案するSPL-Locが既存の最先端PWTAL手法を大きく上回ることが示された。
2302.14425
Circumplanetary disk ices II. Composition
The subsurface oceans of icy satellites are among the most compelling among the potentially habitable environments in our Solar System. The question of whether a liquid subsurface layer can be maintained over geological timescales depends on its chemical composition. The composition of icy satellites is linked to that of the circumplanetary disk (CPD) in which they form. The CPD accretes material from the surrounding circumstellar disk in the vicinity of the planet, however, the degree of chemical inheritance is unclear. We aim to investigate the composition of ices in chemically reset or inherited circumplanetary disks to inform interior modeling and the interpretation of in situ measurements of icy solar system satellites, with an emphasis on the Galilean moon system. We used a radiation-thermochemical code to produce circumplanetary disk models and extract the ice composition from time-dependent chemistry, incorporating gas-phase and grain-surface reactions. The initial sublimation of ices during accretion may result in a CO2-rich ice composition. Sublimated ammonia ice is destroyed by background radiation while drifting towards the CPD midplane. Liberated nitrogen becomes locked in N2 due to efficient self-shielding, leaving ices depleted of ammonia. A significant ammonia ice component remains only when ices are inherited from the circumstellar disk. The observed composition of the Galilean moons is consistent with the sublimation of ices during accretion onto the CPD. In this scenario, the Galilean moon ices are nitrogen-poor and CO2 on Callisto is endogenous and primordial. The ice composition is significantly altered after an initial reset of accreted circumstellar ice. The chemical history of the Galilean moons stands in contrast to the Saturnian system, where the composition of the moons corresponds more closely with the directly inherited circumstellar disk material.
Nickolas Oberg, Stephanie Cazaux, Inga Kamp, Tara-Marie Bründl, Wing-Fai Thi, Carmen Immerzeel
2023-02-28T09:06:58
http://arxiv.org/abs/2302.14425v1
# Circumplanetary disk ices ###### Abstract Context:The subsurface oceans of icy satellites are among the most compelling among the potentially habitable environments in our Solar System. The question of whether a liquid subsurface layer can be maintained over geological timescales depends on its chemical composition. The composition of icy satellites is linked to that of the circumplanetary disk (CPD) in which they form. The CPD accretes material from the surrounding circumstellar disk in the vicinity of the planet, however, the degree of chemical inheritance is unclear. Aims:We aim to investigate the composition of ices in chemically reset or inherited circumplanetary disks to inform interior modeling and the interpretation of in situ measurements of icy solar system satellites, with an emphasis on the Galilean moon system. Methods:We used the radiation-thermochemical code ProDiMo to produce circumplanetary disk models and then extract the ice composition from time-dependent chemistry, incorporating gas-phase and grain-surface reactions. Results:The initial sublimation of ices during accretion may result in a CO\({}_{2}\)-rich ice composition due to efficient OH formation at high gas densities. In the case of a Jovian CPD, the sublimation of accreted ices results in a CO\({}_{2}\) iceline between the present-day orbits of Ganymede and Callisto. Submillimeter ammonia ice is destroyed by background radiation while drifting towards the CPD midplane. Liberated nitrogen becomes locked in N\({}_{2}\) due to efficient self-shielding, leaving ices depleted of ammonia. A significant ammonia ice component remains only when ices are inherited from the circumstellar disk. Conclusions:The observed composition of the Galilean moons is consistent with the sublimation of ices during accretion onto the CPD. In this scenario, the Galilean moon ices are nitrogen-poor and CO\({}_{2}\) on Callisto is endogenous and primordial. The ice composition is significantly altered after an initial reset of accreted circumstellar ice. The chemical history of the Galilean moons stands in contrast to the Saturnian system, where the composition of the moons corresponds more closely with the directly inherited circumstellar disk material. Conclusions: ## 1 Introduction The search for habitable worlds beyond the solar system has historically focused on planets in the so-called "habitable zone," where surface conditions theoretically support the presence of liquid water (Hart, 1979). In the Solar System, however, icy satellites and minor bodies outside of the classical habitable zone are the most common type of worlds that are known to host oceans of liquid water (Hussmann et al., 2006; Nimmo & Papalardo, 2016). Evidence strongly supports the presence of a sub-surface ocean on the Galilean satellites Europa and Ganymede, as well as (to a lesser extent) Callisto (Carr et al., 1998; Khurana et al., 1998; Kivelson et al., 2002; Sohl et al., 2002; Saur et al., 2015). The resonant configuration of the satellites prevents a damping of the orbital eccentricities, producing levels of tidal heating capable of sustaining subsurface oceans over geological timescales (Peale & Lee, 2002; Hussmann & Spohn, 2004; Showman et al., 1997). Whether or not a given level of tidal heating produces subsurface melt depends in part on the composition of the satellite ices. The proposed abundant impurities include NH\({}_{3}\), CH\({}_{4}\), CO, and CO\({}_{2}\), along with salts MgSO\({}_{4}\) and NaCl (Kargel, 1992; Mousis & Alibert, 2006). 
The liquids temperature of co-deposited ice mixtures can be depressed by the presence of NH\({}_{3}\)(Choukroun & Grasset, 2010; Sohl et al., 2010) or methanol (CH\({}_{3}\)OH) (Deschamps et al., 2010; Dougherty et al., 2018), as well as salts to a lesser extent. Hence, the composition of the volatile reservoir from which icy satellites form is of direct relevance to the presence of a subsurface ocean, their geothermal and physical evolution (Hammond et al., 2018), the interpretation of in-situ geophysical measurements (Vance et al., 2018), and the eventual atmospheric composition by outgassing or impact dissociation (Sekine et al., 2014; Glein, 2015). In particular, ammonia is important to the interior state and evolution of icy bodies. The presence of NH\({}_{3}\) in the form of dihydrate can drive differentiation of rock and ice (Desch et al., 2009). Ammonia in a pure H\({}_{2}\)O-NH\({}_{4}\) eutectic system produces a freezing point depression of \(\sim\)100 K (Kargel, 1992; Grasset et al., 2000; Leliwa-Kopystynski et al., 2002). Ammonia can also reduce the density of melt with implications for buoyancy and cryovolcanism (Croft et al., 1988), while increasing viscosity and reducing the efficiency of convection (Grasset et al., 2000). Ammonia has been detected in the plumes of Enceladus (Waite et al., 2009) but not on the surface of the Galilean moons. Tentative evidence for a subsurface ocean on Callisto would be bolstered by the presence of an ammonia component of 1-5% (Kirk & Stevenson, 1987; Showman & Malhotra, 1999; Spohn & Schubert, 2003). In the "gas-starved" circumplanetary disk (CPD) paradigm, moon formation occurs in a relatively low-mass, cool disk that must accumulate solids to form giant moons over time (Canup & Ward, 2002, 2006; Batygin & Morbidelli, 2020). Infalling material from the surrounding circumstellar disk may be shock-heated in the process of accretion onto the CPD (Szulagyi, 2017; Szulagyi & Mordasini, 2017; Aoyama et al., 2018), with increasing shock temperature for increasing planetary mass. If the shock heating chemically resets infalling gas or ices, new ice formation must occur within the CPD to produce the icy satellites we see today. The resulting composition of the satellite ices may then depart substantially from those in the planetary feeding zone. Prior works modeling equilibrium condensation chemistry in a Jovian CPD suggest that in the event of an initial vaporization of ices, the "mostly inefficient" gas-phase reactions lead to ratios of CO\({}_{2}\):CO:CH\({}_{4}\) and N\({}_{2}\):NH\({}_{3}\) that are not substantially different from those in the feeding zone of Jupiter (Mousis & Alibert, 2006; Mousis et al., 2006). However, it has long been recognized that grain-surface chemistry plays a critical role in the formation of many common molecules under interstellar conditions (Hasegawa et al., 1992; van Dishoeck & Blake, 1998; Cazaux & Tielens, 2002; Caselli et al., 2004; Garrod et al., 2006; Ruaud et al., 2015; Wakelam et al., 2017). The use of a more comprehensive modeling approach including grain-surface and photochemistry to revisit the formation of ices in CPDs is thus motivated. We aim to investigate the composition of ices that form in a chemically reset CPD with viscous timescale \(10^{3}-10^{4}\) yr, where infalling ices are sublimated and gas is atomized by shock-heating. 
These results will be contrasted with a partial reset in which only ices are sublimated during accretion and with a full chemical inheritance scenario in which the composition of the circumstellar disk gas and ice is preserved. We intend to link observations of solar system icy satellites with modern chemical disk models to lay the foundation for our understanding of how icy moons are built up from material in the CPD. ## 2 Methods We used the radiation-thermochemical disk modeling code ProDiMo1 to model gas and dust chemistry and physics in disks (Woitke et al., 2009, 2016; Kamp et al., 2010, 2017; Thi et al., 2011, 2020). The gas-grain chemistry is solved self-consistently with the 2D radiative transfer and heating and cooling balance using a rate equation-based approach. Most reaction rates are selected from the UMIST2012 database (McElroy et al., 2013) and three-body collider reactions are adopted from the UMIST2006 rate file (Woodall et al., 2007), as they were not included in the 2012 release. In the following sections we review the implementation of the grain surface chemistry (Sect. 2.1), extensions to our standard chemical network (Sect. 2.1.1), and properties of the CPD model (Sect. 2.2). Footnote 1: [https://prodimo.iwf.oeaw.ac.at/](https://prodimo.iwf.oeaw.ac.at/) ### Grain surface chemistry ProDiMo includes a rate-equation based, statistical two-phase approach to gas and dust grain surface chemistry that is largely based on the work of Hasegawa et al. (1992). Gas-phase atoms and molecules can become weakly adsorbed to grain surface physisorption sites. Physisorbed species diffuse in a random-walk process, "hopping" from one physisorption site to another (Barlow & Silk, 1976). Diffusion occurs thermally if there is sufficient energy to overcome a diffusion barrier, or can otherwise occur by tunneling. The surface diffusion rate is the sum of the thermal, tunneling, and cosmic-ray induced diffusion rates. The rate of thermal diffusion of species \(i\) is: \[R_{i}^{\rm diff,th}=\nu_{0,i}\,e^{-E_{i}^{\rm diff}/k_{\rm B}T_{d}}\ \ {\rm s}^{-1},\] where \(\nu_{0,i}\) is the characteristic vibrational frequency of the adsorbed species, \(E_{i}^{\rm diff}\) is the diffusion barrier, and \(T_{d}\) is the dust temperature, while the tunneling diffusion rate is \(\nu_{0,i}\,Q_{i}^{\rm diff}(a_{i}^{\rm diff},E_{i}^{\rm diff})\), with \(Q_{i}^{\rm diff}\) the probability of tunneling through a barrier of width \(a_{i}^{\rm diff}\). The rate of a reaction between two physisorbed species (the Langmuir-Hinshelwood mechanism) is the probability of a reaction per encounter multiplied by the encounter rate between the two species diffusing across the surface. The encounter rate between two adsorbed species \(i\) and \(j\) hopping across the surface is then: \[k_{ij}=\kappa_{ij}(R_{i}^{\rm diff}+R_{j}^{\rm diff})/n_{d}\ \ {\rm cm}^{3}{\rm s}^{-1}, \tag{7}\] where \(\kappa_{ij}\) is the reaction probability, \(R_{i}^{\rm diff}\) and \(R_{j}^{\rm diff}\) are the diffusion rates (s\({}^{-1}\)) for species \(i\) and \(j\), and \(n_{d}\) is the dust grain number density (cm\({}^{-3}\)). The reaction probability, \(\kappa_{ij}\), takes into account the competition between association of the species and diffusion (Garrod & Pauly 2011; Bonfanti & Martinazzo 2016; Ruaud et al. 2016): \[\kappa_{ij}=\frac{Q_{\rm Bell}(a^{\prime}_{ij},E_{i}^{\rm ext})}{Q_{\rm Bell}(a^{\prime}_{ij},E_{i}^{\rm ext})+P_{i}^{\rm diff}+P_{j}^{\rm diff}}, \tag{8}\] where \(a^{\prime}_{ij}\) is the reactive barrier width, \(E_{i}^{\rm ext}\) is the activation energy of the reaction barrier, and \(P_{i}^{\rm diff}=R_{i}^{\rm diff}/\nu_{0,i}\).
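As a rough numerical illustration of the encounter rate in Eq. (7), the sketch below evaluates thermal hopping rates for two adsorbed species and combines them; the vibrational frequency, diffusion barriers, dust temperature, and grain number density are placeholder values chosen only to show the orders of magnitude involved, and the reaction probability is simply set to unity rather than evaluated via Eq. (8). These are not the inputs used in the CPD models of this work.

```python
# Back-of-the-envelope sketch of Eq. (7); all numerical values are illustrative.
import numpy as np

nu0 = 1e12    # assumed characteristic vibrational frequency [s^-1]
T_d = 50.0    # assumed dust temperature [K]
n_d = 1e-5    # assumed dust grain number density [cm^-3]

def hop_rate_thermal(E_diff_K, T=T_d):
    """Thermal hopping rate nu0 * exp(-E_diff / T); barriers quoted in Kelvin."""
    return nu0 * np.exp(-E_diff_K / T)

# Assumed diffusion barriers (in K) for two adsorbed species, e.g. a light and a heavy one.
R_i = hop_rate_thermal(250.0)     # mobile light species
R_j = hop_rate_thermal(800.0)     # heavier, less mobile species

kappa = 1.0                       # barrierless association assumed for simplicity
k_ij = kappa * (R_i + R_j) / n_d  # encounter rate, Eq. (7) [cm^3 s^-1]

print(f"R_i = {R_i:.2e} s^-1, R_j = {R_j:.2e} s^-1, k_ij = {k_ij:.2e} cm^3 s^-1")
```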
We assume the semi-equilibrium theory, in which the rate of reactions between physisorbed and gas-phase species (the Eley-Rideal mechanism) is equal to the probability of the gas atom colliding with the physisorbed species multiplied by the probability of the gas-phase species having sufficient energy to overcome the reaction barrier. Impinging gas-phase species are assumed to have an energy relative to the surface species of \(1/2k_{\rm B}T_{g}+E_{i}^{b}\), where \(T_{g}\) is the gas temperature and \(E_{i}^{b}\) is the binding energy. Photon and cosmic-ray induced dissociation and desorption of grain-surface species are also included. Adsorption and desorption processes are described fully in Thi et al. (2020).

#### 2.1.1 Extending chemistry beyond the standard network

We developed an extended chemical network based on the "DIANA standard large" network described in Kamp et al. (2017), which contains 235 species (including 63 ices) and 12 elements + polycyclic aromatic hydrocarbons (PAHs) and is optimized for gas-phase chemistry + direct adsorption and desorption from grains. The use of grain-surface reactions necessitates the inclusion of several additional species to the DIANA standard chemical network to capture the relevant chemistry occurring at the disk midplane. These seven additional gas-phase species and six additional ices are listed in Table 1. Physisorbed atomic hydrogen and H\({}_{2}\) are included for their critical role in many grain-surface reactions. Hydrogenated PAH and O\({}_{2}\)H are included for their relevance to the chemical reset scenario in which atomic H is initially very abundant. The rationale for their inclusion is discussed in the following sections. In addition, HOCO and HCOOH are more directly involved in the formation of relevant ices and their roles are discussed in Sect. 3.1.3.

#### 2.1.1.1 Hydrogenated polycyclic aromatic hydrocarbons (PAH-H)

In ProDiMo, the formation rate of H\({}_{2}\) can be calculated in multiple ways. The standard approach is that H\({}_{2}\) formation proceeds via a pseudo-reaction at a rate calculated according to the analytical approach of Cazaux & Tielens (2002), which presupposes that surface-chemisorbed H atoms play a dominant role at high temperatures (\(\geq\)100 K). However, the formation of H\({}_{2}\) is calculated explicitly when grain-surface reactions are included in the reaction network (Thi et al. 2020). It was noted in the accompanying work that H\({}_{2}\) formation occurs in parallel with H\({}_{2}\)O ice deposition on grains at the midplane when the CPD is chemically reset (Oberg et al. 2022, hereafter Paper I). The formation of H\({}_{2}\)O ice after the reset is rapid and a median-sized grain is coated in several (\(\gg\)3) monolayers of water ice prior to the complete conversion of H to H\({}_{2}\). This poses a problem, as the formation of H\({}_{2}\) via surface-chemisorbed H is considered implausible when the number of water ice monolayers exceeds a certain number (\(\sim\) 3) (Wakelam et al. 2017). We assume that the diffusion timescale of the atomic hydrogen in a water ice matrix and the subsequent difficulty of H\({}_{2}\) escape from the grain silicate surface precludes this formation pathway in our scenario. An alternative path to form H\({}_{2}\) is via hydrogenated polycyclic aromatic hydrocarbons (PAH-H) (Bauschlicher 1998). Experimental and theoretical works have demonstrated that H\({}_{2}\) can form via Eley-Rideal abstractions on neutral PAHs (Bauschlicher 1998; Rauls & Hornekaer 2008; Mennella et al. 2012; Thrower et al.
2012) and cationic PAHs (Hirama et al. 2004; Cazaux et al. 2016; Boschman et al. 2012). We include in the chemical network the singly hydrogenated species PAH-H, PAH-H\({}^{+}\), and the physisorbed ice form PAH-H\(\#\) (Thrower et al. 2009) to enable this formation path. As a first step towards H\({}_{2}\) formation, the neutral or ionized PAH is hydrogenated with a small (324 K) activation barrier. The H\({}_{2}\) formation at the CPD midplane then proceeds primarily via

\[\rm{PAH\mbox{-}H+H\to H_{2}+PAH}, \tag{9}\]

and to a lesser extent (\(\sim 1-10\%\) of the total H\({}_{2}\) formation rate depending on location in the CPD), directly via the gas-phase neutral-neutral reactions:

\[\rm{H+HCO\to H_{2}+CO}, \tag{10}\]

\[\rm{H+HNO\to H_{2}+NO}. \tag{11}\]

While we do include several grain-surface reactions to form H\({}_{2}\) (e.g., H\(\#\) + HCO\(\#\) \(\to\) CO\(\#\) + H\({}_{2}\)\(\#\), O\(\#\) + H\({}_{2}\)CO\(\#\) \(\to\) CO\({}_{2}\)\(\#\) + H\({}_{2}\)\(\#\)); in practice, these occur at a negligible rate due to the 50 K minimum dust temperature in the CPD. The resulting efficiency of H\({}_{2}\) formation is lower than the analytic rate of Cazaux & Tielens (2002), in part due to the low ambient temperatures (\(<\) 200 K, which in combination with the activation barrier impede the process) at the optically thick midplane. The correspondingly longer time over which atomic hydrogen is present has direct consequences for the efficiency of water formation. Gas-phase H\({}_{2}\)O can then form via the hydrogenation of OH for an extended period of time (discussed further in Paper I).

Table 1: Non-standard species included in the chemical network.

Gas-phase species: O\({}_{2}\)H, HOCO, HCOOH, HCOOH\({}^{+}\), HCOOH\({}^{+}_{2}\), PAH-H, PAH-H\({}^{+}\)

| Ices | E\({}_{\rm ads}\) [K] |
| --- | --- |
| H\(\#\) | 600\({}^{1}\) |
| H\({}_{2}\)\(\#\) | 430\({}^{2}\) |
| O\({}_{2}\)H\(\#\) | 3650\({}^{2}\) |
| HOCO\(\#\) | 2000\({}^{3}\) |
| HCOOH\(\#\) | 5000\({}^{4}\) |
| PAH-H\(\#\) | 5600\({}^{5}\) |

Adsorption energies are adopted from \({}^{1}\)Cazaux & Tielens (2002), \({}^{2}\)Garrod & Herbst (2006), \({}^{3}\)Ruaud et al. (2015), \({}^{4}\)Öberg et al. (2009), \({}^{5}\)Thrower et al. (2009).

#### 2.1.1.2 O\({}_{2}\)H

The hydroperoxyl radical O\({}_{2}\)H is a very reactive oxygen species that we have found to play a role in the formation of methanol in the inner region of chemically reset CPDs. We include the gas and ice form of O\({}_{2}\)H in the extended chemical network, with an adsorption energy of 3650 K (Garrod & Herbst, 2006). The oxygen-bearing gas-phase species abundances are sensitive to the presence of O\({}_{2}\)H at high densities. Three-body collider reactions with free atomic hydrogen and O\({}_{2}\) produce O\({}_{2}\)H. This reaction has been extensively studied both theoretically (Horowitz, 1985; Sellevag et al., 2008; Morii et al., 2009) and experimentally (Kurylo, 1972; Davidson et al., 1996; Hahn et al., 2004; Mertens et al., 2009) at high and low temperatures.
With the inclusion of O\({}_{2}\)H in the extended network, the gas-phase O\({}_{2}\) reservoir at the midplane (nominally present at an abundance \(\sim\)10\({}^{-4.4}\) relative to hydrogen in the standard network) is depleted and converted via OH into H\({}_{2}\)O through the following reactions:

\[{\rm O}_{2}+{\rm H}+{\rm M}\rightarrow{\rm O}_{2}{\rm H}+{\rm M}, \tag{12}\]

or

\[{\rm O}_{2}+{\rm HCO}\rightarrow{\rm O}_{2}{\rm H}+{\rm CO}, \tag{13}\]

followed by

\[{\rm O}_{2}{\rm H}+{\rm H}\rightarrow{\rm OH}+{\rm OH}, \tag{14}\]

\[{\rm OH}+{\rm H}\rightarrow{\rm H}_{2}{\rm O}+{\rm photon}. \tag{15}\]

These reactions compete for the free H that is required to form methanol via

\[{\rm H}_{2}{\rm CO}+{\rm H}\rightarrow{\rm CH}_{3}{\rm O}, \tag{16}\]

\[{\rm CH}_{3}{\rm O}+{\rm H}\rightarrow{\rm CH}_{3}{\rm OH}, \tag{17}\]

and thus suppress its formation. The inclusion of O\({}_{2}\)H in the chemical network reduces the abundance of methanol ice interior to the NH\({}_{3}\) iceline relative to the results of the standard chemical network by 90-99%. However, this has a negligible impact on the total disk-integrated methanol abundance.

### Circumplanetary disk model

We adopted the properties of the ProDiMo circumplanetary disk model developed in Paper I. The CPD is a "gas-starved," actively fed accretion disk (Canup & Ward, 2002) that is heated primarily by viscous dissipation at the midplane (D'Alessio et al., 1998; Frank et al., 2002). The parameters of the reference CPD model are listed in Table 2. The physical, radiative, and thermal properties of the CPDs are demonstrated in Paper I, namely, in Figures 3 and 4. The disk structure, in terms of radial and vertical dimension, density profile, dust-to-gas ratio, and temperature, is assumed to exist in a steady state and is kept fixed. Following the gas-starved disk paradigm, the CPD does not instantaneously contain the solid mass required to form the Galilean satellites. The total refractory dust mass is 1.7\(\times\)10\({}^{-5}\) M\({}_{\oplus}\) and exists in the form of small grains (0.05-3000 \(\upmu\)m). The dust grain size distribution is described by a smooth power-law \(n\propto a^{-3.5}\). Such a disk is optically thick out to approximately one-third of the planetary Hill radius \(R_{\rm H}\), which is coincident with the theoretical limit for stable orbits (Quillen & Trilling, 1998; Ayliffe & Bate, 2009; Martin & Lubow, 2011). For the ice-to-rock ratio of the solids in the CPD to be consistent with the ice-to-rock ratio of the outer Galilean satellites, it was found in Paper I that the dust-to-gas ratio of the CPD may be depleted relative to the canonical 10\({}^{-2}\) by a factor \(\gtrsim 10-20\). This depletion in dust corresponds with the rapid inwards drift and loss of grains larger than \(\sim\)150 \(\upmu\)m, which was found to occur naturally for a disk with a mass of 10\({}^{-7}\) M\({}_{\odot}\) and accretion rate \(\dot{M}=10^{-11}\) M\({}_{\odot}\)yr\({}^{-1}\) (Paper I). Alternatively, pressure-bump trapping at the gap edge can directly prevent larger grains from accreting onto the CPD (Rice et al., 2006; Morbidelli & Nesvorny, 2012; Zhu et al., 2012; Bitsch et al., 2018).
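As a quick arithmetic check on the disk dimensions quoted above (and listed in Table 2), the short sketch below evaluates the Jovian Hill radius and the \(\sim R_{\rm H}/3\) scale of the optically thick region; the semi-major axis and mass ratio are standard solar system values, not parameters taken from the model files.

```python
# Hill radius of Jupiter and the ~R_H/3 scale of the optically thick CPD region.
a_jup   = 5.2        # au, Jupiter's semi-major axis
m_ratio = 9.546e-4   # M_Jupiter / M_Sun

r_hill = a_jup * (m_ratio / 3.0) ** (1.0 / 3.0)
print(f"R_H     ~ {r_hill:.3f} au")        # ~0.355 au, comparable to the 0.34 au outer radius
print(f"R_H / 3 ~ {r_hill / 3.0:.3f} au")  # ~0.118 au, comparable to the 0.11 au decay radius
```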
For assumptions regarding the efficiency of settling, surface density power-law, and maximum grain size, it was found in Paper I that the global dust-to-gas ratio should not exceed 10\({}^{-3.3}\) for a CPD with a mass of 10\({}^{-7}\) M\({}_{\odot}\) and accretion rate of \(\dot{M}=10^{-11}\) M\({}_{\odot}\)yr\({}^{-1}\) to satisfy the constraints on Jovian icy moon bulk composition. The properties of the disk models are further justified and detailed in Paper I, where the authors explored a small grid of plausible CPD parameters. In this work, our analysis is focused on the case of the 10\({}^{-7}\) M\({}_{\odot}\) CPD with accretion rate \(\dot{M}=10^{-11}\)M\({}_{\odot}\)yr\({}^{-1}\), as it was most closely able to reproduce the radial ice-to-rock ratio of the Galilean satellites while being consistent with the expected lifetime of the circumstellar disk gas component. We contrast our results with those of the "high" viscosity, hotter CPD with \(\dot{M}=10^{-10}\)M\({}_{\odot}\)yr\({}^{-1}\) and correspondingly shorter viscous timescale of 10\({}^{3}\) yr, given uncertainties in the magnitude of the disk viscosity.

#### 2.2.1 Initial chemical conditions

Three different initial chemical conditions are considered. The first case is a full chemical "reset," in which ices are initially sublimated and the circumstellar gas is reverted to a purely atomic, ionized state. A chemical reset may occur if, for instance, the circumstellar material is shock-heated during accretion onto the CPD, if the gas and dust are irradiated while crossing the optically thin gap, or if material only flows into the gap from the upper optically thin surface layers of the circumstellar disk. The CPD model is initialized in this fully reset state after which it is allowed to chemically evolve over its viscous timescale \(t_{\rm visc}\). The viscous timescale of the disk is defined as the time over which the majority of the gas mass is lost, \(t_{\rm visc}={\rm M}_{\rm CPD}/\dot{M}\), where \(\dot{M}\) is the mass accretion rate. We assume that gas is lost to viscous radial flow either to decretion beyond the disk edge or accretion onto the planet. As the disk mass is assumed to be constant, the net inflow-outflow rate of matter is necessarily zero. Our reference CPD model has a viscous timescale of 10\({}^{4}\) yr with a corresponding midplane heating rate equivalent to an \(\alpha\)-viscosity of 10\({}^{-3.6}\). We contrast these results with a "partial reset" in which only the ices are placed back in the gas-phase. This is similar to the work of Mousis & Alibert (2006), wherein the authors consider a case in which infalling ices are initially sublimated in a warm disk which subsequently cools, although we consider a disk with a static temperature structure. Finally, we consider an "inheritance" case in which the chemical composition at the circumstellar disk outer edge is used as the initial state. The abundance of the most common species for these three initial conditions can be found in Appendix B. The circumstellar disk model and the sampling of the inheritance chemistry are described in the accompanying Paper I. It is also necessary to consider the consequences of the gas and dust being shocked at several scale heights above the CPD midplane (Takasao et al. 2021) prior to the gas turbulently diffusing downwards into the optically thick region.
The ambient conditions at \(\sim\)5 pressure scale heights (\(A_{\rm V}=0.01\)) differ significantly from those at the midplane (\(A_{\rm V}\)=21) given the magnitude of the external stellar irradiation. To take into account this gradual change in ambient conditions, we incorporated an additional step necessary to prevent the sublimated ices immediately re-adsorbing to grains. We adapted the model to follow a single parcel of gas and dust that is initialized above the midplane and then settles towards the midplane at the centrifugal radius (\(\sim 0.03\) R\({}_{\rm H}\)) (Machida et al. 2008). This process is labeled as step 2 in Fig.1. In this step, we evolved the chemistry in a 0D grid-cell for a fraction of the diffusion timescale. The resulting composition of the gas and ice was extracted and used to populate a new grid-cell, in which the background conditions are updated to correspond to the downwards motion of the gas parcel. The extracted relative species abundances were simply applied to the new cell and absolute abundances were rescaled to correspond to the new grid-cell density. This process was repeated iteratively as ambient conditions (optical depth, density, and gas and dust temperature) change. As a simplification owing to significant uncertainties in the origin, magnitude, and spatial distribution of turbulence within the CPD, we simply assumed that the parcel travels at a constant rate until it reaches the midplane. The timescale of this process is \(\sim 10\) yr (Paper I), although this value is still highly uncertain. Accordingly, we also considered diffusion timescales of 1, 10, and 100 yr. The final composition of the parcel at the midplane was then used to populate the CPD midplane for the final step (step 3 in Fig.1), whereby chemical evolution proceeds up until the viscous timescale. ### Likelihood of chemical reset and magnitude of shock-heating Icy grains passing through an optically thin gap at 5 au around a Sun-like star can retain their icy mantles if swept up by the planet within \(\sim 10-100\) orbital timescales (Paper I). If a (partial) chemical reset occurs, it must instead be due to either accreted material originating from a higher altitude in the circumstellar disk where ices are unstable or, otherwise, shock-heating on the CPD surface. We can estimate the shock velocity of infalling matter where it strikes the CPD and consider which of our initial chemical conditions corresponds most appropriately to the formation of the Galilean moon system. Angular momentum of infalling circumstellar gas and dust causes it to accrete onto the CPD near the so-called centrifugal radius, \(r_{\rm cf}\) (Hayashi et al. 1985; Machida et al. 2008). The infall velocity at \(r_{\rm cf}\) must be \(\gtrsim 8-10\) km s\({}^{-1}\) for dust grain icy mantles to be lost due to sputtering and thermal desorption (Woitke et al. 1993; Aota et al. 2015; Tielens 2021). We approximated the infall velocity as a function of planetocentric radius by considering orbits with apoapsis of a single circumstellar disk pressure scale height at the position of Jupiter (\(z=0.5\) au) (Paper I), with orbital eccentricities corresponding to passage through the planet equatorial plane at some distance \(r\). The resulting infall velocities, \(v_{\rm infall}\), can be seen in Fig. 2 for planets of Saturnian, Jovian, and super-Jovian (10 M\({}_{\rm J}\)) mass. 
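A rough sense of these infall speeds can be obtained from a simplified two-body estimate: free fall in the planet's potential alone from an apoapsis of one circumstellar pressure scale height (z ~ 0.5 au at Jupiter's location), neglecting gas drag and the stellar tide. The values below are round placeholder numbers; the curves in Fig. 2 come from the full orbit calculation described above.

```python
import numpy as np

G      = 6.674e-11           # m^3 kg^-1 s^-2
M_jup  = 1.898e27            # kg
au     = 1.496e11            # m
R_hill = 0.355 * au          # approximate Jovian Hill radius at 5.2 au

r_apo = 0.5 * au             # apoapsis: one pressure scale height above Jupiter
r_cf  = 0.03 * R_hill        # centrifugal radius used in the text

v_infall = np.sqrt(2 * G * M_jup * (1.0 / r_cf - 1.0 / r_apo))
print(f"v_infall(r_cf) ~ {v_infall / 1e3:.1f} km/s")
# ~12-13 km/s, above the ~8-10 km/s threshold for icy mantle loss by sputtering
```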
The infall velocity at \(r_{\rm cf}\) is independent of the planetary mass, but it is instead a function of the planetary semimajor axis (for a circular orbit). The shock velocity at the centrifugal radius of Jupiter is in the regime of icy mantle loss to sputtering (Draine & McKee, 1993; Tielens, 2005). Hence, if the majority of grains accrete near \(r_{\rm cf}\), Jupiter's CPD may be best represented by the "partial reset" scenario. Conversely, no ice sublimation is expected to occur due to shock-heating in the case of Saturn. A full chemical reset is more likely to occur for a super-Jupiter at a stellocentric distance of 2-3 au from a solar-mass star.

Table 2: Parameters of the reference CPD model.

| Parameter | Symbol | Value |
| --- | --- | --- |
| Planetary mass | \(M_{\rm p}\) | 1.0 M\({}_{\rm J}\) |
| Planetary luminosity | \(L_{\rm p}\) | \(10^{-5}\) L\({}_{\odot}\) |
| Effective temperature | \(T_{\rm eff,p}\) | 1000 K |
| UV luminosity | \(L_{\rm UV,p}\) | 0.01 L\({}_{\rm p}\)\({}^{*}\) |
| Interstellar UV field | \(\chi\) | \(3\times 10^{3}\) |
| Background temperature | \(T_{\rm back}\) | 50 K |
| Disk mass | \(M_{\rm cpd}\) | \(10^{-7}\) M\({}_{\odot}\) |
| Disk inner radius | \(R_{\rm in,cpd}\) | 0.0015 au |
| Exponential decay radius | \(R_{\rm in,cpd}\) | 0.11 au |
| Disk outer radius | \(R_{\rm out,cpd}\) | 0.34 au |
| Column density power-law index | \(\epsilon\) | 1.0 |
| Accretion rate | \(\dot{M}\) | \(10^{-11}\)-\(10^{-10}\) M\({}_{\odot}\) yr\({}^{-1}\) |
| Viscosity | \(\alpha\) | \(10^{-3.6}\)-\(10^{-2.7}\) |
| Minimum dust size | \(a_{\rm min}\) | 0.05 \(\upmu\)m |
| Maximum dust size | \(a_{\rm max}\) | 3000 \(\upmu\)m |
| Dust-to-gas ratio | \(d/g\) | \(10^{-3.3}\) |
| Flaring index | \(\beta\) | 1.15 |
| Reference scale height | \(H_{\rm 0.1au}\) | 0.01 au |

\({}^{*}\) Planetary UV luminosity is expressed in multiples of the planetary luminosity, L\({}_{\rm p}\).

Figure 1: Schematic illustration of the modeling process. In step 1, the chemistry in a circumstellar disk model is evolved for 4 Myr. This chemistry is extracted from the gap outer wall region and used as a starting point prior to accretion. To consider various possible accretion scenarios, the composition of the infalling material is either reset to atomic (full reset), the ices are sublimated (partial reset) or the chemistry remains unaltered (inherit). In step 2, the chemistry of a parcel of gas and ice is evolved for 10 yr as it travels towards the CPD midplane. In step 3, the chemistry is evolved at the CPD midplane for the viscous timescale of the disk.

### Chemical network diagrams

Throughout this work, we make use of algorithmically generated chemical network diagrams to describe relations between atomic and molecular species, their relative abundances, formation rates, and the types of reactions that are involved. The diagrams are generated with an implementation of the PyVis software package (itself based on the VisJS library; Perrone et al. 2020). A description of how these diagrams are generated and interpreted can be found in Appendix A.

## 3 Results and discussion

Prior to reaching the midplane, the accreted gas and dust diffuses downwards from the optically thin surface layer of the CPD at the centrifugal radius, \(r_{\rm c}\). We iteratively evolved the disk chemistry as the background conditions change during the descent.
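The bookkeeping of this iterative descent (step 2 in Fig. 1) can be summarized by the schematic loop below. The helper functions are trivial stand-ins rather than ProDiMo routines: the vertical structure is a crude interpolation between the quoted surface and midplane values, and the chemistry "solver" is a placeholder that returns its input unchanged.

```python
# Schematic of the step-2 descent: evolve a 0D parcel for a fraction of the
# diffusion timescale, then carry its relative abundances to the next, deeper cell.
T_DIFF_YR = 10.0   # nominal diffusion timescale to the midplane
N_STEPS   = 5      # altitude steps from ~5 pressure scale heights to z = 0

def background_conditions(z_over_h):
    """Placeholder vertical structure: density and A_V increase toward z = 0."""
    frac = 1.0 - z_over_h / 5.0
    return {"n_gas": 1e10 * 10 ** (2 * frac),          # cm^-3, ~1e12 at the midplane
            "A_V": 0.01 + (21.0 - 0.01) * frac}        # quoted surface/midplane values

def evolve_chemistry(rel_abundances, conditions, dt_yr):
    """Placeholder for the 0D chemistry solver; returns abundances unchanged."""
    return dict(rel_abundances)

parcel = {"H2O": 1e-4, "CO": 1e-4, "N2": 4e-5}          # illustrative relative abundances
for step in range(N_STEPS):
    z = 5.0 * (1.0 - (step + 1) / N_STEPS)              # altitude in scale heights
    cond = background_conditions(z)
    parcel = evolve_chemistry(parcel, cond, dt_yr=T_DIFF_YR / N_STEPS)
    # absolute densities would be rescaled to cond["n_gas"]; relative abundances carry over

midplane_initial_state = parcel                          # initializes step 3
```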
The relevant properties of the vertical slice through the circumplanetary disk during the descent to the midplane at \(r_{\rm c}\) can be found in Fig. 3 panel (a). The post-shock evolution of the water ice abundance during the descent to the midplane can be found in panels (b), (c), and (d) of Fig. 3 for the reset, partial reset, and inheritance cases, respectively. Solid lines trace the evolution of ice impurity abundances as the gas parcel moves downwards from five scale heights (left) to the midplane (right). Dashed lines trace the abundances in the case of a hotter, higher viscosity CPD with \(t_{\rm visc}=10^{3}\) yr. The initial pre-shock abundances of the impurities are indicated by the colored circles on the ordinate. In the case of the reference (low-viscosity) fully or partially reset CPD, significant quantities of water ice have already formed prior to reaching the midplane. In the fully reset case, the ice is predominantly water with \(<0.1\%\) impurities in the form of CH\({}_{3}\)OH and HCOOH ice. In the partial reset case, a significant (25%) CO\({}_{2}\) component has formed. In the inheritance case, ices are able to survive the brief exposure to the optically thin upper layers of the disk and the CPD accretes the inherited ice composition. In the high-viscosity (\(\alpha=10^{-3}\)) CPD model, ices are not thermally stable at the centrifugal radius midplane. This can be seen in Fig. 3, where ice abundances decline immediately prior to reaching the midplane. Consequently, the initial post-shock conditions of a "partial reset" and "inheritance" converge to a similar ice-free molecular gas composition by the time the gas parcel reaches the midplane. After the step in which the accreted gas and dust is followed as it travels towards the midplane (step 2 in Fig. 1), the resulting chemical abundances are used to specify the initial conditions for the rest of the CPD as it evolves on the viscous timescale (step 3 in Fig. 1).

Figure 3: (a) Properties of a vertical slice in the circumplanetary disk at the centrifugal radius, from five pressure scale heights (left) to the midplane (right). Evolution of the abundance of H\({}_{2}\)O, NH\({}_{3}\) and CO\({}_{2}\) ice as the parcel of gas and dust sinks to the midplane after accretion in the reset case (b), the partial reset case (c), and the inheritance case (d). Solid lines trace the relevant properties and abundances for the low viscosity (\(t_{\rm visc}=10^{4}\) yr) case and the dashed line describes the high viscosity (\(t_{\rm visc}=10^{3}\) yr) case.

Figure 2: Velocity of material falling onto a CPD \(v_{\rm infall}\) at radius \(r\) for planets of Saturnian (dotted line), Jovian (dashed lines), and super-Jovian (10 M\({}_{\rm J}\)) (solid line) mass. The centrifugal radii \(r_{\rm cf}\) of Jupiter and Saturn are indicated by the blue triangle and orange star, respectively. The radial position of the Galilean satellites is indicated by the four black circles and the radial position of Titan is indicated by the white circle. The planetary orbital radius, \(a_{\rm p}\), that corresponds to the infall velocity at the centrifugal radius, \(v_{\rm infall}(r_{\rm cf})\), is indicated on the right vertical axis. The shaded colored regions indicate different chemical consequences of shock-heating corresponding to a given \(v_{\rm infall}\). The units on the abscissa are in multiples of the Jovian Hill radius. All calculations correspond to a solar mass star.
After \(10^{3}\)-\(10^{4}\) yr of further evolution, we extracted the radial ice composition at the midplane from six distinct CPD models describing three initial chemical conditions (reset, partial reset, and inheritance) and two disk \(\alpha\)-viscosities (corresponding to viscous timescales of \(10^{3}\) and \(10^{4}\) yr). An overview of the radial midplane ice composition, the disk-integrated total molecular ice composition, and the disk-integrated total elemental ice budget of the low-viscosity CPDs can be found in Fig. 4. For reference, the ice-to-rock ratio of the solids at the CPD midplane is also included in Fig.4 (left column) as a solid white line. The settling of large grains to the midplane strongly reduces the local ice-to-rock ratio. Realistically, accreting moons may be able to capture solids drifting at higher altitudes above the midplane within their gravitational sphere of influence. Hence, we included also the ice-to-rock ratio of solids integrated up to an altitude equal to the Hill radius of a Ganymede-mass object (dashed white line). The radial abundance profiles of NH\({}_{3}\), HCOOH, CO\({}_{2}\), and CH\({}_{3}\)OH ices can be found in Fig. 5. Ices at the partially or fully reset CPD midplane are found to contain significant impurities in the form of CO\({}_{2}\) and HCOOH, as well as, to a lesser extent, CH\({}_{3}\)OH. The chemically inherited CPD additionally contains HCN and hydrocarbon ices which were already present at the time of accretion. Trace amounts of OCN, SO, SO\({}_{2}\), NH, H\({}_{2}\), OH, and HNO ices can also be found, but each at \(<0.1-0.5\)% of the total ice mass. Although several of these ices have negligible absolute abundances, the fraction of their key element which has frozen out can be substantial. In particular, sulfur has frozen completely out of the gas-phase outside of the centrifugal radius in all cases. The element fraction in ice can be found in Appendix C. In the following subsections, we discuss the formation and abundance of the impurities NH\({}_{3}\), CO\({}_{2}\), HCOOH, and CH\({}_{3}\)OH. ### Partial reset (initial sublimation of ices) It is likely that in the case of the Jupiter system, the shock velocity of matter accreting at the centrifugal radius did not lead to the full dissociation of molecules. A less extreme C-type shock-heating could simply cause icy grain mantles to desorb by sputtering, for instance. Accordingly, we focus our analysis and discussion on this case, whereby all ices are put back into their respective gas-phase counterpart. #### 3.1.1 Ammonia (NH\({}_{3}\)) Immediately after accretion onto the CPD and the sublimation of ices, hydrogen is predominantly found in the form of H\({}_{2}\), oxygen in H\({}_{2}\)O, and nitrogen in NH\({}_{3}\) and HCN at a ratio 1:0.63. After ten years of drifting towards the midplane the gas is still H\({}_{2}\)-dominated, but nitrogen is found primarily in N\({}_{2}\). After being initially sublimated a minor fraction of the NH\({}_{3}\) immediately re-adsorbs to the grains (see Fig. 3 (c)), but it is not stable against photodissociation given the background UV field intensity (\(\chi_{\rm{RT}}>1000\)). Above 2-3 scale heights, the NH\({}_{3}\) ice is photodissociated on the grain surface to, for instance, NH\({}_{2}\)# and H# or back into the gas phase as NH. Once the majority of nitrogen is locked into N\({}_{2}\) via NH+NH, it is stable against photodissociation due to self-shielding (Li et al., 2013), preventing the accumulation of NH\({}_{3}\). 
The photodissociation timescale of N\({}_{2}\) is much larger than the disk viscous timescale. Near the midplane, NH\({}_{3}\) ice forms by direct adsorption from the gas phase onto dust grains. The gas-phase NH\({}_{3}\) originates primarily via a sequence of three-body collider reactions:

\[{\rm H_{2}+N+M\to NH_{2}+M,} \tag{18}\]

\[{\rm NH_{2}+H+M\to NH_{3}+M.} \tag{19}\]

Here, M = H, H\({}_{2}\), or He. The importance of this pathway is illustrated clearly by the green arrows in the chemical network diagram Fig. 6. These collider reactions are very efficient at the typical CPD midplane densities (n\({}_{\rm{H}}\sim 10^{12}\) cm\({}^{-3}\)). However, the absence of abundant atomic nitrogen prevents the collider pathway from producing significant quantities of NH\({}_{3}\). N\({}_{2}\) is destroyed predominantly by reactions with He ions at a relatively low rate, as He\({}^{+}\) is produced only by cosmic-ray ionization. The collider pathway to form NH\({}_{3}\) thus does not result in significant accumulation of NH\({}_{3}\) ice. By the time the gas parcel has reached the midplane, NH\({}_{3}\) ice is present only as a trace species. The collider pathway begins with the formation of NH\({}_{2}\) (Eq. 18), which is also relevant to the water formation pathway involving NH\({}_{2}\) + O \(\to\) NH + OH (Kamp et al., 2017). While the pre-exponential factor \(10^{-26}\) cm\({}^{6}\) s\({}^{-1}\) is derived from the work of Avramenko & Krasnen'kov (1966), we have chosen to adopt a significantly lower rate more typical of collider reactions (\(10^{-30}\) cm\({}^{6}\) s\({}^{-1}\)), which still produces enough NH\({}_{2}\) for this path to be the dominant NH\({}_{3}\) formation route in the inner disk. It has been noted that this particular reaction is critical to accurately reproduce observed OH and H\({}_{2}\)O gas-phase abundances, but that modern reevaluation of its rate and temperature dependence are needed (Kamp et al., 2017). For the second collider reaction in this path (Eq. 19), we adopted the rate coefficients of Gordon et al. (1971), listing a pre-exponential factor \(6.07\times 10^{-30}\) cm\({}^{6}\) s\({}^{-1}\). Other more recent experimental results assuming the reaction to be in the three-body pressure regime give values in the range 2.3\(\times 10^{-30}\) - 1.42\(\times 10^{-29}\) for various third bodies (Altinay & Macdonald, 2012, 2015); hence, we consider this a reasonable value. In the outer disk, NH\({}_{3}\) gas is efficiently photodissociated. The NH\({}_{3}\) ice is instead formed primarily by barrier-less successive hydrogenation of atomic nitrogen on icy grain surfaces (Charnley et al., 2001; Fedoseev et al., 2015), which has been experimentally demonstrated to occur (Hiraoka et al., 1995; Hidaka et al., 2011) via the Langmuir-Hinshelwood mechanism. The formation pathway is then

\[{\rm NH\#+H\#\to NH_{2}\#,} \tag{20}\]

\[{\rm NH_{2}\#+H\#\to NH_{3}\#.} \tag{21}\]

In both the inner and outer disk, NH\({}_{3}\) ice does not constitute more than \(10^{-3}\) of the total ice by molar fraction.

#### 3.1.2 Carbon dioxide (CO\({}_{2}\))

While CO\({}_{2}\) ice is initially only a trace species in the accreted circumstellar disk material, it becomes abundant in the CPD prior to the accreted material reaching the midplane. The chemical network diagram of the predominant CO\({}_{2}\) ice formation paths during this stage can be found in Fig. 7.
This figure illustrates how the production of OH by collider reactions (green arrows) is critical to the efficient formation of CO\({}_{2}\) ice. In the time that accreted gas and ice reside in the optically thin surface layers of the CPD, significant quantities of atomic oxygen are initially liberated from gas-phase H\({}_{2}\)O and subsequently hydrogenated via three-body collider reactions. The OH then reacts with abundant gas-phase CO to produce 98% of the CO\({}_{2}\), which then freezes out onto grains. In particular, three-body collider reactions account for nearly all (\(>99\%\)) of the OH formation that is critical for the CO+OH gas-phase reaction. It can also be seen in Fig. 7 that the grain-surface formation of CO\({}_{2}\) ice plays only a minor role prior to the gas parcel reaching the midplane. After the gas and dust parcel reaches the midplane, the chemistry is evolved for an additional 10\({}^{3}\)-10\({}^{4}\) yr for the high- and low-viscosity cases, respectively. The resulting composition at \(t_{\rm visc}\) is similar to that of the full reset case, with the exception that the inner CPD (near the present day orbit of Callisto) also retains a significant CO\({}_{2}\) ice component. This can be seen in Fig. 5 (a) and Fig. 5 (d). CO\({}_{2}\) ice formation continues in the outer CPD at the midplane in the absence of abundant atomic O, as OH is produced instead on grain-surfaces by the photodissociation of H\({}_{2}\)O#. This is described in the following section and can be seen in Fig. 8.

Figure 4: Overview of the chemical composition at the CPD midplane for the "full reset" case (_top row_), for the "partial reset" case (_middle row_), and for the "full inheritance" case (_bottom row_). _Left column_: Radial mass fraction of ices at the CPD midplane (filled-colored regions) where f\({}_{\rm ice}\)\(>\) 0.01. The white lines indicate the radial ice-to-rock ratio of solids at the midplane (solid line) and integrated up to an altitude above the midplane equal to the Hill radius of Ganymede (dashed line). The estimated ice-to-rock ratio of the Galilean satellites is included (circles with error bars). _Center column_: Radially integrated midplane ice composition out to \(R_{\rm H}/3\) (outer ring) and within the orbit of Callisto a\({}_{\rm IV}\) (inner circle). _Right column_: Total disk-integrated elemental composition of the ices are shown in the same two radial zones.

#### 3.1.3 Formic acid (HCOOH)

HOCO (the hydrocarboxyl radical) and HCOOH (formic acid) are of relevance in the cold, high-density midplane where CO\({}_{2}\) ice can form; thus, these were included in our extended chemical network. Formic acid is the simplest carboxylic acid and has been identified in star-forming regions (Schutte et al., 1999; Ikeda et al., 2001) both in gaseous and solid states, as well as in protoplanetary disks (Favre et al., 2018) and in comets (Crovisier et al., 2004). Its abundance is typically 1-10% that of water ice (Bisschop et al., 2007). The chemical network diagram of HCOOH formation in the outer CPD can be found in Fig. 8. It is clear that grain surface reactions play a completely dominant role in this process. In the outer CPD, we find that although it is not stable as an ice, the gas-phase CO freezes out and temporarily occupies a physisorption site on the grain surface. Prior to desorbing, the CO# reacts with grain-surface OH# to form CO\({}_{2}\)# and H# (Oba et al., 2010; Liu & Sander, 2015), for which we have adopted the effective barrier of 150 K (Fulle et al., 1996; Ruaud et al., 2016).
\[\mathrm{CO}+\mathrm{dust}\rightarrow\mathrm{CO}\#, \tag{22}\]

\[\mathrm{CO}\#+\mathrm{OH}\#\rightarrow\mathrm{CO}_{2}\#+\mathrm{H}\#. \tag{23}\]

Alternatively, as an intermediate step of the OH# + CO# reaction the van der Waals complex HOCO# is formed, which can be hydrogenated to form HCOOH#:

\[\mathrm{CO}\#+\mathrm{OH}\#\rightarrow\mathrm{HOCO}\#, \tag{24}\]

\[\mathrm{HOCO}\#+\mathrm{H}\#\rightarrow\mathrm{HCOOH}\#. \tag{25}\]

This HOCO# formation route can explain the presence of HCOOH# in cold, dense clouds (Ioppolo et al., 2011; Qasim et al., 2019). The resulting radial abundance of HCOOH# in the reference CPD can be seen in Fig. 5 (c). In the partial reset case, HCOOH ice can locally constitute a significant fraction of the ices in the reference CPD (\(\sim\)10 mol%). We found significant abundances (\(\sim 10\%\) relative to H\({}_{2}\)O ice) of HCOOH ice in the outer region of the CPD. This is comparable to the upper end of inferred abundances (\(\sim 1-10\%\) level relative to H\({}_{2}\)O ice) observed toward young stellar objects (Schutte et al., 1999; Keane et al., 2001; Knez et al., 2005; Boogert et al., 2015). The relatively large abundance of HCOOH ice in the outer CPD relative to its observationally derived abundance in astrophysical ice mixtures in the ISM is noteworthy. However, this was not entirely unexpected. The minimum CPD temperature set by equilibrium with the background radiation field ensures that a large region in the outer CPD exhibits a narrow range of temperature from 50-55 K. Given that the majority of the disk surface area is in this zone, the total disk composition is weighted heavily towards these specific conditions. However, background temperatures as low as 30 K or as high as 70 K do not produce abundant alternative impurities, while the outer CPD remains dominated by CO\({}_{2}\) and HCOOH ice.

Figure 5: Radial abundance of selected non-H\({}_{2}\)O ices as a fraction of the total ice abundance for the low-viscosity case with \(t_{\rm visc}=10^{4}\) yr (solid lines) and high-viscosity case with \(t_{\rm visc}=10^{3}\) yr (dashed lines). The positions of the Galilean satellites are indicated by the empty circles. A light gray horizontal line indicates a concentration of 1%.

Figure 6: Chemical network diagram illustrating the formation of NH\({}_{3}\) ice in the CPD after a partial reset, immediately prior to the accreted gas reaching the midplane. The pathway from N\({}_{2}\) to NH\({}_{3}\)# is highlighted. Percentages for reaction A\(\rightarrow\)B indicate the fraction of reactions forming B which involve species A. A label "steady-state" indicates that the net rate is zero.

Additionally, the stability of the HCOOH ice in our model is subject to several uncertainties. The only grain-surface reaction in our network that is able to destroy HCOOH# is the photo-induced dissociation to HCO# and OH#. Alternatively, it can be placed directly back into the gas phase by thermal, cosmic-ray, or UV-photon induced desorption. We did not include grain-surface hydrogenation of the HCOOH ice. Bisschop et al. (2007) found that hydrogen bombardment of a pure multilayer HCOOH ice does not result in the production of detectable reaction products, concluding that the hydrogenation of HCOOH does not play a role in formation of more complex species and that only minor amounts desorb. In contrast, Chaabouni et al.
(2020) found that H-bombardment of a \(<1-3\) monolayer coating of HCOOH ice at 10-100 K results in efficient production of CO\({}_{2}\) and H\({}_{2}\)O molecules, as well as CH\({}_{3}\)OH and H\({}_{2}\)CO. The authors suggest that this disagreement stems from the inefficiency of H atom diffusion through the pure HCOOH multilayer used in the experimental setup of Bisschop et al. (2007). Alternatively, the sub-monolayer conditions present in the setup of Chaabouni et al. (2020) potentially cause the substrate itself to become hydrogenated, increasing the sticking coefficient for H atoms and promoting surface reactions. Where HCOOH ice is found in our CPD, it has been co-deposited with H\({}_{2}\)O ice and CO\({}_{2}\) ice (with molar ratio H\({}_{2}\)O:CO\({}_{2}\):HCOOH 100:80:80), with an equivalent thickness of several hundred monolayers. Hence, we consider it plausible that the majority of the HCOOH embedded within the ice matrix would not be efficiently hydrogenated. To date, HCOOH ice has not been detected on the surface of any Galilean moon. Experimental results indicate that HCOOH ice has a relatively short \(8\times 10^{7}\) yr half-life against irradiation by galactic cosmic rays, being dissociated into CO or CO\({}_{2}\) (Bergantini et al., 2013). Any HCOOH accreted onto the surface of, for instance, Callisto would therefore likely be absent in the present era, having been reduced to \(<1\%\) of its initial concentration within only 0.56 Gyr. There is a paucity of research investigating the role of HCOOH in subsurface melts; however, under hydrothermal conditions water can act as a homogeneous catalyst for the decarboxylation pathway of HCOOH decomposition in liquids (Ruelle et al., 1986), where it decomposes into the astrobiologically relevant CO\({}_{2}\) and H\({}_{2}\) molecules (Yu & Savage, 1998).

Figure 7: Chemical reaction network illustrating the formation of CO\({}_{2}\) ice after a partial reset in which ices accreting onto the CPD are initially sublimated and placed into the gas-phase.

Figure 8: Chemical network diagram centered on the formation of HCOOH ice in the outer regions of the CPD at the midplane.

### Full reset (initially atomic gas)

In the full reset case, the gas in the CPD is initially fully atomic and ionized and no ices are present. This state represents, for instance, the accretion of a high-mass planet (\(M>1M_{\rm J}\)), with correspondingly higher infall shock-velocity at the CPD surface, or an accretion of material originating from a greater scale height in the circumstellar disk than we have considered. In the fully reset case, the abundant free atomic hydrogen enables highly efficient combustion chemistry to produce a water-dominated ice composition, as found in Paper I. This efficient water formation locks away the majority of atomic oxygen early on, such that atomic oxygen is \(10^{5}\) times less abundant than in the partial reset case after 5 yr. Accordingly, the OH formation rate via O+H is lower and so significantly less OH is available to form CO\({}_{2}\) via CO+OH, while the CO abundances are very similar between the two cases (\(10^{-3.86}\) vs. \(10^{-3.88}\) relative to H\({}_{2}\)). Again, ammonia ice is not able to form in abundance as the initially atomic nitrogen is predominantly locked in N\({}_{2}\) within a single year via N + NO \(\rightarrow\) N\({}_{2}\) + O or N + NH \(\rightarrow\) N\({}_{2}\) + H. The radial composition of the ices after \(10^{4}\) yr is similar to the partial
reset case, although CO\({}_{2}\) ice is found in abundance only in the outer disk beyond \(\sim 2\times\) the semi-major axis of Callisto. In contrast to the partial reset case, the inner disk region is dominated by water ice with a minor (\(<1\%\)) methanol (CH\({}_{3}\)OH) component. Methanol is an important primordial solar system volatile and may act as an anti-freeze in subsurface oceans (Deschamps et al. 2010; Dougherty et al. 2018). It has been found to be abundant in solid form near protostars (Dartois et al. 1999; Boogert et al. 2015), in comets (Bockelee-Morvan et al. 1991; Mumma et al. 1993; Bockelee-Morvan & Biver 2017; Biver & Bockelee-Morvan 2019), and in the gas-phase in planet-forming disks (Walsh et al. 2016; Booth et al. 2021), where it may be formed via grain-surface reactions involving hydrogenation of CO# (Hiraoka et al. 1994; Watanabe & Kouchi 2002). At typical pressures in our reference CPD the freeze-out temperature of methanol is greater than that of NH\({}_{3}\) and CO\({}_{2}\) (Mousis et al. 2009; Johnson et al. 2012). Thus, if the CO\({}_{2}\) ice observed on Callisto's surface was formed primordially in the CPD, we can expect that temperatures in the CPD would have allowed for stable methanol ice to be present as well. Indeed, we find that in the inner disk this occurs for \(t_{\rm visc}>10^{3}\) yr, where methanol ice is present at the 1% level at temperatures above 65 K with a peak abundance at 95-100 K. At these densities, it originates almost exclusively from reactions in the gas-phase via sequential hydrogenation of CO in two- and three-body reactions. Approximately 70% is formed via:

\[{\rm CO}+{\rm H}+{\rm M}\rightarrow{\rm HCO}+{\rm M}, \tag{26}\]

\[{\rm HCO}+{\rm H}+{\rm M}\rightarrow{\rm H}_{2}{\rm CO}+{\rm M}, \tag{27}\]

\[{\rm H}_{2}{\rm CO}+{\rm H}\rightarrow{\rm CH}_{3}{\rm O}, \tag{28}\]

\[{\rm CH}_{3}{\rm O}+{\rm H}\rightarrow{\rm CH}_{3}{\rm OH}, \tag{29}\]

and the remainder by

\[{\rm H}_{2}{\rm CO}+{\rm H}\rightarrow{\rm CH}_{2}{\rm OH}, \tag{30}\]

\[{\rm CH}_{2}{\rm OH}+{\rm H}\rightarrow{\rm CH}_{3}{\rm OH}. \tag{31}\]

For the reaction H\({}_{2}\)CO + H \(\rightarrow\) CH\({}_{3}\)O (Eq. 28), we have adopted the rate coefficients from Huynh & Violi (2008) with a barrier of 865 K. In the absence of this reaction, we find that methanol is produced in similar quantities via the CH\({}_{2}\)OH pathway. The rate of formation is thus highly contingent on the availability of free atomic hydrogen in the gas-phase. The absence of abundant atomic hydrogen prevents the accumulation of methanol in the partial reset or inheritance cases. An additional "bottleneck" in the reaction network is H\({}_{2}\)CO. This can be seen in Fig. 9. H\({}_{2}\)CO is formed almost exclusively (\(>99\%\)) via gas-phase three-body collider reactions. In the ISM, methanol ice abundances can significantly exceed that which we find in the CPD. The grain-surface hydrogenation of H\({}_{2}\)CO to form CH\({}_{3}\)O (Eq. 28) has been observed at low temperatures experimentally (Hidaka et al. 2004; Fuchs et al. 2009; Chuang et al. 2016), suggesting that successive hydrogenations of CO can explain the observed abundance of interstellar methanol at low temperatures (\(<15\) K). Above this temperature the desorption of H atoms and lower sticking efficiency of H due to the absence of H\({}_{2}\) causes a considerable drop in this reaction rate. While these reactions are included in our chemical network, the gas temperature in the CPD does not fall below 50 K; thus, we find this path to be inefficient.
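The strong temperature sensitivity implied by the adopted 865 K barrier can be illustrated with the classical Boltzmann factor alone, ignoring the density and atomic-H dependence of the full rate; the arithmetic below is only meant to show why this gas-phase channel favors the warmer inner disk.

```python
from math import exp

# Relative ease of crossing the 865 K barrier of H2CO + H at different gas temperatures.
E_act = 865.0  # K
for T in (50.0, 65.0, 100.0):
    print(f"T = {T:5.1f} K :  exp(-E_act/T) = {exp(-E_act / T):.1e}")
# The factor grows by nearly four orders of magnitude between 50 K and 100 K,
# one reason methanol ice appears mainly in the warm inner disk (peak at 95-100 K).
```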
### Full inheritance case

In the event of a full chemical inheritance from the circumstellar disk gap edge, the ice accreting onto the CPD consists predominantly of water, with ratios H\({}_{2}\)O:NH\({}_{3}\):HCN of 100:15:10 and a significant \(\sim 10\%\) component of hydrocarbon ices (e.g. C\({}_{2}\)H\({}_{2}\), C\({}_{3}\)H\({}_{2}\)). This result is generally consistent with modeling of the outer regions of the circumstellar disk, where NH\({}_{3}\)/H\({}_{2}\)O = 0.14 with as much as 80% of the nitrogen locked into NH\({}_{3}\) and to a lesser extent HCN (Dodson-Robinson et al. 2009). The final composition of the ices in the inheritance scenario is highly contingent on their initial composition. Given the difficulties in correctly capturing the relevant physical conditions at the outer gap edge and the uncertainty from which altitude the gas originates, we consider it more informative to discuss how the ices are altered post-accretion, rather than focusing on their final composition. Some minor processing of the ices occurs once they are incorporated into the CPD. The more volatile HCN and hydrocarbon ices are lost in the inner region of the disk, where only NH\({}_{3}\)# and a minor component of hydrocarbon ices remain as impurities. In the outer region of the CPD, some conversion of H\({}_{2}\)O and HCN to HCOOH occurs, and to a minor extent CO\({}_{2}\). At temperatures below 70 K, HCOOH is co-deposited with the NH\({}_{3}\) ice. In the presence of the proton acceptor NH\({}_{3}\), HCOOH will convert to the formate anion HCOO\({}^{-}\) and NH\({}_{4}^{+}\) (Hudson & Moore 1999; Schutte et al. 1999; Galvez et al. 2010); however, formate is not included in our chemical network. Likewise, the salt formation reaction is not included in the network. We consider what impact the inclusion of this process could have on our final derived abundances. While the activation barrier of the reaction is negligible, the barrier against diffusion prevents it from occurring at 50-70 K (Schutte et al. 1999). However, some of the HCOOH will react immediately upon deposition due to the acid and base being in direct contact at adjacent binding sites. 10% of the HCOOH ice is observed to react promptly at 10 K in H\({}_{2}\)O:NH\({}_{3}\):HCOOH mixtures with equal parts NH\({}_{3}\) and HCOOH (Schutte et al. 1999). Hence, we might expect that as much as \(\sim 20\%\) of the HCOOH present in the outer disk could be converted upon adsorption to HCOO\({}^{-}\)NH\({}_{4}^{+}\).

Figure 9: Chemical network diagram centered on the formation of CH\({}_{3}\)OH at the CPD midplane in the reset case.

### Differing diffusion timescales

Owing to the uncertainty in the diffusion timescale on which the gas parcel drifts towards the CPD midplane, we also considered cases with 10\(\times\) shorter and longer \(t_{\rm diff}\). For \(t_{\rm diff}=100\) yr, all three initial conditions converge towards a similar final composition which is CO\({}_{2}\)-dominated (\(>95\%\) by weight) across the entire disk. This is clearly inconsistent with observations of the Galilean moons. The shorter \(t_{\rm diff}=1\) yr leaves the chemistry less affected by the time spent at the disk surface. In the partial reset case, a minor fraction (3%) of the accreted circumstellar NH\({}_{3}\) ice survives and can still be found at the CPD midplane after \(10^{4}\) yr. In the full reset case, the CH\({}_{3}\)OH component in the inner disk region becomes more substantial, increasing to a peak of 4% of the total ice mass.
This additional CH\({}_{3}\)OH forms because more of the initially atomic hydrogen survives until ices become stable against photodissociation, and is available to hydrogenate H\({}_{2}\)CO and CH\({}_{3}\)O.

## 4 Implications

### Absence of ammonia as an indicator of chemical reset

We have found that a partial or complete chemical reset of the CPD tends to suppress NH\({}_{3}\) formation as efficient N\({}_{2}\) self-shielding locks up nitrogen in N\({}_{2}\). Even if a substantial component (\(\sim 20-30\%\)) of NH\({}_{3}\) ice were present in Jupiter's feeding zone, a partial or complete reset would prevent its accumulation in the building blocks of the moons. Without a substantial NH\({}_{3}\) component, the liquidus temperature of the Galilean subsurface oceans may not differ substantially from that of pure water ice. Europa appears to be the only Galilean moon where tectonic or cryovolcanic processes have recently exchanged material between the surface and subsurface, where it could provide clues to the composition of an ocean (Kargel et al. 2000; Zolotov & Shock 2001). NH\({}_{3}\) brought to the surface in the form of an NH\({}_{3}\)-H\({}_{2}\)O matrix could be lost on geologically brief timescales to external radiation (Moore et al. 2007; Bergantini et al. 2014). The longevity of surface ammonia might be extended if it were present in a more stable form such as a hydrate or salt (Cook et al. 2018), but no positive detection has thus far been made (Clark et al. 2014). The non-detection of ammonium compounds on Europa's surface is compatible with a lack of ammonia in a subsurface ocean, although it is certainly not conclusive evidence of its absence. In contrast to the Galilean system, several lines of evidence indicate the presence of NH\({}_{3}\) ice during the accretion of the Saturnian moons. The inferred interior composition of the Saturnian moon Enceladus appears to resemble more closely well-mixed outer solar system material and is generally consistent with a composition inherited from solar nebula (cometary) material (Waite et al. 2009). Enceladus contains a liquid water ocean (Thomas et al. 2016) from which interior material is ejected through plumes (Spahn et al. 2006; Porco et al. 2006; Waite et al. 2006). The presence of NH\({}_{3}\) in the plumes of Enceladus has been established by measurements from several instruments onboard the Cassini spacecraft (Waite et al. 2009) at \(>0.1\%\) relative to H\({}_{2}\)O, alongside CO\({}_{2}\), CH\({}_{4}\), and H\({}_{2}\) (Magee & Waite 2017). Likewise, NH\({}_{3}\) ice is considered to be a likely source of Titan's nitrogen (McKay et al. 1988; Sekine et al. 2011; Mandt et al. 2014). We suggest that the CPDs of sufficiently massive planets lose accreted NH\({}_{3}\) ice to mild accretion shocks and subsequent chemical evolution, and that the absence of NH\({}_{3}\) ice may indicate a (partial) chemical reset has occurred. As NH\({}_{3}\) represents one of the most potent and potentially abundant anti-freezes, subsurface ocean occurrence rates and longevity may then be relatively enhanced in the icy moons that accompany lower-mass giant planets which inherit circumstellar material.

### Carbon dioxide at the origin of Ganymede and Callisto

Several lines of evidence suggest the surface of Callisto is among the most primordial of the regular satellites, potentially providing a direct link to the formation environment of the Galilean moons (Moore et al. 2004; Zahnle et al. 2003).
CO\({}_{2}\) ice has been detected on the surface of both Ganymede and Callisto (Carlson et al. 1996; McCord et al. 1997) but only appears below the surface on Callisto (Hibbitts et al. 2002; Hibbitts et al. 2003), where it appears to be exhumed by impact cratering. In contrast, CO\({}_{2}\) on the surface of Ganymede appears to be of exogenous or radiolytic origin (Hibbitts et al. 2003). Hence, if we consider Callisto's reservoir of CO\({}_{2}\) ice to be primordial, we can consider which of our assumptions are consistent with its presence. In the partial reset case, which we considered to be a priori the most likely initial condition of accreted material, CO\({}_{2}\) ice is present in significant quantities at the present-day position of Callisto but less so near Ganymede. Superficially, this appears to be consistent with the proposed distinct origins of Ganymede and Callisto's CO\({}_{2}\). However, the local ice mass fraction of CO\({}_{2}\) in the CPD is high (\(\geq\)60%). This appears to be in conflict with the inferred surface abundance of CO\({}_{2}\) ice on Callisto, where it constitutes no more than 0.01-0.16% of the host material mass (Hibbitts et al. 2002). It is, however, unclear whether the observationally inferred surface abundance of CO\({}_{2}\) on Callisto is truly representative of the subsurface composition. Pure CO\({}_{2}\) ice is not stable at the surface of the Galilean moons and CO\({}_{2}\) may instead be present in the form of clathrates (Chaban et al. 2007). Hence, an initially large CO\({}_{2}\) component exposed to the surface could have been lost to sublimation and dissociation. A substantial subsurface CO\({}_{2}\) reservoir is nevertheless implied, given the continuous replenishment of Callisto's CO\({}_{2}\) exosphere (Carlson 1999). In contrast to the partially reset case, we find CO\({}_{2}\) ice at a concentration of \(\sim 0.2\%\) near Callisto's location in the fully reset CPD. While this appears to be more representative of what is known of the Galilean moon surface composition, the primordial CO\({}_{2}\) concentration of Callisto's building blocks cannot simply be derived from the present state of the surface. Our findings are consistent with a primordial origin for Callisto's CO\({}_{2}\), and point to the possibility that Ganymede and Callisto's icy building blocks had distinct chemical compositions. While it has been suggested that Ganymede may have formed with a primordial CO\({}_{2}\) component which was lost during an episodic period of surface melting, our results suggest icy grains in its vicinity were CO\({}_{2}\)-poor. A CPD midplane temperature profile which is dominated by viscous heating and in which the water iceline falls between Europa and Ganymede naturally produces a CO\({}_{2}\) iceline between Ganymede and Callisto.

## 5 Summary and Conclusions

If the CPD ice composition is (partially or fully) reset, NH\({}_{3}\) ice formation is inefficient due to N\({}_{2}\) self-shielding. The resulting \(\ll\)1% concentration of NH\({}_{3}\) ice is unlikely to significantly alter the thermophysical/chemical properties of subsurface melt. The most significant impurities are the carbon-bearing CO\({}_{2}\) and HCOOH ices, which each make up at most \(\sim 10\)% of the molar ice fraction. If the growth of the Galilean moons occurred near their present-day positions, they are largely free of impurities, being composed of 98% water ice, \(\sim 2\)% CH\({}_{3}\)OH, and trace amounts of CO\({}_{2}\).
If instead the CPD ice composition is inherited from the circumstellar nebula, NH\({}_{3}\) ice can survive conditions at the CPD midplane and becomes the most abundant impurity. Observations indicating the presence of NH\({}_{3}\) in the Saturnian satellite system but not in the Galilean one are consistent with a reset-inheritance dichotomy. NH\({}_{3}\) in the planetary feeding zone of Jupiter, if present, may have been destroyed during accretion onto the CPD and then could not form again in time. Our key findings are summarized as follows:

1. The ice composition of the Galilean moons corresponds to a partial or full chemical reset, as opposed to the ices of the Saturnian moons, which may have been more directly inherited from the circumstellar disk.

2. A partial reset prevents efficient formation of ammonia ice. The building blocks of the Galilean moons (and of exomoons forming in similar CPDs) would be nitrogen-poor (NH\({}_{3}\) ice abundances with respect to the H\({}_{2}\)O ice of \(\sim 0.1\)%).

3. Our results are consistent with a primordial origin for CO\({}_{2}\) ice on Callisto and an ice composition that is chemically distinct from Ganymede.

The composition of the building blocks that form moons around giant planets is determined by the conditions of accretion onto the planet's CPD, which in turn is influenced by the mass and orbital properties of the planet. The compositional reset-inheritance dichotomy of CPD ices ties together the properties of the planet and the long-term geophysical evolution and composition of icy satellite interior oceans.

###### Acknowledgements.

The research of N.O. and I.K. is supported by grants from the Netherlands Organization for Scientific Research (NWO, grant number 614.001.552) and the Netherlands Research School for Astronomy (NOVA). This research has made use of NASA's Astrophysics Data System Bibliographic Services. This research has also extensively used Numpy (Harris et al., 2020), Matplotlib (Hunter 2007), Scipy (Virtanen et al.), and ProDiMo ([https://prodimo.iwf.oeaw.ac.at/](https://prodimo.iwf.oeaw.ac.at/)). N.O. would like to thank S. Coulemans for her suggestion that greatly improved the visualizations in this work, as well as J. Tjoa and S. van Merlo for helpful discussions and support.
The subsurface oceans of icy satellites are among the most compelling potentially habitable environments in the Solar System. Whether a liquid layer can be maintained below the surface depends on its chemical composition. Icy satellites are influenced by the chemical composition of the circumplanetary disk (CPD) from which they formed. However, the CPD accretes material from the surrounding circumstellar disk, and the degree of its chemical inheritance is unclear. We aim to support the interpretation of interior models and in-situ measurements of icy Solar System satellites by investigating the ice composition of chemically reset or inherited circumplanetary disks, with a particular focus on the Galilean satellite system. We model the circumplanetary disk with a radiation thermochemical code and extract the ice composition from time-dependent chemistry. The initial sublimation of the ices during accretion, CO2
2309.16140
CLIP-Hand3D: Exploiting 3D Hand Pose Estimation via Context-Aware Prompting
Contrastive Language-Image Pre-training (CLIP) starts to emerge in many computer vision tasks and has achieved promising performance. However, it remains underexplored whether CLIP can be generalized to 3D hand pose estimation, as bridging text prompts with pose-aware features presents significant challenges due to the discrete nature of joint positions in 3D space. In this paper, we make one of the first attempts to propose a novel 3D hand pose estimator from monocular images, dubbed as CLIP-Hand3D, which successfully bridges the gap between text prompts and irregular detailed pose distribution. In particular, the distribution order of hand joints in various 3D space directions is derived from pose labels, forming corresponding text prompts that are subsequently encoded into text representations. Simultaneously, 21 hand joints in the 3D space are retrieved, and their spatial distribution (in x, y, and z axes) is encoded to form pose-aware features. Subsequently, we maximize semantic consistency for a pair of pose-text features following a CLIP-based contrastive learning paradigm. Furthermore, a coarse-to-fine mesh regressor is designed, which is capable of effectively querying joint-aware cues from the feature pyramid. Extensive experiments on several public hand benchmarks show that the proposed model attains a significantly faster inference speed while achieving state-of-the-art performance compared to methods utilizing the similar scale backbone.
Shaoxiang Guo, Qing Cai, Lin Qi, Junyu Dong
2023-09-28T03:40:37
http://arxiv.org/abs/2309.16140v1
# CLIP-Hand3D: Exploiting 3D Hand Pose Estimation via Context-Aware Prompting ###### Abstract. Contrastive Language-Image Pre-training (CLIP) starts to emerge in many computer vision tasks and has achieved promising performance. However, it remains underexplored whether CLIP can be generalized to 3D hand pose estimation, as bridging text prompts with pose-aware features presents significant challenges due to the discrete nature of joint positions in 3D space. In this paper, we make one of the first attempts to propose a novel 3D hand pose estimator from monocular images, dubbed as CLIP-Hand3D, which successfully bridges the gap between text prompts and irregular detailed pose distribution. In particular, the distribution order of hand joints in various 3D space directions is derived from pose labels, forming corresponding text prompts that are subsequently encoded into text representations. Simultaneously, 21 hand joints in the 3D space are retrieved, and their spatial distribution (in x, y, and z axes) is encoded to form pose-aware features. Subsequently, we maximize semantic consistency for a pair of pose-text features following a CLIP-based contrastive learning paradigm. Furthermore, a coarse-to-fine mesh regressor is designed, which is capable of effectively querying joint-aware cues from the feature pyramid. Extensive experiments on several public hand benchmarks show that the proposed model attains a significantly faster inference speed while achieving state-of-the-art performance compared to methods utilizing the similar scale backbone. Code is available at: [https://anonymous.4open.science/r/CLIP_Hand_Demo-FD2B/README.md](https://anonymous.4open.science/r/CLIP_Hand_Demo-FD2B/README.md). CLIP, Hand Pose Estimation, Text Supervision, Transformer
## 1. Introduction The high accuracy of existing 3D hand pose and mesh estimation methods mainly depends on large-scale feature encoders, such as 8-Stacked Hourglass (Hourglass, 2017), HRNet-w48 (Hourglass, 2018), or 2-Stacked ResNet50 (He et al., 2017), which seriously affects the inference speed (as shown in Fig. 1). To achieve faster inference while maintaining high accuracy, researchers have made two main efforts. On the one hand, they carefully design limited-scale visual feature encoders to capture more complex feature details. However, the resulting performance is still unsatisfactory due to an inherent limitation: such encoders mainly excel at extracting relatively easily recognizable visual semantic representations. On the other hand, they pay more attention to expanding datasets. Although this approach can achieve superior performance, it is still quite difficult to obtain high-quality datasets with ground truth. From the above discussions, one question arises immediately: _Is it possible to leverage high-level human language knowledge to guide visual encoders and thereby encode more latent hand semantic details?_ Recently, Radford et al. (Radford et al., 2018) introduced the CLIP model to simultaneously input image-text pairs into corresponding feature encoder modules and maximize the feature consistency between them through contrastive learning. Zhang et al. (Zhang et al., 2019), Xu et al. (Xu et al., 2020), and Guzhov et al. (Guzhu et al., 2020) adopted the learning pattern of CLIP and subsequently proposed their respective models.
Experimental results from these studies demonstrate that appropriate text prompts can enrich visual representations, effectively transferring high-level human knowledge into deep neural networks. However, unlike image classification and segmentation, 3D hand pose and mesh recovery face challenges in connecting recapitulative text prompts with irregular joint-aware distributions due to the unique nature of labels (discrete 3D joint positions). To address this issue, we propose a novel and effective model, dubbed as CLIP-Hand3D, as illustrated in Fig. 2, which for the first time, successfully transfers discrete 3D joint positions into appropriate text prompts and generates the corresponding text representations. Specifically, we employ a 1D convolution layer to encode the spatial order of joints along the x, y, and z directions respectively. Simultaneously, the shallow perceptual features encoding the Lixel-map are preserved, and subsequently used for matching their corresponding text representation by a contrastive learning paradigm. In addition, we design a novel hand mesh estimator, incorporating a series of Transformer layers with multi-head self-attention mechanisms. Initially, it adopts a coarse-to-fine learning strategy, iteratively refining sparse-to-dense vertex positions with an appropriate positional encoding scheme for the current mesh structure. Then, from a global to local perspective, it effectively queries detailed cues from the feature pyramid by utilizing a joint-related feature projection module. Furthermore, our model achieves a significant inference speed due to a relatively lightweight visual encoder and a lower-dimensional feature space of the designed regression head. As depicted in Fig. 1, our model is capable of real-time inference, achieving a substantially higher FPS value than all other state-of-the-art methods. The main **contributions** of our paper are three-folds: * A novel model is designed to estimate 3D hand pose and shape from monocular RGB images, exhibiting a _significantly faster_ inference speed, while achieving the _state-of-the-art_ accuracy compared to methods using the similar scale visual encoders. * A novel text feature generation module is designed, which successfully connects irregular joint position labels and text prompts for the first time, thereby achieving consistent matching between pose-aware features and text representations. * A novel Transformer mesh regressor is designed, which effectively locates the spatial positional encodings among all sparse-to-dense mesh vertices, thereby matching the inherited joint-related features from the visual encoder. ## 2. Related Work **3D Sparse Joints Regression:** In terms of network structure and regression methods, sparse joints regression works can be divided into several categories: forward kinematics-based regression (Hou et al., 2019; Zhang et al., 2019; Zhang et al., 2019; Zhang et al., 2019; Zhang et al., 2019; Zhang et al., 2019), inverse kinematics-based regression (Zhang et al., 2019; Zhang et al., 2019), graph neural network-based pose estimators (Zhang et al., 2019; Zhang et al., 2019; Zhang et al., 2019; Zhang et al., 2019; Zhang et al., 2019; Zhang et al., 2019; Zhang et al., 2019), 2.5D heatmap-based pose estimators (Zhang et al., 2019; Zhang et al., 2019; Zhang et al., 2019; Zhang et al., 2019; Zhang et al., 2019), and Transformer-based regression networks (Zhang et al., 2019; Zhang et al., 2019; Zhang et al., 2019; Zhang et al., 2019; Zhang et al., 2019). 
Besides, many researchers have explored weakly supervised learning from various perspectives (Zhang et al., 2019; Zhang et al., 2019; Zhang et al., 2019; Zhang et al., 2019; Zhang et al., 2019; Zhang et al., 2019; Zhang et al., 2019). **3D Dense Vertices Regression:** Boukhayma et al. (Hou et al., 2019) were the first to attempt to estimate 3D hand mesh by predicting MANO model hyperparameters. Ge et al. (Ge et al., 2019) constructed a graph structure for each vertex position of the hand mesh. Lin et al. successively proposed Transformer-based structures (Lin et al., 2019) and a carefully designed Mesh-Graphormer (Hou et al., 2019) for 3D hand mesh estimation. Tang et al. (Zhou et al., 2019) presented a model to align estimated hand shape with input image semantics. Chen et al. proposed CMR (Chen et al., 2019) and MobRecon (Mborcon, 2019), which estimate 3D hand mesh in camera space and focus on a mobile-friendly model, respectively. Moreover, Li et al. (Li et al., 2019), Yu et al. (Yu et al., 2019), Kim et al. (Kim et al., 2019), and Lee et al. (Lee et al., 2019) developed a series of impressive models for regressing two hands' 3D pose and shape from monocular images. **CLIP-based methods:** Radford et al. (Radford et al., 2018) proposed the CLIP model, which associated textual and visual representations and enabled reliable zero-shot inference. Xu et al. (Xu et al., 2020) introduced the Video-CLIP model, aiming to unify video and textual representations through contrastive learning pretraining. Wang et al. (Wang et al., 2019) proposed the first text and image-driven NeRF implementation method. Tevert et al. (Tevert et al., 2019) utilized the knowledge encapsulated in CLIP to introduce a generative model. Zhang et al. (Zhang et al., 2019) proposed Point-CLIP, achieving alignment between point cloud encoding in CLIP and 3D classification text. Xu et al. (Xu et al., 2020) explored zero-shot transfer from textual supervision to semantic segmentation tasks. Rao et al. (Rao et al., 2019) introduced Dense-CLIP, transforming the original image-text matching problem in CLIP into a pixel-text matching problem. Figure 2. A schematic diagram of text prompts generation. By randomly selecting 5 joints from the 21 joints and generating corresponding text descriptions according to their distribution order in the x, y, and z directions. _Although CLIP-based methods have achieved impressive results in classification and segmentation tasks, no existing work has explored the use of text representations to connect pose-aware features. Besides, it is a challenging task to transform irregular joint positions into appropriate text prompts, particularly as it differs from the generalized words used for character class descriptions._ ## 3. Method As discussed earlier, we are dedicated to building a bridge between text prompts and pose-aware features, thereby introducing high-level human knowledge to drive deep neural networks to encode more semantic details. As depicted in Fig. 3, we provide a complete pipeline and network structure containing various modules and gradually introduce the implementation details of each sub-module in the following sections. ### Text Feature Generation **Text Prompts Generation:** Unlike tasks such as image classification and segmentation, generating corresponding text prompts from pose labels is not a straightforward process. As shown in Fig. 2, we propose a method for converting pose labels into text prompts.
Firstly, we randomly sample \(N\) keypoints from the \(K=21\) hand keypoints, with \(N\leq K\). Then, we slice the pose label according to the indices of the \(N\) keypoints in the \(K\) hand keypoints. For the distribution of the \(N\) hand keypoints along the \(x\)-axis, we arrange them in ascending order and generate a set \(N_{x}\) describing the sampled points in the \(x\)-direction according to the index order. For a specific hand keypoint \(i\), \(N_{x^{i}}\) belongs to \(N_{x}\). Since there is a high semantic consistency between the image and pose label, a text prefix "From left to right," can be added to describe the order of hand keypoints in the \(x\)-direction. Similarly, we obtain the sets \(N_{y}\) and \(N_{z}\) for the sampled points in the \(y\) and \(z\) directions, respectively, and generate corresponding description prefixes "From top to bottom," and "From near to far,". As shown in Fig. 2, assuming **5** keypoints are selected from the **21** keypoints, namely index MCP, thumb fingertip, little PIP, middle DIP, and ring fingertip, we can generate three corresponding text prompts \(W_{x}\), \(W_{y}\), and \(W_{z}\), where \(W_{x}\): "From left to right, the joints are index MCP, thumb fingertip, little PIP, middle DIP, and ring fingertip"; \(W_{y}\): "From top to bottom, the joints are little PIP, thumb fingertip, ring fingertip, index MCP, and middle DIP"; \(W_{z}\): "From near to far, the joints are thumb fingertip, ring fingertip, middle DIP, index MCP, and little PIP." **Text Feature Encoding:** Given text prompts \(W_{x}\), \(W_{y}\), and \(W_{z}\), we first employ a similar processing approach to the CLIP (Spiel et al., 2017) model for tokenization, resulting in \(T_{x}\), \(T_{y}\), and \(T_{z}\). Specifically, input text prompts are tokenized into a list of tokens using a mini-batch strategy. Next, with \(T_{x}\), \(T_{y}\), and \(T_{z}\) provided, we feed them into a **pre-trained** CLIP model to extract features \(F_{cx}\), \(F_{cy}\), and \(F_{cz}\), and further put them into their corresponding attention layer to get text representations \(F_{tx}\), \(F_{ty}\), and \(F_{tz}\), which describe the spatial order of joints in each dimension. To adapt the dimensionality of the latent joint feature encodings, and to further refine the specific downstream task, we introduce several Transformer modules with multi-head self-attention mechanisms to aid in consistently matching visual features and text representations. Formally, we have: \[F_{t}\Rightarrow\begin{cases}F_{tx}=\Phi_{x}(F_{c}(T_{x})),T_{x}=Token(W_{x}),\\ F_{ty}=\Phi_{y}(F_{c}(T_{y})),T_{y}=Token(W_{y}),\\ F_{tz}=\Phi_{z}(F_{c}(T_{z})),T_{z}=Token(W_{z}),\end{cases} \tag{1}\] where \(\Phi_{x}\), \(\Phi_{y}\), and \(\Phi_{z}\) represent text encoders based on the Transformer structure; \(F_{c}\) denotes the **pre-trained** CLIP model; and \(Token\) means tokenizing the text prompts. ### Pose Feature Generation **Visual Encoder:** Following previous methods (Zhu et al., 2017; Wang et al., 2018; Wang et al., 2018), we employ the original version of ResNet50 (He et al., 2017) as the Visual Encoder to encode the input monocular RGB image \(I\in R^{(224,224,3)}\). Figure 3. A detailed illustration of our proposed pipeline. First, _Text Feature Generation_ (see Sec. 3.1 for details) converts **3D** pose labels into text prompts and generates text features \(F_{t}\); then, we input image \(I\) into a CNN-based _Pose Feature Generation_ (see Sec.
3.2 for details) to extract pose-aware features \(F_{p}\) and regress **3D** joint positions \(P\); next, we construct matrices \(M_{lr}\), \(M_{tb}\), and \(M_{nf}\) through the _Feature Matching_ (see Sec. 3.3 for details), and maximize the semantic consistency between pose-aware features and their corresponding text representations; finally, we estimate reliable **3D** hand mesh vertices \(V\) through the _Mesh Regressor_ (see Sec. 3.4 for details). Given the input monocular image \(I\), the Visual Encoder yields the following features: shallow features \(F_{0}\) and feature pyramids \(F_{1}\), \(F_{2}\), \(F_{3}\), and \(F_{4}\). Specifically, \(F_{0}\in R^{(56,56,56)}\) retains the shallow representation, thereby preserving rich high-resolution hand semantics to facilitate refined hand vertices prediction. The feature pyramid (\(F_{1}\in R^{(256,56,56)}\), \(F_{2}\in R^{(512,28,28)}\), \(F_{3}\in R^{(1024,14,14)}\)) offers a latent distribution from global to local, which corresponds to the coarse-to-fine mesh feature sampling in the Mesh Regressor module. \(F_{4}\in R^{(2048,8,8)}\) is passed to the Joint Regressor module to output joint spatial positions in the x, y, and z directions by feature upsampling (de-convolution layer). **Joint Regressor:** Given the feature encoding \(F_{4}\) produced by the Visual Encoder, the Joint Regressor module generates spatial positions of hand joints (in _uvd_ space). Firstly, we employ a deconvolution layer (kernel-size=4, stride=2) to perform feature upsampling on \(F_{4}\), obtaining the feature \(F_{I}\), and subsequently reducing its number of channels from 1024 to 256. Following the Lixel-Map-based approach proposed by (Wang et al., 2017; Wang et al., 2017), we unfold the feature \(F_{I}\) along the specified dimension to obtain features \(F_{Ix}\), \(F_{Iy}\), \(F_{Iz}\), and further apply 1D convolution to obtain the latent features \(F_{p}:\{F_{px},F_{py},F_{pz}\}\). Then, we design corresponding 1D softmax layers according to the heatmap size to capture the maximum response points of hand joints \(P:\{P_{x},P_{y},P_{z}\}\) in the x, y, and z directions, respectively. By identifying the position indices of the maximum response points in the feature map and using the concatenate operation, the Joint _uvd_ Regressor module outputs the 3D hand joint positions \(P\in R^{(21,3)}\) in _uvd_ space. Formally, we have: \[P\Rightarrow\begin{cases}P_{x}=Argmax(Soft(F_{px})),F_{px}=Conv1d(F_{Ix}),\\ P_{y}=Argmax(Soft(F_{py})),F_{py}=Conv1d(F_{Iy}),\\ P_{z}=Argmax(Soft(F_{pz})),F_{pz}=Conv1d^{*}(F_{Iz}),\end{cases} \tag{2}\] where \(Conv1d\) means 1D-Convolution layer; \(*\) denotes multi-layer structure; \(Soft\) represents 1D-Softmax activation layer; \(Argmax\) means the index of the maximum value. ### Feature Matching Building on several prior CLIP-based works, we pass images and texts through their respective feature encoders to obtain pose-aware features \(F_{p}\) and text representations \(F_{t}\). To enrich the pose-aware features using text representations, we project \(F_{p}\) and \(F_{t}\) together into a common feature embedding space and compute the feature similarity between them. Employing a mini-batch optimization strategy with a batch size of **B**, we construct the logit latent matrix **M** of shape [3, B, B], treating all matched pose-text pairs as positive samples and the remaining non-matching pose-text pairs as negative samples.
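For concreteness, the prompt construction of Sec. 3.1 that produces these pose-text pairs can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the authors' released code: the joint-name list, the axis conventions (image y pointing downward, z as depth), and the function names are assumptions made for the example.

```python
import numpy as np

# Joint names indexed 0..20; the exact ordering is an assumption for illustration.
JOINT_NAMES = [
    "wrist",
    "thumb MCP", "thumb PIP", "thumb DIP", "thumb fingertip",
    "index MCP", "index PIP", "index DIP", "index fingertip",
    "middle MCP", "middle PIP", "middle DIP", "middle fingertip",
    "ring MCP", "ring PIP", "ring DIP", "ring fingertip",
    "little MCP", "little PIP", "little DIP", "little fingertip",
]

PREFIXES = {
    0: "From left to right",   # ascending x
    1: "From top to bottom",   # ascending y (image coordinates)
    2: "From near to far",     # ascending z (depth)
}

def build_prompts(joints_xyz: np.ndarray, n_sample: int = 5, rng=None):
    """Convert a (21, 3) pose label into three ordering prompts W_x, W_y, W_z."""
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(JOINT_NAMES), size=n_sample, replace=False)
    prompts = []
    for axis in range(3):
        # sort the sampled joints by their coordinate along this axis
        order = idx[np.argsort(joints_xyz[idx, axis])]
        names = ", ".join(JOINT_NAMES[i] for i in order[:-1]) + f", and {JOINT_NAMES[order[-1]]}"
        prompts.append(f"{PREFIXES[axis]}, the joints are {names}.")
    return prompts  # [W_x, W_y, W_z]
```

In this form, the prompts depend only on the relative ordering of the sampled joints along each axis, which is exactly the information the contrastive matching described next relies on.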
Intuitively, all values on the diagonal of the logit latent matrix represent the B pose-text pairs in the current batch. Formally, we have: \[\begin{cases}M_{lr}=\tau_{x}\hat{F}_{px}\cdot\hat{F}_{tx}^{T},\\ M_{tb}=\tau_{y}\hat{F}_{py}\cdot\hat{F}_{ty}^{T},\\ M_{nf}=\tau_{z}\hat{F}_{pz}\cdot\hat{F}_{tz}^{T},\end{cases} \tag{3}\] where \(M_{lr}\), \(M_{tb}\), and \(M_{nf}\) represent three different logit latent matrices and \(M=\{M_{lr},M_{tb},M_{nf}\}\); \(\tau_{x}\), \(\tau_{y}\), and \(\tau_{z}\) denote the corresponding learnable temperature parameters to scale the logit matrix respectively; \(\hat{F}_{s}\) stands for visual or text feature representations after L2 normalization; \(\cdot\) represents matrix multiplication; and \(T\) indicates matrix transpose. The left sub-figure in Fig. 4 presents a matching result of visual and text representations in the "horizontal" direction for a batch containing 8 samples, where the green elements on the diagonal represent positive samples and the blue elements denote negative samples. Specifically, we use yellow and red boxes to display two image-text matching pairs for detailed illustration. On one hand, images with ID 3 and ID 6 have a high matching similarity. This is because their hand joint distributions exhibit high consistency when viewed from the "From left to right" perspective ("ring fingertip, middle DIP, ring MCP, middle MCP, wrist, thumb DIP, and thumb fingertip"). We highlight their corresponding text prompts in red. On the other hand, images with ID 1 and ID 3 have a lower pose-aware feature similarity. The right sub-figure in Fig. 4 shows a matching result of visual and text representations in the "vertical" direction for another batch containing 8 samples, where the yellow elements on the diagonal represent positive samples, and the **black** elements denote negative samples. Intuitively, image samples with ID 1 and ID 5 have high similarity in pose distribution from top to bottom, while samples with ID 3 and ID 5 have low similarity. The hands in images with ID 1 and ID 5 are both pointing "downward" while the hand in the image with ID 3 appears to be pointing "upward". Figure 4. A visualization of the logit latent matrix. The left subplot shows the text-pose feature pairs' matching result in the "left-to-right" direction in one batch; the right subplot shows the feature pairs' matching result in the "top-to-bottom" direction in another batch. ### Mesh Regressor To forward the joint _uvd_ positions into the Mesh Regressor module, we first generate the corresponding heatmap \(H^{P}\) according to the pre-defined heatmap size. Then, we concatenate \(F_{0}\), which encodes the shallow semantic information of the hand, with the heatmap \(H^{P}\) along the specified dimension, and then use CNN and MLP layers to adjust the number of channels and feature dimensions. Finally, we obtain the 3D hand shape feature encoding \(F_{I}\). **Mesh Regressor Layer**: As depicted in Fig. 3, we designed a mesh feature refinement network with a coarse-to-fine approach, implemented in the order of (Zhu et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019) to refine mesh features. Additionally, we devised a mesh node sampling network with a sparse-to-dense progression, increasing the number of hand mesh points in the order of (Wang et al., 2018; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019). We employed the upsampling network based on GNN to expand the number of nodes while utilizing an MLP layer to adjust the feature dimensions.
Following each feature dimension adjustment, we introduce a Transformer-based multi-head attention regression network to enhance the mesh features and enable self-attention interaction at that layer. Notably, the Transformer-based regressor maintains the original node's positional encoding, as it does not alter the number of nodes or feature dimensions at each layer. Finally, we designed an MLP layer for regressing and predicting the spatial coordinates of the mesh \(V\). **Feature Pyramid Projection**: To fully harness the hand feature encoding in the Visual Encoder, we designed a Joint-based Feature Pyramid Projection module. As illustrated in Fig. 3, we project the features of each level in the Feature Pyramid according to the Joint _uvd_ position. Specifically, we adopt a structure similar to U-Net (Wang et al., 2019), linking the mesh feature encodings at each level with the corresponding visual representation, ensuring the consistency of feature encoding levels. For instance, when considering \(F_{1}\) in the feature pyramid, we project the joint _uvd_ position to obtain the corresponding feature sampling \(F_{1J}\), and then pass it to the feature encoding representing the dense hand mesh. High-resolution feature maps possess a smaller spatial receptive field, which aids in preserving more shallow hand semantics to assist the corresponding hand mesh feature encoding in accurately determining the hand mesh point positions. ### Loss Functions We mainly applied the following loss functions: **1) Supervised Learning Loss**: For 3D hand pose and shape recovery from monocular RGB images, we employ the \(L1\) norm for the 3D hand joint loss \(\mathcal{L}_{P}\) and the vertices loss \(\mathcal{L}_{V}\). Formally, we have: \[\mathcal{L}_{P}=\sum_{i=1}^{J}\left\|P_{i}^{(gt)}-P_{i}\right\|_{1},\mathcal{L}_{V}=\sum_{i=1}^{K}\left\|V_{i}^{(gt)}-V_{i}\right\|_{1}, \tag{4}\] where \(P_{i}\) and \(P_{i}^{(gt)}\) represent the predicted 3D hand pose and its ground truth, respectively; \(V_{i}\) and \(V_{i}^{(gt)}\) represent the predicted hand vertices and their ground truth, respectively. **2) Norm Loss**: To maintain the stability of predicted hand mesh surface normals and vertices during the training process, we applied normal loss \(\mathcal{L}_{N}\) and vertex loss \(\mathcal{L}_{E}\). \[\mathcal{L}_{N}=\sum_{c\in C}\sum_{(i,j)\subset c}\left|\frac{V_{i}-V_{j}}{||V_{i}-V_{j}||_{2}}\cdot n_{c}^{(gt)}\right|_{1}, \tag{5}\] where \(V\) and \(C\) represent the predicted vertices and their corresponding triangular faces, respectively; \(n_{c}^{(gt)}\) denotes the unit normal vector for each face \(c\in C\); and \(V^{(gt)}\) means the hand vertices ground truth. **3) Consistency Loss**: Following several self-supervision related works, we applied 2D and 3D consistency losses (\(\mathcal{L}_{C2d}\) and \(\mathcal{L}_{C3d}\)) to supervise the predicted pose and vertices. \[\mathcal{L}_{C2d}=\left\|Aff(\hat{P}_{1})-\hat{P}_{2}\right\|_{1},\mathcal{L}_{C3d}=\left\|Rot(V_{1})-V_{2}\right\|_{1}, \tag{6}\] where \(V_{1}\), \(\hat{P}_{1}\) and \(V_{2}\), \(\hat{P}_{2}\) represent the predicted hand vertices and joint _uvd_ positions under different viewpoints, and \(Rot\) and \(Aff\) denote the rotation and affine transformation matrix. **4) CLIP Loss**: To maximize the values distributed along the diagonal of the logit matrix, thereby achieving a match between text and pose-aware features, we introduce the CLIP loss \(\mathcal{L}_{CLIP}\) to supervise a batch containing B image-text pairs.
Formally, we have: \[\mathcal{L}_{CLIP*}=\bigg(-\frac{1}{B}\sum_{i=1}^{B}\log\frac{\exp(\hat{F}_{ps}^{i}\cdot\hat{F}_{ts}^{i}/\tau_{s})}{\sum_{j=1}^{B}\exp(\hat{F}_{ps}^{i}\cdot\hat{F}_{ts}^{j}/\tau_{s})}\bigg)+\bigg(-\frac{1}{B}\sum_{i=1}^{B}\log\frac{\exp(\hat{F}_{ts}^{i}\cdot\hat{F}_{ps}^{i}/\tau_{s})}{\sum_{j=1}^{B}\exp(\hat{F}_{ts}^{i}\cdot\hat{F}_{ps}^{j}/\tau_{s})}\bigg), \tag{7}\] where \(\mathcal{L}_{CLIP*}\) describes the consistency loss of pose-text pairs along the current coordinate axis direction; \(\hat{F}_{ps}\) and \(\hat{F}_{ts}\) represent the corresponding normalized pose representation and text feature encoding, respectively. Subsequently, we add the CLIP losses in three directions (x, y, z) and compute the average to obtain \(\mathcal{L}_{CLIP}=\frac{1}{len(S)}\sum_{s\in S}\mathcal{L}_{CLIP*},S=\{x,y,z\}\). Finally, we train the whole framework to optimize all the learnable parameters in an **end-to-end** manner. Formally, we have: \[\mathcal{L}=\alpha_{1}(\mathcal{L}_{P}+\mathcal{L}_{V})+\alpha_{2}(\mathcal{L}_{N}+\mathcal{L}_{E})+\alpha_{3}(\mathcal{L}_{C2d}+\mathcal{L}_{C3d})+\alpha_{4}\mathcal{L}_{CLIP}, \tag{8}\] where the hyper-parameters \(\alpha_{1}\), \(\alpha_{2}\), \(\alpha_{3}\) and \(\alpha_{4}\) are balance factors to weight the losses, with \(\alpha_{1}=1.0\), \(\alpha_{2}=0.05\), \(\alpha_{3}=0.1\) and \(\alpha_{4}=0.1\). ## 4. Experiments ### Datasets & Metrics **FreiHAND**: The FreiHAND dataset (FHD) (Zhu et al., 2017) contains **130,240** training images from 32 characters of different genders and ethnic backgrounds, holding either nothing or various standard daily necessities. The test set in this dataset includes **3,960** samples collected from specific outdoor and office scenes. **RHD**: The RHD (Wang et al., 2018) dataset is a synthetic dataset, which poses a significant challenge due to the complex texture-based backgrounds, rich hand gestures, and severe self-occlusion present in the hand objects. Following the same settings as previous methods, we used **41,258** images for training and **2,728** images for testing. **STB**: The Stereo Hand Pose Tracking Benchmark (STB) (Wang et al., 2018) is a real-world dataset. Following the same settings as previous methods, we use the STB-SK subset, which includes **15,000** RGB images for training and another **3,000** RGB images for testing, all of which provide accurate hand keypoint annotations. **Real-world**: The real-world dataset (Kulon et al., 2018) consists of two parts: the training set and the test set. The training set contains over **300,000** synthesized hand images and corresponding labels, while the test set provides more than **500** real-world hand images. We use the following metrics to quantitatively evaluate model performance: **PJPE** (per joint position error), **PVPE** (per vertex position error), **PA-PJPE**, **PA-PVPE**, **Median PJPE**, **3D PCK**, **AUC** (area under PCK curve), **F@5mm**, and **F@15mm**. ### Implementation Details During the training and inference stages, we use **PyTorch** as the framework to conduct all experiments. We train our full model on a single NVIDIA RTX 3090 and run image inference on a single NVIDIA RTX 2080Ti. Initially, we pre-train the weight parameters of the Visual Encoder (ResNet50) and Joint _uvd_ Regressor. Before this, we load the ImageNet pre-trained weight parameters into the Visual Encoder. Subsequently, we fine-tune the entire network parameters in an **end-to-end** optimization manner.
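Before listing the optimizer settings, it may help to see the per-axis CLIP supervision of Eq. (7) written out. The following is a minimal PyTorch sketch under common CLIP conventions (L2-normalized features, a learnable temperature, cross-entropy over the logit matrix of Eq. (3)); it is not the authors' exact implementation, and all function and tensor names are illustrative.

```python
import torch
import torch.nn.functional as F

def clip_axis_loss(f_pose: torch.Tensor, f_text: torch.Tensor, tau: torch.Tensor) -> torch.Tensor:
    """Per-axis symmetric contrastive loss, following the form of Eq. (7).

    f_pose, f_text: (B, D) pose-aware and text features for one axis (x, y, or z).
    tau: learnable temperature (positive scalar parameter).
    """
    f_pose = F.normalize(f_pose, dim=-1)   # \hat{F}_{ps}: L2-normalized pose features
    f_text = F.normalize(f_text, dim=-1)   # \hat{F}_{ts}: L2-normalized text features
    logits = f_pose @ f_text.t() / tau     # (B, B) logit matrix; matched pairs lie on the diagonal
    targets = torch.arange(f_pose.size(0), device=f_pose.device)
    # pose-to-text and text-to-pose cross-entropy terms, summed as in Eq. (7)
    # (F.cross_entropy already averages over the batch, giving the 1/B factor)
    return F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)
```

Averaging this quantity over the three axes, as stated above, yields \(\mathcal{L}_{CLIP}\).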
The specific process involves using the AdamW optimizer (Kingmaa et al., 2014) with a mini-batch size of 48 and training for 200 epochs. The initial learning rate is set at 1e-3, and the learning rate schedule follows a fixed-step decay strategy, where the learning rate is reduced to 0.25 times the previous rate every 50 epochs. Although our proposed model contains several MLP layers, the feature dimensions of the designed fully connected layers are relatively low (e.g., 128, 64, 32), which lays the foundation for fast inference. On a single NVIDIA RTX 2080Ti, the inference speed surpasses **77 FPS**; on a single NVIDIA RTX 3090, the inference speed achieves **120 FPS**. ### Quantitative and Qualitative Results #### 4.3.1. Quantitative Results For the FreiHAND dataset, we primarily conduct a detailed comparison with the current state-of-the-art (SOTA) methods. As shown in Table. 1, among methods that employ the original ResNet50 as the visual encoder (MANO CNN (Kulon et al., 2018), Kulon et al. (2018), I2L-MeshNet (Wang et al., 2019), I2UV-HandNet (Chen et al., 2021), HandAR (Wang et al., 2020), and CycleHand (Kulon et al., 2018)), our approach achieves SOTA performance and inference speed (compared to the _fastest-performing_ method by Kulon et al., F@5: 0.614 vs. **0.728**, FPS: 60 vs. **77**; compared to the _best-performing_ method by HandAR, F@5: 0.724 vs. **0.728**, FPS: 39 vs. **77**). It can also be found that, when compared with SOTA models with large-scale visual encoders (MeshGraphormter (Sandjil et al., 2019), and MobRecon (Chen et al., 2021)), our method can still achieve speed comparison (FPS: 4, 45 vs. 77) with competitive performance (6.1, 5.9 vs. **6.6mm**) Visual encoders with relatively large complexity (Stack-ResNet50 (Chen et al., 2021), (Chen et al., 2021), HRNet-w48 (Kulon et al., 2018; Kulon et al., 2018; Kulon et al., 2018; Kulon et al., 2018), N-Stacked Hourglass (Kulon et al., 2018; Kulon et al., 2018)) can encode more semantic details of hand image yet can bring trouble for the faster network inference. Besides, as the scale of the visual encoder model increases, our model can also achieve the \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Method & Backbone & PA-PIPE \(\downarrow\) & PA-PVPE \(\downarrow\) & F@5mm \(\uparrow\) & F@15mm \(\uparrow\) & FPS \(\uparrow\) \\ \hline MANO CNN (ICCV 19) (Kulon et al., 2018) & ResNet50 & 10.9 & 11.0 & 0.516 & 0.934 & - \\ Hasson et al. (CVPR 19) (Hasson et al., 2018) & ResNet18 & 13.3 & 13.3 & 0.429 & 0.907 & 20 \\ Boukh et al. (CVPR 19) (Kulon et al., 2018) & ResNet34 & 13.2 & 35.0 & 0.427 & 0.894 & 11 \\ Kulon et al. 
(CVPR 20) (Kulon et al., 2018) & ResNet50 & 8.4 & 8.6 & 0.614 & 0.966 & 60 \\ I2L-MeshNet (ICCV 20) (Wang et al., 2020) & ResNet50 & 7.4 & 7.6 & 0.681 & 0.973 & 33 \\ I2UV-HandNet (ICCV 21) (Chen et al., 2021) & ResNet50 & 7.2 & 7.4 & 0.682 & 0.973 & - \\ HandAR (ICCV 21) (Wang et al., 2020) & ResNet50 & 6.2 & 6.7 & 0.724 & 0.981 & 39 \\ CycleHand (ACM MM 22) (Kulon et al., 2018) & ResNet50 & 8.3 & 8.3 & 0.631 & 0.967 & - \\ \hline & ResNet50 & 6.6 & 6.7 & 0.728 & 0.981 & 77 \\ \hline Pose2Mesh (ECCV 20) (Kulon et al., 2018) & HRNet+Linear & 7.4 & 7.6 & 0.683 & 0.973 & 22 \\ MANO GCN (ICME 21) (Chen et al., 2021) & HBNet=w48 & 9.5 & 9.5 & 0.579 & 0.950 & - \\ CMR (CVPR 211) (Chen et al., 2021) & Stack-ResNet50 & 6.9 & 7.0 & 0.715 & 0.977 & 30 \\ METO (ICCV 21) (Chen et al., 2021) & HRNet+w48 & 6.7 & 6.8 & 0.717 & 0.981 & 4 \\ HIU (ICCV 21) (Chen et al., 2021) & Stack-Hourglass & 7.1 & 7.3 & 0.699 & 0.974 & 9 \\ MoRehcon (CVPR 22) (Chen et al., 2021) & HRNet+w48 & 6.5 & 6.2 & 0.760 & 0.984 & **45** \\ Fast-METO (ECCV 22) (Chen et al., 2021) & HRNet+w48 & 5.5 & - & 0.982 & 14 \\ MeshGraphormter (ICCV 21) (Chen et al., 2021) & HRNet+w48 & **5.9** & **6.0** & **0.765** & **0.987** & 4 \\ \hline \hline \end{tabular} \end{table} Table 1. Quantitative evaluation. Comparison with other SOTA methods on the FreiHAND (Kulon et al., 2018) test set. We evaluated them by PA-PJPE, PA-PVPE, F@5/15mm and FPS metrics. We use bold font to indicate the best performance, and use ”_ to represent the second-best performance. \begin{table} \begin{tabular}{c c c c} \hline \hline Dataset & STB [58] & HID (Kulon et al., 2018) \\ \hline Evaluation & AUC \(\uparrow\) & PIPE \(\downarrow\) & AUC \(\uparrow\) & PIPE \\ \hline Zimmer et. al. (CVPR 2017) (Wang et al., 2020) & 0.948 & - & 0.670 & 30.42 \\ Spurr et al. (CVPR 2018) (Wang et al., 2020) & 0.983 & 8.56 & 0.849 & 19.73 \\ Cai et al. (CVPR 2018) (Chen et al., 2021) & 0.994 & - & 0.887 & - \\ Ge et al. (CVPR 2019) (Chen et al., 2021) & 0.998 & 6.37 & 0.920 & - \\ Boukh et al. (CVPR 2019) (Kulon et al., 2018) & - & 9.76 & - & - \\ Zhang et al. (CV 2020) (Chen et al., 2021) & 0.995 & - & 0.901 & - \\ Yang et al. (CVPR 2019) (Chen et al., 2021) & 0.996 & - & 0.943 & 13.14 \\ Zhou et al. (CVPR 2020) (Chen et al., 2021) & 0.989 & - & 0.856 & - \\ Wu et al. (ACM MM 2020) (Wang et al., 2020) & 0.929 & - & 0.929 & - \\ Kulon et al. (CVPR 2020) (Kulon et al., 2021) & - & - & 0.956 & 10.92 \\ Yang et al. (2020) (Chen et al., 2021) & 0.955 & 0.997 & 10.05 & 0.951 & 12.76 \\ Cai et al. (TPAMI 2020) (Chen et al., 2021) & 0.996 & 7.10 & 0.915 & - \\ Zhang et al. (ACM MM 2020) (Chen et al., 2021) & 0.996 & - & - & - \\ Li et al. (AAAA 2021) (Chen et al., 2021) & 0.996 & - & 0.960 & 10.65 \\ Chen et al. (CVPR 2021) (Chen et al., 2021) & - & - & 0.949 & - \\ Zhang et al. (CCV 2021) (Chen et al., 2021) & 0.995 & - & 0.964 & - \\ CycleHand (ACM MM 2022) (Kulon et al., 2021) & - & 7.94 & - & - \\ \hline Ours & 0.999 & 6.35 & 0.965 & 10.58 \\ \hline \hline \end{tabular} \end{table} Table 2. Quantitative evaluation. Comparison with other SOTA methods on the RHD and STB test sets. We use bold font to indicate the best performance, and use ”_ to represent the second-best performance. accuracy of state-of-the-art performance. Note that some methods' performance using supplementary or mixed datasets is not included in our statistics. In addition, to further validate the superior performance of the proposed method, as shown in Table. 
2, we compare our model with several other methods on the STB and RHD datasets. The STB dataset contains a relatively limited distribution of hand poses, making it easier to fit. In the 20-50mm AUC curve evaluation, our method achieved SOTA performance (AUC: **0.999**, PJPE: **6.35mm**). On the other hand, the RHD dataset is a synthetic dataset, with samples featuring highly complex textures and severe self-occlusions. Compared to the currently _best-performing_ methods proposed by Zhang et al. (2018) and Li et al. (2019), our proposed model achieved the highest AUC value (0.960, 0.964 vs. **0.965**) and the lowest PJPE (10.65mm vs. **10.58**mm). For the method proposed by Kulon et al. (2018), which uses the same visual encoder, our model still has a certain advantage (FPS: 60 vs. **77**, AUC: 0.956 vs. **0.965**, PJPE: 10.92 vs. **10.58**). Note that the methods that use supplementary datasets were not included in the statistics. ### Ablation Study **Ablation Study of different structures:** In Table 3, we show the evaluation results of five methods with different model constraints on the FreiHAND validation set. 1) The most significant performance gap is between the method "w/o CLIP loss" and the "Full model" (AUC: 0.763 vs. **0.776**, PA-PJPE: 7.29 vs. **6.88**); 2) The method "w/o Feature Projection" is weaker than the "Full model" in all evaluation metrics (AUC: 0.769 vs. **0.776**, PA-PJPE: 7.11 vs. **6.88**); 3) The performance of the model "w/o Sparse-to-dense" structure (using a coarse-to-fine feature refinement strategy) is slightly behind the "Full model" (AUC: 0.773 vs. **0.776**, PA-PJPE: 6.97 vs. **6.88**); 4) The "baseline" method without any sub-module or loss from 1), 2), and 3) has a significant gap compared to the "Full model" (AUC: 0.758 vs **0.776**, PA-PJPE: 7.47 vs **6.88**). In summary, the supervision based on CLIP can enhance pose-aware representation by introducing text prompts, thereby improving model performance. The proposed Feature Projection and designed Sparse-to-dense structure have a positive impact on model inference. As shown in the right part of Fig. 7, we provide the 3D PCK curve within a given range and the corresponding PA-PJPE values to quantitatively demonstrate these results. In Fig. 8, we qualitatively compare the inference results of some images "w/o CLIP loss" and "with CLIP loss," using red dashed boxes to indicate image details and pose details separately. Taking the second image as an example, the estimation result supervised by CLIP loss can more accurately locate the position of the thumb fingertip on the left side of the middle fingertip. **Ablation Study of batch size in CLIP:** To further clarify the impact of batch size on the proposed CLIP-based model, in Table 4 we demonstrate the influence of different batch sizes on inference performance for the FreiHAND validation set. Compared to the "baseline" method without CLIP loss, inference accuracy positively correlates with the batch size **B**. Specifically, when "B = 16", the model performance has a slight difference compared to that of "B = 32" (AUC: 0.772 vs. **0.776**, PA-PJPE: 7.01 vs. **6.88**). We attribute this to the fact that smaller batch sizes cannot effectively connect text prompts and pose-aware features, as they do not provide enough negative samples for the contrastive learning paradigm.
Regrettably, we cannot set a larger batch size to further verify the impact of this factor due to limited computing resources. As depicted in the left part of Fig. 7, we provide the 3D PCK curve within a given range and the corresponding PA-PJPE values to quantitatively demonstrate the model performance under different batch size settings. **Ablation Study of text prompts generation:** In addition, we conducted ablation experiments to verify the effectiveness of different text prompt generation methods on the model. In Table. 5, we compared three methods' performance (10 joints, 15 joints, and 21 joints) of selecting hand joints and generating corresponding text prompts. Compared to the method of selecting only 10 joints "N = 10" and converting them into text prompts, setting all 21 joints "N = 21" can encode a richer spatial distribution of hand joints, thereby guiding pose-aware features to more accurately locate hand joints detailed cues in 3D space. ## 5. Conclusion This paper introduces CLIP-Hand3D, the first successful integration of text representations containing advanced human knowledge into 3D hand recovery. The proposed model achieves state-of-the-art performance on three public datasets, compared to methods employing similar-scale visual encoders, while significantly increasing the inference speed by a large margin (\(\approx\)28.3%). Specifically, The Text Feature Generation module converts the joint distribution order concealed within pose labels into text prompts and further matches pose-aware features with text representations through contrastive learning, improving the model performance by approximately 8.1%. Additionally, the lightweight Mesh Regressor not only incorporates position encodings of varying scales but also queries joint-related semantic cues from the latent feature pyramid. We aim to delve deeper into the relationship between text and vision in the future for more flexible hand image understanding.
Contrastive Language-Image Pre-training (CLIP) has started to appear in many computer vision tasks and has achieved promising performance. However, whether CLIP can be generalized to 3D hand pose estimation remains underexplored, because bridging text prompts with pose-aware features poses significant challenges due to the discrete nature of joint positions in 3D space. In this paper, we make one of the first attempts to propose a novel 3D hand pose estimator from monocular images, dubbed CLIP-Hand3D, which successfully bridges text prompts and irregular, detailed pose distributions. In particular, the distribution order of the hand joints along the various directions of 3D space is derived from the pose labels, and the corresponding text prompts are generated from it. At the same time, the 21 joints in 3D space are retrieved, and their spatial distribution (x, y,
2309.10531
Proposal for an Organic Web, The missing link between the Web and the Semantic Web, Part 1
A huge amount of information is produced in digital form. The Semantic Web stems from the realisation that dealing efficiently with this production requires getting better at interlinking digital informational resources together. Its focus is on linking data. Linking data isn't enough. We need to provide infrastructural support for linking all sorts of informational resources including resources whose understanding and fine interlinking requires domain-specific human expertise. At times when many problems scale to planetary dimensions, it is essential to scale coordination of information processing and information production, without giving up on expertise and depth of analysis, nor forcing languages and formalisms onto thinkers, decision-makers and innovators that are only suitable to some forms of intelligence. This article makes a proposal in this direction and in line with the idea of interlinking championed by the Semantic Web.
Mathilde Noual
2023-09-19T11:17:32
http://arxiv.org/abs/2309.10531v1
# Proposal for an Organic Web ###### Abstract A huge amount of information is produced in digital form. The Semantic Web stems from the realisation that dealing efficiently with this production requires getting better at interlinking digital informational resources together [4]. Its focus is on linking _data_. Linking data isn't enough. Not all information produced is intended to be processed _as data_ per se. Most of the digital content produced today is unstructured (informal) text whose progressive semantics are only intended to be intelligible to humans. The documents containing the information can themselves be interlinked as if they were data. But links between granular documents then only convey a shallow pre-defined semantics that ignores the rich progressive semantics expressed inside the documents. Dealing with traditional documents as if they were data, is bound to make suboptimal use of their contents, and arguably remains of limited utility. We need to provide infrastructural support for linking all sorts of informational resources including resources whose understanding and fine interlinking requires domain-specific human expertise. At times when many problems scale to planetary dimensions, it is essential to scale coordination of information processing and information production, without giving up on expertise and depth of analysis, nor forcing languages and formalisms onto thinkers, decision-makers and innovators that are only suitable to some forms of intelligence. I make a proposal in this direction and in line with the idea of interlinking championed by the Semantic Web. This proposal is the result of a compilation of ideas contributed by the scientific and entrepreneur communities over several years of discussions. **Keywords:** Digital information system/network, Continual improvement, Global redundancy management, Collective documentation, Collective intelligence, Slow-first collaboration, Datamodel, Scientific research infrastructure, Knowledge management/engineering, Local-first software, Digital sobriety, Crowdsourced analytical reasoning, ## 1 Introduction ### Motivation Most of the digital content produced today is "unstructured", meaning its semantics is not understood without the involvement of a human mind. Natural language processing techniques only extract a tiny proportion of the semantics of all unstructured content produced by humans (and now machines). Huge quantities of unstructured digital content are a problem. More unstructured content means more work for humans. The risks are: (i) that content be made suboptimal use of, by both humans and machines, and (ii) that content stand in the way of humans communicating and working efficiently with each other. A primary motivation of my proposal is to address this problem and deal with unstructured digital content _early_ in its life cycle. My aim is to help preempt the production of digital content with low capacity to positively impact on and empower human enterprises and with high capacity to stand in their way. This requires means to gauge the value of new pieces of information against the body of pre-existing information which Vannevar Bush famously referred to as "_the record_" [10], as I do here too. It isn't enough to evaluate pieces of information individually, we also need to take a step back and watch over humanity's global informational commons. The actual digitalisation of knowledge and of knowledge work is an opportunity to materialise a well delineated version of the record. 
But amassing quantities of digital content is not enough to make the record manageable. Pieces of information in the record must be related with one another _meaningfully_ (cf Suppl. Mat. B.2). It isn't enough to have all theorems about, say, Boolean Automata Networks digitally recorded. To ensure that we make good use of all those theorems, and that we don't add repetitions of knowledge that they already capture, we must also know, and document, how those theorems relate to each other: _which ones are generalisations of which other ones, which ones are used in the proofs of others, which ones are equivalent to each other, which ones contradict each other..._ I contend that the record should honour the highest level of expertise of the humans who contribute knowledge to it. More than just document the existence and nature of epistemic relations between theorems and other pieces of information, we should also endeavour to _highlight_ the finest known details of those relations - e.g. _how does theorem \(t_{1}\) generalise theorem \(t_{2}\), that is, what features of Boolean Automata Networks does the generalisation relation operate on: the topology of the Boolean Automata Networks? their dynamics? In what way is formalism \(f_{1}\) equivalent to formalism \(f_{2}\): mathematically? semantically? philosophically? What question do two results provide contradicting answers to?..._ My proposal relies on design biases. I document them in yellow boxes in the present article. The supplementary material B accompanying this article, presents further fundamental philosophical biases underlying my proposal. Design Bias #1: **Smart Networking** I wish to support an informational network whose _structure_ reflects as best as possible the depth of human expertise across the diversity of informational domains that get documented in the network. Design Bias 1 requires that expertise no longer be documented only _inside_ documents at the endpoints of the informational network's links. The links themselves should reflect expert insight. This disagrees with the documentation of links by/for outsiders. My proposal devises a solution to manage the record both locally and globally. It aims (1) to favour the systematic stripping down of new digital content to the bare, useful minimum, defined in terms of what information the digital record already contains, and (2) to promote care and detail in linking new digital content to humanity's body of digital content. ### Information As suggested above, the focus of this proposal is on _unstructured_ information meant to be consumed by humans rather than machines. The proposal is nonetheless extended to deal marginally with the case of structured data. This extension will be presented in a subsequent article. Until then, our focus is more specifically on textual information, for instance: pieces of science, of philosophical arguments, of geopolitical discussions, polemics, recipes, tutorials, stories... The solution proposed in this paper may also to some extent accommodate poems and videos, but I leave this less argumentative kind of content aside for the moment to concentrate on textual information that invites discussion, nuance, questioning, updating, reformulating... A future extension of the solution to at least pictures (e.g. photos of whiteboards) will be important in my opinion. 
The solution should ultimately accommodate the diversity of successful informational work practices and not impose cumbersome documentation efforts on thinkers, especially when less interfering documentation alternatives can be put into place (see Design Bias 2). It remains that the extension of our solution to non-textual media can be taken care of later once I have mastered the solution for textual information. Design Bias #2: **Experts at work know best / Support what already works** I wish to support humans in their practice of informational work. Many information workers, i.e., thinkers (e.g. scientist researchers) are already equipped to doing worthwhile informational work and are doing it well. Different thinkers operate within different epistemic cultures, and their chains of thought follow different series of landmarks. It is essential for "informational welfare" (progress and quality of information) that existing informational know-how, cultures and habits be respected. I wish to dedicate a solution to supporting what works well already1. And in particular I wish to propose a solution that preserves the focus of experts on what they are successfully contributing. Footnote 1: I believe it is far more important and safe to support what already functions well than to attempt relieving pain points of informational workers. Indeed, solutions brought to informational workers are bound to emphasise certain of their activities over others. Pain points of informational workers that are _at the core_ of informational work and that outsiders don’t understand (e.g. difficulty in proving a conjecture) are best dealt with by the informational workers themselves. All the other pain points relate to activities that are _marginal_ to the informational work (e.g. self-marketing activities). Arguably, mitigating those pain points risks facilitating and thereby emphasising those marginal activities at the expense of the core informational work. Dynamics and balance of experts’ activities should be protected from generic technologies built by outsiders. Design Bias 2 means the solution aimed at here, is _not_ primarily a solution in support of communication. The Supplementary Material B accompanying the present article emphasises a distinction between supporting communication and supporting information (cf Suppl. Mat. B.19). The present proposal emerged from a context of transdisciplinary scientific research involving formal sciences and life sciences. Its motivating intention isn't so much to support the production of formal documents that help communicate ideas. It rather is to support the stage of informational work at which ideas are shaped with words, sometimes upstream from the work of formal documentation. The emphasis is _less_ on supporting use and reuse of informational resources (contrary to the Web Annotation Data Model[53]), than it is on supporting _the continual improvement, renewal and obsolescence of informational resources_. I expect enhanced use and reuse to naturally follow from smart networking (as well as from architectural choices presented in Suppl. Mat. A). Design Bias #3: **Continual Improvement and Updating of Information** I consider information as a process, something to do, until it no longer is relevant. A piece of information is neither definite nor free-standing. Unlike data is not a given and unlike knowledge it is not established. 
It is _at its best_ when it is subjected to continual improvement: when it is getting nuanced, detailed, challenged, (re)contextualised, updated... to follow the world it describes as it changes. Eventually a piece of information becomes obsolete. I wish to support information _at its best_ and facilitate the obsolescence of unprocessable information. Notably, Design Bias 3 disagrees with ridding the record of low quality information. Low quality information is typically information that can be improved. See Suppl. Mat. B.16. Following Design Bias 3 and its emphasis on the _dynamism_ of information, the solution I propose to build is also not optimized for automated reasoning and inference, which require _settling_ on some knowledge base. The Supplementary Material B further details assumed characteristics of the notion of information, providing epistemic foundation to this proposed solution. ### Epistemic Glue A base assumption of the proposed solution is that different pieces of information may be produced by different humans. The solution is to support _collective_ documentation. Appreciating the way in which individual pieces of information relate to each other is paramount to this project of organising the record and managing redundancy in it. Consider two individual pieces of information, \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\). A legitimate question is: How are \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\) related? Possible answers are: * _They are not related at all._ * _They contradict each other._ * _They are about the same topic_ \(\mathcal{T}\)_._ * _They use the same term_ \(\mathcal{T}\)_._ * _They refer to the same object or concept_ \(x\)_._ * _They denote the same object or concept_ \(x\)_._ * _They imply the same consequences._ * _They answer the same question._ * _They appear in the same book._ There is a great diversity of ways in which two independent pieces of information might relate to each other. Obviously not all relations are possible between any two pieces of information. And some relations are harder to document than others - e.g. it is harder to say that \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\) refer to the same concept than to say that they use the same term. Note also that \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\) may be related in more than one way, and that a relation between \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\) is itself a piece of information. I use the term "glue" to refer to information that relates pieces of information together. Because the record is a collective document, glue is necessary for it to have structure. Without glue, the record would merely be a collection of independent resources, possibly organised into some sub-collections and categories defined by an arbitrary (central) entity. An important desirable property of glue is that it be _generic_ so the structure it gives to the record be _domain-agnostic_. For instance, saying that \(\mathcal{I}_{1}\) answers the question \(\mathcal{I}_{2}\) is a generic, domain-agnostic way of gluing \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\) together. Saying that \(\mathcal{I}_{1}\Longrightarrow\mathcal{I}_{2}\) (\(\mathcal{I}_{1}\) mathematically implies \(\mathcal{I}_{2}\) as in \(\neg\mathcal{I}_{1}\vee\mathcal{I}_{2}\) holds) is not. Not all relations provide the same "epistemic depth" of glue. 
For instance, understanding that \(\mathcal{I}_{1}\) implies \(\mathcal{I}_{2}\) is epistemically deeper than understanding that \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\) appear in the same book. Generally, let \(\mathcal{R}\) and \(\mathcal{R}^{\prime}\) be two relations (like the ones listed above) both between pieces of information \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\). Informally, we say that \(\mathcal{R}\) is _epistemically deeper_ than \(\mathcal{R}^{\prime}\) if understanding \(\mathcal{R}\)_leads to_ more understanding of \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\) than does understanding \(\mathcal{R}^{\prime}\), or if \(\mathcal{R}\)_comes from_ more understanding of \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\). The glue we're interested here must be _smart_ (cf Design Bias 1). It must be generic without being shallow. We want to materialise a _smartly_ networked version of the digital record. So we are interested in emphasising relations that are epistemically deep. However, the digital record is a _collective_ work. A diversity of epistemic cultures, approaches, formalisms and languages need to be accommodated. Our solution must support the provision of glue in a diversity of formalisms, languages _etc_. For experts to contribute glue, glue should not be tedious to contribute. Following Design Bias 2, a legal scientist who understands that \(\mathcal{I}_{1}\) legally implies \(\mathcal{I}_{2}\) should _not_ have to understand the mathematical notion of implication (nor any other notion of implication for that matter) in order to document the legal relation between \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\). Similarly, when documenting a mathematical relation between two theorems \(T_{1}\) and \(T_{2}\), a mathematician who knows how theorem \(T_{1}\) generalises theorem \(T_{2}\) should _not_ be required to take the approach of a formal ontology modeller3. An expertise in \(\mathcal{R}\), \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\) (all three are pieces of information) should be enough to document \(\mathcal{R}\) between \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\). No understanding of the repercussions of \(\mathcal{R}\) beyond \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\) should be required. In short, from the point of view of experts at work, documenting content as part of our smart networking solution should not be significantly different from documenting content today with current digital technologies that don't support smart networking (e.g. text editors). There are many ways of "gluing" informational resources together (cf bullet points above). A popular way involves ontological and taxonomic commitments (cf Suppl. Mat. B.15). For instance, two independent informational resources A and B (e.g. two statements, or two articles) may be related based on the fact that they both assume that a certain object 'Bob' exists and they both agree on what Bob really refers to, what kind of object it is, e.g. a 'person' which is a type of'mammal' with a 'phone number', equal to 01-23-45-67-89. If resources A and B agree on the existence and meaning of 'Bob', there is deep epistemic glue between them in the form of shared rigorous semantics. Checking that the 'Bob' of resource A is exactly the same as the 'Bob' of resource B can be very demanding. Often, correspondences across ontologies and taxonomies, between concepts used to define an object (e.g. person, mammal, phone number) are not trivial to establish [44]. 
I propose to avoid this difficulty altogether and _not_ restrict the glue we consider here in the way formal ontologies and taxonomies restrict it to semantic relations. A founding hypothesis of my proposal is that good information feeds on a diversity of epistemic glue and that the glue itself must be challengeable like any other piece of information is. Thus, my proposal neither assumes, imposes nor even aims at semantic homogeneity. I propose to allow for ontological and taxonomic commitments to be documented explicitly, questioned and discussed like any other piece of information. Even without a guarantee that resources A and B rigorously agree on the existence and meaning of 'Bob', A and B may still be glued together by the _explicit (motivated) assumption_ that they do agree, or simply by a question asking whether they agree4. An obvious consequence is that the digital version of the record that I propose to structure with glue, isn't expected to have global semantic consistency. It can't serve as a functional representation of the world5. The aim here is rather to support "safe passage" between different documented perspectives on the world. Footnote 4: This suggests an alternative to having to build trust in the content of the record. Rather than try to make the information trustworthy, we encourage anyone who has a doubt to express that doubt so that it can be addressed. Of course, for this, efficient redundancy management is key. Footnote 5: This is desirable. A model doesn’t scale well. A collectively built, all-encompassing model would have limited use for the huge majority of humans who continually need to change perspectives, deal with uncertainty, ambiguity _etc._ When gluing a new piece of information \(\mathcal{I}\) to existing pieces of information \(\mathcal{I}_{1},\mathcal{I}_{2},\ldots\mathcal{I}_{n}\) already in the record, \(\mathcal{I}\) should gain in precision because of the epistemically relevant context provided by the glue to \(\mathcal{I}_{1},\mathcal{I}_{2},\ldots\mathcal{I}_{n}\). Conversely, \(\mathcal{I}_{1},\mathcal{I}_{2},\ldots\mathcal{I}_{n}\) should also gain from being completed, nuanced, circumscribed _etc_ by the glue to \(\mathcal{I}\). Otherwise, the relations between \(\mathcal{I}\) and \(\mathcal{I}_{1},\mathcal{I}_{2},\ldots\mathcal{I}_{n}\) should highlight the inappropriateness of \(\mathcal{I}\). Glue generally gives information on information6. More glue should make the record more manageable because it should increase the ease with which we can epistemically relate any two pieces of information in it and thus jointly deal with them. It should help take into account pieces of information that come from heterogeneous sources. Individual pieces of information that make sense individually shouldn't make less sense when glued to other pieces of the record. They shouldn't degrade the record either. On the contrary, more information should be beneficial to the record, or it should be easy to see that it isn't. Footnote 6: _Not_ meta-information – of Suppl. Mat. B.3 ### There is a missing link between the Web and the Semantic Web. Traditional human-centric information networks - e.g. the Web, the network of inter-cited academic publications - are not smart epistemically structured networks. They are _infrastructural_ networks taking care of the logistics of humans' informational activities. Information in these networks is mostly _inside_ the documents at the endpoints of the network links. The links between documents are epistemically shallow. 
The Web is missing a smart epistemic backbone. The Semantic Web proposes to materialise a semantic one, in the aim of opening the wealth of Web content up to automatic processing. It defines standards like RDF (the Resource Description Framework) [14, 36] that support strong semantic gluing of informational resources. The gluing operates at a finer granularity than the Web's cross-document hyperlinking. The shift in granularity is consequential: it unlocks possibilities of collective documentation and enhanced resource sharing (data can be reused in multiple RDF triples by multiple authors)7. Footnote 7: RDF is a standard data model that expresses information by way of ’semantic triples”. A semantic triple consists in a subject, a predicate and an object which altogether express a statement. Each of the three components of a triple are individually addressable. Atomic pieces of information can thus be reused in multiple triples by multiple authors. “Bob” can be the subject and object of multiple statements. ”enjoys” can be the predicate of multiple statements, linking together multiple resources acting as subjects and objects. The Semantic Web only concerns a minor proportion of the information that is of interest to humans. Its standards are designed to make information machine-understandable in order to support automatic reasoning over it. Most information that is of interest to humans is expressed by humans, for humans8. It doesn't need to be made machine-understandable and is therefore usually not worth being formalised into SW standards. When a researcher proves a new theorem \(T\) there rarely is a pressing incentive, if any, to make a machine understand \(T\). There is usually one for the researcher's human peers to understand \(T\). The sort of information that the Semantic Web is primarily designed to deal with is _metadata_ (cf Suppl. Mat. B.3). Metadata is essential for transferring rudimentary semantics over to the machines [24]. But unlike the expression and proof of a theorem \(T\), metadata describing \(T\) (e.g. the authorship, the creation date of \(T\)) is not reflective of the depth and nuance of understanding that domain experts have of \(T\). So we can't make a smart network out of metadata. Also, the SW standards are designed to constructively collect information as part of coherent ontological models. But our interest here is supporting informational progress (cf Design Bias 3), which can happen in a variety of ways, including through the deconstruction of established models. Human produced information expresses doubts, questions, hypotheses _etc_ and sometimes challenges the best models actually in service. A web looser than the Semantic Web is needed to interlink and structure this information. Footnote 8: The common denominator for humans is natural language, not any specific formal language. Even when humans (e.g. theoretical computer scientists at work) endeavour to produce highly formalised information, they spend most of their work time navigating _between_ levels of formalisation, rather than thinking committedly in one particular formalism. As most humans don’t speak in rigorous RDF triples, populating the Semantic Web with human produced information would require intermediary entities savvy of SW standards. The translation of a piece of information into SW standards is rarely worth the cost and effort. Arguably, there also is a danger in handing the translation of expert information over to non-specialists. 
Profound domain-specific knowledge is best documented first-hand by the domain experts. I propose to materialise **an intermediary web** - namely, **the "MMM"** - geared towards supporting human reasoning and its organised documentation9. MMM stands for **Mutual Mutable Medium**, meaning _collective dynamic document_ or _record_10.

Footnote 9: The Semantic Web vision is to support automated reasoning by providing machines with collectively built knowledge in the right formalisation. Arguably, a reasonable intermediary step to organising information for machines is to do it for humans (cf Suppl. Mat. B.12).

Footnote 10: "Mutual" in MMM replaces "Worldwide" in WWW. The idea is to go from a paradigm where information is meant to be distributed to everyone, to a paradigm where information emerges from small scale invested relationships.

The intermediary MMM web is to **co-exist with the original Web**, possibly even interface with it. Theoretically, anything expressible in text on the Web can be expressed on the MMM with no obligation for translation or reformulation. In practice, software interfaces may exist (for instance Web browser extensions comparable to hypothes.is [46, 57], see also Fig. 23 in Suppl. Mat. A.4) to copy or to reference information from the Web onto the MMM, where information benefits from epistemic interlinking.

The intermediary MMM web is also to **relay the Semantic Web**. Without serving the same purpose as the Semantic Web nor conveying the same kind of content, the MMM is to offer some compatibility with the standards of the Semantic Web.

| | **The Web** | **The Semantic Web** |
|---|---|---|
| Is designed primarily for... | humans | machines |
| Allows anyone to contribute without technical skills | ✓ | ✗ |
| Doesn't limit expressivity: accepts any textual information expressed in any formalism or language, at any level of formalisation, including healthily challenging and nuancing information | ✓ | ✗ |
| Supports epistemic glue in between individual resources | ✗ (hyperlinks only) | ✗ (_semantic_ glue only) |
| Is designed to network any granularity of informational resources | ✗ (mostly documents) | ✗ |
| Natively allows to annotate any granularity of information | ✗ | ✗ (cf RDFS comments) |
| Natively allows to interlink annotations like any other informational resource | ✗ | ✗ |

Table 1: There is room for an intermediary web whose content, like the Web's, is primarily for humans to consume, and whose structure, like the Semantic Web's, is "smart" (cf Design Bias 1) and "epistemically deep" (cf Section 1.3).

A mapping between the MMM and the RDF data models will be provided in a follow-up article. Once mapped into the MMM, data loses some of its ability to be efficiently processed by machines. Automatically populating the MMM with Semantic Web resources may nonetheless have some advantages. It may favour the reuse of resources because it brings the resources to humans, exposing them to human scrutiny and annotations, and making humans systematically aware of the existing terminologies and taxonomies that are epistemically relevant to their work. It may help mitigate the amount of redundant and diverging new terminology. Conversely, interfacing the Semantic Web with the MMM can enable more systematic generation and update of formal ontologies.
The MMM data model can be leveraged in the capacity of "proto-taxonomical" structure for smart systematic documenting and informing of ontology design decisions [38, 48].

### Requirements

To materialise a version of the digital record that has the desirable properties discussed in Section 1, I propose to organise information as per a pre-defined data model. The data model is called the **MMM data model** or **MMM format**. It is formally introduced in the next section. Here let us first list some basic requirements for the MMM format.

N.B.: This proposal is _not_ a solution for organising already archived information. It aims instead at supporting humans as they work on updating and renewing information. In other terms, the solution is to resemble an upgrade on the concept of whiteboard more than it is to resemble an upgrade on the concept of library.

1. **Inclusivity and informalism-friendliness**: It must be possible and easy for a human user to contribute anything expressible as text in natural language. We shouldn't have to bother users with unwelcome formalisation exercises. It must be possible to contribute content to the smart network without knowledge of any formal language.
2. **Epistemic glue**: It must be easy for users to document links between their contributions and pre-existing ones. It must be possible for them to make explicit the epistemic relationship between them.
3. **Recursive annotation**: It must be possible to question, nuance, challenge, detail any contribution. Generally, it must be possible to comment on/annotate any contribution, including annotations themselves, as well as links between contributions and links between annotations.
4. **Minimal metadata**: The amount of metadata associated with each contribution must be kept minimal, and for strictly administrative purposes (cf Suppl. Mat. B.3). Metadata should not be needed to assess the quality of contributions.
5. **Reformulation**: It must be possible to contribute to the record by adding a reformulation of an existing contribution (this proposal does _not_ aim at finding a unique canonical expression of each piece of information).
6. **Intelligible collective documentation**: It must be possible for independent users to contribute to the record without consulting each other, even when they document closely related information. The record should not decrease in quality. It must be possible for the set of their contributions to constitute an intelligible collective document whose smart structure allows navigating meaningfully from contribution to contribution.
7. **Contribution types**: It must be easy to distinguish between different types of contributions. In particular it must be easy to distinguish contributions that are questions from contributions that are not (cf Suppl. Mat. B.17). Generally:
   a. The semantics of contribution types must be intuitive. It must be easy for contributors to know which type to assign to their contributions.
   b. The number of different contribution types must be kept small. To assign a type to their contributions, users must not need to learn a long list of types.
   c. The set of different contribution types must be stable. Types must be pre-defined11. Users must not need to keep up to speed with evolving definitions.

Footnote 11: This means that we need to have a good set of types from the start to limit the need for future evolutions. I claim to have a basis for this starting set (cf Section 2).
Methodical and diverse testing of this set needs to be performed beyond what has already been accomplished before writing this proposal.

   d. The semantics of contribution types must be generic (domain-independent). A contribution's type must convey the basic epistemic role or purpose of the contribution (e.g. is the contribution a question, a statement or something else?). The genericness of types must maintain bridges across domains of expertise.
   e. The semantics of contribution types must be loose. There must be room for interpretation and specification of the meaning of a type. It must be possible for users to make slightly different uses of the same type12. Users from different domains must still use the same contribution type in _relatable_ ways13 (cf requirement R7d). It must thus be possible for contributors to _narrow down_ the generic epistemic purpose conveyed by a type. For instance, having assigned the type "question" to a contribution, it must be possible for the contributor to specify that the contribution is a _rhetorical_ question or that it is a _confirmation seeking_ question.

Footnote 12: Just like how biologists and mathematicians slightly diverge on what kind of information goes in an article introduction. But they can still read each others' articles without being thrown off by what they find in the introduction.

   f. It must be easy for contributors to assign a _default_ type to their contributions when they don't want to bother choosing a more epistemically meaningful type. It must be possible for contributors to use our solution without leveraging its structuring opportunities.
   g. It must be easy to highlight the ontological commitments underlying a contribution. For instance the question "_What are genes made of?_" makes the tacit ontological assumption that genes exist (cf Suppl. Mat. B.18). It must be easy to make a contribution whose purpose is to highlight this about this question.

Because of requirement R1 and also because of Design Bias 2 ("Experts at work know best"), the MMM data model must be very flexible by design. This means that there may often be several ways to document the same piece of information in MMM format. As a consequence, the definition of the MMM format needs to be relayed by a collection of _best practices for users_. Some are mentioned below. Best practices may be promoted through the design of MMM editors and other tools for contributing content in MMM format (cf Suppl. Mat. A.4).

The primary version of the MMM data model introduced in Section 2 is expected to need some minor tweaking. It is essential that it remain small and simple. The MMM format must strike a balance between freedom and constraint in the exercise of manual documentation (cf Suppl. Mat. B.12). Possible future adaptations of the MMM format should take care to preserve that balance. The definition of the MMM data model is exclusively motivated by _practical_ reasons. There is no epistemological theory behind its definition. Any modification to the MMM data model must be done circumspectly, to address practical needs only (rather than to be exhaustive and demonstrate a certain form of coherence).

## 2 Definition of the MMM format

In the sequel, I use the symbol \(\mathbb{S}\) to denote the set of strings of characters possibly including markdown symbols, and I use the symbol \(\mathbb{D}\) to denote the set of dates.

#### 2.0.1 Landscape

An MMM network, a.k.a. "**landscape**", consists of objects called "**landmarks**".
Exactly one of those objects is a special landmark called the "**pit**", denoted by \(\bot\). All other landmarks in a landscape **N** are **contributions** \(c\in\textbf{C}\) belonging to the set \(\textbf{C}\subset\textbf{N}=\textbf{C}\cup\{\bot\}\) of contributions. Contributions have attributes that I list in the next paragraphs, §2.1.1 - §2.2.5.

### Main Contribution Attributes

Contributions convey information through their three main attributes, namely, their label, their type, and their tags.

#### 2.1.1 Labels

Contributions \(c\in\mathbf{C}\) have **labels**. For now, we consider that labels are taken from the set \(\mathbb{S}\) of character strings of arbitrary length. Labels can be empty. And some types of contributions (namely bidirectional edges) can have multiple labels. Labels satisfy requirement R1. There is no limit on the length of a contribution label. An entire book could be copied into the label of a contribution.

_Best practices for users_: Keep labels short. Prefer to decompose a long text into multiple contribution labels.

#### 2.1.2 Types

A contribution has an abstract type and a concrete type (a.k.a. a type and a subtype). There are five different sets of **abstract contribution types**, namely _(i)_ the set \(\mathbf{V}\) of vertex types (cf §2.3.2 below), _(ii)_ the set \(\mathbf{P}\) of pen types (cf §2.3.7), and the set \(\mathbf{E}\) of edge types (cf §2.3.3) which comprises _(iii)_ the set \(\mathbf{E_{A}}\) of adirectional edge types (cf §2.3.4), _(iv)_ the set \(\mathbf{E_{U}}\) of unidirectional edge types (cf §2.3.5), and _(v)_ the set \(\mathbf{E_{B}}\) of bidirectional edge types (cf §2.3.6). The set of abstract types is denoted \(\mathbf{T^{AB}}\). It is equal to \(\mathbf{T^{AB}}=\mathbf{V}\cup\mathbf{P}\cup\mathbf{E}=\mathbf{V}\cup\mathbf{P}\cup\mathbf{E_{A}}\cup\mathbf{E_{U}}\cup\mathbf{E_{B}}\).

The abstract type of a contribution is specified by a **concrete type**. Examples of concrete types of vertex contributions are the question and narrative subtypes. Examples of concrete types of edges are the pertains and equates subtypes. The different concrete types are formally introduced in subsequent paragraphs of this section: §2.3.2 - §2.3.7. Types generally satisfy requirement R7 about contribution types. Abstract types are however merely infrastructural while concrete types convey structuring epistemic information in agreement with requirements R7d - R7f (satisfaction of requirement R7a is to be further supported by UI application code). Importantly, despite concrete types being epistemically structuring, plenty of room is left for most of them (especially concrete edge types) to be interpreted with some flexibility (cf the interpretations.graphml file [42]). The set of concrete types is denoted \(\mathbf{T^{CO}}\). It is equal to \(\mathbf{T^{CO}}=\mathbf{T_{V}}\cup\mathbf{T_{P}}\cup\mathbf{T_{E}}=\mathbf{T_{V}}\cup\mathbf{T_{P}}\cup\mathbf{T_{A}}\cup\mathbf{T_{U}}\cup\mathbf{T_{B}}\).

#### 2.1.3 Tags

Contributions \(c\in\mathbf{C}\) are associated with a possibly empty **set of tags**. The tag set associated with a contribution is often empty. By convention, for now we expect that a tag is a string that starts with the character '@'. We denote by \(\mathbb{S}_{@}\) the set of character strings from which tags are taken. Like labels, tags are for enriching concrete types.
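To make the division of labour between these three attributes concrete, here is a minimal sketch, in Python, of how an MMM editor might internally hold a contribution's informational payload. The class and field names are illustrative assumptions of mine, not part of the MMM format; the normative definitions remain the ones given in this section.

```python
from dataclasses import dataclass

# Illustrative sketch only: the class and field names are assumptions,
# not part of the MMM format definition.
@dataclass(frozen=True)
class Payload:
    label: str                          # a string from S, possibly containing markdown
    concrete_type: str                  # e.g. "question", "narrative", "pertains", "equates"
    tags: frozenset = frozenset()       # strings starting with '@', refining the concrete type

# A question vertex payload carrying no tags...
q = Payload(label="What colour is the sky?", concrete_type="question")

# ...and a data payload whose tag defers part of its meaning to an external convention.
flag = Payload(label="true", concrete_type="data", tags=frozenset({"@boolean"}))
```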
Tags are used instead of or in addition to labels when the full semantics of a contribution is specified somewhere other than in the contribution's label and concrete type, e.g. when it is specified in an external resource (cf Fig. 1). Tags are typically URIs or standardised keywords, e.g. the @yes and @no tags typically specify the meaning of an answers edge incoming to a closed question contribution (cf Fig. 2). Pervasive tags may eventually inform an update of the set of predefined concrete MMM contribution types - provided they represent fundamental epistemic concepts that are not already represented in the set of predefined concrete types. It however is essential to keep the set of concrete types (listed in the sequel) small to satisfy requirement R7.

### Metadata Attributes

The remaining five MMM landmark attributes that we introduce next constitute "**metadata**". For the sake of simplicity, except for the identifier, we will ignore metadata attributes in the definition and in the examples of MMM contributions given later on. Metadata attributes are kept minimal in agreement with requirement R4.

Figure 1: A relatesTo edge (cf §2.3.5) labelled "is a" and tagged with the URI of the RDF schema definition of "RDF type" [36].

Figure 2: A closed question and two statement answers, each linked to the question by an edge of type answers appropriately tagged.

#### 2.2.1 Identifiers

All landmarks \(x\in\mathbf{N}\) in a landscape \(\mathbf{N}=\mathbf{C}\cup\{\bot\}\) have a unique **identifier** taken in a certain set of identifiers denoted here by \(\mathbf{I}\). Those could be standard (namespace based) UUIDs (universally unique identifiers). A hash function applied to just the label and type of a contribution (see §2.1.1 and §2.1.2 above) may facilitate the identification of duplicates (cf §3.1.9). Involving a user identifier in the hash may also be required to support the digital signature of contributions. Future research will explore how to define identifiers of MMM contributions so as to facilitate search in the MMM space. Whatever the landscape \(\mathbf{N}\), the identifier of the pit is always the same because there is only one pit. The pit is the first collective landmark of the intermediary MMM web. Let us assume for now, in the examples below, that \(\mathbf{I}=\mathbb{N}\) and that the identifier of the pit is \(0\).

#### 2.2.2 Authorship

Contributions are associated with a possibly empty, grow-only **set of authorships** in the power set \(\mathcal{P}(\mathbf{A})\) of the set \(\mathbf{A}\) of authorships. Authorships are taken from the set \(\mathbf{A}=\mathcal{P}(\mathbb{S})\times\mathbb{D}\). An _authorship_ \(a\in\mathbf{A}\) is a pair comprised of a list of author names and a date. For instance the following is an authorship:

\[\big(\underbrace{(\text{``Jane Smith''},\text{``Ivy Li''},\text{``Amari Doe''})}_{\text{team of authors}},\ \underbrace{13/08/2023}_{\text{timestamp}}\big)\in\mathbf{A}.\]

Contributions can be assigned several authorships, and usually they must be assigned at least one. The following example of an authorship set contains three authorships:

\[\big\{\underbrace{(\text{``Anne Martin''})}_{\text{one authorship}},\ \underbrace{(\text{``Jane Smith''},\text{``Ivy Li''},\text{``Amari Doe''})}_{\text{another authorship}},\ \underbrace{(\text{``Al B.''})}_{\text{another}}\big\}\in\mathcal{P}(\mathbf{A}).\]

_Best practices for users:_ Authorships are primarily to recognise the humans who have _recorded_ a contribution.
When a contribution is a quote from the work of a different human, use additional contributions (cf §3.1.2) to make the source explicit. The late Alan Turing, for instance, should never appear as part of an authorship of an MMM contribution. Recognition of intellectual activity is mentioned in §3.3.1 below. The pit landmark has no authorship set attribute.

#### 2.2.3 Status

As will be detailed in the sequel, MMM landscapes are to be collectively built and distributed. Individual MMM contributions are stored locally on users' machines. They may also be shared. A contribution \(c\in\mathbf{C}\) has a **status**. By default, the status of a contribution is private and local. If \(c\) is private, a possibility is that \(c\) is privately shared with one or several groups of users who have access right to \(c\). A private contribution that is shared with groups \(g_{1},\ldots,g_{n}\) of users has its status attribute set to \(\text{sharedWith:}g_{1},\ldots,g_{n};R\) where \(R\) is the reference to a sharing contract, cf §3.3.4. If \(c\) is private and shared with no-one, it has default status local. A contribution can also be public, meaning that _any_ user has access right to it. Public contributions have their status attribute set to public. When a contribution's status attribute is set to public, its label, type and tag attributes become immutable.

Let \(s\) and \(s^{\prime}\) be two contribution statuses. We define an order \(\preceq\) on contribution statuses. We write \(s\preceq s^{\prime}\) when either one of the following conditions is satisfied:

* \(s=\)local, or
* \(s^{\prime}=\)public, or
* \(s=\)sharedWith:\(g_{1},\ldots,g_{n};R\), \(s^{\prime}=\)sharedWith:\(g_{1}^{\prime},\ldots,g_{m}^{\prime};R^{\prime}\), \(\bigcup g_{i}\subseteq\bigcup g_{i}^{\prime}\) and \(R^{\prime}\) is no more constraining than \(R\).

To downgrade (resp. upgrade) status \(s\) is to replace \(s\) with status \(s^{\prime}\) where \(s^{\prime}\neq s\) and \(s^{\prime}\preceq s\) (resp. \(s\preceq s^{\prime}\)). Downgrading a contribution status is forbidden. A contribution's status can only be upgraded. The pit landmark's status is public.

#### 2.2.4 Marks

Contributions may be marked by any number of marks. Marks can be _ad hoc_ custom marks, e.g.: archived, hidden, dim, highlighted, folder, unpublished. There also are predefined marks, e.g.: new (meaning unread), obsolete (cf §3.1.7), syncWith (cf §3.3.6), subscribedTo (cf §3.3.5), rewarded (cf §3.3.1). Marks may have parameters. For instance, the syncWith mark is parametrised by the list of devices \(d_{1},\ldots,d_{n}\) that the contribution is meant to be copied to. Marks are mostly for internal house-keeping, in contrast to the status which universally characterises a contribution. The meaning, and even the existence, of a mark is usually specific to one user or to a team of users. For instance, one user or team may choose to hide a contribution \(c\) marked as hidden, while other users may neither mark nor hide \(c\), and others may mark it as hidden but not hide it. Also in contrast to the status, marks are usually locally revocable by the user.

#### 2.2.5 Timestamps

A contribution \(c\in\mathbf{C}\) is associated with a timestamp corresponding to the date at which a user first encounters \(c\). If the user is the creator of \(c\), the timestamp is the date of creation of \(c\). Otherwise, it is the date at which the user receives \(c\) from another user.
Defined this way, timestamp attributes allow ordering contributions into a timeline of granular events, namely contribution appearances (creation or reception), which are the events we propose to emphasise. The definition of the timestamp attribute of MMM contributions will need to be refined to support finer timelines also accounting for modifications of the attributes of existing contributions.

### Landmarks

We have defined the main attributes of MMM landmarks. Now we specify the different kinds of landmarks, and especially the different kinds of contributions.

#### 2.3.1 Contributions

For the sake of simplicity, in the following sections we ignore the metadata of contributions - i.e., the authorship set, status, mark set and timestamp attributes. A contribution is an object from \(\mathbf{C}\subset\mathbf{I}\times\mathbb{S}\times\mathcal{P}(\mathbb{S}_{@})\times\mathbf{T^{AB}}\) comprised of an identifier in \(\mathbf{I}\), a label in \(\mathbb{S}\), a tag set in \(\mathcal{P}(\mathbb{S}_{@})\) and an abstract type in \(\mathbf{T^{AB}}\). Abstract types define the following sets of contributions:

* The set \(\mathbf{C_{V}}\subset\mathbf{I}\times\mathbb{S}\times\mathcal{P}(\mathbb{S}_{@})\times\mathbf{V}\) of "vertex contributions" a.k.a. "vertices" a.k.a. "nodes"
* The set \(\mathbf{C_{P}}\subset\mathbf{I}\times\mathbb{S}\times\mathcal{P}(\mathbb{S}_{@})\times\mathbf{P}\) of "pen contributions" a.k.a. "pens"
* The set \(\mathbf{C_{E}}\subset\mathbf{I}\times\mathbb{S}\times\mathcal{P}(\mathbb{S}_{@})\times\mathbf{E}\) of "edge contributions" a.k.a. "edges" a.k.a. "links", which is comprised of the set \(\mathbf{C_{E_{A}}}\) of adirectional edges, the set \(\mathbf{C_{E_{U}}}\) of unidirectional edges and the set \(\mathbf{C_{E_{B}}}\) of bidirectional edges

Visual design choices in the illustrations of this article are arbitrary. This proposal does not cover visualisation of the MMM formatted content. Different user interfaces can later offer different visualisations to accommodate different preferences.

Figure 3: Summary diagram of the MMM format definitions.

#### 2.3.2 Vertices

A vertex contribution \(c\in\mathbf{C_{V}}\subset\mathbf{I}\times\mathbb{S}\times\mathcal{P}(\mathbb{S}_{@})\times\mathbf{V}\) is composed of an identifier, **a non-empty label**, a tag set in \(\mathcal{P}(\mathbb{S}_{@})\) and an abstract type in \(\mathbf{V}=\mathbf{T_{V}}\subset\mathbf{T^{AB}}\) equal to the concrete type. Vertices have one of five possible (abstract/concrete) types:

\[\mathbf{T_{V}}=\mathbf{V}=\{\text{question},\text{narrative},\text{existence},\text{action},\text{data}\}.\]

The types of vertices that are the most central to our system are question vertices, narrative vertices and existence vertices. Contrary to what is done for the other two types, redundancy management is to be severe on those three central types. We won't mind if there are several data vertices labelled "42". However, we will mind if there are several question vertices labelled "What colour is the sky?". Vertex labels _cannot_ be empty. Below are examples of contributions that are vertices. Let me reiterate that the visual choices made in the illustrations in this article are arbitrary. MMM documented information doesn't even need to be graphically represented. The MMM-JSON format for instance is enough to capture it.
* (1, "What colour is the sky?", \(\mathfrak{O},\text{question})\in\mathbf{C_{V}}\) * (2, "The sky is blue.", \(\mathfrak{O},\text{narrative})\in\mathbf{C_{V}}\) * (3, "Sky", \(\mathfrak{O},\text{existence})\in\mathbf{C_{V}}\) * (4, "To be blue", \(\mathfrak{O},\text{existence})\in\mathbf{C_{V}}\) * (5, "Blue", \(\mathfrak{O},\text{existence})\in\mathbf{C_{V}}\) * (6, "the color of a cloudless daytime sky", \(\mathfrak{O},\text{existence})\in\mathbf{C_{V}}\) * (7, "Turquoise", \(\mathfrak{O},\text{existence})\in\mathbf{C_{V}}\) * (8, "bleu", \(\mathfrak{O},\text{existence})\in\mathbf{C_{V}}\) * (9, "White", \(\mathfrak{O},\text{existence})\in\mathbf{C_{V}}\) * (10, "Colour", \(\mathfrak{O},\text{existence})\in\mathbf{C_{V}}\) * (11, "true", \(\{\mathfrak{O}boolean\},\text{data})\in\mathbf{C_{V}}\) * (12, "12", \(\mathfrak{O}\text{int},\text{data})\in\mathbf{C_{V}}\) * (13, "Boil water.", \(\mathfrak{O},\text{action})\in\mathbf{C_{V}}\) Contributions with identifiers 5 to 9 above could be drawn from a glossary file listing/defining colours. Those contributions could be tagged (@colours) instead of \(\mathfrak{O}\). Contribution 3 could be tagged \(\{\mathfrak{O}myWeatherVocab,\mathfrak{O}some-standard-bird-ontology\}\). _Best practices for users:_ Use question vertices for labels that end with a question mark. Use narrative as the default contribution type when you don't want to bother determining what other type best suits your new contribution. But ideally, use narrative vertices for labels that end with a period, for instance, a statement, a series of statements, a theorem, a story.... Use existence vertices for labels that typically don't end with any punctuation mark, naming or formulating something that matters to you, a concept or a property (e.g. "Sky", "Blue", "To be blue"). Use action vertices for labels that describe some action to perform (e.g. "Boil water"). Use data vertices for labels that express the value of a data point (e.g. "42", "true", "13/08/1991"). The default contribution type narrative satisfies requirement R7f while the vertex type existence satisfies requirement R7g. Before I introduce the other types of MMM contributions, I recall that in agreement with Design Bias 2 and with requirement R7f, the MMM format provides a default information container (namely the narrative node). A busy/lazy user (using a lazy UI) doesn't need to know about, nor use anything else in order to document their notes in MMM format. They could possibly consider that one narrative corresponds one traditional document. I propose however to incentivise and facilitate the use of other MMM contributions. #### 2.3.3 Edges There are three categories of edges: adirectional, unidirectional and bidirectional. Contrary to that of vertices, the abstract type of edges contains more data than just the concrete type. For one, the abstract type specifies the endpoints of the edge. Bidirectional edges are special. A bidirectional edge resembles two unidirectional edges in opposite direction. Each direction may have its own label and its own tag set. Contrary to node labels, edge labels can be empty. Edge endpoints can be any kind of landmark: vertices, edges, pens and even the pit. This allows to satisfy requirement R3, while generally MMM edges satisfy requirements R2 and R6. An edge can't be one of its own two endpoints. An edge can however have another edge as one of its endpoints. We may have to limit the depth of the recursion14. 
Footnote 14: Say an ordinary edge between two node contributions is of depth 0. An edge of depth \(n+1\) is an edge that has one of its endpoints that is an edge of depth \(n\), the other endpoint being of depth no greater than \(n\). MMM landscapes might be easier to store and to navigate if we forbid edges of depth greater than a given maximum. I expect that very deep recursion is dispensable.

_Best practices for users:_ Make extensive use of directional edges. Prefer contributing information in edge labels rather than in vertex labels (cf §3.1.2).

Concrete edge types play an important role in this proposal. They make it possible to _roughly_ sort epistemic glue according to its intended purpose. Together with complementary edge information (conveyed by tags and labels), concrete edge types allow aggregating information (cf §3.2.4).

Figure 4: Different concrete types of MMM edges.

As mentioned above (cf §1.5), the concrete types of MMM edges (e.g. equates, instantiates, pertains) have deliberately loose semantics. This proposal offers a starting set of edge types that is intended to remain small. Future modifications of this starting set may be needed to ensure the use of the different edge types is balanced. Again let us insist on the flexibility of interpretation of concrete types of edges. It plays a central part in the intended universality of the MMM data structure. Concrete types are meant to convey a form of common epistemic denominator like the concrete type question: most people can have some form of agreement on what a question is, if only they agree that it is not a statement. The same holds for concrete types of edges but the room for variations in interpretations of each concrete edge type is wider. For instance, the equates edge introduced below can be interpreted as conveying a relation of synonymy between two linguistic terms. It can just as well be used to link two mathematically equivalent theorems expressed in different formalisms. Future work will have to circumscribe the extent of the variations in interpretations that is tolerated.

#### 2.3.4 Adirectional Edges

There is only one concrete type of adirectional edge, namely the relate edge. The abstract type15 of an adirectional edge is a triple in \(\mathbf{E_{A}}=\mathbf{I}\times\mathbf{I}\times\mathbf{T_{A}}\). The first two components of an adirectional edge type are the identifiers of its endpoints. The order of these two components does not matter.

Footnote 15: The term "type" in "abstract type" is not ideal. For two edges to have the same abstract type, they need to share the same endpoints and the same concrete type. Arguably, they need to almost be the same edge.

Here are some examples of contributions that are adirectional edges (see Fig. 5).

* \((14,"",\emptyset,5,3,\text{relate})\in\mathbf{C_{E_{A}}}\) is a labelless edge between vertices with IDs 3 and 5 introduced as examples in §2.3.2.
* \((15,"",\emptyset,5,4,\text{relate})\in\mathbf{C_{E_{A}}}\) is another labelless edge, between vertices with IDs 4 and 5.
* \((16,"\text{similar}",\emptyset,4,5,\text{relate})\in\mathbf{C_{E_{A}}}\) is an edge labelled "similar" between the same two vertices as edge \(15\).

As mentioned already, edges can have other landmarks than vertices as endpoints:

* \((17,"",\emptyset,15,5,\text{relate})\in\mathbf{C_{E_{A}}}\) is an edge between the edge with ID \(15\) and the vertex with ID \(5\).

#### 2.3.5 Unidirectional edges

The abstract type of a unidirectional edge is a triple from \(\mathbf{E_{U}}=\mathbf{I}\times\mathbf{I}\times\mathbf{T_{U}}\). The first (resp. second) component is the identifier of the edge's start point (resp. endpoint).
The last component is the edge's concrete type taken in:

\[\mathbf{T_{U}}\supseteq\{\text{answers},\text{questions},\text{pertains},\text{instantiates},\text{nuances},\text{supports},\text{pennedIn},\text{precedes},\text{relatesTo}\}\]

Figure 5: Adirectional edges of type relate (represented as black lines here). All other edge types should be preferred to this default edge type. Edges can have edges as endpoints. Their labels may be empty or not: above, all edge labels are empty except one, which is set to "similar".

The relatesTo edge is the default unidirectional edge that is to be used (sparingly) as a default edge, similar to the adirectional relate edge. The pertains edge is a ubiquitous type of contribution that we might consider renaming "details". In agreement with requirement R7e, a pertains edge from contribution \(a\) to contribution \(b\) can mean a number of things such as: \(a\) belongs to \(b\), \(a\) characterises \(b\), \(b\) involves \(a\), \(a\) is a detail of \(b\), \(b\) concerns \(a\), \(b\) refers to \(a\), or \(b\) is about \(a\)... Work is currently being carried out to explore and circumscribe the relevant semantics of this edge type. I expect this edge type might need to be divided into two edge types, e.g. pertains and characterises. Tying pertains links between pieces of information and the topics/concepts they are about plays an important role in our solution (cf §A.2.3).

The pennedIn edge is the only MMM contribution that comes with a constraint of usage. The endpoint of a pennedIn edge can only be a pen contribution (cf §2.3.7 below). The start point can be any type of contribution. In contrast to pertains edges (and other edge types) which convey semantically structural information, pennedIn edges are rather for the meta coordination of MMM contributors. We further discuss this special type of edge in §2.4.3 below.

Here are some examples of contributions that are unidirectional edges (see Fig. 4):

* \((18,"",\emptyset,2,1,\text{answers})\in\mathbf{C_{E_{U}}}\) offers narrative vertex \(2\) as an answer to question vertex \(1\) (cf §2.3.2).
* \((19,"",\emptyset,5,1,\text{answers})\in\mathbf{C_{E_{U}}}\) offers existence vertex \(5\) as an answer to question vertex \(1\).
* \((20,"\text{is }a",\{\text{@rdf:type}\},5,10,\text{instantiates})\in\mathbf{C_{E_{U}}}\).
* \((21,"\text{definition}",\emptyset,6,5,\text{pertains})\in\mathbf{C_{E_{U}}}\).

#### 2.3.6 Bidirectional edges

Formally, the abstract type of a bidirectional edge is a seven-component tuple from \(\mathbf{E_{B}}=\mathbf{I}\times\mathbf{I}\times\mathbf{T_{B}}\times\mathbb{S}\times\mathbb{S}\times\mathcal{P}(\mathbb{S}_{@})\times\mathcal{P}(\mathbb{S}_{@})\). The first two components are the identifiers of the edge's start point and endpoint. The third component is the edge's concrete type taken in:

\[\mathbf{T_{B}}\supseteq\{\text{equates},\text{differsFrom}\}.\]

The fourth (resp. fifth) component is the label of the edge specific to the direction start point \(\rightarrow\) endpoint (resp. endpoint \(\rightarrow\) start point). The sixth (resp. seventh) component is the tag set specific to the direction start point \(\rightarrow\) endpoint (resp. endpoint \(\rightarrow\) start point). One, two or all three of the labels of a bidirectional edge can be empty.
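Before turning to examples, here is a minimal sketch, in Python, restating the shapes that the three categories of edge contributions take once metadata attributes are ignored. The tuple layout mirrors the definitions above; the variable names and the nesting of the abstract type inside the contribution are presentation choices of mine, not prescriptions of the MMM format.

```python
# Minimal sketch restating the edge shapes defined above (metadata ignored).
# A contribution is (identifier, label, tag set, abstract type).

# Adirectional edge: abstract type = (endpoint, endpoint, concrete type); endpoint order irrelevant.
adirectional = (14, "", frozenset(), (5, 3, "relate"))

# Unidirectional edge: abstract type = (start point, endpoint, concrete type).
unidirectional = (18, "", frozenset(), (2, 1, "answers"))  # offers vertex 2 as an answer to question 1

# Bidirectional edge: abstract type = (start point, endpoint, concrete type,
#                                      label dir 1, label dir 2, tag set dir 1, tag set dir 2).
bidirectional = (22, "language translation", frozenset(),
                 (5, 8, "equates", "EN -> FR", "FR -> EN", frozenset(), frozenset()))
```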
Here are some examples of contributions that are bidirectional edges (see Fig. 4):

* \((22,\underbrace{\text{``language translation''}}_{\text{main label}},\emptyset,5,8,\text{equates},\underbrace{\text{``EN}\rightarrow\text{FR''}}_{\text{label for dir 1}},\underbrace{\text{``FR}\rightarrow\text{EN''}}_{\text{label for dir 2}},\emptyset,\emptyset)\in\mathbf{C_{E_{B}}}\)
* \((23,"",\emptyset,5,7,\text{differsFrom},\text{``add a bit of green''},\text{``remove some green''},\emptyset,\emptyset)\in\mathbf{C_{E_{B}}}\)

Bidirectional equates edges are important as they allow connecting similar contributions that are not necessarily perfect duplicates. They support requirement R5.

#### 2.3.7 Pens

The abstract type of a pen is a couple taken from the set \(\mathbf{P}=\mathcal{P}(\mathbf{I})\times\mathbf{T_{P}}\). The first component is a set of identifiers. The second component is the pen's concrete type from:

\[\mathbf{T_{P}}\supseteq\{\text{definition},\text{reasons},\text{conditions},\text{glossary},\text{experimentalProtocol},\text{measure},\text{pointer},\text{document},\text{default}\}.\]

Pens are similar to edges in a hypergraph. Let \(p=(i,l,x,S,t)\in\mathbf{C_{P}}\) be an arbitrary pen where \(S\in\mathcal{P}(\mathbf{I})\) is a set of landmark identifiers. Abusing language, for any landmark identifier \(j\in S\), we say that the pen \(p\) _contains_ the landmark \(j\). A landmark can be contained in multiple pens, i.e., pens can overlap. Examples of pen contributions are pictured in Fig. 6 and Fig. 7. Pens can contain any type of landmarks. The definition pen of Fig. 6 for instance contains two vertices and an edge. Pens can also contain other pens.

Figure 6: A definition pen used to specify that node 6 not only characterises the concept "Blue" of node 5, it defines it. Not everyone has to agree with this definition. Someone else could include node 5 in a different definition pen.

Figure 7: A default pen for colour names, containing previously defined contributions.

#### 2.3.8 The Pit

The pit, denoted \(\bot\), is a very special kind of landmark in the landscape, the only landmark that is not a user-contributed contribution. There is only one pit for all users. All users see the pit identified with the same identifier. The pit represents absurdity. It plays an essential role in our quality management system. See §3.1.6.

We have defined the different kinds of landmarks and their attributes. This concludes our definition of the MMM format. A JSON schema formalisation of the MMM format, namely JSON-MMM, exists, cf [41].

### Areas _etc._

Now we look into sets of contributions. We have already seen in §2.0.1 that a set of contributions including the pit constitutes a landscape.

#### 2.4.1 Landscapes, Areas, and Territories

An MMM landscape \(\mathbf{N}\) is a set of landmarks, necessarily containing the pit landmark \(\bot\). **Areas** are a generalisation of landscapes. An area is any set of landmarks not necessarily containing \(\bot\). **The MMM** - i.e. the structured digital version of the record that I propose to materialise (cf §1.1), a.k.a. the intermediary epistemic web (cf §1.4) - is the union of all landscapes. It is denoted \(\mathbf{N^{\star}}\). Some areas of \(\mathbf{N^{\star}}\) are private. Others are public. The union of all public landmarks is called **the public MMM** and denoted \(\mathbf{N^{\star}_{p}}\). \(\mathbf{N^{\star}}\) and \(\mathbf{N^{\star}_{p}}\) are _collective_ landscapes: their landmarks are contributed by different users and can be stored in a distributed manner. Other smaller landscapes may also be collective in that sense. A special kind of landscape is a **territory**. A territory is associated with a human user.
It comprises \(\bot\) and the set of contributions that the user is acquainted with. Typically a territory is mostly stored locally, on the user's device(s), although parts of it may be stored remotely. The part that is stored locally is called the **local territory**. Users need not keep local copies of all landmarks they are acquainted with, as they may have little interest in some landmarks they have visited. Also, they may trust other users to keep copies of the landmarks they are interested in and not store them themselves (cf Suppl. Mat. A).

#### 2.4.2 Paths

An MMM **path** is a series of MMM landmarks \(l_{1},l_{2},\ldots,l_{n}\) such that for any even integer \(i<n\), the contribution \(l_{i}\) is an edge between landmark \(l_{i-1}\) and landmark \(l_{i+1}\). By default, edges in an MMM path don't need to all have the same direction. If they do, then we say the path is directed. Note that because edges can have edges as endpoints, an MMM path can have several successive edges.

#### 2.4.3 Mutable Pens and Contributions

The set \(S\) of contents of a pen \(p\) is immutable (except for the local obsolescence mechanism, cf §3.1.7). We can nonetheless define _mutable_ pens. Contrary to the normal, immutable pens defined in §2.3.7, **mutable pens** aren't atomic contributions. They are _sets_ of contributions containing exactly one (possibly empty) pen \(p\) and other contributions linked to \(p\) by a pennedIn edge. The contents of the mutable pen are the contributions that are linked to the pen \(p\) by a pennedIn edge and, if any, the contents of \(p\). Contents can be added to a mutable pen. Using the obsoleting mechanism described in §3.1.7, contents can also be removed. Mutable pens are typically for delineating a mutable collaborative area of the landscape that plays the role of an editable document. Multiple users can include contents in a mutable pen. Application code can offer the user the possibility to remain "working inside" the mutable pen/document, meaning that all contributions created by the user during the work session are automatically linked with a pennedIn edge to the pen.

By default, contribution labels and types are immutable. Pens together with the obsoleting mechanism described in §3.1.7 can also be used to define **mutable contributions**. The MMM format isn't designed for real-time sharing of fine-grained information (see §3.3.2). MMM contributions are meant to be final. The MMM is mostly an add-only space equipped with a slow obsoleting mechanism described in §3.1.7.

## 3 Landscape Based Activities

Section 2 above defined the MMM format. Next, Section 3 here discusses how to use the elements of this format. Three kinds of landscape based activities are presented, namely, editing the landscape, exploring the landscape, and sharing landmarks. The supplementary material A describes the technological infrastructure and tooling to assist with MMM landscape based activities. An essential part of the proposition is the plan to support different user interfaces (UIs) in order to accommodate different epistemic approaches and interests towards information.

Figure 8: A mutable pen. UI application code can visually represent atomic and mutable pens in a similar way although they are different objects. I recall that this proposal is to deal with unstructured, typically disputable, information. Shopping lists are thus not typical content of interest to this proposal, but they make for simple illustrations.

Figure 9: A mutable contribution. Contributions marked as obsolete are not immediately deleted but eventually will be (cf §3.1.7).
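As a complement to the definition of mutable pens above, here is a minimal sketch, in Python, of how application code might compute the contents of a mutable pen from a collection of contributions. The dictionary-based representation and the function name are assumptions of mine, not part of the MMM format.

```python
# Minimal sketch: computing the contents of a mutable pen (cf §2.4.3).
# Contributions are modelled here as dicts; this representation is purely illustrative.

def mutable_pen_contents(pen_id, contributions):
    """Identifiers of the landmarks contained in the mutable pen built around pen_id.

    Contents are the landmarks linked to the pen by a pennedIn edge,
    plus, if any, the immutable content set S of the pen itself.
    """
    pen = contributions[pen_id]
    contents = set(pen.get("contains", ()))           # the pen's own (immutable) content set S
    for c in contributions.values():
        if c.get("concrete_type") == "pennedIn" and c.get("endpoint") == pen_id:
            contents.add(c["start_point"])            # a contribution penned into the pen
    return contents
```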
Contributions marked with \(\mathsf{X}\) are obsoleted contributions. They are not immediately deleted but eventually will be (cf §3.1.7). Application code is required to handle marks and ensure the desired user experience, in agreement with the sharing mechanisms presented in §3.3.

### Landscape Editing Activities

#### 3.1.1 Contributing

Contributing means recording one or several new contributions on a landscape. The most basic contributing actions are: posting a question using a question node, posting a vocabulary term using an existence node, posting a statement or posting anything arbitrary using a narrative node. Other basic contributing actions are to post an edge - e.g. a default relate edge or an equates edge - between two existing nodes. Most contributing actions involve multiple nodes and/or edges (cf §3.1.2).

_Best practices for users:_ In a context of collective documentation on a shared landscape like \(\mathbf{N}_{p}^{\star}\), the act of contributing should happen mainly through acts of "annotation" (see below §3.1.2). Contributions that are not annotations will typically contribute an isolated contribution or group of contributions.

#### 3.1.2 Annotating/Improving

Design Bias #4: **Information as improving events.** Information is regarded here as _events_ that cause the record to improve.

Note that following Design Bias 3, improving the record does not necessarily mean adding high quality information to it. It can mean exposing low quality information to nuance and detail. To support Design Bias 4, I propose to encourage the provision of new information in the form of _annotations_ to pre-existing information (see also Suppl. Mat. B.16 in relation to improving the quality of information on the record). A MMM annotation is a set of MMM contributions that involves at least one edge incident on a pre-existing contribution in \(\mathbf{N}^{\star}\). To annotate contribution \(c\in\mathbf{C}\) means to add at least one edge to the landscape between \(c\) and another contribution. Annotations complete, nuance, challenge, support, detail, update _etc_ information that is already in the record \(\mathbf{N}^{\star}\). Here are some annotation patterns that I expect to be useful:

1. Question a contribution, using the "questions" type of edge.
2. Answer a question using the "answers" type of edge.
11. Red-flag a contribution (see details given in §3.1.6), for instance when an off-topic statement such as "Chemtrails are for mind and weather control." has been linked as an answer to the question "What colour is the sky?".
12. Reference a contribution.

An author documented in the authorship set of a contribution \(c\) isn't necessarily the original author of the text documented in \(c\)'s label. Alice may document in the MMM that contribution \(c\) is supported by reference \(r\). The author of \(r\) might not agree with Alice on that. In one pictured case, the pertains edge links reference \(r\) to a quote extracted from the referenced document. In this case, the pertains edge conveys the meaning that the quote has the property of coming from the document referenced by \(r\): the quote is _characterised_ by its source. In contrast, in the other pictured case, the pertains edge conveys the fact that the quote is a part of the referenced document. By convention, MMM edges used for linking a contribution to its reference can be labelled or tagged "reference".
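For concreteness, the following sketch assembles such a reference pattern out of contributions, assuming a minimal record representation of nodes and edges. The class and field names are illustrative only (they are not the JSON-MMM schema), the bibliographic entry is a placeholder, and the direction chosen for the pertains edge is arbitrary:

```python
from dataclasses import dataclass

# Minimal, illustrative stand-ins for MMM contributions (not the JSON-MMM schema).
@dataclass
class Node:
    id: int
    label: str
    type: str                      # e.g. "narrative", "existence", "question"
    tags: frozenset = frozenset()

@dataclass
class Edge:
    id: int
    label: str
    type: str                      # e.g. "pertains", "answers", "relate"
    source: int                    # identifier of the start-point contribution
    target: int                    # identifier of the end-point contribution
    tags: frozenset = frozenset()

# A pre-existing statement that Alice wants to back with a reference.
c = Node(2, "The sky is blue.", "narrative")

# The reference itself, documented as an existence node (placeholder entry).
r = Node(30, "A. Author (2020), 'Why the sky is blue', Some Journal.", "existence")

# A quote extracted from the referenced document, documented as a narrative node.
quote = Node(31, "The blue colour of the sky arises from the scattering of light.", "narrative")

# The quote is characterised by its source: a pertains edge tagged "@reference".
e1 = Edge(32, "reference", "pertains", source=quote.id, target=r.id,
          tags=frozenset({"@reference"}))

# The quote is then implanted on the statement it is meant to support
# (a labelled default relate edge is used here).
e2 = Edge(33, "supports", "relate", source=quote.id, target=c.id)

annotation = [r, quote, e1, e2]    # the annotation: new contributions plus the edges implanting them
```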
UI application code can promote the documentation of references in standard formats such as APA, MLA, bibtex. Annotation is recursive in the sense that any contribution that takes part in an annotation can itself be annotated. The 12 annotation patterns listed above are suggestions. I recall that except for the pennedIn edge, there are no grammar rules for assembling MMM contributions in a landscape. I propose that user interfaces of MMM editing tools be designed to facilitate the annotation patterns listed above. They may provide shortcuts to some annotating patterns, and possibly emphasise some patterns more than others depending on what annotations are more typical to a given user segment served by the given UI. The metadata of a MMM contribution (id, authorship set, mark set, status, timestamp) can't be annotated. MMM edges refer to the main informational payload of their contribution endpoints. And the main informational payload of a contribution is expressed in the contribution's label, type and possibly tag set. Consider the node with ID 2 given as an example in SS2.3.2 above. While you can comment on the statement "The sky is blue." given by the label of the node, you can't comment on the fact that the ID of that contribution is 2. Similarly, while you can comment on the fact that edge with ID \(12\) has concrete type relate (see for instance SS3.1.6) you can't comment on the fact that its documented contributor is Anne Martin. _Best practices for users:_ If you wish to use some metadata as information, then document it explicitly as information: on the MMM, this means document it as a contribution which has its own metadata. #### 3.1.3 Implanting Implanting means contributing edges between a new contribution and pre-existing contributions on the landscape. It is comparable to annotating. Annotating is aimed at improving pre-existing content by way of new content. Implanting is aimed at using old content to provide context and visibility to new contributions (cf Suppl. Mat. B.5). _Best practices for users:_ Implant much and well. Poorly implanted contributions will have less visibility as the pathways leading to them are rare and/or at risk of being red-flagged and obsoleted. Implanting is central to my proposal. In the distributed MMM network, the better a contribution is implanted (not just the _more_ it is), the more visible it is likely to be: the more likely it is that a peer will discover it by following a path to it (cf SS3.2.5). Implantation is also key to redundancy management. The better implanted a new contribution is, the easier it is to identify how the information it conveys relates to other recorded information and possibly overlaps some of it. Before a contribution \(c\) is recorded for good on the landscape, and before it propagates to other users' territories, contributions in the neighbourhood of \(c\) where \(c\) is implanted can help determine the value that \(c\) adds to the area. Good, early implantation can also spare authors the effort of documenting atomic pieces of information that have already been documented by them or by someone else. Authors can concentrate on the added value they bring. They need only link that added value to the already documented relevant information that has been found, offering all the epistemic glue they have to favour good implantation. Incentives for authors are discussed in [43]. Because they favour connectedness of the overall MMM network, implantation and annotation, also contribute to a _desirable_ form of redundancy. 
See SS3.2.6 below. #### 3.1.4 Bridging Bridging two contributions (or areas) \(c\) and _c'_ means contributing an edge between them (epistemically gluing them together) or contributing several contributions which together materialise a path between \(c\) and _c'_. Bridging is particularly important when \(c\) and _c'_ are initially "distant" on the MMM (cf SS3.2.1), i.e., when no epistemic relation has been documented between them, not even an indirect one. Following Design Bias 2, I mentioned that the MMM solution is intended to address information problems primarily rather than to address communication problems. Communication solutions help get messages across geographical distance (e.g. powerful vocal cords, emailing _etc_) and across language barriers (e.g. hand waving, dictionaries, automatic translators). But interlocutors in the same room speaking the same language sometimes still struggle to understand each other because of different epistemic cultures, viewpoints, mindsets _etc._ Increasing the likelihood of an epistemic bridge being documented between the interlocutors' MMM territories may help. Any mechanism that increases the connectedness of the MMM network favours bridging. #### 3.1.5 Documenting Traditional documents like scientific articles contain multiple pieces of information. And unless they amount to unexplained bullet point lists, traditional documents also provide epistemic links between the pieces of information they contain. Information in scientific articles is typically organised in sections, subsections _etc_ (cf Suppl. Mat. B.12). In the MMM, a traditional self-standing document can be recorded in one piece, e.g. as the label of a narrative node. Or it can be decomposed into several pieces conveyed by a network of several interlinked MMM contributions. _Best practices for users:_ Decompose the content that you want to document into atomic contributions well interlinked with each other (rather than dump it as a monolithic text in a single contribution). Limit the amount of content you convey through each single contribution and make extensive use of meaningful MMM edges. Optionally a MMMified document can be delineated using a pen - typically a _mutable_ pen to support collaborative editing (cf SS2.4.3). #### 3.1.6 Disapproving and Red-Flagging A contribution can be deemed of low quality - i.e., disapproved of - for exactly two reasons: 1. The contribution is poorly positioned in the landscape. The edges between it and other landmarks are absurd. For instance the contribution is off-topic. Or it is on-topic, but it is implanted with the wrong kind of edge, e.g. it is linked with an answers edge to a question node while not providing an answer to that specific question. 2. The contribution is well positioned, but its label is of poor quality. For instance the contribution is a narrative node conveying a statement that someone considers untrue. Figure 10: An “MMification” of a traditional linear text document. N.B.: the MMM format isn’t designed for MMMifying existing documents. It rather is conversely meant for doing the informational work in preparation of a new document composition (cf §3.1.8 and Suppl. Mat. A.3). I propose to manage the two situations differently, and neither by resorting to censorship. The second situation is a case where we especially don't want to make the low quality contribution less visible on the MMM. 
My conviction is that mistakes, misinformation, disinformation _etc_ are unavoidable if not normal on a collaborative information space (cf Suppl. Mat. B.16).

Red-flagging is an annotation pattern for dealing with the first situation listed above. Red-flagging a contribution \(c\in\mathbf{C}\) consists in recording an equates edge between \(c\) and \(\bot\). Typically, red-flagging applies on contributions that are edges. Red-flagging an edge conveys the idea that the edge's endpoints should not be linked in the way the edge says they are.

Consider for instance the following example. A question \(q\) asks about RNA mechanisms involved in vaccines. A narrative contribution \(n\) is provided as an answer to \(q\) through the answers edge \(e\) connecting \(n\) to \(q\). Narrative \(n\) makes no mention of RNA molecules. Whatever the value of the statement \(n\) makes, whether it is true or false, whether we agree with \(n\) or not, it is easy to see that \(n\) is not an answer to \(q\) because an answer to \(q\) has to involve the notion of RNA. So independently of what one may think of \(n\), one can safely red-flag edge \(e\) (not \(n\) itself). This operation leaves all three contributions \(q\), \(n\), and \(e\) unchanged. The only change to the landscape is the addition of a new equates edge from \(e\) to \(\bot\). This new edge contribution can be labelled "An answer to the question must mention RNA mechanisms." or simply "Off topic" to specify the reason for the red-flagging of \(e\).

_Best practices for users:_ Document the reason for the red-flagging of a contribution in one of the labels of the equates edge linking it to \(\bot\).

In the example above, the narrative contribution \(n\) (a statement about 5G) doesn't need to be red-flagged. Justifying a red-flagging of \(n\) itself is more demanding than justifying the red-flagging of \(e\). Contribution \(n\) could be standing alone, disconnected from other contributions, and then "Off topic" wouldn't apply.

_Best practices for users:_ Avoid red-flagging a well positioned contribution, especially if it is of low quality. If it is of low quality, then annotate it: link new contributions to it that make explicit in what way it is of low quality. In particular, use nuance and questions edges abundantly to nuance and question the contribution. Only red-flag poorly positioned contributions.

Application code can deal with red-flagged contributions differently depending on the desired user experience. One UI could hide all contributions that have been red-flagged at least once by the community of users. This would mean that the 5G contribution above wouldn't be reachable from the question about vaccines. Another UI could wait for the contribution to have been red-flagged 5 times (5 different authorships for all equates edges between \(n\) and \(\bot\)). Another UI could highlight red-flagged contributions.

#### 3.1.7 Obsoleting

Obsoleting a contribution starts with marking it as obsolete. Different MMM editing tools may deal with obsolete contributions slightly differently. I describe the main idea. The end result of obsolescence is the deletion of the contribution from the user's local database state. There may be several copies of a contribution \(c\) in the distributed MMM network. By default, obsolete contributions are not propagated. If Alice has marked contribution \(c\) as obsolete, and if Bob has no copy of \(c\) in his own local database, then Bob won't get \(c\) from Alice.
However, if Charlie already has a copy of \(c\), then Charlie may be notified of Alice's obsolescence of \(c\). So obsolete contributions tend to disappear for good from the MMM, although no single user can be responsible for the definite disappearance of a shared contribution.

When a contribution \(c\) is marked as obsolete, all edges incident on it are also marked as obsolete. The edge recursion mentioned in §2.3.3 (see footnote 14 on page 15) means that obsolescence cascades down series of incident edges. We say that we **recursively obsolete** \(c\).

Obsolete contributions are not immediately deleted. They remain "in limbo" for a customisable amount of time. Suppose contribution \(c\) is marked as obsolete on Alice's territory. Suppose that Bob has the three contributions \(c\), \(c^{\prime}\) and edge \(e\) between \(c\) and \(c^{\prime}\) stored on his territory. Neither of these three contributions is marked as obsolete on Bob's territory. And suppose Alice synchronises with Bob. Having \(c\) in limbo on Alice's side allows dealing appropriately with Bob's input to Alice's territory. The fact that Bob's input is linked to a piece of information that Alice has already obsoleted may be used as a pretext to filter out everything Bob has linked to \(c\). In this case Alice gets neither \(c\), \(c^{\prime}\) nor \(e\) from Bob. Alice may on the contrary decide to take \(c\) out of limbo: if Bob is still interested in \(c\), she may decide that she is too after all. Another possibility is that Alice obsoleted \(c\) because she replaced \(c\) with a new version \(c^{\prime\prime}\). If there is an equates edge connecting \(c\) to \(c^{\prime\prime}\), it suggests that \(c\) and \(c^{\prime\prime}\) are "epistemically equivalent", i.e., they convey the same idea. Alice might then be interested in Bob's annotation of \(c\), provided it is redirected onto \(c^{\prime\prime}\) as in the figure below.

Limbo periods are periods during which a user's disinterest for a contribution is remembered and can be shared. The main reason for obsoleting a contribution is that the contribution is no longer _useful_. An obsolete contribution is not "epistemically inconvenient". Its presence in a landscape doesn't make the landscape less valuable. However, an obsolete contribution, since it no longer is relevant to a particular user, becomes _visually_ inconvenient to that user.

Because obsolescence is local, a user can't retract a contribution that she has already shared. Once a contribution \(c\) labelled "_The cat is out of the box._" has propagated, it can't be deleted from the MMM by any single user. If the author of \(c\) regrets her publication, the only things she can do are (1) try to propagate her obsoleting of \(c\), and (2) direct attention away from \(c\) or change the perception of \(c\) by adding annotations around \(c\) (she could also red-flag her own contribution, but that might not be advantageous to her in some contexts).

#### 3.1.8 Drafting, Substituting and Versioning

The MMM can be used as a drafting medium: a place to organise ideas before deciding a linear order to present those ideas in a traditional document. MMM contributions are not meant to undergo significant changes after their creation. Contributions represent atomic units of work. Replacing a contribution by a new contribution is a workaround for the definiteness of contributions. As before, in the figures below, \(\mathsf{X}\) marks obsoleted contributions that will eventually be deleted.
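The replacement strategy below relies on recursively obsoleting the old version. As a minimal sketch of that mechanism (cf §3.1.7), assuming contributions are kept in a local store indexed by identifier, that edge records expose their endpoint identifiers, and that marks are kept in a separate local map; all names and the representation are hypothetical:

```python
from collections import namedtuple

def recursively_obsolete(store, marks, cid):
    """Mark contribution `cid` as obsolete, then cascade to every edge incident on
    an obsoleted contribution. Edges can themselves be endpoints of other edges,
    so the cascade may run down series of incident edges.

    `store` maps identifiers to contribution records; an edge record is assumed to
    expose the identifiers of its two endpoints as `endpoints`.
    `marks` maps identifiers to local mark sets (marks are local, cf §2.2.4)."""
    pending = [cid]
    while pending:
        current = pending.pop()
        if "obsolete" in marks.setdefault(current, set()):
            continue                      # already in limbo, nothing more to cascade
        marks[current].add("obsolete")    # enters limbo; deletion only happens after
                                          # a customisable delay
        incident = [i for i, contrib in store.items()
                    if getattr(contrib, "endpoints", None)
                    and current in contrib.endpoints]
        pending.extend(incident)

# Tiny usage example with hypothetical record types:
NodeRec = namedtuple("NodeRec", "id")                 # nodes have no endpoints attribute
EdgeRec = namedtuple("EdgeRec", "id endpoints")       # endpoints: pair of identifiers

store = {1: NodeRec(1), 2: NodeRec(2), 3: EdgeRec(3, (1, 2)), 4: EdgeRec(4, (3, 2))}
marks = {}
recursively_obsolete(store, marks, 1)   # obsoletes 1, then edge 3, then edge 4 (incident on edge 3)
```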
Rather than make contributions evolve over time, I advise the following default strategy for replacing a contribution \(c\) with a new version \(c^{\prime}\) of \(c\):

* Make a copy \(c^{\prime}\) of \(c\) that is a brand-new contribution with a different identifier than \(c\).
* Let the user change whatever attribute she needs to change in \(c^{\prime}\).
* Link \(c\) and \(c^{\prime}\) using an equates edge. The new equates edge can be tagged "@version" and optionally labelled with the reason for the replacement of the old version by the new. In some cases, the old and new versions might be too different to justify an equates link. Another type of link should be preferred, possibly a relate link (cf Fig. 12).
* Recursively obsolete \(c\).
* If \(c\) and \(c^{\prime}\) are epistemically equivalent (e.g. if \(c^{\prime}\) corrects a typo in \(c\)) and have been linked with an equates edge, then make copies (with different identifiers) of all edges incident on \(c\), replacing the \(c\) endpoints of these edges with \(c^{\prime}\) endpoints. If \(c\) and \(c^{\prime}\) aren't epistemically equivalent, leave it to the user (or possibly to a trained AI) to decide how to redirect edges from \(c\) to \(c^{\prime}\). Because the obsolescence around \(c\) is recursive, recursive redirection might be necessary.
* Add \(c^{\prime}\) and new incident edges to all pens to which \(c\) and old edges belong.

Figure 11: Versioning. An idea is reformulated. The old version is obsoleted and appropriately linked to the new version.

_Best practices for users:_ To modify an existing MMMified document \(D\) - i.e. a document documented in the MMM as a network of interlinked contributions (cf Fig. 10) - first locate the exact pieces of information (contributions) in \(D\) that your modification applies to. If you want to specify one or several of those pieces of information, then preferably annotate them (cf §3.1.2). If you want to delete them, then obsolete them (cf §2.2.4). And if you want to replace them, then record their new versions and link them appropriately to the old using an equates link as detailed above.

Figure 12: Versioning when the old and new versions of a contribution are not epistemically equivalent. The vertical pertains edge incoming the old version can't simply be redirected to the new version.

Figure 13: Another case of versioning in which the old and new versions of the contribution aren't epistemically equivalent. Again the vertical pertains edge incoming the old version can't simply be redirected to the new version.

#### 3.1.9 Merging

Merging on the MMM only concerns contributions that are "epistemically equivalent", i.e., that convey the same idea. A contribution labelled "_The Earth is round._" will never be merged nor replaced with a contribution labelled "_The Earth is flat._", nor even with a contribution labelled "_The Earth is roundish._". The MMM remains a mostly add-only system. The atomic unit of information that can be added to, and obsoleted from, the landscape is a MMM contribution. MMM contributions don't disappear from the landscape because they are replaced _per se_, but because they are _in themselves_ no longer useful. For the sake of simplicity I ignore mark sets in this section, although a merging operation for mark sets can be defined. I assume an order can be defined on MMM contribution identifiers, for instance using timestamps.
Let \(c\) and \(c^{\prime}\) be two MMM contributions, respectively with identifiers \(i\) and \(i^{\prime}\), with identical labels \(l=l^{\prime}\), with tag sets \(x\) and \(x^{\prime}\), with identical types \(t=t^{\prime}\), with authorship sets \(a\) and \(a^{\prime}\), and with statuses \(s\) and \(s^{\prime}\). I define the order \(\preceq\) on contributions so that \(c\preceq c^{\prime}\) holds whenever all the following holds: \(i\leq i^{\prime}\), \(l=l^{\prime}\), \(x\subseteq x^{\prime}\), \(t=t^{\prime}\), \(a\subseteq a^{\prime}\), and \(s\leq s^{\prime}\).

I define the function \(\mathtt{m}:\mathbf{C}\times\mathbf{C}\rightarrow\mathbf{C}\) such that for any two contributions \(c\), \(c^{\prime}\in\mathbf{C}\) that share the same label and type, \(\mathtt{m}(c,c^{\prime})\in\mathbf{C}\) is the contribution whose identifier is \(\max\{i,i^{\prime}\}\), whose label is \(l=l^{\prime}\), whose tag set is \(x\cup x^{\prime}\), whose type is \(t=t^{\prime}\), whose authorship set is \(a\cup a^{\prime}\), and whose status is \(\max\{s,s^{\prime}\}\). Contribution \(\mathtt{m}(c,c^{\prime})=c\lor c^{\prime}\) is the join of \(c\) and \(c^{\prime}\). The set of contributions that share the same label and type is a join-semilattice partially ordered by \(\preceq\).

The merge operation only affects contributions \(c\), \(c^{\prime}\) that have the same label and type. The end result of the merge is that only one contribution survives, which is equal to \(\mathtt{m}(c,c^{\prime})\). Merging modifies neither the label nor the type of any MMM contribution. Typically, its effect is to complete the authorship set and to upgrade the status of a contribution.

Merging can affect "homologous contributions" and "non-homologous contributions". Homologous contributions are contributions that have the same identifier. They necessarily have identical labels and types but may differ by their tag set and metadata. A merge of homologous contributions \(c_{1}\) and \(c_{2}\) typically happens when a user has a copy \(c_{1}\) of a contribution locally stored and receives another copy \(c_{2}\) of the same contribution from a peer over the distributed network.

Contributions \(c\), \(c^{\prime}\) with different identifiers (non-homologues) can also be merged as long as they have the same label and type. If \(i<i^{\prime}\) are the identifiers of \(c\) and \(c^{\prime}\), merging \(c\) and \(c^{\prime}\) is called **merging \(c\) _into_ \(c^{\prime}\)**. Generally, merging \(c\) and \(c^{\prime}\) with identifiers \(i\leq i^{\prime}\) consists in the following:

* Update \(c^{\prime}\) to \(\mathtt{m}(c,c^{\prime})\), which has the same identifier as \(c^{\prime}\).
* Create an equates edge between \(c\) and \(c^{\prime}\). I recall that a bidirectional edge has three labels and three tag sets. The new equates edge can be labelled or tagged "merged" (main label/tag set), "replaced by" (direction \(c\) to \(c^{\prime}\)), and/or "replaces" (direction \(c^{\prime}\) to \(c\)).
* Recursively obsolete \(c\).
* Recursively make non-homologous copies of all recently obsoleted edges around \(c\) and redirect them towards \(c^{\prime}\), as in §3.1.8.
* Add \(c^{\prime}\) and new incident edges to all pens to which \(c\) and old edges belong.

Figure 14: Contribution with ID 2 is merged into contribution with ID 52.

As long as obsoleted contributions are in limbo, the merging operation is commutative, associative and idempotent because applying the function \(\mathtt{m}:\mathbf{C}\times\mathbf{C}\rightarrow\mathbf{C}\) is.
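As an illustrative sketch (not the JSON-MMM schema), the join \(\mathtt{m}\) can be written down as follows, assuming contributions are plain records with the attributes listed above and ignoring mark sets as in this section; all class names, labels and authorships in the example are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Contribution:
    id: int                  # identifiers are assumed totally ordered, e.g. timestamps
    label: str
    tags: frozenset
    type: str
    authors: frozenset
    status: int              # statuses are assumed totally ordered as well

def precedes(c, c2):
    """The order on contributions sharing the same label and type."""
    return (c.id <= c2.id and c.label == c2.label and c.tags <= c2.tags
            and c.type == c2.type and c.authors <= c2.authors and c.status <= c2.status)

def m(c, c2):
    """Join of two contributions sharing the same label and type: keep the largest
    identifier and status, take the union of the tag sets and authorship sets."""
    assert c.label == c2.label and c.type == c2.type
    return Contribution(max(c.id, c2.id), c.label, c.tags | c2.tags,
                        c.type, c.authors | c2.authors, max(c.status, c2.status))

# The join is commutative, associative and idempotent, hence so is merging.
# Here contribution 2 is merged into contribution 52, echoing Figure 14:
old = Contribution(2,  "The sky is blue.", frozenset({"@colour"}), "narrative", frozenset({"Alice"}), 0)
newer = Contribution(52, "The sky is blue.", frozenset(),          "narrative", frozenset({"Bob"}),   1)
survivor = m(old, newer)
assert survivor == m(newer, old) == m(survivor, old)
assert precedes(old, survivor) and precedes(newer, survivor)
```

The merge of two homologous copies is the special case where the two identifiers are equal.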
The order in which merges occur, and repetitions of the same merge, have no impact on the result. Function \(\mathtt{m}:\mathbf{C}\times\mathbf{C}\rightarrow\mathbf{C}\) may be generalised to allow the merge of two pens \(p\) and \(p^{\prime}\) that have different contents but the same concrete type. The resulting/surviving pen is a pen whose identifier is \(\max\{i,i^{\prime}\}\) and whose set of contents is the union of the contents of \(p\) and the contents of \(p^{\prime}\). The merge of mutable pens follows, since merging causes all incident edges, including pennedIn edges, to be copied to the surviving pen. Merging is essential to redundancy management on the MMM.

A **relaxed version of the merge** operation can be implemented to allow for the merge of two contributions with different but epistemically equivalent labels. The regular merge described above updates the surviving contribution to \(\mathtt{m}(c,c^{\prime})\) and in doing so may modify some contribution attributes. In contrast, because "epistemic equivalence" is subjective, the relaxed merge must not modify any of the two contributions that are merged, except for marking one as obsolete. A diversity of mechanisms may be implemented to identify potential merges (e.g. using language similarity [23]) and submit them to the user for translation into actual merges.

Because obsolete marks are local, merges also tend to be local. Alice may merge non-homologous contributions \(c\) and \(c^{\prime}\) locally, resulting for instance in the authorship set of \(c^{\prime}\) growing. If Bob also has copies of \(c\) and \(c^{\prime}\), Bob will not be affected by the merge. This is desirable as \(c\) and \(c^{\prime}\) may be public contributions and Alice alone should not be allowed to update public material for everyone. If Bob independently decides to merge \(c\) and \(c^{\prime}\), Bob will adopt the same merging strategy as Alice: merging the contribution with the smallest ID into the contribution with the largest. If Bob has a copy of neither \(c\) nor \(c^{\prime}\), by default Bob will not inherit the obsolete contribution \(c\) from Alice. If Bob has a copy of \(c\), we may use Alice's equates link between \(c\) and \(c^{\prime}\) as a trigger for Bob to substitute \(c\) with \(c^{\prime}\). The more users merge \(c\) into \(c^{\prime}\), the greater the likelihood that \(c\) ends up disappearing from the entire distributed MMM database.

The notion of merging contributions naturally extends to a notion of **merging landscapes**. For any two landscapes \(L\) and \(L^{\prime}\), \(\mathtt{m}^{\star}(L,L^{\prime})\) is the landscape that contains exactly the union of all contributions of \(L\) and all contributions of \(L^{\prime}\), where contributions \(c_{1}\in L\) have been merged with their homologous contributions \(c_{2}\in L^{\prime}\). Let us write \(L\sqsubseteq L^{\prime}\) whenever for any contribution \(c_{1}\in L\) there is a homologous contribution \(c_{2}\in L^{\prime}\). The set of landscapes partially ordered by \(\sqsubseteq\) forms a join-semilattice where the join of two landscapes is given by \(\mathtt{m}^{\star}\). The MMM is the join of all landscapes. If we guaranteed delivery of every contribution to every peer on the distributed MMM network (and also immutability of contributions, which we mostly have for public contributions), then we could guarantee landscape convergence (locally, users would eventually see the same landscape) [54]. Importantly however, we _aren't_ aiming at convergence.
We don't want peers to see the same landscape _despite_ the distributed nature of the MMM, i.e., we don't want them to have the same local territory. On the contrary, I propose to leverage the distributed nature of the MMM in order to support a diversity of points of view on the record, and to reduce the overall amount of digital content that any peer sees to just what is relevant to them (cf §3.2.5). Because not everyone is interested in the same material, the MMM need not be materialised at any point in time at any single node of the distributed network [43].

Footnote 16: The subject of echo chambers is discussed in §3.2.5 and in [43].

#### 3.1.10 Updating the Landscape

Let us continue ignoring marks, including the obsolete mark. The possible ways of updating a landscape \(L\) are the following:

1. Add a contribution to \(L\) (this is the principal way of updating a landscape, cf Design Bias 4).
2. Merge duplicate contributions (same label, same type) as described in §3.1.9.
3. Add a tag to the tag set of a contribution in \(L\).
4. Add an authorship to the authorship set of a contribution in \(L\).
5. Upgrade the status of a contribution in \(L\).

For any of the five kinds of updates \(u\) listed above, let us write \(L+u\) to denote the landscape obtained by applying \(u\) to \(L\). We have \(L\sqsubseteq L+u\), provided we consider obsolete contributions when non-homologous duplicates are merged. Landscapes are thus monotonically non-decreasing with updates. Updates to a user's local territory cause the territory to grow. We have a simple case of CRDT (Conflict-free Replicated Data Type) [54] because, as noted above, the set of landscapes forms a semilattice ordered by \(\sqsubseteq\) and the merge function \(\mathtt{m}^{\star}\) computes the join of two landscapes (cf §3.1.9).

The YEd desktop tool has recently been used as a demonstrator of the basic MMM editing functionalities. In particular, it has been used to test the design of the MMM data model on information emanating from the aeronautical engineering domain. The set of YEd "palettes" that have been used in that exercise is available on GitLab [42]. YEd certainly isn't the only existing graph editor tool that can be adapted to act as a rudimentary MMM editor. It has the advantage of serialising graphs into the standard GraphML format. And conveniently, it allows grouping nodes and thus partially supports MMM pens. It does not, however, allow edges to act as endpoints of other edges. We have used a temporary workaround, breaking edges with a dummy node (cf the MMM EDGE BREAK POINTS.graphml palette).

### Landscape Consuming Activities

In section 3.1, we saw how to modify the MMM by adding and obsoleting content. Now, assuming the MMM is populated, we explore uses we can make of it.

#### 3.2.1 Measuring

A distance can be defined between any two contributions \(c\) and \(c^{\prime}\) in the MMM, for instance as the length of the shortest undirected path between \(c\) and \(c^{\prime}\). The notion of distance between contributions in the MMM network can be refined to capture a notion of "**epistemic proximity**" between contributions. Naively, the depth _underneath_ a contribution \(c\) can be defined as the length of the longest acyclic directed path incoming \(c\). A notion of **absolute contribution depth** can be introduced to characterise contributions against a referent contribution depth 0 - the depth of the most vacuous (/abstract) question possible (e.g. "What is the nature of existence?").
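As a minimal sketch of these two graph measures, assuming the landscape is available as simple adjacency maps (all names are hypothetical, and the depth computation assumes the explored directed paths are acyclic, as in the naive definition above):

```python
from collections import deque

def distance(undirected_adj, c, c2):
    """Length of the shortest undirected path between contributions c and c2 (BFS)."""
    seen, frontier = {c}, deque([(c, 0)])
    while frontier:
        node, d = frontier.popleft()
        if node == c2:
            return d
        for nxt in undirected_adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return None                      # no documented epistemic relation (yet)

def depth_underneath(incoming_adj, c, _visiting=None):
    """Length of the longest acyclic directed path incoming c.
    `incoming_adj[x]` lists the contributions with an edge directed towards x."""
    _visiting = _visiting or set()
    if c in _visiting:               # guard against circular directed paths (simplification)
        return 0
    preds = incoming_adj.get(c, ())
    if not preds:
        return 0
    return 1 + max(depth_underneath(incoming_adj, p, _visiting | {c}) for p in preds)

# Example: directed edges 3 -> 2 -> 1 and 4 -> 1 (from more specific to more general).
incoming = {1: [2, 4], 2: [3]}
undirected = {1: [2, 4], 2: [1, 3], 3: [2], 4: [1]}
assert distance(undirected, 3, 4) == 3
assert depth_underneath(incoming, 1) == 2
```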
The 'maturity' of a contribution may be measured in terms of the number of annotations to it, especially the number of nuances, details, (answered) questions to it, and recursively, the maturity of those annotations. The 'reliability' of a contribution \(c\) can be defined in terms of maturity, in terms of ratio of details and nuances, and in terms of the depth of \(c\) (length of directed paths outgoing \(c\) leading to mature contributions).

Figure 15: Suggested scale of epistemic proximity for two statements.

Other, finer metrics can be defined using the graph theoretic properties of landmarks and of areas of the landscape in order to further qualify elements of the landscape. This form of "epistemic topography" may allow nuancing primitive binary qualification of information (e.g. correct/incorrect, true/false, consensual/not consensual, there/not there), replacing it by richer, more profound qualification (cf Suppl. Mat. B.1). The number of red-flagging edges incident on a contribution \(c\) (i.e., the number of equates edges connecting \(c\) with \(\bot\)), and the number of authors of each public red-flagging edge, can also be counted to quantify the quality of \(c\) or of neighbours of \(c\). Application code may rely on metrics and thresholds to decide when to trigger the publication or display of a MMM contribution (e.g. on an external feed).

#### 3.2.2 Zooming in/out

An exploitable property of the MMM format is that most links are "vertical" links (unidirectional) as opposed to "horizontal" (adirectional or bidirectional), meaning that in some respects they express the idea that what is expressed at their start point is more defined, more narrow or more precise than what is expressed at their endpoint, which is more abstract, more compendious or more indiscriminate. Unidirectional/vertical edges tend to go from more specific to more general. This feature can be used to implement "epistemic zooming in and out". Of course, in the MMM, directed paths can be circular, and I expect circularity will be common. This is not a problem. The "epistemic zooming" proposition is merely practical. It is to allow some interactive filtering out of content, locally (as opposed to revealing some profound property of information). A fundamental functionality that we may want MMM editors to support is "contribution collapse": all contributions on an acyclic directed path to a given contribution are hidden and displayed on demand. The YEd graph editor mentioned before partially supports this functionality, assuming YEd groups are used to represent MMM nodes (cf collapsingNodes.graphml [42]).

#### 3.2.3 Filtering/Highlighting

The diversity of metrics that can be defined to qualify landscapes (cf §3.2.1) can be used to define a diversity of custom filters. These would allow users to experience the same landscape from different points of view. I propose that a collection of pre-defined adjustable filters be provided to users. Different areas of a landscape, and even different single contributions, may be managed by different filtering rules. Further research is needed to define and test possible default filter rules and determine the degree of stringency needed. A default filter rule could for instance reject any contribution that hasn't been challenged \(x\) times and nuanced \(y\) times by \(z\) different users, and/or whose annotations are shallow in the sense that the depth underneath them is less than \(d\in\mathbb{N}\).
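Such a default rule can be pictured as a simple predicate over the landscape, as in the sketch below. The helpers `annotations_of` and `depth_underneath` are assumed to be provided by the metrics of §3.2.1, and the default values chosen for \(x\), \(y\), \(z\) and \(d\) are arbitrary placeholders:

```python
def passes_default_filter(c, landscape, x=2, y=2, z=3, d=1):
    """Accept contribution c only if it has been challenged at least x times and
    nuanced at least y times, by at least z distinct users overall, and if none of
    its annotations is shallow (depth underneath each annotation is at least d).

    `landscape.annotations_of(c)` is assumed to return the edges incident on c,
    each exposing its concrete `type`, its `authors` set and its start point `start`."""
    annotations = list(landscape.annotations_of(c))
    challenges = [a for a in annotations if a.type == "questions"]
    nuances = [a for a in annotations if a.type == "nuance"]
    authors = {author for a in annotations for author in a.authors}
    deep_enough = all(landscape.depth_underneath(a.start) >= d for a in annotations)
    return len(challenges) >= x and len(nuances) >= y and len(authors) >= z and deep_enough
```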
In this case, default values for \(x\), \(y\), \(z\) and \(d\) need to be determined based on an understanding of the possible local and global consequences of systematically rejecting more or less content with this rule. Filter rules may also be used to trigger the local marking of contributions (cf §2.2.4) - e.g. as hidden or dim. Front-end code can use the marks to appropriately display the landscape to the user. Rules can also conversely be defined to highlight certain contributions. We may for instance want to highlight contributions that have high measures of quality, or that are marked as rewarded.

#### 3.2.4 Aggregating

Aggregation on the MMM means grouping together annotations that serve the same purpose - e.g. grouping together all yes answers to a given question \(q\), which can be identified as contributions linked to \(q\) by an answers edge tagged @yes. Aggregation can support redundancy management mechanisms. Comparing aggregable contributions can allow identifying overlaps between those contributions. UIs can be designed to preempt the recording of new contributions by the user that overlap with existing content, or to assist the user in narrowing down the value they can add. UIs can also be designed to systematically hide repetitive content identified as epistemically very close or equivalent to content not hidden. This way, sheer quantity and repetition of a piece of information may not directly translate into visibility of this piece of information.

Aggregation can be leveraged to further improve the landscape readability by showing fewer objects at once. Neighbouring contributions may be grouped together into "macro-meta contributions" by the graphical user interface (GUI), based on their subtype and on their position in the landscape. For instance all definitions of the same term, or all answers to the same question, may be displayed by the GUI as a single macro-meta node on which the user needs to click to access the details of each definition or answer. Macro-meta contributions are part of the view, not the data model.

Figure 16: Aggregation of all documented answers to a question, in one macro-meta-node.

#### 3.2.5 Navigating, Exploring, Searching and Finding

On a MMM landscape, **navigating** or **exploring** means travelling down a path. Let \(c\) and \(c^{\prime}\) be two MMM contributions. Let \(a\) be a landscape area. Let \(u\) be a user and let \(T_{u}\) denote \(u\)'s territory. We say that \(c\) is **findable** or **visible** _from_ contribution \(c^{\prime}\) (resp. from area \(a\)) if there is a path between \(c\) and \(c^{\prime}\) (resp. between \(c\) and any landmark \(c^{\prime}\in a\)). We say that \(c\) is findable _by_ user \(u\) if \(c\) is findable from \(T_{u}\). Further consideration of path properties (e.g. directedness) may help refine the notion of findability. Based on appropriate notions of findability and relevant filters, various landscape exploration strategies may be defined.

I call **"wayfarer exploration strategies"** strategies that explore MMM landscapes by following paths in the MMM network. Alternative **"parachuting exploration strategies"** can be used to explore the MMM. In contrast to wayfarer strategies, parachuting strategies ignore the MMM network topology and the MMM data model semantics. They explore MMM contributions in more traditional lexical ways, using natural language processing techniques to infer semantic similarities between contributions. Search over the MMM matches textual search queries to MMM landscape areas.
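The basic findability check underlying wayfarer strategies is a plain reachability walk over the landscape. A minimal sketch, with an optional cap on path length and a hypothetical adjacency map as input:

```python
from collections import deque

def findable_from(adj, area, c, max_hops=None):
    """Return True if contribution c is findable from the given area, i.e. if some
    landmark of `area` is connected to c by a path (optionally of capped length).
    `adj[x]` lists the landmarks sharing an edge with x, ignoring edge direction."""
    seen = set(area)
    frontier = deque((landmark, 0) for landmark in area)
    while frontier:
        node, hops = frontier.popleft()
        if node == c:
            return True
        if max_hops is not None and hops >= max_hops:
            continue
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, hops + 1))
    return False

# A contribution is findable by user u when it is findable from u's territory T_u:
adj = {10: [11], 11: [10, 12], 12: [11]}
territory_u = {10}
assert findable_from(adj, territory_u, 12)
assert not findable_from(adj, territory_u, 12, max_hops=1)
```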
The search query may be _selective_ - for instance when the user is looking for a contribution with a given identifier, or for questions recorded between two given dates, or for contributions that have more than 3 authors and are tagged "@genetic regulation". In this case, a parachutist strategy is enough. The query may also be _approximate_, with an exploratory intention behind it. In this case, a wayfarer approach might be better suited to locate epistemically relevant content. Given a textual search query \(q\) formulated by the user, a wayfarer search strategy (1) picks a MMM landmark (or area) \(s\) as starting point of the search, (2) performs a wayfarer exploration of the landscape following paths (of a certain capped length) outgoing \(s\), (3) circumscribes a relevant target area \(a\) of the landscape that is to be outputted as the search result, and possibly (4) derives an epistemic link in the MMM landscape between \(q\), \(s\) and \(a\). Graph theoretic properties combined with properties of the MMM data model (cf §3.2.1) will be useful to formalise the details of these steps. I leave this as an open problem for now. A subsequent article will propose ontology-based exploration strategies to support searching the landscape from a configurable point of view.

Defining a relevant wayfarer exploration may entail specifying the semantics of the MMM data model. The semantics of the MMM data model have deliberately been left vague and informal. Varying specifications of these semantics are possible. Different specifications may induce different notions of "consistency" for instance. One may specify that two existence nodes are consistent as long as they aren't connected by "contradicting" paths: one path being a sequence of equates edges, another path being composed only of equates edges except for one differsFrom edge. A variation of the semantics relevant to scientific researchers might restrict this definition to equates edges that are tagged with a certain tag - e.g. "@biological equivalence" or "@logical equivalence". Specifying the MMM data model semantics allows one to define the purpose of a wayfarer exploration (e.g. checking the consistency of an area of landscape). Wayfarer exploration is then as close as we get to reasoning on the MMM. The semantics determines how the information conveyed by MMM contribution types should be interpreted, how they should inform decisions and in particular how they should orient the exploration. A semantics may specify that equates edges tagged "@naturalLanguageTranslation", or more specifically "@EN \(\rightarrow\) FR", don't count. Available language translations would then be ignored by the wayfarer exploration.

Footnote 17: We have been using the term "epistemics" rather than "semantics".

The wayfarer approach favours discovery of information that is epistemically related to information that has already been discovered. This might favour the user's preparedness for new information. The user's local territory grows gradually to include contributions that are epistemically close to the contributions she already understands (the output area \(a\) of a search is probably directly related to the input query \(q\)). _In itself_, wayfarer discovery might also favour epistemic isolation, reminiscent of the echo chambers of today's Web. Epistemic proximity is however not semantic proximity. Contradictory answers to the same question are for example very close epistemically. So are the different interpretations of the same question.
Arguably, humans can't systematically avoid being directed by confirmation bias towards the same statements supporting the same views. But if they come across questions on the MMM (e.g. "How do vaccines work?"), there is no obvious natural way for them to systematically preempt exposure to the different answers that are contributed by individuals with different mindsets. The different answers are likely to be implanted on the same MMM landmark, namely the question they answer. Epistemic locality does not necessarily entail social locality as very different people can take interest in the same questions. Furthermore, wayfarer exploration not only leverages the connectedness of the overall MMM network it may also contribute to enhancing it. By making users aware of information that is already documented, it can help preempt the documentation of redundant content and encourage the building of short epistemic bridges between previously epistemically distant contributions. #### 3.2.6 Safe Reformulating and Translating One idea or piece of information can be expressed in multiple ways. Different expressions of the same idea don't necessarily speak to the same people, if only because people understand different languages. Connectedness of the overall MMM network increases the likelihood that there is a path between the different expressions of the same idea: one expression is findable from another expression. People who understand idea \(i\) via expression \(E\) can be made aware of the annotations concerning \(i\) contributed by people who better understand an alternative expression \(E^{\prime}\) of \(i\). In the MMM, the epistemic connection between contributions \(E\) and \(E^{\prime}\) being explicitly documented, it can be used to ensure "safe passage" between \(E\) and \(E^{\prime}\). Arguably, as the formulation of a piece of information changes, the information itself changes, possibly degrades. This may have repercussions on people's understanding of the information. "Safe passage" means that the new formulation is accessed with awareness of ensuing semantic changes. The people who translate or vulgarise information to give other people access to it are sometimes not the experts who can gauge the extent of the semantic drift caused by the reformulation. But if the drift is documented in the MMM (e.g. with a labelled equates or differsFrom edge between the two formulations) by someone who can discern it, then it becomes discussable like any other piece of information. Reformulation on the MMM is like any other ordinary act of deriving information out of pre-existing information. It is a documentable process that can be challenged and justified explicitly. In addition to promoting epistemic democracy, multiple connected formulations of the same idea is _a desirable form of redundancy_ that supports mitigation of _a useless sort of redundancy_. The useless sort of redundancy occurs between two expressions that are semantically too similar to be humanly distinguishable. One of the expressions can be removed without risk of reducing, now or in the future, the number of people potentially able to understand the idea. Desirable redundancy might not be relevant to _every_ user, but globally it contributes to the connectedness of the MMM network which in turns helps compare MMM contributions and identify useless overlaps. #### 3.2.7 Epistemic Time Travelling The system architecture that we propose in the supplementary material A allows for "epistemic time travel". 
On the MMM, epistemic time travel consists in playing back and forth the history of changes to a landscape.

#### 3.2.8 "Citing the future"

On the MMM, citation links are conveyed by MMM edges (or paths) linking a citing contribution to a cited contribution. Through an appropriate choice of edge types, and possibly through the documentation of edge labels, MMM citation links can be made to convey epistemic glue. In traditional documents like scientific articles, citation links direct the reader to a resource from the past. In contrast, MMM links direct the reader to a durable location in the landscape. The area around this location may evolve over time as contributions (nuances, supporting details, questions, _etc._) are implanted on it. But the cited contribution location persists.

Suppose that article \(a\) published in 2023 refers to current "_RNA vaccination_" techniques and cites the latest scientific publication on that subject, namely article \(a^{\prime}\). Both \(a\) and \(a^{\prime}\) are implanted in the MMM. Ten years later, Alice reads \(a\). By then, \(a\) is still somewhat relevant but the contents of \(a^{\prime}\) are long outdated. Contribution \(a^{\prime}\) is now deep under a decade of thorough annotations conveying new understanding of "_RNA vaccination_" and new technical propositions. The link from \(a\) to \(a^{\prime}\) in the MMM doesn't just point Alice in the direction of the old outdated article \(a^{\prime}\), it points her towards a relevant area of the MMM that has been continually updated.

Note that understanding an old article \(a\) properly might require more than updated information on the subject of the article \(a^{\prime}\) that is cited by \(a\). It might require understanding the historical context in which the link between \(a\) and \(a^{\prime}\) was made. Epistemic time travel (cf §3.2.7) allows one to access the state of the landscape back then and to replay the succession of events that led to outdating the contents of \(a^{\prime}\). Thus while \(a^{\prime}\) becomes outdated, the link (epistemic glue) between \(a^{\prime}\) and \(a\) continues to convey information of actual worth, for at least as long as \(a\) doesn't also become outdated. It is the purpose of central "refrigeration mechanisms" (MMM archival) to save some contents, such as contribution \(a\), from global obsolescence (disappearance from the MMM). Refrigeration mechanisms need to be defined to implement choices of what contributions deserve to be archived and for how long, depending possibly on (the dynamics of) the contributions' implantation.

#### 3.2.9 File and Folder Organising

MMM contributions (e.g. existence nodes) can be used to represent bibliographical references to documents that exist outside the MMM. Similarly, MMM contributions can be used to represent resources like files and folders existing in a file hierarchy. Mapping (parts of) a device's local file hierarchy into the user's local MMM territory would allow the user to seamlessly navigate between their MMM notes and their local file hierarchy (see Fig. 17 and Suppl. Mat. B.2). All contributions resulting from this mapping should be marked with a distinct mark, say FH. And all contributions marked FH should be considered private and unpublished.

### Landscape Sharing Activities

In this section we discuss the social dimension of the MMM proposal.
Figure 17: Bottom left: a simplified version of a part of the hierarchy of files and folders that a researcher could have on one of their devices, as presented by the Tree program. Above right: the corresponding MMM landscape area. I recall that all visual representations of MMM formatted information in this article are arbitrary. Precisely, a UI could in this case provide a visual representation of the MMM to the user identical to the terminal output of the Tree program. File and folder paths can be documented as tags in contributions' tag sets. pennedIn edges can be tagged "@file contained in" or "@sub-directory of". Other MMM contributions can be added. The diversity of MMM edge types can be leveraged to complete the file system's hierarchical organisation and play a role similar to rich symbolic links.

In the sequel let us assume the following context. There exist software interfaces for human users to interact with the MMM. This software is locally installed on users' devices. Each user has their own local territory as mentioned in §2.4.1. A user's local territory is stored on the user's device(s). Devices (belonging to the same or to different users) connect to each other (over the internet or a LAN) to exchange MMM contributions. I say these devices are _MMM hosts_, participating in the distributed MMM network. I sometimes refer to users as _peers_ in this network. I don't assume that MMM hosts operate as servers. Details of the implementation of the distributed MMM network and the software involved are given in the supplementary material A.

Figure 18: The MMM is the union of all MMM contributions stored on the local territories of users. As users share MMM contributions with each other, replication of the material occurs. Some contributions are copied on multiple hosts.

#### 3.3.1 Rewarding

Suppose that Alice publishes contribution \(c_{A}\) (e.g. a description of how she feeds the lab mice). Then Bob publishes contribution \(c_{B}\) as an annotation of \(c_{A}\) (maybe a specification of the problems Bob encounters in carrying out an experiment using Alice's mice). Later Charles publishes \(c_{C}\) which, among many other things, builds on \(c_{B}\). And it turns out that Charles ends up greatly rewarded for \(c_{C}\) (maybe \(c_{C}\) gets published in a prestigious academic journal, maybe it is the object of a Nobel Prize). I propose to use the collectively documented train of thought \(c_{A}\longrightarrow c_{B}\longrightarrow c_{C}\) to formally and proportionately acknowledge Alice's and Bob's participation in the work that eventually produced \(c_{C}\).

Charles (or the prestigious publisher or the Nobel Foundation) may locally mark \(c_{C}\) as rewarded. The rewarded mark can be parametrised with (1) data specifying or referring to the details of the reward (e.g. "Nobel Prize"), (2) data specifying the MMM distance to the rewarded contribution (0 in the case of \(c_{C}\)), and (3) the identifier of the rewarded contribution if the distance is greater than 0. The distributed system propagates rewarded marks through word of mouth (successive local shares, cf §3.3.4). Charles' reward can "trickle" down to Bob and then Alice. Every trickling step increments the distance recorded as parameter to the rewarded mark. On reception of a homologous copy \(c_{A}^{\prime}\) of Alice's contribution \(c_{A}\), a comparison can be made between the rewarded marks of both copies. The minimum distance to \(c_{C}\) can be kept as parameter of the rewarded mark.
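A sketch of how such marks could be handled locally, assuming a rewarded mark is a small record carrying the reward details, the distance, and the identifier of the rewarded contribution; all names, identifiers and the representation are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class RewardedMark:
    reward: str                     # details of the reward, e.g. "Nobel Prize"
    distance: int                   # MMM distance to the rewarded contribution
    rewarded_id: Optional[int]      # identifier of the rewarded contribution if distance > 0

def trickle(mark, upstream_id):
    """Reward trickling one step down the documented train of thought:
    the distance grows by one and the rewarded contribution stays the same."""
    return RewardedMark(mark.reward, mark.distance + 1,
                        mark.rewarded_id if mark.distance > 0 else upstream_id)

def combine(local, received):
    """On reception of a homologous copy, keep the mark with the minimum distance."""
    if local is None or received.distance < local.distance:
        return received
    return local

# Charles marks c_C as rewarded; the mark trickles down to c_B and then to c_A.
mark_cC = RewardedMark("Nobel Prize", 0, None)
mark_cB = trickle(mark_cC, upstream_id=103)     # 103: hypothetical identifier of c_C
mark_cA = trickle(mark_cB, upstream_id=103)
assert (mark_cA.distance, mark_cA.rewarded_id) == (2, 103)
assert combine(mark_cA, RewardedMark("Nobel Prize", 1, 103)).distance == 1
```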
I call this mechanism "trickling reward". The topic of acknowledgement and reward is discussed in [43].

Footnote 18: By the concierge module, cf Suppl. Mat. A.2.

#### 3.3.2 (Slow) Collaborating

The MMM is a collective document: users can offer answers to the same questions, they can nuance each other's statements, they can add to each other's mutable pens, they can connect each other's contributions using appropriate MMM edges _etc_. Consider a traditional document \(d\) decomposed into area \(a_{d}\) of the MMM (\(a_{d}\) is the "MMMified" version of \(d\), cf §3.1.5). Several users may work simultaneously on \(a_{d}\), annotating contributions that are already in \(a_{d}\) and episodically sharing with each other their annotations. Collaborators don't have to keep copies of every contribution in \(a_{d}\). **Floating and dangling edges are possible.** Alice may keep a copy of the edge \(e\) going from her contribution \(c_{A}\) to Bob's contribution \(c_{B}\), without storing a local copy of \(c_{B}\). Alice might regard the label and type of \(e\) as enough information about \(c_{B}\).

The MMM solution isn't optimised for fast communication. It is designed to natively support "**slow-first collaboration**", which is when collaborators can work without having fast editor access at any moment to what their collaborators are doing (real-time read access remains a possibility). Slow-first collaboration is sometimes enough and sometimes preferable (cf Suppl. Mat. B.20). Users edit _different_ contributions and then link them if relevant. Supporting real-time collaboration _within_ contributions (collaborators concurrently editing the same contributions) will require nesting finer-grained CRDTs in the native coarse-grain MMM CRDT-like structure [32]. Contributions are atomic units in the MMM just like characters are atomic units in traditional digital documents (cf Table 2).

\begin{table}
\begin{tabular}{|p{142.3pt}|p{142.3pt}|}
\hline
**Google Docs** & **MMM** \\
\hline
A Google Doc necessarily belongs to the holder of the Google account that it was created from. The owner and the editors of a Google Doc have more rights than its commentators. & Users don't have statuses, only contributions do. Any user can annotate a contribution that she has access to. Alice may annotate Bob's contribution, Bob may reject Alice's request to share her annotation with him, and Alice may persist her annotation and share it with Charlie even if it displeases Bob. \\
\hline
The finest granularity of collaboration is possible. Editors can see the effect of each other's keystrokes in real-time. Their keystrokes can interact with each other when they simultaneously edit the same area of the document. & Fast fine-grained collaboration isn't native to the MMM. The MMM only supports coarse-grained collaboration where collaborators don't simultaneously edit the same atomic unit of information. \\
\hline
Coarse-grained collaboration is supported. Users can comment and make suggestions to the main document. Their annotations are spatially tied to character positions in the document. & Annotations are _epistemically_ tied to the landscape. This is a simple case of CRDT because changes to the landscape may be applied without knowledge of the original context in which the changes were made. \\
\hline
Versioning allows viewing past states of the document and comparing versions of the document after changes have been applied to it. & Changes are encapsulated in contributions and logged (cf §A.1.1). The MMM system supports fine-grained "epistemic time travel" discussed in §A.1.3. \\
\hline
\end{tabular}
\end{table}
Table 2: Collaboration with Google Docs and collaboration with the MMM.

Let us expand on a recommendation made in §3.1.8 which frontend code can implement:

_Best practices for users:_ Just like you don't half-way type a character in a digital document, avoid half-way documenting a contribution in the MMM. When you are done editing contribution \(c\), mark it so as to indicate that \(c\) is finished (which is different from meaning \(c\) is _definite_, because of the obsoleting and versioning mechanisms described in §3.1.8), e.g. mark it as synchronisable. Only allow synchronisable contributions to be accessed from other devices.

Figure 19: Automatic assisting mechanisms for redundancy management can leverage the properties of the area surrounding Alice's contribution \(c_{A}\) and Bob's contribution \(c_{B}\) in order to detect similarity between \(c_{A}\) and \(c_{B}\). Application code may encourage Alice and Bob to merge \(c_{A}\) and \(c_{B}\), to obsolete one of them, or to document an explicit relationship between them.

#### 3.3.3 Defining Topics

In the MMM, topics - e.g. "holiday ideas" or "AI" - are typically documented in the user's territory as the labels of existence nodes. They can also be documented as the labels of any other kind of MMM contribution including question nodes, narrative nodes, pens and edges. When Alice records a new contribution \(c\) in her territory, she implants \(c\) (she links \(c\) to other contributions in her territory).
If this materialises a semantic path between \(c\) and the existence node labelled "holiday ideas", then \(c\) can be considered related to the topic "holiday ideas". If the implantation of \(c\) in Alice's territory materialises a path between \(c\) and the existence node labelled "AI", then \(c\) will be regarded as related to the topic of "AI".

We define the **topic anchor** to be the MMM contribution whose label - e.g. "holiday ideas" or "AI" - gives the topic name. The **topic extent** defines an area of the landscape surrounding the topic anchor. This area contains landmarks that are considered to fall within the scope of the topic. The area can be defined in terms of distance or depth - e.g. contribution \(c\) is relevant to the topic if \(c\) is at a distance \(d\leq 3\) of the topic anchor \(a\), or at a depth \(d\leq 3\) underneath \(a\). Features of the MMM format may be exploited to refine the delineation of topic extents (cf §3.2.1). For instance, a user might want to filter out of the topic's extent contributions connected via relate or relatesTo edges. Or she might not be interested in questions whose answers fall outside the topic extent or whose answers are shallow.

Formally a MMM topic is a couple \(T=(a,e)\) where \(a\) is the topic anchor and \(e\) is the topic extent. Let \(T=(a,e)\) be a topic whose extent \(e\) circumscribes an area of radius \(n\in\mathbb{N}\) around contribution \(a\). Let \(c\) be a contribution in that area at a distance \(m\leq n\) of \(a\). Let \(e_{c}\) be the area of radius \(n-m\in\mathbb{N}\) around contribution \(c\). We say that topic \(T_{c}=(c,e_{c})\) is **inherited** from topic \(T\).
More generally, a topic \(T_{c}=(c,e_{c})\) inherited from topic \(T=(a,e)\) is such that contribution \(c\) belongs to the area circumscribed by \(e\), and this area contains the area circumscribed by \(e_{c}\). MMM topics are for **sharing and synchronising MMM contributions** on a need-to-know basis. The "need" is captured in advance in the topic extent. #### Sharing Contributions Users can send MMM contributions to each other, individually or by batches. Contributions that Alice has shared with Bob or received from Bob are marked with a parametrised sharedWith mark. When Alice receives a contribution \(c\) from Bob, \(c\) is initially marked as new on Alice's territory. A customisable amount of systematic filtering can be applied to new contributions on delivery (cf SSA.2.10, SSA.2.7 and SSA.2.6 in SSA.2). Alice might Figure 20: Two topics: ”moving house” and ”electric current”. The topic _anchors_ are respectively the pen and the existence node in which each topic name is documented. Topic scopes are captured in topic _extents_ and can overlap. In this example, if the topic extents are restricted to the dashed rectangles, then no contribution yet falls into the scope of both topics at once. only want to see contributions that are already well implanted in the global MMM or in the sender's local territory. If the new contribution \(c\) is not automatically filtered out, it remains for Alice to reject it or to accept it. If Alice rejects \(c\), \(c\) is deleted (not obsoleted) from her territory. If Alice accepts \(c\), then the new mark on \(c\) is removed. If there already is a homologous copy of \(c\) on Alice's territory, it is merged with \(c\). Sharing maintains the CRDT like properties of the set of landscapes mentioned in SS3.1.10. When Alice accepts a new contribution, her local territory grows according to the partial order \(\sqsubseteq\) on landscapes defined in SS3.1.9. I propose that **share contracts** be associated with MMM contributions when they are shared. The default share contract forbids the recipient of a MMM contribution from communicating the source's address to a third party. Only the source host can relax the contract. Possibly, in a GDPR-compliant way, the contract lists alternative hosts who have given their permission to be known as alternative hosts of the MMM material to share. The contract may contain some copyright clauses restricting what the recipient host can do with a shared contribution [43]. It may formalise a non-disclosure agreement. Research work is needed to define ways of enforcing contracts. Alice must not be able to change the contract she has with Bob. The software she uses to connect with Bob must not violate the contract. And/or peers with whom Alice connects shouldn't accept to interact with Alice in ways that violate the contracts applying to the data Alice has. #### 3.3.5 Subscribing to Topics Users can subscribe to topics. The anchors of the topics that they subscribe to must exist on their local territories. To subscribe to topic \(T=(c,e)\), the user must compose a "**subscription request**" and send it to one or several MMM hosts. The user not only specifies what information they are interested in acquiring, they also specify who they want to get the information from. The subscription request is a message with the following data: 1. A MMM topic \(T=(c,e)\) whose anchor \(c\) can be found on the user's local territory. 2. How often and until when the user wishes to receive \(T\)-related material. 3. 
A host from which the user wishes to get \(T\)-related material / the host to which the subscription request is sent. This host must have the anchor contribution \(c\) on their local territory as well if they are to serve \(T\)-related material to the subscriber. 4. A subscription contract specifying if the subscription can be forwarded by the recipient to an alternative \(T\)-serving host, and by the sender to an alternative \(T\)-interested subscriber19. Footnote 19: Suppose Alice is subscribed to Bob’s \(T\)-related contributions. The subscription contract might allow Bob to forward Alice’s subscription over to Charlie whom Bob gets his \(T\)-related information from – assuming Bob’s subscription contract with Charlie allows it. And the Alice-Bob contract might specify that Alice can forward the data she has on Bob to Eve so that Eve can receive \(T\)-related contributions directly from Bob. Serving a subscription consists in sharing \(T\)-related material found on one's territory to a subscribed peer (cf SS3.3.4). Further work is needed to ensure that serving subscription material is efficient. Work is in particular needed to determine the respective responsibilities of client and server in identifying subscription material in the landscape. Identifying the contributions \(c^{\prime}\) that fall into a topic \(T\)'s scope requires running through the landscape and possibly computing inherited topics. To facilitate the task of serving MMM material to his subscribers, Bob might want to constrain the topic extents that he is willing to serve, e.g. to "_one-size-fits-all-subscribers_" penned areas. He might leave most of the measuring and filtering work to his subscribers. As suggested in SS3.3.4, users can send each other unrequested MMM contributions. I propose to deal with these spontaneous exchanges of MMM material as **subscription**_invitations_. To share contribution \(c\) with Bob, Alice invites Bob to subscribe to \(T=(c,e)\) where \(e\) is empty if Alice wants to share nothing else than \(c\) with Bob. The extent \(e\) can alternatively define an area of radius \(1\) around \(c\) if Alice wants to share \(c\) and immediate annotations to \(c\). When he receives Alice's invitation to subscribe to her \(T\)-related material, Bob can modify certain parameters of the proposed subscription. He can for instance modify the extent of the topic and reduce the frequency of \(T\)-related news he will be receiving from Alice. I suggest that conversely, when Alice shares contribution \(c\) with Bob, especially if Alice is the author of \(c\), then Alice automatically subscribes to Bob's copy of \(c\) so that she episodically receives from Bob updates and annotations concerning \(c\) Alice's copy of the anchor contribution \(c\) is marked as subscribedTo, and possibly, so are also other contributions that fall into the topic's scope \(e\). The subscribedTo mark is parametrised with some of the subscription parameters. Subscribing to a topic on the MMM is comparable to joining a Semantic Overlay Network (SON) [12, 13] Connections between peers are determined by semantics. SONs are clusters of peers that share an interest for a concept (e.g. minimal tech music) and have resources related to that concept. Resources and peers are assigned to concepts and concepts are themselves organised into a predefined taxonomy of concepts. The taxonomy is leveraged to forward queries to the right peers and optimise search performance. 
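Returning to the subscription request itself, the four-part message described above can be sketched as a plain record; the field names and types below are illustrative rather than normative.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class SubscriptionRequest:
    """Sketch of the four-part subscription request listed above (names are illustrative)."""
    # 1. The topic T = (c, e); the anchor must already exist on the subscriber's territory.
    topic_anchor_id: str
    topic_radius: int
    # 2. How often and until when T-related material is wanted.
    delivery_interval: timedelta
    expires_at: datetime
    # 3. The host the request is sent to, which must also hold the anchor contribution.
    serving_host: str
    # 4. The subscription contract.
    may_forward_to_other_servers: bool = False      # recipient may pass it to another T-serving host
    may_forward_to_other_subscribers: bool = False  # sender may reuse it for another T-interested peer
```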
In the MMM system, the set of all peers subscribed to topic \(T=(c,e)\) is reminiscent of a SON, even if contribution \(c\) is not part of a common predefined hierarchy of concepts and might not represent a concept at all (\(c\) might be a question or something else). We don't necessarily have the potential for a peer-to-peer connection between any two peers in the cluster. #### 3.3.6 Synchronising Across Devices Synchronising an area \(a\) of the landscape with another device one owns is similar to sharing it with a peer. The local copy of contribution \(c\) on device \(d_{0}\) is marked with the syncWith mark parametrised with the list of other devises that \(c\) is copied to. In contrast to sharing, synchronising usually doesn't ignore the house-keeping marks. Typically, if a device has contribution \(c\) marked as highlighted, other devices of the same user will also have \(c\) marked as highlighted. Relay servers can be used to streamline cross-device synchronisation. Data transiting through these thin servers should be end-to-end encrypted. #### 3.3.7 Publishing Contributions I propose to promote a strong notion of publication. Design Bias #5: **Irrevocable Publicness** Information that is published can't be unpublished nor altered by any number of individuals, not even by the author or publisher of the information. A strong notion of publication requires an accompanying paradigmatic shift towards tolerance for miscommunication and misinformation. This is discussed in Suppl. Mat. B.16. Figure 21: Sharing information by value or by pointer. Contributions marked with ✗ are obsoleted contributions. Although the MMM format is not primarily designed to store and manage mutable data like phone numbers and addresses, mutable MMM contributions (cf §2.4.3) can nonetheless marginally support the sharing of mutable data by pointer or by value. N.B.: The assumption is that the MMM node storing Alice’s address is part of Alice’s local territory. Combining MMM concepts with concepts from the Solid project [52], a proposal for personal data management will be made in a follow-up article. Publishing a contribution to the MMM starts with upgrading its status to public. Importantly, if Alice didn't create \(c\) herself, if she got \(c\) from Bob, then the propagation contract she has with Bob might forbid Alice from publishing \(c\). Contributions marked as public may remain unseen by everyone except their creator. But the point of making a contribution public is to share it. Sharing a public contribution happens as described in SS3.3.4. Like other contributions, public contributions propagate through word of mouth (successive shares). So they don't necessarily propagate. A public entity like a university participating in the distributed MMM network may reject all public contributions it receives unless the contributions are from its affiliated researchers and are highly implanted in the university's local territory. From the moment a public contribution is shared, the publication is virtually irrevocable. This is because (1) the public status of a contribution can't be downgraded and (2) sharing is irrevocable. _Best practices for users_: use versioning and obsoleting mechanisms to share a correction (eg a typo correction) for a contribution that has already propagated over the distributed network. The author of a public contribution \(c\) shares equal **ownership** of **and control** over \(c\) with anyone who has a copy of \(c\). 
Anyone is free to make a local copy of \(c\), share \(c\) and annotate \(c\). No-one (not even the author) can directly modify the label and type of \(c\) (cf Table 3 below). However, anyone can modify the epistemic environment around \(c\) by (publicly) commenting on \(c\), supporting it, detailing it, nuancing it, questioning it, red-flagging it, relating it to other information _etc_. So anyone can potentially sway the way others interpret \(c\). As mentioned before, sheer quantity and repetition of information in relation to \(c\) does not necessarily translate into visibility of this information on the MMM however (cf SS3.2.4 and Suppl. Mat. B.5). So no single user or group of users has the exclusive power to definitely sway the interpretation of \(c\). Users can't delete public contributions from the MMM once they have been shared. They can only delete their own local copies. I recall that the MMM is not suited for the documentation of all kinds of content. It is especially ill-suited for content that _cant_ be disputed such as feelings. It is better suited for analytical information that has some collective value such as scientific contributions. All the current owners of a local copy of a public contribution \(c\) could mark \(c\) as obsolete and \(c\) could eventually entirely disappear from the MMM. Contributions that disappear shortly after they propagate might not be deemed of the same quality20 as persisting pervasive contributions. I propose that collective obsolescence be leveraged to inform the design of smart archiving mechanisms capable of identifying digital public information that is worth archiving (cf Suppl. Mat. A.1.3). ## 4 Conclusion I introduced the MMM data model and a notion of epistemic landscape based on it. I defined the MMM as the reunion of all epistemic landscapes. I detailed the different types of MMM contributions and their attributes, which are involved in the MMM. I presented landscape based activities centered around editing landscapes, consuming information and sharing information. Like the Semantic Web's underlying formalisms, the MMM's networked structure allows pieces of information to be connected to each other so that they gain in precision from context, and they can be reused and meaningfully processed - by humans in the case of the MMM. The MMM data model is assigned _loose_ semantics because like the original Web21, the MMM is meant to accommodate a diversity of use cases involving humans at work contributing to scientific research and other evolving informational fields. Footnote 21: The CERN Web [3], not so much the Web centered around social media. I proposed to introduce rich metrics expanding our traditional definitions of informational quality. MMM metrics leverage the non-binary epistemic qualities of information captured in the MMM data model. They also account for the contextual "implantation" of information reflected by the MMM network topology. I formalised the notion of implantation and I evoked the incentives for authors to implant their contributions well. Implantation is central to my proposal. It is to promote connectedness of the MMM network, making every piece of information more likely to be findable from an arbitrary location in the MMM landscape. 
Connectedness can facilitate global \begin{table} \begin{tabular}{|l|c|c|} \hline \multicolumn{3}{|c|}{**Allowed landscape modifications depending on contribution status**} \\ \hline \multicolumn{3}{|c|}{**Landscape modification:**} & \multicolumn{2}{|c|}{**Status of the concerned contribution:**} \\ & Private & Public \\ \hline Add a new contribution & \(\blacktriangledown\) & \(\blacktriangledown\) \\ \hline Add an authorship & \(\blacktriangledown\) & \(\blacktriangledown\) \\ \hline Add a tag & \(\blacktriangledown\) & \(\blacktriangledown\) \\ \hline Add, modify remove a mark & \(\blacktriangledown\) & \(\blacktriangledown\) \\ \hline Remove an author or authorship & \(\blacktriangledown\) & \(\blacktriangledown\) \\ \hline Remove a tag & \(\blacktriangledown\) & \(\blacktriangledown\) \\ \hline Change the endpoints of an edge & \(\blacktriangledown\) & \(\blacktriangledown\) \\ \hline Add a label to an edge & \(\blacktriangledown\) & \(\blacktriangledown\) \\ \hline Add a label to a pen & \(\blacktriangledown\) & \(\blacktriangledown\) \\ \hline Change the label of a contribution & \(\blacktriangledown\) & \(\blacktriangledown\) \\ \hline Change the concrete type of a contribution & \(\blacktriangledown\) & \(\blacktriangledown\) \\ \hline Add an author to an authorship list & \(\blacktriangledown\) & \(\blacktriangledown\) \\ \hline Upgrade the status & \(\blacktriangledown\) & – \\ \hline Downgrade the status & \(\blacktriangledown\) & \(\blacktriangledown\) \\ \hline Change the id of a contribution & \(\blacktriangledown\) & \(\blacktriangledown\) \\ \hline \end{tabular} \end{table} Table 3: The different modifications that a user can apply to a MMM landscape, depending on the status of the contribution involved in the modification. \(\blacktriangledown\) stands for possible and \(\blacktriangledown\) stands for impossible. Landscape modifications only apply to a user’s local territory. If the concerned contribution is sent to peers, the change may propagate through merges. For instance, Alice may add an authorship mentioning herself in the authorship set of contribution \(c\). Even if \(c\)’s status is public, the change only affects Alice’s local copy of \(c\) at first, until \(c\) is sent to another user Bob and Bob accepts it. Some changes mandatorily propagate whenever there is an opportunity for them to – cf rows in dark grey. Some changes such as changes relative to house-keeping marks, only propagate to a user’s devices. Share contracts may settle how/if modifications affecting private contributions propagate. Restrictions on the conditions for modifying and propagating tags are to be determined. redundancy mitigation. The supplementary material A proposes to further support it through automatic connection suggesting mechanisms (see in particular the lawless parachutist software component). The supplementary material A proposes a technological infrastructure to support the MMM. The MMM is organically distributed among peers who store the parts of the MMM network that are relevant to them and share parts that might be relevant to others. Globally, MMM contributions get replicated as they get shared and as peers decide to keep local copies for themselves. Locally, globally unique identifiers avoid duplication. I propose to encourage multiple UIs in order to accommodate the diversity of epistemic cultures. I defined a notion of local epistemic territory characteristic of a human user. 
MMM contributions inputted through the various UIs used by a user are all to be funnelled to that user's territory. On the MMM, all granularities of informational resources can be documented, identified and referred to, including fine-grained informational resources like single ideas and coarse-grained informational resources like the entire contents of an article. Information consumers can _precisely_ select the pieces of information that they consume and keep copies of. Their local epistemic territories can grow without getting littered by irrelevant atomic pieces of information. Useless archival and useless exchanges of information between users can be mitigated. The "refrigeration" process mentioned in the supplementary material A is a possible MMM based alternative to the Internet Archive's Wayback Machine [6]. Valuable pieces of MMM contributions can be archived enmeshed as they are with the epistemic MMM landscape. Collective obsolescence ensures that any atomic piece of information that isn't of use to anyone, eventually disappears from the record and archiving concentrates on information deemed relevant. This proposal is primarily geared towards digital content sobriety. Enhanced epistemic democracy is expected to follow. The focus is on what people are _not_ interested in. My hypothesis is that to be well-informed one must have enough focus and time to reason and to analyse information [43]. One must not suffer from information overload. I propose to equip people with enhanced means of disregarding information, without necessarily missing out on relevant content. Alice may not need any details on how electricity works. Bob may not be interested in having an example of fungal dissemination mechanism. With our proposed MMM solution, Alice and Bob can be made aware of the availability of these resources without having to come into contact with them even if their current interests are taking them in the immediate epistemic vicinity of those resources. Assuming as I do that there is a huge amount of information that a person does _not_ want to have access to, and that different people are uninterested in different areas and different levels of information, allows us to regard each host of the MMM network as a point of view on the MMM record participating in the distributed pruning of information. The MMM proposal is the result of a compilation of ideas contributed by the scientific and entrepreneur communities over several years of discussions. Further collective participation is welcome to help address remaining questions in the aim of materialising the MMM. Indeed, a number of research questions spanning over multiple domains still require attention. Some are essential administration questions: _How should MMM identifiers be defined? How can their definition provide an indexing of MMM contributions that facilitates search over the MMM? How can share contracts be enforced? What should they regulate? Should data identifying authors be hashed into the MMM identifiers of the contributions they author in order to support authentication of shared MMM contributions? If so, should the merge operation be adapted to persist the hashed data?_ The local territories of different users, in particular different users of the same machine, may overlap. _Can we give granular access rights to MMM contributions so that the overlaps don't have to be duplicated on the disk?_ Requirements listed in SS1.5 limit future possibilities of modifying the main attributes of MMM contributions. 
However, the basic metadata attributes can and may need to be adapted to provide satisfactory answers to some of these non-epistemic questions. Other questions requiring further work concern the epistemic organisation of MMM information: _How can we nest fine-grained CRDTs in MMM contributions to support fast-paced collaboration? How is MMM subscription material identified, and by whom, the serving host or the recipient host? What relevant MMM-based metrics can be defined? What relevant MMM-based filters can be defined? What are possible global repercussions of using certain filters systematically and universally? What learning mechanisms can be implemented on the MMM to enhance connectedness and promote other desirable global (topological) qualities of the MMM record?_ The architectural proposal made in the Supplementary Material proposes to persist MMM data in a graph database. _What should be the design of this database to optimise the different landscape-based activities, given that the MMM network is not exactly a graph because of pens and because of edges acting as nodes? What query language should we rely on?_ The supplementary material also mentions the possibility of documenting into the MMM the design choices and semantics of external data _schemas_ as well as possible known relations between data schemas. Because of the flexibility of the MMM data model, there is flexibility in the way external data models can be mapped to the MMM data model. _What external data models are worth mapping to the MMM data model, and why and how should they be mapped?_ Perhaps most generative of questions is the interface between the MMM and formal ontologies. A preliminary RDF-MMM mapping is available on Gitlab [40]. As the MMM data model is not equivalent to RDF, different mappings are possible to serve different purposes in relation to the following questions: _How can formal ontologies thread the MMM and provide support for "long distance" MMM exploration and search? Conversely, can updated, possibly informally expressed knowledge documented on the MMM assist the design, evaluation, completion, alignment and, generally, the evolution of formal models? How can the MMM data model and its interface with standard ontology and knowledge graph formalisms be automatically leveraged to those ends? Are there kinds of information of interest in ontology engineering that are computationally easier to get from epistemic glue in the MMM than from formal inferences on represented knowledge?_ Answers to the latter questions would specify some additional incentives for populating the MMM. In a follow-up article I propose to formalise a knowledge graph called the "**Socio-Economic Overlay Network**" (SEON) and its interface with the MMM. The SEON is to relay the MMM on informational content, like metadata, that is less amenable to discussion than typical MMM information. The questions listed above need to be specified, and the list is neither exhaustive nor definitive. In addition to the research questions, the MMM poses a number of implementation challenges. The supplementary material A sketches a technological solution abiding by local-first principles [33]. It would be relevant to examine possible synergies and overlaps with ongoing initiatives and existing technologies. The technological solution proposed for the MMM encourages a multiplicity of frontend interfaces with a standard MMM backend. Opportunities to interface existing tools with this backend are key.
The supplementary material A proposes a distributed architecture to support the MMM. The appropriate nature and conditions of network connections between hosts of this network need to be determined. _What protocols should they rely on? When can direct peer-to-peer connections be implemented? When are relay servers relevant?_
Vast amounts of digital information are being generated. The Semantic Web rests on the recognition that the way digital information resources are interconnected needs to be improved so that they can be processed efficiently. Its focus is on linking data. Linking data alone is not sufficient: infrastructure support is needed for interconnecting information resources of every kind. Enabling this interconnection requires human expertise from specialised fields, since it calls for understanding and fine-grained interlinking. As wide-ranging problems grow to a planetary scale, it is important to scale up the coordination of information processing and information production, without abandoning expertise and depth of analysis, and without imposing a single language or formalism. This text makes a proposal that is consistent with the Semantic Web's emphasis on interconnection.
2309.04544
Intercavity polariton slows down dynamics in strongly coupled cavities
Band engineering stands as an efficient route to induce strongly correlated quantum many-body phenomena. Besides inspiring analogies among diverse physical fields, tuning on demand the group velocity is highly attractive in photonics because it allows unconventional flows of light. $\Lambda$-schemes offer a route to control the propagation of light in a lattice-free configurations, enabling exotic phases such as slow-light and allowing for highly optical non-linear systems. Here, we realize room-temperature intercavity Frenkel polaritons excited across two strongly coupled cavities. We demonstrate the formation of a tuneable heavy-polariton, akin to slow light, appearing in the absence of a periodic in-plane potential. Our photonic architecture based on a simple three-level scheme enables the unique spatial segregation of photons and excitons in different cavities and maintains a balanced degree of mixing between them. This unveils a dynamical competition between many-body scattering processes and the underlying polariton nature which leads to an increased fluorescence lifetime. The intercavity polariton features are further revealed under appropriate resonant pumping, where we observe suppression of the polariton fluorescence intensity.
Yesenia A García Jomaso, Brenda Vargas, David Ley Domínguez, Román Armenta, Huziel E. Sauceda, César L Ordoñez-Romero, Hugo A Lara-García, Arturo Camacho-Guardian, Giuseppe Pirruccio
2023-09-08T18:22:06
http://arxiv.org/abs/2309.04544v2
# Flatband slows down polariton dynamics in strongly coupled cavities ###### Abstract Flatbands in condensed-matter, atomic physics, and quantum optics stand as the basis for several strongly correlated quantum many-body phenomena such as Wigner crystallization, the fractional quantum Hall effect and Moire-related physics. Besides inspiring analogies among diverse physical fields, flatbands are highly sought-after in photonics because they allow unconventional light flows such as slow-light. Here, we realize room-temperature slow-light with Frenkel polaritons excited across two strongly coupled cavities. We demonstrate the formation of a tuneable flatband appearing in absence of a periodic in-plane potential. Our simple photonic architecture enables the unique spatial segregation of photons and excitons in different cavities and maintains a balanced degree of mixing between them. This unveils a dynamical competition between many-body scattering processes and the underlying polariton nature which leads to an increased fluorescence lifetime. The polariton features are further revealed under appropriate resonant pumping, where we observe suppression of the flatband polariton fluorescence intensity. In condensed matter physics, flatbands have led to the achievement of several breakthroughs, including strongly correlated electronic states [1], non-conventional superconductivity [2; 3; 4], and topological phases of matter in two-dimensional materials [5]. Similarly, within photonic systems, the presence of flat optical bands is highly desirable, as they could prompt the generation of strongly correlated photonic states [6; 7; 8; 9; 10]. This is not only fundamentally relevant, but could also have an impact on energy harvesting, sensing, and information processing. Slow-light, the phenomenon of controlling and manipulating dispersion of light [11], has received attention in quantum optics [12], atomic physics [13; 14] and condensed matter [15; 16]. The ability to tune the dispersion of light yields new opportunities to design light-matter interactions and non-linear optical devices [17; 18]. In the quantum domain, slow-light is typically understood in terms of polaritons [19], hybrid light-matter quasiparticles that arise from mixing a photon with an elementary matter excitation. The propagation of light in the form of a dark-state polariton within a medium can be slowed down and even stopped by heavily unbalancing the mixture of light and matter. In atomic gases, slow-light is typically accompanied by the phenomenon of electromagnetically induced transparency whereby, assisted by a control laser field, light propagates undamped in a typically opaque medium [20; 21]. **Slow-light and intercavity organic polaritons** Polaritons found in both organic and inorganic semiconductors have demonstrated a high degree of flexibility to control the internal energy structure and dynamics, unfolding alternative routes, [22; 23; 24; 25; 26; 27; 28] including the use of photonic lattices to craft flatbands [29; 30; 31; 32]. In this article, our proposal involves the successful implementation of a polariton flatband at room temperature through the strong coupling of a photonic cavity with a polaritonic cavity. The dispersion of the so-formed intercavity polaritons is tailored by tweaking the three-energy-level diagram. This adjustment of the diagram allows us to fine-tune the dispersion of the resulting intercavity polaritons. 
The observed flatband signals the transition from a bright to dark polariton state characterized by a stretched polariton lifetime. The dark nature of such intercavity flat polariton is indirectly confirmed by its suppressed light-emission obtained under careful resonant pumping. Our findings suggest that reducing polariton group velocity is akin to generating slow-light in atomic gases when the condition for electromagnetically induced transparency is fulfilled. Furthermore, inter-cavity polaritons open up the quantum tomography protocols for local and independent measurements of the photonic and molecular degrees of freedom. Up until now, the exploration of strongly coupled photonic cavities has primarily centered around photonic crystal cavities [33], micro-rings [34], and whispering-gallery modes [35]. In the context of semiconductor microcavities, double-well potentials have led to the realization of phenomena like Josephson oscillations and self-trapping with intercavity polaritons, [36; 37] and Figure 1: **Experimental setup.** (a) Representation of the hybrid photonic-polaritonic coupled system. (b) Relevant energy levels: \(|\omega_{c}^{(L)}\rangle\), \(|\omega_{c}^{(R)}\rangle\) and \(|\omega_{X}\rangle\) represent the left photon, right photon and exciton state, respectively. Photons tunnel from left to right cavity with a hopping amplitude \(t\). \(\Omega\) is the light-matter coupling. have been recently proposed for quantum chemistry control [38]. To realize slow-light we designed a three-level energy scheme representing two strongly coupled cavities. This set-up reminds of the \(\Lambda\)-scheme commonly employed in quantum control experiments. Our system, sketched in Fig. 1(a), is composed by two cavities, labelled (Left) and (Right), coupled _via_ a thin mirror. The left cavity, represented by \(|\omega_{c}^{(L)}\rangle\), is purely photonic, whereas the right cavity host both photons and excitons, identified by \(|\omega_{c}^{(R)}\rangle\) and \(|\omega_{c}^{(X)}\rangle\), respectively. The Hamiltonian of the system is given by \[\hat{H}= \sum_{i=L,R}\omega_{c}^{i}(\theta)\hat{a}_{i}^{\dagger}\hat{a}_{i }-t(\hat{a}_{L}^{\dagger}\hat{a}_{R}+\hat{a}_{R}^{\dagger}\hat{a}_{L})+ \tag{1}\] \[+\omega_{X}\hat{x}^{\dagger}\hat{x}+\Omega\left(\hat{x}^{\dagger }\hat{a}_{R}+\hat{a}_{R}^{\dagger}\hat{x}\right)\] where \(\omega_{c}^{(L/R)}(\theta)\) indicates the energy of the left/right cavity photons, which are created with the operator \(\hat{a}_{L/R}^{\dagger}\). Here, the angle \(\theta\) corresponds to the incident angle of light injected to the right cavity. In Fig. 1(b), the photon hopping is characterised by a tunneling amplitude \(t\), while the light-matter coupling is given by the Rabi frequency \(\Omega\). The sample consists of two vertically stacked nanocavities fabricated on a glass substrate by a sequence of multiple sputtering and spin-coating steps. The front, middle and back mirrors are made by Ag and their thickness equal to 20 nm, 20 nm and 300 nm, respectiveley. The middle mirror width determines the tunneling amplitude \(t\). The Left nanocavity is filled with polymethyl methacrylate (PMMA), whereas the Right one embeds a dye-doped polyvinyl alcohol (PVA) layer. The excitonic content is provided by a high concentration of homogeneously dispersed Eithrosine B (ErB) molecules [39]. 
The absorption spectrum of the active medium exhibits a main peak around \(\omega_{X}\approx 2.24\)eV associated to a principal exciton resonance and a second peak at \(\omega_{v}\approx 2.4\)eV related to the first vibron mode. The polymer thickness of the Left cavity features a slow wedge which provides us the possibility of fine tuning \(|\omega_{c}^{(L)}\rangle\) in a wide photon energy range. When the resonance frequency of the left cavity at normal incidence matches the exciton one, an intercavity polariton, \(|D(\theta=0)\rangle\), emerges \[|D(\theta=0)\rangle=\frac{\Omega}{\sqrt{\Omega^{2}+t^{2}}}|\omega_{c}^{(L)}( \theta=0)\rangle-\frac{t}{\sqrt{\Omega^{2}+t^{2}}}|\omega_{X}\rangle, \tag{2}\] solely formed by the superposition of the left cavity photon and the exciton. Importantly, the right cavity photon does not participate in the formation of this polariton state. Further details on the geometrical order of the cavities are provided in the Supplementary Information. In Fig. 2(a)-(c), we show the local band structure of the coupled cavity system measured via Fourier microscopy, which demonstrates that the \(\Lambda\)-scheme yields a middle polariton (MP) state with energies close to the bare exciton one. Fine-tuning of the MP energy is accomplished by leveraging the wedged PMMA thickness. The MP state shows a reduced dispersive character compared to the upper (UP) and lower polariton (LP) and flattens for a specific value of the thickness of the photonic cavity. To theoretically understand our results, we introduce the detuning, \(\delta=\omega_{c}^{(L)}-\omega_{X}\), and the imaginary-time Green's function of the system, \(\mathcal{G}_{\alpha,\beta}(\tau)=-\langle T_{\tau}[\hat{\psi}_{\alpha}(\tau) \hat{\psi}_{\beta}^{\dagger}(0)]\rangle\), where the subindices \(\alpha,\beta\) correspond to the left/right cavity photon and exciton, respectively. The fields \(\psi\) and \(\psi^{\dagger}\) evolve according to Eq. 1. We define the spectral function of the left cavity photon as \(A(\omega)=-2\text{Im}\mathcal{G}_{11}(\omega)\) and we plot it in Fig. 2(d)-(f) for three values of \(\delta\), showing a very good agreement with the experimentally observed reflectance. By continuously varying the left cavity thickness, its photon energy is driven in resonance with the exciton energy. Furthermore, the energy of the polariton states obtained from our theory and plotted with dashed curves on the reflectance maps, Fig. 2(a)-(c), provide an excellent quantitative understanding of the system. We obtain a reduction of the photonic dispersion of the left cavity photons \[\omega_{\text{MP}}(\theta)\approx\omega_{X}+\left[\omega_{c}^{L}(\theta)- \omega_{X}\right]\frac{1}{1+\left(\frac{t}{\Omega}\right)^{2}}, \tag{3}\] this means that the dispersion of the MP is controlled by the tunnelling ratio, \(t\), which can be tailored by means of the middle mirror thickness. On resonance, the MP emerges at the energy of the bare exciton, its dispersion reduced by a factor 4-8 compared to the energies of the upper and lower polaritons, this leads to a pronounced flatband over a wide range of incident angles. While the dispersion can be further reduced, this would be at the expense of strongly suppressing the photonic component. The character of the MP is clearly unveiled once it is written in terms of the bare photon and exciton states \(|\text{MP}\rangle=\sum_{\alpha}\mathcal{C}_{\text{MP}}^{\alpha}|\alpha\rangle\). 
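As a rough numerical cross-check (not part of the original analysis), the single-excitation sector of the Hamiltonian in Eq. (1) can be diagonalised directly to obtain these coefficients; the parameter values below are placeholders rather than fitted experimental values.

```python
import numpy as np

def polariton_modes(omega_L, omega_R, omega_X, t, Omega):
    """Diagonalise the 3x3 single-excitation block of Eq. (1).

    Basis ordering: (left cavity photon, right cavity photon, exciton).
    Returns the eigenenergies (ascending: LP, MP, UP) and the Hopfield
    weights |C|^2 of each branch (one column per branch).
    """
    H = np.array([[omega_L, -t,      0.0    ],
                  [-t,      omega_R, Omega  ],
                  [0.0,     Omega,   omega_X]])
    energies, vectors = np.linalg.eigh(H)
    return energies, np.abs(vectors) ** 2

# Placeholder parameters (eV): resonant left cavity (delta = 0), detuned right cavity.
E, Z = polariton_modes(omega_L=2.24, omega_R=2.30, omega_X=2.24, t=0.10, Omega=0.10)
print(E)        # LP, MP, UP energies; the MP sits at the bare exciton energy
print(Z[:, 1])  # MP weights [left photon, right photon, exciton] -> [0.5, 0.0, 0.5]
```

For this resonant case the middle branch carries no right-cavity photon weight, consistent with the dark state of Eq. (2).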
The amplitude of its Hopfield coefficients \(Z_{\text{MP}}^{\alpha}=|\mathcal{C}_{\text{MP}}^{\alpha}|^{2}\) in Fig. 2(g)-(i) demonstrates that the MP decouples from the right cavity photons for \(\delta=0\). Furthermore, Fig. 2(i) shows that the Hopfield coefficients remain invariant along the angular region where the MP is flat. There, the MP has only non-vanishing Hopfield coefficients of the left cavity photon and the exciton. For conventional polaritons arising in two-level schemes, suppression of the polariton dispersion can only be achieved by compromising the degree of mixing between photons and excitons. This means that only far-detuned polaritons asymptotically exhibit quasi-flat dispersion and, thus, are formed by a large exciton component and negligible photon contribution. In contrast, our three-level design allows for flatband polaritons and slow-light retaining a significant photonic component of circa 40%. **Short-time dynamics.** Now we focus on the effect of the vanishing curvature of the MP, hinting at a mechanism analogous to the charge population trapping observed in atomic electromagnetically induced transparency. For this, we measure the prompt fluorescence decay employing the time-correlated single-photon counting technique. We explore the dynamics of the LP and MP and relate them with the corresponding photon component. The coupled cavities are pumped off-resonantly with laser pulses centered around 2.42 eV. Figure 3(a) displays decays for two detunings used in Fig. 2, i.e., \(\delta/\mathrm{eV}=0\) and \(\delta/\mathrm{eV}=-0.37\). In the considered time interval, all decays are well described by two time constants. For our analysis we focus initial dynamics illustrated in the pink shaded region We model the evolution assuming a dynamical equation of the form \(C(t)=A\exp(-\Gamma t)+(1-A)\exp(-\eta t)\), normalized to \(C(t=0)=1\). Figure 3(a) shows that this model (solid curves) provides a very good fit to the experimental data (points). We obtain that the LP dynamics is dominated by a single exponential with \(\Gamma_{\mathrm{LP}}(\delta=0)=3.25\) ns\({}^{-1}\) and \(A_{\mathrm{LP}}=0.94\), while \(\Gamma_{\mathrm{LP}}(\delta=-0.37)=3.0\) ns\({}^{-1}\) with \(A=0.86\). On the other hand, we find a richer dynamics for the MP, whose dynamics is characterised by Figure 2: **Intercavity polaritons.** (a)-(c) s-polarized reflectance as a function of the angle of incidence and photon energy for (a) \(\delta/\mathrm{eV}=-0.37\), (b) \(\delta/\mathrm{eV}=-0.18\) and (c) \(\delta/\mathrm{eV}=0\). The white, red, and black dashed curves correspond to the theoretical fitting of the energies of the lower, middle, and upper polariton respectively. (d)-(f) Spectral function, \(A(\mathbf{k},\omega)\), calculated for the same detunings as in (a)-(c). Middle polariton Hopfield coefficients for (g) \(\delta/\mathrm{eV}=-0.37\), (h) \(\delta/\mathrm{eV}=-0.18\) and (i) \(\delta/\mathrm{eV}=0\). The black, blue and red curves correspond to the left cavity photon, right cavity photon and exciton component, respectively. \(\Gamma_{\rm MP}(\delta=0)=0.18\) ns\({}^{-1}\) and \(\eta_{\rm MP}(\delta=0)=3.25\) ns\({}^{-1}\) for the flatband condition, and \(\Gamma_{\rm MP}(\delta=-0.37)=0.19\) ns\({}^{-1}\) and \(\eta_{\rm MP}(\delta=-0.37)=4.57\) ns\({}^{-1}\) for the detuned case. In contrast to the LP, the weight of the exponentials for the MP is almost equally distributed as \(A(\delta=0)=0.55\) and \(A(\delta=-0.37)=0.5\). The significantly slower dynamics of the MP shown in Fig. 
3(a) is consequence of the interplay between the two dynamical factors. Indeed, we observe that at very small times, in the linear regime \(C_{\rm LP}(t)\approx 1-\Gamma_{\rm LP}t\), whereas \(C_{\rm MP}(t)\approx 1-[A\Gamma_{\rm MP}+(1-A)\eta_{\rm MP}]t=1-\gamma_{\rm eff}t\). On resonance, \(\delta=0\), we obtain \(\gamma_{\rm eff}\approx 1.7\) ns\({}^{-1}\) which is significantly smaller than the dominant decay rate of the LP. We attribute this effect to a competition between a reduced photon component of the MP, which suppresses the damping rate, and the presence of the dark-states reservoir of exciton laying close to the energy of the MP, which may favour non-radiative scattering and accelerated decay. We observe that the dynamics of the MP is slower than LP one, even though the MP energetically lies on top of the reservoir of dark-state excitons (see Fig. 3(b)). Therefore, we conclude that the vanishing curvature of the MP and its concomitant dark nature produces a significant effect in the polariton dynamics in the nanosecond range. Finally, we see that the decay of the LP becomes slower Figure 3: **Short-time dynamics and dark-state polaritons.** (a) Normalized fluorescence lifetime showing the short-time dynamics of the middle and lower polaritons. The dynamics of the MP is displayed with orange and green asterisks for \(\delta/{\rm eV}=-0.37\) and \(\delta/{\rm eV}=0\), respectively. The LP decay is illustrated with blue and purple asterisks for \(\delta/{\rm eV}=-0.27\) and \(\delta/{\rm eV}=0\), respectively. Shaded pink depicts the region where the fastest decay component dominates the early polariton dynamics. (b) Sketch of the relevant energy levels. Fine-tuning the left cavity photon energy, shifts the energy of the three polariton states. The grey area symbols the presence of the dark exciton reservoir. The triplet state of the ErB molecules, \(|T_{1}\rangle\), influences the LP dynamics via intersystem crossing. (c)-(d) s-polarized fluorescence, expressed in counts per integration time, for \(\delta/{\rm eV}=-0.37\) and \(\delta/{\rm eV}=0\), respectively. The energy of the bare exciton is represented by the dashed white line. Photons are injected at \(\omega_{\rm p}/{\rm eV}=2.6\) which lies at the energy of the upper polariton. (e) Hopfield coefficients of the UP for \(\delta/{\rm eV}=0\) showing that it is predominantly formed by right cavity photons The black, blue and red curves correspond to the left cavity photon, right cavity photon and exciton component, respectively as it approaches the energy of the ErB triplet state, \(|T_{1}\rangle\). This is evident from the asymptotic intensity value of the decay curve which does not converge to the noise floor of the other curves. Here, intersystem crossing not only increases significantly the slow component of the decay, but also stretches the fast one, outcompeting the role played by the photonic Hopfield coefficients and producing long-lived polaritons. **Photoluminescence** To further investigate the nature of the polariton states we now turn our attention to the steady-state fluorescence. The system is pumped with a CW laser emitting at \(\omega_{p}=2.62\)eV, which corresponds to an off-resonant excitation for all negative detunings. This means that we inject photons approximately equally in both cavities and produce excitons in the right cavity. For \(\delta/\mathrm{eV}=-0.37\), the MP is formed almost equally by both left and right cavity photons, while LP is formed predominantly by left cavity photons. 
Thus, we observe a higher fluorescence intensity from the MP than from the LP, as seen in Fig. 3(c). In the Supplemental Information we show that as we decrease the detuning, the LP acquires a larger fraction of the photon and exciton component of the right cavity, which consistently conduces to an increased LP fluorescence. However, as we approach the condition for \(\delta/\mathrm{eV}=0\), the UP energy shifts until it matches the pump energy at normal incidence. As shown in Fig. 3(e), the UP for \(\delta/\mathrm{eV}=0\), is primarily formed by right cavity photons. Therefore, this configuration preferentially injects photons into the right cavity. As we move towards the condition for flatband polariton, the right cavity photon component of the MP decreases until vanishing exactly. Decoupling the MP from the right cavity photons results in an inefficient excitation of the MP and, thus, a strongly suppressed MP fluorescence intensity, as demonstrated in Fig. 3(d), confirming the transition of the MP to a dark state. **Conclusions and Outlook** We have experimentally demonstrated the formation of inter-cavity polaritons composed by the admixture of photons and excitons sitting in physically separated optical cavities. The three energy level \(\Lambda\)-scheme underpinning the physics of this system implies, on resonance, the existence of a polariton flatband. We observed the flattening of the middle polariton branch which transits smoothly to a dark state upon tweaking the photon-exciton detuning. This mechanism effectively decouples the flatband polariton from free space leading to slower short-time dynamics. On resonance, the absence of one of the photon states in the composition of the dark polariton is confirmed by the steady-state fluorescence, whose intensity is suppressed under resonant pumping of the upper polariton branch. The interplay of the polaritons with the exciton reservoir needs to be taken into account as this introduces entropically favored scattering paths that may hamper the observation of more exotic physics. However, we stress that the short-dynamics modification hints at that slow-light possibly overcomes the effect of the reservoir. This motivates further studies on the interplay between the dark-state dynamics and the possible breakdown of quasiparticle picture [39, 40, 41]. Hybrid photonic-polariton systems are an ideal platform to explore many-body physics in multi-level coupled cavity systems. This shares analogy to interlayer excitons observed in stacked 2D materials, where the indirect character of the interlayer excitons gives rise to strongly interacting many-body states [42], long-lived exciton-polaritons [43, 44], and new classes of polaritons [45, 46]. Inspired by how twistronics in stacked 2D materials gave rise and non-conventional superconductivity [47], we envisage that polariton flatbands can be useful to increase polariton correlations and facilitate the observation of non-trivial quantum phases in lattices-free systems. Moreover, our strategy to generate slow-light does not compromise the photonic component of the flatband polariton, making it appealing for strongly correlated truly polariton states. The reduced in-plane propagation helps confining spatially polaritons without the additional complication of fabricating physical boundaries. This may lower the threshold needed for quantum phases transitions while maintaining the architecture of the system simple. 
Furthermore, the quantum entanglement between the photonic and molecular degrees of freedom may be unraveled by exploiting the spatially indirect character of the intercavity polariton. **Methods** _Sample fabrication_ The sample is composed by two vertically stacked Fabry-Perot cavities fabricated on a glass substrate \(10\times 10\) mm\({}^{2}\) by multiple successive sputtering and spin-coating steps. The bottom 300 nm-thick Ag mirror was fabricated by magnetron sputtering operated at room temperature and a base pressure of approximately \(10^{-6}\) Torr which, during deposition, is pressurized with argon flow to \(3\times 10^{-3}\) Torr. We deposited 99.99% purity Ag at a rate of 0.08 nm/s. The active layer of the first cavity is obtained starting from a solution of 25 mg of polyvinyl alcohol (PVA, Mowid 44-88, 86.7-88.7% hydrolyzed, Mw \(\approx\) 205 000 g/mol) dispersed in 1 mL of distilled water. Then, 9.8 mg of Erythrosin B (ErB, Sigma Aldrich with dye content \(>\)90%) was added to the PVA/water solution, yielding a 0.5 M concentration. The ErB/PVA thin films were deposited by spin-coating at 2100 rpm using a 0.45 \(\mu\)m pore PTFE syringe filter, obtaining approximately 120 nm thickness. The first cavity is completed by fabricating a 20 nm-thick middle mirror on top of the active layer. The second cavity is formed by a Polymethyl methacrylate (PMMA, Mw \(\approx\) 120 000 g/mol) layer embedded between the middle and top mirror. This layer is obtained starting from a 25 mg/mL solution of PMMA. The solution is spin-coated at 2600 rpm for 60 s using a 0.45 \(\mu\)m pore PTFE syringe filter and provides a slow thickness gradient centered around 140 nm. Using PMMA instead of PVA avoids the formation of micro bubbles at the surface of the second cavity. _Experimental set-up_ Energy-momentum spectroscopy is performed in a homemade confocal Fourier optical microscope. Imaging the back-focal plane of a high numerical aperture microscope objective onto the entrance slit of a spectrograph (Kymera 328i, Andor) coupled to a sCMOS camera (Zyla 4.2P, Andor) is done by a Bertrand lens and provides direct access to the angular- and spectral-resolved reflectance. In our set-up the sample is illuminated through a Plan Fluor 50x/0.8 NA objective (Nikon) with white light emitted by a halogen lamp. The focal spot full-width at half-maximum equals 14 \(\mu\)m. The collected light is dispersed by a diffraction grating (150 lines mm, blazed at 500 nm). Two linear polarizers in the excitation and collection path are used to select the s- or p- polarization. Angular-resolved reflectance is obtained by replacing the cavity with a commercial mirror, which allows to normalize the spectra reflected off the cavity at each angle with those obtained with the mirror at the corresponding angles. Angular-resolved steady-state fluorescence is measured by pumping the coupled cavity with a 473 nm continuous wave laser (Excelisor 473, Spectra Physics) coupled to the Fourier microscope in epi-illumination configuration and focused down to 1 \(\mu\)m. The laser power is attenuated to 30 mW to avoid local damaging of the sample. The pump laser is filtered by a 500 nm long pass filter and its polarization is selected by the same broadband linear polarizer used for reflectance. The measurements for different detunings are collected using the same integration time to ensure comparability. Lifetime measurements are performed by a homemade time-correlated single-photon counting module coupled to the Fourier microscope. 
100 ps laser pulses centered around 513 nm (LDH-P-C-520M, PicoQuant) are focused on the sample surface by the same epi-illumination path used for the steady-state fluorescence. We use a repetition rate of 10 MHz and an intensity such that the average photon count rate at the detector is always 0.04% of the excitation rate. The focus diameter is roughly 1 \(\mu\)m. The pump beam is filtered by a 525 nm long pass filter, while the appropriate 10 nm full-width at half-maximum band pass filter selects the LP or MP normal incidence wavelength, for each detuning. The emitted light follows the same optical path as reflectance and steady-state fluorescence but is directed to a single-photon avalanche photodiode (MPD). The trigger from the laser driver (PDL 800-D, PicoQuant) and the signal from the detector are sent to a time-to-digital converter (Time Tagger 20, Swabian Instruments). All histograms are built with 100 ps bin width. The instrument response function (IRF) has been measured in several ways to check the consistency of the result and ensure a reliable exponential fit for the short-time dynamics. The relation between reflectance, fluorescence and decay measurements is guaranteed by the overlapping focus spots and the slow gradient slope of the PMMA layer thickness. _Detuning-resolved measurements_ In order to access experimentally a large set of detunings in a single sample, we designed the PMMA layer to exhibit a slow and almost linear gradient towards the peripheral zones. This radial gradient is controlled by the spin-coating rotation speed. A radial sample movement of one millimetre corresponds to increasing the PMMA thickness by approximately 50 nm. On the other hand, the ErB/PVA layer featured a constant thickness throughout the sample. The position of the focal spot is controlled by micrometer screws that permit shifting the sample in the focal plane of the microscope objective. _Theoretical approach.-_ The Green's function follows the Dyson equation \(\mathcal{G}^{-1}(z)=[\mathcal{G}^{(0)}(z)]^{-1}-\Sigma(z)\), with the non-vanishing terms \[\mathcal{G}^{(0)}_{11}(z)=\frac{1}{z-\omega_{c}^{L}(\theta)},\qquad\mathcal{G}^{(0)}_{22}(z)=\frac{1}{z-\omega_{c}^{R}(\theta)}, \tag{4}\] \[\mathcal{G}^{(0)}_{33}(z)=\frac{1}{z-\omega_{X}},\] and the self-energy given by \[\Sigma_{12}(z)=\Sigma_{21}(z)=-t, \tag{5}\] \[\Sigma_{23}(z)=\Sigma_{32}(z)=\Omega.\] The Green's function can be obtained analytically in this case; in particular, we obtain \[\mathcal{G}_{11}(z)=\frac{1}{[\mathcal{G}^{(0)}_{11}(z)]^{-1}-t^{2}\,\mathcal{G}^{(0)}_{22}\big(z-\Omega^{2}\mathcal{G}^{(0)}_{33}(z)\big)}. \tag{6}\] After analytic continuation \(z\rightarrow\omega+i0^{+}\), the energies of the polaritons are obtained from the position of the poles of the Green's function, \[\mathrm{Re}[\mathcal{G}^{-1}_{11}(E)]=0, \tag{7}\] which has the three solutions coined lower, middle, and upper polariton. The residue is \[Z=\left.\left(\frac{\partial\mathrm{Re}[\mathcal{G}^{-1}_{11}(\omega)]}{\partial\omega}\right)^{-1}\right|_{\omega=E}. \tag{8}\] The dispersion of the cavity photons is given by \(\omega_{c}^{(R/L)}(\mathbf{k})=\frac{c}{n_{c}^{(R/L)}}\sqrt{k_{z}^{2}+k_{||}^{2}}\), where the incident light propagates along the \(z\) axis, perpendicular to the cavity mirrors, and the angle \(\theta\) is given by \(k_{||}=n_{c}^{(R/L)}\frac{\omega}{c}\sin\theta\). _Acknowledgments.-_ We thank Joel Yuen-Zhou for the critical reading of our manuscript and valuable discussions. G. P. acknowledges financial support from Grants UNAM DGAPA-PAPIIT No.
IN104522 and CONACyT projects 1564464 and 1098652. H. L. G. acknowledges financial support from Grant UNAM DGAPAP PAPIIT No. IA107023. A. C. G. acknowledges financial support from Grant UNAM DGAPAP PAPIIT No. IN108620. C. L. O-R acknowledges financial support from Grant UNAM DGAP PAPIIT IG100521. H.E.S. acknowledges support from DGTIC-UNAM under Project LANCAD-UNAM-DGTIC-419 and from Grant UNAM DGAPAPIIT No. IA106023. A.C.-G, G. P. and H. E. S acknowledge financial support of PIIF 2023 H.E.S, acknowledges Carlos Ernesto Lopez Nataren for helping with the high-performance computing infrastructure. H. A. L.-G, A.C.-G, G. P acknowledge support from Grant UNAM DGAPAPIIE No. PE101223. Contributions Y.A.G.-C, B. V., D. L. D, C. L. O.-R., H. L.-G, and G. P performed the experiments. R. A, H. E. S., and A. C.-G provided the theoretical analysis. H. A. L.-G., A. C. -G and G. P wrote the paper, with input from all authors. A. C.-G and G. P. designed the project. Correspondence to Arturo Camacho Guardian and Giuseppe Pirruccio.
Band engineering is an efficient route to induce strongly correlated quantum many-body phenomena. Besides inspiring analogies among diverse physical fields, tuning the group velocity on demand is highly attractive in photonics because it allows unconventional flows of light. Λ-schemes offer a route to control the propagation of light in lattice-free configurations, enabling exotic phases such as slow light. Here, we realize room-temperature intercavity Frenkel polaritons excited across two strongly coupled cavities. We demonstrate the formation of a tuneable heavy polariton, akin to slow light, appearing in the absence of a periodic in-plane potential. This photonic architecture, based on a simple three-level scheme, enables the spatial segregation of photons and excitons in different cavities while maintaining a balanced degree of mixing between them.
2309.13038
Privacy Assessment on Reconstructed Images: Are Existing Evaluation Metrics Faithful to Human Perception?
Hand-crafted image quality metrics, such as PSNR and SSIM, are commonly used to evaluate model privacy risk under reconstruction attacks. Under these metrics, reconstructed images that are determined to resemble the original one generally indicate more privacy leakage. Images determined as overall dissimilar, on the other hand, indicate higher robustness against attack. However, there is no guarantee that these metrics well reflect human opinions, which, as a judgement for model privacy leakage, are more trustworthy. In this paper, we comprehensively study the faithfulness of these hand-crafted metrics to human perception of privacy information from the reconstructed images. On 5 datasets ranging from natural images, faces, to fine-grained classes, we use 4 existing attack methods to reconstruct images from many different classification models and, for each reconstructed image, we ask multiple human annotators to assess whether this image is recognizable. Our studies reveal that the hand-crafted metrics only have a weak correlation with the human evaluation of privacy leakage and that even these metrics themselves often contradict each other. These observations suggest risks of current metrics in the community. To address this potential risk, we propose a learning-based measure called SemSim to evaluate the Semantic Similarity between the original and reconstructed images. SemSim is trained with a standard triplet loss, using an original image as an anchor, one of its recognizable reconstructed images as a positive sample, and an unrecognizable one as a negative. By training on human annotations, SemSim exhibits a greater reflection of privacy leakage on the semantic level. We show that SemSim has a significantly higher correlation with human judgment compared with existing metrics. Moreover, this strong correlation generalizes to unseen datasets, models and attack methods.
Xiaoxiao Sun, Nidham Gazagnadou, Vivek Sharma, Lingjuan Lyu, Hongdong Li, Liang Zheng
2023-09-22T17:58:04
http://arxiv.org/abs/2309.13038v2
# Privacy Assessment on Reconstructed Images: ###### Abstract Hand-crafted image quality metrics, such as PSNR and SSIM, are commonly used to evaluate model privacy risk under reconstruction attacks. Under these metrics, reconstructed images that are determined to resemble the original one generally indicate more privacy leakage. Images determined as overall dissimilar, on the other hand, indicate higher robustness against attack. However, there is no guarantee that these metrics well reflect human opinions, which offers trustworthy judgement for model privacy leakage. In this paper, we comprehensively study the faithfulness of these hand-crafted metrics to human perception of privacy information from the reconstructed images. On 5 datasets ranging from natural images, faces, to fine-grained classes, we use 4 existing attack methods to reconstruct images from many different classification models and, for each reconstructed image, we ask multiple human annotators to assess whether this image is recognizable. Our studies reveal that the hand-crafted metrics only have a weak correlation with the human evaluation of privacy leakage and that even these metrics themselves often contradict each other. These observations suggest risks of current metrics in the community. To address this potential risk, we propose a learning-based measure called **SemSim** to evaluate the **Sem**antic **Sim**ilarity between the original and reconstructed images. SemSim is trained with a standard triplet loss, using an original image as an anchor, one of its recognizable reconstructed images as a positive sample, and an unrecognizable one as a negative. By training on human annotations, SemSim exhibits a greater reflection of privacy leakage on the semantic level. We show that SemSim has a significantly higher correlation with human judgment compared with existing metrics. Moreover, this strong correlation generalizes to unseen datasets, models and attack methods. We envision this work as a milestone for image quality evaluation closer to the human level. The project webpage can be accessed at [https://sites.google.com/view/semsim](https://sites.google.com/view/semsim). ## 1 Introduction This paper studies the _evaluation_ of privacy risks of image classification models, with a focus on reconstruction attacks [5; 41]. During inference, a target classifier, a reconstruction attack algorithm and a test set are used. For each original test image, the attack algorithm intercepts gradients of the target model to obtain a reconstructed image [6; 38]. The evaluation objective is to measure whether the reconstructed image leaks any private information of the original one. In the literature, _objective evaluation metrics_[28; 24] such as peak signal-to-noise ratio (PSNR), mean squared error (MSE) and structural similarity index (SSIM) are commonly used. They measure the similarity between two images on the pixel-level. In common practice, the high similarity between the original and reconstructed image indicates a good reconstruction attack, thus a more vulnerable classification model. Conversely, the low similarity between the two images means poor reconstruction, which is believed to indicate weak privacy risk. However, it is often subject to _human perception_ whether privacy is leaked or preserved. 
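For reference, these pixel-level scores can be computed in a few lines; the sketch below is illustrative only and independent of any released code, and the image arrays and the noise used to mimic a reconstruction are placeholders.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def handcrafted_leakage_scores(original: np.ndarray, reconstructed: np.ndarray) -> dict:
    """Pixel-level similarity scores commonly used as privacy-leakage proxies.

    Both inputs are HxWxC float arrays in [0, 1]. Higher PSNR/SSIM (or lower MSE)
    is conventionally read as a better reconstruction, i.e. more leakage.
    """
    mse = float(np.mean((original - reconstructed) ** 2))
    psnr = peak_signal_noise_ratio(original, reconstructed, data_range=1.0)
    ssim = structural_similarity(original, reconstructed, channel_axis=-1, data_range=1.0)
    return {"MSE": mse, "PSNR": psnr, "SSIM": ssim}

# Toy usage with random arrays standing in for an original/reconstruction pair.
rng = np.random.default_rng(0)
x = rng.random((32, 32, 3))
x_rec = np.clip(x + 0.1 * rng.standard_normal(x.shape), 0.0, 1.0)
print(handcrafted_leakage_scores(x, x_rec))
```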
In Figure 1, we show examples where hand-crafted evaluation metrics, such as PSNR and SSIM, and the CNN-feature-based learned perceptual image patch similarity (LPIPS) [37], give privacy assessments of reconstructed images that differ from human perception. For example, in **(A)**, the reconstructed image that is recognizable (privacy-leaked) by annotators is evaluated as better privacy-preserved by PSNR, SSIM, and LPIPS. In **(B)**, some of these metrics sometimes provide judgments consistent with human annotators, but their evaluation accuracy remains unstable across different images. In light of the above discussions, this paper raises the question: _is model privacy preservation ability as measured by existing metrics faithful to human perception?_ To answer this question, we conduct extensive experiments to study the correlation between model privacy-preserving ability as measured by human perception and as measured by existing evaluation metrics. Specifically, for each reconstructed image, we ask five independent annotators whether the reconstruction is recognizable. We use the average annotator responses over the test set as human perception of privacy information leakage. Across a wide range of scenarios (5 datasets of different concepts, many different classification models and 4 reconstruction attack methods), we find that there is only a weak correlation between human perception and existing metrics. It suggests that a model determined as less vulnerable to reconstruction attacks by existing metrics may actually reveal more private information as judged by humans. Recognizing such discrepancy, we propose a new learning-based metric, semantic similarity (SemSim), to measure model vulnerability to reconstruction attack. Using binary human labels that indicate whether a reconstructed image is recognizable, we train a simple neural network with a standard triplet loss function. For an unseen pair of images, we extract their features from the neural network and compute their \(\ell_{2}\) distance, which is referred to as the SemSim score. If a model has a low (resp. high) average SemSim score, it is considered to have a high (resp. low) risk of privacy leakage. We experimentally show that model vulnerability to reconstruction attacks, as ranked by SemSim, has a much stronger correlation with human perception than when ranked by existing metrics. Our main contributions are summarized below. Figure 1: **Inconsistency between existing metrics and human judgements on privacy information leakage.** For each original image, we present two reconstructions produced by InvGrad [7]. Below the reconstructed images, each colored entry corresponds to a different metric and indicates that the corresponding metric evaluates that reconstruction as having more information leakage. In **(A)**, according to PSNR, MSE, SSIM and LPIPS, the first reconstructed image is evaluated to have _more_ privacy leakage [7; 6] than the second one (_i.e._, the first one has higher PSNR and SSIM values, and lower MSE and LPIPS values). However, human annotators perceive the first image as having _less_ privacy leakage, since they cannot recognize its content (in contrast to the second reconstruction, which is recognizable and therefore judged to leak more information). _Such inconsistency in privacy assessment is our key observation and motivation_. Moreover, we observe in **(B)** that even these metrics themselves often disagree with each other.
* We find model privacy leakage against reconstruction attacks measured by existing metrics is often inconsistent with human perception. * We propose SemSim, a learning-based and generalizable metric to assess model vulnerability to reconstruction attack. Its strong correlation with human perception under various datasets, classifiers and attack methods demonstrates its effectiveness. * We collect human perception annotations on whether privacy is preserved for 5 datasets, 14 different architectures of each set, and 4 reconstruction methods. These annotations will become valuable benchmarks for future study and has been made available at [https://sites.google.com/view/semsim](https://sites.google.com/view/semsim). ## 2 Related Work **Image quality and similarity metrics** are usually used to indicate the performance of reconstruction attack approaches [40; 41; 39] and also in privacy assessment [6; 35; 31] of methods against reconstruction attacks. These metrics can be broadly categorized into pixel-level and perceptual metrics. Pixel-level metrics, such as PSNR [13; 28] and MSE [33], evaluate differences between pixel values of the original and reconstructed images [6; 35; 31], to reflect the degree of privacy leakage. Perceptual metrics, such as SSIM [34] and LPIPS [37] are designed to take into account the perceptual quality of images for privacy leakage evaluation [12]. This paper examines the effectiveness of these metrics in privacy leakage evaluation and finds they exhibit weak correlation with human annotations. **Reconstruction attacks**[41; 7; 39; 40] aim to recover the training samples from the shared gradients. Phong [25] show provable reconstruction feasibility on a single neuron or single layer networks, which provide theoretical insights into this task. Wang [32] propose an empirical approach to extract single image representations by inverting the gradients of a 4-layer network. Meanwhile, Zhu [41] formulate this attack as an optimization process in which the adversarial participant searches for optimal samples in the input space that can best match the gradients. They employed the L-BFGS [18] algorithm to implement this attack. Zhao [39] extend the approach with a label restoration step, hence improving speed of single image reconstruction. We focus on model privacy assessment against reconstruction attacks and evaluate different metrics using several attack methods. **Human perception annotations** play an essential role in evaluating machine learning models [21; 19; 26]. Most public test sets, such as the ImageNet [1] dataset from the computer vision, are annotated by humans, allowing for conventional evaluation. Moreover, human feedback has been used to improve machine learning models, such as InstructGPT [23]. In fields where human annotations were expensive to obtain,, medical image analysis [36] and image generation [27], there is increasing evidence that the human judgements or evaluation is valuable and offers new insights. In our paper, we consider the information leakage of reconstructed images ## 3 Privacy Assessment Metrics on Reconstructed Images: A Revisit **Pipeline of privacy assessment on the reconstructed images.** As shown in Figure 2, the goal of evaluation is to compare privacy risks of a series of \(K\) image classification models \(\{\mathcal{M}_{k}\}_{k=1}^{K}\), under reconstruction attacks. The evaluation process simulates stealing data from gradients [41; 39]. 
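To give an intuition for what such an attack does, the following is a minimal, illustrative sketch of the gradient-matching idea (in the spirit of DLG [41]); it is not the attack implementation evaluated in this paper, and the model, input shape, and optimizer settings are placeholders.

```python
import torch
import torch.nn.functional as F

def gradient_matching_attack(model, true_grads, input_shape, num_classes,
                             steps=200, lr=0.1):
    """Optimize a dummy image and soft label so that the gradients they induce
    on the shared model match the intercepted gradients (DLG-style sketch)."""
    dummy_x = torch.randn(1, *input_shape, requires_grad=True)
    dummy_y = torch.randn(1, num_classes, requires_grad=True)
    opt = torch.optim.Adam([dummy_x, dummy_y], lr=lr)
    params = [p for p in model.parameters() if p.requires_grad]

    for _ in range(steps):
        opt.zero_grad()
        logits = model(dummy_x)
        # Cross-entropy of the model output against the (learned) soft label.
        loss = -(F.softmax(dummy_y, dim=-1) * F.log_softmax(logits, dim=-1)).sum()
        grads = torch.autograd.grad(loss, params, create_graph=True)
        # Gradient-matching objective: squared distance to the intercepted gradients.
        match = sum(((g - tg) ** 2).sum() for g, tg in zip(grads, true_grads))
        match.backward()
        opt.step()
    return dummy_x.detach()
```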
Its input consists of an original image set \(\mathcal{X}=\{\mathbf{x}_{i}\in\mathbb{R}^{m\times n}\}_{i=1}^{N}\), where \(N\) is the number of images, and a reconstruction algorithm \(\mathcal{A}\) used to attack models \(\{\mathcal{M}_{k}\}_{k=1}^{K}\). Given a target model \(\mathcal{M}\)1, whose parameter weights are denoted by \(\mathcal{W}\), its gradients \(\nabla\mathcal{W}_{\mathcal{X}}\) can be calculated using the original data \(\mathcal{X}\). The attack algorithm \(\mathcal{A}\) is applied to the target model \(\mathcal{M}\) and its gradients \(\nabla\mathcal{W}_{\mathcal{X}}\) to obtain a set of reconstructed images denoted by \(\bar{\mathcal{X}}:=\mathcal{A}(\mathcal{M},\nabla\mathcal{W}_{\mathcal{X}})= \{\bar{\mathbf{x}}_{i}\}_{i=1}^{N}\). Note that, \(\mathcal{A}\) can access the gradients, but has not access to \(\mathcal{X}\). We can evaluate the privacy leakage of a target model \(\mathcal{M}\) over the original set of \(\bar{\mathcal{X}}\) as follows: Footnote 1: Unless explicitly stated otherwise, the subscript of \(\mathcal{M}\) is omitted when this does not create ambiguity. \[\text{PL}(\mathcal{M}):=\text{InfoLeak}(\mathcal{X},\bar{\mathcal{X}})=\text {InfoLeak}(\mathcal{X},\mathcal{A}(\mathcal{M},\nabla\mathcal{W}_{\mathcal{X} })), \tag{1}\] where \(\text{InfoLeak}(\cdot,\cdot)\) represents the amount of information leakage in reconstructed images. Therefore, it is important to have an effective metric for indicating \(\text{InfoLeak}(\cdot,\cdot)\). **Information leakage formulation.** As introduced in Section 2, information leakage is often assimilated to reconstruction quality and is based on a distance between an original image \(\mathbf{x}_{i}\) and its reconstructed counterpart \(\tilde{\mathbf{x}}_{i}\). Under such pointwise metric, InfoLeak\((\cdot,\cdot)\) of an image set \(\mathcal{X}\) and its reconstructed set \(\bar{\mathcal{X}}\) can be defined as: \[\text{InfoLeak}(\mathcal{X},\bar{\mathcal{X}})=\frac{1}{N}\sum_{i=1}^{N}d( \mathbf{x}_{i},\tilde{\mathbf{x}}_{i}), \tag{2}\] where \(d\) can be a hand-crafted metric, such as MSE [33], PSNR [13; 28] or SSIM [34], or model based, such as LPIPS [37]. Equation (2) averages the distances or similarities over all the original - reconstructed image pairs to obtain the information leakage score of the attacked model \(\mathcal{M}\). Apart from these, we can also use Frechet Inception Distance (FID) [9]. It measures information leakage as the distribution difference between original and reconstructed images: InfoLeak\((\mathcal{X},\bar{\mathcal{X}})\propto\text{FID}(\mathcal{X},\bar{ \mathcal{X}})\). ## 4 Diagnosis of Existing Metrics and Our Proposal ### Collecting human assessment of privacy leakage from reconstructed images To evaluate whether a reconstructed image leaks privacy, human perception offers very useful judgement. In the context of image recognition and face recognition, it is to determine if the human can still recognize the reconstructed object or face. For **image classification**, given an image, we provide human annotators with an incomplete list of classes. For example, for the CIFAR-100 dataset, instead of providing annotators with a list of all the 100 classes which are hard to memorize, we provide them with a list of the top-20 possible classes that includes the ground truth. We request annotators to annotate the class of a given image. 
If the annotator thinks the image is "incomprehensible" (_i.e._, severely blurry) or the right class does not appear in the candidate list, then the annotation is 'none'. We compare the human annotations between an image and its reconstructed version. If they are the same, privacy is not preserved; otherwise, privacy is preserved. The annotation pipeline and more details of the annotation process are provided in the supplementary material. For **face recognition and fine-grained image recognition**, because it is by nature very difficult for a human to assign a class label from 20 candidates, we give annotators two images at a time: an original image and its reconstruction. We then ask the annotator to tell whether the two images contain the same person or category. If yes, then privacy is not considered as preserved; otherwise, it is. Figure 2: **Task definition: privacy leakage assessment on reconstructed images. Given \(K\) classification models \(\mathcal{M}_{1},\mathcal{M}_{2},...,\mathcal{M}_{K}\) against the image reconstruction attack \(\mathcal{A}\) (we use \(K=3\) as an example in this figure) on a set of original images \(\mathcal{X}\). For each model, we get a set of reconstructed images. The main goal of privacy leakage assessment on reconstructed images is to measure whether semantic information of an original image is still accessible. We can ask human annotators to evaluate whether they can recognize the image class and then average across the set of images to obtain the overall human evaluation score of privacy leakage. In the existing literature, image quality metrics, such as PSNR, are used to measure privacy leakage. Here, the evaluation of example images shows again that PSNR deviates from human evaluation.** Note that in this procedure, to mitigate the potential bias of annotators, we also include reconstructed images that are not paired with the original image. In all the above procedures, each image or image pair is labeled by 5 independent annotators. Binary labels, _i.e._, whether a reconstructed image is recognizable, are obtained via majority voting. In this study, we deal with five datasets: CIFAR-100, Caltech-101, Imagenette, CelebA, and Stanford Dogs2. For each classification model being attacked, we annotate 600, 700, and 100 reconstructed images for the CIFAR-100, Caltech-101, and the other three datasets, respectively. Footnote 2: The new annotated dataset is distributed under license CC BY-NC 4.0, which allows others to share, adapt, and build upon the dataset and restricts its use for non-commercial purposes. ### Correlation analysis between human perception and existing metrics Examples from Figure 1 motivate us to conduct a more comprehensive analysis of the inconsistency between human perception and existing metrics in terms of privacy leakage. To this end, for the reconstructed image sets of 14 target models, we plot their privacy risk measured by various metrics against the collected human labels in Figure 3 **(B)**. We find that the correlation strength between human evaluation and existing metrics is relatively weak. For example, Kendall's rank correlation \(\tau\), which measures rank consistency, is only 0.2904 and 0.3978 for SSIM and LPIPS, respectively. Even in the best case, _i.e._, FID _vs_ human, the correlation is only moderate with \(\tau=0.5604\). This signifies that a model identified as more robust against reconstruction attacks based on existing metrics may actually be perceived as highly vulnerable according to human judgment when comparing different models.
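The correlation analysis above can be reproduced in outline with standard statistical tools. In the sketch below, the per-model scores are placeholder numbers for illustration only, not the measurements reported in this paper.

```python
import numpy as np
from scipy.stats import spearmanr, kendalltau

# Toy privacy-leakage scores for a handful of models (placeholder numbers).
# "human" is the fraction of reconstructions annotators could recognize;
# the metric rows are set-averaged PSNR and LPIPS values for the same models.
human = np.array([0.10, 0.35, 0.20, 0.55, 0.40, 0.80, 0.65])
psnr  = np.array([14.2, 15.1, 16.8, 15.9, 17.5, 18.3, 16.1])
lpips = np.array([0.62, 0.58, 0.49, 0.55, 0.44, 0.31, 0.50])

for name, metric in [("PSNR", psnr), ("LPIPS", lpips)]:
    rho, _ = spearmanr(metric, human)
    tau, _ = kendalltau(metric, human)
    print(f"{name} vs human: Spearman rho = {rho:+.3f}, Kendall tau = {tau:+.3f}")
```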
The primary issue lies in the fact that existing metrics are computed on either a pixel-wise or patch-wise basis, without considering the semantic understanding of privacy leakage. As a result, these metrics fail to accurately capture the image semantics related to privacy risks. This problem motivates us to design privacy-oriented metrics to better assess privacy leakage. ### Proposed metric To obtain a metric that is more faithful to human perception, we propose SemSim, a learning-based metric using human annotations as training data. The pipeline of SemSim is presented in Figure 4. **Training.** Using binary human labels whether a reconstructed image is recognizable, we train a simple neural network \(f_{\theta}\) with a standard triplet loss function. We take the original image \(\mathbf{x}_{i}\) as an anchor Figure 3: **Correlation between existing metrics and their alignment with human perception in measuring privacy risk. \(14\) classification models are attacked by InvGrad [7] on the CIFAR-100 dataset. Each subfigure presents the correlation between the rankings of model privacy leakage obtained by two metrics. The correlation strength is measured by Spearman’s rank correlation (\(\rho\)) [30] and Kendall’s rank correlation (\(\tau\)) [15]. Between existing metrics, (A) indicates that correlation is sometimes very weak. Furthermore, (B) indicates that the correlation between existing metrics and human perception is generally weak.** and split its reconstructions into positive \(\bar{\mathbf{x}}_{i}^{+}\) and negative \(\bar{\mathbf{x}}_{i}^{-}\) samples based on human annotations. The loss function is \(L=\sum_{i=1}^{N}\max\{d(\mathbf{x}_{i},\bar{\mathbf{x}}_{i}^{+})-d(\mathbf{x}_{ i},\bar{\mathbf{x}}_{i}^{-})+\alpha,0\}\), where \(\mathbf{x}_{i}\) is an original image and \(\bar{\mathbf{x}}_{i}^{+}\) (resp. \(\bar{\mathbf{x}}_{i}^{-}\)) stands for one of its recognizable (resp. unrecognizable) reconstruction, and \(\alpha\) is the margin. Thus, we obtain our neural network \(f_{\mathbf{\theta}}\) trained on human-annotated datasets. **Inference.** During the evaluation, \(f_{\mathbf{\theta}}\) is used for extracting features for original and reconstructed images. We calculate the \(\ell_{2}\) distance between their feature vectors, that is \(SemSim(\mathbf{x},\bar{\mathbf{x}})=\ell_{2}(f_{\mathbf{\theta}}(\mathbf{x}),f_{ \mathbf{\theta}}(\bar{\mathbf{x}}))\), and then average this score over test set as the overall model performance score. **Key Observations.** We believe SemSim captures semantic information, which plays a crucial role in privacy preservation. There are several key factors contributing to its effectiveness. (1) Being trained on human annotations enables SemSim to capture privacy leakage semantics better than metrics based on pixel-level similarity or patch CNN features. (2) By utilizing a CNN model that extracts relevant higher-level features, SemSim captures visual information related to information leakage effectively. (3) It incorporates the relationship between the original image and recognizable/unrecognizable reconstructions, improving its accuracy in assessing privacy leakage and providing better privacy assessment. SemSim has a limitation in that it requires annotated data for training. While we show that it is very generalizable and can work better than existing metrics with limited training data (refer to Figure 7), we prioritize our future work to annotate more data for even improved generalization. 
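To make the training and inference procedure described in this section concrete, here is a minimal sketch of the SemSim pipeline. It assumes a toy convolutional backbone and placeholder hyperparameters rather than the ResNet50 setup used in our experiments (see Section 5).

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Toy feature extractor f_theta; the experiments use a ResNet50 backbone."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 16, feat_dim),
        )
    def forward(self, x):
        return self.net(x)

f_theta = Encoder()
triplet = nn.TripletMarginLoss(margin=1.0)   # the margin alpha of the triplet loss
opt = torch.optim.SGD(f_theta.parameters(), lr=0.1)

def train_step(anchor, positive, negative):
    """anchor: original images; positive: recognizable reconstructions;
    negative: unrecognizable reconstructions (labels come from human annotation)."""
    opt.zero_grad()
    loss = triplet(f_theta(anchor), f_theta(positive), f_theta(negative))
    loss.backward()
    opt.step()
    return loss.item()

@torch.no_grad()
def semsim(original, reconstructed):
    """SemSim score: l2 distance between learned features. A lower distance,
    averaged over the test set, indicates higher privacy leakage."""
    return torch.norm(f_theta(original) - f_theta(reconstructed), dim=1)
```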
### Discussions **Are there other ways than human perception to assess privacy leakage?** Yes. We can use a classification model trained on a dataset that contains the same categories as the reconstruction set to classify the reconstructions. If the model accurately predicts the categories, it indicates a potential privacy leakage. In our preliminary study, we used two models trained on the CIFAR-100 dataset, achieving accuracies of 82% and 65% on the test set, respectively, for recognizing reconstructed images. By using their recognition accuracies as indicators of privacy leakage, we obtained Kendall's rank correlation coefficients of 0.7023 and 0.5044 with human evaluation, respectively. These results are considered acceptable. However, there are limitations to this approach. The classification model must be trained on a dataset that matches the categories of the task, and it needs to be accurate. These limitations affect the scope of this method. Nonetheless, exploring the use of classifiers to evaluate privacy risk offers an alternative viewpoint to human perception, and it merits further investigation **Is privacy leakage on reconstructed images a binary problem?** No. We simplify this problem by binarizing it. It can be continuous, where privacy information is leaked to a greater or lesser degree, depending on various factors such as the task and the type and amount of data that is leaked. **How to define privacy leakage on reconstructed images in other vision tasks?** The definition depends on the task context. For example, in object counting [22], privacy information can be defined as the number of objects. Therefore, for different tasks, the definition of privacy leakage should be carefully designed and accompanied by a tailored evaluation method. **Relationship between image quality and private leakage of reconstructed images.** The relationship between the image quality of a reconstructed image and its information leakage is complex. While better image quality can indicate better reconstruction performance, it does not necessarily imply higher privacy leakage. Conversely, a reconstructed image with poor image quality can still contain private information, while an image with higher quality may preserve privacy better. Therefore, the relationship between image quality and privacy leakage is not always straightforward and Figure 4: **Training and inference pipeline of SemSim.** Feature extractor \(f_{\mathbf{\theta}}\) is trained on human-annotated images with a triplet loss [29]. An original image \(\mathbf{x}\) is used as anchor, and its reconstructions are split into positive (recognizable) and negative (unrecognizable) samples based on human annotations (Section 4). The goal is to minimize the anchor distance to positive samples and maximize that to negative ones. During inference, given an original image and its reconstruction, we use \(f_{\mathbf{\theta}}\) to extract their features and compute the \(\ell_{2}\) distance between the two features. requires careful consideration and evaluation. These discussions also encourage us to explore new metrics that incorporate semantic-level information in order to better assess privacy leakage. **Limitation and potential improvement methods for Semsim.** One limitation of SemSim is its potential performance decrease when faced with significant distributional shifts. To address this limitation, we can annotate diverse data types to enhance the adaptability of Semsim to a wider range of domain variations. 
Additionally, exploring other strategies, such as incorporating local image regions and utilizing multi-valued annotated training data, could also be considered to further enhance the effectiveness of SemSim. ## 5 Experiments **Experimental Setups** **- Datasets.** We evaluate using the CIFAR-100 [17], Caltech-101 [4], Imagenette [1]3, CelebA [20], and Stanford Dogs [16] datasets. The first three are for generic object recognition, CelebA is for face recognition, and Stanford Dogs is a fine-grained classification dataset. **- Classification models.** We use the following backbones: ResNet20, ResNet50, ResNet152 [8], DenseNet [11] and 8-layer CoveNet [6]. They were trained using different strategies, such as data augmentation [6], gradients with Gaussian/Laplacian noise [41], and layer-wise pruning techniques [3]. In total, there are 70 different models. Details are provided in the _supplementary material_. **- Reconstruction attack methods.** We mainly use InvGrad [7]. In the ablation study, we evaluate SemSim using four additional attack methods, including DLG [41], CAFE [14], and GradAttack [12]. **- Correlation strength measurements.** We use two rank correlation coefficients: Spearman's rank correlation \(\rho\)[30] and Kendall's rank correlation \(\tau\)[15] to measure the consistency between different metrics with human perception. Values of \(\rho\) and \(\tau\) are between \([-1,1]\). Being closer to -1 or 1 indicates a stronger correlation, and 0 means no correlation. **Implementation Details** **- Classification model training.** The training of all the models to be evaluated was conducted using the PyTorch framework. The details of the classifier training, such as the specific architectures and hyperparameters used for each model, are provided in the _supplementary material_. We perform model training with one RTX-2080TI GPU and a 16-core AMD Threadripper CPU @ 3.5GHz. **-SemSim model training.** In the main evaluation, SemSim is trained using a learning rate of 0.1 and a batch size of 128 on the ResNet50 architecture for 200 epochs. We use leave-one-out evaluation on the 5 datasets. Some examples of the annotation data are provided in Figure 1 and Figure 5. ### Main Evaluation **Inconsistency between existing metrics and human perception: more results.** On each of the five test sets, we rank the 14 models according to each of the existing metrics as well as human perception. Figure 5: **Sample annotation results.** For each original image (leftmost column), its reconstructed images are placed left to right by their PSNR values from large to small. The red cross denotes that the human annotator fails to recognize the image. We observe that human evaluation is inconsistent with PSNR ranking, _e.g._, some images that are top-ranked, or equivalently determined as high quality by PSNR are actually not recognizable by humans. The model ranking of each metric is correlated with that from human assessment. We find that PSNR, MSE, SSIM, LPIPS, and FID do not have a high correlation with human assessment. The worst performing metric is FID: Kendall's \(\tau\) is only -0.1556, -0.4252, 0.0989, and -0.3196 between FID and human perception, on the four test sets, respectively. While the rest four metrics exhibit a stronger correlation than FID, Kendall's \(\tau\) is generally around 0.5, which is considered only moderate. Moreover, from Figure 3, we find that the correlation between existing metrics themselves is often weak. 
For example, In Figure 3 right, Kendall's \(\tau\) is only -0.2904 between PSNR and LPIPS. This contradiction also exists between PSNR and LPIPS and others. The above results advocate the study of new metrics that are privacy oriented. **Comparing SemSim with existing metrics in terms of faithfulness to human perception.** We utilize SemSim to rank the models and examine its correlation with the ranking based on human perception, as shown in Table 1. We make two key observations. First, SemSim exhibits a much stronger correlation with human perception. On the five test sets, _Kendall's \(\tau\) is -0.7143, -0.6889, -0.7012, -0.6923, and -0.5938, respectively_, which is 0.2418, 0.1333, 0.2663, 0.1319 and 0.2401 higher than PSNR, for example. The above results suggest the risks of current metrics in the community and advocate the proposed learning-based, privacy-oriented metric. Second, on Stanford Dogs, while SemSim is still much more faithful to human perception than other metrics, the overall correlation is lower than other datasets. Because dog species are hard to recognize, more noise was introduced to human annotation and thus to the ranking results and correlation. We speculate that fine-grained datasets are harder for privacy interception through reconstruction: humans themselves will find it hard to recognize the private content. **Generalization ability of SemSim.** In Table 1, we adopt a leave-one-out setup, where SemSim is trained on four datasets and tested on the fifth dataset. Moreover, for each dataset, the model architectures are different. For example, when using CelebA as a test set, the tested target models are ResNet50 and DesNet _etc_, while target models in training are Resnet20, 8-layer CoveNet and ResNet152 _etc_. As such, the superior results in Table 1 demonstrate the generalization ability of SemSim for test sets and model architectures. \begin{table} \begin{tabular}{l|c|c c c c c|c} \hline \hline Datasets & Metrics & PSNR & MSE & SSIM & LPIPS & FID & **SemSim** \\ \hline \multirow{2}{*}{CIFAR-100} & Spearman’s \(\rho\) & 0.6703 & -0.6176 & 0.3939 & -0.5127 & **-0.7363** & **-0.8637** \\ & Kendall’s \(\tau\) & 0.4725 & -0.4286 & 0.2904 & -0.3978 & **-0.5604** & **-0.7143** \\ \hline \multirow{2}{*}{Caltech-101} & Spearman’s \(\rho\) & 0.6970 & **-0.7349** & 0.7218 & -0.5127 & -0.2242 & **-0.8182** \\ & Kendall’s \(\tau\) & 0.5556 & **-0.5525** & 0.5244 & -0.4072 & -0.1556 & **-0.6889** \\ \hline \multirow{2}{*}{Imagenette} & Spearman’s \(\rho\) & 0.5382 & -0.6395 & 0.6433 & **-0.6539** & -0.4791 & **-0.8257** \\ & Kendall’s \(\tau\) & 0.4349 & -0.5525 & 0.5108 & **-0.5922** & -0.4252 & **-0.7012** \\ \hline \multirow{2}{*}{CelebA} & Spearman’s \(\rho\) & **0.7495** & -0.7349 & 0.6846 & -0.5824 & -0.1516 & **-0.8263** \\ & Kendall’s \(\tau\) & **0.5604** & -0.5525 & 0.5264 & -0.4505 & -0.0989 & **-0.6923** \\ \hline \multirow{2}{*}{Stanford Dogs} & Spearman’s \(\rho\) & 0.4023 & -0.3968 & **0.4782** & -0.5031 & -0.3969 & **-0.7120** \\ & Kendall’s \(\tau\) & 0.3537 & -0.2743 & **0.3048** & -0.3929 & -0.3196 & **-0.5938** \\ \hline \hline \end{tabular} \end{table} Table 1: **Comparison of different metrics on different datasets**. For each metric, we rank the 14 models and compute the correlation with rankings made by human assessment. For each test set (Column 1), SemSim is trained on the combination of the rest four datasets. Here, InvGrad [7] attack is used. \(\rho\) and \(\tau\) are reported. SemSim has a much stronger correlation with human annotations. 
\begin{table} \begin{tabular}{l|c|c c c c|c} \hline \hline Attacks & Metrics & PSNR & MSE & SSIM & LPIPS & FID & **SemSim** \\ \hline \multirow{2}{*}{DLG [41]} & Spearman’s \(\rho\) & 0.6515 & -0.6367 & 0.4069 & -0.5477 & **-0.7268** & **-0.8749** \\ & Kendall’s \(\tau\) & 0.4857 & -0.4174 & 0.2858 & -0.4294 & **-0.5237** & **-0.7342** \\ \hline \multirow{2}{*}{CAFE [14]} & Spearman’s \(\rho\) & **0.7104** & -0.6916 & 0.5870 & -0.6793 & -0.6925 & **-0.8864** \\ & Kendall’s \(\tau\) & **0.5392** & -0.4259 & 0.3318 & -0.4762 & -0.4735 & **-0.7510** \\ \hline \multirow{2}{*}{GradAttack [12]} & Spearman’s \(\rho\) & 0.6831 & -0.6944 & 0.5753 & -0.6841 & **-0.7204** & **-0.8437** \\ & Kendall’s \(\tau\) & 0.4943 & -0.4980 & 0.3495 & -0.4531 & **-0.4819** & **-0.7260** \\ \hline \hline \end{tabular} \end{table} Table 2: **Comparison of different metrics under different attacks on the CIFAR-100 dataset.** SemSim is trained using human annotations obtained through the InvGrad [7] attack method and evaluated on different attack methods listed in the table. Furthermore, we use SemSim to evaluate model vulnerability to unseen attacks. Results are provided in Table 2, where SemSim is trained using human annotations obtained through the InvGrad [7] attack method and evaluated on other attack methods such as DLG, CAFE, and GradAttack. Remarkably, we consistently observe higher correlation between SemSim and human perception compared to existing metrics. On the CIFAR-100 dataset, we observe significant improvements in Kendall's \(\tau\) of -0.7342, -0.7510, and -0.7260, respectively, for DLG, CAFE, and GradAttack. These findings demonstrate the robustness of SemSim in capturing the privacy leakage of reconstructed images across different reconstruction attacks. **Visualization results of SemSim.** Figure 6 presents two examples of ranking reconstruction images using PSNR and SemSim. In both cases, SemSim outperforms PSNR and provides better results. ### Further Analysis **Impact of the number of human annotations on SemSim training.** To evaluate this impact, we use the CIFAR100 dataset for testing and randomly select human-annotated training samples from the rest four datasets to train SemSim. Results are shown in Figure 7 (**A**). We observe a correlation drop between SemSim and human perception as the number of training samples decreases. However, even with as few as 50 training samples (each samples includes 14 reconstructed images), SemSim outperforms existing metrics like PSNR and FID. **Impact of different backbones for SemSim.** As mentioned in the implementation details, SemSim uses a simple ResNet50 network. Here, we try several different opinions such as LeNet and ResNet18, and present their correlation with human perception in Figure 7 (**B**). We show that even a simple LeNet model can achieve \(\tau\) scores higher than 0.65, surpassing the best score of 0.5604 obtained by FID. Moreover, we observe that there is a correlation between the complexity of the backbone Figure 6: **Comparing the ranking of reconstruction images using PSNR and SemSim. From the visualizations, we can observe that PSNR exhibits some inconsistencies with human perception, while SemSim consistently aligns with the judgments of human annotators. In the two examples, SemSim correctly ranks all the images with noticeable information leakage (including a laptop or chair) before the ones without or with less information leakage (that are unrecognizable). 
However, the rankings provided by PSNR are inaccurate for some images.** Figure 7: **Analysis of SemSim. Evaluating the impact of (A) size of human-annotated training data, (B) variations of SemSim backbones, and (C) different loss functions for SemSim training. Experiments are conducted on the CIFAR-100 dataset.** architectures and the performance of SemSim. This indicates that utilizing more advanced and sophisticated backbone models may be able to further enhance SemSim to capture and represent visual information, leading to improved evaluation of privacy leakage in reconstructed images. **Impact of other loss functions for SemSim training.** We further experiment with different loss functions and hyperparameters, including the contrastive loss and the triplet loss (where we set \(\alpha=1\) in experiments). From the results shown in Figure 7**(C)**, we observe that the triplet loss shows a comparable correlation strength to the contrastive loss in relation to human assessment. ## 6 Conclusion This paper investigates the suitability of existing evaluation metrics when privacy is leaked by a reconstruction attack. We first collect comprehensive human perception annotations on whether a reconstructed image leaks information from the original image. We find that model vulnerability to such attacks measured by existing metrics such as PSNR has a relatively weak correlation with human perception, which poses a potential risk to the community. We then propose SemSim trained on human annotations to address this problem. On five test sets, we show that SemSim has much stronger faithfulness to human perception than existing metrics. Such faithfulness remains strong when SemSim is used for different model architectures, test categories, and attack methods, thus validating its effectiveness. In future work, we will collect human perception labels from a wider source of datasets and train a more generalizable metric for privacy leakage assessment.
Hand-crafted image quality metrics such as PSNR and SSIM are commonly used to evaluate model privacy risk under reconstruction attacks. Under these metrics, reconstructed images judged to resemble the original one generally indicate more privacy leakage, while images judged as overall dissimilar indicate higher robustness against the attack. However, there is no guarantee that these metrics faithfully reflect human opinions, which provide a more trustworthy judgement of model privacy leakage. In this paper, we comprehensively study the faithfulness of these hand-crafted metrics to human perception of the privacy information contained in reconstructed images. On 5 datasets ranging from natural images and faces to fine-grained classes, we use 4 existing attack methods to reconstruct images from various classification models and, for each reconstructed image, ask multiple human annotators to assess whether the image is recognizable.
2309.05882
Fluid Dynamic Simulations of Mach and Regular Reflections in Oblique Shock-Wave Configurations using Adaptive Mesh Refinement
In the context of the interaction between a moving plane shock wave and an inclined wall (wedge), it is possible to distinguish four distinct shock reflection configurations. These shock wave reflections, which depend on the characteristics of the incident shock wave and the geometry of the surface that it interacts with, are (i) regular reflection (RR), (ii) simple Mach reflection (SMR), (iii) transition Mach reflection (TMR), and (iv) double Mach reflection (DMR). The impact of these shock reflections on flow properties can be significant so understanding them is important when predicting the behavior of shock waves in more complex flow configurations. Previous research works have explored the referred shock reflections through both numerical and experimental approaches, employing various gases and different flow and geometrical configurations. The present study involves the use of a high-fidelity computational fluid dynamics (CFD) tool, known as PeleC, which is a compressible solver based on AMReX specifically designed to handle complex flow configurations. Accordingly, by solving the time-dependent Euler equations for various 2D flow configurations, this work studies shock wave reflections accounting for four different Mach-based operating conditions and compares and analyzes the resulting density profiles on the wedge wall with experimental data. To strike a balance between model accuracy and computational efficiency, adaptive mesh refinement (AMR) is incorporated, and a mesh independence study is performed by varying the number of AMR levels. The results of this study demonstrate the capabilities of the CFD tool employed as it accurately predicts the sensitivity of wave characteristics to different operating conditions.
Sebastian Valencia, Cesar Celis, Andres Mendiburu, Luis Bravo, Prashant Khare
2023-09-12T00:05:47
http://arxiv.org/abs/2309.05882v1
# Cob-2023-0803 ###### Abstract In the context of the interaction between a moving plane shock wave and an inclined wall (wedge), it is possible to distinguish four distinct shock reflection configurations. These shock wave reflections, which depend on the characteristics of the incident shock wave and the geometry of the surface that it interacts with, are (i) regular reflection (RR), (ii) simple Mach reflection (SMR), (iii) transition Mach reflection (TMR), and (iv) double Mach reflection (DMR). The impact of these shock reflections on flow properties can be significant so understanding them is important when predicting the behavior of shock waves in more complex flow configurations. Previous research works have explored the referred shock reflections through both numerical and experimental approaches, employing various gases and different flow and geometrical configurations. The present study involves the use of a high-fidelity computational fluid dynamics (CFD) tool, known as PeleC, which is a compressible solver based on AMReX specifically designed to handle complex flow configurations. Accordingly, by solving the time-dependent Euler equations for various 2D flow configurations, this work studies shock wave reflections accounting for four different Mach-based operating conditions and compares and analyzes the resulting density profiles on the wedge wall with experimental data. To strike a balance between model accuracy and computational efficiency, adaptive mesh refinement (AMR) is incorporated, and a mesh independence study is performed by varying the number of AMR levels. The numerical method utilized here is based on a finite volume discretization, involving approximate Riemann solvers. Temporal and spatial integration is performed using the method of lines (MOL), a second-order characteristic-based spatial method, coupled with a Runge-Kutta time integration. The time step obeys a specified Courant-Friedrichs-Lewy (CFL) condition of 0.3. The results of this study demonstrate the capabilities of the CFD tool employed as it accurately predicts the sensitivity of wave characteristics to different operating conditions. The findings of this work will serve as a foundation for future studies involving more complex flow configurations such as those featuring detonation waves. Wedge flows, Shock waves, Wave propagation and interaction, Euler equations, AMR. ## 1 Introduction Shock wave reflection is a fundamental phenomenon in gas dynamics that has attracted the attention of researchers for over a century. It occurs when a shock wave encounters a surface or another shock wave. The phenomenon was first observed by Ernst Mach in 1878, who experimentally identified two types of reflection configurations, (i) regular reflection (RR) and (ii) Mach reflection (MR) (Mach, 1878). In RR, an incident shock wave and a reflected shock one meet at a reflection point on a reflecting surface. A MR involves in turn one slipstream and three shock waves, the incident, the reflected and the Mach stem ones. Later on, von Neumann proposed the two- and three-shock theories for treating RR and MR, respectively, assuming the flow of an ideal gas to be inviscid (von Neumann, 1943). White and Smith later identified four shock reflection patterns, (i) RR, (ii) single-Mach reflection (SMR), (iii) transitional-Mach reflection (TMR), and (iv) double-Mach reflection (DMR) (White, 1952; Smith, 1945). In SMR, the point of convergence between the incident and reflected shock waves is situated above the wedge. 
At this convergence point, a third shock known as the Mach stem extends towards the surface of the wedge. Additionally, a curved shear layer called the slipstream trails behind the triple shock convergence point as the shocks propagate along the wedge. In DMR, a bend occurs in the reflected shock wave, giving rise to a second Mach stem. TMR marks the onset of double Mach reflection, where the second triple point is barely visible and manifests itself as a slight bend in the reflected shock (Hryniewicki et al., 2016). The detachment criterion between RR and MR was further investigated by Henderson and Lozzi (1975), who established that, if the transition follows the detachment criterion, a discontinuous transition pressure jump exists. Hence, another criterion named the mechanical equilibrium one is defined based on flow configurations that always fulfill the mechanical equilibrium during transition processes between Mach and regular reflections. These criteria resulted in transitional boundaries that distinguish the different shock wave reflection regions (Henderson and Lozzi, 1975). Deschambault and Glass (1983) conducted experimental investigations that demonstrated that, when compared to the ideal gas case, the use of properties of real gases does not significantly affect the four types of shock wave reflections. Indeed, they obtained reliable data for the four shock reflection types in air by utilizing infinite-fringe interferometric techniques (Deschambault and Glass, 1983). Ben-Dor et al. studied in turn a planar shock reflection over a plane double wedge and considered several complicated wave configurations (Ben-Dor et al., 1987). Further studies on the reflection of planar shock waves over different solid walls have been performed in the past both numerically and experimentally (Previtali et al., 2015; Zhang et al., 2016; Geva et al., 2017; Hryniewicki et al., 2017). Accordingly, to assess the capabilities of the computational tool employed here, this work carries out a detailed analysis of regular and Mach reflections using PeleC, a compressible solver based on AMReX. The numerical results obtained in this work are compared with the experimental data from Deschambault and Glass (1983), showing that the numerical model employed here is capable of accurately capturing the supersonic flow characteristics of regular and Mach reflections. ## 2 Theoretical Analysis ### Principles of Regular Reflection Normal shock waves are characterized by a sudden and significant increase in pressure and temperature, as well as a decrease in velocity, across the shock wave. The properties of normal shock waves can be calculated using the Rankine-Hugoniot relations (Houghton, 2017), which relate the upstream and downstream states of the gas to the properties of the shock wave. These relations describe the conservation of mass, momentum, and energy across the shock wave. The properties of normal shock waves depend on the Mach number of the incoming flow \(M_{i}\).
Based on the conventional Rankine-Hugoniot equations, the detailed flow characteristics are computed as follows (Houghton, 2017), \[\frac{\rho_{2}}{\rho_{1}}=\frac{\left(1+\gamma\right){M_{i}}^{2}}{\left(\gamma-1\right){M_{i}}^{2}+2} \tag{1}\] \[\frac{P_{2}}{P_{1}}=\frac{2\gamma{M_{i}}^{2}-\left(\gamma-1\right)}{\gamma+1} \tag{2}\] \[\frac{a_{2}}{a_{1}}=\sqrt{\frac{T_{2}}{T_{1}}}=\sqrt{\frac{P_{2}\rho_{1}}{P_{1}\rho_{2}}} \tag{3}\] \[\frac{U_{2}-U_{1}}{a_{1}}=\frac{2\left({M_{i}}^{2}-1\right)}{M_{i}\left(\gamma+1\right)} \tag{4}\] where the flow properties temperature \(T\), density \(\rho\), pressure \(P\), sound speed \(a\), and velocity \(U\) denoted by the subscript \(1\) characterize the flow conditions before the shock, while those properties denoted by subscript \(2\) correspond to the flow conditions after the shock. \(\gamma\) is in turn the specific heat ratio. ### Analytical Boundaries for RR-MR transition The theoretical boundaries that define the RR-MR transition, i.e., the detachment boundary, the sonic boundary, and the von Neumann boundary, are obtained from the analytical solutions determined by von Neumann (1943). The work of von Neumann was later studied by Henderson and Lozzi (1975), who provided an analytical expression for these transition boundaries. Figure **1** shows that there are three regions determined by the mechanical equilibrium and detachment boundaries, (i) an upper region only for regular reflection, (ii) a dual region for either regular reflection or Mach reflection between the two boundaries, and (iii) a lower region for only Mach reflection (SMR, TMR, and DMR). The physically realizable solution that defines the detachment boundary comes from the two-shock theory (von Neumann, 1943). Expressed in terms of wedge angle \(\theta\) and incoming flow Mach number \(M_{l}\), this solution is given by, \[\cos\theta=\frac{1}{a+2e\cos(f/3)}, \tag{5}\] where, \[\begin{array}{c}a=\frac{1+(\gamma-1)d}{3},\qquad b=2d-d^{3},\qquad c=\gamma d^{2},\qquad d=\frac{2}{\gamma+1}\frac{{M_{l}}^{2}-1}{{M_{l}}^{2}},\qquad e=\sqrt{a^{2}+\frac{b}{3}},\\ f=\cos^{-1}\left(\frac{ab+2a^{3}-c}{2e^{3}}\right),\end{array} \tag{6}\] The mechanical equilibrium criterion in turn is established based on the three-shock theory (von Neumann, 1943). The transition from Mach reflection to regular reflection appears when the triple-point trajectory angle diminishes to zero. In this situation, the Mach stem decreases to an infinitesimal length and the slipstream disappears. The relation between the wedge angle \(\theta\) and incoming flow Mach number \(M_{l}\) is shown to be, \[\cos^{2}\theta=\frac{c}{b+\sqrt{b^{2}-ac}} \tag{7}\] where, \[\begin{array}{c}a=4d+2(\gamma-1)(\gamma+2)d^{2}-(\gamma^{2}-1)d^{3},\qquad b=\gamma+3-\frac{1}{2}(5-\gamma)(\gamma+1)d+2\gamma d^{2},\\ c=4-4d,\qquad d=\frac{2}{\gamma+1}\frac{{M_{l}}^{2}-1}{{M_{l}}^{2}}.\end{array} \tag{8}\] Finally, the sonic boundary is given by a fifth order polynomial in terms of \(\sin^{2}\theta\) versus the inverse incident shock strength \(P_{1}/P_{2}\), and it is not considered here because this boundary lies very close to the detachment one, differing by less than a half of a degree for each given value. Figure 1: Regions of RR and MR patterns separated by analytical transition boundaries for air (Hryniewicki et al., 2016). ## 3 Mathematical Modeling Accounting for two-dimensional flow configurations, the time-dependent Euler equations are solved here.
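Before detailing the numerical treatment of these equations, note that the normal-shock relations of Eqs. (1)-(4) and the transition boundaries of Eqs. (5)-(8) can be evaluated directly. The short Python sketch below is for illustration only and is not part of the PeleC workflow; the gas constant of air and the conditions of Case 3 listed later in Table 1 are assumed as example inputs.

```python
import numpy as np

GAMMA = 1.4  # specific heat ratio for air

def rankine_hugoniot(M_i, p1, T1, rho1, gamma=GAMMA):
    """Post-shock state behind a moving normal shock, Eqs. (1)-(4); U1 = 0 is assumed."""
    rho_ratio = (1.0 + gamma) * M_i**2 / ((gamma - 1.0) * M_i**2 + 2.0)
    p_ratio = (2.0 * gamma * M_i**2 - (gamma - 1.0)) / (gamma + 1.0)
    T_ratio = p_ratio / rho_ratio
    a1 = np.sqrt(gamma * 287.0 * T1)  # speed of sound ahead of the shock (R = 287 J/(kg K) for air, assumed)
    u2 = a1 * 2.0 * (M_i**2 - 1.0) / (M_i * (gamma + 1.0))  # induced flow velocity behind the shock
    return dict(p2=p_ratio * p1, T2=T_ratio * T1, rho2=rho_ratio * rho1, u2=u2)

def detachment_angle(M_i, gamma=GAMMA):
    """Detachment-boundary wedge angle in degrees, Eqs. (5)-(6)."""
    d = 2.0 / (gamma + 1.0) * (M_i**2 - 1.0) / M_i**2
    a = (1.0 + (gamma - 1.0) * d) / 3.0
    b = 2.0 * d - d**3
    c = gamma * d**2
    e = np.sqrt(a**2 + b / 3.0)
    f = np.arccos((a * b + 2.0 * a**3 - c) / (2.0 * e**3))
    return np.degrees(np.arccos(1.0 / (a + 2.0 * e * np.cos(f / 3.0))))

def mechanical_equilibrium_angle(M_i, gamma=GAMMA):
    """Mechanical-equilibrium (von Neumann) boundary wedge angle in degrees, Eqs. (7)-(8)."""
    d = 2.0 / (gamma + 1.0) * (M_i**2 - 1.0) / M_i**2
    a = 4.0 * d + 2.0 * (gamma - 1.0) * (gamma + 2.0) * d**2 - (gamma**2 - 1.0) * d**3
    b = gamma + 3.0 - 0.5 * (5.0 - gamma) * (gamma + 1.0) * d + 2.0 * gamma * d**2
    c = 4.0 - 4.0 * d
    return np.degrees(np.arccos(np.sqrt(c / (b + np.sqrt(b**2 - a * c)))))

# Example: a Mach 2.03 incident shock with the Case 3 conditions of Table 1.
# A 27-degree wedge lies below both boundary angles, i.e. in the Mach-reflection region.
print(rankine_hugoniot(2.03, p1=33_330.6, T1=299.2, rho1=0.387))
print(detachment_angle(2.03), mechanical_equilibrium_angle(2.03))
```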
This choice of using the Euler equations aligns with previous studies conducted by various researchers in the field. Therefore, for a 2D Cartesian coordinate system (x, y), the corresponding transport equations for mass, momentum, and energy expressed in matrix-vector form are given by (Houghton, 2017), \[\frac{\partial\mathbf{U}}{\partial t}+\frac{\partial\mathbf{F}}{\partial x}+\frac{\partial\mathbf{G}}{\partial y}=0, \tag{9}\] where \(t\) stands for physical time. The vector of solution variables, \(\mathbf{U}\), and the inviscid flux vectors, \(\mathbf{F}\) and \(\mathbf{G}\), are in turn given by, \[\mathbf{U}=\begin{pmatrix}\rho\\ \rho u\\ \rho v\\ \rho E\end{pmatrix},\qquad\mathbf{F}=\begin{pmatrix}\rho u\\ \rho u^{2}+p\\ \rho uv\\ (\rho E+p)u\end{pmatrix},\qquad\mathbf{G}=\begin{pmatrix}\rho v\\ \rho uv\\ \rho v^{2}+p\\ (\rho E+p)v\end{pmatrix}, \tag{10}\] where \(\rho\), \(u\), \(v\), and \(p\) represent gas density, velocities along the x and y coordinates, and pressure, respectively. In addition, \(E\) is the total energy including the kinetic (\(E_{k}\)) and internal (\(E_{U}\)) energies, \[E=E_{k}+E_{U}, \tag{11}\] \[E_{k}=\frac{1}{2}(u^{2}+v^{2}), \tag{12}\] \[E_{U}=\frac{R}{\gamma-1}T, \tag{13}\] where \(T\) and \(R\) stand for, respectively, flow temperature and gas constant. ## 4 Numerical Modeling This section highlights the numerical modeling approach utilized here, with specific focus on the solver and numerical schemes employed, the geometric configuration accounted for, and the boundary conditions imposed. ### Solver and numerical schemes To conduct the intended numerical simulations, an open-source AMR-based compressible reacting flow solver, named PeleC (PeleC, 2023), is employed in this work. PeleC solves transport equations for mass, momentum, and energy in the compressible flow regime. PeleC is built on top of the AMReX framework, which provides massively parallel, block-structured adaptive mesh refinement (AMR). The validity of PeleC has been established through its previous successful applications in several standard cases, including the Sod shock tube (Henry de Frahan et al., 2022). The governing equations here are closed with the ideal gas equation of state (EoS) available in the PelePhysics submodule, which provides models and parameters associated with thermodynamics, transport properties, and chemical reactions. The system of partial differential equations solved here is spatially discretized using a second-order finite volume approach. Notice that PeleC supports two different discretization methods, (i) the unsplit piecewise parabolic method (PPM) with optional hybrid PPM WENO variants, and (ii) a second-order characteristic-based spatial method coupled with a Runge-Kutta time integration known as a method of lines (MOL). For the present study, only the latter method has been utilized as it is suitable for complex geometries with embedded boundaries (EB). In addition, for an accurate resolution of the shock waves, adaptive mesh refinement (AMR) is enabled at locations with relatively high density gradients. ### Geometric configuration A two-dimensional computational domain has been adopted here which shares similarities with the one used by Hryniewicki et al. (2016), Figure **2**. More specifically, the computational domain spans a spatial region with x-coordinates ranging from 0 to 4 meters, and y-coordinates ranging from 0 to 0.75 meters. The domain is discretized using a grid featuring 512 cells in the \(x\)-direction and 96 cells in the \(y\)-direction.
It is worth noticing that, in the two-dimensional plane, the grid cells are square shaped with an aspect ratio of unity (\(Dx/Dy=1\)). Besides, the initial position of the shockwave has been established at \(x=0.5\) meters, whereas the rigid wedge is introduced at \(x=3.0\) meters with different wedge angles. To enhance the grid resolution in specific regions of the flow, adaptive mesh refinement (AMR) processes are carried out during the computations. Both to reduce numerical errors and to ensure that grid-independent results are obtained, a mesh independence study was also conducted. The referred study, which main outcomes are summarized in Section 5.1, involved analyzing the results obtained using different AMR levels. ### Boundary and initial conditions It is of particular interest here to analyze the reflected shock waves originated under different incoming flow Mach numbers and wedge angles. To do so, for each of the four cases studied here and listed in Table 1, the initial thermochemical conditions ahead of the incident shock wave (Figure **2**, region 1) are identical to those investigated by Deschambault and Glass (1983). These initial conditions include density, temperature, and pressure of the fluid flow. Furthermore, the flow properties behind the incident shock (Figure **2**, region 2) are determined using the Rankine-Hugoniot equations, Eqs. (1)-(4). Regarding the boundary conditions, all numerical simulations carried out in this work involved the use of a first-order extrapolation (FOExtrap) based outflow boundary condition along the y-direction boundaries. In addition, the upper, lower, and wedge boundaries were set to no-slip wall conditions. Finally, notice that, as shown in Figure **3**, each of the four cases analyzed in this work correspond to a specific shock wave reflection region defined by the detachment and mechanical equilibrium criteria. \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline \hline **CASE** & \(\mathbf{\theta_{w}}\) & \(\mathbf{M_{s}}\) & \(\mathbf{P_{1}(Kpa)}\) & \(\mathbf{T_{1}(K)}\) & \(\mathbf{\rho_{1}\Big{(}\dfrac{k\mathbf{g}}{m^{3}}\Big{)}}\) \\ \hline 1 & 63.4\({}^{\circ}\) & 2.05 & 33.3306 & 298.4 & 0.387 \\ 2 & 60\({}^{\circ}\) & 4.70 & 6.13283 & 298.5 & 0.0712 \\ 3 & 27\({}^{\circ}\) & 2.03 & 33.3306 & 299.2 & 0.387 \\ 4 & 20\({}^{\circ}\) & 7.19 & 7.99934 & 298.5 & 0.0929 \\ \hline \hline \end{tabular} \end{table} Table 1: Initial conditions for the four oblique shock wave reflection cases studied. Figure 2: 2D domain for shock wave reflections of an inclined and rigid wedge with angle \(\mathbf{\theta}\)(Hryniewicki et al., 2016). ## 5 Results and Discussion The main numerical results obtained in this work are presented and discussed in this section. Both qualitative and quantitative analyses of the referred results are carried out. ### Mesh independence study A mesh independence analysis was firstly performed here to determine the requirements in terms of grid resolution for the numerical simulations. Initially, a mesh with 512x96 elements and a minimum element size of 7.81 mm was generated and used as the base mesh. On top of this initial mesh, four other meshes were generated by varying the number of AMR levels. Finally, accounting for Case 3 (Table 1), several numerical simulations were conducted using the five meshes generated and the corresponding results are shown in Figure **4**. 
As illustrated in Figure **4** (left plot), in terms of wedge wall density, there are no significant differences between the results obtained with the meshes featuring 3 and 4 AMR levels. Consequently, the mesh including 3 AMR levels, featuring a minimum element size of 0.97 mm and a mesh size of about 1.5 million elements, was chosen here to carry out the intended numerical simulations. This mesh configuration was deemed adequate to achieve the desired level of accuracy in the simulations performed in this work. Figure **5** shows the computational mesh utilized here for the numerical simulations carried out. In this figure, the black and gray boxes indicate the regions where adaptive mesh refinement (AMR) was employed. More specifically, the black boxes represent the areas with the highest level of refinement, whereas the gray ones indicate regions with a lower level of refinement. It can be observed from Figure **5** that the area around the wedge surface features the highest level of Figure 4: Density ratio profiles along the compression ramp surface obtained with meshes featuring different number of AMR levels. Right plot is a zoom of the left one. Figure 3: Oblique shock wave reflection cases studied and their relative location regarding the analytical transition boundaries. refinement throughout. This occurs because this wedge region is a critical zone where the shock wave interacts with the wedge wall. As such, it is essential to have a refined and high-quality mesh to both capture the intricate features of the flow and accurately predict the wave-wall reflection interactions. Overall, the mesh was designed to ensure the accuracy of the numerical results by enforcing sufficient mesh resolution in the regions of interest. ### Density ratio distributions Figure **6** to Figure **9** shows density ratio contours obtained from the numerical simulations conducted for the four cases studied here (Table 1), which allows a qualitative comparison with the experimental data presented by Deschambault and Glass (1983). To gain an insight of the observed patterns, it is imperative to delve into the theories and descriptions of regular reflection, single Mach reflection, and transitional Mach reflection. Regular reflection occurs when a shock wave encounters a solid wall at an appropriate angle. In this scenario, the resulting reflected shock wave remains attached to the wall, forming an oblique shock wave. Besides, the incident and reflected shock waves intersect, giving rise to a regular arrangement of shock waves. As supported by the revised transition boundaries theory depicted in Figure 3, Case 1 and Case 2 studied here belong to the region of regular reflection. Both the numerical results and the experimental data corroborate this finding, as no Mach stem is present, and the shock pattern includes only two propagating shock waves (Figure **6** and Figure **7**). In single Mach reflection in turn, the incident shock wave strikes the surface of the wedge, generating a curved reflected shock wave that intersects with the incident one. The point of intersection forms what is called the triple point, which lies above the surface of the wedge. At the triple point, a third shock wave called the Mach stem extends towards the surface of the wedge. This shock pattern including the triple point is clearly noticed in the numerical results obtained for Case 3 and the experimental data from Deschambault and Glass (1983) (Figure **8**). 
Finally, transitional Mach reflection involves a second triple point that represents the confluence point of the incident and reflected shocks. This second triple point is slightly visible and manifest itself as a subtle kink or bend in the reflected shock wave. In this scenario, the overall shock pattern is in the process of transitioning from the simpler single Mach reflection pattern to the more intricate double Mach reflection one. Case 4 (Figure **9**) studied here exemplifies this scenario, where the second triple point is barely discernible. Both the numerical results and the experimental data exhibit this trend, which agree with the relevant theory and the von Neumann criteria. Figure 5: Details of computational mesh employed plus AMR levels included and contours of density ratio. Figure 8: Density ratio contours for Case 3 compared with experimental data from Deschambault and Glass (1983). Figure 6: Density ratio contours for Case 1 compared with experimental data from Deschambault and Glass (1983). Figure 7: Density ratio contours for Case 2 compared with experimental data from Deschambault and Glass (1983). ### Density ratio profiles Figure **10** illustrates with blue lines the density ratio profiles along the wedge wall computed for each of the four cases studied here (Table 1). This figure also includes the experimental data obtained by Deschambault and Glass (1983) as red symbols. From Figure **10a**, for Case 1, featuring a Mach number of 2.05 and an angle of incidence of 63.4\({}^{\circ}\), and exhibiting a regular reflection, the numerical results show a relatively good agreement with the experimental data, with no major differences detected in this case. This emphasizes that the numerical predictions accurately captured the supersonic flow characteristics. In Case 2 (Figure **10b**), which features a Mach number of 4.70 and an angle of incidence of 60\({}^{\circ}\), the flow is expected to be in the dual region of Regular and Mach reflections. In this case, the numerical results show a pattern similar to a regular reflection profile in the wedge wall, which is not observed in the experiments. This discrepancy could be attributed to the limitations of the numerical model employed in this work, as it may not be able to fully capture the complex physics of the flow in this particular situation. Nevertheless, the results of the numerical simulations carried out for this case still provide valuable insights into the flow characteristics and indicate the need for further research and development of more accurate models. In Case 3 (Figure **10c**), characterized by a Mach number of 2.03 and an angle of incidence of 27\({}^{\circ}\), a SMR is obtained (Figure **8**). Like Case 1, in this case the numerical results are quite similar to the experimental data and the discrepancies are insignificant. Finally, in Case 4 (Figure **10d**), featuring a Mach number of 7.19 and an angle of incidence of 20\({}^{\circ}\), a TMR is obtained. The numerical results and the experimental data show in this case some discrepancies in the region near the triple point and at the reflected shock. However, the density profile is well captured along all the wedge wall, providing valuable insights into the shockwave interactions and supersonic flow characteristics. Figure 9: Density ratio contours for Case 4 compared with experimental data from Deschambault and Glass (1983). 
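The level of agreement with the experimental wall profiles described above can also be quantified. The snippet below is only a sketch of such a check: the file names and two-column format are hypothetical placeholders, since the wall-density data of Deschambault and Glass (1983) would have to be digitized separately; it simply interpolates the computed profile onto the experimental abscissae and reports a root-mean-square deviation.

```python
import numpy as np

def rms_deviation(x_num, rho_num, x_exp, rho_exp):
    """Interpolate the computed wall profile onto the experimental abscissae
    and return the root-mean-square deviation of rho/rho_1.
    Assumes x_num is monotonically increasing."""
    rho_at_exp = np.interp(x_exp, x_num, rho_num)
    return float(np.sqrt(np.mean((rho_at_exp - rho_exp) ** 2)))

# Hypothetical two-column files: (position along the wedge wall, rho/rho_1).
x_num, rho_num = np.loadtxt("case1_wall_density_numerical.dat", unpack=True)
x_exp, rho_exp = np.loadtxt("case1_wall_density_experiment.dat", unpack=True)

print(f"Case 1 RMS deviation in rho/rho_1: "
      f"{rms_deviation(x_num, rho_num, x_exp, rho_exp):.3f}")
```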
## 6 Conclusions This study focused in particular on regular (RR) and Mach (MR) reflections, which are the two main types of shock wave reflections observed in steady flows. The computational tool PeleC, which includes adaptive mesh refinement (AMR), was used to model the shock wave reflection phenomena, and the obtained numerical results were compared with experimental data available in the literature. From the results obtained in this work, it can be concluded that the numerical model employed here is able to accurately capture the supersonic flow characteristics for cases such as those involving regular and single Mach reflections. However, for cases involving more complex physics, such as those featuring dual regions of regular and Mach reflections, the model seems to have limitations that affect its accuracy. Overall, the results discussed here provide valuable insights into shock wave interactions and the characteristics of wedge flows. The supersonic flow configurations for which the numerical model shows limitations highlight the need for further development and use of more accurate models. The findings of this work contribute to ongoing research in the field of supersonic flows focused on the study of more complex fluid flows, such as those featuring 3D configurations, viscous flows, and detonation waves. ## 7 Acknowledgements This work has been supported by the US Army Research Laboratory under Research Grant No. W911NF-22-1-0275. Luis Bravo was supported by the US Army Research Laboratory 6.1 Basic research program in propulsion sciences.
移動する平面衝撃波と傾斜壁(楔)との相互作用においては、四つの異なる衝撃波反射形態が区別される。これらの反射は、入射衝撃波の特性と、反射が生じる表面の幾何学形状に依存し、(i) 正常反射 (RR)、(ii) 単一マッハ反射 (SMR)、(iii) 遷移マッハ反射 (TMR)、(iv) 二重マッハ反射 (DMR) の4種類に分類される。これらの衝撃波反射は流れの性質に影響を及ぼし得るため、その理解は複雑な流れ場における衝撃波挙動の予測において重要である。過去の研究では、さまざまな気体や異なる流れ・幾何学的条件を用いて、数値的および実験的アプローチによりこれらの衝撃波反射が調べられてきた。
2309.14757
Age Minimization in Massive IoT via UAV Swarm: A Multi-agent Reinforcement Learning Approach
In many massive IoT communication scenarios, the IoT devices require coverage from dynamic units that can move close to the IoT devices and reduce the uplink energy consumption. A robust solution is to deploy a large number of UAVs (UAV swarm) to provide coverage and a better line of sight (LoS) for the IoT network. However, the study of these massive IoT scenarios with a massive number of serving units leads to high dimensional problems with high complexity. In this paper, we apply multi-agent deep reinforcement learning to address the high-dimensional problem that results from deploying a swarm of UAVs to collect fresh information from IoT devices. The target is to minimize the overall age of information in the IoT network. The results reveal that both cooperative and partially cooperative multi-agent deep reinforcement learning approaches are able to outperform the high-complexity centralized deep reinforcement learning approach, which stands helpless in large-scale networks.
Eslam Eldeeb, Mohammad Shehab, Hirley Alves
2023-09-26T08:37:21
http://arxiv.org/abs/2309.14757v1
# Age Minimization in Massive IoT via UAV Swarm: A Multi-agent Reinforcement Learning Approach ###### Abstract In many massive IoT communication scenarios, the IoT devices require coverage from dynamic units that can move close to the IoT devices and reduce the uplink energy consumption. A robust solution is to deploy a large number of UAVs (UAV swarm) to provide coverage and a better line of sight (LoS) for the IoT network. However, the study of these massive IoT scenarios with a massive number of serving units leads to high dimensional problems with high complexity. In this paper, we apply multi-agent deep reinforcement learning to address the high-dimensional problem that results from deploying a swarm of UAVs to collect fresh information from IoT devices. The target is to minimize the overall age of information in the IoT network. The results reveal that both cooperative and partially cooperative multi-agent deep reinforcement learning approaches are able to outperform the high-complexity centralized deep reinforcement learning approach, which stands helpless in large-scale networks. Age of information, UAVs, Machine learning, Multi-agent reinforcement learning ## I Introduction The road to future wireless networks is encompassed by a massive deployment of IoT devices to enable vehicle platooning, smart agriculture, and yet many unforeseen applications. In such applications, there is an imperative requirement for fresh information about processes monitored or executed by these devices. Hence, information freshness has been the focus of many studies in the recent few years [1, 2]. In this context, a metric termed Age of Information (AoI) was introduced to address time sensitivity in different scenarios. AoI is defined as the time elapsed since the generation of the freshest received packet. Thus, the lower the age, the fresher the information available about a certain process. Meanwhile, the deployment of UAVs as mobile relay units or base stations (BS) is considered to be appealing for remote area coverage and collecting data from low-energy devices due to the following reasons [3]: 1. UAVs can provide coverage for remote areas, where it is cumbersome to replace the batteries of IoT devices. 2. UAVs can reduce the transmission distance of IoT nodes by moving close to them and then relaying the transmitted information to the BS. 3. UAVs can reach high altitudes and hence, the probability of line-of-sight (LOS) with both BS and IoT nodes becomes higher. In this context, UAV swarming is also suggested as a promising coverage solution for massive MIMO [4], edge intelligence [5], as well as UAV search and rescue operations [6]. ### _Literature Review_ Many works have studied age minimization in UAV-assisted IoT. For instance, the work in [7] suggested a graph theory approach to minimize the age, where the UAVs collect data from sensors only at fixed data collection points around the map, without the flexibility of collecting information anywhere. Similarly, the work in [8] optimized the stationary positions of these UAVs using a game theoretic approach. Meanwhile, the authors of [9] applied the genetic algorithm and dynamic programming to address the same problem. However, their approach was applied to a limited number of only 15 sensor nodes with no scalability to massive deployments. The same shortcoming also applies to the work in [10]. 
To this end, the massive deployment of devices and services is overwhelmingly leading to large-scale problems with a large number of non-linear parameters, making them computationally prohibitive for optimization using conventional statistical methods. Such methods are increasingly becoming unable to scale well to these problems, which mandates a paradigm shift towards more scalable approaches. At the heart of these approaches lie the decentralized machine learning algorithms. Decentralized approaches such as Federated learning and multi-agent reinforcement learning (MARL) usually have less complexity [11]. Furthermore, decentralized solutions may not require sharing of all information about agents, and hence, data privacy is preserved. In the recent few years, MARL has drawn a great amount of attention to solving problems related to massive IoT. For example, the authors [12] applied MARL to propose an energy harvesting scheme in massive scenarios. In addition, the work in [13] applied deep MARL in a UAV swarm sensing application. ### _Paper Contributions_ We summarize the contributions of this work as follows1: Footnote 1: Unlike our previous works in [14, 15], which present two UAVs serve 10 IoT devices and a centralized solution for multiple UAVs serving a cluster-based IoT network, respectively, we focus on comparing MARL solutions in massive IoT network served by a swarm of UAVs. * We propose multi-agent deep reinforcement learning solutions with partial and no sharing of information to minimize the AoI. * Our solutions account for the IoT devices' energy consumption, UAV transmission modes, as well as the division of the massive network into clusters. * We compare the proposed MARL approaches to the centralized solution in terms of performance and complexity. * Interestingly, the proposed MARL approaches render very good performance in terms of AoI and complexity, when compared to centralized RL, especially when deploying a large number of UAVs. ### _Outline_ Section II depicts the system model and the problem formulation. The proposed deep MARL schemes are presented in Section III. Section IV elucidates the results and finally, the paper is concluded in Section V. ## II System model and Problem Formulation As illustrated in Fig. 1, we consider the uplink of a static and densely grounded IoT scenario with one BS with height \(h_{BS}\) located in the center of a grid world. The vertical/horizontal distance between two adjacent cells is \(L_{c}\). The set of IoT devices \(\mathcal{D}=\{1,2,\cdots,D\}\) are uniformly distributed and each has a coordinate \((x_{d},y_{d})\). There exists a UAV swarm set \(\mathcal{U}=\{1,2,\cdots,U\}\) of \(U\) fixed-velocity rotary-wing UAVs that serve the devices and relay the information to the BS. There is a restricted area in the grid world in which the UAVs are not allowed to fly over for security reasons. We refer to the coordinates of the restricted area with the set \(\mathcal{R}\). Each UAV has three main tasks at each time instant: schedule one or more devices for uplink, move in a certain direction (or hover) \(w(t)\), and relay the received packets to the BS. To this end, if a device \(d\) is served by a UAV, we refer to this as \(k_{u}(t)=d\), where the scheduling vector of all UAVs is \(\boldsymbol{K(t)}=[k_{1}(t),\cdots,k_{U}(t)]\). 
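To make the geometry concrete, the following sketch builds the grid world described above with the parameter values used later in Section IV (an \(1100\) m \(\times\) \(1100\) m area, \(L_{c}=100\) m, \(D=300\) devices, \(U=10\) UAVs). The device placement and the location of the no-fly zone are illustrative placeholders, since the paper does not specify them.

```python
import numpy as np

rng = np.random.default_rng(0)

L_c = 100.0        # cell spacing [m] (Table I)
grid_m = 1100.0    # side length of the grid world [m] (Section IV)
D, U = 300, 10     # number of IoT devices and UAVs (Section IV)
n_cells = int(grid_m // L_c)   # 11 x 11 grid of cell centers

# Cell-center coordinates; the BS sits at the center of the grid world.
centers = (np.arange(n_cells) + 0.5) * L_c
bs_xy = np.array([grid_m / 2.0, grid_m / 2.0])

# Devices are uniformly distributed over the area.
devices = rng.uniform(0.0, grid_m, size=(D, 2))

# Illustrative no-fly zone R: a small block of cells (placeholder location).
restricted = {(ix, iy) for ix in range(2, 4) for iy in range(6, 9)}

# UAVs start at random admissible cell centers.
admissible = [(ix, iy) for ix in range(n_cells) for iy in range(n_cells)
              if (ix, iy) not in restricted]
uav_cells = [admissible[i] for i in rng.choice(len(admissible), size=U, replace=False)]
uav_xy = np.array([[centers[ix], centers[iy]] for ix, iy in uav_cells])

print(f"{D} devices, {U} UAVs, BS at {bs_xy}, {len(restricted)} restricted cells")
```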
In addition, each UAV can navigate as follows \[l_{u}(t+1)=\begin{cases}l_{u}(t)+(0,L_{c}),&\quad w_{u}(t)=\text{ north},\\ l_{u}(t)-(0,L_{c}),&\quad w_{u}(t)=\text{south},\\ l_{u}(t)+(L_{c},0),&\quad w_{u}(t)=\text{east},\\ l_{u}(t)-(L_{c},0),&\quad w_{u}(t)=\text{west},\\ l_{u}(t),&\text{Hover},\end{cases} \tag{1}\] where \(l_{u}=(x_{u},y_{u})\) is the coordinate vector of the UAV and the position vector of all the UAVs is \(L_{U}=[l_{1},\cdots,l_{U}]\). The movement vector of all the UAVs is \(\boldsymbol{W(t)}=[w_{1}(t),\cdots,w_{u}(t)]\). The UAVs perform their tasks in the form of a sequence of cycles. Each cycle consists of 4 stages: _(i) navigation stage_, _(ii) transmission stage_, _(iii) relaying stage_, and _(vi) AoI update stage_. We define the unit of time in the network as a frame, where a UAV performs a full cycle in one frame. Each frame has a duration of \(T_{fr}\) seconds. As illustrated in Fig. 2, we assume that the navigation stage is performed simultaneously with the other stages, where in the half-duplex mode, the transmission stage is followed by the relaying stage, whereas in the full-duplex mode2, both stages are performed simultaneously. Footnote 2: To facilitate the problem formulation, we assume ideal full-duplex operation. In practice, there may exist a performance degradation due to self-interference and hardware impairments, even though the self-interference can be brought down below the noise floor [16]. ### _Clustering, Rate Calculation, and UAV Cycle_ Consider a set \(\mathcal{C}=\{1,2,\cdots,C\}\) of \(\mathcal{C}\) clusters, where each device \(d\) is assigned to a cluster \(c\). Each cluster has a fixed number of devices denoted \(D_{c}\). Assuming that each UAV schedules \(D_{c}\) devices for uplink in the time dedicated for transmission \(t_{T}\) given a transmission data rate \(R_{T}\), the number of devices in each cluster is upper bounded by \[D_{c}\leq\frac{R_{T}\,t_{T}}{M}, \tag{2}\] where \(M\) is the packet size, \(t_{T}\) is expressed in terms of the navigation time \(t_{N}\), i.e., \(t_{T}=\frac{t_{N}}{2}=\frac{L_{c}}{2\,v_{u}}\) in the half-duplex mode and \(t_{T}=t_{N}=\frac{L_{c}}{v_{u}}\) in the full-duplex mode. Note that, the extension to variable data rates is straightforward and does not lie within the direct scope of this work. Finally, the bound for the number of devices in a cluster is formulated as \[D_{c}\leq\begin{cases}\frac{R_{T}\,L_{c}}{2\,M\,v_{u}},&\text{half-duplex},\\ \frac{R_{T}\,L_{c}}{M\,v_{u}},&\text{full-duplex}.\end{cases} \tag{3}\] Each UAV \(u\) can schedule and serve \(D_{c}\) devices in a cluster \(c\) at each frame. We refer to the chosen clusters by \(c\) by a UAV \(u\) at time instant \(t\) as \(k_{u}(t)=c\). The scheduling vector of all UAVs is \(K_{U}(t)=[k_{1}(t),\cdots,k_{U}(t)]\). The UAV cycle is illustrated as follows: Fig. 1: The system model: A swarm of UAVs flies around to receive information from clusters of IoT devices and relay this information to the BS. The blocked area is a no-fly zone. Fig. 2: A UAV navigation step from one grid point to another. #### Ii-A1 Navigation stage The position of each UAV \(u\) at a time instant \(t\) is determined by its height \(h_{u}\) and the projection on the 2D plane \((x_{u}(t),y_{u}(t))\). Each UAV \(u\) flies with a fixed velocity of \(v_{u}\) from the center of a cell to the center of an adjacent cell. The duration of the navigation stage is formulated as follows \[t_{N}=\frac{L_{c}}{v_{u}}. 
\tag{4}\] #### Ii-A2 Transmission stage The UAVs communicate with the scheduled devices to receive their packets. We assume an LoS component between the UAV and the devices. When a UAV \(u\) schedules an IoT device \(d\) at time \(t\), the channel gain between both is \[g_{du}(t)=\frac{\beta_{0}}{h_{u}^{2}+||L_{du}(t)||^{2}}, \tag{5}\] where \(\beta_{0}\) is the channel gain at a reference distance of \(1\) m, \(L_{du}\) is the distance between the device and the UAV. The required transmission power allocated for the device is \[P_{d}(t)=\frac{\left(2^{\frac{M}{2}}-1\right)\sigma^{2}}{g_{du}(t)}=\left(2^{ \frac{M}{2}}-1\right)\frac{\sigma^{2}}{\beta_{0}}\,\left(h_{u}^{2}+||L_{du}(t )||^{2}\right), \tag{6}\] where \(M\) is the packet size, \(B\) is the bandwidth and \(\sigma^{2}\) is the noise power. The duration of the transmission stage is \(t_{T}\) and it is half the duration of the navigation stage \(t_{T}=\frac{t_{N}}{2}\) in the half-duplex mode and it equals the duration of the navigation stage \(t_{T}=t_{N}\) in the full-duplex mode. The transmission power vector of the network is \(\mathbf{P}(t)=(P_{1}(t),\cdots,P_{D}(t))\). #### Ii-A3 Relaying stage We assume an LoS component between the UAV and the BS. The channel gain between the UAV \(u\) and the BS at time instant \(t\) is given by [17] \[g_{uBS}(t)=\frac{\beta_{0}}{|h_{u}-h_{BS}|^{2}+||l_{u}||^{2}}, \tag{7}\] where the coordinate of the BS is assumed to be \((0,0)\). The duration of the relaying stage is \(t_{R}\) and it is half the duration of the navigation stage \(t_{R}=\frac{t_{N}}{2}\) in the half-duplex mode and it equals the duration of the navigation stage \(t_{R}=t_{N}\) in the full-duplex mode. #### Ii-A4 AoI update stage The AoI of device \(d\) is calculated as the time difference between the current instant and the time instant of the last update received from device \(d\). Thus, the AoI for device \(d\) at time instant \(t\) is updated according to the following update equation \[A_{d}(t)=\begin{cases}1,&\text{if }s_{d}(t)=1,\\ \text{min}\{A_{max},A_{d}(t-1)+1\},&\text{otherwise},\end{cases} \tag{8}\] where \(A_{max}\) is the maximum AoI threshold allowed in the network, which is chosen to be relatively high. The duration of the AoI update stage is neglected. The AoI vector of the network is \(\mathbf{A}(t)=(A_{1}(t),\cdots,A_{D}(t))\). ### _Problem Formulation_ The goal is to plan the trajectories of the UAVs and their scheduling policies to maximize the freshness of the information, i.e., minimize the average AoI of the IoT devices. Consider the weighting vector \(\mathbf{\Theta}=(\theta_{1},\cdots,\theta_{D})\), where \(\theta_{d}\) influences the importance of the AoI of device \(d\). We can formulate the optimization problem as follow \[\mathbf{P1}: \min_{\mathbf{W(t)},\mathbf{K(t)}} \frac{1}{T}\sum_{t=1}^{T}\operatorname{Tr}\left(\,\mathbf{\Theta }\,\mathbf{A}\right)+\frac{\zeta}{D}\operatorname{Tr}\left(\mathbf{P}\right)\] (9a) s.t. \[l_{u}(t)\in\mathcal{X}, \tag{9b}\] where \(\operatorname{Tr}(\cdot)\) is the trace of a matrix, \(\zeta\) is a transmission power penalty to the optimization problem to force the UAV to move closer to the devices with high AoI, and \(\mathcal{X}\) is the set of all possible locations in the grid world. The constraint in (9a) ensures that the UAVs move inside the grid world, whereas the constraint (9b) ensures that the optimized parameter will not lead one of the UAVs inside the restricted area. 
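As a worked example of the quantities defined above, the sketch below evaluates the cluster-size bound (3), the uplink power (6), and the AoI update (8) with the parameter values of Table I and the transmission rate used in Section IV. Note that the exponent of (6) as reproduced here appears garbled; the sketch assumes the usual Shannon-rate inversion, \(2^{M/(B\,t_{T})}-1\), which is consistent with the definitions of \(M\), \(B\), and \(t_{T}\) but should be checked against the original source. The device-UAV distance is an arbitrary placeholder.

```python
import numpy as np

# Parameters from Table I
B = 1e6            # bandwidth [Hz]
M = 5e6            # packet size [bits]
sigma2_dBm = -100.0
beta0_dB = 30.0    # reference gain as listed in Table I (sign convention worth double-checking)
h_u, L_c, v_u = 100.0, 100.0, 25.0

sigma2 = 10 ** (sigma2_dBm / 10.0) * 1e-3   # noise power [W]
beta0 = 10 ** (beta0_dB / 10.0)

# Cluster-size bound (3) and resulting number of clusters for D = 300 devices.
R_T = 31.25e6                               # transmission rate [bit/s] (Section IV)
Dc_half = R_T * L_c / (2 * M * v_u)         # half-duplex
Dc_full = R_T * L_c / (M * v_u)             # full-duplex
print(f"D_c <= {Dc_half:.1f} (half-duplex), {Dc_full:.1f} (full-duplex); "
      f"{int(np.ceil(300 / Dc_full))} clusters in full-duplex")

# Uplink power (6) for a device at 2D distance 'dist' from the UAV (placeholder value),
# assuming the required SNR is 2^{M/(B t_T)} - 1.
t_T = L_c / v_u                             # full-duplex transmission time [s]
dist = 150.0                                # device-UAV ground distance [m]
g = beta0 / (h_u**2 + dist**2)              # channel gain (5)
P_d = (2 ** (M / (B * t_T)) - 1) * sigma2 / g
print(f"required uplink power: {P_d:.3e} W")

# AoI update (8) for one device over a few frames (1 = device's cluster scheduled).
A_max = 30
aoi, served = 1, [0, 0, 1, 0, 0, 0, 1]
for s in served:
    aoi = 1 if s else min(A_max, aoi + 1)
    print(aoi, end=" ")
```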
## III The Proposed MARL Solutions The aforementioned optimization problem in \(\mathbf{P1}\) is a non-linear integer programming optimization problem whose order of complexity is very large, even for a small number of UAVs [14]. We propose novel centralized and decentralized MARL solutions to solve this complex optimization problem in a massive setup of IoT devices and a swarm of UAVs. ### _The Markov Decision Process_ Assume that the swarm of UAVs are the agents that need to optimize their movement and scheduling decisions to minimize the objective function in (9) in the environment denoted in the previous section. In deep reinforcement learning, Markov decision process (MDP) problems are usually interpreted in terms of the state, action, and reward, where the agent \(u\) explores the environment at time \(t\), observes a state \(s(t)\), takes an action \(a(t)\) that transits the agent to a new state \(s(t+1)\). Then the agent receives a reward of \(r(t)\). Therefore, for \(\mathbf{P1}\), those are defined as follows: #### Iii-A1 State it consists of the position of the UAV \(l_{u}(t)=(x_{u}(t),y_{u}(t))\) and the AoI of the devices \(\mathbf{A}(t)\). Therefore, the state vector of a UAV \(u\) at time instant \(t\) is \(s_{u}(t)=[l_{u},A(t)]\). #### Iii-A2 Action space it consists of the movement direction of the UAV \(w_{u}(t)\) and the chosen cluster by the UAV \(k_{u}(t)\). Therefore, the action vector of a UAV \(u\) at time \(t\) is \(a_{u}(t)=[w_{u}(t),k_{u}(t)]\). #### Iii-A3 Reward it is formulated in terms of the AoI and the transmission power as defined in (9). The immediate reward of a UAV \(u\) at time \(t\) is \[r_{u}(t)=-\operatorname{Tr}\left(\,\mathbf{\Theta}\,\mathbf{A}\right)-\frac{ \zeta}{D_{c}}\operatorname{Tr}\left(\mathbf{P}\right), \tag{10}\] where the goal is to maximize the accumulative reward \[\max_{\pi_{u}}\,\sum_{t=1}^{T}r_{u}(t), \tag{11}\] where \(\pi_{u}\) is the policy followed by the UAV during the whole episode and \(T\) is the episode length. ### _MARL Solutions_ The Q-function \(Q(s_{u},a_{u})\) evaluates how good an action is at each particular state [14]3. It calculates the expected accumulative reward by taking an action \(a_{u}\) at state \(s_{u}\) and following the policy \(\pi_{u}\). The optimal policy \(\pi_{u}^{*}\) for each UAV \(u\) is the policy that maximizes the Q-function, i.e., at each state, choose the action that has the highest expected accumulative reward Footnote 3: Note that [14, 17] provide interesting insights about age minimization as well as the air interface between UAVs and ground nodes. However, the solutions proposed therein lack scalability to massive networks served by a UAV swarm. \[\pi_{u}^{*}=\operatorname*{arg\,max}_{a_{u}}\;Q(s_{u},a_{u}), \tag{12}\] where the optimal policy of all the UAVs is \(\pi^{*}=[\pi_{1}^{*},\cdots,\pi_{U}^{*}]\). This problem can be solved iteratively, where \[Q\left(s_{u}\left(t\right),a_{u}\left(t\right)\right)\leftarrow \;Q\left(s_{u}\left(t\right),a_{u}\left(t\right)\right)+\] \[\alpha\;\left(r_{u}\left(t\right)+\gamma\;\max_{a_{u}}Q\left(s_{u }\left(t+1\right),a_{u}\right)-Q\left(s_{u}\left(t\right),a_{u}\left(t\right) \right)\right), \tag{13}\] where \(\alpha\) is the learning rate, and \(\gamma\) is the discount factor that controls how much the network cares about the future rewards compared to the immediate rewards. 
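To make the interplay of the reward (10) and the update rule (13) concrete, the following minimal tabular sketch performs a single Q-learning step; the paper replaces the table with a DQN in the next subsection, and the toy states and actions here are placeholders. Reading \(\operatorname{Tr}(\mathbf{\Theta}\,\mathbf{A})\) as the weighted sum \(\sum_{d}\theta_{d}A_{d}(t)\) is an assumption consistent with the definitions above.

```python
import numpy as np
from collections import defaultdict

def reward(aoi, power, theta, zeta, D_c):
    """Immediate reward (10): negative weighted AoI minus a transmission-power penalty."""
    return -float(theta @ aoi) - zeta / D_c * float(power.sum())

alpha, gamma = 1e-4, 0.99          # learning rate and discount factor (Table I)
Q = defaultdict(float)             # Q[(state, action)], zero-initialized

def q_update(Q, s, a, r, s_next, actions):
    """One iteration of the update rule (13)."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

# Toy usage: states = (cell index, discretized mean AoI); actions = (move, cluster) pairs.
actions = [(w, c) for w in range(5) for c in range(3)]
s, a, s_next = (0, 2), (1, 0), (1, 1)
aoi = np.array([3.0, 1.0, 7.0]); power = np.array([0.2, 0.1, 0.4])
theta = np.full(3, 1.0 / 3.0)
r = reward(aoi, power, theta, zeta=5.0, D_c=3)
q_update(Q, s, a, r, s_next, actions)
print(f"r = {r:.3f}, Q[s,a] = {Q[(s, a)]:.5f}")
```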
Solving the Q-function iteratively requires the agents to visit all the possible states and experience all the possible actions, which is relatively complex in high dimension state and action spaces [18]. Instead, function approximators, such as neural networks, are used to overcome the dimensionality curse [19]. We apply a deep Q-network (DQN) solution to find the optimal Q-function and the optimal policy. In DQNs, a neural network is used as a function approximator to estimate the Q-function \(Q(s_{u},a_{u}|\theta_{u})\), where \(\theta\) is a vector containing the weights of the trained network. This network is called the current network. The Q-function is estimated by optimizing \(\theta_{u}\) that minimizes the loss function \[\mathbf{L}(\theta_{u}(t))= (r_{u}(t)+\max_{a_{u}}Q(s_{u}(t+1),a_{u}|\theta_{u}(t-1)) \tag{14}\] \[-Q(s_{u}(t),a_{u}(t)|\theta_{u}(t)))^{2}.\] In addition, the DQN adopts another neural network, which is called the target network to estimate the target Q-function values that will be used to update the Q-function [19]. Herein, we propose four schemes to train the agents using the DQN algorithm as follows: #### Iii-B1 **Centralized-RL (C-RL)** The training is done at the BS, which passes the optimal policy to the agents. This is a high-complexity scheme since the state and action spaces include all possible options for all the UAVs \(s(t)=[L_{U}(t),A(t)]\) and \(a(t)=[W(t),K_{U}(t)]\), respectively. Herein, the message exchange overhead is low, but it requires a very long training time to converge to the optimal policy. #### Iii-B2 **Cooperative-MARL (Co-MARL)** The UAVs are considered to be individual agents and can train their own local DQNs. The training is run serially, where each UAV passes its selected action to the next UAV directly or through the BS 4. Each UAV utilizes the optimized policies of the other UAVs to find its optimal policy. The AoI is updated universally at the BS and distributed among all the UAVs at the end of each episode. This scheme is of low complexity since each UAV considers only its state and action spaces during training; however, it requires high message exchange between the UAVs which consumes time and decreases the spectral efficiency. Footnote 4: We assume that passing information about actions between UAVs is done within a negligibly short time, compared to the navigation step time \(T_{N}\). Moreover, this work focuses on evaluating different methods for learning the trajectory of each UAV; therefore, to facilitate the analysis, we assume that the sharing of information between UAVs is performed through reliable error-free links. #### Iii-B3 **Partially cooperative-MARL (PCo-Markl)** Each UAV runs its training locally and they share its optimized actions with the BS without sending them to the other UAVs. The BS calculates the universal AoI and distributes it among all the UAVs. This scheme is simple and has low message exchange, but it does not have full knowledge of the true state and therefore, is unable to converge to the optimum policies in many cases. #### Iii-B4 **Decentralized-MARL (D-Markl)** Each UAV runs its training locally without sharing any information with other UAVs or BS. This scheme is very simple and has high spectral efficiency, but its performance is poor as it possesses no information about the true states and actions of the other UAVs. Fig. 3 illustrates the proposed DQN solution and the difference between the Co-MARL and the D-MARL schemes. 
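A minimal PyTorch sketch of the per-UAV DQN and one gradient step on the loss (14) is given below; the hidden-layer widths, loss, optimizer, learning rate, and discount factor follow Table I, while the state and action dimensions are placeholders that depend on the scheme (for instance, the C-RL state stacks the positions of all UAVs).

```python
import torch
import torch.nn as nn

state_dim, n_actions = 2 + 300, 5 * 12   # placeholders: (UAV cell x, y) + AoI vector; 5 moves x 12 clusters

def make_dqn():
    # Hidden widths (64, 128, 64) as in Table I.
    return nn.Sequential(
        nn.Linear(state_dim, 64), nn.ReLU(),
        nn.Linear(64, 128), nn.ReLU(),
        nn.Linear(128, 64), nn.ReLU(),
        nn.Linear(64, n_actions),
    )

policy_net, target_net = make_dqn(), make_dqn()
target_net.load_state_dict(policy_net.state_dict())
optimizer = torch.optim.Adam(policy_net.parameters(), lr=1e-4)
mse = nn.MSELoss()
gamma = 0.99

def dqn_step(s, a, r, s_next):
    """One gradient step on the loss (14): (r + gamma * max_a' Q_target(s', a') - Q(s, a))^2."""
    q_sa = policy_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * target_net(s_next).max(dim=1).values
    loss = mse(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)

# Toy batch, only to show the shapes involved.
batch = 4
s = torch.randn(batch, state_dim); s_next = torch.randn(batch, state_dim)
a = torch.randint(0, n_actions, (batch,)); r = torch.randn(batch)
print(dqn_step(s, a, r, s_next))
```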
## IV Numerical Results and Discussion In this section, we show the simulation results of the proposed MARL solutions. We consider a \(1100\) m \(\times\)\(1100\) m network with the parameters shown in Table I. The training is performed using the Pytorch framework on a single NVIDIA Tesla V100 GPU. Due to the high number of UAVs, the trajectories would appear unclear and interfere with each other. Therefore, we are not providing the UAV trajectory plots here Fig. 3: A schematic for the proposed MARL algorithms Fig. 4 illustrates the average AoI in the half-duplex and the full-duplex modes for a swarm of \(10\) UAVs that are serving \(300\) IoT devices while varying the transmission rate. As illustrated in (3), the transmission rate dictates the number of devices in each cluster, and hence, controls the number of clusters that has a direct influence on the AoI. The average AoI is reduced as the rate increases. In addition, the Co-MARL solution has the lowest AoI compared to the PCo-MARL and the D-MARL solutions. Overall, all the learning schemes outperform the RW baseline model. However, low transmission rates affect the D-MARL scheme forcing the UAVs to choose a particular action (a cluster and a movement direction) all the time as the action space becomes relatively large. Therefore, the decentralized solution only outperforms the RW in high transmission rates. In Fig. 3(a), the D-MARL solution outperforms the RW for rates higher than \(27.5\) Mbps in the case of the half-duplex mode, whereas in Fig. 3(b), it renders better AoI for rates higher than \(22.5\) Mbps. Moreover, at the same transmission rates, the full-duplex mode always outperforms the half-duplex mode, since the UAV is able to receive and transmit data simultaneously, which allows for a higher number of served devices per navigation step and hence, lower age. Fig. 5 shows the average AoI resulting from different numbers of UAVs that are serving \(300\) devices in the full-duplex mode using a transmission rate of \(31.25\) Mbps. All the MARL schemes are trained and compared using the same DQN and the same number of episodes. We can notice that as the number of UAVs increases, the AoI decreases for all schemes. All the proposed solutions outperform the RW baseline solution. The Co-MARL scheme has the lowest AoI, whereas the D-MARL scheme has the highest AoI compared to all the proposed schemes. The C-RL has the lowest AoI but it requires more episodes in training compared to the other schemes as its state and action spaces are relatively large. In addition, when the number of UAVs is larger than \(3\) UAVs, the C-RL is unable to converge due to the dimensionality curse. Table II depicts the complexity analysis of all the proposed schemes and the RW baseline scheme. We define the time complexity as the execution time of each algorithm, whereas we define the computational complexity (\(\#\) computations) as the number of multiplications and additions required by each algorithm. 
The signaling overhead is the number of messages exchanged (other than the relayed data) between the BS and \begin{table} \begin{tabular}{c c|c c} \hline \hline **Parameter** & **Value** & **Parameter** & **Value** \\ \hline \(\beta_{0}\) & \(30\) dB & \(\sigma^{2}\) & \(-100\) dBm \\ \(B\) & \(1\) MHz & \(M\) & \(5\) Mb \\ \(h_{u}\) & \(100\) m & \(h_{BS}\) & \(15\) m \\ \(v_{u}\) & \(25\) m/s & \(L_{c}\) & \(100\) m \\ \(A_{max}\) & \(30\) & \(\zeta\) & \(5\) \\ \(T\) & \(60\) frames & \(\theta_{d}\) & \(\frac{1}{D}\) \\ \(\alpha\) & \(0.0001\) & \(\gamma\) & \(0.99\) \\ DQN layers & \((64,128,64)\) & Episodes & \(100000\) \\ Loss & MSE & optimizer & Adam \\ \hline \hline \end{tabular} \end{table} TABLE I: UAV model and DQN parameters. Fig. 4: The average AoI of \(10\) UAVs serving \(300\) IoT devices in the half-duplex and the full-duplex modes while changing the transmission rates. Fig. 5: The average AoI while changing the number of the UAVs that are serving \(300\) IoT devices in the full-duplex mode using a transmission rate of \(31.25\) Mbps. the UAVs. The RW has the lowest time complexity as it does not perform any computations. During the training, the C-RL requires double the number of episodes of the other schemes to converge. The Co-MARL scheme has the highest time complexity for an episode as each UAV must wait for the information shared by other UAVs before taking an action due to its high signaling overhead. The C-RL scheme has the lowest signaling overhead but it demands a large number of computations (10 times higher than MARL) and its time complexity is also higher. Meanwhile, all MARL schemes have lower computation demands. Taking a deeper look into the results in Fig. 5 and Table II, one can observe that the deployment of a higher number of UAVs can reduce the average AoI of the network. For example, deploying \(10\) UAVs instead of \(3\) UAVs improves the average AoI to a value of \(1.9\) instead of \(2.6\) (i.e., about \(30\%\) improvement). Moreover, \(10\) UAVs are able to reach this low AoI value using MARL which requires much less UAV computation capabilities on board. The cost of a UAV with less processing capabilities on board could indeed be cheaper. An interesting conclusion here is that we are able to acquire more fresh information using a swarm of low-cost UAVs rather than deploying a small number of high-cost UAVs. ## V Conclusions This paper presented different learning approaches on how to collect fresh information from a massive IoT network using a UAV swarm. The main target of the proposed approaches was to maximize the overall information freshness of the network. We applied different MARL algorithms and compared them to the centralized RL solution. The results revealed that the, although the centralized solution is able to reach the best performance with a low number of UAVs, its time and computational complexity, are quite high. Moreover, it fails to converge for a high number of UAVs, unlike the MARL solutions which provide low-complexity solutions in all scenarios and are able to scale well to the UAV swarm setup. An interesting future direction is to consider the non-ideal transmission of information among UAVs and study its effect on the overall performance of MARL as noted in Section III-B. ## Acknowledgments This work is partially supported by Academy of Finland, 6G Flagship program (Grant no. 346208) and FIREMAN (Grant no. 326301), and the European Commission through the Horizon Europe project Hexa-X (Grant Agreement no. 101015956).
多くの大規模IoT通信シナリオでは、IoTデバイスの近くまで移動してアップリンクのエネルギー消費を削減できる動的ユニットによるカバレッジが必要となる。有力な解決策は、多数のUAV(UAVスワーム)を展開し、IoTネットワークに対してカバレッジとより良好な見通し線(LoS)を提供することである。しかし、多数のサービングユニットを伴うこのような大規模IoTシナリオの検討は、高い複雑性を持つ高次元問題につながる。本論文では、IoTデバイスから新鮮な情報を収集するためにUAVスワームを展開することで生じる高次元問題に対処するため、マルチエージェント深層強化学習を適用する。目的は、IoTネットワーク全体の情報鮮度(Age of Information)を最小化することである。結果は、協調型および部分協調型のマルチエージェント深層強化学習アプローチがいずれも、大規模ネットワークでは機能しない高複雑度の集中型深層強化学習アプローチを上回り得ることを示している。
2301.00100
Cobordism invariance of the index for realizations of elliptic operators revisited
We revisit an argument due to Lesch (Topology 32 (1993), no. 3, 611-623) for proving the cobordism invariance of the index of Dirac operators on even-dimensional closed manifolds and combine this with recent work by the author (New York J. Math. 28 (2022), 705-772) to show vanishing results for the spectral flow for families of selfadjoint Fredholm realizations of elliptic operators in case the family is induced on the boundary by an elliptic operator on a compact space. This work is motivated by studying the behavior of the index of realizations of elliptic operators under cobordisms of stratified manifolds.
Thomas Krainer
2022-12-31T02:48:57
http://arxiv.org/abs/2301.00100v1
# Cobordism invariance of the index for realizations of elliptic operators revisited ###### Abstract. We revisit an argument due to Lesch [11, 12] for proving the cobordism invariance of the index of Dirac operators on even-dimensional closed manifolds and combine this with recent work by the author [10] to show vanishing results for the spectral flow for families of selfadjoint Fredholm realizations of elliptic operators in case the family is induced on the boundary by an elliptic operator on a compact space. This work is motivated by studying the behavior of the index of realizations of elliptic operators under cobordisms of stratified manifolds. Key words and phrases:Manifolds with singularities, index theory, cobordism 2020 Mathematics Subject Classification: Primary: 58J20; Secondary: 58J05, 58J32, 58J30 ## 1. Introduction One of the original proofs of the Atiyah-Singer Index Theorem is based on showing that the index of Dirac type operators is invariant under cobordisms, see Palais [17]. This proof is analytic in nature and rooted in the classical theory of elliptic boundary value problems. Other proof strategies for the index theorem such as the heat equation proof have generally been favored because these proofs require less sophisticated analytic techniques than the original cobordism proof. Higson [9] gave a proof of the cobordism invariance of the index by attaching an infinite half-cylinder to the boundary and extending the operator from the manifold with boundary to the manifold with cylindrical end. The Dirac type operator on the resulting odd-dimensional complete manifold is essentially selfadjoint, and the analytic arguments involved in Higson's proof are considerably simpler compared to the original proof. Lesch [11], on the other hand, gave a proof by attaching a (generalized) cone to the boundary and extended the operator from the manifold with boundary to a cone operator; while conic manifolds are incomplete and thus dealing with domains of realizations of the resulting conic Dirac type operator is needed, Lesch's approach is still much simpler from a functional analytic point of view than the original proof because the maximal and minimal domains of \(L^{2}\)-based realizations in the conic case differ only by a finite-dimensional space - the price to pay is the more intricate analysis to deal with the singularity which at this juncture has been introduced artificially. Several other analytic proofs of the cobordism invariance of the index [3, 16], a \(K\)-theory proof [5], and generalizations [4, 8, 14, 19] have since been found. This note is motivated by recent advances in elliptic theory on stratified manifolds with incomplete iterated wedge metrics [1, 2, 6, 7, 15, 18] and gives an application of the spectral flow formula for indicial operators obtained in our recent paper [10]. Stratified cobordisms and the cobordism invariance of the index for the signature operator have been considered in [1, 2], where especially in [2] the operator is no longer essentially selfadjoint and suitable boundary conditions associated with the singular strata are considered; stratified cobordism and the invariance of the index are used in an essential way to establish the properties of the signature of a Cheeger space considered in that paper. 
From our point of view Lesch's proof [11, 12] of the cobordism invariance of the index is very natural in the context of elliptic theory on stratified manifolds because, unlike in the classical smooth case, singular analysis and dealing with boundary conditions associated with singular strata already are essential features of the investigations here. In this note we will revisit and extend Lesch's proof from the Dirac case to more general operators of any order, and what amounts to the vanishing of the index in the Dirac case (for null-cobordisms) will accordingly generalize to the vanishing of the spectral flow for indicial families. Our recent paper [10] on indicial operators, which are abstract functional analytic model operators associated to generalized conical singularities, is the basis for this. We will only be concerned with null-cobordisms and proving vanishing results here; more general notions of cobordisms and cobordism invariance follow upon reduction to this case. Without detailing the precise assumptions, the argument proceeds as follows: Let \((M,g)\) be a Riemannian manifold, and let \(U=U(Y)\subset M\) be an open subset that is isometric to \((0,\varepsilon)\times Y\) with product metric \(dx^{2}+g_{Y}\) for some \(\varepsilon>0\), where \((Y,g_{Y})\) is another Riemannian manifold. The reader ought to think of both \(M\) and \(Y\) as the open interior of compact stratified manifolds \(\overline{M}\) and \(\overline{Y}\) equipped with incomplete iterated wedge metrics, where \(\overline{Y}\) is a boundary hypersurface of \(\overline{M}\), and \(U(Y)\) is a collar neighborhood. Let \(\mathscr{E}\to M\) be a Hermitian vector bundle such that \(\mathscr{E}\big{|}_{U(Y)}\cong\pi_{Y}^{*}E\) isometrically, where \(E\to Y\) is a Hermitian vector bundle, and \(\pi_{Y}:(0,\varepsilon)\times Y\to Y\) is the canonical projection. Let \[A:C_{c}^{\infty}(M;\mathscr{E})\to C_{c}^{\infty}(M;\mathscr{E})\] be an elliptic differential operator of order \(\mu\geq 1\) that is symmetric with respect to the inner product induced by the Riemannian and Hermitian metrics, and suppose that \(A\) is in \(U(Y)\) of the form \[A\cong A_{\wedge}=x^{-1}\sum_{j=0}^{\mu}a_{j}(y,D_{y})(xD_{x})^{j}:C_{c}^{ \infty}((0,\varepsilon)\times Y;\pi_{Y}^{*}E)\to C_{c}^{\infty}((0, \varepsilon)\times Y;\pi_{Y}^{*}E),\] where \(a_{j}(y,D_{y})\in\operatorname{Diff}^{\mu-j}(Y;E)\). Let \[p(\sigma)=\sum_{j=0}^{\mu}a_{j}(y,D_{y})\sigma^{j}:C_{c}^{\infty}(Y;E)\to C_{ c}^{\infty}(Y;E),\;\sigma\in\mathbb{C},\] be the indicial family. Now suppose that \[A_{\min}:\mathcal{D}_{\min}(A)\subset L^{2}(M;\mathscr{E})\to L^{2}(M; \mathscr{E}) \tag{1.1}\] is some closed symmetric extension of \(A:C_{c}^{\infty}(M;\mathscr{E})\subset L^{2}(M;\mathscr{E})\to L^{2}(M; \mathscr{E})\), and let \(A_{\max}:\mathcal{D}_{\max}(A)\subset L^{2}(M;\mathscr{E})\to L^{2}(M; \mathscr{E})\) be the adjoint - we point out here that \(A_{\min}\) is not necessarily the minimal extension of \(A\) from \(C_{c}^{\infty}(M;\mathscr{E})\), and therefore \(A_{\max}\) is not the largest \(L^{2}\)-based closed extension either, i.e. 
we only have \[\mathcal{D}_{\min}(A)\supset\{u\in L^{2}(M;\mathscr{E});\;\exists u _{k}\in C_{c}^{\infty}(M;\mathscr{E}),u_{k}\to u\text{ in }L^{2}(M;\mathscr{E}),\] \[\text{and }Au_{k}\subset L^{2}(M;\mathscr{E})\text{ Cauchy}\},\] \[\mathcal{D}_{\max}(A)\subset\{u\in L^{2}(M;\mathscr{E});\;\exists v \in L^{2}(M;\mathscr{E}):\] \[\langle A\phi,u\rangle_{L^{2}(M;\mathscr{E})}=\langle\phi,v \rangle_{L^{2}(M;\mathscr{E})}\;\forall\phi\in C_{c}^{\infty}(M;\mathscr{E})\},\] and these inclusions are generally proper. The reader ought to think of the operator \(A\) as an elliptic iterated incomplete wedge operator on \(\overline{M}\), and the domain \(\mathcal{D}_{\min}(A)\) as determined by previously chosen boundary conditions for \(A\) associated with singular strata of \(\overline{M}\) away from the boundary hypersurface \(\overline{Y}\subset\overline{M}\). One of the main points now is that under suitable localization and compatibility assumptions these extensions of \(A\) should localize to \(U(Y)\) and be fully captured by the extensions of the indicial operator \[A_{\wedge}:C_{c}^{\infty}(\mathbb{R}_{+};E_{1})\subset L^{2}(\mathbb{R}_{+} \times Y;\pi_{Y}^{*}E)\to L^{2}(\mathbb{R}_{+}\times Y;\pi_{Y}^{*}E). \tag{1.2}\] Here \[H^{\mu}_{\text{comp}}(Y;E)\subset E_{1}\subset H^{\mu}_{\text{loc}}(Y;E)\] is the common domain for the indicial family \(p(\sigma):E_{1}\subset E_{0}\to E_{0}\), \(\sigma\in\mathbb{C}\), where \(E_{0}=L^{2}(Y;E)\), giving rise to a holomorphic family of unbounded Fredholm operators that are selfadjoint for \(\sigma\in\mathbb{R}\). The reader ought to think of \(E_{1}\) as determined by certain lateral boundary conditions associated with the singular strata of \(\overline{Y}\), obtained via restriction to \(U(Y)\) by the previously determined boundary conditions on \(\overline{M}\) for \(A\) that gave rise to \(\mathcal{D}_{\min}(A)\); the localization and compatibility assumptions are such that the boundary conditions previously chosen for \(A\) on \(\overline{M}\) should be selfadjoint away from the boundary hypersurface \(\overline{Y}\). The upshot of all of this is that we obtain a unitary equivalence \[\big{(}\mathcal{D}_{\max}(A)/\mathcal{D}_{\min}(A),[\cdot,\cdot]_{A}\big{)} \cong\big{(}\mathcal{D}_{\max}(A_{\wedge})/\mathcal{D}_{\min}(A_{\wedge}),[ \cdot,\cdot]_{A_{\wedge}}\big{)}\] of finite-dimensional indefinite inner product spaces by passing to representatives supported in \(U(Y)\cong(0,\varepsilon)\times Y\), thus allowing transitioning between \(M\) and \(\mathbb{R}_{+}\times Y\); here \[[\cdot,\cdot]_{A}:\mathcal{D}_{\max}(A)\times\mathcal{D}_{\max}(A)\to\mathbb{C},\] \[[u,v]_{A}=\frac{1}{i}\Big{[}\langle A_{\max}u,v\rangle_{L^{2}}-\langle u,A_{ \max}v\rangle_{L^{2}}\Big{]}\] is the adjoint pairing, and likewise for \([\cdot,\cdot]_{A_{\wedge}}\), while \(\mathcal{D}_{\min}(A_{\wedge})\) is the domain of the closure \(A_{\wedge,\min}\) of (1.2), and \(\mathcal{D}_{\max}(A_{\wedge})\) is the domain of the adjoint \(A_{\wedge,\max}=A_{\wedge,\min}^{*}\). In particular, we have \[\operatorname{sgn}\bigl{(}\mathcal{D}_{\max}(A)/\mathcal{D}_{\min}(A),[\cdot,\cdot]_{A}\bigr{)}=\operatorname{sgn}\bigl{(}\mathcal{D}_{\max}(A_{\wedge})/ \mathcal{D}_{\min}(A_{\wedge}),[\cdot,\cdot]_{A_{\wedge}}\bigr{)}\] for the signatures of these spaces. 
On the one hand, using the spectral flow formula from [10], we have \[\operatorname{sgn}\bigl{(}\mathcal{D}_{\max}(A_{\wedge})/\mathcal{D}_{\min}( A_{\wedge}),[\cdot,\cdot]_{A_{\wedge}}\bigr{)}=\operatorname{SF}\!\left[\,p( \sigma):E_{1}\subset E_{0}\to E_{0},\;-\infty<\sigma<\infty\,\right]\!,\] while on the other hand \(\operatorname{sgn}\bigl{(}\mathcal{D}_{\max}(A)/\mathcal{D}_{\min}(A),[\cdot, \cdot]_{A}\bigr{)}=0\) if (1.1) is Fredholm or the embedding \(\mathcal{D}_{\max}(A)\hookrightarrow L^{2}(M;\mathscr{E})\) is compact, which combined leads to the desired conclusion that \[\operatorname{SF}\!\left[\,p(\sigma):E_{1}\subset E_{0}\to E_{0},\;-\infty< \sigma<\infty\,\right]=0.\] In the Dirac case the spectral flow of the indicial family is easily seen to equal the Fredholm index of the operator \(D:\mathcal{D}(D)\subset L^{2}(Y;E_{-})\to L^{2}(Y;E_{+})\) on the even-dimensional boundary \(Y\), thus recovering cobordism invariance of the index in this context. The structure of this paper is as follows: In Section 2 we briefly review what is needed from extension theory of symmetric operators, in particular the criteria that ensure that \(\bigl{(}\mathcal{D}_{\max}(A)/\mathcal{D}_{\min}(A),[\cdot,\cdot]_{A}\bigr{)}\) is finite-dimensional with signature zero. In Section 3 we review results from our paper [10] on indicial operators in the form in which they are needed here; we also address in this section how indicial operators of first order that model the Dirac case fit into this framework in order to obtain the desired conclusions about the cobordism invariance of the index when specializing to such operators. In Section 4 we fill in the details of the outline above and prove the null-cobordism theorem (Theorem 4.2). Finally, in Appendix A, we discuss the null-cobordism theorem for smooth manifolds; assumptions appear much weaker here on the geometry and the participating objects because the analytic tools available in this case are rich enough to create the preconditions needed to apply the null-cobordism theorem rather than having to assume them from the outset. With the ongoing further development of singular analysis on stratified manifolds we anticipate similar reductions and simplifications for such cases in the future as well. ## 2. Preliminaries from extension theory Let \(H\) be a separable complex Hilbert space, and suppose \(A_{\min}:\mathcal{D}_{\min}\subset H\to H\) is closed, densely defined, and symmetric. Let \(A_{\max}:=A_{\min}^{*}:\mathcal{D}_{\max}\subset H\to H\) be the adjoint. We equip \(\mathcal{D}_{\max}\) with the graph inner product \[\langle u,v\rangle_{A_{\max}}=\langle u,v\rangle+\langle A_{\max}u,A_{\max}v\rangle\] and associated graph norm. Then \(\mathcal{D}_{\min}\subset\bigl{(}\mathcal{D}_{\max},\|\cdot\|_{A_{\max}}\bigr{)}\) is a closed subspace, and \[\mathcal{D}_{\max}=\mathcal{D}_{\min}\oplus\ker(A_{\max}+i)\oplus\ker(A_{ \max}-i)\] by von Neumann's formulas. The dimensions \[n_{\pm}=\dim\ker(A_{\max}\mp\lambda i)\in\mathbb{N}_{0}\cup\{\infty\},\quad \lambda>0,\] are the deficiency indices of the operator \(A_{\min}\) and independent of \(\lambda>0\). The operators \[A_{\min}\pm i\lambda:\mathcal{D}_{\min}\subset H\to H,\quad\lambda>0,\] are injective and have closed range, and we have \(n_{\pm}<\infty\) if and only if \(A_{\min}\pm i\lambda\) is Fredholm, in which case \(n_{\pm}=-\operatorname{ind}(A_{\min}\pm i\lambda)\). 
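A standard one-dimensional illustration of these notions, included here only for orientation, is the operator \(A=-i\frac{d}{dx}\) with domain \(C_{c}^{\infty}(0,\infty)\subset L^{2}(0,\infty)\): the closure \(A_{\min}\) has domain \(\{u\in H^{1}(0,\infty);\;u(0)=0\}\), the adjoint \(A_{\max}\) acts as \(-i\frac{d}{dx}\) on all of \(H^{1}(0,\infty)\), and solving \(A_{\max}u=\pm iu\) gives \(\ker(A_{\max}-i)=\operatorname{span}\{e^{-x}\}\) and \(\ker(A_{\max}+i)=\{0\}\), so \(n_{+}=1\) and \(n_{-}=0\). In particular \(A_{\min}\) admits no selfadjoint extensions, \(\dim\mathcal{D}_{\max}/\mathcal{D}_{\min}=1\), and integration by parts yields \([u,v]_{A}=u(0)\overline{v(0)}\), so the induced pairing on the quotient is positive definite.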
The adjoint pairing \[[\cdot,\cdot]_{A}:\mathcal{D}_{\max}\times\mathcal{D}_{\max}\to\mathbb{C},\] descends to a nondegenerate Hermitian sesquilinear form (indefinite inner product) \[[\cdot,\cdot]:\mathcal{D}_{\max}/\mathcal{D}_{\min}\times\mathcal{D}_{\max}/ \mathcal{D}_{\min}\to\mathbb{C}.\] If \(\dim\mathcal{D}_{\max}/\mathcal{D}_{\min}<\infty\), i.e. if \(A_{\min}\) has finite deficiency indices, the signature of the adjoint pairing is given by \[\operatorname{sgn}\bigl{(}\mathcal{D}_{\max}/\mathcal{D}_{\min},[\cdot,\cdot] \bigr{)}=n_{+}-n_{-}.\] The following criteria are standard and useful for verification that \(n_{+}=n_{-}<\infty\). **Proposition 2.1**.: _Suppose \(A_{\min}:\mathcal{D}_{\min}\subset H\to H\) is Fredholm. Then \(A_{\min}\) has finite and equal deficiency indices, and therefore_ \[\operatorname{sgn}\bigl{(}\mathcal{D}_{\max}/\mathcal{D}_{\min},[\cdot,\cdot] \bigr{)}=0.\] Proof.: Because \(A_{\min}:\mathcal{D}_{\min}\subset H\to H\) is Fredholm there exists \(\varepsilon>0\) such that \(A_{\min}+i\lambda:\mathcal{D}_{\min}\subset H\to H\) is Fredholm for \(-\varepsilon<\lambda<\varepsilon\), and consequently both \(n_{\pm}<\infty\) and \(A_{\min}+i\lambda\) is Fredholm for all \(\lambda\in\mathbb{R}\). Now \[\mathbb{R}\ni\lambda\mapsto A_{\min}+i\lambda:\mathcal{D}_{\min}\subset H\to H\] is a continuous Fredholm function and therefore has constant index. Thus \[n_{+}=-\operatorname{ind}(A_{\min}+i)=-\operatorname{ind}(A_{\min}-i)=n_{-}.\] **Proposition 2.2**.: _If the embedding \(\bigl{(}\mathcal{D}_{\max},\|\cdot\|_{A_{\max}}\bigr{)}\hookrightarrow H\) is compact then \(A_{\min}\) has finite and equal deficiency indices._ Proof.: The norms \(\|\cdot\|_{A_{\max}}\) and \(\|\cdot\|_{H}\) are equivalent on \(\ker(A_{\max}\pm i)\), and the identity map \(\bigl{(}\ker(A_{\max}\pm i),\|\cdot\|_{A_{\max}}\bigr{)}\to\bigl{(}\ker(A_{ \max}\pm i),\|\cdot\|_{H}\bigr{)}\) is compact by assumption. Thus \(\dim\ker(A_{\max}\pm i)<\infty\). Now \(A_{\min}\pm i:\mathcal{D}_{\min}\subset H\to H\) are both Fredholm, and because \(\mathcal{D}_{\min}\hookrightarrow H\) is compact we have \(\operatorname{ind}(A_{\min}-i)=\operatorname{ind}(A_{\min}+i)\). The proposition is proved. ## 3. Indicial operators We consider indicial operators of the form \[A_{\wedge}=x^{-1}\sum_{j=0}^{\mu}a_{j}(xD_{x})^{j}:C_{c}^{\infty}(\mathbb{R} _{+};E_{1})\subset L^{2}(\mathbb{R}_{+};E_{0})\to L^{2}(\mathbb{R}_{+};E_{0}), \tag{3.1}\] where \(\mu\in\mathbb{N}\) and \(E_{0}\) and \(E_{1}\) are separable complex Hilbert spaces such that \(E_{1}\hookrightarrow E_{0}\) is continuous and dense, and the operators \(a_{j}:E_{1}\to E_{0}\) are continuous for \(j=0,\ldots,\mu\). Let \[p(\sigma)=\sum_{j=0}^{\mu}a_{j}\sigma^{j}:E_{1}\to E_{0},\quad\sigma\in \mathbb{C} \tag{3.2}\] be the indicial family associated with \(A_{\wedge}\). We make the following assumptions: 1. \(p(\sigma):E_{1}\subset E_{0}\to E_{0}\) is closed, densely defined, and Fredholm for \(\sigma\in\mathbb{C}\), and the map \(\mathbb{C}\ni\sigma\mapsto p(\sigma)\in\mathscr{L}(E_{1},E_{0})\) is holomorphic. 2. We have \(p(\overline{\sigma})^{*}=p(\sigma):E_{1}\subset E_{0}\to E_{0}\) as unbounded operators in \(E_{0}\). 3. 
For \((\lambda,\sigma)\in\mathbb{R}^{2}\) and \(|\lambda,\sigma|\geq R\gg 0\) sufficiently large \(p(\sigma)+i\lambda:E_{1}\to E_{0}\) is invertible with \[\sup_{|\lambda,\sigma|\geq R}\Bigl{\{}(1+\lambda^{2}+\sigma^{2\mu})^{\frac{1}{2 }}\bigl{\|}\bigl{(}p(\sigma)+i\lambda\bigr{)}^{-1}\bigr{\|}_{\mathscr{L}(E_{0} )}+\bigl{\|}\bigl{(}p(\sigma)+i\lambda\bigr{)}^{-1}\bigr{\|}_{\mathscr{L}(E_{0},E_{1})}\Bigr{\}}<\infty,\] and for every \(k\in\{1,\ldots,\mu\}\) we have \[\sup_{|\lambda,\sigma|\geq R}(1+\lambda^{2}+\sigma^{2\mu})^{\frac{k}{2\mu}} \bigl{\|}\bigl{[}\partial_{\sigma}^{k}p(\sigma)\bigr{]}\bigl{(}p(\sigma)+i \lambda\bigr{)}^{-1}\bigr{\|}_{\mathscr{L}(E_{0})}<\infty.\] In [10] we systematically studied operators of the kind (3.1) under such assumptions. We summarize some of the findings below: 1. The operator (3.1) is symmetric and densely defined in \(L^{2}(\mathbb{R}_{+};E_{0})\). Let \(A_{\wedge,\min}\) be its closure, and \(A_{\wedge,\max}=A_{\wedge,\min}^{*}\) be the adjoint. Then \[\dim\mathcal{D}_{\max}(A_{\wedge})/\mathcal{D}_{\min}(A_{\wedge})<\infty,\] i.e., \(A_{\wedge}\) has finite deficiency indices. 2. The boundary spectrum \[\operatorname{spec}_{b}(p)=\{\sigma\in\mathbb{C};\;p(\sigma):E_{1}\to E_{0} \text{ is not invertible}\}\subset\mathbb{C}\] is discrete, and every strip \(|\Im(\sigma)|\leq K\), \(K>0\), contains only finitely many elements of \(\operatorname{spec}_{b}(p)\). The elements of the boundary spectrum are generally referred to as indicial roots. 3. Fix an arbitrary cut-off function \(\omega\in C_{c}^{\infty}(\overline{\mathbb{R}}_{+})\) with \(\omega\equiv 1\) near \(x=0\). For each indicial root \(\sigma_{0}\in\operatorname{spec}_{b}(p)\) let \[\mathscr{E}_{\sigma_{0}}(p)=\Big{\{}u=\omega\sum_{j=0}^{k}e_{j}\log^{j}(x)x^{i \sigma_{0}};\;k\in\mathbb{N}_{0}\text{ and }e_{j}\in E_{1},\] (3.3) \[\text{ and }p(\sigma)(Mu)(\sigma)\text{ is holomorphic at }\sigma=\sigma_{0}\Big{\}},\] where \[\big{(}Mu\big{)}(\sigma)=\int_{0}^{\infty}x^{-i\sigma}u(x)\,\frac{dx}{x}\] is the Mellin transform of \(u\). This space is finite-dimensional for every \(\sigma_{0}\), and we have \[\mathcal{D}_{\max}(A_{\wedge})=\mathcal{D}_{\min}(A_{\wedge})\oplus\bigoplus _{\begin{subarray}{c}\sigma_{0}\in\operatorname{spec}_{b}(p)\\ -\frac{1}{2}<\Im(\sigma_{0})<\frac{1}{2}\end{subarray}}\mathscr{E}_{\sigma_{ 0}}(p).\] (3.4) 4. We have \[x^{\frac{1}{2}}\mathscr{H}(\mathbb{R}_{+};E_{1})\cap L^{2}(\mathbb{R}_{+};E_{0 })\hookrightarrow\mathcal{D}_{\min}(A_{\wedge}),\] and \(\mathcal{D}_{\min}(A_{\wedge})=x^{\frac{1}{2}}\mathscr{H}(\mathbb{R}_{+};E_{1 })\cap L^{2}(\mathbb{R}_{+};E_{0})\) if and only if \(p(\sigma):E_{1}\to E_{0}\) is invertible for all \(\Im(\sigma)=-\frac{1}{2}\). The space \(\mathscr{H}(\mathbb{R}_{+};E_{1})\) is the completion of \(C_{c}^{\infty}(\mathbb{R}_{+};E_{1})\) with respect to the norm \[\|u\|_{\mathscr{H}}^{2}=\int_{\mathbb{R}}\|p(\sigma+i\gamma_{0})(Mu)(\sigma) \|_{E_{0}}^{2}\,d\sigma,\] where \(\gamma_{0}\in\mathbb{R}\) is arbitrary such that \(p(\sigma+i\gamma_{0}):E_{1}\to E_{0}\) is invertible for all \(\sigma\in\mathbb{R}\). We have \[\mathscr{H}(\mathbb{R}_{+};E_{1})\hookrightarrow H_{b}^{\mu}(\mathbb{R}_{+};E_{ 0})\cap L_{b}^{2}(\mathbb{R}_{+};E_{1}),\] and in typical situations these spaces are equal; this is the case, for instance, if \[\sup_{\sigma\in\mathbb{R}}\|p(\sigma+i\gamma_{0})(\langle\sigma\rangle^{\mu} +i\Lambda)^{-1}\|_{\mathscr{L}(E_{0})}<\infty,\] (3.5) where \(\Lambda:E_{1}\subset E_{0}\to E_{0}\) is selfadjoint (e.g. 
for \(\Lambda=p(0)\)). 5. While not discussed in [10] it is not hard to see that, under the added assumption that the embedding \(E_{1}\hookrightarrow E_{0}\) is compact, multiplication by a cut-off function \(\omega\in C_{c}^{\infty}\big{(}\overline{\mathbb{R}}_{+}\big{)}\) with \(\omega\equiv 1\) near \(x=0\) induces a compact operator \(\omega:x^{\alpha}\mathscr{H}(\mathbb{R}_{+};E_{1})\to L_{b}^{2}(\mathbb{R}_{+};E _{0})\) for every \(\alpha>0\)1. Footnote 1: The function \(a_{0}(x,\sigma)=x^{\alpha}\omega(x)p(\sigma+i\gamma_{0})^{-1}\) is a Mellin symbol taking values in the compact operators \(E_{0}\to E_{0}\), and we have \(\sup\{\langle\log(x)\rangle^{j}\langle\sigma\rangle^{\mu+k}\|(xD_{x})^{l} \partial_{\sigma}^{k}a_{0}(x,\sigma)\|_{\mathscr{L}(E_{0})};\ (x,\sigma)\in\mathbb{R}_{+} \times\mathbb{R}\}<\infty\) for all \(j,k,l\in\mathbb{N}_{0}\). Thus the Mellin pseudodifferential operator \(\operatorname{op}_{M}(a_{0}):L_{b}^{2}(\mathbb{R}_{+};E_{0})\to L_{b}^{2}( \mathbb{R}_{+};E_{0})\) is compact, which implies compactness of the multiplication operator \(\omega:x^{\alpha}\mathscr{H}(\mathbb{R}_{+};E_{1})\to L_{b}^{2}(\mathbb{R}_{+} ;E_{0})\) as asserted. Consequently, if additionally \(p(\sigma):E_{1}\to E_{0}\) is invertible for all \(\Im(\sigma)=-\frac{1}{2}\), we obtain a compact map \(\omega:\mathcal{D}_{\max}(A_{\wedge})\to L^{2}(\mathbb{R}_{+};E_{0})\), and a bounded map \(1-\omega:\mathcal{D}_{\max}(A_{\wedge})\to\mathcal{D}_{\min}(A_{\wedge})\). The latter is based on the identity \(\mathcal{D}_{\min}(A_{\wedge})=x^{\frac{1}{2}}\mathscr{H}(\mathbb{R}_{+};E_{1} )\cap L^{2}(\mathbb{R}_{+};E_{0})\) and localization properties of the space \(\mathscr{H}(\mathbb{R}_{+};E_{1})\) (see [10, Proposition 7.6]). 6. The adjoint pairing \[[\cdot,\cdot]_{A_{\wedge}}:\mathcal{D}_{\max}(A_{\wedge})\times\mathcal{D}_{ \max}(A_{\wedge})\to\mathbb{C},\] \[[u,v]_{A_{\wedge}}=\frac{1}{i}\Big{[}\langle A_{\wedge,\max}u,v \rangle_{L^{2}(\mathbb{R}_{+};E_{0})}-\langle u,A_{\wedge,\max}v\rangle_{L^{2} (\mathbb{R}_{+};E_{0})}\Big{]}\] induces a nondegenerate Hermitian sesquilinear form \[[\cdot,\cdot]:\mathcal{D}_{\max}(A_{\wedge})/\mathcal{D}_{\min}(A_{\wedge}) \times\mathcal{D}_{\max}(A_{\wedge})/\mathcal{D}_{\min}(A_{\wedge})\to\mathbb{ C},\] and its signature is given by the spectral flow of the indicial family (3.2) along the real line: \[\operatorname{sgn}\bigl{(}\mathcal{D}_{\max}(A_{\wedge})/\mathcal{D}_{\min}(A _{\wedge}),[\cdot,\cdot]\bigr{)}=\operatorname{SF}[\,p(\sigma):E_{1}\subset E _{0}\to E_{0},\ -\infty<\sigma<\infty\,].\] (3.6) Note that \(p(\sigma):E_{1}\to E_{0}\) is invertible for \(|\sigma|\geq T\gg 0\) large enough, \(\sigma\in\mathbb{R}\), and the spectral flow in (3.6) then refers to \(p(\sigma)\) on the interval \(-T\leq\sigma\leq T\). Only crossings of real indicial roots contribute terms to the spectral flow. The focus in this paper is on the signature of the adjoint pairing, and by (3.6) only real indicial roots are relevant. In order to obtain simple expressions for the minimal domain and the maximal domain (3.4) of \(A_{\wedge}\) it is sometimes convenient to introduce a scaling parameter \(t>0\) to remove any small non-real indicial roots from the strip \(|\Im(\sigma)|\leq\frac{1}{2}\). 
This leads to \[A_{\wedge,t}=x^{-1}\sum_{j=0}^{\mu}a_{j}t^{j}(xD_{x})^{j}:C_{c}^{\infty}( \mathbb{R}_{+};E_{1})\subset L^{2}(\mathbb{R}_{+};E_{0})\to L^{2}(\mathbb{R}_{+ };E_{0})\] with indicial family \[p_{t}(\sigma)=p(t\sigma):E_{1}\subset E_{0}\to E_{0},\quad\sigma\in\mathbb{C},\] and the standing assumptions on \(p(\sigma)\) imply that the analogous properties are also satisfied for \(p_{t}(\sigma)\), and all estimates are locally uniform with respect to \(t>0\). In particular, the spectral flow \[\operatorname{SF}[\,p_{t}(\sigma):E_{1}\subset E_{0}\to E_{0},\ -\infty<\sigma< \infty\,]\] is independent of \(t>0\) by homotopy invariance, and thus \[\operatorname{sgn}\bigl{(}\mathcal{D}_{\max}(A_{\wedge,t})/\mathcal{D}_{\min}(A _{\wedge,t}),[\cdot,\cdot]\bigr{)}\] is independent of \(t>0\). For \(0<t\leq t_{0}\) small enough, \(p_{t}(\sigma):E_{1}\subset E_{0}\to E_{0}\) is invertible for all \(0<|\Im(\sigma)|\leq\frac{1}{2}\). We then have \[\mathcal{D}_{\min}(A_{\wedge,t})=x^{\frac{1}{2}}\mathscr{H}(\mathbb{R}_{+};E_{ 1})\cap L^{2}(\mathbb{R}_{+};E_{0}),\] where the definition of \(\mathscr{H}(\mathbb{R}_{+};E_{1})\) is accordingly based on \(p_{t}(\sigma)\), and \[\mathcal{D}_{\max}(A_{\wedge,t})=\mathcal{D}_{\min}(A_{\wedge,t})\oplus \bigoplus_{\sigma_{0}\in\operatorname{spec}_{b}(p_{t})\cap\mathbb{R}}\mathscr{ E}_{\sigma_{0}}(p_{t}).\] If (3.5) holds for \(p(\sigma)\) it is true for all \(p_{t}(\sigma)\), and in this case the space \[\mathscr{H}(\mathbb{R}_{+};E_{1})=H_{b}^{\mu}(\mathbb{R}_{+};E_{0})\cap L_{b} ^{2}(\mathbb{R}_{+};E_{1})\] is independent of \(t>0\); thus the minimal domain \[\mathcal{D}_{\min}(A_{\wedge})=x^{\frac{1}{2}}H_{b}^{\mu}(\mathbb{R}_{+};E_{0 })\cap x^{\frac{1}{2}}L_{b}^{2}(\mathbb{R}_{+};E_{1})\cap L^{2}(\mathbb{R}_{+ };E_{0})\] is independent of \(0<t\leq t_{0}\). ### Operators of first order Let \(D:\mathcal{D}(D)\subset H_{1}\to H_{2}\) be closed and densely defined, and let \(D^{*}:\mathcal{D}(D^{*})\subset H_{2}\to H_{1}\) be the adjoint. Write \[E_{0}=\begin{array}{c}H_{1}\\ \oplus\\ H_{2}\end{array}\text{ and }E_{1}=\begin{array}{c}\mathcal{D}(D)\\ \oplus\\ \mathcal{D}(D^{*})\end{array}\hookrightarrow E_{0}.\] We assume that \(D\) (and therefore also \(D^{*}\)) is Fredholm, and that the embeddings for both domains \(\mathcal{D}(D)\hookrightarrow H_{1}\) and \(\mathcal{D}(D^{*})\hookrightarrow H_{2}\) are compact. Consider then \[\mathscr{D}_{\wedge}=x^{-1}\begin{bmatrix}\begin{bmatrix}1&0\\ 0&-1\end{bmatrix}(xD_{x})+\begin{bmatrix}0&D^{*}\\ D&0\end{bmatrix}\end{bmatrix}:C_{c}^{\infty}(\mathbb{R}_{+};E_{1})\subset L^ {2}(\mathbb{R}_{+};E_{0})\to L^{2}(\mathbb{R}_{+};E_{0})\] with indicial family \[\mathscr{D}(\sigma)=\begin{bmatrix}\sigma&D^{*}\\ D&-\sigma\end{bmatrix}:E_{1}\subset E_{0}\to E_{0},\quad\sigma\in \mathbb{C}.\] Now \(\mathscr{D}(\sigma)\) satisfies the assumptions previously stated for indicial families with \(\mu=1\), including (3.5) with \(\Lambda=\mathscr{D}(0)\); see Lemma 3.8 for the required estimates. Therefore the conclusions summarized above hold for \(\mathscr{D}_{\wedge}\), and by Lemma 3.9 we have \[\operatorname{sgn}\bigl{(}\mathcal{D}_{\max}(\mathscr{D}_{\wedge})/\mathcal{D }_{\min}(\mathscr{D}_{\wedge}),[\cdot,\cdot]\bigr{)}=\operatorname{ind}[D: \mathcal{D}(D)\subset H_{1}\to H_{2}]. 
\tag{3.7}\] The only real indicial root is \(\sigma_{0}=0\), and after possibly introducing a sufficiently small scaling parameter \(t>0\) and replacing \(\mathscr{D}_{\wedge}\) by \[\mathscr{D}_{\wedge,t}=x^{-1}\begin{bmatrix}t\begin{bmatrix}1&0\\ 0&-1\end{bmatrix}(xD_{x})+\begin{bmatrix}0&D^{*}\\ D&0\end{bmatrix}\end{bmatrix}\] we have \[\mathcal{D}_{\min}(\mathscr{D}_{\wedge,t})=x^{\frac{1}{2}}H_{b}^{ 1}(\mathbb{R}_{+};E_{0})\cap x^{\frac{1}{2}}L_{b}^{2}(\mathbb{R}_{+};E_{1})\cap L ^{2}(\mathbb{R}_{+};E_{0}),\] \[\mathcal{D}_{\max}(\mathscr{D}_{\wedge,t})=\mathcal{D}_{\min}( \mathscr{D}_{\wedge,t})\oplus\mathscr{E}_{0}(\mathscr{D}_{t}).\] In this case \(\mathscr{E}_{0}(\mathscr{D}_{t})=\mathscr{E}_{0}(\mathscr{D})\) is also independent of \(t>0\), and we have \[\mathscr{E}_{0}(\mathscr{D})=\Bigl{\{}u=\omega\begin{bmatrix}k\\ k^{*}\end{bmatrix};\;k\in\ker(D),\;k^{*}\in\ker(D^{*})\Bigr{\}}.\] This follows from (3.3) in view of \[\mathscr{D}(\sigma)^{-1} =\sigma\begin{bmatrix}1&0\\ 0&-1\end{bmatrix}[\mathscr{D}(0)^{2}+\sigma^{2}]^{-1}+\mathscr{D}(0)[\mathscr{D} (0)^{2}+\sigma^{2}]^{-1}\] \[=\begin{bmatrix}\Pi_{D}&0\\ 0&-\Pi_{D^{*}}\end{bmatrix}\frac{1}{\sigma}+\text{holomorphic}\] near \(\sigma=0\), where \(\Pi_{D}:H_{1}\to\ker(D)\) and \(\Pi_{D^{*}}:H_{2}\to\ker(D^{*})\) are the orthogonal projections onto the kernels of \(D\) and \(D^{*}\), respectively. For sufficiently small \(t>0\) a brief calculation shows that the adjoint pairing is given by \[\left[\omega\begin{bmatrix}k_{1}\\ k_{1}^{*}\end{bmatrix},\omega\begin{bmatrix}k_{2}\\ k_{2}^{*}\end{bmatrix}\right]_{\mathscr{D}_{\wedge,t}}=t\big{(}\langle k_{1},k _{2}\rangle_{H_{1}}-\langle k_{1}^{*},k_{2}^{*}\rangle_{H_{2}}\big{)}\] for \(k_{j}\in\ker(D)\) and \(k_{j}^{*}\in\ker(D^{*})\), \(j=1,2\), which provides a direct justification for (3.7) for \(\mathscr{D}_{\wedge,t}\) (for small \(t>0\)) that does not rely on the spectral flow. **Lemma 3.8**.: _For \((\lambda,\sigma)\in\mathbb{R}^{2}\) write \(z=\sigma+i\lambda\in\mathbb{C}\) and consider_ \[\mathbf{D}(z)=\mathscr{D}(\sigma)+i\lambda=\begin{bmatrix}z&D^{*}\\ D&-\overline{z}\end{bmatrix}:E_{1}\subset E_{0}\to E_{0}.\] _Then \(\mathbf{D}(z)\) is invertible for all \(z\in\mathbb{C}\setminus\{0\}\), and_ \[\sup_{|z|\geq 1}\{|z|\cdot\|\mathbf{D}(z)^{-1}\|_{\mathscr{L}(E_{0})}+\| \mathbf{D}(z)^{-1}\|_{\mathscr{L}(E_{0},E_{1})}\}<\infty.\] Proof.: We have \(\mathbf{D}(z)^{*}=\mathbf{D}(\overline{z})\), and \[\mathbf{D}(z)^{*}\mathbf{D}(z)=\mathbf{D}(z)\mathbf{D}(z)^{*}=\begin{bmatrix} |z|^{2}+D^{*}D&0\\ 0&|z|^{2}+DD^{*}\end{bmatrix}=\mathscr{D}(0)^{2}+|z|^{2}.\] This operator is invertible for \(z\in\mathbb{C}\setminus\{0\}\), and consequently \(\mathbf{D}(z)\) is invertible with \[\mathbf{D}(z)^{-1}=\mathbf{D}(z)^{*}[\mathbf{D}(z)\mathbf{D}(z)^{*}]^{-1}=[ \overline{z}\Pi_{1}-z\Pi_{2}][\mathscr{D}(0)^{2}+|z|^{2}]^{-1}+\mathscr{D}(0) [\mathscr{D}(0)^{2}+|z|^{2}]^{-1},\] where \(\Pi_{j}:E_{0}\to H_{j}\subset E_{0}\) is the orthogonal projection, \(j=1,2\). 
In view of \(\mathscr{D}(0)[\overline{z}\Pi_{1}-z\Pi_{2}]=[\overline{z}\Pi_{2}-z\Pi_{1}] \mathscr{D}(0)\) we have \[\mathscr{D}(0)\mathbf{D}(z)^{-1}=[\overline{z}\Pi_{2}-z\Pi_{1}]\mathscr{D}(0 )[\mathscr{D}(0)^{2}+|z|^{2}]^{-1}+\mathscr{D}(0)^{2}[\mathscr{D}(0)^{2}+|z|^ {2}]^{-1}.\] The Spectral Theorem implies \[\sup_{|z|\geq 1}\{\|\mathscr{D}(0)^{2}[\mathscr{D}(0)^{2}+|z|^{2}]^{-1}\|+\|z \mathscr{D}(0)[\mathscr{D}(0)^{2}+|z|^{2}]^{-1}\|+\|z^{2}[\mathscr{D}(0)^{2}+ |z|^{2}]^{-1}\|\}<\infty,\] where \(\|\cdot\|=\|\cdot\|_{\mathscr{L}(E_{0})}\). The lemma now follows. **Lemma 3.9**.: _We have_ \[\operatorname{ind}[D:\mathcal{D}(D)\subset H_{1}\to H_{2}]=\operatorname{SF} \bigl{[}\mathscr{D}(\sigma):E_{1}\subset E_{0}\to E_{0},\;\sigma\in\mathbb{R} \bigr{]}.\] Proof.: Let \(\mathcal{K}=\ker(\mathscr{D}(0))=\ker(D)\oplus\ker(D^{*})\). Then \[\mathscr{D}(\sigma)=\begin{bmatrix}\mathscr{D}_{\mathcal{K}}(\sigma)&0\\ 0&\mathscr{D}_{\mathcal{K}^{\perp}}(\sigma)\end{bmatrix}:\begin{array}{ cccc}\mathcal{K}&&\mathcal{K}\\ \oplus&\rightarrow&\oplus\\ \mathcal{K}^{\perp}\cap E_{1}&&\mathcal{K}^{\perp}\end{array},\quad\sigma\in \mathbb{R}.\] Now \(\mathscr{D}_{\mathcal{K}}(\sigma):\mathcal{K}\to\mathcal{K}\), \(\sigma\neq 0\), has eigenvalues \(\sigma,-\sigma\) of multiplicities \(\dim\ker(D)\) and \(\dim\ker(D^{*})\), respectively, and \(\mathscr{D}_{\mathcal{K}^{\perp}}(\sigma)\) is invertible for all \(\sigma\in\mathbb{R}\). Thus \[\operatorname{ind}D =\dim\ker(D)-\dim\ker(D^{*})\] \[=\operatorname{SF}\bigl{[}\mathscr{D}_{\mathcal{K}}(\sigma): \mathcal{K}\to\mathcal{K},\ \sigma\in\mathbb{R}\bigr{]}\] \[=\operatorname{SF}\bigl{[}\mathscr{D}(\sigma):E_{1}\subset E_{0} \to E_{0},\ \sigma\in\mathbb{R}\bigr{]}.\] ## 4. The null-cobordism theorem We now revisit the setting discussed in the introduction to prove the null-cobordism theorem. We make the following product type assumptions on the geometry and the operator: Let \((M,g)\) be a Riemannian manifold, and let \(U=U(Y)\subset M\) be an open subset that is isometric to \((0,\varepsilon)\times Y\) with product metric \(dx^{2}+g_{Y}\) for some \(\varepsilon>0\), where \((Y,g_{Y})\) is another Riemannian manifold. Let \(\mathscr{E}\to M\) be a Hermitian vector bundle such that \(\mathscr{E}\bigr{|}_{U(Y)}\cong\pi_{Y}^{*}E\) isometrically, where \(E\to Y\) is a Hermitian vector bundle, and \(\pi_{Y}:(0,\varepsilon)\times Y\to Y\) is the canonical projection. Let \[A:C_{c}^{\infty}(M;\mathscr{E})\to C_{c}^{\infty}(M;\mathscr{E})\] be an elliptic differential operator of order \(\mu\geq 1\) that is symmetric with respect to the inner product induced by the Riemannian and Hermitian metrics, and suppose that \(A\) is in \(U(Y)\) of the form \[A\cong A_{\wedge}=x^{-1}\sum_{j=0}^{\mu}a_{j}(y,D_{y})(xD_{x})^{j}:C_{c}^{ \infty}((0,\varepsilon)\times Y;\pi_{Y}^{*}E)\to C_{c}^{\infty}((0, \varepsilon)\times Y;\pi_{Y}^{*}E),\] where \(a_{j}(y,D_{y})\in\operatorname{Diff}^{\mu-j}(Y;E)\). Let \[p(\sigma)=\sum_{j=0}^{\mu}a_{j}(y,D_{y})\sigma^{j}:C_{c}^{\infty}(Y;E)\to C_{c }^{\infty}(Y;E),\ \sigma\in\mathbb{C},\] be the indicial family. 
We assume that \(p(\sigma):E_{1}\subset E_{0}\to E_{0}\) satisfies the assumptions stated in Section 3 with \(E_{0}=L^{2}(Y;E)\) and some domain \[H^{\mu}_{\operatorname{comp}}(Y;E)\subset E_{1}\subset H^{\mu}_{\operatorname{ loc}}(Y;E).\] We also assume that the embedding \(E_{1}\hookrightarrow E_{0}\) is compact, and that \(p(\sigma):E_{1}\to E_{0}\) is invertible for \(0<|\Im(\sigma)|\leq\frac{1}{2}\); as explained in Section 3, the latter can generally be achieved by introducing a scaling parameter (which for geometric operators typically corresponds to scaling the metric). The closed extensions of the indicial operator \[A_{\wedge}:C_{c}^{\infty}(\mathbb{R}_{+};E_{1})\subset L^{2}(\mathbb{R}_{+} \times Y;\pi_{Y}^{*}E)\to L^{2}(\mathbb{R}_{+}\times Y;\pi_{Y}^{*}E)\] are then described as explained in Section 3. Let \[A_{\min}:\mathcal{D}_{\min}(A)\subset L^{2}(M;\mathscr{E})\to L^{2}(M; \mathscr{E})\] be a closed symmetric extension of \(A:C_{c}^{\infty}(M;\mathscr{E})\subset L^{2}(M;\mathscr{E})\to L^{2}(M; \mathscr{E})\), and let \(A_{\max}:\mathcal{D}_{\max}(A)\subset L^{2}(M;\mathscr{E})\to L^{2}(M; \mathscr{E})\) be the adjoint; as discussed in the introduction, \(A_{\min}\) is generally not the minimal extension of \(A\) from \(C_{c}^{\infty}(M;\mathscr{E})\) and thus \(A_{\max}\) is not the largest \(L^{2}\)-based closed extension. By elliptic regularity we have \[H^{\mu}_{\mathrm{comp}}(M;\mathscr{E})\subset\mathcal{D}_{\min}(A)\subset\mathcal{ D}_{\max}(A)\subset H^{\mu}_{\mathrm{loc}}(M;\mathscr{E}).\] By a cut-off function we mean any function \(\omega\in C_{c}^{\infty}([0,\varepsilon))\) such that \(\omega\equiv 1\) near \(x=0\), and we consider \(\omega\) a function on \(M\) supported in \(U(Y)\). We make the following localization and compatibility assumptions between \(A\) and \(A_{\wedge}\): * For every cut-off function \(\omega\), multiplication by \(1-\omega\) gives a continuous operator \(\mathcal{D}_{\max}(A)\to\mathcal{D}_{\min}(A)\). We also assume that \(1-\omega:\mathcal{D}_{\min}(A)\to L^{2}(M;\mathscr{E})\) is compact. * For every cut-off function \(\omega\), multiplication by \(\omega\) gives continuous operators \(\mathcal{D}_{\min}(A)\to\mathcal{D}_{\min}(A_{\wedge})\) and \(\mathcal{D}_{\min}(A_{\wedge})\to\mathcal{D}_{\min}(A)\). To make sense of the mappings above note that \[M\supset U(Y)\cong(0,\varepsilon)\times Y\subset\mathbb{R}_{+}\times Y,\] which allows transitioning both ways between functions on \(M\) supported in \(U(Y)\) and functions on \(\mathbb{R}_{+}\times Y\) supported in \((0,\varepsilon)\times Y\). We will use these transitions freely in what follows. **Proposition 4.1**.: _Let \(\omega\in C_{c}^{\infty}([0,\varepsilon))\) be any cut-off function. The map_ \[\mathcal{D}_{\max}(A)/\mathcal{D}_{\min}(A)\ni u+\mathcal{D}_{\min}(A) \longmapsto\omega u+\mathcal{D}_{\min}(A_{\wedge})\in\mathcal{D}_{\max}(A_{ \wedge})/\mathcal{D}_{\min}(A_{\wedge})\] _is well-defined, and induces a unitary equivalence between the indefinite inner product spaces_ \[\big{(}\mathcal{D}_{\max}(A)/\mathcal{D}_{\min}(A),[\cdot,\cdot]_{A}\big{)} \cong\big{(}\mathcal{D}_{\max}(A_{\wedge})/\mathcal{D}_{\min}(A_{\wedge}),[ \cdot,\cdot]_{A_{\wedge}}\big{)}.\] Proof.: We first prove that multiplication by \(\omega\) gives a well-defined map \[\mathcal{D}_{\max}(A)\ni u\mapsto\omega u\in\mathcal{D}_{\max}(A_{\wedge}).\] Note that with \(u\) also \(\omega u\in\mathcal{D}_{\max}(A)\) by our localization assumption. 
Now pick another cut-off function \(\tilde{\omega}\in C_{c}^{\infty}([0,\varepsilon))\) such that \(\tilde{\omega}\equiv 1\) in a neighborhood of \(\mathrm{supp}(\omega)\). Let \(\phi\in\mathcal{D}_{\min}(A_{\wedge})\) be arbitrary, and write \(\phi=\tilde{\omega}\phi+(1-\tilde{\omega})\phi\). Since \[\mathcal{D}_{\min}(A_{\wedge})=x^{\frac{1}{2}}\mathscr{H}(\mathbb{R}_{+};E_{1 })\cap L^{2}(\mathbb{R}_{+};E_{0})\] as a consequence of our assumptions we have that both \(\tilde{\omega}\phi\), \((1-\tilde{\omega})\phi\in\mathcal{D}_{\min}(A_{\wedge})\), see Section 3. We also have \(\tilde{\omega}\phi\in\mathcal{D}_{\min}(A)\) by our localization and compatibility assumption with respect to the minimal domains. Using the locality of the differential operators \(A_{\wedge}\) and \(A\) we get \[\langle A_{\wedge}\phi,\omega u\rangle =\langle A_{\wedge}(\tilde{\omega}\phi),\omega u\rangle=\langle A (\tilde{\omega}\phi),\omega u\rangle=\langle\tilde{\omega}\phi,A_{\max}( \omega u)\rangle\] \[=\langle\phi,\tilde{\omega}A_{\max}(\omega u)\rangle=\langle\phi,A _{\max}(\omega u)\rangle.\] As this is valid for all \(\phi\in\mathcal{D}_{\min}(A_{\wedge})\) we see that \(\omega u\in\mathcal{D}_{\max}(A_{\wedge})\) with \(A_{\wedge,\max}(\omega u)\) given as the restriction of \(A_{\max}(\omega u)\) to \(U(Y)\) and extended trivially to \(\mathbb{R}_{+}\times Y\). As for \(u\in\mathcal{D}_{\min}(A)\) we also have \(\omega u\in\mathcal{D}_{\min}(A_{\wedge})\) by assumption, we thus obtain that the map \[\mathcal{D}_{\max}(A)/\mathcal{D}_{\min}(A)\ni u+\mathcal{D}_{\min}(A) \longmapsto\omega u+\mathcal{D}_{\min}(A_{\wedge})\in\mathcal{D}_{\max}(A_{ \wedge})/\mathcal{D}_{\min}(A_{\wedge})\] is well-defined. Conversely, multiplication by \(\omega\) likewise gives a well-defined map \[\mathcal{D}_{\max}(A_{\wedge})\ni u\mapsto\omega u\in\mathcal{D}_{\max}(A).\] Note that if \(u\in\mathcal{D}_{\max}(A_{\wedge})\) then \(\omega u\in\mathcal{D}_{\max}(A_{\wedge})\) and \((1-\omega)u\in\mathcal{D}_{\min}(A_{\wedge})\) by Section 3. Now let \(\tilde{\omega}\in C_{c}^{\infty}([0,\varepsilon))\) be such that \(\tilde{\omega}\equiv 1\) in a neighborhood of \(\operatorname{supp}(\omega)\). Let \(\phi\in\mathcal{D}_{\min}(A)\) be arbitrary, and write \(\phi=\tilde{\omega}\phi+(1-\tilde{\omega})\phi\); by the localization and compatibility assumptions both terms are in \(\mathcal{D}_{\min}(A)\), and we also have \(\tilde{\omega}\phi\in\mathcal{D}_{\min}(A_{\wedge})\). We get \[\langle A\phi,\omega u\rangle =\langle A(\tilde{\omega}\phi),\omega u\rangle=\langle A_{\wedge }(\tilde{\omega}\phi),\omega u\rangle=\langle\tilde{\omega}\phi,A_{\wedge,\max }(\omega u)\rangle\] \[=\langle\phi,\tilde{\omega}A_{\wedge,\max}(\omega u)\rangle= \langle\phi,A_{\wedge,\max}(\omega u)\rangle.\] This shows that \(\omega u\in\mathcal{D}_{\max}(A)\) with \(A_{\max}(\omega u)\) given by \(A_{\wedge,\max}(\omega u)\) in \(U(Y)\) and extended trivially to \(M\). We thus obtain a map \[\mathcal{D}_{\max}(A_{\wedge})/\mathcal{D}_{\min}(A_{\wedge})\ni u+\mathcal{D }_{\min}(A_{\wedge})\longmapsto\omega u+\mathcal{D}_{\min}(A)\in\mathcal{D}_{ \max}(A)/\mathcal{D}_{\min}(A),\] and both maps are inverses of each other. Finally, as for both \(A\) and \(A_{\wedge}\) each class in \(\mathcal{D}_{\max}/\mathcal{D}_{\min}\) has a representative supported in \(U(Y)\), and by the standing product type assumptions both adjoint pairings agree on those representatives, the proposition follows. 
**Theorem 4.2** (Null-Cobordism Theorem).: _Under the stated product type, localization, and compatibility assumptions we have_ \[\operatorname{SF}[\,p(\sigma):E_{1}\subset E_{0}\to E_{0},\;-\infty< \sigma<\infty\,]=0.\] _If moreover_ \[p(\sigma)=\begin{bmatrix}\sigma&D^{*}\\ D&-\sigma\end{bmatrix}:\begin{array}{c}\mathcal{D}(D)\\ \oplus\\ \mathcal{D}(D^{*})\end{array}\subset L^{2}\begin{pmatrix}E_{-}\\ Y;&\oplus\\ E_{+}\end{pmatrix}\to L^{2}\begin{pmatrix}E_{-}\\ Y;&\oplus\\ E_{+}\end{pmatrix}\] _with an elliptic Fredholm operator of first order_ \[D:\mathcal{D}(D)\subset L^{2}(Y;E_{-})\to L^{2}(Y;E_{+}),\] _then \(\operatorname{ind}[D:\mathcal{D}(D)\subset L^{2}(Y;E_{-})\to L^{2}(Y;E_{+})]=0\)._ Proof.: By Proposition 4.1 we have a unitary equivalence between the indefinite inner product spaces \[\big{(}\mathcal{D}_{\max}(A)/\mathcal{D}_{\min}(A),[\cdot,\cdot]_{A}\big{)} \cong\big{(}\mathcal{D}_{\max}(A_{\wedge})/\mathcal{D}_{\min}(A_{\wedge}),[ \cdot,\cdot]_{A_{\wedge}}\big{)}.\] Because \[\operatorname{sgn}\big{(}\mathcal{D}_{\max}(A_{\wedge})/\mathcal{D}_{\min}(A_ {\wedge}),[\cdot,\cdot]_{A_{\wedge}}\big{)}=\operatorname{SF}[\,p(\sigma):E_ {1}\subset E_{0}\to E_{0},\;-\infty<\sigma<\infty\,]\] by (3.6) it suffices to show that \[\operatorname{sgn}\big{(}\mathcal{D}_{\max}(A)/\mathcal{D}_{\min}(A),[\cdot, \cdot]_{A}\big{)}=0,\] and by Proposition 2.2 this will be the case if the embedding \(\mathcal{D}_{\max}(A)\hookrightarrow L^{2}(M;\mathscr{E})\) is compact. Because \(A\) has finite deficiency indices we only need to prove that \(\mathcal{D}_{\min}(A)\hookrightarrow L^{2}(M;\mathscr{E})\) is compact. Now let \(\omega\), \(\tilde{\omega}\in C_{c}^{\infty}([0,\varepsilon))\) be cut-off functions such that \(\tilde{\omega}\equiv 1\) in a neighborhood of \(\operatorname{supp}(\omega)\). By assumption the multiplication operator \[1-\omega:\mathcal{D}_{\min}(A)\to L^{2}(M;\mathscr{E})\] is compact, and \[\tilde{\omega}:\mathcal{D}_{\min}(A)\to\mathcal{D}_{\min}(A_{\wedge})\] is continuous. Now \[\mathcal{D}_{\min}(A_{\wedge})=x^{\frac{1}{2}}\mathscr{H}(\mathbb{R}_{+};E_{1}) \cap L^{2}(\mathbb{R}_{+};E_{0}),\] and because \(E_{1}\hookrightarrow E_{0}\) is compact, multiplication by \(\omega\) is a compact operator \[\omega:\mathcal{D}_{\min}(A_{\wedge})\to L^{2}(\mathbb{R}_{+};E_{0}),\] see Section 3. Consequently, using the product type assumptions, the composition \[\omega=\omega\tilde{\omega}:\mathcal{D}_{\min}(A)\to L^{2}(M;\mathscr{E})\] is compact, which shows that the embedding \(\iota=\omega+(1-\omega):\mathcal{D}_{\min}(A)\to L^{2}(M;\mathscr{E})\) is compact. Finally, the vanishing of the index in the special case of operators of first order follows from (3.7). ## Appendix A The null-cobordism theorem for closed manifolds In this appendix we discuss a version of the null-cobordism Theorem 4.2 for closed manifolds. Most of the previous assumptions no longer explicitly appear in this version, e.g., we do not assume product type geometry, and there isn't an operator \(A\) on \(M\) at the outset, but symbolic assumptions instead. As mentioned in the introduction this is due to the richness of analytic tools available for this situation that allows to create the preconditions needed to apply Theorem 4.2 instead of having to assume them from the outset. 
Let \(Y\) be a closed, compact Riemannian manifold and \(E\to Y\) be a Hermitian vector bundle, and consider a family \[p(\sigma)=\sum_{j=0}^{\mu}a_{j}(y,D_{y})\sigma^{j}:C^{\infty}(Y;E)\to C^{ \infty}(Y;E),\;\sigma\in\mathbb{R},\] (A.1) where \(a_{j}(y,D_{y})\in\mathrm{Diff}^{\mu-j}(Y;E)\), and \(\mu\geq 1\). We assume that the parameter-dependent principal symbol \[\boldsymbol{\sigma}(p)(y,\eta;\sigma)=\sum_{j=0}^{\mu}\boldsymbol{\sigma}(a_{ j})(y,\eta)\sigma^{j}:E_{y}\to E_{y}\] (A.2) is invertible on \(\big{(}T^{*}Y\times\mathbb{R}\big{)}\setminus 0\), and that \(p(\sigma)=p(\sigma)^{*}\) is (formally) selfadjoint. By elliptic and analytic Fredholm theory, \[\mathbb{R}\ni\sigma\mapsto p(\sigma):H^{\mu}(Y;E)\subset L^{2}(Y;E)\to L^{2}( Y;E)\] is a family of selfadjoint unbounded Fredholm operators acting in \(L^{2}(Y;E)\) that is invertible for all \(\sigma\in\mathbb{R}\) except at finitely many points, and it makes sense to consider the spectral flow \[\mathrm{SF}[p(\sigma)]:=\mathrm{SF}[p(\sigma):H^{\mu}(Y;E)\subset L^{2}(Y;E) \to L^{2}(Y;E),\;-\infty<\sigma<\infty]\in\mathbb{Z}\] associated with \(p(\sigma)\). **Lemma A.3**.: _The spectral flow is an invariant of the principal symbol (A.2) in the sense that if \(p_{j}(\sigma)\), \(j=1,2\), are two elliptic selfadjoint families of order \(\mu\geq 1\) of the form (A.1) with \(\boldsymbol{\sigma}(p_{1})(y,\eta;\sigma)=\boldsymbol{\sigma}(p_{2})(y,\eta;\sigma)\) then \(\mathrm{SF}[p_{1}(\sigma)]=\mathrm{SF}[p_{2}(\sigma)]\)._ Proof.: Let \(R>0\) be such that \[p_{1}(\sigma)+s[p_{2}(\sigma)-p_{1}(\sigma)]:H^{\mu}(Y;E)\subset L^{2}(Y;E) \to L^{2}(Y;E)\] is invertible for \(|\sigma|\geq R>0\) and all \(0\leq s\leq 1\). Consequently, this family is a homotopy of selfadjoint Fredholm functions on \([-R,R]\), invertible at both endpoints, and by the homotopy invariance of the spectral flow for such families we see that \(\mathrm{SF}[p_{1}(\sigma)]=\mathrm{SF}[p_{2}(\sigma)]\) Suppose there exists a compact Riemannian manifold \(M\) with \(\partial M=Y\). Utilizing the geodesic flow from the boundary in the direction of the inner normal vector field shows that there exists \(\varepsilon>0\) and a collar neighborhood map \(U(Y)\cong[0,\varepsilon)\times Y\) near the boundary such that the metric in \(U(Y)\) takes the form \(dx^{2}+g_{Y}(x)\) with a smooth family of metrics \(g_{Y}(x)\) on \(Y\), \(0\leq x<\varepsilon\), and such that \(g_{Y}(0)=g_{Y}\) is the given metric on \(Y\). Moreover, by choosing \(\varepsilon>0\) small enough, there exists a defining function for \(\partial M\) on \(M\) that in \(U(Y)\) is represented by projection onto the coordinate in \([0,\varepsilon)\). We'll also denote this global defining function by \(x:M\to\overline{\mathbb{R}}_{+}\). In particular, \[T^{*}M\big{|}_{Y}=T^{*}Y\oplus\operatorname{span}\{dx\big{|}_{Y}\}\] subject to these choices, and we can split variables \((y,\eta;\sigma)\in T^{*}M\big{|}_{Y}\) accordingly. **Theorem A.4** (Null-Cobordism Theorem).: _Let \(M\) be a compact Riemannian manifold \(M\) with \(\partial M=Y\), and let \(\mathscr{E}\to M\) be a Hermitian vector bundle with \(\mathscr{E}\big{|}_{Y}=E\). 
Let \(T^{*}M\big{|}_{Y}\cong T^{*}Y\times\mathbb{R}\) subject to the choices described above, and suppose there exists a symmetric, elliptic, differential principal symbol \(a\in C^{\infty}(T^{*}M\setminus 0;\operatorname{End}(\pi^{*}\mathscr{E}))\) of order \(\mu\) such that_ \[a(y,\eta;\sigma)=\boldsymbol{\sigma}(p)(y,\eta;\sigma)\text{ for }(y,\eta;\sigma)\in\big{(}T^{*}M\setminus 0\big{)}\big{|}_{Y},\] _where \(\pi:T^{*}M\to M\) is the canonical projection. Then \(\operatorname{SF}[p(\sigma)]=0\)._ With the family \(p(\sigma)\) from (A.1) we associate the indicial operator \[A_{\wedge}=x^{-1}\sum_{j=0}^{\mu}a_{j}(y,D_{y})(xD_{x})^{j}:C^{\infty}_{c}(\mathbb{R}_{+}\times Y;E)\subset L^{2}(\mathbb{R}_{+}\times Y;E)\to L^{2}(\mathbb{R}_{+}\times Y;E).\] (A.5) Here we also write \(E\) for its pull-back to \(\mathbb{R}_{+}\times Y\) with respect to the projection onto \(Y\), and equip \(\mathbb{R}_{+}\times Y\) with the product metric \(dx^{2}+g_{Y}\). Then \(A_{\wedge}\) is symmetric and densely defined. Let \(\mathcal{D}_{\min}(A_{\wedge})\) be the domain of the closure, and \(\mathcal{D}_{\max}(A_{\wedge})\) be the domain of the adjoint. Proof of Theorem A.4.: In the previously fixed collar neighborhood \(U(Y)\cong[0,\varepsilon)\times Y\) we utilize standard deformations of the Riemannian metric on \(M\), the Hermitian metric on \(\mathscr{E}\), and the principal symbol \(a\) to reduce to a product type structure near the boundary, as follows: Pick an isomorphism \(\mathscr{E}\big{|}_{U(Y)}\cong\pi_{Y}^{*}E\) that is the identity over \(Y\), where \(\pi_{Y}:[0,\varepsilon)\times Y\to Y\) is the projection map. With respect to the pull-back of the given Hermitian metric on \(E\) to \(\pi_{Y}^{*}E\), the metric on \(\mathscr{E}\big{|}_{U(Y)}\) under this isomorphism is then represented by \(h(x,y)\in C^{\infty}([0,\varepsilon)\times Y;\operatorname{End}(\pi_{Y}^{*}E))\) such that \(h=h^{*}>0\) and \(h(0,y)=\operatorname{Id}\). Choose \(C^{\infty}\)-functions \(\phi,\psi:[0,\varepsilon)\to\mathbb{R}\) with \[\phi\equiv 0\text{ on }0\leq x\leq\tfrac{\varepsilon}{3},\;0<\phi<\tfrac{2\varepsilon}{3}\text{ on }\tfrac{\varepsilon}{3}<x<\tfrac{2\varepsilon}{3},\text{ and }\phi\equiv x\text{ on }\tfrac{2\varepsilon}{3}\leq x<\varepsilon;\] \[\psi\equiv x\text{ on }0\leq x\leq\tfrac{\varepsilon}{3},\;\psi>0\text{ on }\tfrac{\varepsilon}{3}<x<\tfrac{2\varepsilon}{3},\text{ and }\psi\equiv 1\text{ on }\tfrac{2\varepsilon}{3}\leq x<\varepsilon.\] We then deform the Riemannian metric on \(U(Y)\) and Hermitian metric on \(\mathscr{E}\big{|}_{U(Y)}\) to \[\tilde{g}=dx^{2}+g_{Y}(\phi(x))\text{ and }\tilde{h}(x,y)=h(\phi(x),y)\in C^{\infty}([0,\varepsilon)\times Y;\operatorname{End}(\pi_{Y}^{*}E)),\] respectively, which both connect seamlessly with the Riemannian metric on \(M\) outside \(U(Y)\), and the Hermitian metric on \(\mathscr{E}\). We also change the principal symbol in \(\overset{\circ}{U}(Y)\) to \[\tilde{a}(x,y,\eta;\sigma)=\psi(x)^{-1}a(\phi(x),y,\eta;\psi(x)\sigma):E_{y}\to E_{y}\] (A.6) for \((x,y,\eta;\sigma)\in T^{*}\big{(}(0,\varepsilon)\times Y\big{)}\setminus 0\) with the obvious identifications of variables, which again connects seamlessly outside the collar neighborhood. The new homogeneous principal symbol \(\tilde{a}\in C^{\infty}(T^{*}\overset{\circ}{M}\setminus 0;\operatorname{End}(\pi^{*}\mathscr{E}))\) is symmetric with respect to the new metric on \(\mathscr{E}\), and elliptic over \(\overset{\circ}{M}\).
In \(\overset{\circ}{U}(Y)\) we have \[\tilde{a}(x,y,\eta;\sigma)=x^{-1}\,\boldsymbol{\sigma}(p)(y,\eta;x\sigma):E_{y}\to E_{y}\text{ for }0<x<\tfrac{\varepsilon}{3}\] by construction, which aligns with the principal symbol of \(A_{\wedge}\) from (A.5). Let now \(A\in\operatorname{Diff}^{\mu}(\overset{\circ}{M};\mathscr{E})\) be symmetric \(C^{\infty}_{c}(\overset{\circ}{M};\mathscr{E})\to C^{\infty}_{c}(\overset{\circ}{M};\mathscr{E})\) with respect to the \(L^{2}\)-inner product associated with the modified metrics on \(M\) and \(\mathscr{E}\), respectively, such that the principal symbol \(\boldsymbol{\sigma}(A)=\tilde{a}\) on \(T^{*}\overset{\circ}{M}\setminus 0\), and such that in \(\overset{\circ}{U}(Y)\) we have \(A=A_{\wedge}\) on \(C^{\infty}_{c}((0,\tfrac{\varepsilon}{4})\times Y;E)\). Then \[A=x^{-1}P:C^{\infty}_{c}(\overset{\circ}{M};\mathscr{E})\subset L^{2}(M;\mathscr{E})=x^{-\frac{1}{2}}L^{2}_{b}(M;\mathscr{E})\to x^{-\frac{1}{2}}L^{2}_{b}(M;\mathscr{E})\] is symmetric, and \(P\in\operatorname{Diff}^{\mu}_{b}(M;\mathscr{E})\) is \(b\)-elliptic (see [13]). Moreover, by construction \(p(\sigma)\) is the indicial family of the operator \(P\). By analytic Fredholm theory \(p(\sigma):H^{\mu}(Y;E)\to L^{2}(Y;E)\) is invertible for \(\sigma\in\mathbb{C}\) except for the discrete set \(\operatorname{spec}_{b}(p)\). In the sequel it will be convenient to assume that \(\operatorname{spec}_{b}(p)\cap\{\sigma\in\mathbb{C};\ 0<|\Im(\sigma)|\leq\frac{1}{2}\}=\emptyset\). As explained in Section 3, this can be achieved by replacing \(p(\sigma)\) by \(p(t\sigma)\) for sufficiently small \(t>0\) if necessary, which does not impact the spectral flow. Moreover, the assumptions of the theorem pertaining to the principal symbol of \(p(\sigma)\) also hold for \(p(t\sigma)\); to see this pick a \(C^{\infty}\)-function \(\chi:[0,\varepsilon)\to\mathbb{R}\) with \[\chi\equiv t\text{ on }0\leq x\leq\tfrac{\varepsilon}{3},\ \chi>0\text{ on }\tfrac{\varepsilon}{3}<x<\tfrac{2\varepsilon}{3},\text{ and }\chi\equiv 1\text{ on }\tfrac{2\varepsilon}{3}\leq x<\varepsilon,\] and alter the principal symbol (A.6) in \(\overset{\circ}{U}(Y)\) to \[\tilde{a}(x,y,\eta;\sigma)=\psi(x)^{-1}a(\phi(x),y,\eta;\psi(x)\chi(x)\sigma):E_{y}\to E_{y}\] for \((x,y,\eta;\sigma)\in T^{*}\big{(}(0,\varepsilon)\times Y\big{)}\setminus 0\). We may thus proceed without loss of generality under the assumption that \(\operatorname{spec}_{b}(p)\cap\{\sigma\in\mathbb{C};\ 0<|\Im(\sigma)|\leq\frac{1}{2}\}=\emptyset\).
In view of Section 3 for \(A_{\wedge}\) and by invoking elliptic regularity for \(A\) we then get \[\mathcal{D}_{\min}(A_{\wedge}) =x^{\frac{1}{2}}H^{\mu}_{b}(\mathbb{R}_{+};L^{2}(Y;E))\cap x^{ \frac{1}{2}}L^{2}_{b}(\mathbb{R}_{+};H^{\mu}(Y;E))\cap L^{2}(\mathbb{R}_{+} \times Y;E),\] \[\mathcal{D}_{\min}(A) =x^{\frac{1}{2}}H^{\mu}_{b}(M;\mathscr{E}),\] and \[\mathcal{D}_{\max}(A_{\wedge}) =\mathcal{D}_{\min}(A_{\wedge})\oplus\bigoplus_{\sigma_{0}\in \operatorname{spec}_{b}(p)\cap\mathbb{R}}\mathscr{E}_{\sigma_{0}}(p),\] \[\mathcal{D}_{\max}(A) =\mathcal{D}_{\min}(A)\oplus\bigoplus_{\sigma_{0}\in \operatorname{spec}_{b}(p)\cap\mathbb{R}}\mathscr{E}_{\sigma_{0}}(p),\] where \(\mathscr{E}_{\sigma_{0}}(p)\) is defined as in (3.3) based on a cut-off function \(\omega\in C^{\infty}_{c}([0,\tfrac{\varepsilon}{4}))\) with \(\omega\equiv 1\) near \(x=0\) so that elements in \(\mathscr{E}_{\sigma_{0}}(p)\) can interchangeably be regarded both as sections of \(E\) on \(\mathbb{R}_{+}\times Y\), as well as sections of \(\mathscr{E}\) on \(M\) supported near the boundary. In particular, this implies that \[\big{(}\mathcal{D}_{\max}(A)/\mathcal{D}_{\min}(A),[\cdot,\cdot]_{A}\big{)}\cong \big{(}\mathcal{D}_{\max}(A_{\wedge})/\mathcal{D}_{\min}(A_{\wedge}),[\cdot, \cdot]_{A_{\wedge}}\big{)}\] because \([u,v]_{A_{\wedge}}=[u,v]_{A}\) for \(u,v\in\bigoplus\limits_{\sigma_{0}\in\operatorname{spec}_{b}(p)\cap\mathbb{R}} \mathscr{E}_{\sigma_{0}}(p)\) by construction. Finally, it remains to note that \(\mathcal{D}_{\max}\hookrightarrow x^{-\frac{1}{4}}H^{\mu}_{b}(M;\mathscr{E})\), and the embedding \(x^{-\frac{1}{4}}H^{\mu}_{b}(M;\mathscr{E})\hookrightarrow x^{-\frac{1}{2}}L^{2 }_{b}(M;\mathscr{E})=L^{2}(M;\mathscr{E})\) is compact. Theorem A.4 and Lemma 3.9 imply: **Corollary A.7** (Cobordism Invariance of the Index).: _Suppose that \(E=E_{-}\oplus E_{+}\) is an orthogonal direct sum, and that the family (A.1) is of the form_ \[\mathscr{D}(\sigma)=\begin{bmatrix}\sigma&D^{*}\\ D&-\sigma\end{bmatrix}:C^{\infty}\!\begin{pmatrix}E_{-}\\ Y;&\oplus\\ E_{+}\end{pmatrix}\to C^{\infty}\!\begin{pmatrix}E_{-}\\ Y;&\oplus\\ E_{+}\end{pmatrix},\;\sigma\in\mathbb{R},\] _where \(D:C^{\infty}(Y;E_{-})\to C^{\infty}(Y;E_{+})\) is an elliptic differential operator of first order, and \(D^{*}:C^{\infty}(Y;E_{+})\to C^{\infty}(Y;E_{-})\) is its (formal) adjoint. Then_ \[\operatorname{SF}[\mathscr{D}(\sigma)]=\operatorname{ind}D=\dim\ker(D)- \dim\ker(D^{*}).\] _In particular, if the assumptions of Theorem A.4 hold, then \(\operatorname{ind}(D)=0\)._
```
We revisit the argument, going back to Lesch, for proving the cobordism invariance of the index. Combining this argument with recent work of the author, we show the vanishing of the spectral flow for families of selfadjoint Fredholm operators when the boundary is incorporated into the domain of the operator. The analysis rests on the behavior of the index under deformations of the structure near the boundary.
```
2309.15449
Spinal constructions for continuous type-space branching processes with interactions
We consider branching processes describing structured, interacting populations in continuous time. Dynamics of each individuals characteristics and branching properties can be influenced by the entire population. We propose a Girsanov-type result based on a spinal construction, and establish a many-to-one formula. By combining this result with the spinal decomposition, we derive a generalized continuous-time version of the Kesten-Stigum theorem that incorporates interactions. Additionally, we propose an alternative approach of the spine construction for exact simulations of stochastic size-dependent populations.
Charles Medous
2023-09-27T07:33:21
http://arxiv.org/abs/2309.15449v2
# Spinal constructions for continuous type-space branching processes with interactions ###### Abstract. We consider branching processes describing structured, interacting populations in continuous time. Dynamics of each individual's characteristics and branching properties can be influenced by the entire population. We propose a spinal construction, and establish a Girsanov-type result. By combining this result with the spinal decomposition, we derive a modified continuous-time version of the Kesten-Stigum theorem that incorporates interactions. Additionally, we propose an alternative simulation approach for stochastic size-dependent populations using appropriate spine constructions. ## Introduction Spine techniques and spinal trees are classical tools in the general context of branching processes since the work of Kallenberg [35], Chauvin and Rouault [11, 12], and later Kurtz, Lyons, Pemantle and Peres [44, 43, 37]. Spinal trees are constructed based on an original branching process by distinguishing a lineage, called the spine. Its only living representative-the spinal individual- follows a biased reproduction law compared to the other individuals in the process, ensuring that the spine does not die out. In the specific yet widely studied case of size-biased trees, the reproductive law \((\widehat{p}_{k},k\geq 0)\) of the spinal individual is defined by \[\widehat{p}_{k}=\frac{kp_{k}}{m},\quad\text{for }k\geq 0,\] where \(m\) is the mean value of the law of reproduction \((p_{k},k\geq 0)\) in the branching process. This size-biased reproductive law of the spinal individual is closely related to the biased ancestral reproduction [12, 26]. In fact the spine was found to characterize the process of the trait of a uniformly sampled individual, in a large population approximation see _e.g._[45]. The sampling of \(k\geq 2\) distinct individuals from those living in a population at a time \(t\) is associated with a \(k\)-spines construction. For further literature on multiple-particles sampling we refer the reader to [30, 29, 34, 13] and the references herein. Many-to-One formulas [31, 26] are prominent among classical spine results. Such formulas derive from a Girsanov-type result on the change of probability measure associated with the spine, which can be regarded as a Doob's transform, as described in references such as [14, 2]. These formulas give expectations of sums over particles in the branching process in terms of a Feynman-Kac path integral expectation related to the spinal individual. Consequently, the spinal individual is often referred to as a "typical individual" within the population. The connection to Feynman-Kac path integrals implies a shared foundation between these concepts. For a comprehensive overview on this subject, we refer to [18]. Another interesting property of spinal constructions is the "spinal decomposition" [11, 44]. It establishes that the spine process is equivalent to the initial branching process, with the removal of one individual and the addition of an immigration source. The introduction of new individuals into the population follows the biased reproductive law specific to the spine. In recent decades, the spinal decomposition has emerged as a highly valuable tool for investigating branching processes. 
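To fix ideas, the size-biased law displayed above is easy to compute and sample from. The following minimal Python sketch does this for a hypothetical offspring distribution (the numbers below are ours, chosen purely for illustration and not taken from any model in this paper).

```python
import random

def size_biased(p):
    """Size-biased offspring law: p_hat[k] = k * p[k] / m, where m is the mean of p."""
    m = sum(k * pk for k, pk in p.items())
    if m == 0:
        raise ValueError("size-biasing requires a positive mean offspring number")
    return {k: k * pk / m for k, pk in p.items() if k > 0}

def sample(law, rng=random):
    """Draw one value from a finite law given as a dict {value: probability}."""
    values, weights = zip(*law.items())
    return rng.choices(values, weights=weights, k=1)[0]

# Hypothetical offspring law (illustration only): 0, 1 or 2 children.
p = {0: 0.2, 1: 0.3, 2: 0.5}      # mean m = 1.3
p_hat = size_biased(p)            # {1: 0.3/1.3, 2: 1.0/1.3}
print(p_hat, sample(p_hat))       # the biased law never produces 0 children
```

In the spinal decomposition just described, it is exactly this biased law that drives the branching along the distinguished lineage; since it puts no mass on \(k=0\), the spine indeed never dies out.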
One of its notable contributions is providing a new proof of the \(L\log L\) criteria, which were originally proved by Kesten and Stigum for Galton-Watson (GW) processes [36] and by Biggins for continuous-time branching processes [8] using analytical methods. These results give specific conditions on the reproductive law, ensuring the non-degeneracy of the martingale involved in the spinal change of measure at infinity. By combining the spinal decomposition with a previously known result from measure theory, Lyons, Pemantle, and Peres [44] provided a probabilistic "conceptual" proof of Kesten and Stigum's theorem for single-type GW processes. This method proved to be easily generalizable to continuous-time structured branching processes [2, 26, 7] as well as branching Brownian motions [38, 22, 27]. More recently Bertoin and Mallein extended this proof for general branching Levy processes [6]. For equivalent results on superprocesses we refer to [40, 41, 42, 21, 50]. Finally, we mention Hardy and Harris [28] who adapted the spinal decomposition to prove the \(\mathcal{L}^{p}\)-convergence of some key martingales, which was later used to establish strong laws of large numbers [23]. The fundamental assumption in the aforementioned works is the branching property, which assumes that the behavior of all particles in the process during their life is independent of one another. However, in various systems of population dynamics such as genetics, epidemiology, chemistry, and even queueing systems, interactions between individuals do occur and this fundamental hypothesis falls apart. Recently, Bansaye [3] established a spine construction and Many-to-One formulas for interacting branching populations, where the branching rates and reproductive laws depend on the traits of all individuals in the population. These traits belong to a finite set and are fixed during the life of the individuals. Using the spinal decomposition, Bansaye has found an \(L\log L\) criterion for a single-type, density-dependent population. We also mention a recent work on spine processes for density-dependent individual-based models in a large population approximation [33]. In this article we consider a wide class of continuous-time structured branching processes with general interactions. These processes are used to model structured populations, where the behavior of each individual is influenced by the overall population state. Every individual in the population is characterized by its trait, taking values in a compact subset of \(\mathbb{R}^{d}\). The lifespan of each individual is exponentially distributed with a time-inhomogeneous rate that depends on the traits of all individuals. Upon an individual's death, a random number of children are generated, each inheriting random traits at birth, that are influenced by the traits of all individuals in the population. Between these branching events, the evolution of the traits of all individuals in the population is deterministic and also influenced by the entire population's state. Notably, the branching parameters are determined by the traits of all individuals, thereby the branching property no longer holds in this framework. We introduce a comprehensive spinal construction for those processes, using a change of measure associated with a positive weight function \(\psi_{\phi}\) that depends on the trait of the spinal individual and of those of every individual. For a fixed function \(\psi_{\phi}\) both the spinal individual and those outside the spine are subject to a bias. 
We derive a Girsanov-type formula associated with this change of measure, taking the form of a path-integral formulation that involves a non-linear operator. A classical approach to establishing limiting results, such as the central limit theorem or large deviations, involves determining the eigenfunctions of such operators [7, 22, 15]. However, due to the presence of interactions, this operator is contingent on the entire population, necessitating the eigenfunctions to be dependent on the traits of all individuals. Thus, the weight function \(\psi_{\phi}\) must rely on both the trait of the spinal individual and the traits of all individuals in order to account for this dependency. Under certain non-explosion assumptions regarding the branching parameters and the set of weight functions, we obtain a modified Many-to-One formula. Unlike the classical Many-to-One formula- that describes the behavior of the branching process using only the behavior of the spine- our formula relies on the whole spinal population. Subsequently, we use this result in conjunction with the spinal decomposition to establish \(L\log L\) criteria. More precisely, we exhibit both a sufficient condition and a necessary condition for the non-degeneracy at infinity of the additive martingale associated with the spinal change of measure. It is important to note that this result is only applicable when \(\psi_{\phi}\equiv 1\), and therefore, we lack knowledge regarding the limit behavior of this derivative at infinity for more intricate weight functions. Finally we study a particular case of structured Yule process with mass loss events happening at size-dependent rates. Yule processes are pure birth processes that are widely used in population genetics to model and reconstruct phylogenetic trees, see _e.g._ Aldous' review [1]. We use the spinal construction with a multiplicative weight function to retrieve a conditional branching property in the associated spine process, and propose an efficient algorithmic construction based on this property. **Notation** In the sequel \(\mathbb{N}^{*}=\{1,2,\cdots\}\) will denote the set of positive integers, \(\mathbb{R}_{+}:=[0,+\infty)\) the real line, \(\overline{\mathbb{R}}_{+}:=\mathbb{R}_{+}\cup\{+\infty\}\) and \(\mathbb{R}_{+}^{*}:=(0,+\infty)\). We will denote respectively by \(\mathfrak{B}\left(A,B\right)\) (resp. \(\mathcal{C}^{1}\left(A,B\right)\)) the set of measurable (resp. continuously differentiable) \(B\)-valued functions on a set \(A\). For every couple \((f,g)\) of real-valued measurable functions on a set \(A\), we denote for all \(x\), \(fg(x)\) the product \(f(x)g(x)\). We denote by \(\mathcal{M}_{F}\left(\mathcal{A}\right)\) the set of finite non-negative measures on a set \(\mathcal{A}\). The set of trait \(\mathcal{X}\) is a compact subset of \(\mathbb{R}^{d}\) equipped with the \(\ell_{1}\)-norm on \(\mathbb{R}^{d}\), that is denoted \(|x|\) for all \(x\in\mathcal{X}\). We denote \(\cdot\) the canonical scalar product on \(\mathbb{R}^{d}\). We use the Ulam-Harris-Neveu notations [47] to label these individuals. We introduce the set of labels: \[\mathcal{U}:=\left\{\emptyset\right\}\cup\bigcup_{k\geq 0}\left(\mathbb{N}^{*} \right)^{k+1}.\] We consider branching processes starting from multiple initial individuals, thus the root \(\emptyset\) will be treated as a phantom individual and its direct descendants will be the ancestor generation. 
For two elements \(u,v\) of \(\mathcal{U}\backslash\left\{\emptyset\right\}\), there exist two positive integers \(n,p\) such that \(u=(u_{1},\ldots,u_{n})\) and \(v=(v_{n},\ldots,v_{p})\) and we write \(uv:=(u_{1},\ldots,u_{n},v_{1},\ldots,v_{p})\) the concatenation of \(u\) and \(v\). We identify both \(\emptyset u\) and \(u\emptyset\) with \(u\). An individual \(v\in\mathcal{U}\) is a descendant of \(u\) if there exists \(w\in\mathcal{U}\) such that \(v=uw\). In this case we denote \(u\preceq v\) and we denote \(u\prec v\) if \(w\neq\emptyset\). Let us introduce \(\overline{\mathbb{V}}\) the subset of \(\mathcal{M}_{F}\left(\mathcal{U}\times\mathcal{X}\right)\) composed of all finite point measures on \(\mathcal{U}\times\mathcal{X}\), that is \[\overline{\mathbb{V}}:=\left\{\sum_{i=1}^{N}\delta_{(u^{i},x^{i})},\ N\in \mathbb{N},\ \left(u^{i},1\leq i\leq N\right)\in\mathcal{U}^{N},\ \left(x^{i},1\leq i\leq N\right)\in \mathcal{X}^{N}\right\}.\] We also define the set of marginal population measures, that is \[\mathbb{V}:=\left\{\sum_{i=1}^{N}\delta_{x^{i}},\ N\in\mathbb{N},\ \left(x^{i},1\leq i\leq N\right)\in\mathcal{X}^{N}\right\}.\] For any measure \(\bar{\nu}=\sum_{i=1}^{N}\delta_{(u^{i},x^{i})}\) in \(\overline{\mathbb{V}}\), we will write \(\nu:=\sum_{i=1}^{N}\delta_{x^{i}}\) its projection on \(\mathbb{V}\). By convention, if the number of points in the measure is \(N=0\), \(\bar{\nu}\) and \(\nu\) are the trivial zero measures on \(\mathcal{U}\times\mathcal{X}\) and \(\mathcal{X}\). We introduce for every \(\bar{\nu}\in\overline{\mathbb{V}}\), every \(g\in\mathfrak{B}\left(\mathcal{U}\times\mathcal{X},\mathbb{R}\right)\) and every \(f\in\mathfrak{B}\left(\mathcal{X},\mathbb{R}\right)\) \[\langle\bar{\nu},g\rangle:=\int_{\mathcal{U}\times\mathcal{X}}g(u,x)\bar{\nu} (\mathrm{d}u,\mathrm{d}x),\ \ \text{and}\ \ \langle\nu,f\rangle:=\int_{\mathcal{X}}f(x)\nu(\mathrm{d}x).\] Finally, we denote by \(\mathbb{D}\left(\mathcal{A},\mathcal{M}\right)\) the Skorohod space of cadlag functions from a subset \(\mathcal{A}\) of \(\mathbb{R}_{+}\) to a set \(\mathcal{M}\). For every process \(\left(X_{t},t\in\mathcal{A}\right)\in\mathbb{D}\left(\mathcal{A},\mathcal{M}\right)\) and \(x\in\mathcal{M}\), we will denote \[\mathbb{E}_{x}\left[f\left(X_{t}\right)\right]:=\mathbb{E}\left[f\left(X_{t} \right)\left|X_{0}=x\right]\quad\text{and}\quad\mathbb{P}_{x}\left(f\left(X_{t }\right)\right):=\mathbb{P}\left(f\left(X_{t}\right)\left|X_{0}=x\right).\] ## 1. Definition of the population In this section we describe informally the population process. Its rigorous definition as a strong solution of a stochastic differential equation (SDE) is presented in Section 5.1. The population is described at any time \(t\in\mathbb{R}_{+}\) by the finite point measure \(\bar{\nu}_{t}\in\overline{\mathbb{V}}\) given by the sum of the Dirac masses at the pair composed of the label and the trait of every individual living in the population at this time. We write \[\mathbb{G}\left(t\right):=\left\{u\in\mathcal{U}:\int_{\mathcal{U}\times \mathcal{X}}\mathbbm{1}_{\left\{v=u\right\}}\bar{\nu}_{t}\left(\mathrm{d}v, \mathrm{d}x\right)>0\right\}\] the set of labels of living individuals at time \(t\). For every individual labeled by \(u\in\mathbb{G}\left(t\right)\) we denote \(X_{t}^{u}\) its trait, and with a slight abuse of notation, \(X_{s}^{u}\) will denote the trait of its unique ancestor living at time \(s\in[0,t]\). 
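Before describing the dynamics, here is a minimal Python sketch of this bookkeeping (a toy illustration, with labels stored as integer tuples and one-dimensional traits stored as floats; all names are ours): it encodes the Ulam-Harris labels, the ancestor relation \(u\preceq v\), and a population state from which the set \(\mathbb{G}(t)\) of living labels and the point measure \(\bar{\nu}_{t}\) can be read off.

```python
ROOT = ()  # the phantom root: its direct descendants form the ancestor generation

def child(u, i):
    """Ulam-Harris label of the i-th child (i >= 1) of the individual labeled u."""
    return u + (i,)

def is_ancestor(u, v):
    """The relation u <= v: v equals u or descends from u (u is a prefix of v)."""
    return v[:len(u)] == u

# A toy population state at some time t: the living labels G(t) with their traits.
# The measure nu_bar_t is the sum of the Dirac masses at the pairs (label, trait).
population = {
    child(ROOT, 1): 0.7,             # an individual of the ancestor generation, label (1,)
    child(child(ROOT, 2), 1): 0.1,   # a grandchild of the root, label (2, 1)
    child(child(ROOT, 2), 2): 0.4,   # its sibling, label (2, 2)
}

living_labels = set(population)                           # the set G(t)
mean_trait = sum(population.values()) / len(population)   # <nu_t, Id> / <nu_t, 1>

assert is_ancestor((2,), (2, 1)) and not is_ancestor((1,), (2, 1))
```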
The stochastic point process \(\left(\bar{\nu}_{t},t\geq 0\right)\in\mathbb{D}\left(\mathbb{R}_{+},\overline{ \mathbb{V}}\right)\) describing the evolution of the population and its associated marginal process are given, for every \(t\geq 0\), by \[\bar{\nu}_{t}=\sum_{u\in\mathbb{G}\left(t\right)}\delta_{\left(u,X_{t}^{u} \right)}\quad\text{and}\quad\nu_{t}=\sum_{u\in\mathbb{G}\left(t\right)}\delta _{X_{t}^{u}}.\] Let us remark that at all time \(t\), \(\bar{\nu}_{t}\) encodes the trait and lineage of every living individual, and the projection \(\nu_{t}\) on \(\mathbb{V}\) keeps the information on the trait of each individual only. The initial population is given by a measure \(\bar{z}=\sum_{i=1}^{N}\delta_{\left(u^{i},x^{i}\right)}\). During their lives, the traits of the individuals in the population evolve according to population-dependent dynamics. In a population \(\bar{\nu}_{t}\) at time \(t\), for all \(u\in\mathbb{G}\left(t\right)\) \[\frac{\mathrm{d}X_{t}^{u}}{\mathrm{d}t}=\mu\left(X_{t}^{u},\nu_{t},t\right),\] where \(\mu\) is a measurable \(\mathcal{X}\)-valued function on \(\mathcal{X}\times\mathbb{V}\times\mathbb{R}_{+}\). An individual with trait \(x\) in a population \(\nu_{t}\) at time \(t\) dies at an instantaneous rate \(B\left(x,\nu_{t},t\right)\), where \(B\) is a continuous function from \(\mathcal{X}\times\mathbb{V}\times\mathbb{R}_{+}\) to \(\mathbb{R}_{+}\). It produces an offspring of \(n\) individuals, where \(n\) is randomly chosen with distribution \(\left(p_{k}\left(x,\nu_{t},t\right),k\in\mathbb{N}\right)\) of first moment \(m\left(x,\nu_{t},t\right)\). Thus branching events that lead to \(n\) children happen at rate \(B_{n}(\cdot):=B(\cdot)p_{n}(\cdot)\). If \(n\geq 1\), the traits at birth of these \(n\) children are given by the vector \(\mathbf{y}=\left(y^{1},\cdots,y^{n}\right)\) randomly chosen according to \(K_{n}\left(x,\nu_{t},t,\cdot\right)\), a probability measure on \(\mathcal{X}^{n}\). The labeling choice for these children is arbitrary yet necessary to uniquely define a stochastic point process in \(\mathbb{D}\left(\mathbb{R}_{+},\overline{\mathbb{V}}\right)\). Here, for a parent individual of label \(u\), for all \(1\leq i\leq n\), the \(i\)-th child is labeled \(ui\) and its trait is \(y^{i}\), the \(i\)-th coordinate of the vector \(\mathbf{y}\). We denote \(T_{\mathrm{Exp}}\in\overline{\mathbb{R}}_{+}\) the explosion time of the process \(\left(\bar{\nu}_{t},t\geq 0\right)\), defined as the limit of its jumps times \(\left(T_{k},k\geq 0\right)\). In order to ensure the non-explosion in finite time of such a process we introduce the following set of hypotheses. **Assumption A**.: _We consider the following assumptions:_ 1. _There exists a positive continuous function_ \(\mu_{0}\) _on_ \(\mathbb{R}_{+}\)_, such that for all_ \(\left(x,\nu,t\right)\in\mathcal{X}\times\mathbb{V}\times\mathbb{R}_{+}\)__ \[\left|\mu\left(x,\nu,t\right)\right|\leq\mu_{0}(t)\left(1+\left|x\right|+\left| \frac{\left\langle\nu,I_{d}\right\rangle}{\left\langle\nu,1\right\rangle} \right|\right),\] _where_ \(I_{d}\) _is the identity function on_ \(\mathcal{X}\)_._ 2. _For all_ \(\left(x,\nu,t\right)\in\mathcal{X}\times\mathbb{V}\times\mathbb{R}_{+}\)_,_ \[B_{1}\left(x,\nu,t\right)<+\infty.\] _._ 3. _There exists a positive continuous function_ \(b_{0}\) _on_ \(\mathbb{R}_{+}\)_, such that for all_ \((x,\nu,t)\in\mathcal{X}\times\mathbb{V}\times\mathbb{R}_{+}\)__ \[\sum_{n\neq 1}nB_{n}\left(x,\nu,t\right)\leq b_{0}(t)\left(1+|x|\right).\] 4. 
_For all_ \((x,\nu,t,n)\in\mathcal{X}\times\mathbb{V}\times\mathbb{R}_{+}\times\mathbb{N}^{*}\)_,_ \[K_{n}\left(x,\nu,t,\mathcal{A}_{n}(x)\right)=0,\quad\text{where}\ \ \mathcal{A}_{n}(x):=\bigg{\{}(y^{i},1\leq i\leq n)\in\mathcal{X}^{n}:\ \ \sum_{i=1}^{n}|y^{i}|>|x|\bigg{\}}.\] The first hypothesis bounds the growth rate of the individual traits by an exponential rate controlled by the trait of each individual and the mean trait in the population. The second point ensures that events that do not change the number of individuals do not accumulate in finite populations. The third hypothesis uniformly controls the minimum lifetime of an individual. This hypothesis, together with the first one, ensures that the lifespan of each individual decreases at most exponentially with its trait. Note that this assumption does not constrain the function \(B_{1}(\cdot)\). The last hypothesis restricts the framework under consideration to fragmentation processes that do not create matter or energy. This set of hypotheses is broad enough to cover a large portion of models in physics and ecology, exponential growth being a classical assumption in many stochastic models in ecology and evolution, see _e.g._[49, 16]. **Proposition 1.1**.: _Under Assumption A, the sequence \((T_{k},k\geq 0)\) of jump times of the process \((\bar{\nu}_{t},t\geq 0)\) tends to infinity almost surely._ We can thus conclude, following the proof of Theorem 2.1 in [45], that under this set of hypotheses, the process \((\bar{\nu}_{t},t\geq 0)\) is uniquely defined on \(\mathbb{R}_{+}\). However, the spinal construction introduced in Section 2 can be established under less restrictive hypotheses. In this case, an accumulation of jump times may happen in finite time and the spinal construction holds until this explosion time. ## 2. Results In this section, we consider the law of a randomly sampled individual in the general branching population described in Section 1. Our main result gives the appropriate change of measure linking this distribution at time \(t\) to the trajectory of an auxiliary process until this time. We then describe this auxiliary process explicitly as a spinal construction. The spinal construction generates a \(\overline{\mathbb{V}}\)-valued process along with the label of a distinguished individual that can change with time. For convenience, we will denote \(\overline{\mathbb{W}}\) and \(\mathbb{W}\) the sets such that \[\overline{\mathbb{W}}:=\left\{(e,\bar{\nu})\in\mathcal{U}\times\overline{\mathbb{V}}:\ \langle\bar{\nu},\mathbbm{1}_{\{e\}\times\mathcal{X}}\rangle\geq 1\right\},\quad\text{and}\quad\ \mathbb{W}:=\left\{(x,\nu)\in\mathcal{X}\times\mathbb{V}:\ \langle\nu,\mathbbm{1}_{\{x\}}\rangle\geq 1\right\}.\] Thus, the spine process is a \(\overline{\mathbb{W}}\)-valued branching process and its marginal is a \(\mathbb{W}\)-valued branching process. We propose here a general spinal construction, where branching rates are biased with a weight function \(\psi_{\phi}\), that is an element of the set \(\mathcal{D}\), defined by \[\mathcal{D}:=\left\{F_{f}\in\mathfrak{B}\left(\mathbb{W}\times\mathbb{R}_{+},\mathbb{R}_{+}\right)\text{ s.t.
}\left(f,F\right)\in\mathcal{C}^{1}\left(\mathcal{X}\times\mathbb{R}_{+},\mathbb{R}\right)\times\mathcal{C}^{1}\left(\mathcal{X}\times\mathbb{R}\times\mathbb{R}_{+},\mathbb{R}_{+}^{*}\right)\right\}, \tag{2.1}\] where for every \((x,\nu,t)\in\mathbb{W}\times\mathbb{R}_{+},\ F_{f}(x,\nu,t):=F(x,\langle\nu,f(\cdot,t)\rangle,t).\) In the following, for every \((u,\bar{\nu})\in\overline{\mathbb{W}}\) we denote \(x_{u}\) the trait of the individual of label \(u\) in the population \(\bar{\nu}\), and for every \(n\geq 0\) and every \(\mathbf{y}=(y^{i},1\leq i\leq n)\in\mathcal{X}^{n}\) we write \[\bar{\nu}_{+}(u,\mathbf{y}):=\bar{\nu}-\delta_{(u,x_{u})}+\sum_{i=1}^{n}\delta_{(ui,y^{i})},\quad\text{and}\quad\nu_{+}(x,\mathbf{y}):=\nu-\delta_{x}+\sum_{i=1}^{n}\delta_{y^{i}}. \tag{2.2}\] We introduce the key operator \(\mathcal{G}\) involved in the spinal construction. It is defined for all \(F_{f}\in\mathcal{D}\) and \((x_{e},\nu,t)\in\mathbb{W}\times\mathbb{R}_{+}\) by \[\mathcal{G}F_{f}(x_{e},\nu,t):=GF_{f}\left(x_{e},\nu,t\right)\\ +\sum_{n\geq 0}\Bigg{\{}B_{n}(x_{e},\nu,t)\int_{\mathcal{X}^{n}}\bigg{[}\sum_{i=1}^{n}F_{f}\left(y^{i},\nu_{+}(x_{e},\mathbf{y}),t\right)-F_{f}\left(x_{e},\nu_{+}(x_{e},\mathbf{y}),t\right)\bigg{]}K_{n}\left(x_{e},\nu,t,\mathrm{d}\mathbf{y}\right)\\ +\int_{\mathcal{X}}B_{n}(x,\nu,t)\int_{\mathcal{X}^{n}}\big{[}F_{f}\left(x_{e},\nu_{+}(x,\mathbf{y}),t\right)-F_{f}(x_{e},\nu,t)\big{]}K_{n}\left(x,\nu,t,\mathrm{d}\mathbf{y}\right)\nu(\mathrm{d}x)\Bigg{\}}, \tag{2.3}\] where the operator \(G\) is the generator of the deterministic evolution of the traits between branching events, given for every \(F_{f}\in\mathcal{D}\), and \((x_{e},\nu,t)\in\mathbb{W}\times\mathbb{R}_{+}\) by \[GF_{f}\left(x_{e},\nu,t\right):=D_{1}F\left(x_{e},\langle\nu,f(\cdot,t)\rangle,t\right)\cdot\mu\left(x_{e},\nu,t\right)+D_{3}F\left(x_{e},\langle\nu,f(\cdot,t)\rangle,t\right)\\ +D_{2}F\left(x_{e},\langle\nu,f(\cdot,t)\rangle,t\right)\int_{\mathcal{X}}\left[\frac{\partial f}{\partial x}(x,t)\cdot\mu\left(x,\nu,t\right)+\frac{\partial f}{\partial t}(x,t)\right]\nu(\mathrm{d}x), \tag{2.4}\] where \(D_{i}F(\cdot,\cdot)\) denotes the derivative of the function \(F\in\mathcal{C}^{1}(\mathcal{X}\times\mathbb{R}\times\mathbb{R}_{+},\mathbb{R}_{+}^{*})\) with respect to the \(i\)-th variable. To maintain consistency in our notations, we arbitrarily define \(K_{0}\left(x,\nu_{t},t,\cdot\right):=\delta_{x}(\cdot)\). Rigorously, the previously introduced objects are families of operators \((\mathcal{G}_{(x,\nu,t)},(x,\nu,t)\in\mathcal{X}\times\mathbb{V}\times\mathbb{R}_{+})\) and \((G_{(x,\nu,t)},(x,\nu,t)\in\mathcal{X}\times\mathbb{V}\times\mathbb{R}_{+})\). This abuse of notation will be used for the subsequently introduced operators in this work. The operator \(\mathcal{G}\) is generally not the generator of a conservative Markov process on \(\mathbb{W}\). Indeed \(\mathcal{G}1=B(x_{e},\nu,t)(m(x_{e},\nu,t)-1)\), which is non-zero if there exists a tuple \((x_{e},\nu,t)\) such that the mean number of children \(m\left(x_{e},\nu,t\right)\neq 1\). We point out that these operators are the equivalents, for interacting, structured, branching populations, of the Schrödinger operator \(\mathcal{G}\) introduced in [15]. To ensure that for all \((x_{e},\nu,t)\in\mathbb{W}\times\mathbb{R}_{+}\) the function \(\mathcal{G}F_{f}(x_{e},\nu,t)\) is finite we make an assumption on the functions \(F_{f}\in\mathcal{D}\).
**Assumption B**.: _Let \((p_{n},n\in\mathbb{N})\) and \((K_{n},n\in\mathbb{N})\) be the reproduction parameters of the original branching process and \(\psi_{\phi}\in\mathcal{D}\) a weight-function. For all \((x,x_{e},\nu,t)\in\mathcal{X}\times\mathbb{W}\times\mathbb{R}_{+}\),_ \[\sum_{n\in\mathbb{N}}p_{n}(\rho)\int_{\mathcal{X}^{n}}\left(\psi_{\phi}\left(x_ {e},\nu_{+}(x,\mathbf{y}),t\right)+\sum_{i=1}^{n}\psi_{\phi}\left(y^{i},\nu_{ +}(x,\mathbf{y}),t\right)\right)K_{n}\left(\rho,d\mathbf{y}\right)<+\infty.\] Remark that for a chosen set of parameters of the original branching process \((\nu_{s},s\geq 0)\), this assumption restricts the set of suitable weight functions. We will establish in Proposition 2.4 that the operator \(\mathcal{G}\) constructs the generator \(\widehat{\mathcal{L}}_{\psi_{\phi}}\) of the spine process associated with a function \(\psi_{\phi}\in\mathcal{D}\) ensuring Assumption B, defined for functions \(F_{f}\) such that \((f,F)\in\mathcal{C}^{1}(\mathcal{U}\times\mathcal{X}\times\mathbb{R}_{+}, \mathbb{R})\times\mathcal{C}^{1}(\mathcal{U}\times\mathbb{R}\times\mathbb{R}_{+},\mathbb{R}_{+}^{*})\) and for all \((e,\bar{\nu},t)\in\overline{\mathbb{W}}\times\mathbb{R}_{+}\) by \[\widehat{\mathcal{L}}_{\psi_{\phi}}F_{f}(e,\bar{\nu},t):=\widehat{ G}F_{f}(e,\bar{\nu},t)\\ +\sum_{n\geq 0}\int_{\mathcal{U}\times\mathcal{X}}B_{n}(x,\nu,t)\int_{ \mathcal{X}^{n}}\Bigg{\{}\mathbbm{1}_{\{u=e\}}\sum_{i=1}^{n}\big{[}F_{f}\left( ei,\bar{\nu}_{+}(e,\mathbf{y}),t\right)-F_{f}(e,\bar{\nu},t)\big{]}\frac{ \psi_{\phi}(y^{i},\nu_{+}(x_{e},\mathbf{y}),t)}{\psi_{\phi}(x_{e},\nu,t)}\\ +\mathbbm{1}_{\{u\neq e\}}\big{[}F_{f}\left(e,\bar{\nu}_{+}(u, \mathbf{y}),t\right)-F_{f}(e,\bar{\nu},t)\big{]}\frac{\psi_{\phi}(x_{e},\nu_{+ }(u,\mathbf{y}),t)}{\psi_{\phi}(x_{e},\nu,t)}\Bigg{\}}K_{n}\left(x,\nu,t, \mathrm{d}\mathbf{y}\right)\bar{\nu}(\mathrm{d}u,\mathrm{d}x), \tag{2.5}\] where \[\widehat{G}F_{f}(e,\bar{\nu},t): =D_{2}F\left(e,\langle\bar{\nu},f(\cdot,t)\rangle,t\right)\int_{ \mathcal{U}\times\mathcal{X}}\left[\frac{\partial f}{\partial x}(u,x,t)\cdot \mu\left(x,\nu,t\right)+\frac{\partial f}{\partial t}(u,x,t)\right]\bar{\nu}( \mathrm{d}u,\mathrm{d}x)\] \[+D_{3}F\left(e,\langle\bar{\nu},f(\cdot,t)\rangle,t\right).\] Remark that branching rates of both spinal and non-spinal individuals are biased by the function \(\psi_{\phi}\). Assumption B ensures that the total branching rate is finite from every state of the spinal process, however it is not sufficient to avoid explosion of this process in finite time. Dynamics 2.2 and 2.3 below will provide a more intricate explanation of the spine process associated with this generator. Finally, we introduce for all \(t\geq 0\), the \(\mathcal{U}\)-valued random variable \(U_{t}\) that picks an individual alive at time \(t\). Its law is characterized by the function \(p_{u}\left(\bar{\nu}_{t}\right)\) which yields the probability to choose the individual of label \(u\) in the set \(\mathbb{G}\left(t\right)\). We can now state our main result, that is a Girsanov-type formula for the spinal change of measure. It characterizes the joint probability distribution of \(\left(U_{t},\left(\bar{\nu}_{s},s\leq t\right)\right)\)- that is the randomly sampled individual in the population \(\bar{\nu}_{t}\) at time \(t\) and the whole trajectory of the population until this time- and links it to the law of the spine process through a path-integral formula. 
**Theorem 2.1**.: _Let \(\psi_{\phi}\in\mathcal{D}\) be a weight function satisfying Assumption B, let \(t\geq 0\), and let \(\bar{z}\in\overline{\mathbb{V}}\). Let \(\left(\left(E_{t},\bar{\chi}_{t}\right),t\geq 0\right)\) be the time-inhomogeneous \(\overline{\mathbb{W}}\)-valued branching process defined by the infinitesimal generator \(\widehat{\mathcal{L}}_{\psi_{\phi}}\). Let \(\widehat{T}_{\text{Exp}}\) denote its explosion time and \(\left(\left(Y_{t},\chi_{t}\right),t\geq 0\right)\) its projection on \(\mathbb{W}\)._

_Under Assumption B, for every measurable non-negative function \(H\) on \(\mathcal{U}\times\mathbb{D}\left([0,t],\mathbb{V}\right)\):_

\[\mathbb{E}_{\bar{z}}\left[\mathbbm{1}_{\left\{T_{\text{Exp}}>t,\mathbb{G}\left(t\right)\neq\emptyset\right\}}H\left(U_{t},\left(\bar{\nu}_{s},s\leq t\right)\right)\right]=\\ \langle z,\psi_{\phi}(\cdot,z,0)\rangle\mathbb{E}_{\bar{z}}\left[\mathbbm{1}_{\left\{\widehat{T}_{\text{Exp}}>t\right\}}\xi\left(E_{t},\left(\bar{\chi}_{s},s\leq t\right)\right)H\left(E_{t},\left(\bar{\chi}_{s},s\leq t\right)\right)\right],\]

_where:_

\[\xi\left(E_{t},\left(\bar{\chi}_{s},s\leq t\right)\right):=\frac{p_{E_{t}}\left(\bar{\chi}_{t}\right)}{\psi_{\phi}\left(Y_{t},\chi_{t},t\right)}\exp\left(\int_{0}^{t}\frac{\mathcal{G}\psi_{\phi}\left(Y_{s},\chi_{s},s\right)}{\psi_{\phi}\left(Y_{s},\chi_{s},s\right)}ds\right).\]

We take inspiration from the work of Bansaye [3] for the proof. The idea is to decompose both processes according to their possible trajectories, and then to establish, by induction on the successive jump times, the equality in law of the trajectories between these times. The process \(\left(\left(E_{t},\bar{\chi}_{t}\right),t\geq 0\right)\) gives, at any time \(t\geq 0\), the label of the spinal individual, which encodes the whole spine lineage, and the spinal population. Our result thus links, for every \(\psi_{\phi}\), the sampling of an individual and the trajectory of the population to the trajectory of the spine process. The path-integral term that links these two objects is difficult to handle in general, and finding eigenfunctions of \(\mathcal{G}\) may greatly simplify the expression [15, 2, 3]. Finding such functions for single-type, density-dependent populations is possible in models with simple interactions [3, Section 3]. Nevertheless, this becomes a challenging issue in the majority of scenarios. Subsequent sections of this work will explore applications of this formula where the path-integral component is tractable. We first introduce additional notations concerning the dynamics of the spine process given by the generator introduced in (2.5). Following the notations of Section 1, and disregarding the dependency of the subsequent branching parameters on the chosen function \(\psi_{\phi}\in\mathcal{D}\) ensuring Assumption B, we introduce the dynamics of the traits in the spine process. As previously discussed, the construction distinguishes the dynamics of the spine from those of the rest of the individuals. When there is no possible confusion, the label of the spinal individual at any time will be denoted \(e\) for convenience, and \(x_{e}\) its trait. We will also use, for every \(\left(x,\left(x_{e},\nu\right),t\right)\in\mathcal{X}\times\mathbb{W}\times\mathbb{R}_{+}\), the notations \[\rho_{e}:=\left(x_{e},\nu,t\right)\quad\text{and}\quad\rho:=\left(x,\nu,t\right). \tag{2.6}\] We first introduce the branching parameters of the individuals outside the spine in a spinal population.
**Dynamics 2.2** (Individuals outside the spine).: _For all \(\left(n,x,\left(x_{e},\nu\right),t\right)\) in \(\mathbb{N}^{*}\times\mathcal{X}\times\mathbb{W}\times\mathbb{R}_{+}\):_ 1. \(\widehat{K}_{n}\left(x_{e},\rho,\cdot\right)\in\mathcal{M}_{F}\left(\mathcal{X}^{n}\right)\) _is the kernel giving the traits at birth of the_ \(n\) _children generated by a non-spinal individual of trait_ \(x\) _at time_ \(t\) _in a spinal population_ \(\left(x_{e},\nu\right)\)_. For all_ \(\mathcal{A}\subset\mathcal{X}^{n}\)_,_ \[\widehat{K}_{n}\left(x_{e},\rho,\mathcal{A}\right):=\frac{1}{\widehat{\Gamma}_{n}\left(x_{e},\rho\right)}\int_{\mathcal{A}}\psi_{\phi}\left(x_{e},\nu_{+}(x,\mathbf{y}),t\right)K_{n}\left(\rho,d\mathbf{y}\right),\] (2.7) _where_ \(\widehat{\Gamma}_{n}(\cdot)\) _is the normalization function, defined as_ \[\widehat{\Gamma}_{n}\left(x_{e},\rho\right):=\int_{\mathcal{X}^{n}}\psi_{\phi}\left(x_{e},\nu_{+}(x,\mathbf{y}),t\right)K_{n}\left(\rho,d\mathbf{y}\right),\] _and_ \(\nu_{+}\) _is defined in (2.2)._ 2. _The law_ \(\left(\widehat{p}_{k}\left(x_{e},\rho\right),k\in\mathbb{N}\right)\) _of the number of children of an individual of trait_ \(x\) _branching at time_ \(t\) _in a spinal population_ \(\left(x_{e},\nu\right)\)_, is defined for all_ \(n\in\mathbb{N}\) _as_ \[\widehat{p}_{n}\left(x_{e},\rho\right):=\frac{1}{\sum_{k\in\mathbb{N}}\widehat{\Gamma}_{k}\left(x_{e},\rho\right)p_{k}\left(\rho\right)}\widehat{\Gamma}_{n}\left(x_{e},\rho\right)p_{n}\left(\rho\right).\] 3. _Each individual of trait_ \(x\) _outside the spine of trait_ \(x_{e}\) _in a population_ \(\nu\) _at time_ \(t\)_, branches to_ \(n\) _children at rate_ \(\widehat{B}_{n}\left(x_{e},\rho\right)\)_, defined as_ \[\widehat{B}_{n}\left(x_{e},\rho\right):=\frac{\widehat{\Gamma}_{n}\left(x_{e},\rho\right)}{\psi_{\phi}\left(x_{e},\nu,t\right)}B_{n}\left(\rho\right).\] The total branching rate outside the spine is defined, for all \(\left(x_{e},\nu,t\right)\in\mathbb{W}\times\mathbb{R}_{+}\), by \[\widehat{\tau}\left(x_{e},\nu,t\right):=\int_{\mathcal{X}}\sum_{n\geq 0}\widehat{B}_{n}\left(x_{e},x,\nu,t\right)\nu(\mathrm{d}x)-\sum_{n\geq 0}\widehat{B}_{n}\left(x_{e},\rho_{e}\right).\] We now introduce the branching parameters of the spine in a \(\psi_{\phi}\)-spinal construction. **Dynamics 2.3** (Spinal individual).: _For all \(\left(n,x,\left(x_{e},\nu\right),t\right)\) in \(\mathbb{N}^{*}\times\mathcal{X}\times\mathbb{W}\times\mathbb{R}_{+}\):_ 1. \(\widehat{K}_{n}^{*}\left(\rho_{e},\cdot\right)\in\mathcal{M}_{F}\left(\mathcal{X}^{n}\right)\) _is the kernel giving the traits at birth of the_ \(n\) _children generated by the spinal individual of trait_ \(x_{e}\) _at time_ \(t\) _in a population_ \(\nu\)_. For all_ \(\mathcal{A}\subset\mathcal{X}^{n}\)_,_ \[\widehat{K}_{n}^{*}\left(\rho_{e},\mathcal{A}\right):=\frac{1}{\widehat{\Gamma}_{n}^{*}\left(\rho_{e}\right)}\int_{\mathcal{A}}\sum_{i=1}^{n}\psi_{\phi}\left(y^{i},\nu_{+}(x_{e},\mathbf{y}),t\right)K_{n}\left(\rho_{e},d\mathbf{y}\right),\] (2.8) _where_ \(\widehat{\Gamma}_{n}^{*}(\cdot)\) _is the normalization function, defined as_ \[\widehat{\Gamma}_{n}^{*}\left(\rho_{e}\right):=\int_{\mathcal{X}^{n}}\sum_{i=1}^{n}\psi_{\phi}\left(y^{i},\nu_{+}(x_{e},\mathbf{y}),t\right)K_{n}\left(\rho_{e},d\mathbf{y}\right).\] 2.
_The law_ \(\left(\widehat{p}_{k}^{*}\left(\rho_{e}\right),k\in\mathbb{N}\right)\) _of the number of children of the spinal individual of trait_ \(x_{e}\) _branching at time_ \(t\) _in a population_ \(\nu\)_, is defined for all_ \(n\in\mathbb{N}\) _as_ \[\widehat{p}_{n}^{*}\left(\rho_{e}\right):=\frac{1}{\sum_{k\in\mathbb{N}}\widehat{\Gamma}_{k}^{*}\left(\rho_{e}\right)p_{k}\left(\rho_{e}\right)}\widehat{\Gamma}_{n}^{*}\left(\rho_{e}\right)p_{n}\left(\rho_{e}\right).\] 3. _The spinal individual of trait_ \(x_{e}\) _in a population_ \(\nu\) _at time_ \(t\)_, branches to_ \(n\) _children at a rate_ \(\widehat{B}_{n}^{*}\left(\rho_{e}\right)\)_, defined as_ \[\widehat{B}_{n}^{*}\left(\rho_{e}\right):=\frac{\widehat{\Gamma}_{n}^{*}\left(\rho_{e}\right)}{\psi_{\phi}\left(x_{e},\nu,t\right)}B_{n}\left(\rho_{e}\right).\] 4. _When the spinal individual of trait_ \(x_{e}\) _branches at time_ \(t\) _in a population_ \(\nu\) _and is replaced by_ \(n\) _children with traits_ \(\mathbf{y}\)_, the integer-valued random variable_ \(J(\rho_{e},\mathbf{y})\) _choosing the new spinal individual after a spinal branching event is given, for all_ \(1\leq j\leq n\)_, by_ \[\mathbb{P}\left(J(\rho_{e},\mathbf{y})=j\right)=\frac{\psi_{\phi}\left(y^{j},\nu_{+}(x_{e},\mathbf{y}),t\right)}{\sum_{i=1}^{n}\psi_{\phi}\left(y^{i},\nu_{+}(x_{e},\mathbf{y}),t\right)}.\] (2.9) The total branching rate from every state \(\rho_{e}\) is denoted \[\widehat{\tau}_{\mathrm{tot}}\left(\rho_{e}\right):=\sum_{n\geq 0}\widehat{B}_{n}^{*}\left(\rho_{e}\right)+\widehat{\tau}\left(\rho_{e}\right). \tag{2.10}\] Remark that \(\widehat{K}_{0}^{*}=0\); therefore the spinal individual cannot branch without children and the spinal population never goes extinct. Notice that for \(\psi_{\phi}\equiv 1\), individuals outside the spine follow the same dynamics as the individuals in the population \((\nu_{t},t\geq 0)\). In this case, the spinal individual of trait \(x_{e}\) branches at time \(t\) in a population \(\nu\) with rate \(m(\rho_{e})B(\rho_{e})\), where \(m(\cdot)\) is the mean offspring number, which is finite under Assumption B. The random number of children at a branching event thus follows the size-biased law \(kp_{k}(\cdot)/m(\cdot)\), and the new spinal individual is chosen uniformly among the offspring (a minimal sampling sketch of this special case is given below, after Assumption C). Theorem 2.1 is valid until the first explosion time of both processes. We established in Proposition 1.1 that under Assumption A the branching process does not explode in finite time. To ensure the non-explosion of the spine process, we have to consider an additional assumption on the weight function \(\psi_{\phi}\) used for the construction. **Assumption C**.: _There exists a positive continuous function \(\hat{b}_{0}\) on \(\mathbb{R}_{+}\), such that for all \((x,\nu,t)\in\mathcal{X}\times\mathbb{V}\times\mathbb{R}_{+}\)_ \[\sum_{n\neq 1}n\left(\widehat{B}_{n}\left(x,\nu,t\right)+\widehat{B}_{n}^{*}\left(x,\nu,t\right)\right)\leq\hat{b}_{0}(t)\left(1+|x|\right).\] This assumption, involving both the branching parameters and the function \(\psi_{\phi}\), is stronger than Assumption B. The set of weight functions that can be used to construct a spine process that does not explode in finite time may differ from one model to another. However, one may instead use more restrictive conditions that are sufficient for every branching process under Assumption A.
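To make the \(\psi_{\phi}\equiv 1\) case concrete, here is the promised minimal sampling sketch of a single spinal branching event. The offspring law `p`, passed as an array, is an illustrative assumption and not an object defined in the text; in the general case one would use the \(\widehat{\Gamma}_{n}\)-weighted laws of Dynamics 2.2 and 2.3 instead of the size-biased law.

```python
import numpy as np

def sample_spinal_branching(p, rng=np.random.default_rng()):
    """psi_phi == 1 case: the number of children of the spinal individual
    follows the size-biased law k * p_k / m, and the new spinal individual
    is chosen uniformly among the offspring.

    p : 1-d array with p[k] = probability of k children; its mean m is
        assumed finite and positive, in line with Assumption B for this weight.
    """
    k = np.arange(len(p))
    m = float(np.dot(k, p))                          # mean offspring number m
    size_biased = k * p / m                          # size-biased law k * p_k / m
    n_children = int(rng.choice(k, p=size_biased))   # children at the spinal event
    new_spine_index = int(rng.integers(n_children))  # uniform choice among offspring
    return n_children, new_spine_index
```

Individuals outside the spine, in this particular case, branch exactly as in the original process, so no reweighting is needed for them.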
One such sufficient condition arises in mass-conservative models: taking \(\psi_{\phi}(x,\nu)=x\) ensures the non-explosion of the spine process for any initial branching process satisfying Assumption A. **Proposition 2.4**.: _Under Assumption A, for every \(\psi_{\phi}\in\mathcal{D}\) ensuring Assumption C, the spine process \(((E_{t},\bar{\chi}_{t}),t\geq 0)\) whose law is characterized by the generator \(\widehat{\mathcal{L}}_{\psi_{\phi}}\) does not explode in finite time. Furthermore, the generator \(\widehat{L}_{\psi_{\phi}}\) of the marginal spine process \(((Y_{t},\chi_{t}),t\geq 0)\) given by Dynamics 2.2 and 2.3 is defined, for all functions \(F_{f}\in\mathcal{D}\) and all \((x,\nu,t)\in\mathcal{X}\times\mathbb{V}\times\mathbb{R}_{+}\), by_ \[\widehat{L}_{\psi_{\phi}}F_{f}\left(x,\nu,t\right):=\frac{\mathcal{G}\left[\psi_{\phi}F_{f}\right]\left(x,\nu,t\right)}{\psi_{\phi}\left(x,\nu,t\right)}-\frac{\mathcal{G}\psi_{\phi}\left(x,\nu,t\right)}{\psi_{\phi}\left(x,\nu,t\right)}F_{f}\left(x,\nu,t\right).\] Proof.: The proof of non-explosion follows the lines of the proof of Proposition 1.1. The derivation of the generator of the marginal spine process is purely computational. Both are detailed in Appendix A. It follows that the marginal law of the spine process is characterized by the operator \(\mathcal{G}\) and the weight function \(\psi_{\phi}\). **Corollary 2.5**.: _Let \(\psi_{\phi}\in\mathcal{D}\) ensuring Assumption C, \(t\geq 0\) and \(\bar{z}\in\overline{\mathbb{V}}\). Under Assumption A, for any measurable non-negative function \(f\) on \(\mathcal{U}\times\mathbb{D}\left([0,t],\mathcal{X}\times\overline{\mathbb{V}}\right)\),_ \[\mathbb{E}_{\bar{z}}\left[\sum_{u\in\mathbb{G}(t)}\psi_{\phi}\left(X_{t}^{u},\bar{\nu}_{t},t\right)f\left(u,\left(\left(X_{s}^{u},\bar{\nu}_{s}\right),0\leq s\leq t\right)\right)\right]\\ =\langle z,\psi_{\phi}(\cdot,z,0)\rangle\mathbb{E}_{\bar{z}}\left[\exp\left(\int_{0}^{t}\frac{\mathcal{G}\psi_{\phi}\left(Y_{s},\chi_{s},s\right)}{\psi_{\phi}\left(Y_{s},\chi_{s},s\right)}\,ds\right)f\left(E_{t},\left(\left(Y_{s},\bar{\chi}_{s}\right),0\leq s\leq t\right)\right)\right].\] Proof.: We use Assumptions A and C to ensure that \(T_{\mathrm{Exp}}\) and \(\widehat{T}_{\mathrm{Exp}}\) are almost surely infinite. Let \(f\) be a measurable non-negative function on \(\mathcal{U}\times\mathbb{D}\left([0,t],\mathcal{X}\times\overline{\mathbb{V}}\right)\). We introduce the measurable non-negative function \(H\), defined for all \(\left(u,\bar{z}_{s},s\leq t\right)\in\mathcal{U}\times\mathbb{D}\left([0,t],\mathbb{V}\right)\) by \[H(u,\bar{z}_{s},s\leq t):=\psi_{\phi}(X_{t}^{u},z_{t},t)f\left(u,\left(X_{s}^{u},\bar{z}_{s}\right),s\leq t\right)\langle z_{t},1\rangle.\] The corollary is thus a direct application of Theorem 2.1 to the function \(H\) with a uniformly sampled individual. This formula gives a change of probability that involves the function \(\mathcal{G}\psi_{\phi}/\psi_{\phi}\) through a path-integral term. This study is related to Feynman-Kac path measures and semigroups; we refer to [18] for an overview of this subject. In the case of a branching process with interactions, the integral term depends on the trajectory of the whole spinal population. In general cases with interactions, the branching property no longer holds and the so-called Many-to-One formula (see _e.g._ Proposition 9.3 in [5]) falls apart.
However, if \(\psi_{\phi}\) is an eigenfunction of the operator \(\mathcal{G}\), then we have the following Many-to-One formula: for any non-negative measurable function \(g\) on \(\mathbb{D}\left([0,t],\mathcal{X}\right)\), \[\mathbb{E}_{\bar{z}}\left[\sum_{u\in\mathbb{G}(t)}\psi_{\phi}\left(X_{t}^{u},\bar{\nu}_{t},t\right)g\left(X_{s}^{u},s\leq t\right)\right]=C_{t}\mathbb{E}_{\bar{z}}\left[g\left(Y_{s},s\leq t\right)\right],\] where \(C_{t}\) is a time-dependent positive constant. This formula, established in [15], reduces the empirical measure of the trajectories of all the individuals up to time \(t\) to the law of the trajectory of a single individual in the spinal construction, the spinal individual. The spinal individual in this case can be considered as a typical individual, reflecting the average behavior of the whole population. **Remark 2.6**.: _If we assume, for all \(\left(x,\nu,t\right)\in\mathcal{X}\times\mathbb{V}\times\mathbb{R}_{+}\), that \(B(x,\nu,t)=B(t)\) and, for all \(k\geq 0\), \(p_{k}(x,\nu,t)=p_{k}(t)\) in the considered branching process, then taking \(\psi_{\phi}\equiv 1\) gives the classical Many-to-One formula [4]:_ \[\mathbb{E}_{\bar{z}}\left[\sum_{u\in\mathbb{G}(t)}g\left(X_{s}^{u},s\leq t\right)\right]=\mathbb{E}_{\bar{z}}\left[\left\langle\nu_{t},1\right\rangle\right]\mathbb{E}_{\bar{z}}\left[g\left(Y_{s},s\leq t\right)\right],\] _where the average number of individuals in the population is given at time \(t\) by_ \[\mathbb{E}_{\bar{z}}\left[\left\langle\nu_{t},1\right\rangle\right]=\langle z,1\rangle\exp\left(\int_{0}^{t}B(s)(m(s)-1)\text{d}s\right).\]

## 3. Kesten-Stigum criterion

In this section, we present both a sufficient and a necessary Kesten-Stigum criterion for density-dependent branching processes with a continuous type space. Before stating the result, we introduce the limiting martingale that naturally follows from the spinal construction. **Proposition 3.1**.: _Under Assumption A, for every \(\psi_{\phi}\in\mathcal{D}\) ensuring Assumption B,_ \[W_{t}(\psi_{\phi}):=\sum_{u\in\mathbb{G}(t)}\exp\left(-\int_{0}^{t}\frac{\mathcal{G}\psi_{\phi}\left(X_{s}^{u},\nu_{s},s\right)}{\psi_{\phi}\left(X_{s}^{u},\nu_{s},s\right)}\text{d}s\right)\psi_{\phi}\left(X_{t}^{u},\nu_{t},t\right) \tag{3.1}\] _is a non-negative martingale with respect to the filtration \((\mathcal{F}_{t},t\geq 0)\) generated by the original process. It almost surely converges to a random variable \(W(\psi_{\phi})\in[0,\infty)\)._ Note that if the process \((\nu_{t},t\geq 0)\) goes extinct almost surely, then \(W(\psi_{\phi})=0\) almost surely. However, even on the survival event, the martingale \((W_{t}(\psi_{\phi}),t\geq 0)\) may almost surely degenerate to \(0\). The limit depends on the chosen function \(\psi_{\phi}\), and we will focus here on the case \(\psi_{\phi}\equiv 1\) (see [2] for the case without interactions). In this case, the branching parameters of the original process must be such that \(m(\cdot)\) is finite to ensure Assumption B. We denote \[\mathcal{W}_{t}:=W_{t}(1)=\sum_{u\in\mathbb{G}(t)}e^{-\int_{0}^{t}B(X_{s}^{u},\nu_{s},s)(m(X_{s}^{u},\nu_{s},s)-1)\text{d}s}\qquad\text{and}\quad\mathcal{W}:=\limsup_{t\to\infty}\mathcal{W}_{t}. \tag{3.2}\] We noticed in Remark 2.6 that if the branching rates depend neither on the trait nor on the population, then we have an explicit expression for the mean size of the population.
For non-structured branching processes, the mean size \(\mathbb{E}\left[N_{t}\right]\) of the population, where \(N_{t}:=\left\langle\nu_{t},1\right\rangle\), is given by \(\mathbb{E}\left[N_{t}\right]=N_{0}\exp\left(\int_{0}^{t}B(s)(m(s)-1)\text{d}s\right)\). One may ask under what conditions this exponential growth accurately reflects the rate of increase of the population size. In this particular case, \(\mathcal{W}_{t}/N_{0}=N_{t}/\mathbb{E}\left[N_{t}\right]\), so finding conditions for the non-degeneracy of this martingale gives a direct answer to the question. In the general case with interactions, Corollary 2.5 gives \[\mathcal{W}_{t}\frac{\mathbb{E}\left[N_{t}\right]}{N_{0}}=\sum_{u\in\mathbb{G}(t)}e^{-\int_{0}^{t}B(X_{s}^{u},\nu_{s},s)(m(X_{s}^{u},\nu_{s},s)-1)\text{d}s}\mathbb{E}_{\bar{z}}\left[e^{\int_{0}^{t}B(Y_{s},\chi_{s},s)(m(Y_{s},\chi_{s},s)-1)\text{d}s}\right],\] which is close to \(N_{t}\) if the mean behavior of the spinal individual is not far from the average behavior of all the individuals in the branching process. In the sequel, we will suppose that the following hypothesis on the branching rates holds. **Assumption D**.: _Assumptions A and C hold true and there exist \(c,C,\overline{B}\in\mathbb{R}_{+}^{*}\) such that, for all \(\rho\in\mathbb{W}\times\mathbb{R}_{+}\),_ \[c\leq B(\rho)(m(\rho)-1)\leq C\quad\text{and}\quad B(\rho)\leq\overline{B}.\] This assumption ensures that the process is strongly supercritical, in the sense that the mean number of offspring is uniformly bounded below by a constant strictly greater than \(1\), and that the only absorbing state for the branching process is the null measure \(\nu\equiv 0\). This uniform hypothesis could be partially relaxed under strong positivity assumptions on the generator of the branching process; see [37] for the discrete case and [32, Chapter 3] for continuous time. It also restricts the setting to branching processes with bounded branching rates. We can now state the result. **Theorem 3.2**.: _We assume Assumption D, and introduce for all \(k\in\mathbb{N}\)_ \[\overline{p}_{k}:=\sup_{\rho\in\mathbb{W}\times\mathbb{R}_{+}}p_{k}(\rho),\quad\text{and }\underline{p}_{k}:=\inf_{\rho\in\mathbb{W}\times\mathbb{R}_{+}}p_{k}(\rho).\] _If \(\sum_{k\geq 1}k\log(k)\overline{p}_{k}<+\infty\), then, for every initial measure \(z\in\mathbb{V}\),_ \[\mathbb{E}_{z}\left[\mathcal{W}\right]=\langle z,1\rangle.\] _If \(\sum_{k\geq 1}k\log(k)\underline{p}_{k}=+\infty\), then_ \[\mathcal{W}=0\quad\text{ almost surely.}\] The idea of the proof, based on the conceptual proofs established in [44, 26, 3], is to consider the spinal process as a process with immigration: every individual outside the spine follows the same dynamics as the original process \((\nu_{t},t\geq 0)\), and the spinal individual provides new individuals at a biased rate. Note that the spinal construction also changes the branching rates of individuals outside the spine if \(\psi_{\phi}\) depends on the population state. **Remark 3.3**.: _If \((\overline{p}_{k},k\geq 0)\) and \((\underline{p}_{k},k\geq 0)\) have finite first moments \(\overline{m}\) and \(\underline{m}\), then we can introduce \(\bar{L}\) and \(\underline{L}\), the \(\mathbb{N}\)-valued random variables of laws given respectively by \((\overline{p}_{k}/\overline{m},k\in\mathbb{N})\) and \((\underline{p}_{k}/\underline{m},k\in\mathbb{N})\).
The conditions of Theorem 3.2 thus become \(\mathbb{E}\left[\bar{L}\log\left(\bar{L}\right)\right]<+\infty\) for the non-degeneracy and \(\mathbb{E}\left[\underline{L}\log\left(\underline{L}\right)\right]=+\infty\) for the degeneracy. Note that these conditions are similar to the conditions (16b) and (18b) in [2]. In the case of a constant reproductive law, it is well known that these two conditions form a dichotomy. Athreya [2] showed that this dichotomy remains valid for multitype Galton-Watson processes with a finite set of traits and no interactions._

## 4. A Yule process with competition

In this section, we introduce an alternative application of the spinal construction method that enables us to obtain a spine process with straightforward dynamics from a branching process with intricate interactions. As a toy model, we consider a time-inhomogeneous Yule process with competitive interactions between the individuals, affecting their traits. The individuals in the population are characterized by their trait \(x\in\mathbb{R}_{+}^{*}\), which can be, for example, a mass or a size. An individual with trait \(x\) divides, at an instantaneous rate \(r(t)x\) where \(r\) is a measurable function on \(\mathbb{R}_{+}\), into two children of sizes \(\Lambda x\) and \((1-\Lambda)x\), where \(\Lambda\) is a \([0,1]\)-valued random variable with probability density function (p.d.f.) \(q\). We assume that \[m_{\rm div}:=\mathbb{E}\left[\Lambda\right]\in(0,1),\quad\text{and}\quad\ K_{\rm div}:=\mathbb{E}\left[\frac{1}{\Lambda(1-\Lambda)}\right]<+\infty.\] This mass-conservative mechanism of division is classical in cell modeling [25]. Moreover, each individual experiences the influence of the whole population, leading to a reduction of its trait. Consequently, at an instantaneous rate \(d(t)N_{t}\), where \(N_{t}\) is the population size at time \(t\) and \(d\) is a positive measurable function on \(\mathbb{R}_{+}\), each individual loses a fraction \((1-\Theta)\) of its size. Here \(\Theta\) is a \([0,1]\)-valued random variable with p.d.f. \(p\), and we assume that \[m_{\rm loss}:=\mathbb{E}\left[\Theta\right]\in(0,1)\quad\text{and}\quad\ K_{\rm loss}:=\mathbb{E}\left[\frac{1}{\Theta}\right]<+\infty.\] These events can be interpreted as an inhibition of reproductive material in cells due to competitive interactions within the population. Finally, we consider that the trait of each individual grows exponentially at an instantaneous rate \(\mu(t)\). The time dependency of these parameters can represent an external control or the effect of a deterministic environment. This defines a branching process \((\nu_{t},t\geq 0)\), whose law is characterized by the infinitesimal generators \((\mathcal{J}^{t},t\geq 0)\).
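Before writing the generator out, it is worth recording the total jump rate that these two mechanisms induce. Writing \(N_{t}:=\langle\nu_{t},1\rangle\) for the population size and \(B_{t}:=\int_{\mathbb{R}_{+}^{*}}x\,\nu_{t}(\mathrm{d}x)\) for its total biomass (the quantities denoted \(S\) and \(B\) later in this section), the population jumps at the total instantaneous rate \[r(t)\,B_{t}+d(t)\,N_{t}^{2},\] since each individual of trait \(x\) divides at rate \(r(t)x\) and undergoes a loss event at rate \(d(t)N_{t}\).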
For all \(t\geq 0\), \(\mathcal{J}^{t}\) is defined on the set \(\mathcal{D}_{J}\), where \[\mathcal{D}_{J}:=\left\{H_{h}\in\mathfrak{B}\left(\mathbb{R}_{+}\right),\exists\left(h,H\right)\in\mathcal{C}^{1}\left(\mathbb{R}_{+},\mathbb{R}\right)\times\mathcal{C}^{1}\left(\mathbb{R}_{+},\mathbb{R}_{+}^{*}\right):\ \forall\nu\in\mathbb{W},\ H_{h}\left(\nu\right)=H\left(\langle\nu,h\rangle\right)\right\}\] by \[\mathcal{J}^{t}H_{h}(\nu)=H^{\prime}\left(\langle\nu,h\rangle\right)\int_{\mathbb{R}_{+}}h^{\prime}(y)\mu(t)y\nu({\rm d}y)\] \[+\int_{\mathbb{R}_{+}}r(t)y\int_{0}^{1}\left[H_{h}\left(\nu-\delta_{y}+\delta_{\lambda y}+\delta_{(1-\lambda)y}\right)-H_{h}\left(\nu\right)\right]q(\lambda){\rm d}\lambda\,\nu({\rm d}y)\] \[+d(t)\langle\nu,1\rangle\int_{\mathbb{R}_{+}}\int_{0}^{1}\left[H_{h}\left(\nu-\delta_{y}+\delta_{\theta y}\right)-H_{h}\left(\nu\right)\right]p(\theta){\rm d}\theta\,\nu({\rm d}y).\] We remark that Assumption A is verified by the parameters of this branching process, and thus it almost surely does not explode in finite time. Note that in this population, the dynamics are correlated: an increase in population size accelerates the rate of loss, while loss events slow down the rate of division. The operator at the core of the spinal construction, introduced in (2.3) and defined for functions \(F_{f}\) in the set \(\mathcal{D}\) introduced in (2.1), is such that for all \((x,\nu,t)\in\mathbb{W}\times\mathbb{R}_{+}\) \[\mathcal{G}F_{f}(x,\nu,t)=D_{1}F\left(x,\langle\nu,f(\cdot,t)\rangle,t\right)\mu(t)x+D_{3}F\left(x,\langle\nu,f(\cdot,t)\rangle,t\right)\] \[+D_{2}F\left(x,\langle\nu,f(\cdot,t)\rangle,t\right)\int_{\mathbb{R}_{+}}\left[\frac{\partial f}{\partial y}(y,t)\mu(t)y+\frac{\partial f}{\partial t}(y,t)\right]\nu(\mathrm{d}y)\] \[+r(t)x\int_{0}^{1}\left[F_{f}\left(\lambda x,\nu-\delta_{x}+\delta_{\lambda x}+\delta_{(1-\lambda)x},t\right)+F_{f}\left((1-\lambda)x,\nu-\delta_{x}+\delta_{\lambda x}+\delta_{(1-\lambda)x},t\right)\right.\] \[\left.-F_{f}\left(x,\nu-\delta_{x}+\delta_{\lambda x}+\delta_{(1-\lambda)x},t\right)\right]q(\lambda)\mathrm{d}\lambda\] \[+d(t)\langle\nu,1\rangle\int_{0}^{1}\left[F_{f}\left(\theta x,\nu-\delta_{x}+\delta_{\theta x},t\right)-F_{f}\left(x,\nu-\delta_{x}+\delta_{\theta x},t\right)\right]p(\theta)\mathrm{d}\theta\] \[+\int_{\mathbb{R}_{+}}r(t)y\int_{0}^{1}\left[F_{f}\left(x,\nu-\delta_{y}+\delta_{\lambda y}+\delta_{(1-\lambda)y},t\right)-F_{f}\left(x,\nu,t\right)\right]q(\lambda)\mathrm{d}\lambda\,\nu(\mathrm{d}y)\] \[+d(t)\langle\nu,1\rangle\int_{\mathbb{R}_{+}}\int_{0}^{1}\left[F_{f}\left(x,\nu-\delta_{y}+\delta_{\theta y},t\right)-F_{f}\left(x,\nu,t\right)\right]p(\theta)\mathrm{d}\theta\,\nu(\mathrm{d}y).\] Notice that polynomial functions cannot be eigenfunctions of the operators \(\mathcal{G}^{t}\) for all \(t\). Finding analytic expressions of eigenfunctions for such non-local operators is difficult, and the existence of such eigenfunctions is not guaranteed in general; see [9, 17]. Here we propose to use the change of measure associated with the spinal construction in order to decorrelate the dynamics within the spine process. We believe that this method can be generalized to different models.
We choose \(\psi\in\mathcal{C}^{1}(\mathbb{R}_{+}^{*}\times\mathbb{R}\times\mathbb{R}_{+},\mathbb{R}_{+}^{*})\) and \(\phi\in\mathcal{C}^{1}(\mathbb{R}_{+}^{*}\times\mathbb{R}_{+},\mathbb{R})\) such that for all \((x,y,t)\in\mathbb{R}_{+}^{*}\times\mathbb{R}\times\mathbb{R}_{+}\) \[\psi(x,y,t):=xe^{-y}\quad\text{and}\qquad\phi(x,t):=\ln\left(xr(t)K_{\text{div}}\right). \tag{4.1}\] Applied to a spinal state \((x_{e},\chi,t)\in\mathbb{W}\times\mathbb{R}_{+}\) where \(\chi:=\sum_{u\in\mathbb{G}(t)}\delta_{x^{u}}\), this weight function verifies \[\psi_{\phi}(x_{e},\chi,t)=\frac{x_{e}}{\prod_{u\in\mathbb{G}(t)}(r(t)K_{\text{div}}x^{u})}\] and ensures Assumption B. We can now determine the parameters of the spinal process using this function \(\psi_{\phi}\). The behavior of the traits between branching events remains unchanged compared to the Yule process under consideration. The division events occur at rate \(1\) for both the spinal and the non-spinal individuals. The random variable \(\widehat{\Lambda}\), which determines the distribution of mass during division in the spinal construction, has a density function \(\widehat{q}\) given by: \[\widehat{q}(\lambda):=\frac{q(\lambda)}{\lambda(1-\lambda)K_{\text{div}}}. \tag{4.2}\] As a result, in this \(\psi_{\phi}\)-spinal process, division events no longer depend on the size of the individuals. Individuals outside the spine lose a random fraction \(1-\widehat{\Theta}\) of their mass at a rate \(\widehat{B}_{1}(x,\nu,t):=K_{\text{loss}}d(t)\langle\nu,1\rangle\), where \(\widehat{\Theta}\) has probability density function \(\widehat{p}\) given by: \[\widehat{p}(\theta)=\frac{p(\theta)}{\theta K_{\text{loss}}}. \tag{4.3}\] The loss events for the spinal individual follow the same dynamics as those in the Yule process being considered. It is worth noting that Assumption C holds in this case, leading to \(\widehat{T}_{\rm Exp}=\infty\) almost surely. Therefore, by using appropriate functions \(\psi\) and \(\phi\) in the spinal construction, we can make division events independent of loss events, resulting in a conditional branching property. This property can be used to reduce the cost of simulating the branching process. A classical exact method to simulate non-homogeneous Poisson processes is the thinning algorithm, introduced by Lewis and Shedler [39]. It is used to simulate Poisson processes of intensity \(c(t)\) on a window \([0,T]\) for a fixed \(T>0\). The idea is to generate candidate jump times \((t_{i},1\leq i\leq n)\) at a rate \(\bar{c}:=\sup_{[0,T]}c(t)\) and to accept each of them with probability \(c(t_{i})/\bar{c}\). When the intensity \(c(\cdot)\) depends not only on time \(t\) but also on the entire past of the point process, one can use Ogata's modified thinning algorithm [48]. Given the information of the first \(k\) points \((t_{i},1\leq i\leq k)\), the intensity \(c(\cdot)\) is deterministic on \([t_{k},T_{k+1}]\), where \(T_{k+1}\) is the next jump time. As a result, generating the next point in such processes can be seen as generating the first point of an inhomogeneous Poisson process. This idea has been more recently adapted to branching processes; see _e.g._ [24, 10, 25]. The main limitation of this method is that \(\bar{c}\) can become excessively large even for small simulation windows \(T\), which results in the rejection of most of the generated points.
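For concreteness, here is a minimal sketch of the Lewis-Shedler thinning step on a window \([0,T]\); the intensity function `c` and the bound `c_bar` are illustrative placeholders supplied by the user (for instance, a bound on the total jump rate of the model above), not objects defined in the text.

```python
import numpy as np

def thinning_jump_times(c, c_bar, T, rng=np.random.default_rng()):
    """Sample the jump times of a Poisson process of intensity c(t) on [0, T]
    by thinning: candidates are generated at the constant rate c_bar and each
    candidate t is accepted with probability c(t) / c_bar.

    c     : callable giving the instantaneous intensity, with c(t) <= c_bar on [0, T]
    c_bar : float, upper bound on the intensity over the window
    """
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / c_bar)   # next candidate arrival
        if t > T:
            return times
        if rng.uniform() < c(t) / c_bar:    # thinning (acceptance) step
            times.append(t)
```

The sketch makes the limitation quoted above explicit: a candidate is rejected with probability \(1-c(t)/\bar{c}\), so a loose bound \(\bar{c}\) wastes most of the generated points.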
Another exact method, based on inverse transform sampling, consists in generating the arrival times of the process by sampling a uniform random variable \(U\) in \([0,1]\); see _e.g._ [19]. The arrival times \(t_{k}\) are then obtained by inversion of the cumulative distribution function of the jump times, that is, by solving \(1-\exp(-\int_{0}^{t_{k}}c(s){\rm d}s)=U\). However, an exact inversion is inaccessible in general, and in this model in particular. Here, we propose a new simulation method, based on the spinal construction, that is much faster than Ogata's algorithm. The idea is to use the fact that the division events are independent of the mass in the spine process. Thus, we can generate a binary tree with unit rate, then spawn a Poisson point process indexed on this tree that distributes the loss events and chooses the spinal individual. Finally, the trait of each individual at every time is computed from the deterministic growth and the encountered loss events. These three steps are illustrated in Figure 1, and a detailed algorithm can be found in Appendix D. We then use Theorem 2.1 to compute various statistics of the branching process based on the statistics obtained from the spine process. Note that in this auxiliary process, it is possible to achieve exact simulations because of the bounded rates. For all \((x,\nu,t)\in\mathbb{R}_{+}^{*}\times\mathbb{V}\times\mathbb{R}_{+}\), the expression of the operator \(\mathcal{G}\) applied to the chosen \(\psi_{\phi}\), defined in (4.1), is given by \[\frac{\mathcal{G}\psi_{\phi}\left(x,\nu,t\right)}{\psi_{\phi}\left(x,\nu,t\right)}=\mu(t)+\left[-\frac{\dot{r}(t)}{r(t)}-\mu(t)+d(t)(S-1)\left(K_{\rm loss}-1\right)\right]S-r(t)B,\] where for all \(\nu\in\mathbb{V}\), \(S:=\langle\nu,1\rangle\) and \(B:=\int_{\mathbb{R}_{+}^{*}}x\nu({\rm d}x)\) denote respectively the size of the population \(\nu\) and its total biomass.

Figure 1. Algorithm description.

Thus, any statistic on the branching process estimated by Monte Carlo methods can alternatively be estimated using trajectories of the spine process and the formula given by Theorem 2.1 applied to this spinal construction.

## 5. Proofs

### Proof of Section 1

In this section, we prove the existence and uniqueness result for the considered branching process. Proof of Proposition 1.1.: The description of the process \((\bar{\nu}_{t},t\geq 0)\) introduced in Section 1 leads to a canonical SDE driven by the dynamics of the trait and a multivariate point process. This SDE is then used to show that the mean number of individuals in the population and the trait are bounded at any time \(t\geq 0\). Using Assumption A.2, we conclude that the population does not explode in finite time. Following [4, 45], for convenience, we introduce a vector \(\boldsymbol{\theta}:=(\theta_{i},i\in\mathbb{N}^{*})\) of independent uniform random variables on \([0,1]\) and a family \((F_{i,n},i\leq n,n\in\mathbb{N}^{*})\) of measurable maps from \(\mathcal{X}\times\mathbb{V}\times\mathbb{R}_{+}\times[0,1]\) to \(\mathcal{X}\) such that for all \((n,x,\nu,t)\in\mathbb{N}^{*}\times\mathcal{X}\times\mathbb{V}\times\mathbb{R}_{+}\), the random vector \((F_{i,n}\left(x,\nu,t,\theta_{i}\right),i\leq n)\) is distributed as \(K_{n}\left(x,\nu,t,\cdot\right)\). Note that taking a single uniform random variable \(\theta\) on \([0,1]\) correlates the traits at birth of the children.
Let \(E=\mathcal{U}\times\mathbb{R}_{+}\times\mathbb{N}\times[0,1]^{\mathbb{N}}\) and let \(Q\left(\mathrm{d}s,\mathrm{d}u,\mathrm{d}r,\mathrm{d}k,\mathrm{d}\boldsymbol{\theta}\right)\) be a Poisson point measure on \(\mathbb{R}_{+}\times E\) with intensity \(q\) such that \[q\left(\mathrm{d}s,\mathrm{d}u,\mathrm{d}r,\mathrm{d}k,\mathrm{d}\boldsymbol{\theta}\right)=\mathrm{d}s\left(\sum_{v\in\mathcal{U}}\delta_{v}(\mathrm{d}u)\right)\mathrm{d}r\left(\sum_{n\in\mathbb{N}}\delta_{n}(\mathrm{d}k)\right)\mathrm{d}\boldsymbol{\theta}.\] We denote by \((\mathcal{F}_{t},t\geq 0)\) the canonical filtration associated with this Poisson point measure and by \((T_{k},k\geq 0)\) the sequence of jump times, that is, the random sequence of arrival times given by the Poisson point measure \(Q\). Let \(\bar{z}\in\overline{\mathbb{V}}\). For every function \(g\in\mathcal{C}^{1}\left(\mathcal{U}\times\mathcal{X},\mathbb{R}\right)\) and \(t\geq 0\), the process \((\bar{\nu}_{t},t\geq 0)\) starting from \(\bar{z}\) verifies \[\langle\bar{\nu}_{t},g\rangle:=\langle\bar{z},g\rangle+\int_{0}^{t}\int_{\mathcal{U}\times\mathcal{X}}\frac{\partial g}{\partial x}(u,x)\cdot\mu\left(x,\nu_{s},s\right)\bar{\nu}_{s}(\mathrm{d}u,\mathrm{d}x)\mathrm{d}s\\ +\int_{[0,t]\times E}\mathbbm{1}_{\left\{u\in\mathbb{G}\left(\bar{\nu}_{s^{-}}\right),r\leq B_{k}\left(X_{s^{-}}^{u},\nu_{s^{-}},s\right)\right\}}\\ \times\left[\sum_{i=1}^{k}g\left(ui,F_{i,k}\left(X_{s^{-}}^{u},\nu_{s^{-}},s,\theta_{i}\right)\right)-g(u,X_{s^{-}}^{u})\right]Q\left(\mathrm{d}s,\mathrm{d}u,\mathrm{d}r,\mathrm{d}k,\mathrm{d}\boldsymbol{\theta}\right). \tag{5.1}\] Note that the dynamical construction of the marginal measure-valued process alone does not ensure uniqueness: the individual involved in the next branching event is chosen according to its trait, and thus two individuals with the same trait could equally be chosen as the branching one. The labeling of individuals allows us to overcome this problem; any other labeling method would also work, see _e.g._ [24]. The proof of existence and uniqueness of a solution is classical and was first established for measure-valued processes by Fournier and Méléard [24]. Here we consider unbounded rates, and we follow, with small adaptations, the proof of Lemma 2.5 in [45]. Let \(T>0\). We prove the non-accumulation of branching events on \([0,T]\). First, we apply equation (5.1) to the constant function equal to \(1\), which gives the number of individuals in the population, denoted \((N_{t},t\geq 0)\). Using Assumption A.3, we have for all \(t<T_{k}\wedge T\) \[\mathbb{E}_{\bar{z}}\left[N_{t}\right]\leq N_{0}+\int_{0}^{t}b_{0}(s)\left(\mathbb{E}_{\bar{z}}\Big[\sum_{u\in\mathbb{G}(t)}\left|X_{s}^{u}\right|\Big]+\mathbb{E}_{\bar{z}}\left[N_{s}\right]\right)\mathrm{d}s.\] Next, we take a sequence of functions \((g_{n},n\in\mathbb{N})\) in \(\mathcal{C}^{1}(\mathcal{U}\times\mathcal{X},\mathbb{R}_{+})\) such that \(\lim_{n\to\infty}g_{n}(u,x)=|x|\) and \(\lim_{n\to\infty}\frac{\partial g_{n}}{\partial x}(u,x)\cdot\mu(x,\nu,s)\leq|\mu(x,\nu,s)|\) for all \(\nu,s\).
Applying equation (5.1) to these functions and using Assumptions A.1 and A.4, we have, when \(n\to+\infty\), \[\mathbb{E}_{\bar{z}}\Big[\sum_{u\in\mathbb{G}(t)}|X_{t}^{u}|\,\Big]\leq\langle\bar{z},|\cdot|\rangle+\int_{0}^{t}\mu_{0}(s)\left(\mathbb{E}_{\bar{z}}\Big[\sum_{u\in\mathbb{G}(s)}|X_{s}^{u}|\,\Big]+\mathbb{E}_{\bar{z}}\Big[\Big|\sum_{u\in\mathbb{G}(s)}X_{s}^{u}\Big|\Big]+\mathbb{E}_{\bar{z}}\left[N_{s}\right]\right)\mathrm{d}s.\] According to Gronwall's lemma, for all \(t<T_{k}\wedge T\), we have \[\mathbb{E}_{\bar{z}}\left[N_{t}\right]+\mathbb{E}_{\bar{z}}\Big[\sum_{u\in\mathbb{G}(t)}|X_{t}^{u}|\,\Big]\leq\left(N_{0}+\langle\bar{z},|\cdot|\rangle\right)e^{A(T)t}<\infty\] where \(A(T)=\sup_{s\leq T}\left(b_{0}(s)+\mu_{0}(s)\right)\). The number of individuals, as well as the trait of every individual, is thus almost surely finite at any finite time. Assumption A.2 ensures that, in finite time, there is no accumulation of branching events that do not change the size of the population.

### Proof of Section 2

Theorem 2.1 is proved following the steps of the proof of Theorem 1 in [3]. Let \(\psi_{\phi}\in\mathcal{D}\) be a weight function ensuring Assumption B, and let \(\left(\left(E_{t},\bar{\chi}_{t}\right),t\geq 0\right)\) be the time-inhomogeneous \(\overline{\mathbb{W}}\)-valued branching process of generators \(\left(\widehat{\mathcal{L}}_{\psi_{\phi}}^{t},t\geq 0\right)\), defined in (2.5). We first need to introduce some notations. We denote by \((U_{k},k\geq 0)\) the sequence of \(\mathcal{U}\)-valued random variables giving the labels of the branching individuals at the jump times \((T_{k},k\geq 0)\). Let \((N_{k},k\geq 0)\) be the sequence of \(\mathbb{N}\)-valued random variables giving the number of children at each branching event, and write for brevity \(A_{k}:=(U_{k},N_{k})\) for all \(k\geq 0\). At the \(k\)-th branching time \(T_{k}\), we denote by \(\mathcal{Y}_{k}\) the \(\mathcal{X}^{N_{k}}\)-valued random variable giving the vector of offspring traits. Finally, we introduce, for all \(k\geq 0\), \(\mathcal{V}_{k}:=(T_{k},A_{k},\mathcal{Y}_{k})\). We similarly define \(((\widehat{T}_{k},\widehat{U}_{k},\widehat{N}_{k},\widehat{\mathcal{Y}}_{k}),k\geq 0)\), the sequences of jump times, labels of the branching individuals, numbers of children and traits of these children at birth in the spinal construction. Remark that the distributions of the number of children and of their traits depend on whether the branching individual is the spinal one or not. We will also use, for all \(k\geq 0\), \(\widehat{\mathcal{V}}_{k}:=(\widehat{T}_{k},\widehat{A}_{k},\widehat{\mathcal{Y}}_{k})\) where \(\widehat{A}_{k}:=(\widehat{U}_{k},\widehat{N}_{k})\). At time \(s\in[\widehat{T}_{k-1},\widehat{T}_{k})\), the label of the spinal individual is denoted \(E_{k}\) and its trait \(Y_{k}\). For a given initial population \(\bar{z}=\sum_{i=1}^{n}\delta_{(i,x^{i})}\in\overline{\mathbb{V}}\), we use by convention \(U_{0}=\emptyset,\ N_{0}=n,\ \mathcal{Y}_{0}=(x^{i},1\leq i\leq n)\) almost surely. The same convention holds for the spine process.
For all \(k\geq 0\), we introduce the associated filtrations \[\mathcal{F}_{k}=\sigma\left(\mathcal{V}_{i},0\leq i\leq k\right),\text{ and }\ \widehat{\mathcal{F}}_{k}=\sigma\left(E_{k},\left(\widehat{\mathcal{V}}_{i},0\leq i\leq k\right)\right).\] Remark that these notations, summarized in Figure 2, are well-defined until the explosion times of the branching and spine processes, and that the vectors \(\left(\mathcal{V}_{k},k\geq 0\right)\) characterize the trajectories of \((\bar{\nu}_{t},t\geq 0)\) until the explosion time. For every initial population \(\bar{z}\in\overline{\mathbb{V}}\) and every \(k\geq 0\), we introduce the set \(\mathfrak{U}_{k}(\bar{z})\subset(\mathcal{U}\times\mathbb{N})^{k}\) of sequences of \(k\) branching events, starting from \(\bar{z}\), that lead to non-extinguished trajectories. We also introduce, for all \(a\in\mathfrak{U}_{k}(\bar{z})\) and all \(0\leq i\leq k\), \(\mathbb{G}_{i}(a)\), the set of labels of individuals living between the \(i\)-th and \((i+1)\)-th branching events in the population whose branching events are given by \(a\). By decomposing the branching process \((\bar{\nu}_{t},t\geq 0)\) according to the sequences \(a\in\mathfrak{U}_{k}(\bar{z})\) and sampling at time \(t\) an individual \(e\in\mathbb{G}_{k}(a)\), we get that for every measurable non-negative function \(H\) on \(\mathcal{U}\times\mathbb{D}\left([0,t],\mathbb{V}\right)\) \[\mathbb{E}_{\bar{z}}\left[\mathbbm{1}_{\{T_{\mathrm{Exp}}>t,\,\mathbb{G}(t)\neq\emptyset\}}H\left(U_{t},(\bar{\nu}_{s},s\leq t)\right)\right]=\\ \sum_{k\geq 0}\sum_{a\in\mathfrak{U}_{k}(\bar{z})}\sum_{e\in\mathbb{G}_{k}(a)}\mathbb{E}_{\bar{z}}\left[H\left(e,(\bar{\nu}_{s},s\leq t)\right)p_{e}(\bar{\nu}_{t})\mathbbm{1}_{\{T_{k}\leq t<T_{k+1}\}}\prod_{i=0}^{k}\mathbbm{1}_{\{A_{i}=a_{i}\}}\right]. \tag{5.2}\] The expectation on the right-hand side of (5.2) is linked to the spinal construction by a Girsanov-type result, as shown in Lemma 5.1. The difference between our proof and the proof of Theorem 1 in [3] lies essentially in the proof of this lemma. **Lemma 5.1**.: _For any \(k>0\), \(\bar{z}\in\overline{\mathbb{V}}\) and \(a=((u_{i},n_{i}),0\leq i\leq k)\in\mathfrak{U}_{k}(\bar{z})\), let \(F\) be a measurable non-negative function on \(\Pi_{i=1}^{k}\left(\mathbb{R}_{+}\times\mathcal{U}\times\mathbb{N}\times\mathcal{X}^{n_{i}}\right)\). For any \(e\in\mathbb{G}_{k}\left(a\right)\),_ \[\mathbb{E}_{\bar{z}}\left[F\left(\mathcal{V}_{i},0\leq i\leq k\right)\prod_{i=0}^{k}\mathbbm{1}_{\{A_{i}=a_{i}\}}\right]=\left\langle z,\psi_{\phi}\left(\cdot,z,0\right)\right\rangle\\ \times\mathbb{E}_{\bar{z}}\Bigg[\xi_{k}\left(E_{k},\left(\widehat{\mathcal{V}}_{i},0\leq i\leq k\right)\right)F\left(\widehat{\mathcal{V}}_{i},0\leq i\leq k\right)\mathbbm{1}_{\{E_{k}=e\}}\prod_{i=0}^{k}\mathbbm{1}_{\{\widehat{A}_{i}=a_{i}\}}\Bigg]\,,\] _where_ \[\xi_{k}\left(E_{k},\left(\widehat{\mathcal{V}}_{i},0\leq i\leq k\right)\right):=\frac{1}{\psi_{\phi}\left(Y_{\widehat{T}_{k}},\chi_{\widehat{T}_{k}},\widehat{T}_{k}\right)}\exp\left(\int_{0}^{\widehat{T}_{k}}\frac{\mathcal{G}\psi_{\phi}\left(Y_{s},\chi_{s},s\right)}{\psi_{\phi}\left(Y_{s},\chi_{s},s\right)}ds\right).\] We prove this lemma by induction on the number of branching events \(k\). We first state a technical lemma to lighten the computations.
**Lemma 5.2**.: _For all \(k>0\), \(\bar{z}\in\overline{\mathbb{V}}\) and \(a=((u_{i},n_{i}),0\leq i\leq k)\in\mathfrak{U}_{k}(\bar{z})\),_ \[\frac{\xi_{k}\left(E_{k},\left(\widehat{\mathcal{V}}_{i},0\leq i\leq k\right)\right)}{\xi_{k-1}\left(E_{k-1},\left(\widehat{\mathcal{V}}_{i},0\leq i\leq k-1\right)\right)}=\frac{\psi_{\phi}\left(Y_{\widehat{T}_{k}^{-}},\chi_{\widehat{T}_{k}^{-}},\widehat{T}_{k}\right)}{\psi_{\phi}\left(Y_{\widehat{T}_{k}},\chi_{\widehat{T}_{k}},\widehat{T}_{k}\right)}\exp\left(\int_{\widehat{T}_{k-1}}^{\widehat{T}_{k}}\lambda\left(Y_{s},\chi_{s},s\right)ds\right)\quad\text{a.s.}\] _where, for all \(s\in[\widehat{T}_{k-1},\widehat{T}_{k})\),_ \[\lambda\left(Y_{s},\chi_{s},s\right):=\widehat{\tau}_{\mathrm{tot}}\left(Y_{s},\chi_{s},s\right)-\int_{\mathcal{X}}B\left(x,\chi_{s},s\right)\chi_{s}(dx). \tag{5.3}\] We recall that \(\widehat{\tau}_{\mathrm{tot}}\) is the total branching rate of the spine process, defined in (2.10). The proof of this lemma is given in Appendix B.

Figure 2. Sequential notations for the spine process.

Proof of Lemma 5.1.: Following the proof of Theorem 1 in [3], the result is established by induction on the number \(k\) of branching events. The original branching process and its associated spine process may stop branching in finite time; in this case the total numbers of branching events, respectively \(N_{\mathrm{tot}}\) and \(\widehat{N}_{\mathrm{tot}}\), are finite. For all \(k\) such that \(k>N_{\mathrm{tot}}\), the \(k\)-th branching event of the original construction arrives at \(T_{k}=+\infty\). In this case we set \(A_{k}=(\emptyset,0)\) by convention. The same convention is used for the spinal construction. Let \(\bar{z}:=\sum_{i=1}^{n}\delta_{(i,x^{i})}\in\overline{\mathbb{V}}\) be the initial population and let \(F\) be a measurable non-negative function on \(\mathbb{R}_{+}\times\mathcal{U}\times\mathbb{N}\times\mathcal{X}^{n}\). Then, by definition, \[\mathcal{V}_{0}=\widehat{\mathcal{V}}_{0}=(0,(\emptyset,n),(x^{i},1\leq i\leq n)),\text{ a.s.}\] Therefore, for all \(e\in\mathbb{G}(0)\), \[\mathbb{E}_{\bar{z}}\left[\xi_{0}\left(E_{0},\widehat{\mathcal{V}}_{0}\right)F\left(\widehat{\mathcal{V}}_{0}\right)\mathbbm{1}_{\{E_{0}=e\}}\mathbbm{1}_{\{\widehat{A}_{0}=(\emptyset,n)\}}\right]=F\left(\mathcal{V}_{0}\right)\mathbbm{1}_{\{A_{0}=(\emptyset,n)\}}\mathbb{E}_{\bar{z}}\left[\frac{\mathbbm{1}_{\{E_{0}=e\}}}{\psi_{\phi}(Y_{0},z,0)}\right].\] We recall that the individual \(1\leq i\leq n\) of trait \(x^{i}\) in the population \(\bar{z}\) is chosen to be the spinal individual with probability \(\psi_{\phi}(x^{i},z,0)(\left\langle z,\psi_{\phi}(\cdot,z,0)\right\rangle)^{-1}\). Then we have \[\mathbb{E}_{\bar{z}}\left[\xi_{0}\left(E_{0},\widehat{\mathcal{V}}_{0}\right)F\left(\widehat{\mathcal{V}}_{0}\right)\mathbbm{1}_{\{E_{0}=e\}}\mathbbm{1}_{\{\widehat{A}_{0}=(\emptyset,n)\}}\right]=\frac{1}{\left\langle z,\psi_{\phi}\left(\cdot,z,0\right)\right\rangle}\mathbb{E}_{\bar{z}}\left[\mathbbm{1}_{\{A_{0}=(\emptyset,n)\}}F\left(0,A_{0},\mathcal{Y}_{0}\right)\right].\] Thus the result holds for \(k=0\). Now let \(k\geq 1\) and assume that the following induction hypothesis holds at rank \(k-1\).
**Induction Hypothesis.**_For every \(a=(a_{i},0\leq i\leq k-1)\in\left(\mathcal{U}\times\mathbb{N}\right)^{k-1}\) with \(a_{i}=(u_{i},n_{i})\), every measurable non-negative function \(F\) on \(\bigotimes\limits_{i=1}^{k-1}\left(\mathbb{R}_{+}\times\mathcal{U}\times\mathbb{N}\times\mathcal{X}^{n_{i}}\right)\) and every \(e\in\mathbb{G}_{k-1}\left(a\right)\):_ \[\mathbb{E}_{\bar{z}}\left[F\left(\mathcal{V}_{i},0\leq i\leq k-1\right)\prod_{i=0}^{k-1}\mathbbm{1}_{\{A_{i}=a_{i}\}}\right]=\left\langle z,\psi_{\phi}\left(\cdot,z,0\right)\right\rangle\mathbb{E}_{\bar{z}}\Bigg[\xi_{k-1}\left(E_{k-1},\left(\widehat{\mathcal{V}}_{i},0\leq i\leq k-1\right)\right)\\ \times F\left(\widehat{\mathcal{V}}_{i},0\leq i\leq k-1\right)\mathbbm{1}_{\{E_{k-1}=e\}}\prod_{i=0}^{k-1}\mathbbm{1}_{\{\widehat{A}_{i}=a_{i}\}}\Bigg]. \tag{5.4}\] Let \(a=(a_{i},0\leq i\leq k)\in\left(\mathcal{U}\times\mathbb{N}\right)^{k}\) with \(a_{i}=(u_{i},n_{i})\) and \(e\in\mathbb{G}_{k}\left(a\right)\). We denote \(a^{\prime}=(a_{i},0\leq i\leq k-1)\) and take \(F_{k}^{a}\) a measurable non-negative function on \(\bigotimes\limits_{i=1}^{k}\left(\mathbb{R}_{+}\times\mathcal{X}^{n_{i}}\right)\) such that, for all \(\left(\left(t_{i},y_{i}\right),0\leq i\leq k\right)\in\bigotimes\limits_{i=1}^{k}\left(\mathbb{R}_{+}\times\mathcal{X}^{n_{i}}\right)\): \[F_{k}^{a}\left(\left(t_{i},y_{i}\right),0\leq i\leq k\right):=F_{k-1}^{a^{\prime}}\left(\left(t_{i},y_{i}\right),0\leq i\leq k-1\right)I(t_{k}-t_{k-1})F(y_{k}), \tag{5.5}\] where \(F_{k-1}^{a^{\prime}},I\) and \(F\) are measurable, non-negative and bounded, respectively on \(\bigotimes\limits_{i=1}^{k-1}\left(\mathbb{R}_{+}\times\mathcal{X}^{n_{i}}\right)\), \(\mathbb{R}_{+}\) and \(\mathcal{X}^{n_{k}}\). Let \(e^{\prime}\in\mathbb{G}_{k-1}\left(a^{\prime}\right)\) be such that there exists \(j\in\mathbb{N}^{*}\cup\left\{\emptyset\right\}\) verifying \(e^{\prime}j=e\). We introduce the \(\mathbb{N}^{*}\cup\left\{\emptyset\right\}\)-valued random variable \(J_{k}\) choosing the label of the spinal individual at the \(k\)-th branching event, so that \(E_{k}=E_{k-1}J_{k}\) almost surely. Following the proof of Lemma 1 in [3], we express both sides of the equality (5.4) for \(k\geq 1\) conditionally on the filtrations at the previous step \(k-1\). We recall that, for all \(1\leq j\leq k\), \[\mathcal{F}_{j}=\sigma\left(\mathcal{V}_{i},0\leq i\leq j\right),\text{ and }\ \widehat{\mathcal{F}}_{j}=\sigma\left(E_{j},\left(\widehat{\mathcal{V}}_{i},0\leq i\leq j\right)\right).\] Using (5.5) and conditioning on the filtration after the \((k-1)\)-th branching event, we have \[\mathbb{E}_{\bar{z}}\left[F_{k}^{a}\left(\left(T_{i},\mathcal{Y}_{i}\right),0\leq i\leq k\right)\prod_{i=0}^{k}\mathbbm{1}_{\left\{A_{i}=a_{i}\right\}}\right]=\\ \mathbb{E}_{\bar{z}}\left[F_{k-1}^{a^{\prime}}\left(\left(T_{i},\mathcal{Y}_{i}\right),0\leq i\leq k-1\right)C\left(\mathcal{V}_{i},0\leq i\leq k-1\right)\prod_{i=0}^{k-1}\mathbbm{1}_{\left\{A_{i}=a_{i}\right\}}\right], \tag{5.6}\] where \[C\left(\mathcal{V}_{i},0\leq i\leq k-1\right):=\mathbb{E}\left[I\left(T_{k}-T_{k-1}\right)F(\mathcal{Y}_{k})\mathbbm{1}_{\left\{A_{k}=a_{k}\right\}}\big|\mathcal{F}_{k-1}\right]. \tag{5.7}\]
We apply the induction hypothesis (5.4) in the previous equation (5.6) with \(H\) given by \[H\left(\mathcal{V}_{i},0\leq i\leq k-1\right):=F_{k-1}^{a^{\prime}}\left(\left(T_{i},\mathcal{Y}_{i}\right),0\leq i\leq k-1\right)C\left(\mathcal{V}_{i},0\leq i\leq k-1\right).\] Thus \[\mathbb{E}_{\bar{z}}\left[F_{k}^{a}\left(\left(T_{i},\mathcal{Y}_{i}\right),0\leq i\leq k\right)\prod_{i=0}^{k}\mathbbm{1}_{\left\{A_{i}=a_{i}\right\}}\right]=\left\langle z,\psi_{\phi}\left(\cdot,z,0\right)\right\rangle\mathbb{E}_{\bar{z}}\Bigg[\xi_{k-1}\left(E_{k-1},\left(\widehat{\mathcal{V}}_{i},0\leq i\leq k-1\right)\right)\\ \times F_{k-1}^{a^{\prime}}\left(\left(\widehat{T}_{i},\widehat{\mathcal{Y}}_{i}\right),0\leq i\leq k-1\right)C\left(\widehat{\mathcal{V}}_{i},0\leq i\leq k-1\right)\mathbbm{1}_{\left\{E_{k-1}=e^{\prime}\right\}}\prod_{i=0}^{k-1}\mathbbm{1}_{\left\{\widehat{A}_{i}=a_{i}\right\}}\Bigg]. \tag{5.8}\] Similarly, we use (5.5) for the spine process. Conditioning on the filtration after the \((k-1)\)-th branching event, we have \[\mathbb{E}_{\bar{z}}\Bigg[\xi_{k}\left(E_{k},\left(\widehat{\mathcal{V}}_{i},0\leq i\leq k\right)\right)F_{k}^{a}\left(\left(\widehat{T}_{i},\widehat{\mathcal{Y}}_{i}\right),0\leq i\leq k\right)\mathbbm{1}_{\left\{E_{k}=e\right\}}\prod_{i=0}^{k}\mathbbm{1}_{\left\{\widehat{A}_{i}=a_{i}\right\}}\Bigg]=\\ \mathbb{E}\Big[\xi_{k-1}\left(E_{k-1},\left(\widehat{\mathcal{V}}_{i},0\leq i\leq k-1\right)\right)F_{k-1}^{a^{\prime}}\left(\left(\widehat{T}_{i},\widehat{\mathcal{Y}}_{i}\right),0\leq i\leq k-1\right)\\ \times\widehat{C}\left(e^{\prime},\left(\widehat{\mathcal{V}}_{i},0\leq i\leq k-1\right)\right)\prod_{i=0}^{k-1}\mathbbm{1}_{\left\{\widehat{A}_{i}=a_{i}\right\}}\Big], \tag{5.9}\] where, using Lemma 5.2, \[\widehat{C}\left(e^{\prime},\left(\widehat{\mathcal{V}}_{i},0\leq i\leq k-1\right)\right):=\mathbbm{1}_{\left\{E_{k-1}=e^{\prime}\right\}}\mathbb{E}\left[\frac{\psi_{\phi}\left(Y_{\widehat{T}_{k}^{-}},\chi_{\widehat{T}_{k}^{-}},\widehat{T}_{k}\right)}{\psi_{\phi}\left(Y_{\widehat{T}_{k}},\chi_{\widehat{T}_{k}},\widehat{T}_{k}\right)}\exp\left(\int_{\widehat{T}_{k-1}}^{\widehat{T}_{k}}\lambda\left(Y_{s},\chi_{s},s\right)\mathrm{d}s\right)\\ \times\mathbbm{1}_{\left\{J_{k}=j\right\}}I\left(\widehat{T}_{k}-\widehat{T}_{k-1}\right)F\left(\widehat{\mathcal{Y}}_{k}\right)\mathbbm{1}_{\left\{\widehat{A}_{k}=a_{k}\right\}}\left|\widehat{\mathcal{F}}_{k-1}\right.\right]. \tag{5.10}\] If we can establish that \[\widehat{C}\left(e^{\prime},\left(\widehat{\mathcal{V}}_{i},0\leq i\leq k-1\right)\right)=\mathbbm{1}_{\left\{E_{k-1}=e^{\prime}\right\}}C\left(\left(\widehat{\mathcal{V}}_{i},0\leq i\leq k-1\right)\right), \tag{5.11}\] then the expectations on the right-hand sides of equations (5.8) and (5.9) are equal and we get \[\mathbb{E}_{\bar{z}}\left[F_{k}^{a}\left(\left(T_{i},\mathcal{Y}_{i}\right),0\leq i\leq k\right)\prod_{i=0}^{k}\mathbbm{1}_{\left\{A_{i}=a_{i}\right\}}\right]=\left\langle z,\psi_{\phi}\left(\cdot,z,0\right)\right\rangle\\ \times\mathbb{E}_{\bar{z}}\Bigg[\xi_{k}\left(E_{k},\left(\widehat{\mathcal{V}}_{i},0\leq i\leq k\right)\right)F_{k}^{a}\left(\left(\widehat{T}_{i},\widehat{\mathcal{Y}}_{i}\right),0\leq i\leq k\right)\mathbbm{1}_{\left\{E_{k}=e\right\}}\prod_{i=0}^{k}\mathbbm{1}_{\left\{\widehat{A}_{i}=a_{i}\right\}}\Bigg].\] From this equality and using a monotone class argument for the functions \(F_{k}^{a}\) defined in (5.5), we obtain (5.4) at rank \(k\), which concludes the proof.
In the following, we show (5.11) by describing the dynamics of both processes. **Computations for the branching process.** Conditioning successively in the expression of \(C\) defined in (5.7), we get \[C\left(\mathcal{V}_{i},0\leq i\leq k-1\right)=\mathbb{E}\Big[I(T_{k}-T_{k-1})\mathbb{E}\Big[\mathbbm{1}_{\{A_{k}=a_{k}\}}\mathbb{E}\big[F(\mathcal{Y}_{k})\big|\mathcal{F}_{k-1},T_{k},A_{k}\big]\Big|\mathcal{F}_{k-1},T_{k}\Big]\Big|\mathcal{F}_{k-1}\Big].\] Using the conditional distribution of \(\mathcal{Y}_{k}\), we have \[\mathbb{E}\left[F(\mathcal{Y}_{k})\big|\mathcal{F}_{k-1},T_{k},A_{k}\right]=\int_{\mathcal{X}^{N_{k}}}F(\mathbf{y})K_{N_{k}}\left(X_{T_{k}^{-}}^{U_{k}},\nu_{T_{k}^{-}},T_{k},\mathrm{d}\mathbf{y}\right).\] Then we remark that, for \(a_{k}=(u_{k},n_{k})\), \[\mathbb{E}\left[\mathbbm{1}_{\{A_{k}=a_{k}\}}\big|\mathcal{F}_{k-1},T_{k}\right]=\frac{B_{n_{k}}\left(X_{T_{k}^{-}}^{u_{k}},\nu_{T_{k}^{-}},T_{k}\right)}{\tau\left(\nu_{T_{k}^{-}},T_{k}\right)},\] where \(\tau\left(\nu,s\right):=\int_{\mathcal{X}}B(x,\nu,s)\nu(\mathrm{d}x)\) is the total branching rate. Finally, using the fact that the time between two jumps follows an inhomogeneous exponential law of instantaneous rate \(\tau(\cdot)\), we have \[C\left(\mathcal{V}_{i},0\leq i\leq k-1\right)=\int_{\mathbb{R}_{+}}I(t)\exp\bigg(-\int_{T_{k-1}}^{t+T_{k-1}}\tau\left(\nu_{s},s\right)\mathrm{d}s\bigg)\\ \times B_{n_{k}}\left(X_{t+T_{k-1}}^{u_{k}},\nu_{t+T_{k-1}},t+T_{k-1}\right)\int_{\mathcal{X}^{n_{k}}}F(\mathbf{y})K_{n_{k}}\left(X_{t+T_{k-1}}^{u_{k}},\nu_{t+T_{k-1}},t+T_{k-1},\mathrm{d}\mathbf{y}\right)\mathrm{d}t.\] **Computations for the spinal construction.** We follow the computations for the branching process, and distinguish according to whether the branching individual is the spinal one or not. Conditioning on the next jump time, \(\widehat{C}\), defined in (5.10), is such that \[\widehat{C}\left(e^{\prime},\left(\widehat{\mathcal{V}}_{i},0\leq i\leq k-1\right)\right)=\mathbbm{1}_{\{E_{k-1}=e^{\prime}\}}\mathbb{E}\left[\exp\left(\int_{\widehat{T}_{k-1}}^{\widehat{T}_{k}}\lambda\left(Y_{s},\chi_{s},s\right)\mathrm{d}s\right)I\left(\widehat{T}_{k}-\widehat{T}_{k-1}\right)\right.\\ \times\left.\psi_{\phi}\left(Y_{\widehat{T}_{k}^{-}},\chi_{\widehat{T}_{k}^{-}},\widehat{T}_{k}\right)\mathbb{E}\left[\left.\frac{F\left(\widehat{\mathcal{Y}}_{k}\right)\mathbbm{1}_{\{J_{k}=j\}}\mathbbm{1}_{\{\widehat{A}_{k}=a_{k}\}}}{\psi_{\phi}\left(Y_{\widehat{T}_{k}},\chi_{\widehat{T}_{k}},\widehat{T}_{k}\right)}\right|\widehat{\mathcal{F}}_{k-1},\widehat{T}_{k}\right]\right|\widehat{\mathcal{F}}_{k-1}\right]. \tag{5.12}\] We handle this last expression by distinguishing the events \(\{j=\emptyset\}\) and \(\{j\neq\emptyset\}\), to show that \[\psi_{\phi}\left(Y_{\widehat{T}_{k}^{-}},\chi_{\widehat{T}_{k}^{-}},\widehat{T}_{k}\right)\mathbb{E}\left[\left.\frac{F\left(\widehat{\mathcal{Y}}_{k}\right)\mathbbm{1}_{\{J_{k}=j\}}\mathbbm{1}_{\{\widehat{A}_{k}=a_{k}\}}}{\psi_{\phi}\left(Y_{\widehat{T}_{k}},\chi_{\widehat{T}_{k}},\widehat{T}_{k}\right)}\right|\widehat{\mathcal{F}}_{k-1},\widehat{T}_{k}\right]=\\ \frac{B_{n_{k}}\left(X_{\widehat{T}_{k}^{-}}^{u_{k}},\chi_{\widehat{T}_{k}^{-}},\widehat{T}_{k}\right)}{\widehat{\tau}_{\mathrm{tot}}\left(\chi_{\widehat{T}_{k}^{-}},\widehat{T}_{k}\right)}\int_{\mathcal{X}^{\widehat{N}_{k}}}F\left(\mathbf{y}\right)K_{\widehat{N}_{k}}\left(X_{\widehat{T}_{k}^{-}}^{\widehat{U}_{k}},\chi_{\widehat{T}_{k}^{-}},\widehat{T}_{k},\mathrm{d}\mathbf{y}\right). \tag{5.13}\]
\tag{5.13}\] _Branching outside the spine._ If \(j=\emptyset\), then the branching individual is not the spinal one, and \(e=e^{\prime}j=e^{\prime}\). We follow the same conditioning than for the branching process, and use the fact that on the event \(\{j=\emptyset\}\) the trait of the spinal individual is \(\hat{\mathcal{F}}_{k-1}\)-measurable. Using the expression of the \(\psi_{\phi}\)-biased distribution \(\widehat{K}\) of \(\widehat{\mathcal{Y}}_{k}\), defined in (2.7), we get \[\mathbb{E}\left[\left.\frac{F\left(\widehat{\mathcal{Y}}_{k}\right) \mathbb{1}_{\{J_{k}=\emptyset\}}\mathbb{1}_{\{\widehat{A}_{k}=a_{k}\}}}{\psi_ {\phi}\left(Y_{\widehat{T}_{k}},\chi_{\widehat{T}_{k}},\widehat{T}_{k}\right) }\right|\widehat{\mathcal{F}}_{k-1},\widehat{T}_{k}\right]=\] \[\mathbb{E}\left[\mathbbm{1}_{\{\widehat{A}_{k}=a_{k}\}}\ \mathbb{1}_{\{J_{k}= \emptyset\}}\int_{\mathcal{X}^{\widehat{N}_{k}}}\frac{F\left(\mathbf{y} \right)}{\widehat{\Gamma}_{n_{k}}\left(Y_{\widehat{T}_{k}^{-}},X_{\widehat{T}_ {k}^{-}}^{u_{k}},\chi_{\widehat{T}_{k}^{-}},\widehat{T}_{k}\right)}K_{n_{k}} \left(X_{\widehat{T}_{k}^{-}}^{\widehat{U}_{k}},\chi_{\widehat{T}_{k}^{-}}, \widehat{T}_{k},\mathrm{d}\mathbf{y}\right)\right|\widehat{\mathcal{F}}_{k-1},\widehat{T}_{k}\right].\] We then recall the distribution of \(\widehat{A}_{k}\) outside the spine, established in Dynamics 2.2 : \[\mathbb{E}\left[\mathbb{1}_{\{\widehat{A}_{k}=a_{k}\}}\mathbb{1}_{\{J_{k}= \emptyset\}}|\widehat{\mathcal{F}}_{k-1},\widehat{T}_{k}\right]=\frac{\widehat {\Gamma}_{n_{k}}\left(Y_{\widehat{T}_{k}^{-}},X_{\widehat{T}_{k}^{-}}^{u_{k}},\chi_{\widehat{T}_{k}^{-}},\widehat{T}_{k}\right)}{\psi_{\phi}\left(Y_{ \widehat{T}_{k}^{-}},\chi_{\widehat{T}_{k}^{-}},\widehat{T}_{k}\right)}\frac {B_{n_{k}}\left(X_{T_{k}^{-}}^{u_{k}},\chi_{\widehat{T}_{k}^{-}},\widehat{T}_ {k}\right)}{\widehat{\tau}_{\mathrm{tot}}\left(\chi_{\widehat{T}_{k}^{-}}, \widehat{T}_{k}\right)}.\] This gives (5.13) on the event \(\{j=\emptyset\}\). _Spine branching._ We follow the same computations when \(j\neq\emptyset\), that corresponds to the case when the branching individual is the spinal one, i.e \(u_{k}=e^{\prime}\). In this case, the distribution of \(Y_{\widehat{T}_{k}}\) now depends on the traits \(\widehat{\mathcal{Y}}_{k}\). Thus conditioning on \(\widehat{\mathcal{Y}}_{k}\) and using the distribution of the next spinal individual, defined in (2.9), we have \[\mathbb{1}_{\{j\neq\emptyset\}}\mathbb{E}\left[\left.\frac{F\left( \widehat{\mathcal{Y}}_{k}\right)\mathbb{1}_{\{J_{k}=j\}}\mathbb{1}_{\{ \widehat{A}_{k}=a_{k}\}}}{\psi_{\phi}\left(Y_{\widehat{T}_{k}},\chi_{\widehat {T}_{k}},\widehat{T}_{k}\right)}\right|\widehat{\mathcal{F}}_{k-1},\widehat{T }_{k}\right]=\] \[\mathbb{E}\left[\mathbb{1}_{\{\widehat{A}_{k}=(e^{\prime},n_{k}) \}}\ \mathbb{E}\left[\left.\frac{F\left(\widehat{\mathcal{Y}}_{k}\right)}{\sum_{i=1 }^{\widehat{N}_{k}}\psi_{\phi}\left(\widehat{\mathcal{Y}}_{k}^{i},\chi_{ \widehat{T}_{k}^{-}}-\delta_{Y_{\widehat{T}_{k}^{-}}}+\sum_{l=1}^{\widehat{N} _{k}}\delta_{\widehat{\mathcal{Y}}_{k}^{l}},\widehat{T}_{k}\right)}\right| \widehat{\mathcal{F}}_{k-1},\widehat{T}_{k}\right].\] We then use the distribution \(\widehat{K}^{*}\) of \(\widehat{\mathcal{Y}}_{k}\), defined in (2.8) when the branching individual is the spinal one, conditioning on \(\widehat{A}_{k}\), defined in Dynamics 2.3. 
\[\mathbb{1}_{\{\widehat{A}_{k}=(e^{\prime},n_{k})\}}\mathbb{E} \left[\left.\frac{F\left(\widehat{\mathcal{Y}}_{k}\right)}{\sum_{i=1}^{\widehat {N}_{k}}\psi_{\phi}\left(\widehat{\mathcal{Y}}_{k}^{i},\chi_{\widehat{T}_{k}^{- }}-\delta_{Y_{\widehat{T}_{k}^{-}}}+\sum_{l=1}^{\widehat{N}_{k}}\delta_{ \widehat{\mathcal{Y}}_{k}^{l}},\widehat{T}_{k}\right)}\right|\widehat{ \mathcal{F}}_{k-1},\widehat{T}_{k},\widehat{A}_{k}\right]=\] \[\mathbb{1}_{\{\widehat{A}_{k}=(e^{\prime},n_{k})\}}\int_{ \mathcal{X}^{n_{k}}}\frac{F\left(\mathbf{y}\right)}{\widehat{\Gamma}_{n_{k}} ^{*}\left(Y_{\widehat{T}_{k}^{-}},\chi_{\widehat{T}_{k}^{-}},\widehat{T}_{k} \right)}K_{n_{k}}\left(Y_{\widehat{T}_{k}^{-}},\chi_{\widehat{T}_{k}^{-}}, \widehat{T}_{k},\mathrm{d}\mathbf{y}\right).\] Finally we use the distribution of \(\widehat{A}_{k}\) when the branching individual is the spinal one, defined in Dynamics 2.3. \[\mathbb{E}\left[\mathbb{1}_{\{\widehat{A}_{k}=(e^{\prime},n_{k})\}}\big{|} \widehat{\mathcal{F}}_{k-1},\widehat{T}_{k}\right]=\frac{\widehat{\Gamma}_{n_{k}} ^{*}\left(Y_{\widehat{T}_{k}^{-}},\chi_{\widehat{T}_{k}^{-}},\widehat{T}_{k} \right)}{\psi_{\phi}\left(Y_{\widehat{T}_{k}^{-}},\chi_{\widehat{T}_{k}^{-}}, \widehat{T}_{k}\right)}\frac{B_{n_{k}}\left(Y_{T_{k}^{-}},\chi_{\widehat{T}_{k}^{- }},\widehat{T}_{k}\right)}{\widehat{\tau}_{\mathrm{tot}}\left(\chi_{\widehat{T}_ {k}^{-}},\widehat{T}_{k}\right)}.\] This gives (5.13) on the event \(\{j\neq\emptyset\}\). Now, combining (5.12) and (5.13), we get \[\widehat{C}\left(e^{\prime},\left(\widehat{\mathcal{V}}_{i},0\leq i \leq k-1\right)\right)=\mathbbm{1}_{\{E_{k-1}=e^{\prime}\}}\mathbb{E}\Bigg{[} \exp\left(\int_{\widehat{T}_{k-1}}^{\widehat{T}_{k}}\lambda\left(Y_{s},\chi_{s },s\right)\mathrm{d}s\right)I\left(\widehat{T}_{k}-\widehat{T}_{k-1}\right)\\ \times\frac{B_{n_{k}}\left(X_{T_{k}^{-}}^{u_{k}},\chi_{\widehat{T }_{k}^{-}},\widehat{T}_{k}\right)}{\widehat{\tau}_{\mathrm{tot}}\left(\chi_{ \widehat{T}_{k}^{-}},\widehat{T}_{k}\right)}\int_{\mathcal{X}^{n_{k}}}F\left( \mathbf{y}\right)K_{n_{k}}\left(X_{\widehat{T}_{k}^{-}}^{u_{k}},\chi_{\widehat {T}_{k}^{-}},\widehat{T}_{k},\mathrm{d}\mathbf{y}\right)\Bigg{|}\widehat{ \mathcal{F}}_{k-1}\Bigg{]}.\] Using that the time between two jump follows an inhomogeneous exponential law of instantaneous rate \(\widehat{\tau}_{\mathrm{tot}}\), we have \[\widehat{C}\left(e^{\prime},\left(\widehat{\mathcal{V}}_{i},0\leq i \leq k-1\right)\right)=\mathbbm{1}_{\{E_{k-1}=e^{\prime}\}}\int_{\mathbb{R}_{+ }}I(t)\exp\left(\int_{\widehat{T}_{k-1}}^{\widehat{T}_{k}}\lambda\left(Y_{s}, \chi_{s},s\right)-\widehat{\tau}_{\mathrm{tot}}\left(Y_{s},\chi_{s},s\right) \mathrm{d}s\right)\\ \times B_{n_{k}}\left(X_{t+\widehat{T}_{k-1}}^{u_{k}},\nu_{t+ \widehat{T}_{k-1}},t+\widehat{T}_{k-1}\right)\int_{\mathcal{X}^{n_{k}}}F( \mathbf{y})K_{n_{k}}\left(X_{t+\widehat{T}_{k-1}}^{u_{k}},\nu_{t+\widehat{T}_{ k-1}},t+\widehat{T}_{k-1},\mathrm{d}\mathbf{y}\right)\mathrm{d}t. \tag{5.14}\] Finally, using in (5.14) the fact that \(\lambda\), defined in (5.3), is the difference of branching rates between the branching process and the spine process, we get (5.11) and it concludes the proof. Proof of Theorem 2.1.: Let \(\psi_{\phi}\in\mathcal{D}\), \(t\geq 0\), and \(\bar{z}\in\overline{\mathbb{V}}\). Let \(\left(\left(E_{t},\bar{\chi}_{t}\right),t\geq 0\right)\) be the time-inhomogeneous \(\overline{\mathbb{W}}\)-valued branching process of generators \(\left(\widehat{\mathcal{L}}_{\psi_{\phi}}^{t},t\geq 0\right)\) defined in (2.5). 
Let \(\widehat{T}_{\mathrm{Exp}}\) denote its explosion time and \(\left(\left(Y_{t},\chi_{t}\right),t\geq 0\right)\) its projection on \(\mathbb{W}\). For every \(t<T_{\mathrm{Exp}}\), there exists \(k\in\mathbb{N}\) such that \(\left\{T_{k}\leq t<T_{k+1}\right\}\) where \(T_{k+1}=+\infty\) if there is no more jumps after \(T_{k}\). Thus we can write \[\mathbb{E}_{\bar{z}}\left[\mathbbm{1}_{\left\{T_{\mathrm{Exp}}>t,\mathbb{G}(t )\neq\emptyset\right\}}H\left(U_{t},\left(\bar{\nu}_{s},s\leq t\right)\right) \right]=\sum_{k\geq 0}\mathbb{E}_{\bar{z}}\left[\mathbbm{1}_{\left\{T_{k}\leq t<T_{k+1}, \mathbb{G}(t)\neq\emptyset\right\}}H\left(U_{t},\left(\bar{\nu}_{s},s\leq t \right)\right)\right].\] For every \(t\geq 0\), \(k\in\mathbb{N}\), every non-extinguished sequence \(a=\left(a_{i},1\leq i\leq k\right)\in\mathfrak{U}_{k}(\bar{z})\) with \(a_{i}=\left(u_{i},n_{i}\right)\) and every \(e\in\mathbb{G}_{k}\left(a\right)\), there exists a measurable non-negative function \(F_{k,a}^{t,e}\) on \(\bigotimes\limits_{i=1}^{k}(\mathbb{R}_{+}\times\mathcal{U}\times\mathbb{N} \times\mathcal{X}^{n_{i}})\), such that \[\mathbb{E}_{\bar{z}}\left[\mathbbm{1}_{\left\{T_{\mathrm{Exp}}>t, \mathbb{G}(t)\neq\emptyset\right\}}H\left(U_{t},\left(\bar{\nu}_{s},s\leq t \right)\right)\right]=\\ \sum_{k\geq 0}\sum_{a\in\mathfrak{U}_{k}(\bar{z})}\sum_{e\in \mathbb{G}_{k}(a)}\mathbb{E}_{\bar{z}}\left[\mathbb{E}\left[\mathbbm{1}_{\left\{ T_{k+1}>t\right\}}|\mathcal{F}_{k}\right]\mathbbm{1}_{\left\{T_{k}\leq t \right\}}F_{k,a}^{t,e}\left(\mathcal{V}_{i},0\leq i\leq k\right)\prod_{i=0}^{k }\mathbbm{1}_{\left\{A_{i}=a_{i}\right\}}\right].\] We apply Lemma 5.1 with the non-negative measurable function \(F\) defined for all sequence \(\left(v_{i},0\leq i\leq k\right)\in\bigotimes\limits_{i=1}^{k}(\mathbb{R}_{+} \times\mathcal{U}\times\mathbb{N}\times\mathcal{X}^{n_{i}})\), where for all \(0\leq i\leq k\), \(v_{i}:=\left(t_{i},a_{i},y_{i}\right)\), by \[F\left(v_{i},0\leq i\leq k\right):=\mathbb{P}\bigg{(}S_{k+1}>t-t_{k}\bigg{|} \bigcap_{0\leq i\leq k}\left\{\mathcal{V}_{i}=v_{i}\right\}\bigg{)}\mathbbm{1}_ {t_{k}\leq t}F_{k,a}^{t,e}\left(v_{i},0\leq i\leq k\right),\] where \(S_{k+1}\) is the random variable giving the \((k+1)\)-th inter-arrival time of jumps in the original process. Note that \(S_{k+1}\) follows the same probability law as \(T_{k+1}-T_{k}\). We thus get, for all \(e\in\mathbb{G}_{k}(a)\) \[\mathbb{E}_{\bar{z}}\left[\mathbbm{1}_{\{\left\{\mathbb{G}(t)\neq \emptyset\right\}\cap\{T_{\mathrm{Exp}}>t\}}}H\left(U_{t},\left(\bar{\nu}_{s},s \leq t\right)\right)\right]=\langle z,\psi_{\phi}\left(\cdot,z,0\right)\rangle \sum_{k\geq 0}\sum_{a\in\mathbb{U}_{k}(\bar{z})}\sum_{e\in\mathbb{G}_{k}(a)} \mathbb{E}_{\bar{z}}\Big{[}\mathbbm{1}_{\{E_{k}=e\}}\\ \times\xi_{k}\left(E_{k},\left(\widehat{\mathcal{V}}_{i},0\leq i \leq k\right)\right)F\left(\widehat{\mathcal{V}}_{i},0\leq i\leq k\right) \prod_{i=0}^{k}\mathbbm{1}_{\{\widehat{A}_{i}=a_{i}\}}\Big{]}. 
\tag{5.15}\] Then, using that \(\lambda\) introduced in (5.3) verifies \(\lambda:=\widehat{\tau}_{\mathrm{tot}}-\tau\), \[\mathbb{P}\bigg{(}S_{k+1}>t-\widehat{T}_{k}\bigg{|}\bigcap_{0 \leq i\leq k}\left\{\widehat{\mathcal{V}}_{i}=v_{i}\right\}\bigg{)}\mathbbm{1 }_{\{\widehat{T}_{k}\leq t\}} =e^{\int_{T_{k}}^{t}\lambda(Y_{s},\chi_{s},s)\mathrm{d}s}e^{- \int_{T_{k}}^{t}\widehat{\tau}_{\mathrm{tot}}\left(Y_{s},\chi_{s},s\right) \mathrm{d}s}\mathbbm{1}_{\{\widehat{T}_{k}\leq t\}}\] \[=e^{\int_{T_{k}}^{t}\lambda(Y_{s},\chi_{s},s)\mathrm{d}s} \mathbb{E}\left[\mathbbm{1}_{\{\widehat{T}_{k+1}>t\}}|\widehat{\mathcal{F}}_{ k}\right]\mathbbm{1}_{\{\widehat{T}_{k}\leq t\}}.\] We recall that on the event \(\{\widehat{T}_{k+1}>t\}\), there is no jump in the interval \((\widehat{T}_{k},t]\), thus, applying Lemma 5.2 on this interval we get \[\xi_{k}\left(E_{k},\left(\widehat{\mathcal{V}}_{i},0\leq i\leq k \right)\right)\mathbb{P}\bigg{(}S_{k}>t-\widehat{T}_{k}\bigg{|}\bigcap_{0\leq i \leq k}\left\{\widehat{\mathcal{V}}_{i}=v_{i}\right\}\bigg{)}\mathbbm{1}_{\{ \widehat{T}_{k}\leq t\}}=\\ \frac{1}{\psi_{\phi}\left(Y_{t},\chi_{t},t\right)}\exp\left(\int_ {0}^{t}\frac{\mathcal{G}\psi_{\phi}\left(Y_{s},\chi_{s},s\right)}{\psi_{\phi }\left(Y_{s},\chi_{s},s\right)}\mathrm{d}s\right)\mathbb{E}\left[\mathbbm{1}_ {\{\widehat{T}_{k}\leq t<\widehat{T}_{k+1}\}}|\widehat{\mathcal{F}}_{k}\right]. \tag{5.16}\] On the event \(\{\widehat{T}_{k}\leq t<\widehat{T}_{k+1}\}\), \(E_{k}=E(t)\) almost surely, then using (5.15) and (5.16) gives \[\mathbb{E}_{\bar{z}}\left[\mathbbm{1}_{\{T_{\mathrm{Exp}}>t, \mathbb{G}(t)\neq\emptyset\}}H\left(U_{t},\left(\bar{\nu}_{s},s\leq t\right) \right)\right]=\langle z,\psi_{\phi}\left(\cdot,z\right)\rangle\sum_{k\geq 0}\sum_{a\in \mathbb{U}_{k}(\bar{z})}\sum_{e\in\mathbb{G}_{k}(a)}\mathbb{E}_{\bar{z}}\Big{[} \mathbbm{1}_{\{E(t)=e\}}\\ \times\frac{\mathbbm{1}_{\{\widehat{T}_{k}\leq t<\widehat{T}_{k+1 }\}}}{\psi_{\phi}\left(Y_{t},\chi_{t},t\right)}\exp\left(\int_{0}^{t}\frac{ \mathcal{G}\psi_{\phi}\left(Y_{s},\chi_{s},s\right)}{\psi_{\phi}\left(Y_{s}, \chi_{s},s\right)}\mathrm{d}s\right)F_{k,a}^{t,e}\left(\widehat{\mathcal{V}}_{ i},0\leq i\leq k\right)\prod_{i=0}^{k}\mathbbm{1}_{\{\widehat{A}_{i}=a_{i}\}} \Big{]}.\] Reconstructing the right-hand side and using the fact that the spine process does not extinct, we have \[\mathbb{E}_{\bar{z}}\left[\mathbbm{1}_{\{T_{\mathrm{Exp}}>t, \mathbb{G}(t)\neq\emptyset\}}H\left(U_{t},\left(\bar{\nu}_{s},s\leq t\right) \right)\right]=\\ \langle z,\psi_{\phi}(\cdot,z,0)\rangle\mathbb{E}_{\bar{z}}\left[ \mathbbm{1}_{\{\widehat{T}_{\mathrm{Exp}}>t\}}\frac{p_{E_{t}}\left(\bar{\chi}_ {t}\right)}{\psi_{\phi}\left(Y_{t},\chi_{t},t\right)}\exp\left(\int_{0}^{t} \frac{\mathcal{G}\psi_{\phi}\left(Y_{s},\chi_{s},s\right)}{\psi_{\phi}\left(Y_ {s},\chi_{s},s\right)}\mathrm{d}s\right)H\left(E_{t},\left(\bar{\chi}_{s},s \leq t\right)\right)\right],\] that concludes the proof. ### Proofs of Section 3 In this section we derive the results on the limiting martingale and a \(L\log L\) criterion. Proof of Proposition 3.1.: Let \(\psi_{\phi}\in\mathcal{D}\) ensuring Assumption B, we first show for all \(t\geq 0\), the integrability of the random variable \(W_{t}(\psi_{\phi})\) introduced in (3.1). 
As Assumption A is satisfied, we can apply Corollary 2.5 for any \(t\geq 0\) to the positive function \(f\) on \(\mathbb{D}([0,t],\mathcal{X}\times\overline{\mathcal{V}})\), such that for all \(u\in\mathbb{G}(t)\), \[f\left((X_{s}^{u},\bar{\nu}_{s}),0\leq s\leq t\right):=\exp\left(-\int_{0}^{t} \frac{\mathcal{G}\psi_{\phi}\left(X_{s}^{u},\nu_{s},s\right)}{\psi_{\phi}\left(X _{s}^{u},\nu_{s},s\right)}\mathrm{d}s\right).\] Theorem 2.1 applied to the function \(f\), ensures that for any initial condition \(\bar{z}\), \[\mathbb{E}_{\bar{z}}\left[\sum_{u\in\mathbb{G}(t)}\exp\left(-\int_{0}^{t}\frac{ \mathcal{G}\psi_{\phi}\left(X_{s}^{u},\nu_{s},s\right)}{\psi_{\phi}\left(X_{s}^{u },\nu_{s},s\right)}\mathrm{d}s\right)\psi_{\phi}\left(X_{t}^{u},\nu_{t},t\right) \right]=\langle z,\psi_{\phi}(\cdot,z,0)\rangle. \tag{5.17}\] This last identity guarantees the integrability of \(W_{t}(\psi_{\phi})\). Now, for all \(r,t\in\mathbb{R}_{+}^{*}\), such that \(t\geq r\), we decompose the population at time \(t\) according to their ancestors at time \(r\). We then have \[\mathbb{E}_{\bar{z}}\left[W_{t}(\psi_{\phi})\big{|}\mathcal{F}_{r}\right] =\sum_{v\in\mathbb{G}(r)}\exp\left(-\int_{0}^{r}\frac{\mathcal{G }\psi_{\phi}\left(X_{s}^{u},\nu_{s},s\right)}{\psi_{\phi}\left(X_{s}^{u},\nu_{s },s\right)}\mathrm{d}s\right)\] \[\quad\times\mathbb{E}_{\bar{z}}\left[\sum_{u\in\mathbb{G}(t),\ s.t\ v\preceq u}\exp\left(-\int_{r}^{t}\frac{\mathcal{G}\psi_{\phi}\left(X_{s}^ {u},\nu_{s},s\right)}{\psi_{\phi}\left(X_{s}^{u},\nu_{s},s\right)}\mathrm{d}s \right)\psi_{\phi}\left(X_{t}^{u},\nu_{t},t\right)\left|\mathcal{F}_{r}\right].\] Now, in order to establish that the conditional expectation is equal to \(\psi_{\phi}\left(X_{r}^{v},\nu_{r}\right)\) for all \(v\in\mathbb{G}(r)\), we apply Corollary 2.5 to the positive functions \(f_{v}\) on \(\mathcal{U}\times\mathbb{D}([0,t],\overline{\mathbb{V}})\), such that \[f_{v}\left(u,(\bar{\nu}_{s},0\leq s\leq t)\right):=\mathbbm{1}_{v\preceq u} \exp\left(-\int_{r}^{t}\frac{\mathcal{G}\psi_{\phi}\left(X_{s}^{u},\nu_{s},s \right)}{\psi_{\phi}\left(X_{s}^{u},\nu_{s},s\right)}\mathrm{d}s\right)\] and use the Markov property. We get that for all \(v\in\mathbb{G}(r)\) \[\mathbb{E}\left[\sum_{\begin{subarray}{c}u\in\mathbb{G}(t),\\ s.t\ v\preceq u\end{subarray}}\exp\left(-\int_{r}^{t}\frac{\mathcal{G}\psi_{ \phi}\left(X_{s}^{u},\nu_{s},s\right)}{\psi_{\phi}\left(X_{s}^{u},\nu_{s},s \right)}\mathrm{d}s\right)\psi_{\phi}\left(X_{t}^{u},\nu_{t},t\right)\left| \mathcal{F}_{r}\right]=\\ \langle\nu_{r},\psi_{\phi}(\cdot,\nu_{r},r)\rangle\mathbb{E}\left[ \mathbbm{1}_{\{v\preceq E_{t}\}}|v\in\mathbb{G}(r)\right].\] Finally using the new spinal individual distribution (2.9), we get \[\mathbb{E}\left[W_{t}(\psi_{\phi})\big{|}\mathcal{F}_{r}\right]=\sum_{v\in \mathbb{G}(r)}\exp\left(-\int_{0}^{r}\frac{\mathcal{G}\psi_{\phi}\left(X_{s}^ {u},\nu_{s},s\right)}{\psi_{\phi}\left(X_{s}^{u},\nu_{s},s\right)}\mathrm{d}s \right)\psi_{\phi}\left(X_{r}^{v},\nu_{r},r\right).\] That concludes the proof. To establish Theorem 3.2, we need the following lemma from measure theory. **Lemma 5.3**.: _Let \((\Omega,\mathcal{F},\mu)\) be a probability space and let \(\widehat{\mu}\) be a finite non negative measure on \(\Omega\). Let \((\mathcal{F}_{t},0\leq t)\) be increasing \(\sigma\)-fields such that \(\sigma\left(\bigcup\limits_{0\leq t}\mathcal{F}_{t}\right)=\mathcal{F}\), and \(\widehat{\mu}_{t}\), \(\mu_{t}\) be the restrictions of \(\widehat{\mu}\) and \(\mu\) to \(\mathcal{F}_{t}\). 
Suppose that there exists a non-negative \(\mathcal{F}_{t}\)-martingale \((W_{t},0\leq t)\) such that for all \(t\geq 0\)_ \[\frac{d\widehat{\mu}_{t}}{d\mu_{t}}=W_{t}.\] _Then, denoting \(W:=\limsup_{t\to+\infty}W_{t}\), we have the following dichotomy:_ 1. \(\int W\mathrm{d}\mu=\int W_{0}\mathrm{d}\mu\) _if and only if_ \(W<+\infty\)__\(\widehat{\mu}\)_-a.s._ 2. \(W=0\)__\(\mu\)_-a.s. if and only if_ \(W=+\infty\)__\(\widehat{\mu}\)_-a.s._ Proof.: We refer to Athreya [2] for the proof of this result in discrete time. The extension to continuous-time changes of measure uses Kolmogorov's extension theorem; see Durrett, Appendix A [20], for further details. We now state the following lemma, which is the dual statement of Theorem 3.2. **Lemma 5.4**.: _Under Assumption D, let \((\widehat{\mathcal{W}}_{t},t\leq\widehat{T}_{\text{Exp}})\) be the \(\widehat{\mathcal{F}}_{t}\)-adapted process such that, for all \(t\leq\widehat{T}_{\text{Exp}}\)_ \[\widehat{\mathcal{W}}_{t}:=\sum_{u\in\widehat{\mathbb{G}}(t)}e^{-\int_{0}^{t}B(X_{s}^{u},\chi_{s},s)(m(X_{s}^{u},\chi_{s},s)-1)\,ds}, \tag{5.18}\] _where \(\widehat{\mathbb{G}}(t)\) is the set of labels of individuals living in the spinal population at time \(t\). If \(\sum_{k\geq 1}k\log(k)\overline{p}_{k}<+\infty\), then \(\widehat{T}_{\text{Exp}}=\infty\) a.s. and \(\limsup_{t\to+\infty}\widehat{\mathcal{W}}_{t}<+\infty\) a.s. If \(\sum_{k\geq 1}k\log(k)\underline{p}_{k}=+\infty\), then \(\limsup_{t\to+\infty}\widehat{\mathcal{W}}_{t}=+\infty\) a.s._ The proof of this lemma is technical and is given in Appendix C. It involves the decomposition of the spine process as an immigration process along a line, as well as stochastic calculus tools. Proof of Theorem 3.2.: Theorem 2.1 applied with \(\psi_{\phi}\equiv 1\) exhibits the martingale \(\mathcal{W}_{t}\) as a Radon-Nikodym derivative. We can thus apply Lemma 5.3 to the spinal change of measure. The martingale \(\mathcal{W}_{t}\) and its limit \(\mathcal{W}\), introduced in (3.2), verify the following dichotomy: 1. \(\mathbb{E}_{\bar{z}}\left[\mathcal{W}\right]=\mathbb{E}_{\bar{z}}\left[\mathcal{W}_{0}\right]\) if and only if \(\limsup_{t\to+\infty}\widehat{\mathcal{W}}_{t}<+\infty\) a.s. 2. \(\mathcal{W}=0\) a.s. if and only if \(\limsup_{t\to+\infty}\widehat{\mathcal{W}}_{t}=+\infty\) a.s. We use (5.17) with \(\psi_{\phi}\equiv 1\) to obtain that \[\mathbb{E}_{\bar{z}}\left[\mathcal{W}_{0}\right]=\langle\nu_{0},1\rangle,\] and a direct application of Lemma 5.4 concludes the proof. ## Appendix A Proof of Proposition 2.4 The spine process \(\left(\left(E_{t},\bar{\chi}_{t}\right),t\geq 0\right)\) defined by Dynamics 2.2 and 2.3 can be rigorously expressed as the solution of an SDE driven by a multivariate point measure. We introduce \(\widehat{\boldsymbol{\theta}}:=\left(\widehat{\theta}_{i},i\in\mathbb{N}^{*}\right)\) and \(\widehat{\boldsymbol{\theta}}^{*}:=\left(\widehat{\theta}_{i}^{*},i\in\mathbb{N}^{*}\right)\), two vectors of independent uniform random variables on \([0,1]\). The random vectors giving the traits of the offspring of size \(n\) are denoted \(\left(\widehat{F}_{i,n}\left(x,x_{e},\chi,t,\widehat{\theta}_{i}\right),i\leq n\right)\) outside the spine and \(\left(\widehat{F}_{i,n}^{*}\left(x_{e},\chi,t,\widehat{\theta}_{i}^{*}\right),i\leq n\right)\) for the spinal individual of trait \(x_{e}\). Let \(\widehat{E}=\mathcal{U}\times\mathbb{R}_{+}\times\mathbb{N}\times[0,1]^{\mathbb{N}}\) and \(\widehat{E}^{*}=\mathbb{R}_{+}\times\mathbb{N}\times[0,1]^{\mathbb{N}}\).
Let \(\widehat{Q}\left(\mathrm{d}s,\mathrm{d}u,\mathrm{d}r,\mathrm{d}k,\mathrm{d} \boldsymbol{\theta}\right)\) and \(\widehat{Q}^{*}\left(\mathrm{d}s,\mathrm{d}r,\mathrm{d}k,\mathrm{d}\boldsymbol {\theta},\mathrm{d}j\right)\) be two independent Poisson point measures on \(\mathbb{R}_{+}\times\widehat{E}\) and \(\mathbb{R}_{+}\times\widehat{E}^{*}\) with respective intensity \(\mathrm{d}s(\sum_{v\in\mathcal{U}}\delta_{v}(\mathrm{d}u))\mathrm{d}r(\sum_{ n\in\mathbb{N}}\delta_{n}(\mathrm{d}k))\mathrm{d}\widehat{\boldsymbol{\theta}}\) outside the spine and \(\mathrm{d}s\mathrm{d}r(\sum_{n\in\mathbb{N}}\delta_{n}(\mathrm{d}k))\mathrm{d} \widehat{\boldsymbol{\theta}}^{*}(\sum_{n\in\mathbb{N}}\delta_{n}(\mathrm{d}j))\) for the spine. We denote by \(\left(\widehat{\mathcal{F}}_{t},t\geq 0\right)\) the canonical filtration associated with these Poisson point measures. The \(\widehat{\mathcal{F}}_{t}\)-adapted set of labels of the living individuals in the population outside the spine is denoted \(\widehat{\mathbb{G}}(t)\). Let \((e,\bar{z})\in\overline{\mathbb{W}}\). Under Assumptions A and C, the process \(\left(\left(E_{t},\bar{\chi}_{t}\right),t\geq 0\right)\) is the unique \(\widehat{\mathcal{F}}_{t}\)-adapted solution, for every function \(g\in\mathcal{C}^{1}\left(\mathcal{U}\times\mathcal{U}\times\mathcal{X}, \mathbb{R}\right)\) and \(t\geq 0\), of the following equation \[\langle\bar{\chi}_{t},g\left(E_{t},\cdot\right)\rangle: =\langle\bar{z},g(e,\cdot)\rangle+\int_{0}^{t}\int_{\mathcal{U} \times\mathcal{X}}\frac{\partial g}{\partial x}(E_{s},u,x)\cdot\mu\left(x, \chi_{s},s\right)\bar{\chi}_{s}(\mathrm{d}u,\mathrm{d}x)\mathrm{d}s\] \[+\int_{[0,t]\times\widehat{E}}\left[\sum_{i=1}^{k}g\left(E_{s} \cdot,ui,\widehat{F}_{i,k}\left(X_{s\cdot}^{u},Y_{s\cdot},\chi_{s\cdot},s, \widehat{\theta}_{i}\right)\right)-g\left(E_{s\cdot},u,X_{s\cdot}^{u}\right)\right]\] \[\qquad\qquad\times\mathbb{1}_{\left\{u\in\widehat{\Theta}_{s}^{*} \right\}}\mathbb{1}_{\left\{r\leq\widehat{B}_{k}\left(X_{s\cdot}^{u},Y_{s \cdot}\cdot\chi_{s\cdot},s\right)\right\}}\widehat{Q}\left(\mathrm{d}s,\mathrm{ d}u,\mathrm{d}r,\mathrm{d}k,\mathrm{d}\widehat{\boldsymbol{\theta}}\right)\] \[+\int_{[0,t]\times\widehat{E}^{*}}\left[\sum_{i=1}^{k}g\left(E_{s \cdot}j,E_{s\cdot}j,\widehat{F}_{i,k}^{*}\left(Y_{s\cdot},\chi_{s\cdot},s, \widehat{\theta}_{i}^{*}\right)\right)-g\left(E_{s\cdot},E_{s\cdot},Y_{s\cdot }\right)\right]\] \[\qquad\qquad\times\mathbb{1}_{\left\{\ r\leq\widehat{B}_{k}^{*} \left(Y_{s\cdot}\cdot\chi_{s\cdot},s\right)\right\}}\widehat{Q}^{*}\left( \mathrm{d}s,\mathrm{d}r,\mathrm{d}k,\mathrm{d}\widehat{\boldsymbol{\theta}}^{*}, \mathrm{d}j\right).\] (A.1) This assertion is shown following the same computations than for proof of Proposition 1.1 and [45], using the positivity of the \(\psi_{\phi}\) function. Assumption C leads to the same bound as Assumption A.3 in the case of a spine process. Now we establish the expression of the generator of the marginal spine process. We recall the notations \(\nu_{+}(x,\mathbf{y}):=\nu+\sum_{i=1}^{n}\delta_{y^{i}}-\delta_{x}\), \(\rho:=(x,\nu,t)\) and \(\rho_{e}:=(x_{e},\nu,t)\) introduced in (2.6). Taking expectations of (A.1) for the marginal process on \(\mathbb{W}\), we derive the non-homogeneous infinitesimal operator of \(((Y_{t},\chi_{t}),t\geq 0)\), following steps in [24]. 
It is given by the operator \(\widehat{L}_{\psi_{\phi}}\), defined for every \(F_{f}\in\mathcal{D}\), introduced in (2.1), and \((x_{e},\nu,t)\in\mathbb{W}\times\mathbb{R}_{+}\), by \[\widehat{L}_{\psi_{\phi}}F_{f}(x_{e},\nu,t): =GF_{f}\left(x_{e},\nu,t\right)\] \[+\int_{\mathcal{X}}\widehat{B}_{n}(\rho)\int_{\mathcal{X}^{n}} \Bigg{[}F_{f}\left(x_{e},\nu_{+}(x,\mathbf{y}),t\right)-F_{f}(x_{e},\nu,t) \Bigg{]}\widehat{R}_{n}\left(\rho,\mathrm{d}\mathbf{y}\right)\nu(\mathrm{d}x)\] \[-\widehat{B}_{n}(\rho_{e})\int_{\mathcal{X}^{n}}\Bigg{[}F_{f} \left(x_{e},\nu_{+}(x_{e},\mathbf{y})\right)-F_{f}(x_{e},\nu)\Bigg{]}\widehat{ R}_{n}\left(\rho_{e},\mathrm{d}\mathbf{y}\right)\Bigg{\}}.\] The first line gives the dynamical evolution between branching events. The second is related to spinal branching events and the choice of a new spinal individual among the offspring population. The last two lines describe the branching events outside the spine, for all individuals but the spinal one. Using that \[\frac{G\left[\psi_{\phi}F_{f}\right]}{\psi_{\phi}}\left(\cdot\right)=\frac{G \psi_{\phi}}{\psi_{\phi}}F_{f}\left(\cdot\right)+GF_{f}\left(\cdot\right),\] we get \[\frac{\mathcal{G}\left[\psi_{\phi}F_{f}\right]}{\psi_{\phi}}\left(\cdot\right) -\frac{\mathcal{G}\psi_{\phi}}{\psi_{\phi}}F_{f}\left(\cdot\right)=GF_{f} \left(\cdot\right)+\sum_{n\geq 0}\frac{B\left(\cdot\right)}{\psi_{\phi} \left(\cdot\right)}\widehat{\mathcal{T}}_{n}\left(\cdot\right),\] where the jump part \(\widehat{\mathcal{T}}_{n}\) is defined for all \((x_{e},\nu,t)\in\mathbb{W}\times\mathbb{R}_{+}\) by \[\widehat{\mathcal{T}}_{n}(\rho_{e}) :=p_{n}(\rho_{e})\int_{\mathcal{X}^{n}}\sum_{i=1}^{n}\psi_{\phi}F _{f}\left(y_{i},\nu_{+}(x_{e},\mathbf{y}),t\right)-\psi_{\phi}F_{f}(x_{e},\nu_ {+}(x_{e},\mathbf{y}),t)K_{n}\left(\rho_{e},\mathrm{d}\mathbf{y}\right)\] \[+\int_{\mathcal{X}}p_{n}(\rho)\int_{\mathcal{X}^{n}}[\psi_{\phi}F _{f}\left(x_{e},\nu_{+}(x,\mathbf{y}),t\right)-\psi_{\phi}F_{f}(x_{e},\nu,t)]K _{n}\left(\rho,\mathrm{d}\mathbf{y}\right)\nu(\mathrm{d}x)\] \[-p_{n}(\rho_{e})\int_{\mathcal{X}^{n}}\left[\sum_{i=1}^{n}\psi_{ \phi}\left(y_{i},\nu_{+}(x_{e},\mathbf{y}),t\right)-\psi_{\phi}(x_{e},\nu_{+}(x _{e},\mathbf{y}),t)\right]F_{f}(x_{e},\nu,t)K_{n}\left(\rho_{e},\mathrm{d} \mathbf{y}\right)\] \[-\int_{\mathcal{X}}p_{n}(\rho)\int_{\mathcal{X}^{n}}[\psi_{\phi} \left(x_{e},\nu_{+}(x,\mathbf{y}),t\right)-\psi_{\phi}(x_{e},\nu,t)]F_{f}(x_{e },\nu,t)K_{n}\left(\rho,\mathrm{d}\mathbf{y}\right)\nu(\mathrm{d}x).\] (A.2) Rearranging the terms in (A.2) we get \[\widehat{\mathcal{T}}_{n}(\rho_{e}) :=p_{n}(\rho_{e})\int_{\mathcal{X}^{n}}\sum_{i=1}^{n}\psi_{\phi} \left(y_{i},\nu_{+}(x_{e},\mathbf{y})\right)\left[F_{f}\left(y_{i},\nu_{+}(x_{e },\mathbf{y})\right)-F_{f}(x_{e},\nu)\right]K_{n}\left(\rho_{e},\mathrm{d} \mathbf{y}\right)\] \[+\int_{\mathcal{X}}p_{n}(\rho)\int_{\mathcal{X}^{n}}[F_{f}\left( x_{e},\nu_{+}(x,\mathbf{y})\right)-F_{f}(x_{e},\nu)]\psi_{\phi}\left(x_{e},\nu_{+}(x,\mathbf{y})\right)K_{n}\left(\rho,\mathrm{d}\mathbf{y}\right)\nu(\mathrm{d}x)\] \[-p_{n}(\rho_{e})\int_{\mathcal{X}^{n}}[F_{f}\left(x_{e},\nu_{+}(x _{e},\mathbf{y})\right)-F_{f}(x_{e},\nu)]\psi_{\phi}\left(x_{e},\nu_{+}(x_{e},\mathbf{y})\right)K_{n}\left(\rho_{e},\mathrm{d}\mathbf{y}\right).\] We conclude the proof using the branching rates introduced in Dynamics 2.2 and 2.3. ## Appendix B Proof of Lemma 5.2 The recursive equality for \(\xi_{n}\) derives from the fundamental theorem of calculus. 
Let \(n\geq 1\), \[\frac{\xi_{n}\left(E_{n},\left(\widehat{\mathcal{V}}_{i},0\leq i\leq n \right)\right)}{\xi_{n-1}\left(E_{n-1},\left(\widehat{\mathcal{V}}_{i},0\leq i \leq n-1\right)\right)} = \frac{\psi_{\phi}\left(Y_{\widehat{T}_{n-1}},\chi_{\widehat{T}_ {n-1}}\right)}{\psi_{\phi}\left(Y_{\widehat{T}_{n}},\chi_{\widehat{T}_{n}} \right)}\exp\left(\int_{\widehat{T}_{n-1}}^{\widehat{T}_{n}}\frac{\mathcal{G} \psi_{\phi}\left(Y_{s},\chi_{s}\right)}{\psi_{\phi}\left(Y_{s},\chi_{s}\right) }{\rm d}s\right).\] We remark that \(\mathcal{G}\) and \(G\) introduced in (2.3) and (2.4), verify for all \(\left(x_{e},\nu,t\right)\in\mathbb{W}\times\mathbb{R}_{+}\) \[\frac{\left[\mathcal{G}-G\right]\psi_{\phi}(x_{e},\nu,t)}{\psi_{\phi}(x_{e}, \nu,t)}=\widehat{\tau}_{\rm tot}(x_{e},\nu,t)-\int_{\mathcal{X}}B(x,\nu,t)\nu ({\rm d}x),\] where \(\widehat{\tau}_{\rm tot}\) is defined in (2.10). Then, using derivative chain rule we get that \[G\left(\ln\circ\psi_{\phi}\right)(x_{e},\nu,t)=\frac{G\psi_{\phi}(x_{e},\nu,t )}{\psi_{\phi}(x_{e},\nu,t)}.\] Between successive branching events, the evolution of the process is purely deterministic and we can apply the fundamental theorem of calculus: \[\int_{\widehat{T}_{n-1}}^{\widehat{T}_{n}}G\left(\ln\circ\psi_{\phi}\right)(Y_ {s},\chi_{s},s)\,{\rm d}s=\ln\left(\psi_{\phi}\left(Y_{\widehat{T}_{n}}\cdot, \chi_{\widehat{T}_{n}}\cdot,\widehat{T}_{n}\right)\right)-\ln\left(\psi_{\phi} \left(Y_{\widehat{T}_{n-1}},\chi_{\widehat{T}_{n-1}},\widehat{T}_{n-1}\right) \right).\] We thus have \[\exp\left(\int_{\widehat{T}_{n-1}}^{\widehat{T}_{n}}\frac{\mathcal{G}\psi_{ \phi}\left(Y_{s},\chi_{s},s\right)}{\psi_{\phi}\left(Y_{s},\chi_{s},s\right) }{\rm d}s\right)=\frac{\psi_{\phi}\left(Y_{\widehat{T}_{n}}\cdot,\chi_{ \widehat{T}_{n}}\cdot,\widehat{T}_{n}\right)}{\psi_{\phi}\left(Y_{\widehat{T}_ {n-1}},\chi_{\widehat{T}_{n-1}},\widehat{T}_{n-1}\right)}\exp\left(\int_{ \widehat{T}_{n-1}}^{\widehat{T}_{n}}\lambda\left(Y_{s},\chi_{s},s\right){\rm d }s\right),\] that concludes the proof. ## Appendix C Proof of Lemma 5.4 To establish this lemma, we follow the conceptual decomposition of the spine process, first introduced by Lyons, Pemantle and Peres [44]. In the following, \(\psi_{\phi}\equiv 1\). We consider the process \((\mathring{\chi}_{t},t\geq 0)\) describing all the individuals outside the spine and its associated marginal process \(\mathring{\chi}_{t}\), defined for all \(t\geq 0\) by: \[\mathring{\mathring{\chi}}_{t}:=\bar{\chi}_{t}-\delta_{(E_{t},Y_{t})},\quad \text{and}\quad\mathring{\chi}_{t}:=\chi_{t}-\delta_{Y_{t}}.\] The random set \(\hat{\mathbb{G}}(t)\) of labels of individuals outside the spine living at time \(t\) is defined by \[\mathring{\mathbb{G}}(t)=\left\{u\in\mathcal{U}\backslash\{E_{t}\}:\ \int_{\mathcal{U}\times\mathcal{X}}\mathbbm{1}_{\{v=u\}}\bar{\chi}_{t}({\rm d}v,{\rm d}x)>0\right\}.\] We also introduce for all \(t\geq 0\), \[\mathring{\mathcal{W}}_{t}:=\sum_{u\in\hat{\mathbb{G}}(t)}\exp\left(-\int_{0} ^{t}\Lambda\left(X_{s}^{u},\mathring{\chi}_{s}+\delta_{Y_{s}},s\right){\rm d}s \right),\] where for all \((x,\nu,t)\in\mathbb{W}\times\mathbb{R}_{+}\), \(\Lambda(x,\nu,t):=B(x,\nu,t)(m(x,\nu,t)-1)\). We remark that, under Assumption D, for all \(t\geq 0\) \[\mathring{\mathcal{W}}_{t}\leq\widehat{\mathcal{W}}_{t}\leq\mathring{\mathcal{ W}}_{t}+1,\] (C.1) where \(\widehat{\mathcal{W}}_{t}\) is the process introduced in (5.18). We first establish the non-degenerate case, and suppose that \(\sum_{k\geq 1}k\log(k)\overline{p}_{k}<+\infty\). 
We show that the process \((\mathring{\mathcal{W}}_{t},t\geq 0)\) does not explode and is almost surely bounded at infinity, following the proof of Proposition 5 in [3]. To this end, we express it as the solution of an SDE, where we distinguish the contribution of the spinal individual from that of the other individuals. Let \(\widehat{E}=\mathcal{U}\times\mathbb{R}_{+}\times\mathbb{N}\times[0,1]^{\mathbb{N}}\) and \(\widehat{E}^{*}=\mathbb{R}_{+}\times\mathbb{N}\times[0,1]^{\mathbb{N}}\times\mathbb{N}\). Let \(\widehat{\boldsymbol{\theta}}=\left(\widehat{\theta}_{i},i\in\mathbb{N}^{*}\right)\) and \(\widehat{\boldsymbol{\theta}}^{*}=\left(\widehat{\theta}_{i}^{*},i\in\mathbb{N}^{*}\right)\) be two vectors of independent uniform random variables on \([0,1]\), and let \(\widehat{Q}\left(\mathrm{d}s,\mathrm{d}u,\mathrm{d}r,\mathrm{d}k,\mathrm{d}\boldsymbol{\theta}\right)\) and \(\widehat{Q}^{*}\left(\mathrm{d}s,\mathrm{d}r,\mathrm{d}k,\mathrm{d}\boldsymbol{\theta}^{*},\mathrm{d}j\right)\) be two independent Poisson point measures on \(\mathbb{R}_{+}\times\widehat{E}\) and \(\mathbb{R}_{+}\times\widehat{E}^{*}\), as in Appendix A. The random vectors giving the traits of the offspring are denoted \(\left(\widehat{F}_{i,n}\left(x,\chi,t,\widehat{\theta}_{i}\right),i\leq n\right)\) outside the spine and \(\left(\widehat{F}_{i,n}^{*}\left(x,\chi,t,\widehat{\theta}_{i}^{*}\right),i\leq n\right)\) for the spinal individual. We localize the process \(\left(\hat{\mathcal{W}}_{t},t\geq 0\right)\) to avoid explosion. Let \(T^{m}:=\inf\{t\geq 0:\;\mathrm{Card}(\hat{\mathbb{G}}(t))\geq m\}\), for \(m\geq 1\), be the first time at which the population outside the spine contains at least \(m\) individuals. We remark that the branching events impact the process \(\left(\hat{\mathcal{W}}_{t},t\leq T^{m}\right)\) only through the change in the number of individuals. Furthermore, as \(\psi_{\phi}\equiv 1\), every individual in \(\hat{\mathbb{G}}(t)\) behaves like the individuals in the original branching process. Then, for all \(t\leq T^{m}\) \[\hat{\mathcal{W}}_{t} =\hat{\mathcal{W}}_{0}-\int_{0}^{t}\sum_{u\in\hat{\mathbb{G}}(s)}\Lambda\left(X_{s}^{u},\chi_{s},s\right)e^{-\int_{0}^{s}\Lambda(X_{w}^{u},\chi_{w},w)\mathrm{d}w}\mathrm{d}s\] \[+\int_{0}^{t}\int_{\widehat{E}}\mathbbm{1}_{\{u\in\hat{\mathbb{G}}(s)\}}\mathbbm{1}_{\left\{r\leq B_{k}\left(X_{s^{-}}^{u},\chi_{s^{-}},s\right)\right\}}(k-1)e^{-\int_{0}^{s}\Lambda(X_{w}^{u},\chi_{w},w)\mathrm{d}w}\widehat{Q}\left(\mathrm{d}s,\mathrm{d}u,\mathrm{d}r,\mathrm{d}k,\mathrm{d}\widehat{\boldsymbol{\theta}}\right)\] \[+\int_{0}^{t}\int_{\widehat{E}^{*}}\mathbbm{1}_{\left\{r\leq kB_{k}\left(X_{s^{-}}^{u},\chi_{s^{-}},s\right)\right\}}(k-1)e^{-\int_{0}^{s}\Lambda(Y_{w},\chi_{w},w)\mathrm{d}w}\widehat{Q}^{*}\left(\mathrm{d}s,\mathrm{d}r,\mathrm{d}k,\mathrm{d}\widehat{\boldsymbol{\theta}},\mathrm{d}j\right).\] The first line describes the dynamics of the function between branching events. The second line describes the branching events outside the spine. The last one is associated with the contribution of the spinal individual, which acts here as a source of immigration: the individual chosen to be the new spinal individual is removed from the process.
Introducing \(\widetilde{Q}\left(\mathrm{d}s,\mathrm{d}u,\mathrm{d}r,\mathrm{d}k,\mathrm{d} \widehat{\boldsymbol{\theta}}\right)\) the compensated measure of \(\widetilde{Q}\left(\mathrm{d}s,\mathrm{d}u,\mathrm{d}r,\mathrm{d}k,\mathrm{d} \widehat{\boldsymbol{\theta}}\right)\), we get \[\hat{\mathcal{W}}_{t} =\hat{\mathcal{W}}_{0}+\int_{0}^{t}\int_{\widehat{E}}\mathbbm{1}_{ \left\{u\in\hat{\mathbb{G}}(s)\right\}}\mathbbm{1}_{\left\{r\leq B_{k}\left(X _{s^{-}}^{u},\chi_{s^{-}}s\right)\right\}}(k-1)e^{-\int_{0}^{s}\Lambda(X_{w}^{ u},\chi_{w},w)\mathrm{d}w}\widetilde{Q}\left(\mathrm{d}s,\mathrm{d}u,\mathrm{d}r, \mathrm{d}k,\mathrm{d}\widehat{\boldsymbol{\theta}}\right)\] \[+\int_{0}^{t}\int_{\widehat{E}^{*}}\mathbbm{1}_{\left\{r\leq kB_{ k}\left(X_{s^{-}}^{u},\chi_{s^{-}}s\right)\right\}}(k-1)e^{-\int_{0}^{s}\Lambda(Y_{w}, \chi_{w},w)\mathrm{d}w}\widehat{Q}^{*}\left(\mathrm{d}s,\mathrm{d}r,\mathrm{ d}k,\mathrm{d}\widehat{\boldsymbol{\theta}}\right).\] Remark that this last equality only holds for constant \(\psi_{\phi}\). Then, conditionally on \(\widehat{\mathcal{F}}_{\infty}^{*}\), \(\left(\hat{\mathcal{W}}_{t\wedge T^{m}},t\geq 0\right)\) is a submartingale and we have, for all \(t\leq T^{m}\) \[\mathbb{E}_{\bar{z}}\left[\hat{\mathcal{W}}_{t}\big{|}\widehat{\mathcal{F}}_{t}^ {*}\right]=\mathbb{E}_{\bar{z}}\left[\hat{\mathcal{W}}_{0}\right]+\int_{0}^{t} \int_{\widehat{E}^{*}}\mathbbm{1}_{\left\{r\leq kB_{k}\left(X_{s^{-}}^{u}, \chi_{s^{-}}s\right)\right\}}(k-1)e^{-\int_{0}^{s}\Lambda(Y_{w},\chi_{w},w) \mathrm{d}w}\widehat{Q}^{*}\left(\mathrm{d}s,\mathrm{d}r,\mathrm{d}k,\mathrm{d} \widehat{\boldsymbol{\theta}}\right).\] (C.2) Using that \(\mathbb{E}_{\bar{z}}\left[\hat{\mathcal{W}}_{0}\right]=\left\langle z,1\right\rangle-1\), along with bounds in Assumption D and in (C.2), we get \[\mathbb{E}_{\bar{z}}\left[\hat{\mathcal{W}}_{t\wedge T^{m}}\big{|}\widehat{ \mathcal{F}}_{\infty}^{*}\right]\leq\left\langle z,1\right\rangle+\int_{0}^{t} \int_{\widehat{E}^{*}}\mathbbm{1}_{\left\{r\leq k\overline{B}\bar{p}_{k}\right\} }(k-1)e^{-cs}\widehat{Q}^{*}\left(\mathrm{d}s,\mathrm{d}r,\mathrm{d}k,\mathrm{d} \widehat{\boldsymbol{\theta}}\right),\] where \(\overline{B}=\sup_{\rho\in\mathbb{W}\times\mathbb{R}_{+}}B(\rho)<+\infty\). Then there exist two independent families of random variables \(\left(\overline{S}_{i},i\geq 0\right)\) and \(\left(\overline{N}_{i},i\geq 0\right)\), where the \(S_{i}\) are independent exponential random variables of parameter \(\overline{B}\), and the \(\overline{N}_{i}\) are independent random variables distributed as \(\mathbb{P}\left(\overline{N}_{i}=k-1\right)=k\bar{p}_{k}/(\sum_{k\geq 0}k\bar{p}_{k})\), such that \[\mathbb{E}_{\bar{z}}\left[\hat{\mathcal{W}}_{t\wedge T^{m}}\big{|}\widehat{ \mathcal{F}}_{\infty}^{*}\right]\leq\langle z,1\rangle+\sum_{i\geq 0}\overline{N}_{i}e^ {-c\overline{T}_{i}},\] where \(\overline{T}_{i}:=\sum_{k=0}^{i}\overline{S}_{k}\). Note that the assumption \(\sum_{k\geq 1}k\log(k)\overline{p}_{k}<+\infty\) ensures that the law of the random variables \(\overline{N}_{i}\) is well defined. The conclusion follows the steps from proof of Proposition 5 in [3]: using the Borel Cantelli lemma, \(\sum_{k\geq 1}k\log(k)\overline{p}_{k}<+\infty\), ensures that \(\limsup_{i\to\infty}\ln(\overline{N}_{i})/i=0\) a.s.. Thus the series \(\left(\sum_{i\geq 0}\overline{N}_{i}e^{-c\overline{T}_{i}}\right)\) is almost surely finite. Letting \(m\to\infty\) and using the upper bound of Assumption D, we get that \(T^{m}\to\infty\) almost surely. 
By Fatou's lemma, we then have \(\sup_{t\geq 0}\mathbb{E}_{\bar{z}}\left[\hat{\mathcal{W}}_{t}\big{|}\widehat{\mathcal{F}}_{\infty}^{*}\right]<\infty\). The quenched submartingale \((\hat{\mathcal{W}}_{t\wedge T^{m}},t\geq 0)\) converges a.s. to a finite random variable. Thus \(\limsup_{t\to+\infty}\hat{\mathcal{W}}_{t}<+\infty\) almost surely and we conclude the proof using (C.1). We now handle the degenerate case, and suppose that \(\sum_{k\geq 1}k\log(k)\underline{p}_{k}=+\infty\). We denote \((\widehat{T}_{n}^{*},n\geq 0)\) the sequence of jump times of the spinal individual, given by the measure \(\widehat{Q}^{*}\), and \((\widehat{N}_{n}^{*},n\geq 0)\) the sequence giving the number of children at each branching event of the spine. For all \(n\geq 0\), we remark that \[\hat{\mathcal{W}}_{\widehat{T}_{n}^{*}}=\hat{\mathcal{W}}_{\widehat{T}_{n}^{*-}}+\left(\widehat{N}_{n}^{*}-1\right)\exp\left(-\int_{0}^{\widehat{T}_{n}^{*}}\Lambda\left(Y_{s},\chi_{s},s\right)\mathrm{d}s\right).\] Using that \(\hat{\mathcal{W}}_{\cdot}\) is almost surely non-negative, and using the upper bound in Assumption D, we obtain \[\hat{\mathcal{W}}_{\widehat{T}_{n}^{*}}\geq\left(\widehat{N}_{n}^{*}-1\right)\exp\left(-C\widehat{T}_{n}^{*}\right).\] (C.3) Assumption C with \(\psi_{\phi}\equiv 1\) implies that for all \(\rho\in\mathcal{X}\times\mathbb{V}\times\mathbb{R}_{+},\ \sum_{k\geq 1}kp_{k}(\rho)<+\infty\). Thus \(\sum_{k\geq 1}k\underline{p}_{k}<+\infty\) and we introduce \(\underline{N}\) the random variable of law \((k\underline{p}_{k}/(\sum_{k\geq 1}k\underline{p}_{k}),k\geq 0)\). The criterion \(\sum_{k\geq 1}k\log(k)\underline{p}_{k}=+\infty\) thus ensures that for all \(K>0\) \[\frac{\mathbb{E}\left[\log(\underline{N})\right]}{K}=+\infty.\] Thus, we get for all \(K>0\), \[\sum_{n\geq 1}\mathbb{P}\left(\frac{\log(\widehat{N}_{n}^{*})}{n}\geq K\Big{|}\widehat{\mathcal{F}}_{\widehat{T}_{n}^{*}}^{*}\right)\geq\sum_{n\geq 1}\mathbb{P}\left(\frac{\log(\underline{N})}{K}\geq n\right)=+\infty.\] We can apply the conditional second Borel-Cantelli lemma, see Theorem 4.3.4 in [20]. For all \(K>0\), \[\limsup_{n\to\infty}\widehat{N}_{n}^{*}e^{-Kn}=+\infty\quad\text{a.s.}\] (C.4) Then, using Markov's inequality, we get \[\mathbb{P}\left(\widehat{T}_{n}^{*}\geq nK\right)\leq\frac{\mathbb{E}\left[\sum_{i=1}^{n}\widehat{T}_{i}^{*}-\widehat{T}_{i-1}^{*}\right]}{nK}.\] The upper bound in Assumption D ensures that the rate of every inter-arrival time of jumps is lower than \(\overline{B}\). Thus we obtain for all \(K>0\), \[\mathbb{P}\left(\widehat{T}_{n}^{*}\geq nK\right)\leq\frac{\overline{B}}{K}.\] (C.5) Thus, combining (C.4) and (C.5) into (C.3), we get that \(\limsup_{n\to\infty}\hat{\mathcal{W}}_{\widehat{T}_{n}^{*}}=+\infty\) almost surely. Notice that Assumption D ensures that the jump times \((\widehat{T}_{n}^{*},n\geq 0)\) almost surely tend to infinity, thus \(\limsup_{t\to\infty}\hat{\mathcal{W}}_{t}=+\infty\) and relation (C.1) concludes the proof. ## Appendix D Algorithmic construction We propose an efficient algorithm that generates trajectories of the spine process constructed with the \(\psi_{\phi}\) function introduced in (4.1). We introduce \(F_{\mathrm{div}}^{-1},F_{\mathrm{loss}}^{-1}\) and \(F_{\mathrm{loss}}^{*-1}\), the generalized inverses of the cumulative distribution functions of the random variables \(\Lambda\), \(\Theta\) and \(\Theta^{*}\), defined in (4.2) and (4.3). Using the deterministic evolution of the traits and knowing the branching events, it is easy to recover the traits of the individuals at all times.
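For concreteness, sampling from these laws by inverse transform only requires drawing a uniform variable and applying the corresponding generalized inverse CDF. A minimal Python sketch is given below; the symmetric Beta(2, 2) fragmentation law is purely an illustrative assumption, not the distribution used in the paper.

```python
import numpy as np
from scipy.stats import beta

def inverse_transform_sample(cdf_inverse, size, rng=None):
    # If U ~ Uniform(0,1), then cdf_inverse(U) has the target distribution.
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(0.0, 1.0, size=size)
    return cdf_inverse(u)

# Hypothetical fragmentation law: symmetric Beta(2, 2) fractions at division.
F_div_inv = lambda u: beta.ppf(u, 2, 2)
fractions = inverse_transform_sample(F_div_inv, size=5)
print(fractions)
```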
We first generate a realization of a Poisson point process of intensity \(1\) on the interval \([t_{1},t_{2}]\), starting from \(n\) roots, following classical algorithms [46]. This procedure returns a list \(T_{\mathrm{div}}=[t_{1},T_{1},T_{2},\cdots]\) of increasing times, a list \(I_{\mathrm{div}}=[i_{1},i_{2},\cdots]\) containing the numbering of the individuals that branched at these times, a list \(L_{\mathrm{div}}=[\lambda_{1},\lambda_{2},\cdots]\) of fractions of mass at birth, and the numbering \(E\) of the spinal individual. The numbering choice is arbitrary and we chose to add every new individual at the end of the list. This computation is handled by the following function Tree, where \(i_{0}\) is the initial numbering of the spinal individual.
```
function \((T_{\mathrm{div}},I_{\mathrm{div}},E,L_{\mathrm{div}})=\mathrm{Tree}(t_{1},t_{2},n,i_{0},F_{\mathrm{div}}^{-1})\)
\((T_{\mathrm{div}},N)=([t_{1}],n)\)
% Step 1: generating the division times
while \(T_{\mathrm{div}}[\mathrm{end}]<t_{2}\) do
  Generate \(u\sim\mathrm{uniform}(0,1)\)
  append \(T_{\mathrm{div}}[\mathrm{end}]-\ln(u)/N\) to \(T_{\mathrm{div}}\) \(\triangleright\) Branching time
  \(N\gets N+1\) \(\triangleright\) New population size
endwhile
if \(T_{\mathrm{div}}[\mathrm{end}]>t_{2}\) then pop \(T_{\mathrm{div}}\) endif
% Step 2: generating the fractions at birth and choosing the spinal individual
\((I_{\mathrm{div}},E)=([0,\cdots,0],i_{0})\) \(\triangleright\) \(I_{\mathrm{div}}\) of size length\((T_{\mathrm{div}})\)
for \(i\in\{1,\cdots,\mathrm{length}(T_{\mathrm{div}})-1\}\) do
  Generate \(I\sim\mathrm{uniform}(\{1,\cdots,n+i-1\})\) \(\triangleright\) Branching individual
  \(I_{\mathrm{div}}[i]\gets I\)
  Generate \(v,q\sim\mathrm{uniform}(0,1)\)
  append \(F_{\mathrm{div}}^{-1}(v)\) to \(L_{\mathrm{div}}\) \(\triangleright\) Fraction \(\lambda\) at birth
  if \(E=I\) and \(q>F_{\mathrm{div}}^{-1}(v)\) then \(E\gets n+i\) endif \(\triangleright\) New spinal individual position
endfor
endfunction
```
**Algorithm 2** The function \(\mathrm{Labels}(I_{\mathrm{div}},n)\) computing the list \(U\) of labels of the individuals

The last line of this algorithm lists the labels of individuals in increasing order, grouping siblings together. The label of the spinal individual is thus \(U[E]\). To compute a trajectory of the spinal process on the interval \([0,T]\), we need to establish the times of the jumps and their outcomes. Using the deterministic evolution of the traits and knowing the branching events, it is easy to recover the traits at all times for the individuals. Furthermore, depending on the statistics that one wants to evaluate on the system, it might not be necessary to compute the traits of all individuals. For that reason we propose an algorithmic construction of a trajectory of the spinal process, returning the list of jump times, events and labels of the individuals living at time \(T\). This algorithm also distinguishes the spinal individual in the population by returning, among the individuals living at time \(T\), the label of the spinal one. To simplify, we will denote \(\mathrm{PPP}_{[t_{1},t_{2}]}(c(\cdot))\) the list of times given by a Poisson point process of intensity \(c(\cdot)\) on the interval \([t_{1},t_{2}]\), computed using Lewis' thinning algorithm [39].
```
Require: model parameters: \(d,F_{\mathrm{div}}^{-1},F_{\mathrm{loss}}^{-1},F_{\mathrm{loss}}^{*-1},T\), initial condition: \(z=[x^{1},\cdots,x^{n}]\).
Ensure: \((T_{\mathrm{div}},I_{\mathrm{div}},E,L_{\mathrm{div}},T_{\mathrm{loss}},T_{\mathrm{loss}}^{*},I_{\mathrm{loss}},L_{\mathrm{loss}},L_{\mathrm{loss}}^{*})\)
% Initializing the spine
Generate \(u\sim\)uniform\((0,1)\)
\(i_{0}=1\)
while \(\sum_{i=1}^{i_{0}}x^{i}/(\sum_{i=1}^{n}x^{i})<u\) do
  \(i_{0}\gets i_{0}+1\)
endwhile
% Generating binary spinal tree on division events
\((T_{\mathrm{div}},I_{\mathrm{div}},E,L_{\mathrm{div}})=\mathrm{Tree}(0,T,n,i_{0},F_{\mathrm{div}}^{-1})\)
% Generating loss events for the spine and individuals outside the spine
\((T_{\mathrm{loss}},T_{\mathrm{loss}}^{*},I_{\mathrm{loss}})=([\cdot],[\cdot],[\cdot])\)
for \(i\in\{1,\cdots,\mathrm{length}(T_{\mathrm{div}})\}\) do
  % For the individuals outside the spine
  Generate \(P\sim\mathrm{PPP}_{\big{[}T_{\mathrm{div}}[i-1],T_{\mathrm{div}}[i]\big{]}}\big{(}K_{\mathrm{loss}}(n+i-1)^{2}d(\cdot)\big{)}\) % Times of loss events
  append \(P\) to \(T_{\mathrm{loss}}\)
  Generate \(I_{u}\sim\Big{(}\mathrm{uniform}\left(\{1,\cdots,n+i-1\}\right)^{\mathrm{length}(T_{\mathrm{loss}}[i])}\Big{)}\) % Distributing loss events
  append \(I_{u}\) to \(I_{\mathrm{loss}}\)
  % For the spine
  Generate \(P^{*}\sim\mathrm{PPP}_{\big{[}T_{\mathrm{div}}[i-1],T_{\mathrm{div}}[i]\big{]}}\big{(}(n+i-1)d(\cdot)\big{)}\) % Times of loss events
  append \(P^{*}\) to \(T_{\mathrm{loss}}^{*}\)
endfor
Generate \(L_{u}\sim\big{(}\mathrm{uniform}(0,1)^{\mathrm{length}(T_{\mathrm{loss}})}\big{)}\) % Generating fractions lost
\(L_{\mathrm{loss}}=F_{\mathrm{loss}}^{-1}(L_{u})\)
Generate \(L_{u}^{*}\sim\big{(}\mathrm{uniform}(0,1)^{\mathrm{length}(T_{\mathrm{loss}}^{*})}\big{)}\) % Generating fractions lost
\(L_{\mathrm{loss}}^{*}=F_{\mathrm{loss}}^{*-1}(L_{u}^{*})\)
```
**Algorithm 1** Simulation algorithm of the jump times until time \(T\)

The output tuple \((T_{\mathrm{div}},I_{\mathrm{div}},E,L_{\mathrm{div}},T_{\mathrm{loss}},T_{\mathrm{loss}}^{*},I_{\mathrm{loss}},L_{\mathrm{loss}},L_{\mathrm{loss}}^{*})\) is the minimal information needed to construct the spinal process introduced in Section 4. The list \(U\) of labels of the individuals can be computed with the function \(\mathrm{Labels}(I_{\mathrm{div}},n)\). Notice that a lot of operations can be parallelized in this algorithm, unlike in the classical Lewis algorithm. Moreover, depending on the statistic that one wants to compute on this process, the algorithm can be further simplified. For example, if one is interested in the total biomass of the population, the indices of the individuals that branched are not necessary. ## Acknowledgments I express my gratitude to L. Coquille, A. Marguet, and C. Smadi for their guidance and valuable feedback throughout this work. I would like to extend special thanks to A. Marguet and C. Smadi for their numerous corrections to this article. I thank V. Bansaye for stimulating discussions on the subject of this paper and S. Billiard for fruitful discussions on the Yule model. I acknowledge partial support from the Chair "Modelisation Mathematique et Biodiversite" of VEOLIA-Ecole Polytechnique-MNHN-F.X. This work is supported by the French National Research Agency in the framework of the "France 2030" program (ANR-15-IDEX-0002) and by the LabEx PERSYVAL-Lab (ANR-11-LABX-0025-01).
We consider branching processes describing structured, interacting populations in continuous time. Dynamics of each individual's characteristics and branching properties can be influenced by the entire population. We propose a Girsanov-type result based on a spinal construction, and establish a many-to-one formula. By combining this result with the spinal decomposition, we derive a generalized continuous-time version of the Kesten-Stigum theorem that incorporates interactions. Additionally, we propose an alternative approach of the spine construction for exact simulations of stochastic size-dependent populations.
2309.04550
Retrieving Evidence from EHRs with LLMs: Possibilities and Challenges
Unstructured data in Electronic Health Records (EHRs) often contains critical information -- complementary to imaging -- that could inform radiologists' diagnoses. But the large volume of notes often associated with patients together with time constraints renders manually identifying relevant evidence practically infeasible. In this work we propose and evaluate a zero-shot strategy for using LLMs as a mechanism to efficiently retrieve and summarize unstructured evidence in patient EHR relevant to a given query. Our method entails tasking an LLM to infer whether a patient has, or is at risk of, a particular condition on the basis of associated notes; if so, we ask the model to summarize the supporting evidence. Under expert evaluation, we find that this LLM-based approach provides outputs consistently preferred to a pre-LLM information retrieval baseline. Manual evaluation is expensive, so we also propose and validate a method using an LLM to evaluate (other) LLM outputs for this task, allowing us to scale up evaluation. Our findings indicate the promise of LLMs as interfaces to EHR, but also highlight the outstanding challenge posed by "hallucinations". In this setting, however, we show that model confidence in outputs strongly correlates with faithful summaries, offering a practical means to limit confabulations.
Hiba Ahsan, Denis Jered McInerney, Jisoo Kim, Christopher Potter, Geoffrey Young, Silvio Amir, Byron C. Wallace
2023-09-08T18:44:47
http://arxiv.org/abs/2309.04550v3
# Retrieving Evidence from EHRs with LLMs: Possibilities and Challenges ###### Abstract Unstructured Electronic Health Record (EHR) data often contains critical information complementary to imaging data that would inform radiologists' diagnoses. However, time constraints and the large volume of notes frequently associated with individual patients render manual perusal of such data to identify relevant evidence infeasible in practice. Modern Large Language Models (LLMs) provide a flexible means of interacting with unstructured EHR data, and may provide a mechanism to efficiently retrieve and summarize unstructured evidence relevant to a given query. In this work, we propose and evaluate an LLM (Flan-T5 XXL) for this purpose. Specifically, in a zero-shot setting we task the LLM to infer whether a patient has or is at risk of a particular condition; if so, we prompt the model to summarize the supporting evidence. Enlisting radiologists for manual evaluation, we find that this LLM-based approach provides outputs consistently preferred to a standard information retrieval baseline, but we also highlight the key outstanding challenge: LLMs are prone to hallucinating evidence. However, we provide results indicating that model confidence in outputs might indicate when LLMs are hallucinating, potentially providing a means to address this. Keywords: NLP, LLMs, radiology. ## 1 Introduction We consider using Large Language Models (LLMs) as interfaces to unstructured data (e.g., notes) in patient Electronic Health Records (EHRs), ultimately to aid radiologists performing imaging diagnosis. The motivation here is that unstructured evidence within EHR may support (or render less likely) particular diagnostic hypotheses radiologists come to based on imaging alone, but time constraints--combined with the often lengthy records associated with individual patients--make manually finding and drawing upon such evidence infeasible in practice. Consequently, radiologists often perform diagnosis with comparatively little knowledge of patient history. Modern LLMs offer a flexible mechanism to interface with unstructured EHR data. For example, recent work has shown that LLMs can perform "zero-shot" information extraction from clinical notes with reasonable accuracy (Agrawal et al., 2022). In this work, we propose and evaluate an approach to extract evidence from clinical notes in order to aid diagnosis. More concretely, we envision a clinician first providing an initial suspected diagnosis as a query. The model (an LLM) should then confirm whether there is unstructured (textual) evidence in the patient record that might support this diagnosis, and--if so--summarize this for the clinician. Figure 1 provides a schematic of the approach. LLMs provide an attractive mechanism to permit such interactions given their flexibility and established dexterity working with unstructured text. We anticipate--and empirically verify in this work--that they will therefore be able to find and summarize evidence relevant to an arbitrary query more capably than "traditional" (which is to say, pre-LLM) information retrieval (IR) methods. Critically, they can also answer general questions (e.g., "Is this patient at risk of _Atrial fibrillation?_") and provide summaries of supporting evidence identified; retrieval methods do not offer such functionality. However, they also bring serious challenges: Skillful as they are, it is well-known that LLMs are also prone to "hallucinating" content (Azamfirei et al., 2023; Zhang et al., 2023).
We therefore perform an empirical evaluation with practicing radiologists to assess both the potential benefits and risks of using LLMs to aid diagnosis. The expert evaluations confirm that LLMs are more capable than a standard IR system at surfacing and summarizing evidence relevant to a given diagnosis. However, such models also bring inherent challenges. How can we know, e.g., that the summary of evidence supporting a given condition is in fact faithful to the patient record at hand? We highlight several examples where the LLM fabricates plausible patient history that _would_ support a condition of interest. This is potentially dangerous and certainly frustrates the provider, who must then read through the record carefully to ascertain that there is in fact no such evidence, eliminating both the safety and efficiency benefits hoped for from use of LLMs. Our contributions are summarized as follows. (1) We introduce an approach in which we task an LLM (specifically, Flan-T5 XXL; Chung et al., 2022) to assess whether a patient is at risk of or has a given condition, and to produce a conditional summary of any corresponding supporting evidence. We conduct expert evaluation of this and find it considerably outperforms baseline evidence retrieval approaches. (2) We show in-depth examples that highlight the key challenge to using LLMs for this task: hallucinated content. This points to future research needed to address issues currently precluding the use of LLMs as an interface to EHR. ## 2 Retrieving and summarizing evidence with LLMs For a given query (\(\equiv\) condition), we attempt to retrieve two distinct types of evidence from patient history: (A) Snippets that indicate a patient _may be at risk_ of developing the condition in the future, and (B) those that suggest the patient _currently has_ the condition. For example, a patient on anticoagulants after a recent posterior fossa surgery may be at risk of an intracranial hemorrhage (but not experiencing one currently). By contrast, observing acute posterior fossa hemorrhage indicates the patient most likely has intracranial hemorrhage. Extracting evidence for _risk_ informs clinicians about occurrences in the patient's history (such as procedures, diagnoses) that make them more vulnerable to the condition. Extracting evidence for _signs_ of a condition serves two purposes: signs that occur in the patient's immediate history indicate that the patient likely has the condition, while signs that occur earlier indicate that the patient has a history of the condition, which is also important information. We use Flan-T5 XXL as our base LLM. While larger, proprietary models may offer superior results, we wanted to use an accessible LLM to ensure reproducibility. Moreover, protections for patient privacy mandated by the Health Insurance Portability and Accountability Act (HIPAA), and our institutional policy on the use of LLMs, restrict us to using models that can be deployed "in-house", precluding hosted variants (e.g., those provided by OpenAI). **Zero-shot sequential prompting.** We adopt a sequential prompting approach to find and summarize evidence. First, we ask the LLM whether a given note indicates that the corresponding patient is at risk for, or has, a given query diagnosis--this prompts the LLM for a binary decision. When the answer is 'Yes', we prompt the model to provide support for its response. Specifically, to query whether the patient is at risk for the given diagnosis, we use the prompts below.
'Read the following clinical note of a patient: [NOTE]. Question: Is the patient at risk of [DIAGNOSIS]? Choice -Yes -No. Answer:'

To elicit supporting evidence from the model for such risk predictions, we use the following prompt.

'Read the following clinical note of a patient: [NOTE]. Answer step by step: based on the note, why is the patient at risk of [DIAGNOSIS]? Answer:'

Similarly, to query whether the patient _has_ a given diagnosis, we ask the model instead "Question: Does the patient have [DIAGNOSIS]?" (asking for a binary response). And then to obtain evidence supporting this assessment (in the case of a positive response), we prompt with: "Question: Extract signs of [DIAGNOSIS] from the note." In the above prompts, [NOTE] denotes a patient note, and [DIAGNOSIS] a potential diagnosis for which we would like to retrieve supporting evidence. We then combine and present the result for the two types of evidence (risks and signs) to the end user. ## 3 Data For evaluation, we worked with radiologists (specializing in neuroimaging) from the Brigham and Women's Hospital in Boston (BWH). For experiments, we used a private dataset from this hospital and the publicly available MIMIC-III (Johnson et al., 2016) dataset, to ensure that our findings are robust and (partially) reproducible. The **BWH** dataset comprises patients admitted to the Emergency Room (ER) of BWH between 2010 and 2015 along with clinical notes including: cardiology, endoscopy, operative, pathology, pulmonary, radiology reports, and discharge summaries. We sampled patients who underwent brain imaging within 48 hours of their ER visit because they are likely to have undetermined diagnoses. We are interested in scenarios where patients are associated with a large volume of EHR data, so we included patients with \(\geq\)10 EHR notes. **MIMIC-III** is a publicly available database of deidentified EHR from patients admitted to the Intensive Care Unit (ICU) of the Beth Israel Deaconess Medical Center between 2001 and 2012. It contains both structured data (e.g., demographics, vital sign measurements, lab test results), and unstructured data (e.g., nurse and physician notes, ECG and radiology reports and discharge summaries). Similar to the BWH dataset, we sampled patients who underwent brain imaging within 48 hours of their ER or Urgent Care visit, whose EHR included \(\geq 10\) notes. We sampled data for individual patients, but evaluated models with respect to diagnoses. For example, if a patient report mentioned 'stroke' and 'sinusitis', the radiologist evaluated the surfaced evidence for each condition independently. To reduce annotation effort, we discarded diagnoses with more than 20 pieces of evidence and finally sampled 8 instances from each source to create our final evaluation dataset. See Figure 2 for a schematic of our data sampling procedure. Table 1 reports statistics about the set of examples used for evaluation. ## 4 Evaluation For expert evaluation, one of the collaborating radiologists identified all diagnoses discussed in the _Findings_ and _Impressions_ sections of the radiology reports of 10 patients from each dataset (excluding MIMIC-III patients from the pilot study).3 Then, for each diagnosis, we retrieved supporting evidence from all patient notes using the zero-shot prompting strategy from Section 2. Three collaborating radiologists then manually assessed each retrieved piece of evidence.
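For illustration, the sequential prompting strategy from Section 2 could be implemented along the following lines with an off-the-shelf Flan-T5 checkpoint; the model identifier and generation settings here are assumptions for the sketch, not the authors' exact configuration.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "google/flan-t5-xxl"  # assumed checkpoint; a smaller Flan-T5 works for testing
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def ask(prompt: str, max_new_tokens: int = 128) -> str:
    # Greedy zero-shot generation; decoding settings are assumptions.
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

def risk_evidence(note: str, diagnosis: str):
    # Step 1: binary decision; Step 2 (only if 'Yes'): elicit supporting evidence.
    decision = ask(
        f"Read the following clinical note of a patient: {note}. "
        f"Question: Is the patient at risk of {diagnosis}? Choice -Yes -No. Answer:"
    )
    if not decision.strip().lower().startswith("yes"):
        return None
    return ask(
        f"Read the following clinical note of a patient: {note}. "
        f"Answer step by step: based on the note, why is the patient at risk of {diagnosis}? Answer:"
    )
```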
Footnote 3: While this is a relatively small number of patients, we emphasize that manual evaluation is expensive: Radiologists on our team spent \(\sim\)9 hours manually assessing outputs. Figure 3 shows the evaluation interface that our radiologist team members used to assess model outputs. Because the relevance of an evidence snippet inherently depends on the context, we ask radiologists to ground their assessments by assuming the following hypothetical setting: "You are a radiologist reviewing a scan of a patient in the ER. Based on the scan, you are concerned that the patient has the diagnosis stated below. Assess the relevance of the retrieved evidence to support your inference." For each piece of evidence surfaced by a model, radiologists answered two questions: _Is the evidence present in the note?_ LLMs can hallucinate evidence. Therefore, we first ask radiologists to confirm whether the model-generated evidence is in fact supported by the note on the basis of which it was produced. To aid the radiologists in finding the corresponding sentences, we compute ClinicalBERT (Alsentzer et al., 2019) embeddings of sentences in the notes and highlight those with a cosine similarity of \(\geq 0.9\) with the ClinicalBERT embedding of the generated evidence. This heuristic approach realizes high precision but low recall. Therefore, if a highlighted sentence is incongruous with generated evidence, we ask radiologists to read through the entire note to try and manually identify support. Note that the (non-generative) retrieval method to which we compare as a baseline is extractive, and so incapable of hallucinating content; we nevertheless ask this question with regard to the baseline for consistency and to ensure blinding.

Figure 2: Data sampling flow-chart. An instance is a unique (patient, diagnosis) combination.

_Is the evidence relevant?_ If the generated evidence is supported by the note, we ask radiologists whether it is _relevant_ to the query diagnosis. Specifically, we collect assessments on the following scale.

**0: Not Useful** The information is not useful; it is irrelevant to the query condition (e.g., 'The patient is on a ventilator' for pneumocephalus).

**1: Weak Correlation** The information surfaced has a plausible but probably weak correlation with the query condition (e.g., 'The patient is in a flutter' for intracranial hemorrhage).

**2: Useful** The retrieved evidence is relevant and may inform one's diagnostic assessment (e.g., 'The patient has a TBI' for intracranial hemorrhage).

**3: Very Useful** The retrieved evidence is clearly relevant and would likely inform diagnosis (e.g., 'hypertension' for small vessel disease (SVD)).

To capture whether the two models we evaluated--one an LLM, the other a retrieval approach--provide distinct evidence, we asked annotators if one model surfaced relevant evidence that the other did not. We also ask radiologists to choose which of the two models they subjectively prefer and to offer any relevant comments. Finally, we record the time taken to evaluate each piece of evidence. ## 5 Results To first assess agreement between radiologists, we had all of them annotate evidence surfaced by the LLM for one particular patient, selected at random from the BWH dataset. For this patient, the model generated 10 pieces of (potentially) relevant evidence for the query _chemoradiation necrosis_. On this shared set, the inter-annotator agreement score (average pairwise Cohen's \(\kappa\)) for relevance assessments between the three radiologists was 0.68.
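For reference, the average pairwise Cohen's \(\kappa\) reported above can be computed as sketched below; the relevance labels are made-up placeholders, not the study's actual annotations.

```python
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

# Placeholder relevance labels (0-3) for the 10 shared evidence snippets.
ratings = {
    "radiologist_1": [3, 2, 0, 1, 3, 2, 0, 2, 1, 3],
    "radiologist_2": [3, 2, 0, 2, 3, 2, 0, 2, 1, 3],
    "radiologist_3": [2, 2, 0, 1, 3, 1, 0, 2, 1, 3],
}

pairwise = [cohen_kappa_score(ratings[a], ratings[b]) for a, b in combinations(ratings, 2)]
print(f"Average pairwise Cohen's kappa: {sum(pairwise) / len(pairwise):.2f}")
```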
To verify a given piece of evidence provided by a model it took radiologists an average of \(35.09\pm 49.24\) and \(21.37\pm 24.36\) seconds for FLAN-T5 and CBERT models, respectively. Evaluating FLAN-T5 demands more time because the highlights are approximations (heuristically matched to generations) and so do not always map correctly to the sentence corresponding to the evidence. Evaluating hallucinations takes even longer (\(129.22\pm 131.12\) seconds per piece of evidence) because radiologists must peruse the note multiple times to confirm that said evidence is not present.

Figure 4 shows results (see Table 2 for examples). Radiologists found 41.5% (MIMIC) and 48.4% (BWH) of the evidence generated by FLAN-T5 to be (very) useful. In comparison, 34.4% (MIMIC) and 39.7% (BWH) of CBERT's evidence was (very) useful. 23.0% (MIMIC) and 18.5% (BWH) of the evidence generated by FLAN-T5 showed weak correlations (discussed in Section 5.3). 8.8% (MIMIC) and 10% (BWH) of evidence was hallucinated (discussed in 5.2). 26.7% (MIMIC) and 23.1% (BWH) of the evidence was deemed not useful; the latter was primarily true information about the patient's condition which was unrelated to the diagnosis.

Figure 4: Evidence generated by the LLM (FLAN-T5) is more often deemed (very) useful than that retrieved by CBERT. But on average, 9.4% of evidence generated by FLAN-T5 is hallucinated.

Figure 3: Screenshot of the evaluation interface showing highlighted evidence.

### Binary decision recall

As discussed above, we first ask the LLM whether a given note indicates that the corresponding patient is at risk for, or has, a given query diagnosis. The precision of this LLM inference is implicitly measured by the assessment of generated evidence; if the patient does not have and/or is not at risk for a given condition, any generated evidence will necessarily be irrelevant. But this does not capture recall, i.e., how sensitive the model is with respect to identifying when a patient indeed has or is at risk of a condition. Therefore, to additionally estimate model _recall_ with respect to inferences based on notes, we sampled 20 patients from the BWH data and followed prior work (McInerney et al., 2020) in terms of evaluation. Specifically, we asked radiologists to browse reports from up to one year after the patient's reference radiology report and tag relevant diagnoses; these can then be viewed as "future" diagnoses with respect to the original reference report. The radiologists then selected past notes containing supporting evidence for these diagnoses. Out of the 200 notes marked as containing evidence, FLAN-T5 correctly identified 140, corresponding to a recall of 0.7.

### Hallucinations

Concerningly, some of the model hallucinations flagged by radiologists were about risk factors that are highly relevant to the query diagnosis. We provide a few illustrative examples:

**Example 1** For a patient with demyelination as the query diagnosis, the model hallucinated the evidence 'axonal degeneration'. Demyelination is commonly viewed as the primary factor responsible for the deterioration of axons within multiple sclerosis lesions. The model also hallucinated signs of demyelination as evidence ('numbness and tingling in the arms and legs'). There was no evidence in the record indicating axonal degeneration or either of these symptoms.

**Example 2** For a patient with chemo-radiation necrosis as the query diagnosis, the model hallucinated that 'the patient had a history of chemo-radiation necrosis'.
A history of radiation necrosis would be (very) relevant to its diagnosis, but there was no such history in the EHR. In other instances, the model hallucinated vague evidence such as 'The patient is taking a lot of medications that can cause small vessel disease' for small vessel disease as the query diagnosis (a radiologist went through the note and was unable to find mention of any such medication). Figure 5: Distributions of two types of confidence scores, stratified by expert assigned labels. We consider measuring (a) similarities between multiple sampled outputs and (b) simply taking the normalized likelihood assigned to sequences by the language model. Both confidence scores provide good discrimination of “hallucinated” evidence, compared to all other types (yielding AUCs of \(>\)0.9). They also both correlate with the rated usefulness of evidence. **Can hallucinations be identified by soft-matching the evidence back with the note?** A possible way to identify hallucinations is to try to'match' generated evidence to text within patient notes. If so, this intuitively suggests the generated content is also present in the underlying note. Practically, we do this by generating embeddings of generated evidence texts and sentences in notes and scoring their similarities; we can then consider a generation supported if this similarity exceeds some threshold. Note, however, that this approach may construe accurate evidence as 'hallucinated' if the the generation is comparatively abstractive. For example, the model output 'The patient has had multiple heart surgeries' is true, but has a low ClinicalBERT embedding similarity score with the corresponding sentence 'This is a 64 year-old gentelman with known coronary artery disease status post an inferoposterior myocardial infarction in [**2116**] who is status post redo-CABG times six and bovine AVR' in the note. Because abstraction--the ability to render relevant patient history concisely--is a major benefit of LLMs, using heuristic matching to noisily filter outputs (limiting to ostensibly 'faithful' cases) does not seem a particularly promising approach. **How certain is the model about the hallucinations?** We evaluate the degree to which model uncertainty, quantified both via normalized output likelihoods under the LM and using'self-consistency' (Huang et al., 2022; Si et al., 2022), can be used to infer when the model is 'hallucinating' content. The latter method entails eliciting 'Chain-of-Thought' (CoT) reasoning (Wei et al., 2022) from models repeatedly, sampling tokens at each step to yield multiple reasoning paths. The fraction of these that resolve to the same final output (i.e., which are consistent) can then be taken as a proxy for model certainty in that output. Outputs here are evidence summaries, so we take the average similarity of sampled outputs as a proxy for certainty (details in Appendix C). We find that both methods provide confidence scores that are highly indicative of hallucinations; this can be seen in Figure 5(_b_). This is promising, because it suggests we might be able to simply abstain from providing outputs in such cases. ### Weakly correlating evidence One factor complicating our evaluation is that the LLM often surfaced evidence which might have a relatively weak correlation with the query condition. One could argue that the model was 'correct' in retrieving such evidence from a population epidemiology perspective, but incorrect from an individual patient clinical perspective. 
In other words, the evidence is so weakly correlated with the condition that it is minimally useful (see Appendix E.1 for examples).

### Preferences

Figure 6 shows how often radiologists preferred each model. FLAN-T5 provided comparatively precise and concise output. Abstractive evidence was considered better than the extractive snippets from CBERT, which often chunked useful evidence with neighboring irrelevant sentences (notes are usually poorly formatted, making sentence-parsing difficult). See Appendix E.1 for examples. While radiologists largely preferred FLAN-T5 (XXL), CBERT was preferred in three cases:

_(1) Poor precision:_ For pneumocephalus, 50% of the evidence surfaced by FLAN-T5 was not useful.

_(2) Poor recall:_ In the case of chemoradiation necrosis, FLAN-T5 had poor recall and failed to retrieve essential evidence related to the patient's radiation therapy. This may be because of the term 'chemoradiation', as the condition is more commonly referred to as 'radiation necrosis'. Changing the diagnosis to 'radiation necrosis' resulted in the model retrieving evidence related to the patient's radiation therapy.

_(3) Supported alternate diagnosis:_ Interestingly, our radiologist preferred CBERT for the case of demyelination because it helped confirm that the patient did _not_ have demyelination, but in fact had a glioma (tumor). Demyelinating lesions and glioma present similar imaging characteristics and can be difficult to diagnose based on conventional MR imaging (Toh et al., 2012). A brain biopsy is often conducted to differentiate between the two. All the evidence evaluated as (very) useful comprised snippets from the pathology report discussing the tests and related results that indicated that demyelination was less likely and that the findings were most consistent with glioma.

Figure 6: Model output preference counts, which indicate that radiologists prefer the generative outputs to retrieved snippets.

### Robustness to query variation

As mentioned in Section 5.4, FLAN-T5 performed better when the diagnosis 'chemoradiation necrosis' was revised to the more commonly used 'radiation necrosis'. Another such case was 'sinus disease', more commonly known as sinusitis. The model did well in this case; 80% of the evidence was useful. CBERT, however, surfaced evidence for cardiac sinus disease (sinus bradycardia, sinus tachycardia, etc.). Much of the evidence was related to the patient's cardiac history, which would be relevant if they had cardiac sinus disease. 90% of the evidence was hence not useful.

## 6 Related Work

**NLP for EHR.** Navigating EHRs is cumbersome, motivating several efforts in summarization of and information extraction from EHR (Pivovarov and Elhadad, 2015). For example, in recent related work, Jiang et al. (2023) created a proactive note retrieval system based on the current clinical context to aid note-writing. Adams et al. (2021) considered "hospital-course summarization", aiming to condense the notes of a patient visit into a paragraph, while Liang et al. (2019) proposed to create disease-specific extractive summaries from clinical notes.

**NLP in Radiology.** Previous works regarding NLP in radiology primarily focus on processing radiology reports. Some work has sought to generate the clinical Impression section based on the Findings section of reports (Van Veen et al., 2023; Zhang et al., 2019; Sotudeh et al., 2020).
Other efforts have focussed on extracting specific observations from radiology reports (Smit et al., 2020; Jaiswal et al., 2021), and modeled disease progression using radiology reports (Di Noto et al., 2021; Khanna et al., 2023). The prior works most relevant to this effort concern aiding radiologists in diagnosing conditions. (McInerney et al., 2020) propose using a distantly supervised model (trained to predict ICD codes) to perform extractive summarization conditioned on a diagnoses. Our work addresses the problem in a zero-shot setting. Tang et al. (2023) address diagnostic uncertainty by suggesting less likely diagnosis to radiologists by learning to differentiate between likely and less likely diagnoses via contrastive learning. More broadly, recent work has shown the potential of ML and NLP to assist radiologists across a range of tasks, including breast cancer screening (Wu et al., 2020), malignant lung nodules detection (Sim et al., 2020), and chest X-ray interpretation (Seah et al., 2021). Suggesting diagnosis (Tang et al., 2023) based on findings and supporting it with evidence (McInerney et al., 2020) also fall in this area. ## 7 Discussion and Limitations We have proposed and evaluated a method for using LLMs to retrieve and summarize evidence from patient records which might be relevant to a particular condition or diagnosis of interest, with the ultimate aim of aiding radiologists performing imaging diagnosis. Expert evaluations of model outputs performed by radiologists suggest that this is a promising \begin{table} \begin{tabular}{p{56.9pt} p{56.9pt} p{113.8pt} p{113.8pt}} **Evaluation** & **Diagnosis** & **Evidence** & **Explanation** \\ \hline Very Useful & intracranial hemorrhage & Recent fossa surgery and now on anticoagulants & Surgery in the brain inevitably leaves some hemorrhage. Anticoagulants increase the risk of hemorrhage. ‘Recent surgery’ and ‘anticogagulants’ make hemorrhage highly likely. \\ \hline Useful & infarction & There is calcified thrombus obstructing the origins of the M2 branches & ‘Thrombus’ is diagnostic of infarction, which is very useful information. But ‘calcified thrombus’ implies chronicity, so the thrombus could have been present for a long time and there may not be an acute infarction at this time. \\ \hline Weak Correlation & pneumoccephalus & patient was involved in a motorcycle accident & A traumatic head injury is an important risk factor of pneumoccephalus. A motorcycle accident increases the likelihood of a head injury. \\ \hline Not Useful & small vessel disease (SVD) & patient is at risk of endocarditis & Not helpful in diagnosing SVD. \\ \hline Hallucination & intracranial hemorrhage & patient has a brain tumor & Not present in the note (and relevant to the diagnosis). \\ \end{tabular} \end{table} Table 2: Examples of evidence surfaced by FLAN-T5 for different evaluation categories. approach, in that annotators tended to prefer LLM outputs to simple retrieval results. But there are important **limitations** to the approach and to our evaluation. For example, we found that LLMs are prone to hallucinating (plausible) evidence, potentially hindering their utility for the envisioned use. However, our results also suggest that confidence scores might allow one to pro-actively identify hallucinations, and abstain in such cases. Interestingly, confidence also seems to correlate with perceived usefulness. This suggests an interesting direction to explore in future work. 
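As a concrete illustration of the two confidence measures discussed above (self-consistency across sampled outputs and a length-normalized sequence likelihood), a rough sketch is given below. The sampling settings and the simple lexical similarity used as a stand-in for the output-similarity measure (details of which the paper defers to its Appendix C) are illustrative assumptions.

```python
# Sketch of the two confidence scores used to flag hallucinations:
# (a) self-consistency: mean pairwise similarity of several sampled outputs,
# (b) length-normalized likelihood of the generated evidence under the LM.
# The lexical similarity and top-p sampling settings are assumptions.
import itertools
from difflib import SequenceMatcher
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xxl")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-xxl")

def self_consistency(prompt: str, n_samples: int = 5) -> float:
    """Sample several outputs and return their mean pairwise similarity."""
    samples = []
    for _ in range(n_samples):
        inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
        out = model.generate(**inputs, do_sample=True, top_p=0.9, max_new_tokens=64)
        samples.append(tokenizer.decode(out[0], skip_special_tokens=True))
    pairs = list(itertools.combinations(samples, 2))
    return sum(SequenceMatcher(None, a, b).ratio() for a, b in pairs) / len(pairs)

def normalized_likelihood(prompt: str, generation: str) -> float:
    """Geometric-mean per-token probability of a generated evidence string."""
    enc = tokenizer(prompt, return_tensors="pt", truncation=True)
    labels = tokenizer(generation, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(**enc, labels=labels).loss  # mean negative log-likelihood per token
    return float(torch.exp(-loss))
```

A simple abstention rule could then threshold either score, e.g. suppressing generated evidence whose scores fall below values calibrated on annotated examples.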
Our evaluation was limited in a few key ways, which might reduce the generalizability of our findings. First, we enlisted a small set of radiologists to perform in-depth evaluation of a small number of instances. This is because evaluation is time consuming: We re-emphasize that this exercise required substantial allocation (\(\sim\)9 hours) of scarce expert time. Another limitation here is that we considered only one LLM (specifically, FLAN-T5): Other LLMs might, naturally, perform better or worse. Finally, we did not extensively iterate on the specific prompts used, and this too could substantially affect results.
Unstructured data in electronic health records (EHRs) contain important information complementary to imaging and may aid radiologists' diagnoses. However, the large volume of notes per patient and time constraints make it practically impossible to effectively identify relevant evidence. In this work, we propose and evaluate a zero-shot strategy that uses an LLM to efficiently retrieve and summarize evidence from a patient's EHR relevant to a query of interest. The method asks the LLM to infer whether the patient has (or is at risk of) a particular condition and to summarize the evidence supporting it. According to expert evaluation, this LLM-based approach consistently yields better results than a standard information retrieval baseline. Because manual evaluation is costly, we additionally explore using an LLM to assess the outputs of another LLM.
2309.14743
The galactic tooth-fairy and a cosmic bullet: Amateur discoveries and a call for further research
There are countless digital sky surveys and automated scans of the night sky which use computer algorithms to detect and categorize objects. With the advent of Artificial Intelligence such surveys will become even more efficient in the near future. Despite this some objects are missed by surveys or pose no initial interest. At times such missed objects are unique in nature and of decent angular sizes, demanding research, unlike the billions of tiny specs of galaxies that would be too tedious to name and study. In this scenario the amateur astronomer and their spirit for old school astronomical discovery steps in, to manually comb the sky and catalogue unique objects as was done in the early days of astronomy. In this paper two unique, previously uncatalogued galaxy candidates, namely Shaheer I and Shaheer II are identified and studied. Both galaxies lay at a distance of 6.67 arc-minutes from each other in the constellation of Camelopardalis. One boasts an unusual morphological profile, akin to a molar tooth, while the other seems to be shooting through space at tremendous velocities. The objects were discovered during visual inspection of digital surveys and then imaged from amateur telescopes at Taqwa observatory, Pakistan's first and only dark sky observatory (bortle 1). We perform photometry using PetroFit to discuss the potential nature of the galaxies and implore further collaborative research to fully uncover their characteristics.
Muhammad Shaheer Niazi
2023-09-26T08:11:40
http://arxiv.org/abs/2309.14743v1
# The galactic tooth-fairy and a cosmic bullet: Amateur discoveries and a call for further research ###### Abstract There are countless digital sky surveys and automated scans of the night sky which use computer algorithms to detect and categorize objects. With the advent of Artificial Intelligence such surveys will become even more efficient in the near future. Despite this some objects are missed by surveys or pose no initial interest. At times such missed objects are unique in nature and of decent angular sizes, demanding research, unlike the billions of tiny specs of galaxies that would be too tedious to name and study. In this scenario the amateur astronomer and their spirit for old school astronomical discovery steps in, to manually comb the sky and catalogue unique objects as was done in the early days of astronomy. In this paper two unique, previously uncatalogued galaxy candidates, namely Shaheer I and Shaheer II are identified and studied. Both galaxies lay at a distance of 6.67 arc-minutes from each other in the constellation of Camelopardalis. One boasts an unusual morphological profile, akin to a molar tooth, while the other seems to be shooting through space at tremendous velocities. The objects were discovered during visual inspection of digital surveys and then imaged from amateur telescopes at Taqwa observatory, Pakistan's first and only dark sky observatory (bottle 1). We perform photometry using PetroFit to discuss the potential nature of the galaxies and implore further collaborative research to fully uncover their characteristics. keywords: methods: observational - catalogues - Galaxy: general ## 1 Introduction Amateur astronomy and the digitization of sky surveys (e.g. panSTARRS, Chambers et al., 2016) has brought in a surge of new discoveries over the last decade. In the past, the astronomer would manually survey photographic plates and come upon a new discovery such as with Bothun et al. (1987). Digitized surveys now make it possible for anyone to explore the cosmos and engage in citizen science. Today the dedicated amateur astronomer can build a decent observatory with large aperture SCT's (schmidt-Cassegrain telescope), a dark sky and a high resolution, high Quantum efficiency imaging sensor enabling them to contribute to science with their observations. The rapid development and accessibility of CMOS imaging technologies has a significant role to play here. Examples of contributions include DGSAT: Dwarf Galaxy Survey with Amateur Telescopes (Javanmardi et al., 2016). Their team has made many low surface brightness dwarf galaxy discoveries (Martinez-Delgado et al., 2021; Collins et al., 2022) in the local group using preliminary amateur discoveries and then larger professional setups to probe deeper into the galaxies. Similarly KPS-1b was the first transiting exoplanet discovered through an amateur setup, Burdanov et al. (2018). Due to the trend of deep space astro-photography and the associated long integration times (sometimes reaching over 100 hours), we have begun to uncover hidden objects which were previously not visible due to the magnitude limitations of sky surveys. Planetary Nebulae are one such class of objects being discovered primarily by amateur astronomers, e.g. D0 et al. (2022). Being almost invisible in sky surveys, they are brought out by deep imaging of sectors with potential candidates using narrow-band filters and advanced processing techniques. 
The aim of this paper, alongside describing two previously uncatalogued galaxies, is to promote scientific discovery and research for the amateur community and to introduce the latest accessible computational methods which can be easily used for preliminary analysis. The related intermediate processing steps are also detailed to make it easy to understand. ## 2 Discovery The galaxy candidate Shaheer I was found during visual inspection of the digital pan-STARRS survey near the MCG+12-06-003 group of galaxies in the camelopardalis constellation. Due to its unusual shape and dim profile it caught interest leading to its coordinates being noted down. On searching through various catalogues and online databases, it was established that automated surveys had not detected the objects as a potential galaxy. After the field of view around Shaheer I was imaged at Taqwa observatory (see3.1) and inspected, Shaheer II was found just 6.67 arc minutes to the south. Despite its brighter surface profile and obvious galactic nucleus, it had also not been detected in galactic surveys however some point source astrometric and photometric data (see table 2) is available through the extended Gaia-PS1-SDSS (GPS1+) proper motion catalog (Tian et al., 2020) for Shaheer II and the pan-STARRS release (PS1) survey for Shaheer I. See table 1 for coordinates. ## 3 Imaging Imaging was done at Taqwa Space Observatory (26\({}^{\circ}\)27\({}^{\circ}\)35.0\({}^{\prime}\)N 66\({}^{\circ}\)18\({}^{\prime}\)28.0\({}^{\prime}\)E) (see appendix A) using two telescopes. Luminance data was taken using a Meade lx200 acf 16 inch SCT with an FLI ML29052 CCD operating at -20\({}^{\circ}\)C. RBG data was taken through a Celestron edge HD 8 inch SCT and an ASI294mc CMOS colour camera operating at -15\({}^{\circ}\)C. The data was stacked in APP (Astro Pixel Processor) and no further editing was done. ### Luminance Luminance here will refer to filter-less, monochrome imaging (Fig. 1, 2) of the objects to try and resolve surface features. We imaged a large field of view containing Shaheer I and II for a total integration time for 37,080 second (10.3 hours) over multiple nights in February 2023. Single exposures of 180 second were used taking in consideration tracking errors. With this setup we had an effective angular resolution of 0.28\({}^{\prime}\)/px. ### RGB colour Colour data (Fig. 3) was imaged for a total integration time of 29,760 seconds (8.2 hours) with single exposures of 120 seconds. The CMOS camera used and the smaller aperture telescope limited the resolution available for the colour images in comparison to the high resolution luminance images. ## 4 Photometry ### Source selection Photometry was performed using the newly developed PetroFit (Geda et al., 2022) which helps create a petrosian profile (Petrosian, 1976) and fitting model. It was used to calculate various structural parameters of the galaxies, namely; Ellipticity, elongation, Sersic index and Half-light radius (\(r_{eff}\)). PetroFit also calculated the noise levels of the luminance images which serves to estimate detection thresholds and background subtraction. The sources in each image were then \begin{table} \begin{tabular}{l c c} \hline \hline & Shaheer I & Shaheer II \\ \hline RA(J2000) & 05 25 15.26 & 05 24 54.28 \\ Dec(J2000) & +72 56 31.06 & +72 49 58.52 \\ \hline \hline \end{tabular} \end{table} Table 1: Coordinates of both galaxy candidates. Figure 1: cropped frame showcasing Shaheer I. The tooth like structure is evident. 20 percent dynamic stretch. 
FOV 2.4’ x 2.4’ Figure 3: Shaheer I (left), bluish colour is visible. Stretched to 20 percent of the dynamic range as it is a dim object. Saturation applied to enhance colour. Shaheer II (right), whitish with hints of faint blue. Stretched to 15 percent of the dynamic range. Figure 2: cropped frame showcasing Shaheer II. Obvious nucleus and cluster regions on rim visible. 15 percent dynamic stretch as this object is brighter. FOV 2.4’ x 2.4’ identified and segmented (fig 4) using a detection threshold of \(2\sigma\) ( where \(\sigma\) is the data standard deviation.) and a minimum source size of 5 pixels. A gaussian Kernel size of 3 was chosen with FWHM values of 5 and 3 for Shaheer II and Shaheer I respectively. This was due to the differing surface brightness of both objects. Segment de-blending was used for Shaheer II to isolate the star near its rim while no de-blending was necessary for Shaheer I. A point spread function (PSF) of the sensor used was estimated by selecting one bright star from the FOV and normalizing it, see fig 5. ### Curve of Growth The radial profiles (fig 7, 8) of both candidates in the x and y axis along with the enclosed flux curve of growth (fig 6) was modelled using PetroFit's source photometry. Note that the center of the object is automatically determined, in the case of Shaheer I, its asymmetric shape causes some degree of error. The background is subtracted to ensure the light profile of the galaxy is measured only. ### Petrosian profile Using the fluxes found through the apertures, the petrosian profile (fig 9) was created from which the petrosian radius, effective half light radius, total flux radius and concentration indexes for both were found. These values are listed in table 2. Values for \(\epsilon\) were changed from the standard value of 2 used by the Sloan Digital Sky Survey (SDDS, Strauss et al. (2002)). This was done to keep the total flux radius within where the petrosian values lie \(1\geq\eta\geq 0\). For \(\epsilon=2\) Figure 4: FITS data and isolated sources for Shaheer II (left) and Shaheer I (right). Note no deblending of sources in the right image. Figure 5: Normalized star PSF cutout from the FOV in odd dimensions of 53x53 pixels. Figure 6: Curve of growths and PetroFit’s apertures visualized on the targets. Shaheer I bottom, Shaheer II on top. Note each pixel is 0.28 arcsec. Shaheer I is stretched to a greater degree to make it more apparent. Figure 7: Radial profile along x and y axis for Shaheer I. the total flux radius would lie where \(\eta<0\). PetroFit plots the curves of growth (in green) along with the petrosian profile and marks other important values as well. Fig 10 visualizes the various radii on the images of each galaxy. ### Sersic model Finally Sersic models were created (fig 11), which is done using the PSF (shown earlier in fig 6). No oversampling or weights were used. The sersic index, \(n\), was estimated using the fitted models, see table 2. Residuals were produced by subtracting the model from the data. the regions owing to the blue-white colour profile. The presence of these brighter regions are only visible through current images in the top rim of the galaxy, while the southern region is a diffuse halo. Despite its apparent symmetry, the diffuse halo might extend a bit further outwards. 
The large proper motion of the source found in GPS1+ (see table 2) suggests that it is a high propermotion/velocity galaxy (thus the title cosmic bullet) creating a cometary phenomena similar to the "Comet galaxy" introduced in Cortese et al. (2007). In terms of gravitational influences, there are two potential attractor's which might be pulling Shaheer II creating its terdory shape. The first is a galaxy, 2MASX J05251648+7253484, (2MASS; Skrutskie et al. 2006), which is closest in terms of angular distances (see fig 12). The direction of the top rim coincides with the direction of 2MASX J05251648+7253484. The second attractor candidate would be MCG+12-06-003 (fig 13), which is part of a group of galaxies 11.18 arc-minutes away, which also appear to be interacting with each other. Directions to both galaxies are shown in fig 14. Spectral analysis will be needed to calculate further parameter. Deeper, higher resolution imaging through a larger ground or space based telescope would be required to resolve the regions and complete structure of each galaxy. ## 6 Conclusions This paper presents the discovery and analysis of two new galaxies, Shaheer I and Shaheer II with images taken through amateur telescopes at the Taqwa space observatory. The galaxies were found through visual inspection of the pan-STARRS survey. The images which have resolutions and limiting magnitudes comparable to many optical surveys of the night sky, help prove that amateur contributions towards astronomical research can go a long way. Using the data taken through our observatory, photometric analysis was performed using PetroFit (while detailing each step for reproducability). I extracted as much quantitative data along with qualitative data, through visual and photometric analysis, as was possible. Deeper imaging of both galaxies will be necessary to estimate distances and to probe further into the nature of both galaxies, especially for Shaheer I. I propose further collaborative research incorporating the use of larger telescopes and photometric filters to obtain more precise data. I also invite the amateur community to take deeper, long integration, images of the galaxies and perform any kinds of analyses possible. Data to be found in future studies would include detailed morphological classification and spectra along with investigating and proving the high proper motion of Shaheer II. Figure 11: Sérsic fitted models and residuals. Figure 12: The galaxy 2MASX J05251648+7253484 and Shaheer II, line drawn between them, the angular distance between them is 4.164 arcsec. Image cropped from the total FOV taken from Taqwa observatory. Figure 13: MCG+12-06-003 and associated galaxy group. MCG+12-06-003 seems to be interacting with the galaxy to its right with visible tidal streams, albeit faint. The black arrow points towards the direction of Shaheer II. Image cropped from the total FOV taken from Taqwa observatory. ## Acknowledgements All images and data was capture at Taqwa space observatory in Bela, Balochistan, Pakistan, see Appendix A. Thanks to the Taqwa team for their great setup and for the imaging time. This research made use of Photutils, an Astropy package for detection and photometry of astronomical sources (Bradley et al., 2022). This research made use of PetroFit (Geda et al., 2022), a package based on Photutils, for calculating Petrosian properties and fitting galaxy light profiles. ## Data Availability PetroFit can be found at github.com/PetroFit/petrofit/tree/main. 
Extracted astrometry and photometry (magnitudes and proper motions) are mentioned in the article and can be found on VizieR: PS1 (pan-STARRS data release 1) obj id: 195530813054621280 and GPS1+ obj id: -9050757570386836921. Astro Pixel Processor (APP) can be purchased from www.astropixelprocessor.com/.
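For readers who want to reproduce the photometry described in Section 4, the sketch below approximates the source-selection and Petrosian steps using the `photutils` primitives on which PetroFit is built. The 2\(\sigma\) detection threshold, 5-pixel minimum source size, kernel size of 3 and FWHM of 5 pixels (Shaheer II) follow the text; the file name, the choice to call `photutils` directly rather than PetroFit's own wrappers, and the particular form of the Petrosian ratio \(\eta(r)\) used here (mean surface brightness in a thin annulus at \(r\) divided by the mean surface brightness within \(r\)) are assumptions.

```python
# Minimal sketch of the source-detection and Petrosian steps (assumptions noted above).
import numpy as np
from astropy.io import fits
from astropy.convolution import Gaussian2DKernel, convolve
from astropy.stats import gaussian_fwhm_to_sigma
from photutils.aperture import CircularAperture, CircularAnnulus, aperture_photometry
from photutils.segmentation import detect_threshold, detect_sources, deblend_sources

data = fits.getdata("shaheer_II_luminance.fits")           # hypothetical file name

# Source selection: 2-sigma threshold, Gaussian kernel of size 3 (FWHM = 5 px), >= 5 px sources
threshold = detect_threshold(data, nsigma=2.0)
kernel = Gaussian2DKernel(5.0 * gaussian_fwhm_to_sigma, x_size=3, y_size=3)
convolved = convolve(data, kernel)
segm = detect_sources(convolved, threshold, npixels=5)
segm = deblend_sources(convolved, segm, npixels=5)          # isolate the star near the rim

# Curve of growth and Petrosian ratio about a chosen source centre (xc, yc)
def curve_of_growth(img, center, radii):
    """Enclosed flux within each circular radius (background-subtracted image)."""
    return np.array([aperture_photometry(img, CircularAperture(center, r))["aperture_sum"][0]
                     for r in radii])

def petrosian_eta(img, center, radii, dr=1.0):
    """Petrosian ratio eta(r) for each radius in `radii` (pixels)."""
    radii = np.asarray(radii, dtype=float)
    flux = curve_of_growth(img, center, radii)
    eta = np.empty_like(radii)
    for i, r in enumerate(radii):
        annulus = CircularAnnulus(center, r, r + dr)
        annulus_flux = aperture_photometry(img, annulus)["aperture_sum"][0]
        eta[i] = (annulus_flux / annulus.area) / (flux[i] / (np.pi * r ** 2))
    return eta

# Example usage (each pixel is 0.28 arcsec in the luminance images):
# radii = np.arange(2.0, 80.0, 2.0)
# eta = petrosian_eta(background_subtracted, (xc, yc), radii)
# r_petro = radii[np.argmin(np.abs(eta - 0.2))]   # common eta = 0.2 convention
```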
There are countless digital sky surveys and automated scans of the night sky that use computer algorithms to detect and categorize objects. With the advent of artificial intelligence, such surveys will become even more efficient in the near future. Despite this, some objects are missed by the surveys or attract no initial interest. Such missed objects are at times unique in nature and of decent angular size, demanding study, unlike the billions of tiny specks of galaxies that would be too tedious to name and study. In this scenario, the amateur astronomer and their spirit for old-school astronomical discovery step in, manually combing the sky and cataloguing unique objects. In this paper, two unique, previously uncatalogued galaxy candidates, Shaheer I and Shaheer II, are identified and studied.
2309.05896
Silicon charge pump operation limit above and below liquid helium temperature
Semiconductor tunable barrier single-electron pumps can produce output current of hundreds of picoamperes at sub ppm precision, approaching the metrological requirement for the direct implementation of the current standard. Here, we operate a silicon metal-oxide-semiconductor electron pump up to a temperature of 14 K to understand the temperature effect on charge pumping accuracy. The uncertainty of the charge pump is tunnel limited below liquid helium temperature, implying lowering the temperature further does not greatly suppress errors. Hence, highly accurate charge pumps could be confidently achieved in a $^4$He cryogenic system, further promoting utilization of the revised quantum current standard across the national measurement institutes and industries worldwide.
Ajit Dash, Steve Yianni, MengKe Feng, Fay Hudson, Andre Saraiva, Andrew S. Dzurak, Tuomo Tanttu
2023-09-12T00:42:47
http://arxiv.org/abs/2309.05896v1
# Silicon charge pump operation limit above and below liquid helium temperature ###### Abstract Semiconductor tunable barrier single-electron pumps can produce output current of hundreds of picoamperes at sub ppm precision, approaching the metrological requirement for the direct implementation of the current standard. Here, we operate a silicon metal-oxide-semiconductor electron pump up to a temperature of 14 K to understand the temperature effect on charge pumping accuracy. The uncertainty of the charge pump is tunnel limited below liquid helium temperature, implying lowering the temperature further does not greatly suppress errors. Hence, highly accurate charge pumps could be confidently achieved in a \({}^{4}\)He cryogenic system, further promoting utilization of the revised quantum current standard across the national measurement institutes and industries worldwide. The seven base International Systems of Units (SI) serve as basis for measuring any physical quantity. Refining these units over the years aims to ascertain a consistent and universal metrological standard. Recent revision of SI suggests the use of quantized charge pump for a practical realization of the primary current standard, by agreeing a fixed value of elementary charge \(e\) (\(=1.602176634\times 10^{-19}\) A\(\cdot\) s) [1; 2]. A charge pump is a nanoelectronic device that transfers integer \(n\) number of electrons, holes or cooper pairs per voltage cycle with frequency \(f\), yielding quantized current \(I\) (\(=n\times e\times f\)). A clock-controlled on-demand charge emitting characteristics of charge pump also attracts attention in the field of quantum information processing and quantum optics [3; 4]. Significant research has been pursued by the national measurement institutes and academia to realize quantized pumping of quasi-particles in variety of metal, superconductor, metal-superconductor hybrids and semiconductor systems [5]. Silicon metal-oxide-semiconductor (SiMOS) nanostructure based charge pumps has evinced the potential of practical realization of the SI Ampere by demonstrating remarkable combination of pumping speed and fidelity [6; 7; 8; 9; 10; 11]. Besides, quantum devices fabricated on Si have exhibited significant reduction of \(1/f\) noise and background charge fluctuations at regimes of high amplitude operations [12; 13]. SiMOS gate-stack technology enables fabrication of multi-layer top gates, facilitating strong planar electrostatic confinement to define a quantum dot (QD) [14]. Owing to the small physical size, the electronically defined QD in Si has high charging energy, hence capable of transferring discrete number of charges at a base temperature of sample-space (\(T_{\text{base}}\)) up to few kelvins [7; 9; 10; 11; 15]. However, the influence of temperature on the charge pumping accuracy has not been assessed in any physical system. Understanding the temperature limit is crucial to choose an optimal \(T_{\text{base}}\), which might further relax the requirement of \({}^{3}\)He or dilution refrigerator. In this work, we realize quantized electron pumping in a voltage-induced Si QD up to a \(T_{\text{base}}\) of 14 K. Later, we fit the measured current plateaus to decay-cascade and thermal model of charge transfer to quantify the charge capturing mechanism as a function of \(T_{\text{base}}\) and periodic drive amplitude. 
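As a quick numerical check on the quantized current relation \(I=n\times e\times f\) and the normalization \(I/ef\) used throughout, the plateau currents implied by the 10 ns period (100 MHz) drive described above follow directly; the short snippet below is purely illustrative.

```python
# Expected plateau currents I = n * e * f for the 100 MHz drive.
e = 1.602176634e-19      # elementary charge, exact in the revised SI (A*s)
f = 100e6                # pumping frequency (Hz), i.e. a 10 ns drive period

for n in (1, 2):
    current = n * e * f
    print(f"n = {n}: I = {current:.4e} A = {current * 1e12:.2f} pA")
# n = 1 gives ~16.02 pA and n = 2 gives ~32.04 pA on the first and second plateaus.
```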
We realize the transition temperature between tunnel limited and thermally limited electron capture mechanism in our Si based charge pump is higher than the liquid helium temperature (\(\approx\) 4.2 K) for a wide range of pulsing drive amplitude while pumping single-electron per voltage cycle of 100 MHz frequency. These results forecast the theoretical errors of the charge pump when operated above and below the transition temperature. The charge pump measured in this work is fabricated on a near-intrinsic silicon substrate with a 7 nm thick thermally grown silicon dioxide (SiO\({}_{2}\)) layer. The aluminum (Al) gate-stack architecture in Figs. 1(a, b) is realized by defining the device morphology with electron-beam-lithography, followed by thermal evaporation of subsequent three Al metal layers. Al top-gates are electrically insulated from the adjoining metal gates by thermally growing aluminium oxide (Al\({}_{\text{x}}\)O\({}_{\text{y}}\)) of 3 nm thickness between each layer. Top-gates are connected to a programmable room-temperature DC bias source through 300 MHz cryogenic low-pass-filter. A QD is electrically induced under plunger gate (PL) by tuning the planar confinement potential (\(V_{\text{C1}}\) and \(V_{\text{C2}}\)) and tunnel barrier potential (\(V_{\text{BL}}\) and \(V_{\text{BR}}\)). Clock-controlled charge transfer characteristics of the pump is instigated by adding an AC \(sine\) waveform with a time-period of 10 ns, generated using an arbitrary-waveform-generator connected to BL pulsing barrier gate in Fig. 1(a). The periodic drive modulates BL barrier potential as show in Fig. 1(b) to load electron from the source reservoir (i), followed by capturing the electron in the QD (ii) and finally unloading it to drain reservoir (iii), generating a pump current (\(I\)). The output current is pre-amplified with a gain of \(10^{8}\) V/ A using a transimpedance-amplifier and measured using a voltmeter by integrating over a time of 20 ms, or one power line cycle. The pumped current is normalized as \(I/ef\) to elucidate the average number of electrons pumped per AC voltage cycle \(\langle n\rangle\). All the measurements are performed in a variable-temperature-insert at \(T_{\rm base}\) that ranges from 2 K to 14 K with a cryogenic temperature controller. A single sweep of measured \(T_{\rm base}\) dependent normalized pump current as a function of plunger gate voltage \(V_{\rm PL}\) at a constant AC periodic drive amplitude \(\widetilde{V}_{\rm BL}\) of 350 mV is displayed in Fig. 1(c). We observe smoothing rise to the \(\langle n\rangle\) plateau with increasing \(T_{\rm base}\) from 2 K to 14 K, in Fig. 1(c). This corroborates the occurrence of dissimilar electron transfer mechanism in our charge pump, which is explained by decay-cascade [16] and thermal [6; 17] models by elaborating the process of periodic decoupling of QD from source reservoir lead. The decay-cascade model assumes that the dominant charge pumping error mechanism is a series of non-equilibrium electron escape events back to the source reservoir to yield \(\langle n_{\rm D}\rangle\) number of trapped electrons in the QD, given as: \[\langle n_{\rm D}\rangle=\sum_{i=1}^{2}\exp\left\{-\exp\left[\alpha_{\rm D_{ i}}^{*}(V_{\rm PL}-V_{\rm o_{i},D})\right]\right\} \tag{1}\] where \(\alpha_{\rm D}^{*}\) is the gate-referred tunnel rate factor and \(V_{\rm 0,D}\) is the threshold voltage obtained from decay-cascade fit of the normalized pump current. 
The double-exponential function describing the decay-cascade regime in Eq. 1 analytically has an asymmetric rise shape. At elevated temperatures, the broader energy spectrum of the electron reservoir becomes the dominating error process, to capture \(\langle n_{\rm T}\rangle\) number of electrons in the QD [6]. The average number of electrons pumped in the thermal regime is ascertained by the Fermi distribution of electrons in the source reservoir leads at thermal equilibrium, expressed as: \[\langle n_{\rm T}\rangle=\sum_{i=1}^{2}1/\{1+\exp\left[\beta_{\rm T_{i}}^{*}(V _{\rm PL}-V_{\rm o_{i},T})\right]\} \tag{2}\] where \(\beta_{\rm T}^{*}\) is the gate-referred modified thermodynamic beta and \(V_{\rm 0,T}\) is the threshold voltage obtained from the thermal fit of the normalized pump current, which analytically ascribe to a symmetrical rise shape. The phenomenological fit parameter \(\beta_{\rm T}^{*}\) corresponds to heat induced by the AC periodic drive, required to prompt the charge pumping process, and inferred as \(\beta_{\rm T}^{*}=(e\cdot\alpha_{\rm PL-QD})/(k_{B}\cdot T_{\rm pump})\). Here, \(\alpha_{\rm PL-QD}=e(\Delta V_{\rm SD}/\Delta V_{\rm PL})\) is lever-arm of PL to pump QD and \(T_{\rm pump}\) is the local electron temperature at the source and drain reservoir Figure 1: (a) False-color scanning-electron-micrograph of an electron pump similar to one used in the experiment together with schematic of the measurement setup. (b) Three-dimensional cross-section schematic of the charge pump along the black dashed line showing the three layers (yellow: 20 nm, red: 27 nm and violet: 35 nm) gate-stack architecture. (i) load, (ii) capture and (iii) unload illustrate conduction-band energy level profile during three stages of an electron (green dot) pumping cycle. (c) Average number of pumped electrons per AC voltage cycle \(\langle n\rangle\) as a function of plunger gate voltage \(V_{\rm PL}\) with varying base temperature of sample-space \(T_{\rm base}\) up to 14 K. (d) Plateaus of measured \(\langle n\rangle\) at \(T_{\rm base}\) 2 K, 6 K, and 14 K along with its fit to decay-cascade model (D), thermal model (T), and weighed sum of decay-cascade and thermal model (DT) of charge pumping. Insets show the zoomed-in axes at the raising edge of first and second plateaus. The data is horizontally shifted for clarity. leads. We calculated \(\alpha_{\rm PL-QD}\) from the slope of experimentally measured \(V_{\rm PL}\) versus \(V_{\rm SD}\) when an electron is added to the QD from the source reservoir. We suspect involvement of both the non-equilibrium (decay-cascade) and equilibrium (thermal) charge capturing mechanism in the pumping process. 
Therefore, we propose a weighed sum of decay-cascade and thermal model, quoted as combined model, given as: \[\langle n_{\rm DT}\rangle=\sum_{i=1}^{2}(\zeta_{\rm DT_{i}})\cdot \exp\left\{-\exp\left[\alpha_{\rm DT_{i}}^{*}(V_{\rm PL}-V_{0_{i},\rm DT_{\rm T }})\right]\right\}\\ +(1-\zeta_{\rm DT_{i}})\cdot 1/\{1+\exp\left[\beta_{\rm DT_{i}}^{*}(V_{ \rm PL}-V_{0_{i},\rm DT_{\rm T}})\right]\} \tag{3}\] where \(\zeta_{\rm DT}\) is weight of the non-equilibrium decay-cascade component in the combined model, having statistical bounds of confidence interval between 0 and 1, \(\alpha_{\rm DT}^{*}\) is the temperature-independent gate-referred tunnel rate constant obtained from decay-cascade fit, \(1-\zeta_{\rm DT}\) is weight of the equilibrium thermal component in the combined model, \(\beta_{\rm DT}^{*}\) is the gate-referred modified thermodynamic beta of the thermal component in the combined model, and \(V_{0,\rm DT_{\rm B}}\) and \(V_{0,\rm DT_{\rm T}}\) are the threshold voltage of decay-cascade and thermal component in the combined model, respectively. To investigate the dependency of temperature on the charge pumping mechanism, we fit the \(\langle n\rangle\) associated with one and two electron pumping plateau, measured as a function of \(V_{\rm PL}\) at varied \(T_{\rm base}\), while keeping the top gate DC voltages constant at \(V_{\rm BL}\) = 1.20 V, \(V_{\rm BR}\) = 2.28 V, \(V_{\rm SL}\) = 2.2 V, \(V_{\rm DL}\) = 2.2 V, \(V_{\rm C1}\) = 0 V and \(V_{\rm C2}\) = 0 V. Fitting of the measured data to the aforementioned decay-cascade (in Eq. 1), thermal (in Eq. 2) and combined (in Eq. 3) models, are shown in Fig. 1(d). The magnified-axes in the insets of Fig. 1(d), visibly depict the decay-cascade model fits better at lower \(T_{\rm base}\). However, with an increment of \(T_{\rm base}\) the measured plateaus starts agreeing more with the thermal model. To quantify the electron transfer mechanism as a function of \(T_{\rm base}\) we implement two approaches. First, individually determining the average residual sum of squares of the decay-cascade (\(\mu_{RSS,\rm D}\)) and thermal (\(\mu_{RSS,\rm T}\)) fit to the experimental data. A statistical value of \(\Delta\mu_{RSS}\) (= \(\mu_{RSS,\rm D}-\mu_{RSS,\rm T}\)) less than zero implies a better fit to the decay-cascade model, whereas a positive \(\Delta\mu_{RSS}\) indicates the paramountcy of the thermal model, in Fig. 2(a). Second, we benchmark the weight component of \(\zeta_{\rm DT}\) at 0.5, signifying the transition from the regions where decay-cascade or thermal model fit better, in Fig. 2(b). The operating regime of charge pump is purely decay-cascade if \(\zeta_{\rm DT}\) is about unity, in contrast estimation of \(\zeta_{\rm DT}\) close to zero, corresponds to thermal mechanism of electron capture in the QD. From the results in Fig. 2(a, b), we find our charge pump operates in decay-cascade regime at \(T_{\rm base}\) less than 5 K while transferring single Figure 2: Phenomenological fitting parameters along with error bars, extracted from the decay-cascade model (D), thermal model (T), and weighed sum of decay-cascade and thermal model (DT) charge pumping for first and second plateau as a function of base temperature of sample-space \(T_{\rm base}\). (a) Difference between the average residual sum of squares of decay-cascade and thermal fits of the measured data (\(\Delta\mu_{RSS}\)). (b) Weight component of decay-cascade (\(\zeta_{\rm DT}\)), and thermal (\(1-\zeta_{\rm DT}\)) in the combined model. 
(c) Gate-referred tunnel rate factor (\(\alpha_{\rm D}^{*}\)). (d) Local electron temperature at the source and drain reservoir leads (\(T_{\rm pump}\)). The difference between the black-dashed line (at \(T_{\rm base}\)) and \(T_{\rm pump}\) depict the AC periodic drive induced heating in the pump (\(\Delta T_{\rm pump}\)). (e) \(\Delta\mu_{RSS}\), (f) \(\zeta_{\rm DT}\), (g) \(\alpha_{\rm D}^{*}\) and (h) \(\Delta T_{\rm pump}\) for single-electron pumping, measured as a function of \(T_{\rm base}\) and drive amplitude (\(\widetilde{V}_{\rm BL}\)). electron per voltage cycle. However, pumping two electrons shifts this decay-cascade to thermal transition below a temperature of 4 K. The phenomenological fit parameter \(\alpha_{\rm D}^{*}\) relate to tunneling rate of excess electrons (\(\Gamma_{\rm D_{i}}^{\rm escape}\)), escaping back to the source reservoir leaving \(\langle n\rangle\) number of electrons in the QD. One can theoretically forecast the lower bound of uncertainties encountered during the process of pumping from the difference \((\alpha_{\rm D_{2}}^{*}\cdot V_{0_{2},{\rm D}})-(\alpha_{\rm D_{1}}^{*}\cdot V _{0_{1},{\rm D}})\), equivalent to the difference of back tunneling rate of excess electrons \(\ln\Gamma_{\rm D_{2}}^{\rm escape}-\ln\Gamma_{\rm D_{1}}^{\rm escape}\)[16; 18]. Although \(\alpha_{\rm D}^{*}\) is independent of temperature, the regression analysis results deduced a decrement in \(\alpha_{\rm D_{1}}^{*}\) and \(\alpha_{\rm D_{2}}^{*}\), when the charge pump is operated in the thermal regime, corresponding to a \(T_{\rm base}\) higher than the transition temperature. In contrast, we observe saturation of \(\alpha_{\rm D}^{*}\) for both one and two electron transfer, when \(T_{\rm base}\) is lower than the transition temperature, in Fig. 2(c). Although the charge pump is cooled to \(T_{\rm base}\) temperature, the local electron temperature in the source and drain reservoir leads (\(T_{\rm pump}\)) may be higher due to AC periodic drive induced heating. In order to assess, the \(T_{\rm pump}\) whilst pumping one and two electrons per voltage cycle, we infer the gate-referred modified thermodynamic beta extracted from the thermal component of the combined model to evaluate the value of \(T_{\rm pump_{i}}=(e\cdot\alpha_{\rm PL-QD})/(k_{B}\cdot\beta_{\rm DT_{i}}^{*})\). To pursue a fair comparison of local electron temperature at the source reservoir as a function of varying \(T_{\rm base}\), we evaluate the heat induced in the pump \(\Delta T_{\rm pump}=T_{\rm pump}-T_{\rm base}\), in Fig. 2(d). We assess, the heat induced due to AC periodic drive in our system is higher during transferring single-electron per cycle, when compared to that of two electrons transfer. However, the number of electrons existing in the QD before initialization of charge pumping process might have an influence in the value of \(\Delta T_{\rm pump}\). Therefore, the gate-referred modified thermodynamic beta used to deduce the \(T_{\rm pump}\) for two electrons plateau deserves further theoretical and experimental investigation. Next, we turn our attention towards the impact of \(\widetilde{V}_{\rm BL}\) on the single-electron pumping fit parameters (\(\Delta\mu_{RSS}\), \(\zeta_{\rm DT}\), \(\alpha_{\rm D}^{*}\) and \(\Delta T_{\rm pump}\)) as a function of \(T_{\rm base}\) and \(\widetilde{V}_{\rm BL}\). 
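The plateau models of Eqs. (1)-(3) are straightforward to implement and fit against the measured \(\langle n\rangle=I/(ef)\) versus \(V_{\rm PL}\) sweeps; a minimal sketch is given below, following the equations as written. The use of `scipy.optimize.curve_fit`, the parameter ordering, and the choice of initial guesses and bounds (including keeping the weights \(\zeta\) within [0, 1]) are assumptions for illustration, not the authors' fitting code.

```python
# Sketch of the decay-cascade (Eq. 1), thermal (Eq. 2) and combined (Eq. 3) models.
import numpy as np
from scipy.optimize import curve_fit

def decay_cascade(V, a1, V01, a2, V02):
    """Eq. (1): sum of two double-exponential steps."""
    return np.exp(-np.exp(a1 * (V - V01))) + np.exp(-np.exp(a2 * (V - V02)))

def thermal(V, b1, V01, b2, V02):
    """Eq. (2): sum of two Fermi-like steps."""
    return 1.0 / (1.0 + np.exp(b1 * (V - V01))) + 1.0 / (1.0 + np.exp(b2 * (V - V02)))

def combined(V, z1, a1, b1, V1d, V1t, z2, a2, b2, V2d, V2t):
    """Eq. (3): weighted sum of decay-cascade and thermal components per plateau."""
    total = 0.0
    for z, a, b, Vd, Vt in [(z1, a1, b1, V1d, V1t), (z2, a2, b2, V2d, V2t)]:
        total += z * np.exp(-np.exp(a * (V - Vd))) \
                 + (1.0 - z) / (1.0 + np.exp(b * (V - Vt)))
    return total

# Example fit to a single sweep (V_PL, n_meas are 1-D arrays of the measured trace):
# popt_D, _ = curve_fit(decay_cascade, V_PL, n_meas, p0=p0_decay)
# popt_T, _ = curve_fit(thermal,       V_PL, n_meas, p0=p0_thermal)
# popt_DT, _ = curve_fit(combined, V_PL, n_meas, p0=p0_combined, bounds=bounds_with_zeta_in_0_1)
```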
The top-gate DC voltages are kept constant at \(V_{\rm BL}=1.32\) V, \(V_{\rm BR}=2.38\) V, \(V_{\rm SL}=2.2\) V, \(V_{\rm DL}=2.2\) V, \(V_{\rm C1}=\) -0.03 V and \(V_{\rm C2}=0\) V while repeating the measurement for different values of \(\widetilde{V}_{\rm BL}\) and \(T_{\rm base}\). The \(\Delta\mu_{RSS}\) and \(\zeta_{\rm DT}\) in Fig. 2(e) and Fig. 2(f), respectively, deduce a leftward shift of the transition temperature with rising \(\widetilde{V}_{\rm BL}\). In addition, it is worth noting the value of \(\alpha_{\rm D}^{*}\) deteriorates with increment in \(\widetilde{V}_{\rm BL}\) in Fig. 2(g). This informs us the importance of tuning the \(\widetilde{V}_{\rm BL}\) amplitude. Further, the evaluated value of \(\Delta T_{\rm pump}\) support the statement as at a particular \(T_{\rm base}\) the heat induced is directly proportional to \(\widetilde{V}_{\rm BL}\), in Fig. 2(h). In order to pursue a qualitative investigation of the single-electron pumping uncertainties, we follow a theoretical approach to assess the lower bound of error occurring during the single-electron pumping processes [16]. To determine the charge pumping error \(\epsilon_{\rm pump}\) as a function of \(\widetilde{V}_{\rm BL}\) and \(T_{\rm base}\), we use the single-electron transfer experimental data, which is measured as a function of \(V_{\rm PL}\) by varying \(\widetilde{V}_{\rm BL}\) and \(T_{\rm base}\), a single sweep of whose (at \(\widetilde{V}_{\rm BL}=400\)) is shown in Fig. 1 (c). The \(\epsilon_{\rm pump}\) is given as \(1-\langle n_{\rm D}\rangle\), at the point of inflection (\(V_{\rm PL}^{*}\)) on \(\langle n\rangle=1\) plateau. While calculating the \(\epsilon_{\rm pump}\), we assumed \(\alpha_{\rm D_{1}}^{*}=\alpha_{\rm D_{2}}^{*}\) and \(\Delta V_{0,{\rm D}}(=V_{0_{2},{\rm D}}-V_{0_{1},{\rm D}})\) is independent of \(\widetilde{V}_{\rm BL}\). The lower bound of the evaluated single-electron pumping error as a function of \(T_{\rm base}\) and \(\widetilde{V}_{\rm BL}\) is illustrated in Fig.3. The theoretical error rate of our Si single-electron pump when operated at a \(T_{\rm base}\) of 2 K and \(\widetilde{V}_{\rm BL}=350\) mV is 1.72 ppm. However, the theoretical uncertainty figure might be overestimated compared to our charge pump capability due to the normal accuracy measurements using the voltmeter [19]. At a constant \(T_{\rm base}\), the reduction in \(\widetilde{V}_{\rm BL}\) lead to sharper edges of the plateau, and theoretically lower error bound. Besides, the captivating results delineate, single-electron pumping error is almost independent of \(T_{\rm base}\) when operated in a regime where an electron is captured in the pump QD by following a sequence of tunneling event back to the source reservoir. Overall, we showed that our Si single-electron pump is operable at liquid helium temperature with tunnel limited errors dominating the pump fidelity, indicating high precision metrological current measurements could be consistently done in cheap \({}^{4}\)He systems. This pave the path for transportable and scalable primary SI current standard by deploying SiMOS technology, which is Figure 3: Lower bound of single-electron pumping error (\(\epsilon_{\rm pump}\)) as a function of varying drive amplitude (\(\widetilde{V}_{\rm BL}\)) and base temperature of sample-space (\(T_{\rm base}\)). well-established across the semiconductor foundries. To future characterise the charge pumping errors we will use an on-chip charge sensor. A.D. 
performed all measurements and all calculations under T.T.'s supervision. S.Y. and F.H. fabricated the device under A.S.D's supervision. A.D., S.Y., M.K.F., A.S., and T.T. participated in data interpretation. A.D., S.Y., and T.T. designed the project and experimental setup. A.D. wrote the manuscript with contribution from all authors. We thank Md. Mamunur Rahman and Alexandra Dickie for assistance in the cryogenic setup. We acknowledge support from the Australian Research Council (DP200103515), the U.S. Army Research Office (W911NF-17-1-0198), and NSW node of the Australian National Fabrication Facility. A.D., and M.K.F. acknowledge scholarship support from the Sydney Quantum Academy, Australia.
Semiconductor tunable-barrier single-electron pumps can produce output currents of hundreds of picoamperes at sub-ppm precision, approaching the metrological requirement for the direct implementation of the current standard.
2301.00114
Skeletal Video Anomaly Detection using Deep Learning: Survey, Challenges and Future Directions
The existing methods for video anomaly detection mostly utilize videos containing identifiable facial and appearance-based features. The use of videos with identifiable faces raises privacy concerns, especially when used in a hospital or community-based setting. Appearance-based features can also be sensitive to pixel-based noise, straining the anomaly detection methods to model the changes in the background and making it difficult to focus on the actions of humans in the foreground. Structural information in the form of skeletons describing the human motion in the videos is privacy-protecting and can overcome some of the problems posed by appearance-based features. In this paper, we present a survey of privacy-protecting deep learning anomaly detection methods using skeletons extracted from videos. We present a novel taxonomy of algorithms based on the various learning approaches. We conclude that skeleton-based approaches for anomaly detection can be a plausible privacy-protecting alternative for video anomaly detection. Lastly, we identify major open research questions and provide guidelines to address them.
Pratik K. Mishra, Alex Mihailidis, Shehroz S. Khan
2022-12-31T04:11:25
http://arxiv.org/abs/2301.00114v4
# Skeletal Video Anomaly Detection using Deep Learning: Survey, Challenges and Future Directions ###### Abstract The existing methods for video anomaly detection mostly utilize videos containing identifiable facial and appearance-based features. The use of videos with identifiable faces raises privacy concerns, especially when used in a hospital or community-based setting. Appearance-based features can also be sensitive to pixel-based noise, straining the anomaly detection methods to model the changes in the background and making it difficult to focus on the actions of humans in the foreground. Structural information in the form of skeletons describing the human motion in the videos is privacy-protecting and can overcome some of the problems posed by appearance-based features. In this paper, we present a survey of privacy-protecting deep learning anomaly detection methods using skeletons extracted from videos. We present a novel taxonomy of algorithms based on the various learning approaches. We conclude that skeleton-based approaches for anomaly detection can be a plausible privacy-protecting alternative for video anomaly detection. Lastly, we identify major open research questions and provide guidelines to address them. skeleton, body joint, human pose, anomaly detection, video. ## I Introduction Anomalous events pertain to unusual or abnormal actions, behaviours or situations that can lead to health, safety and economical risks [1]. Anomalous events, by definition, are largely unseen and not much is known about them in advance [2]. Due to their rarity, diversity and infrequency, collecting labeled data for anomalous events can be very difficult or costly [1, 3]. With the lack of predetermined classes and a few labelled data for anomalous events, it can be very hard to train supervised machine learning models [1]. Therefore, a general approach in majority of anomaly detection algorithms is to train a model that can best represent the 'normal' events or actions, and any deviations from it can be flagged as an unseen anomaly [4]. Anomalous behaviours among humans can be attributed at an individual level (e.g., falls [5]) or multiple people in a scene (e.g., pedestrian crossing [6], violence in a crowded mall [7]). In the context of video-based anomaly detection, the general approach is to train a model to learn the patterns of actions or behaviours of individual(s), background and other semantic information in the normal activities videos, and identify significant deviations in the test videos as anomalies. However, anomaly detection is a challenging task due to the lack of labels and often times the unclear definition of an anomaly [2]. The majority of video-based anomaly detection approaches use RGB videos where the people in the scene are identifiable. While using RGB camera-based systems in public places (e.g., malls, airports) is generally acceptable, the situation can be very different in personal dwelling, community, residential or clinical settings [8]. In a home or residential setting (e.g., nursing homes), individuals or patients can be monitored in their personal space that may breach their privacy. The lack of measures to deal with the privacy of individuals can be a bottleneck in the adoption and deployment of the anomaly detection-based systems [9]. However, monitoring of people with physical, cognitive or aging issues is also important to improve their quality of life and care. 
Therefore, as a trade-off, privacy-protecting video modalities can fill that gap and be used in these settings to save lives and improve patient care. Wearable devices face compliance issues among certain populations, where people may forget or in some cases refuse to wear them [10]. Some of the privacy-protecting camera modalities that have been used in the past for anomaly detection involving humans include depth cameras [5, 11], thermal cameras [12], and infrared cameras [13, 14]. While these modalities can partially or fully obfuscate an individual's identity, they require specialized hardware or cameras and can be too expensive for use by the general population. Skeletons extracted from RGB camera streams using pose estimation algorithms provide a suitable privacy-protecting alternative to RGB and other types of cameras [15]. Skeleton tracking only focuses on body joints and ignores facial identity, full-body appearance and background information. The pixel-based features in RGB videos are sensitive to noise resulting from illumination, viewing direction and background clutter, which can mask important information about the scene and lead to false positives when detecting anomalies [16]. Furthermore, due to the redundant information present in these features (e.g., background), there is an increased burden on methods to model the change in those areas of the scene rather than focus on the actions of humans in the foreground. Extracting information specific to human actions can not only provide a privacy-protecting solution, but can also help to filter out background-related noise in the videos and help the model focus on the key information for detecting abnormal events related to human behaviour. Skeletons represent an efficient way to model human body joint positions over time and are robust to complex backgrounds, illumination changes, and dynamic camera scenes [17]. In addition to being privacy-protecting, skeleton features are compact, well-structured, semantically rich, and highly descriptive of human actions and motion [17]. Anomaly detection using skeleton tracking is an emerging area of research as awareness around the privacy of individuals and their data grows. However, skeleton-based approaches may not be sufficient for situations that explicitly need facial information for analysis, including emotion recognition [18, 19], pain detection [20] or remote heart monitoring [21], to name a few. In recent years, deep learning methods have been developed to use skeletons for different applications, such as action recognition [43], medical diagnosis [24], and sports analytics [44]. The use of skeletons for anomaly detection in videos is an under-explored area, and concerted research is needed [24]. Human skeletons can help in developing privacy-preserving solutions for private dwellings, crowded/public areas, medical settings, rehabilitation centers and long-term care homes to detect anomalous events that impact the health and safety of individuals. Use of this type of approach could improve the adoption of video-based monitoring systems in home and residential settings. However, there is a paucity of literature on the existing techniques that use skeleton-based anomaly detection approaches. We identify this gap in the literature and present one of the first surveys on recent advancements in using skeletons for anomaly detection in videos. 
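Since all of the approaches reviewed in the following sections operate on skeletons estimated from video frames, a minimal sketch of this common pre-processing step is included here for concreteness. It uses a pretrained Keypoint R-CNN from torchvision as one readily available pose estimator; the surveyed papers mostly rely on AlphaPose or OpenPose instead, and the function name and score threshold below are illustrative assumptions rather than choices made in any of the reviewed works.

```python
# Minimal sketch: per-frame skeleton extraction with a pretrained Keypoint R-CNN
# (torchvision >= 0.13). Any pose estimator that returns per-person body-joint
# coordinates (e.g., AlphaPose, OpenPose) can feed the anomaly detection models
# discussed in this survey.
import cv2
import torch
from torchvision.models.detection import keypointrcnn_resnet50_fpn

model = keypointrcnn_resnet50_fpn(weights="DEFAULT").eval()

def extract_skeletons(video_path, score_thresh=0.9):
    """Return one (num_people, 17, 3) keypoint array per frame: (x, y, visibility)."""
    cap = cv2.VideoCapture(video_path)
    skeletons_per_frame = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
        with torch.no_grad():
            out = model([tensor])[0]
        keep = out["scores"] > score_thresh          # drop low-confidence detections
        skeletons_per_frame.append(out["keypoints"][keep].numpy())
    cap.release()
    return skeletons_per_frame
```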
We identified the major themes in existing work and present a novel taxonomy that is based on how these methods learn to detect anomalous events. We also discuss the applications where these approaches were used to understand their potential in bringing these algorithms in a personal dwelling, or long-term care scenario. ## II Literature Survey We adopted a narrative literature review for this work. The following keywords (and their combinations) were used to search for relevant papers - skeleton, human pose, body pose, body joint, anomaly detection, and video. These keywords were searched on scholarly databases, including Google Scholar, IEEE Xplore, Elsevier and Springer. We mostly reviewed papers between year 2016 to year 2023; therefore, the list may not be comprehensive. In this review, we only focus on the recent deep learning-based algorithms for skeletal video anomaly detection and do not include traditional machine learning-based approaches. We did not adopt the systematic or scoping review search protocol for this work; therefore, our literature review may not be exhaustive. However, we tried our best to include the latest development in the field to be able to summarize their potential and identify challenges. In this section, we provide a survey of skeletal deep learning video anomaly detection methods. We present a novel taxonomy to study the skeletal video anomaly approaches based on learning approaches into four broad categories, i.e., reconstruction, prediction, their combinations and other specific approaches. Table I and II provides a summary of 24 relevant papers, based on the taxonomy, found in our literature search. Unless otherwise specified, the values in the last column of the table refer to AUC(ROC) values corresponding to each dataset in the reviewed paper. Five papers use reconstruction approach, six papers use prediction approach, six papers use a combination of reconstruction and prediction approaches, four papers use a combination of reconstruction and clustering approaches, and three papers use other specific approaches. ### _Reconstruction Approaches_ In the reconstruction approaches, generally, an autoencoder (AE) or its variant model is trained on the skeleton information of only normal human activities. During training, the model learns to reconstruct the samples representing normal activities with low reconstruction error. Hence, when the model encounters an anomalous sample at test time, it is expected to give high reconstruction error. Gatt et al. [22] used Long Short-Term Memory (LSTM) and 1-Dimensional Convolution (1DConv)-based AE models to detect abnormal human activities, including, but not limited to falls, using skeletons estimated from videos of a publicly available dataset. Temuroglu et al. [23] proposed a skeleton trajectory representation that handled occlusions and an AE framework for pedestrian abnormal behaviour detection. The pedestrian video dataset used in this work was collected by the authors, where the training dataset was composed of normal walking, and the test dataset was composed of normal and drunk walking. The pose skeletons were treated to handle occlusions using the proposed representation and combined into a sequence to train an AE. They compared the results of occlusion-aware skeleton keypoints input with keypoints without occlusion flags, keypoint image heatmaps and raw pedestrian image inputs. 
The authors used average of recall and specificity to evaluate the models due to the unbalanced dataset and found that occlusion-aware input achieved the highest results. Suzuki et al. [24] trained a Convolutional AE (CAE) on good gross motor movements in children and detected poor limb motion as an anomaly. Motion time-series images [45] were obtained from skeletons estimated from the videos of kindergarten children participants. The motion time-series images were fed as input to a CAE, which was trained on only the normal data. The difference between the input and reconstructed pixels was used to localize the poor body movements in anomalous frames. Jiang et al. [25] presented a message passing Gated Recurrent Unit (GRU) encoder-decoder network to detect and localize the anomalous pedestrian behaviours in videos captured at the grade crossing. The field-collected dataset consisted of over 50 hours of video recordings at two selected grade crossings with different camera angles. The skeletons were estimated and decomposed into global and local components before being fed as input to the encoder-decoder network. The localization of the anomalous pedestrians within a frame was done by identifying the skeletons with reconstruction error higher than the empirical threshold. They manually removed wrongly detected false skeletons as they claim that the wrong detection issue was observed at only one grade crossing. However, an approach of manual removal of false skeletons is impractical in many real world applications where the data is very large, making the need of an automated false skeleton identification and removal step imperative. Fan et al. [26] proposed an anomaly detection framework which consisted of two pairs of generator and discriminator. The generators were trained to reconstruct the normal video frames and the corresponding skeletons, respectively. The discriminators were trained to distinguish the original and reconstructed video frames and the original and reconstructed skeletons, respectively. The video frames and corresponding extracted skeletons served as input to the framework during training; however, at test time, decision was made based on only reconstruction error of video frames. ChallengesAEs or their variants are widely used in many video-based anomaly detection methods [5]. The choice of the right architecture to model the skeletons is very important. Further, being trained on the normal data, they are expected to produce higher reconstruction error for the abnormal inputs than the normal inputs, which has been adopted as a criterion for identifying anomalies. However, this assumption does not always hold in practice, that is, the AEs can generalize well that it can also reconstruct anomalies well, leading to false negatives [46]. ### _Prediction Approaches_ In prediction approaches, a network is generally trained to learn the normal human behaviour by predicting the skeletons at the next time step(s) using the skeletons representing normal human actions at past time steps. During testing, the test samples with high prediction errors are flagged as anomalies as the network is trained to predict only the skeletons representing normal actions. Rodrigues et al. [27] suggested that abnormal human activities can take place at different timescales, and the methods that operate at a fixed timescale (frame-based or video-clip-based) are not enough to capture the wide range of anomalies occurring with different time duration. 
They proposed a multi-timescale 1DConv encoder-decoder network where the intermediate layers were responsible to generate future and past predictions corresponding to different timescales. The network was trained to make predictions on normal activity skeletons input. The prediction errors from all timescales were combined to get an anomaly score to detect abnormal activities. Luo et al. [16] proposed a spatio-temporal Graph Convolutional Network (GCN)-based prediction method for skeleton-based video anomaly detection. The body joints were estimated and built into skeleton graphs, where the body joints formed the nodes of the graph. The spatial edges connected different joints of a skeleton, and temporal edges connected the same joints across time. A fully connected layer was used at the end of the network to predict future skeletons. Zeng et al. [28] proposed a hierarchical spatio-temporal GCN, where high-level representations encoded the trajectories of people and the interactions among multiple identities while low-level skeleton graph representations encoded the local body posture of each person. The method was proposed to detect anomalous human behaviours in both sparse and dense scenes. The inputs were organized into spatio-temporal skeleton graphs whose nodes were human body joints from multiple frames and fed to the network. The network was trained on the input skeleton graph representations of normal activities. Optical flow fields and size of skeleton bounding boxes were used to determine sparse and dense scenes. For dense scenes with crowds, higher weights were assigned to high-level representations while for sparse scenes, the weights of low-level graph representations were increased. During testing, the prediction errors from different branches were weighted and combined to obtain the final anomaly score. Fan et al. [29] proposed a GRU feed-forward network that was trained to predict the next skeleton using past skeleton sequences and a loss function that incorporated the range and speed of the predicted skeletons. Pang et al. [30] proposed a skeleton transformer to predict future pose components in video frames and considered error between predicted pose components and corresponding expected values as anomaly score. They applied a multi-head self-attention module to capture long-range dependencies between arbitrary pairwise pose components and the temporal convolutional layer to concentrate on local temporal information. Huang et al. [31] proposed a spatio-temporal graph transformer to encode the hierarchical graph embeddings of human skeletons for jointly modeling the interactions between individuals and the correlations among body joints within a single individual. Input to the transformer was provided as global and local graphs. Each node in the global graph encoded the speed of an individual as well as the relative position and interaction relations between individuals. Each local graph encoded the pose of an individual. ChallengesIn these methods, it is difficult to choose how far in future (or past) the prediction should be made to achieve optimum results. This could potentially be determined empirically; however, in the absence of a validation set such solutions remain elusive. The future prediction-based methods can be sensitive to noise in the past data [47]. Any small changes in the past can result in significant variation in prediction, and not all of these changes signify anomalous situations. 
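As a concrete illustration of the prediction paradigm described above, the sketch below trains a GRU on windows of normal-activity skeletons to predict the skeleton in the next frame and uses the prediction error as the anomaly score at test time. It is a simplified stand-in rather than the architecture of any particular surveyed paper: keypoints for a single tracked person are flattened into per-frame feature vectors, and a reconstruction-based counterpart (Section II-A) would simply replace the prediction target with the input window itself.

```python
# Simplified sketch of prediction-based skeletal anomaly detection: a GRU trained
# only on normal data predicts the next skeleton from a window of past skeletons;
# large prediction errors at test time indicate anomalies. Inputs are flattened
# 2D keypoints, e.g., 17 joints -> 34 features per frame.
import torch
import torch.nn as nn

class SkeletonPredictor(nn.Module):
    def __init__(self, num_feats=34, hidden=128):
        super().__init__()
        self.gru = nn.GRU(num_feats, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, num_feats)

    def forward(self, past):                  # past: (batch, window, num_feats)
        _, h = self.gru(past)
        return self.head(h[-1])               # predicted next skeleton: (batch, num_feats)

def train_step(model, optimizer, past, future):
    """One optimisation step on windows of normal-activity skeletons only."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(past), future)
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def anomaly_score(model, past, observed_next):
    """Per-skeleton score: mean squared prediction error (larger = more anomalous)."""
    return ((model(past) - observed_next) ** 2).mean(dim=-1)
```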
### _Combinations of learning approaches_ In this section, we discuss the existing methods that utilize a combination of different learning approaches, namely, reconstruction and prediction approaches, and reconstruction and clustering approaches. #### Iii-C1 Combination of reconstruction and prediction approaches Some skeletal video anomaly detection methods utilize a multi-objective loss function consisting of both reconstruction and prediction errors to learn the characteristics of skeletons signifying normal behaviour and identify skeletons with large errors as anomalies. Morais et al. [17] proposed a method to model the normal human movements in surveillance videos using human skeletons and their relative positions in the scene. The human skeletons were decomposed into two sub-components: global body movement and local body posture. The global movement tracked the dynamics of the whole body in the scene, while the local posture described the skeleton configuration. The two components were passed as input to different branches of a message passing GRU single-encoder-dual-decoder-based network. The branches processed their data separately and interacted via cross-branch message passing at each time step. Each branch had an encoder, a reconstruction-based decoder and a prediction-based decoder. The network was trained using normal data, and during testing, a frame-level anomaly score was generated by aggregating the anomaly scores of all the skeletons in a frame to identify anomalous frames. In order to avoid the inaccuracy caused by incorrect detection of skeletons in video frames, the authors leave out video frames where the skeletons cannot be estimated by the pose estimation algorithm. Hence, the results in this work was not a good representation of a real-world scenario, which often consists of complex-scenes with occluding objects and overlapping movement of people. Boekhoudt et al. [7] utilized the network proposed by Morais et al. [17] for detecting human crime-based anomalies in videos using a newly proposed crime-based video surveillance dataset. Similar to the work by Morais et al. [17], Li and Zhang [32] proposed a dual branch single-encoder-dual-decoder GRU network that was trained on normal behaviour skeletons estimated from pedestrian videos. The two decoders were responsible for reconstructing the input skeletons and predicting future skeletons, respectively. However, unlike the work by Morais et al. [17], there was no provision of message passing between the branches. Li et al. [33] proposed a single-encoder-dual-decoder architecture established on a spatio-temporal Graph CAE (GCAE) embedded with a LSTM network in hidden layers. The two decoders were used to reconstruct the input skeleton sequences and predict the unseen future sequences, respectively, from the latent vectors projected via the encoder. The sum of maximum reconstruction and prediction errors among all the skeletons within a frame was used as anomaly score for detecting anomalous frames. Wu et al. [34] proposed a GCN-based encoder-decoder architecture that was trained using normal action skeleton graphs and keypoint confidence scores as input to detect anomalous human actions in surveillance videos. The skeleton graph input was decomposed into global and local components. The network consisted of three encoder-decoder pipelines: the global pipeline, the local pipeline and the confidence score pipeline. 
The global and local encoder-decoder-based pipelines learned to reconstruct and predict the global and local components, respectively. The confidence score pipeline learned to reconstruct the confidence scores. Further, a Support Vector Data Description (SVDD)-based loss was employed to learn the boundary of the normal action global and local pipeline encoder output in latent feature space. The network was trained using a multi-objective loss function, composed of a weighted sum of skeleton graph reconstruction and prediction losses, confidence score reconstruction loss and multi-center SVDD loss. Luo et al. [35] proposed a single-encoder-dual-decoder memory enhanced spatial-temporal GCAE network, where spatial-temporal graph convolution was used to encode discriminative features of skeleton graphs in spatial and temporal domains. The memory module recorded patterns for normal behaviour skeletons. Further, the encoded representation was not fed directly into the reconstructing and predicting decoders but was used as a query to retrieve the most relevant memory items. The memory module was used to restrain the reconstruction and prediction capability of the network on anomalies. #### Iii-B2 Combination of reconstruction and clustering approaches Some skeletal video anomaly detection methods utilize a two-stage approach to identify anomalous human actions using spatio-temporal skeleton graphs. In the first pre-training stage, a GCAE-based model is trained to minimize the reconstruction loss on input skeleton graphs. In the second fine-tuning stage, the latent features generated by the pre-trained GCAE encoder is fed to a clustering layer and a Dirichlet Process Mixture model is used to estimate the distribution of the soft assignment of feature vectors to clusters. Finally at the test time, the Dirichlet normality score is used to identify the anomalous samples. Markovitz et al. [36] identified that anomalous actions can be broadly classified in two categories, fine and coarse-grained anomalies. Fine-grained anomaly detection refers to detecting abnormal variations of an action, e.g., abnormal type of walking. Coarse-grained anomaly detection refers to defining particular normal actions and regarding other actions as abnormal, such as determining dancing as normal and gymnastics as abnormal. They utilized a spatio-temporal GCAE to map the skeleton graphs representing normal actions to a latent space, which was soft assigned to clusters using a deep clustering layer. The soft-assignment representation abstracted the type of data (fine or coarse-grained) from the Dirichlet model. After pre-training of GCAE, the latent feature output of the encoder and clusters were fine-tuned by minimizing a multi-objective loss function consisting of both the reconstruction loss and clustering loss. They leveraged ShanghaiTech [48] dataset to test the performance of their proposed method on fine-grained anomalies, and NTU-RGB+D [49] and Kinetics-250 [50] datasets for coarse-grained anomaly detection performance evaluation. Cui et al. [37] proposed a semi-supervised prototype generation-based method for video anomaly detection to reduce the computational cost associated with graph-embedded networks. Skeleton graphs for normal actions were estimated from the videos and fed as input to a shift spatio-temporal GCAE to generate features. It was not clear which pose estimation algorithm was used to estimate the skeletons from video frames. 
The generated features were fed to the proposed prototype generation module designed to map the features to prototypes and update them during the training phase. In the pre-training step, the GCAE and prototype generation module were optimized using a loss function composed of reconstruction loss and generation loss of prototypes. In the fine-tuning step, the entire network was fine-tuned using a multi-objective loss function, composed of reconstruction loss, prototype generation loss and cluster loss. Later, Liu et al. [38] used self-attention augmented graph convolutions for detecting abnormal human behaviours based on skeleton graphs. Skeleton graphs were fed as input to a spatio-temporal self-attention augmented GCAE and latent features were extracted from the encoder part of the trained GCAE. After pre-training of GCAE, the entire network was fine-tuned using a multi-objective loss function consisting of both the reconstruction loss and clustering loss. Chen et al. [39] proposed a multiscale spatial temporal attention GCN, which included an encoder to extract features, a reconstruction decoder branch to optimize encoder, and a clustering layer branch to obtain anomaly scores. During training, the decoder is used to optimize the encoder by minimizing the reconstruction error. However, during testing, the decoder is discarded, and only the clustering layer is used to generate the anomaly score. It used three scales of human skeleton graphs, namely, joint, part and limb. Spatial attention graph convolution operation was carried out on each scale, and the output features of three scales were weighted and summed to constitute the multiscale skeleton features. ChallengesThe combination-based methods can carry the limitations of the individual learning approaches, as described in Section II-A and II-B. Further, in the absence of a validation set, it is difficult to determine the optimum value of combination coefficients in a multi-objective loss function. ### _Other Approaches_ This section discusses the methods that leveraged a pre-trained deep learning model to encode latent features from the input skeletons and used approaches such as, clustering and multivariate gaussian distribution, in conjunction for detecting human action-based anomalies in videos. Yang et al. [40] proposed a two-stream fusion method to detect anomalies pertaining to body movement and object positions. YOLOv3 [51] was used to detect people and objects in the video frames. Subsequently, skeletons were estimated from the video frames and passed as input to a spatio-temporal GCN, followed by a clustering-based fully connected layer to generate anomaly scores for skeletons. The information pertaining to the bounding box coordinates and confidence score of the detected objects was used to generate object anomaly scores. Finally, the skeleton and object normality scores were combined to generate the final anomaly score for a frame. Nanjun et al. [41] used the skeleton features estimated from the videos for pedestrian anomaly detection using an iterative self-training strategy. The training set consisted of unlabelled normal and anomalous video sequences. The skeletons were decomposed into global and local components, which were fed as input to an unsupervised anomaly detector, iForest [52], to yield the pseudo anomalous and normal skeleton sets. The pseudo sets were used to train an anomaly scoring module, consisting of a spatial GCN and fully connected layers with a single output unit. 
As part of the self-training strategy, new anomaly scores were generated using previously trained anomaly scoring module to update the membership of skeleton samples in the skeleton sets. The scoring module was then retrained using updated skeleton sets, until the best scoring model was obtained. However, the paper doesn't discuss the criteria to decide the best scoring model. Tani and Shibata [42] proposed a framework for training a frame-wise Adaptive GCN (AGCN) for action recognition using single frame skeletons and used the features extracted from the AGCN to train an anomaly detection model. As part of the proposed framework, a pretrained action recognition model [53] was used to identify the frames with large temporal attention in the Kinetics-skeleton dataset [54] as the action frames to train the AGCN. Further, the trained AGCN was used to extract features from the normal behaviour skeletons identified in the ShanghaiTech Campus dataset [17] to model a multivariate gaussian distribution. During testing, the Mahalanobis distance was used to calculate the anomaly score under the multivariate gaussian distribution. ChallengesThe performance of these methods rely on the pre-training strategy of the deep learning models used to learn the latent features and the choice of training parameters for the subsequent machine learning models. ## III Discussion This section leverages Table I and II and synthesizes the information and trends that can be inferred from the existing work on skeletal video anomaly detection. * ShanghaiTech [48] and CUHK Avenue [55] were the most frequently used video datasets to evaluate the performance of the skeletal video anomaly detection methods. The ShanghaiTech dataset has videos of people walking along a sidewalk of the ShanghaiTech university. Anomalous activities include bikers, skateboarders and people fighting. It has 330 training videos and 107 test videos. However, not all the anomalous activities are related to humans. A subset of the ShanghaiTech dataset that contained anomalous activities only related to humans was termed as HR ShanghaiTech and was used in many papers. The CUHK Avenue dataset consists of short video clips looking at the side of a building with pedestrian walking by it. Concrete columns that are part of the building cause some occlusion. The dataset contains 16 training videos and 21 testing videos. The anomalous events comprise of actions such as "throwing papers", "throwing bag", "child skipping", "wrong direction" and "bag on grass". Similarly, a subset of the CUHK Avenue dataset containing anomalous activities only related to humans, called HR Avenue, has been used to evaluate the methods. Other video datasets that have been used include UTD-MHAD [56], UMN [57], UCSD Pedestrian [6], IITB-Corridor [27], HR Crime [7], NTU-RGB+D [49], and Kinetics-250 [50]. From the type of anomalies present in these datasets, it can be inferred that the existing skeletal video anomaly detection methods have been evaluated mostly on individual human action-based anomalies. Hence, it is not clear how well can they detect anomalies that involve interactions among multiple individuals or interaction among people and objects. * Most of the papers (22 out of 24), detected anomalous human actions for multiple people in the video scene. Other two papers detected irregular body postures and poor body movements in children, respectively, for single person in the video scene. 
The usual approach was to estimate the skeletons for the people in the scene using a pose estimation algorithm, and calculate anomaly scores for each of the skeletons. The maximum anomaly score among all the skeletons within a frame was used to identify the anomalous frames. A single video frame could contain multiple people, among which not all of them were performing anomalous actions. Hence, taking the maximum anomaly score of all the skeletons helped to nullify the effect of people with normal actions on the final decision for the frame. Further, calculating anomaly scores for individual skeletons helped to localize the source of anomaly within a frame. * The definition of anomalous human behaviours can differ across applications. While most of the existing papers focused on detecting anomalous human behaviours in general, four papers focused on detecting anomalous behaviours for specific applications, that is, drunk walking [23], poor body movements in children [24], abnormal pedestrian behaviours at grade crossings [25] and crime-based anomalies [7]. Further, the nature of anomalous behaviours can vary depending upon various factors, like span of time, crowded scenes, and specific action-based anomalies. Some papers identified and addressed the need to detect specific types of anomalies, namely, multi-timescale anomalies occurring over different time duration [27], anomalies in both sparse and crowded scenes [28], fine and coarse-grained anomalies [36] and body movement and object position anomalies [40]. * Alphapose [58] and Openpose [59] were the most common choice of pose estimation algorithm for extraction of skeletons for the people in the scene. Other pose estimation methods that have been used were Posenet [60] and HRNet [61]. However, in general, the papers did not provide any rationale behind their choice of the pose estimation algorithm. * The type of models used in the papers can broadly be divided into two types, sequence-based and graph-based models. The sequence-based models that have been used include 1DConv-AE, LSTM-AE, GRU, and Transformer. These models treated skeleton keypoints for individual people across multiple frames as time series input. The graph-based models that have been used involve GCAE and GCN. The graph-based models received spatio-temporal skeleton graphs for individual people as input. The spatio-temporal graphs were constructed by considering body joints as the nodes of the graph. The spatial edges connected different joints of a skeleton, and temporal edges connected the same joints across time. * Area Under Curve (AUC) of Receiver Operating Characteristic (ROC) curve was the most common metric used to evaluate the performance among the existing skeletal video anomaly detection methods. Other performance evaluation metrics include F score, accuracy, Equal Error Rate (EER) and AUC of Precision-Recall (PR) Curve. EER signifies the percentage of misclassified frames when the false positive rate equals to the miss rate on the ROC curve. While AUC(ROC) can provide a good estimate of the classifier's performance over different thresholds, it can be misleading in case the data is imbalanced [62]. In anomaly detection scenario, it is common to have imbalance in the test data, as the anomalous behaviours occur infrequently, particularly in many medical applications [63, 64]. The AUC(PR) value provides a good estimate of the classifier's performance on imbalanced datasets [62]; however, only one of the papers used AUC(PR) as an evaluation metric. 
* The highest AUC(ROC) values reported for the ShanghaiTech [48] and CUHK Avenue [55] datasets across different methods in Table I and II were 0.83 and 0.92, respectively. A direct comparison may not be possible due to the difference in the experimental setup and train-test splits across the reviewed methods; however, it gives some confidence on the viability of these approaches for skeletal video anomaly detection. ## IV Challenges and Future Directions In general, the efficiency of the skeletal video anomaly detection algorithms depends upon the accuracy of the skeletons estimated by the pose-estimation algorithm. If the pose estimation algorithm misses certain joints or produces artifacts in the scene, then it can increase the number of false alarms. There are various challenges associated with estimating skeletons from video frames [65]: (i) complex body configuration causing self-occlusions and complex poses, (ii) diverse appearance, including clothing, and (iii) complex environment with occlusion from other people in the scene, various viewing angles, distance from camera and truncation of parts in the camera view. This can lead to a poor approximation of skeletons and can negatively impact the performance of the anomaly detection algorithms. Methods have been proposed to address some of these challenges [66, 67]; however, extracting skeletons in complex environments remains a difficult problem. Some of the existing methods manually remove inaccurate and false skeletons [17, 25] to train the model, which is impractical in many real-world applications where the amount of available data is very large. There is a need of an automated false skeleton identification and removal step, when estimating skeletons from videos. The skeletons collected using Microsoft Kinect (depth) camera has been used in the past studies [68, 69]. However, the defunct production of the Microsoft Kinect camera [70] has lead to hardware constraints in the further development of skeletal anomaly detection approaches. Other commercial products include Vicon [71] with optical sensors and TheCaptury [72] with multiple cameras. But they function in very constrained environments or require special markers on the human body. New cameras, such as 'Sentinare 2' from AltumView [73], circumvent such hardware requirements by directly processing videos on regular RGB cameras and transmitting skeletons information in real-time. The existing approaches for skeletal video anomaly detection involve spatio-temporal skeleton graphs [16] or temporal sequences [17], which are constructed by tracking an individual across multiple frames. However, this is challenging in scenarios where there are multiple people within a scene. The entry and exit of people in the scene, overlapping of people during movement and presence of occluding objects make tracking people across frames a very challenging task. There can be deployment issues in these methods because the choice of threshold is not clear. In the absence of any validation set (containing both normal and unseen anomalies) in an anomaly detection setting, it is very hard to fine-tune an operating threshold using just the training data (comprising of normal activities only). To handle these situations, outliers within the normal activities can be used as a proxy for unseen anomalies [74]; however, inappropriate choices can lead to increased false alarms or missed alarms. Domain expertise can be utilized to adjust a threshold, which may not be available in many cases. 
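The evaluation and thresholding steps discussed above can be made concrete with the short sketch below. It assumes frame-level anomaly scores (for example, the maximum score over all skeletons in a frame) and binary ground-truth labels are available, and it reports AUC(ROC), AUC(PR) and EER; the percentile-based threshold is only a heuristic stand-in for the validation set that is typically unavailable in anomaly detection settings.

```python
# Minimal sketch of frame-level evaluation (AUC-ROC, AUC-PR, EER) and a heuristic
# operating threshold derived from normal-only training scores.
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score, roc_curve

def evaluate(scores, labels):
    """scores: frame-level anomaly scores; labels: 1 for anomalous frames, else 0."""
    auc_roc = roc_auc_score(labels, scores)
    auc_pr = average_precision_score(labels, scores)    # more informative under class imbalance
    fpr, tpr, _ = roc_curve(labels, scores)
    eer = fpr[np.nanargmin(np.abs(fpr - (1.0 - tpr)))]  # point where FPR ~ miss rate
    return auc_roc, auc_pr, eer

def percentile_threshold(train_scores, q=99.0):
    """Treat the top (100 - q)% of normal training scores as a proxy for anomalies."""
    return np.percentile(train_scores, q)
```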
The anomalous human behaviours of interest and their difficulty of detection can vary depending upon the definition of anomaly, application, time span of the anomalous actions, and presence of single/multiple people in the scenes. For example, in the case of driver anomaly detection application, the anomalous behaviours can include talking on the phone, dozing off or drinking [14]. The anomalous actions can span over different time lengths, ranging from few seconds to hours or days, e.g., jumping and falls [74] are short-term anomalies, while loitering and social isolation [75] are long-term events. More focus is needed on developing methods that can identify both short and long-term anomalies. Sparse scene anomalies can be described as anomalies in scenes with less number of humans, while dense scene anomalies can be described as anomalies in crowded scenes with large number of humans [28]. It is comparatively difficult to identify anomalous behaviours in dense scenes than sparse scenes due to tracking multiple people and finding their individual anomaly scores [17]. Thus, there is a need to develop methods that can effectively identify both sparse and dense scene anomalies. Further, there is a need to address the challenges associated with the granularity and the decision making time of the skeletal video anomaly detection methods for real time applications. The existing methods mostly output decision on a frame level, which becomes an issue when the input to the method is a real-time continuous video stream at multiple frames per second. This can lead to alarms going off multiple times a second, which can be counter-productive. One solution is for the methods to make decisions on a time-window basis, each window of length of a specified duration. However, this brings in the question about the optimal length of each decision window. A short window is impractical as it can lead to frequent and repetitive alarms, while a long window can lead to missed alarms, and delayed response and intervention. Domain knowledge can be used to make a decision about the length of decision windows. Skeletons can be used in conjunction with optical flow [76] to develop privacy-protecting approaches to jointly learn from temporal and structural modalities. Approaches based on federated learning (that do not combine individual data, but only the models) can further improve the privacy of these methods [77]. Segmentation masks [78] can be leveraged in conjunction with skeletons to occlude humans while capturing the information pertaining to scene and human motion to develop privacy-protecting anomaly detection approaches. The skeletons signify motion and posture information for the individual humans in the video; however, they lack information regarding human-human and human-object interactions. Information pertaining to interaction of the people with each other and the objects in the environment is important for applications such as, violence detection [7], theft detection [7] and agitation detection [64] in care home settings. Skeletons can be used to replace the bodies of the participants, while keeping the background information in video frames [79] to analyze both human-human and human-object interaction anomalies. Further, object bounding boxes can be used in conjunction with human skeletons to model human-object interaction while preserving the privacy of humans in the scene. The information from other modalities (e.g. 
wearable devices) along with skeleton features can be used to develop multi-modal anomaly detection methods to improve the detection performance. As can be seen in Table I and II, the existing skeletal video anomaly detection methods and available datasets focus towards detecting irregular body postures [16], and anomalous human actions [30] in mostly outdoor settings, and not in proper healthcare settings, such as personal homes and long-term care homes. This a gap towards real world deployment, as there is a need to extend the scope of detecting anomalous behaviours using skeletons to in-home and care home settings, where privacy is a very important concern. This can be utilized to address important applications, such as fall detection [80], agitation detection [64, 79], and independent assistive living. This will help to develop supportive homes and communities and encourage autonomy and independence among the increasing older population and dementia residents in care homes. While leveraging skeletons helps to get rid of facial identity and appearance-based information, it is important to ask the question if skeletons can be considered private enough [81, 82] and what steps can be taken to further anonymize the skeletons. ## V Conclusion In this paper, we provided a survey of recent works that leverage the skeletons or body joints estimated from videos for the anomaly detection task. The skeletons hide the facial identity and overall appearance of people and can provide vital information about joint angles [83], speed of walking [84], and interaction with other people in the scene [17]. Our literature review showed that many deep learning-based approaches leverage reconstruction, prediction error and their other combinations to successfully detect anomalies in a privacy protecting manner. This review suggests the first steps towards increasing adoption of devices (and algorithms) focused on improving privacy in a residential or communal setting. It will further improve the deployment of anomaly detection systems to improve the safety and care of people. The skeleton-based anomaly detection methods can be used to design privacy-preserving technologies for the assisted living of older adults in a care environment [85] or enable older adults to live independently in their own homes to cope with the increasing cost of long-term care demands [86]. Privacy-preserving methods using skeleton features can be employed to assist with skeleton-based rehab exercise monitoring [87] or in social robots for robot-human interaction [88] that assist older people in their activities of daily living. ## VI Acknowledgements This work was supported by AGE-WELL NCE Inc, Alzheimer's Association, Natural Sciences and Engineering Research Council and UAE Strategic Research Grant.
Existing video anomaly detection methods mostly use videos containing identifiable facial and appearance-based features. The use of videos with identifiable faces raises privacy concerns, especially in hospital or community-based settings, while appearance-based features are sensitive to pixel-based noise, which strains anomaly detection methods to model changes in the background and makes it difficult to focus on the actions of humans in the foreground. Skeletons extracted from videos, as structural information describing human motion, are privacy-protecting and can address some of the problems posed by appearance-based features. In this paper, we present a survey of privacy-protecting deep learning anomaly detection methods based on skeletons. We provide a novel taxonomy of algorithms based on the various learning approaches. We conclude that skeleton-based anomaly detection methods are a plausible privacy-protecting alternative for video anomaly detection, and lastly we identify major open research questions and provide guidelines to address them.
2303.18122
Scale-Invariant Model for Gravitational Waves and Dark Matter
We have conducted a revised analysis of the first-order phase transition that is associated with symmetry breaking in a classically scale-invariant model that has been extended with a new $SU(2)$ gauge group. By incorporating recent developments in the understanding of supercooled phase transitions, we were able to calculate all of its features and significantly limit the parameter space. We were also able to predict the gravitational wave spectra generated during this phase transition and found that this model is well-testable with LISA. Additionally, we have made predictions regarding the relic dark matter abundance. Our predictions are consistent with observations but only within a narrow part of the parameter space. We have placed significant constraints on the supercool dark matter scenario by improving the description of percolation and reheating after the phase transition, as well as including the running of couplings. Finally, we have also analyzed the renormalization-scale dependence of our results.
Alexandros Karam, Maciej Kierkla, Bogumiła Świeżewska
2023-03-31T15:07:57
http://arxiv.org/abs/2303.18122v1
# Scale-Invariant Model for Gravitational Waves and Dark Matter ###### Abstract: The present contribution summarises the results recently published in Ref. [1]. We have conducted a revised analysis of the first-order phase transition that is associated with symmetry breaking in a classically scale-invariant model that has been extended with a new \(SU(2)\) gauge group. By incorporating recent developments in the understanding of supercooled phase transitions, we were able to calculate all of its features and significantly limit the parameter space. We were also able to predict the gravitational wave spectra generated during this phase transition and found that this model is well-testable with LISA. Additionally, we have made predictions regarding the relic dark matter abundance. Our predictions are consistent with observations but only within a narrow part of the parameter space. We have placed significant constraints on the supercool dark matter scenario by improving the description of percolation and reheating after the phase transition, as well as including the running of couplings. Finally, we have also analyzed the renormalization-scale dependence of our results. ## 1 Introduction Considering the recent direct detection of gravitational waves (GW) by the LIGO and Virgo Collaborations [2, 3, 4, 5, 6, 7], as well as the upcoming Laser Interferometer Space Antenna (LISA) [8, 9, 10, 11, 12, 13] and other future and ongoing experiments [14, 15, 16, 17, 18, 19, 20, 21, 22, 23], it is reasonable to explore ways to utilize GW to investigate fundamental physics. One promising method is to search for evidence of a first-order phase transition (PT) in the early Universe through the primordial gravitational wave background [9, 10, 11, 12, 13, 24]. This signal is expected to be present at frequencies within LISA's sensitivity range if the transition occurred around temperatures similar to those of the electroweak PT, \(T\sim 100~{}\mathrm{GeV}\). However, in many models, the signal is not strong enough to be detected. In contrast, the class of models with classical scale invariance [25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38] typically predicts a strong gravitational wave signal within LISA's reach due to a logarithmic potential that enables significant supercooling and latent heat release during the transition. Within the wide variety of classically conformal models, those incorporating an additional gauge group are particularly promising due to their high level of predictability. The conformal Standard Model (SM) can be extended in a minimal manner with the addition of either an extra \(U(1)\)[33, 34, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70] or \(SU(2)\)[25, 31, 32, 50, 64, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118], larger gauge groups, extra fermions, or more intricate architectures [119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138]. The focus of our current work is on the first-order PT in a classically scale-invariant model that includes an additional \(SU(2)_{X}\) gauge symmetry and a scalar that transforms as a doublet under this group while remaining a singlet of the SM. 
In addition to exhibiting a strong first-order phase transition, this model also provides a candidate for dark matter particles that are stabilized by a residual symmetry that persists after the \(SU(2)_{X}\) symmetry is broken [139, 140]. Although the possibility of detecting GW from a PT and exploring events that occurred in the early Universe is exciting, the imprecise nature of theoretical predictions is discouraging [141, 142]. The dependence on the renormalisation scale is one of the main sources of uncertainty in these predictions. Classically scale-invariant models, owing to the logarithmic nature of their potential, span a broad range of energies and therefore are particularly susceptible to issues related to scale dependence. In this work [1]: 1. We present updated predictions of the stochastic GW background in the classically scale-invariant model with \(SU(2)_{X}\) symmetry, incorporating recent advances in understanding supercooled PTs [143, 144, 145, 146, 147]. Our study is the first to include the condition for percolation in the SU(2)\({}_{X}\) model, and we show that it significantly affects the parameter space. 2. We pay close attention to the renormalisation-scale dependence of the results. To minimise this dependence, we use a renormalisation-group improved effective potential and perform an expansion in powers of couplings consistent with the conditions from conformal symmetry breaking and the radiative nature of the transition. 3. We investigate the DM phenomenology in light of the updated understanding of the PT. ## 2 The model In this work [1], we analyse the classically scale-invariant SM extended by a dark \(SU(2)_{X}\) gauge group. The new fields of the model are: * the scalar doublet \(\Phi\) of \(SU(2)_{X}\), * the three dark gauge bosons \(X\) of \(SU(2)_{X}\). The Higgs \(H\) and new scalar \(\Phi\) doublets can be written as \[H=\frac{1}{\sqrt{2}}\left(\begin{array}{c}0\\ h\end{array}\right),\quad\Phi=\frac{1}{\sqrt{2}}\left(\begin{array}{c}0\\ \varphi\end{array}\right).\] In terms of \(h\) and \(\varphi\), the one-loop effective potential can be written as \[V(h,\varphi)=V^{(0)}(h,\varphi)+V^{(1)}(h,\varphi), \tag{1}\] where the tree-level part is \[V^{(0)}(h,\varphi)=\frac{1}{4}\left(\lambda_{1}h^{4}+\lambda_{2}h^{2}\varphi^ {2}+\lambda_{3}\varphi^{4}\right)\,, \tag{2}\] with \(\lambda_{2}\) being the portal coupling that connects the visible and dark sectors. The one-loop correction is given by \[V^{(1)}(h,\varphi)=\frac{1}{64\pi^{2}}\sum_{a}n_{a}M_{a}^{4}(h,\varphi)\left( \log\frac{M_{a}^{2}(h,\varphi)}{\mu^{2}}-C_{a}\right), \tag{3}\] where \[n_{a}=(-1)^{2s_{a}}Q_{a}N_{a}(2s_{a}+1),\] and the sum runs over all particle species. With \(M_{a}(h,\varphi)\) we denote the field-dependent mass of a particle, \(n_{a}\) denotes the number of degrees of freedom associated with each species and \(C_{a}=\frac{5}{6}\) for vector bosons and \(C_{a}=\frac{3}{2}\) for other particles. Furthermore, \(Q_{a}=1\) for uncharged particles, and \(Q_{a}=2\) for charged particles, \(N_{a}=1\), \(3\) for uncoloured and coloured particles, respectively. 
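To make the structure of eqs. (2)-(3) concrete, the illustrative sketch below (not the code used in Ref. [1]) evaluates the tree-level plus one-loop potential along the \(\varphi\) direction, keeping only the dominant dark gauge boson contribution with \(n_{X}=9\) degrees of freedom, \(C_{X}=5/6\) and \(M_{X}(\varphi)=g_{X}\varphi/2\). The couplings are held fixed, i.e. no renormalisation-group improvement is included, and the parameter values are arbitrary benchmarks.

```python
# Illustrative sketch of V^(0) + V^(1) along the phi direction, keeping only the
# dark gauge boson contribution (9 d.o.f., C = 5/6, M_X(phi) = g_X*phi/2).
import numpy as np

def V_tree(phi, lam3):
    return 0.25 * lam3 * phi**4

def V_one_loop(phi, gX, mu):
    M2 = (0.5 * gX * phi) ** 2                       # field-dependent X mass squared
    return 9.0 / (64.0 * np.pi**2) * M2**2 * (np.log(M2 / mu**2) - 5.0 / 6.0)

def V_eff(phi, gX, lam3, mu):
    return V_tree(phi, lam3) + V_one_loop(phi, gX, mu)

# With lam3 fixed by the minimisation condition at mu = M_X = gX*w/2,
# lam3 = 3 gX^4 / (256 pi^2), the potential develops its minimum at phi = w.
gX, w = 0.9, 1.0e4                                   # illustrative values, w in GeV
mu = 0.5 * gX * w
lam3 = 3.0 * gX**4 / (256.0 * np.pi**2)
phi = np.linspace(0.1 * w, 2.0 * w, 2000)
print("phi at the minimum:", phi[np.argmin(V_eff(phi, gX, lam3, mu))])  # ~ w
```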
Regarding symmetry breaking, the stationary point equations divided by the VEVs, \(v=\langle h\rangle\), \(w=\langle\varphi\rangle\), read \[\frac{1}{v^{3}}\frac{\partial V}{\partial h}=\lambda_{1}+\frac{1}{2}\lambda_{2}\left(\frac{w}{v}\right)^{2}+\frac{1}{v^{3}}\left.\frac{\partial V^{(1)}}{\partial h}\right|_{h=v,\varphi=w}=0, \tag{4}\] \[\frac{1}{w^{3}}\frac{\partial V}{\partial\varphi}=\lambda_{3}+\frac{1}{2}\lambda_{2}\left(\frac{v}{w}\right)^{2}+\frac{1}{w^{3}}\left.\frac{\partial V^{(1)}}{\partial\varphi}\right|_{h=v,\varphi=w}=0. \tag{5}\] Typically, \(v_{\varphi}/v_{h}\gg 10\), therefore the \(\lambda_{2}\left(v_{h}/v_{\varphi}\right)^{2}\) term can be neglected. Then, the second equation becomes \[\lambda_{3}=-\frac{9}{256\pi^{2}}g_{X}^{4}\left[2\log\left(\frac{g_{X}}{2}\frac{w}{\mu}\right)-\frac{1}{3}\right]. \tag{6}\] The first equation reads \[\lambda_{1}+\frac{1}{2}\lambda_{2}\left(\frac{w}{v}\right)^{2}+\frac{1}{16\pi^{2}}\sum_{W^{\pm},Z,t}n_{a}\frac{M_{a}^{4}(h,\varphi)}{v^{4}}\left(\log\frac{M_{a}^{2}(h,\varphi)}{\mu^{2}}-C_{a}+\frac{1}{2}\right)=0. \tag{7}\] The above indicates that the symmetry breaking in the \(\varphi\) direction follows the Coleman-Weinberg mechanism, while the symmetry breaking in the direction of \(h\) is similar to that of the SM, as the "tree-level mass term" is generated by the portal coupling. The physical mass corresponds to a pole of the propagator, i.e. it is evaluated away from \(p^{2}=0\), and is given by \[M_{\text{pole}}^{2}=m_{\text{tree-level}}^{2}+\text{Re}[\Sigma(p^{2}=M_{\text{pole}}^{2})]. \tag{8}\] Including loop corrections from self-energies, which introduce momentum dependence, we have \[M^{2}(p)=\left(\begin{array}{cc}3\lambda_{1}v^{2}+\frac{\lambda_{2}}{2}w^{2}&\lambda_{2}vw\\ \lambda_{2}vw&3\lambda_{3}w^{2}+\frac{\lambda_{2}}{2}v^{2}\end{array}\right)+\left(\begin{array}{cc}\Sigma_{hh}(p)&\Sigma_{h\varphi}(p)\\ \Sigma_{h\varphi}(p)&\Sigma_{\varphi\varphi}(p)\end{array}\right). \tag{9}\] By diagonalising the mass matrix we obtain the mass eigenvalues \[M_{\pm}^{2}(p^{2})=\frac{1}{2}\Bigg{\{}\left(3\lambda_{1}+\frac{\lambda_{2}}{2}\right)v^{2}+\frac{1}{2}\left(\frac{\lambda_{2}}{2}+3\lambda_{3}\right)w^{2}+\Sigma_{hh}(p^{2})+\Sigma_{\varphi\varphi}(p^{2})\\ \pm\sqrt{\left[\left(3\lambda_{1}-\frac{\lambda_{2}}{2}\right)v^{2}-\left(3\lambda_{3}-\frac{\lambda_{2}}{2}\right)w^{2}+\Sigma_{hh}(p^{2})-\Sigma_{\varphi\varphi}(p^{2})\right]^{2}+4\lambda_{2}^{2}v^{2}w^{2}}\Bigg{\}}. \tag{10}\] Neglecting terms suppressed by a product of a small coupling, \(\lambda_{2}\) or \(\lambda_{3}\), and the Higgs VEV, we can approximately determine which of the mass eigenvalues corresponds to the Higgs particle. We find \[M_{+}^{2}(h,\varphi)=3\lambda_{3}\varphi^{2}+\Sigma_{\varphi\varphi}(p^{2}), \tag{11}\] \[M_{-}^{2}(h,\varphi)=3\lambda_{1}h^{2}+\frac{1}{2}\lambda_{2}\varphi^{2}+\Sigma_{hh}(p^{2})\,, \tag{12}\] for \(3\lambda_{1}h^{2}-3\lambda_{3}\varphi^{2}+\frac{1}{2}\lambda_{2}\varphi^{2}+\Sigma_{hh}(p^{2})-\Sigma_{\varphi\varphi}(p^{2})<0\). For the opposite sign, \(M_{+}\) and \(M_{-}\) are interchanged. Then, to obtain the momentum-corrected masses we solve the gap equations \[M_{H}^{2}=M_{\mp}^{2}(p^{2}=M_{H}^{2}), \tag{13}\] \[M_{S}^{2}=M_{\pm}^{2}(p^{2}=M_{S}^{2}). \tag{14}\] We identify the first one with the Higgs \(M_{H}=125\,\text{GeV}\), while the other gives the mass of the new scalar \(S\). 
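In practice, the gap equations (13)-(14) are solved iteratively (cf. step 7 of the scan procedure below). A schematic fixed-point iteration is sketched here; the momentum-dependent mass eigenvalue is passed in as a callable, since the one-loop self-energies \(\Sigma_{hh}\), \(\Sigma_{\varphi\varphi}\) are not reproduced in this sketch and the names in the usage comment are placeholders.

```python
# Schematic sketch of solving a gap equation M^2 = M_eigen^2(p^2 = M^2) by
# fixed-point iteration. M_eigen_sq is a callable returning the momentum-dependent
# mass eigenvalue (e.g., M_+^2 of eq. (10)) evaluated at a given p^2.
def solve_gap_equation(M_eigen_sq, p2_init, tol=1e-8, max_iter=200):
    p2 = p2_init
    for _ in range(max_iter):
        p2_new = M_eigen_sq(p2)
        if abs(p2_new - p2) < tol * max(abs(p2), 1.0):
            return p2_new
        p2 = p2_new
    raise RuntimeError("gap equation did not converge")

# Illustrative usage with placeholder names (M_plus_sq_tree, Sigma_phiphi):
# M_S2 = solve_gap_equation(lambda p2: M_plus_sq_tree + Sigma_phiphi(p2),
#                           p2_init=M_plus_sq_tree)
```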
Finally, the mass eigenstates are obtained from the gauge eigenstates by a rotation matrix as \[\left(\begin{array}{c}\phi_{-}\\ \phi_{+}\end{array}\right)=\left(\begin{array}{cc}\cos\theta&\sin\theta\\ -\sin\theta&\cos\theta\end{array}\right)\left(\begin{array}{c}h\\ \varphi\end{array}\right),\qquad-\frac{\pi}{2}<\theta<\frac{\pi}{2}\,. \tag{15}\] In order to scan the parameter space, we employ the following numerical procedure: 1. We choose the values of the input parameters, \(M_{X}\) and \(g_{X}\). We assume the tree-level relation for the \(X\) mass \(M_{X}=\frac{1}{2}g_{X}v_{\varphi}\) so we can compute the value of the \(\varphi\) VEV, \(v_{\varphi}\). The values of \(g_{X}\) and \(v_{\varphi}\) are treated as evaluated at the scale \(\mu=M_{X}\). 2. We use the minimisation condition along the \(\varphi\) direction, evaluated at \(\mu=M_{X}\) to evaluate \(\lambda_{3}\). This gives us a simple relation \[\lambda_{3}=\frac{3}{256\pi^{2}}g_{X}^{4}.\] 3. The \(g_{X}\) and \(\lambda_{3}\) couplings are evolved using their RGEs and evaluated at \(\mu=M_{Z}\). 4. If \(g_{X}(M_{Z})\leqslant 1.15\) the RG-improved potential is well-behaved throughout the scales considered. 5. The value of \(\lambda_{2}\) as a function of \(\lambda_{1}(\mu=M_{Z})\) is obtained from the first minimisation condition. 6. The value of \(\lambda_{1}\) is computed from the requirement that the physical Higgs mass is equal to \(125\) GeV, using the first gap equation. The evaluation is performed at \(\mu=M_{Z}\), therefore the vacuum expectation value of \(\varphi\) at \(\mu=M_{Z}\) is needed. It is found using the second minimisation condition evaluated at \(\mu=M_{Z}\). 7. The mass of \(S\) is computed by solving iteratively the second gap equation. 8. The mixing between the scalars is evaluated by demanding that the off-diagonal terms of the mass matrix evaluated at \(p^{2}=0\) and in the mass-eigenbasis are zero. Figure 1: Values of the new scalar mass \(M_{S}\) (left panel) and the VEV \(w\) (evaluated at \(\mu=M_{X}\)) (right panel). In the left panel the thick black line indicates where \(M_{S}=M_{H}=125\,\mathrm{GeV}\) and across this line mass ordering between \(S\) and \(H\) changes (to the left of the line \(M_{S}<M_{H}\), and to the right \(M_{H}<M_{S}\)). To the right of the dotted line \(\xi_{H}\) becomes numerically equal to \(1\). The dashed lines indicate a discrepancy between the running and the pole mass (in percent). Grey-shaded regions are excluded. We present the result of the scan for \(M_{S}\) and \(w\) (the VEV of \(\varphi\)) in figure 1. The new scalar \(S\) is heavier than the Higgs boson in most of the parameter space. The dashed lines in the plot represent the disparity between the mass obtained by solving eq. (14) iteratively and the mass estimated from the effective potential approximation. Although the differences are not negligible, they do not exceed 10% even in the upper right region of the parameter space. Finally, the region of low \(X\) masses is excluded because it is not possible the reproduce a stable minimum with the correct Higgs VEV and mass in this regime, while the upper right corner is cut off by the condition \(g_{X}(M_{Z})\leq 1.15\) for the perturbativity of the dark gauge coupling. ## 3 Dark matter Our DM candidates are the three vector bosons \(X_{\mu}^{a}\) (where \(a=1,\,2,\,3\)) of the hidden sector gauge group \(SU(2)\) with mass \(M_{X}=\frac{1}{2}g_{X}w\). 
As discussed in [139], the gauge bosons are stable due to an intrinsic \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}^{\prime}\) symmetry associated with complex conjugation of the group elements and discrete gauge transformations. This discrete symmetry actually generalizes to a custodial \(SO(3)\)[140] and the dark gauge bosons are degenerate in mass. For the standard freeze-out mechanism, the Boltzmann equation has the form \[\frac{\mathrm{d}n}{\mathrm{d}t}+3\,H\,n=-\,\frac{\left\langle\sigma v\right\rangle _{\mathrm{ann}}}{3}\left(n^{2}-n_{eq}^{2}\right)-\frac{2\left\langle\sigma v \right\rangle_{\mathrm{semi}}}{3}\,n\left(n-n_{eq}\right)\,. \tag{15}\] The annihilation cross section is dominated by the \(XX\to SS\) process \[\left\langle\sigma v\right\rangle_{\mathrm{ann}}=\frac{11g_{X}^{4}}{2304\pi M _{X}^{2}}\,, \tag{16}\] while the semiannihilation cross section is dominated by the \(XX\to XS\) process \[\left\langle\sigma v\right\rangle_{\mathrm{semi}}=\frac{3g_{X}^{4}}{128\pi M _{X}^{2}}\,. \tag{17}\] Interestingly, the semiannihilation processes dominate since \(\left\langle\sigma v\right\rangle_{\mathrm{semi}}\sim 5\left\langle\sigma v \right\rangle_{\mathrm{ann}}\). Solving the Boltzmann equation, we obtain the dark matter relic abundance \[\Omega_{X}h^{2}=\frac{1.04\times 10^{9}\,\,\mathrm{GeV}^{-1}}{\sqrt{g_{*}}\,M _{P}J(x_{f})},\qquad J(x_{f})=\int_{x_{f}}^{\infty}dx\,\frac{\left\langle \sigma v\right\rangle_{\mathrm{ann}}+2\left\langle\sigma v\right\rangle_{ \mathrm{semi}}}{x^{2}}\,, \tag{18}\] where \(x_{f}\approx 25-26\) and \(x=M_{X}/T\). The correct relic abundance \(\Omega_{\mathrm{DM}}h^{2}=0.120\pm 0.001\) is reproduced if \[g_{X}\approx 0.9\times\sqrt{\frac{M_{X}}{1\,\,\mathrm{TeV}}}\,. \tag{19}\] Finally, DM particles can scatter off of nucleons, with the spin-independent cross section given by \[\sigma_{\mathrm{SI}}=\frac{m_{N}^{4}f^{2}}{16\pi v^{2}}\bigg{(}\frac{1}{M_{S }^{2}}-\frac{1}{M_{H}^{2}}\bigg{)}^{2}g_{X}^{2}\sin^{2}2\alpha\simeq\frac{64 \pi^{3}f^{2}m_{N}^{4}}{81M_{X}^{6}}\approx 0.6\times 10^{-45}\,\, \mathrm{cm}^{2}\left(\frac{\mathrm{TeV}}{\mathrm{M_{X}}}\right)^{6}\,. \tag{20}\] Then, to evade the experimental bounds we would have \(\sigma_{\mathrm{SI}}<1.5\times 10^{-45}\,\,\mathrm{cm}^{2}\,\,(\mathrm{M_{X}}/ \mathrm{TeV})\) for \(M_{X}>0.88\,\mathrm{TeV}\). ## 4 Finite temperature The temperature-dependent effective potential is \[V(h,\varphi,T)=V^{(0)}(h,\varphi)+V^{(1)}(h,\varphi)+V^{T}(h,\varphi,T)+V_{\rm daisy }(h,\varphi,T). \tag{11}\] The finite-temperature correction is \[V^{T}(h,\varphi,T)=\frac{T^{4}}{2\pi^{2}}\sum_{a}n_{a}J_{a}\left(\frac{M_{a}(h, \varphi)^{2}}{T^{2}}\right), \tag{12}\] where the sum runs over particle species. \(J_{a}\) denotes the thermal function, which is given by \[J_{F,B}(y^{2})=\int_{0}^{\infty}\!{\rm d}x\,x^{2}\log\Bigl{(}1\pm e^{-\sqrt{x^ {2}+y^{2}}}\Bigr{)}, \tag{13}\] where "\(+\)" for fermions (\(J_{F}\)) and "\(-\)" for bosons (\(J_{B}\)). The correction from the daisy-resummed diagrams is \[V_{\rm daisy}(h,\varphi,T)=-\frac{T}{12\pi}\sum_{i}n_{i}\left[\left(M_{i,\rm th }^{2}(h,\varphi,T)\right)^{3/2}-(M_{i}^{2}(h,\varphi))^{3/2}\right], \tag{14}\] where \(n_{i}\) is the number of degrees of freedom, \(M_{i,\rm th}\) denotes thermally corrected mass, and \(M_{i}\) the usual field dependent mass. 
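The thermal functions \(J_{B}\) and \(J_{F}\) defined above can be evaluated by direct numerical integration, as in the minimal sketch below; in practice they are often tabulated or replaced by high- and low-temperature expansions for speed. The massless limits \(J_{B}(0)=-\pi^{4}/45\) and \(J_{F}(0)=7\pi^{4}/360\) provide a quick consistency check.

```python
# Minimal sketch: numerical evaluation of the thermal functions
# J_{F,B}(y^2) = int_0^inf dx x^2 log(1 +/- exp(-sqrt(x^2 + y^2))),
# with "+" for fermions and "-" for bosons, as in the text.
import numpy as np
from scipy.integrate import quad

def J_thermal(y2, fermion=False):
    sign = 1.0 if fermion else -1.0
    integrand = lambda x: x**2 * np.log(1.0 + sign * np.exp(-np.sqrt(x**2 + y2)))
    val, _ = quad(integrand, 0.0, 50.0)   # the integrand is negligible beyond x ~ 50
    return val

print(J_thermal(0.0), -np.pi**4 / 45)                    # bosonic massless limit
print(J_thermal(0.0, fermion=True), 7 * np.pi**4 / 360)  # fermionic massless limit
```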
The zero-temperature part of the effective potential along the \(\varphi\) direction reads \[V(\varphi)=\frac{1}{4}\lambda_{3}(t)Z_{\varphi}(t)^{2}\varphi^{4}+\frac{9M_{X }(\varphi,t)^{4}}{64\pi^{2}}\left(\log\frac{M_{X}(\varphi,t)^{2}}{\mu^{2}}- \frac{5}{6}\right), \tag{15}\] where \(t=\log\frac{\mu}{\mu_{0}}\), \(\mu_{0}=M_{Z}\), \(M_{X}(\varphi,t)=\frac{1}{2}g_{X}(t)\sqrt{Z_{\varphi}(t)}\varphi\), \(\mu=\frac{1}{2}g_{X}(M_{X})\varphi\equiv\overline{M}_{X}(\varphi)\). Note that we include more terms in the renormalisation-group improved potential than in the approaches often found in the literature. In detail: 1. The approach of [31, 64] approximates the running quartic coupling via its \(\beta\) function, relates the renormalisation scale with the field and uses as a reference scale the scale at which \(\lambda_{\varphi}\) changes sign, \[V_{1}\approx\frac{1}{4}\lambda_{3}(t)\varphi^{4}\approx\frac{1}{4}\frac{9g_{ X}^{4}}{128\pi^{2}}\log\biggl{(}\frac{\varphi}{\varphi_{0}}\biggr{)},\] (16) where \(t=\log\frac{\mu}{\varphi_{0}}\), \(\lambda_{\varphi}(0)=0\) and \(g_{X}\) is evaluated at \(\mu=\varphi_{0}\) (the running of \(g_{X}\) is not included). 2. The approach of [147] also approximates the one-loop potential by the tree-level potential with running coupling but uses \(\mu=\varphi\) and some fixed reference scale \(\mu_{0}=m_{t}\), \[V_{2}\approx\frac{1}{4}\lambda_{3}(t)\varphi^{4},\] (17) where \(t=\log\biggl{(}\frac{\varphi}{\mu_{0}}\biggr{)}\). To better understand which contributions are crucial we perform a series of approximations or modifications on our approach, the results of which are presented in the right panel of figure 2. Namely: 1. \(V_{a}\) corresponds to the potential \(V\) with the part proportional to the logarithm neglected. \(V_{a}\) exactly overlaps with the full potential (solid blue line). 2. \(V_{b}\) corresponds to the potential \(V\) with the choice of \(\mu=\varphi\) (darkest green, long-dashed line). This choice alone does not modify the potential significantly with respect to our choice (solid blue line). 3. \(V_{c}\) corresponds to the potential \(V\) with the constant \(-\frac{5}{6}\) neglected (dark green, medium-dashed curve). Since the omission of the logarithm (with our choice of the scale) does not visibly modify the result, \(V_{c}\) is equivalent to using the tree-level part of \(V\). Here the difference with respect to the full potential is significant. It is understandable, since the choice of the scale was such as to get rid of the logarithmic term but not the \(\frac{5}{6}\) constant. 4. \(V_{d}\) corresponds to the tree-level part of \(V\) but with the choice \(\mu=\varphi\) (light green, short-dashed line), which makes this choice very close to \(V_{1}\) and \(V_{2}\) discussed above. Clearly, \(V_{d}\) differs significantly from the full potential. ## 5 Phase transition and gravitational wave signal A first-order phase transition proceeds through nucleation, growth and percolation of bubbles filled with the broken-symmetry phase in the sea of the symmetric phase. This corresponds to the fields tunnelling through a potential barrier. In our case, we have checked that tunnelling proceeds along the \(\varphi\) direction, while the transition in the \(h\) direction is smooth. ### Important temperatures The temperatures relevant to our discussion are: Figure 2: Effective potential at zero temperature along the \(\varphi\) direction for a benchmark point with \(g_{X}=0.9\), \(M_{X}=10^{4}\,\mathrm{GeV}\) (defined at \(\mu=M_{X}\)). 
Left panel: Comparison of different approaches used in the literature, \(V_{1}\) of eq. (11) (yellow solid), \(V_{2}\) of eq. (12) (dashed red) and the full potential \(V\) of eq. (13) used in this work (solid blue). Right panel: Comparison of different approximations imposed on the full potential \(V\) of eq. (13) used in this work (solid blue) discussed in the main text: \(V_{b}\) (long-dashed darkest green), \(V_{c}\) (medium-dashed dark green), \(V_{d}\) (short-dashed light green). Critical Temperature \(T_{c}\).At high temperatures the symmetry is restored and the effective potential has a single minimum at the origin of the field space. As the Universe cools down, a second minimum is formed. At the critical temperature, the two minima are degenerate, and for lower temperatures, the minimum with broken symmetry becomes the true vacuum. This is the temperature at which the tunnelling becomes possible. Thermal Inflation Temperature \(T_{V}\).If there is large supercooling, i.e. the phase transition is delayed to low temperatures, much below the critical temperature, it is possible that a period of thermal inflation due to the false vacuum energy appears before the phase transition completes. The Hubble parameter can be written as \[H^{2}=\frac{1}{3\bar{M}_{\rm Pl}^{2}}(\rho_{R}+\rho_{V})=\frac{1}{3\bar{M}_{ \rm Pl}^{2}}\left(\frac{T^{4}}{\xi_{g}^{2}}+\Delta V\right),\quad\xi_{g}=\sqrt {30/(\pi^{2}g_{*})}\,, \tag{10}\] where \(\Delta V\) is the difference between the values of the effective potential at false and true vacuum. The onset of the period of thermal inflation can be approximately attributed to the temperature at which vacuum and radiation contribute to the energy density equally, \[T_{V}\equiv\left(\xi_{g}^{2}\Delta V\right)^{\frac{1}{4}}. \tag{11}\] For supercooled transitions, it is a good approximation to assume that \(\Delta V\) is independent of the temperature below \(T_{V}\). By using the temperature \(T_{V}\), the Hubble constant can be rewritten as \[H^{2}\simeq\frac{1}{3\bar{M}_{\rm Pl}^{2}\xi_{g}^{2}}\left(T^{4}+T_{V}^{4} \right). \tag{12}\] In the case of large supercooling, the contribution to the Hubble parameter from radiation energy can be neglected, leaving \[H^{2}\simeq H_{V}^{2}=\frac{1}{3\bar{M}_{\rm Pl}^{2}}\Delta V. \tag{13}\] In figure 3 there are the same excluded areas as before and two new shaded regions. The lower left corner (darkest grey) is not analysed because there the PT is sourced by the QCD phase transition, which is beyond the scope of the present work. The light-grey region around \(M_{X}\approx 10^{6}\,\)GeV is where the percolation criterion of eq. (11) is violated and is discussed in more detail below. Nucleation Temperature \(T_{n}\).Below the critical temperature, nucleation of bubbles of true vacuum becomes possible. To compute the decay rate of the false vacuum we start by solving the bounce equation, \[\frac{\mathrm{d}^{2}\varphi}{\mathrm{d}r^{2}}+\frac{2}{r}\frac{\mathrm{d} \varphi}{\mathrm{d}r}=\frac{\mathrm{d}V(\varphi,T)}{\mathrm{d}\varphi},\qquad \frac{\mathrm{d}\varphi}{\mathrm{d}r}=0\quad\text{ for }\quad r=0\quad\text{ and }\quad \varphi\to 0\quad\text{ for }\quad r\to\infty. \tag{14}\] Once the bubble profile is known we can compute the Euclidean action along the tunnelling path \[S_{3}(T)=4\pi\int r^{2}\mathrm{d}r\frac{1}{2}\left(\frac{\mathrm{d}\varphi}{ \mathrm{d}r}\right)^{2}+V(\varphi,T). 
\tag{15}\] Then the decay rate of the false vacuum due to the thermal fluctuations is given by \[\Gamma(T)\approx T^{4}\left(\frac{S_{3}(T)}{2\pi T}\right)^{3/2}e^{-S_{3}(T)/T}. \tag{10}\] The nucleation temperature is defined as the temperature at which at least one bubble is nucleated per Hubble volume, which can be interpreted as the onset of the PT. \[N(T_{n})=1=\int_{t_{\rm c}}^{t_{n}}dt\frac{\Gamma(t)}{H(t)^{3}}=\int_{T_{n}}^{ T_{c}}\frac{dT}{T}\frac{\Gamma(T)}{H(T)^{4}}. \tag{11}\] The common criterion for evaluating \(T_{n}\) as \(S_{3}/T_{n}\approx 140\) is not reliable in the case of strongly supercooled transitions. Percolation Temperature \(T_{p}\).When the bubbles of the true vacuum percolate, most of the bubble collisions take place. Therefore, the percolation temperature is the relevant temperature for the GW signal generation. The probability of finding a point still in the false vacuum at a certain temperature is given by \(P(T)=e^{-I(T)}\), where \(I(T)\) is the amount of true vacuum volume per unit comoving volume and reads as \[I(T)=\frac{4\pi}{3}\int_{T}^{T_{c}}{\rm d}T^{\prime}\frac{\Gamma(T^{\prime})}{ T^{4}H(T^{\prime})}\left(\int_{T}^{T^{\prime}}\frac{{\rm d}\tilde{T}}{H( \tilde{T})}\right)^{3}. \tag{12}\] We can distinguish between the vacuum and radiation domination period which leads to the Hubble parameter in the following form: \[H(T)\simeq\left\{\begin{array}{ll}H_{\rm R}(T)=\frac{T^{2}}{\sqrt{3M_{\rm pl }\tilde{\epsilon}_{\rm gs}^{2}}},&\mbox{for}\quad T>T_{V},\\ H_{\rm V}=\frac{T_{V}^{2}}{\sqrt{3M_{\rm pl}\tilde{\epsilon}_{\rm gs}^{2}}},& \mbox{for}\quad T<T_{V}.\end{array}\right. \tag{13}\] Figure 3: The values of the critical temperature \(T_{c}\) (left panel) and the temperature at which thermal inflation starts \(T_{V}\) (right panel). We can thus write a simplified version of \(I(T)\) valid in the region where \(T<T_{V}\): \[I_{\rm RV}(T)=\frac{4\pi}{3H_{\rm V}^{4}}\left(\int_{T_{V}}^{T_{c}} \frac{dT^{\prime}\Gamma\left(T^{\prime}\right)}{T^{\prime 6}}T_{V}^{2}\left(2T_{V}-T- \frac{T_{V}^{2}}{T^{\prime}}\right)^{3}+\int_{T}^{T_{V}}\frac{dT^{\prime} \Gamma\left(T^{\prime}\right)}{T^{\prime}}\left(1-\frac{T}{T^{\prime}}\right)^ {3}\right) \tag{21}\] The percolation criterion is given by \[I_{\rm RV}(T_{p})=0.34\,,\qquad{\rm or}\qquad P(T_{p})=0.7\,. \tag{22}\] The fraction \(0.34\) is the ratio of the volume in equal-size and randomly-distributed spheres (including overlapping regions) to the total volume of space for which percolation occurs in three-dimensional Euclidean space, and implies that at \(T_{p}\) at least \(34\%\) of the (comoving) volume is converted to the true minimum. Comparing to the values of \(T_{n}\) (fig. 4) one can see that these two temperatures are of the same order, yet they differ, hence one should not use \(T_{n}\) as a proxy for the temperature at which the PT proceeds in case of the models with large supercooling. One also needs to make sure that the volume of the false vacuum \(V_{f}\sim a^{3}(T)P(T)\) is decreasing around the percolation temperature. This condition is especially constraining in models featuring strong supercooling, as thermal inflation can prevent bubbles from percolating. It can be expressed as \[\frac{1}{V_{f}}\frac{{\rm d}V_{f}}{{\rm d}t}=3H(t)-\frac{{\rm d}I( t)}{{\rm d}t}=H(T)\left(3+T\,\frac{{\rm d}I(T)}{{\rm d}T}\right)<0. \tag{23}\] Figure 4: The values of the nucleation temperature \(T_{n}\) (left panel) and the percolation temperature \(T_{p}\) (right panel). 
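For illustration, the nucleation condition \(N(T_{n})=1\) can be evaluated numerically once \(S_{3}(T)/T\) is known. The short sketch below uses a hypothetical placeholder action curve, radiation domination, \(g_{*}=100\), and the reduced Planck mass, none of which are fixed by the text; in the actual analysis the action comes from solving the bounce equation for the RG-improved potential.

```python
import numpy as np
from scipy.optimize import brentq

Mpl_red = 2.435e18   # reduced Planck mass [GeV]; convention assumed, not fixed by the text
g_star  = 100.0      # relativistic degrees of freedom; illustrative value
Tc      = 100.0      # critical temperature [GeV]; illustrative value

def S3_over_T(T):
    # Hypothetical placeholder for the bounce action S3(T)/T.  In the actual
    # analysis it follows from the bounce equation for the RG-improved potential;
    # any smooth curve diverging at Tc is enough for this demonstration.
    return 160.0 * (T / Tc)**2 / (1.0 - T / Tc)**2

def Gamma(T):
    # Thermal decay rate of the false vacuum, Gamma ~ T^4 (S3/2piT)^(3/2) exp(-S3/T).
    s = S3_over_T(T)
    return T**4 * (s / (2.0 * np.pi))**1.5 * np.exp(-s)

def Hubble(T):
    # Radiation-dominated Hubble rate (vacuum-energy term neglected for the demo).
    return np.sqrt(np.pi**2 * g_star / 90.0) * T**2 / Mpl_red

def N_bubbles(T, n=4000):
    # Expected number of bubbles per Hubble volume between T and Tc.
    Tp = np.linspace(T, 0.999 * Tc, n)
    integrand = Gamma(Tp) / (Tp * Hubble(Tp)**4)
    return np.trapz(integrand, Tp)

if __name__ == "__main__":
    Tn = brentq(lambda T: N_bubbles(T) - 1.0, 1e-2 * Tc, 0.99 * Tc)
    print(f"T_n = {Tn:.2f} GeV,  S3(T_n)/T_n = {S3_over_T(Tn):.1f}")
```

The same machinery, with the two-regime Hubble rate given above, yields the percolation temperature from the criterion \(I_{\rm RV}(T_{p})=0.34\).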
Reheating Temperature \(T_{r}\).At the end of the phase transition, the Universe is in a vacuum-dominated state. Then the total energy released in the phase transition is \(\Delta V(T_{p})\approx\Delta V(T=0)\equiv\Delta V\). If reheating is instantaneous, this whole energy is turned into the energy of radiation, \[\Delta V=\rho_{R}(T_{r})=\rho_{R}(T_{V})\qquad\rightarrow\qquad T_{r}=T_{V}. \tag{10}\] On the other hand, if at \(T_{p}\) the rate of energy transfer from the \(\varphi\) field to the plasma, \(\Gamma_{\varphi}\), is smaller than the Hubble parameter, \(\Gamma_{\varphi}<H(T_{p})\), then the energy will be stored in the scalar field oscillating about the true vacuum and redshift as matter until \(\Gamma_{\varphi}\) becomes comparable to the Hubble parameter. In this case \[T_{r}=T_{V}\sqrt{\frac{\Gamma_{\varphi}}{H_{*}}}. \tag{11}\] The rate of energy transfer from \(\varphi\) to the plasma reads \[\Gamma_{\varphi}= \xi_{S}^{2}(1-\xi_{S}^{2})\Gamma_{\rm SM}(S)+(1-\xi_{S}^{2}) \Gamma(S\to HH),\quad\xi_{S}=\left\{\begin{array}{ll}-\sin\theta& \mbox{for}\;\;M_{H}\leqslant M_{S}\\ \cos\theta&\mbox{for}\;\;M_{H}>M_{S}\end{array}\right. \tag{12}\] where \(\Gamma_{\rm SM}\) denotes a decay width computed as in the SM, i.e. with the same couplings and decay channels, but for a particle of mass \(M_{S}\). The mixing enhances the decay width twofold, first, it amplifies the coupling \(SHH\) as compared to \(\phi hh\) and, moreover, it allows a contribution from the SM sector, which is especially important when the \(S\to HH\) decay is kinematically forbidden. Figure 5: Contour plot of the decimal logarithm of the ratio of the energy transfer rate \(\Gamma_{\varphi}\) to the Hubble parameter \(H\). The equality \(H=\Gamma_{\varphi}\) is indicated as a thick black solid line in the lower right corner. The percolation bound is shown as a black dashed line (in other plots it is shown as a light-grey region). ### Supercool Dark Matter? The authors of [31, 64, 76] claim that for a wide range of parameters, there can be supercool DM. Their main assumptions are: * The true vacuum has zero energy, the energy in the false vacuum is \(\Delta V\simeq 9m_{\chi}^{4}/(128\pi^{2})\), which implies that supercooling starts at \[T_{V}\simeq\frac{M_{X}}{8.5}\quad\text{and}\quad H_{*}=\sqrt{\frac{3}{\pi}}\, \frac{M_{X}^{2}}{4M_{\text{pl}}}\,.\] * Nucleation occurs when \(S_{3}(T_{n})/T_{n}\simeq 4\ln\bigl{(}M_{\text{pl}}/m_{\chi}\bigr{)}\simeq 142\). * The reheating temperature is related to the thermal inflation temperature as \(T_{\text{r}}=T_{V}\,\,\min\left(1,\Gamma/H\right)^{1/2}\), where \(\Gamma\simeq\Gamma_{h}\sin^{2}(v/w)\), with \(\Gamma_{h}\approx 4\,\,\text{MeV}\). 
* The DM abundance resulting from inflationary supercooling is \[Y_{\text{DM}}\equiv\frac{n_{\text{DM}}|_{T=T_{\text{r}}}}{s|_{T=T_{\text{r}}} }=\frac{45g_{\text{DM}}}{2\pi^{4}g_{*}}\,\frac{T_{\text{r}}}{T_{\text{V}}} \left(\frac{T_{\text{n}}}{T_{\text{V}}}\right)^{3}\,.\] * For \(T_{\text{r}}<T_{\text{dec}}\simeq M_{X}/25\), both supercooling and sub-thermal production contribute to the DM relic abundance, \[\Omega_{\text{DM}}h^{2}=\Omega_{\text{DM}}h^{2}|_{\text{supercool}}+\Omega_{ \text{DM}}h^{2}|_{\text{sub-thermal}}\,.\] * For \(T_{\text{r}}>T_{\text{dec}}\), the plasma thermalizes again, and the usual freeze-out mechanism yields the relic abundance, \[\Omega_{\text{DM}}h^{2}=\Omega_{\text{DM}}h^{2}|_{\text{freeze-out}}\,.\] Nevertheless, our analysis suggests that due to the percolation criterion which excludes \(M_{X}\) above \(\sim 10^{6}\,\,\text{GeV}\) and the fact that \(\Gamma_{\varphi}>H(T_{p})\) in the rest of the DM range, we find \(T_{r}>T_{\text{dec}}\) for all parameter points. Hence, the supercool DM population gets diluted away, the sub-thermal population reaches thermal equilibrium again, and the relic abundance is produced as in the standard freezeout scenario (see fig. 6). Our conclusions were also validated in a recent paper [152]. ### Gravitational waves The GW signal in the model under consideration can be sourced by bubble collisions. The spectrum is: \[\Omega_{\text{col}}(f)=\left(\frac{R_{*}H_{*}}{5}\right)^{2}\left(\frac{ \kappa_{\text{col}}\alpha}{1+\alpha}\right)^{2}S_{\text{col}}(f)\,. \tag{5.17}\] where \(R_{*}\) is the length scale of the transition, \(\kappa_{\text{col}}\) is the energy transfer efficiency factor at the end of the transition and \(\alpha=\Delta V/\rho_{R}(T_{p})\) is the transition strength. The spectral shape \(S_{\text{col}}\) and peak frequency are defined as \[S_{\text{col}}=25.09\left[2.41\left(\frac{f}{f_{\text{col}}}\right)^{-0.56}+2.3 4\left(\frac{f}{f_{\text{col}}}\right)^{0.57}\right]^{-4.2}\,,\qquad f_{\text{ col}}\simeq 0.13\left(\frac{5}{R_{*}H_{*}}\right). \tag{5.18}\] The spectra of the sound-wave-sourced GW are expressed as: \[\Omega_{\rm sw}(f)=\left(\frac{R_{*}H_{*}}{5}\right)\left(1-\frac{1}{\sqrt{1+2 \tau_{\rm sw}H_{*}}}\right)\left(\frac{\kappa_{\rm sw}\alpha}{1+\alpha}\right)^ {2}S_{\rm sw}(f), \tag{19}\] with \[S_{\rm sw}(f)=\left(\frac{f}{f_{\rm sw}}\right)^{3}\left[\frac{4}{7}+\frac{3}{ 7}\left(\frac{f}{f_{\rm sw}}\right)^{2}\right]^{7/2}, \tag{20}\] where the duration of the sound wave period normalised to Hubble and the peak frequency can be expressed as \[\tau_{\rm sw}H_{*}=\frac{R*H_{*}}{U_{f}},\quad U_{f}\simeq\sqrt{\frac{3}{4} \frac{\alpha}{1+\alpha}\kappa_{\rm sw}}\,,\quad f_{\rm sw}\simeq 0.54\left( \frac{5}{R_{*}H_{*}}\right)\,. \tag{21}\] To assess the observability of a signal we compute the signal-to-noise (SNR) ratio for the detectors that have the best potential of observing the predicted signal, i.e. LISA and AEDGE. We calculate the SNR using the usual formula [153, 154]: \[{\rm SNR}=\sqrt{\mathcal{T}\int_{f_{\rm min}}^{f_{\rm max}}{\rm d}f\left[ \frac{h^{2}\Omega_{\rm GW}(f)}{h^{2}\Omega_{\rm Sens}(f)}\right]^{2}}, \tag{22}\] Figure 6: Left: Dark matter relic abundance \(\Omega_{X}h^{2}\) with colour changing according to the value of the gauge coupling \(g_{X}\). The black lines correspond to the measured value \(\Omega_{\rm DM}h^{2}=0.120\pm 5\sigma\). Right: The spin-independent dark matter-nucleon cross section. 
The coloured region corresponds to points that reproduce the measured relic abundance within \(5\sigma\). The lines represent the exclusion limits from the XENON1T 2018 [148] (solid), PandaX-4T 2021 [149] (dashed), LZ 2022 [150] (large dashed) and the scheduled XENONnT [151] (dot dashed) experiments. Here \(\mathcal{T}\) is the duration of data collection and \(h^{2}\Omega_{\rm Sens}(f)\) is the sensitivity curve of a given detector. For the calculations we have used data-collection durations of \(\mathcal{T}_{\rm LISA}=75\%\cdot 4\) years [153] and \(\mathcal{T}_{\rm AEDGE}=3\) years [17]. We assume that a signal could be observed if \(\mathrm{SNR}>10\), which is the usual criterion. The results are presented in figure 7. Superimposed is a curve indicating where in the parameter space the correct DM relic density is reproduced and the DM direct detection constraints are satisfied (solid black). Strikingly, the SNR for LISA is above the observability threshold throughout the whole parameter space, and in almost the whole of it in the case of AEDGE. This means that a first-order phase transition sourced by tunnelling of a scalar field in the present model should be thoroughly testable by LISA and AEDGE. Moreover, if no signal consistent with the expectations for a first-order phase transition is observed, this scenario could be falsified. The points with the correct DM relic abundance that are not excluded by direct detection experiments (solid black line in figure 7) lie in a region of relatively weaker signal, which is nevertheless still well observable with LISA and AEDGE. The GW signal in the region where the correct abundance is reproduced is sourced entirely by sound waves. Examples of spectra for points along the black line in figure 7 are shown in figure 8. ### Renormalisation-scale dependence Finally, we perform scans of the parameter space at fixed \(\mu\). This will tell us how our understanding of the parameter space and observability of the GW signal depends on the renormalisation scale. Figure 7: Results for the signal-to-noise ratio for LISA (left panel) and AEDGE (right panel) for the predicted GW signal. The black line corresponds to the points that reproduce the measured DM relic abundance and also evade the DM direct detection experimental constraints. Figure 9 shows the results for the percolation temperature \(T_{p}\) computed at different scales (\(\mu=M_{X}\), left; \(\mu=M_{Z}\), right) together with the previous constraints on the parameter space. Both figures indicate a striking dependence on the renormalisation scale. This has further implications, since for \(T_{p}\lesssim 0.1\,\mathrm{GeV}\) the PT is believed to be sourced by QCD effects, which changes the nature and properties of the transition. In this work we focus on the PT sourced by tunnelling, and therefore the considered parameter space changes dramatically as the renormalisation scale is changed. Also, the answer to a basic question, namely whether or not the PT completes via percolation of bubbles of the true vacuum, is altered by the change of the renormalisation scale, as can be seen by comparing the \(\mu=M_{X}\) (left) and \(\mu=M_{Z}\) (right) panels of figure 9 and examining the percolation criterion (light-grey shaded region).
The results show that the change of the scale at which computations are performed not only changes the results quantitatively, by shifting the values of the characteristic parameters of the phase transition, but it also significantly modifies them qualitatively - by modifying the character of the phase transition, the very fact of its completion and the dominant source of the GW signal. ## 6 Summary and conclusions In the present work, we studied a model endowed with classical scale invariance, a dark SU(2)\({}_{X}\) gauge group and a scalar doublet of this group. This model provides a dynamical mechanism of generating all the mass scales via radiative symmetry breaking, while featuring only two free parameters. Moreover, it provides dark matter candidates - the three gauge bosons of the SU(2)\({}_{X}\) group which are degenerate in mass - stabilised by an intrinsic \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}^{\prime}\) symmetry. Like other models with scaling symmetry, the studied model exhibits strong supercooling which results in the generation of an observable gravitational-wave signal. Motivated by these attractive features we performed an analysis of the phase transition, gravitational wave generation and dark matter relic abundance, updating and extending the existing results [71, 72, 73, 50, 74, 75, 76, 25, 31, 32, 71, 72, 73, 74, 75, 76]. The analysis features the key ingredients: * careful analysis of the potential in the light of radiative symmetry breaking; * using renormalisation-group improved potential which includes all the leading order terms; * using RG-running to move between various relevant scales: the electroweak scale for scalar mass generation, the scale of the mass of the new scalar for its decay during reheating; * careful analysis of the supercooled phase transition, following recent developments, in particular imposing the percolation criterion which proved crucial for phenomenological predictions; * analysis of dark matter relic abundance in the light of the updated picture of the phase transition; * analysis of gravitational-wave spectra using most recent results from simulations; * using fixed-scale potential, in addition to the renormalisation-group-improved one, to study the scale dependence of the results. The first and foremost result of our analysis is that within the model the gravitational wave signal sourced by a first-order phase transition associated with the \(SU(2)_{X}\) and electroweak symmetry breaking is strong and observable for the whole allowed parameter space. This is an important conclusion since it allows this scenario to be falsified in case of negative LISA results. Second, we exclude the supercool dark matter scenario within the region where the phase transition proceeds via nucleation and percolation of bubbles of the true vacuum. It is a result of a combination of two reasons: we include the percolation condition, eq. (13), which allows to verify that a strongly supercooled phase transition indeed completes via percolation of bubbles and strongly constrains the parameter space relevant for our analysis. Moreover, we improve on the computation of the decay rate of the scalar field \(\varphi\), which controls the reheating rate, which pushes the onset of inefficient reheating towards higher \(M_{X}\), beyond the region of interest. Third, we find the parameter space in which the correct relic dark matter abundance is predicted. 
It is produced via the standard freeze-out mechanism in the region with relatively low \(M_{X}\) and large \(g_{X}\). It is the region where the phase transition is relatively weak (compared with other regions of the parameter space), yet the gravitational-wave signal should be well observable with LISA. This parameter space is further reduced due to the recent direct detection constraints. Moreover, in the present work we focused on the issue of scale dependence of the predictions. Our approach to reducing this dependence was to implement the renormalisation-group improvement procedure, respecting the power counting of couplings to include all the relevant terms. For comparison, we present results of computations performed at fixed scale, where the dependence on the renormalisation scale is significant. It is important to note that with the change of the scale the predictions do not only change quantitatively, they can change qualitatively. For example, for computations performed at a fixed scale (both \(\mu=M_{X}\) and \(M_{Z}\)) gravitational waves sourced by bubble collisions are not present. At the same time, with RG improvement we see a substantial region where bubble collisions are efficient in producing an observable signal. To sum up, the classically scale-invariant model with an extra SU(2) symmetry remains a valid theoretical framework for describing dark matter and gravitational-wave signal produced during a first-order phase transition in the early Universe. It will be tested experimentally by LISA and other gravitational-wave detectors. The predictions, however, are sensitive to the theoretical procedures implemented. Therefore, it is crucial to improve our understanding of theoretical pitfalls affecting the predictions. The present work is a step in this direction. ###### Acknowledgments. We would like to thank Kristjan Kannike, Wojciech Kotlarski, Luca Marzola, Tania Robens, Martti Raidal and Rui Santos for useful discussions. We are indebted to Marek Lewicki for numerous discussions, clarifications and sharing data for the SNR plots. We are grateful to Joao Viana for his computation of Higgs decay width using hdecay and IT hints. We would also like to thank Matti Heikinheimo, Tomislav Prokopec, Tommi Tenkanen, Kimmo Tuominen and Ville Vaskonen for collaboration in the early stages of this work. AK was supported by the Estonian Research Council grants MOBTT5, MOBTT86, PSG761 and by the EU through the European Regional Development Fund CoE program TK133 "The Dark Side of the Universe". The work of BS and MK is supported by the National Science Centre, Poland, through the SONATA project number 2018/31/D/ST2/03302. AK would also like the thank the organizers of the "School and Workshops on Elementary Particle Physics and Gravity", Corfu 2022, for their hospitality during his stay, and for giving him the opportunity to present this work.
We performed an updated analysis of a classically scale-invariant model extended by a new $SU(2)$ gauge group, in which the symmetry breaking is associated with a first-order phase transition. Using recent progress in the understanding of supercooled phase transitions, we computed the characteristics of the transition and substantially constrained the parameter space of the model. The model predicts a gravitational-wave spectrum generated during this phase transition, and we find that it should be fully testable with LISA. The model also makes predictions for the dark matter relic abundance; our predictions are consistent with observations, but only in a narrow region of the parameter space. By improving the description of percolation and reheating after the transition, and by including the running of the couplings, we were able to place important constraints on the supercooled dark matter scenario.
2310.20585
Modelling of dust emission of a filament in the Taurus molecular cloud
Dust emission is an important tool in studies of star-forming clouds, as a tracer of column density and indirectly via the dust evolution that is connected to the history and physical conditions of the clouds. We examine radiative transfer (RT) modelling of dust emission over an extended cloud region, using a filament in the Taurus molecular cloud as an example. We examine how well far-infrared observations can be used to determine both the cloud and the dust properties. Using different assumptions of the cloud shape, radiation field, and dust properties, we fit RT models to Herschel observations of the Taurus filament. Further comparisons are made with measurements of the near-infrared extinction. The models are used to examine the degeneracies between the different cloud parameters and the dust properties. The results show significant dependence on the assumed cloud structure and the spectral shape of the external radiation field. If these are constrained to the most likely values, the observations can be explained only if the dust far-infrared (FIR) opacity has increased by a factor of 2-3 relative to the values in diffuse medium. However, a narrow range of FIR wavelengths provides only weak evidence of the spatial variations in dust, even in the models covering several square degrees of a molecular cloud. The analysis of FIR dust emission is affected by several sources of uncertainty. Further constraints are therefore needed from observations at shorter wavelengths, especially regarding the trends in dust evolution.
M. Juvela
2023-10-31T16:20:09
http://arxiv.org/abs/2310.20585v1
# Modelling of dust emission of a filament in the Taurus molecular cloud ###### Abstract Context:Dust emission is an important tool in studies of star-forming clouds, as a tracer of column density and indirectly via the dust evolution that is connected to the history and physical conditions of the clouds. Aims:We examine radiative transfer (RT) modelling of dust emission over an extended cloud region, using a filament in the Taurus molecular cloud as an example. We examine how well far-infrared observations can be used to determine both the cloud and the dust properties. Methods:Using different assumptions of the cloud shape, radiation field, and dust properties, we fit RT models to Herschel observations of the Taurus filament. Further comparisons are made with measurements of the near-infrared extinction. The models are used to examine the degeneracies between the different cloud parameters and the dust properties. Results:The results show significant dependence on the assumed cloud structure and the spectral shape of the external radiation field. If these are constrained to the most likely values, the observations can be explained only if the dust far-infrared (FIR) opacity has increased by a factor of 2-3 relative to the values in diffuse medium. However, a narrow range of FIR wavelengths provides only weak evidence of the spatial variations in dust, even in the models covering several square degrees of a molecular cloud. Conclusions:The analysis of FIR dust emission is affected by several sources of uncertainty. Further constraints are therefore needed from observations at shorter wavelengths, especially regarding the trends in dust evolution. ## 1 Introduction A number of space-borne observatories have made observations over near-infrared (NIR), mid-infrared (MIR), FIR, submillimetre, and millimetre wavelengths, and over extended regions of the interstellar medium (ISM). _Planck_ at \(\lambda\geq 350\mu\)m (Tauber et al. 2010), _Herschel_ at \(\lambda\)=70-500 \(\mu\)m (Pilbratt et al. 2010), _ISO_ at 2.4-250 \(\mu\)m (Kessler et al. 1996), _AKARI_ at \(\lambda\)=1.7-180 \(\mu\)m(Murakami et al. 2007), _Spitzer_ at 3.6-160 \(\mu\)m (Werner et al. 2004), and WISE at 3-22 \(\mu\)m (Wright et al. 2010). This means that many nearby clouds have been observed with full wavelength coverage of the observable dust emission. The data have shown that the emission properties of interstellar dust are not constant at Galactic scales (Planck Collaboration et al. 2014a) and also change drastically between diffuse and molecular regions (Lagache et al. 1998; Cambresy et al. 2001) and even at smaller scales. One of the effects is the enhanced FIR/sub-millimetre opacity of dense molecular clouds, which is believed to be connected to the formation of ice mantles, increased grain sizes, and the formation of dust aggregates in the densest and coldest parts of the molecular clouds. A strong effect was observed in a filament in Taurus already in (Stepnik et al. 2003), and this is now known to be a common phenomenon of the densest parts of molecular clouds (Roy et al. 2013; Juvela et al. 2015b). The changes in dust are reflected not only in the absolute opacity but also in the opacity spectral index \(\beta\), which gets typically has values in the range \(\beta\)=1.5-2.0, depending on both the observed source and the wavelengths. Balloon-borne telescopes (Bernard et al. 1999; Dupac et al. 2003; Desert et al. 2008; Paradis et al. 2009) and the analysis of _Herschel_ and _Planck_ data (Paradis et al. 
2010, 2012a; Planck Collaboration et al. 2014b, 2016) revealed variations over wide sky areas. Ground-based instruments have extended the studies to millimetre wavelengths, with resolution comparable to the space-borne FIR data. There is evidence of FIR spectrum getting steeper towards dense clumps and cores (Juvela et al. 2015a; Scibelli et al. 2023), but the spectrum flattens (\(\beta\) decreases) at longer wavelengths (Paradis et al. 2012b; Planck Collaboration et al. 2014b). Mason et al. (2020) found in Orion significant 3 mm and 1 cm excess over the \(\beta=1.7\) spectrum appropriate in the FIR regime, a possible sign of the presence of extremely large grains. Dust observations are often analysed via fits to the spectral energy distribution (SED). However, the warmest dust component within the telescope beam, including the line-of-sight (LOS) variations, has the largest effect on the SED. Therefore, the assumption of single temperature results in an overestimation of the (mass-averaged) dust temperature (Shetty et al. 2009b; Juvela & Ysard 2012b) and the dust column densities are underestimated. With observations at more frequencies, the data can be modelled as the sum of several temperature components or a general distribution of dust temperatures (Juvela 2023). The Abel transform has been applied to the analysis of cores (Roy et al. 2014; Bracco et al. 2017), making use of the strong constraint of the spherical symmetry of the source, which is then mostly applicable for isolated cores. More generally, methods like point process mapping (PPMAP; Marsh et al. 2015) decompose the spectrum along each line of sight into a sum of temperature components. PPMAP has been applied also to extended cloud regions, including cloud filaments in the Taurus molecular cloud and elsewhere Howard et al. (2019, 2021). The simultaneous determination of the temperature and \(\beta\) suffers from degeneracy, which makes the results sensitive to noise and any sources of systematic errors (Shetty et al. 2009a; Veneziani et al. 2010; Juvela & Ysard 2012a). This is only partly remedied by the use of hierarchical Bayesian modelling (Kelly et al. 2012; Juvela et al. 2013), where the additional constraint comes from the assumed similarity in the \((T,\beta)\) values in the observational set. With observations over a sufficiently wide wavelength range and with a sufficiently high signal-to-noise ratio (SN), it should be possible to extract some information also of spectral index variations and even the LOS temperature variations. Both are needed for accurate optical depth estimates and better column density and mass estimates. What the empirical SED-fitting methods lack is the requirement of physical self-consistency between the radiation field, the cloud structure, and the dust properties. Short wavelengths (from ultraviolet to NIR) determine how much energy dust absorbs and how this heating varies over the source. The energy is re-emitted at longer wavelengths, mainly in the FIR. The FIR dust properties affect the final dust temperature and, especially via \(\beta\), the shape of the observed SEDs. Full self-consistency between the different wavelengths and different source regions is enforced only in a complete RT model. This should be a useful additional constraint when observations are used to study dust properties and especially their spatial variations. The advantages of full modelling are not always self-evident. 
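As a concrete illustration of the \((T,\beta)\) degeneracy mentioned above, the following minimal sketch fits noisy synthetic modified-blackbody fluxes and prints the correlation of the recovered parameters; the band set, noise level, and true parameter values are illustrative choices, not values from the present data.

```python
import numpy as np
from scipy.optimize import curve_fit

h, k, c = 6.626e-34, 1.381e-23, 2.998e8

def mbb(wave_um, T, beta, scale):
    """Modified blackbody, I_nu proportional to B_nu(T) * nu^beta (arbitrary scaling)."""
    nu = c / (wave_um * 1e-6)
    return scale * (nu / 1e12)**(3.0 + beta) / np.expm1(h * nu / (k * T))

rng = np.random.default_rng(0)
wave = np.array([160.0, 250.0, 350.0, 500.0])   # Herschel bands [micron]
true = mbb(wave, 15.0, 1.8, 1.0)                # illustrative T = 15 K, beta = 1.8

fits = []
for _ in range(500):
    obs = true * (1.0 + 0.05 * rng.standard_normal(wave.size))   # 5% noise
    try:
        p, _ = curve_fit(mbb, wave, obs, p0=(15.0, 1.8, 1.0),
                         bounds=([5.0, 0.5, 0.0], [40.0, 3.5, np.inf]))
        fits.append(p[:2])
    except RuntimeError:
        pass
fits = np.array(fits)
print("corr(T, beta) =", np.corrcoef(fits[:, 0], fits[:, 1])[0, 1])
```

The strong anticorrelation between the fitted temperature and spectral index is what makes single-pixel \((T,\beta)\) estimates sensitive to noise and calibration errors.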
When the whole source volume is interconnected via radiation transport and the density field is complex, it is very difficult to find a model that reproduces all observations within the observational uncertainties. The predictions of a model that fits observations only approximately are naturally less reliable. In contrast, in the simple SED fits the parameters (\(T\) and possibly \(\beta\)) can be tuned for each pixel separately. Therefore, the observations are matched well at each position, and the main uncertainties are caused by the more naive assumptions underlying the analysis (e.g. a single temperature). Even if a RT model does not fit the observations perfectly, the fit residuals can still provide valuable information on where the physical conditions or the dust properties differ from the model assumptions. The complexity of RT models (LOS cloud structure, spectrum and intensity of the external radiation field, positions and properties of potential embedded radiation sources, etc.) makes it difficult to cover all feasible parameter combinations. Thus, although a good fit might be reached with some RT model, the mapping of all potentially correct models (and the associated uncertainties) is hardly possible. In this paper we construct large-scale RT models for the LDN 1506 (B212-B215) filament in the Taurus molecule cloud. The small distance (\(\sim\)129 pc for B215; Galli et al. 2019) and the absence significant local heating makes the region relatively easy to model. The models cover an area of \(1.8\times 1.8\) degrees, and they are optimised to reproduce the FIR observations over the full field. The FIR emission of individual filament cross sections was already modelled in Ysard et al. (2013), suggesting clear dust evolution and increased FIR dust emissivity in the filament. Rather than trying to develop a definite model for the region and its dust property variations, we examine to what extent the FIR observations are able to constrain the dust properties. We compare models with different dust properties and investigate other factors, such as the 3D cloud density field and the external radiation field, that could confuse the evidence for dust evolution. In these models, the model parameters (apart from possible density-dependent changes in dust properties) cannot be tuned locally, so a perfect match to the observations is not to be expected. However, if the modelling were successful (and preferably without significant degeneracies), it could provide simultaneously valuable information on the dust properties, the cloud mass and mass distribution, and the radiation field. The contents of the paper are the following. We present the observational data in Sect. 2 and the methods of RT modelling and employed dust models are presented in Sect.3. The results from the optimised RT models with basic dust models are presented in Sect. 4. We discuss the results in Sect. 5, where we also investigate further the dust property changes that are needed to explain the Taurus observations. The final conclusions are listed in Sect. 6. ## 2 Observational data The cloud models are optimised based on 250-500 \(\mu\)m surface brightness observations. Measurements of NIR extinction and 160 \(\mu\)m emission are used afterwards to check, how well the models predict these shorter-wavelength data. 
### Surface brightness maps The 250 \(\mu\)m, 350 \(\mu\)m, and 500 \(\mu\)m FIR surface brightness maps were all observed with the _Herschel_ spectral and photometric imaging receiver (SPIRE) instrument, and the relative accuracy of the observations is expected to be better than 2%1. The Taurus SPIRE observations we made as part of the Gould Belt project (PI Ph. Andre; parallel mode maps, OBS ID 1342204860 and 1342204860). The angular resolution of the observations is about 18, 26, and 37 arcsec for the three bands in the order of increasing wavelength. For the modelling, the 250 \(\mu\)m and 350 \(\mu\)m Figure 1: _Herschel_ 250 \(\mu\)m map of the Taurus filament. The model fits are compared mainly to the data inside the cyan contour (“filament region”), which corresponds to 10 MJy sr\({}^{-1}\) in the background-subtracted 350 \(\mu\)m map. The circles labelled A-C correspond to selected small regions, in order of increasing column density, that are used in the model comparisons. The white circle shows the area used for background subtraction. were degraded to correspond to the Gaussian beam with the full width at half maximum (FWHM) equal to 30\({}^{\prime\prime}\). The 500 \(\mu\)m data were similarly degraded to FWHM=40\({}^{\prime\prime}\). The lower resolution relaxes the requirements for the spatial resolution of the RT models and reduces the uncertainties connected to the beam shapes. All maps were resampled onto the same 10\({}^{\prime\prime}\) pixels in galactic coordinates. We subtracted from the maps the background emission that was estimated as the mean value within 3.9 arcmin of the position (\(l\),\(b\))=(170.747 deg, -16.724 deg) (Fig. 1). The data were colour corrected for modified blackbody (MBB) spectra \(\propto B_{\nu}(T)\times\nu^{2}\), where the temperatures were obtained from MBB fits at 40\({}^{\prime\prime}\) resolution and assuming \(\beta=1.8\). The colour corrections of the SPIRE channels are small (a couple of percent) and change only slowly as functions of temperature and spectral index. The 250 \(\mu\)m map is shown in Fig. 1. We also use _Herschel_ 160 \(\mu\)m maps that were observed with the photodetector array camera and spectrometer (PACS) instrument (PACS photometry maps, OBS ID 1342227304 and 1342227305). The data are convolved down to 30\({}^{\prime\prime}\) resolution and colour corrected for the same SED shape as the SPIRE channels. We assume for the PACS data a 4% uncertainty relative to the SPIRE data. ### Extinction of background stars NIR extinction can provide a good discriminator for different models that match the same FIR observations (Juvela et al. 2020). To estimate the NIR extinction, we use stars from the 2MASS survey (Skrutskie et al. 2006). Figure 2 shows the J-band extinction map calculated with the NICER method (Lombardi & Alves 2001) as well as the distribution of the individual stars. The area contains some 14 700 stars, 2364 of which are inside the cyan contour in Fig. 1, which we refer to as the filament region. The extinction map is made using the \(R_{\rm V}\)=4 extinction law (Cardelli et al. 1989). Due to the high density of the filament region, we adopt a value above the normal ISM value of \(R_{\rm V}\)=3.1, but the \(R_{\rm V}\) difference has a minimal effect at NIR wavelengths (Cardelli et al. 1989; Martin & Whittet 1990; Hensley & Draine 2023). The resolution of the extinction map (2\({}^{\prime}\) in Fig. 
2) is restricted by the number of background stars, and the stellar density decreases towards the densest regions. This results in higher uncertainty in A(J) towards density peaks and systematic errors in regions of extinction gradients. Some or even most of the bias can be eliminated statistically (Lombardi 2009, 2018). However, in the following analysis, we will use the individual stars rather than the continuous extinction map. The stars probe the extinction towards discrete positions, without the uncertainty of spatial interpolation. In spite of photometric errors and the uncertainty of the intrinsic colours of the individual stars, the number of stars is sufficient for reliable comparisons with the predictions of the fitted RT models. The latter have 30\({}^{\prime\prime}\) resolution and, by construction, no structure at scales below 10\({}^{\prime\prime}\). ## 3 Methods The modelling includes the construction of the initial model setup, including the selection of a dust model, and the optimisation of the model against FIR observations. Here the word "model" refers to the combination of the chosen density field, the description of the radiation sources, and the specification of the dust model. ### Density field The initial density field was constructed based on the observed 250 \(\mu\)m surface brightness, combined with an assumption of the line-of-sight (LOS) density profile. The absolute values or the variations in the initial mass distribution over the plane of the sky (POS) are not important, because the final values will result from the model optimisation. On the other hand, the assumed LOS cloud structure is an important parameter. A larger extent means that the medium receives more radiation, leading to smaller temperature gradients and higher surface brightness per column density. We used two options for the LOS structure. The first one is a Gaussian profile that is fully determined by the FWHM of the LOS density distribution. We use three values, \(FWHM\)=0.2, 0.5, and 0.9 pc. Here \(FWHM\)=0.2 pc is closest to the expected size of interstellar filaments, while the larger values might account for more extended cloud structures or a sheet-like geometry along the LOS direction. Alternatively, we use Plummer-type density profiles (Arzoumanian et al. 2011). Cross sections of the column density of the main filament were fitted with the Plummer model, which was then converted to the LOS density profile under the assumption of rotational symmetry. The parameters of the Plummer fit are the central density, the size of the central flat region, and the asymptotic powerlaw index \(p\). The parameters were allowed to vary along the filament, but the \(p\) parameter was limited to the range 1.7\(-\)3.5. For the RT calculations, the densities were discretised onto a three-dimensional hierarchical octree grid. The modelled area is \(106.7\times 106.7\) arcmin in size (\(\sim\)3.2 square degrees), and with a pixel size of 10\({}^{\prime\prime}\) corresponds to \(640\times 640\) resolution elements. The octree grid has a root grid of \(80^{3}\) cells. With three levels of refinement, the smallest cell size thus corresponds to the 10\({}^{\prime\prime}\) pixel size, and all maps of the models have the same \(640\times 640\) pixels as the observations. The refinement was based on volume density. By using the hierarchical discretisation, the total number of volume elements could be kept around 15 million.
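For concreteness, a minimal sketch of the Plummer-type filament profile (Arzoumanian et al. 2011) and of the conversion between volume density and column density under rotational symmetry is given below; the central density, flat radius, and index \(p\) used here are illustrative values, not the parameters fitted to the Taurus filament.

```python
import numpy as np

def plummer_density(r_pc, n_c, R_flat_pc, p):
    """Plummer-type volume density n(r) = n_c / [1 + (r/R_flat)^2]^(p/2)."""
    return n_c / (1.0 + (r_pc / R_flat_pc)**2)**(p / 2.0)

def column_density(b_pc, n_c, R_flat_pc, p, zmax_pc=2.0, nz=2001):
    """Column density N(H) for impact parameter b, assuming rotational symmetry:
    integrate n(sqrt(b^2 + z^2)) along the line of sight."""
    z = np.linspace(-zmax_pc, zmax_pc, nz)
    pc_cm = 3.086e18                      # parsec in cm
    r = np.sqrt(b_pc**2 + z**2)
    return np.trapz(plummer_density(r, n_c, R_flat_pc, p), z) * pc_cm

if __name__ == "__main__":
    # Illustrative filament parameters (not the fitted Taurus values)
    n_c, R_flat, p = 5.0e4, 0.03, 2.0     # cm^-3, pc, power-law index
    for b in (0.0, 0.05, 0.1, 0.3):
        print(f"b = {b:4.2f} pc   N(H) = {column_density(b, n_c, R_flat, p):.2e} cm^-2")
```

Fitting the projected form of this profile to the observed column-density cross sections and then sampling the corresponding \(n(r)\) on the model grid is what produces the LOS structure used in the Plummer-based runs.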
During the optimisation, the density field is modified based on the observed and model-predicted 350 \(\mu\)m surface brightness maps, and the ratio in each map pixel is used to scale the densities in all cells along the same LOS. The correction changes the model column densities but does not change the original LOS density profile. Figure 2: The 2MASS stars (black dots) plotted on the NICER \(A\)(J) extinction map. The nominal resolution of the extinction map is 2 arcmin. ### Radiation field The dust is in the simulations heated by anisotropic background radiation. The angular distribution of the sky brightness is taken from COBE DIRBE allsky maps (Boggess et al. 1992), where the closest band is also used for frequencies outside the observed range. DIRBE is used only for the angular distribution, and the values were rescaled to match the level and spectral shape of the Mathis et al. (1983) model of the interstellar radiation field (ISRF). The strength of this external field is in RT models left as a free parameter, with the same multiplier for all frequencies. This scalar scaling parameter is optimised based on the observed and the model-predicted average surface brightness ratios \(I_{\nu}(250\,\mu\)m/\(I_{\nu}(500\,\mu\)m that are calculated over the filament region (inside the cyan contour in Fig. 1). The more diffuse regions are excluded, also because they could be subjected to different radiation fields along the LOS and could include more extended structures than in the model that is limited to a finite LOS depth. Because the modelled region is embedded in an extended molecular cloud, we consider alternative radiation fields that have suffered \(A_{V}\)=0-2 magnitudes of extinction due to cloud layers outside the modelled volume. The field is attenuated according to the extinction curve of the Compiegne et al. (2011) dust model. Extinction will remove more energy from the shortest wavelengths. The incoming radiation is then effectively moved to longer wavelengths where it suffers less attenuation, leading to smaller temperature gradients inside the model volume. The effect is therefore qualitatively similar to the effect of a larger LOS extent of the model cloud. Based on _Planck_ dust optical depth maps, the LOS extinction around inside the modelled areas and outside the main filaments is \(A_{\rm V}\sim 1\) mag. The scaling from 353 GHz includes large uncertainty, and the visual extinction could be smaller if one assumed the FIR opacity to be above the values found in diffuse regions of the Milky Way (Boulanger et al. 1996; Planck Collaboration XI 2014). Since the external layer would correspond to half of the full LOS extinction (and mutual shadowing provided by the dense filaments is quite small), the external layer is expected to correspond to less than \(A_{\rm V}\)=1 mag. The dust heating should be caused mainly by the stellar radiation field, which has some uncertainty in its level and spectral shape. Short-wavelength dust emission, such as from polycyclic aromatic hydrocarbons (PAHs), could also contribute to the dust heating at high optical depths, when the short-wavelength part of the ISRF is strongly attenuated. Also this would be part of the general ISRF rather than of local origin, since the reduced abundance and excitation of PAHs reduces their emission inside dense molecular clouds. 
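Before continuing, the two-part optimisation described above can be summarised as a simple fixed-point iteration. In the sketch below, `toy_rt_model` is a hypothetical stand-in for a full SOC radiative-transfer run; only the update rules (per-pixel density scaling from the 350 \(\mu\)m ratio and a single ISRF multiplier from the mean 250/500 \(\mu\)m colour) follow the text, and the damping exponent is a heuristic choice.

```python
import numpy as np

h, k, c = 6.626e-34, 1.381e-23, 2.998e8
BANDS_UM = (250.0, 350.0, 500.0)

def toy_rt_model(density, isrf, beta=1.8):
    """Toy stand-in for a radiative-transfer run (e.g. with SOC).  The dust
    temperature scales as T ~ U^(1/(4+beta)) and each band is optically thin
    MBB emission proportional to the (column) density map."""
    T = 15.0 * isrf**(1.0 / (4.0 + beta))
    maps = []
    for wl in BANDS_UM:
        nu = c / (wl * 1e-6)
        B = 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))
        maps.append(density * B * (nu / 1.2e12)**beta)
    return maps  # 250, 350, 500 um

def optimise(obs, density0, mask, n_iter=20):
    """Fixed-point iteration following the text: per-pixel densities are scaled
    by the observed/model 350 um ratio, and a single ISRF multiplier is tuned
    to match the mean 250/500 um colour inside the filament mask."""
    density, isrf = density0.copy(), 1.0
    for _ in range(n_iter):
        mod = toy_rt_model(density, isrf)
        density *= obs[1] / mod[1]
        obs_col = np.mean(obs[0][mask]) / np.mean(obs[2][mask])
        mod_col = np.mean(mod[0][mask]) / np.mean(mod[2][mask])
        isrf *= (obs_col / mod_col)**1.5   # heuristic damping factor
    return density, isrf

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    true_density = 1.0 + 5.0 * rng.random((32, 32))
    obs = toy_rt_model(true_density, isrf=2.0)   # synthetic 'observations', ISRF = 2
    mask = true_density > 3.0
    dens, isrf = optimise(obs, np.ones_like(true_density), mask)
    print("recovered ISRF scale:", round(isrf, 3))
```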
The combination of the ISRF scaling factor and the assumed external attenuation is effectively changing the balance between the heating by UV-optical and NIR-MIR radiation, whether the latter is due to PAHs or other sources. The contribution of even longer wavelengths (e.g. of the cosmic microwave background) is insignificant at the optical depths of the Taurus filaments. We include in some models also point sources in order to test the effects of potential embedded sources. The luminosity of each source is left as a free parameter (reported in units of the solar luminosity) and the emission is modelled as pure blackbody radiation. The source temperature is a again relevant parameter, because it affects the wavelength distribution of the heating radiation and how localised the effect of each point source remains. Because of the finite resolution of the models, the temperature assigned to the sources corresponds to the escaped radiation at some 1500 au distance from the source (the minimum cell size in the models), not the intrinsic emission of an embedded (young) stellar objects. The source luminosities are optimised based on the surface brightness at 20-60\({}^{\prime\prime}\) distance of each point source. The direct LOS to the source is excluded, because that is more dependent on coarse resolution of the RT model. ### Dust models One of the main goals of the paper is to compare how well observations can be matched with different assumptions of the dust properties. We use three basic dust models, the Compiegne et al. (2011) model (in the following COM), and the core-mantle-mantle grains (CMM) and aggregates with ice mantles (AMMI) from the THEMIS dust model (Jones et al. 2013; Ysard et al. 2016). We also include two ad hoc variations of the CMM model. In CMM-1 the opacity spectral index \(\beta\) is artificially reduced by 0.3 units for all wavelengths \(\lambda>70\,\mu\)m, while keeping the \(250\,\mu\)m opacity unchanged. In CMM-2 the \(\lambda>70\,\mu\)m opacities are increased by 50%, with no change in \(\beta\). The extinction curves (optical depth per Hydrogen atom) of the dust models are shown in Fig. 3. It is worth noting that the absolute level of the extinction curve will have no effect on the quality of the FIR fits nor the predictions for the NIR extinction. It will affect on the mass estimates (the conversion from optical depth to mass including division with opacity \(\kappa_{\nu}\)). On the other hand, the ratio between the optical and NIR wavelengths (where dust absorbs energy) and the FIR wavelengths (where energy is re-emitted) is important for the FIR fits and especially for the predicted NIR extinction values. In most runs, the same dust properties are used throughout the model volume. We test later also some models with a smooth transition from one set of dust properties at low densities to another at high densities. The transition is implemented with the help of fractional abundances \[\chi=\frac{1}{2}-\frac{1}{2}\tanh[4.0\times(\log_{10}n_{\rm H}-\log_{10}n_{0} )]. \tag{1}\] Thus, the model includes two dust components with fractional abundances \(\chi\) and \(1-\chi\), where the values depend on the local volume density \(n({\rm H})\) and the pre-selected density threshold \(n_{0}({\rm H})\). Figure 3: Extinction curves of the adopted dust models. Below \(70\,\mu\)m the CMM-1 and CMM-2 variants are identical to CMM. 
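The density-dependent mixing of eq. (1) is straightforward to implement; the short sketch below evaluates the fractional abundance \(\chi(n_{\rm H})\) of the low-density dust component and shows that, with the factor 4.0 in the argument, the transition between the two dust populations takes place within roughly \(\pm\)0.5 dex of the threshold \(n_{0}\). The threshold used here is an illustrative value.

```python
import numpy as np

def chi(n_H, n0):
    """Fractional abundance of the low-density dust component, eq. (1)."""
    return 0.5 - 0.5 * np.tanh(4.0 * (np.log10(n_H) - np.log10(n0)))

if __name__ == "__main__":
    n0 = 1.0e4                        # illustrative density threshold [cm^-3]
    for n in (1e2, 3e3, 1e4, 3e4, 1e6):
        x = chi(n, n0)
        print(f"n_H = {n:8.0e} cm^-3   chi = {x:5.3f}   1 - chi = {1 - x:5.3f}")
```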
### Radiative transfer calculations The radiative transfer calculations were carried out with the SOC Monte Carlo program (Juvela [2019]), using the volume discretisation described in Sect. 3.1. Thanks to the use of hierarchical grids, the model resolution was in dense regions 10\({}^{\prime\prime}\), better than for the observations, while the run times were still manageable. To keep the Monte Carlo noise at \(\lesssim\)1% level (well below that of observations), we used \(\sim 10^{7}\) photon packages per frequency for both the external field and each of the point sources. For point sources this is somewhat excessive, because the influence of each source is limited to a small region. SOC can handle spatial variations in the dust properties if, as described in Sect. 3.3, this is described using abundance variations for a finite number of dust components. A single RT run took between \(\sim\)10 seconds and a couple of minutes, depending on the number of point sources and dust components. Because the analysis was limited to emission at long wavelengths, \(\lambda\geq 160\,\mu\)m, the stochastic heating (relevant only for small grains) was not solved, and the dust was assumed to be in equilibrium with the radiation field. ### Modified blackbody fits The optical depths of the RT models will be compared to the values obtained by fitting the SEDs with MBB functions. In the case of a single temperature component and optically thin emission, the optical depth is \[\tau_{\nu}=I_{\nu}/B_{\nu}(T_{\rm d}), \tag{2}\] where the dust temperature \(T_{\rm d}\) is obtained by fitting the multi-frequency observations with a MBB function, \[I_{\nu}\propto B_{\nu}(T_{\rm d})\times\nu^{\beta}.\] The dust opacity is here assumed to follow a powerlaw, \(\kappa_{\nu}\propto\nu^{\beta}\). For this analysis, the surface brightness data are first convolved to the same resolution, which makes it possible to do the calculations for each map pixel separately. Juvela ([2023]) discussed alternative SED fits, where the dust temperatures is assumed to follow a normal distribution and both the mean temperature and the width of this temperature distribution are free parameters. The fit was done with Markov chain Monte Carlo (MCMC) methods and the angular resolution of the maps at different frequencies was not required to be the same. However, in this paper we use the observations at a common angular resolution, the same as in the case of the single-temperature fits. ## 4 Results We present below results from the fitting of alternative models to the FIR observations of the Taurus B212-B215 (L1506) filament. Results are shown for single-dust models (Sect. 4.1), testing separately the potential effects of embedded sources (Sect. 4.2), before first experiments with spatial variations in the dust properties (Sect. 4.3). ### Single-dust models We fitted the observations with RT models with COM, CMM, CMM-1, CMM-2, and AMMI dust models (Sect. 3.3), using constant dust properties over the model volume. The optimised parameters are the column densities (adjusted pixel by pixel) and the strength of the external radiation field (a scalar parameter). Since the strength of the radiation field is optimised using the average SED in the filament region, the fit concentrates on matching the average SED shape in that area. We tested alternative models that differ regarding the LOS density profile (Sect. 3.1) and the external radiation field (Sect. 3.2). 
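As a reference for the comparisons below, the single-temperature analysis of Sect. 3.5 can be written very compactly: fit \(I_{\nu}\propto B_{\nu}(T)\,\nu^{\beta}\) with fixed \(\beta\) to the 250-500 \(\mu\)m intensities and convert to optical depth with eq. (2). The sketch below does this for one illustrative SED; the intensities are example values and \(\beta=1.8\) is the same fixed index as used for the colour corrections.

```python
import numpy as np
from scipy.optimize import curve_fit

h, k, c = 6.626e-34, 1.381e-23, 2.998e8
beta = 1.8                                   # fixed opacity spectral index

def planck(nu, T):
    return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

def mbb(wave_um, T, tau250):
    """Optically thin MBB: I_nu = tau250 * (nu/nu250)^beta * B_nu(T)  [SI units]."""
    nu = c / (wave_um * 1e-6)
    nu250 = c / 250e-6
    return tau250 * (nu / nu250)**beta * planck(nu, T)

if __name__ == "__main__":
    wave = np.array([250.0, 350.0, 500.0])              # SPIRE bands [micron]
    I_MJysr = np.array([60.0, 32.0, 13.0])              # illustrative intensities
    I_SI = I_MJysr * 1e-20                              # MJy/sr -> W m^-2 Hz^-1 sr^-1
    (T_d, tau250), _ = curve_fit(mbb, wave, I_SI, p0=(15.0, 1e-3))
    print(f"T_d = {T_d:.2f} K,  tau(250um) = {tau250:.2e}")
```

The same fixed-\(\beta\) fit, applied pixel by pixel to maps convolved to a common resolution, gives the \(\tau(250\,\mu\)m) maps that the RT-model optical depths are compared against.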
Figure 4 presents as an example of a fit carried out with the CMM dust model, a Gaussian LOS density profile with \(FWHM\)=0.5 pc, and no external attenuation of the ISRF. The upper frames show the observed 160-500 \(\mu\)m maps. The second row shows the surface brightness predictions from the RT model fitted to the 250-500 \(\mu\)m observations. The 160 \(\mu\)m map is therefore a prediction to a wavelength outside the fitted range. In the lower left map corner, the RT model is extrapolated outside the SPIRE coverage in order to reduce edge effects close to the filament. The last row of frames in Fig. 4 shows the fit residuals (percentage of the surface brightness). Because the column density is adjusted separately for each map pixel and using the observed and model-predicted 350 \(\mu\)m maps, the 350 \(\mu\)m residuals should always be close to zero. However, Fig.4k shows one example how the fit can fail, in this case near the region C (the north-eastern core). The level of the external radiation field is set based on the average signals in the filament area. However, with adopted radiation field SED and the dust properties, the model is unable to produce sufficiently high surface brightness around one position (region C). This could also happen if the core had an embedded radiation source, which is not part of the model. The positive 350 \(\mu\)m residuals indicate that the optimisation has at this one position increased the column density to the set upper limit (ten times the initial analytical column density estimate), and the fit in that region is incorrect. In Fig. 4, the 250 \(\mu\)m and 500 \(\mu\)m residuals are up to 10% level and thus significant compared to the relative accuracy between the SPIRE bands. At the extrapolated 160 \(\mu\)m wavelength the errors increase beyond 20%. The model tends to be too cold along the central filament, leading to residuals (observation minus model prediction) that are positive at 250 \(\mu\)m and negative at 500 \(\mu\)m. Because the model tries to match the average 250 \(\mu\)m/500 \(\mu\)m ratio over the whole filament region, the errors are always relative. Thus, for the optimised radiation field, the emission is too cold in the inner and too warm in the outer parts of the filament. It also means that the temperature gradients are stronger in the model than in the real cloud, and the model probably has too high optical depth at the short wavelengths that are responsible for the dust heating. Figure 5 shows fits where the external radiation field is attenuated by \(A_{\rm V}=1^{\rm mag}\). Because the level of the radiation field is a free parameter, the effect is only to change the shape of this spectrum. As the external extinction removes energy preferentially from the short wavelengths, the model is effectively optically thinner for the remaining radiation, and this should reduce the temperature variations inside the model. Compared to the previous \(A_{\rm V}=0^{\rm mag}\) case, the fits with the CMM dust model show some improvement, and the positive 350 \(\mu\)m residuals cover a smaller area. The 250 \(\mu\)m errors are almost down to the 4% level. Interestingly, the 500 \(\mu\)m data, which should be less sensitive to temperature variations, show more significant variations in the residuals across the filament. Figure 5 also shows results for two ad hoc modifications of the CMM dust model. 
CMM-1 has a 0.3 units lower \(\beta\) with no change in the 250 \(\mu\)m opacity, and CMM-2 retains the original \(\beta\) but with 50% higher FIR opacity. Both CMM-1 and CMM-2 result in some improvement in the fits. The 250-500 \(\mu\)m fit is almost perfect with CMM-1 (with lower \(\beta\)), but the extrap olation to 160 \(\mu\)m now overestimates rather than underestimates the emission. The CMM-2 (with higher absolute FIR opacity) is close to the original CMM, but with lower errors especially at 500 \(\mu\)m. The figure shows results also for two other dust models, the COM model of diffuse medium and, with its ice mantles, AMMI in principle appropriate for the densest parts of molecular clouds. Interestingly, both result in very similar fit quality, the match to the 500 \(\mu\)m data being better than with CMM. Unlike the CMM model that tends to underestimate the 160 \(\mu\)m intensity within the filament, both COM and AMMI lead to some overestimation at this wavelength. Instead of modifying the dust model or the radiation field, the temperature gradients can be reduced by making the cloud more extended in the LOS direction. Figure 6 shows alternative fits where, the LOS density distribution corresponds \(FWHM\) of 0.2 pc, 0.5 pc, or 0.9 pc. There is a clear difference between the \(A_{\rm V}=0^{\rm mag}\) and \(A_{\rm V}=1^{\rm mag}\) cases, but the cloud \(FWHM\) has an even stronger effect. If the cloud is made very elongated in the LOS direction with \(FWHM\)=0.9 pc, the fit to the 250-500 \(\mu\)m data is quite good, although the 160 \(\mu\)m emission remains overestimated. Although interstellar filaments are expected to be narrow with \(FWHM\sim 0.1\) pc, the \(FWHM\) parameter describes the whole cloud, where values 0.2 pc and even 0.5 pc might still be realistic. However, the use of Plummer profiles that are derived directly from the fits to the POS filament structure (column density) should result in a better simultaneous description of both the narrow filament and the surrounding extended cloud. Figure 6 shows that this default LOS Plummer profile results in roughly similar quality of fit as the Gaussian model with \(FWHM=0.5\) pc. By increasing the LOS extent up to an aspect ration of 3:1, the residuals again drop to \(\sim\)4% or below along the main filament. Figure 4: Fit using the CMM dust model, \(FWHM=0.5\) pc, and \(A_{\rm V}=0\) mag. The first row of frames shows the observations, the second row the predictions of the fitted RT model, and the third row the fit residuals. The actual fit used only 250-500 \(\mu\)m data. The data are plotted at the resolution of the used observations, but the contours in the bottom frames show the \(\pm\)4% error levels (red and blue contours, respectively) when, for clarity, the data have been smoothed to 3 arcmin resolution. Figure 5: Comparison of residuals in fits of single-dust models. The cloud LOS extents is \(FWHM=0.5\) pc and the attenuation of the external radiation field corresponds to \(A_{\rm V}=1\) mag. The rows correspond to the dust models COM, CMM, CMM-1, CMM-2, and AMMI, respectively. Figure 6: Comparison of fit residuals for models of different cloud LOS extent. The upper three rows correspond to Gaussian LOS density profiles with FWHM equal to 0.2, 0.5, and 0.9 pc, respectively. On the fourth row the LOS profile is based on Plummer fits to the filament profile, and the last row is the same but with the cloud a factor of three longer in the LOS direction. 
All calculations assume an external attenuating layer of \(A_{\rm V}=1^{\rm mag}\). Although the quality of the fits can be similar, different assumptions (dust, radiation field, cloud shape) result in significantly different predictions. Figure 7 compares the NIR and FIR optical depths and the mass estimate of the fits that use different assumptions for the dust properties, the \(A_{\rm V}\) value of the attenuating layer, and the LOS extent of the cloud (FWHM of Gaussian density distribution). The mass estimates depend on the FIR emissivity but also indirectly on other factors that control the dust temperatures. The estimates vary by a factor of five between the COM (low FIR opacity) and AMMI (high FIR opacity) fits. However, this results directly from the different absolute dust opacities (a factor of five between AMMI and COM). The LOS cloud size is the second most important parameter, with up to 50% decrease in the mass estimates between the smallest \(FWHM=\)0.2 pc and the largest \(FWHM\)=0.9 pc values. The effects of the radiation field attenuation \(A_{\rm V}\) are only slightly smaller and are particularly clear for the more compact clouds (small \(FWHM\)). The mean optical depth \(\tau(250\,\mu\)m) varies by a factor of three and depends as much on the cloud FWHM as the dust model. The NIR and FIR optical depths are naturally correlated. There are thus similar large differences in the model-predicted \(\tau({\rm J})\) values, and direct NIR extinction measurements should be able to rule out some dust models. Figure 8 compares the fit quality for the models in Fig. 7. This is shown as the mean rms value of the relative error in the fitted 250-500 \(\mu\)m bands, which is on average around 5%. The plot additionally show the rms errors of the predictions of the 160 \(\mu\)m surface brightness and of the J-band extinctions. The latter are calculated using the individual 2MASS stars, normalised with the formal error estimates of the \(A_{\rm J}\) measurements. The SPIRE and 160 \(\mu\)m errors are correlated to such an extent that the addition of the 160 \(\mu\)m does not substantially change the conclusions reached with the SPIRE bands only. The CMM models tend to result in the worst fit. The COM and CMM-1 models give the best match to the SPIRE observations and the NIR extinctions. The CMM-2 and AMMI models are slightly worse in matching the SPIRE observations, but, compared to CMM, CMM-2 gives a much better match to NIR extinction data. Systematic fit residuals would at first glance appear to be potential indicators of dust evolution. However, the above results show that the results are affected by multiple factors that are not related to the dust. Figure 9 shows the fit quality in the region C as a function of \(FWHM\) and\(A_{\rm V}\). Figure 10 is the corresponding plot for the relative residuals \(r\) (red and blue colours for positive and negative mean values of \(r(250\,\mu\)m) - \(r(500\,\mu\)m), respectively). As the parameters \(FWHM\) and \(A_{\rm V}\) are varied, the mean residual changes sign. This applies to all tested dust models and takes place over range of the (\(FWHM\), \(A_{\rm V}\)) parameter space where the fit quality changes only little. Obviously, this makes it more difficult to interpret fits in terms of dust evolution. In contrast, the NIR extinction varies significantly between the dust models but is less sensitive to the \(FWHM\) and \(A\) parameters (Fig. 8). 
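The diagnostics plotted in Figs. 8-10 reduce to a few array operations. The following is a minimal sketch of how the relative rms error and the residual-sign indicator could be computed for one sub-region; the variable names and map format are illustrative only.

```python
import numpy as np

def fit_diagnostics(obs, model, mask, bands=(250, 350, 500)):
    """Relative rms error and residual-sign indicator for one sub-region (sketch).

    obs, model : dicts of observed and model-predicted maps [MJy/sr], keyed by wavelength
    mask       : boolean map selecting the sub-region (e.g. region C)
    """
    # relative residuals r = (observation - model) / observation
    r = {lam: (obs[lam][mask] - model[lam][mask]) / obs[lam][mask] for lam in bands}
    # mean relative rms error over the fitted bands (as in Figs. 8-9)
    rms = float(np.mean([np.sqrt(np.mean(r[lam] ** 2)) for lam in bands]))
    # sign indicator r(250) - r(500) (as in Fig. 10): positive when the model
    # is too cold towards the region, negative when it is too warm
    sign = float(np.mean(r[250] - r[500]))
    return rms, sign
```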
### Effect of embedded point sources The B212-B215 filament is not associated with bright stars, and YSO catalogues lists only a couple of low-luminosity sources (\(L<1\,L_{\odot}\)) that do not coincide with the cores showing the largest fit residuals. However, internal heating can significantly affect the SED of the protostellar cores relative to the surrounding filament. We tested this possibility by adding a few ad hoc point sources towards the intensity maxima in the filament of the Taurus model (Fig. 11). The blackbody temperature of the sources was set to either 200 K or 3000 K, each run adopting the same value for all the sources. These temperatures were used to mimic different types of sources, all of which remain unresolved in our simulations. At the higher temperatures that emission is more towards the shorter wavelengths and, because of the higher optical depths, the effect should be spatially more limited. The luminosity of each source was optimised to result in zero average 250 \(\mu\)m residuals within 30-70\({}^{\prime\prime}\) distance of the source. The sources were places either at the exact location of the LOS density maximum or displaced by 0.2 pc along the LOS direction. The latter should lead to the effect of the sources to be spatially more extended. Note that the model cell size in dense regions is \(\sim 0.006\) pc, much below the 0.2 pc value. Figure 12 compares four models without point sources to the corresponding models with point sources of different temperature and LOS location. The residuals were previously seen to be largest for the smaller \(FWHM\) and \(A_{\rm V}\). In these cases the effect of point sources remains limited, partly because these models also tend to have higher optical depths (at the wavelengths relevant for dust heating). Embedded sources are not able to correct the extended positive residuals of the fits. At the same time, their effect would be locally too strong, and the observations exclude the possibility of the Taurus filament harbouring such sources (Fig. A.1). Only when the basic model already closely matches the observations, such as with \(FWHM=0.5\) pc and \(A_{\rm V}=1^{\rm mag}\), embedded sources could provide minor improvement without being very prominent in the frequency maps. ### Models with two dust components We examined next very briefly models with spatial variations in the dust properties, using combinations of the COM, CMM, and AMMI dusts. We return in Sect. 5 to two-component models that include further modifications to the dust properties. In the RT calculations, the dust property variations are described as changes in the relative abundance of two dust species (Eq. (1)). The transition between the dust components depends on the volume density and the selected density threshold \(n_{0}\). Figure 13 shows examples of how these translate to spatial distribution of the components. Because the model densities are adjusted during the model optimisation, the spatial distributions are slightly different for each dust combination and \(FWHM\) and \(A_{\rm V}\) values. Figure 14 compares some two-component fits to the calculations with the single CMM dust, all with \(FWHM=0.5\) pc and \(A_{\rm V}=1^{\rm mag}\). The figure is similar to Figs. 7-8, showing the estimated masses, NIR and FIR optical depths, and the match to observations. The quality of the FIR fits (to SPIRE data and the extrapolation to 160 \(\mu\)m) varies only little. In the single-component tests of Fig. 
8, both COM and AMMI resulted in better fits than CMM. The two-component fits are consistent with this trend: the smaller the relative abundance of the CMM dust, the better the fit. However, compared to the previous single-component models, these dust combinations provide little improvement in the FIR fits and can even increase the disagreement with the NIR extinction data. ### Comparison with analytical estimates The RT modelling can serve many purposes, one of which is the study of the cloud column density structure. Column densities can be estimated also more easily using MBB fits (Sect. 3.5), but the two estimates will not be the same. An RT model gives a self-consistent fit to the data. The RT model gives a self-consistent fit to the data, and the consistent description of the temperature variations within the target but does not fit the observed intensities everywhere very precisely. In contrast, MBB fits adopt a very simple model for the source (e.g. possibly a single temperature) but usually match the intensity observations better. MBB fits can use more complex assumptions but quickly suffer from the degeneracy between the fitted parameters (Juvela 2023). Figure 15 compares optical depths with the SPIRE data (250-500 \(\mu\)m) at 41\({}^{\prime\prime}\) resolution. The estimates are from single-temperature MBB fits, from MBB fits with a Gaussian temperature distribution, and from two selected RT models. The single-temperature calculations are taken here as the reference, although they are known to underestimate the optical depths. The MBB fits with Gaussian temperature distribution can still be done without any priors, since they have only three free parameters (intensity, mean temperature, width of the temperature distribution), the same as the number of observed bands. As one piece of prior information, very low temperatures \(T_{\rm dust}<6.5\) K were excluded from the proposed temperature distributions. These fits result in higher \(\tau\) values than the single-temperature fits, although the difference are only about 10% in the densest regions. Figure 15 includes two RT models, both with \(FWHM=0.5\) pc and \(A_{\rm V}=1^{\rm mag}\). These use COM and CMM dusts for which the \(\beta\) values are, respectively, close to the \(\beta=1.8\) and \(\beta=2.0\) (the values also adopted for the shown MBB fits). The COM model had given a relatively good fit to the observations Figure 8: Errors in selected model fits with the COM, CMM, CMM-1, CMM-2, and AMMI dust models. The errors are shown for the original SPIRE fit (circles) as well as for the 160 \(\mu\)m surface brightness (squares) and NIR extinction \(A(\rm J)\) (triangles). For the FIR data the values are the mean relative error over the filament region. For the NIR extinction the value is the mean \(\chi^{2}\) value that is calculated over the stars inside the filament area and using the \(A(\rm J)\) error estimates of the individual 2MASS stars. The red, black, and blue colours correspond to \(FWHM\)=0.2, 0.5, and 0.9 pc, respectively, and the small, medium, and large symbol sizes correspond to \(A_{\rm V}\)=0, 1, and 2 mag. The x-axis is labelled according to these parameters. Figure 7: Mass and optical depth values in selected RT models. The solid magenta line and the left axis show the estimated mass for the filament area, and the dashed magenta line and the right axis the corresponding mean optical depth \(\tau\)(250 \(\mu\)m). 
The symbols show the J-band (triangles) and 250 \(\mu\)m (squares) optical depths (right axis) for the positions A (open symbols) and B (filled symbols). The red, black, and blue colours correspond to \(FWHM\)=0.2, 0.5, and 0.9 pc, respectively, and the small, medium, and large symbols to \(A_{\rm V}\)=0, 1, and 2 mag. The x-axis is also labelled according to the \(FWHM\) and \(A_{\rm V}\) values of the models. (Fig. 8), and also in Fig. 15 the estimates are outside the filament similar to the two MBB fits. Along the filament, the values are close to the results of the MBB fits Gaussian temperature distributions, although the map has more noise-like fluctuations (relative to the MBB solution). The model optical depths are higher towards the densest regions, by up to 30% in the region B and up to 60% in the region C. The CMM model provided a worse fit to the observations and was close to a point where it could not produce the observed intensity levels, especially near the C region, which can be expected to lead to higher \(\tau\) values. The predicted \(\tau\) values along the filament are still similar to the MBB fit with Gaussian temperature distribution solution, 5-10% above the single-temperature MBB fit. The values are higher towards the cores, by up to 40% in around the region B and close to a factor of two near the region C. In general, RT models should give more accurate (or more robust) estimates than simple SED fits, but only if the models also accurately match the surface brightness observations. In this case, the optical depths from the COM model might be more accurate than the predictions of the CMM model, but this also depends, for example, on the actual dust \(\beta\) values in the cloud. The MBB fits may look reliable, but the fact that they fit the observations to a high precision does not mean that their optical depth predictions would be accurate (Juvela 2023). In the case of the Taurus observations above, the correct optical depths are not known. Therefore it is useful to repeat the comparison in the other direction, starting with the surface brightness maps predicted by the RT models and comparing the MBB fit results to the known optical depths of the models. The results are shown in Fig. 16, as the ratio between the MBB estimates and the now known true values. The results are qualitatively similar to the analysis of Taurus observations above. The single-temperature MBB fit underestimates the optical depth by up to \(\sim\)40%, and the errors of the MBB fit with Gaussian temperature distributions is half of this and still in the same direction. ## 5 Discussion The dust properties and dust evolution can be studied by comparing observations to the predictions of RT models. In this paper, we have examined this using the Taurus B212-B215 (LDN 1506) filament, to identify and quantify some of the factors that may affect the reliability of the conclusions derived from the modelling. ### Model optimisation In the RT modelling, we selected first the cloud shape, the dust properties, and the SED shape of the radiation field (in terms of external extinction), before optimising a set of other parameters. The optimisations were based on heuristics, which is significantly faster than blind \(\chi^{2}\) optimisation. The model column densities were adjusted based on the ratio of the observed and model-predicted 350 \(\mu\)m intensities, assuming that the surface brightness is a monotonically increasing function of the column density. 
Because column density is adjusted pixel by pixel, one could expect a perfect fit at this wavelength, apart from small errors due to the model discretisation, Figure 11: Locations of point sources plotted on the 250 \(\mu\)m map of the L1506 filament. The cyan circles indicate positions of YSOs from Rebull et al. (2010), and the blue crosses the locations of hypothetical embedded sources that were added in the RT model. Figure 10: As Fig. 9 but showing the relative fit residuals (250 \(\mu\)m residual minus 500 \(\mu\)m residual) in the region C. Red symbols correspond to cases where the 250 \(\mu\)m emission is underestimated and 500 \(\mu\)m emission is underestimated towards the main core. In the case of the blue symbols the situation is reversed: with larger LOS extent and large external extinction, the models overestimate rather than underestimate the dust temperature in the core. Figure 9: Relative rms error for the model fits using the COM, CMM, and AMMI dust models. Errors are plotted against the \(FWHM\) (x-axis) and \(A_{\rm V}\) (y-axis) parameters. The data correspond to region C (cf. Fig. 1). The diameter of the symbols is proportional to the squared error. Models with larger LOS extent and/or more extincted radiation fields tend to result in better fits. observational artefacts, or noise at scales below the beam size. However, when the column density and radiation field were optimised, for some dust models the intensities could be locally close to or even above what the model could produce (e.g. Fig. 4k). This depends somewhat also on the surrounding area, because of the mutual shadowing of the different parts of the cloud. Beyond the saturation point the surface brightness decreases with increasing column density, and for this reason it is important to start the model optimisation with low column densities combined with high radiation field. The problem regions can be identified from positive residuals at 350 \(\mu\)m (and at longer wavelengths; Fig. 6c). If the radiation field were known, some models might be excluded already based on this saturation and observations of a single frequency. However, we included the strength of the external radiation field always as a free parameter, \(k\)(ISRF). It was updated based on the average intensity ratios \(I_{\nu}(250\,\mu\mathrm{m})/I_{\nu}(500\,\mu\mathrm{m})\) in the filament region, based on the knowledge that the ratio increases with increasing strength of the radiation field. The optimisation results in the correct average intensity ratio. This is similar but not exactly the same as the maximum likelihood solution (minimisation of the \(\chi^{2}\) value summed over pixels). The distinction is not critical for the comparison of the models, especially as the fit errors are dominated by systematic and spatially correlated errors. Because the same scalar radiation field parameter applies to the whole model and also the model density field is simpler than the real cloud, a perfect fit to an extended cloud region is quite unlikely. Figure 12: Comparison of 250 \(\mu\)m fit residuals for models with hypothetical embedded sources. Each row corresponds to a combination of \(FWHM\) and \(A_{\nu}\) values, as noted on the left side of the frames. Each column of frames corresponds to a different case concerning the point sources: first column without sources, columns 2-3 with sources embedded at the location of the LOS density maximum, and columns 3-4 with sources displaced 0.2 pc along the LOS from to the density maximum. 
The temperatures of the point sources are given above each column of frames. Figure 13: Examples of column densities associated with the dust components in two-dust models. The plots correspond to a combination of CMM and AMMI dusts and the model parameters \(FWHM\)=0.5 pc and \(A_{\nu}=1^{\mathrm{mag}}\). The left frames show the column density associated with the dust in regions of lower volume density and the right frames the column density associated with the second dust component. The density threshold for the transition between the components is \(n_{\mathrm{0}}(\mathrm{H})=1000\,\mathrm{cm}^{-3}\) in the upper and \(n_{\mathrm{0}}(\mathrm{H})=3000\,\mathrm{cm}^{-3}\) in the lower frame. The cyan contour is drawn at \(N(\mathrm{H})=10^{21}\,\mathrm{cm}^{-3}\). As described above, the optimisation always knows, based on the current solution, in which direction the free parameters are to be adjusted. Furthermore, although the number of column-density parameters was large (with over 400 000 pixels per map), these are either independent (pixels located far from each other) or strongly correlated (pixels within the same beam). This results in fast convergence, and the calculations typically required only some tens of optimisation steps. Since one RT run took about one minute or less (down to 10 s with a single dust component and without embedded point sources), the model construction and optimisation is computationally quite feasible. The model predictions were always convolved to the resolution of the observations, which allowed one to used observational maps at different resolutions. Basic single-temperature pixel-by-pixel MBB fits can be done for small maps in a fraction of a second. However, if beam convolution is included also into SED fitting, the run times increase and the computational advantage relative to the full RT modelling may no longer be significant (Juvela 2023). ### Density fields Compared to the POS cloud shape, the LOS structure remains poorly constrained. The LOS cloud size was also seen to have a significant effect on the dust temperature variations. In the runs with constant dust properties, the more extended models were clearly preferred for the Taurus filament. Fits to 250-500 \(\mu\)m data required at least \(FWHM\)=0.5 pc, but fits were the better the larger the LOS cloud extent (Fig. 8). Models with Plummer density profiles gave similar results, calling for larger LOS extent (e.g. with aspect ratio 3:1; Fig. 6q-t). Since the fit errors were largest in areas of high column density, also larger \(FWHM\) values are needed mostly in those areas. Figure 10 shows examples of a further test, where \(FWHM\) is optimised as a function of the map position. A larger LOS size is indeed seen towards the cores, but the values reaching \(FWHM\)=1.1 pc, the maximum values allowed. In the case of the Taurus filament, it is very improbable that the cloud would have such extreme elongation and just in the LOS direction. This is not necessarily the case in the analysis of isolated clumps, where the selection of bright but starless sources might be biased towards elongated sources (even filaments) that are seen along their major axis. Thus, in the absence of further information on the volume densities, the unknown cloud structure can be a significant hindrance on attempts to determine the dust properties. 
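The LOS density structures discussed above can be described with simple one-parameter profiles. The sketch below illustrates Gaussian and Plummer-type LOS weights that distribute a given column density along the line of sight; the exact Plummer parameterisation of Sect. 3.1 is not reproduced here, so the functional form should be taken only as an assumed approximation.

```python
import numpy as np

def gaussian_los_weights(z, fwhm):
    """Normalised Gaussian LOS weights for a cloud of given FWHM (z and fwhm in pc)."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    w = np.exp(-0.5 * (z / sigma) ** 2)
    return w / w.sum()

def plummer_los_weights(z, r_flat, p, stretch=1.0):
    """Normalised Plummer-like LOS weights, n(z) ~ [1 + (z/(stretch*r_flat))**2]**(-p/2).

    r_flat and p would come from Plummer fits to the plane-of-the-sky filament
    profile; 'stretch' elongates the cloud along the LOS (e.g. stretch=3 for a
    3:1 aspect ratio).
    """
    w = (1.0 + (z / (stretch * r_flat)) ** 2) ** (-p / 2.0)
    return w / w.sum()

# Example: spread a column density of N(H) = 1e22 cm^-2 over the LOS cells
z = np.linspace(-0.6, 0.6, 201)                    # LOS coordinate [pc]
dN = 1e22 * gaussian_los_weights(z, fwhm=0.5)      # column density per cell [cm^-2]
n = dN / ((z[1] - z[0]) * 3.086e18)                # volume density [cm^-3]
```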
In the case of the Taurus filament, the preference for longer LOS sizes could in principle be related to the source inclination, since we have so far assumed the filament to be in perpendicular to the LOS. There is some evidence that the Taurus cloud is not aligned with the plane of the sky (Roccatagliata et al. 2020; Ivanova et al. 2021), and for example Shimajiri et al. (2019) suggested significant 20-70 degree inclinations for the neighbouring B211/B213 filament. Inclination can increase the LOS column density without a change in the filament size perpendicular to its symmetry axis or any changes in the dust temperatures. However, when the effect of inclination was tested in practice, the fit quality changed only at \(\sim\)1% level. Once the models are optimised, the LOS optical depths remain quite similar and the lower optical depth perpendicular to filament axis appear to be compensated by lower values of the external radiation field (Fig. 11). This is similar to the conclusions of Ysard et al. (2013) regarding the effects of inclination. Figure 14: Comparison of mass and optical depth estimates (frame a) and the quality of the fit (frame b) for models with two dust components. The results are plotted for the single-component CMM model and three dust combinations (COM-CMM, CMM-AMMI, and COM-AMMI), with density threshold \(n_{0}\)(H\({}_{\rm{J}}\)) for the transition between the two dust properties. Figure is similar to Figs. 7-8, with results shown only for the case of \(FWHM\)=0.5 pc and \(A_{\rm{V}}=1^{\rm{mag}}\). Figure 15: Comparison of 250 \(\mu\)m optical depth estimates from MBB fits and RT models. The uppermost frames show the results of single-temperature MBB fits with \(\beta=1.8\) (frame a) and \(\beta=2.0\) (frame b). The second row shows the ratios between the MBB fit with a Gaussian temperature distribution and the corresponding single-temperature fit from the first row, with contours drawn at levels of \(\pm\)5%. The bottom row shows the \(\tau\) ratios for two RT models (\(FWHM=0.5\) pc, \(A_{\rm{V}}=1^{\rm{mag}}\)) and the single-temperature MBB fits from above. The dust models are COM in frame e and CMM in frame f. One final aspect of the cloud structure that was not examined in Sect. 4 is its potential small-scale inhomogeneity. This would work in the same direction as a larger \(FWHM\) and make the model at large scales more isothermal. We tested this in the case of one model (\(FWHM\)=0.5 pc, \(A_{\rm V}=1^{\rm mag}\), CMM dust), multiplying the originally smooth density field with a Gaussian random field (standard deviation of 37%, truncated at zero) that was generated for different powerlaw power spectra, to compare the effects of mainly small-scale or large-scale fluctuations. However, once the model was again optimised, the differences to the original model were very small, both in terms of the fit quality and the cloud mass and ISRF estimates. Only when the density multipliers were squared (creating larger density fluctuations with a larger fraction of low-density cells), the change became noticeable, but still only at 10% level of the original fit residuals. Thus, only extreme clumpiness of the medium would alter the results. Although the real LOS cloud extent cannot be directly measured, some limits can be extracted from line observations. Based on the densities obtained by modelling \({}^{13}\)CO, C\({}^{18}\)O, and N\({}_{2}\)H\({}^{+}\) observations (Pagani et al. 2010), Ysard et al. (2013) estimated for L1506C (i.e. 
around the region B) an LOS width of at most \(\sim\)0.3 pc. This is thus similar to the filament POS thickness, and models with larger \(FWHM\) (e.g. those with the 3:1 aspect ratios) appear in this case unlikely. ### Radiation field The radiation field was adjusted so to match the average SED shape in the filament region. This does not prevent the fits from showing spatially correlated errors at smaller spatial scales and at \(\sim\)10% levels. The 250 \(\mu\)m and 500 \(\mu\)m errors are anticorrelated, as can be expected if the dust is locally warmer or colder than in the real cloud. The residuals do not show any obvious gradients over the field, which suggests that the assumed ISRF anisotropy roughly matches the conditions in the cloud. Only the 160 \(\mu\)m maps might show slightly larger positive residuals on the southern side. If the ISRF were accurate outside the Taurus cloud, the part of the molecular cloud that resides between the target region and the Galactic plane could cause some shielding, which could contribute to this minor asymmetry. The errors were always mainly correlated with the column density. The tests showed the spectral shape of the external radiation field has a clear effect on the fits, although less than the cloud \(FWHM\). Similar to larger LOS cloud sizes, better fits were obtained by increasing the optical depth of the external attenuating cloud layer. The mechanism is also the same: once the short-wavelength photons are removed by the external layer, the model itself is heated mainly at longer wavelengths where the optical depths are lower, resulting in smaller temperature variations. The extinction was varied up to \(A_{\rm V}=3^{\rm mag}\) but, like the 3:1 aspect ratio for the cloud shape, such high values are unlikely. In the _Planck_ 353 GHz dust opacity map the median value over the examined area is \(3.7\times 10^{-5}\). To estimate a lower limit for the LOS extinction, the 353 GHz value can be first scaled to 250 \(\mu\)m assuming \(\beta=1.5\), further to J-band optical depth using \(\tau(250\mu{\rm m})/\tau(J)=1.6\times 10^{-3}\) (Juvela et al. 2015b), and finally to visual extinction \(A_{\rm V}=0.47^{\rm mag}\) assuming the \(R_{\rm V}=4.0\) extinction curve (Cardelli et al. 1989). On the other hand, the assumptions \(\beta=2.0\), a factor of three lower \(\tau(250\mu{\rm m})/\tau(J)\) ratio (i.e. a value more consistent with diffuse clouds), and \(R_{\rm V}=3.1\) gives a value of \(2.8^{\rm mag}\). This applies to the full LOS extinction and gives an upper limit \(A_{\rm V}=1.4^{\rm mag}\) for the extinction of the external layer. These are not strict limits either, since they should describe the extinction towards the main radiation sources. Due to the rest of the Taurus complex and other cloud, the extinction between the target region and the Galactic plane could be higher that between the target and the observer (or the Galactic centre). Each fitted model gives an estimate for the ISRF strength at the boundary of the modelled volume or, for \(A_{\rm V}>0^{\rm mag}\), outside the assumed additional external layer. The estimates depend directly on the FIR spectral index \(\beta\) of the dust model and indirectly, via the dust energy balance, on the optical/FIR dust opacities. Figure 17 shows the ISRF estimates for selected models. For \(A_{\rm V}=0^{\rm mag}\) the values of \(k\)(ISRF) are close to one, with a scatter of some 20% and approximate agreement with the Mathis et al. (1983) ISRF model. 
When \(A_{\rm V}\) is assumed to be larger, the \(k\)(ISRF) values are naturally larger, since they describe the strength of the original field, not the field at the model boundary. According to Fig. 17, in these cases the \(k\)(ISRF) values are higher approximately by a factor \(e^{\tau}\), where \(\tau\) is the optical depth of the external layer at \(\sim\)0.8 \(\mu\)m. The 0.8 \(\mu\)m optical depth thus seems to describe the effective impact of the attenuation on the dust heating. The wavelength is of course not constant but moves to larger values if the extinction is further increased. If the external attenuation is \(A_{\rm V}=1^{\rm mag}\), the field outside the external layer is already twice as strong as the Mathis field. Thus, in spite of the better match to the FIR observations, this makes models with \(A_{\rm V}>1^{\rm mag}\) clearly less likely, as these are also in disagreement with the direct background extinction estimates derived from _Planck_ observations.

### Errors in observations

Before discussing potential spatial variations of the dust properties, it is worth noting that systematic errors in the surface brightness measurements can also result in variations that are correlated with the observed intensities. These could thus be misinterpreted as physical effects connected to the column density and volume density variations. This is especially true for zero-point errors in the intensity measurements.

Figure 16: MBB optical depth estimates calculated for synthetic observations from RT models. The uppermost frames show the results of MBB fits with a single temperature component (\(\beta=1.8\) in frame a, \(\beta=2.0\) in frame b) relative to the actual optical depths in the model. The second row shows the corresponding ratios for MBB fits with Gaussian temperature distributions. The RT models (\(FWHM=0.5\) pc, \(A_{\rm V}=1^{\rm mag}\)) used either the COM (frames a and c) or the CMM dust (frames b and d). The cyan contours are drawn at the values of 0.8, 0.9, and 1.0.

Figure 18 uses the model with \(FWHM=0.5\) pc, \(A_{\rm V}=1^{\rm mag}\), and CMM dust to illustrate the potential effects. An error is assumed to affect either the scaling or the zero point of the intensity measurements, either at 250 \(\mu\)m or 500 \(\mu\)m. The multiplicative errors (factor \(\gamma\)) are varied between -10% and 10%, and the additive errors in the range \(\delta=\pm 3\) MJy sr\({}^{-1}\). For the SPIRE observations, the multiplicative errors should be below 2%. Additive errors could appear because of mapping artefacts (leading to errors in the difference relative to the reference regions), statistical errors in the background value derived using the reference region, or deviations from the assumption that the background SED is constant over the field. In this paper, the background subtraction was carried out using a relatively close reference region that has an absolute 350 \(\mu\)m surface brightness of at least 10 MJy sr\({}^{-1}\) (Fig. 1). For the sake of discussion, the shading in Fig. 18 corresponds to ad hoc upper limits of 1 MJy sr\({}^{-1}\) at 250 \(\mu\)m and 0.5 MJy sr\({}^{-1}\) at 500 \(\mu\)m for these errors. Compared to region B, the surface brightness in region C is much closer to the level where the model predictions could start to saturate, and the fits might behave there in an unexpected way. However, Fig. 18 shows that the behaviour is both similar and very systematic in both positions.
Therefore, the estimated mass of the filament region is also not unduly dependent on the fit in any single sub-region. Any errors that make the SED appear warmer naturally decrease the mass estimates, but the effect of 2% multiplicative errors is less than 10% in the mass. Additive errors are particularly insidious because they introduce intensity-dependent changes in the band ratios, and they can even change the sign of the residuals. In Fig. 18, this is seen only when the 500 \(\mu\)m measurements have zero-point errors. When the error is below \(\delta=\)-0.5 MJy sr\({}^{-1}\), the residuals in both regions B and C approach zero, and they become negative for larger zero-point errors. Thus, before fit residuals can be interpreted as a sign of dust evolution, one has to be confident of the sufficient absolute accuracy of the surface brightness measurements. In the Taurus data, it is unlikely that zero-point errors would explain all of the fit residuals (i.e. the errors are likely to be well below 0.5 MJy sr\({}^{-1}\) at 500 \(\mu\)m). However, they could still have a noticeable effect on the magnitude of the residuals, thus complicating the interpretation of the observations in terms of dust property variations. More secure conclusions could naturally be reached by investigating a set of fields, provided that all observational errors average out in a large sample. Some zero-point errors could also result from spatial variations in the SED shape of the background emission. This is not likely to be a significant factor in the Taurus field, due to its high Galactic latitude (little LOS confusion) and the large intensity contrast between the filament and the reference region. However, for compact sources this could even result in systematic effects, if the reference region is affected by limb brightening, leading to an overestimation of the short-wavelength emission in the reference area compared to the target (Men'shchikov 2016).

### Dust properties

Given the effects of all the other factors listed above, can something still be said about the dust properties? In the basic single-dust models, the CMM model provided the worst fit to both the FIR data and the NIR extinction, followed by the AMMI and the COM models. The origin of the differences seems to be the same as for the other mechanisms above: lower temperature contrasts lead to better fits. First, the spectral index \(\beta\) is lower for COM than for the other two models (\(\Delta\beta\sim\)0.2), which tends to lead to higher temperatures and thus lower optical depths.

Figure 17: Radiation field scaling factors \(k\)(ISRF) in the case of selected single-dust models. The black crosses show the values for each combination of cloud \(FWHM\), \(A_{\rm V}\), and dust model, as listed to the right of the frame. In cases with \(A_{\rm V}>0\), the blue crosses show approximate values at the model boundary, if the effective attenuation is \(e^{-\tau(0.8\,\mu\rm m)}\), using the 0.8 \(\mu\)m optical depth of the external layer.

Figure 18: Effects of systematic observational errors. Errors are introduced to the intensity measurements, and the frames show the effects on the filament mass \(M\), the radiation field intensity, and the optical depths \(\tau\) in the sub-regions B and C. The parameters are plotted relative to those estimated with the original observations. Each frame also shows the average 250 \(\mu\)m residuals in the B and C regions (right y-axis, dashed and solid lines).
The upper frames include multiplicative errors \(\gamma\) in the 250 \(\mu\)m (frame a) or 500 \(\mu\)m (frame b) measurements. The lower frames include additive errors \(\delta\) in the same bands. The shaded regions indicate probable upper limits for the errors in the Taurus observations: 2% for the relative calibration and (ad hoc) \(\pm\)1 MJy sr\({}^{-1}\) and \(\pm\)0.5 MJy sr\({}^{-1}\) for the 250 \(\mu\)m and 500 \(\mu\)m zero points, respectively.

Since the column densities were free parameters, the absolute level of the dust opacity is not important, but the balance between the NIR and FIR opacities is. The ratios \(\tau(250\,\mu{\rm m})/\tau({\rm J})\) are around \(4.0\times 10^{-4}\) for CMM and AMMI and slightly higher, \(4.9\times 10^{-4}\), for COM (Fig. 3). The COM models therefore have a lower optical depth at the short wavelengths, which also leads to a slightly better match to the observed NIR values. The CMM-1 and CMM-2 modifications of the CMM model both go in the same direction, where the \(\Delta\beta=-0.3\) of CMM-1 had a larger positive effect than the 50% increase in the FIR opacity of CMM-2. The latter corresponds to \(\tau(250\,\mu{\rm m})/\tau({\rm J})=6\times 10^{-4}\), which is still below the value of \(\tau(250\,\mu{\rm m})/\tau({\rm J})=1.6\times 10^{-3}\) suggested by some observations of dense clumps (Juvela et al. 2015b). It is particularly noteworthy that all of the examined dust models overestimated the measured NIR optical depths by a large margin. This is illustrated in Fig. 19, where we plot the model values against the \(\tau({\rm J})\) estimates for the 2MASS stars. CMM is furthest from and the COM and CMM-1 models closest to the observed values. The error is larger for CMM than for COM and AMMI, but the discrepancy is always significant, a factor of 2-3. This does not directly imply a similar error in the dust opacities, because the results in Fig. 8 depend in a more complex way on the model optimisation (especially on the dust temperatures). If the observations are to be fit with a single dust component, the spectral index \(\beta\) should be decreased or the ratio of the FIR and NIR opacities increased. Direct measurements of \(\beta\) are uncertain, but the typical estimates are around \(\beta=1.8\) for molecular clouds and possibly even higher in FIR observations towards dense parts of molecular clouds (Sadavoy et al. 2013; Juvela et al. 2015a; Bracco et al. 2017; Juvela et al. 2018). The spectral index of the CMM-1 model already had a lower value of \(\beta\sim 1.7\). Significantly lower \(\beta\) values are observed mainly at very small scales (e.g. in protostellar sources, less relevant here) and at millimetre wavelengths (Planck Collaboration et al. 2014c; Sadavoy et al. 2016; Mason et al. 2020). On the other hand, the dust opacity is believed to increase quite systematically from the diffuse medium to molecular clouds and further to cores (Martin et al. 2012; Roy et al. 2013; Juvela et al. 2015b). In one of the earlier studies of Taurus, Stepnik et al. (2003) concluded that the dust FIR/submm emissivity in LDN 1506 would be 3.4 times higher than in diffuse regions. Qualitatively similar conclusions were reached even for the high-latitude cloud LDN 1642, which was studied in Juvela et al. (2020). There, in addition to FIR emission and NIR extinction measurements, the need for dust with lower optical-NIR opacities was also shown by the modelling of the optical-MIR scattered light.
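The ad hoc variants used in these comparisons (CMM-1, CMM-2, and the further modification introduced below) amount to simple rescalings of the FIR part of the opacity curve. The sketch below shows one way such a modification could be implemented, assuming the opacity is re-anchored at 250 \(\mu\)m and follows a pure power law longward of that pivot; the actual modified models may treat the transition region differently.

```python
import numpy as np

def modify_dust_opacity(wav_um, kappa, beta0, d_beta=0.0, fir_scale=1.0, pivot_um=250.0):
    """Ad hoc modification of a tabulated dust opacity curve (illustrative sketch).

    wav_um    : wavelengths of the tabulated opacity [um], in increasing order
    kappa     : absorption opacity at those wavelengths
    beta0     : FIR spectral index of the original model (kappa ~ nu**beta)
    d_beta    : change applied to the FIR spectral index longward of the pivot
    fir_scale : multiplicative change of the opacity at the pivot wavelength
    """
    kappa_new = np.array(kappa, dtype=float)
    fir = wav_um >= pivot_um
    kappa_pivot = np.interp(pivot_um, wav_um, kappa)
    # kappa ~ nu**beta corresponds to kappa ~ lambda**(-beta)
    kappa_new[fir] = fir_scale * kappa_pivot * (wav_um[fir] / pivot_um) ** (-(beta0 + d_beta))
    return kappa_new

# CMM-1 would correspond to d_beta=-0.3, fir_scale=1.0; CMM-2 to d_beta=0.0, fir_scale=1.5
```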
Based on the above, we tested a further modified dust model, CMM-3, where the FIR \(\beta\) was kept the same as in CMM but the FIR opacity was increased to \(\tau(250\,\mu{\rm m})/\tau({\rm J})=1.2\times 10^{-3}\). This is a factor of three higher than in the original CMM model but still below the observed value quoted above (Juvela et al. 2015b). Figure 20 shows that CMM-3 allows a better match to the FIR data, and the 250-500 \(\mu{\rm m}\) residuals actually change signs between the case with \(FWHM=0.2\,{\rm pc}\) and \(A_{\rm V}=0^{\rm mag}\) and the case with \(FWHM=0.5\,{\rm pc}\) and \(A_{\rm V}=1^{\rm mag}\). This applies even at \(160\,\mu{\rm m}\), and the predictions for the NIR optical depth are much closer to the observations. Thus, the observations can be explained with a model with almost the expected cloud shape and radiation field, simply by adopting the higher \(\tau(250\,\mu{\rm m})/\tau({\rm J})\) ratio. The 250-500 \(\mu{\rm m}\) residuals are small (\(\sim\)10%) also outside the filament. In other words, the results do not show strong evidence for dust evolution between regions of different density. The \(160\,\mu{\rm m}\) residuals are still positive (up to \(\sim\)20%) but especially in Fig. 20b this applies to the whole field. ### FIR evidence of dust evolution Section 5.5 showed that models can match the 250-500 \(\mu{\rm m}\) observations well, simply by assuming a high FIR-to-NIR opacity ratio (Fig. 20). Significant further fine-tuning is possible with the \(A_{\rm V}\) and \(FWHM\) parameters. The evidence for spatial dust property variations is not strong even at \(160\,\mu{\rm m}\), especially considering the potential effects of changes in the density field. Many studies have shown that higher FIR-to-NIR opacity ratios are found only in dense clouds. The lower ratios of diffuse clouds are also well constrained and correspondingly encoded in dust models (Draine 2003; Compiegne et al. 2011; Jones et al. 2013; Hensley & Draine 2023). The dust properties vary even in diffuse medium (up to 50% in the FIR-to-visual opacity ratio; Fanciullo et al. 2015), and larger changes can take place still at large scales, in the transition to molecular clouds (Roy et al. 2013; Juvela et al. 2015b; Hayashi et al. 2019). It would therefore be surprising if the dust properties remained constant over the whole area, which was well over \(10\,{\rm pc}^{2}\) in size. In Sect. 4.3, none of the two-component models with the COM, CMM, and AMMI dust properties resulted in significant improvement in the fits. The best combination was COM-AMMI, because those components are also individually best in fitting the FIR data. The match to NIR extinction measurements was not improved compared to the single-dust models. Therefore, a combination of different dust populations has little effect, if the dust components themselves are far from correct. The column density was optimised for every LOS separately. This results, at least at \(350\,\mu{\rm m}\), to a better fit than if the densities were assumed to follow a more rigid analytical prescription. This may be particularly important when one investigates changes in the dust properties. Otherwise small differences between the assumed (analytical) density profiles and the real cloud structure Figure 19: Model predictions for \(\tau({\rm J})\) plotted against the estimates calculated for 2MASS stars. The dots represent individual stars in the filament region and the solid lines moving averages. 
The models correspond to \(A_{\rm V}=1^{\rm mag}\) and \(FWHM=0.5\,{\rm pc}\) and the dust models listed in the legend. The two modifications of CMM are CMM-1 (\(\Delta\beta=-0.3\)) and CMM-2 (50% increase in FIR opacity). The uncertainty of the relative zero points of the data sets is expected to be a fraction of \(\tau({\rm J})=1\), with little impact especially in the high-\(\tau\) end of the plot. The straight black solid and dashed lines indicate, respectively, the one-to-one relationship and values higher by a factor of three. could translate into artificial dust property variations in the models.

Ysard et al. (2013) used cylindrical radiative transfer models to examine the FIR observations of selected cross sections along the LDN 1506 filament. With a different background subtraction, the modelling concentrated on the central \(\pm 0.2\) pc part of the filament. Each cross section was optimised separately, with slightly different values of the dust and cloud parameters. A good match to both the NIR extinction and the FIR emission required changes in the dust properties. The models included a transition from standard diffuse-medium dust at low densities to aggregates above a sharp density threshold of \(\sim\)1000-6000 cm\({}^{-3}\), with a factor of \(\sim\)2 increase in the 250 \(\mu\)m opacity. Only the low spectral index value of the adopted aggregates, \(\beta\sim 1.3\), caused some problems in fitting the 500 \(\mu\)m observations. The ratios of the NIR and FIR opacities of the aggregate models tested in that paper (Ossenkopf & Henning 1994; Ormel et al. 2011) were in the range \(\kappa(2.2\,\mu{\rm m})/\kappa(250\,\mu{\rm m})=208-379\). This corresponds to \(\tau(250\,\mu{\rm m})/\tau({\rm J})=1\times 10^{-3}-1.9\times 10^{-3}\), values that are similar to or higher than in our ad hoc CMM-3 case and in rough agreement with the earlier modelling work of Stepnik et al. (2003). At low densities, Ysard et al. (2013) adopted the dust properties from Compiegne et al. (2011). However, one can note that even the dust models developed for more diffuse Milky Way regions have significant differences in their FIR-to-NIR optical depth ratios (Guillet et al. 2018; Hensley & Draine 2023).

Our models try to find a self-consistent description for a larger cloud area. Following the above examples, we also paired a diffuse-medium dust (COM) with dust variants that have a much higher FIR opacity. The other model parameters were varied in the ranges \(FWHM\)=0.2-0.5 pc, \(A_{\rm V}\)=0-1 mag, and \(n_{0}\)=0-3000 cm\({}^{-3}\). For comparison, the first row of Fig. 21 shows a single-component fit with the CMM-3 dust, where the other model parameters are chosen to be between the two cases of Fig. 20 and thus nearly optimal for that dust model. The second row is a two-component fit that combines the COM and CMM-3 dusts. There is marginal improvement at 250-500 \(\mu\)m, and the 160 \(\mu\)m residuals are closer to zero, but only outside the filament. The models preferred a low value of \(n_{0}\), and there is little difference relative to the single-component fit. We already saw in connection with the single-dust models that both a higher FIR opacity and a lower \(\beta\) improved the fits. On the last row of Fig. 21, the COM dust is paired with another modification of CMM. The FIR opacity is increased only by a factor of two (instead of the factor of three in CMM-3), but the 250-500 \(\mu\)m \(\beta\) is also decreased to 1.72. The other parameters (\(FWHM\), \(A_{\rm V}\), \(n_{0}\)) are the same as in the previous cases.
The resulting fit to 250-500 \(\mu\)m is better, although the average errors were already below 5%. The main change is at 160 \(\mu\)m, where the average residuals are closer to zero. The lower \(\beta\) is compensated by higher dust temperature, which has increased the 160 \(\mu\)m emission from the model and reduced the average residuals closer to zero. All three models in Fig. 20 are in good agreement with the observations of the NIR extinction. After the adoption of the CMM-3 model, the fits also prefer more compact clouds shapes (\(FWHM<\)0.5 pc) and lower cloud shielding (\(A_{\rm V}\lesssim 1^{\rm mag}\)). These are both consistent with other constraints from the measurements of the volume density (this also corresponding to an approximate cylinder symmetry for the filament) and LOS extinction. The high FIR-to-NIR opacity ratio makes the dust colder, which is compensated in the models by a stronger radiation field. These result in mass estimates that are towards the lower end of all the single-dust models in Fig. 7. All models in Fig. 20 Figure 20: Fits with the modified CMM-3 dust. The upper frames correspond to a \(FWHM=0.2\) pc and \(A_{\rm V}=0^{\rm mag}\) model and the lower frames to a \(FWHM=0.5\) pc and \(A_{\rm V}=1^{\rm mag}\) model. The leftmost frames show the model-predicted J-band optical depths against the values measured with 2MASS stars. Dots show the values for individual stars, the solid blue line is a moving average, and the straight cyan line corresponds to the one-to-one relation. The other frames show the 160, 250, and 500 \(\mu\)m fit residuals (\(r_{\rm err}\)), where the -5% and +5% error levels are indicated with white and black contours, respectively. It is noteworthy that in the regions B and C (B212 and B215) the 250 \(\mu\)m and 500 \(\mu\)m residuals have different signs in the two models. have \(k_{\rm ISRF}\) values close to two, the parameter describing the field outside the assumed external cloud layer. The estimated field strength also increases if \(\beta\) is decreased. In Fig. 20, we chose to show the model results for \(A_{\rm V}=0.5^{\rm mag}\), which is in best agreement with the estimates of the LOS extinction. The \(A_{\rm V}=1^{\rm mag}\) models resulted in even slightly better fits to the 160 \(\mu\)m data. However, this increase in \(A_{\rm V}\) would also result in \(k\)(ISRF) increasing further by 0.8-1.0 units. The difference relative to models of the local ISRF may be something of a problem already for the models in Fig. 20. Also in the L1506 models of Ysard et al. (2013), attenuated fields resulted in worse fits than the standard ISRF. However, in that case higher radiation field values were not tested. The ISRF in the solar neighbourhood is well constrained by direct and indirect observations and modelling (Mathis et al., 1983; Lehtinen & Mattila, 2013; Fanciullo et al., 2015; Planck Collaboration et al., 2016; Mattila et al., 2018). It is still noteworthy that the DIRBE-observed average sky brightness is in NIR some 50% above the Mathis et al. (1983) values (Lehtinen & Mattila, 1996) and the difference approaches at 5 \(\mu\)m a factor of two. NIR radiation is important for the dust heating in the deeper cloud layers. Nevertheless, since the Taurus filament is not likely to get significant additional heating from local radiation sources, a high \(k\)(ISRF) values of the models remain a concern. The ratio of FIR and NIR opacities is a sensitive tracer of dust evolution. 
Apart from, for example, detailed spectroscopic observations of the NIR-MIR extinction curve, also dust scattering across the same wavelengths can provide useful constraints on the dust models (Lefevre et al., 2014; Saajasto et al., 2021; Juvela et al., 2020). ## 6 Conclusions We have modelled the FIR dust emission over an extended region of the B212-B215 filament in the Taurus molecular cloud. The goal has been to examine how the modelling results are affected by different factors and how well the dust properties can be constrained using FIR data over a limited wavelength range. We also wanted to see, if it is possible to build a self-consistent model for the larger, 16 square degree area with a large range of column densities. The fits used primarily 250-500 \(\mu\)m data, but the model predictions for 160 \(\mu\)m surface brightness and NIR extinction were also examined. We used three basic dust models from the literature (COM, CMM, AMMI) but finally also examined what further modifications of dust opacity and opacity spectral index are needed in order to match the Taurus observations. * The use of RT modelling for the analysis of extended maps has become feasible. The three square degree maps were modelled using only \(\sim\)15 million cells, with the run times of the individual RT runs varying between 10 seconds and a couple of minutes. * The models were optimised using simple heuristics for the column density and radiation field updates. This allowed fast convergence and the final result was usually found after some tens of iterations. * The use of the basic dust models led to small but significant differences in the FIR fit quality. Largest errors are found observed towards regions of high column density. However, the largest discrepancy was in the NIR extinction, where these models overestimated the observed NIR extinction by a factor of 2-3. * The fits are strongly affected by the assumed LOS cloud size, especially in regions of high column density, and by the spectral shape of the illuminating radiation field. These effects are of similar magnitude or even larger than the differences between the tested basic dust models. * The adoption of a different dust model may cause only small changes in the fit quality and yet result in large differences in the estimates of the cloud mass (up to a factor of five for the studied models) and the radiation field intensity (up to \(\pm\)30%). The mass differences are affected by the absolute values of the dust opacity, while the estimates for the optical depths varied by less than a factor of two. * The Taurus observations could be fit, down to the 160 \(\mu\)m band, by adopting a dust model where the FIR opacity was increased by a factor of 2-3 (\(\tau(250\,\mu\)m)/\(\tau\)(J) = \((0.8-1.2)\times 10^{-3}\)). The resulting models also are in good agreement with NIR extinction observations, but require a radiation field that is twice the standard value in the solar neighbourhood. * The models are consistent with the expected dust evolution, where for example the formation of ice mantles and grain aggregates could explain the large FIR dust opacities. However, although dust can be seen to be clearly different from the normal dust in diffuse medium, the 250-500 \(\mu\)m or even the 160-500 \(\mu\)m data alone are not sufficient to unambiguously show dust property variations within the field. * The effects of dust property variations are partly degenerate with those of the poorly constrained density and radiation fields. 
Systematic errors in data, mainly in the zero points of the surface brightness measurements, can produce further effects that are correlated with the column density. In the case of Herschel data, the magnitude of these potential effects is small but not negligible. * The optical depths of the fitted radiative transfer models were compared to estimates from SED fitting. For the Taurus observations, single-temperature MBB fits gave expectedly the lowest values, MBB fits assuming a Gaussian temperature distribution up to 15% higher values, and radiative transfer modelling up to 50% higher values. Similar results were observed when analysing synthetic surface brightness observations from the models. However, depending on the assumed dust properties, RT models may also overestimate the optical depths and the effect can be locally significant. To constrain the dust models further, it would be important to have multi-frequency data also on the NIR-MIR extinction and light scattering. Such observations are currently possible with the James Webb Space Telescope (JWST). As a higher mass counterpart to the Taurus filaments, the Orion molecular cloud three (OMC-3) would be a promising target, due to the clear MIR absorption seen in _Spitzer_ data towards its filaments (Juvela & Mannfors, 2023). ###### Acknowledgements. MJ acknowledges the support of the Academy of Finland Grant No. 348342.
Dust emission is an important tool in the study of star-forming clouds: it is used to measure the dust column density and, through dust evolution, it is connected to the history and physical conditions of the clouds. We examined radiative transfer models of the dust emission over an extended region of filaments in the Taurus molecular cloud, and investigated how far-infrared observations can be used to determine the properties of the clouds and of the dust. RT models were fitted to the Herschel observations under different assumptions about the cloud shape, the radiation field, and the dust properties. These models were compared with the near-infrared extinction of the Taurus filaments. The dust-emission models were used to investigate the interplay between differences in the cloud shape and in the dust properties. As a result, the dust-emission models show that the cloud structure and the external radiation
2309.09074
Test-Time Compensated Representation Learning for Extreme Traffic Forecasting
Traffic forecasting is a challenging task due to the complex spatio-temporal correlations among traffic series. In this paper, we identify an underexplored problem in multivariate traffic series prediction: extreme events. Road congestion and rush hours can result in low correlation in vehicle speeds at various intersections during adjacent time periods. Existing methods generally predict future series based on recent observations and entirely discard training data during the testing phase, rendering them unreliable for forecasting highly nonlinear multivariate time series. To tackle this issue, we propose a test-time compensated representation learning framework comprising a spatio-temporal decomposed data bank and a multi-head spatial transformer model (CompFormer). The former component explicitly separates all training data along the temporal dimension according to periodicity characteristics, while the latter component establishes a connection between recent observations and historical series in the data bank through a spatial attention matrix. This enables the CompFormer to transfer robust features to overcome anomalous events while using fewer computational resources. Our modules can be flexibly integrated with existing forecasting methods through end-to-end training, and we demonstrate their effectiveness on the METR-LA and PEMS-BAY benchmarks. Extensive experimental results show that our method is particularly important in extreme events, and can achieve significant improvements over six strong baselines, with an overall improvement of up to 28.2%.
Zhiwei Zhang, Weizhong Zhang, Yaowei Huang, Kani Chen
2023-09-16T18:46:34
http://arxiv.org/abs/2309.09074v1
# Test-Time Compensated Representation Learning for Extreme Traffic Forecasting ###### Abstract Traffic forecasting is a challenging task due to the complex spatio-temporal correlations among traffic series. In this paper, we identify an underexplored problem in multivariate traffic series prediction: extreme events. Road congestion and rush hours can result in low correlation in vehicle speeds at various intersections during adjacent time periods. Existing methods generally predict future series based on recent observations and entirely discard training data during the testing phase, rendering them unreliable for forecasting highly nonlinear multivariate time series. To tackle this issue, we propose a test-time compensated representation learning framework comprising a spatio-temporal decomposed data bank and a multi-head spatial transformer model (CompFormer). The former component explicitly separates all training data along the temporal dimension according to periodicity characteristics, while the latter component establishes a connection between recent observations and historical series in the data bank through a spatial attention matrix. This enables the CompFormer to transfer robust features to overcome anomalous events while using fewer computational resources. Our modules can be flexibly integrated with existing forecasting methods through end-to-end training, and we demonstrate their effectiveness on the METR-LA and PEMS-BAY benchmarks. Extensive experimental results show that our method is particularly important in extreme events, and can achieve significant improvements over six strong baselines, with an overall improvement of up to 28.2%. neural networks, time series, extreme event, spatio-temporal decomposition, transformer ## 1 Introduction With the construction of smart cities, traffic speed forecasting plays an essential role in vehicle dispatching and route planning for intelligent transportation systems [29]. Recently, a significant amount of deep models have been subsequently developed to this research area, achieving noticeable improvements over traditional methods [1, 2, 3, 4, 6, 9, 10, 11, 13, 20, 25, 26, 27]. Especially graph neural network (GNN) based methods have attracted tremendous attention and demonstrated predictive performance due to their ability to capture the complicated and dynamic spatial correlations from graph [1, 3, 6, 10, 11, 13, 20, 25, 26]. However, multivariate traffic series forecasting remains challenging because it is difficult to simultaneously model complicated spatial dependencies and temporal dynamics, especially for the extreme events [12]. Existing literature has paid little attention to extreme traffic forecasting. In this paper, we first identify extreme events in traffic speed forecasting, as shown in Figure 2. An extreme event occurs when recent observations and forecasts exhibit different distributions, indicating a high degree of non-linearity between them, which increases the prediction difficulty of the model. We then define two evaluation metrics to measure extreme events: events with a large number of zero-valued speeds and events with high entropy in multivariate the traffic series. As illustrated in Figure 1 (a), those two evaluation indicators exhibit a long-tailed distribution. More zero-valued speeds and larger input entropy indicate complex road conditions. We observe that existing methods struggle to handle extreme events well because they predict future series based on a limited horizon of recent observations. 
For example, they predict vehicle speeds for the next 30 minutes based on the observations collected in the last hour [13, 25, 26]. The drawback of this approach is that limited-horizon observations can cause forecasting models to fail when faced with extreme events, as illustrated in Figure 1 (b). We argue that simply extending the field of view of observations is not a viable solution to this dilemma. Due to the daily and weekly periodic characteristics and temporal dynamics, simply enlarging the window of long-distance historical data yields redundant and unreliable inputs. Additionally, learning informative patterns from long-term historical data is computationally expensive [21]. Therefore, it is crucial to determine how to enlarge the horizon to the entire historical data within limited computing resources to improve predictions during extreme events. We propose a test-time compensated representation learning framework to enhance model predictions under extreme events. Unlike other time series, such as stock prices, historical traffic series have a longer shelf life for predicting the future, as their patterns always exhibit periodicity over time. Thus, the first component of our method divides all training data according to periodic characteristics (e.g., daily and weekly periods) and stores it in a data bank that can be retrieved based on the input timestamps. The spatio-temporal decomposed data bank explicitly separates all historical data along the temporal dimension, making it easy for the subsequent transformer model to access historical series with similar periods. The 3D visualization in Figure 4 shows that, after spatio-temporal decomposition, the vehicle speeds at a given intersection always remain in a relatively stable range, and abnormal traffic series are easier to distinguish. The second component of our framework is a multi-head transformer model (CompFormer), which establishes connections between recent observations and historical traffic series stored in the data bank through a spatial attention matrix. Benefiting from the spatio-temporal decomposition of traffic series, the spatial attention matrix can integrate all historical data with fewer computing resources than a spatio-temporal attention matrix, enhancing the prediction ability of the model under extreme events. The architecture of our framework is depicted in Figure 3 and Figure 5. The compensated features learned by CompFormer are concatenated with the input embeddings, so all the modules can be trained end-to-end. Our modules can be flexibly integrated with existing traffic forecasting methods, such as DCRNN [13], MTGNN [25], GWN [26], GTS [20], DGCRN [11], and STEP [21]. We evaluate our proposed framework on two benchmark datasets, METR-LA and PEMS-BAY. The experimental results show that our method significantly improves prediction performance in extreme cases compared to six strong baselines, with an overall improvement of up to 28.2%. Furthermore, we provide a detailed analysis of the effectiveness of our proposed CompFormer component, including ablation studies. The main contributions of this paper are summarized as follows: * We identify extreme events as a critical challenge in multivariate traffic series forecasting and propose two evaluation metrics to measure the severity of these events. * We develop a test-time compensated representation learning framework, consisting of a spatio-temporal decomposed data bank and a multi-head spatial transformer model (CompFormer), to address the issue of extreme events.
* Our framework can be flexibly integrated with existing forecasting methods, and we demonstrate its effectiveness on the METR-LA and PEMS-BAY benchmarks. We conduct extensive experiments and analyses to validate the performance of our method, showing significant improvements over six strong baselines, especially in extreme cases. Overall, this paper provides a novel perspective on the problem of traffic forecasting and proposes an effective method for addressing extreme events, which is an under-explored area in the literature. ## 2 Related Works In this section, we first review the state-of-the-art deep learning models for traffic series forecasting. Next, we introduce the extreme events in general time series prediction scenarios. Finally, we describe how existing methods use the periodicity information to enhance the predictive capabilities. ### _Traffic Speed Forecasting_ In recent years, Graph Neural Networks (GNNs) have become a frontier in traffic forecasting research and shown state-of-the-art performance due to their strong capabilities in modeling spatial dependencies from graphical datasets [1, 3, 6, 10, 11, 20, 25, 26]. Most existing methods focus on modelling spatial dependencies by developing effective techniques to learn from the graph constructed from the geometric relations among the roads. For example, with graphs as the input, the diffusion convolutional recurrent neural network (DCRNN) [13] models the spatial dependency of traffic series with bidirectional Fig. 1: (a) Two types of evaluation metrics to measure the extreme events: one is the number of zero-valued observations, and the other is the entropy of input. This figure is made from METR-LA data. (b) The curves above show that the MAE loss achieved by GWN [26] is always large when the entropy and number of zero-valued speed in input are large. The left Y-axis represents the MAE loss value, and the **right Y-axis** represents the values of input entropy and number of zero. random walks on a directed graph as a diffusion process. Graph WaveNet (GWN) [26] combines the self-adaptive adjacency matrix and dilated causal convolution to learn the spatial and temporal information respectively. However, the above models assume fixed spatial dependencies among roads [18]. Wu et al. [25] propose a general GNN-based framework for capturing the spatial dependencies without well-defined graph structure. Benefiting from the ability of capturing global sequential dependency and parallelization, attention mechanism has become a popular technique in sequential dependency modeling, including self-attention [18, 23, 27, 32] and other variants [14, 19, 28]. Recent traffic forecasting models utilized multi-head attention [22] to model spatial and temporal dependencies [14, 18, 19, 23, 27, 28, 32]. Attention modules can directly access past information in long input sequences, but they cannot enlarge the predictor horizon to the whole historical data. This is because the length of the input need to be continuously increase, which results in intractable computational and memory cost in attention. Therefore, to enlarge the predictor horizon of predictor without increasing the computing resource, we propose a spatio-temporal decomposition method in this paper that makes transformer only pay attention to spatial dimension, greatly reducing the complexity of compensated representation learning. 
### _Extreme Events_ Previous traffic forecasting methods overlook extreme events, which feature abrupt speed changes and irregular, rare occurrences, resulting in poor performance when applied to speed prediction. In other time series prediction tasks, most of the existing methods consider extreme events as a rare-event prediction problem [5, 24]. Ding et al. [5] addressed extreme events by formulating them as a data imbalance problem and resolving it by sample re-weighting. However, the extreme condition problem is more severe in traffic data, because many observations contain many zero-valued speeds and have large entropy (Figure 1), which, together with the spatio-temporal complexity of traffic series, makes it difficult to give a proper definition of these extreme events so that they can be identified during training. Therefore, we cannot address this problem by simple re-weighting as in the above methods. In this paper, we construct a spatial transformer model to automatically extract compensated features from the whole historical data for the extreme events when needed. ### _Periodicity_ Periodicity has been widely used in many time series prediction tasks. Considering the obvious periodicity of traffic data, Guo et al. [8] proposed three independent spatial-temporal attention modules to respectively model three temporal properties of traffic flows, i.e., recent, daily-periodic, and weekly-periodic dependencies. ST-ResNet [31] was proposed for crowd flow prediction, in which three residual networks are used to model the temporal closeness, period, and trend properties of crowd traffic. Liang et al. [15] regarded the periodicities as an external factor, and Yao et al. [28] designed a periodically shifted attention mechanism to handle long-term periodic temporal shifting. However, their prediction ability is limited because of their limited horizon on the historical data. Therefore, in order to make full use of the whole historical data with limited computing resources, we propose a spatio-temporal decomposition method based on the periodicity characteristics. ## 3 Extreme Events In this section, we first describe the mathematical formulation of multivariate time series prediction in the context of traffic speed forecasting. Next, two typical types of extreme events that pose challenges for existing forecasting methods are introduced. Finally, we introduce two evaluation metrics to measure the extreme events. ### _Preliminaries_ Consider a multivariate traffic time series with \(C\) correlated variables (sensors), denoted as \(\mathcal{X}=\{\mathbf{x}_{\cdot,1},\mathbf{x}_{\cdot,2},\ldots,\mathbf{x}_{\cdot,t},\ldots\}\). Here, each component \(\mathbf{x}_{\cdot,t}=(\mathbf{x}_{1,t},\mathbf{x}_{2,t},\ldots,\mathbf{x}_{C,t})^{\top}\in\mathbb{R}^{C\times D}\) represents the recordings of the \(C\) sensors at time step \(t\), where \(D\) is the representation dimension of these sensors. The goal is to predict the future values of this series based on the historical observations. Fig. 2: (a) Under normal traffic conditions, the vehicle speed at each intersection is stable at around 60 km/h, thus the prediction error will be very small. (b) In the first extreme case, the observed series and the predicted values are completely uncorrelated, which makes it difficult for the model to accurately predict. (c) In the second extreme case, traffic jams lead to huge differences in the vehicle speeds at various intersections, which also increases the difficulty of the model's predictions. The figures are made from METR-LA data.
Existing methods typically formulate the problem as finding a function \(\mathcal{F}(\cdot;\theta)\) to forecast the next \(\tau\) steps based on the historical data of the past \(L\) steps: \[(\hat{\mathbf{x}}_{\cdot,t+1},\hat{\mathbf{x}}_{\cdot,t+2},\ldots,\hat{\mathbf{x}}_{\cdot,t+\tau})=\mathcal{F}(\mathbf{x}_{\cdot,t-L+1},\ldots,\mathbf{x}_{\cdot,t-1},\mathbf{x}_{\cdot,t};\theta), \tag{1}\] where \(\theta\) denotes the parameters of the forecasting model. We denote the input sequence at time \(t\) as \(\mathbf{X}_{t}=(\mathbf{x}_{\cdot,t-L+1},\ldots,\mathbf{x}_{\cdot,t-1},\mathbf{x}_{\cdot,t})^{\top}\in\mathbb{R}^{L\times C\times D}\), where \(L\) is the input length. In this paper, we take the traffic speed forecasting problem as an example and assume \(D=2\), with one dimension recording the vehicle speed and the other dimension recording the global time stamp. ### _Definition of Extreme Events_ In this section, we provide visualizations of traffic series to illustrate the extreme events. As shown in Figure 2, we define extreme events by the distribution of observed and predicted values, as well as by the current road conditions. * **Normal condition.** Outside the morning and evening rush hours and without traffic jams, vehicle speeds at each intersection are stable at around 60 km/h, and the distributions of observations and predictions are similar, which enables the model to accurately predict the speed. * **Extreme event [1].** In the first extreme case, the linear correlation between observations and predictions is very low, possibly with no correlation at all, which makes it impossible for the model to obtain accurate predictions by relying only on recent observations. * **Extreme event [2].** In the second extreme case, traffic jams lead to huge differences in the vehicle speeds at various intersections (e.g., ranging from 0 to 70 km/h), and this infrequent event will also make it more difficult for the model to predict. ### _Evaluation Metrics of Extreme Events_ Based on the definition of extreme events, we introduce two different evaluation metrics to measure the severity of extreme events. We also qualitatively analyze the relationship between the evaluation metrics and prediction errors through visualization. We denote by \(Z_{t}\) the number of zero-valued speeds in \(\mathbf{X}_{t}\), which can be calculated as: \[Z_{t}=\sum_{l=1}^{L}\sum_{n=1}^{C}z_{l}^{n}, \tag{2}\] where \(z_{l}^{n}=1\) if the speed of the \(n\)-th sensor at time \(t-l+1\) is 0, i.e., \(\mathbf{X}_{t}(l,n,0)=0\), and \(z_{l}^{n}=0\) otherwise. A larger \(Z_{t}\) may be caused by a large-area traffic jam due to a traffic accident, or by the absence of vehicles in this time step, and this infrequent condition makes the speed prediction difficult. The extremeness is also measured by the entropy of the speed values of the sensors in \(\mathbf{X}_{t}\), which can be computed by: \[P_{t}=\frac{1}{L}\sum_{l=1}^{L}\sum_{c=1}^{C}-(\hat{\mathbf{X}}_{t}(l,c,0)+\epsilon)\log(\hat{\mathbf{X}}_{t}(l,c,0)+\epsilon), \tag{3}\] where \(\hat{\mathbf{X}}_{t}\) is the normalized probability distribution and \(\epsilon>0\) is a small positive value used to handle zero-valued speeds for numerical stability. A larger \(P_{t}\) indicates a more diverse speed distribution, which implies a higher level of traffic congestion and more difficulty in predicting speeds. As shown in Figure 1 (a), the distribution curves of the two evaluation metrics exhibit long-tailed distributions, which is a good indication that extreme events are infrequent but repetitive. About 0.4% of the recorded values in the multivariate traffic speed series are zeros.
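For concreteness, the two extremeness measures above can be computed directly from the speed channel of an input window. The following is a minimal sketch, assuming \(\mathbf{X}_{t}\) is given as an \(L\times C\) array of speeds and that the normalization in Eq. (3) is taken per time step over the \(C\) sensors; the function names are illustrative and not from the original implementation.

```python
import numpy as np

def zero_count(X_t):
    """Z_t of Eq. (2): number of zero-valued speeds in the (L, C) window."""
    return int(np.sum(X_t == 0))

def input_entropy(X_t, eps=1e-8):
    """P_t of Eq. (3): mean entropy of the per-step speed distributions."""
    L, _ = X_t.shape
    p = (X_t + eps) / (X_t + eps).sum(axis=1, keepdims=True)  # normalise each step
    return float(-(p * np.log(p)).sum() / L)
```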
The empirical results shown in Figure 1 (b) demonstrate that existing methods perform poorly in these events. In the 9th, 84th, and 103rd predictions, the larger \(Z_{t}\) and \(P_{t}\) indicate that the extreme events are more serious, and the prediction error of the model is correspondingly larger. Therefore, we would like to point out that predicting traffic speed in the above two infrequent but repetitive extreme events is important and valuable in smart transportation systems, especially for driving route planning and vehicle dispatching. However, traffic speed forecasting that relies solely on recent observations is unreliable, especially under extreme events. Fig. 3: The overall framework of our proposed test-time compensation learning method. 1) All historical series are integrated into a periodic data bank. 2) The CompFormer model is used to establish a connection between recent observations and selected historical series using a spatial attention matrix to transfer robust representations for overcoming extreme events. 3) The learned compensated features are concatenated with the original embeddings as a new input for the forecasting models. The CompFormer and forecasting model can be trained end-to-end. ## 4 Methodology In this section, we first visualize the traffic time series before and after spatio-temporal decomposition to explain why the proposed periodic data bank is necessary. Next, we introduce the construction of the periodic data bank. Finally, our proposed CompFormer is presented to learn compensated representations. ### _Spatio-temporal Decomposition_ The existence of extreme events renders predictions that rely solely on recent observations unreliable. Therefore, it is worth exploring how to let all historical data participate in the testing phase to address the aforementioned shortcomings. Our analysis indicates that using more historical data as observations (such as all training data, one week's data, or one day's data) for prediction would introduce significant redundant information, which would greatly increase the difficulty of model prediction. We visualize and analyze the correlation of traffic time series between adjacent time stamps by calculating their correlation coefficients. In order to analyze the linear correlation within \(\mathbf{X}_{t}\), we compute the Pearson product-moment correlation coefficient (PPMCC) by: \[\rho\big(\mathbf{X}_{t}(l-1,:,0),\mathbf{X}_{t}(l,:,0)\big)=\frac{\sum_{i=1}^{C}(x_{ti}^{l-1}-\bar{x}_{t}^{l-1})(x_{ti}^{l}-\bar{x}_{t}^{l})}{\sqrt{\sum_{i=1}^{C}(x_{ti}^{l-1}-\bar{x}_{t}^{l-1})^{2}}\,\sqrt{\sum_{i=1}^{C}(x_{ti}^{l}-\bar{x}_{t}^{l})^{2}}}, \tag{4}\] where \(x_{t}^{l}\in\mathbb{R}^{C}\) is the vector of speed values of all intersections at the \(l\)-th step of the input window \(\mathbf{X}_{t}\). Thus, the value of \(\rho\) can vary between \(-1\) and \(1\). A smaller absolute value \(|\rho|\) means that \(x_{t}^{l-1}\) and \(x_{t}^{l}\) at adjacent times are less linearly correlated, which increases the predictive difficulty of the model. From the first column of Figure 4, we can see that the correlation coefficients of the multivariate traffic time series five minutes apart show obvious morning and evening peak patterns on weekdays. The low correlation coefficient values mean that long-term predictions using only short-term historical data are unreliable. Another conclusion is that traffic conditions at various times on weekends are different from those on weekdays.
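As a side note, the step-to-step correlation of Eq. (4) can be evaluated with off-the-shelf routines; the helper below is only a sketch, assumes the same \((L, C)\) speed array as above, and uses an illustrative function name.

```python
import numpy as np

def adjacent_ppmcc(X_t):
    """Pearson correlation between consecutive time steps of an (L, C) window."""
    return [float(np.corrcoef(X_t[l - 1], X_t[l])[0, 1]) for l in range(1, len(X_t))]
```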
If we simply enlarge the input of the model to the historical data of the past week or the past day, a lot of computing resources are required for the model to find informative data within the redundant information [21]. Therefore, how to use all historical data for prediction under limited computing resources is an urgent problem to be solved. As shown in the second column of Figure 4, the outliers in highly complicated traffic series are easy to identify after spatio-temporal decomposition. For example, at 06:00, the speed values recorded at the intersection are within a stable range, \(\approx 60\) km/h. Fig. 4: The visualization of Pearson product-moment correlation coefficients (PPMCC) and historical traffic series after spatio-temporal decomposition on Tuesday and Saturday. 1) The first column of figures shows that the traffic time series on weekdays and weekends present different morning and evening peaks. 2) The second column of figures shows that it is easy to identify the outliers (extreme events) of the spatio-temporally decomposed traffic series at a certain intersection, which reduces the difficulty of model prediction. In addition, an intersection contains only about ten recorded values at a particular time in the entire METR-LA dataset, thus allowing accurate predictions to be made with minimal computational resources. The next question is how to use all the historical data after spatio-temporal decomposition. Therefore, we propose the following periodic data bank and spatial transformer model. ### _Periodic Data Bank_ In this section, we propose to separate all the historical traffic series according to their periodicity characteristics to construct a periodic data bank, which involves the following data preprocessing and bank construction. **Data preprocessing.** We replace all zero-valued speeds in the training data by looking up the historical data. The zero-valued speeds are of two types. In the first, all speed values of the input traffic series are zero, so we replace them with the data recorded at the same periodic time. In the second, only a small part of the input sequence has zero speeds, and we replace them with the mean of the remaining non-zero values. **Bank construction.** In constructing the periodic data bank, we consider various periodic granularities of the traffic series; for example, \(P=2016\) is the periodicity obtained by combining the five-minute slot of the hour (12), the hour of the day (24), and the day of the week (7). By doing this, the number of records in one periodic slot is \(Q\). We partition and store all the traffic series into a matrix according to their periodicity: \[\mathcal{M}=[\mathbf{M}_{1},\mathbf{M}_{2},...,\mathbf{M}_{P}]^{\top}\in\mathbb{R}^{P\times Q\times C} \tag{5}\] where \(\mathbf{M}_{p}\in\mathbb{R}^{Q\times C}\); each slice stores all the data recorded for a certain periodic slot. For example, as shown in Figure 4, at 6:00 am on Tuesday, a total of \(Q=10\) pieces of data were recorded at a certain intersection, and the METR-LA dataset has \(C=207\) intersections, so \(\mathbf{M}_{\text{Tues, 06:00}}\in\mathbb{R}^{10\times 207}\). One of the benefits of this spatio-temporal decomposition is that each \(\mathbf{M}_{p}\) slice functions as a representative of all the training data collected for a given periodicity, thereby reducing the computational complexity of the subsequent transformer model designed to learn compensated representations.
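A rough sketch of this bank construction is given below. It assumes 5-minute sampling (288 steps per day, so \(P = 7 \times 288 = 2016\) weekly slots) and integer step indices as timestamps; the layout (a dict of per-slot arrays rather than a padded \(P\times Q\times C\) tensor) and the function names are our own illustrative choices, not the original implementation.

```python
import numpy as np

def build_periodic_bank(speeds, step_index, steps_per_day=288):
    """Group all training records by their weekly-periodic slot.

    speeds:     (T, C) array, one reading per sensor per 5-minute step
    step_index: (T,) integer step indices of the readings
    Returns a dict mapping slot p in [0, P) to an (Q_p, C) array M_p.
    """
    period = 7 * steps_per_day                       # P = 2016 weekly slots
    slots = np.asarray(step_index) % period
    return {p: speeds[slots == p] for p in range(period) if np.any(slots == p)}

def sample_from_bank(bank, t, R, steps_per_day=288, rng=None):
    """Randomly sample R stored series for the slot following time step t."""
    if rng is None:
        rng = np.random.default_rng()
    M_p = bank[(t + 1) % (7 * steps_per_day)]        # slot of the "future" step
    return M_p[rng.integers(0, len(M_p), size=R)]    # (R, C)
```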
### _Spatial Transformer_ Now we turn to present our multi-head spatial transformer module that can extract the compensated features from the above periodic data bank for extreme traffic prediction. After spatio-temporal decomposition, the proposed CompFormer only needs to automatically identify the anomalous patterns of \(\mathbf{X}_{t}\in\mathbb{R}^{L\times C\times D}\) in \(C\) dimension and transfer robust knowledge from periodic data bank based on the time stamp \(t\). To be precise, our CompFormer module is developed as \begin{table} \begin{tabular}{l c c} \hline \hline **Required Model:** Spatial transformer model \(\mathcal{A}_{\theta}\), \\ forecasting model \(\mathcal{F}_{\theta}\) & \\ **Required Input:** Mini batch input \(\mathbf{X}\in\mathbb{R}^{B\times L\times C\times D}\) \\ **Required Periodic Data Bank:**\(\mathcal{M}\in\mathbb{R}^{P\times Q\times C\times D}\) \\ 1:for\(i\)=1 to \(L\)do \\ 2: Current input \(\mathbf{X}_{t}^{i}\in\mathbb{R}^{B\times C\times 1}\) \\ 3: Extract series from \(\mathcal{M}\): \(M_{i}^{t+1}\in\mathbb{R}^{B\times(R\times C)\times 1}\) \\ 4: Set Query \(\mathbf{X}_{t}^{i}\), Key \(M_{i}^{t+1}\), Value \(M_{i}^{t+1}\) \\ 5: Attention output \\ \(O_{i}^{t+1}\in\mathbb{R}^{B\times C\times D}=A_{\theta}(\mathbf{X}_{i},M_{i}^ {t+1},M_{i}^{t+1})\) \\ 6: Assign \(F_{o}^{t+1}[i]=O_{i}^{t+1}\) \\ 7:endfor \\ 8: New input \(\mathbf{X}\in\mathbb{R}^{B\times L\times C\times(D+D)}\) = _concat_(\(\mathbf{X},\mathbf{F}_{o}^{t+1}\)) \\ 9: Prediction \(Y=f_{\theta}(\mathbf{X})\) \\ 10: Compute MAE loss \\ 11: Update \(\mathcal{A}_{\theta}\) and \(\mathcal{F}_{\theta}\) by back-propagation \\ \hline \hline \end{tabular} \end{table} TABLE I: Complexity comparison with different attention mechanism. Here \(L\) denotes the historical sequence length, \(C\) is the spatial dimension, \(R\) is number of historical series extracted from the periodic data bank, where \(R\ll L\). We assume the feature dimension is \(1\). Fig. 5: The framework of compensation representation learning process. 1) The input embedding is set as query (Q), and the selected \(R\) historical series are presented as key (K) and value (V). 2) The size of compensated feature learned by CompFormer is equal to input (i.e., query). follows. As shown in Figure 5, the observations \(\mathbf{X}_{t}(:,:,0)\in\mathbb{R}^{L\times C\times 1}\) with extremeness are transformed to embeddings by fully connected layers, and the historical embeddings \(\hat{\mathbf{X}}_{t+1}\in\mathbb{R}^{L\times RC\times D}\) are random sampled from periodic data bank based on time stamp \(t\), where hyper-parameter \(R\) is the number of sampled series. \(\hat{\mathbf{X}}_{t+1}\) represents the future historical data, because the temporal dynamics can cause the road conditions to be very different in the five minutes before and after, especially in the morning and evening peaks. In addition, we would like to point out that due to our well organized periodic data bank, such simple random sampling can achieve good performance, which is verified in our experiments. In our proposed CompFormer, we take the short observation embeddings \(\mathbf{X}_{t}\in\mathbb{R}^{L\times C\times D}\) and sampled historical series \(\hat{\mathbf{X}}_{t+1}\in\mathbb{R}^{L\times RC\times D}\) as query (\(Q\)) and key (\(K\)), value (\(V\)), respectively. 
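Before the exact formulation below, a minimal single-head PyTorch-style sketch of this query/key/value setup may help. The layer names and the scaled dot-product form of the score function are our assumptions (the paper specifies the score \(\mathcal{A}_{\theta}\) only abstractly in the equations that follow), and the final concatenation mirrors Figure 5.

```python
import torch
import torch.nn as nn

class SpatialCompAttention(nn.Module):
    """Sketch of a single-head spatial attention over the periodic data bank."""

    def __init__(self, d_model):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)

    def forward(self, x, hist):
        # x:    (L, C, D)   embeddings of the recent observations  -> query
        # hist: (L, R*C, D) series sampled from the periodic bank  -> key/value
        q, k, v = self.q(x), self.k(hist), self.v(hist)
        scores = q @ k.transpose(-1, -2) / q.shape[-1] ** 0.5   # (L, C, R*C)
        a = torch.softmax(scores, dim=-1)                       # spatial attention matrix
        comp = a @ v                                            # (L, C, D) compensated features
        return torch.cat([x, comp], dim=-1)                     # (L, C, 2D) new model input
```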
The computation of CompFormer takes the form of \[\begin{split}a_{ij}&=\mathrm{Softmax}_{j}(s_{ij})=\frac{\exp(s_{ij})}{\sum_{k=1}^{RC}\exp(s_{ik})},\\ s_{ij}&=\mathcal{A}_{\theta}\left(\mathbf{X}_{t}(L,i,D),\hat{\mathbf{X}}_{t+1}(L,j,D)\right),\end{split} \tag{6}\] where \(\mathcal{A}_{\theta}\) is the transformer model, \(i=1,2,\dots,C\), and \(j=1,2,\dots,RC\). \(a\in\mathbb{R}^{C\times RC}\) is the _spatial attention matrix_. The attention score \(a_{ij}\) represents the relationship between the input and the historical data in the spatial dimension, i.e., its value shows the importance of the corresponding sampled series from the periodic data bank, and it guides the interaction of information between them to transfer robust representations from \(\hat{\mathbf{X}}_{t+1}\). Next, \[O_{i}=\sum_{j=1}^{RC}a_{ij}\,\hat{\mathbf{X}}_{t+1}(L,j,D). \tag{7}\] After finishing the above computation, the CompFormer output \(O\in\mathbb{R}^{L\times C\times D}\) (the compensated features) is concatenated with the input \(\mathbf{X}_{t}\in\mathbb{R}^{L\times C\times D}\) and fed into the forecasting model, as illustrated in Figure 5. The detailed steps of our method are given in Algorithm 1. Fig. 6: The visualization of three different types of attention matrices. Fig. 7: The visualization of the relationship between the loss gap (with and without our method) and the degree of extremeness. We can see that in the events with a higher degree of extremeness, the improvement achieved by our module becomes more significant, i.e., the loss gap becomes larger. **Complexity comparison.** As illustrated in Figure 6, the traditional temporal attention calculates the relative weights between \(L\) sequences, and its time complexity is \(L^{2}\). Multivariate traffic series require learning both spatial dependencies and temporal dynamics, so the time complexity of spatio-temporal attention is \(L^{2}C^{2}\). However, due to the very high computational complexity and memory consumption, we cannot simultaneously learn the importance among very long historical sequences. Thanks to the spatio-temporal decomposed data bank we constructed, the time complexity of spatial attention is \(RC^{2}\), and it no longer depends on the length of the input sequence. In our experiments, the value of \(R\) is 5. ## 5 Experiments ### _Datasets_ Our experiments are conducted on the two most widely used real-world large-scale traffic datasets, METR-LA and PEMS-BAY, in which 70% of the data is used for training, 20% for testing, and the remaining 10% for validation. METR-LA [13] is a public traffic speed dataset collected from loop detectors on the highways of Los Angeles, containing 207 selected sensors and ranging from Mar 1st 2012 to Jun 30th 2012. The unit of speed is mile/h. PEMS-BAY [13] is another public traffic speed dataset, collected by the California Transportation Agencies (CalTrans) with a much larger size. It contains 325 sensors in the Bay Area, ranging from Jan 1st 2017 to May 31st 2017. The detailed information of the above two datasets is illustrated in Table II. Following the empirical settings of DCRNN [13], we aim to predict the traffic speeds of the next 15, 30, and 60 minutes based on the observations in the previous hour. As the sensors in both datasets record the speed value every 5 minutes, the input length equals \(12\), while the output length equals \(3\), \(6\), and \(12\) in our experiments.
### _Baselines and Configurations_ We verify the effectiveness of our proposed method in boosting existing methods by integrating it with the following six latest strong baselines: **DCRNN**[13], **MTGNN**[25], **GWN**[26], **GTS**[20], **DGCRN**[11] and **STEP**[21]. We also give the results of some other representative baselines to better evaluate the performance of our method. All the experiments are implemented by Pytorch 1.7.0 on a virtual workstation with a 11G memory Nvidia GeForce RTX 2080Ti GPU. We train our model by using Adam optimizer with gradient clip threshold equals to 5. The batch size is set to 64 on METRA-LA and PEMS-BAY datasets. Early stopping is employed to avoid overfitting. We adopt the MAE loss function defined in Section 5.3 as the objective for training. ### _Evaluation Metrics_ Following the baselines [13, 25, 26], we evaluate the performances of different forecasting methods by three commonly used metrics in traffic prediction: * Mean Absolute Error (MAE), which is a basic metric to reflect the actual situation of the prediction accuracy. * Root Mean Squared Error (RMSE), which is more sensitive to abnormal value. * Mean Absolute Percentage Error (MAPE) which can eliminate the influence of data value magnitude to some extent by normalization. The detailed formulations of these three metrics are: \[MAE(x,\hat{x}) =\frac{1}{N}\sum_{i=1}^{N}\|x_{i}-\hat{x}_{i}\|,\] \[RMSE(x,\hat{x}) =\sqrt{\frac{1}{N}\sum_{i=1}^{N}\|x_{i}-\hat{x}_{i}\|^{2}},\] \[MAPE(x,\hat{x}) =\frac{1}{N}\sum_{i=1}^{N}\frac{\|x_{i}-\hat{x}_{i}\|}{\|x_{i}\|},\] Fig. 8: The visualization of relationship between the effect of our spatial attention module and the extremeness. The results show that the magnitude of the attention output rapidly increases when the number of zero-valued speed or the entropy of the input becomes larger. where \(x=(x_{1},...,x_{N})\) denotes the ground truth of the speed of \(N\) sensors, \(\hat{x}=(\hat{x}_{1},...,\hat{x}_{N})\) represents the predicted values. ### _Main Results_ In this section, we show the main results to demonstrate the effectiveness of our proposed framework in boosting the latest strong baselines. Specifically, we present the overall improvements on different metrics achieved by our method (Section 5.4) and its superiority in predicting for the extreme events (Section 5.5). Table III summarizes the main results of our methods and the baselines on the two traffic datasets. For each dataset, the results can be divided into two parts: the first part is the result of the representative baselines and the second part gives the results of the six latest baselines with/without our method. The results show that integrated with our method, all the six latest baselines can be boosted effectively, which leads to significantly better performance than the representative baselines. For example, in predicting the speed of next 60 minutes on METR-LA, we can decrease the MAPE loss of GTS [20] by 28.2%. \begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Methods} & \multirow{2}{*}{Venue} & \multicolumn{3}{c}{METR-LA} \\ \cline{3-5} & & 15 min & 30 min & 60 min \\ \hline METR-LA & 34272 & 207 & 5min & 12 & 12 \\ PEMS-BAY & 52116 & 325 & 5min & 12 & 12 \\ \hline \hline \end{tabular} \end{table} TABLE II: Statistics of datasets. 
\begin{table} \begin{tabular}{l c|c c|c c} \hline \hline \multirow{2}{*}{Methods} & \multirow{2}{*}{Venue} & \multicolumn{3}{c}{METR-LA} \\ \cline{3-5} & & 15 min & 30 min & 60 min \\ \hline & MAE / RMSE / MAPE & MAE / RMSE / MAPE & MAE / RMSE / MAPE \\ \hline FC-LSTM & - & 3.44 / 6.30 / 9.60 & 3.77 / 7.23 / 10.90 & 4.37 / 8.69 / 13.20 \\ WaveNet [16] & 2016 ICLR & 2.99 / 5.89 / 8.04 & 3.59 / 7.28 / 10.25 & 4.45 / 8.93 / 13.62 \\ STGCN [30] & 2017 IJCAI & 2.88 / 5.74 / 7.62 & 3.47 / 7.24 / 9.570 & 4.59 / 9.40 / 12.70 \\ ST-MetaNet [17] & 2019 KDD & 2.69 / 5.17 / 6.91 & 3.10 / 6.28 / 8.57 & 3.59 / 7.52 / 10.63 \\ LDS [7] & 2019 ICML & 2.75 / 5.35 / 7.10 & 3.14 / 6.45 / 8.60 & 3.63 / 7.67 / 10.34 \\ AGCRN [1] & 2020 NeuIPS & 2.87 / 5.58 / 7.70 & 3.23 / 6.58 / 9.000 & 3.62 / 7.51 / 10.38 \\ GMAN [32] & 2020 AAAI & 2.80 / 5.55 / 7.41 & 3.12 / 6.49 / 8.730 & 3.44 / 7.35 / 10.07 \\ \hline DCRNN [13] & 2018 ICLR & 2.77 / 5.38 / 7.30 & 3.15 / 6.45 / 8.80 & 3.60 / 7.59 / 10.50 \\ DCRNN \(w\) / _Ours (CompFormer)_ & - & **2.56 / 5.15 / 6.60** & **2.98 / 6.34** / 8.01 & **3.53 / 7.34 / 9.610** \\ GWN [26] & 2019 IJCAI & 2.71 / 5.16 / 7.11 & 3.11 / 6.24 / 8.54 & 3.56 / 7.31 / 7.10 \\ GWN \(w\) / _Ours (CompFormer)_ & - & **2.67 / 5.09 / 6.83** & **3.03 / 6.07 / 8.19** & **3.43 / 7.06 / 9.680** \\ MTGNN [25] & 2020 KDD & 2.69 / 5.18 / 6.86 & 3.05 / 6.17 / 8.19 & 3.49 / 7.23 / 9.87 \\ MTGNN \(w\) / _Ours (CompFormer)_ & - & **2.65 / 5.11 / 6.78** & **3.00 / 6.07 / 8.11** & **3.40 / 7.09 / 9.65** \\ GTS [20] & 2021 ICLR & 2.92 / 5.91 / 7.70 & 3.51 / 7.29 / 10.10 & 4.28 / 8.87 / 13.10 \\ GTS \(w\) / _Ours (CompFormer)_ & - & **2.62 / 5.18 / 6.70** & **3.00 / 6.22 / 7.90** & 3.39 / 7.26 / **9.400** \\ DGCRN [11] & 2021 TKDE & 2.64 / 5.07 / 6.68 & 3.02 / 6.13 / 8.05 & 3.49 / 7.33 / 9.72 \\ DGCRN \(w\) / _Ours (CompFormer)_ & - & **2.59 / 5.00 / 6.58** & **2.96 / 6.05 / 7.89** & **3.38 / 7.12 / 9.51** \\ STEP [21] & 2022 KDD & 2.61 / 4.98 / 6.60 & 2.96 / 5.97 / 7.96 & 3.37 / 6.99 / 9.61 \\ STEP \(w\) / _Ours (CompFormer)_ & - & **2.58 / 4.94 / 6.47** & **2.85 / 5.82 / 7.80** & **3.27 / 6.76 / 9.46** \\ \hline \hline \end{tabular} \begin{tabular}{l c|c c|c c} \hline \hline \multirow{2}{*}{Methods} & \multirow{2}{*}{Venue} & \multicolumn{3}{c}{15 min} & 30 min & 60 min \\ \cline{3-5} & & MAE / RMSE / MAPE & MAE / RMSE / MAPE & MAE / RMSE / MAPE \\ \hline FC-LSTM & - & 2.05 / 4.19 / 4.80 & 2.20 / 4.55 / 5.20 & 2.37 / 4.96 / 5.70 \\ WaveNet [16] & 2016 ICLR & 1.39 / 3.01 / 2.91 & 1.83 / 4.21 / 4.16 & 2.35 / 5.43 / 5.87 \\ STGCN [30] & 2017 IJCAI & 1.36 / 2.96 / 2.90 & 1.81 / 4.27 / 4.17 & 2.49 / 5.69 / 5.79 \\ AGCRN [1] & 2020 NeuIPS & 1.37 / 2.87 / 2.94 & 1.69 / 3.85 / 3.87 & 1.96 / 4.54 / 4.64 \\ GMAN [32] & 2020 AAAI & 1.34 / 2.91 / 2.86 & 1.63 / 3.76 / 3.68 & 1.86 / 4.32 / 4.37 \\ \hline DCRNN [13] & 2018 ICLR & 1.38 / 2.95 / 2.90 & 1.74 / 3.97 / 3.90 & 2.07 / 4.74 / 4.90 \\ DCRNN \(w\) / _Ours (CompFormer)_ & - & **1.32 / 2.78 / 2.76** & **1.65 / 3.73 / 3.64** & **1.95 / 4.49 / 4.55** \\ GWN [26] & 2019 IJCAI & 1.33 / 2.75 / 2.71 & 1.65 / 3.75 / 3.73 & 2.00 / 4.63 / 4.77 \\ GWN \(w\) / _Ours (CompFormer)_ & - & **1.31 / 2.73 / 2.70** & **1.60 / 3.63 / 3.62** & **1.92 / 4.38 / 4.48** \\ MTGNN [25] & 2020 KDD & 1.34 / 2.83 / 2.84 & 1.66 / 3.79 / 3.77 & 1.95 / 4.52 / 4.64 \\ MTGNN [25] & 1.32 / **2.78 / 2.76** & **1.64 / 3.68 / 3.65** & **1.92 / 4.38 / 4.44** \\ GTS [20]* & 2021 ICLR & 1.34 / 2.83 / 2.86 & 1.67 / 3.78 / 3.83 & 1.95 / 4.45 / 4.67 \\ GTS \(w\) / _Ours (CompFormer)_ & - & **1.31 / 2.80 / 2.79** & 
**1.62 / **3.70 / 3.75** & **1.87 / 4.33 / 4.52** \\ DGCRN [11] & 2021 TKDE & 1.30 / 2.73 / 2.71 & 1. Moreover, from these results, we can see that the loss decreasing achieved in predicting the speed of next 60 minutes is much larger than those of 15 and 30 minutes. To be precise, on PEMS-BAY, we can decrease the MAPE loss of DCRNN [13] by up to 7.14% in predicting the speeds of next 60 minutes. This implies that our method can achieve much more significant improvement if we predict the future for more steps, whose advantage comes from our test-time compensated representation learning framework in prediction. \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{3}{c}{15 min} & \multicolumn{3}{c}{30 min} & \multicolumn{3}{c}{60 min} \\ \cline{2-10} & MAE & RMSE & MAPE & MAE & RMSE & MAPE & MAE & RMSE & MAPE \\ \hline MTGNN \(w\) / _add_ & 2.70 & 5.22 & 7.07\% & 3.04 & 6.19 & 8.37\% & 3.43 & 7.18 & 9.86\% \\ MTGNN \(w\) / _cat_ & **2.65** & **5.11** & **6.78\%** & **3.00** & **6.07** & **8.11\%** & **3.40** & **7.09** & **9.65\%** \\ \hline DCRRN \(w\) / _add_ & 2.65 & 5.17 & 6.76\% & 3.00 & 6.15 & 8.18\% & 3.42 & 7.25 & 9.76\% \\ DGCRN \(w\) / _cat_ & **2.59** & **5.00** & **6.58\%** & **2.96** & **6.05** & **7.89\%** & **3.38** & **7.12** & **9.51\%** \\ \hline \hline \end{tabular} \end{table} TABLE IV: The experimental results of MTGNN [25] and DGCRN [11] on METR-LA dataset. \begin{table} \begin{tabular}{l c c c|c c c c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{3}{c}{15 min} & \multicolumn{3}{c}{30 min} & \multicolumn{3}{c}{60 min} \\ \cline{2-10} & MAE & RMSE & MAPE & MAE & RMSE & MAPE & MAE & RMSE & MAPE \\ \hline MTGNN \(w\) / _add_ & 2.70 & 5.22 & 7.07\% & 3.04 & 6.19 & 8.37\% & 3.43 & 7.18 & 9.86\% \\ MTGNN \(w\) / _cat_ & **2.65** & **5.11** & **6.78\%** & **3.00** & **6.07** & **8.11\%** & **3.40** & **7.09** & **9.65\%** \\ \hline DGCRN \(w\) / _add_ & 2.65 & 5.17 & 6.76\% & 3.00 & 6.15 & 8.18\% & 3.42 & 7.25 & 9.76\% \\ DGCRN \(w\) / _cat_ & **2.59** & **5.00** & **6.58\%** & **2.96** & **6.05** & **7.89\%** & **3.38** & **7.12** & **9.51\%** \\ \hline \hline \end{tabular} \end{table} TABLE V: Analysis of hyper-parameter \(R\). The experiments are constructed with GWN [26] on METR-LA dataset. Fig. 9: The visualization of performance of our method compared with GWN [26] in the extreme events on METR-LA dataset. It shows that when the events become more extreme, the superiority of our method becomes more significant. ### _Superiority in Extreme Events_ Figure 8 present the relationship between the effect of our ComFormer module and the extremeness. We can see that the magnitude of the attention output rapidly increases when the number of zero-valued speed or the entropy of the input becomes larger. This implies that the attention module extracts much more auxiliary information from the periodic data bank to assist prediction and plays a more important role in these extreme events. Figure 7 presents the relationship among the improvement achieved by our framework, which is measured by the gained loss on GWN [26] and DGCRN [11], the effect of our modules and the extremeness. The consistent shapes of these curves demonstrate that in the events with higher degree of extremeness, our modules can achieve more significant improvement in prediction as the loss gap is much larger. The results above consistently verify that our modules can boost the baselines in face of the extreme events. 
Figure 9 presents the prediction results of GWN [26] with and without our attention module on events with various degrees of extremeness. It shows that when the extremeness is low, the improvement achieved by our module is limited, while for the extreme events the improvement is much more significant. As shown in Figure 10, when predicting vehicle speeds under complex road conditions, our proposed modules can significantly improve the prediction performance. ### _Ablation Study_ In this section, we perform ablation studies to examine the impact of different settings on our proposed framework. Fig. 10: The prediction performance of our method compared with GWN [26] on the METR-LA dataset. Following previous works [13], the target zero-valued speeds are not evaluated. #### 5.6.1 Effect of concatenated compensation Our method concatenates, rather than adds, the spatial attention output to the original input. As shown in Table IV, the concatenated compensated features outperform the additive approach on both MTGNN [25] and DGCRN [11]. #### 5.6.2 Effect of hyper-parameter R The results presented in Table V indicate that our module is not sensitive to the value of \(R\), and it can achieve good performance even when we sample only 5 series. This allows us to reduce the computational complexity of attention via uniform sampling, which is made possible by our well-constructed periodic data bank that utilizes spatio-temporal decomposition. ## 6 Conclusion In this paper, we propose a framework for test-time compensated representation learning to improve the performance of existing traffic forecasting models under extreme events. The key idea is that traffic patterns exhibit obvious periodic characteristics, so we can store them separately in a periodic data bank and extract valuable information at a lower cost. This enables the model to learn useful information from the periodic data bank via a spatial transformer to compensate for the unreliable recent observations during extreme events. The proposed CompFormer only needs to focus on the spatial dimension and transfers the compensated representation via spatial attention scores. The experimental results demonstrate the effectiveness of our method in improving the performance of existing strong baselines.
Traffic forecasting is a challenging task because of the complex spatio-temporal correlations among traffic series. In this paper, we identified an underexplored problem in multivariate traffic series prediction: extreme events. Road congestion and rush hours can reduce the correlation of vehicle speeds at different intersections in adjacent time periods. Existing methods generally predict future series based on recent observations and completely set aside the training data during the testing phase, so they are unreliable for forecasting highly nonlinear multivariate time series. To address this problem, we proposed a test-time compensated representation learning framework that combines a spatio-temporal decomposed data bank with a multi-head spatial transformer model (CompFormer). The former explicitly separates all training data along the temporal dimension according to periodicity characteristics, while the latter, through a spatial attention matrix, connects recent observations and
2309.16154
Magnetic Field Structure of the Crab Pulsar Wind Nebula Revealed with IXPE
We report a detailed study of the magnetic-field structure of the Crab pulsar wind nebula, using the X-ray polarization data in 2--8~keV obtained with the Imaging X-ray Polarimetry Explorer. Contamination of the pulsar emission to the data of the nebula region was removed through application of a stringent pulsation phase-cut, extracting a phase range of 0.7--1.0 only. We found that the electric field vector polarization angle (PA) was about $130^{\circ}$ from north to east with the polarization degree (PD) of about 25\% at the pulsar position, indicating that the direction of the toroidal magnetic field is perpendicular to the pulsar spin axis in the region close to the termination shock. The PA gradually deviated from the angle as an increasing function of the distance from the pulsar. There was a region of a low PD to the west of the X-ray torus. Although such a region is expected to be located at the torus edge, where geometrical depolarization due to a steep spatial variation of the PA is expected, the observed low-PD region positionally deviated from the edge. We found that the region of low PD positionally coincided with a dense filament seen in the optical band, and conjecture that the low-PD region may be produced through deflection of the pulsar wind. By comparing the values of the PD at the pulsar position between the data and a model, in which toroidal and turbulent magnetic fields were considered, we estimated the fractional energy of the turbulent magnetic field to be about $2/3$ of the total. We also evaluated a potential polarization of the northern jet in the nebula and derived the PD and PA to be about $30\%$ and $120^{\circ}$, respectively.
T. Mizuno, H. Ohno, E. Watanabe, N. Bucciantini, S. Gunji, S. Shibata, P. Slane, W. C. Weisskopf
2023-09-28T04:04:35
http://arxiv.org/abs/2309.16154v1
# Magnetic Field Structure of the Crab Pulsar Wind Nebula Revealed with IXPE ###### Abstract We report a detailed study of the magnetic-field structure of the Crab pulsar wind nebula, using the X-ray polarization data in 2-8 keV obtained with the Imaging X-ray Polarimetry Explorer. Contamination of the pulsar emission to the data of the nebula region was removed through application of a stringent pulsation phase-cut, extracting a phase range of 0.7-1.0 only. We found that the electric field vector polarization angle (PA) was about \(130^{\circ}\) from north to east with the polarization degree (PD) of about 25% at the pulsar position, indicating that the direction of the toroidal magnetic field is perpendicular to the pulsar spin axis in the region close to the termination shock. The PA gradually deviated from the angle as an increasing function of the distance from the pulsar. There was a region of a low PD to the west of the X-ray torus. Although such a region is expected to be located at the torus edge, where geometrical depolarization due to a steep spatial variation of the PA is expected, the observed low-PD region positionally deviated from the edge. We found that the region of low PD positionally coincided with a dense filament seen in the optical band, and conjecture that the low-PD region may be produced through deflection of the pulsar wind. By comparing the values of the PD at the pulsar position between the data and a model, in which toroidal and turbulent magnetic fields were considered, we estimated the fractional energy of the turbulent magnetic field to be about \(2/3\) of the total. We also evaluated a potential polarization of the northern jet in the nebula and derived the PD and PA to be about \(30\%\) and \(120^{\circ}\), respectively. X-rays:individual (Crab nebula) -- magnetic field -- polarization + Footnote †: journal: Physics Letters B ## 1 Introduction The Crab Nebula complex originated from a supernova in 1054 (SN 1054). It consists of a pulsar (PSR) (PSR B0531+21 or PSR J0534+2200) and a pulsar wind nebula (PWN) powered by the PSR. Its short distance of about 2 kpc from the Earth and the high power of the PSR (spin-down luminosity of \(\sim\)\(5\times 10^{38}\) erg s\({}^{-1}\)) render the Crab Nebula the apparently brightest PWN at most wavelengths. It has been observed across all wavelengths from the radio, optical, X-ray, to \(\gamma\)-ray bands in imaging, photometry, spectroscopy, and polarimetry (see for a review, e.g., Hester 2008 and Buhler & Blandford 2014). The Crab PWN has a broadband non-thermal spectrum from radio to \(\gamma\)-rays from synchrotron emission from accelerated electrons (and positrons) up to a few hundred MeV and from inverse Compton emission above this energy. Therefore, it is one of the best targets to study the physics of the relativistic outflows (the typical Lorentz factor of the pulsar wind can be as high as \(\sim\)\(10^{6}\)) and particle acceleration. Whereas the apparent size of the Crab PWN is \(6^{{}^{\prime}}\times 4^{{}^{\prime}}\) in the optical band, it is roughly half of this in the X-ray band, presumably due to the strong synchrotron cooling of high-energy electrons. Chandra observations (Weisskopf et al. 2000) revealed a set of well-developed axisymmetric structures known as jets, inner ring, and (outer) torus, and collectively referred as "jet-torus". 
The ring and torus are presumably produced by the combination of the nebula-wide toroidal magnetic field injected by the pulsar wind and the flow pattern downstream of the wind's termination shock (which is considered to be traced by the inner ring). Polarization measurements are key to understanding the magnetic-field structure and thus the interaction of the pulsar wind with the ambient medium. In particular, particles responsible for X-ray and \(\gamma\)-ray emission suffer from severe synchrotron cooling (in a given magnetic field the cooling time is inversely proportional to the electron energy), hence X-ray and \(\gamma\)-ray polarimetry is a valuable probe of the magnetic-field structure close to where particles are accelerated. High-resolution optical polarimetry observations of the Crab PWN revealed high polarizations associated with a few features known as "wisps" and a "knot" close to the PSR on top of a nebula-wide polarization pattern that broadly runs in the east-west direction (Moran et al., 2013, Hester, 2008). In X-rays, OSO-8 detected significant off-pulse (OP) polarizations with a polarization degree (PD) of \(\sim\)20% and electric field vector polarization angle (PA) of \(\sim\)155\({}^{\circ}\) (measured on the plane of the sky from north to east) in soft X-rays (\(\leq\)10 keV), establishing that synchrotron processes dominate the emission (Weisskopf et al., 1978). Subsequent studies in soft/hard X-ray and \(\gamma\)-ray bands reported similar PDs in the OP phase (Feng et al., 2020, Chauvin et al., 2017, Vadawale et al., 2018, Dean et al., 2008, Forot et al., 2008). Although these measurements could in principle provide vital information about the magnetic field properties close to where the particle acceleration occurs, their detections were marginal (\(\leq\)4\(\sigma\) level) except for the one with OSO-8, and the results lacked any spatial information. In 2022, the Crab PSR and PWN were observed by the Imaging X-ray Polarimetry Explorer, IXPE (Bucciantini et al., 2023), the first mission devoted to spatially-resolved polarization measurements in X-rays (Weisskopf et al., 2022). The IXPE observation provided much better polarization measurements of the Crab PSR and PWN than any of the past X-ray observatories. Bucciantini et al. (2023) reported with IXPE the first X-ray detection of a significant polarization with a PD of \(\sim\)15% from the PSR emission only in the core of the main pulse, while the total pulsed emission of the PSR was consistent with being unpolarized. The strong upper limit of PD \(\sim\)6% at the 99% confidence level is inferred from the reported Stokes parameters. They also revealed the first X-ray polarization map of the PWN, which shows the toroidal magnetic field structure close to the PSR position. The PD distribution was highly asymmetric about the projected torus axis. Here we report the results of in-depth analyses of the IXPE observation data of the Crab PWN. Whereas the initial results of the Crab PWN (and PSR) observation were reported by Bucciantini et al. (2023), detailed studies of the nebula's magnetic field properties were deferred to subsequent works. For example, how the magnetic field direction and turbulence develop in the nebula and their relation to known structures are yet to be examined in detail. Here we focus on the PWN and investigate the polarization (magnetic field) properties in detail. This paper is structured as follows. Section 2 describes the observations and data reduction.
Then we describe our polarization analysis in section 3 and discuss the obtained polarization properties in section 4. Finally, the summary is given in section 5. ## 2 Observations and Data Reduction IXPE was launched on 2021 December 9. Since then, it has provided us with new insight into almost all classes of X-ray objects (PWNe/PSRs, black-hole binaries, active galactic nuclei, etc.), thanks to its advanced capabilities in imaging, photometry, spectroscopy, and polarimetry. IXPE consists of three polarization-sensitive gas-pixel detectors (Costa et al. 2001, Baldini et al. 2021), placed at the focal planes of three sets of Wolter-1 mirror module assembly (Soffitia et al. 2021). An X-ray photon focused by the mirror and absorbed by the gas in the detector ejects a photoelectron, most likely in the direction of the electric vector, and subsequently produces a charge cloud. The PD and PA of a polarized source are determined from the angular distribution of the tracks made by the initial photoelectron. Each pair of the mirror and detector is named detector unit (DU) 1, 2, or 3. The mirrors give angular resolutions of \(\leq\)30\({}^{{}^{\prime\prime}}\) in half-power diameter (HPD) and a field of view of a \(12\farcm 9\times 12\farcm 9\) square, which are adequate for spatially resolving the Crab PWN. The mirrors have an on-axis effective area of 590 cm\({}^{2}\) at 4.5 keV (calibrated using Ti-K) for the three telescopes combined. The focal-plane detectors are sensitive to polarization in X-rays from 2 to 8 keV, and reduce the effective area to 26 cm\({}^{2}\) at 4.5 keV and give the peak effective area of 80 cm\({}^{2}\) at 2.3 keV (calibrated using Mo-L) if the detector quantum efficiency and event reconstruction efficiencies are taken into account. Although the net effective area is a decreasing function of energy and falls to 2.3 cm\({}^{2}\) at 8 keV, the polarization sensitivity improves with energy. The modulation factor \(\mu\), defined as the degree of modulation of the initial directions of the photoelectrons for a 100%-polarized source, is \(\sim\)15% at 2 keV and \(\geq\)50% at 8 keV. The Crab PSR/PWN was observed twice with IXPE in 2022 between February 21 and March 7 for a total on-source time of \(\sim\)92 ks. We also performed a simultaneous Chandra observation of the Crab (ObsID 23539). We used the same datasets as those in Bucciantini et al. (2023), where the following corrections to the publicly available level-2 event files of IXPE were applied: (1) the energy correction to compensate for the time-dependent charge to-energy conversion, using onboard calibration sources (Ferrazzoli et al., 2020), (2) a World Coordinate System (WCS) correction to account for the slight offset among the three DUs, (3) an aspect-solution correction to remove spurious offsets in the pointing solution (due to transitions between different star trackers activated in turn in orbit), and (4) the barycenter correction, using the most recent optical coordinates and the ICRS reference frame. For details of the corrections, see Bucciantini et al. (2023). To study the PWN, we also applied a pulse phase cut. Following Bucciantini et al. (2023), we defined the OP phase as 0.7-1.0 (with the main X-ray pulsar peak at phase 0.13). We also examined events in the OP plus bridge phase (phase range of 0.2-0.4), but found that the count rate and polarization properties of the PWN in the vicinity (\(\sim\)15") of the PSR were significantly affected by the contaminating signals from the PSR. 
We, therefore, analyzed only the data of the OP phase in the following analysis. ## 3 Data Analysis ### Polarization Map of the Crab PWN We analyzed the level-2 event list of IXPE with the corrections applied and the contamination of the PSR emission removed (see section 2) in the FITS format (Dietz et al., 2022). The reduced FITS file in a table format contains columns of values of the Stokes parameters \(q_{k}\equiv 2\cos 2\phi_{k}\) and \(u_{k}\equiv 2\sin 2\phi_{k}\), where \(\phi_{k}\) is the reconstructed photoelectron direction of the event number \(k\), in addition to columns of time, energy channel, and sky and detector coordinates commonly used in X-ray data. The event-by-event Stokes parameters can now be used to determine the polarization integrated over energy and/or position and/or pulse phase (see Kislat et al. (2015) and Vink & Zhou (2018)). Denoting the modulation factor and effective area as \(\mu_{k}\) and \(A_{k}\), respectively, the weighted Stokes parameters \(I\), \(Q\), and \(U\) are calculated as \(I=\Sigma 1/A_{k}\), \(Q=\Sigma q_{k}/\mu_{k}/A_{k}\), and \(U=\Sigma u_{k}/\mu_{k}/A_{k}\), respectively, and the variances, as \(V(Q)=\Sigma(1/\mu_{k})^{2}(1/A_{k})^{2}\cdot\Sigma q_{k}^{2}/\Sigma 1\) and \(V(U)=\Sigma(1/\mu_{k})^{2}(1/A_{k})^{2}\cdot\Sigma u_{k}^{2}/\Sigma 1\). Source polarization is calculated as \(\mathrm{PD}=\sqrt{Q^{2}+U^{2}}/I\) and \(\mathrm{PA}=\frac{1}{2}\arctan(U/Q)\), and their errors are estimated through the propagation of errors in \(Q\), \(U\), and \(I\). See the Appendix 1 for detail. In our formulations, all relevant parameters (\(I\), \(Q\), \(U\), \(V(Q)\), and \(V(U)\)) are additive quantities. Therefore, if we prepare these maps in fine resolution, we can always apply arbitrary binning and evaluate PD, PA, and the errors of each binned pixel. Our formulations are slightly different from those in the standard software ixpeobssim(Baldini et al., 2022), which is a simulation and analysis framework specifically developed for the IXPE data. While ixpeobssim calculates errors of Stokes parameters, PD, and PA adequately if the number of events is sufficiently large in each pixel, it does not allow flexible binning of fine-resolution maps. We, therefore, adopt our formulations in this study. Before beginning the detailed polarization analysis for the present work, we validated our formulations by comparing the result with the source polarization value (integrated over the ellipse that delineates the spine of the X-ray torus of Ng & Romani (2004); see figure 1) obtained with the ixpeobssim of version 28.4.0. As a result, we confirmed that the PD and PA derived with the two methods were almost identical, with the differences much smaller than the (already small) statistical errors; our formulations and ixpeobssim give PD of \((23.2\pm 0.6)\%\) and \((23.3\pm 0.6)\%\), respectively, and PA of \((137.7\pm 0.7)^{\circ}\) and \((137.6\pm 0.7)^{\circ}\), respectively. We began our analysis with maps of \(I\), \(Q\), \(U\), etc. in fine binning with a pixel size of \(2\farcs 6\times 2\farcs 6\). Before constructing the maps, we need to apply one more correction known as the leakage correction (Bucciantini et al., 2022). We adopted the linear formula for the expected leakage distribution described in Bucciantini et al. (2022), and calculated the leakage maps of normalized \(Q\) and \(U\) (denoted as \(Q_{\rm N,leak}\) and \(U_{\rm N,leak}\), respectively), using the Chandra image of the Crab (figure 1(a)). 
Then we derived the leakage-corrected Stokes parameters according to \(Q_{\rm lcor}=Q-I\cdot Q_{\rm N,leak}\) and \(U_{\rm lcor}=U-I\cdot U_{\rm N,leak}\). Figures 1(b) and (c) show the obtained leakage-corrected \(I\) and PD maps of the Crab PWN in 2-8 keV. There, we applied sliding-box smoothing with \(5\times 5\) pixels to increase the photon statistics while maintaining the original resolution, in order to investigate the polarization properties in detail. Although one may bin the maps into larger pixels, that procedure may introduce depolarization due to a mismatch between the pixel locations and the polarization distributions. We instead applied sliding-box smoothing to avoid such a potential drawback. For completeness, we also show maps with normal binning in Appendix 2. The toroidal magnetic field structure is clearly visible around the PSR in the figure, as reported by Bucciantini et al. (2023). In addition, we found a few notable properties of the Crab PWN polarization, as follows. 1. The magnetic-field direction slightly deviates from that of the torus major axis (see also Bucciantini et al., 2023); the tendency is particularly evident outside of the ellipse that delineates the torus but is already present inside. 2. There are high PD areas (PD\(\geq\)40%) in the north and south of the torus (already pointed out by Bucciantini et al., 2023). 3. Regions of very low PD (blue/black regions in figure 1(c)) are identified to the east and west of the torus. The low-PD area to the west of the torus does not coincide with the southwest edge of the torus. 4. In the north and south of the torus, moderately low PD regions (red regions in figure 1(c)) positionally coincide with the north/south jets seen in the Chandra image. ### Positional Dependence of Polarization Here we investigate the positional dependence of the polarization (the first point in the notable feature list at the end of section 3.1) in detail. First, we made PA profiles along the major and minor axes of the X-ray torus (figure 2) and found that the PA was \(\sim\)130\({}^{\circ}\) at the PSR position (i.e., the torus center), which is close to the direction of the projected torus axis (126\({}^{\circ}\); Ng & Romani (2004)), whereas the PA gradually deviated from it with increasing distance from the PSR position along the major axis. Then, we defined a small ellipse (region 1) and three concentric annular ellipses (regions 2-4, numbered for increasing radii) with equal areas, all centered at the PSR position; the ellipse delineating the spine of the X-ray torus by Ng & Romani (2004) is composed of regions 1, 2, and 3 (figure 1 (c)). We calculated the PD and PA in each region by integrating the pixel values inside each region without smoothing and summarize the results in figure 3. We found that the PA of the innermost region is the closest to the angle of the projected torus axis (which is presumably parallel to the PSR spin axis), and that the PD and PA are decreasing and increasing functions of the distance from the PSR, respectively. We thus conclude that the direction of the toroidal magnetic field in the Crab PWN is perpendicular to the PSR spin axis in the regions close to the termination shock and gradually deviates toward the east-west direction due to some environmental effects. While the physical origin of the magnetic-field deviation is not clear, one possibility is a mismatch between the explosion symmetry axis and the pulsar spin axis, as discussed by Hester (2008).
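For concreteness, the region-integrated polarization values discussed above can be obtained directly from the additive fine-binned maps of section 3.1. The following minimal NumPy sketch illustrates the computation; the array names (I, Q, U, VQ, VU, mask) are illustrative assumptions and are not columns or functions of the IXPE pipeline or ixpeobssim.

```python
import numpy as np

def region_polarization(I, Q, U, VQ, VU, mask):
    """Integrate additive Stokes maps over a region and return PD, PA and
    1-sigma errors, following the formulation of section 3.1 / Appendix 1.
    I, Q, U, VQ, VU are fine-binned 2D maps; mask is a boolean region map."""
    Isum = I[mask].sum()
    Qsum = Q[mask].sum()
    Usum = U[mask].sum()
    VQsum = VQ[mask].sum()
    VUsum = VU[mask].sum()

    pd = np.hypot(Qsum, Usum) / Isum
    # factor 1/2 because the Stokes parameters rotate with twice the angle;
    # the zero point and sense of PA follow from how Q and U are defined
    pa = 0.5 * np.degrees(np.arctan2(Usum, Qsum))

    # error propagation, neglecting the (much smaller) error on I
    dpd = pd * np.sqrt(Qsum**2 * VQsum + Usum**2 * VUsum) / (Qsum**2 + Usum**2)
    dpa = np.degrees(0.5 * dpd / pd)
    return pd, pa, dpd, dpa
```

Because all the quantities are additive, passing the elliptical-annulus masks of regions 1-4 to such a function yields the radial PD and PA trends summarized in figure 3.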
While the X-ray polarimetry probes the magnetic-field direction close to the particle acceleration sites, the position resolution of IXPE is limited. In this regard, optical polarimetry is complementary. Indeed, high-resolution optical polarimetry by Moran et al. (2013) gave similar results. While structures close to the pulsar, such as the knot and wisps, have a PA of \(\sim\)125\({}^{\circ}\), the rest of the inner nebula shows a PA distribution peaked at \(\sim\)165\({}^{\circ}\). Therefore, the two results provide a consistent picture of the development of the magnetic field in the Crab PWN. Figure 1: (a) Chandra image (ObsID 23539) of the Crab PWN (in units of counts s\({}^{-1}\) cm\({}^{-2}\)), and the (b) smoothed Stokes \(I\) map (in units of counts) and (c) PD map of the Crab PWN obtained with IXPE, all in 2–8 keV. Coordinates are given in right ascension (horizontal axis) and declination (vertical axis) at J2000.0. For the PD map, only pixels with a polarization detection significance of higher than \(3\sigma\) and MDP\({}_{99}<100\%\) or those with MDP\({}_{99}<20\%\) are shown, where MDP\({}_{99}\) is the minimum detectable polarization at a 99% confidence level. The PSR position (cross), the spine of the X-ray torus (thick-line ellipse; in black for panels a and b but in green for panel c), and its axes (dashed lines whose position angles are \(126^{\circ}\) and \(36^{\circ}\)) according to Ng & Romani (2004), together with a magenta circle of 26\({}^{\prime\prime}\) diameter (approximately the HPD of the IXPE image), are indicated in each panel for reference. The PD map (panel c) also shows the reconstructed magnetic-field vectors (perpendicular to the PA) by segments with a spacing of 5 pixels (\(=13^{\prime\prime}\)), where the lengths are proportional to the PD. Thin-line annular ellipses indicate the regions used in the data analysis to investigate the positional dependence of polarization. We note that Ng & Romani (2004) do not give the torus center position. Since the PSR position is very close (\(\leq 1^{\prime\prime}\)) to the center of the inner ring (Weisskopf et al. 2012), we assume that the torus center is located at the PSR position for simplicity. Fig. 2: PA as a function of the distance from the PSR along the (a) major axis and (b) minor axis, with sliding-box binning (\(5\times 5\) pixels) applied. The positive distances are defined as the northwest and southwest directions, respectively. Fig. 3: (a) PD and (b) PA of regions 1–4, where all the regions have equal areas; region 1 is the closest to the PSR, and regions 2–4 are increasingly farther. ## 4 Discussion ### Comparison with Past High-Energy Polarimetry X-ray and \(\gamma\)-ray polarimetry is a valuable probe into the magnetic-field structures of the Crab PWN (see Introduction). Figure 4 shows the results of this work for the inner region (region 1) and for the whole nebula integrated over the circle with a 2\(\farcm\)5-radius from Bucciantini et al. (2023). Also shown are reported results from other missions. While the IXPE energy range overlaps with that of OSO-8, the obtained PA for the whole nebula was different by \(\sim\)10\({}^{\circ}\). Bucciantini et al. (2023) argued that the apparent discrepancy of the PA could be due to the variability of the PWN, where structures are known to change in shape and location over a typical timescale of a few years (e.g., Schweizer et al., 2013). Our analysis of the same data as those used by Bucciantini et al.
(2023) confirms the difference in PA, and the argument by Bucciantini et al. (2023) about the cause of the discrepancy remains plausible. We note that figure 4 plots the PD and PA of the whole nebula except for region 1. Therefore, ignoring region 1, one can see that there is a strong trend that the PD is \(\sim\)20% in soft X-rays and gradually increases toward the high energy. In addition, the PA in soft X-rays is \(\sim\)20\({}^{\circ}\) offset toward the south from the direction of the projected torus axis, but gradually approaches the direction in the high-energy band. This seems reasonable as the size of the nebula is decreasing as the energy increases. Dean et al. (2008) and Forot et al. (2008) argued that such an energy dependence could be due to synchrotron cooling. Higher energy electrons have a shorter lifetime. As a result, the PA in \(\gamma\)-rays is expected to be perpendicular to the magnetic-field direction near the jet and/or termination shock, and accordingly to be parallel to the PSR spin axis and X-ray torus axis. The past observational data before IXPE, however, could not verify the hypothesis of Dean et al. (2008) and Forot et al. (2008) due to the lack of spatial information. The IXPE data, for the first time, revealed how the magnetic-field structure developed from the PSR position toward the outer part of the PWN; the PA is close to the direction of the projected torus axis at the PSR position and gradually deviates from the angle as a function of the distance from the PSR (figure 2). The PA of region 1 is consistent with those reported in the \(\gamma\)-ray band and close to the projected torus axis. Consequently, the IXPE data support the past speculation about how the magnetic field changes within the nebula, providing us with much richer information on the magnetic-field structures and thus much more solid observational evidence (see the following subsections). ### Comparison with Known Structures Figure 1 shows that the PD distribution is far from uniform with the level of asymmetry much stronger than the intensity. A steep spatial variation of the polarization within the HPD will produce low PD areas (e.g., Nakamura & Shibata 2007). Such a geometrical depolarization, however, should still produce a symmetric PD distribution about the projected torus axis, somewhat contrary to the observation. As discussed in Bucciantini et al. (2023), patchy development of the magnetic-field turbulence is a possible scenario for a non-uniform PD distribution. Alternatively the PD may be originally high at high latitudes and that the emission is accidentally depolarized due to environmental effects (e.g., local interaction with the ambient supernova ejecta). A 3D simulation of the Crab PWN by Porth et al. (2014) suggests that the toroidal magnetic field is dominant at high latitudes except for the regions along the PSR spin axis. The very high PD observed in the Vela PWN (Xie et al. 2022) is consistent with this scenario. A possible alternating magnetic field ("striped wind model") due to an oblique rotation of the PSR will dissipate during the propagation toward the termination shock (e.g., Nagata et al. 2008). Although this process will reduce the PD along the equatorial plane of the PSR, the PD at the high latitude will remain high. Therefore, we argue that the PD of the Crab PWN was originally high at the high latitude but reduces due to environmental effects. 
Figure 4: (a) PD and (b) PA measured with IXPE and obtained in past measurements in the X-ray and \(\gamma\)-ray bands. Note that the Integral IBIS result, which is not plotted for clarity, gives an even larger PD and a consistent PA compared with those by Integral SPI. A blue-shifted filament in the western part of the Crab PWN has a high column density of \(\sim\)10\({}^{21}\) cm\({}^{-2}\) (e.g., Mori et al. 2004, Martin et al. 2021). It runs nearly parallel to the projected torus axis on the plane of the sky, and positionally coincides with a very-low PD area in the western part of the Crab PWN1, as shown in figure 5(b). The filament may alter the magnetic-field direction and produce the region of low PD. In addition, another low-PD area is observed at the northeast edge of the torus (we interpret that it is also due to geometrical depolarization caused by the steep spatial variation of the PA). We also note that the low-PD area and the filament in the western part of the PWN are positionally close to the western "bay"-like structure at around right ascension \(\alpha=5^{\mathrm{h}}34^{\mathrm{m}}28^{\mathrm{s}}\) (J2000.0), where there is a sharp drop in X-ray emission, a feature of the Crab PWN commonly observed at all wavelengths from the radio to X-rays (e.g., Dubner et al. 2017 and Seward et al. 2006). Although the physical origin of the filament and the "bay" in the west of the torus is unknown, the toroidal magnetic field in the nebula flow seems to experience significant changes of direction due to the filament (see also figure 2(a)), and another low-PD area is produced there. Subsequently, the plasma is blocked from flowing further westward, which decreases the X-ray emission as observed. Footnote 1: According to figure 7 in Martin et al. (2021), we assume that the edges of the filament are situated at \((\alpha,\delta)_{J2000.0}=(83^{\circ}625,+22^{\circ}008)\) and (83\({}^{\circ}\)610, +22\({}^{\circ}\)015), where \(\alpha\) and \(\delta\) are right ascension and declination, respectively. Fig. 5: (a) Chandra image of the Crab PWN (in units of \(\mathrm{counts\ s^{-1}\ cm^{-2}}\)) and (b) the PD map of the Crab PWN obtained with IXPE, both in J2000.0 equatorial coordinates. They are basically the same as figures 1(a) and 1(c), respectively, but with the regions used to study the polarization of the northern jet (blue and black boxes for source and off-source regions, respectively) overlaid. See section 4.4 for details. In panel (b), the position of a dense filament, which is identified in the optical band and contributes to the X-ray absorption (Mori et al. 2004, Martin et al. 2021), is indicated by a cyan dashed line (section 4.2). ### Evaluation of Magnetic-Field Turbulence PWNe are bubbles of relativistic particles and magnetic fields produced through the interaction between the ultra-relativistic pulsar wind and the ambient medium. They have been extensively studied to understand the dynamics of the pulsar wind and associated particle acceleration. We aim to evaluate the magnetic-field turbulence of the Crab PWN by comparing the observed polarization properties with a dedicated simulation. Kennel & Coroniti (1984a) developed a pioneering 1D magnetohydrodynamic (MHD) model to describe the physics of the PWN, which has been widely accepted since. The standard physical picture according to the model is as follows. The ultra-relativistic wind produced by the pulsar slows down at the termination shock. There, the toroidal magnetic field is compressed, the plasma is heated, and particles are accelerated.
As the post-shock flow expands toward the nebula's edge, a bubble of high-energy particles and intense magnetic field is generated. The key parameter to characterize the process is the \(\sigma\)-parameter, the ratio of the magnetic energy flux to the kinetic energy flux immediately before the termination shock occurs. Kennel & Coroniti (1984a, 1984b) suggested \(\sigma\sim 0.003\) for the Crab PWN to reproduce the observed synchrotron luminosity and expansion velocity. With this value of \(\sigma\), however, the flow velocity of the medium is predicted to be non-relativistic. It hence cannot explain the apparent asymmetry in the X-ray surface brightness along the northwest-southeast axis, which presumably originates from Doppler boosting and relativistic aberration. In this regard, Mori et al. (2004) suggested \(\sigma\sim\)0.05 to produce the relativistic flow speed and reproduce the observed surface-brightness asymmetry. This much larger value of \(\sigma\), however, raises a different problem, as pointed out by Shibata et al. (2003); an intensity peak close to the termination shock, which should spatially coincide with the inner ring, is predicted to appear, contrary to the observed X-ray emission from the torus being much brighter than from the inner ring. There is also another problem with Kennel & Coroniti's model; i.e., a purely toroidal magnetic field assumed in the model predicts "lip-shaped" synchrotron emission to be generated as shown by Shibata et al. (2003), again contrary to the observed "ring-like" shape of the torus. In short, the model by Kennel & Coroniti (1984a) (hereafter referred to as "the KC model") does not fully agree with observation under the assumption of a purely toroidal field, regardless of the \(\sigma\) value. We note that magnetic field turbulence is not taken into account in the KC model, but it is also important and has been studied extensively (e.g., Luo et al. 2020). Indeed, Shibata et al. (2003) proposed a modified model from the KC model, introducing magnetic-field turbulence, which gives a solution to the problem mentioned above with the comparatively large \(\sigma\) of \(\sim\) 0.05 that is necessary to explain the surface-brightness asymmetry. They argued that an alternating magnetic field due to an oblique rotation of the PSR will cause the dissipation of the magnetic field through magnetic reconnection. As a result, beyond the termination shock, the flow will be decelerated, and the magnetic field will accumulate and be amplified. This means that \(\sigma\) is in effect reasonably small and hence the resultant luminosity intensity peak will explain the observation (see their figure 3). In addition, the magnetic-field turbulence can produce the annular-elliptical emission in agreement with the observation. The degree of the magnetic-field turbulence can be constrained by comparing the X-ray polarization data and the model. In this regard, Nakamura & Shibata (2007) developed a polarization-distribution model of the torus emission, assuming the canonical value of the \(\sigma\)-parameter (0.003) and a flow velocity of 0.2\(c\) (where \(c\) is the speed of light) to reproduce the observed north-south asymmetry in the torus brightness. They compared the predicted PD integrated over the entire nebula with the OSO-8 result (PD\(\sim\)20%) and found that the energy of the turbulent magnetic field was \(\sim\)60% of the total magnetic-field energy for the randomness parameter, \(b\), of 0.6 in their definition. More recently, Bucciantini et al. 
(2017) developed a similar model in which magnetic-field turbulence was considered and the inner ring and jet were taken into account in addition to the torus. Their result of the degree of turbulence based on the OSO-8 result was very similar to that by Nakamura & Shibata (2007). Their best estimate of the magnetic-field fluctuation parameter was 0.7, giving a ratio of energies of the turbulent and toroidal magnetic field of 3:2. These pioneering works are based on the integrated PD and hence are subject to uncertainty. We therefore carried out a new model calculation, convolved with the IXPE responses (effective area, energy resolution, spatial resolution, and modulation response), and directly compared the result with the IXPE observation to evaluate the magnetic-field turbulence. In this work, we do not adopt 2D and 3D MHD simulations in the model because recently reported results with 2D and 3D MHD simulations of the PWN (e.g., Del Zanna et al. 2006 and Porth et al. 2014) did yield broadly symmetric magnetic-field structures, in disagreement with the observation, although they successfully reproduced more detailed structures such as jets, wisps, and a knot. Instead, we develop a phenomenological model with aid of the (1D) KC model in the manner described below. We aim to reproduce the overall trend of the observed properties of the X-ray torus where non-axisymmetry of the PD is less prominent than outside and evaluate the magnetic-field turbulence therein. 1. The Stokes parameters in the flow frame and the observer frame are calculated in the manner described in Nakamura & Shibata (2007). 2. The PWN is modeled with a simple equatorial wedge (see figure 1 of Shibata et al. 2003), parameterized with the inner radius \(r_{\rm s}\), outer radius \(r_{\rm n}\), and semi-opening angle \(\theta_{0}\). Specifically, we assume the following values: * \(r_{\rm s}=0.1\) pc to match the observed inner ring, * \(r_{\rm n}=0.6\) pc to reproduce the size of the X-ray torus, * \(\theta_{0}\) is \(\pm 10^{\circ}\) to match the observed X-ray structure, which is not spherically symmetric but torus-like, * flow velocity \(v=0.2c\) to reproduce the observed north-south asymmetry of the X-ray torus, and * wedge position and inclination angles of \(126.3^{\circ}\) and \(63.0^{\circ}\), respectively, taken from Ng & Romani (2004). 3. The magnetic field is assumed to consist of a toroidal component and a turbulent (isotropic) one in the flow frame with \(b=0.6\) according to Nakamura & Shibata (2007) and Bucciantini et al. (2017). 4. The radial profile of the magnetic field distribution is assumed to follow the KC model prediction under an assumption of a canonical value of \(\sigma=0.003\). As in Nakamura & Shibata (2007), the pulsar wind luminosity and wind Lorentz factor are assumed to be \(5\times 10^{38}\) erg s\({}^{-1}\) and \(3\times 10^{6}\), respectively. 5. The electron distribution within the PWN wedge is assumed to be uniform with a power-law index of 3, giving the index 2 for the synchrotron emission that approximates the observed spectrum in soft X-rays (e.g., Toor & Seward 1974, Mori et al. 2004). We calculate the observed synchrotron emission, taking into account the flow velocity and the magnetic field described above. Figure 6 shows the obtained count and PD maps convolved with the IXPE response. The PD map shows depolarization at the edges along the torus major axis, and the reconstructed magnetic-field direction is perpendicular to the projected torus axis. 
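For reference, the scaling used in the comparison below can be written out explicitly. Under the assumption, following Nakamura & Shibata (2007), that the integrated PD decreases linearly with the randomness parameter,
\[\mathrm{PD}(b)\approx(1-b)\,\mathrm{PD}(b=0)\quad\Rightarrow\quad b_{\rm data}\approx 1-(1-b_{\rm model})\,\frac{\mathrm{PD}_{\rm data}}{\mathrm{PD}_{\rm model}},\]
where \(b_{\rm model}=0.6\) is the value assumed in the simulation above and \(\mathrm{PD}_{\rm model}\) is the corresponding model prediction; this merely restates the \((1-b)\) scaling applied in the next paragraph.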
Figure 7 summarizes the comparisons of the count and PD distributions between the data and model. Here, since the PSF size of IXPE is moderate, the obtained PD and PA are inevitably affected by high-PD areas outside the torus and the low-PD area in the western part of the PWN. In addition, as described in section 3.2, the magnetic field structure away from the termination shock is likely to be affected by environmental effects. Hence, we compare the PD between the data and model at the PSR position, where the contamination from the high/low PD areas is minimal. The obtained values of PD are \((27.0\pm 2.0)\%\) and \(33.5\%\), respectively. With \((1-b)\) scaling of the PD as in Nakamura & Shibata (2007), we obtained \(b=0.68\) as our best estimate for the magnetic-field turbulence there. Although the observed data and model broadly agree, a closer look identifies some differences; the model gives a wider distribution along the major axis and the peak position closer to the PSR along the minor axis. By reducing \(r_{\rm n}\) by 0.1 pc, we obtain a better agreement in the count profiles along the major axis, but the agreement worsens along the minor axis. Increasing \(r_{\rm n}\) gives an opposite trend. Throughout these procedures, the model PD and PA are hardly affected (the difference is much smaller than the statistical error of the data). In consequence, our estimate of the magnetic-field turbulence (parameter \(b\)) is robust against fine-tuning of the torus shape, and it indicates that the turbulent magnetic field carries about 2/3 of the total magnetic-field energy close to the termination shock. We also note that the observed large non-axisymmetry of the PD distribution (distribution of magnetic-field turbulence), particularly outside the X-ray torus, does not agree with predictions by 2D and 3D MHD simulations. The IXPE results thus require further development of such theoretical works. For example, it is important to run longer 3D simulations and examine how the magnetic-field dissipation develops in the Crab nebula, as pointed out by Porth et al. (2014). ### Polarization Properties of Jets As described in section 3.1, moderately low PD areas are seen at high latitudes along the axis of the jets seen in the Chandra image. The area corresponding to the southern jet is sandwiched between high PD areas. The same would be true for the northern jet, if the PD of an area around the dense filament were high (see also section 4.2). We speculate that the jets have different PA and/or lower PD than regions outside the PSR spin axis and produce these moderately low PD areas. To investigate the possibility, we evaluate the polarization of the northern jet using source and off-source regions along a line parallel to the major axis of the X-ray torus by Ng & Romani (2004), as shown in figure 5. The line is \(50^{"}\) away from the torus major axis, and the source region (jet plus foreground and background emission along the line of sight) is located on the projected torus axis. The off-source region is defined in a high PD area \(26.\!^{\prime\prime}3\) away from the source toward the northeast along the line, where the offset value is comparable to the HPD. The spatial distribution of the torus-like emission at the high latitude, which is the contamination to the jet emission in the source region, is unknown.
We examined the count rate profile of the Chandra data along the line 30" away from the torus major axis, where the contribution from the jet is small, and found that it is uniform within \(\pm\)15% in the regions \(\leq\)30" toward the northeast from the projected torus axis. Hence, we consider that the off-source region we defined is adequate to evaluate the polarization properties of the jet. We also tested two other off-source regions, \(21.\!^{\prime\prime}9\) and \(30.\!^{\prime\prime}7\) away from the source region, to evaluate the systematic uncertainty. The size of each region is \(5\times 5\) pixels. We calculate the background-subtracted Stokes \(I\), \(Q\), and \(U\) and derive the PD and PA. Table 1 tabulates the derived values of PD and PA. We also attempted a similar analysis to evaluate the potential polarization in the southern jet, but found the source-to-background count ratio to be small (\(\leq\)1.15), yielding a meaninglessly large \(1\sigma\) error on the PD of \(\geq\)40%. To summarize, we found that the northern jet should have a PD of \(\sim\)30% and a PA of \(\sim\)120\({}^{\circ}\), with a detection significance of \(\sim\)2\(\sigma\), in order to explain the lower polarization along the projected torus axis. In comparison, the northern high-PD area gives PD\(\sim\)45% and PA\(\sim\)160\({}^{\circ}\). Therefore, although the systematic and statistical errors are large, the estimated PA of the jet is roughly parallel to the observed direction of the jet, i.e., the magnetic field direction is perpendicular to it, as expected in the situation where the magnetic field is compressed by the jet. This possible superposition of polarized radiation from the jet, with different properties than the regions outside the PSR spin axis, may produce the moderately low PD areas through depolarization of the (originally) higher PD and different PA along the line of sight at the high latitude. Figure 6: Smoothed (a) Stokes \(I\) map (in units of counts) and (b) PD map of the simulated Crab PWN convolved with the IXPE response in 2–8 keV, both in J2000.0 equatorial coordinates. Like figure 1(c), only the pixels with a significance of more than 3\(\sigma\) and MDP\({}_{99}<100\%\) or those with MDP\({}_{99}<20\%\) are shown in the PD map. The PSR position, the spine of the X-ray torus, and its axes (Ng & Romani 2004) are indicated for reference. The reconstructed magnetic field directions are overlaid as cyan segments in the PD map. \begin{table} \begin{tabular}{c c c c} \hline & BG1 (closest) & BG2 & BG3 (farthest) \\ \hline \hline PD & \((38.8\pm 20.5)\%\) & \((26.9\pm 10.8)\%\) & \((18.4\pm 9.1)\%\) \\ PA (deg) & \(117\pm 15\) & \(122\pm 11\) & \(134\pm 14\) \\ S/BG ratio & \(1.25\) & \(1.59\) & \(1.75\) \\ \hline \end{tabular} \end{table} Table 1: Estimated polarization of the northern jet with three separate off-source (BG) regions. Figure 7: Comparison of the observed and simulated profiles of the Crab PWN along the (upper row) major and (lower row) minor axes. The profiles of the count and PD are shown in the left and right panels, respectively. The vertical scales for the simulation are adjusted arbitrarily in the count profiles. See text for details of the simulation. ## 5 Summary We carried out a detailed analysis of the spatially-resolved X-ray polarization data of the Crab PWN obtained with IXPE, following the initial analysis report by Bucciantini et al. (2023). We investigated how the polarization properties develop, depending on the distance from the PSR.
We found that the reconstructed magnetic field was parallel to the pulsar wind in the inner region close to the PSR and then it gradually deviated to the east-west direction as a function of the distance from the PSR. We also found that a low-PD area to the west of the X-ray torus is not due to the steep spatial variation of the PA. Instead, the presence of a dense filament seen in the optical band indicates that the flow of the pulsar wind is deflected and this produces variations in the magnetic field direction leading to the low-PD region. We employed a phenomenological model of the X-ray torus with simplistic toroidal and turbulent magnetic fields and obtained the randomness parameter \(b\) to be \(\sim\)2/3 through comparison between the data and model. We also investigated possible jet polarizations. Although the errors are large, the northern jet seems to have a PD of \(\sim\)30%, which is much lower than the high PD observed at the high latitude and PA of \(\sim\)120\({}^{\circ}\), which differs from the PA at the high latitude by \(\sim\)40\({}^{\circ}\). ## Acknowledgments The Imaging X-ray Polarimetry Explorer (IXPE) is a joint US and Italian mission. The US contribution is supported by the National Aeronautics and Space Administration (NASA) and led and managed by its Marshall Space Flight Center (MSFC), with industry partner Ball Aerospace (contract NNM15AA18C). The Italian contribution is supported by the Italian Space Agency (Agenzia Spaziale Italiana, ASI) through contract ASI-OHBI-2017-12-I.0, agreements ASI-INAF-2017-12-H0 and ASI-INFN-2017.13-H0, and its Space Science Data Center (SSDC) with agreements ASI-INAF-2022-14-HH.0 and ASI-INFN 2021-43-HH.0, and by the Istituto Nazionale di Astrofisica (INAF) and the Istituto Nazionale di Fisica Nucleare (INFN) in Italy. This research used data products provided by the IXPE Team (MSFC, SSDC, INAF, and INFN) and distributed with additional software tools by the High-Energy Astrophysics Science Archive Research Center (HEASARC), at NASA Goddard Space Flight Center (GSFC). This work was supported by JSPS KAKENHI Grant Number 23H01186 (T.M.), 22K14068 (E.W.), 19H00696 (S.G.), and 22K03681 (S.S.). N.B. was supported by the INAF MiniGrant "PWNnumpol - Numerical Studies of Pulsar Wind Nebulae in The Light of IXPE". ## Appendix 1 Polarization Analysis Formulation The event-by-event Stokes parameters are given by \(i_{k}\equiv 1\), \(q_{k}\equiv 2\cos 2\phi_{k}\), and \(u_{k}\equiv 2\sin 2\phi_{k}\), where \(\phi_{k}\) is the reconstructed photoelectron direction of the event number \(k\). Stokes \(I\), \(Q\) and \(U\) of the source are then estimated with \(I=\Sigma i_{k}=\Sigma 1,\ Q=\Sigma q_{k}/\mu,\ \mbox{and}\ U=\Sigma u_{k}/\mu\), respectively, providing that the effective area and modulation factor (\(\mu\)) are energy independent. The variance \(V(q)\) of Stokes \(Q\) of an event is given by \(V(q)=\langle q^{2}\rangle-\langle q\rangle^{2}\). Assuming that the source polarization and/or \(\mu\) are significantly smaller than 1, which is usually the case, the relation \(\langle q^{2}\rangle\gg\langle q\rangle^{2}\) holds and hence \(V(q)\approx\langle q^{2}\rangle=\frac{1}{N}\Sigma(2\cos 2\phi_{k})^{2}\), where \(N\) is the number of events.
Then, we can estimate \(V(Q)\) with \(V(Q)=NV(q)/\mu^{2}\approx\Sigma(2\cos 2\phi_{k})^{2}/\mu^{2}\). Similarly, we obtain \(V(U)\approx\Sigma(2\sin 2\phi_{k})^{2}/\mu^{2}\). The source polarization parameters are calculated as \(\mbox{PD}=\sqrt{Q^{2}+U^{2}}/I\) and \(\mbox{PA}=\frac{1}{2}\arctan(U/Q)\). When the number of events is large, as is often the case, \(\sqrt{V(I)}/I=1/\sqrt{N}\) is much smaller than \(\sqrt{V(Q)}/Q\) and \(\sqrt{V(U)}/U\), and the errors of PD and PA practically depend only on \(V(Q)\) and \(V(U)\). To calculate MDP\({}_{99}\), we consider the statistical error of a zero polarization source; \(V(Q)\approx\Sigma(2\cos 2\phi_{k})^{2}/\mu^{2}=4(N/2)/\mu^{2}=2N/\mu^{2}\) (because \(\langle\cos^{2}2\phi\rangle=1/2\)) and \(V(U)\approx 2N/\mu^{2}\). This yields \(\sigma(Q/I)=\sqrt{V(Q)}/N=\sqrt{2/N}/\mu\) and \(\sigma(U/I)=\sqrt{2/N}/\mu\), resulting in MDP\({}_{99}=3\sqrt{2/N}/\mu\approx 4.29/(\mu\sqrt{N})\). In a realistic case, the modulation factor \(\mu\) is energy dependent. Denoting it as \(\mu_{k}\), we define \(I_{\rm corr}\equiv\Sigma i_{k}=N\) (not changed), \(Q_{\rm corr}\equiv\Sigma q_{k}/\mu_{k}\), and \(U_{\rm corr}\equiv\Sigma u_{k}/\mu_{k}\). The effective area is also energy dependent in most cases (\(A_{k}\)). Stokes parameters are given by \[\tilde{I}_{\rm corr} \equiv \Sigma i_{k}/A_{k}(\equiv\tilde{N}),\] (A1) \[\tilde{Q}_{\rm corr} \equiv \Sigma q_{k}/\mu_{k}/A_{k},\] (A2) \[\tilde{U}_{\rm corr} \equiv \Sigma u_{k}/\mu_{k}/A_{k}.\] (A3) The source polarization parameters are then obtained with \[\mbox{PD} = \sqrt{\tilde{Q}_{\rm corr}^{2}+\tilde{U}_{\rm corr}^{2}}/\tilde{I}_{\rm corr},\] (A4) \[\mbox{PA} = \frac{1}{2}\arctan(\tilde{U}_{\rm corr}/\tilde{Q}_{\rm corr}).\] (A5) By taking account of the error propagation, we obtain \[V(\tilde{Q}_{\rm corr}) = \{(1/\mu_{1})^{2}(1/A_{1})^{2}+\cdots+(1/\mu_{N})^{2}(1/A_{N})^{2}\}V(q)\] (A6) \[= \Sigma(1/\mu_{k})^{2}(1/A_{k})^{2}\cdot\Sigma q_{k}^{2}/N,\] \[V(\tilde{U}_{\rm corr}) = \Sigma(1/\mu_{k})^{2}(1/A_{k})^{2}\cdot\Sigma u_{k}^{2}/N.\] (A7) Since \(\sqrt{V(\tilde{I}_{\rm corr})}/\tilde{I}_{\rm corr}\) is much smaller than \(\sqrt{V(\tilde{Q}_{\rm corr})}/\tilde{Q}_{\rm corr}\) (and \(\sqrt{V(\tilde{U}_{\rm corr})}/\tilde{U}_{\rm corr}\)), we can calculate the errors of the PD and PA by using \(V(\tilde{Q}_{\rm corr})\) and \(V(\tilde{U}_{\rm corr})\) only. In particular, when the polarization significance is high, both the PD and PA distributions are symmetric, and their \(1\sigma\) errors are \[\frac{\sigma(\text{PD})}{\text{PD}} \approx\frac{\sqrt{\tilde{Q}_{\text{corr}}^{2}V(\tilde{Q}_{\text{corr}})+\tilde{U}_{\text{corr}}^{2}V(\tilde{U}_{\text{corr}})}}{\tilde{Q}_{\text{corr}}^{2}+\tilde{U}_{\text{corr}}^{2}}, \tag{10}\] \[\sigma(\text{PA}) \approx\frac{1}{2}\frac{\sigma(\text{PD})}{\text{PD}}. \tag{11}\] To calculate \(\text{MDP}_{99}\), we consider the statistical error of a zero polarization source and obtain \(V(\tilde{Q}_{\text{corr}})\approx\Sigma(1/\mu_{k})^{2}(1/A_{k})^{2}\cdot\Sigma q_{k}^{2}/N=\Sigma(1/\mu_{k})^{2}(1/A_{k})^{2}\cdot 4(N/2)/N=2\Sigma(1/\mu_{k})^{2}(1/A_{k})^{2}\). Similarly, \(V(\tilde{U}_{\text{corr}})\approx 2\Sigma(1/\mu_{k})^{2}(1/A_{k})^{2}\).
Defining \(N\langle\frac{1}{\mu^{2}}\frac{1}{A^{2}}\rangle\equiv(\frac{1}{\mu_{1}}\frac {1}{A_{1}})^{2}+\cdots+(\frac{1}{\mu_{N}}\frac{1}{A_{N}})^{2}\), we have \(\sigma(\frac{\tilde{Q}_{\text{corr}}}{I_{\text{corr}}})=\sigma(\frac{\tilde{ U}_{\text{corr}}}{I_{\text{corr}}})=\frac{\sqrt{2N}}{N}\sqrt{\langle\frac{1}{\mu^{2}} \frac{1}{A^{2}}\rangle}\). Finally, we obtain \[\text{MDP}_{99}=\frac{3\sqrt{2N}}{\tilde{N}}\sqrt{\langle\frac{1}{\mu^{2}} \frac{1}{A^{2}}\rangle}=\frac{4.29}{\tilde{N}/\sqrt{N}}\sqrt{\langle\frac{1}{ \mu^{2}}\frac{1}{A^{2}}\rangle}. \tag{12}\] ## Appendix 2 Crab PWN Maps with Normal Binning While the sliding-box smoothing maintains fine resolution and allows us to study polarization properties in detail, the statistical errors of adjacent pixels are not independent. For completeness, we also show the Stokes \(I\) and PD maps of the Crab PWN in figure 8, in which normal binning of \(5\times 5\) pixels is applied.
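As a worked illustration of the Appendix 1 formulation, the following minimal NumPy sketch computes the weighted Stokes parameters, PD, PA, their \(1\sigma\) errors, and MDP\({}_{99}\) from an event list; the array names (phi, mu, area) are illustrative assumptions and do not correspond to specific IXPE level-2 columns or pipeline functions.

```python
import numpy as np

def event_polarization(phi, mu, area):
    """Event-weighted Stokes parameters, PD, PA, 1-sigma errors and MDP99,
    following Appendix 1.  phi: reconstructed photoelectron angles (rad);
    mu, area: per-event modulation factor and effective area."""
    q = 2.0 * np.cos(2.0 * phi)
    u = 2.0 * np.sin(2.0 * phi)
    n = phi.size

    I = np.sum(1.0 / area)                  # \tilde{I}_corr
    Q = np.sum(q / (mu * area))             # \tilde{Q}_corr
    U = np.sum(u / (mu * area))             # \tilde{U}_corr
    w2 = np.sum(1.0 / (mu * area) ** 2)     # sum of (1/mu_k/A_k)^2
    VQ = w2 * np.sum(q ** 2) / n            # V(\tilde{Q}_corr), Eq. (A6)
    VU = w2 * np.sum(u ** 2) / n            # V(\tilde{U}_corr), Eq. (A7)

    pd = np.hypot(Q, U) / I
    pa = 0.5 * np.degrees(np.arctan2(U, Q))
    dpd = pd * np.sqrt(Q**2 * VQ + U**2 * VU) / (Q**2 + U**2)   # Eq. (10)
    dpa = np.degrees(0.5 * dpd / pd)                            # Eq. (11)
    mdp99 = 4.29 / (I / np.sqrt(n)) * np.sqrt(w2 / n)           # Eq. (12)
    return pd, pa, dpd, dpa, mdp99
```

Because every quantity involved is a simple sum over events, the same function applies unchanged to any spatial, spectral, or pulse-phase selection of the event list, which is the property exploited when binning the maps.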
We investigated in detail the magnetic-field structure of the Crab pulsar wind nebula using the Imaging X-ray Polarimetry Explorer (IXPE), based on X-ray polarization data in the 2-8 keV band. To remove the contamination of the pulsar emission into the nebula region, we applied a strict pulse-phase cut, retaining only the off-pulse phase range of 0.7-1.0. The electric-field vector polarization angle (PA) was approximately 130 degrees (measured from north to east), and the polarization degree (PD) was about 25%, measured at the pulsar position. These results show that, in the region close to the termination shock, the direction of the toroidal magnetic field in the nebula is perpendicular to the pulsar spin axis. The polarization angle gradually deviates toward the east-west direction with increasing distance from the pulsar.
2302.14431
Efficient Masked Autoencoders with Self-Consistency
Inspired by the masked language modeling (MLM) in natural language processing tasks, the masked image modeling (MIM) has been recognized as a strong self-supervised pre-training method in computer vision. However, the high random mask ratio of MIM results in two serious problems: 1) the inadequate data utilization of images within each iteration brings prolonged pre-training, and 2) the high inconsistency of predictions results in unreliable generations, $i.e.$, the prediction of the identical patch may be inconsistent in different mask rounds, leading to divergent semantics in the ultimately generated outcomes. To tackle these problems, we propose the efficient masked autoencoders with self-consistency (EMAE) to improve the pre-training efficiency and increase the consistency of MIM. In particular, we present a parallel mask strategy that divides the image into K non-overlapping parts, each of which is generated by a random mask with the same mask ratio. Then the MIM task is conducted parallelly on all parts in an iteration and the model minimizes the loss between the predictions and the masked patches. Besides, we design the self-consistency learning to further maintain the consistency of predictions of overlapping masked patches among parts. Overall, our method is able to exploit the data more efficiently and obtains reliable representations. Experiments on ImageNet show that EMAE achieves the best performance on ViT-Large with only 13% of MAE pre-training time using NVIDIA A100 GPUs. After pre-training on diverse datasets, EMAE consistently obtains state-of-the-art transfer ability on a variety of downstream tasks, such as image classification, object detection, and semantic segmentation.
Zhaowen Li, Yousong Zhu, Zhiyang Chen, Wei Li, Chaoyang Zhao, Rui Zhao, Ming Tang, Jinqiao Wang
2023-02-28T09:21:12
http://arxiv.org/abs/2302.14431v2
# Efficient Masked Autoencoders with Self-Consistency ###### Abstract Inspired by masked language modeling (MLM) in natural language processing, masked image modeling (MIM) has been recognized as a strong and popular self-supervised pre-training method in computer vision. However, its high random mask ratio would result in two serious problems: 1) the data are not efficiently exploited, which brings inefficient pre-training (e.g., 1600 epochs for MAE \(vs.\) 300 epochs for the supervised), and 2) the high uncertainty and inconsistency of the pre-trained model, i.e., the prediction of the same patch may be inconsistent under different mask rounds. To tackle these problems, we propose efficient masked autoencoders with self-consistency (EMAE), to improve the pre-training efficiency and increase the consistency of MIM. In particular, we progressively divide the image into \(K\) non-overlapping parts, each of which is generated by a random mask and has the same mask ratio. Then the MIM task is conducted parallelly on all parts in an iteration and generates predictions. Besides, we design a self-consistency module to further maintain the consistency of predictions of overlapping masked patches among parts. Overall, the proposed method is able to exploit the data more efficiently and obtains reliable representations. The experiments on ImageNet show that EMAE achieves even higher results with only 300 pre-training epochs under ViT-Base than MAE (1600 epochs). EMAE also consistently obtains state-of-the-art transfer performance on various downstream tasks, such as object detection and semantic segmentation. ## 1 Introduction Self-supervised learning in computer vision (CV) [1, 2, 4, 5, 6, 8, 18, 19, 20, 30, 46] is widely used to learn general representations from large-scale unlabeled images without relying on human annotations. Generally, massive pretext tasks [1, 2, 4, 5, 6, 8, 14, 18, 19, 20, 23, 30, 40, 48, 53] are defined for learning unsupervised visual features. Among them, inspired by the success of masked language modeling (MLM) [12], MIM is proposed for pre-training in CV and has shown a preponderant advantage in performance. Commonly, the random mask ratio of MIM is much higher than that of MLM due to the difference in the information density of image and language data [19] (\(e.g.\) MAE [19] adopts \(75\%\) mask ratio while BERT [12] uses \(15\%\) mask ratio). However, we observe that the high random mask ratio will bring two serious problems: 1) images are not exploited efficiently, resulting in inefficient pre-training, and 2) the high uncertainty and inconsistency of the pre-trained model. Here we take MAE and BERT as examples of MIM and MLM, respectively. First, MAE only exploits \(25\%\) of the whole image to train the model in a single epoch. In contrast, BERT uses \(85\%\) of the text corpus. Figure 1: **Different reconstruction results of MAE [19] correspond to different mask seeds. (a) Different combinations sampled by different mask seeds. (b) Reconstruction results of MAE. For the three reconstructions of (b), only the first represents a normal cattle, and the third even reconstructs a dog. The semantics of these reconstructions by MAE are inconsistent.** Due to insufficient data utilization of MIM, the pre-training epochs are commonly higher than those of MLM (e.g., 1600 epochs for MAE \(vs.\) 40 epochs for MLM in NLP), and the number of pre-training epochs is too large.
The reason for this phenomenon is not only the difference in image and language data, but also the difference in the utilization of data between the MLM and MIM methods. Besides, the pre-training efficiency of MIM is still not comparable to that of the supervised (300 epochs for the supervised). Second, the high mask ratio will introduce less reliable features. Hence, different combinations of random visible patches sampled from the original images may generate inconsistent predictions for the same masked patch. As shown in Figure 1, MAE generates different reconstruction results corresponding to different mask seeds, and the semantics of different results are inconsistent. Is it possible to reduce the random mask ratio to increase pre-training efficiency and improve consistency? In fact, the prior work [19] already shows that reducing the mask ratio brings lower transfer ability for downstream tasks. Therefore, it is essential to design an efficient learning paradigm and reduce the uncertainty of MIM. In this work, we propose efficient masked autoencoders with self-consistency (EMAE), which aims to enhance the pre-training efficiency and increase the certainty and consistency of MIM. Concretely, we first progressively divide the image into \(K\) random non-overlapping parts with the same number of image patches by random mask, and each part performs the MIM task parallelly. The design can effectively improve the pre-training efficiency. Furthermore, a self-consistency module is introduced to encourage the model to output reliable representations under different input visible patches. We validate our method on multiple visual downstream tasks. With only 300 pre-training epochs, our method outperforms MAE with 1600 pre-training epochs by 0.4% on the ImageNet linear evaluation protocol under ViT-base [14]. Our 800-epoch classification results using ViT-large are comparable with that of 1600-epoch MAE using ViT-huge, it achieves state-of-the-art (SOTA) performance. Moreover, under the architecture of ViT-base, EMAE achieves 54.8% bbox mAP and 47.6% mask mAP using ViTDet [28] on COCO [34] object detection and instance segmentation. Also, EMAE obtains 49.3% mIoU using UperNet [47] on ADE20K [52] semantic segmentation. Overall, we make the following contributions: * We theoretically prove the high random mask ratio of MIM leads to two serious problems: inefficient pre-training and high inconsistency. * We propose efficient masked autoencoders with self-consistency to effectively improve pre-training efficiency and obtain reliable representations in MIM. * Massive experiments demonstrate the effectiveness and stronger generalization ability of our method. In particular, EMAE outperforms previous SOTA methods in various downstream tasks. ## 2 Related Work Self-supervised learning builds general representations by exploiting the internal priors or structures of data. These representations can be transferred to various downstream tasks and improve the performance of these tasks. Self-supervised methods in computer vision were based on pre-text tasks [18, 19, 20, 23, 24, 25, 26, 2, 39, 40, 41, 46, 48, 50, 53, 4, 4, 5, 51, 2]. A milestone in self-supervised learning was the contrastive learning/instance discrimination [2, 5, 6, 8, 18, 20, 46]. Besides, mask image modeling [18, 19, 4, 53, 48, 1] is currently the focus of the research community. 
### Contrastive learning The learning objective of contrastive learning/instance discrimination is simply to learn representations by distinguishing each image from others, and this approach has proved the excellent performance on extensive downstream tasks, such as image classification [11, 22], object detection [33, 42] and segmentation [21]. The pioneering work [46] proposes to use a memory bank to store the instance class representation vector. MoCo [20] improves the training of instance discrimination methods by storing representations from a momentum encoder instead of the trained network. SimCLR [5] shows that the memory bank can be entirely replaced with the elements from the same batch if the batch is large enough. Meanwhile, BYOL [18] proposes an asymmetric structure and directly bootstraps the representations by attracting the different features from the same instance. Moreover, MoCo v3 [9] applies the practice of contrastive learning in convolutional neural networks to ViT [14] architecture. DINO [3] utilizes knowledge distillation together with ViT in the contrastive learning framework. Nevertheless, much of their progress has far been limited to single-centric-object pre-training data such as ImageNet [11] due to the prior of image semantic consistency [35, 31]. ### Masked image modeling With the development of vision Transformer [10, 14] and the success of MLM paradigm in NLP [12], MIM becomes the focus of the research community. MST [30] is the first to introduce MIM into the siamese structure and propose an attention-guided mask strategy. iBOT [53] also adopts the siamese structure and acquires impressive performance. However, these work are built on contrastive learning. The prior work [4] operates on sequences of pixels and predicts unknown pixels. ViT [14] also attempts to study masked patch prediction for self-supervised learning. Following ViT, SimMIM [48] proposes to predict pixels. Moreover, BEiT [1] proposes to predict discrete tokens, and its performance depends on the performance of the pre-trained model VQVAE [45]. It is noted that MAE [19] proposes an asymmetric encoder-decoder architecture for the MIM task and shows excellent performance in a variety of visual downstream tasks. However, the mask ratio of MIM is commonly high and this results in two serious problems, namely insufficient pre-training and inconsistency of the pre-trained model. To overcome these problems, we propose efficient masked autoencoders with self-consistency to improve pre-training efficiency and consistency of MIM. ### Autoregressive image encoding Compared with MIM pre-training for ViT, GPT-like autoregressive models [4, 24] in CV can utilize the whole image to perform the self-supervised task. However, the kind of method does not show superior performance and has quadratic time/space complexity of self-attention. ## 3 Method In this section, we propose an efficient masked autoencoder with self-consistency (EMAE) to learn visual representations, and the pipeline of our proposed method is shown in Figure 2. According to our EMAE, the design of whole data utilization can improve pre-training efficiency in MIM. Besides, the self-consistency module is proposed to reduce the uncertainty and inconsistency of MIM. Therefore, we first give preliminaries about MAE in 3.1, which is adopted as our baseline of MIM. Then, we introduce the design of our proposed EMAE, which boosts both the learning efficiency and performance in 3.2. 
Finally, we discuss how the proposed approach will affect the efficiency and performance in 3.3. ### Preliminaries about MAE MAE is a highly recognized MIM framework for self-supervised learning, which gradually incorporates some of the impressive practice in the area. Specifically, according to the pioneering work [19], given a natural image from an unlabeled dataset \(\mathbf{X}\), we divide it into \(N\) regular image patches, denoted as \(\mathbf{x}\in\mathbf{R}^{N\times S}\) where \(S\) denotes the patch size (_e.g_. \(16\times 16\) in ViT [14]). Then, we let \(\mathbf{m}\) = \((m_{1},...,m_{N})\) denote a binary vector of length \(N\), where \(m_{a}\in\{0,1\}\) and \(a\in\{1,2,...,N\}\), representing the mask over the image, and generating two complementary combinations of \(\mathbf{x}\): masked patches \(\mathbf{x}_{m}\) and the visible patches \(\mathbf{x}_{v}\) are given with Eq(1), where \(\eta=N\times p\) and \(\kappa=N\times(1-p)\). \[\mathbf{x}_{m}=\mathbf{x}\odot\mathbf{m}\in\mathbf{R}^{\eta\times S}, \mathbf{x}_{v}=\mathbf{x}\odot(1-\mathbf{m})\in\mathbf{R}^{\kappa\times S}, \tag{1}\] The MAE model \(h=g\circ f\) is an encoder-decoder architecture, we fed these visible patches into encoder \(f(\cdot)\) (_e.g_., ViT-base), and obtain the latent features. Then, a decoder \(g(\cdot)\) maps the latent feature back to the pixel space to reconstruct the complementary combination and obtain the prediction \(\mathbf{x}_{p}\). In particular, MAE minimizes the mean squared error (MSE) between the reconstructed and masked image patches \(\mathbf{x}_{m}\), and the loss function is shown as Eq(2). \[\mathcal{L}_{\text{MAE}}(\mathbf{x})=\mathcal{L}(\mathbf{x}_{v},\mathbf{x}_{m })=\mathbb{E}\,||g(f(\mathbf{x}_{v}))-\mathbf{x}_{m}||^{2}, \tag{2}\] Figure 2: **Illustration of our EMAE.** The whole image is first progressively divided into \(K\) non-overlapping parts \(\mathbf{x}_{v_{1}}\),..., \(\mathbf{x}_{v_{K}}\) by random mask, and each part has the same number of visible patches. Then, each part is fed into the encoder-decoder architecture and performs the MIM task to generate \(\mathbf{x}_{m_{1}}\),..., and \(\mathbf{x}_{m_{K}}\). Furthermore, the self-consistency module guides the overlapping patches of predictions among parts to be pulled together. Here, we take \(\mathbf{x}_{p_{1}}\) and \(\mathbf{x}_{p_{K}}\) as examples. ### Efficient MAE with self-consistency In this section, with MAE as the baseline, we introduce EMAE, a simple method that can greatly improve the pre-training efficiency and obtain reliable representations. An overview of EMAE is shown in Figure 2. EMAE makes two modifications: 1) exploits the whole data, and 2) designs the self-consistency module. In the following, we illustrate the method we proposed in detail. #### 3.2.1 The utilization of whole data As described in Eq(1), the random mask ratio is \(p\), thus the data utilization is \(1-p\). That is, the higher the random mask ratio, the lower the data utilization. The low data utilization leads to insufficient data training and decreases the pre-training efficiency. Here we take MAE and BERT as MIM and MLM examples, the ratio of data utilization of MLM to MIM is \((\frac{17}{5})^{M}\) if MLM and MIM are pre-trained with the same \(M\) epochs. The high mask ratio makes MAE unable to fully exploit the whole data, thus reducing the pre-training efficiency. Besides, training a model with \(1600\) epochs consumes a lot of resources and time for the general academic community. 
To reduce the training epochs and increase the pre-training efficiency, it is crucial to sufficiently utilize the data. Therefore, we exploit the whole data to train the model, thereby increasing the data utilization. Concretely, the whole image is first divided into \(N\) image patches. Then, a tensor \(\mathbf{t}\) with random values of length \(N\) is generated, and each value of the tensor follows the uniform distribution on the interval \([0,1]\). The tensor is sorted in ascending order by value and the sorted indices \(\mathbf{ids}\) are obtained as Eq(3), where \(\texttt{s}(\cdot)\) returns the indices that sort a tensor in ascending order by value. \[\mathbf{ids}=\texttt{s}(\mathbf{t}), \tag{3}\] Here, we divide the sorted indices of length \(N\) equally into \(K\) non-overlapping parts \(\mathbf{ids}_{1}\), \(\mathbf{ids}_{2}\),..., and \(\mathbf{ids}_{K}\) as shown in Eq(4), where \(i\in\{1,2,...,K\}\). \[\mathbf{ids}_{i}=\mathbf{ids}[(i-1)\times\frac{N}{K}:i\times\frac{N}{K}], \tag{4}\] Therefore, the \(N\) image patches can be divided equally into \(K\) non-overlapping parts \(\mathbf{x}_{v_{1}}\), \(\mathbf{x}_{v_{2}}\),..., and \(\mathbf{x}_{v_{K}}\) by the indices \(\mathbf{ids}_{i}\), as shown in Eq(5), where \(\texttt{d}(\cdot)\) denotes drawing values from the input \(\mathbf{x}\) according to the specified indices \(\mathbf{ids}_{i}\). \[\mathbf{x}_{v_{i}}=\texttt{d}(\mathbf{x},\mathbf{ids}_{i}), \tag{5}\] The mask \(\mathbf{m}_{i}\) of any part is given by Eq(6), where \(\texttt{ms}(\cdot)\) obtains the mask from \(\mathbf{t}\) according to \(\mathbf{ids}_{i}\). \[\mathbf{m}_{i}=\texttt{ms}(\mathbf{t},\mathbf{ids}_{i}), \tag{6}\] Any part \(\mathbf{x}_{v_{i}}\) has \(N/K\) visible patches, and its corresponding complementary view \(\mathbf{x}_{m_{i}}\) has \(N-N/K\) masked patches, which is defined as Eq(7). Hence, the mask ratio of any part is \((N-N/K)/N=(K-1)/K\). More details are described in Algorithm 1. \[\mathbf{x}_{m_{i}}=\mathbf{x}\odot\mathbf{m}_{i}, \tag{7}\] ``` # rand: returns a tensor filled with random numbers from a uniform distribution on the interval [0,1]; gather: gathers values along an axis specified by dim; argsort: returns the indices that sort a tensor along a given dimension in ascending order by value. ``` **Algorithm 1** Pseudocode of the division of whole data in a PyTorch-like style. From the above description, image patches are progressively divided into \(K\) non-overlapping parts according to the principle of sampling without replacement [36]. When \(K\) is set to 4, the mask ratio of each part is \(75\%\) (the same as the mask ratio of MAE). This design ensures that the whole image is used to train the model while each patch of the image is sampled exactly once in an iteration, thereby enhancing the data utilization. According to our design in Figure 2, each part containing visible patches is fed as the input into the encoder-decoder architecture and performs the MIM task, and the loss function is defined as Eq(8). \[\mathcal{L}_{whole}(\mathbf{x})=\mathop{\mathbb{E}}_{i\in[1,K]}\mathcal{L}( \mathbf{x}_{v_{i}},\mathbf{x}_{m_{i}}), \tag{8}\] #### 3.2.2 Self-consistency module The utilization of the whole image in Section 3.2.1 enhances the global understanding of the model in an iteration, but it still cannot guarantee the reliability of the output results for each part. According to the pioneering work [27], human intelligence is a self-consistency system, which helps to efficiently learn and correct mistakes.
Therefore, it is reasonable to believe that artificial models can also improve training efficiency and consistency when the self-consistency mechanism is introduced. In detail, the predictions of the pre-trained model are encouraged to be consistent under different input visible patches from the same image. Following Section 3.2.1, each part contains \(N/K\) patches of the whole image and generates \(N-N/K\) predictions. The predictions of the parts are \(\mathbf{x}_{p_{1}}\), \(\mathbf{x}_{p_{2}}\),..., and \(\mathbf{x}_{p_{K}}\). There is a fixed ratio of overlap between the predictions of any two parts, namely \((K-2)/(K-1)\). The overlapping positions \(\mathbf{s}_{ij}\) of any two sets of predictions \(\mathbf{x}_{p_{i}}\), \(\mathbf{x}_{p_{j}}\) can be obtained from the masks \(\mathbf{m}_{i}\) and \(\mathbf{m}_{j}\), where \(i,j\in\{1,2,...,K\}\) and \(i\neq j\). The definition of \(\mathbf{s}_{ij}\) is given in Eq(9). \[\mathbf{s}_{ij}=\mathbf{m}_{i}\ \cap\ \mathbf{m}_{j}, \tag{9}\] Consequently, the self-consistency module is proposed to guide the predictions at each overlapping position to remain consistent. As shown in Figure 2, the self-consistency module pulls together the overlapping predictions between any two sets \(\mathbf{x}_{p_{i}}\) and \(\mathbf{x}_{p_{j}}\), minimizing the mean absolute error between the overlapping reconstructed results to increase consistency. The _self-consistency loss_ is defined as Eq(10), where sg[\(\cdot\)] stands for stop-gradient. Each prediction of any part is compared with the predictions of the \(K-2\) other parts that also mask the corresponding patch. \[\mathcal{L}_{sc}(\mathbf{x}_{v_{i}},\mathbf{x}_{v_{j}})=\big{(}||\texttt{sg}[\mathbf{x}_{p_{i}}]-\mathbf{x}_{p_{j}}||+||\mathbf{x}_{p_{i}}-\texttt{sg}[\mathbf{x}_{p_{j}}]||\big{)}\odot\mathbf{s}_{ij}, \tag{10}\] Finally, the self-consistency loss of the image is calculated according to Eq(11). \[\mathcal{L}_{consistency}(\mathbf{x})=\mathop{\mathbb{E}}_{i\in[1,K],\ j\in[i+1,K]} \mathcal{L}_{sc}(\mathbf{x}_{v_{i}},\mathbf{x}_{v_{j}}), \tag{11}\] The behavior induced by the self-consistency loss can be observed in Figure 3: the reconstructed images from different combinations end up closely matching each other. #### 3.2.3 Objective function Our EMAE consists of the whole-data utilization design and the self-consistency module. Thus, the final loss of EMAE is formulated as Eq(12), where each loss coefficient is set to \(1\) so that the two terms are equally weighted. \[\mathcal{L}_{total}(\mathbf{x})=\mathcal{L}_{whole}(\mathbf{x})+\mathcal{L}_{consistency}(\mathbf{x}), \tag{12}\] ### Discussion In this section, we present some intuitive analysis of why EMAE can improve the pre-training efficiency and increase consistency, which will be further demonstrated with empirical results in Section 4. The primary component that makes EMAE converge faster is the utilization of multiple non-overlapping parts, which efficiently exploits the whole image in the training stage. Thus, EMAE obtains sufficient supervision signals in each epoch compared with MAE and achieves promising performance with fewer epochs. Notably, according to the principle of sampling without replacement, the whole image is divided equally into \(K\) non-overlapping parts. This design ensures that every patch in an image is used to train the model, thus enhancing the utilization of the whole data, unlike MAE. Finally, we propose the self-consistency loss to decrease the uncertainty and inconsistency of MIM.
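To make the preceding components concrete before continuing the discussion, here is a hedged PyTorch-style sketch combining the \(K\)-part division of Algorithm 1 and Eqs. (3)-(7) with the whole-data MIM loss of Eq. (8), the self-consistency loss of Eqs. (9)-(11), and the equally weighted total of Eq. (12). The `model(x_vi, ids_i)` interface, assumed to return reconstructions for all \(N\) patch positions, and all helper names are simplifying assumptions of ours, not the authors' implementation.

```
import torch
import torch.nn.functional as F

def split_into_parts(x, K):
    """Divide the N patches of x (B, N, S) into K equal non-overlapping visible parts
    (Algorithm 1 / Eqs. (3)-(7)); each part has mask ratio (K - 1) / K."""
    B, N, S = x.shape
    ids = torch.argsort(torch.rand(B, N), dim=1)           # random permutation per image
    parts, masks = [], []
    for i in range(K):
        ids_i = ids[:, i * N // K:(i + 1) * N // K]         # indices of part i's visible patches
        x_vi = torch.gather(x, 1, ids_i.unsqueeze(-1).expand(-1, -1, S))
        m_i = torch.ones(B, N, dtype=torch.bool)             # True marks masked positions
        m_i[torch.arange(B).unsqueeze(1), ids_i] = False      # visible positions -> not masked
        parts.append((x_vi, ids_i))
        masks.append(m_i)
    return parts, masks

def emae_losses(model, x, K=4):
    """Whole-data MIM loss (Eq. (8)) plus self-consistency loss (Eqs. (10)-(11)).
    `model(x_vi, ids_i)` is assumed to return reconstructions of shape (B, N, S)."""
    parts, masks = split_into_parts(x, K)
    preds = [model(x_vi, ids_i) for x_vi, ids_i in parts]
    mim = sum(F.mse_loss(p[m], x[m]) for p, m in zip(preds, masks)) / K
    sc, pairs = 0.0, 0
    for i in range(K):
        for j in range(i + 1, K):
            s_ij = masks[i] & masks[j]                        # overlapping masked positions (Eq. (9))
            sc = sc + (preds[i].detach()[s_ij] - preds[j][s_ij]).abs().mean() \
                    + (preds[i][s_ij] - preds[j].detach()[s_ij]).abs().mean()
            pairs += 1
    return mim + sc / pairs                                   # equally weighted total loss (Eq. (12))
```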
Based on the design of the whole data, the self-consistency mechanism further im \begin{table} \begin{tabular}{l c c c c} \hline \hline **Method** & **Arch** & **pre-training** & \multicolumn{2}{c}{**ImageNet**} \\ & & \# epochs & LP & FT \\ \hline \multicolumn{5}{l}{_Supervised learning on ImageNet:_} \\ scratch [7] & ViT-S & 300 & - & 79.9\% \\ scratch [19] & ViT-B & 300 & - & 82.3\% \\ scratch [19] & ViT-L & 300 & - & 82.6\% \\ \hline \multicolumn{5}{l}{_Contrastive learning:_} \\ MoCo v3 [9] & ViT-B & 300 & 76.2\% & 83.2\% \\ DINO [3] & ViT-B & 400 & **78.2\%** & **83.4\%** \\ \hline \multicolumn{5}{l}{_Masked image modeling + contrastive learning:_} \\ MST [30] & ViT-S & 100 & 75.0\% & - \\ AttMask [25] & ViT-S & 100 & 76.1\% & 81.3\% \\ iBOT [53] & ViT-S & 100 & 74.4\% & 81.1\% \\ iBOT [53] & ViT-S & 3,200 & 77.9\% & 82.3\% \\ iBOT [53] & ViT-B & 1,600 & 79.5\% & 84.0\% \\ iBOT [53] & ViT-L & 1,200 & **81.0\%** & **84.8\%** \\ \hline \multicolumn{5}{l}{_Masked image modeling:_} \\ CAE [7] & ViT-S & 300 & 50.8\% & 81.8\% \\ CAE [7] & ViT-B & 800 & 68.3\% & 83.6\% \\ BEiT [1] & ViT-B & 800 & 56.7\% & 83.2\% \\ BEiT [1] & ViT-L & 800 & 73.5\% & 85.2\% \\ SimMIM [48] & ViT-B & 800 & 56.7\% & 83.8\% \\ MAE [19] & ViT-B & 300 & 61.5\% & 82.9\% \\ MAE [19] & ViT-B & 800 & 64.4\% & 83.4\% \\ MAE [19] & ViT-B & 1600 & 67.8\% & 83.6\% \\ MAE [19] & ViT-B & 2400 & 68.2\% & 83.8\% \\ MAE [19] & ViT-L & 1600 & 75.6\% & 85.9\% \\ MAE [19] & ViT-H & 1600 & 76.6\% & 86.9\% \\ \hline EMAE & ViT-B & 300 & 68.2\% & 83.8\% \\ EMAE & ViT-B & 800 & 70.4\% & 84.0\% \\ **EMAE** & ViT-L & 800 & **76.7\%** & **86.3\%** \\ \hline \hline \end{tabular} \end{table} Table 1: **Comparison of SOTA self-supervised learning methods.** EMAE is pre-trained on ImageNet train set and achieves state-of-the-art performance than previous masked image modeling methods. For evaluation, we test the performance of the pre-trained models under two supervised training settings: 1) linear probing (LP), and 2) end-to-end fine-tuning (FT). We report top-1 accuracy on the ImageNet val set. proves feature representations, thereby benefiting the performance. Owing to these merits, EMAE can achieve high pre-training efficiency and consistent representations, thus obtaining promising performance. Experimental results in Section 4.5 below will validate these analyses. ## 4 Experiments ### Datasets and pre-raining settings Following the setting of MAE [19], EMAE is mainly evaluated on linear probing, finetuning classification, object detection, instance segmentation, and semantic segmentation tasks. The details of our utilized datasets and experimental settings are introduced in the next. **Datasets.** In this paper, EMAE is evaluated on the linear probing and finetuning classification task of ImageNet-1K [11], which is a popular large-scale image classification dataset with \(1.28\) million images and \(1000\) categories. Additionally, detection and segmentation are dense vision tasks, which contain massive instances inside each image. Therefore, the evaluation of dense vision tasks can better reflect the semantics capacity of pre-trained models. We conduct sufficient experiments on COCO [34] and ADE20k [52] dataset to verify the generalization and transfer ability of EMAE. COCO is a relatively challenging object detection and instance segmentation dataset, which train2017 contains about \(118k\) images and evaluate on the val2017 contains \(5k\) images. 
Moreover, ADE20K is also a challenging semantic segmentation dataset, and it contains \(25k\) images of \(150\) categories. **Pre-training settings.** The training settings are the same as MAE [19], we adopt the same encoder-decoder structure to perform the MIM task. Our method is general for ViT backbones, while most experiments are conducted with ViT-base, due to the limitation of computation resources. Specifically, we partition the image of \(224\times 224\) into \(14\times 14\) patches with the patch size being \(16\times 16\). The \(K\) is set to be \(4\) by default. The batch size is set to be \(4096\). Meanwhile, the weight decay, \(\beta_{1}\) and \(\beta_{2}\) for AdamW [38] is set to be \(0.05\), \(0.9\) and \(0.95\) respectively. We use a cosine learning rate strategy [37] with warmup [17]. The warmup number is set to be 40 epochs and the base learning rate is set to be _base_lr \(=1.5e^{-4}\). After the self-supervised pre-training on the ImageNet-1K is over, we evaluate the learned representation quality with only the encoder preserved. Please refer to Appendix for more details. ### Image classification on ImageNet-1K **Experimental setting.** For a fair comparison, we fully follow the hyperparameters of MAE [19] in our image classification experiments. We evaluate the performance of the pre-trained encoder under two supervised training settings: 1) linear probing (LP), and 2) end-to-end fine-tuning (FT). For the linear probing, all parameters of the pre-trained encoder are frozen while only the last classification layer is trained. The LARS [49] optimizer is exploited with batch size \(16,384\). The ViT-base is trained with \(90\) epochs while the ViT-large is trained with \(50\) epochs. The cosine learning strategy is used with \(0.1\) base learning rate. Also, for the end-to-end fine-tuning, the pre-trained encoder is fine-tuned with the classification head together. For different ViT backbones, ViT-base is trained for \(100\) epochs while ViT-large is trained for \(50\) epochs, of which the base learning rate and batch size are set to \(1e^{-3}\) and \(1,024\), respectively. Besides, AdamW optimizer with a cosine learning rate scheduler is adopted. The weight decay is set to \(0.05\). **Classification results.** As shown in Tab. 1, we surprisingly find that our method using ViT-base can surpass the MAE by around \(6.0\%\sim 6.7\%\) with the same pre-training epochs (300 and 800 epochs) in linear probing. The 300-epoch classification results are on par with that of the 2400-epoch MAE. The phenomenon indicates that our method can significantly improve the training efficiency of MIM. Extra training (800 epochs) further improves the linear result to \(70.4\%\) and finetuning result to \(84.0\%\), and achieves SOTA performance. Moreover, it is noted that our 800-epoch classification results using ViT-large are comparable with that of 1600-epoch MAE using ViT-huge. As a MIM-based method, EMAE surpasses previous SOTA MIM-based methods. The linear results of EMAE are slightly inferior to contrastive-based methods due to the contrastive-based method has the assumption of image semantic consistency [31, 35], and the assumption is consistent with the prior of the linear probing task. ### Object detection and instance segmentation To further validate the learned visual representation of our EMAE, we perform fine-tuning on the COCO [34] object detection and instance segmentation. We choose the Mask R-CNN [21] framework. 
Concretely, we adopt FPNs [32] to scale the feature map into different sizes as introduced in [29]. By fully following the strategy of previous [19, 29], we conduct these experiments on COCO. The results are reported in Table 2 in terms of box AP (\(AP^{b}\)) for detection and mask AP (\(AP^{m}\)) for segmentation. we show the performance of the learned representation by different self-supervised methods and supervised training. We observe that our method achieves the best results with \(51.4\%\)\(AP^{b}\) and \(45.7\%\)\(AP^{m}\), and surpasses MAE by \(1.0\) and \(0.8\) points, individually. Besides, we also conduct experiments on the SOTA ViT-based detection framework ViTDet [28] to verify the transfer ability of MAE. For a fair comparison, all of these experiments strictly adopt the training settings of ViTDet. In Table 3, it can be observed that our EMAE outperforms MAE \(1.3\) and \(1.2\) points on \(AP^{b}\) and \(AP^{m}\). Also, the results of ViTDet with Cascade Mask RCNN reach \(54.8\%\)\(AP^{b}\) and \(47.6\%\)\(AP^{m}\), and surpass MAE by \(0.8\) and \(0.9\) points, respectively. These experiments show that our EMAE can be applicable to any other detection framework to boost performance without additional training costs and efforts. ### Semantic segmentation We also evaluate our EMAE on another dense prediction task, semantic segmentation on the ADE20K [52] dataset. The mean Intersection of Union (mIoU) averaged over all semantic categories is reported as the evaluation metric. In particular, by fully following the training settings of MAE, we adopt UperNet framework [47] in our experiments and report the results in Table 4. We compare our method with supervised pre-training on ImageNet-1K as well as SOTA self-supervised methods. It can be observed that the proposed EMAE achieves the highest \(49.3\%\) mIoU and gets superior performance than all the other baselines, further validating the effectiveness of our framework. ### Ablation studies To better investigate the effectiveness of different components in our proposed EMAE, we conduct ablation studies on ImageNet-1K dataset. Linear probing is still an excellent evaluation method to quickly validate the learned representations, and MAE also selects the mask ratio according to the performance of linear probing. Hence, we adopt the results of linear probing as our benchmark for measuring effectiveness in ablation studies. For a fair comparison, the architecture of various methods adopts ViT-base. **Effect of whole data utilization.** In Section 3.2.1, we have discussed the superiority of whole data utilization. As shown in Table 5, the second line shows the results of MAE pre-training on ImageNet dataset for different pre-training \begin{table} \begin{tabular}{l c c} \hline \hline Method & Pre-train & Pre-train & ADE \\ & data & epochs & mIoU \\ \hline Supervised [19] & ImageNet-1K & 300 & 47.4 \\ SplitMask [15] & ADE20K & 21000 & 45.7 \\ MoCo v3 [9] & ImageNet-1K & 600 & 47.3 \\ BEIT [19] & ImageNet-1K+DALLE & 800 & 47.1 \\ PeCo [13] & ImageNet-1K & 300 & 46.7 \\ CIM [16] & ImageNet-1K & 100 & 43.5 \\ MAE [19] & ImageNet-1K & 1600 & 48.1 \\ CAE [7] & ImageNet-1K & 800 & 48.8 \\ \hline **EMAE** & ImageNet-1K & 800 & **49.3** \\ \hline \hline \end{tabular} \end{table} Table 4: **Results of semantic segmentation on ADE\(20\)K using UperNet [47].** For a fair comparison, the architecture of various methods adopts ViT-B. 
We report results measured by mean Intersection of Union (mIoU), and EMAE surpasses the previous self-supervised method and achieves the SOTA performance. \begin{table} \begin{tabular}{l|c|c|c|c|c|c|c} \hline \hline Method & Pre-train & \multicolumn{2}{c|}{Object detection} & \multicolumn{2}{c}{Instance segmentation} \\ \cline{3-8} & epochs & data & AP\({}^{b}\) & AP\({}^{b}_{50}\) & AP\({}^{b}_{75}\) & AP\({}^{m}\) & AP\({}^{b}_{50}\) & AP\({}^{m}_{75}\) \\ \hline Supervised [7] & 300 & ImageNet-1K & 46.9 & 68.9 & 51.0 & 41.5 & 65.5 & 44.4 \\ MoCo v3 [9] & 600 & ImageNet-1K & 47.9 & - & - & 42.7 & - & - \\ BEiT [1] & 800 & ImageNet-1K + DALLE & 49.8 & - & - & 44.4 & - & - \\ CAE [7] & 1600 & ImageNet-1K & 50.0 & 70.9 & 54.8 & 44.0 & 67.9 & 47.6 \\ MAE [19] & 1600 & ImageNet-1K & 50.4 & 70.8 & 55.7 & 44.9 & 68.3 & 48.9 \\ \hline **EMAE** & 800 & ImageNet-1K & **51.4** & **72.2** & **56.5** & **45.7** & **69.4** & **49.8** \\ \hline \hline \end{tabular} \end{table} Table 2: **Results of object detection and instance segmentation on COCO using Mask R-CNN [19, 29].** The architecture of various methods adopts ViT-B. We adopt Mask R-CNN [21] with FPN [32], and report the bounding box AP and mask AP on COCO val2017. EMAE outperforms the previous SOTA self-supervised learning method. \begin{table} \begin{tabular}{l|c|c|c|c} \hline \hline Method & Pre-train data & AP\({}^{b}\) & AP\({}^{m}\) \\ \hline Rand Init [28] & - & \(48.1\) & \(42.6\) \\ Supervised [28] & ImageNet-1K & 47.6 & 42.4 \\ Supervised [28] & ImageNet-21K & 47.8 & 42.6 \\ \hline MAE [20] & ImageNet-1K & 51.2 & 45.5 \\ MAE, _our impl._ & ImageNet-1K & 51.6 & 45.9 \\ **EMAE** & ImageNet-1K & **52.5** & **46.7** \\ \hline MAE + cascade & ImageNet-1K & 54.0 & 46.7 \\ **EMAE + cascade** & ImageNet-1K & **54.8** & **47.6** \\ \hline \hline \end{tabular} \end{table} Table 3: **Results of object detection and instance segmentation fine-tuned on COCO using ViTDet [28].** For a fair comparison, the architecture of various methods adopts ViT-B. ViTDet [28] is adopted as the detection framework, and our EMAE achieves impressive performance and outperforms the previous SOTA self-supervised learning method MAE. \begin{table} \begin{tabular}{l|c|c|c|c} \hline \hline MethodPre-epochs & 100 & 200 & 300 & 800 \\ \hline baseline & 54.8\% & 58.8\% & 61.5\% & 64.4\% \\ \hline + whole data utilization & 60.0\% & 63.4\% & 65.3\% & 68.4\% \\ \hline ++ self-consistency & **60.9\%** & **65.0\%** & **68.4\%** & **70.4\%** \\ \hline \hline \end{tabular} \end{table} Table 5: **Ablations for EMAE: Effect of whole data utilization and the self-consistency module.** We report the results of linear evaluation on ImageNet. epochs (_e.g_., \(100\), \(200\), \(300\), and \(800\) pre-training epochs) as our baseline. The third line indicates that the whole data is divided into \(4\) non-overlap parts under different pre-training epochs. Compared with the baseline, the results of the third line surpass the MAE by around \(4.0\%\sim 5.2\%\) with the same pre-training epochs, demonstrating that sufficient training data can effectively improve performance. Moreover, these results illustrate the importance of exploiting the whole data, which can efficiently increase data utilization and improve pre-training efficiency. **Effect of the self-consistency module.** To further improve consistency, we propose the self-consistency module to encourage the model to generate reliable representations in the pre-training process. 
On the basis of the whole data utilization design, the self-consistency loss is introduced to the training stage, and the results are listed in the fourth line of Table 5. Compared with the whole data utilization design, the results of the fourth line further enhance the performance of the pre-trained model, _e.g_., the results of the fourth line surpass that of the third line by around \(0.9\%\sim 2.9\%\) with the same pre-training epochs. Hence, EMAE outperforms the MAE by around \(6.0\%\sim 6.7\%\) with the same pre-training epochs. Notably, after the self-consistency module is introduced to the model, it can be observed that the reconstructed images generated by our EMAE end up matching closely each other in Figure 3. From the above description, it illustrates the effectiveness and superiority of the self-consistency mechanism, which further improves the performance and efficiency of MIM. **Effect of the \(K\) division.** According to Section 3.2.1, the \(K\) directly determines the mask ratio of each part, and the mask ratio is \(\frac{K-1}{K}\). In Table 6, it can be observed that our approach constantly surpasses the MAE with different mask ratios (_e.g_., \(\frac{3}{4}\), \(\frac{6}{7}\), and \(\frac{13}{14}\) mask ratios) in the same pre-training settings. Meanwhile, the performance of our approach acquires the best when the \(K\) is set to be \(4\) (the mask ratio is \(75\%\)), and the phenomenon also fits the observation of MAE about the mask ratio. ## 5 Conclusion In this paper, we investigate the two serious problems caused by the high mask ratio of masked image modeling, namely inefficient pre-training and the high inconsistency of the pre-trained model. To overcome the above problems, we propose an approach, called efficient masked autoencoders with self-consistency (EMAE). The proposed EMAE exploits the whole data to perform the self-supervised task, which improves the data utilization, and thus enhances the pre-training efficiency. At the same time, the self-consistency module is proposed to decrease the uncertainty and inconsistency of masked image modeling, which further improves the performance. The proposed EMAE shows good versatility and scalability in multiple downstream visual tasks, such as linear evaluation, finetuning classification, object detection, instance segmentation, and semantic segmentation. We expect that our study can attract the community's attention to more efficient and reliable masked image modeling. **Limitations and social impacts.** In this work, we validate the performance of EMAE by constructing experi \begin{table} \begin{tabular}{l|c|c|c} \hline MethodPre-epochs & 100 & 300 & 800 \\ \hline (a) MAE mask ratio = \(\frac{3}{4}\) & 54.8\% & 61.5\% & 64.4\% \\ \hline (b) EMAE K = 4 & 60.9\% & 68.4\% & 70.4\% \\ \hline (c) MAE mask ratio = \(\frac{9}{7}\) & 53.3\% & 61.0\% & 63.9\% \\ \hline (d) EMAE K = 7 & 60.5\% & 66.5\% & 68.1\% \\ \hline (e) MAE mask ratio = \(\frac{13}{14}\) & 46.4\% & 54.7\% & 60.7\% \\ \hline (f) EMAE K = 14 & 52.5 \% & 61.0\% & 66.7\% \\ \hline \end{tabular} \end{table} Table 6: **Ablations for EMAE: Effect of the \(K\) division.** We report the results of linear evaluation on ImageNet. Figure 3: **Different reconstruction results of our EMAE correspond to different mask seeds.** Different combinations of visible patches are sampled from the same image by the random seeds, then these combinations are fed into our EMAE and EMAE generates the reconstructed images. Here, we reconstruct three images as examples. 
These reconstruction results contain similar semantics and closely match each other, demonstrating the effectiveness of our self-consistency module. ments on the ImageNet dataset. However, the promise of self-supervised learning is to establish a general feature extractor with larger datasets. We have not extended this method to larger datasets [43, 44, 51] and larger architectures (_e.g._, ViT-H) due to resource and time constraints. Meanwhile, EMAE predicts content based on learned statistics of the training dataset and as such will reflect biases in those data, including ones with negative social impacts. These limitations warrant further research and consideration when building upon this work to achieve a better self-supervised learning method.
Masked image modeling (MIM), inspired by masked language modeling (MLM) in natural language processing tasks, has been recognized as a strong self-supervised pre-training approach in computer vision. However, the high random mask ratio of MIM causes two serious problems: 1) the image data are insufficiently utilized in each iteration, which prolongs training; and 2) inconsistent predictions lead to unreliable generated results, that is, predictions for the same patch disagree across different mask rounds, ultimately causing semantic inconsistency in the generated outcomes. To address these problems, we propose efficient masked autoencoders with self-consistency (EMAE), which improve the training efficiency of MIM and enhance its consistency. Specifically, the image is divided into K non-overlapping parts,
2309.10243
Transferable Adversarial Attack on Image Tampering Localization
It is significant to evaluate the security of existing digital image tampering localization algorithms in real-world applications. In this paper, we propose an adversarial attack scheme to reveal the reliability of such tampering localizers, which would be fooled and fail to predict altered regions correctly. Specifically, the adversarial examples based on optimization and gradient are implemented for white/black-box attacks. Correspondingly, the adversarial example is optimized via reverse gradient propagation, and the perturbation is added adaptively in the direction of gradient rising. The black-box attack is achieved by relying on the transferability of such adversarial examples to different localizers. Extensive evaluations verify that the proposed attack sharply reduces the localization accuracy while preserving high visual quality of the attacked images.
Yuqi Wang, Gang Cao, Zijie Lou, Haochen Zhu
2023-09-19T01:48:01
http://arxiv.org/abs/2309.10243v1
# Transferable Adversarial Attack on Image Tampering Localization ###### Abstract It is significant to evaluate the security of existing digital image tampering localization algorithms in real-world applications. In this paper, we propose an adversarial attack scheme to reveal the reliability of such tampering localizers, which would be fooled and fail to predict altered regions correctly. Specifically, the adversarial examples based on optimization and gradient are implemented for white/black-box attacks. Correspondingly, the adversarial example is optimized via reverse gradient propagation, and the perturbation is added adaptively in the direction of gradient rising. The black-box attack is achieved by relying on the transferability of such adversarial examples to different localizers. Extensive evaluations verify that the proposed attack sharply reduces the localization accuracy while preserving high visual quality of the attacked images. Yuqi Wang\({}^{1,2}\), Gang Cao\({}^{1,2*}\), Zijie Lou\({}^{1,2}\), Haochen Zhu\({}^{1,2}\)\({}^{1}\)State Key Laboratory of Media Convergence and Communication, Communication University of China, Beijing 100024, China \({}^{2}\)School of Computer and Cyber Sciences, Communication University of China, Beijing 100024, China Anti-forensics, Adversarial attack, Adversarial example, Transferability, Image tampering localization ## 1 Introduction Since digital image editing becomes easy, image authenticity is queried frequently. It is important to develop digital forensic techniques for detecting image forgeries. Many effective image tampering localization algorithms base on deep learning [1-8] have been proposed in recent years. Such algorithms effectively learn internal forensic traces from the training data. Specifically, the local consistency-based image splicing localizers, such as Noiseprint [6], EXIF-Net [7] and Forensic Similarity Graph [8], regard tampering localization as an anomaly detection problem. Such localizers rely on the extraction and consistency-checking of appropriate local features. There also exists another type of more effective localizers [1-5], which regard the localization as image semantic segmentation. The tampering localization map generated by encoder/decoder networks consists of pixel-level binary real/falsified labels. It is significant to evaluate the security of such tampering localization algorithms against malicious attacks in real-world applications. Different from the image classification scenario, tampering localization involves the pixel-level prediction of tampering probability. As a result, it is necessary to specially address the adversarial attack on existing tampering localizers from the view of anti-forensics. Previous attacks on image forensic algorithms are generally based on artificial features [9], Generative Adversarial Network (GAN) [10, 11] and adversarial examples [12, 13]. The artificial feature method is typically targeted to a specific forensic algorithm, and fails to address the deep learning-based tampering localizers. Xie _et al._ proposed a GAN-based method to attack the global manipulation detection schemes [10]. In the latest literature [11], forensic traces are synthesized by a two-phase GAN to deceive three local consistency-based image splicing localizers [6-8]. However, such a method can not be used for attacking the other major category of localizers, i.e., the segment-based ones [1-5]. 
Adversarial examples exploit the vulnerability of neural networks by adding minor perturbations to the inputs, resulting in forensic errors [12]. In [14], the optimization-based adversarial example method is employed to attack convolutional neural network (CNN)-based global manipulation detectors. The gradient of the output score function with respect to pixel values is explored in depth. The common gradient-based adversarial example algorithms, including the Fast Gradient Sign Method (FGSM) [15], Jacobian-based Saliency Map Attack (JSMA) [16] and Projected Gradient Descent (PGD) [17], have been used to attack global manipulation detectors [12] and source camera identification models [13]. As far as we know, there are no prior works on attacking tampering localizers via adversarial examples. To address the deficiencies of prior works, here we propose effective adversarial attacks on both the local consistency-based and segmentation-based tampering localizers. Specifically, two practical adversarial example methods are presented in a unified attack framework. In the optimization-based attack, the attacked image forgery is treated as the parameter to be optimized via the Adam optimizer [18]. In the gradient-based attack, the invisible perturbation yielded by FGSM is added to the tampered image along the gradient ascent direction. The transfer-based black-box attack is achieved by applying the adversarial examples generated in the white-box scenario to other localizers. Extensive evaluations verify the effectiveness of our proposed attack methods. In the rest of this paper, the detailed attack scheme is proposed in Section 2. Performance testing experiments are given in Section 3, followed by the conclusion drawn in Section 4. ## 2 Proposed Adversarial Attacks In this section, we first present the attack framework on tampering localizers. Then two specific attack methods are described in Subsections 2.2 and 2.3, respectively. ### Attack framework on tampering localizers Let the targeted tampering localization method to be attacked, namely the victim localizer, be denoted by \(y=f_{\theta}(x)\). Here, \(x\in[0,1]^{H\times W\times 3}\) denotes the normalized input image forgery with \(H\times W\) pixels, and \(y\in[0,1]^{H\times W}\) is the pixel-wise prediction probability map. \(\theta\) denotes the model parameters. For the pixel \(x_{i,j,k}\) at position \((i,j)\), a value of \(y_{i,j}\) closer to 1 signifies a higher probability that the pixel is tampered. Let \(y^{\theta}\in\{0,1\}^{H\times W}\) be the ground truth of the forged image \(x\), where the values 1, 0 mark the tampered and pristine pixels, respectively. Let \(x^{*}\) be the generated adversarial example image. The corresponding localization map \(y^{*}\) predicted by the victim localizer \(f_{\theta}\) is \[y^{*}=f_{\theta}(x^{*}). \tag{1}\] Generating adversarial examples can be formulated as finding an instance \(x^{*}=x+\delta\) which satisfies the constraints \[\left\{\begin{array}{l}y^{*}\to y^{t}\\ D(x,x+\delta)\leq B\\ x+\delta\in[0,1]^{H\times W\times 3}\end{array}\right. \tag{2}\] where \(\delta\) is the perturbation quantity. \(y^{t}\) is the target prediction probability map, and \(y^{*}\to y^{t}\) denotes \(y^{*}\) approaching \(y^{t}\). The attack aims to find a suitable \(\delta\) that makes \(y^{*}\) approach \(y^{t}\) while limiting the visual distortion \(D(x,x+\delta)\) below a constant \(B\). \(D(\cdot,\cdot)\) is typically realized by an \(L_{p}\) norm. \(x^{*}\) should still be a valid image.
Within the above framework, we propose two specific white-box attack methods for generating the adversarial example \(x^{*}\), i.e., the optimization-based and the gradient-based attack. Note that the adversarial examples yielded in such white-box attacks are directly applied to the black-box attack against other localizers. The transferability of adversarial examples is exploited due to the limited knowledge of the tampering localizers. ### Optimization-based attack method In this attack, the attacked forged image \(x^{*}\) is regarded as the variable to be optimized [19]. In terms of Eq. (2), generating the adversarial example can be approximated as the following optimization problem: \[\begin{array}{ll}\underset{\delta}{\text{minimize}}&D(x,x+\delta)\\ \text{such that}&y^{*}\to 0^{H\times W}\\ &x+\delta\in[0,1]^{H\times W\times 3}.\end{array} \tag{3}\] It finds \(\delta\) that minimizes \(D\) and makes \(y^{*}\) tend to the zero matrix \(y^{t}=O^{H\times W}\). Furthermore, Eq. (3) can be reformulated as \[\underset{\delta}{\text{minimize}} \lambda\|\delta\|_{2}+l(y^{*},O^{H\times W})\] \[\text{such that}\ x+\delta\in[0,1]^{H\times W\times 3} \tag{4}\] where \(l(\cdot,\cdot)\) is the binary cross-entropy (BCE) loss function that measures the distance between the prediction probability map \(y^{*}\) and the target prediction probability map \(O^{H\times W}\). \(\lambda\) controls the weight of the perturbation penalty, and the perturbation magnitude is measured by the \(L_{2}\) norm. The detailed iterative process of the optimization-based attack is illustrated in Fig. 1. In each iteration, the adversarial example \(x^{*}_{i}\) is input to the victim localizer to generate the prediction probability map \(y^{*}_{i}\). Then the loss between \(y^{*}_{i}\) and the target map \(O^{H\times W}\) is calculated. \(\delta_{i+1}\) is obtained by solving the minimization problem described in Eq. (4) via the Adam optimizer. Finally, the adversarial example image is updated by \(x^{*}_{i+1}=x+\delta_{i+1}\), followed by clipping into the range \([0,1]\). Note that the perturbation is added globally, since a local modification to the tampered regions may still leave some new inconsistency. Besides, the loss value is also computed on the global image, which aims to incur less difference between the tampered and unaltered regions. ### Gradient-based attack method Inspired by [12], the popular gradient-based adversarial example method FGSM [15] is used to attack tampering localizers. FGSM takes advantage of a linear approximation of the victim localizer for fast generation. Adding a small perturbation in the gradient ascent direction can enlarge the loss value of the victim localizer dramatically. Generating the adversarial example \(x^{*}\) via FGSM can be formulated as \[x^{*}=clip\left(x+\varepsilon\cdot sign\big{(}\nabla_{x}l(y,y^{\theta})\big{)} \right) \tag{5}\] where \(l(y,y^{\theta})\) is the loss function of the victim localizer at the training phase. By calculating the gradient of the loss with respect to the input forgery image \(x\), the perturbation is added in the direction of gradient ascent, denoted by \(sign\big{(}\nabla_{x}l\big{)}\). The magnitude of the perturbation is constrained by \(\|\delta\|_{\infty}\leq\varepsilon\). Finally, to ensure the validity of the generated adversarial examples, clipping into the range \([0,1]\) is performed. ## 3 Experiments In this section, the performance evaluation experiments for the proposed attack methods are presented in detail.
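As a concrete reference for the two attacks of Sections 2.2 and 2.3 used throughout the experiments, the following is a hedged PyTorch-style sketch. The `localizer(x)` interface (returning per-pixel tampering probabilities in [0, 1]) and the default value of \(\lambda\) are our own assumptions for illustration; the learning rate, iteration count, and FGSM step size follow the settings reported in Section 3.1.3.

```
import torch
import torch.nn.functional as F

def optimization_attack(localizer, x, lam=0.01, lr=0.003, iters=30):
    """Optimization-based attack of Eq. (4): find delta so that the predicted map
    approaches the all-zero (pristine) map, with an L2 penalty on the perturbation.
    `lam` is a placeholder; the paper tunes it to keep PSNR above 34 dB."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(iters):
        x_adv = (x + delta).clamp(0, 1)                  # keep a valid image
        y = localizer(x_adv)                             # per-pixel tampering probabilities
        loss = lam * delta.norm(p=2) + F.binary_cross_entropy(y, torch.zeros_like(y))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta.detach()).clamp(0, 1)

def fgsm_attack(localizer, x, y_gt, eps=0.02):
    """Gradient-based attack of Eq. (5): one signed gradient-ascent step on the
    localizer's training loss, then clipping back to a valid image.
    `eps` is the step size, chosen per dataset in Section 3.1.3."""
    x = x.clone().requires_grad_(True)
    loss = F.binary_cross_entropy(localizer(x), y_gt)
    loss.backward()
    return (x + eps * x.grad.sign()).detach().clamp(0, 1)
```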
\begin{table} \begin{tabular}{r|c c c c} \hline \multicolumn{1}{c}{Attack} & Method & **Before** & **Opt-OSN** & **Opt-MVSS** & **Opt-PSCC** \\ \hline \multirow{2}{*}{\begin{tabular}{c} F1 \\ (\(d\times 100\)) \\ \end{tabular} } & OSN[1] &.51 &.05 (90) &.23 (55) &.33 (35) \\ & MVSS[2] &.45 &.13 (71) &.03 (93) &.19 (58) \\ & PSCC[3] &.46 &.23 (50) &.23 (50) &.13 (72) \\ \hline \multirow{2}{*}{\begin{tabular}{c} IoU \\ (\(d\times 100\)) \\ \end{tabular} } & OSN[1] &.47 &.04 (91) &.20 (57) &.29 (37) \\ & MVSS[2] &.40 &.10 (75) &.02 (95) &.15 (61) \\ & PSCC[3] &.41 &.20 (51) &.20 (51) &.11 (73) \\ \hline \multirow{2}{*}{ \begin{tabular}{c} PSNR (dB) \\ SSIM \\ \end{tabular} } & — & 35.54 & 35.54 & 35.20 \\ & — & 0.95 & 0.95 & 0.95 \\ \hline \end{tabular} \end{table} Table 1: Localization accuracy and visual quality comparison before and after the optimization-based attack on CASIAv1 dataset. The results of white-box attacks are underlined. Figure 1: The proposed optimization-based attack scheme against image tampering localization algorithms. ### Experimental setting #### 3.1.1 Datasets and localization algorithms Test datasets include CASIAv1[20], Columbia[21], Coverage[22], DSO[23] and IMD[24] with 920, 160, 100, 100 and 2010 forged images, respectively. Due to limited computing resources, we follow the prior work[4] to crop some oversized images to \(1096\times 1440\) pixels for preparing the test images. The attack performance is tested against six state-of-the-art image tampering localization algorithms: OSN[1], MVSS[2], PSCC[3], CAT[4], TruFor[5] and Noiseprint[6]. The first three are used as victim localizers for white-box attack. The officially released models of such localizers are used to generate adversarial examples. Noiseprint can not work on the CASIAv1 and IMD images due to too uniform content or small resolution. #### 3.1.2 Evaluation metrics The localization accuracy metrics, i.e., F1 and Intersection over Union (IoU) are widely adopted by the existing works on tampering localization. F1 is the harmonic average of precision and recall, and IoU measures the similarity between the predicted area and the ground truth. Such metric values before and after the attack, and their decrease rate (\(d\)) are shown to evaluate the performance of attack methods. Here, \(d\) is defined as the ratio between the decrement value and the measurement before attacks. Meanwhile, Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) are used to evaluate the visual quality of attacked images. #### 3.1.3 Parameter Setting In the optimization-based attack, the adversarial example is initialized with the forged image, i.e., \(x^{*}=x\), the Adam optimizer is implemented with a learning rate of 0.003, and the number of iterations is set to 30 epochs. The optimal \(\lambda\) that achieves a good attack performance while maintaining a certain level of visual quality is searched through experiments. We select the parameters that make the attack most effective, while keeping the PSNR greater than 34 dB. In the gradient-based attack, we set the step size as 0.02, 0.02, 0.001, 0.01 and 0.01 for the five datasets respectively to achieve similar PSNR values. ### Influence of victim localizer in white-box attack First, we apply optimization-based attack to MVSS, OSN and PSCC on CASIAv1 dataset for choosing the best victim localizer in the white-box scenario. Table 1 shows the F1, IoU and decrease rate before and after optimization-based white-box attacks and their transferability. 
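For reference, the pixel-level metrics reported in Table 1 and in the following tables can be computed from binary masks as in this hedged sketch (the function names are ours); the decrease rate \(d\) is the relative drop of a metric after the attack, reported multiplied by 100 in the tables.

```
import numpy as np

def f1_iou(pred_mask, gt_mask):
    """Pixel-level F1 and IoU between binary masks, where 1 marks tampered pixels."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    f1 = 2 * tp / (pred.sum() + gt.sum() + 1e-12)          # 2TP / (2TP + FP + FN)
    iou = tp / (np.logical_or(pred, gt).sum() + 1e-12)     # TP / (TP + FP + FN)
    return f1, iou

def decrease_rate(metric_before, metric_after):
    """Decrease rate d: decrement after the attack relative to the value before."""
    return (metric_before - metric_after) / metric_before
```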
It is observed that the optimization-based attack can significantly reduce the image tampering localization accuracy. Moreover, the decrease rate can exceed 70% in the white-box scenario. In addition, the optimization-based attack shows strong transferability: the localization accuracy of the other forensic methods is also degraded, by at least 35%. As can be seen from Table 1, the selection of the victim localizer has no obvious impact on the attack effect in the white-box scenario. The adversarial examples generated against OSN and MVSS have comparable attack performance, while those generated against PSCC are slightly less effective; the white-box attack against PSCC only decreases the localization accuracy by about 70%. The same conclusion holds for the gradient-based attack. For the sake of consistency and without loss of generality, OSN is chosen as the victim localizer in all the following white-box attack experiments. ### Transferability in black-box attack To further demonstrate the transferability of our attack, we evaluate the effectiveness of the optimization-based and gradient-based attacks on more datasets and localizers. The attack performance on four different datasets is presented in Table 2. We find that the adversarial examples generated in the white-box scenario have a certain transferability. In the white-box scenario, the optimization-based and gradient-based attacks can reduce the accuracy of the victim localizer by at least 49%. As can be seen from the results on the DSO dataset, the white-box attack can reduce OSN accuracy by nearly 100%. In the white-box scenario, the knowledge about the victim localizer can be fully accessed to help attack the target localizer. As a result, white-box attacks can significantly degrade the performance of the victim localizer.
However, the attack performance is not as obviously in black-box scenarios when compared to white-box \begin{table} \begin{tabular}{l l l l l l l l l l l l l l l} \hline \multicolumn{2}{c}{Dataset} & \multicolumn{3}{c}{Columbia} & \multicolumn{3}{c}{Coverage} & \multicolumn{3}{c}{DSO} & \multicolumn{3}{c}{IMD} \\ \cline{3-14} \multicolumn{2}{c}{Attack Method} & **Before** & **Opt** & **FGSM** & **Before** & **Opt** & **FGSM** & **Before** & **Opt** & **FGSM** & **Before** & **Opt** & **FGSM** \\ \hline \multirow{4}{*}{\begin{tabular}{} \end{tabular} } & OSN[1] &.71 &.12(83) &.26(64) &.26 &.11(58) &.13(52) &.47 &.01(98) &.00(100) &.50 &.04(92) &.00(100) \\ & MVSS[2] &.64 &.55(14) &.53(17) &.45 &.21(54) &.24(48) &.30 &.17(43) &.21(29) &.27 &.11(59) &.14(48) \\ & PSCC[3] &.62 &.36(41) &.45(27) &.44 &.13(71) &.13(70) &.53 &.00(100) &.00(100) &.16 &.01(94) &.02(89) \\ & CAT[4] &.79 &.91(-15) &.92(-15) &.29 &.34(-17) &.33(-15) &.33 &.04(88) &.07(79) &.67 &.20(70) &.26(61) \\ & TruFor[5] &.81 &.73(10) &.71(12) &.53 &.34(35) &.36(32) &.90 &.35(62) &.42(53) &.72 &.43(41) &.47(35) \\ & Noiseprint[6] &.36 &.16(56) &.13(64) &.15 &.12(20) &.15(-3) &.29 &.04(86) &.05(84) & — & — & — \\ \hline \multirow{4}{*}{\begin{tabular}{} \end{tabular} } & OSN[1] &.61 &.09(85) &.20(68) &.18 &.08(55) &.09(49) &.34 &.00(100) &.03(91) &.40 &.03(93) &.06(85) \\ & MVSS[2] &.60 &.45(24) &.44(26) &.38 &.17(56) &.19(51) &.22 &.12(45) &.15(31) &.21 &.08(62) &.10(50) \\ & PSCC[3] &.48 &.27(44) &.35(28) &.34 &.11(67) &.11(69) &.42 &.00(100) &.00(100) &.13 &.01(92) &.01(90) \\ & CAT[4] &.75 &.88(-18) &.89(-19) &.23 &.26(-13) &.26(-12) &.28 &.03(89) &.04(85) &.59 &.15(75) &.20(65) \\ & TruFor[5] &.75 &.64(15) &.62(18) &.45 &.28(39) &.29(36) &.85 &.27(68) &.33(61) &.63 &.34(46) &.38(39) \\ & Noiseprint[6] &.26 &.09(65) &.08(71) &.09 &.07(21) &.09(-4) &.21 &.02(90) &.03(88) & — & — \\ \hline \multirow{4}{*}{ \begin{tabular}{} \end{tabular} } & PSNR (dB) & — & 35.34 & 34.92 & — & 34.19 & 34.20 & — & 35.02 & 36.51 & — & 35.00 & 36.90 \\ & SSIM & — & 0.87 & 0.85 & — & 0.94 & 0.94 & — & 0.87 & 0.91 & — & 0.89 & 0.94 \\ \hline \end{tabular} \end{table} Table 2: Localization accuracy and visual quality comparison before and after attacks on more other datasets and localizers. The results of white-box attacks are underlined. attacks. Take the test results on the Columbia dataset as an example, the adversarial examples transferred to Noiseprint have the best attack performance, the decrease rate reach to 60%. However, while transferred to CAT, the localization performance after attack is even better. The black-box attack only uses the adversarial examples generated in the white-box scenario. And the victim localizer is not similar to the target localizer. Therefore, the attack performance in the black-box scenario is not as well as white-box scenario. Additionally, owing to the significant disparity between CAT and OSN, combined with CAT's insensitivity to minor perturbations, the accuracy of CAT is even improved when adversarial example is transferred to this localizer. As for the gradient-based black-box attack against Noiseprint on Coverage dataset, possibly due to poor performance before attack, the accuracy after the attack is slightly improved. It can be concluded that in most cases, the adversarial examples generated for OSN based on optimization and gradient perform well in both white-box and black-box scenarios. 
Qualitative evaluation results of the optimization-based attack against different localization algorithms are shown in Fig. 2. It can be observed that the adversarial perturbation is difficult to perceive. The predicted masks after attacks indicate that the localizers fail to locate the tampered regions accurately. ### Performance comparison with other attacks The code for GAN-based attack method [11] is not released, so we cannot compare our attack methods with it. Therefore, we compare proposed methods with common post-processing attacks on CASIAv1. JPEG compression [25] with the factor of 55, median filter [25] with the kernel 3\(\times\)3 and JPEG compression after median filtering are tested. The comparison results are shown in Table 3. Compared with JPEG compression and median filtering attacks, adversarial attack can reduce the accuracy of the tampering localization algorithms while maintaining better visual quality. JPEG compression after median filtering has the best attack performance, except for TruFor. The localization accuracy of TruFor has only reduced by about 40%, all other localizers have reduced by more than 59%. However, such combined post-processing attack also leads to severely degraded images. The average PSNR of attacked images is only 26.06 dB. This indicates that the method sacrifices too much visual quality in order to improve attack performance. ## 4 Conclusion In this work, we propose an effective adversarial attack scheme to evaluate the security of the state-of-the-art image tampering localization algorithms. The attack on tampering localizers is first formulated formally, then two specific adversarial example attack methods are presented under a unified attack framework. In both white and black-box scenarios, the accuracies of state-of-the-art tampering localizers are significantly reduced by our proposed attacks. Meanwhile, the adversarial example images enjoy good transferability and visual transparency. Our attack methods also outperform other existing attacks. 
\begin{table} \begin{tabular}{l l c c c c c c} \multicolumn{1}{c}{} & \multicolumn{1}{c}{Attack} & \multicolumn{1}{c}{**Before**} & \multicolumn{1}{c}{**JPEG**} & \multicolumn{1}{c}{**Median**} & \multicolumn{1}{c}{**Median**} & \multicolumn{1}{c}{**Opt**} & \multicolumn{1}{c}{**FGSM**} \\ \multicolumn{1}{c}{} & Method & **[25]** & **[25]** & **Median** & **Median** & **Opt** & **FGSM** \\ \multicolumn{1}{c}{} & OSN[1] &.51 &.26 (48) &.37 (28) &.18 (64) &.06 (88) &.05 (90) \\ \multicolumn{1}{c}{} & MVSS[2] &.45 &.15 (68) &.39 (14) &.18 (59) &.12 (73) &.13 (71) \\ \multicolumn{1}{c}{} & PSCC[3] &.46 &.18 (62) &.26 (44) &.03 (93) &.24 (48) &.23 (50) \\ \multicolumn{1}{c}{} & CAT[4] &.72 &.29 (59) &.12 (83) &.17 (76) &.41 (43) &.40 (44) \\ \multicolumn{1}{c}{} & TruFor[5] &.69 &.57 (17) &.51 (27) &.44 (37) &.44 (36) &.49 (29) \\ \multicolumn{1}{c}{} & OSN[1] &.47 &.23 (51) &.31 (33) &.15 (67) &.04 (91) &.04 (91) \\ \multicolumn{1}{c}{} & MVSS[2] &.40 &.12 (71) &.33 (17) &.14 (64) &.09 (78) &.10 (75) \\ \multicolumn{1}{c}{} & PSCC[3] &.41 &.14 (66) &.19 (53) &.03 (94) &.21 (49) &.20 (51) \\ \multicolumn{1}{c}{} & CAT[4] &.64 &.24 (62) &.09 (86) &.13 (80) &.35 (45) &.34 (47) \\ \multicolumn{1}{c}{} & TruFor[5] &.63 &.50 (20) &.45 (28) &.38 (40) &.39 (38) &.43 (32) \\ \multicolumn{1}{c}{} & PSNR (dB) & — & 30.43 & 26.87 & 26.06 & 35.54 & 35.05 \\ \multicolumn{1}{c}{} & SSIM & — & 0.93 & 0.83 & 0.80 & 0.95 & 0.94 \\ \end{tabular} \end{table} Table 3: Performance comparison with other attack methods on CASIAv1. The results of white-box attacks are underlined. Figure 2: Results of the optimization-based attack against different tampering localization algorithms on five example forged images. Here, * denotes the results of white-box attacks.
It is important to evaluate the security of existing digital image tampering localization algorithms in real-world applications. In this paper, we propose an adversarial attack scheme to reveal the reliability of such tampering localizers, which would be fooled and fail to predict the tampered regions correctly. Specifically, adversarial examples based on optimization and gradients are implemented for white/black-box attacks. Correspondingly, the adversarial example is optimized via reverse gradient propagation, and the perturbation is added adaptively in the direction of gradient ascent. The black-box attack is achieved by relying on the transferability of such adversarial examples to different localizers. Extensive evaluations verify that the proposed attack sharply reduces the localization accuracy while preserving the high visual quality of the attacked images.
2309.16632
Sparse Submodular Function Minimization
In this paper we study the problem of minimizing a submodular function $f : 2^V \rightarrow \mathbb{R}$ that is guaranteed to have a $k$-sparse minimizer. We give a deterministic algorithm that computes an additive $\epsilon$-approximate minimizer of such $f$ in $\widetilde{O}(\mathsf{poly}(k) \log(|f|/\epsilon))$ parallel depth using a polynomial number of queries to an evaluation oracle of $f$, where $|f| = \max_{S \subseteq V} |f(S)|$. Further, we give a randomized algorithm that computes an exact minimizer of $f$ with high probability using $\widetilde{O}(|V| \cdot \mathsf{poly}(k))$ queries and polynomial time. When $k = \widetilde{O}(1)$, our algorithms use either nearly-constant parallel depth or a nearly-linear number of evaluation oracle queries. All previous algorithms for this problem either use $\Omega(|V|)$ parallel depth or $\Omega(|V|^2)$ queries. In contrast to state-of-the-art weakly-polynomial and strongly-polynomial time algorithms for SFM, our algorithms use first-order optimization methods, e.g., mirror descent and follow the regularized leader. We introduce what we call {\em sparse dual certificates}, which encode information on the structure of sparse minimizers, and both our parallel and sequential algorithms provide new algorithmic tools for allowing first-order optimization methods to efficiently compute them. Correspondingly, our algorithm does not invoke fast matrix multiplication or general linear system solvers and in this sense is more combinatorial than previous state-of-the-art methods.
Andrei Graur, Haotian Jiang, Aaron Sidford
2023-09-28T17:38:13
http://arxiv.org/abs/2309.16632v2
# Sparse Submodular Function Minimization ###### Abstract In this paper we study the problem of minimizing a submodular function \(f:2^{V}\rightarrow\mathbb{R}\) that is guaranteed to have a \(k\)-sparse minimizer. We give a deterministic algorithm that computes an additive \(\epsilon\)-approximate minimizer of such \(f\) in \(\widetilde{O}(\mathsf{poly}(k)\log(|f|/\epsilon))\) parallel depth using a polynomial number of queries to an evaluation oracle of \(f\), where \(|f|=\max_{\leq\leq V}|f(S)|\). Further, we give a randomized algorithm that computes an exact minimizer of \(f\) with high probability using \(\widetilde{O}(|V|\cdot\mathsf{poly}(k))\) queries and polynomial time. When \(k=\widetilde{O}(1)\), our algorithms use either nearly-constant parallel depth or a nearly-linear number of evaluation oracle queries. All previous algorithms for this problem either use \(\Omega(|V|)\) parallel depth or \(\Omega(|V|^{2})\) queries. In contrast to state-of-the-art weakly-polynomial and strongly-polynomial time algorithms for SFM, our algorithms use first-order optimization methods, e.g., mirror descent and follow the regularized leader. We introduce what we call _sparse dual certificates_, which encode information on the structure of sparse minimizers, and both our parallel and sequential algorithms provide new algorithmic tools for allowing first-order optimization methods to efficiently compute them. Correspondingly, our algorithm does not invoke fast matrix multiplication or general linear system solvers and in this sense is more combinatorial than previous state-of-the-art methods. ###### Contents * 1 Introduction * 1.1 Challenges and Additional Motivations * 1.2 Our Results * 1.3 Related Work * 1.4 Paper Organization * 2 Our Approach * 2.1 Framework * 2.2 Parallel Algorithm * 2.3 Sequential Algorithm * 3 Preliminaries * 3.1 Notation * 3.2 Submodular Functions and Lovasz Extension * 4 Our Framework * 4.1 Arc Information and Extension Maintainer * 4.2 Meta Algorithm * 4.3 Sparse Dual Certificate * 5 Poly\((k)\)-Depth Parallel Algorithm for \(k\)-sparse SFM * 5.1 Mirror Descent * 5.2 Dual Certificate via Mirror Descent with Truncation * 5.3 Dimensionality Reduction for Parallel Algorithm * 5.4 Arc Finding for Parallel Algorithm * 5.5 Putting It All Together: Proof of Theorem 1.1 * 6 \(\widetilde{O}(n\cdot\mathsf{poly}(k))\)-Query Algorithm for \(k\)-sparse SFM * 6.1 Dual Certificate via Stochastic Follow-the-Regularized-Leader * 6.2 Dimensionality Reduction for Sequential Algorithm * 6.3 Arc Finding for Sequential Algorithm * 6.4 Putting It All Together * 7 Ring Family and Extensions * A Optimization Methods * A.1 Stochastic Follow-the-Regularized-Leader * A.2 Stochastic Multiplicative Weights * B Proximal Step Over \(k\)-simplex Introduction Submodular function minimization (SFM) is a foundational problem in combinatorial optimization. Submodular functions encompass a wide range of functions that appear naturally in practical applications, including graph cut functions, matroid rank functions, set coverage functions, and utility functions from economics. Since seminal work of Edmonds in 1970 [1], SFM has served as a central tool in many areas such as theoretical computer science, operations research, game theory, and recently, machine learning. We refer interested readers to surveys [14, 15] for a more comprehensive account of the rich history of SFM. Throughout this paper we consider a standard setting for SFM. 
We are given a set function \(f:2^{V}\to\mathbb{R}\), where \(V\) is an \(n\)-element finite set, known as the _ground set_, and \(f\) is submodular, i.e., \[f(S\cup\{i\})-f(S)\geq f(T\cup\{i\})-f(T)\text{ for all }S\subseteq T\subseteq V \text{ with }i\notin T.\] Furthermore, we assume that \(f\) is accessed only through an _evaluation oracle_ which when queried at any \(S\subseteq V\) outputs \(f(S)\) in time EO. We let \(|f|\stackrel{{\text{\tiny def}}}{{=}}\max_{S\subseteq V}|f(S)|\) and \(f^{*}\stackrel{{\text{\tiny def}}}{{=}}\min_{S\subseteq V}f(S)\) and consider the problem of computing an \(\epsilon\)_-approximate minimizer_, i.e., \(S\subseteq V\) with \(f(S)\leq f^{*}+\epsilon\). Since seminal work of Grotschel, Lovasz, and Schrijver [11] showed that SFM can be solved in polynomial time, there have been multiple advances in SFM over the last few decades [16, 15, 14, 17, 18, 19, 20, 21]. In this paper, we focus on algorithms that solve SFM to _high accuracy_ with a _polynomial query complexity_, meaning that they solve the problem with a number of queries to an evaluation oracle that scale _weakly-polynomially_ (\(\poly(n,\log(|f|/\epsilon))\)) [11] or _strongly-polynomially_ (\(\poly(n)\)) [11, 12].1 Current state-of-the-art SFM algorithms in these regimes are weakly-polynomial \(\widetilde{O}(n^{2}\log(n|f|/\epsilon))\)-query, polynomial-time algorithms [13, 14, 15, 16], strongly-polynomial \(\widetilde{O}(n^{3})\)-query, polynomial-time algorithms [17, 18, 19], and a strongly-polynomial \(\widetilde{O}(n^{2})\)-query, exponential-time algorithm [16] (see Section 1.3 for more details)2. Footnote 1: When \(f\) is integer valued, any \(\epsilon<1\) approximate solution is optimal; a variety of the prior work consider only this setting. Throughout the paper we do not distinguish between prior work which consider exactly solving SFM integer valued \(f\) (with a dependence on \(|f|\)) and those that work in the more general setting we consider in this paper. Footnote 2: Throughout the paper we use \(\widetilde{O}(\cdot)\) to hide \(O(\poly(\log n))\) factors. On the hardness side, however, the current state-of-the-art lower bounds exclude algorithms making fewer than \(\Omega(n\log n)\) queries in the strongly-polynomial regimes [11] and fewer than \(\Omega(n)\) queries in the weakly-polynomial regime [12, 21]. Consequently, there are large, \(\Omega(n)\) gaps, between these lower bounds and the best known upper bounds. Unfortunately, obtaining nearly-linear (or provably near-optimal) query complexity algorithms for SFM has been elusive. In light of these developments, it is natural to ask, what additional structural assumptions may be needed to enable faster algorithms? One recent line of work has explored the complexity of _decomposable SFM_[15, 16, 17, 18, 19, 20, 21], that is the special case where \(f(S)=\sum_{i}f_{i}(S\cap T_{i})\) for submodular \(f_{i}\) and sparse \(T_{i}\) given an oracle for evaluating the individual \(f_{i}\) over \(T_{i}\). A different line of work [21, 22] considers the complexity of _approximate_ SFM when the minimizer is \(k\)-sparse, which we refer to as _\(k\)-sparse SFM_ for brevity.3 We refer to an SFM algorithm as approximate, if its query complexity is _pseudo-polynomial_, i.e., \(O(\poly(n,|f|/\epsilon))\). The state-of-the-art approximate \(k\)-sparse SFM algorithm has a query complexity of \(\widetilde{O}(k(|f|/\epsilon)^{2})\), when \(f\) is integer valued and \(\epsilon<1\). 
Footnote 3: This problem is distinct from that of computing the minimum value \(k\)-sparse set for a submodular function. In both of these cases, sparsity plays a prominent role. In the specific context of SFM, while various polyhedral and geometric properties of submodular functions have been extensively studied and heavily exploited since the 1970s [1], these properties are mostly global, involving the entire set \(V\) altogether. On the other hand, assuming \(k\)-sparsity of the minimizer allows one to take a glimpse into local properties of submodularity, e.g., to understand the role a small number of elements play for the minimization of the function. Moreover, sparsity of the minimizer is a natural assumption in convex optimization and submodular function minimization problems. In particular, sparsity arises in signal processing, feature selection, compressed sensing, etc. where the solution is often expected to be sparse, i.e., have a small number of non-zero elements [12, 13]. Sparsity is also common in cases where a regularizer is added to the objective function to encourage sparsity. One example of such a setup is the problem of finding an optimal dataset for speech recognition tasks [11]. This problem can be written as \(f(S)+\lambda|S|\), where \(f\) is a submodular objective, and therefore it is expected that the size of the minimizing set is much smaller than the ground set for large values of the regularization coefficient \(\lambda\). Consequently, understanding how the complexity of algorithms depends on the sparsity leads to better insight into more refined combinatorial and geometric structures of the problems. Therefore, the central question we ask in this paper is: _Can we leverage sparsity to improve upon state-of-the-art polynomial query complexities?_ \(k\)-sparse SFM is also interesting in light of recent work [14] seeking to clarify the _parallel depth_ of SFM, i.e., the number of parallel rounds of queries to the evaluation oracle required for a query-efficient algorithm. The state-of-the-art parallel depth lower bounds are \(\Omega(n/\log n)\) in the strongly-polynomial regime [10], which matches the upper bound in [15] up to a factor of \(\log^{2}n\), and \(\widetilde{\Omega}(n^{1/3})\) in the weakly-polynomial regime [11]. These polynomial parallel depth lower bounds crucially rely on the minimizers being dense for the constructed submodular functions, and highly parallel algorithms might be possible when the submodular function admits a sparse minimizer. Therefore, we also ask: _Can we improve the parallel complexities for \(k\)-sparse SFM?_ Besides being interesting from an algorithmic perspective, obtaining improved parallel algorithms for \(k\)-sparse SFM could aid lower bound development by showing how hard-instances for lower bounds must have dense minimizers. ### Challenges and Additional Motivations Beyond intrinsic interest in improving the complexity of \(k\)-sparse SFM, this problem is also an interesting testbed for new techniques and a number of larger open problems on SFM. Here we briefly elaborate on these challenges and motivations for studying \(k\)-sparse SFM. State-of-the-art SFM algorithms typically leverage the _Lovasz extension_[16] of \(f\), a convex function \(\hat{f}:[0,1]^{V}\to\mathbb{R}\) that agrees with \(f\) on the hypercube's vertices, i.e., \(\hat{f}(\vec{1}_{S})=f(S)\) for all \(S\subseteq V\). It is known that \(\hat{f}\) can be evaluated efficiently and minimizing \(\hat{f}\) suffices for SFM (see Section 3). 
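As a concrete illustration that \(\hat{f}\) can indeed be evaluated efficiently, below is a small self-contained Python sketch of the standard greedy evaluation of the Lovasz extension (our own illustration, not code from this paper), using one sort of the coordinates and \(n+1\) distinct evaluation-oracle queries.

```
def lovasz_extension(f, x):
    """Evaluate the Lovasz extension f_hat at x in [0, 1]^V via the greedy formula:
    sort coordinates in decreasing order and telescope marginal values of f."""
    n = len(x)
    order = sorted(range(n), key=lambda i: -x[i])      # coordinates in decreasing order
    value = prev_f = f(frozenset())
    prev_set = frozenset()
    for i in order:
        cur_set = prev_set | {i}
        cur_f = f(cur_set)
        value += x[i] * (cur_f - prev_f)               # x_{pi(i)} * (f(S_i) - f(S_{i-1}))
        prev_set, prev_f = cur_set, cur_f
    return value

# Example: the cut function of the path graph 0-1-2 is submodular, and the
# extension agrees with f on indicator vectors of sets, e.g. f_hat(1_{0,2}) = f({0,2}).
edges = [(0, 1), (1, 2)]
cut = lambda S: sum(1 for u, v in edges if (u in S) != (v in S))
assert lovasz_extension(cut, [1.0, 0.0, 1.0]) == cut({0, 2}) == 2
print(lovasz_extension(cut, [0.5, 0.2, 0.7]))          # a fractional point in [0, 1]^V
```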
Consequently, SFM algorithms can be readily obtained by applying convex optimization methods to the Lovasz extension. Indeed, state-of-the-art weakly-polynomial SFM algorithms [10, 11] follow this approach by using _cutting plane methods_, a class of weakly-polynomial convex optimization methods, to obtain \(\epsilon\)-approximate minimizers in \(\widetilde{O}(n\log(1/\epsilon))\) parallel rounds of \(\widetilde{O}(n)\) queries per round. State-of-the-art strongly-polynomial SFM algorithms [10, 12, 13] carefully apply these weakly-polynomial cutting plane methods iteratively.

With the \(k\)-sparsity assumption on the solutions, a natural approach would be to apply these continuous optimization methods to minimize \(\hat{f}\) over \(S_{k}^{V}\stackrel{{\text{def}}}{{=}}\Delta_{k}^{V}\cap[0,1]^{V}\), where \(\Delta_{k}^{V}\stackrel{{\text{def}}}{{=}}\{x\in\mathbb{R}_{\geq 0}^{V}:\left\|x\right\|_{1}\leq k\}\) is the interior of the simplex scaled up by \(k\); this suffices for \(k\)-sparse SFM since \(\vec{1}_{S^{*}}\in S_{k}^{V}\) for the \(k\)-sparse minimizer \(S^{*}\subseteq V\). Unfortunately, while changing the domain from \([0,1]^{V}\) to \(S_{k}^{V}\) is known to improve the performance of certain pseudo-polynomial convex optimization methods (as in [11, 1]), it is not known to improve the performance of weakly-polynomial convex optimization algorithms (e.g., the state-of-the-art cutting plane method [13]) by more than logarithmic factors. Furthermore, without using more of the structure of \(\hat{f}\), it seems unlikely that this change of domain would affect the weakly-polynomial complexity by more than logarithmic factors, since one could scale a hard convex optimization problem to fit inside \(S_{k}^{V}\) without changing problem parameters by more than a polynomial factor.

These challenges call for the development of new optimization techniques that better utilize the structure of the Lovasz extension and the sparsity of the domain, which might lead to applications for a broader range of open problems on SFM. We note several of these additional motivations below.

Strongly-polynomial time \(O(n^{3-c})\)-query algorithm for SFM. One of the most important motivations is towards improving strongly-polynomial time SFM algorithms. The current best query complexity here is \(O(n^{3}\log\log n/\log n)\) given in [10], but this approach seems unlikely to provide further improvement given the stagnation of progress on obtaining a better approximation factor for the shortest vector problem, on which the algorithm in [10] crucially relies. Other state-of-the-art strongly-polynomial time SFM algorithms with \(\widetilde{O}(n^{3})\) query complexities in [12, 13] learn precedence constraints of the form: if \(p\in V\) is in a minimizer, then so is \(q\) (e.g., [14, 15, 16]). In the worst case, these algorithms might make \(\widetilde{O}(n^{2})\) queries to learn only a single coordinate that must be in a minimizer (or not), or for many coordinates \(p\in V\) a single \(q\in V\) that must be in any minimizer containing \(p\). This worst-case behavior is a key barrier towards obtaining strongly-polynomial time algorithms with \(O(n^{3-c})\) query complexities for constant \(c>0\). However, this worst-case behavior is sparse, and \(k\)-sparse SFM algorithms which better exploit local properties of submodular functions might be useful to get around the aforementioned barrier in this case and lead to a smaller query complexity.
SFM versus continuous optimization.Given the challenges of adapting weakly-polynomial convex optimization algorithms to leverage sparsity, obtaining weakly- and strongly-polynomial algorithms for \(k\)-sparse SFM could highlight differences between general convex optimization and SFM. Consequently, \(k\)-sparse SFM is a natural proving grounds for designing SFM algorithms that go beyond using the boundedness and convexity of the Lovasz extension. Combinatorial algorithms and iteration costs.The use of cutting plane methods in state-of-the-art SFM algorithms comes with certain inherent costs. Key among them is that all known cutting plane methods apply general linear system solvers or matrix multiplication methods, making these methods somewhat intrinsically non-combinatorial. This is inherent as ultimately the problems they solve are more general than that of solving arbitrary linear systems. Since, as argued above, obtaining better query complexities for weakly- and strongly-polynomial \(k\)-sparse SFM suggests departing from cutting plane methods, the problem could be an interesting one to see where more combinatorial methods or ones with lower iteration costs can shine. State-of-the-art pseudo-polynomial SFM algorithms leverage optimization machinery which does not use linear system solves and correspondingly have runtimes that are within polylogarithmic factors of their query complexity [10, 1]. Though there have been efforts in using alternative optimization methods to solve SFM, e.g., [14], the query complexities of such methods are much higher than the state-of-the-art. Correspondingly, \(k\)-sparse SFM is an interesting setting to see whether such methods can outperform cutting plane methods. ### Our Results Our main results include two algorithms which improve, respectively, the parallel depth and query complexities of polynomial-time \(k\)-sparse SFM algorithms. Parallel depth for \(k\)-sparse SFM.In the parallel model for SFM (in the weakly-polynomial regime), the algorithm can submit up to \(\mathsf{poly}(n,\log(|f|/\epsilon))\) parallel queries to the evaluation oracle in each round, and its parallel _depth_ is defined to be the number of rounds needed to find the minimizer in the worst case. Our main result for this model is the following theorem. **Theorem 1.1** (Parallel \(k\)-sparse SFM).: _There is a deterministic parallel algorithm for \(k\)-sparse SFM with parallel depth \(\widetilde{O}(k^{7}\cdot\log(|f|/\epsilon))\) and runtime \(\widetilde{O}(n^{2}\cdot k^{7}\log(|f|/\epsilon)\cdot\mathsf{EO}+\mathsf{poly} (n)\cdot\log(|f|/\epsilon))\)._ When the sparsity \(k=\widetilde{O}(1)\), the parallel depth in Theorem1.1 is \(\widetilde{O}(1)\). To the best of our knowledge, this is the first nearly-constant parallel depth result for SFM, beyond the trivial \(n^{k}\)-query algorithm that queries all \(k\)-sparse sets in a single round (which does not have polynomial query complexity whenever \(k=\omega(1)\)). Our result is in stark contrast to the best known weakly-polynomial parallel depth of \(\widetilde{O}(n)\) for general SFM [13]. It is important to emphasize here that \(\widetilde{O}(1)\)-sparsity is also _necessary_ for obtaining a nearly-constant parallel depth. The work of [10] implies that \(\widetilde{\Omega}(k^{1/3})\) parallel depth is required for any weakly-polynomial algorithm for \(k\)-sparse SFM. 
Query complexity for \(k\)-sparse SFM.While the algorithm in Theorem1.1 achieves a nearly-constant parallel depth when the sparsity is nearly-constant, even in this setting its query complexity is \(\Omega(n^{2})\). In light of the question of designing SFM algorithms with nearly-linear query complexity, our second main result is a pair of algorithms which improve the weakly- and strongly-polynomial query complexities for \(k\)-sparse SFM. (It remains open as to whether the parallel depth of strongly-polynomial \(k\)-sparse SFM can be similarly improved.) **Theorem 1.2** (Weakly-polynomial \(k\)-sparse SFM).: _There is a randomized algorithm that outputs an \(\epsilon\)-approximate minimizer for \(k\)-sparse SFM whp. in \(\widetilde{O}((n\cdot\mathsf{poly}(k)\cdot\mathsf{EO}+\mathsf{poly}(n))\log(|f| /\epsilon))\) time._ **Theorem 1.3** (Strongly-polynomial \(k\)-sparse SFM).: _There is a randomized algorithm that outputs an exact minimizer for \(k\)-sparse SFM whp. in \(\widetilde{O}(n\cdot\mathsf{poly}(k)\cdot\mathsf{EO}+\mathsf{poly}(n))\) time._ We include both theorems above because the \(\mathsf{poly}(k)\) in Theorem1.2 is slightly better than that in Theorem1.3 (see Section6.4). The algorithms in Theorems1.2 and 1.3 have nearly-linear query complexities when the sparsity \(k=\widetilde{O}(1)\). Previously, the only nearly-linear weakly-polynomial query complexity results for SFM were obtained when the submodular function \(f\) can be decomposed as \(f(S)=\sum_{i}f_{i}(S)\) and each \(f_{i}\) depends only on \(\widetilde{O}(1)\) coordinates [1, 2]. However, this is different and the techniques for solving it seem tailored to its structure. Our algorithms for Theorems 1.1-1.3 depart from the use of cutting plane methods and do not rely on linear system solves as a sub-procedure. In this sense, they are more combinatorial than state-of-the-art weakly-polynomial time [13, 14] and strongly-polynomial time SFM algorithms [13, 15, 16]. Somewhat surprisingly, our algorithms combine first-order methods, which have been primarily used for pseudo-polynomial SFM algorithms (e.g., [12, 13]), and arc finding, a technique central to many strongly-polynomial SFM algorithms (e.g., [13, 15]), to obtain very efficient weakly- and strongly-polynomial time algorithms. Previous combination of these two techniques only appeared in [15], but the resulting algorithm has query complexity and parallel depth at least a factor of \(n^{2}\) larger than the state-of-the-art algorithms based on cutting plane methods. The proofs of Theorems 1.2 and 1.3 additionally invoke various sampling techniques, which crucially allows us to save the additional factor of \(n\) from querying an entire subgradient of the Lovasz extension in each iteration. ### Related Work SFM is a central combinatorial optimization problem with extensive applications. The problem of maximizing a submodular function has also been widely studied, but is very different and has seemingly different structure, algorithms, and history (see, e.g., [11] for a survey on this topic). Strongly-, weakly-, and pseudo- polynomial algorithms for SFM.As discussed in the intro, a fundamental result for SFM is that it can be solved efficiently, in all three regimes of weakly-, strongly-, and pseudo-polynomial. The first weakly- and strongly-polynomial time SFM algorithms were given in the seminal work of Grotschel, Lovasz, and Schrijver [12, 13, 14]. The first pseudo-polynomial algorithm for SFM was given in a seminal work of Cunningham [10]. 
Since then, there has been a long line of work on designing better algorithms for SFM in all three regimes [11, 12, 13, 14, 15, 16, 17, 18]. The state-of-the-art algorithms for these regimes are shown in Table 1.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline **Paper** & **Year** & **Running Times** & **Remarks** \\ \hline \hline [14] & 2020 & \begin{tabular}{c} \(O(n^{2}\log nM\cdot\mathsf{EO}+n^{3}\log nM)\) \\ \(O(n^{3}\log^{2}n\cdot\mathsf{EO}+n^{4}\log^{2}n)\) \\ \end{tabular} & \begin{tabular}{c} current best weakly \& \\ strongly runtime \\ \end{tabular} \\ \hline [13] & 2020 & \begin{tabular}{c} \(\widetilde{O}(nM^{2}\cdot\mathsf{EO}+\mathsf{poly}(n))\) \\ \(\widetilde{O}(kM^{2}\cdot\mathsf{EO}+\mathsf{poly}(n))\) \\ \end{tabular} & \begin{tabular}{c} current best pseudo-poly \\ current best sparse pseudo-poly \\ \end{tabular} \\ \hline [15] & 2021 & \begin{tabular}{c} \(O(n^{3}\log\log n/\log n\cdot\mathsf{EO}+\mathsf{poly}(n))\) \\ \(O(n^{2}\log n\cdot\mathsf{EO}+\exp(n))\) \\ \end{tabular} & \begin{tabular}{c} current best strongly \\ query complexity \\ \end{tabular} \\ \hline \end{tabular} \end{table}
Table 1: State-of-the-art weakly-, strongly-, and pseudo-polynomial algorithms for submodular function minimization. \(k\) is the sparsity and parameter \(M=|f|/\epsilon\).

Parallel SFM. For the parallel complexity of SFM discussed earlier in the intro, the current best weakly-polynomial algorithm has parallel depth \(O(n\log nM)\) [13] and the current best strongly-polynomial algorithms [15] have parallel depth \(O(n\log n)\) (with exponential runtime) or \(O(n^{2}\log\log n/\log n)\) (with polynomial runtime). In concurrent work [11], a superset of the authors give a \(\widetilde{O}(n^{1/3}/\epsilon^{2/3})\)-round \(\mathsf{poly}(n)\)-time algorithm for obtaining an \(\epsilon\)-approximate minimizer, and a 2-round \(n^{O(M)}\)-time algorithm for computing an exact minimizer. As discussed in the intro, lower bounds for parallel SFM have also been studied recently (see Table 2).

\begin{table} \begin{tabular}{|c|c|c|c|} \hline **Paper** & **Year** & **Parallel Depth** & **Accuracy** \\ \hline [2] & 2020 & \(\Omega(\log n/\log\log n)\) & exact \\ \hline [2] & 2021 & \(\widetilde{\Omega}(n^{1/3})\) & \(|f|/\mathsf{poly}(n)\) \\ \hline [2] & 2022 & \(\Omega(n/\log n)\) & exact \\ \hline \end{tabular} \end{table}
Table 2: Parallel depth lower bounds for query-efficient SFM. In the “Accuracy” column, “exact” means the algorithm is required to compute an exact minimizer, and “\(|f|/\mathsf{poly}(n)\)” means the algorithm is allowed to output any approximate minimizer with an additive accuracy of \(|f|/\mathsf{poly}(n)\).

Structured SFM. Given the aforementioned nearly \(n\)-factor gap between the state-of-the-art query complexity upper and lower bounds for SFM, there have been exciting recent results on improving the query complexity of SFM assuming more fine-grained structures of the submodular functions. In particular, for the problem of decomposable SFM discussed prior to Section 1.1, it is known that \(f\) can be minimized in weakly-polynomial time using \(\widetilde{O}(n)\) total queries to the evaluation oracles of each individual \(f_{i}\) [1, 22].

### Paper Organization

We start by providing an overview of our approach to obtaining our results in Section 2, followed by preliminaries in Section 3. Our unified algorithmic framework and key algorithmic tools for obtaining both our parallel and sequential results are introduced in Section 4. Our parallel results are presented in Section 5 and our sequential results are obtained in Section 6.

## 2 Our Approach

Here we provide an overview of our approach towards proving Theorems 1.1-1.3. We first give some context and motivation, and then we cover the key components of our approach in Sections 2.1-2.3. To situate our approach, recall that previous state-of-the-art weakly- and strongly-polynomial time SFM algorithms all apply the general continuous optimization tool of _cutting plane methods_ [14, 15, 16, 17, 18, 19, 20, 13, 15].
Cutting plane methods are known to compute \(\epsilon\)-approximate minimizers of bounded convex functions on \(\mathbb{R}^{n}\) in \(\widetilde{O}(n\log(1/\epsilon))\) iterations, where each iteration consists of a subgradient computation, which typically takes \(\widetilde{O}(1)\) depth, \(O(n)\) queries to the evaluation oracle of \(f\), and \(\Omega(n^{2})\) additional work [15] involving linear algebraic operations such as a linear system solve. In this paper we seek to improve upon these methods for \(k\)-sparse SFM both in terms of performance and to avoid general linear algebraic primitives (to obtain, in some sense, a more combinatorial algorithm). However, as discussed in Section 1.1, it is unclear how to substantially improve cutting plane methods just using the assumption that there is a sparse optimal solution.

Consequently, we depart from previous state-of-the-art weakly- and strongly-polynomial SFM algorithms and instead use first-order methods4 such as mirror descent (Algorithm 2) and (stochastic) follow-the-regularized-leader (Algorithm 12) to minimize the Lovasz extension. These methods have performance depending more on problem geometry, e.g., the domains \(B_{\infty}^{V}\) versus \(S_{k}^{V}\), than cutting plane methods. Also, implementing them often does not require linear system solves and therefore they typically have much smaller iteration costs. Unfortunately, these desirable features of first-order methods have a cost. In contrast to cutting plane methods, when applied to non-smooth convex objectives like the Lovasz extension, their convergence rate depends polynomially on the accuracy rather than polylogarithmically. Therefore, it is natural to use such methods for pseudo-polynomial SFM algorithms [10, 11], but less clear how to leverage them to obtain improved weakly- or strongly-polynomial SFM algorithms.

Fortunately, recent advances in weakly- and strongly-polynomial SFM algorithms provide hope for overcoming this limitation. The works [11, 12] provide different ways to incorporate learned _precedence_ constraints, i.e., if an element is in a minimizer then what other elements must also be in that minimizer, to reduce the scale of the problem. For example, [12] showed that it suffices to solve SFM approximately to a relative accuracy of \(O(1/n^{3})\), in a primal-dual sense, repeatedly to obtain a strongly-polynomial algorithm for SFM. Despite the above hope for improving \(k\)-sparse SFM via first-order methods, there are a number of natural hurdles in the way. For example, the \(O(1/n^{3})\)-error requirement in [12] is prohibitively expensive for first-order methods to outperform cutting plane methods. Additionally, learning and updating precedence constraints need to be made sufficiently efficient.
Nevertheless, we are able to follow this broad approach by introducing and leveraging a central concept of this paper that we call _sparse dual certificates_ (see Section 4.3). In particular, we demonstrate how to carefully apply first-order methods to \(1/\mathsf{poly}(k)\)-accuracy to compute sparse dual certificates and, building upon [12], how these certificates can be used to efficiently deduce precedence constraints. Our parallel and sequential algorithms differ in their specific implementations of these strategies (see Sections 5 and 6 respectively). We believe the notion of sparse dual certificates and our algorithmic techniques for computing and using them for \(k\)-sparse SFM might have broader applications to improving weakly- or strongly-polynomial time SFM algorithms.

Section organization. To illustrate our approach and our key insights, we subdivide the remainder of this section. In Section 2.1, we provide the general framework we use to iteratively decrease the scale of the \(k\)-sparse SFM problem. In Sections 2.2 and 2.3, we provide the key ideas in our parallel and sequential algorithms, respectively.

### Framework

Building on a long line of work [13, 14, 15] (and in particular, [12]), our algorithms for minimizing a submodular function \(f:2^{V}\to\mathbb{R}\) work by maintaining a set of precedence constraints indicating elements that must or must not be in any \(k\)-sparse minimizer, as well as, for each \(p\in V\), a set of elements \(S_{p}\) that must be in any \(k\)-sparse minimizer \(S^{*}\) containing \(p\). We call these precedence constraints _arc constraints_5 and their collection a _ring family_.

Footnote 5: Our definition of arc constraints is only with respect to \(k\)-sparse minimizers and is therefore different from the standard one in the literature. See Section 4.1 for more details.

Given these arc constraints, we consider an induced _submodular extension_ \(f^{\sharp}\) consistent with the ring family. \(f^{\sharp}\) is essentially the complement of a submodular extension studied in [12]; it is crucial that we work with \(f^{\sharp}\) since sparsity is not preserved under complementation. \(f^{\sharp}\) has many desirable properties. For example, minimizing \(f^{\sharp}\) suffices for minimizing \(f\), and any arc constraints learned for \(f^{\sharp}\) apply to \(f\). Beyond consistency and submodularity, the key property we use about \(f^{\sharp}\) is that the marginal vector6 \(u\in\mathbb{R}^{V}\), defined as \(u_{p}\stackrel{{\text{\tiny{def}}}}{{=}}f^{\sharp}(\{p\})-f^{\sharp}(\emptyset)\) for any coordinate \(p\in V\), does not increase as we add arc constraints. (See Section 4.1 for more details.)

Footnote 6: The formal notation we define and use for the marginal vector in Section 3 and the rest of this paper is \(u_{f^{\sharp}}\). Here we drop the subscript and use \(u\) instead for simplicity.

By maintaining the ring family and the extension \(f^{\sharp}\), and leveraging their properties, \(k\)-sparse SFM reduces to the problem of learning new arc constraints so that we can either

1. decrease the scale of \(\|u\|_{\infty}\) by more than a constant factor, or
2. learn enough arc constraints so that the \(k\)-sparse minimizer is clear.

In particular, if \(\|u\|_{\infty}\leq\varepsilon/|V|\), then due to submodularity the largest set consistent with every arc constraint will be an \(\varepsilon\)-approximate minimizer for the original submodular function (see Claim 4.3).
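As a small illustration of how the marginal vector drives this stopping rule, the following minimal Python sketch (ours, with hypothetical names; the oracle is assumed to take a frozenset and satisfy \(f(\emptyset)=0\)) computes the marginals and checks the condition above for the current ground set.

```python
def marginals(f, V):
    """u_p = f({p}) - f(emptyset); by submodularity this upper bounds the
    marginal value of adding p to any set."""
    f_empty = f(frozenset())
    return {p: f(frozenset({p})) - f_empty for p in V}

def small_marginal_stopping_rule(f, V, eps):
    """If every marginal is at most eps/|V|, the full ground set V is an
    eps-approximate minimizer (cf. Claim 4.3)."""
    u = marginals(f, V)
    return max(u.values()) <= eps / len(V)
```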
Note how \(k\)-sparsity helps for our purposes: if the set of elements \(S_{p}\) that must be in every \(k\)-sparse minimizer containing \(p\) has more than \(k\) elements, then \(p\) cannot be in any \(k\)-sparse minimizer and can therefore be discarded. This allows us to maintain at most \(k\) arc constraints from any element \(p\in V\), which significantly decreases the cost of manipulating the arc constraints and the submodular extension \(f^{\sharp}\). In both our parallel and sequential settings, we use \(\left\|u\right\|_{\infty}\) as a potential function and design efficient parallel and sequential subprocedures to find arc constraints to decrease \(\left\|u\right\|_{\infty}\) by a constant factor. Each setting has its distinct challenges and our techniques differ correspondingly. However, there is one common technique underlying these two different implementations, based on the notion of a _sparse dual certificate_ (see Section 4.3 for details). Sparse dual certificates.Sparse dual certificates are generalizations of standard dual solutions to SFM [1] that better capture the sparsity assumptions on the minimizers of the submodular function (see Definition 4.5 for definition). In our framework, sparse dual certificates bridge the gap between the task of finding arc constraints and the pseudo-polynomial convergence rate of first-order methods. In particular, we show how to use sparse dual certificates to deduce arc constraints (see Section 4.3). We also develop various algorithmic techniques to compute these certificates by running first-order methods up to only \(1/\mathsf{poly}(k)\) accuracy (see Sections 5.2 and 6.1 respectively for our parallel and sequential algorithms for computing sparse dual certificates). ### Parallel Algorithm To motivate our parallel algorithm, consider minimizing the Lovasz extension \(\hat{f}\) of the induced function \(f^{\sharp}:2^{V}\to\mathbb{R}\) with a \(k\)-sparse minimizer and let \(f^{*}\stackrel{{\text{\tiny{def}}}}{{=}}\min_{S\subseteq V}f^{ \sharp}(S)=\min_{S\subseteq V}f(S)\). As discussed above, it suffices to learn arc constraints so that we can decrease \(\left\|u\right\|_{\infty}\) by a constant factor after updating the ring family. We may assume that \(\min_{p\in V}u_{p}\geq 0\) as by submodularity adding any \(p\) with \(u_{p}<0\) to any set decreases its value and therefore \(p\) must be in every minimizer. As a warm-up for this goal, perhaps the first natural question is: under these assumptions how efficiently can we compute a \(\delta\left\|u\right\|_{\infty}\)-approximate minimizer for a given \(\delta=1/\mathsf{poly}(k)\)? The question of deducing arc constraints seems harder than this problem since it involves proving something about all \(k\)-sparse minimizers at that accuracy threshold. For this warm-up question, let us even assume for now that \(f^{*}\geq-\Omega\!\left(\left\|u\right\|_{\infty}\right)\), as the problem is in some sense easier otherwise and will be addressed towards the end of this subsection. A natural approach to this warm-up problem, as alluded to earlier, is to apply standard first-order methods such as mirror descent to the Lovasz extension \(\hat{f}\) of \(f^{\sharp}\) over the domain \(S_{k}^{V}\). By submodularity, \(u\) entrywise upper bounds the subgradients of \(\hat{f}\). 
If somehow the subgradients were also entrywise lower bounded by \(-\left\|u\right\|_{\infty}\), then standard analysis of mirror descent with an entropy regularizer (see Theorem 4.2 of [1]) applied to \(\hat{f}\) over \(S_{k}^{V}\) would compute a \(\delta\left\|u\right\|_{\infty}\)-approximate minimizer in \(\widetilde{O}(\delta^{-2})\) iterations. Furthermore, since each iteration of this method can be implemented in \(O(1)\) depth, this would yield a \(\widetilde{O}(\delta^{-2})\) depth algorithm as desired. Unfortunately, it is not necessarily the case that every subgradient of \(\hat{f}\) is entrywise lower bounded by \(-\left\|u\right\|_{\infty}\). In fact, its most negative entry can be as negative as \(f^{*}-(n-1)\left\|u\right\|_{\infty}\), ruling out a direct argument that mirror descent converges in \(\widetilde{O}(\mathsf{poly}(k,\delta^{-1}))\) iterations.

To overcome this issue, we show that the structure of \(k\)-sparse solutions allows us to _truncate_ subgradients. We prove that if we run mirror descent methods with every subgradient coordinate of value \(\leq f^{*}-k\left\|u\right\|_{\infty}\) set to \(f^{*}-k\left\|u\right\|_{\infty}\), then this still approximately minimizes the Lovasz extension \(\hat{f}\) and computes sparse dual certificates (see Section 5.2). Running mirror descent with these truncated subgradients yields a deterministic algorithm which computes a \(\delta\left\|u\right\|_{\infty}\)-approximate minimizer in \(\widetilde{O}(\mathsf{poly}(k)/\delta^{2})\) depth and \(\widetilde{O}(n\cdot\mathsf{poly}(k)/\delta^{2})\) evaluation oracle queries.

The solution to this warm-up problem is the key ingredient in our parallel algorithm. In particular, assuming \(f^{*}\leq-\|u\|_{\infty}\), we show that the sparse dual certificate obtained by running the warm-up algorithm over \(S_{k+1}^{V}\) with accuracy \(O(\|u\|_{\infty}/k)\) suffices to identify an element that must be in every \(k\)-sparse minimizer, i.e., a dimensionality reduction. As dimensionality reduction can occur at most \(k\) times, this gives a \(\widetilde{O}(\mathsf{poly}(k))\)-depth \(\widetilde{O}(n\cdot\mathsf{poly}(k))\)-query algorithm. On the other hand, when \(f^{*}\geq-\left\|u\right\|_{\infty}\), we consider each of the induced submodular functions \(f_{p}\) where an element \(p\) is always included and run the same algorithm on each such function. Note that each \(f_{p}\), once shifted to evaluate \(0\) at the new empty set (or the singleton \(\{p\}\)), has minimum value \(-\Omega(u_{p})\). Consequently, when this is done for \(p\) with \(u_{p}\) near \(\left\|u\right\|_{\infty}\), the procedure finds an element which must be in any \(k\)-sparse minimizer containing \(p\). Importantly, this can be done in parallel for each individual \(p\)! This conveys the main ideas behind the parallel algorithm. See Section 5 for details.
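To illustrate the truncation idea, here is a minimal Python sketch of ours, for intuition only: it runs entropic mirror descent on the Lovasz extension with every subgradient coordinate clipped from below at a threshold of roughly \(f^{*}-k\|u\|_{\infty}\), passed in as a parameter since \(f^{*}\) is not known exactly. The projection step is a simplified clip-and-rescale onto \(S_{k}^{V}\) rather than the exact Bregman projection, and the step size and iteration count are placeholders, not the tuned values of Section 5.

```python
import math

def lovasz_subgradient(f, x, V):
    """Greedy subgradient of the Lovasz extension at x: sort coordinates in
    decreasing order and take marginals along the resulting chain of sets."""
    order = sorted(V, key=lambda i: -x[i])
    g, prefix, prev = {}, set(), f(frozenset())
    for i in order:
        prefix.add(i)
        val = f(frozenset(prefix))
        g[i] = val - prev
        prev = val
    return g

def truncated_mirror_descent(f, V, k, thresh, T, eta):
    """Entropic mirror descent over (roughly) S_k^V with subgradient
    coordinates clipped from below at `thresh` (~ f* - k*||u||_inf)."""
    n = len(V)
    x = {i: min(1.0, k / n) for i in V}                   # feasible starting point
    avg = {i: 0.0 for i in V}
    for _ in range(T):
        g = lovasz_subgradient(f, x, V)
        g = {i: max(g[i], thresh) for i in V}             # truncation step
        x = {i: x[i] * math.exp(-eta * g[i]) for i in V}  # multiplicative update
        x = {i: min(x[i], 1.0) for i in V}                # box constraint
        s = sum(x.values())
        if s > k:                                         # l1 constraint of S_k^V
            x = {i: x[i] * k / s for i in V}
        for i in V:
            avg[i] += x[i] / T
    return avg   # averaged iterate: an approximate minimizer of the extension
```

A set can then be recovered from the returned fractional point by thresholding, since the Lovasz extension's value is an average over threshold sets.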
### Sequential Algorithm

In the previous section we outlined the main ideas of our parallel algorithm in Theorem 1.1. Unfortunately, that algorithm has a rather high query complexity. In every round of decreasing \(\left\|u\right\|_{\infty}\), the algorithm might solve \(n\) different induced SFM problems, corresponding to the inclusion of each element \(p\in V\), causing the query complexity to scale quadratically in \(n\) instead of linearly. In the literature on using ring families for weakly- and strongly-polynomial SFM, there is a standard technique for alleviating the need to apply the algorithm to \(n\) different SFM problems to deduce arcs.

In [14, 15] and early SFM algorithms (see [14] for a discussion of the history), the algorithm obtains a suitable dual certificate for the original function. This dual certificate is then modified by moving individual elements to the start of each permutation, and it is argued that this modification can be used to deduce arcs. In other words, rather than running \(n\) optimization methods to deduce \(n\) dual certificates, these methods deduce one set of dual certificates and consider \(n\) different modifications of it. In our sequential algorithm we follow a similar approach, but it brings about a variety of challenges, each of which requires algorithmic and analytic insights to overcome.

The first challenge is that truncated subgradients, which are used in our parallel algorithm, do not seem amenable to this technique; it is unclear how to deduce arcs just by moving elements to the front after the truncation, which may lose the critical information that makes this approach work. [14, 15] consider the elements of the dual certificate that decrease significantly when moving a certain element to the start of each permutation. However, truncation does not seem to allow for a similar approach, as all components that are negative past a threshold are truncated to the same value. To overcome this challenge, we provide a first-order method for computing \(\epsilon\)-approximate minimizers (and their associated sparse dual certificates) using true (rather than truncated) subgradients. As discussed in Section 2.2, this is difficult as the entries of the subgradient can vary by \(\Omega(n\left\|u\right\|_{\infty})\). Correspondingly, standard analyses of iterative first-order methods, e.g., mirror descent and FTRL (Follow-the-Regularized-Leader), require \(\Omega(n^{2})\) iterations, which would naively require a prohibitive \(\Omega(n^{3})\) queries! It is therefore imperative that we use a different technique (other than truncation as in the parallel setting) to either reduce the number of iterations or the cost per iteration; we do both. In particular, we use stochastic FTRL7, where in each iteration we sample a random 1-sparse unbiased estimator of the subgradient. We show how this can be implemented using \(\widetilde{O}(1)\) evaluation queries per iteration and that the total number of iterations is suitably bounded.

Footnote 7: We choose stochastic FTRL rather than stochastic mirror descent to facilitate obtaining with-high-probability success guarantees.

Making the above approach work requires a number of insights. First, the number of iterations of stochastic FTRL is straightforwardly boundable in terms of the square of the \(\ell_{\infty}\) norm of the stochastic estimates of the subgradient (analogous to what was done for mirror descent in the parallel setting). However, unfortunately any sampling scheme in the worst case could have an \(\ell_{\infty}\)-norm of \(\Omega(n\left\|u\right\|_{\infty})\), again leading to \(\Omega(n^{2})\) iterations. To get around this, we instead perform a more fine-grained analysis of the convergence of FTRL in terms of "local norms" (see Section 6.1 for details). This is a known optimization method analysis technique and our analysis is inspired from and perhaps most closely resembles [11]; this technique was not used in previous work on SFM that uses sampling [13, 14, 1]. The next challenge is to actually implement sampling using \(\widetilde{O}(1)\) queries per iteration so that the local norms of the samples are suitably small.
Sampling \(i\in V\) with probability proportional to \(|(g_{x_{t}})_{i}|\) and then outputting \(\mathsf{sign}((g_{x_{t}})_{i})\cdot\left\|g_{x_{t}}\right\|_{1}\) would have the desired local norm bound. Additionally, sampling in this way is essentially what is done in some of the sampling-based SFM methods [13, 14, 1] (albeit for a different norm analysis particularly relevant for pseudopolynomial SFM algorithms). However, these papers implement this sampling via a somewhat complex dynamic data structure which could be challenging to analyze in our setting. Instead, we provide a simple and straightforward sampling procedure which we call \(\mathsf{vSampling}\). This sampling scheme picks \(i\in V\) with probability proportional to an upper bound \(v_{i}\) for \(|(g_{x_{t}})_{i}|\), chosen so that \(\sum_{i\in I}v_{i}\) for consecutive coordinates \(I\) can be evaluated using only \(O(1)\) queries. This sampling can be implemented using \(O(\log n)\) queries by a simple (static) binary tree data structure, and we prove it has the desired expected local-norm bounds.

Another challenge we face is that our stochastic FTRL analysis merely yields a subgradient \(y\) that is a suitable dual certificate _in expectation_, whereas we need the guarantee to hold with high probability in order to correctly deduce arc constraints. To this end, we show that \(\left\|y\right\|_{\infty}\) is small with high probability8, and apply the Azuma-Hoeffding concentration inequality for martingales to show that averaging over \(\mathsf{poly}(k)\) such subgradients yields a suitable dual certificate with high probability. Showing that no entry of \(y\) is too negative carries much of the difficulty in the analysis. For this step, we apply a novel analysis of our optimization method, which uses the submodularity structure and couples the iterates of FTRL with iterates of an instantiation of the multiplicative weights algorithm. For details see Section 6.1.

Footnote 8: Throughout this paper, with high probability means that the probability is \(1-n^{-C}\) for some constant \(C>0\).

The above method computes an implicit representation of \(\widetilde{O}(n\cdot\mathsf{poly}(k))\) permutations such that the average of the subgradients they induce is a dual certificate of SFM from which either arcs or coordinates in the minimizer can be deduced. However, naively writing down the subgradients that average out to the certificate would require \(\Omega(n^{2})\) queries. Furthermore, deducing an arc for a single coordinate, through the operation of moving a set of elements to the beginning of each permutation (which we often refer to as move-to-front for simplicity), would also naively require \(\Omega(n^{2})\) queries, which is prohibitively expensive. To overcome this limitation, we provide efficient methods to sample from these permutations and their associated subgradients. More specifically, we design a method which first draws \(\widetilde{O}(n\cdot\mathsf{poly}(k))\) samples from the subgradients as a preprocessing step, and then uses the samples to deduce several arcs. The preprocessing step enables an efficient implementation of the move-to-front operations through an importance sampling technique. Each arc deduced requires \(\widetilde{O}(\mathsf{poly}(k))\) additional samples. For more details, see Section 6.3. This summarizes the main ingredients for obtaining our \(\widetilde{O}(n\cdot\mathsf{poly}(k)\log(|f|/\epsilon))\)-query result.
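The following Python sketch conveys the flavor of such a sampler. It is our simplification: `query_coord` is a hypothetical helper that returns one subgradient coordinate using \(O(1)\) evaluation-oracle calls, all upper bounds \(v_{i}\) are assumed positive, and a binary search over static prefix sums plays the role of the binary tree. The returned estimate is 1-sparse and unbiased, since coordinate \(i\) is reported as \(g_{i}/p_{i}\) with probability \(p_{i}\).

```python
import bisect
import random

class VSampler:
    """Draws index i with probability p_i = v_i / sum(v), where v_i is an
    upper bound on |g_i|; the static prefix-sum array stands in for the
    binary tree described above."""
    def __init__(self, v):
        self.v = list(v)
        self.prefix = []
        total = 0.0
        for vi in self.v:
            total += vi
            self.prefix.append(total)
        self.total = total

    def sample(self):
        r = random.uniform(0.0, self.total)
        i = bisect.bisect_left(self.prefix, r)
        return i, self.v[i] / self.total     # sampled index and its probability

def one_sparse_estimate(sampler, query_coord, n):
    """1-sparse unbiased estimator of the subgradient g: E[estimate] = g."""
    i, p_i = sampler.sample()
    est = [0.0] * n
    est[i] = query_coord(i) / p_i
    return est
```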
Somewhat surprisingly, a more careful amortized cost analysis reveals that this algorithm is in fact a strongly-polynomial time algorithm that makes \(\widetilde{O}(n\cdot\mathsf{poly}(k))\) queries. This stems, partially, from a more fine-grained analysis of the size of subgradients and how many arcs are deduced each time we compute an \(\epsilon\)-approximate minimizer (and its corresponding dual certificate). See Section 6 for details. This discussion omits a variety of details, which are deferred to Section 6. The use of randomness and the loss of parallelism in this sequential algorithm are interesting, and we leave it as an open problem to determine to what degree a deterministic \(\widetilde{O}(\mathsf{poly}(k)\log(1/\epsilon))\)-depth and \(\widetilde{O}(n\cdot\mathsf{poly}(k)\log(1/\epsilon))\)-work weakly-polynomial time algorithm (and a strongly-polynomial time analog) can be achieved.

## 3 Preliminaries

### Notation

We often use \(V\) to denote a finite set of \(n\) elements. For any real number \(p\geq 1\), we use \(\|\cdot\|_{p}\) to denote the \(\ell_{p}\)-norm in \(\mathbb{R}^{V}\). We denote the unit cube by \(B_{\infty}^{V}\stackrel{{\text{def}}}{{=}}[0,1]^{V}\). For any integer \(k>0\), we denote the interior of the simplex scaled by \(k\) as \(\Delta_{k}^{V}\stackrel{{\text{def}}}{{=}}\{x\in\mathbb{R}_{\geq 0}^{V}:\|x\|_{1}\leq k\}\). In particular, define the standard simplex \(\Delta^{V}\stackrel{{\text{def}}}{{=}}\Delta_{1}^{V}\). Further, we use \(S_{k}^{V}\stackrel{{\text{def}}}{{=}}B_{\infty}^{V}\cap\Delta_{k}^{V}\) to denote the _truncated (interior of the) \(k\)-simplex_. For every \(S\subseteq V\), we use \(\vec{1}_{S}\) to denote the indicator vector for the set \(S\) (i.e., \((\vec{1}_{S})_{i}=1,\forall i\in S\) and \((\vec{1}_{S})_{i}=0,\forall i\in V\setminus S\)). For simplicity, for every \(i\in V\), we write \(\vec{1}_{i}\) to denote the vector with a single \(1\) in the \(i\)th coordinate and zero elsewhere. For any vector \(v\in\mathbb{R}^{V}\), we use \(|v|\) to denote the vector obtained by taking the coordinate-wise absolute value of \(v\). For any \(x\in\mathbb{R}_{\geq 0}^{V}\) and \(y\in\mathbb{R}^{V}\), we define \(\left\|y\right\|_{x}\stackrel{{\text{def}}}{{=}}\sqrt{\sum_{i\in V}x_{i}y_{i}^{2}}\). For any \(y\in\mathbb{R}^{V}\), \(\ell\in\mathbb{Z}_{>0}\), and \(P\subseteq V\), we let \(y(P)\stackrel{{\text{def}}}{{=}}\sum_{p\in P}y_{p}\) be the sum of the coordinates of \(y\) in \(P\) and \(y_{-}^{\ell}(P)\stackrel{{\text{def}}}{{=}}\min_{w\in S_{\ell}^{P}}y^{\top}w\) be the sum of the \(\ell\) most negative coordinates of \(\min\{y,0\}\) in \(P\). We denote \(y_{-}(V)\stackrel{{\text{def}}}{{=}}y_{-}^{n}(V)\).

We denote the entropy regularizer (or entropy mirror map) as \(r(x)\stackrel{{\text{def}}}{{=}}\sum_{i\in V}x_{i}\log x_{i}\) for any \(x\in\mathbb{R}_{\geq 0}^{V}\) (where we define \(0\log 0\stackrel{{\text{def}}}{{=}}0\)). We let \(V_{x}(y)\stackrel{{\text{def}}}{{=}}r(y)-\big{(}r(x)+\nabla r(x)^{\top}(y-x)\big{)}\) denote the Bregman divergence of \(r\). Note that

\[\begin{aligned}V_{x}(y)&=\sum_{i\in V}y_{i}\log y_{i}-\sum_{i\in V}x_{i}\log x_{i}-\sum_{i\in V}(1+\log x_{i})(y_{i}-x_{i})\\ &=\sum_{i\in V}y_{i}\log(y_{i}/x_{i})+\sum_{i\in V}(x_{i}-y_{i})=\langle y,\log(y/x)\rangle+\langle x-y,\vec{1}\rangle.\end{aligned}\tag{1}\]
### Submodular Functions and Lovasz Extension

Let \(f:2^{V}\rightarrow\mathbb{R}\) be a set function defined on subsets of the \(n\)-element finite set \(V\). We use \(f^{*}\) to denote the minimum value of \(f\) over \(2^{V}\). A set function \(f\) is _submodular_ if it satisfies the following property of _diminishing marginal differences_:

**Definition 3.1** (Submodularity).: _A function \(f:2^{V}\rightarrow\mathbb{R}\) is submodular if \(f(T\cup\{i\})-f(T)\leq f(S\cup\{i\})-f(S)\), for any \(S\subseteq T\subseteq V\) and \(i\in V\setminus T\)._

Throughout this paper, the set function \(f\) we work with is assumed to be submodular even when this is not stated explicitly. We may assume without loss of generality that \(f(\emptyset)=0\) by replacing \(f(S)\) by \(f(S)-f(\emptyset)\). We make the following two assumptions about the submodular functions we work with throughout this paper, even when not explicitly stated: (1) \(f(\emptyset)=0\), and (2) \(f\) is accessed through an _evaluation oracle_; we use EO to denote the time to compute \(f(S)\) for any \(S\subseteq V\). Throughout, we use \(S_{\min}\) to denote the unique minimal minimizer9 of the given submodular function \(f\). We also define the vector \(u_{f}\in\mathbb{R}^{V}\) as \((u_{f})_{p}\stackrel{{\text{def}}}{{=}}f(\{p\})\) for all \(p\in V\). In particular, by submodularity, \((u_{f})_{p}\) is an upper bound on the marginal of \(p\) for any \(S\subseteq V\setminus\{p\}\), i.e., \(f(S\cup\{p\})-f(S)\leq(u_{f})_{p}\). Given a submodular function \(f\) and \(P\subseteq V\), we define the contracted function \(f_{P}:2^{V\setminus P}\rightarrow\mathbb{R}\) as

\[f_{P}(S)\stackrel{{\text{def}}}{{=}}f(S\cup P)-f(P).\]

Note that \(f_{P}\) is also a submodular function with \(f_{P}(\emptyset)=0\).

Footnote 9: Such a minimal minimizer exists because for any two minimizers \(S_{1}^{*},S_{2}^{*}\) of \(f\), their intersection \(S_{1}^{*}\cap S_{2}^{*}\) and union \(S_{1}^{*}\cup S_{2}^{*}\) must also be minimizers since, by submodularity, \(2f^{*}=f(S_{1}^{*})+f(S_{2}^{*})\geq f(S_{1}^{*}\cup S_{2}^{*})+f(S_{1}^{*}\cap S_{2}^{*})\).

**Lovasz Extension and Subgradients.** Our algorithm for SFM is based on a standard convex relaxation of a submodular function, known as the Lovasz extension [10].

**Definition 3.2** (Lovasz Extension).: _The Lovasz extension, \(\hat{f}:B_{\infty}^{V}\rightarrow\mathbb{R}\), of a submodular function \(f\) is defined as \(\hat{f}(x)\stackrel{{\text{def}}}{{=}}\mathbb{E}_{t\sim[0,1]}[f(\{i:x_{i}\geq t\})],\) where \(t\sim[0,1]\) is drawn uniformly at random._

We often overload notation and also use \(f:B_{\infty}^{V}\rightarrow\mathbb{R}\) to denote the Lovasz extension \(\hat{f}\) of \(f\). The Lovasz extension \(\hat{f}\) of a submodular function \(f\) has many desirable properties. In particular, \(\hat{f}\) is a convex relaxation of \(f\) and it can be evaluated efficiently.

**Theorem 3.3** (Properties of Lovasz Extension, Theorem 6.3 in [11]).: _Let \(f:2^{V}\rightarrow\mathbb{R}\) be a submodular function and \(\hat{f}\) be its Lovasz extension. Then,_ 1. \(\hat{f}\) _is convex and_ \(\min_{x\in B_{\infty}^{V}}\hat{f}(x)=\min_{S\subseteq V}f(S)\)_;_ 2. \(f(S)=\hat{f}(\vec{1}_{S})\) _for any_ \(S\subseteq V\)_, where_ \(\vec{1}_{S}\) _is the indicator vector for_ \(S\)_;_ 3.
_Suppose_ \(x\in B_{\infty}^{V}\) _satisfies_ \(1\geq x_{\pi(1)}\geq\ldots\geq x_{\pi(n)}\geq 0\) _for a permutation_ \(\pi:[n]\to V\)_, then_ \(\hat{f}(x)=\sum_{i\in[n]}(f(\pi[i])-f(\pi[i-1]))x_{\pi(i)}\) _where_ \(\pi[j]\stackrel{{\mathrm{\tiny def}}}{{=}}\{\pi(1),\cdots,\pi( j)\}\)_;_ 4. _The set of minimizers of_ \(\hat{f}\) _is the convex hull of the set of minimizers of_ \(f\)_._ In particular, Theorem 3.3 (c) implies that a subgradient of \(\hat{f}\) at \(x\in\mathbb{R}^{V}\) is given by \[(g_{x})_{\pi_{x}(i)}\stackrel{{\mathrm{\tiny def}}}{{=}}f(\pi_{ x}[i])-f(\pi_{x}[i-1]),\] where \(\pi_{x}:[n]\to V\) is the permutation corresponding to decreasing order of the coordinates of \(x\) as in Theorem 3.3 (c). Moreover, given \(x\in\mathbb{R}^{V}\), the subgradient \(g_{x}\) can be computed in time \(O(n\cdot\mathsf{EO}+n\log n)\) by sorting the coordinates of \(x\) in decreasing order and applying the formula above. Note that \(g_{x}\) only depends on the permutation \(\pi_{x}\). Therefore, given a permutation \(\pi:[n]\to V\), we also define \(g_{\pi}\) as the subgradient induced, i.e., \((g_{\pi})_{\pi(i)}=f(\pi[i])-f(\pi[i-1])\). For \(P\subseteq V\), we define \(\pi_{\leftarrow P}\) to be the permutation where the set \(P\) is moved to the front of the permutation \(\pi\) (with the relative order of elements in \(P\) preserved). Formally, if \(P=\{\pi(i_{1}),\cdots,\pi(i_{\ell})\}\) for indices \(i_{1}<\cdots<i_{\ell}\) and the remaining indices are \(j_{1}<\cdots<j_{n-\ell}\), then \[\pi_{\leftarrow P}(k)\stackrel{{\mathrm{\tiny def}}}{{=}}\begin{cases} \pi(i_{k})&\text{ if }k\in[\ell],\\ \pi(j_{k-\ell})&\text{ if }k>\ell.\end{cases}\] In particular, denote \(\pi_{\leftarrow i}\stackrel{{\mathrm{\tiny def}}}{{=}}\pi_{ \leftarrow\{i\}}\). For a permutation \(\pi:[n]\to V\) and \(P\subseteq V\), we use \(\Delta_{\pi,P}\in\mathbb{R}^{V}_{\geq 0}\) to denote the decrease of coordinates \(V\setminus P\) in \(g_{\pi}\) when we move \(P\) to front, i.e., \[(\Delta_{\pi,P})_{q}\stackrel{{\mathrm{\tiny def}}}{{=}}\begin{cases} 0&\text{ if }q\in P,\\ (g_{\pi})_{q}-(g_{\pi_{\leftarrow P}})_{q}&\text{ if }q\in V\setminus P.\end{cases}\] The _base polytope_ of a submodular function \(f\) is defined as \(B(f)\stackrel{{\mathrm{\tiny def}}}{{=}}\{y\in\mathbb{R}^{V}:y(S) \leq f(S),\forall S\subseteq V\text{ and }y(V)=f(V)\}\). Any vector \(y\in B(f)\) can be represented as \(y=\sum_{t\in[m]}\alpha_{t}g_{\pi_{t}}\), where \(\pi_{t}\) are permutations and the coefficients \(\alpha\in\Delta^{[m]}\). For any \(P\subseteq V\), and \(y\in B(f)\) represented as \(y=\sum_{t\in[m]}\alpha_{t}g_{\pi_{t}}\), we define the vector \(y_{\gets P}\in\mathbb{R}^{V\setminus P}\) by restricting the vector \(\sum_{t\in[m]}\alpha_{t}g_{(\pi_{t})_{\gets P}}\) to the coordinates in \(V\setminus P\), i.e., \(y_{\gets P}\) is obtained from \(y\) by moving \(P\) to the front. In other words, \(y_{\gets P}\) is obtained by moving \(P\) to front in every permutation \(\pi_{t}\). Note that this operation depends on the permutations \(\{\pi_{t}\}_{t\in[m]}\) in \(y\)'s representation. Whenever we write \(y_{\gets P}\), it will be clear from the context what representation we are referring to when performing the move-to-front operation. ## 4 Our Framework In this section, we describe our general framework for \(k\)-sparse SFM. This framework essentially reduces the problem of \(k\)-sparse SFM to "robustly" minimizing the Lovasz extension to \(\|u_{f}\|_{\infty}/\mathsf{poly}(k)\) accuracy. 
Our framework bears resemblance to that of the previous work [4]. We build upon it and introduce several new concepts and techniques in the following subsections. In Section 4.1, we discuss arc constraints and a natural extension of a submodular function (in accord with a given set of arc constraints), which is encapsulated through a general data structure we call the _extension maintainer_; then in Section 4.2, we describe the meta algorithm behind our parallel and sequential algorithms (presented in Sections 5 and 6, respectively) for \(k\)-sparse SFM. Our meta algorithm contains two critical procedures, Dimensionality-Reduction and Arc-Finding, whose implementations differ for our parallel and sequential algorithms. Both implementations leverage a common concept we introduce, called a _sparse dual certificate_, which might have broader SFM applications. We define sparse dual certificates and discuss their relationship to dimensionality reduction and arc finding in Section 4.3. Henceforth, whenever we use the word "algorithm" in this section, we refer to our meta algorithm, unless specified otherwise.

### Arc Information and Extension Maintainer

Arc Information and Extension. Our algorithm proceeds by finding elements \(p,q\in V\) such that any \(k\)-sparse minimizer of the submodular function \(f\) that contains \(p\) must also contain \(q\). We call such a constraint an _arc_ \((p,q)\) and note that arcs are transitive, i.e., if \((p,q)\) and \((q,w)\) are arcs, then \((p,w)\) is an arc. The use of arc information10 for SFM was introduced by Iwata, Fleischer, and Fujishige [13] and was later used in many other SFM algorithms, e.g., [14, 15, 16]. For any element \(p\), the set of all endpoints of arcs from \(p\) is denoted as \(p^{\downarrow}\), where we adopt the convention that \(p\in p^{\downarrow}\). To capture all the arc information, we introduce the function \(f^{\sharp}:2^{V}\to\mathbb{R}\) (see Definition 7.1) which satisfies the following properties. We defer the details to Section 7.

Footnote 10: The arc definition we use here is slightly different from the standard definition in the literature [13, 14, 15, 16]. In the literature, an arc \((p,q)\) means that any minimizer of \(f\) containing \(p\) must contain \(q\), while we only require that this holds for \(k\)-sparse minimizers.

**Lemma 4.1** (Properties of Extension \(f^{\sharp}\)).: _Let \(f\) be a submodular function with a \(k\)-sparse minimizer. Then, the following properties hold for the extension \(f^{\sharp}\):_ 1. \(f^{\sharp}\) _is a submodular function,_ 2. \(f^{\sharp}(S)\geq f(S^{\sharp})\) _for any set_ \(S\subseteq V\)_, where_ \(S^{\sharp}\) _is the unique maximal subset of_ \(S\) _consistent with all the arcs;_ \(f^{\sharp}(S)=f(S)\) _for any set_ \(S\subseteq V\) _that is consistent with all the arcs,_ 3. _Any_ \(k\)_-sparse minimizer of_ \(f\) _is also a_ \(k\)_-sparse minimizer of_ \(f^{\sharp}\)_; for any minimizer_ \(S^{*}\) _of_ \(f^{\sharp}\)_, the maximal subset of_ \(S^{*}\) _consistent with all the arcs is a minimizer of_ \(f\)_,_ 4. _For any_ \(p\in V\) _such that_ \(u_{p}\stackrel{{\text{def}}}{{=}}f(p^{\downarrow})-f(p^{\downarrow}\setminus\{p\})\geq 0\)_, we have_ \(u_{p}=(u_{f^{\sharp}})_{p}\stackrel{{\text{def}}}{{=}}f^{\sharp}(\{p\})-f^{\sharp}(\emptyset)\)_,_ 5.
_When new arcs are added, the value of_ \(u_{p}\) _does not increase for any_ \(p\in V\)_._

Note that the second half of property 3 in Lemma 4.1 implies that in order to find a minimizer of the submodular function \(f\) that is consistent with all the arcs, it suffices to minimize the submodular extension \(f^{\sharp}\). In particular, any element that belongs to every minimizer of \(f^{\sharp}\) must also belong to every minimizer of \(f\). Moreover, by our definition of arcs, the first half of property 3 in Lemma 4.1 indicates that any arc for the submodular extension \(f^{\sharp}\) is also an arc for \(f\). Therefore, to deduce either arcs or dimensionality reduction, it suffices for our algorithm to find new arcs or a dimensionality reduction for the submodular extension \(f^{\sharp}\) corresponding to the current set of arcs.

Maintaining the Extension. Throughout the course of running the algorithm, there are instances where we can discard some elements, namely when we conclude that they are not part of the minimal minimizer. Note that if \(|p^{\downarrow}|>k\), then \(p\) cannot belong to any \(k\)-sparse minimizer of \(f\) and thus can be discarded. Let \(D\subseteq V\) be the set of all elements discarded in this way. Note that if an element \(p\) has an arc to any discarded element \(q\in D\), then \(p\) should also be discarded. Therefore, discarding elements does not affect the value of \(u_{p}\) for any non-discarded element \(p\). Thus it suffices to consider the extension \(f^{\sharp}\) of the function \(f\) restricted to \(V\setminus D\).

The first component of our algorithm is a data structure that maintains the set of discarded elements \(D\), all the arc information for elements in \(V\setminus D\), as well as access to the submodular extension \(f^{\sharp}\) of \(f\) restricted to \(V\setminus D\). By Lemma 4.1, the submodular extension \(f^{\sharp}\) captures all the arc information, i.e., \(p^{\downarrow}\subseteq V\) for all \(p\in V\setminus D\). The following theorem summarizes the operations and costs for the extension maintainer data structure. We defer the details and proofs to Section 7.

**Theorem 4.2** (Extension Maintainer).: _Given a submodular function \(f:2^{V}\to\mathbb{R}\) with \(n=|V|\) accessed through an evaluation oracle \(\mathsf{EO}\), there is a data structure that maintains the set of discarded elements \(D\), all the arcs for elements in \(V\setminus D\), the value \(u_{p}\) for all \(p\in V\setminus D\), and the corresponding submodular extension \(f^{\sharp}:2^{V\setminus D}\to\mathbb{R}\) through the following operations:_ 1. \(\mathsf{Init}(V,k,f)\)_: takes_ \(O(1)\)_-depth,_ \(O(n\cdot\mathsf{EO}+n)\) _time to initialize the data structure._ 2. \(\mathsf{Update}(\{S_{p}\}_{p\in V})\)_: takes_ \(O(k)\)_-depth,_ \(O(m\cdot\mathsf{EO}+nk)\) _time to update the data structure, where_ \(S_{p}\) _is the set of endpoints of the new arcs from_ \(p\) _and_ \(m\) _is the number of elements_ \(p\in V\setminus D\) _(after the update) that acquire new arcs._ 3. \(\mathsf{Subgrad}(\pi)\)_: takes_ \(O(1)\)_-depth,_ \(O(n\cdot\mathsf{EO}+n)\) _time to output the subgradient of_ \(f^{\sharp}\) _restricted to_ \(V\setminus D\)_._ 4.
\(\mathsf{Partial}(i,\pi)\)_: takes_ \(O(1)\)_-depth,_ \(O(1\cdot\mathsf{EO}+n)\) _time to output the_ \(i\)_th coordinate of the subgradient of_ \(f^{\sharp}\) _restricted to_ \(V\setminus D\)_._

Apart from the above set \(D\) of discarded elements, which are not contained in any \(k\)-sparse minimizer, our algorithm also separately maintains a set of elements \(W\subseteq V\) which are contained in every \(k\)-sparse minimizer. The set of elements \(W\) can be contracted by replacing the function \(f\) with \(f_{W}:2^{V\setminus W}\to\mathbb{R}\) defined as \(f_{W}(S)\stackrel{{\mathrm{def}}}{{=}}f(S\cup W)-f(W)\). More specifically, our data structure maintains arcs only between elements in \(V\setminus(D\cup W)\), and it makes a query \(W\cup S\) to the evaluation oracle of \(f\) whenever the algorithm requests to query \(S\subseteq V\setminus(D\cup W)\) for \(f_{W}\); when computing subgradients for \(f_{W}\), our data structure always places the elements in \(W\) at the front of the permutation and computes the corresponding subgradient for \(f\). As we have assumed that the submodular function \(f\) has a \(k\)-sparse minimizer, the number of elements in the contracted set \(W\) never exceeds \(k\) during the execution of our algorithms.

### Meta Algorithm

In this section, we present our meta algorithm in Algorithm 1, which is common to both our parallel and sequential algorithms for Theorems 1.1-1.3. As mentioned earlier, the difference between our parallel and sequential algorithms lies in their different implementations of the procedures Dimensionality-Reduction and Arc-Finding in Algorithm 1. The guarantees of these two procedures are summarized as follows.

* \(\mathsf{Dimensionality}\)-\(\mathsf{Reduction}(f,k)\): this procedure takes as input an integer \(k>0\) and a submodular function \(f\) with a \(k\)-sparse minimizer. If \(f^{*}\leq-\|u_{f}\|_{\infty}/6k\) (i.e., \(f^{*}\) is sufficiently negative), then \(\mathsf{Dimensionality}\)-\(\mathsf{Reduction}(f,k)\) returns a set \(T\neq\emptyset\) that belongs to every minimizer of \(f\) with high probability, i.e., what we call a _dimensionality reduction_.
* \(\mathsf{Arc}\)-\(\mathsf{Finding}(f,k,\mathsf{Scale})\): this procedure takes as input an integer \(k>0\), a submodular function \(f\) with a \(k\)-sparse minimizer, and a parameter \(\mathsf{Scale}>0\). If \(f^{*}\geq-\|u_{f}\|_{\infty}/6k\) (i.e., \(f^{*}\) is not too negative) and \(\mathsf{Scale}\geq\|u_{f}\|_{\infty}\), then, with high probability, \(\mathsf{Arc}\)-\(\mathsf{Finding}(f,k,\mathsf{Scale})\) returns a non-empty set \(S_{p}\) of endpoints of arcs from \(p\) for every element \(p\in V\) such that \((u_{f})_{p}\geq\mathsf{Scale}/2\) and \(p\) belongs to a \(k\)-sparse minimizer. (Consequently, if \((u_{f})_{p}\geq\mathsf{Scale}/2\) and \(S_{p}=\emptyset\), then \(p\) is not in any \(k\)-sparse minimizer.)

The parallel and sequential implementations of \(\mathsf{Dimensionality}\)-\(\mathsf{Reduction}\) are given in Section 5.3 and Section 6.2 respectively. The parallel and sequential implementations of \(\mathsf{Arc}\)-\(\mathsf{Finding}\) are given in Section 5.4 and Section 6.3 respectively.

The Meta Algorithm. We now describe our meta algorithm, Algorithm 1. In each iteration of the outer while loop (Line 3 of Algorithm 1), we start by checking if any element \(p\in V\setminus W\) satisfies that \((u_{f^{\sharp}})_{p}=f^{\sharp}(\{p\})<0\).
Adding such an element \(p\) to any set \(S\subseteq V\setminus(W\cup\{p\})\) decreases its value thanks to submodularity, so it must lie inside any minimizer. We can therefore contract all such elements \(p\) and assume that \(u_{f^{\sharp}}\geq 0\). Our procedures \(\mathsf{Dimensionality}\)-\(\mathsf{Reduction}\) and \(\mathsf{Arc}\)-\(\mathsf{Finding}\) always work with \(u_{f^{\sharp}}\geq 0\), as our extension maintainer automatically adds any element \(p\in V\setminus W\) with \((u_{f^{\sharp}})_{p}=f^{\sharp}(\{p\})<0\) to \(W\). Note that contracting elements does not increase any value \((u_{f^{\sharp}})_{p}>0\).

Next, we check whether11 \(f^{*}>-\|u_{f^{\sharp}}\|_{\infty}/6k\) or not using the procedure \(\mathsf{Dimensionality}\)-\(\mathsf{Reduction}\). In particular, if the minimum value \(f^{*}\) is very negative in the sense that \(f^{*}\leq-\|u_{f^{\sharp}}\|_{\infty}/6k\), then \(\mathsf{Dimensionality}\)-\(\mathsf{Reduction}\) will find a non-empty set of elements \(T\subseteq V\setminus W\) that belong to every minimizer. This set of elements \(T\) can then be contracted. Note that whenever the number of contracted elements \(|W|\) is at least \(k\), or every element in \(V\) is either contracted or discarded (i.e., \(W\cup D=V\)), the set \(W\) must be a \(k\)-sparse minimizer, so Algorithm 1 will return \(W\).

Footnote 11: For our parallel algorithm in Section 5 we actually use the threshold \(\frac{\|u_{f^{\sharp}}\|_{\infty}}{4}\) to improve the oracle complexity by a \(\mathsf{poly}(k)\) factor. This larger threshold doesn’t work for our sequential algorithm in Section 6.

On the other hand, if \(f^{*}\) is close to \(0\) in the sense that \(f^{*}>-\|u_{f^{\sharp}}\|_{\infty}/6k\), then Algorithm 1 will call the procedure \(\mathsf{Arc}\)-\(\mathsf{Finding}\) to find a set \(S_{p}\neq\emptyset\) of endpoints of arcs from any element \(p\) with large marginal \((u_{f^{\sharp}})_{p}>\|u_{f^{\sharp}}\|_{\infty}/2\). Since the value of \(\|u_{f^{\sharp}}\|_{\infty}\) does not increase after adding arcs due to Lemma 4.1, we can continue finding arcs until \(\|u_{f^{\sharp}}\|_{\infty}\) drops by more than a constant factor. Whenever \(\|u_{f^{\sharp}}\|_{\infty}\) becomes smaller than \(\epsilon/n\), the set \(V\setminus D\) must be an \(\epsilon\)-approximate minimizer for \(f^{\sharp}\), by applying the following claim to the submodular function \(f^{\sharp}:2^{V\setminus D}\to\mathbb{R}\).

**Claim 4.3**.: _Let \(f:2^{V}\to\mathbb{R}\) be a submodular function such that \(\|u_{f}\|_{\infty}\leq\epsilon/n\). Then the set \(V\) is an \(\epsilon\)-approximate minimizer of \(f\)._

Proof.: For any \(S^{*}\subseteq V\) that minimizes \(f\) we have that \(f(V)\leq f(S^{*})+\sum_{p\in V\setminus S^{*}}(u_{f})_{p}\leq f^{*}+\epsilon\).

The correctness of the meta algorithm follows from Claim 4.3 and its description above.

**Corollary 4.4** (Correctness of Meta Algorithm).: _Algorithm 1 always outputs an \(\epsilon\)-approximate minimizer of \(f\). Moreover, if Algorithm 1 outputs the set \(W\), then it is an exact minimizer of \(f\)._

Proof.: Note that, assuming the correctness of Dimensionality-Reduction and Arc-Finding, the elements in \(W\) are always in every minimizer, and the elements in \(D\) are not in any \(k\)-sparse minimizer. This holds because our Arc-Finding method is guaranteed to find at least one arc from every \(p\) that belongs to a \(k\)-sparse minimizer and has \((u_{f})_{p}\geq\mathsf{Scale}/2\).
When the while loop in Line 3 terminates because \(|W|\geq k\) or \(D\cup W=V\), the set \(W\) is a \(k\)-sparse minimizer; if the while loop ends because \(\|u_{f^{\sharp}}\|_{\infty}\leq\epsilon/n\), then by Claim 4.3, the set \(V\setminus D\) is an \(\epsilon\)-approximate minimizer of \(f^{\sharp}\), which is also an \(\epsilon\)-approximate minimizer of \(f\) by Lemma 4.1.

Note how sparsity helps us in our meta algorithm: for Dimensionality-Reduction, we can find at most \(k\) contracted elements \(W\) since \(f\) is guaranteed to have a \(k\)-sparse minimizer; for Arc-Finding, we can find at most \(k\) arcs from each element \(p\) with \((u_{f^{\sharp}})_{p}\geq\mathsf{Scale}/2\) before concluding that \(p\) does not lie in any \(k\)-sparse minimizer and can be safely discarded. This guarantees that \(\|u_{f^{\sharp}}\|_{\infty}\) has to go down by a factor of \(2\) after at most \(k\) iterations of the inner while loop in Line 8.

```
Data: Integer \(k>0\), submodular function \(f:2^{V}\to\mathbb{R}\) with a \(k\)-sparse minimizer, and accuracy \(\epsilon>0\)
Result: An \(\epsilon\)-approximate minimizer of \(f\)
1   Contracted elements \(W\leftarrow\emptyset\), discarded elements \(D\leftarrow\emptyset\)
2   Extension maintainer \(\mathsf{RingFamily}\leftarrow\mathsf{Init}(V\setminus W,k,f)\)   // \(f^{\sharp}:2^{V\setminus(D\cup W)}\to\mathbb{R}\)
3   while \(|W|<k\), \(D\cup W\neq V\), \(\|u_{f^{\sharp}}\|_{\infty}>\frac{\epsilon}{n}\) do
4       \(T\leftarrow\textsc{Dimensionality-Reduction}(f^{\sharp},k)\)   // Find dim reduction \(T\neq\emptyset\) if \(f^{*}\leq-\|u_{f^{\sharp}}\|_{\infty}/6k\)
5       if \(T\neq\emptyset\) then \(W\gets W\cup T\) and contract \(W\)   // Work in the space \(V\setminus(W\cup D)\)
6       else
7           \(\mathsf{Scale}\leftarrow\|u_{f^{\sharp}}\|_{\infty}\)   // Find arcs if \(f^{*}>-\|u_{f^{\sharp}}\|_{\infty}/6k\)
8           while \(\|u_{f^{\sharp}}\|_{\infty}>\frac{\mathsf{Scale}}{2}\) do
9               \(\{S_{p}\}_{p\in V\setminus(D\cup W)}\leftarrow\textsc{Arc-Finding}(f^{\sharp},k,\mathsf{Scale})\)   // Find arcs from \(p\) with \((u_{f^{\sharp}})_{p}\geq\mathsf{Scale}/2\)
10              for \(p\in V\setminus(D\cup W)\) with \((u_{f^{\sharp}})_{p}\geq\mathsf{Scale}/2\) and \(S_{p}=\emptyset\) do
11                  \(D\gets D\cup\{p\}\)   // Discard \(p\) if \((u_{f^{\sharp}})_{p}\geq\mathsf{Scale}/2\) and no arcs found
12              end for
13              \(\mathsf{RingFamily.Update}(\{S_{p}\}_{p\in V\setminus(D\cup W)})\)   // \(\|u_{f^{\sharp}}\|_{\infty}\) decreases after adding arcs
14          end while
15      end if
16  end while
17  return \(\arg\min_{S\in\{W,V\setminus D\}}f(S)\)   // Either \(W\) or \(V\setminus D\) will be an \(\epsilon\)-approximate minimizer
```
**Algorithm 1** Meta Algorithm

### Sparse Dual Certificate

Both of our Dimensionality-Reduction and Arc-Finding procedures crucially rely on a core notion we call \((\delta,k)\) dual certificate. To motivate this concept, we recall that Edmonds' minimax theorem [1] states that

\[\max_{y\in B(f)}y_{-}(V)=\min_{S\subseteq V}f(S), \tag{2}\]

where \(B(f)\) is the base polytope, as defined in Section 3.2. In the beautiful framework established by [10], dimensionality reduction and arc information are deduced from an approximately optimal dual solution \(y\in B(f)\) that satisfies

\[y_{-}(V)\leq f^{*}\leq y_{-}(V)+\delta.\]

However, in order to find a dimensionality reduction or arc information, the approximation quality of the dual solution \(y\) is required to be \(\delta=\|u_{f}\|_{\infty}/\mathsf{poly}(n)\) in [10]. To the best of the authors' knowledge, to achieve such a small accuracy, all known gradient descent methods would take at least a comparable \(\mathsf{poly}(n)\) number of iterations, which would be prohibitively expensive.
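To make these dual objects concrete, the following is a small illustrative sketch (our own, not part of the algorithms in this paper) of how a vertex \(g_{\pi}\) of the base polytope \(B(f)\) is obtained from a permutation \(\pi\) by the standard greedy rule, together with the quantities \(y_{-}(V)\) and \(y_{-}^{k}(V)\) used above; it assumes the submodular function is given as a Python callable on frozensets with \(f(\emptyset)=0\).

```python
def greedy_vertex(f, perm):
    """Greedy/BFS rule: g[perm[i]] = f({perm[0..i]}) - f({perm[0..i-1]}).

    Uses n + 1 evaluation-oracle calls and returns a vertex g_pi of B(f),
    assuming f(frozenset()) == 0."""
    g, prev, prefix = {}, f(frozenset()), set()
    for p in perm:
        prefix.add(p)
        cur = f(frozenset(prefix))
        g[p] = cur - prev
        prev = cur
    return g


def y_minus(y, k=None):
    """y_-^k(V): sum of the (at most k) most negative coordinates of y, which
    equals the minimum of y(S) over k-sparse sets S; k=None gives y_-(V)."""
    neg = sorted(v for v in y.values() if v < 0)
    return sum(neg if k is None else neg[:k])


# Toy modular (hence submodular) example with minimizer {2} and f* = -2.
f = lambda S: len(S & {0, 1}) - 2 * (2 in S)
g = greedy_vertex(f, [2, 0, 1])
print(g, y_minus(g), y_minus(g, k=1))   # y_-(V) = y_-^1(V) = -2 = f*
```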
In our framework, we relax the quality of the approximate dual solution and, by exploiting the sparsity structure of our submodular function, show that this relaxed accuracy suffices for our dimensionality reduction and arc finding procedures. More specifically, our procedures only need the approximation error to be \(\delta=\frac{\|u_{f}\|_{\infty}}{\mathsf{poly}(k)}\) instead of \(\frac{\|u_{f}\|_{\infty}}{\mathsf{poly}(n)}\), which allows us to only run \(\mathsf{poly}(k)\) iterations of gradient descent methods.

**Definition 4.5** (\((\delta,k)\) Dual certificate).: \(y\in\mathbb{R}^{V}\) _is a \((\delta,k)\) dual certificate for submodular function \(f:2^{V}\to\mathbb{R}\) and \(\delta>0\) if_

1. \(f^{*}\leq y_{-}^{k+1}(V)+\delta\) _and_
2. \(f(S)\geq y(S)\) _for every_ \(k\)_-sparse_ \(S\subseteq V\)_._

Note that the approximation to \(f^{*}\) we use in the above definition is \(y_{-}^{k+1}(V)\) instead of \(y_{-}(V)\) in (2). This is because if \(f\) has a \(k\)-sparse minimizer \(S^{*}\), then any dual solution \(y\in B(f)\) satisfies \(y_{-}^{k}(V)\leq f^{*}\). To see this, let \(g_{\pi}\in B(f)\) be the BFS corresponding to permutation \(\pi\). Then,

\[(g_{\pi})_{-}^{k}(V)=\min_{S\subseteq V,|S|\leq k}g_{\pi}^{\top}\vec{1}_{S}\leq g_{\pi}^{\top}\vec{1}_{S^{*}}\leq f^{*},\]

where the last inequality follows from Lemma 63 in [11]. Since any \(y\in B(f)\) is a convex combination of BFSs, we have \(y_{-}^{k}(V)\leq f^{*}\). This suggests that the quantity \(y_{-}^{k}(V)\) is intuitively a natural dual characterization of \(f^{*}\) in the sparse setting.

We will present efficient parallel and sequential algorithms for computing \((\delta,k)\) dual certificates in Section 5.2 and Section 6.1 respectively. For the remainder of this subsection, we discuss how the notion of \((\delta,k)\) dual certificate is useful for dimensionality reduction and arc finding.

Dimensionality Reduction. Here, we show that given a \((\delta,k)\) dual certificate for a suitably chosen \(\delta\), we can find an element \(p\in V\) that is contained in every minimizer of \(f\), i.e., a dimensionality reduction. Before doing so, we first claim that any element that lies in every \(k\)-sparse minimizer of \(f\) must also lie in every minimizer of \(f\).

**Claim 4.6**.: _Let \(f:2^{V}\to\mathbb{R}\) be a submodular function with a \(k\)-sparse minimizer. If element \(p\in V\) lies in every \(k\)-sparse minimizer of \(f\), then it must lie in every minimizer of \(f\)._

Proof.: Let \(S_{\min}\) be the unique minimal minimizer of \(f\), which satisfies \(|S_{\min}|\leq k\) by assumption. Since \(p\) lies in every \(k\)-sparse minimizer, we have \(p\in S_{\min}\), and hence \(p\in S_{\min}\subseteq S^{*}\) for any other minimizer \(S^{*}\) of \(f\).

The next lemma is key to the development of our implementations of the Dimensionality-Reduction procedure.

**Lemma 4.7**.: _Let \(f:2^{V}\to\mathbb{R}\) be a submodular function with a \(k\)-sparse minimizer such that \(f(\emptyset)=0\), and \(\delta>0\). Let \(y\in\mathbb{R}^{V}\) be a \((\delta,k)\) dual certificate for \(f\). Then every coordinate \(p\in V\) with \(y_{p}<-\delta\) must be in every minimizer of \(f\). In particular, if \(\delta<\frac{|f^{*}|}{k}\), then the index of the most negative component of \(y\) must be in every minimizer of \(f\)._

Proof.: We start with the first statement. Let \(p\in V\) be a coordinate with \(y_{p}<-\delta\) and assume for the purpose of contradiction that \(p\) does not lie in a \(k\)-sparse minimizer \(S^{*}\) of \(f\).
Then,

\[f(S^{*})\geq y(S^{*})\geq y_{-}^{k+1}(V)-y_{p}\geq f^{*}-\delta-y_{p}>f^{*},\]

contradicting the assumption that \(S^{*}\) is a \(k\)-sparse minimizer of \(f\). This implies that \(p\) belongs to every \(k\)-sparse minimizer of \(f\). Then by Claim 4.6, \(p\) must be in every minimizer of \(f\).

For the second statement, let \(S^{*}\) be any \(k\)-sparse minimizer of \(f\). Then \(y(S^{*})\leq f^{*}\), so the most negative coordinate \(p\in S^{*}\) of \(y\) must satisfy \(y_{p}\leq f^{*}/k<-\delta\). The second statement then follows from the first statement.

Thus far, our discussion has applied to both the parallel and sequential algorithms. From this point onward, the arc-finding technique described in the paragraphs that follow, as well as the corresponding lemma (Lemma 4.8), applies exclusively to the sequential algorithm.

Arc Finding. The next lemma of this subsection is crucial to our Arc-Finding procedure. It states that given a \((\delta,k)\) dual certificate \(y\) of \(f\), moving an element \(p\) to the front of the permutations corresponding to \(y\) produces a dual certificate for the contracted function \(f_{p}(\cdot):2^{V\setminus\{p\}}\to\mathbb{R}\). Then any dimensionality reduction \(S_{p}\subseteq V\setminus\{p\}\) for the function \(f_{p}\) is a set of arcs from \(p\).

**Lemma 4.8**.: _Let \(f\) be a submodular function, \(y\in B(f)\) be a \((\delta,k)\) dual certificate, and \(P\subseteq V\) be a subset. Then \(y_{\gets P}\in B(f_{P})\). Moreover, if \(P\subseteq S^{*}\), where \(S^{*}\) is a \(k\)-sparse minimizer of \(f\), then \(y(P)\leq\delta\) and \(y_{\gets P}\) is a \((\delta-y(P),k)\) dual certificate for \(f_{P}\)._

Proof.: Let \(y=\sum_{t\in[m]}\alpha_{t}g_{\pi_{t}}\) be a representation of \(y\), for coefficients \(\alpha\in\Delta^{[m]}\). Let \(y^{\prime}\stackrel{{\mathrm{def}}}{{=}}\sum_{t\in[m]}\alpha_{t}g_{(\pi_{t})_{\gets P}}\). To prove \(y_{\gets P}\in B(f_{P})\), we first note that

\[y_{\gets P}(V\setminus P)=y^{\prime}(V)-y^{\prime}(P)=f(V)-f(P)=f_{P}(V\setminus P).\]

For every \(S\subseteq V\setminus P\), we have

\[y_{\gets P}(S)=y^{\prime}(S\cup P)-y^{\prime}(P)\leq f(S\cup P)-f(P)=f_{P}(S).\]

The above two observations imply that \(y_{\gets P}\in B(f_{P})\).

Next, we show that \(y(P)\leq\delta\) for any \(P\subseteq S^{*}\), where \(S^{*}\) is a \(k\)-sparse minimizer of \(f\). This follows from

\[f^{*}\geq y(S^{*})=y(P)+y(S^{*}\setminus P)\geq y(P)+y_{-}^{k+1}(V)\geq y(P)+f^{*}-\delta,\]

where the second inequality is because \(|S^{*}\setminus P|\leq|S^{*}|\leq k\) and the last inequality uses the assumption that \(y\) is a \((\delta,k)\) dual certificate. The above implies that \(y(P)\leq\delta\).

Finally, we further show that \(y_{\gets P}\) is a \((\delta-y(P),k)\) dual certificate for \(f_{P}\) when \(P\subseteq S^{*}\). Since \(y_{\gets P}\in B(f_{P})\), we have \(y_{\gets P}(S)\leq f_{P}(S)\) for any \(S\subseteq V\setminus P\). Hence, we are left to show

\[(f_{P})^{*}\leq(y_{\gets P})_{-}^{k+1}(V\setminus P)+\delta-y(P). \tag{3}\]

Define \(T^{*}\stackrel{{\mathrm{def}}}{{=}}S^{*}\setminus P\). Note that

\[f_{P}(T^{*})=f(S^{*})-f(P)=f^{*}-f(P).\]

On the other hand, any set \(T^{\prime}\subseteq V\setminus P\) satisfies

\[f_{P}(T^{\prime})=f(T^{\prime}\cup P)-f(P)\geq f^{*}-f(P),\]

which implies that \((f_{P})^{*}=f^{*}-f(P)\). Therefore, to prove (3), it suffices to prove that

\[f^{*}-f(P)\leq(y_{\gets P})_{-}^{k+1}(V\setminus P)+\delta-y(P)=(y^{\prime})_{-}^{k+1}(V\setminus P)+\delta-y(P).
\tag{4}\]

To prove (4), we seek to compare \((y^{\prime})_{-}^{k+1}(V\setminus P)\) to \(y_{-}^{k+1}(V\setminus P)\). As we move \(P\) to the front of each permutation in the subgradient \(g_{\pi_{t}}\) to obtain \(y^{\prime}\) from \(y\), we have \(y^{\prime}_{q}\leq y_{q}\) for each coordinate \(q\in V\setminus P\) due to submodularity. Also note that \(y^{\prime}(P)-y(P)=f(P)-y(P)\) and \(y^{\prime}(V)=y(V)=f(V)\). These imply that

\[\sum_{q\in V\setminus P}(y_{q}-y^{\prime}_{q})=(y(V)-y(P))-(y^{\prime}(V)-y^{\prime}(P))=f(P)-y(P),\]

i.e., the total amount of decrease for coordinates in \(V\setminus P\) when changing from \(y\) to \(y^{\prime}\) is \(f(P)-y(P)\). In particular, the most negative \(k+1\) coordinates of \(y\) in \(V\setminus P\) can decrease by at most \(f(P)-y(P)\). Therefore, we have

\[(y^{\prime})_{-}^{k+1}(V\setminus P)\geq y_{-}^{k+1}(V\setminus P)-(f(P)-y(P))\geq f^{*}-\delta-(f(P)-y(P)),\]

where the last inequality uses the assumption that \(y\) is a \((\delta,k)\) dual certificate for \(f\). This proves (4) and completes the proof of the lemma.

## 5 Poly\((k)\)-Depth Parallel Algorithm for \(k\)-sparse SFM

In this section, we present our parallel algorithm for \(k\)-sparse SFM and prove Theorem 1.1. We apply mirror descent with an entropic regularizer to the Lovasz extension of \(f\) restricted to \(S_{k}^{V}\), the subset of vectors in \([0,1]^{V}\) with \(\ell_{1}\) norm at most \(k\), up to an accuracy \(\|u_{f}\|_{\infty}/\mathsf{poly}(k)\) to obtain a \((\delta,k)\) dual certificate for dimensionality reduction or arc finding. However, naively using mirror descent yields an algorithm whose number of iterations depends polynomially on the ratio between the \(\ell_{\infty}\)-norm of the subgradient and the accuracy we need, which may be up to \(n\) if the \(\ell_{\infty}\)-norm of the gradient is large. To get around this large dependence on \(n\), we employ a novel technique of truncating the subgradient values. We show that we can cap the negative values at some threshold depending on \(f^{*}\) and \(\|u_{f}\|_{\infty}\) so that running mirror descent up to the same desired accuracy still allows us to find a contraction or an arc.

The section is organized as follows. We start by presenting the classic mirror descent algorithm and its convergence guarantee in Section 5.1. In Section 5.2, we present our mirror descent with truncation method for obtaining \((\delta,k)\) dual certificates. Then in Section 5.3, we present our parallel implementation of Dimensionality-Reduction. Our parallel implementation of Arc-Finding is contained in Section 5.4. Finally in Section 5.5, we provide a proof of Theorem 1.1.

### Mirror Descent

Before detailing our algorithm, we first present the classic mirror descent algorithm (Algorithm 2) of Nemirovski and Yudin [20]. We refer to the excellent monograph [1] for more background and details on mirror descent. The following lemma is the standard performance guarantee of mirror descent, which is a slight adaptation12 of Theorem 4.2 in [1].

Footnote 12: The statement of Lemma 5.1 comes from the RHS of the last inequality in the proof of Theorem 4.2 in [1]. Note that the bound on regret \(\sum_{t=0}^{m-1}\langle h_{t},x_{t}-w\rangle\) does not require \(h_{t}\) to be the subgradient of \(f\) at \(x_{t}\).
```
Data: A convex function \(f\) on \(\mathbb{R}^{n}\), a convex domain \(D\subseteq\mathbb{R}^{n}_{>0}\), an initial point \(x_{0}\in D\), a step size \(\eta>0\), and number of iterations \(m\)
Result: Sequence of iterates \(\{x_{0},x_{1},\ldots,x_{m}\}\subseteq D\) which satisfies Lemma 5.1
1   Function MirrorDescent(\(f,D,x_{0},\eta,m\)):
2       for \(t=0,1,\ldots,m-1\) do
3           Compute a vector \(h_{t}\in\mathbb{R}^{n}\)   // Approximate subgradient \(h_{t}\) can depend on \(\{x_{i}\}_{i=0}^{t}\)
4           \(x_{t+1}=\arg\min_{x\in D}\eta h_{t}^{\top}x+V_{x_{t}}(x)\)   // Recall \(V_{x_{t}}(x)\stackrel{{\text{def}}}{{=}}r(x)-\left(r(x_{t})+\nabla r(x_{t})^{\top}(x-x_{t})\right)\)
5       end for
6       return \(\{x_{0},x_{1},\ldots,x_{m}\}\)
```
**Algorithm 2** Mirror Descent

**Lemma 5.1** (Mirror Descent, Theorem 4.2 in [1]).: _Let function \(f\) be convex on \(\mathbb{R}^{n}\) and \(r:\mathbb{R}^{n}\rightarrow\mathbb{R}\) be \(\rho\)-strongly convex on \(D\subseteq\mathbb{R}^{n}\) with respect to norm \(\|\cdot\|\). If the vectors \(h_{0},\ldots,h_{m-1}\) in Line 3 of Algorithm 2 satisfy \(\|h_{t}\|_{*}\leq L\) with respect to the dual norm \(\|\cdot\|_{*}\) for all iterations \(t=0,\ldots,m-1\), then for any point \(w\in D\), the iterates of Algorithm 2 satisfy_

\[\sum_{t=0}^{m-1}\langle h_{t},x_{t}-w\rangle\leq\eta\frac{L^{2}m}{2\rho}+\frac{V_{x_{0}}(w)}{\eta}.\]

### Dual Certificate via Mirror Descent with Truncation

In this subsection, we show how to efficiently compute a dual certificate as in Definition 4.5 using mirror descent with truncation. We start with the formal definition of truncation.

**Definition 5.2** (Truncation).: _Given \(s>0\), we define the function \(\operatorname{trunc}_{s}(\cdot):\mathbb{R}^{V}\rightarrow\mathbb{R}^{V}\) as_

\[(\operatorname{trunc}_{s}(g))_{p}\stackrel{{\text{def}}}{{=}}\max\{-s,g_{p}\},\ \ \forall p\in V.\]

Note that for any permutation \(\pi:[n]\to V\), the subgradient \(g_{\pi}\) satisfies \(f(S)\geq g_{\pi}(S)\) for any \(S\subseteq V\). Unfortunately, applying truncation to \(g_{\pi}\) doesn't preserve this property. However, the following claim shows that truncation does preserve a sparse counterpart of this property, i.e., the second condition in Definition 4.5.

**Claim 5.3**.: _Let \(f:2^{V}\rightarrow\mathbb{R}\) be a submodular function, \(\pi:[n]\to V\) a permutation, and \(s>0\). Let \(h\stackrel{{\text{def}}}{{=}}\operatorname{trunc}_{s}(g_{\pi})\) be the truncated subgradient. If \(f^{*}\geq-s+(k-1)\cdot\max_{p}(g_{\pi})_{p}\), then \(f(S)\geq h(S)\) for any \(k\)-sparse \(S\subseteq V\)._

Proof.: Fix any \(k\)-sparse \(S\subseteq V\) and consider two cases. The first case is when no \(q\in S\) was truncated to get \(h\). In this case \(h(S)=g_{\pi}(S)\leq f(S)\). The second case is when there exists \(q\in S\) so that \((g_{\pi})_{q}\) is truncated to get \(h\). In this case we must have \(h_{q}=-s\), and therefore

\[h(S)\leq-s+(k-1)\max_{p}(g_{\pi})_{p}\leq f^{*}\leq f(S),\]

where the second inequality follows from the assumed lower bound on \(f^{*}\).

Claim 5.3 allows us to use truncated subgradients of the Lovasz extension in Algorithm 2 to compute a \((\delta,k)\) dual certificate. A formal description of this procedure is given in Algorithm 3. The correctness and parallel depth guarantee of Algorithm 3 are given in the following Lemma 5.4.
``` Data: A submodular function \(f\), a sparsity parameter \(k\in\mathbb{Z}_{>0}\), a lower bound \(\phi\), and an accuracy parameter \(\delta>0\)// Lower bound \(-\phi\leq f^{*}\) Result: A \((\delta,k)\) dual certificate \(y\) for \(f\) 1FunctionDualCertificate(\(f,k,\phi,\delta\)): 2\(s\gets k\|u_{f}\|_{\infty}+\phi\)// Truncation threshold 3\(x_{0}\leftarrow\frac{k}{n}\cdot\overline{1}_{V}\)// Initial point 4\(m\leftarrow\frac{s^{2}k(k+1)\log n}{\delta^{2}}\)// Number of iterations 5\(\eta\leftarrow\frac{2\sqrt{k\log n}}{s\sqrt{m(k+1)}}\)// Step size 6Run MirrorDescent(\(f,S^{V}_{k+1},x_{0},\eta,m\)) with \(h_{t}\leftarrow\text{trunc}_{s}(g_{x_{t}})\) for each iteration \(t=0,\ldots,m-1\) in Line 3 of Algorithm 2 7\(y=\frac{1}{m}\sum_{t=0}^{m-1}h_{t}\)return\(y\) ``` **Algorithm 3**Mirror Descent with Truncation for Parallel Algorithm **Lemma 5.4** (Mirror Descent with Truncation).: _Given a sparsity parameter \(k\in\mathbb{Z}_{>0}\), a submodular function \(f:2^{V}\rightarrow\mathbb{R}\), a lower bound \(-\phi\leq f^{*}\), and an accuracy parameter \(\delta>0\), Algorithm 3 outputs a \((\delta,k)\) dual certificate \(y\) in \(\tilde{O}(k^{2}(k\left\|u_{f}\right\|_{\infty}+\phi)^{2}/\delta^{2})\) parallel depth and \(\tilde{O}(nk^{2}(k\left\|u_{f}\right\|_{\infty}+\phi)^{2}/\delta^{2}\cdot \mathsf{EO}+\mathsf{poly}(n,(k\|u_{f}\|_{\infty}+\phi)/\delta))\) time._ Proof.: Since \(h_{t}=\text{trunc}_{s}(g_{x_{t}})\), we have \((h_{t})_{p}\geq-s\) for each coordinate \(p\in V\). Also note that for each coordinate \(p\) such that \((h_{t})_{p}\geq 0\), we have \((h_{t})_{p}\leq(g_{x_{t}})_{p}\leq\|u_{f}\|_{\infty}\). These imply that \(\|h_{t}\|_{\infty}\leq s\) in every iteration \(t\) of MirrorDescent in Line 6 of Algorithm 3. Next, we note that the negative entropy function \(r(x)=\sum_{i\in V}x_{i}\log x_{i}\) is \(1/(k+1)\)-strongly convex13 on \(S^{V}_{k+1}\). Thus we can apply Lemma 5.1, with \(\|\cdot\|\) being the \(\ell_{1}\)-norm \(\|\cdot\|_{1}\) and the parameters \(\rho=1/(k+1)\) and \(L=s\), to obtain that for every point \(w\in S^{V}_{k+1}\), Footnote 13: This standard fact can be obtained by showing that whenever \(x\in S^{V}_{k+1}\), the Hessian \(\nabla^{2}r(x)=\mathsf{diag}(1/x_{1},\ldots,1/x_{|V|})\) satisfies \(v^{\top}\nabla^{2}r(x)v\geq\|v\|_{1}^{2}/(k+1)\) for all \(v\in\mathbb{R}^{n}\). \[\sum_{t=0}^{m-1}h_{t}^{\top}(x_{t}-w)\leq\frac{\eta s^{2}m(k+1)}{2}+\frac{V_{x _{0}}(w)}{\eta}\leq\frac{\eta s^{2}m(k+1)}{2}+\frac{2k\log n}{\eta}.\] Here, the last inequality follows because \(x_{0}=\frac{k}{n}\cdot\vec{1}_{V}\) in Algorithm 3, \(w\in S_{k}^{V}\), and by (1) that \[V_{x_{0}}(w) =\sum_{i\in V}w_{i}\log(w_{i}n/k)+\sum_{i\in V}(k/n-w_{i})\] \[\leq k\cdot\sum_{i\in V}(w_{i}/k)\log(w_{i}/k)+\sum_{i\in V}w_{i} \cdot(\log n-1)+k\leq 2k\log n.\] The best choice of \(\eta\) above is \(\eta=\frac{2\sqrt{k\log n}}{s\sqrt{m(k+1)}}\), as in Algorithm 3. Since we have also set \(m=s^{2}k(k+1)\log n/\delta^{2}\). the above bound becomes \[\frac{1}{m}\sum_{t=0}^{m-1}h_{t}^{\top}(x_{t}-w)\leq\frac{s\sqrt{k(k+1)\log n} }{\sqrt{m}}\leq\delta. \tag{5}\] Now we consider the output \(y=\frac{1}{m}\sum_{t=0}^{m-1}h_{t}\) of Algorithm 3. Note that \(h_{t}\geq g_{x_{t}}\) for every iteration \(t\), which implies that \(h_{t}^{\top}x_{t}\geq g_{x_{t}}^{\top}x_{t}\) since \(x_{t}\in[0,1]^{V}\). 
It follows that

\[f^{*}\leq\frac{1}{m}\sum_{t=0}^{m-1}f(x_{t})=\frac{1}{m}\sum_{t=0}^{m-1}g_{x_{t}}^{\top}x_{t}\leq\frac{1}{m}\sum_{t=0}^{m-1}h_{t}^{\top}x_{t},\]

where the equality above is due to Theorem 3.3. Now, pick \(w=\vec{1}_{T}\), where \(T\) is the set of the \(k+1\) most negative coordinates of \(y\), which achieves the minimum value of \(y^{\top}w=y_{-}^{k+1}(V)\). Plugging this choice of \(w\) into (5), we obtain

\[f^{*}\leq\frac{1}{m}\sum_{t=0}^{m-1}h_{t}^{\top}x_{t}\leq y^{\top}w+\delta\leq y_{-}^{k+1}(V)+\delta,\]

which gives the first condition in Definition 4.5. To obtain the second condition, note that since \(f^{*}\geq-\phi\), the choice of \(s=k\|u_{f}\|_{\infty}+\phi\) satisfies

\[f^{*}\geq-\phi=-s+k\|u_{f}\|_{\infty}\geq-s+(k-1)\cdot\max_{p}(g_{x_{t}})_{p}.\]

It then follows from Claim 5.3 that \(f(S)\geq h_{t}(S)\) for all \(k\)-sparse \(S\subseteq V\). This implies \(y(S)=\frac{1}{m}\sum_{t=0}^{m-1}h_{t}(S)\leq f(S)\) for all \(k\)-sparse \(S\) and proves that \(y\) is a \((\delta,k)\) dual certificate.

Finally, note that Algorithm 3 takes \(m=\tilde{O}(k^{2}(k\left\|u_{f}\right\|_{\infty}+\phi)^{2}/\delta^{2})\) iterations and that each \(h_{t}\) can be computed using one round of \(n\) parallel EO queries to \(f\). This implies that the parallel depth of Algorithm 3 is \(\tilde{O}(k^{2}(k\left\|u_{f}\right\|_{\infty}+\phi)^{2}/\delta^{2})\) and its runtime is \(\tilde{O}(nk^{2}(k\left\|u_{f}\right\|_{\infty}+\phi)^{2}/\delta^{2}\cdot\mathsf{EO}+\mathsf{poly}(n,(k\|u_{f}\|_{\infty}+\phi)/\delta))\).

### Dimensionality Reduction for Parallel Algorithm

In this subsection, using the algorithm for computing \((\delta,k)\) dual certificates given in Section 5.2, we give an implementation of the procedure Dimensionality-Reduction for our parallel algorithm in Algorithm 4. Recall from Section 4.2 that assuming \(f\) has a \(k\)-sparse minimizer, the procedure Dimensionality-Reduction\((f,k)\) either outputs a subset \(T\neq\emptyset\) that belongs to every minimizer of \(f\), or certifies that \(f^{*}>-\|u_{f}\|_{\infty}/4\).

Algorithm 4 aims at finding a value \(\phi>0\) such that \(-\phi\leq f^{*}\leq-\phi/2\). Since \(f^{*}\geq-(\|u_{f}\|_{1}-f(V))\) by submodularity, Algorithm 4 starts by guessing \(\phi=\phi^{(0)}\stackrel{{\mathrm{def}}}{{=}}\|u_{f}\|_{1}-f(V)\) in Line 2 and keeps decreasing the value of \(\phi\) by a factor of \(2\) in each iteration of the while loop in Line 3 until \(\phi\) falls under the threshold \(\|u_{f}\|_{\infty}/4\). If \(f^{*}\leq-\|u_{f}\|_{\infty}/4\), then along this halving process there must be an iteration \(i\) where the value \(\phi=\phi^{(i)}\) satisfies \(-\phi^{(i)}\leq f^{*}\leq-\phi^{(i)}/2\). The following claim shows that in such an iteration, the set \(T\) in Line 6 is a non-empty dimensionality reduction.

**Claim 5.5** (Correctness of Algorithm 4).: _In any iteration of the while loop of Algorithm 4, the set \(T\) in Line 6 lies in every minimizer of \(f\). Moreover, if the value \(\phi=\phi^{(i)}\) in an iteration \(i\) satisfies \(-\phi^{(i)}\leq f^{*}\leq-\phi^{(i)}/2\), then the set \(T\) in Line 6 is non-empty in that iteration._

Proof.: Since \(y\) is a \((\delta,k)\) dual certificate in each iteration of the while loop, and the set \(T\) in Line 6 consists of all elements \(p\in V\) with \(y_{p}<-\delta\), the first statement of the claim immediately follows from Lemma 4.7.
For the second statement, note that if \(-\phi^{(i)}\leq f^{*}\leq-\phi^{(i)}/2\) and \(\delta=\phi^{(i)}/3k\), it follows that \(\delta<|f^{*}|/k\). So the second part of Lemma 4.7 implies that \(T\neq\emptyset\).

Claim 5.5 implies that each non-empty set \(T\) in Line 6 found in the while loop lies in every minimizer of \(f\). If no such non-empty set \(T\) is found throughout the while loop, then by Claim 5.5 this means \(f^{*}>-\|u_{f}\|_{\infty}/4\), so Algorithm 4 will simply output \(\emptyset\).

```
Data: A sparsity parameter \(k\), and a submodular function \(f\) with a \(k\)-sparse minimizer
Result: A subset \(T\subseteq V\) that must be in every minimizer of \(f\), or \(T=\emptyset\) certifying that \(f^{*}>-\|u_{f}\|_{\infty}/4\)
1   Function Dimensionality-Reduction(\(f,k\)):
2       \(\phi\leftarrow\|u_{f}\|_{1}-f(V)\)   // Lower bound \(f^{*}\geq-(\|u_{f}\|_{1}-f(V))\)
3       while \(\phi\geq\|u_{f}\|_{\infty}/4\) do   // Implement while loop in parallel
4           \(\delta=\frac{\phi}{3k}\)
5           \(y\leftarrow\textsc{DualCertificate}(f,k,\phi,\delta)\)   // \(y\) is a \((\delta,k)\) dual certificate
6           \(T\leftarrow\{p:y_{p}<-\delta\}\)
7           if \(T\neq\emptyset\) then return \(T\)   // When \(-\phi\leq f^{*}\leq-\phi/2\) then \(T\neq\emptyset\)
8           \(\phi\leftarrow\phi/2\)   // \(T=\emptyset\) indicates \(f^{*}>-\phi/2\)
9       end while
10      return \(\emptyset\)
```
**Algorithm 4** Dimensionality Reduction for Parallel Algorithm

The correctness and runtime guarantees of Algorithm 4 are formally given in the following lemma.

**Lemma 5.6** (Dimensionality-Reduction for Parallel Algorithm).: _Let \(k\in\mathbb{Z}_{>0}\) and \(f:2^{V}\rightarrow\mathbb{R}\) be a submodular function with a \(k\)-sparse minimizer. Then Algorithm 4 outputs a set \(T\subseteq V\) that lies in every minimizer of \(f\), and \(T=\emptyset\) implies \(f^{*}>-\|u_{f}\|_{\infty}/4\). Moreover, the algorithm uses \(\tilde{O}(k^{6})\) parallel depth and runs in time \(\tilde{O}(nk^{6}\cdot\mathsf{EO}+\mathsf{poly}(n))\)._

Proof.: The correctness of Algorithm 4 follows immediately from Claim 5.5, so we only need to prove the bound on depth and runtime. By Lemma 5.4, the number of parallel rounds due to \(\mathsf{DualCertificate}\) calls is

\[\tilde{O}(k^{2}(k\left\|u_{f}\right\|_{\infty}+\phi)^{2}/\delta^{2})=\tilde{O}\left(k^{4}\left(\frac{k\left\|u_{f}\right\|_{\infty}+\phi}{\phi}\right)^{2}\right)\leq\tilde{O}(k^{6}),\]

where the last inequality is because we always have \(\phi\geq\|u_{f}\|_{\infty}/4\) and we always run with accuracy parameter \(\delta=\frac{\phi}{3k}\). Note that there are at most \(\log((\|u_{f}\|_{1}-f(V))/\|u_{f}\|_{\infty})=\tilde{O}(1)\) iterations of the while loop and these can be implemented in parallel. It follows that Algorithm 4 also only uses \(\tilde{O}(k^{6})\) parallel depth, \(\tilde{O}(nk^{6})\) queries to \(\mathsf{EO}\), and \(\mathsf{poly}(n)\) runtime.

### Arc Finding for Parallel Algorithm

In this subsection, we describe the procedure \(\mathsf{Arc}\)-\(\mathsf{Finding}\) for the parallel algorithm (formally given in Algorithm 5), which is based on the procedure \(\mathsf{Dimensionality}\)-\(\mathsf{Reduction}\) from the previous Section 5.3.
For each element \(p\in V\) such that \((u_{f^{\sharp}})_{p}\geq\mathsf{Scale}/2\), Algorithm 5 simply runs \(\mathsf{Dimensionality}\)-\(\mathsf{Reduction}\) in Algorithm 4 with the contracted submodular function \(f^{\sharp}_{p^{\downarrow}}:2^{V\setminus p^{\downarrow}}\to\mathbb{R}\), which was defined in Section 3.2 as \[f^{\sharp}_{p^{\downarrow}}(S)\stackrel{{\text{\tiny def}}}{{=}}f^ {\sharp}(S\cup p^{\downarrow})-f^{\sharp}(p^{\downarrow}),\] and the remaining sparsity \(k-|p^{\downarrow}|\). This subset \(S_{p}\) returned from \(\mathsf{Dimensionality}\)-\(\mathsf{Reduction}\) will be the set of endpoints of arcs we found for the element \(p\). ``` Data: A sparsity parameter \(k\), extension \(f^{\sharp}\) of a submodular function \(f\) with a \(k\)-sparse minimizer, and parameter \(\mathsf{Scale}\geq\|u_{f^{\sharp}}\|_{\infty}\geq-4f^{*}\)// \(\mathsf{Arcs}\) are input through \(f^{\sharp}\) Result: A non-empty set \(S_{p}\subseteq V\) of endpoints of arcs from every \(p\in V\) s.t. \((u_{f^{\sharp}})_{p}\geq\mathsf{Scale}/2\) 1Function\(\mathsf{Arc}\)-\(\mathsf{Finding}(f^{\sharp},k,\mathsf{Scale})\): for\(p\in V\)do// Implement the for loop in parallel 2if\((u_{f^{\sharp}})_{p}\geq\mathsf{Scale}/2\)then 3\(S_{p}\leftarrow\mathsf{Dimensionality}\)-\(\mathsf{Reduction}(f^{\sharp}_{p^{\downarrow}},k-|p^{\downarrow}|)\)// Find arcs from \(p\) with large \((u_{f^{\sharp}})_{p}\) 4 end if 5else\(S_{p}\leftarrow\emptyset\) 6 end if 7return\(\{S_{p}\}_{p\in V}\) ``` **Algorithm 5**Arc Finding for Parallel Algorithm The following lemma summarizes the guarantees and runtime of the procedure \(\mathsf{Arc}\)-\(\mathsf{Finding}\). **Lemma 5.7** (\(\mathsf{Arc}\)-\(\mathsf{Finding}\) for Parallel Algorithm).: _Let \(k\in\mathbb{Z}_{>0}\), \(f:2^{V}\to\mathbb{R}\) be a submodular function with a \(k\)-sparse minimizer, \(f^{\sharp}\) be its extension w.r.t. a ring family \(\mathcal{F}\) that is consistent with all its \(k\)-sparse minimizers, and \(\mathsf{Scale}\geq\|u_{f^{\sharp}}\|_{\infty}\geq-4f^{*}\). Then Algorithm 5 outputs, for every \(p\in V\) with \((u_{f^{\sharp}})_{p}\geq\mathsf{Scale}/2\) such that \(p\) belongs to a \(k\)-sparse minimizer, a non-empty set \(S_{p}\subseteq V\setminus p^{\downarrow}\) of endpoints of arcs from \(p\). Moreover, the algorithm uses \(\tilde{O}(k^{6})\) parallel depth and \(\tilde{O}(n^{2}k^{6}\cdot\mathsf{EO}+\mathsf{poly}(n))\) time._ Proof.: We start with the first statement of the lemma. Fix any element \(p\in V\) that lies in a \(k\)-sparse minimizer \(S^{*}\) such that \((u_{f^{\sharp}})_{p}\geq\mathsf{Scale}/2\). By Lemma 4.1, we have \[(u_{f^{\sharp}})_{p}=f(p^{\downarrow})-f(p^{\downarrow}\setminus\{p\})=f^{\sharp} (p^{\downarrow})-f(p^{\downarrow}\setminus\{p\}),\] where the last equality follows since \(p^{\downarrow}\in\mathcal{F}\) is in the ring family. Since \(f^{*}\geq-\mathsf{Scale}/4\) by assumption, we have \(f(p^{\downarrow}\setminus\{p\})\geq-\mathsf{Scale}/4\) and therefore, \[f^{\sharp}(p^{\downarrow})=(u_{f^{\sharp}})_{p}+f(p^{\downarrow}\setminus\{p \})\geq\mathsf{Scale}/4.\] We also have for any element \(q\in V\setminus p^{\downarrow}\), \[(u_{f^{\sharp}_{p^{\downarrow}}})_{q}=f^{\sharp}_{p^{\downarrow}}(\{q\})-f^{ \sharp}_{p^{\downarrow}}(\emptyset)=f^{\sharp}(\{q\}\cup p^{\downarrow})-f^{ \sharp}(p^{\downarrow})\leq(u_{f^{\sharp}})_{q}\leq\mathsf{Scale},\] where the first inequality uses submodularity, and the last inequality is by our assumption that \(\|u_{f^{\sharp}}\|_{\infty}\leq\mathsf{Scale}\). 
It then follows that assuming \(u_{f^{\sharp}_{p^{\downarrow}}}\geq 0\), the contracted function \(f^{\sharp}_{p^{\downarrow}}\) satisfies (1) \(f^{\sharp}_{p^{\downarrow}}\) has a \((k-|p^{\downarrow}|)\)-sparse minimizer \(S^{*}\setminus p^{\downarrow}\), and (2) its minimum value \((f^{\sharp}_{p^{\downarrow}})^{*}\leq-\mathsf{Scale}/4\leq-\|u_{f^{\sharp}_{p^{ \downarrow}}}\|_{\infty}/4\). Lemma 5.6 then implies that a non-empty set \(S_{p}\subseteq V\setminus p^{\downarrow}\) of endpoints of arcs from \(p\) can be found for such an element \(p\) using \(\widetilde{O}(k^{6})\) parallel depth and \(\widetilde{O}(nk^{6}\cdot\mathsf{EO}+\mathsf{poly}(n))\) runtime. The second statement of the lemma follows immediately since for each one of the \(n\) elements \(p\in V\), the for loop in Algorithm 5 can be implemented in parallel. This proves the lemma. ### Putting It All Together: Proof of Theorem 1.1 Finally, we are ready to prove Theorem 1.1, which we restate below for convenience. **Theorem 1.1** (Parallel \(k\)-sparse SFM).: _There is a deterministic parallel algorithm for \(k\)-sparse SFM with parallel depth \(\widetilde{O}(k^{7}\cdot\log(|f|/\epsilon))\) and runtime \(\widetilde{O}(n^{2}\cdot k^{7}\log(|f|/\epsilon)\cdot\mathsf{EO}+\mathsf{poly }(n)\cdot\log(|f|/\epsilon))\)._ Proof of Theorem 1.1.: Consider the meta algorithm (Algorithm 1) with procedures \(\mathsf{Dimensionality}\)-\(\mathsf{Reduction}\) and \(\mathsf{Arc}\)-\(\mathsf{Finding}\) given in Algorithms 4 and 5. The correctness of Algorithm 1 is already given in Corollary 4.4 so we only need to analyze its parallel depth and runtime. Note that in each iteration of the outer while loop in Line 3 of Algorithm 1, one of the following three things will happen: (1) the size of the contracted elements \(W\) will increase due to \(\mathsf{Dimensionality}\)-\(\mathsf{Reduction}\) in Line 4 when \(f^{*}\leq-\|u_{f^{\sharp}}\|_{\infty}/4\), or (2) the size of the contracted elements \(W\) will increase because there exists element \(p\in V\setminus(W\cup D)\) such that \((u_{f^{\sharp}})_{p}<0\) in Line 3 of Algorithm 11 (extension maintainer), or (3) \(\|u_{f^{\sharp}}\|_{\infty}\) will decrease by a factor of \(2\), and a set \(S_{p}\) of endpoints of arcs is found for every element \(p\in V\setminus(D\cup W)\) with \((u_{f^{\sharp}})_{p}>\mathsf{Scale}/2\) in the while loop in Line 8 when \(f^{*}>-\|u_{f^{\sharp}}\|_{\infty}/4\). Note that (1) and (2) can happen at most \(k\) times before \(|W|\geq k\) and Algorithm 1 outputs \(W\); (3) can happen at most \(\log(|f|n/\epsilon)\) times before \(\|u_{f^{\sharp}}\|_{\infty}\leq\epsilon/n\). So the total number of iterations of the while loop in Line 3 will be at most \(O(k+\log(|f|n/\epsilon))\). We next bound the parallel depth and runtime due to \(\mathsf{Dimensionality}\)-\(\mathsf{Reduction}\). Note that the total number of times \(\mathsf{Dimensionality}\)-\(\mathsf{Reduction}\) is called in Line 4 is at most \(k+\log(|f|n/\epsilon)\) by the above. By Lemma 5.6, each \(\mathsf{Dimensionality}\)-\(\mathsf{Reduction}\) for (1) can be done in \(\widetilde{O}(k^{6})\) depth and runtime \(\widetilde{O}(nk^{6}\cdot\mathsf{EO}+\mathsf{poly}(n))\). So the total depth and runtime due to \(\mathsf{Dimensionality}\)-\(\mathsf{Reduction}\) is \(\widetilde{O}(k^{7}\log(|f|/\epsilon))\) and \(\widetilde{O}(nk^{7}\cdot\mathsf{EO}+\mathsf{poly}(n))\cdot\log(|f|/\epsilon)\) respectively. Next, we bound the parallel depth and runtime due to Arc-Finding. 
By Lemma 5.7, each call to Arc-Finding takes \(\widetilde{O}(k^{6})\) parallel depth and runtime \(\widetilde{O}(n^{2}k^{6}\cdot\mathsf{EO}+\mathsf{poly}(n))\). We perform at most \(k\) calls to Arc-Finding before \(\|u_{f^{\sharp}}\|_{\infty}\) decreases by a factor of \(2\), so the total depth and runtime due to Arc-Finding are \(\widetilde{O}(k^{7}\log(|f|/\epsilon))\) and \(\widetilde{O}(n^{2}k^{7}\cdot\mathsf{EO}+\mathsf{poly}(n))\cdot\log(|f|/\epsilon)\) respectively.

Finally, note that each update to the RingFamily can be implemented in \(\widetilde{O}(k)\) depth and \(O(m\cdot\mathsf{EO}+nk)\) time where \(m\) is the total number of elements \(p\) from which arcs are found. Combining everything above, Algorithm 1 finds an \(\epsilon\)-approximate minimizer for \(k\)-sparse SFM in parallel depth \(\widetilde{O}(k^{7}\log(|f|/\epsilon))\) and runtime \(\widetilde{O}(n^{2}k^{7}\log(|f|/\epsilon)\cdot\mathsf{EO}+\mathsf{poly}(n)\cdot\log(|f|/\epsilon))\).

## 6 \(\widetilde{O}(n\cdot\mathsf{poly}(k))\)-Query Algorithm for \(k\)-sparse SFM

In this section, we present our randomized sequential algorithm and prove Theorems 1.2 and 1.3. The section is organized as follows. In Section 6.1, we present our optimization method for computing \((\delta,k)\) dual certificates via a stochastic version of the follow-the-regularized-leader14 (FTRL) algorithm (e.g., [20, 21]) where the subgradient implementation is tailored to submodular structure. In Section 6.2 and Section 6.3, we present our sequential implementations of the subprocedures Dimensionality-Reduction and Arc-Finding in Algorithm 1 respectively. Finally, in Section 6.4, we present the proofs of Theorems 1.2 and 1.3.

Footnote 14: The FTRL algorithm is also referred to as lazy mirror descent or dual averaging in the literature.

### Dual Certificate via Stochastic Follow-the-Regularized-Leader

In this subsection, we present our sequential algorithm (see Algorithms 6 and 7) that efficiently computes a \((\delta,k)\) dual certificate. The main result of this subsection is the following theorem.

**Theorem 6.1** (Stochastic Dual Certificate).: _Let \(f\) be a submodular function, \(k\in\mathbb{Z}_{>0}\) a sparsity parameter, \(\phi\geq|f^{*}|\) and \(0<\delta\leq\phi\) an accuracy parameter. Assuming \(\phi=\Omega(\|u_{f}\|_{\infty}/k)\), Algorithm 6 outputs a set of permutations \(\{\pi^{(t)}\}_{t\in[m]}\) in time \(\widetilde{O}(m)\cdot(\mathsf{EO}+\mathsf{poly}(n))\), where_

\[m=\widetilde{O}(k^{6}\delta^{-4}\phi^{2}(\|u_{f}\|_{\infty}+\phi)(\|u_{f}\|_{1}+\phi)),\]

_such that \(y\stackrel{{\mathrm{def}}}{{=}}\frac{1}{m}\sum_{t\in[m]}g_{\pi^{(t)}}\) is a \((\delta,k)\) dual certificate for \(f\) with high probability._

The idea behind Theorem 6.1 is to run a variant of stochastic follow-the-regularized-leader (stochastic FTRL) with an entropy regularizer whose process of generating stochastic subgradients is tailored to submodular structure (without subgradient truncation). While such a method, without controlling the \(\ell_{\infty}\)-norm of the subgradient via truncation as was done in the previous section, might seem too slow, a more careful local norm analysis of the variance (Lemma 6.3) shows how this can be made to work. In particular, one of the main novelties of our algorithm is a sampling method we call vSampling (Definition 6.2), which helps us sample low-variance unbiased stochastic subgradients in only \(\widetilde{O}(1)\) queries.
We emphasize that while Algorithm 3 in Section 5 returns a \((\delta,k)\) dual certificate explicitly, Algorithm 6 only returns a dual certificate implicitly as a set of permutations \(\{\pi^{(t)}\}_{t\in[m]}\) with the property that \(y\stackrel{{\mathrm{def}}}{{=}}\frac{1}{m}\sum_{t\in[m]}g_{\pi^{(t)}}\) is a \((\delta,k)\) dual certificate with high probability. This is because computing the dual certificate \(y\) from the set of permutations \(\{\pi^{(t)}\}_{t\in[m]}\) takes \(\widetilde{O}(mn)\) evaluation oracle queries, where \(m\) can be bigger than \(n\). Nevertheless, as we show later (see Section 6.3), it suffices to sample coordinates of each \(g_{\pi^{(t)}}\) to access the dual certificate much more efficiently.

Now we formally describe Algorithm 6. This algorithm repeatedly runs Algorithm 7 on the Lovasz extension and outputs the union of all permutations corresponding to the iterates generated. As discussed earlier, Algorithm 7 is a variant of the stochastic FTRL algorithm (Algorithm 12), with the stochastic approximate subgradients generated via the \(\mathsf{vSampling}\) method (Definition 6.2) in Lines 7-8. The convergence guarantee for executing Algorithm 7 once is a statement that holds in expectation (Lemma 6.4). The reason behind calling Algorithm 7 repeatedly in Algorithm 6 is to obtain an analogous guarantee with high probability, which is crucial to proving that the average of the subgradients output by Algorithm 6 is a sparse dual certificate.

```
Data: A sparsity parameter \(k\in\mathbb{Z}_{>0}\), a submodular function \(f\) with \(u_{f}\in\mathbb{R}_{\geq 0}^{V}\), a parameter \(\phi\geq|f^{*}|\), and an accuracy parameter \(0<\delta\leq\phi\)
Result: A set of \(m=O(k^{6}\delta^{-4}\phi^{2}(\|u_{f}\|_{\infty}+\phi)(\|u_{f}\|_{1}+\phi))\) permutations \(\{\pi^{(t)}\}_{t\in[m]}\) s.t. \(y=\frac{1}{m}\sum_{t\in[m]}g_{\pi^{(t)}}\) is a \((\delta,k)\) dual certificate whp.
1   Function StochDualCertificate(\(f,k,\phi,\delta\)):
2       \(N\leftarrow\widetilde{O}(\delta^{-2}k^{5}\phi^{2})\)
3       for \(\ell=1,\ldots,N\) do
4           \(\mathcal{C}_{\ell}\leftarrow\textsc{SubmodularFTRL}(f,k,\phi,\delta/2)\)   // \(\mathcal{C}_{\ell}\) is a collection of permutations
5       end for
6       return \(\bigcup_{\ell\in[N]}\mathcal{C}_{\ell}\)
```
**Algorithm 6** Stochastic Dual Certificate for Sequential Algorithm

Now we formally define the \(\mathsf{vSampling}\) method that is used to generate the stochastic subgradients \(h_{t}\) in Lines 7-8 of Algorithm 7.

**Definition 6.2** (The \(\mathsf{vSampling}\) method).: _Given a submodular function \(f:2^{V}\to\mathbb{R}\) such that \(u_{f}\in\mathbb{R}^{V}_{+}\) and a point \(x\in S^{V}_{k}\). Define vector \(v\in\mathbb{R}^{V}_{+}\) as \(v_{i}\stackrel{{\mathrm{def}}}{{=}}2(u_{f})_{i}-(g_{x})_{i}\). Then \(\mathsf{vSampling}(f,x)\) samples a coordinate \(j\in V\) with probability \(p_{j}\stackrel{{\mathrm{def}}}{{=}}\frac{v_{j}}{\|v\|_{1}}\) and returns the random vector \(\vec{1}_{j}\cdot(g_{x})_{j}p_{j}^{-1}\)._

Let us briefly mention the motivation behind the \(\mathsf{vSampling}\) method defined above. It is known that if one wants to sample a \(1\)-sparse unbiased estimator \(h\) of the subgradient vector \(g_{x}\in\mathbb{R}^{V}\), then sampling a coordinate \(j\in V\) proportional to \(|(g_{x})_{j}|\) achieves the smallest second moment15\(\mathbb{E}[\|h\|_{\infty}^{2}]\). However, this sampling method requires explicitly computing the values of all \(|(g_{x})_{j}|\), which takes \(O(n)\) queries and is unfortunately too expensive.
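For intuition, the following is a naive sketch (our own illustration; the submodular function is assumed to be a Python callable on frozensets and the vector \(u_{f}\) is assumed to be given explicitly as a dictionary) of the estimator in Definition 6.2. It spends \(O(n)\) oracle calls to form the whole subgradient \(g_{x}\); the point of the actual implementation, discussed next, is to produce the same distribution with only \(O(\log n)\) queries.

```python
import random

def lovasz_subgradient(f, x):
    """Subgradient g_x of the Lovasz extension at x: the greedy/BFS vector for
    the permutation sorting the coordinates of x in decreasing order."""
    perm = sorted(x, key=x.get, reverse=True)
    g, prev, prefix = {}, f(frozenset()), set()
    for p in perm:
        prefix.add(p)
        cur = f(frozenset(prefix))
        g[p] = cur - prev
        prev = cur
    return g

def v_sampling_naive(f, x, u):
    """Naive vSampling: sample j with probability v_j / ||v||_1 where
    v_j = 2*u_j - (g_x)_j, and return a 1-sparse unbiased estimator of g_x."""
    g = lovasz_subgradient(f, x)           # O(n) oracle calls; the real method avoids this
    v = {p: 2 * u[p] - g[p] for p in x}    # v_p >= |(g_x)_p| since (g_x)_p <= u_p and u_p >= 0
    total = sum(v.values())
    j = random.choices(list(v), weights=list(v.values()))[0]
    return {j: g[j] * total / v[j]}        # averaging over j recovers g_x in expectation
```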
The main purpose of oversampling (with probabilities proportional to \(v_{i}=2(u_{f})_{i}-(g_{x})_{i}\) in \(\mathsf{vSampling}\) is to make the sampling procedure more efficient while not significantly increasing the second moment. In particular, while it is prohibitively expensive to compute all the \(|(g_{x})_{i}|\), one can efficiently compute \(\sum_{i\in I}(g_{x})_{i}\) for a consecutive block of coordinates \(I\) in the permutation \(\pi_{x}\) using \(O(1)\) queries. Therefore, leveraging a binary search idea, one can efficiently sample proportional to \(v_{i}\) by computing \(\sum_{i\in I}v_{i}\) for \(O(\log n)\) consecutive blocks \(I\), each of which takes only \(O(1)\) queries. Footnote 15: It doesn’t matter which norm we measure \(h\) here since it is \(1\)-sparse. Formally, the following lemma provides upper bounds on the local norm of the stochastic subgradients generated by the \(\mathsf{vSampling}\) method and the runtime of the method. **Lemma 6.3**.: _Let \(f:2^{V}\to\mathbb{R}\) be a submodular function such that \(u_{f}\in\mathbb{R}^{V}_{+}\), \(\phi>0\) satisfies \(|f^{*}|\leq\phi\), and \(x\in S^{V}_{k}\). Then \(\mathsf{vSampling}(f,x)\) given in Definition 6.2 outputs a \(1\)-sparse random vector \(h\stackrel{{\mathrm{\tiny def}}}{{=}}\vec{1}_{j}(g_{x})_{j}p_{j}^ {-1}\) that satisfies \(\mathbb{E}[h]=g_{x}\) and_ \[\mathbb{E}[\|h\|_{x}^{2}]\leq(2k\|u_{f}\|_{\infty}-f^{*})(2\|u_{f}\|_{1}-f(V)) \leq U_{\infty}\cdot U_{1}\,,\] _where \(U_{\infty}\stackrel{{\mathrm{\tiny def}}}{{=}}2k\|u_{f}\|_{\infty }+\phi\) and \(U_{1}\stackrel{{\mathrm{\tiny def}}}{{=}}2\|u_{f}\|_{1}+\phi\). Moreover, given the vector \(u_{f}\) explicitly, \(\mathsf{vSampling}\) can be implemented in time \(O(\log n\cdot(\mathsf{EO}+n))\) time._ Proof of Lemma 6.3.: First, note that \[\mathbb{E}[h]=\sum_{j\in V}p_{j}\cdot\vec{1}_{j}(g_{x})_{j}p_{j}^{-1}=g_{x}.\] Hence, it suffices to prove the bound on the variance of the stochastic subgradient \(h\). Note that \[\mathbb{E}[\|h\|_{x}^{2}]=\sum_{i\in V}p_{i}x_{i}((g_{x})_{i}p_{i}^{-1})^{2}= \sum_{i\in V}x_{i}|(g_{x})_{i}|\cdot\frac{|(g_{x})_{i}|}{p_{i}}\leq|g_{x}|^{ \top}x\cdot\max_{i\in V}\frac{|(g_{x})_{i}|}{p_{i}}\,,\] where \(|g_{x}|\) is the vector with \((|g_{x}|)_{i}\stackrel{{\mathrm{\tiny def}}}{{=}}|(g_{x})_{i}|\) for each coordinate \(i\in V\). Let vector \(v\in\mathbb{R}^{V}\) be given by \(v_{i}\stackrel{{\mathrm{\tiny def}}}{{=}}2(u_{f})_{i}-(g_{x})_{i}\) for each \(i\in V\), for which \(p_{i}=\frac{v_{i}}{\|v\|_{1}}\). Since \(v_{i}\geq|(g_{x})_{i}|\) for all \(i\in V\), we have \(\max_{i\in V}\frac{|(g_{x})_{i}|}{p_{i}}=\max_{i\in V}\frac{|(g_{x})_{i}|\cdot \|v\|_{1}}{v_{i}}\leq\|v\|_{1}\). Hence, we obtain \[\mathbb{E}[\|h\|_{x}^{2}]\leq|g_{x}|^{\top}x\cdot\|v\|_{1}=|g_{x}|^{\top}x\cdot (2\|u_{f}\|_{1}-f(V))\leq|g_{x}|^{\top}x\cdot U_{1}.\] Let \(g_{+}\stackrel{{\text{\tiny{def}}}}{{=}}\max\{g_{x},\vec{0}\}\) and \(g_{-}\stackrel{{\text{\tiny{def}}}}{{=}}\min\{g_{x},\vec{0}\}\) entrywise. Since \(f(x)=g_{x}^{\top}x\), it follows that \[|g_{x}|^{\top}x=g_{+}^{\top}x-g_{-}^{\top}x=2g_{+}^{\top}x-f(x)\leq 2u_{f}^{ \top}x-f^{*}\leq 2k\|u_{f}\|_{\infty}-f^{*}\leq U_{\infty}, \tag{6}\] where the first inequality uses \((g_{x})_{i}\leq(u_{f})_{i}\) and \(x_{i}\geq 0\) for every \(i\in V\) and the last step follows from \(\left\|x\right\|_{1}\leq k\). This completes the proof of the first statement of the lemma. Finally, we describe how to implement the \(\mathsf{vSampling}\) procedure. 
For every \(i\in[|V|]\), \[\sum_{j\in\pi_{x}[i]}v_{j}=\Big{(}\sum_{j\in\pi_{x}[i]}2(u_{f})_{j}\Big{)}-f( \pi_{x}[i]),\] which can be computed in time \(\mathsf{EO}+O(n)\) given the vector \(u_{f}\) and the set \(\pi_{x}[i]\). This allows us to sort the coordinates in \(V\) according to \(\pi_{x}\) and use binary search to sample the coordinate \(j\sim p_{j}\) in \(O(\log n)\) iterations. The total runtime for \(\mathsf{vSampling}\) is therefore \(O(\log n\cdot(\mathsf{EO}+n))\). The next lemma gives an expectation bound on the regret for Algorithm 7. **Lemma 6.4** (Expected Regret Bound for \(\mathsf{SubmodularFTRL}\)).: _In an execution of Algorithm 7, the random vector \(y=\frac{1}{M}\sum_{t=0}^{M-1}g_{x_{t}}\) satisfies that, for every \(w\in S_{k}^{V}\) (independent of the randomness of the algorithm), \(\mathbb{E}[\langle y,w\rangle]\geq f^{*}-\delta\)._ Proof.: Note that Algorithm 7 is an instantiation of Algorithm 12 with Lovasz extension \(f\), domain \(D=S_{k}^{V}\), step size \(\eta\), number of iterations \(M\), the entropy regularizer \(r(x)\), and the stochastic subgradient \(h_{t}\) generated as in Line 7 - 8. Therefore, as discussed in Section 2.3, the local norm analysis for stochastic FTRL (see the first statement of Lemma A.1) gives \[\mathbb{E}\left[\sum_{t=0}^{M-1}\langle g_{x_{t}},x_{t}-w\rangle\right]\leq \frac{\sup_{x\in S_{k}^{V}}r(x)-\inf_{y\in S_{k}^{V}}r(y)}{\eta}+\eta\sum_{t= 0}^{M-1}\mathbb{E}[\|h_{t}\|_{x_{t}}^{2}].\] Since \(\sup_{x\in S_{k}^{V}}r(x)=0\) and \(\inf_{y\in S_{k}^{V}}=-k\log(n/k)\), we can bound \(\sup_{x\in S_{k}^{V}}r(x)-\inf_{y\in S_{k}^{V}}r(y)\leq k\log n\). Using Lemma 6.3, we can bound \[\mathbb{E}[\|h_{t}\|_{x_{t}}^{2}]\leq(2k\|u_{f}\|_{\infty}-f^{*})(2\|u_{f}\|_ {1}-f(V))\leq(2k\|u_{f}\|_{\infty}+\phi)(2\|u_{f}\|_{1}+\phi),\] where we used \(\phi\geq|f^{*}|\). Dividing by \(M\) on both sides, the above bound then becomes \[\mathbb{E}\left[\left(\frac{1}{M}\sum_{t=0}^{M-1}\langle g_{x_{t}},x_{t} \rangle\right)-\langle y,w\rangle\right]\leq\frac{k\log n}{\eta M}+\eta(2k\|u _{f}\|_{\infty}+\phi)(2\|u_{f}\|_{1}+\phi).\] By optimally setting \(\eta=\sqrt{\frac{k\log n}{M(2k\|u_{f}\|_{\infty}+\phi)(2\|u_{f}\|_{1}+\phi)}}\) above, we obtain \[\mathbb{E}\left[\left(\frac{1}{M}\sum_{t=0}^{M-1}\langle g_{x_{t}},x_{t} \rangle\right)-\langle y,w\rangle\right]\leq\sqrt{\frac{k\log n}{M}\cdot(2k\|u _{f}\|_{\infty}+\phi)(2\|u_{f}\|_{1}+\phi)}=\delta,\] where the last equality is because of our setting \(M=\frac{k(2k\|u_{f}\|_{\infty}+\phi)(2\|u_{f}\|_{1}+\phi)\log n}{\delta^{2}}\). Then we have \[\mathbb{E}[\langle y,w\rangle]\geq\mathbb{E}\left[\frac{1}{M}\sum_{t=0}^{M-1} \langle g_{x_{t}},x_{t}\rangle\right]-\delta=\mathbb{E}\left[\frac{1}{M}\sum_{ t=0}^{M-1}f(x_{t})\right]-\delta\geq f^{*}-\delta,\] where the equality uses Theorem 3.3. This completes the proof of the lemma. Ideally, we would like to take \(w\) to be the indicator vector corresponding to the \(k\) most negative coordinates of \(y\) in Lemma 6.4 to argue that \(y\) is a \((\delta,k)\) dual certificate. Unfortunately, Lemma 6.4 only works when we fix \(w\) ahead of time without depending on \(y\). Therefore, to prove Theorem 6.1, we turn the expectation bound in Lemma 6.4 to a high probability bound, from which we can then union bound over all \(k\)-sparse indicator vectors \(w\). To this end, we start with the following high probability bound on \(\|y\|_{\infty}\). 
**Lemma 6.5** (High Probability \(\ell_{\infty}\) Bound for SubmodularFTRL).: _In an execution of Algorithm 7, the random vector \(y=\frac{1}{M}\sum_{t=0}^{M-1}g_{x_{t}}\) satisfies that_ \[\|y\|_{\infty}\leq\|u_{f}\|_{\infty}+\widetilde{O}\Big{(}\phi+\frac{k\delta^{ 2}}{U_{\infty}}\Big{)},\] _with probability at least \(1-\frac{1}{n^{\mathcal{C}}}\), where \(U_{\infty}\stackrel{{\mathrm{def}}}{{=}}2k\|u_{f}\|_{\infty}+\phi\) and \(C\) is a large enough constant._ To prove Lemma 6.5, we will use the following martingale inequality due to Freedman [10]. **Theorem 6.6** (Freedman's Inequality, Theorem 1.6 in [10]).: _Consider a real-valued martingale sequence \(\{Y_{t}\}_{t\geq 0}\) such that \(X_{0}=0\), and \(\mathbb{E}[Y_{t+1}|\mathcal{F}_{t}]=0\) for all \(t\), where \(\{\mathcal{F}_{t}\}_{t\geq 0}\) is the filtration defined by the martingale. Assume that the sequence is uniformly bounded, i.e., \(|Y_{t}|\leq R\) almost surely for all \(t\). Now define the predictable quadratic variation process of the martingale to be \(W_{t}=\sum_{j=1}^{t}\mathbb{E}[Y_{j}^{2}|\mathcal{F}_{j-1}]\) for all \(t\geq 1\). Then for all \(\lambda\geq 0\) and \(\sigma^{2}>0\), we have_ \[\mathbb{P}\Big{[}\exists\tau\geq 0\text{ s.t. }\sum_{j=0}^{\tau}Y_{j}\geq \lambda\text{ and }W_{\tau}\leq\sigma^{2}\Big{]}\leq\exp\Big{(}-\frac{\lambda^{2}/2}{\sigma^{2}+R \lambda/3}\Big{)}.\] Proof of Lemma 6.5.: For notational convenience let \(g_{t}\stackrel{{\mathrm{def}}}{{=}}g_{x_{t}}\). Note that for every coordinate \(i\in V\), we have \[y_{i}\leq\frac{1}{M}\sum_{t=0}^{M-1}(g_{t})_{i}\leq(u_{f})_{i}\leq\|u_{f}\|_{ \infty}.\] So we only need to prove that \(\sum_{t=0}^{M-1}-(g_{t})_{i}=\widetilde{O}(M\|u_{f}\|_{\infty})\) for each \(i\in V\), i.e., the coordinates of \(y\) don't become too negative. We start by bounding the empirically sampled vectors \(h_{t}\). **Bounding the empirical process \(h_{t}\).** Note that Algorithm 7 is an instantiation of Algorithm 12 with Lovasz extension \(f\), domain \(D=S_{k}^{V}\), step size \(\eta\), number of iterations \(M\), the entropy regularizer \(r(x)\), and the stochastic subgradient \(h_{t}\) generated as in Line 7 - 8. Therefore, by the second statement in Lemma A.1, with probability at least \(1-\rho\), we have \[\max_{i\in V}\sum_{t=0}^{M-1}-(h_{t})_{i}\leq\frac{Mv^{*}}{\eta}+\frac{1}{ \eta}\log\Big{(}\frac{n^{2}}{\rho}\Big{)}. \tag{7}\] Here, \(v^{*}\) is a number such that for every \(t\in\{0,\ldots,M-1\}\), \[-\eta\langle p^{(t)},g_{t}\rangle+\eta^{2}\mathbb{E}_{t}\left[\|h_{t}\|_{p^{( t)}}^{2}\right]\leq v^{*},\] where \(p^{(t)}\in\mathbb{R}^{V}\) is defined as \(p^{(t)}_{i}\stackrel{{\mathrm{def}}}{{=}}\frac{w^{(t)}_{i}}{\|w^ {(t)}\|_{1}}\) and \(w^{(t)}\in\mathbb{R}^{V}\) is given by \(w^{(t)}_{i}\stackrel{{\mathrm{def}}}{{=}}\exp(-\eta\sum_{t^{ \prime}=0}^{t-1}(h_{t^{\prime}})_{i})\). These definitions allow us to view \(x_{t}=\arg\min_{x\in S_{k}^{V}}\eta\sum_{t^{\prime}=0}^{t-1}h_{t^{\prime}}^{ \top}x+r(x)\) in Line 9 of Algorithm 7 as a proximal step with the vector \(\eta\sum_{t^{\prime}=0}^{t-1}h_{t^{\prime}}=-\log w_{i}^{(t)}\) in Lemma B.3. The second statement in Lemma B.3 then implies that the decreasing order of the coordinates in \(x_{t}\) and \(p^{(t)}\) are the same, i.e., \(\pi_{x_{t}}=\pi_{p^{(t)}}\) and thus \(g_{x_{t}}=g_{p^{(t)}}\). 
It follows that we can bound \[-\langle p^{(t)},g_{t}\rangle=-\langle p^{(t)},g_{p^{(t)}}\rangle=-f(p^{(t)}) \leq\phi,\] and by Lemma 6.3 that \[\mathbb{E}_{t}\left[\|h_{t}\|_{p^{(t)}}^{2}\right]\leq(2k\|u_{f}\|_{\infty}-f^ {*})(2\|u_{f}\|_{1}-f(V))\leq U_{\infty}\cdot U_{1},\] where we denote \(U_{1}\stackrel{{\text{\tiny def}}}{{=}}2\|u_{f}\|_{1}+\phi\) as before. Therefore, we can set \(v^{*}=\eta\phi+\eta^{2}U_{\infty}U_{1}\) and the bound in (7) becomes \[\max_{i\in V}\sum_{t=0}^{M-1}-(h_{t})_{i}\leq M\phi+M\eta U_{\infty}U_{1}+ \frac{1}{\eta}\log\Big{(}\frac{n^{2}}{\rho}\Big{)}. \tag{8}\] **Bounding the difference process \((-g_{t})_{i}-(-h_{t})_{i}\).** We have so far upper bounded how negative the empirical process \((h_{t})_{i}\) can become. We next seek to bound the difference \(\sum_{t=0}^{M-1}X_{t}\), where we denote \(X_{t}\stackrel{{\text{\tiny def}}}{{=}}(-g_{t})_{i}-(-h_{t})_{i}\) for a fixed coordinate \(i\). Note that \(\mathbb{E}_{t}[(h_{t})_{i}]=(g_{t})_{i}\), so the stochastic process \(\sum_{t=0}^{T-1}X_{t}\) is a martingale. For notational simplicity, we shall also denote \(G_{T}\stackrel{{\text{\tiny def}}}{{=}}\sum_{t=0}^{T-1}-(g_{t})_ {i}\) and \(H_{T}\stackrel{{\text{\tiny def}}}{{=}}\sum_{t=0}^{T-1}-(h_{t})_ {i}\) so that \(\sum_{t=0}^{T-1}X_{t}=G_{T}-H_{T}\). For this step, we will make use of Theorem 6.6. To apply Freedman's inequality, we first note that \((h_{t})_{i}\) either takes value \(\frac{(g_{t})_{i}}{p_{i}^{(t)}}\) (if \(i\) gets sampled) or \(0\). Therefore, with probability \(1\), \[|X_{t}|\leq\max\Big{\{}|(g_{t})_{i}|,\Big{|}\frac{(g_{t})_{i}}{p_{i}^{(t)}}-( g_{t})_{i}\Big{|}\Big{\}}\leq\Big{|}(g_{t})_{i}\cdot\frac{\|v^{(t)}\|_{1}}{v_{i}^{(t) }}\Big{|}\leq U_{1},\] where the last inequality follows from \(v_{i}^{(t)}=2(u_{f})_{i}-(g_{t})_{i}\geq|(g_{t})_{i}|\) and \(\|v^{(t)}\|_{1}=U_{1}\). We can also bound the quadratic variation as \[\mathbb{E}_{t}[X_{t}^{2}] =p_{i}^{(t)}\cdot\Big{(}-(g_{t})_{i}+\frac{(g_{t})_{i}}{p_{i}^{(t )}}\Big{)}^{2}+(1-p_{i}^{(t)})\cdot((g_{t})_{i})^{2}\] \[\leq((g_{t})_{i})^{2}\cdot\frac{1-p_{i}^{(t)}}{p_{i}^{(t)}}=((g_{t })_{i})^{2}\cdot\frac{\|v^{(t)}\|_{1}-v_{i}^{(t)}}{v_{i}^{(t)}}\leq|(g_{t})_{i }|\cdot U_{1}. \tag{9}\] Then by Theorem 6.6, we have for all \(\lambda,\sigma^{2}>0\), \[\mathbb{P}\Big{[}\sum_{t=0}^{M-1}X_{t}\geq\lambda\text{ and }\sum_{t=0}^{M-1}\mathbb{E}_{t}[X_{t}^{2}]\leq\sigma^{2}\Big{]}\leq\exp \Big{(}-\frac{\lambda^{2}/2}{\sigma^{2}+U_{1}\lambda/3}\Big{)}. \tag{10}\] But as the quadratic variation bound in (9) depends on \((g_{t})_{i}\), the quantity we want to control, we shall use (10) with different values of \(\lambda\) and \(\sigma^{2}\). Note that by (9), with probability \(1\), \[\sum_{t=0}^{M-1}\mathbb{E}_{t}[X_{t}^{2}]\leq U_{1}\cdot\sum_{t=0}^{M-1}|(g_{t })_{i}|\leq U_{1}\cdot\sum_{t=0}^{M-1}\|v^{(t)}\|_{1}\leq MU_{1}^{2}.\] On the other hand, if \(\sum_{t=0}^{M-1}|(g_{t})_{i}|\leq\|u_{f}\|_{\infty}\), then we would be done in controlling \((g_{t})_{i}\) already. Therefore, we define values \(\sigma_{i}^{2}\stackrel{{\mbox{\tiny{\rm{def}}}}}{{=}}\|u_{f}\|_{ \infty}U_{1}\cdot 2^{i}\), for \(i=0,\ldots,\ell\), where \(\ell\stackrel{{\mbox{\tiny{\rm{def}}}}}{{=}}\lceil\log(\frac{MU_ {1}}{\|u_{f}\|_{\infty}})\rceil=O(\log n)\), and the value \(\lambda_{i}\stackrel{{\mbox{\tiny{\rm{def}}}}}{{=}}\sqrt{2 \sigma_{i}^{2}+U_{1}^{2}}\cdot\log(\frac{n}{\rho})\). 
For \(i\in\{0,\ldots,\ell\}\), define events

\[\mathcal{E}_{i}\stackrel{{\mathrm{def}}}{{=}}\Big\{\sum_{t=0}^{M-1}X_{t}<\lambda_{i}\text{ or }\sum_{t=0}^{M-1}\mathbb{E}_{t}[X_{t}^{2}]>\sigma_{i}^{2}\Big\}\quad\text{and}\quad\mathcal{E}_{*}\stackrel{{\mathrm{def}}}{{=}}\{\text{the bound in (8) holds}\}.\]

Applying (10) with each pair \((\lambda_{i},\sigma_{i}^{2})\) and taking a union bound over \(i\in\{0,\ldots,\ell\}\) shows that the events \(\mathcal{E}_{0},\ldots,\mathcal{E}_{\ell}\) hold simultaneously with high probability; combining these events with the bound (8) on the empirical process, which holds on \(\mathcal{E}_{*}\), then yields the claimed bound on \(\|y\|_{\infty}\) and completes the proof of the lemma.
Finally, we have gathered enough tools to present the proof of Theorem 6.1, which we first restate below for convenience.

**Theorem 6.1** (Stochastic Dual Certificate).: _Let \(f\) be a submodular function, \(k\in\mathbb{Z}_{>0}\) a sparsity parameter, \(\phi\geq|f^{*}|\) and \(0<\delta\leq\phi\) an accuracy parameter. Assuming \(\phi=\Omega(\|u_{f}\|_{\infty}/k)\), Algorithm 6 outputs a set of permutations \(\{\pi^{(t)}\}_{t\in[m]}\) in time \(\widetilde{O}(m)\cdot(\mathsf{EO}+\mathsf{poly}(n))\), where_

\[m=\widetilde{O}(k^{6}\delta^{-4}\phi^{2}(\|u_{f}\|_{\infty}+\phi)(\|u_{f}\|_{1}+\phi)),\]

_such that \(y\stackrel{{\mathrm{def}}}{{=}}\frac{1}{m}\sum_{t\in[m]}g_{\pi^{(t)}}\) is a \((\delta,k)\) dual certificate for \(f\) with high probability._

Proof of Theorem 6.1.: For each iteration \(\ell\in[N]\) in Line 4 of Algorithm 6, we let \(y_{\ell}\stackrel{{\mathrm{def}}}{{=}}\frac{1}{M}\sum_{\pi\in\mathcal{C}_{\ell}}g_{\pi}\) denote the average of the subgradients in the \(\ell\)-th call to \(\mathsf{SubmodularFTRL}(\cdot)\). Lemma 6.5 implies that with probability at least \(1-n^{-C}\), where \(C>0\) is a large constant, for all \(\ell\in[N]\),

\[\|y_{\ell}\|_{\infty}\leq\|u_{f}\|_{\infty}+\widetilde{O}(1)\cdot\Big(\phi+\frac{k\delta^{2}}{U_{\infty}}\Big)\stackrel{{\mathrm{def}}}{{=}}R.\]

For all \(\ell\in[N]\) define

\[\overline{y}_{\ell}\stackrel{{\mathrm{def}}}{{=}}\begin{cases}y_{\ell}&\text{if }\left\|y_{\ell}\right\|_{\infty}\leq R\\ u_{f}&\text{if }\left\|y_{\ell}\right\|_{\infty}>R\end{cases}.\]

Since \(g_{\pi}\leq u_{f}\) for all permutations \(\pi\), we see that \(y_{\ell}\leq\overline{y}_{\ell}\). Additionally, by the above consequence of Lemma 6.5, for every \(\ell\in[N]\) we have \(y_{\ell}=\overline{y}_{\ell}\) with probability \(1-n^{-C}\); applying a union bound over \(\ell\in[N]\) and using that \(N\leq n^{10}\), this yields \(y_{\ell}=\overline{y}_{\ell}\) for all \(\ell\in[N]\) with high probability. Next, for any fixed \(w\in S_{k}^{V}\), define \(X_{\ell}^{w}\stackrel{{\mathrm{def}}}{{=}}\langle\overline{y}_{\ell},w\rangle\).
Note that each iteration of Line 4 is an independent execution of Algorithm 7 with accuracy \(\delta/2\). Consequently, Lemma 6.4 and the fact that \(\overline{y}_{\ell}\geq y_{\ell}\) imply that

\[\mathbb{E}[X_{\ell}^{w}]=\mathbb{E}[\langle\overline{y}_{\ell},w\rangle]\geq\mathbb{E}[\langle y_{\ell},w\rangle]\geq f^{*}-\frac{\delta}{2}\,.\]

Additionally, from the definition of \(\overline{y}_{\ell}\) and \(R\) we see that \(\left\|\overline{y}_{\ell}\right\|_{\infty}\leq R\) and therefore

\[|X_{\ell}^{w}|\leq k\|\overline{y}_{\ell}\|_{\infty}\leq kR\,.\]

Consequently, applying the Azuma-Hoeffding inequality yields that

\[\mathbb{P}\Big(\frac{1}{N}\sum_{\ell\in[N]}(X_{\ell}^{w}-\mathbb{E}[X_{\ell}^{w}])\leq-\frac{\delta}{2}\Big)\leq\exp\Big(-\frac{N\delta^{2}}{8k^{2}R^{2}}\Big).\]

The above probability is smaller than \(n^{-10k}\) by our choice of

\[N=100\delta^{-2}R^{2}k^{3}\log n=\widetilde{O}(1)\cdot\delta^{-2}k\Big(k\|u_{f}\|_{\infty}+\widetilde{O}(k)\cdot\Big(\phi+\frac{k\delta^{2}}{U_{\infty}}\Big)\Big)^{2}\leq\widetilde{O}(\delta^{-2}k^{5}\phi^{2}),\]

where we used the assumptions that \(\|u_{f}\|_{\infty}\leq O(k\phi)\) and \(0<\delta\leq\phi\). This implies that with probability at least \(1-n^{-9k}\), we have

\[\frac{1}{N}\sum_{\ell\in[N]}\langle\overline{y}_{\ell},w\rangle\geq f^{*}-\delta.\]

Finally, taking the union bound over all the \(O(n^{k})\) vertices of \(S_{k}^{V}\) for the choice of \(w\), we obtain that

\[\min_{w\in S_{k}^{V}}\frac{1}{N}\sum_{\ell\in[N]}\langle\overline{y}_{\ell},w\rangle\geq f^{*}-\delta\]

with probability \(1-n^{-5k}\). Since \(y_{\ell}=\overline{y}_{\ell}\) for all \(\ell\in[N]\) with high probability, we obtain that \(\frac{1}{N}\sum_{\ell\in[N]}y_{\ell}\) is a \((\delta,k)\) dual certificate with high probability. The total number of permutations used is

\[m=MN=\widetilde{O}(\delta^{-2}k(k\|u_{f}\|_{\infty}+\phi)(\|u_{f}\|_{1}+\phi))\cdot\widetilde{O}(\delta^{-2}k^{5}\phi^{2})=\widetilde{O}(k^{6}\delta^{-4}\phi^{2}(\|u_{f}\|_{\infty}+\phi)(\|u_{f}\|_{1}+\phi)).\]

Since for each permutation we only sample a single coordinate in Lines 7-8 of Algorithm 7, the total runtime is then \(\widetilde{O}(m)\cdot(\mathsf{EO}+\mathsf{poly}(n))\) by the second statement of Lemma 6.3. This completes the proof of the theorem.

### Dimensionality Reduction for Sequential Algorithm

In this subsection, using the algorithm for computing \((\delta,k)\) dual certificates in Section 6.1, we give the implementation of the subprocedure \(\mathsf{Dimensionality}\)-\(\mathsf{Reduction}\) for our sequential algorithm in Algorithm 8. Recall from Section 4.2 that, assuming \(f\) has a \(k\)-sparse minimizer, the subprocedure \(\mathsf{Dimensionality}\)-\(\mathsf{Reduction}(f,k)\) either outputs a dimensionality reduction \(T\neq\emptyset\) that belongs to every minimizer of \(f\), or certifies that \(f^{*}>-\frac{\|u_{f}\|_{\infty}}{12k}\). Similarly to Algorithm 4, Algorithm 8 aims at finding a value \(\phi>0\) such that \(-\phi\leq f^{*}\leq-\phi/2\), starting from the initial guess \(\phi^{(0)}=\|u_{f}\|_{1}-f(V)\) and halving the value of \(\phi\) in each iteration of the while loop until the threshold \(\frac{\|u_{f}\|_{\infty}}{12k}\) is reached. If \(f^{*}\leq-\frac{\|u_{f}\|_{\infty}}{12k}\), then along this halving process there must be an iteration \(i\) such that \(-\phi^{(i)}\leq f^{*}\leq-\phi^{(i)}/2\), for which we can obtain a non-empty dimensionality reduction \(T\neq\emptyset\).
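For concreteness, the following is a minimal sketch of the halving loop just described. It is illustrative only and does not reproduce Algorithm 8 exactly: the callable `certificate_estimate` is a hypothetical placeholder standing in for the certificate computation of Theorem 6.1 together with the estimate \(z\) of Claim 6.9, and the accuracy \(\delta=\phi/8k\) and threshold \(-3\phi/8k\) mirror the parameters used below.

```python
def dimensionality_reduction_sketch(f, V, k, u_f, certificate_estimate):
    """Return a non-empty set T contained in every minimizer, or the empty set,
    certifying f* > -||u_f||_inf / (12k).  `certificate_estimate(phi, delta)` is an
    assumed helper returning a dict z that estimates the (delta, k) dual certificate."""
    norm1 = sum(u_f[p] for p in V)
    norm_inf = max(u_f[p] for p in V)
    phi = norm1 - f(frozenset(V))                  # initial guess phi^(0) = ||u_f||_1 - f(V)
    while phi >= norm_inf / (12 * k):
        z = certificate_estimate(phi, phi / (8 * k))
        T = {p for p in V if z[p] <= -3 * phi / (8 * k)}   # threshold as in Line 10
        if T:
            return T                               # T lies in every minimizer w.h.p.
        phi /= 2                                   # otherwise f* > -phi/2; halve and repeat
    return set()                                   # certifies f* > -||u_f||_inf / (12k)
```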
**Lemma 6.7** (Dimensionality-Reduction for Sequential Algorithm).: _Let \(k\in\mathbb{Z}_{>0}\) and \(f:2^{V}\to\mathbb{R}\) be a submodular function with a \(k\)-sparse minimizer and \(u_{f}\geq 0\). Then Algorithm 8 outputs a set \(T\subseteq V\) that must be in every minimizer of \(f\) such that \(T=\emptyset\) implies \(f^{*}>-\|u_{f}\|_{\infty}/12k\). Moreover, the algorithm runs in time \(\widetilde{O}\Big{(}\frac{k^{12}\|u_{f}\|_{1}}{\|u_{f}\|_{\infty}}\cdot\mathsf{ EO}+\mathsf{poly}(n)\Big{)}\)._ In the remainder of this subsection we prove Lemma 6.7. To establish the correctness of Algorithm 8, we start with the following claim that the \(\mathsf{vSampling}\) for a random permutation \(g_{\pi^{(i_{t})}}\) in Line 7 has small variance on each coordinate. This claim differs from Lemma 6.3, as it is a bound on the variance of one coordinate of \(z^{(t)}\), rather than a bound on the local norm of \(z^{(t)}\) at \(x_{i_{t}}\). **Claim 6.8** (Coordinate Variance Bound for \(\mathsf{vSampling}\)).: _Let \(f:2^{V}\to\mathbb{R}\) be a submodular function such that \(f(\emptyset)=0\) and \(u_{f}\geq 0\). Let \(\{\pi^{(t)}\}_{t\in[m]}\) be a set of permutations on \(V\) and define the vector \(y\stackrel{{\mbox{\tiny{\rm{\tiny def}}}}}{{=}}\frac{1}{m}\sum_{ t\in[m]}g_{\pi^{(t)}}\). Define the random vector_ \[w\stackrel{{\mbox{\tiny{\rm{\tiny def}}}}}{{=}}\frac{(2\left\|u_ {f}\right\|_{1}-f(V))\cdot(g_{\pi^{(i)}})_{p}}{2(u_{f})_{p}-(g_{\pi^{(i)}})_{ p}}\cdot\vec{1}_{p},\] _where \(i\in[m]\) is sampled uniformly at random and \(p\in V\) is sampled with probability proportional to \(2(u_{f})_{p}-(g_{\pi^{(i)}})_{p}\). Then we have_ 1. \(\mathbb{E}[w]=y\)_,_ 2. \(\left\|w\right\|_{\infty}\leq 2\left\|u_{f}\right\|_{1}-f(V)\) _with probability 1, and_ 3. \(\mathbb{E}[w_{q}^{2}]\leq(2\left\|u_{f}\right\|_{1}-f(V))(2(u_{f})_{q}-y_{q})\) _for any coordinate_ \(q\in V\) Proof.: Note that \(\sum_{p\in V}(2(u_{f})_{p}-(g_{\pi^{(i)}})_{p})=2\left\|u_{f}\right\|_{1}-f(V)\) and consequently \(\mathbb{E}[w]=y\). Further, \[\left\|w\right\|_{\infty}\leq(2\left\|u_{f}\right\|_{1}-f(V))\cdot\max_{i\in[m],p\in V}\frac{(g_{\pi^{(i)}})_{p}}{2(u_{f})_{p}-(g_{\pi^{(i)}})_{p}}\leq 2 \left\|u_{f}\right\|_{1}-f(V)\] with probability 1, where the last inequality follows because \((g_{\pi^{(i)}})_{p}\leq(u_{f})_{p}\) by submodularity and \((u_{f})_{p}\geq 0\) by the assumption of the claim. Next we note that \[\mathbb{E}_{i,p}[|w_{q}|] =\mathbb{E}_{i}\Big{[}\mathbb{P}[p=q]\cdot\Big{|}\frac{(2\left\|u _{f}\right\|_{1}-f(V))\cdot(g_{\pi^{(i)}})_{q}}{2u_{q}-(g_{\pi^{(i)}})_{q}} \Big{|}\Big{]}\] \[=\mathbb{E}_{i}[|(g_{\pi^{(i)}})_{q}|]\leq(u_{f})_{q}+\mathbb{E}_ {i}[(u_{f})_{q}-(g_{\pi^{(i)}})_{q}]=2(u_{f})_{q}-y_{q}.\] Consequently, we have \[\mathbb{E}[w_{q}^{2}]\leq\left\|w\right\|_{\infty}\cdot\mathbb{E}_{i,p}[|w_{q }|]\leq(2\left\|u_{f}\right\|_{1}-f(V))(2(u_{f})_{q}-y_{q}).\] This completes the proof of the claim. Using the above variance bound, we next prove that whp. the vector \(z\) in Line 9 of Algorithm 8 is an estimate of the \((\delta,k)\) dual certificate \(y\stackrel{{\text{\tiny def}}}{{=}}\frac{1}{m}\sum_{t\in[m]}g_{ \pi^{(t)}}\) with additive accuracy \(O(\phi/k)\). 
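As an informal illustration of the sampling step analyzed above, the sketch below shows a single draw of the \(\mathsf{vSampling}\) estimator of Claim 6.8 and the averaging of \(N\) such draws into an estimate of \(y\). It assumes the setting of the claim (\(u_{f}\geq 0\) and \(g_{\pi}\leq u_{f}\)); the names `u_f`, `subgradients`, `vsample`, and `estimate_certificate` are illustrative stand-ins, not the paper's notation.

```python
import random

def vsample(u_f, subgradients):
    """One draw of the estimator w from Claim 6.8: pick a permutation index i uniformly,
    then a coordinate p with probability proportional to 2(u_f)_p - (g_{pi^(i)})_p."""
    g = random.choice(subgradients)                          # uniform i in [m]
    weights = {p: 2 * u_f[p] - g[p] for p in u_f}            # nonnegative since g <= u_f, u_f >= 0
    total = sum(weights.values())                            # equals 2 ||u_f||_1 - f(V)
    p = random.choices(list(weights), weights=list(weights.values()))[0]
    return {p: total * g[p] / weights[p]}                    # sparse vector: scaled indicator of p

def estimate_certificate(u_f, subgradients, num_samples):
    """Average num_samples i.i.d. draws; unbiased for y = (1/m) * sum_t g_{pi^(t)}."""
    z = {p: 0.0 for p in u_f}
    for _ in range(num_samples):
        for p, val in vsample(u_f, subgradients).items():
            z[p] += val / num_samples
    return z
```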
**Claim 6.9** (Estimate for Dual Certificate).: _Under the same assumptions as in Claim 6.8, the random vector \(z\) in Line 9 of Algorithm 8 satisfies that \(\mathbb{E}[z]=y\), and \(\|z-y\|_{\infty}\leq\alpha\) with high probability, where the accuracy_

\[\alpha\stackrel{{\mathrm{def}}}{{=}}10\cdot\max\left\{\frac{(2\|u_{f}\|_{1}-f(V))\cdot\log n}{N},\sqrt{\frac{(2\|u_{f}\|_{1}-f(V))\cdot(\|u_{f}\|_{\infty}-f^{*})\cdot\log n}{N}}\right\}.\]

_In particular, if the parameter \(\phi\geq\|u_{f}\|_{\infty}/12k\) satisfies \(-\phi\leq f^{*}\), and \(N=\Theta(\frac{k^{4}\|u_{f}\|_{1}}{\|u_{f}\|_{\infty}}\cdot\log n)\) as in Algorithm 8, then the accuracy \(\alpha\leq\phi/8k\)._

Proof.: Note that for every \(t\in[N]\), each random vector \(z^{(t)}\) is obtained by applying \(\mathsf{vSampling}\) to a uniformly random vector in \(\{g_{\pi^{(t)}}\}_{t\in[m]}\) exactly as in Claim 6.8. Therefore, it follows from Claim 6.8 that \(\mathbb{E}[z]=y\) and \(\|z^{(t)}\|_{\infty}\leq M\stackrel{{\mathrm{def}}}{{=}}2\|u_{f}\|_{1}-f(V)\) with probability 1, and that for every coordinate \(q\in V\), the variance of each sampled vector \(z^{(t)}\) satisfies

\[\mathbb{E}[(z_{q}^{(t)}-y_{q})^{2}]\leq\mathbb{E}[(z_{q}^{(t)})^{2}]\leq(2\|u_{f}\|_{1}-f(V))(2(u_{f})_{q}-y_{q})\leq 2(2\|u_{f}\|_{1}-f(V))(\|u_{f}\|_{\infty}-f^{*}).\]

Next, to prove that \(\|z-y\|_{\infty}\leq\alpha\) whp., we fix an arbitrary coordinate \(q\in V\). Since the \(z_{q}^{(t)}\) are i.i.d. samples with expectation \(\mathbb{E}[z_{q}^{(t)}]=y_{q}\), it follows from Bernstein's inequality that

\[\mathbb{P}\left[\left|\sum_{t\in[N]}(z_{q}^{(t)}-y_{q})\right|\geq\lambda\right]\leq 2\cdot\exp\left(-\frac{\lambda^{2}/2}{\sum_{t\in[N]}\mathbb{E}[(z_{q}^{(t)}-y_{q})^{2}]+\lambda M/3}\right), \tag{11}\]

where recall that \(M\stackrel{{\mathrm{def}}}{{=}}2\|u_{f}\|_{1}-f(V)\) is an upper bound on \(|z_{q}^{(t)}|\) almost surely. Now we plug \(\lambda=\alpha N\) into the bound in Equation (11). Note that the definition of \(\alpha\) in the claim satisfies that

\[\sum_{t\in[N]}\mathbb{E}[(z_{q}^{(t)}-y_{q})^{2}]\leq 2N(2\|u_{f}\|_{1}-f(V))(\|u_{f}\|_{\infty}-f^{*})\leq 2N\cdot\frac{\alpha^{2}N}{100\log n}\leq\lambda^{2}/2\cdot\frac{1}{25\log n},\]

and that

\[\lambda M/3\leq\lambda(2\|u_{f}\|_{1}-f(V))/3\leq\frac{\lambda\alpha N}{30\log n}=\lambda^{2}/2\cdot\frac{1}{15\log n}.\]

It thus follows from Equation (11) that

\[\mathbb{P}\left[\left|\sum_{t\in[N]}(z_{q}^{(t)}-y_{q})\right|\geq\alpha N\right]\leq 2n^{-9}.\]

The first statement of the claim then follows by applying a union bound to all coordinates \(q\in V\). To prove the second statement of the claim, we can trivially upper bound

\[2\|u_{f}\|_{1}-f(V)\leq 2\|u_{f}\|_{1}+\phi\quad\text{and}\quad(2\|u_{f}\|_{1}-f(V))\cdot(\|u_{f}\|_{\infty}-f^{*})\leq(2\|u_{f}\|_{1}+\phi)\cdot(\|u_{f}\|_{\infty}+\phi).\]

Then the bound \(\alpha\leq\phi/8k\) follows by plugging in the value of \(N\) and using \(\phi\geq\|u_{f}\|_{\infty}/12k\).

The following claim establishes the correctness of Algorithm 8.

**Claim 6.10**.: _Let \(f\) be a submodular function with a \(k\)-sparse minimizer, and \(\phi\geq\|u_{f}\|_{\infty}/12k\) satisfying \(-\phi\leq f^{*}\) in one iteration of the while loop. Then the subset \(T\subseteq V\) in Line 10 of Algorithm 8 is in every minimizer of \(f\) whp.
Moreover, if \(-\phi\leq f^{*}\leq-\phi/2\), then \(T\neq\emptyset\) whp._ Proof.: Note that by Theorem 6.1, the set of permutations \(\{\pi^{(t)}\}_{t\in[m]}\) returned by \(\mathsf{StochDualCertificate}\) is such that \(y=\frac{1}{m}\sum_{t\in[m]}g_{\pi^{(t)}}\)is a \((\delta,k)\) dual certificate whp. Since \(\phi\geq\|u_{f}\|_{\infty}/12k\) and \(-\phi\leq f^{*}\), Claim 6.9 implies that the vector \(z\) in Line 9 of Algorithm 8 satisfies \(\|z-y\|_{\infty}\leq\phi/8k\) whp. In the following, we condition on the above two events. In the first part of the claim, for every element \(p\in T\) defined in Line 10, we have \[y_{p}\leq z_{p}+\|z-y\|_{\infty}\leq-\frac{3\phi}{8k}+\frac{\phi}{8k}<-\delta.\] It then follows from Lemma 4.7 that \(p\) must be in every minimizer of \(f\). This proves the first statement of the claim. To prove the second statement, note that since \(-\phi\leq f^{*}\leq-\phi/2\) and \(y\) is a \((\delta,k)\) dual certificate, the most negative coordinate \(p\) of \(y\) satisfies \(y_{p}\leq-|f^{*}|/k\). Therefore, \[z_{p}\leq y_{p}+\phi/8k\leq-\phi/2k+\phi/8k\leq-3\phi/8k,\] which implies that \(p\) must be inside the subset \(T\) considered Line 10. This proves that \(T\neq\emptyset\) whp. for such an iteration and completes the proof of the claim. Finally, we are ready to prove Lemma 6.7 which we restate below for convenience. **Lemma 6.7** (Dimensionality-Reduction for Sequential Algorithm).: _Let \(k\in\mathbb{Z}_{>0}\) and \(f:2^{V}\to\mathbb{R}\) be a submodular function with a \(k\)-sparse minimizer and \(u_{f}\geq 0\). Then Algorithm 8 outputs a set \(T\subseteq V\) that must be in every minimizer of \(f\) such that \(T=\emptyset\) implies \(f^{*}>-\|u_{f}\|_{\infty}/12k\). Moreover, the algorithm runs in time \(\widetilde{O}\Big{(}\frac{k^{12}\|u_{f}\|_{1}}{\|u_{f}\|_{\infty}}\cdot \mathsf{EO}+\mathsf{poly}(n)\Big{)}\)._ Proof of Lemma 6.7.: The first statement of the lemma essentially follows from Claim 6.10. Since Algorithm 8 starts from \(\phi=\phi_{0}\stackrel{{\mathrm{def}}}{{=}}\|u_{f}\|_{1}-f(V)\geq|f^{ *}|\), Claim 6.10 implies that the subset \(T\) in Line 10 is in every minimizer of \(f\) whp. and that \(-\phi\leq-f^{*}\) in every iteration of the while loop executed by Algorithm 8. In particular, if \(f^{*}\leq-\|u_{f}\|_{\infty}/12k\), then there must be one iteration of the while loop in which \(-\phi\leq f^{*}\leq-\phi/2\) (assuming \(T=\emptyset\) in all previous iterations). For such an iteration, Claim 6.10 states that the subset \(T\neq\emptyset\) whp. and therefore Algorithm 8 will never return \(\emptyset\). This proves the first statement of the lemma. Next we prove the second statement on the runtime of Algorithm 8. 
We first note that the total number of iterations of the while loop is \[\log\Big{(}\frac{\|u_{f}\|_{1}-f(V)}{\max\{\|u_{f}\|_{\infty}/12k,|f^{*}|\}} \Big{)}\leq\log\Big{(}\frac{\|u_{f}\|_{1}+|f^{*}|}{\max\{\|u_{f}\|_{\infty}/12 k,|f^{*}|\}}\Big{)}=O(\log n).\] For each iteration of the while loop, as we have \(\|u_{f}\|_{\infty}=\widetilde{O}(k)(\phi+\delta)\) each time we call \(\mathsf{StochDualCertificate}\), by Theorem 6.1, the number of queries to \(\mathsf{EO}\) made by calling the \(\mathsf{StochDualCertificate}\) routine is given by \[\widetilde{O}(k^{6}\delta^{-4}\phi^{2}(\|u_{f}\|_{\infty}+\phi)(\|u_{f}\|_{1}+ \phi))\leq\widetilde{O}(k^{10})\cdot\frac{(\|u_{f}\|_{\infty}+\phi)(2\|u_{f}\| _{1}+\phi)}{\phi^{2}}\leq\widetilde{O}(k^{12}\frac{\|u_{f}\|_{1}}{\|u_{f}\|_{ \infty}}),\] where the first inequality is obtained by plugging in \(\delta=\phi/8k\) and \(\phi\geq|f^{*}|\), and the second inequality uses \(\phi\geq\|u_{f}\|_{\infty}/12k\). Next we observe that given the vector \(u_{f}\), each \(\mathsf{vSampling}\) in Line 7 can be implemented using only \(O(\log n)\) additional queries. Thus the total number of queries due to estimating \(y\) in Line 9 is at most \(\widetilde{O}(N)\ll\widetilde{O}(k^{12}\|u_{f}\|_{1}/\|u_{f}\|_{\infty})\). Finally, note that every step of Algorithm 8 uses polynomial additional runtime. ### Arc Finding for Sequential Algorithm In this section, we present our sequential \(\mathsf{Arc}\)-\(\mathsf{Finding}\) subprocedure. As mentioned in Section 2.3, in prior work [10, 14], finding arcs relies on the "move-to-front" approach. Given a vector \(y\) as the average of subgradients \(\{g_{\pi^{(t)}}\}_{t\in[m]}\) and an element \(p\in V\), this approach considers the vector \(y^{\prime}\) obtained by moving \(p^{\downarrow}\) to the front of every permutation \(\pi^{(t)}\). A coordinate \(q\notin p^{\downarrow}\) with negative enough entry \(y^{\prime}(q)\) is then concluded to be in every sparse minimizer containing \(p\), i.e. an arc from \(p\) to \(q\). Key to our algorithm is the following lemma which shows when the move-to-front procedure allows us to deduce arcs or dimensionality reductions. Recall from in Section 3 that, for some permutation \(\pi\), \(\left\|\Delta_{\pi,\cdot}p\right\|_{1}\) measures the total decrease of coordinates of \(g_{\pi}\) in \(V\setminus p^{\downarrow}\) after moving \(p^{\downarrow}\) to the front of \(\pi\). Similarly, \(\left\|\Delta_{p}\right\|_{1}\) measures the total decrease of coordinates of \(y\) in \(V\setminus p^{\downarrow}\) after moving \(p^{\downarrow}\) to the front in every permutation. Informally, Lemma 6.11 states that any element \(p\) in some \(k\)-sparse minimizer \(S^{*}\) with large \((u_{f^{\sharp}})_{p}\) must also have large \(\left\|\Delta_{p}\right\|_{1}\). Moreover, we can find \(q\in S^{*}\setminus p^{\downarrow}\) such that \(y_{q}\) decreases significantly after moving \(p^{\downarrow}\) to the front, hence deducing an arc \(p\to q\). **Lemma 6.11** (Move to Front).: _Let \(f\) be a submodular function with a \(k\)-sparse minimizer and \(f^{\sharp}\) be its extension w.r.t. a ring family \(\mathcal{F}\) that is consistent with all its \(k\)-sparse minimizers. Let \(S^{*}\) be a \(k\)-sparse minimizer of \(f\) (and therefore \(f^{\sharp}\)), element \(p\in S^{*}\), and \(y\in B(f^{\sharp})\) be a \((\delta,k)\) dual _certificate of \(f^{\sharp}\). 
Define the vector \(\Delta_{p}\in\mathbb{R}_{\geq 0}^{V}\) as_ \[(\Delta_{p})_{i}\stackrel{{\mathrm{\tiny def}}}{{=}}\begin{cases}0& \text{if }i\in p^{\downarrow}\\ y_{i}-(y_{\gets p^{\downarrow}})_{i}&\text{otherwise}.\end{cases}\] _If \(f(p^{\downarrow})>(2k+1)\delta\), then the following hold:_ * _(1)_ \(\left\lVert\Delta_{p}\right\rVert_{1}=f(p^{\downarrow})-y(p^{\downarrow})>2k\delta\) _and_ \(y(p^{\downarrow})\leq\delta\)_,_ * _(2) There exists_ \(q\in S^{*}\setminus p^{\downarrow}\) _so that_ \((\Delta_{p})_{q}>\frac{1}{k}\left\lVert\Delta_{p}\right\rVert_{1}\)_, and_ * _(3) If an element_ \(q\in V\setminus p^{\downarrow}\) _satisfies_ \((\Delta_{p})_{q}\geq\frac{1}{2k}\left\lVert\Delta_{p}\right\rVert_{1}\)_, then_ \(q\in S^{*}\)_._ _In particular, the above hold if \((f^{\sharp})^{*}=f^{*}\geq-\frac{(u_{f^{\sharp}})_{p}}{2}\) and \(\delta\leq\frac{(u_{f^{\sharp}})_{p}}{6k}\)._ Proof.: Statement (1) is an immediate corollary of Lemma 4.8 together with \(p^{\downarrow}\in\mathcal{F}\). To prove the second statement, we observe that \[y(S^{*}\setminus p^{\downarrow})=y(S^{*})-y(p^{\downarrow})\geq y_{-}^{k+1}(V) -y(p^{\downarrow})\geq f^{*}-\delta-y(p^{\downarrow}),\] where the first inequality is because \(|S^{*}|\leq k\) and the last inequality uses that \(y\) is a \((\delta,k)\) dual certificate for \(f^{\sharp}\) and that \((f^{\sharp})^{*}=f^{*}\). By Lemma 4.8, we have \(y_{\gets p^{\downarrow}}\in B(f^{\sharp}_{p^{\downarrow}})\). It follows that \[y_{\gets p^{\downarrow}}(S^{*}\setminus p^{\downarrow})\leq f^{\sharp}_{p ^{\downarrow}}(S^{*}\setminus p^{\downarrow})=(f^{\sharp})^{*}-f^{\sharp}(p^{ \downarrow})=f^{*}-f(p^{\downarrow}),\] where the first equality uses that \(S^{*}\setminus p^{\downarrow}\) is a minimizer of \(f^{\sharp}_{p^{\downarrow}}\), and the second equality uses Lemma 4.1. Subtracting the above two inequalities, we have \[\Delta_{p}(S^{*}\setminus p^{\downarrow})=y(S^{*}\setminus p^{\downarrow})-y _{\gets p^{\downarrow}}(S^{*}\setminus p^{\downarrow})\geq f(p^{ \downarrow})-\delta-y(p^{\downarrow})=\|\Delta_{p}\|_{1}-\delta. \tag{12}\] Since \(|S^{*}\setminus p^{\downarrow}|\leq k-1\) and that \(\|\Delta_{p}\|_{1}>2k\delta\) by (1), there exists \(q\in S^{*}\setminus p^{\downarrow}\) such that \[(\Delta_{p})_{q}\geq\frac{\|\Delta_{p}\|_{1}-\delta}{k-1}>\frac{\|\Delta_{p}\| _{1}}{k}.\] This completes the proof of (2). Next, note that by (12) and (1) we have \(\Delta_{p}(V\setminus S^{*})\leq\delta<\frac{\|\Delta_{p}\|_{1}}{2k}\). Since all entries of \(\Delta_{p}\) are non-negative, any coordinate \((\Delta_{p})_{q}>\frac{\|\Delta_{p}\|_{1}}{2k}\) must satisfy \(q\in S^{*}\setminus p^{\downarrow}\). This proves (3). Finally, for the "in particular" part of the lemma, we note that \[f(p^{\downarrow})=(u_{f^{\sharp}})_{p}+f(p^{\downarrow}\setminus\{p\})\geq(u_ {f^{\sharp}})_{p}+f^{*}\geq\frac{(u_{f^{\sharp}})_{p}}{2}>(2k+1)\delta,\] where in the last inequality we use \(f^{*}\geq-\frac{(u_{f^{\sharp}})_{p}}{2}\) and \(\delta\leq\frac{(u_{f^{\sharp}})_{p}}{6k}\). In light of Lemma 6.11, we define \(A\stackrel{{\mathrm{\tiny def}}}{{=}}\{p\in V:(u_{f^{\sharp}})_{ p}\geq\|u_{f^{\sharp}}\|_{\infty}/2\}\) as the _active_ set of elements. Then Lemma 6.11 allows us to deduce arcs from any \(p\in A\) whenever we have \(f^{*}\geq-\frac{\|u_{f^{\sharp}}\|_{\infty}}{4}\) and a \((\frac{\|u_{f^{\sharp}}\|_{\infty}}{12k},k)\) dual certificate. For the convenience of our presentation, we simplify the notation as \(f\) in the remainder of this section. 
However, it is important to keep in mind that our arc finding procedures are always applied to the extension \(f^{\sharp}\).

Even though Lemma 6.11 shows how the move-to-front approach allows us to deduce arcs or dimensionality reductions from elements in the active set \(A\), the cost of computing \(\Delta_{p}=\frac{1}{m}\sum_{t\in[m]}\Delta_{\pi^{(t)},p^{\downarrow}}\) for every \(p\in A\) is unfortunately too high. In fact, even computing a single coordinate \((\Delta_{\pi^{(t)},p^{\downarrow}})_{q}\) for all \(t\in[m]\) and \(p\in A\) naively takes \(m\cdot|A|\) queries to the evaluation oracle. As \(m\approx\mathsf{poly}(k)\cdot\frac{\|u_{f}\|_{1}}{\|u_{f}\|_{\infty}}\) in Theorem 6.1 and \(|A|\) can be as large as \(n\), naively calculating \((\Delta_{\pi^{(t)},p^{\downarrow}})_{q}\) for all \((t,p)\in[m]\times A\) requires \(n^{2}\cdot\mathsf{poly}(k)\) evaluation queries, which we cannot afford.

**Estimating large \((\Delta_{p})_{q}\) via sampling.** Hence, we resort to estimating the large coordinates of \(\Delta_{p}\) up to \(O(\frac{1}{k}\|\Delta_{p}\|_{1})\) error via sampling, which allows us to identify coordinates \(q\in V\setminus p^{\downarrow}\) with \((\Delta_{p})_{q}>\frac{1}{k}\|\Delta_{p}\|_{1}\) or certify that none exists. Before presenting the procedure, let us start by gathering some intuition. For every \(p\in A\), we consider the matrix \(M_{p}\in\mathbb{R}^{m\times(V\setminus p^{\downarrow})}\) where each row \(t\in[m]\) is exactly \(\Delta_{\pi^{(t)},p^{\downarrow}}\). The quantity we would like to estimate is the vector \(\Delta_{p}=\frac{1}{m}\sum_{t\in[m]}\Delta_{\pi^{(t)},p^{\downarrow}}\), which is the average of the rows of \(M_{p}\). To do so, the naive approach would be to uniformly sample a row \(t\in[m]\), then sample \(q\propto\frac{(\Delta_{\pi^{(t)},p^{\downarrow}})_{q}}{\|\Delta_{\pi^{(t)},p^{\downarrow}}\|_{1}}=:\beta_{q}\), and output \(\|\Delta_{\pi^{(t)},p^{\downarrow}}\|_{1}\cdot\vec{1}_{q}\). This can easily be verified to be an unbiased estimator for \(\Delta_{p}\):

\[\mathbb{E}_{t\sim[m],q\sim\beta_{q}}\Big[\|\Delta_{\pi^{(t)},p^{\downarrow}}\|_{1}\cdot\mathbf{1}_{q}\Big]=\mathbb{E}_{t\sim[m]}[\Delta_{\pi^{(t)},p^{\downarrow}}]=\Delta_{p}.\]

However, the output estimate has an entry \(\|\Delta_{\pi^{(t)},p^{\downarrow}}\|_{1}\) that can be on the order of \(\|u_{f}\|_{1}\), while \((\Delta_{p})_{q}\leq\|\Delta_{p}\|_{1}\) is typically on the order of \(O(k)\cdot\|u_{f}\|_{\infty}\). Thus using the above sampling procedure to estimate \((\Delta_{p})_{q}\) up to \(O(\frac{1}{k}\|\Delta_{p}\|_{1})\) error would require too many samples. A natural fix to the above idea is to sample \(t\in[m]\) depending on \(\|\Delta_{\pi^{(t)},p^{\downarrow}}\|_{1}\). In particular, by sampling \(t\propto\|\Delta_{\pi^{(t)},p^{\downarrow}}\|_{1}\) and \(q\propto\beta_{q}\), the random vector \(\|\Delta_{p}\|_{1}\cdot\mathbf{1}_{q}\) is an unbiased estimator of \(\Delta_{p}\) whose non-zero entry now only has range \(\|\Delta_{p}\|_{1}\). This allows us to obtain \(O(\frac{1}{k}\|\Delta_{p}\|_{1})\) error for large entries \((\Delta_{p})_{q}\) using only \(\mathsf{poly}(k)\) samples. However, the obstacle to this approach is that computing \(\|\Delta_{\pi^{(t)},p^{\downarrow}}\|_{1}\) for all \(p\in A\) and \(t\in[m]\) would again require \(m|A|=O(n^{2}\mathsf{poly}(k))\) queries, which is computationally too expensive for us.
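As a concrete single-\(p\) illustration of the two row-sampling schemes above, the sketch below contrasts the uniform-row estimator with the \(\ell_{1}\)-weighted one. Here `rows` is a hypothetical stand-in for the rows \(\Delta_{\pi^{(t)},p^{\downarrow}}\) of \(M_{p}\), given as dictionaries of nonnegative entries; the function names are illustrative only.

```python
import random

def estimate_uniform(rows):
    """Sample a row t uniformly, then a coordinate q ~ beta_q; output ||row_t||_1 * e_q."""
    t = random.randrange(len(rows))
    row = rows[t]
    norm1 = sum(row.values())
    q = random.choices(list(row), weights=list(row.values()))[0]
    return {q: norm1}                      # nonzero entry can be as large as ||row_t||_1

def estimate_weighted(rows):
    """Sample t proportional to ||row_t||_1, then q ~ beta_q; output ||Delta_p||_1 * e_q."""
    norms = [sum(r.values()) for r in rows]
    avg_norm = sum(norms) / len(rows)      # equals ||Delta_p||_1 since all entries are nonnegative
    t = random.choices(range(len(rows)), weights=norms)[0]
    row = rows[t]
    q = random.choices(list(row), weights=list(row.values()))[0]
    return {q: avg_norm}                   # nonzero entry now bounded by ||Delta_p||_1
```

Both samplers are unbiased for \(\Delta_{p}\); the weighted one keeps the nonzero entry bounded by \(\|\Delta_{p}\|_{1}\), which is what allows \(\mathsf{poly}(k)\) samples to suffice, but it presumes access to every row norm \(\|\Delta_{\pi^{(t)},p^{\downarrow}}\|_{1}\).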
**Efficiently estimating \(\Delta_{p}\) simultaneously for all \(p\in A\).** To get around the aforementioned computational bottleneck, we present a novel oversampling technique that allows us to obtain \(\mathsf{poly}(k)\) samples of \(t\) for all \(p\in A\) simultaneously. At first glance, this might appear impossible as the values \((\Delta_{p})_{q}\) crucially depend on \(p\in A\). Our crucial idea here is to perform an "oversampling" using the uniform upper bound vector \(2u_{f}-g_{\pi^{(t)}}\geq\Delta_{\pi^{(t)},p^{\downarrow}}\) for all \(p\in A\). In particular, for any \(t\in[m]\), by sampling \(a\propto\frac{2(u_{f})_{a}-(g_{\pi^{(t)}})_{a}}{\|2u_{f}-g_{\pi^{(t)}}\|_{1}}\), we simultaneously have for all \(p\in A\) that

\[\mathbb{P}(a\in p^{\downarrow})\propto B_{p}^{(t)}\stackrel{{\mathrm{def}}}{{=}}2u_{f}(p^{\downarrow})-g_{\pi^{(t)}}(p^{\downarrow})\geq\|\Delta_{\pi^{(t)},p^{\downarrow}}\|_{1}.\]

Therefore, by sampling \(t\in[m]\) uniformly and \(a\propto\frac{2(u_{f})_{a}-(g_{\pi^{(t)}})_{a}}{\|2u_{f}-g_{\pi^{(t)}}\|_{1}}\), and accepting this sample for each \(p\in A\) only if \(a\in p^{\downarrow}\), we obtain a way to sample \(t\in[m]\) proportional to \(B_{p}^{(t)}\) simultaneously for all \(p\in A\). Moreover, as \(B_{p}^{(t)}\geq\|\Delta_{\pi^{(t)},p^{\downarrow}}\|_{1}\), this oversampling still requires only \(\mathsf{poly}(k)\) samples in \(C_{p}\) to estimate \((\Delta_{p})_{q}\) up to error \(O(\frac{1}{k}\|\Delta_{p}\|_{1})\).

Having obtained samples \(C_{p}\) proportional to \(B_{p}^{(t)}\) for every \(p\in A\), we feed them into the procedure Negative-Mass-Estimate (given in Algorithm 10) to compute an unbiased estimate \(\tilde{\Delta}_{p}\) for all \(p\in A\) with additive error \(\frac{1}{4k}\left\|\Delta_{p}\right\|_{1}\) for every coordinate \(q\in V\setminus p^{\downarrow}\). To do so, as mentioned earlier, for each \(t\in C_{p}\) it samples an element \(q\propto\beta_{q}\) and uses \(\frac{\left\|\Delta_{\pi^{(t)},p^{\downarrow}}\right\|_{1}}{B_{p}^{(t)}}\cdot\vec{1}_{q}\) as the estimate for \(\Delta_{p}\). This estimate satisfies

\[\mathbb{E}_{t\propto B_{p}^{(t)},q\propto\beta_{q}}\Big[\frac{\left\|\Delta_{\pi^{(t)},p^{\downarrow}}\right\|_{1}}{B_{p}^{(t)}}\cdot\vec{1}_{q}\Big]=\mathbb{E}_{t\propto B_{p}^{(t)}}\Big[\frac{\Delta_{\pi^{(t)},p^{\downarrow}}}{B_{p}^{(t)}}\Big]=\frac{\Delta_{p}}{\frac{1}{m}\sum_{t\in[m]}B_{p}^{(t)}},\]

which is a factor of \(\frac{1}{m}\sum_{t\in[m]}B_{p}^{(t)}\) smaller than \(\Delta_{p}\). To fix this, while performing the oversampling described above, we also obtain estimates \(\tilde{z}_{p}\) of the values \(\frac{1}{m}\sum_{t\in[m]}B_{p}^{(t)}\) for all \(p\in A\) and re-weight properly at the end. See Algorithms 9 and 10 for formal descriptions of our arc-finding procedures.
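The following is a minimal sketch of the oversampling step just described (roughly the sampling loop of Algorithm 9), assuming the sets \(p^{\downarrow}\) and the weights \(2(u_{f})_{a}-(g_{\pi^{(t)}})_{a}\) are given as plain dictionaries; `oversample`, `weights`, `down`, and `active` are hypothetical names used only for this illustration.

```python
import random

def oversample(weights, down, active, num_rounds, cap_per_p):
    """weights[t][a] stands for 2(u_f)_a - (g_{pi^(t)})_a >= 0, down[p] for p_down,
    and active for the set A.  Returns the sample sets C_p and the estimates z-tilde_p."""
    m = len(weights)
    total = sum(sum(w.values()) for w in weights) / m    # equals 2 ||u_f||_1 - f(V)
    samples = {p: [] for p in active}                    # C_p: accepted permutation indices t
    count = {p: 0 for p in active}                       # counters used to form z-tilde_p
    for _ in range(num_rounds):
        t = random.randrange(m)                          # uniform permutation index
        w = weights[t]
        a = random.choices(list(w), weights=list(w.values()))[0]
        for p in active:
            if a in down[p]:                             # accept t for p iff a lies in p_down
                count[p] += 1
                if len(samples[p]) < cap_per_p:
                    samples[p].append(t)
    # Multiplicative estimate of (1/m) * sum_t B_p^(t) = 2 u_f(p_down) - y(p_down)
    z_tilde = {p: count[p] / num_rounds * total for p in active}
    return samples, z_tilde
```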
```
Data:   A sparsity parameter k, a submodular function f with a k-sparse minimizer,
        permutations {pi^(t)}_{t in [m]} such that (1/m) * sum_t g_{pi^(t)} is a (delta, k)
        dual certificate with delta <= ||u_f||_inf / (24k), and a parameter
        Scale >= ||u_f||_inf >= -12k * f^*
Result: A set S_p of arcs from every p in V with (u_f)_p >= Scale/2, where S_p = {}
        certifies that p is not in any k-sparse minimizer

 1  Function Arc-Finding(f, k, {pi^(t)}_{t in [m]}, Scale):
 2      A <- {p : (u_f)_p >= Scale/2};  S_p, C_p <- {} for all p in A   // initialize arcs and samples from p
 3      N <- Theta(k^4 (log n) * ||u_f||_1 / ||u_f||_inf)               // draw N samples in total
 4      N_p <- Theta(k^4 log n) and count_p <- 0 for all p in A         // assign N_p samples to each p in A
 5      for itr in [N] do
 6          sample a pair (t, a) in [m] x V with probability (2(u_f)_a - (g_{pi^(t)})_a) / (m (2||u_f||_1 - f(V)))
 7          for every p in A with a in p_down do
 8              count_p <- count_p + 1
 9              if |C_p| < N_p then C_p <- C_p + {t}
10      z~_p <- (count_p / N) * (2||u_f||_1 - f(V)) for all p in A      // multiplicative estimate of (1/m) sum_t B_p^(t)
11      {Delta~_p}_{p in A} <- NegativeMassEstimate(f, {pi^(t)}_{t in [m]}, {C_p}_{p in A}, {z~_p}_{p in A})
12      for all p in A and q in V \ p_down do
13          if Delta~_p(q) >= (3/(4k)) * ||Delta~_p||_1 then S_p <- S_p + {q}
14      return {S_p}_{p in A}
```

**Algorithm 9:** Arc Finding for Sequential Algorithm

The following is our main guarantee for the Negative-Mass-Estimate routine in Algorithm 10.

**Lemma 6.12**.: _Given a submodular function \(f:2^{V}\rightarrow\mathbb{R}\), subset \(A\subseteq V\), and permutations \(\{\pi^{(t)}\}_{t\in[m]}\). For each \(p\in A\), let \(C_{p}\subseteq[m]\) be \(N_{p}=10^{5}k^{4}\log n\) i.i.d. samples of \(t_{i}\) with \(\mathbb{P}(t_{i}=
For each sample \(t\in C_{p}\) and \(q\in V\setminus p^{\downarrow}\), we have \[\mathbb{E}_{t\propto B_{p}^{(t)},q\propto\beta_{q}}[\tilde{ \Delta}_{p}^{(t)}\cdot\tilde{z}_{p}] =\mathbb{E}_{t\propto B_{p}^{(t)},q\propto\beta_{q}}\Big{[}\frac{ \left\|\Delta_{\pi^{(t)},p^{\downarrow}}\right\|_{1}}{B_{p}^{(t)}}\cdot\vec{1} _{q}\cdot\tilde{z}_{p}\Big{]}=\mathbb{E}_{t\propto B_{p}^{(t)}}\Big{[}\frac{ \Delta_{\pi^{(t)},p^{\downarrow}}}{B_{p}^{(t)}}\cdot\tilde{z}_{p}\Big{]}\] \[=\frac{\Delta_{p}}{\frac{1}{m}\sum_{t\in[m]}B_{p}^{(t)}}\cdot \mathbb{E}[\tilde{z}_{p}]=\Delta_{p}.\] This implies \(\mathbb{E}[\tilde{\Delta}_{p}]=\Delta_{p}\), i.e. Algorithm 10 outputs an unbiased estimator. As \(\tilde{z}_{p}\in\frac{1}{m}\sum_{t\in[m]}B_{p}^{(t)}(1\pm\frac{1}{16k})\) with high probability, it suffices to show that \(\frac{1}{N_{p}}\sum_{t_{i}\in C_{p}}\tilde{\Delta}_{p}^{(t_{i})}\) concentrates around its mean. To this end, fix a coordinate \(q\in V\setminus p^{\downarrow}\). Note that for each sample \(t_{i}\in C_{p}\), we have \[0\leq(\tilde{\Delta}_{p}^{t_{i}})_{q}\leq\frac{\left\|\Delta_{\pi^{t_{i}},p^{ \downarrow}}\right\|_{1}}{B_{p}^{(t_{i})}}\leq 1.\] Let \(\mu_{q}\stackrel{{\mathrm{def}}}{{=}}\mathbb{E}[(\tilde{\Delta}_ {p}^{t_{i}})_{q}]\). Using Hoeffding's inequality with error \(\gamma=\frac{1}{100k^{2}}\) gives that \[\mathbb{P}\Big{[}\Big{|}\frac{1}{N_{p}}\sum_{t_{i}\in C_{p}}(\tilde{\Delta}_ {p}^{(t_{i})})_{q}-\mu_{q}\Big{|}\geq\gamma\Big{]}\leq 2\exp\big{(}-\gamma^{2}N_{p}/2 \big{)}\leq 1/\mathsf{poly}(n).\] This shows that with high probability, for all \(p\in A\) and \(q\in V\setminus p^{\downarrow}\) we have \[\Big{|}(\tilde{\Delta}_{p})_{q}-(\Delta_{p})_{q}\Big{|}=\Big{|}\frac{1}{N_{p}} \sum_{t_{i}\in C_{p}}(\tilde{\Delta}_{p}^{(t_{i})})_{q}\cdot\tilde{z}_{p}-( \Delta_{p})_{q}\Big{|}\leq\frac{1}{16k}(\Delta_{p})_{q}+(1+\frac{1}{16k})\cdot \frac{\gamma}{m}\sum_{t\in[m]}B_{p}^{(t)}\] \[\leq\frac{\|\Delta_{p}\|_{1}}{16k}+\frac{17}{16}\cdot\frac{1}{100k^{2}} \cdot 10k\cdot\|\Delta_{p}\|_{1}\leq\frac{\|\Delta_{p}\|_{1}}{8k},\] where the second inequality uses the assumption that \(\frac{1}{m}\sum_{t\in[m]}B_{p}^{(t)}\leq 10k\|\Delta_{p}\|_{1}\). The proof that \(\|\tilde{\Delta}_{p}\|_{1}\in(1+\frac{1}{8k})\|\Delta_{p}\|_{1}\) is similar. This completes the proof of the first statement of the lemma. Now we prove the "moreover" part of the lemma. Note that the only place where Algorithm 10 queries the evaluation oracle is in Line 4. In this line, computing \(\|\Delta_{\pi^{(t)},p^{\downarrow}}\|_{1}\) and \(B_{p}^{(t)}\) takes \(O(k)\) queries for each \(p\in A\) and \(t\in C_{p}\) which is a total of \(O(k\sum_{p\in A}N_{p})\) queries. It therefore suffices to show that we can sample \(q\) proportional to \((\Delta_{\pi^{(t)},p^{\downarrow}})_{q}\) in \(\widetilde{O}(k\sum_{p\in A}N_{p})\) queries. Note that \((\Delta_{\pi^{(t)},p^{\downarrow}})_{q}\geq 0\) for all \(q\in V\setminus p^{\downarrow}\) and that for any \(i\in V\setminus p^{\downarrow}\) computing \(\sum_{j\in[i]\setminus p^{\downarrow}}(\Delta_{\pi^{(t)},p^{\downarrow}})_{j}\) takes only \(O(k)\) queries since \(|p^{\downarrow}|\leq k\). It follows that we can apply binary search as in the proof of Lemma 6.3 to sample \(q\propto\beta_{q}\) in \(O(k\log n)\) queries. For all \(p\in A\) and \(q\in V\setminus A\), this is a total of \(\widetilde{O}(k\sum_{p\in A}N_{p})\) queries. Finally, note that every operation in Algorithm 10 requires at most \(\mathsf{poly}(n)\) additional computational cost. 
This finishes the proof of the lemma. Now, we are ready to present our result regarding the arc-finding procedure. **Lemma 6.13** (Arc Finding for Sequential Algorithm).: _Let \(f\) be a submodular function with \(f^{*}\geq-\frac{\|u_{f}\|_{\infty}}{12k}\) and \(\{\pi^{(t)}\}_{t\in[m]}\) be permutations such that \(y=\frac{1}{m}\sum_{t\in[m]}g_{\pi^{(t)}}\) is a \((\delta,k)\) dual certificate with \(\delta\leq\frac{\|u_{f}\|_{\infty}}{24k}\). Then with high probability, Algorithm 9 outputs arcs \(S_{p}\neq\emptyset\) for each \(p\in A\) that belongs to some \(k\)-sparse minimizer. Moreover, Algorithm 9 uses \(\widetilde{O}(k^{5}\frac{\|u_{f}\|_{1}}{\|u_{f}\|_{\infty}})\) queries and \(\mathsf{poly}(n)\) additional runtime._ Proof of Lemma 6.13.: First, we show that the input passed to Algorithm 10 satisfies the conditions in Lemma 6.12. We begin by proving that, at the end of the for loop in Line 6, we have collected \(N_{p}\) samples (i.e. \(|C_{p}|=N_{p}\)) for all \(p\in A\) with high probability. To prove this, we fix \(p\in A\). In each iteration of Line 6, if \(|C_{p}|<N_{p}\), then the probability that \(C_{p}\) is updated is \(\frac{2u_{f}(p^{\downarrow})-g_{\pi^{(t)}}(p^{\downarrow})}{2\|u_{f}\|_{1}-f(V )}\). Since \((u_{f})_{q}-(g_{\pi^{(t)}})_{q}\geq(u_{f})_{q}\) for every \(q\in V\), and \(-\frac{\|u_{f}\|_{\infty}}{12k}\leq f(V)\leq\|u_{f}\|_{1}\), we have \[\frac{2u_{f}(p^{\downarrow})-g_{\pi^{(t)}}(p^{\downarrow})}{2\|u_{f}\|_{1}-f( V)}\geq\frac{u_{f}(p^{\downarrow})}{3\|u_{f}\|_{1}}\geq\frac{\|u_{f}\|_{ \infty}}{6\|u_{f}\|_{1}},\] where the last inequality follows because \(p\in A\) satisfies \((u_{f})_{p}\geq\|u_{f}\|_{\infty}/2\). Therefore, in \(N=\Omega(k^{4}\log n\frac{\|u_{f}\|_{1}}{\|u_{f}\|_{\infty}})\) iterations, by the multiplicative Chernoff bound, \(|C_{p}|\) will be updated \(\Omega(\frac{N\|u_{f}\|_{\infty}}{\|u_{f}\|_{1}})=\Omega(k^{4}\log n)\) times with high probability. Moreover, each \(t_{i}\in C_{p}\) is indeed picked i.i.d. from \([m]\) with \(\mathbb{P}(t_{i}=t)\propto B_{p}^{(t)}\). This is because as long as \(|C_{p}|<N_{p}\), \(C_{p}\) gets updated if and only if the sample \((t,a)\) drawn satisfies \(a\in p^{\downarrow}\). Conditioned on the event that \(a\in p^{\downarrow}\), the sample \(t\) has probability distribution over \([m]\) proportional to \(B_{p}^{(t)}=2u_{f}(p^{\downarrow})-g_{\pi^{(t)}}(p^{\downarrow})\). Next, we show that with high probability, \(\tilde{z}_{p}\in(1\pm\frac{1}{16k})\cdot(2u_{f}(p^{\downarrow})-y(p^{ \downarrow}))=(1\pm\frac{1}{16k})\cdot\frac{1}{m}\sum_{t\in[m]}B_{p}^{(t)}\) for all \(p\in A\). This holds since each time we draw a sample \((t,a)\), the probability that \(\mathsf{count}_{p}\) gets updated is \[\mathbb{E}_{t\sim[m]}\Big{[}\frac{2u_{f}(p^{\downarrow})-g_{\pi^{(t)}}(p^{ \downarrow})}{2\|u_{f}\|_{1}-f(V)}\Big{]}=\frac{2u_{f}(p^{\downarrow})-y(p^{ \downarrow})}{2\|u_{f}\|_{1}-f(V)}\geq\frac{\|u_{f}\|_{\infty}}{6\|u_{f}\|_{1}}.\] This implies that \(\mathbb{E}[\tilde{z}_{p}]=2u_{f}(p^{\downarrow})-y(p^{\downarrow})=\frac{1}{m} \sum_{t\in[m]}B_{p}^{(t)}\), i.e. \(\tilde{z}_{p}\) are unbiased estimators. Moreover, since we draw \(N=\widetilde{O}(k^{4}\frac{\|u_{f}\|_{1}}{\|u_{f}\|_{\infty}})\) samples in total, multiplicative Chernoff bound again implies that \(\tilde{z}_{p}\) are indeed \((1\pm\frac{1}{16k})\)-multiplicative estimates of \(2u_{f}(p^{\downarrow})-y(p^{\downarrow})\) with high probability. 
Lastly, we show that \(\mathbb{E}[\tilde{z}_{p}]=O(k)\cdot\left\|\Delta_{p}\right\|_{1}\) for all \(p\in A\) that belongs to a \(k\)-sparse minimizer. To see why this holds, note that \[\|\Delta_{p}\|_{1}=f(p^{\downarrow})-y(p^{\downarrow})=(u_{f})_{p}+f(p^{ \downarrow}\setminus\{p\})-y(p^{\downarrow})\geq(1-\frac{1}{6k})\cdot(u_{f})_ {p}-\delta,\] where in the last inequality we used that \(f^{*}\geq-\frac{\|u_{f}\|_{\infty}}{12k}\geq-\frac{(u_{f})_{p}}{6k}\) and \(y(p^{\downarrow})\leq\delta\) by Lemma 6.11. Using \(y_{-}^{k+1}(V)\geq f^{*}-\delta\) as \(y\) is a \((\delta,k)\) dual certificate, this further implies that \[\frac{2u_{f}(p^{\downarrow})-y(p^{\downarrow})}{\|\Delta_{p}\|_{1}}\leq\frac{ 2k\|u_{f}\|_{\infty}-(f^{*}-\delta)}{(1-\frac{1}{6k})\cdot(u_{f})_{p}-\delta} \leq O(k),\] where the last inequality follows from \(\delta=O(\frac{1}{k}\|u_{f}\|_{\infty})\) and \(p\in A\). Hence, the input passed to Algorithm 10 satisfies the assumptions in Lemma 6.12. Next, we prove the correctness of our arc finding method. Fix \(p\in A\) that belongs to some \(k\)-sparse minimizer. By Lemma 6.12, we have \((\tilde{\Delta}_{p})_{q}\in(\Delta_{p})_{q}\pm\frac{1}{8k}\|\Delta_{p}\|_{1}\) and \(\|\tilde{\Delta}_{p}\|_{1}\in(1\pm\frac{1}{8k})\|\Delta_{p}\|_{1}\) with high probability for all \(q\in V\setminus p^{\downarrow}\). Therefore, every \(q\) with \((\tilde{\Delta}_{p})_{q}\geq\frac{3}{4k}\|\tilde{\Delta}_{p}\|_{1}\) must have \((\Delta_{p})_{q}\geq\frac{1}{2k}\|\Delta_{p}\|_{1}\), implying that \(p\to q\) is an arc due to Lemma 6.11. Hence, all arcs deduced are valid. Lastly, note that for each \(p\in A\) that belongs to some \(k\)-sparse minimizer, there must be at least one such \(q\in V\setminus p^{\downarrow}\), as otherwise we would have \((\Delta_{p})_{q}\leq\frac{1}{k}\|\Delta_{p}\|_{1}\) for all \(q\) which is a contradiction to Lemma 6.11. This implies that we can find valid arcs \(S_{p}\neq\emptyset\) for every element \(p\in A\) that belongs to some \(k\)-sparse minimizer. Finally, we show that Algorithm 9 can be implemented using \(\widetilde{O}(k^{5}\cdot(\frac{\|u_{f}\|_{1}}{\|u_{f}\|_{\infty}}+|A|))\) queries. Note that each sample pair \((t,a)\) in Line 6 can be implemented in \(O(\log n)\) queries and \(O(n\log n)\) additional runtime by first sampling \(t\sim[m]\) uniformly at random, and then use binary search to sample \(a\in V\). Together with the runtime bound in Lemma 6.12, we have that Algorithm 9 can be implemented using \(\widetilde{O}(N+k\sum_{p\in A}N_{p})=\widetilde{O}(k^{5}\frac{\|u_{f}\|_{1}}{ \|u_{f}\|_{\infty}})\) queries and \(\mathsf{poly}(n)\) additional computation, where we used that \(|A|\leq 2\frac{\|u_{f}\|_{1}}{\|u_{f}\|_{\infty}}\). This completes the proof of the lemma. ### Putting It All Together We are now ready to prove the correctness, query complexity and runtime guarantees for our sequential algorithm. We prove Theorem 1.2 and Theorem 1.3 in the following. **Theorem 1.2** (Weakly-polynomial \(k\)-sparse SFM).: _There is a randomized algorithm that outputs an \(\epsilon\)-approximate minimizer for \(k\)-sparse SFM whp. in \(\widetilde{O}((n\cdot\mathsf{poly}(k)\cdot\mathsf{EO}+\mathsf{poly}(n)))\log( |f|/\epsilon))\) time._ Proof of Theorem 1.2.: Consider the meta algorithm (Algorithm 1) with subprocedures Dimensionality-Reduction and \(\mathsf{Arc}\)-Finding given in Algorithms 8 and 9. The correctness of Algorithm 1 is already given in Corollary 4.4 so we only need to analyze its query complexity and runtime. 
Note that in each iteration of the outer while loop in Line 3 of Algorithm 1, one of the following three things will happen: (1) the size of the contracted elements \(W\) will increase due to Dimensionality-Reduction in Line 4 when \(f^{*}\leq-\|u_{f^{\sharp}}\|_{\infty}/12k\), or (2) the size of the discarded elements \(D\) will increase because there exists an element \(p\in V\setminus(W\cup D)\) such that \((u_{f^{\sharp}})_{p}<0\) in Line 30 of Algorithm 11 (extension maintainer), or (3) \(\|u_{f^{\sharp}}\|_{\infty}\) will decrease by a factor of \(2\), and a set of arcs \(S_{p}\) is found for every element \(p\in V\setminus(D\cup W)\) with \((u_{f^{\sharp}})_{p}>\mathsf{Scale}/2\) in the while loop in Line 8 when \(f^{*}>-\|u_{f^{\sharp}}\|_{\infty}/12k\) (in this case, the discarded set \(D\) might also increase due to \(S_{p}=\emptyset\) or to an element \(p\) having more than \(k\) arcs).

Note that (1) and (2) can happen at most \(k\) times before \(|W|\geq k\) and Algorithm 1 outputs \(W\); (3) can happen at most \(\log(|f|n/\epsilon)\) times before \(\|u_{f^{\sharp}}\|_{\infty}\leq\epsilon/n\). So the total number of iterations of the while loop in Line 3 will be at most \(O(k+\log(|f|n/\epsilon))\). Moreover, each iteration makes \(1\) call to Dimensionality-Reduction and at most \(k\) calls to \(\mathsf{Arc}\)-Finding, as otherwise the number of new arcs found from elements \(p\) with \((u_{f^{\sharp}})_{p}>\mathsf{Scale}/2\) would be more than \(k\) and such elements can be safely discarded. The total number of times Dimensionality-Reduction is called in Line 4 is at most \(k+\log(|f|n/\epsilon)\) by the above. By Lemma 6.7, each call to Dimensionality-Reduction can be done in runtime \(\widetilde{O}(\frac{\|u_{f^{\sharp}}\|_{1}}{\|u_{f^{\sharp}}\|_{\infty}}k^{12}\cdot\mathsf{EO}+\mathsf{poly}(n))\). Next, to handle a call to \(\mathsf{Arc}\)-Finding, one needs to first compute a \((\delta,k)\) dual certificate \(y\) (implicitly given by the permutations \(\{\pi^{(t)}\}_{t\in[m]}\)), where \(\phi=\frac{\|u_{f^{\sharp}}\|_{\infty}}{12k}\), \(\delta=\frac{\|u_{f^{\sharp}}\|_{\infty}}{24k}=\phi/2\), which by Theorem 6.1 uses

\[\widetilde{O}(k^{6}\delta^{-4}\phi^{2}(\|u_{f^{\sharp}}\|_{\infty}+\phi)(\|u_{f^{\sharp}}\|_{1}+\phi))\leq\widetilde{O}(k^{8}\frac{\|u_{f^{\sharp}}\|_{1}}{\|u_{f^{\sharp}}\|_{\infty}})\]

queries to \(\mathsf{EO}\) and \(\mathsf{poly}(n)\) additional runtime. Given the \((\delta,k)\) dual certificate, Lemma 6.13 states that each call to \(\mathsf{Arc}\)-Finding can be done in \(\widetilde{O}(\frac{\|u_{f^{\sharp}}\|_{1}}{\|u_{f^{\sharp}}\|_{\infty}}k^{5}\cdot\mathsf{EO}+\mathsf{poly}(n))\) runtime. By bounding \(\frac{\|u_{f^{\sharp}}\|_{1}}{\|u_{f^{\sharp}}\|_{\infty}}\leq n\), we obtain that the runtime due to Dimensionality-Reduction and \(\mathsf{Arc}\)-Finding is \(\widetilde{O}((nk^{12}\cdot\mathsf{EO}+\mathsf{poly}(n))\log(|f|/\epsilon))\) in total. Finally, note that each update to the RingFamily can be implemented in \(O(m\cdot\mathsf{EO}+nk)\) time, where \(m\) is the total number of elements \(p\) from which arcs are found. Combining everything above, Algorithm 1 finds an \(\epsilon\)-approximate minimizer for \(k\)-sparse SFM in runtime \(\widetilde{O}((nk^{12}\cdot\mathsf{EO}+\mathsf{poly}(n))\cdot\log(|f|/\epsilon))\). This completes the proof of the theorem.

We now prove our strongly-polynomial result for our sequential algorithm.

**Theorem 1.3** (Strongly-polynomial \(k\)-sparse SFM).: _There is a randomized algorithm that outputs an exact minimizer for \(k\)-sparse SFM whp.
in \(\widetilde{O}(n\cdot\mathsf{poly}(k)\cdot\mathsf{EO}+\mathsf{poly}(n))\) time._ **Proof of Theorem 1.3.** Consider the meta algorithm in Algorithm 1 with subprocedures Dimensionality-Reduction and \(\mathsf{Arc}\)-Finding as in Algorithms 8 and 9, where we set \(\epsilon=0\). We show that this algorithm outputs a \(k\)-sparse minimizer and has runtime bound \(\widetilde{O}(nk^{13}\cdot\mathsf{EO}+\mathsf{poly}(n))\). The correctness of Algorithm 1 is already given in Corollary 4.4 so we only need to analyze its runtime. The runtime due to RingFamily operations can be easily bounded by \(O(n\cdot\mathsf{poly}(k)\cdot\mathsf{EO}+\mathsf{poly}(n))\), since on average each arc takes at most \(O(1\cdot\mathsf{EO}+nk)\) to update, and there can be at most \(nk\) arcs in total. Hence, it suffices to reason about the complexity of Dimensionality-Reduction and \(\mathsf{Arc}\)-Finding over the course of the algorithm. We have already argued that in the proof of Theorem 1.2 above that one iteration of the while loop in Line 3, the number of calls to Dimensionality-Reduction and \(\mathsf{Arc}\)-Finding is \(1\) and \(O(k)\) respectively. Moreover, each such iteration takes \(\widetilde{O}(\frac{\|u_{f^{\sharp}}\|_{1}}{\|u_{f^{\sharp}}\|_{\infty}}k^{13} \mathsf{EO}+\mathsf{poly}(n))\) time, where \(u_{f^{\sharp}}\) is the vector at the beginning of the while loop. To obtain a strongly polynomial runtime, we bound the calls to Dimensionality-Reduction and Arc-Finding in a more fine-grained way than in the proof of Theorem1.2. We charge the number of queries made each iteration of the while loop to elements in \(V\) according to the entries of the vector \(u_{f^{\sharp}}\). In particular, we charge \(\widetilde{O}(\frac{(u_{f^{\sharp}})_{p}}{\|u_{f^{\sharp}}\|_{\infty}}k^{12})\) queries to each \(p\in V\). Note that such a charging scheme accounts for the total number of queries used in one iteration of the outer while loop. Hence, it suffices to show that the total charge to element \(p\) is at most \(\widetilde{O}(k^{12})\). Now, fix an element \(p\in V\) and observe that \((u_{f^{\sharp}})_{p}\) can only change when we deduce an arc from \(p\) (directly or through transitive closure). Moreover, \((u_{f^{\sharp}})_{p}\) can change at most \(k\) times as otherwise \(p\) would have more than \(k\) arcs and should be discarded. We therefore focus on a sequence of iterations \(t_{0},\cdots,t_{1}\) where the value \((u_{f^{\sharp}}^{(t)})_{p}\) in the \(t\)th iteration is fixed throughout \(t\in\{t_{0},\cdots,t_{1}\}\). Note that \(\|u_{f^{\sharp}}^{(t)}\|_{\infty}\) decreases by a factor of \(2\) after each iteration \(t\) and that \(\|u_{f^{\sharp}}^{(t_{1})}\|_{\infty}\geq(u_{f^{\sharp}}^{(t_{1})})_{p}\). We can therefore bound the total number of queries charged to \(p\) in iterations \(t_{0},\cdots,t_{1}\) as \[\widetilde{O}(k^{12})\cdot\sum_{t=t_{0}}^{t_{1}}\frac{(u_{f^{\sharp}}^{(t)})_ {p}}{\|u_{f^{\sharp}}^{(t)}\|_{1}}\leq\widetilde{O}(k^{12})\sum_{t=t_{0}}^{t_ {1}}2^{-(t_{1}-t)}\leq\widetilde{O}(k^{12}).\] Since there are at most \(k\) such sequence of iterations for \(p\), the total number of queries charged to \(p\) throughout the entire algorithm is at most \(\widetilde{O}(k^{13})\). This proves that the algorithm makes at most \(\widetilde{O}(nk^{13})\) queries to EO in total. Finally, to prove that the additional computation is also strongly polynomial, we note that each iteration of the while loop in Line3 deduces either a dimensionality reduction or an arc. 
Hence, Line3 gets called at most \(nk\) times. Since each iteration uses \(\mathsf{poly}(n)\) additional computation, the total runtime we obtain is \(\widetilde{O}(nk^{13}\cdot\mathsf{EO}+\mathsf{poly}(n))\). ## 7 Ring Family and Extensions In this section, we present more details on arc information and the submodular extension \(f^{\sharp}\) mentioned in Section4.1. As was illustrated in Section4.1, our algorithms proceed by finding elements \(p,q\in V\) such that any \(k\)-sparse minimizer of the submodular function \(f\) containing \(p\) must also contain \(q\). Such information is captured by an _arc_\((p,q)\). To incorporate all the arc information, we use the following framework of SFM over ring families first introduced by Iwata, Fleischer, and Fujishige [13]. This framework was later used in many other algorithms, e.g. [1, 1, 15]. Here, we emphasize once more the slight difference between the arc definition we use and the standard one in the literature; see Footnote10 for a more detailed discussion. Our presentation and notations follow the ones in [15], with necessary adaptions to fit the arc definition of this paper. **Directed graph and partial ordering.** All arc information is maintained as a directed graph \(D=(V,F)\), with the property that if \((p,q)\in F\), then every \(k\)-sparse minimizer \(S\subseteq V\) of the submodular function \(f\) that contains \(p\) must also contain \(q\). We may assume that \(F\) is acyclic as any (directed) cycle of \(D\) can be contracted (e.g. Section4.1 of [15]). We may also assume that \(D\) is _transitive_, i.e. \((p,q)\in F\) iff there is a directed path from \(q\) to \(p\) in \(D\), by maintaining its transitive closure. The acyclic graph \(D=(V,F)\) defines a partial order \(\preceq_{F}\), i.e. \(q\preceq_{F}p\) if there exists a directed path in \(F\) from \(p\) to \(q\) (and \((p,q)\in F\) when \(D\) is transitive); in particular, \(p\preceq_{F}p\) for all \(p\in V\). We say that an ordering of the vertices is _consistent with \(\preceq_{F}\)_, if \(q\) is ordered before \(p\) whenever \(q\preceq_{F}p\). We use the simpler notation \(q\preceq p\) if \(F\) is clear from the context. **Ring families.** The acyclic graph \(D=(V,F)\) defines a ring family \(\mathcal{F}\stackrel{{\text{\tiny def}}}{{=}}\{S\subseteq V:(p \in S)\wedge(q\preceq p)\Rightarrow q\in S\}\), i.e. \(\mathcal{F}\) is the set of all lower ideals of the partial order \(\preceq\). Note that \(\mathcal{F}\) contains \(\emptyset\) and \(V\) and is closed under set intersection and union. Our algorithms begin with no arc information, so \(\mathcal{F}=2^{V}\). Throughout our algorithm, we maintain the ring family \(\mathcal{F}\subseteq 2^{V}\) (by the arcs in \(D=(V,F)\)) to be _consistent_ with all the \(k\)-sparse minimizers of the submodular function \(f\), i.e., \(\mathcal{F}\) contains all the \(k\)-sparse minimizers of \(f\). Given a ring family \(\mathcal{F}\), for any element \(p\in V\), we define \[p^{\downarrow\text{\tiny def}}\equiv\{q\in V:q\preceq p\}\quad\text{and}\quad p ^{\uparrow\text{\tiny def}}\equiv\{q\in V:p\preceq q\}.\] Since we have assumed that \(D=(V,F)\) is transitive, for all \(p\in V\), \(p^{\downarrow}=\{p\}\cup\{q\in V:(p,q)\in F\}\) and \(p^{\uparrow}=\{p\}\cup\{q\in V:(q,p)\in F\}\). 
Similarly, for any \(X\subseteq V\), we define \[X^{\downarrow\text{\tiny def}}\equiv\bigcup_{p\in X}p^{\downarrow}\quad\text{ and}\quad X^{\uparrow\text{\tiny def}}\equiv\bigcup_{p\in X}p^{\uparrow}.\] Note that \(X^{\downarrow}\) is the unique minimal set in \(\mathcal{F}\) containing \(X\), and \(V\setminus X^{\uparrow}\) is the unique maximal element of \(\mathcal{F}\) disjoint from \(X\). We also define the set \[X^{\sharp}\stackrel{{\text{\tiny def}}}{{=}}V\setminus(V \setminus X)^{\uparrow}\] that we shall use. Note that \(X^{\sharp}\) is the unique maximal set in \(\mathcal{F}\) that is contained in \(X\). **Upper bound values and extensions of submodular functions.** For every \(p\in V\), we define the upper bound values \[u_{p}\stackrel{{\text{\tiny def}}}{{=}}f(p^{\downarrow})-f(p^{ \downarrow}\setminus\{p\}).\] The intuitive meaning of \(u_{p}\) is that for any set \(X\in\mathcal{F}\) that does not contain \(p\) and that \(X\cup\{p\}\in\mathcal{F}\), then \(u_{p}\) is always an upper bound on the marginal value \(f(X\cup\{p\})-f(X)\). Recall from Lemma 4.1 that the upper bound values \(u_{p}\) coincide with \((u_{f^{\sharp}})_{p}\stackrel{{\text{\tiny def}}}{{=}}f^{\sharp}( \{p\})-f^{\sharp}(\emptyset)\) whenever \(u_{p}\geq 0\). Using the upper bound values, we define an extension \(f^{\sharp}\) of the submodular function \(f\) that captures the ring family structure \(\mathcal{F}\). **Definition 7.1** (Submodular Extension).: _Given a submodular function \(f\) and a ring family \(\mathcal{F}\) consistent with the structure of its minimizers, define_ \[f^{\sharp}(S)\stackrel{{\text{\tiny def}}}{{=}}f(S^{\sharp})+u^{ +}(S\setminus S^{\sharp}).\] The function \(f^{\sharp}\) is the natural complement of the function \(f^{\downarrow}\) defined in [10] with the relationship \(f^{\sharp}(S)=\bar{f}^{\downarrow}(V\setminus S)\), where \(\bar{f}(S)\stackrel{{\text{\tiny def}}}{{=}}f(V\setminus S)\) denotes the complement of the submodular function \(f\). However, we need the extension \(f^{\sharp}\) to crucially exploit the sparsity of the minimizers of \(f\). We now prove the following lemma which collects the properties of \(f^{\sharp}\). **Lemma 4.1** (Properties of Extension \(f^{\sharp}\)).: _Let \(f\) be a submodular function with a \(k\)-sparse minimizer. Then, the following properties hold for the extension \(f^{\sharp}\):_ 1. \(f^{\sharp}\) _is a submodular function,_ 2. \(f^{\sharp}(S)\geq f(S^{\sharp})\) _for any set_ \(S\subseteq V\)_, where_ \(S^{\sharp}\) _is the unique maximal subset of_ \(S\) _consistent with all the arcs;_ \(f^{\sharp}(S)=f(S)\) _for any set_ \(S\subseteq V\) _that is consistent with all the arcs,_ 3. _Any_ \(k\)_-sparse minimizer of_ \(f\) _is also a_ \(k\)_-sparse minimizer of_ \(f^{\sharp}\)_; for any minimizer_ \(S^{*}\) _of_ \(f^{\sharp}\)_, the maximal subset of_ \(S^{*}\) _consistent with all the arcs is a minimizer of_ \(f\)_,_ 4. _For any_ \(p\in V\) _such that_ \(u_{p}\stackrel{{\mbox{\tiny{\rm{\tiny def}}}}}{{=}}f(p^{\downarrow} )-f(p^{\downarrow}\setminus\{p\})\geq 0\)_, we have_ \(u_{p}=(u_{f^{\sharp}})_{p}\stackrel{{\mbox{\tiny{\rm{\tiny def}}}} }{{=}}f^{\sharp}(\{p\})-f^{\sharp}(\emptyset)\)_,_ 5. 
_When new arcs are added, the value of_ \(u_{p}\) _does not increase for any_ \(p\in V\)_._ Proof.: To prove property 1, we note from above that \(f^{\sharp}(S)=\bar{f}^{\downarrow}(V\setminus S)\), where \(\bar{f}(S)=f(V\setminus S)\) is the complement set function of \(f\) and \(\bar{f}^{\downarrow}\) is the submodular extension defined in [13]. It follows that \(f^{\sharp}\) is also a submodular function. The first part of property 2 follows from Definition 7.1 and the non-negativity of \(u^{+}\); the second part is because \(S^{\sharp}=S\) for any set \(S\) that is consistent with all the arcs, i.e. \(S\) is in the ring family generated by all the arcs. We now prove the first part of property 3. By property 2, \(f^{\sharp}(S)\geq f(S^{\sharp})\) and that \(f^{\sharp}(S^{*})=f(S^{*})\) for any \(k\)-sparse minimizer \(S^{*}\) of \(f\). It follows that the minimum value of \(f^{\sharp}\) agrees with \(f^{*}\) and any \(k\)-sparse minimizer \(S^{*}\) of \(f\) is also a minimizer of \(f^{\sharp}\). For the second part of property 3, we note that \(f((S^{*})^{\sharp})\leq f^{\sharp}(S^{*})=f^{*}\) which implies that \((S^{*})^{\sharp}\) is a minimizer of \(f^{\sharp}\). To prove property 4, we consider two cases. The first case is when there is no element \(q\neq p\) such that \(q\preceq p\). In this case, \(\{p\}\in\mathcal{F}\) and so \(\{p\}^{\sharp}=\{p\}\). We also have \(p^{\downarrow}=\{p\}\) and therefore \(u_{p}=f(\{p\})-f(\emptyset)=f^{\sharp}(\{p\})-f^{\sharp}(\emptyset)=(u_{f^{ \sharp}})_{p}\). In the other case where there exists \(q\neq p\) such that \(q\preceq p\), then \(\{p\}^{\sharp}=\emptyset\). If follows that \(f^{\sharp}(\{p\})=f(\emptyset)+u^{+}(\{p\})=f^{\sharp}(\emptyset)+u_{p}\), where the last equality uses the assumption that \(u_{p}\geq 0\). Since adding new arcs do not decrease the set \(p^{\downarrow}\) for any element \(p\in V\), property 5 immediately follows from the submodularity of \(f\). ### Missing Proofs for Extension Maintainer The proof of Lemma 4.1 follows immediately from the above. The data structure for the extension maintainer is formally given in Algorithm 11. We now prove that Algorithm 11 satisfies the properties in Theorem 4.2. **Proof of Theorem 4.2.** For \(\mathsf{Init}(V,k)\), we need \(O(n)\) time for the initialization of \(V\), \(D\), \(k\) and \(p^{\downarrow}\), and \(O(n\cdot\mathsf{EO}+n)\) time for computing \(u_{p}\). These operations can be done in \(O(1)\)-depth in parallel. For \(\mathsf{Subgrad}(\pi)\), note that \(\pi[i]^{\sharp}\) is monotonically increasing, so we can compute them one by one using a total of \(O(n)\) time. We then need to make \(n\) queries, one to each \(f(\pi[i]^{\sharp})\). These operations can be done in \(O(1)\) parallel round. The analysis for \(\mathsf{Partial}(i)\) is similar to \(\mathsf{Subgrad}(\pi)\). Finally, we prove the lemma statement for \(\mathsf{Update}(\{S_{p}\}_{p\in V})\), which requires a very careful analysis. In Line 20 - 21, we need to first compute the transitive closure when the new arcs corresponding to vertices in each \(S_{p}\) are added. Direct implementation of this step could take \(O(n)\) parallel depth, e.g. when the new arcs form a chain of length \(O(n)\) and propagating the information of the last arc along the chain takes \(O(n)\) depth. But clearly, such a large parallel depth is unnecessary for the correctness of the entire function, since each element \(p\in V\setminus D\) can have at most \(k\) arcs or else it will be discarded in Line 24. 
```
Algorithm 11: Extension maintainer data structure.

 1  State maintained:
 2    Ground set V, discarded elements D ⊆ V, and contracted elements W ⊆ V
        // Elements in D are not in any k-sparse minimizer; elements in W belong to every minimizer
 3    Submodular function f                          // Implicitly maintained and accessed by EO
 4    Explicit array of sets p↓ ⊆ V \ D for each p ∈ V \ D       // Closure of all arcs
 5    Sparsity parameter k ∈ Z≥0
 6    Explicit array of u_p ∈ R for all p ∈ V \ D    // Upper bound values, u_p = (u_{f♯})_p whenever u_p ≥ 0
 7  Information access:
 8    Subgradient of extension f♯                    // Submodular extension, only accessible through queries
 9    Discarded set D ⊆ V                            // Read-only access
10    u_p for any p ∈ V \ D                          // Read-only access
11  Function Init(V, k, f):
12    V ← V, D ← ∅, W ← ∅, k ← k, f ← f
13    for p ∈ V do
14      p↓ ← {p} and u_p ← f({p}) − f(∅)
15      Contract p into W if u_p < 0                 // Must be in any minimizer
16    end for
17
18
19  Function Update({S_p}_{p ∈ V}):
20    p↓ ← p↓ ∪ S_p for each p ∈ V \ D
21    Update each p↓ to its transitive closure       // Parallel implementation in Theorem 4.2
22    for p ∈ V \ D do
23      if |p↓| > k or p↓ ∩ D ≠ ∅ then
24        D ← D ∪ {p}                                // p is discarded if it has more than k arcs, or an arc to D
25      else
26        u_p ← f(p↓) − f(p↓ \ {p})                  // If p is not discarded, then neither are the elements in p↓
27      end if
28    end for
29    if ∃ p ∈ V \ (W ∪ D) with u_p < 0 then
30      Include all such p↓ in W and contract them   // Must be in any minimizer
31    end if
32  Function Subgrad(π):
33    if D is not at the end of π then return "Error"
34    g_i ← (f(π[i]♯) + u⁺(π[i] \ π[i]♯)) − (f(π[i−1]♯) + u⁺(π[i−1] \ π[i−1]♯)) for all i ∈ V \ D
35    return vector g ∈ R^{|V \ D|}                  // The full subgradient, for the parallel algorithm
36  Function Partial(i, π):
37    if i ∈ D then return "Error"
38    g_i ← (f(π[i]♯) + u⁺(π[i] \ π[i]♯)) − (f(π[i−1]♯) + u⁺(π[i−1] \ π[i−1]♯))
39    return g_i ∈ R                                 // The i-th coordinate of subgradient g_π, for the sequential algorithm
```
Therefore, we can implement Lines 20-21 in depth \(k\) by using a parallel BFS approach where each \(p\) runs the following procedure in parallel for \(k\) iterations: \(p\) checks every element \(q\in p^{\downarrow}\) and adds the new arcs in each \(q^{\downarrow}\) to \(p^{\downarrow}\). Note that this will not compute the full transitive closure, but it suffices for the condition in Line 24, and it correctly computes \(p^{\downarrow}\) for every element \(p\) that will not be discarded after the update. Note that the above strategy can also be implemented in total time \(O(nk)\), since we can freeze (and discard in Line 24) each element \(p\) whenever \(|p^{\downarrow}|>k\). This way each element adds at most \(k\) new arcs, with a total runtime of \(O(kn)\) for Lines 20-21.
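To make this bookkeeping concrete, below is a minimal Python sketch, under toy assumptions (a small modular oracle, hand-picked new arcs, and no contraction step \(W\)), of a single \(\mathsf{Update}\) call: the new arcs are added, closures are grown for \(k\) rounds, elements with more than \(k\) arcs or an arc into \(D\) are discarded, and \(u_{p}\) is refreshed for the survivors. It sketches the idea only and is not the paper's data structure or its parallel implementation.

```python
# Minimal sketch of one Update({S_p}) call of the extension maintainer (illustrative only).
# toy_oracle is a hypothetical modular (hence submodular) function whose minimizer {1, 3}
# is 2-sparse; the "new arcs" below are likewise made up for the example.

def toy_oracle(S):
    """f(S) = |S| - 2|S ∩ {1, 3}|, minimized by subsets of {1, 3}."""
    S = frozenset(S)
    return len(S) - 2 * len(S & {1, 3})

def update(p_down, D, u, f, new_arcs, k):
    """Add new arcs, grow closures for k rounds, discard heavy elements, refresh u_p."""
    # Line 20: add the new arcs p -> S_p.
    for p in list(p_down):
        p_down[p] |= new_arcs.get(p, set())
    # Line 21 (bounded version): k rounds of propagating q's closure into p's closure.
    for _ in range(k):
        for p in list(p_down):
            grown = set(p_down[p])
            for q in p_down[p]:
                grown |= p_down.get(q, {q})
            p_down[p] = grown
    # Lines 22-28: discard p if it has more than k arcs or an arc into D; else refresh u_p.
    for p in list(p_down):
        if len(p_down[p]) > k or p_down[p] & D:
            D.add(p)
            del p_down[p], u[p]
        else:
            u[p] = f(p_down[p]) - f(p_down[p] - {p})
    return p_down, D, u

if __name__ == "__main__":
    V, k, f = set(range(5)), 2, toy_oracle
    p_down = {p: {p} for p in V}
    u = {p: f({p}) - f(set()) for p in V}
    D = set()
    new_arcs = {0: {2}, 2: {4}}          # hypothetical arcs discovered by the outer algorithm
    p_down, D, u = update(p_down, D, u, f, new_arcs, k)
    print("discarded:", D)               # element 0 ends up with 3 > k arcs and is discarded
    print("upper bound values:", u)
```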
The remaining part of the function \(\mathsf{Update}(\{S_{p}\}_{p\in V})\) can be computed using \(O(1)\) depth, \(O(m)\) queries to EO, one for each \(p\) whose \(u_{p}\) needs to be updated, and additional \(O(nk)\) time. This holds because we only check whether \(u_{p}<0\) for the elements \(p\) whose \(u_{p}\) changed. ## Acknowledgements We thank Deeparnab Chakrabarty for helpful discussions and we thank the anonymous reviewers for helpful feedback. Andrei Graur was supported in part by the Nakagawa departmental fellowship award from the Management Science and Engineering Department at Stanford University, NSF CAREER Award CCF-1844855, and NSF Grant CCF-1955039. Part of this work was done while Haotian Jiang was a Ph.D. student at the University of Washington and supported by a Packard fellowship. Aaron Sidford was supported in part by a Microsoft Research Faculty Fellowship, NSF CAREER Award CCF-1844855, NSF Grant CCF-1955039, a PayPal research award, and a Sloan Research Fellowship.
In this paper, we study the problem of minimizing a submodular function \(f: 2^V \rightarrow \mathbb{R}\) that has a \(k\)-sparse minimizer. We propose a deterministic algorithm that computes an additive \(\epsilon\)-approximate minimizer of such an \(f\) using \(\widetilde{O}(\mathsf{poly}(k)\log(|f|/\epsilon))\) parallel depth and \(n\) queries to an evaluation oracle for \(f\), where \(|f| = \max_{S \subseteq V} |f(S)|\) denotes the maximum absolute value of \(f\). We further propose a randomized algorithm that, with high probability, computes a minimizer of \(f\)
2302.14357
A Token-Wise Beam Search Algorithm for RNN-T
Standard Recurrent Neural Network Transducers (RNN-T) decoding algorithms for speech recognition are iterating over the time axis, such that one time step is decoded before moving on to the next time step. Those algorithms result in a large number of calls to the joint network, which were shown in previous work to be an important factor that reduces decoding speed. We present a decoding beam search algorithm that batches the joint network calls across a segment of time steps, which results in 20%-96% decoding speedups consistently across all models and settings experimented with. In addition, aggregating emission probabilities over a segment may be seen as a better approximation to finding the most likely model output, causing our algorithm to improve oracle word error rate by up to 11% relative as the segment size increases, and to slightly improve general word error rate.
Gil Keren
2023-02-28T07:20:49
http://arxiv.org/abs/2302.14357v2
# A Token-Wise Beam Search Algorithm for RNN-T ###### Abstract Standard Recurrent Neural Network Transducers (RNN-T) decoding algorithms for speech recognition are iterating over the time axis, such that one time step is decoded before moving on to the next time step. Those algorithms result in a large number of calls to the joint network, which were shown in previous work to be an important factor that reduces decoding speed. We present a decoding beam search algorithm that batches the joint network calls across a segment of time steps, which results in 40%-70% decoding speedups, consistently across all models and settings experimented with. In addition, aggregating emission probabilities over a segment may be seen as a better approximation to finding the most likely model output, causing our algorithm to improve oracle word error rate by up to 10% relative as the segment size increases, and to slightly improve general word error rate. Gil Keren Meta AI, USA gilkeren@meta.com ## 1 Introduction The Recurrent Neural Network Transducer model (RNN-T) [1, 2] has become a common model of choice for on-device speech recognition over the past few years due to its attractive accuracy to model size tradeoff [3, 4]. Indeed, compared to Connectionist Temporal Classification (CTC) models, RNN-T models normally yield a better word error rate (WER) for a fixed model size [5]. However, CTC models are still largely superior in decoding speed [6], which results from their conditional independence assumption. During RNN-T decoding with standard algorithms, frames (time steps of the encoder network output) are being iterated over and decoded sequentially. For each frame, the joint network (the joiner) is applied to combine the encoder and prediction network outputs, before deciding which tokens are added to the N-best hypotheses beam [1, 7, 8]. The process repeats until no new tokens are emitted in this frame, and only then the next frame is decoded. Previous work has shown that even with a single layer joiner, its repeated application is responsible for a significant amount of decoding time and power consumption [9, 10], due to lack of parallelization and repeatedly loading the joiner weight matrix to memory. Note that for models using larger subword units, the joiner is being applied much more often than the prediction network, as the latter is only being applied when a new token is emitted. While decoding in a strictly monotonic manner along the time axis may be simpler, and mandatory in strict streaming use cases, in practice this is often not required. For non-streaming cases, decoding algorithms may indeed benefit from the available lookahead for better accuracy and computational efficiency. In practical streaming cases, streaming is normally done in a segment-by-segment manner, where each segment contains a few frames [11, 12]. Therefore in those scenarios as well, some lookahead is available for the decoding algorithm to benefit from. In this work, we present a novel beam search decoding algorithm for RNN-T that decodes audio utterances segment by segment. We employ the available lookahead to improve computational efficiency by batching the joint network applications across multiple frames and reducing the total number of invocations of the joint network. Moreover, token emission probabilities are potentially spread across a number of frames, which may cause a standard strictly monotonic decoding algorithm to discard a correct hypothesis during search.
Our proposed algorithm aggregates emission probabilities across all frames of the decoded segment, aiming at improving decoding accuracy. This may be viewed as a theoretically motivated improvement to the heuristic search process, which provides a better approximation to finding the most probable token sequence. Instead of iterating over frames, our algorithm iterates over emitted tokens. Given a segment to decode, the algorithm starts by batching the joiner calls across the entire segment and aggregates emission probabilities, to find the most likely (N-best) first tokens to emit in this segment. Next, the joiner calls are batched again across the entire segment and emission probabilities are aggregated, to find the most likely next tokens to emit in this segment, given the current prefix. The process continues until no new tokens are emitted in the segment. We use the term token-wise to describe this search logic, in contrast to the standard algorithms being time-wise. When using a segment size of one frame, our algorithm is identical to a standard breadth-first RNN-T decoding algorithm [7]. Note that while we reduce the total number of joiner calls, which in general improves decoding speed, each call incurs increased computation, since we are joining the entire segment instead of only one frame. This would not make sense for very long segments, as the total computation may increase dramatically. In the experiments below, we explore multiple segment sizes to find that segments containing about 3-5 frames (after the encoder stride) result in the best decoding speed. Overall, our novel token-wise beam search algorithm for RNN-T generalizes a standard beam search algorithm, and employs a larger lookahead which is normally available in both streaming and non-streaming applications, to improve decoding speed and accuracy. In experiments, we find that when using a segment size of 3-5 time frames, we manage to consistently increase the decoding speed by about 40%-70%. In addition, as the segment size increases, decoding accuracy improves, which is demonstrated by up to a 10% improvement in oracle WER, and slight improvements in general WER. ## 2 Related work We mention additional related work that was not otherwise mentioned in Section 1. Monotonic RNN-T [7] is another technique capable of reducing the number of joiner invocations, as each frame is joined at most once. However, without using complex initialization techniques, monotonic RNN-T normally results in some WER degradation compared to standard RNN-T [13]. Nevertheless, our proposed algorithm can be easily adjusted to support monotonic RNN-T in order to speed up its decoding even further. Alignment-length synchronous decoding (ALSD) [10] shows a reduction in joiner invocations and a decoding speedup; however, this was compared to an algorithm forcing each hypothesis to a fixed number (at least 4) of expansions per frame. ALSD speedups come from the algorithm's ability to trim hypotheses early without expanding them, an ability the standard baseline algorithm considered in this work possesses as well. In addition, ALSD allows different hypotheses to be joined with different encoder frames, which cannot be done trivially using a single joiner call, and therefore may result in a slowdown compared to other optimized implementations. Another technique reducing the number of joiner invocations is presented in [9], by using HAT factorization [14] and a threshold on the blank emission probability, above which the joiner will not be invoked.
While this method requires replacing standard RNN-T with the HAT variant and finding the appropriate threshold to use, our proposed algorithm can be altered to benefit from this method as well, such that speed gains from both methods are cumulative. ## 3 Method ### Recurrent Neural Network Transducer Consider an audio segment (possibly features) denoted \(x\), and a sequence of reference tokens (during training) or previously emitted tokens (during inference) \(y=(y_{1},...,y_{U})\). The RNN-T model has three main components. The encoder network, processing an audio segment with possible downsampling to length \(T\): \(h(t)=\texttt{encoder}(x)\), the prediction network processing the reference or emitted tokens: \(g(u)=\texttt{predictor}(y)\), and the joint network (the joiner) combining the above into token scores: \(j(t,u,k)=\texttt{joiner}(h,g)\). Here \(k\) denotes the token from a vocabulary \(V\cup\{\phi\}\) where \(\phi\) is the blank symbol. During decoding with an RNN-T model, the search process aims at finding the most probable output sequence: \(y^{*}=argmax_{y}p(y|x)\). Solving for the latter is intractable in practice, therefore beam search algorithms are normally used. Common versions are variants of the original algorithm [1], such as [7, 8, 10]. In this work, we refer to the breadth-first beam search algorithm from [7] as the standard decoding algorithm. This standard algorithm was used in many works including [15, 16, 12, 17, 18]. The standard algorithm decodes along the time axis of the audio. At each iteration, given \(N\) hypotheses in the beam, the joiner computes token emission probabilities for the current frame, including blank emission. Existing hypotheses are expanded by either a blank or a non-blank symbol, with new scores \[p(y_{1},...,y_{U},y_{U+1})=p(y_{1},...,y_{U})\cdot J(t,U,y_{U+1}) \tag{1}\] where \(J\) is the joiner output \(j\) after softmax normalization, and \(y_{U+1}\in V\cup\{\phi\}\) here. The most likely \(N\) new hypotheses replace the previous ones in the beam. The probability of different alignments corresponding to the same token sequence are summed. Since RNN-T allows multiple token emissions in a single time-step, the next iteration moves to the next frame only after all hypotheses in the beam end with a blank symbol emission. As a result, the joiner is invoked at least once per frame, and at least once more if some non-blank tokens are emitted in any of the \(N\) hypotheses in the beam during this frame. ### Token-Wise Beam Search As motivated above, our goal is to speed up decoding by reducing the total number of joiner invocations. Decoding is often done in an offline manner for an entire sequence, or in a non-strict streaming setting in a segment-by-segment manner, with each segment containing multiple encoder output frames. Therefore, we design an algorithm that decodes a number of frames simultaneously, allowing batching joiner invocations across frames and potentially improving the search accuracy. Our algorithm operates in a token-wise manner. Given a segment of \(S\) frames to decode, we aggregate emission probabilities across the sequence to find the most likely tokens to expand the current \(N\) hypotheses. Those expansions may happen in any frame, and not necessarily at the first frame. We continue by searching again for the most likely next token emissions across the sequence, making sure the second token is emitted after the first one. The search continues until no non-blank tokens emissions for the sequence are in the beam. 
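As a point of reference for the token-wise variant introduced above, the per-frame hypothesis expansion of the standard algorithm (Eq. 1) can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: it performs a single expansion round with a made-up joiner posterior, whereas the standard algorithm repeats such expansions within a frame until every hypothesis in the beam ends with a blank emission.

```python
# Sketch of the Eq. (1) score update in a standard time-synchronous beam search.
# Hypotheses with identical token sequences are merged by summing their probabilities,
# and only the N most probable sequences are kept. J_t is a fake posterior for one frame.

import numpy as np

def expand_one_frame(beam, J_t, beam_size, blank=0):
    """beam: dict mapping token tuples -> probability; J_t[u, k]: P(token k | frame t, prefix length u)."""
    expanded = {}
    for tokens, prob in beam.items():
        u = len(tokens)
        for k in range(J_t.shape[1]):
            new_tokens = tokens if k == blank else tokens + (k,)
            # Eq. (1): multiply the prefix probability by the joiner posterior, merging
            # alignments that yield the same token sequence by summing their probabilities.
            expanded[new_tokens] = expanded.get(new_tokens, 0.0) + prob * J_t[u, k]
    best = sorted(expanded.items(), key=lambda kv: kv[1], reverse=True)[:beam_size]
    return dict(best)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vocab_size, max_prefix = 5, 4                            # toy sizes; token 0 is the blank symbol
    J_t = rng.dirichlet(np.ones(vocab_size), size=max_prefix)  # made-up per-prefix posterior for one frame
    beam = {(): 1.0}                                         # start from the empty hypothesis
    beam = expand_one_frame(beam, J_t, beam_size=2)
    print(beam)
```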
We describe how to aggregate emission probabilities across a sequence of length \(S\). Assume a prefix of non-blank tokens \(y=(y_{1},...,y_{U})\) with probability \(p(y)\). For each frame \(t_{1}\), denote by \(p(y_{1},...,y_{U}(t_{1}))\) the probability of the sequence \(y\) in which the last token \(y_{U}\) was emitted in frame \(t_{1}\), such that: \[p(y)=\sum_{t_{1}=1}^{S}p(y_{1},...,y_{U}(t_{1})). \tag{2}\] Define the probability of emitting blank from frame \(t_{1}\) to frame \(t_{2}\) as: \[Bl(U,t_{1},t_{2})=\begin{cases}\prod_{t^{\prime}=t_{1}}^{t_{2}-1}J(t^{\prime},U,\phi)&t_{1}<t_{2}\\ 1&t_{1}=t_{2}\\ 0&t_{1}>t_{2}.\end{cases} \tag{3}\] The probability to emit a new token \(y_{U+1}\) at frame \(t_{2}\), given that the previous token was emitted in an earlier frame \(t_{1}\), is the probability of emitting blanks from \(t_{1}\) to \(t_{2}\) and then emitting \(y_{U+1}\) at \(t_{2}\): \[p(y_{1},...,y_{U}(t_{1}),y_{U+1}(t_{2}))=\\ p(y_{1},...,y_{U}(t_{1}))\cdot Bl(U,t_{1},t_{2})\cdot J(t_{2},U,y_{U+1}). \tag{4}\] The probability to emit a new token \(y_{U+1}\) at time \(t_{2}\) is then obtained by summing over the different \(t_{1}\) locations: \[p(y_{1},...,y_{U},y_{U+1}(t_{2}))=\\ \sum_{t_{1}=1}^{S}p(y_{1},...,y_{U}(t_{1}),y_{U+1}(t_{2})). \tag{5}\] Finally, the probability of expanding \(y\) with the new non-blank token \(y_{U+1}\) across the segment is obtained by summing over the different \(t_{2}\) locations: \[p(y_{1},...,y_{U+1})=\sum_{t_{2}=1}^{S}p(y_{1},...,y_{U},y_{U+1}(t_{2})). \tag{6}\] For blank expansions, the token sequence does not change, and its probability is updated using blank emission probabilities for the rest of the segment: \[p(y)=\sum_{t_{1}=1}^{S}p(y_{1},...,y_{U}(t_{1}))\cdot Bl(U,t_{1},S+1). \tag{7}\] Equations 3-7 are used to compute the aggregated token emission probabilities across the segment. When starting to decode a segment, we assume the distribution of \(p(y_{1},...,y_{U}(t_{1}))\) is concentrated in the first frame of the segment (\(t_{1}=1\)). In consecutive steps, Eq. 5 from the previous step is used as the distribution over last token emission frames. The rest of the algorithm is identical to the standard algorithm, including summing scores of hypotheses that correspond to the same token sequence. Note that when \(S=1\), the sequence expansion probabilities are computed in precisely the same way as in the standard time-wise algorithm in Eq. 1; therefore, the proposed algorithm is a natural generalization of the standard algorithm. The full algorithm is given in Algorithm 1. Note that for simplicity, the algorithm assumes the encoder output was computed beforehand. In streaming applications, one may want to compute the encoder output in a segment-by-segment manner as well, and integrate that into Algorithm 1. The segment sizes used for encoder computation and decoding are not necessarily constrained to be equal. In initial experiments, we included a version of the algorithm that limits the maximum number of tokens to emit in a segment, similar to the N-step constrained beam search from [8]. However, we found that for all segment sizes, tuning this hyperparameter does not lead to any compute savings without a considerable degradation in WER, therefore we omit this functionality from the algorithm. ## 4 Experiments ### RNN-T Models and Data Three different standard RNN-T models are evaluated.
The first is trained on Librispeech data [19], where the encoder network contains 20 Emformer layers [11] and performs a total stride of 4, the prediction network contains 3 LSTM layers, and the joiner is a ReLU layer followed by a single fully connected layer projecting the representation to the vocabulary dimension. For this model, 500 subword units are used [20]. This model has a total of 78M parameters. The second and third models are trained on a large in-house dataset containing 1.5M hours of speech data. Our in-house training set combines two sources. The first consists of English video data publicly shared by Facebook users; all videos are completely de-identified. The second contains de-identified English data with no user-identifiable information in the voice assistant domain. All utterances are morphed when the speakers are de-identified. Note that the data is not morphed during training. The second model contains 28 Emformer layers in the encoder network with a total stride of 4, and 2 LSTM layers in the prediction network. The third model contains 13 Emformer layers with a total stride of 6, and a single LSTM layer in the prediction network. The joint network is identical to the one used in the Librispeech model. The total number of parameters in the second and third models are 104M and 27M respectively. We evaluate the second model on a voice assistant test set, and the third model on a voice dictation test set.
```
Inputs:
  enc: a sequence of encoder output frames
  N: beam size,  S: decoding segment size
# Hypotheses have members: tok: tokens, s: score,
# o: predictor output, d: last token frame distribution.
B = [([], 1, InitPredictor(), Null)]
t = 1
while t <= length(enc) do
  seg = enc[t:t+S]
  A = ChooseNBest(B)
  B = []
  t = t + S
  for h from 1 to length(A) do
    A(h).d = (A(h).s, 0, ..., 0)
  endfor
  while length(A) > 0 do
    pred = [A(h).o for h from 1 to length(A)]
    J(h, t, k) = Softmax(Joiner(seg, pred))
    # Below follows Eq. 3-7
    Bl(h, t_1, t_2) = ComputeBlankProduct()
    d(h, t_2, k) = ComputeNonBlankExpansionT()
    δ(h + k) = ComputeNonBlankExpansion()
    δ(h + ϕ) = ComputeBlankExpansion()
    # Add all blank expansions to B and merge if needed
    for h from 1 to length(A) do
      h_blank = (A(h).tok, δ(h + ϕ), A(h).o, Null)
      AddAndMerge(B, h_blank)
    endfor
    # Choose up to N-best non-blank expansions
    A' = A
    A = []
    threshold = ChooseNthScore(B, N)
    for (h, k) in ChooseNBestExpansions(δ(h + k)) do
      h_k = (A'(h).tok + k, δ(h + k), Null, d(h, ., k))
      if length(B) < N or δ(h + k) > threshold then
        A.Append(h_k)
      endif
    endfor
    UpdatePredictorOutput(A)
  endwhile
endwhile
return ChooseNBest(B)
```
**Algorithm 1** Token-wise beam search for RNN-T. ### Results We evaluate our proposed token-wise beam search algorithm using different segment sizes to measure accuracy and decoding speed. When using a segment size of one, our algorithm is identical to the standard algorithm, as also verified in initial experiments. Therefore, all results are reported using the same implementation, to avoid any noise during performance measurements. Results appear in Table 1. Many additional results are given in the appendix. Those contain an additional test set, additional segment and beam sizes, and performance metrics using a GPU.
The results in the appendix follow a similar trend. All experiments in Table 1 were obtained using a single core CPU. The first observation from the results is that using the token-wise beam search with a segment size of 2-5 results in consistent throughput improvement over the standard algorithm (segment size of 1). The throughput is measured as the number of frames (of the encoder output) decoded per second. The best segment size, often 3 frames, results in a 43%-71% throughput increase over the standard algorithm. The last two columns provide further insight regarding the speed gains. Calls / Frame is the average number of times the joint network is invoked per frame, which decreases consistently as the segment size increases. On the other hand, each joiner call involves a larger number of frames as the segment size increases. The Joins / Frame column measures the total joiner computation as the average number of times each frame appeared in the joiner input. Overall, the total compute increases with the segment size, but since the number of joiner calls decreases at the same time and each call incurs a fixed per-call overhead, segment sizes of 3-5 result in a good tradeoff with considerable overall speedups. The second observation is that token-wise beam search benefits from aggregating emission probabilities across a segment, as seen in the improved Oracle WER (OWER) as the segment size increases. Oracle WER measures the minimum WER achievable over the \(N\) best hypotheses returned by the search algorithm. Improvements in OWER are up to 11% relative for a segment size of 50 frames. As the entire set of \(N\) best hypotheses is often processed by downstream applications such as natural language understanding models [21, 22, 23, 24], this improvement may be useful in those situations. ## 5 Conclusion We proposed a token-wise beam search algorithm for RNN-T models that does not require any changes to the trained model, and can be applied in any offline or standard non-strict streaming decoding setting. Our algorithm aggregates emission probabilities over segments of frames, thus reducing the number of joiner invocations, which results in consistent 40%-70% speedups, and improves search accuracy as seen in up to a 10% oracle WER improvement. In future work, we plan to adjust this algorithm to support monotonic RNN-T [7] and blank thresholding [9] to further improve decoding speed. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Model / N-best & Segment Size & WER & OWER & Throughput (f/s) & Calls/Frame & Joins/Frame \\ \hline \multirow{8}{*}{Libri. Other / 2} & 1 & 8.66 & 8.15 & 192.12 & 1.41 & 1.41 \\ & 2 & 8.67 (-0.12\%) & 7.82 (4.05\%) & 220.27 (14.65\%) & 0.75 & 1.50 \\ & 3 & 8.63 (0.35\%) & 7.69 (5.64\%) & **304.62 (58.56\%)** & 0.54 & 1.62 \\ & 5 & 8.62 (0.46\%) & 7.51 (7.85\%) & 200.64 (4.43\%) & 0.38 & 1.90 \\ & 10 & 8.61 (0.58\%) & 7.41 (9.08\%) & 191.48 (-0.33\%) & 0.27 & 2.61 \\ & 20 & 8.59 (0.81\%) & **7.38 (9.45\%)** & 114.41 (-40.45\%) & 0.20 & 3.91 \\ \hline \multirow{8}{*}{Libri.
Other / 10} & 1 & 8.58 & 5.42 & 48.40 & 1.69 & 1.69 \\ & 2 & 8.57 (0.12\%) & 5.26 (2.95\%) & 67.44 (39.34\%) & 0.89 & 1.77 \\ & 3 & 8.59 (-0.12\%) & 5.21 (3.87\%) & **82.84 (71.16\%)** & 0.64 & 1.90 \\ & 5 & 8.56 (0.23\%) & 5.16 (4.80\%) & 71.86 (48.47\%) & 0.45 & 2.21 \\ & 10 & 8.56 (0.23\%) & 5.1 (5.90\%) & 50.65 (4.65\%) & 0.30 & 2.95 \\ & 20 & 8.57 (0.12\%) & **5.07 (6.46\%)** & 30.33 (-37.33\%) & 0.22 & 4.34 \\ \hline \multirow{8}{*}{Large Assistant / 2} & 1 & 8.71 & 7.48 & 244.91 & 1.28 & 1.28 \\ & 2 & 8.56 (1.72\%) & 7.3 (2.41\%) & 241.64 (-1.34\%) & 0.74 & 1.46 \\ & 3 & 8.49 (2.53\%) & 7.22 (3.48\%) & 348.79 (42.42\%) & 0.54 & 1.61 \\ & 5 & 8.41 (3.44\%) & 7.04 (5.88\%) & **350.63 (43.17\%)** & 0.35 & 1.71 \\ & 10 & 8.43 (3.21\%) & 6.98 (6.68\%) & 306.70 (25.23\%) & 0.21 & 2.02 \\ & 20 & 8.35 (4.13\%) & **6.97 (6.82\%)** & 221.90 (-9.40\%) & 0.13 & 2.54 \\ \hline \multirow{8}{*}{Small Dictation / 2} & 1 & 8.25 & 4.37 & 59.65 & 1.83 & 1.83 \\ & 2 & 8.26 (-0.12\%) & 4.35 (0.46\%) & 77.69 (30.24\%) & 1.05 & 2.09 \\ & 3 & 8.27 (-0.24\%) & 4.35 (0.46\%) & **90.58 (51.85\%)** & 0.79 & 2.33 \\ \cline{1-1} & 5 & 8.29 (-0.48\%) & 4.29 (1.83\%) & 88.86 (48.97\%) & 0.50 & 2.47 \\ \cline{1-1} & 10 & 8.26 (-0.12\%) & 4.26 (2.52\%) & 74.98 (25.70\%) & 0.30 & 2.89 \\ \cline{1-1} & 20 & 8.23 (0.24\%) & **4.24 (2.97\%)** & 48.71 (-18.34\%) & 0.19 & 3.58 \\ \hline \multirow{8}{*}{Small Dictation / 10} & 1 & 4.14 & 3.71 & 464.99 & 0.94 & 0.94 \\ \cline{1-1} & 2 & 4.16 (-0.48\%) & 3.57 (3.77\%) & 599.36 (30.29\%) & 0.53 & 1.06 \\ \cline{1-1} & 3 & 4.12 (0.48\%) & 3.5 (5.66\%) & **683.26 (46.94\%)** & 0.39 & 1.16 \\ \cline{1-1} & 5 & 4.14 (-0.00\%) & 3.49 (5.93\%) & 484.32 (4.16\%) & 0.28 & 1.38 \\ \cline{1-1} & 10 & 4.13 (0.24\%) & 3.45 (7.01\%) & 457.11 (-1.69\%) & 0.19 & 1.85 \\ \cline{1-1} & 20 & 4.12 (0.48\%) & **3.42 (7.82\%)** & 156.37 (-66.37\%) & 0.14 & 2.74 \\ \hline \multirow{8}{*}{Small Dictation / 10} & 1 & 4.07 & 2.15 & 124.53 & 1.17 & 1.17 \\ \cline{1-1} & 2 & 4.08 (-0.25\%) & 2.07 (3.72\%) & **202.67 (62.75\%)** & 0.65 & 1.30 \\ \cline{1-1} & 3 & 4.09 (-0.49\%) & 2.02 (6.05\%) & 187.53 (50.59\%) & 0.48 & 1.42 \\ \cline{1-1} & 5 & 4.09 (-0.49\%) & 2 (6.98\%) & 160.05 (28.52\%) & 0.34 & 1.68 \\ \cline{1-1} & 10 & 4.08 (-0.25\%) & 1.97 (8.37\%) & 102.65 (-17.57\%) & 0.23 & 2.22 \\ \cline{1-1} & 20 & 4.07 (-0.00\%) & **1.94 (9.77\%)** & 53.80 (-56.80\%) & 0.16 & 3.23 \\ \hline \hline \end{tabular} \end{table} Table 1: Decoding results with the standard algorithm (segment size 1) and our proposed token-wise beam search. In parentheses are the relative improvement % over the standard algorithm. Best throughput and OWER values are in boldface.
Standard Recurrent Neural Network Transducer (RNN-T) decoding algorithms for speech recognition iterate over the time axis, decoding one time step before moving on to the next time step. These algorithms generate a large number of calls to the joint network, which previous work has shown to be an important factor that reduces decoding speed. This paper proposes a decoding beam search algorithm that batches the joint network calls across a segment of time steps. With this algorithm, decoding speedups of 20%-96% are achieved across all models and settings used in the experiments. Furthermore, aggregating emission probabilities within a segment of time steps can be seen as a better approximation to finding the most likely model output, and with this algorithm the oracle
2309.09712
Testing alternative spacetimes by high-frequency quasi-periodic oscillations observed in microquasars and active galactic nuclei
In this article, we try to capture the influence of deviation from standard Kerr black hole spacetime on observed high-frequency quasi-periodic oscillations signal. We explore the dynamics of test particles in the field of rotating compact objects governed by the various modifications of the standard Kerr black hole spacetime and apply the model of epicyclic oscillations of Keplerian discs to the observed microquasars and active galactic nuclei high-frequency quasi-periodic oscillations data. We presented a generalized formalism for the fitting of the high-frequency quasi-periodic oscillations models so-called epicyclic resonance and relativistic precession models, under the assumption of stationary, axisymmetric, and asymptotically flat spacetimes. Recently, we have used the same set of stationary, axisymmetric, and asymptotically flat spacetimes, and estimated the restrictions of spacetime parameters with the help of hot-spot data of three flares observed at Sgr~A* by GRAVITY instrument \citep{Shahzadi-et-al:2022:EPJC:}. The aim of this work is not to test a particular theoretical model or to determine and constrain its parameters, but to map a set of well-astrophysically motivated deviations from classical Kerr black hole spacetime and demonstrate which ones provide the best fit for high-frequency quasi-periodic oscillations data and could be fruitful for future exploration.
Misbah Shahzadi, Martin Kološ, Rabia Saleem, Zdeněk Stuchlík
2023-09-18T12:28:13
http://arxiv.org/abs/2309.09712v1
Testing alternative spacetimes by high-frequency quasi-periodic oscillations observed in microquasars and active galactic nuclei ###### Abstract In this article, we try to capture the influence of deviation from standard Kerr black hole spacetime on observed high-frequency quasi-periodic oscillations signal. We explore the dynamics of test particles in the field of rotating compact objects governed by the various modifications of the standard Kerr black hole spacetime and apply the model of epicyclic oscillations of Keplerian discs to the observed microquasars and active galactic nuclei high-frequency quasi-periodic oscillations data. We presented a generalized formalism for the fitting of the high-frequency quasi-periodic oscillations models so-called epicyclic resonance and relativistic precession models, under the assumption of stationary, axisymmetric, and asymptotically flat spacetimes. Recently, we have used the same set of stationary, axisymmetric, and asymptotically flat spacetimes, and estimated the restrictions of spacetime parameters with the help of hot-spot data of three flares observed at Sgr A* by GRAVITY instrument [1]. The aim of this work is not to test a particular theoretical model or to determine and constrain its parameters, but to map a set of well-astrophysically motivated deviations from classical Kerr black hole spacetime and demonstrate which ones provide the best fit for high-frequency quasi-periodic oscillations data and could be fruitful for future exploration. ###### Contents * I Introduction * II Stationary and axisymmetric Spacetimes * II.1 Classical (Kerr) BHs in GR * II.2 Charged BHs in GR * II.3 Bumpy spacetimes in GR * II.4 Rotating regular BHs in GR * II.5 Rotating BHs in alternative theories of gravity * II.6 Rotating BHs modified by DM or quintessence field * III Quasi-periodic oscillations * IV Orbital models of HF QPOs * IV.1 The ER model * IV.2 The RP model * IV.3 Resonant radii and the fitting technique * V Conclusions * Data Availability ## I Introduction General Relativity (GR), as the accepted theory of gravity, agrees with all observations at the Solar System scale and beyond [2]. Standard GR tests in weak field limit: the bending of light [3], the gravitational redshift, and the correction to the Mercury perihelion precession [4] have been recently supplemented by strong gravity field tests such as direct detection of gravitation waves [5] and first black hole (BH) image [6]. One of the important GR predictions is the existence of compact collapsed astrophysical objects - BHs - where the strong gravity regime is manifested. However, the non-linear behaviour and strong-field structure of GR still remain elusive and difficult to test [7]. Observational data based on the dynamics of the whole Universe affirm that the major part of the Universe is invisible to direct detection. This invisible (dark) component can be divided into dark matter (DM) and dark energy (DE) [8]. Both DM and DE, which can be well represented by the cosmological constant model for example, have large relevance in astrophysical processes, see [9; 10; 11; 12; 13; 14]. Modern cosmological observations reveal that our Universe is composed of 68.3% DE, 26.8% DM, and 4.9% ordinary matter [15; 16]. The DM surrounding the galaxies and clusters does not interact with the baryonic matter but can be observed by its gravitational effects on visible matter. 
Babcock [17] examined the rotational speed of luminous objects in the Andromeda galaxy and found that the rotational speed continuously increases as one moves away from the center of these objects. This demonstrates that the outer region of that luminous part is dominated by matter which does not shine. Zwicky [18] found a large amount of unseen (non-luminous) matter in the Universe rather than the seen (luminous) and detected the non-luminous mass lying outside the luminous parts of the galaxies. Apart from these observations, there has been no experimental success in detecting DM yet. In addition to the need for DE and DM, a problem that appears in GR in the study of BHs is the presence of singularities, that is, points or sets of points where geodesics are interrupted and physical quantities diverge [19; 20]. It is believed that the singularity problem occurs because the theory is classical and that it would be resolved in a quantum theory of gravity. This, together with some long-standing problems in GR (like difficulties in explaining the accelerated Universe and galaxy rotation curves, etc.), has motivated the study of viable alternative theories of gravity. Such theories, also known as modified theories of gravity, aim to reproduce GR in the weak-field regime, but they can differ substantially from it in the strong curvature regime, where non-linear effects become dominant. These modified theories of gravity are developed by modifying the matter distribution or the gravitational part of the Einstein-Hilbert action. Modified theories of gravity continue to attract widespread attention among cosmologists and astrophysicists alike. Although Einstein's theory of gravity has provided explanations for many physical observations, it fails to account for various other physical phenomena [21; 22; 23]. This has prompted researchers to modify classical GR or to adopt alternative theories of gravity. The past decade has witnessed a huge influx of both astrophysical and cosmological models born out of modified gravity theories. The astrophysical BH candidates can be classified into three major classes, depending on the mass of the BH: stellar-mass BHs having mass \(M{\sim}20~{}M_{\odot}-100~{}M_{\odot}\) located in X-ray binary systems; supermassive BHs with \(M{\sim}10^{5}~{}M_{\odot}-10^{10}~{}M_{\odot}\) situated in galactic nuclei; and intermediate-mass BHs having mass \(M{\sim}10^{2}~{}M_{\odot}-10^{4}~{}M_{\odot}\)[24; 25]. The class of intermediate-mass BHs is still debatable because their observations are indirect and dynamical measurements of their masses are still lacking. Microquasars are binary systems composed of a BH and a companion (donor) star. Matter flowing from the companion star onto the BH forms an accretion disk and relativistic jets, the bipolar outflows of matter along the rotation axis of the BH's accretion disk. Due to friction, the matter in the accretion disk becomes hot and emits electromagnetic radiation, including X-rays in the vicinity of the BH horizon. The quasi-periodic oscillations (QPOs) in X-ray flux light curves have long been observed in stellar-mass BH binaries and are considered one of the most efficient tests of strong gravity models. These variations appear very close to the BH, and present frequencies that scale inversely with the mass of the BH. The current technical possibilities to measure the frequencies of QPOs with high precision allow us to obtain useful knowledge about the central object and its background.
According to the observed frequencies of QPOs, which cover the range from a few mHz up to 0.5kHz, different types of QPOs were distinguished. Mainly, these are the high-frequency (HF) and low-frequency (LF) QPOs with frequencies up to 500 Hz and up to 30 Hz, respectively. The oscillations of HF QPOs in BH microquasars are usually stable and detected with the twin peaks which have a frequency ratio close to \(3:2\)[26]. However, this phenomenon is not universal, the HF QPOs have been observed in only 11 out of 7000 observations of 22 stellar mass BHs [27]. The oscillations usually occur only in specific states of hardness and luminosity, moreover, in X-ray binaries, HF QPOs occur in an "anomalous" high-soft state or steep power law state, both corresponding to a luminous state with a soft X-ray spectrum. One can obtain helpful information about the bounds of parameters of the system, using the methods of spectroscopy (frequency distribution of photons) and timing (photon number time dependence) for particular microquasars [26]. In this connection, the binary systems having BHs, being compared to neutron star systems, seem to be promising due to the reason that any astrophysical BH is assumed to be a Kerr BH (corresponding to the unique solution of GR in 4D for uncharged BHs which does not violate the weak cosmic censorship conjecture and no-hair theorem) that is specified by only two parameters: the spin parameter and BH mass. After the first observation of QPOs, there were many efforts to fit the observed QPOs, and various models have been presented, such as the hot-spot models, warped disk models, disko-seismic models, epicyclic resonance (ER) models, relativistic precession (RP) models and many versions of resonance models [28]. The most extended is thus the so-called geodesic oscillatory model where the observed frequencies are associated with the frequencies of the geodesic orbital and epicyclic motion [29]. It is interesting that the characteristic frequencies of HF QPOs are close to the values of the frequencies of the test particles, geodesic epicyclic oscillations in the regions near the innermost stable circular orbit (ISCO) which makes it reasonable to construct the model involving the frequencies of oscillations associated with the orbital motion around Kerr BHs [29]. However, until now, the exact physical mechanism of the generation of HF QPOs is unknown, since none of the models can fit the observational data from different sources [30]. Even more serious situation has been exposed in the case of HF QPOs related to accretion disks orbiting supermassive BHs in active galactic nuclei (AGNs) [31; 32]. One possibility to overcome this problem is associated with the electromagnetic interactions of slightly charged matter moving around a magnetized BH [33; 34; 35]. Here, we focus on different possibilities associated with the internal rotation of accreting matter. 
In this study, we consider the classical Kerr BHs, rotating charged BHs (Kerr-Newman (KN), braneworld, dyonic, Kerr-Taub-NUT, KN-Taub-NUT), many different bumpy spacetimes in GR (Johannsen-Psaltis, Hartle Thorne, Kerr-Q, Quasi-Kerr, accelerating-rotating), rotating regular BHs in GR (Bardeen, Ayon-Beato-Garica (ABG), Hayward), various metrics in several theories of gravity (Kerr-Sen BHs in heterotic string theory, Born-Infeld BHs in Einstein-Born-Infeld theory, Kalb-Ramond BHs in heterotic string theory, Gauss-Bonnet BHs in Einstein-Gauss-Bonnet theory, Konoplya-Rezzolla-Zhidenko BHs in an unknown gravity, Kerr-MOG BHs in scalar-tensor-vector theory, BHs with Weyl corrections, BHs in Rastall gravity, rotating regular BHs in conformal massive gravity, rotating regular BHs in Einstein-Yang-Mills theory, hairy BHs), as well as rotating BHs modified by DM or quintessence (dirty BHs, BHs in perfect fluid DM (PFDM), BHs in cold DM halo, BHs in scalar field DM halo, Hayward BHs in PFDM, BHs in quintessence), and explore the orbital and epicyclic motion of neutral test particles in the background of well-motivated considered rotating BHs. We look especially for the existence and properties of the harmonic or QPOs of neutral test particles. The quasi-harmonic oscillations around a stable equilibrium location and the frequencies of these oscillations are then compared with the frequencies of the HF QPOs observed in microquasars GRS 1915+105, GRO 1655-40, XTE 1550-564, and XTE J1650-500 as well as AGNs TON S 180, ESO 113-G010, 1H0419-577, RXJ 0437.4-4711, 1H0707-495, RE J1034+396, Mrk 766, ASASSN-14li, MCG-06-30-15, XMMU 134736.6+173403, Sw J164449.3+573451, MS 2254.9-3712 [32; 36; 26]. Throughout the paper, we use the space-like signature \((-,+,+,+)\) and the system of units in which \(c=1\) and \(G=1\). However, for the expressions with an astrophysical application and estimates, we use the units with the gravitational constant and the speed of light. Greek indices are taken to run from 0 to 3; Latin indices are related to the space components of the corresponding equations. ## II Stationary and axisymmetric spacetimes In four-dimensional GR, the no-hair theorem [37; 38] states that the uncharged rotating BHs are uniquely characterized by only two parameters, the mass \(M\), and spin \(a\) of the BH, and are governed by the Kerr metric. This metric is a unique axisymmetric, stationary, asymptotically flat, and vacuum solution of the Einstein field equations which possesses an event horizon, but there are no closed timelike curves in an exterior domain. Due to the weak cosmic censorship conjecture [39], the central singularity is always behind the event horizon. However, the hypothesis that the astrophysical BH candidates are characterized by the Kerr spacetimes still lacks direct evidence, furthermore, the GR has been tested only in the regime of weak gravity [40]. For strong gravitational fields, the GR could be broken down and astrophysical BHs might not be the Kerr BHs as predicted by the no-hair theorem [41]. Several parametric deviations from the Kerr metric have been proposed to investigate the observational signatures in both the electromagnetic and gravitational-wave spectra that differ from the expected Kerr signals. 
The line element of an arbitrary, stationary, axisymmetric, and asymptotically flat spacetime with reflection symmetry reads \[\mathrm{d}s^{2}=g_{tt}\mathrm{d}t^{2}+g_{rr}\mathrm{d}r^{2}+g_{\theta\theta} \mathrm{d}\theta^{2}+g_{\phi\phi}\mathrm{d}\phi^{2}+2g_{t\phi}\mathrm{d}t \mathrm{d}\phi, \tag{1}\] where metric components \(g_{\alpha\beta}\) are functions of \(r\), \(\theta\), and some additional parameters. In the following, we consider several stationary, axisymmetric, and asymptotically flat spacetimes both in GR and modified theories of gravity. The exact forms of the metrics can be found in the given references below or in review [1]. ### Classical (Kerr) BHs in GR The nonzero components of the metric tensor \(g_{\mu\nu}\), describing the geometry of the well-known classical neutral rotating Kerr BH can be written in the standard Boyer-Lindquist coordinates in the form [42; 43] \[g_{tt} = -\left(\frac{\Delta-a^{2}\sin^{2}\theta}{\Sigma}\right),\quad g_ {rr}=\frac{\Sigma}{\Delta},\quad g_{\theta\theta}=\Sigma,\] \[g_{\phi\phi} = \frac{\sin^{2}\theta}{\Sigma}\left[(r^{2}+a^{2})^{2}-\Delta a^{2} \sin^{2}\theta\right],\] \[g_{t\phi} = \frac{a\sin^{2}\theta}{\Sigma}\left[\Delta-(r^{2}+a^{2})\right], \tag{2}\] with \[\Delta = r^{2}-2Mr+a^{2}, \tag{3}\] \[\Sigma = r^{2}+a^{2}\cos^{2}\theta, \tag{4}\] where, \(M\) and \(a\) are the mass and rotation parameters of the BH, respectively. The spin parameter \(a\) is bounded by \(a\leq M\). The horizons for Kerr BH can be found by solving the condition \(\Delta=0\). For details of its properties, see [44]. ### Charged BHs in GR According to the no-hair theorem, BH solutions of the Einstein-Maxwell equations of GR (combining the field equations of gravity and electromagnetism) are fully characterized by their mass \(M\), rotation parameter \(a\), and electric charge \(Q\). There are many kinds of charges such as electric, magnetic, tidal, dyonic, etc. In the following, we consider BH solutions with different charges. * KN BHs [45; 46; 47] * Braneworld BHs [48; 49], tested for QPOs in [50; * Dyonic charged BHs [52; 53] * Kerr-Taub-Nut BHs [54; 55], tested for QPOs in [56] * KN-Taub-Nut BHs [54; 55] ### Bumpy spacetimes in GR It is possible that the spacetime around massive compact objects which are assumed to be BH is not described by the Kerr metric but by a metric that can be considered as a perturbation of the Kerr metric, and is usually known as bumpy (non-Kerr) spacetime [57]. These spacetimes have multipoles and possess some features that deviate slightly from the Kerr spacetime, reducing to the classical Kerr BH solutions when the deviation is zero. Here, we consider some bumpy spacetimes in GR. * Johannsen-Psaltis spacetime [41], tested for QPOs in [58] * Hartle-Thorne spacetime [59], tested for QPOs in [60; 61] * Kerr-Q spacetime [62], tested for QPOs in [61] * Quasi-Kerr BHs [63] * Accelerating and rotating BHs [64] ### Rotating regular BHs in GR The regular BHs are non-singular exact solutions of the Einstein field equations minimally coupled to non-linear electrodynamics, satisfy the weak energy condition, and yield alteration to the classical BHs [65]. The regular BHs are constructed to be regular everywhere, i.e., the Ricci scalar and the components of the Riemann tensor are finite \(\forall\ r\geq 0\). However, the rotating regular BH solutions have been obtained by the non-complexification procedure of the Newman-Janis algorithm and they are not exact solutions (exact up to \(a^{2}\), where \(a\) is the rotation parameter) [66]. 
Recently, the precise constraints on the physical charges of regular and various other BHs have been proposed with the help of Event Horizon Telescope [67]. There are three well-known rotating regular BHs in GR, namely * Regular Bardeen BHs [68; 69], tested for QPOs in [70] * Regular ABG BHs [66; 71] * Regular Hayward BHs [72] ### Rotating BHs in alternative theories of gravity The late-time acceleration of the Universe is surely the most challenging problem in cosmology. Many cosmological observations indicate that the accelerated expansion of the Universe is due to the existence of a mysterious form of energy known as DE. Modern astrophysical and cosmological models are also faced with two severe theoretical problems, that can be summarized as the DM (non or weakly interacting), and the DE problem. The two observations, namely, the mass discrepancy in galactic clusters, and the behaviour of the galactic rotation curves, suggest the existence of a DM at galactic and extragalactic scales. Recently, several modified theories of gravity have been proposed to address these two intriguing and exciting problems facing modern physics. These modified theories of gravity are constructed by modifying the gravitational or matter part of the Einstein-Hilbert action. In addition, both non-rotating and rotating BH solutions have been derived in these modified theories of gravity [73; 74; 75]. In the following, we consider the several rotating BH solutions in many different modified theories of gravities. * Kerr-Sen BHs [73] * Einstein-Born-Infeld BHs [76] * Kalb-Ramond BHs [77; 78] * Einstein-Gauss-Bonnet BHs [79; 80; 81] * Konoplya-Rezzolla-Zhidenko BHs [82; 83] * Kerr-MOG BHs [74; 84], tested for QPOs in [85] * BHs with Weyl corrections [86] * BHs in Rastall gravity [87] * Regular BHs in conformal massive gravity [88] * Regular BHs in Einstein-Yang-Mills theory [89] * Hairy BHs [90] ### Rotating BHs modified by DM or quintessence field Recently, with the help of Event Horizon Telescope's observations of BH shadows, it has been proposed that the existence of BHs in the Universe is almost universally accepted [91]. Inspired by this, many physicists have begun to study the interaction between DM and BHs [92; 93; 94; 95; 96; 97; 98]. Due to the existence of the supermassive BHs at the centers of galaxies, the strong gravitational potential of the BH concentrates a large number of DM particles near the BH horizon [99]. The DM density increases by orders of magnitude due to the BH's gravitational field. Therefore, if DM particles can annihilate into gamma-ray radiation, the intensity of gamma-ray radiation near the BH will increase greatly, which provides a good opportunity for us to detect the DM annihilation signal. A series of DM models have been proposed in the literature, some of which we consider here. * BHs in DM (dirty BHs) [100], tested (non-rotating case) for QPOs in [101] * BHs in PFDM [81; 102] * BHs in cold DM halo [103] * BHs in scalar field DM halo [103] * Hayward BHs in PFDM [104] * BHs in quintessence [105; 106; 107; 108] Note that the special model of BH spacetime modified roughly by any kind of DM was tested by HF QPOs recently in [109; 101]. ## III Quasi-periodic oscillations Within this article, we will restrict our attention to test particle dynamics which can be approximated as geodesics motion. 
The Hamiltonian for the neutral particle motion can be written as \[H=\frac{1}{2}g^{\alpha\beta}p_{\alpha}p_{\beta}+\frac{1}{2}m^{2}, \tag{5}\] where \(p^{\mu}=mu^{\mu}\) is the four-momentum of the particle with mass \(m\), \(u^{\alpha}=\frac{\mathrm{d}x^{\alpha}}{\mathrm{d}\tau}\) denotes the four-velocity, and \(\tau\) is the proper time. The Hamilton equations of motion can be written as \[\frac{\mathrm{d}x^{\alpha}}{\mathrm{d}\zeta}\equiv mu^{\alpha}=\frac{\partial H}{\partial p_{\alpha}},\quad\frac{\mathrm{d}p_{\alpha}}{\mathrm{d}\zeta}=-\frac{\partial H}{\partial x^{\alpha}}, \tag{6}\] where \(\zeta=\tau/m\) is the affine parameter. Using the Hamiltonian formalism, one can find the equations of motion of the particles. Due to the symmetries of the BH geometry, there exist two constants of motion, i.e., the specific energy \(E\) and the specific angular momentum \(L\), which remain conserved throughout the motion and can be expressed as \[u_{t} = g_{tt}u^{t}+g_{t\phi}u^{\phi}=-E, \tag{7}\] \[u_{\phi} = g_{\phi\phi}u^{\phi}+g_{t\phi}u^{t}=L. \tag{8}\] Using the above equations, we can write \[\frac{\mathrm{d}t}{\mathrm{d}\tau} = -\frac{Eg_{\phi\phi}+Lg_{t\phi}}{g_{tt}g_{\phi\phi}-g_{t\phi}^{2}}, \tag{9}\] \[\frac{\mathrm{d}\phi}{\mathrm{d}\tau} = \frac{Lg_{tt}+Eg_{t\phi}}{g_{tt}g_{\phi\phi}-g_{t\phi}^{2}}. \tag{10}\] From the conservation of the rest mass, \(g_{\alpha\beta}u^{\alpha}u^{\beta}=-1\), we have \[\dot{r}^{2}g_{rr}+\dot{\theta}^{2}g_{\theta\theta}=V_{\rm eff}(r,\theta,E,L), \tag{11}\] where the effective potential \(V_{\rm eff}\) takes the form \[V_{\rm eff}(r,\theta,E,L)=\frac{L^{2}g_{tt}+E^{2}g_{\phi\phi}+2ELg_{t\phi}}{-g_{tt}g_{\phi\phi}+g_{t\phi}^{2}}-1. \tag{12}\] The effective potential (12) is very important since it enables us to demonstrate the general properties of the test particle dynamics, avoiding the necessity to solve the equations of motion. Due to the reflection symmetry and axisymmetry of the spacetime, for the existence of equatorial circular orbits we have \[\frac{\mathrm{d}r}{\mathrm{d}\tau}=\frac{\mathrm{d}\theta}{\mathrm{d}\tau}=\frac{\mathrm{d}^{2}r}{\mathrm{d}\tau^{2}}=0. \tag{13}\] Consequently, the equation describing the circular motion of the particle reduces to the following form \[g_{tt,r}\dot{t}^{2}+2g_{t\phi,r}\dot{t}\dot{\phi}+g_{\phi\phi,r}\dot{\phi}^{2}=0, \tag{14}\] and the orbital frequency \(\Omega_{\phi}\) of test particles can be written in the form \[\Omega_{\phi}=\frac{\mathrm{d}\phi}{\mathrm{d}t}=\frac{-g_{t\phi,r}\pm\sqrt{(g_{t\phi,r})^{2}-g_{tt,r}\ g_{\phi\phi,r}}}{g_{\phi\phi,r}}, \tag{15}\] where the upper and lower signs refer to the prograde and retrograde orbits, respectively. It is clear from Eq. (15) that the orbital frequency \(\Omega_{\phi}\) is independent of the metric coefficients \(g_{rr}\) and \(g_{\theta\theta}\), while the partial derivatives of \(g_{tt}\), \(g_{\phi\phi}\), and \(g_{t\phi}\) with respect to the radial coordinate \(r\) are involved. The circular equatorial orbits are governed by the conditions [33; 34] \[V_{\rm eff}(r,E,L)=0,\quad\partial_{r}V_{\rm eff}(r,E,L)=0. \tag{16}\] The energy \(E\) and angular momentum \(L\) of circular orbits can be found by solving the system of Eqs. (16), and the orbital frequency \(\Omega_{\phi}\) in terms of the constants of motion can be written as \[\Omega_{\phi}=\frac{u^{\phi}}{u^{t}}=-\frac{g_{tt}L+g_{t\phi}E}{g_{t\phi}L+g_{\phi\phi}E}. \tag{17}\] Combining Eqs.
(15) and (17), along with the condition \(V_{\rm eff}(r,E,L)=0\), one can find the energy and angular momentum in terms of the orbital frequency \(\Omega_{\phi}\) as \[E = \frac{-g_{tt}-g_{t\phi}\Omega_{\phi}}{\sqrt{-g_{tt}-2g_{t\phi}\Omega_{\phi}-g_{\phi\phi}\Omega_{\phi}^{2}}}, \tag{18}\] \[L = \pm\frac{g_{t\phi}+g_{\phi\phi}\Omega_{\phi}}{\sqrt{-g_{tt}-2g_{t\phi}\Omega_{\phi}-g_{\phi\phi}\Omega_{\phi}^{2}}}, \tag{19}\] where the upper and lower signs correspond to the prograde and retrograde orbits, respectively. The innermost stable equatorial circular orbits, the so-called ISCOs, are governed by Eq. (16) along with the condition \(\partial_{rr}V_{\rm eff}(r,E,L)=0\). The position of the ISCO is one of the parameters that is very sensitive to the value of the BH spin. The radial and latitudinal epicyclic frequencies can be easily calculated by considering small perturbations around circular orbits along the radial and the latitudinal directions, respectively. If \(\delta_{r}\) and \(\delta_{\theta}\) are the small displacements around the mean orbit, i.e., \(r=r_{0}+\delta_{r}\) and \(\theta=\theta_{0}+\delta_{\theta}\), they lead to the following differential equations \[\frac{\mathrm{d}^{2}\delta_{r}}{\mathrm{d}t^{2}}+\Omega_{r}^{2}\delta_{r}=0, \tag{20}\] \[\frac{\mathrm{d}^{2}\delta_{\theta}}{\mathrm{d}t^{2}}+\Omega_{\theta}^{2}\delta_{\theta}=0, \tag{21}\] where the radial \(\Omega_{r}\) and the latitudinal \(\Omega_{\theta}\) frequencies take the form \[\Omega_{r}^{2} = \frac{-1}{2g_{rr}\,\dot{t}^{2}}\frac{\partial^{2}V_{\rm eff}}{\partial r^{2}}, \tag{22}\] \[\Omega_{\theta}^{2} = \frac{-1}{2g_{\theta\theta}\,\dot{t}^{2}}\frac{\partial^{2}V_{\rm eff}}{\partial\theta^{2}}. \tag{23}\] The expressions for the fundamental frequencies (15), (22), and (23) are given in dimensionless form. In physical units, one needs to multiply the corresponding formulae by the factor \(c^{3}/GM\). Then, the radial, latitudinal, and orbital frequencies in Hz, measured by distant observers, are given by \[\nu_{\alpha}=\frac{1}{2\pi}\frac{c^{3}}{GM}\,\Omega_{\alpha}\ [\mathrm{Hz}], \tag{24}\] where \(\alpha\in\{r,\theta,\phi\}\). The analysis of the characteristics of these frequencies and their relations is given in the following section. ## IV Orbital models of HF QPOs The QPOs in X-ray flux have long been detected in stellar-mass BH binaries and are regarded as an efficient test of models of strong gravity. The oscillations of neutral particles around circular orbits, explored in the previous section, suggest an interesting astrophysical application related to the HF QPOs detected in Galactic low-mass X-ray binaries containing BHs [110; 111] or neutron stars [112]. There are two AGNs, i.e., 2XMM J123+1106 and KIC 9650712, in which LF QPO candidates have been declared. The LF QPO identification was based on mass scaling of the frequency assuming a linear relation between the BH mass and the LF QPO period, yielding masses consistent with other independent estimates, as well as low coherence and high fractional variability consistent with LF QPOs. Since LF QPOs are not likely to be caused by the same process as HF QPOs, we do not include them in our analysis. Some HF QPOs have been identified in sources where there is no spin or mass determination. The first AGN QPO observation, in RE J1034+396, has a high coherence and a measured BH mass consistent with the known mass-frequency scaling relation for HF QPOs in XRBs.
As in several of the other cases, assuming the detected QPO frequency was an LF QPO results in mass upper limits that are inconsistent with the BH mass measurements from independent sources and are often debatably low for an AGN (\(\sim 10^{5}M_{\odot}\)). We have selected four microquasars, i.e., GRO 1655-40, XTE 1550-564, XTE 1650-500, and GRS 1915+105, and many AGNs presented in Fig. 1 and Tab. 1. The masses \(M\) and spins \(a\) of the central objects for all four microquasars have been estimated, but there are some AGNs for which spin estimates cannot be found in the literature, while the mass estimates exist with large errors, see the shaded regions in the middle plot of Fig. 1. There also exist some AGNs for which only a lower limit of the spin is known. Figure 1: The position of microquasars, AGNs, and Sagittarius (Sgr) A* depending on the product of mass, frequencies, and spin parameter \(a/M\). The shaded regions represent the objects with mass estimates, often with large errors, but no spin estimates in the literature. The colored blocks specify those objects for which the spin is estimated, while the black regions indicate those objects for which only a lower limit of spin is known. Our Galactic centre source, Sgr A*, has recently attracted large attention [113; 114]. In our previous work, we discussed the hot spot dynamics as a model for the observed flares [1], testing alternative theories of gravity. QPOs for the Sgr A* source have already been reported in [115; 116], but there is a growing concern that such data may be only red noise reported as a signal. The ratio of light-curve duration to QPO period is much larger for microquasars than for AGNs, hence noise contamination is a smaller problem for microquasars. For AGNs or the Sgr A* source, the data must be examined very carefully. It is also interesting that the reported HF QPOs for Sgr A* are at 0.8 mHz (1.4 mHz), see Tab. 1. The observed flares around Sgr A* have periods of 30-70 min, corresponding to a flare frequency of 0.2-0.5 mHz, which is not that far from the QPO frequency of 0.8 mHz (1.4 mHz). Hence, Sgr A* flares and QPOs could be related, although they are observed in different energy bands (IR vs. X-ray) [117; 118]. The twin HF QPO models involving the orbital motion of matter around a BH can be classified into four classes: the hot spot models (the RP model and its variations [29; 119], the tidal precession model [120]), resonance models [116; 121; 122] and diskoseismic models [123; 124]. These models are applied to match the twin HF QPOs for the microquasar GRO J1655-40 in [125]. Figure 2: First row: Radial profiles of lower \(\nu_{\rm L}(r)\) and upper \(\nu_{\rm U}(r)\) frequencies for ER (left) and RP (right) HF QPOs models in the background of classical Kerr BH. The BH spin \(a=0.5\) has been used. The vertical lines show the position \(\nu_{\rm U}:\nu_{\rm L}=3:2\) of resonant radii \(r_{3:2}\). Second row: Fitting of microquasars and AGNs sources with HF QPOs ER (left) and RP (right) models. The BH spin \(a/M\) is given on the horizontal axis, while on the vertical axis, we give BH mass \(M\) multiplied by the observed upper HF QPOs frequency \(\nu_{\rm U}\). Gray boxes show the values of BH spin \(a\) and mass \(M\), estimated from the observations, see Tab. 1. Solid curves represent the upper test particle frequency at \(r_{3:2}\) resonant radii \(f_{\rm U}=\nu_{\rm r}(r_{3:2},a)\) as a function of BH spin \(a\) for co-rotating particles, while dashed curves are plotted for contra-rotating particles.
Figure 3: Radial profiles of lower \(\nu_{\rm L}(r)\) and upper \(\nu_{\rm U}(r)\) frequencies for ER HF QPOs model in the background of many different stationary, axisymmetric and asymptotically flat spacetimes. We compare the Kerr BH limit with the well-motivated deviations from the classical Kerr BH solution. The frequencies are plotted for classical Kerr BH as gray curves, while black curves are for non-Kerr BHs. The BH spin is taken \(a=0.5\) for all spacetimes. The position \(\nu_{\rm U}:\nu_{\rm L}=3:2\) of resonant radii \(r_{3:2}\) is also plotted. Figure 4: Radial profiles of lower \(\nu_{\rm L}(r)\) and upper \(\nu_{\rm U}(r)\) frequencies for RP HF QPOs model in the background of many different stationary, axisymmetric and asymptotically flat spacetimes. We compare the Kerr BH limit with the well-motivated deviations from the classical Kerr BH solution. The frequencies are plotted for classical Kerr BH as gray curves, while black curves are for non-Kerr BHs. The BH spin is taken \(a=0.5\) for all spacetimes. The position \(\nu_{\rm U}:\nu_{\rm L}=3:2\) of resonant radii \(r_{3:2}\) has also been shown by vertical lines. Figure 5: Fitting of observational data of QPOs observed in the well-known microquasars as well as AGNs in the framework of the ER model of geodesic oscillations of Keplerian disks modified for the epicyclic oscillations of neutral test particles orbiting many different kinds of rotating BHs. The BH spin \(a/M\) is given on the horizontal axis, while on the vertical axis, we give BH mass \(M\) divided by the observed upper HF QPOs frequency \(\nu_{\rm U}\). Gray boxes show the values of BH spin \(a\) and mass \(M\), estimated from the observations, see Tab. 1. Solid curves represent the upper test particle frequency at \(r_{3:2}\) resonant radii \(f_{\rm U}=\nu_{\rm r}(r_{3:2},a)\) as a function of BH spin \(a\) for co-rotating particles, while dashed curves are plotted for contra-rotating particles. Gray curves depict the Kerr limit. Figure 6: Fitting of observational data of QPOs observed in the well-known microquasars as well as AGNs in the framework of the RP model of geodesic oscillations of Keplerian disks modified for the epicyclic oscillations of neutral test particles orbiting many different kinds of rotating BHs. The BH spin \(a/M\) is given on the horizontal axis, while on the vertical axis, we give BH mass \(M\) divided by the observed upper HF QPOs frequency \(\nu_{\rm U}\). Gray boxes show the values of BH spin \(a\) and mass \(M\), estimated from the observations, see Tab. 1. Solid curves represent the upper test particle frequency at \(r_{3:2}\) resonant radii \(f_{\rm U}=\nu_{\rm r}(r_{3:2},a)\) as a function of BH spin \(a\) for co-rotating particles, while dashed curves are plotted for contra-rotating particles. Gray curves depict the Kerr limit. The models can also be applied for intermediate massive BHs [126]. In this article, we will consider two well-known HF QPO models, i.e., the ER and RP models. One way of demonstrating this QPO phenomenon in the microquasars or AGNs is the formation of the oscillatory test particle motion around magnetized Kerr BHs [33; 127; 34] or spinning test particles around Kerr BH [128]. Here, we explore another modification of strong gravity, arising from modifications of the classical Kerr BH geometry.
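As a concrete point of reference for the two models discussed below, the following minimal Python sketch evaluates the dimensionless frequencies of Eqs. (15) and (22)-(23) and converts them to Hz via Eq. (24), for the Kerr limit only (the gray reference curves in Figs. 3 and 4). The closed-form Kerr expressions used here are standard; the function name and the example parameters are illustrative assumptions, not code taken from this work.

```python
import numpy as np

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m s^-1
M_SUN = 1.989e30     # kg

def kerr_frequencies_hz(r, a, M_solar):
    """Orbital and epicyclic frequencies (Hz) of a prograde circular geodesic
    at Boyer-Lindquist radius r (in units of GM/c^2) around a Kerr BH with
    dimensionless spin a and mass M_solar: the Kerr limit of Eqs. (15), (22)-(24)."""
    omega_phi = 1.0 / (r**1.5 + a)                                    # dphi/dt, dimensionless
    omega_r2  = omega_phi**2 * (1 - 6/r + 8*a*r**-1.5 - 3*a**2/r**2)  # radial epicyclic
    omega_th2 = omega_phi**2 * (1 - 4*a*r**-1.5 + 3*a**2/r**2)        # latitudinal epicyclic
    scale = c**3 / (G * M_solar * M_SUN) / (2*np.pi)                  # Eq. (24)
    nu_phi = scale * omega_phi
    nu_r   = scale * np.sqrt(max(omega_r2, 0.0))                      # vanishes inside the ISCO
    nu_th  = scale * np.sqrt(omega_th2)
    return nu_r, nu_th, nu_phi

# Example: a 5.9 M_sun BH (GRO J1655-40-like mass) with a = 0.7 at r = 6 GM/c^2
print(kerr_frequencies_hz(r=6.0, a=0.7, M_solar=5.9))
```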
### The ER model The ER model is one of the popular variants of the resonance models characterized by the epicyclic parametric resonance model that supposes a \(3:2\) non-linear resonance between the axisymmetric radial and latitudinal epicyclic modes of accretion disc oscillations [116; 129]. All the estimates obtained so far that were associated with the ER model assume a geodesic approximation of the accreted fluid motion. In this approximation, the two observable resonant frequencies are illustrated by the radial \(\nu_{r}\) and latitudinal \(\nu_{\theta}\) epicyclic frequencies of test particle motion and given by \[\nu_{\rm L}=\nu_{r},\quad\nu_{\rm U}=\nu_{\theta}, \tag{25}\] \begin{table} \begin{tabular}{c c c c c} \hline Name & BH Spin & \(M_{\rm BH}\) [\(M_{\odot}\)] & \(f_{\rm QPO}\) [Hz] & Object Type \\ \hline XTE J1550-564 & \(0.75<a<0.77^{r}\) & \(9.1^{+0.6}_{-0.6}\) & 184 & microquasar \\ & \(0.29<a<0.62^{r,c}\) & 184 & & \\ \hline XTE J1650-500 & \(0.78<a<0.8^{r}\) & \(5.0^{+2.0}_{-2.0}\) & 250 & microquasar \\ \hline GROJ1655-40 & \(0.65<a<0.75^{c}\) & \(5.9^{+0.8}_{-0.8}\) & 300 & microquasar \\ & \(0.9<a<0.998^{r}\) & 300 & & \\ & \(0.97<a<0.99^{r}\) & 300 & & \\ \hline GRS1915+105 & \(0.97<a<0.99^{r,c}\) & \(9.5-14.4\) & 41 & microquasar \\ & & 67 & & \\ \hline Sgr A* & 0.4 & \(4.3\times 10^{6}\) & \(0.886\times 10^{-3}\) & Galactic center \\ \hline & & \(\log M_{\rm BH}\) & QPO Band & \\ \hline TON S 180 & \(<0.4\) & \(6.85^{+0.5}_{-0.5}\) & \(5.56\times 10^{-6}\) & EUV & NLS1 \\ ESO 113-G010 & 0.998 & \(6.85^{+0.15}_{-0.24}\) & \(1.24\times 10^{-4}\) & X & NLS1 \\ ESO 113-G010 & 0.998 & \(6.85^{+0.15}_{-0.24}\) & \(6.79\times 10^{-5}\) & X & NLS1 \\ 1H0419-577 & \(>0.98\) & \(8.11^{+0.50}_{-0.50}\) & \(2.0\times 10^{-6}\) & EUV & Sy1 \\ RXJ 0437.4-4711 & – & \(7.77^{+0.5}_{-0.5}\) & \(1.27\times 10^{-5}\) & EUV & Sy1 \\ 1H0707-495 & \(>0.976\) & \(6.36^{+0.24}_{-0.06}\) & \(2.6\times 10^{-4}\) & X & NLS1 \\ RE J1034+396 & 0.998 & \(6.0^{+1.0}_{-3.49}\) & \(2.7\times 10^{-4}\) & X & NLS1 \\ Mrk 766 & \(>0.92\) & \(6.82^{+0.05}_{-0.06}\) & \(1.55\times 10^{-4}\) & X & NLS1 \\ ASASSN-14li & \(>0.7\) & \(6.23^{+0.35}_{-0.35}\) & \(7.7\times 10^{-3}\) & X & TDE \\ MCG-06-30-15 & \(>0.917\) & \(6.20^{+0.09}_{-0.12}\) & \(2.73\times 10^{-4}\) & X & NLS1 \\ XMMU J134736.6+173403 & – & \(6.99^{+0.46}_{-0.20}\) & \(1.16\times 10^{-5}\) & X & \\ Sw J164449.3+573451 & – & \(7.0^{+0.30}_{-0.35}\) & \(5.01\times 10^{-3}\) & X & TDE \\ MS 2254.9-3712 & – & \(6.6^{+0.39}_{-0.60}\) & \(1.5\times 10^{-4}\) & X & NLS1 \\ \hline \end{tabular} \end{table} Table 1: Observational data for QPOs around stellar-mass and supermassive BHs [32]. In objects where multiple HF QPOs are reported in small integer ratios, only the lowest frequency is given. Superscripts on the spin ranges indicate the measurement methods: Fe K\(\alpha\) reflection spectroscopy (\(r\)) or continuum fitting (\(c\)). Restrictions on mass and spin of the BHs located in them, based on measurements independent of the HF QPO measurements given by the optical measurement for mass estimates and by the spectral continuum fitting for spin estimates [36; 26]. where \(\nu_{\rm L}\) and \(\nu_{\rm U}\) denote the upper and lower frequencies of the twin peak with a frequency ratio close to \(3:2\). In the ER model, the oscillating torus is supposed to be radiating uniformly. 
A sufficiently large inhomogeneity on the radiating torus, which orbits with the frequency \(\nu_{\phi}\), enables the introduction of the nodal frequency related to this inhomogeneity [125, 130]. In more general accretion flows, non-geodesic effects associated for instance with magnetic fields or other forces may affect the characteristics of the considered oscillation modes and hence influence the spin predictions [33, 34]. The radial profiles of upper \(\nu_{\rm U}\) and lower \(\nu_{\rm L}\) frequencies of the ER model for neutral test particles in the background of Kerr BH are presented in Fig. 2, while the fundamental frequencies for a wide class of stationary, axisymmetric and asymptotically flat spacetimes are shown in Fig. 3. We compare the motion of particles orbiting Kerr BH with the particle motion around many different non-Kerr BHs and examine the deviations from the Kerr limit. The radial profiles of frequencies of particles moving around non-Kerr BHs can be observed differently than the case of Kerr BH when the particle moves close to the BH but the radial profiles coincide as the particles move far away from the BHs. However, a clear deviation can be seen for the case of Kerr-Sen BHs in heterotic string theory and the BHs surrounded by cold DM halo. ### The RP model The RP model was originally proposed to describe QPOs in low-mass X-ray binaries with a neutron star and then extended to systems with stellar-mass BHs [131]. It does not explain the origin of the QPOs, but it relates the observed frequencies of the QPOs with the fundamental frequencies of the background spacetime. In this model, the lower of the twin frequencies is described with the periastron precession frequency, while the upper one is described by the orbital frequency, given by \[\nu_{\rm L}=\nu_{\phi}-\nu_{r},\quad\nu_{\rm U}=\nu_{\phi}, \tag{26}\] where upper \(\nu_{\rm L}\) and lower \(\nu_{\rm U}\) frequencies are the frequencies of the twin peak with a frequency ratio close to \(3:2\). The radial profiles of upper \(\nu_{\rm U}\) and lower \(\nu_{\rm L}\) frequencies of the RP model for neutral test particles in the background of Kerr BH are presented in Fig. 2, while the fundamental frequencies for many different rotating non-Kerr metrics are shown in Fig. 4. The comparison of the motion of particles orbiting Kerr BH with the particle motion around many different non-Kerr metrics has been depicted. A clear deviation can be seen for the case of Kerr-Sen BHs in heterotic string theory and the BHs surrounded by cold DM halo. ### Resonant radii and the fitting technique The HF QPOs observed in BH microquasars are generally stable and appear in a pair of two peaks with upper \(\nu_{\rm U}\) and lower \(\nu_{\rm L}\) frequencies in the timing spectra. The original resonance models induced the non-linear coupling between the radial and vertical epicyclic frequencies, or between the radial and orbital epicyclic frequencies in the accretion disk. In the first case, a coupling was supposed when the frequencies were in a ratio of frequencies \(\nu_{\rm U}:\nu_{\rm L}\) close to a fraction \(3:2\), congruent to the peaks observed in a handful of X-ray power spectra known at that time. In order to understand this coupling, Horak [132] explored weak non-linear interactions between the epicyclic modes in slender tori and observed the strongest resonance between the axisymmetric modes when their frequencies were in a ratio \(3:2\). 
The observations of this effect in different non-linear systems show the existence of resonances between two modes of oscillations. In the case of geodesic QPO models, the observed frequencies can be expressed in terms of linear combinations of the particle fundamental frequencies \(\nu_{r}\), \(\nu_{\theta}\) and \(\nu_{\phi}\). For neutral test particles moving around a rotating non-Kerr BH, the upper and lower frequencies of HF QPOs are functions of the mass \(M\) of the BH, the rotation parameter \(a\), the resonance position \(r\), and some additional parameter \(\alpha\) of the BH as \[\nu_{\rm U}=\nu_{\rm U}(r,M,a,\alpha),\quad\nu_{\rm L}=\nu_{\rm L}(r,M,a, \alpha). \tag{27}\] The dependence of the frequencies \(\nu_{\rm U}\) and \(\nu_{\rm L}\) on the BH mass \(M\), spin \(a\) and additional parameter of the BH is complicated and hidden inside the \(\Omega_{r},\Omega_{\theta},\Omega_{\phi}\) functions, as given by Eq. (24). In order to fit the frequencies observed in HF QPOs with the BH parameters, one needs first to calculate the so-called resonant radii \(r_{3:2}\), given by \[\nu_{\rm U}(r_{3:2}):\nu_{\rm L}(r_{3:2})=3:2. \tag{28}\] For given values of the spin parameter \(a\), the resonant radii \(r_{3:2}\) are usually represented by a numerical solution of a higher-order polynomial in \(r\). Since Eq. (28) is independent of the mass of the BH, the resonant radius also has no explicit dependence on the BH mass. In order to find the resonant radii, we use the technique proposed in [29]. Substituting the resonance radius into Eq. (27), we obtain the frequencies \(\nu_{\rm U}\) and \(\nu_{\rm L}\) in terms of the BH mass \(M\), spin \(a\) and parameter \(\alpha\). Now, we can compare the calculated frequencies \(\nu_{\rm U}\) and \(\nu_{\rm L}\) with the observed HF QPO frequencies \(f_{\rm U}\) and \(f_{\rm L}\), see Tab. 1. If one assumes that the observed frequencies are exactly in \(3:2\) ratio, \(f_{\rm U}:f_{\rm L}=3:2\), then the equations \[f_{\rm U}=\nu_{\rm U}(r,M,a,\alpha),\quad f_{\rm L}=\nu_{\rm L}(r,M,a,\alpha), \tag{29}\] are equivalent to \[f_{\rm U}=\nu_{\rm U}(r_{3:2},M,a,\alpha). \tag{30}\] The position of the resonant radii \(r_{3:2}\) for the HF QPO ER and RP models in many different non-Kerr spacetimes is depicted in Figs. 3 and 4, respectively. The positions of the resonant radii \(r_{3:2}\) for the HF QPO RP model lie closer to the BH than those of the ER model. The calculated upper frequency at the resonant radius \(r_{3:2}\), taken as a function of the BH spin, \(\nu_{\rm U}(a)\), has been used to fit the observed BH spin and mass data in Figs. 5 and 6 for the ER and RP QPO models, respectively. For the HF QPO ER model, the frequencies for contra-rotating particles do not fit any microquasars, but the co-rotating particles fit all microquasars very well. The best fit of the observational data of QPOs in the framework of the ER model can be seen in the case of BHs surrounded by PFDM.
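Before contrasting these results with the RP model, the resonant-radius search of Eq. (28) and the fit frequency of Eq. (30) can be illustrated with a short numerical sketch. It is written for the ER model in the Kerr limit only, uses a simple bracketed root finder, and the example mass and spin values are hypothetical; it is a guide to the fitting procedure of [29], not the exact code behind Figs. 5 and 6.

```python
import numpy as np
from scipy.optimize import brentq

G, c, M_SUN = 6.674e-11, 2.998e8, 1.989e30

def kerr_omegas(r, a):
    """Dimensionless prograde Kerr frequencies: (Omega_phi, Omega_r, Omega_theta)."""
    om_phi = 1.0 / (r**1.5 + a)
    om_r = om_phi * np.sqrt(1 - 6/r + 8*a*r**-1.5 - 3*a**2/r**2)
    om_th = om_phi * np.sqrt(1 - 4*a*r**-1.5 + 3*a**2/r**2)
    return om_phi, om_r, om_th

def r_isco(a):
    """Prograde Kerr ISCO radius in units of GM/c^2 (Bardeen-Press-Teukolsky formula)."""
    z1 = 1 + (1 - a**2)**(1/3) * ((1 + a)**(1/3) + (1 - a)**(1/3))
    z2 = np.sqrt(3*a**2 + z1**2)
    return 3 + z2 - np.sqrt((3 - z1) * (3 + z1 + 2*z2))

def r_32_er(a):
    """Resonant radius of Eq. (28) for the ER model: nu_theta / nu_r = 3/2."""
    f = lambda r: kerr_omegas(r, a)[2] / kerr_omegas(r, a)[1] - 1.5
    # The ratio diverges just outside the ISCO and tends to 1 far from the BH,
    # so a sign change is guaranteed inside the bracket.
    return brentq(f, r_isco(a) + 0.01, 100.0)

def f_upper_hz(a, M_solar):
    """Eq. (30): predicted upper frequency f_U = nu_theta(r_3:2) in Hz."""
    r = r_32_er(a)
    om_th = kerr_omegas(r, a)[2]
    return om_th * c**3 / (G * M_solar * M_SUN) / (2*np.pi)

# Hypothetical spin-frequency curve for a 5.9 M_sun BH (GRO J1655-40-like mass)
for a in (0.0, 0.3, 0.7, 0.9):
    print(a, round(r_32_er(a), 3), round(f_upper_hz(a, 5.9), 1))
```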
However, in the case of the RP model, the frequencies for contra-rotating particles fit only a few microquasars and AGNs, but the co-rotating particles do not fit any microquasars or AGNs sources except rotating regular BHs in conformal massive gravity. We observe that all considered BHs fit the observational data of QPOs around Sgr A* for the RP model, however, we don't see any fitting for ER model except hairy BHs. ## V Conclusions The astrophysical BH candidates are thought to be the Kerr BHs predicted by GR, but the actual nature of these objects has still to be confirmed. In order to test the Kerr nature of these objects, one needs to probe the geometry of the spacetime around them and check if the observations are consistent with the predictions of the Kerr metric, which can be achieved in a number of ways. Many observations, for instance, HF QPOs observed in the X-ray flux of some stellar-mass BHs [32], hot-spot data of three flares observed by the GRAVITY instrument [133], and BH shadow of Sgr A* observed by Event Horizon Telescope [134] can help us to determine the nature of spacetime. Moreover, continuum-fitting [135; 136] and the iron line [137; 138] methods could also be used to test the Kerr BH paradigm. We consider a wide range (30 in total) of well-motivated deviations from classical Kerr BH and explore the neutral test particle fundamental frequencies of small harmonic oscillations around circular orbits. Recently, we have used the same set of stationary, axisymmetric, and asymptotically flat spacetimes, and estimated the restrictions of spacetime parameters with the help of hot-spot data of three flares observed at Sgr A* by GRAVITY instrument [1]. The constraints of parameters for many different spherically symmetric spacetimes have been obtained using BH shadow of Sgr A* observed by Event Horizon Telescope in [139]. The objective of this work is not to test a specific theoretical model and to determine or constrain its parameters. Instead, our aim is to review a class of stationary, axisymmetric, and asymptotically flat spacetimes and investigate the possible deviations from the Kerr geometry of the spacetime around astrophysical BHs. The HF QPOs observed in some stellar-mass BH candidates may be used to test the Kerr-nature of these objects and to confirm, or rule out, the Kerr BH hypothesis. These QPO frequencies remain almost constant, indicating that they are not influenced by the characteristics of the accretion flow but rather by the spacetime geometry itself. We explored the perturbations of circular orbits and investigated the radial profiles of lower and upper frequencies for HF QPOs ER and RP models in the background of well-motivated deviations from classical Kerr BH. In the present work, we are demonstrating the general behaviour of fundamental (orbital) frequencies for various different axially spacetimes, demonstrating their differences. Our work could serve as a guidepost for any future detailed works on any chosen spacetime and our review article could help to select BH spacetime with desired orbital frequency behavior. For example, a clear deviation can be observed in the case of Kerr-Sen BHs in heterotic string theory and the BHs in PFDM. It is concluded that the frequencies of particles moving around non-Kerr BHs coincide with that of Kerr BH when the particles move away from the BHs. The positions of resonant radii, for example, \(r_{3:2}\) or \(r_{1:2}\), play a crucial role in understanding the behavior of test particle motion around BHs. 
Resonances can significantly alter the dynamics of test particles around BHs and highlight the deviations from the standard Kerr spacetime to non-Kerr spacetime [140; 141]. We examined the positions \(\nu_{\rm U}:\nu_{\rm L}=3:2\) of resonant radii \(r_{3:2}\) for both HF QPO ER and RP models in the background of several non-Kerr spacetimes and observed the resonant radii closer to the BH for the case of RP model as compared to the ER model. It is noted that for most of the considered non-Kerr spacetimes, the resonant radii are positioned at smaller radii than those predicted by the standard Kerr spacetime. We have presented a generalized formalism for fitting observational data of QPOs observed in the well-known microquasars, AGNs, and Sgr A* within the framework of the ER and RP models of geodesic oscillations of Keplerian disks modified for the epicyclic oscillations of both co-rotating and contra-rotating particles orbiting various types of rotating BHs. Our investigation primarily focuses on reviewing a specific class of stationary, axisymmetric, and asymptotically flat spacetimes, and comparing the observational data of HF QPOs with the fittings obtained from these spacetimes. It is found that the best fit for observational data in the background of ER and RP models can be observed for BHs surrounded by PFDM, and the regular BHs in conformal massive gravity, respectively. For the case of the ER model, only Hairy BHs are capable of fitting the observational data for Sgr A* among all the considered BH spacetimes. However, in the case of the RP model, all BH spacetimes that have been examined are able to fit the observational data for Sgr A*, except Hartle-Thorne spacetime and Quasi-Kerr BHs. It is worth mentioning that the best fit can be obtained for the microquasar GROJ1655-40 in the case of the ER HF QPO model. However, for the RP model, the best fit can be obtained for XTE J1650-500. As we can see there is currently no spacetime that can explain (fit) data for all microquasar or AGNs sources, especially it seems that it is quite hard to fit data for Sw J164449.3+573451, ASASSN-14li, and Sgr A* sources using neutral particle orbital frequencies. This could mean that more complex QPO models like with magnetic field influence [34] must be included or that the observed frequencies are not correct HF QPOs. ## Data availability The observational data associated with the manuscript has been presented in Fig. 1, and Tab. 1, there is no additional data for this article. ###### Acknowledgements. This work is supported by the Research Centre for Theoretical Physics and Astrophysics, Institute of Physics, Silesian University in Opava, and Czech Science Foundation Grant No. 23-07043S.
In this paper, we attempt to capture the influence of deviations from the standard Kerr black hole spacetime in the observed high-frequency quasi-periodic oscillation signals. We explore the motion of test particles in the field of rotating compact objects governed by various modifications of the standard Kerr black hole spacetime, and apply models of epicyclic orbital oscillations of Keplerian disks to the observed high-frequency quasi-periodic oscillation data of microquasars and active galactic nuclei. We present a generalized formalism for fitting the high-frequency quasi-periodic oscillation models, namely the so-called epicyclic resonance and relativistic precession models, under the assumption of stationary, axisymmetric, and asymptotically flat spacetimes. Recently, assuming the same stationary, axisymmetric, and asymptotically flat spacetimes, constraints on the spacetime parameters have been estimated using the hot-spot data of three flares observed at Sgr A* by the GRAVITY instrument.
2305.20055
Cross-Domain Car Detection Model with Integrated Convolutional Block Attention Mechanism
Car detection, particularly through camera vision, has become a major focus in the field of computer vision and has gained widespread adoption. While current car detection systems are capable of good detection, reliable detection can still be challenging due to factors such as proximity between cars, light intensity, and environmental visibility. To address these issues, we propose a cross-domain Car Detection Model with an integrated convolutional block Attention mechanism (CDMA), which we apply to car recognition for autonomous driving and other areas. CDMA includes several novelties: 1) Building a complete cross-domain target detection framework. 2) Developing an unpaired target domain picture generation module with an integrated convolutional attention mechanism which specifically emphasizes the car headlights feature. 3) Adopting Generalized Intersection over Union (GIOU) as the loss function of the target detection framework. 4) Designing an object detection model integrated with a two-headed Convolutional Block Attention Module (CBAM). 5) Utilizing an effective data enhancement method. To evaluate the model's effectiveness, we performed a resolution reduction process on the data in the SSLAD dataset and used it as the benchmark dataset for our task. Experimental results show that the performance of the cross-domain car target detection model improves by 40% over the model without our framework, and our improvements have a significant impact on cross-domain car recognition.
Haoxuan Xu, Songning Lai, Xianyang Li, Yang Yang
2023-05-31T17:28:13
http://arxiv.org/abs/2305.20055v4
# Cross-Domain Car Detection Model with Integrated Convolutional Block Attention Mechanism ###### Abstract Car detection, particularly through camera vision, has become a major focus in the field of computer vision and has gained widespread adoption. While current car detection systems are capable of good detection, reliable detection can still be challenging due to factors such as proximity between the car, light intensity, and environmental visibility. To address these issues, we propose cross-domain **C**ar **D**etection **M**odel with integrated convolutional block **A**ttention mechanism(CDMA) that we apply to car recognition for autonomous driving and other areas. CDMA includes several novelties: 1)Building a complete cross-domain target detection framework. 2)Developing an unpaired target domain picture generation module with an integrated convolutional attention mechanism which specifically emphasizes the car headlights feature. 3)Adopting Generalized Intersection over Union (GIOU) as the loss function of the target detection framework. 4)Designing an object detection model integrated with two-headed Convolutional Block Attention Module(CBAM). 5)Utilizing an effective data enhancement method. To evaluate the model's effectiveness, we performed a reduced will resolution process on the data in the SSLAD dataset and used it as the benchmark dataset for our task. Experimental results show that the performance of the cross-domain car target detection model improves by 40% over the model without our framework, and our improvements have a significant impact on cross-domain car recognition,exceeding most advaenced crossdomain models. keywords: Image Transformation, Domain Adaptive, Attention Mechanism, Object detection + Footnote †: journal: ## 1 Introduction Deep learning techniques enable the emergence of state-of-the-art models for solving target detection tasks[1; 2]. However, these techniques rely heavily on data and the accuracy of the training dataset. Current deep learning tasks mostly assume similar data distributions for both the training and test sets, making the task substantially less challenging. Ideally, object detectors should maintain high performance in detecting a given object despite significant variability in the data distribution. For example, the model should perform well on test data with different environments and angles from the training data. When the data distribution changes, we refer to this as a change in domain, which can lead to decreased object detection accuracy. Researchers in the field of domain adaptation have defined and explored this problem, proposing the task of learning the key features of an object from the source domain. Data annotation is a costly process for target detection, making it crucial to develop a model that can effectively detect targets in the target domain. It also presents a challenging goal for Unsupervised Domain Adaptation (UDA)[4; 5; 6], where only unlabelled target data is available in addition to labelled source data. Moreover, the training data may have been collected under varying scenario conditions, which is known as multi-source domain adaptation. The primary approach in UDA is to learn domain invariant feature representations by aligning the source and target domains. In target detection, recent research has focused on how to learn aligned features and how to induce feature alignment. 
For instance, [29; 8] propose adapting the backbone network and all internal image elevation features, as well as auxiliary features extracted from object suggestions using adversarial training[6]. [9] argues for the advantages of aggregating object suggestions before alignment and suggests compressing all suggestions into a single class of prototype vectors prior to alignment using contrast loss induction. Despite the success of many CNN-based target detection methods driven by deep convolutional networks (CNNs) [10; 1; 12], cross-domain detection remains a highly desirable and challenging task. Given the difficulty of obtaining labels, a method that can transform images from one domain to another can enable cross-domain label transformation. Generative Adversarial Networks (GANs) [13; 14; 15; 16]have emerged as a powerful tool for constructing image generation methods that address the problem of image transformation. In recent years, GAN-based image transformation methods have shown advanced image translation between different domains in an unsupervised manner. For instance, [16]use supervised techniques to build frameworks to propose CycleGAN, which can transform two types of images in both directions without pairwise matching. StarGAN[17] and UNIT[18] can transform images in multiple domain styles simultaneously. Car detection faces significant challenges due to the differences in lighting conditions between day and night. Current research mainly focuses on improving the robustness of models to adapt to various lighting conditions. However, existing methods often fail to fully consider the importance of car headlights in car detection. Therefore, CDMA uses an attention mechanism module to emphasize the importance of car headlights in the image transfer process, thereby improving the accuracy of the model in day and night image detection. Our research fills the gap in current research and achieves good results in experiments. In this study, we use the detection of car as a test case for our improved cross-domain object detection method. Our approach incorporates the concept of generative adversarial image generation and combines attention mechanisms from the latent space to exploit different features in the image, resulting in improved image generation for adversarial generative training. This results in better generators and decoders for GAN networks, allowing for better achievement of target-domain style image generation goals. Additionally, we enhance the detection efficiency of the car detector in the target domain by convolving the attention mechanism in both space and channel, using generalized cross union to increase the effectiveness of model correction. In summary, our key contributions are as follows: 1)A comprehensive framework for is proposed for nighttime target detection. By fine-tuning the target detector using the image generator in the target domain based on source domain training, CDMA achieves high-accuracy cross-domain target detection in the absence of the target domain, improving mAP by 18.55% over original target detection results. 2)A image generator is created which learns the primary features of blackout vehicles through an unpair target domain image generation module with an integrated convolutional attention mechanism. 
3)The overall detection performance of the Faster R-CNN model is enhanced by using Generalized Intersection over Union as the loss function of the target detection framework, combined with two-headed Convolutional Block Attention Module,which we name it Attention R-CNN. 4)An effective data augmentation method is employed to expand the dataset effectively, even with a limited training set. Overall, these contributions demonstrate that our approach can effectively address the challenges of cross-domain target detection and significantly improve detection accuracy. We believe that our work will provide valuable insights for researchers and practitioners in this field. ## 2 Related Work **Object detection:** Classical object detection relies on manually extracted features and sliding window traversal of image pyramids for classification[10]. However, deep convolutional networks trained on large-scale data are gaining popularity. These deep learning algorithms can be classified into two types: frame-based [19; 20]and frame-free[21]. Frame-based algorithms can be further divided into one-stage[2; 22] and two-stage [1; 23]detection algorithms, while frame-free algorithms can be divided into keypoint and centroid methods. Faster R-CNN is widely used due to its good performance and open implementation. It utilizes a region proposal network (RPN) to generate candidate bounding boxes in the first stage, and in the second stage, it employs RoIPool and a fully connected layer to extract features for each candidate box. RPN and Fast R-CNN networks are trained jointly in an end-to-end manner using a multi-task loss function that combines the classification loss and regression loss. The classification loss measures the accuracy of the predicted class labels, while the regression loss measures the accuracy of the predicted bounding box offsets. By improving Faster R-CNN, we aim to enhance our detection model as well. **Unsupervised Cross-Domain Object Detection:** Unsupervised cross-domain object detection involves detecting objects in an unsupervised target domain using only labeled data from the source domain[24; 25; 26; 27]. In such tasks, semi-supervised learning-based methods utilize semi-supervised learning. This involves training a base model with labeled data from the source domain and an object detection model. The model is then used to predict the unlabeled data in the target domain, and the predicted pseudo-labeled data and labeled data from the source domain are combined as the training data for semi-supervised learning [24; 28]. Adversarial learning-based methods establish an adversarial network between the source and target domains, enabling cross-domain target detection by minimizing the distance between the source and target domains while maximizing the accuracy of the domain classifier[25; 5; 30]. **Dark Scene Image Generation:** By simulating the lighting conditions in low-light environments, we can generate high-quality dark scene images.Early methods for generating dark scene images were primarily based on statistical models, with the Gaussian Mixture Model [GMM] being used to model the transformation between night and day scenes[32]. More recent methods employ physical models to simulate the propagation process of light in dark scenes by establishing a light transmission model, resulting in high-quality dark scene images. Physical models are preferred, as they better simulate real-world lighting conditions and have been widely used in dark scene image generation tasks. 
Deep learning-based physical models, such as DeepISP[33] and DeepUPE[34], have also been developed and achieved promising results in generating dark scene images. GAN-based methods are currently among the most widely used methods for generating dark scene images. These methods train the generator and discriminator to produce images that closely resemble real dark scene images, while ensuring that the discriminator can distinguish between generated and real images with high accuracy. Models such as DCGAN[37], Pix2Pix[14], and CycleGAN[16] have achieved excellent results in dark scene image generation tasks. CDMA is mainly based on CycleGAN. **Attention Mechanism:** The attention mechanism is a technique that simulates the human visual attention mechanism, allowing the model to focus more on critical information in the input sequence and improve overall performance. In recent years, the attention mechanism has been widely adopted in computer vision for various tasks, including image classification, object detection, and image generation. For example, Xu et al[43]. proposed a Convolutional Neural Network that utilizes attention mechanism to emphasize significant regions in the input image and improve classification accuracy. Similarly, Chen et al[44]. developed an object detection model based on attention mechanism that can automatically prioritize the target area and enhance detection accuracy. The Convolutional Block Attention Module (CBAM) [38]is a deep learning model that leverages attention mechanism to boost performance and accuracy. By integrating attention mechanism into a convolutional neural network, CBAM can automatically learn the importance of different regions in an image and better extract image features, resulting in improved classification and detection accuracy. ## 3 Methods In this section, we present our main model that utilizes image transformation and advanced object detectors to achieve cross-domain car detection. We begin with a problem formulation and then outline the key components of CDMA, which showcases its capability to generalize across day-night domains and different constituent structures. We leverage our brand new model to combine the best performing components through a training strategy designed for cross-domain object detection. **Problem formulation:** In unsupervised domain adaptation for object detection, we start with the source domain consisting of \(N_{s}\) labeled images, \(S=\left\{\left(x_{i}^{S},y_{i}^{S},B_{i}^{S}\right)\right\}_{i=1}^{N_{S}}\), where \(x_{i}^{S}\) represents the image, \(y_{i}^{S}\) denotes the type of image label box, and \(B_{i}^{S}\) indicates the location of the label box. On the other hand, for the test set of \(N_{t}\) target domains, \(T=\left\{x_{i}^{T}\right\}_{i=1}^{N_{T}}\), we only have the images themselves without any labels. The primary objective of the UDA method is to train object detectors that can perform well on the target domain, despite the different characteristics of the images between the two domains. ### Overview Our main model consists of three main parts: (i) source domain training for object detection, (ii) generation of the target domain dataset, and (iii) training of the object detector on the generated dataset. As shown in Fig 1, we first learn and train the source domain dataset using an improved Faster R-CNN model (as mentioned in 3.2) to capture the characteristic information of vehicles during the day. 
Next, we use an unsupervised image-to-image style converter to generate nighttime target domain images based on the daytime source domain images. Since the converter only modifies the style and appearance of the image, the position of the original car in the image remains unchanged. Hence, we continue to use the labels from the original dataset for the generated images. Finally, we fine-tune the trained object detector using the generated dark target domain images to learn the features of the target domain and achieve cross-domain object detection. Figure 1: Illustration of the Cross-Domain Car Object Detection Model: By utilizing source domain images to learn daytime car features within the source domain, the proposed method generates images with dark scene characteristics through the image generator and fine-tunes a target detector to effectively identify cross-domain vehicles. ### Source Domain Training for Object detection Car detection utilizes a generic object detector to locate cars in an image. Object detectors take a single image as input and output bounding boxes. Typically, object detection is trained using manually labeled bounding boxes and corresponding images. However, since our objective is to perform unsupervised car detection in dark scenes, we do not have access to corresponding manual annotations. Therefore, we use style-transformed images from the source domain (daytime) for training, as discussed in Section 3.3. In our work, we adopt Faster R-CNN[1] as the basic framework for optimization. While other models are available[2; 22; 23], we choose this model due to its openness and excellent performance. The CDMA object detection model is shown in Fig 2; it incorporates the CBAM attention mechanism before and after the convolutional layers and employs the GIOU loss function during model training. Faster R-CNN is a fundamental algorithm for object detection that comprises two modules. The first network proposes regions that potentially contain objects, while the second network classifies each proposed region and performs bounding box regression. The Fast R-CNN network is then combined with the RPN, and the entire Faster R-CNN model is optimized in an end-to-end training process. **Convolutional Attention Module:** We have integrated a module to improve the performance of convolutional neural networks in the backbone network of Faster RCNN. This module consists of two sub-modules: the channel attention module and the spatial attention module[38]. The channel attention module conducts global max pooling and global average pooling on each channel of the convolutional layer output. The pooled descriptors are then combined and passed through two fully connected layers with a sigmoid activation to obtain the attention weight of each channel. The final weight is then multiplied by the original convolutional layer output to obtain the enhanced feature map. The spatial attention module conducts average pooling and maximum pooling over the channels of the feature map. The two pooling results are then passed through a convolutional layer to obtain the corresponding weights. Finally, these weights are added to obtain the attention weight of each spatial location for weighting the feature map. In our model framework, placing the CBAM attention mechanism after the first convolutional layer and after the last convolutional layer makes the model pay more attention to shallow features and deep features, respectively.
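For reference, a minimal PyTorch sketch of such a convolutional block attention module is given below. It follows the standard CBAM formulation of [38] (a shared MLP over average- and max-pooled channel descriptors, followed by a convolution over channel-wise average and max maps); the class names, reduction ratio, and kernel size are illustrative assumptions, and this is not necessarily the exact implementation used in CDMA.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        # Shared MLP applied to both global-average and global-max descriptors
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x):
        avg = self.mlp(F.adaptive_avg_pool2d(x, 1))
        mx = self.mlp(F.adaptive_max_pool2d(x, 1))
        return torch.sigmoid(avg + mx)            # (B, C, 1, 1) channel weights

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)          # channel-wise average map
        mx, _ = x.max(dim=1, keepdim=True)         # channel-wise max map
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))  # (B, 1, H, W)

class CBAM(nn.Module):
    """Channel attention followed by spatial attention, applied as feature re-weighting."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention(kernel_size)

    def forward(self, x):
        x = x * self.ca(x)
        return x * self.sa(x)

# Usage sketch: wrap backbone feature maps, e.g. after the first and last conv blocks
feat = torch.randn(2, 256, 64, 64)
print(CBAM(256)(feat).shape)   # torch.Size([2, 256, 64, 64])
```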
**GIOU LOSS[39]:** IoU Loss[41] is a frequently used loss function for object detection that assesses the overlap between the detected box and the true box. However, when there is considerable overlap between the detection box and the true box, the IoU Loss becomes small, resulting in an unstable training process. To address this, we have introduced GIOU LOSS in our object detector. \[IoU=\frac{|A\cap B|}{|A\cup B|} \tag{1}\] \[GIoU=IoU-\frac{|C\backslash(A\cup B)|}{|C|} \tag{2}\] As shown in Equation 1, A represents the Ground Truth, B represents the prediction box, and C represents the closure of the two regions.In GIOU Loss, the closure is defined as the smallest rectangle parallel to the coordinate axis that surrounds the two rectangular regions. The IoU function is designed to measure the overlap between the detected and true box, but it has certain limitations that can lead to unstable training. One major limitation is that it does not account for the difference in area and aspect ratio between the two boxes, which can affect its accuracy. The GIOU function addresses this limitation by incorporating the difference in area and aspect ratio between the two boxes, allowing for a more accurate measurement of overlap. It also includes a penalty term for the distance between the center point of the detected box and the true box, resulting in a more precise measure of the difference between the two boxes. GIOU takes into account the area difference between two bounding boxes, so it can measure the similarity between them more accurately. GIOU avoids the case that the denominator is 0 when it is calculated, so it can avoid the case that it cannot be calculated. GIOU considers the center point distance between two bounding boxes when calculating, so it can reflect the similarity change between them more smoothly. Overall, the GIOU function provides a more accurate and stable measure of overlap, leading to improved performance in object detection models. Due to our fine-tuning strategy, the model requires high stability of loss. We believe that GIOU is more suitable for our model framework than other IoUs. ### Generation of the target domain dataset **Adversarial Training:** CycleGAN[16] is a type of generative adversarial network used for image-to-image translation that learns the mapping relationship between different domains without relying on pairwise data. The original CycleGAN consists of two generators and a discriminator. Each generator learns to transform image domains, while each discriminator determines whether the generated images are authentic or not. The generator and discriminator work in opposition to continuously improve the generator's ability to learn image features in the target domain. The loss of CycleGAN is calculated using the adversarial loss of the generator and discriminator, as well as the cycle consistency loss to measure the difference between the input image and the recovered image of two transformations, and the identity loss. Adversarial Loss: In Equation 3, s and t represent the source and target domains, respectively. The target domain is used to train the generator and discriminator, enabling the generator to produce images that can deceive the discriminator, making it unable to distinguish between real and fake images. 
\[L_{\text{gan}}^{s\to t}=\left(E_{x\sim X_{t}}\left[\left(D_{t}(x)\right)^{2} \right]+E_{x\sim X_{t}}\left[\left(1-D_{t}\left(G_{s\to t}(x)\right)\right)^{2} \right]\right) \tag{3}\] Cycle Consistency Loss: As shown in Equation 4, this loss is used to ensure that the generator's mapping is bidirectional, meaning that the mapping from "s" to "t" and the mapping from "t" to "s" are inverses of each other. This loss function helps to generate more realistic images. \[L_{\text{cycle}}^{s\to t}=E_{x\sim X_{t}}\left[\left|\ x-G_{t\to s}\left(G_{s \to t}(x)\right)\right)\right|_{1}\right] \tag{4}\] Identity Loss: As depicted in Equation 5, this loss guarantees consistency between the input image and the output image. Specifically, the output image processed by the generator should be as similar to the input image as possible. \[L_{\text{identity}}^{s\to t}=E_{x\sim X_{t}}\left[\left|x-G_{s\to t}(x) \right|_{1}\right] \tag{5}\] **Attention Mechanism:** Despite the good results achieved by CycleGAN, the model's performance is not satisfactory when detecting vehicles in the transition from day to night (as shown in the figure). Images in the dark domain often contain less useful information (mostly black pixels), and although CycleGAN can transfer the style of photos to the dark domain, the model's focus does not seem to be on vehicles. As a result, the model fails to effectively extract important features from dark vehicles, which are often interfered with by street lights and headlights. To address this issue, we added an appropriate attention mechanism module to the generator and discriminator. Inspired by [47], we implemented a Class Activation Map (CAM)[42; 46] attention mechanism in CDMA, which guides the model to better distinguish important regions between the source and target domains.As shown in Fig 3 we incorporate an attention mechanism in both the decoder and encoder to generate feature maps that highlight important information. This has led to significant improvements in the image generation quality. The attention map is generated by the auxiliary classifier and embedded into the Figure 2: The object detection model follows a specific process.Our Faster-CNN model incorporates the CBAM attention mechanism before and after the convolutional layer. generator and discriminator, respectively. The CAM in the generator allows the model to perceive the significant differences between the two domains (A, B), while the CAM in the discriminator helps distinguish between real and fake samples and enables the gradient to generate better samples. \(\eta\), where \(\eta_{s}(x)\) represents the probability that x comes from \(X_{S}.\eta_{D_{i}}\) and \(D_{t}(x)\) are trained to discriminate whether x comes from \(X_{t}\) or \(G_{s\to t}\left(X_{s}\right)\). \[L_{cam}^{s\to t}=-\left(E_{x\sim X_{s}}\left[\log\left(\eta_{s}(x)\right) \right]+E_{x\sim X_{s}}\left[\log\left(1-\eta_{s}(x)\right)\right]\right) \tag{6}\] \[L_{cam}^{D_{i}}=E_{x\sim X_{s}}\left[\left(\eta_{D_{i}}(x)\right)^{2}\right]+E_{ x\sim X_{s}}\left[\left(1-\eta_{D_{i}}\left(G_{s\to t}(x)\right)^{2}\right]. 
\tag{7}\] Finally, we joint the encoders, decoders,discriminators, and auxiliary classifiers as our final Loss: \[\min_{G_{s\to t},G_{s\to t},\eta_{D}}\max_{D_{i},D_{i},\eta_{D_{i}},\eta_{D_{i }}}\lambda_{1}L_{tsgan}+\lambda_{2}L_{cycle}+\lambda_{3}L_{identity}+\lambda_{4 }L_{cam}\,, \tag{8}\] Among them \(\lambda_{1}\)=1, \(\lambda_{2}\)=10, \(\lambda_{3}\)=100,\(\lambda_{4}\)=1000, \(\mathrm{L_{tsgan}}=\mathrm{L_{tsgan}^{s\to t}}+\mathrm{L_{tsgan}^{t\to s}}\),\(\mathrm{L_{cycle}}=\mathrm{L_{cycle}^{s\to t}}+\mathrm{L_{cycle}^{t\to s}}\), \(\mathrm{L_{identity}^{s\to t}}+\mathrm{L_{identity}^{t\to s}}\), \(\mathrm{L_{cam}}=\mathrm{L_{cam}^{s\to t}}+\mathrm{L_{cam}^{t\to s}}\) We put the CAM attention module at the end of the encoder in the generator of CycleGAN, which helps the generator to better capture the key information in the deep features in the image at the time of image transformation. Specifically, it uses global average pooling to calculate the average of each channel and uses it as a measure of channel importance. Then, by weighting the channels, the channels with higher importance are taken into account more, which enhances the expressiveness and generalization ability of the generator. Figure 3: The framework of the image generator.We have incorporated an attention mechanism in both the decoder and encoder to generate feature maps that highlight important information. This has led to significant improvements in the image generation quality. ## 4 Experiment ### Dataset We utilized the SODA10M dataset[48] as our primary training dataset. SODA10M is a high-quality driving scene dataset, collected via crowdsourcing methods, with images at 1080P+ resolution. The dataset consists of two parts, one with ten million labeled images and another with twenty thousand unlabeled images, categorized into six primary car scene categories including pedestrian, cyclist, car, truck, tram and tricycle. We chose the car category as our primary training set, which only contained images of urban scenes on sunny days in Shanghai. The test set contained images from a variety of different scenes,as shown in Tab 1.The training dataset consists of a single scene for each city, location, weather, and time period, whereas the test set contains different categories for each of these criteria. Notably, there are 1656 scenes that take place at night in the test set. We used 5000 labeled training images for training CDMA, which only contained scene vehicles on sunny days during the day and city street scenes. Additionally, we selected 5000 labeled test images, which included day and night, clear, overcast, and rainy weather, as well as city street, highway, and country road scenes. The model was not exposed to the test set content prior to testing. For image generation, we chose 1000 high-quality photos with dark scenes from the test set, and conducted generative adversarial learning with 1000 daytime photos from the training set to train a model with dark scene characteristics. ### Data augmentation To enhance the original limited source domain dataset, we added Gaussian noise, random mask, and random brightness, contrast, and saturation adjustments. Adding Gaussian noise can simulate image noise in real-world scenarios, making the model better suited to handle real-world images. Adding random masks can simulate occlusion, partial target missing, and other complex scenarios, making the model better adapted to object detection tasks in complex scenes. 
Random adjustments to brightness, contrast, and saturation can increase the diversity of data and improve the model's robustness by adjusting the image's brightness, contrast, and saturation. Random brightness adjustment can simulate images under different lighting conditions, allowing the model to better adapt to real-world scenarios. Contrast adjustment can increase the contrast of the image, making the target more prominent and easier to detect. Saturation adjustment can increase or decrease the color saturation of the image, allowing the model to better adapt to different color distributions. These data augmentation methods can improve the model's generalization ability, prevent overfitting, and help the model better adapt to different scenarios and data distributions. Using these three methods of data augmentation, we increased the original training set source domain images from 5000 to 20000. Each operation added an additional 5000 training samples to the dataset. ### Evaluation criteria According to the previous work, we selected four evaluation indicators, Recall, Precison, F1-Score, and mAP. Recall is a measure of the sensitivity of CDMA, which is how many examples of all the cars CDMA predicted the model was able to correctly identify. precison refers to the total proportion of the actual labels among all the samples predicted as positive examples by our model, which is used to measure the accuracy of the model. F1-Score is a measure of model performance, which is the harmonic mean of precision and recall, and can comprehensively evaluate the classification performance of CDMA. mAP synthesizes the AP values of multiple object categories and can be used to comprehensively evaluate the performance of object detection models. \[\text{Precision }=\text{TP}/(\text{TP + FP}) \tag{9}\] \begin{table} \begin{tabular}{l|c c c|c c c|c c|c c} \hline \hline & \multicolumn{3}{c|}{City} & \multicolumn{3}{c|}{Location} & \multicolumn{3}{c|}{Weather} & \multicolumn{3}{c}{Period} \\ & Shanghai & Shenzhen & Guangzhou & Citystreet & Highway & Countryroad & Clear & Overcast & Rainy & Daytime & Night \\ \hline Train & 5000 & 0 & 0 & 5000 & 0 & 5000 & 0 & 0 & 5000 & 0 \\ Test & 2596 & 1435 & 969 & 1587 & 2744 & 674 & 2023 & 2328 & 649 & 3344 & 1656 \\ \hline \hline \end{tabular} \end{table} Table 1: SODA 10M in detail. \[\text{Recall }=\text{TP}/(\text{TP}+\text{FN}) \tag{10}\] \[F_{1}=2\cdot\frac{\text{precision }\cdot\text{ recall}}{\text{precision }+\text{ recall}} \tag{11}\] ### Baselines In the study, we compared CDMA with three baseline models. Faster R-CNN[1] utilizes RPN (Region Proposal Network) to generate candidate regions and the ROI Pooling layer to extract features. RetinaNet[31] uses Focal Loss to solve the class imbalance problem in object detection, and FPN to extract multi-scale features. Focal Loss is a loss function that can make the model pay more attention to samples that are difficult to classify, thereby improving the accuracy of the model. The RetinaNet algorithm has a good detection effect on small targets, a fast training speed, and relatively high accuracy. Additionally, it can also be used for tasks such as instance segmentation.YOLOv5[36] is characterized by its fast speed, relatively high accuracy, and its ability to perform real-time object detection. YOLOv5 transforms the object detection task into a regression problem by using the idea of the YOLO (You Only Look Once) algorithm. 
The algorithm divides the entire image into multiple grids, with each grid responsible for detecting objects within it. Subsequently, a neural network directly outputs the location and category information of the target. YOLOv5 also employs techniques such as multi-scale prediction and adaptive training to improve the detection effect. YOLO7[40] is a state-of-the-art object detection model that predicts bounding boxes and class probabilities directly from input images. It uses a fully convolutional architecture and a modified DarkNet backbone to improve accuracy. Additionally, it employs spatial pyramid pooling and feature fusion to better capture object context and improve detection performance. YoloV8 is the latest version of the Yolo object detection model family, featuring new features and optimizations. It introduces a hybrid backbone with ResNet and DarkNet layers for better feature extraction and accuracy. YoloV8 also employs a novel attention mechanism that selectively weights feature maps to improve detection performance and reduce computational overhead. In the domain adaptation field, DA Faster R-CNN [29]incorporates domain adaptation techniques into Faster R-CNN. It utilizes a method called Self-Adversarial Network to minimize the differences between the source and target domains, thereby improving the performance of object detection. Specifically, a feature layer was used to extract features from the source domain and the target domain, and a domain classifier was used to distinguish the source domain from the target domain. At the same time, the model also uses an inverse gradient layer to combat the difference between the source domain and the target domain, so as to achieve domain adaptation. Similarly, Domain Adaptive Faster R-CNN is based on Faster R-CNN and has better scalability. This means that the performance can be further improved by increasing the depth of the network, adding more feature layers, etc. However, due to the high complexity of domain adaptation training, the training time of this architecture is relatively long. Another approach, known as Cross-Domain Adaptive Teacher for Object Detection (CDAT)[45], employs a cross-domain adaptive teacher model for training. The goal is to address the domain gap between a domain with annotations (source) and a domain of interest without annotations (target). The core idea of this algorithm is to establish a bridge between the source and target domains in object detection. By training a teacher model in the source domain, it is then applied to the object detection task in the target domain. The teacher-student framework has been effective in semi-supervised learning, but it can generate low-quality pseudo labels due to domain shift. To mitigate this problem, the CDAT model leverages domain adversarial learning and weak-strong data augmentation. Feature-level adversarial training is used in the student model to ensure domain-invariant features. Weak-strong augmentation and mutual learning between the teacher and student models are also employed to enable the teacher model to learn from the student model without being biased to the source domain. Feature-level adversarial training in the student model ensures domain-invariant features, which are essential for cross-domain object detection. Weak and strong data augmentation and mutual learning between teacher and student models enable the teacher model to learn knowledge from the student model without bias to the source domain. 
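Before turning to the experimental setup, a small reference sketch of the GIoU measure of Eqs. (1)-(2), which CDMA adopts for bounding-box regression, is given below. It assumes axis-aligned boxes in (x1, y1, x2, y2) format with x1 < x2 and y1 < y2, and is an illustration of the formula rather than the exact loss implementation inside the detector.

```python
def giou(box_a, box_b):
    """GIoU of Eqs. (1)-(2) for two axis-aligned boxes (x1, y1, x2, y2);
    returns a value in (-1, 1]."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Intersection |A ∩ B|
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    # Union |A ∪ B|
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union                              # Eq. (1)

    # Smallest enclosing (closure) box C
    cx1, cy1 = min(ax1, bx1), min(ay1, by1)
    cx2, cy2 = max(ax2, bx2), max(ay2, by2)
    area_c = (cx2 - cx1) * (cy2 - cy1)

    return iou - (area_c - union) / area_c           # Eq. (2)

# GIoU stays informative even for non-overlapping boxes, where IoU is simply 0
print(giou((0, 0, 2, 2), (1, 1, 3, 3)))   # overlapping boxes
print(giou((0, 0, 1, 1), (2, 2, 3, 3)))   # disjoint boxes -> negative value
# The corresponding regression loss is typically taken as 1 - GIoU.
```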
### Experiment details Due to GPU limitations, our target domain generation architecture takes an input image with a resolution of 256\(\times\)256 pixels and produces an output image with the same resolution. To minimize image degradation during the downsizing and upsizing process, we resize the 1920\(\times\)1080 image to 1080\(\times\)1080 and build a Laplacian pyramid from it, whose levels are high-resolution blurred images[49]. To obtain the high-resolution image, we replace the top (coarsest) layer of the Laplacian pyramid with the generated low-resolution image and carry out the usual Laplacian reconstruction process. The image is then converted back to 1080\(\times\)1080 and resized to 1920\(\times\)1080 again, as shown in the figure. Table 2 outlines the hyperparameters of our model, which we trained on two TITAN Xp GPUs under the PyTorch framework. ## 5 Results ### Domain adaptation detection result Although our model shows improvement over mainstream object detection algorithms, the improvement is less pronounced for object detection in dark scenes. Therefore, we applied the domain-adaptive training strategy described in Section 4. The training results are presented in Table 3. After fine-tuning with images generated by the adversarial training network, the model demonstrated significant improvement. Specifically, fine-tuning with only the dark-scene images generated by CycleGAN increased the F1-Score by 0.08 and the Recall, Precision, and mAP by 13.14%, 5.92%, and 13.50%, respectively. After additionally applying data augmentation and introducing the CAM attention mechanism into CycleGAN, the F1-Score, Recall, Precision, and mAP increased by 0.11, 13.76%, 9.23%, and 18.55%, respectively, compared to the original model. We can see the results more clearly in Fig 4. The first column displays the basic Faster RCNN detection results, the second column shows the output from our model, and the third column visualizes the attention mechanism of Faster RCNN. It is apparent that the basic Faster RCNN has a relatively poor detection effect. Not only are the label box scores low, but close and distant cars are often missed, and large vehicles are even mislabeled as "CAR". After implementing our cross-domain training strategy and the proposed improvements, the detection effect improves significantly. Our model can now detect cars at long distances and in dark environments, generating appropriate detection boxes. \begin{table} \begin{tabular}{c|c|c} \hline Models & Parameters & Value \\ \hline & Batch Size & 1 \\ & Image size & 256*256 \\ Image generation & Learning rate & 0.0001 \\ & Iterations & 1000000 \\ & Optimizer & Adam \\ \hline & Batch Size & 8 \\ & Anchors size & [8,16,32] \\ Object detection & Epochs & 30 \\ & Learning Rate & 0.001 \\ & Optimizer & Adam \\ & Momentum & 0.9 \\ \hline \end{tabular} \end{table} Table 2: The hyperparameters of our model. \begin{table} \begin{tabular}{l c c c c} \hline & F1-Score & Recall & Precision & mAP \\ \hline Attention RCNN & 0.3757 & 60.12\% & 27.32\% & 41.42\% \\ + Cyclegan & 0.4573 & 73.26\% & 33.24\% & 54.92\% \\ + Data augmentation & 0.4383 & 73.37\% & 31.25\% & 57.69\% \\ + Attention mechanism in cyclegan & 0.4891 & 73.88\% & 36.55\% & 59.97\% \\ \hline \end{tabular} \end{table} Table 3: Domain adaptation detection result. According to Table 4, our model saw significant improvements when trained on SODA 10M. Compared with the original RetinaNet and YOLOv5, our cross-domain fine-tuned model shows notable gains. 
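The resolution-restoration step described in the experiment details (build a Laplacian pyramid of the full-resolution frame, swap its coarsest level for the 256\(\times\)256 generated image, then collapse the pyramid back to full resolution) can be sketched as follows. This is a simplified illustration under stated assumptions: the number of pyramid levels and the use of OpenCV resizing are choices made here, not the paper's exact implementation.

```python
import cv2
import numpy as np


def restore_resolution(full_res_bgr, generated_bgr, levels=3):
    """Collapse a Laplacian pyramid of `full_res_bgr` after replacing its
    coarsest level with the low-resolution generated (e.g. night-style) image."""
    # Gaussian pyramid of the original high-resolution image.
    gauss = [full_res_bgr.astype(np.float32)]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))

    # Each Laplacian level stores the detail lost at one downsampling step.
    laps = []
    for i in range(levels):
        up = cv2.pyrUp(gauss[i + 1], dstsize=(gauss[i].shape[1], gauss[i].shape[0]))
        laps.append(gauss[i] - up)

    # Swap the coarsest Gaussian level for the generated low-resolution image.
    coarse = cv2.resize(generated_bgr.astype(np.float32),
                        (gauss[levels].shape[1], gauss[levels].shape[0]))

    # Collapse the pyramid: repeatedly upsample and add back the stored detail.
    out = coarse
    for i in range(levels - 1, -1, -1):
        out = cv2.pyrUp(out, dstsize=(laps[i].shape[1], laps[i].shape[0])) + laps[i]
    return np.clip(out, 0, 255).astype(np.uint8)
```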
In detail, the F1-Score and mAP increase by 0.0218 and 19.81% over RetinaNet, and by 0.0097 and 19.90% over YOLOv5. While our model's precision is lower than that of RetinaNet and YOLOv5 by 4.19% and 11.31%, its recall improves significantly, by 20.45% and 25.86%. Compared with YOLOv7 and YOLOv8, Recall and mAP increase by 22.36% and 9.15%, and by 20.53% and 7.32%, respectively. Overall, this is a promising outcome. Although there is room to improve the accuracy of car identification and to produce more precise detection boxes, our model performs better at detecting the vehicles in the picture, which is the primary goal of our car detection scenario. ### Target domain image generation results We handpicked 1000 high-quality photos of dark scenes from the SODA10M test set, as several uninformative photos were present. We then employed generative adversarial learning with 1000 daytime photos from the training set to train the model on the characteristics of dark scenes. Although CycleGAN can simulate dark environments well, it often fails to handle the crucial details of dark vehicles with care. As depicted below, in CycleGAN's understanding, night lights appear to be a characteristic of the scene rather than of the dark vehicles, even though the dark scene itself is simulated well. For vehicular objects, it simply reduces the overall brightness. Consequently, the added lights on the car cannot be placed in their appropriate positions (the headlights and the rear lights), which has a misleading effect on the training of our object detector. With the help of the attention mechanism, generative adversarial training focuses more on the primary differences between day and night vehicles during model training. In the region near the headlights, the attention heat map appears redder than in other areas, indicating that the model attends to the changes in the key parts of the car while adapting to the overall style change, as shown in Fig 5 and Fig 6. \begin{table} \begin{tabular}{l c c c c} \hline \hline & F1-Score & Recall & Precision & mAP \\ \hline Faster R-CNN[1] & 0.3567 & 57.66\% & 25.83\% & 39.29\% \\ RetinaNet[31] & 0.4673 & 53.43\% & 40.74\% & 40.16\% \\ Yolov5[36] & 0.4794 & 48.02\% & 47.86\% & 40.07\% \\ Yolov7[40] & 0.5641 & 51.52\% & 62.31\% & 50.82\% \\ DA-Faster RCNN[29] & 0.4872 & 62.47\% & 28.92\% & 43.68\% \\ Yolov8 & **0.5874** & 53.35\% & **64.76**\% & 52.65\% \\ CDAT [45] & 0.5486 & 69.17 \% & 34.29\% & 55.73\% \\ CDMA & 0.4891 & **73.88**\% (\(\uparrow\)16.22\%) & 36.55\% & **59.97**\% (\(\uparrow\)7.32\%) \\ \hline \hline \end{tabular} \end{table} Table 4: Contrast experiment on SODA 10M. Figure 4: Domain adaptation result. The image consists of three rows. The first row displays the detection results obtained using the basic Faster R-CNN model. The second row showcases the detection results obtained after utilizing our training strategy and the improved algorithm. The third and final row displays the attention heat map of the object detector. 
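The convolutional block attention used in this work follows the CBAM design, i.e. channel attention followed by spatial attention. The sketch below is the standard CBAM formulation written in PyTorch; the reduction ratio and the 7\(\times\)7 spatial kernel follow the original CBAM paper and are assumptions with respect to the exact settings used here.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling branch
        return torch.sigmoid(avg + mx).view(b, c, 1, 1)


class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))


class CBAM(nn.Module):
    """Channel attention followed by spatial attention, applied as multiplicative re-weighting."""

    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention(kernel_size)

    def forward(self, x):
        x = x * self.ca(x)
        return x * self.sa(x)
```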
### Object detection results As shown in Table 5, we evaluated the following detectors on our dataset: 1) Faster R-CNN, 2) Attention R-CNN, 3) Attention R-CNN without the attention mechanism in the head, 4) Attention R-CNN without the attention mechanism in the tail, and 5) Attention R-CNN without the attention mechanism. When training only on the source-domain training set, our model (Attention R-CNN) achieved an F1-Score, Recall, Precision, and mAP that were 0.02, 2.46%, 1.49%, and 2.13% higher than Faster R-CNN, respectively. ### Ablation experiment To demonstrate the effectiveness of our detector module, we conducted ablation experiments by removing the head attention mechanism, the tail attention mechanism, and the GIOU loss function, respectively. Figure 5: Comparison of CycleGAN with and without Attention Mechanism. The first row displays the source domain image, followed by the second row which showcases the target domain image generated by CycleGAN. The third row displays the image generated by CycleGAN with the attention mechanism. \begin{table} \begin{tabular}{l c c c c} \hline \hline & F1-Score & Recall & Precision & mAP \\ \hline Faster R-CNN & 0.3567 & 57.66\% & 25.83\% & 39.29\% \\ Attention R-CNN & 0.3757 & 60.12\% & 27.32\% & 41.42\% \\ w/o attention mechanism in the head & 0.3670 & 59.40\% & 26.55\% & 40.78\% \\ w/o attention mechanism in the tail & 0.3558 & 59.93\% & 25.30\% & 41.15\% \\ w/o attention mechanism & 0.3619 & 58.81\% & 26.14\% & 40.71\% \\ \hline \hline \end{tabular} \end{table} Table 5: Object detection results. The results indicate that, compared to the tail attention mechanism, the head attention mechanism has a stronger impact on model performance, and the test results drop significantly after removing it. Additionally, the use of GIOU and the application of the attention mechanism have a similarly positive impact on model performance. For our training strategy, ablation experiments were conducted on the five object detectors (1-5) listed above. These experiments covered 1) training on the source domain only, 2) fine-tuning with CycleGAN-generated images, 3) fine-tuning with CycleGAN-generated images plus data augmentation, and 4) fine-tuning with attention-based CycleGAN images plus data augmentation. Regardless of the object detector used, there was a significant improvement from training solely in the source domain to fine-tuning with images generated by the generative adversarial network for domain adaptation. Observing the results of our ablation experiments, we can conclude that each of our components is useful. The results show that using GIOU instead of the traditional IOU leads to better experimental results. IOU is a commonly used criterion in object detection, but it has limitations; for instance, it may be inaccurate when evaluating objects of different sizes, shapes, and orientations. To address this issue, GIOU was proposed. In our model, GIOU considers the offset, width difference, and height difference between the predicted box and the ground truth box, making it more sensitive to changes in object shape and orientation. Fig. 6: Target domain image generation result. The first row displays the source domain image, followed by the second row which showcases the heat map generated by the source domain generator. The third row displays the source domain image generated by the generator. 
The fourth and fifth rows show the heat map and image generated by the target domain generator, respectively. The sixth row displays the heat map of the source domain picture generated by the source domain generator using the target domain generated picture, while the seventh row displays the source domain picture generated from the target domain generated picture. While IOU only considers the overlap area of the two boxes, GIOU additionally considers the smallest box enclosing both of them and penalizes the part of that enclosing box not covered by their union, which better accounts for changes in object size and for boxes with little or no overlap. GIOU also takes into account the offset between the predicted box and the ground truth box, making it more sensitive to changes in object position. Furthermore, GIOU is continuous and differentiable, making it more stable and better suited for optimizing object detection models. Regarding the attention ablation experiments with CBAM, we observed that using this module in any configuration improved model performance to some extent. We suspect that adding this module increases model capacity, leading to better performance. The results also show that adding CBAM at different positions has varying effects on model performance. In our model, CBAM enhances feature expression by adaptively adjusting the weights of different feature channels through channel attention, which improves the model's perception of objects and thus detection accuracy. Additionally, CBAM suppresses background interference by adaptively learning the spatial correlation of different positions in the feature map through spatial attention, which helps reduce false detections and improves robustness. Furthermore, CBAM can adaptively adjust the correlation between different feature channels and spatial positions, thereby improving the model's generalization ability and stability, reducing overfitting, and improving reliability. Finally, the ablation also lets us verify the suitability of our framework by examining whether the decision to use CycleGAN had a decisive impact on the experimental results. ## 6 Conclusion Based on our experiments and results, we can conclude that our proposed cross-domain car detection model is effective in improving target detection accuracy in the target domain. CDMA includes several novelties, such as a complete cross-domain target detection framework, an unpaired target-domain image generation module with an integrated convolutional attention mechanism, the use of Generalized Intersection over Union (GIOU) as the loss function of the target detection framework, a target detection model with an integrated two-head Convolutional Block Attention Module, and an effective data enhancement method. 
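For reference, the Generalized Intersection over Union can be computed with the standard formulation of Rezatofighi et al.; the sketch below, for boxes given as \((x_{1},y_{1},x_{2},y_{2})\), is a generic implementation rather than the paper's code, and the corresponding regression loss is \(1-\mathrm{GIoU}\).

```python
import torch


def giou(boxes1, boxes2, eps=1e-7):
    """Generalized IoU for two (N, 4) tensors of axis-aligned boxes (x1, y1, x2, y2)."""
    # Intersection.
    x1 = torch.max(boxes1[:, 0], boxes2[:, 0])
    y1 = torch.max(boxes1[:, 1], boxes2[:, 1])
    x2 = torch.min(boxes1[:, 2], boxes2[:, 2])
    y2 = torch.min(boxes1[:, 3], boxes2[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)

    area1 = (boxes1[:, 2] - boxes1[:, 0]) * (boxes1[:, 3] - boxes1[:, 1])
    area2 = (boxes2[:, 2] - boxes2[:, 0]) * (boxes2[:, 3] - boxes2[:, 1])
    union = area1 + area2 - inter
    iou = inter / (union + eps)

    # Smallest enclosing box; its uncovered area penalises non-overlapping predictions.
    cx1 = torch.min(boxes1[:, 0], boxes2[:, 0])
    cy1 = torch.min(boxes1[:, 1], boxes2[:, 1])
    cx2 = torch.max(boxes1[:, 2], boxes2[:, 2])
    cy2 = torch.max(boxes1[:, 3], boxes2[:, 3])
    c_area = (cx2 - cx1) * (cy2 - cy1)

    return iou - (c_area - union) / (c_area + eps)
```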
Our experimental results showed that our framework, which incorporates convolutional attention mechanisms designed to enhance the model's focus on the car headlights feature, significantly improved the performance of the cross-domain vehicle target detection model by 40% compared to the model without \begin{table} \begin{tabular}{l l c c c c} \hline \hline Models & Training Strategy & F1-Score & Recall & Precision & mAP \\ \hline \multirow{4}{*}{Faster-RCNN} & Source domain & 0.3568 & 57.66\% & 25.83\% & 39.29\% \\ & + Cyclegan & 0.4362 & 69.12\% & 31.86\% & 54.41\% \\ & + Data augmentati & 0.4479 & 72.81\% & 32.42\% & 56.78\% \\ & + Attention mechanism in cyclegan & 0.4598 & 73.62\% & 33.43\% & 58.19\% \\ \hline \multirow{4}{*}{+GIOU} & Source domain & 0.3619 & 58.81\% & 26.14\% & 40.71\% \\ & + Cyclegan & 0.4304 & 69.49\% & 31.77\% & 54.64\% \\ & + Data augmentati & 0.4377 & 70.22\% & 31.79\% & 55.68\% \\ & + Attention mechanism in cyclegan & 0.4740 & 74.18\% & 34.83\% & 58.26\% \\ \hline \multirow{4}{*}{+Attention mechanism in the head} & Source domain & 0.3558 & 59.93\% & 25.30\% & 41.15\% \\ & + Cyclegan & 0.4030 & 70.66\% & 32.41\% & 53.28\% \\ \cline{1-1} & + Data augmentati & 0.4297 & 72.97\% & 30.45\% & 57.42\% \\ \cline{1-1} & + Attention mechanism in cyclegan & 0.4867 & **74.49**\% & 36.14\% & 58.81\% \\ \hline \multirow{4}{*}{+Attention mechanism} & Source domain & 0.3670 & 59.40\% & 26.55\% & 40.78\% \\ & + Cyclegan & 0.4404 & 69.71\% & 32.19\% & 54.39\% \\ \cline{1-1} & + Data augmentati & 0.4172 & 71.35\% & 29.48\% & 57.11\% \\ \cline{1-1} & + Attention mechanism in cyclegan & 0.4767 & 73.37\% & 35.30\% & 59.22\% \\ \hline \multirow{4}{*}{+Attention mechanism} & Source domain & 0.3757 & 60.12\% & 27.32\% & 41.42\% \\ \cline{1-1} & + Cyclegan & 0.4573 & 73.26\% & 33.24\% & 54.92\% \\ \cline{1-1} & + Data augmentati & 0.4383 & 73.37\% & 31.25\% & 57.69\% \\ \cline{1-1} & + Attention mechanism in cyclegan & **0.4891** & 73.88\% & **36.55**\% & **59.97**\% \\ \hline \hline \end{tabular} \end{table} Table 6: Ablation experiment. our framework. Furthermore, our approach had a significant impact on cross-domain vehicle recognition. By fine-tuning the target detector using the image generator in the target domain based on source domain training, we achieved high-accuracy cross-domain target detection even in the absence of the target domain. Regarding future research directions, our approach can be extended to other domains, such as pedestrian detection or object detection in other scenarios. Future research can explore the effectiveness of our approach in these domains and further improve the accuracy of cross-domain target detection. Additionally, while our approach achieved high-accuracy cross-domain target detection in the absence of the target domain, it still requires labelled source data. Future research can explore the use of unsupervised domain adaptation techniques to further reduce the reliance on labelled data and improve the scalability of our approach. Furthermore, future research can explore the use of other generative models and investigate their effectiveness in cross-domain target detection. Lastly, while our approach achieved significant improvements in cross-domain target detection, there is still room for further improvement. Future research can explore the use of other attention mechanisms or loss functions to further enhance the detection efficiency of the car detector in the target domain. 
Overall, our approach effectively addresses the challenges of cross-domain target detection and significantly improves detection accuracy. We believe that our work provides a solid foundation for future research in cross-domain target detection and can inspire new ideas and approaches for improving the accuracy and effectiveness of cross-domain target detection models. ## Declaration During the preparation of this work, we used ChatGPT to improve the language and readability of the paper. After using this tool, all the authors reviewed and edited the content as needed and take full responsibility for the content of the publication. All the authors read and revised the manuscript.
車検知、特にカメラによる視認を通して、コンピュータビジョン分野における重要な焦点であり、幅広い採用が進んでいます。現状の車検知システムは良好な検出能力を持っていますが、車との近接、光強度、環境可視性の要因などにより、信頼性の高い検出は依然として課題となっています。これらの課題に対処するために、私たちは、車検知を自律運転やその他の領域に適用する際に、統合された convolutio nal block attention メカニズム(CDMA)を搭載したクロスドメインの車検知モデルを提案しました。CDMAには、以下の幾つかの新技術が含まれており、1)クロスドメインのターゲット検知フレームワークを構築する。2)統合された convolutio nal attention メカニズムを備えた未ペアのターゲットドメイン画像生成モジュールを開発する。3)ターゲット検知フレームワークの損失関数を Generalized Intersection over Union (GIOU)
2309.06421
AGMDT: Virtual Staining of Renal Histology Images with Adjacency-Guided Multi-Domain Transfer
Renal pathology, as the gold standard of kidney disease diagnosis, requires doctors to analyze a series of tissue slices stained by H&E staining and special staining like Masson, PASM, and PAS, respectively. These special staining methods are costly, time-consuming, and hard to standardize for wide use especially in primary hospitals. Advances of supervised learning methods have enabled the virtually conversion of H&E images into special staining images, but achieving pixel-to-pixel alignment for training remains challenging. In contrast, unsupervised learning methods regarding different stains as different style transfer domains can utilize unpaired data, but they ignore the spatial inter-domain correlations and thus decrease the trustworthiness of structural details for diagnosis. In this paper, we propose a novel virtual staining framework AGMDT to translate images into other domains by avoiding pixel-level alignment and meanwhile utilizing the correlations among adjacent tissue slices. We first build a high-quality multi-domain renal histological dataset where each specimen case comprises a series of slices stained in various ways. Based on it, the proposed framework AGMDT discovers patch-level aligned pairs across the serial slices of multi-domains through glomerulus detection and bipartite graph matching, and utilizes such correlations to supervise the end-to-end model for multi-domain staining transformation. Experimental results show that the proposed AGMDT achieves a good balance between the precise pixel-level alignment and unpaired domain transfer by exploiting correlations across multi-domain serial pathological slices, and outperforms the state-of-the-art methods in both quantitative measure and morphological details.
Tao Ma, Chao Zhang, Min Lu, Lin Luo
2023-09-12T17:37:56
http://arxiv.org/abs/2309.06421v2
# AGMDT: Virtual Staining of Renal Histology Images with Adjacency-Guided Multi-Domain Transfer ###### Abstract Renal pathology, as the gold standard of kidney disease diagnosis, requires doctors to analyze a series of tissue slices stained by H&E staining and special staining like Masson, PASM, and PAS, respectively. These special staining methods are costly, time-consuming, and hard to standardize for wide use especially in primary hospitals. Advances of supervised learning methods have enabled the virtually conversion of H&E images into special staining images, but achieving pixel-to-pixel alignment for training remains challenging. In contrast, unsupervised learning methods regarding different stains as different style transfer domains can utilize unpaired data, but they ignore the spatial inter-domain correlations and thus decrease the trustworthiness of structural details for diagnosis. In this paper, we propose a novel virtual staining framework AGMDT to translate images into other domains by avoiding pixel-level alignment and meanwhile utilizing the correlations among adjacent tissue slices. We first build a high-quality multi-domain renal histological dataset where each specimen case comprises a series of slices stained in various ways. Based on it, the proposed framework AGMDT discovers patch-level aligned pairs across the serial slices of multi-domains through glomerulus detection and bipartite graph matching, and utilizes such correlations to supervise the end-to-end model for multi-domain staining transformation. Experimental results show that the proposed AGMDT achieves a good balance between the precise pixel-level alignment and unpaired domain transfer by exploiting correlations across multi-domain serial pathological slices, and outperforms the state-of-the-art methods in both quantitative measure and morphological details. + Footnote †: 10: 2023: The copyright of this document resides with its authors. It may be distributed unchanged freely in print or electronic forms. * These authors contributed equally to this work. ** Corresponding author. Introduction Pathological examination is the gold standard of clinical diagnosis. One essential step is the staining in pathological slide preparation, which aims to highlight tissue structures and enhance lesion visibility. In renal pathology, pathologists utilize four types of stains, namely: H&E, Masson, PASM, and PAS, each representing unique structural features for diagnosis. For example, the basic H&E staining presents cell nucleus to be blue and cytoplasms to be pink, while PAS stains the contours of extracellular matrices such as the basal membrane of glomerulus (kidney spherule), tubules, mesangial matrix to facilitate identification of inherent cell types. Compared to PAS, PASM performs better for thickened basal membrane lesions and stains the basal membrane, mesangial matrix, and type IV collagen black to identify cell types by their location to the basal membrane of glomerulus. Masson's trichrome stain is widely used for staining collagen fibers, where it portrays the basal membrane and type III collagen as blue or green, and immune complexes, plasma, and fibrinogen as red. Among the four types, H&E staining is the most commonly used, while the three special staining methods also have their unique value in renal pathological diagnose. 
However, since more samples need to be taken, and the procedures of special staining are complex with the staining effects highly relying on experienced operators, multi-staining increases the cost and uncertainty of diagnosis significantly. In the field of histopathology-assisted diagnosis, some deep learning-based methods[] have proven effective. The advances of deep learning technologies also trigger virtual staining as a promising alternative. Leveraging Generative Adversarial Network (GAN)[], researchers can transform stained tissues from one type to another digitally. Rivenson _et al._[] uses deep learning on autofluorescence images of unlabeled histology samples to convert unstained slices into images of various staining effects, where the virtually-stained images closely match the ones under standard chemical staining. On this basis, de Haan _et al._[] employes autofluorescence as an intermediate modality and trained a supervised neural network to virtually stain H&E histological images into virtually-stained images of special stains such as Masson, PAS, and silver staining (PASM). Within deep learning technologies, supervised methods exhibit high accuracy and reliability, but since one tissue slice cannot be stained multiple times, pixel-to-pixel cross-domain alignment is hard to achieve, and additional channels such as autofluorescence must be used as mediators. Zeng _et al._[] proposed a semi-supervised virtual staining method to associate adjacent slices utilizing patch labels of binary progesterone receptors (PR) results and transfer H&E images into immunohistochemistry (IHC) images. But though it can preserve structural consistency between H&E and IHC slices, it highly depends on the binary PR labels and is not easy to generalize into other staining types. There are also some unsupervised methods applied to staining transfer. UGATIT [] is proven to visually perform well in single-modal unsupervised stain translation task. Further, based on StarGAN[], Lin _et al._[] proposed UMDST, a multi-domain stain transfer method for kidney histological images, using a single network to generate multiple types of virtually stained images. However, due to the ignorance of spacial alignment of structural details, unsupervised methods sometimes tend to produce pseudo staining artifacts of anatomical structures, especially for important diagnostic features such as glomerular structures. This paper proposes a novel adjacency-guided multi-domain transfer framework (AG-MDT) to transfer renal histology images into multiple staining types, exploiting the correlations across adjacent slices with different stains. The framework uses a generator-discriminator basis, where an adaptive paring module is introduced using glomerulus detection and bipartite graph matching to obtain patch-level aligned pairs in adjacent slices, and a multi-domain stain transferring module guides the training process with the correlations across pairs from adjacent tissue slices. Special loss function is designed to reflect the patch-level adjacent-slice correlations when adjusting the network. We also build a full-stack renal histological dataset. Each case in our dataset has all four staining types: H&E, Masson, PASM, PAS that applied to adjacent slices of the same tissue. In addition, the dataset contains 32,413 pairs of glomeruli aligned at the patch level. 
To our knowledge, this is the first uniform virtual staining framework that can generate multi-stained histological images utilizing correlations across adjacent slices. In addition, we provide a high-quality renal histological dataset for future researchers to train and validate virtual staining methods. Experimental results show that the proposed AGMDT framework outperforms the state-of-the-art staining methods in quantitative measures and meanwhile represents more realistic and accurate structural details for better clinical diagnosis. ## 2 Related work #### Medical Image-to-image Translation I2I translation involves transforming an image from one style to another while preserving its content. In the field of biological imaging, image translation techniques are used for tasks such as color normalization, virtual staining, and MRI modality conversion[]. Compared to natural image translation tasks, image translation tasks in the area of biological imaging require higher accuracy and correctness in the generated images. However, in many cases, obtaining the pixel-level paired data needed for supervised methods is extremely difficult. On the other hand, unsupervised methods are prone to producing artifacts that fail to meet the requirements. To address these challenges, some researchers have attempted to leverage adjacency information and domain knowledge to perform semi-supervised learning using paired data that is not pixel-accurately aligned[], but these methods do not readily apply to the task of I2I translation for special stains. Figure 1: An overview of the adjacency-guided multi-domain transfer framework. #### Histological Image Registration Medical image registration is an essential step in many computer-aided medical image analysis tasks[1]. Histological image registration is particularly challenging due to the large sizes of some images and differences in local structure between slices[2]. Common image registration approaches such as intensity-based and feature-based methods that use hand-crafted image features[2] cannot be applied directly. Song _et al_. developed an unsupervised content classification method that generates multichannel probability images from a roughly aligned image pair, enhancing the structural similarity between image pairs. The emergence of the ANHIR challenge has given new vitality to this field. The lead method proposed by the Mevis team[2] is a 3-step registration pipeline consisting of robust pre-alignment, NGF similarity-based iterative affine registration and B-spline-based non-rigid registration, which works well but may introduce additional overhead. Existing methods have limitations when registering adjacent slices for virtual staining of renal histology images: the registration process does not exploit features unique to renal histology images, such as glomerular morphology and location, and may be time-consuming due to the large size of histology images. ## 3 Method Inspired by pathologists' diagnostic work with adjacent slices of different staining, we propose the AGMDT method for multi-domain stain transfer that incorporates information from adjacent slices. We introduce an adaptive pairing module and an adjacency-guided encoder module to generate and incorporate supervision information from these adjacent slices and better guide the process of stain transfer. In this way, we avoid the limitation of pixel-level aligned data that impedes supervised methods, and achieve better transformation results than unsupervised methods. 
### Adjacency-Guided Multi-Domain Transfer Framework Figure 1 shows the overall structure of AGMDT which is a GAN-based multi-domain stain transfer framework. Our generator comprises three key components: a basic encoder-decoder structure for de-stain and re-stain, a style code generator, and the adjacent supervision component. Specifically, the encoder extracts features from the input image, while the decoder reconstructs the image with the guidance of the target domain style code generated by the style code generator and the input image features extracted by the encoder. To better represent the target domain features, we enhance the style code generator by using a 64-dimensional vector as the stain domain label and by utilizing a multi-head attention layer to extract task-related features. The adjacent supervision component is the key component which consists of two modules: the adaptive pairing module and the adjacency-guided encoder module. Among them, the adaptive pairing module provides paired patches of the multi-domain renal histological dataset to the adjacency-guided encoder module, followed by stain transfer supervision with the aid of the adjacency-guided encoder module. The structure of the adjacency-guided encoder module is based on U-net[2]. It takes a pair of adjacent-slice patches and the corresponding generated patch as input. By predicting the deformable field, we can obtain the moved generated image using this field. Furthermore, the distance between the adjacent-slice and moved generated images is calculated to encourage the model to learn a more realistic style. The discriminator component of AGMDT adopts a dual-discriminator structure: one of the discriminators has the same structure as the discriminator in UMDST and another employs a pre-trained large model structure, which utilizes a frozen pre-trained backbone to extract image features and a learnable head to judge whether an image is real or generated []. The dual-discriminator structure enables the model to effectively distinguish between generated and real images, and to guide the generator in performing virtual staining. ### Adaptive Pairing Module To maximize data utilization, we proposed the adaptive pairing module, which adaptively provides paired patches of the multi-domain renal histological dataset to the semi-supervised part of the model. As shown in Figure 2, the adaptive pairing module completes its task in four stages: initial alignment, glomerulus segmentation and matching, keypoint-based affine registration and patch pairing. The goal of initial alignment is to achieve a rough but fast contour alignment. Due to the inherent structural differences in adjacent tissue slices, only one parameter, the rotation angle, needs to be optimized in this stage []. After calculating the center point of the source and target images, the adaptive pairing module used an exhaustive rotation angle search method to obtain the initial alignment result. In the glomerulus segmentation and matching stage, it is imperative to accomplish the detection and matching of glomeruli in adjacent slice pairs and to store the center coordinates of the paired glomeruli as keypoint information. First, the module used a pre-trained model [] to segment glomeruli on adjacent slice pairs and record their location in the original image. Then the relationship between the center distances of the glomeruli was converted into a bipartite graph matching problem, which was then solved using the Hungarian algorithm to match the glomeruli. 
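The glomerulus matching step just described reduces to a rectangular assignment problem over pairwise centre distances. A minimal sketch using SciPy's Hungarian-algorithm solver is given below; the Euclidean cost and the distance threshold used to reject implausible pairs are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist


def match_glomeruli(centers_a, centers_b, max_dist=200.0):
    """Match detected glomerulus centres between two adjacent slices.

    centers_a: (N, 2) array of (x, y) centres in slice A (e.g. H&E).
    centers_b: (M, 2) array of (x, y) centres in slice B (e.g. PAS).
    Returns index pairs (i, j) whose centre distance is below `max_dist`.
    """
    cost = cdist(np.asarray(centers_a, float), np.asarray(centers_b, float))
    rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_dist]
```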
Finally, the center coordinates of the successfully matched glomerulus image pairs are stored as keypoint pairs. Since a large number of kidney tissue areas still cannot be registered, affine registration based on keypoints [] is performed to improve the overall registration of the kidney tissue. After the first three stages, we obtain a region-level aligned dataset of whole slide images (WSI) along with keypoint pairs of the glomeruli. The region-level aligned dataset is divided into groups, and each group contains H&E/PAS/PASM/Masson-stained adjacent slices of kidney tissue. In the patch pairing stage, the adaptive pairing module combines the H&E image and the images of the other three staining types in each group into image pairs and cuts patches by pairs. After calculating the similarity of the H&E-PAS/PASM/Masson stained patch pairs using lpips [] and fsim [], it filters out the successfully paired patches according to the similarity, which are fed into the semi-supervised part of the generator. Meanwhile, the unpaired patches are fed into the unsupervised part of the generator for training, making full use of the data. In the process, the module also reuses the keypoint pairs to build a glomerulus-aligned renal histological dataset. ### Loss Function To incorporate adjacent-slice information as supervision, we design the following loss function. The generator's loss consists of six components: the adversarial loss \(l_{adv}^{G}\), the classification loss \(l_{cls}^{G}\), the auxiliary losses \(l_{padv}^{G}\) and \(l_{pcls}^{G}\), the cycle loss \(l_{cyc}\), the identity loss \(l_{idt}\), and the adjacency-guided loss \(l_{adj}\), which measures the discrepancy between the generated data and its corresponding adjacent-slice data. The loss function is formulated as follows: \[l^{G}=\lambda_{1}\times(l^{G}_{adv}+l^{G}_{cls}+l_{cyc}+l_{idt})+\lambda_{2}\times(l^{G}_{padv}+l^{G}_{pcls})+\lambda_{3}\times l_{adj} \tag{1}\] We denote the adjacency-guided encoder as \(A\) and the encoder-decoder module for de-staining and re-staining as \(S\); the formula for computing \(l_{adj}\) is as follows: \[l_{adj}=\mathbb{E}_{x,\widetilde{y}}\left[\|\nabla A(S(x),\widetilde{y})\|^{2}\right]+\mathbb{E}_{x,\widetilde{y}}\left[\|\widetilde{y}-S(x)\circ A(S(x),\widetilde{y})\|_{1}\right] \tag{2}\] Here, \(x\) refers to the input image, while \(\widetilde{y}\) represents its corresponding adjacent-slice image. When dealing with unpaired data, the adjacency-guided loss is not calculated, so the loss function for unpaired data is defined as follows: \[l^{G}=\lambda_{1}\times(l^{G}_{adv}+l^{G}_{cls}+l_{cyc}+l_{idt})+\lambda_{2}\times(l^{G}_{padv}+l^{G}_{pcls}) \tag{3}\] As for the discriminator, the loss function remains consistent across both paired and unpaired data. The discriminator's loss consists of five components: the adversarial losses \(l^{D}_{adv1}\) and \(l^{D}_{adv2}\), the classification loss \(l^{D}_{cls}\), the auxiliary losses \(l^{D}_{padv}\) and \(l^{D}_{pcls}\), and the pre-trained discriminator loss \(l^{D}_{frz}\): \[l^{D}=\lambda_{1}\times(l^{D}_{adv}+l^{D}_{cls})+\lambda_{2}\times(l^{D}_{padv}+l^{D}_{pcls})+\lambda_{3}\times l^{D}_{frz} \tag{4}\] The details of the loss functions can be found in the supplementary materials. 
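A minimal sketch of how the adjacency-guided loss of Eq. (2) can be evaluated, assuming the adjacency-guided encoder \(A\) outputs a dense pixel displacement field and that \(\circ\) denotes bilinear warping of the generated patch \(S(x)\) by that field; the normalisation details and the equal weighting of the two terms are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F


def warp(image, flow):
    """Warp `image` (B, C, H, W) with a displacement field `flow` (B, 2, H, W) given in pixels."""
    b, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=image.device),
                            torch.arange(w, device=image.device), indexing="ij")
    grid_x = (xs + flow[:, 0]) / (w - 1) * 2 - 1   # normalise to [-1, 1] for grid_sample
    grid_y = (ys + flow[:, 1]) / (h - 1) * 2 - 1
    grid = torch.stack([grid_x, grid_y], dim=-1)   # (B, H, W, 2)
    return F.grid_sample(image, grid, align_corners=True)


def adjacency_loss(generated, adjacent, flow):
    """Eq. (2): smoothness of the predicted field plus L1 between the warped
    generated patch and the adjacent-slice patch."""
    dx = flow[:, :, :, 1:] - flow[:, :, :, :-1]
    dy = flow[:, :, 1:, :] - flow[:, :, :-1, :]
    smooth = (dx ** 2).mean() + (dy ** 2).mean()
    recon = (adjacent - warp(generated, flow)).abs().mean()
    return smooth + recon
```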
## 4 Experiments ### Dataset To ensure the accuracy and reliability of our research data, we collaborated with experts from the pathology department of Peking University Third Hospital to conduct sample preparation and data collection. We obtained two sets of adjacent slices from each kidney tissue, with each set consisting of four slices stained with H&E, Masson, PASM, and PAS, respectively. We ensured that the thickness of the slices was within the range of 1-2 \(\mu m\). All tissue slices were extracted from pre-existing specimens, which were subjected to meticulous de-identification of any patient-related information. Therefore, this work does not impede conventional nursing practices or sample collection procedures. After acquiring the specimens, the slices were scanned using Jiangfeng Bio's KF-pro-005 whole slice scanner equipped with a \(\times\)40 objective lens. Figure 2: The structure of adaptive pairing module, including initial alignment, glomerulus segmentation and matching, keypoint-based affine registration and patch pairing. After the processing of the adaptive pairing module, we have successfully created a comprehensive renal histological dataset obtained from serial slices stained with H&E, PAS, PASM, and Masson. The dataset contains 188 whole slide images from 22 patients with pathologists' WSI-level diagnoses, and 32,413 pairs of patch-level aligned glomeruli. In short, the glomerulus-aligned dataset fills the gap of high-quality, open-source multiple staining datasets in the field of renal histology. It will facilitate the evaluation and further development of virtual staining techniques. ### Implementation Details Our experiments were implemented on a device with an Intel(R) Xeon(R) Gold 5218 CPU, and one NVidia Telsa V100 GPU. We used the PyTorch framework to implement our algorithm. In the staining transfer process, we trained our model for 600,000 iterations.Following UMDST's settings, we employed Adam optimizer with a learning rate of 1e-4 and linearly decaying it at 150,000 iterations. Both the training and testing batch sizes were set to 1. ### Results Here, we present the stain transfer results of our proposed method and baselines. Our method outperforms all baselines in terms of both quantitative evaluation and visual effects. The evaluated baselines include UGATIT and UMDST, which have previously demonstrated effective stain transfer[], as well as the structurally simpler MUNIT[] and FUNIT[] methods. Among these approaches, UGATIT and MUNIT require separate models for each specific stain transfer. In contrast, our approach, UMDST, and FUNIT achieve multi-domain staining transfer using a single model. Table 1 presents the quantitative evaluation results of different methods. We use two metrics, DISTS[] and DBCNN[], to assess the quality of stain transfer results. Both DISTS and DBCNN are effective indicators for evaluating image quality. The results in Table 1 indicate that our approach produces stained images of higher quality, with greater similarity to real adjacent-slice images and higher structural and textural similarity when compared to other methods for all three special staining transfer task. Figure 3 illustrates the visual results of stain transfer using both AGMDT and baselines. The images generated by AGMDT exhibited correct anatomical structures and superior color mapping performance for all three stain transfer tasks. While MUNIT and FUNIT learned good style information, they failed to preserve anatomical structure accurately. 
UGATIT, \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{H\&E2MASS} & \multicolumn{2}{c}{H\&E2PASM} & \multicolumn{2}{c}{H\&E2PAS} \\ Methods & DISTS\(\downarrow\) & DBCNN\(\uparrow\) & DISTS\(\downarrow\) & DBCNN\(\uparrow\) & DISTS\(\downarrow\) & DBCNN\(\uparrow\) \\ \hline FUNIT[] & 0.2452 & 56.52 & 0.2496 & 54.01 & 0.2259 & 53.17 \\ MUNIT[] & 0.3207 & 29.76 & 0.3037 & 32.67 & 0.2764 & 36.02 \\ UGATIT[] & 0.2374 & 57.18 & 0.2467 & 55.64 & 0.2166 & 51.91 \\ UMDST[] & 0.2064 & 53.44 & 0.2770 & 54.02 & 0.2162 & 51.90 \\ Ours(w/o Adj) & 0.2143 & 58.75 & 0.2468 & 60.61 & 0.1917 & 55.74 \\ Ours (w Adj) & **0.2020** & **64.49** & **0.2453** & **66.23** & **0.1909** & **59.85** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of different methods. UMDST, and our proposed method were able to retain the anatomical structure effectively. However, our method produced more realistic color mapping results that closely resembled adjacent-slice images. This improved realism is attributed to the effective incorporation of adjacent-slice information constraints in our framework. ### Ablation Study We conducted an experiment to evaluate the impact of adjacent-slice information by comparing our model to a control model that excluded the adjacent supervision component while maintaining identical settings. As demonstrated in Figure 4, virtual staining results were generated for Masson, PASM, and PAS staining using both models - with and without the adjacent supervision component. For Masson virtual staining, the red rectangle in the figure illustrates that our model successfully learned the color mapping relationship between H&E and Masson by incorporating adjacent-slice information constraints, while the model without these constraints failed to accurately map the glomerular internal tissue structure Figure 3: The virtual staining results of AGMDT and baseline methods.The first row shows the input H&E image, and the second row represents the adjacent slice reference of the corresponding virtual staining target domain. Lines 3-7 show the virtual staining results of each method. to red, instead displaying it as green. The adjacent supervision component also produced clearer and more realistic basement membrane structures for virtual PASM staining. Similar improvements can also be observed for PAS staining in terms of increased clarity and accuracy of basement membrane structure representation after the integration of adjacent-slice information constraints. ## 5 Conclusion In this paper, we introduce an adjacency-guided multi-domain transfer framework to virtually transfer renal histology images into special staining types. The framework is a uniform one to generate multiple staining effects which includes a patch-level multi-domain adaptive paring module based on bipartite graph matching, and an adjacency-guided encoder with adjacency supervision. This is the first multi-modal staining transfer method to incorporate adjacent-slice information. A full-stack renal histological dataset based on adjacent slices of the four typical staining is also created, with pathologists' WSI-level diagnoses and 32,413 pairs of patch-level aligned glomeruli. Experimental results show that this method achieves the best virtual staining effect compared to state-of-the-art methods in terms of both quantitative measurements and visual details for clinical diagnosis. 
The AGMDT framework can also be extended to virtual staining of other stain types such as IHC, provided that the corresponding adjacent-slice information is included in the training dataset. ## Acknowledgements We thank Qiuchuan Liang for help with data processing. Figure 4: The virtual staining results of our model with or without the AGE module.
腎臓病理学は、腎臓病の診断の基準となるため、医師はH&E染色とMasson、PASM、PASなどの特殊染色によって組織の切片を分析する必要があります。これらの特殊染色方法はコストがかかり、時間がかかり、広範な利用には標準化が難しい、特に初級病院では。指導的学習方法の進歩により、H&E画像を特殊染色画像に変換する事が可能となりましたが、訓練のためにピクセルピクセルなアライメントを達成することは困難です。一方、異なる染料を異なるスタイル転移ドメインとして扱う無监督学習方法では、不Pairedデータを利用できますが、空間的な相互ドメイン相関を無視し、診断のための構造的な詳細の信頼性を低下させます。この論文では、AGMDTという新しいバーチャル染色フレームワークを提案しました。これは、ピクセルレベルのアライメントを回避しながら、他のド
2309.05128
Robot-assisted Soil Apparent Electrical Conductivity Measurements in Orchards
Soil apparent electrical conductivity (ECa) is a vital metric in Precision Agriculture and Smart Farming, as it is used for optimal water content management, geological mapping, and yield prediction. Several existing methods seeking to estimate soil electrical conductivity are available, including physical soil sampling, ground sensor installation and monitoring, and the use of sensors that can obtain proximal ECa estimates. However, such methods can be either very laborious and/or too costly for practical use over larger field canopies. Robot-assisted ECa measurements, in contrast, may offer a scalable and cost-effective solution. In this work, we present one such solution that involves a ground mobile robot equipped with a customized and adjustable platform to hold an Electromagnetic Induction (EMI) sensor to perform semi-autonomous and on-demand ECa measurements under various field conditions. The platform is designed to be easily re-configurable in terms of sensor placement; results from testing for traversability and robot-to-sensor interference across multiple case studies help establish appropriate tradeoffs for sensor placement. Further, a developed simulation software package enables rapid and accessible estimation of terrain traversability in relation to desired EMI sensor placement. Extensive experimental evaluation across different fields demonstrates that the obtained robot-assisted ECa measurements are of high linearity compared with the ground truth (data collected manually by a handheld EMI sensor) by scoring more than $90\%$ in Pearson correlation coefficient in both plot measurements and estimated ECa maps generated by kriging interpolation. The proposed robotic solution supports autonomous behavior development in the field since it utilizes the ROS navigation stack along with the RTK GNSS positioning data and features various ranging sensors.
Dimitrios Chatziparaschis, Elia Scudiero, Konstantinos Karydis
2023-09-10T20:23:00
http://arxiv.org/abs/2309.05128v1
# Robot-assisted Soil Apparent Electrical Conductivity Measurements in Orchards ###### Abstract Soil apparent electrical conductivity (ECa) is a vital metric in Precision Agriculture and Smart Farming, as it is used for optimal water content management, geological mapping, and yield prediction. Several existing methods seeking to estimate soil electrical conductivity are available, including physical soil sampling, ground sensor installation and monitoring, and the use of sensors that can obtain proximal ECa estimates. However, such methods can be either very laborious and/or too costly for practical use over larger field conapies. Robot-assisted ECa measurements, in contrast, may offer a scalable and cost-effective solution. In this work, we present one such solution that involves a ground mobile robot equipped with a customized and adjustable platform to hold an Electromagnetic Induction (EMI) sensor to perform semi-autonomous and on-demand ECa measurements under various field conditions. The platform is designed to be easily re-configurable in terms of sensor placement; results from testing for traversability and robot-to-sensor interference across multiple case studies help establish appropriate tradeoffs for sensor placement. Further, a developed simulation software package enables rapid and accessible estimation of terrain traversability in relation to desired EMI sensor placement. Extensive experimental evaluation across different fields demonstrates that the obtained robot-assisted ECa measurements are of high linearity compared with the ground truth (data collected manually by a handheld EMI sensor) by scoring more than \(90\%\) in Pearson correlation coefficient in both plot measurements and estimated soil apparent electrical conductivity maps generated by kriging interpolation. The proposed robotic solution supports autonomous behavior development in the field since it utilizes the Robot Operating System (ROS) navigation stack along with the Real-Time Kinematic (RTK) GNSS positioning data and features various ranging sensors. ## I Introduction Agricultural geophysics employs non-invasive sensing techniques to characterize soil spatial variability and provide valuable insights into soil-plant-management relationships [1, 2]. Specifically, geospatial information on soil characteristics can indicate optimal cultivation approaches and may provide an estimate of expected yields [3, 4]. Soil salinity (i.e. salt content) is a crucial metric used to describe the soil characteristics and water content of an area. As such, it has been used widely across applications such as in agriculture, water management, geological mapping, and engineering surveys [5, 6, 7]. Information about bulk density, minerals content, pH, soil temperature, and more, can be evaluated by measuring the ECa of the field and generating a profile of the surveyed land. Thus, approximating the ECa spatial variability of a field can provide a broader understanding of the water flow through the ground, pinpoint any spots with irregular soil patterns, and finally indicate the necessity of supplying additive plant nutrients or different irrigation approaches [8]. In general, soil conductivity is mainly estimated in-situ, using three distinctive methods [9]; moisture meters (hydrometers) installed into the ground, time-domain reflectometers, and measurement of soil electromagnetic induction (EMI). 
In the first two cases, growers install and use decentralized sensor arrays to gather information from selected points on the field [10, 11, 12]. A main drawback of these approaches is that they provide discrete measurements and hence sparse information over the complete field, which may lead to less efficient agricultural tactics and higher costs while aiming to scale over larger fields. On the other side, ECa is measured geospatially (on-the-go) with Electrical Resistivity methods and EMI sensors. The EMI measurements of soil apparent electrical conductivity can be performed in a continuous manner and proximally, whereby a farm worker walks through the field holding the EMI sensor or a field vehicle that carries the sensor is driven around [5] (Fig. 1). In this way, a more spatially-dense belief about field irrigation is formed as the sensor can either gather continuous measurements of selected regions within the field or sparse measurements from specific points. Figure 0(a) depicts an instance from a manual survey of soil moisture in an olive tree field, with the use of the GF CMD-Tiny EMI instrument. Despite the benefits afforded by continuous EMI soil measurements, a noteworthy drawback is that such surveys may not scale well in larger fields as they can become quite labor-intensive depending on the broader area of inspection and weather conditions (e.g., heat fatigue). The standard of practice (besides manual operation) is to use an ATV (Fig. 0(b)) that carries a (often larger) sensor; however, due to its size, it may be hard to get the sensor close to the tree roots where estimating soil apparent electrical conductivity is most crucial. A robot-assisted solution has the potential to get the sensor close to the roots (as in the manual case) in a less laborious way (as in the ATV case) and thus bridge the gap between the two main standard of practice methods by offering an attractive alternative. The use of (mobile) robots has been offering key assistance in contemporary survey and agronomy processes, for example to better understand field conditions with the use of onboard sensors, provide real-time field modeling and decision-making on agricultural tactics, and at cases offer aid to farm workers (e.g., transporting workers across the field or elevating them to reach parts of the tree that are high off the ground [13]). In many scenarios, Unmanned Ground Vehicles (UGVs) are used to collect data from the ground [14, 15, 16], while other studies utilize aerial data taken by Unmanned Aerial Vehicles (UAVs) [17, 18, 19] and make decisions [20, 21] for the surveying area on-the-fly. As a follow-up, there is research that demonstrates collaborative approaches with both aerial and ground robots [22, 23, 24] which can yield an even more detailed and broader inspection of the field. Out of all types of data collected in the field, the most relevant to this present work concerns soil moisture. During the past years, there have been various approaches aiming to utilize robots for collection of soil moisture measurements. Some examples of related research include mobile robots conducting soil potential of hydrogen (pH) measurements for determination of soil health in the survey site [25], robot manipulators with onboard depth cameras that place soil moisture sensors on the plants [26], and even a UAV-based multispectral system for estimating and modeling soil salinity of the inspected field [27]. 
Thayer \(et\)\(al.\)[28] revealed the NP-hard routing problem of autonomous robots in precision irrigation scenarios and developed two domain-specific heuristic approaches. In more detail, Pulido Fentanes \(et\)\(al.\)[29] demonstrated a mobile robot that performs soil moisture measurements autonomously in the field, using a cosmic-ray sensor. The main drawback of using such sensory equipment is the increase in the overall cost of the application, which might be accurate but financially inefficient for some farming applications. On the aerial side, Tseng \(et\)\(al.\)[30] showed the usability of machine learning with aerial imagery to learn and predict the soil moisture conditions on individual plants and large fields through the air. Even though this application reported reduced water consumption by up to 52%, there is a need for a more accurate and robust system for measuring soil moisture levels. In addition, Lukowska \(et\)\(al.\)[31] presented an all-terrain six-wheeled robot, that utilizes a drilling system with a flap design to perform soil sampling in larger fields. Along similar lines, Dimaya \(et\)\(al.\)[32] developed a mobile soil robot collector (SoilBot) to automate soil collection in a sugar cane field. In such cases, as the post-processing of the soil samples will provide detailed information about the moisture level of the sampled soil, there may be approaches for real-time and broader field monitoring robotic applications. In addition, \(Agrobot\)[14] is a farm robot that has been designed to reduce human labor through autonomous seed sowing and soil moisture measuring. Further, Bourgeois \(et\)\(al.\)[33] demonstrated a low-cost and portable land robot, namely RoSS, for soil quality sensing purposes, which can collect soil samples and/or insert soil sensor probes at sampling locations to measure the moisture levels. In these approaches, vital information for the total salinity of the field might be missing as local information is obtained from the water content in the soil of the sampled areas. Campbell \(et\)\(al.\)[34] showcased a small-factor robot with an attached EMI sensor for field-scale soil apparent electrical conductivity measurements. Even though this work presents an efficient approach toward integrating an EMI sensor onto a mobile robot to conduct continuous ECa measurements, it has limited traversability and operational time because of its small size. In this study we present a mid-sized ground mobile robot solution that is able to conduct semi-autonomous and on-demand continuous EMI ECa measurements under various and larger field environments, and obtain a field-scale ECa map. Our proposed solution is based on the Clearpath Jackal UGV, which is equipped with a customized and adjustable platform which can carry the EMI instrument GF CMD-Tiny. The robot supports teleoperation via Bluetooth, it can directly navigate through sending desired waypoints and can execute trajectories as it utilizes its onboard GPS and RTK positioning data along with local and global planners to reach desired goals. The design, hardware and software system integration and testing of this platform are fully presented and evaluated in both simulated and real field-scale scenarios, including over bare fields and in muddy terrains. Fig. 1: (a) Instance of the manual data collection process using the handheld EMI sensor. Manually-collected data serve as ground truth in this work. 
(b) The robot considered in this work is a Clearpath Jackal UGV wheeled robot, carrying a GF CMD-Tiny EMI instrument (long orange cylinder) for ECa measurements alongside a Polaris ATV. (c) GNSS positioning information for the robot to navigate autonomously as well for geo-localization and cross-reference of obtained sensor measurements is provided via an RTK-Base station. The proposed robot platform demonstrates high efficiency in terms of portability, traversability as well as data collection since the robot is found capable to collect data with high linearity compared to handheld (no-robot) cases. Thus, our proposed solution shows promise to serve as a useful tool in modern field survey, ECa mapping, and irrigation scheduling. In the remainder of the manuscript, we first discuss key components (off-the-shelf as well as fabricated in-house) and the overall system design, and offer key system integration information (Section II). An important contribution of this work is a thorough study of the tradeoffs regarding EMI sensor placement on the mobile robot that involves tools and processes we develop for both experimental testing and testing in simulation; this is presented in Section III. Details and key findings from an initial testing phase to validate the preliminary efficacy of the overall system are also reported in that section. Full system evaluation and testing results across multiple trials in two distinctive fields are presented in Section IV. Finally, Section V concludes this manuscript. ## II System Design and Integration of Key Components ### _Soil Conductivity and Employed Sensor_ Electromagnetic induction can help measure the soil apparent electrical conductivity of a field. The main operating principle is based on the evaluation of the induced magnetic field from the ground as transmitted by an electromagnetic conductivity meter. Specifically, the EMI transmitter emits a harmonic signal toward the ground and generates a magnetic field. Since the receiver is placed with the same dipole orientation as the transmitter, it captures the secondary (induced) magnetic field which relates to the ground conductivity, namely out-of-phase measured in \(mS/m\) and the in-phase that is a relative metric to the primary magnetic field and measures the magnetic susceptibility of the area. In our study, we use the CMD-Tiny meter from GF Instruments, which features a compact and lightweight build. This instrument has a control unit module that is used for configuring and logging the EMI measurements and the CMD probe that is the main magnetic sensing module. The latter component has a cylindrical shape with \(50\ cm\) length and \(4.25\ cm\) of diameter, and the total setup weighs \(425\ g\) (Fig. 1a). This EMI instrument can obtain soil conductivity measurements from \(0.35\ m\) up to \(0.7\ m\) in-ground depth according to its setup and selected resolution. ### _Mobile Robot Setup_ We use the Clearpath Robotics Jackal robot platform,1 which is a UGV designed for use in outdoor and rugged all-terrain environments. The Jackal has been used in agricultural robotics research [35, 36], autonomous exploration [37, 38, 39], as well as social-aware navigation [40, 41]. The robot's dimensions are \(50.8\times 43.2\times 25.4\ cm\) with a payload area of \(43\times 32.25\ cm\) for mounting various (OEM and custom) onboard modules. Its available payload capacity reaches \(20\ kg\). The robot features an onboard NVIDIA Jetson AGX Xavier computer that is responsible for all onboard computation. 
On the sensors and actuators side, the robot is equipped with a GNSS receiver, an IMU module, and motorized wheel encoders (besides the custom payloads developed in this work and which we discuss later, or additional sensors like stereo cameras and LiDAR that are routinely deployed on the robot for autonomous navigation [42, 43]). Additionally, the Jackal is a ROS-compatible robot, as it uses the ROS navigation stack and the ROS environment for its main functionality. The total operating time of this robot can reach up to \(4\ hrs\) depending on the use and type of the operating environment. Importantly, the required operating time on the field can vary depending on field size and type, and the desired field mapping resolution. For instance, the surveys conducted herein took place over a total period of \(1.5\ hrs\) (\(3\) surveys each lasting for \(0.5\ hrs\)), in a \(30\times 15\ m\) field, and with the robot moving at a stable linear speed of \(1\ m/s\). Under these settings, the total battery consumption never exceeded \(40\%\) during our experiments in each case, for performing the total ECa mapping of the field. The operational time of the Jackal can be extended by the use of a secondary/additional battery. If an increased amount of surveying time and operation in even larger fields are desired, an alternative commercial wheeled robot can be employed instead (e.g., the Clearpath Robotics Husky,2 and Warthog UGV,3 or the Amiga from Farm-ng4); our method can directly apply to such robots as well. Figure 1b depicts the Jackal robot equipped with various onboard sensors, in comparison to a Polaris ATV that is typically used for mechanized EMI measurements. Footnote 1: [https://clearpathrobotics.com/jackal-small-unmanned-ground-vehicle/](https://clearpathrobotics.com/jackal-small-unmanned-ground-vehicle/) Footnote 2: [https://clearpathrobotics.com/husky-unmanned-ground-vehicle-robot/](https://clearpathrobotics.com/husky-unmanned-ground-vehicle-robot/) Footnote 3: [https://clearpathrobotics.com/warthog-unmanned-ground-vehicle-robot/](https://clearpathrobotics.com/warthog-unmanned-ground-vehicle-robot/) Footnote 4: [https://farm-ng.com/products/ls-maquin-amiga](https://farm-ng.com/products/ls-maquin-amiga) ### _Positioning System Integration for Field Navigation_ Nowadays, high-accuracy Global Navigation Satellite Systems (GNSS) are used (along with field sensors) in precision agriculture to generate, extract, and obtain field observations with spatial information. In many cases, Real-Time Kinematic (RTK) positioning is applied to provide \(cm\)-level accuracy on the captured data by using real-time corrections through an established and calibrated base station. Herein we use the Holybro H-RTK F9P GNSS series as the high-accuracy positioning module for our field experiments, which integrates a differential high-precision multi-band GNSS positioning system. By selecting specific points on the field, we calibrate the RTK base station to obtain its position at \(cm\)-level, and we use telemetry to establish communication with the robot's onboard autopilot hardware. Figure 1c illustrates an H-RTK F9P RTK base station establishment in an outdoor environment. On the robot side, we use the Holybro Pixhawk 4 autopilot module in the rover airframe (UGV mode), along with the Holybro SiK Telemetry V3 \(100\)\(Mhz\) and the GNSS receiver to capture the RTK corrections from the base station.
The MAVROS software5 is utilized to parse the positioning data and use them with the onboard computer through a USB connection at a rate of \(10\)\(Hz\). In this way, the Jackal robot is able to reliably georeference every captured measurement in the field. Footnote 5: [http://wiki.ros.org/mavros](http://wiki.ros.org/mavros) On the navigation side, the robot's captured positions are described in the World Geodetic System 1984 (WGS-84). Additionally, the Jackal uses an extended Kalman Filter [44], which fuses information from the onboard IMU and the wheel encoders to provide the state estimation of the robot (odometry). Since in our case we require the Jackal to be able to follow a predefined GNSS-tagged trajectory, we utilize the \(navsat\_transform\_node\)6 from the ROS navigation stack. Through this approach, initially, all the requested geotagged targets are transformed to the Universal Transverse Mercator (UTM) coordinate system. Given this information and through the positioning data of the robot's odometry, a static transformation is generated to describe both the UTM coordinates of the robot and the targets' position into the local robot's frame. In this way, the requested trajectory can be georeferenced and transformed into Jackal's local frame and thus the robot can follow it. It is worth mentioning that, in case the robot gets into a position where there is limited satellite visibility (i.e. GPS-denied environments), it continues geotagging the captured measurements based on pose belief estimation via fusion of the onboard IMU and wheel encoders odometry data.6 While not employed in this work, our method is autonomous-ready in the sense that it can directly be integrated with waypoint navigation determined readily by onboard sensors (e.g., [45]), where task allocation and motion planning for (newly-perceived) obstacle avoidance can happen online (e.g., [46, 47]). Also, mapped modalities from the flora [48] and the fauna [49] during the survey, can even enhance and provide a multi-modal belief about the field conditions. Footnote 6: [http://docs.ros.org/en/jade/api/robot_localization/html/navsat_transform_node.html](http://docs.ros.org/en/jade/api/robot_localization/html/navsat_transform_node.html) Additionally, on the traversing side, we focus on open-world navigation without the existence of obstructive objects in the path that may cause the robot to get out of track to bypass them. The robot can get over small tree branches or crops as a farm worker would do, but the ground should be relatively uncluttered for the ECa inspection as in a typical survey scenario. Also, as Jackal is an IP62 rugged robot, it has a capable high torque \(4\times 4\) drivetrain allowing it to navigate through muddy and uneven parts similar to a common ATV. In our experiments, we test our proposed system in both dry and muddy environments to demonstrate our system's efficacy. The robot supports a \(20\)\(cm\) wheel diameter (up to \(30\)\(cm\)) and a variety of tire models, such as square (plastic) spike sets, which can make it even more versatile for challenging terrain types that require higher traction. ### _Robot Configuration and the Design of the Sensor Mounting Platform_ As our aim is to attach the CMD-Tiny meter on the Jackal robot, we start with the design of the sensor platform and the definition of its adjustable parameters. The prototype renderings are depicted in Fig. 2. 
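To make the waypoint-following pipeline described above more concrete, the sketch below illustrates the underlying coordinate conversion: a geotagged target is projected to UTM and expressed as a local offset from the robot's survey origin. It is a minimal illustration assuming the third-party `utm` Python package, not the actual onboard ROS code, and the coordinates shown are hypothetical.

```python
# Minimal sketch (not the onboard ROS pipeline): express a GNSS target as a
# local (east, north) offset from the robot's survey origin via UTM projection.
import utm  # third-party package assumed for illustration

def latlon_to_local(target_lat, target_lon, origin_lat, origin_lon):
    """Return the target position in meters relative to the origin (same UTM zone assumed)."""
    te, tn, zone, letter = utm.from_latlon(target_lat, target_lon)
    oe, on, _, _ = utm.from_latlon(origin_lat, origin_lon,
                                   force_zone_number=zone, force_zone_letter=letter)
    return te - oe, tn - on  # east, north offsets in meters

# Hypothetical origin (robot start pose) and waypoint near the survey area
origin = (33.972760, -117.320437)
waypoint = (33.972900, -117.320300)
print(latlon_to_local(*waypoint, *origin))
```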
As the Jackal robot has a limited payload area, we aim to design and use a module that can be attached on the robot's top plate and extend outwards so that the sensor can get in close proximity to the ground. In this way, the robot can carry the EMI sensor and obtain soil conductivity measurements while navigating in a continuous manner. However, the sensitivity of the sensor's magnetic field measurements to other metallic and/or electronic components (like the robot itself) means that module placement on the robot requires a study of some key tradeoffs. In particular, the EMI measurements can be distorted by the presence of other magnetic fields caused by the robot, as well as by random oscillations caused by the robot's movement. Thus, one intuitive solution would be to place the sensor as far as possible from the robot (i.e., a large value for parameter \(d_{b}\)) and as close to the ground as possible (i.e., a small value for parameter \(d_{h}\)). However, as distance \(d_{b}\) increases (Fig. 2), the moment arm of the payload increases, which in turn can lead to higher power consumption of the robot. In addition, the magnitude of the oscillations (which directly affects EMI measurement consistency) will also increase. Finally, the longer the distance, the worse the robot's capability to traverse uneven terrain and/or negotiate dips and bumps that are bound to exist in the field in practice, as the sensor may collide with the ground if height \(d_{h}\) is not sufficiently large. For these reasons, we design an adjustable platform to support multiple configurations of the CMD-Tiny meter's control unit and probe position, relative to the robot's main body, and study the tradeoff between measurement consistency and robot traversability to identify an optimal sensor placement (see Section III below). Fig. 2: CAD rendering of the Jackal's platform to hold the GF CMD-Tiny instrument. The parameters \(d_{h}\) and \(d_{b}\) indicate the adjustable height and distance of the sensor probe, respectively. Figure 2 provides multiple views of the designed platform in CAD (Fusion 360), with all the designed parts included in the final assembly. PVC tubes of \(80\ cm\) length, \(10\ mm\) diameter, and \(SCH\ 80\) wall thickness serve as the basis for an expandable and rigid structure to mount the sensor (cylindrical part) as well as its associated data collection logger (rectangular part placed on top of the two long PVC tubes). Additionally, a shorter PVC tube of \(30\ cm\) length has been installed along with an intermediate support mounted on the Jackal's front bumper to enhance the stability of the overall EMI module. On the robot side, two supports are installed on the Jackal's top chassis to mount the two PVC tubes, and a unified support has been installed on the opposite side to hold the sensor's cylindrical probe. The latter support is crucial as it allows reconfiguring the probe's position and height from the ground, and it can be sturdily fixed in place when in use. Also, the total length of the sensor holder platform can be modified by the supports that are installed on the top chassis. Fabricated components of the platform were all 3D printed in polylactic acid (PLA) material. ## III Development and Calibration for Optimal Sensor Placement The optimal sensor placement relative to the robot body is determined by two factors: 1) electromagnetic interference from the robot chassis and its onboard electronics, and 2) robot terrain traversability.
The optimal solutions to each of these factors on their own are in fact opposing each other. Indeed, to minimize electromagnetic interference, the sensor should be placed as far from the robot as possible. On the contrary, the longer the extension of the sensor holding platform from the robot's center of mass, the worse it becomes to overcome obstacles without risking colliding the sensor with the ground (Fig. 2), in addition to increasing the moment arm of the cantilever which in turn would increase the required motor torque to move without tilting forward (that also leads to higher power consumption and lower operational time). In this section, we study optimal sensor placement in two distinct settings. First (Section III-A), we benchmark experimentally the robot's interference on the EMI measurements for a given fixed distance from the ground (\(d_{h}=5\ cm\)) and for varying distances of the probe (\(d_{b}\in[10,100]\ cm\)). Second (Section III-B), we determine terrain traversability while considering a subset of viable distances identified from the first step and two distinctive sensor height values to better understand the role of robot oscillations onto potential probe collisions with the ground. To make this process systematic, we create a realistic simulated environment and conduct this initial set of traversability evaluations in simulation. This process yields a set of candidate probe placement distances (\(d_{b}\)) as a function of probe height off the ground (\(d_{h}\)). Additionally, we perform feasibility experimental testing in measuring continuous ECa over small areas to both narrow down the range of viable configurations and to validate optimal sensor placement (Section III-C). ### _Determination of Robot Interference in Soil Conductivity Measurements_ The first step to be examined concerns the robot's electromagnetic interference in the soil conductivity measurements. Specifically, the robot is equipped with a high-torque \(4\times 4\) motored drivetrain and various onboard sensors such as a 3D LiDAR and the GNSS receiver. Given this setup, the robot generates electromagnetic fields that may interfere with the electromagnetic field generated by the GF CMD-Tiny sensor as well as the secondary current read by the sensor, and thus cause distorted measurements. _Experimental Procedure__: To examine and mitigate the robot's interference on the EMI measurements, three independent field experiments were conducted under different robot-sensor configurations in which the sensor was placed at predefined positions away from the robot's body to monitor the level of saturation. The fields that were selected for this purpose included a bare field, an olive tree grove, and an orange tree grove, which are located at the USDA-ARS U.S. Salinity Laboratory at the University of California, Riverside (33deg58'21.936" N, -117deg19'13.5732" E). In each field experiment, five distinct field points were selected to measure the soil conductivity at. To get a broad spectrum of values for better understanding of the robot's electromagnetic interference, sampling points were either irrigated recently or non-irrigated. At the beginning of each experiment, handheld measurements were performed with the CMD-Tiny sensor, with no presence of any device that may generate additional electromagnetic fields. These measurements serve as the baseline. 
With these as a reference, we then placed the UGV with the CMD-Tiny sensor in different configurations and repeated the data collection process, each time increasing their relative distance within the range of \([10,100]\ cm\) at a step of \(10\ cm\). Soil conductivity data collected from these field tests are shown in Table I. Column "\(\infty\)" represents the handheld measurements where there is no external electromagnetic interference; the reported values are used as reference measurements. The columns from \(10\ cm\) to \(100\ cm\) represent the soil conductivity measurements that were made at the corresponding relative distances of the robot and the sensor's probe, at the same locations as in the handheld case. Mean and one-standard deviation values from the five trials in each of the 33 distinctive cases shown in Table I are also provided. _Key Findings_: Shorter probe distances (\(\{10,20,30\}\ cm\)) exhibit both significantly higher mean values and more variability (i.e., higher one-standard deviation) across all three fields. Especially the \(d_{b}=10\ cm\) case exhibits excessive interference and clear evidence of saturation (the bare field depicts this most clearly). In contrast, larger \(d_{b}\) values exhibit smaller interference, as can be readily verified by the reported means and one-standard deviations that converge to the baseline values. In addition, it can be observed that after some threshold distance, means and one-standard deviations do not vary significantly. In this work, this upper threshold is selected at \(d_{b}=70\ cm\). Another interesting observation concerns the cross-field variability. The olive and citrus orchards that are regularly irrigated yield similar values across all tests, whereas the bare field (not irrigated) has lower ECa values (as expected). However, the measured values appear to converge faster (i.e., at shorter distances) in the irrigated fields compared to the bare field. We can associate this finding with the fact that in irrigated fields the reported ECa values are a function of both actual soil electrical conductivity and electromagnetic interference from the robot, and the former dominates more rapidly as the probe distance increases. In contrast, in the bare field, the actual ECa level attains much lower values, hence readings are more susceptible to electromagnetic interference and larger distances appear to be required to damp down the interference's effect. This finding can be a useful tool to correlate soil salinity over the same field before and after planting, and while growing. We can further justify the selection of the lower and upper thresholds for \(d_{b}\) by inspecting measurement linearity compared to the baseline (Fig. 3) and via a Pearson correlation test (Fig. 4) and subsequent linear regression (Fig. 5). According to the \(1:1\) control lines in all three panels of Fig. 3 that correspond to each tested field, the saturation is notable in the measurements when the sensor is less than \(40\ cm\) away from the robot's body; obtained values for \(30\ cm\) and below clearly deviate from the other cases, while obtained values of \(70\ cm\) are closely clustered together and approach the \(1:1\) control line.7 Footnote 7: Note that the case of \(10\ cm\) has significant interference and widely varied values (in some cases even negative, as shown in Table I) and is henceforth not shown in these graphs to improve visual clarity of the figure. The case of \(d_{b}=40\ cm\) appears to be the switching point.
While it could be argued based on Table I and Fig. 3 that this case demonstrates high differentiation from the baseline (especially at the bare field), the Pearson correlation test (Fig. 4a) yields a value of \(0.98\) for the distance of \(40\ cm\), which is close to 1; the deviation from the baseline can thus be treated as a constant linear offset. Slopes and linear regression results also provide additional supporting evidence for picking \(d_{b}=40\ cm\) as the lower threshold for further analysis. It can also be readily verified that there are no significant differences from setting the probe further than \(70\ cm\) away from the robot body, as obtained values appear to have stabilized very close to the respective baseline measurement in all three fields. Considering that the further we place the probe the worse its terrain traversability capacity (see next section too), we deduce that \(d_{b}=70\ cm\) is an appropriate upper threshold for further analysis. Finally, Fig. 4d shows that the intercept coefficient decreases as the probe distance increases, which corroborates all previous observations. Given all these remarks, we validate that keeping the sensor at a certain distance or farther from the robot's body frame decreases the noise included in the conductivity measurements due to electromagnetic interference and increases the linearity of the readings with respect to the reference (handheld) measurements. As such, we select the fixed distances of \(d_{b}\in\{40,50,60,70\}\ cm\) to further evaluate the robot's traversability capacity and thus help select the optimal one. Fig. 5: Linear regression graphs of the captured conductivity data with respect to the baseline values, for evaluating the robot interference in the EMI measurements. Panels (a)–(j) correspond to each of the cases in the range \([10,100]\ cm\) of robot-sensor distance, respectively. Upper and lower bounds of the fit are depicted with continuous curves (in red). ### _Evaluation of Robot Platform Maneuverability through Gazebo Simulation_ With the set of candidate sensor-to-body distance values having been identified (\(d_{b}\in\{40,50,60,70\}\ cm\)), the next step is to examine the robot's ability to move over various uneven terrain fields while carrying the probe without the probe colliding with the ground as the robot negotiates dips and bumps. In general, each platform configuration may have a distinct effect on the robot's traversability capacity, sensor stability, and thus sensor readings. In an effort to make this process scalable and generalizable, we develop a realistic simulation environment and test robot traversability and probe oscillations for different sensor placement configurations over varied sets of emulated terrains. A Gazebo model for the Clearpath Robotics Jackal robot employed herein is already publicly available.8 We developed a custom Gazebo model for the GF CMD-Tiny sensor and integrated it into the main robot model. Figure 7 depicts these models. Four separate Gazebo models of the sensor were generated (Fig. 7b), each corresponding to one of the considered probe distances \(d_{b}\in\{40,50,60,70\}\ cm\). The sensor's height off the ground was adjusted to be \(d_{h}\in\{6,11\}\ cm\), which approximates the average height of sensor placement during handheld continuous measurements. The simulated Jackal model weighs \(17\ kg\), similar to the real one, and the sensor platform with the onboard CMD-Tiny sensor weighs \(4\ kg\).
To evaluate the GF CMD-Tiny sensor oscillations in the \(z\)-axis during the Jackal's movement, a ranging plugin was also developed in Gazebo to provide continuous information about the distance of the EMI sensor from the ground. The Gazebo plugin continuously publishes the range from the sensor body to the ground through a \(sensor\_msgs/LaserScan\) ROS topic, thus allowing us to capture the deviation data in real-time (one such example is shown in Fig. 7c). Footnote 8: [http://wiki.ros.org/Robots/Jackal](http://wiki.ros.org/Robots/Jackal) Note that the simulated setup features the same ROS libraries as the real robot does, and it can be teleoperated or commanded to move to a goal pose (i.e., position and orientation) in the local frame, as it has an integrated RTK-GNSS antenna as well. To evaluate the rigidity and robustness of the designed platform, we conducted three independent simulated experiments inside the emulated field in the Gazebo world for the selected platform configurations (parameters \(d_{b}\) and \(d_{h}\)). In these experiments, the Jackal is commanded to follow specific trajectories over terrain types of different difficulty, namely a straight path in a slightly planar area, a straight path in a more rocky area, and a complex path with mixed-type terrain (Fig. 8). During the trajectory following, the robot measures the vertical oscillations of the EMI sensor with respect to the ground. Each individual case is repeated five times, thus giving rise to a total of 120 (simulated) experimental trials in this specific benchmark. _Key Findings:_ Results of this second experimental benchmark are contained in Table II; the per-configuration statistics can be computed from the logged range data as sketched below. First, we confirm (as anticipated) that increasing the sensor distance relative to the robot body makes the robot less stable and in turn leads to increased probe oscillations when traversing uneven terrain. However, it turns out that the cases of \(d_{b}=\{50,60\}\ cm\) lead to very similar platform oscillations in terms of reported variance, which in fact are close to the shortest case of \(40\ cm\) and steadier than the longest \(70\ cm\) case. In more detail, and with reference to Table II, increasing the probe placement distance results in increased mean and variance of position deviations while moving, for both tested sensor height levels off the ground. By mounting the sensor closer to the robot's body and keeping a shorter robot-sensor footprint, the robot is more stable in both smoother and rougher terrain types. For the \(d_{h}=6\ cm\) case, placing the sensor at \(d_{b}=\{50,60\}\ cm\) achieves similar results, keeping \(\simeq 1\ cm\) of difference in standard deviation from the \(70\ cm\) case. Also, even though \(40\ cm\) is the least shaky solution overall, the cases of \(d_{b}=\{50,60\}\ cm\) have less than \(\simeq 0.5\ cm\) difference in standard deviation. Additionally, by comparing across experiments, the \(50\ cm\) case appears to score less than \(10^{-4}\ m^{2}\) of variance. These observations are also consistent in the case of \(d_{h}=11\ cm\), with the smallest sensor displacement at \(d_{b}=40\ cm\) and the cases of \(d_{b}=\{50,60\}\ cm\) performing equivalently. Based on these observations, we deduce that the cases of setting \(d_{b}=\{50,60\}\ cm\) may offer the best trade-off in terms of noise and steadiness compared with the shorter \(40\ cm\) case, in which the probe is placed close to the robot's chassis and is affected more by the electromagnetic interference as shown in Section III.
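As a reference for how the entries of Table II can be obtained, the following minimal sketch summarizes logged probe-to-ground ranges into the mean, standard deviation, and variance of height deviations for one configuration. The arrays are hypothetical placeholders, not the actual simulation logs.

```python
# Illustrative summary of probe oscillations (as reported in Table II): deviations
# of logged ground ranges from the nominal probe height d_h, aggregated over trials.
import numpy as np

def oscillation_stats(trials, nominal_m):
    """trials: list of 1-D range arrays in meters; returns (mean, sigma, variance) in cm / cm^2."""
    dev_cm = np.concatenate([(t - nominal_m) * 100.0 for t in trials])
    return dev_cm.mean(), dev_cm.std(), dev_cm.var()

# Hypothetical logs: five simulated trials for the d_h = 6 cm configuration
rng = np.random.default_rng(0)
trials = [0.06 + rng.normal(-0.005, 0.03, size=500) for _ in range(5)]
mean_cm, sigma_cm, var_cm2 = oscillation_stats(trials, nominal_m=0.06)
print(f"mean = {mean_cm:.2f} cm, sigma = {sigma_cm:.2f} cm, variance = {var_cm2:.2e} cm^2")
```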
A second observation from these results concerns the sensor's height off the ground, \(d_{h}\). Inspecting the results for different sensor height values \(d_{h}\) in Table II, we notice a similar oscillating behavior, whereby a shorter height (i.e., \(d_{h}=6\ cm\)) might be preferred to reduce variations from the ground. This becomes even more critical considering that, during real surveys, it is hard for a farm worker using the handheld sensor to keep the sensor consistently at a stable height off the ground while walking. For these reasons we chose to use \(d_{h}=6\ cm\). \begin{table} \begin{tabular}{c|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{\(d_{b}\) - \(d_{h}\)} & \multicolumn{3}{c}{Smooth Land} & \multicolumn{3}{c}{Rocky Land} & \multicolumn{3}{c}{Mixed-terrain Land} \\ \cline{2-10} & mean (cm) & \(\sigma\) (cm) & variance (\(cm^{2}\)) & mean (cm) & \(\sigma\) (cm) & variance (\(cm^{2}\)) & mean (cm) & \(\sigma\) (cm) & variance (\(cm^{2}\)) \\ \hline 40cm - 6cm & -0.50 & 2.57 & 6.61e-02 & -0.51 & 3.15 & 9.94e-02 & -0.54 & 2.58 & 6.65e-02 \\ 50cm - 6cm & -0.60 & 3.13 & 9.81e-02 & -0.52 & 3.13 & 9.77e-02 & -0.68 & 2.80 & 7.83e-02 \\ 60cm - 6cm & -0.83 & 2.97 & 8.81e-02 & -0.49 & 3.24 & 0.10 & -0.79 & 3.02 & 9.10e-02 \\ 70cm - 6cm & 0.68 & 4.11 & 0.17 & -0.61 & 3.91 & 0.15 & -0.71 & 3.20 & 0.10 \\ \hline 40cm - 11cm & -0.84 & 2.97 & 8.83e-02 & -0.78 & 3.37 & 0.11 & -1.03 & 3.35 & 0.11 \\ 50cm - 11cm & -1.12 & 3.39 & 0.11 & -1.03 & 4.07 & 0.17 & -1.25 & 3.66 & 0.13 \\ 60cm - 11cm & -1.47 & 4.37 & 0.19 & -0.93 & 3.94 & 0.16 & -1.57 & 4.47 & 0.20 \\ 70cm - 11cm & -1.23 & 3.69 & 0.14 & -1.62 & 4.76 & 0.23 & -1.50 & 4.04 & 0.16 \\ \hline \hline \end{tabular} \end{table} TABLE II: Measured sensor oscillations during simulated surveys in the Gazebo environment. ### _Preliminary Feasibility Experimental Testing of Boundary Configurations_ The last set of calibration tests for optimal sensor placement included validation of the preliminary feasibility of the robot-sensor setup to measure ECa continuously. Based on the aforementioned results, we elected to study in these validation experiments the two boundary cases of probe distance (i.e., \(d_{b}=\{40,70\}\ cm\)), at a constant height of \(d_{h}=6\ cm\).9 By doing so, we can get a better understanding of the effects of placing the sensor closer or further away from the robot on _continuous_ ECa measurements. Footnote 9: We wish to highlight at this point that the obtained results so far suggest that the configuration \(\{d_{b}=60\ cm,d_{h}=6\ cm\}\) can serve as the optimal one in the context of this work. This is the configuration we test in field-level experiments in Section IV that follows. However, for completeness purposes, and in an effort to better explain the effects of placing the sensor closer or further away from the robot on soil ECa measurements, we tested also the two boundary configurations (which are still viable in principle) but at a smaller-scale experimental setup compared to the field-level experimental setups in Section IV. _Experimental Procedure_: The validation experiments took place in the same field that was emulated for the aforementioned simulation testing. Without loss of generality, we considered two distinct cases: 1) a straight-line trajectory with \(d_{b}=70\ cm\) and \(d_{h}=6\ cm\), and 2) a U-shaped trajectory with \(d_{b}=40\ cm\) and \(d_{h}=6\ cm\). In each case we conducted three independent trials.
The starting and end positions as well as intermediate waypoints were the same for each set of trials. The physical setup, experimental field, and the two types of trajectories are depicted in Fig. 9. Manual data collections with the handheld sensor (three for each trajectory type) were also performed to serve as the baseline. In total, we conducted 12 experimental trials for validation in the bare field (six robotized and six manual). Obtained soil ECa measurements were georeferenced via RTK-GNSS in the robotized measurements and via the sensor's embedded GPS in the manual measurements. Collected soil ECa data were used to generate a custom raster of the surveyed area; then we used kriging interpolation through an exponential semivariogram to obtain the ECa map, which was embedded onto satellite imagery via the ESRI ArcGIS 10.8.2 software. _Key Findings_: Obtained results, visualized as aggregated soil conductivity plots and the corresponding ECa maps for the cases of \(d_{b}=70\ cm\) (straight-line trajectory) and \(d_{b}=40\ cm\) (U-shaped trajectory), are depicted in Fig. 10 and Fig. 11, respectively. Foremost, the results validate that the shorter probe placement demonstrates a larger difference in average sensor readings from the manual baseline data than the longer placement (case \(d_{b}=40\ cm\), robotized: \(\{\mu=13.32,\sigma=0.99\}\ mS/m\) and manual: \(\{\mu=7.55,\sigma=0.92\}\ mS/m\); case \(d_{b}=70\ cm\), robotized: \(\{\mu=11.40,\sigma=1.83\}\ mS/m\) and manual: \(\{\mu=9.34,\sigma=1.59\}\ mS/m\)). Despite this increased difference, however, the standard deviations are close between robotized and manual measurements in both cases. This indicates that there is consistency between the two types of measurements, and that the observed offsets in robotized measurements compared to their respective manual baselines can in fact be treated as constant offsets. Further evidence in support of the presence of a constant offset is provided by the Pearson Correlation Coefficient (PCC). In both cases (straight line and U-shape), the robotized ECa measurements showcase high linearity with their manual counterparts. Specifically, after removing outlier measurements that lie outside the \(\pm 2\sigma\) range of the measurement distribution, the straight-line trajectory attains a PCC of 0.95 whereas the U-shaped trajectory reaches a PCC of 0.88 (despite significant electromagnetic interference, as demonstrated in the static tests discussed in Section III-A), compared with manual measurements that were conducted on exactly the same paths. These findings can be visually corroborated by the obtained ECa maps in panels (b) and (c) of Fig. 10 and Fig. 11, with the main difference being that the color differential in the shorter probe placement case of \(d_{b}=40\ cm\) (Fig. 11) is more pronounced due to the larger constant offset in measurements compared to the longer probe placement case of \(d_{b}=70\ cm\) (Fig. 10). In all, this preliminary feasibility testing validates that we can perform trustworthy continuous ECa measurements with the developed robotic setup even at the two boundary configurations. Results verify the system's high linearity with respect to manually-collected (handheld) data, and that the constant additive offset on the overall ECa measurements caused by the robot's electromagnetic interference does not significantly affect the linearity of obtained measurements in the end.
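For reference, the linearity and constant-offset checks used throughout this section amount to a Pearson correlation and a linear fit between paired manual and robotized readings. The sketch below illustrates this with SciPy on hypothetical values; it is not the actual survey data or analysis script.

```python
# Sketch of the linearity/offset check: correlate paired manual and robotized
# ECa readings (mS/m) taken along the same path, after a simple +/- 2 sigma outlier cut.
import numpy as np
from scipy import stats

manual = np.array([7.1, 7.8, 8.4, 9.0, 9.6, 10.2, 10.9])      # hypothetical handheld readings
robot = np.array([12.9, 13.5, 14.2, 14.7, 15.5, 16.0, 16.8])  # hypothetical robotized readings

keep = np.abs(robot - robot.mean()) <= 2 * robot.std()        # discard outliers beyond 2 sigma
r, _ = stats.pearsonr(manual[keep], robot[keep])
fit = stats.linregress(manual[keep], robot[keep])
print(f"PCC = {r:.2f}, slope = {fit.slope:.2f}, "
      f"intercept (approx. constant offset) = {fit.intercept:.2f} mS/m")
```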
Putting everything together, we conclude that the configuration \(\{d_{b}=60\ cm,d_{h}=6\ cm\}\) can serve as an appropriate tradeoff that combines lower robot-to-sensor signal interference and higher maneuverability, and it is thus selected as the optimal configuration to conduct field-scale experiments. These are discussed next. Fig. 9: (a) The Jackal robot equipped with the platform holding the CMD-Tiny instrument at the configuration of \(d_{b}=50\ cm\) and \(d_{h}=6\ cm\). (b) Instance of robot during the soil ECa data collection for validation. (c) Validation testing considered two distinctive trajectories, one following a straight line and another performing a U-shaped curve. ## IV Field-scale Experiments The analysis conducted so far has helped determine an optimal sensor setup (\(\{d_{b}=60\ cm,d_{h}=6\ cm\}\)) for the robot considered in this study that balances the electromagnetic interference caused by the robot and its electronic components against the robot's capacity to traverse uneven terrain. The analysis has also helped validate the preliminary feasibility of collecting continuous soil ECa measurements over small bare-field areas, with obtained results being trustworthy and consistent with manually-collected baselines. We now turn our attention to robot-assisted continuous soil ECa measurements over larger fields. _Experimental Procedure:_ We performed continuous soil ECa measurements in two distinctive fields, an olive tree grove (not irrigated recently with respect to data collections) and a citrus tree grove (irrigated prior to data collections). The arid olive tree grove is located close to the USDA-ARS U.S. Salinity Laboratory at the University of California, Riverside (UCR) (33°58'21.936" N, -117°19'13.5732" E), whereas the freshly irrigated citrus orchard is located within the UCR Agricultural Station fields (AES; 33°57'52.0272" N, -117°20'13.7184" E). It is worth noting that the latter case in fact contained various soil conditions (highly irrigated/wet parts and arid parts), and hence helped evaluate 1) the proposed robot's performance in the same survey as terrain conditions vary, and 2) whether the robot-assisted soil conductivity curves and soil ECa maps match those corresponding to manual data collections. Experimental setups and snapshots of the two fields are depicted in Fig. 12. For the olive tree grove, we considered an area of two full tree rows covered following a U-shaped trajectory, whereas, for the citrus tree grove, we considered an area of roughly three tree rows covered following an S-shaped trajectory. Manual data collections with the handheld sensor (three in each field) were also performed to serve as the baseline. In total, we conducted 12 experimental trials for overall system field-scale evaluation in the olive and citrus tree groves (six robotized and six manual). Obtained soil ECa measurements were georeferenced via RTK-GNSS in the robotized measurements and via the sensor's embedded GPS in the manual measurements. Collected soil ECa data were used to generate a custom raster of the surveyed area; then kriging interpolation through an exponential semivariogram helped obtain the ECa map, which was embedded onto satellite imagery via the ESRI ArcGIS 10.8.2 software. _Key Findings:_ Results from the field-scale experiments for the olive and citrus tree groves are shown in Figs. 13-16. Collected raw data from each considered case, aggregated soil conductivity data comparing robotized to manual baselines, as well as the corresponding ECa maps are presented.
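The raster-and-interpolation step just described can also be reproduced with open-source tooling; the sketch below builds an ECa map with ordinary kriging and an exponential semivariogram using the third-party pykrige package on synthetic sample points (the maps in this work were generated in ArcGIS, so this is only an illustrative stand-in).

```python
# Illustrative ECa map generation: ordinary kriging with an exponential semivariogram.
# Sample coordinates/values are synthetic; the actual maps were produced in ESRI ArcGIS.
import numpy as np
from pykrige.ok import OrdinaryKriging  # third-party package assumed for illustration

rng = np.random.default_rng(42)
x = rng.uniform(0, 30, 200)                                   # easting of samples (m)
y = rng.uniform(0, 15, 200)                                   # northing of samples (m)
eca = 10 + 2 * np.sin(x / 5.0) + rng.normal(0, 0.5, x.size)   # synthetic ECa values (mS/m)

grid_x = np.arange(0.0, 30.0, 0.5)
grid_y = np.arange(0.0, 15.0, 0.5)
ok = OrdinaryKriging(x, y, eca, variogram_model="exponential")
eca_map, kriging_variance = ok.execute("grid", grid_x, grid_y)  # 2-D interpolated raster
print(eca_map.shape)  # (len(grid_y), len(grid_x))
```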
Fig. 10: (a) The soil conductivity curves of the straight-line trajectory case in the bare field. Fitted graphs of both measurement curves correspond to an 8th-degree least-squares fit. (b)-(c) Soil ECa maps corresponding to manually-collected (handheld) and robotized-collected data, respectively, computed by applying kriging interpolation through an exponential semivariogram. Each panel also contains the value-based color scale, the map statistics, and the histogram of the conductivity values. Fig. 11: (a) The soil conductivity curves of the U-shaped trajectory case in the bare field. Fitted graphs of both measurement curves correspond to an 8th-degree least-squares fit. (b)-(c) Soil ECa maps corresponding to manually-collected (handheld) and robotized-collected data, respectively, computed by applying kriging interpolation through an exponential semivariogram. Each panel also contains the value-based color scale, the map statistics, and the histogram of the conductivity values. It can be readily verified from the graphs that robot-assisted continuous ECa measurements approximate the manually-collected baselines very well in both cases. For the case of the olive tree grove (arid field), the raw measurement plots (Fig. 13(a) and Fig. 13(b)) match each other very well. The visual observation is corroborated via the mean measurement plots (Fig. 13(c)), where the PCC reaches a score of 0.97. The mean conductivity value in the robotized case is \(14.23\)\(mS/m\), compared to the \(10.56\)\(mS/m\) mean value of the manual case, which represents a fixed increase that can be seen in the curvature of the polynomial fits. Additionally, the standard deviations are closely matching (robotized: \(1.65\)\(mS/m\); manual: \(1.75\)\(mS/m\)), which demonstrates consistency across both soil ECa measurement means. Output ECa maps (Fig. 14) are also very similar, with a PCC value of 0.97 in a pixel-wise correlation. It is notably clear that the robotized results are close to the manual ones, in spite of the constant positive offset on the level of the measured soil apparent electrical conductivity. Similar observations can be readily made for the case of the citrus tree grove. Recall that this field was irrigated shortly before data collections, hence there was more terrain variability, ranging from muddy soil to normally irrigated areas and drier parts of the field. From a robot operation standpoint, the proposed system demonstrates robust behavior even in this diverse and more demanding survey case, and it is noteworthy that the robot's traversability in the irrigated turf was efficient even when navigating over mud. Raw data graphs (Fig. 15(a) and Fig. 15(b)) demonstrate several notable peaks in soil conductivity measurements that were caused when traversing the muddy terrain and the well-irrigated soil. These peaks were captured in both manual and robot-assisted surveys. The graphs shown in Fig. 15(c) have a PCC of 0.90, which demonstrates the proposed robot's robustness even in muddy fields with quite diverse soil ECa levels. The polynomial fits of both plots (Fig. 15(c)) present similar curvature with an additive offset increase in the robot's case, caused by the electromagnetic interference, whereas the pixel-wise correlation of the output ECa maps reaches a value of 0.96 (Fig. 16). Visual inspection of the obtained soil ECa maps also supports the aforementioned findings.
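The pixel-wise map comparisons cited above can be computed directly from the two interpolated rasters; a minimal sketch is shown below, assuming equally-sized manual and robotized ECa maps defined over the same grid (the arrays here are synthetic placeholders).

```python
# Sketch of a pixel-wise correlation between two ECa rasters over the same grid.
import numpy as np

def pixelwise_pcc(map_manual, map_robot):
    a, b = np.asarray(map_manual, float).ravel(), np.asarray(map_robot, float).ravel()
    valid = ~np.isnan(a) & ~np.isnan(b)      # ignore cells outside the surveyed area
    return np.corrcoef(a[valid], b[valid])[0, 1]

# Synthetic example: the robotized map equals the manual map plus a constant offset and noise
rng = np.random.default_rng(1)
manual_map = rng.normal(10, 2, size=(30, 60))
robot_map = manual_map + 3.7 + rng.normal(0, 0.3, manual_map.shape)
print(f"pixel-wise PCC = {pixelwise_pcc(manual_map, robot_map):.2f}")
```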
Fig. 12: Field-scale data collection instances: (a) robotized in the olive tree grove and (b)/(c) manual/robotized in the orange tree grove. Fig. 13: Olive tree grove case. (a) and (b) Graphs of the raw conductivity measurements obtained directly with the handheld sensor and via the robot, respectively. (c) Soil conductivity plots of both handheld and robot cases. Fitted plots of both measurement curves correspond to an 8th-degree least-squares fit. ## V Conclusion In this study, we presented a robotized means to perform precise and continuous soil apparent electrical conductivity (ECa) measurement surveys. Our proposed solution involves a ground mobile robot equipped with a customized and adjustable platform to hold an Electromagnetic Induction (EMI) sensor. The optimal placement of the EMI sensor is determined by balancing two competing objectives: the minimization of static electromagnetic interference in measurements and the facilitation of robot traversability in the field. Extensive experimental evaluation across static calibration tests and over different types of fields leads to the optimal EMI configuration setup to be used for large-field testing. Through a series of real field experiments, our study demonstrates that the obtained robot-assisted soil conductivity measurements present high linearity compared to the ground truth (data collected manually by a handheld EMI sensor), scoring more than \(90\%\) in Pearson correlation coefficient for both plot measurements and estimated ECa maps generated by kriging interpolation. The proposed platform can deliver high-linearity scores in real survey scenarios, in an olive and a citrus grove under different irrigation levels, and serve as a robust tool for large-scale ECa mapping in the field, with the potential development of a fully-autonomous behavior. Future work will focus on integration within a task and motion framework for informative proximal sampling, as well as integration with physical sampling means to perform multiple tasks simultaneously.
Soil apparent electrical conductivity (ECa) is an important indicator in precision and smart agriculture, used for optimal water management, geological mapping, and yield prediction. Several existing methods have been developed for estimating soil electrical conductivity: physical soil sampling, installation and monitoring of ground-based sensors, and the use of sensors capable of proximal ECa estimation. However, these methods can be labor-intensive and costly when applied to larger fields. Robot-assisted ECa measurement has the potential to offer a scalable and cost-efficient solution. In this study, we propose such a solution: a mobile ground robot equipped with a customized platform with adjustable sensor placement. The platform holds an electromagnetic induction (EMI) sensor to perform semi-autonomous, on-demand
2310.00133
Prior Mismatch and Adaptation in PnP-ADMM with a Nonconvex Convergence Analysis
Plug-and-Play (PnP) priors is a widely-used family of methods for solving imaging inverse problems by integrating physical measurement models with image priors specified using image denoisers. PnP methods have been shown to achieve state-of-the-art performance when the prior is obtained using powerful deep denoisers. Despite extensive work on PnP, the topic of distribution mismatch between the training and testing data has often been overlooked in the PnP literature. This paper presents a set of new theoretical and numerical results on the topic of prior distribution mismatch and domain adaptation for alternating direction method of multipliers (ADMM) variant of PnP. Our theoretical result provides an explicit error bound for PnP-ADMM due to the mismatch between the desired denoiser and the one used for inference. Our analysis contributes to the work in the area by considering the mismatch under nonconvex data-fidelity terms and expansive denoisers. Our first set of numerical results quantifies the impact of the prior distribution mismatch on the performance of PnP-ADMM on the problem of image super-resolution. Our second set of numerical results considers a simple and effective domain adaption strategy that closes the performance gap due to the use of mismatched denoisers. Our results suggest the relative robustness of PnP-ADMM to prior distribution mismatch, while also showing that the performance gap can be significantly reduced with few training samples from the desired distribution.
Shirin Shoushtari, Jiaming Liu, Edward P. Chandler, M. Salman Asif, Ulugbek S. Kamilov
2023-09-29T20:49:00
http://arxiv.org/abs/2310.00133v1
# Prior Mismatch and Adaptation in PnP-ADMM with a Nonconvex Convergence Analysis ###### Abstract Plug-and-Play (PnP) priors is a widely-used family of methods for solving imaging inverse problems by integrating physical measurement models with image priors specified using image denoisers. PnP methods have been shown to achieve state-of-the-art performance when the prior is obtained using powerful deep denoisers. Despite extensive work on PnP, the topic of _distribution mismatch_ between the training and testing data has often been overlooked in the PnP literature. This paper presents a set of new theoretical and numerical results on the topic of prior distribution mismatch and domain adaptation for _alternating direction method of multipliers (ADMM)_ variant of PnP. Our theoretical result provides an explicit error bound for PnP-ADMM due to the mismatch between the desired denoiser and the one used for inference. Our analysis contributes to the work in the area by considering the mismatch under _nonconvex_ data-fidelity terms and _expansive_ denoisers. Our first set of numerical results quantifies the impact of the prior distribution mismatch on the performance of PnP-ADMM on the problem of image super-resolution. Our second set of numerical results considers a simple and effective domain adaption strategy that closes the performance gap due to the use of mismatched denoisers. Our results suggest the relative robustness of PnP-ADMM to prior distribution mismatch, while also showing that the performance gap can be significantly reduced with few training samples from the desired distribution. ## 1 Introduction _Imaging inverse problems_ consider the recovery of a clean image from its corrupted observation. Such problems arise across the fields of computational imaging, biomedical imaging, and computer vision. As imaging inverse problems are typically ill-posed, solving them requires the use of image priors. While many approaches have been proposed for implementing image priors, the current literature is primarily focused on methods based on training _deep learning (DL)_ models to map noisy observations to clean images [1, 2, 3]. _Plug-and-Play (PnP)_ Priors [4, 5] has emerged as a class of DL algorithms for solving inverse problems by denoisers as image priors. PnP has been successfully used in many applications such as super-resolution, phase retrieval, microscopy, and medical imaging [6, 7, 8, 9, 10, 11, 12]. The success of PnP has resulted in the development of its multiple variants (e.g., PnP-PGM, PnP-SGD, PnP-ADMM. PnP-HQS), strong interest in its theoretical analysis, as well as investigation of its connection to other methods used in inverse problems, such as score matching and denoising diffusion probabilistic models [13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27]. Despite extensive literature on PnP, the research in the area has mainly focused on the setting where the distribution of the test or inference data is perfectly matched to that of the data used for training the image denoiser. Little work exists for PnP under mismatched priors, where a distribution shift exists between the training and test data. In this paper, we investigate the problem of _mismatched_ priors in PnP-ADMM. We present a new theoretical analysis of PnP-ADMM that accounts for the use of mismatched priors. Unlike most existing work on PnP-ADMM, our theory is compatible with _nonconvex_ data-fidelity terms and _expansive_ denoisers [28, 29, 30, 31, 32]. 
Our analysis establishes explicit error bounds on the convergence of PnP-ADMM under a well-defined set of assumptions. We validate our theoretical findings by presenting numerical results on the influence of distribution shifts, where the denoiser trained on one dataset (e.g., BreCaHAD or CelebA) is used to recover an image from another dataset (e.g., MetFaces or RxRx1). We additionally present numerical results on a simple domain adaptation strategy for image denoisers that can effectively address data distribution shifts in PnP methods (see Figure 4 for an illustration). Our work thus enriches the current PnP literature by providing novel theoretical and empirical insights into the problem of data distribution shifts in PnP. All proofs and some details that have been omitted due to space constraints of the main text are included in the supplementary material. ## 2 Background **Inverse problems.** Inverse problems involve the recovery of an unknown signal \(\mathbf{x}\in\mathbb{R}^{n}\) from a set of noisy measurements \(\mathbf{y}=\mathbf{A}\mathbf{x}+\mathbf{e}\), where \(\mathbf{A}\in\mathbb{R}^{m\times n}\) is the measurement model and \(\mathbf{e}\) is the noise. Inverse problems are often formulated and solved as optimization problems of the form \[\widehat{\mathbf{x}}\in\operatorname*{arg\,min}_{\mathbf{x}\in\mathbb{R}^{n}}f(\mathbf{x })\quad\text{with}\quad f(\mathbf{x})=g(\mathbf{x})+h(\mathbf{x})\, \tag{1}\] where \(g\) is the data-fidelity term that measures the consistency with the measurements \(\mathbf{y}\) and \(h\) is the regularizer that incorporates prior knowledge on \(\mathbf{x}\). The least-squares function \(g(\mathbf{x})=\frac{1}{2}\|\mathbf{A}\mathbf{x}-\mathbf{y}\|_{2}^{2}\) and the total variation (TV) function \(h(\mathbf{x})=\tau\|\mathbf{D}\mathbf{x}\|_{1}\), where \(\mathbf{D}\) denotes the image gradient and \(\tau>0\) a regularization parameter, are commonly used choices for the data-fidelity term and the regularizer [33, 34]. **Deep Learning.** DL has gained significant attention in the context of inverse problems [1, 2, 3]. DL methods seek to perform a regularized inversion by learning a mapping from the measurements to the target images parameterized by a deep convolutional neural network (CNN) [35, 36, 37, 38, 39, 40]. Model-based DL (MBDL) refers to a sub-class of DL methods for inverse problems that also integrate the measurement model as part of the deep model [3, 41]. MBDL approaches include methods such as PnP, regularization by denoising (RED), deep unfolding (DU), and deep equilibrium models (DEQ) [42, 43, 44, 45]. **Plug-and-Play Priors.** PnP is one of the most popular MBDL approaches for solving imaging inverse problems that uses denoisers as priors [4] (see also recent reviews [17, 19]). PnP has been extensively investigated, leading to multiple PnP variants and theoretical analyses [13, 15, 25, 26, 27, 28, 32, 46]. Existing theoretical convergence analyses of PnP differ in the specifics of the assumptions required to ensure the convergence of the corresponding iterations.
For example, bounded, averaged, firmly nonexpansive, nonexpansive, residual nonexpansive, or demi-contractive denoisers have been previously considered for designing convergent PnP schemes [13, 14, 20, 22, 23, 25, 28, 30, 32, 47, 48, 49]. The recent work [50] has used an elegant formulation of an MMSE denoiser from [51] to perform a nonconvex convergence analysis of PnP-PGM without any nonexpansiveness assumptions on the denoiser. Another recent line of PnP work has explored specification of the denoiser as a gradient-descent step on a functional parameterized by a deep neural network [25, 26, 52]. PnP-ADMM is summarized in Algorithm 1[4, 5], where \(\mathsf{D}_{\sigma}\) is an additive white Gaussian denoiser (AWGN) denoiser, \(\gamma>0\) is the penalty parameter, and \(\sigma>0\) controls the denoiser strength. PnP-ADMM is based on the alternating direction method of multipliers (ADMM) [53]. Its formulation relies on optimizing in an alternating fashion the augmented Lagrangian associated with the objective function in (1) \[\phi\left(\mathbf{x},\mathbf{z},\mathbf{s}\right)=g(\mathbf{x})+h(\mathbf{z})+\frac{1}{\gamma}\bm {s}^{\intercal}(\mathbf{x}-\mathbf{z})+\frac{1}{2\gamma}\left\|\mathbf{x}-\mathbf{z}\right\|_ {2}^{2}. \tag{2}\] The theoretical convergence of PnP-ADMM has been explored for convex functions using monotone operator theory [28, 32], for nonconvex regularizer and convex data-fidelity terms [52], and for bounded denoisers [13]. **Distribution Shift.** Distribution shifts naturally arise in imaging when a DL model trained on one type of data is applied to another. The mismatched DL models due to distribution shifts lead to suboptimal performance. Consequently, there has been interest in mitigating the effect of mismatched DL models [54, 55, 56, 57]. In PnP methods, a mismatch arises when the denoiser is trained on a distribution different from that of the test data. The prior work on denoiser mismatch in PnP is limited [20, 58, 59, 27]. **Our contributions.**_(1)_ Our first contribution is a new theoretical analysis of PnP-ADMM accounting for the discrepancy between the desired and mismatched denoisers. Such analysis has not been considered in the prior work on PnP-ADMM. Our analysis is broadly applicable in the sense that it does _not_ assume convex data-fidelity terms and nonexpansive denoisers. _(2)_ Our second contribution is a comprehensive numerical study of distribution shifts in PnP through several well-known image datasets on the problem of image super-resolution. _(3)_ Our third contribution is the illustration of simple data adaptation for addressing the problem of distribution shifts in PnP-ADMM. We show that one can successfully close the performance gap in PnP-ADMM due to distribution shifts by adapting the denoiser to the target distribution using very few samples. ## 3 Proposed Work This section presents the convergence analysis of PnP-ADMM that accounts for the use of mismatched denoisers. It is worth noting that the theoretical analysis of PnP-ADMM has been previously discussed in [16, 28, 31, 32]. The novelty of our work can be summarized in two aspects: (1) we analyze convergence with the mismatched priors; (2) our theory accommodates nonconvex \(g\) and expansive denoisers. ### PnP-ADMM with Mismatched Denoiser We denote the target distribution as \(p_{\mathbf{x}}\) and the mismatched distribution as \(\widehat{p}_{\mathbf{x}}\). 
The mismatched denoiser \(\widehat{\mathsf{D}}_{\sigma}\) is a _minimum mean squared error (MMSE)_ estimator for the AWGN denoising problem \[\mathbf{v}=\mathbf{x}+\mathbf{e}\quad\text{with}\quad\mathbf{x}\sim\widehat{p}_{\mathbf{x}},\quad \mathbf{e}\sim\mathcal{N}(0,\sigma^{2}\mathbf{I}). \tag{3}\] The MMSE denoiser is the conditional mean estimator for (3) and can be expressed as \[\widehat{\mathsf{D}}_{\sigma}(\mathbf{v})\,\coloneqq\,\mathbb{E}[\mathbf{x}|\mathbf{v}]= \int_{\mathbb{R}^{n}}\mathbf{x}\widehat{p}_{\mathbf{x}|\mathbf{v}}(\mathbf{x}|\mathbf{v})\,\mathrm{ d}\mathbf{x}, \tag{4}\] where \(\widehat{p}_{\mathbf{x}|\mathbf{v}}(\mathbf{x}|\mathbf{v})\propto G_{\sigma}(\mathbf{v}-\mathbf{x}) \widehat{p}_{\mathbf{x}}(\mathbf{x})\), with \(G_{\sigma}\) denoting the Gaussian density. We refer to the MMSE estimator \(\widehat{\mathsf{D}}_{\sigma}\), corresponding to the mismatched data distribution \(\widehat{p}_{\mathbf{x}}\), as the mismatched prior. Since the integral (4) is generally intractable, in practice, the denoiser corresponds to a deep model trained to minimize the mean squared error (MSE) loss \[\mathcal{L}(\widehat{\mathsf{D}}_{\sigma})=\mathbb{E}\left[\|\mathbf{x}-\widehat{ \mathsf{D}}_{\sigma}(\mathbf{v})\|_{2}^{2}\right]. \tag{5}\] MMSE denoisers trained using the MSE loss are optimal with respect to the widely used image-quality metrics in denoising, such as signal-to-noise ratio (SNR), and have been extensively used in the PnP literature [24, 27, 50, 60, 61]. When using a mismatched prior in PnP-ADMM, we replace Step 4 in Algorithm 1 by \[\mathbf{z}^{k}\leftarrow\widehat{\mathsf{D}}_{\sigma}\left(\mathbf{x}^{k}+\mathbf{s}^{k- 1}\right), \tag{6}\] where \(\widehat{\mathsf{D}}_{\sigma}\) is the mismatched MMSE denoiser. To avoid confusion, we denote by \(\mathbf{z}^{k}\) and \(\mathbf{\overline{z}}^{k}\) the outputs of the mismatched and target denoisers at the \(k\) iteration, respectively. Consequently, we have \(\mathbf{\overline{z}}^{k}=\mathsf{D}_{\sigma}(\mathbf{x}^{k}+\mathbf{s}^{k-1})\), where \(\mathsf{D}_{\sigma}\) is the target MMSE denoiser. ``` 1:input:\(\mathbf{z}^{0},\mathbf{s}^{0}\in\mathbb{R}^{n}\), parameters \(\sigma,\gamma>0\). 2:for\(k=1,2,3,\cdots\)do 3:\(\mathbf{x}^{k}\leftarrow\mathsf{prox}_{\gamma g}(\mathbf{z}^{k-1}-\mathbf{s}^{k-1})\) 4:\(\mathbf{z}^{k}\leftarrow\mathsf{D}_{\sigma}\left(\mathbf{x}^{k}+\mathbf{s}^{k-1}\right)\) 5:\(\mathbf{s}^{k}\leftarrow\mathbf{s}^{k-1}+\mathbf{x}^{k}-\mathbf{z}^{k}\) 6:endfor ``` **Algorithm 1** PnP-ADMM ### Theoretical Analysis Our analysis relies on the following set of assumptions that serve as sufficient conditions. **Assumption 1**.: _The prior distributions \(p_{\mathbf{x}}\) and \(\widehat{p}_{\mathbf{x}}\), denoted as target and mismatched priors respectively, are non-degenerate over \(\mathbb{R}^{n}\)._ A distribution is considered degenerate over \(\mathbb{R}^{n}\) if its support is confined to a lower-dimensional manifold than the dimensionality of \(n\). Assumption 1 is useful to establish an explicit link between a MMSE denoiser and its associated regularizer. 
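To make the iteration concrete, the following minimal sketch implements Algorithm 1 for a least-squares data-fidelity term, for which the proximal step reduces to a linear solve; the denoiser is any callable (e.g., a trained network, or a mismatched one as in (6)). It is an illustrative toy implementation under these assumptions, not the code used for the experiments in this paper.

```python
# Minimal PnP-ADMM sketch (Algorithm 1) for g(x) = 0.5 * ||A x - y||^2, where
# prox_{gamma g}(v) solves (gamma A^T A + I) x = gamma A^T y + v in closed form.
import numpy as np

def pnp_admm(A, y, denoiser, gamma=1.0, iters=50):
    n = A.shape[1]
    z = np.zeros(n)
    s = np.zeros(n)
    M = gamma * (A.T @ A) + np.eye(n)                          # system matrix of the proximal step
    for _ in range(iters):
        x = np.linalg.solve(M, gamma * (A.T @ y) + (z - s))    # x <- prox_{gamma g}(z - s)
        z = denoiser(x + s)                                     # z <- D_sigma(x + s) (plug-in prior)
        s = s + x - z                                           # s <- s + x - z (dual update)
    return x

# Toy usage with a stand-in "denoiser" (a simple shrinkage toward the signal mean)
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 64))
x_true = rng.normal(size=64)
y = A @ x_true + 0.01 * rng.normal(size=40)
x_hat = pnp_admm(A, y, denoiser=lambda v: 0.9 * v + 0.1 * v.mean())
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Replacing the `denoiser` callable with one trained on a shifted distribution corresponds to the mismatched update in (6).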
For example, the regularizer \(h\) associated with the target MMSE denoiser \(\mathsf{D}_{\sigma}\) can be expressed as (see [50, 51] for background) \[h(\mathbf{x})\,\coloneqq\,\begin{cases}-\frac{1}{2\gamma}\|\mathbf{x}-\mathsf{D}_{\sigma}^{-1}(\mathbf{x})\|_{2}^{2}+\frac{\sigma^{2}}{\gamma}h_{\sigma}(\mathsf{D}_{\sigma}^{-1}(\mathbf{x}))&\text{for }\mathbf{x}\in\mathsf{Im}(\mathsf{D}_{\sigma})\\ +\infty&\text{for }\mathbf{x}\notin\mathsf{Im}(\mathsf{D}_{\sigma}),\end{cases} \tag{7}\] where \(\gamma>0\) denotes the penalty parameter, \(\mathsf{D}_{\sigma}^{-1}:\mathsf{Im}(\mathsf{D}_{\sigma})\to\mathbb{R}^{n}\) represents a well-defined and smooth inverse mapping over \(\mathsf{Im}(\mathsf{D}_{\sigma})\), and \(h_{\sigma}(\cdot)\,\coloneqq\,-\log(p_{\mathbf{u}}(\cdot))\), with \(p_{\mathbf{u}}\) denoting the probability distribution over the AWGN corrupted observations \[\mathbf{u}=\mathbf{x}+\mathbf{e}\quad\text{with}\quad\mathbf{x}\sim p_{\mathbf{x}},\quad\mathbf{e}\sim\mathcal{N}(0,\sigma^{2}\mathbf{I}),\] (the derivation is provided in Section E.1 for completeness). Note that the smoothness of both \(\mathsf{D}_{\sigma}^{-1}\) and \(h_{\sigma}\) guarantees the smoothness of the function \(h\). Additionally, a similar connection exists between the mismatched MMSE denoiser \(\widehat{\mathsf{D}}_{\sigma}\) and the regularizer \(\hat{h}(\mathbf{x})\), with \(\hat{h}_{\sigma}(\cdot)\,\coloneqq\,-\log(\widehat{p}_{\mathbf{v}}(\cdot))\) characterizing the relationship between the mismatched denoiser and the shifted distribution. **Assumption 2**.: _The function \(g\) is continuously differentiable._ This is a standard assumption in nonconvex optimization, specifically in the context of inverse problems [62, 63, 64]. **Assumption 3**.: _The data-fidelity term and the implicit regularizers are bounded from below._ Assumption 3 implies that there exists \(f^{*}>-\infty\) such that \(f(\mathbf{x})\geq f^{*}\) for all \(\mathbf{x}\in\mathbb{R}^{n}\). **Assumption 4**.: _The denoisers \(\mathsf{D}_{\sigma}\) and \(\widehat{\mathsf{D}}_{\sigma}\) have the same range \(\mathsf{Im}(\mathsf{D}_{\sigma})\). Additionally, the functions \(h\) and \(\hat{h}\) associated with \(\mathsf{D}_{\sigma}\) and \(\widehat{\mathsf{D}}_{\sigma}\) are continuously differentiable with \(L\)-Lipschitz continuous gradients over \(\mathsf{Im}(\mathsf{D}_{\sigma})\)._ It is known (see [50, 51]) that the functions \(h\) and \(\hat{h}\) are infinitely differentiable over their ranges. The assumption that the two image denoisers have the same range is also relatively mild. Ideally, both denoisers would have the same range corresponding to the set of desired images. Assumption 4 is thus a mild extension that further requires Lipschitz continuity of the gradient over the range of the denoisers. **Assumption 5**.: _The mismatched denoiser \(\widehat{\mathsf{D}}_{\sigma}\) satisfies_ \[\|\widehat{\mathsf{D}}_{\sigma}(\mathbf{v}^{k})-\mathsf{D}_{\sigma}(\mathbf{v}^{k})\|_{2}\leq\delta_{k},\quad k=1,2,3,\ldots\] _where \(\widehat{\mathsf{D}}_{\sigma}\) is given in (4) and \(\mathbf{v}^{k}=\mathbf{x}^{k}+\mathbf{s}^{k-1}\) in Algorithm 1._ Our analysis assumes that at every iteration, PnP-ADMM uses a mismatched MMSE denoiser, derived from a shifted distribution. We consider the case where, at iteration \(k\) of PnP-ADMM, the distance between the outputs of \(\mathsf{D}_{\sigma}\) and \(\widehat{\mathsf{D}}_{\sigma}\) is bounded by a constant \(\delta_{k}\).
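In practice, when both the target and the mismatched denoisers are available, the quantity in Assumption 5 can be monitored directly along the PnP-ADMM trajectory. The sketch below assumes `target_denoiser` and `mismatched_denoiser` are callables (for example, trained networks wrapped as functions); both names are placeholders rather than part of the released code.

```python
import numpy as np

def denoiser_gap(target_denoiser, mismatched_denoiser, v_iterates):
    """Record delta_k = ||D_hat(v^k) - D(v^k)||_2 along the iterates v^k = x^k + s^{k-1}."""
    return [float(np.linalg.norm(mismatched_denoiser(v) - target_denoiser(v)))
            for v in v_iterates]
```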
**Assumption 6**.: _For the sequence \(\{\mathbf{x}^{k},\mathbf{z}^{k},\mathbf{s}^{k}\}\) generated by the iterations of PnP-ADMM with the mismatched MMSE denoiser in Algorithm 1, there exists a constant \(R\) such that_ \[\left\|\mathbf{z}^{k}-\mathbf{z}^{*}\right\|_{2}\leq R,\quad k=1,2,3,\ldots\] _where \((\mathbf{x}^{*},\mathbf{z}^{*},\mathbf{s}^{*})\) are stationary points of the augmented Lagrangian in (2)._ This is a reasonable assumption since images typically have bounded pixel values, for example \([0,255]\) or \([0,1]\). We are now ready to present our convergence result under mismatched MMSE denoisers. **Theorem 1**.: _Run PnP-ADMM using a **mismatched** MMSE denoiser for \(t\geq 1\) iterations under Assumptions 1-6 with the penalty parameter \(0<\gamma\leq 1/(4L)\). Then, we have_ \[\min_{1\leq k\leq t}\left\|\nabla f\left(\mathbf{x}^{k}\right)\right\|_{2}^{2}\leq\frac{1}{t}\sum_{k=1}^{t}\left\|\nabla f\left(\mathbf{x}^{k}\right)\right\|_{2}^{2}\leq\frac{A_{1}}{t}\left(\phi\left(\mathbf{x}^{0},\mathbf{z}^{0},\mathbf{s}^{0}\right)-\phi^{*}\right)+A_{2}\overline{\varepsilon}_{t}\] _where \(A_{1}>0\) and \(A_{2}>0\) are iteration-independent constants, \(\phi^{*}=\phi(\mathbf{x}^{*},\mathbf{z}^{*},\mathbf{s}^{*})\), and \(\overline{\varepsilon}_{t}\coloneqq(1/t)\left(\varepsilon_{1}+\cdots+\varepsilon_{t}\right)\) is an error term that is the average of the quantities \(\varepsilon_{k}\coloneqq\mathsf{max}\{\delta_{k},\delta_{k}^{2}\}\). In addition, if the sequence \(\{\delta_{i}\}_{i\geq 1}\) is summable, we have that \(\|\nabla f(\mathbf{x}^{t})\|_{2}\to 0\) as \(t\to\infty\)._ When we replace the mismatched MMSE denoiser with the _target_ MMSE denoiser, we recover the traditional PnP-ADMM. To highlight the impact of the mismatch, we next provide the same statement but using the target denoiser. **Theorem 2**.: _Run PnP-ADMM with the MMSE denoiser for \(t\geq 1\) iterations under Assumptions 1-4 with the penalty parameter \(0<\gamma\leq 1/(4L)\). Then, we have_ \[\min_{1\leq k\leq t}\left\|\nabla f\left(\mathbf{x}^{k}\right)\right\|_{2}^{2}\leq\frac{1}{t}\sum_{k=1}^{t}\left\|\nabla f(\mathbf{x}^{k})\right\|_{2}^{2}\leq\frac{C}{t}\left(\phi(\mathbf{x}^{0},\mathbf{z}^{0},\mathbf{s}^{0})-\phi^{*}\right),\] _where \(C>0\) is a constant independent of the iteration._ The proof of Theorem 1 is provided in the appendix. For completeness, we also provide the proof of Theorem 2. Theorem 1 provides insight into the convergence of PnP-ADMM using mismatched MMSE denoisers. It states that if the \(\delta_{k}\) are summable, the iterates of PnP-ADMM with mismatched denoisers satisfy \(\nabla f(\mathbf{x}^{t})\to\mathbf{0}\) as \(t\to\infty\), and PnP-ADMM converges to a stationary point of the objective function \(f\) associated with the target denoiser. On the other hand, if the sequence \(\delta_{k}\) is not summable, the convergence is only up to an error term that depends on the distance between the target and mismatched denoisers. Theorem 1 can be viewed as a more flexible alternative for the convergence analyses in [28, 31, 32]. While the analyses in the prior works assume convex \(g\) and nonexpansive residual, nonexpansive, or bounded denoisers, our analysis allows the denoiser to be a mismatched MMSE estimator whose distance to the target denoiser is bounded by \(\delta_{k}\) at each iteration of PnP-ADMM.
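As a toy illustration of the role of the error term in Theorem 1, the short sketch below (with purely illustrative numbers) contrasts a summable sequence \(\delta_{k}\), for which \(\overline{\varepsilon}_{t}\) vanishes as \(t\) grows, with a constant sequence, for which \(\overline{\varepsilon}_{t}\) settles at a nonzero floor.

```python
import numpy as np

def averaged_error(deltas):
    """Compute eps_bar_t = (1/t) * sum_{k<=t} max(delta_k, delta_k^2) from Theorem 1."""
    eps = np.maximum(deltas, deltas**2)
    return np.cumsum(eps) / np.arange(1, len(eps) + 1)

t = np.arange(1, 10001)
summable = averaged_error(0.5 / t**2)                          # delta_k summable: term vanishes
constant = averaged_error(0.5 * np.ones_like(t, dtype=float))  # constant delta_k: error floor
print(summable[-1], constant[-1])                              # close to 0 versus 0.5
```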
In conclusion, PnP-ADMM using a mismatched MMSE denoiser approximates the solution obtained by PnP-ADMM using the target MMSE denoiser with an error that depends on the discrepancy between the denoisers. Therefore, one can control the accuracy of PnP-ADMM using mismatched denoisers by controlling the error term \(\overline{\varepsilon}_{t}\). This error term can be controlled by using domain adaptation techniques to decrease the distance between the mismatched and target denoisers, thus closing the gap in the performance of PnP-ADMM. We numerically validate this observation in Section 4 by considering the fine-tuning of mismatched denoisers to the target distribution with a limited number of samples. ## 4 Numerical Validation We consider PnP-ADMM with mismatched and adapted denoisers for the task of image super-resolution. Our first set of results shows how distribution shifts relate to the prior disparities and their impact on PnP recovery performance. Our second set of results shows the impact of domain adaptation on the denoiser gap and PnP performance. We use the traditional \(l_{2}\)-norm as the data-fidelity term. To provide an objective evaluation of the final image quality, we use two established quantitative metrics: Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM). We use the DRUNet architecture [12] for all image denoisers. To model prior mismatch, we train denoisers on five image datasets: MetFaces [65], AFHQ [67], CelebA [66], BreCaHAD [69], and RxRx1 [68]. Figure 2 illustrates samples from the datasets. Our training dataset consists of 1000 randomly chosen, resized, or cropped image slices, each measuring \(256\times 256\) pixels. Unlike several existing PnP methods [23, 28] that suggest including spectral normalization layers in the CNN to enforce Lipschitz continuity of the denoisers, we directly train denoisers without any nonexpansiveness constraints. Figure 2: _Sample images from the datasets used for training the denoisers. From left to right: MetFaces [65], CelebA [66], AFHQ [67], RxRx1 [68], and BreCaHAD [69]._ ### Impact of Prior Mismatch The observation model for single image super-resolution is \(\mathbf{y}=\mathbf{SHx}+\mathbf{e}\), where \(\mathbf{S}\in\mathbb{R}^{m\times n}\) is a standard \(s\)-fold downsampling matrix with \(n=m\times s^{2}\), \(\mathbf{H}\in\mathbb{R}^{n\times n}\) is a convolution with an anti-aliasing kernel, and \(\mathbf{e}\) is the noise. To compute the proximal map efficiently for the \(l_{2}\)-norm data-fidelity term (Step 3 in Algorithm 1), we use the closed-form solution outlined in [12, 70]. Similarly to [12], we use four isotropic kernels with different standard deviations \(\{0.7,1.2,1.6,2\}\), as well as four anisotropic kernels depicted in Table 1. We perform downsampling at scales of \(s=2\) and \(s=4\). Figure 3 illustrates the performance of PnP-ADMM using the target and four mismatched denoisers. Note the suboptimal performance of PnP-ADMM using mismatched denoisers trained on the BreCaHAD, RxRx1, CelebA, and AFHQ datasets relative to PnP-ADMM using the target denoiser trained on the MetFaces dataset. This comparison shows how distribution shifts lead to mismatched denoisers, subsequently impacting the performance of PnP-ADMM.
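For reference, a small sketch of this observation model is given below; the blur standard deviation and noise level are illustrative placeholders, and the efficient closed-form proximal step of [12, 70] used in the experiments is not reproduced here (a generic least-squares solver could be substituted for Step 3 when prototyping).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def forward_sr(x, scale=4, blur_sigma=1.6):
    """Apply y = S H x: Gaussian anti-aliasing blur followed by s-fold downsampling."""
    hx = gaussian_filter(x, sigma=blur_sigma)   # H: convolution with the blur kernel
    return hx[::scale, ::scale]                 # S: keep every s-th pixel in each direction

def observe(x, scale=4, blur_sigma=1.6, noise_std=0.01, rng=None):
    """Generate a noisy low-resolution observation y = S H x + e."""
    rng = np.random.default_rng() if rng is None else rng
    y = forward_sr(x, scale, blur_sigma)
    return y + noise_std * rng.standard_normal(y.shape)
```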
It is worth noting that the denoiser trained on the CelebA dataset [66], which consists of facial images similar to MetFaces, is the best-performing mismatched denoiser. [Table 1: PSNR (dB) and SSIM of PnP-ADMM super-resolution with the target and mismatched priors for the isotropic and anisotropic kernels at scales \(s=2\) and \(s=4\), together with the averages; the numerical entries are not recoverable from the source.] Table 1 provides a quantitative evaluation of the PnP-ADMM performance, with the target denoiser consistently outperforming all the other denoisers. Notably, the mismatched denoiser trained on the BreCaHAD dataset [69], containing cell images that are most dissimilar to MetFaces, exhibits the worst performance. ### Domain Adaptation In domain adaptation, the pre-trained mismatched denoisers are updated using a limited number of samples from the target distribution. We investigate two adaptation scenarios: in the first, we adapt the denoiser initially pre-trained on the BreCaHAD dataset to the MetFaces dataset, and in the second, we use the denoiser initially pre-trained on CelebA for adaptation to the RxRx1 dataset.
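In both scenarios the adaptation step itself is simple: the mismatched network is fine-tuned with the same MSE objective of (5) on the few available target-domain images. A minimal sketch is given below, assuming a PyTorch denoiser module; the names `denoiser` and `target_images`, the noise level, and the optimizer settings are placeholders rather than the exact configuration used in the reported experiments.

```python
import torch

def adapt_denoiser(denoiser, target_images, sigma=25/255, steps=1000, lr=1e-5):
    """Fine-tune a pre-trained (mismatched) denoiser on a few target-domain images."""
    opt = torch.optim.Adam(denoiser.parameters(), lr=lr)
    for _ in range(steps):
        x = target_images[torch.randint(len(target_images), (1,))]  # pick one clean image
        v = x + sigma * torch.randn_like(x)                         # synthesize an AWGN input
        loss = torch.mean((denoiser(v) - x) ** 2)                   # MSE loss of Eq. (5)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return denoiser
```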
Figure 4 illustrates the influence of domain adaptation on denoising and PnP-ADMM. The reported results are tested on the RxRx1 and MetFaces datasets for the super-resolution task. The kernel used is shown on the top left corner of the ground truth image in Figure 3 and the images are downsampled at the scale of \(s=4\). Note how the denoising performance improves as we increase the number of images used for domain adaptation. This indicates that domain adaptation reduces the distance between the mismatched and target denoisers. Additionally, note the direct correlation between the denoising capabilities of the priors and the performance of PnP-ADMM. Figure 4 shows that the performance of PnP-ADMM with mismatched denoisers can be significantly improved by adapting the mismatched denoiser to the target distribution, even with just four images from the target distribution. Figure 5 presents visual examples illustrating domain adaptation in PnP-ADMM for image super-resolution. The recovery performance is shown for two test images from the MetFaces dataset using adapted denoisers against both target and mismatched denoisers. The experiment was conducted under the same settings as those in Figure 3. Note the effectiveness of domain adaptation in mitigating the impact of distribution shifts on PnP-ADMM. Table 2 provides quantitative results of several adapted priors on the test data. The results presented in Table 2 show the substantial impact of domain adaptation, using a limited amount of data, in significantly narrowing the performance gap that emerges as a consequence of distribution shifts. ## 5 Conclusion This work presented a theoretical and numerical study of PnP-ADMM under prior mismatch. The theoretical results in this paper extend the recent PnP work by accommodating mismatched priors while eliminating the need for convex data-fidelity and nonexpansive denoiser assumptions. The empirical validation of PnP-ADMM involving mismatched priors and the domain adaptation strategy highlights the direct relationship between the gap in priors and the subsequent performance gap in the PnP-ADMM recovery, effectively reflecting the influence of distribution shifts on image priors. Table 2: _PSNR (dB) and SSIM comparison of super-resolution with mismatched, target, and adapted denoisers for the test set from MetFaces, averaged for indicated kernels. We highlighted the **target**, mismatched, and the best adapted priors._ Figure 5: _Visual comparison on super-resolution with target (MetFaces), mismatched (BreCaHAD), and adapted priors on two MetFaces test images.
The images are downsampled at a scale of \(s=4\). The performance is reported in terms of PSNR (dB) and SSIM. Note how the recovery performance increases as the mismatched priors are adapted to a larger set of images from the target distribution._ ## Limitations Our analysis relies on the assumption that the denoiser used at inference accurately computes the MMSE denoiser for both the target and mismatched distributions. While this assumption aligns well with deep denoisers trained via the MSE loss, it does not directly extend to denoisers trained using alternative loss functions, such as the \(l_{1}\)-norm or SSIM. As is customary in theoretical research, our analysis remains valid only under the fulfillment of these assumptions, which could potentially restrict its practical applicability. In our future work, we aim to enhance the results presented here by exploring new PnP strategies that can relax these convergence assumptions. ## Reproducibility Statement We have provided the anonymous source code in the supplementary materials. The included README.md file contains detailed instructions on how to run the code and reproduce the results reported in the paper. The algorithm's pseudo-code is outlined in Algorithm 1, and for more comprehensive information about training, parameter selection, and other details, please refer to Supplementary Section G. Additionally, for the theoretical findings presented in Section 3, complete proofs, along with further clarifications on the assumptions and additional contextual information, can be found in Appendices A-E. ## Ethics Statement To the best of our knowledge, this work does not give rise to any significant ethical concerns. ## Acknowledgement This paper is partially based upon work supported by the NSF CAREER awards under grants CCF-2043134 and CCF-2046293.
Plug-and-Play (PnP) priors are a widely used family of methods for solving imaging inverse problems by integrating a physical measurement model with an image prior specified through an image denoiser. PnP methods have been shown to achieve state-of-the-art performance when the prior is obtained from a powerful deep denoiser. Despite the extensive work on PnP, the distribution mismatch between training and test data has often been overlooked in the PnP literature. This paper presents new theoretical and numerical results on distribution mismatch and domain adaptation for the alternating direction method of multipliers (ADMM) variant of PnP. The theoretical results give an explicit error bound for PnP-ADMM in terms of the discrepancy between the desired denoiser and the denoiser used at inference
2309.14025
Optimum control strategies for maximum thrust production in underwater undulatory swimming
Fishes, cetaceans, and many other aquatic vertebrates undulate their bodies to propel themselves through water. Swimming requires an intricate interplay between sensing the environment, making decisions, controlling internal dynamics, and moving the body in interaction with the external medium. Within this sequence of actions initiating locomotion, biological and physical laws manifest complex and nonlinear effects, which does not prevent natural swimmers from demonstrating efficient movement. This raises two complementary questions: how to model this intricacy and how to abstract it for practical swimming. In the context of robotics, the second question is of paramount importance to build efficient artificial swimmers driven by digital signals and mechanics. In this study, we tackle these two questions by leveraging a biomimetic robotic swimmer as a platform for investigating optimal control strategies for thrust generation. Through a combination of machine learning techniques and intuitive models, we identify a control signal that maximizes thrust production. Optimum tail-beat frequency and amplitude result from the subtle interplay between the swimmer's internal dynamics and its interaction with the surrounding fluid. We then propose a practical implementation for autonomous robotic swimmers that requires no prior knowledge of systems or equations. Direct fluid-structure simulations confirm the effectiveness and reliability of the proposed approach. Hence, our findings bridge fluid dynamics, robotics, and biology, providing valuable insights into the physics of aquatic locomotion.
L. fu, S. Israilov, J. Sanchez Rodriguez, C. Brouzet, G. Allibert, C. Raufaste, M. Argentina
2023-09-25T10:42:58
http://arxiv.org/abs/2309.14025v3
# Optimum control strategies for maximum thrust production in underwater undulatory swimming ###### Abstract The diversity of shapes and physiologies among multicellular organisms is tremendous, and it can be rationalized by the principles of Darwinian evolution and the various functions required by animals (1). Locomotion is a vital activity for metazoans, as it enables them to fulfill essential functions necessary for survival, such as accessing favorable environments, engaging in reproduction, hunting, and evading predators. From tadpoles of a few centimeters to whales of 20 meters in length, swimming consists in pushing the water by undulating the body (2) which produces a thrust exploiting the inertia of the displaced fluid (3). The kinematics of underwater undulatory swimming appear to be particularly robust in vertebrates, highlighting general physical principles. The wavelength of the body deformation is of the order of the length of the swimmer (4, 5), while there is on average a factor 0.2 between tail beat amplitude and swimmer length (6-10). The tail beat frequency \(f\) is not fixed for an individual but tunes its swimming speed: the higher the frequency, the higher the speed (3, 6, 9-11). There is evidence that each swimmer can vary its frequency within a frequency band whose range is set by the interplay between the muscle properties and the interaction of the swimmer with its surrounding fluid (10). Muscles have their own limits in terms of speed of contraction and tension, as represented by Hill's muscle model (12). Constraints can also be imposed by decision processes that are either spontaneous, through proprioceptive reflexes (13-15), or conscious, for instance through the choice of the activity level (16, 17). These decisions drive the gait (18) and we can expect different control strategies for an individual swimming at burst speed (19) and the same swimmer exhibiting a swim-and-coast gait in a sustained level of activity (20). These considerations are echoed in biomimetic robotic swimmers (15, 21-24). The biological nature of the internal dynamics is here replaced by robotic elements from electronics, mechanics and/or computer science. Developing efficient and fully autonomous artificial swimmers requires finding the appropriate control strategies tailored to specific constraints to achieve optimal performance for a given action. Recent years have witnessed the development of various approaches to control soft robot fish, both in simulations and experiments. Classical control techniques such as PI (25), PID (26), and robust controllers (27) have been employed to improve trajectory tracking performance. The emergence of artificial intelligence algorithms has paved the way for novel control approaches designed specifically for soft swimmers. Models and simulations are now utilized to explore various machine learning algorithms, achieving high swimming speeds while maintaining trajectory accuracy (28-31). Regardless of the methodology employed, the quest to achieve the highest swimming speed through robotic fish remains an enduring challenge in the domain of aquatic locomotion. While certain investigations have explored the correlation between swimming speed and tail beat frequency (31-33), none of these prior studies has offered a conclusive comprehension of the optimal control strategy required to attain this objective. 
In this article, we present our findings on the most effective scheme for generating maximum thrust in swimming using a single-parameter control system. Initially, we showcase the experimental outcomes achieved by employing deep reinforcement learning (RL) techniques (34) to drive a swimming robot. Through these experiments, we identify the optimal control strategy, which involves utilizing a square wave function that alternates between the two extreme values allowed by the controller. To gain a deeper understanding of the underlying physics, we develop a theoretical model that further supports and explains the observed results. Subsequently, we rigorously validate our optimized thrust strategy through a comprehensive 2D numerical simulation.
By aligning experiments, theory, and numerical simulations, we successfully determine the most efficient approach to achieve maximum swimming speed. This integrated approach provides valuable insights into the best methods for enhancing propulsion in aquatic systems. ## Results ### Experiments and machine learning We utilized the experimental platform described in [15, 35], which was equipped with the robotic fish illustrated in Fig. 1. The robotic fish consisted of a deformable skeleton with a fin attached to its end, fabricated through 3D printing using a flexible polymer. To achieve controlled deformation, a servomotor was employed, connected to the skeleton's end via two cables. By rotating the wheel of the servomotor by an angle \(\phi(t)\), we induced a deformation that controlled the fin angle \(\alpha(t)\). This design closely emulates the functioning of antagonistic muscles found in natural swimmers, responsible for initiating body deformations. The robot was immersed inside a water tank and its head was fixed to a force sensor that measured the longitudinal force \(F_{x}(t)\). This force served as a measure of thrust and was, on average, positive when the fish was in the propulsion phase. We employed an efficient RL algorithm to identify the optimal control signal that maximized the average thrust. In accordance with the specified details in the Materials and Methods section, we limited the instruction angle \(\phi_{\mathrm{c}}(t)\) of the servomotor to vary within the interval \([-\Phi,\Phi]\). The RL algorithm was designed to maximize a reward proportional to the thrust \(F_{x}(t)\). The algorithm required inputs to characterize the state of the system at every time step. Two independent sets of inputs were considered consecutively to probe the robustness of the learning process and ensure convergence toward the optimal control signal. First, the algorithm was provided with \(\phi_{\mathrm{c}}(t)\), \(F_{y}(t)\) and its time derivative \(\dot{F}_{y}(t)\) to account for the oscillatory nature of the system. After a few hours, the learning process converged toward a periodic motion. As shown in Fig. 2a, the optimal control signal \(\phi_{\mathrm{c}}^{*}(t)\) consists of a square wave function that switches abruptly between the two allowed extreme values \(\pm\Phi\). Second, we provided images recorded by a camera, as shown in Fig. 1b, as inputs instead of relying on lateral force measurements. At each time step, four images were used (i.e., the current image and the previous three) to calculate convergence and speed. Despite this change in inputs, the resulting optimal control signal remained consistent with the previous findings. That is, the algorithm converged to a square wave function with the same amplitude and frequency, reaffirming the robustness of the previously identified optimal control strategy. The two approaches were then compared in terms of relevant physical quantities while varying the maximum instruction angle \(\Phi\). In Fig. 3, the maximized average thrust \(\overline{F_{x}}^{*}\), the optimal frequency \(f^{*}\) and the corresponding maximum fin angle \(\alpha_{\mathrm{max}}^{*}\) are plotted as functions of \(\Phi\). Whatever \(\Phi\), the agreement is excellent between the two approaches for all quantities. Specifically, \(\alpha_{\mathrm{max}}^{*}\) is proportional to \(\Phi\) and the optimal frequency is a slowly decreasing function of \(\Phi\) varying between \(2.5\) and \(1\) Hz in the parameter range considered. 
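As an illustration of how this control task can be cast as a Markov decision process for the RL algorithm, the sketch below wraps the robot interface in a Gymnasium-style environment with the state, action, and reward described above (and detailed in Materials and Methods). The functions `read_forces` and `send_command` are hypothetical stand-ins for the actual sensor and servomotor interfaces, and the numerical values of \(\Phi\) and the control period are illustrative.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class RoboticFishEnv(gym.Env):
    """MDP sketch: state (F_y, dF_y/dt, phi_c), action phi_c in [-Phi, Phi], reward ~ F_x."""

    def __init__(self, read_forces, send_command, phi_max=0.42, dt=0.05):
        super().__init__()
        self.read_forces = read_forces      # hypothetical interface returning (F_x, F_y)
        self.send_command = send_command    # hypothetical interface driving the servomotor
        self.phi_max, self.dt = phi_max, dt
        self.action_space = spaces.Box(-phi_max, phi_max, shape=(1,), dtype=np.float32)
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(3,), dtype=np.float32)
        self._fy_prev = 0.0

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.send_command(0.0)
        _, fy = self.read_forces()
        self._fy_prev = fy
        return np.array([fy, 0.0, 0.0], dtype=np.float32), {}

    def step(self, action):
        phi_c = float(np.clip(action[0], -self.phi_max, self.phi_max))
        self.send_command(phi_c)
        fx, fy = self.read_forces()                 # thrust F_x and normal force F_y
        dfy = (fy - self._fy_prev) / self.dt
        self._fy_prev = fy
        obs = np.array([fy, dfy, phi_c], dtype=np.float32)
        return obs, fx, False, False, {}            # reward proportional to the thrust F_x
```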
To confirm that square wave forcing produces the greatest thrust, we applied three types of classical periodic forcing (sinusoidal, triangular, and square) for a given \(\Phi\) and measured the averaged thrust as a function of forcing frequency, \(f\). Results are shown in Fig. 4 for \(\Phi=0.42\) rad. Regardless of the type of actuation, the variation of the forcing frequency results in the presence of a peak in thrust, but the square wave forcing consistently generates the highest average thrust. This maximum thrust is achieved at a frequency of \(1.8\) Hz, a value that closely aligns with the frequency obtained using the RL approach. The analysis thus reveals a robust feature of optimal control: a square wave function oscillating between two extreme values. This suggests the existence of a strong mechanism driving maximum thrust. ### Model We present a comprehensive model that combines the physical aspects of underwater undulatory swimming with the internal dynamics of the servomotor to gain insights into the experimentally discovered optimal solutions. The dynamics of the fin angle \(\alpha(t)\) can be described by a damped harmonic oscillator, influenced by the angle \(\phi(t)\) of the servomotor wheel [15]: \[\ddot{\alpha}(t)+\xi\omega_{0}\dot{\alpha}(t)+\omega_{0}^{2}(\alpha(t)-\alpha_{\mathrm{c}}(t))=0,\ \alpha_{\mathrm{c}}(t)=\lambda\phi(t).\] [1] This equation captures the interaction between the deformable fin and the surrounding water, driven by an instruction angle, \(\alpha_{\mathrm{c}}(t)\), that is proportional to the servomotor wheel angle, \(\phi(t)\). The numerical values of the parameters were determined following the procedure described in [15]. The proportionality factor \(\lambda=0.46\) depends on the length of the cables and elasticity of the polymeric skeleton. Additionally, the servomotor has its own internal dynamics to adjust the servomotor wheel angle \(\phi(t)\) to the instruction angle \(\phi_{\mathrm{c}}(t)\) [15]: \[\dot{\phi}(t)=\Omega\tanh\left(\frac{1}{\Delta}\left(\phi_{\mathrm{c}}(t)-\phi(t)\right)\right).\] [2] The model includes the parameters \(\Omega=5.8\) rad.s\({}^{-1}\) and \(\Delta=0.29\) rad, which represent the maximum angular speed of the wheel and the required angle difference for the servomotor to operate at its maximum angular speed, respectively. The nonlinearity introduced by the \(\tanh\) function results in the saturation of the wheel speed \(\dot{\phi}(t)\) at \(\pm\Omega\) if the servomotor is unable to keep up with the provided instruction. This saturation occurs when the servomotor is too slow to reach the desired angle \(\phi_{\mathrm{c}}(t)\). Eqs. (1) and (2) correspond to two essential aspects of the dynamics: one pertains to the interaction between the undulating swimmer and its aquatic environment, while the other accounts for the limitations imposed by the swimmer's internal dynamics. Our experimental system thus serves as a suitable candidate for mimicking the dual nature of dynamics observed in natural fish. In natural fish, the internal dynamics are limited by biological constraints at the muscular level [10]. The thrust is proportional to the mass of water accelerated in the longitudinal direction and is written \(F_{x}=-K\alpha(t)\ddot{\alpha}(t)\) [15, 36], where the parameter \(K=12.9\times 10^{-3}\) N.rad\({}^{-2}\).s\({}^{2}\) characterizes the thrust efficiency of the robot in water and is measured following the procedure described in (15).
The average thrust over an undulation period \(T=1/f\) can be written as: \[\overline{F_{x}}=-\frac{K}{T}\int_{0}^{T}\alpha(t)\ddot{\alpha}(t)dt=\frac{K}{T}\int_{0}^{T}\dot{\alpha}(t)^{2}dt. \tag{3}\] We employ RL techniques on the model to verify the optimal control strategy that yields the maximum average thrust (see the Materials and Methods section for more details). In Fig. 2b, we show the time evolution of the optimal instruction angle \(\phi_{c}^{*}(t)\) for \(\Phi=0.31\) rad, alongside the corresponding rescaled fin angle \(\alpha^{*}(t)/\lambda\) and thrust \(F_{x}^{*}(t)\). Consistent with the experimental results, the optimal instruction angle adopts a square wave function, alternating between \(\Phi\) and \(-\Phi\). In fact, the dynamics presented in Figs. 2a and b, measured in experiments and simulated with the model, are very similar. This is confirmed by the excellent agreements shown in Figs. 3b and c for the relevant quantities such as undulation frequency and maximum fin angle of the optimal control. Furthermore, this simple model is able to reproduce the average thrust force, as depicted in Fig. 3a. Regardless of the specific value of \(\Phi\), both the experiments and the model demonstrate that the system waits for the fin angle to approach its instruction before changing the control. In other words, the control signal switches direction when the fin angle is close to its maximal value \(\alpha_{\max}^{*}=\lambda\Phi\). This finding suggests an intuitive mechanism for selecting the frequency simultaneously with the control switch, emphasizing the efficiency and adaptability of the system in achieving maximum thrust. The model is also validated with the data obtained in Fig. 4 in which sinusoidal, triangular and square wave forms were enforced. The agreement between the experiments and the model is again excellent. The maximum thrust is generated around a frequency of 1.8 Hz, indicating that the system operates optimally near its resonance in \(\dot{\alpha}\), which occurs at a frequency \(\omega_{0}/(2\pi)\sim 2.0\) Hz, regardless of the damping factor. For comparison, the resonance in \(\alpha\) suggested by various other studies (37-39) occurs at a lower value, \(\omega_{0}\sqrt{1-\xi^{2}/2}/(2\pi)\simeq 1.1\) Hz in our system. ### Bang-Bang control The periodic and abrupt changes in the instruction angle resemble the behavior of a bang-bang controller (40). In the context of the model, we apply Pontryagin's maximum principle (41) to explain why square wave control at maximum allowed values \(\pm\Phi\) in the instruction angle \(\phi_{c}\) produces the maximum thrust \(\overline{F_{x}}\), defined in Eq. (3), under the constraints of Eqs. (1) and (2). The determination of the maximum thrust thus requires the resolution of a variational problem subject to three Lagrange multipliers, associated with the dynamic equations for \(\alpha(t),\dot{\alpha}(t)\), and \(\phi(t)\). The bang-bang controller provides the optimal control if the response is driven by linear equations (42). This condition is not completely satisfied in this system due to the nonlinear behavior of the servomotor's internal dynamics, Eq. (2). However, a bang-bang controller remains the optimal choice in this system because of the specific nature of the nonlinear function driving the relaxation dynamics of \(\phi\) (SI Appendix). This appears particularly simple to explain for a fast servomotor, i.e., if \(\phi(t)\) almost immediately follows the command \(\phi_{c}(t)\).
In our case, we can assimilate \(\phi(t)\) to \(\phi_{c}(t)\) for \(\Phi\ll\Phi_{s}\), with: \[\Phi_{s}=\frac{\Omega}{\omega_{0}}, \tag{4}\] as shown in the SI Appendix. In both limits, the forcing command acts linearly through the Eq. (1), which completely justifies the choice of a bang-bang controller as an optimal strategy. In addition, this asymptotic permits the computation of the average thrust resulting from a square wave driving at frequency \(f\): \[\overline{F_{x}}=K\lambda^{2}\Phi^{2}\omega_{0}^{2}\frac{\frac{4f}{2\omega_{0 }}\left(\sinh\left(\frac{\xi\omega_{0}}{4f}\right)-\frac{\xi\sinh\left(\sqrt{ \xi^{2}-4}\frac{\omega_{0}}{4f}\right)}{\sqrt{\xi^{2}-4}}\right)}{\cosh\left( \sqrt{\xi^{2}-4}\frac{\omega_{0}}{4f}\right)+\cosh\left(\frac{\xi\omega_{0}}{4f }\right)}, \tag{5}\] where \(K\lambda^{2}\Phi^{2}\omega_{0}^{2}\) is the dimensional value that gives the scaling of the average thrust in this limit. Regardless of the value of \(\xi\) between 0 and 2, this thrust is maximum when \(\dot{\alpha}\) is resonant and the oscillator is driven close to its undamped frequency: \(f^{*}\simeq\omega_{0}/(2\pi)\). For the case of a slow servomotor with \(\Phi\gg\Phi_{s}\), the system of equations is nonlinear, and the maximum thrust can be expressed as: \[\overline{F_{x}}=K\lambda^{2}\Omega^{2}, \tag{6}\] which is reached as long as the servomotor wheel is moving at maximum angular speed \(\Omega\) (i.e. \(\dot{\alpha}(t)=\pm\lambda\Omega\) in Eq. (3)). The square wave forcing at maximum allowed values appears as the optimal solution to maximize the difference between \(\phi_{c}(t)\) and \(\phi(t)\) in Eq. (2) and ensures that the servomotor wheel is moving at maximum angular speed. In this limit, the optimal undulation period is determined by the shortest time required to sweep the servomotor angle from \(-\Phi\) to \(\Phi\) and then back to \(-\Phi\), all at maximum angular speed. This leads to the expression \(f^{*}=\Omega/(4\Phi)\). The behavior resulting from both limits is depicted in Fig. 3, and agrees with the outcomes obtained in experiments and with the model, as well as indicating a transition for \(\Phi\) around \(\Phi_{s}\). ### Swinging control: a model-free strategy In the above model, to maximize thrust, the optimal frequency must be found in order to select the best square wave function for control. It is therefore necessary to perform preliminary calibrations to measure the relevant quantities, in this case \(\omega_{0}\), \(\xi\), \(\Omega\) and \(\Delta\), to bring a particular system closer to optimality. Here we explore another strategy that does not require any prior knowledge about the system. Starting from the thrust expression, Eq. (3), the optimization is intrinsically linked to the time evolution of the fin angle velocity \(\dot{\alpha}\). An intuitive approach would be to favor the phases of maximum speed by reversing the sign of the instruction angle when the fin slows down too much. We call this swinging control in reference to the way children are able to swing without knowing the physical laws involved. To this end, we introduced an instruction-changing criterion \(C\) (\(0\leq C\leq 1\)), such that \(\phi_{c}\) changes sign if the speed slows to \(C\dot{\alpha}_{M}\), with \(\dot{\alpha}_{M}\) the last observed maximal angle speed. This condition ensures that the highest possible speed is maintained and that the instruction is changed when the speed becomes too slow. 
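To make these ideas concrete, the sketch below integrates Eqs. (1)-(2) and evaluates the average thrust of Eq. (3) for a square-wave command and for the swinging criterion just described. The parameter values follow those quoted in the text (\(\lambda=0.46\), \(\Omega=5.8\) rad.s\({}^{-1}\), \(\Delta=0.29\) rad, \(K=12.9\times 10^{-3}\) N.rad\({}^{-2}\).s\({}^{2}\), \(\omega_{0}/(2\pi)\simeq 2.0\) Hz), while the damping ratio, the time step, the explicit Euler scheme, and the switching threshold are illustrative choices and not those of the original study.

```python
import numpy as np

# Parameters quoted in the text; XI (damping ratio) and dt are illustrative assumptions.
LAM, OMEGA, DELTA = 0.46, 5.8, 0.29                 # lambda, Omega (rad/s), Delta (rad)
K, W0, XI = 12.9e-3, 2 * np.pi * 2.0, 1.2           # K (N rad^-2 s^2), omega_0 (rad/s), xi

def simulate(controller, Phi, T=20.0, dt=1e-3):
    """Integrate Eqs. (1)-(2) with explicit Euler and return the average thrust of Eq. (3)."""
    alpha = alpha_dot = phi = 0.0
    state = {"phi_c": Phi, "alpha_dot_max": 0.0}    # memory used by feedback controllers
    thrust = 0.0
    for i in range(int(T / dt)):
        phi_c = controller(i * dt, alpha_dot, state)
        phi += dt * OMEGA * np.tanh((phi_c - phi) / DELTA)               # Eq. (2)
        alpha_ddot = -XI * W0 * alpha_dot - W0**2 * (alpha - LAM * phi)  # Eq. (1)
        alpha += dt * alpha_dot
        alpha_dot += dt * alpha_ddot
        thrust += dt * K * alpha_dot**2             # integrand of Eq. (3)
    return thrust / T

def square_wave(f, Phi):
    """Bang-bang command switching between +Phi and -Phi at frequency f."""
    return lambda t, alpha_dot, state: Phi * np.sign(np.sin(2 * np.pi * f * t))

def swinging(C=0.6):
    """Reverse the command sign when |alpha_dot| falls below C times its last maximum."""
    def controller(t, alpha_dot, state):
        state["alpha_dot_max"] = max(state["alpha_dot_max"], abs(alpha_dot))
        if state["alpha_dot_max"] > 1e-3 and abs(alpha_dot) <= C * state["alpha_dot_max"]:
            state["phi_c"] = -state["phi_c"]        # switch the instruction angle
            state["alpha_dot_max"] = 0.0            # restart the search for the next maximum
        return state["phi_c"]
    return controller

Phi = 0.42
print(simulate(square_wave(1.8, Phi), Phi))          # square wave near the optimal frequency
print(simulate(swinging(C=0.6), Phi))                # model-free swinging control
```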
We found numerically that this strategy is always independent of the system history and the initial conditions, as the algorithm converges toward a unique solution. We evaluated the average thrust resulting from the swinging strategy by varying the parameters \(\omega_{0}\Phi/\Omega\), which measures the servomotor's ability to follow the instruction, and \(C\). Figure 5 shows that a criterion value of \(C\) ranging from 0 to 0.8 yields satisfactory performance for the swinging strategy for any value of \(\omega_{0}\Phi/\Omega\). The swinging control consistently delivers excellent outcomes without imposing stringent constraints on the choice of \(C\): the thrust efficiency, defined as \(\overline{F_{x}^{\mathrm{swing}}}/\overline{F_{x}^{\star}}\), is higher than 70% over this parameter range, and for \(C=0.6\) the ratio exceeds 95% regardless of the value of \(\omega_{0}\Phi/\Omega\), although fine-tuning remains possible. This strategy can be applied even in specific limiting cases: for example, for a fast servomotor (\(\Phi\ll\Phi_{s}\)) and an undamped oscillator (\(\xi\to 0\)), the optimum is attained when the instruction angle changes sign at \(\dot{\alpha}=0\) (i.e., \(C=0\)) with a thrust efficiency very close to 100%, as shown in the inset of Fig. 5 and detailed in (SI Appendix). Swinging control is a robust strategy for achieving the highest thrusts without prior knowledge of the system or complex control algorithms. Its straightforward implementation using basic sensors makes it practical for various applications. ### 2D direct numerical simulations To transpose our results to a real swimming problem, we conducted simulations for the complete fluid-structure interaction in a 2D configuration (see details in the Materials and Methods section). In this numerical setup, the swimmer moves within a tank filled with liquid having the same density and viscosity as water. We consider the fish body to be viscoelastic, and we adjust its parameters to match the values of \(\omega_{0}\) and \(\xi\) obtained from the robotic fish experiments. The entire body measures \(L\!=\!10\) cm with an average thickness equal to \(H=1.3\) cm. To model muscular activity, we impose a spatiotemporal variation of the elastic body length at equilibrium. In particular, we impose that the equilibrium value of the strain component \(\epsilon_{xx}\) varies parabolically with the distance to the head \(X\), and linearly with the distance from the midline \(Y\): \(\epsilon_{xx}\propto(X/L)^{2}(Y/H)a(t)\), where \(a(t)\) drives the swimmer deformation and varies temporally. With this functional form, one half of the body (\(Y\gtrsim 0\)) extends its length while the other part (\(Y<0\)) retracts, resulting in the swimmer bending to compensate for the inhomogeneous change in length across the body thickness. We have simulated the motion of this 2D active elastic beam embedded in water, driven by different functional forms of \(a(t)\in(-1,1)\). In Fig. 6, we present the cruising speed achieved with square, sine, and triangular wave functions at various control frequencies (\(f\)). The square wave forcing consistently leads to the highest speed, regardless of the frequency, confirming the predictions from the RL algorithm and the model. In addition, we have implemented the swinging strategy with \(C=0\). The swinging controller automatically selects a frequency that propels the swimmer to a speed close to the maximum, as depicted in Fig. 6.
Therefore, our numerical simulations validate our interpretations regarding the mechanisms to achieve the highest swimming speed. ## Discussion In this article, our primary objective was to determine the actuation that yields the highest thrust, and thus the highest swimming speed, for underwater swimmers. Considering the complex fluid-structure interaction at relatively high Reynolds numbers, we employed various approaches and methods to shed light on this topic. Notably, reinforcement learning techniques played a crucial role in revealing that the most effective method to achieve high thrust in experiments involves employing a bang-bang controller. This type of actuation oscillates abruptly and periodically between the two extreme command values. To rationalize and validate these experimental results, we studied a simple model using reinforcement learning. The results from this model closely aligned with those obtained through the experimental approach, further reinforcing the effectiveness of the bang-bang controller. Additionally, full 2D numerical simulations of autonomous swimmers confirmed the validity of our findings. We have successfully illuminated the strategy to attain the highest thrust for underwater swimmers, comprising three key messages. Firstly, the command should resemble a periodic square function, exploiting the natural oscillation of the tail-fin system. Secondly, the frequency of actuation needs to be close to that which maximizes tail speed. This frequency tends to approach the body's natural undulation frequency as damping vanishes. Lastly, an efficient approach to selecting a nearly optimal swimming gait is to switch the actuation sign as the fin speed becomes too small. Given the simplicity and practicality of our results, we envision powerful applications in terms of underwater biomimetic swimmers. Furthermore, these findings can be utilized to develop strategies that enhance the performance of human athletes, particularly those involved in aquatic disciplines. The potential for real-world implementation is promising, opening up exciting avenues for both biomimetic technology and athletic performance improvement. ### Materials and Methods #### Experimental setup Figure 1 illustrates the setup of the robotic fish used in this study. A comprehensive description of the setup can be found in previous works [15, 35], and here we present the main features. The robot fish is equipped with a soft tail and caudal fin made of a flexible polymer called "NinjaFlex" from NinjaTek. The tail is actuated by a waterproof servomotor (Hitec HS-5086WP) connected to two nylon fishing cables. A Raspberry Pi 4B (8 GB) controls the servomotor using pulse-width modulation (PWM) signals. To measure forces, a force sensor is linked to the robot fish via an aluminum rod, enabling bi-directional force measurements (longitudinal force, \(F_{x}\), and normal force, \(F_{y}\)) with a precision of approximately \(10^{-3}\) N. An analog force signal is converted into a digital format using an ADC converter (Adafruit 1115) and then collected by the Raspberry Pi via the I2C interface. The robotic swimmer is positioned in a water tunnel (Rolling Hills Research Corporation, Model 0710) using a beam clamp that holds the aluminum rod. In this experimental study, our focus is on determining the highest thrust generated by the oscillation of the swimmer's tail; hence, no water flow was generated by the tunnel pump.
For the learning process based solely on force sensor measurements, we conducted it directly on the Raspberry Pi. However, for the image-based learning process, we utilized an external computer due to computational constraints. Data exchange between the Raspberry Pi and the computer occurred via the TCP/IP protocol using an RJ45 Ethernet cable. #### Machine Learning with force sensor In our search for the best functional form for the servomotor driving, we employed deep Reinforcement Learning (RL) techniques. Within this framework, we described the control of the swimmer using Markov decision processes (MDP). The state of the swimmer, denoted by \(s=(F_{y}(t),\dot{F}_{y}(t),\phi_{c}(t))\), is characterized by the normal force, \(F_{y}(t)\), and its derivative, \(\dot{F}_{y}(t)\), as well as the command angle of the servomotor, \(\phi_{c}(t)\). The action, represented by \(\phi_{c}(t)\in[-\Phi,\Phi]\), refers to the command angle of the servomotor, and the reward is directly proportional to the thrust force, \(F_{x}(t)\). The objective of the RL algorithm is to maximize the total cumulative reward, which, in this context, translates to maximizing the thrust, \(F_{x}\), generated by the swimmer. To achieve this, the RL algorithm explores the available action and state spaces and gradually converges to the best control sequence through a trial and error process. We exploit the PPO (proximal policy optimization) algorithm (43) for this study because it is capable of handling both discretized and continuous action spaces. PPO is a policy gradient method in RL that stabilizes the learning process to avoid large excursions. In our experiments, RL applies the control policy and sends a command to the robotic fish every 50 ms. The control policy is incrementally updated (trained) after every 768 control steps. We chose this number to ensure sufficient exploration of the action and state spaces before each update. The entire training process spans \(10^{5}\) control steps, which amounts to approximately 2 hours. Throughout this process, we save the control policy and the state value function every 5000 steps. After each training of the Neural Network, we conduct an inference to evaluate the quality of the current best control policy: only the best action is chosen at each step during this process. This evaluation provides valuable insights into the performance of the swimmer with the optimized control sequence. More details regarding the PPO implementation and the meta-parameters are given in the Supplementary Information file. #### Machine learning with images In our study, we employed a web-camera (ODRO UDS-CAM 720P) placed above the robotic fish's undulating fin to record its motion. Instead of the previous state representation \((F_{y}(t),\dot{F}_{y}(t),\phi_{c}(t))\), we used a stack of 4 consecutive images separated by the sampling time interval (44) to represent the state, \(s\), of the robotic fish. The recorded RGB images were preprocessed by converting them to gray-scale and re-scaling them from their original size to \((84,84)\) pixels. This resizing was performed to reduce the computation time while maintaining sufficient information. We used a Convolutional Neural Network (CNN), as introduced by (45, 46), as an image compression tool for the raw data of the images (\(84\times 84\times 4\) pixels). The output of the CNN is a latent vector with 512 dimensions that is then passed to the fully-connected neural networks to perform the actor-critic algorithm.
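For illustration, an encoder of this kind can be written compactly in PyTorch. The sketch below maps the \(84\times 84\times 4\) input stack to a 512-dimensional latent vector using a standard convolutional stack for 84-pixel frame stacks; the specific layer sizes shown here are illustrative, and the exact hyperparameters used in the experiments are given in the Supplementary Information file.

```python
import torch
import torch.nn as nn

class ImageEncoder(nn.Module):
    """Compress a stack of 4 gray-scale 84x84 frames into a 512-dimensional latent vector."""

    def __init__(self, latent_dim=512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),   # 84x84 -> 20x20
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),  # 20x20 -> 9x9
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),  # 9x9  -> 7x7
        )
        self.fc = nn.Linear(64 * 7 * 7, latent_dim)

    def forward(self, frames):                    # frames: (batch, 4, 84, 84)
        z = self.conv(frames)
        return torch.relu(self.fc(torch.flatten(z, start_dim=1)))

# The 512-dimensional output is then fed to the actor and critic heads of PPO.
encoder = ImageEncoder()
latent = encoder(torch.zeros(1, 4, 84, 84))       # -> shape (1, 512)
```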
Beyond this change in state representation, the remaining experimental methods for learning remained the same as those used for learning from the force sensor. ### Fluid-Structure Interaction Simulation. To simulate the fish swimming driven by an optimal command, we used the software COMSOL 6.1 following the approach outlined in (47). The 2D computational domain covers a rectangle with dimensions \(100\times 20\) cm\({}^{2}\), representing water. In Fig. (a), we show the geometry of the simulation. The two horizontal borders represent slip boundary conditions for the velocity, while the left and right borders are associated with entrance and exit boundary conditions. The swimmer is approximated by a viscoelastic beam of Young's modulus \(10^{4}\) Pa, a Poisson ratio of 0.3 and a viscosity of \(10^{-4}\) Pa.s. It navigates toward the left. The thickness \(t(X)\) of the viscoelastic beam is given by the relation \[t(X)=H\frac{X}{L}\left(1-\frac{X}{L}\right)e^{-\frac{X}{L}},\] where \(X\) is the curvilinear distance along the midline, measured from the head. Here \(H=4\) cm and \(L=10\) cm, such that the thickest part of the beam measures 1.3 cm, see Fig. (b). To model the antagonistic muscle action, we impose on the swimmer that the equilibrium component \(\epsilon_{xx}\) of the strain varies spatiotemporally: \[\epsilon_{XX}(X,Y,t)=0.01(X/L)^{2}(Y/H)a(t),\] where \(a(t)\) drives the motion dynamics, and \(\epsilon\) is the strain tensor (48). This forcing modifies the equilibrium length of each part of the body in an opposite manner: when the superior part of the swimmer elongates its length, the other part contracts it, following the dynamics of \(a(t)\). \(a(t)\) can be a wave function or defined as \(a(t)=\text{sign}(v(t))\), where \(v(t)\) is the normal velocity of the swimmer at the tail. The numerical prefactor is chosen such that the typical amplitude of the tail oscillation is close to 0.2. The fluid-structure problem is solved using a fully coupled approach and the PARDISO linear solver; the nonlinear problem is tackled with a Newton algorithm. Because the swimmer deforms its shape, the mesh is adapted using a Yeoh method. To avoid excessively large deformations in the mesh due to the swimmer's movement, the entire computational domain is remeshed if the discretization is too distorted. Approximately 7,000 vertices are needed for almost 40,000 degrees of freedom to coarsely solve the complete fluid-structure interaction on the whole domain and predict a correct swimming velocity. A finer element distribution is ensured at the head and tail (Fig. (d)). However, to accurately capture the wake, 300,000 elements are required, as shown in Fig. (d). The typical time step used is \(10^{-3}\) s. The center of mass of the swimmer is computed at each time step during the simulation. This comprehensive setup enables us to study and analyze the swimming behavior of the robotic fish driven by the optimal command obtained through the reinforcement learning process. This work has been supported by the French government, through the UCAJEDI and UCA DS4H Investments in the Future projects managed by the National Research Agency (ANR) with the reference numbers ANR-15-IDEX-0001 and ANR-17-EURE-0004. The authors also thank the NIDS Academy. JSR acknowledges funding from Ministerio de Universidades and European Union-Next-GenerationEU. We thank F. Boyer for very interesting comments.
Fish, marine mammals, and many other aquatic vertebrates undulate their bodies to propel themselves through water. Swimming requires a complex interplay of sensing the environment, making decisions, controlling internal dynamics, and moving the body while interacting with the surrounding medium. Throughout this sequence, from the initiation of motion onward, biological and physical laws give rise to complex, nonlinear effects, which nonetheless do not prevent natural swimmers from demonstrating efficient locomotion. Two complementary questions arise: how to model this phenomenon, and how to abstract it. The second question plays a particularly important role in robotics, where the goal is to build efficient artificial swimming robots driven by digital signals and mechanisms. In this study, we address these two questions using a biomimetic robot
2301.00672
Reducing the Quantum Many-electron Problem to Two Electrons with Machine Learning
An outstanding challenge in chemical computation is the many-electron problem where computational methodologies scale prohibitively with system size. The energy of any molecule can be expressed as a weighted sum of the energies of two-electron wave functions that are computable from only a two-electron calculation. Despite the physical elegance of this extended ``aufbau'' principle, the determination of the distribution of weights -- geminal occupations -- for general molecular systems has remained elusive. Here we introduce a new paradigm for electronic structure where approximate geminal-occupation distributions are ``learned'' via a convolutional neural network. We show that the neural network learns the $N$-representability conditions, constraints on the distribution for it to represent an $N$-electron system. By training on hydrocarbon isomers with only 2-7 carbon atoms, we are able to predict the energies for isomers of octane as well as hydrocarbons with 8-15 carbons. The present work demonstrates that machine learning can be used to reduce the many-electron problem to an effective two-electron problem, opening new opportunities for accurately predicting electronic structure.
LeeAnn M. Sager-Smith, David A. Mazziotti
2022-12-29T16:30:37
http://arxiv.org/abs/2301.00672v1
# Reducing the Quantum Many-electron Problem to Two Electrons with Machine Learning ###### Abstract An outstanding challenge in chemical computation is the many-electron problem where computational methodologies scale prohibitively with system size. The energy of any molecule can be expressed as a weighted sum of the energies of two-electron wave functions that are computable from only a two-electron calculation. Despite the physical elegance of this extended "aufbau" principle, the determination of the distribution of weights--geminal occupations--for general molecular systems has remained elusive. Here we introduce a new paradigm for electronic structure where approximate geminal-occupation distributions are "learned" via a convolutional neural network. We show that the neural network learns the \(N\)-representability conditions, constraints on the distribution for it to represent an \(N\)-electron system. By training on hydrocarbon isomers with only 2-7 carbon atoms, we are able to predict the energies for isomers of octane as well as hydrocarbons with 8-15 carbons. The present work demonstrates that machine learning can be used to reduce the many-electron problem to an effective two-electron problem, opening new opportunities for accurately predicting electronic structure. ## Introduction For any molecular system, the Schrodinger equation can, _in theory_, be solved exactly using a full configuration interaction (FCI) calculation [1, 2, 3] with a complete basis set; however, _in practice_, the computational complexity of such an exact approach grows factorially with system size [3], making molecular systems with more than a few dozen electrons intractable. Over time, many approximate methodologies have been introduced in an attempt to obtain "good enough" solutions to the electronic Schrodinger equation that predict energies within chemical accuracy (\(\sim\)1 kcal/mol). Hartree-Fock theory--a mean-field approach--yields reasonable results for a wide array of molecular systems containing up to a few hundred atoms [2]; however, it fails in molecules in which the motions of electrons are significantly correlated. Techniques which more-accurately capture correlation energy such as many-body perturbation theory, coupled cluster theory, complete active-space self-consistent field theory, and others remain computationally expensive for large system sizes [2, 4]. The so-called many-electron problem--whereby the cost of highly-accurate _ab initio_ computational methodologies scales in a prohibitive manner with system size--is hence an outstanding challenge in chemical computations. Machine learning may enable us to circumvent this problem by allowing us to use information about smaller molecules to treat correlation in larger systems at a reduced cost [5]. It has been used to learn the energies of various molecular structures [6, 7, 8, 9], new functionals for density functional theory (DFT) [10, 11, 12], inverse problems in electronic structure theory [13, 14], and even the many-body wave function of one-dimensional spin systems [15]. However, these areas are in their early stages and have yet to demonstrate definite success in decreasing the degree of scaling with system size. In this _Article_ we introduce a new paradigm for utilizing machine learning in quantum chemistry in which we reduce the quantum many-electron problem to a more tractable, better scaling two-electron problem. 
As originally proposed by Bopp [16, 17], the energy of a molecule of arbitrary size can be expressed without approximation as a weighted sum of the energies of two-electron wave functions, known as geminals. However, despite its physical significance as an extension of the "aufbau" principle, the distribution of weights--geminal occupations--has remained elusive. Here, we show that the geminal-occupation distribution can be learned with machine learning. We use a convolutional neural network (CNN) to learn an effective temperature in a Boltzmann-like distribution for the geminal occupations. The effective temperature--or correlation temperature--is inversely related to the electron correlation. The neural network, we demonstrate, learns the \(N\)-representability of the distribution--the representability of the distribution by an \(N\)-electron system [18, 19, 20, 21], which appears as a nonzero temperature. The scheme can be viewed as a two-electron reduced density matrix (2-RDM) theory as the geminal occupations are an integral part of the 2-RDM. A schematic of the machine learning algorithm for predicting molecular energies is shown in Fig. 1. We apply the machine learning algorithm to hydrocarbon systems. Specifically, by training a convolutional neural network on all isomers of ethane through heptane, we predict the correlation temperatures--and hence molecular energies--of all of the isomers of octane as well as all straight-chained hydrocarbons from octane through pentadecane. We find that this RDM-based machine learning method accurately recovers the correlation energy for larger hydrocarbon systems, with the \(N\)-representability conditions being learned by the CNN framework. Our approach--which scales as \(O[n^{6}]\)--improves upon the exponential scaling of traditional configuration-interaction calculations, foreshadowing the potential utility of this machine-learning reduced density matrix approach to the determination of accurate molecular energies. While polynomial-scaling levels of theory such as Coupled Cluster with Single and Double Excitations (CCSD) can be used to treat weakly-correlated systems such as the hydrocarbons presented in this manuscript, if trained on appropriate molecular data, our convolutional network approach may be capable of accurately recovering correlation energy for more highly-correlated systems. ## Results and Discussion **Theory.** Central to our modern understanding of chemistry is the concept of the molecular orbital. Any molecule's electronic structure can be readily understood in terms of its molecular orbitals which are filled from lowest-in-energy to highest-in-energy by the Pauli exclusion principle. When electrons of a molecule become strongly correlated, however, the orbital picture with unit filling of the lowest orbitals breaks down. Because electronic interactions are at most pairwise, the orbital picture can in principle be replaced by an exact two-electron (geminal) picture, which is derivable from 2-RDM theory. 
The ground- or excited-state energy of any atom or molecule is expressible as an exact functional of the 2-RDM (\({}^{2}D\)) [16, 18, 19, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59] \[E=\int{}^{2}\hat{K}\,{}^{2}D(\overline{1}\,\overline{2};12)d1d2 \tag{1}\] where \({}^{2}\hat{K}\) is the reduced Hamiltonian operator \[{}^{2}\hat{K}=-\frac{N}{2}\left(\frac{\hat{p}_{1}^{2}}{2m}+\frac{ \hat{p}_{2}^{2}}{2m}+\sum_{k}\frac{Z_{k}}{r_{1k}}+\sum_{k}\frac{Z_{k}}{r_{2k}}\right)\] \[+\frac{N(N-1)}{2}\frac{1}{r_{12}}. \tag{2}\] In a finite orbital basis set, the operators are expressible as a reduced Hamiltonian matrix. Diagonalization of this reduced Hamiltonian matrix yields a set of eigenvalues and eigenvectors (or geminals). In the basis set of geminals, the Hamiltonian is a diagonal matrix consisting of its eigenvalues, the 2-RDM has non-negative diagonal elements, which we denote by \(p_{i}\), and the energy is the sum of the geminal eigenvalues \(\epsilon_{i}\) of the Hamiltonian matrix weighted by the non-negative geminal occupations \(p_{i}\): \[E=\sum_{i}p_{i}\epsilon_{i}. \tag{3}\] By this transformation we express the energy as a functional of the eigenvalues of the reduced Hamiltonian \(\epsilon_{i}\), which are readily computed at the cost of a two-electron calculation, and the unknown geminal occupations \(p_{i}\) (see Fig. 2). The German chemist Bopp originally proposed approximating the geminal occupation numbers by a Pauli-like filling scheme [16, 17]. He suggested choosing the lowest \(N(N-1)/2\) occupations to be equal to one. This approach, while analogous to the filling of orbitals in molecular-orbital theory, generates accurate energies for four-electron atoms and ions but energies for larger molecular systems that are too low. Coleman suggested that the filling of the geminal by two electrons--or the pseudo-particle called a pairon--should follow a fundamental probability distribution as in statistical mechanics [16]. He proposed a Boltzmann distribution for the geminal occupations based on the geminal energies. While such a distribution is not exact because the pairon pseudo-particles obey neither Fermi-Dirac nor Bose-Einstein particle statistics, there exists a Boltzmann-like distribution given by \[p_{i}=\frac{N(N-1)}{Z}e^{-\epsilon_{i}/kT^{*}} \tag{4}\] and parameterized by a specific correlation temperature (\(T^{*}\)) such that the resultant approximate geminal probability distribution allows for the accurate computation of a molecule's energy according to Eq. (3). However, determining such a correlation temperature is currently only possible if both the geminal energies (\(\epsilon_{i}\)) and the geminal populations (\(p_{i}\)) are known. Here, we train a convolutional neural network (CNN) to predict the correlation temperature for a given molecular system consistent with its ground-state energy. The convolutional neural network is trained on inputs corresponding to both geminal energies--expressed as partition functions given by \[Z=\sum_{i}e^{-\epsilon_{i}/kT} \tag{5}\] for a variety of temperatures--as well as the computed Hartree-Fock correlation temperature (\(T^{*}_{HF}\)) and with training outputs corresponding to a \(\Delta\) value representing the difference between the exact (i.e. configuration interaction) correlation temperature and the HF correlation temperature, i.e., \(\Delta=T^{*}_{EXACT}-T^{*}_{HF}\).
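As a worked illustration of Eqs. (3)-(5), the sketch below evaluates the Boltzmann-like geminal occupations and finds, by bisection, the correlation temperature \(T^{*}\) that reproduces a target energy; the geminal energies and the target energy are synthetic numbers rather than data from this work, and \(k\) is set to 1 so that \(T^{*}\) is expressed in energy units.

```python
import numpy as np

def occupations(eps, N, T):
    """Boltzmann-like geminal occupations p_i, Eq. (4), with k = 1."""
    w = np.exp(-(eps - eps.min()) / T)   # energy shift for numerical stability (cancels in the ratio)
    return N * (N - 1) / w.sum() * w

def energy(eps, N, T):
    """E = sum_i p_i * eps_i, Eq. (3)."""
    return occupations(eps, N, T) @ eps

def correlation_temperature(eps, N, E_target, lo=1e-6, hi=1e3, iters=200):
    """Bisection on T: energy(T) increases monotonically from its T -> 0 limit."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if energy(eps, N, mid) < E_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Synthetic example (hypothetical geminal energies, arbitrary units)
eps = np.array([-2.0, -1.5, -1.2, -0.9, -0.5, -0.1])
N = 4                                   # number of electrons
E_target = -16.0                        # hypothetical target energy
T_star = correlation_temperature(eps, N, E_target)
p = occupations(eps, N, T_star)
print(T_star, energy(eps, N, T_star), p.sum())  # p.sum() equals N*(N-1) = 12
```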
For larger molecular systems, we then predict the \(\Delta\) values by reading in the geminal energies and Hartree-Fock correlation temperatures for those molecules into the trained neural network. These \(\Delta\) values are then added to the \(T^{*}_{HF}\)s in order to yield the exact correlation temperatures, which allows for the approximation of the geminal probability distributions and hence the molecular energies via Eq. (3). In general, for two-electron reduced density matrix methodologies, the 2-RDM must be constrained to represent the \(N\)-electron wavefunction through application of \(N\)-representability constraints [18, 19, 20, 21]. Here, if \(N\)-representability conditions are not accounted for in our Boltzmann-like machine learning approach, the correlation temperature would be zero, which corresponds to the lowest-energy geminal being fully occupied by all electron pairs. This electronic structure machine learning approach, however, maintains \(N\)-representability by learning correlation temperatures from \(N\)-representable training data and applying this inherent "learned" \(N\)-representability to the testing data. See the Experimental section at the end of this document for additional details. **Energetic Predictions for Isomers of Octane.** For the eighteen isomers of octane--with molecular geometries obtained from the PubChem database [60]--, the Hartree-Fock and CASSCF energies are computed using Dunning's double-zeta (cc-pVDZ) basis set with complete active-space self-consistent-field (CASSCF) calculations employing a \([N_{e}=8,\ N_{o}=8]\) active space. Utilizing a convolutional neural network trained on hydrocarbons ranging from two to seven carbon atoms, the correlation temperature corresponding to the CASSCF energy is predicted for each of the octane isomers and used to compute the predicted CASSCF energies shown in Fig. 3(a). As can be seen from this figure, which shows energy versus isomer identifier, the predicted CASSCF energies (green circles) show good agreement with the actual CASSCF energies (black boxes), vastly improving upon the Hartree-Fock energies (blue diamonds), and hence our predictions capture the correlation energy in a fairly accurate manner. Additionally, in order to demonstrate the generality of our reduced density matrix approach for "learning" molecular energies, Coupled Cluster Single Double (CCSD) energies are computed for the cc-pVDZ basis for hydrocarbons ranging from two to seven carbon atoms. The corresponding CCSD correlation temperatures are then used to train a convolutional neural net, and the correlation temperature corresponding to the CCSD energy is then predicted for each isomer of octane, with the resultant predicted CCSD energies shown in Fig. 3(b). Similar to the CASSCF energies from Fig. 3(a), the CCSD predicted energies (green circles) demonstrate good agreement with the actual CCSD energies (black boxes) when compared to the Hartree-Fock energies (blue diamonds). Hence, for this second level of theory, our predictions capture correlation energies in a fairly accurate manner. 
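To make the \(\Delta\)-learning setup described above concrete, here is a hedged PyTorch sketch of a small regression network whose inputs are partition functions evaluated on a temperature grid (Eq. (5)) together with \(T^{*}_{HF}\), and whose output is \(\Delta=T^{*}_{EXACT}-T^{*}_{HF}\); the layer sizes, the temperature grid, and the placeholder geminal energies are assumptions, not the authors' architecture or data.

```python
import torch
import torch.nn as nn

class DeltaTNet(nn.Module):
    """Predict Delta = T*_exact - T*_HF from a partition-function profile and T*_HF."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(8, 8, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8), nn.Flatten(),      # -> 8 * 8 features
        )
        self.head = nn.Sequential(nn.Linear(8 * 8 + 1, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, z_profile, t_hf):
        # z_profile: (batch, n_temperatures) partition functions Z(T); t_hf: (batch,)
        h = self.conv(z_profile.unsqueeze(1))
        return self.head(torch.cat([h, t_hf.unsqueeze(1)], dim=1)).squeeze(1)

def partition_profile(eps, temperatures):
    """Z(T) = sum_i exp(-eps_i / T) on a grid of temperatures (k = 1), Eq. (5)."""
    return torch.exp(-eps.unsqueeze(0) / temperatures.unsqueeze(1)).sum(dim=1)

# Hypothetical usage: one molecule, 64 grid temperatures
eps = torch.linspace(-2.0, -0.1, 30)           # placeholder geminal energies
grid = torch.linspace(0.05, 5.0, 64)
z = partition_profile(eps, grid).unsqueeze(0)  # (1, 64)
delta = DeltaTNet()(z, torch.tensor([0.7]))    # add to T*_HF to estimate T*_exact
```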
Additional predictions corresponding to CCSD calculations utilizing the STO-6G basis set can be seen in the Supporting Information. Figure 2: **Example of geminal energies and probabilities.** For (a) benzene, we can use the (b) geminal energies \(\epsilon_{i}\) to learn the (c) geminal probabilities \(p_{i}\)—both of which are computed here from a [\(N_{e}=6,\ N_{o}=6\)] complete active-space self-consistent-field (CASSCF) using the minimal Slater-type orbital basis set with six Gaussian primitive functions representing each Slater-type orbital (STO-6G). Knowing both geminal energies and geminal populations is sufficient to determine molecular energies via Eq. (3). Figure 1: **Graphic demonstrating algorithm flow.** For a given molecule, a trained convolutional neural network is used to predict the Boltzmann-like correlation temperature (\(T_{f}\)) with the eigenvalues of the reduced Hamiltonian (\(\epsilon_{j}\)) and the Hartree-Fock correlation temperature (\(T_{i}\)) as inputs. The correlation temperature (\(T_{f}\)) allows for the approximation of the geminal populations (\(p_{f,j}\)) by Eq. (4), which is sufficient for the prediction of the energy by Eq. (3). We next explore systems composed of larger hydrocarbons to determine whether such good agreement remains consistent as system size is increased while the training data remains the same. **Energetic Predictions for Large Hydrocarbons.** For the eight straight-chained hydrocarbons ranging from octane to pentadecane--with molecular geometries obtained from the PubChem database [60]--the Hartree-Fock and CASSCF energies are computed using Dunning's double-zeta (cc-pVDZ) basis set with the CASSCF calculations employing a [\(N_{e}=8,\ N_{o}=8\)] active space. Utilizing a convolutional neural network trained on hydrocarbons ranging from two to seven carbon atoms, the correlation temperature corresponding to the CASSCF energy is predicted for each of the octane to pentadecane hydrocarbons and used to compute the predicted CASSCF energies shown in Fig. 4. As can be seen from this figure, which shows energy per carbon versus number of carbons, the predicted CASSCF energies (green circles) show good agreement with the actual CASSCF energies (black boxes), vastly improving upon the Hartree-Fock energies (blue diamonds), and hence our predictions capture the correlation energy in a fairly accurate manner. Although there is a slight increase in the error as system size is increased, it appears to be small enough that the energies of even larger hydrocarbons may be predicted accurately using our convolutional neural network trained on only hydrocarbons with seven or fewer carbon atoms. Similar promising results are obtained for predicting CASSCF energies for octane, nonane, decane, and undecane via a convolutional neural network trained on CASSCF calculations for hydrocarbons with two to seven carbons that utilize a [10,10] active space and the cc-pVTZ basis set, as can be seen in the Supporting Information. ## Conclusions In this _Article_, we introduce a new paradigm based on a two-electron, reduced density matrix approach for the utilization of machine learning architecture in the prediction of accurate correlation energies for molecular systems at reduced computational expense.
By employing a Boltzmann-like distribution for two-electron geminal populations parameterized by a correlation temperature, we train a convolutional neural network on correlation temperatures corresponding to CASSCF and CCSD calculations for smaller molecular systems in order to predict CASSCF and CCSD correlation temperatures for larger, more computationally-expensive molecular systems and hence obtain predicted CASSCF/CCSD energies. Moreover, the \(N\)-representability conditions are inherently maintained by our CNN framework--as evinced by nonzero correlation temperatures. This methodology for the prediction of CASSCF energies scales as \(O[n^{6}]\) with the number of orbitals due to the diagonalization of the reduced Hamiltonian, which is an improvement over the exponential scaling of a traditional CASSCF calculation. See the Experimental section for additional comments on computational scaling. Demonstrating the power of this technique, we train a convolutional neural network on small hydrocarbon systems--with the number of carbon atoms ranging from two to seven--in order to predict CASSCF energies for larger hydrocarbon systems--with the number of carbons ranging from eight to fifteen. We find that our RDM-based machine learning approach accurately recovers the correlation energy for the larger hydrocarbon systems. Thus, our trained convolutional neural network allows us to predict CASSCF-like results at significantly lower computational expense. While the hydrocarbons involved in training and testing this implementation of our machine-learning reduced density matrix approach do not demonstrate large degrees of correlation, the prediction of accurate correlation energies for larger molecular systems of the type included in the training set likely indicates that as long as the convolutional neural network is trained on appropriate small molecules, the energies of highly-correlated, larger molecules should be able to be obtained via our methodology. Specifically, if one wishes to predict the energy of a molecule which demonstrates a fairly-large degree of correlation, smaller correlated systems would likely be necessary to train the neural network. Application of our machine-learning reduced density matrix approach to highly-correlated systems is a future direction of this research. This work foreshadows the promise of machine learning in molecular electronic structure calculations, demonstrating that "learning" information about less-expensive, smaller molecular systems can be directly applied to larger typically more-expensive molecules. Future electronic structure methodologies may even include pre-trained convolutional neural networks--possibly varying with the types of atoms, basis set, active space, functional groups, and/or degree of bond saturation inherent to the molecular system of interest--trained on FCI (or similarly expensive) correlation temperatures. This work serves as an initial step in the realization of a combined reduced-density-matrix and machine-learning approach that may provide a real advance in decreasing computational expense for large, highly-correlated electronic structure calculations. 
## Supporting Information An analysis of the effect of changing the active space size on our machine-learning reduced density matrix approach; application of our reduced density matrix machine learning algorithm to the prediction of CASSCF energies with a [10,10] active space and cc-pVTZ basis set; application of our reduced density matrix machine learning algorithm to the prediction of CCSD energies with a STO-6G basis. ## Acknowledgements D.A.M. gratefully acknowledges support from the U.S. National Science Foundation under Grant No. 2155082 and the Department of Energy, Office of Basic Energy Sciences under Grant No. DE-SC0019215. L.M.S.-S. also acknowledges support from the U.S. National Science Foundation under Grant No. DGE-1746045. Figure 3: **Octane data.** Hartree-Fock energies (HF, blue diamonds), (a) Complete Active Space Self-Consistent Field/(b) Coupled Cluster Single Double (CASSCF/CCSD, black boxes) energies, and energy values predicted via utilization of Convolutional Neural Networks (CNN, green circles) are shown for the series of octane isomers. As can be seen, the CNN methodology trained on smaller hydrocarbon data fairly accurately recovers the correlation energy. Isomer labels are given by [8.01: ‘Octane’, 8.02: ‘2-Methylheptane’, 8.03: ‘3-Methylheptane’, 8.04: ‘4-Methylheptane’, 8.05: ‘2,2-Dimethylhexane’, 8.06: ‘2,3-Dimethylhexane’, 8.07: ‘2,4-Dimethylhexane’, 8.08: ‘2,5-Dimethylhexane’, 8.09: ‘3,3-Dimethylhexane’, 8.10: ‘3,4-Dimethylhexane’, 8.11: ‘3-Ethylhexane’, 8.12: ‘2,2,3-Trimethylpentane’, 8.13: ‘2,2,4-Trimethylpentane’, 8.14: ‘2,3,3-Trimethylpentane’, 8.15: ‘2,3,4-Trimethylpentane’, 8.16: ‘3-Ethyl-2-Methylpentane’, 8.17: ‘3-Ethyl-3-Methylpentane’, 8.18: ‘2,2,3,3-Tetramethylbutane’]. Hartree-Fock, CASSCF, and CCSD calculations are all computed here using Dunning’s double-zeta (cc-pVDZ) basis set with the CASSCF calculations employing a [\(N_{e}=8,\ N_{o}=8\)] active space.
An outstanding challenge in chemical computation is that the cost of computational methodologies grows prohibitively with system size. The energy of any molecule can be expressed as a weighted sum of the energies of two-electron wave functions, computable from only a two-electron calculation. Despite the physical elegance of this extended "aufbau" principle, determining the distribution of weights (geminal occupations) for general molecular systems has remained difficult. Here we introduce a new electronic-structure paradigm in which approximate geminal-occupation distributions are learned via a convolutional neural network. The neural network learns the $N$-representability conditions, the constraints required for the distribution to represent an $N$-electron system. By training on hydrocarbon isomers with only 2-7 carbon atoms, we can predict the energies of the isomers of octane as well as hydrocarbons with 8-15 carbons
2309.03559
An Anchor Learning Approach for Citation Field Learning
Citation field learning is to segment a citation string into fields of interest such as author, title, and venue. Extracting such fields from citations is crucial for citation indexing, researcher profile analysis, etc. User-generated resources like academic homepages and Curriculum Vitae, provide rich citation field information. However, extracting fields from these resources is challenging due to inconsistent citation styles, incomplete sentence syntax, and insufficient training data. To address these challenges, we propose a novel algorithm, CIFAL (citation field learning by anchor learning), to boost the citation field learning performance. CIFAL leverages the anchor learning, which is model-agnostic for any Pre-trained Language Model, to help capture citation patterns from the data of different citation styles. The experiments demonstrate that CIFAL outperforms state-of-the-art methods in citation field learning, achieving a 2.68% improvement in field-level F1-scores. Extensive analysis of the results further confirms the effectiveness of CIFAL quantitatively and qualitatively.
Zilin Yuan, Borun Chen, Yimeng Dai, Yinghui Li, Hai-Tao Zheng, Rui Zhang
2023-09-07T08:42:40
http://arxiv.org/abs/2309.03559v2
# An Anchor Learning Approach for Citation Field Learning ###### Abstract Citation field learning is to segment a citation string into fields of interest such as author, title, and venue. Extracting such fields from citations is crucial for citation indexing, researcher profile analysis, etc. User-generated resources like academic homepages and Curriculum Vitae, provide rich citation field information. However, extracting fields from these resources is challenging due to inconsistent citation styles, incomplete sentence syntax, and insufficient training data. To address these challenges, we propose a novel algorithm, _CIFAL_ (citation field learning by anchor learning), to boost the citation field learning performance. _CIFAL_ leverages the anchor learning, which is model-agnostic for any Pre-trained Language Model, to help capture citation patterns from the data of different citation styles. The experiments demonstrate that _CIFAL_ outperforms state-of-the-art methods in citation field learning, achieving a 2.83% improvement in field-level F1-scores. Extensive analysis of the results further confirms the effectiveness of _CIFAL_ quantitatively and qualitatively. Zilin Yuan\({}^{1}\), Borun Chen\({}^{2}\), Yimeng Dai\({}^{3}\), Yinghui Li\({}^{1}\), Hai-Tao Zheng\({}^{1,4,*}\), Rui Zhang\({}^{5,*}\)\({}^{1}\)Shenzhen International Graduate School, Tsinghua University \({}^{2}\)Meituan, \({}^{3}\)Sapia.ai, \({}^{4}\)Pengcheng Laboratory, \({}^{5}\)www.ruizhang.info Citation Field Learning, Anchor Learning, Pre-trained Language Model ## 1 Introduction Citations play a crucial role in assessing a researcher's academic accomplishments. The _author_ field in a citation, for instance, reveals collaborative relationships among researchers, while the _title_ field indicates their research interests. In this paper, our objective is to learn various citation fields, including author, title, venue, and year, from each citation. For instance, given the citation string _"Shannon, C.E., 2001. A mathematical theory of communication. ACM SIGMOBILE Mobile Computing and Communications Review, 5(1), pp.3-55."_, we aim to extract information such as the author ("_Shannon, C.E._"), title ("_A mathematical theory of communication_"), venue ("_ACM SIGMOBILE Mobile Computing and Communications Review_") and year ("_2001_"). This task is commonly referred to as citation field learning, citation metadata extraction, or reference parsing in other studies [1, 2, 3, 4]. By extracting citation fields, we can establish collaboration networks among researchers and conduct various studies [5, 6], such as research community detection [7], research trends prediction [8], and topics analysis [9]. While we may obtain citation information from bibliographic databases and multiple user-generated resources, this information is vexed by several problems: the limited coverage of disciplines and researchers, inconsistent citation styles, incomplete sentence syntax, and insufficient training data. In the research community, some machine learning models [10, 11] and deep learning models [12, 13, 14] have been proposed to parse the citation and have shown promising results. However, these models either rely on a substantial amount of manually labeled data or struggle with understanding unstructured information. As it is expensive to obtain a large amount of manually labeled training data, one potential solution is to employ data enhancement techniques [15, 16, 17]. 
For instance, Zhang et al.(2018) [12] proposed an algorithm to augment the limited manually labeled dataset. Specifically, the algorithm automatically generates citation items based on the citation styles and meta citation data obtained from publicly available sources. Another approach is to utilize supervised citation data to fine-tune pre-trained models. By employing data enhancement techniques, we can fine-tune the pre-trained models with a larger supervised dataset, enabling them to accurately capture domain-specific and task-specific patterns. However, previous studies have indicated that using generated data in this manner may have a negative impact on transferability to downstream tasks [18, 19, 20, 21]. Therefore, we propose to leverage anchor learning to effectively capture citation patterns from the extensive generated data without compromising the transferability of pre-trained models on limited labeled data. It is model-agnostic and can be applied to any Pre-trained Language Model (PLM) by incorporating a task-guided pre-training stage between the general pre-training and fine-tuning stages [22]. Specifically, after pre-training the PLM, the large generated data is utilized for task-guided pre-training, while the small manually labeled training dataset is used for fine-tuning. At the task-guided stage, sequence anchors, i.e., important tokens in the generated data, are masked based on our anchor learning technique. We call this masking strategy anchor masking. Then, the pre-trained model is trained to predict these anchors. The underlying idea is to facilitate the model in learning crucial citation patterns from the generated data while preserving transferability compared to direct training using labeled citations. In summary, the main contributions of this paper are summarized as follows: * We propose a novel algorithm named _CIFAL_ (citation field learning by anchor learning) that leverages anchor learning to enhance the performance of citation field learning, while maintaining the transferability of pre-trained models on limited labeled data. * Extensive experiments show that _CIFAL_ achieves state-of-the-art performance. Extensive model analysis and result analysis demonstrate the effectiveness of the proposed methods quantitatively and qualitatively. The paper is organized as follows. Section 2 presents our proposed methods. Section 3 reports our experimental results and Section 4 presents the analysis of our models. Finally, Section 5 concludes the paper and discusses future works. ## 2 Methodology ### Citation Field Learning by Anchor Learning The pre-train-then-fine-tune paradigm has shown significant improvement for various NLP tasks [23, 24]. However, when faced with limited labeled data during the fine-tuning stage, models struggle to effectively capture domain-specific and task-specific patterns. To address this issue, we propose to utilize anchor learning to help capture citation patterns from a large generated dataset [12]. Figure 1 illustrates the anchor learning technique, which is model-agnostic and can be applied to any PLM. It involves training an anchor selector and adding a task-guided pre-training stage between the general pre-training and fine-tuning stages. We denote the large generated data as \(D_{\text{Generate}}\) and the small manually labeled data as \(D_{\text{Task}}\). Instead of randomly masking tokens in \(D_{\text{Generate}}\), we mask anchors in \(D_{\text{Generate}}\) based on an anchor selector learned on \(D_{\text{Task}}\). 
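A minimal sketch of this anchor-masking step, assuming a HuggingFace BERT tokenizer and a pre-computed anchor set for a generated citation; the example citation and its anchor tokens follow the paper's running example, while the helper itself is illustrative (anchor words are assumed to map to single WordPiece tokens here).

```python
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")

def anchor_mask(citation, anchors):
    """Replace anchor tokens with [MASK] and build MLM labels for the other positions."""
    enc = tokenizer(citation, return_tensors="pt")
    labels = enc["input_ids"].clone()
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist())
    for pos, tok in enumerate(tokens):
        if tok in anchors:
            enc["input_ids"][0, pos] = tokenizer.mask_token_id  # predict this anchor
        else:
            labels[0, pos] = -100                               # ignored by the MLM loss
    enc["labels"] = labels
    return enc

batch = anchor_mask(
    "Voelcker, J (2013), Communications and Navigation, IEEE Spectrum, pages 2574-2583.",
    anchors={"IEEE", "Spectrum"},
)
```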
The PLM is then task-guided pre-trained to predict these masked anchors in \(D_{\text{Generate}}\), allowing for the capture of citation patterns. Finally, the task-guided pre-trained model is fine-tuned on the citation field learning task using \(D_{\text{Task}}\). During the fine-tuning process, our algorithm efficiently maps tokens to the representation space and feeds these representations to the sequence labeling layer. We use Bi-LSTM-CRF to illustrate the adopted model in our method in Figure 1, although other models can be used as well. Next, we'll introduce the detail of the anchor selector learned on \(D_{\text{Task}}\). ### Anchor Selector Given an input sequence \(\mathbf{s_{g}}=(\omega_{1},\omega_{2},\cdots,\omega_{n})\) from \(D_{\text{Generate}}\), where \(n\) is the number of tokens and \(w\) represents a token, the goal of the anchor selector is to identify a set of anchors \(C_{g}\) from \(\mathbf{s_{g}}\). Existing state-of-the-art citation field learning algorithms have shown good performance in fields such as authors and year, but they struggle to perform well in the venue field. This indicates that the language patterns in the venue are more complex and hard to capture with limited manually labeled data. As a result, we focus on training the anchor selector specifically for the venue field. In other words, the set \(C_{g}\) will only contain tokens from the venue field. For example, in the citation item "Shannon, C.E., 2001. A mathematical theory of communication. ACM SIGMOBILE Mobile Computing and Communications Review, 5 (1), pp.3-55.", the token "ACM" can be considered as an anchor for the venue field. Figure 1: The anchor learning of _CIFAL_. We first fine-tune a PLM on \(D_{\rm Task}\) to learn a basic labeling model. We use \(D(\mathbf{v_{t}}|\mathbf{s_{t}})\) to denote the distribution of confidences for tokens from the venue field in \(\mathbf{s_{t}}\), and \(D(\mathbf{v_{t}}-w_{i}|\mathbf{s_{t}}-w_{i})\) to denote the distribution of confidences for tokens from the venue field after removing a specific token \(w_{i}\) from \(\mathbf{s_{t}}\). Next, we select an anchor token based on the difference between the average confidences in \(D(\mathbf{v_{t}}|\mathbf{s_{t}})\) and the average of confidences in \(D(\mathbf{v_{t}}-w_{i}|\mathbf{s_{t}}-w_{i})\): \[\mathrm{S}(\omega_{i})=\frac{1}{k}\sum D(\mathbf{v_{t}}|\mathbf{s_{t}})-\frac{1}{k-1} \sum D(\mathbf{v_{t}}-w_{i}|\mathbf{s_{t}}-w_{i}), \tag{1}\] where \(k\) is the number of tokens from the venue field. By iteratively calculating the change of confidence after removing tokens one by one, we can determine the anchor tokens set \(C_{t}\) for \(D_{\rm Task}\). The significance of a token \(\omega_{i}\) can be measured by its corresponding score \(\mathrm{S}(\omega_{i})\), where a higher score indicates a greater impact on the prediction and a richer source of important citation patterns information. We set the threshold \(\delta=0.05\) and the criterion for selecting anchors is: \[\mathrm{S}(\omega_{i})>\delta, \tag{2}\] this criterion implies that when the token \(\omega_{i}\) is removed, the average confidence of the remaining tokens significantly decreases. Therefore, tokens that meet this criterion can be considered as elements of the anchor set. Once the anchor sets for all sequences in \(D_{\rm Task}\) are obtained, a PLM is then fine-tuned to train the anchor selector. The anchor selector is a token-level binary classifier that predicts whether a token of a sequence belongs to its anchor set or not. 
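A schematic implementation of the anchor score in Eqs. (1)-(2), as used to label the anchor sets on \(D_{\rm Task}\): for each candidate token we compare the mean confidence of venue tokens before and after removing that token. The `venue_confidences` helper is a stand-in for the fine-tuned labeling model and is the main assumption in this sketch.

```python
def venue_confidences(tokens):
    """Placeholder: return the labeling model's confidence for each venue-field token.
    In CIFAL this comes from the PLM fine-tuned on D_Task."""
    raise NotImplementedError

def anchor_set(tokens, venue_positions, delta=0.05):
    """Select anchors among venue tokens following Eqs. (1)-(2)."""
    base = venue_confidences(tokens)                    # confidences of the k venue tokens
    k = len(venue_positions)
    anchors = []
    for pos in venue_positions:
        reduced = tokens[:pos] + tokens[pos + 1:]       # sequence without w_i
        conf_wo = venue_confidences(reduced)            # confidences of the remaining k - 1 venue tokens
        score = sum(base) / k - sum(conf_wo) / (k - 1)  # S(w_i), Eq. (1)
        if score > delta:                               # selection criterion, Eq. (2)
            anchors.append(tokens[pos])
    return anchors
```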
Subsequently, this anchor selector can be used to obtain the anchor set \(C_{g}\) for \(D_{\rm Generate}\) to perform the task-guided pre-training. ## 3 Experiments ### Dataset Our experiments involve two datasets: a labeled dataset and a generated dataset. The labeled dataset used in our study is called _Cita_[12], which consists of 3000 manually annotated citations across more than 20 disciplines. Each token in the annotated datasets is assigned a ground truth label, such as author, title, year, venue, or other. Another dataset [12] used in this study is generated from three bibliography sources: DBLP1, PubMed Central2, and X University3 eprint library, which contains approximately 80 thousand citation items. Zhang et al. (2018) [12] created three datasets using the aforementioned sources. They then extracted equi-size subsets from each dataset, taking into consideration the discipline distribution of _Cita_. As a result, a generated dataset consisting of 100,026 citation items was obtained. Footnote 1: [https://dblp.org/](https://dblp.org/) Footnote 2: [https://www.ncbi.nlm.nih.gov/pmc/](https://www.ncbi.nlm.nih.gov/pmc/) Footnote 3: University name is hidden for anonymity ### Model Settings We employ the WordPiece tokenizer in BERT to tokenize each citation. For the encoder part, we select the model architecture of \(\mathrm{BERT}_{\rm BASE}\). BERT Adam is utilized during training. The initial learning rate is \(5\times 10^{-5}\) and the batch size is set to 32 for both task-guided pre-training and fine-tuning. During task-guided pre-training, the model is trained for 20k steps using the generated data, which combines the anchor learning technique with random masking. In the fine-tuning stage, the model is fine-tuned for 20 epochs, and the version with the highest accuracy on the validation set is selected. To enable the models to learn from the surface features of citation items, we retain the use of uppercase letters. ### Measurement Criteria and Models Evaluated In line with previous research [25, 26], we evaluate both field-level matching and token-level matching precision, recall, and F1-score. The primary measurement is conducted at the field-level, where each consecutive token in every field of the citation string must be correctly labeled. The token-level measurement considers each individual token, allowing partially labeled fields to receive partial credit. We conducted an evaluation and comparison of five models, including four baseline models and our proposed model. The baseline models consist of **BioPro**[27], **ParsCit4**[28], **CRF**[29], and **Bi-LSTM-CRF**[30]. The proposed method we tested is _CIFAL_. Footnote 4: ParsCit: [http://parscit.comp.nus.edu.sg/](http://parscit.comp.nus.edu.sg/) ### Results Table 1 shows the overall results of the evaluated models5. Based on the results of Table 1, it is evident that our methods consistently outperform all the baseline models. Specifically, _CIFAL_ achieves an improvement of up to **0.83%** on token-level f1-score and **2.68%** on field-level f1-score. In theory, it would be challenging to achieve significant improvement on the token-level due to the large number of tokens, where even hundreds of label changes may not have a substantial impact on performance. Conversely, the improvement on the field-level is more consistent and significant. 
This is reasonable as accurately predicting tokens at the field boundaries contributes significantly to the field-level performance while having minimal impact on the token-level performance. Footnote 5: The \(\dagger\) symbol in the tables denotes that the result is statistically significant. Table 2 shows the performance of the models on each field. The table consists of four fields, each reported with F1-scores at two levels. The results indicate that _CIFAL_ achieves the highest F1-scores on the _Author_, _Venue_, and _Title_ fields. The improvement in the _Venue_ and _Title_ fields can be attributed to the fact that _CIFAL_ focuses on learning venue patterns during task-guided pre-training, and the titles of most published papers have some connection to their respective venues. The improvement on the _Author_ field indicates that the model can recognize some very challenging name cases once the obstacles to recognizing uncertain venues are cleared. For instance, in book citations, the format of editors' names often closely resembles that of author names and frequently appears after the venue. Consequently, the field-level results for the _Author_ field exhibit significant improvement. ## 4 Model Analysis ### Ablation Study The task-guided pre-training and the anchor masking technique may both contribute to our state-of-the-art performances on the citation field learning task. Hence, we make modifications to the _CIFAL_ model by employing different settings for task-guided pre-training and anchor masking. We test 4 versions of modified models: 1) **Fine-tune**: The _CIFAL_ model without task-guided pre-training, fine-tuned only on the manually labeled data; 2) **Anchor Masking**: The _CIFAL_ model with task-guided pre-training by anchor masking; 3) **Attention Masking**: The _CIFAL_ model with task-guided pre-training by attention masking, where the attention masking strategy we utilize is based on the attention weights of BERT encoder layers; 4) **Random Masking**: The _CIFAL_ model with task-guided pre-training by random masking. We present the field-level F1-scores of each field in Table 3. The results of the ablation experiments demonstrate that the impact of task-guided pre-training with the anchor learning technique is greater than that of any other method. Specifically, the venue field achieves an F1-score of 87.26%, which significantly outperforms the other modified models. This outcome highlights the effectiveness of the anchor learning technique in improving the performance of the venue field. ### Effect of Anchor Learning To show that our anchor learning technique does select the anchor set of the input sequence for the target field, we present some examples and count the tokens that appear most frequently in the anchor set obtained by applying our strategy to the generated data. Considering the citation item "_Voelcker, J (2013), Communications and Navigation, IEEE Spectrum, pages 2574-2583._", when we use the anchor learning technique for this input, the model will mask the tokens "_IEEE_" and "_Spectrum_" for task-guided pre-training, both of which belong to the venue field. This shows that anchor learning indeed selects the important tokens for the venue field. Additionally, we conducted a statistical analysis on the generated data and obtained the twelve most frequently masked tokens. Among these tokens, "_Journal_" is the most frequently masked token and it occurs 57,195 times in total.
This result aligns with our expectations, as "_Journal_" is a commonly used term in most venues. Other high-frequency tokens such as "Conference" and "Research" are also prevalent in the venue field. Based on the example and the statistical analysis, we can see the effectiveness of our anchor learning technique. In this way, the task-guided pre-training can better capture the task-specific language patterns. ## 5 Conclusion In this paper, we studied the problem of citation field learning and proposed a novel method called _CIFAL_. Our approach utilizes anchor learning techniques to effectively improve the performance in this domain, while also ensuring the transferability of pre-trained models. The experimental results demonstrate that our model significantly outperforms existing state-of-the-art solutions. Additionally, we conducted further ablation studies to evaluate the impact of anchor learning in our proposed methods. To further enhance our approach, future work could focus on improving the semantic representation capacity within the citation field in _CIFAL_. \begin{table} \begin{tabular}{c|c c c|c c c} \hline \hline Methods & \multicolumn{2}{c|}{Bi-LSTM-CRF} & \multicolumn{2}{c}{**CIFAL** (Ours)} \\ \hline Field & Token-level & Field-level & Token-level & Field-level & Token-level & Field-level \\ \hline _Author_ & 80.41 & 22.61 & 87.61 & 81.24 & 99.04 & 83.23 \\ _Year_ & 82.88 & 82.99 & 93.75 & 97.71 & **99.46\({}^{\dagger}\)** & **99.49\({}^{\dagger}\)** \\ _Venue_ & 72.03 & 59.99 & 62.84 & 55.59 & 86.64 & 76.24 \\ _Title_ & 81.70 & 56.14 & 88.45 & 62.50 & 95.64 & 75.93 \\ \hline Average & 80.01 & 35.41 & 83.02 & 74.78 & 96.91 & 84.29 \\ \hline \hline \end{tabular} \end{table} Table 2: The F1-scores for Each Field on Token-Level and Field-Level on the Cita dataset. \begin{table} \begin{tabular}{c|c c c|c c} \hline \hline Methods & _Author_ & _Title_ & _Venue_ & _Year_ & Overall \\ \hline Fine-tune & 91.06 & 86.42 & 86.04 & **99.39** & 90.81 \\ Anchor Masking & 91.29 & 86.61 & **87.26** & 99.08 & **91.13** \\ Attention Masking & **91.51** & 86.94 & 86.60 & 99.27 & 91.04 \\ Random Masking & 91.24 & **87.07** & 86.22 & 99.13 & 90.99 \\ \hline \hline \end{tabular} \end{table} Table 3: Each Field and Overall F1-scores on Field-level of modified models.
Citation field learning is the task of segmenting a citation string into fields of interest such as author, title, and venue. Extracting these fields from citations is essential for citation indexing, researcher profile analysis, and many other research activities. User-generated resources such as academic homepages and Curriculum Vitae provide rich citation field information. However, extracting fields from these resources is challenging due to inconsistent citation styles, incomplete sentence syntax, and insufficient training data. To address these challenges, we propose a novel algorithm, CIFAL (citation field learning by anchor learning), that leverages anchor learning. Because anchor learning can be applied to any pre-trained language model, CIFAL helps capture citation patterns from data of different citation styles. The experimental results show that CIFAL outperforms state-of-the-art methods for citation field learning, with an F1-score
2309.04236
Adaptive Distributed Kernel Ridge Regression: A Feasible Distributed Learning Scheme for Data Silos
Data silos, mainly caused by privacy and interoperability, significantly constrain collaborations among different organizations with similar data for the same purpose. Distributed learning based on divide-and-conquer provides a promising way to settle the data silos, but it suffers from several challenges, including autonomy, privacy guarantees, and the necessity of collaborations. This paper focuses on developing an adaptive distributed kernel ridge regression (AdaDKRR) by taking autonomy in parameter selection, privacy in communicating non-sensitive information, and the necessity of collaborations in performance improvement into account. We provide both solid theoretical verification and comprehensive experiments for AdaDKRR to demonstrate its feasibility and effectiveness. Theoretically, we prove that under some mild conditions, AdaDKRR performs similarly to running the optimal learning algorithms on the whole data, verifying the necessity of collaborations and showing that no other distributed learning scheme can essentially beat AdaDKRR under the same conditions. Numerically, we test AdaDKRR on both toy simulations and two real-world applications to show that AdaDKRR is superior to other existing distributed learning schemes. All these results show that AdaDKRR is a feasible scheme to defend against data silos, which are highly desired in numerous application regions such as intelligent decision-making, pricing forecasting, and performance prediction for products.
Di Wang, Xiaotong Liu, Shao-Bo Lin, Ding-Xuan Zhou
2023-09-08T09:54:36
http://arxiv.org/abs/2309.04236v1
# Adaptive Distributed Kernel Ridge Regression: A Feasible Distributed Learning Scheme for Data Silos ###### Abstract Data silos, mainly caused by privacy and interoperability, significantly constrain collaborations among different organizations with similar data for the same purpose. Distributed learning based on divide-and-conquer provides a promising way to settle the data silos, but it suffers from several challenges, including autonomy, privacy guarantees, and the necessity of collaborations. This paper focuses on developing an adaptive distributed kernel ridge regression (AdaDKRR) by taking autonomy in parameter selection, privacy in communicating non-sensitive information, and the necessity of collaborations in performance improvement into account. We provide both solid theoretical verification and comprehensive experiments for AdaDKRR to demonstrate its feasibility and effectiveness. Theoretically, we prove that under some mild conditions, AdaDKRR performs similarly to running the optimal learning algorithms on the whole data, verifying the necessity of collaborations and showing that no other distributed learning scheme can essentially beat AdaDKRR under the same conditions. Numerically, we test AdaDKRR on both toy simulations and two real-world applications to show that AdaDKRR is superior to other existing distributed learning schemes. All these results show that AdaDKRR is a feasible scheme to defend against data silos, which are highly desired in numerous application regions such as intelligent decision-making, pricing forecasting, and performance prediction for products. distributed learning, data silos, learning theory, kernel ridge regression ## 1 Introduction Big data has made a profound impact on people's decision-making, consumption patterns, and ways of life (Davenport et al., 2012; Tambe, 2014), with many individuals now making decisions based on analyzing data rather than consulting experts; shopping online based on historical sales data rather than going to physical stores; gaining insights into consumer behaviors and preferences based on the consumption data rather than language communications. With the help of big data, organizations can identify patterns, trends, and correlations that may not be apparent in data of small size, which leads to more accurate predictions, better understanding of behaviors, and improved operational efficiencies. However, data privacy and security (Jain et al., 2016; Li and Qin, 2017) have garnered widespread attention, inevitably resulting in the so-called data silos, meaning that large-scale data distributed across numerous organizations cannot be centrally accessed, that is, organizations can only use their own local data but cannot obtain relevant data from elsewhere. For example, a large amount of medical data are stored in fragmented forms in different medical institutions but cannot be effectively aggregated; massive amounts of operational data are distributed among various companies but cannot be centrally accessed; and numerous consumer behavior data are collected by different platforms but cannot become public resources due to privacy factors. Data silos are a significant challenge (Fan et al., 2014) for the use of big data, requiring ingenious multi-party collaboration methods to increase the willingness of data holders to cooperate and improve their efficiency of data analysis without leaking their sensitive information.
Designing and developing feasible methods to avoid data silos is a recent focus of machine learning, which not only determines the role that machine learning plays in the era of big data but also guides the future direction of machine learning development. Distributed learning (Balcan et al., 2012; Lea and Nicoll, 2013) is a promising approach for addressing the data silos, as it enables multiple parties to collaborate and learn from each other's data without having to share the data themselves. As shown in Figure 1, there are generally three ingredients in a distributed learning scheme. The first one is local processing, in which each local machine (party) runs a specific learning algorithm with its own algorithmic parameters and data to yield a local estimator. The second one is communication, where several useful but non-sensitive pieces of information are communicated with each other to improve the quality of local estimators. To protect data privacy, neither data nor information that could lead to data disclosure is permitted to be communicated. The last one is synthesization, in which all local estimators are communicated to the global machine to synthesize a global estimator. In this way, multiple parties can collaborate on solving problems that require access to the whole data from different sources while also addressing privacy concerns, as sensitive data are kept in their original locations and only non-sensitive information is shared among the parties involved. Figure 1: Training and testing flows of DKRR. Due to their success in circumventing the data silos, numerous distributed learning schemes with solid theoretical verification have been developed, including distributed linear regression (Zhang et al., 2013), distributed online learning (Dekel et al., 2012), distributed conditional maximum entropy learning (Mcdonald et al., 2009), distributed kernel ridge regression (Zhang et al., 2015), distributed local average regression (Chang et al., 2017a), distributed kernel-based gradient descent (Lin and Zhou, 2018), distributed spectral algorithms (Mucke and Blanchard, 2018), distributed multi-penalty regularization algorithms (Guo et al., 2019), and distributed coefficient regularization algorithms (Shi, 2019). In particular, these algorithms have been proven to achieve the optimal generalization error rates of their batch counterparts, as long as the algorithm parameters are properly selected and the number of local machines is not too large. However, how to choose appropriate algorithm parameters without sharing the data to achieve the theoretically optimal generalization performance of these distributed learning schemes is still open, because all the existing provable parameter selection strategies, such as the logarithmic mechanism for cross-validation (Liu et al., 2022), generalized cross-validation (Xu et al., 2019), and the discrepancy principle (Celisse and Wahl, 2021), need to access the whole data. This naturally raises the following problem: **Problem 1**: _How to develop a feasible parameter selection strategy without communicating the individual data of local machines with each other to equip distributed learning to realize its theoretically optimal generalization performance and successfully circumvent the data silos?_ In this paper, taking distributed kernel ridge regression as an example, we develop an adaptive parameter selection strategy based on communicating non-sensitive information to solve the above problem.
Our basic idea is to find a fixed basis, and each local machine computes an approximation of its derived rule (the relationship between the input and output) based on the basis and transfers the coefficients of the basis to the global machine. The global machine then synthesizes all the collected coefficients through a specific synthesis scheme and communicates the synthesized coefficients back to each local machine. In this way, each local machine obtains a good approximation of the global rule and uses this rule for cross-validation to determine its algorithm parameters. The road map of our approach is shown in Figure 2. Figure 2: Road map of the proposed parameter selection strategy. Using the developed parameter selection strategy, we propose a novel adaptive distributed kernel ridge regression (AdaDKRR) to address the data silos. Our main contributions can be summarized as follows: \(\bullet\)_Methodology novelty:_ Since data stored in different local machines cannot be communicated, developing an adaptive parameter selection strategy based on local data to equip distributed learning is not easy. The main novelty of our approach is a nonparametric-to-parametric model transition method, which determines the algorithm parameters of distributed nonparametric learning schemes by communicating the coefficients of fixed basis functions without leaking any sensitive information about the local data. With such a novel design, we develop a provable and effective parameter selection strategy for distributed learning based on cross-validation to solve the data silos. As far as we know, this is the first attempt at designing provable parameter selection strategies for distributed learning to address the data silos. \(\bullet\)_Theoretical assessments:_ Previous theoretical research (Zhang et al., 2015; Lin et al., 2017; Mucke and Blanchard, 2018; Shi, 2019) on distributed learning was carried out with three crucial assumptions: 1) the sizes of data in local machines are almost the same; 2) the parameters selected by different local machines are almost the same; 3) the number of local machines is not so large. In this paper, we present a detailed theoretical analysis by considering the role of the synthesization strategy and removing the assumption of the same data size. Furthermore, we borrow the idea of low-discrepancy sequences (Dick and Pillichshammer, 2010) and the classical radial basis function approximation (Wendland and Rieger, 2005; Rudi et al., 2015) to prove the feasibility of the proposed parameter selection strategy and remove the above-mentioned same-parameter assumption. Finally, we provide an optimal generalization rate for AdaDKRR in the framework of statistical learning theory (Cucker and Zhou, 2007; Steinwart and Christmann, 2008), which shows that if the number of local machines is not so large, the performance of AdaDKRR is similar to running KRR on the whole data. This provides a solid theoretical verification for the feasibility of AdaDKRR to address the data silos. \(\bullet\)_Experimental verification:_ We conduct both toy simulations and real-world data experiments to illustrate the excellent performance of AdaDKRR and verify our theoretical assertions. The numerical results show that AdaDKRR is robust to the number of basis functions, which makes the selection of the basis functions easy, thus obtaining satisfactory results.
In addition, AdaDKRR shows stable and effective learning performances in parameter selection for distributed learning, regardless of whether the numbers of samples allocated to local machines are the same or not. We also apply AdaDKRR to two real-world data sets, including ones designed to help determine car prices and GPU acceleration models, to test its usability in practice. The rest of this paper is organized as follows. In the next section, we introduce the challenges, motivations, and some related work of parameter selection in distributed learning. In Section 3, we propose AdaDKRR and introduce some related properties. In Section 4, we provide theoretical evidence of the effectiveness of the proposed adaptive parameter selection strategy and present an optimal generalization error bound for AdaDKRR. In Section 5, we numerically analyze the learning performance of AdaDKRR in toy simulations and two real-world applications. Finally, we draw a simple conclusion. The proofs of all theoretical results and some other relevant information about AdaDKRR are postponed to the Appendix. ## 2 Challenges, Our approaches, and Related Work Let \((\mathcal{H}_{K},\|\cdot\|_{K})\) be a reproducing kernel Hilbert space (RKHS) induced by a Mercer kernel \(K\)(Cucker and Zhou, 2007) on a compact input space \(\mathcal{X}\). Suppose there is a data set \(D_{j}=\{(x_{i,j},y_{i,j})\}_{i=1}^{|D_{j}|}\subset\mathcal{X}\times\mathcal{Y}\) stored in the \(j\)-th local machine with \(1\leq j\leq m\) and \(\mathcal{Y}\subseteq\mathbb{R}\) as the output space. Without loss of generality, we assume that there are no common samples of local machines, i.e. \(D_{j}\cap D_{j^{\prime}}=\varnothing\) for \(j\neq j^{\prime}\). Distributed kernel ridge regression (DKRR) with regularization parameters \(\vec{\lambda}:=\{\lambda_{1},\ldots,\lambda_{m}\}\) is defined by (Zhang et al., 2015; Lin et al., 2017) \[\overline{f}_{D,\vec{\lambda}}=\sum_{j=1}^{m}\frac{|D_{j}|}{|D|}f_{D_{j}, \lambda_{j}}, \tag{1}\] where \(\lambda_{j}>0\) is a regularization parameter for \(j=1,\ldots,m\), \(D=\cup_{j=1}^{m}D_{j}\), \(|D|\) denotes the cardinality of the data set \(D\), and the local estimator \(f_{D_{j},\lambda_{j}}\) is defined by \[f_{D_{j},\lambda_{j}}=\arg\min_{f\in\mathcal{H}_{K}}\left\{\frac{1}{|D_{j}|} \sum_{(x,y)\in D_{j}}(f(x)-y)^{2}+\lambda_{j}\|f\|_{K}^{2}\right\}. \tag{2}\] Therefore, in DKRR defined by (1), each local machine runs KRR (2) on its own data \(D_{j}\) with a specific regularization parameter \(\lambda_{j}\) to generate a local estimator, and the global machine synthesizes the global estimator \(\overline{f}_{D,\vec{\lambda}}\) by using a weighted average based on data sizes. If \(\lambda_{j}\) is given, then it does not need additional communications in DKRR to handle the data silos. However, how to choose \(\lambda_{j}\) to optimize DKRR is important and difficult, as data are distributively stored across different local machines and cannot be shared. ### Challenges and road-map for parameter selection in distributed learning Recalling that the introduction of the regularization term in (2) is to avoid the well-known over-fitting phenomenon (Cucker and Zhou, 2007) that the derived estimator fits the training data well but fails to predict other queries, the optimal regularization parameter is frequently Figure 3: Relationship between bias (variance) and regularization parameter values. Training data \(\{(x_{i},y_{i})\}_{i=1}^{5000}\) are generated by drawing \(\{x_{i}\}_{i=1}^{5000}\) i.i.d. 
according to the uniform distribution on \([0,1]^{3}\) and \(y_{i}=g_{1}(x_{i})+\varepsilon\), where \(g_{1}(x)\) is defined by (20) and \(\varepsilon\sim\mathcal{N}(0,0.1^{2})\); testing data \(\{(x^{\prime}_{i},y^{\prime}_{i})\}_{i=1}^{1000}\) are generated similarly to the training data but with a promise that \(y^{\prime}_{i}=g_{1}(x^{\prime}_{i})\). The training samples are uniformly distributed to \(m\) local machines, and the number \(m\) is set to 10. “DKRR-machine1” represents running KRR on a local machine with a data subset of size \(5000/m\). selected when the bias is close to the variance. However, as shown in Figure 3, if we choose the theoretically optimal regularization parameter based on its own data in each local machine, it is usually larger than the optimal parameter of the global estimator, i.e., \(\lambda_{1}^{*}>\lambda_{2}^{*}\sim\lambda_{3}^{*}\), resulting in the derived global estimator under-fitting. This is not surprising, as the weighted average in the definition of (1) helps to reduce the variance but has little influence on the bias, just as Figure 4 purports to show. Therefore, a smaller regularization parameter than the theoretically optimal one is required for each local machine based on its own data, leading to over-fitting for each local estimator. The weighted average in (1) then succeeds in reducing the variance of DKRR and avoids over-fitting. The problem is, however, that each local machine only accesses its own data, making it difficult to determine the extent of over-fitting needed to optimize the performance of distributed learning. This refers to the over-fitting problem of parameter selection in distributed learning, and it is also the main challenge of our study. Generally speaking, there are two ways to settle the over-fitting problem of parameter selection in distributed learning. One is to modify the existing parameter selection strategies, such as cross-validation (Gyorfi et al., 2002; Caponnetto and Yao, 2010), the balancing principle (De Vito et al., 2010; Lu et al., 2020), the discrepancy principle (Raskutti et al., 2014; Celisse and Wahl, 2021), and the Lepskii principle (Blanchard et al., 2019), to force the local estimator in each local machine to over-fit their own data. A typical example is the logarithmic mechanism (Liu et al., 2022), which uses \(\lambda_{j}^{\log_{|D_{j}|}|D|}\) to reduce the regularization parameter \(\lambda_{j}\) selected by \(D_{j}\) alone as the optimal one. Recalling that it is unknown what the extent of over-fitting should be, it is difficult for this approach to get appropriate regularization parameters to achieve the theoretically optimal learning performance established in (Zhang et al., 2015; Lin et al., 2017) of DKRR. The other is to modify the target functions for parameter selection in each local machine so that the existing strategies can directly find the optimal regularization parameter for distributed learning. We adopt the latter in this paper since it is feasible for this purpose by designing delicate communication strategies. In particular, it is possible to find a good approximation of the global estimator \(\overline{f}_{D,\bar{X}}\) by communicating non-private information. Figure 4: Comparisons of bias and variance with different regularization parameter values. The data and simulation settings are the same as in Figure 3. Our approach is motivated by four interesting observations. 
First, it can be seen in Figure 3 that the optimal regularization parameter for the local estimator \(f_{D_{j}}\) is not the optimal one for the global estimator. If we can find an approximation of the global estimator \(\overline{f}_{D,\overline{\lambda}}\) and use this approximation instead of the local estimator \(f_{D_{j},\lambda_{j}}\) as the target of parameter selection in the \(j\)-th local machine, then it is not difficult to determine a nearly optimal regularization parameter for the global estimator \(\overline{f}_{D,\overline{\lambda}}\) through the existing parameter selection strategies. Second, due to privacy, it is impossible to communicate the local estimator \(f_{D_{j},\lambda_{j}}\) directly since such a communication requires not only the coefficients of linear combinations of shifts of kernels but also the centers of the kernel that should be inputs of data \(D_{j}\). However, these local estimators can be well approximated by linear combinations of some fixed basis functions, which is a classical research topic in approximation theory (Narcowich and Ward, 2004; Wendland and Rieger, 2005; Narcowich et al., 2006). Third, the well-developed sampling approaches including Monte Carlo sampling and Quasi-Monte Carlo sampling (Dick and Pillichshammer, 2010; Leobacher and Pillichshammer, 2014) introduced several low-discrepancy sequences, such as Sobol sequences, Niederreiter sequences, and Halton sequences, to improve the efficiency of the above approximation. Based on this, each local machine can generate the same centers of the kernel to establish a set of fixed basis functions, thus realizing the communication of functions by transmitting coefficients. Finally, though data cannot be communicated, some other non-private information, such as predicted values of queries, gradients, and coefficients of some basis functions, is communicable in distributed learning (Li et al., 2014; Lee et al., 2017; Jordan et al., 2019). According to the above four important observations, we design the road map for parameter selection in distributed learning, as shown in Figure 2. As stated above, there are five crucial ingredients in our approach: basis functions generation, local approximation, communications, global approximation, and local parameter selection. For the first issue, we focus on searching low-discrepancy sequences (Dick and Pillichshammer, 2010; Leobacher and Pillichshammer, 2014) to form the centers of the kernel and then obtain a linear space spanned by these basis functions. For the second issue, we use the radial basis function approximation approach (Narcowich and Ward, 2004; Narcowich et al., 2006; Rudi et al., 2015) with the noise-free data \(\big{\{}\big{(}x_{i,j},f_{D_{j},\lambda_{j}}(x_{i,j})\big{)}\big{\}}\) to provide a local approximation of the local estimator. For the third issue, each local machine transmits the coefficients of its local approximation to the global machine without leaking any sensitive information about its own data. For the fourth issue, the global machine synthesizes these coefficients by weighted average like (1) and transmits the synthesized coefficients back to all local machines. For the last issue, each local machine executes a specific parameter selection strategy to determine the regularization parameter of the global approximation. 
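As an illustration of the first of these ingredients, the sketch below (an assumption-laden example, not the authors' code) lets every local machine regenerate the same low-discrepancy centers with `scipy.stats.qmc` and evaluate the resulting fixed kernel basis \(\{K_{\xi_k}\}\); the Gaussian kernel is used purely as a placeholder.

```python
import numpy as np
from scipy.stats import qmc

def shared_centers(n, d, seed=0):
    """Identical Sobol centers on every party: the same (n, d, seed) yields the
    same points, so no center has to be communicated."""
    return qmc.Sobol(d=d, scramble=True, seed=seed).random(n)

def kernel_basis(centers, X, gamma=1.0):
    """Design matrix whose k-th column is the fixed basis function K_{xi_k}(x)
    evaluated at the rows of X (Gaussian kernel as a stand-in)."""
    sq = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2)
    return np.exp(-gamma * sq)

# every local machine runs the same two lines independently
Xi = shared_centers(n=128, d=3, seed=2023)          # 128 Sobol points in [0, 1]^3
Phi_j = kernel_basis(Xi, np.random.rand(500, 3))    # basis evaluated on local inputs
```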
Noting that besides the coefficients of some fixed basis functions, the sensitive information of the data in local machines is not communicated, which implies that the proposed approach provides a feasible scheme to settle the data silos. ### Related work Since data silos caused by a combination of data privacy and interoperability impede the effective integration and management of data, it is highly desirable to develop feasible machine learning schemes to settle them and sufficiently explore the value of big data. Federated learning (Li et al., 2020) is a popular approach to handling the data silos. It starts with a pre-training model that all data holders know and aims at collaborative training through multiple rounds of communications of non-sensitive information from the data holders to aggregate a golden model. Although it has been numerically verified that federated learning is excellent in some specific application areas (Tuor et al., 2021; Li et al., 2022), the exploration of pre-training models and multiple rounds of communications leads to essential weaknesses of the current defense against privacy attacks, such as data poisoning, model poisoning, and inference attacks (Li et al., 2020; Lyu et al., 2020). More importantly, the lack of solid theoretical verifications restricts the use of federated learning in high-risk areas such as natural disaster prediction, financial market prediction, medical diagnosis prediction, and crime prediction. Theoretically, nonparametric distributed learning based on a divide-and-conquer strategy (Zhang et al., 2015; Zhou and Tang, 2020) is a more promising approach for addressing the data silos. As shown in Figure 1, it does not need a pre-training model or multiple rounds of communications. Furthermore, solid theoretical verification has been established for numerous distributed learning schemes, including DKRR (Zhang et al., 2015; Lin et al., 2017), distributed gradient descents (Lin and Zhou, 2018), and distributed spectral algorithms (Guo et al., 2017; Mucke and Blanchard, 2018), in the sense that such a distributed learning scheme performs almost the same as running the corresponding algorithms on the whole data under some conditions. These interesting results seem to show that distributed learning can successfully address the data silos while realizing the benefits of big data without communicating sensitive information about the data. However, all these exciting theoretical results are based on the assumption of proper selection of the algorithm (hyper-)parameters for distributed learning, which is challenging in reality if the data cannot be shared. This is the main reason why nonparametric distributed learning has not been practically used for settling the data silos, though its design flow is very suitable for this purpose. As an open question in numerous papers (Zhang et al., 2015; Lin et al., 2017; Mucke and Blanchard, 2018; Zhao et al., 2019), the parameter selection of distributed learning has already been noticed by (Xu et al., 2019) and (Liu et al., 2022). In particular, Xu et al. (2019) proposed a distributed generalized cross-validation (DGCV) for DKRR and provided some solid theoretical analysis. It should be noted that the proposed DGCV essentially requires the communication of data, making it suffer from the data silos. Liu et al. 
(2022) proposed a logarithmic mechanism to force the over-fitting of local estimators without communicating sensitive information about local data and theoretically analyzed the efficacy of the logarithmic mechanism. However, their theoretical results are based on the assumption that the optimal parameter is algebraic with respect to the data size, which is difficult to verify in practice. Compared with all these related works, our main novelty is to propose an adaptive parameter selection strategy to equip non-parametric distributed learning schemes and thus settle the data silos. It should be highlighted that our proposed approach only needs two rounds of communications of non-sensitive information. We provide the optimality guarantee in theory and the feasibility evidence in applications. ## 3 Adaptive Distributed Kernel Ridge Regression In this section, we propose an adaptive parameter selection strategy for distributed kernel ridge regression, which is named AdaDKRR, to address the data silos. As discussed in the ``` 0: Training data subset \(D_{j}=\{(x_{ij},y_{ij})\}_{i=1}^{|D_{j}|}\) with \(x_{ij}\in\mathcal{X}\) and \(|y_{ij}|\leq M\) stored in the \(j\)-th local machine for \(j=1,\cdots,m\), a candidate set of the regularization parameter \(\Lambda=\{\lambda_{\ell}\}_{\ell=1}^{L}\), and a query point \(x\). Divide \(D_{j}=\{(x_{ij},y_{ij})\}_{i=1}^{|D_{j}|}\) into training and validation sets, and denote them as \(D_{j}^{tr}\) and \(D_{j}^{val}\), respectively. 1: Local machines: given \(\lambda_{\ell}\in\Lambda\) and \(j\), run KRR with data \(D_{j}^{tr}\) to obtain a local estimator \[f_{D_{j}^{tr},\lambda_{\ell}}=\arg\min_{f\in\mathcal{H}_{K}}\left\{\frac{1}{|D _{j}^{tr}|}\sum_{(x,y)\in D_{j}^{tr}}(f(x)-y)^{2}+\lambda_{\ell}\|f\|_{K}^{2} \right\}.\] (3) \(\triangleright\) Local Processing 2: Local machines: generate the same set of centers \(\Xi_{n}=\{\xi_{k}\}_{k=1}^{n}\) and define a set of basis functions \(B_{n,K}:=\{\sum_{k=1}^{n}a_{k}K_{\xi_{k}}:a_{k}\in\mathbb{R}\}\) with \(K_{\xi}(x)=K(\xi,x)\). \(\triangleright\) Basis Generation 3: Local machines: for some \(s\in\mathbb{N}\), generate a set of points \(\{x_{i,j}^{*}\}_{i=1}^{s}\subseteq\mathcal{X}\) and define an approximation of \(f_{D_{j}^{tr},\lambda_{\ell}}\) by running KRR on data \(\left\{\left(x_{i,j}^{*},f_{D_{j}^{tr},\lambda_{\ell}}(x_{i,j}^{*})\right) \right\}_{i=1}^{s}\), that is, \[f_{D_{j}^{tr},\lambda_{\ell},n,\mu,s}^{local}=\arg\min_{f\in B_{n,K}}\frac{1}{ s}\sum_{i=1}^{s}\left(f(x_{i,j}^{*})-f_{D_{j}^{tr},\lambda_{\ell}}(x_{i,j}^{*}) \right)^{2}+\mu\|f\|_{K}^{2}\] (4) for some \(\mu>0\), and denote \(f_{D_{j}^{tr},\lambda_{\ell},n,\mu,s}^{local}=\sum_{k=1}^{n}a_{j,k,\ell}^{ local}K_{\xi_{k}}\). \(\triangleright\) Local Approximation 4: Local machines: transmit the cofficient matrix \(\left(a_{j,k,\ell}^{local}\right)_{k=1,\ell=1}^{n,L}\) to the global machine. 5: Global machine: synthesize the coefficients by \(a_{k,\ell}^{global}=\sum_{j=1}^{m}\frac{|D_{j}^{tr}|}{|D^{tr}|}a_{j,k,\ell}^{ local}\) and communicate \(\left(a_{k,\ell}^{global}\right)_{k=1,\ell=1}^{n,L}\) to each local machine. 
\(\triangleright\) Synthesization and Communication(II) 6: Local machines: obtain a global approximation \(f_{D^{tr},\lambda_{\ell},n,\mu,s}^{global}\) as \[f_{D^{tr},\lambda_{\ell},n,\mu,s}^{global}:=\sum_{k=1}^{n}a_{k,\ell}^{global} K_{\xi_{k}}=\sum_{j=1}^{m}\frac{|D_{j}^{tr}|}{|D^{tr}|}\sum_{k=1}^{n}a_{j,k,\ell}^{ local}K_{\xi_{k}}=\sum_{j=1}^{m}\frac{|D_{j}^{tr}|}{|D^{tr}|}f_{D_{j}^{tr}, \lambda_{\ell},n,\mu,s}^{local}\] (5) and define \[\lambda_{j}^{*}=\arg\min_{\lambda_{\ell}\in\Lambda}\frac{1}{|D_{j}^{val}|} \sum_{(x,y)\in D_{j}^{val}}\left(\pi_{M}f_{D^{tr},\lambda_{\ell},n,\mu,s}^{ global}(x)-y\right)^{2}\] (6) with \(\pi_{M}f(x)=\operatorname{sign}(f(x))\min\{|f(x)|,M\}\). \(\triangleright\) Local Validation 7: Local machines: calculate \(\pi_{M}f_{D^{tr},\lambda_{j}^{*},n,\mu,s}^{global}(x)\) and communicate it to the global machine. 8: Global machine: synthesize the AdaDKRR estimator as \(\triangleright\) Global Estimator \[\overline{f}_{D,\lambda^{*}}^{Ada}(x):=\overline{f}_{D,\lambda^{*},n,\mu,s}^{ Ada}(x)=\sum_{j=1}^{m}\frac{|D_{j}|}{|D|}\pi_{M}f_{D^{tr},\lambda_{j}^{*},n,\mu,s}^{ global}(x). \tag{7}\] previous section, our approach includes five important ingredients: basis generation, local approximation, communications, global approximation, and parameter selection. To ease the description, we use the "hold-out" approach (Caponnetto and Yao, 2010) in each local machine to adaptively select the parameter, though our approach can be easily designed for other strategies. The detailed implementation of AdaDKRR is shown in Algorithm 1. Compared with the classical DKRR (Zhang et al., 2015; Lin et al., 2017), AdaDKRR presented in Algorithm 1 requires five additional steps (Steps 2-6) that include basis generation, local approximation, global approximation, and two rounds of communications with \(\mathcal{O}(mnL)\) communication complexity. Algorithm 1 actually presents a feasible framework for selecting parameters of distributed learning, as the basis functions, local approximation, and global approximation are not unique. It should be highlighted that Algorithm 1 uses the "hold-out" method in selecting the parameters, while our approach is also available for cross-validation (Gyorfi et al., 2002), which requires a random division of the training data \(D_{j}\). We refer the readers to Algorithm 2 in the Appendix for the detailed training and testing flows of the cross-validation version of AdaDKRR. In the basis generation step (Step 2), we generate the same set of basis functions in all local machines so that the local estimators defined in (3) can be well approximated by linear combinations of these basis functions. Noting that the local estimators are smooth and in \(\mathcal{H}_{K}\), numerous basis functions, such as polynomials, splines, and kernels, can approximate them well from the viewpoint of approximation theory (Wendland and Rieger, 2005). Since we have already obtained a kernel \(K\), we use the kernel to build up the basis functions, and then the problem boils down to selecting a suitable set of centers \(\Xi_{n}:=\{\xi_{k}\}_{k=1}^{n}\) so that \(\text{span}\{K_{\xi_{k}}\}\) can well approximate functions in \(\mathcal{H}_{K}\). There are roughly two approaches to determining \(\Xi_{n}\). One is to generate a set of fixed low sequences, such as Sobol sequences and Halton sequences, with the same size (Dick and Pillichshammer, 2010). 
It can be found in (Dick and Pillichshammer, 2010) that the complexity of generating \(n\) Sobol sequences (or Halton sequences) is \(\mathcal{O}(n\log n)\). Furthermore, it can be found in (Dick and Pillichshammer, 2010; Dick, 2011; Feng et al., 2021) that there are \(c,\beta>0\) such that \[\sup_{\|f\|_{K}\leq 1,\|g\|_{K}\leq 1}\left|\int f(x)g(x)dP_{u}(x)-\frac{1}{ n}\sum_{k=1}^{n}f(\xi_{k})g(\xi_{k})\right|\leq cn^{-\beta}, \tag{8}\] where \(P_{u}\) denotes a uniform distribution. The other method is to generate \(n\) points (in a random manner according to a uniform distribution) in the global machine, and then the global machine transmits this set of points to all local machines. In this paper, we focus on the first method to reduce the cost of communications, though the second one is also feasible. In the local approximation step (Step 3), we aim to finding a good approximation of the local estimator \(f_{D_{j}^{tr},\lambda_{\ell}}\). The key is to select a suitable set of points \(\{x_{i,j}^{*}\}_{j=1}^{s}\) and a suitable parameter \(\mu\) so that the solution to (4) can well approximate \(f_{D_{j}^{tr},\lambda_{\ell}}\). Since there are already two point sets, \(\{x_{i,j}\}_{i=1}^{|D_{j}^{tr}|}\) and \(\{\xi_{k}\}_{k=1}^{n}\), we can select one of them as \(\{x_{i,j}^{*}\}_{j=1}^{s}\). In this paper, we use the former, but choosing the latter is also reasonable because the solution to (4) is a good approximation of the local estimator (Wendland and Rieger, 2005). Noting \(s=|D_{j}^{tr}|\), we write \(f_{D_{j}^{tr},\lambda_{\ell},n,\mu,s}^{local}\) as \(f_{D_{j}^{tr},\lambda_{\ell},n,\mu}^{local}\). Recalling the idea of Nystrom regularization (Rudi et al., 2015; Sun et al., 2021) and regarding (4) as a Nystrom regularization scheme with the noise-free data \(\left\{\left(x_{i,j},f_{D_{j}^{tr},\lambda_{\ell}}(x_{i,j})\right)\right\}_{i =1}^{|D_{j}^{tr}|}\), we obtain from (4) that \(f_{D_{j}^{tr},\lambda_{\ell},n,\mu}^{local}(\cdot)=\sum_{k=1}^{n}\alpha_{j,k, \ell}^{local}K_{\xi_{k}}(\cdot)\), where \[\vec{a}_{j,\ell}^{local}:=\Big{(}a_{j,1,\ell}^{local},\cdots,a_{j,n,\ell}^{ local}\Big{)}^{T}\!\!=\!\!\left(\mathbb{K}_{|D_{j}^{tr}|,n}^{T}\mathbb{K}_{|D_{j}^{tr} |,n}+\mu\left|D_{j}^{tr}\right|\mathbb{K}_{n,n}\right)^{\dagger}\mathbb{K}_{ |D_{j}^{tr}|,n}^{T}\!\!\vec{f}_{D_{j}^{tr},\lambda_{\ell}}, \tag{9}\] \(A^{\dagger}\) and \(A^{T}\) denote the Moore-Penrose pseudo-inverse and transpose of a matrix \(A\), respectively, \(\Big{(}\mathbb{K}_{|D_{j}^{tr}|,n}\Big{)}_{i,k}=K(x_{i,j},\xi_{k})\), \((\mathbb{K}_{n,n})_{k,k^{\prime}}=K(\xi_{k},\xi_{k^{\prime}})\), and \(\vec{f}_{D_{j}^{tr},\lambda_{\ell}}=\Big{(}f_{D_{j}^{tr},\lambda_{\ell}}(x_{1,j}),\dots,\)\(f_{D_{j}^{tr},\lambda_{\ell}}(x_{|D_{j}^{tr}|,j})\Big{)}^{T}\). Therefore, it requires \(\mathcal{O}\left(|D_{j}^{tr}|n^{2}+n^{3}\right)\) floating computations to derive the local approximation. Since \(\Big{\{}\Big{(}x_{i,j},f_{D_{j}^{tr},\lambda_{\ell}}(x_{i,j})\Big{)}\Big{\}}_ {i=1}^{|D_{j}^{tr}|}\) is noise-free, the parameter \(\mu\) in (4) is introduced to overcome the ill-conditionness of the linear least problems and thus can be set to be small (e.g., \(\mu=10^{-4}\)). In the global approximation step (Step 6), the global approximation is obtained through a weighted average. The optimal parameters of local machines are then searched for the global approximation via the validation set. 
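For concreteness, the local-approximation coefficients in (9) can be computed with a Moore-Penrose pseudo-inverse, as in the short sketch below; the variable names are ours, and the kernel matrices are assumed to have been assembled beforehand.

```python
import numpy as np

def local_approx_coeffs(K_Dn, K_nn, f_vals, mu):
    """Coefficients of the local approximation in Eq. (9):
    a = (K_Dn^T K_Dn + mu * |D_j^tr| * K_nn)^+  K_Dn^T f_vals,
    where K_Dn[i, k] = K(x_{i,j}, xi_k), K_nn[k, k'] = K(xi_k, xi_{k'}),
    and f_vals[i] = f_{D_j^tr, lambda_l}(x_{i,j}) are the noise-free targets."""
    n_tr = K_Dn.shape[0]                               # |D_j^tr|
    A = K_Dn.T @ K_Dn + mu * n_tr * K_nn
    return np.linalg.pinv(A) @ (K_Dn.T @ f_vals)       # Moore-Penrose pseudo-inverse

# e.g. with 400 local training inputs, 128 shared centers, and mu = 1e-4:
# a_j = local_approx_coeffs(K_Dn, K_nn, f_vals, mu=1e-4)   # shape (128,)
```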
If \(f_{D_{j}^{tr},\lambda_{\ell},n,\mu}^{local}\) is a good approximation of \(f_{D_{j}^{tr},\lambda_{\ell}}\), then \(f_{D^{tr},\lambda_{\ell},n,\mu}^{global}\) is a good approximation of the global estimator \(\overline{f}_{D^{tr},\lambda_{\ell}}\) defined by (1). Therefore, the optimal parameters selected for the global approximation are close to those of the global estimator. It should be noted that introducing the truncation operator \(\pi_{M}\) in parameter selection is to ease the theoretical analysis and does not require additional computation. This step requires \(\mathcal{O}\left(|D_{j}^{val}|nL\right)\) floating-point computations. The flows of AdaDKRR adopted in this paper can be found in Figure 5.

Figure 5: Training and testing flows of the proposed method

## 4 Theoretical Verifications

In this section, we study the generalization performance of AdaDKRR defined by (7) in a framework of statistical learning theory (Cucker and Zhou, 2007; Steinwart and Christmann, 2008), where samples in \(D_{j}\) for \(j=1,2,\dots,m\) are assumed to be independently and identically drawn according to an unknown joint distribution \(\rho:=\rho(x,y)=\rho_{X}(x)\rho(y|x)\) with the marginal distribution \(\rho_{X}\) and the conditional distribution \(\rho(\cdot|x)\). The regression function \(f_{\rho}(x)=E[y|X=x]\) minimizes the generalization error \(\mathcal{E}(f):=\int_{\mathcal{Z}}(f(x)-y)^{2}d\rho\) for \(f\in L_{\rho_{X}}^{2}\), where \(L_{\rho_{X}}^{2}\) denotes the Hilbert space of \(\rho_{X}\)-square integrable functions on \(\mathcal{X}\), with the norm denoted by \(\|\cdot\|_{\rho}\). Therefore, the purpose of learning is to obtain an estimator \(f_{D}\) based on \(D_{j}\) for \(j=1,2,\dots,m\) to approximate the regression function \(f_{\rho}\) without leaking the privacy information of \(D_{j}\). In this way, the performance of the global estimator \(\overline{f}_{D,\vec{\lambda}^{*}}^{Ada}\) is quantitatively measured by the excess generalization error \[\mathcal{E}(\overline{f}_{D,\vec{\lambda}^{*}}^{Ada})-\mathcal{E}(f_{\rho})=\Big{\|}\overline{f}_{D,\vec{\lambda}^{*}}^{Ada}-f_{\rho}\Big{\|}_{\rho}^{2}, \tag{10}\] which describes the relationship between the prediction error and the data size.

### Generalization error for DKRR

Before presenting the generalization error analysis for AdaDKRR, we first study the theoretical assessments of DKRR, which have been made in (Zhang et al., 2015; Lin et al., 2017) to show that DKRR performs similarly to running KRR on the whole data stored in a large enough machine, provided \(m\) is not so large, \(|D_{1}|\sim\cdots\sim|D_{m}|\), and the regularization parameters \(\lambda_{1}\sim\cdots\sim\lambda_{m}\) are similar to the optimal regularization parameter of KRR with the whole data. The restriction on the number of local machines is natural since it is impossible to derive satisfactory generalization error bounds for DKRR when \(m=|D|\), i.e., there is only one sample in each local machine. However, the assumptions \(|D_{1}|\sim\cdots\sim|D_{m}|\) and \(\lambda_{1}\sim\cdots\sim\lambda_{m}\) are somewhat unrealistic. On the one hand, local agents participating in the distributed learning system frequently have different data sizes, making it unrealistic to assume that the data sizes of local machines are the same. On the other hand, it is difficult to develop a parameter selection strategy for local machines such that the selected regularization parameters are similar to the optimal regularization parameter of KRR with the whole data, as local agents only access their own data.
Noticing these, we derive optimal generalization error bounds for DKRR without the assumptions \(|D_{1}|\sim\cdots\sim|D_{j}|\) and \(\lambda_{1}\sim\cdots\sim\lambda_{m}\) the same as the theoretically optimal one. For this purpose, we introduce several standard assumptions on the data \(D_{j}\), regression function \(f_{\rho}\), and kernel \(K\). As shown in Algorithm 1, our first assumption is the boundedness assumption of the output. **Assumption 1**: _There exists a \(M>0\) such that \(|y|\leq M\) almost surely._ Assumption 1 is quite mild since we are always faced with finitely many data, whose outputs are naturally bounded. It should be mentioned that Assumption 1 implies \(\|f_{\rho}\|_{L^{\infty}}\leq M\) directly. To present the second assumption, we should introduce the integral operator \(L_{K}\) on \(\mathcal{H}_{K}\) (or \(L^{2}_{\rho_{X}}\)) given by \[L_{K}f:=\int_{\mathcal{X}}K_{x}f(x)d\rho_{X},\qquad f\in\mathcal{H}_{K}\quad( \text{or }f\in L^{2}_{\rho_{X}}).\] The following assumption shows the regularity of the regression function \(f_{\rho}\). **Assumption 2**: _For some \(r>0\), assume_ \[f_{\rho}=L_{K}^{r}h_{\rho},\ \ \text{for some }h_{\rho}\in L^{2}_{\rho_{X}}, \tag{11}\] _where \(L_{K}^{r}\) denotes the \(r\)-th power of \(L_{K}:L^{2}_{\rho_{X}}\to L^{2}_{\rho_{X}}\) as a compact and positive operator._ According to the no-free-lunch theory (Gyorfi et al., 2002, Chap.3), it is impossible to derive a satisfactory rate for the excess generalization error if there is no restriction on the regression functions. Assumption 2 actually connects \(f_{\rho}\) with the adopted kernel \(K\), where the index \(r\) in (11) quantifies the relationship. Indeed, (11) with \(r=1/2\) implies \(f_{\rho}\in\mathcal{H}_{K}\), \(0<r<1/2\) implies \(f_{\rho}\notin\mathcal{H}_{K}\), and \(r>1/2\) implies that \(f_{\rho}\) is in an RKHS generated by a smoother kernel than \(K\). Our third assumption is on the property of the kernel, measured by the effective dimension (Caponnetto and De Vito, 2007), \[\mathcal{N}(\lambda)=\operatorname{Tr}((\lambda I+L_{K})^{-1}L_{K}),\qquad \lambda>0.\] **Assumption 3**: _There exists some \(s\in(0,1]\) such that_ \[\mathcal{N}(\lambda)\leq C_{0}\lambda^{-s}, \tag{12}\] _where \(C_{0}\geq 1\) is a constant independent of \(\lambda\)._ It is obvious that (12) always holds with \(s=0\) and \(C_{0}=\kappa:=\sqrt{\sup_{x\in\mathcal{X}}K(x,x)}\). As discussed in (Fischer and Steinwart, 2020), Assumption 3 is equivalent to the eigenvalue decay assumption employed in (Caponnetto and De Vito, 2007; Steinwart et al., 2009; Zhang et al., 2015). It quantifies the smoothness of the kernel and the structure of the marginal distribution \(\rho_{X}\). For example, if \(\rho_{X}\) is the uniform distribution on the unit cube in the \(d\)-dimensional space \(\mathbb{R}^{d}\) (i.e., \(\mathcal{X}=\mathbb{I}^{d}\)), and \(K\) is a Sobolev kernel of order \(\tau>d/2\), then Assumption 3 holds with \(s=\frac{d}{2\tau}\)(Steinwart et al., 2009). The above three assumptions have been widely used to analyze generalization errors for kernel-based learning algorithms (Blanchard and Kramer, 2016; Chang et al., 2017; Dicker et al., 2017; Guo et al., 2017, 2017; Lin et al., 2017; Lin and Zhou, 2018; Mucke and Blanchard, 2018; Shi, 2019; Fischer and Steinwart, 2020; Lin et al., 2020; Sun et al., 2021), and optimal rates of excess generalization error for numerous learning algorithms have been established under these assumptions. 
Under these well-developed assumptions, we provide the following theorem that DKRR can achieve the optimal rate of excess generalization error established for KRR with the whole data (Caponnetto and De Vito, 2007; Lin et al., 2017; Fischer and Steinwart, 2020), even when different local machines possess different data sizes. **Theorem 1**: _Under Assumption 1, Assumption 2 with \(\frac{1}{2}\leq r\leq 1\), and Assumption 3 with \(0<s\leq 1\), if_ \[\lambda_{j}=C_{1}\left\{\begin{array}{cc}|D|^{-\frac{1}{2r+s}},&\mbox{if}&|D _{j}|\geq|D|^{\frac{1}{2r+s}}\log^{4}|D|,\\ |D_{j}|^{-1}\log^{4}|D|,&\mbox{otherwise},\end{array}\right. \tag{13}\] _and_ \[m\leq|D|^{\frac{s}{2r+s}}(\log|D|)^{-8r}, \tag{14}\] _then_ \[E\left[\|\overline{f}_{D,\vec{\lambda}}-f_{\rho}\|_{\rho}^{2}\right]\leq C_{2 }|D|^{-\frac{2r}{2r+s}}, \tag{15}\] _where \(C_{1}\) and \(C_{2}\) are constants independent of \(|D|\) or \(m\)._ Under Assumptions 1-3, it can be found in (Caponnetto and De Vito, 2007) that the derived learning rates in (15) are optimal in the sense that there is a regression function \(f_{\rho}^{*}\) satisfying the above three assumptions such that \[E\left[\|\overline{f}_{D,\vec{\lambda}}-f_{\rho}^{*}\|_{\rho}^{2}\right]\geq C _{3}|D|^{-\frac{2r}{2r+s}}\] for a constant \(C_{3}\) depending only on \(r\) and \(s\). Unlike the existing results on distributed learning (Zhang et al., 2015; Lin et al., 2017; Mucke and Blanchard, 2018; Lin et al., 2020) that imposed strict restrictions on the data sizes of local machines, i.e., \(|D_{1}|\sim|D_{2}|\sim\cdots\sim|D_{m}|\), Theorem 1 removes this condition since it is difficult to guarantee the same data size for all participants in the distributed learning system. As a result, it requires completely different mechanisms (13) to select the regularization parameter and stricter restriction on the number of local machines (14). The main reason for the stricter restriction on \(m\) is that the distributed learning system accommodates local machines with little data, i.e., \(|D_{j}|\leq|D|^{\frac{1}{2r+s}}\log^{4}|D|\). If we impose a qualification requirement that each participant in the distributed learning system has at least \(|D|^{\frac{1}{2r+s}}\log^{4}|D|\) samples, then the restriction can be greatly relaxed, just as the following corollary shows. **Corollary 2**: _Under Assumption 1, Assumption 2 with \(\frac{1}{2}\leq r\leq 1\), and Assumption 3 with \(0<s\leq 1\), if \(|D_{j}|\geq|D|^{\frac{1}{2r+s}}\log^{4}|D|\), \(\lambda_{j}=C_{1}|D|^{-\frac{1}{2r+s}}\) for all \(j=1,2,\ldots,m\), and_ \[m\leq|D|^{\frac{2r+s-1}{2r+s}}\log^{-4}|D|, \tag{16}\] _then_ \[E\left[\|\overline{f}_{D,\vec{\lambda}}-f_{\rho}\|_{\rho}^{2}\right]\leq C_{2}|D |^{-\frac{2r}{2r+s}}, \tag{17}\] _where \(C_{1}\) and \(C_{2}\) are constants independent of \(|D|\) or \(m\)._ Theorem 1 and Corollary 2 provide a baseline for the analysis of AdaDKRR in terms that the generalization error of AdaDKRR should be similar to (15). ### Learning performance of AdaDKRR In this subsection, we study the theoretical behavior of AdaDKRR (7) by estimating its generalization error in the following theorem. 
**Theorem 3**: _Under Assumption 1, Assumption 2 with \(1/2\leq r\leq 1\), and Assumption 3 with \(0<s\leq 1\), if \(\rho_{X}\) is a uniform distribution, \(|D_{j}|\geq(8C_{1}^{*}(\log(1+\kappa)+2))^{2}|D|^{\frac{1}{2r+s}}\log^{4}|D|\), \(\Lambda\) contains a \(\bar{\lambda}\sim|D|^{-\frac{1}{2r+s}}\), and_ \[m\leq\min\left\{|D|^{\frac{2r+s-1}{4r+2s}}\log^{-4}|D|,|D|^{\frac{s}{2r+s}}\log ^{-1}L\right\}, \tag{18}\] _then for any \(\mu\in\left[\left(8C_{1}^{*}\left(\log(1+\kappa)+2)\right)^{2}\max_{j=1,..,m} \frac{\log^{4}|D_{j}^{tr}|}{|D_{j}^{tr}|},|D|^{-\frac{1}{2r+s}}\right]\) and \(\Xi_{n}\) satisfying (8) for some \(c,\beta>0\) with \(\mu n^{\beta}\geq 2c\), there holds_ \[E\left[\left\|\overline{f}_{D,\bar{\lambda}^{*}}^{Ada}-f_{\rho}\right\|_{\rho }^{2}\right]\leq C|D|^{-\frac{2r}{2r+s}}, \tag{19}\] _where \(C,C_{1}^{*}\) are constants depending only on \(\|h_{\rho}\|_{\rho},M,r,C_{0},c,\) and \(\beta\)._ Compared with Theorem 1, it can be found that AdaDKRR possesses the same generalization error bounds under some additional restrictions, implying that the proposed parameter selection strategy is optimal in the sense that no other strategies always perform better. There are five additional restrictions that may prohibit the wide use of the proposed approach: (I) \(m\) satisfies (18); (II) \(|D_{j}|\geq(8C_{1}^{*}(\log(1+\kappa)+2))^{2}|D|^{\frac{1}{2r+s}}\log^{4}|D|\); (III) \(\Lambda\) contains a \(\bar{\lambda}\sim|D|^{-\frac{1}{2r+s}}\); (IV) \(\rho_{X}\) is a uniform distribution; (V) \(\mu\in\left((8C_{1}^{*}(\log(1+\kappa)+2))^{2}\right.\)\(\left.\max_{j=1,...,m}\left(\log^{4}|D_{j}^{tr}|)/|D_{j}^{tr}|,|D|^{-\frac{1}{2r+s}}\right)\) and \(\Xi_{n}\) satisfies (8) for some \(c,\beta>0\) with \(n\) satisfying \(\mu n^{\beta}\geq 2c\). Condition (I) is necessary since it is impossible to derive a satisfactory distributed learning estimator when each local machine has only one sample. Condition (II) presents a qualification requirement for the local machines participating in the distributed learning system, indicating that their data sizes should not be so small. Condition (III) means that the candidate set \(\Lambda\) should include the optimal parameter. Noting (18), the restriction on \(m\) is logarithmic with respect to \(L\), and we can set \(\Lambda=\{\lambda_{k}\}_{k=1}^{L}\) with \(\lambda_{k}=q^{k}\) for some \(q\in(0,1)\) and \(L\sim|D|\). Conditions (IV) and (V) are mainly due to setting \(\{x_{i,j}^{*}\}_{i=1}^{s}\) to \(\{x_{i,j}\}_{i=1}^{|D_{j}|}\) in the local approximation step (Step 3 in Algorithm 1). Therefore, we have to use the quadrature property (8) of the low-discrepancy property, which requires the samples to be drawn i.i.d. according to the uniform distribution. Furthermore, the well-conditioness of the local approximation imposes a lower bound of \(\mu\). Since \(|D_{j}|\geq(8C_{1}^{*}(\log(1+\kappa)+2))^{2}|D|^{\frac{1}{2r+s}}\log^{4}|D|\), it is easy to check that \((8C_{1}^{*}(\log(1+\kappa)+2))^{2\frac{\log^{4}|D_{j}^{tr}|}{|D_{j}^{tr}|}}\leq |D|^{-\frac{1}{2r+s}}\), and there are numerous feasible values for \(\mu\). The restriction on \(\mu\) is to theoretically verify the well-conditionness of the local approximation in the worst case. In practice, it can be set to \(10^{-4}\) directly. It would also be interesting to set a suitable \(\{x_{i,j}^{*}\}_{i=1}^{s}\) to remove or relax conditions (IV) and (V). 
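Condition (III) only asks that the candidate set contain a value of the right order, and the discussion above notes that a geometric grid \(\lambda_{k}=q^{k}\) with \(L\sim|D|\) suffices; a small sketch of such a grid (together with the practical default \(\mu=10^{-4}\) mentioned above) follows, with all constants chosen for illustration only.

```python
import numpy as np

def geometric_grid(q=0.5, lam_min=1e-10):
    """Candidate set Lambda = {q^k : q^k >= lam_min, k = 0, 1, 2, ...}."""
    lams, lam = [], 1.0
    while lam >= lam_min:
        lams.append(lam)
        lam *= q
    return np.array(lams)

Lambda = geometric_grid(q=1/2)   # same form as the grid {1/2^q >= 1e-10} used in Section 5
mu = 1e-4                        # practical default for the local-approximation parameter
```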
As shown in Theorem 3, under some assumptions, we prove that AdaDKRR performs similarly to running the optimal learning algorithms on the whole data \(D=\cup_{j=1}^{m}D_{j}\) without considering data privacy. Recalling in Algorithm 1 that AdaDKRR only requires communicating non-sensitive information, it is thus a feasible strategy to address the data silos. ## 5 Experimental Results In this section, we use the following parameter selection methods for distributed learning to conduct experiments on synthetic and real-world data sets: 1. On each local machine, the parameters are selected by cross-validation, and DKRR is executed with these selected parameters; we call this method DKRR with cross-validation ("DKRR" for short). 2. On the \(j\)-th local machine, we first select parameters by cross-validation, and then transform the selected regularization parameter \(\lambda_{j}\) by \(\lambda_{j}\leftarrow\lambda_{j}^{\log|D|/\log|D_{j}|}\), and transform the selected kernel width \(\sigma_{j}\) by \(\sigma_{j}\leftarrow\sigma_{j}^{\log|D|/\log|D_{j}|}\) if the Gaussian kernel is used; DKRR is executed with these transformed parameters; we call this method DKRR with cross-validation and logarithmic transformation ("DKRRLog" for short). 3. The proposed adaptive parameter selection method is applied to distributed learning and is denoted by "AdaDKRR". All the experiments are run on a desktop workstation equipped with an Intel(R) Core(TM) i9-10980XE 3.00 GHz CPU, 128 GB of RAM, and Windows 10. The results are recorded by averaging the results from multiple individual trials with the best parameters.1 Footnote 1: The MATLAB code, as well as the data sets, can be downloaded from [https://github.com/18357710774/](https://github.com/18357710774/) AdaDKRR. ### Synthetic Results In this part, the performance of the proposed method is verified by four simulations. The first one studies the influence of the number and type of center points for local approximation on generalization ability. The second one exhibits the robustness of AdaDKRR to the number of center points. The third simulation presents comparisons of the generalization ability of the three mentioned methods with changing the number of local machines, provided that all training samples are uniformly distributed to local machines. The last simulation focuses on comparisons of generalization ability for the three methods when the training samples are unevenly distributed on local machines. Before carrying out experiments, we describe the generating process of the synthetic data and some important settings of the simulations. The inputs \(\{x_{i}\}_{i=1}^{N}\) of training samples are independently drawn according to the uniform distribution on the (hyper-)cube \([0,1]^{d}\) with \(d=3\) or \(d=10\). The corresponding outputs \(\{y_{i}\}_{i=1}^{N}\) are generated from the regression models \(y_{i}=g_{j}(x_{i})+\varepsilon\) with the Gaussian noise \(\mathcal{N}(0,0.2)\) for \(j=1,2\), where \[g_{1}(x)=\left\{\begin{array}{ll}(1-\|x\|_{2})^{6}(35\|x\|_{2}^{2}+18\|x\|_{2 }+3)&\quad\text{if }0<\|x\|_{2}\leq 1,\\ 0&\quad\text{if }\|x\|_{2}>1,\end{array}\right. \tag{20}\] for the 3-dimensional data, and \[g_{2}(x)=(\|x\|_{2}-1)\left(\|x\|_{2}-2\right)(\|x\|_{2}-3) \tag{21}\] for the 10-dimensional data. The generation of test sets \(\{(x_{i}^{\prime},y_{i}^{\prime})\}_{i=1}^{N^{\prime}}\) is similar to that of training sets, but it has the promise of \(y_{i}^{\prime}=g_{j}(x_{i}^{\prime})\). 
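The synthetic data described above can be reproduced with a few lines; the sketch below implements \(g_{1}\) in (20) and \(g_{2}\) in (21), and reads the reported \(\mathcal{N}(0,0.2)\) noise as a standard deviation of 0.2, which is our assumption.

```python
import numpy as np

def g1(X):
    """3-dimensional target (20): compactly supported radial function."""
    r = np.linalg.norm(X, axis=1)
    val = (1 - r) ** 6 * (35 * r ** 2 + 18 * r + 3)
    return np.where(r <= 1, val, 0.0)

def g2(X):
    """10-dimensional target (21)."""
    r = np.linalg.norm(X, axis=1)
    return (r - 1) * (r - 2) * (r - 3)

def make_data(N, d, target, noise_std, rng):
    X = rng.uniform(size=(N, d))                      # inputs uniform on [0, 1]^d
    return X, target(X) + rng.normal(0.0, noise_std, size=N)

rng = np.random.default_rng(0)
X_tr, y_tr = make_data(10000, 3, g1, noise_std=0.2, rng=rng)   # noisy training set
X_te = rng.uniform(size=(1000, 3)); y_te = g1(X_te)            # noise-free test targets
```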
For the 3-dimensional data, we use the kernel function \(K_{1}(x_{1},x_{2})=h(\|x_{1}-x_{2}\|_{2})\) with \[h(r)=\left\{\begin{array}{ll}(1-r)^{4}(4r+1)&\quad\text{if }0<r\leq 1,\\ 0&\quad\text{if }r>1,\end{array}\right. \tag{22}\] and the regularization parameter \(\lambda\) is chosen from the set \(\{\frac{1}{2^{q}}|\frac{1}{2^{q}}\geq 10^{-10},q=0,1,2,\cdots\}\). For the 10-dimensional data, we use the Gaussian kernel \(K_{2}(x_{1},x_{2})=\exp\left(-\frac{\|x_{1}-x_{2}\|_{2}^{2}}{2\sigma^{2}}\right)\), the regularization parameter \(\lambda\) is chosen from the set \(\{\frac{1}{3^{q}}|\frac{1}{3^{q}}\geq 10^{-10},q=0,1,2,\cdots\}\), and the kernel width \(\sigma\) is chosen from 10 values that are drawn in a logarithmic, equally spaced interval \([0.1,10]\). In the simulations, we generate 10000 samples for training and 1000 samples for testing, and the regularization parameter \(\mu\) for local approximation is fixed as \(10^{-4}\). **Simulation 1:** In this simulation, we select three types of center points for local approximation, including two QMCS (Sobol points and Halton points) and one MCS (random points). The number \(m\) of local machines varies from the set \(\{20,40,80,160\}\). For each fixed \(m\), the relation between the test MSE and the number \(n\) of center points is shown in Figure 6, in which the dashed lines exhibit the best test MSEs with the optimal numbers of the three types of center points. From the above results, we have the following observations: 1) As the number of center points increases, the curves of test MSE have a trend of descending first and then ascending. This is because very few center points cannot provide satisfactory accuracy for local approximation, resulting in the approximate function based on the center points having a large deviation from the ground truth, while a large number of center points put the estimator at risk of over-fitting. 2) The optimal number of center points generally decreases as the number of local machines increases. Because a smaller \(m\) indicates that there are more training samples on each local machine, more center points are required to cover these samples to obtain a satisfactory local approximation. 3) The three types of center points perform similarly on the 3-dimensional data, but Sobol points and Halton points are obviously better than random points on the 10-dimensional data, especially for larger numbers of local machines. In addition, the optimal number of random points is usually larger than the number of Sobol points and Halton points. The reason is that the discrepancy of QMCS is smaller than that of MCS, indicating that the sample distribution of QMCS is more uniform than that of MCS. Therefore, QMCS can better describe the structural information of the data and is more effective than MCS in local approximation. Since Sobol points perform similarly to Halton points, we take Sobol points as an example to demonstrate the superiority of the proposed method in the following experiments. **Simulation 2:** In this simulation, we check the robustness of the proposed method concerning the number \(n\) of center points, as \(n\) determines the accuracy of the local approximation. We set the number \(n\) in two ways: 1) by fixing \(n\) as a constant (denoted by "\(n=\#\)"), and 2) by adaptively adjusting \(n\) as the average number of training samples in each local machine (denoted by "\(n=|D|/m\)"). 
We vary the number of Sobol points from the sets \(\{10,20,\cdots,100,200,\cdots,500,1000,2000\}\) and \(\{10,20,\cdots,100,200,\cdots,500,1000\}\) for the 3-dimensional and 10-dimensional data, respectively, and vary the number of local machines from the set \(\{20,40,80,150,300\}\). The testing RMSEs with respect to different orders of magnitude \(n\) under different numbers of local machines are shown in Figure 7, where "\(n\) best" represents the optimal MSE corresponding to the best \(n\) chosen from the candidate set and provides a baseline for assessing the performance of the proposed method. From the results, it can be seen that the generalization performances with different orders of \(n\) are all comparable with the best \(n\) when \(m\) is large (e.g., \(m\geq 80\)). Even when \(m\) is small, we can also obtain a satisfactory result by simply varying a few different orders of magnitude of \(n\). In addition, for different numbers of local machines, the proposed method with an adaptive number of center points shows stable performance that is comparable to the best \(n\). All these results demonstrate that the proposed method is robust to the number of center points. **Simulation 3:** This simulation compares the proposed method with DKRR and DKRRLog under the condition that all training samples are uniformly distributed to local machines. The number of local machines changes from the set \(\{10,20,40,80,150,160,240,300\}\) Figure 6: Relationship between test MSE and the number of center points in local approximation using the three low-discrepancy sequences for AdaDKRR with different numbers of local machines For AdaDKRR, the number \(n\) of Sobol points is chosen from the set \(\{50,100,500,1000\}\). The results of test MSE as a function of the number of local machines are shown in Figure 8, where "DKRR-best" denotes the best performance of local machines in DKRR. Based on the above results, we have the following observations: 1) The test MSE grows as the number of local machines increases for all methods, but the growth of AdaDKRR is much slower than that of other methods. 2) When the number of local machines is small (e.g., \(m\leq 40\)), DKRR-best has the worst performance; the MSE values of DKRR are smaller than those of DKRR-best, which verifies that distributed learning can fuse the information of local machines and achieve better generalization performance than each local machine; AdaDKRR performs similarly to DKRRLog, and both of them are significantly better than DKRR, which provides evidence that it is not a good choice to select parameters only based on the data in each local machine. 3) When the number of local machines increases, the generalization performance of DKRR and DKRRLog deteriorates dramatically, even worse than that of a single local machine (e.g., \(m\geq 200\)), whereas the test MSE of AdaDKRR grows slowly and has obvious superiority to other methods. These results show that the proposed method is effective and stable in parameter selection for distributed learning. Figure 8: Comparisons of test MSE among the three parameter selection approaches with different numbers of local machines Figure 7: Relationship between test MSE and the number of Sobol points for AdaDKRR with different numbers of local machines **Simulation 4:** In this simulation, we compare the generalization performance of the three methods under non-uniform distributions of the number of training samples in local machines. 
Specifically, all training samples are randomly distributed to local machines, meaning the numbers of training samples in local machines are also random. In addition, we set the minimum number of training samples on each local machine so that cross-validation can be performed. For example, "\(R=5\)" means that the minimum number of training samples on each local machine is no less than 5. AdaDKRR uses the adaptive number of Sobol points (i.e., \(n=|D|/m\)) in local approximation for convenience. The number \(m\) of local machines varies from the set \(\{40,80,150,300\}\). For each fixed number \(m\), we compare the generalization performance in four cases, including \(R\in\{5,10,20\}\) and uniform split (denoted as "Usplit"), and the results are shown in Figure 9. Note that the case "Usplit" can be considered as "\(R=|D|/m\)". From the results, we can see that the performance differences among the three methods in each case of the non-uniform split are similar to those in the case of the uniform split, as described in Simulation 3. Additionally, the test MSE usually increases from the case "\(R=5\)" to the case "Usplit". This is because a smaller value of \(R\) means that the distribution of the number of samples is more uneven, and the generalization performance of a single local machine with a large number of training samples is much better than that of a combination of several local machines with the same total number of training samples, as distributed learning shows. Compared with DKRR and DKRRLog, AdaDKRR is more robust to the split of training samples distributed to local machines, especially for the 10-dimensional data. The above results demonstrate that AdaDKRR is suitable for distributed learning with different data sizes of participants.

Figure 9: Comparisons of test MSE among the three parameter selection approaches with different distribution patterns of data in local machines

### Real-World Applications

The mentioned parameter selection methods are tested on two real-world data sets: used car price forecasting and graphics processing unit (GPU) performance prediction for single-precision general matrix multiplication (SGEMM). Before discussing the experiments, it is important to clarify some implementation details: 1) For each data set, half of the data samples are randomly chosen as training samples, and the other half are used as testing samples to evaluate the performance of the mentioned methods. Following the typical evaluation procedure, 10 independent sets of training and testing samples are generated by running 10 random divisions on the data set. 2) For each data set, min-max normalization is performed for each attribute except the target attribute. Specifically, the minimum and maximum values of the \(i\)-th attribute of training samples are calculated and denoted by \(F_{min}^{(i)}\) and \(F_{max}^{(i)}\), respectively. The \(i\)-th attribute of samples is rescaled using the formula \(\hat{\mathbf{x}}_{i}=\left(\mathbf{x}_{i}-F_{min}^{(i)}\right)/\left(F_{max}^{(i)}-F_{min}^{(i)}\right)\), where \(\mathbf{x}_{i}\) is the \(i\)-th attribute vector. 3) Based on the numerical results provided in Simulation 3 of the previous subsection, we vary the number of Sobol points in the set \(\{10,50,100,500\}\) and record the best result for AdaDKRR; the Gaussian kernel is used for the two data sets. 4) We consider the case that all training samples are uniformly distributed to local machines.
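Implementation detail 2) above (min-max normalization with training-set statistics) corresponds to the following sketch; the array names are placeholders and the dummy data are only there to make the snippet runnable.

```python
import numpy as np

def minmax_fit(X_train):
    """Per-attribute minima F_min^(i) and maxima F_max^(i) from the training half."""
    return X_train.min(axis=0), X_train.max(axis=0)

def minmax_transform(X, F_min, F_max):
    """Rescale attribute i as (x_i - F_min^(i)) / (F_max^(i) - F_min^(i))."""
    denom = np.where(F_max > F_min, F_max - F_min, 1.0)   # guard against constant columns
    return (X - F_min) / denom

rng = np.random.default_rng(0)
X_train, X_test = rng.uniform(1, 5, size=(100, 6)), rng.uniform(1, 5, size=(100, 6))
F_min, F_max = minmax_fit(X_train)                 # statistics from training samples only
X_train_n = minmax_transform(X_train, F_min, F_max)
X_test_n = minmax_transform(X_test, F_min, F_max)  # test half reuses the same statistics
```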
#### 5.2.1 Used Car Price Forecasting As the number of private cars increases and the used car industry develops, more and more buyers are making used cars their primary choice due to their cost-effectiveness and practicality. Car buyers usually purchase used cars from private sellers and auctions aside from dealerships, and there is no manufacturer-suggested retail price for used cars. Car sellers want to reasonably evaluate the residual value of used cars to ensure sufficient profit margins. Car buyers hope the car they buy is economical, or at least they won't buy overpriced cars due to their unfamiliarity with the pricing of used cars. Therefore, pricing a used car can be regarded as a very important decision problem that is related to the success of the transaction between buyers and sellers. However, the sale price of used cars is very complicated, because it depends not only on the wear and tear of the car, such as usage time, mileage, and maintenance, but also on the performance of the car, such as brand, gearbox, and power, as well as on some social factors, such as car type, fuel type, and sale region. Sellers and buyers usually spend a lot of time and effort negotiating the price of used cars, so it is desirable to develop an effective pricing model from a collection of existing transaction data to provide a reliable reference for sellers and buyers and promote the success of transactions. Figure 10: Details of the attributes of the data set CarTianchi The used car data on Tianchi (CarTianchi for short) 2 provided by Alibaba Cloud comes from the used car transaction records of a trading platform, and its goal is to establish models to predict the price of used cars. The data set contains more than \(400,000\) samples; each sample is described by \(31\) attributes, of which \(15\) are anonymous and \(4\) are masked to protect the confidentiality of the data. The attributes are described in detail in Figure 10, and they are grouped into four categories, including car condition, performance, social factor, and transaction. Note that the \(15\) anonymous attributes are classified into the category of transaction for convenience. In the experiment, we selected a subset of \(129,710\) samples with price attributes after removing samples with missing values to train and evaluate models. We use the time difference between the sales date (i.e., the create date) and the registration date as an approximation of the usage time and remove the attributes of the sale ID, create date, and registration date. Data binning is applied to the attributes of usage time and power because their values are highly dispersed, and the details are listed in Table 1. The histogram of the target attribute of prices, as well as the skewness and kurtosis, are shown in Figure 11 (a), from which it can be seen that the data distribution does not obey the normal distribution, with a sharp peak and a long tail dragging on the right. Therefore, the target attribute of price is transformed by a logarithmic operation, and the histogram of the transformed price is close to a normal distribution, as shown in Figure 11 (b). The regularization parameter \(\lambda\) is chosen from the set \(\{\frac{1}{3^{q}}|\frac{1}{3^{q}}\geq 10^{-10},q=0,1,2,\cdots\}\), and the kernel width \(\sigma\) is chosen from \(10\) values that are drawn in a logarithmic, equally spaced interval \([1,10]\). 
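The hyperparameter grids and the logarithmic price transform described above can be written down directly; the sketch below is ours, and the exact form of the logarithm and the spacing of the kernel widths are assumptions.

```python
import numpy as np

# Regularization grid {1/3^q : 1/3^q >= 1e-10, q = 0, 1, 2, ...}
lams, lam = [], 1.0
while lam >= 1e-10:
    lams.append(lam)
    lam /= 3.0
lams = np.array(lams)

# Ten kernel widths, logarithmically equally spaced in [1, 10]
sigmas = np.logspace(np.log10(1.0), np.log10(10.0), num=10)

# Logarithmic transform of the skewed target attribute (price)
prices = np.array([1200.0, 3500.0, 800.0, 15000.0])   # dummy values
log_prices = np.log1p(prices)                          # log(1 + x); one common choice
```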
\begin{table} \begin{tabular}{c|c} \hline Attribute & Data binning \\ \hline \hline Usage time & 0: [0,90], 1: (90,180], 2: (180, 365], 3: (365,730], 4: (730,1095], 5: (1095, 1460], 6: (1460,2190], 7: (2190,3650], 8: (3650,5475], 9: (5475,+\(\infty\))) \\ \hline Power & 0: [-19.3,1931.2], 1: (1931.2,3862.4], 2: (3862.4,5793.6], 3: (5793.6,7724.8], 4: (7724.8,9656.0], 5: (9656.0,11587.2], 6: (11587.2,13518.4], 7: (13518.4,15449.6], 8: (15449.6,17380.8], 9: (17380.8,19312] \\ \hline \end{tabular} \end{table} Table 1: The details of data binning on the data set CarTianchi Figure 11: The histogram of historical transaction prices The relationship between test MSE and the number \(m\) of local machines for the compared methods is shown in Figure 12, where \(m\) varies from the set \(\{20,40,\cdots,300\}\). From the results, we can see that DKRR-best has the worst generalization performance due to the limited number of training samples in local machines; DKRR synthesizes the estimators of local machines and thus achieves better performance than each local estimator; although the logarithmic transformation on the parameters makes DKRRLog superior to DKRR, it can still be significantly improved by AdaDKRR, especially for large numbers of local machines (e.g., \(m\geq 100\)). The above results demonstrate the effectiveness of the proposed parameter selection approach in distributed learning. #### 5.2.2 SGEMM GPU Performance Prediction Over the past decade, GPUs have delivered considerable performance in multi-disciplinary areas such as bioinformatics, astronomy, and machine learning. However, it is still a challenging task to achieve close-to-peak performance on GPUs, as professional programmers must carefully tune their code for various device-specific problems, each of which has its own optimal parameters such as workgroup size, vector data type, tile size, and loop unrolling factor. Therefore, it is important to design effective GPU acceleration models to automatically perform parameter tuning with the data collected from the device. The data set SGEMM GPU(Nugteren and Codreanu, 2015) considers the running time of dense matrix-matrix multiplication \(C=\alpha A^{T}B+\beta C\), as matrix multiplication is a fundamental building block in deep learning and other machine learning methods, where \(A\in\mathbb{R}^{K\times M}\), \(B\in\mathbb{R}^{K\times N}\), \(C\in\mathbb{R}^{M\times N}\), \(M=N=K=2048\), \(\alpha\) and \(\beta\) are constants, and \(A^{T}\) is the transpose of \(A\). The data set contains \(241,600\) samples; each sample includes a possible combination of \(14\) parameters of the SGEMM kernel and \(4\) running times for this parameter combination. The \(14\) parameters and their corresponding domains are as follows: * Per-matrix 2D tiling at workgroup-level uses the parameters \(M_{wg},N_{wg}\in\{16,32,64,\)\(128\}\), which correspond to the matrix dimensions of \(M\) and \(N\), respectively. * The inner dimension of 2D tiling at workgroup-level uses the parameter \(K_{wg}\in\{16,32\}\), which corresponds to the dimension of \(K\). * Local workgroup size uses the parameters \(M_{dimC},N_{dimC}\in\{8,16,32\}\). Figure 12: Comparisons of test MSE among the three parameter selection approaches with different numbers of local machines on the data set CarTianchi * Local memory shape uses the parameters \(M_{dimA},N_{dimB}\in\{8,16,32\}\). * The kernel loop unrolling factor is denoted by \(K_{wi}\in\{2,8\}\). 
* Per-matrix vector widths for loading and storage use parameters \(M_{vec},N_{vec}\in\{1,2,4,8\}\), where \(M_{vec}\) is for matrices \(A\) and \(C\), and \(N_{vec}\) is for matrix \(B\).
* The enabling stride for accessing off-chip memory within a single thread is denoted by \(M_{stride},N_{stride}\in\{0,1\}\), where \(M_{stride}\) is for matrices \(A\) and \(C\), and \(N_{stride}\) is for matrix \(B\).
* Per-matrix manual caching of the 2D workgroup tile can be controlled by parameters \(L\$_{A},L\$_{B}\in\{0,1\}\).

In the experiment, the first 14 columns of the data are used as data input, and the average time of the 4 runs is regarded as data output. Similar to the data set CarTianchi, the distribution of the average running time has a sharp peak and a long right tail. Therefore, as suggested by Nugteren and Codreanu (2015), we also perform a logarithmic operation on the average running time. The regularization parameter \(\lambda\) is chosen from the set \(\{\frac{1}{5^{q}}|\frac{1}{5^{q}}\geq 10^{-10},q=0,1,2,\cdots\}\), and the kernel width \(\sigma\) is chosen from 10 values that are logarithmically equally spaced in the interval \([1,100]\). Figure 13 records the relationship between test MSE and the number of local machines for the compared methods. It can be seen that the performance of these methods on the data set SGEMM GPU is similar to that on the data set CarTianchi. The only difference is that DKRRLog performs extremely poorly in generalization when the number of local machines is larger than 120. This is because the transformed parameters are far from the optimal parameters in some local machines. These results provide another piece of evidence that the proposed AdaDKRR is stable and effective in selecting parameters.

Figure 13: Comparisons of test MSE among the three parameter selection approaches with different numbers of local machines on the data set SGEMM GPU

## 6 Conclusion This paper proposed an adaptive parameter selection strategy for distributed learning to address the problem of data silos. Specifically, by communicating the coefficients of fixed basis functions, we obtained a good approximation of the global estimator and thus determined the algorithm parameters for the global approximation without leaking any sensitive information about the data. From a theoretical perspective, we established optimal rates of excess generalization error for the proposed method in a framework of statistical learning theory by utilizing the idea of low-discrepancy sequences and the classical radial basis function approximation. According to the theoretical findings, as long as the number of local machines is not too large, the proposed method is similar to running KRR on the whole data. The theoretical results demonstrate the efficacy of the proposed method for parameter selection in distributed learning. From an application point of view, we also applied the proposed method to several simulations and two real-world data sets, used car price forecasting and GPU performance prediction. The numerical results verify our theoretical assertions and demonstrate the feasibility and effectiveness of the proposed method in applications.
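For reference, the following Python sketch illustrates the plain DKRR baseline that the figures above compare against: the training data are split across \(m\) local machines, each fits kernel ridge regression with a shared \((\lambda,\sigma)\), and the local predictions are averaged. This is only an illustrative sketch of the baseline, not of the AdaDKRR selection scheme itself; mapping \(\sigma\) to scikit-learn's `gamma` and using `KernelRidge`'s `alpha` for the regularization parameter are our assumptions about conventions and may differ in scaling from the paper's \(\lambda\).

```python
# A minimal sketch of the plain DKRR baseline (not the AdaDKRR selection scheme itself).
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def dkrr_predict(X, y, X_test, m, lam, sigma, seed=0):
    rng = np.random.default_rng(seed)
    parts = np.array_split(rng.permutation(len(X)), m)   # split the data across m local machines
    gamma = 1.0 / (2.0 * sigma ** 2)                     # assumed mapping from kernel width to sklearn's gamma
    preds = []
    for idx in parts:
        model = KernelRidge(alpha=lam, kernel="rbf", gamma=gamma)  # shared (lambda, sigma) on every machine
        model.fit(X[idx], y[idx])
        preds.append(model.predict(X_test))
    return np.mean(preds, axis=0)                        # synthesize the local estimators by averaging

# Example usage on synthetic data
X = np.random.rand(2000, 5)
y = np.sin(X.sum(axis=1)) + 0.1 * np.random.randn(2000)
X_test = np.random.rand(200, 5)
y_hat = dkrr_predict(X, y, X_test, m=20, lam=1 / 3 ** 5, sigma=2.0)
```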
Data silos, caused mainly by privacy and interoperability concerns, significantly limit collaboration among different organizations with similar goals. Distributed learning is a promising way to resolve data silos, but it faces difficult challenges in simultaneously satisfying the requirements of autonomy, privacy guarantees, and the collaboration necessary for good performance. This paper focuses on developing adaptive distributed kernel ridge regression (AdaDKRR), which takes into account autonomy in parameter selection, privacy through communicating only non-sensitive information, and the collaboration needed to improve performance. We demonstrate the feasibility and effectiveness of AdaDKRR through theoretical verification and comprehensive experiments. Theoretically, we show that, under some mild conditions, AdaDKRR performs comparably to running the optimal learning algorithm on the whole data.
2309.04961
Multi-modal Extreme Classification
This paper develops the MUFIN technique for extreme classification (XC) tasks with millions of labels where datapoints and labels are endowed with visual and textual descriptors. Applications of MUFIN to product-to-product recommendation and bid query prediction over several millions of products are presented. Contemporary multi-modal methods frequently rely on purely embedding-based methods. On the other hand, XC methods utilize classifier architectures to offer superior accuracies to embedding-only methods but mostly focus on text-based categorization tasks. MUFIN bridges this gap by reformulating multi-modal categorization as an XC problem with several millions of labels. This presents the twin challenges of developing multi-modal architectures that can offer embeddings sufficiently expressive to allow accurate categorization over millions of labels; and training and inference routines that scale logarithmically in the number of labels. MUFIN develops an architecture based on cross-modal attention and trains it in a modular fashion using pre-training and positive and negative mining. A novel product-to-product recommendation dataset MM-AmazonTitles-300K containing over 300K products was curated from publicly available amazon.com listings with each product endowed with a title and multiple images. On all datasets MUFIN offered at least 3% higher accuracy than leading text-based, image-based and multi-modal techniques. Code for MUFIN is available at https://github.com/Extreme-classification/MUFIN
Anshul Mittal, Kunal Dahiya, Shreya Malani, Janani Ramaswamy, Seba Kuruvilla, Jitendra Ajmera, Keng-hao Chang, Sumeet Agarwal, Purushottam Kar, Manik Varma
2023-09-10T08:23:52
http://arxiv.org/abs/2309.04961v1
# Multi-modal Extreme Classification ###### Abstract This paper develops the MUFIN technique for extreme classification (XC) tasks with millions of labels where datapoints and labels are endowed with visual and textual descriptors. Applications of MUFIN to product-to-product recommendation and bid query prediction over several millions of products are presented. Contemporary multi-modal methods frequently rely on purely embedding-based methods. On the other hand, XC methods utilize classifier architectures to offer superior accuracies to embedding-only methods but mostly focus on text-based categorization tasks. MUFIN bridges this gap by reformulating multi-modal categorization as an XC problem with several millions of labels. This presents the twin challenges of developing multi-modal architectures that can offer embeddings sufficiently expressive to allow accurate categorization over millions of labels; and training and inference routines that scale logarithmically in the number of labels. MUFIN develops an architecture based on cross-modal attention and trains it in a modular fashion using pre-training and positive and negative mining. A novel product-to-product recommendation dataset MM-AmazonTitles-300K containing over 300K products was curated from publicly available amazon.com listings with each product endowed with a title and multiple images. On all datasets MUFIN offered at least 3% higher accuracy than leading text-based, image-based and multi-modal techniques. Code for MUFIN is available at [https://github.com/Extreme-classification/MUFIN](https://github.com/Extreme-classification/MUFIN) Figure 1: Predictions on the MM-AmazonTitles-300K product-to-product recommendation task illustrate the need for accurate multi-modal retrieval. For a decorative motorcycle-shaped alarm clock as the query product, multi-modal retrieval using MUFIN was able to retrieve visually similar products such as a motorcycle-shaped pencil holder as well as visually dissimilar but related products such as a motorcycle-themed ashtray. Recovery using the visual modality alone ignored thematically linked products, instead recovering mostly motorcycle-shaped products. Textual recovery on the other hand fixated on the word “motorcycle” and started recovering accessories for actual motorcycles. ## 1 Introduction **Extreme Classification (XC).** The goal of extreme multi-label classification is to develop architectures to annotate datapoints with the most relevant _subset_ of labels from an extremely large set of labels. For instance, given a product purchased by a user, we may wish to recommend to the user the subset (i.e. one or more) of the most related products from an extremely large inventory of products. In this example, the purchased product is the datapoint and each product in the inventory becomes a potential label for that datapoint. Note that multi-label classification generalizes multi-class classification where the objective is to predict a single mutually exclusive label for a given datapoint. An example of a multi-class problem would be to assign a product to a single exclusive category in a product taxonomy. **Multi-modal XC.** An interesting XC application arises when datapoints and labels are endowed with both visual and textual descriptors. Example use cases include (1) Product-to-product recommendation [29] with products being represented using their titles and one or more images.
(2) Bid-query prediction [5] where an advertisement with visual and textual descriptions has to be tagged with the list of user queries most likely to lead to a click on that ad. (3) Identifying compatible outfits where each outfit is described using multiple images and a textual caption [37]. **Challenges in Multi-modal XC.** Existing multi-modal methods [34, 36, 12, 37] are often _embeddings-only_ i.e. categorization is done entirely using embeddings of datapoints and categories obtained from some neural architecture. However, XC research has shown that training classifiers alongside embedding architectures can offer improved results [3, 5, 41]. At the same time, existing XC research focuses mostly on text-based categorization. Bridging this gap requires architectures that offer multi-modal embeddings sufficiently expressive to perform categorization over millions of classes. Also required are routines that can train classifiers over millions of classes and still offer predictions in milliseconds as demanded by real-time applications [3, 5, 14]. This is usually possible only if training and inference scale logarithmically with the number of labels. **Contributions.** The MUFIN method targets XC tasks with millions of labels where both datapoints and labels can be endowed with visual and textual descriptors. (1) MUFIN melds a novel embedding architecture and a novel classifier architecture. The former uses multi-modal attention whereas the latter uses datapoint-label cross attention and high-capacity one-vs-all classifiers. (2) MUFIN training scales to tasks with several millions of labels by using pre-training and hard-positive and hard-negative mining. MUFIN offers predictions within 3-4 milliseconds per test point even on tasks with millions of labels. (3) This paper releases the MM-AmazonTitles-300K product-to-product recommendation dataset curated from publicly available amazon.com listings with over 300K products each having a title and multiple images. (4) MUFIN offers at least 3% higher accuracy than leading text-only, image-only and multi-modal methods on several tasks (MM-AmazonTitles-300K, A2Q-4M) including zero-shot tasks (Polyvore), indicating the superiority of not just MUFIN's classifiers but its embedding model as well. ## 2 Related Work **Large-scale Visual Categorization.** Categorization with a large number of classes has received much attention [13, 24, 11]. Early methods learnt classifiers over hand-crafted or pre-trained features such as HoG [9] (\(100K\) classes). Contemporary approaches offer superior accuracies by using task-specific representations obtained from neural architectures. Some of these [11, 24, 32, 38] eschew classifiers entirely and focus on purely embedding-based methods while others train embedding and classifier models jointly using techniques such as hierarchical soft-max, in-batch negative mining [17] and hard-negative mining [43]. However, these works do not consider multi-modal data. **Extreme Classification.** XC methods seek to learn classifiers that offer efficient prediction even with millions of labels. Earlier works used fixed or pre-trained features and learnt classifier architectures such as multi-way classification trees [18], one-vs-all classifiers [1, 14] and probabilistic label trees [15]. Recent advances [5, 6, 7, 8, 16, 19, 29, 35, 41] have introduced task-specific neural representations that are jointly learnt alongside the classifiers and offer performance boosts over embedding-only methods.
However, these mostly consider tasks with textual descriptions only. **Multi-modal Product Recommendation.** The task of recommending related products such as compatible outfits [37] has led to several multi-modal techniques that utilize product images as well as product title or category. ADDE-O [12] learns a disentangled visual representation for outfits so that an outfit with an altered category such as color or size can be recovered simply by appending the query product with a category modifier such as "blue" or "extra large". The Type-aware approach [37] learns product embeddings that respect textual product types but capture product similarity and compatibility. SCE-Net [36] learns image representations that jointly capture multiple aspects of similarity e.g. color, texture without having to learn separate feature spaces for each aspect. SSVR [34] introduces semi- and self-supervised techniques that use textual categories to regularize product image embeddings. S-VAL [20] and CSA-Net [23] perform similarity and compatibility-based retrieval focusing on using the visual modality alone or else using the textual category/type information as a black-box category. Note that none of these methods utilize classifiers and are purely embedding-based methods. Modality fusion techniques have also been explored. Early works adopted late fusion by treating modalities separately till each yielded a score whereas recent works [30] have explored early and _bottle-necked_ fusion. MUFIN performs early fusion via its multi-modal attention blocks (see Sec. 3). **Multi-Modal Learning.** Methods for multi-modal tasks such as image captioning and associated word prediction [17] have proposed embedding-only solutions (CLIP [32], VisualBERT [21]) as well as classifier architectures (IMRAM [4], M3TR [42]). MUFIN empirically outperforms CLIP and VisualBERT while IMRAM and M3TR could not scale to the datasets used in our experiments. ## 3 MUFIN MUltimodal extreme classiFIcatioN **Notation.**\(L\) is the number of labels (_e.g_. number of products available for recommendation, bid queries). \(N\) train points are presented as \(\left\{\left(X_{i},\mathbf{y}_{i}\right)\right\}_{i=1}^{N}\). Datapoint \(i\) is represented using \(m_{i}\) descriptors (textual e.g. product title and/or visual e.g. product image) as \(X_{i}=\left\{x_{i}^{1},\ldots,x_{i}^{m_{i}}\right\}\). \(\mathbf{y}_{i}\in\left\{-1,+1\right\}^{L}\) is the ground-truth label vector for datapoint \(i\), with \(y_{il}=+1\) if label \(l\in[L]\) is relevant to the datapoint \(i\) and \(y_{il}=-1\) otherwise. Each label \(l\in[L]\) is represented as \(Z_{l}=\left\{z_{l}^{1},\ldots,z_{l}^{m_{l}}\right\}\) using \(m_{l}\) textual/visual descriptors. **Motivation for MUFIN's Architecture.** MUFIN seeks to obtain an embedding \(\hat{\mathbf{x}}_{i}\in\mathbb{R}^{D}\) for every datapoint \(X_{i}\) and a classifier \(\mathbf{w}_{l}\in\mathbb{R}^{D}\) for every label \(l\in[L]\) so that \(\mathbf{w}_{l}^{\top}\hat{\mathbf{x}}_{i}\) is indicative of the relevance of label \(l\) to datapoint \(i\). Datapoints and labels each having multiple descriptors i.e. \(m_{i},m_{l}\geq 1\) present opportunities to ease this process: (1) The neural architecture used to obtain datapoint embeddings \(\hat{\mathbf{x}}_{i}\) can also be used to obtain label embeddings \(\hat{\mathbf{z}}_{l}\) that can serve as a convenient warm start when learning \(\mathbf{w}_{l}\) and has been found to accelerate training in XC methods [5, 29].
(2) Cross-talk among descriptors of a datapoint and those of a label may make the classifier's job easier by promoting affinity among related datapoint-label pairs. Alignment between descriptors of datapoint \(i\) and those of label \(l\) can be used to construct an alternate embedding \(\hat{\mathbf{x}}_{i}^{l}\) of the datapoint that is _adapted_ to the label \(l\). The goal of this _label-adapted_ embedding would be to not budge if the label \(l\) is irrelevant i.e. \(\hat{\mathbf{x}}_{i}^{l}\approx\hat{\mathbf{x}}_{i}\) if \(y_{il}=-1\), but to approach the label classifier if the label is relevant i.e. \(\hat{\mathbf{x}}_{i}^{l}\rightarrow\hat{\mathbf{w}}_{l}\) if \(y_{il}=+1\). (3) Self-talk among descriptors of the same datapoint/label allows different modalities to interact and produce superior embeddings for that datapoint/label. MUFIN adopts both bag and vector representations for labels and datapoints to let descriptors retain their identity and allow efficient classification. Attention blocks are used to implement cross-talk and self-talk. Fig. 5 shows that label-adapted embeddings learnt by MUFIN do achieve the objectives stated above by noticing that images of a datapoint appear among images of a relevant label. **Bag Embeddings.** A visual architecture \(\mathcal{E}_{V}\) is used to map visual descriptors to \(\mathbb{R}^{D}\) (MUFIN uses ViT-32 [10] with \(D=192\)). A textual architecture \(\mathcal{E}_{T}\) (MUFIN uses msmarco-distilbert-base-v4 [33] with \(D=192\)) is used to map textual descriptors to \(\mathbb{R}^{D}\). We note that both the ViT and Sentence-BERT models have a native dimensionality of 768. An adaptive maxpool 1D layer was used to project down to 192-dimensional descriptor embeddings. \(\mathcal{E}_{V},\mathcal{E}_{T}\) are shared by datapoints and labels. MUFIN maps datapoints and labels to bags of embeddings as shown in Fig. 2(a). A datapoint \(X_{i}=\left\{x_{i}^{1},\ldots,x_{i}^{m_{i}}\right\}\) is mapped to \(\hat{\mathbf{X}}_{i}^{1}=\mathcal{E}(X_{i})\in\mathbb{R}^{m_{i}\times D}\) by first encoding each descriptor of that datapoint using either \(\mathcal{E}_{V}\) or \(\mathcal{E}_{T}\), depending on whether that descriptor is visual or textual, to obtain a bag of pre-embeddings \(\hat{\mathbf{X}}_{i}^{0}\in\mathbb{R}^{m_{i}\times D}\). These are then passed through a self-attention block \(\mathcal{A}_{S}\) (an instantiation of the block depicted in Fig. 2(d)) to obtain \(\hat{\mathbf{X}}_{i}^{1}=\mathcal{A}_{S}(\hat{\mathbf{X}}_{i}^{0})=\mathcal{A} (\hat{\mathbf{X}}_{i}^{0},\hat{\mathbf{X}}_{i}^{0})\). A label \(Z_{l}=\left\{z_{l}^{1},\ldots,z_{l}^{m_{l}}\right\}\) is similarly mapped to \(\hat{\mathbf{Z}}_{l}^{1}=\mathcal{E}(Z_{l})\in\mathbb{R}^{m_{l}\times D}\). We note that the same self-attention block \(\mathcal{A}_{S}\) is used to embed both datapoints and labels. **Vector Embeddings.** MUFIN obtains vector embeddings by aggregating and normalizing the bag embeddings offered by \(\mathcal{E}\) (see Fig. 2(a)). The vector embedding for a datapoint \(i\) is obtained as \(\hat{\mathbf{x}}_{i}^{1}=\mathfrak{N}(\mathbf{1}^{\top}\hat{\mathbf{X}}_{i}^{1})\in S ^{D-1}\) where \(\mathbf{1}\in\mathbb{R}^{m_{i}}\) is the all ones vector, \(\mathfrak{N}:\mathbf{v}\mapsto\mathbf{v}/\left\|\mathbf{v}\right\|_{2}\) is the normalization operator and \(S^{D-1}\) is the unit sphere in \(D\) dimensions. Similarly \(\hat{\mathbf{z}}_{l}^{1}=\mathfrak{N}(\mathbf{1}^{\top}\hat{\mathbf{Z}}_{l}^{1})\).
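As a concrete illustration of the bag-to-vector pipeline just described, here is a minimal PyTorch sketch. The descriptor encoders \(\mathcal{E}_{V},\mathcal{E}_{T}\) are abstracted away as precomputed \(D\)-dimensional pre-embeddings, and a single multi-head attention layer stands in for the attention block of Fig. 2(d), whose internal structure is not given in the text; this is our simplification, not the paper's exact module.

```python
# Minimal sketch of bag embeddings (self-attention over descriptors) and vector embeddings
# (sum-aggregation followed by L2 normalization onto the unit sphere).
import torch
import torch.nn.functional as F

D = 192

class SelfAttentionBlock(torch.nn.Module):
    def __init__(self, dim=D, heads=4):
        super().__init__()
        # A single MultiheadAttention layer stands in for the block in Fig. 2(d) (assumption).
        self.attn = torch.nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, bag):                     # bag: (1, m, D) pre-embeddings X^0
        out, _ = self.attn(bag, bag, bag)       # self-talk among the m descriptors
        return out                              # bag embeddings X^1

def vector_embedding(bag_emb):                  # (1, m, D) -> (D,)
    v = bag_emb.sum(dim=1).squeeze(0)           # aggregate: 1^T X^1
    return F.normalize(v, dim=-1)               # normalize onto the unit sphere S^{D-1}

# Example: a datapoint with m_i = 5 descriptors (e.g. one title and four images)
A_S = SelfAttentionBlock()
pre_embeddings = torch.randn(1, 5, D)           # stand-in for the outputs of E_V / E_T
x_vec = vector_embedding(A_S(pre_embeddings))   # \hat{x}_i^1, a unit-norm vector in R^D
```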
Given a datapoint \(i\) and label \(l\), MUFIN constructs the label-adapted embedding \(\hat{\mathbf{x}}_{i}^{2,l}\) for the datapoint as shown in Fig. 2(c). A bag embedding for the datapoint adapted to the label is obtained as \(\hat{\mathbf{X}}_{i}^{2,l}=\mathcal{A}_{C}(\hat{\mathbf{X}}_{i}^{1},\hat{ \mathbf{Z}}_{l}^{1})\) where \(\hat{\mathbf{X}}_{i}^{1}=\mathcal{E}(X_{i}),\hat{\mathbf{Z}}_{l}^{1}=\mathcal{E }(Z_{l})\) using a cross-attention block \(\mathcal{A}_{C}\) (Fig. 2(d)), which is vectorized to yield the label-adapted vector embedding \(\hat{\mathbf{x}}_{i}^{2,l}=\mathfrak{N}(\mathbf{1}^{\top}\hat{\mathbf{X}}_{i}^{2,l})\). Note that \(\mathcal{A}_{S}\) and \(\mathcal{A}_{C}\) do not share parameters. **Scoring Model and Label Classifiers.** Given a datapoint \(i\) and a label \(l\in[L]\), MUFIN assigns a relevance score by taking a dot product of the adapted vector embedding of the datapoint \(\hat{\mathbf{x}}_{i}^{2,l}\) with the classifier vector \(\mathbf{w}_{l}\) constructed as shown in Fig. 2(b) by linearly combining the (normalized) vector embedding for the label \(\hat{\mathbf{z}}_{l}^{1}\) with a normalized free vector \(\mathfrak{N}(\mathbf{\eta}_{l})\). The free vector \(\mathbf{\eta}_{l}\) and the combination weight \(\alpha_{l}\in[0,1]\) are learnt independently per label. ### Modular Training with MUFIN **Trainable Parameters.** The encoder blocks \(\mathcal{E}_{V},\mathcal{E}_{T}\), the attention blocks \(\mathcal{A}_{S},\mathcal{A}_{C}\), the free vectors and weights \(\mathbf{\eta}_{l},\alpha_{l},l\in[L]\) for the label classifiers were trained. MUFIN adopted a training strategy first proposed in the DeepXML paper [7] that breaks training into 4 distinct modules. **Module I: Pre-training.** In this module, only the encoders \(\mathcal{E}_{V},\mathcal{E}_{T}\) and the self-attention block \(\mathcal{A}_{S}\) were trained in a Siamese fashion. The cross-attention block \(\mathcal{A}_{C}\) was bypassed i.e. \(\hat{\mathbf{x}}_{i}^{2,l}=\hat{\mathbf{x}}_{i}^{1}\) and \(\alpha_{l}\) was set to \(0\) for all \(l\in[L]\) so as to also exclude the free vectors. A pre-trained ViT-32 model [10] was used to initialize \(\mathcal{E}_{V}\) and its final layer was fine-tuned during training. A pre-trained Sentence-BERT model (msmarco-distilbert-base-v4) [33] was used to initialize \(\mathcal{E}_{T}\) and was fine-tuned end-to-end during training. The transformation layers \(Q,K,V,O\) in \(\mathcal{A}_{S}\) were initialized to identity. Datapoints and labels were represented by their vector embeddings i.e. \(\hat{\mathbf{x}}_{i}^{1}\) and \(\hat{\mathbf{z}}_{l}^{1}\) respectively. Training encouraged \(\hat{\mathbf{x}}_{i}^{1}\) and \(\hat{\mathbf{z}}_{l}^{1}\) to approach each other for related pairs and to repel each other for unrelated pairs. Mini-batches \(B\) were created over labels instead of datapoints by sampling labels randomly. This was observed to improve performance over rare labels [28, 5]. Training with respect to all \(N\) datapoints for each label would have resulted in an \(\Omega\left(NL\right)\) epoch complexity that is infeasible when \(N,L\) are both in the millions. Thus, a set \(\mathcal{P}_{l}\) of _hard-positive_ datapoints for each label \(l\in B\) was chosen among the set \(\left\{i:y_{il}=+1,\left\langle\hat{\mathbf{z}}_{l}^{1},\hat{\mathbf{x}}_{i}^{1}\right\rangle \leq 0.9\right\}\) since positive datapoints too similar to the label i.e. \(\left\langle\hat{\mathbf{z}}_{l}^{1},\hat{\mathbf{x}}_{i}^{1}\right\rangle>0.9,y_{il}=+1\) would yield vanishing gradients.
In-batch negative sampling was also done by selecting for each label \(l\in B\), a set \(\mathcal{N}_{l}\) of _hard-negative_ datapoints among positive datapoints of other labels in the same minibatch. Hard-positive and negative mining was found to accelerate training by focusing on those label-datapoint pairs that gave most prominent gradients. The following contrastive loss was used to train \(\mathcal{E}_{V},\mathcal{E}_{T}\) and \(\mathcal{A}_{S}\) using mini-batches \(B\) over labels: \[\sum_{l\in B}\sum_{i\in\mathcal{P}_{l}}\sum_{j\in\mathcal{N}_{l}}\left[\left\langle \hat{\mathbf{z}}_{l}^{1},\hat{\mathbf{x}}_{j}^{1}\right\rangle-\left\langle\hat{\mathbf{z }}_{l}^{1},\hat{\mathbf{x}}_{i}^{1}\right\rangle+\gamma\right]_{+}\] MUFIN used \(\gamma=0.2,\left|\mathcal{P}_{l}\right|=2,\left|\mathcal{N}_{l}\right|=3\). Capping the sizes of the sets to \(\left|\mathcal{P}_{l}\right|,\left|\mathcal{N}_{l}\right|\leq\mathcal{O}\left( 1\right)\) ensured that an epoch complexity of \(\mathcal{O}\left(L\right)\) instead of \(\Omega\left(LN\right)\). **Module II: Augmented Retrieval.** The label-wise training strategy adopted by Module I is sympathetic to rare labels but is not aligned to final prediction where labels need to be predicted for datapoints, not the other way round. Moreover, in-batch negative mining is inexpensive but may offer inferior convergence [40]. To accelerate subsequent training, a set of \(\mathcal{O}\left(\log L\right)\) most promising labels was retrieved for each datapoint. The irrelevant labels in this set would form hard-negatives for subsequent training. MUFIN improved retrieval by exploiting multiple descriptors for each label. After Module I, datapoint vector and label bag embeddings i.e. \(\hat{\mathbf{x}}_{i}^{1},\hat{\mathbf{Z}}_{l}^{1}\) were re-computed. Label centroid vectors [5] were created as \(\hat{\mathbf{\mu}}_{l}=\text{mean}\left\{\hat{\mathbf{x}}_{i}^{1}:y_{il}=+1\right\}\). An ANNS (approximate nearest neighbor search) structure NN\({}^{x}\)[20] was created over the set of \(\sum_{l\in[L]}(m_{l}+1)\) vectors \(\bigcup_{l\in[L]}\hat{\mathbf{Z}}_{l}^{1}\cup\left\{\hat{\mathbf{\mu}}_{l}\right\}\) with each vector recording the identity of the label to which it belonged. ANNS queries of the form NN\({}^{x}\)\((\hat{\mathbf{x}}_{i}^{1})\) were then fired to retrieve for each datapoint \(i\), a set \(R_{i}\) of \(\mathcal{O}\left(\log L\right)\leq 100\) unique labels. Negative labels in this set i.e. \(\left\{l\in R_{i}:y_{il}=-1\right\}\) were well-suited to serve as hard-negative labels for the datapoint \(i\). Ablations in Sec. 4 show that this technique offers superior performance than if the ANNS structure NN\({}^{x}\) were to be created over vector representations \(\hat{\mathbf{z}}_{l}^{1}\) of the labels instead. Note that we could have fired bag-queries on the datapoint side as well i.e. fire \(m_{i}\) ANNS queries for datapoint \(i\), one for each element of the datapoint bag \(\hat{\mathbf{X}}_{i}^{1}=\mathcal{E}(X_{i})\). However that would substantially increase retrieval time by a factor of \(m_{i}\) (\(m_{i}\approx 5\) for MM-Amazon-300K) and was thus avoided. Instead, the approach adopted by MUFIN ensures superior retrieval at the cost of a single ANNS query per datapoint. **Module III: Pre-training to Fine-tuning Transfer.** The encoders \(\mathcal{E}_{V},\mathcal{E}_{T}\) and the self-attention block \(\mathcal{A}_{S}\) were initialized to their values after Module I training. 
The transformation layers \(Q,K,V,O\) within \(\mathcal{A}_{C}\) are initialized to identity. The free vectors \(\mathbf{\eta}_{l}\) were all offered uniform Xavier initialization and \(\alpha_{l}=0.5\) was initialized for all labels \(l\in[L]\). **Module IV: Fine-tuning.**\(\mathcal{E}_{V},\mathcal{E}_{T},\mathcal{A}_{S}\) were further fine-tuned whereas \(\mathcal{A}_{C},\mathbf{\eta}_{l},l\in[L]\) and \(\alpha_{l},l\in[L]\) were trained from scratch. Mini-batches \(B\) were created over datapoints to align with the final prediction task. For each \(i\in B\), a set \(\mathcal{S}_{i}\) of random positive labels was chosen. A set \(\mathcal{T}_{i}\) of hard-negative labels was chosen among negative labels in the shortlist \(R_{i}\) constructed in Module II. A datapoint \(i\) was represented using label-adapted embeddings w.r.t. the positive and hard-negative labels shortlisted for them i.e. \(\left\{\hat{\mathbf{x}}_{i}^{2,l}:l\in\mathcal{S}_{i}\cup\mathcal{T}_{i}\right\}\). Labels were represented using their classifier vectors \(\mathbf{w}_{l}\) (see Fig. 2(b)). The following cosine embedding loss was used to train \(\mathcal{E}_{V},\mathcal{E}_{T},\mathcal{A}_{S},\mathcal{A}_{C}\) and \(\mathbf{\eta}_{l},\alpha_{l},l\in[L]\) using mini-batches \(B\) of datapoints: \[\sum_{i\in B}\left\{\sum_{l\in\mathcal{S}_{i}}\left(1-\left\langle\hat{\mathbf{x }}_{i}^{2,l},\mathbf{w}_{l}\right\rangle\right)+\sum_{k\in\mathcal{T}_{i}}\left[ \left\langle\hat{\mathbf{x}}_{i}^{2,k},\mathbf{w}_{k}\right\rangle-\gamma\right]_{+}\right\}\] MUFIN used \(\gamma=0.5,|\mathcal{S}_{i}|=2,|\mathcal{T}_{i}|=12\). Capping the set sizes to \(|\mathcal{S}_{i}|,|\mathcal{T}_{i}|\leq\mathcal{O}\left(\log L\right)\) ensured that an epoch complexity of \(\mathcal{O}\left(N\log L\right)\) instead of \(\Omega\left(NL\right)\). **Prediction with MUFIN.** Given a test point \(X_{t}\) with \(m_{t}\) descriptors \(X_{t}=\left\{x_{t}^{1},\ldots,x_{t}^{m_{t}}\right\}\), its vector representation \(\hat{\mathbf{x}}_{t}^{1}=\mathfrak{N}\left(\mathbf{1}^{\top}\mathcal{E}(X_{t})\right)\) is used to query the ANNS structure and perform augmented retrieval of labels to yield a shortlist \(R_{t}=\text{NN}^{x}(\hat{\mathbf{x}}_{t}^{1})\) of \(100\leq\mathcal{O}\left(\log L\right)\) labels. For each retrieved label \(l\in R_{t}\), a _similarity_ score is assigned as \(a_{tl}\stackrel{{\text{def}}}{{=}}\max\ \left\langle\hat{\mathbf{x}}_{t}^{1},\mathbf{v} \right\rangle,\mathbf{v}\in\hat{\mathbf{Z}}_{l}^{1}\cup\left\{\hat{\mathbf{\mu}}_{l}\right\}\) (recall that in augmented retrieval, each label \(l\in[L]\) contributes \(m_{l}+1\) entries to the ANNS structure). Vector representations for \(X_{t}\) adapted to all shortlisted labels i.e. \(\left\{\hat{\mathbf{x}}_{t}^{2,l},l\in R_{t}\right\}\) are computed and the corresponding label classifiers applied to yield _classifier_ scores \(c_{il}\stackrel{{\text{def}}}{{=}}\left\langle\mathbf{w}_{l},\hat{ \mathbf{x}}_{t}^{2,l}\right\rangle\) for each \(l\in R_{t}\). The classifier and similarity scores are then combined linearly as \(s_{tl}=\beta\cdot c_{tl}+(1-\beta)\cdot a_{tl}\). A fixed value of \(\beta=0.7\) was used. Final predictions are made in descending order of the scores \(s_{tl}\). The prediction time complexity of MUFIN is derived in App. A in the supplementary. 
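To make the prediction-time scoring above concrete, the following sketch combines the similarity and classifier scores for a shortlist of labels as described (with \(\beta=0.7\)). The tensors are random stand-ins for the embeddings, classifiers and shortlist, so this is an illustration of the score combination only, not the paper's released implementation.

```python
# Minimal sketch of MUFIN's score combination at prediction time: for each shortlisted label,
# a similarity score a_tl (best match against the label's bag/centroid vectors) and a
# classifier score c_tl (classifier applied to the label-adapted embedding) are blended.
import torch
import torch.nn.functional as F

beta = 0.7

def score_shortlist(x_vec, x_adapted, label_vecs, classifiers):
    """x_vec: (D,) test-point embedding; x_adapted: (S, D) label-adapted embeddings for the
    S shortlisted labels; label_vecs: list of S tensors, each (m_l + 1, D), holding a label's
    bag embeddings plus its centroid; classifiers: (S, D) classifier vectors w_l."""
    a = torch.stack([(V @ x_vec).max() for V in label_vecs])   # similarity scores a_tl
    c = (classifiers * x_adapted).sum(dim=-1)                  # classifier scores c_tl
    s = beta * c + (1 - beta) * a                              # blended scores s_tl
    return torch.argsort(s, descending=True), s                # labels ranked by score

# Example with S = 100 shortlisted labels and D = 192 (random stand-ins)
D, S = 192, 100
ranked, scores = score_shortlist(
    F.normalize(torch.randn(D), dim=0),
    F.normalize(torch.randn(S, D), dim=-1),
    [F.normalize(torch.randn(6, D), dim=-1) for _ in range(S)],
    F.normalize(torch.randn(S, D), dim=-1),
)
```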
**Handling unseen labels with MUFIN (\(\mathbf{\alpha=1}\)).** A variant dubbed "MUFIN (\(\alpha=1\))" was developed to handle unseen labels (for which supervision was not available during training) by setting \(\alpha_{l}=1\) for all \(l\in[L]\). This causes MUFIN to start using the vector label representation itself as the classifier i.e. \(\mathbf{w}_{l}\equiv\hat{\mathbf{z}}_{l}^{1}\) and give relevance scores of the form \(\left\langle\hat{\mathbf{z}}_{l}^{1},\hat{\mathbf{x}}_{t}^{2,l}\right\rangle\). Note that the cross-attention block \(\mathcal{A}_{C}\) can still be applied to yield adapted datapoint embeddings \(\hat{\mathbf{x}}_{t}^{2,l}\) even w.r.t. unseen labels. The variant MUFIN (\(\alpha=1\)) sets \(\alpha_{l}=1\) for all labels \(l\in[L]\) to ensure consistency. ## 4 Experimental Results **Datasets.** Dataset construction details are given in App. B in the supplementary [link]. Tab. 1 presents dataset statistics. The Polyvore FITB task relies on precomputed shortlists for each query and does not present a satisfactory benchmark for multi-modal XC methods where the goal is to retrieve results directly from a catalog of millions of labels. Other multimodal datasets [22, 25] were found similarly lacking. Thus, two other tasks were considered. The A2Q-4M dataset presents a heterogeneous task where datapoints have multi-modal descriptors but labels are purely textual. The MM-AmazonTitles-300K dataset also presents occasional datapoints/labels with either the text or vision modality missing entirely. Thus, these tasks demand that the architecture be resilient to missing modes. _MM-AmazonTitles-300K_: An XC product-to-product recommendation dataset was curated from an Amazon click dump [31]. Given a query product, the task is to retrieve the subset of the most relevant products from a catalog of over 300K unique products. Each product is represented by a title and up to 15 images. This dataset has been released at the The Extreme Classification Repository [2][link]. _A2Q-4M_: A large bid-query prediction task was mined from the internal click logs of the Bing search engine. Given an ad as a datapoint represented by an image and textual description, the task is to predict the subset of user queries (textual) most likely to lead to a click on that ad. _Polyvore-Disjoint_: Polyvore is a popular fashion website where users can create outfit compositions [37]. The Fill-In-The-Blank (FITB) task requires the most compatible outfit to be chosen from a pre-computed shortlist given an incomplete _query_ outfit with 4-5 images and short captions. **Baselines.** Due to lack of space, a detailed discussion on the baselines is provided in App. C of the supplementary. _MM-AmazonTitles-300K_: MUFIN was compared to leading text-based XC methods [5, 18, 27, 29, 39, 41] and leading multi-modal methods CLIP [32] and VisualBert [21] that employ cross-modal pre-training to embed related items (_e.g_. an image and its associated caption) closeby. AttentionXML [41] employs label-specific datapoint representations similar to MUFIN. SiameseXML was augmented to utilize a DistilBERT architecture similar to MUFIN instead of the bag-of-embeddings model used in [5]. The details of the augmentation are given in App. C. CLIP and VisualBert use the ViT and Resnet-101 image encoders respectively. To offer a fair comparison, pre-trained encoders for these methods were injected into MUFIN's training pipeline and afforded the same self-attention, cross attention and classifier architectures. Tab. 
2 shows that MUFIN outperformed even augmented versions of these algorithms. _A2Q-4M_: Multi-modal baseline methods struggled to scale to this dataset so comparisons were made only to the leading text-based method SiameseXML [5]. _Polyvore-Disjoint_: MUFIN was compared to leading methods including ADDE-O [12], CSA-Net [23], Type-aware [37], S-VAL [20], SCE-Net average [36] and SSVR [34]. Since this dataset offers only unseen labels as recommendation candidates at test time, only the MUFIN (\(\alpha=1\)) variant was executed for fair comparison. **Evaluation Metrics.** Standard XC metrics _e.g_. area under the curve (AUC), precision (P@\(k\)), nDCG (N@\(k\)), and recall (R@\(k\)) were used for the MM-AmazonTitles-300K and A2Q-4M tasks. Classification accuracy was used for the multi-class Polyvore-Disjoint task as is standard [12, 23, 37]. **Hyperparameters.** MUFIN uses ViT-32 [10] as the image encoder \(\mathcal{E}_{V}\) with \(32\times 32\) patches and the msmarco-distilbert-base-v4 architecture [33] as the text encoder \(\mathcal{E}_{T}\). The AdamW optimizer with a one-cycle cosine scheduler with warm start of 1000 iterations was used. MUFIN could train on a 24-core Intel Skylake 2.4 GHz machine with 4 V100 GPUs within 48 hrs on the A2Q-4M dataset. See App. D in the supplementary for hyperparameter details. ### Results and Discussion **MM-AmazonTitles-300K.** Tab. 2 shows MUFIN gave 3.6-11% higher P@1 than text-based XC methods. MUFIN is 5.5% better in P@1 than AttentionXML [41] that also employs label-specific datapoint representations. MUFIN is also 3.5% more accurate than SiameseXML that uses a DistilBERT encoder similar to MUFIN. This indicates the benefit of melding multi-modal information with high-capacity classifier architectures. MUFIN's lead is similarly high in terms of other metrics such as P@5 and R@10. MUFIN also gave 3.2-12% higher P@1 than multi-modal methods CLIP and VisualBERT. It is notable that the methods being compared to are variants of CLIP and VisualBERT that were offered MUFIN's attention modules and training strategies. App. E shows that MUFIN's lead over these methods could be as high as 30% if they are not offered these augmentations. This highlights the utility of MUFIN's task-specific pre-training in Module I. **A2Q-4M.** MUFIN could train on this dataset with 9M training points within 48 hrs on 4\(\times\)V100 GPUs. MUFIN achieved 47.56% P@1 compared to 44.46% P@1 by SiameseXML. MUFIN also offered predictions within 4 milliseconds per test datapoint on a single V100 GPU. MUFIN is able to scale to tasks with several millions of labels offering prediction times suitable for real-time applications. **Polyvore-Disjoint.** Tab. 3 presents results on the FITB task where MUFIN (\(\alpha=1\)) could be 3-4% more accurate than the next-best method. MUFIN is encoder-agnostic and continues to outperform existing methods even if MUFIN replaces its \(\mathcal{E}_{V}\) with Resnet18 (used by ADDE-O [12]). **Category-wise Analysis.** To analyze the gains offered by MUFIN, the performance of various algorithms was considered on the 20 unique categories of 300K products in the MM-AmazonTitles-300K dataset. Fig. 3 shows that MUFIN's multi-modal recommendations are 2-6% more accurate on all popular categories than SiameseXML (that used the same text encoder \(\mathcal{E}_{T}\) as MUFIN). Tab. 6 and Fig. 
8 in the supplementary present qualitative results that show that the trend persists and MUFIN offers superior performance to competing methods on almost all categories irrespective of the popularity of the category.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline Dataset & Train Datapoints & Labels & Test Instances & Average Labels & Average Tokens & Average Images \\ & \(N\) & \(L\) & \(N^{\prime}\) & per datapoint & per datapoint & per datapoint \\ \hline Polyvore-Disjoint & 16,995 & - & 15,145 & 1 & 27.31 & 4 \\ \hline MM-AmazonTitles-300K & 586,781 & 303,296 & 260,536 & 8.13 & 20.41 & 4.91 \\ \hline A2Q-4M & 9,618,490 & 4,528,191 & 3,933,149 & \(\ddagger\) & \(\ddagger\) & \(\ddagger\) \\ \hline \hline \end{tabular} \end{table} Table 1: Statistics for datasets used to benchmark MUFIN. For Polyvore-Disjoint, a ‘-’ indicates that only unseen labels were available for recommendation at test time. For A2Q-4M, a \(\ddagger\) indicates numbers redacted for the proprietary dataset.

**Label popularity.** To analyze MUFIN's performance on rare and popular labels, labels were divided into 5 _equi-voluminous_ bins of increasing label frequency such that each bin had an equal number of datapoint-label pairs from the ground truth. Fig. 4 shows that MUFIN and MUFIN (\(\alpha=1\)) outperform the baseline methods across all bins. **Impact of Cross Attention (\(\mathcal{A}_{C}\)).** Fig. 5a shows the cross-attention heat map generated by \(\mathcal{A}_{C}\) between a datapoint \(i\) and a relevant label in the retrieved shortlist \(R_{i}\) for that datapoint. MUFIN was able to match a chair in a datapoint image [Image 5] to a similar chair in the background of a label image [Image 3] (magnified in Fig. 5b). Fig. 5c shows that for a given datapoint (\(\widehat{\mathbf{x}}^{1}\)), cross-attention allowed MUFIN to generate a label-adapted datapoint representation (\(\widehat{\mathbf{x}}^{2,+}\)) that is embedded close to the relevant label (\(\mathbf{w}_{+}\)). However, the label-adapted representation of the same datapoint (\(\widehat{\mathbf{x}}^{2,-}\)) is unmoved for an irrelevant label (\(\mathbf{w}_{-}\)). Label-adapted representations allow MUFIN to boost the score for relevant labels and rank them higher. **Label semantics.** Type-aware methods [12, 23, 37] have demonstrated that explicitly incorporating label categories while training can improve model accuracy. However, in XC settings with millions of labels, label hierarchies are often unavailable or incomplete [2]. Fig. 6 depicts the label classifiers (\(\mathbf{w}_{l}\)) learnt by MUFIN using t-SNE representations. MUFIN could identify category-based relationships among labels without any explicit feedback on cluster identity. MUFIN's clusters exhibit sub-clustering that can be attributed to the fact that labels belonging to a category can be further grouped into sub-categories, _e.g._ _"Home and Kitchen"_ can be further clustered into _"Furniture"_ and _"Utensils"_. Thus MUFIN draws its gains on diverse label types (Figs. 3 and 4) by deducing label-datapoint relationships from multi-modal information (Figs. 5 and 6). ### Ablation This section investigates design choices made by MUFIN for its key components - sampling, retrieval, representation, and ranker (score). Tab. 4 summarizes the ablation results. The ablation experiments have been explained in detail in App. F in the supplementary [link].
**Sampling.** Removing hard +ve sampling (MUFIN-no +ve) causes a 1% drop in P@5. Removing hard -ve and +ve sampling (MUFIN-no +ve, -ve) leads to a 1.5% drop in P@5. **Retrieval.** Recall from Sec. 3 that retrieval of label shortlists \(R_{i}\) could have been done over label embeddings \(\hat{\mathbf{z}}_{l}^{1}\) or bag embeddings \(\hat{\mathbf{Z}}_{l}^{1}\) of the labels. The augmented retrieval strategy of MUFIN (MUFIN-P-I-bag) can be 0.3% and 1% more accurate in R@10 and P@1 as compared to retrieval based on vector embeddings \(\hat{\mathbf{z}}_{l}^{1}\) alone (MUFIN-P-I-vec). **Representation.** MUFIN was 3-4% more accurate than the MUFIN-ConCat variant that concatenated \(\hat{\mathbf{x}}^{1},\hat{\mathbf{z}}_{l}^{1}\) followed by two feed-forward layers instead of using the cross-attention block \(\mathcal{A}_{C}\).

Figure 4: Analyzing the performance of MUFIN and other methods on popular vs. rare labels. Labels were divided into 5 bins in increasing order of popularity (left-to-right). The plots show the overall R@10 of each method (histogram group “complete”) and the contribution of each bin to this value. The results indicate that MUFIN’s performance on popular labels (histogram group 1) does not come at the cost of performance in rare labels. Other methods seem to exhibit a trade-off between rare and popular labels.

\begin{table} \begin{tabular}{l c} \hline \hline **Methods** & **FITB Accuracy** \\ \hline **MUFIN (\(\alpha=1\))** & **64.17** \\ \hline **MUFIN (Resnet18, \(\alpha=1\))** & **61.63** \\ \hline \hline **Visual + Textual** & \\ \hline \hline ADDE-O [12] & 60.53 \\ \hline Type-aware [37] & 55.65 \\ \hline SCE-Net average [36] & 53.67 \\ \hline SSVR [34] & 51.5 \\ \hline \hline **Visual** & \\ \hline CSA-Net [23] & 59.26 \\ \hline S-VAL [20] & 54.3 \\ \hline \hline \end{tabular} \end{table} Table 3: Results on the FITB task on Polyvore-Disjoint. MUFIN is 3-4% more accurate compared to the next best method.

Figure 3: MUFIN outperforms baseline methods on almost all categories. Only the top 5 categories are shown here to avoid clutter. Tab. 6 in the supplementary contains results on all categories.

Figure 5: The impact of multi-modal cross-attention in MUFIN. Fig. 5(a) shows the cross-attention heat map between the datapoint \(X\) and a relevant label \(Z_{+}\). Fig. 5(b) shows that cross-attention is able to identify objects in \(X\) among objects in the background of the relevant label \(Z_{+}\). Fig. 5(c) shows how this helps boost the scores assigned to relevant labels. \(\hat{\mathbf{x}}^{1}\) represents the non-adapted vector embedding of the datapoint \(X\). \(\mathbf{w}_{+}\) and \(\mathbf{w}_{-}\) represent the classifier vectors for the relevant and irrelevant labels \(Z_{+}\) and \(Z_{-}\) respectively. Similarly, \(\hat{\mathbf{x}}^{2,+}\) and \(\hat{\mathbf{x}}^{2,-}\) represent the vector embedding of the datapoint adapted to \(Z_{+}\) and \(Z_{-}\) respectively. Notice how adaptation moves \(\hat{\mathbf{x}}^{2,+}\) closer to \(\mathbf{w}_{+}\), allowing the relevant label \(Z_{+}\) to get a higher score. On the other hand, adaptation has no effect when done with respect to an irrelevant label (note that \(\hat{\mathbf{x}}^{1}\) and \(\hat{\mathbf{x}}^{2,-}\) do indeed almost overlap). Fig. 5(c) was plotted by projecting the vectors \(\hat{\mathbf{x}}^{1},\mathbf{w}_{+},\mathbf{w}_{-},\hat{\mathbf{x}}^{2,+},\hat{\mathbf{x}}^{2,-}\) onto \(\mathbb{R}^{2}\) using a t-SNE embedding.
Removing the self-attention block from Modules I-IV (MUFIN-no \(\mathcal{A}_{S}\)) led to a 1.6% drop in P@5. **Ranker.** MUFIN's novel scoring architecture can be upto 1.5% more accurate in terms of P@5 than variants that either exclude the cross-attention block (MUFIN-no \(\mathcal{A}_{C}\)) or the one-vs-all classifiers (MUFIN-(\(\alpha=1\))). The ablations show that MUFIN's design choices with respect to hard +ve, -ve sampling, self- and cross-attention, and one-vs-all classifiers, each offer performance boosts. **Dataset and Supplementary Material.** The MM-AmazonTitles-300K dataset can be downloaded at [http://manikvarma.org/downloads/XC/XMLRepository.html](http://manikvarma.org/downloads/XC/XMLRepository.html). MUFIN pseudocode, implementation details, additional results and discussions on limitations of MUFIN, ethical considerations and future work are presented in the supplementary at [http://manikvarma.org/pubs/mittal22-supp.pdf](http://manikvarma.org/pubs/mittal22-supp.pdf). ## Acknowledgements The authors thank the reviewers for helpful comments. The authors are grateful to the High-Performance Computing facility and staff at IIT Delhi. AM is supported by a Google PhD Fellowship. AM, KD, and SA acknowledge funding from a Microsoft Research India grant on Extreme Classification. PK is supported by Microsoft Research India via consultancy grant no MRLIPL/CS/2021296. \begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Ablation**} & \multicolumn{2}{c}{**P@1/N**} & \multirow{2}{*}{**P@5**} & \multirow{2}{*}{**N@5**} & \multirow{2}{*}{**R@10**} \\ & **N@1** & & & \\ \hline **MUFIN** & **52.3** & **34.76** & **50.46** & **50.63** \\ \hline \multicolumn{5}{c}{**Sampling**} \\ \hline MUFIN-no +ve & 50.35 & 33.71 & 48.91 & 49.19 \\ \hline MUFIN-no +ve, -ve & 49.69 & 33.33 & 47.9 & 48.76 \\ \hline \multicolumn{5}{c}{**Retrieval**} \\ \hline MUFIN-P-I-bag & 42.72 & 28.8 & 42.03 & 44.49 \\ \hline MUFIN-P-I-vec & 41.71 & 28.26 & 41.31 & 44.2 \\ \hline \multicolumn{5}{c}{**Representation**} \\ \hline MUFIN-ConCat & 49.61 & 32.89 & 47.87 & 47.97 \\ \hline MUFIN-no \(\mathcal{A}_{S}\) & 49.98 & 33.16 & 48.11 & 48.11 \\ \hline \multicolumn{5}{c}{**Ranker**} \\ \hline MUFIN-no \(\mathcal{A}_{C}\) & 50.22 & 33.87 & 49.03 & 49.68 \\ \hline MUFIN-(\(\alpha=1\)) & 49.25 & 33.19 & 48.53 & 49.87 \\ \hline \hline \end{tabular} \end{table} Table 4: An ablation study exploring alternate architecture and training choices for MUFIN. Choices made by MUFIN could lead to 3-4% gain in P@1 and 1-2% gain in R@10 than alternatives. Figure 6: t-SNE representations for classifiers \(\mathbf{w}_{l},l\in[L]\) learnt by MUFIN show that labels belonging to the same category are clustered together. Only the 5 most popular categories are shown.
This paper develops the MUFIN technique for extreme classification (XC) tasks with millions of labels, where datapoints and labels are endowed with visual and textual descriptors. MUFIN is applied to product-to-product recommendation and bid query prediction over several million products. Contemporary multi-modal methods mostly rely on embedding-based methods. On the other hand, XC methods use classifier architectures to achieve higher accuracy than embedding-only methods, but mostly focus on text-based categorization tasks. MUFIN bridges this gap by reformulating multi-modal categorization as an XC problem. This presents the twin challenges of developing multi-modal architectures that can produce embeddings expressive enough to categorize accurately over millions of labels, and of developing training and inference routines that scale logarithmically in the number of labels. MUFIN develops an architecture based on cross-modal attention.
2307.16611
On the Kohayakawa-Kreuter conjecture
Let us say that a graph $G$ is Ramsey for a tuple $(H_1,\dots,H_r)$ of graphs if every $r$-coloring of the edges of $G$ contains a monochromatic copy of $H_i$ in color $i$, for some $i \in [r]$. A famous conjecture of Kohayakawa and Kreuter, extending seminal work of R\"odl and Ruci\'nski, predicts the threshold at which the binomial random graph $G_{n,p}$ becomes Ramsey for $(H_1,\dots,H_r)$ asymptotically almost surely. In this paper, we resolve the Kohayakawa-Kreuter conjecture for almost all tuples of graphs. Moreover, we reduce its validity to the truth of a certain deterministic statement, which is a clear necessary condition for the conjecture to hold. All of our results actually hold in greater generality, when one replaces the graphs $H_1,\dots,H_r$ by finite families $\mathcal{H}_1,\dots,\mathcal{H}_r$. Additionally, we pose a natural (deterministic) graph-partitioning conjecture, which we believe to be of independent interest, and whose resolution would imply the Kohayakawa-Kreuter conjecture.
Eden Kuperwasser, Wojciech Samotij, Yuval Wigderson
2023-07-31T12:38:29
http://arxiv.org/abs/2307.16611v1
# On the Kohayakawa-Kreuter conjecture ###### Abstract. Let us say that a graph \(G\) is Ramsey for a tuple \((H_{1},\ldots,H_{r})\) of graphs if every \(r\)-coloring of the edges of \(G\) contains a monochromatic copy of \(H_{i}\) in color \(i\), for some \(i\in[\![r]\!]\). A famous conjecture of Kohayakawa and Kreuter, extending seminal work of Rodl and Rucinski, predicts the threshold at which the binomial random graph \(G_{n,p}\) becomes Ramsey for \((H_{1},\ldots,H_{r})\) asymptotically almost surely. In this paper, we resolve the Kohayakawa-Kreuter conjecture for almost all tuples of graphs. Moreover, we reduce its validity to the truth of a certain deterministic statement, which is a clear necessary condition for the conjecture to hold. All of our results actually hold in greater generality, when one replaces the graphs \(H_{1},\ldots,H_{r}\) by finite families \(\mathcal{H}_{1},\ldots,\mathcal{H}_{r}\). Additionally, we pose a natural (deterministic) graph-partitioning conjecture, which we believe to be of independent interest, and whose resolution would imply the Kohayakawa-Kreuter conjecture. EK, WS, and YW are supported by ERC Consolidator Grant 101044123 (RandomHypGra), by Israel Science Foundation Grant 2110/22, and by NSF-BSF Grant 2019679. YW is additionally supported by ERC Consolidator Grant 863438 (LocalGlobal). ## 1. Introduction ### Symmetric Ramsey properties of random graphs Given graphs \(G\) and \(H_{1},\ldots,H_{r}\), one says that \(G\) is _Ramsey for the tuple_\((H_{1},\ldots,H_{r})\) if, for every \(r\)-coloring of the edges of \(G\), there is a monochromatic copy of \(H_{i}\) in some color \(i\in[\![r]\!]\). In the symmetric case \(H_{1}=\cdots=H_{r}=H\), we simply say that \(G\) is _Ramsey for \(H\) in \(r\) colors_. Ramsey's theorem [24] implies that the complete graph \(K_{n}\) is Ramsey for \((H_{1},\ldots,H_{r})\) whenever \(n\) is sufficiently large. The fundamental question of graph Ramsey theory is to determine, for a given tuple \((H_{1},\ldots,H_{r})\), which graphs \(G\) are Ramsey for it. For more on this question, as well as the many fascinating sub-questions it contains, we refer the reader to the survey [3]. In this paper, we are interested in Ramsey properties of random graphs, a topic that was initiated in the late 1980s by Frankl-Rodl [6] and Luczak-Rucinski-Voigt [31]. The main question in this area is, for a given tuple \((H_{1},\ldots,H_{r})\), which functions \(p=p(n)\) satisfy that \(G_{n,p}\) is Ramsey for \((H_{1},\ldots,H_{r})\) a.a.s.1 In the case \(H_{1}=\cdots=H_{r}\), this question was resolved in the remarkable work of Rodl and Rucinski [25, 26, 27]. In order to state their result, we need the following terminology and notation. For a graph \(J\), we denote by \(v_{J}\) and \(e_{J}\) the number of vertices and edges, respectively, of \(J\). The _maximal \(2\)-density_ of a non-empty graph \(H\) with \(v_{H}\geqslant 3\) is then defined2 to be Footnote 1: As usual, \(G_{n,p}\) denotes the binomial random graph with edge probability \(p\) and we say that an event happens _asymptotically almost surely (a.a.s.)_ if its probability tends to \(1\) as \(n\to\infty\). Footnote 2: We also define \(m_{2}(K_{2})\coloneqq 1/2\) and \(m_{2}(H)\coloneqq 0\) if \(H\) has no edges. \[m_{2}(H)\coloneqq\max\left\{\frac{e_{J}-1}{v_{J}-2}:J\subseteq H,v_{J} \geqslant 3\right\}.\] With this notation, we can state the random Ramsey theorem of Rodl and Rucinski [27]. 
**Theorem 1.1** (Rodl-Rucinski [27]).: _For every graph \(H\) which is not a forest3 and every integer \(r\geqslant 2\), there exist constants \(c,C>0\) such that_ Footnote 3: Rodl and Rucinski also determined the Ramsey threshold when \(H\) is a forest, but for simplicity we do not state this more general result. \[\lim_{n\to\infty}\Pr(G_{n,p}\text{ is Ramsey for }H\text{ in }r\text{ colors})=\begin{cases}1&\text{if }p\geqslant Cn^{-1/m_{2}(H)},\\ 0&\text{if }p\leqslant cn^{-1/m_{2}(H)}.\end{cases}\] As with many such threshold results for random graph properties, Theorem 1.1 really consists of two statements: the \(1\)_-statement_, which says that \(G_{n,p}\) satisfies the desired property a.a.s. once \(p\) is above some threshold, and the \(0\)_-statement_, which says that \(G_{n,p}\) a.a.s. fails to satisfy the desired property if \(p\) is below some threshold. In recent years, there has been a great deal of work on transferring combinatorial theorems, such as Ramsey's theorem or Turan's theorem [30], to sparse random settings. As a consequence, several new proofs of the \(1\)-statement of Theorem 1.1 have been found. Two such proofs were first given by Conlon-Gowers [4] and, independently, by Friedgut-Rodl-Schacht [8] (see also Schacht [29]) with the use of their transference principles. More recently, Nenadov and Steger [22] found a very short proof of the \(1\)-statement of Theorem 1.1 that uses the hypergraph container method of Saxton-Thomason [28] and Balogh-Morris-Samotij [1]. However, these techniques are not suitable for proving the respective \(0\)-statements such as that in Theorem 1.1. Furthermore, whereas the \(0\)-statement of the aforementioned sparse random analogue of Turan's theorem is very easy to establish, proving the \(0\)-statement of Theorem 1.1 requires a significant amount of work. To understand this, suppose that \(G\) is some graph that is Ramsey for \(H\) in \(r\) colors. As is well-known (see e.g. [14, Theorem 3.4]), the probability that \(G_{n,p}\) contains \(G\) as a subgraph is bounded away from zero if (and only if) \(p=\Omega(n^{-1/m(G)})\), where \(m(G)\) is the _maximal density_ of \(G\), defined by \[m(G)\coloneqq\max\left\{\frac{e_{J}}{v_{J}}:J\subseteq G,v_{J}\geqslant 1 \right\}.\] In particular, if \(m(G)\leqslant m_{2}(H)\), then the \(0\)-statement of Theorem 1.1 cannot hold. Therefore, a prerequisite for any proof of the \(0\)-statement is the following result, which Rodl-Rucinski [25] termed the _deterministic lemma_: If \(G\) is Ramsey for \(H\) in \(r\) colors, then \(m(G)>m_{2}(H)\). We stress that this result is by no means trivial; in particular, it turns out to be false if we remove the assumption that \(H\) is not a forest [7, 27], or if we move from graphs to hypergraphs [9]. To complement the deterministic lemma, Rodl-Rucinski also proved what they termed a _probabilistic lemma_. Loosely speaking, this is a result that says that the \(0\)-statement of Theorem 1.1 is actually _equivalent_ to the deterministic lemma. In other words, an obvious necessary condition for the validity of the \(0\)-statement--the non-existence of a graph \(G\) that is Ramsey for \(H\) and satisfies \(m(G)\leqslant m_{2}(H)\)--is also a sufficient condition. ### Asymmetric Ramsey properties of random graphs Given our good understanding of Ramsey properties of random graphs in the symmetric case, provided by Theorem 1.1, it is natural to ask what happens if we remove the assumption that \(H_{1}=\cdots=H_{r}\). 
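Before turning to the asymmetric setting, the following small brute-force sketch (our illustration, not part of the paper) makes the density parameters \(m(G)\) and \(m_{2}(H)\) defined above concrete. It simply scans vertex subsets and their induced subgraphs, which suffices because both maxima are attained on induced subgraphs (for a fixed vertex set, the edge count, and hence each ratio, is maximized by the induced subgraph).

```python
# Brute-force computation of the maximal density m(G) and the maximal 2-density m_2(H).
from itertools import combinations
import networkx as nx

def max_density(G):                      # m(G) = max e_J / v_J over subgraphs J with v_J >= 1
    best = 0.0
    for k in range(1, G.number_of_nodes() + 1):
        for S in combinations(G.nodes, k):
            J = G.subgraph(S)
            best = max(best, J.number_of_edges() / J.number_of_nodes())
    return best

def max_2_density(H):                    # m_2(H) = max (e_J - 1) / (v_J - 2) over subgraphs with v_J >= 3
    best = 0.0
    for k in range(3, H.number_of_nodes() + 1):
        for S in combinations(H.nodes, k):
            J = H.subgraph(S)
            best = max(best, (J.number_of_edges() - 1) / (J.number_of_nodes() - 2))
    return best

# Example: for the complete graph K_4, m(K_4) = 6/4 = 1.5 and m_2(K_4) = (6-1)/(4-2) = 2.5.
print(max_density(nx.complete_graph(4)), max_2_density(nx.complete_graph(4)))
```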
This question was first raised by Kohayakawa and Kreuter [15], who proposed a natural conjecture for the threshold controlling when \(G_{n,p}\) is Ramsey for an arbitrary tuple \((H_{1},\ldots,H_{r})\). To state their conjecture, we need the notion of the _mixed \(2\)-density_: For graphs \(H_{1},H_{2}\) with \(m_{2}(H_{1})\geqslant m_{2}(H_{2})\), their mixed \(2\)-density is defined as \[m_{2}(H_{1},H_{2})\coloneqq\max\left\{\frac{e_{J}}{v_{J}-2+1/m_{2}(H_{2})}:J \subseteq H_{1},v_{J}\geqslant 2\right\}.\] With this terminology, we may state the conjecture of Kohayakawa and Kreuter [15]. **Conjecture 1.2** (Kohayakawa-Kreuter [15]).: _Let \(H_{1},\ldots,H_{r}\) be graphs satisfying \(m_{2}(H_{1})\geqslant\cdots\geqslant m_{2}(H_{r})\) and \(m_{2}(H_{2})>1\). There exist constants \(c,C>0\) such that_ \[\lim_{n\to\infty}\Pr(G_{n,p}\text{ is Ramsey for }(H_{1},\ldots,H_{r}))= \begin{cases}1&\text{if }p\geqslant Cn^{-1/m_{2}(H_{1},H_{2})},\\ 0&\text{if }p\leqslant cn^{-1/m_{2}(H_{1},H_{2})}.\end{cases}\] The assumption \(m_{2}(H_{2})>1\) is equivalent to requiring that \(H_{1}\) and \(H_{2}\) are not forests; it was added by Kohayakawa, Schacht, and Spohel [16] to rule out sporadic counterexamples, in analogy with the assumption that \(H\) is not a forest in Theorem 1.1. The role of the mixed \(2\)-density \(m_{2}(H_{1},H_{2})\) in the context of Conjecture 1.2 can seem a little mysterious at first, but there is a natural (heuristic) explanation. Since one can color all edges that do not lie in a copy of \(H_{1}\) with color \(1\), the only important edges are those that do lie in copies of \(H_{1}\). The mixed \(2\)-density is defined in such a way that \(p=\Theta(n^{-1/m_{2}(H_{1},H_{2})})\) is the threshold at which the number of copies of (the densest subgraph of) each of \(H_{2},\ldots,H_{r}\) is at least of the same order of magnitude as the number of edges in the union of all copies of (the densest subgraph of) \(H_{1}\) in \(G_{n,p}\). Since at least one edge in each copy of \(H_{1}\) must receive a color from \(\{2,\ldots,r\}\), this is the point where avoiding monochromatic copies of \(H_{2},\ldots,H_{r}\) becomes difficult. Conjecture 1.2 has received a great deal of attention over the years, and has been proved in a number of special cases. Following a sequence of partial results [9, 11, 15, 16, 19], the \(1\)-statement of Conjecture 1.2 was proved by Mousset, Nenadov, and Samotij [20] with the use of the container method as well as a randomized "typing" procedure. We henceforth focus on the \(0\)-statement, where progress has been more limited. Note that, in order to prove the \(0\)-statement, one can make several simplifying assumptions. First, one can assume that \(r\), the number of colors, is equal to \(2\). Indeed, if one can a.a.s. \(2\)-color the edges of \(G_{n,p}\) and avoid monochromatic copies of \(H_{1},H_{2}\) in colors \(1,2\), respectively, then certainly \(G_{n,p}\) is not Ramsey for \((H_{1},\ldots,H_{r})\). Furthermore, if \(H_{2}^{\prime}\subseteq H_{2}\) is a subgraph satisfying \(m_{2}(H_{2}^{\prime})=m_{2}(H_{2})\), then the \(0\)-statement for the pair \((H_{1},H_{2}^{\prime})\) implies the \(0\)-statement for \((H_{1},H_{2})\), as any coloring with no monochromatic copy of \(H_{2}^{\prime}\) in particular has no monochromatic copy of \(H_{2}\). Thus, we may assume that \(H_{2}\) is _strictly \(2\)-balanced_, meaning that \(m_{2}(H_{2}^{\prime})<m_{2}(H_{2})\) for any \(H_{2}^{\prime}\subsetneq H_{2}\). 
For exactly the same reason, we may assume that \(H_{1}\) is _strictly \(m_{2}(\cdot,H_{2})\)-balanced_, meaning that \(m_{2}(H_{1}^{\prime},H_{2})<m_{2}(H_{1},H_{2})\) for any \(H_{1}^{\prime}\subsetneq H_{1}\). Let us say that the pair \((H_{1},H_{2})\) is _strictly balanced_ if \(H_{2}\) is strictly \(2\)-balanced and \(H_{1}\) is strictly \(m_{2}(\cdot,H_{2})\)-balanced. Additionally, let us say that \((H_{1}^{\prime},H_{2}^{\prime})\) is a _strictly balanced pair of subgraphs_ of \((H_{1},H_{2})\) if \((H_{1}^{\prime},H_{2}^{\prime})\) is strictly balanced and satisfies \(m_{2}(H_{2}^{\prime})=m_{2}(H_{2})\) and \(m_{2}(H_{1}^{\prime},H_{2}^{\prime})=m_{2}(H_{1},H_{2})\). All previous works on the \(0\)-statement of Conjecture 1.2 have made these simplifying assumptions, working in the case \(r=2\) and with a strictly balanced pair \((H_{1},H_{2})\). The original paper of Kohayakawa and Kreuter [15] proved the \(0\)-statement of Conjecture 1.2 when \(H_{1}\) and \(H_{2}\) are cycles. This was extended to the case when both \(H_{1}\) and \(H_{2}\) are cliques in [19], and to the case when \(H_{1}\) is a clique and \(H_{2}\) is a cycle in [18]. To date, the most general result is due to Hyde [13], who proved the \(0\)-statement of Conjecture 1.2 for almost all pairs of regular graphs \((H_{1},H_{2})\); in fact, this follows from Hyde's main result [13, Theorem 1.9], which establishes a certain deterministic condition whose validity implies the \(0\)-statement of Conjecture 1.2. Finally, the first two authors [17] recently proved the \(0\)-statement of Conjecture 1.2 in the case where \(m_{2}(H_{1})=m_{2}(H_{2})\). Because of this, we henceforth focus on the case that \(m_{2}(H_{1})>m_{2}(H_{2})\). ### New results As in the symmetric setting, a necessary prerequisite for proving the \(0\)-statement of Conjecture 1.2 is proving the following _deterministic lemma_: If \(G\) is Ramsey for \((H_{1},H_{2})\), then \(m(G)>m_{2}(H_{1},H_{2})\). The main result in this paper is a corresponding probabilistic lemma, which states that this obvious necessary condition is also sufficient. **Theorem 1.3**.: _The \(0\)-statement of Conjecture 1.2 holds if and only if, for every strictly balanced pair \((H_{1},H_{2})\), every graph \(G\) that is Ramsey for \((H_{1},H_{2})\) satisfies \(m(G)>m_{2}(H_{1},H_{2})\)._ More precisely, we prove that if \((H_{1},H_{2})\) is any pair of graphs and \((H_{1}^{\prime},H_{2}^{\prime})\) is a strictly balanced pair of subgraphs of \((H_{1},H_{2})\), then the \(0\)-statement of Conjecture 1.2 holds for \((H_{1},H_{2})\) if every graph \(G\) which is Ramsey for \((H_{1}^{\prime},H_{2}^{\prime})\) satisfies \(m(G)>m_{2}(H_{1}^{\prime},H_{2}^{\prime})=m_{2}(H_{1},H_{2})\). While we believe that the probabilistic lemma, Theorem 1.3, is our main contribution, we are able to prove the deterministic lemma in a wide range of cases. This implies that the \(0\)-statement of Conjecture 1.2 is true for almost all pairs of graphs. The most general statement we can prove is slightly tricky to state because of the necessity of passing to a strictly balanced pair of subgraphs; however, here is a representative example of our results, which avoids this technicality and still implies Conjecture 1.2 for almost all pairs of graphs. We state the more general result in Theorem 1.7 below. 
**Theorem 1.4**.: _Conjecture 1.2 holds for all sequences \(H_{1},\ldots,H_{r}\) of graphs satisfying \(m_{2}(H_{1})\geqslant\cdots\geqslant m_{2}(H_{r})\) and \(m_{2}(H_{2})>\frac{11}{5}\)._ As discussed above, Theorem 1.4 follows easily from Theorem 1.3 and a deterministic lemma for strictly balanced pairs \((H_{1},H_{2})\) satisfying \(m_{2}(H_{1})\geqslant m_{2}(H_{2})>\frac{11}{5}\). The deterministic lemma in this setting is actually very straightforward and follows from standard coloring techniques. Using a number of other coloring techniques, we can prove the deterministic lemma (and thus Conjecture 1.2) in several additional cases, which we discuss below. However, let us first propose a conjecture, which we believe to be of independent interest, and whose resolution would immediately imply Conjecture 1.2 in all cases. **Conjecture 1.5**.: _For any graph \(G\), there exists a forest \(F\subseteq G\) such that_ \[m_{2}(G\setminus F)\leqslant m(G).\] Here, \(G\setminus F\) denotes the graph obtained from \(G\) by deleting the edges of \(F\) (but not deleting any vertices). To give some intuition for Conjecture 1.5, we note that \(m(G)\leqslant m_{2}(G)\leqslant m(G)+1\) for any graph \(G\), and that \(m_{2}(F)=1\) for any forest \(F\) which is not a matching. Thus, it is natural to expect that by deleting the edges of a forest, we could decrease \(m_{2}(G)\) by roughly \(1\). Conjecture 1.5 says that this is roughly the case, in that the deletion of an appropriately-chosen forest can decrease \(m_{2}(G)\) to lie below \(m(G)\). Moreover, we note that Conjecture 1.5 easily implies the deterministic lemma in all cases4 with \(m_{2}(H_{1})>m_{2}(H_{2})\), and thus implies Conjecture 1.2. Indeed, it is straightforward to verify in this case that \(m_{2}(H_{1})>m_{2}(H_{1},H_{2})\) (see Lemma 3.4 below). Now, suppose that \(G\) is some graph with \(m(G)\leqslant m_{2}(H_{1},H_{2})<m_{2}(H_{1})\). If Conjecture 1.5 is true, we may partition the edges of \(G\) into a forest \(F\) and a graph \(K\) with \(m_{2}(K)\leqslant m(G)<m_{2}(H_{1})\). This latter condition implies, in particular, that \(K\) contains no copy of \(H_{1}\). Additionally, by the assumption \(m_{2}(H_{2})>1\) in Conjecture 1.2, we know that \(H_{2}\) contains a cycle and thus \(F\) contains no copy of \(H_{2}\). In other words, coloring the edges of \(K\) with color \(1\) and the edges of \(F\) with color \(2\) witnesses that \(G\) is not Ramsey for \((H_{1},\ldots,H_{r})\). Footnote 4: Recall that the case of \(m_{2}(H_{1})=m_{2}(H_{2})\) was settled in [17], so we may freely make this assumption. Because of this, it would be of great interest to prove Conjecture 1.5. Somewhat surprisingly, we know how to prove Conjecture 1.5 under the extra assumption that \(m(G)\) is an integer. This extra condition seems fairly artificial, but we do not know how to remove it--our technique uses tools from matroid theory that seem to break down once \(m(G)\) is no longer an integer. We present this proof in Appendix B, in the hope that it may serve as a first step to the full resolution of Conjecture 1.5, and thus Conjecture 1.2. Although we are not able to resolve Conjecture 1.5, we do have a number of other techniques for proving the deterministic lemma, and thus Conjecture 1.2, under certain assumptions. First, we are able to resolve the case when the number of colors is at least three and \(m_{2}(H_{2})=m_{2}(H_{3})\). 
**Theorem 1.6**.: _Let \(H_{1},\ldots,H_{r}\) be a sequence of graphs with \(r\geqslant 3\) and suppose that \(m_{2}(H_{1})\geqslant m_{2}(H_{2})=m_{2}(H_{3})\geqslant\cdots\geqslant m_{2}(H_{r})\) and \(m_{2}(H_{2})>1\). Then Conjecture 1.2 holds for \(H_{1},\ldots,H_{r}\)._ We can also prove Conjecture 1.2 in a number of additional cases, expressed in terms of the properties of (a strictly balanced pair of subgraphs of) the pair \((H_{1},H_{2})\) of two densest graphs. Recall that the _degeneracy_ of \(H\) is the maximum over all \(J\subseteq H\) of the minimum degree of \(J\). **Theorem 1.7**.: _Suppose that \((H_{1},H_{2})\) is strictly balanced. Suppose additionally that one of the following conditions holds:_
(a) \(\chi(H_{2})\geqslant 3\)_, or_
(b) \(H_{2}\) _is not the union of two forests, or_
(c) \(\chi(H_{1})>m_{2}(H_{1},H_{2})+1\)_, or_
(d) \(H_{1}\) _has degeneracy at least_ \(\lfloor 2m_{2}(H_{1},H_{2})\rfloor\)_, or_
(e) \(H_{1}=K_{s,t}\) _for some_ \(s,t\geqslant 2\)_, or_
(f) \(m_{2}(H_{1})>\lceil m_{2}(H_{1},H_{2})\rceil\)_._
_In any of these cases, Conjecture 1.2 holds for \((H_{1},H_{2})\)._ **Remark**.: The only graphs \(H_{2}\) which do not satisfy (a) or (b) are sparse bipartite graphs, such as even cycles. On the other hand, (c) applies whenever \(H_{1}\) is a clique5 or, more generally, a graph obtained from a clique by deleting few edges. Moreover, (d) applies to reasonably dense graphs, as well as all \(d\)-regular bipartite graphs with \(d\geqslant 2\), and (e) handles all cases when \(H_{1}\) is a biclique6. Thus, very roughly speaking, the strictly balanced cases that remain open in Conjecture 1.2 are those in which \(H_{2}\) is bipartite and very sparse and \(H_{1}\) is not "too dense". Footnote 5: Note that \(m_{2}(H_{1},H_{2})\leqslant m_{2}(H_{1})\), hence (c) holds if \(\chi(H_{1})>m_{2}(H_{1})+1\), and cliques satisfy \(m_{2}(K_{k})=\frac{k+1}{2}\). Footnote 6: In fact, our proof of (e) applies to a larger class of graphs, which we call _\((s,t)\)-graphs_; see Section 5 for details. Case (f) is somewhat stranger and it is not obvious that there exist graphs to which it applies. However, one can check that, for example, it applies if \(H_{1}=K_{3,3,3,3}\) and \(H_{2}=C_{8}\), and that none of the other cases of Theorem 1.7 (or any of the earlier results on Conjecture 1.2) apply in this case. However, the main reason we include (f) is that it is implied by our partial progress on Conjecture 1.5; since we believe that this conjecture is the correct approach to settling Conjecture 1.2 in its entirety, we wanted to highlight (f). We remark that, unfortunately, the conditions in Theorem 1.7 do not exhaust all cases. While it is quite likely that simple additional arguments could resolve further cases, Conjecture 1.5 remains the only (conjectural) approach we have found to resolve Conjecture 1.2 in all cases. Moreover, our proof of the probabilistic lemma implies that, in order to prove Conjecture 1.2 for a pair \((H_{1},H_{2})\), it is enough to prove the deterministic lemma for graphs \(G\) of order not exceeding an explicit constant \(K=K(H_{1},H_{2})\). In particular, the validity of Conjecture 1.2 for any specific pair of graphs reduces to a finite computation.
### Ramsey properties of graph families
All of the results discussed in the previous subsection hold in greater generality, when we replace \(H_{1},\ldots,H_{r}\) with \(r\) finite families of graphs.
In addition to being interesting in its own right, such a generalization also has important consequences in the original setting of Conjecture 1.2; indeed, our proof of the three-color result, Theorem 1.6, relies on our ability to work with graph families. Before we state our more general results, we need the following definitions. **Definition 1.8**.: Let \(\mathcal{H}_{1},\ldots,\mathcal{H}_{r}\) be finite families of graphs. We say that a graph \(G\) is _Ramsey_ for \((\mathcal{H}_{1},\ldots,\mathcal{H}_{r})\) if every \(r\)-coloring of \(E(G)\) contains a monochromatic copy of some \(H_{i}\in\mathcal{H}_{i}\) in some color \(i\in[\![r]\!]\). We now define the appropriate generalizations of the notions of maximum \(2\)-density and mixed \(2\)-density to families of graphs. First, given a finite family of graphs \(\mathcal{H}\), we let \[m_{2}(\mathcal{H})\coloneqq\min_{H\in\mathcal{H}}m_{2}(H).\] Second, given a graph \(H\) and a (finite) family \(\mathcal{L}\) of graphs, we let \[m_{2}(H,\mathcal{L})\coloneqq\max\left\{\frac{e_{J}}{v_{J}-2+1/m_{2}(\mathcal{ L})}:J\subseteq H,v_{J}\geqslant 2\right\}.\] Third, given two finite families of graphs \(\mathcal{H}\) and \(\mathcal{L}\) with \(m_{2}(\mathcal{H})\geqslant m_{2}(\mathcal{L})\), we define \[m_{2}(\mathcal{H},\mathcal{L})\coloneqq\min_{H\in\mathcal{H}}m_{2}(H,\mathcal{ L}).\] Finally, continuing the terminology above, let us say that the pair \((\mathcal{H},\mathcal{L})\) is _strictly balanced_ if every graph in \(\mathcal{L}\) is strictly \(2\)-balanced and every graph in \(\mathcal{H}\) is strictly \(m_{2}(\cdot,\mathcal{L})\)-balanced. The following conjecture is a natural generalization of Conjecture 1.2 to families of graphs. **Conjecture 1.9** (Kohayakawa-Kreuter conjecture for families).: _Let \(\mathcal{H}_{1},\ldots,\mathcal{H}_{r}\) be finite families of graphs with \(m_{2}(\mathcal{H}_{1})\geqslant\cdots\geqslant m_{2}(\mathcal{H}_{r})\) and suppose that \(m_{2}(\mathcal{H}_{2})>1\). There exist constants \(c,C>0\) such that_ \[\lim_{n\to\infty}\Pr(G_{n,p}\text{ is Ramsey for }(\mathcal{H}_{1},\dots, \mathcal{H}_{r}))=\begin{cases}1&\text{if }p\geqslant Cn^{-1/m_{2}(\mathcal{H}_{1}, \mathcal{H}_{2})},\\ 0&\text{if }p\leqslant cn^{-1/m_{2}(\mathcal{H}_{1},\mathcal{H}_{2})}.\end{cases}\] Note that, for any \(H_{1}\in\mathcal{H}_{1},\dots,H_{r}\in\mathcal{H}_{r}\), the property of being Ramsey for \((H_{1},\dots,H_{r})\) implies the property of being Ramsey for \((\mathcal{H}_{1},\dots,\mathcal{H}_{r})\). Therefore, the \(1\)-statement of Conjecture 1.9 follows from the \(1\)-statement of Conjecture 1.2, which we know to be true by the result of Mousset, Nenadov, and Samotij [20]. The \(0\)-statement of Conjecture 1.9 remains open; the only progress to date is due to the first two authors [17], who proved Conjecture 1.9 whenever \(m_{2}(\mathcal{H}_{1})=m_{2}(\mathcal{H}_{2})\). We make further progress on this conjecture: as in the case of single graphs, we prove a probabilistic lemma that reduces the \(0\)-statement to a deterministic lemma, which is clearly a necessary condition. 
**Theorem 1.10** (Probabilistic lemma for families).: _The \(0\)-statement of Conjecture 1.9 holds if and only if, for every strictly balanced pair \((\mathcal{H}_{1},\mathcal{H}_{2})\) of finite families of graphs, every graph \(G\) that is Ramsey for \((\mathcal{H}_{1},\mathcal{H}_{2})\) satisfies \(m(G)>m_{2}(\mathcal{H}_{1},\mathcal{H}_{2})\)._ As in Theorems 1.4 and 1.7, we can prove the deterministic lemma for families in a wide variety of cases, namely when every graph \(H_{1}\in\mathcal{H}_{1}\) or every graph \(H_{2}\in\mathcal{H}_{2}\) satisfies one of the conditions in Theorem 1.7. In particular, we resolve Conjecture 1.9 in many cases. However, we believe that the right way to resolve Conjecture 1.9 in its entirety is the same as the right way to resolve the original Kohayakawa-Kreuter conjecture, Conjecture 1.2. Namely, if Conjecture 1.5 is true, then Conjecture 1.9 is true for all families of graphs. ### Organization Most of the rest of this paper is dedicated to proving Theorem 1.10, and thus also Theorem 1.3. Our technique is inspired by recent work of the first two authors [17], who proved Conjecture 1.9 in the case \(m_{2}(\mathcal{H}_{1})=m_{2}(\mathcal{H}_{2})\). Therefore, we assume henceforth that \(m_{2}(\mathcal{H}_{1})>m_{2}(\mathcal{H}_{2})\). We will now change notation and denote \(\mathcal{H}_{1}=\mathcal{H}\) and \(\mathcal{H}_{2}=\mathcal{L}\). The names stand for _heavy_ and _light_, respectively, and are meant to remind the reader that \(m_{2}(\mathcal{L})<m_{2}(\mathcal{H})\). We also assume henceforth that \((\mathcal{H},\mathcal{L})\) is a strictly balanced pair of families. The rest of this paper is organized as follows. In Section 2, we present a high-level overview of our proof of Theorem 1.10. Section 3 contains a number of preliminaries for the proof, including the definitions and basic properties of _cores_--a fundamental notion in our approach--as well as several simple numerical lemmas. The proof of Theorem 1.10 is carried out in detail in Section 4. In Section 5, we prove the deterministic lemma under various assumptions, which yields Theorems 1.4 and 1.7 as well as their generalizations to families. We conclude with two appendices: Appendix A proves Theorem 1.6 by explaining what in our proof needs to be adapted to deal with the three-color setting; and Appendix B presents our partial progress on Conjecture 1.5. Additional note.: As this paper was being written, we learned that very similar results were obtained independently by Bowtell, Hancock, and Hyde [2], who also resolve Conjecture 1.2 in the vast majority of cases. As with this paper, they first prove a probabilistic lemma, showing that resolving the Kohayakawa-Kreuter conjecture is equivalent to proving a deterministic coloring result. By using a wider array of coloring techniques, they are able to prove more cases of Conjecture 1.2 than we can. Additionally, they consider a natural generalization of the Kohayakawa-Kreuter to uniform hypergraphs (a topic that we chose not to pursue here) and establish its \(0\)-statement for almost all pairs of hypergraphs; see also [9] for more on such hypergraph questions. In contrast, their work does not cover families of graphs, a generalization that falls out naturally from our approach. Acknowledgments.: We would like to thank Anita Liebenau and Leticia Mattos for fruitful discussions on Ramsey properties of random graphs. 
We are also indebted to Candida Bowtell, Robert Hancock, and Joseph Hyde for sharing an early draft of their paper [2] with us, and for their many invaluable comments. ## 2. Proof outline We now sketch, at a very high level, the proof of the probabilistic lemma. Let us fix a strictly balanced pair of families \((\mathcal{H},\mathcal{L})\). We wish to upper-bound the probability that \(G_{n,p}\) is Ramsey for \((\mathcal{H},\mathcal{L})\), where \(p\leqslant cn^{-1/m_{2}(\mathcal{H},\mathcal{L})}\) for an appropriately chosen constant \(c=c(\mathcal{H},\mathcal{L})>0\). Our approach is modeled on the recent proof of the \(0\)-statement of Theorem 1.1 due to the first two authors [17]; however, there are substantial additional difficulties that arise in the asymmetric setting. One can immediately make several simplifying assumptions. First, if \(G_{n,p}\) is Ramsey for \((\mathcal{H},\mathcal{L})\), then there exists some \(G\subseteq G_{n,p}\) that is _minimally_ Ramsey for \((\mathcal{H},\mathcal{L})\), in the sense that any proper subgraph \(G^{\prime}\subsetneq G\) is not Ramsey for \((\mathcal{H},\mathcal{L})\). It is not hard to show (see Lemma 3.2 below) that every minimally Ramsey graph has a number of interesting properties. In particular, if \(G\) is minimally Ramsey, then every edge of \(G\) lies in at least one copy of some \(H\in\mathcal{H}\), and at least one copy of some \(L\in\mathcal{L}\). Our arguments will exploit a well-known strengthening of this property, which we call _supporting a core_; see Definition 3.1 for the precise definition. We would ideally like to union-bound over all possible minimally Ramsey graphs \(G\) in order to show that a.a.s. none of them appears in \(G_{n,p}\). Unfortunately, there are potentially too many minimally Ramsey graphs for this to be possible. To overcome this, we construct a smaller family \(\mathcal{S}\) of subgraphs of \(K_{n}\) such that every Ramsey graph \(G\) contains some element of \(\mathcal{S}\) as a subgraph. Since \(\mathcal{S}\) is much smaller than the family of minimally Ramsey graphs, we can effectively union-bound over \(\mathcal{S}\). This basic idea also underlies the container method [1, 28] and the recent work of Harel, Mousset, and Samotij on the upper tail problem for subgraph counts [12]. The details here, however, are slightly subtle; there are actually three different types of graphs in \(\mathcal{S}\) and a different union-bound argument is needed to handle each type. We construct our family \(\mathcal{S}\) with the use of an exploration process on minimally Ramsey graphs, each of which supports a core. This exploration process starts with a fixed edge of \(K_{n}\) and gradually adds to it copies of graphs in \(\mathcal{H}\cup\mathcal{L}\). As long as the subgraph \(G^{\prime}\subseteq G\) of explored edges is not yet all of \(G\), we add to \(G^{\prime}\) a copy of some graph in \(\mathcal{H}\cup\mathcal{L}\) that intersects \(G^{\prime}\) but is not fully contained in it. By choosing this copy in a principled manner (more on this momentarily), we can ensure that \(\mathcal{S}\) satisfies certain conditions which enable this union-bound argument. Since our goal is to show that the final graph \(G^{\prime}\) is rather dense (and thus unlikely to appear in \(G_{n,p}\)), we always prefer to add copies of graphs in \(\mathcal{H}\), as these boost the density of \(G^{\prime}\). If there are no available copies of \(H\in\mathcal{H}\), we explore along some \(L\in\mathcal{L}\). 
As \(L\) may be very sparse, this can hurt us; however, the "core" property guarantees that each copy of \(L\) comes with at least one copy of some \(H\in\mathcal{H}\) per new edge. An elementary (but fairly involved) computation shows that the losses and the gains pencil out, which is the key fact showing that \(\mathcal{S}\) has the desired properties. ## 3. Preliminaries ### Ramsey graphs and cores Given a graph \(G\), denote by \(\mathcal{F}_{\mathcal{H}}[G],\mathcal{F}_{\mathcal{L}}[G]\) the set of all copies of members of \(\mathcal{H},\mathcal{L}\), respectively, in \(G\). We think of \(\mathcal{F}_{\mathcal{H}}[G],\mathcal{F}_{\mathcal{L}}[G]\) as hypergraphs on the ground set \(E(G)\); in particular, we think of an element of \(\mathcal{F}_{\mathcal{H}}[G],\mathcal{F}_{\mathcal{L}}[G]\) as a collection of edges of \(G\) that form a copy of some \(H\in\mathcal{H},L\in\mathcal{L}\), respectively. To highlight the (important) difference between the members of \(\mathcal{H}\cup\mathcal{L}\) and their copies (i.e. the elements of \(\mathcal{F}_{\mathcal{H}}[G]\cup\mathcal{F}_{\mathcal{L}}[G]\)), we will denote the former by \(H\) and \(L\) and the latter by \(\widehat{H}\) and \(\widehat{L}\). Given a graph \(G\) and \(\mathcal{F}_{\mathcal{H}}\subseteq\mathcal{F}_{\mathcal{H}}[G],\mathcal{F}_{ \mathcal{L}}\subseteq\mathcal{F}_{\mathcal{L}}[G]\), we say that the tuple \((G,\mathcal{F}_{\mathcal{H}},\mathcal{F}_{\mathcal{L}})\) is _Ramsey_ if, for every two-coloring of \(E(G)\), there is an element of \(\mathcal{F}_{\mathcal{H}}\) that is monochromatic red or an element of \(\mathcal{F}_{\mathcal{L}}\) that is monochromatic blue. In particular, we see that \(G\) is Ramsey for \((\mathcal{H},\mathcal{L})\) if and only if \((G,\mathcal{F}_{\mathcal{H}}[G],\mathcal{F}_{\mathcal{L}}[G])\) is Ramsey. Having said that, allowing tuples \((G,\mathcal{F}_{\mathcal{H}},\mathcal{F}_{\mathcal{L}})\) where \(\mathcal{F}_{\mathcal{H}}\) and \(\mathcal{F}_{\mathcal{L}}\) are proper subsets of \(\mathcal{F}_{\mathcal{H}}[G]\) and \(\mathcal{F}_{\mathcal{L}}[G]\), respectively, enables us to deduce further useful properties. These are encapsulated in the following definition. **Definition 3.1**.: An \((\mathcal{H},\mathcal{L})\)_-core_ (or _core_ for short) is a tuple \((G,\mathcal{F}_{\mathcal{H}},\mathcal{F}_{\mathcal{L}})\), where \(G\) is a graph and \(\mathcal{F}_{\mathcal{H}}\subseteq\mathcal{F}_{\mathcal{H}}[G],\mathcal{F}_{ \mathcal{L}}\subseteq\mathcal{F}_{\mathcal{L}}[G]\), with the following properties: * The hypergraph \(\mathcal{F}_{\mathcal{H}}\cup\mathcal{F}_{\mathcal{L}}\) is connected and spans \(E(G)\). * For every \(\widehat{H}\in\mathcal{F}_{\mathcal{H}}\) and every edge \(e\in\widehat{H}\), there exists an \(\widehat{L}\in\mathcal{F}_{\mathcal{L}}\) such that \(\widehat{H}\cap\widehat{L}=\{e\}\). * For every \(\widehat{L}\in\mathcal{F}_{\mathcal{L}}\) and every edge \(e\in\widehat{L}\), there exists an \(\widehat{H}\in\mathcal{F}_{\mathcal{H}}\) such that \(\widehat{H}\cap\widehat{L}=\{e\}\). We say that \(G\)_supports a core_ if there exist \(\mathcal{F}_{\mathcal{H}}\subseteq\mathcal{F}_{\mathcal{H}}[G],\mathcal{F}_ {\mathcal{L}}\subseteq\mathcal{F}_{\mathcal{L}}[G]\) such that \((G,\mathcal{F}_{\mathcal{H}},\mathcal{F}_{\mathcal{L}})\) is a core. The reason we care about cores is that minimal Ramsey graphs support cores, as shown in the following lemma. Essentially the same lemma appears in the work of Rodl and Rucinski [25], where it is given as an exercise. 
The same idea was already used in several earlier works, including [15, Claim 6] and [18, Lemma 4.1]. **Lemma 3.2**.: _Suppose that a graph \(G\) is Ramsey for \((\mathcal{H},\mathcal{L})\), but none of its proper subgraphs are Ramsey for \((\mathcal{H},\mathcal{L})\). Then \(G\) supports an \((\mathcal{H},\mathcal{L})\)-core._ Proof.: As \(G\) is Ramsey for \((\mathcal{H},\mathcal{L})\), we know that \((G,\mathcal{F}_{\mathcal{H}}[G],\mathcal{F}_{\mathcal{L}}[G])\) is a Ramsey tuple. Let \(\mathcal{F}_{\mathcal{H}}\subseteq\mathcal{F}_{\mathcal{H}}[G],\mathcal{F}_ {\mathcal{L}}\subseteq\mathcal{F}_{\mathcal{L}}[G]\) be inclusion-minimal subfamilies such that \((G,\mathcal{F}_{\mathcal{H}},\mathcal{F}_{\mathcal{L}})\) is still a Ramsey tuple. In other words, this tuple is Ramsey, but for any \(\mathcal{F}^{\prime}_{\mathcal{H}}\subseteq\mathcal{F}_{\mathcal{H}},\mathcal{ F}^{\prime}_{\mathcal{L}}\subseteq\mathcal{F}_{\mathcal{L}}\) such that at least one inclusion is strict, the tuple \((G,\mathcal{F}^{\prime}_{\mathcal{H}},\mathcal{F}^{\prime}_{\mathcal{L}})\) is not Ramsey. We will show that \((G,\mathcal{F}_{\mathcal{H}},\mathcal{F}_{\mathcal{L}})\) is a core. If some \(e\in E(G)\) is not contained in any edge of \(\mathcal{F}_{\mathcal{H}}\cup\mathcal{F}_{\mathcal{L}}\), then \((G\setminus e,\mathcal{F}_{\mathcal{H}},\mathcal{F}_{\mathcal{L}})\) is still Ramsey, and thus \(G\setminus e\) is Ramsey for \((\mathcal{H},\mathcal{L})\), contradicting the minimality of \(G\). Furthermore, if \(\mathcal{F}_{\mathcal{H}}\cup\mathcal{F}_{\mathcal{L}}\) is not connected, then at least one of its connected components induces a Ramsey tuple, which contradicts the minimality of \((\mathcal{F}_{\mathcal{H}},\mathcal{F}_{\mathcal{L}})\). Thus, the first condition in the definition of a core is satisfied. We now turn to the next two parts of the definition. To see that the second condition in the definition of a core is satisfied, fix some \(\widehat{H}\in\mathcal{F}_{\mathcal{H}}\) and some \(e\in\widehat{H}\). By minimality, we can find a two-coloring of \(E(G)\) such that no element of \(\mathcal{F}_{\mathcal{L}}\) is blue and no element of \(\mathcal{F}_{\mathcal{H}}\setminus\{\widehat{H}\}\) is red. Note that all edges of \(\widehat{H}\) are colored red, as otherwise our coloring would witness \((G,\mathcal{F}_{\mathcal{H}},\mathcal{F}_{\mathcal{L}})\) being not Ramsey. Flip the color of \(e\) from red to blue. Since \(\widehat{H}\) is now no longer monochromatic red, we must have created a monochromatic blue element \(\widehat{L}\) of \(\mathcal{F}_{\mathcal{L}}\). As all edges of \(\widehat{H}\setminus e\) are still red, we see that \(\widehat{H}\cap\widehat{L}=\{e\}\), as required. Interchanging the roles of \(\mathcal{F}_{\mathcal{H}},\mathcal{F}_{\mathcal{L}}\), and the colors yields the third condition in the definition of a core. ### Numerical lemmas In this section, we collect a few useful numerical lemmas, all of which are simple combinatorial facts about vertex- and edge-counts in graphs. We begin with the following well-known result, which we will use throughout. **Lemma 3.3** (The mediant inequality).: _Let \(a,c\geqslant 0\) and \(b,d>0\) be real numbers with \(a/b\leqslant c/d\). 
Then_ \[\frac{a}{b}\leqslant\frac{a+c}{b+d}\leqslant\frac{c}{d}.\] _Moreover, if one inequality is strict, then so is the other (which happens if and only if \(a/b<c/d\))._ Proof.: Both inequalities are easily seen to be equivalent to the inequality \(ad\leqslant bc\), which is itself the same as \(a/b\leqslant c/d\). **Lemma 3.4**.: _Let \((\mathcal{H},\mathcal{L})\) be a strictly balanced pair. If \(m_{2}(\mathcal{L})<m_{2}(\mathcal{H})\), then \(m_{2}(\mathcal{L})<m_{2}(\mathcal{H},\mathcal{L})<m_{2}(\mathcal{H})\)._ Proof.: To see the second inequality, let \(H\in\mathcal{H}\) be a graph with \(m_{2}(H)=m_{2}(\mathcal{H})\) and observe that the strict \(m_{2}(\cdot,\mathcal{L})\)-balancedness of \(H\) implies that \[m_{2}(H,\mathcal{L})=\frac{e_{H}}{v_{H}-2+1/m_{2}(\mathcal{L})}=\frac{(e_{H}-1)+1}{(v_{H}-2)+1/m_{2}(\mathcal{L})}\leqslant\frac{m_{2}(H)\cdot(v_{H}-2)+1}{(v_{H}-2)+1/m_{2}(\mathcal{L})}.\] Since \(m_{2}(H)=m_{2}(\mathcal{H})>m_{2}(\mathcal{L})\), Lemma 3.3 implies that \(m_{2}(\mathcal{H},\mathcal{L})\leqslant m_{2}(H,\mathcal{L})<m_{2}(\mathcal{H})\). For the first inequality, let \(H\in\mathcal{H}\) be a graph for which \(m_{2}(H,\mathcal{L})=m_{2}(\mathcal{H},\mathcal{L})\) and let \(J\subseteq H\) be its subgraph with \(\frac{e_{J}-1}{v_{J}-2}=m_{2}(H)\). By the strict \(m_{2}(\cdot,\mathcal{L})\)-balancedness of \(H\), we have \[m_{2}(H,\mathcal{L})\geqslant m_{2}(J,\mathcal{L})=\frac{(e_{J}-1)+1}{(v_{J}-2)+1/m_{2}(\mathcal{L})}=\frac{m_{2}(H)\cdot(v_{J}-2)+1}{(v_{J}-2)+1/m_{2}(\mathcal{L})}.\] Since \(m_{2}(H)>m_{2}(\mathcal{L})\), Lemma 3.3 implies that \(m_{2}(\mathcal{H},\mathcal{L})=m_{2}(H,\mathcal{L})\geqslant m_{2}(J,\mathcal{L})>m_{2}(\mathcal{L})\). **Lemma 3.5**.: _Let \(H\in\mathcal{H}\) be strictly \(m_{2}(\cdot,\mathcal{L})\)-balanced. Then for any \(F\subsetneq H\) with \(v_{F}\geqslant 2\), we have_ \[e_{H}-e_{F}>m_{2}(H,\mathcal{L})\cdot(v_{H}-v_{F})\geqslant m_{2}(\mathcal{H},\mathcal{L})\cdot(v_{H}-v_{F}).\] Proof.: The second inequality follows from the definition of \(m_{2}(\mathcal{H},\mathcal{L})\). Since \(e_{F}<e_{H}\), we may assume that \(v_{F}<v_{H}\), as otherwise the claimed inequality holds vacuously. Since \(H\) is strictly \(m_{2}(\cdot,\mathcal{L})\)-balanced, we have \[m_{2}(H,\mathcal{L})=\frac{e_{H}}{v_{H}-2+1/m_{2}(\mathcal{L})}=\frac{(e_{H}-e_{F})+e_{F}}{(v_{H}-v_{F})+(v_{F}-2+1/m_{2}(\mathcal{L}))}\] whereas \[\frac{e_{F}}{v_{F}-2+1/m_{2}(\mathcal{L})}<m_{2}(H,\mathcal{L}).\] Since \(v_{H}>v_{F}\), we may use Lemma 3.3 to conclude that \((e_{H}-e_{F})/(v_{H}-v_{F})>m_{2}(H,\mathcal{L})\). **Lemma 3.6**.: _Let \(L\in\mathcal{L}\) be strictly \(2\)-balanced. Then for any \(J\subsetneq L\) with \(v_{J}\geqslant 2\), we have_ \[e_{L}-e_{J}\geqslant m_{2}(L)\cdot(v_{L}-v_{J})\geqslant m_{2}(\mathcal{L})\cdot(v_{L}-v_{J}).\] _Moreover, the first inequality is strict unless \(J=K_{2}\)._ Proof.: The second inequality is immediate since \(m_{2}(\mathcal{L})\leqslant m_{2}(L)\). Since \(e_{J}<e_{L}\), we may assume that \(v_{J}<v_{L}\), as otherwise the claimed (strict) inequality holds vacuously. We clearly have equality if \(J=K_{2}\) and strict inequality if \(v_{J}=2\) and \(e_{J}=0\), so we may assume henceforth that \(v_{J}>2\). Since \(L\) is strictly \(2\)-balanced, \[m_{2}(L)=\frac{e_{L}-1}{v_{L}-2}=\frac{(e_{L}-e_{J})+(e_{J}-1)}{(v_{L}-v_{J})+(v_{J}-2)}\] whereas \((e_{J}-1)/(v_{J}-2)<m_{2}(L)\). Since \(v_{J}>2\), we may apply Lemma 3.3 to conclude the desired result, with a strict inequality.
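The following example is purely illustrative and is not used anywhere in the argument; all values follow directly from the definitions of the various densities. **Example**.: Consider the pair \((K_{4},K_{3})\), which is easily checked to be strictly balanced. We have \[m_{2}(K_{3})=\frac{3-1}{3-2}=2,\qquad m_{2}(K_{4})=\frac{6-1}{4-2}=\frac{5}{2},\qquad m_{2}(K_{4},K_{3})=\frac{6}{4-2+1/2}=\frac{12}{5},\] where the maximum defining \(m_{2}(K_{4},K_{3})\) is attained by \(J=K_{4}\) itself (every proper subgraph \(J\subsetneq K_{4}\) with \(v_{J}\geqslant 2\) yields a ratio of at most \(2\)). In particular, \(m_{2}(K_{3})=2<\frac{12}{5}<\frac{5}{2}=m_{2}(K_{4})\), as guaranteed by Lemma 3.4, and the threshold appearing in Conjecture 1.2 for the pair \((K_{4},K_{3})\) is \(n^{-1/m_{2}(K_{4},K_{3})}=n^{-5/12}\).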
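For small graphs, all of the quantities discussed in this section can also be computed mechanically, which can be convenient when checking particular pairs. The following short Python sketch is not part of the paper: the brute-force approach and all function names are ours, and we assume the standard convention that the maximum defining \(m_{2}\) ranges over subgraphs with at least three vertices (with \(m_{2}\) of a single edge equal to \(\frac{1}{2}\)). It is enough to enumerate induced subgraphs, since for a fixed vertex set the relevant ratios only increase when edges are added.

```python
from itertools import combinations

def induced_edges(edges, subset):
    """Number of edges with both endpoints inside `subset`."""
    s = set(subset)
    return sum(1 for u, v in edges if u in s and v in s)

def max_density(vertices, edges):
    """m(G): maximum of e_J / v_J over nonempty vertex subsets (induced subgraphs suffice)."""
    return max(induced_edges(edges, sub) / k
               for k in range(1, len(vertices) + 1)
               for sub in combinations(vertices, k))

def max_2_density(vertices, edges):
    """m_2(H): maximum of (e_J - 1) / (v_J - 2) over vertex subsets of size at least three;
    by convention, a single edge has 2-density 1/2."""
    best = 0.5 if edges else 0.0
    for k in range(3, len(vertices) + 1):
        for sub in combinations(vertices, k):
            best = max(best, (induced_edges(edges, sub) - 1) / (k - 2))
    return best

def mixed_2_density(vertices, edges, m2_second):
    """m_2(H_1, H_2): maximum of e_J / (v_J - 2 + 1/m_2(H_2)) over vertex subsets of H_1
    of size at least two; `m2_second` is m_2(H_2)."""
    return max(induced_edges(edges, sub) / (k - 2 + 1.0 / m2_second)
               for k in range(2, len(vertices) + 1)
               for sub in combinations(vertices, k))

def clique(k):
    """The complete graph K_k as a (vertex list, edge list) pair."""
    return list(range(k)), list(combinations(range(k), 2))

K3, K4 = clique(3), clique(4)
print(max_density(*K4))                           # 1.5 = m(K_4)
print(max_2_density(*K3), max_2_density(*K4))     # 2.0 and 2.5
print(mixed_2_density(*K4, max_2_density(*K3)))   # 2.4 = 12/5
```

On the pair \((K_{4},K_{3})\), this reproduces the values \(m(K_{4})=\frac{3}{2}\), \(m_{2}(K_{3})=2\), \(m_{2}(K_{4})=\frac{5}{2}\), and \(m_{2}(K_{4},K_{3})=\frac{12}{5}\).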
**Lemma 3.7**.: _Suppose that \((\mathcal{H},\mathcal{L})\) is a strictly balanced pair. Defining \(\alpha\coloneqq m_{2}(\mathcal{H},\mathcal{L})\) and \(X\coloneqq\min_{H\in\mathcal{H}}\{(e_{H}-1)-\alpha\cdot(v_{H}-2)\}\), we have that_ \[X+(v_{K}-2)(\alpha-1)\geqslant e_{K}\cdot\left(\frac{\alpha}{m_{2}(L)}-1\right)\] _for every \(L\in\mathcal{L}\) and every non-empty \(K\subseteq L\). Moreover, the inequality is strict unless \(K=K_{2}\)._ Proof.: Without loss of generality, we may assume that \(m_{2}(L)<\alpha\) and that \(v_{K}>2\), as otherwise the statement holds vacuously (recall from Lemma 3.4 that \(\alpha=m_{2}(\mathcal{H},\mathcal{L})>m_{2}(\mathcal{L})>1\)). Fix some \(L\in\mathcal{L}\) and a nonempty \(K\subseteq L\). Recall that each \(H\in\mathcal{H}\) is strictly \(m_{2}(\cdot,\mathcal{L})\)-balanced and satisfies \(m_{2}(H,\mathcal{L})\geqslant m_{2}(\mathcal{H},\mathcal{L})=\alpha\). This implies that \[\frac{e_{H}}{v_{H}-2+1/m_{2}(\mathcal{L})}\geqslant\alpha\] or, equivalently, \[e_{H}\geqslant\alpha\cdot(v_{H}-2)+\frac{\alpha}{m_{2}(\mathcal{L})}.\] Consequently, \[X=\min_{H\in\mathcal{H}}\{(e_{H}-1)-\alpha\cdot(v_{H}-2)\}\geqslant\frac{ \alpha}{m_{2}(\mathcal{L})}-1\geqslant\frac{\alpha}{m_{2}(L)}-1,\] where the final inequality uses that \(m_{2}(L)\geqslant m_{2}(\mathcal{L})\). Since \(L\) is strictly \(2\)-balanced and we assumed that \(m_{2}(L)<\alpha\), we have \[(e_{K}-1)\cdot\left(\frac{\alpha}{m_{2}(L)}-1\right)\leqslant m_{2}(L)\cdot(v_{K} -2)\cdot\left(\frac{\alpha}{m_{2}(L)}-1\right)=(v_{K}-2)(\alpha-m_{2}(L)).\] Rearranging the above inequality, we obtain \[e_{K}\cdot\left(\frac{\alpha}{m_{2}(L)}-1\right)-(v_{K}-2)( \alpha-1) \leqslant(1-m_{2}(L))(v_{K}-2)+\left(\frac{\alpha}{m_{2}(L)}-1\right)\] \[<\frac{\alpha}{m_{2}(L)}-1\leqslant X,\] where the penultimate inequality uses the assumption that \(v_{K}>2\). ## 4. Proof of the probabilistic lemma In this section, we prove Theorem 1.10. We in fact prove the following more precise statement. **Lemma 4.1** (Theorem 1.10, rephrased).: _Let \((\mathcal{H},\mathcal{L})\) be a strictly balanced pair of finite families of graphs satisfying \(m_{2}(\mathcal{H})>m_{2}(\mathcal{L})\). There exists a constant \(c>0\) such that the following holds. If \(p\leqslant cn^{-1/m_{2}(\mathcal{H},\mathcal{L})}\), then a.a.s. every \(G\subseteq G_{n,p}\) which supports a core satisfies \(m(G)\leqslant m_{2}(\mathcal{H},\mathcal{L})\)._ Note that this immediately implies the difficult direction in Theorem 1.10. Indeed, suppose that the \(0\)-statement of 1.9 fails for some tuple \((\mathcal{H}_{1},\ldots,\mathcal{H}_{r})\), i.e., the random graph \(G_{n,p}\) is Ramsey for \((\mathcal{H}_{1},\ldots,\mathcal{H}_{r})\) with probability bounded away from zero when \(p=cn^{-1/m_{2}(\mathcal{H}_{1},\mathcal{H}_{2})}\), for an arbitrarily small constant \(c>0\). In particular, with probability bounded away from zero, \(G_{n,p}\) contains a graph that is also Ramsey for any pair \((\mathcal{H},\mathcal{L})\) of families of subgraphs of \((\mathcal{H}_{1},\mathcal{H}_{2})\). For an appropriately chosen pair \((\mathcal{H},\mathcal{L})\), Lemma 3.2 implies that some subgraph \(G\subseteq G_{n,p}\) supports an \((\mathcal{H},\mathcal{L})\)-core. By the assumed assertion of Lemma 4.1, a.a.s. any such \(G\subseteq G_{n,p}\) satisfies \(m(G)\leqslant m_{2}(\mathcal{H},\mathcal{L})\). However, by the deterministic lemma (i.e. 
the assumption of Theorem 1.10), we know that no such \(G\) can be Ramsey for \((\mathcal{H},\mathcal{L})\), a contradiction. Our proof of Lemma 4.1 follows closely the proof of the probabilistic lemma in recent work of the first two authors [17]. Fix a strictly balanced pair \((\mathcal{H},\mathcal{L})\) of families satisfying \(m_{2}(\mathcal{H})>m_{2}(\mathcal{L})\), and let \(\alpha\coloneqq m_{2}(\mathcal{H},\mathcal{L})\). Let \(\mathcal{G}_{\mathrm{bad}}\) denote the set of graphs \(G\subseteq K_{n}\) which support a core and satisfy \(m(G)>m_{2}(\mathcal{H},\mathcal{L})\). The key lemma, which implies Lemma 4.1, is as follows. **Lemma 4.2**.: _There exist constants \(\Lambda,K>0\) and a collection \(\mathcal{S}\) of subgraphs of \(K_{n}\) satisfying the following properties:_ 1. _Every element of_ \(\mathcal{G}_{\mathrm{bad}}\) _contains some_ \(S\in\mathcal{S}\) _as a subgraph._ 2. _Every_ \(S\in\mathcal{S}\) _satisfies at least one of the following three conditions:_ 1. \(v_{S}\geqslant\log n\) _and_ \(e_{S}\geqslant\alpha\cdot(v_{S}-2)\)_;_ 2. \(v_{S}<\log n\) _and_ \(e_{S}\geqslant\alpha\cdot v_{S}+1\)_;_ 3. \(v_{S}\leqslant K\) _and_ \(m(S)>\alpha\)_._ 3. _For every_ \(k\in\llbracket n\rrbracket\)_, there are at most_ \((\Lambda n)^{k}\) _graphs_ \(S\in\mathcal{S}\) _with_ \(v_{S}=k\)_._ Before we prove Lemma 4.2, let us see why it implies Lemma 4.1. Proof of Lemma 4.1.: Recall that \(p\leqslant cn^{-1/\alpha}\), for a small constant \(c=c(\mathcal{H},\mathcal{L})\) to be chosen later. We wish to prove that a.a.s. \(G_{n,p}\) contains no element of \(\mathcal{G}_{\mathrm{bad}}\). By Lemma 4.2(a), it suffices to prove that a.a.s. \(G_{n,p}\) contains no element of \(\mathcal{S}\). By (b), the elements of \(\mathcal{S}\) are of three types, each of which we deal with separately. First, recall that for any fixed graph \(S\) with \(m(S)>\alpha\), we have that \(\Pr(S\subseteq G_{n,p})=o(1)\) (see e.g. [14, Theorem 3.4]). As there are only a constant number of graphs on at most \(K\) vertices, we may apply the union bound and conclude that a.a.s. no graph \(S\) satisfying \(v_{S}\leqslant K\) and \(m(S)>\alpha\) appears in \(G_{n,p}\). This deals with the elements of \(\mathcal{S}\) corresponding to case (iii). Let \(\mathcal{S}^{\prime}\subseteq\mathcal{S}\) be the set of \(S\in\mathcal{S}\) which lie in cases (i) or (ii). We have that \[\Pr(S\subseteq G_{n,p}\text{ for some }S\in\mathcal{S}^{\prime}) \leqslant\sum_{S\in\mathcal{S}^{\prime}}p^{e_{S}}\] \[\leqslant\sum_{k=1}^{\lceil\log n\rceil-1}(\Lambda n)^{k}p^{ \alpha k+1}+\sum_{k=\lceil\log n\rceil}^{\infty}(\Lambda n)^{k}p^{\alpha(k-2)}\] \[\leqslant p\sum_{k=1}^{\infty}(\Lambda c^{\alpha})^{k}+c^{-2 \alpha}n^{2}\sum_{k=\lceil\log n\rceil}^{\infty}(\Lambda c^{\alpha})^{k}\] We now choose \(c\) so that \(\Lambda c^{\alpha}=e^{-3}\). Then the first sum above can be bounded by \(p\), which tends to \(0\) as \(n\to\infty\). The second term can be bounded by \(2c^{-2\alpha}n^{-1}\), which also tends to \(0\) as \(n\to\infty\). All in all, we find that a.a.s. \(G_{n,p}\) does not contain any graph in \(\mathcal{S}\), as claimed. ### The exploration process and the proof of Lemma 4.2 In this section, we prove Lemma 4.2. We will construct the family \(\mathcal{S}\) by considering an exploration process on the set \(\mathcal{G}\) of graphs \(G\subseteq K_{n}\) which support a core. 
For each such \(G\in\mathcal{G}\), let us arbitrarily choose collections \(\mathcal{F}_{\mathcal{H}}\subseteq\mathcal{F}_{\mathcal{H}}[G]\) and \(\mathcal{F}_{\mathcal{L}}\subseteq\mathcal{F}_{\mathcal{L}}[G]\) such that \((G,\mathcal{F}_{\mathcal{H}},\mathcal{F}_{\mathcal{L}})\) is a core. From now on, by copies of graphs from \(\mathcal{H},\mathcal{L}\) in \(G\), we mean only those copies that belong to the families \(\mathcal{F}_{\mathcal{H}},\mathcal{F}_{\mathcal{L}}\), respectively. This subtlety will be extremely important in parts of the analysis. We first fix arbitrary orderings on the graphs in \(\mathcal{H}\) and \(\mathcal{L}\). Additionally, we fix a labeling of the vertices of \(K_{n}\), which induces an ordering of all subgraphs according to the lexicographic order. Together with the ordering on \(\mathcal{H},\mathcal{L}\), we obtain a lexicographic ordering on all copies in \(K_{n}\) of graphs in \(\mathcal{H},\mathcal{L}\). Now, given a \(G\in\mathcal{G}\), we build a sequence \(G_{0}\subsetneq G_{1}\subsetneq\cdots\subseteq G\) as follows. We start with \(G_{0}\) being the graph comprising only the smallest edge of \(G\). As long as \(G_{i}\neq G\), do the following: Since \(G\neq G_{i}\) and \((G,\mathcal{F}_{\mathcal{H}},\mathcal{F}_{\mathcal{L}})\) is a core, there must be some copy of a graph from \(\mathcal{H}\cup\mathcal{L}\) which belongs to \(\mathcal{F}_{\mathcal{H}}\cup\mathcal{F}_{\mathcal{L}}\) that intersects \(G_{i}\) but is not fully contained in \(G_{i}\). Call such an _overlapping_ copy _regular_ if it intersects \(G_{i}\) in exactly one edge, called its _root_; otherwise, call the copy _degenerate_. We form \(G_{i+1}\) from \(G_{i}\) as follows: 1. Suppose first that there is an overlapping copy of some graph in \(\mathcal{H}\). We form \(G_{i+1}\) by adding to \(G_{i}\) the smallest (according to the lexicographic order) such copy. We call \(G_{i}\to G_{i+1}\) a _degenerate \(\mathcal{H}\)-step_. 2. Otherwise, there must be an overlapping copy \(\widehat{L}\) of some \(L\in\mathcal{L}\). Note that, for every edge \(e\in\widehat{L}\setminus G_{i}\), there must be a copy of some \(H\in\mathcal{H}\) that meets \(\widehat{L}\) only at \(e\), as \((G,\mathcal{F}_{\mathcal{H}},\mathcal{F}_{\mathcal{L}})\) is a core. Note further that this copy of \(H\) does not intersect \(G_{i}\), as otherwise we would perform a degenerate \(\mathcal{H}\)-step. We pick the smallest such copy for every \(e\in\widehat{L}\setminus G_{i}\), and call it \(\widehat{H_{e}}\) (note that the graphs \(H_{e}\in\mathcal{H}\) such that \(H_{e}\cong\widehat{H_{e}}\) may be different for different choices of \(e\)). We say that \(\widehat{L}\) is _pristine_ if it is regular and the graphs \(\{\widehat{H_{e}}\}_{e\in\widehat{L}\setminus G_{i}}\) are all vertex-disjoint (apart from the intersections that they are forced to have in \(V(\widehat{L})\)). 1. If there is a pristine copy of some graph in \(\mathcal{L}\), we pick the smallest one in the following sense: First, among all edges of \(G_{i}\) that are roots of a pristine copy of some graph in \(\mathcal{L}\), we choose the one that arrived to \(G_{i}\) earliest. Second, among all pristine copies that are rooted at this edge, we pick the smallest (according to the lexicographic order). We then form \(G_{i+1}\) by adding to \(G_{i}\) this smallest copy \(\widehat{L}\) as well as all \(\widehat{H_{e}}\) where \(e\in\widehat{L}\setminus G_{i}\). We call \(G_{i}\to G_{i+1}\) a _pristine step_. 2. 
If there are no pristine copies of any graph in \(\mathcal{L}\), we pick the smallest (according to the lexicographic order) overlapping copy \(\widehat{L}\) of a graph in \(\mathcal{L}\) and we still form \(G_{i+1}\) by adding to \(G_{i}\) the union of \(\widehat{L}\) and all its \(\widehat{H_{e}}\) with \(e\in\widehat{L}\setminus G_{i}\). We call \(G_{i}\to G_{i+1}\) a _degenerate \(\mathcal{L}\)-step_. We define the _balance_ of \(G_{i}\) to be \[b(G_{i})\coloneqq e_{G_{i}}-\alpha\cdot v_{G_{i}},\] where we recall that \(\alpha=m_{2}(\mathcal{H},\mathcal{L})\). The key result we will need in order to prove (b) is the following lemma. We remark that a similar result was proved by Hyde [13, Claims 6.2 and 6.3]; it plays an integral role in his approach to the Kohayakawa-Kreuter conjecture. **Lemma 4.3**.: _For every \(i\), we have that \(b(G_{i+1})\geqslant b(G_{i})\). Moreover, there exists some \(\delta=\delta(\mathcal{H},\mathcal{L})>0\) such that \(b(G_{i+1})\geqslant b(G_{i})+\delta\) if \(G_{i+1}\) was obtained from \(G_{i}\) by a degenerate step._ As the proof of Lemma 4.3 is somewhat technical, we defer it to Section 4.2. For the moment, we assume the result and continue the discussion of how we construct the family \(\mathcal{S}\). We now let \(\Gamma\coloneqq\lceil 2\alpha/\delta\rceil\), where \(\delta\) is the constant from Lemma 4.3. For \(G\in\mathcal{G}\), let \[\tau(G)\coloneqq\min\{i:v_{G_{i}}\geqslant\log n\text{ or }G_{i}=G\text{ or }G_{i-1}\to G_{i}\text{ is the $\Gamma$th degenerate step}\}\] and let \[\mathcal{S}\coloneqq\{G_{\tau(G)}:G\in\mathcal{G}_{\text{bad}}\}. \tag{1}\] Having defined the family \(\mathcal{S}\), we are ready to prove Lemma 4.2. Since the definition of \(\mathcal{S}\) clearly guarantees property (a), it remains to establish properties (b) and (c). We begin by showing that, if \(K\) is sufficiently large (depending only on \(\mathcal{H}\) and \(\mathcal{L}\)), then (b) holds. Proof of Lemma 4.2(b).: Let \(\delta\) be the constant from Lemma 4.3, let \(M\coloneqq\max\{e_{L}\cdot v_{H}:H\in\mathcal{H},L\in\mathcal{L}\}\), and let \(K\coloneqq 2M^{2}\Gamma\); note that each of these parameters depends only on \(\mathcal{H}\) and \(\mathcal{L}\). Every \(S\in\mathcal{S}\) is of the form \(G_{\tau(G)}\) for some \(G\in\mathcal{G}_{\text{bad}}\). We split into cases depending on which of the three conditions defining \(\tau(G)\) caused us to stop the exploration. Suppose first that we stopped the exploration because \(v_{S}\geqslant\log n\). By Lemma 4.3, we have that \[e_{S}-\alpha\cdot v_{S}=b(S)=b(G_{\tau(G)})\geqslant b(G_{0})=1-2\alpha,\] and therefore \(e_{S}\geqslant\alpha\cdot(v_{S}-2)\). This yields case (i). Next, suppose we stopped the exploration because step \(G_{\tau(G)-1}\to G_{\tau(G)}\) was the \(\Gamma\)th degenerate step. As we are not in the previous case, we may assume that \(v_{S}<\log n\). By Lemma 4.3 and our choice of \(\Gamma\), we have that \[e_{S}-\alpha\cdot v_{S}=b(S)=b(G_{\tau(G)})\geqslant b(G_{0})+\Gamma\delta \geqslant 1-2\alpha+2\alpha=1.\] Rearranging, we see that \(e_{S}\geqslant\alpha\cdot v_{S}+1\), yielding case (ii). The remaining case is when we stop because \(S=G\in\mathcal{G}_{\text{bad}}\). Since the definition of \(\mathcal{G}_{\text{bad}}\) implies that \(m(G)>\alpha\), in order to establish (iii), we only need to show that \(v_{G}\leqslant K\). For this proof, we need to keep track of another parameter during the exploration process, which we term the _pristine boundary_. 
Recall that at every pristine step, we add to \(G_{i}\) a copy \(\widehat{L}\) of some \(L\in\mathcal{L}\) that intersects \(G_{i}\) in a single edge (the root), and then add copies \(\widehat{H_{e}}\) of graphs \(H_{e}\in\mathcal{H}\), one for every edge of \(\widehat{L}\) apart from the root. Let us say that the _boundary_ of this step is the set of all newly added vertices that are not in \(\widehat{L}\), that is, the set \(V(G_{i+1})\setminus(V(G_{i})\cup V(\widehat{L}))=(\bigcup_{e\in\widehat{L} \setminus G_{i}}V(\widehat{H_{e}}))\setminus V(\widehat{L})\). Note that the size of the boundary is equal to \[Y_{i}\coloneqq\sum_{e\in\widehat{L}\setminus G_{i}}(v_{H_{e}}-2);\] indeed, by the definition of pristine steps, the copies \(\widehat{H_{e}}\) are vertex-disjoint outside of \(V(\widehat{L})\). We claim that \(Y_{i}\geqslant 3\). To see this, note first that \(L\) has at least three edges, as it is not a forest. Similarly, each \(H_{e}\) has at least three vertices. Putting these together, we see that there are at least two terms in the sum, and every term in the sum is at least one. Thus, \(Y_{i}\geqslant 3\) unless \(e_{L}=3\) and \(v_{H_{e}}=3\) for all \(e\). But in this case, \(L=K_{3}=H_{e}\in\mathcal{H}\) for all \(e\), which means that \(\widehat{L}\) should have been added to \(G_{i}\) as a degenerate \(\mathcal{H}\)-step. We now inductively define the pristine boundary \(\partial G_{i}\) of \(G_{i}\) as follows. We set \(\partial G_{0}\coloneqq\varnothing\). If \(G_{i}\to G_{i+1}\) is a pristine step, then we delete from \(\partial G_{i}\) the two endpoints of the root and add to \(\partial G_{i}\) the boundary of this pristine step. Note that \(|\partial G_{i+1}|\geqslant|\partial G_{i}|+Y_{i}-2\geqslant|\partial G_{i}|+1\). On the other hand, if \(G_{i}\to G_{i+1}\) is a degenerate step, then we only remove vertices from \(\partial G_{i}\), without adding any new vertices. Namely, we remove from \(\partial G_{i}\) all the vertices which are included in the newly added graphs. In other words, if we performed a degenerate \(\mathcal{H}\)-step by adding a copy \(\widehat{H}\) of some graph in \(\mathcal{H}\), we set \(\partial G_{i+1}\coloneqq\partial G_{i}\setminus V(\widehat{H})\). Similarly, if we performed a degenerate \(\mathcal{L}\)-step by adding a copy \(\widehat{\mathcal{L}}\) of some graph in \(\mathcal{L}\) along with the graphs \(\widehat{H_{e}}\) for all \(e\in\widehat{\mathcal{L}}\setminus G_{i}\), we set \(\partial G_{i+1}\coloneqq\partial G_{i}\setminus(V(\widehat{L})\cup\bigcup_{ e}V(\widehat{H_{e}}))\). Note that in either case \(|\partial G_{i+1}|\geqslant|\partial G_{i}|-M\), as the union of all graphs added in each degenerate step can have at most \(M\) vertices. We now argue that \(\partial G_{\tau(G)}=\varnothing\). Indeed, suppose we had some vertex \(v\in\partial G_{\tau(G)}\). By definition, \(v\) was added during a pristine step, as a vertex of a copy \(\widehat{H_{e}}\) of some graph \(H_{e}\in\mathcal{H}\), and was never touched again. Observe that \(v\) is incident to some edge \(uv\) of \(\widehat{H_{e}}\) that was not touched by any later step of the exploration. However, as \((G,\mathcal{F}_{\mathcal{H}},\mathcal{F}_{\mathcal{L}})\) is a core and \(\widehat{H_{e}}\in\mathcal{F}_{\mathcal{H}}\), there must be some \(\widehat{L_{uv}}\in\mathcal{F}_{\mathcal{L}}\) that intersects \(\widehat{H_{e}}\) only at \(uv\). 
Moreover, as \(\widehat{L_{uv}}\) has minimum degree at least two (by the strict \(2\)-balancedness assumption), there is some edge \(vw\in\widehat{L_{uv}}\setminus uv\) that is incident to \(v\). Since we assumed that \(G_{\tau(G)}=G\), the edge \(vw\) must have been added at some point, a contradiction to the assumption that \(v\) was never touched again. Finally, since \(|\partial G_{i}|\) increases by at least one during every pristine step and decreases by at most \(M\) during each of the at most \(\Gamma\) degenerate steps, in order to achieve \(\partial G_{\tau(G)}=\varnothing\), there can be at most \(M\Gamma\) pristine steps. In particular, the total number of exploration steps is at most \(M\Gamma+\Gamma\). As each exploration step adds at most \(M\) vertices to \(G_{i}\), we conclude that \(v_{G}\leqslant M(M\Gamma+\Gamma)+2\leqslant K\). This completes the proof of (iii). Proof of Lemma 4.2(c).: Suppose \(S\) has \(k\) vertices and let \(G\in\mathcal{G}_{\text{bad}}\) be such that \(S=G_{\tau(G)}\). We consider the exploration process on \(G\). Note that in every step we add an overlapping copy of a graph from a finite family \(\mathcal{F}\) that comprises all graphs in \(\mathcal{H}\) (for the cases where we made a degenerate \(\mathcal{H}\)-step) and graphs in \(\mathcal{L}\) that have graphs from \(\mathcal{H}\) glued on subsets of their edges, with all intersection patterns (for the pristine and degenerate \(\mathcal{L}\)-steps). Let \(\mathcal{F}^{\times}\) denote the graphs in \(\mathcal{F}\) that correspond to a pristine step. Now, every degenerate step can be described by specifying the graph \(F\in\mathcal{F}\) whose copy \(\widehat{F}\) we are adding, the subgraph \(F^{\prime}\subseteq F\) and the embedding \(\varphi\colon V(F^{\prime})\to V(G_{i})\) that describe the intersection \(\widehat{F}\cap G_{i}\), and the sequence of \(v_{F}-v_{F^{\prime}}\) vertices of \(K_{n}\) that complete \(\varphi\) to an embedding of \(F\) into \(K_{n}\). Every pristine step is uniquely described by the root edge in \(G_{i}\), the graph \(F\in\mathcal{F}^{\times}\), the edge of \(F\) corresponding to the root, and the (ordered sequence of) \(v_{F}-2\) vertices of \(K_{n}\) that complete the root edge to a copy of \(F\) in \(K_{n}\). There are at most \(n^{k}\) ways to choose the sequence of vertices that were added through this exploration process, in the order that they are introduced to \(G\). Each pristine step adds at least one new vertex, so there are at most \(k\) pristine steps. Furthermore, there are always at most \(\Gamma\) degenerate steps, meaning that \(\tau(G)\leqslant k+\Gamma\). In particular, there are at most \((k+\Gamma)\cdot 2^{k+\Gamma}\) ways to choose \(\tau(G)\) and to specify which steps were pristine. For every degenerate step, there are at most \[\sum_{F\in\mathcal{F}}\sum_{\ell=2}^{v_{F}}\binom{v_{F}}{\ell}k^{\ell} \leqslant|\mathcal{F}|\cdot(k+1)^{M_{v}}\] ways of choosing \(F\in\mathcal{F}\) and describing the intersection of its copy \(\widehat{F}\) with \(G_{i}\) (the set \(V(F^{\prime})\subseteq V(F)\) and the embedding \(\varphi\) above), where \(M_{v}\coloneqq\max\{v_{F}\colon F\in\mathcal{F}\}\). As for the pristine steps, note that, in the course of our exploration, the sequence of the arrival times of the roots to \(G_{\tau(G)}\) must be non-decreasing. 
This is because as soon as an edge appears in some \(G_{i}\), every pristine step that includes it as a root at any later step is already available, and we always choose the one rooted at the edge that arrived to \(G\) the earliest. Therefore, there are at most \(\binom{es+k}{k}\) possible sequences of root edges, since this is the number of non-decreasing sequences of length \(k\) in \(\llbracket e_{S}\rrbracket\). To supplement this bound, remember that every step increases the number of edges in \(G_{i}\) by at most \(M_{e}\coloneqq\max\{e_{F}:F\in\mathcal{F}\}\), which means that \[e_{S}\leqslant 1+\tau(G)\cdot M_{e}\leqslant 1+(k+\Gamma)\cdot M_{e}.\] To summarize, the number of \(S\in\mathcal{S}\) with \(k\) vertices is at most \[n^{k}\cdot(k+\Gamma)\cdot 2^{k+\Gamma}\cdot\left(|\mathcal{F}|\cdot(k+1)^{M_{v}} \right)^{\Gamma}\cdot\binom{(k+\Gamma)\cdot M_{e}+k+1}{k}\cdot\left(|\mathcal{F} |\cdot M_{e}\right)^{k}.\] Every term in this product, apart from the first, is bounded by an exponential function of \(k\), since \(\Gamma,|\mathcal{F}|,M_{v}\), and \(M_{e}\) are all constants. Therefore, if we choose \(\Lambda=\Lambda(\mathcal{H},\mathcal{L})\) sufficiently large, we find that the number of \(S\in\mathcal{S}\) with \(v_{S}=k\) is at most \((\Lambda n)^{k}\), as claimed. ### Proof of Lemma 4.3 In this section, we prove Lemma 4.3. The proof is divided into a number of claims. Recall Lemma 3.5, which asserts that \[e_{H}-e_{F}>m_{2}(\mathcal{H},\mathcal{L})\cdot(v_{H}-v_{F})=\alpha\cdot(v_{H }-v_{F})\] for all \(H\in\mathcal{H}\) and all \(F\subsetneq H\). This implies that we can choose some \(\delta_{1}=\delta_{1}(\mathcal{H},\mathcal{L})>0\) so that \[e_{H}-e_{F}\geqslant\alpha\cdot(v_{H}-v_{F})+\delta_{1} \tag{2}\] for all \(H\in\mathcal{H}\) and all \(F\subsetneq H\); we henceforth fix such a \(\delta_{1}>0\). Our first claim deals with the (easy) case that \(G_{i}\to G_{i+1}\) is a degenerate \(\mathcal{H}\)-step. **Claim 4.4**.: _If \(G_{i}\to G_{i+1}\) is a degenerate \(\mathcal{H}\)-step, then \(b(G_{i+1})\geqslant b(G_{i})+\delta_{1}\)._ Proof.: Suppose we add to \(G_{i}\) a copy of some \(H\in\mathcal{H}\) that intersects \(G_{i}\) on a subgraph \(F\subseteq H\). This means that \[e_{G_{i+1}}=e_{G_{i}}+(e_{H}-e_{F})\qquad\text{ and }\qquad v_{G_{i+1}}=v_{G_{i}}+( v_{H}-v_{F})\] and thus \[b(G_{i+1})-b(G_{i})=(e_{H}-e_{F})-\alpha\cdot(v_{H}-v_{F})\geqslant\delta_{1},\] where the inequality follows from (2), as \(F\) must be a proper subgraph of \(H\). Now, suppose that \(G_{i}\to G_{i+1}\) is an \(\mathcal{L}\)-step, either degenerate or pristine, which means that we add a copy \(\widehat{L}\) of some \(L\in\mathcal{L}\) and then add, for every edge \(e\in\widehat{L}\setminus G_{i}\), a copy \(\widehat{H_{e}}\) of some \(H_{e}\in\mathcal{H}\). Let \(G_{i}^{\prime}\coloneqq G_{i}\cup\widehat{L}\) and let \(\widehat{J}\coloneqq G_{i}\cap\widehat{L}\), so that \(\widehat{J}\cong J\) for some \(J\subsetneq L\) with at least one edge. Note that \[b(G_{i}^{\prime})-b(G_{i})=(e_{L}-e_{J})-\alpha\cdot(v_{L}-v_{J}), \tag{3}\] as we add \(e_{L}-e_{J}\) edges and \(v_{L}-v_{J}\) vertices to \(G_{i}\) when forming \(G_{i}^{\prime}\). In order to analyze \(b(G_{i+1})-b(G_{i}^{\prime})\), we now define an auxiliary graph \(\mathcal{I}\) as follows. Its vertices are the edges of \(\widehat{L}\setminus\widehat{J}\). Recall that, for every such edge \(e\), the graph \(\widehat{H_{e}}\cong H_{e}\) intersects \(G_{i}^{\prime}\) only in the edge \(e\). 
A pair \(e,f\) of edges of \(\widehat{L}\setminus\widehat{J}\) will be adjacent in \(\mathcal{I}\) if and only if their corresponding graphs \(\widehat{H_{e}}\) and \(\widehat{H_{f}}\) share at least one edge (equivalently, the graphs \(\widehat{H_{e}}\setminus e\) and \(\widehat{H_{f}}\setminus f\) share an edge). Denote the connected components of \(\mathcal{I}\) by \(K_{1},\ldots,K_{m}\) and note that each of them corresponds to a subgraph of \(\widehat{L}\setminus\widehat{J}\). For each \(j\in\llbracket m\rrbracket\), let \[U_{j}\coloneqq\bigcup_{e\in K_{j}}(\widehat{H_{e}}\setminus e).\] Note that the graphs \(G_{i}^{\prime}\) and \(U_{1},\ldots,U_{m}\) are pairwise edge-disjoint and that each \(U_{j}\) shares at least \(v_{K_{j}}\) vertices (the endpoints of all the edges of \(K_{j}\)) with \(G_{i}^{\prime}\). It follows that \[b(G_{i+1})-b(G_{i}^{\prime})\geqslant\sum_{j=1}^{m}(e_{U_{j}}-\alpha\cdot(v_{ U_{j}}-v_{K_{j}}))=\sum_{j=1}^{m}(b(U_{j})+\alpha\cdot v_{K_{j}}). \tag{4}\] Finally, as in the statement of Lemma 3.7, define \[X\coloneqq\min\{(e_{H}-1)-\alpha\cdot(v_{H}-2):H\in\mathcal{H}\}.\] The following claim lies at the heart of the matter. **Claim 4.5**.: _For every \(j\in\llbracket m\rrbracket\), we have_ \[b(U_{j})\geqslant X-2\alpha-(v_{K_{j}}-2)+\min\{\delta_{1},1\}\cdot\mathbf{1}_{v _{K_{j}}>2}.\] Proof.: Since \(K_{j}\) is connected in \(\mathcal{I}\), we may order its edges as \(e_{1},\ldots,e_{\ell}\) so that, for each \(r\in\llbracket\ell-1\rrbracket\), the edge \(e_{r+1}\) is \(\mathcal{I}\)-adjacent to \(\{e_{1},\ldots,e_{r}\}\). Letting \(F\subseteq H_{e_{r+1}}\) be the subgraph corresponding to this intersection, we define, for each \(r\in\{0,\ldots,\ell\}\), \[U_{j}^{r}\coloneqq\bigcup_{s=1}^{r}(\widehat{H_{e_{s}}}\setminus e_{s}),\] so that \(\varnothing=U_{j}^{0}\subseteq\cdots\subseteq U_{j}^{\ell}=U_{j}\). Observe that \[b(U_{j}^{1})=e_{U_{j}^{1}}-\alpha\cdot v_{U_{j}^{1}}=(e_{H_{e_{1}}}-1)-\alpha \cdot v_{H_{e_{1}}}\geqslant X-2\alpha,\] where the inequality follows from the definition of \(X\). Suppose now that \(r\geqslant 1\) and let \(\widehat{F}\) be the intersection of \(\widehat{H_{e_{r+1}}}\setminus e_{r+1}\) with \(U_{j}^{r}\); note that this intersection is non-empty as \(e_{r+1}\) is \(\mathcal{I}\)-adjacent to \(\{e_{1},\ldots,e_{r}\}\). We have \[b(U_{j}^{r+1})-b(U_{j}^{r})=(e_{H_{e_{r+1}}}-1-e_{F})-\alpha\cdot(v_{H_{e_{r+1 }}}-v_{F}).\] Let \(t_{r+1}\) be the number of endpoints of \(e_{r+1}\) that are not in \(U_{j}^{r}\). Suppose first that \(t_{r+1}=0\), that is, both endpoints of \(e_{r+1}\) are already in \(U_{j}^{r}\). In this case, both endpoints of \(e_{r+1}\) also belong to \(\widehat{F}\) and thus \(\widehat{F}\cup e_{r+1}\) is isomorphic to a subgraph \(F^{+}\subseteq H_{e_{r+1}}\) with \(e_{F}+1\) edges and \(v_{F}\) vertices, which means that \[b(U_{j}^{r+1})-b(U_{j}^{r})=(e_{H_{e_{r+1}}}-e_{F^{+}})-\alpha\cdot(v_{H_{e_{r +1}}}-v_{F^{+}})\geqslant 0,\] by Lemma 3.5. In case \(t_{r+1}>0\), \(F\) is a proper subgraph of \(H_{e_{r+1}}\) and thus we have \[b(U_{j}^{r+1})-b(U_{j}^{r})\geqslant\delta_{1}-1\geqslant\delta_{1}-t_{r+1},\] see (2). 
We may thus conclude that \[b(U_{j})=b(U_{j}^{1})+\sum_{r=1}^{\ell-1}(b(U_{j}^{r+1})-b(U_{j}^{r}))\geqslant X -2\alpha-\sum_{r=1}^{\ell-1}t_{r+1}+\delta_{1}\cdot\mathbf{1}_{t_{2}+\cdots+t _{\ell}>0}.\] The desired inequality follows as \(t_{2}+\cdots+t_{\ell}=|V(K_{j})\setminus V(U_{j}^{1})|\leqslant v_{K_{j}}-2\) and, further, \(v_{K_{j}}>2\) implies that the sum \(t_{2}+\cdots+t_{\ell}\) is either positive or at most \(v_{K_{j}}-3\). We are now ready to show that the balance only increases when we perform an \(\mathcal{L}\)-step. **Claim 4.6**.: _If \(G_{i}\to G_{i+1}\) is an \(\mathcal{L}\)-step, then \(b(G_{i+1})\geqslant b(G_{i})\). Moreover, if this \(\mathcal{L}\)-step is degenerate, then \(b(G_{i+1})\geqslant b(G_{i})+\delta_{2}\) for some \(\delta_{2}>0\) that depends only on \(\mathcal{H}\) and \(\mathcal{L}\)._ Proof.: By (3), (4), and Claim 4.5, we have \[b(G_{i+1})-b(G_{i})=b(G_{i}^{\prime})-b(G_{i})+b(G_{i+1})-b(G_{i} ^{\prime})\] \[\geqslant(e_{L}-e_{J})-\alpha\cdot(v_{L}-v_{J})+\sum_{j=1}^{m}(b (U_{j})+\alpha\cdot v_{K_{j}})\] \[\geqslant(e_{L}-e_{J})-\alpha\cdot(v_{L}-v_{J})+\sum_{j=1}^{m} \big{(}X+(v_{K_{j}}-2)(\alpha-1)\big{)}+\min\{\delta_{1},1\}\cdot\mathbf{1}_{ \mathcal{I}\neq\varnothing},\] since \(\mathcal{I}\) contains an edge only if one of its components has more than two vertices. We now apply Lemma 3.7 to each component \(K_{j}\) to conclude that \[\sum_{j=1}^{m}\big{(}X+(v_{K_{j}}-2)(\alpha-1)\big{)}\geqslant\sum_{j=1}^{m}e _{K_{j}}\cdot\left(\frac{\alpha}{m_{2}(L)}-1\right)=(e_{L}-e_{J})\left(\frac{ \alpha}{m_{2}(L)}-1\right).\] Therefore, \[b(G_{i+1})-b(G_{i})\geqslant(e_{L}-e_{J})\cdot\frac{\alpha}{m_{2}(L)}-\alpha\cdot (v_{L}-v_{J})+\min\{\delta_{1},1\}\cdot\mathbf{1}_{\mathcal{I}\neq\varnothing} \geqslant\min\{\delta_{1},1\}\cdot\mathbf{1}_{\mathcal{I}\neq\varnothing},\] where the last inequality follows from Lemma 3.6. This implies the desired result if the \(\mathcal{L}\)-step is pristine. If the \(\mathcal{L}\)-step is not pristine but \(\mathcal{I}\) has no edges, it means that some vertex was repeated between different \(\widehat{H_{e}}\). In that case, the first inequality in (4) is strict (we assumed there that the graphs \(U_{j}\) share no vertices outside of \(V(K_{j})\)). All in all, we obtain the desired boost in the degenerate case. Combining Claims 4.4 and 4.6, we obtain Lemma 4.3. This completes the proof of the probabilistic lemma. ## 5. Proof of the deterministic lemma Given the probabilistic lemma and the work of the first two authors on the symmetric case [17], in order to prove Conjecture 1.9, which generalizes the Kohayakawa-Kreuter conjecture, we only need to show the following. For every strictly balanced pair \((\mathcal{H},\mathcal{L})\) of finite families of graphs with \(m_{2}(\mathcal{H})>m_{2}(\mathcal{L})>1\), we can two-color the edges of every graph \(G\) satisfying \(m(G)\leqslant m_{2}(\mathcal{H},\mathcal{L})\) so that there are neither red monochromatic copies of any \(H\in\mathcal{H}\) nor blue monochromatic copies of any \(L\in\mathcal{L}\). As discussed in the introduction, we do not know how to do this in all cases. However, the following proposition lists a number of extra assumptions under which we are able to find such a coloring. We recall the notion of the \(1\)_-density_ (or _fractional arboricity_) of a graph \(L\), defined by \[m_{1}(L)\coloneqq\max\left\{\frac{e_{J}}{v_{J}-1}:J\subseteq L,v_{J}\geqslant 2 \right\}.\] We also make the following definition. 
**Definition 5.1**.: Given positive integers \(s\leqslant t\), we say that a graph is an _\((s,t)\)-graph_ if its minimum degree is at least \(s\), and every edge contains a vertex of degree at least \(t\). We say that a graph is _\((s,t)\)-avoiding_ if none of its subgraphs is an \((s,t)\)-graph. **Proposition 5.2**.: _Let \((\mathcal{H},\mathcal{L})\) be a strictly balanced pair of finite families of graphs satisfying \(m_{2}(\mathcal{H})>m_{2}(\mathcal{L})\) and suppose that at least one of the following conditions holds:_ (a) \(\chi(L)\geqslant 3\) _for all_ \(L\in\mathcal{L}\)_;_ (b) \(\chi(H)>m_{2}(\mathcal{H},\mathcal{L})+1\) _for every_ \(H\in\mathcal{H}\)_;_ (c) \(m_{1}(L)>2\) _for all_ \(L\in\mathcal{L}\)_;_ (d) _every_ \(H\in\mathcal{H}\) _contains an_ \((s,t)\)_-graph as a subgraph, for some integers_ \(s\leqslant t\) _satisfying_ \[\frac{1}{s+1}+\frac{1}{t+1}<\frac{1}{m_{2}(\mathcal{H},\mathcal{L})};\] (e) \(\lceil m_{2}(\mathcal{H},\mathcal{L})\rceil<m_{2}(\mathcal{H})\)_._ _Then any graph \(G\) with \(m(G)\leqslant m_{2}(\mathcal{H},\mathcal{L})\) is not Ramsey for \((\mathcal{H},\mathcal{L})\)._ Cases (a)-(c) all follow fairly easily from known coloring techniques; we supply the details in the remainder of this section. Case (d) is proved by a short inductive argument, see below. Case (e) follows from our partial progress on Conjecture 1.5, namely, that we are able to prove it when \(m(G)\) is an integer; we present the proof of this result in Appendix B. We end this section with short derivations of Theorems 1.4 and 1.7 from the proposition. Proof of Theorem 1.4.: Assume that \(m_{2}(L)>\frac{11}{5}\). By passing to a subgraph with the same \(2\)-density, we may assume that \(L\) is strictly \(2\)-balanced. Thanks to cases (a) and (c) of Proposition 5.2, we are done unless \(m_{1}(L)\leqslant 2\) and \(L\) is bipartite. The bounds on \(m_{1}(L)\) and \(m_{2}(L)\) imply that \(2v_{L}-2\geqslant e_{L}>\frac{11}{5}(v_{L}-2)+1\), which yields \(v_{L}<7\). However, as \(L\) is bipartite on at most six vertices, we have \(m_{2}(L)\leqslant m_{2}(K_{3,3})=2\), a contradiction. Proof of Theorem 1.7.: Cases (a), (b), (c), and (f) follow immediately\({}^{7}\) from Proposition 5.2. For Theorem 1.7(d), note that a graph with minimum degree \(d\) is a \((d,d)\)-graph. Thus, if \(H_{1}\) has degeneracy at least \(d\), then it contains some \((d,d)\)-graph as a subgraph. Similarly, Theorem 1.7(e) follows, since if \(s\leqslant t\), then \(K_{s,t}\) is an \((s,t)\)-graph satisfying \(1/m_{2}(K_{s,t})=(s+t-2)/(st-1)\geqslant 1/(s+1)+1/(t+1)\). Footnote 7: Proposition 5.2(c) implies Theorem 1.7(b) thanks to Nash-Williams’s theorem (Theorem 5.7 below). ### Auxiliary results We start with a helpful observation relating \(m(G)\) and the degeneracy of \(G\). We say that a graph is \(d\)_-degenerate_ if its degeneracy is at most \(d\). **Lemma 5.3**.: _Every graph \(G\) is \(\lfloor 2m(G)\rfloor\)-degenerate._ Proof.: For every \(G^{\prime}\subseteq G\), we have \[\delta(G^{\prime})\leqslant\left\lfloor\frac{2e_{G^{\prime}}}{v_{G^{\prime}}} \right\rfloor\leqslant\lfloor 2m(G)\rfloor,\] where \(\delta(G^{\prime})\) is the minimum degree of \(G^{\prime}\). Our second lemma allows us to compare between the various densities. 
**Lemma 5.4**.: _For every graph \(H\), we have \(m_{2}(H)\leqslant m_{1}(H)+\frac{1}{2}\leqslant m(H)+1\)._ Proof.: Notice that both \(\frac{e-1}{v-2}\leqslant\frac{e}{v-1}+\frac{1}{2}\) and \(\frac{e}{v-1}\leqslant\frac{e}{v}+\frac{1}{2}\) are equivalent to \(e\leqslant\binom{v}{2}\), so both inequalities hold whenever \(v,e\) are the numbers of vertices and edges, respectively, of any graph. In particular, if \(v,e\) correspond to the subgraph of \(H\) that achieves \(m_{2}(H)\), we find that \(m_{2}(H)=\frac{e-1}{v-2}\leqslant\frac{e}{v-1}+\frac{1}{2}\leqslant m_{1}(H)+ \frac{1}{2}\). The second inequality follows in the same way, now passing to the subgraph that achieves \(m_{1}(H)\). Our next lemma gives a lower bound on the average degree of an \((s,t)\)-graph. We remark that this inequality is tight for \(K_{s,t}\) and that it can be restated as \(e_{H}/v_{H}\geqslant m(K_{s,t})\). **Lemma 5.5**.: _If \(H\) is an \((s,t)\)-graph, then_ \[\frac{1}{s}+\frac{1}{t}\geqslant\frac{v_{H}}{e_{H}}.\] Proof.: The assumption that \(H\) is an \((s,t)\)-graph implies that, for every \(uv\in E(H)\), we have \(1/\deg(u)+1/\deg(v)\leqslant 1/s+1/t\). This means that \[e_{H}\cdot\left(\frac{1}{s}+\frac{1}{t}\right)\geqslant\sum_{uv\in H}\left( \frac{1}{\deg(u)}+\frac{1}{\deg(v)}\right)=v_{H}.\qed\] The next lemma supplies a decomposition of a graph of bounded degeneracy. **Lemma 5.6**.: _If a graph \(G\) is \((dk-1)\)-degenerate, for some positive integers \(d,k\), then there is a partition \(V(G)=V_{1}\cup\dots\cup V_{k}\) such that the graphs \(G[V_{1}],\dots,G[V_{k}]\) are all \((d-1)\)-degenerate._ Proof.: We may construct the desired partition in the following way. Initialize \(V_{1}=\dots=V_{k}=\varnothing\) and let \(v_{1},\dots,v_{n}\) be an ordering of the vertices of \(G\) such that every \(v_{i}\) has at most \(dk-1\) neighbors preceding it. We distribute the vertices one-by-one, each time putting \(v_{i}\) in a set \(V_{j}\) where, at the time, \(v_{i}\) has the smallest number of neighbors. By the pigeonhole principle, this number is at most \(\lfloor\frac{dk-1}{k}\rfloor=d-1\). Finally, we quote Nash-Williams's theorem on partitions of graphs into forests. **Theorem 5.7** (Nash-Williams [21]).: _A graph \(G\) can be partitioned into \(t\) forests if and only if \(\lceil m_{1}(G)\rceil\leqslant t\)._ ### Proof of Proposition 5.2 We are now ready to prove Proposition 5.2. Denote \(\alpha\coloneqq m_{2}(\mathcal{H},\mathcal{L})\) and let \(G\) be an arbitrary graph satisfying \(m(G)\leqslant\alpha\). We will argue that (the edge set of) \(G\) can be partitioned into an \(\mathcal{H}\)-free graph and an \(\mathcal{L}\)-free graph. We split into cases, depending on which condition is satisfied by the pair \((\mathcal{H},\mathcal{L})\). _Cases (a) and (b)._ Let \(k\coloneqq\lfloor\alpha\rfloor+1\), so that \(m(G)\leqslant\alpha<k\), and note that Lemma 5.3 implies that \(G\) is \((2k-1)\)-degenerate. Consequently, Lemma 5.6 yields two partitions of the edges of \(G\): a partition into a \(1\)-degenerate graph and a \(k\)-colorable graph; and a partition into a \((k-1)\)-degenerate graph and a bipartite graph. The existence of the first partition proves (b), as every \(1\)-degenerate graph is \(\mathcal{L}\)-free whereas the assumption on \(\mathcal{H}\) implies that \(\chi(H)>k\) for every \(H\in\mathcal{H}\). We now argue that the existence of the second partition proves (a). 
To this end, note that the assumption there implies that every bipartite graph is \(\mathcal{L}\)-free, so it is enough to show that \(\delta(H)\geqslant k\) for every \(H\in\mathcal{H}\) and thus every \((k-1)\)-degenerate graph is \(\mathcal{H}\)-free. To see that this is the case, consider an arbitrary \(H\in\mathcal{H}\) and let \(v\in V(H)\) be its vertex with smallest degree. As \(H\) is strictly \(m_{2}(\cdot,\mathcal{L})\)-balanced, Lemma 3.5 gives \(\delta(H)=e_{H}-e_{H\setminus v}>\alpha\), unless \(v_{H}=3\), in which case \(H=K_{3}\) and we still have \(\delta(H)\geqslant 2=m_{2}(H)\geqslant m_{2}(\mathcal{H})>\alpha\). Since \(\delta(H)\) is an integer, we actually have \(\delta(H)\geqslant\lfloor\alpha\rfloor+1=k\), as needed. _Case (c)._ It is enough to show that \(G\) can be partitioned into an \(\mathcal{H}\)-free graph and a union of two forests; indeed, if \(m_{1}(L)>2\) for all \(L\in\mathcal{L}\), then no union of two forests can contain a member of \(\mathcal{L}\) as a subgraph, by (the easy direction of) Theorem 5.7. Let \(m_{1}(\mathcal{H})\coloneqq\min\{m_{1}(H):H\in\mathcal{H}\}\). By Lemma 5.4 and the assumption \(m(G)\leqslant m_{2}(\mathcal{H},\mathcal{L})<m_{2}(\mathcal{H})\), we find that \[m_{1}(G)\leqslant m(G)+\frac{1}{2}\leqslant m_{2}(\mathcal{H})+\frac{1}{2} \leqslant m_{1}(\mathcal{H})+1.\] As a result, if we let \(t\coloneqq\lceil m_{1}(\mathcal{H})\rceil\), we find that \(\lceil m_{1}(G)\rceil\leqslant t+1\) and therefore Theorem 5.7 supplies a partition \(G\) into \(t+1\) forests \(G_{1},\ldots,G_{t+1}\). Taking \(G^{\prime}\coloneqq G_{1}\cup\cdots\cup G_{t-1}\), we arrive at a partition \(G=G^{\prime}\cup(G_{t}\cup G_{t+1})\). By (the easy direction of) Theorem 5.7, we know that \(m_{1}(G^{\prime})\leqslant t-1<m_{1}(\mathcal{H})\), so \(G^{\prime}\) is \(\mathcal{H}\)-free. As \(G_{t}\) and \(G_{t+1}\) are forests, we get the desired decomposition. _Case (d)._ It is enough to show that \(G\) can be decomposed into a forest and an \((s,t)\)-avoiding graph. Assume that this is not the case and let \(G\) be a smallest counterexample with \(m(G)\leqslant\alpha\). It is enough to show that \(G\) is an \((s+1,t+1)\)-graph, as then Lemma 5.5 gives \[\frac{1}{s+1}+\frac{1}{t+1}\geqslant\frac{v_{G}}{e_{G}}\geqslant\frac{1}{m(G) }\geqslant\frac{1}{\alpha},\] a contradiction. Suppose first that \(G\) has a vertex \(v\) of degree at most \(s\). By minimality of \(G\), we can decompose the edges of \(G\setminus v\) into an \((s,t)\)-avoiding graph \(K\) and a forest \(F\). Adding an arbitrary edge incident with \(v\) to \(F\) and the remaining edges to \(K\) maintains \(F\) being a forest and \(K\) being \((s,t)\)-avoiding, as any \((s,t)\)-subgraph of \(K\) would have to use \(v\), which has degree at most \(s-1\) in \(K\). This contradicts our assumption on indecomposability of \(G\). Second, suppose that \(G\) contains an edge \(uv\) with \(\deg(u),\deg(v)\leqslant t\). By minimality of \(G\), we can decompose \(G^{\prime}\coloneqq G\setminus uv\) into a forest \(F\) and an \((s,t)\)-avoiding graph \(K\). Adding \(uv\) to \(F\) must close a cycle, meaning that both \(u\) and \(v\) are incident to at least one \(F\)-edge of \(G^{\prime}\) and thus the \(K\)-degrees of \(u\) and \(v\) in \(G^{\prime}\) are at most \(t-2\). This means, however, that we can add \(uv\) to \(K\) while still keeping the degrees of both its endpoints strictly below \(t\). Again, we find that \(K\) contains no \((s,t)\)-subgraph, a contradiction. 
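The greedy partition behind Lemma 5.6, which drives Cases (a) and (b) above, is easy to make concrete. The following is a minimal sketch of ours (not from the paper; the function names are hypothetical): it computes a degeneracy ordering by min-degree peeling and then assigns each vertex to the class in which it currently has the fewest already-placed neighbors, exactly as in the pigeonhole step of the proof.

```
def degeneracy_order(adj):
    """Order the vertices so that each has at most degeneracy(G) neighbors
    preceding it (reverse of a minimum-degree peeling)."""
    deg = {v: len(ns) for v, ns in adj.items()}
    removed, order = set(), []
    for _ in range(len(adj)):
        v = min((u for u in adj if u not in removed), key=lambda u: deg[u])
        removed.add(v)
        order.append(v)
        for w in adj[v]:
            if w not in removed:
                deg[w] -= 1
    return order[::-1]

def greedy_partition(adj, k):
    """Place each vertex in the class where it currently has the fewest
    already-placed neighbors (the pigeonhole step in the proof of Lemma 5.6)."""
    part = {}
    for v in degeneracy_order(adj):
        counts = [0] * k
        for w in adj[v]:
            if w in part:
                counts[part[w]] += 1
        part[v] = counts.index(min(counts))
    return part

# Toy usage: K_4 is 3-degenerate (= 2*2 - 1), so with k = 2 each class
# should induce a 1-degenerate graph (a forest).
K4 = {i: [j for j in range(4) if j != i] for i in range(4)}
print(greedy_partition(K4, 2))
```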
_Case (e)._ Let \(k\coloneqq\lceil m_{2}(\mathcal{H},\mathcal{L})\rceil\). Since we assume that \(m_{2}(\mathcal{H})>k\), it is enough to decompose \(G\) into a forest and a graph \(K\) with \(m_{2}(K)\leqslant k\). The following theorem, which implies Conjecture 1.5 in the case that \(m(G)\) is an integer, supplies such a decomposition. **Theorem 5.8**.: _Let \(k\) be an integer, and let \(G\) be a graph with \(m(G)\leqslant k\). Then there exists a forest \(F\subseteq G\) such that \(m_{2}(G\setminus F)\leqslant k\)._ The proof of Theorem 5.8 is substantially more involved, as it relies on techniques from matroid theory. We are hopeful that similar techniques may be used to prove Conjecture 1.5 in its entirety. We defer the proof of Theorem 5.8 to Appendix B.
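As a quick sanity check on the densities \(m\), \(m_{1}\), and \(m_{2}\) used throughout this section (for instance, the value \(m_{2}(K_{3,3})=2\) invoked in the proof of Theorem 1.4), they can be computed by brute force for small graphs. The sketch below is ours and purely illustrative; since each ratio is maximized, for a fixed vertex set, by the induced subgraph, it suffices to scan vertex subsets.

```
from itertools import combinations

def densities(vertices, edges):
    """Brute-force m(G), m_1(G), m_2(G) over induced subgraphs (small graphs only)."""
    m = m1 = m2 = 0.0
    for r in range(2, len(vertices) + 1):
        for S in combinations(vertices, r):
            s = set(S)
            ne = sum(1 for u, w in edges if u in s and w in s)
            m = max(m, ne / r)
            m1 = max(m1, ne / (r - 1))
            if r >= 3:
                m2 = max(m2, (ne - 1) / (r - 2))
    return m, m1, m2

# K_{3,3}: its 2-density equals 2, as used in the proof of Theorem 1.4.
K33 = [(i, j) for i in range(3) for j in range(3, 6)]
print(densities(range(6), K33))  # (1.5, 1.8, 2.0)
```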
We say that $G$ is Ramsey for $(H_1, \dots, H_r)$ if every $r$-coloring of the edges of $G$ contains, for some $i$, a copy of $H_i$ in color $i$. A famous conjecture of Kohayakawa and Kreuter, extending earlier work of R\"odl and Ruci\'nski, predicts the threshold at which the binomial random graph $G_{n,p}$ becomes asymptotically almost surely Ramsey for $(H_1, \dots, H_r)$. In this paper, we resolve the Kohayakawa-Kreuter conjecture for almost all tuples of graphs. Moreover, we reduce its validity to the truth of a certain deterministic statement, which is a necessary condition for the conjecture to hold. All of our results, $H_1,
2309.08183
Spectral Properties and Weak Detection in Stochastic Block Models
We consider the spectral properties of balanced stochastic block models whose average degree grows more slowly than the number of nodes (sparse regime) or proportionally to it (dense regime). For both regimes, we prove a phase transition of the extreme eigenvalues of the SBM at the Kesten--Stigum threshold. We also prove the central limit theorem for the linear spectral statistics in both regimes. We propose a hypothesis test for determining the presence of communities in the graph, based on the central limit theorem for the linear spectral statistics.
Yoochan Han, Ji Oon Lee, Wooseok Yang
2023-09-15T06:13:53
http://arxiv.org/abs/2309.08183v1
# Spectral Properties and Weak Detection in Stochastic Block Models ###### Abstract We consider the spectral properties of balanced stochastic block models of which the average degree grows slower than the number of nodes (sparse regime) or proportional to it (dense regime). For both regimes, we prove a phase transition of the extreme eigenvalues of SBM at the Kesten-Stigum threshold. We also prove the central limit theorem for the linear spectral statistics for both regimes. We propose a hypothesis test for determining the presence of communities of the graph, based on the central limit theorem for the linear spectral statistics. ## 1 Introduction We consider the stochastic block model (SBM), one of the most fundamental models for the networks with community structure. We focus on the spectral properties of balanced SBMs in which the difference between the intra-community probability and the inter-community probability is significantly smaller than their average. **Stochastic Block Model:** A stochastic block model we consider is a graph with \(N\) nodes, partitioned into disjoint subsets, called the communities, \(C_{1},\ldots,C_{K}\) of equal sizes, where the number of the communities \(K\) is independent of \(N\). Its adjacency matrix \(\widetilde{M}\) is a symmetric \(N\times N\) matrix whose entries are Bernoulli random variables satisfying \[\mathbb{P}(\widetilde{M}_{ij}=1)=\begin{cases}p_{s}&(i\sim j)\\ p_{d}&(i\not\sim j)\end{cases},\qquad\mathbb{P}(\widetilde{M}_{ij}=0)=\begin{cases} 1-p_{s}&(i\sim j)\\ 1-p_{d}&(i\not\sim j)\end{cases}, \tag{1.1}\] where \(i\sim j\) means that \(i\) and \(j\) are within the same community. We assume that the SBM is balanced, i.e., the communities are with the same size. For the spectral analysis, it is easier to rescale the adjacency matrix so that the typical size of the eigenvalues is of order one. We rescale \(\widetilde{M}\) via the average edge probability \(p_{a}\) defined as \[p_{a}:=\frac{p_{s}+(K-1)p_{d}}{K}, \tag{1.2}\] which can be obtained from given data. We introduce the rescaled matrix \(M\) defined by \[M_{ij}=\frac{\widetilde{M}_{ij}-p_{a}}{\sigma},\qquad\sigma:=\sqrt{N\cdot\frac{p_ {s}(1-p_{s})+(K-1)p_{d}(1-p_{d})}{K}}. \tag{1.3}\] With the rescaling, the variance of the entries \(M_{ij}\) is \(\Theta(N^{-1})\) and it can be checked that the most of the eigenvalues of \(M\) are contained in \([-2,2]\). Note that the entries of \(M\) are not centered due to the difference between \(p_{s}\) and \(p_{d}\). **Spiked Wigner Matrix:** A spiked Wigner matrix is a random matrix of the form \(\lambda XX^{T}+H\), where the spike \(X\) is an \(N\times K\) matrix whose column vectors are with the unit norm and \(H\) is an \(N\times N\) Wigner matrix. The parameter \(\lambda\) corresponds to the signal-to-noise ratio (SNR). In this model, with the normalization \(\mathbb{E}H_{ij}^{2}=N^{-1}\) for \(i\neq j\), the largest eigenvalue of \(\lambda XX^{T}+H\) converges to \(\lambda+\lambda^{-1}\) if \(\lambda>1\) and to \(2\) if \(\lambda<1\). This phase transition is called the Baik-Ben Arous-Peche (BBP) transition, after the seminal work of [3] for the phase transition of the largest eigenvalue of a spiked Wishart matrix. The BBP-transition result suggests that the standard principal component analysis (PCA) can be applied to the detection problem for spiked Wigner matrices in case \(\lambda>1\), which guarantees reliable detection. 
On the other hand, in case \(\lambda<1\), it is known that reliable detection is impossible if the noise is Gaussian and the spike is rank-1 [35]. In this case, one can consider the weak detection, which is a hypothesis test about the presence of the signal. The likelihood ratio (LR) test is optimal as can be checked from Neyman-Pearson lemma, but one can also construct an optimal test based on the behavior of the linear spectral statistics (LSS) of the eigenvalues [9, 25], which is a linear functional defined as \[L_{M}(f):=\sum_{i=1}^{N}f(\mu_{i}(M)) \tag{1.4}\] for a given function \(f\), where \(\mu_{1}(M),\cdots\mu_{N}(M)\) are the eigenvalues of the matrix \(M\). The rescaled adjacency matrix \(M\) can be viewed as a generalized spiked Wigner matrix with a spike of rank-\((K-1)\) as follows: We first decompose \(M\) into \(M=\mathbb{E}M+H\). The expectation \(\mathbb{E}M\) is a deterministic matrix whose rank is \((K-1)\) and its only non-zero eigenvalue is \(\frac{N(p_{s}-p_{d})}{K\sigma}\) with multiplicity \((K-1)\). The noise \(H:=M-\mathbb{E}M\), which we call a centered SBM, is a random matrix. It can be easily computed that the SNR of the SBM is given by \[\frac{N(p_{s}-p_{d})^{2}}{K(p_{s}+(K-1)p_{d})}. \tag{1.5}\] From the fact that \(1\) is the threshold for the SNR in the BBP-transition, it can be deduced that it is possible to reliably detect the communities in an SBM if the SNR in (1.5) is larger than \(1\). The threshold is called the Kesten-Stigum (KS) threshold, which first appeared in [26]. **Main Problem:** The spectral properties, including the location of the largest eigenvalues, of SBMs are largely unknown. The main difficulty in the spectral analysis of the SBM is that the variances of the entries of the noise \(H\) are not identical. Furthermore, in the sparse regime where \(p_{a}=o(1)\), the analysis is more involved due to its singular nature that the entries of \(M\) are highly concentrated at a single value \(-p_{a}/\sigma\). Our goal is to prove spectral properties of SBM, including the BBP-type transition and the CLT of the LSS, and apply it to the detection problem. **Main Contribution:** Our main contributions in this work are as follows: 1. We prove the eigenvalue phase transition in the dense regime and the sparse regime. (Theorems 2.1 and 3.2). 2. We prove the central limit theorem (CLT) of the LSS in both the dense regime and the sparse regime. (Theorems 2.3 and 3.3). 3. We propose a test based on the CLT of the LSS. (Theorem 2.4). 4. We prove the local law for the sparse centered generalized stochastic block model. (Lemma 4.3) In our work, in terms of the average edge probability \(p_{a}\) in (1.2), the dense regime means \(p_{a}=\Theta(1)\) and the sparse regime means \(p_{a}=N^{-c}\) for some constant \(c\in(0,1)\). See also Definitions 1.1 and 1.2. Our first main result is the phase transition of the largest eigenvalues of the SBM in both the dense regime and the sparse regime. More precisely, we prove that the largest eigenvalues pop up from the bulk of the spectrum if and only if the SNR is above the KS-threshold. For the proof, we adapt the strategy of [4], based on an estimate on the resolvent of a random matrix, known as the isotropic local law in random matrix theory. While we can directly apply the isotropic local law for generalized Wigner matrices in the dense regime, the corresponding result was not known in the sparse regime. 
In Lemma 4.3, we prove a weaker version of the isotropic local law in the sparse regime, which is enough for the proof of the eigenvalue phase transition. The local law we proved in this paper is not simply a tool for the proof of the eigenvalue transition but of great importance per se, since the result itself and also the idea of the proof for it can be used in many other problems on sparse random matrices. Our second main result is the CLT for the LSS of SBM with general ranks. For Wigner matrices, the proof of the CLT is based on the analysis of the resolvent in Cauchy's integral formula [2] or the analysis of the characteristic function [32], and the proof can be extended to more general models by the interpolation with a reference matrix [32, 9]. For the SBM in the dense regime, the reference matrix for the interpolation is a generalized Wigner matrix for which the CLT for the LSS was proved in [31]. In the sparse regime, however, the corresponding result is not known and thus as the first step we introduce a centered SBM and prove the CLT for the LSS for it. The proof in the first step requires the ideas from the both methods, the analysis of Cauchy's integral formula and the analysis of the characteristic function. We finish the proof by applying the interpolation method. With the CLT of the LSS, we follow the ideas in [9, 25] to propose a hypothesis test between the hypotheses on the number of communities \(K\), \[\boldsymbol{H}_{1}:K=K_{1},\qquad\boldsymbol{H}_{2}:K=K_{2},\] for non-negative integer \(K_{1}<K_{2}\), independent of \(N\). The test is computationally easy and the idea can also be used for the estimation of the rank of the spike. We prove the limiting error of the proposed test and numerically check its performance. While the main motivation of the current work lies in the study of the spectral behavior of the SBM, our results naturally extend to more general models. See Definitions 1.1 and 1.2. **Related Works:** The stochastic block model was introduced in the study of social networks [19]. It provides a basic yet fundamental model in various fields of study, most notably in the research for the community detection (and recovery) problem. Several methods have been proposed for the problem, including spectral clustering [27, 36, 24], maximization of modularities [5], semi-definite programming [17], and penalized ML detection with optimal misclassification proportion [16]. See [1] and references therein for the history and more recent developments. The spectral properties of spiked Wigner matrices, especially the behavior of the extremal eigenvalues, have been extensively studied in random matrix theory, e.g., [34, 4, 8]. Such results have been applied to the detection problem for spiked Wigner models [6, 30]. The limits of detection in this model have been considered in statistical learning theory [33, 35, 10, 9]. The study of sparse random matrices is also of great importance in random matrix theory. The analysis of sparse random matrices based on the Stieltjes-transform method was initiated in [12, 14], where the main objects of study was Erdo-Renyi graphs in the sparse regime. The behavior of the extreme eigenvalues of sparse Erdo-Renyi graphs were considered in various works [29, 23, 18, 28, 21]. Other related sparse random matrix models were also studied, including the sparse sample covariance matrices [22] and sparse SBM [23]. **Definition of the Model:** Here, we precisely define the model we consider in this paper. 
_Definition 1.1_ (Centered Generalized SBM, cgSBM).: Fix any \(0<\phi\leq 1/2\). We assume that \(H=(H_{ij})\) is a real \(N\times N\) block random matrix with \(K\) balanced communities with \(1\leq K\leq N\), whose entries are independently distributed random variables, up to symmetry constraint \(H_{ij}=H_{ji}\). We suppose that each \(H_{ij}\) satisfies the moment conditions \[\mathbb{E}H_{ij}=0,\qquad\mathbb{E}|H_{ij}|^{2}=\sigma_{ij}^{2},\qquad\mathbb{ E}|H_{ij}|^{k}\leq\frac{(Ck)^{ck}}{Nq^{k-2}},\qquad(k\geq 2), \tag{1.6}\] with sparsity parameter \(q\) satisfying \[q=C\cdot N^{\phi} \tag{1.7}\] for some constant \(C\). Here, we further assume the normalization condition \(\sum_{i}\sigma_{ij}^{2}=1\). Note that the condition (1.7) can be extended to \(C_{1}\cdot N^{\phi}\leq q\leq C_{2}\cdot N^{1/2}\) for some constant \(C_{1}\) and \(C_{2}\). _Definition 1.2_ (Deformed cgSBM).: Let \(H\) be a centered SBM given in Definition 1.1, \(K\in\mathbb{N}\) be fixed, \(V\) be a deterministic \(N\times K\) matrix satisfying \(V^{T}V=I\), and \(d_{1},\ldots,d_{K}\) be (possibly \(N\)-dependent) deterministic constants such that \(d_{1}\geq\cdots\geq d_{K}>0\). A rank-\(K\) deformed SBM is a matrix \(M\) of the form \[M=H+VDV^{T}, \tag{1.8}\] where \(VDV^{T}=\sum_{i=1}^{K}d_{i}\mathbf{v}^{(i)}(\mathbf{v}^{(i)})^{T}\) and \(D=\text{diag}(d_{1},\ldots,d_{K})\) with \(V=[\mathbf{v}^{(1)},\ldots,\mathbf{v}^{(K)}]\). Also, each column \(v^{(i)}\) of \(V\) should have block structure, which means that \(v^{(i)}\) can be partitioned into finite number of blocks with the same dimension and the entries in each block have same value. Let \[\gamma_{N}:=\frac{N(p_{s}-p_{d})}{\sigma K}=\sqrt{\frac{N(p_{s}-p_{d})^{2}}{K (p_{s}(1-p_{s})+(K-1)p_{d}(1-p_{d}))}}. \tag{1.9}\] Then, the (rescaled) SBM in (1.3)is a deformed cgSBM with \(d_{1}=\cdots=d_{K-1}=\gamma_{N}\) and its \(H\) is cgSBM satisfying \(q^{2}=Np_{a}\). We mostly focus on the case where \(\gamma:=\lim_{N\to\infty}\gamma_{N}\in(0,\infty)\), which happens when \(|p_{s}-p_{d}|\) is sufficiently small; in the dense regime, for example, \(|p_{s}-p_{d}|=O(N^{-1/2})\) **Organization of the Paper:** The rest of the paper is organized as follows: In Section 2, we present our main results for the dense regime, including a BBP-like transition for the extreme eigenvalues and the central limit theorem for the linear statistics, and we also propose an algorithm for a hypothesis test for the weak detection, based on the linear spectral statistics. In Section 3, we present our main results for the sparse regime, including a BBP-like transition for the extreme eigenvalues and the central limit theorem for the linear statistics. In Section 4, we prove the local law of the sparse centered generalized stochastic block model and use it to prove the transition for the extreme eigenvalues. In Section 5, we explain the main ideas of our proof of the central limit theorems. We conclude the paper in Section 6 with the summary of our works and possible future research directions. Some results from numerical experiments and the technical details of the proofs can be found in Appendices. 
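To make the preceding definitions concrete, the following minimal sketch of ours (assuming numpy; the zero diagonal is an inessential convention) samples a balanced SBM, rescales it as in (1.3), and compares its top eigenvalues with \(\gamma_{N}\) from (1.9). When \(\gamma_{N}>1\), one expects \(K-1\) outlier eigenvalues near \(\gamma_{N}+\gamma_{N}^{-1}\), in line with the discussion of the Kesten-Stigum threshold above.

```
import numpy as np

def sample_rescaled_sbm(N, K, p_s, p_d, rng):
    labels = np.repeat(np.arange(K), N // K)           # balanced communities
    same = labels[:, None] == labels[None, :]
    P = np.where(same, p_s, p_d)
    U = rng.random((N, N))
    A = (np.triu(U, 1) < np.triu(P, 1)).astype(float)  # Bernoulli upper triangle
    A = A + A.T                                         # symmetric, zero diagonal
    p_a = (p_s + (K - 1) * p_d) / K
    sigma = np.sqrt(N * (p_s * (1 - p_s) + (K - 1) * p_d * (1 - p_d)) / K)
    return (A - p_a) / sigma                            # rescaling of (1.3)

N, K, p_s, p_d = 2000, 2, 0.12, 0.08
rng = np.random.default_rng(0)
M = sample_rescaled_sbm(N, K, p_s, p_d, rng)
gamma_N = np.sqrt(N * (p_s - p_d)**2
                  / (K * (p_s * (1 - p_s) + (K - 1) * p_d * (1 - p_d))))
eigs = np.sort(np.linalg.eigvalsh(M))[::-1]
# for K = 2 there is a single outlier, close to gamma_N + 1/gamma_N
print(gamma_N, gamma_N + 1 / gamma_N, eigs[:3])
```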
## 2 Main Results - Dense Regime In this section, we consider the cgSBM in the dense regime, satisfying \(\phi=1/2\) so that \(q=N^{1/2}\) and in terms of the (rescaled) SBM in (1.3), \[p_{a}=\frac{p_{s}+(K-1)p_{d}}{K}=\Theta(1).\] ### Eigenvalue phase transition Our first main result is the following phase transition for the largest eigenvalues, which basically coincides with the BBP-transition. **Theorem 2.1** (Eigenvalue phase transition).: _Let \(M\) be a deformed cgSBM that satisfies Definition 1.2 with cgSBM \(H\) has \(\phi=1/2\). The block structure condition of \(M\) can be omitted. Denote the ordered eigenvalues of \(M\) by \(\lambda_{1}(M)\geq\cdots\geq\lambda_{N}(M)\). Then, for each \(1\leq i\leq K\),_ \[\lambda_{i}(M)\to\begin{cases}d_{i}+d_{i}^{-1}&\text{ if }d_{i}>1,\\ 2&\text{ otherwise,}\end{cases}\] _as \(N\to\infty\). Moreover, for each fixed \(i>K\), \(\lambda_{i}(M)\to 2\) almost surely as \(N\to\infty\)._ We prove Theorem 2.1 in Section 4.1. We remark that similar results hold for the smallest eigenvalues \(\lambda_{N-i}(M)\) when \(d_{K-i}<-1\). By Theorem 2.1, if \(\gamma=\lim_{N\to\infty}\gamma_{N}>1\), the number of communities in (1.3) can be reliably checked by PCA. Results of numerical experiments regarding Theorem 2.1 are provided in Appendix A. ### CLT with perturbation Our second result in this section is the CLT for the LSS of deformed SBMs. For a precise statement, we introduce the Chebyshev polynomial (of the first kind). _Definition 2.2_ (Chebyshev polynomial).: The \(n\)-th Chebyshev polynomials of the first kind \(T_{n}\) are obtained from the recurrence relation \(T_{0}(x)=1\), \(T_{1}(x)=x\) and \[T_{n+1}(x)=2xT_{n}(x)-T_{n-1}(x).\] **Theorem 2.3** (CLT for deformed SBM).: _Let \(M\) be a deformed cgSBM that satisfies Definition 1.2 with \(d_{i}=\gamma_{N}\) for all \(i=1,2,\ldots,K\) with \(\gamma_{N}<1\) with cgSBM \(H\) having \(\phi=1/2\). The block structure condition of \(M\) can be omitted. Then, for any function \(f\) analytic on an open interval containing \([-2,2]\),_ \[\Big{(}L_{M}(f)-N\int_{-2}^{2}\frac{\sqrt{4-z^{2}}}{2\pi}f(z)\,\mathrm{d}z \Big{)}\Rightarrow\mathcal{N}\left(m_{K}(f),V_{0}(f)\right)\,. \tag{2.1}\] _The mean and the variance of the limiting Gaussian distribution are given by_ \[m_{K}(f)=\frac{1}{4}\left(f(2)+f(-2)\right)-\frac{1}{2}\tau_{0}( f)-\tau_{2}(f)+k_{4}\tau_{4}(f)+K\sum_{\ell=1}^{\infty}\gamma_{N}^{\ell}\tau_{ \ell}(f), \tag{2.2}\] \[V_{0}(f)=-\tau_{1}(f)^{2}+2k_{4}\tau_{2}(f)^{2}+2\sum_{\ell=1}^ {\infty}\ell\tau_{\ell}(f)^{2}\,, \tag{2.3}\] _where we let_ \[\tau_{\ell}(f)=\frac{1}{\pi}\int_{-2}^{2}T_{\ell}\left(\frac{x}{2}\right) \frac{f(x)}{\sqrt{4-x^{2}}}\,\mathrm{d}x\] _with \(T_{\ell}\) be the \(\ell\)-th Chebyshev polynomial. The parameter \(k_{4}\) is defined by_ \[k_{4}:=\frac{1-7p+12p^{2}-6p^{3}}{p(1-p)^{2}}. \tag{2.4}\] _where p is the limitig value of \(q^{2}/N\)_ The parameter \(k_{4}\) is approximately the sum of the fourth cumulants of \(H_{ij}\). Note that the variance \(V_{0}(f)\) of the limiting Gaussian does not depend on \(K\). We prove Theorem 2.3 in Appendix C. ### Detection Recall that \(\mathbf{H}_{1}\) and \(\mathbf{H}_{2}\) are the hypotheses such that \[\mathbf{H}_{1}:K=K_{1},\qquad\mathbf{H}_{2}:K=K_{2},\] for non-negative integer \(K_{1}<K_{2}\), independent of \(N\). Note that a hypothesis test between \(\mathbf{H}_{1}\) and \(\mathbf{H}_{2}\) corresponds to the weak detection for the presence of the community structure if \(K_{1}=0\). 
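Before turning to the test itself, note that the coefficients \(\tau_{\ell}(f)\) appearing in the mean and variance of Theorem 2.3, and hence in the quantities used for detection below, are ordinary Chebyshev coefficients and are easy to evaluate numerically via the substitution \(x=2\cos\theta\). A minimal sketch of ours (assuming numpy):

```
import numpy as np

def tau(f, ell, n=4000):
    # tau_ell(f) = (1/pi) * int_0^pi cos(ell*theta) f(2 cos theta) d(theta),
    # obtained from the definition by substituting x = 2 cos(theta).
    theta = (np.arange(n) + 0.5) * np.pi / n            # midpoint rule
    return np.mean(np.cos(ell * theta) * f(2.0 * np.cos(theta)))

# Sanity check: f(x) = x^2 = 2 + 2*T_2(x/2), so tau_0 = 2, tau_1 = 0, tau_2 = 1.
for ell in range(4):
    print(ell, tau(lambda x: x**2, ell))
```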
Suppose that the value \(\gamma_{N}<1\) is known and our task is to detect whether the community structure is present from a given data matrix \(M\) given in (1.3). If we construct a hypothesis test based on the LSS, it is clear that we need to maximize \[\left|\frac{m_{K_{1}}(f)-m_{K_{2}}(f)}{\sqrt{V_{0}(f)}}\right| \tag{2.5}\] Following the proof of Theorem 6 in [9], it can be proved that optimal \(f\) is of the form \(f=C_{1}\phi_{\lambda}+C_{2}\) for some constant \(C_{1}\) and \(C_{2}\), where \[\phi_{\gamma_{N}}(x):=\log\Bigl{(}\frac{1}{1-\gamma_{N}x+\gamma_{N}^{2}}\Bigr{)} +\gamma_{N}x+\gamma_{N}^{2}\Bigl{(}\frac{1}{k_{4}+2}-\frac{1}{2}\Bigr{)}x^{2}. \tag{2.6}\] We thus use a test statistic \(L_{\lambda}\) for the hypothesis test, defined as \[L_{\gamma_{N}} :=L_{M}(\phi_{\gamma_{N}})-N\int_{-2}^{2}\frac{\sqrt{4-z^{2}}}{2\pi} \phi_{\gamma_{N}}(z)\,\mathrm{d}z\] \[=-\log\det\Bigl{(}(1+\gamma_{N}^{2})I-\gamma_{N}M\Bigr{)}+\frac{ \gamma_{N}^{2}N}{2}+\gamma_{N}\operatorname{Tr}M+\gamma_{N}^{2}\Bigl{(}\frac{ 1}{k_{4}+2}-\frac{1}{2}\Bigr{)}(\operatorname{Tr}M^{2}-N). \tag{2.7}\] For \(L_{\lambda}\), we have the following CLT result as a direct consequence of (2.1). **Theorem 2.4**.: _Let \(M\) be a deformed cgSBM that satisfies Definition 1.2 with \(d_{i}=\gamma_{N}\) for all \(i\)'s and cgSBM \(H\) has \(\phi=1/2\). The block structure condition of \(M\) can be omitted. Then,_ \[L_{\gamma_{N}}\Rightarrow\mathcal{N}(m_{K},V_{0}),\] _where the mean \(m_{K}\) is given by_ \[m_{K}=m_{0}+K\Bigl{[}-\log(1-\gamma_{N}^{2})+\gamma_{N}^{2}+ \Bigl{(}\frac{1}{k_{4}+2}-\frac{1}{2}\Bigr{)}\gamma_{N}^{4}\Bigr{]} \tag{2.8}\] _with_ \[m_{0}=-\frac{1}{2}\log(1-\gamma_{N}^{2})-\frac{1}{2}\gamma_{N}^{ 2}+\frac{k_{4}\gamma_{N}^{4}}{4}, \tag{2.9}\] _and_ \[V_{0}=-2\log(1-\gamma_{N}^{2})+2\gamma_{N}^{2}+\Bigl{(}\frac{2}{ k_{4}+2}-1\Bigr{)}\gamma_{N}^{4}. \tag{2.10}\] We now propose a hypothesis test based on the CLT for the LSS. In this test, described in Algorithm 1, for a given (rescaled) adjacency matrix \(M\), we compute \(L_{\lambda}\) and compare it with the average of \(m_{K_{1}}\) and \(m_{K_{2}}\), \[m_{c}:= \frac{m_{K_{1}}+m_{K_{2}}}{2}\] \[= -\frac{K_{1}+K_{2}+1}{2}\log(1-\gamma_{N}^{2})+\left(\frac{K_{1}+ K_{2}-1}{2}\right)\gamma_{N}^{2}+\left(\frac{k_{4}-K_{1}-K_{2}}{4}+\frac{K_{1}+ K_{2}}{2(k_{4}+2)}\right)\gamma_{N}^{4}\,. \tag{2.11}\] We accept \(\mathbf{H}_{1}\) if \(L_{\gamma_{N}}\leq m_{c}\) and reject it otherwise. The sum of the type-I and type-II errors of the proposed test can be computed as in Section 3 of [10], and we state it as a corollary here. **Corollary 2.5**.: _The error of the test in Algorithm 1 converges to_ \[\operatorname{erfc}\biggl{(}\frac{K_{2}-K_{1}}{4}\sqrt{-\log(1- \gamma_{N}^{2})+\gamma_{N}^{2}+\Bigl{(}\frac{1}{k_{4}+2}-\frac{1}{2}\Bigr{)} \gamma_{N}^{4}}\biggr{)}. \tag{2.12}\] _where \(\operatorname{erfc}(\cdot)\) is the complementary error function defined as \(\operatorname{erfc}(x)=\frac{2}{\sqrt{\pi}}\int_{x}^{\infty}e^{-t^{2}}\, \mathrm{d}t\)._ For a numerical experiment, we generated \(1200\times 1200\) rescaled adjacency matrices with the average probability \(p_{a}=0.1\) and the SNR \(\gamma_{N}=\sqrt{0.7}\). In Figure 1(a), we plot the histograms of the test statistic \(L_{\gamma_{N}}\) for 10,000 independent samples with \(K=0,1,2,3,4\), respectively. It can be seen from the figure that there is a deterministic shift in the histograms as \(K\) increases, which is predicted by (2.8). 
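In code, the test statistic (2.7), the critical value (2.11), and the resulting decision rule of Algorithm 1 take only a few lines. The following is a minimal sketch of ours (assuming numpy, with \(\gamma_{N}\) and \(p_{a}\) treated as known); it takes the rescaled matrix \(M\) of (1.3) as input, and the function names are hypothetical.

```
import numpy as np

def k4(p):
    # fourth-cumulant parameter from (2.4); p is the limiting value of q^2/N (= p_a here)
    return (1 - 7*p + 12*p**2 - 6*p**3) / (p * (1 - p)**2)

def test_statistic(M, gamma, p):
    # L_{gamma_N} from (2.7); the log-determinant is finite since gamma < 1
    N = M.shape[0]
    c = 1.0 / (k4(p) + 2.0) - 0.5
    _, logdet = np.linalg.slogdet((1 + gamma**2) * np.eye(N) - gamma * M)
    return (-logdet + gamma**2 * N / 2 + gamma * np.trace(M)
            + gamma**2 * c * (np.trace(M @ M) - N))

def critical_value(gamma, p, K1, K2):
    # m_c from (2.11)
    kf = k4(p)
    return (-(K1 + K2 + 1) / 2 * np.log(1 - gamma**2)
            + (K1 + K2 - 1) / 2 * gamma**2
            + ((kf - K1 - K2) / 4 + (K1 + K2) / (2 * (kf + 2))) * gamma**4)

def algorithm1(M, gamma, p, K1, K2):
    # accept H1 (K = K1) iff the statistic does not exceed the critical value
    return test_statistic(M, gamma, p) <= critical_value(gamma, p, K1, K2)
```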
We also performed the test illustrated in Algorithm 1 and compared the error from the numerical simulation with the theoretical error of the proposed test in Corollary 2.5 for 10,000 independent samples with \(K_{1}=0\) and \(K_{2}=1,2,3,4\), respectively, with \(p_{a}=0.1\) and varying \(\lambda\) from \(0\) to \(\sqrt{0.7}\). The numerical errors of the test closely match the theoretical errors, which are depicted in Figure 1(b).
```
1: Data: \(\widetilde{M}\), parameter \(\gamma_{N}\)
2: \(M\leftarrow\) matrix given by (1.3), \(L_{\gamma_{N}}\leftarrow\) test statistic in (2.7), \(m_{c}\leftarrow\) critical value in (2.11)
3: if \(L_{\gamma_{N}}\leq m_{c}\) then
4:   Accept \(\mathbf{H_{1}}\)
5: else
6:   Reject \(\mathbf{H_{1}}\)
7: endif
```
**Algorithm 1** Hypothesis test Theorem 2.4 can also be used in the estimation of \(K\). Since the distance of the means \(|m_{K+1}-m_{K}|\) does not depend on \(K\), for a given test statistic \(L_{\gamma_{N}}\), the best candidate for the rank \(K\) is the minimizer of \(|L_{\gamma_{N}}-m_{K}|\). This estimation procedure is equivalent to finding the nearest non-negative integer to the value \[\kappa^{\prime}:=\frac{L_{\gamma_{N}}-m_{0}}{-\log(1-\gamma_{N}^{2})+\gamma_{N }^{2}+\left(\frac{1}{k_{4}+2}-\frac{1}{2}\right)\gamma_{N}^{4}}. \tag{2.13}\] Figure 1: Under the setting in Section 2.3 with \(N=1200\) and fixed \(p_{a}=0.1\), (a) the histograms of the test statistic \(L_{\gamma_{N}}\) for \(K=0,1,2,3,4\), with \(\gamma_{N}=\sqrt{0.7}\), and (b) the errors from the simulation with Algorithm 1 (solid) versus the limiting errors in (2.12) (dashed) with \(K_{2}=1,2,3,4\). ## 3 Main Results - Sparse Regime In this section, we consider the cgSBM in the sparse regime, satisfying \[0<\phi<1/2\] so that \(q\) can be less than \(N^{1/2}\) and in terms of the (rescaled) SBM in (1.3), \[p_{a}=\frac{p_{s}+(K-1)p_{d}}{K}=o(1).\] ### Eigenvalue phase transition Our first main result in this section is the following phase transition for the largest eigenvalues, which holds under a mild sparsity condition. **Theorem 3.1** (Eigenvalue phase transition).: _Let \(M\) be a deformed cgSBM that satisfies Definition 1.2 whose cgSBM \(H\) has \(1/8<\phi<1/2\). Denote the ordered eigenvalues of \(M\) by \(\lambda_{1}(M)\geq\cdots\geq\lambda_{N}(M)\). Then, for each \(1\leq i\leq K\),_ \[\lambda_{i}(M)\to\begin{cases}d_{i}+d_{i}^{-1}&\text{ if }d_{i}>1,\\ 2&\text{ otherwise,}\end{cases}\] _as \(N\to\infty\). Moreover, for each fixed \(i>K\), \(\lambda_{i}(M)\to 2\) almost surely as \(N\to\infty\)._ We prove Theorem 3.1 in Section 4.2. Recall the rescaled stochastic block matrix in (1.3). We can get the following corollary for the eigenvalues of \(M\) in (1.3). **Corollary 3.2**.: _Consider the \(N\times N\) matrix \(M\) in (1.3) with \(N^{-3/4}\ll p_{a}\ll 1\). Denote the ordered eigenvalues of \(M\) by \(\lambda_{1}(M)\geq\lambda_{2}(M)\geq\cdots\geq\lambda_{N}(M)\). Recall the definition of \(\gamma_{N}\) in (1.9), and define the constant \(\gamma:=\lim_{N\to\infty}\gamma_{N}\). Then, as \(N\to\infty\),_ 1. _if_ \(0\leq\gamma\leq 1\)_,_ \(\lambda_{i}(M)\to 2\) _for each_ \(1\leq i\leq N\)_._ 2. _if_ \(1<\gamma<\infty\)_,_ \(\lambda_{i}(M)\to\begin{cases}\gamma+\gamma^{-1}&\text{ for }1\leq i\leq K-1\\ 2&\text{ for }i>K-1\end{cases}\)__ 3. 
_if_ \(\gamma=\infty\)_,_ \(\begin{cases}\lambda_{i}(M)-(\gamma_{N}+\gamma_{N}^{-1})\to 0&\text{ for }1\leq i\leq K-1\\ \lambda_{i}(M)\to 2&\text{ for }i>K-1\end{cases}\)__ We prove Theorem 3.2 in Section 4.2. Note that the condition \(N^{-3/4}\ll p_{a}\ll 1\) is from \(1/8<\phi<1/2\) since \(q^{2}=Np_{a}\). As in the dense regime, we find that the number of communities can be reliably checked by PCA if \(\gamma>1\). We remark that in the sparse regime the difference between \(\sigma\) in (1.3) and its approximation \[\hat{\sigma}=\sqrt{Np_{a}(1-p_{a})}\] is negligible in the sense that \(|\sigma-\hat{\sigma}|=o(1)\). Thus, in case it is easier to find \(p_{a}\) than \(p_{s}\) and \(p_{d}\), it is possible to use \(\hat{\sigma}\) instead of \(\sigma\) for the PCA, since the change of the extreme eigenvalues is \(o(1)\). ### CLT with perturbation Heuristically, if we assume that Theorem 2.1 holds also in the sparse regime, then the mean \(m_{K}(f)\) and the variance \(V_{0}(f)\) in Theorem 2.3 will be dominated by the terms containing \(k_{4}\) as a coefficient, since \(k_{4}\sim p^{-1}\gg 1\) while all other terms are \(O(1)\). This in particular suggests that \(m_{K}(f),V_{0}(f)\sim p^{-1}\) and we need to rescale the LSS by the factor \(p^{1/2}\) to observe its fluctuation. Our main result in this section is the following theorem. **Theorem 3.3**.: _Suppose that \(M\) is a deformed cgSBM satisfying Definition 1.2 where the cgSBM \(H\) has \(1/6<\phi<1/2\). Let \(f\) be an analytic function on an open interval containing \([-2,2]\) such that \(\tau_{2}(f)=\Theta(1)\). Then_ \[\frac{q}{\sqrt{2N}}\cdot\frac{L_{M}(f)-\mathbb{E}[L_{M}(f)]}{|\tau_{2}(f)|} \Rightarrow\mathcal{N}(0,1),\] _where the right side is a standard Gaussian random variable._ We prove Theorem 3.3 in Section 5.2. For the mean \(L_{M}(f)\) in Theorem 3.3, we have the following expansion formula. **Proposition 3.4** (Expectation of LSS).: _Suppose that the assumptions in Theorem 3.3 hold. Then, the expectation \(L_{M}(f)\) satisfies_ \[\mathbb{E}\Big{[}\frac{q}{\sqrt{N}}\Big{(}L_{M}(f)-N\int_{-2}^{2}\frac{\sqrt{4 -z^{2}}}{2\pi}f(z)\,\mathrm{d}z-\frac{q^{2}}{N}\tau_{4}(f)\Big{)}\Big{]} \to 0. \tag{3.1}\] We prove Proposition 3.4 in Appendix E. Note that Theorem 3.3 and Proposition 3.4 can be used to the matrix \(M\) in (1.3) with \(N^{-3/4}\ll p_{a}\ll 1\) by the condition \(1/6<\phi<1/2\). We remark that the condition \(1/6<\phi<1/2\) is assumed due to a technical reason and we believe that it can be relaxed to for any \(\phi<1/2\). ## 4 Phase transition of the largest eigenvalue ### Phase transition of the largest eigenvalue in the dense regime In this section, we prove Theorem 2.1 by studying the spectrum of the rank-\(K\) deformed SBM in (1.8), defined by \[M\;=\;H+VDV^{T}\,.\] We first introduce a result for the location of the outlier eigenvalues, which is a special case of Lemma 6.1 in [4]. **Lemma 4.1**.: _Fix a positive integer \(K\), a family \(d_{1},\dots,d_{K}\) of pairwise distinct nonzero real numbers. 
Let us define, for \(z\in\mathbb{C}\backslash[-2,2]\), the \(K\times K\) matrix_ \[M_{G}(z)=\mathrm{diag}(1+d_{1}m_{sc}(z),\dots,1+d_{K}m_{sc}(z)), \tag{4.1}\] _and denote by \(z_{1}>\dots>z_{p}\) the \(z\)'s such that \(M_{G}(z)\) is singular, where \(p\in\{0,\dots,K\}\) is identically equal to the number of \(i\)'s such that \(-1<1/d_{i}<1\)._ _Let us also consider a \(K\times K\) Hermitian matrix \(M_{n}(z)\), defined on \(z\in\mathbb{C}\backslash[a_{n},b_{n}]\) such that the entries of \(M_{n}(z)\) are analytic functions of \(z\) and \([a_{n},b_{n}]\to[-2,2]\). We suppose that \(M_{n}(z)\) converges, to the function \(M_{G}(z)\), uniformly on \(\mathcal{D}:=\{z\in\mathbb{C}:\mathrm{dist}(z,[-2,2])\geq\eta\}\), for all \(\eta>0\) as \(n\to\infty\). Then_ * _There exists_ \(p\) _real sequences_ \(z_{n,1}>\ldots>z_{n,p}\) _converging respectively to_ \(z_{1},\ldots,z_{p}\) _such that for any (small)_ \(\epsilon>0\) _, for (large)_ \(n\) _, the z's in_ \(\mathbb{R}\backslash[-2-\epsilon,2+\epsilon]\) _such that_ \(M_{n}(z)\) _is singular are exactly_ \(z_{n,1},\ldots,z_{n,p}\)_,_ The following result has been used in several works on finite-rank deformations of random matrices. **Lemma 4.2**.: _If \(\mu\in\mathbb{R}\setminus\sigma(H)\) and \(\det(D)\neq 0\) then \(\mu\in\sigma(M)\) if and only if_ \[\det\bigl{(}V^{T}G(\mu)V+D^{-1}\bigr{)}\;=\;0\,.\] We omit the proof of Lemma 4.2 as it can be checked by an elementary matrix algebra. We now prove Theorem 2.1. From Weyl's interlacing inequality, for all \(1\leq i\leq N\), \[\lambda_{i-K}(H)\leq\lambda_{i}(M)\leq\lambda_{i}(H), \tag{4.2}\] where we use the convention that \(\lambda_{k}(H)=-\infty\) if \(k\leq 0\). Since the empirical spectral distribution of \(H\) converges to \(\mu_{sc}\), it follows that the empirical spectral distribution of \(M\) does as well. Note that for any \(N\)-independent \(i\geq 1\), \(\lambda_{i}(H)\to 2\). By (4.2), we deduce the that \(\liminf_{n\to\infty}\lambda_{i}(M)\geq 2\) for any \(N\)-independent \(i>1\) and also \(\lambda_{i}(M)\to 2\) if \(i>K\) in addition. Let us now consider the eigenvalues of \(M\) outside the spectrum of \(H\). By Lemma 4.2, these are precisely those values \(z\) outside the spectrum of \(H\) such that the \(K\times K\) matrix \[M_{N}(z):=V^{T}G(z)V+D^{-1} \tag{4.3}\] is singular. We first consider the case where all \(d_{i}^{\prime}s\) are pairwise distinct. From the isotropic local law for generalized Wigner matrices (see Lemma C.2 in Appendix), the \((i,j)\)-entry of \(M_{N}\) satisfies an estimate \[(M_{N})_{i,j}=\langle{\bf v}_{i}\,,G(z){\bf v}_{j}\rangle+\delta_{ij}d_{i}^{- 1}=m_{sc}(z)\langle{\bf v}_{i}\,,{\bf v}_{j}\rangle+\delta_{ij}d_{i}^{-1}+O(N ^{-1/2+\delta}) \tag{4.4}\] for any \(\delta>0\), with overwhelming probability. Thus we have, \[(M_{N})_{i,j}\to\delta_{ij}\left(m_{sc}(z)+\frac{1}{d_{i}}\right). \tag{4.5}\] Note that this convergence is uniform on \(\mathcal{D}\). We now apply Lemma 4.1 to find that 1. for \(i=1,2,\ldots,p\) with \(d_{i}>1\), the eigenvalues \(\lambda_{1}(M)>\lambda_{2}(M)>\ldots\lambda_{p}(M)\) are outside \([-2-\epsilon,2+\epsilon]\) for some \(\epsilon>0\) with overwhelming probability and \(\lambda_{i}(M)\to z_{i}\) where \(z_{i}\) satisfies \[m_{sc}(z_{i})+\frac{1}{d_{i}}=0,\] and 2. for \(i=p+1,\ldots,K\) with \(d_{i}\leq 1\), \(\lambda_{i}(M)\to 2\). From the identity \(m_{sc}^{2}(z)+zm_{sc}(z)+1=0\), we easily find that \(m_{sc}(z)+d^{-1}=0\) if and only if \(d>1\) and \(z=d+d^{-1}\). 
This concludes the proof of Theorem 2.1 in the case where the \(d_{i}\)'s are pairwise distinct. Lastly, we consider the case where the \(d_{i}\)'s are not necessarily pairwise distinct. If \(d_{i}=d_{i+1}\), for any (small) \(\epsilon>0\), using the continuity of \(\rho(x):=x+x^{-1}\) for \(x>0\), we can choose distinct numbers \(d_{i}^{\prime}\) and \(d_{i+1}^{\prime}\) such that \(|\rho(d_{i})-\rho(d_{i}^{\prime})|\), \(|\rho(d_{i+1})-\rho(d_{i+1}^{\prime})|\leq\epsilon\). We then use the inequality by Hoffman and Wielandt, Corollary 6.3.8 of [20], to control the change of the eigenvalue of \(M\) by the differences \(|d_{i}-d_{i}^{\prime}|\) and \(|d_{i+1}-d_{i+1}^{\prime}|\). Putting these results together with Theorem 2.1 for the case of pairwise distinct \(d_{i}\), we can show that Theorem 2.1 also holds when the \(d_{i}\)'s are not necessarily pairwise distinct. ### Phase transition of the largest eigenvalue in the sparse regime In this section, we prove Theorem 3.1 and Corollary 3.2. The proof of Theorem 3.1 follows the proof of Theorem 2.1 presented in Section 4.1, except for (4.4), since Lemma C.2 in the Appendix can be used only in the dense regime. To prove the same estimate in the sparse regime, we prove the local law for the sparse centered generalized SBM, which is defined in Definition 1.1 with some sparsity condition. **Proposition 4.3** (Local law for the sparse cgSBM).: _For a centered generalized stochastic block model \(H\) as in Definition 1.1 with \(1/8<\phi<1/2\), with Green function \(G(z)=(H-zI)^{-1}\), define the function \(s(z)\) by_ \[s(z)=\frac{1}{N}\sum_{i,j}G_{ij}(z) \tag{4.6}\] _Then, the function \(s(z)\) satisfies_ \[\left|s(z)-\frac{-1}{z+m_{sc}}\right|=|s(z)-m_{sc}(z)|\prec\frac{\sqrt{N}}{q^ {4}}+\frac{1}{q} \tag{4.7}\] Proposition 4.3 will be proved in Appendix F. Using Proposition 4.3, we can prove a lemma giving the sparse analogue of (4.4), which completes the proof of Theorem 3.1. **Lemma 4.4**.: _Let \(M\) be a deformed cgSBM that satisfies Definition 1.2 whose cgSBM \(H\) has \(1/8<\phi<1/2\). Then for all \(i,j=1,\cdots,K\), the Green function \(G(z)=(H-zI)^{-1}\) satisfies_ \[\langle\mathbf{v}_{i},G(z)\mathbf{v}_{j}\rangle=\langle\mathbf{v}_{i}, \mathbf{v}_{j}\rangle\left(m_{sc}(z)+O\left(\frac{\sqrt{N}}{q^{4}}+\frac{1}{q ^{2}}\right)\right) \tag{4.8}\] _where \(\mathbf{v}_{i},\mathbf{v}_{j}\) \((i,j=1,\cdots,K)\) are the column vectors of \(V\)._ Lemma 4.4 will be proved in Appendix G, and it completes the proof of Theorem 3.1. For the proof of Corollary 3.2, recall the \(N\times N\) matrix \(M\) in (1.3). The proof starts by showing that \(M\) is a rank-\((K-1)\) deformed cgSBM in the sense of Definition 1.2. The following lemma will be proved in Appendix G. **Lemma 4.5**.: _For the expectation matrix \(\mathbb{E}M\) of the rescaled SBM \(M\) in (1.3), the rank of \(\mathbb{E}M\) is \(K-1\) and its only nonzero eigenvalue is \(\gamma_{N}\), with multiplicity \(K-1\). Consider the orthonormal eigenvectors \(\mathbf{v}_{i}\) \((i=1,\cdots,K-1)\) of \(\mathbb{E}M\) with respect to the eigenvalue \(\gamma_{N}\). Then each \(\mathbf{v}_{i}\) consists of \(K\) blocks of \(N/K\) entries each, and within each block all entries are equal. 
In other words, each \(\mathbf{v}_{i}\) has a block structure._ Using Lemma 4.5, the \(N\times N\) matrix \(\mathbb{E}M\) can be decomposed as \(\mathbb{E}M=VDV^{T}\), where \(VDV^{T}=\sum_{i=1}^{K-1}\gamma_{N}\mathbf{v}_{i}\mathbf{v}_{i}^{T}\) with \(V^{T}V=I\), \(V\) is the \(N\times(K-1)\) matrix with columns \(\mathbf{v}_{i}\) for \(i=1,\cdots,K-1\), and \(D=\gamma_{N}\cdot I_{K-1}\). Also, it can be easily checked that \(H=M-\mathbb{E}M\) satisfies Definition 1.1 with \(q^{2}:=Np_{a}\). Since \(N^{-3/4}\ll p_{a}\ll 1\) implies that \(1/8<\phi<1/2\), \(M\) satisfies the condition of Theorem 3.1. Applying Theorem 3.1 directly to \(M\), we obtain Corollary 3.2. ## 5 Central Limit Theorems for Stochastic Block Models ### Sketch of Proof of Theorem 2.3 Following [9, 25], we introduce a family of interpolating matrices \[M(\theta)=H+\theta\gamma_{N}VV^{T} \tag{5.1}\] for \(\theta\in[0,1]\) and denote the corresponding eigenvalues of \(M(\theta)\) by \(\{\lambda_{i}(\theta)\}_{i=1}^{N}\). We choose constants \(a\in(2,3)\) and \(v_{0}\in(0,1)\) so that the function \(f\) is analytic on the rectangular contour \(\Gamma\) with vertices \((\pm a\pm iv_{0})\). By Theorem 2.1, we may assume that all eigenvalues of \(M\) are inside \(\Gamma\). From Cauchy's integral formula, \[\begin{split}&\sum_{i=1}^{N}f(\lambda_{i}(1))-N\int_{-2}^{2}\frac{ \sqrt{4-z^{2}}}{2\pi}f(z)\,\mathrm{d}z\\ &=-\frac{1}{2\pi\,\mathrm{i}}\oint_{\Gamma}f(z)\big{(}\operatorname {Tr}(M(1)-zI)^{-1}-\operatorname{Tr}(H-zI)^{-1}\big{)}\,\mathrm{d}z\\ &\qquad+\frac{1}{2\pi\,\mathrm{i}}\oint_{\Gamma}f(z)\big{(} \operatorname{Tr}(H-zI)^{-1}-Nm_{sc}(z)\big{)}\,\mathrm{d}z\end{split} \tag{5.2}\] where we let \(m_{sc}(z):=\frac{-z+\sqrt{z^{2}-4}}{2}\) be the Stieltjes transform of the Wigner semicircle measure. Note that the second integral on the right side of (5.2) converges to a Gaussian, as proved in [31]. (See Theorem C.1 in the Appendix for more detail.) Define the resolvent \(G(z)\) of \(H\) and its normalized trace \(m^{H}\) by \[G^{H}(z)\equiv G(z):=(H-zI)^{-1},\qquad m^{H}(z)\equiv m(z):=\frac{1}{N} \operatorname{Tr}G^{H}(z), \tag{5.3}\] where \(z=E+\mathrm{i}\eta\in\mathbb{C}^{+}\). From an elementary calculation involving the resolvents, \[(H-zI)^{-1}-(M(\theta)-zI)^{-1}=\sum_{m=1}^{K}\theta\gamma_{N}(M(\theta)- zI)^{-1}\mathbf{v}^{(m)}(\mathbf{v}^{(m)})^{T}(H-zI)^{-1}. \tag{5.4}\] Applying the isotropic local law, Lemma C.2, to the quadratic forms \(\langle\mathbf{v}^{(m)},(M(\theta)-zI)^{-1}\mathbf{v}^{(m)}\rangle\), we obtain that \[\langle\mathbf{v}^{(m)},(M(\theta)-zI)^{-1}\mathbf{v}^{(m)}\rangle=\frac{m_{ sc}(z)}{1+\theta\gamma_{N}m_{sc}(z)}+O(N^{-1/2+\delta}) \tag{5.5}\] with overwhelming probability. (See Lemma C.2 in the Appendix for the isotropic local law. See Definition B.3 in the Appendix for the precise definition of overwhelming probability events.) Using matrix differentiation identities and properties of resolvents, we find that \[\frac{\partial}{\partial\theta}\operatorname{Tr}(M(\theta)-zI)^{-1} =-\sum_{m=1}^{K}\gamma_{N}\frac{\partial}{\partial z}\left(( \mathbf{v}^{(m)})^{T}(M(\theta)-zI)^{-1}\mathbf{v}^{(m)}\right)\] \[=-\frac{K\gamma_{N}m^{\prime}_{sc}(z)}{(1+\theta\gamma_{N}m_{sc}( z))^{2}}+o(1), \tag{5.6}\] with overwhelming probability. 
Integrating over \(\theta\) from \(0\) to \(1\) and applying Cauchy's integral formula again, we then find that the difference between the LSS of \(M\) and that of \(H\) is \[K\sum_{\ell=1}^{\infty}\gamma_{N}^{\ell}\tau_{\ell}(f),\] and this proves the desired theorem. See Appendix C for the detailed proof. ### Proof of Theorem 3.3 Our proof of Theorem 3.3 is based on the following proposition about the characteristic function of the (rescaled) LSS of a centered SBM \(H\). **Proposition 5.1**.: _Suppose that \(H\) satisfies the conditions in Definition 1.1 with \(1/6<\phi<1/2\). Let \(f\) be an analytic function on an open interval containing \([-2,2]\) and define_ \[\phi(t):=\mathbb{E}[\exp\bigl{\{}\mathrm{i}tq^{2}(L_{H}(f)-\mathbb{E}[L_{H}(f) ])/N\bigr{\}}].\qquad(t\in\mathbb{R}) \tag{5.7}\] _Then, with overwhelming probability, the characteristic function \(\phi\) satisfies_ \[\phi^{\prime}(t)=-2t\phi(t)\tau_{2}(f)^{2}+o(1).\] Applying the Arzelà-Ascoli theorem and Lévy's continuity theorem together with Proposition 5.1, we can easily see that the CLT result in Theorem 3.3 holds. To prove Theorem 3.3 for the deformed SBM \(M\) in place of \(H\), we follow the proof of Theorem 2.3 and show that \[\frac{q}{\sqrt{N}}\Bigl{(}\frac{\partial}{\partial\theta}\operatorname{Tr}(M( \theta)-zI)^{-1}\Bigr{)}=o(1)\] with overwhelming probability, which would imply that the desired theorem holds for \(M\). We now prove Proposition 5.1. From Cauchy's integral formula, \[\frac{q}{\sqrt{N}}\bigl{(}L_{H}(f)-\mathbb{E}[L_{H}(f)]\bigr{)} =-\frac{q\sqrt{N}}{2\pi\operatorname{i}}\oint_{\Gamma}f(z)\bigl{[} m(z)-\mathbb{E}m(z)\bigr{]}\,\mathrm{d}z\] \[=-\frac{q}{2\pi\operatorname{i}\!\sqrt{N}}\oint_{\Gamma}f(z) \bigl{[}\operatorname{Tr}G(z)-\mathbb{E}\operatorname{Tr}G(z)\bigr{]}\, \mathrm{d}z.\] Recall that the vertices of the contour \(\Gamma\) are \((\pm a\pm\operatorname{i}\!v_{0})\) for constants \(a\in(2,3)\) and \(v_{0}\in(0,1)\). We rewrite the characteristic function \(\phi(t)\) as \[\phi(t) :=\mathbb{E}\bigl{[}\exp\bigl{\{}\mathrm{i}t\frac{q}{\sqrt{N}}(L_ {H}(f)-\mathbb{E}[L_{H}(f)])\bigr{\}}\bigr{]} \tag{5.8}\] \[=\mathbb{E}\bigl{[}\exp\bigl{\{}-\frac{tq}{2\pi\sqrt{N}}\oint_{ \Gamma}f(z)(\operatorname{Tr}G(z)-\mathbb{E}\operatorname{Tr}G(z))\,\mathrm{d }z\bigr{\}}\bigr{]}.\qquad(t\in\mathbb{R})\] If we decompose the contour \(\Gamma\) into \[\Gamma_{1}:=\{z=E+\mathrm{i}\eta\in\Gamma:\eta\leq N^{-5}\},\qquad\Gamma_{2}:=\{z =E+\mathrm{i}\eta\in\Gamma:\eta>N^{-5}\}, \tag{5.9}\] it can be easily checked that, in the integral appearing on the right side of (5.8), the contribution from \(\Gamma_{1}\) is negligible, i.e., \(o(1)\), in both \(\phi^{\prime}(t)\) and \(\phi(t)\). We thus consider the integral on \(\Gamma_{2}\) only. Differentiating \(\phi(t)\), we get \[\phi^{\prime}(t)=-\frac{tq}{2\pi\sqrt{N}}\int_{\Gamma_{2}}f(z)\mathbb{E}\big{[} e(t)(\operatorname{Tr}G(z)-\mathbb{E}\operatorname{Tr}G(z))\big{]}\,\mathrm{d}z+o(1), \tag{5.10}\] where we let \[e(t):=\exp\bigl{\{}-\frac{tq}{2\pi\sqrt{N}}\int_{\Gamma_{2}}f(z)(\operatorname {Tr}G(z)-\mathbb{E}\operatorname{Tr}G(z))\,\mathrm{d}z\bigr{\}}. \tag{5.11}\] We thus focus on \(\mathbb{E}\big{[}e(t)\cdot(\operatorname{Tr}G(z)-\mathbb{E}\operatorname{Tr}G (z))\big{]}\) for which we have the following estimate. **Lemma 5.2**.: _Assume the conditions in Proposition 5.1. 
Then, uniformly on \(z\in\Gamma_{2}\),_ \[\begin{split}&\mathbb{E}\big{[}e(t)\cdot(\operatorname{Tr}G(z)- \mathbb{E}\operatorname{Tr}G(z))\big{]}\\ &=-\mathbb{E}[e(t)]\frac{t\sqrt{N}}{\pi q}m_{sc}(z)m^{\prime}_{sc} (z)\oint_{\Gamma}f(z^{\prime})m_{sc}(z^{\prime})m^{\prime}_{sc}(z^{\prime})\, \mathrm{d}z^{\prime}+o(\frac{\sqrt{N}}{q}),\end{split} \tag{5.12}\] _with overwhelming probability._ Lemma 5.2 is proved in Appendix D. Combining Lemma 5.2 with (5.10), we get \[\phi^{\prime}(t)=\frac{t}{2\pi^{2}}\phi(t)\Bigl{(}\oint_{\Gamma}f(z)m_{sc}(z)m ^{\prime}_{sc}(z)dz\Bigr{)}^{2}+o(1)=-2t\phi(t)\tau_{2}(f)^{2}+o(1) \tag{5.13}\] with overwhelming probability and this concludes the proof of the Proposition 5.1. ## 6 Conclusion and Future Works In this paper, we considered spectral properties of \(N\times N\) balanced stochastic block models with general number of communities. We first proved a BBP-like phase transition of extreme eigenvalues with the threshold equal to the Kesten-Stigum threshold for both the dense regime and the sparse regime. We then proved the central limit theorem for the linear spectral statistics for both the dense regime and the sparse regime, where in the dense regime the variance of the limiting Gaussian distribution does not depend on the number of communities while the mean depends on it. Exploiting this property, we proposed a hypothesis test based on the linear spectral statistics to determine the number of communities. We also provided the theoretical error of the proposed test and numerically checked the accuracy of the test. For the proof of the BBP-like phase transition of extreme eigenvalues for the sparse regime, we prove the local law for the sparse centered generalized stochastic block model. A possible future research direction is to extend our results in the dense regime to the sparse regime, especially the phase transition of the extremal eigenvalues and the optimal test statistic for a hypothesis test. We also hope to generalize the results in this paper to non-balanced stochastic block models or non-symmetric models such as directed graphs or bipartite graphs with community structure. ## Appendix A Simulations for Theorem 2.1 In Appendix A, we provide the results from the numerical simulation on the outlier eigenvalues. We fix \(N=8000\), \(K=4\) and \(p_{a}=0.1\). In Figure 2, we compare the eigenvalue distributions of the matrices with \(\lambda=1.5\) and with \(\lambda=0.5\). As can be seen from the figure, it can be seen that even if all other conditions are the same, the presence or absence of an outlier is determined according to the value of \(\lambda\). As \(\lambda\) increases, the gap between the outlier and the support of the semicircle distribution increases, and as the \(\lambda\) approaches \(1\), the gap gradually decreases. When the \(\lambda\) is less than \(1\), the two distributions are mixed and cannot be distinguished. ## Appendix B Preliminaries In Appendix B, we explain the basic definitions and theorems for our results. We first introduce more general class of matrix model that generalize Definition 1.1. _Definition_ B.1 (Centered Generalized SBM, cgSBM).: Fix any \(\phi<1/2\). We assume that \(H=(H_{ij})\) is a real \(N\times N\) block random matrix with \(K\) balanced communities with \(1\leq K\leq N\), whose entries are independently distributed random variables, up to symmetry constraint \(H_{ij}=H_{ji}\). 
We suppose that each \(H_{ij}\) satisfies the moment conditions \[\mathbb{E}H_{ij}=0,\qquad\mathbb{E}|H_{ij}|^{2}=\sigma_{ij}^{2},\qquad\mathbb{E}|H_{ij}|^{k}\leq\frac{(Ck)^{ck}}{Nq^{k-2}},\qquad(k\geq 2),\] (B.1) with sparsity parameter \(q\) satisfying \[N^{\phi}\leq q\leq N^{1/2}.\] (B.2) Here, we further assume the normalization condition \(\sum_{i}\sigma_{ij}^{2}=1\).

We shall also study the spectrum of the deformed cgSBM, which generalizes Definition 1.2 in the sense that the entries do not have to be Bernoulli distributed. Note that the matrix \(M\) defined in (1.3) is a special case of a rank-\((K-1)\) deformation of a cgSBM with \(q^{2}:=Np_{a}\).

_Definition_ B.2 (Deformed generalized stochastic block model, dgSBM).: Let \(H\) be a centered SBM as given in Definition B.1, let \(K\in\mathbb{N}\) be fixed, let \(V\) be a deterministic \(N\times K\) matrix satisfying \(V^{T}V=I\), and let \(d_{1},\ldots,d_{K}\in\mathbb{R}\setminus\{0\}\) be deterministic numbers such that \(d_{1}\geq\cdots\geq d_{K}>0\). We also use the notation \(V=[\mathbf{v}^{(1)},\ldots,\mathbf{v}^{(K)}]\), where \(\mathbf{v}^{(1)},\ldots,\mathbf{v}^{(K)}\in\mathbb{R}^{N}\) are orthonormal. Then we define the rank-\(K\) deformed SBM by \[M=H+VDV^{T},\] (B.3) where \(VDV^{T}=\sum_{i=1}^{K}d_{i}\mathbf{v}^{(i)}(\mathbf{v}^{(i)})^{T}\) and \(D=\operatorname{diag}(d_{1},\ldots,d_{K})\).

We denote by \(\kappa_{ij}^{(k)}\) the \(k\)-th cumulant of \(H_{ij}\). Under the moment condition in Definition B.1, \[\kappa_{ij}^{(1)}=0,\qquad|\kappa_{ij}^{(k)}|=O_{k}\Big(\frac{1}{Nq^{k-2}}\Big),\qquad(k\geq 2).\] (B.4) For our model with the block structure, we abbreviate \(\kappa_{ij}^{(k)}\) as \[\kappa_{ij}^{(k)}=\begin{cases}\kappa_{s}^{(k)}&\text{ if $i$ and $j$ are within the same community},\\ \kappa_{d}^{(k)}&\text{ otherwise}.\end{cases}\] (B.5) We will also use the normalized cumulants, \(s^{(k)}\), by setting \[s_{(\cdot)}^{(1)}:=0,\quad s_{(\cdot)}^{(k)}:=Nq^{k-2}\kappa_{(\cdot)}^{(k)},\quad(k\geq 2).\] (B.6) For convenience, we define the parameters \(\zeta\) and \(\xi^{(4)}\) as \[\zeta:=\frac{s_{s}^{(2)}-s_{d}^{(2)}}{K}=\frac{N(\kappa_{s}^{(2)}-\kappa_{d}^{(2)})}{K},\qquad\xi^{(4)}:=\frac{s_{s}^{(4)}+(K-1)s_{d}^{(4)}}{K}.\] (B.7)

We introduce some basic notation and definitions.

_Definition B.3_ (Overwhelming probability events).: We say that an \(N\)-dependent event \(\Omega\equiv\Omega^{(N)}\) holds with overwhelming probability if for any (large) \(D>0\), \[\mathbb{P}(\Omega^{c})\leq N^{-D},\] for \(N\geq N_{0}(D)\) sufficiently large.

_Definition B.4_ (Stochastic domination).: Let \(X\equiv X^{(N)},Y\equiv Y^{(N)}\) be \(N\)-dependent non-negative random variables. We say that \(X\) stochastically dominates \(Y\) if, for all small \(\epsilon>0\) and large \(D>0\), \[\mathbb{P}(X^{(N)}>N^{\epsilon}Y^{(N)})\leq N^{-D},\] (B.8) for sufficiently large \(N\geq N_{0}(\epsilon,D)\), and we write \(X\prec Y\). When \(X^{(N)}\) and \(Y^{(N)}\) depend on a parameter \(u\in U\), we say \(X(u)\prec Y(u)\) uniformly in \(u\in U\) if the threshold \(N_{0}(\epsilon,D)\) can be chosen independently of \(u\). We also use the notation \(X=O_{\prec}(Y)\) if \(|X|\prec|Y|\) and \(X=o_{\prec}(Y)\) if \(|X|\prec|YN^{-\epsilon^{\prime}}|\) for some sufficiently small fixed constant \(\epsilon^{\prime}\).
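For instance, the moment condition (B.1) directly gives \(|H_{ij}|\prec q^{-1}\) in the sense of Definition B.4 (a short check, recorded here for concreteness): for any small \(\epsilon>0\) and large \(D>0\), Markov's inequality with a sufficiently high moment \(k=k(\epsilon,D)\) yields
\[
\mathbb{P}\bigl(|H_{ij}|>N^{\epsilon}q^{-1}\bigr)\;\leq\;\frac{\mathbb{E}|H_{ij}|^{k}}{N^{k\epsilon}q^{-k}}\;\leq\;\frac{(Ck)^{ck}\,q^{2}}{N^{1+k\epsilon}}\;\leq\;(Ck)^{ck}N^{-k\epsilon}\;\leq\;N^{-D},
\]
where we used \(q^{2}\leq N\) from (B.2).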
Throughout the rest of this paper, we choose \(\epsilon>0\) sufficiently small. More precisely, it is smaller than \((1/2-\phi)/20\), where \(\phi\) is the fixed parameter in Definition B.1 above.

_Definition B.5_ (Stieltjes transform).: Given a probability measure \(\nu\), we define the Stieltjes transform of \(\nu\) as \[m_{\nu}(z):=\int\frac{\nu(\mathrm{d}x)}{x-z}.\qquad(z\in\mathbb{C}^{+})\] For example, the Stieltjes transform of the _semicircle measure_, \[\rho_{sc}(\mathrm{d}x):=\frac{1}{2\pi}\sqrt{(4-x^{2})_{+}}\,\mathrm{d}x,\] is given by \[m_{sc}(z)=\int\frac{\rho_{sc}(\mathrm{d}x)}{x-z}=\frac{-z+\sqrt{z^{2}-4}}{2},\] where the square root \(\sqrt{z^{2}-4}\) is chosen so that \(m_{sc}(z)\in\mathbb{C}^{+}\) for \(z\in\mathbb{C}^{+}\) and \(\sqrt{z^{2}-4}\sim z\) as \(z\to\infty\). Clearly, we have \[m_{sc}(z)+\frac{1}{m_{sc}(z)}+z=0.\]

_Definition B.6_ (Green function (resolvent)).: Given a real symmetric matrix \(H\), we define its resolvent or Green function, \(G(z)\), and the normalized trace of its Green function, \(m^{H}\), by \[G^{H}(z)\equiv G(z):=(H-zI)^{-1},\qquad m^{H}(z)\equiv m(z):=\frac{1}{N}\,\operatorname{Tr}G^{H}(z),\] (B.9) where \(z=E+\mathrm{i}\eta\in\mathbb{C}^{+}\) and \(I\) is the \(N\times N\) identity matrix. Denoting by \(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{N}\) the ordered eigenvalues of \(H\), we note that \(m^{H}\) is the Stieltjes transform of the empirical eigenvalue measure of \(H\), \(\mu^{H}\), defined as \[\mu^{H}:=\frac{1}{N}\sum_{i=1}^{N}\delta_{\lambda_{i}}.\]

In the rest of this section, we state the lemmas and theorems used to prove our results.

**Lemma B.7** (Cumulant expansion, generalized Stein's lemma).: _Fix \(\ell\in\mathbb{N}\) and let \(F\in C^{\ell+1}(\mathbb{R};\mathbb{C}^{+})\). Let \(Y\) be a centered random variable with finite moments to order \(\ell+2\). Then,_ \[\mathbb{E}[YF(Y)]=\sum_{r=1}^{\ell}\frac{\kappa^{(r+1)}(Y)}{r!}\mathbb{E}[F^{(r)}(Y)]+\mathbb{E}[\Omega_{\ell}(YF(Y))],\] (B.10) _where \(\mathbb{E}\) denotes the expectation with respect to \(Y\), \(\kappa^{(r+1)}(Y)\) denotes the \((r+1)\)-st cumulant of \(Y\) and \(F^{(r)}\) denotes the \(r\)-th derivative of the function \(F\). The error term \(\Omega_{\ell}(YF(Y))\) satisfies_ \[\mathbb{E}[\Omega_{\ell}(YF(Y))]\leq C_{\ell}\mathbb{E}[|Y|^{\ell+2}]\sup_{|t|\leq Q}|F^{(\ell+1)}(t)|+C_{\ell}\mathbb{E}\bigl[|Y|^{\ell+2}\mathds{1}(|Y|>Q)\bigr]\sup_{t\in\mathbb{R}}|F^{(\ell+1)}(t)|,\] (B.11) _where \(Q>0\) is an arbitrary fixed cutoff and \(C_{\ell}\) satisfies \(C_{\ell}\leq(C\ell)^{\ell}/\ell!\) for some constant \(C\)._

**Theorem B.8** (Local law, [23]).: _Let \(H\) satisfy Definition B.1 with \(\phi>0\). Then, there exists an algebraic function \(\widetilde{m}:\mathbb{C}^{+}\to\mathbb{C}^{+}\) such that the following hold:_ \[|m(z)-\widetilde{m}(z)|\prec\frac{1}{q^{2}}+\frac{1}{N\eta},\] (B.12) \[\max_{i,j}|G_{ij}(z)-\delta_{ij}\widetilde{m}|\prec\frac{1}{q}+\frac{1}{\sqrt{N\eta}},\] (B.13) _uniformly on the domain \(\mathcal{D}_{\ell}:=\{z=E+\mathrm{i}\eta\in\mathbb{C}^{+}:|E|<3,N^{-1+\ell}<\eta\leq 3\}\), where \(\ell\) is a small positive constant._

**Theorem B.9** ([23]).: _Suppose that \(H\) satisfies Definition B.1 with \(\phi>0\). Then,_ \[|\|H\|-L|\prec\frac{1}{q^{4}}+\frac{1}{N^{2/3}},\] (B.14) _where \(L:=2+\frac{\xi^{(4)}}{q^{2}}+O(q^{-4})\)._

Combining Theorem B.8 and B.9, we can prove the local law on the outside of the spectrum. Since the proof is very similar to [13] or [7], we omit the details.
**Theorem B.10** (Local law outside the spectrum).: _Suppose that \(H\) satisfies Definition B.1 with \(\phi>1/6\). Then,_ \[|m(z)-\widetilde{m}(z)|\prec\frac{1}{q^{2}}+\frac{1}{N\sqrt{\kappa+\eta}},\] (B.15) \[\max_{i,j}|G_{ij}(z)-\delta_{ij}\widetilde{m}|\prec\frac{1}{q}+\frac{1}{\sqrt{N}(\kappa+\eta)^{1/4}},\] (B.16) _uniformly on the domain \(\mathcal{D}_{\tau}:=\{z=E+\mathrm{i}\eta\in\mathcal{D}_{\ell}:|E-L|\geq N^{-2/3+\tau}\}\)._

Note that \(\widetilde{m}\) in Theorem B.8 and Theorem B.10 may be replaced by \(m_{sc}\) without changing the error bound. We also have the following averaging fluctuation results for the monomials in the Green function entries.

**Theorem B.11** (Theorem 4.8, Proposition 3.3 in [11]).: _Under Definition B.1, the following estimate holds uniformly for \(z\in\mathcal{D}_{\ell}\):_ \[\Big|\sum_{i=1}^{N}\kappa_{ij}^{(2)}G_{ii}(z)-m_{sc}(z)\Big|\prec\rho\Psi^{2}(z),\] (B.17) _where \(\rho=O(\frac{1}{\sqrt{\kappa+\eta}})\) and \(\max_{i,j}|G_{ij}(z)-\delta_{ij}m_{sc}(z)|\prec\Psi(z)\)._

On the contour \(\Gamma_{2}\) defined in (5.9), Theorem B.10 and Theorem B.11 can be expressed simply as follows:

**Proposition B.12**.: _Under Definition B.1, uniformly for \(z\in\Gamma_{2}\) given in (5.9),_ \[|m(z)-m_{sc}(z)|\prec\frac{1}{q^{2}},\qquad\max_{i,j}|G_{ij}(z)-\delta_{ij}m_{sc}|\prec\frac{1}{q},\] (B.18) \[\Big|\sum_{i=1}^{N}\kappa_{ij}^{(2)}G_{ii}(z)-m_{sc}(z)\Big|\prec\frac{1}{q^{2}}.\] (B.19)

**Lemma B.13** (Basic properties of \(m_{sc}\)).: _Define the distance to the spectral edge_ \[\kappa\equiv\kappa(E):=\bigl||E|-2\bigr|.\] (B.20) _Then for \(z\in\mathcal{D}_{\ell}\) we have_ \[|m_{sc}(z)|\sim 1,\qquad\qquad|1-m_{sc}^{2}|\sim\sqrt{\kappa+\eta}\] (B.21) _and_ \[\operatorname{Im}m_{sc}(z)\sim\begin{cases}\sqrt{\kappa+\eta}&\text{if }E\leq 2,\\ \frac{\eta}{\sqrt{\kappa+\eta}}&\text{if }E\geq 2.\end{cases}\] (B.22) _Moreover,_ \[m^{\prime}_{sc}(z)=-\frac{m_{sc}(z)}{z+2m_{sc}(z)}=\frac{m_{sc}^{2}(z)}{1-m_{sc}^{2}(z)}.\] (B.23)

Proof.: The proof is an elementary calculation; see Lemma 4.2 in [15].

## Appendix C Proof of Theorem 2.3

We first express the left-hand side of (2.1) as a contour integral via Cauchy's integral formula. The integral is then written in terms of the Stieltjes transforms of the empirical spectral measure and the semicircle measure. Since the fluctuation of the Stieltjes transform of the empirical spectral measure converges weakly to a Gaussian process, we find that the linear eigenvalue statistic also converges to a Gaussian random variable. We begin by recalling the CLT for centered stochastic block models.

**Theorem C.1** ([31]).: _Let \(H\) be a cgSBM defined in Definition 1.1. Then, for any function \(f\) analytic on an open interval containing \([-2,2]\),_ \[\Big(L_{H}(f)-N\int_{-2}^{2}\frac{\sqrt{4-z^{2}}}{2\pi}f(z)\,\mathrm{d}z\Big)\Rightarrow\mathcal{N}\left(m_{0}(f),V_{0}(f)\right),\] _where the mean \(m_{0}(f)\) is given by (2.2) with \(K=0\) and the variance \(V_{0}(f)\) is given in (2.3)._

Recall the definitions of \(M(\theta)\) and the contour \(\Gamma\) from Section 5.1.
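For orientation (the precise definition is the one given in Section 5.1 and is not restated here), \(M(\theta)\) can be read as the linear interpolation between the centered and the deformed model,
\[
M(\theta)=H+\theta\sqrt{\lambda}\,VV^{T},\qquad\theta\in[0,1],\qquad M(0)=H,\quad M(1)=M,
\]
which is consistent with the derivative formula (C.8) and the resolvent identity (C.11) below.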
Define the resolvent \(R(\theta,z)\) and the normalized trace of the resolvent \(m_{N}(\theta,z)\) by \[R(\theta,z):=(M(\theta)-zI)^{-1},\qquad m_{N}(\theta,z):=\frac{1}{N}\operatorname{Tr}R(\theta,z)=\frac{1}{N}\sum_{i=1}^{N}\frac{1}{\lambda_{i}(\theta)-z}.\] (C.1) Then, with Cauchy's integral formula, we have \[\begin{split}\sum_{i=1}^{N}f(\lambda_{i}(1))-N\langle\mu_{sc},f\rangle&=-\frac{N}{2\pi\,\mathrm{i}}\oint_{\Gamma}f(z)\big(m_{N}(1,z)-m_{sc}(z)\big)\,\mathrm{d}z\\ &=-\frac{1}{2\pi\,\mathrm{i}}\oint_{\Gamma}f(z)\big(\operatorname{Tr}R(1,z)-\operatorname{Tr}R(0,z)\big)\,\mathrm{d}z\\ &\qquad-\frac{1}{2\pi\,\mathrm{i}}\oint_{\Gamma}f(z)\big(\operatorname{Tr}R(0,z)-Nm_{sc}(z)\big)\,\mathrm{d}z,\end{split}\] (C.2) where the fluctuation of the last term of (C.2) is already given by Theorem C.1. To analyze the remaining term of (C.2), we use results from random matrix theory such as the isotropic local law given in Lemma C.2 below. Hence, our strategy is to show that the limiting distribution of \(\operatorname{Tr}R(\theta,z)\) has a deterministic shift when we integrate with respect to \(\theta\). More precisely, we claim that \[\frac{\partial}{\partial\theta}\operatorname{Tr}R(\theta,z)=-\frac{K\sqrt{\lambda}m^{\prime}_{sc}(z)}{(1+\theta\sqrt{\lambda}m_{sc}(z))^{2}}+O_{\prec}(N^{-\frac{1}{2}})\] (C.3) uniformly on \(z\in\Gamma\). Once we prove the claim, we can use the lattice argument to prove Theorem 2.3 as follows: Choose points \(z_{1},z_{2},\dots,z_{16N}\in\Gamma\) so that \(|z_{i}-z_{i+1}|\leq N^{-1}\) for \(i=1,2,\dots,16N\) (with the convention \(z_{16N+1}=z_{1}\)). For each \(z_{i}\), the claim (C.3), integrated over \(\theta\in[0,1]\), shows that \[\operatorname{Tr}R(1,z_{i})-\operatorname{Tr}R(0,z_{i})=-\frac{K\sqrt{\lambda}m^{\prime}_{sc}(z_{i})}{1+\sqrt{\lambda}m_{sc}(z_{i})}+O_{\prec}(N^{-\frac{1}{2}}).\] (C.4) For any \(z\in\Gamma\), if \(z_{i}\) is the nearest lattice point from \(z\), then \(|z-z_{i}|\leq N^{-1}\). From the Lipschitz continuity of \(\operatorname{Tr}R\), we then find \(|\operatorname{Tr}R(\theta,z)-\operatorname{Tr}R(\theta,z_{i})|=O_{\prec}(N^{-1})\) uniformly in \(z\) and \(z_{i}\). Hence, by the triangle inequality, we can show that \[\begin{split}&\Big|\xi_{N}(1,z)-\xi_{N}(0,z)+\frac{K\sqrt{\lambda}m^{\prime}_{sc}(z)}{1+\sqrt{\lambda}m_{sc}(z)}\Big|\\ &\leq|\xi_{N}(1,z)-\xi_{N}(1,z_{i})|+\Big|\xi_{N}(1,z_{i})-\xi_{N}(0,z_{i})+\frac{K\sqrt{\lambda}m^{\prime}_{sc}(z_{i})}{1+\sqrt{\lambda}m_{sc}(z_{i})}\Big|+|\xi_{N}(0,z_{i})-\xi_{N}(0,z)|\\ &=O_{\prec}(N^{-\frac{1}{2}}).\end{split}\] Now, integrating over \(\Gamma\), we get \[\frac{-1}{2\pi\mathrm{i}}\oint_{\Gamma}f(z)\Big(\xi_{N}(1,z)-\xi_{N}(0,z)\Big)\,\mathrm{d}z=\frac{K}{2\pi\mathrm{i}}\oint_{\Gamma}f(z)\frac{\sqrt{\lambda}m^{\prime}_{sc}(z)}{1+\sqrt{\lambda}m_{sc}(z)}\,\mathrm{d}z+O_{\prec}(N^{-\frac{1}{2}}).\] (C.5) Note that, for \(\ell=0,1,2,\dots\), \[\tau_{\ell}(f)=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(2\cos\theta)\cos(\ell\theta)\,\mathrm{d}\theta=\frac{(-1)^{\ell}}{2\pi\mathrm{i}}\oint_{|s|=1}f\left(-s-\frac{1}{s}\right)s^{\ell-1}\,\mathrm{d}s,\] (C.6) where we set \(s=-\mathrm{e}^{\mathrm{i}\theta}\) for the second equality. Converting the integral in \(z\) into an integral in \(m_{sc}\) via \(z=-m_{sc}-\frac{1}{m_{sc}}\), we conclude that the difference between the LSS of \(M\) and that of \(H\) is \[K\sum_{\ell=1}^{\infty}\sqrt{\lambda^{\ell}}\tau_{\ell}(f).\] We now prove the claim (C.3). For ease of notation, we omit the \(z\)-dependence on some occasions.
Using the formula \[\frac{\partial R_{jj}(\theta)}{\partial M_{ab}(\theta)}=\begin{cases}-R_{ja}( \theta)R_{bj}(\theta)-R_{jb}(\theta)R_{aj}(\theta)&\text{ if }a\neq b,\\ -R_{ja}(\theta)R_{aj}(\theta)&\text{ if }a=b,\end{cases}\] (C.7) and the fact that \(M\) and \(R(\theta)\) are symmetric, it is straightforward to check that \[\frac{\partial}{\partial\theta}\operatorname{Tr}R(\theta,z)=-\sum_{m=1}^{K} \sqrt{\lambda}\left({\mathbf{v}^{(m)}}^{T}R(\theta,z)^{2}{\mathbf{v}^{(m)}} \right)=-\sum_{m=1}^{K}\sqrt{\lambda}\frac{\partial}{\partial z}\left({ \mathbf{v}^{(m)}}^{T}R(\theta,z){\mathbf{v}^{(m)}}\right).\] (C.8) where \[R(\theta,z)^{2}=(M(\theta)-zI)^{-2}=\frac{\partial}{\partial z}(M(\theta)-zI)^ {-1}=\frac{\partial}{\partial z}R(\theta,z),\] (C.9) which can be checked from the definition of the resolvent. Set \(S(z):=R(0,z)=(H-zI)^{-1}\) for convenience. We have from the definition of the resolvents that \[R(\theta,z)^{-1}-S(z)^{-1}=\theta\sqrt{\lambda}VV^{T}S(z)\] (C.10) and after multiplying \(S(z)\) from the right and \(R(\theta,z)\) from the left, we find that \[S(z)-R(\theta,z)=\theta\sqrt{\lambda}R(\theta,z)VV^{T}S(z)=\sum_{i=1}^{K} \theta\sqrt{\lambda}R(\theta,z)\mathbf{v}^{(i)}(\mathbf{v}^{(i)})^{T}S(z).\] (C.11) Thus, \[\begin{split}\langle\mathbf{v}^{(m)},S(z)\mathbf{v}^{(m)}\rangle &=\langle\mathbf{v}^{(m)},R(\theta,z)\mathbf{v}^{(m)}\rangle+ \theta\sqrt{\lambda}\sum_{i=1}^{K}\langle\mathbf{v}^{(m)},R(\theta,z)\mathbf{ v}^{(i)}(\mathbf{v}^{(i)})^{T}S(z)\mathbf{v}^{(m)}\rangle\\ &=\langle\mathbf{v}^{(m)},R(\theta,z)\mathbf{v}^{(m)}\rangle+ \theta\sqrt{\lambda}\sum_{i=1}^{K}\langle\mathbf{v}^{(m)},R(\theta,z)\mathbf{ v}^{(i)}\rangle\langle\mathbf{v}^{(i)},S(z)\mathbf{v}^{(m)}\rangle\end{split}\] (C.12) For the resolvents of the centered SBM, we have the following lemma. **Lemma C.2** (Isotropic local law).: _For an \(N\)-independent constant \(\epsilon>0\), let \(\Gamma^{\epsilon}\) be the \(\epsilon\)-neighborhood of \(\Gamma\), i.e.,_ \[\Gamma^{\epsilon}=\{z\in\mathbb{C}:\min_{w\in\Gamma}|z-w|\leq\epsilon\}.\] _Choose \(\epsilon\) small so that the distance between \(\Gamma^{\epsilon}\) and \([-2,2]\) is larger than \(2\epsilon\), i.e.,_ \[\min_{w\in\Gamma^{\epsilon},x\in[-2,2]}|x-w|>2\epsilon.\] (C.13) _Assume that the matrix \(H\) satisfies Definition B.1 with \(q=c\sqrt{N}\) for some constant \(0<c<1\). Then, for any deterministic \(\mathbf{v},\mathbf{w}\in\mathbb{C}^{N}\) with \(\|\mathbf{v}\|=\|\mathbf{w}\|=1\) and sufficiently small \(\delta>0\), the following estimate holds uniformly on \(z\in\Gamma^{\epsilon}\):_ \[\big{|}\langle\mathbf{v},(H-zI)^{-1}\mathbf{w}\rangle-m_{sc}(z)\langle\mathbf{v},\mathbf{w} \rangle\big{|}=O_{\prec}(N^{-\frac{1}{2}}).\] (C.14) Proof.: This follows from Theorem 2.15 of [7]. See Lemma 7.7 of [9] for details. 
From the isotropic local law, Lemma C.2, we find that \[\langle\mathbf{v}^{(i)},S(z)\mathbf{v}^{(m)}\rangle=\delta_{im}m_{sc}(z)+O_{ \prec}(N^{-\frac{1}{2}}).\] (C.15) Moreover, from the rigidity of the eigenvalues, for some constant \(C\), we have \[\langle\mathbf{v}^{(m)},R(\theta,z)\mathbf{v}^{(m)}\rangle\leq\|R(\theta,z)\| \leq C.\] (C.16) We then have from (C.12) that \[m_{sc}=\langle\mathbf{v}^{(m)},R(\theta,z)\mathbf{v}^{(m)}\rangle+\theta \sqrt{\lambda}\langle\mathbf{v}^{(m)},R(\theta,z)\mathbf{v}^{(m)}\rangle m_{ sc}+O_{\prec}(N^{-1/2}).\] (C.17) Therefore, we conclude that \[\langle\mathbf{v}^{(m)},R(\theta,z)\mathbf{v}^{(m)}\rangle=\frac{m_{sc}(z)}{1+ \theta\sqrt{\lambda}m_{sc}(z)}+O_{\prec}(N^{-\frac{1}{2}}).\] (C.18) Note that \(|m_{sc}|\leq 1\) and \(\lambda<1\), hence \(|1+\sqrt{\lambda}m_{sc}|>c>0\) for some (\(N\)-independent) constant \(c\). Consider the boundary of the \(\epsilon\)-neighborhood of \(z\), \(\partial B_{\epsilon}(z)=\{w\in\mathbb{C}:|w-z|=\epsilon\}\). If we choose \(\epsilon\) as in the assumption of Lemma C.2, \(\partial B_{\epsilon}(z)\) does not intersect \([-2,2]\). Applying Cauchy's integral formula, we get \[\begin{split}\frac{\partial}{\partial z}\langle\mathbf{v}^{(m)},R(\theta,z)\mathbf{v}^{(m)}\rangle&=\frac{1}{2\pi\,\mathrm{i}} \oint_{\partial B_{\epsilon}(z)}\frac{\langle\mathbf{v}^{(m)},R(\theta,z) \mathbf{v}^{(m)}\rangle}{(w-z)^{2}}\,\mathrm{d}w\\ &=\frac{m^{\prime}_{sc}(z)}{(1+\theta\sqrt{\lambda}m_{sc}(z))^{ 2}}+O_{\prec}(N^{-\frac{1}{2}}).\end{split}\] (C.19) Plugging the estimate into the right-hand side of (C.8), we get the claim (C.3). ## Appendix D Proof of Lemma 5.2 With Definitions in Appendix B, recall that our goal is to show that \[\begin{split}&\mathbb{E}\big{[}e(t)\cdot(\operatorname{Tr}G(z)- \mathbb{E}\operatorname{Tr}G(z))\,\mathrm{d}z\big{]}\\ &=-\phi(t)\frac{t\sqrt{N}}{\pi q}\xi^{(4)}m_{sc}(z)m^{\prime}_{sc }(z)\oint_{\Gamma}f(z^{\prime})m_{sc}(z^{\prime})m^{\prime}_{sc}(z^{\prime}) \,\mathrm{d}z^{\prime}+o(\frac{\sqrt{N}}{q})\end{split}\] (D.1) with overwhelming probability, where \(e(t)\) is defined by \[e(t):=\exp\bigl{\{}-\frac{tq}{2\pi\sqrt{N}}\int_{\Gamma_{2}}f(z)(\operatorname{ Tr}G(z)-\mathbb{E}\operatorname{Tr}G(z))\,\mathrm{d}z\bigr{\}}.\] (D.2) Then the characteristic function \(\phi\) satisfies \[\phi^{\prime}(t)=-t\phi(t)V(f)+o_{\prec}(1),\] where \(V(f)\) is given by \[V(f)=2\xi^{(4)}\tau_{2}(f)^{2},\] (D.3) and which equals \(2\tau_{2}(f)^{2}\) for \(H\) given by Definition 1.1. Throughout this section, we say that a random variable \(Z\) is _negligible_ if \(Z=o_{\prec}(\sqrt{N}q^{-1})\). 
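Before entering the proof, it may help to record the first step schematically (only a sketch; the full bookkeeping is carried out below). By the resolvent identity \((H-zI)G=I\),
\[
z\operatorname{Tr}G=\sum_{i,j}H_{ij}G_{ji}-N,
\]
where the \(-N\) cancels after centering, and applying Lemma B.7 to a single factor \(H_{ij}\) (with \(F\) the remaining, \(H_{ij}\)-dependent quantity) produces, at second-cumulant order, terms of the form
\[
\kappa_{ij}^{(2)}\mathbb{E}\big[\partial_{ij}\big(e(t)G_{ji}\big)\big],\qquad \partial_{ij}G_{ji}=-\big(G_{ii}G_{jj}+G_{ij}^{2}\big)\quad(i\neq j),
\]
which is exactly the shape of the terms \(I_{1},\dots,I_{4}\) collected in (D.4) below; higher cumulants carry the extra factors \(\kappa_{ij}^{(r+1)}/r!\).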
Proof.: To prove the proposition, we apply Lemma B.7 for \(l=4\) and get \[\begin{split}& z\mathbb{E}\big[e(t)(\operatorname{Tr}G(z)-\mathbb{E}\operatorname{Tr}G(z))\big]\\ &=\mathbb{E}\big[e(t)\sum_{i\neq j}(H_{ij}G_{ji}-\mathbb{E}[H_{ij}G_{ji}])\big]+\mathbb{E}\big[e(t)\sum_{i}(H_{ii}G_{ii}-\mathbb{E}[H_{ii}G_{ii}])\big]\\ &=I_{1}+I_{2}+I_{3}+I_{4}+R_{5}+I_{d},\end{split}\] (D.4) where \[I_{d}=\mathbb{E}\big[e(t)\sum_{i}(H_{ii}G_{ii}-\mathbb{E}[H_{ii}G_{ii}])\big],\] (D.5) \[I_{1}=\sum_{i\neq j}\kappa_{ij}^{(2)}\Big(\mathbb{E}\big[\partial_{ij}e(t)\cdot G_{ij}\big]+\mathbb{E}\big[(1-\mathbb{E})\big(\partial_{ij}G_{ij}\big)\cdot e(t)\big]\Big),\] (D.6) \[I_{2}=\sum_{i\neq j}\frac{\kappa_{ij}^{(3)}}{2!}\Big(\mathbb{E}\big[\partial_{ij}^{2}e(t)\cdot G_{ij}\big]+2\mathbb{E}\big[\partial_{ij}e(t)\cdot\partial_{ij}G_{ij}\big]+\mathbb{E}\big[(1-\mathbb{E})\big(\partial_{ij}^{2}G_{ij}\big)\cdot e(t)\big]\Big),\] (D.7) \[I_{3}=\sum_{i\neq j}\frac{\kappa_{ij}^{(4)}}{3!}\Big(\mathbb{E}\big[\partial_{ij}^{3}e(t)\cdot G_{ij}\big]+3\mathbb{E}\big[\partial_{ij}^{2}e(t)\cdot\partial_{ij}G_{ij}\big]+3\mathbb{E}\big[\partial_{ij}e(t)\cdot\partial_{ij}^{2}G_{ij}\big]+\mathbb{E}\big[(1-\mathbb{E})\big(\partial_{ij}^{3}G_{ij}\big)\cdot e(t)\big]\Big),\] (D.8) \[I_{4}=\sum_{i\neq j}\frac{\kappa_{ij}^{(5)}}{4!}\Big(\mathbb{E}\big[\partial_{ij}^{4}e(t)\cdot G_{ij}\big]+4\mathbb{E}\big[\partial_{ij}^{3}e(t)\cdot\partial_{ij}G_{ij}\big]+6\mathbb{E}\big[\partial_{ij}^{2}e(t)\cdot\partial_{ij}^{2}G_{ij}\big]+4\mathbb{E}\big[\partial_{ij}e(t)\cdot\partial_{ij}^{3}G_{ij}\big]+\mathbb{E}\big[(1-\mathbb{E})\big(\partial_{ij}^{4}G_{ij}\big)\cdot e(t)\big]\Big),\] and \(R_{5}\) denotes the remainder \(\mathbb{E}[\Omega_{4}(\cdot)]\) from Lemma B.7, which is estimated in (D.25) below. The terms on the right-hand side of (D.4) are controlled by the following lemma.

**Lemma D.1**.: _Assume the conditions in Proposition 5.1. Then, uniformly on \(z\in\Gamma_{2}\) and with overwhelming probability, the terms \(I_{2}\), \(I_{4}\) and \(R_{5}\) are negligible, \(I_{d}=O(1)\), the term \(I_{3}\) satisfies the expansion (D.43), and the leading part \(I_{1,1}\) of \(I_{1}\) satisfies (D.51), as established in Sections D.1.1-D.1.4 below._

Combining Lemma D.1 and (D.4) together, with overwhelming probability, we obtain \[\begin{split}& z\mathbb{E}\big[e(t)(\operatorname{Tr}G(z)-\mathbb{E}\operatorname{Tr}G(z))\big]\\ &=\frac{m_{sc}^{\prime}}{m_{sc}}(m_{sc}^{2}-1)\mathbb{E}\big[e(t)\cdot(1-\mathbb{E})\operatorname{Tr}G\big]+m_{sc}^{2}(m_{sc}^{\prime}+1)\frac{t\sqrt{N}}{\pi q}\xi^{(4)}\mathbb{E}\big[e(t)\big]\int_{\Gamma_{2}}f(z)m_{sc}(z)m_{sc}^{\prime}(z)\,\mathrm{d}z+o\Big(\frac{\sqrt{N}}{q}\Big)\\ &=-m_{sc}\mathbb{E}\big[e(t)\cdot(1-\mathbb{E})\operatorname{Tr}G\big]+m_{sc}^{\prime}\frac{t\sqrt{N}}{\pi q}\xi^{(4)}\mathbb{E}\big[e(t)\big]\int_{\Gamma_{2}}f(z)m_{sc}(z)m_{sc}^{\prime}(z)\,\mathrm{d}z+o\Big(\frac{\sqrt{N}}{q}\Big).\end{split}\] Rearranging the above and dividing both sides by \((z+m_{sc})\), we obtain (D.1).

### Proof of Lemma D.1

For the estimates in this section, we use the following power counting argument frequently.

**Lemma D.2**.: _(Lemma 6.5 of [29]) For any \(i\) and \(k\),_ \[\frac{1}{N}\sum_{j=1}^{N}|G_{ij}(z)G_{jk}(z)|\prec\frac{\operatorname{Im}m(z)}{N\eta},\quad\frac{1}{N}\sum_{j=1}^{N}|G_{ij}(z)|\prec\left(\frac{\operatorname{Im}m(z)}{N\eta}\right)^{1/2},\quad(z\in\mathbb{C}^{+}).\] (D.16) _Moreover, for \(z\in\Gamma\), we have_ \[\frac{1}{N}\sum_{j=1}^{N}|G_{ij}(z)G_{jk}(z)|\prec\frac{1}{N},\qquad\frac{1}{N}\sum_{j=1}^{N}|G_{ij}(z)|\prec\frac{1}{N^{1/2}},\qquad(z\in\Gamma).\] (D.17)

#### d.1.1 estimate on \(e(t)\), \(R_{5}\) and \(I_{d}\)

**Lemma D.3**.: _Let \(e(t)\) be defined as in (D.2). Then,_ \[\partial_{ij}e(t)=O_{\prec}\Big(\frac{\log N}{\sqrt{N}}\Big),\] (D.18) \[\partial_{ij}^{2}e(t)=-\frac{tq}{\pi\sqrt{N}}e(t)\int_{\Gamma_{2}}f(z)(G_{ii}(G^{\prime})_{jj}+G_{jj}(G^{\prime})_{ii})\,\mathrm{d}z+O_{\prec}\Big(\frac{\log N}{\sqrt{N}}\Big),\] (D.19) \[\partial_{ij}^{k}e(t)=O_{\prec}\Big(\frac{q}{\sqrt{N}}\Big),\qquad(k\geq 3).\] (D.20)

Proof.: Note that \(G_{ij}(z)\) is analytic in \(z\in\mathbb{C}\setminus\mathbb{R}\) and \((G^{2})_{ij}=\frac{\mathrm{d}}{\mathrm{d}z}G_{ij}(z)\). Using the Cauchy integral formula and the local law, we get \[\left|\frac{\mathrm{d}}{\mathrm{d}z}G_{ij}(z)-\delta_{ij}m_{sc}^{\prime}(z)\right|\prec\frac{1}{q\operatorname{Im}z}.\] (D.21) Then with (D.21), we obtain \[\begin{split}\partial_{ij}e(t)&=-\frac{tq}{2\pi\sqrt{N}}e(t)\int_{\Gamma_{2}}f(z)\partial_{ij}\sum_{k}G_{kk}(z)\,\mathrm{d}z\\ &=\frac{tq}{\pi\sqrt{N}}e(t)\int_{\Gamma_{2}}f(z)(G^{2})_{ij}\,\mathrm{d}z\\ &=O_{\prec}\Big(\frac{\log N}{\sqrt{N}}\Big).\end{split}\] (D.22) Taking the derivative of (D.22) again, we get \[\begin{split}\partial_{ij}^{2}e(t)&=\left(\frac{tq}{\pi\sqrt{N}}\int_{\Gamma_{2}}f(z)(G^{2})_{ij}\,\mathrm{d}z\right)^{2}e(t)+\frac{tq}{\pi\sqrt{N}}e(t)\int_{\Gamma_{2}}f(z)\sum_{k}\partial_{ij}(G_{ik}G_{kj})\,\mathrm{d}z\\ &=-\frac{tq}{\pi\sqrt{N}}e(t)\int_{\Gamma_{2}}f(z)(G_{ii}(G^{\prime})_{jj}+G_{jj}(G^{\prime})_{ii})\,\mathrm{d}z+O_{\prec}\Big(\frac{\log N}{\sqrt{N}}\Big).\end{split}\] (D.23) In general, repeatedly taking derivatives and using the local law, we can show that, for \(k\geq 3\), \[\partial_{ij}^{k}e(t)=O_{\prec}\Big(\frac{q}{\sqrt{N}}\Big),\qquad(k\geq 3).\] (D.24)

With the above bounds from Lemma D.3, the error term \(R_{5}\) can be estimated as \[\begin{split}|R_{5}|&\leq CN^{2}\mathbb{E}|H_{ij}|^{6}\Big(\|\partial_{ij}^{5}(e(t)G_{ij})\|_{\infty}+\|e(t)\partial_{ij}^{5}G_{ij}\|_{\infty}\Big)\\ &=O_{\prec}\Big(N^{2}\frac{1}{Nq^{4}}\Big)=o_{\prec}\Big(\frac{\sqrt{N}}{q}\Big),\end{split}\] (D.25) which is negligible.
Similar as Lemma D.3, for all \(k\), we can show that \[\partial_{ii}^{k}e(t)=O_{\prec}(1).\] (D.26) Using Lemma B.7 with \(\ell=1\), we can expand \(I_{d}\) as follows \[\mathbb{E}\big{[}e(t)\sum_{i}(H_{ii}G_{ii}-\mathbb{E}[H_{ii}G_{ii} ])\big{]}\] (D.27) \[=\sum_{i}\kappa_{ii}^{(2)}\Big{(}\mathbb{E}\big{[}\partial_{ii}e (t)\cdot G_{ii}\big{]}+\mathbb{E}\big{[}(1-\mathbb{E})\big{(}\partial_{ii}G_{ ii}\big{)}\cdot e(t)\big{]}\Big{)}+O_{\prec}\big{(}\frac{1}{q}\big{)}.\] With (D.26) and local law, we can easily conclude that (D.27) is \(O(1)\) with overwhelming probability. _Remark D.4_.: In the rest of the section, for simplicity of notation, we sometimes omit the inequality sign for indices below the summation. #### d.1.2 estimate on \(I_{2}\) In this section, we will show that \(I_{2}\) is negligible, which is defined by \[I_{2}=\sum_{i\neq j}\frac{\kappa_{ij}^{(3)}}{2!}\Big{(}\mathbb{E}\big{[}\partial_ {ij}^{2}e(t)\cdot G_{ij}\big{]}+2\mathbb{E}\big{[}\partial_{ij}e(t)\cdot \partial_{ij}G_{ij}\big{]}+\mathbb{E}\big{[}(1-\mathbb{E})\big{(}\partial_{ij} ^{2}G_{ij}\big{)}\cdot e(t)\big{]}\Big{)}.\] Using Lemma D.2 and local law, one can show that the terms that cannot be clearly ignored are \[I_{2,1}:=\sum_{i\neq j}\kappa_{ij}^{(3)}\mathbb{E}[G_{ii}G_{jj}\partial_{ij}e(t)]\] (D.28) and \[I_{2,2}:=\sum_{i\neq j}\kappa_{ij}^{(3)}\mathbb{E}\big{[}e(t)\cdot(1-\mathbb{E })\big{(}G_{ii}G_{jj}G_{ij}\big{)}\big{]}.\] (D.29) We first estimate \(I_{2,1}\), \[I_{2,1} =\sum_{i\neq j}\kappa_{ij}^{(3)}\mathbb{E}[G_{ii}G_{jj}\partial_{ ij}e(t)]\] \[=m_{sc}^{2}\frac{t}{\pi}\frac{q}{\sqrt{N}}\sum_{i,j,k}\kappa_{ij }^{(3)}\mathbb{E}\big{[}e(t)\int_{\Gamma}f(z)G_{ik}(z)G_{kj}(z)\,\mathrm{d}z \big{]}+o_{\prec}\Big{(}\frac{\sqrt{N}}{q}\Big{)}\] \[=m_{sc}^{2}\frac{t}{\pi}\frac{q}{\sqrt{N}}\int_{\Gamma}f(z)\sum_ {i,j,k}\kappa_{ij}^{(3)}\mathbb{E}\big{[}e(t)G_{ik}G_{kj}\big{]}\,\mathrm{d}z +o_{\prec}\Big{(}\frac{\sqrt{N}}{q}\Big{)}.\] (D.30) To estimate \(\sum_{i,j,k}\kappa_{ij}^{(3)}\mathbb{E}\big{[}e(t)G_{ik}G_{kj}\big{]}\) with the error of \(o(N^{-1}q^{-2})\), we again multiply \(z\) and expand it using Lemma B.7 until \(l=10\). \[z\sum_{i,j,k}\kappa_{ij}^{(3)}\mathbb{E}\big{[}e(t)G_{ik}G_{kj} \big{]}=\sum_{i,j,k}\kappa_{ij}^{(3)}\mathbb{E}\big{[}e(t)\sum_{l}H_{il}G_{lk }G_{kj}\big{]}\] \[=\sum_{i,j,k,l}\kappa_{ij}^{(3)}\kappa_{il}^{(2)}\mathbb{E}\big{[} \partial_{il}(e(t)G_{lk}G_{kj})\big{]}+\sum_{i,j,k,l}\kappa_{ij}^{(3)}\frac{ \kappa_{il}^{(3)}}{2}\mathbb{E}\big{[}\partial_{il}^{2}(e(t)G_{lk}G_{kj})\big{]}\] \[\quad+\sum_{i,j,k,l}\kappa_{ij}^{(3)}\frac{\kappa_{il}^{(4)}}{6} \mathbb{E}\big{[}\partial_{il}^{3}(e(t)G_{lk}G_{kj})\big{]}+\cdots+R_{10}.\] (D.31) For each \(s\geq 3\), \(\kappa_{il}^{(s+1)}\leq\frac{C}{Nq^{2}}\) and \(\partial_{il}^{s}(e(t)G_{lk}G_{kj})\) contains at least two off-diagonal entries. Hence by Lemma D.2, that terms appear in (D.30) are negligible. Moreover, since \(q\gg N^{1/6}\), \(R_{10}\leq CN^{4}\frac{1}{Nq}\frac{1}{Nq^{9}}=C\frac{N^{2}}{q^{10}}=o_{\prec}( Nq^{-2})\) and thus negligible in (D.30). 
Using Lemma D.2 and Lemma D.3 with local law, we obtain the main order term of the first term of (D.31), \[\sum_{i,j,k,l}\kappa_{ij}^{(3)}\kappa_{il}^{(2)}\mathbb{E}\big{[} \partial_{il}(e(t)G_{lk}G_{kj})\big{]} =-\sum_{i,j,k,l}\kappa_{ij}^{(3)}\kappa_{il}^{(2)}\mathbb{E}\big{[} e(t)G_{ik}G_{kj}G_{ll}\big{]}+O_{\prec}(\frac{\sqrt{N}}{q})\] \[=-m_{sc}\sum_{i,j,k}\kappa_{ij}^{(3)}\mathbb{E}\big{[}e(t)G_{ik}G_ {kj}\big{]}+O_{\prec}(\frac{N}{q^{3}}+\frac{\sqrt{N}}{q}).\] (D.32) Similarly, we get the estimate for the second term of (D.31) as \[\sum_{i,j,k,l}\kappa^{(3)}_{ij}\frac{\kappa^{(3)}_{il}}{2}\mathbb{E} \big{[}\partial_{il}^{2}(e(t)G_{lk}G_{kj})\big{]} =\sum_{i,j,k,l}\kappa^{(3)}_{ij}\kappa^{(3)}_{il}\mathbb{E}\big{[} e(t)G_{ii}G_{ll}G_{lk}G_{kj}\big{]}+O_{\prec}(\frac{N}{q^{3}})\] \[=m_{sc}^{2}\sum_{i,j,l}\kappa^{(3)}_{ij}\kappa^{(3)}_{il}\mathbb{ E}\big{[}e(t)\sum_{k}G_{lk}G_{kj}\big{]}+O_{\prec}(\frac{N}{q^{3}})\] \[=m_{sc}^{2}\sum_{i,j,l}\kappa^{(3)}_{ij}\kappa^{(3)}_{il}\mathbb{ E}\big{[}e(t)(G^{2})_{lj}\big{]}+O_{\prec}(\frac{N}{q^{3}})\] \[=O_{\prec}(\frac{N}{q^{3}}).\] (D.33) To sum up, \[(z+m_{sc})\sum_{i,j,k}\kappa^{(3)}_{ij}\mathbb{E}\big{[}e(t)G_{ ik}G_{kj}\big{]}=O_{\prec}(\frac{N}{q^{3}}+\frac{\sqrt{N}}{q}).\] (D.34) Since \(|z+m_{sc}|>c\) for some constant \(c\), we can divide both side by \((z+m_{sc})\) and conclude that \[I_{2,1} =m_{sc}^{2}\frac{t}{\pi}\frac{q}{\sqrt{N}}\int_{\Gamma}f(z)\sum_{ i,j,k}\kappa^{(3)}_{ij}\mathbb{E}\big{[}e(t)G_{ik}G_{kj}\big{]}\,\mathrm{d}z+o \Big{(}\frac{\sqrt{N}}{q}\Big{)}\] \[=O_{\prec}(\frac{\sqrt{N}}{q^{2}}+1)+o_{\prec}\Big{(}\frac{\sqrt{ N}}{q}\Big{)},\] (D.35) and thus negligible. Now we move on to \(I_{2,2}\). Multiplying \(z\), we get \[zI_{2,2} =z\sum_{i,j}\kappa^{(3)}_{ij}\mathbb{E}\big{[}e(t)\cdot(1-\mathbb{ E})\big{(}G_{ii}G_{jj}G_{ij}\big{)}\big{]}\] \[=m_{sc}^{2}\sum_{i,j}\kappa^{(3)}_{ij}\mathbb{E}\big{[}e(t)\cdot( 1-\mathbb{E})\big{(}zG_{ij}\big{)}\big{]}+o_{\prec}(\frac{\sqrt{N}}{q}).\] (D.36) Similar as above, it can be easily checked that the only non negligible term of \(zI_{2,2}\) is \[-m_{sc}^{2}\sum_{i,j,k}\kappa^{(3)}_{ij}\kappa^{(2)}_{ik}\mathbb{ E}\big{[}e(t)\cdot(1-\mathbb{E})\big{(}G_{kk}G_{ij}\big{)}\big{]},\] (D.37) which can be estimated by \[-m_{sc}^{2}\sum_{i,j,k}\kappa^{(3)}_{ij}\kappa^{(2)}_{ik}\mathbb{ E}\big{[}e(t)\cdot(1-\mathbb{E})\big{(}G_{kk}G_{ij}\big{)}\big{]}=-m_{sc}^{3} \sum_{i,j}\kappa^{(3)}_{ij}\mathbb{E}\big{[}e(t)\cdot(1-\mathbb{E})\big{(}G_{ ij}\big{)}\big{]}+o_{\prec}(\frac{\sqrt{N}}{q}).\] (D.38) Combining (D.36) and (D.38), we conclude \[(z+m_{sc})I_{2,2}=o(\frac{\sqrt{N}}{q}),\] (D.39) thus \(I_{2,2}\) is negligible and so is \(I_{2}\). 
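We record here the elementary fact behind the repeated use of the bound \(|z+m_{sc}|>c\): by the self-consistent equation \(m_{sc}+1/m_{sc}+z=0\),
\[
z+m_{sc}(z)=-\frac{1}{m_{sc}(z)},\qquad\text{so}\qquad|z+m_{sc}(z)|=\frac{1}{|m_{sc}(z)|}\geq 1
\]
for every \(z\) off the interval \([-2,2]\), where \(|m_{sc}(z)|<1\); in particular the bound holds uniformly on \(\Gamma_{2}\).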
#### d.1.3 estimate on \(I_{3}\) Recall that \(I_{3}\) is defined by \[I_{3}=\sum_{i\neq j}\frac{\kappa_{ij}^{(4)}}{3!}\Big{(}\mathbb{E} \big{[}\partial_{ij}^{3}e(t)\cdot G_{ij}\big{]}+3\mathbb{E}\big{[}\partial_{ij}^ {2}e(t)\cdot\partial_{ij}G_{ij}\big{]}+3\mathbb{E}\big{[}\partial_{ij}e(t) \cdot\partial_{ij}^{2}G_{ij}\big{]}\] \[+\mathbb{E}\big{[}(1-\mathbb{E})\big{(}\partial_{ij}^{3}G_{ij} \big{)}\cdot e(t)\big{]}\Big{)}.\] (D.40) Using Lemma D.2 and local law, the only term that is not clearly negligible is \[I_{3,1}:=\sum_{i\neq j}\kappa_{ij}^{(4)}\mathbb{E}\big{[}e(t)\cdot[G_{ii}^{2}G_ {jj}^{2}-\mathbb{E}(G_{ii}^{2}G_{jj}^{2})]\big{]},\] which comes from \(\mathbb{E}\big{[}(1-\mathbb{E})\big{(}\partial_{ij}^{3}G_{ij}\big{)}\cdot e(t )\big{]}\) and \[-\sum_{i,j}\frac{\kappa_{ij}^{(4)}}{2}m_{sc}^{2}\mathbb{E}\big{[}\partial_{ij} ^{2}e(t)\big{]},\] which comes from \(\sum_{i\neq j}\kappa_{ij}^{(4)}\mathbb{E}\big{[}\partial_{ij}^{2}e(t)\cdot \partial_{ij}G_{ij}\big{]}\). Here, we claim that \(I_{3,1}\) is negligible. After multiplying \(z\), we have \[zI_{3,1} =\sum_{i,j}\kappa_{ij}^{(4)}\mathbb{E}\big{[}e(t)\cdot[zG_{ii}G_ {ii}G_{jj}^{2}-\mathbb{E}(zG_{ii}G_{ii}G_{jj}^{2})]\big{]}\] \[=\sum_{i,j}\kappa_{ij}^{(4)}\mathbb{E}\Big{[}\sum_{k}H_{ik}e(t)G_ {ik}G_{ii}G_{jj}^{2}-e(t)\mathbb{E}[\sum_{k}H_{ik}G_{ik}G_{ii}G_{jj}^{2}]\Big{]}\] \[\qquad-\sum_{i,j}\kappa_{ij}^{(4)}\mathbb{E}\big{[}e(t)\cdot[G_{ ii}G_{jj}^{2}-\mathbb{E}(G_{ii}G_{jj}^{2})]\big{]}\] \[=\sum_{i,j,k}\kappa_{ij}^{(4)}\kappa_{ik}^{(2)}\Big{(}\mathbb{E} \big{[}\partial_{ik}e(t)\cdot G_{ik}G_{ii}G_{jj}^{2}\big{]}+\mathbb{E}\big{[} (1-\mathbb{E})\big{(}\partial_{ik}G_{ik}G_{ii}G_{jj}^{2}\big{)}\cdot e(t) \big{]}\Big{)},\] \[\qquad+\sum_{i,j,k}\kappa_{ij}^{(4)}\frac{\kappa_{ik}^{(3)}}{2!} \Big{(}\mathbb{E}\big{[}(1-\mathbb{E})\big{(}\partial_{ik}^{2}G_{ik}G_{ii}G_{ jj}^{2}\big{)}\cdot e(t)\big{]}\Big{)}\] \[\qquad-\sum_{i,j}\kappa_{ij}^{(4)}\mathbb{E}\big{[}e(t)\cdot[G_{ ii}G_{jj}^{2}-\mathbb{E}(G_{ii}G_{jj}^{2})]\big{]}+o_{\prec}(\frac{\sqrt{N}}{q})\] \[=-m_{sc}\sum_{i,j}\kappa_{ij}^{(4)}\Big{(}\mathbb{E}\big{[}(1- \mathbb{E})\big{(}G_{ii}^{2}G_{jj}^{2}\big{)}\cdot e(t)\big{]}\Big{)}\] \[\qquad-\sum_{i,j}\kappa_{ij}^{(4)}\mathbb{E}\big{[}e(t)\cdot[G_{ ii}G_{jj}^{2}-\mathbb{E}(G_{ii}G_{jj}^{2})]\big{]}+o_{\prec}(\frac{\sqrt{N}}{q}).\] (D.41) However, in the same way, we can estimate the last term by \[\sum_{i,j}\kappa_{ij}^{(4)}\mathbb{E}\big{[}e(t)\cdot[(1-\mathbb{E})( G_{ii}G_{jj}^{2})]\big{]}\] \[=-m_{sc}\sum_{i,j}\kappa_{ij}^{(4)}\mathbb{E}\big{[}e(t)\cdot[(1- \mathbb{E})(G_{ii}G_{jj}^{2})]\big{]}-\sum_{i,j}\kappa_{ij}^{(4)}\mathbb{E} \big{[}e(t)\cdot[(1-\mathbb{E})(G_{jj}^{2})]\big{]}+o_{\prec}(\frac{\sqrt{N}}{ q})\] \[=-m_{sc}\sum_{i,j}\kappa_{ij}^{(4)}\mathbb{E}\big{[}e(t)\cdot[(1- \mathbb{E})(G_{ii}G_{jj}^{2})]\big{]}-\sum_{j}\sum_{i}\kappa_{ij}^{(4)} \mathbb{E}\big{[}e(t)\cdot[(1-\mathbb{E})(G_{jj}^{2})]\big{]}+o_{\prec}(\frac{ \sqrt{N}}{q})\] \[=-m_{sc}\sum_{i,j}\kappa_{ij}^{(4)}\mathbb{E}\big{[}e(t)\cdot[(1- \mathbb{E})(G_{ii}G_{jj}^{2})]\big{]}-2\frac{\xi^{(4)}}{q^{2}}m_{sc}\mathbb{E} \big{[}e(t)\cdot[(1-\mathbb{E})(\operatorname{Tr}G)]\big{]}+o_{\prec}(\frac{ \sqrt{N}}{q})\] \[=-m_{sc}\sum_{i,j}\kappa_{ij}^{(4)}\mathbb{E}\big{[}e(t)\cdot[(1- \mathbb{E})(G_{ii}G_{jj}^{2})]\big{]}+O_{\prec}(\frac{N}{q^{4}}).\] (D.42) Since \(|z+m_{sc}|>c\), the the last line of (D.41) is negligible, and therefore \(I_{3,1}\) is also negligible. 
To sum up, \[I_{3}=-\sum_{i,j}\frac{\kappa_{ij}^{(4)}}{2}m_{sc}^{2}\mathbb{E}\big{[}\partial _{ij}^{2}e(t)\big{]}+o_{\prec}(\frac{\sqrt{N}}{q}).\] (D.43) #### d.1.4 estimate on \(I_{4}\) and \(I_{1}\) By simple moment counting, it can be easily shown that all terms in \(I_{4}\) are negligible. We move on to \(I_{1}\) defined by \[I_{1}=\sum_{i\neq j}\kappa_{ij}^{(2)}\Big{(}\mathbb{E}\big{[}\partial_{ij}e(t )\cdot G_{ij}\big{]}+\mathbb{E}\big{[}(1-\mathbb{E})\big{(}\partial_{ij}G_{ij }\big{)}\cdot e(t)\big{]}\Big{)}.\] (D.44) Using Lemma D.2 and local law, we can ignore the first term of the right hand side of (D.44) and conclude that the only non negligible term of \(I_{1}\) is \[I_{1,1}=-\sum_{i\neq j}\kappa_{ij}^{(2)}\mathbb{E}\big{[}(1-\mathbb{E})\big{(} G_{ii}G_{jj}\big{)}\cdot e(t)\big{]}.\] (D.45) Multiplying \(z\) to \(I_{1,1}\) and using Lemma B.7 with sufficiently large \(l\), we have \[zI_{1,1}=\sum_{i,j,k}\kappa_{ij}^{(2)}\kappa_{ik}^{(2)}\mathbb{E }\big{[}e(t)(1-\mathbb{E})(G_{ii}G_{jj}G_{kk})\big{]}+\sum_{i,j,k}\kappa_{ij}^ {(2)}\kappa_{ik}^{(4)}\mathbb{E}\big{[}e(t)(1-\mathbb{E})(G_{ii}^{2}G_{jj}G_{ kk}^{2})\big{]}\] \[\qquad+\sum_{i,j,k}\kappa_{ij}^{(2)}\frac{\kappa_{ik}^{(4)}}{2} \mathbb{E}\big{[}\partial_{ij}^{2}e(t)\cdot G_{ii}G_{jj}G_{kk}\big{]}+\mathbb{ E}\big{[}e(t)(\operatorname{Tr}G-\mathbb{E}\operatorname{Tr}G)\big{]}+o_{\prec}(\frac{ \sqrt{N}}{q})\] \[=\widetilde{I}_{1}+\widetilde{I}_{2}+\widetilde{I}_{3}+\mathbb{E} \big{[}e(t)(\operatorname{Tr}G-\mathbb{E}\operatorname{Tr}G)\big{]}+o_{\prec}( \frac{\sqrt{N}}{q}),\] (D.46) after adding up all the easily negligible terms. We first estimate \(\widetilde{I}_{1}\). With local law, \(\sum_{j}\kappa_{ij}^{(2)}G_{jj}\) can be estimated by \(m_{sc}\), therefore \[\widetilde{I}_{1} =\sum_{i}\mathbb{E}\big{[}e(t)(1-\mathbb{E})(G_{ii}\sum_{j}\kappa_{ ij}^{(2)}G_{jj}\sum_{k}\kappa_{ik}^{(2)}G_{kk})\big{]}\] \[=-m_{sc}^{2}\mathbb{E}\big{[}e(t)(\operatorname{Tr}G-\mathbb{E} \operatorname{Tr}G)\big{]}+2m_{sc}\sum_{i,j}\kappa_{ij}^{(2)}\mathbb{E}\big{[} e(t)(1-\mathbb{E})(G_{ii}G_{jj})\big{]}+o_{\prec}(\frac{\sqrt{N}}{q}).\] (D.47) For \(\widetilde{I}_{2}\), after summing up all the terms for \(j\), we get \[\widetilde{I}_{2}=m_{sc}\sum_{i,k}\kappa_{ik}^{(4)}\mathbb{E}\big{[}e(t)(1- \mathbb{E})(G_{ii}^{2}G_{kk}^{2})\big{]}+o_{\prec}(\frac{\sqrt{N}}{q}).\] (D.48) Recall that \(I_{3,1}=\sum_{i\neq j}\kappa_{ij}^{(4)}\mathbb{E}\big{[}e(t)\cdot[G_{ii}^{2}G_ {jj}^{2}-\mathbb{E}(G_{ii}^{2}G_{jj}^{2})]\big{]}=O_{\prec}(\frac{N}{q^{4}})\). Hence, \(\widetilde{I}_{2}\) is also \(O_{\prec}(\frac{N}{q^{4}})\), thus negligible. Finally, \(\widetilde{I}_{3}\) can be computed by \[\widetilde{I}_{3}=m_{sc}^{3}\sum_{i,j}\frac{\kappa_{ij}^{(4)}}{2}\mathbb{E} \big{[}\partial_{ij}^{2}e(t)\big{]}+o(\frac{\sqrt{N}}{q}),\] (D.49) since \(G_{ii}\) can be estimated by \(m_{sc}\) with local law. 
To sum up, \(zI_{1,1}\) is computed by \[-\sum_{i\neq j}\kappa_{ij}^{(2)}\mathbb{E}\big{[}(1-\mathbb{E}) \big{(}G_{ii}G_{jj}\big{)}\cdot e(t)\big{]}\] \[= -m_{sc}^{2}\mathbb{E}\big{[}e(t)(\operatorname{Tr}G-\mathbb{E} \operatorname{Tr}G)\big{]}+2m_{sc}\sum_{i,j}\kappa_{ij}^{(2)}\mathbb{E}\big{[} e(t)(1-\mathbb{E})(G_{ii}G_{jj})\big{]}\] \[\quad+m_{sc}^{3}\sum_{i,j}\frac{\kappa_{ij}^{(4)}}{2}\mathbb{E} \big{[}\partial_{ij}^{2}e(t)\big{]}+\mathbb{E}\big{[}e(t)(\operatorname{Tr}G -\mathbb{E}\operatorname{Tr}G)\big{]}+o_{\prec}(\frac{\sqrt{N}}{q}).\] (D.50) After rearranging the terms containing \(G_{ii}G_{jj}\) to the left hand side, and using the identity that \(z+2m_{sc}=-\frac{m_{sc}^{\prime}}{m_{sc}}\), we finally get \[I_{1,1}=\frac{m_{sc}^{\prime}}{m_{sc}}(m_{sc}^{2}-1)\mathbb{E}\big{[}e(t)\cdot( 1-\mathbb{E})\operatorname{Tr}G\big{]}-m_{sc}^{2}m_{sc}^{\prime}\sum_{i\neq j} \frac{\kappa_{ij}^{(4)}}{2}\mathbb{E}\big{[}\partial_{ij}^{2}e(t)\big{]}+o_{ \prec}(\frac{\sqrt{N}}{q}).\] (D.51) ## Appendix E Proof of Proposition 3.4 Throughout this section, we say that a random variable \(Z\) is _negligible_ if \(|Z|=o(N^{-1/2}q^{-1})\) with overwhelming probability. For the convenience of notation, we define the parameter \(\Phi:=N^{-1/2-\delta}q^{-1}\) for sufficiently small \(\delta\). To prove Proposition 3.4, we need the following Lemma. **Lemma E.1**.: _Suppose that \(H\) satisfies conditions in Definition 1.2 with \(N^{-2/3}\ll p_{a}\ll 1\). Then,_ \[\mathbb{E}\big{[}(1+zm+m^{2}+\frac{\xi^{(4)}}{q^{2}}m^{4})(z+m+\zeta m)\big{]} =O_{\prec}(\Phi)\] _where \(\xi^{(4)}\) and \(\zeta\) are defined in (B.7)._ When we prove Lemma E.1, we have \[\mathbb{E}\big{[}(1+zm+m^{2})(z+m+\zeta m)\big{]}=-\mathbb{E}\big{[} \frac{\xi^{(4)}}{q^{2}}m^{4}(z+m+\zeta m)\big{]}+O_{\prec}(\Phi),\] (E.1) \[\mathbb{E}\big{[}(1+zm_{sc}+m_{sc}^{2})(z+m+\zeta m)\big{]}=0.\] (E.2) Using local law to estimate \(|m-m_{sc}|\prec q^{-2}\) and subtracting left hand side of these two equations, we obtain \[\mathbb{E}\big{[}(z(m-m_{sc})+m^{2}-m_{sc}^{2})(z+m(1+\zeta))\big{]}\] \[=\mathbb{E}\big{[}((z+2m_{sc})(m-m_{sc})+(m-m_{sc})^{2})(z+m(1+ \zeta))\big{]}\] \[=\mathbb{E}\big{[}((z+2m_{sc})(m-m_{sc})+(m-m_{sc})^{2})(z+m_{sc} (1+\zeta))\big{]}\] \[\qquad+\mathbb{E}\big{[}((z+2m_{sc})(m-m_{sc})+(m-m_{sc})^{2})((m -m_{sc})(1+\zeta))\big{]}\] \[=\mathbb{E}\big{[}((z+2m_{sc})(m-m_{sc})(z+m_{sc}(1+\zeta))\big{]} +O_{\prec}(\Phi).\] (E.3) Similarly subtracting right hand side of these two equations, we get \[-\mathbb{E}\big{[}\frac{\xi^{(4)}}{q^{2}}m^{4}(z+m+\zeta m)\big{]}=-\frac{\xi ^{(4)}}{q^{2}}m_{sc}^{4}(z+(1+\zeta)m_{sc})+O_{\prec}(\Phi).\] (E.4) Thus we conclude that \[\mathbb{E}\big{[}((z+2m_{sc})(m-m_{sc})(z+m_{sc}(1+\zeta))\big{]}=-\frac{\xi ^{(4)}}{q^{2}}m_{sc}^{4}(z+(1+\zeta)m_{sc})+O_{\prec}(\Phi),\] (E.5) and dividing the deterministic part, with overwhelming probability, we have \[\mathbb{E}\big{[}q\sqrt{N}(m-m_{sc})\big{]} =-\frac{\sqrt{N}}{q}\frac{1}{z+2m_{sc}}\xi^{(4)}m_{sc}^{4}+o(1)\] \[=\frac{\sqrt{N}}{q}\xi^{(4)}m_{sc}^{3}m_{sc}^{\prime}+o(1).\] (E.6) By (E.6), we finally obtain \[\mathbb{E}\Big{[}\frac{q}{\sqrt{N}}\big{(}L_{H}(f)-N\langle\mu_{ sc},f\rangle\big{)}\Big{]} =-\frac{1}{2\pi\,\mathrm{i}}\oint_{\Gamma}f(z)\mathbb{E}\big{[}q \sqrt{N}(m(z)-m_{sc}(z))\big{]}\,\mathrm{d}z\] \[=-\frac{1}{2\pi\,\mathrm{i}}\frac{\sqrt{N}}{q}\oint_{\Gamma}f(z) \xi^{(4)}m_{sc}^{3}m_{sc}^{\prime}\,\mathrm{d}z+o(1)\] \[=\frac{\sqrt{N}}{q}\xi^{(4)}\tau_{4}(f)+o(1).\] (E.7) Indeed, since \(q^{2}=Np_{a}\) and 
\(\xi^{(4)}=1+o(\frac{q}{\sqrt{N}})\) for \(H\) given in (1.1), Proposition 3.4 follows. ### Proof of Lemma e.1 To prove Lemma E.1, we return to Lemma B.7 which reads \[\mathbb{E}\Big{[}(z+m+2\zeta m)(1+zm)\Big{]}\] \[= \frac{1}{N}\sum_{i\neq k}\sum_{r=1}^{l}\frac{\kappa_{ik}^{(r+1)}} {r!}\mathbb{E}\Big{[}(\partial_{ik})^{r}\Big{(}G_{ik}(z+m+2\zeta m)\Big{)} \Big{]}+\mathbb{E}\Big{[}\Omega_{l}\Big{(}(z+m+2\zeta m)(1+zm)\Big{)}\Big{]},\] (E.8) where \(\partial_{ij}=\partial/(\partial H_{ik})\) as before. Remark that similar as proof of (D.10), we can ignore when \(i=k\). We leave details to the reader. Abbreviate \[I\equiv I(z,m)=(z+m+2\zeta m)(1+zm)=Q(1+zm).\] (E.9) where \(Q:=z+m+2\zeta m\). Then we can rewrite the cumulant expansion as \[\mathbb{E}I=\sum_{r=1}^{l}\sum_{s=0}^{r}w_{I_{r,s}}\mathbb{E}I_{r,s}+\mathbb{E }\Omega_{l}(I),\] (E.10) where we set \[I_{r,s} =\frac{1}{N}\sum_{i\neq k}\kappa_{ik}^{(r+1)}\Big{(}\partial_{ik} ^{r-s}G_{ik}\Big{)}\Big{(}\partial_{ik}^{s}Q\Big{)},\] (E.11) \[w_{I_{r,s}} =\frac{1}{(r-s)!s!}.\] (E.12) We can prove Lemma E.1 directly from the following result. **Lemma E.2**.: _Choose \(\ell\geq 10\). Then we have,_ \[w_{I_{1,0}}\mathbb{E}\big{[}I_{1,0}\big{]} =-\mathbb{E}\Big{[}(z+m+2\zeta m)(1-\zeta)m^{2}+2\zeta^{2}m^{3}- \zeta m-\zeta\frac{\xi^{(4)}}{q^{2}}m^{5}\Big{]}+O(\Phi),\] (E.13) \[w_{I_{2,0}}\mathbb{E}\big{[}I_{2,0}\big{]} =O(\Phi),\] \[w_{I_{3,0}}\mathbb{E}\big{[}I_{3,0}\big{]} =-\mathbb{E}\Big{[}q^{-2}\xi^{(4)}Qm^{4}\Big{]}+O(\Phi),\] \[w_{I_{r,0}}\mathbb{E}\big{[}I_{r,0}\big{]} =O(\Phi),\qquad(4\leq r\leq\ell),\] \[w_{I_{r,s}}\big{|}\mathbb{E}\big{[}I_{r,s}\big{]}\big{|} =O(\Phi),\qquad(1\leq s\leq r\leq\ell),\] _uniformly in \(z\in\Gamma_{2}\), for \(N\) sufficiently large. Moreover, we have, for the remainder term,,_ \[\mathbb{E}\Omega_{l}(I)=O(\Phi),\] (E.14) _uniformly in \(z\in\Gamma_{2}\), for \(N\) sufficiently large._ _Proof of Lemma E.1._ Recall that \(Q:=(z+m+2\zeta m)\). From Lemma E.2, \[\mathbb{E}\big{[}(zm+1)(z+m+2\zeta m)\big{]}\] \[=\mathbb{E}\Big{[}-Q\big{(}m^{2}+\frac{\xi^{(4)}}{q^{2}}m^{4} \big{)}\Big{]}+\mathbb{E}\Big{[}\zeta m\big{(}1+zm+m^{2}+\frac{\xi^{(4)}}{q^{ 2}}m^{4}\big{)}\Big{]}+O(\Phi).\] Collecting terms which contain \(Q\), it can be shown that \[\mathbb{E}\big{[}(1+zm+m^{2}+\frac{\xi^{(4)}}{q^{2}}m^{4})(z+m+2\zeta m)\big{]} =\mathbb{E}\big{[}\zeta m(1+zm+m^{2}+\frac{\xi^{(4)}}{q^{2}}m^{4})\big{]}+O( \Phi).\] Hence we conclude that \[\mathbb{E}\big{[}(1+zm+m^{2}+\frac{\xi^{(4)}}{q^{2}}m^{4})(z+m+\zeta m)\big{]}=O( \Phi).\] This proves Lemma E.1. We now choose an initial (small) \(\epsilon>0\). We use the factor \(N^{\epsilon}\) and allow \(\epsilon\) to increase by a tiny amount from line to line. We often drop \(z\) from the notation; it always understood that \(z\in\Gamma_{2}\) and all estimates uniform on \(\Gamma_{2}\) and also for sufficiently large \(N\). The proof of Lemma E.2 is done in remaining Subsections E.1.1-E.1.5 where \(\mathbb{E}I_{r,s}\) and the error terms controlled. Obviously, the error term \(\mathbb{E}\Omega_{l}(I)\) is negligible by simple power counting, we omit the details. #### e.1.1 Estimate on \(I_{2,0}\) Recall the definition of \(I_{r,s}\). We have \[I_{2,0}:=\frac{1}{N}\sum_{i\neq k}\kappa_{ij}^{(3)}\Big{(}\partial_{ij}^{2}G_{ ij}\Big{)}Q\] We note that \(I_{2,0}\) contains terms with one or three off-diagonal Green function entries \(G_{ij}\). We denote them by \(\mathbb{E}I_{2,0}^{(1)}\) and \(\mathbb{E}I_{2,0}^{(3)}\), respectively. 
**Lemma E.3**.: _For any small \(\epsilon>0\) and for all \(z\in\mathcal{E}\), we have_ \[|\mathbb{E}I_{2,0}^{(1)}|\leq\frac{N^{\epsilon}}{q^{2}\sqrt{N}}+\Phi,\qquad| \mathbb{E}I_{2,0}^{(3)}|\leq\Phi.\] (E.15) _for \(N\) sufficiently large. In particular, \(\mathbb{E}I_{2,0}\) is negligible._ Proof.: Since \(I_{2,0}^{(3)}\) has three off diagonal terms, we can bound one \(G_{ij}\) by \(q^{-1}\) and extract one factor of \(\frac{1}{N}\) from others. Thus we have \[\left|\mathbb{E}I_{2,0}^{(3)}\right|=\left|N\mathbb{E}\left[\sum_{i\neq j} \frac{\kappa_{ij}^{(3)}}{N^{2}}G_{ij}^{3}Q\right]\right|\leq\frac{N^{\epsilon }}{Nq^{2}}\leq\Phi.\] (E.16) Fix a small \(\epsilon>0\). From the definition of \(I_{2,0}^{(1)}\) and local law, we have \[\mathbb{E}I_{2,0}^{(1)}=N\mathbb{E}\left[\sum_{i\neq j}\frac{\kappa_{ij}^{(3 )}}{N^{2}}G_{ij}G_{ii}G_{jj}Q\right]=N\mathbb{E}\left[\sum_{i\neq j}\frac{ \kappa_{ij}^{(3)}}{N^{2}}G_{ij}m^{2}Q\right]+\Phi\] (E.17) Using the resolvent formula we expand in the index \(j\) to get \[z\mathbb{E}I_{2,0}^{(1)}=N\mathbb{E}\left[\sum_{i\neq j\neq k}\frac{\kappa_{ ij}^{(3)}}{N^{2}}H_{jk}G_{ki}m^{2}Q\right].\] (E.18) Similar as Section D.1.2 and D.1.3, applying the cumulant expansion to the right side of (E.18), we can show that the leading term of (E.18) is \(-\mathbb{E}[mI_{2,0}^{(1)}]\). Then changing \(m(z)\) by the deterministic quantity \(m_{sc}(z)\) and showing all the other terms in the cumulant expansion are negligible. Then we will get \[|z+m_{sc}(z)||\mathbb{E}I_{2,0}^{(1)}|\leq\frac{N^{\epsilon}}{q^{2}\sqrt{N}}+O( \Phi)=O(\Phi),\] (E.19) for sufficiently large \(N\). Since \(|z+m_{sc}|>c\) for some constant \(c>0\) uniformly on \(\Gamma_{2}\), the lemma follows directly. For simplicity we abbreviate \(\hat{I}\equiv I_{2,0}^{(1)}\). Using Lemma B.7, for arbitrary \(l^{\prime}\in\mathbb{N}\) we have the cumulant expansion \[z\mathbb{E}\hat{I}=\sum_{r^{\prime}=1}^{l^{\prime}}\sum_{s^{\prime}=0}^{r^{ \prime}}w_{\hat{I}_{r^{\prime},s^{\prime}}}\mathbb{E}\hat{I}_{r^{\prime},s^{ \prime}}+\mathbb{E}\Omega_{l^{\prime}}(\hat{I})\] (E.20) with \[\hat{I}_{r^{\prime},s^{\prime}}=\frac{1}{N}\sum_{i\neq j\neq k}\kappa_{ij}^{( 3)}\kappa_{jk}^{(r^{\prime}+1)}\Big{(}\partial_{jk}^{r^{\prime}-s^{\prime}}G_ {ki}\Big{)}\Big{(}\partial_{jk}^{s^{\prime}}m^{2}Q\Big{)}\] (E.21) with \(w_{\hat{I}_{r^{\prime},s^{\prime}}}=\frac{1}{(r^{\prime}-s^{\prime})!s^{ \prime}!}\). It can be checked that the error term \(\mathbb{E}\Omega_{l^{\prime}}(\hat{I})\) is negligible for large \(l^{\prime}\). _Remark E.4_.: Consider the terms \(\hat{I}_{r^{\prime},s^{\prime}}\) with \(1\leq s^{\prime}\leq r^{\prime}\). We claim that it is enough to show that \[\widetilde{I}_{r^{\prime},0}:=\frac{1}{N}\sum_{i\neq j\neq k}\kappa_{ij}^{(3)} \kappa_{jk}^{(r^{\prime}+1)}\Big{(}\partial_{jk}^{r^{\prime}}G_{ki}\Big{)}m^{ 2}Q,\] (E.22) are negligible for \(r^{\prime}\geq 2\) since \(\partial_{jk}^{r^{\prime}}G_{ki}\) contains at least one off-diagonal entry. Consider when \(\partial_{jk}\) acts on \(Qm^{2}\). Note that \(Q\) is third order polynomial in \(m\). Using \(|Q^{\prime}|\prec 1\), \(|Q^{\prime\prime}|\prec 1\), \(|Q^{\prime\prime\prime}|\prec 1\) and Lemma D.2 we have \[|\partial_{jk}(m^{2}Q)|=\Big{|}\Big{(}\frac{1}{N}\sum_{u=1}^{N}G_{uj}G_{ku} \Big{)}(m^{2}Q)^{\prime}\Big{|}\prec\frac{1}{N},\] (E.23) where the summation index \(u\) is generated from \(\partial_{ik}Q\). 
More generally, it can be easily shown that \(\partial_{jk}^{s^{\prime}}(m^{2}Q)\) contains at least two off-diagonal Green function entries for \(n\geq 1\). Thus we conclude that if \(\partial_{jk}\) acts \(s^{\prime}\geq 1\) times on \(m^{2}Q\) then with \(|G|\prec 1\), we have \[\hat{I}_{r^{\prime},s^{\prime}}=\frac{1}{N}\sum_{i\neq j\neq k}\kappa_{ij}^{(3 )}\kappa_{jk}^{(r^{\prime}+1)}\Big{(}\partial_{jk}^{r^{\prime}-s^{\prime}}G_{ ki}\Big{)}\Big{(}\partial_{jk}^{s^{\prime}}m^{2}Q\Big{)}\prec\frac{1}{q^{r}N},\] (E.24) which is negligible. We now estimate \(\hat{I}_{r^{\prime},0}\). For \(r^{\prime}=1\), we compute \[\mathbb{E}\hat{I}_{1,0} =-\mathbb{E}\Big{[}\frac{1}{N}\sum_{i\neq j\neq k}\kappa_{ij}^{(3 )}\kappa_{jk}^{(2)}G_{ji}G_{kk}m^{2}Q\Big{]}-\mathbb{E}\Big{[}\frac{1}{N}\sum_ {i\neq j\neq k}\kappa_{ij}^{(3)}\kappa_{jk}^{(2)}G_{jk}G_{ki}m^{2}Q\Big{]}\] \[=:\mathbb{E}\hat{I}_{1,0}^{(1)}+\mathbb{E}\hat{I}_{1,0}^{(2)}\] (E.25) where we organize the terms according to the off-diagonal Green function entries. By Lemma D.2, \[|\mathbb{E}\hat{I}_{1,0}^{(2)}|\leq\frac{N^{\epsilon}}{Nq}\leq\Phi.\] (E.26) We rewrite \(\hat{I}^{(1)}_{1,0}\) with \(m_{sc}\) as \[\mathbb{E}\hat{I}^{(1)}_{1,0} =-\mathbb{E}\Big{[}\frac{1}{N}\sum_{i\neq j\neq k}\kappa^{(3)}_{ij} \kappa^{(2)}_{jk}G_{ji}G_{kk}m^{2}Q\Big{]}\] \[=-\mathbb{E}\Big{[}\frac{1}{N}\sum_{i\neq j}\kappa^{(3)}_{ij}G_{ji }m^{3}Q\Big{]}+O(\Phi)\] \[=-\mathbb{E}\Big{[}\frac{1}{N}\sum_{i\neq j}\kappa^{(3)}_{ij}G_{ji }m_{sc}m^{2}Q\Big{]}-\mathbb{E}\Big{[}\frac{1}{N}\sum_{i\neq j}\kappa^{(3)}_{ ij}(m-m_{sc})G_{ji}m^{2}Q\Big{]}+O(\Phi).\] (E.27) By local law, for sufficiently large \(N\), the second term in (E.27) bounded as \[\left|\mathbb{E}\Big{[}\frac{1}{N}\sum_{i\neq j}\kappa^{(3)}_{ij}(m-m_{sc})G_{ ji}m^{2}Q\Big{]}\right|\leq\frac{N^{\epsilon}}{q}\mathbb{E}\Big{[}\frac{1}{N^{2}} \sum_{i\neq j}|m-m_{sc}||G_{ij}||Q|\Big{]}\leq\frac{N^{\epsilon}}{\sqrt{N}q^{3 }}=O(\Phi).\] (E.28) Thus, we get that \[\mathbb{E}\hat{I}_{1,0}=-m_{sc}\mathbb{E}\Big{[}\frac{1}{N}\sum_{i\neq j} \kappa^{(3)}_{ij}G_{ji}m^{2}Q\Big{]}+O(\Phi)=-m_{sc}\mathbb{E}I^{(1)}_{2,0}+O (\Phi).\] (E.29) We remark that in the expansion of \(\mathbb{E}\hat{I}=\mathbb{E}I^{(1)}_{2,0}\) the only term with one off-diagonal entry is \(\mathbb{E}\hat{I}^{(1)}_{2,0}\). All the other terms contain at least two off-diagonal entries, thus, negligible. To sum up, we find that for sufficiently large \(N\), \[|z+m_{sc}||\mathbb{E}I^{(1)}_{2,0}|=O(\Phi).\] (E.30) Since \(|z+m_{sc}|>c\), we obtain \(|\mathbb{E}I^{(1)}_{2,0}|=O(\Phi)\). This concludes the proof of (E.15). Summarizing, we showed that \[|EI_{2,0}|\leq\Phi,\] (E.31) for \(N\) sufficiently large and the second estimate in (E.13) is proved. #### e.1.2 Estimate on \(I_{3,0}\) Note that \(I_{3,0}\) contains terms with zero, two or four off-diagonal Green function entries. We split accordingly \[w_{I_{3,0}}I_{3,0}=w_{I^{(0)}_{3,0}}I^{(0)}_{3,0}+w_{I^{(2)}_{3,0}}I^{(2)}_{3,0}+w_{I^{(4)}_{3,0}}I^{(4)}_{3,0}.\] When there are two off-diagonal entries, from Lemma D.2, we obtain \[|\mathbb{E}I^{(2)}_{3,0}|\leq\left|N\max_{i,j}\kappa^{(4)}_{ij}\mathbb{E} \Big{[}\frac{1}{N^{2}}\sum_{i\neq j}G_{ii}G_{jj}(G_{ij})^{2}Q\Big{]}\right| \leq\frac{N^{\epsilon}}{Nq^{2}}\mathbb{E}|Q|\leq\Phi,\] (E.32) for sufficiently large \(N\) and similar argument holds for \(\mathbb{E}I^{(4)}_{3,0}\). Thus the only non-negligible term is \(I^{(0)}_{3,0}\). 
\[w_{I_{3,0}^{(0)}}\mathbb{E}I_{3,0}^{(0)} =-\frac{1}{N}\mathbb{E}\Bigl{[}\sum_{i\neq j}\kappa_{ij}^{(4)}G_{ ii}^{2}G_{jj}^{2}Q\Bigr{]}\] \[=-\frac{1}{N}\mathbb{E}\Bigl{[}\sum_{i,j}\kappa_{d}^{(4)}G_{ii}^{2 }G_{jj}^{2}Q\Bigr{]}-\frac{1}{N}\mathbb{E}\Bigl{[}\sum_{i}\sum_{j\sim i}(\kappa _{s}^{(4)}-\kappa_{d}^{(4)})G_{ii}^{2}G_{jj}^{2}Q\Bigr{]}\] \[=-\frac{1}{N}\kappa_{d}^{(4)}\mathbb{E}\Bigl{[}\sum_{i,j}G_{ii}^{2 }G_{jj}^{2}Q\Bigr{]}-\frac{1}{N}(\kappa_{s}^{(4)}-\kappa_{d}^{(4)})\mathbb{E} \Bigl{[}\sum_{i}\sum_{j\sim i}G_{ii}^{2}G_{jj}^{2}Q\Bigr{]}\] \[=-\frac{1}{N}\kappa_{d}^{(4)}\mathbb{E}\Bigl{[}\sum_{i,j}G_{ii}^{2 }G_{jj}^{2}Q\Bigr{]}-\frac{1}{N}(\kappa_{s}^{(4)}-\kappa_{d}^{(4)})\mathbb{E} \Bigl{[}\sum_{i}\sum_{j\sim i}G_{ii}^{2}G_{jj}^{2}Q\Bigr{]}.\] (E.33) We have \[G_{ii}^{2}G_{jj}^{2}=(G_{ii}^{2}-m^{2})(G_{jj}^{2}-m^{2})+m^{2}G _{ii}^{2}+m^{2}G_{jj}^{2}-m^{4}\] \[=O(q^{-2})+m^{2}\Bigl{(}(G_{ii}-m)^{2}+2G_{ii}m-m^{2}\Bigr{)}+m^{2 }\Bigl{(}(G_{jj}-m)^{2}+2G_{jj}m-m^{2}\Bigr{)}-m^{4}\] \[=O(q^{-2})+2m^{3}(G_{ii}+G_{jj})-3m^{4},\] (E.34) where \(|G_{ii}-m|\prec q^{-1}\) by local semicircle law. Therefore, for the first term, we can conclude that \[\frac{1}{N}\kappa_{d}^{(4)}\mathbb{E}\Bigl{[}\sum_{i,j}G_{ii}^{2} G_{jj}^{2}Q\Bigr{]} =\frac{1}{N}\kappa_{d}^{(4)}\mathbb{E}\Bigl{[}\sum_{i,j}\left(2m ^{3}(G_{ii}+G_{jj})-3m^{4}\right)Q\Bigr{]}+O(\frac{N^{\epsilon}}{q^{4}})\] \[=N\kappa_{d}^{(4)}\mathbb{E}\Bigl{[}m^{4}Q\Bigr{]}+O(\frac{N^{ \epsilon}}{q^{4}}).\] (E.35) Similarly we can estimate the second term by \[\frac{1}{N}(\kappa_{s}^{(4)}-\kappa_{d}^{(4)})\mathbb{E}\Bigl{[} \sum_{i}\sum_{j\sim i}G_{ii}^{2}G_{jj}^{2}Q\Bigr{]}\] \[=\frac{1}{N}(\kappa_{s}^{(4)}-\kappa_{d}^{(4)})\mathbb{E}\Bigl{[} \sum_{i}\sum_{j\sim i}\left(2m^{3}(G_{ii}+G_{jj})-3m^{4}\right)Q\Bigr{]}+O( \frac{N^{\epsilon}}{q^{4}})\] \[=\frac{N}{K}(\kappa_{s}^{(4)}-\kappa_{d}^{(4)})\mathbb{E}\Bigl{[} m^{4}Q\Bigr{]}+O(\Phi).\] (E.36) Therefore we obtain \[w_{I_{3,0}^{(0)}}\mathbb{E}I_{3,0}^{(0)} =-\frac{1}{N}\mathbb{E}\Bigl{[}\sum_{i\neq j}\kappa_{ij}^{(4)}G_ {ii}^{2}G_{jj}^{2}Q\Bigr{]}\] \[=-N\kappa_{d}^{(4)}\mathbb{E}\Bigl{[}m^{4}Q\Bigr{]}-\frac{N}{K}( \kappa_{s}^{(4)}-\kappa_{d}^{(4)})\mathbb{E}\Bigl{[}m^{4}Q\Bigr{]}+O(\Phi)\] \[=-\mathbb{E}\Bigl{[}q^{-2}\xi^{(4)}Qm^{4}\Bigr{]}+O(\Phi).\] (E.37) #### e.1.3 Estimate on \(I_{r,0}\) for \(r\geq 4\) For \(r\geq 5\) we use the bound \(|G_{ii}|\prec 1\) to obtain \[|\mathbb{E}I_{r,0}|\leq\Bigl{|}N\mathbb{E}\Bigl{[}\frac{1}{N^{2}} \kappa_{ij}^{(r+1)}\sum_{i\neq j}(\partial_{ij}^{r}G_{ij})Q\Bigr{]}\Bigr{|} \leq\frac{N^{\epsilon}}{q^{4}}\mathbb{E}\Bigl{[}\frac{1}{N^{2}}\sum_{i\neq j}1 \Bigr{]}=O(\Phi),\] (E.38) for sufficiently large \(N\). For \(r=4\), \(\partial_{ij}^{r}G_{ij}\) contains at least one off-diagonal term. Hence \[\Big{|}N\mathbb{E}\Big{[}\frac{1}{N^{2}}\sum_{i\neq j}\kappa_{ij}^{(5)}(\partial_ {ij}^{r}G_{ij})Q\Big{]}\Big{|}\leq\frac{N^{\epsilon}}{q^{3}}\mathbb{E}\Big{[} \frac{1}{N^{2}}\sum_{i\neq j}|G_{ij}||Q|\Big{]}\leq\frac{CN^{\epsilon}}{\sqrt {N}q^{3}}=O(\Phi),\] (E.39) for \(N\) sufficiently large. Thus we can conclude that all \(I_{r,0},r\geq 4\) are negligible. #### e.1.4 Estimate on \(I_{r,s}\) for \(r\geq 2,s\geq 1\) Similar to Remark E.4, if \(\partial_{jk}\) act on \(Q\) then it can be easily shown that those terms are negligible. We leave details for the reader. #### e.1.5 Estimate on \(I_{1,0}\) Finally we only need to estimate \(\mathbb{E}I_{1,0}\). 
We have \[\mathbb{E}I_{1,0}= \frac{1}{N}\sum_{i\neq j}\kappa_{ij}^{(2)}\mathbb{E}\Big{[} \Big{(}\partial_{ij}G_{ij}\Big{)}Q\Big{]}\] \[= -\frac{1}{N}\mathbb{E}\Big{[}\sum_{i,j}\kappa_{ij}^{(2)}G_{ii}G_ {jj}Q\Big{]}+\frac{1}{N}\mathbb{E}\Big{[}\sum_{i}\kappa_{ii}^{(2)}G_{ii}^{2}Q \Big{]}-\frac{1}{N}\mathbb{E}\Big{[}\sum_{i\neq j}\kappa_{ij}^{(2)}G_{ij}^{2} Q\Big{]}\] \[=: -\mathbb{E}I_{1,0}^{(2)}+\mathbb{E}I_{1,0}^{(1)}-\mathbb{E}I_{1, 0}^{(0)}\] (E.40) The second and third term can be bounded by \(\Phi\) since \[|\mathbb{E}I_{1,0}^{(1)}|=\left|\frac{1}{N}\mathbb{E}\Big{[}\sum_{i}\kappa_{s }^{(2)}G_{ii}^{2}Q\Big{]}\right|=O(\Phi),\] (E.41) \[|\mathbb{E}I_{1,0}^{(0)}|=\left|\frac{1}{N}\mathbb{E}\Big{[}\sum_{i\neq j} \kappa_{ij}^{(2)}G_{ij}^{2}Q\Big{]}\right|\leq\frac{C}{N}\mathbb{E}|Q|=O(\Phi).\] (E.42) Hence we only need to estimate \[\mathbb{E}I_{1,0}^{(2)}=\frac{1}{N}\mathbb{E}\Big{[}\sum_{i,j} \kappa_{ij}^{(2)}G_{ii}G_{jj}Q\Big{]}\] \[=\frac{1}{N}\mathbb{E}\Big{[}\sum_{i,j}\kappa_{d}^{(2)}G_{ii}G_{ jj}Q\Big{]}+\frac{1}{N}\mathbb{E}\Big{[}\sum_{i}\sum_{j\sim i}(\kappa_{s}^{(2)}- \kappa_{d}^{(2)})G_{ii}G_{jj}Q\Big{]}\] \[=\mathbb{E}\Big{[}(1-\zeta)m^{2}Q\Big{]}+\frac{\zeta K}{N^{2}} \mathbb{E}\Big{[}\sum_{i}\sum_{j\sim i}G_{ii}G_{jj}Q\Big{]}\] \[=\mathbb{E}\Big{[}(1-\zeta)m^{2}Q\Big{]}+\frac{\zeta K}{N^{2}} \mathbb{E}\Big{[}\sum_{i}\sum_{j\sim i}zG_{ii}G_{jj}\Big{]}+\frac{\zeta K}{N^{ 2}}\mathbb{E}\Big{[}\sum_{i}\sum_{j\sim i}G_{ii}G_{jj}(m+2\zeta m)\Big{]}.\] (E.43) Using Lemma B.7, we expand the second term of (E.43) as \[\frac{\zeta K}{N^{2}}\mathbb{E}\Bigl{[}\sum_{i}\sum_{j\sim i}zG_{ii} G_{jj}\Bigr{]} =\frac{\zeta K}{N^{2}}\mathbb{E}\Bigl{[}\sum_{i}\sum_{j\sim i} \Bigl{(}\sum_{k}H_{ik}G_{ki}-1\Bigr{)}G_{jj}\Bigr{]}\] \[=\frac{\zeta K}{N^{2}}\mathbb{E}\Bigl{[}\sum_{i}\sum_{j\sim i} \sum_{k\neq i}H_{ik}G_{ki}G_{jj}\Bigr{]}-\zeta\mathbb{E}\bigl{[}m\bigr{]}\] \[=\sum_{r=1}^{l}\sum_{s=0}^{r}w_{J_{r,s}}\mathbb{E}J_{r,s}-\zeta \mathbb{E}\bigl{[}m\bigr{]}+O(\frac{N^{\epsilon}}{q^{l}}),\] (E.44) where \[w_{J_{r}}=\frac{1}{r!},\quad J_{r}=\frac{\zeta K}{N^{2}}\sum_{i} \sum_{j\sim i}\sum_{k\neq i}\kappa_{ik}^{(r+1)}\mathbb{E}\Bigl{[}\partial_{ik} ^{r}\Bigl{(}G_{ik}G_{jj}\Bigr{)}\Bigr{]}.\] (E.45) Similar as estimating \(I_{r,s}\), it can be shown that all terms of \(J_{r}\) and the error term are negligible except \(J_{1}\) and \(J_{3}\) by using Lemma D.2. We omit the details. #### e.1.6 Estimate on \(J_{1}\) \[\mathbb{E}J_{1} =\frac{\zeta K}{N^{2}}\sum_{i}\sum_{j\sim i}\sum_{k\neq i}\kappa_ {ik}^{(2)}\mathbb{E}\Bigl{[}\Bigl{(}\partial_{ik}G_{ik}G_{jj}\Bigr{)}\Bigr{]}\] \[=-\frac{\zeta K}{N^{2}}\sum_{i}\sum_{j\sim i}\sum_{k}\kappa_{d}^ {(2)}\mathbb{E}\Bigl{[}G_{ii}G_{jj}G_{kk}+G_{ik}^{2}G_{kk}\Bigr{]}\] (E.46) \[-\frac{\zeta K}{N^{2}}\sum_{i}\sum_{j\sim i}\sum_{k\neq i}( \kappa_{s}^{(2)}-\kappa_{d}^{(2)})\mathbb{E}\Bigl{[}G_{ii}G_{jj}G_{kk}+G_{ik} ^{2}G_{kk}\Bigr{]}\] \[-2\frac{\zeta K}{N^{2}}\sum_{i}\sum_{j\sim i}\sum_{k\neq i}\kappa_ {ik}^{(2)}\mathbb{E}\Bigl{[}G_{ij}G_{jk}G_{ki}\Bigr{]}.\] Then we can show that \[\mathbb{E}J_{1}= -\frac{\zeta K}{N^{2}}\sum_{i}\sum_{j\sim i}\sum_{k}\kappa_{d}^{( 2)}\mathbb{E}\Bigl{[}G_{ii}G_{jj}G_{kk}\Bigr{]}\] \[-\frac{\zeta K}{N^{2}}\sum_{i}\sum_{j\sim i}\sum_{k\sim i}( \kappa_{s}^{(2)}-\kappa_{d}^{(2)})\mathbb{E}\Bigl{[}G_{ii}G_{jj}G_{kk}\Bigr{]} +O(\Phi),\] (E.47) since other terms are all negligible, similar as proving (E.42) and (E.41). 
The first term can be computed by \[-\frac{\zeta K}{N^{2}}\sum_{i}\sum_{j\sim i}\sum_{k\neq i}\kappa_ {d}^{(2)}\mathbb{E}\Bigl{[}G_{ii}G_{jj}G_{kk}\Bigr{]} =-\frac{\zeta K}{N}\sum_{i}\sum_{j\sim i}\kappa_{d}^{(2)}\mathbb{ E}\Bigl{[}mG_{ii}G_{jj}\Bigr{]}\] \[=-\frac{\zeta(1-\zeta)K}{N^{2}}\sum_{i}\sum_{j\sim i}\mathbb{E} \Bigl{[}mG_{ii}G_{jj}\Bigr{]}.\] (E.48) For the second term, we have \[-\frac{\zeta K}{N^{2}}\sum_{i}\sum_{j\sim i}\sum_{k\sim i}(\kappa_{s }^{(2)}-\kappa_{d}^{(2)})\mathbb{E}\Big{[}G_{ii}G_{jj}G_{kk}\Big{]}=-\frac{ \zeta K}{N^{2}}\sum_{i}\sum_{j\sim i}\sum_{k\sim i}(\kappa_{s}^{(2)}-\kappa_{d }^{(2)})\mathbb{E}\Big{[}G_{ii}G_{jj}G_{kk}\Big{]}\] \[=-\frac{\zeta^{2}K^{2}}{N^{3}}\sum_{i}\sum_{j\sim i}\sum_{k\sim i} \mathbb{E}\Big{[}G_{ii}G_{jj}G_{kk}\Big{]}\] \[=-\frac{\zeta^{2}K^{2}}{N^{3}}\sum_{i\sim j\sim k}\mathbb{E}\Big{[} m^{3}-3m^{2}G_{ii}+3mG_{ii}G_{jj}\Big{]}\] \[=-\frac{\zeta^{2}K^{2}}{N^{3}}\mathbb{E}\Big{[}\frac{N^{3}}{K^{2} }m^{3}-\frac{N^{3}}{K^{2}}3m^{3}+\frac{N}{K}\sum_{i}\sum_{j\sim i}3mG_{ii}G_{ jj}\Big{]}\] \[=-\frac{3\zeta^{2}K}{N^{2}}\mathbb{E}\Big{[}\sum_{i}\sum_{j\sim i }mG_{ii}G_{jj}\Big{]}+\zeta^{2}\mathbb{E}\big{[}2m^{3}\big{]}.\] (E.49) #### e.1.7 Estimate on \(J_{3}\) Recall that \[w_{J_{3}}\mathbb{E}J_{3}=\frac{\zeta K}{N^{2}}\sum_{i}\sum_{j\sim i}\sum_{k\neq i }\kappa_{ik}^{(4)}\mathbb{E}\Big{[}\partial_{ik}^{3}\Big{(}G_{ik}G_{jj}\Big{)} \Big{]}.\] Note that the terms contained in \(J_{3}\) with more than two off-diagonal Green function entries are negligible by using Lemma D.2. The only non-negligible terms are \(J_{3}^{(0)}\) and \(J_{3}^{(1)}\) which contain no and one off-diagonal Green function entry respectively. By simple calculation, we get \[J_{3}^{(0)}=-\frac{\zeta K}{N^{2}}\sum_{i}\sum_{j\sim i}\sum_{k} \kappa_{ik}^{(4)}G_{ii}^{2}G_{jj}G_{kk}^{2},\] (E.50) \[J_{3}^{(1)}=-\frac{\zeta K}{N^{2}}\sum_{i}\sum_{j\sim i}\sum_{k }\kappa_{ik}^{(4)}G_{ii}G_{jj}G_{kk}^{2}G_{ij},\] (E.51) To estimate \(J_{3}^{(0)}\) we expand \[(G_{ii}^{2}-m^{2})(G_{jj}-m)(G_{kk}^{2}-m^{2})= -m^{5}+G_{jj}m^{4}+(G_{ii}^{2}+G_{kk}^{2})m^{3}\] \[-G_{jj}(G_{ii}^{2}+G_{kk}^{2})m^{2}-G_{ii}^{2}G_{kk}^{2}m+G_{ii}^{ 2}G_{jj}G_{kk}^{2},\] (E.52) and by simple calculation, we obtain \[G_{ii}^{2}G_{jj}G_{kk}^{2}=-4m^{5}+m^{4}(G_{jj}+2G_{ii}+2G_{kk})+O(N^{\epsilon} q^{-2}).\] (E.53) Hence we get \[\mathbb{E}J_{3}^{(0)} =\frac{\zeta K}{N^{2}}\sum_{i}\sum_{j\sim i}\sum_{k}\kappa_{ik}^{(4) }\mathbb{E}\big{[}G_{ii}^{2}G_{jj}G_{kk}^{2}\big{]}\] \[=-\zeta N\kappa_{d}^{(4)}\mathbb{E}\big{[}m^{5}\big{]}-\frac{\zeta K }{N^{2}}(\kappa_{s}^{(4)}-\kappa_{d}^{(4)})\mathbb{E}\big{[}\frac{N^{3}}{K^{2} }m^{5}\big{]}+O(N^{\epsilon}q^{-4})\] \[=-\zeta\Big{(}\frac{N}{K}(\kappa_{s}^{(4)}-\kappa_{d}^{(4)})+N \kappa_{d}^{(4)}\Big{)}\mathbb{E}\big{[}m^{5}\big{]}+O(\Phi)\] \[=-\zeta q^{-2}\xi^{(4)}\mathbb{E}\big{[}m^{5}\big{]}+O(\Phi).\] (E.54) Now we show that \(\mathbb{E}J_{3,0}^{(1)}\) is also negligible. Using \(|G_{ii}|,|G_{jj}|,|G_{kk}|\prec 1\) and Lemma D.2, we get \[|\mathbb{E}J_{3}^{(1)}|\leq\frac{N^{\epsilon}}{\sqrt{N}q^{2}}=O( \Phi).\] (E.55) To sum up, we conclude that \[w_{J_{3}}\mathbb{E}J_{3}=-\zeta q^{-2}\xi^{(4)}\mathbb{E}\big{[}m ^{5}\big{]}+O(\Phi).\] (E.56) Now we estimate \(\mathbb{E}I_{1,0}\). 
By (E.43), (E.48), (E.49), (E.44) and (E.56) \[\mathbb{E}I_{1,0}=\frac{1}{N}\mathbb{E}\Big{[}\sum_{i,j}\kappa_{ ij}^{(2)}G_{ii}G_{jj}Q\Big{]}+O(\Phi)\] \[=-\mathbb{E}\Big{[}(1-\zeta)m^{2}Q\Big{]}-\frac{\zeta K}{N^{2}} \mathbb{E}\Big{[}\sum_{i}\sum_{j\sim i}zG_{ii}G_{jj}\Big{]}-\frac{\zeta K}{N^ {2}}\mathbb{E}\Big{[}\sum_{i}\sum_{j\sim i}G_{ii}G_{jj}(m+2\zeta m)\Big{]}+O(\Phi)\] \[=-\frac{K\zeta(1+2\zeta)}{N^{2}}\mathbb{E}\Big{[}\sum_{i}\sum_{j \sim i}mG_{ii}G_{jj}\Big{]}+\mathbb{E}\big{[}2\zeta^{2}m^{3}-\zeta m-\zeta q^{ -2}\xi^{(4)}m^{5}\big{]}\] \[\qquad-\mathbb{E}\Big{[}(1-\zeta)m^{2}Q\Big{]}-\frac{\zeta K}{N^ {2}}\mathbb{E}\Big{[}\sum_{i}\sum_{j\sim i}G_{ii}G_{jj}(m+2\zeta m)\Big{]}+O(\Phi)\] \[=-\mathbb{E}\Big{[}(z+m+2\zeta m)(1-\zeta)m^{2}+2\zeta^{2}m^{3}- \zeta m-\zeta\frac{\xi^{(4)}}{q^{2}}m^{5}\Big{]}+O(\Phi),\] (E.57) and this concluded the proof of Lemma E.2. ## Appendix F Proof of Lemma 4.3 With Definitions in Appendix B, recall that our goal is to show that \[\left|s(z)-\frac{-1}{z+m_{sc}}\right|=|s(z)-m_{sc}(z)|\prec\frac{ \sqrt{N}}{q^{4}}+\frac{1}{q}\] (F.1) where \(s(z)\) is defined as \[s(z)=\frac{1}{N}\sum_{i,j}G_{ij}(z)\] (F.2) and \(G(z)=(H-zI)^{-1}\) is the green function of matrix \(H\) satisfying the Definition B.1 with \(\phi>1/8\). To prove this Lemma, we prove a Lemma that gives better upper bounds for sums of entries of \(G\). **Lemma F.1** (Improved Bound for \(T_{k}\)).: _For any \(\phi>0\),_ \[T_{k}\prec\frac{1}{q^{2}}\] (F.3) _where \(T_{k}\) is defined as_ \[T_{k}(z)=\frac{1}{\sqrt{N}}\sum_{j}G_{kj}\] (F.4) This Lemma will be proved in Appendix F.1. To prove the Lemma 4.3, which is the local law of cgSBM in sparse regime, the goal is to find some \(\Psi(z)=o(1)\) such that \[\mathbb{E}\left[|P(s)|^{2D}\right]\leq|\Psi(z)|^{2D}\] (F.5) where \(P(s)\) is defined as \[P(s)=1+(z+m_{sc}(z)-\kappa_{3}m_{sc}(z)^{2})s(z)\] (F.6) with \(\kappa_{3}=\sum_{j}\kappa_{ij}^{(3)}\). 
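As a purely numerical illustration of the quantities involved (this is not part of the proof), the following minimal Python sketch builds a sparse Wigner-type matrix with \(q=N^{\phi}\), computes \(s(z)=\frac{1}{N}\sum_{i,j}G_{ij}(z)\), and compares it with \(m_{sc}(z)\) and the bound in (F.1). The matrix model (centered Bernoulli entries with edge probability \(q^{2}/N\)), the values of \(N\), \(\phi\) and \(z\), and the normalisation are illustrative assumptions, not the model of Definition B.1, and the comparison is only indicative of orders of magnitude.

```python
import numpy as np

# Illustrative assumptions (not Definition B.1): centered Bernoulli entries with
# edge probability p = q^2/N, rescaled so that E H_ij = 0 and E H_ij^2 = 1/N.
N, phi = 2000, 0.3
q = N**phi
p = q**2 / N
z = 0.5 + 0.5j                                  # spectral parameter, Im z > 0

rng = np.random.default_rng(0)
A = (rng.random((N, N)) < p).astype(float)
A = np.triu(A, 1)
A = A + A.T                                     # symmetric adjacency matrix
H = (A - p) / np.sqrt(N * p * (1 - p))
np.fill_diagonal(H, 0.0)

G = np.linalg.inv(H - z * np.eye(N))            # Green function G(z) = (H - zI)^{-1}
s = G.sum() / N                                 # s(z) = (1/N) sum_{i,j} G_ij(z)
m_sc = (-z + np.sqrt(z**2 - 4 + 0j)) / 2        # Stieltjes transform of the semicircle law

bound = np.sqrt(N) / q**4 + 1 / q               # right-hand side of (F.1), up to constants
print(f"|s(z) - m_sc(z)| = {abs(s - m_sc):.3e}   vs   sqrt(N)/q^4 + 1/q = {bound:.3e}")
```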
By rewriting only one \(P(s)\) in \(\mathbb{E}\left[|P(s)|^{2D}\right]\), \[\mathbb{E}\left[|P(s)|^{2D}\right]\] (F.7) \[= \mathbb{E}\left[P(s)^{D-1}\overline{P(s)}^{D}\cdot\left\{1+(z+m_ {sc}-\kappa_{3}m_{sc}^{2})s(z)\right\}\right]\] (F.8) \[= \mathbb{E}\left[P(s)^{D-1}\overline{P(s)}^{D}\right]+\mathbb{E} \left[P(s)^{D-1}\overline{P(s)}^{D}\cdot\frac{1}{N}\sum_{i,j}(HG-I)_{ij}\right]\] (F.9) \[+m_{sc}\mathbb{E}\left[P(s)^{D-1}\overline{P(s)}^{D}\cdot s(z) \right]-\kappa_{3}m_{sc}^{2}\mathbb{E}\left[P(s)^{D-1}\overline{P(s)}^{D} \cdot s(z)\right]\] (F.10) \[= \frac{1}{N}\sum_{i,j,k}\mathbb{E}\left[P(s)^{D-1}\overline{P(s)} ^{D}\cdot H_{ik}G_{kj}\right]\] (F.11) \[+m_{sc}\mathbb{E}\left[P(s)^{D-1}\overline{P(s)}^{D}\cdot s(z) \right]-\kappa_{3}m_{sc}^{2}\mathbb{E}\left[P(s)^{D-1}\overline{P(s)}^{D} \cdot s(z)\right]\] (F.12) \[= C_{1}+C_{2}+\cdots+C_{t}+R_{t}\] (F.13) \[+m_{sc}\mathbb{E}\left[P(s)^{D-1}\overline{P(s)}^{D}\cdot s(z) \right]-\kappa_{3}m_{sc}^{2}\mathbb{E}\left[P(s)^{D-1}\overline{P(s)}^{D} \cdot s(z)\right]\] (F.14) where the terms \(C_{1},C_{2},\cdots,C_{t}\) and \(R_{t}\) is generated by Lemma B.7 (cumulant expansion) and \[C_{r}=\sum_{i,j,k}\frac{\kappa_{ik}^{(r+1)}}{r!}\mathbb{E}\left[\partial_{ik} ^{r}\left(P(s)^{D-1}\overline{P(s)}^{D}\cdot\frac{1}{N}G_{kj}\right)\right]\] (F.15) for \(1\leq r\leq t\) and \[R_{t}=\sum_{i,j,k}\mathbb{E}\left[\Omega_{t}\left(P(s)^{D-1}\overline{P(s)}^{D} \cdot\frac{1}{N}G_{kj}H_{ik}\right)\right]\] (F.9) where \[\mathbb{E}\left[\Omega_{t}\left(P(s)^{D-1}\overline{P(s)}^{D} \cdot\frac{1}{N}G_{kj}H_{ik}\right)\right]\leq 2C_{t}^{\prime}\cdot\mathbb{E} \left[|H_{ij}|^{t+2}\right]\cdot\|\partial_{ij}^{t+1}(P(s)^{D-1}\overline{P(s )}^{D}\cdot\frac{1}{N}G_{kj})\|_{\infty}\] (F.10) Then, we have a lemma for \(C_{1},\cdots,C_{t}\) and \(R_{t}\) which will be proved in Appendix F.2. **Lemma F.2**.: _For sufficiently large \(N\) and \(r,t\geq 3\), we have_ \[C_{1}+m_{sc}\mathbb{E}\left[P(s)^{D-1}\overline{P(s)}^{D}\cdot s (z)\right]=O\left(\frac{\sqrt{N}}{q^{6}}\right)\cdot\mathbb{E}\left[|P(s)|^{2 D-1}\right]\] (F.11) \[C_{2}-\kappa_{3}m_{sc}^{2}\mathbb{E}\left[P(s)^{D-1}\overline{P (s)}^{D}\cdot s(z)\right]=O\left(\frac{\sqrt{N}}{q^{4}}\right)\mathbb{E} \left[|P(s)|^{2D-1}\right]\] (F.12) \[C_{r}=O\left(\frac{\sqrt{N}}{q^{r+1}}\right)\mathbb{E}\left[|P( s)|^{2D-1}\right]\] (F.13) \[R_{t}=O\left(\frac{\sqrt{N}}{q^{t+2}}\right)\cdot\||P(s)|^{2D-1} \|_{\infty}\] (F.14) Take the integer \(t\) larger than \(8D-2\). 
Continuing (F.7), by Lemma F.2, \[\mathbb{E}\left[|P(s)|^{2D}\right]=O\left(\frac{\sqrt{N}}{q^{6}} \right)\cdot\mathbb{E}\left[|P(s)|^{2D-1}\right]+O\left(\frac{\sqrt{N}}{q^{4}} \right)\cdot\mathbb{E}\left[|P(s)|^{2D-1}\right]\] \[\qquad+O\left(\frac{\sqrt{N}}{q^{4}}\right)\cdot\mathbb{E}\left[ |P(s)|^{2D-1}\right]+O\left(\frac{\sqrt{N}}{q^{5}}\right)\cdot\mathbb{E}\left[ |P(s)|^{2D-1}\right]+\cdots\] \[\qquad+O\left(\frac{1}{q^{t+2}}\right)\cdot\||P(s)|^{2D-1}\|_{\infty}\] \[=O\left(\frac{\sqrt{N}}{q^{4}}\right)\cdot\mathbb{E}\left[|P(s)|^ {2D-1}\right]+O\left(\frac{\sqrt{N}}{q^{t+2}}\right)\cdot\||P(s)|^{2D-1}\|_{\infty}\] (F.15) Then since the infinite norm is constant, by Young's inequality and Jensen's inequality, \[\mathbb{E}\left[|P(s)|^{2D}\right]\leq\frac{1}{2D}\cdot O\left( \frac{\sqrt{N}}{q^{4}}\right)^{2D}+\frac{2D-1}{2D}\cdot\mathbb{E}\left[|P(s)| ^{2D-1}\right]^{\frac{2D}{2D-1}}+O\left(\frac{1}{q^{t+2}}\right)\] \[\leq\frac{1}{2D}\cdot O\left(\frac{\sqrt{N}}{q^{4}}\right)^{2D}+ \frac{2D-1}{2D}\cdot\mathbb{E}\left[|P(s)|^{2D}\right]+O\left(\frac{1}{q^{t+ 2}}\right)\] \[=\frac{1}{2D}\cdot O\left(\frac{\sqrt{N}}{q^{4}}\right)^{2D}+ \frac{2D-1}{2D}\cdot\mathbb{E}\left[|P(s)|^{2D}\right]\] (F.16) since \(t>8D-2\) and \[\mathbb{E}\left[|P(s)|^{2D}\right]\leq O\left(\frac{\sqrt{N}}{q^{4}} \right)^{2D}\] (F.17) which implies \[P(s)=1+(z+m_{sc}-\kappa_{3}m_{sc}^{2})s(z)\prec\frac{\sqrt{N}}{q^{ 4}}\] (F.18) so that \[|s(z)-\frac{-1}{z+m_{sc}-\kappa_{3}m_{sc}^{2}}|\prec\frac{\sqrt{N }}{q^{4}}\] (F.19) Also, \[|\frac{-1}{z+m_{sc}-\kappa_{3}m_{sc}^{2}}-\frac{-1}{z+m_{sc}}|=| \frac{-1}{z+m_{sc}}|\cdot|\bigg{(}1-\frac{\kappa_{3}m_{sc}^{2}}{z+m_{sc}} \bigg{)}^{-1}-1|\prec\frac{1}{q}\] (F.20) leads to \[|s(z)-\frac{-1}{z+m_{sc}}|=|s(z)-m_{sc}(z)|\prec\frac{\sqrt{N}}{q ^{4}}+\frac{1}{q}\] (F.21) which finalizes the proof of Lemma 4.3. ### Proof of Lemma F.1 For any \(t\geq 3\) and sufficiently large integer \(D\), \[\begin{split} z\mathbb{E}\left[|T_{k}(z)|^{2D}\right]& =\mathbb{E}\left[T_{k}(z)^{D-1}\overline{T_{k}(z)}^{D}\cdot\frac{ 1}{\sqrt{N}}\sum_{j}zG_{kj}\right]\\ &=\mathbb{E}\left[T_{k}(z)^{D-1}\overline{T_{k}(z)}^{D}\cdot \frac{1}{\sqrt{N}}\sum_{j}(GH-I)_{kj}\right]\\ &=\mathbb{E}\left[T_{k}(z)^{D-1}\overline{T_{k}(z)}^{D}\cdot \frac{1}{\sqrt{N}}\sum_{i,j}G_{ki}H_{ij}\right]-\mathbb{E}\left[T_{k}(z)^{D-1} \overline{T_{k}(z)}^{D}\cdot\frac{1}{\sqrt{N}}\right]\\ &\approx\sum_{i,j}\mathbb{E}\left[T_{k}(z)^{D-1}\overline{T_{k}(z )}^{D}\cdot\frac{1}{\sqrt{N}}G_{ki}H_{ij}\right]\end{split}\] (F.22) where the second term in (F.22) is \(O(1/\sqrt{N})\) so that negligible. By Lemma B.7, \[z\mathbb{E}\left[|T_{k}(z)|^{2D}\right]=A_{1}+A_{2}+\cdots+A_{t} +R_{t}\] (F.23) where \[A_{r}=\sum_{i,j}\frac{\kappa_{ij}^{(r+1)}}{r!}\mathbb{E}\left[\partial_{ij}^{r}(T _{k}(z)^{D-1}\overline{T_{k}(z)}^{D}\cdot\frac{1}{\sqrt{N}}G_{ki})\right]\] (F.24) for \(1\leq r\leq t\) and \[R_{t}=\sum_{i,j}\mathbb{E}\left[\Omega_{t}\left(T_{k}(z)^{D-1} \overline{T_{k}(z)}^{D}\cdot\frac{1}{\sqrt{N}}G_{ki}H_{ij}\right)\right]\] (F.25) where \[\mathbb{E}\left[\Omega_{t}\left(T_{k}(z)^{D-1}\overline{T_{k}(z)}^{D}\cdot \frac{1}{\sqrt{N}}G_{ki}H_{ij}\right)\right]\leq 2C_{t}\cdot\mathbb{E}\left[|H_{ ij}|^{t+2}\right]\cdot\|\partial_{ij}^{t+1}(T_{k}(z)^{D-1}\overline{T_{k}(z)}^{D} \cdot\frac{1}{\sqrt{N}}G_{ki})\|_{\infty}\] (F.26) In this proof, we use Lemma D.2 for power counting argument frequently. 
For expanding the derivatives in \(A_{r}\), we should divide the cases for the indices \(i=j\) by the matrix differentiation formula (C.7). However, dividing the \(i=j\) cases makes the bounds of equations smaller so it is okay to consider the derivatives with respect to \(H_{ij}\) for \(i=j\) as same as \(i\neq j\). #### f.1.1 Estimates for \(A_{1}\) The goal of this subsection is to prove \[A_{1}=-m_{sc}\mathbb{E}\left[|T_{k}(z)|^{2D}\right]\] (F.27) where \(A_{1}\) is defined as \[A_{1}\equiv\sum_{i,j}\kappa_{ij}^{(2)}\mathbb{E}\left[\partial_{ij}\left(T_{k }(z)^{D-1}\overline{T_{k}(z)}^{D}\cdot\frac{1}{\sqrt{N}}G_{ki}\right)\right]\] (F.28) Among the terms in \(A_{1}\), the only term that is not clearly negligible is \[A_{1}\approx\frac{1}{\sqrt{N}}\sum_{i,j}\kappa_{ij}^{(2)}\mathbb{E}\left[T_{k }(z)^{D-1}\overline{T_{k}(z)}^{D}(-G_{kj}G_{ii})\right]\] (F.29) Since local law implies \(|\sum_{i}\kappa_{ij}^{(2)}G_{ii}-m_{sc}|\prec 1/q^{2}\), by taking \(i\)-sum first, \[A_{1} \approx-m_{sc}\cdot\frac{1}{\sqrt{N}}\sum_{j}\mathbb{E}\left[T_{k }(z)^{D-1}\overline{T_{k}(z)}^{D}G_{kj}\right]\] \[=-m_{sc}\cdot\mathbb{E}\left[T_{k}(z)^{D-1}\overline{T_{k}(z)}^{ D}\frac{1}{\sqrt{N}}\sum_{j}G_{kj}\right]\] \[=-m_{sc}\mathbb{E}\left[|T_{k}(z)|^{2D}\right]\] (F.30) #### f.1.2 Estimates for \(A_{2}\) The goal of this subsection is to prove \[A_{2}=\kappa_{3}\cdot m_{sc}^{2}\cdot\mathbb{E}\left[|T_{k}(z)|^{2D}\right]+O(1/q^ {2})\mathbb{E}\left[|T_{k}(z)|^{2D-1}\right]\] (F.31) where \(A_{2}\) is defined as \[A_{2}\equiv\sum_{i,j}\frac{\kappa_{ij}^{(3)}}{2}\mathbb{E}\left[\partial_{ij}^ {2}\left(T_{k}(z)^{D-1}\overline{T_{k}(z)}^{D}\cdot\frac{1}{\sqrt{N}}G_{ki} \right)\right]\] (F.32) For this subsection and some next subsections, we will introduce some additional definition. _Definition_ F.3. For integer \(r\geq 2\) and integer \(1\leq p\leq 6\), define \(A_{r,p}\) as following: \[A_{r,1} =\frac{1}{r!\sqrt{N}}\sum_{i,j}\kappa_{ij}^{(r+1)}\mathbb{E}\left[ \partial_{ij}^{r-2}\{\partial_{ij}^{2}(T_{k}(z)^{D-1})\overline{T_{k}(z)}^{D} G_{ki}\}\right]\] (F.33) \[A_{r,2} =\frac{1}{r!\sqrt{N}}\sum_{i,j}\kappa_{ij}^{(r+1)}\mathbb{E} \left[\partial_{ij}^{r-2}\{T_{k}(z)^{D-1}\partial_{ij}^{2}(\overline{T_{k}(z) }^{D})G_{ki}\}\right]\] (F.34) \[A_{r,3} =\frac{1}{r!\sqrt{N}}\sum_{i,j}\kappa_{ij}^{(r+1)}\mathbb{E} \left[\partial_{ij}^{r-2}\{T_{k}(z)^{D-1}\overline{T_{k}(z)}^{D}\partial_{ij} ^{2}(G_{ki})\}\right]\] (F.35) \[A_{r,4} =\frac{1}{r!\sqrt{N}}\sum_{i,j}\kappa_{ij}^{(r+1)}\mathbb{E} \left[\partial_{ij}^{r-2}\{2\partial_{ij}(T_{k}(z)^{D-1})\partial_{ij}( \overline{T_{k}(z)}^{D})G_{ki}\}\right]\] (F.36) \[A_{r,5} =\frac{1}{r!\sqrt{N}}\sum_{i,j}\kappa_{ij}^{(r+1)}\mathbb{E} \left[\partial_{ij}^{r-2}\{2\partial_{ij}(T_{k}(z)^{D-1})\overline{T_{k}(z)} ^{D}\partial_{ij}(G_{ki})\}\right]\] (F.37) \[A_{r,6} =\frac{1}{r!\sqrt{N}}\sum_{i,j}\kappa_{ij}^{(r+1)}\mathbb{E} \left[\partial_{ij}^{r-2}\{2T_{k}(z)^{D-1}\partial_{ij}(\overline{T_{k}(z)}^{ D})\partial_{ij}(G_{ki})\}\right]\] (F.38) It is clear that \(A_{r}=A_{r,1}+A_{r,2}+A_{r,3}+A_{r,4}+A_{r,5}+A_{r,6}\). By simple calculation and counting, it is clear that \(A_{2,3}\) is the only non-negligible term among the six components of \(A_{2}\). 
By calculating the derivative in \[A_{2,3}=\frac{1}{2\sqrt{N}}\sum_{i,j}\kappa_{ij}^{(3)}\mathbb{E}\left[T_{k}(z) ^{D-1}\overline{T_{k}(z)}^{D}\partial_{ij}^{2}(G_{ki})\right],\] (F.39) the only non-negligible term is \[A_{2,3} \approx\frac{1}{\sqrt{N}}\sum_{i,j}\kappa_{ij}^{(3)}\mathbb{E} \left[T_{k}(z)^{D-1}\overline{T_{k}(z)}^{D}G_{ki}G_{jj}G_{ii}\right]\] \[=\frac{1}{\sqrt{N}}\sum_{i,j}\kappa_{ij}^{(3)}\mathbb{E}\left[T_{k }(z)^{D-1}\overline{T_{k}(z)}^{D}G_{ki}(G_{jj}-m_{sc})G_{ii}\right]\] (F.40) \[\quad+\frac{1}{\sqrt{N}}\sum_{i,j}\kappa_{ij}^{(3)}\mathbb{E} \left[T_{k}(z)^{D-1}\overline{T_{k}(z)}^{D}G_{ki}m(G_{ii}-m_{sc})\right]\] (F.41) \[\quad+\frac{1}{\sqrt{N}}\sum_{i,j}\kappa_{ij}^{(3)}\mathbb{E} \left[T_{k}(z)^{D-1}\overline{T_{k}(z)}^{D}G_{ki}m_{sc}^{2}\right]\] (F.42) Define \(A_{2,3,1}\) as the sum of (F.40) and (F.41), and \(A_{2,3,2}\) as (F.42) so that \(A_{2,3}=A_{2,3,1}+A_{2,3,2}\). Then, \[|A_{2,3,1}| \leq\frac{2}{\sqrt{N}}\cdot N^{2}\cdot\mathbb{E}\left[|T_{k}(z)|^{ 2D-1}\right]\cdot\frac{1}{Nq}\cdot\frac{1}{\sqrt{N}}\cdot\frac{1}{q}\cdot C\] \[=2C\cdot\frac{1}{q^{2}}\mathbb{E}\left[|T_{k}(z)|^{2D-1}\right]\] (F.43) and by taking \(j\)-sum first, \[A_{2,3,2} =\kappa_{3}\cdot m^{2}\cdot\mathbb{E}\left[T_{k}(z)^{D-1}\overline {T_{k}(z)}^{D}\cdot\frac{1}{\sqrt{N}}\sum_{i}G_{ki}\right]\] \[=\kappa_{3}\cdot m^{2}\cdot\mathbb{E}\left[|T_{k}(z)|^{2D}\right]\] (F.44) Therefore, \[A_{2}\approx A_{2,3}=A_{2,3,1}+A_{2,3,2}=O(1/q^{2})\mathbb{E}\left[|T_{k}(z)|^ {2D-1}\right]+\kappa_{3}\cdot m^{2}\cdot\mathbb{E}\left[|T_{k}(z)|^{2D}\right]\] (F.45) #### f.1.3 Estimates for \(A_{r}\) (\(r\geq 3\)) The goal of this subsection is to prove \[A_{r}=O(1/q^{r-1})\mathbb{E}\left[|T_{k}(z)|^{2D-1}\right]\] (F.46) for all integer \(r\geq 3\) where \(A_{r}\) is defined as \[A_{r}=\sum_{i,j}\frac{\kappa_{ij}^{(r+1)}}{r!}\mathbb{E}\left[\partial_{ij}^{r }\left(T_{k}(z)^{D-1}\overline{T_{k}(z)}^{D}\cdot\frac{1}{\sqrt{N}}G_{ki} \right)\right]\] (F.47) We introduce new definitions and lemmas for proof. _Definition_ F.4 (Effective Non-diagonal Entries).: The number of **effective non-diagonal entries** of some product of the entries of \(G=(H-zI)^{-1}\) is defined as the number of non-diagonal entries of \(G\) except \(G_{ij},G_{ji}\) while the power larger than \(2\) is counted as \(2\). For example, the number of effective non-diagonal entries in \(G_{ki}^{3}G_{kj}G_{ij}^{2}G_{ii}G_{jj}\) is \(2+1=3\). _Definition_ F.5 (Effectively negligible).: The term with constant, sigma, kappa, expectation, \(T_{k}\), and some entries of \(G=(H-zi)^{-1}\) is **effectively negligible** if \[n_{c}+n_{s}-1-\frac{1}{2}n_{end}<0\] (F.48) where \(n_{c}\) is the degree of \(N\) of the constant, \(n_{s}\) is the number of indices of sigma and \(n_{end}\) is the number of effective non-diagonal entries of the term. By Lemma D.2, if a term is effectively negligible, then the term is negligible even without \(G_{ij},G_{ji}\) hence negligible. For example, negligible term \[\frac{1}{\sqrt{N}}\sum_{i,j}\kappa_{ij}^{(4)}\mathbb{E}\left[T_{k}(z)^{D-1} \overline{T_{k}(z)}^{D}\cdot G_{ki}G_{ij}\right]\] (F.49) is not effectively negligible since \(n_{c}=-1/2,n_{s}=2\), and \(n_{end}=1\). Using this definition, we can make a lemma **Lemma F.6**.: _Let \(F(H_{ij})\) be a multiplication of the entries of \(G=(H-zi)^{-1}\) with \(n_{end}\) effectively negligible entries. Also, suppose that the powers of indices except \(k,i,j\) and the powers of \(G_{ki},G_{kj},G_{ik},G_{jk}\) are at most \(1\). 
Then, among the terms of \(\partial_{ij}F(H_{ij})\), there is no term whose number of effectively negligible entries is less than \(n_{end}\)._

Proof.: Since \(\partial_{ij}G_{pq}=-G_{pi}G_{jq}-G_{pj}G_{iq}\), we may assume without loss of generality that \(\partial_{ij}\) replaces \(G_{pq}\) by \(G_{pi}G_{jq}\). Suppose that some term in \(\partial_{ij}F(H_{ij})\) has fewer than \(n_{end}\) effectively negligible entries. Then \(G_{pq}\) must be effectively negligible, while the newly created entries \(G_{pi}\) and \(G_{jq}\) must not be effectively negligible. Hence each of \(G_{pi}\) and \(G_{jq}\) satisfies one of the following two conditions: the first is to be a diagonal entry, \(G_{ij}\), or \(G_{ji}\); the second is to coincide with an entry that was already present, so that the power of that entry became larger than \(2\), while not being in the first condition. There are three possible cases.

1. If both \(G_{pi}\) and \(G_{jq}\) satisfy the first condition, then each of \(p\) and \(q\) must be \(i\) or \(j\), so that \(G_{pq}\) is not an effectively negligible entry, a contradiction.

2. If one of \(G_{pi}\) and \(G_{jq}\) satisfies the first condition and the other satisfies the second, we may assume without loss of generality that \(G_{pi}\) satisfies the first condition and \(G_{jq}\) the second. Since \(p=i\) or \(p=j\), the index \(q\) must then be \(k\) or an index other than \(i,j,k\) in order for \(G_{pq}\) to be effectively negligible. However, if \(q=k\), then \(G_{jq}=G_{jk}\) would have to have power larger than \(2\), which is impossible by assumption. Likewise, if \(q\) is an index other than \(i,j,k\), say \(\alpha\), then \(G_{jq}=G_{j\alpha}\) would have to have power larger than \(2\), which is again impossible by assumption.

3. If both \(G_{pi}\) and \(G_{jq}\) satisfy the second condition, then, since neither satisfies the first condition, \(p\) and \(q\) cannot be \(i\) or \(j\). By the same reasoning as in the second case, \(p\) and \(q\) cannot be indices other than \(i,j,k\) either. Therefore \(p=k\) and \(q=k\), which is a contradiction since \(G_{pq}\) is not a diagonal entry.

Therefore, no term of \(\partial_{ij}F(H_{ij})\) has fewer than \(n_{end}\) effectively negligible entries.

**Lemma F.7**.: _Let \(F(H_{ij})\) be a sum of products of the entries of \(G=(H-zI)^{-1}\). Also, suppose that the powers of indices except \(k,i,j\) and the powers of \(G_{ki},G_{kj},G_{ik},G_{jk}\) are at most \(1\). Then, if \(\sum_{i,j}\kappa_{ij}^{(r)}\mathbb{E}\left[T_{k}(z)^{\alpha}\overline{T_{k}(z)}^{\beta}F(H_{ij})\right]\) is effectively negligible, then_

\[\sum_{i,j}\kappa_{ij}^{(r+1)}\mathbb{E}\left[\partial_{ij}\left\{T_{k}(z)^{\alpha}\overline{T_{k}(z)}^{\beta}F(H_{ij})\right\}\right]\] (F.50)

_is also effectively negligible, where \(\alpha,\beta\geq 2\) and \(r\geq 3\)._

Proof.: Since \(\sum_{i,j}\kappa_{ij}^{(r)}\mathbb{E}\left[T_{k}(z)^{\alpha}\overline{T_{k}(z)}^{\beta}F(H_{ij})\right]\) is effectively negligible, we have \(n_{c}+n_{s}-1-\frac{1}{2}n_{end}<0\), where \(n_{c}\) is the degree of \(N\) of the constant in \(F\), \(n_{s}\) is the number of summation indices and \(n_{end}\) is the number of effective non-diagonal entries of \(G\) in \(F\).
Then, \[\sum_{i,j}\kappa_{ij}^{(r+1)}\mathbb{E}\left[\partial_{ij}\left\{T_{k}(z)^{ \alpha}\overline{T_{k}(z)}^{\beta}F(H_{ij})\right\}\right]=L_{1}+L_{2}+L_{3}\] (F.51) where \[L_{1} =\alpha\sum_{i,j}\kappa_{ij}^{(r+1)}\mathbb{E}\left[T_{k}(z)^{ \alpha-1}\overline{T_{k}(z)}^{\beta}F(H_{ij})\cdot\partial_{ij}T_{k}(z)\right]\] (F.52) \[L_{2} =\beta\sum_{i,j}\kappa_{ij}^{(r+1)}\mathbb{E}\left[T_{k}(z)^{ \alpha}\overline{T_{k}(z)}^{\beta-1}F(H_{ij})\cdot\partial_{ij}\overline{T_{k} (z)}\right]\] (F.53) \[L_{3} =\sum_{i,j}\kappa_{ij}^{(r+1)}\mathbb{E}\left[T_{k}(z)^{\alpha} \overline{T_{k}(z)}^{\beta}F^{\prime}(H_{ij})\right]\] (F.54) Let \(n_{c}^{\prime},n_{s}^{\prime}\), and \(n_{end}^{\prime}\) be the constant of any term of \(L_{1}\). Then since \[|\partial_{ij}T_{k}(z)|=|\frac{1}{\sqrt{N}}\sum_{q}(-G_{ki}G_{jq}-G_{kj}G_{iq})|\] (F.55) has new effectively non-diagonal term \(G_{jq}\) and \(G_{iq}\), \(n_{c}^{\prime}=n_{c}-1/2\), \(n_{s}^{\prime}=n_{s}+1\), and \(n_{end}^{\prime}\geq n_{end}+1\). Therefore, \[n_{c}^{\prime}+n_{s}^{\prime}-1-\frac{1}{2}n_{end}^{\prime} \leq n_{c}-1/2+n_{s}+1-1-\frac{1}{2}(n_{end}+1)\] \[=n_{c}+n_{s}-1-\frac{1}{2}n_{end}<0\] (F.56) which implies any term of \(L_{1}\) is effectively negligible so that \(L_{1}\) is effectively negligible. Similarly, \(L_{2}\) is N-effectively negligible also. For \(L_{3}\), consider any term of \(L_{3}\) with constants \(n_{c}^{\prime\prime},n_{s}^{\prime\prime}\) and \(n_{end}^{\prime\prime}\), By the Lemma F.6, \(n_{end}^{\prime\prime}\geq n_{end}\) which implies \[n_{c}^{\prime\prime}+n_{s}^{\prime\prime}-1-\frac{1}{2}n_{end}^{\prime\prime} \leq n_{c}+n_{s}-1-\frac{1}{2}n_{end}<0\] (F.57) so that any terms of \(L_{3}\) is effectively negligible which makes \(L_{3}\) became effectively negligible. Since \(L_{1}\), \(L_{2}\), and \(L_{3}\) are effectively negligible, \[\sum_{i,j}\kappa_{ij}^{(r+1)}\mathbb{E}\left[\partial_{ij}\left\{T_{k}(z)^{ \alpha}\overline{T_{k}(z)}^{\beta}F(H_{ij})\right\}\right]\] (F.58) is effectively negligible also. We will prove for \(A_{r,p}\) (\(p\neq 3\)) first. **Theorem F.8**.: \(A_{r,p}\) _is effectively negligible for \(r\geq 2\) and \(p\neq 3\)_ Proof.: Consider \(A_{r,1}\) first. We will use induction on \(r\geq 2\). First, it can be shown that \(A_{2,1}\) is effectively negligible by calculation. Suppose that \[A_{r^{\prime},1}=\frac{1}{(r^{\prime})!\sqrt{N}}\sum_{i,j}\kappa_{ij}^{(r^{ \prime}+1)}\mathbb{E}\left[\partial_{ij}^{r^{\prime}-2}\left\{\partial_{ij}^{2 }(T_{k}(z)^{D-1})\overline{T_{k}(z)}^{D}G_{ki}\right\}\right]\] (F.59) is effectively negligible and define \(F(H_{ij})\) as the sum of the terms \(T_{k}(z)^{\alpha}\overline{T_{k}(z)}^{\beta}F(H_{ij})\) \[\sum_{t\in T}T_{k}(z)^{\alpha_{t}}\overline{T_{k}(z)}^{\beta_{t}}F_{t}(H_{ij}) =\partial_{ij}^{r^{\prime}-2}\left\{\partial_{ij}^{2}(T_{k}(z)^{D-1}) \overline{T_{k}(z)}^{D}G_{ki}\right\}\] (F.60) Since the indices except \(i,j,k\) should be came from \[\partial_{ij}T_{k}(z)=\frac{1}{\sqrt{N}}\sum_{\gamma}(-G_{ki}G_{j\gamma}-G_{kj}G_ {i\gamma}),\] (F.61) the new index(\(\gamma\)) appears only one time. 
Also, define the subsets \(T_{1}\) and \(T_{2}\) of \(T\) by \[T_{1}=\{t\in T\mid\text{indices of }G_{ki},G_{kj},G_{ik},G_{jk}\text{ of }F_{t}(H_{ij})\text{ are less than }2\},\ T_{2}=T-T_{1}\] (F.62) Then, \[A_{r^{\prime}+1,1} =\frac{1}{(r^{\prime}+1)!\sqrt{N}}\sum_{i,j}\kappa_{ij}^{(r^{ \prime}+2)}\mathbb{E}\left[\partial_{ij}^{r^{\prime}-1}\left\{\partial_{ij}^{2 }(T_{k}(z)^{D-1})\overline{T_{k}(z)}^{D}G_{ki}\right\}\right]\] \[=\frac{1}{(r^{\prime}+1)!\sqrt{N}}\sum_{i,j}\kappa_{ij}^{(r^{ \prime}+2)}\mathbb{E}\left[\partial_{ij}\left\{\sum_{t\in T}T_{k}(z)^{\alpha_ {t}}\overline{T_{k}(z)}^{\beta_{t}}F_{t}(H_{ij})\right\}\right]\] \[=\sum_{t\in T_{1}}\frac{1}{(r^{\prime}+1)!\sqrt{N}}\sum_{i,j} \kappa_{ij}^{(r^{\prime}+2)}\mathbb{E}\left[\partial_{ij}\left\{T_{k}(z)^{ \alpha_{t}}\overline{T_{k}(z)}^{\beta_{t}}F_{t}(H_{ij})\right\}\right]\] (F.63) \[+\sum_{t\in T_{2}}\frac{1}{(r^{\prime}+1)!\sqrt{N}}\sum_{i,j} \kappa_{ij}^{(r^{\prime}+2)}\mathbb{E}\left[\partial_{ij}\left\{T_{k}(z)^{ \alpha_{t}}\overline{T_{k}(z)}^{\beta_{t}}F_{t}(H_{ij})\right\}\right]\] (F.64) and (F.63) is effectively negligible by the Lemma F.7. Also, the baddest case in (F.64) is the case when the number of effectively negligible entries decreases, denoted by \(n_{end}^{\prime}=n_{end}-1\). However, in this case, since \(F_{2}\) contains either \(G_{ki}^{2}\) or \(G_{kj}^{2}\), which can be made only from \(\partial_{ij}T_{k}\), \(n_{c}+n_{s}-1-\frac{1}{2}n_{end}<-\frac{1}{2}\) which leads to \(n_{c}^{\prime}+n_{s}^{\prime}-1-\frac{1}{2}n_{end}^{\prime}<0\) so that (F.64) is also effectively negligible. Therefore, \(A_{r,1}\) is N-effectively negligible for \(r\geq 2\). By similar induction, the cases for \(p=2,4,5,6\) also can be proved since \(A_{2,p}\) is effectively negligible for \(p\neq 3\). Next, we have a lemma for the case \(A_{r,3}\). **Lemma F.9**.: _Among \(A_{r,3}\), the non effectively negligible terms have form of_ \[\frac{1}{\sqrt{N}}\sum_{i,j}\kappa_{ij}^{(r+1)}\mathbb{E}\left[T_{k}(z)^{D-1} \overline{T_{k}(z)}^{D}F(H_{ij})\right]\] (F.65) _for all \(r\geq 2\) where \(F(H_{ij})\) is a sum of constant multiple of product of entries of \(G=(H-zI)^{-1}\) and \(n_{end}\) of \(F(H_{ij})\) satisfies \(n_{end}\geq 1\)._ Proof.: We will use induction on \(r\). First, when \(r=2\), we can calculate \(A_{2,3}\) as \[A_{2,3}=\frac{1}{\sqrt{N}}\sum_{i,j}\kappa_{ij}^{(3)}\mathbb{E}\left[T_{k}(z)^ {D-1}\overline{T_{k}(z)}^{D}F(H_{ij})\right]\] (F.66) where \(F(H_{ij})\) can be written as \[F(H_{ij})=G_{ki}G_{ji}^{2}+G_{kj}G_{ii}G_{ji}+G_{kj}G_{ii}G_{ij}+G_{ki}G_{ii}G_{jj}\] (F.67) Then, since \(n_{c}=-1/2,n_{s}=2,n_{end}=1\), (F.66) is not effectively negligible. Therefore, \(F(H_{ij})\) is desired form and it implies that our claim holds for \(r=2\). Suppose that the non effectively negligible terms of \(A_{r^{\prime},3}\) has form of \[\frac{1}{\sqrt{N}}\sum_{i,j}\kappa_{ij}^{(r^{\prime}+1)}\mathbb{E }\left[T_{k}(z)^{D-1}\overline{T_{k}(z)}^{D}F(H_{ij})\right]\] (F.68) where \(F(H_{ij})\) is a sum of constant multiple of product of entries of \(G=(H-zI)^{-1}\) and \(n_{end}\geq 1\). Then, \[A_{r^{\prime},3}=A_{r^{\prime},3,eff}+\frac{1}{\sqrt{N}}\sum_{i, j}\kappa_{ij}^{(r^{\prime}+1)}\mathbb{E}\left[T_{k}(z)^{D-1}\overline{T_{k}(z)}^{ D}F(H_{ij})\right]\] (F.69) where \(A_{r^{\prime},3,eff}\) is effectively negligible term. 
By the definition of \(A_{r^{\prime}+1,3}\), \[A_{r^{\prime}+1,3} =\frac{1}{r^{\prime}+1}A_{r^{\prime},3,eff}^{\prime}+\frac{1}{(r ^{\prime}+1)\sqrt{N}}\sum_{i,j}\kappa_{ij}^{(r^{\prime}+2)}\mathbb{E}\left[ \partial_{ij}\left\{T_{k}(z)^{D-1}\overline{T_{k}(z)}^{D}F(H_{ij})\right\}\right]\] \[=\frac{1}{r^{\prime}+1}A_{r^{\prime},3,eff}^{\prime}+L_{1}+L_{2}+ L_{3}\] (F.70) with \(A_{r^{\prime},3,eff}^{\prime}\) is defined by replacing \(\kappa_{ij}^{(r^{\prime}+1)}\) with \(\kappa_{ij}^{(r^{\prime}+2)}\) in \(A_{r^{\prime},3,eff}\) and put \(\partial_{ij}\) inside the expectation of \(A_{r^{\prime},3,eff}\) which is effectively negligible by the proof of above lemmas and theorems. Also, \(L_{1}\), \(L_{2}\), and \(L_{3}\) is defined by \[L_{1} =\frac{D-1}{(r^{\prime}+1)\sqrt{N}}\sum_{i,j}\kappa_{ij}^{(r^{ \prime}+2)}\mathbb{E}\left[T_{k}(z)^{D-2}\overline{T_{k}(z)}^{D}F(H_{ij}) \cdot\partial_{ij}T_{k}(z)\right]\] (F.71) \[L_{2} =\frac{D}{(r^{\prime}+1)\sqrt{N}}\sum_{i,j}\kappa_{ij}^{(r^{ \prime}+2)}\mathbb{E}\left[T_{k}(z)^{D-1}\overline{T_{k}(z)}^{D-1}F(H_{ij}) \cdot\partial_{ij}\overline{T_{k}(z)}\right]\] (F.72) \[L_{3} =\frac{1}{(r^{\prime}+1)\sqrt{N}}\sum_{i,j}\kappa_{ij}^{(r^{ \prime}+2)}\mathbb{E}\left[T_{k}(z)^{D-1}\overline{T_{k}(z)}^{D}F^{\prime}(H_{ ij})\right]\] (F.73) For \(L_{1}\) and \(L_{2}\), since \[\partial_{ij}T_{k}(z)=\frac{1}{\sqrt{N}}\sum_{q}(-G_{ki}G_{jq}-G_ {kj}G_{iq})\] (F.74) , \(n_{c}^{\prime}=-1,n_{s}^{\prime}=3,n_{end}^{\prime}=n_{end}+2\geq 3\) implies \(L_{1}\) and \(L_{2}\) is effectively negligible. Therefore, non effectively negligible term of \(A_{r^{\prime}+1,3}\) is subsum of \(L_{3}\) denoted by \[\frac{1}{\sqrt{N}}\sum_{i,j}\kappa_{ij}^{(r^{\prime}+2)}\mathbb{E }\left[T_{k}(z)^{D-1}\overline{T_{k}(z)}^{D}\hat{F}(H_{ij})\right]\] (F.75) while \(\hat{F}(H_{ij})\) is the smallest subsum of \(\frac{1}{r^{\prime}+1}F^{\prime}(H_{ij})\) that makes the equation \[L_{3}-\frac{1}{\sqrt{N}}\sum_{i,j}\kappa_{ij}^{(r^{\prime}+2)} \mathbb{E}\left[T_{k}(z)^{D-1}\overline{T_{k}(z)}^{D}\hat{F}(H_{ij})\right]\] (F.76) is effectively negligible. Since \(\hat{F}(H_{ij})\) is desired form, what remains to prove is that \(n_{end}\) of \(\hat{F}\) satisfies \(n_{end}\geq 1\). This can be proved since \(\hat{F}\) contains at least one of \(G_{ki}\) or \(G_{kj}\). Therefore, our claim holds for \(r=r^{\prime}+1\) also. Now we can prove the main theorem for \(A_{r,3}\). **Theorem F.10**.: \(A_{r,3}=O(1/q^{r-1})\mathbb{E}\left[|T_{k}(z)|^{2D-1}\right]\) _for all \(r\geq 3\)_ Proof.: By Lemma F.9, we know that the non effectively negligible part of \(A_{r,3}\) has the form of \[\frac{1}{\sqrt{N}}\sum_{i,j}\kappa_{ij}^{(r+1)}\mathbb{E}\left[T_{k}(z)^{D-1} \overline{T_{k}(z)}^{D}F(H_{ij})\right]\] (F.77) where \(F(H_{ij})\) is a sum of constant multiple of product of entries of \(G=(H-zI)^{-1}\) and \(n_{end}\geq 1\) for \(r\geq 2\). Since \(T_{k}(z)^{D-1}\overline{T_{k}(z)}^{D}F(H_{ij})\) is some part of \(\partial_{ij}^{r}\{T_{k}(z)^{D-1}\overline{T_{k}(z)}^{D}G_{ki}\}\), we can know that \(F(H_{ij})\) is some part of \(\partial_{ij}^{r}G_{ki}\). By differentiating a product of entries of \(G\), the length of terms increases by \(1\) and the index \(i\) and the index \(j\) appear one more tie. Therefore, \(\partial_{ij}^{r}G_{ki}\)(as so \(F(H_{ij})\)) is a sum of constant multiple of product of \(r+1\) entries of \(G\) with \(2r+2\) indices (\(k\) : \(1\) time, \(i\) : \(r+1\) times, \(j\) : \(r\) times). 
Going back to (F.77), since the equation is not N-effectively negligible, \[n_{c}+n_{s}-1-\frac{1}{2}n_{end}=-\frac{1}{2}+2-1-\frac{1}{2}n_{end}\geq 0\] (F.78) which implies \(n_{end}\leq 1\) so that \(n_{end}=1\). Consider the indices of the terms in \(F(H_{ij})\). Since there is only one \(k\) index, \(G_{ki}\) or \(G_{kj}\) is the corresponding effectively non-diagonal entry(\(n_{end}=1\)). Therefore, the largest possible term in \(F(H_{ij})\) has the form \[G_{ki}\times(r\text{ diagonal entries})\text{ or }G_{kj}\times(r\text{ diagonal entries})\] (F.79) whose absolute value has order \[|\frac{1}{\sqrt{N}}\sum_{i,j}\kappa_{ij}^{(r+1)}\mathbb{E}\left[ T_{k}(z)^{D-1}\overline{T_{k}(z)}^{D}G_{ki}\times(r\text{ diagonal entries})\right]|\] \[\text{or }|\frac{1}{\sqrt{N}}\sum_{i,j}\kappa_{ij}^{(r+1)}\mathbb{E} \left[T_{k}(z)^{D-1}\overline{T_{k}(z)}^{D}G_{kj}\times(r\text{ diagonal entries})\right]|\] \[\leq\frac{1}{\sqrt{N}}\cdot N^{2}\frac{1}{Nq^{r-1}}\cdot\mathbb{ E}\left[|T_{k}(z)|^{2D-1}\right]\cdot\frac{1}{\sqrt{N}}\cdot\] \[=\frac{1}{q^{r-1}}\mathbb{E}\left[|T_{k}(z)|^{2D-1}\right]\cdot C\] (F.80) where \(C\) is a constant. Therefore, since N-effectively negligible term is negligible, \[A_{r,3} \approx\frac{1}{\sqrt{N}}\sum_{i,j}\kappa_{ij}^{(r+1)}\mathbb{E} \left[T_{k}(z)^{D-1}\overline{T_{k}(z)}^{D}F(H_{ij})\right]\] \[=O(1/q^{r-1})\mathbb{E}\left[|T_{k}(z)|^{2D-1}\right]\] (F.81) for all \(r\geq 3\) Finally, by Theorem F.8 and Theorem F.10, \[A_{r} =(A_{r,1}+A_{r,2}+A_{r,4}+A_{r,5}+A_{r,6})+A_{r,3}\] \[=O(1/q^{r-1})\mathbb{E}\left[|T_{k}(z)|^{2D-1}\right]\] (F.82) #### f.1.4 Estimates for \(R_{t}\) The goal of the subsection is to prove \[R_{t}=O(1/q^{t})\cdot\||T_{k}(z)|^{2D-1}\|_{\infty}\] (F.83) where \(R_{t}\) is defined as \[R_{t}=\sum_{i,j}\mathbb{E}\left[\Omega_{t}\left(T_{k}(z)^{D-1} \overline{T_{k}(z)}^{D}\cdot\frac{1}{\sqrt{N}}G_{ki}H_{ij}\right)\right]\] (F.84) where \[\mathbb{E}\left[\Omega_{t}\left(T_{k}(z)^{D-1}\overline{T_{k}(z) }^{D}\cdot\frac{1}{\sqrt{N}}G_{ki}H_{ij}\right)\right]\leq C_{t}\cdot\mathbb{E }\left[|H_{ij}|^{t+2}\right]\cdot\|\partial_{ij}^{t+1}(T_{k}(z)^{D-1}\overline {T_{k}(z)}^{D}\cdot\frac{1}{\sqrt{N}}G_{ki})\|_{\infty}\] (F.85) for some constant \(C_{t}\). Then, \[|R_{t}| \leq C_{t}\sum_{i,j}\mathbb{E}\left[|H_{ij}|^{t+2}\right]\cdot \left\|\partial_{ij}^{t+1}(T_{k}(z)^{D-1}\overline{T_{k}(z)}^{D}\cdot\frac{1 }{\sqrt{N}}G_{ki})\right\|_{\infty}\] \[=C_{t}\sum_{i,j}\kappa_{ij}^{(t+2)}\cdot\left\|\partial_{ij}^{t+ 1}(T_{k}(z)^{D-1}\overline{T_{k}(z)}^{D}\cdot\frac{1}{\sqrt{N}}G_{ki})\right\| _{\infty}\] (F.86) while (F.86) can be made by replacing the expectation symbol of \(A_{r+1}\) with infinite norm. Also, the process of estimating the bound of \(A_{r+1}\) contains no procedure using expectation. In other words, the expectation acts only as a linear operator in the process of estimating \(A_{r+1}\). Therefore, we can calculate the bound of \(R_{t}\) by same way as in \(A_{r+1}\) and \[R_{t}=O(1/q^{t})\cdot\||T_{k}(z)|^{2D-1}\|_{\infty}\] (F.87) _Proof for Lemma F.1._ By the subsections above, we can calculate \(z\mathbb{E}\left[|T_{k}(z)|^{2D}\right]\). Take the integer \(t\) larger than \(4D\). 
Then, \[z\mathbb{E}\left[|T_{k}(z)|^{2D}\right] =A_{1}+\cdots+A_{t}+R_{t}\] \[=-m_{sc}\mathbb{E}\left[|T_{k}(z)|^{2D}\right]+\kappa_{3}m_{sc}^ {2}\mathbb{E}\left[|T_{k}(z)|^{2D}\right]+O(1/q^{2})\mathbb{E}\left[|T_{k}(z) |^{2D-1}\right]\] \[\quad+O(1/q^{2})\mathbb{E}\left[|T_{k}(z)|^{2D-1}\right]+\cdots+ O(1/q^{t-1})\mathbb{E}\left[|T_{k}(z)|^{2D-1}\right]\] \[\quad+O(1/q^{t})\cdot\||T_{k}(z)|^{2D-1}\|_{\infty}\] \[=-m_{sc}\mathbb{E}\left[|T_{k}(z)|^{2D}\right]+\kappa_{3}m_{sc}^{ 2}\mathbb{E}\left[|T_{k}(z)|^{2D}\right]+O(1/q^{2})\mathbb{E}\left[|T_{k}(z)| ^{2D-1}\right]\] \[\quad+O(1/q^{t})\cdot\||T_{k}(z)|^{2D-1}\|_{\infty}\] (F.88) Now, \[(z+m_{sc}-\kappa_{3}m^{2})\mathbb{E}\left[|T_{k}(z)|^{2D}\right]=O(1/q^{2}) \mathbb{E}\left[|T_{k}(z)|^{2D-1}\right]+O(1/q^{t})\cdot\||T_{k}(z)|^{2D-1}\| _{\infty}\] (F.89) Since \(z+m_{sc}\gg\kappa_{3}m^{2}\), \(|z+m_{sc}-\kappa_{3}m^{2}|>c\) for some constant \(c\). Also, the infinite norm is constant. Therefore, by dividing each side by \(z+m_{sc}-\kappa_{3}m^{2}\), \[\mathbb{E}\left[|T_{k}(z)|^{2D}\right]=O(1/q^{2})\mathbb{E}\left[|T_{k}(z)|^{ 2D-1}\right]+O(1/q^{t})\] (F.90) Then by Young's inequality and Jensen's inequality, \[\mathbb{E}\left[|T_{k}(z)|^{2D}\right] \leq\frac{1}{2D}\cdot O(1/q^{2})^{2D}+\frac{2D-1}{2D}\cdot\mathbb{E }\left[|T_{k}(z)|^{2D-1}\right]^{\frac{2D}{2D-1}}+O(1/q^{t})\] \[\leq\frac{1}{2D}\cdot O(1/q^{2})^{2D}+\frac{2D-1}{2D}\cdot \mathbb{E}\left[|T_{k}(z)|^{2D}\right]+O(1/q^{t})\] \[=\frac{1}{2D}\cdot O(1/q^{2})^{2D}+\frac{2D-1}{2D}\cdot\mathbb{E }\left[|T_{k}(z)|^{2D}\right]\] (F.91) since \(t>4D\) and \[\mathbb{E}\left[|T_{k}(z)|^{2D}\right]\leq O(1/q^{2})^{2D}\] (F.92) which implies \[T_{k}\prec 1/q^{2}\] (F.93) ### Proof of Lemma F.2 In this subsection, we use Lemma D.2 with Lemma F.1 for power counting argument frequently. As in the proof of Lemma F.1 in subsection F.1, we will consider the derivatives with respect to \(H_{ik}\) when \(i=k\) as same as \(i\neq j\). 
#### f.2.1 Estimates for \(C_{1}\) In this subsection, we will estimate the order of \[C_{1}+m_{sc}\mathbb{E}\left[P(s)^{D-1}\overline{P(s)}^{D}\cdot s (z)\right]\] (F.94) By simple derivative calculation, we can know that \(C_{1}=C_{1,1}+C_{1,2}+C_{1,3}\) where \[C_{1,1} =\frac{D-1}{N}\sum_{i,j,k}\kappa_{ik}^{(2)}\mathbb{E}\left[P(s)^ {D-2}\overline{P(s)}^{D}G_{kj}\cdot\partial_{ik}P(s)\right]\] \[=\frac{D-1}{\sqrt{N}}\sum_{i,k}\kappa_{ik}^{(2)}\mathbb{E}\left[ P(s)^{D-2}\overline{P(s)}^{D}\cdot(z+m_{sc}-\kappa_{3}m_{sc}^{2})\cdot T_{k}(Q_ {ik}+Q_{ki})\right]\] (F.95) \[C_{1,2} =\frac{D}{N}\sum_{i,j,k}\kappa_{ik}^{(2)}\mathbb{E}\left[P(s)^{D- 1}\overline{P(s)}^{D-1}G_{kj}\cdot\overline{\partial_{ik}P(s)}\right]\] \[=\frac{D}{\sqrt{N}}\sum_{i,k}\kappa_{ik}^{(2)}\mathbb{E}\left[P(s )^{D-2}\overline{P(s)}^{D}\cdot(\overline{z}+\overline{m_{sc}}-\overline{ \kappa_{3}m_{sc}^{2}})\cdot T_{k}(\overline{Q_{ik}}+\overline{Q_{ki}})\right]\] (F.96) \[C_{1,3} =\frac{1}{N}\sum_{i,j,k}\kappa_{ik}^{(2)}\mathbb{E}\left[P(s)^{D- 1}\overline{P(s)}^{D}(-G_{ki}G_{kj}-G_{kk}G_{ij})\right]\] \[=\frac{1}{N}\sum_{i,j,k}\kappa_{ik}^{(2)}\mathbb{E}\left[P(s)^{D- 1}\overline{P(s)}^{D}\cdot(-G_{ki}G_{kj})\right]-m_{sc}\mathbb{E}\left[P(s)^{ D-1}\overline{P(s)}^{D}s(z)\right]\] (F.97) Then by moment counting, \[C_{1,1}=O\left(\frac{\sqrt{N}}{q^{6}}\right)\cdot\mathbb{E}\left[|P (s)|^{2D-1}\right],\ C_{1,2}=O\left(\frac{\sqrt{N}}{q^{6}}\right)\cdot\mathbb{E }\left[|P(s)|^{2D-1}\right],\] (F.98) \[C_{1,3}+m_{sc}\mathbb{E}\left[P(s)^{D-1}\overline{P(s)}^{D}\cdot s (z)\right]=O\left(\frac{1}{q^{2}}\right)\cdot\mathbb{E}\left[|P(s)|^{2D-1}\right]\] (F.99) are negligible since \(q\geq N^{\phi}>N^{1/8}\). Therefore, \[C_{1}+m_{sc}\mathbb{E}\left[P(s)^{D-1}\overline{P(s)}^{D}\cdot s(z)\right]=O \left(\frac{\sqrt{N}}{q^{6}}\right)\cdot\mathbb{E}\left[|P(s)|^{2D-1}\right]\] (F.100) #### f.2.2 Estimates for \(C_{2}\) In this subsection, we will estimate the order of \[C_{2}-\kappa_{3}m_{sc}^{2}\mathbb{E}\left[P(s)^{D-1}\overline{P(s)}^{D}\cdot s (z)\right]\] (F.101) Before calculating the bound, we will define \(C_{r,p}\) (\(r\geq 2,p=1,2,3\)) which clearly satisfy \(C_{r}=C_{r,1}+C_{r,2}+C_{r,3}\) as following: \[C_{r,1} =\frac{D-1}{N}\sum_{i,j,k}\frac{\kappa_{ik}^{(r+1)}}{r!}\mathbb{E }\left[\partial_{ik}^{r-1}\left\{P(s)^{D-2}\overline{P(s)}^{D}G_{kj}\cdot \partial_{ik}P(s)\right\}\right]\] (F.102) \[C_{r,2} =\frac{D}{N}\sum_{i,j,k}\frac{\kappa_{ik}^{(r+1)}}{r!}\mathbb{E} \left[\partial_{ik}^{r-1}\left\{P(s)^{D-1}\overline{P(s)}^{D-1}G_{kj}\cdot \overline{\partial_{ik}P(s)}\right\}\right]\] (F.103) \[C_{r,3} =\frac{1}{N}\sum_{i,j,k}\frac{\kappa_{ik}^{(r+1)}}{r!}\mathbb{E} \left[\partial_{ik}^{r-1}\left\{P(s)^{D-1}\overline{P(s)}^{D}(-G_{ki}G_{kj}- G_{kk}G_{ij})\right\}\right]\] (F.104) Then by derivative calculation, we can know that \(C_{2,1}\) and \(C_{2,2}\) have following bound \[C_{2,1}=O\left(\frac{\sqrt{N}}{q^{7}}\right)\cdot\mathbb{E}\left[|P(s)|^{2D-1} \right],\ C_{2,2}=O\left(\frac{\sqrt{N}}{q^{7}}\right)\cdot\mathbb{E}\left[|P( s)|^{2D-1}\right]\] (F.105) For \(C_{2,3}\), the only non-negligible term is \[\frac{1}{N}\sum_{i,j,k}\frac{\kappa_{ik}^{(3)}}{2}\mathbb{E}\left[P(s)^{D-1} \overline{P(s)}^{D}G_{kk}G_{ii}G_{kj}\right]=O\left(\frac{\sqrt{N}}{q^{3}} \right)\mathbb{E}\left[|P(s)|^{2D-1}\right]\] (F.106) which can be written as \[\frac{1}{N}\sum_{i,j,k}\frac{\kappa_{ik}^{(3)}}{2}\mathbb{E}\left[P(s )^{D-1}\overline{P(s)}^{D}(G_{kk}-m_{sc})G_{ii}G_{kj}\right]\] 
\[+\frac{1}{N}\sum_{i,j,k}\frac{\kappa_{ik}^{(3)}}{2}\mathbb{E} \left[P(s)^{D-1}\overline{P(s)}^{D}m_{sc}(G_{ii}-m_{sc})G_{kj}\right]\] \[+\frac{1}{N}\sum_{i,j,k}\frac{\kappa_{ik}^{(3)}}{2}\mathbb{E} \left[P(s)^{D-1}\overline{P(s)}^{D}m_{sc}^{2}G_{kj}\right]\] \[=O\left(\frac{\sqrt{N}}{q^{4}}\right)\cdot\mathbb{E}\left[|P(s)|^ {2D-1}\right]+\kappa_{3}m_{sc}^{2}\mathbb{E}\left[P(s)^{D-1}\overline{P(s)}^{ D}\cdot s(z)\right]\] (F.107) Therefore, \[C_{2,3}-\kappa_{3}m_{sc}^{2}\mathbb{E}\left[P(s)^{D-1}\overline{P(s)}^{D} \cdot s(z)\right]=O\left(\frac{\sqrt{N}}{q^{4}}\right)\mathbb{E}\left[|P(s)|^ {2D-1}\right]\] (F.108) which implies the desired term satisfies \[C_{2}-\kappa_{3}m_{sc}^{2}\mathbb{E}\left[P(s)^{D-1}\overline{P(s)}^{D}\cdot s (z)\right]=O\left(\frac{\sqrt{N}}{q^{4}}\right)\mathbb{E}\left[|P(s)|^{2D-1}\right]\] (F.109) #### f.2.3 Estimates for \(C_{r}\) (\(r\geq 3\)) In this subsection, we will change the two definitions F.4 and F.5 by replacing \(G_{ij},G_{ji}\) into \(G_{ik},G_{ki}\) since we differentiate the terms by \(H_{ik}\) in this subsection. Then the number \(n_{c}+n_{s}-1-\frac{1}{2}n_{end}\) in definition F.5 implies the order of \(N\) in the term while \(q\) is not counted. **Theorem F.11**.: \(C_{r,1}\) _and \(C_{r,2}\) are negligible for all integer \(r\geq 2\)._ Proof.: By the previous subsection, we can know that \(C_{2,1}=O\left(\sqrt{N}/q^{7}\right)\cdot\mathbb{E}\left[|P(s)|^{2D-1}\right]\) is negligible. Now, suppose that \(C_{r^{\prime},1}\) is negligible for some integer \(r^{\prime}\geq 2\). Write \(C_{r^{\prime},1}\) as following: \[C_{r^{\prime},1}=\frac{1}{(r^{\prime})!N}\sum_{i,j,k}\kappa_{ik}^{(r^{\prime}+ 1)}\mathbb{E}\left[\partial_{ik}^{r^{\prime}-1}\left\{P(s)^{D-2}\overline{P(s) }^{D}G_{kj}\cdot\partial_{ik}P(s)\right\}\right]\] (F.110) and define \(F_{t}(H_{ik})\) as \[\sum_{t\in T}P(s)^{\alpha_{t}}\overline{P(s)}^{\beta_{t}}F_{t}(H_{ik})= \partial_{ik}^{r^{\prime}-1}\left\{P(s)^{D-2}\overline{P(s)}^{D}G_{kj}\cdot \partial_{ik}P(s)\right\}\] (F.111) Then, \(C_{r^{\prime}+1,1}\) can be interpreted as \[(r^{\prime}+1)C_{r^{\prime}+1,1}=L_{1}+L_{2}+L_{3}\] (F.112) where \[L_{1} =\sum_{t\in T}\frac{\alpha_{t}}{(r^{\prime})!N}\sum_{i,j,k}\kappa_{ ik}^{(r^{\prime}+2)}\mathbb{E}\left[P(s)^{\alpha_{t}-1}\overline{P(s)}^{\beta_{t}}F_{t}(H_ {ik})\cdot\partial_{ik}P(s)\right]\] (F.113) \[L_{2} =\sum_{t\in T}\frac{\beta_{t}}{(r^{\prime})!N}\sum_{i,j,k}\kappa_{ ik}^{(r^{\prime}+2)}\mathbb{E}\left[P(s)^{\alpha_{t}}\overline{P(s)}^{\beta_{t}-1} F_{t}(H_{ik})\cdot\partial_{ik}\overline{P(s)}\right]\] (F.114) \[L_{3} =\sum_{t\in T}\frac{1}{(r^{\prime})!N}\sum_{i,j,k}\kappa_{ik}^{(r^ {\prime}+2)}\mathbb{E}\left[P(s)^{\alpha_{t}}\overline{P(s)}^{\beta_{t}}F_{t} ^{\prime}(H_{ik})\right]\] (F.115) Since \(\partial_{ik}P(s)\prec 1/q^{4}\) and \(\kappa_{ik}^{(r^{\prime}+2)}\) has one more \(q\) in the denominator than \(\kappa_{ik}^{(r^{\prime}+1)}\), the bound of \(L_{1}\) has 5 more \(q\) in the denominator than \(C_{r^{\prime},1}\) hence negligible. Similarly, \(L_{2}\) is negligible also. For \(L_{3}\), by the changed version of Lemma F.7, the index of \(N\) of order of \(C_{r^{\prime}+1}\) is smaller than that of \(C_{r^{\prime},1}\) (hence smaller than that of \(C_{2,1}\) (equals to 1/2)). Also, since \(r^{\prime}\geq 2\), \(\kappa_{ik}^{(r^{\prime}+2)}\) has more than two \(q\) in the denominator. 
Plus, since there is only one entry of \(G\) with index \(j\) inside \(L_{3}\), there exists at least one \(T_{\gamma}\) (\(\gamma\) is arbitrary) inside \(F^{\prime}(H_{ik})\) which means at least two \(q\) appear again inside \(F(H_{ik})\). Combining previous two facts, the order of \(L_{3}\) is \[L_{3}=O\left(\frac{N^{p_{1}}}{q^{p_{2}}}\right),\quad p_{1}<\frac{1}{2},\ p_{2}\geq 4\] (F.116) which means \(L_{3}\) is negligible since \(\phi>1/8\). Therefore, since \(L_{1}\), \(L_{2}\), and \(L_{3}\) are negligible, \(C_{r^{\prime}+1,1}\) is negligible also. By similar process, \(C_{r^{\prime}+1,2}\) is negligible. **Theorem F.12**.: \(C_{r,3}=O(\sqrt{N}/q^{r+1})\mathbb{E}\left[|P(s)|^{2D-1}\right]\) _for all integer \(r\geq 3\)._ Proof.: Among the terms in \[\frac{1}{N}\sum_{i,j,k}\frac{\kappa_{ik}^{(r+1)}}{r!}\mathbb{E}\left[P(s)^{ \alpha}\overline{P(s)}^{\beta}F(H_{ik})\right]\] (F.117) , if \(F(H_{ik})\) has more than two non-diagonal entries, then such terms are negligible since the order of \(N\) of their bounds are negative. Also, \(F(H_{ik})\) have at least one non-diagonal entries containing the index \(j\). Therefore, we can only consider the case that \(F(H_{ik})\) contains the only non-diagonal entry (containing \(j\)) which leads the order of \(N\) in their bounds are \(1/2\). Consequently, if \(\alpha<D-1\) or \(\beta<D\), it means that at least one of derivative has effected to at least one \(P(s)\) or \(\overline{P(s)}\). Then since \(\partial_{ik}P(s)\prec 1/q^{4}\), the order is smaller than \(O(\sqrt{N}/q^{4})\) which is negligible by the fact that \(\phi>1/8\). In conclusion, the possible non-negligible term in \(A_{r,3}\) is \[\frac{1}{N}\sum_{i,j,k}\frac{\kappa_{ik}^{(r+1)}}{r!}\mathbb{E}\left[P(s)^{D-1 }\overline{P(s)}^{D}\partial_{ik}^{r}(G_{kj})\right]\] (F.118) Since \(\partial_{ik}^{r}(G_{kj})\) contains \(2r+2\) indices consisted of one \(j\), r \(i\), and r+1 \(k\), the possible non-negligible terms are followings: \[\frac{1}{N}\sum_{i,j,k}\frac{\kappa_{ik}^{(r+1)}}{r!}\mathbb{E} \left[P(s)^{D-1}\overline{P(s)}^{D}G_{ij}\times(r\ \text{diagonal entries})\right]\] \[\text{or}\ \frac{1}{N}\sum_{i,j,k}\frac{\kappa_{ik}^{(r+1)}}{r!} \mathbb{E}\left[P(s)^{D-1}\overline{P(s)}^{D}G_{kj}\times(r\ \text{diagonal entries})\right]\] (F.119) which has order of \[O\left(\frac{\sqrt{N}}{q^{r+1}}\right)\mathbb{E}\left[|P(s)|^{2D-1}\right]\] (F.120) #### f.2.4 Estimates for \(R_{t}\) The goal of the subsection is to prove \[R_{t}=O(\sqrt{N}/q^{t})\cdot\||P(s)|^{2D-1}\|_{\infty}\] (F.121) where \(R_{t}\) is defined as \[R_{t}=\sum_{i,j,k}\mathbb{E}\left[\Omega_{t}\left(P(s)^{D-1} \overline{P(s)}^{D}\cdot\frac{1}{N}G_{kj}H_{ik}\right)\right]\] (F.122) where \[\mathbb{E}\left[\Omega_{t}\left(P(s)^{D-1}\overline{P(s)}^{D} \cdot\frac{1}{N}G_{kj}H_{ik}\right)\right]\leq C_{t}^{\prime}\cdot\mathbb{E} \left[|H_{ij}|^{t+2}\right]\cdot\|\partial_{ij}^{t+1}(P(s)^{D-1}\overline{P(s )}^{D}\cdot\frac{1}{N}G_{kj})\|_{\infty}\] (F.123) for some constant \(C_{t}\). Then, \[|R_{t}| \leq C_{t}^{\prime}\sum_{i,j,k}\mathbb{E}\left[|H_{ij}|^{t+2} \right]\cdot\left\|\partial_{ij}^{t+1}(P(s)^{D-1}\overline{P(s)}^{D}\cdot \frac{1}{N}G_{kj})\right\|_{\infty}\] \[=C_{t}^{\prime}\sum_{i,j,k}\kappa_{ij}^{(t+2)}\cdot\left\| \partial_{ij}^{t+1}(P(s)^{D-1}\overline{P(s)}^{D}\cdot\frac{1}{N}G_{kj}) \right\|_{\infty}\] (F.124) while the equation F.124 can be made by replacing the expectation symbol of \(C_{t+1}\) with infinite norm. 
Also, the process of estimating the bound of \(C_{t+1}\) involves no step that actually uses the expectation; in other words, the expectation acts only as a linear operator in the estimate of \(C_{t+1}\). Therefore, we can bound \(R_{t}\) in the same way as \(C_{t+1}\), and

\[R_{t}=O(\sqrt{N}/q^{t+2})\cdot\||P(s)|^{2D-1}\|_{\infty}\] (F.125)

## Appendix G Proof of Lemma 4.4 and Lemma 4.5

### Proof of Lemma 4.4

By the block structure of \(M\), after reordering the indices without loss of generality, there exists an integer \(T>0\) dividing \(N\) such that, for \(i=1,\cdots,K\),

\[\mathbf{v}_{i}=(\mathbf{v}_{i1}\cdots\mathbf{v}_{i1}|\mathbf{v}_{i2}\cdots\mathbf{v}_{i2}|\cdots|\mathbf{v}_{iT}\cdots\mathbf{v}_{iT})^{T}\] (G.1)
Then \(\mathbb{E}M\) is a block matrix as following: \[\begin{pmatrix}(K-1)x&\cdots&(K-1)x&-x&\cdots&-x&\text{}&-x&\cdots&-x\\ \vdots&&\vdots&&\vdots&&\vdots&\cdots&\vdots&&\vdots&\vdots\\ (K-1)x&\cdots&(K-1)x&-x&\cdots&-x&\text{}&-x&\cdots&-x\\ \hline-x&\cdots&-x&(K-1)x&\cdots&(K-1)x&\text{}&-x&\cdots&-x\\ \vdots&&\vdots&&\vdots&&\vdots&\cdots&\vdots&&\vdots&\vdots\\ -x&\cdots&-x&(K-1)x&\cdots&(K-1)x&&-x&\cdots&-x\\ \hline&&\vdots&&\vdots&&\text{}&\vdots&&\vdots&\vdots&\vdots\\ \hline-x&\cdots&-x&-x&-x&\text{}&(K-1)x&\cdots&(K-1)x\\ \vdots&&\vdots&&\vdots&\vdots&\cdots&\vdots&&\vdots&\vdots\\ -x&\cdots&-x&-x&-x&(K-1)x&\cdots&(K-1)x\end{pmatrix}\] (G.9) Since the rank of a matrix is the number of nonzero rows of its reduced row-echelon form, \(rank(A)=rank(A^{\prime})\) while the \(K\times K\) matrix \(A^{\prime}\) is defined as \[A^{\prime}=\begin{pmatrix}(K-1)x&-x&\cdots&-x\\ -x&(K-1)x&\cdots&-x\\ \vdots&\vdots&&\vdots\\ -x&-x&\cdots&(K-1)x\end{pmatrix}\] (G.10) while \(rank(A^{\prime})=K-1\) since \(null(A^{\prime})=span(\{(1,1,\cdots,1)^{T}\})\) which implies \(nullity(A^{\prime})=1\). Therefore, the rank of \(\mathbb{E}M\) is \(K-1\). Define \(N\)-dimensional vectors \(\mathbf{w}_{i}\)\((i=1,\cdots,K-1)\) by \[\mathbf{w}_{i}(j)=\begin{cases}1&\text{if }(i-1)K/N\leq j<iK/N\\ -1&\text{if }iK/N\leq j<(i+1)K/N\\ 0&\text{else}\end{cases}\] (G.11) Then \((A-(Nx)I_{N})\mathbf{w}_{i}=\mathbf{0}\) for all \(i=1,\cdots,K-1\) and \(\{\mathbf{w}_{i},\ i=1,\cdots,K-1\}\) is independent. Since \(rank(A)=K-1\), the non-zero eigenvalues of \(A\) is \(Nx=\frac{N}{\sigma K}(p_{s}-p_{d})\) with multiplicity \(K-1\). Consider the orthonormal eigenvectors \(\mathbf{v}_{i}\)\((i=1,\cdots,K-1)\) of \(\mathbb{E}M\) with respect to eigenvalue \(\frac{N}{\sigma K}(p_{s}-p_{d})\) generated by using the Gram-Schmidt process to \(\{\mathbf{w}_{i},\ i=1,\cdots,K-1\}\). Since \(\mathbf{w}_{i}\)\((i=1,\cdots,K-1)\) satisfy the block structure properties described in the lemma and the Gram-Schmidt process cannot affect to this property, \(\mathbf{v}_{i}\)\((i=1,\cdots,K-1)\) satisfies such property also.
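As a numerical illustration of the statement just proved (not used in the argument), the following minimal Python sketch builds \(\mathbb{E}M\) as in (G.8) for illustrative values of \(N\), \(K\), \(p_{s}\), \(p_{d}\) and \(\sigma\), and checks that its only non-zero eigenvalue is \(Nx=\frac{N}{\sigma K}(p_{s}-p_{d})\) with multiplicity \(K-1\), and that the corresponding eigenvectors are constant on each community and orthogonal to the all-ones vector.

```python
import numpy as np

# Illustrative parameter values (assumptions): K equal communities of size N/K.
N, K = 120, 4
p_s, p_d, sigma = 0.6, 0.2, 1.0
x = (p_s - p_d) / (sigma * K)

labels = np.repeat(np.arange(K), N // K)            # community label of each node
same = labels[:, None] == labels[None, :]
EM = np.where(same, (K - 1) * x, -x)                # E M_ij as in (G.8)

vals, vecs = np.linalg.eigh(EM)
print("non-zero eigenvalues :", np.round(vals[np.abs(vals) > 1e-8], 6))
print("predicted N*x        :", round(N * x, 6), "with multiplicity", K - 1)

# Eigenvectors for the eigenvalue N*x are block-constant and sum to zero.
V = vecs[:, np.abs(vals - N * x) < 1e-8]
block_means = np.array([V[labels == c].mean(axis=0) for c in range(K)])
print("block-constant       :", np.allclose(V, block_means[labels]))
print("orthogonal to 1      :", np.allclose(V.sum(axis=0), 0.0))
```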
We study the spectral properties of balanced stochastic block models in the regimes where the average degree grows more slowly than the number of nodes (the sparse regime) or proportionally to it (the dense regime). In both regimes, we show a phase transition of the extremal eigenvalues of the SBM at the Kesten–Stigum threshold. We also prove a central limit theorem for the linear spectral statistics in both regimes, and we propose a hypothesis test for determining the presence of communities in a graph.
2302.14800
The JCMT Nearby Galaxies Legacy Survey: SCUBA-2 observations of nearby galaxies
We present 850$\mu$m observations of a sample of 8 nearby spiral galaxies, made using the SCUBA-2 camera on the James Clerk Maxwell Telescope (JCMT) as part of the JCMT Nearby Galaxies Legacy Survey (NGLS). We corrected our data for the presence of the $^{12}$CO $J=3\to 2$ line in the SCUBA-2 850$\mu$m bandwidth using NGLS HARP data, finding a typical $^{12}$CO contribution of $\sim 20$%. We measured dust column densities, temperatures and opacity indices by fitting spectral energy distributions constructed from SCUBA-2 and archival Herschel observations, and used archival GALEX and Spitzer data to make maps of surface density of star formation ($\Sigma_{\rm SFR}$). Typically, comparing SCUBA-2-derived H$_2$ surface densities ($\Sigma_{\rm H_2}$) to $\Sigma_{\rm SFR}$ gives shallow star formation law indices within galaxies, with SCUBA-2-derived values typically being sublinear and Herschel-derived values typically being broadly linear. This difference is likely due to the effects of atmospheric filtering on the SCUBA-2 data. Comparing the mean values of $\Sigma_{\rm H_2}$ and $\Sigma_{\rm SFR}$ of the galaxies in our sample returns a steeper star formation law index, broadly consistent with both the Kennicutt-Schmidt value of 1.4 and linearity. Our results show that a SCUBA-2 detection is a good predictor of star formation. We suggest that Herschel emission traces gas in regions which will form stars on timescales $\sim 5-100$ Myr, comparable to the star formation timescale traced by GALEX and Spitzer data, while SCUBA-2 preferentially traces the densest gas within these regions, which likely forms stars on shorter timescales.
Kate Pattle, Walter Gear, Christine D. Wilson
2023-02-28T17:50:51
http://arxiv.org/abs/2302.14800v1
# The JCMT Nearby Galaxies Legacy Survey: SCUBA-2 observations of nearby galaxies ###### Abstract We present 850\(\mu\)m observations of a sample of 8 nearby spiral galaxies, made using the SCUBA-2 camera on the James Clerk Maxwell Telescope (JCMT) as part of the JCMT Nearby Galaxies Legacy Survey (NGLS). We corrected our data for the presence of the \({}^{12}\)CO \(J=3\to 2\) line in the SCUBA-2 850\(\mu\)m bandwidth using NGLS HARP data, finding a typical \({}^{12}\)CO contribution of \(\sim 20\%\). We measured dust column densities, temperatures and opacity indices by fitting spectral energy distributions constructed from SCUBA-2 and archival _Herschel_ observations, and used archival GALEX and _Spitzer_ data to make maps of surface density of star formation (\(\Sigma_{\rm{SFR}}\)). Typically, comparing SCUBA-2-derived H\({}_{2}\) surface densities (\(\Sigma_{\rm{H_{2}}}\)) to \(\Sigma_{\rm{SFR}}\) gives shallow star formation law indices within galaxies, with SCUBA-2-derived values typically being sublinear and _Herschel_-derived values typically being broadly linear. This difference is likely due to the effects of atmospheric filtering on the SCUBA-2 data. Comparing the mean values of \(\Sigma_{\rm{H_{2}}}\) and \(\Sigma_{\rm{SFR}}\) of the galaxies in our sample returns a steeper star formation law index, broadly consistent with both the Kennicutt-Schmidt value of 1.4 and linearity. Our results show that a SCUBA-2 detection is a good predictor of star formation. We suggest that Herschel emission traces gas in regions which will form stars on timescales \(\sim 5-100\) Myr, comparable to the star formation timescale traced by GALEX and _Spitzer_ data, while SCUBA-2 preferentially traces the densest gas within these regions, which likely forms stars on shorter timescales. keywords: galaxies: star formation - galaxies: ISM - submillimetre: galaxies ## 1 Introduction The evolution of a galaxy is intrinsically linked to the star formation that takes place within it. Stars form from the densest phase of the interstellar medium of galaxies, from gravitationally unstable structures within clouds composed primarily of dense molecular hydrogen (Bergin & Tafalla, 2007). Understanding the timescale on which, and efficiency with which, molecular gas is converted into stars is crucial to understanding the star-forming histories of galaxies (e.g. Kennicutt & Evans, 2012). One of the key metrics by which the link between the gas properties of galaxies and the star formation within them is parametrized is the Kennicutt-Schmidt (KS) star formation law (Schmidt, 1959; Kennicutt, 1998), a scaling relation between surface gas density (\(\Sigma_{gas}\)) and surface density of star formation rate (\(\Sigma_{\rm{SFR}}\)). The star formation law can be measured either between a sample of galaxies (e.g. Kennicutt, 1998) or within individual galaxies (e.g. Leroy et al., 2008). \(\Sigma_{\rm{SFR}}\) is typically well-correlated with \(\Sigma_{gas}\), and the relationship is parametrized as \[\Sigma_{\rm{SFR}}\propto\Sigma_{gas}^{N} \tag{1}\] (Schmidt, 1959; Kennicutt, 1998). Kennicutt (1998) found \(N=1.4\pm 0.15\), measuring disc-averaged values of both quantities over an ensemble of galaxies. However, the molecular gas surface density (\(\Sigma_{\rm{H_{2}}}\)) is typically much better-correlated with \(\Sigma_{\rm{SFR}}\) than is the total gas surface density (Wong & Blitz, 2002), as might be expected given that star formation occurs within clouds of cold molecular gas (e.g. Kennicutt & Evans, 2012). 
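As an aside, the index \(N\) in equation (1) is usually estimated as the slope of a straight-line fit in log-log space. The short Python sketch below illustrates this on synthetic data; the data values, the scatter, and the use of an ordinary least-squares fit are illustrative assumptions and do not represent the fitting procedure used in this paper (described in Section 6).

```python
import numpy as np

# Synthetic, illustrative data (not from this paper): gas surface densities in
# Msun pc^-2 and SFR surface densities in Msun yr^-1 kpc^-2 with log-normal scatter.
rng = np.random.default_rng(1)
true_N, true_A = 1.4, 2.5e-4
sigma_gas = 10 ** rng.uniform(0.5, 2.5, 200)
sigma_sfr = true_A * sigma_gas**true_N * 10 ** rng.normal(0.0, 0.2, 200)

# Fit log Sigma_SFR = N log Sigma_gas + log A by ordinary least squares.
slope, intercept = np.polyfit(np.log10(sigma_gas), np.log10(sigma_sfr), 1)
print(f"fitted index N = {slope:.2f}, amplitude A = {10**intercept:.2e}")
```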
The relationship between \(\Sigma_{SFR}\) and \(\Sigma_{gas}\) or \(\Sigma_{\rm{H_{2}}}\) within individual galaxies (the resolved KS law) has also been extensively investigated (e.g. Bigiel et al., 2008; Bolatto et al., 2017; Zabel et al., 2020; Ellison et al., 2021). On scales \(\gtrsim 1\) kpc, a correlation is seen between \(\Sigma_{SFR}\) and \(\Sigma_{\rm{H_{2}}}\); Bigiel et al. (2008) found an average index \(N=1.0\pm 0.2\) between \(\Sigma_{\rm{H_{2}}}\) and \(\Sigma_{SFR}\) in a sample of spiral galaxies. This linear relationship, also found by Bolatto et al. (2017), suggests that stars form from molecular gas with constant efficiency within these galaxies. The offset of the resolved KS law varies significantly between galaxies, with galaxies with higher stellar masses, larger Sersic indices, and lower specific star formation rates typically having a lower resolved KS law offset (Ellison et al., 2021). Moreover, a range of values of \(N\) have been found in nearby galaxies: for example, Ford et al. (2013), observing M31, found a super-KS index of \(2.03\pm 0.04\) for \(\Sigma_{gas}\) (Hi, and H\({}_{2}\) from CO), but sublinear indices for molecular gas only: \(0.60\pm 0.01\) for \(\Sigma_{\rm{H_{2}}}\) from CO, and \(0.55\pm 0.01\) for \(\Sigma_{\rm{H_{2}}}\) from _Herschel_ dust emission, assuming a radial gas-to-dust ratio gradient. A sub-linear star formation law suggests that star formation becomes less efficient at high gas densities, which is difficult to physically motivate. However, Williams et al. (2018), observing M33, found on kpc scales an index \(1.30\pm 0.11\) for molecular gas from CO, but \(5.53\pm 0.75\) for total gas and \(5.85\pm 2.37\) for gas from dust, with all three quantities varying significantly with the spatial scale over which they were measured. These differences are likely due to the disc of M33 being Hi-dominated (Williams et al., 2018); surface density of atomic gas \(\Sigma_{\rm HI}\) is a poor tracer of \(\Sigma_{SFR}\) (e.g. Gao & Solomon, 2004). The index measured also depends on the amount of diffuse background subtracted in both the gas and the star formation rate tracers (Kumari et al., 2020). Molecular clouds contain an interstellar dust component, consisting principally of silicates, carbonaceous grains, and polyaromatic hydrocarbons (PAHs), which typically makes up \(\sim 1\%\) of molecular clouds by mass (e.g. Draine & Li, 2007). Continuum emission from interstellar dust is a widely-used tracer of molecular gas (e.g. Hildebrand, 1983). The James Clerk Maxwell Telescope (JCMT) Nearby Galaxies Legacy Survey (NGLS, Wilson et al., 2009, 2012) is a large programme which mapped the molecular gas and dust in a sample of galaxies within a distance of 25 Mpc. In this paper, we present Sub-millimetre Common-User Bolometer Array 2 (SCUBA-2) 850\(\mu\)m dust emission observations of 8 galaxies from the NGLS sample. We present the observations in Section 2, and briefly review the galaxies which we consider in Section 3. In Section 4 we compare the dust and atomic and molecular gas distributions of the galaxies which we consider. In Section 5 we describe the process of fitting modified black-body functions to the spectral energy distributions of the galaxies which we consider. In Section 6 we construct resolved and unresolved star formation laws for the galaxies which we consider. In Section 7 we discuss our results. Section 8 summarises this work. 
## 2 Observations The SCUBA-2 850\(\mu\)m data presented in this paper were taken under project code MJLSN07. All data were taken using the SCUBA-2 CV-DAISY mapping mode, with the exception of NGC 5194, which was mapped using the PONG-900 mode (Holland et al., 2013). The CV-DAISY mode was used for these extended sources despite being optimised for compact sources because it produces a high exposure time in the map centre, which is necessary to observe relatively small low-surface-brightness sources in a reasonable amount of time (Holland et al., 2013). We selected 8 bright galaxies (listed in Table 1) with ancillary observations of the \({}^{12}\)CO \(J=1\to 0\) line made using the Nobeyama 45m telescope as part of the COMING (Sorai et al., 2019) and CO-ATLAS (Kuno et al., 2007) surveys. All of the galaxies in our sample were observed in the \({}^{12}\)CO \(J=3\to 2\) line using HARP (Buckle et al., 2009) as part of the NGLS. These galaxies also have ancillary observations with _Herschel_, GALEX and _Spitzer_ (Clark et al., 2018). These galaxies were selected for submillimetre brightness and completeness of ancillary data sets, and are not intended to be a representative sample of nearby galaxies. The UT start and end dates of the observations, the number of repeats and integration time per source, and the weather conditions under which they were observed are listed in Table 2. These data were largely taken in JCMT weather bands 2 and 3. JCMT weather bands are defined by atmospheric opacity at 225 GHz (\(\tau_{225}\)); Band 2 is defined by \(0.05<\tau_{225}<0.08\), and Band 3 by \(0.08<\tau_{225}<0.12\) (Dempsey et al., 2013). ### SCUBA-2 data reduction We reduced the data using the _skyloop1_ implementation of the _makemap_ algorithm in Smurf (Chapin et al., 2013), in which one iteration of _makemap_ is performed on each of the observations in the set in turn, with the set being averaged together at the end of each iteration, rather than each observation being reduced consecutively. Although SCUBA-2 observes 850\(\mu\)m and 450\(\mu\)m simultaneously, we consider only 850\(\mu\)m data in this paper. Footnote 1: [http://starlink.eao.hawaii.edu/docs/sun258.htx/sun258ss72.html](http://starlink.eao.hawaii.edu/docs/sun258.htx/sun258ss72.html) In order to ensure good image fidelity, we ran _makemap_ to a tolerance of 1%. We also defined a maximum observable size scale of 180'', in order to prevent the growth of large-scale, low-level emission structures in the output map. This maximum size scale is chosen to match the size of the central region of a CV-DAISY map over which exposure times are relatively uniform (Holland et al., 2013), and is more stringent than is usual for SCUBA-2 observations. The default maximum observable size scale is 300'', set by the size of a SCUBA-2 subarray (see, e.g., Holland et al., 2013; Kirk et al., 2018). By choosing a smaller maximum size scale, we place stronger constraints on the iterative _makemap_ algorithm, which is necessary in order to accurately recover faint extended sources, particularly when using the CV-DAISY mapping mode, which has a small map size and a non-uniform exposure time. We used the Nobeyama 45m \({}^{12}\)CO \(J=1\to 0\) maps (Kuno et al., 2007; Sorai et al., 2019) to define a fixed 'mask' for each observation, defining areas of astrophysical emission. 
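As an illustration of this masking step, the short sketch below shows one way an SNR-based mask could be built in Python with astropy; the file names, the assumption of separate intensity and error images, and the thresholding details are illustrative only, not the NGLS pipeline itself.

```python
import numpy as np
from astropy.io import fits

def make_co_mask(intensity_file, error_file, snr_threshold=4.0):
    """Build a fixed reduction mask from an integrated-intensity map.

    Pixels with integrated intensity / error above snr_threshold are treated
    as regions of astrophysical emission (mask = 1); everything else is
    masked out (mask = 0). The file names are placeholders, not NGLS products.
    """
    intensity = fits.getdata(intensity_file)   # K km/s
    error = fits.getdata(error_file)           # K km/s
    with np.errstate(divide="ignore", invalid="ignore"):
        snr = intensity / error
    mask = np.where(np.nan_to_num(snr) > snr_threshold, 1, 0)
    return mask.astype(np.int16)

# Example usage: write the mask out so it can be supplied to the map-maker.
# mask = make_co_mask("ngc4254_co10_mom0.fits", "ngc4254_co10_err.fits")
# fits.writeto("ngc4254_mask.fits", mask, overwrite=True)
```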
These observations were chosen because their resolution (\(\sim 17\)''; Sorai et al., 2019) is comparable to that of the JCMT, while their low-\(J\) transition and more transmissive atmospheric window give better sensitivity to extended emission over a larger area than do the NGLS HARP \(J=3\to 2\) data. Our aim in choosing a mask is to define the maximum area over which dust emission is likely to be detected; we want to provide sufficient constraint on the mapmaker to exclude regions without real signal whilst including an area sufficiently large to allow real extended structure to grow. The masked area was defined by an SNR \(>4\) in integrated intensity in the Nobeyama 45m data, as provided by the CO-ATLAS2 and COMING3 archives. Areas outside this masked region were set to zero until the final iteration of _makemap_ (see Mairs et al., 2015 for a detailed discussion of the role of masking in SCUBA-2 data reduction). In each case, the SCUBA-2 flux recovered in the output map did not fill the area defined by the mask, indicating that our choice of mask did not encourage the growth of spurious structures in the SCUBA-2 maps. Footnote 2: [https://www.nro.nao.ac.jp/~nro45mrt/html/Coatlas/](https://www.nro.nao.ac.jp/~nro45mrt/html/Coatlas/) Footnote 3: [https://astro3.sci.hokudai.ac.jp/~radio/comming/](https://astro3.sci.hokudai.ac.jp/~radio/comming/) By choosing these map-making parameters, it is likely that we have sacrificed some potentially-recoverable large-scale structure in order to achieve good image fidelity. We discuss the implications of this choice at various points throughout this work. The output maps were gridded to 4'' pixels and calibrated in mJy beam\({}^{-1}\) and mJy arcsec\({}^{-2}\) using the standard SCUBA-2 850\(\mu\)m flux conversion factors (FCFs) of 537 Jy beam\({}^{-1}\) pW\({}^{-1}\) and 2340 mJy arcsec\({}^{-2}\) pW\({}^{-1}\) (Dempsey et al., 2013). The effective resolution of SCUBA-2 at 850\(\mu\)m is 14.1''. The RMS noise in our maps is in the range 2.6-4.0 mJy beam\({}^{-1}\), with variation between maps being due to differences in exposure time, elevation, and amounts of large-scale structure present. RMS noise values and linear resolutions for each individual field are listed in Table 3. ### CO subtraction The SCUBA-2 850\(\mu\)m filter has a half-power bandwidth of 85\(\mu\)m (Holland et al., 2013), and so SCUBA-2 850\(\mu\)m data can include a significant contribution from the \({}^{12}\)CO \(J=3\to 2\) transition, the rest wavelength of which is 867.6\(\mu\)m (345.8 GHz). We accounted for the contribution of CO to our SCUBA-2 flux densities using the technique described by Drabek et al. (2012). Each of the 850\(\mu\)m observations was re-reduced with the integrated HARP (Buckle et al., 2009) \({}^{12}\)CO data added to the SCUBA-2 bolometer time series as a negative signal. By repeating the data reduction process described in Section 2.1, the same spatial filtering is applied to the subtracted HARP \({}^{12}\)CO signal as to the SCUBA-2 data, and so the spatial scales in the two data sets are matched. The SCUBA-2 850\(\mu\)m and HARP \({}^{12}\)CO data have very similar angular resolutions, agreeing to within 2%, and so no correction for this small difference is necessary. We used the integrated NGLS HARP \({}^{12}\)CO maps4 (Wilson et al., 2012), in which data are supplied for pixels with a SNR \(>2\) in total integrated intensity. 
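For reference, the flux calibration described above is a simple rescaling of the map from pW to flux density units; a minimal sketch using the FCF values quoted in the text (the function and argument names are illustrative):

```python
FCF_PEAK = 537.0     # Jy beam^-1 pW^-1 (Dempsey et al. 2013)
FCF_ARCSEC = 2340.0  # mJy arcsec^-2 pW^-1 (Dempsey et al. 2013)

def calibrate_850um(map_pw, per_beam=True):
    """Convert a SCUBA-2 850um map from pW to calibrated flux density units.

    per_beam=True  -> mJy beam^-1
    per_beam=False -> mJy arcsec^-2
    """
    if per_beam:
        return map_pw * FCF_PEAK * 1e3   # Jy beam^-1 scaled to mJy beam^-1
    return map_pw * FCF_ARCSEC
```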
The conversion from K km s\({}^{-1}\) to pW is dependent on atmospheric opacity; we calculated a conversion factor for each observation from the mean of its start- and end-time 225 GHz opacities using the relations given by Parsons et al. (2018). Footnote 4: [https://www.physics.mcmaster.ca/~wilson/www_xfer/NGLS/Data_and_Plots/v2-0/SINGS2/](https://www.physics.mcmaster.ca/~wilson/www_xfer/NGLS/Data_and_Plots/v2-0/SINGS2/) For the galaxies in question, the net contribution of \({}^{12}\)CO to the SCUBA-2 850\(\mu\)m flux is typically \(\sim 20\%\), but reaches 38% in NGC 3034 and \(\sim 50\%\) in NGC 5194, although our data reduction strategy in NGC 5194 may be suboptimal, as discussed in Section 3.8, below. The peak and total flux densities before and after CO subtraction are listed in Table 3. CO fractions of order tens of percent in bright regions indicate that this correction cannot be neglected when calculating dust masses from SCUBA-2 observations. As can be seen in Figures 1-8, the CO contamination fraction drops to near zero at the peripheries of the \({}^{12}\)CO SNR \(>2\) regions, indicating that little or no \({}^{12}\)CO emission has been retained in the 850\(\mu\)m images by excluding the lower-SNR data. With the exception of NGC 5194, the area over which SCUBA-2 signal is detected is well-covered by the footprint of the HARP observations. However, in the NGC 5194 field, the companion galaxy, NGC 5195, and a small part of the southern spiral arm of NGC 5194, are not mapped by HARP. The relatively few pixels which are thus bright in 850\(\mu\)m emission but not corrected for \({}^{12}\)CO contamination turn out not to be well-characterised and are excluded by data quality cuts discussed later in this work, and so do not affect our conclusions. We note that much lower CO contamination fractions have been found in SCUBA-2 images of M31 (Smith et al., 2021). The differences in CO fraction between M31 and the galaxies in our sample are likely due to CO emission from M31 having low surface brightness, with line fluxes \(<10\) K km s\({}^{-1}\) everywhere (Smith et al., 2021). The galaxies in our sample have significantly higher CO brightnesses (cf. Figures 1-8). As a potential green valley galaxy (Mutch et al., 2011), M31 is likely to have both less gas and less dust than actively star-forming galaxies. There may also be differences in metallicity between the galaxies in our sample and M31. The contribution of \({}^{12}\)CO \(J=3\to 2\) to SCUBA-2 850\(\mu\)m emission in local Milky Way star-forming regions is typically \(\lesssim 20\%\), but may be higher in the presence of outflows (Drabek et al., 2012; Pattle et al., 2015; Coude et al., 2016). 
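The net CO fractions quoted here and in Table 3 follow directly from the flux densities measured with and without the CO signal included in the time series; a short illustrative calculation (the example numbers are the NGC 3034 total flux densities from Table 3):

```python
def net_co_fraction(flux_with_co, flux_co_subtracted):
    """Fraction of the 850um flux density contributed by 12CO J=3-2."""
    return (flux_with_co - flux_co_subtracted) / flux_with_co

# NGC 3034 total 850um flux densities from Table 3 (Jy): 8.72 with CO, 5.43 CO-subtracted.
print(round(net_co_fraction(8.72, 5.43), 2))  # -> 0.38
```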
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{UT date} & \multicolumn{2}{c}{Integration} & \multicolumn{2}{c}{\(\tau_{225}\)} \\ \cline{2-7} Galaxy & Start & End & Repeats & time (mins) & Start & End \\ \hline NGC 3034 & 2012 Apr 01 & 2014 Feb 20 & 8 & \(\sim 21\) & 0.055 & 0.099 \\ NGC 3351 & 2014 Feb 12 & 2014 May 30 & 8 & \(\sim 21\) & 0.098 & 0.071 \\ NGC 3521 & 2013 Dec 05 & 2014 May 30 & 8 & \(\sim 21\) & 0.126 & 0.074 \\ NGC 4254 & 2012 Mar 31 & 2014 Jun 05 & 11 & \(\sim 21\) & 0.054 & 0.121 \\ NGC 4569 & 2012 Mar 31 & 2014 Jun 08 & 11 & \(\sim 21\) & 0.037 & 0.119 \\ NGC 4736 & 2012 Feb 04 & 2014 May 31 & 11 & \(\sim 21\) & 0.086 & 0.083 \\ NGC 5055 & 2013 Dec 11 & 2014 May 29 & 16 & \(\sim 21\) & 0.138 & 0.111 \\ NGC 5194 & 2014 Jan 15 & 2014 May 22 & 16 & \(\sim 42\) & 0.092 & 0.117 \\ \hline \hline \end{tabular} \end{table} Table 2: Details of the observing dates and conditions for the galaxies in our sample. Note that integration time is given per repeat. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & R.A. & Dec. & Hubble & & Distance & \\ Galaxy & (J2000) & (J2000) & classification & Inclination & (Mpc) & \(12+\log_{10}({\rm O/H})\) \\ \hline NGC 3034 & \(09^{h}55^{m}52\aas@@fstack{s}43\) & \(+69^{\circ}40^{\prime}46\aas@@fstack{\prime\prime}9\) & Scd & 76.9 & 3.61 & – \\ NGC 3351 & \(10^{h}43^{m}57\aas@@fstack{s}73\) & \(+11^{\circ}42\aas@@fstack{s}13\aas@@fstack{\prime\prime}0\) & Sb & 54.6 & 9.91 & \(8.654^{+0.018}_{-0.020}\) \\ NGC 3521 & \(11^{h}50^{m}48\aas@@fstack{s}57\) & \(-00^{\circ}02^{\prime}09\aas@@fstack{\prime\prime}2\) & SABb & 60.0 & 12.42 & \(8.604^{+0.025}_{-0.020}\) \\ NGC 4254 & \(12^{h}18^{m}49\aas@@fstack{s}63\) & \(+14^{\circ}24\aas@@fstack{\prime\prime}59\aas@@fstack{\prime\prime}4\) & Sc & 20.1 & 12.88 & \(8.554^{+0.022}_{-0.021}\) \\ NGC 4569 & \(12^{h}36^{m}49\aas@@fstack{s}43\aas@@fstack{\prime\prime}09\aas@@fstack{ \prime\prime}46\aas@@fstack{\prime\prime}3\) & SABa & 70.8 & 11.36 & – \\ NGC 4736 & \(12^{h}50^{m}53\aas@@fstack{s}15\aas@@fstack{\prime\prime}15\aas@@fstack{ \prime\prime}15\aas@@fstack{\prime\prime}2\aas@@fstack{\prime\prime}6\) & Sab & 31.7 & 4.39 & \(8.623^{+0.046}_{-0.06}\) \\ NGC 5055 & \(13^{h}15^{m}49\aas@@fstack{s}27\) & \(+42^{\circ}01^{\prime}45\aas@@fstack{\prime\prime}7\) & Sbc & 54.9 & \(9.04\) & \(8.581^{+0.052}_{-0.050}\) \\ NGC 5194 & \(13^{h}29^{m}52\aas@@fstack{s}70\) & \(+47^{\circ}1^{\prime}42\aas@@fstack{\prime\prime}9\) & Sbc & 32.6 & 8.59 & \(8.638^{+0.022}_{-0.06}\) \\ \hline \hline \end{tabular} \end{table} Table 1: The coordinates, Hubble classifications, inclinations, distances and metallicities of our set of galaxies. Classifications, inclinations, distances and metallicities (where present) are taken from the Dustpedia database (Clark et al., 2018). Dustpedia metallicities are presented by De Vis et al. (2019); we use their preferred ‘PG165’ calibration. We compare these to a solar metallicity of \(12+\log_{10}({\rm O/H})=8.69\pm 0.05\)(Asplund et al., 2009). ## 3 Galaxies under consideration We here briefly introduce the 8 galaxies which we consider in this paper. The coordinates, classifications, inclinations, distances and metallicities of the galaxies are listed in Table 1. The listed metallicities are taken from De Vis et al. (2019). All of the galaxies in our sample for which De Vis et al. 
(2019) list metallicities have slightly sub-solar values of 12 + log (O/H), with the largest deficit being \(-0.136^{+0.070}_{-0.071}\) in NGC 4254 (compared to the solar value of 12 + log(O/H) = 8.69 \(\pm\) 0.05; Asplund et al., 2009), equivalent to \(Z/Z_{\odot}=0.72^{+0.14}_{-0.10}\). The SCUBA-2 850 \(\mu\)m observations of the galaxies in our sample are shown in Figures 1-8, while observations made in the mid-infrared regime with _Spitzer_ and in the ultraviolet regime with GALEX are shown in Figures C1-C8 in Appendix C. ### Ngc 3034 NGC 3034 (Messier 82), shown in Figure 1, is a highly-inclined galaxy which is interacting with the neighbouring more massive spiral galaxy M81. At a distance of 3.61 Mpc (Jacobs et al., 2009), M82 is the nearest starburst galaxy to the Milky Way, and has a well-studied "superwind" emanating from the central starburst (e.g. Devine & Bally, 1999). M82, although sometimes classed as irregular, has weak bar and spiral arm features (Mayya et al., 2005), the magnetic field in which has recently been mapped using the POL-2 polarimeter on SCUBA-2 (Pattle et al., 2021), and was previously mapped with SCUPOL (Greaves et al., 2000). HARP \({}^{12}\)CO \(J=3\to 2\) observations of NGC 3034 were presented by Wilson et al. (2012). Unlike the other galaxies in our sample, M82 is saturated in the _Spitzer_ 24\(\mu\)m band, and so a star formation rate surface density map cannot be made for it. We nonetheless include it in our sample as it provides a useful point of comparison against previous observations. M82 was observed with the SCUBA camera, the predecessor to SCUBA-2, by Leeuw & Robson (2009). They measured a peak flux density of 1.4 Jy beam\({}^{-1}\), consistent with the peak flux density of 1.5 Jy/beam which we measure before CO subtraction with SCUBA-2. The SCUBA camera's 850 \(\mu\)m filter was subject to the same CO contamination effects as is that of SCUBA-2 (e.g. Meijerink et al., 2005). ### Ngc 3351 NGC3351, shown in Figure 2, is a barred spiral galaxy that displays a very young starburst population within a \(15\aas@@fstack{\prime\prime}3\times 11\aas@@fstack{\prime\prime}2\) circumnuclear ring (Alloin & Nieto, 1982). We detect but do not resolve the circumnuclear ring in our observations, but do not recover the extended structure of the galaxy. HARP \({}^{12}\)CO \(J=3\to 2\) observations of NGC 3351 were presented by (Wilson et al., 2012) and Tan et al. (2013). ### Ngc 3521 NGC 3521, shown in Figure 3 is a flocculent, weakly-barred spiral galaxy with a tightly-wound two-arm pattern (Liu et al., 2011). NGC 3521 is considered comparable to NGC 5194 (discussed below), as both are metal-rich and quiescently star-forming (Liu et al., 2011). We recover a significant amount of the extended structure of NGC 3521, although the spiral arms are not clearly visible in Figure 3. HARP \({}^{12}\)CO \(J=3\to 2\) observations of NGC 3521 were presented by (Wilson et al., 2012). ### Ngc 4254 NGC 4254 (Messier 99), shown in Figure 4, is an unbarred Virgo Cluster galaxy with a strong asymmetric spiral pattern which is clearly visible in our SCUBA-2 observations. NGC 4254 has a high star formation rate, particularly in its southern arm, and a tidal tail extending \(\sim 250\) kpc northward from the galaxy (not visible in our observations), suggesting that the galaxy is interacting with the centre of the Virgo Cluster (Haynes et al., 2007). HARP \({}^{12}\)CO \(J=3\to 2\) observations of NGC 4254 were presented by Wilson et al. (2009) and Wilson et al. (2012). 
### Ngc 4569 NGC 4569 (Messier 90), shown in Figure 5, is a weakly barred Virgo cluster galaxy which underwent a ram pressure stripping event \(\sim 300\) Myr ago, possibly due to motion relative to the intra-cluster medium (Vollmer et al., 2004). The galaxy has a nuclear outflow and a stripped tail (Boselli et al., 2016), which is not visible in the SCUBA-2 data. HARP \({}^{12}\)CO \(J=3\to 2\) observations of NGC 4569 were presented by Wilson et al. (2009) and Wilson et al. (2012). ### Ngc 4736 NGC 4736 (Messier 94), shown in Figure 6, is an actively star-forming unbarred ring galaxy. The galaxy has a weak and diffuse outer ring, which is at best marginally detected in our SCUBA-2 observations, and a bright and actively star-forming inner pseudo-ring (Wong & Blitz, 2000; Waller et al., 2001). This inner pseudo-ring, \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{2}{c}{Linear} & \multicolumn{2}{c}{RMS noise} & \multicolumn{2}{c}{Peak 850\(\mu\)m F.D.} & \multicolumn{2}{c}{Total 850\(\mu\)m F.D.} & \multicolumn{2}{c}{Net CO} \\ & resolution & \multicolumn{2}{c}{CO-sub} & \multicolumn{2}{c}{With CO CO-sub} & \multicolumn{2}{c}{With CO CO} & CO-sub & fraction \\ \cline{2-7} & kpc & mJy beam\({}^{-1}\) & mJy arcsec\({}^{-2}\) & \multicolumn{2}{c}{mJy arcsec\({}^{-2}\)} & \multicolumn{2}{c}{Jy} & \\ \hline NGC 3034 & 0.25 & 3.7 & 0.016 & 6.53 & 4.45 & 8.72 & 5.43 & 0.38 \\ NGC 3351 & 0.68 & 3.4 & 0.015 & 0.38 & 0.27 & 0.19 & 0.15 & 0.19 \\ NGC 3521 & 0.85 & 4.0 & 0.017 & 0.20 & 0.18 & 1.48 & 1.20 & 0.18 \\ NGC 4254 & 0.88 & 3.8 & 0.017 & 0.22 & 0.16 & 1.02 & 0.80 & 0.21 \\ NGC 4569 & 0.81 & 3.8 & 0.017 & 0.31 & 0.22 & 0.38 & 0.31 & 0.17 \\ NGC 4736 & 0.30 & 3.0 & 0.013 & 0.21 & 0.14 & 0.76 & 0.58 & 0.24 \\ NGC 5055 & 0.62 & 2.6 & 0.011 & 0.36 & 0.28 & 1.92 & 1.65 & 0.14 \\ NGC 5194 & 0.59 & 3.3 & 0.014 & 0.30 & 0.21 & 0.50* & 0.24* & 0.52 \\ \hline \end{tabular} \end{table} Table 3: The key properties of the SCUBA-2 850\(\mu\)m measurements of the sources in our sample. The RMS noises listed in this table are measured in the CO-subtracted maps used for analysis. *Note that these values include summation over the significant negative bowls in the interarm regions of NGC 5194. with a radius of \(\sim 47^{\prime\prime}\)(Chyzy & Buta, 2008), is clearly visible in our SCUBA-2 data. HARP \({}^{12}\)CO \(J=3\to 2\) observations of NGC 4736 were presented by (Wilson et al., 2012). ### Ngc 5055 NGC 5055 (Messier 63), shown in Figure 7, is a moderately inclined unbarred spiral galaxy with a very large, warped Hi disc (Battaglia et al., 2006), in which massive stars have recently formed (Thilker et al., 2007). The spiral structure of the galaxy is clearly visible in our SCUBA-2 observations. HARP \({}^{12}\)CO \(J=3\to 2\) observations of NGC 5055 were presented by (Wilson et al., 2012). ### Ngc 5194 NGC 5194 (Messier 51a) is a face-on two-arm grand-design spiral galaxy, interacting with its neighbour, M51b. Both are shown in Figure 8. Star formation in NGC 5194 is mostly taking place in the galactic centre and spiral arms (Schinnerer et al., 2017). CO observations of NGC 5194 show chains of GMCs emerging from the Figure 1: JCMT NGLS observations of NGC 3034. Far left: SCUBA-2 850 \(\mu\)m data. Centre left: SCUBA-2 850 \(\mu\)m data, with \({}^{12}\)CO \(J=3\to 2\) contribution subtracted. Both SCUBA-2 images are shown with square-root scaling, in units of mJy arcsec\({}^{-2}\). 
Centre right: integrated HARP \({}^{12}\)CO \(J=3\to 2\) emission, in main-beam temperature units. Far right: fraction of SCUBA-2 emission in far-left panel that arises from \({}^{12}\)CO \(J=3\to 2\) emission. In the right-hand panels, the footprint of the HARP \({}^{12}\)CO observation is outlined in black. Figure 3: JCMT NGLS observations of NGC 3521. Panels as in Figure 2. Figure 2: JCMT NGLS observations of NGC 3351. Far left: SCUBA-2 850 \(\mu\)m data. Centre left: SCUBA-2 850 \(\mu\)m data, with \({}^{12}\)CO \(J=3\to 2\) contribution subtracted. Both SCUBA-2 images are shown with linear scaling, in units of mJy arcsec\({}^{-2}\). Right-hand panels as in Figure 1. Figure 4: JCMT NGLS observations of NGC 4254. Panels as in Figure 2. Figure 5: JCMT NGLS observations of NGC 4569. Panels as in Figure 2. Figure 6: JCMT NGLS observations of NGC 4736. Panels as in Figure 2. Figure 7: JCMT NGLS observations of NGC 5055. Panels as in Figure 2. spiral arms into the interarm regions (Koda et al., 2009; Schinnerer et al., 2017), but these features are not resolved in our observations. HARP \({}^{12}\)CO \(J=3\to 2\) observations of NGC 5194 were presented by Wilson et al. (2012) and Vlahakis et al. (2013). Unlike the other galaxies in our sample, NGC 5194 was observed using the PONG 900 observing mode (Holland et al., 2013). In order to be as consistent as possible, we have applied the same data reduction parameters to all of the galaxies in our sample. However, the data reduction process which we describe in Section 2 does not appear to be optimal for NGC 5194; the SCUBA-2 images in Figure 8 show significant negative bowling in the inter-arm regions. This suggests that recoverable large-scale emission from NGC 5194 has been lost in the data reduction process, and affects our estimates of total flux density for the galaxy listed in Table 3. NGC 5194 was previously observed with SCUBA (Meijerink et al., 2005). These authors found an exponential disc in their 850\(\mu\)m observations, which we do not recover here. They measured a peak flux density \(\gtrsim 117\) mJy beam\({}^{-1}\), including a contribution from CO. We measure a peak flux density of 69 mJy beam\({}^{-1}\) before CO subtraction, again suggesting that the data reduction strategy which we employ in this work is not optimal for these observations of NGC 5194. Interestingly, after subtraction of their modelled exponential disc, Meijerink et al. (2005) find a peak flux density of \(\gtrsim 72\) mJy beam\({}^{-1}\), consistent with the peak flux density which we measure. This suggests that we may be recovering the emission from more compact structures in NGC 5194 quite well. ## 4 Comparison with molecular and atomic gas We investigated the correlation between SCUBA-2 850\(\mu\)m flux density and atomic and molecular gas column density. We measured Hi column density (\(N(\rm{H_{1}})\)) using Very Large Array (VLA) Hi observations, taken from the THINGS (Walter et al., 2008) (NGC 3351, 3521, 4736, 5055, 5194) and VIVA (Chung et al., 2009) (NGC 4254, 4569) surveys, and from de Blok et al. (2008) for NGC 3034. The VLA data were converted from Jy beam\({}^{-1}\) to brightness temperature (\(T_{B}\)) using the relation \[T_{B}=I_{\rm{H_{1}}}\times\frac{6.07\times 10^{5}}{\theta_{maj}\theta_{min}} \rm{K}\ (Jy\ beam^{-1})^{-1} \tag{2}\] (cf. Walter et al., 2008), where \(I_{\rm{H_{1}}}\) is Hi brightness and \(\theta_{maj}\) and \(\theta_{min}\) are the major and minor beam widths, respectively. 
\(N(\rm{H_{1}})\) was then calculated using the relation \[N(\rm{H_{1}})=1.823\times 10^{18}\ \rm{cm^{-2}\ K^{-1}}\times T_{B} \tag{3}\] (cf. Walter et al., 2008). We measured H\({}_{2}\) column density (\(N(\rm{H_{2}})\)) using the Nobeyama \({}^{12}\)CO 1\(\to\)0 observations from the COMING (Sorai et al., 2019) and CO ATLAS (Kuno et al., 2007) surveys which we used to define the data reduction masks as described in Section 2. \(N(\rm{H_{2}})\) was calculated using the relation \[N(\rm{H_{2}})=X_{\rm{CO}}\int T_{mb}d\nu, \tag{4}\] taking the integrated main-beam temperature maps from the COMING and CO-ATLAS databases, and using a CO X-factor of \(X_{\rm{CO}}=2\times 10^{20}\ \rm{cm^{-2}\ (K\,km\ s^{-1})^{-1}}\) (Bolatto et al., 2013). Bolatto et al. (2013) give an uncertainty on \(X_{\rm{CO}}\) of \(\pm 30\%\) in the Milky Way disc, and \(X_{\rm{CO}}\) is expected to vary further with metallicity. We assume a constant \(X_{\rm{CO}}\) in order to directly compare between \({}^{12}\)CO and SCUBA-2 850\(\mu\)m emission as molecular gas tracers for the galaxies in our sample, but note that this assumption introduces an additional uncertainty on the values of \(N(\rm{H_{2}})\) determined from \({}^{12}\)CO observations. The Nobeyama 45m \({}^{12}\)CO data have a resolution of \(17\arcsec\) (Sorai et al., 2019). The VLA data have a variety of resolutions. Where the VLA resolution was \(<17\arcsec\) we smoothed both the SCUBA-2 and the \(N(\rm{H_{2}})\) maps to \(17\arcsec\). Where the VLA resolution was \(>17\arcsec\) (NGC 3034 and NGC 4254), we smoothed the SCUBA-2 and \(N(\rm{H_{2}})\) maps to the geometric mean of the VLA beam's major and minor axes, \(34.3\arcsec\) and \(29.4\arcsec\) for NGC 3034 and NGC 4254 respectively. We corrected the derived column density for inclination angle using the inclination values given in the Dustpedia database (Clark et al., 2018) and listed in Table 1. Comparisons between SCUBA-2 850\(\mu\)m flux density and \(N(\rm{H_{1}})\), \(N(\rm{H_{2}})\) and total hydrogen column density (\(N(\rm{H_{1}})+2N(\rm{H_{2}})\)) are shown in Figures A1-A8 in Appendix A. \({}^{12}\)CO emission and 850\(\mu\)m flux density are consistently correlated with one another, suggesting that both are tracing similar material, i.e. molecular gas (e.g. Bolatto et al., 2013). We typically see no strong correlation between Hi and 850\(\mu\)m, with the possible exceptions of NGC 4569 and 4736. Generally, the less well-resolved sources show a stronger correlation between Hi and 850\(\mu\)m dust emission. This further suggests that the 850\(\mu\)m dust emission detected by SCUBA-2 is preferentially tracing the molecular gas, as Hi and 850\(\mu\)m dust emission are better-correlated where Hi and \({}^{12}\)CO emission occupy the same beam. We note that in NGC 3034 the VLA data are saturated at high column densities, so there is no overlap between the \({}^{12}\)CO and Hi measurements. In the following section, we measure \(N(\rm{H_{2}})\) from dust emission, taking the dust emission to trace the molecular gas. However, to do so, we must assume a gas-to-dust ratio, as discussed below. The gas-to-dust ratio depends on metallicity and varies somewhat between the galaxies in our sample (De Vis et al., 2019; cf. Table 1). Figure 8: JCMT NGLS observations of NGC 5194. Panels as in Figure 2. As discussed above, we expect \(X_{\rm CO}\) to vary systematically within and between galaxies in our sample. 
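Equations (2)-(4) amount to simple scalings of the archival maps. The sketch below is a minimal illustration, assuming the Hi map is supplied in Jy beam\({}^{-1}\) integrated-intensity units and the \({}^{12}\)CO map in K km s\({}^{-1}\); the function names and example numbers are illustrative, not survey products:

```python
import numpy as np

def hi_column_density(I_hi, theta_maj, theta_min):
    """N(HI) from a VLA HI map, following equations (2) and (3).

    I_hi          : HI brightness in Jy/beam (integrated-intensity units as
                    supplied by the survey archive)
    theta_maj/min : beam FWHM in arcsec
    """
    T_B = I_hi * 6.07e5 / (theta_maj * theta_min)   # equation (2)
    return 1.823e18 * T_B                           # equation (3), cm^-2

def h2_column_density(W_co, X_co=2.0e20):
    """N(H2) in cm^-2 from an integrated 12CO J=1-0 map in K km/s, equation (4)."""
    return X_co * W_co

# Example with illustrative numbers: a 10 mJy/beam HI signal in a 6"x5" beam,
# and a 20 K km/s CO line.
print(hi_column_density(0.010, 6.0, 5.0), h2_column_density(20.0))
```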
We are not able to distinguish between variation in gas-to-dust ratio and in \(X_{\rm CO}\), and so do not attempt to use the ratio between \({}^{12}\)CO emission and 850\(\mu\)m flux density to determine gas-to-dust ratios for the galaxies which we consider. We comment on the consequences of using a constant gas-to-dust ratio further below. ## 5 SED fitting We measured dust column densities, temperatures and opacity indices for the galaxies in our sample by fitting their spectral energy distributions (SEDs) using SCUBA-2 850\(\mu\)m and _Herschel_ Space Observatory data. We took _Herschel_ 70\(\mu\)m, 100\(\mu\)m (where present), 160\(\mu\)m, 250\(\mu\)m and 350\(\mu\)m data from the Dustpedia database (Clark et al., 2018). We excluded the _Herschel_ 500\(\mu\)m data as being too low-resolution (36.6''; Griffin et al., 2010), and, being away from the SED peak but shorter-wavelength than the SCUBA-2 850\(\mu\)m data, not necessary in order to produce a well-constrained fit. The lowest-resolution data set which we use is _Herschel_ 350\(\mu\)m, with a resolution of 25.2'' (Griffin et al., 2010). ### Spatial filtering SCUBA-2 is restricted in the spatial scales to which it is sensitive due to the need to distinguish between astrophysical and atmospheric signal (e.g. Chapin et al., 2013). SCUBA-2 is fundamentally insensitive to signal on scales larger than its array size (600''), but in practice the maximum size scale recovered is set by a combination of the mask used and the maximum size scale set in the reduction process, as discussed in Section 2. _Herschel_ images, having been taken above the atmosphere, are not subject to such constraints. It is therefore necessary to match the spatial scales in the Herschel and SCUBA-2 observations before comparing the data sets. We removed the large-scale structure from the _Herschel_ observations by passing them through the SCUBA-2 pipeline in the manner described by Sadavoy et al. (2013). Similarly to the method for CO subtraction, the _Herschel_ data are added to the SCUBA-2 bolometer time series, and the reduction process is repeated, including the application of the mask. In this case the _Herschel_ data are scaled to be a small positive perturbation on the SCUBA-2 signal in order to minimise the effect of the Herschel data on map convergence. The SCUBA-2 map is then subtracted from the _Herschel_+SCUBA-2 map and the scaling applied to the _Herschel_ data is reversed, leaving the spatially-filtered _Herschel_ signal. ### SED fitting We used the \(Starlink\) Kappa package (Currie et al., 2014) to convolve the 850\(\mu\)m data and all of the filtered _Herschel_ maps to 350\(\mu\)m resolution (25.2''), and to grid all of the maps to 16'' pixels. We then fitted SEDs pixel-by-pixel using the relation \[F_{\nu}=\Sigma_{\rm dust}B_{\nu}(T)\kappa_{0}\left(\frac{\nu}{\nu_{0}}\right)^ {\beta} \tag{5}\] (cf. Hildebrand, 1983), where \(\Sigma_{\rm dust}\) is surface density of dust in units of g cm\({}^{-2}\), \(B_{\nu}(T)\) is the Planck function at temperature \(T\), dust opacity \(\kappa_{0}=1.92\) cm\({}^{2}\) g\({}^{-1}\) at a reference frequency \(\nu_{0}=0.857\) THz (350\(\mu\)m) (Draine, 2003), and \(\beta\) is dust opacity index. We restricted our fitting to pixels where all the flux density values have an SNR greater than 3. We first fitted with \(\Sigma_{\rm dust}\), \(T\) and \(\beta\) as free parameters. We then repeated the fitting process, with \(\beta\) fixed at its median value from the previous fit. 
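This two-stage fit of equation (5) can be reproduced with a standard least-squares routine. The sketch below is a minimal single-pixel illustration using scipy; the band list, the synthetic flux densities, the assumed 10% errors, and the starting values are purely illustrative, not our actual fitting setup.

```python
import numpy as np
from scipy.optimize import curve_fit

KAPPA_0 = 1.92      # cm^2 g^-1 at 350 um (Draine 2003)
NU_0 = 0.857e12     # Hz, reference frequency (350 um)
H_CGS, C_CGS, K_CGS = 6.626e-27, 2.998e10, 1.381e-16

def modified_blackbody(nu, sigma_dust, T, beta):
    """Equation (5): F_nu = Sigma_dust * B_nu(T) * kappa_0 * (nu / nu_0)**beta.

    sigma_dust is in g cm^-2; B_nu is evaluated in cgs units, so only the
    relative shape of F_nu matters for this illustration.
    """
    b_nu = 2.0 * H_CGS * nu**3 / C_CGS**2 / np.expm1(H_CGS * nu / (K_CGS * T))
    return sigma_dust * b_nu * KAPPA_0 * (nu / NU_0) ** beta

# One pixel's SED at the 70-850 um band frequencies used in the text.
wavelengths_um = np.array([70.0, 160.0, 250.0, 350.0, 850.0])
nu = 2.998e14 / wavelengths_um                      # Hz
flux = modified_blackbody(nu, 1.0e-4, 20.0, 2.0)    # synthetic, noise-free SED
flux_err = 0.1 * flux                               # illustrative 10% errors

popt, pcov = curve_fit(modified_blackbody, nu, flux, p0=[5.0e-5, 15.0, 1.5],
                       sigma=flux_err, absolute_sigma=True)
sigma_dust_fit, T_fit, beta_fit = popt              # recovers ~(1e-4, 20 K, 2.0)
```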
In both cases, we rejected any pixels where the fitted values of any of \(\Sigma_{\rm dust}\), \(T\) or \(\beta\) have uncertainties of \(>50\%\). This criterion only excluded a significant number of pixels in the \(\beta\)-free fit for NGC 3034. We converted \(\Sigma_{\rm dust}\) to molecular hydrogen column density (\(N(\rm H_{2})\)) using the relation \[N(\rm H_{2})=100\times\frac{\Sigma_{\rm dust}}{\mu m_{\rm H}}, \tag{6}\] where mean molecular mass is taken to be \(\mu=2.8\), \(m_{\rm H}\) is the mass of hydrogen and the gas-to-dust ratio is taken to be 100. We note that the galaxies in our sample have metallicities similar to, but slightly less than, the solar value (cf. Table 1), and so these values of \(N(\rm H_{2})\) may be slightly underestimated. Moreover, the value of the dust-to-gas ratio may vary significantly within galaxies (e.g. Williams et al., 2018), creating a further systematic uncertainty on \(N(\rm H_{2})\). The values of \(N(\rm H_{2})\) determined using this equation are not yet corrected for the effect of inclination angle. We show maps of best-fit \(N(\rm H_{2})\), \(T\), \(\beta\), their uncertainties and reduced \(\chi^{2}\) values for each galaxy in Appendix B. The mean, median, maximum and minimum values for each of the fitted parameters are listed in Table 6 for the \(\beta\)-free case, and in Table 7 for the median-\(\beta\) case. \(N(\rm H_{2})\) values are at this stage shown without correction for inclination angle in order to show the fitted values. The median value of \(\beta\) varies in the range 1.63 - 2.26. The mean and median values of the fitted parameters are similar, except in the case of \(\Sigma_{dust}\) for NGC 3034 and, to a lesser extent, NGC 5194, where the mean value is significantly higher than the median. This is due to the large dynamic range in the NGC 3034 and NGC 5194 observations. The values of \(\Sigma_{dust}\) and \(T\) are similar in the \(\beta\)-free and median-\(\beta\) cases, suggesting that the variation in \(\beta\) within the galaxies is not very large. ### SCUBA-2 flux loss We compared the total flux density in the Herschel 250\(\mu\)m maps before and after they were passed through the SCUBA-2 pipeline in order to assess the likely amount of large-scale structure lost in the SCUBA-2 data reduction process. The 250\(\mu\)m data were chosen as having the resolution most similar to the SCUBA-2 850\(\mu\)m data: 18.1'' (Griffin et al., 2010) compared to 14.1'' for SCUBA-2. The total flux densities before and after filtering are listed in Table 4. The mean fraction of flux lost in the filtering process is \(0.58\pm 0.15\), and the median is 0.59. However, this flux is not lost evenly across the map, with small-scale structures being recovered well, and extended, diffuse emission being lost. The best-recovered galaxy is the bright and relatively compact NGC 3034, where only 32% of the Herschel 250\(\mu\)m flux is lost. The worst-recovered is NGC 3351, only the central region of which is visible in the SCUBA-2 image (see Figure 2). In this case, 85% of the Herschel flux density - mostly associated with the galactic disc - is lost. Despite this, the compact and strongly-peaked central region is recovered quite well. 
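Both equation (6) and the flux-loss comparison just described reduce to one-line calculations; a minimal sketch (the constants follow the text, and the example flux densities for NGC 3034 are the values quoted in Table 4):

```python
M_H = 1.6726e-24          # mass of hydrogen in g
MU = 2.8                  # mean molecular mass, as in the text
GAS_TO_DUST = 100.0

def n_h2_from_dust(sigma_dust):
    """Equation (6): N(H2) in cm^-2 from a dust surface density in g cm^-2."""
    return GAS_TO_DUST * sigma_dust / (MU * M_H)

def fraction_lost(flux_original, flux_filtered):
    """Fraction of Herschel 250um flux removed by the SCUBA-2 spatial filter."""
    return 1.0 - flux_filtered / flux_original

# NGC 3034 250um flux densities from Table 4 (Jy): 451.0 before, 305.8 after filtering.
print(round(fraction_lost(451.0, 305.8), 2))  # -> 0.32
```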
We note again that we have optimised the SCUBA-2 DR process for image fidelity, rather than for large-scale structure recovery, as described in Section 2. An alternative data reduction scheme might be able to recover a larger fraction of the large-scale emission. However, these results show that SCUBA-2 will inherently be insensitive to a significant fraction of the submillimetre flux density from well-resolved galaxies such as those in our sample. In subsequent analysis, in order to evaluate the effects of the SCUBA-2 flux loss, we use measures of \(\Sigma_{dust}\) and \(N\)(H\({}_{2}\)) determined in two ways: (1) from SED fitting, and therefore subject to flux loss due to atmospheric filtering; and (2) from _Herschel_ SPIRE 250\(\mu\)m flux density using equation 5, using the \(T\) and \(\beta\) values from SED fitting, and therefore calculated from the full column of emission. These measures of \(\Sigma_{dust}\) and \(N\)(H\({}_{2}\)) are summarised in Table 5 and discussed further below. ## 6 Star formation law We measured surface density of star formation (\(\Sigma_{\rm SFR}\)) for the galaxies in our sample, in order to investigate the relationship between \(\Sigma_{\rm SFR}\) and \(\Sigma_{gas}\) both within and between the galaxies. ### Creating star formation surface density maps We measured the star formation rate in the galaxies in our sample using the method described by Leroy et al. (2008). We measured unobscured star formation using GALEX far-ultraviolet (FUV) emission (\(I_{\rm FUV}\)) and obscured star formation using _Spitzer_ Space Telescope 24\(\mu\)m emission (\(I_{24}\)). The older stellar population was accounted for using _Spitzer_ 3.6\(\mu\)m emission (\(I_{3.6}\)). All of these maps were taken from the Dustpedia database (Clark et al. 2018). NGC 3034 is saturated in the _Spitzer_ 24\(\mu\)m band, and so is excluded from further analysis. The _Spitzer_ and GALEX observations are shown in Figures C1-C8 in Appendix C. We smoothed all of the data sets to 25.2'' to match the resolution of the SED-fitted maps, and again gridded the maps to 16'' pixels. Any pixels with significant GALEX near-ultraviolet (NUV) emission, i.e. where \(I_{\rm NUV}/I_{\rm FUV}>15\), were excluded from the analysis, in order to avoid contamination by foreground stars (Leroy et al. 2008). We then estimated the FUV flux density associated with star formation (\(I_{\rm FUV,SF}\)) using the relation \[I_{\rm FUV,SF}=I_{\rm FUV}-\alpha_{\rm FUV}I_{3.6}, \tag{7}\] taking \(\alpha_{\rm FUV}=3\times 10^{-3}\), and the 24\(\mu\)m flux density associated with star formation (\(I_{\rm 24,SF}\)) using the relation \[I_{\rm 24,SF}=I_{\rm 24}-\alpha_{\rm 24}I_{3.6}, \tag{8}\] taking \(\alpha_{\rm 24}=0.1\) (Leroy et al. 2008). We then estimated the surface density of star formation to be \[\Sigma_{\rm SFR}=8.1\times 10^{-2}I_{\rm FUV,SF}+3.2\times 10^{-3}I_{\rm 24, SF}. \tag{9}\] By using these values, we have implicitly assumed a Chabrier (2003) Initial Mass Function (IMF) (Leroy et al. 2008). We assumed calibration uncertainties of 4.5% on GALEX FUV, 5% on _Spitzer_ 24\(\mu\)m and 3% on _Spitzer_ 3.6\(\mu\)m measurements, as per Clark et al. (2018). We assumed a 33% uncertainty on the \(I_{\rm 24,SF}\) term in equation 9, following Leroy et al. (2008) and Ford et al. 
(2013). The final \(\Sigma_{\rm SFR}\) maps are shown in Appendix D. These maps are expected to trace star formation on timescales \(<100\) Myr (Leroy et al. 2008). The mean age of the stellar population contributing to the unobscured star formation rate is \(\sim 10\) Myr, while the stellar population contributing to the obscured star formation rate has a mean age of \(\sim 5\) Myr (Kennicutt & Evans 2012). 90% of the stellar population contributing to each measure is expected to have an age \(<100\) Myr (Kennicutt & Evans 2012). We note that in every case, the surface density of obscured (24\(\mu\)m-traced) star formation is greater than that of unobscured (FUV-traced) star formation. The mean contributions from obscured and unobscured star formation to \(\Sigma_{\rm SFR}\) are shown in Figure 9. This suggests that the star formation being traced may be preferentially occurring on timescales as short as \(\sim 5\) Myr (Rieke et al. 2009; Kennicutt & Evans 2012). ### Resolved Kennicutt-Schmidt Law For each of the galaxies in our sample, we converted \(\Sigma_{\rm dust}\) to \(\Sigma_{\rm H_{2}}\), again assuming a gas-to-dust ratio of 100. We considered values of \(\Sigma_{\rm dust}\) derived (1) from SED fitting of the SCUBA-2 and filtered Herschel data, (a) with \(\beta\) as a free parameter (suffixed 'SED,\(\beta\)-free') and (b) with \(\beta\) fixed at the median value from the free-parameter fit (suffixed 'SED,median-\(\beta\)'), and (2) from _Herschel_ 250\(\mu\)m flux density using equation 5, taking \(T\) and \(\beta\) from the SED-fitted values (a) with \(\beta\) as a free parameter (suffixed '250,\(\beta\)-free') and (b) with \(\beta\) fixed at its median value (suffixed '250,median-\(\beta\)'). These permutations are summarised in Table 5. We note that the applicability of these \(T\) and \(\beta\) values derived from SED fitting to the filtered data to the extended emission which is not seen by SCUBA-2 is not certain, particularly as _Herschel_ is likely to be tracing a slightly warmer dust population. The values of \(\Sigma_{\rm H_{2}}\) determined from the _Herschel_ 250\(\mu\)m data may broadly represent upper limits on the true values, while the SED-fitted values are likely to be underestimates. Uncertainties on values of \(\Sigma_{\rm H_{2}}\) determined using method (1) were derived from the uncertainties returned by the SED fitting process. For method (2), uncertainties on values of \(\Sigma_{\rm H_{2}}\) were determined assuming that \begin{table} \begin{tabular}{c c c c} \hline \hline Galaxy & \multicolumn{2}{c}{250\(\mu\)m Flux Density} & Fraction lost \\ \cline{2-3} & Original & Filtered & \\ \cline{2-3} & \multicolumn{2}{c}{(Jy)} & \\ \hline NGC 3034 & 451.0 & 305.8 & 0.32 \\ NGC 3351 & 27.0 & 5.5 & 0.85 \\ NGC 3521 & 104.1 & 58.3 & 0.56 \\ NGC 4254 & 69.1 & 27.3 & 0.56 \\ NGC 4569 & 41.0 & 13.6 & 0.47 \\ NGC 4736 & 59.1 & 22.5 & 0.65 \\ NGC 5055 & 134.4 & 52.3 & 0.63 \\ NGC 5194 & 183.5 & 82.2 & 0.61 \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison of Herschel 250\(\mu\)m flux densities before and after filtering. Figure 9: Mean contributions to \(\Sigma_{\rm SFR}\) from obscured (24\(\mu\)m-traced) and unobscured (FUV-traced) star formation for the galaxies in our sample. Dotted grey line marks the 1:1 relationship. the uncertainty on the Herschel 250\(\mu\)m flux density is given by the 10% _Herschel_ SPIRE calibration uncertainty (Griffin et al., 2010), and propagating the uncertainties on \(T\) and \(\beta\) returned by the SED fitting process. 
For consistency with method (1), any pixels with an uncertainty on \(\Sigma_{\rm H_{2}}\) of \(>50\%\) were rejected. Our assumption of a constant dust-to-gas ratio adds a further source of uncertainty which is difficult to quantify, but we note that within any given galaxy, this systematic uncertainty ought to affect the values of \(\Sigma_{\rm H_{2}}\) returned by methods (1) and (2) consistently. In order to directly compare between the galaxies in our sample, we corrected both \(\Sigma_{\rm H_{2}}\) and \(\Sigma_{\rm SFR}\) for inclination. Figure 10: Resolved star formation law plots for the galaxies in our sample. Blue circles mark values of \(\Sigma_{\rm H_{2}}\) derived from SED-fitted values of \(\Sigma_{dust}\) with \(\beta\) as a free parameter. Green diamonds: as blue circles but with \(\beta\) fixed at its median value. Red pentagons mark values of \(\Sigma_{\rm H_{2}}\) derived from 250\(\mu\)m dust emission, using values of \(T\) and \(\beta\) from SED fitting with \(\beta\) as a free parameter. Purple pluses: as red pentagons but with \(\beta\) fixed at its median value. In each case the data points are fitted with the function \(\log_{10}\Sigma_{\rm SFR}=N\log_{10}\Sigma_{\rm gas}+C\); best-fit lines are plotted in each panel. Best-fit relationships and correlation coefficients are listed in Table 8. All values are corrected for inclination. The uncertainties on our values of \(\Sigma_{\rm SFR}\) are typically smaller than those on \(\Sigma_{\rm H_{2}}\), and not subject to the uncertainty on gas-to-dust ratio which affects the \(\Sigma_{\rm H_{2}}\) measurements. ### Unresolved Kennicutt-Schmidt Law We measured the relationship between the global values of \(\Sigma_{\rm H_{2}}\) and \(\Sigma_{\rm SFR}\) for the galaxies in our sample. To do so, we took the mean value of \(\Sigma_{\rm H_{2}}\) and \(\Sigma_{\rm SFR}\) for each galaxy. We are thus to some extent measuring luminosity-weighted values, although our use of SED-fitted values of \(T\) and \(\beta\) in our measurements of \(\Sigma_{\rm H_{2}}\) should mitigate against this effect. This is intended to be analogous to observing the galaxies as unresolved sources. However, the SED-fit values of \(\Sigma_{\rm H_{2}}\) remain subject to loss of extended emission: if we were truly observing unresolved sources with SCUBA-2, the full column of submillimetre emission would be accounted for. The mean values that we measure are shown in Figure 11. We see a positively correlated and broadly linear relationship between \(\log_{10}(\Sigma_{\rm H_{2}})\) and \(\log_{10}(\Sigma_{\rm SFR})\), albeit with considerable scatter, for all four of the methods of measuring \(\Sigma_{\rm H_{2}}\) which we consider. The mean values of \(\Sigma_{\rm H_{2}}\) are consistently lower for the SED-fit cases than for the 250\(\mu\)m-derived cases, as would be expected due to the loss of large-scale emission in the SCUBA-2 data. Allowing \(\beta\) to vary or fixing it at its median value produces little difference in the mean values of \(\Sigma_{\rm H_{2}}\). The best-fit indices are \(N_{\rm SED,\beta-free}=2.1\pm 1.1\), \(N_{\rm SED,median-\beta}=2.3\pm 1.0\), \(N_{\rm 250\mu m,\beta-free}=1.5\pm 1.1\), and \(N_{\rm 250\mu m,median-\beta}=1.9\pm 1.1\). These values are steeper than those seen within the galaxies, and are consistent both with one another and with the KS value of \(N=1.4\) within their large uncertainties. The best-fit relationships are plotted on Figure 11. 
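Both the resolved and the galaxy-averaged star formation laws are obtained from linear fits in log-log space. The sketch below shows the form of such a fit; the input surface densities are illustrative numbers, not our measured values.

```python
import numpy as np
from scipy.optimize import curve_fit

def ks_law(log_sigma_gas, N, C):
    """log10(Sigma_SFR) = N * log10(Sigma_gas) + C."""
    return N * log_sigma_gas + C

# Illustrative galaxy-averaged values (Msun pc^-2 and Msun yr^-1 kpc^-2).
sigma_h2 = np.array([8.0, 15.0, 30.0, 60.0, 120.0])
sigma_sfr = np.array([4e-3, 9e-3, 2.5e-2, 6e-2, 1.5e-1])

popt, pcov = curve_fit(ks_law, np.log10(sigma_h2), np.log10(sigma_sfr))
N_fit, C_fit = popt
N_err, C_err = np.sqrt(np.diag(pcov))
print(f"N = {N_fit:.2f} +/- {N_err:.2f}, C = {C_fit:.2f} +/- {C_err:.2f}")
```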
## 7 Discussion ### Resolved KS Law The fact that the values of \(N\) for the SED-fit cases are consistently shallower than the 250\(\mu\)m-derived values indicates that the atmospheric filtering to which SCUBA-2 is subject results in the loss of a higher fraction of emission in low-surface-brightness regions. This is as expected, as it is extended emission that is preferentially lost in the filtering process, as discussed in Section 2. Figure 10 shows good agreement between the SED-fit and 250\(\mu\)m-derived values of \(\Sigma_{\rm H_{2}}\) in the highest-density regions. If the sub-linear values of \(N\) recovered from the SED-fit cases were physical, it would suggest that star formation becomes less efficient at high gas densities (e.g. Ford et al., 2013). It appears likely that the shallow indices seen in the SED-fit data typically result from spatially-filtered dust emission data being compared to \(\Sigma_{\rm SFR}\) maps derived from GALEX and _Spitzer_ data, the extended components of which have not been correspondingly lost to atmospheric filtering. Nonetheless, the SED-fit data are well-correlated with \(\Sigma_{\rm SFR}\), indicating that the SCUBA-2 850\(\mu\)m emission is a good tracer of star formation despite missing some fraction of the extended dust emission. The \(\Sigma_{\rm SFR}\) maps constructed from GALEX and _Spitzer_ data trace star formation on timescales \(\sim 5-100\) Myr (Leroy et al., 2008; Kennicutt & Evans, 2012, and refs. therein). Therefore the 250\(\mu\)m-derived relationship between \(\Sigma_{\rm H_{2}}\) and \(\Sigma_{\rm SFR}\) is more accurate than the SED-fit relationship only if the gas traced by _Herschel_ but not by SCUBA-2 arises from molecular clouds which will form stars in the next \(\sim 5-100\) Myr. Observations of local star-forming regions have shown that in resolved observations of molecular clouds SCUBA-2 effectively selects for dense and gravitationally bound material which is likely to form stars in the immediate future (Ward-Thompson et al., 2016). While each SCUBA-2 beam in our observations will integrate over many star-forming molecular cloud complexes, the strong correlation between the SED-fit \(\Sigma_{\rm H_{2}}\) values and \(\Sigma_{\rm SFR}\) nonetheless suggests that a gas density peak sufficiently strong to be traceable by SCUBA-2 may be required for significant amounts of star formation to occur. Figure 11: Mean values of \(\Sigma_{\rm H_{2}}\) and \(\Sigma_{\rm SFR}\) for the galaxies in our sample. Lines of best-fit are plotted. Symbols and colour coding as in Figure 10. 
\begin{table} \begin{tabular}{l c c c c c c c c c c c} \hline & \multicolumn{4}{c}{SED-fit} & \multicolumn{4}{c}{H250} \\ \cline{2-11} & \multicolumn{2}{c}{\(\beta\) free} & \multicolumn{2}{c}{median \(\beta\)} & \multicolumn{2}{c}{\(\beta\) free} & \multicolumn{2}{c}{median \(\beta\)} & \multicolumn{2}{c}{median \(\beta\)} \\ \cline{2-11} Galaxy & \(N\) & \(C\) & \(r^{2}\) & \(N\) & \(C\) & \(r^{2}\) & \(N\) & \(C\) & \(r^{2}\) & \(N\) & \(C\) & \(r^{2}\) \\ \hline NGC 3351 & 0.60\(\pm\)0.04 & \(-\)1.84\(\pm\)0.06 & 0.991 & 0.77\(\pm\)0.19 & \(-\)2.10\(\pm\)0.30 & 0.887 & 0.97\(\pm\)0.15 & \(-\)2.49\(\pm\)0.25 & 0.952 & 1.49\(\pm\)0.28 & \(-\)3.35\(\pm\)0.45 & 0.935 \\ NGC 3521 & 0.75\(\pm\)0.02 & \(-\)2.59\(\pm\)0.03 & 0.958 & 0.71\(\pm\)0.05 & \(-\)2.54\(\pm\)0.07 & 0.794 & 1.06\(\pm\)0.04 & \(-\)3.22\(\pm\)0.06 & 0.947 & 1.10\(\pm\)0.07 & \(-\)3.31\(\pm\)0.12 & 0.810 \\ NGC 4254 & 0.58\(\pm\)0.04 & \(-\)2.58\(\pm\)0.05 & 0.836 & 0.57\(\pm\)0.03 & \(-\)2.57\(\pm\)0.04 & 0.869 & 0.68\(\pm\)0.08 & \(-\)2.87\(\pm\)0.14 & 0.709 & 0.89\(\pm\)0.05 & \(-\)3.24\(\pm\)0.08 & 0.850 \\ NGC 4569 & 1.20\(\pm\)0.26 & \(-\)3.76\(\pm\)0.27 & 0.508 & 1.18\(\pm\)0.21 & \(-\)3.72\(\pm\)0.22 & 0.605 & 1.51\(\pm\)0.47 & \(-\)4.26\(\pm\)0.56 & 0.358 & 1.45\(\pm\)0.29 & \(-\)4.16\(\pm\)0.34 & 0.542 \\ NGC 4736 & 0.69\(\pm\)0.08 & \(-\)1.95\(\pm\)0.10 & 0.674 & 0.63\(\pm\)0.09 & \(-\)1.87\(\pm\)0.11 & 0.567 & 1.08\(\pm\)0.15 & \(-\)2.63\(\pm\)0.21 & 0.605 & 1.03\(\pm\)0.16 & \(-\)2.55\(\pm\)0.22 & 0.521 \\ NGC 5055 & 0.38\(\pm\)0.01 & \(-\)2.07\(\pm\)0.02 & 0.881 & 0.41\(\pm\)0.02 & \(-\)2.10\(\pm\)0.02 & 0.855 & 0.62\(\pm\)0.02 & \(-\)2.53\(\pm\)0.03 & 0.892 & 0.63\(\pm\)0.02 & \(-\)2.54\(\pm\)0.03 & 0.861 \\ NGC 5194 & 0.66\(\pm\)0.06 & \(-\)2.73\(\pm\)0.08 & 0.648 & 0.84\(\pm\)0.08 & \(-\)2.94\(\pm\)0.11 & 0.626 & 0.91\(\pm\)0.09 & \(-\)3.29\(\pm\)0.14 & 0.609 & 1.01\(\pm\)0.12 & \(-\)3.46\(\pm\)0.19 & 0.527 \\ \hline \end{tabular} \end{table} Table 8: Resolved KS law fitting results for the galaxies in our sample. The function \(\log_{10}\Sigma_{\rm SFR}=N\log_{10}\Sigma_{\rm SFR}+C\) was fitted to the data shown in Figure 10. Best-fit values of \(N\) and \(C\) are listed, along with the \(r^{2}\) correlation coefficient. The best-fit models are shown in Figure 10. Most of the galaxies in our sample are, in the 250\(\mu\)m-derived case, broadly consistent with \(N\sim 1\), suggesting a linear relationship between molecular gas mass and star formation rate (e.g. Bigiel et al., 2008; Lada et al., 2012). NGC 3351 and NGC 4569 show somewhat steeper relationships, but the former includes only four pixels in the galactic centre, and the latter is relatively weakly correlated. Uniquely in our sample, NGC 5055 shows an index which is significantly \(<1\) in both the SED-fit and 250\(\mu\)m-derived cases, and has a very high correlation coefficient between \(\Sigma_{\rm H_{2}}\), and \(\Sigma_{\rm SFR}\). The indices which we derive from dust emission for the galaxies in our sample are somewhat similar to those derived from dust emission by Ford et al. (2013) for M31. Ford et al. (2013) found an index of \(0.55\pm 0.01\) in M31 from _Herschel_ dust emission, quite similar to the values which we typically measure in the SED-fit case, but shallower than the indices which we typically measure using 250\(\mu\)m-derived values. However, the Ford et al. 
(2013) value for M31 is similar to our 250\(\mu\)m-derived values for NGC 5055, \(0.62\pm 0.02\) and \(0.63\pm 0.02\), perhaps suggesting a commonality between the two galaxies. However, none of our galaxies have indices similar to that measured on kpc scales in M33 from dust emission by (Williams et al., 2018), of \(5.85\pm 2.37\). The broadly linear relationship between 250\(\mu\)m-derived \(\Sigma_{\rm H_{2}}\) and \(\Sigma_{\rm SFR}\) suggests that the molecular clouds or cloud complexes traced by _Herschel_ will go on to form stars on timescales \(\sim 5-100\) Myr. The star formation efficiency per freefall time of a molecular cloud is very low (\(\sim 1\)%; e.g., McKee & Ostriker, 2007), and so we expect only a very small fraction of the material traced by these observations to be incorporated into new stars on this timescale. However, the equally good, albeit sub-linear, correlation between the SED-fit \(\Sigma_{\rm H_{2}}\) and \(\Sigma_{\rm SFR}\) suggests that SCUBA-2 is also a good tracer of star formation. The fact that 250\(\mu\)m-derived \(\Sigma_{\rm H_{2}}\) and SED-fit \(\Sigma_{\rm H_{2}}\) are equally well-correlated with \(\Sigma_{\rm SFR}\) suggests that the relatively diffuse material traced by _Herschel_ but not by SCUBA-2 is (a) not forming stars independently of the material traced by SCUBA-2 and (b) associated with the same star-forming peaks in column density which are detected by SCUBA-2. We suggest that SCUBA-2 is preferentially detecting the denser material within these molecular clouds. We hypothesise that this denser material may be likely to form stars on a timescale shorter than that of the material exclusively traced by _Herschel_. ### Unresolved KS law Figure 11 shows that the best-fit index for the star formation law over the mean values (\(\Sigma_{\rm H_{2}}\)) and (\(\Sigma_{\rm SFR}\)) of our sample of galaxies is steeper, and more comparable to the standard KS value of \(N=1.4\), than are those in the individual galaxies shown in Figure 10, although the uncertainties on the fits are sufficiently large that the comparison may not be meaningful. The fitted values are also broadly consistent with linearity, as would be expected for tracers of molecular gas (Gao & Solomon, 2004). The presence or absence of the full column of dust emission changes only the offset of the slope and not the slope itself, again to within the very large uncertainties on the fit. This again suggests that the presence of a SCUBA-2 detection is a good predictor of star formation. We emphasise that the loss of the contribution from extended emission in the SCUBA-2 data affects these results because we are averaging over observations of resolved sources; if we were observing more distant unresolved galaxies, the full column of submillimetre emission would be accounted for. We note that in this paper we have assumed a single gas-to-dust ratio and mean molecular mass for all of the galaxies in our sample. Some amount of the scatter in (\(\Sigma_{\rm H_{2}}\)) seen in Figure 11 could thus be caused by systematic errors in the conversion from dust to gas mass. However, the variations in metallicity in our sample are not large (see Table 1), suggesting that not all of the observed scatter is due to errors in in this conversion. We compared our results to archival measurements of a range of galaxies in Figure 12. Our galaxies correspond well to the normal/irregular galaxies presented by Gavazzi et al. (2003), James et al. (2004), Hameed & Devereux (2005) and Kennicutt et al. (2008). 
This is as expected, as all of the galaxies for which we have calculated \(\Sigma_{\rm SFR}\) are non-starburst spirals. The SED-fit values - in which the extended component of the SFR tracers is included but that of the dust emission is not - correspond better to the metal-poor archival galaxies (\(Z/Z_{\odot}<0.3\)) than to the 'normal' sample. Although the galaxies which we consider have sub-solar metallicities, the lowest-metallicity galaxy, NGC 4254, has \(Z/Z_{\odot}=0.72\) (De Vis et al., 2019). Using the 250\(\mu\)m-derived data puts the galaxies in our sample in the midst of the normal galaxies. This is as expected because, as discussed above, if dust continuum emission is lost to atmospheric filtering, the dust mass derived for the galaxy will be systematically lowered, and the galaxy will artificially appear metal-poor.

Figure 12: The mean values of \(\Sigma_{\rm H_{2}}\) and \(\Sigma_{\rm SFR}\) for the galaxies in our sample, compared to archival measurements. The grey circles mark normal or irregular galaxies presented by Gavazzi et al. (2003), James et al. (2004), Hameed & Devereux (2005) and Kennicutt et al. (2008). Grey crosses mark the low-surface-brightness subset of these galaxies. Grey squares mark infrared-selected starburst galaxies presented by Scoville et al. (2000) and Dopita et al. (2002). Grey diamonds mark the circumnuclear starbursts presented by Kormendy & Kennicutt (2004). Grey stars mark any low-metallicity galaxies in the above samples, i.e., those where \(Z<0.3\,Z_{\odot}\).

Feathering of JCMT data with lower-resolution _Planck_ or _Herschel_ data (Smith et al., 2021) could aid in direct comparison of the cold dust and star formation tracers.

## 8 Summary

In this paper we have presented 850\(\mu\)m dust continuum observations of 8 nearby galaxies made with the SCUBA-2 camera on the James Clerk Maxwell Telescope (JCMT) as part of the JCMT Nearby Galaxies Legacy Survey (NGLS). The galaxies which we present are NGC 3034, NGC 3351, NGC 3521, NGC 4254, NGC 4569, NGC 4736, NGC 5055 and NGC 5194. These galaxies were selected for their high surface brightness and the presence of ancillary data, and are not a representative sample of local galaxies. We find that there is a significant contribution from the \({}^{12}\)CO \(J=3\to 2\) line in the SCUBA-2 850\(\mu\)m observations of all of the galaxies in our sample, typically \(\sim 20\)%, but higher for NGC 3034 and NGC 5194. We corrected the SCUBA-2 maps for this CO contamination using NGLS HARP \({}^{12}\)CO \(J=3\to 2\) observations. Comparison of our observations to VLA H I and Nobeyama 45m \({}^{12}\)CO \(J=1\to 0\) observations shows that SCUBA-2 850\(\mu\)m emission is correlated with \({}^{12}\)CO emission, suggesting that the dust emission detected by SCUBA-2 is tracing molecular hydrogen gas. SCUBA-2 850\(\mu\)m emission is not well-correlated with atomic hydrogen emission. We fitted spectral energy distributions to each of the galaxies in our sample in order to measure surface dust mass, dust temperature and dust opacity index, using our SCUBA-2 850\(\mu\)m data and archival _Herschel_ Space Observatory observations. To do this, we filtered the _Herschel_ data to match the spatial scales present in the SCUBA-2 data. For our chosen SCUBA-2 data reduction scheme, a large fraction of the _Herschel_ emission (a mean of 58%) was lost, principally the extended, low-surface-brightness component.
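For readers who want to see the kind of pixel-by-pixel SED fit summarised above in code form, here is a minimal, illustrative sketch only (it is not the paper's pipeline); the band set, the `log_scale` parametrisation and the synthetic fluxes are assumptions made for this example.

```python
import numpy as np
from scipy.optimize import curve_fit

H_PLANCK = 6.626e-34       # J s
C_LIGHT = 2.998e8          # m s^-1
K_B = 1.381e-23            # J K^-1
NU_250 = C_LIGHT / 250e-6  # reference frequency at 250 micron

def planck_nu(nu, temp):
    """Planck function B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    return 2.0 * H_PLANCK * nu**3 / C_LIGHT**2 / np.expm1(H_PLANCK * nu / (K_B * temp))

def modified_blackbody(nu, log_scale, temp, beta):
    """Optically thin modified blackbody: 10**log_scale * (nu/NU_250)**beta * B_nu(T).
    log_scale absorbs dust surface density, opacity normalisation and solid angle."""
    return 10.0**log_scale * (nu / NU_250)**beta * planck_nu(nu, temp)

# Bands roughly matching filtered Herschel photometry plus SCUBA-2 850um.
wavelengths_um = np.array([160.0, 250.0, 350.0, 500.0, 850.0])
nu = C_LIGHT / (wavelengths_um * 1e-6)

# Synthetic fluxes generated from the model itself (T = 20 K, beta = 1.8),
# standing in for real pixel photometry so the example is self-contained.
flux = modified_blackbody(nu, 14.0, 20.0, 1.8)

# "beta free" fit for the temperature and opacity index.
popt, _ = curve_fit(modified_blackbody, nu, flux, p0=[13.0, 15.0, 2.0])
log_scale, temp, beta = popt
print(f"T = {temp:.1f} K, beta = {beta:.2f}")
```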
We constructed resolved and unresolved star formation law plots for 7 of the galaxies in our sample, using archival _Spitzer_ and GALEX data to measure surface density of star formation. In the resolved case, we found that comparing surface density of star formation rate (\(\Sigma_{\rm SFR}\)) to SED-fit-derived (i.e. subject to atmospheric filtering) values of H\({}_{2}\) surface density (\(\Sigma_{\rm H_{2}}\)) typically produces sublinear star formation law indices, while comparing to _Herschel_ 250\(\mu\)m-derived values typically produces indices which are broadly linear. The exceptions to this are the poorly-fitted NGC 4569, which is significantly superlinear in both cases, and the well-fitted NGC 5055, which is significantly sublinear in both cases. The _Herschel_ 250\(\mu\)m-derived star formation law index for NGC 5055 is similar to that found in M31 by Ford et al. (2013), suggesting a commonality between the two galaxies. In the unresolved case, we found that comparing the mean values of \(\Sigma_{\rm SFR}\) and \(\Sigma_{\rm H_{2}}\) of the galaxies in our sample returns star formation law indices which are broadly consistent with both the Kennicutt-Schmidt value of 1.4 and linearity, within the large error bars on the best-fit indices. The loss of large-scale emission in the SCUBA-2 data changes the offset, but not the measured index, of the star formation law measured across the galaxies in our sample. The galaxies which we consider have mean \(\Sigma_{\rm SFR}\) and \(\Sigma_{\rm H_{2}}\) values consistent with their being 'normal' spiral galaxies, when compared to archival measurements. We find that SCUBA-2 emission is very well-correlated with star formation, but that SCUBA-2 cannot capture the extended dust emission component of the galaxies in our sample. We suggest that _Herschel_ emission traces material in molecular clouds which will form stars on timescales comparable to the star formation timescale traced by GALEX and _Spitzer_ data, while SCUBA-2 preferentially traces the densest gas within these clouds, which we hypothesise may form stars on a shorter timescale. ## Acknowledgements K.P. is a Royal Society University Research Fellow, supported by grant number URFR1211322. The research of C.D.W. is supported by grants from the Natural Sciences and Engineering Research Council of Canada and the Canada Research Chairs program. The James Clerk Maxwell Telescope is operated by the East Asian Observatory on behalf of The National Astronomical Observatory of Japan; Academia Sinica Institute of Astronomy and Astrophysics; the Korea Astronomy and Space Science Institute; Center for Astronomical Mega-Science (as well as the National Key R&D Program of China with No. 2017YFA0402700). Additional funding support is provided by the Science and Technology Facilities Council (STFC) of the United Kingdom (UK) and participating universities in the UK, Canada and Ireland. The JCMT has historically been operated by the Joint Astronomy Centre on behalf of the STFC of the UK, the National Research Council of Canada and the Netherlands Organisation for Scientific Research. Additional funds for the construction of SCUBA-2 were provided by the Canada Foundation for Innovation. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. 
## Data Availability The raw SCUBA-2 data used in this paper are available in the JCMT archive at the Canadian Astronomy Data Centre under project code MJLSN07. The reduced SCUBA-2 data presented in this paper are available at [https://dx.doi.org/10.11570/23.0007](https://dx.doi.org/10.11570/23.0007).
Using the SCUBA-2 camera on the James Clerk Maxwell Telescope (JCMT), we carried out 850μm observations of a sample of 8 nearby spiral galaxies as part of the JCMT Nearby Galaxies Legacy Survey (NGLS). We corrected the SCUBA-2 850μm data for the contribution of the $^{12}$CO $J=3\to 2$ line using NGLS HARP data; the resulting CO contribution is roughly 20%. From spectral energy distributions we measured the dust column density, temperature and opacity index using SCUBA-2 and archival Herschel observations, and we constructed maps of the star formation surface density ($\Sigma_{\rm SFR}$) using archival GALEX and Spitzer data. Typically, the H$_2$ surface density derived from SCUBA-2
2306.17626
Design of Induction Machines using Reinforcement Learning
The design of induction machines is a challenging task due to different electromagnetic and thermal constraints. Quick estimation of a machine's dimensions is important in the sales tool to provide quick quotations to customers based on specific requirements. The key part of this process is to select different design parameters like length, diameter, tooth tip height and winding turns to achieve a certain torque, current and temperature of the machine. Electrical machine designers, with their experience, know how to alter different machine design parameters to achieve customer-specific operation requirements. We propose a reinforcement learning algorithm to design a customised induction motor. The neural network model is trained off-line by simulating different instances of the electrical machine design game, with a reward or penalty when a good or bad design choice is made. The results demonstrate that the suggested method automates electrical machine design without applying any human engineering knowledge.
Yasmin SarcheshmehPour, Tommi Ryyppo, Victor Mukherjee, Alex Jung
2023-06-30T12:56:31
http://arxiv.org/abs/2306.17626v1
# Design of Induction Machines using Reinforcement Learning

###### Abstract

The design of induction machines is a challenging task due to different electromagnetic and thermal constraints. Quick estimation of a machine's dimensions is important in the sales tool to provide quick quotations to customers based on specific requirements. The key part of this process is to select different design parameters like length, diameter, tooth tip height and winding turns to achieve a certain torque, current and temperature of the machine. Electrical machine designers, with their experience, know how to alter different machine design parameters to achieve customer-specific operation requirements. We propose a reinforcement learning algorithm to design a customised induction motor. The neural network model is trained off-line by simulating different instances of the electrical machine design game, with a reward or penalty when a good or bad design choice is made. The results demonstrate that the suggested method automates electrical machine design without applying any human engineering knowledge.

Design automation, Induction machines, Neural networks, Reinforcement learning.

## I Introduction

Electric motor design and dimension estimation need to be fast, especially in the sales tools of large corporations where thousands of motors are manufactured daily based on different customer requirements. This is particularly important in large-scale industry, where customers ask for quick dimensioning of machines based on their application, efficiency, load curve and operational conditions. In this case, an electric motor designer selects a machine from an existing database that closely matches the customer requirements, often referred to as the base machine, and then manually changes a few machine parameters to match the customer requirements. Normally, this is done with a deterministic approach, either manually or with an algorithm in which the machine designer increases or decreases certain machine design parameters to check whether the customer requirements are met. However, this is still a time-consuming process, and can be an impediment to giving the customer immediate feedback on machine design, price, etc. during a sales negotiation. Using a meta-heuristic optimisation algorithm to resolve the above-mentioned issue can be challenging, as in the electric machine design optimisation problem some fundamental variables have more importance than other geometrical shape variables. In [1], a reinforcement learning algorithm and an evolutionary optimisation method are proposed to tackle this issue for a linear induction motor. However, finding fundamental machine variables like pole pair number and slot number, together with stator and rotor slot shape dimensions, seems a fairly complex and time-consuming process. Also, in [1], the proposed method designs the machine from scratch, whereas in our current context the optimisation space is comparatively simpler, as the base induction machine already has most of the correct fundamental machine design variables, like pole numbers, stator and rotor slot numbers, machine diameters, etc. Due to the manufacturing limits in our case, only a few fixed diameters of stator and rotor laminations and rotor slot dimensions can be selected. Upon selecting a base machine, 3 machine design variables (length of the machine, rotor tooth tip height and number of coil turns) need to be selected automatically to meet the customer specification.
If the goal is not met, then the next base machine is selected, for example with a bigger diameter and other geometrical parameters, and the 3 machine design variables are searched again to meet the customer requirements. Reinforcement Learning (RL) methods have been applied to optimization problems arising in important application domains such as arithmetic circuits, chip placement, aerodynamic design and so on. In all these applications a deep RL algorithm is chosen to solve the corresponding optimization problem [2]. In our context, an agent plays different games of electrical machine design and at each step it checks whether the design choices have brought it closer to the objective. The usefulness of an action is quantified by some reward signal. After playing several games, the agent learns what kind of design choices, like altering length, coil turns and tooth tip height, need to be made to achieve a certain torque-speed curve, temperature rise, efficiency, weight of the machine, etc. We propose a novel design method for induction machines based on RL [3] which uses proximal policy optimisation (PPO) [4] to update the policy neural network (NN) based on the cumulative reward for every game. The developed neural network model decreases the machine design computation time to less than a minute. To the best knowledge of the authors, this is the first time that a design automation method for rotating electrical machines has been investigated using RL, Proximal Policy Optimization (PPO) and NN. This model is developed for the sales tool; however, engineers can also use it to obtain an excellent initial machine design.

## II Problem Formulation

Designing an electrical machine can be formulated as a parameter optimization problem that consists of an objective function \(F(x)\), design variables \(x\), as well as equality and inequality constraints. In our simplified electrical machine design problem, the feasibility of the machine design is checked with certain performance values, which are rotor tooth tip height, airgap flux density, current, torque, and stator temperature rise. These performance values are directly or indirectly related to customer-specific requirements and manufacturing constraints. In practice, there is a performance flag \(\in\{-1,0,1\}\) for each of these performance values that checks one characteristic of the electrical machine. For every flag, if its relevant performance value meets the requirements, then the flag value is \(0\), otherwise it is either \(+1\) or \(-1\). A machine design can be considered feasible if and only if all its flags are equal to zero. We model the above-mentioned electrical machine design problem as an RL problem. We solve this RL problem using PPO.

## III Method

RL problems can be formulated as Markov Decision Processes (MDPs), consisting of three key elements: states, actions, and rewards [2]. In this setting, each state is the concatenation of the performance flags' values and the previous action. In this paper, the action space consists of \(6\) different actions: decreasing/increasing the length, the number of coil turns, and the rotor tooth tip height of the machine. The agent (policy network) starts from an initial state \(s_{0}\), which is a base machine calculated from the user-specified requirements and the manufacturing constraints, and the final state \(s_{T}\) corresponds to a feasible electrical machine that fits the given set of requirements and constraints.
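To make this state and action encoding concrete, here is a minimal, illustrative sketch (not the authors' implementation); the flag names, the `evaluate_design` routine and the step sizes are hypothetical stand-ins for the machine-design calculation.

```python
from dataclasses import dataclass

# Performance values whose flags (in {-1, 0, +1}) define the observation.
FLAG_NAMES = ["tooth_tip_height", "airgap_flux_density", "current",
              "torque", "temperature_rise"]

# The 6 discrete actions: decrease/increase each of the 3 design variables.
ACTIONS = [("length", -1), ("length", +1),
           ("coil_turns", -1), ("coil_turns", +1),
           ("tooth_tip_height", -1), ("tooth_tip_height", +1)]

@dataclass
class DesignState:
    variables: dict            # current values of the 3 design variables
    flags: dict                # one flag in {-1, 0, +1} per performance value
    previous_action: int = -1  # index into ACTIONS, -1 before the first step

    def observation(self):
        """Concatenation of the flag values and the previous action."""
        return [self.flags[name] for name in FLAG_NAMES] + [self.previous_action]

    def is_feasible(self):
        """A design is feasible iff every performance flag is zero."""
        return all(v == 0 for v in self.flags.values())

def apply_action(state, action_idx, step_sizes, evaluate_design):
    """Apply one of the 6 actions and re-evaluate the performance flags.
    `evaluate_design` stands in for the electromagnetic/thermal calculation
    that returns a new flag dict for the updated design variables."""
    variable, direction = ACTIONS[action_idx]
    new_variables = dict(state.variables)
    new_variables[variable] += direction * step_sizes[variable]
    return DesignState(new_variables, evaluate_design(new_variables),
                       previous_action=action_idx)
```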
At each time step \(t\), the agent begins in state \(s_{t}\), takes an action \(a_{t}\) that changes the design variables, arrives at a new state \(s_{t+1}\), and receives a reward \(r_{t}\) that is calculated based on the performance values. Through repeated episodes (sequences of states, actions, and rewards), the policy network learns to take actions that maximize the cumulative reward. We use a model-free on-policy RL method called PPO [4] to update the parameters of the policy network. PPO algorithms are policy gradient methods, which implies that they search the space of policies rather than assigning values to state-action pairs. As the optimization direction is guided by cumulative rewards, the reward function is the key element of RL methods. In this paper, for designing the reward function we only consider the flags' values. Each flag's value (\(+1\) or \(-1\)) indicates whether the action should increase or decrease the corresponding performance value. Therefore, for the flag \(f_{i}\), the related reward, \(w_{f_{i}}\), is \(w_{p}>0\) (positive reward) as long as the corresponding performance value is modified in its correct direction, otherwise it is \(w_{n}<0\) (negative reward). As an example, if the \(torque\) flag is \(+1\), an action that increases the \(torque\) performance value gives a positive reward (\(w_{torque}=w_{p}\)) and an action that decreases it gives a negative reward (\(w_{torque}=w_{n}\)). For combining multiple objectives into one reward function, we take the sum of the rewards for all the flags. We continue giving these rewards and penalties until all the flags' values reach \(0\). Sometimes the agent cannot find the correct machine design because it gets stuck in loops. In order to prevent these loops, some priorities are defined among the flags. In addition, a negative penalty is given to actions that lead to a machine state which has already been visited during training. The agent also loses the game if it reaches a fixed maximum number of total steps. When the agent wins the game by getting all flags to zero, it receives a large reward.

## IV Results

Three different induction machines are selected as a case study for this work. The PPO model is trained on 75 different variations of these selected induction machines. The game is designed in such a way that for each machine an initial design \(s_{0}\) is calculated based on the power, voltage, and type of the machine. As mentioned in Section II, the observation state consists of the values of the design flags and the initial action. Then at each step \(t\), one of the \(6\) available actions is chosen. The maximum number of total steps is 300. Table I indicates the average number of steps of PPO for the selected machines. Figure 1 also represents the PPO steps for a single induction machine. In the full paper, more details on the RL implementation, hyperparameter tuning, the reward function and its influence on the game's success and failure will be discussed. Also, a more detailed analysis of the design outcomes of different machines (different power, size, etc.) and the scaling of the method to different base machines will be presented.
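A minimal sketch of the reward scheme described in Section III (illustrative only; the weights `w_p`, `w_n`, the loop penalty and the win bonus are hypothetical values, not the authors'):

```python
def step_reward(flags_before, values_before, values_after, flags_after,
                visited_state_keys, state_key,
                w_p=1.0, w_n=-1.0, revisit_penalty=-5.0, win_bonus=100.0):
    """Combine the per-flag rewards into one scalar: for every non-zero flag,
    the agent earns w_p if the corresponding performance value moved in the
    direction the flag asks for, and w_n otherwise; revisiting a known state
    is penalised and reaching the all-zero-flags goal earns a large bonus."""
    reward = 0.0
    for name, flag in flags_before.items():
        if flag == 0:
            continue
        change = values_after[name] - values_before[name]
        # flag = +1 asks for an increase of the value, flag = -1 for a decrease.
        reward += w_p if flag * change > 0 else w_n
    if state_key in visited_state_keys:
        reward += revisit_penalty
    if all(f == 0 for f in flags_after.values()):
        reward += win_bonus
    return reward
```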
The design of induction machines is a difficult task because of electromagnetic and thermal constraints. Quickly estimating a machine's dimensions is important for producing rapid quotations for customers based on their specific requirements. A key part of this process is selecting design parameters such as length, diameter, tooth tip height and the number of winding turns so as to achieve the required torque, current and temperature of the machine. Based on their experience, electrical machine designers can adjust these design parameters to realise customer-specific operation. We propose a reinforcement learning algorithm to design a customised induction motor. The algorithm trains a neural network model off-line by simulating different instances of the electrical machine design game, giving a reward or penalty for good or bad design choices. The results of this method show that the electrical machine
2307.00073
A Foundation for Synthetic Algebraic Geometry
This is a foundation for algebraic geometry, developed internal to the Zariski topos, building on the work of Kock and Blechschmidt. The Zariski topos consists of sheaves on the site opposite to the category of finitely presented algebras over a fixed ring, with the Zariski topology, i.e. generating covers are given by localization maps $A\to A_{f_1}$ for finitely many elements $f_1,\dots,f_n$ that generate the ideal $(1)=A\subseteq A$. We use homotopy type theory together with three axioms as the internal language of a (higher) Zariski topos. One of our main contributions is the use of higher types -- in the homotopical sense -- to define and reason about cohomology. Actually computing cohomology groups, seems to need a principle along the lines of our ``Zariski local choice'' axiom, which we justify as well as the other axioms using a cubical model of homotopy type theory.
Felix Cherubini, Thierry Coquand, Matthias Hutzler
2023-06-30T18:23:04
http://arxiv.org/abs/2307.00073v2
# A Foundation for Synthetic Algebraic Geometry ###### Abstract This is a foundation for algebraic geometry, developed internal to the Zariski topos, building on the work of Kock and Blechschmidt ([12][13], [14]). The Zariski topos consists of sheaves on the site opposite to the category of finitely presented algebras over a fixed ring, with the Zariski topology, i.e. generating covers are given by localization maps \(A\to A_{f_{1}}\) for finitely many elements \(f_{1},\ldots,f_{n}\) that generate the ideal \((1)=A\subsetneq A\). We use homotopy type theory together with three axioms as the internal language of a (higher) Zariski topos. One of our main contributions is the use of higher types - in the homotopical sense - to define and reason about cohomology. Actually computing cohomology groups, seems to need a principle along the lines of our "Zariski local choice" axiom, which we justify as well as the other axioms using a cubical model of homotopy type theory. ###### Contents * 1 Preliminaries * 1.1 Subtypes and Logic * 1.2 Homotopy type theory * 1.3 Algebra * 2 Axioms * 2.1 Statement of the axioms * 2.2 First consequences * 3 Affine schemes * 3.1 Affine-open subtypes * 3.2 Pullbacks of affine schemes * 3.3 Boundedness of functions to \(\mathbb{N}\) * 4 Topology of schemes * 4.1 Closed subtypes * 4.2 Open subtypes * 5 Schemes * 5.1 Definition of schemes * 5.2 General Properties * 5.3 Glueing * 5.4 Subschemes * 5.5 Equality types * 5.6 Dependent sums * 6 Projective space * 6.1 Construction of projective spaces * 6.2 Functions on \(\mathbb{P}^{n}\) * 6.3 Line Bundles * 7 Bundles and cohomology * 7.1 Quasi-coherent bundles * 7.2 Finitely presented bundles * 7.3 Cohomology on affine schemes * 7.4 Cech-Cohomology * 8 Type Theoretic justification of axioms * 8.1 Internal sheaf model * 8.1.1 Axioms for the presheaf model * 8.1.2 Justification of the axioms for the presheaf model * 8.1.3 Sheaf model obtained by localisation from the presheaf model * 8.2 Presheaf models of univalence * 8.3 Propositional truncation * 8.4 Choice * 8.5 1-topos model * 8.6 Some properties of the sheaf model * 8.6.1 Quasi-coherence * 8.6.2 Projective space * 8.7 Global sections and Zariski global choice * A Negative results ## Introduction Algebraic geometry is the study of solutions of polynomial equations using methods from geometry. The central geometric objects in algebraic geometry are called _schemes_. Their basic building blocks are called _affine schemes_, where, informally, an affine scheme corresponds to a solution sets of polynomial equations. While this correspondence is clearly visible in the functorial approach to algebraic geometry and our synthetic approach, it is somewhat obfuscated in the most commonly used, topological approach. In recent years, computer formalization of the intricate notion of affine schemes received some attention as a benchmark problem - this is, however, _not_ a problem addressed by this work. Instead, we use a synthetic approach to algebraic geometry, very much alike to that of synthetic differential geometry. This means, while a scheme in classical algebraic geometry is a complicated compound datum, we work in a setting, based on homotopy type theory, where schemes are types, with an additional property that can be defined within our synthetic theory. Following ideas of Ingo Blechschmidt and Anders Kock ([1], [12], [13]), we use a base ring \(R\) which is local and satisfies an axiom reminiscent of the Kock-Lawvere axiom. 
This more general axiom is called _synthetic quasi coherence (SQC)_ by Blehschmidt and a version quantifying over external algebras is called the _comprehensive axiom_1 by Kock. The exact concise form of the SQC axiom we use, was noted by David Jaz Myers in 2018 and communicated to the first author. Footnote 1: In [12], Kock’s “axiom \(2_{k}\)” could equivalently be Theorem 12.2, which is exactly our synthetic quasi coherence axiom, except that it only quantifies over external algebras. Before we state the SQC axiom, let us take a step back and look at the basic objects of study in algebraic geometry, solutions of polynomial equations. Given a system of polynomial equations \[p_{1}(X_{1},\ldots,X_{n})=0,\] \[\vdots\] \[p_{m}(X_{1},\ldots,X_{n})=0,\] the solution set \(\{\,x:R^{n}\mid\forall i.\ p_{i}(x_{1},\ldots,x_{n})=0\,\}\) is in canonical bijection to the set of \(R\)-algebra homomorphisms \[\operatorname{Hom}_{R\cdot\operatorname{Alg}}(R[X_{1},\ldots,X_{n}]/(p_{1}, \ldots,p_{m}),R)\] by identifying a solution \((x_{1},\ldots,x_{n})\) with the homomorphism that maps each \(X_{i}\) to \(x_{i}\). Conversely, for any \(R\)-algebra \(A\) which is merely of the form \(R[X_{1},\ldots,X_{n}]/(p_{1},\ldots,p_{m})\), we define the _spectrum_ of \(A\) to be \[\operatorname{Spec}A\mathrel{\mathop{:}}\equiv\operatorname{Hom}_{R\cdot \operatorname{Alg}}(A,R).\] In contrast to classical, non-synthetic algebraic geometry, where this set needs to be equipped with additional structure, we postulate axioms that will ensure that \(\operatorname{Spec}A\) has the expected geometric properties. Namely, SQC is the statement that, for all finitely presented \(R\)-algebras \(A\), the canonical map \[A \xrightarrow{\sim}(\operatorname{Spec}A\to R)\] \[a \mapsto(\varphi\mapsto\varphi(a))\] is an equivalence. A prime example of a spectrum is \(\mathbb{A}^{1}\coloneqq\operatorname{Spec}R[X]\), which turns out to be the underlying set of \(R\). With the SQC axiom, _any_ function \(f:\mathbb{A}^{1}\to\mathbb{A}^{1}\) is given as a polynomial with coefficients in \(R\). In fact, all functions between affine schemes are given by polynomials. Furthermore, for any affine scheme \(\operatorname{Spec}A\), the axiom ensures that the algebra \(A\) can be reconstructed as the algebra of functions \(\operatorname{Spec}A\to R\), therefore establishing a duality between affine schemes and algebras. The Kock-Lawvere axiom used in synthetic differential geometry might be stated as the SQC axiom restricted to (external) _Weil-algebras_, whose spectra correspond to pointed infinitesimal spaces. These spaces can be used in both synthetic differential and algebraic geometry in very much the same way. In the accompanying formalization [CH] of some basic results, we use a setup which was already proposed by David Jaz Myers in a conference talk ([30, 31]). On top of Myers' ideas, we were able to define schemes, develop some topological properties of schemes, and construct projective space. An important, not yet formalized result is the construction of cohomology groups. This is where the _homotopy_ type theory really comes to bear - instead of the hopeless adaption of classical, non-constructive definitions of cohomology, we make use of higher types, for example the \(n\)-th Eilenberg-MacLane space \(K(R,n)\) of the group \((R,+)\). 
As an analogue of classical cohomology with values in the structure sheaf, we then define cohomology with coefficients in the base ring as: \[H^{n}(X,R)\coloneqq\|X\to K(R,n)\|_{0}.\] This definition is very convenient for proving abstract properties of cohomology. For concrete calculations we make use of another axiom, which we call _Zariski-local choice_. While this axiom was conceived of for exactly these kind of calculations, it turned out to settle numerous questions with no apparent connection to cohomology. One example is the equivalence of two notions of _open subspace_. A pointwise definition of openness was suggested to us by Ingo Blechschmidt and is very convenient to work with. However, classically, basic open subsets of an affine scheme are given by functions on the scheme and the corresponding open is morally the collection of points where the function does not vanish. With Zariski-local choice, we were able to show that these notions of openness agree in our setup. Apart from SQC, locality of the base ring \(R\) and Zariski-local choice, we only use homotopy type theory, including univalent universes, truncations and some very basic higher inductive types. Roughly, Zariski-local choice states that any surjection into an affine scheme merely has sections on a _Zariski-_cover.2 The latter, internal, notion of cover corresponds quite directly to the covers in the site of the _Zariski topos_, which we use to construct a model of homotopy type theory with our axioms. Footnote 2: It is related to the set-theoretic axiom called _axiom of multiple choice_ (AMC) [1] or _weakly initial set of covers axiom_ (WSC): the set of all Zariski-covers of an affine scheme is weakly initial among all covers. However, our axiom only applies to (affine) schemes, not all types or sets. More precisely, we can use the _Zariski topos_ over any base ring. Toposes built using other Grothendieck topologies, like for example the etale topology, are not compatible with Zariski-local choice. We did not explore whether an analogous setup can be used for derived algebraic geometry3 - meaning that the \(0\)-truncated rings we used are replaced by higher rings. This is only because for a derived approach, we would have to work with higher monoids, which is currently infeasible - we are not aware of any obstructions for, say, an SQC axiom holding in derived algebraic geometry. Footnote 3: Here, the word “derived” refers to the rings the algebraic geometry is built up from – instead of the \(0\)-truncated rings we use, “derived” algebraic geometry would use simplicial or spectral rings. Sometimes, “derived” refers to homotopy types appearing in “the other direction”, namely as the values of the sheaves that are used. In that direction, our theory is already derived, since we use homotopy type theory. Practically that means that we expect no problems when expanding our theory of synthetic schemes to what classic algebraic geometers call “stacks”. In total, the scope of our theory so far includes quasi-compact, quasi-separated schemes of finite type over an arbitrary ring. These are all finiteness assumptions, that were chosen for convenience and include examples like closed subspaces of projective space, which we want to study in future work, as example applications. So far, we know that basic internal constructions, like affine schemes, correspond to the correct classical external constructions. This can be expanded using our model, which is of course also important to ensure the consistency of our setup. 
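As a small worked instance of the SQC axiom (added here purely for illustration; it is not part of the original text), take the Weil algebra \(A=R[X]/(X^{2})\). Its spectrum is the first-order infinitesimal disk, \[\operatorname{Spec}A=\operatorname{Hom}_{R\text{-Alg}}(R[X]/(X^{2}),R)=\{\varepsilon:R\mid\varepsilon^{2}=0\},\] and the canonical map of the axiom becomes \[R[X]/(X^{2})\to(\operatorname{Spec}A\to R),\qquad a+bX\mapsto(\varepsilon\mapsto a+b\varepsilon).\] SQC asserts that this map is an equivalence, i.e. every function on the infinitesimal disk is uniquely of the form \(\varepsilon\mapsto a+b\varepsilon\), which is exactly the Kock-Lawvere axiom for this Weil algebra.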
## Formalization

There is a related formalization project, which, at the time of writing, contains the construction of projective \(n\)-space \(\mathbb{P}^{n}\) as a scheme. The code may be found here: [https://github.com/felixwellen/synthetic-geometry](https://github.com/felixwellen/synthetic-geometry) It makes extensive use of the algebra part of the cubical-agda library: [https://github.com/agda/cubical](https://github.com/agda/cubical) - which contains many contributions, in particular on finitely presented algebras and related concepts, which were made in the scope of that project.

## Acknowledgements

We use work from Ingo Blechschmidt's PhD thesis, section 18, as a basis. This includes in particular the synthetic quasi-coherence axiom and the assumption that the base ring is local. David Jaz Myers had the idea to use Blechschmidt's ideas in homotopy type theory and presented his ideas in 2019 at the workshop "Geometry in Modal Homotopy Type Theory" in Pittsburgh. Myers' ideas include the algebra setup we used in our formalization. In December 2022, there was a mini-workshop in Augsburg, which helped with the development of this work. We thank Jonas Hofer and Lukas Stoll for spotting a couple of small errors.

## 1 Preliminaries

### Subtypes and Logic

We use the notation \(\exists_{x:X}P(x)\coloneqq\|\sum_{x:X}P(x)\|\). We use \(+\) for the coproduct of types and for types \(A,B\) we write \[A\lor B\coloneqq\|A+B\|.\] We will use subtypes extensively.

**Definition 1.1.1**: Let \(X\) be a type. A _subtype_ of \(X\) is a function \(U:X\to\mathrm{Prop}\) to the type of propositions. We write \(U\subseteq X\) to indicate that \(U\) is as above. If \(X\) is a set, a subtype may be called _subset_ for emphasis. For subtypes \(A,B\subseteq X\), we write \(A\subseteq B\) as a shorthand for pointwise implication. We will freely switch between subtypes \(U:X\to\mathrm{Prop}\) and the corresponding embeddings \[\sum_{x:X}U(x)\hookrightarrow X.\] In particular, if we write \(x:U\) for a subtype \(U:X\to\mathrm{Prop}\), we mean that \(x:\sum_{x:X}U(x)\) - but we might silently project \(x\) to \(X\).

**Definition 1.1.2**: Let \(I\) and \(X\) be types and \(U_{i}:X\to\mathrm{Prop}\) a subtype for any \(i:I\). 1. The _union_ \(\bigcup_{i:I}U_{i}\) is the subtype \((x:X)\mapsto\exists_{i:I}U_{i}(x)\). 2. The _intersection_ \(\bigcap_{i:I}U_{i}\) is the subtype \((x:X)\mapsto\prod_{i:I}U_{i}(x)\). We will use common notation for finite unions and intersections. The following formulas hold:

**Lemma 1.1.3**: Let \(I\), \(X\) be types, \(U_{i}:X\to\mathrm{Prop}\) a subtype for any \(i:I\) and \(V,W\) subtypes of \(X\). 1. Any subtype \(P:V\to\mathrm{Prop}\) is a subtype of \(X\) given by \((x:X)\mapsto\sum_{x:V}P(x)\). 2. \(V\cap\bigcup_{i:I}U_{i}=\bigcup(V\cap U_{i})\). 3. If \(\bigcup_{i:I}U_{i}=X\) we have \(V=\bigcup_{i:I}U_{i}\cap V\). 4. If \(\bigcup_{i:I}U_{i}=\emptyset\), then \(U_{i}=\emptyset\) for all \(i:I\).

**Definition 1.1.4**: Let \(X\) be a type. 1. \(\emptyset\coloneqq(x:X)\mapsto\emptyset\). 2. For \(U\subseteq X\), let \(\neg U\coloneqq(x:X)\mapsto\neg U(x)\). 3. For \(U\subseteq X\), let \(\neg\neg U\coloneqq(x:X)\mapsto\neg\neg U(x)\).

**Lemma 1.1.5**: \(U=\emptyset\) if and only if \(\neg\left(\exists_{x:X}U(x)\right)\).

### Homotopy type theory

Our truncation levels start at \(-2\), so \((-2)\)-types are contractible, \((-1)\)-types are propositions and \(0\)-types are sets.

**Definition 1.2.1**: Let \(X\) and \(I\) be types.
A family of propositions \(U_{i}:X\to\mathrm{Prop}\) _covers_ \(X\), if for all \(x:X\), there merely is an \(i:I\) such that \(U_{i}(x)\).

**Lemma 1.2.2**: Let \(X\) and \(I\) be types. For propositions \((U_{i}:X\to\mathrm{Prop})_{i:I}\) that cover \(X\) and \(P:X\to\text{0-Type}\), we have the following glueing property: If for each \(i:I\) there is a dependent function \(s_{i}:(x:U_{i})\to P(x)\) together with proofs of equality on intersections \(p_{ij}:(x:U_{i}\cap U_{j})\to(s_{i}(x)=s_{j}(x))\), then there is a globally defined dependent function \(s:(x:X)\to P(x)\), such that for all \(x:X\) and \(i:I\) we have \(U_{i}(x)\to s(x)=s_{i}(x)\).

**Proof** We define \(s\) pointwise. Let \(x:X\). Using a Lemma of Kraus (see footnote 4) and the \(p_{ij}\), we get a factorization \[\sum_{i:I}U_{i}(x)\to\Big\|\sum_{i:I}U_{i}(x)\Big\|_{-1}\to P(x)\] of the map \((i,u)\mapsto s_{i}(x)\), which defines a unique value \(s(x):P(x)\). Footnote 4: For example this is the \(n=-1\) case of [CKV15][Theorem 2.1].

Similarly, we can prove:

**Lemma 1.2.3**: Let \(X\) and \(I\) be types. For propositions \((U_{i}:X\to\mathrm{Prop})_{i:I}\) that cover \(X\) and \(P:X\to\text{1-Type}\), we have the following glueing property: If for each \(i:I\) there is a dependent function \(s_{i}:(x:U_{i})\to P(x)\) together with proofs of equality on intersections \(p_{ij}:(x:U_{i}\cap U_{j})\to(s_{i}(x)=s_{j}(x))\) satisfying the cocycle condition \(p_{ij}\cdot p_{jk}=p_{ik}\), then there is a globally defined dependent function \(s:(x:X)\to P(x)\), such that for all \(x:X\) and \(i:I\) we have \(p_{i}:U_{i}(x)\to s(x)=s_{i}(x)\) such that \(p_{i}\cdot p_{ij}=p_{j}\).

This can be generalized to \(k\)-Type for each _external_ \(k\). The condition for \(0\)-Type can be seen as an internal version of the usual patching _sheaf_ condition. The condition for \(1\)-Type is then the internal version of the usual patching _1-stack_ condition.

### Algebra

**Definition 1.3.1**: A commutative ring \(R\) is _local_ if \(1\neq 0\) in \(R\) and if for all \(x,y:R\) such that \(x+y\) is invertible, \(x\) is invertible or \(y\) is invertible.

**Definition 1.3.2**: Let \(R\) be a commutative ring. A _finitely presented_ \(R\)-algebra is an \(R\)-algebra \(A\), such that there merely are natural numbers \(n,m\) and polynomials \(f_{1},\ldots,f_{m}:R[X_{1},\ldots,X_{n}]\) and an equivalence of \(R\)-algebras \(A\simeq R[X_{1},\ldots,X_{n}]/(f_{1},\ldots,f_{m})\).

**Definition 1.3.3**: Let \(A\) be a commutative ring. An element \(r:A\) is _regular_, if the multiplication map \(r\cdot(-):A\to A\) is injective.

**Lemma 1.3.4**: Let \(A\) be a commutative ring. 1. All units of \(A\) are regular. 2. If \(f\) and \(g\) are regular, their product \(fg\) is regular.

**Example 1.3.5**: The monomials \(X^{k}:A[X]\) are regular.

**Lemma 1.3.6**: Let \(f:A[X]\) be a polynomial and \(a:A\) an element such that \(f(a):A\) is regular. Then \(f\) is regular as an element of \(A[X]\).

**Proof** After a variable substitution \(X\mapsto X+a\) we can assume that \(f(0)\) is regular. Now let \(g:A[X]\) be given with \(fg=0\). Then in particular \(f(0)g(0)=0\), so \(g(0)=0\). By induction, all coefficients of \(g\) vanish. \(\square\)

**Definition 1.3.7**: Let \(A\) be a ring and \(f:A\). Then \(A_{f}\) denotes the _localization_ of \(A\) at \(f\), i.e.
a ring \(A_{f}\) together with a homomorphism \(A\to A_{f}\), such that for all homomorphisms \(\varphi:A\to B\) such that \(\varphi(f)\) is invertible, there is a unique homomorphism as indicated in the diagram: For \(a:A\), we denote the image of \(a\) in \(A_{f}\) as \(\frac{a}{1}\) and the inverse of \(f\) as \(\frac{1}{f}\). **Lemma 1.3.8**: Let \(A\) be a commutative ring and \(f_{1},\ldots,f_{n}:A\). For finitely generated ideals \(I_{i}\subseteq A_{f_{i}}\), such that \(A_{f_{i}f_{j}}\cdot I_{i}=A_{f_{i}f_{j}}\cdot I_{j}\) for all \(i,j\), there is a finitely generated ideal \(I\subseteq A\), such that \(A_{f_{i}}\cdot I=I_{i}\) for all \(i\). **Proof** Choose generators \[\frac{g_{i1}}{1},\ldots,\frac{g_{ik_{i}}}{1}\] for each \(I_{i}\). These generators will still generate \(I_{i}\), if we multiply any of them with any power of the unit \(\frac{f_{i}}{1}\). Now \[A_{f_{i}f_{j}}\cdot I_{i}\subseteq A_{f_{i}f_{j}}\cdot I_{j}\] means that for any \(g_{ik}\), we have a relation \[(f_{i}f_{j})^{l}g_{ik}=\sum_{l}h_{l}g_{jl}\] for some power \(l\) and coefficients \(h_{l}:A\). This means, that \(f_{i}^{l}g_{ik}\) is contained in \(I_{j}\). Multiplying \(f_{i}^{l}g_{ik}\) with further powers of \(f_{i}\) or multiplying \(g_{jl}\) with powers of \(f_{j}\) does not change that. So we can repeat this for all \(i\) and \(k\) to arrive at elements \(g_{ik}:A\), which generate an ideal \(I\subseteq A\) with the desired properties. \(\square\) The following definition also appears as [1][4] and a version restricted to external finitely presented algebras was already used by Anders Kock in [11][12]: **Definition 1.3.9**: The _(synthetic) spectrum_ of a finitely presented \(R\)-algebra \(A\) is the set of \(R\)-algebra homomorphisms from \(A\) to \(R\): \[\operatorname{Spec}A\coloneqq\operatorname{Hom}_{R\text{-}\operatorname{Alg} }(A,R)\] We write \(\mathbb{A}^{n}\) for \(\operatorname{Spec}R[X_{1},\ldots,X_{n}]\), which is canonically in bijection with \(R^{n}\) by the universal property of the polynomial ring. In particular, \(\mathbb{A}^{1}\) is (in bijection with) the underlying set of \(R\). Our convention is to use the letter \(R\) when we think of it as an algebraic object, and to write \(\mathbb{A}^{1}\) (or \(\mathbb{A}^{n}\)) when we think of it as a set or a geometric object. The Spec construction is functorial: **Definition 1.3.10**: For an algebra homomorphism \(f:\operatorname{Hom}_{R\text{-}\operatorname{Alg}}(A,B)\) between finitely presented \(R\)-algebras \(A\) and \(B\), we write \(\operatorname{Spec}f\) for the map from \(\operatorname{Spec}B\) to \(\operatorname{Spec}A\) given by precomposition with \(f\). **Definition 1.3.11**: Let \(A\) be a finitely presented \(R\)-algebra. For \(f:A\), the _standard open subset_ given by \(f\), is the subtype \[D(f)\coloneqq(x:\operatorname{Spec}A)\mapsto(x(f)\text{ is invertible}).\] later, we will use the following more general and related definitions: **Definition 1.3.12**: Let \(A\) be a finitely presented \(R\)-algebra. For \(n:\mathbb{N}\) and \(f_{1},\ldots,f_{n}:A\), there are 1. the "open" subset \[D(f_{1},\ldots,f_{n})\coloneqq(x:\operatorname{Spec}A)\mapsto(\exists_{i}\text{ such that }x(f_{i})\text{ is invertible})\] 2. the "closed" subset \[V(f_{1},\ldots,f_{n})\coloneqq(x:\operatorname{Spec}A)\mapsto(\forall_{i}\ x(f_{i})=0)\] It will be made precise in Section 4, in which sense these subsets are open or closed. 
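To illustrate Definition 1.3.11 and Definition 1.3.12 in a concrete case (an added example, not from the original text), take \(A=R[X]\), so \(\operatorname{Spec}A=\mathbb{A}^{1}\), and \(f=X\): \[D(X)=\{x:\mathbb{A}^{1}\mid x\text{ is invertible}\},\qquad V(X)=\{x:\mathbb{A}^{1}\mid x=0\}=\operatorname{Spec}(R[X]/(X)).\] For several elements, say \(f_{1}=X\) and \(f_{2}=X-1\), the "closed" subset \(V(X,X-1)\) is empty, since \(x=0\) and \(x=1\) cannot hold simultaneously (using \(1\neq 0\) from Loc), while the "open" subset \(D(X,X-1)\) is all of \(\mathbb{A}^{1}\): for every \(x\) the sum \(x+(1-x)=1\) is invertible, so by locality \(x\) or \(x-1\) is invertible.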
We will later also need the notion of a _Zariski-Cover_ of a spectrum \(\operatorname{Spec}A\), for some finitely presented \(R\)-algebra \(A\). Intuitively, this is a collection of standard opens which jointly cover \(\operatorname{Spec}A\). Since it is more practical, we will however stay on the side of algebras. A finite list of elements \(f_{1},\ldots,f_{n}:A\) yields a Zariski-Cover, if and only if they are a _unimodular vector_: **Definition 1.3.13**: Let \(A\) be a finitely presented \(R\)-algebra. Then a list \(f_{1},\ldots,f_{n}:A\) of elements of \(A\) is called _unimodular_ if we have an identity of ideals \((f_{1},\ldots,f_{n})=(1)\). We use \(\operatorname{Um}(A)\) to denote the type of unimodular sequences in \(A\): \[\operatorname{Um}(A)\coloneqq\sum_{n:\mathbb{N}}\sum_{f_{1},\ldots,f_{n}:A}(f _{1},\ldots,f_{n})=(1).\] We will sometimes drop the natural number and the equality and just write \((f_{1},\ldots,f_{n}):\operatorname{Um}(A)\). **Definition 1.3.14**: Ab denotes the type of abelian groups. **Lemma 1.3.15**: Let \(A,B:\operatorname{Ab}\) and \(f:A\to B\) be a homomorphism of abelian groups. Then \(f\) is surjective, if and only if, it is a cokernel. **Proof** A cokernel is a set-quotient by an effective relation, so the projection map is surjective. On the other hand, if \(f\) is surjective and we are in the situation: then we can construct a map \(\varphi:B\to C\) as follows. For \(x:B\), we define the type of possible values \(\varphi(x)\) in \(C\) as \[\sum_{z:C}\exists_{y:A}(f(y)=x)\wedge g(y)=z\] which is a proposition by algebraic calculation. By surjectivity of \(f\), this type is inhabited and therefore contractible. So we can define \(\varphi(x)\) as its center of contraction. \(\square\) ## 2 Axioms ### Statement of the axioms We always assume there is a fixed commutative ring \(R\). In addition, we assume the following three axioms about \(R\), which were already mentioned in the introduction, but we will indicate which of these axioms are used to prove each statement by listing their shorthands. **Axiom (Loc)** \(R\) is a local ring (Definition 1.3.1). **Axiom (SQC)** For any finitely presented \(R\)-algebra \(A\), the homomorphism \[a\mapsto(\varphi\mapsto\varphi(a)):A\to(\operatorname{Spec}A\to R)\] is an isomorphism of \(R\)-algebras. **Axiom (Z-choice)** Let \(A\) be a finitely presented \(R\)-algebra and let \(B:\operatorname{Spec}A\to\mathcal{U}\) be a family of inhabited types. Then there merely exist unimodular \(f_{1},\ldots,f_{n}:A\) together with dependent functions \(s_{i}:\Pi_{x:D(f_{i})}B(x)\). As a formula5: Footnote 5: Using the notation from Definition 1.3.13 \[(\Pi_{x:\operatorname{Spec}A}\|B(x)\|)\to\|((f_{1},\ldots,f_{n}):\operatorname {Um}(A))\times\Pi_{i}\Pi_{x:D(f_{i})}B(x)\|.\] ### First consequences Let us draw some first conclusions from the axiom (SQC), in combination with (Loc) where needed. **Proposition 2.2.1** (using SQC): For all finitely presented \(R\)-algebras \(A\) and \(B\) we have an equivalence \[f\mapsto\operatorname{Spec}f:\operatorname{Hom}_{R\text{-Alg}}(A,B)=( \operatorname{Spec}B\to\operatorname{Spec}A).\] **Proof** By Lemma 3.1.2, we have a natural equivalence \[X\to\operatorname{Spec}(R^{X})\] and by SQC, the natural map \[A\to R^{\operatorname{Spec}A}\] is an equivalence. We therefore have a contravariant equivalence between the category of finitely presented \(R\)-algebras and the category of affine schemes. In particular, \(\operatorname{Spec}\) is an embedding. 
\(\square\) An important consequence of SQC, which may be called _weak nullstellensatz_: **Proposition 2.2.2** (using Loc, SQC): If \(A\) is a finitely presented \(R\)-algebra, then we have \(\operatorname{Spec}A=\emptyset\) if and only if \(A=0\). **Proof** If \(\operatorname{Spec}A=\emptyset\) then \(A=R^{\operatorname{Spec}A}=R^{\emptyset}=0\) by (SQC). If \(A=0\) then there are no homomorphisms \(A\to R\) since \(1\neq 0\) in \(R\) by (Loc). \(\square\) For example, this weak nullstellensatz suffices to prove the following properties of the ring \(R\), which were already proven in [1][Section 18.4]. **Proposition 2.2.3** (using Loc, SQC): 1. An element \(x:R\) is invertible, if and only if \(x\neq 0\). 2. A vector \(x:R^{n}\) is non-zero, if and only if one of its entries is invertible. 3. An element \(x:R\) is nilpotent, if and only if \(\neg\neg(x=0)\). **Proof** Part (a) is the special case \(n=1\) of (b). For (b), consider the \(R\)-algebra \(A\mathrel{\mathop{\kern 0.0pt\mathchar 0\relax}}R/(x_{1},\ldots,x_{n})\). Then the set \(\operatorname{Spec}A\equiv\operatorname{Hom}_{R\text{-Alg}}(A,R)\) is a proposition (that is, it has at most one element), and, more precisely, it is equivalent to the proposition \(x=0\). By Proposition 2.2.2, the negation of this proposition is equivalent to \(A=0\) and thus to \((x_{1},\ldots,x_{n})=R\). Using (Loc), this is the case if and only if one of the \(x_{i}\) is invertible. For (c), we instead consider the algebra \(A\mathrel{\mathop{\kern 0.0pt\mathchar 0\relax}}R_{x}\equiv R[\frac{1}{x}]\). Here we have \(A=0\) if and only if \(x\) is nilpotent, while \(\operatorname{Spec}A\) is the proposition \(\operatorname{inv}(x)\). Thus, we can finish by Proposition 2.2.2, together with part (a) to go from \(\neg\text{inv}(x)\) to \(\neg\neg(x=0)\). \(\square\) The following lemma, which is a variant of [1][Proposition 18.32], shows that \(R\) is in a weak sense algebraically closed. See Example A.0.3 for a refutation of a stronger formulation of algebraic closure of \(R\). **Lemma 2.2.4** (using Loc, SQC): Let \(f:R[X]\) be a polynomial. Then it is not not the case that: either \(f=0\) or \(f=\alpha\cdot(X-a_{1})^{e_{1}}\ldots(X-a_{n})^{e_{n}}\) for some \(\alpha:R^{\times}\), \(e_{i}\geq 1\) and pairwise distinct \(a_{i}:R\). **Proof** Let \(f:R[X]\) be given. Since our goal is a proposition, we can assume we have a bound \(n\) on the degree of \(f\), so \[f=\sum_{i=0}^{n}c_{i}X^{i}.\] Since our goal is even double-negation stable, we can assume \(c_{n}=0\lor c_{n}\neq 0\) and by induction \(f=0\) (in which case we are done) or \(c_{n}\neq 0\). If \(n=0\) we are done, setting \(\alpha\mathrel{\mathop{\kern 0.0pt\mathchar 0\relax}}c_{0}\). Otherwise, \(f\) is not invertible (using \(0\neq 1\) by (Loc)), so \(R[X]/(f)\neq 0\), which by (SQC) means that \(\operatorname{Spec}(R[X]/(f))=\{x:R\mid f(x)=0\}\) is not empty. Using the double-negation stability of our goal again, we can assume \(f(a)=0\) for some \(a:R\) and factor \(f=(X-a_{1})f_{n-1}\). By induction, we get \(f=\alpha\cdot(X-a_{1})\ldots(X-a_{n})\). Finally, we decide each of the finitely many propositions \(a_{i}=a_{j}\), which we can assume is possible because our goal is still double-negation stable, to get the desired form \(f=\alpha\cdot(X-\widetilde{a}_{1})^{e_{1}}\ldots(X-\widetilde{a}_{n})^{e_{n}}\) with distinct \(\widetilde{a}_{i}\). \(\square\) Affine schemes ### Affine-open subtypes We only talk about affine schemes of finite type, i.e. 
schemes of the form \(\operatorname{Spec}A\) (Definition 1.3.9), where \(A\) is a finitely presented algebra. **Definition 3.1.1**: A type \(X\) is _(qc-)affine_, if there is a finitely presented \(R\)-algebra \(A\), such that \(X=\operatorname{Spec}A\). If \(X\) is affine, it is possible to reconstruct the algebra directly. **Lemma 3.1.2** (using SQC): Let \(X\) be an affine scheme, then there is a natural equivalence \(X=\operatorname{Spec}(R^{X})\). * The natural map \(X\to\operatorname{Spec}(R^{X})\) is given by mapping \(x:X\) to the evaluation homomorphism at \(x\). There merely is an \(A\) such that \(X=\operatorname{Spec}A\). Applying \(\operatorname{Spec}\) to the canonical map \(A\to R^{\operatorname{Spec}A}\), yields an equivalence by \(\operatorname{SQC}\). This is a (one sided) inverse to the map above. So we have \(X=\operatorname{Spec}(R^{X})\). **Proposition 3.1.3**: Let \(X\) be a type. The type of all finitely presented \(R\)-algebras \(A\), such that \(X=\operatorname{Spec}A\), is a proposition. When we write "\(\operatorname{Spec}A\)" we implicitly assume \(A\) is a finitely presented \(R\)-algebra. Recall from Definition 1.3.11 that the standard open subset \(D(f)\subseteq\operatorname{Spec}A\) is given by \(D(f)(x)\cong\operatorname{inv}(f(x))\). **Example 3.1.4** (using Loc, SQC): For \(a_{1},\dots,a_{n}:R\), we have \[D((X-a_{1})\cdots(X-a_{n}))=\mathbb{A}^{1}\setminus\{a_{1},\dots,a_{n}\}.\] Indeed, for any \(x:\mathbb{A}^{1}\), \(((X-a_{1})\dots(X-a_{n}))(x)\) is invertible if and only if \(x-a_{i}\) is invertible for all \(i\). But by Proposition 2.2.3 this means \(x\neq a_{i}\) for all \(i\). **Definition 3.1.5**: Let \(X=\operatorname{Spec}A\). A subtype \(U:X\to\operatorname{Prop}\) is called _affine-open_, if one of the following logically equivalent statements holds: 1. \(U\) is the union of finitely many affine standard opens. 2. There are \(f_{1},\dots,f_{n}:A\) such that \[U(x)\Leftrightarrow\exists_{i}f_{i}(x)\neq 0\] By Definition 1.3.12 we have \(D(f_{1},\dots,f_{n})=D(f_{1})\cup\dots\cup D(f_{n})\). Note that in general, affine-open subtypes do not need to be affine - this is why we use the dash "-". We will introduce a more general definition of open subtype in Definition 4.2.1 and show in Theorem 4.2.7, that the two notions agree on affine schemes. **Proposition 3.1.6**: Let \(X=\operatorname{Spec}A\) and \(f:A\). Then \(D(f)=\operatorname{Spec}A[f^{-1}]\). * \( X=\operatorname{Spec}A\) and \(f:A\). Then \(D(f)=\operatorname{Spec}A[f^{-1}]\). **Proof** \[D(f)=\sum_{x:X}D(f)(x)=\sum_{x:\operatorname{Spec}A}\operatorname{inv}(f(x))= \sum_{x:\operatorname{Hom}_{R\text{-Alg}}(A,R)}\operatorname{inv}(x(f))= \operatorname{Hom}_{R\text{-Alg}}(A[f^{-1}],R)=\operatorname{Spec}A[f^{-1}]\] \(\square\) Affine-openness is transitive in the following sense: **Lemma 3.1.7**: Let \(X=\operatorname{Spec}A\) and \(D(f)\subseteq X\) be a standard open. Any affine-open subtype \(U\) of \(D(f)\) is also affine-open in \(X\). * It is enough to show the statement for \(U=D(g)\), \(g:A_{f}\). Then \[g=\frac{h}{f^{k}}.\] Now \(D(hf)\) is an affine-open in \(X\), that coincides with \(U\): Let \(x:X\), then \((hf)(x)\) is invertible, if and only if both \(h(x)\) and \(f(x)\) are invertible. The latter means \(x:D(f)\), so we can interpret \(x\) as a homomorphism from \(A_{f}\) to \(R\). Then \(x:D(g)\) means \(x(g)\) is invertible, which is equivalent to \(x(h)\) being invertible, since \(x(f)^{k}\) is invertible anyway. 
**Lemma 3.1.8** (using Loc, SQC): Let \(X=\operatorname{Spec}A\) be an affine scheme and \(D(f)\subseteq X\) a standard open, then \(D(f)=\emptyset\), if and only if, \(f\) is nilpotent. **Proof** Since \(D(f)=\operatorname{Spec}A_{f}\), by Proposition 2.2.2, we know \(D(f)=\emptyset\), if and only if, \(A_{f}=0\). The latter is equivalent to \(f\) being nilpotent. \(\square\) More generally, the Zariski-lattice consisting of the radicals of finitely generated ideals of a finitely presented \(R\)-algebra \(A\), coincides with the lattice of open subtypes. This means, that internal to the Zariski-topos, it is not necessary to consider the full Zariski-lattice for a constructive treatment of schemes. **Lemma 3.1.9** (using SQC): Let \(A\) be a finitely presented \(R\)-algebra and let \(f,g_{1},\ldots,g_{n}\in A\). Then we have \(D(f)\subseteq D(g_{1},\ldots,g_{n})\) as subsets of \(\operatorname{Spec}A\) if and only if \(f\in\sqrt{(g_{1},\ldots,g_{n})}\). **Proof** Since \(D(g_{1},\ldots,g_{n})=\{\,x\in\operatorname{Spec}A\mid x\notin V(g_{1},\ldots,g_{n})\,\}\), 6 the inclusion \(D(f)\subseteq D(g_{1},\ldots,g_{n})\) can also be written as \(D(f)\cap V(g_{1},\ldots,g_{n})=\emptyset\), that is, \(\operatorname{Spec}((A/(g_{1},\ldots,g_{n}))[f^{-1}])=\emptyset\). By (SQC) this means that the finitely presented \(R\)-algebra \((A/(g_{1},\ldots,g_{n}))[f^{-1}]\) is zero. And this is the case if and only if \(f\) is nilpotent in \(A/(g_{1},\ldots,g_{n})\), that is, if \(f\in\sqrt{(g_{1},\ldots,g_{n})}\), as stated. \(\square\) Footnote 6: See Definition 1.3.12 for “\(V(\ldots)\)” In particular, we have \(\operatorname{Spec}A=\bigcup_{i=1}^{n}D(f_{i})\) if and only if \((f_{1},\ldots,f_{n})=(1)\). ### Pullbacks of affine schemes **Lemma 3.2.1**: The product of two affine schemes is again an affine scheme, namely \(\operatorname{Spec}A\times\operatorname{Spec}B=\operatorname{Spec}(A\otimes_ {R}B)\). **Proof** By the universal property of the tensor product \(A\otimes_{R}B\). \(\square\) More generally we have: **Lemma 3.2.2** (using SQC): Let \(X=\operatorname{Spec}A,Y=\operatorname{Spec}B\) and \(Z=\operatorname{Spec}C\) be affine schemes with maps \(f:X\to Z\), \(g:Y\to Z\). Then the pullback of this diagram is an affine scheme given by \(\operatorname{Spec}(A\otimes_{C}B)\). **Proof** The maps \(f:X\to Z\), \(g:Y\to Z\) are induced by \(R\)-algebra homomorphisms \(f^{*}:A\to R\) and \(g^{*}:B\to R\). Let \[(h,k,p):\operatorname{Spec}A\times_{\operatorname{Spec}C}\operatorname{Spec}B\] with \(p:h\circ f^{*}=k\circ g^{*}\). This defines a \(R\)-cocone on the diagram Since \(A\otimes_{C}B\) is a pushout in \(R\)-algebras, there is a unique \(R\)-algebra homomorphism \(A\otimes_{C}B\to R\) corresponding to \((h,k,p)\). \(\square\) ### Boundedness of functions to \(\mathbb{N}\) While the axiom SQC describes functions on an affine scheme with values in \(R\), we can generalize it to functions taking values in another finitely presented \(R\)-algebra, as follows. **Lemma 3.3.1** (using SQC): For finitely presented \(R\)-algebras \(A\) and \(B\), the function \[A\otimes B \xrightarrow{\sim}(\operatorname{Spec}A\to B)\] \[c \mapsto(\varphi\mapsto(\varphi\otimes B)(c))\] is a bijection. **Proof** We recall \(\operatorname{Spec}(A\otimes B)=\operatorname{Spec}A\times\operatorname{Spec}B\) from Lemma 3.2.1 and calculate as follows. 
\[A\otimes B =(\operatorname{Spec}(A\otimes B)\to R)=(\operatorname{Spec}A \times\operatorname{Spec}B\to R)=(\operatorname{Spec}A\to(\operatorname{Spec}B \to R))=\qquad(\operatorname{Spec}A\to B)\] \[c \mapsto\qquad\qquad(\chi\mapsto\chi(c))\mapsto\ \ ((\varphi,\psi)\mapsto( \varphi\otimes\psi)(c))\mapsto\ \ (\varphi\mapsto(\psi\mapsto(\varphi\otimes\psi)(c)))\mapsto( \varphi\mapsto(\varphi\otimes B)(c))\] The last step is induced by the identification \(B=(\operatorname{Spec}B\to R)\), \(b\mapsto(\psi\mapsto\psi(b))\), and we use the fact that \(\psi\circ(\varphi\otimes B)=\varphi\otimes\psi\). \(\square\) **Lemma 3.3.2** (using SQC): Let \(A\) be a finitely presented \(R\)-algebra and let \(s:\operatorname{Spec}A\to(\mathbb{N}\to R)\) be a family of sequences, each of which eventually vanishes: \[\prod_{x:\operatorname{Spec}A}\|\sum_{N:\mathbb{N}}\prod_{n\geq N}s(x)(n)=0\|\] Then there merely exists one number \(N:\mathbb{N}\) such that \(s(x)(n)=0\) for all \(x:\operatorname{Spec}A\) and all \(n\geq N\). * The set of eventually vanishing sequences \(\mathbb{N}\to R\) is in bijection with the set \(R[X]\) of polynomials, by taking the entries of a sequence as the coefficients of a polynomial. So the family of sequences \(s\) is equivalently a family of polynomials \(s:\operatorname{Spec}A\to R[X]\). Now we apply Lemma 3.3.1 with \(B=R[X]\) to see that such a family corresponds to a polynomial \(p:A[X]\). Note that for a point \(x:\operatorname{Spec}A\), the homomorphism \[x\otimes R[X]:A[X]=A\otimes R[X]\to R\otimes R[X]=R[X]\] simply applies the homomorphism \(x\) to every coefficient of a polynomial, so we have \((s(x))_{n}=x(p_{n})\). This concludes our argument, because the coefficients of \(p\), just like any polynomial, form an eventually vanishing sequence. \(\square\) **Theorem 3.3.3** (using Loc, SQC): Let \(A\) be a finitely presented \(R\)-algebra. Then every function \(f:\operatorname{Spec}A\to\mathbb{N}\) is bounded: \[\Pi_{f:\operatorname{Spec}A\to\mathbb{N}}\|\Sigma_{N:\mathbb{N}}\Pi_{x: \operatorname{Spec}A}f(x)\leq N\|.\] * Given a function \(f:\operatorname{Spec}A\to\mathbb{N}\), we construct the family \(s:\operatorname{Spec}A\to(\mathbb{N}\to R)\) of eventually vanishing sequences given by \[s(x)(n)\equiv\left\{\begin{array}{ll}1&\text{if }n<f(x)\\ 0&\text{else.}\end{array}\right.\] Since \(0\neq 1:R\) by Loc, we in fact have \(s(x)(n)=0\) if and only if \(n\geq f(x)\). Then the claim follows from Lemma 3.3.2. \(\square\) If we also assume the axiom Z-choice, we can formulate the following simultaneous strengthening of Lemma 3.3.2 and Theorem 3.3.3. **Proposition 3.3.4** (using Loc, SQC, Z-choice): Let \(A\) be a finitely presented \(R\)-algebra. Let \(P:\operatorname{Spec}A\to(\mathbb{N}\to\operatorname{Prop})\) be a family of upwards closed, merely inhabited subsets of \(\mathbb{N}\). Then the set \[\bigcap_{x:\operatorname{Spec}A}P(x)\subseteq\mathbb{N}\] is merely inhabited. * By Z-choice, there merely exists a cover \(\operatorname{Spec}A=\bigcup_{i=1}^{n}D(f_{i})\) and functions \(p_{i}:D(f_{i})\to\mathbb{N}\) such that \(p_{i}(x)\in P(x)\) for all \(x:D(f_{i})\). By Theorem 3.3.3, every \(p_{i}:D(f_{i})=\operatorname{Spec}A[f_{i}^{-1}]\to\mathbb{N}\) is merely bounded by some \(N_{i}:\mathbb{N}\), and then \(\max(N_{1},\dots,N_{n})\in P(x)\) for all \(x:\operatorname{Spec}A\). \(\square\) ## 4 Topology of schemes ### Closed subtypes **Definition 4.1.1**: 1. 
A _closed proposition_ is a proposition which is merely of the form \(x_{1}=0\wedge\dots\wedge x_{n}=0\) for some elements \(x_{1},\dots,x_{n}\in R\). 2. Let \(X\) be a type. A subtype \(U:X\to\operatorname{Prop}\) is _closed_ if for all \(x:X\), the proposition \(U(x)\) is closed. 3. For \(A\) a finitely presented \(R\)-algebra and \(f_{1},\dots,f_{n}:A\), we set \(V(f_{1},\dots,f_{n})\coloneqq\{\,x:\operatorname{Spec}A\mid f_{1}(x)=\dots=f_{ n}(x)=0\,\}\). Note that \(V(f_{1},\dots,f_{n})\subseteq\operatorname{Spec}A\) is a closed subtype and we have \(V(f_{1},\dots,f_{n})=\operatorname{Spec}(A/(f_{1},\dots,f_{n}))\) **Proposition 4.1.2** (using SQC): There is an order-reversing isomorphism of partial orders \[\begin{split}\text{f.g.-ideals}(R)&\xrightarrow{ \sim}\Omega_{cl}\\ I&\mapsto(I=(0))\end{split}\] between the partial order of finitely generated ideals of \(R\) and the partial order of closed propositions. * For a finitely generated ideal \(I=(x_{1},\ldots,x_{n})\), the proposition \(I=(0)\) is indeed a closed proposition, since it is equivalent to \(x_{1}=0\wedge\cdots\wedge x_{n}=0\). It is also evident that we get all closed propositions in this way. What remains to show is that \[I=(0)\Rightarrow J=(0)\qquad\text{iff}\qquad J\subseteq I.\] For this we use synthetic quasicoherence. Note that the set \(\operatorname{Spec}R/I=\operatorname{Hom}_{R\text{-Alg}}(R/I,R)\) is a proposition (has at most one element), namely it is equivalent to the proposition \(I=(0)\). Similarly, \(\operatorname{Hom}_{R\text{-Alg}}(R/J,R/I)\) is a proposition and equivalent to \(J\subseteq I\). But then our claim is just the equation \[\operatorname{Hom}(\operatorname{Spec}R/I,\operatorname{Spec}R/J)= \operatorname{Hom}_{R\text{-Alg}}(R/J,R/I)\] which holds by Proposition 2.2.1, since \(R/I\) and \(R/J\) are finitely presented \(R\)-algebras if \(I\) and \(J\) are finitely generated ideals. \(\square\) **Lemma 4.1.3** (using SQC): We have \(V(f_{1},\ldots,f_{n})\subseteq V(g_{1},\ldots,g_{m})\) as subsets of \(\operatorname{Spec}A\) if and only if \((g_{1},\ldots,g_{m})\subseteq(f_{1},\ldots,f_{n})\) as ideals of \(A\). * The inclusion \(V(f_{1},\ldots,f_{n})\subseteq V(g_{1},\ldots,g_{m})\) means a map \(\operatorname{Spec}(A/(f_{1},\ldots,f_{n}))\to\operatorname{Spec}(A/(g_{1}, \ldots,g_{m}))\) over \(\operatorname{Spec}A\). By Proposition 2.2.1, this is equivalent to a homomorphism \(A/(g_{1},\ldots,g_{m})\to A/(f_{1},\ldots,f_{n})\), which in turn means the stated inclusion of ideals. \(\square\) **Lemma 4.1.4** (using Loc, SQC, Z-choice): A closed subtype \(C\) of an affine scheme \(X=\operatorname{Spec}A\) is an affine scheme with \(C=\operatorname{Spec}(A/I)\) for a finitely generated ideal \(I\subseteq A\). * By Z-choice and boundedness, there is a cover \(D(f_{1}),\ldots,D(f_{l})\), such that on each \(D(f_{i})\), \(C\) is the vanishing set of functions \[g_{1},\ldots,g_{n}:D(f_{i})\to R.\] By Lemma 4.1.3, the ideals generated by these functions agree in \(A_{f_{i}f_{j}}\), so by Lemma 1.3.8, there is a finitely generated ideal \(I\subseteq A\), such that \(A_{f_{i}}\cdot I\) is \((g_{1},\ldots,g_{n})\) and \(C=\operatorname{Spec}A/I\). \(\square\) ### Open subtypes While we usually drop the prefix "qc" in the definition below, one should keep in mind, that we only use a definition of quasi compact open subsets. The difference to general opens does not play a role so far, since we also only consider quasi compact schemes later. **Definition 4.2.1**: 1. 
A proposition \(P\) is _(qc-)open_ if there merely are \(f_{1},\ldots,f_{n}:R\) such that \(P\) is equivalent to one of the \(f_{i}\) being invertible.
2. Let \(X\) be a type. A subtype \(U:X\to\operatorname{Prop}\) is _(qc-)open_ if \(U(x)\) is an open proposition for all \(x:X\).

**Proposition 4.2.2** (using Loc, SQC): A proposition \(P\) is open if and only if it is the negation of some closed proposition (Definition 4.1.1).

**Proof** Indeed, by Proposition 2.2.3, the proposition \(\operatorname{inv}(f_{1})\vee\cdots\vee\operatorname{inv}(f_{n})\) is the negation of \(f_{1}=0\wedge\cdots\wedge f_{n}=0\). \(\square\)

**Proposition 4.2.3** (using Loc, SQC): Let \(X\) be a type.
1. The empty subtype is open in \(X\).
2. \(X\) is open in \(X\).
3. Finite intersections of open subtypes of \(X\) are open subtypes of \(X\).
4. Finite unions of open subtypes of \(X\) are open subtypes of \(X\).
5. Open subtypes are invariant under pointwise double-negation.

Axioms are only needed for the last statement. In Proposition 5.4.2 we will see that open subtypes of open subtypes of a scheme are open in that scheme, which is equivalent to open propositions being closed under dependent sums.

**Proof (of Proposition 4.2.3)** For unions, we can just append lists. For intersections, we note that invertibility of a product is equivalent to invertibility of both factors. Double-negation stability follows from Proposition 4.2.2. \(\Box\)

**Lemma 4.2.4**: Let \(f:X\to Y\) and \(U:Y\to\mathrm{Prop}\) open, then the _preimage_ \(U\circ f:X\to\mathrm{Prop}\) is open.

**Proof** If \(U(y)\) is an open proposition for all \(y:Y\), then \(U(f(x))\) is an open proposition for all \(x:X\). \(\Box\)

**Lemma 4.2.5** (using Loc, SQC): Let \(X\) be affine and \(x:X\), then the proposition \[x\neq y\] is open for all \(y:X\).

**Proof** We show a proposition, so we can assume that \(X\) is a subtype of some \(\mathbb{A}^{n}\) via an embedding \(\iota:X\to\mathbb{A}^{n}\). Then for \(x,y:X\), \(x\neq y\) is equivalent to \(\iota(x)\neq\iota(y)\). But for \(x,y:\mathbb{A}^{n}\), \(x\neq y\) is the open proposition that \(x-y\neq 0\). \(\Box\)

The intersection of all open neighborhoods of a point in an affine scheme is the formal neighborhood of the point. We will see in Lemma 5.2.1 that this also holds for schemes.

**Lemma 4.2.6** (using Loc, SQC): Let \(X\) be affine and \(x:X\), then for all \(y:X\) the proposition \[\prod_{U:X\to\mathrm{Open}}U(x)\to U(y)\] is equivalent to \(\neg\neg(x=y)\).

**Proof** By Proposition 4.2.3, open propositions are double-negation stable, so \(\neg\neg(x=y)\) implies \(\prod_{U:X\to\mathrm{Open}}U(x)\to U(y)\). For the other implication, assume \(\prod_{U:X\to\mathrm{Open}}U(x)\to U(y)\) and \(\neg(x=y)\). The subtype \(z\mapsto(z\neq y)\) is open by Lemma 4.2.5 and holds at \(x\), so it also holds at \(y\), which is a contradiction. Hence \(\neg\neg(x=y)\). \(\Box\)

We now show that our two definitions (Definition 3.1.5, Definition 4.2.1) of open subtypes of an affine scheme are equivalent.

**Theorem 4.2.7** (using Loc, SQC, Z-choice): Let \(X=\mathrm{Spec}\,A\) and \(U:X\to\mathrm{Prop}\) be an open subtype, then \(U\) is affine open, i.e. there merely are \(h_{1},\ldots,h_{n}:X\to R\) such that \(U=D(h_{1},\ldots,h_{n})\).

**Proof** Let \(L(x)\) be the type of finite lists of elements of \(R\), such that one of them being invertible is equivalent to \(U(x)\). By assumption, we know \[\prod_{x:X}\lVert L(x)\rVert.\] So by Z-choice, there merely is a cover of \(X\) by standard opens \(D(f_{i})\) together with \(s_{i}:\prod_{x:D(f_{i})}L(x)\). We compose with the length function for lists to get functions \(l_{i}:D(f_{i})\to\mathbb{N}\). By Theorem 3.3.3, the \(l_{i}\) are bounded. Since we are proving a proposition, we can assume we have actual bounds \(b_{i}:\mathbb{N}\).
So we get functions \(\widetilde{s}_{i}:D(f_{i})\to R^{b_{i}}\) by appending zeros to lists which are too short, i.e. \(\widetilde{s}_{i}(x)\) is \(s_{i}(x)\) with \(b_{i}-l_{i}(x)\) zeros appended. Then one of the entries of \(\widetilde{s}_{i}(x)\) being invertible is still equivalent to \(U(x)\). So if we define \(g_{ij}(x)\coloneqq\pi_{j}(\widetilde{s}_{i}(x))\), we have functions on \(D(f_{i})\), such that \[D(g_{i1},\ldots,g_{ib_{i}})=U\cap D(f_{i}).\] By Lemma 3.1.7, this is enough to solve the problem on all of \(X\). \(\Box\)

This allows us to transfer one important lemma from affine-opens to qc-opens. The subtlety of the following is that while it is clear that the intersection of two qc-opens on a type which are _globally_ defined is open again, it is not clear that the same holds if one qc-open is only defined on the other.

**Lemma 4.2.8** (using Loc, SQC, Z-choice): Let \(X\) be a scheme, \(U\subseteq X\) qc-open in \(X\) and \(V\subseteq U\) qc-open in \(U\), then \(V\) is qc-open in \(X\).

**Proof** Let \(X_{i}=\operatorname{Spec}A_{i}\) be a finite affine cover of \(X\). It is enough to show that the restriction \(V_{i}\) of \(V\) to \(X_{i}\) is qc-open. \(U_{i}\coloneqq X_{i}\cap U\) is qc-open in \(X_{i}\), since \(U\) is qc-open in \(X\). By Theorem 4.2.7, \(U_{i}\) is affine-open in \(X_{i}\), so \(U_{i}=D(f_{1},\ldots,f_{n})\). \(V_{i}\cap D(f_{j})\) is qc-open and hence, again by Theorem 4.2.7, affine-open in \(D(f_{j})\), so by Lemma 3.1.7, \(V_{i}\cap D(f_{j})\) is affine-open in \(X_{i}\). This implies \(V_{i}\cap D(f_{j})\) is qc-open in \(X_{i}\) and so is \(V_{i}=\bigcup_{j}V_{i}\cap D(f_{j})\). \(\Box\)

**Lemma 4.2.9** (using Loc, SQC, Z-choice):
(a) qc-open propositions are closed under dependent sums: if \(P:\operatorname{Open}\) and \(U:P\to\operatorname{Open}\), then the proposition \(\sum_{x:P}U(x)\) is also open.
(b) Let \(X\) be a type. Any open subtype of an open subtype of \(X\) is an open subtype of \(X\).

**Proof** (a) Apply Lemma 4.2.8 to the point \(\operatorname{Spec}R\). (b) Apply (a) pointwise. \(\Box\)

**Remark 4.2.10**: Lemma 4.2.9 means that the (qc-)open propositions constitute a _dominance_ in the sense of [11].

The following fact about the interaction of closed and open propositions is due to David Warn.

**Lemma 4.2.11**: Let \(P\) and \(Q\) be propositions with \(P\) closed and \(Q\) open. Then \(P\to Q\) is equivalent to \(\neg P\lor Q\).

**Proof** We can assume \(P=(f_{1}=\cdots=f_{n}=0)\) and \(Q=(\operatorname{inv}(g_{1})\vee\cdots\vee\operatorname{inv}(g_{m}))\). Then we have:
\[\begin{aligned}(P\to Q)&=(P\to\neg(g_{1}=\cdots=g_{m}=0))&&\text{by Proposition 2.2.3 for }g_{1},\ldots,g_{m}\\ &=\neg(f_{1}=\cdots=f_{n}=g_{1}=\cdots=g_{m}=0)&&\\ &=(\operatorname{inv}(f_{1})\vee\cdots\vee\operatorname{inv}(f_{n})\vee\operatorname{inv}(g_{1})\vee\cdots\vee\operatorname{inv}(g_{m}))&&\text{by Proposition 2.2.3 for }f_{1},\ldots,f_{n},g_{1},\ldots,g_{m}\\ &=\neg P\lor Q&&\text{by Proposition 2.2.3 for }f_{1},\ldots,f_{n}\end{aligned}\]
\(\square\)

## 5 Schemes

### Definition of schemes

In our internal setting, schemes are just types satisfying a property and morphisms of schemes are type-theoretic functions. The following definition _does not_ define schemes in general, but something which is expected to correspond to quasi-compact, quasi-separated schemes, locally of finite type externally.

**Definition 5.1.1**: A type \(X\) is a _(qc-)scheme_ if there merely is a cover by finitely many open subtypes \(U_{i}:X\to\operatorname{Prop}\), such that each of the \(U_{i}\) is affine.
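To illustrate the definition with a minimal example: every affine scheme \(X=\operatorname{Spec}A\) is a scheme, witnessed by the one-element cover \(U_{1}\coloneqq X\), and since \(D(f)=\operatorname{Spec}A_{f}\), the same applies to standard opens, e.g.
\[\mathbb{A}^{1}\setminus\{0\}=D(X)=\operatorname{Spec}R[X]_{X},\]
where the first equality uses that \(x\neq 0\) is equivalent to \(x\) being invertible (Proposition 2.2.3).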
**Definition 5.1.2**: We denote the _type of schemes_ with \(\operatorname{Sch}_{\mathrm{qc}}\). Zariski-choice Z-choice extends to schemes: **Proposition 5.1.3** (using Z-choice): Let \(X\) be a scheme and \(P:X\to\operatorname{Type}\) with \(\prod_{x:X}\lVert P(x)\rVert\), then there merely is a cover \(U_{i}\) by standard opens of the affine parts of \(X\), such that there are \(s_{i}:\prod_{x:U_{i}}P(x)\) for all \(i\). ### General Properties **Lemma 5.2.1** (using Loc, SQC): Let \(X\) be a scheme and \(x:X\), then for all \(y:X\) the proposition \[\prod_{U:X\to\operatorname{Open}}U(x)\to U(y)\] is equivalent to \(\neg\neg(x=y)\). **Proof** By Proposition 4.2.3, open proposition are always double-negation stable, which settles one implication. For the implication \[\left(\prod_{U:X\rightarrow\mathrm{Open}}U(x)\to U(y)\right)\Rightarrow \neg\neg(x=y)\] we can assume that \(x\) and \(y\) are both inside an open affine \(U\) and use that the statement holds for affine schemes by Lemma 4.2.6. \(\Box\) ### Glueing **Proposition 5.3.1** (using Loc, SQC, Z-choice): Let \(X,Y\) be schemes and \(f:U\to X\), \(g:U\to Y\) be embeddings with open images in \(X\) and \(Y\), then the pushout of \(f\) and \(g\) is a scheme. **Proof** As is shown for example here, such a pushout is always \(0\)-truncated. Let \(U_{1},\ldots,U_{n}\) be a cover of \(X\) and \(V_{1},\ldots,V_{m}\) be a cover of \(Y\). By Lemma 4.2.8, \(U_{i}\cap U\) is open in \(Y\), so we can use (large) pushout-recursion to construct a subtype \(\tilde{U}_{i}\), which is open in the pushout and restricts to \(U_{i}\) on \(X\) and \(U_{i}\cap U\) on \(Y\). Symmetrically we define \(\tilde{V}_{i}\) and in total get an open finite cover of the pushout. The pieces of this new cover are equivalent to their counterparts in the covers of \(X\) and \(Y\), so they are affine as well. \(\Box\) ### Subschemes **Definition 5.4.1**: Let \(X\) be a scheme. A _subscheme_ of \(X\) is a subtype \(Y:X\rightarrow\mathrm{Prop}\), such that \(\sum Y\) is a scheme. **Proposition 5.4.2** (using Loc, SQC, Z-choice): Any open subtype of a scheme is a scheme. **Proof** Using Theorem 4.2.7. \(\Box\) **Proposition 5.4.3** (using Loc, SQC, Z-choice): Any closed subtype \(A:X\rightarrow\mathrm{Prop}\) of a scheme \(X\) is a scheme. **Proof** Any open subtype of \(X\) is also open in \(A\). So it is enough to show, that any affine open \(U_{i}\) of \(X\), has affine intersection with \(A\). But \(U_{i}\cap A\) is closed in \(U_{i}\) and therefore affine by Lemma 4.1.4. \(\Box\) ### Equality types **Lemma 5.5.1**: Let \(X\) be an affine scheme and \(x,y:X\), then \(x=_{X}y\) is an affine scheme and \(((x,y):X\times X)\mapsto x=_{X}y\) is a closed subtype of \(X\times X\). **Proof** Any affine scheme is merely embedded into \(\mathbb{A}^{n}\) for some \(n:\mathbb{N}\). The proposition \(x=y\) for elements \(x,y:\mathbb{A}^{n}\) is equivalent to \(x-y=0\), which is equivalent to all entries of this vector being zero. The latter is a closed proposition. \(\Box\) **Proposition 5.5.2** (using Loc, SQC, Z-choice): Let \(X\) be a scheme. The equality type \(x=_{X}y\) is a scheme for all \(x,y:X\). **Proof** Let \(x,y:X\) and \(U\subseteq X\) be an affine open containing \(x\). Then \(U(y)\wedge x=y\) is equivalent to \(x=y\), so it is enough to show that \(U(y)\wedge x=y\) is a scheme. As a open subscheme of the point, \(U(y)\) is a scheme and \((x:U(y))\mapsto x=y\) defines a closed subtype by Lemma 5.5.1. But this closed subtype is a scheme by Proposition 5.4.3. 
\(\Box\)

### Dependent sums

**Theorem 5.6.1** (using Loc, SQC, Z-choice): Let \(X\) be a scheme and for any \(x:X\), let \(Y_{x}\) be a scheme. Then the dependent sum \[((x:X)\times Y_{x})\equiv\sum_{x:X}Y_{x}\] is a scheme.

**Proof** We start with an affine \(X=\operatorname{Spec}A\) and \(Y_{x}=\operatorname{Spec}B_{x}\). Locally on \(U_{i}=D(f_{i})\), for a Zariski-cover \(f_{1},\ldots,f_{l}\) of \(X\), we have \(B_{x}=R[X_{1},\ldots,X_{n_{i}}]/(g_{i,x,1},\ldots,g_{i,x,m_{i}})\) with polynomials \(g_{i,x,j}\). In other words, \(\operatorname{Spec}B_{x}\) is the closed subtype of \(\mathbb{A}^{n_{i}}\) where the functions \(g_{i,x,1},\ldots,g_{i,x,m_{i}}\) vanish. By Lemma 3.2.2, the product \[V_{i}\coloneqq U_{i}\times\mathbb{A}^{n_{i}}\] is affine. The type \((x:U_{i})\times\operatorname{Spec}B_{x}\subseteq V_{i}\) is affine, since it is the zero set of the functions \[((x,y):V_{i})\mapsto g_{i,x,j}(y)\] Furthermore, \(W_{i}\coloneqq(x:U_{i})\times\operatorname{Spec}B_{x}\) is open in \((x:X)\times Y_{x}\), since \(W_{i}(x)\) is equivalent to \(U_{i}(\pi_{1}(x))\), which is an open proposition. This settles the affine case.

We will now assume that \(X\) and all \(Y_{x}\) are general schemes. We pass again to a cover of \(X\) by affine opens \(U_{1},\ldots,U_{n}\). We can choose the latter cover such that for each \(i\) and \(x:U_{i}\), the \(Y_{\pi_{1}(x)}\) are covered by \(l_{i}\) many open affine pieces \(V_{i,x,1},\ldots,V_{i,x,l_{i}}\) (by Theorem 3.3.3). Then \(W_{i,j}\coloneqq(x:U_{i})\times V_{i,x,j}\) is affine by what we established above. It is also open. To see this, let \((x,y):((x:X)\times Y_{x})\). We want to show that \((x,y)\) being in \(W_{i,j}\) is an open proposition. We have to be a bit careful, since the open proposition \(V_{i,x,j}\) is only defined for \(x:U_{i}\). So the proposition we are after is \((z:U_{i}(x))\times V_{i,z,j}(y)\). But this proposition is open by Lemma 4.2.9. \(\square\)

It can be shown that if \(X\) is affine and for \(Y:X\to\operatorname{Sch}_{\mathrm{qc}}\), \(Y_{x}\) is affine for all \(x:X\), then \((x:X)\times Y_{x}\) is affine. There is an easy proof using cohomology.

**Corollary 5.6.2**: Let \(X\) be a scheme. For any other scheme \(Y\) and any map \(f:Y\to X\), the fiber map \((x:X)\mapsto\operatorname{fib}_{f}(x)\) has values in the type of schemes \(\operatorname{Sch}_{\mathrm{qc}}\). Mapping maps of schemes to their fiber maps is an equivalence of types \[\left(\sum_{Y:\operatorname{Sch}_{\mathrm{qc}}}(Y\to X)\right)\simeq(X\to\operatorname{Sch}_{\mathrm{qc}}).\]

**Proof** By univalence, there is an equivalence \[\left(\sum_{Y:\operatorname{Type}}(Y\to X)\right)\simeq(X\to\operatorname{Type}).\] From left to right, the equivalence is given by turning an \(f:Y\to X\) into \(x\mapsto\operatorname{fib}_{f}(x)\); from right to left it is given by taking the dependent sum. So we just have to note that both constructions preserve schemes. From left to right, this is Theorem 5.6.4; from right to left, this is Theorem 5.6.1. \(\square\)

Subschemes are classified by propositional schemes:

**Corollary 5.6.3**: Let \(X\) be a scheme. \(Y:X\to\operatorname{Prop}\) is a subscheme if and only if \(Y_{x}\) is a scheme for all \(x:X\).

**Proof** Restriction of Corollary 5.6.2. \(\square\)

We will now conclude that the pullback of a cospan of schemes is a scheme.
**Theorem 5.6.4** (using Loc, SQC, Z-choice): Let \(X\), \(Y\) and \(Z\) be schemes with maps \(f:X\to Z\) and \(g:Y\to Z\), then the _pullback_ \(X\times_{Z}Y\) is also a scheme.

**Proof** The type \(X\times_{Z}Y\) is given as the following iterated dependent sum: \[\sum_{x:X}\sum_{y:Y}f(x)=g(y).\] The innermost type, \(f(x)=g(y)\), is the equality type in the scheme \(Z\) and by Proposition 5.5.2 a scheme. By applying Theorem 5.6.1 twice, we prove that the iterated dependent sum is a scheme. \(\square\)

## 6 Projective space

### Construction of projective spaces

We give two definitions of projective space, which differ only in size. First, we will define \(n\)-dimensional projective space as the type of lines in an \((n+1)\)-dimensional vector space \(V\). This gives a good mapping-in property: maps from a type \(X\) into projective space are then just families of lines in \(V\) on \(X\). Or in the words of traditional algebraic geometry: projective \(n\)-space is a fine moduli space for lines in \(V\).

The second construction is closer to what can be found in a typical introductory textbook on algebraic geometry (see for example [10, Section I.2]), i.e. projective \(n\)-space is constructed as a quotient of \(\mathbb{A}^{n+1}\setminus\{0\}\). We will show that this quotient is a scheme, again analogous to what can be found in textbooks. In both construction and proof, we do not have to pass to an algebraic representation and can work directly with the types of interest. Finally, in Proposition 6.1.6 we show that the two constructions are equivalent.

**Definition 6.1.1**:
1. An \(n\)-dimensional \(R\)-_vector space_ is an \(R\)-module \(V\), such that \(\|V=R^{n}\|\).
2. We write \(R\)-\(\mathrm{Vect}_{n}\) for the type of these vector spaces and \(V\setminus\{0\}\) for the type \[\sum_{x:V}x\neq 0\]
3. A _vector bundle_ on a type \(X\) is a map \(V:X\to R\)-\(\mathrm{Vect}_{n}\).

The following defines projective space as the space of lines in a vector space. This is a large type. We will see below that the second, equivalent definition is small.

**Definition 6.1.2**:
1. A _line_ in an \(R\)-vector space \(V\) is a subtype \(L:V\to\mathrm{Prop}\), such that there exists an \(x:V\setminus\{0\}\) with \[\prod_{y:V}\left(L(y)\Leftrightarrow\exists c:R.y=c\cdot x\right)\]
2. The space of all lines in a fixed \(n\)-dimensional vector space \(V\) is the _projectivization_ of \(V\): \[\mathbb{P}(V)\equiv\sum_{L:V\to\mathrm{Prop}}L\text{ is a line}\]
3. _Projective \(n\)-space_ \(\mathbb{P}^{n}\equiv\mathbb{P}(\mathbb{A}^{n+1})\) is the projectivization of \(\mathbb{A}^{n+1}\).

**Proposition 6.1.3**: For any vector space \(V\) and line \(L\subseteq V\), \(L\) is \(1\)-dimensional in the sense that \(\|L=_{R\text{-}\mathrm{Mod}}R\|\).

**Proof** Let \(L\) be a line. We merely have \(x:V\setminus\{0\}\) such that \[\prod_{y:V}\left(L(y)\Leftrightarrow\exists c:R.y=c\cdot x\right)\] We may replace the "\(\exists\)" with a "\(\sum\)", since \(c\) is uniquely determined for any \(x,y\) (using that \(x\) has an invertible entry by Proposition 2.2.3). This means we can construct the map \(\alpha\mapsto\alpha\cdot x:R\to L\) and it is an equivalence. \(\Box\)

We now give the small construction:

**Definition 6.1.4** (using Loc, SQC): Let \(n:\mathbb{N}\). _Projective \(n\)-space_ \(\mathbb{P}^{n}\) is the set quotient of the type \(\mathbb{A}^{n+1}\setminus\{0\}\) by the relation \[x\sim y\coloneqq\sum_{\lambda:R}\lambda x=y.\] By Proposition 2.2.3, the non-zero vector \(y\) has an invertible entry, so that the right-hand side is a proposition and \(\lambda\) is a unit.
We write \([x_{0}:\cdots:x_{n}]:\mathbb{P}^{n}\) for the equivalence class of \((x_{0},\ldots,x_{n}):\mathbb{A}^{n+1}\setminus\{0\}\). **Theorem 6.1.5** (using Loc, SQC): \(\mathbb{P}^{n}\) is a scheme. **Proof** Let \(U_{i}([x_{0}:\cdots:x_{n}])\equiv(x_{i}\neq 0)\). This is well-defined, since the proposition is invariant under multiplication by a unit. Furthermore, \(U_{i}\) is open and the \(U_{i}\) form a cover, by the generalized field property (Proposition 2.2.3). So what remains to be shown, is that the \(U_{i}\) are affine. We will show that \(U_{i}=\mathbb{A}^{n}\). As an intermediate step, we have: \[U_{i}=\{(x_{0},\ldots,x_{n}):\mathbb{A}^{n+1}\mid x_{i}=1\}\] by mapping \([x_{0}:\cdots:x_{n}]\) with \(x_{i}\neq 0\) to \(\left(\frac{x_{0}}{x_{i}},\ldots,\frac{x_{n}}{x_{i}}\right)\) and conversely, \((x_{0},\ldots,x_{n})\) with \(x_{i}=1\) to \([x_{0}:\cdots:x_{i-1}:1:x_{i+1}:\cdots:x_{n}]\in U_{i}\). But then, \(\{(x_{0},\ldots,x_{n}):\mathbb{A}^{n+1}\mid x_{i}=1\}\) is equivalent to \(\mathbb{A}^{n}\) by leaving out the \(i\)-th component, so the \(U_{i}\) are affine. \(\Box\) To conclude with the constructions of projective space, we show that our two constructions are equivalent: **Proposition 6.1.6** (using Loc, SQC): For all \(n:\mathbb{N}\), the scheme \(\mathbb{P}^{n}\) as defined in Definition 6.1.4, is equivalent to \(\mathbb{P}(\mathbb{A}^{n+1})\) as defined in Definition 6.1.2. **Proof** Let \(\varphi:\mathbb{P}^{n}\to\{\text{lines in }\mathbb{A}^{n+1}\}\) be given by mapping \([x_{0}:\cdots:x_{n}]\) to \(\langle(x_{0},\ldots,x_{n})\rangle\subseteq\mathbb{A}^{n+1}\), i.e. the line generated by the vector \(x\coloneqq(x_{0},\ldots,x_{n})\). The map is well-defined, since multiples of \(x\) generate the same line. Then \(\varphi\) is surjective, since for any line \(L\subseteq\mathbb{A}^{n+1}\), there merely is a non-zero \(x\in L\), that we can take as a preimage. To conclude, we note that \(\varphi\) is also an embedding. So let \(\varphi([x])=\varphi([y])\). Then, since \(\langle x\rangle=\langle y\rangle\), there is a \(\lambda\in R^{\times}\), such that \(x=\lambda y\), so \([x]=[y]\). \(\Box\) Let us prove some basic facts about equality of points in \(\mathbb{P}^{n}\). **Lemma 6.1.7** (using Loc, SQC): For two points \([x_{0}:\cdots:x_{n}],[y_{0}:\cdots:y_{n}]:\mathbb{P}^{n}\) we have: \[[x]=[y]\Leftrightarrow\prod_{i\neq j}x_{i}y_{j}=y_{i}x_{j}.\] And dually: \[[x]\neq[y]\Leftrightarrow\bigvee_{i\neq j}x_{i}y_{j}\neq y_{i}x_{j}.\] As a consequence, \([x]=[y]\) is closed and \([x]\neq[y]\) is open. **Proof**\([x]\) and \([y]\) are equal, if and only if there merely is a \(\lambda:R^{\times}\), such that \(\lambda x=y\). By calculation, if there is such a \(\lambda\), we always have \(x_{i}y_{j}=y_{i}x_{j}\). So let \(x_{i}y_{j}=y_{i}x_{j}\) for all \(i\neq j\). Then, in particular, there are \(i,j\) such that \(x_{i}\neq 0\) and \(y_{j}\neq 0\). If \(i=j\), we define \(\lambda\equiv\frac{x_{i}}{y_{i}}\). If \(i\neq j\), we have \(x_{i}y_{j}=y_{i}x_{j}\) and therefore \(y_{i}\neq 0\) and \(x_{j}\neq 0\), so we can also set \(\lambda\coloneqq\frac{x_{i}}{y_{i}}\). By calculation, we have \(\lambda y=x\). The dual statement follows by Proposition 2.2.3. \(\Box\) **Lemma 6.1.8** (using Loc, SQC): Inequality of points of \(\mathbb{P}^{n}\) is an _apartness relation_. That means the following holds: * \(\forall x:\mathbb{P}^{n}\cdot\neg(x\neq x)\). * \(\forall x,y:\mathbb{P}^{n}\cdot x\neq y\Rightarrow y\neq x\). 
* If \(x\neq y\), we have \(\forall z:\mathbb{P}^{n}\cdot x\neq z\lor z\neq y\). **Proof** The first two statements hold in general for inequality. For the third statement, let \(x,y,z:\mathbb{P}^{n}\). Note that if \(x=z\) and \(z=y\), it follows that \(x=y\). So we have \(\neg(x=y)\Rightarrow\neg(x=z\wedge z=y)\). By Lemma 6.1.7, \(x=y\) and \(x=z\wedge z=y\) are both equivalent to the statement that some vector with components in \(R\) is zero, so we can replace negated equality, with existence of a non-zero element, or more explicitly, the following are equivalent: \[\neg(x=y)\Rightarrow\neg(x=z\wedge z=y)\] \[\neg\left(\prod_{i\neq j}x_{i}y_{j}=y_{i}x_{j}\right)\Rightarrow\neg \left(\prod_{i\neq j}x_{i}z_{j}=z_{i}x_{j}\wedge\prod_{i\neq j}y_{i}z_{j}=z_{i} y_{j}\right)\] \[\left(\bigvee_{i\neq j}x_{i}y_{j}\neq y_{i}x_{j}\right)\Rightarrow \left(\bigvee_{i\neq j}x_{i}z_{j}\neq z_{i}x_{j}\vee\bigvee_{i\neq j}z_{i}y_{j }\neq y_{i}z_{j}\right)\] \[(x\neq y)\Rightarrow(x\neq z)\vee(z\neq y)\] **Example 6.1.9** (using Loc, SQC): Let \(s:\mathbb{P}^{1}\to\mathbb{P}^{1}\) be given by \(s([x:y])\coloneqq[x^{2}:y^{2}]\) (see Definition 6.1.4 for notation). Let us compute some fibers of \(s\). The fiber \(\operatorname{fib}_{s}([0:1])\) is by definition the type \[\sum_{[x:y]:\mathbb{P}^{1}}[x^{2}:y^{2}]=[0:1].\] So for any \(x:R\) with \(x^{2}=0\), \([x:1]:\operatorname{fib}_{s}([0:1])\) and any other point \((x,y)\) such that \([x:y]\) is in \(\operatorname{fib}_{s}([0:1])\), already yields an equivalent point, since \(y\) has to be invertible. This shows that the fiber over \([0:1]\) is a first order disk, i.e. \(\mathbb{D}(1)=\{x:R|x^{2}=0\}\). The same applies to the point \([1:0]\). To analyze \(\operatorname{fib}_{s}([1:1])\), let us assume \(2\neq 0\) (in \(R\)). Then we know, the two points \([1:-1]\) and \([1:1]\) are in \(\operatorname{fib}_{s}([1:1])\) and they are different. It will turn out, that any point in \(\operatorname{fib}_{s}([1:1])\) is equal to one of those two. For any \([x^{\prime}:y^{\prime}]:\operatorname{fib}_{s}([1:1])\), we can assume \([x^{\prime}:y^{\prime}]=[x:1]\) and \(x^{2}=1\), or equivalently \((x-1)(x+1)=0\). By Lemma 6.1.8, inequality in \(\mathbb{P}^{n}\) is an apartness relation. So for each \(x:R\), we know \(x-1\) is invertible or \(x+1\) is invertible. But this means that for any \(x:R\) with \((x-1)(x+1)=0\), that \(x=1\) or \(x=-1\). While the fibers are not the same in general, they are all affine and have the same size in the sense that for each \(\operatorname{Spec}A_{x}\coloneqq\operatorname{fib}_{s}(x)\), we have that \(A_{x}\) is free of rank \(2\) as an \(R\)-module. To see this, let us first note, that \(\operatorname{fib}_{s}([x:y])\) is completely contained in an affine subset of \(\mathbb{P}^{1}\). This is a proposition, so we can use that either \(x\) or \(y\) is invertible. Let us assume without loss of generality, that \(y\) is invertible, then \[\operatorname{fib}_{s}([x:y])=\operatorname{fib}_{s}([\frac{x}{y}:1]).\] The second component of each element in the fiber has to be invertible, so it is contained in an affine subset, which we identify with \(\mathbb{A}^{1}\). Let us rewrite with \(z\coloneqq\frac{x}{y}\). Then \[\operatorname{fib}_{s}([z:1])=\sum_{a:\mathbb{A}^{1}}(a^{2}=z)=\operatorname{ Spec}R[X]/(X^{2}-z)\] and \(R[X]/(X^{2}-z)\) is free of rank \(2\) as an \(R\)-module. ### Functions on \(\mathbb{P}^{n}\) Here we prove the classical fact that all functions \(\mathbb{P}^{n}\to R\) are constant. 
We start with the case \(n=1\).

**Lemma 6.2.1** (using Loc, SQC): All functions \(\mathbb{P}^{1}\to R\) are constant.

**Proof** Consider the affine cover of \(\mathbb{P}^{1}=U_{0}\cup U_{1}\) as in the proof of Theorem 6.1.5. Both \(U_{0}\) and \(U_{1}\) are isomorphic to \(\mathbb{A}^{1}\) and the intersection \(U_{0}\cap U_{1}\) is \(\mathbb{A}^{1}\setminus\{0\}\), embedded in \(U_{0}\) by \(x\mapsto x\) and in \(U_{1}\) by \(x\mapsto\frac{1}{x}\). So \(\mathbb{P}^{1}\) is the pushout of these two embeddings \(U_{0}\hookleftarrow\mathbb{A}^{1}\setminus\{0\}\hookrightarrow U_{1}\). If we apply the functor \(X\mapsto R^{X}\) to this pushout square, we obtain a pullback square of \(R\)-algebras, and we can insert the known \(R\)-algebras for the affine schemes involved: \(R^{U_{0}}=R[X]\), \(R^{U_{1}}=R[Y]\) and \(R^{U_{0}\cap U_{1}}=R[X,Y]/(1-XY)\), where the different variable names \(X\) and \(Y\) indicate the resulting homomorphisms, given by \(X\mapsto X\) and \(Y\mapsto\frac{1}{X}\). Now it is an algebraic computation, understanding the elements of \(R[X,Y]/(1-XY)\) as Laurent polynomials, to see that the pullback is the algebra \(R\), so we have \(R^{\mathbb{P}^{1}}=R\) as desired. \(\square\)

**Lemma 6.2.2** (using Loc, SQC): Let \(p\neq q\in\mathbb{P}^{n}\) be given. Then there exists a map \(f:\mathbb{P}^{1}\to\mathbb{P}^{n}\) such that \(f([0:1])=p\), \(f([1:0])=q\).

**Proof** What we want to prove is a proposition, so we can assume chosen \(a,b\in\mathbb{A}^{n+1}\setminus\{0\}\) with \(p=[a]\), \(q=[b]\). Then we set \[f([x:y])\coloneqq[xa+yb].\] Let us check that \(xa+yb\neq 0\). By Proposition 2.2.3, we have that \(x\) or \(y\) is invertible and both \(a\) and \(b\) have at least one invertible entry. If \(xa=-yb\) then it follows that \(x\) and \(y\) are both invertible and therefore \(a\) and \(b\) would be linearly equivalent, contradicting the assumption \(p\neq q\). Of course \(f\) is also well-defined with respect to linear equivalence in the pair \((x,y)\). \(\square\)

**Lemma 6.2.3** (using Loc, SQC): Let \(n\geq 1\). For every point \(p\in\mathbb{P}^{n}\), we have \(p\neq[1:0:0:\ldots]\) or \(p\neq[0:1:0:\ldots]\).

**Proof** This is a special case of Lemma 6.1.8, but we can also give a very direct proof: Let \(p=[a]\) with \(a\in\mathbb{A}^{n+1}\setminus\{0\}\). By Proposition 2.2.3, there is an \(i\in\{0,\ldots,n\}\) with \(a_{i}\neq 0\). If \(i=0\) then \(p\neq[0:1:0:\ldots]\), if \(i\geq 1\) then \(p\neq[1:0:0:\ldots]\). \(\square\)

**Theorem 6.2.4** (using Loc, SQC): All functions \(\mathbb{P}^{n}\to R\) are constant, that is, \[H^{0}(\mathbb{P}^{n},R)\coloneqq(\mathbb{P}^{n}\to R)=R.\]

**Proof** Let \(f:\mathbb{P}^{n}\to R\) be given. For any two distinct points \(p\neq q:\mathbb{P}^{n}\), we can apply Lemma 6.2.2 to merely find a map \(g:\mathbb{P}^{1}\to\mathbb{P}^{n}\) with \(g([0:1])=p\) and \(g([1:0])=q\); then \(\widetilde{f}\coloneqq f\circ g:\mathbb{P}^{1}\to R\) satisfies \(\widetilde{f}([0:1])=f(p)\) and \(\widetilde{f}([1:0])=f(q)\). Then we see \(f(p)=f(q)\) by Lemma 6.2.1. In particular, we have \(f([1:0:0:\ldots])=f([0:1:0:\ldots])\). And then, by Lemma 6.2.3, we get \(f(p)=f([1:0:0:\ldots])\) for every \(p:\mathbb{P}^{n}\). \(\square\)

**Remark 6.2.5**: Another proof of Theorem 6.2.4 goes as follows: A function \(f:\mathbb{P}^{n}\to R\) is by definition of \(\mathbb{P}^{n}\) (Definition 6.1.4) given by an \(R^{\times}\)-invariant function \(g:\mathbb{A}^{n+1}\setminus\{0\}\to R\). But it is possible to show that the restriction function \[(\mathbb{A}^{n+1}\to R)\xrightarrow{\sim}(\mathbb{A}^{n+1}\setminus\{0\}\to R)\] is bijective (as long as \(n\geq 1\)), so \(g\) corresponds to a function \(\widetilde{g}:\mathbb{A}^{n+1}\to R\) which is constant on every subset of the form \(\{\,rx\mid r:R^{\times}\,\}\) for \(x:\mathbb{A}^{n+1}\setminus\{0\}\).
But then it is constant on the whole line \(\{\,rx\mid r:R\,\}\), since the restriction function \((\mathbb{A}^{1}\to R)\hookrightarrow(\mathbb{A}^{1}\setminus\{0\}\to R)\) is injective. From this it follows that \(f\) is constant with value \(\widetilde{g}(0)\). A third possibility is to directly generalize the proof of Lemma 6.2.1 to arbitrary \(n\): The set \(\mathbb{P}^{n}\) is covered by the subsets \(U_{0},\ldots,U_{n}\), so it is the colimit (in the category of sets) of a diagram of finite intersections of them, which are all affine schemes. The set of functions \(\mathbb{P}^{n}\to R\) is thus the limit of a corresponding diagram of algebras. These algebras are most conveniently described as sub-algebras of the degree \(0\) part of the graded algebra \(R[X_{0},\ldots,X_{n}]_{X_{0}\ldots X_{n}}\), for example \((U_{0}\to R)=R[\frac{X_{1}}{X_{0}},\ldots,\frac{X_{n}}{X_{0}}]\). Then the limit can be computed to be \(R\). ### Line Bundles We will construct Serre's twisting sheaves in this section, starting with the "minus first". The following works because of Proposition 6.1.3. We will also give some indication on which line bundles exist in general. **Definition 6.3.1**: Let \(X\) be a type. A _line bundle_ is a map \(\mathcal{L}:X\to R\)-Mod, such that \[\prod_{x:X}\lVert\mathcal{L}_{x}=_{R\text{-Mod}}R\rVert.\] The _trivial line bundle_ on \(X\) is the line bundle \(X\to R\text{-Mod},x\mapsto R\), and when we say that a line bundle \(\mathcal{L}\) is trivial we mean that \(\mathcal{L}\) is equal to the trivial line bundle, or equivalently \(\lVert\prod_{x:X}\mathcal{L}_{x}=_{R\text{-Mod}}R\rVert\). **Definition 6.3.2**: 1. The _tautological bundle_ is the line bundle \(\mathcal{O}_{\mathbb{P}^{n}}(-1):\mathbb{P}^{n}\to R\text{-Mod}\), given by \[(L:\mathbb{P}^{n})\mapsto L.\] 2. The _dual_\(\mathcal{L}^{\vee}\) of a line bundle \(\mathcal{L}:\mathbb{P}^{n}\to R\text{-Mod}\), is the line bundle given by \[(x:\mathbb{P}^{n})\mapsto\operatorname{Hom}_{R\text{-Mod}}(\mathcal{L}_{x},R).\] 3. The _tensor product_ of \(R\)-module bundles \(\mathcal{F}\otimes\mathcal{G}\) on a scheme \(X\) is given by pointwise taking the tensor product of \(R\)-modules. 4. For \(k:\mathbb{Z}\), the \(k\)-th _Serre twisting sheaf_\(\mathcal{O}_{\mathbb{P}^{n}}(k)\) on \(\mathbb{P}^{n}\) is given by taking the \(-k\)-th tensor power of \(\mathcal{O}_{\mathbb{P}^{n}}(-1)\) for negative \(k\) and the \(k\)-th tensor power of \(\mathcal{O}_{\mathbb{P}^{n}}(-1)^{\vee}\) otherwise. With this, we expect that many classical results can be reproduced. It is however, far from clear, in what sense we can expect that every line bundle on \(\mathbb{P}^{n}\) is a Serre twisting sheaf. The expectation is, that this is not the case, since even on \(\mathbb{A}^{1}\), we should not expect that all line bundles are trivial. The background of this expectation is the external fact, that the relative Picard group of the affine line over the spectrum of the base ring is not always trivial. We will proceed by showing the claim about line bundles on \(\mathbb{A}^{1}\), which will require some preparation. **Lemma 6.3.3** (using Loc, SQC, Z-choice): For every open subset \(U:\mathbb{A}^{1}\to\operatorname{Prop}\) of \(\mathbb{A}^{1}\) we have not not: either \(U=\emptyset\) or \(U=D((X-a_{1})\dots(X-a_{n}))=\mathbb{A}^{1}\setminus\{a_{1},\dots,a_{n}\}\) for pairwise distinct numbers \(a_{1},\dots,a_{n}:R\). 
**Proof** For \(U=D(f)\), this follows from Lemma 2.2.4 because \(D(\alpha\cdot(X-a_{1})^{e_{1}}\dots(X-a_{n})^{e_{n}})=D((X-a_{1})\dots(X-a_{n}))\). In general, we have \(U=D(f_{1})\cup\dots\cup D(f_{n})\) by Theorem 4.2.7, so we do not not get (that \(U=\emptyset\) or) a list of elements \(a_{1},\dots,a_{n}:R\) such that \(U=\mathbb{A}^{1}\setminus\{a_{1},\dots,a_{n}\}\). Then we can not not get rid of any duplicates in the list. \(\square\)

**Lemma 6.3.4** (using Loc, SQC, Z-choice): Let \(U,V:\mathbb{A}^{1}\to\operatorname{Prop}\) be two open subsets and let \(f:U\cap V\to R^{\times}\) be a function. Then there do not not exist functions \(g:U\to R^{\times}\) and \(h:V\to R^{\times}\) such that \(f(x)=g(x)h(x)\) for all \(x:U\cap V\).

**Proof** By Lemma 2.2.4, we can assume
\[U\cup V =D((X-a_{1})\dots(X-a_{k})),\]
\[U =D((X-a_{1})\dots(X-a_{k})(X-b_{1})\dots(X-b_{l})),\]
\[V =D((X-a_{1})\dots(X-a_{k})(X-c_{1})\dots(X-c_{m})),\]
\[U\cap V =D((X-a_{1})\dots(X-a_{k})(X-b_{1})\dots(X-b_{l})(X-c_{1})\dots(X-c_{m})),\]
where all linear factors are distinct. Then every function \(f:U\cap V\to R^{\times}\) can by (SQC), Lemma 2.2.4 and comparing linear factors not not be written in the form
\[f=\alpha\cdot(X-a_{1})^{e_{1}}\dots(X-a_{k})^{e_{k}}(X-b_{1})^{e_{1}^{\prime}}\dots(X-b_{l})^{e_{l}^{\prime}}(X-c_{1})^{e_{1}^{\prime\prime}}\dots(X-c_{m})^{e_{m}^{\prime\prime}}\]
with \(\alpha:R^{\times}\), \(e_{i},e_{i}^{\prime},e_{i}^{\prime\prime}:\mathbb{Z}\). Other linear factors can not appear, since they do not represent invertible functions on \(U\cap V\). Now we can write \(f=gh\) as desired, for example with
\[g=\alpha\cdot(X-a_{1})^{e_{1}}\dots(X-a_{k})^{e_{k}}(X-b_{1})^{e_{1}^{\prime}}\dots(X-b_{l})^{e_{l}^{\prime}},\]
\[h=(X-c_{1})^{e_{1}^{\prime\prime}}\dots(X-c_{m})^{e_{m}^{\prime\prime}}.\]
\(\square\)

**Theorem 6.3.5** (using Loc, SQC, Z-choice): Every \(R^{\times}\)-torsor on \(\mathbb{A}^{1}\) (Definition 7.3.1) does not not have a global section.

**Proof** Let \(T\) be an \(R^{\times}\)-torsor on \(\mathbb{A}^{1}\), that is, for every \(x:\mathbb{A}^{1}\), \(T_{x}\) is a set with a free and transitive \(R^{\times}\)-action and \(\|T_{x}\|\). By (Z-choice), we get a cover of \(\mathbb{A}^{1}\) by open subsets \(\mathbb{A}^{1}=\bigcup_{i=1}^{n}U_{i}\) and local sections \(s_{i}:(x:U_{i})\to T_{x}\) of the bundle \(T\). From this we can not not construct a global section by induction on \(n\): Given any two local sections \(s_{i},s_{j}\) defined on \(U_{i},U_{j}\), let \(f:U_{i}\cap U_{j}\to R^{\times}\) be the unique function with \(f(x)s_{i}(x)=s_{j}(x)\) for all \(x:U_{i}\cap U_{j}\). Then by Lemma 6.3.4, we do not not find \(g:U_{i}\to R^{\times}\), \(h:U_{j}\to R^{\times}\) such that the sections \(x\mapsto g(x)s_{i}(x)\) and \(x\mapsto h(x)^{-1}s_{j}(x)\), defined on \(U_{i}\) respectively \(U_{j}\), agree on \(U_{i}\cap U_{j}\). This yields a section \(\widetilde{s}:(x:U_{i}\cup U_{j})\to T_{x}\) by Lemma 1.2.2 and we can replace \(U_{i}\) and \(U_{j}\) by \(U_{i}\cup U_{j}\) in the cover. Finally, when we get to \(n=1\), we have \(U_{1}=\mathbb{A}^{1}\) and the global section \(s_{1}:(x:\mathbb{A}^{1})\to T_{x}\). \(\square\)

**Corollary 6.3.6** (using Loc, SQC, Z-choice): Every line bundle on \(\mathbb{A}^{1}\) is not not trivial.
* Given a line bundle \(\mathcal{L}\), we can construct an \(R^{\times}\) torsor \[x\mapsto\mathcal{L}_{x}\setminus\{0\}.\] Note that there is a well-defined \(R^{\times}\) action on \(M\setminus\{0\}\) for every \(R\) module \(M\), and the action on \(\mathcal{L}_{x}\setminus\{0\}\) is free and transitive and we have \(\|\mathcal{L}_{x}\setminus\{0\}\|\) since we merely have \(\mathcal{L}_{x}=R\) as \(R\) modules. By Theorem 6.3.5, there not not is a global section of this torsor, so we have a section \(s:(x:\mathbb{A}^{1})\to\mathcal{L}_{x}\) with \(s(x)\neq 0\) for all \(x:\mathbb{A}^{1}\). But this means that the line bundle \(\mathcal{L}\) is trivial, since we can build an identification \(\mathcal{L}_{x}=R\) by sending \(s(x)\) to \(1\). \(\square\) We now transfer this result to line bundles on \(\mathbb{P}^{1}\). **Lemma 6.3.7** (using Loc, SQC): Every invertible element of the ring of Laurent polynomials \(R[X]_{X}\) is not not of the form \(\alpha X^{n}\) for some \(\alpha:R^{\times}\) and \(n:\mathbb{Z}\). * Every element \(f:R[X]_{X}\) is of the form \(f=\sum_{i=m}^{m+n}a_{i}X^{i}\) for some \(m:\mathbb{Z}\), \(n\geq 0\) and \(a_{i}:R\). Every \(a_{i}\) is not not either \(0\) or invertible. Thus we can assume that either \(f=0\) or both \(a_{m}\) and \(a_{m+n}\) are invertible. If \(f\) is invertible, we can exclude \(f=0\), and it remains to show that \(n=0\). Applying the same reasoning to \(g\), where \(fg=1\), we see that \(n>0\) is indeed impossible. \(\square\) **Theorem 6.3.8** (using Loc, SQC, Z-choice): For every line bundle \(\mathcal{L}\) on \(\mathbb{P}^{1}\), there not not exists a \(k:\mathbb{Z}\) such that \(\mathcal{L}=\mathcal{O}_{\mathbb{P}^{1}}(k)\). * Let \(\mathcal{L}:\mathbb{P}^{1}\to R\)-Mod be a line bundle on \(\mathbb{P}^{1}\). By pushout recursion, \(\mathcal{L}\) is given by two line bundles \(\mathcal{L}_{0},\mathcal{L}_{1}:\mathbb{A}^{1}\to R\)-Mod and a glueing function \(g:(x:\mathbb{A}^{1}\setminus\{0\})\to\mathcal{L}_{0}(x)=\mathcal{L}_{1}(\frac{ 1}{x})\). Since we are proving a double negation, we can assume identifications \(p_{0}:(x:\mathbb{A}^{1})\to\mathcal{L}_{0}=R^{1}\) and \(p_{1}:(x:\mathbb{A}^{1})\to\mathcal{L}_{1}=R^{1}\) by Corollary 6.3.6. Now we can define \(g^{\prime}:(x:\mathbb{A}^{1}\setminus\{0\})\to R^{1}=R^{1}\) by \(g^{\prime}(x)\coloneqq p_{0}^{-1}(x)\cdot g(x)\cdot p_{1}(\frac{1}{x})\). By synthetic quasi-coherence, equivalently, \(g^{\prime}\) is an invertible element of \(R[X]_{X}\) and therefore by Lemma 6.3.7 given by \(\alpha X^{n}\) for some \(\alpha:R^{\times}\) and \(n:\mathbb{Z}\). We can assume \(\alpha=1\), since this just amounts to concatenating our final equality with the automorphism of line bundles given by \(\alpha^{-1}\) at all points. By explicit calculation, the tautological bundle \(\mathcal{O}_{\mathbb{P}^{1}}(-1)\) on \(\mathbb{P}^{1}\) is given by glueing trivial line bundles along a glueing function \(g_{-1}:(x:\mathbb{A}^{1}\setminus\{0\})\to R^{1}=R^{1}\) with \(g_{-1}(x)\coloneqq\lambda\mapsto x\cdot\lambda\). Note that an arbitrary choice of sign is involved, made by choosing the direction of the glueing function. Sticking with the same choice, calculation shows \(g_{1}(x)\coloneqq\lambda\mapsto\frac{1}{x}\cdot\lambda\) is a glueing function for the dual of the tautological bundle \(\mathcal{O}_{\mathbb{P}^{1}}(1)\) and the tensor product of line bundles corresponds to multiplication. 
\(\square\) ## 7 Bundles and cohomology In non-synthetic algebraic geometry, the structure sheaf \(\mathcal{O}_{X}\) is part of the data constituting a scheme \(X\). In our internal setting, a scheme is just a type satisfying a property. When we want to consider the structure sheaf as an object in its own right, we can represent it by the trivial bundle that assigns to every point \(x:X\) the set \(R\). Indeed, for an affine scheme \(X=\operatorname{Spec}A\), taking the sections of this bundle over a basic open \(D(f)\subseteq X\) \[\left(\prod_{x:D(f)}R\right)=(D(f)\to R)=A[f^{-1}]\] yields the localizations of the ring \(A\) expected from the structure sheaf \(\mathcal{O}_{X}\). More generally, instead of sheaves of abelian groups, \(\mathcal{O}_{X}\)-modules, etc., we will consider bundles of abelian groups, \(R\)-modules, etc., in the form of maps from \(X\) to the respective type of algebraic structures. ### Quasi-coherent bundles Sometimes we want to "apply" a bundle to a subtype, like sheaves can be evaluated on open subspaces and introduce the common notation "\(M(U)\)" for that below. It is, however, not justified to expect, that this application and the corresponding theory of "sheaves" is "the same" as the external one, since the definition below uses the internal hom "\(\prod\)" - where the corresponding external construction, would be the set of continuous sections of a bundle. **Definition 7.1.1**: Let \(X\) be a type and \(M:X\to R\)-Mod a dependent module. Let \(U\subseteq X\) be any subtype. 1. We write: \[M(U)\coloneqq\prod_{x:U}M_{x}.\] 2. With pointwise structure, \(U\to R\) is an \(R\)-algebra and \(M(U)\) is a \((U\to R)\)-module. Somewhat surprisingly, localization of modules \(M(U)\) can be done pointwise: **Lemma 7.1.2** (using Loc, SQC, Z-choice): Let \(X\) be a scheme and \(M:X\to R\)-Mod a dependent module. For any \(f:X\to R\), there is an equality \[M(X)_{f}=\prod_{x:X}(M_{x})_{f(x)}\] of \((X\to R)\)-modules. Proof.: First we construct a map, by realizing that the following is well-defined: \[\frac{m}{f^{k}}\mapsto\left(x\mapsto\frac{m(x)}{f(x)^{k}}\right)\] So let \(\frac{m}{f^{k}}=\frac{m^{\prime}}{f^{k}}\), i.e. let there be an \(l:\mathbb{N}\) such that \(f^{l}(mf^{k^{\prime}}-m^{\prime}f^{k})=0\). But then we can choose the same \(l:\mathbb{N}\) for each \(x:X\) and apply the equation to each \(x:X\). We will now show, that the map we defined is an embedding. So let \(g,h:M(X)_{f}\) such that \(p:\prod_{x:X}g(x)=_{(M_{x})_{f(x)}}h(x)\). Let \(m_{g},m_{h}:\prod_{x:X}M_{x}\) and \(k_{g},k_{h}:\mathbb{N}\) such that \[g=\frac{m_{g}}{f^{k_{g}}}\quad\text{and}\quad h=\frac{m_{h}}{f^{k_{h}}}.\] From \(p\) we know \(\prod_{x:X}\exists_{k_{x}:\mathbb{N}}f(x)^{k_{x}}(m_{g}(x)f(x)^{k_{h}}-m_{h}( x)f(x)^{k_{g}})=0\). By Proposition 3.3.4, we find one \(k:\mathbb{N}\) with \[\prod_{x:X}f(x)^{k}(m_{g}(x)f(x)^{k_{h}}-m_{h}(x)f(x)^{k_{g}})=0\] -- which shows \(g=h\). It remains to show that the map is surjective. So let \(\varphi:\prod_{x:X}(M_{x})_{f(x)}\) and note that \[\prod_{x:X}\exists_{k_{x}:\mathbb{N},m_{x}:M_{x}}.\varphi(x)=\frac{m_{x}}{f(x )^{k_{x}}}.\] By Proposition 3.3.4 and Proposition 5.1.3, we get \(k:\mathbb{N}\), an affine open cover \(U_{1},\dots,U_{n}\) of \(X\) and \(m_{i}:(x:U_{i})\to M_{x}\) such that for each \(i\) and \(x:U_{i}\) we have \[\varphi(x)=\frac{m_{i}(x)}{f(x)^{k}}.\] The problem is now to construct a global \(m:(x:X)\to M_{x}\) from the \(m_{i}\). 
We have \[\prod_{x:U_{ij}}\frac{m_{i}(x)}{f(x)^{k}}=\varphi(x)=\frac{m_{j}(x)}{f(x)^{k}}\] meaning there is pointwise an exponent \(t_{x}:\mathbb{N}\), such that \(f(x)^{t_{x}}m_{i}(x)=f(x)^{t_{x}}m_{j}(x)\). By Proposition 3.3.4, we can find a single \(t:\mathbb{N}\) with this property and define \[\tilde{m}_{i}(x)\coloneqq f(x)^{t}m_{i}(x).\] Then we have \(\tilde{m}_{i}(x)=\tilde{m}_{j}(x)\) on all intersections \(U_{ij}\), which is what we need to get a global \(m:(x:X)\to M_{x}\) from Lemma 1.2.2. Since \(\varphi(x)=\frac{f(x)^{t}m_{i}(x)}{f(x)^{t+k}}=\frac{\tilde{m}_{i}(x)}{f(x)^{t+k}}\) for all \(i\) and \(x:U_{i}\), we have found a preimage of \(\varphi\) in \(M(X)_{f}\). \(\square\)

We will need the following algebraic observation:

**Remark 7.1.3**: Let \(M\) be an \(R\)-module and \(A\) a finitely presented \(R\)-algebra, then there is an \(R\)-linear map \[M\otimes A\to M^{\operatorname{Spec}A}\] induced by mapping \(m\otimes f\) to \(x\mapsto x(f)\cdot m\). In particular, for any \(f:R\), there is an \(R\)-linear map \[M_{f}\to M^{D(f)}.\] The map \(M\otimes A\to M^{\operatorname{Spec}A}\) is natural in \(M\).

**Lemma 7.1.4** (using Loc, SQC, Z-choice): Let \(X\) be a scheme, \(M:X\to R\)-Mod, \(U\subseteq X\) open and \(f:U\to R\). Then there is an \(R\)-linear map \[M(U)_{f}\to M(D(f)).\]

**Proof** Combining Lemma 7.1.2 and pointwise application of Remark 7.1.3 we get \[M(U)_{f}=\left(\prod_{x:U}(M_{x})_{f(x)}\right)\to\left(\prod_{x:U}(M_{x})^{D(f(x))}\right)=\left(\prod_{x:D(f)}M_{x}\right)=M(D(f))\] \(\square\)

A characterization of quasi-coherent sheaves in the little Zariski-topos was found in [1, Theorem 8.3]. This characterization is similar to our following definition of weak quasi-coherence, which will provide us with an abelian subcategory of the \(R\)-module bundles over a scheme, where we can show that higher cohomology vanishes if the scheme is affine.

**Definition 7.1.5**: An \(R\)-module \(M\) is _weakly quasi-coherent_, if for all \(f:R\), the canonical homomorphism \[M_{f}\to M^{D(f)}\] from Remark 7.1.3 is an equivalence. We denote the type of weakly quasi-coherent \(R\)-modules with \(R\)-Mod\({}_{wqc}\).

**Lemma 7.1.6**: For any \(R\)-linear map \(\varphi:M\to N\) of weakly quasi-coherent modules \(M\) and \(N\), the kernel of \(\varphi\) is weakly quasi-coherent.

**Proof** Let \(K\to M\) be the kernel of \(\varphi\). For any \(f:R\), the map \(K^{D(f)}\to M^{D(f)}\) is the kernel of \(M^{D(f)}\to N^{D(f)}\). The latter map is equal to \(M_{f}\to N_{f}\) by weak quasi-coherence of \(M\) and \(N\), and \(K_{f}\to M_{f}\) is the kernel of \(M_{f}\to N_{f}\), so the canonical map \(K_{f}\to K^{D(f)}\) is an equivalence. \(\square\)
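As a basic sanity check of Definition 7.1.5 (a special case of Proposition 7.1.9 below, taking the constant bundle \(C=R\)): the module \(R\) itself is weakly quasi-coherent, since for every \(f:R\) we have, by (SQC),
\[R^{D(f)}=(\operatorname{Spec}R_{f}\to R)=R_{f},\]
and this equivalence is precisely the canonical map \(R_{f}\to R^{D(f)}\) from Remark 7.1.3.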
**Lemma 7.1.7** and **Lemma 7.1.8**: _(The statements and proofs of these two lemmas, which concern a diagram of \(R\)-modules with vertical comparison maps, are not reproduced here; Lemma 7.1.8 is used in the proof of Lemma 7.1.10 below.)_

Let us look at an example.

**Proposition 7.1.9**: Let \(X\) be a scheme and \(C:X\to R\text{-Alg}_{fp}\). Then \(C\), as a bundle of \(R\)-modules, is weakly quasi-coherent.

**Proof** For any \(f:R\) and \(x:X\), using Lemma 3.3.1, we have \[(C_{x})_{f}=C_{x}\otimes_{R}R_{f}=(\operatorname{Spec}R_{f}\to C_{x})=(D(f)\to C_{x})={C_{x}}^{D(f)}.\] \(\square\)

For examples of modules which are not weakly quasi-coherent, see Proposition A.0.6 and Proposition A.0.5.

**Lemma 7.1.10** (using Loc, SQC, Z-choice): Let \(X\) be an affine scheme and \(M_{x}\) a weakly quasi-coherent \(R\)-module for any \(x:X\), then \[\prod_{x:X}M_{x}\] is a weakly quasi-coherent \(R\)-module.

**Proof** We need to show: \[\left(\prod_{x:X}M_{x}\right)_{f}=\left(\prod_{x:X}M_{x}\right)^{D(f)}\] for all \(f:R\). By Lemma 7.1.2, weak quasi-coherence of the \(M_{x}\) and Lemma 7.1.8, we know: \[\left(\prod_{x:X}M_{x}\right)_{f}=\prod_{x:X}\left(M_{x}\right)_{f}=\prod_{x:X}\left(M_{x}\right)^{D(f)}=\left(\prod_{x:X}M_{x}\right)^{D(f)}.\] \(\square\)

Weakly quasi-coherent dependent modules turn out to have very good properties, which are to be expected from what is known about their external counterparts. We will show below that weak quasi-coherence is preserved by the following constructions:

**Definition 7.1.11**: Let \(X,Y\) be types and \(f:X\to Y\) be a map.
* For any dependent module \(N:Y\to R\text{-Mod}\), the _pullback_ or _inverse image_ is the dependent module \[f^{*}N\coloneqq(x:X)\mapsto N_{f(x)}.\]
* For any dependent module \(M:X\to R\text{-Mod}\), the _push-forward_ or _direct image_ is the dependent module \[f_{*}M\coloneqq(y:Y)\mapsto\prod_{x:\operatorname{fib}_{f}(y)}M_{\pi_{1}(x)}.\]

**Theorem 7.1.12** (using Loc, SQC, Z-choice): Let \(X,Y\) be schemes and \(f:X\to Y\) be a map.
* For any weakly quasi-coherent dependent module \(N:Y\to R\text{-Mod}\), the inverse image \(f^{*}N\) is weakly quasi-coherent.
* For any weakly quasi-coherent dependent module \(M:X\to R\text{-Mod}\), the direct image \(f_{*}M\) is weakly quasi-coherent.
* There is nothing to do, when we use the pointwise definition of weak quasi-coherence. * We need to show, that \[\prod_{x:\operatorname{fib}_{f}(y)}M_{\pi_{1}(x)}\] is a weakly quasi-coherent \(R\)-module. By Theorem 5.6.4, the type \(\operatorname{fib}_{f}(y)\) is a scheme. So by Lemma 7.1.10, the module in question is weakly quasi-coherent. With a non-cyclic forward reference to a cohomological result, there is a short proof of the following: **Proposition 7.1.13** (using Loc, SQC, Z-choice): Let \(f:M\to N\) be an \(R\)-linear map of weakly quasi-coherent \(R\)-modules \(M\) and \(N\), then the cokernel \(N/M\) is weakly quasi-coherent. **Proof** We will first show, that for an \(R\)-linear embedding \(m:M\to N\) of weakly quasi-coherent \(R\)-modules \(M\) and \(N\), the cokernel \(N/M\) is weakly quasi-coherent. We need to show: \[(N/M)_{f}=(N/M)^{D(f)}.\] By algebra: \((N/M)_{f}=N_{f}/M_{f}\). This means we are done, if \((N/M)^{D(f)}=N^{D(f)}/M^{D(f)}\). To see this holds, let us consider \(0\to M\to N\to N/M\to 0\) as a short exact sequence of dependent modules, over the subtype of the point \(D(f)\subseteq 1=\operatorname{Spec}R\). Then, taking global sections, by Theorem 7.3.4, we have an exact sequence \[0\to M^{D(f)}\to N^{D(f)}\to(N/M)^{D(f)}\to H^{1}(D(f),M)\] - but \(D(f)=\operatorname{Spec}R_{f}\) is affine, so the last term is \(0\) by Theorem 7.3.6 and \((N/M)^{D(f)}\) is the cokernel \(N^{D(f)}/M^{D(f)}\). Now we will show the statement for a general \(R\)-linear map \(f:M\to N\). By algebra, the cokernel of \(f\) is the same as the cokernel of the induced map \(M/K\to N\), where \(K\) is the kernel of \(f\). By Lemma 7.1.6, \(K\) is weakly quasi-coherent, so by the proof above, \(M/K\) is weakly quasi-coherent. \(M/K\to N\) is an embedding, so again by the proof above, its cokernel is weakly quasi-coherent. \(\square\) ### Finitely presented bundles We now investigate the relationship between bundles of \(R\)-modules on \(X=\operatorname{Spec}A\) and \(A\)-modules. **Proposition 7.2.1**: Let \(A\) be a finitely presented \(R\)-algebra. There is an adjunction between the category of \(A\)-modules and the category of bundles of \(R\)-modules on \(\operatorname{Spec}A\). For an \(A\)-module \(M\), the unit of the adjunction is: \[\eta_{M}: M\to\prod_{x:\operatorname{Spec}A}(M\otimes x)\] \[m\mapsto(m\otimes 1)_{x:\operatorname{Spec}A}\] **Example 7.2.2** (using SQC, Loc)**: It is not the case that for every finitely presented \(R\)-algebra \(A\) and every \(A\)-module \(M\) the map \(\eta_{M}\) is injective. **Proof** [10]. \(\square\) **Theorem 7.2.3**: Let \(X=\operatorname{Spec}(A)\) be affine and let a bundle of finitely presented \(R\)-modules \(M:X\to R\text{-Mod}_{\text{fp}}\) be given. Then the \(A\)-module \[\tilde{M}:=\prod_{x:X}M_{x}\] is finitely presented and for any \(x:X\) the \(R\)-module \(\tilde{M}\otimes_{A}R\) is \(M_{x}\). Under this correspondence, localizing \(\tilde{M}\) at \(f:A\) corresponds to restricting \(M\) to \(D(f)\). ### Cohomology on affine schemes **Definition 7.3.1**: Let \(X\) be a type and \(A:X\to\operatorname{Ab}\) a map to the type of abelian groups. For \(x:X\) let \(T_{x}\) be a set with an \(A_{x}\) action. 1. \(T\) is an \(A\)_-pseudotorsor_, if the action is free and transitive for all \(x:X\). 2. \(T\) is an \(A\)_-torsor_, if it is an \(A\)-pseudotorsor and \[\prod_{x:X}\lVert T_{x}\rVert.\] 3. We write \(A\)-Tors\((X)\) for the type of \(A\)-torsors on \(X\). 
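For a concrete example of such a torsor, recall the construction used in the proof of Corollary 6.3.6: for a line bundle \(\mathcal{L}:X\to R\text{-Mod}\), the assignment
\[T_{x}\coloneqq\mathcal{L}_{x}\setminus\{0\},\]
with \(R^{\times}\) acting by scalar multiplication, is an \(R^{\times}\)-torsor on \(X\), since the action on \(\mathcal{L}_{x}\setminus\{0\}\) is free and transitive and \(\|\mathcal{L}_{x}\setminus\{0\}\|\) holds because merely \(\mathcal{L}_{x}=R\) as \(R\)-modules.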
Torsors on a point are a concrete implementation of first deloopings: **Definition 7.3.2**: Let \(n:\mathbb{N}\). An \(n\)-th _delooping_ of an abelian group \(A\) is a pointed, \((n-1)\)-connected, \(n\)-truncated type \(K(A,n)\), such that \(\Omega^{n}K(A,n)=_{\mathrm{Ab}}A\). For any abelian group and any \(n\), a delooping \(K(A,n)\) exists by [10]. Deloopings can be used to represent cohomology groups by mapping spaces. This is usually done in homotopy type theory to study higher inductive types, such as spheres and CW-complexes, but the same approach works for internally representing sheaf cohomology, which is the intent of the following definition: **Definition 7.3.3**: Let \(X\) be a type and \(\mathcal{F}:X\to\mathrm{Ab}\) a dependent abelian group. The \(n\)-th cohomology group of \(X\) with coefficients in \(\mathcal{F}\) is \[H^{n}(X,\mathcal{F})\coloneqq\left\|\prod_{x:X}K(\mathcal{F}_{x},n)\right\|_{0}.\] **Theorem 7.3.4**: Let \(\mathcal{F},\mathcal{G},\mathcal{H}:X\to\mathrm{Ab}\) be such that for all \(x:X\), \[0\to\mathcal{F}_{x}\to\mathcal{G}_{x}\to\mathcal{H}_{x}\to 0\] is an exact sequence of abelian groups. Then there is a long exact sequence \[\cdots\to H^{n}(X,\mathcal{F})\to H^{n}(X,\mathcal{G})\to H^{n}(X,\mathcal{H})\to H^{n+1}(X,\mathcal{F})\to H^{n+1}(X,\mathcal{G})\to\cdots\]
**Theorem 7.3.6** (using Loc, SQC, Z-choice): For any affine scheme \(X=\operatorname{Spec}(A)\) and coefficients \(M:X\to R\text{-}\mathrm{Mod}_{wqc}\), we have \[H^{1}(X,M)=0.\] * We need to show, that any \(M\)-torsor \(T\) on \(X\) is merely equal to the trivial torsor \(M\), or equivalently show the existence of a section of \(T\). We have \[\prod_{x:X}\lVert T_{x}\rVert\] and therefore, by (Z-choice), there merely are \(f_{1},\ldots,f_{l}:A\), such that the \(U_{i}\coloneqq\operatorname{Spec}(A_{f_{i}})\) cover \(X\) and there are local sections \[s_{i}:\prod_{x:U_{i}}T_{x}\] of \(T\). Our goal is to construct a matching family from the \(s_{i}\).
On intersections, let \(t_{ij}:=s_{i}-s_{j}\) be the difference, so \(t_{ij}:(x:U_{i}\cap U_{j})\to M_{x}\). By Lemma 7.1.8, we equivalently have \(t_{ij}:M(U_{i}\cap U_{j})_{f_{i}f_{j}}\). Since the \(t_{ij}\) were defined as differences, the condition in Lemma 7.3.5 is satisfied and we get \(u_{i}:M(U_{i})_{f_{i}}\), such that \(t_{ij}=u_{i}-u_{j}\). So we merely have a matching family \(\tilde{s}_{i}:=s_{i}-u_{i}\) and therefore, using Lemma 1.2.2, merely a section of \(T\). \(\square\) A similar result is provable for \(H^{2}(X,M)\) using the same approach. There is an extension of this result to general \(n\) in work in progress [1]. ### Cech-Cohomology In this section, let \(X\) be a type, \(U_{1},\ldots,U_{n}\subseteq X\) open subtypes that cover \(X\) and \(\mathcal{F}:X\to\mathrm{Ab}\) a dependent abelian group on \(X\). We start by repeating the classical definition of Cech-Cohomology groups for a given cover. **Definition 7.4.1**: 1. For open \(U\subseteq X\), we use the notation from Definition 7.1.1: \[\mathcal{F}(U)\coloneqq\prod_{x:U}\mathcal{F}_{x}.\] 2. For \(s:\mathcal{F}(U)\) and open \(V\subseteq U\) we use the notation \(s_{|V}\coloneqq(x:V)\mapsto s_{x}\). 3. For a selection of indices \(i_{1},\ldots,i_{l}:\{1,\ldots,n\}\), we use the notation \[U_{i_{1}\ldots i_{l}}\coloneqq U_{i_{1}}\cap\cdots\cap U_{i_{l}}.\] 4. For a list of indices \(i_{1},\ldots,i_{l}\), let \(i_{1},\ldots,\hat{i_{t}},\ldots,i_{l}\) be the same list with the \(t\)-th element removed. 5. For \(k:\mathbb{Z}\), the \(k\)-th _Cech-boundary operator_ is the homomorphism \[\partial^{k}:\bigoplus_{i_{0},\ldots,i_{k}}\mathcal{F}(U_{i_{0}\ldots i_{k}})\to\bigoplus_{i_{0},\ldots,i_{k+1}}\mathcal{F}(U_{i_{0}\ldots i_{k+1}})\] given by \(\partial^{k}(s)\coloneqq(l_{0},\ldots,l_{k+1})\mapsto\sum_{j=0}^{k+1}(-1)^{j}s_{l_{0},\ldots,\hat{l_{j}},\ldots,l_{k+1}|U_{l_{0}\ldots l_{k+1}}}\). 6. The \(k\)-th _Cech-Cohomology group_ for the cover \(U_{1},\ldots,U_{n}\) with coefficients in \(\mathcal{F}\) is \[\check{H}^{k}(\{U\},\mathcal{F})\coloneqq\ker\partial^{k}/\operatorname{im}(\partial^{k-1}).\] It is possible to construct a torsor from a Cech cocycle: **Lemma 7.4.2**: Let \(A\) be an abelian group and \(L\) a type with \(\lVert L\rVert\). Let us call \(c:(i,j:L)\to A\) an \(L\)-cocycle, if \(c_{ij}+c_{jk}=c_{ik}\) for all \(i,j,k:L\). Then there is a bijection: \[\left((T:A\text{-torsor})\times T^{L}\right)\to L\text{-cocycles}.\] **Proof** Let us first check, that the left side is a set. Let \((T,u),(T^{\prime},u^{\prime}):(T:A\text{-torsor})\times T^{L}\), then \((T,u)=(T^{\prime},u^{\prime})\) is equivalent to \((e:T\cong T^{\prime})\times((i:L)\to e(u_{i})=u_{i}^{\prime})\). But two maps \(e\) with this property are equal, since a map between torsors is determined by the image of a single element and \(L\) is inhabited. Assume now \((T,u):(T:A\text{-torsor})\times T^{L}\) to construct the map. Then \(c_{ij}\coloneqq u_{i}-u_{j}\) defines an \(L\)-cocycle, because \[u_{i}-u_{j}+u_{j}-u_{k}=u_{i}-u_{k}.\] This defines an embedding: Assume \((T,u)\) and \((T^{\prime},u^{\prime})\) define the same \(L\)-cocycle, then \(u_{i}-u_{j}=u_{i}^{\prime}-u_{j}^{\prime}\) for all \(i,j:L\). We want to show a proposition, so we can assume there is \(i:L\) and use that to get a map \(e:T\to T^{\prime}\) that sends \(u_{i}\) to \(u_{i}^{\prime}\).
But then we also have \[e(u_{j})=e(u_{j}-u_{i}+u_{i})=e(u_{j}^{\prime}-u_{i}^{\prime}+u_{i})=u_{j}^{\prime}-u_{i}^{\prime}+e(u_{i})=u_{j}^{\prime}-u_{i}^{\prime}+u_{i}^{\prime}=u_{j}^{\prime}\] for all \(j:L\), which means \((T,u)=(T^{\prime},u^{\prime})\). Now let \(c\) be an \(L\)-cocycle. Following [2][Section 5.2], we can define a preimage-candidate: \[T_{c}\coloneqq\{u:A^{L}\mid u_{i}-u_{j}=c_{ij}\}.\] \(A\) acts on \(T_{c}\) pointwise, since \((a+u_{i})-(a+u_{j})=u_{i}-u_{j}=c_{ij}\) for all \(a:A\). To show that \(T_{c}\) is inhabited, we may assume \(i_{0}:L\). Then we define \(u_{i}\coloneqq-c_{i_{0}i}\) to get \(u_{i}-u_{j}=-c_{i_{0}i}+c_{i_{0}j}=c_{ij}\). Now \(c\) is of type \((A^{L})^{L}=A^{L\times L}\), so we have an element of the left hand side. Applying the map constructed above yields a cocycle \[\tilde{c}_{ij}=(k\mapsto c_{ki})-(k\mapsto c_{kj})=(k\mapsto c_{ki}-c_{kj})=(k\mapsto c_{kj}+c_{ji}-c_{kj})=(k\mapsto c_{ji})\] - so \((T_{c},c)\) is a preimage of \(c\). \(\Box\) **Definition 7.4.3**: The cover \(U_{1},\ldots,U_{n}\) is called _r-acyclic_ for \(\mathcal{F}\), if we have the following triviality of higher (non-Cech) cohomology groups: \[\forall l,r\geq l>0\ \forall i_{0},\ldots,i_{r-l}.H^{l}(U_{i_{0},\ldots,i_{r-l}},\mathcal{F})=0.\] **Example 7.4.4**: If \(X\) is a scheme, \(U_{1},\ldots,U_{n}\) a cover by affine open subtypes and \(\mathcal{F}\) pointwise a weakly quasi-coherent \(R\)-module, then \(U_{1},\ldots,U_{n}\) is \(1\)-acyclic for \(\mathcal{F}\) by Theorem 7.3.6. **Theorem 7.4.5** (using Z-choice): If \(U_{1},\ldots,U_{n}\) is a \(1\)-acyclic cover for \(\mathcal{F}\), then \[\check{H}^{1}(\{U\},\mathcal{F})=H^{1}(X,\mathcal{F}).\] **Proof** Let \(\pi\) be the projection map \[\pi:\left(\sum_{T:\mathcal{F}\text{-Tors}(X)}\prod_{i}\prod_{x:U_{i}}T_{x}\right)\rightarrow\mathcal{F}\text{-Tors}(X).\] Let us abbreviate the left hand side with \(T(\mathcal{F},U)\). Since the cover is \(1\)-acyclic, \(\pi\) is surjective. With \(L_{x}\coloneqq\sum_{i}U_{i}(x)\) and Lemma 7.4.2 we get: \[T(\mathcal{F},U) =\prod_{x:X}(T_{x}:\mathcal{F}_{x}\text{-Tors})\times T_{x}^{L_{x}}\] \[=\prod_{x:X}L_{x}\text{-cocycles}.\] The latter is the type of Cech-\(1\)-cocycles (Definition 7.4.1 (e)) and in total the equality is given by the isomorphism \[(T,t)\mapsto(i,j\mapsto t_{i}-t_{j}):T(\mathcal{F},U)\rightarrow\ker(\partial^{1})\subseteq\bigoplus_{i,j}\mathcal{F}(U_{ij}).\] Realizing, that \(\operatorname{im}(\partial^{0})\) corresponds to the subtype of \(T(\mathcal{F},U)\) of trivial torsors, we arrive at the following diagram: The composed map \(T(\mathcal{F},U)\to H^{1}(X,\mathcal{F})\) is a homomorphism and therefore by Lemma 1.3.15 a cokernel. So the two cohomology groups are equal, since they are cokernels of the same diagram. It is possible to pass from torsors to gerbes, which are the degree 2 analogue of torsors: **Definition 7.4.6**: Let \(A:\mathrm{Ab}\) be an abelian group. An _\(A\)-banded gerbe_ is a connected type \(\mathcal{G}:\mathcal{U}\), together with, for all \(y:\mathcal{G}\), an identification of groups \(\Omega(\mathcal{G},y)=A\). Analogous to the type of \(A\)-torsors, the type of \(A\)-banded gerbes is a second delooping of an abelian group \(A\). We can formulate a second degree version of Lemma 7.4.2: **Theorem 7.4.7**: Let \(A\) be an abelian group and \(L\) a type with \(\|L\|\). Let us call \(c:(i,j,k:L)\to A\) an \(L\)-2-cocycle, if \(c_{jkl}-c_{ikl}+c_{ijl}-c_{ijk}=0\) for all \(i,j,k,l:L\).
Then there is a bijection: \[\big((\mathcal{G}:A\text{-}\mathrm{gerbe})\times(u:\mathcal{G}^{L})\times((i,j:L)\to u_{i}=u_{j})\big)\to L\text{-}2\text{-}\mathrm{cocycles}.\] This is provable, again, by translating Deligne's argument [1][Section 5.3]. Using this, the correspondence of Eilenberg-MacLane-Cohomology and Cech-Cohomology can be extended in the following way: **Theorem 7.4.8**: If \(U_{1},\dots,U_{n}\) is a 2-acyclic cover for \(\mathcal{F}\), then \[\check{H}^{2}(\{U\},\mathcal{F})=H^{2}(X,\mathcal{F}).\] However, with this approach, we need versions of Lemma 1.2.2 with increasing truncation level. While this suggests that we can prove the correspondence for any cohomology group of _external_ degree \(l\), there is follow-up work in progress [1] which proves the correspondence for all _internal_ \(l:\mathbb{N}\). In the same draft, there is also a version of the vanishing result for all internal \(l\). This means that many of the usual, essential computations with Cech-Cohomology can be transferred to synthetic algebraic geometry. ## 8 Type Theoretic justification of axioms In this section, we present a model of the 3 axioms stated in Section 2.1. This model is best described as an _internal_ model of a presheaf model. The first part can then be described purely syntactically, starting from any model of 4 other axioms that are valid in a suitable _presheaf_ model. We then obtain the sheaf model by defining a family of open left exact modalities, and the new model is the model of types that are modal for all these modalities. This method works both in a 1-topos framework and for models of univalent type theory. ### Internal sheaf model #### 8.1.1 Axioms for the presheaf model We start from 4 axioms. The first 3 axioms can be seen as variations of our 3 axioms for synthetic algebraic geometry. 1. \(R\) is a ring, 2. for any f.p. \(R\)-algebra \(A\), the canonical map \(A\to R^{\mathrm{Spec}(A)}\) is an equivalence, 3. for any f.p. \(R\)-algebra \(A\), the set \(\mathrm{Spec}(A)\) satisfies choice, which can be formulated as the fact that for any family of types \(P(x)\) for \(x:\mathrm{Spec}(A)\) there is a map \((\Pi_{x:\mathrm{Spec}(A)}\,\|P(x)\|)\to\big\|\Pi_{x:\mathrm{Spec}(A)}P(x)\big\|\), 4. for any f.p. \(R\)-algebra \(A\), the diagonal map \(\mathbb{N}\to\mathbb{N}^{\mathrm{Spec}(A)}\) is an equivalence. As before, \(\mathrm{Spec}(A)\) denotes the type of \(R\)-algebra maps from \(A\) to \(R\), and if \(r\) is in \(R\), we write \(D(r)\) for the proposition \(\mathrm{Spec}(R_{r})\). Note that the first axiom does not require \(R\) to be local, and the third axiom states that \(\mathrm{Spec}(A)\) satisfies _choice_ and not only Zariski local choice, for any f.p. \(R\)-algebra \(A\). #### 8.1.2 Justification of the axioms for the presheaf model We justify briefly the second axiom (synthetic quasi-coherence). This justification will be done in a 1-topos setting, but exactly the same argument holds in the setting of presheaf models of univalent type theory, since it only involves strict presheaves. A similar direct verification holds for the other axioms. We work with presheaves on the opposite of the category of finitely presented \(k\)-algebras. We write \(L,M,N,\dots\) for such objects, and \(f,g,h,\dots\) for the morphisms. A presheaf \(F\) on this category is given by a collection of sets \(F(L)\) with restriction maps \(F(L)\to F(M),\ u\mapsto fu\) for \(f:L\to M\) satisfying the usual uniformity conditions.
We first introduce the presheaf \(\mathsf{FP}\) of _finite presentations_. This is internally the type \[\Sigma_{n:\mathbb{N}}\Sigma_{m:\mathbb{N}}R[X_{1},\dots,X_{n}]^{m}\] which is interpreted by \(\mathsf{FP}(L)=\Sigma_{n:\mathbb{N}}\Sigma_{m:\mathbb{N}}L[X_{1},\dots,X_{n}]^{m}\). If \(\xi=(n,m,q_{1},\dots,q_{m})\in\mathsf{FP}(L)\) is such a presentation, we build a natural extension \(\iota:L\to L_{\xi}=L[X_{1},\dots,X_{n}]/(q_{1},\dots,q_{m})\) where the system \(q_{1}=\dots=q_{m}=0\) has a solution \(s_{\xi}\). Furthermore, if we have another extension \(f:L\to M\) and a solution \(s\in M^{n}\) of this system in \(M\), there exists a unique map \(i(f,s):L_{\xi}\to M\) such that \(i(f,s)s_{\xi}=s\) and \(i(f,s)\circ\iota=f\). Note that \(i(\iota,s_{\xi})=\mathrm{id}\). Internally, we have a map \(A:\mathsf{FP}\to R\text{-}\mathsf{alg}(\mathcal{U}_{0})\), which to any presentation \(\xi=(n,m,q_{1},\dots,q_{m})\) associates the \(R\)-algebra \(A(\xi)=R[X_{1},\dots,X_{n}]/(q_{1},\dots,q_{m})\). This corresponds externally to the presheaf on the category of elements of \(\mathsf{FP}\) defined by \(A(L,\xi)=L_{\xi}\). Internally, we have a map \(\mathrm{Spec}(A):\mathsf{FP}\to\mathcal{U}_{0}\), defined by \(\mathrm{Spec}(A)(\xi)=Hom(A(\xi),R)\). We can replace it by the isomorphic map which to \(\xi=(n,m,q_{1},\dots,q_{m})\) associates the set \(S(\xi)\) of solutions of the system \(q_{1}=\dots=q_{m}=0\) in \(R^{n}\). Externally, this corresponds to the presheaf on the category of elements of \(\mathsf{FP}\) so that \(\mathrm{Spec}(A)(L,n,m,q_{1},\dots,q_{m})\) is the set of solutions of the system \(q_{1}=\dots=q_{m}=0\) in \(L^{n}\). We now define externally two inverse maps \(\varphi:A(\xi)\to R^{\mathrm{Spec}(A(\xi))}\) and \(\psi:R^{\mathrm{Spec}(A(\xi))}\to A(\xi)\). Notice first that \(R^{\mathrm{Spec}(A)}(L,\xi)\), for \(\xi=(n,m,q_{1},\dots,q_{m})\), is the set of families of elements \(l_{f,s}:M\) indexed by \(f:L\to M\) and \(s:M^{n}\) a solution of \(fq_{1}=\dots=fq_{m}=0\), satisfying the uniformity condition \(g(l_{f,s})=l_{(g\circ f),gs}\) for \(g:M\to N\). For \(u\) in \(A(L,\xi)=L_{\xi}\) we define \(\varphi\ u\) in \(R^{\mathrm{Spec}(A)}(L,\xi)\) by \[(\varphi\ u)_{f,s}=i(f,s)\ u\] and for \(l\) in \(R^{\mathrm{Spec}(A)}(L,\xi)\) we define \(\psi\ l\) in \(A(L,\xi)=L_{\xi}\) by \[\psi\ l=l_{\iota,s_{\xi}}\] These maps are natural, and one can check \[\psi\ (\varphi\ u)=(\varphi\ u)_{\iota,s_{\xi}}=i(\iota,s_{\xi})\ u=u\] and \[(\varphi\ (\psi\ l))_{f,s}=i(f,s)\ (\psi\ l)=i(f,s)\ l_{\iota,s_{\xi}}=l_{(i(f,s)\circ\iota),(i(f,s)\ s_{\xi})}=l_{f,s}\] which shows that \(\varphi\) and \(\psi\) are inverse natural transformations. Furthermore, the map \(\varphi\) is the external version of the canonical map \(A(\xi)\to R^{\mathrm{Spec}(A(\xi))}\). The fact that this map is an isomorphism is an (internally) equivalent statement of the second axiom. #### 8.1.3 Sheaf model obtained by localisation from the presheaf model We define now a family of propositions. As before, if \(A\) is a ring, we let \(\operatorname{Um}(A)\) be the type of unimodular sequences (Definition 1.3.13) \(f_{1},\dots,f_{n}\) in \(A\), i.e. such that \((1)=(f_{1},\dots,f_{n})\). To any element \(\vec{r}=r_{1},\dots,r_{n}\) in \(\operatorname{Um}(R)\) we associate the proposition \(D(\vec{r})=D(r_{1})\vee\dots\vee D(r_{n})\). If \(\vec{r}\) is the empty sequence then \(D(\vec{r})\) is the proposition \(1=_{R}0\).
Starting from any model of dependent type theory with univalence satisfying the 4 axioms above, we build a new model of univalent type theory by considering the types \(T\) that are modal for all modalities defined by the propositions \(D(\vec{r})\), i.e. such that all diagonal maps \(T\to T^{D(\vec{r})}\) are equivalences. This new model is called the _sheaf model_. This way of building a new sheaf model can be described purely syntactically, as in [10]. In [11], we extend this interpretation to cover inductive data types. In particular, we describe there the sheafification \(\mathbb{N}_{S}\) of the type of natural numbers with the unit map \(\eta:\mathbb{N}\to\mathbb{N}_{S}\). A similar description can be done starting with the 1-presheaf model. In this case, we use for the propositional truncation of a presheaf \(A\) the image of the canonical map \(A\to 1\). We however get a model of type theory _without_ universes when we consider modal types. **Proposition 8.1.1**: The ring \(R\) is modal. It follows that any f.p. \(R\)-algebra is modal. * If \(r_{1},\dots,r_{n}\) is in \(\operatorname{Um}(R)\), we build a patch function \(R^{D(r_{1},\dots,r_{n})}\to R\). Any element \(u:R^{D(r_{1},\dots,r_{n})}\) gives a compatible family of elements \(u_{i}:R^{D(r_{i})}\), hence a compatible family of elements in \(R_{r_{i}}\) by quasi-coherence. But then it follows from the local-global principle [10] that we can patch this family to a unique element of \(R\). If \(A\) is a f.p. \(R\)-algebra, then \(A\) is isomorphic to \(R^{\operatorname{Spec}(A)}\) and hence is modal. **Proposition 8.1.2**: In this new sheaf model, \(\bot_{S}\) is \(1=_{R}0\). * The proposition \(1=_{R}0\) is modal by the previous proposition. If \(T\) is modal, all diagonal maps \(T\to T^{D(\vec{r})}\) are equivalences. For the empty sequence \(\vec{r}\) we have that \(D(\vec{r})\) is \(\bot\), and the empty sequence is unimodular exactly when \(1=_{R}0\). So \(1=_{R}0\) implies that \(T\) and \(T^{\bot}\) are equivalent, and so implies that \(T\) is contractible. By extensionality, we get that \((1=_{R}0)\to T\) is contractible when \(T\) is modal. **Lemma 8.1.3**: For any f.p. \(R\)-algebra \(A\), we have \(\operatorname{Um}(R)^{\operatorname{Spec}(A)}=\operatorname{Um}(A)\). * Note that the fact that \(r_{1},\dots,r_{n}\) is unimodular is expressed by \[\left\|\Sigma_{s_{1},\dots,s_{n}:R}r_{1}s_{1}+\dots+r_{n}s_{n}=1\right\|\] and we can use axioms 2 and 3 to get \[\left\|\Sigma_{s_{1},\dots,s_{n}:R}r_{1}s_{1}+\dots+r_{n}s_{n}=1\right\|^{\operatorname{Spec}(A)}=\left\|\Sigma_{v_{1},\dots,v_{n}:A}\Pi_{x:\operatorname{Spec}(A)}(r_{1}v_{1})(x)+\dots+(r_{n}v_{n})(x)=1\right\|\] The result follows then from this and axiom 4. For an f.p. \(R\)-algebra \(A\), we can define the type of presentations \(Pr_{n,m}(A)\) as the type \(A[X_{1},\dots,X_{n}]^{m}\). Each element in \(Pr_{n,m}(A)\) defines an f.p. \(A\)-algebra. Since \(Pr_{n,m}(A)\) is a modal type, as \(A\) is f.p., the type of presentations \(Pr_{n,m}(A)_{S}\) in the sheaf model defined for \(n\) and \(m\) in \(\mathbb{N}_{S}\) will be such that \(Pr_{\eta p,\eta q}(A)_{S}=Pr_{p,q}(A)\) [11]. **Lemma 8.1.4**: If \(P\) is a proposition, then the sheafification of \(P\) is \[\left\|\Sigma_{(r_{1},\dots,r_{n}):\operatorname{Um}(R)}P^{D(r_{1},\dots,r_{n})}\right\|\] * If \(Q\) is a modal proposition and \(P\to Q\) we have \[\left\|\Sigma_{(r_{1},\dots,r_{n}):\operatorname{Um}(R)}P^{D(r_{1},\dots,r_{n})}\right\|\to Q\] since \(P^{D(r_{1},\dots,r_{n})}\to Q^{D(r_{1},\dots,r_{n})}\) and \(Q^{D(r_{1},\dots,r_{n})}\to Q\).
It is thus enough to show that \[P_{0}=\left\|\Sigma_{(r_{1},\dots,r_{n}):\operatorname{Um}(R)}P^{D(r_{1}, \dots,r_{n})}\right\|\] is modal. If \(s_{1},\dots,s_{m}\) is in \(\operatorname{Um}(R)\) we show \(P_{0}^{D(s_{1},\dots,s_{m})}\to P_{0}\). This follows from \(\operatorname{Um}(R)^{D(r)}=\operatorname{Um}(R_{r})\), Lemma 8.1.3. **Proposition 8.1.5**: For any modal type \(T\), the proposition \(\left\|T\right\|_{S}\) is \[\left\|\Sigma_{(r_{1},\ldots,r_{n}):\mathrm{Um}(R)}T^{D(r_{1})}\times\cdots \times T^{D(r_{n})}\right\|\] **Proof** It follows from Lemma 8.1.4 that the proposition \(\left\|T\right\|_{S}\) is \[\left\|\Sigma_{(r_{1},\ldots,r_{n}):\mathrm{Um}(R)}\left\|T\right\|^{D(r_{1}, \ldots,r_{n})}\right\|=\left\|\Sigma_{(r_{1},\ldots,r_{n}):\mathrm{Um}(R)} \left\|T\right\|^{D(r_{1})}\times\cdots\times\left\|T\right\|^{D(r_{n})}\right\|\] and we get the result using the fact that choice holds for each \(D(r_{i})\), so that \[\left\|T\right\|^{D(r_{1})}\times\cdots\times\left\|T\right\|^{D(r_{n})}= \left\|T^{D(r_{1})}\right\|\times\cdots\times\left\|T^{D(r_{n})}\right\|= \left\|T^{D(r_{1})}\times\cdots\times T^{D(r_{n})}\right\|\] **Proposition 8.1.6**: In the sheaf model, \(R\) is a local ring. **Proof** This follows from Proposition 8.1.5 and Lemma 8.1.3. \(\Box\) **Lemma 8.1.7**: If \(A\) is a \(R\)-algebra which is modal and there exists \(r_{1},\ldots,r_{n}\) in \(\mathrm{Um}(R)\) such that each \(A^{D(r_{i})}\) is a f.p. \(R_{r_{i}}\)-algebra, then \(A\) is a f.p. \(R\)-algebra. **Proof** Using the local-global principles presented in [10], we can patch together the f.p. \(R_{r_{i}}\)-algebra to a global f.p. \(R\)-algebra. This f.p. \(R\)-algebra is modal by Proposition 8.1.1, and is locally equal to \(A\) and hence equal to \(A\) since \(A\) is modal. \(\Box\) **Corollary 8.1.8**: The type of f.p. \(R\)-algebras is modal and is the type of f.p. \(R\)-algebras in the sheaf model. **Proof** For any \(R\)-algebra \(A\), we can form a type \(\Phi(n,m,A)\) expressing that \(A\) has a presentation for some \(v:Pr_{n,m}(R)\), as the type stating that there is some map \(\alpha:R[X_{1},\ldots,X_{n}]\to A\) and that \((A,\alpha)\) is universal such that \(\alpha\) is \(0\) on all elements of \(v\). We can also look at this type \(\Phi(n,m,A)_{S}\) in the sheaf model. Using the translation from [18, 10], we see that the type \(\Phi(\eta n,\eta m,A)_{S}\) is exactly the type stating that \(A\) is presented by some \(v:Pr_{n,m}(A)\) among the modal \(R\)-algebras. This is actually equivalent to \(\Phi(n,m,A)\) since any f.p. \(R\)-algebra is modal. If \(A\) is a modal \(R\)-algebra which is f.p. in the sense of the sheaf model, this means that we have \[\left\|\Sigma_{n:\mathbb{N}_{S}}\Sigma_{m:\mathbb{N}_{S}}\Phi(n,m,A)_{S} \right\|_{S}\] This is equivalent to \[\left\|\Sigma_{n:\mathbb{N}}\Sigma_{m:\mathbb{N}}\Phi(\eta n,\eta m,A)_{S} \right\|_{S}\] which in turn is equivalent to \[\left\|\Sigma_{n:\mathbb{N}}\Sigma_{m:\mathbb{N}}\Phi(n,m,A)\right\|_{S}\] Using Lemma 8.1.7 and Proposition 8.1.5, this is equivalent to \(\left\|\Sigma_{n:\mathbb{N}}\Sigma_{m:\mathbb{N}}\Phi(n,m,A)\right\|\). \(\Box\) Note that the type of f.p. \(R\)-algebra is universe independent. **Proposition 8.1.9**: For any f.p. \(R\)-algebra \(A\), the type \(\mathrm{Spec}(A)\) is modal and satisfies the axiom of Zariski local choice in the sheaf model. **Proof** Let \(P(x)\) be a family of types over \(x:\mathrm{Spec}(A)\) and assume \(\Pi_{x:\mathrm{Spec}(A)}\left\|P(x)\right\|_{S}\). 
By Proposition 8.1.5, this means \(\Pi_{x:\mathrm{Spec}(A)}\left\|\Sigma_{(r_{1},\ldots,r_{n}):Um}P(x)^{D(r_{1})} \times\cdots\times P(x)^{D(r_{n})}\right\|\). The result follows then from choice over \(\mathrm{Spec}(A)\) and Lemma 8.1.3. \(\Box\) ### Presheaf models of univalence We recall first how to build presheaf models of univalence [11, 12], and presheaf models satisfying the 3 axioms of the previous section. The constructive models of univalence are presheaf models parametrised by an interval object \(\mathbf{I}\) (presheaf with two global distinct elements \(0\) and \(1\) and which is tiny) and a classifier object \(\Phi\) for cofibrations. The model is then obtained as an internal model of type theory inside the presheaf model. For this, we define \(C:U\to U\), uniform in the universe \(U\), operation closed by dependent products, sums and such that \(C(\Sigma_{X:U}X)\) holds. It further satisfies, for \(A:U^{\mathbf{I}}\), the transport principle \[(\Pi_{i:\mathbf{I}}C(Ai))\rightarrow(A0\to A1)\] We get then a model of univalence by interpreting a type as a presheaf \(A\) together with an element of \(C(A)\). This is over a base category \(\square\). If we have another category \(\mathcal{C}\), we automatically get a new model of univalent type theory by changing \(\square\) to \(\square\times\mathcal{C}\). A particular case is if \(\mathcal{C}\) is the opposite of the category of f.p. \(k\)-algebras, where \(k\) is a fixed commutative ring. We have the presheaf \(R\) defined by \(R(J,A)=Hom(k[X],A)\) where \(J\) object of \(\square\) and \(A\) object of \(\mathcal{C}\). The presheaf \(\mathsf{G_{m}}\) is defined by \(\mathsf{G_{m}}(J,A)=Hom(k[X,1/X],A)=A^{\times}\), the set of invertible elements of \(A\). ### Propositional truncation We start by giving a simpler interpretation of propositional truncation. This will simplify the proof of the validity of choice in the presheaf model. We work in the presheaf model over a base category \(\square\) which interprets univalent type theory, with a presheaf \(\Phi\) of cofibrations. The interpretation of the propositional truncation \(\|T\|\)_does not_ require the use of the interval \(\mathbf{I}\). We recall that in the models, to be contractible can be formulated as having an operation \(\mathsf{ext}(\psi,v)\) which extends any partial element \(v\) of extent \(\psi\) to a total element. The (new) remark is then that to be a (h)proposition can be formulated as having instead an operation \(\mathsf{ext}(u,\psi,v)\) which, now _given_ an element \(u\), extends any partial element \(v\) of extent \(\psi\) to a total element. Propositional truncation is defined as follows. An element of \(\|T\|\) is either of the form \(\mathsf{inc}(a)\) with \(a\) in \(T\), or of the form \(\mathsf{ext}(u,\psi,v)\) where \(u\) is in \(\|T\|\) and \(\psi\) in \(\Phi\) and \(v\) a partial element of extent \(\psi\). In this definition, the special constructor \(\mathsf{ext}\) is a "constructor with restrictions" which satisfies \(\mathsf{ext}(u,\psi,v)=v\) on the extent \(\psi\)[1]. ### Choice We prove choice in the presheaf model: if \(A\) is a f.p. algebra over \(R\) then we have a map \[l:(\Pi_{x:\operatorname{Spec}(A)}\left\|P\right\|)\rightarrow\left\|\Pi_{x: \operatorname{Spec}(A)}P\right\|\] For defining the map \(l\), we define \(l(v)\) by induction on \(v\). The element \(v\) is in \((\Pi_{x:\operatorname{Spec}(A)}\left\|P\right\|)(B)\), which can be seen as an element of \(\left\|P\right\|(A)\). 
If it is \(\mathsf{inc}(u)\) we associate \(\mathsf{inc}(u)\) and if it is \(\mathsf{ext}(u,\psi,v)\) the image is \(\mathsf{ext}(l(u),\psi,l(v))\). ### \(1\)-topos model For any small category \(\mathcal{C}\) we can form the presheaf model of type theory over the base category \(\mathcal{C}\) [10, 11]. We look at the special case where \(\mathcal{C}\) is the opposite of the category of finitely presented \(k\)-algebras for a fixed ring \(k\). In this model we have a presheaf \(R(A)=Hom(k[X],A)\) which has a ring structure. In the _presheaf_ model, we can check that we have \(\neg\neg(0=_{R}1)\). Indeed, at any stage \(A\) we have a map \(\alpha:A\to 0\) to the trivial f.p. algebra \(0\), and \(0=_{R}1\) is valid at the stage \(0\). The previous internal description of the sheaf model applies as well in the \(1\)-topos setting. However, the type of modal types in a given universe is not modal in this \(1\)-topos setting. This problem can actually be seen as a motivation for introducing the notion of stacks, and is solved when we start from a constructive model of univalence. ### Some properties of the sheaf model #### 8.6.1 Quasi-coherence A module \(M\) in the sheaf model defined at stage \(A\), where \(A\) is a f.p. \(k\)-algebra, is given by a sheaf over the category of elements of \(A\). It is thus given by a family of modules \(M(B,\alpha)\), for \(\alpha:A\to B\), and restriction maps \(M(B,\alpha)\to M(C,\gamma\alpha)\) for \(\gamma:B\to C\). In general this family is not determined by its value \(M_{A}=M(A,\mathsf{id}_{A})\) at \(A,\mathsf{id}_{A}\). The next proposition expresses when this is the case in an internal way (this characterisation is due to Blechschmidt [1]). **Proposition 8.6.1**: \(M\) is internally quasi-coherent\({}^{7}\) iff we have \(M(B,\alpha)=M_{A}\otimes_{A}B\) and the restriction map for \(\gamma:B\to C\) is \(M_{A}\otimes_{A}\gamma\). Footnote 7: In the sense that the canonical map \(M\otimes A\to M^{\operatorname{Spec}(A)}\) is an isomorphism for any f.p. \(R\)-algebra \(A\). #### 8.6.2 Projective space We have defined \(\mathbb{P}^{n}\) to be the set of lines in \(V=R^{n+1}\), so we have \[\mathbb{P}^{n}\ =\ \Sigma_{L:V\to\Omega}[\exists_{v:V}\neg(v=0)\wedge L=Rv]\] The following was noticed in [13]. **Proposition 8.6.2**: \(\mathbb{P}^{n}(A)\) is the set of submodules of \(A^{n+1}\) that are direct factors of \(A^{n+1}\) and of rank \(1\). **Proof** \(\mathbb{P}^{n}\) at stage \(A\) is the set of those \(L:\Omega^{V}(A)\) that satisfy the proposition \(\exists_{v:V}\neg(v=0)\wedge L=Rv\) at stage \(A\). This condition implies that \(L\) is a quasi-coherent submodule of \(R^{n+1}\) defined at stage \(A\). It is thus determined by its value \(L(A,\mathsf{id}_{A})=L_{A}\). Furthermore, the condition also implies that \(L_{A}\) is locally free of rank \(1\). By the local-global principle [1], \(L_{A}\) is finitely generated. We can then apply Theorem 5.14 of [1] to deduce that \(L_{A}\) is a direct factor of \(A^{n+1}\) and of rank \(1\). \(\Box\) One point in this argument was to notice that the condition \[\exists_{v:V}\neg(v=0)\wedge L=Rv\] implies that \(L\) is quasi-coherent. This would be direct in the presence of univalence, since we would then have \(L=R\) as an \(R\)-module and \(R\) is quasi-coherent. But it can also be proved without univalence by transport along isomorphism: an \(R\)-module which is isomorphic to a quasi-coherent module is itself quasi-coherent.
### Global sections and Zariski global choice We let \(\Box T\) the type of global sections of a globally defined sheaf \(T\). If \(c=r_{1},\ldots,r_{n}\) is in \(\operatorname{Um}(R)\) we let \(\Box_{c}T\) be the type \(\Box T^{D(r_{1})}\times\cdots\times\Box T^{D(r_{n})}\). Using these notations, we can state the principle of Zariski global choice \[(\Box\left\|T\right\|)\leftrightarrow\left\|\Sigma_{c:\operatorname{Um}(k)} \Box_{c}T\right\|\] This principle is valid in the present model. Using this principle, we can show that \(\Box K(\mathsf{G}_{\mathsf{m}},1)\) is equal to the type of projective modules of rank \(1\) over \(k\) and that each \(\Box K(R,n)\) for \(n>0\) is contractible. ## Appendix A Negative results Here we collect some results of the theory developed from the axioms (Loc), (SQC) and (Z-choice) that are of a negative nature and primarily serve the purpose of counterexamples. We adopt the following definition from [1, Section IV.8]. **Definition A.0.1**: A ring \(A\) is _zero-dimensional_ if for all \(x:A\) there exists \(a:A\) and \(k:\mathbb{N}\) such that \(x^{k}=ax^{k+1}\). **Lemma A.0.2** (using Loc, SQC, Z-choice): The ring \(R\) is not zero-dimensional. **Proof** Assume that \(R\) is zero-dimensional, so for every \(f:R\) there merely is some \(k:\mathbb{N}\) with \(f^{k}\in(f^{k+1})\). We note that \(R=\mathbb{A}^{1}\) is an affine scheme and that if \(f^{k}\in(f^{k+1})\), then we also have \(f^{k^{\prime}}\in(f^{k^{\prime}+1})\) for every \(k^{\prime}\geq k\). This means that we can apply Proposition 3.3.4 and merely obtain a number \(K:\mathbb{N}\) such that \(f^{K}\in(f^{K+1})\) for all \(f:R\). In particular, \(f^{K+1}=0\) implies \(f^{K}=0\), so the canonical map \(\operatorname{Spec}R[X]/(X^{K})\to\operatorname{Spec}R[X]/(X^{K+1})\) is a bijection. But this is a contradiction, since the homomorphism \(R[X]/(X^{K+1})\to R[X]/(X^{K})\) is not an isomorphism. \(\square\) **Example A.0.3** (using Loc, SQC, Z-choice): It is not the case that every monic polynomial \(f:R[X]\) with \(\deg f\geq 1\) has a root. More specifically, if \(U\subseteq\mathbb{A}^{1}\) is an open subset with the property that the polynomial \(X^{2}-a:R[X]\) merely has a root for every \(a:U\), then \(U=\emptyset\). **Proof** Let \(U\subseteq\mathbb{A}^{1}\) be as in the statement. Since we want to show \(U=\emptyset\), we can assume a given element \(a_{0}:U\) and now have to derive a contradiction. By Z-choice, there exists in particular a standard open \(D(f)\subseteq\mathbb{A}^{1}\) with \(a_{0}\in D(f)\) and a function \(g:D(f)\to R\) such that \((g(x))^{2}=x\) for all \(x:D(f)\). By SQC, this corresponds to an element \(\frac{p}{f^{n}}:R[X]_{f}\) with \((\frac{p}{f^{n}})^{2}=X:R[X]_{f}\). We use Lemma 1.3.6 together with the fact that \(f(a_{0})\) is invertible to get that \(f:R[X]\) is regular, and therefore \(p^{2}=f^{2n}X:R[X]\). Considering this equation over \(R^{\text{red}}=R/\sqrt{(0)}\) instead, we can show by induction that all coefficients of \(p\) and of \(f^{n}\) are nilpotent, which contradicts the invertibility of \(f(a_{0})\). \(\square\) **Remark A.0.4**: Example A.0.3 shows that the axioms we are using here are incompatible with a natural axiom that is true for the structure sheaf of the big etale topos, namely that \(R\) admits roots for unramified monic polynomials. The polynomial \(X^{2}-a\) is even separable for invertible \(a\), assuming that \(2\) is invertible in \(R\). 
To get rid of this last assumption, we can use the fact that either \(2\) or \(3\) is invertible in the local ring \(R\) and observe that the proof of Example A.0.3 works just the same for \(X^{3}-a\). We now give two different proofs that not all \(R\)-modules are weakly quasi-coherent in the sense of Definition 7.1.5. The first shows that the map \[M_{f}\to M^{D(f)}\] is not always surjective, the second shows that it is not always injective. **Proposition A.0.5** (using Loc, SQC, Z-choice): The \(R\)-module \(R^{\mathbb{N}}\) is not weakly quasi-coherent (in the sense of Definition 7.1.5). **Proof** For \(f:R\), we have \((R^{\mathbb{N}})^{D(f)}=(R^{D(f)})^{\mathbb{N}}=(R_{f})^{\mathbb{N}}\), so the question is whether the canonical map \[(R^{\mathbb{N}})_{f}\to(R_{f})^{\mathbb{N}}\] is an equivalence. If it is, for a fixed \(f:R\), then the sequence \((1,\frac{1}{f},\frac{1}{f^{2}},\dots)\) has a preimage, so there is an \(n:\mathbb{N}\) such that for all \(k:\mathbb{N}\), \(\frac{a_{k}}{f^{n}}=\frac{1}{f^{k}}\) in \(R_{f}\) for some \(a_{k}:R\). In particular, \(\frac{a_{n+1}}{f^{n}}=\frac{1}{f^{n+1}}\) in \(R_{f}\) and therefore \(a_{n+1}f^{n+1+\ell}=f^{n+\ell}\) in \(R\) for some \(\ell:\mathbb{N}\). This shows that \(R\) is zero-dimensional (Definition A.0.1) if \(R^{\mathbb{N}}\) is weakly quasi-coherent. So we are done by Lemma A.0.2. \(\square\) **Proposition A.0.6** (using Loc, SQC, Z-choice): The implication \[M^{D(f)}=0\quad\Rightarrow\quad M_{f}=0\] does not hold for all \(R\)-modules \(M\) and \(f:R\). In particular, the map \(M_{f}\to M^{D(f)}\) from Definition 7.1.5 is not always injective. **Proof** Assume that the implication always holds. We construct a family of \(R\)-modules, parametrized by the elements of \(R\), and deduce a contradiction from the assumption applied to the \(R\)-modules in this family. Given an element \(f:R\), the \(R\)-module we want to consider is the countable product \[M(f)\coloneqq\prod_{n:\mathbb{N}}R/(f^{n}).\] If \(f\neq 0\) then \(M(f)=0\) (using Proposition 2.2.3). This implies that the \(R\)-module \(M(f)^{f\neq 0}\) is trivial: any function \(f\neq 0\to M(f)\) can only assign the value \(0\) to the at most one witness of \(f\neq 0\). By assumption, this implies that \(M(f)_{f}\) is also trivial. Noting that \(M(f)\) is not only an \(R\)-module but even an \(R\)-algebra in a natural way, we have \[M(f)_{f}=0 \Leftrightarrow \exists k:\mathbb{N}.\;f^{k}=0\text{ in }M(f)\] \[\Leftrightarrow \exists k:\mathbb{N}.\;\forall n:\mathbb{N}.\;f^{k}\in(f^{n})\subseteq R\] \[\Leftrightarrow \exists k:\mathbb{N}.\;f^{k}\in(f^{k+1})\subseteq R.\] In summary, our assumption implies that the ring \(R\) is zero-dimensional (in the sense of Definition A.0.1). But this is not the case, as we saw in Lemma A.0.2. \(\square\) **Example A.0.7** (using Loc, SQC): It is not the case that for any pair of lines \(L,L^{\prime}\subseteq\mathbb{P}^{2}\), the \(R\)-algebra \(R^{L\cap L^{\prime}}\) is, as an \(R\)-module, free of rank 1. * The \(R\)-algebra \(R^{L\cap L^{\prime}}\) is free of rank 1 if and only if the structure homomorphism \(\varphi:R\to R^{L\cap L^{\prime}}\) is bijective. We will show that it is not even always injective. Consider the lines \[L=\{\,[x:y:z]:\mathbb{P}^{2}\mid z=0\,\}\] and \[L^{\prime}=\{\,[x:y:z]:\mathbb{P}^{2}\mid\varepsilon x+\delta y+z=0\,\},\] where \(\varepsilon\) and \(\delta\) are elements of \(R\) with \(\varepsilon^{2}=\delta^{2}=0\).
Consider the element \(\varphi(\varepsilon\delta):R^{L\cap L^{\prime}}\), which is the constant function \(L\cap L^{\prime}\to R\) with value \(\varepsilon\delta\). For any point \([x:y:z]:L\cap L^{\prime}\), we have \(z=0\) and \(\varepsilon x+\delta y=0\). But also, by definition of \(\mathbb{P}^{2}\), we have \((x,y,z)\neq 0:R^{3}\), so one of \(x,y\) must be invertible. This implies \(\delta\mid\varepsilon\) or \(\varepsilon\mid\delta\), and in both cases we can conclude \(\varepsilon\delta=0\). Thus, \(\varphi(\varepsilon\delta)=0:R^{L\cap L^{\prime}}\). If \(\varphi\) were always injective then this would imply \(\varepsilon\delta=0\) for any \(\varepsilon,\delta:R\) with \(\varepsilon^{2}=\delta^{2}=0\). In other words, the inclusion \[\operatorname{Spec}R[X,Y]/(X^{2},Y^{2},XY)\hookrightarrow\operatorname{Spec}R[X,Y]/(X^{2},Y^{2})\] would be a bijection. But the corresponding \(R\)-algebra homomorphism is not an isomorphism.
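To spell out the last claim: the corresponding \(R\)-algebra homomorphism is \[R[X,Y]/(X^{2},Y^{2})\longrightarrow R[X,Y]/(X^{2},Y^{2},XY),\qquad XY\mapsto 0,\] and \(R[X,Y]/(X^{2},Y^{2})\) is free as an \(R\)-module on the monomials \(1,X,Y,XY\), so \(XY\neq 0\) there (using \(1\neq 0\) in the local ring \(R\)); hence the map has a nontrivial kernel and cannot be an isomorphism.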
This is a foundation for algebraic geometry, built internally in the Zariski topos, based on the work of Kock and Blechschmidt. The Zariski topos consists of sheaves on the opposite of the category of finitely generated algebras over a fixed ring; with the Zariski topology, the generating covers are given by localization maps $A \to A_{f_1}$ for finitely many elements $f_1$, ..., $f_n$ that generate the ideal $(1) = A \subseteq A$. We use homotopy type theory together with three axioms as the internal language of the (higher) Zariski topos. One of our main contributions is to define and discuss cohomology in this setting. Actually computing cohomology groups seems to require a principle along the lines of our ``Zariski local choice'' axiom. This
2306.08060
Software Supply Chain Vulnerabilities Detection in Source Code: Performance Comparison between Traditional and Quantum Machine Learning Algorithms
The software supply chain (SSC) attack has become one of the crucial issues that are being increased rapidly with the advancement of the software development domain. In general, SSC attacks execute during the software development processes lead to vulnerabilities in software products targeting downstream customers and even involved stakeholders. Machine Learning approaches are proven in detecting and preventing software security vulnerabilities. Besides, emerging quantum machine learning can be promising in addressing SSC attacks. Considering the distinction between traditional and quantum machine learning, performance could be varies based on the proportions of the experimenting dataset. In this paper, we conduct a comparative analysis between quantum neural networks (QNN) and conventional neural networks (NN) with a software supply chain attack dataset known as ClaMP. Our goal is to distinguish the performance between QNN and NN and to conduct the experiment, we develop two different models for QNN and NN by utilizing Pennylane for quantum and TensorFlow and Keras for traditional respectively. We evaluated the performance of both models with different proportions of the ClaMP dataset to identify the f1 score, recall, precision, and accuracy. We also measure the execution time to check the efficiency of both models. The demonstration result indicates that execution time for QNN is slower than NN with a higher percentage of datasets. Due to recent advancements in QNN, a large level of experiments shall be carried out to understand both models accurately in our future research.
Mst Shapna Akter, Md Jobair Hossain Faruk, Nafisa Anjum, Mohammad Masum, Hossain Shahriar, Akond Rahman, Fan Wu, Alfredo Cuzzocrea
2023-05-31T06:06:28
http://arxiv.org/abs/2306.08060v1
Software Supply Chain Vulnerabilities Detection in Source Code: Performance Comparison between Traditional and Quantum Machine Learning Algorithms ###### Abstract The software supply chain (SSC) attack has become one of the crucial issues that is increasing rapidly with the advancement of the software development domain. In general, SSC attacks executed during the software development process lead to vulnerabilities in software products, targeting downstream customers and even involved stakeholders. Machine Learning approaches are proven in detecting and preventing software security vulnerabilities. Besides, emerging quantum machine learning can be promising in addressing SSC attacks. Considering the distinction between traditional and quantum machine learning, performance could vary based on the proportions of the dataset used in the experiments. In this paper, we conduct a comparative analysis between quantum neural networks (QNN) and conventional neural networks (NN) with a software supply chain attack dataset known as ClaMP. Our goal is to distinguish the performance between QNN and NN; to conduct the experiment, we develop two different models for QNN and NN by utilizing Pennylane for the quantum model and TensorFlow and Keras for the traditional model, respectively. We evaluated the performance of both models with different proportions of the ClaMP dataset to identify the f1 score, recall, precision, and accuracy. We also measure the execution time to check the efficiency of both models. The demonstration results indicate that the execution time for QNN is slower than that of NN with a higher percentage of the dataset. Due to recent advancements in QNN, large-scale experiments shall be carried out to understand both models accurately in our future research. Software supply chain Security, Quantum machine learning, Quantum neural network (QNN), Neural Network (NN), ClaMP, TensorFlow, Pennylane ## I Introduction In recent years, threats to software supply chain security have evolved gradually. For analyzing threat patterns and detecting and predicting security vulnerabilities and suspicious behaviors of software security threats, Machine Learning (ML) has long been adopted as a powerful approach [1, 2]. Due to the vast amount of data stored globally, which is increasing enormously by 20% every year, innovative approaches to machine learning are needed for proactive prevention and early detection of security threats [3, 4]. Quantum Machine Learning (QML), with the help of quantum random access memory (QRAM), has the potential to deal with large amounts of data, and scores of research institutions are exploring this promising direction [5, 6, 7, 8]. In general, Quantum Machine Learning refers to an integrated field of quantum computing, quantum algorithms, and classical machine learning where the algorithms are developed to address real-world problems of machine learning [32, 33], leveraging the efficiency and concepts of quantum computing [9, 10]. The fundamental concepts of quantum machine learning including quantum coherence, superposition, and entanglement provide quantum computers with immense power to process and handle data in such a way that leads toward the emerging implementation of quantum computing in technological fields [11, 12]. In contrast to conventional computing, the basic unit of quantum computing, known as the qubit, can make use of both the values 0 and 1 in order to follow various paths of computation simultaneously [13].
Mathematically, a qubit state is a vector in two-dimensional space, illustrated by the linear combination of the two basis states (\(|0\rangle\) and \(|1\rangle\)) in a quantum system: \(|\psi\rangle=\alpha|0\rangle+\beta|1\rangle\), where \(\alpha\), \(\beta\in\mathbb{C}\) are probability amplitudes required to satisfy \(|\alpha|^{2}+|\beta|^{2}=1\) [14]. Such a combination of basis states is described as quantum superposition, and correlations between two qubits through a quantum phenomenon are termed entanglement. With the ever-growing size of data, the number of sophisticated and complicated cyberattacks and data violations, such as software supply chain attacks and network intrusion attacks, is also increasing rapidly across the globe. A Software Supply Chain (SSC) attack occurs when a cyber threat actor penetrates a vendor's network and inserts malicious code, jeopardizing the software before the vendor distributes it to the customers [15, 16]. SSC attacks affect the software development, dissemination, and utilization phases and have become extremely critical due to the excessive complexity of software development strategies over the years [17]. Such attacks occur during the production phase, causing vulnerabilities for downstream consumers. SSC attacks can also disrupt newly developed software through patches or hotfixes or even from the outset, thus compromising the system from the start. Hence, SSC attacks can have a significant negative impact on software users in all sectors by gaining complete control over a software's regular functionality. Hijacking updates, undermining code signing, and compromising open-source code are common techniques used by threat actors to execute SSC attacks [15, 34]. Despite recognition of and investigation into SSC attacks, there is an absence of sufficient information concerning mitigating or preventing these risks. On the other hand, a network intrusion attack is an attempt to compromise the security of stored information or data on a computer connected to the network. Two distinct types of activities fall under this definition. First, an attacker can gain unauthorized access to a network, files, or information to steal sensitive data, leaving the data unharmed. Second, an attacker can attempt to gain unauthorized access to user devices, resources, or servers to destabilize the entire network by encrypting, deleting, mishandling, or simply modifying the data [18]. To combat such complex, unlawful, and unauthorized attacks, interest is growing in preventing attacks by using a quantum machine learning-based paradigm [19, 20, 21]. In the past years, little to no research was conducted on software supply chain vulnerability datasets using quantum machine learning, perhaps due to the limited availability of quantum computing resources. However, considering currently available QML-based platforms, Pennylane, for instance, offers programming of quantum computers that enables a new paradigm termed quantum differentiable programming and provides seamless collaboration with other QML tools including IBM quantum, NumPy, and TensorFlow quantum. The main ideology of these applications is flexibility, which allows them to distinguish between various quantum devices and choose the finest algorithm for the task. This paper conducts a comparative analysis between quantum neural networks (QNN) and conventional neural networks (NN) by utilizing a software supply chain attack dataset known as ClaMP.
The primary contributions of this research are as follows: * We adopt both quantum machine learning and conventional machine learning to conduct an experiment on a software supply chain attack dataset. * We provide a comparative analysis of both QML's and ML's performance based on the findings of the experiments using different proportions of the dataset. We organize the rest of the paper as follows: In Section II, we provide a brief related study on quantum machine learning and traditional machine learning. Section III explains the methodology we adopted for our comparative research. The experimental setting and results are explained in Section IV, which includes dataset specification and processing. Section V discusses the findings of this paper. Finally, Section VI concludes the paper. ## II Related Work Addressing the constraints of traditional machine learning methods, researchers are interested in the newly emerging quantum machine learning approach for detecting and preventing software and cybersecurity vulnerabilities [1]. Various machine learning techniques including neural networks, naive Bayes, logistic regression, convolutional neural networks (CNN), decision trees, and support vector machines are successfully applied for classifying software security activities including malware, ransomware, and network intrusion detection [22, 23, 24]. Christopher Havenstein _et al._ [25] presented another comparative study based on the performance between Quantum Machine Learning (QML) and Classical Machine Learning (ML). The authors worked on QML algorithms with reproducible code and similar code for ML. Later, quantum variational support vector machines were adopted that show higher accuracy than classical support vector machines. In conclusion, the researchers emphasize the potential of quantum multi-class SVM classifiers for the future. Luis E. Herrera Rodríguez et al. [31] presented a comparative study on various machine learning methods for dissipative quantum dynamics, where the authors utilized 22 ML models to predict long-time dynamics of quantum systems. The models include convolutional and fully connected feed-forward artificial neural networks (ANNs) and kernel ridge regression (KRR). Mohammad Masum _et al._ [26] conducted research on quantum machine learning (QML) to detect software supply chain attacks [1]. The researchers analyzed how to speed up the performance of quantum computing by applying novel approaches including quantum support vector machine (QSVM) and quantum neural network (QNN). Utilizing both methods, the authors detected software supply chain attacks in open-source quantum simulators, for instance IBM Qiskit and TensorFlow quantum. According to the research findings, quantum machine learning surpasses classical machine learning in terms of processing speed and computational time. MJH Faruk _et al._ studied quantum cybersecurity from both threat and opportunity perspectives. The authors have provided a comprehensive review of state-of-the-art quantum computing-based cybersecurity approaches. The research indicated that quantum computing can be utilized to address software security, cybersecurity, and cryptographic-related concerns. On the other hand, malicious individuals can also misuse quantum computing against software infrastructure due to the immense power of quantum computers [27]. ## III Methodology We adopt the Quantum Neural Network (QNN), a subfield of Quantum Machine Learning (QML), for this research and apply the model to the ClaMP dataset.
Figure 1 demonstrates the framework representing the implementation process. At first, we pre-processed raw data prior to providing it as input to the QML model. We used Python and the shuffle function of the Scikit-Learn (sklearn) library for data preprocessing. We also used the index-reset function and the drop function from the Python library, and the label encoder from the sklearn library. In order to maintain a balanced number of classes and to avoid imbalanced classes that may lead to the wrong prediction, we ensure the efficiency of all of the separated portions of the dataset created from ClaMP. For the experiment, we consider only the balanced portions of the dataset. After applying the shuffle function, we reset the index to organize the dataset in ascending order. The drop function is used to remove columns that are unnecessary and do not contribute to the prediction. In quantum machine learning models, we need to feed numerical values, so the categorical values were converted into numerical values, and all the numerical values were normalized to maintain a similar scale. After the preprocessing steps, we split the entire dataset comprising 5,210 rows into different smaller portions. We separated the dataset into 20 smaller datasets; the number of rows started from 5 percent of the total dataset and gradually increased by 5 percent up to 100 percent. The quantum machine learning model was applied to each of the dataset's separated portions. Before feeding into the QML model, the features were encoded into quantum states. We provide a comparative analysis of both QML and ML's performance based on the findings of the experiments using different proportions of the dataset. Quantum Neural Network (QNN) comes from neurocomputing theory, which converges with machine learning, quantum computing, and artificial neural network concepts [28]. The QNN framework can be applied for neural computing on vast datasets to find the expected result. Before processing the data through the QNN, input data is encoded into a suitable qubit state with a proper number of qubits [29]. Later, the qubit state is modified for a specific number of layers using two gates: parameterized rotation gates and entangling gates, where the expectation value of a Hamiltonian operator, Pauli operators for instance, is used to read out the altered qubit state. The results derived from Pauli gates are decoded and translated into applicable output data. A neural network based on variational quantum circuits plays various roles in QNN. The Adam optimizer updates the parameters with some criteria including the size of complexity-theoretic measurements, depth, accuracy, and definite features, while the number of steps is necessary for solving the issue of in-depth measurement. Figure 1: Process of the architecture of the framework. Precision describes the setup required to solve a number of challenges. A quantum neural network consists of three items: input, output, and L hidden layers, where the L hidden layer consists of a quantum circuit of the quantum perceptron, which acts on an initial state of the input qubits and produces a mixed state for the output qubits. QNN is also able to perform the quantum computation for the two-input or one-input qubit perceptron, which goes through the quantum-circuit construction with a quantum perceptron on 4-level qubits. The most comprehensive quantum perceptron implements any quantum channel on the input qubits.
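The following is a minimal sketch, using Pennylane (the quantum framework named in this study), of the kind of variational circuit described above: classical feature values are encoded as qubit rotations, a few layers of parameterized rotation and entangling gates are applied, and the expectation value of a Pauli-Z operator is read out as the prediction. The qubit count, layer count, optimizer settings, and placeholder data below are illustrative assumptions rather than the exact configuration used in the experiments.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4          # illustrative; one wire per (reduced) feature
n_layers = 2          # two variational layers, as a small example

dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(weights, features):
    # Encode classical features as rotation angles on each qubit.
    qml.AngleEmbedding(features, wires=range(n_qubits))
    # Parameterized rotations followed by entangling gates, repeated per layer.
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    # Read out the expectation value of Pauli-Z on the first qubit.
    return qml.expval(qml.PauliZ(0))

def predict(weights, features):
    # Map the expectation value in [-1, 1] to a binary malware/benign label.
    return 1 if circuit(weights, features) > 0 else 0

def cost(weights, X, y):
    # Mean squared error between circuit expectations and labels encoded as -1/+1.
    loss = 0.0
    for x, label in zip(X, y):
        loss = loss + (circuit(weights, x) - label) ** 2
    return loss / len(X)

# Illustrative training loop on random data standing in for the reduced ClaMP features.
shape = qml.StronglyEntanglingLayers.shape(n_layers=n_layers, n_wires=n_qubits)
weights = np.random.random(size=shape, requires_grad=True)
opt = qml.AdamOptimizer(stepsize=0.1)
X_train = np.random.random((8, n_qubits), requires_grad=False)
y_train = np.array([1, -1, 1, -1, 1, -1, 1, -1], requires_grad=False)

for step in range(20):
    weights = opt.step(lambda w: cost(w, X_train, y_train), weights)
```

In practice the encoded features would be the reduced ClaMP feature vectors described in the experiments section, and the binary labels would be the malware/benign classes.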
The precision p(n) is denoted by {s(n), d(n)}, where the size is denoted by s(n) and the depth by d(n). The size measures the number of qubits in the circuit, while the depth measures the longest sequence of gates from input to output. The size and depth are created from gates D and U of precision p(n). A reversible U gate is usually followed by the D gate to eliminate the localization problem. The accuracy of the circuits is denoted by O(s(n)).

## IV Experiment & Results

In this section, we present both the experiments and the results. We first provide details of the dataset specification, followed by the data processing. In order to describe an effective experiment, we define the experimental settings, where we utilize the accuracy, precision, recall, and F-score metrics for evaluating the models' performance. Lastly, we present the experimental results.

### _Dataset Specification_

We applied the Quantum Neural Network (QNN) to the ClaMP dataset for malware classification. The ClaMP dataset has two versions, ClaMP_raw and ClaMP_Integrated. The malware instances were aggregated from VirusShare, while the benign instances were collected from Windows files. Portable executable (PE) headers contain the information required by an OS to run executable files. Therefore, features were collected from the PE headers of both the malware and benign samples. Various raw features, such as the File Header (7 features), DOS header (19 features), and Optional header (29 features), were extracted from the PE headers of the samples using a rule-based method. Later, meaningful features, including entropy, compilation time, and section time, were derived from the raw features. Additionally, more information about the PE file was extracted by expanding a set of raw features from the file header. Finally, we selected three types of features, raw, derived, and expanded, from the ClaMP_Integrated dataset, which contains a total of 68 features: 28 raw, 26 expanded, and 14 derived features, respectively [30].

### _Data Preprocessing_

We applied the QNN to the ClaMP dataset, where we utilized various data sizes to inspect the method's comparative performance. We first considered the entire dataset, containing 5210 samples. We then randomly selected subsets ranging from 5 percent to 95 percent of the dataset, in increments of 5 percent and without replacing any instances, containing 260, 521, 782, 1042, 1302, 1563, 1834, 2084, 2345, 2605, 2566, 3126, 3387, 3647, 3908, 4168, 4429, 4689, and 4950 samples, respectively, while preserving the class proportion. We converted the categorical values from the feature called 'packer type' of the ClaMP data, since this type of data cannot be directly entered into the model. The dataset contains a total of 108 columns, including one target variable. We used a standardization technique to transform all the features to a mean of zero and a standard deviation of one.

### _Experimental Settings_

The present quantum simulators do not accept inputs of large dimension, while our dataset contains 108 dimensions, which we cannot feed into the simulator directly. Hence, we adopted a dimensionality reduction technique, Principal Component Analysis (PCA), on this dataset.
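A minimal sketch of the preprocessing and dimensionality-reduction steps described above is given below. The CSV filename and the column names `packer_type` and `class` are hypothetical placeholders used purely for illustration, not the dataset's actual identifiers, and the exact order of fitting steps is an assumption.

```python
import pandas as pd
from sklearn.utils import shuffle
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split

# Hypothetical file and column names, for illustration only.
df = pd.read_csv("ClaMP_Integrated.csv")
df = shuffle(df, random_state=0).reset_index(drop=True)
df["packer_type"] = LabelEncoder().fit_transform(df["packer_type"])

X = df.drop(columns=["class"]).values
y = df["class"].values

# Take one of the class-preserving subsets (here 5 percent of the data),
# then standardize and reduce to 16 principal components so the features
# fit the qubit budget of the quantum simulator.
X_sub, _, y_sub, _ = train_test_split(X, y, train_size=0.05,
                                      stratify=y, random_state=0)
X_sub = StandardScaler().fit_transform(X_sub)
X_pca = PCA(n_components=16).fit_transform(X_sub)
```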
PCA was applied to the 108-dimensional feature vectors of the ClaMP dataset to reduce the dimension. We selected the first 16 principal components due to the limit on the number of qubits in the existing simulator. First, the classical NN was directly applied to the reduced dataset. The next step was encoding the classical data as quantum circuits, which means converting all of the feature values into qubit values for processing on the quantum computer. Figure 2 demonstrates the circuit created for a random sample.

Figure 2: The quantum neural network with the input parameters and a linear entanglement structure.

The circuits were converted into TensorFlow Quantum (TFQ). Next, a model circuit layer was developed for the QNN, comprising a two-layer model matched to the size of the data circuit, and finally the model circuit was wrapped in a TFQ-Keras model. We converted the quantum data, fed it to the model, and used a parametrized quantum layer to train the model circuit on the quantum data. We adopted a loss function known as the hinge loss during the training phase. The labels were converted into -1 and 1 labels. Finally, we trained the QNN for 100 epochs.

### _Experimental Results: Quantum Neural Network (QNN)_

Our comparative analysis between the classical neural network (NN) model and the quantum neural network (QNN) model is presented in Table 1 and Table 2, covering twenty different portions of the ClaMP dataset. The results derived from the quantum neural network model show that the accuracy is random across the different portions of the dataset. For instance, for 5 percent of the dataset, the accuracy is 57 percent, the F1 score is 73 percent, the precision is 100 percent, and the recall is 57 percent, while for 20 percent of the dataset, the accuracy is 28 percent, the F1 score is 28 percent, the recall is 28 percent, and the precision is 30 percent. Even as the dataset grows slowly, the performance of the model can drop significantly in terms of accuracy. The accuracy increases across the 30, 35, 40, and 45 percent subsets and then drops at 50 percent of the dataset; the accuracies reported for these subsets are 53, 65, 70, and 45 percent. For the larger portions of the data (60, 70, 85, and 90 percent), the accuracy is quite low, at 40, 50, 45, and 48 percent, respectively, while for the 65, 75, 80, 95, and 100 percent portions the accuracy is comparatively high, at 57, 70, 53, 55, and 53 percent, respectively. Considering all of the experiments, the findings indicate that the accuracy is random across different portions of the dataset. The number of instances does not affect the accuracy, while it does affect the total execution time. Considering the experimental results, the total required execution time is higher when the number of instances is smaller; on the other hand, the execution time starts to decrease as the number of instances increases, up to a certain threshold. When the data proportion crosses that threshold, the required time gradually starts to increase. Table 1 shows that for 5 percent and 10 percent of the total dataset, the execution times are 12min 24s and 11min 42s, respectively. From 15 percent to 80 percent of the dataset, except for the 35 percent and 55 percent subsets, the execution time remains between 9min 19s and 9min 48s. From 85 to 100 percent of the data, the execution time increases from about 10 min to 11 min.
Observing the quantum neural network experiments on different portions of the data, we found that the proportion of the dataset does not have a consistent effect on the model's accuracy, but the execution time does vary with different proportions of the dataset.

### _Experimental Results: Traditional Neural Network (NN)_

The results derived from the conventional neural network model are similar to those of the quantum neural network in terms of the accuracy metric, as the accuracy is random across different portions of the dataset. Considering 5 percent of the dataset, the accuracy is 50 percent, the F1 score is 67 percent, the precision is 100 percent, and the recall is 50 percent; for 10 percent of the dataset, the accuracy is 46 percent, the F1 score is 63 percent, the precision is 100 percent, and the recall is 46 percent; for 15 percent of the dataset, the accuracy is 54 percent, the F1 score is 70 percent, the precision is 100 percent, and the recall is 54 percent. In other words, for the smallest portion of the dataset the accuracy is 50 percent; after increasing the data by 5 percent, the accuracy drops by 4 points to 46 percent; and after increasing the portion by 10 percentage points, the accuracy rises by 4 points to 54 percent. For the larger proportions of the dataset (80 percent and above), the accuracy values are 53, 52, 54, 54, and 53 percent.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Data percentage & Precision & Recall & F1-score & Accuracy & Execution Time \\ \hline [MISSING_PAGE_POST] \end{tabular} \end{table} Table 1: Comparative analysis of the different portions of the ClaMP dataset using the quantum machine learning model (QNN).

\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline N percent of total data points & Precision & Recall & F1-score & Accuracy & Execution Time \\ \hline 5 & 1.00 & 0.50 & 0.67 & 0.50 & 22.1s \\ \hline 10 & 1.00 & 0.46 & 0.63 & 0.46 & 15.2s \\ \hline 15 & 1.00 & 0.54 & 0.70 & 0.54 & 19.8s \\ \hline 20 & 1.00 & 0.48 & 0.65 & 0.48 & 44s \\ \hline 25 & 1.00 & 0.47 & 0.64 & 0.47 & 26.8s \\ \hline 30 & 1.00 & 0.52 & 0.69 & 0.52 & 41.9s \\ \hline 35 & 1.00 & 0.58 & 0.74 & 0.58 & 36.4s \\ \hline 40 & 1.00 & 0.49 & 0.66 & 0.49 & 42.3s \\ \hline 45 & 1.00 & 0.48 & 0.65 & 0.48 & 46.3s \\ \hline 50 & 1.00 & 0.54 & 0.70 & 0.54 & 1min 23s \\ \hline 55 & 1.00 & 0.53 & 0.70 & 0.53 & 1min 22s \\ \hline 60 & 1.00 & 0.51 & 0.68 & 0.51 & 51.1s \\ \hline 65 & 1.00 & 0.55 & 0.71 & 0.55 & 53.6s \\ \hline \end{tabular} \end{table} Table 2: Comparative analysis of the different portions of the ClaMP dataset using the classical machine learning model (NN).

Analyzing the experimental results, the accuracy does not follow any pattern: it neither decreases nor increases consistently with the proportion of the dataset, but instead shows very unpredictable and random behavior. Therefore, the different proportions of the dataset do not have any consistent impact on the model's performance in terms of the accuracy metric. However, in terms of total execution time, the different portions of the dataset significantly affect the neural network model. We observed that for smaller portions, namely 5, 10, 15, 20, 25, 30, 35, 40, and 45 percent of the total dataset, the required execution times are 22.1s, 15.2s, 19.8s, 44s, 26.8s, 41.9s, 36.4s, 42.3s, and 46.3s, respectively.
For 50, 55, 70, 75, 80, 85, 90, 95, and 100 percent of the total dataset, the required execution times are 1min 23s, 1min 22s, 1min 19s, 1min 13s, 1min 16s, 1min 22s, 1min 23s, 1min 17s, and 1min 25s, respectively.

## V Discussion

Quantum machine learning is an emerging approach, and extensive experiments on how its performance depends on the proportion of the dataset have yet to be conducted. In this study, we focus on experimenting with a quantum machine learning model on different ratios of a dataset and observe how QML behaves with different ratios of the dataset. Further, we conducted a comparative analysis between the performance of the quantum machine learning model and the classical machine learning model, to check how the traditional machine learning model works in comparison with the quantum machine learning model. According to the experiments, the various ratios of data seem to have little influence on QML in terms of accuracy; however, the efficiency metric is affected, as the efficiency drops with larger proportions of the dataset, up to a certain limit. The proportions we chose are 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90, 95, and 100 percent. The accuracy we found is random, and the execution time decreases from 15 percent of the data to 80 percent of the data; then it starts to increase again and continues to do so up to 100 percent. The accuracy of the classical machine learning model is also random, but its efficiency starts to drop with higher ratios of the dataset. The accuracy, F1 score, precision, and recall results show that varying the portion of the dataset does not have a consistent impact on either the quantum neural network model or the classical machine learning model. The results show a random pattern throughout the entire dataset. However, we found two distinct execution-time patterns for the two models. For the QNN, the execution time decreases as the portion of the dataset increases, up to a certain threshold of the data proportion. For the classical model, the execution time increases as the proportion of the dataset increases. Therefore, the required execution time behaves in opposite ways for the quantum machine learning model and the classical neural network model on the software vulnerability dataset.

## VI Conclusion

Recently, quantum computing has become a prominent topic, offering opportunities for the computation of machine learning algorithms that solve complex problems. This paper conducted a comparative study of quantum neural networks (QNN) and traditional neural networks (NN) and analyzed the performance of both models using a software supply chain attack dataset. Due to the limited availability of quantum computers, the QML model was applied on an open-source PennyLane simulator. We utilized accuracy and processing-time metrics for evaluating the models' performance. The experimental results indicate that QNN and NN differ in execution time, with the QNN model requiring considerably more time than the NN model. However, the execution time for the QNN decreases with a higher proportion of the dataset, while the execution time for the NN increases with a higher percentage of the dataset. Although quantum machine learning has been growing rapidly over the last few decades, further advancement is still required, as the current generation of quantum simulators comes with a limited number of qubits, which is not sufficient for software supply chain attack datasets.
A larger number of qubits, combined with quantum machine learning models, may play a big role in improving classification performance and reducing computation time.

## Acknowledgement

The work is partially supported by the U.S. National Science Foundation Awards 2209638, 2209636, and 2209637. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
Software supply chain (SSC) attacks have rapidly become one of the most important challenges as the field of software development advances. In general, SSC attacks are carried out during the software development process, introduce vulnerabilities into software products, and can further spread downstream to customers and stakeholders. Machine learning has been established as a method for detecting and preventing software security vulnerabilities. Furthermore, quantum machine learning holds potential for addressing SSC attacks. Considering the distinction between traditional machine learning and quantum machine learning, performance can vary greatly depending on the proportion of the experimental data. In this paper, we compare the performance of quantum neural networks (QNN) and conventional neural networks (NN), conducting experiments using ClaMP, a dataset for software supply chain attacks. Our goals are to distinguish the difference in performance between QNN and NN and, in order to conduct the experiments,
2309.06810
Leveraging SE(3) Equivariance for Learning 3D Geometric Shape Assembly
Shape assembly aims to reassemble parts (or fragments) into a complete object, which is a common task in our daily life. Different from semantic part assembly (e.g., assembling a chair's semantic parts like legs into a whole chair), geometric part assembly (e.g., assembling bowl fragments into a complete bowl) is an emerging task in computer vision and robotics. Instead of semantic information, this task focuses on geometric information of parts. As both the geometric and pose spaces of fractured parts are exceptionally large, shape pose disentanglement of part representations is beneficial to geometric shape assembly. In our paper, we propose to leverage SE(3) equivariance for such shape pose disentanglement. Moreover, while previous works in vision and robotics only consider SE(3) equivariance for the representations of single objects, we move a step forward and propose leveraging SE(3) equivariance for representations considering multi-part correlations, which further boosts the performance of the multi-part assembly. Experiments demonstrate the significance of SE(3) equivariance and our proposed method for geometric shape assembly. Project page: https://crtie.github.io/SE-3-part-assembly/
Ruihai Wu, Chenrui Tie, Yushi Du, Yan Zhao, Hao Dong
2023-09-13T09:00:45
http://arxiv.org/abs/2309.06810v2
# Leveraging SE(3) Equivariance for Learning 3D Geometric Shape Assembly

###### Abstract

Shape assembly aims to reassemble parts (or fragments) into a complete object, which is a common task in our daily life. Different from semantic part assembly (e.g., assembling a chair's semantic parts like legs into a whole chair), geometric part assembly (e.g., assembling bowl fragments into a complete bowl) is an emerging task in computer vision and robotics. Instead of semantic information, this task focuses on geometric information of parts. As both the geometric and pose spaces of fractured parts are exceptionally large, shape pose disentanglement of part representations is beneficial to geometric shape assembly. In our paper, we propose to leverage SE(3) equivariance for such shape pose disentanglement. Moreover, while previous works in vision and robotics only consider SE(3) equivariance for the representations of single objects, we move a step forward and propose leveraging SE(3) equivariance for representations considering multi-part correlations, which further boosts the performance of the multi-part assembly. Experiments demonstrate the significance of SE(3) equivariance and our proposed method for geometric shape assembly. Project page: [https://crtie.github.io/SE-3-part-assembly/](https://crtie.github.io/SE-3-part-assembly/)

## 1 Introduction

Shape assembly aims to compose the parts or fragments of an object into a complete shape. It is a common task in the human-built world, from furniture assembly [16, 36] (_e.g._, assembling chair parts like legs and handles into a whole chair) to fractured object reassembly [5, 24] (_e.g._, assembling bowl fractures into a whole bowl). When trying to complete an object from parts, we focus on their _geometric_ and _semantic_ information. There is a vast literature in both the computer vision and robotics fields studying the shape assembly problem, especially for application purposes like furniture assembly and object assembly [1, 16, 19, 36]. Imagine we want to assemble a simple table with four wooden sticks and a flat board: we can infer that the sticks are the table legs so they should be vertically placed, while the board is the table top and should be horizontally placed. Here, we not only use geometric clues to infer the parts' functions but also use semantic information to predict the parts' poses. Recently, a two-part geometric mating dataset was proposed in NSM [5], which considers shape assembly from a purely geometric perspective, without relying on semantic information. This work randomly cuts an object into two parts, and studies how to mate the fragment pairs into the original shape. Such a design is practical in some applications such as object kitting [7, 18], form fitting [35], and protein binding [27].

Figure 1: **Geometric Shape Assembly** aims to assemble different fractured parts into a whole shape. We propose to leverage **SE(3) Equivariance** for learning Geometric Shape Assembly, which disentangles poses and shapes of fractured parts, and performs better than networks without SE(3)-equivariant representations.

In these tasks, the semantic information can hardly be acquired from the fragment shapes, and thus it is nearly impossible to predict fragments' poses relying on semantic information (_e.g._, a part that acts as a leg should be vertically placed). Instead, such geometric mating tasks should be accomplished by relying on geometric cues.
Furthermore, the pairwise assembly task can be extended to the multi-part assembly task, and thus the pose space will grow much larger. Recent work [24] proposes a large-scale dataset named Breaking Bad, which models the destruction process of how an object breaks into fragments. For each object, there are multiple broken fragments with various and complex geometries, making it much more challenging for geometric shape understanding and assembly. Therefore, how to reduce the pose space and effectively assemble multiple non-semantic fragments with diverse geometry remains an open problem. Compared to furniture assembly, which relies on both part semantics and geometry, geometric assembly of diverse fractures mainly focuses on geometric information, while the spaces of part pose and geometry are much larger in this task. Therefore, **shape pose disentanglement** plays a significant role in boosting the performance of geometric shape assembly. Recently, achieving SE(3) equivariance for object representations has been attracting much attention in 3D computer vision and robotics. Many works have studied SE(3)-equivariant architectures [3, 4, 6, 8, 13, 28, 29, 30, 38] and leveraged SE(3) equivariance in object pose estimation [17, 21] or robotic object manipulation [14, 23, 25, 26, 32]. SE(3) equivariance is well suited to disentangling the shapes and poses of parts in geometric shape assembly. Specifically, like previous works [5, 24], we formulate the shape assembly task as a pose prediction problem, where the target is to predict the canonical SE(3) pose of each given fragment to compose a whole shape. For every single fragment, the predicted pose transformation should be _equivariant_ to its original pose, while being _invariant_ to other fragments' poses. Accordingly, the learned representations have two main features: _consistency_ and _stability_. _Consistency_ means that parts with the same geometry but different poses should have _equivariant_ representations, while _stability_ means that the representation of a specific part should be _invariant_ to all other parts' poses and only related to their geometric characteristics. Leveraging such properties, the network can reduce the large pose space of the complex geometric shape assembly task and thus focus on the fragments' geometric information for shape assembly. While most previous works in vision and robotics only leverage SE(3)-equivariant representations of a single shape, there exist multiple complex fractured parts in our geometric shape assembly task, and extracting other parts' geometric information is essential to a successful reassembly. How to leverage SE(3)-equivariant representations for multi-part shape assembly is not a trivial problem, as the learned part representations should not only encode the part itself, but also its correlations with other parts (_e.g._, whether the notches of two parts match each other), while keeping the equivariance property. We propose to utilize both equivariant and invariant representations of single parts to compose equivariant part representations that include part correlations. To the best of our knowledge, we are the first to leverage the SE(3) equivariance property among multiple objects. In summary, we make the following contributions:

* We propose to leverage SE(3) equivariance that disentangles shapes and poses of fractured parts for geometric shape assembly.
* Utilizing both SE(3)-equivariant and -invariant representations, we learn SE(3)-equivariant part representations with part correlations for multi-part assembly. * Experiments on representative benchmarks, including both two-part and multi-part 3D geometric shape assembly, demonstrate the superiority of SE(3) equivariance and our proposed method. ## 2 Related Works ### 3D Shape Assembly Shape assembly is a long-standing problem with a rich literature. Many works have been investigating how to construct a complete shape from given parts [5, 9, 12, 16, 19, 22, 31, 33, 36], especially in application-specific domains. Based on PartNet, a large-scale dataset that contains diverse 3D objects with fine-grained part information, previous works propose a dynamic graph learning method [36] to predict 6-DoF poses for each input part (_e.g._, the back, legs and bars of a chair) and then assemble them into a single shape as output, or study how to assemble 3D shape given a single image depicting the complete shape [19]. Besides, many works study the shape assembly problem for different applications like furniture assembly [16], or unique needs of CAD workflow [12]. However, most previous works rely deeply on the semantic information of object parts, sometimes bypassing the geometric cues. As for the geometric cues, a recent work, NSM [5], tries to solve the two-part mating problem by mainly focusing on shape geometries without particular semantic information. Besides, a new dataset, Breaking Bad [24], raises a new challenge about how to assemble multiple non-semantic fragments into a complete shape. This work demonstrates that fractured shape reassembly is still a quite open problem. Following these two works, we focus on studying the geometric information and tackling the pure geometric shape assembly problem. ### SE(3)-Equivariant Representations Recently, achieving SE(3) equivariance has attracted a lot of attention, and many SE(3)-equivariant architectures have emerged [3, 4, 8, 13, 28, 29, 30, 38]. Thomas _et al._[28] propose a tensor field neural network that uses filters built from spherical, and Deng _et al._[6] introduce Vector Neurons that can facilitate rotation equivariant neural networks by lifting standard neural network representations to 3D space. We follow Vector Neuron [6] and apply the vector neuron version of DGCNN [29] model in our pipeline. Meanwhile, many recent works have utilized equivariant models for point cloud registration [20], object detection [34], pose estimation [17, 21], robotic manipulation [14, 23, 25, 26, 32], and demonstrate that such equivariant models can significantly improve the sample efficiency and generalization ability. In this paper, we leverage SE(3)-equivariant representations for geometric shape assembly to disentangle the shape and the pose. ## 3 Problem Formulation Imagine an object has been broken into \(N\) fractured parts (_e.g._, a broken porcelain vase found during archaeological work), we obtain the point cloud of each part, which forms \(\mathcal{P}=\{P_{i}\}_{i=1}^{N}\). Our goal is to assemble these parts together and recover the complete object. Formally, our framework takes all parts' point cloud \(\mathcal{P}\) as the input and predicts the canonical 3D pose of each part. We denote the predicted SE(3) pose of the \(i\)-th fractured part as \((R_{i},\ T_{i})\), where \(R_{i}\in\mathbb{R}^{3\times 3}\) is the predicted rotation matrix and \(T_{i}\in\mathbb{R}^{3}\) is the predicted translation vector. 
Then, we apply the predicted pose to transform the point cloud of each part and get the \(i\)-th part's predicted point cloud \(P_{i}^{\prime}=P_{i}R_{i}+T_{i}\). The union of all the transformed point clouds \(P_{whole}^{\prime}=\bigcup_{i}P_{i}^{\prime}\) is our predicted assembly result. ## 4 Method Our method leverages SE(3)-equivariant representations for geometric shape assembly. We start the Method Section by describing how to leverage SE(3)-equivariant for a single part as a basis (Sec. 4.1). Then, as geometric shape assembly requires each part to consider its correlations with other parts, we describe extending the leverage of SE(3) equivariance from single-part representations to part representations considering correlations with other parts (Sec. 4.2). Based on the learned equivariant representations and apart from predicting the pose of each fractured part, to further ensure all the re-posed parts compose a whole object, we propose translation embedding (Sec. 4.3) for geometric assembly and use adversarial learning (Sec. 4.4). Finally, we describe the loss functions (Sec. 4.5). ### Leveraging SE(3) Equivariance for Single Parts For the brevity of description, we first introduce a simple version (as a basis of our whole method): leveraging SE(3) equivariance in single parts' representations, without considering the correlations between multiple parts. Specifically, in this section, we start by revisiting Vector Neurons Networks (VNN) [6], a general framework for SO(3)-equivariant (rotation equivariant) network. Leveraging VNN, we introduce how we leverage rotation equivariance and translation equivariance for single parts. Vector Neurons Networks (VNN)is a general framework for SO(3)-equivariant networks. It extends neurons from 1D scalars to 3D vectors and provides various SO(3)-equivariant neural operations including linear layers (such as Conv and MLP), non-linear layers (such as Pooling and ReLU) and normalization layers. Besides, it also designs SO(3)-invariant layers to extract SO(3)-invariant representations. The above properties are mathematically rigorous. Rotation Equivariance and Invariance.In geometric part assembly, we suppose the predicted rotation of a part is equivariant with its original orientation and invariant with other parts' orientation. Accordingly, the network should learn both equivariant and invariant representations for each part. Based on VNN, we build a DGCNN [29] encoder with a SO(3)-equivariant head \(\mathcal{E}_{equiv}\) and a SO(3)-invariant encoder head \(\mathcal{E}_{inv}\) to extract part features with corresponding properties. Specifically, given an input point cloud \(P\), and a random rotation matrix \(R\), the encoders \(\mathcal{E}_{equiv}\) and \(\mathcal{E}_{inv}\) respectively satisfy rotation equivariance and invariance: \[\mathcal{E}_{equiv}(PR)=\mathcal{E}_{equiv}(P)R \tag{1}\] \[\mathcal{E}_{inv}(PR)=\mathcal{E}_{inv}(P) \tag{2}\] Translation Equivariance.To achieve translation equivariance in parts' pose prediction, we preprocess the raw point cloud of each part by posing its gravity center on the coordinate origin. That's to say, with an input point cloud \(P=(p_{1},p_{2},...,p_{n}),p_{i}\in\mathbb{R}^{3}\), where \(n\) is the number of points, we compute its gravity center \(\hat{x}=(\sum_{i=1}^{n}p_{i})/n\), and get the preprocessed point cloud \(\tilde{P}=P-\tilde{x}\), and then we use \(\tilde{P}\) as the network input. In this way, our prediction is translation equivariant. 
Formally, let \(T_{pred}\) denote the predicted translation output, if the part's point cloud changes from \(P\) to \(P+\Delta T\), we have: \[T_{pred}(P+\Delta T)=T_{pred}(P)+\Delta T \tag{3}\] ### Leveraging SE(3) Equivariance for Parts with Part Correlations In the geometric shape assembly task, all the fractured parts should be reassembled together, with each edge matching up well with the others. Therefore, all parts' geometry information, especially the geometry of edges, is significant. Besides, it is necessary to analyze all parts' shape information together to infer the position of each individual part, otherwise, it would be nearly impossible to predict accurate positions for those parts. Therefore, the correlations between parts are essential in the geometric assembly task, and we propose a correlation module to aggregate the information of multiple parts, while keeping leveraging SE(3) equivariance. Note that the translation equivariance in part pose predictions can be achieved using the same approach in Sec. 4.1, so in this section we mainly describe how to leverage rotation equivariance when taking multiple parts. #### Rotation Equivariance in Multi-Part Representations. To predict the final pose of part \(i\), we should consider its correlations with other parts. In fact, what really matters for this part's pose prediction is other parts' shape geometry instead of their initial poses. In other words, changed initial poses of other parts may not affect the predicted pose of part \(i\). Therefore, it comes naturally that we leverage the rotation-invariant representations of other parts (which are invariant to their initial poses) to extract their geometric features and further compute their correlations with part \(i\). Specifically, given the point cloud \(P_{i}\in\mathbb{R}^{n\times 3}\) of the \(i\)-th part, we pass it through rotation-equivariant and -invariant encoders \(\mathcal{E}_{equiv}\) and \(\mathcal{E}_{inv}\) and get corresponding features (shown in the **Equivariant and Invariant Feature Extraction** module in Figure 2): \[\begin{split}& F_{i}=\mathcal{E}_{equiv}(P_{i}),\quad F_{i}\in \mathbb{R}^{f\times 3}\\ & G_{i}=\mathcal{E}_{inv}(P_{i}),\quad G_{i}\in\mathbb{R}^{f \times f}\end{split} \tag{4}\] As shown in the **Part Correlation** module in Figure 2, to extract the correlation feature \(C_{i,\,j}\) between part \(i\) and part \(j,(j\neq i)\), we use matrix multiplication between \(G_{j}\) and \(F_{i}\): \[C_{i,\,j}=G_{j}\cdot F_{i},\quad C_{i,j}\in\mathbb{R}^{f\times 3} \tag{5}\] where \(\cdot\) denotes **matrix multiplication**. As \(G_{j}\) is invariant to \(P_{j}\), and \(F_{i}\) is equivariant to \(P_{i}\), the matrix multiplication \(C_{i,\,j}\) is thus equivariant to \(P_{i}\) with the geometry correlation between \(P_{i}\) and \(P_{j}\). Figure 2: **Overview of our proposed framework.** Taking as input the point cloud of each part \(i\), our framework first outputs the equivariant representation \(F_{i}\) and invariant representation \(G_{i}\), computes the correlation between part \(i\) and each part \(j\) using the matrix multiplication of \(F_{i}\) and \(G_{j}\), and thus gets each part’s equivariant representation \(H_{i}\) with part correlations. The rotation decoder and the translation decoder respectively take \(H\) and decode the rotation and translation of each part. Additional constraints such as adversarial training and canonical point cloud reconstruction using \(G\) further improves the performance of our method. 
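As a small numerical illustration of the algebra behind Eq. (5) (using random placeholder features with the correct shapes and transformation rules, rather than the actual VN-DGCNN encoders), the following sketch checks that rotating part \(i\) rotates the correlation feature accordingly, while rotating part \(j\) leaves it unchanged:

```python
import numpy as np

f_dim = 8
F_i = np.random.randn(f_dim, 3)      # equivariant feature of part i (f x 3)
G_j = np.random.randn(f_dim, f_dim)  # invariant feature of part j  (f x f)

# Build a random rotation matrix R via QR decomposition.
Q, _ = np.linalg.qr(np.random.randn(3, 3))
R = Q * np.sign(np.linalg.det(Q))    # flip the sign if needed so det(R) = +1

C = G_j @ F_i                        # correlation feature, Eq. (5)

# Rotating part i maps F_i to F_i R (Eq. (1)), while G_j is unchanged,
# so the correlation feature is rotated in the same way: C -> C R.
C_after_rotation = G_j @ (F_i @ R)
assert np.allclose(C_after_rotation, C @ R)

# Rotating part j leaves G_j unchanged (Eq. (2)), so C is unchanged as well.
```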
Furthermore, to get the part representation \(H_{i}\) considering correlations with all other parts while maintaining equivariance with \(P_{i}\), we define \(H_{i}\) as: \[H_{i}=\frac{1}{N-1}\sum_{j=1,\,j\neq i}^{N}G_{j}\cdot F_{i},\quad H_{i}\in \mathbb{R}^{f\times 3} \tag{6}\] As \(G_{j}\) is invariant with \(P_{i}\) and \(F_{i}\) is equivariant with \(P_{i}\), it's easy to verify that, \(H_{i}\) is equivariant with \(P_{i}\), and invariant with \(P_{j}(j\neq i)\). _i.e_., for any rotation matrix \(R\) applied on part \(i\) or any other part \(j\): \[\begin{split} H_{i}(P_{1},\,...,\,P_{i}R,\,...P_{N})=H_{i}(P_{1 },\,...,\,P_{i},\,...,\,P_{N})R\\ H_{i}(P_{1},\,...,\,P_{j}R,\,...P_{N})=H_{i}(P_{1},\,...,\,P_{j}, \,...,\,P_{N}),(j\neq i)\end{split} \tag{7}\] Pose Prediction.As shown in the **Pose Prediction** module in Figure 2, given the equivariant representation \(H_{i}\) of part \(i\) with part correlations, we use a pose regressor \(\mathcal{R}\) to predict its rotation \(R_{pred,\,\,i}\) and translation \(T_{pred,\,\,i}\): \[\begin{split} R_{pred,\,\,i},\,T_{pred,\,\,i}=\mathcal{R}(H_{i}), \\ R_{pred,\,\,i}\in\mathbb{R}^{3\times 3},\quad T_{pred,\,\,i}\in\mathbb{R}^{3} \end{split} \tag{8}\] Canonical Part Reconstruction.To ensure that the rotation invariant feature \(G_{i}\) encodes geometric information of \(P_{i}\) with any initial pose, we use a point cloud decoder \(\mathcal{D}\) and expect \(\mathcal{D}\) to decode point cloud of \(i\)-th part in the canonical view when receiving \(G_{i}\) (shown in the **Additional Constraint** module in Figure 2): \[P^{*}_{pred,\,i}=\mathcal{D}(G_{i}),\quad P^{*}_{pred,\,i}\in\mathbb{R}^{n \times 3} \tag{9}\] Let \(P^{*}_{gt,\,i}\) denote the canonical point cloud of \(P_{i}\) and \(P^{*}_{pred,\,i}\) denote the prediction, we minimize the Chamfer Distance between \(P^{*}_{pred,\,i}\) and \(P^{*}_{gt,\,i}\). ### Translation Embeddings for Part Representations Since the reassembled whole object is the composition of multiple re-posed parts, although the above described designs learn the pose of each part, the framework lacks leveraging the property that the representations of all parts could compose the whole object. Inspired by Visual Translation Embedding (VTransE) [11, 37] that maps different objects' features into a space where the relations between objects can be the feature translation, we propose a similar Translation Embedding where the representations of parts can be added up to the representation of the whole shape. Formally, denoting the point cloud of the whole object at canonical pose as \(P^{*}_{gt}\), we pass it through our rotation equivariant encoder to get \(F^{*}_{gt}=\mathcal{E}_{equiv}(P^{*}_{gt})\), and minimize: \[\mathcal{L}2(\sum_{i}H_{i},\,F^{*}_{gt}) \tag{10}\] where \(H_{i}\) is the rotation equivariant feature of \(i\)-th fracture. Through this procedure, rotation equivariant representations of parts would be more interpretable as a whole. ### Adversarial Learning The above described designs use proposed representations to learn the pose of each part, lacking the evaluation that all re-posed parts visually make up a whole shape. Following the design of [5], we employ a discriminator \(\mathcal{M}\) and use adversarial learning to make the re-posed parts visually look like those of a whole object, as shown in the **Additional Constraint** module in Figure 2. Our discriminator \(\mathcal{M}\) takes as input the predicted re-assembly shape \(P^{\prime}_{whole}\) (defined in Sec. 
3) and the ground truth point cloud of the whole object \(P_{whole}\), and distinguishes whether the input point clouds look visually plausible like a complete object. To achieve this, we define a loss term \(\mathcal{L}_{\text{G}}\) for training the generator (\(i.e.\) encoders \(\mathcal{E}\) and pose regressor \(\mathcal{R}\)), which is defined as: \[\mathcal{L}_{G}=\mathbb{E}\big{[}\|\mathcal{M}(P^{\prime}_{whole})-1\|\big{]}, \tag{11}\] and an adversarial loss \(\mathcal{L}_{D}\) for training the discriminator \(\mathcal{M}\), which is defined as: \[\mathcal{L}_{D}=\mathbb{E}\big{[}\|\mathcal{M}(P^{\prime}_{whole})\|\big{]}+ \mathbb{E}\big{[}\|\mathcal{M}(P_{whole})-1\|\big{]} \tag{12}\] Through the adversarial training procedure, the reassembled shapes become more plausible as a whole. ### Losses Our loss function consists of the following terms: \[\begin{split}&\mathcal{L}=\lambda_{rot}\mathcal{L}_{rot}+\lambda_{trans} \mathcal{L}_{trans}+\lambda_{point}\mathcal{L}_{point}\\ &+\lambda_{recon}\mathcal{L}_{recon}+\lambda_{embed}\mathcal{L}_ {embed}+\lambda_{adv}\mathcal{L}_{adv}\end{split} \tag{13}\] For an input broken object, we sample point clouds from every fractured part and form \(\mathcal{P}=\{P_{i}\}_{i=1}^{N}\). For the \(i\)-th part, we denote its ground truth rotation matrix and translation as \(R_{gt,\,\,i}\) and \(T_{gt,\,\,i}\), and the predicted rotation matrix and translation as \(R_{pred,\,i}\) and \(T_{pred,\,i}\). For rotation, we use geodesic distance (GD) between \(R_{gt}\) and \(R_{pred}\) as our rotation loss: \[\mathcal{L}_{rot}=arccos\frac{tr(R_{gt}R_{pred}^{T})-1}{2} \tag{14}\] For translation, we use \(\mathcal{L}2\) loss between \(T_{gt}\) and \(T_{pred}\) as the translation prediction loss: \[\mathcal{L}_{trans}=\mathcal{L}2(T_{pred},\,T_{gt}) \tag{15}\] Following [24], we use Chamfer Distance to further jointly supervise the predicted translation and rotation by supervising the predicted re-posed point cloud: \[\mathcal{L}_{point}=Chamfer(PR_{pred}+T_{pred},\;PR_{gt}+T_{gt}) \tag{16}\] As mentioned in Sec. 4.2, we also use Chamfer Distance as the reconstruction loss to supervise the invariant representation \(G_{i}\) can be further decoded to a canonical point cloud \(P_{pred,i}^{*}\), ensuring \(G_{i}\) encodes geometric information with any initial pose: \[\mathcal{L}_{recon}=Chamfer(P_{pred,i}^{*},\;P_{i}R_{gt,i}+T_{gt,i}) \tag{17}\] From Sec. 4.3, we design translation embedding loss to supervise that the representations of all fractured parts can be added up to the representation of the complete shape: \[\mathcal{L}_{embed}=\mathcal{L}2(\sum_{i}H_{i},\;F_{gt}^{*}) \tag{18}\] From Sec. 4.4, the adversarial loss is defined as: \[\mathcal{L}_{adv}=\mathbb{1}_{D}\mathcal{L}_{D}+\mathbb{1}_{G}\mathcal{L}_{G} \tag{19}\] where \(\mathbb{1}_{D}=1\) only if we're updating discriminator, \(\mathbb{1}_{G}=1\) only if updating generator. ## 5 Experiments ### Datasets, Settings and Metrics Datasets.We use two benchmark datasets for evaluation: * Geometric Shape Mating dataset [5] for **two-part assembly (mating)**. Objects in this dataset are cut into two parts by the randomly generated heightfields that can be parameterized by different functions. Specifically, we employ 4 kinds of cut types (planar, sine, parabolic and square functions) on 5 categories of objects (Bag, Bowl, Jar, Mug and Sofa) in ShapeNet [2]. We employ the official data collection code, collect 41,000 cuts for training and 3,100 cuts for testing. 
* Breaking Bad dataset's commonly used "everyday" object subset [24] for **multi-part assembly**. Compared with the Geometric Shape Mating dataset, this dataset is much more challenging, as the objects are irregularly broken into multiple fragments by physically plausible decomposition, resulting in more parts with much more complex geometries. Our study focuses more on this multi-part geometric assembly problem.

On both datasets, we train all methods on all categories, and test them on unseen objects in the same categories.

Metrics. Following the evaluation metrics of the two datasets [5, 24], we adopt the geodesic distance (GD) to measure the difference between the predicted rotation and the ground truth rotation. To further evaluate both the rotation and translation predictions, we compute the root mean squared error RMSE (\(R\)) between the predicted rotation \(R\) and the corresponding ground truth values, and the root mean squared error RMSE (\(T\)) between the predicted translation \(T\) and the corresponding ground truth values. Here we use Euler angles to represent rotations. Besides, we follow the evaluation protocol in [19, 24] and adopt part accuracy (PA) as an evaluation metric. This metric measures the portion of 'correctly placed' parts. We first use the predicted rotation and translation to transform the input point cloud, and then compute the Chamfer Distance between the transformed point cloud and the ground truth. If the distance is smaller than a threshold, we count the part as 'correctly placed'.

Hyper-parameters. We set the batch size to 32 for Breaking Bad and 48 for Geometric Shape Mating, and the initial learning rate of the Adam optimizer [15] to 0.0001. We train the model for 80 and 120 epochs for Geometric Shape Mating and Breaking Bad, respectively.

### Baselines

For the Geometric Shape Mating dataset, the two-part geometric shape assembly task, we compare our method with NSM [5], the state-of-the-art method for two-part mating. For the Breaking Bad dataset, the multi-part geometric shape assembly task, we modified the official code of NSM [5] from two-part geometric shape assembly to multi-part geometric assembly by predicting the pose of each input part. We also compare our method with DGL [36] and LSTM [10] following the Breaking Bad benchmark [24]. All baseline implementations use the official code of the two benchmarks [5, 24]. The baselines are described as follows:

* **NSM** [5] extracts part features using a transformer and predicts their poses for mating, achieving state-of-the-art performance in two-part mating.
* **DGL** [24, 36] uses graph neural networks to encode and aggregate part features, and predicts the pose of each part. Following [24], we remove the node aggregation procedure, as there are no parts with the same geometric appearance in geometric assembly.
* **LSTM** [24, 36, 10] uses a bi-directional LSTM that takes part features as input and sequentially predicts the pose of each part. This method resembles the decision-making process of humans when faced with geometric shape assembly problems.

### Experimental Results and Analysis

Tables 1 and 2 show the quantitative performance of our method and the baselines. The experimental results demonstrate that our method performs better than all baselines on both two-part and multi-part geometric shape assembly tasks over all evaluation metrics. As discussed in [24], predicting the rotations of multiple parts is considerably more difficult than predicting translations.
Table 1 and 2 show our method has a significant improvement in this aspect, and outperforms all baselines in the root mean squared error RMSE (\(R\)) metric and the geodesic distance (GD) metric. In particular, our rotation is around 10 degrees less than the baselines. For translation prediction, our RMSE (\(T\)) also outperforms all baselines on both datasets. In addition, our method also outperforms all baselines in part accuracy (PA), especially for the LSTM and NSM in the more challenging Breaking Bad dataset. \begin{table} \begin{tabular}{l c c c c} \hline \hline Method & RMSE (\(R\)) \(\downarrow\) & GD (\(R\)) \(\downarrow\) & RMSE (\(T\)) \(\downarrow\) & PA \(\uparrow\) \\ \hline & degree & rad & \(\times 10^{-2}\) & \(\%\) \\ \hline DGL & 84.1 & 2.21 & 14.7 & 22.9 \\ LSTM & 87.6 & 2.24 & 16.3 & 13.4 \\ NSM & 85.6 & 2.21 & 15.7 & 16.0 \\ Ours & **75.3** & **2.00** & **14.1** & **26.7** \\ \hline \hline \end{tabular} \end{table} Table 1: **Quantitative evaluation on Breaking Bad dataset for multi-part geometric assembly.** We report quantitative results of our method and three learning-based shape assembly baselines on the everyday object subset. \begin{table} \begin{tabular}{l c c c c} \hline \hline Method & RMSE (\(R\)) \(\downarrow\) & GD (\(R\)) \(\downarrow\) & RMSE (\(T\)) \(\downarrow\) & PA \(\uparrow\) \\ \hline & degree & rad & \(\times 10^{-2}\) & \(\%\) \\ \hline NSM & 21.3 & 0.52 & 2.9 & 79.1 \\ Ours & **15.9** & **0.39** & **2.7** & **85.7** \\ \hline \hline \end{tabular} \end{table} Table 2: **Quantitative evaluation on Geometric Shape Mating dataset for two-part geometric assembly.** We report quantitative results of our method and the NSM baseline. Figure 4: **Qualitative results on Geometric Shape Mating dataset for two-part geometric shape assembly.** We observe better pose predictions (especially rotation) than NSM. Figure 3: **Qualitative results on Breaking Bad dataset for multi-part geometry shape assembly.** We observe better rotation and translation predictions (especially rotation) than baseline methods. This may result from our SO(3) equivariant network that disentangles shape and pose information, reducing the difficulty of learning rotations of different posed parts with different geometries, thus allowing for better predictions. Figure 4 shows qualitative examples of our method and NSM on the Geometric Shape Mating dataset. Although it is a comparatively simple dataset and the task is nearly solved by previous methods, our method still performs better, especially in rotation prediction. Figure 3 shows qualitative comparisons between our method and baselines on the more challenging and realistic Breaking Bad dataset. Although this task is highly difficult and all methods could not solve the task, our method can better predict the pose (especially the rotation) of each part. ### Ablation Studies To further evaluate the effectiveness of different components in our framework, we conduct ablation studies by comparing our method with the following ablated versions: * **w/o Corr**: our method without considering part correlations in each part's equivariant representations. * **w/o TE**: our method without translation embedding. * **w/o Adv**: our method without adversarial learning. 
As shown in Tables 3 and 4 and Figure 5, the performance decline when removing part correlations from the part representations demonstrates that our proposed part correlations help in the geometric assembly of fractured parts, as it is important to aggregate geometric information between parts for geometric shape assembly. As shown in Tables 3 and 4, the translation embedding and adversarial training also help improve the performance of our method; as described in Sections 4.3 and 4.4, translation embedding and adversarial learning can serve as pose fine-tuners and improve pose predictions.

## 6 Conclusion

In this paper, to tackle 3D geometric shape assembly tasks that rely on _geometric_ information of fractured parts, we propose to leverage SE(3)-equivariant representations that disentangle shapes and poses to facilitate the task. Our method leverages SE(3) equivariance in part representations considering part correlations, by learning both SE(3)-equivariant and -invariant part representations and aggregating them into SE(3)-equivariant representations. To the best of our knowledge, we are the first to explore leveraging SE(3) equivariance on multiple objects in related fields. Experiments demonstrate the effectiveness of our method.

**Limitations & Future Work** In Breaking Bad, although we perform better than all baselines, this does not mean that we have solved the problem. When the number of fractures increases, the problem's complexity increases sharply, and most existing methods cannot perform well. To completely solve the problem, additional designs need to be introduced; leveraging SE(3) equivariance is orthogonal to many of these designs. For the whole framework, while the learned representations are equivariant to the input part poses, the rotation regressor is non-equivariant, as an equivariant regressor would limit the degrees of freedom in pose prediction and lead to worse results. Besides, it takes more computing resources and time to train equivariant networks than ordinary networks.

## 7 Acknowledgements

This work was supported by National Natural Science Foundation of China (No. 62136001).

\begin{table} \begin{tabular}{l c c c c} \hline \hline Method & RMSE (\(R\)) \(\downarrow\) & GD (\(R\)) \(\downarrow\) & RMSE (\(T\)) \(\downarrow\) & PA \(\uparrow\) \\ \hline & degree & rad & \(\times 10^{-2}\) & \(\%\) \\ \hline w/o Corr & 79.8 & 2.17 & 15.7 & 18.4 \\ w/o TE & 77.2 & 2.04 & 15.2 & 22.5 \\ w/o Adv & 77.6 & 2.02 & 14.3 & 23.7 \\ Ours & **75.3** & **2.00** & **14.1** & **26.7** \\ \hline \hline \end{tabular} \end{table} Table 4: **Ablations on Breaking Bad.** We compare with versions removing part correlations (w/o Corr), translation embedding (w/o TE) and adversarial learning (w/o Adv).

Figure 5: **Qualitative results of our method with and without part correlation on Geometric Shape Mating dataset.** The parts with representations considering part correlations match better.

\begin{table} \begin{tabular}{l c c c c} \hline \hline Method & RMSE (\(R\)) \(\downarrow\) & GD (\(R\)) \(\downarrow\) & RMSE (\(T\)) \(\downarrow\) & PA \(\uparrow\) \\ \hline & degree & rad & \(\times 10^{-2}\) & \(\%\) \\ \hline w/o Corr & 19.2 & 0.52 & 2.9 & 80.5 \\ w/o TE & 17.3 & 0.46 & 2.8 & 84.3 \\ w/o Adv & 16.7 & 0.43 & 2.8 & 82.6 \\ Ours & **15.9** & **0.39** & **2.7** & **85.7** \\ \hline \hline \end{tabular} \end{table} Table 3: **Ablations on Geometric Shape Mating.** We compare with versions removing part correlations (w/o Corr), translation embedding (w/o TE) and adversarial learning (w/o Adv).
Shape assembly aims to reassemble parts (or fragments) into a complete object, which is a common task in daily life. Unlike semantic part assembly (e.g., assembling parts such as chair legs into a whole chair), geometric part assembly (e.g., assembling a bowl from its fragments) is an emerging task in computer vision and robotics. This task focuses on the geometric information of parts rather than semantic information. Since the geometric pose space of fragments is extremely large, shape-pose disentanglement is beneficial for geometric shape assembly. In this paper, we leverage SE(3) equivariance for such shape-pose disentanglement. In addition, whereas previous work in vision and robotics considered SE(3) equivariance only for single objects, we take a step forward and leverage SE(3) equivariance for representations that take into account the correlations among multiple parts,
2308.00190
Universal Majorization-Minimization Algorithms
Majorization-minimization (MM) is a family of optimization methods that iteratively reduce a loss by minimizing a locally-tight upper bound, called a majorizer. Traditionally, majorizers were derived by hand, and MM was only applicable to a small number of well-studied problems. We present optimizers that instead derive majorizers automatically, using a recent generalization of Taylor mode automatic differentiation. These universal MM optimizers can be applied to arbitrary problems and converge from any starting point, with no hyperparameter tuning.
Matthew Streeter
2023-07-31T23:01:54
http://arxiv.org/abs/2308.00190v1
# Universal Majorization-Minimization Algorithms+ ###### Abstract Majorization-minimization (MM) is a family of optimization methods that iteratively reduce a loss by minimizing a locally-tight upper bound, called a majorizer. Traditionally, majorizers were derived by hand, and MM was only applicable to a small number of well-studied problems. We present optimizers that instead derive majorizers _automatically_, using a recent generalization of Taylor mode automatic differentiation. These _universal_ MM optimizers can be applied to arbitrary problems and converge from any starting point, with no hyperparameter tuning. ## 1 Introduction Optimization plays a central role in machine learning and statistics. To fit a model, one typically minimizes a loss function \(f:\mathbb{R}^{n}\to\mathbb{R}\), where \(f\) is available in symbolic form, allowing its derivatives to be computed easily via automatic differentiation. The optimizers used to fit models to data typically fall into one of two categories: 1. _General-purpose optimizers based on Taylor polynomials._ This category includes gradient descent, Newton's method, and many other optimizers. 2. _Majorization-minimization (MM) optimizers._ These methods are based on _majorizers_ (locally-tight upper bounds), which have been derived by hand for various loss functions of interest. MM optimizers are attractive because they converge from any starting point, do not require hyperparameter tuning, and sometimes offer faster rates of convergence than general-purpose methods. MM optimizers have been derived for logistic regression [5], quantile regression [18], support vector machines [13], and many other problems [8, 22]. However, because each new problem requires a non-trivial derivation, MM has not been applicable to more complex loss functions, such as the ones used in deep learning. In this paper, we will derive the upper bounds used in MM optimization algorithms _automatically_, thereby creating general-purpose MM optimizers that are as widely applicable as gradient-based methods. We thereby greatly extend the reach of this decades-old optimization template, making its benefits much more broadly available. Furthermore, we will see that the resulting optimizers perform well on a variety of problems. To automatically derive majorizers, we use a recently-developed algorithm [31] that automatically bounds the Taylor remainder series, using an interval arithmetic variant of Taylor mode automatic differentiation. Computing a majorizer this way requires an additional forward pass that uses memory linear in the number of inputs. For this reason, we do not majorize the original loss function (which would require too much memory), but instead seek a majorizer that is valid over a lower-dimensional subspace. Our first universal MM algorithm, called SafeRate, uses a one-dimensional upper bound to derive a learning rate that is guaranteed to monotonically reduce the loss. This learning rate depends on the current iterate \(\mathbf{x}_{t}\), and therefore adapts during the course of optimization. Our second algorithm, SafeCombination, generalizes this by computing a linear combination of \(d\) update directions rather than a single learning rate. A possible concern with any such approach is that our automatically-derived upper bounds may be too loose, especially for complex losses. Reassuringly, we find that the majorizers used by SafeRate are tight in practice, even for multi-layer perceptrons with up to 64 hidden layers. 
Although our universal MM optimizers are significant in their own right, we hope their biggest impact will be as illustrations of an underappreciated fact: _A general-purpose optimizer can automatically compute polynomial upper and lower bounds on the loss, rather than simply approximate it with a Taylor polynomial._ ## 2 Background We begin with a review of MM optimization algorithms, followed by a discussion of recently-developed techniques for bounding the Taylor remainder series. ### MM Optimization Algorithms Majorization-minimization (MM) is a well-known optimization technique based on a simple idea: by minimizing a locally-tight upper bound on a loss function of interest, we can iteratively reduce the loss. To present the method formally, we will use the following definition. **Definition 1** (Majorization).: _Consider a function \(f:\mathbb{R}^{n}\to\mathbb{R}\). A function \(\bar{f}:\mathbb{R}^{n}\times\mathbb{R}^{n}\to\mathbb{R}\) majorizes \(f\) if, for any \(\mathbf{y}\in\mathbb{R}^{n}\):_ \[f(\mathbf{x})\leq\bar{f}(\mathbf{x},\mathbf{y})\quad\forall\mathbf{x}\in \mathbb{R}^{n} \tag{1}\] _and furthermore, \(\bar{f}(\mathbf{y},\mathbf{y})=f(\mathbf{y})\)._ If \(\bar{f}\) majorizes \(f\), then \(\bar{f}\) provides an upper bound on \(f\) that is tight at a specified point \(\mathbf{y}\), and valid everywhere. Figure 1 illustrates a majorizer for the softplus function. Given an arbitrary starting point \(\mathbf{x}_{0}\in\mathbb{R}^{n}\), a majorization-minimization algorithm produces a sequence of iterates \(\mathbf{x}_{0},\mathbf{x}_{1},\mathbf{x}_{2},\ldots\) by setting \[\mathbf{x}_{t+1}=\operatorname*{argmin}_{\mathbf{x}\in\mathbb{R}^{n}}\left\{ \bar{f}(\mathbf{x},\mathbf{x}_{t})\right\}. \tag{2}\] An appealing property of MM is that the loss decreases monotonically: \(f(\mathbf{x}_{t+1})\leq f(\mathbf{x}_{t})\) for all \(t\). This follows from the sequence of inequalities: \[f(\mathbf{x}_{t+1})\leq\bar{f}(\mathbf{x}_{t+1},\mathbf{x}_{t})\leq\bar{f}( \mathbf{x}_{t},\mathbf{x}_{t})=f(\mathbf{x}_{t}) \tag{3}\] where the first inequality holds because \(\overline{f}\) majorizes \(f\), and the second inequality holds by definition of \(\mathbf{x}_{t+1}\). Majorization-minimization is the subject of a large literature, and many different majorizers have been derived for functions with specific properties. For example, if one can show that \(f\) is \(\beta\)-smooth (meaning that \(\nabla f\) has a Lipschitz constant at most \(\beta\)), then it can be shown that \(f\) is majorized by the quadratic function \(\bar{f}(\mathbf{x},\mathbf{y})=f(\mathbf{y})+\nabla f(\mathbf{y})(\mathbf{x}- \mathbf{y})+\frac{\beta}{2}\left\|\mathbf{x}-\mathbf{y}\right\|_{2}^{2}\). However, we are not aware any previous work on automatically deriving majorizers for arbitrary losses. See [8, 22] for textbook treatments of MM. ### Bounding the Taylor Remainder Series We now briefly review two methods for bounding the Taylor remainder series. For simplicity, consider a scalar function \(f:\mathbb{R}\rightarrow\mathbb{R}\) (all results in this section generalize to vector-variate functions). 
Given a polynomial degree \(k\), a _trust region_ \([a,b]\), and a scalar \(x_{0}\in[a,b]\), these methods compute an interval \(I\) such that for all \(x\in[a,b]\), \[f(x)\in\underbrace{\left(\sum_{i=0}^{k-1}\frac{1}{i!}f^{(i)}(x_{0})(x-x_{0})^{i}\right)}_{\text{Degree $k-1$ Taylor polynomial}}+\underbrace{I(x-x_{0})^{k}}_{\text{Remainder bound}}. \tag{4}\] Footnote 1: The product of an interval \(I=[\underline{I},\overline{I}]\) and a scalar \(z\) is defined as \(Iz\triangleq\{\alpha z:\alpha\in I\}=[\min\left\{\underline{I}z,\overline{I}z\right\},\max\left\{\underline{I}z,\overline{I}z\right\}]\). A classical method for deriving the interval \(I\) is based on the fact that (4) holds so long as \[I\supseteq\left[\inf_{y\in[a,b]}\left\{\frac{f^{(k)}(y)}{k!}\right\},\sup_{y\in[a,b]}\left\{\frac{f^{(k)}(y)}{k!}\right\}\right]. \tag{5}\] That (5) implies (4) can be shown using the Lagrange form of the Taylor remainder series and the mean value theorem [2]. Furthermore, an interval that satisfies (5) can be computed by first deriving an expression for \(f^{(k)}(y)\) (e.g., using automatic differentiation), and then evaluating this expression using interval arithmetic [14, 15, 20, 27]. Unfortunately, the bounds produced by direct interval arithmetic evaluation of the \(k\)th derivative can be very loose. To address this, we created the AutoBound algorithm [31], which uses an interval arithmetic variant of Taylor-mode automatic differentiation to compute bounds that can be much tighter. Figure 2 gives an example of upper and lower bounds computed by AutoBound when \(k=2\). Figure 2: Quadratic upper and lower bounds derived by AutoBound [31] for the function \(f(x)=\frac{3}{2}\exp(3x)-25x^{2}\), centered at \(x_{0}=\frac{1}{2}\), and valid over the interval \([0,1]\). ## 3 Universal MM Optimization Algorithms We now present two _universal_ MM optimization algorithms: MM optimization algorithms that can be applied to any loss that can be written as a composition of known elementary univariate functions (e.g., exp, log, relu, softplus), plus the binary operations of addition and multiplication. The term "universal" is ours, and to our knowledge these algorithms are the first MM algorithms that are universal in this sense. In fact, our first algorithm is a special case of our second one, but we choose to present it separately for pedagogical purposes. Specifically: * The SafeRate algorithm uses AutoBound to compute a learning rate \(\eta_{t}\) that is guaranteed to reduce the loss on step \(t\). * The SafeCombination algorithm uses AutoBound to compute a vector \(\boldsymbol{\eta}_{t}\) that specifies a combination of user-supplied update directions that is guaranteed to reduce the loss. In both cases, the majorizer is defined over a low-dimensional subspace of the original optimization domain, making the algorithms _majorization-minimization subspace methods_. In addition, the value of the majorizer will be infinite outside a trust region, whose size affects the tightness of the majorizer. In order to speed up convergence, both algorithms use a simple heuristic to adapt the size of the trust region over time. ### Automatically Deriving Majorizers In §2.2, we reviewed two techniques for bounding the Taylor remainder series.
When used to compute a degree \(k\) bound for a univariate function \(f:\mathbb{R}\to\mathbb{R}\), these techniques take as input a trust region \([a,b]\subseteq\mathbb{R}\) and a point \(x_{0}\in[a,b]\), and return an interval \(I\) such that \[f(x)\in T_{k-1}(x;x_{0})+I(x-x_{0})^{k}\quad\forall x\in[a,b] \tag{6}\] where \(T_{k-1}(x;x_{0})\triangleq\sum_{i=0}^{k-1}\frac{f^{(i)}(x_{0})}{i!}(x-x_{0})^{i}\) is the degree \(k-1\) Taylor polynomial of \(f\) at \(x_{0}\). In what follows, we will always have \((x-x_{0})^{k}\geq 0\) for \(x\in[a,b]\). Under this assumption, (6) implies the upper bound \[f(x)\leq T_{k-1}(x;x_{0})+\overline{I}(x-x_{0})^{k}\quad\forall x\in[a,b] \tag{7}\] where \(\overline{I}\) is the right end point of \(I\). We can therefore define a majorizer \[\bar{f}(x,x_{0})\triangleq\begin{cases}T_{k-1}(x;x_{0})+\overline{I}(x-x_{0})^{k}&x\in[a,b]\\ \infty&x\notin[a,b].\end{cases} \tag{8}\] For a vector-variate function \(f:\mathbb{R}^{n}\to\mathbb{R}\), the situation is similar, except that the scalar product \(\overline{I}(x-x_{0})^{k}\) is replaced by an inner product of two tensors with \(n^{k}\) elements (see Appendix B for details). ### A Direct Approach Given the results from the previous section, the most obvious way to define a universal MM optimizer would be to auto-derive a majorizer for the loss function \(f:\mathbb{R}^{n}\to\mathbb{R}\). This approach can be made to work, and is practical when \(n\) is small. The downside of this approach is that the memory required by AutoBound [31] scales linearly with \(n\), making the direct approach impractical for large-scale optimization problems. We therefore focus on majorizing \(f\) over a lower-dimensional subspace. ### SafeRate: Determining a Safe Learning Rate Iterative methods for optimization, such as gradient descent and Newton's method, optimize a differentiable loss \(f:\mathbb{R}^{n}\to\mathbb{R}\) by performing a sequence of updates of the form \[\mathbf{x}_{t+1}=\mathbf{x}_{t}+\eta_{t}\mathbf{v}_{t} \tag{9}\] where \(\mathbf{v}_{t}\) is an update direction and \(\eta_{t}\) is a step size. To optimize \(f\) effectively, we would like to choose \(\mathbf{v}_{t}\) and \(\eta_{t}\) so as to guarantee convergence to a first-order critical point (or preferably, a local minimum). Optimization theory provides many methods for choosing \(\mathbf{v}_{t}\) and \(\eta_{t}\), which have different theoretical guarantees depending on what assumptions are made about \(f\). If \(f\) is convex and the gradient norm is bounded, then setting \(\mathbf{v}_{t}=-\nabla f(\mathbf{x}_{t})\) and \(\eta_{t}=\frac{1}{\sqrt{t}}\) guarantees that the loss of the average iterate converges to the global minimum [34]. If \(f\) is non-convex but its gradient has a Lipschitz constant of at most \(\beta\) (meaning \(f\) is "\(\beta\)-smooth"), then setting \(\mathbf{v}_{t}=-\nabla f(\mathbf{x}_{t})\) and \(\eta_{t}=\frac{1}{\beta}\) guarantees that the loss goes down monotonically [11]. See the textbooks [28, 29] for an introduction to optimization theory, including classical results on the rates of convergence achieved by different algorithms under various assumptions. The loss functions used to train neural networks are neither convex nor \(\beta\)-smooth. For such functions, the only globally convergent optimization methods we are aware of are classical methods that require line search or trust region adaptation (e.g., [29, chapters 3-4]).
Because these classical approaches are based on Taylor polynomials, and have no way of knowing how accurate the Taylor polynomial approximation is at a given \(\mathbf{x}_{t}\), they require trial and error in order to find a point that reduces the loss. Using our new machinery, it is possible to find a learning rate that is _guaranteed_ to reduce the loss, at the cost of just one additional forward pass. To do so, we use AutoBound to derive a majorizer of the function \[h_{t}(\eta)=f(\mathbf{x}_{t}+\eta\mathbf{v}_{t}). \tag{10}\] Specifically, given a specified maximum learning rate \(\bar{\eta}_{t}\), we auto-derive a majorizer for \(h_{t}\) as described in SS3.1 to obtain a polynomial \(P_{t}\) such that \[h_{t}(\eta)\leq P_{t}(\eta)\quad\forall\eta\in[0,\bar{\eta}_{t}]. \tag{11}\] Note that \(P_{t}\) defines a majorizer of \(f\) that is valid for \(\mathbf{x}\in\{\mathbf{x}_{t}+\eta\mathbf{v}_{t}:\eta\in[0,\bar{\eta}_{t}]\}\). We then choose \(\eta_{t}\) to minimize the majorizer, setting \[\eta_{t}=\operatorname{argmin}_{\eta\in[0,\bar{\eta}_{t}]}\left\{P_{t}(\eta) \right\}. \tag{12}\] The \(\operatorname{argmin}\) can be found efficiently by computing the roots of a degree \(k-1\) polynomial (see Appendix B). How do we choose the maximum learning rate, \(\bar{\eta}_{t}\)? Because \(\eta_{t}\leq\bar{\eta}_{t}\), choosing \(\bar{\eta}_{t}\) too small will slow down progress. On the other hand, as \(\bar{\eta}_{t}\) increases, the majorizer \(P_{t}\) becomes looser, so choosing \(\bar{\eta}_{t}\) very large will _also_ result in a small value of \(\eta_{t}\). A simple rule, which we will find to be effective in our experiments in SS4, is to set \(\bar{\eta}_{1}\) to an arbitrary value (say, 1), and then to set \[\bar{\eta}_{t+1}=\begin{cases}2\bar{\eta}_{t}&\text{ if }\eta_{t}\geq\frac{1}{2} \bar{\eta}_{t}\\ \frac{1}{2}\bar{\eta}_{t}&\text{ otherwise.}\end{cases} \tag{13}\] We refer to the resulting algorithm as SafeRate, and give pseudocode below. A few remarks about SafeRate are in order: 1. As written, SafeRate calls AutoBound once per iteration, and passes it a symbolic expression that depends on \(\mathbf{x}_{t}\) and \(\mathbf{v}_{t}\). However, in an actual implementation, AutoBound is called once up front to return an object (e.g., a TensorFlow tensor) that works for arbitrary \(\mathbf{x}_{t}\) and \(\mathbf{v}_{t}\). Thus, the overhead of recursively analyzing the symbolic expression for the loss \(f\) is incurred once up front, rather than \(T\) times. 2. The use of a finite trust region is not necessary for all problems. In particular, for linear or logistic regression, AutoBound is able to compute a quadratic majorizer that is valid over an unbounded trust region. In such cases we may set \(\bar{\eta}_{1}=\infty\) to effectively disable the use of a finite trust region. ### SafeCombination: Combining Multiple Update Directions The method just discussed for determining a safe learning rate is a special case of a more general method, where a number of possible update directions are provided, and AutoBound is used to determine a linear combination of the update directions that is guaranteed to monotonically reduce the loss. Concretely, let \(\mathbf{U}_{t}\in\mathbb{R}^{n\times d}\) be a matrix whose columns specify \(d\) possible update directions (e.g., the negative gradient, or the update direction used by Adam or AdaGrad). 
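To make the preceding description concrete before turning to SafeCombination, the following is a minimal sketch of a single SafeRate step for polynomial degree \(k=2\). The `quad_coeff` oracle is our stand-in for the quadratic coefficient that AutoBound returns for \(h_{t}\) over \([0,\bar{\eta}_{t}]\); all names here are illustrative, not the actual implementation.

```python
import numpy as np

def safe_rate_step(grad_f, quad_coeff, x, eta_max):
    """One SafeRate step (degree k = 2), following equations (10)-(13).

    quad_coeff(x, v, eta_max) must return a scalar c with
        f(x + eta * v) <= f(x) + eta * grad_f(x).dot(v) + c * eta**2
    for all eta in [0, eta_max]; in the paper this coefficient comes
    from AutoBound, here it is an abstract oracle.
    """
    g_vec = grad_f(x)
    v = -g_vec                            # gradient-descent update direction
    g = g_vec.dot(v)                      # linear coefficient of P_t
    c = quad_coeff(x, v, eta_max)         # quadratic coefficient of P_t

    # Minimize P_t(eta) = f(x) + g * eta + c * eta**2 over the trust region
    # by comparing the endpoints and (if P_t is convex) the stationary point.
    candidates = [0.0, eta_max]
    if c > 0:
        candidates.append(min(max(-g / (2.0 * c), 0.0), eta_max))
    eta = min(candidates, key=lambda e: g * e + c * e ** 2)

    # Trust-region adaptation from equation (13).
    eta_max_next = 2.0 * eta_max if eta >= 0.5 * eta_max else 0.5 * eta_max
    return x + eta * v, eta, eta_max_next

# Example: for f(x) = ||x - 1||^2 the coefficient ||v||^2 is exact, so the
# very first step jumps to the global minimizer.
grad = lambda x: 2.0 * (x - 1.0)
coeff = lambda x, v, eta_max: float(v.dot(v))
x, eta_max = np.zeros(3), 1.0
for _ in range(3):
    x, eta, eta_max = safe_rate_step(grad, coeff, x, eta_max)
```

Because the trust region is one-dimensional, the minimization in (12) reduces to comparing the two endpoints with at most one interior stationary point.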
We will perform an update of the form: \[\mathbf{x}_{t+1}=\mathbf{x}_{t}+\mathbf{U}_{t}\boldsymbol{\eta}_{t} \tag{14}\] for some \(\boldsymbol{\eta}_{t}\in[\mathbf{0},\bar{\boldsymbol{\eta}}_{t}]\subseteq \mathbb{R}^{d}\). The vector \(\boldsymbol{\eta}_{t}\) can be obtained by generalizing equations (10) and (11) in the natural way, leading to the bound: \[f(\mathbf{x}_{t}+\mathbf{U}_{t}\boldsymbol{\eta})\leq f(\mathbf{x}_{t})+ \nabla f(\mathbf{x}_{t})^{\mathsf{T}}\mathbf{U}_{t}\boldsymbol{\eta}+ \boldsymbol{\eta}^{\mathsf{T}}\overline{\mathcal{L}_{t}}\boldsymbol{\eta}\quad \forall\boldsymbol{\eta}\in[\mathbf{0},\bar{\boldsymbol{\eta}}_{t}] \tag{15}\] for some matrix \(\overline{\mathcal{L}_{t}}\in\mathbb{R}^{d\times d}\). The usual way to minimize the right hand side of (15) would be to set the derivative to zero, but this does not work here for two reasons. First, the bound is only valid for \(\boldsymbol{\eta}\in[\mathbf{0},\bar{\boldsymbol{\eta}}_{t}]\), and setting the derivative to zero might yield a value outside this hyperrectangle. Second, the matrix \(\overline{\mathcal{L}_{t}}\) is not necessarily positive semidefinite, so setting the derivative to zero does not necessarily give us the global minimum. Nevertheless, the right hand side can be approximately minimized over the hyperrectangle \([\mathbf{0},\bar{\boldsymbol{\eta}}_{t}]\) using a variant of conjugate gradient descent, which we describe in Appendix B. The vector \(\bar{\boldsymbol{\eta}}_{t}\) is determined adaptively using a scheme similar to (13), but applied to each of the \(d\) components of \(\bar{\boldsymbol{\eta}}_{t}\) independently. It can be shown that, by setting \(d=n\) and letting \(\mathbf{U}_{t}\) be the identity matrix, we obtain a second-order optimizer that can minimize quadratic losses (e.g., least squares linear regression) in one step. ### Theoretical Guarantees Our universal MM algorithms enjoy a number of desirable theoretical guarantees, which we now state. The proofs are given in Appendix A. First, like other MM optimizers, our universal MM optimizers are guaranteed to monotonically reduce the loss. **Proposition 1**.: _The iterates of SafeRate and SafeCombination satisfy_ \[f(\mathbf{x}_{T+1})\leq f(\mathbf{x}_{T})\leq\ldots\leq f(\mathbf{x}_{1}).\] The next question we would like to answer is how fast our universal MM optimizers converge. Because we have not yet developed a complete theory around the tightness of the bounds returned by AutoBound, we cannot yet answer this question in full generality. However, under the standard assumption of a \(\beta\)-smooth loss, and assuming that the PolynomialUpperBound function returns the upper bound that follows from the definition of \(\beta\)-smoothness, we can show our optimizers enjoy convergence guarantees similar to those of backtracking line search [3], but without the need to backtrack or to tune line search hyperparameters. As is standard in non-convex optimization, the guarantees are stated in terms of the minimum gradient norm. **Theorem 1**.: _Let \(f\), \(\mathbf{v}_{t}\), and \(P_{t}\) be defined as in the pseudocode for SafeRate. 
Suppose, for some \(\beta\geq 1\), \(f\) is \(\beta\)-smooth over the level set \(\{\mathbf{y}:f(\mathbf{y})\leq f(\mathbf{x}_{1})\}\), and suppose that the PolynomialUpperBound function returns_ \[P_{t}(\eta)=f(\mathbf{x}_{t})+\eta\nabla f(\mathbf{x}_{t})^{\mathsf{T}}\mathbf{v}_{t}+\frac{\beta}{2}\eta^{2}\left\|\mathbf{v}_{t}\right\|^{2}.\] _Further suppose \(\mathbf{v}_{t}=-\nabla f(\mathbf{x}_{t})\). Then,_ \[\min_{1\leq t\leq T}\left\{\left\|\nabla f(\mathbf{x}_{t})\right\|^{2}\right\}\leq\frac{2\beta\left(f(\mathbf{x}_{1})-f(\mathbf{x}_{T+1})\right)}{T}.\] _The same guarantee holds for SafeCombination, provided the negative gradient is one of the \(d\) update directions._ Lastly, we consider the running time of our algorithms. Both algorithms require an additional forward pass, whose time complexity is some multiple of that required to compute \(f(\mathbf{x})\). We state our result in terms of the number of multiplications, which is the dominant term in the time complexity for the loss functions of interest. **Theorem 2**.: _Let \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) be a function such that for any \(\mathbf{x}\in\mathbb{R}^{n}\), computing \(f(\mathbf{x})\) requires \(M_{0}\) multiplications, and calling DirectionOracle requires \(M_{1}\) multiplications. Then, using AutoBound as the PolynomialUpperBound function, each iteration of SafeRate requires \(O(M_{1}+M_{0}k\log(k))\) multiplications, where \(k\) is the polynomial degree, and each iteration of SafeCombination requires \(O(M_{1}+M_{0}d^{\omega})\) multiplications, where \(\omega<2.37286\) is the matrix multiplication exponent._ ## 4 Experiments We now compare our universal MM optimizers to existing optimizers, on a suite of full-batch optimization problems. The code for the AutoBound algorithm we use to derive majorizers is available on GitHub.2 Footnote 2: [https://github.com/google/autobound](https://github.com/google/autobound) ### Optimizers We compare SafeRate and SafeCombination to four existing optimizers: Adam [21], AdaGrad [10], gradient descent, and backtracking line search using the Armijo-Goldstein condition [3]. For Adam, AdaGrad, and gradient descent, we consider all learning rates in the set \(\big\{10^{i}:i\in\mathbb{Z},-4\leq i\leq 1\big\}\), and present results for the best-performing learning rate (learning rates that are powers of 10 outside this grid performed poorly on all problems). Other Adam hyperparameters are left at the default values recommended in the original paper [21]. _Backtracking line search_ refers to a gradient descent optimizer that chooses the learning rate on each step using a backtracking line search first described by Armijo [3]. Starting at a point \(\mathbf{x}\), backtracking line search seeks to find a point \(\mathbf{x}^{\prime}\) that reduces the loss by at least \(\frac{\alpha}{2}\left\|\nabla f(\mathbf{x})\right\|^{2}\), where \(\alpha\) starts at an initial value \(\alpha_{0}\in\mathbb{R}\), and is halved until this condition is satisfied. We consider all values of \(\alpha_{0}\in\big\{10^{i}:i\in\mathbb{Z},-5\leq i\leq 3\big\}\), and report results for the value of \(\alpha_{0}\) that achieved the minimum loss. These optimizers and hyperparameter grids are summarized in Table 1.
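For concreteness, a minimal sketch of the backtracking line search baseline just described, assuming the candidate point is \(\mathbf{x}-\alpha\nabla f(\mathbf{x})\) (the helper names are ours, not from [3]):

```python
import numpy as np

def backtracking_gd_step(f, grad_f, x, alpha0):
    """One gradient-descent step with backtracking: halve alpha until the
    candidate point reduces the loss by at least (alpha / 2) * ||grad||^2."""
    g = grad_f(x)
    fx = f(x)
    alpha = alpha0
    while f(x - alpha * g) > fx - 0.5 * alpha * g.dot(g):
        alpha *= 0.5
        if alpha < 1e-20:          # guard against an endless loop in a sketch
            break
    return x - alpha * g

# Example on a simple quadratic, f(x) = ||x - 1.5||^2.
f = lambda x: float(((x - 1.5) ** 2).sum())
grad_f = lambda x: 2.0 * (x - 1.5)
x = np.zeros(2)
for _ in range(10):
    x = backtracking_gd_step(f, grad_f, x, alpha0=1.0)
```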
\begin{table} \begin{tabular}{l l} \hline \hline Optimizer & Hyperparameters \\ \hline AdaGrad [10] & \(\eta\in\big\{10^{i}:i\in\mathbb{Z},-4\leq i\leq 1\big\}\), \\ (diagonal matrix version) & \(\delta=0\) \\ Adam [21] & \(\eta\in\big\{10^{i}:i\in\mathbb{Z},-4\leq i\leq 1\big\}\), \\ & \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), \(\epsilon=10^{-8}\) \\ Backtracking Line Search [3] & \(\alpha_{0}\in\big\{10^{i}:i\in\mathbb{Z},-5\leq i\leq 3\big\}\) \\ GD & \(\eta\in\big\{10^{i}:i\in\mathbb{Z},-4\leq i\leq 1\big\}\) \\ SafeRate[AdaGrad] (ours) & \(\delta=0\) \\ SafeRate[Adam] (ours) & \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), \(\epsilon=10^{-8}\) \\ SafeRate[GD] (ours) & – \\ SafeCombination[per-layer AdaGrad] (ours) & \(\delta=0\) \\ SafeCombination[per-layer Adam] (ours) & \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), \(\epsilon=10^{-8}\) \\ SafeCombination[per-layer GD] (ours) & – \\ \hline \hline \end{tabular} \end{table} Table 1: Optimizers used in our experiments. In addition, we evaluate the performance of SafeRate using three choices for the directional oracle: SafeRate[GD] uses the negative gradient direction, SafeRate[AdaGrad] uses the AdaGrad direction (determined based on the observed history of gradient vectors), and SafeRate[Adam] uses the Adam direction. The SafeRate[AdaGrad] and SafeRate[Adam] update directions are based on a learning rate of 0.1 for the underlying AdaGrad/Adam algorithm, and the recommended default values for all other hyperparameters. Note that the learning rate affects only the scale of the update direction, and the behavior of SafeRate is largely invariant to this scale (it is not strictly invariant because of the way the scale interacts with the trust region size). For this reason we do not need to tune the learning rate. Finally, we evaluate the performance of SafeCombination, using three choices for the matrix of update directions. For SafeCombination[per-layer GD], there is an update direction for each tensor \(\mathbf{W}\) that appears in the loss, and the update direction for a tensor \(\mathbf{W}\) equals the negative gradient with entries for tensors other than \(\mathbf{W}\) zeroed out. Thus, when applied to a neural network optimization problem, SafeCombination[per-layer GD] computes a per-layer learning rate (hence its name). SafeCombination[per-layer AdaGrad] and SafeCombination[per-layer Adam] are similar, except they use the per-layer update directions given by AdaGrad and Adam, respectively, rather than the negative gradient. The Adam/AdaGrad update directions are determined in the same way described in the previous paragraph. When plotting an optimizer's performance as a function of the number of steps, we consider each distinct point evaluated as a separate step (for backtracking line search, this includes points where the sufficient-loss-decrease condition is not satisfied). ### One-Dimensional Problems We begin by experimenting with synthetic one-dimensional problems. This will allow us to understand the behavior of SafeRate qualitatively, in a low-dimensional setting where the results can be easily visualized. #### 4.2.1 Losses Table 2 summarizes the loss functions used in these experiments. Each loss is a one-dimensional example of a widely-studied problem. For least squares linear regression, we consider a problem with a single feature, making the loss a quadratic polynomial. The specific choice of quadratic does not qualitatively change the results; we choose \((x-\frac{3}{2})^{2}\).
We also consider linear regression with error terms drawn from a _generalized symmetric normal distribution_[26, 33]. Setting the \(\beta\) parameter of the generalized symmetric normal distribution to 4 results in a quartic polynomial loss, where again the specific choice of quartic does not qualitatively change the results. For logistic regression, we choose a one-dimensional problem where two thirds of the labels are negative, and there is a single feature whose value is always 1. As a one-dimensional example of neural network training, we consider the loss \(f(x)=((\text{sigmoid}(x-10)-\frac{1}{2})^{2}\). Minimizing this loss can be thought of as optimizing the bias parameter in the first layer of a single-hidden-layer neural network with sigmoid activation functions and squared error, with a training set of size 1 and appropriate initial values for parameters other than \(x\). This loss is interesting because initially (when \(x=0\)) the input to the sigmoid is very small, resulting in very small initial gradients that can pose a challenge for traditional optimizers. \begin{table} \begin{tabular}{l l} \hline \hline Problem name & Loss function \\ \hline 1-d least squares linear regression & \(x\mapsto(x-\frac{3}{2})^{2}\) \\ 1-d linear regression with non-normal errors & \(x\mapsto(x-3)^{4}\) \\ 1-d logistic regression & \(x\mapsto\frac{2}{3}\log(1+\exp(x))+\frac{1}{3}\log(1+\exp(-x))\) \\ Optimizing a single neural network parameter & \(x\mapsto((\text{sigmoid}(x-10)-\frac{1}{2})^{2}\) \\ \hline \hline \end{tabular} \end{table} Table 2: One-dimensional loss functions used in our experiments. #### 4.2.2 Performance comparison Figure 3 compares the performance of SafeRate to that of tuned versions of the baseline optimizers, on the four one-dimensional optimization problems given in Table 2. For optimizers other than SafeRate (which has no hyperparameters), each plot shows the best-performing3 hyperparameter settings for each problem, considering all the settings in the grid defined by Table 1. For these one-dimensional problems, all optimizers require similar wall time per step. Footnote 3: Performance is measured by the minimum loss reached on any step. We make the following observations: * Despite not requiring hyperparameter tuning, SafeRate outperforms tuned version of all baseline optimizers on all four problems, with one exception (with a learning rate of 1, AdaGrad performs better on the neural network parameter optimization problem). * For 1-d least squares linear regression, the quadratic majorizer derived using AutoBound is exact, and thus SafeRate jumps to the global minimum on the first step. Figure 3: Comparison of optimizers on one-dimensional optimization problems. Each plot shows the loss as a function of the number of iterations (log scale). For optimizers other than SafeRate, the plot shows the best-performing hyperparameter from the grid defined in Table 1 (the best-performing hyperparameter is different for each plot). Despite not requiring hyperparameter tuning, SafeRate generally outperforms tuned versions of all baseline optimizers. For these one-dimensional problems, all optimizers require similar wall time per step. * For 1-d linear regression with non-normal errors, SafeRate converges super-linearly4, whereas gradient descent and AdaGrad appear to converge linearly. Footnote 4: An optimizer is said to converge linearly if the log of the optimality gap decreases linearly as a function of the log of the number of steps. 
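For reference in the subsections that follow, the four losses in Table 2 can be transcribed directly into plain Python (a transcription of the table, not the code used for the experiments):

```python
import numpy as np

def least_squares_1d(x):            # 1-d least squares linear regression
    return (x - 1.5) ** 2

def non_normal_regression_1d(x):    # 1-d linear regression with non-normal errors
    return (x - 3.0) ** 4

def logistic_regression_1d(x):      # 1-d logistic regression
    return (2.0 / 3.0) * np.logaddexp(0.0, x) + (1.0 / 3.0) * np.logaddexp(0.0, -x)

def nn_parameter_1d(x):             # optimizing a single neural network parameter
    return (1.0 / (1.0 + np.exp(-(x - 10.0))) - 0.5) ** 2
```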
#### 4.2.3 Safe learning rates We now examine in more detail how SafeRate behaves on these one-dimensional problems. Recall that the "safe" learning rate computed by SafeRate on step \(t\) depends on two things: the current iterate \(x_{t}\), and the maximum learning rate \(\bar{\eta}_{t}\) (which determines the trust region, \([0,\bar{\eta}_{t}]\)). Figure 4 depicts the safe learning rate as a function of \(x=x_{t}\), for various values of the trust region. In examining Figure 4, several things are worth noting: * In general, the learning rate depends in a non-trivial way on the current iterate \(x\), and can be orders of magnitude larger for some \(x\) than for others. * For some problems, the "safe" learning rate increases dramatically as one approaches the global minimum, while for other problems it decreases dramatically. * The optimal trust region width (i.e., the one that lets us compute the largest "safe" learning rate) also depends on \(x\). For example, for the linear regression problem with non-normal errors, at \(x=0\) the optimal trust region width is.01; at \(x=1\) it is 0.1, and at \(x=2.9\) it is 1. Figure 4: Safe gradient descent learning rates for various one-dimensional problems, as a function of the current iterate \(x\) and the trust region. For each \(x\), each plotted learning rate \(\eta\) is computed using the symbolic expression for the loss \(f\), and is guaranteed to reduce the loss: \(f(x-\eta\nabla f(x))\leq f(x)\). These points illustrate the potential for SafeRate to non-trivially adapt the learning rate during the course of optimization, and make clear that the trust region must adapt over time if we wish to compute the largest possible safe learning rates. #### 4.2.4 SafeRate trust region adaptation Figure 5 shows how the learning rate \(\eta_{t}\) and the maximum learning rate \(\bar{\eta}_{t}\) (which determines the trust region) evolve when running SafeRate on each of the one-dimensional problems. We summarize the results shown in Figure 5 as follows: * For least squares linear regression, SafeRate converges in one step, and thus trust region adaptation plays no role. * For linear regression with non-normal errors, the maximum learning rate \(\bar{\eta}_{t}\) is initially too large, and hence \(\eta_{t}\ll\bar{\eta}_{t}\), leading to slow progress. As a result, SafeRate decreases \(\bar{\eta}_{t}\) exponentially, which causes \(\eta_{t}\) to increase exponentially. Then, once \(\eta_{t}\approx\bar{\eta}_{t}\), both \(\bar{\eta}_{t}\) and \(\eta_{t}\)_increase_ exponentially for the remainder of the optimization process. We note that this learning rate schedule is very different from the regret-bound-minimizing schedule used in algorithms such as AdaGrad [10] and FTPRL [25], which always decreases the learning rate and does so at a polynomial rate. * For logistic regression, \(\bar{\eta}_{t}\) is initially too small, causing \(\eta_{t}\) to be capped at its maximum value. This causes \(\bar{\eta}_{t}\) to double until this is no longer the case, leading to convergence in a few steps. * For the one-dimensional neural network problem, the gradients are initially very small, and a very large learning rate is necessary to make progress. As in the logistic regression problem, \(\eta_{t}\) is initially capped at its maximum value, causing \(\bar{\eta}_{t}\) to double until this is no longer the case. 
Then, once the optimizer reaches the part of the sigmoid curve where the gradients are larger, \(\eta_{t}\) suddenly decreases by more than four orders of magnitude. This causes \(\bar{\eta}_{t}\) to decrease, causing \(\eta_{t}\) to increase again, and \(\eta_{t}\) continues to oscillate up and down for the remainder of the optimization run, while at the same time the loss decreases rapidly (as shown in Figure 3). Overall, the simple doubling/halving heuristic used by SafeRate is very effective on these problems, and leads to qualitatively different behaviors on each problem. ### Random Regression Problems For our next experiment, we evaluate SafeRate on randomly-generated linear and logistic regression problems. For linear regression, we consider both least-squares linear regression, as well as linear regression with non-normal error terms, described in more detail below. We generate random, well-specified regression problems by sampling feature vectors from a normal distribution with a specified covariance matrix, and generating labels based on a known model. Letting \(d\) denote the number of features, the covariance matrix is \(\mathbf{Z}^{\mathsf{T}}\mathbf{Z}\), where \(\mathbf{Z}\) is a \(d\) by \(d\) matrix whose elements are drawn from a standard normal distribution. The true model is \(\boldsymbol{\beta}^{*}\in\mathbb{R}^{d}\), where each coordinate of \(\boldsymbol{\beta}^{*}\) is drawn independently from a standard normal distribution. The labels are generated as follows: * For least-squares linear regression, the label for an example with feature vector \(\mathbf{a}\) is drawn from a normal distribution with mean \(\mathbf{a}^{\mathsf{T}}\boldsymbol{\beta}^{*}\) and standard deviation \(0.1\). * For logistic regression, the label for an example with feature vector \(\mathbf{a}\) is drawn from a Bernoulli distribution with mean \(\frac{1}{1+\exp(-\mathbf{a}^{\mathsf{T}}\boldsymbol{\beta}^{*})}\). * For linear regression with non-normal errors [26, 33], the label for an example with feature vector \(\mathbf{a}\) is drawn from a generalized symmetric normal distribution with parameters \(\alpha=.1\) and \(\beta=4\). For all three problems, the loss is the negative log-likelihood. We use \(d=100\) features and \(n=10000\) training examples. For linear regression with non-normal errors, the loss is \(f(\mathbf{x})=\sum_{i=1}^{n}(\mathbf{A}_{i}^{\mathsf{T}}\mathbf{x}-\mathbf{b }_{i})^{\beta}\), where \(\mathbf{A}\) is the feature matrix and \(\mathbf{b}\) is the label vector. Because we set \(\beta=4\), the loss grows as a quartic function of the estimation error (as in the one-dimensional example in SS4.2). It is worth noting that, for least squares linear regression, the quadratic upper bound that SafeRate uses to compute a safe learning rate is tight, and the SafeRate learning rate is therefore the rate that maximally reduces the loss on each step. Thus, applied to least squares linear regression, SafeRate[Adam] can be thought of as a variant of Adam that at each step moves as far as possible in the Adam direction, stopping at the point where further movement would increase the loss. In contrast, when applied to logistic regression or linear regression with non-normal errors, SafeRate will always choose a learning rate that reduces the loss, but this rate will be smaller than the one that reduces the loss maximally. Figure 6 compares SafeRate[GD], SafeRate[AdaGrad], and SafeRate[Adam] to tuned versions of the baseline optimizers given in Table 1. 
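Before examining those results, the following is a minimal sketch of the synthetic problem generation described above; the RNG details and the generalized-normal noise draw are illustrative choices for exposition rather than the exact experiment code:

```python
import numpy as np

def make_regression_problem(kind, n=10000, d=100, seed=0):
    """Random well-specified regression problems as described above."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((d, d))
    A = rng.standard_normal((n, d)) @ Z        # rows have covariance Z^T Z
    beta_star = rng.standard_normal(d)
    scores = A @ beta_star
    if kind == "least_squares":
        b = scores + 0.1 * rng.standard_normal(n)
        loss = lambda x: float(((A @ x - b) ** 2).sum())
    elif kind == "logistic":
        b = rng.binomial(1, 1.0 / (1.0 + np.exp(-np.clip(scores, -30, 30))))
        loss = lambda x: float(np.sum(np.logaddexp(0.0, A @ x) - b * (A @ x)))
    elif kind == "non_normal":
        # Generalized symmetric normal errors (alpha = 0.1, beta = 4):
        # |e| = alpha * G**(1/beta) with G ~ Gamma(1/beta, 1) and a random sign.
        g = rng.gamma(shape=0.25, scale=1.0, size=n)
        b = scores + 0.1 * rng.choice([-1.0, 1.0], size=n) * g ** 0.25
        loss = lambda x: float(((A @ x - b) ** 4).sum())
    else:
        raise ValueError(kind)
    return A, b, loss
```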
As in our previous experiment, we show only the best-performing hyperparameter value for each baseline optimizer. The plots on the left use the number of steps as the horizontal axis, while the plots on the right use wall time. Examining Figure 6, we note that: * For all three problems, SafeRate[Adam] outperforms all other optimizers in terms of the loss reached after a given number of steps. For linear regression with non-normal errors, it is also better in terms of wall time, while for the other two problems it is slightly worse in terms of wall time. * For the two linear regression problems, SafeRate[GD] outperforms the best fixed GD learning rate by a wide margin. * SafeRate[AdaGrad] consistently outperforms all fixed AdaGrad learning rates early in optimization (see also Figure 7), but a well-tuned AdaGrad learning rate performs slightly better asymptotically. For linear regression, this implies that greedily choosing the learning rate that maximally reduces the loss (as SafeRate[AdaGrad] does) does not produce the best loss asymptotically (although it comes close). For all three problems, SafeRate requires three matrix-vector products per step, whereas each step of GD, Adam, and AdaGrad requires only two. For this reason, we would expect SafeRate[Adam] to take about 1.5 times as long as Adam to complete a fixed number of steps. Empirically, however, it takes roughly twice as long, which may point to suboptimality of the generated computation graph for SafeRate (and an opportunity to improve the results). Figure 6: Comparison of optimizers on randomly-generated linear and logistic regression problems. Each plot shows the loss as a function of the number of iterations (left) or wall time (right), on a log scale. For optimizers other than SafeRate, the plot shows the best-performing hyperparameter from the grid defined in Table 1. Despite not requiring hyperparameter tuning, SafeRate typically performs about as well as the best learning rate in our grid, and sometimes outperforms it dramatically. Figure 7 presents a more detailed view of these results. In this figure, there are separate plots for GD, AdaGrad, and Adam that include results for all learning rates in the grid, plus results for the corresponding SafeRate algorithm. Figure 7 makes it clear that, in addition to being competitive with the best learning rate in the grid, SafeRate dramatically outperforms suboptimal learning rates, some of which lead to divergence or very slow progress. ### Multi-Layer Perceptrons For our final set of experiments, we use SafeRate and SafeCombination to train deep networks to classify images. Specifically, we train a fully connected network with one or two hidden layers to classify images from the MNIST dataset. We use 1000 hidden units, the softplus activation function, and the square loss. Following the recommendation of [16], we do _not_ use a final softmax layer when computing the square loss. We train in the full-batch setting, using the first 1000 images as our training set. To work around numerical issues in the AutoBound algorithm, we use float64 for the two-hidden-layer experiments with SafeRate and SafeCombination, and to keep the comparison fair we also use float64 for the baseline optimizers. As in our previous experiment, we evaluate SafeRate using three different choices of update direction, namely the directions given by GD, AdaGrad, and Adam (based on the observed sequence of gradients, as described in SS4.1). 
Additionally, we evaluate SafeCombination[per-layer GD], SafeCombination[per-layer AdaGrad], and SafeCombination[per-layer Adam], which compute adaptive per-layer learning rates using the update directions given by GD, AdaGrad, and Adam, respectively. Figure 7: Comparison of SafeRate to optimizers with a learning rate hyperparameter, when used to solve randomly-generated linear and logistic regression problems. _On these problems, SafeRate takes roughly twice as much wall time per step as the other algorithms (see Figure 6)._ Figure 8 shows the training loss reached by each optimizer as a function of the number of iterations, and as a function of wall time. We note that: * For the one-hidden-layer problem, SafeRate[Adam] reaches lower training loss than all the baseline optimizers (after 1024 steps). However, it requires about 2.5x as much wall time per step as Adam, and thus is somewhat worse in terms of training loss vs. wall time. * SafeCombination consistently makes more progress per step than SafeRate, at the cost of additional computation. * Using our current implementation, SafeCombination is very slow, requiring roughly 13x as much wall time per step as Adam on the one-hidden-layer problem, and roughly 100x as much time per step on the two-hidden-layer problem.5 Thus, SafeCombination is not currently competitive with the baseline optimizers in terms of wall time. These wall time numbers should be taken with a grain of salt, however, as we suspect they could be significantly reduced by optimizing our implementation.6 Footnote 5: Applied to a neural network with \(H\) hidden layers, SafeCombination requires \(O(d^{3}H)\) time per step, where \(d\) is the number of update directions. For SafeCombination[per-layer Adam], we have \(d=H+1\), and thus the time per step is \(O(H^{4})\). Footnote 6: In particular, the wall time can likely be improved by taking advantage of the sparsity of the update directions. Figure 9 presents a more detailed view of the same results, with separate plots for GD, Adam, and AdaGrad, showing the performance of various learning rates along with that of the corresponding SafeRate and SafeCombination algorithms. To reduce clutter, only learning rates between \(10^{-4}\) and \(0.1\) are shown; learning rates outside this range performed poorly on both problems. For the one-hidden-layer network, SafeRate is generally able to offer performance comparable to the best learning rate in our grid on a per-step basis. Figure 8: Comparison of optimizers, training a multi-layer perceptron on a subset of the MNIST dataset. Each plot shows the loss as a function of the number of iterations (left) or wall time (right), on a log scale. For optimizers other than SafeRate and SafeCombination, the plot shows the best-performing hyperparameter from the grid defined in Table 1. The results are best for gradient descent, where SafeRate[GD] takes an order of magnitude fewer steps to reach the loss that GD reaches after 1024 steps with a tuned learning rate. Remarkably, we will see in §4.5 that for these neural networks, the quadratic bounds used by SafeRate[GD] are nearly tight. Thus, on these problems, SafeRate[GD] greedily chooses a learning rate that is very close to the one that maximally reduces the loss on each step (similar to its behavior in the linear regression experiment). Figure 9: Comparison of SafeRate and SafeCombination to optimizers with a learning rate hyperparameter, when used to optimize a multi-layer perceptron on a subset of the MNIST dataset.
SafeCombination makes more progress per step than SafeRate, but also requires significantly more computation per step. _Using our current implementation, SafeCombination[per-layer Adam] requires 13x more wall time per step than Adam for the one-hidden-layer problem, and 103x more wall time per step for the two-hidden-layer problem (see Figure 8)._ #### 4.4.1 Training an Overparameterized Neural Network in One Step In SS4.5, we will see that the quadratic bounds used by SafeRate[GD] are _empirically tight_ for single-hidden-layer networks with squared error loss: the upper and lower bounds are nearly identical, with the actual loss sandwiched in between them. Though we have not yet analyzed this phenomenon formally, we would expect the bounds to be tight when the trust region is small enough that the activations are near-affine as a function of the learning rate, for all learning rates in the trust region. At the same time, theory suggests that gradient descent can train sufficiently wide neural networks without ever departing from a small region of parameter space where the activations are near-affine [23]. Taken together, these observations suggest that SafeRate may become very efficient as the width of the network increases. As a partial confirmation of this conjecture, we now show that there exist (contrived) neural network optimization problems that require hundreds of steps to solve using the Adam optimizer, but that SafeCombination is able to solve after just _one step_. To show this, we consider the same setup as in the previous experiment, but with the size of the training set reduced from 1000 examples to just _one example_, and the number of hidden units increased from 1000 to \(10^{5}\) (and a single hidden layer). Figure 10 compares the performance of SafeRate and SafeCombination to various baseline optimizers, again considering the best hyperparameter value for each baseline optimizer.7 Footnote 7: For this experiment, the baseline optimizers benefit from lower learning rates, so we used a larger grid than the one given in Table 1. We note that, with a single training example, the optimization problem becomes trivial, and one could imagine other methods that would reach a global minimum in one step. Nevertheless, the fact that our universal MM optimizers recover good performance in this extreme case is encouraging, and suggests theoretical analysis of their behavior in the infinite-width limit as an interesting area of future work. ### Tightness of Automatically-Derived Majorizers The effectiveness of any MM optimization algorithm depends on the tightness of the majorizers (i.e., how close the upper bound is to the actual loss). For least-squares linear regression, it can be shown that the upper bounds used by SafeRate are exact. For more complex losses, the tightness of the upper bounds must be measured empirically. Figure 11 plot the majorizers used by the first iteration of SafeRate for the MNIST classification problem discussed in SS4.4, for multi-layer perceptrons with 1, 4, 16, or 64 hidden layers. Each plot shows quadratic, cubic, and quartic majorizers. With a single hidden layer (top plot), even the quadratic upper bounds are empirically tight. This is perhaps surprising considering that the loss itself is not quadratic, due to the softplus hidden layer. However, the function \(h_{t}(\eta)=f(\mathbf{x}_{t}-\eta\nabla f(\mathbf{x}_{t}))\) is empirically very close to quadratic for relevant values of \(\eta\) (those that are small enough to reduce the loss). 
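A simple way to probe this near-quadratic behavior for a loss of interest is to evaluate \(h_{t}\) on a grid of learning rates and compare it against its best quadratic fit; the sketch below does exactly that (it inspects \(h_{t}\) itself and is not a substitute for the AutoBound majorizer):

```python
import numpy as np

def quadratic_fit_gap(f, grad_f, x, eta_max, num=50):
    """Relative deviation of h(eta) = f(x - eta * grad_f(x)) from its best
    least-squares quadratic fit on [0, eta_max]."""
    g = grad_f(x)
    etas = np.linspace(0.0, eta_max, num)
    h = np.array([f(x - e * g) for e in etas])
    coeffs = np.polyfit(etas, h, deg=2)          # best quadratic fit
    gap = np.max(np.abs(h - np.polyval(coeffs, etas)))
    return gap / (np.max(h) - np.min(h) + 1e-12)
```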
Figure 10: Comparison of optimizers on a highly overparameterized problem with only a single training example. SafeCombination is able to solve the problem in just one step, whereas Adam requires hundreds of steps, even using the best learning rate from a coarse grid. As the number of hidden layers increases, the quadratic bounds become looser, but this can be compensated for by increasing the polynomial degree. Even with 64 hidden layers, the quartic polynomial majorizer is nearly tight. Thus, even for very deep networks, SafeRate is able to compute a learning rate that near-maximally reduces the loss, at the cost of only a single additional forward pass. Figure 12 compares the quadratic majorizers derived by AutoBound to quadratic majorizers derived from a bound on the range of the second derivative. Recall from SS2.2 that by bounding the range of the second derivative of a function over a given trust region, we can obtain quadratic bounds on the function which hold over the trust region, and recall that the range of the derivative can be bounded by evaluating the derivative using interval arithmetic. We consider a variant of this baseline method that handles bilinear operations efficiently using the interval bound propagation scheme described in [31] (which AutoBound also uses). With one hidden layer, both methods produce nearly tight majorizers. However, as the number of hidden layers increases, the majorizers produced by the baseline method quickly become much looser than those produced by AutoBound. With three hidden layers, the majorizer produced by the baseline method yields a learning rate that is over 11 times smaller than the one obtained using the majorizer produced by AutoBound. With four hidden layers, the learning rate produced by the baseline method is over 2500 times smaller than the one obtained using AutoBound. ### Summary In this section, we evaluated SafeRate and SafeCombination on a variety of full-batch optimization problems. Some of our most significant observations are: Figure 11: Automatically-derived majorizers for the function \(h_{1}(\eta)=f(\mathbf{x}_{1}-\eta\nabla f(\mathbf{x}_{1}))\), where \(f\) is the training loss for a multi-layer perceptron on a subset of the MNIST dataset. Figure 12: Comparison of quadratic majorizers derived by AutoBound to those derived using a baseline method based on range-bounding the second derivative. Each plot shows majorizers for the function \(h_{1}(\eta)=f(\mathbf{x}_{1}-\eta\nabla f(\mathbf{x}_{1}))\), where \(f\) is the training loss for a multi-layer perceptron on a subset of the MNIST dataset. * Applied to random linear and logistic regression problems, SafeRate is able to boost the performance of gradient descent and Adam while simultaneously eliminating the need to tune a learning rate hyperparameter. For linear regression with non-normal errors, SafeRate can _dramatically_ outperform tuned versions of the baseline optimizers. * There exist optimization problems where SafeRate converges at a faster rate than gradient descent, Adam, or AdaGrad (in particular, applied to a one-dimensional quartic, SafeRate converges at a super-linear rate, whereas gradient descent, Adam, and AdaGrad appear to converge linearly). It would be very interesting to characterize the space of such problems theoretically. 
* For multi-layer perceptrons with squared error loss, the bounds computed by SafeRate are _empirically tight_ (for up to 64 hidden layers, when using quartic bounds), and thus SafeRate uses the learning rate that maximally reduces the loss at each step. As a result, SafeRate can boost the performance of gradient descent while eliminating the learning rate hyperparameter, as was the case with linear regression. We also saw that SafeCombination can dramatically outperform SafeRate on a per-step basis, at the cost of significant additional computation per step. In the extreme case of a wide single-hidden-layer network and a single training example, SafeCombination was able to reach a global minimum in _one step_. Our evaluation of SafeCombination considered only a few possible choices for the matrix of update directions, and further exploration of this space remains a promising area of future work. ## 5 Related Work At a high level, this chapter considers the problem of minimizing a scalar-valued function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\), where the function \(f\) is available in symbolic form, and is composed of sub-differentiable elementary functions. This problem has been studied in the applied mathematics community for at least 80 years and is the subject of a vast literature; see the textbook by Nocedal and Wright [29] for an introduction. Most relevant to our work is the literature on majorization-minimization (MM) optimizers, which iteratively reduce the loss by minimizing a locally-tight upper bound (called a majorizer). MM is itself the subject of a large literature; see [19] for a tutorial, and see the textbooks by Lange [22] and de Leeuw [8] for a through introduction. Majorizers have been derived by hand for many specific problems of interest, including logistic regression [5], quantile regression [18], multidimensional scaling [7, 9, 12] generalized Bradley-Terry models [17], and support vector machines [13]. However, there seems to be very little work on deriving the majorizer automatically via symbolic computation, as we have done. The first work we are aware of is [32], which derives a quadratic majorizer using a recursive procedure that can be applied to neural networks of any depth. However, the approach taken by [32] differs from ours in several critical ways: 1. The algorithm of [32] seeks a quadratic majorizer that is valid everywhere (not just over a specified trust region), and hence is not applicable to losses such as \(f(x)=(x-3)^{4}\), which grow faster than any quadratic. As a consequence, the algorithm of [32] can only derive a quadratic majorizer for a neural network loss when the weights in all but one layer are held constant. 2. The algorithm of [32] propagates bounds from the output of the network toward the input, analogous to reverse-mode automatic differentiation. Such an algorithm has the virtue of efficiently computing majorizers that depend on a large number of input variables. In contrast, AutoBound uses memory linear in the number of input variables, making it only practical for majorizers that are a function of some lower-dimensional quantity such as the learning rate. However, the efficiency of the reverse-mode algorithm comes at a price, as the backward propagation of quadratic bounds requires the use of inequalities that can become very loose as the network grows wide. 3. 
As presented, the algorithm of [32] is specific to training a neural network with squared error loss and hyperbolic tangent activation functions, although we believe it could be generalized to other activation functions. As discussed in [31], we plan to develop a reverse-mode variant of AutoBound in the future, which could be used to derive majorizers that are more directly comparable to the ones derived by [32]. More recently, [4] presented a hyperparameter-free gradient descent algorithm for training deep neural networks, based on majorization-minimization ideas. This work appeared after the initial publication of our work8, and was done independently. Unlike our work, this work is based on a number of approximations that do not yield true majorizers. It also applies to a narrower class of losses, and is not _universal_ in the sense we have described. However, it demonstrates that a hyperparameter-free optimizer based on majorization-minimization ideas can successfully train deep neural networks at ImageNet scale. Footnote 8: Our work first appeared as Chapter 5 of version 1 of [31], which was posted to arXiv in December 2022, while [4] was posted to arXiv in April 2023. ## 6 Future Work In this paper, we have presented universal MM optimization algorithms built on top of AutoBound [31]. Though we believe our experiments have shown that these algorithms exhibit qualitatively new (and desirable) behavior, we have not done all we can to turn them into practical general-purpose optimizers. Promising areas of future work include: * _Mini-batch optimization._ Both the theory and experiments in this paper have been limited to full-batch optimization. However, the MM paradigm can be extended to mini-batch optimization (e.g., [24]). A universal mini-batch MM optimization algorithm may outperform Adam and AdaGrad on certain large-scale machine learning problems. * _Approximate majorizers._ Currently, both SafeRate and SafeCombination require more wall time per step than Adam or AdaGrad. However, the wall time can be made comparable to Adam or AdaGrad by computing the majorizer approximately, using a random subset of the training data. Preliminary experiments show that this can yield better tradeoffs between loss and wall time early in optimization, but some care is required to ensure convergence to the same loss asymptotically. Additionally, the universal MM optimization algorithms presented in this paper have been limited to majorizing the loss as a function of a small number of learning rates. This was necessary because AutoBound, like forward-mode automatic differentiation, requires memory linear in the number of inputs. However, as discussed in [31], we plan to develop a reverse-mode variant of AutoBound that would allow us to efficiently compute quadratic majorizers for a function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) as a function of the \(n\)-dimensional input. This would allow us to define universal MM optimization algorithms that determine the update direction on their own, rather than taking it as a given (as SafeRate does).
Majorization-minimization (MM) is a family of optimization methods that iteratively reduce a loss by minimizing a locally tight upper bound, called a majorizer. Traditionally, majorizers were derived by hand, and MM was only applicable to a small number of well-studied problems. We present optimizers that instead derive majorizers automatically, using a recent generalization of Taylor-mode automatic differentiation. These universal MM optimizers can be applied to arbitrary problems, converge from any starting point, and require no hyperparameter tuning.
2301.13381
When Source-Free Domain Adaptation Meets Learning with Noisy Labels
Recent state-of-the-art source-free domain adaptation (SFDA) methods have focused on learning meaningful cluster structures in the feature space, which have succeeded in adapting the knowledge from source domain to unlabeled target domain without accessing the private source data. However, existing methods rely on the pseudo-labels generated by source models that can be noisy due to domain shift. In this paper, we study SFDA from the perspective of learning with label noise (LLN). Unlike the label noise in the conventional LLN scenario, we prove that the label noise in SFDA follows a different distribution assumption. We also prove that such a difference makes existing LLN methods that rely on their distribution assumptions unable to address the label noise in SFDA. Empirical evidence suggests that only marginal improvements are achieved when applying the existing LLN methods to solve the SFDA problem. On the other hand, although there exists a fundamental difference between the label noise in the two scenarios, we demonstrate theoretically that the early-time training phenomenon (ETP), which has been previously observed in conventional label noise settings, can also be observed in the SFDA problem. Extensive experiments demonstrate significant improvements to existing SFDA algorithms by leveraging ETP to address the label noise in SFDA.
Li Yi, Gezheng Xu, Pengcheng Xu, Jiaqi Li, Ruizhi Pu, Charles Ling, A. Ian McLeod, Boyu Wang
2023-01-31T03:06:47
http://arxiv.org/abs/2301.13381v2
# When Source-Free Domain Adaptation Meets Learning with Noisy Labels ###### Abstract Recent state-of-the-art source-free domain adaptation (SFDA) methods have focused on learning meaningful cluster structures in the feature space, which have succeeded in adapting the knowledge from source domain to unlabeled target domain without accessing the private source data. However, existing methods rely on the pseudo-labels generated by source models that can be noisy due to domain shift. In this paper, we study SFDA from the perspective of learning with label noise (LLN). Unlike the label noise in the conventional LLN scenario, we prove that the label noise in SFDA follows a different distribution assumption. We also prove that such a difference makes existing LLN methods that rely on their distribution assumptions unable to address the label noise in SFDA. Empirical evidence suggests that only marginal improvements are achieved when applying the existing LLN methods to solve the SFDA problem. On the other hand, although there exists a fundamental difference between the label noise in the two scenarios, we demonstrate theoretically that the early-time training phenomenon (ETP), which has been previously observed in conventional label noise settings, can also be observed in the SFDA problem. Extensive experiments demonstrate significant improvements to existing SFDA algorithms by leveraging ETP to address the label noise in SFDA. ## 1 Introduction Deep learning demonstrates strong performance on various tasks across different fields. However, it is limited by the requirement of large-scale labeled and independent, and identically distributed (i.i.d.) data. Unsupervised domain adaptation (UDA) is thus proposed to mitigate the distribution shift between the labeled source and unlabeled target domain. In view of the importance of data privacy, it is crucial to be able to adapt a pre-trained source model to the unlabeled target domain without accessing the private source data, which is known as Source Free Domain Adaptation (SFDA). The current state-of-the-art SFDA methods (Liang et al., 2020; Yang et al., 2021; 2021) mainly focus on learning meaningful cluster structures in the feature space, and the quality of the learned cluster structures hinges on the reliability of pseudo labels generated by the source model. Among these methods, SHOT (Liang et al., 2020) purifies pseudo labels of target data based on nearest centroids, and then the purified pseudo labels are used to guide the self-training. G-SFDA (Yang et al., 2021) and NRC (Yang et al., 2021) further refine pseudo labels by encouraging similar predictions to the data point and its neighbors. For a single target data point, when most of its neighbors are correctly predicted, these methods can provide an accurate pseudo label to the data point. However, as we illustrate the problem in Figure 1(a-b), when the majority of its neighbors are incorrectly predicted to a category, it will be assigned with an incorrect pseudo label, misleading the learning of cluster structures. The experimental result on VisDA (Peng et al., 2017), shown in Figure 1ii, further verifies this phenomenon. By directly applying the pre-trained source model on each target domain instance (central instance), we collect its neighbors and evaluate their quality. 
We observed that for each class a large proportion of the neighbors are _misleading_ (i.e., the neighbors' pseudo labels are different from the central instance's true label), some even with high confidence (e.g., the _over-confident misleading neighbors_ whose prediction score is larger than 0.75). Based on this observation, we can conclude that: (1) the pseudo labels leveraged in current SFDA methods can be heavily noisy; (2) some pseudo-label purification methods utilized in SFDA, which severely rely on the quality of the pseudo label itself, will be affected by such label noise, and the prediction error will accumulate as the training progresses. More details can be found in Appendix A. In this paper, we address the aforementioned problem by formulating SFDA as _learning with label noise_ (LLN). Unlike existing studies that heuristically rely on cluster structures or neighbors, we investigate the properties of label noise in SFDA and show that there is an intrinsic discrepancy between the SFDA and the LLN problems. Specifically, in conventional LLN scenarios, the label noise is generated by human annotators or image search engines (Patrini et al., 2017; Xiao et al., 2015; Xia et al., 2020a), where the underlying distribution assumption is that the mislabeling rate for a sample is bounded. However, in the SFDA scenarios, the label noise is generated by the source model due to the distribution shift, where we prove that the mislabeling rate for a sample is much higher, and can approach 1. We term the former label noise in LLN as _bounded label noise_ and the latter label noise in SFDA as _unbounded label noise_. Moreover, we theoretically show that most existing LLN methods, which rely on the bounded label noise assumption, are unable to address the label noise in SFDA due to the fundamental difference (Section 3). To this end, we leverage the _early-time training phenomenon_ (ETP) in LLN to address the unbounded label noise and to improve the efficiency of existing SFDA algorithms. Specifically, ETP indicates that classifiers can predict mislabeled samples with relatively high accuracy during the early learning phase, before they start to memorize the mislabeled data (Liu et al., 2020). Although ETP has been previously observed, it has only been studied under the bounded random label noise of conventional LLN scenarios. In this work, we theoretically and empirically show that ETP still exists in the unbounded label noise scenario of SFDA. Moreover, we also empirically justify that existing SFDA algorithms can be substantially improved by leveraging ETP, which opens up a new avenue for SFDA. As an instantiation, we incorporate a simple early learning regularization (ELR) term (Liu et al., 2020) with existing SFDA objective functions, achieving consistent improvements on four different SFDA benchmark datasets. As a comparison, we also apply other existing LLN methods, including Generalized Cross Entropy (GCE) (Zhang and Sabuncu, 2018), Symmetric Cross Entropy Learning (SL) (Wang et al., 2019), Generalized Jensen-Shannon Divergence (GJS) (Englesson and Azizpour, 2021) and Progressive Label Correction (PLC) (Zhang et al., 2021), to SFDA. Our empirical evidence shows that they are inappropriate for addressing the label noise in SFDA. This is also consistent with our theoretical results (Section 4). Our main contributions can be summarized as follows: (1) We establish the connection between SFDA and LLN.
Compared with the conventional LLN problem that assumes bounded label noise, the problem in SFDA can be viewed as an LLN problem with unbounded label noise. (2) We theoretically and empirically justify that ETP exists in the unbounded label noise scenario. On the algorithmic side, we instantiate our analysis by simply adding a regularization term to the SFDA objective functions. (3) We conduct extensive experiments to show that ETP can be utilized to improve many existing SFDA algorithms by a large margin across multiple SFDA benchmarks.

Figure 1: (i) (a) The SFDA problem can be formulated as an LLN problem. (b) The existing SFDA algorithms using the local cluster information cannot address label noise due to the unbounded label noise (Section 3). (c) We prove that ETP exists in SFDA, which can be leveraged to address the unbounded label noise (Section 4). (ii) Observed label noise phenomena on the VisDA dataset.

## 2 Related work

**Source-free domain adaptation.** Recently, SFDA has been studied to preserve data privacy. The first branch of research leverages target pseudo labels to conduct self-training and thereby implicitly achieve adaptation (Liang et al., 2021; Tanwisuth et al., 2021; Ahmed et al., 2021; Yang et al., 2021). SHOT (Liang et al., 2020) introduces a k-means clustering and mutual information maximization strategy for self-training. NRC (Yang et al., 2021) further investigates the neighbors of target clusters to improve the accuracy of pseudo labels. These studies more or less involve pseudo-label purification processes, but they are primarily heuristic algorithms and suffer from the previously mentioned label noise accumulation problem. The other branch utilizes generative models to synthesize target-style training data (Qiu et al., 2021; Liu et al., 2021). Some methods also explore SFDA algorithms in various settings. USFDA (Kundu et al., 2020) and FS (Kundu et al., 2020) design methods for universal and open-set UDA. In this paper, we regard SFDA as an LLN problem. We aim to explore what category of noisy labels exists in SFDA and to ameliorate such label noise to improve the performance of current SFDA algorithms.

**Learning with label noise.** Existing methods for training neural networks with label noise focus on symmetric, asymmetric, and instance-dependent label noise. For example, one branch of research focuses on leveraging noise-robust loss functions to cope with symmetric and asymmetric noise, including GCE (Zhang and Sabuncu, 2018), SL (Wang et al., 2019), NCE (Ma et al., 2020), and GJS (Englesson and Azizpour, 2021), which have been proven effective under bounded label noise. On the other hand, CORES (Cheng et al., 2020) and CAL (Zhu et al., 2021) are shown useful in mitigating instance-dependent label noise. These methods are only tailored to conventional LLN settings. Recently, Liu et al. (2020) studied the early-time training phenomenon (ETP) in conventional label noise scenarios and proposed a regularization term, ELR, to exploit the benefits of ETP. PCL (Zhang et al., 2021) is another conventional LLN algorithm utilizing ETP, but it cannot maintain the benefits of ETP in SFDA as memorizing noisy labels is much faster in SFDA. Our contributions are: (1) We theoretically and empirically study ETP in the SFDA scenario. (2) Based on an in-depth analysis of many existing LLN methods (Zhang and Sabuncu, 2018; Wang et al., 2019; Englesson and Azizpour, 2021; Zhang et al., 2021), we demonstrate that ELR is useful for many SFDA problems.
## 3 Label Noise In SFDA The presence of label noise on training datasets has been shown to degrade the model performance (Malach and Shalev-Shwartz, 2017; Han et al., 2018). In SFDA, existing algorithms rely on pseudo-labels produced by the source model, which are inevitably noisy due to the domain shift. The SFDA methods such as Liang et al. (2020); Yang et al. (2021);b) cannot tackle the situation when some target samples and their neighbors are all incorrectly predicted by the source model. In this section, we formulate the SFDA as the problem of LLN to address this issue. We assume that the source domain \(\mathcal{D}_{S}\) and the target domain \(\mathcal{D}_{T}\) follow two different underlying distributions over \(\mathcal{X}\times\mathcal{Y}\), where \(\mathcal{X}\) and \(\mathcal{Y}\) are respectively the input and label spaces. In the SFDA setting, we aim to learn a target classifier \(f(\mathbf{x};\theta):\mathcal{X}\rightarrow\mathcal{Y}\) only with a pre-trained model \(f_{S}(\mathbf{x})\) on \(\mathcal{D}_{S}\) and a set of unlabeled target domain observations drawn from \(\mathcal{D}_{T}\). We regard the incorrectly assigned pseudo-labels as noisy labels. Unlike the "bounded label noise" assumption in the conventional LLN domain, we will show that the label noise in SFDA is _unbounded_. We further prove that most existing LLN methods that rely on the bounded assumption cannot address the label noise in SFDA due to the difference. **Label noise in conventional LLN settings:** In conventional label noise settings, the injected noisy labels are collected by either human annotators or image search engines (Lee et al., 2018; Li et al., 2017; Xiao et al., 2015). The label noise is usually assumed to be either independent of instances (i.e., symmetric label noise or asymmetric label noise) (Patrini et al., 2017; Liu and Tao, 2015; Xu et al., 2019) or dependent of instances (i.e., instance-dependent label noise) (Berthon et al., 2021; Xia et al., 2020). The underling assumption for them is that a sample \(\mathbf{x}\) has the highest probability of being in the correct class \(y\), i.e., \(\Pr[\tilde{Y}=i|Y=i,X=x]>\Pr[\tilde{Y}=j|Y=i,X=x],\ \forall x\in \mathcal{X},i\neq j\) where \(\tilde{Y}\) is the noisy label and \(Y\) is the ground-truth label for input \(X\). Equivalently, it assumes a bounded noise rate. For example, given an image to annotate, the mislabeling rate for the image is bounded by a small number, which is realistic in conventional LLN settings (Xia et al., 2020; Cheng et al., 2020). When the label noise is generated by the source model, the underlying assumption of these types of label noise does not hold. **Label noise in SFDA:** As for the label noise generated by the source model, mislabeling rate for an image can approach \(1\), that is, \(\Pr[\tilde{Y}=j|Y=i,X=x]\to 1,\ \exists\mathcal{S}\subset\mathcal{X},\ \forall x \in\mathcal{S},i\neq j\). To understand that the label noise in SFDA is unbounded, we consider a two-component Multivariate Gaussian mixture distribution with equal priors for both domains. Let the first component (\(y=1\)) of the source domain distribution \(\mathcal{D}_{\mathcal{S}}\) be \(\mathcal{N}(\mathbf{\mu}_{1},\sigma^{2}\mathbf{I}_{d})\), and the second component (\(y=-1\)) of \(\mathcal{D}_{\mathcal{S}}\) be \(\mathcal{N}(\mathbf{\mu}_{2},\sigma^{2}\mathbf{I}_{d})\), where \(\mathbf{\mu}_{1},\mathbf{\mu}_{2}\in\mathbb{R}^{d}\) and \(\mathbf{I}_{d}\in\mathbb{R}^{d\times d}\). 
For the target domain distribution \(\mathcal{D}_{T}\), let the first component (\(y=1\)) of \(\mathcal{D}_{T}\) be \(\mathcal{N}(\mathbf{\mu}_{1}+\mathbf{\Delta},\sigma^{2}\mathbf{I}_{d})\), and the second component (\(y=-1\)) of \(\mathcal{D}_{T}\) be \(\mathcal{N}(\mathbf{\mu}_{2}+\mathbf{\Delta},\sigma^{2}\mathbf{I}_{d})\), where \(\mathbf{\Delta}\in\mathbb{R}^{d}\) is the shift of the two domains. Notice that the domain shift considered is a general shift and it has been studied in Stojanov et al. (2021); Zhao et al. (2019), where we also illustrate the domain shift in Figure 9 in supplementary material. Let \(f_{S}\) be the optimal source classifier. First, we build the relationship between the mislabeling rate for target data and the domain shift: \[\Pr_{(\mathbf{x},y)\sim\mathcal{D}_{T}}[f_{S}(\mathbf{x})\neq y]=\frac{1}{2} \Phi(-\frac{d_{1}}{\sigma})+\frac{1}{2}\Phi(-\frac{d_{2}}{\sigma}), \tag{1}\] where \(d_{1}=\left\|\frac{\mathbf{\mu}_{2}-\mathbf{\mu}_{1}}{2}-\mathbf{c}\right\|\mathrm{ sign}(\left\|\frac{\mathbf{\mu}_{2}-\mathbf{\mu}_{1}}{2}\right\|-\left\|\mathbf{c}\right\|)\), \(d_{2}=\left\|\frac{\mathbf{\mu}_{2}-\mathbf{\mu}_{1}}{2}+\mathbf{c}\right\|\), \(\mathbf{c}=\alpha(\mathbf{\mu}_{2}-\mathbf{\mu}_{1})\), \(\alpha=\frac{\mathbf{\Delta}^{\top}(\mathbf{\mu}_{2}-\mathbf{\mu}_{1})}{\|\mathbf{\mu}_{2}- \mathbf{\mu}_{1}\|^{2}}\) is the magnitude of domain shift, and \(\Phi\) is the standard normal cumulative distribution function. Eq. (1) shows that the magnitude of the domain shift inherently controls the mislabeling error for target data. This mislabeling rate increases as the magnitude of the domain shift increases. We defer the proof and details to Appendix B. More importantly, we characterize that the label noise is unbounded among these mislabeled samples. **Theorem 3.1**.: _Without loss of generality, we assume that the \(\mathbf{\Delta}\) is positively correlated with the vector \(\mathbf{\mu}_{2}-\mathbf{\mu}_{1}\), i.e., \(\mathbf{\Delta}^{\top}(\mathbf{\mu}_{2}-\mathbf{\mu}_{1})>0\). For \((\mathbf{x},y)\sim\mathcal{D}_{T}\), if \(\mathbf{x}\in\mathbf{R}\), then_ \[\Pr[f_{S}(\mathbf{x})\neq y]\geq 1-\delta, \tag{2}\] _where \(\delta\in(0,1)\) (i.e., \(\delta=0.01\)), \(\mathbf{R}=\mathbf{R}_{1}\bigcap\mathbf{R}_{2}\), \(\mathbf{R}_{1}=\{\mathbf{x}:\|\mathbf{x}-\mathbf{\mu}_{1}-\mathbf{\Delta}\|\leq\sigma( \frac{\sqrt{2}}{2}-\frac{\log\frac{1-\delta}{\delta}}{\sqrt{d}})\}\), and \(\mathbf{R}_{2}=\{\mathbf{x}:\mathbf{x}^{\top}\mathbf{I}_{d}>(\sigma d+2\mathbf{ \mu}_{1}^{\top}\mathbf{I}_{d})/2\}\). Meanwhile, \(\mathbf{R}\) is non-empty when \(\alpha>(\log\frac{1-\delta}{\delta})/d\), where \(\alpha=\frac{\mathbf{\Delta}^{\top}(\mathbf{\mu}_{2}-\mathbf{\mu}_{1})}{\|\mathbf{\mu}_{2}-\bm {\mu}_{1}\|^{2}}>0\) is the magnitude of the domain shift along the direction \(\mathbf{\mu}_{2}-\mathbf{\mu}_{1}\)._ Conventional LLN methods assume that the label noise is bounded: \(\Pr[f_{H}(\mathbf{x})\neq y]<m,\ \forall(\mathbf{x},y)\sim\mathcal{D}_{T}\), where \(f_{H}\) is the labeling function, and \(m=0.5\) if the number of clean samples of each component are the same (Cheng et al., 2020). However, Theorem 3.1 indicates that the label noise generated by the source model is unbounded for any \(\mathbf{x}\in\mathbf{R}\). In practice, region \(\mathbf{R}\) is non-empty as neural networks are usually trained on high dimensional data such that \(d\gg 1\), so \(\alpha>(\log\frac{1-\delta}{\delta})/d\to 0\) is easy to satisfy. 
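To make Eq. (1) and the role of the shift magnitude \(\alpha\) concrete, the following is a minimal simulation sketch; the dimension, class separation, noise level, and \(\alpha\) are assumed values chosen only for illustration.

```python
# Minimal simulation sketch of Eq. (1): the target-domain error of the optimal source classifier
# under a mean-shifted two-component Gaussian mixture (all parameter values are illustrative).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
d, sigma, n, shift = 50, 0.25, 100_000, 0.3
mu1 = np.zeros(d)
mu2 = np.ones(d) / np.sqrt(d)          # so that ||mu2 - mu1|| = 1
delta = shift * (mu2 - mu1)            # domain shift with alpha = 0.3

# target data: y = +1 ~ N(mu1 + delta, sigma^2 I), y = -1 ~ N(mu2 + delta, sigma^2 I)
y = rng.choice([1, -1], size=n)
centers = np.where(y[:, None] == 1, mu1 + delta, mu2 + delta)
x = centers + sigma * rng.standard_normal((n, d))

# optimal source classifier: predict +1 if x is closer to mu1 than to mu2
f_s = np.where((x - (mu1 + mu2) / 2) @ (mu1 - mu2) > 0, 1, -1)
empirical_error = np.mean(f_s != y)

# closed-form error from Eq. (1)
alpha = delta @ (mu2 - mu1) / np.linalg.norm(mu2 - mu1) ** 2
c = alpha * (mu2 - mu1)
d1 = np.linalg.norm((mu2 - mu1) / 2 - c) * np.sign(np.linalg.norm((mu2 - mu1) / 2) - np.linalg.norm(c))
d2 = np.linalg.norm((mu2 - mu1) / 2 + c)
theoretical_error = 0.5 * norm.cdf(-d1 / sigma) + 0.5 * norm.cdf(-d2 / sigma)
print(empirical_error, theoretical_error)   # the two values should roughly agree
```

Increasing the assumed `shift` toward 0.5 and beyond pushes one target component across the source decision boundary, which is exactly the regime in which the mislabeling rate on a subset of points approaches 1.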
The probability measure on \(\mathbf{R}=\mathbf{R}_{1}\bigcap\mathbf{R}_{2}\) (i.e., \(\Pr_{(\mathbf{x},y)\sim\mathcal{D}_{T}}[\mathbf{x}\in\mathbf{R}]\)) increases as the magnitude of the domain shift \(\alpha\) increases, meaning that more data points contradict the conventional LLN assumption. More details can be found in Appendix C. Given that unbounded label noise exists in SFDA, the following lemma establishes that many existing LLN methods (Wang et al., 2019; Ghosh et al., 2017; Englesson and Azizpour, 2021; Ma et al., 2020), which rely on the bounded assumption, are _not_ noise tolerant in SFDA.

**Lemma 3.2**.: _Let the risk of the function \(h:\mathcal{X}\rightarrow\mathcal{Y}\) under the clean data be \(R(h)=\mathbb{E}_{\mathbf{x},y}[\ell_{\text{LLN}}(h(\mathbf{x}),y)]\), and the risk of \(h\) under the noisy data be \(\widetilde{R}(h)=\mathbb{E}_{\mathbf{x},\tilde{y}}[\ell_{\text{LLN}}(h(\mathbf{x}),\tilde{y})]\), where the noisy data follow the unbounded assumption, i.e., \(\Pr[\tilde{y}\neq y|\mathbf{x}\in\mathbf{R}]=1-\delta\) for a subset \(\mathbf{R}\subset\mathcal{X}\) and \(\delta\in(0,1)\). Then the global minimizer \(\tilde{h}^{\star}\) of \(\widetilde{R}(h)\) disagrees with the global minimizer \(h^{\star}\) of \(R(h)\) on data points \(\mathbf{x}\in\mathbf{R}\) with probability at least \(1-\delta\)._

We use \(\ell_{\text{LLN}}\) to denote the losses of the existing noise-robust-loss-based LLN methods in Wang et al. (2019); Ghosh et al. (2017); Englesson and Azizpour (2021); Ma et al. (2020). When the noisy data follow the bounded assumption, these methods are noise tolerant as the minimizer \(\tilde{h}^{\star}\) converges to the minimizer \(h^{\star}\) with a high probability. We defer the details and proof of the related LLN methods to Appendix D.

## 4 Learning With Label Noise in SFDA

Given the fundamental difference between the label noise in SFDA and the label noise in conventional LLN scenarios, existing LLN methods, whose underlying assumption is bounded label noise, cannot be applied to solve the label noise in SFDA. This section focuses on investigating how to address the unbounded label noise in SFDA. Motivated by recent studies (Liu et al., 2020; Arpit et al., 2017), which observed an early-time training phenomenon (ETP) on noisy datasets with bounded random label noise, we find that ETP does not rely on the bounded random label noise assumption, and it can be generalized to the unbounded label noise in SFDA. ETP describes the training dynamics of a classifier that preferentially fits the clean samples and therefore has higher prediction accuracy for mislabeled samples during the early-training stage. Such training characteristics can be very beneficial for SFDA problems, in which we only have access to the source model and the highly noisy target data. To theoretically prove ETP in the presence of unbounded label noise, we first describe the problem setup. We still consider a two-component Gaussian mixture distribution with equal priors. We denote by \(y\) the true label for \(\mathbf{x}\), and assume it is a balanced sample from \(\{-1,+1\}\). The instance \(\mathbf{x}\) is sampled from the distribution \(\mathcal{N}(y\boldsymbol{\mu},\ \sigma\mathbf{I}_{d})\), where \(\|\boldsymbol{\mu}\|=1\). We denote by \(\tilde{y}\) the noisy label for \(\mathbf{x}\). We observe that the label noise generated by the source model is concentrated close to the decision boundary, as revealed in Theorem 3.1.
So, to assign the noisy labels, we let \(\tilde{y}=y\beta(\mathbf{x},y)\), where \(\beta(\mathbf{x},y)=\mathrm{sign}(\mathbb{1}\{y\mathbf{x}^{\top}\boldsymbol{ \mu}>r\}-0.5)\) is the label flipping function, and \(r\) controls the mislabeling rate. If \(\beta(\mathbf{x},y)<1\), then the data point \(\mathbf{x}\) is mislabeled. Meanwhile, the label noise is unbounded by adopting the label flipping function \(\beta(\mathbf{x},y)\): \(\Pr[\tilde{y}\neq y|y\mathbf{x}^{\top}\boldsymbol{\mu}\leq r]=1\), where \(\mathbf{R}=\{\mathbf{x}:y\mathbf{x}^{\top}\boldsymbol{\mu}\leq r\}\). We study the early-time training dynamics of gradient descent on the linear classifier. The parameter \(\theta\) is learned over the unbounded label noise data \(\{x_{i},\tilde{y}_{i}\}_{i=1}^{n}\) with the following logistic loss function: \[\mathcal{L}(\theta_{t+1})=\frac{1}{n}\sum_{i=1}^{n}\log\left(1+\exp\left(- \tilde{y}_{i}\theta_{t+1}^{\top}\mathbf{x}_{i}\right)\right),\] where \(\theta_{t+1}=\theta_{t}-\eta\nabla_{\theta}\mathcal{L}(\theta_{t})\), and \(\eta\) is the learning rate. Then the following theorem builds the connection between the prediction accuracy for mislabeled samples at an early-training time \(T\). **Theorem 4.1**.: _Let \(B=\{\mathbf{x}:\tilde{y}\neq y\}\) be a set of mislabeled samples. Let \(\kappa(B;\theta)\) be the prediction accuracy calculated by the ground-truth labels and the predicted labels by the classifier with parameter \(\theta\) for mislabeled samples. If at most half of the samples are mislabeled (\(r<1\)), then there exists a proper time \(T\) and a constant \(c_{0}>0\) such that for any \(0<\sigma<c_{0}\) and \(n\rightarrow\infty\), with probability \(1-o_{p}(1)\):_ \[\kappa(B;\theta_{T})\geq 1-\exp\{-\frac{1}{200}g(\sigma)^{2}\}, \tag{3}\] _where \(g(\sigma)=\frac{\mathrm{Erf}[\frac{1-\sigma}{\sqrt{2\pi}}]}{2(1+2\sigma)\sigma }+\frac{\exp\left(-\frac{(r-1)^{2}}{2\sigma^{2}}\right)}{\sqrt{2\pi}(1+2\sigma )}>0\) is a monotone decreasing function that \(g(\sigma)\rightarrow\infty\) as \(\sigma\to 0\), and \(\mathrm{Erf}[x]=\frac{2}{\sqrt{\pi}}\int_{0}^{x}e^{-t^{2}}\,\mathrm{d}t\)._ The proof is provided in Appendix E. Compared to ETP found in Liu et al. (2020), where the label noise is assumed to be bounded, Theorem 4.1 presents that ETP also exists even though the label noise is unbounded. At a proper time T, the classifier trained by the gradient descent algorithm can provide accurate predictions for mislabeled samples, where its accuracy is lower bounded by a function of the variance of clusters \(\sigma\). When \(\sigma\to 0\), the predictions of all mislabeled samples equal to their ground-truth labels (i.e., \(\kappa(B;\theta_{T})\to 1\)). When the classifier is trained for a sufficiently long time, it will gradually memorize mislabeled data. The predictions of mislabeled samples are equivalent to their incorrect labels instead of their ground-truth labels (Liu et al., 2020; Maennel et al., 2020). Based on these insights, the memorization of mislabeled data can be alleviated by leveraging their predicted labels during the early-training time. To leverage the predictions during the early-training time, we adopt a recently established method, early learning regularization (ELR) (Liu et al., 2020), which encourages model predictions to stick to the early-time predictions for \(\mathbf{x}\). Since ETP exists in the scenarios of the unbounded label noise, ELR can be applied to solve the label noise in SFDA. 
The regularization is given by: \[\mathcal{L}_{\mathrm{ELR}}(\theta_{t})=\log(1-\bar{y}_{t}^{\top}f(\mathbf{x};\theta_{t})), \tag{4}\] where we overload \(f(\mathbf{x};\theta_{t})\) to be the probabilistic output for the sample \(\mathbf{x}\), and \(\bar{y}_{t}=\beta\bar{y}_{t-1}+(1-\beta)f(\mathbf{x};\theta_{t})\) is the moving average prediction for \(\mathbf{x}\), where \(\beta\) is a hyperparameter. To see how ELR prevents the model from memorizing the label noise, we calculate the gradient of Eq. (4) with respect to \(f(\mathbf{x};\theta_{t})\), which is given by: \[\frac{\mathrm{d}\mathcal{L}_{\text{ELR}}(\theta_{t})}{\mathrm{d}f(\mathbf{x};\theta_{t})}=-\frac{\bar{y}_{t}}{1-\bar{y}_{t}^{\top}f(\mathbf{x};\theta_{t})}.\] Note that minimizing Eq. (4) forces \(f(\mathbf{x};\theta_{t})\) to stay close to \(\bar{y}_{t}\). When \(\bar{y}_{t}\) is better aligned with \(f(\mathbf{x};\theta_{t})\), the magnitude of the gradient becomes larger. This makes the gradient of aligning \(f(\mathbf{x};\theta_{t})\) with \(\bar{y}_{t}\) overwhelm the gradient of other loss terms that align \(f(\mathbf{x};\theta_{t})\) with noisy labels. As the training progresses, the moving averaged predictions \(\bar{y}_{t}\) for target samples gradually approach their ground-truth labels until time \(T\). Therefore, Eq. (4) prevents the model from memorizing the label noise by forcing the model predictions to stay close to these moving averaged predictions \(\bar{y}_{t}\), which are very likely to be ground-truth labels. Some existing LLN methods propose to assign pseudo labels to data or require two-stage training for label noise (Cheng et al., 2020; Zhu et al., 2021; Zhang et al., 2021). Unlike these LLN methods, Eq. (4) can be easily embedded into any existing SFDA algorithm without conflict. The overall objective function is given by: \[\mathcal{L}=\mathcal{L}_{\text{SFDA}}+\lambda\mathcal{L}_{\text{ELR}}, \tag{5}\] where \(\mathcal{L}_{\text{SFDA}}\) is _any_ SFDA objective function, and \(\lambda\) is a hyperparameter.

**Empirical Observations on Real-World Datasets.** We empirically verify that target classifiers have higher prediction accuracy for target data during the early training and adaptation stage. We propose leveraging this benefit to prevent the classifier from memorizing the noisy labels. The observations are shown in Figure 2. The parameters of the classifiers are initialized by source models. Labels of target data are annotated by the initialized classifiers. We train the target classifiers on target data with the standard cross-entropy (CE) loss and the generalized cross-entropy (GCE) loss, a well-known noise-robust loss widely leveraged in bounded LLN scenarios. The solid green, orange and blue lines represent the training accuracy of optimizing the classifiers with CE loss, GCE loss, and ELR loss, respectively. The dotted red lines represent the labeling accuracy of the initialized classifiers. Considering that the classifiers memorize the unbounded label noise very fast, we evaluate the prediction accuracy on target data every batch for the first \(90\) steps. After \(90\) steps, we evaluate the prediction accuracy every \(0.33\) epoch. The green lines show that ETP exists in SFDA, which is consistent with our theoretical result. Meanwhile, in all scenarios, the green and orange lines show that classifiers provide higher prediction accuracy during the first few iterations. After a few iterations, they start to memorize the label noise even with a noise-robust loss (e.g., GCE).
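For concreteness, Eqs. (4)-(5) translate into a few lines of code; the following PyTorch-style sketch is illustrative (the class name, clamping constants, and default values of \(\beta\) and \(\lambda\) are assumptions, not the exact implementation used in our experiments).

```python
# Minimal sketch of the ELR term in Eq. (4) and its combination with an SFDA loss as in Eq. (5).
# Hyperparameter defaults and numerical clamping are illustrative assumptions.
import torch


class ELRLoss(torch.nn.Module):
    def __init__(self, num_samples, num_classes, beta=0.9):
        super().__init__()
        # running per-sample targets \bar{y}_t (assumed to live on the same device as the model)
        self.register_buffer("target", torch.zeros(num_samples, num_classes))
        self.beta = beta

    def forward(self, logits, idx):
        prob = torch.softmax(logits, dim=1).clamp(1e-4, 1.0 - 1e-4)
        prob = prob / prob.sum(dim=1, keepdim=True)
        with torch.no_grad():  # update the moving-average prediction; no gradient flows through it
            self.target[idx] = self.beta * self.target[idx] + (1.0 - self.beta) * prob
        inner = (self.target[idx] * prob).sum(dim=1).clamp(max=1.0 - 1e-4)
        return torch.log(1.0 - inner).mean()  # Eq. (4), averaged over the batch


def total_loss(sfda_loss, elr_loss, logits, idx, lam=3.0):
    """Eq. (5): any SFDA objective plus the ELR regularizer weighted by lambda."""
    return sfda_loss + lam * elr_loss(logits, idx)
```

Because the running targets are stored per target sample (indexed by `idx`), the regularizer can be added to an SFDA objective without modifying the rest of the training pipeline.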
Eventually, the classifiers are expected to memorize the whole datasets. In conventional LLN settings, it has been empirically verified that it takes a _much longer_ time before classifiers start memorizing the label noise (Liu et al., 2020; Xia et al., 2020). We provide further analysis in Appendix H. We highlight that PCL (Zhang et al., 2021) leverages ETP at every epoch, so it cannot capture the benefits of ETP and is inappropriate for unbounded label noise due to the fast memorization speed in SFDA. As a comparison, we choose ELR since it leverages ETP at every batch. The blue lines show that leveraging ETP via ELR can address the memorization of noisy labels in SFDA.

Figure 2: Training accuracy on various target domains. The source models initialize the classifiers and annotate unlabeled target data. As the classifiers memorize the unbounded label noise very fast, for the first \(90\) steps we evaluate the prediction accuracy on target data every batch, and one step represents one training batch. After the first \(90\) steps, we evaluate the prediction accuracy every \(0.3\) epoch, shown as one step. We use CE, GCE, and ELR to train the classifiers on the labeled target data, shown in solid green lines, solid orange lines, and solid blue lines, respectively. The dotted red line represents the accuracy of labeling target data. Eventually, the classifiers memorize the label noise, and the prediction accuracy equals the labeling accuracy (shown in (iii-iv)). Additional results on transfer pairs can be found in Appendix F.

## 5 Experiments

We aim to improve the efficiency of existing SFDA algorithms by using ELR to leverage ETP. We evaluate the performance on four different SFDA benchmark datasets: Office-\(31\) (Saenko et al., 2010), Office-Home (Venkateswara et al., 2017), VisDA (Peng et al., 2017) and DomainNet (Peng et al., 2019). Due to the limited space, the results on the Office-\(31\) dataset and additional experimental details are provided in Appendix G.

**Evaluation.** We incorporate ELR into three existing baseline methods: SHOT (Liang et al., 2020), G-SFDA (Yang et al., 2021b), and NRC (Yang et al., 2021a). SHOT uses a k-means clustering and mutual information maximization strategy to train the representation network while freezing the final linear layer. G-SFDA aims to cluster target data with similar neighbors and attempts to maintain the source domain performance. NRC also explores the neighbors of target data by graph-based methods. ELR can be easily embedded into these methods by simply adding the regularization term to the loss function being optimized, without affecting the existing SFDA frameworks. We average the results based on three random runs.

**Results.** Tables 1-4 show the results before/after leveraging the early-time training phenomenon, where Table 4 is shown in Appendix G. Among these tables, the top part shows the results of conventional UDA methods, and the bottom part shows the results of SFDA methods. In the tables, we use SF to indicate whether the method is source free or not. We use Source Only + ELR to indicate ELR with self-training. The results show that ELR itself can boost the performance. As existing SFDA methods are not able to address unbounded label noise, incorporating ELR into these SFDA methods can further boost the performance. The four datasets, including all \(31\) pairs of tasks (e.g., \(A\to D\)), show better performance after solving the unbounded label noise problem using the early-time training phenomenon.
Meanwhile, addressing the unbounded label noise in existing SFDA methods achieves state-of-the-art performance on all benchmark datasets. These SFDA methods also outperform most methods that need to access source data.

**Analysis about hyperparameters \(\beta\) and \(\lambda\).** The hyperparameter \(\beta\) is chosen from {\(0.5\), \(0.6\), \(0.7\), \(0.8\), \(0.9\), \(0.99\)}, and \(\lambda\) is chosen from {\(1\), \(3\), \(7\), \(12\), \(25\)}. We conduct the sensitivity study on the hyperparameters of ELR on the DomainNet dataset, which is shown in Figure 3(a-b). In each panel, the study is conducted by fixing the other hyperparameter to the optimal one. The performance is robust to the hyperparameter \(\beta\) except for \(\beta=0.99\). When \(\beta=0.99\), classifiers are sensitive to changes in learning curves. Thus, the performance degrades since the learning curves change quickly in the unbounded label noise scenarios. Meanwhile, the performance is also robust to the hyperparameter \(\lambda\) except when \(\lambda\) becomes too large. The hyperparameter \(\lambda\) balances the effects of the existing SFDA algorithms and the effects of ELR. As we indicated in Tables 1-4, using ELR alone to address the SFDA problem is not comparable to these SFDA methods. Hence, a large value of \(\lambda\) makes neural networks neglect the effects of these SFDA methods, leading to degraded performance.

### Discussion on Existing LLN Methods

As we formulate SFDA as an LLN problem, it is of interest to discuss some existing LLN methods. We mainly discuss existing LLN methods that can be easily embedded into current SFDA algorithms. Based on this principle, we choose GCE (Zhang and Sabuncu, 2018), SL (Wang et al., 2019) and GJS (Englesson and Azizpour, 2021), which have been theoretically proved to be robust to symmetric and asymmetric label noise, both of which are bounded label noise. We highlight that the more recent method GJS outperforms ELR on real-world noisy datasets. However, we will show that GJS is inferior to ELR in SFDA scenarios, because the underlying assumption for GJS does not hold in SFDA. Besides ELR, which leverages ETP, PCL is another method that leverages the same phenomenon, but we will show that it is also inappropriate for SFDA. To show the effects of the existing LLN methods under the unbounded label noise, we test these LLN methods on various SFDA datasets with target data whose labels are generated by source models. As shown in Figure 4, GCE, SL, GJS, and PCL are better than CE but still not comparable to ELR. Our analysis indicates that ELR follows the principle of ETP, which is theoretically justified in SFDA scenarios by our Theorem 3.1. The methods GCE, SL, and GJS follow the bounded label noise assumption, which does not hold in SFDA. Hence, they perform worse than ELR in SFDA, even though GJS outperforms ELR in conventional LLN scenarios. PCL (Zhang et al., 2021) utilizes ETP to purify noisy labels of target data, but it performs significantly worse than ELR. As the memorization of the unbounded label noise is very fast and classifiers memorize noisy labels within a few iterations (shown in Figure 2), purifying noisy labels only once per epoch is inappropriate for SFDA.
However, we notice that PCL performs relatively better on DomainNet than on other datasets. The reason behind it is that the memorization speed in the DomainNet dataset is relatively slow than \begin{table} \begin{tabular}{l|c|c c c c c c c c c c c c} \hline Method & SFR\(\rightarrow\)CR\(\rightarrow\)PR\(\rightarrow\)SC\(\rightarrow\)RC\(\rightarrow\)PC\(\rightarrow\)SP\(\rightarrow\)RP\(\rightarrow\)CP\(\rightarrow\)SS\(\rightarrow\)RS\(\rightarrow\)CS\(\rightarrow\)P**Avg** \\ \hline MCD (Saito et al., 2018) & ✗ & 61.9 & 69.3 & 56.2 & 79.7 & 56.6 & 53.6 & 83.3 & 58.3 & 60.9 & 81.7 & 56.2 & 66.7 & 65.4 \\ DANN (Ganin et al., 2016) & ✗ & 63.4 & 73.6 & 72.6 & 86.5 & 65.7 & 70.6 & 86.9 & 73.2 & 70.2 & 85.7 & 75.2 & 70.0 & 74.5 \\ DAN (Long et al., 2015) & ✗ & 64.3 & 70.6 & 58.4 & 79.4 & 56.7 & 60.0 & 84.5 & 61.6 & 62.2 & 79.7 & 65.0 & 62.0 & 67.0 \\ COAL (Tan et al., 2020) & ✗ & 73.9 & 75.4 & 70.5 & 89.6 & 70.0 & 71.3 & 89.8 & 68.0 & 70.5 & 88.0 & 73.2 & 70.5 & 75.9 \\ MDD (Zhang et al., 2019) & ✗ & 77.6 & 75.7 & **74.2** & 89.5 & 74.2 & **75.6** & 90.2 & 76.0 & **74.6** & 86.7 & 72.9 & 73.2 & 78.4 \\ \hline \hline Source Only & ✓ & 53.7 & 71.6 & 52.9 & 70.8 & 49.5 & 58.3 & 85.2 & 59.6 & 59.1 & 30.6 & 74.8 & 65.7 & 61.0 \\ **+ELR** & ✓ & 70.2 & 81.7 & 61.7 & 79.9 & 63.8 & 67.0 & 90.0 & 72.1 & 66.8 & 85.1 & 78.5 & 68.8 & 73.8 \\ \hline SHOT (Liang et al., 2020) & ✓ & 73.3 & 80.1 & 65.8 & **91.4** & 74.3 & 69.2 & 91.9 & 77.0 & 66.2 & 87.4 & 81.3 & 75.0 & 77.7 \\ **+ELR** & ✓ & **78.0** & 81.9 & 67.4 & 91.1 & **75.9** & 71.0 & 92.6 & 79.3 & 68.0 & 88.4 & 84.8 & 77.0 & **79.7** \\ \hline G-SFDA (Yang et al., 2021) & ✓ & 65.8 & 78.9 & 60.2 & 80.5 & 64.7 & 64.6 & 89.3 & 69.9 & 63.6 & 86.4 & 78.8 & 71.1 & 72.8 \\ **+ELR** & ✓ & 69.4 & 80.9 & 60.6 & 81.3 & 67.2 & 66.4 & 90.2 & 73.2 & 64.9 & 87.6 & 82.1 & 71.0 & 74.6 \\ \hline NRC (Yang et al., 2021) & ✓ & 69.8 & 81.1 & 62.9 & 83.4 & 74.4 & 66.3 & 90.3 & 73.4 & 65.2 & 88.2 & 82.2 & 75.8 & 76.4 \\ **+ELR** & ✓ & 75.6 & **82.2** & 65.7 & 91.2 & 77.2 & 68.5 & **92.7** & **79.8** & 67.5 & **89.3** & **85.1** & **77.6** & 79.4 \\ \hline \end{tabular} \end{table} Table 2: Accuracies (%) on DomainNet for ResNet50-based methods. Figure 3: (a)-(b) show the test accuracy on the DomainNet dataset with respect to hyperparameters of ELR. (c) shows the test accuracy of incorporating various existing LLN methods into the SFDA methods on the DomainNet dataset. other datasets, which is shown in Figure 2. In conventional LLN scenarios, PCL does not suffer from the issue since the memorization speed is much lower than the conventional LLN scenarios. In Figure 3(c), we also evaluate the performance by incorporating the existing LLN methods into the SFDA algorithms SHOT and NRC. Since PCL and SHOT assign pseudo labels to target data, PCL is incompatible with some existing SFDA methods and cannot be easily embedded into some SFDA algorithms. Hence, we only embed GCE, SL, GJS, and ELR into the SFDA algorithms. The figure illustrates that ELR still performs better than other LLN methods when incorporated into SHOT and NRC. We also notice that GCE, SL, and GJS provide marginal improvement to the vanilla SHOT and NRC methods. We think the label noise in SFDA datasets is the hybrid noise that consists of both bounded label noise and unbounded label noise due to the non-linearity of neural networks. The GCE, SL, and GJS can address the bounded label noise, while ELR can address both bounded and unbounded label noise. 
Therefore, these experiments demonstrate that using ELR to leverage ETP can successfully address the unbounded label noise in SFDA. ## 6 Conclusion In this paper, we study SFDA from a new perspective of LLN by theoretically showing that SFDA can be viewed as the problem of LLN with the unbounded label noise. Under this assumption, we rigorously justify that robust loss functions are not able to address the memorization issues of unbounded label noise. Meanwhile, based on this assumption, we further theoretically and empirically analyze the learning behavior of models during the early-time training stage and find that ETP can benifit the SFDA problems. Through extensive experiments across multiple datasets, we show that ETP can be exploited by ELR to improve prediction performance, and it can also be used to enhance existing SFDA algorithms. \begin{table} \begin{tabular}{l|c|c c c c c c c c c} \hline Method & SFplane & bcycl & bus & car & horse & knife & mcycl & person & plant & sktbd & train & truck & **Per-class** \\ \hline DANN (Ganin et al., 2016) & ✗ & 81.9 & 77.7 & 82.84.43 & 81.2 & 29.5 & 65.1 & 28.6 & 51.9 & 54.6 & 82.8 & 7.8 & 57.4 \\ DAN (Long et al., 2015) & ✗ & 87.1 & 63.0 & 76.542.0 & 90.3 & 42.9 & 85.9 & 53.1 & 49.7 & 36.3 & 85.8 & 20.7 & 61.1 \\ ADR (Saito et al., 2018a) & ✗ & 94.2 & 48.5 & 84.07.29 & 90.1 & 74.2 & 92.6 & 72.5 & 80.8 & 61.8 & 82.2 & 28.8 & 73.5 \\ CDAN (Long et al., 2018) & ✗ & 85.2 & 66.9 & 83.05.08 & 84.2 & 74.9 & 88.1 & 74.5 & 83.4 & 76.0 & 81.9 & 38.0 & 73.9 \\ SAFN (Xu et al., 2019a) & ✗ & 93.6 & 61.3 & 84.17.06 & 94.1 & 79.0 & 91.8 & 79.6 & 89.9 & 55.6 & 89.0 & 24.4 & 76.1 \\ SWD (Lee et al., 2019) & ✗ & 90.8 & 82.5 & 81.77.05 & 91.7 & 69.5 & 86.3 & 77.5 & 87.4 & 63.6 & 85.6 & 29.2 & 76.4 \\ MDD (Zhang et al., 2019b) & ✗ & - & - & - & - & - & - & - & - & - & - & - & 74.6 \\ MCC (Lin et al., 2020) & ✗ & 88.7 & 80.3 & 80.71.5 & 90.1 & 93.2 & 85.0 & 71.6 & 89.4 & 73.8 & 85.0 & 36.9 & 78.8 \\ STAR (Lu et al., 2020) & ✗ & 95.0 & 84.0 & 84.6 & 73.0 & 91.6 & 91.8 & 85.9 & 78.4 & 94.4 & 84.7 & 87.0 & 42.2 & 82.7 \\ RWOT (Xu et al., 2020) & ✗ & 95.1 & 80.3 & 83.79.0 & 92.4 & 68.0 & 92.5 & 82.2 & 87.9 & 78.4 & **90.4** & **68.2** & 84.0 \\ \hline \hline Source Only & ✓ & 60.9 & 21.6 & 50.967.6 & 65.8 & 6.3 & 82.2 & 23.2 & 57.3 & 30.6 & 84.6 & 8.0 & 46.6 \\ **+ELR** & ✓ & 95.4 & 45.7 & 89.76.9 & 84.1 & 97.1 & **92.9** & 80.1 & 89.7 & 52.8 & 83.3 & 4.3 & 74.6 \\ \hline SHOT (Liang et al., 2020) & ✓ & 94.3 & 88.5 & 80.157.3 & 93.1 & 94.9 & 80.7 & 80.3 & 91.5 & 89.1 & 86.3 & 58.2 & 82.9 \\ **+ELR** & ✓ & 95.8 & 84.1 & 83.67.9 & 93.9 & **97.6** & 89.2 & 80.1 & 90.6 & 90.4 & 87.2 & 48.2 & 84.1 \\ \hline G-SFDA (Yang et al., 2021b) & ✓ & 96.0 & 87.6 & 85.372.8 & 95.9 & 94.7 & 88.4 & 79.0 & 92.7 & 93.9 & 87.2 & 43.7 & 84.8 \\ **+ELR** & ✓ & **97.3** & 89.1 & **89.8**/**72.9** & **96.9** & 97.5 & 92.2 & **82.5** & **95.8** & **94.5** & 87.3 & 34.5 & **86.4** \\ \hline **NRC** (Yang et al., 2021a) & ✓ & 96.9 & 89.7 & 84.059.8 & 95.9 & 96.6 & 86.5 & 80.9 & 92.8 & 92.6 & 90.2 & 60.2 & 85.4 \\ **+ELR** & ✓ & 97.1 & **89.7** & 82.7 & 62.0 & 96.2 & 97.0 & 87.6 & 81.2 & 93.7 & 94.1 & 90.2 & 58.6 & 85.8 \\ \hline \end{tabular} \end{table} Table 3: Accuracies (%) on VisDA-C (Synthesis \(\rightarrow\) Real) for ResNet101-based methods. Figure 4: Evaluation of label noise methods on SFDA problems. We use source models as an initialization of classifiers trained on target data and also use source models to annotate unlabeled target data. 
Then we treat the target datasets as noisy datasets and use different label noise methods to solve the memorization issue. #### Acknowledgments This work is supported by Natural Sciences and Engineering Research Council of Canada (NSERC), Discovery Grants program.
2309.03968
Common Firm-level Investor Fears: Evidence from Equity Options
We identify a new type of risk, common firm-level investor fears, from commonalities within the cross-sectional distribution of individual stock options. We define firm-level fears that link with upward price movements as good fears, and those relating to downward price movements as bad fears. Such information is different to market fears that we extract from index options. Stocks with high sensitivities to common firm-level investor fears earn lower returns, with investors demanding a higher compensation for exposure to common bad fears relative to common good fears. Risk premium estimates for common bad fears range from -5.63% to -4.92% per annum.
Jozef Barunik, Mattia Bevilacqua, Michael Ellington
2023-09-07T18:58:27
http://arxiv.org/abs/2309.03968v1
# Common Firm-level Investor Fears: Evidence from Equity Options ###### Abstract We identify a new type of risk, common firm-level investor fears, from commonalities within the cross-sectional distribution of individual stock options. We define firm-level fears that link with upward price movements as good fears, and those relating to downward price movements as bad fears. Such information is different to market fears that we extract from index options. Stocks with high sensitivities to common firm-level investor fears earn lower returns, with investors demanding a higher compensation for exposure to common bad fears relative to common good fears. Risk premium estimates for common bad fears range from -5.63% to -4.92% per annum. **JEL**: **Keywords**: \({}^{\rm a}\) Institute of Economic Studies, Charles University, Opletalova 26, 110 00, Prague, Czech Republic. \({}^{\rm b}\) The Czech Academy of Sciences, IITA, Pod Vodarenskou Vezi 4, 182 08, Prague, Czech Republic \({}^{\rm c}\) University of Liverpool Management School, Chatham Building, Chatham Street, L69 7ZH, UK. Email addresses: Jozef Barunik, barunik@fsv.cuni.cz, Mattia Bevilacqua m.bevilacqua@liverpool.ac.uk, Michael Ellington, m.ellington@liverpool.ac.uk We thank Mykola Babiak for invaluable discussions and comments. The support from the Czech Science Foundation under the 19-28231X (EXPRO) project is gratefully acknowledged. **Disclosure Statement:** Jozef Barunik, Mattia Bevilacqua, and Michael Ellington have nothing to disclose. Introduction Option prices contain information regarding uncertainty about future price movements of the underlying asset. Empirical evidence suggests this information is useful for explaining and predicting the cross-section of stock returns (see e.g. Bali and Hovakimian, 2009; Cremers and Weinbaum, 2010; Xing et al., 2010; An et al., 2014; Muravyev et al., 2022). These studies typically rely on extracting implied volatility, as a proxy for uncertainty for future price movements of the underlying asset, from index options as a gauge for investor fears. Notably, such fears are the expectations of holders of aggregate index funds. However, relatively little is known about how uncertainty stemming from firm-level options, or as we use interchangeably firm-level investor fears, affects stock returns. Option prices on individual stocks depend on the total volatility of the stock return which incorporates a firm-level component, as well as a market component (Campbell et al., 2001). As such, uncertainty one infers from a cross-sectional distribution of firms carries distinct and more granular information relative to measures of uncertainty about the state of the aggregate stock market, such as the Chicago Board Options Exchange (CBOE) volatility index (VIX).1 At the same time, investors fear the prospect of future negative returns, bad fears, more than positive ones, good fears (see e.g. Kahneman and Tversky, 2013; Kilic and Shaliastovich, 2019; Bollerslev et al., 2020). Footnote 1: Dew-Becker and Giglio (2023) present a cross-sectional uncertainty proxy from stock options on individual firms and link it to the business cycle. In fact, in many recent models and empirical work, firm-level uncertainty is the driving force (see e.g. Bloom, 2009; Gabaix, 2011; Acemoglu et al., 2012; Herskovic et al., 2020). 
This paper is, to the best of our knowledge, the first to study the information content within firm-level good and bad investor fears using a large cross-section of individual equity options for stock returns. Our main contribution is the discovery of a strong common structure in the cross-section of firm-level (good and bad) investor fears that commands a risk premium. We show, consistent with economic rationale, that stocks with high sensitivities to common firm-level investor fears earn lower returns. Our results indicate that investors demand a higher compensation for exposure to common bad fears relative to common good fears. The risk premium estimates for common bad fears range from -5.63% to -4.92% per annum. We document a strong factor structure in model-free implied variance measures that we define as firm-level investor fears. We extract implied variance measures from out-of-the-money (OTM) call and put option prices written on a large cross-section of individual stocks in a model-free manner as in (Bakshi et al., 1997, 2003). Then, we decompose these into firm-level good and bad fears in a similar manner to Kilic and Shaliastovich (2019); Barunik et al. (2022). Good fears relate to the prospect of upward price movements that we extract from call options. Bad fears relate to the prospect of downward price movements that we extract from put options. We obtain common factors using principal component analysis (PCA). A single factor we extract from firm-level bad (good) fears explains 75.65% (83.46%) of time-variation within the data. Such commonalities within firm-level investor fears are inherently different from market fears that we extract from index options. We use the same model-free approach to compute market fears, and then decompose into good and bad market fears, respectively. The correlation between monthly innovations of common fears and common fears orthogonal to innovations in market fears are in excess of 74%. We also examine rolling correlations between daily innovations of common fears and market fears. These statistics reveal low average correlations throughout our sample, with substantially lower values from 2012 onwards. Notably the correlation between common bad fears and bad market fears is always lower than the correlation between common fears and market fears, and common good fears and good market fears, respectively. Studies do exist on the factor structure in equity options. Engle and Figlewski (2015) model the correlation dynamics of implied volatilities and explores the role of the VIX as a common factor in explaining implied volatilities. Their results imply that investors are able to exploit the correlations among implied volatilities for hedging purposes. Christoffersen et al. (2018) extract factors from equity options for; the short-term level of implied volatilities, the moneyness slope, and the slope of the term-structure. They show that such factors correlate strongly with S&P500 index options. Both studies use constituents that comprise the Dow Jones Industrial Average (DJIA), and their respective samples end in 2009 and 2010. We move beyond this in two ways. First, we use all stocks with available option data. After applying standard filtering (see e.g. Carr and Wu, 2011), we have 526 firms, of which 90% of are large-cap, with the remaining 10% being mid-cap stocks. Second, we focus on the pricing implications of innovations to commonalities within firm level investor fears. 
We exploit the factor structure within firm-level implied variances and extract proxies of aggregate, good and bad common firm-level investor fears. In contrast to the above, our analysis reveals that co-movement within firm-level investor fears represents a distinct source of information from market-level fears. We emphasize the role of a factor structure present in a large cross-section of option prices that explains the cross-section of stock returns. Our results highlight that investors require compensation for exposure to common bad fears. Differences in firm beta's on common bad fears strongly associate with differences in expected returns. The top common bad fears beta quintile earns average risk-adjusted returns 5.16% per annum lower than firms in the bottom quin tile. We show that risk-adjusted spread portfolio returns maintain statistical and economic significance after controlling for innovations to the VIX, market fears, and an array of firm characteristics in the spirit of Ang et al. (2006b). Fama-MacBeth regressions estimate common bad fears risk premiums that range from -5.63% to -4.92% per annum, and further substantiates our portfolio sorts. The risk premium estimates of common bad fears from portfolios that control for market fears and other firm characteristics are statistically and economically significant at around -3% per annum. We show that the Fama and French (2015) five factors, momentum (Jegadeesh and Titman, 1993), common idiosyncratic volatility (Herskovic et al., 2016), liquidity (Pastor and Stambaugh, 2003), and the variance risk premium (Carr and Wu, 2009) do not subsume the common bad fears risk premium. These results also hold when considering a battery of anomaly portfolios as test assets. We also show that common firm-level investor fears risk premia are present using alternative factor definitions and the Giglio and Xiu (2021) three-pass regression procedure. Our finding that common good and common bad fears bear different risk premiums resonates well with Farago and Tedongap (2018) and Bollerslev et al. (2020). The former develop a model in which there exists a systematic risk factor explicitly relating to bad downside market volatility. They show that a good minus bad realized semivariance measure contains information for the cross-section of stock returns.2 The latter explore the cross-sectional pricing of good and bad realized semivariance measures that relate to positive and negative high frequency price increments. Their results show that firms with high good minus bad realized volatility earn lower returns. A portfolio of assets in the top quintile of good minus bad volatility generating a return 15% per annum lower than the corresponding bottom quintile portfolio.By contrast, we study the common components within good and bad implied volatilities from individual equity options for stock returns. Our results on common firm-level fears are robust to controlling for various measures of market fears and aggregate volatility. Footnote 2: We also consider commonalities among implied variance spreads and show that there is a negative risk premium similar in magnitude to those we report in the main text. This measure aligns more closely with Farago and Tedongap (2018). These results are available upon request. This paper also relates well with those showing that option prices contain predictive information about stock returns (Bali and Hovakimian, 2009; Cremers and Weinbaum, 2010; Xing et al., 2010). 
These studies use deviations between call and put implied volatilities written on index options as proxies for jump risk, or risk neutral skewness. We contribute to these studies by examining the pricing implications of co-movement among implied volatilities of individual equity options. We decompose firm-level implied volatilities into call and put components in order to investigate potential differences in the risk premiums on underlying stock returns. Our asset pricing exercises document a negative and significant risk premium to the common component of firm-level implied volatilities linking to put options, common bad fears. We show that sorting on current loadings to common bad fears generates significant spreads in returns over the next month. Therefore our results are of practical relevance since they reflect an ex ante implementable strategy that one can use to construct hedge portfolios. Finally, we connect with those studying volatility risk. Ang et al. (2006b) study firm exposures to innovations in the VIX index and show that a negative risk premium is present after controlling for an array of other risk factors and firm characteristics. Cremers et al. (2015) construct volatility and jump risk proxies using data on index option futures and show that volatility and jump risk bear different risk premiums; both of which are negative. Herskovic et al. (2016) considers co-movement in idiosyncratic return volatility and shows that the spread portfolio loading on common idiosyncratic return volatility earns an average return of -5.40% per annum. We investigate the pricing implications stemming from commonalities among ex-ante measures volatility using individual equity options. Our focus is on common firm-level investor fears and the decomposition into good, and bad fears relating to implied volatilities from call and put options, respectively. We show that exposure to common bad fears is where such risk premium is most prevalent. Intuitively this makes sense because investors require compensation for adverse changes to investment opportunities and therefore should accept lower returns in equilibrium for assets that hedge against such changes. The remainder of this paper proceeds as follows. Section 2 outlines our theoretical background, and how we measure firm-level (good and bad) investor fears. Section 3 describes the data and the common components within investor fears. Sections 4 and 5 present our empirical results and robustness analysis, respectively. Finally, Section 6 concludes. ## 2 Theoretical Background, Investor Beliefs, and Firm-Level Investor Fears In this Section, we motivate and outline our approach to measuring investor fears, which we proxy from implied variances, using firm-level equity option contracts. First, we motivate our approach by linking it to theory and then discuss investor fears at a general level. Next, we outline how we compute firm-level measures of implied variance and the data we use. Then, we provide evidence in favour of a common factor within such measures and investigate whether this common factor contains information distinct from what we call market fear; the implied variances that we obtain from S&P500 index options. ### Theoretical Background Here we outline the economic rationale for why one should expect negative risk prices for exposure to commonalities within firm-level investor fears. We first provide motivation regarding volatility at an aggregate, or market, level. 
Then, we explain possible mechanisms in the context of existing studies using firm-level volatilities. Theory provides a several reasons why investors require a premium for exposure to market volatility risk. Merton (1973) uses an intertemporal capital asset pricing model (ICAPM) to show that investors reduce current consumption to increase precautionary savings in light of uncertainty around market returns. Market volatility hence qualifies as a state variable in traditional multifactor pricing models where risk-averse agents demand stocks to hedge against the risk of deteriorating investment opportunities. This increases the prices of these assets and lowers expected returns. Campbell et al. (2018) extends the ICAPM framework of Campbell (1993) by allowing for stochastic volatility. They confirm that returns that positively covary with a variable forecasting future market volatility have low expected returns in equilibrium (Campbell, 1996, also documents this finding). Farago and Tedongap (2018) deduce an equilibrium asset pricing model with generalized disappointment aversion that provides an explanation for why such investors price downside volatility more dearly than upside volatility. Specifically, these investors care more about downside losses than upside gains. In doing so, they assign larger weights to outcomes that realize less than the investor's certainty equivalent. This framework yields a systematic risk factor that prices downside market volatility risk, which they confirm empirically. Regarding commonalities within firm-level volatilities, Herskovic et al. (2016) develop an incomplete markets model with a common idiosyncratic volatility factor driving dispersion in household income growth and also residual stock return volatility. Investors require compensation for changes in current and future cross-sectional consumption growth distribution. Heterogeneous exposures to shocks in common idiosyncratic volatility are the sole driver for differences in the risk premium across stocks. Positive shocks to common idiosyncratic volatility cause loadings to increase thereby lowering expected returns in equilibrium. Martin and Wagner (2019) develop a formula for individual stock returns using risk-neutral variances. They express expected stock returns in terms of the risk-neutral market variance, the risk-neutral variance of the stock, and the value-weighted average of individual stock risk-neutral variances. Although the model has no free parameters, it states a negative relationship between the expected return of the stock and the weighted average of risk-neutral individual stock variances. This term can be thought of as comparable to the common component we extract from firm-level fears and helps provide further rationale for negative risk prices. ### Investor Beliefs Market participants face uncertainty regarding future price movements. The VIX index is a popular gauge of investor fear which tracks market expectations of short-term future price uncertainty using the implied volatilities of S&P500 equity options. However, following Kahneman and Tversky (2013)'s prospect theory investors care differently about upside gains and downside losses. Those who face downside risks require a relative downside risk premium (Ang et al., 2006a) and those who face upside gains may be willing to pay for such an outcome (Breckenfelder and Tedongap, 2012). Kilic and Shaliastovich (2019), Barunik et al. 
(2022) and others provide insight on how one can decompose implied volatility (fear) into good components linking to the prospect of upward price movements using call options (good fear), and bad components linking to the prospect of downward price movements using put options (bad fear). We think of fears as a function of outcomes. Naturally, it is important to be able to measure expectations pertinent to positive and negative outcomes. We use the terms 'good fear" and "bad fear" to refer to these two complementary situations. Beliefs linking to the prospect of good (bad) states - good (bad) fears - reflect the situation where an investor fears uncertainty about price fluctuations, but the uncertainty itself associates with a positive (negative) outcome. The decomposition of implied volatilities into good and bad components represent proxies for good and bad fears. As we discuss above, most of the literature establishes well investor fears, as well as good and bad investor fears, from a market perspective by using information from index options. Focusing on firm-level investor fears is important for a number of reasons. First, many investors have substantial holdings of individual stocks. Such investors may fail to diversify in an optimal manner or are subject to restrictions by corporate compensation policies, and thus face considerable exposure to firm-level uncertainty regarding future price movements (Campbell et al., 2001). Second, the ability to diversify portfolios containing a subset of large stocks depends on the level of firm-level uncertainty around future price movements of stocks that comprise these portfolios. Third traders who seek to exploit mispricing, or hedge adverse outcomes for stocks within their portfolios, using options face firm-level uncertainty regarding future price movements to a larger degree than they do uncertainty around market movements. ### Measuring Firm-Level Investor Fears We use the methods in Bakshi and Madan (2000) and Bakshi et al. (2003) to extract variance measures from the cross-section of option prices in a model-free manner. We consider the price of a variance contract that pays the squared logarithm of the return at time \(t+1\), which in our case corresponds to a fixed horizon of the next 30 days. Let \(s_{i,t}\) denote the natural logarithm of the price \(S_{i,t}\) of the \(i\)th asset at time \(t\). The payoff of the variance contract is \(r_{i,t+1}^{2}=(s_{i,t+1}-s_{i,t})^{2}\) and we define the total implied variance, \(\sigma_{i,t}^{2}\), as the price of the contract: \[\sigma_{i,t}^{2}\equiv e^{-r_{t}^{f}}\mathbb{E}_{t}^{Q}\left[r_{i,t+1}^{2}\right] \tag{1}\] where \(\mathbb{E}_{t}^{Q}\) is the expectation operator under the risk-neutral measure conditional on time \(t\) information and \(r_{t}^{f}\) is the risk-free rate. Kilic and Shaliastovich (2019) and Barunik et al. (2022) show one can decompose equation (1) into two components that relate to the positive and negative returns of the variance contract, respectively. In the absence of arbitrage, the sum of these components is the total implied variance. One obtains the prices of these components from OTM call and put options. Implied variance measures the expectations of fluctuations to the underlying asset over a given horizon. This reflects investors' fears, which directly relate to uncertainty about the future price movements. Furthermore, Bakshi and Madan (2000) and Bakshi et al. 
(2003) show that one can compute \(\sigma_{i,t}^{2}\) from the prices of OTM call and put options: \[\sigma_{i,t}^{2}=\underbrace{\int_{S_{i,t}}^{\infty}\frac{2(1-\log(K/S_{i,t}))}{K^{2}}C(t,t+1,K)dK}_{\sigma_{i,t}^{2,+}}+\underbrace{\int_{0}^{S_{i,t}}\frac{2(1+\log(S_{i,t}/K))}{K^{2}}P(t,t+1,K)dK}_{\sigma_{i,t}^{2,-}}, \tag{2}\] where \(C(\,\cdot\,)\) and \(P(\,\cdot\,)\) denote the prices at time \(t\) of a call and put contract with a time to expiration of one period and a strike price of \(K\). Call option prices reflect a good state for the stock, while the prices of a put option reflect a bad state for the stock. The two states, most of the time, relate to contrasting investors' beliefs and future expectations (Buraschi and Jiltsov, 2006). OTM puts usually link with hedging and insurance against equity market drops (Han, 2008), while OTM calls more commonly associate with optimistic beliefs (Buraschi and Jiltsov, 2006). Corresponding to an intuitive measure of expectations of good and bad events for the stock, the payoff from the variance contract can be written as in Kilic and Shaliastovich (2019) and Barunik et al. (2022): \[\sigma_{i,t}^{2}\equiv\underbrace{e^{-r_{t}^{f}}\mathbb{E}_{t}^{Q}\left[r_{i,t+1}^{2}\mathbb{I}_{\{r_{i,t+1}>0\}}\right]}_{\sigma_{i,t}^{2,+}}+\underbrace{e^{-r_{t}^{f}}\mathbb{E}_{t}^{Q}\left[r_{i,t+1}^{2}\mathbb{I}_{\{r_{i,t+1}\leq 0\}}\right]}_{\sigma_{i,t}^{2,-}} \tag{3}\] Intuitively, good and bad components of the payoff add to the total, and we can obtain the prices of these components in a model-free manner from a bundle of option prices upon a discretization of equation (2); the Appendix provides details of the procedure we use. The total implied variance is the weighted sum of the option prices, and its components are identifiable by claims that have payoffs relating to the sign of the realized return. Good implied variance is identifiable from call options that pay off when the realized return is positive, and bad implied variance is identifiable from put options that pay off when the realized return is negative. Consequently, the first term in Equation (2) refers to the positive component and the second term to the negative component of the payoff of the volatility contract, where \(\sigma_{i,t}^{2,+}\) is the good implied variance and \(\sigma_{i,t}^{2,-}\) is the bad implied variance. We define \(\sigma_{i,t}^{2}\) in Equation (2) as firm-level fears that proxy investors' expectations of future price movements. We refer to \(\sigma_{i,t}^{2,+}\) in Equation (2) as firm-level good fears, and \(\sigma_{i,t}^{2,-}\) in Equation (2) as firm-level bad fears. Firm-level good fears capture investors' expectations of future upward price movements. Firm-level bad fears capture investors' expectations of future downward price movements. Importantly, we distinguish whether the information content within firm-level investor fears differs from investor market fears. To do this, we infer investor market fears from index options using the same approach as described here. The only difference is that we replace firm-level options with S&P500 index options. Throughout this paper we use **MF**, **MF\({}^{+}\)**, **MF\({}^{-}\)** to denote market fears, good market fears and bad market fears.
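To make the decomposition in equations (2)-(3) concrete, the sketch below discretizes the two integrals with a trapezoidal rule over a grid of OTM mid-quotes. It is a minimal illustration rather than our production code: the function name, the input layout (one underlying, one roughly 30-day expiry) and the use of the spot price as the good/bad cut-off are assumptions made for exposition.

```python
import numpy as np

def good_bad_implied_variance(spot, strikes, call_mid, put_mid):
    """Discretize equation (2): the good component integrates OTM call prices
    over strikes above the spot, the bad component integrates OTM put prices
    over strikes below the spot; their sum is the total implied variance."""
    K = np.asarray(strikes, dtype=float)
    C = np.asarray(call_mid, dtype=float)
    P = np.asarray(put_mid, dtype=float)

    # sigma^{2,+}: calls with K > S, weighted by 2*(1 - log(K/S)) / K^2
    up = (K > spot) & np.isfinite(C)
    good = np.trapz(2.0 * (1.0 - np.log(K[up] / spot)) / K[up] ** 2 * C[up], K[up])

    # sigma^{2,-}: puts with K < S, weighted by 2*(1 + log(S/K)) / K^2
    down = (K < spot) & np.isfinite(P)
    bad = np.trapz(2.0 * (1.0 + np.log(spot / K[down])) / K[down] ** 2 * P[down], K[down])

    return good, bad, good + bad
```

In the paper's notation the first return value corresponds to \(\sigma_{i,t}^{2,+}\) and the second to \(\sigma_{i,t}^{2,-}\), computed separately for every firm-day with usable quotes.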
## 3 Common Firm-level Investor Fears

### Firm-level Option Data

We compute firm-level implied variances using daily data from OptionMetrics over the sample January 03, 2000 to December 31, 2020. We include all stocks from the time of their IPO and listing with good options data coverage (i.e., we require stocks to have more than 5 years of continuous options data). We exclude stocks due to: i) bankruptcy; ii) delisting; and iii) mergers and acquisitions.4 Footnote 4: Examples of bankruptcies are General Motors, Lehman Brothers and Merrill Lynch; examples of M&As are Raytheon and United Technologies, Dow Chemical and DuPont, and Walt Disney Company and 21st Century Fox. We apply common options filtering rules to further exclude stock options with: i) missing deltas; ii) missing implied volatility; iii) bid prices equal to 0; iv) nil volume; v) nil open interest; vi) negative bid-ask spread; and that vii) violate arbitrage conditions (see, e.g. Bakshi et al., 2003; Carr and Wu, 2011; Christoffersen et al., 2012). Following these filtering criteria, we then remove options with fewer than 4 contracts on a specific day and are left with 526 firms.5 Approximately 90% of these firms are large-cap, with the remaining 10% being mid-cap stocks. Most stocks in our sample appear as a constituent of the S&P500 throughout our sample. Other stocks come from the Russell 1000 for which there is sufficient data coverage. To proxy market investor fears, we use the same filtering criteria for data on S&P500 index options. Footnote 5: Most of these data filters are common in the option pricing literature. The volume and open interest constraints ensure that there is genuine interest in the option contract. Options that are close to maturity are removed (see, e.g. Carr and Wu, 2011; Christoffersen et al., 2012, among others). We remove options with a negative bid-ask spread and that violate no-arbitrage constraints, as these option prices are invalid and inconsistent with theory. Finally, we remove ITM contracts, as they tend to be more illiquid than OTM and at-the-money options (e.g. Christoffersen et al., 2012). Specifically, each day \(t\), our data sample contains daily stock options observations for which we are able to calculate values of the Bakshi et al. (2003) implied variance (and semi-variance) measures. We consider call and put option prices with maturity around 30 days, using all available strikes for each option. We keep implied variance measures with maturities between 23 and 37 days as a proxy for investor expectations of one-month-ahead, \(t+1\), fluctuations in the underlying asset. Table 1 reports the time-series averages of the cross-sectional means and standard deviations for each of our implied variance measures, as well as the average pairwise covariances. \begin{table} \begin{tabular}{l c c c} \hline \hline & \(\sigma_{i,t}^{2}\) & \(\sigma_{i,t}^{2,+}\) & \(\sigma_{i,t}^{2,-}\) \\ \hline Mean & 0.187 & 0.067 & 0.120 \\ Std & 0.182 & 0.055 & 0.129 \\ Ave. Pairwise covariance & 0.028 & 0.002 & 0.015 \\ \hline \hline \end{tabular} \end{table} Table 1: **Descriptive Statistics of Firm-level Implied Variances** Notes: This table shows the time-series averages of daily cross-sectional means (Mean) and standard deviations (Std) for firm-level implied variance measures. \(\sigma_{i,t}^{2}\) refers to total implied variance, while \(\sigma_{i,t}^{2,+}\), \(\sigma_{i,t}^{2,-}\) refer to good and bad implied variances respectively.
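The summary statistics reported in Table 1 can be reproduced from a date-by-firm panel of implied variances along the following lines; the DataFrame layout and variable names are assumptions for illustration.

```python
import numpy as np
import pandas as pd

def table1_stats(iv_panel: pd.DataFrame) -> pd.Series:
    """Time-series averages of the daily cross-sectional mean and standard
    deviation, plus the average pairwise covariance across firms.
    `iv_panel` is indexed by date with one column per firm."""
    cs_mean = iv_panel.mean(axis=1).mean()
    cs_std = iv_panel.std(axis=1).mean()

    cov = iv_panel.cov().values                     # firm-by-firm covariance matrix
    n = cov.shape[0]
    avg_pairwise = (cov.sum() - np.trace(cov)) / (n * (n - 1))

    return pd.Series({"Mean": cs_mean, "Std": cs_std,
                      "Ave. Pairwise covariance": avg_pairwise})
```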
We can see that bad implied variance has a higher time-series average for the cross-sectional mean and standard deviation relative to good implied variance, and across all measures, there is a positive average pairwise covariance. Table 1 indicates there is positive co-movement in firm-level investor fears.

### The Common Factor in Firm-level Investor Fears

We use principal components analysis (PCA) to estimate the common factor in our implied variance measures. The common factor we extract from firm-level total implied variances, **CF**, we refer to as common firm-level investor fears, or common fears for brevity. The common factor we extract from good implied variances, **CF\({}^{+}\)**, we refer to as good common firm-level investor fears, or common good fears, and that from bad implied variances, **CF\({}^{-}\)**, we refer to as bad common firm-level investor fears, or common bad fears. We adopt the expectation-maximization algorithm in Stock and Watson (2002a,b) and McCracken and Ng (2016), using a 252-day window that rolls through our sample in order to eliminate any look-ahead bias for our pricing exercise. Figure 1 shows the 5% and 95% quantiles of the daily cross-sectional distributions of our firm-level implied variance measures and the common factors we extract from firm-level implied variance measures using PCA. The left hand side plot shows firm-level implied variance, and the middle and right hand side plots show firm-level good and bad implied variances respectively. These plots show that firm-level implied variances obey a strong factor structure. Table 2 reports descriptive statistics concerning the proportion of variation the common factor explains for our implied variance measures. We also report the proportion of variation the common factors explain for the full sample January 03, 2000 - December 31, 2020. On average, common fears explains 87.1% of the variation in firm-level implied variances; with common good fears and common bad fears explaining 90.5% and 83.9% of the variation within firm-level good and bad implied variances respectively. The proportions of variance explained by these common factors range from 69.7% to 91%. The standard deviations of the explained variation lie between 3.33% and 6.25%. Comparing the averages and medians in Panel A to the full sample estimates in Panel B shows that the rolling sample estimates, pertinent to our pricing exercise, explain a higher proportion of variation than the full sample estimates. This provides substantial evidence in favour of a common factor driving firm-level investor fears. \begin{table} \begin{tabular}{l c c c} \hline \hline **A:** Rolling Sample & **CF** & **CF\({}^{+}\)** & **CF\({}^{-}\)** \\ \hline Mean (\%) & 87.13 & 90.51 & 83.94 \\ Median (\%) & 88.75 & 90.98 & 85.68 \\ Min (\%) & 75.56 & 82.04 & 69.68 \\ Max (\%) & 95.06 & 95.51 & 93.89 \\ Std (\%) & 4.86 & 3.33 & 6.25 \\ \hline \hline **B:** Full Sample & **CF** & **CF\({}^{+}\)** & **CF\({}^{-}\)** \\ \hline \% variation & 77.75 & 83.46 & 75.65 \\ \hline \hline \end{tabular} \end{table} Table 2: **Proportion of Variation Explained by Common Factor from Firm-level Implied Variances** Notes: This table shows the percentage of variation that the common factor we extract from firm-level implied variances explains in the dataset. Panel A shows the time-series average, median, minimum (Min), maximum (Max), and standard deviation (Std) for the common factors we extract using a 252-day window rolling through the sample January 03, 2000 to December 31, 2020 daily. Panel B shows the percentage variation the common factor explains using the full sample January 03, 2000 to December 31, 2020.
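A minimal sketch of the rolling factor extraction described above is given below. It uses scikit-learn's PCA on standardized, complete histories purely for illustration; the paper's estimates instead rely on the expectation-maximization algorithm of Stock and Watson (2002a,b), which handles unbalanced panels, while the 252-day window length matches the text.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

def rolling_common_fears(iv_panel: pd.DataFrame, window: int = 252) -> pd.Series:
    """First principal component of firm-level implied variances,
    re-estimated on a rolling window so that no future data enter
    the factor value assigned to the window's last day."""
    values = {}
    for end in range(window, len(iv_panel) + 1):
        block = iv_panel.iloc[end - window:end].dropna(axis=1)   # firms with full history
        z = (block - block.mean()) / block.std()                  # standardize each firm
        pc1 = PCA(n_components=1).fit_transform(z.values)[:, 0]
        if np.corrcoef(pc1, block.mean(axis=1))[0, 1] < 0:        # fix the sign convention
            pc1 = -pc1
        values[block.index[-1]] = pc1[-1]
    return pd.Series(values, name="CF")
```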
Figure 1: **Firm-level Implied Variances** Notes: This figure plots the daily 5% and 95% percentiles of the cross-sectional distribution for firm-level implied variances from January 03, 2000 to December 31, 2020. The left hand side plot reports firm-level implied variance, the middle plot reports firm-level good implied variances (those we infer from call options), the right hand side plot shows firm-level bad implied variances (those we infer from put options). The solid line is the common factor we extract as the first principal component.

### Is the Information within Common Firm-level Investor Fears different to Investor Market Fears?

Here we investigate whether common firm-level fears contain different information to market fears. Figure 2 shows monthly innovations to the common factors we extract from firm-level implied variances using the entire sample. Panel A shows innovations to common fears, and Panels B and C show innovations to common good fears and common bad fears respectively. Alongside these plots, we also report the innovations orthogonal to those from the corresponding market fears measures we compute from index options. Orthogonal innovations are the residuals we compute by regressing common fears on market fears. The correlation between the plots in Panel A is 74%, and in Panels B and C the correlations are 76% and 79% respectively.

Figure 2: **Monthly Innovations to Common Factors of Firm-level Implied Variances**

To further explore differences, we plot the correlations between daily innovations to common firm-level fears and the corresponding market fears using a 1-year rolling window in Figure 3. There are three takeaway points from these plots. First, correlations vary substantially throughout the sample. These range from highs of around 0.8 for common good fears and good market fears, to lows of -0.05 for common bad fears and bad market fears. Second, correlations between common firm-level fears and market fears appear to rise during periods of financial and economic turbulence, as well as bear markets. Notably, correlations are substantially lower from 2012 onwards. Finally, the correlation between innovations to common bad fears and bad market fears is almost always lower than that between common fears and market fears, and that between common good fears and good market fears. Overall, this implies that shocks to common fears in the firm-level domain are distinct from their corresponding fears at the market index level. In particular, innovations to common bad fears are notably different to bad market fears. We now move on to pricing exercises to understand the compensation investors demand for bearing such exposure.

Figure 3: **Correlations between Common Fears and Market Fears** Notes: This figure plots the daily correlations between Common Fears and Market Fears from January 03, 2001 to December 31, 2020. The former is the common factor in firm-level implied variances and the latter is the implied variance from S&P500 index options. We compute correlations using a rolling 1-year window. The solid line shows the correlations between Common Fears and Market Fears (total implied variances), the line with triangle markers corresponds to the correlations between Common Good Fears and Good Market Fears, and the line with o markers corresponds to the correlations between Common Bad Fears and Bad Market Fears.
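For reference, the orthogonal innovations and rolling correlations discussed above can be computed as in the sketch below; treating daily first differences as innovations and the 252-day window are illustrative assumptions.

```python
import pandas as pd
import statsmodels.api as sm

def orthogonalize_and_correlate(common_fears: pd.Series, market_fears: pd.Series):
    """Orthogonal innovations are residuals from regressing common-fear
    innovations on market-fear innovations; also returns the 1-year
    rolling correlation between the two innovation series."""
    d_cf = common_fears.diff()
    d_mf = market_fears.diff()
    both = pd.concat({"cf": d_cf, "mf": d_mf}, axis=1).dropna()

    fit = sm.OLS(both["cf"], sm.add_constant(both["mf"])).fit()
    orthogonal = pd.Series(fit.resid, index=both.index, name="CF_orthogonal")

    rolling_corr = both["cf"].rolling(252).corr(both["mf"])
    return orthogonal, rolling_corr
```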
## 4 Pricing Common Fears

To examine the pricing implications for our measures of common fears, we obtain data on stock prices from the Center for Research in Security Prices (CRSP). We take all common stocks listed on the New York Stock Exchange (NYSE), NASDAQ and AMEX. To construct our sample, we adopt the following filtering criteria. First, we omit stocks in month \(t\)+1 if their market capitalization at the end of month \(t\) is in the bottom 30% of the cross-sectional distribution at the end of month \(t\). Next, we omit stocks in month \(t\)+1 whose price at the end of month \(t\) is less than $5. Then, we winsorize by removing stocks in month \(t\)+1 if their returns lie in the top and bottom 5% of returns at the end of month \(t\). As we explain below, our pricing exercise uses rolling regressions using 1 year of data. Therefore we require stocks to have this data available to estimate loadings. This means that the average number of stocks in our sample is 1045 with a minimum of 883 and a maximum of 1385.6 Footnote 6: We assess the robustness of our findings to these filtering criteria by relaxing the stringencies. In this case, we omit stocks in month \(t\)+1 if their market capitalization at the end of month \(t\) is in the bottom 20% of the cross-sectional distribution at the end of month \(t\). Next, we omit stocks in month \(t\)+1 whose price at the end of month \(t\) is less than $1. Then, we winsorize by removing stocks in month \(t\)+1 if their returns lie in the top and bottom 5% of returns at the end of month \(t\). The average number of stocks in this sample is 1360, and the minimum and maximum number of stocks are 1118 and 1937, respectively. These results are qualitatively similar to those we report in the main text and are available upon request. Our pricing exercises are similar to Ang et al. (2006b), Cremers et al. (2015), and Herskovic et al. (2016) who follow standard approaches within the asset pricing literature; namely portfolio sorts and Fama-MacBeth regressions. For portfolio sorts, we obtain factor loadings from daily data using a 1-year rolling window from regressions of the \(i\)th stock's excess return \[r_{i,t} = \beta_{i}^{0}+\beta_{i}^{\Delta\mathbf{CF}}\Delta\mathbf{CF}_{t}+\beta_{i}^{\Delta\text{VIX}}\Delta\text{VIX}_{t}+\epsilon_{i,t} \tag{4}\] \[r_{i,t} = \beta_{i}^{0}+\beta_{i}^{\Delta\mathbf{CF}^{+}}\Delta\mathbf{CF}^{+}_{t}+\beta_{i}^{\Delta\text{VIX}}\Delta\text{VIX}_{t}+\epsilon_{i,t} \tag{5}\] \[r_{i,t} = \beta_{i}^{0}+\beta_{i}^{\Delta\mathbf{CF}^{-}}\Delta\mathbf{CF}^{-}_{t}+\beta_{i}^{\Delta\text{VIX}}\Delta\text{VIX}_{t}+\epsilon_{i,t} \tag{6}\] in which \(\Delta\mathbf{CF}_{t}\), \(\Delta\mathbf{CF}^{+}_{t}\), and \(\Delta\mathbf{CF}^{-}_{t}\) are the respective innovations to common fears, common good fears and common bad fears; \(\Delta\text{VIX}_{t}\) are innovations to the VIX index.
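A minimal sketch of the rolling regressions in equation (4) is given below; the one-year (252-day) window follows the text, while the data layout and the plain OLS implementation are assumptions for illustration (equations (5) and (6) only swap the common-fears regressor).

```python
import numpy as np
import pandas as pd

def rolling_cf_loadings(excess_ret: pd.Series, d_cf: pd.Series, d_vix: pd.Series,
                        window: int = 252) -> pd.Series:
    """Estimate beta_i^{dCF} from r_{i,t} = b0 + b1 dCF_t + b2 dVIX_t + e_t
    on a rolling window of daily data; the loading is dated at each window end."""
    df = pd.concat({"r": excess_ret, "dCF": d_cf, "dVIX": d_vix}, axis=1).dropna()
    loadings = {}
    for end in range(window, len(df) + 1):
        w = df.iloc[end - window:end]
        X = np.column_stack([np.ones(len(w)), w["dCF"].values, w["dVIX"].values])
        coef = np.linalg.lstsq(X, w["r"].values, rcond=None)[0]
        loadings[w.index[-1]] = coef[1]              # the common-fears loading
    return pd.Series(loadings, name="beta_dCF")
```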
We also consider replacing \(\Delta\text{VIX}_{t}\) with market fears, \(\Delta\mathbf{MF}\), \(\Delta\mathbf{MF}^{+}\), \(\Delta\mathbf{MF}^{-}\), that we infer from S&P500 index options in a similar manner to the firm-level implied variances (similar to Herskovic et al., 2016).7 Footnote 7: We also consider replacing \(\Delta\text{VIX}_{t}\) with the market risk premium, market capitalization, and trading volume in the spirit of Ang et al. (2006b); Cremers et al. (2015). We report these results in the Appendix. Using these loadings, we consider value-weighted portfolio sorts. For single portfolio sorts, at the end of month \(t\), we sort stocks into quintile portfolios from their respective loadings on \(\beta_{i}^{\Delta\mathbf{CF}}\), \(\beta_{i}^{\Delta\mathbf{CF}^{+}}\), \(\beta_{i}^{\Delta\mathbf{CF}^{-}}\), compute returns over the next month, \(t+1\), and repeat this process. For double sorts that control for a firm characteristic, we first sort stocks into quintiles using the characteristic of interest, and then on one of our common fears risk proxies, \(\beta_{i}^{\Delta\mathbf{CF}}\), \(\beta_{i}^{\Delta\mathbf{CF}^{+}}\), \(\beta_{i}^{\Delta\mathbf{CF}^{-}}\) (see e.g. Ang et al., 2006; Herskovic et al., 2016); this neutralizes the portfolios with respect to the characteristic of interest. Along with the expected excess returns for quintile portfolios, we also report risk-adjusted returns which are the alphas from the Fama-French 5 factor model, \(\alpha^{\mathrm{FF5}}\), and the alphas from the Fama-French 5-factor model that accounts for the momentum factor.8 Reporting risk-adjusted returns allows us to control for other factors known to affect stock returns. Footnote 8: These data, along with additional test assets, are from Kenneth French’s Data Library. Table 3 shows results from single portfolio sorts using loadings on common fears in Panel A, common good fears in Panel B, and common bad fears in Panel C. We report the expected excess return and risk-adjusted returns relative to the respective Fama-French 5 factor model and the Fama-French 5-factor model plus the momentum factor, for quintile portfolios, and the long-short portfolio that buys the portfolio with high loadings (portfolio 5) and sells the portfolio with low loadings (portfolio 1). In general, and consistent with economic rationale, excess and risk-adjusted returns are almost monotonically decreasing. However, it is only when looking at portfolio sorts on common bad fears that the spread portfolio excess and risk-adjusted returns are statistically significant. The expected excess return is -0.53% per month which corresponds to an annualized return of -6.36%. Looking at alphas from the multi-factor pricing models, this portfolio earns an economically meaningful annualized return of -5.04% from the Fama-French 5 factor model, and -5.15% when adding the momentum factor. Table 4 reports analogous results to those in Table 3 that control for innovations to the VIX index, \(\Delta\)VIX. We can see that excess returns monotonically decrease as loadings to common fears increase, and risk-adjusted returns are almost monotonically decreasing. A similar story emerges here in that the spread portfolios are statistically significant for common bad fears.
The economic significance for risk-adjusted returns for the spread portfolio sorting on common bad fears reduces to -0.21% per month (-2.52% annualized) and -0.23% (-2.76% annualized) relative to the Fama-French 5 factor model and Fama-French 5 factor model plus momentum respectively. In Table 5 we report portfolio sorts on common fears loadings that control for corresponding market fears. Here, both excess returns and risk-adjusted returns decrease mono \begin{table} \begin{tabular}{l r r r r r r} \hline \hline & low \(\beta^{\Delta\mathbf{CF}}\) & \multicolumn{5}{c}{High \(\beta^{\Delta\mathbf{CF}}\)} \\ \hline **A:** Common Fears & 1 & 2 & 3 & 4 & 5 & 5-1 \\ \hline mean (\%) & 1.14 & 1.11 & 0.90 & 0.96 & 0.79 & -0.35 \\ \(t\)-stat & 3.29 & 3.61 & 3.29 & 3.70 & 3.44 & -1.74 \\ \(\alpha^{\mathrm{FF5}}\) & 0.32 & 0.41 & 0.25 & 0.37 & 0.19 & -0.13 \\ \(t\)-stat & 4.42 & 2.93 & 2.41 & 2.66 & 1.74 & -0.90 \\ \(\alpha^{\mathrm{FF5}+\mathrm{MOM}}\) & 0.32 & 0.40 & 0.24 & 0.36 & 0.18 & -0.15 \\ \(t\)-stat & 4.38 & 2.77 & 2.40 & 2.58 & 1.74 & -1.02 \\ \hline \hline **B:** Common Good Fears & 1 & 2 & 3 & 4 & 5 & 5-1 \\ \hline mean (\%) & 1.13 & 0.99 & 1.20 & 0.79 & 0.76 & -0.37 \\ \(t\)-stat & 3.16 & 3.11 & 4.27 & 3.01 & 3.17 & -1.53 \\ \(\alpha^{\mathrm{FF5}}\) & 0.35 & 0.23 & 0.56 & 0.27 & 0.14 & -0.21 \\ \(t\)-stat & 2.91 & 3.71 & 3.87 & 1.99 & 0.87 & -0.92 \\ \(\alpha^{\mathrm{FF5}+\mathrm{MOM}}\) & 0.35 & 0.23 & 0.55 & 0.27 & 0.12 & -0.23 \\ \(t\)-stat & 3.08 & 3.80 & 3.71 & 1.92 & 0.84 & -1.09 \\ \hline \hline **C:** Common Bad Fears & 1 & 2 & 3 & 4 & 5 & 5-1 \\ \hline mean (\%) & 1.28 & 0.92 & 1.00 & 0.91 & 0.76 & **-0.53** \\ \(t\)-stat & 3.87 & 3.01 & 3.79 & 3.47 & 3.13 & **-2.82** \\ \(\alpha^{\mathrm{FF5}}\) & 0.55 & 0.16 & 0.33 & 0.36 & 0.13 & **-0.42** \\ \(t\)-stat & 3.29 & 1.63 & 3.24 & 2.47 & 1.59 & **-2.32** \\ \(\alpha^{\mathrm{FF5}+\mathrm{MOM}}\) & 0.54 & 0.16 & 0.32 & 0.35 & 0.12 & **-0.43** \\ \(t\)-stat & 3.22 & 1.54 & 3.24 & 2.42 & 1.48 & **-2.46** \\ \hline \hline \end{tabular} \end{table} Table 3: **Single Portfolio Sorts from Loadings on Common Fears, Common Good Fears, and Common Bad Fears** Notes: This table shows value-weighted portfolios that we sort on loadings to: i) common fears in Panel A; ii) common good fears in Panel B; and iii) common bad fears in Panel C. In each Panel, we report monthly excess returns for quintile portfolios and the long-short portfolio that goes long the portfolio of stocks with a high loading on common fears and short the portfolio of stocks with low loadings to common fears. We also report the risk adjusted returns from the Fama-French 5 factor model and the Fama-French 5-factor model accounting for Momentum. 
\begin{table} \begin{tabular}{l r r r r r r} \hline \hline & low \(\beta^{\Delta\mathbf{CF}}\) & \multicolumn{5}{c}{High \(\beta^{\Delta\mathbf{CF}}\)} \\ \hline **A:** Common Fears & 1 & 2 & 3 & 4 & 5 & 5-1 \\ \hline mean (\%) & 1.25 & 1.06 & 0.96 & 0.92 & 0.79 & **-0.46** \\ \(t\)-stat & 3.64 & 3.46 & 3.55 & 3.75 & 3.51 & **-2.62** \\ \(\alpha^{\mathrm{FF5}}\) & 0.42 & 0.29 & 0.24 & 0.26 & 0.18 & -0.23 \\ \(t\)-stat & 4.18 & 4.53 & 3.98 & 4.99 & 2.42 & -1.57 \\ \(\alpha^{\mathrm{FF5}+\mathrm{MOM}}\) & 0.42 & 0.29 & 0.24 & 0.25 & 0.17 & **-0.25** \\ \(t\)-stat & 4.49 & 4.46 & 3.83 & 5.17 & 2.74 & **-2.00** \\ \hline \hline **B:** Common Good Fears & 1 & 2 & 3 & 4 & 5 & 5-1 \\ \hline mean (\%) & 1.16 & 1.04 & 1.05 & 0.93 & 0.79 & -0.38 \\ \(t\)-stat & 3.26 & 3.34 & 3.82 & 3.70 & 3.70 & -1.74 \\ \(\alpha^{\mathrm{FF5}}\) & 0.39 & 0.26 & 0.32 & 0.25 & 0.18 & -0.21 \\ \(t\)-stat & 3.87 & 4.65 & 6.16 & 4.56 & 1.92 & -1.24 \\ \(\alpha^{\mathrm{FF5}+\mathrm{MOM}}\) & 0.39 & 0.26 & 0.31 & 0.25 & 0.16 & -0.23 \\ \(t\)-stat & 4.04 & 4.68 & 6.09 & 4.60 & 2.03 & -1.50 \\ \hline \hline **C:** Common Bad Fears & 1 & 2 & 3 & 4 & 5 & 5-1 \\ \hline mean (\%) & 1.23 & 1.06 & 0.96 & 0.91 & 0.82 & **-0.41** \\ \(t\)-stat & 3.68 & 3.59 & 3.55 & 3.59 & 3.43 & **-2.66** \\ \(\alpha^{\mathrm{FF5}}\) & 0.40 & 0.29 & 0.25 & 0.26 & 0.19 & -0.21 \\ \(t\)-stat & 5.53 & 5.01 & 3.95 & 4.54 & 2.51 & -1.78 \\ \(\alpha^{\mathrm{FF5}+\mathrm{MOM}}\) & 0.41 & 0.29 & 0.24 & 0.25 & 0.18 & **-0.23** \\ \(t\)-stat & 5.82 & 4.97 & 3.59 & 4.92 & 2.72 & **-2.24** \\ \hline \hline \end{tabular} \end{table} Table 4: **Portfolio Sorts from Loadings on Common Fears, Common Good Fears, and Common Bad Fears Controlling for \(\Delta\)VIX** Notes: This table shows value-weighted portfolios that we sort on loadings to: i) common fears in Panel A; ii) common good fears in Panel B; and iii) common bad fears in Panel C whilst controlling for innovations to the VIX index (\(\Delta\)VIX). In each Panel, we report monthly excess returns for quintile portfolios and the long-short portfolio that goes long the portfolio of stocks with a high loading on common fears and short the portfolio of stocks with low loadings to common fears. We also report the risk adjusted returns from the Fama-French 5 factor model and the Fama-French 5-factor model accounting for Momentum. tonically as loadings to common fears increase. All spread portfolios earn negative returns with statistically significant returns from the spread portfolio using common bad fears. The risk-adjusted return controlling for the Fama-French 5 factors and momentum is significant at 1% levels with a \(t\)-statistic of -2.24. The economic significance of this risk-adjusted return reduce slightly to -0.18% per month (-2.16% annualized) relative to the result in Table 4. In the Appendix, we report results from conditional double sorts in Tables A1 and A2. These results first sort on loadings to innovations in the VIX index and market fears respectively. Overall, these results show monotonically decreasing returns for portfolios that load on common fears relative to controlling for the market. These results show that such spread portfolios are significant for the majority of quantiles at conventional levels; with the remaining quantiles at 10% levels. There is no pattern in quintile portfolios sorting on innovations in the VIX index or market fears. 
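As a reference for the construction behind these sorts, the sketch below forms value-weighted quintile portfolios from month-end loadings and computes the 5-1 spread for one month; the input series and names are assumptions for illustration, and the conditional double sorts simply repeat this within quintiles of the control variable.

```python
import numpy as np
import pandas as pd

def quintile_sort_one_month(loadings: pd.Series, ret_next: pd.Series,
                            mktcap: pd.Series):
    """Sort stocks into quintiles on their month-t loading, form value-weighted
    portfolios held over month t+1, and return the high-minus-low spread."""
    df = pd.concat({"beta": loadings, "ret": ret_next, "cap": mktcap}, axis=1).dropna()
    df["q"] = pd.qcut(df["beta"], 5, labels=False)        # 0 = low beta, 4 = high beta
    vw = df.groupby("q").apply(lambda g: np.average(g["ret"], weights=g["cap"]))
    spread = vw[4] - vw[0]                                 # the 5-1 portfolio return
    return vw, spread
```

Repeating this at every month end and averaging the resulting portfolio returns over time gives the quintile and spread entries reported in the tables.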
To complement this analysis, we also report Fama-MacBeth two-pass regressions using the portfolios we sort on loadings to common fears, common good fears, and common bad fears. Our procedure estimates portfolio betas using the full sample and then runs a single cross-sectional regression to estimate risk premia. For common fears, common good fears, and common bad fears, we construct a factor mimicking portfolio following the approach in Ang et al. (2006b). This is because our factors are not directly observable, and hence not tradable. Our base assets for the mimicking portfolios are the portfolios we sort on common fears, common good fears, and common bad fears. This is because they all have different sensitivities to the factor. This approach provides us with a daily factor mimicking portfolio that contains no look-ahead bias due to how we extract the common fears factors, and how we construct the base assets. We convert the daily factor mimicking portfolio returns to a monthly frequency by taking the average daily return within each month.9 Footnote 9: Converting the daily returns to monthly by cumulating returns over the month does not change the statistical significance of our results, however it provides inflated point estimates of risk premia. Looking at the end of month returns produces qualitatively similar results, but again results in inflated point estimates of risk premia. In Table 6, we present the risk premia estimates for decile portfolios that sort on loadings to common fears, \(\beta^{\Delta\mathbf{CF}}\), in columns 1-6; common good fears, \(\beta^{\Delta\mathbf{CF}^{+}}\), in columns 7-12; and common bad fears, \(\beta^{\Delta\mathbf{CF}^{-}}\), in columns 13-18. We show results for decile portfolios to have a large enough cross-section to estimate these models. Available on request are results for quintile portfolios using a smaller number of asset pricing factors that convey the same message as those we present here. Results in columns 1, 7, 13 control for the Fama-French 5 factor model, while columns 2-5, 8-11, and 14-17 control for the variance risk premium (VRP), momentum (MOM), common idiosyncratic volatility (CIV) (Herskovic et al., 2016), and liquidity (LIQ) (Pastor and Stambaugh, 2003), respectively. Columns 6, 12, and 18 add both CIV and LIQ to the Fama-French 5 factor model. Below the risk premium estimates, we report Newey and West \(t\)-statistics with 12 lags that adjust for errors in variables as in Shanken (1992). \(\bar{R}^{2}\) at the bottom of the table is the adjusted R-squared.
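The two-pass estimation described above can be sketched as follows; the matrix inputs (monthly portfolio excess returns and the factor-mimicking portfolio returns) are illustrative assumptions, and the Shanken errors-in-variables correction and Newey-West standard errors reported in the tables are omitted for brevity.

```python
import numpy as np

def fama_macbeth_two_pass(port_ret: np.ndarray, factors: np.ndarray):
    """First pass: full-sample time-series betas for each portfolio.
    Second pass: one cross-sectional regression of average excess returns
    on those betas to recover the risk premia (lambdas).
    `port_ret` is T x N and `factors` is T x K."""
    T, N = port_ret.shape
    X = np.column_stack([np.ones(T), factors])
    betas = np.linalg.lstsq(X, port_ret, rcond=None)[0][1:].T      # N x K loadings

    Z = np.column_stack([np.ones(N), betas])
    lambdas = np.linalg.lstsq(Z, port_ret.mean(axis=0), rcond=None)[0]
    return betas, lambdas      # lambdas[0] is the intercept, lambdas[1:] the premia
```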
\begin{table} \begin{tabular}{l r r r r r r} \hline \hline & low \(\beta^{\Delta\mathbf{CF}}\) & \multicolumn{5}{c}{High \(\beta^{\Delta\mathbf{CF}}\)} \\ \hline **A:** Common Fears & 1 & 2 & 3 & 4 & 5 & 5-1 \\ \hline mean (\%) & 1.25 & 1.05 & 0.94 & 0.93 & 0.82 & **-0.44** \\ \(t\)-stat & 3.56 & 3.48 & 3.32 & 3.77 & 3.79 & **-2.33** \\ \(\alpha^{\mathrm{FF5}}\) & 0.40 & 0.30 & 0.20 & 0.25 & 0.24 & -0.16 \\ \(t\)-stat & 4.03 & 5.30 & 2.51 & 4.18 & 3.61 & -1.16 \\ \(\alpha^{\mathrm{FF5}+\mathrm{MOM}}\) & 0.41 & 0.30 & 0.19 & 0.24 & 0.23 & -0.18 \\ \(t\)-stat & 4.38 & 5.12 & 2.80 & 4.32 & 4.06 & -1.56 \\ \hline \hline **B:** Common Good Fears & 1 & 2 & 3 & 4 & 5 & 5-1 \\ \hline mean (\%) & 1.12 & 1.11 & 0.99 & 0.97 & 0.81 & -0.31 \\ \(t\)-stat & 3.08 & 3.52 & 3.53 & 3.85 & 3.95 & -1.38 \\ \(\alpha^{\mathrm{FF5}}\) & 0.33 & 0.32 & 0.24 & 0.29 & 0.23 & -0.09 \\ \(t\)-stat & 2.76 & 4.75 & 4.50 & 4.70 & 2.80 & -0.51 \\ \(\alpha^{\mathrm{FF5}+\mathrm{MOM}}\) & 0.33 & 0.31 & 0.23 & 0.28 & 0.22 & -0.11 \\ \(t\)-stat & 3.05 & 4.65 & 4.53 & 4.93 & 3.07 & -0.73 \\ \hline \hline **C:** Common Bad Fears & 1 & 2 & 3 & 4 & 5 & 5-1 \\ \hline mean (\%) & 1.22 & 1.03 & 0.97 & 0.91 & 0.81 & **-0.41** \\ \(t\)-stat & 3.64 & 3.41 & 3.55 & 3.56 & 3.63 & **-2.46** \\ \(\alpha^{\mathrm{FF5}}\) & 0.39 & 0.26 & 0.25 & 0.23 & 0.23 & -0.16 \\ \(t\)-stat & 5.56 & 4.35 & 4.62 & 3.11 & 3.82 & -1.47 \\ \(\alpha^{\mathrm{FF5}+\mathrm{MOM}}\) & 0.40 & 0.26 & 0.24 & 0.22 & 0.22 & **-0.18** \\ \(t\)-stat & 6.17 & 4.27 & 4.61 & 3.47 & 4.21 & **-2.01** \\ \hline \hline \end{tabular} \end{table} Table 5: **Portfolio Sorts from Loadings on Common Fears, Common Good Fears, and Common Bad Fears Controlling for Market Fears** Notes: This table shows value-weighted portfolios that we sort on loadings to: i) common fears in Panel A; ii) common good fears in Panel B; and iii) common bad fears in Panel C whilst controlling for innovations to market fears. Market fears are implied volatilities we compute from index options following Bakshi and Madan (2000) and Bakshi et al. (2003). In each Panel, we report monthly excess returns for quintile portfolios and the long-short portfolio that goes long the portfolio of stocks with a high loading on common fears and short the portfolio of stocks with low loadings to common fears. We also report the risk adjusted returns from the Fama-French 5 factor model and the Fama-French 5-factor model accounting for Momentum.

The main takeaway from Table 6 is that portfolios that sort on common bad fears have significant risk premia estimates across all specifications we consider. We can see that the point estimate for the risk premia ranges from -0.47 to -0.41, with \(t\)-statistics indicating significance at 1% levels. Annualizing these estimates suggests risk prices ranging from -5.64% to -4.92%, which is similar to those we present in Table 3. Risk premia estimates from the Fama-French 5 factor model have varying degrees of significance, with far more variability in the point estimates.
The same holds true for the additional controls we consider. Model fit ranges from 0.756 to 0.982 and the intercepts in all cross-sectional regressions are statistically insignificant. Tables 7 and 8 report analogous results for double sorts that control for innovations in the VIX index, and market fears (total, good and bad), respectively. We can see from Table 7 that similar conclusions hold. Common bad fears prices the 25 portfolios that sort on innovations to the VIX and loadings to common bad fears. Estimates of the risk premia range from -0.22 to -0.17, which imply annual risk prices of -2.64% to -2.04%. This is similar in magnitude to the risk-adjusted returns for spread portfolios in Table 4. Observe that there are varying degrees of significance for the Fama-French 5 factors and also the additional controls we consider. The adjusted R-squared statistics fall relative to Table 6 and the intercepts are mostly significant at conventional levels. Similar conclusions hold for Table 8 where we control for market fears. Again, risk premia estimates for common bad fears range from -0.23 to -0.1. Here the two specifications in columns 15 and 18 are insignificant at conventional levels, although column 15 is significant at the 10% level. For the significant common bad fears risk premia estimates, the annual prices of risk range from -2.76% to -2.40%. In the Appendix, we show corresponding results for portfolios that control for: i) the market risk premium; ii) market capitalization; and iii) trading volume. In general, these results yield similar conclusions to those we report here. The two notable differences are that portfolios loading on common fears and common bad fears result in significant risk-adjusted returns when controlling for the market risk premium, and that Fama-MacBeth analysis controlling for trading volume has significant exposures to common bad fears at the 10% level. Long-short portfolios loading on common bad fears generate risk-adjusted returns that range from -0.26% per month (-3.12% annualized) to -0.21% per month (-2.52% annualized). The annualized risk premia estimates from corresponding Fama-MacBeth regressions range from -3.24% to -2.04%.

### Alternative Test Assets

It is natural to question whether common fears, common good fears, and common bad fears carry similar pricing implications for other test assets. We consider a variety of anomaly portfolios that contain double sorts (5 x 5) on: size and investment (ME/INV); size and the market risk premium (ME/MKT); size and book-to-market (ME/BM); size and operating profit (ME/OP); operating profit/investment (OP/INV); book-to-market and investment (BM/INV); book-to-market and operating profit (BM/OP); size and residual variance (ME/RESVAR); size and variance (ME/VAR); size and accruals (ME/AC); and size and momentum (ME/MOM). Figure 4 shows plots of the risk premia estimates (LHS plot) and corresponding \(t\)-statistics (RHS plot) for common fears, common good fears, and common bad fears respectively for a battery of test assets and asset pricing models.10 We first consider seven anomaly portfolios in isolation (ME/INV, ME/MKT, ME/BM, ME/OP, ME/AC, OP/INV, and BM/INV), and then add these anomaly portfolios to portfolios that we sort on loadings to common fears (Portfolios 8-12). For each set of test assets, we consider 12 pricing models.
The first six consider: the Fama-French 3 factor model; add our additional controls (VRP, MOM, CIV, LIQ) to the Fama-French 3 factor model; and then all additional controls together. The next six do the same for the Fama-French 5 factor model. Footnote 10: For the sake of brevity, we refrain from reporting results here for: BM/OP; ME/RESVAR; ME/VAR; and ME/MOM. Common fears do not price i) and iv). In subsequent analysis below, we include the test assets in ii) and iii). All results are available upon request. From these plots, we can see that our main results hold for alternative test assets. these risk premia estimates are almost never positive (the exception is 25 OP/INV portfolios and common good fears); even for those that are statistically insignificant. Although common fears and common good fears are able to price the cross sectional variation in some test assets, it is common bad fears that predominantly yield significant negative risk premium estimates across all specifications. For risk premium estimates that are statistically significant, common fears annualized risk premia estimates range from -3.36% to -1.68%. Common good fears annualized risk premia estimates range from -3.00% to -1.80%, and common bad fears annualized risk premia estimates range from -3.96% to -1.68%. Test assets with the minimum risk premia point estimates are 25 portfolios on ME/INV, 25 portfolios on ME/MKT, 25 portfolios on BM/INV. Those with the maximum risk premia estimates are when we add multiple anomaly portfolios to portfolios we sort on common fears loadings (e.g. 25 portfolios sorted on: ME/INV, ME/BM, ME/MKT, ME/OP, ME/RESVAR, and ME/VAR). We end this section by showing the risk premia estimates for a set of test assets that contains anomaly portfolios and also portfolios we sort on loadings to common and market Figure 4: **Common Fears, Common Good Fears, and Common Bad Fears Risk Premium Estimates and \(t\)-statistics: Alternative Test Assets** Notes: This figure shows the point estimates (LHS plots) and corresponding \(t\)-statistics (RHS plots) for the risk premia associated to common fears (yellow dots), common good fears (blue dots), and common bad fears (red dots). The first seven set of test assets are: size and investment (ME/INV); size and the market risk premium (ME/MKT); size and book-to-market (ME/BM); size and operating profit (ME/OP); size and accruals (ME/AC); operating profit/investment (OP/INV); and book-to-market and investment (BM/INV). Portfolio 8 takes ME/INV, ME/MKT, ME/BM, and decile portfolios loading on common fears. Portfolios 9 and 10 consider the same anomaly portfolios but then add the respective double sorted portfolios on common investor fears and VIX, and common investor fears and market fears respectively. Portfolio 11 considers the following anomaly portfolios: ME/INV, ME/MKT, ME/BM, ME/OP, size/residual variance, size/variance, and adds the decile portfolios we sort on common fears. Portfolio 12 considers the same anomaly portfolios as Portfolio 11, but adds the 25 double sorted portfolios on common fears and VIX. 
\begin{table} \begin{tabular}{l r r r r r r r r r r r r r r r r} \hline \hline & & & \multicolumn{10}{c}{25 ME/INV,25 ME/BM, 25 ME/MKT, Plus 25:} \\ & & \multicolumn{10}{c}{\(\beta^{\Delta{\bf CF}}/\beta^{\Delta{\bf MF}}\)} & \multicolumn{10}{c}{\(\beta^{\Delta{\bf CF}^{+}}/\beta^{\Delta{\bf MF}^{+}}\)} & \multicolumn{10}{c}{\(\beta^{\Delta{\bf CF}^{-}}/\beta^{\Delta{\bf MF}^{-}}\)} \\ \hline & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 \\ \hline \(\lambda_{0}\) & 0.56 & 0.66 & 0.62 & 0.63 & 0.71 & 0.59 & 0.78 & 0.64 & 0.61 & 0.76 & 0.55 & 0.67 & 0.59 & 0.66 & 0.74 \\ \(t\)-stat & 2.22 & 2.76 & 2.74 & 2.78 & 3.10 & 2.17 & 3.02 & 2.42 & 2.30 & 2.87 & 2.18 & 2.68 & 2.51 & 2.92 & 3.22 \\ \(\lambda_{\bf CF}\) & **-0.19** & **-0.17** & **-0.18** & **-0.18** & **-0.16** & -0.13 & -0.11 & -0.12 & -0.09 & -0.08 & **-0.20** & **-0.18** & **-0.18** & **-0.18** & **-0.17** \\ \(t\)-stat & **-2.65** & **-2.42** & **-2.54** & **-2.45** & **-2.38** & -1.60 & -1.41 & -1.47 & -1.21 & -1.10 & **-2.78** & **-2.51** & **-2.61** & **-2.58** & **-2.55** \\ \(\lambda_{MKT}\) & 0.48 & 0.38 & 0.42 & 0.39 & 0.31 & 0.45 & 0.24 & 0.39 & 0.42 & 0.26 & 0.48 & 0.37 & 0.44 & 0.36 & 0.28 \\ \(t\)-stat & 1.40 & 1.16 & 1.24 & 1.10 & 0.98 & 1.40 & 0.78 & 1.24 & 1.29 & 0.86 & 1.36 & 1.14 & 1.28 & 0.96 & 0.87 \\ \(\lambda_{SMB}\) & 0.15 & 0.16 & 0.16 & 0.17 & 0.18 & 0.16 & 0.19 & 0.17 & 0.17 & 0.20 & 0.17 & 0.18 & 0.18 & 0.19 & 0.20 \\ \(t\)-stat & 0.93 & 1.00 & 0.97 & 1.01 & 1.07 & 0.95 & 1.20 & 1.01 & 1.03 & 1.21 & 1.02 & 1.09 & 1.05 & 1.12 & 1.17 \\ \(t\)-stat & -0.26 & -0.27 & -0.25 & -0.24 & -0.24 & -0.18 & -0.23 & -0.16 & -0.17 & -0.22 & -0.28 & -0.28 & -0.26 & -0.25 & -0.26 \\ \(t\)-stat & -1.15 & -1.19 & -1.12 & -1.08 & -1.08 & -0.79 & -1.03 & -0.73 & -0.78 & -0.96 & -1.24 & -1.28 & -1.19 & -1.16 & -1.18 \\ \(\lambda_{RMW}\) & 0.24 & 0.31 & 0.27 & 0.26 & 0.31 & 0.17 & 0.39 & 0.21 & 0.19 & 0.38 & 0.31 & 0.40 & 0.34 & 0.31 & 0.37 \\ \(t\)-stat & 1.06 & 1.43 & 1.18 & 1.11 & 1.44 & 0.82 & 1.99 & 1.02 & 0.93 & 1.93 & 1.41 & 2.04 & 1.53 & 1.39 & 1.91 \\ \(\lambda_{CAA}\) & -0.14 & -0.15 & -0.15 & -0.13 & -0.14 & -0.14 & -0.15 & -0.15 & -0.15 & -0.16 & -0.13 & -0.15 & -0.15 & -0.14 & -0.14 \\ \(t\)-stat & -0.97 & -1.06 & -1.00 & -0.91 & -0.97 & -1.01 & -1.06 & -1.08 & -1.09 & -1.15 & -0.96 & -1.08 & -1.04 & -0.98 & -1.03 \\ \(\lambda_{VRP}\) & 0.14 & & & & & 0.03 & 0.13 & & & & 0.05 & 0.14 & & & 0.03 \\ \(t\)-stat & 1.23 & & & & & 0.37 & 1.12 & & & & 0.57 & 1.26 & & & 0.45 \\ \(\lambda_{MOM}\) & & -0.35 & & & & -0.39 & & -1.40 & & & -1.28 & & -0.49 & & & -0.45 \\ \(t\)-stat & & -0.65 & & & -0.74 & & -2.36 & & & -2.21 & & -0.91 & & & -0.83 \\ \(\lambda_{CIV}\) & & & & -0.92 & & & 0.09 & & -1.11 & & 0.02 & & & -0.79 & & & -0.08 \\ \(t\)-stat & & & -1.00 & & & 0.10 & & & -1.40 & & 0.03 & & & -0.75 & & & -0.07 \\ \(\lambda_{LIQ}\) & & & & -0.19 & -0.22 & & & & -0.21 & -0.23 & & & & -0.19 & -0.20 \\ \(t\)-stat & & & & -1.74 & -2.26 & & & & -1.99 & -2.45 & & & & -1.59 & -1.88 \\ \hline \(\vec{R}^{2}\) & 0.468 & 0.459 & 0.455 & 0.556 & 0.551 & 0.353 & 0.413 & 0.344 & 0.466 & 0.510 & 0.515 & 0.506 & 0.498 & 0.584 & 0.578 \\ \hline \hline \end{tabular} \end{table} Table 9: **Fama-MacBeth Analysis; Alternative Test Assets: Size/Investment, Size/Book to Market, Size/Market Risk Premium** fears. 
Table 9 shows the risk premia estimates where our test assets are the 25 portfolios on i) ME/INV; ii) ME/BM; and iii) ME/MKT plus the respective 25 portfolios that we sort on common fears/market fears in columns 1-5; common good fears/good market fears in columns 6-10; and common bad fears/bad market fears in columns 11-15. For these test assets, we can see that common fears and common bad fears are priced with \(t\)-statistics indicating significance at 1% levels. The estimates range from -0.16% per month (-1.92% annualized) to -0.20% per month (-2.40% annualized). As for common good fears, the risk premia estimates are negative, but insignificant. The additional factor risk premia we control for range from being statistically insignificant to marginally significant. In some cases, the sign of the risk premia estimates changes (see e.g. the CIV factor of Herskovic et al., 2016). Pricing models examining common fears and common bad fears have adjusted R-squared values that range from 0.459 to 0.578, with those looking at common good fears ranging from 0.352 to 0.51. Table A9 in the Appendix presents Fama-MacBeth results for additional test assets. Here, we add 25 portfolios sorted on ME/OP, ME/RESVAR, and ME/VAR to those we consider in Table 9. These results yield qualitatively similar conclusions, with annualized risk premium estimates for common fears and common bad fears both ranging between -1.92% and -1.68%.

## 5 Robustness Analysis

### Accounting for Market Fears in Fama-MacBeth Analysis

Another natural question is whether the pricing implications for common firm-level investor fears hold when controlling for market fears within our Fama-MacBeth analysis. We therefore construct a factor mimicking portfolio to resemble market fears, good market fears, and bad market fears in the exact manner as we do for our measures of common firm-level investor fears. We present the Fama-MacBeth estimates for 25 portfolios we sort on loadings to: i) common fears, \(\beta^{\Delta\mathbf{CF}}\), and market fears, \(\beta^{\Delta\mathbf{MF}}\); ii) common good fears, \(\beta^{\Delta\mathbf{CF}^{+}}\), and good market fears, \(\beta^{\Delta\mathbf{MF}^{+}}\); and iii) common bad fears, \(\beta^{\Delta\mathbf{CF}^{-}}\), and bad market fears, \(\beta^{\Delta\mathbf{MF}^{-}}\), in Table 10. Two noteworthy points emerge from Table 10. First, our main results concerning common fears and common bad fears hold even in light of allowing for a market fears pricing factor. Three of the five specifications pricing common bad fears are statistically significant at 1% levels, with another two being marginally significant at 10% levels. For statistically significant common bad fears risk premia, the annualized risk premium estimates are between -2.64% and -2.52%. This magnitude is economically meaningful and similar to those we present in Table 8. Second, the risk premia estimates for market fears, good market fears and bad market fears are all positive except for the results in column 6. Moreover, within these specifications, both good market fears and common good fears appear to command positive risk premia that are statistically significant. The magnitudes of these premia are similar, at annualized values ranging from 1.92% to 2.40%. This result relates to those in Kilic and Shaliastovich (2019), where good variance risk premia appear to have a positive relationship with future returns. Tables A10 and A11 report analogous results to those in Table 10, but include additional anomaly portfolios as test assets.
The former adds the respective 25 portfolios on: ME/INV; ME/BM; and ME/MKT to the double sorts on common firm-level investor fears/market fears portfolios. The latter adds the respective 25 portfolios on: ME/INV; ME/BM; ME/MKT; ME/OP; ME/RESVAR; ME/VAR to the double sorts on common firm-level investor fears/market fears portfolios. These results show that common bad fears bear significant negative risk premia that conform with those within our main results. The significant positive premia for common good fears and good market fears we observe in Table 10 disappears. ### Three-pass Regression Analysis All results we present until now suffer from omitted variable bias. We therefore investigate the pricing implications of common firm-level investor fears using the three-pass regressions approach in Giglio and Xiu (2021). This procedure is valid even if we do not specify or observe all factors within a pricing model and relies on PCA of the test assets to recover the factor space and additional regressions to proxy risk premia. In what follows, we present results from pricing models that contain a measure of common firm-level investor fears, and the market risk premium. There are two benefits we exploit of the Giglio and Xiu (2021) approach. First, as long as the test assets remain the same, the risk-premia estimates and model fit do not change as you start adding additional pricing factors to the specification. This means we can consider multiple dimensions of robustness to our main results with brevity. Second, we are able to test the null hypothesis that factors we consider are weak pricing factors. The benefit of this within the three-pass regression procedure is that we are able to understand whether the test assets capture well variation in the pricing factor itself whilst being able to recover the risk-premia estimates of potentially strong factors and interpret inference reliably. Table 11 shows results for three-pass regressions that use a measure of common firm-level investor fears and the market risk premium. Each panel assesses the sensitivities to how we define the factor tracking common firm-level investor fears. In Panel A, we show results for tradable factors of common firm-level investor fears which are factor mimicking portfolios as with our main results. Panels B and C consider nontradable measures of common firm-level investor fears. Panel B uses the innovations in the factors we extract using PCA as we outline in Section 2.3. Panel C uses innovations in an equally weighted average of firm level implied volatilities in a similar vein to how Herskovic et al. (2016) measure their CIV factor. We report risk-premia estimates with their corresponding \(t\)-statistics below. We also report the \(p\)-values from the Wald test in Giglio and Xiu (2021) with the null hypothesis that the factor is weak. Rejection of the null indicates the factor is a strong pricing factor. We also report the adjusted R-squared for each cross-sectional regression and the number of factors from test assets to recover the factor space. The columns assess the sensitivity of our estimates to changes in the test assets. Columns 1-3 show results from decile portfolios that sort on common, common good, and common bad fears. Columns 4-6 contain the 25 portfolios that sort on: ME/INV; ME/BM; ME/MKT; ME/OP; ME/RESVAR; and ME/VAR. Columns 7-9 add the respective portfolios in columns 1-3 to those in 4-6. 
Columns 10-12 add all decile beta portfolios to columns 4-6 and then uses common fears, common good fears and common bad fears in columns 10, 11, and 12 respectively. Columns 13-15 add the respective decile beta portfolios on common fears, as well as double sorts on common fears and market fears to the test assets in columns 4-6. Risk premium estimates in columns 1, 4, 7, 10, 13 use common fears, in columns 2, 5, 8, 11, 14 use common good fears, and in columns 3, 6, 9, 12, 15 use common bad fears. Risk premium estimates for common firm-level investor fears are statistically significant across all specifications of how we define the risk factor, and across the breadth of test assets. The majority of the time the risk premium relating to common bad fears is larger in absolute value than those from common good fears or common fears. The factor mimicking portfolios we construct to track common fears, common good fears, and common bad fears in Panel A have comparable point estimates to risk premia estimates using innovations in the extracted factor using PCA in Panel B. The annualized risk premia estimates range from -2.76% to -1.44% and -3.72% to -1.44% in Panels A and B respectively. Those risk premia in Panel C are larger in absolute value with annualized estimates ranging from -7.68% to -3.96%11. The risk premia estimates associated to the market fluctuate between 0.68% and 1.44% per month and model fit ranges from 0.37 to 0.85. The Wald tests reject the null that common firm-level investor fears are weak asset pricing factors, as do those for the market risk premium. As we increase the number of test assets beyond decile portfolios, the number of factors to recover the factor space increases from one to three. As an additional exercise, we estimate results using all pricing factors we consider above and are available upon request. These results show that only the HML factor of Fama and French (2015) five factors and the LIQ factor of Pastor and Stambaugh (2003) are statistically significant; with the former in most cases only marginally significant. These results also suggest that MOM, VRP, and CIV of Herskovic et al. (2016) are statistically and economically insignificant. The Wald tests indicate that we cannot reject the null that MOM and CIV are weak pricing factors. ## 6 Conclusion This paper shows strong co-movement within firm-level investor fears using individual equity options. Using the model-free approach of Bakshi and Madan (2000); Bakshi et al. (2003) to extract implied variances, we define investor fears as uncertainty regarding future price movements of the underlying stocks. We decompose such fears into good and bad fears that link to upward and downward anticipated future price movements respectively. We provide empirical evidence in favour of commonalities within firm-level investor fears containing different information to those we infer from index options. Stocks with a higher loading to common firm-level investor fears earn lower returns. Our results indicate there are statistically significant and economically meaningful risk premia to common firm-level investor fears; particularly those relating to common bad fears. Sorting stocks on their common bad fear beta yields an annualized risk-adjusted return spread of around 5%. Fama-MacBeth regressions further substantiate this result with common bad fears risk premium estimates ranging from -5.63% to -4.92%. 
Meaningful common firm-level investor fear risk premia are present after controlling for market fears from index options, as well as a variety of other controls, although the annualized magnitude declines to around 3% in absolute value. Common fears are important in understanding return differences within anomaly portfolios, with similar risk premium estimates. We also show that common firm-level investor fear risk premia survive the three-pass regression procedure in Giglio and Xiu (2021).

Several important implications emerge from our analysis. First, the common component within firm-level implied variances is different to that within implied variances on index options. This means that investor beliefs common to individual stocks are inherently different to beliefs about the market. Second, the common component within firm-level investor fears constitutes a priced source of risk. We show that the risk premium associated with the common component within firm-level fears from put options, common bad fears, is substantially larger than the corresponding premium that links to call options, common good fears. This suggests that not only do market participants react differently to good and bad anticipated future outcomes, but also that they will accept substantially lower returns in equilibrium on assets that hedge against common bad fears.

Table 11: **Three-pass Regression Estimates for Common Firm-level Investor Fears and the Market Risk Premium** Notes: This table shows risk premia estimates from the three-pass regressions of Giglio and Xiu (2021) that use a measure of common firm-level investor fears and the market risk premium. Panel A uses tradable factor mimicking portfolios for common firm-level investor fears, Panel B uses innovations in the factors we extract using PCA, and Panel C uses innovations in an equally weighted average of firm-level implied volatilities. We report risk premia estimates with their corresponding \(t\)-statistics, the \(p\)-values from the Wald test with the null hypothesis that the factor is weak, and the adjusted R-squared for each cross-sectional regression.

## Appendix A Discretization Procedure of Model-Free Implied Variance

Considering total implied variance in Equation (1), the discretization is \[\sigma_{i,t}^{2}=\frac{2}{T}\sum_{i=1}^{n}\frac{\Delta K_{i}}{K_{i}^{2}}e^{rT}Q(K_{i})-\frac{1}{T}\left[\frac{F}{K_{0}}-1\right]^{2},\] where \(T\) is time to expiration, \(F\) is the forward index level derived from put-call parity as \(F=e^{r^{f}T}[C(K,T)-P(K,T)]+K\) with the risk-free rate \(r^{f}\), \(K_{0}\) is the reference price, the first exercise price less than or equal to the forward level \(F\) (\(K_{0}\leq F\)), and \(K_{i}\) is the \(i\)th OTM strike price available on a specific date (call if \(K_{i}>K_{0}\), put if \(K_{i}<K_{0}\), and both call and put if \(K_{i}=K_{0}\)). \(Q(K_{i})\) is the bid-ask midpoint of the OTM option with exercise price equal to \(K_{i}\); if \(K_{i}=K_{0}\), it is the average of the at-the-money (ATM) call and put prices at that strike. \(\Delta K_{i}\) is half the distance between the two strikes adjacent to \(K_{i}\), namely \(\frac{(K_{i+1}-K_{i-1})}{2}\) for \(2\leq i\leq n-1\). For further details see the CBOE VIX white paper available at [https://cdn.cboe.com/api/global/us_indices/governance/Volatility_Index_Methodology_Cboe_Volatility_Index.pdf](https://cdn.cboe.com/api/global/us_indices/governance/Volatility_Index_Methodology_Cboe_Volatility_Index.pdf). Note the standard CBOE methodology considers an interpolation between the two expiration dates closest to 30 days. In our data construction, we take into account only the one expiration date closest to 30 days due to options data availability with respect to firm-level stocks.
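A sketch of this discretization for a single name and a single roughly 30-day expiry is given below; the input arrays of strikes and mid-quotes and the simple treatment of the first and last strike intervals are assumptions for illustration.

```python
import numpy as np

def discretized_total_implied_variance(strikes, call_mid, put_mid, r_f, T):
    """Apply the discretized formula above to one option chain: forward level
    from put-call parity, reference strike K0, OTM quotes Q(K), and interval
    weights Delta K."""
    K = np.asarray(strikes, dtype=float)
    C = np.asarray(call_mid, dtype=float)
    P = np.asarray(put_mid, dtype=float)

    # Forward level from put-call parity at the strike where |C - P| is smallest.
    j = int(np.argmin(np.abs(C - P)))
    F = np.exp(r_f * T) * (C[j] - P[j]) + K[j]

    # Reference strike: the largest strike at or below the forward level.
    K0 = K[K <= F].max()

    # Q(K): OTM puts below K0, OTM calls above K0, their average at K0.
    Q = np.where(K < K0, P, np.where(K > K0, C, 0.5 * (C + P)))

    # Delta K: half the distance between neighbouring strikes, simple edges.
    dK = np.empty_like(K)
    dK[1:-1] = (K[2:] - K[:-2]) / 2.0
    dK[0], dK[-1] = K[1] - K[0], K[-1] - K[-2]

    return (2.0 / T) * np.sum(dK / K**2 * np.exp(r_f * T) * Q) \
           - (1.0 / T) * (F / K0 - 1.0) ** 2
```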
Additional Results \begin{table} \begin{tabular}{l c c c c c c c c c c c c c c c c} \hline \hline A: Common Fears & \multicolumn{3}{c}{Low \(\beta^{\mathbf{ACF}}\)} & \multicolumn{3}{c}{High \(\beta^{\mathbf{ACF}}\)} & \multicolumn{3}{c}{B: Common Good Fears} & \multicolumn{3}{c}{Low \(\beta^{\mathbf{ACF}}\),} & \multicolumn{3}{c}{High \(\beta^{\mathbf{ACF}}\),} \\ & & & 1 & 2 & 3 & 4 & 5 & -_j_ & _t_-stat (\(\text{5-}\)) & & & 1 & 2 & 3 & 4 & 5 & -_j_ & _t_-stat (\(\text{5-}\)) \\ \hline low \(\beta^{\mathbf{ADF}}\) & 1 & 1.30 & 0.98 & 0.85 & 0.72 & 0.66 & -0.64 & **-2.26** & low \(\beta^{\mathbf{ADF}}\) & 1 & 1.02 & 0.97 & 0.90 & 0.86 & 0.59 & -0.43 & -1.57 \\ & 2 & 1.32 & 1.06 & 1.01 & 1.05 & 0.83 & -0.49 & **-2.10** & & & 2 & 1.22 & 1.15 & 0.97 & 1.11 & 0.80 & -0.42 & -1.50 \\ & 3 & 1.10 & 1.11 & 1.09 & 1.01 & 0.70 & -0.40 & -1.65 & & 3 & 1.18 & 1.04 & 0.88 & 0.79 & 0.83 & -0.35 & -1.26 \\ & 4 & 1.04 & 0.98 & 0.72 & 0.86 & 0.86 & -0.18 & -0.85 & & 4 & 0.93 & 1.02 & 0.91 & 1.07 & 0.83 & -0.10 & -0.36 \\ High \(\beta^{\mathbf{ADF}}\) & 5 & 1.50 & 1.14 & 1.04 & 0.98 & 1.03 & -0.48 & **-1.96** & High \(\beta^{\mathbf{ADF}}\) & 5 & 1.27 & 1.36 & 1.29 & 1.15 & 1.00 & -0.28 & -0.92 \\ & \(\text{5-}\) & 0.20 & 0.16 & 0.19 & 0.26 & 0.36 & & & 5-_i_ & & 0.25 & 0.39 & 0.39 & 0.19 & 0.41 & \\ & \(t\text{-stat (5-}\)) & 0.67 & 0.59 & 0.70 & 0.95 & 1.37 & & \(t\text{-stat (5-}\)) & 1.04 & 1.65 & 1.70 & 0.76 & 1.57 & & \\ \hline \hline D: Common Good & \multicolumn{3}{c}{Low \(\beta^{\mathbf{ACF}}\),} & \multicolumn{3}{c}{High \(\beta^{\mathbf{acF}}\),} & \multicolumn{3}{c}{C: Common Bad Fears} & \multicolumn{3}{c}{Low \(\beta^{\mathbf{ACF}}\),} & \multicolumn{3}{c}{High \(\beta^{\mathbf{acF}}\),} \\ vs Bad & & 1 & 2 & 3 & 4 & 5 & 5 & \(\text{5-}\)_j_ & _t_-stat (\(\text{5-}\)) & & 1 & 2 & 3 & 4 & 5 & 5 & \(\text{5-}\)_j_ & _t_-stat (\(\text{5-}\)) \\ \hline Low \(\beta^{\mathbf{ACF}}\) & 1 & 1.43 & 1.35 & 1.14 & 1.38 & 0.99 & -0.44 & -1.55 & low \(\beta^{\mathbf{ADF}}\), & 1 & 1.31 & 1.02 & 0.91 & 0.82 & 0.80 & -0.51 & **-1.94** \\ & 2 & 1.01 & 1.06 & 1.19 & 1.10 & 0.95 & -0.06 & -0.27 & & 2 & 1.29 & 1.20 & 1.09 & 0.98 & 0.71 & -0.58 & **-2.49** \\ & 3 & 0.76 & 1.04 & 1.03 & 0.96 & 0.90 & 0.13 & 0.53 & & 3 & 1.09 & 1.04 & 1.02 & 0.98 & 0.78 & -0.31 & -1.60 \\ & 4 & 0.96 & 0.89 & 1.02 & 0.94 & 0.78 & -0.18 & -0.68 & & 4 & 1.03 & 0.92 & 0.78 & 0.89 & -0.24 & -1.17 \\ High \(\beta^{\mathbf{ACF}}\) & 5 & 0.82 & 0.85 & 0.88 & 0.78 & 0.73 & -0.09 & -0.35 & High \(\beta^{\mathbf{ADF}}\), & 5 & 1.40 & 0.95 & 1.05 & 0.86 & 0.98 & -0.42 & -1.72 \\ & \(\text{5-}\) & -0.60 & -0.50 & -0.26 & -0.60 & -0.26 & & & 5-_i_ & & 0.09 & -0.07 & 0.15 & 0.04 & 0.19 & \\ & \(t\text{-stat (5-}\)) & **-2.10** & **-2.05** & -1.05 & **-2.60** & -0.98 & & & \(t\text{-stat (5-}\)) & 0.31 & -0.25 & 0.51 & 0.15 & 0.65 & \\ \hline \hline \end{tabular} \end{table} Table 2: **Unconditional Double Portfolio Sorts from Loadings on Common Fears and Market Fears** Notes: This table shows value-weighted portfolios from unconditional double sorts that we sort on loadings to: i) common fears and innovations to market fears in Panel A; ii) common good fears and innovations to good market fears in Panel B; and iii) common bad fears and innovations to bad market fears in Panel C. In Panel D we report double sorts on common bad fears and common good fears. 
### Portfolios Controlling for the Market Risk Premium, Market Capitalization and Trading Volume \begin{table} \begin{tabular}{l r r r r r r} \hline \hline & low \(\beta^{\Delta\mathbf{CF}}\) & \multicolumn{5}{c}{High \(\beta^{\Delta\mathbf{CF}}\)} \\ \hline **A:** Common Fears & 1 & 2 & 3 & 4 & 5 & 5-1 \\ \hline mean (\%) & 1.21 & 1.09 & 0.97 & 0.94 & 0.78 & **-0.43** \\ \(t\)-stat & 3.46 & 3.54 & 3.68 & 3.75 & 3.38 & **-2.42** \\ \(\alpha^{\mathrm{FF5}}\) & 0.40 & 0.33 & 0.30 & 0.31 & 0.16 & **-0.23** \\ \(t\)-stat & 5.09 & 6.36 & 4.38 & 4.27 & 2.29 & **-1.95** \\ \(\alpha^{\mathrm{FF5}+\mathrm{MOM}}\) & 0.40 & 0.33 & 0.29 & 0.30 & 0.15 & **-0.25** \\ \(t\)-stat & 5.29 & 6.30 & 4.71 & 4.35 & 2.46 & **-2.44** \\ \hline \hline **B:** Common Good Fears & 1 & 2 & 3 & 4 & 5 & 5-1 \\ \hline mean (\%) & 1.11 & 1.12 & 1.01 & 0.93 & 0.82 & -0.28 \\ \(t\)-stat & 3.09 & 3.51 & 3.62 & 3.78 & 3.68 & -1.38 \\ \(\alpha^{\mathrm{FF5}}\) & 0.33 & 0.35 & 0.30 & 0.30 & 0.20 & -0.13 \\ \(t\)-stat & 3.51 & 5.97 & 5.04 & 4.34 & 1.99 & -0.73 \\ \(\alpha^{\mathrm{FF5}+\mathrm{MOM}}\) & 0.33 & 0.35 & 0.30 & 0.29 & 0.18 & -0.15 \\ \(t\)-stat & 3.63 & 6.08 & 5.01 & 4.37 & 2.15 & -0.94 \\ \hline \hline **C:** Common Bad Fears & 1 & 2 & 3 & 4 & 5 & 5-1 \\ \hline mean (\%) & 1.23 & 1.06 & 0.98 & 0.96 & 0.79 & **-0.44** \\ \(t\)-stat & 3.61 & 3.63 & 3.73 & 3.68 & 3.21 & **-2.78** \\ \(\alpha^{\mathrm{FF5}}\) & 0.41 & 0.33 & 0.31 & 0.30 & 0.16 & **-0.26** \\ \(t\)-stat & 5.57 & 5.81 & 5.29 & 4.32 & 2.21 & **-2.17** \\ \(\alpha^{\mathrm{FF5}}+\mathrm{MOM}\) & 0.42 & 0.33 & 0.30 & 0.29 & 0.14 & **-0.27** \\ \(t\)-stat & 5.75 & 5.59 & 5.20 & 4.54 & 2.21 & **-2.49** \\ \hline \hline \end{tabular} \end{table} Table A4: **Portfolio Sorts from Loadings on Common Fears, Common Good Fears, and Common Bad Fears Controlling for Market Capitalization** Notes: This table shows value-weighted portfolios that we sort on loadings to: i) common fears in Panel A; ii) common good fears in Panel B; and iii) common bad fears in Panel C whilst controlling for market capitalization. In each Panel, we report monthly excess returns for quintile portfolios and the long-short portfolio that goes long the portfolio of stocks with a high loading on common fears and short the portfolio of stocks with low loadings to common fears. We also report the risk adjusted returns from the Fama-French 5 factor model and the Fama-French 5-factor model accounting for Momentum. 
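For readers who want to reproduce the mechanics of these portfolio sorts, the sketch below forms value-weighted quintile portfolios from pre-estimated loadings and computes the 5-1 long-short return. The input DataFrame and its column names are illustrative assumptions, and the sketch omits the risk adjustment and the control-variable neutralization used in the tables.

```python
import pandas as pd

def quintile_sort(panel: pd.DataFrame) -> pd.DataFrame:
    """Monthly value-weighted quintile portfolios sorted on a pre-estimated loading.

    panel: one row per stock-month with illustrative columns
           'month', 'beta' (loading on common fears), 'ret' (next-month excess
           return), and 'mcap' (market capitalization used as the weight).
    """
    rows = []
    for month, df in panel.groupby("month"):
        # Assign quintiles 1 (low loading) to 5 (high loading).
        df = df.assign(q=pd.qcut(df["beta"], 5, labels=False) + 1)
        # Value-weighted return within each quintile.
        vw = {
            q: (g["ret"] * g["mcap"]).sum() / g["mcap"].sum()
            for q, g in df.groupby("q")
        }
        vw["5-1"] = vw[5] - vw[1]  # long high-loading, short low-loading stocks
        vw["month"] = month
        rows.append(vw)
    return pd.DataFrame(rows).set_index("month")

# The time-series means of these columns correspond to the quintile and 5-1
# entries reported in the portfolio-sort tables (before risk adjustment).
# print(quintile_sort(panel).mean())
```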
\begin{table} \begin{tabular}{l r r r r r r} \hline \hline & low \(\beta^{\Delta\mathbf{CF}}\) & \multicolumn{5}{c}{High \(\beta^{\Delta\mathbf{CF}}\)} \\ \hline **A:** Common Fears & 1 & 2 & 3 & 4 & 5 & 5-1 \\ \hline mean (\%) & 1.21 & 1.14 & 0.94 & 0.95 & 0.78 & **-0.43** \\ \(t\)-stat & 3.44 & 3.74 & 3.67 & 3.81 & 3.37 & **-2.42** \\ \(\alpha^{\mathrm{FF5}}\) & 0.40 & 0.38 & 0.28 & 0.32 & 0.16 & **-0.23** \\ \(t\)-stat & 4.97 & 6.01 & 4.78 & 4.71 & 2.25 & **-1.87** \\ \(\alpha^{\mathrm{FF5}+\mathrm{MOM}}\) & 0.40 & 0.38 & 0.28 & 0.31 & 0.15 & **-0.25** \\ \(t\)-stat & 5.08 & 5.96 & 4.61 & 5.05 & 2.48 & **-2.28** \\ \hline \hline **B:** Common Good Fears & 1 & 2 & 3 & 4 & 5 & 5-1 \\ \hline mean (\%) & 1.10 & 1.12 & 1.07 & 0.94 & 0.79 & -0.31 \\ \(t\)-stat & 3.09 & 3.47 & 3.89 & 3.93 & 3.67 & -1.44 \\ \(\alpha^{\mathrm{FF5}}\) & 0.32 & 0.35 & 0.36 & 0.33 & 0.20 & -0.12 \\ \(t\)-stat & 2.95 & 5.06 & 6.91 & 5.35 & 1.94 & -0.65 \\ \(\alpha^{\mathrm{FF5}+\mathrm{MOM}}\) & 0.32 & 0.36 & 0.35 & 0.33 & 0.18 & -0.14 \\ \(t\)-stat & 3.07 & 5.11 & 6.99 & 5.32 & 2.11 & -0.87 \\ \hline \hline **C:** Common Bad Fears & 1 & 2 & 3 & 4 & 5 & 5-1 \\ \hline mean (\%) & 1.21 & 1.10 & 0.97 & 0.92 & 0.79 & **-0.42** \\ \(t\)-stat & 3.60 & 3.82 & 3.74 & 3.56 & 3.28 & **-2.66** \\ \(\alpha^{\mathrm{FF5}}\) & 0.41 & 0.40 & 0.30 & 0.27 & 0.17 & **-0.24** \\ \(t\)-stat & 5.51 & 5.51 & 4.84 & 3.98 & 2.45 & **-2.17** \\ \(\alpha^{\mathrm{FF5}+\mathrm{MOM}}\) & 0.41 & 0.40 & 0.29 & 0.26 & 0.15 & **-0.25** \\ \(t\)-stat & 5.53 & 5.22 & 4.85 & 4.37 & 2.59 & **-2.45** \\ \hline \hline \end{tabular} \end{table} Table A5: **Portfolio Sorts from Loadings on Common Fears, Common Good Fears, and Common Bad Fears Controlling for Trading Volume** Notes: This table shows value-weighted portfolios that we sort on loadings to: i) common fears in Panel A; ii) common good fears in Panel B; and iii) common bad fears in Panel C whilst controlling for trading volume. In each Panel, we report monthly excess returns for quintile portfolios and the long-short portfolio that goes long the portfolio of stocks with a high loading on common fears and short the portfolio of stocks with low loadings to common fears. We also report the risk adjusted returns from the Fama-French 5 factor model and the Fama-French 5-factor model accounting for Momentum. ### Adding Alternative Alternative Test Assets II Table A9: **Fama-MacBeth Analysis; Alternative Test Assets: Size/Investment, Size,/Book to Market, Size/Market Risk Premium, Size/Operating Profit, Size/Residual Variance, Size/Total Variance** This table shows the Fama-MacBeth two-pass regression analysis for the 25 portfolios that sort on: i) size/investment (ME/INV); ii) size/book-to-market (ME/BM); iii) size/market risk premium (ME/MKT); iv) size/operating profit (ME/OP); v) size/residual variance (ME/RES VAR); and vi) size/total variance (ME/VAR), and the 25 respective portfolios we construct on loadings to: common fears, \(\beta^{\Delta\mathbf{CF}}\), and market fears, \(\beta^{\Delta\mathbf{MF}}\),in columns 1-5; common good fears, \(\beta^{\Delta\mathbf{CF}^{+}}\) and good market fears, \(\beta^{\Delta\mathbf{MF}^{+}}\), in columns 6-10; and common bad fears, \(\beta^{\Delta\mathbf{CF}^{-}}\), and bad market fears, \(\beta^{\Delta\mathbf{MF}^{-}}\), in columns 11-15. \(\lambda_{\mathbf{CF}}\) denotes the corresponding risk premium estimates for common fears, common good fears and common bad fears. All results control initially for the Fama-French 5 factors. 
MKT is the market risk premium; SMB is small-minus-big; HML is high-minus-low; RMW is robust-minus-weak; CMA is conservative minus aggressive). Results in columns 1-4, 6-9, 11-14 use the Fama-French 5 factors plus: the variance risk premium (VRP); momentum (MOM), common idiosyncratic volatility (CIV) (Herskovic et al., 2016); and liquidity (LIQ) (Pastor and Stambaugh, 2003). Columns 5, 10, 15 add all aforementioned additional controls to the Fama-French 5 factor model. Below risk premia estimates we report Newey and West \(t\)-statistics with 12 lags that adjust for errors in variables as in Shanken (1992). \(\tilde{R}^{2}\) is the adjusted R-squared.** ### Additional Results Controlling for Market Fears \begin{table} \begin{tabular}{l r r r r r r r r r r r r r r r r} \hline \hline & & \multicolumn{10}{c}{25 ME/INV, 25 ME/BM, 25 ME/MKT, 25 ME/OP 25 ME/RES VAR, 25 ME/VAR, Plus:} \\ & \multicolumn{10}{c}{\(\beta^{\Delta\mathbf{CF}}/\beta^{\Delta\mathbf{MF}}\)} & \multicolumn{10}{c}{\(\beta^{\Delta\mathbf{MF}^{+}}\)} & \multicolumn{10}{c}{\(\beta^{\Delta\mathbf{MF}^{-}}/\beta^{\Delta\mathbf{MF}^{-}}\)} \\ \hline & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 \\ \hline \(\lambda_{0}\) & 0.66 & 0.71 & 0.65 & 0.67 & 0.79 & 0.67 & 0.74 & 0.68 & 0.66 & 0.70 & 0.70 & 0.76 & 0.68 & 0.73 & 0.80 \\ \(t\)-stat & 2.69 & 3.67 & 3.26 & 2.84 & 4.24 & 2.44 & 3.11 & 2.64 & 2.39 & 2.78 & 2.62 & 3.62 & 2.90 & 2.79 & 3.81 \\ \(\lambda_{\mathbf{CF}}\) & -0.13 & -0.13 & -0.11 & -0.09 & -0.13 & -0.14 & -0.14 & -0.12 & -0.11 & **-0.15** & **-0.14** & **-0.14** & **-0.15** & **-0.14** \\ \(t\)-stat & -1.85 & -1.83 & -1.85 & -1.60 & -1.39 & -1.88 & -1.82 & -1.82 & -1.67 & -1.58 & **-2.11** & **-2.01** & **-2.02** & **-2.06** & **-2.05** \\ \(\lambda_{\mathbf{MF}}\) & 0.04 & 0.06 & 0.05 & 0.09 & 0.05 & -0.01 & -0.02 & -0.01 & 0.00 & 0.00 & 0.06 & 0.06 & 0.05 & 0.07 & 0.05 \\ \(t\)-stat & 0.77 & 0.89 & 0.72 & 1.27 & 0.91 & -0.34 & -0.51 & -0.37 & 0.09 & 0.13 & 1.18 & 0.93 & 0.74 & 1.16 & 1.07 \\ \(\lambda_{MKT}\) & 0.37 & 0.32 & 0.38 & 0.36 & 0.23 & 0.36 & 0.28 & 0.35 & 0.36 & 0.32 & 0.32 & 0.25 & 0.34 & 0.28 & 0.21 \\ \(t\)-stat & 0.91 & 0.89 & 0.97 & 0.85 & 0.69 & 0.95 & 0.86 & 1.01 & 0.97 & 1.05 & 0.78 & 0.79 & 0.91 & 0.68 & 0.68 \\ \(\lambda_{SMB}\) & 0.17 & 0.16 & 0.17 & 0.17 & 0.17 & 0.18 & 0.17 & 0.18 & 0.18 & 0.18 & 0.18 & 0.18 & 0.19 & 0.19 \\ \(t\)-stat & 1.02 & 0.99 & 1.00 & 0.99 & 1.04 & 1.09 & 1.07 & 1.09 & 1.07 & 1.11 & 1.09 & 1.09 & 1.10 & 1.13 & 1.13 \\ \(\lambda_{HML}\) & -0.26 & -0.27 & -0.26 & -0.23 & -0.24 & -0.22 & -0.23 & -0.22 & -0.20 & -0.21 & -0.26 & -0.26 & -0.23 & -0.26 \\ \(t\)-stat & -1.12 & -1.17 & -1.14 & -1.04 & -1.06 & -0.95 & -0.98 & -0.95 & -0.88 & -0.92 & -1.13 & -1.15 & -1.15 & -1.04 & -1.13 \\ \(\lambda_{RMW}\) & 0.21 & 0.22 & 0.21 & 0.20 & 0.21 & 0.21 & 0.22 & 0.21 & 0.23 & 0.22 & 0.19 & 0.21 & 0.19 & 0.18 & 0.18 \\ \(t\)-stat & 1.33 & 1.40 & 1.34 & 1.24 & 1.41 & 1.40 & 1.54 & 1.49 & 1.52 & 1.53 & 1.24 & 1.34 & 1.21 & 1.14 & 1.28 \\ \(\lambda_{CMA}\) & -0.17 & -0.18 & -0.17 & -0.16 & -0.16 & -0.13 & -0.13 & -0.13 & -0.14 & -0.14 & -0.20 & -0.20 & -0.20 & -0.20 \\ \(t\)-stat & -1.31 & -1.33 & -1.30 & -1.23 & -1.20 & -0.99 & -0.97 & -0.99 & -1.06 & -1.09 & -1.48 & -1.48 & -1.49 & -1.46 \\ \(\lambda_{VRP}\) & 0.00 & & & -0.08 & 0.04 & & & -0.05 & 0.03 & & & & -0.04 \\ \(t\)-stat & -0.01 & & & -1.00 & 0.34 & & & -0.51 & 0.37 & & & & -0.44 \\ \(\lambda_{MOM}\) & -0.11 & & & -0.25 & & -0.17 & & & -0.18 & & & -0.09 & & & -0.22 \\ \(t\)-stat & -0.21 & & & -0.49 & -0.35 
& & & -0.36 & & & -0.18 & & & -0.42 \\ \(\lambda_{LV}\) & & & -0.14 & & 0.44 & & & -0.19 & 0.30 & & & & 0.53 & & 0.94 \\ \(t\)-stat & & & -0.12 & & 0.47 & & & -0.19 & & 0.37 & & & 0.44 & & 0.95 \\ \(\lambda_{LIQ}\) & & & -0.21 & -0.27 & & & & -0.21 & -0.26 & & & & -0.16 & -0.19 \\ \(t\)-stat & & & & -2.21 & -3.06 & & & -2.17 & -2.97 & & & & -1.82 & -2.43 \\ \hline \(\widehat{R}^{2}\) & 0.455 & 0.458 & 0.453 & 0.555 & 0.593 & 0.433 & 0.442 & 0.433 & 0.547 & 0.569 & 0.516 & 0.518 & 0.517 & 0.588 & 0.598 \\ \hline \hline \end{tabular} \end{table} Table 11: **Fama-MacBeth Analysis; Alternative Test Assets: Size/Investment, Size,/Book to Market, Size/Market Risk Premium, Size/Operating Profit, Size/Residual Variance, Size/Total Variance: Accounting for Market Fears**
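As a rough illustration of the two-pass Fama-MacBeth procedure behind these tables, the sketch below runs a cross-sectional OLS of test-asset returns on pre-estimated betas in every period and averages the resulting coefficients. It deliberately omits the Shanken errors-in-variables correction and the Newey-West adjustment reported in the paper, and all variable names are illustrative.

```python
import numpy as np

def fama_macbeth(returns, betas):
    """Second pass of a Fama-MacBeth regression.

    returns : (T, N) array of test-asset excess returns
    betas   : (N, K) array of first-pass loadings on the K factors
              (e.g., common fears, market fears, and the FF5 factors)
    Returns the time-series average risk premia and naive t-statistics.
    """
    T, N = returns.shape
    X = np.column_stack([np.ones(N), betas])      # intercept plus betas
    lambdas = np.empty((T, X.shape[1]))
    for t in range(T):                            # cross-sectional OLS each period
        lambdas[t] = np.linalg.lstsq(X, returns[t], rcond=None)[0]
    prem = lambdas.mean(axis=0)
    tstat = prem / (lambdas.std(axis=0, ddof=1) / np.sqrt(T))
    return prem, tstat

# prem[0] is the zero-beta rate (lambda_0); the remaining entries are the
# factor risk premia, e.g. the premium on the common-fear loading.
```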
We are able to identify a new type of risk, common firm-level investor fears, from commonalities within the cross-sectional distribution of individual stock options. We define firm-level fears that link with upward price movements as good fears, and those relating to downward price movements as bad fears. Such information is different from market fears that we extract from index options. Stocks with high sensitivities to common firm-level investor fears earn lower returns, with investors demanding a higher compensation for exposure to common bad fears relative to common good fears. Risk premium estimates for common bad fears range from -5.63% to -4.92% per annum.
2303.17946
Social Honeypot for Humans: Luring People through Self-managed Instagram Pages
Social Honeypots are tools deployed in Online Social Networks (OSN) to attract malevolent activities performed by spammers and bots. To this end, their content is designed to be of maximum interest to malicious users. However, by choosing an appropriate content topic, this attractive mechanism could be extended to any OSN users, rather than only luring malicious actors. As a result, honeypots can be used to attract individuals interested in a wide range of topics, from sports and hobbies to more sensitive subjects like political views and conspiracies. With all these individuals gathered in one place, honeypot owners can conduct many analyses, from social to marketing studies. In this work, we introduce a novel concept of social honeypot for attracting OSN users interested in a generic target topic. We propose a framework based on fully-automated content generation strategies and engagement plans to mimic legit Instagram pages. To validate our framework, we created 21 self-managed social honeypots (i.e., pages) on Instagram, covering three topics, four content generation strategies, and three engaging plans. In nine weeks, our honeypots gathered a total of 753 followers, 5387 comments, and 15739 likes. These results demonstrate the validity of our approach, and through statistical analysis, we examine the characteristics of effective social honeypots.
Sara Bardi, Mauro Conti, Luca Pajola, Pier Paolo Tricomi
2023-03-31T10:20:24
http://arxiv.org/abs/2303.17946v1
# Social Honeypot for Humans: ###### Abstract Social Honeypots are tools deployed in Online Social Networks (OSN) to attract malevolent activities performed by spammers and bots. To this end, their content is designed to be of maximum interest to malicious users. However, by choosing an appropriate content topic, this attractive mechanism could be extended to _any_ OSN users, rather than only luring malicious actors. As a result, honeypots can be used to attract individuals interested in a wide range of topics, from sports and hobbies to more sensitive subjects like political views and conspiracies. With all these individuals gathered in one place, honeypot owners can conduct many analyses, from social to marketing studies. In this work, we introduce a novel concept of social honeypot for attracting OSN users interested in a generic target topic. We propose a framework based on fully-automated content generation strategies and engagement plans to mimic legit Instagram pages. To validate our framework, we created 21 self-managed social honeypots (i.e., pages) on Instagram, covering three topics, four content generation strategies, and three engaging plans. In nine weeks, our honeypots gathered a total of 753 followers, 5387 comments, and 15739 likes. These results demonstrate the validity of our approach, and through statistical analysis, we examine the characteristics of effective social honeypots. Social Networks Social Honeypots Instagram User Profiling Artificial Intelligence Privacy. ## 1 Introduction In recent years, Social Network Analysis (SNA) has emerged as a powerful tool for studying society. The large amount of relational data produced by Online Social Networks (OSN) has greatly accelerated studies in many fields, including modern sociology [1], biology [2], communication studies [3], and political science [4]. SNA success can be attributed to the exponential growth and popularity OSN faced [5], with major OSN like Facebook and Instagram (IG) having billions of users [6, 7]. Researchers developed a variety of tools for SNA [8]; however, elaborating the quintillion bytes of data generated every day [9] is far from trivial [10]. The computational limitations compel scientists to conduct studies on sub-samples of the population, often introducing bias and reducing the quality of the results [11]. Furthermore, the reliability of data is hindered by adversarial activities perpetuated over OSN [12, 13], such as the creation of fake profiles [14], crowdturfing campaigns [15, 16], or spamming [17, 18, 19]. Back in the years, cybersecurity researchers proposed an innovative approach to overcome the computational limitation in finding malicious activity in OSN (e.g., spamming), by proposing social honeypots [20, 21, 22]: profiles or pages created ad-hoc to lure adversarial users, analyze their characteristics and behavior, and develop appropriate countermeasures. Thus, their search paradigm in OSN shifted from "look for a needle in the haystack" (i.e., searching for spammers among billions of legit users) to "the finer the bait, the shorter the wait" (i.e., let spammers come to you). MotivationThe high results achieved by such techniques inspired us to generalize the approach, gathering in a _single_ place _any target users_ we wish to study. Such a framework's uses are various, from the academic to the industrial world. 
First, _profilation_ or _marketing_ toward target topics: IG itself provides page owners to know aggregated statistics (e.g., demographic) of their followers and users that generate engagement.2 Second, _social cybersecurity analytics_: researchers or police might deploy social honeypots on sensitive themes to attract and analyze the behavior of people who engage with them. Examples of themes are fake news and extremism (e.g., terrorism). Although our "general" social honeypot may be used either benignly (e.g., to find misinformers) or maliciously (e.g., to find vulnerable people to scam), in this paper, we only aim to examine the feasibility of such a tool, and its effectiveness. Moreover, we investigate whether this technique can be fully automated, limiting the significant effort of creating a popular IG page [24]. We focus on IG given its broad audience and popularity. Furthermore, IG is the most used social network for marketing purposes, with nearly 70 percent of brands using IG influencers (even virtual [25]) for their marketing campaigns [26]. Footnote 2: Instagram API provides to the owner aggregated statistics of followers (gender, age, countries) when their page reaches 100 followers [23]. ContributionIn this work, we present an automated framework to attract and collect legitimate people in social honeypots. To this aim, we developed several strategies to understand and propose guidelines for building effective social honeypots. Such strategies consider both _how to generate content automatically_ (from simple to advanced techniques), and _how to engage with the OSN_ (from naive to complex interactions). In detail, we deployed 21 honeypots and maintained them for nine weeks. Our four content generation strategies involve state-of-the-art Deep Learning techniques, and we actively engage with the network following three engagement plans. The main contributions of our paper can be summarized as follows: * We define a novel concept of Social Honeypot, i.e., a flexible tool to gather _real people_ on IG interested in a target topic, in contrast to previous studies focusing on malicious users or bots; * We propose four automatic content generation strategies and three engagement plans to build self-maintained IG pages; * We demonstrate the quality of our proposal by analyzing our 21 IG social honeypots after a nine weeks period. OutlineWe begin our work discussing related works (SS2). Then, we present our methodology and implementation in SS3 and SS4. In SS5, we evaluate the effectiveness of our honeypots, while SS6 presents social analyses. We discuss the use cases of our approach and its challenges in SS7 and conclude the paper in SS8. ## 2 Related Works #### 2.0.1 Honeypot Honeypots are decoy systems that are designed to lure potential attackers away from critical systems [27]. Keeping attackers in the honeypot long enough allows to collect information about their activities and respond appropriately to the attack. Since legit users have no valid reason to interact with honeypots, any attempt to communicate with them will probably be an attack. Server-side honeypots are mainly implemented to understand network and web attacks [28], to collect malware and malicious requests [29], or to build network intrusion detection systems [30]. Conversely, client-side honeypots serve primarily as a detection tool for compromised (web) servers [31, 32]. #### 2.0.2 Social Honeypot Today, honeypots are not limited to fare against network attacks. 
Social honeypots aim to lure users or bots involved in illegal or malicious activities perpetuated on Online Social Networks (OSN). Most of the literature focused on detecting spamming activity, i.e., unsolicited messages sent for purposes such as advertising, phishing, or sharing undesired content [22]. The first social honeypot was deployed by Webb et al. [20] on MySpace. They developed multiple identical honeypots operated in several geographical areas to characterize spammers' behavior, defining five categories of spammers. Such work was extended to Twitter by Lee et al. in 2010 [21], identifying five more spammers' categories, and proposing an automatic tool to distinguish between spammers and legit users. Stringhini et al. [22] proposed a similar work on Facebook, using fake profiles as social honeypots. Similarly to previous works, these profiles were passive, i.e., they just accepted incoming friend requests. Their analysis showed that most spam bots follow identifiable patterns, and only a few of them act stealthily. De Cristofaro et al. [33] investigated Facebook Like Farms using social honeypots, i.e., blank Facebook pages. In their work, they leveraged demographic, temporal, and social characteristics of likers to distinguish between genuine and fake engagement. The first "active" social honeypot was developed on Twitter by Lee et al. [34], tempting, profiling, and filtering content polluters in social media. These social honeypots were designed to not interfere with legitimate users' activities, and learned patterns to discriminate polluters and legit profiles effectively. 60 honeypots online for seven months gathered 36'000 interactions. More active social honeypots were designed by Yang et al. [35]), to provide guidelines for building effective social honeypots for spammers. 96 honeypots online for five months attracted 1512 accounts. Last, pseudo-honeypots were proposed by Zhang et al. [36], which leveraged already popular Twitter users to attract spammers efficiently. They run 1000 honeypots for three weeks, reaching approximately 54'000 spammers. #### 2.0.3 Differences with previous work To date, social honeypots have been mainly adopted to detect spammers or bot activities. The majority of research focused on Twitter, and only a few works used other social networks like Facebook. There are several reasons behind this trend. First, spamming is one of the most widespread malicious activities on social networks because it can lead to other more dangerous activities. Second, Twitter APIs and policies facilitate data collection, and there are widely adopted Twitter datasets that can be used for further analysis. To the best of our knowledge, there are no works that utilize social honeypots on Instagram, perhaps because it is difficult to distribute, maintain and record honeypots' activities on this social network. Moreover, our goal is to attract _legit users_ rather than spammers, which is radically different from what was done insofar. Indeed, many analyses could be easier to conduct by gathering people in one place (e.g., an IG page). For instance, a honeypot could deal with peculiar topics to simplify community detection [37], could advertise a product to grasp consumer reactions [38], understand political views [39], analyze and contrast misinformation [40], conspiracies [41], and in general, carry out any Social Network Analytics task [42]. 
Last, owners of IG pages can see the demographic information of their followers (inaccessible otherwise), which is extremely helpful (or dangerous) information for further social or marketing analyses [43].

## 3 Methodology

### Overview & Motivation

The purpose of our social honeypots is to attract people interested in a target topic. The methodology described in this section is intended for Instagram (IG) pages, but it can be extended to any generic social network (e.g., Facebook) with minor adjustments. We define the social honeypot as a combination of three distinct components: (i) the honeypot _topic_ that defines the theme of the IG page (§3.2); (ii) the _generation strategy_ for creating posts related to a target topic (§3.3); (iii) the _engagement plan_ that describes how the honeypot will engage the rest of the social network (§3.4). Figure 1 depicts the social honeypot pipeline. Our study examines different types of honeypots with a variety of topics, generation strategies, and engagement plans, outlined in the rest of this section. Our experiments aim to answer the following research questions:

1. Can self-managed social honeypots generate engagement on Instagram?
2. How do the topic selection, post generation strategy, and engagement plan affect the success of a social honeypot?
3. How much effort (computation and costs) is required to build an effective social honeypot?

The remainder of the section describes the strategies we adopt in our investigation, along with technical implementation details.

### Topic Selection

Building a honeypot begins with selecting the topic of its posts. Such a choice will impact the type of users we will attract. The topic's nature might vary, from hobbies and passions like sports and music to sensitive issues like political views and conspiracies. As an example, if we wish to promote a new product of a particular brand, the topic might be the type of product we intend to promote. Alternatively, if we intend to develop a tool for spam detection, we should choose a topic that is interesting to spammers. This will ensure that they will be attracted to the honeypot's content. We can even design honeypots with generic topics that can be used for marketing profiling or social studies. In conclusion, the topic should be chosen in accordance with the honeypot's ultimate purpose.

### Post Generation Strategies

The generative process aims to create posts pertaining to the honeypot topic. A two-part artifact is produced: the _visual_ component of the post (i.e., the image), and its _caption_. We propose four distinct methods to generate posts, each with its own characteristics and algorithms. For ethical reasons, we excluded techniques that might violate the authors' copyright (e.g., re-posting). However, unscrupulous honeypot creators could conveniently use these strategies. In this section, we provide a high-level view of the strategies to serve as a framework. For technical implementation details (e.g., the actual models we used), please refer to Appendix A. Since this stage involves deep generative models that might produce artifacts affecting the post quality, the owner can approve a post or request a new one with negligible effort.

#### 3.3.1 InstaModel

_InstaModel_ is a generative schema that leverages machine learning techniques to generate both images and captions. Figure 2(a) shows its overview.
The schema begins by retrieving one starting post among the 25 most popular IG posts for a popular hashtag related to the honeypot topic.3 Next, the pipeline performs, in order, caption generation and image generation steps. Figure 2: Overview of Post Generation strategies. * _Caption Generation._ The algorithm uses an _Object Detector_ tool4 to extract the relevant elements of the starting post's image. In the absence of meaningful information (e.g., is a meme or unrelated to the topic)5, we discard that image. When this occurs, the algorithm restarts and uses another sample from the top 25. If the image is kept, the algorithm uses the list of resulting elements (i.e., keywords) to generate a sentence, leveraging a _keyword-to-text_ algorithm. Note that we discard from the keywords list those elements with very low probability. The output of the _keyword-to-text_ phase (i.e., the new caption) is further refined to align with IG captions, for example, by adding emojis and hashtags, as presented in SS3.4. Footnote 4: Object detectors are Computer Vision-based tools that identify objects composing a given scene. Each object is accompanied by a probability score. * _Image Generation._ The caption generated in the previous step serves as input to produce the post image. To achieve this goal, we use _text-to-image_ models, i.e., algorithms that produce more images from a single input. An operator would choose the most appropriate option or a random option in such a case. We remark that _InstaModel_ severely adopts generative models. Indeed, we used state-of-the-art computer vision, NLP, and image generation models for object detection, text generation, and image generation, respectively. #### 3.3.2 ArtModel _ArtModel_ leverages the ability of novel _text-to-image_ generative models (e.g., DALL-E) to interpret artistic keywords as inputs. Figure 2b shows the overview of the model. Similarly to InstaModel, the process starts by generating a caption, and, subsequently, the image. * _Caption Generation._ Differently from _InstaModel_, the input to generate the caption does not come from other IG posts. Instead, we randomly select the target keyword (e.g., cat), the artistic style of the picture (e.g., Picasso, impressionism), and a medium (e.g., painting, sketch). We create a single sentence by filling pre-defined templates with such three keywords, and add emojis and hashtags as for _InstaModel_. * _Image Generation._ Similar to _InstaModel_, the caption (without emojis and hashtags) serves as input for a _text-to-image_ model, which generates the final image. #### 3.3.3 UnsplashModel This algorithm employs DL models only to generate the caption. In opposition to _InstaModel_ and _ArtModel_, _UnsplashModel_ starts from the image generation, and then generates the caption (Figure 2c). * in this case, Unsplash6. The search is based on a randomly selected keyword that reflects the target topic, from a list defined by the owner. Footnote 6: [https://unsplash.com/](https://unsplash.com/) * _Caption Generation._ Unsplash images are usually accompanied by captions free of license. We further refine the caption with a _rephrase_ model, and add emojis and hashtags as for the previous models. #### 3.3.4 QuotesModel Last, we present _QuotesModel_, a variant of _UnsplashModel_, presented in Figure 2d. The objective of this strategy is to determine whether AI-based techniques are necessary to generate attractive IG posts. 
Therefore, this model does not involve the use of artificial intelligence to create captions and images. In addition, using quotes to caption photos is a widespread strategy [44].

* _Image Generation._ The image generation process is the same as _UnsplashModel_, involving stock images.
* _Caption Generation._ The caption is a quote from a famous person (e.g., Steve Jobs). Quotes are retrieved from a pool of 1665 quotes [45].

### Engagement Plans

Lastly, the engagement plan defines how the social honeypot interacts with the rest of the social network (e.g., other users or pages). We defined three plans, varying in the effort required to maintain interactions and in whether paid strategies are involved:

* _PLAN 0_: low interactions and no paid strategies;
* _PLAN 1_: high interactions and no paid strategies;
* _PLAN 2_: high interactions and paid strategies.

#### 3.4.1 Plan 0

The plan does not involve automatic interactions with the rest of the social network. At most, the owner replies to comments left under the honeypot's posts. The plan uses the well-known _Call To Actions_ (CTA) [46] in the posts. Such a strategy consists of creating captions that stimulate users' engagement (e.g., liking, commenting, sharing the post). Examples are captions containing simple questions (e.g., 'How was your day?'), polls and quizzes (e.g., 'What should I post next?'), or exhorting users to share their opinions (e.g., 'What do you think about it?'). Following the best caption strategies for IG posts [47], we added 15 random hashtags related to our topic, 8 with broad coverage and 7 with medium-low coverage. More details about the hashtag selection are in Appendix A. In this plan, paid strategies are not involved.

#### 3.4.2 Plan 1

The plan is a variant of _PLAN 0_ with explicit social networking interactions. We call these actions _spamming_. The spamming consists of automatically leaving likes and comments on the top 25 posts related to the topic (as described in _InstaModel_). Comments resemble legit users (e.g., 'So pretty!') and not spammers (e.g., 'Follow my page!'), and were randomly picked from a list we manually created by observing comments usually left under popular posts. The goal of such activities is to generate engagement with the owners of popular posts, hoping to redirect this stream to the honeypot. When a user follows us, we follow back with a probability of 0.5, increasing the page's number of followings and resembling a legit page. During our experiments, we also adopted a more aggressive (and effective) spamming strategy called _Follow & Unfollow_ (F&U) [48], consisting of randomly following users, often causing a follow back, and then removing the follow after a couple of days. To not be labeled as spammers, we constantly respected the balance #following \(<\) #followers. In this plan, paid strategies are not involved.

#### 3.4.3 Plan 2

This plan increments _PLAN 1_ with two paid strategies.

_Buying followers._ When we create a honeypot, we buy \(N\) followers. In theory, highly followed pages might encourage users to engage more and gain visibility from the IG algorithm [49]. Therefore, we aim to understand if an initial boost of followers can benefit honeypots. Such followers will be discarded during our analyses. We set \(N=100\), and we buy passive followers only.7

Footnote 7: Passive followers only follow the page, but they do not engage further.

_Content sponsoring._ IG allows sponsoring posts for a certain amount of time. The target population can be automatically defined by IG, or chosen by the owner w.r.t. age, location, and interests.
Since we are interested in studying the population attracted by our content, rather than attracting a specific category of users, we let IG decide our audience, directly exploiting its algorithms to make our honeypots successful. ## 4 Implementation ### Topic Selection We investigate the honeypots' effectiveness over three distinct topics: _food_, _cat_, and _car_. We selected such topics to account for different audience sizes, measured by coverage levels. Coverage is a metric that counts the total number of posts per hashtag or, in other words, the total number of posts that contain that hashtag in their captions. This information is available on IG by just browsing the hashtag. More in detail, we selected: **Food** (high coverage, #food counts 493 million posts), **Cat** (medium coverage, #cat counts 270 million posts), and **Car** (low coverage, #car counts 93 million posts). We chose these topics, and not more sensitive ones, mainly for ethical reasons. Indeed, we did not want to boost phenomena like misinformation or conspiracies through our posts, nor identify people involved in these themes. However, we designed our methodology to be as general as possible, and adaptable to any topic with little effort. ### Testbed We deployed 21 honeypots on Instagram, seven for each selected topic (i.e., food, cat, and car), that we maintained for a total of nine weeks. Within each topic, we adopt all post generation strategies and engagement plans. For the post generation strategies, three honeypots use both InstaModel and ArtModel, three honeypots use UnsplashModel and QuotesModel, and one honeypot combines the four. Such division is based on the image generation strategy, i.e., if images are generated with or without Deep Learning algorithms. All posts were manually checked before uploading them on Instagram to prevent the diffusion of harmful or low-quality content. This was especially necessary for AI-generated content, whose low quality might have invalidated a fair comparison with non-AI content.8 Similarly, for the engagement plan, two honeypots adopt PLAN 0, two PLAN 1, and three PLAN 2. Table 1 summarizes the 21 honeypots settings. Given the nature of our post generation strategies and engagement plans, we set as baselines the honeypots involving _UnsplashModel + QuotesModel_ as generation strategy and _PLAN 0_ as engagement plan (h1, h8, h15). Indeed, these honeypots are the simplest ones, requiring almost no effort from the owner. Setting baselines is useful to appreciate the results of more complex methods, given that there are currently no baselines in the literature. Footnote 8: The effort for the honeypot manager is limited to a quick approval, which could not be necessary with more advanced state-of-the-art models, e.g., DALL-E 2 [50] or ChatGPT [51]. By following the most common guidelines [52, 53], each honeypot was designed to publish two posts per day, with at least 8 hours apart from each other. During the nine weeks of experiments, we varied PLAN 1 and PLAN 2. In particular, we started PLAN 1 with spamming only, and PLAN 2 with buying followers. During the last week, both plans adopted more aggressive strategies, specifically, PLAN 1 applied F&U techniques, while PLAN 2 sponsored the two most-popular honeypot posts for one week, paying E 2/day for each post. For our analyses, we collected the following information: * Total number of followers per day; * Total number of likes per post; * Total number of comments per post. 
Moreover, the IG API provided the gender, age, and geographical locations of the audience when applicable, as explained in §6.3.

\begin{table}
\begin{tabular}{r c c}
\hline \hline
**ID** & **Post Generation Strategy** & **Engagement Plan** \\
\hline \hline
\multicolumn{3}{c}{_food_} \\
\hline
h1 (baseline) & UnsplashModel + QuotesModel & PLAN 0 \\
h2 & UnsplashModel + QuotesModel & PLAN 1 \\
h3 & UnsplashModel + QuotesModel & PLAN 2 \\
h4 & InstaModel + ArtModel & PLAN 0 \\
h5 & InstaModel + ArtModel & PLAN 1 \\
h6 & InstaModel + ArtModel & PLAN 2 \\
h7 & All Models & PLAN 2 \\
\hline \hline
\multicolumn{3}{c}{_cat_} \\
\hline
h8 (baseline) & UnsplashModel + QuotesModel & PLAN 0 \\
h9 & UnsplashModel + QuotesModel & PLAN 1 \\
h10 & UnsplashModel + QuotesModel & PLAN 2 \\
h11 & InstaModel + ArtModel & PLAN 0 \\
h12 & InstaModel + ArtModel & PLAN 1 \\
h13 & InstaModel + ArtModel & PLAN 2 \\
h14 & All Models & PLAN 2 \\
\hline \hline
\multicolumn{3}{c}{_car_} \\
\hline
h15 (baseline) & UnsplashModel + QuotesModel & PLAN 0 \\
h16 & UnsplashModel + QuotesModel & PLAN 1 \\
h17 & UnsplashModel + QuotesModel & PLAN 2 \\
h18 & InstaModel + ArtModel & PLAN 0 \\
h19 & InstaModel + ArtModel & PLAN 1 \\
h20 & InstaModel + ArtModel & PLAN 2 \\
h21 & All Models & PLAN 2 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Honeypots deployed.

_Implementation Models._ In §3 we presented a general framework to create social honeypots. In our implementation, we employed state-of-the-art deep learning models in several steps. To extract keywords in _InstaModel_ we adopted InceptionV3 [54] as the object detector, pre-trained on ImageNet [55] with 1000 classes. From the original caption, we extracted nouns and adjectives through the NLTK Python library9. As the _keyword-to-text_ algorithm, we adopted Keytotext [56], based on the T5 model [57]; for the _text-to-image_ processes we opted for DALL-E Mini [58]. Finally, in _UnsplashModel_, the rephrase task was performed using the Pegasus model [59].

Footnote 9: [https://www.nltk.org/](https://www.nltk.org/)
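A minimal sketch of the _InstaModel_ caption step is shown below, assuming torchvision's pre-trained InceptionV3 for keyword extraction and a T5-based keyword-to-text checkpoint from the Hugging Face hub as a stand-in for Keytotext. The checkpoint name, the probability threshold, and the simplified keyword source (detector labels only, without the NLTK noun/adjective extraction) are assumptions for illustration, not the exact implementation.

```python
import torch
from PIL import Image
from torchvision.models import inception_v3, Inception_V3_Weights
from transformers import pipeline

# Assumed keyword-to-text checkpoint; any T5-based model fine-tuned for the task works.
K2T_MODEL = "gagan3012/k2t-base"

def generate_caption(image_path: str, min_prob: float = 0.10) -> str:
    """InstaModel-style caption step: image -> detected keywords -> sentence."""
    weights = Inception_V3_Weights.DEFAULT
    model = inception_v3(weights=weights)
    model.eval()

    # Classify the starting post's image and keep only confident labels.
    img = weights.transforms()(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(img), dim=1)[0]
    top = torch.topk(probs, k=5)
    keywords = [
        weights.meta["categories"][i]
        for p, i in zip(top.values.tolist(), top.indices.tolist())
        if p >= min_prob
    ]
    if not keywords:  # the image carries no usable information (e.g., a meme)
        raise ValueError("no confident keywords; pick another starting post")

    # Keyword-to-text: turn the labels into a caption candidate.
    k2t = pipeline("text2text-generation", model=K2T_MODEL)
    caption = k2t(" ".join(keywords), max_length=40)[0]["generated_text"]
    return caption  # emojis and hashtags are appended afterwards (Section 3.4)

# print(generate_caption("starting_post.jpg"))
```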
## 5 Honeypots Evaluation

### Overall Performance

The first research question, _RQ1_, is whether social honeypots are capable of generating engagement. After nine weeks of execution, our 21 social honeypots gained 753 followers (avg 35.86 per honeypot), 5387 comments (avg 2.01 per post), and 15730 likes (avg 5.94 per post). In more detail, Table 2 (left side) shows the overall engagement performance at the varying of our three variables, i.e., topic, generation strategy, and engagement plan. The reader might notice not only that our honeypots _can_ generate engagement, answering _RQ1_ positively, but also that topic, generation strategy, and engagement plan have different impacts on the outcomes. For instance, _cat_ honeypots tend to have more followers and likes, while _car_ ones generate more comments. Similarly, _non-AI_ generation methods tend to receive more likes, as does _PLAN 1_. We investigate the effect of different combinations later in this section.

### Honeypot Trends Analysis

Social honeypots can generate engagement, but we are further interested in understanding the trends of such performance: _is honeypots' engagement growing over time?_ A honeypot with a positive trend will likely result in higher future attraction. On the opposite, a stationary trend implies limited opportunities to improve. The qualitative analysis reported in Figure 3 motivates the trend investigation. The figure presents the average number of likes per post gained by our honeypots over time, grouped by engagement plan. In general, PLAN 1 honeypots tend to attract more likes as they grow, followed by PLAN 2 and PLAN 0, in that order. In particular, a constantly increasing number of likes is shown by honeypots with PLAN 1, especially for food-related pages: starting from an average of \(\sim\)5 likes per post (week 1) to \(\sim\)12.5 likes per post (week 9).

We evaluate the presence of stationary trends by adopting the _Augmented Dickey-Fuller test_ (ADF) [60]. In this statistical test, the null hypothesis \(H_{0}\) corresponds to a non-stationary (unit-root) time series, while the alternative hypothesis \(H_{1}\) corresponds to a stationary one; rejecting \(H_{0}\) therefore indicates stationarity. We conducted the statistical test for each honeypot and the three engagement metrics: #Followers, #Likes, and #Comments. A \(p\)-value \(>0.05\) is used as the threshold for failing to reject \(H_{0}\). Table 2 (right side) reports the results of the analysis. The number of Followers and Likes is non-stationary in 19 and 17 cases out of 21, respectively. Conversely, the number of comments per post is stationary in most of the honeypots. This outcome suggests that engagement in terms of likes and followers varies over time (positively or negatively), while the number of comments is generally constant. As shown in Figure 3, and given that the final number of followers is higher than the zero followers at creation time, we can conclude that our honeypots present, in general, a growing engagement trend.

\begin{table}
\begin{tabular}{l|c c c|c c c}
\hline \hline
 & \multicolumn{3}{c|}{_Average Engagement_} & \multicolumn{3}{c}{_Engagement Trend_} \\
\hline
 & **\#Followers** & **\#Comments** & **\#Likes** & **\#Followers** & **\#Comments** & **\#Likes** \\
\hline
\multicolumn{7}{c}{_topic_} \\
\hline
food & 38.5\(\pm\)33.7 & 216.4\(\pm\)18.5 & 698.4\(\pm\)139.7 & 6/7 & 3/7 & 7/7 \\
cat & **47.4**\(\pm\)17.5 & 182.1\(\pm\)23.5 & **923.1**\(\pm\)214.8 & 6/7 & 2/7 & 4/7 \\
car & 21.9\(\pm\)9.7 & **371.0**\(\pm\)26.2 & 625.6\(\pm\)96.6 & 7/7 & 3/7 & 6/7 \\
\hline
\multicolumn{7}{c}{_generation strategy_} \\
\hline
AI & 37.9\(\pm\)30.9 & 248.4\(\pm\)49.6 & 654.2\(\pm\)138.3 & 7/9 & 4/9 & 6/9 \\
non-AI & 32.7\(\pm\)21.3 & **264.2**\(\pm\)90.6 & **64.2**\(\pm\)235.2 & 9/9 & 3/9 & 8/9 \\
Mixed & **39.3**\(\pm\)7.9 & 257.7\(\pm\)80.0 & 753.0\(\pm\)125.9 & 3/3 & 1/3 & 3/3 \\
\hline
\multicolumn{7}{c}{_engagement plan_} \\
\hline
PLAN 0 & 11.5\(\pm\)8.4 & **266.0**\(\pm\)105.8 & 641.3\(\pm\)210.7 & 4/6 & 4/6 & 5/6 \\
PLAN 1 & **60.0**\(\pm\)25.2 & 254.2\(\pm\)94.3 & **835.2**\(\pm\)210.7 & 6/6 & 2/6 & 4/6 \\
PLAN 2 & 36.0\(\pm\)14.0 & 251.8\(\pm\)79.1 & 763.4\(\pm\)206.1 & 9/9 & 2/9 & 8/9 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Honeypots overall performance. On the left side, we report the average (and std) engagement generated by the honeypots. On the right, we report the number of honeypots with a non-stationary trend. The results are reported based on the topic, generation strategy, and engagement plan.

### The Impact of Honeypots Configuration

We now investigate whether the three variables (i.e., topic, generation strategy, and engagement plan) have a statistical impact on the success of the honeypots, answering _RQ2_ and _RQ3_. Given the stationary trend of comments, we focus solely on likes per post and followers per honeypot.
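To make the stationarity check of the trend analysis above concrete, the sketch below applies the Augmented Dickey-Fuller test from statsmodels to a synthetic, upward-trending likes series; the series itself is illustrative, not observed data.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

# Illustrative series: daily average likes per post for one honeypot over 9 weeks.
rng = np.random.default_rng(0)
likes = np.linspace(5, 12.5, 63) + rng.normal(0, 0.8, 63)  # upward trend + noise

# ADF: H0 = unit root (non-stationary), H1 = stationary.
stat, p_value, *_ = adfuller(likes, autolag="AIC")
trend = "non-stationary (trending)" if p_value > 0.05 else "stationary"
print(f"ADF statistic = {stat:.2f}, p-value = {p_value:.3f} -> {trend}")
```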
#### 5.3.1 Likes Figure 4 depicts the distribution of honeypots Likes at the varying of the topic, generation strategy, and engagement plan. In general, there is a difference when the three variables are combined. For example, on average, honeypots belonging to cats, with non-AI generative models, and with PLAN1 or PLAN2 have higher values than the rest of the honeypots. Moreover, in general, honeypots adopting PLAN1 have higher results. To better understand the different impacts the three variables have on Likes, we conducted a three-way ANOVA. We found that both topic, engagement plan, and generation strategy are significantly (\(p\)-value \(<\) 0.001) influencing the Likes. Furthermore, we found significance even in the combination of topic and engagement plan (\(p\)-value \(<\) 0.001), but not in the other combinations. This result confirms the qualitative outcomes we have presented so far. We conclude the analysis by understanding which topic, generation strategy, and engagement plan are more effective. To this aim, we performed Tukey's HSD (honestly significant difference) test with significance level \(\alpha=5\%\). Among the three topics, _cat_ is significantly more influential than both _food_ and _car_ (\(p\)-value = 0.001). Regarding the generation strategies, non-AI-based models (i.e., UnsplashModel and InstaModel) outperform AI-based ones. Last, PLAN1 and PLAN2 outperform PLAN0 (\(p\)-value = 0.001), while the two plans do not show statistical differences between them. #### 5.3.2 Followers Tukey's HSD test revealed statistical differences in the number of followers as well. For the analysis, we use the number of followers of each honeypot at the end of the 9th week. We found that _cat_ statistically differ from _car_ (\(p\)-value \(<\) 0.01), while there are no significant differences between _cat_ and _food_, or _food_ and _car_. Regarding the generation strategy, we found no statistical difference among the groups. Finally, all three engagement plans have a significant impact on the number of followers (\(p\)-value = 0.001), where PLAN \(1>\) PLAN \(2>\) PLAN 0. Figure 4: Distribution of likes at the varying of topic, model generation strategy, and engagement plan. Figure 3: Likes trend of our honeypots grouped by engagement plan. #### 5.3.3 Aggressive engagement plans We recall that honeypots deployed with PLAN 1 and PLAN 2 adopted more aggressive engagement strategies on week 9th: _Follow & Unfollow_ for PLAN 1, and _Content Sponsoring_ for PLAN 2. Thus, we investigated whether aggressive plans result in more engagement in terms of comments, likes, and followers. The analysis is performed with Tukey's HSD (honestly significant difference) test with significance level \(\alpha=5\%\). We found no statistical difference in comments in PLAN 1 and PLAN 2. On the opposite, the average number of likes per post shows a statistically significant improvement in PLAN1 (\(p\)-value = 0.01): on average, 7.44 and 9.17 likes per post in weeks 8th and 9th, respectively. No statistical difference is found for PLAN 2; indeed, only the sponsored content benefited (i.e., a few posts).10 Last, we analyze the difference between the total amount of followers at the end of weeks 8th and 9th. PLAN 1 honeypots #Followers moved, on average, from \(45.7\pm 19.1\) of week 8th, to \(60.7\pm 26.2\) of week 9th, with no statistical difference. PLAN 2 honeypots #Followers moved, on average, from \(22.3\pm 11.6\) of week 8th, to \(30.7\pm 13.9\) of week 9th. 
The difference is statistically supported (\(p\)-value \(<\) 0.05).

Footnote 10: All sponsored content belongs to weeks before the 9th.

### Baseline Comparison

Social honeypots are effective, depending on topics, generation strategies, and engagement plans. Since we are the first, to the best of our knowledge, to examine how to attract _people_ using social honeypots (not bots or spammers), there are no state-of-the-art baselines to compare with. Therefore, we compare our methodology with (i) our proposed non-AI generative models with a PLAN 0 engagement strategy (baseline) and (ii) real Instagram page trends.

#### 5.4.1 Baseline

This represents the most simplistic method someone might adopt: adding stock images, with random quotes, without caring about the engagement with the rest of the social network. In §5.3, we statistically showed that the definition of engagement plans is essential to boost engagement in social honeypots. We remark on this concept with Figures 5 and 6, which show the comparison between the baselines and the PLAN 1 social honeypots (the most effective ones) in terms of likes and followers over the 9 weeks: counting both AI and non-AI strategies, our advanced honeypots outperform the baselines in 3 out of 6 cases for likes and in 6 out of 6 cases for followers. Such results confirm the remarkable performance of our proposed framework. Our strategies might perform worse than the baselines (regarding likes) when the image quality is unsatisfactory. Indeed, as demonstrated in our prior work [61], likes on IG are usually an immediate positive reaction to the post's image. Since Unsplash images are usually high-quality and attractive, they might have been more appealing than AI-generated images in these cases.

Although comparing our approach with other social honeypots [34, 35, 36] carries some inherent bias (the purpose and social networks are completely different), we still find our approach aligned with (or even superior to) the literature. Lee et al. [34] gained in seven months through 60 honeypots a total of \(\sim\)36000 interactions (e.g., follow, retweet, likes), which is approximately 21.5 interactions per honeypot/week. Our honeypots reached a total of 21870 interactions, which is approximately 115.7 interactions per honeypot/week, i.e., more than five times higher. Yang et al. [35] lured 1512 accounts in five months using 96 honeypots, i.e., 0.788 accounts per honeypot/week. We collected 753 followers, which is 3.98 accounts per honeypot/week, i.e., five times higher. Last, Zhang et al. [36] carefully selected and harnessed 1000 popular Twitter accounts (which they called pseudo-honeypots) for three weeks to analyze spammers. Given that these accounts were already heavily integrated into the social network, they reached over 476000 users, which is around 159 accounts per pseudo-honeypot per week. We remind the reader that the purpose of these comparisons is to give an idea of the effectiveness of social honeypots rather than to provide meaningful conclusions.

We now compare our PLAN 1 social honeypots with real IG public accounts. Accordingly, we analyzed the first nine weeks of activities on popular IG pages related to food, cat, and cars. We selected nine popular IG pages for each topic, 3 with \(\sim 10K\) followers, 3 with \(\sim 100K\) followers, and 3 with more than a million followers. We collected the number of comments and likes for each post published during this period.
Due to IG limitations, we could access only information available at the time of collection, implying that posts might be a few years old. Monitoring new pages would be meaningless since we do not know a priori whether they will become popular. We noticed that it is impossible to compare such baselines with our social honeypots because, generally, the considered IG pages contain posts with hundreds of likes and comments even in their first week of activity. For instance, \(+1M\) pages' first posts reached more than 2000 likes. Possible explanations behind this phenomenon are: (i) the considered 18 pages were already popular before their creation (e.g., on a different or older OSN like Facebook); (ii) the considered 18 pages massively sponsored all their content; (iii) we are facing the _earlybird bias_, where older posts contain not just engagement from the first nine weeks, but also engagement from later periods, even years.11 To further investigate this phenomenon, we contacted such IG pages (we extended our survey to 36 pages). Questions focused on the first weeks of activity.12 Unfortunately, up to the submission date, none of the contacted pages replied.

Footnote 11: Earlybird bias appears in other social contexts like online reviews [62].

Footnote 12: For instance, we asked whether the page resulted from an already existing page (on IG or other platforms), or the strategies they adopted to manage the pages (e.g., spam, sponsoring).

Although there is no evidence in the literature on how long it takes to make an Instagram page famous, most sources consider the initial growth (from 0 to 1000 followers) to be the most challenging part [63, 64], with an overall monthly growth rate of about 2% [65]. Furthermore, success requires lots of dedication to follow best practices consistently [66], which is extraordinarily time-consuming and far from trivial. Being in line with these trends in a fully automated and effortless manner is already an impressive achievement. Our work can serve as a baseline and inspiration for future work.

## 6 Social Analyses

### Comments analysis

An interesting (and unexpected) result is that, without the premeditated intention of building spammer detectors, most of the comments we received came from spammers. To estimate the total number of spam comments, we first manually identified patterns used by spammers on our honeypots (e.g., expressions like "send pic" or "DM us"). Afterward, using a pattern-matching approach, we found that 95.33% of the comments we received on our social honeypots came indeed from spammers. All spammers' accounts shared similar behavior in commenting: (i) there was always a mention '@' to other accounts, and (ii) they commented almost immediately after the post creation. Such considerations suggest these accounts are bots that target many recent posts, perhaps searching by specific hashtags. Such findings indicate that fresh pages could be a powerful tool to detect spammers with _minimal_ effort. We also highlight that spam comments are a well-known issue that affects the majority of IG pages [67] and is not limited to our honeypot pages. Therefore, we argue that creating pages that do not attract spammers is nearly impossible. Nevertheless, IG itself is employing and improving automatic screening mechanisms [68, 69] to limit such behavior. When such mechanisms are enhanced, our honeypots will become more accurate.

Figure 5: Baseline comparison (average likes) with PLAN1 social honeypots.

Figure 6: Baseline comparison (followers) with PLAN1 social honeypots.
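The pattern-matching step described in the comments analysis above can be implemented with a handful of regular expressions, as in the sketch below; the pattern list is a small illustrative subset, not the full set used in the study.

```python
import re

# Illustrative subset of spam patterns observed under the honeypots' posts.
SPAM_PATTERNS = [
    r"\bsend pic\b",
    r"\bdm (me|us)\b",
    r"\bfollow (me|my page)\b",
    r"\bcheck (out )?my profile\b",
    r"@\w+",  # spammers consistently mention other accounts
]
SPAM_RE = re.compile("|".join(SPAM_PATTERNS), flags=re.IGNORECASE)

def spam_ratio(comments: list[str]) -> float:
    """Fraction of comments matching at least one spam pattern."""
    flagged = sum(bool(SPAM_RE.search(c)) for c in comments)
    return flagged / len(comments) if comments else 0.0

# Example:
# print(spam_ratio(["So cute!", "DM us for a promo", "@friend look at this"]))
```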
Figure 6: Baseline comparison (followers) with PLAN1 social honeypots. ### Followers analysis As most of our comments were spam, we investigated whether the same held for our followers. We manually inspected the followers of the most followed social honeypot for each topic, identifying three categories of followers: * _Real people:_ users that publish general-topic posts, with less than 1000 followers13, and real profile pictures; Footnote 13: After 1000 followers, users are considered nano influencers [70]. * _Pages and Influencers:_ users that publish topic-specific posts (e.g., our honeypots) or with more than 1000 followers; * _Bots:_ users whose characteristics resemble a bot, following well-known guidelines [71], e.g., fake or absent profile picture, random username, highly imbalanced follower/following count, zero or few (\(<\) 5) posts. From Table 3, we notice the three honeypots have different audiences. The _food_ honeypot obtained the most real followers, _car_ reached more bots, and _cat_ was followed mainly by pages. These results confirmed that (i) our honeypots can reach real people, (ii) the audience category depends on the topic, and (iii) the spammers' threat is limited to comments. On an interesting note, most pages following our _cat_ honeypot were cat-related pages. ### Reached Audience We conclude the experimental results with a detailed analysis of the audience our honeypots reached. In particular, we performed two distinct analyses: (i) _Honeypot reached audience_, and (ii) _Sponsored posts audience_, i.e., IG features available for honeypots with 100 followers and for sponsored content, respectively. After nine weeks of operation, one honeypot satisfied the requirement of 100 followers (honeypot ID: h9). About the sponsored content, we obtained information about 9 posts (one per honeypot belonging to PLAN 2). #### 6.3.1 Honeypot audience The honeypot h9 (topic: food, generation strategy: AI, and engagement plan: PLAN 1) gained 103 followers: they are mostly distributed over the age ranges \([25-34]\), with 32% (equally split between men and women), and \([35-44]\), with 10% of women and 27% of men. Most followers came from India (11.7%), Bangladesh (10.7%), and Japan (9.7%). #### 6.3.2 Sponsored posts audience For this analysis, we recall that we set our sponsoring strategy leveraging the automatic algorithm provided by IG. Overall, sponsored posts achieved great success in terms of generated engagement. On average, sponsored posts reached 30.6, 116, and 60.6 likes for food, cat, and car posts, respectively. These numbers are well above the overall average of 5.9 likes per post. IG offers an analytic tool to inspect the reached audience; this feature perfectly fits the scope of social honeypots, since it provides insights about the attracted audience. For each post, the following information is available: quantitative information (i.e., reached people, likes, comments, shares, saves), and demographic distribution in percentage (gender, age, location). The detailed report is available in Appendix B. We observed interesting trends: * _food_ audience: the gender is almost balanced (the female audience is slightly more attracted), and the predominant age range is 18-34. Top locations: Campania, Lombardia, and Puglia.14 Footnote 14: The IG automatic algorithm maximized the audience toward the authors' country, i.e., Italy, hence reporting Italian regions. * _cat_ audience: the gender distribution is skewed toward women, and the predominant age range is 18-34.
Top locations: Emilia Romagna, Lombardia, Piemonte. * _car_ audience: the gender distribution is strongly skewed toward men, and the predominant age range is 18-24. Top locations: Lazio, Lombardia. To conclude, with minimal effort (i.e., EUR 2/day per post), an owner can get useful information, e.g., to use in marketing strategies. \begin{table} \begin{tabular}{l c c c} \hline \hline & Real People & Pages & Bots \\ \hline Food & **48.08**\% & 37.50\% & 14.42\% \\ Cat & 10.61\% & **72.72**\% & 16.67\% \\ Car & 30.30\% & 21.21\% & **48.49**\% \\ \hline \hline \end{tabular} \end{table} Table 3: Percentage of real people, pages, and bots for the best social honeypot in each topic. ## 7 Toward a Real Implementation So far, we have demonstrated that our social honeypots can attract real people in a fully automated way. With little effort, they can already be deployed in an array of situations. In this section, we first reason about the use cases of our approach, highlighting both positive and negative outcomes. Then, we present the current challenges and limitations of implementing this work in real scenarios. ### Use Cases Our work aims to show the lights and shadows of social networks such as Instagram. People can easily deploy automated social honeypots that can attract engagement from hundreds or even thousands of users. Based on that, analyses of these (unaware) users can be conducted. As cyber security practitioners, we know that this technology might be exploited not only for benign purposes, but also to harm users [72]. Therefore, this work contributes to the discussion about the responsible use of online social networks, in an era when technologies like artificial intelligence are transforming cyber security. We list in this section possible social honeypot applications. #### 7.1.1 Marketing The first natural application of our proposed social honeypots is for marketing purposes. Suppose someone is interested in understanding "who is the average person that loves a specific theme", where themes might be music, art, puppies, or food. With a deployed social honeypot, the owner can then analyze the reached audience by using the tools offered by IG itself (as we ethically did in this paper) or by further gathering (potentially private) information on the users' profile pages [73]. #### 7.1.2 Phishing and Scam Similar to marketing, social honeypots can be used by adversaries to conduct phishing and scam campaigns on IG users. For instance, the social honeypot might focus on cryptocurrency trading: once potential victims are identified, attackers can target them to obtain sensitive information (e.g., credentials), or to lure them into fraudulent activities such as investment scams, rug pulls, Ponzi schemes, or phishing. #### 7.1.3 Spammer Identification Social honeypots can also be created to imitate social network users, by posting content and interacting with other users. As we noticed in our experiments, they can attract spammers. Therefore, our proposed framework can be adopted by researchers to spot and study new types of spamming activities in social networks. #### 7.1.4 Monitoring of Sensitive Themes An interesting application of social honeypots is to identify users related to sensitive themes and monitor their activities (within the honeypot). Examples of such themes are fake news and extremism [74]. Researchers or authorities might leverage social honeypots to identify users that actively follow and participate in such themes, and then carefully examine their activity.
For instance, honeypot owners can monitor how people respond to specific news or interact inside the honeypot. ### Challenges and Limitations The first challenge we faced in our work is the massive presence of spammers on IG. Most of them are automated accounts that react to particular hashtags and comment under posts for advertisement or scamming purposes [75, 76]. This factor can inevitably limit our approach when we aim to gather only real people. As a countermeasure, honeypots should include a spam detector (e.g., [75, 77]) to automatically remove spammers. On the other hand, this phenomenon could be exploited directly to reduce spam: many pages can be created with the purpose of attracting spammers and reporting them to IG for removal. The second challenge we encountered is the lack of similar works in the literature. Because of this, we have no existing baselines to compare with, and it could be difficult to understand whether our approach is truly successful. However, in nine weeks, we obtained more than 15k likes and gathered \(\sim 750\) followers in total, which is not trivial as discussed in §5.4. Our most complex methods surpassed the simplest strategies we identified, which can serve as a baseline and source of inspiration for future works. Among the limitations, we inspected only generic (and ethical) topics. A comprehensive study in this direction would give much more value to our work, especially when dealing with delicate topics (e.g., conspiracies, fake news). Moreover, our approach is currently deployable on IG, but would be hard to transfer to other platforms. Even if this can be perceived as a limitation, it would be naive to consider all social media to be the same. Indeed, each of them has its own content, purpose, and audience. Developing social honeypots for multiple platforms can be extremely challenging, which is a good focus for future research. Last, there was no clear connection between the posts of our honeypots. When dealing with specific topics, it might be necessary to integrate more cohesive content. ## 8 Conclusions The primary goal of this work was to understand the feasibility of deploying self-managed Instagram Social Honeypots, and we demonstrated that _it is possible_ in §5.1. Moreover, from the results obtained in our analyses we can derive the following outcomes and guidelines: 1. _Topics_ play an important role in the success of the honeypot. 2. _Generation strategies_ do not require complex DL-based models: simple solutions such as stock images are enough. Similarly, we saw that posts containing random quotes as captions are as effective as captions describing the content; 3. The _engagement plan_ is essential. We demonstrated that a naive engagement strategy (PLAN 0) results in a low volume of likes and followers. Moreover, the engagement plan without costly operations (PLAN 1) works as well as plans involving follower acquisition and content sponsoring; 4. _Sponsored content_ is a useful resource to preliminarily assess the audience related to a specific topic; 5. Social honeypots not only attract _legitimate_ users, but also _spammers_. As a result, they can be adopted even for cybersecurity purposes. Future implementations of social honeypots might include automatic tools to distinguish engagement generated by legitimate and illegitimate users. In conclusion, we believe that our work can represent an important milestone for future researchers to easily deploy social honeypots and collect social network users' preferences.
New research directions might include not only general topics like cats and food, but also more sensitive themes like fake news or hate speech. In the future, we expect generative models to become ever more capable (e.g., DALL-E 2 [50] or ChatGPT [51]), thus increasing the reliability of our approach (or perhaps making it even more dangerous). #### Ethical Considerations Our institutions do not require any formal IRB approval to carry out the experiments described herein. Nonetheless, we designed our experiments to harm OSN users as little as possible, adhering to guidelines for building Ethical Social Honeypots [78], based on the Menlo report [79]. Moreover, we dealt with topics (cars, cats, food) that should not hurt anyone's sensibilities. In our work, we faced two ethical challenges: data collection and the use of deception. Similar to previous works [33, 34, 35], we collected only openly available data (provided by Instagram), thus no personal information was extracted, and only aggregated statistics were analyzed. Moreover, all information is kept confidential and not redistributed. Upon completion of this study, all collected data will be deleted. This approach complies with the GDPR. To understand the honeypots' effectiveness, similar to previous works, we could not inform the users interacting with them about the study, in order to limit the Hawthorne effect [80]. However, we will inform the deceived people at the end of the study, as suggested by the Menlo report.
Social honeypots are tools for attracting malicious activity (spam or bots) on online social networks (OSNs). To that end, their content is designed to be of maximum interest to malicious users. However, by choosing appropriate content topics, this attraction mechanism need not be limited to malicious users and can be extended to any OSN user. As a result, honeypots can attract users interested in a wide range of topics, from sports and hobbies to political views and conspiracies. By gathering these users in one place, honeypot owners can carry out various analyses, such as social or marketing studies. In this work, we introduce a novel concept of social honeypot for attracting OSN users interested in a generic target topic. We mimic legitimate Instagram pages ...
2309.12865
Bridging Sensor Gaps via Attention Gated Tuning for Hyperspectral Image Classification
Data-hungry HSI classification methods require high-quality labeled HSIs, which are often costly to obtain. This characteristic limits the performance potential of data-driven methods when dealing with limited annotated samples. Bridging the domain gap between data acquired from different sensors allows us to utilize abundant labeled data across sensors to break this bottleneck. In this paper, we propose a novel Attention-Gated Tuning (AGT) strategy and a triplet-structured transformer model, Tri-Former, to address this issue. The AGT strategy serves as a bridge, allowing us to leverage existing labeled HSI datasets, even RGB datasets to enhance the performance on new HSI datasets with limited samples. Instead of inserting additional parameters inside the basic model, we train a lightweight auxiliary branch that takes intermediate features as input from the basic model and makes predictions. The proposed AGT resolves conflicts between heterogeneous and even cross-modal data by suppressing the disturbing information and enhances the useful information through a soft gate. Additionally, we introduce Tri-Former, a triplet-structured transformer with a spectral-spatial separation design that enhances parameter utilization and computational efficiency, enabling easier and flexible fine-tuning. Comparison experiments conducted on three representative HSI datasets captured by different sensors demonstrate the proposed Tri-Former achieves better performance compared to several state-of-the-art methods. Homologous, heterologous and cross-modal tuning experiments verified the effectiveness of the proposed AGT. Code has been released at: \href{https://github.com/Cecilia-xue/AGT}{https://github.com/Cecilia-xue/AGT}.
Xizhe Xue, Haokui Zhang, Zongwen Bai, Ying Li
2023-09-22T13:39:24
http://arxiv.org/abs/2309.12865v3
# Bridging Sensor Gaps via Single-Direction Tuning for Hyperspectral Image Classification ###### Abstract Recently, vision transformer (ViT) models have excelled in diverse vision tasks, emerging as robust alternatives to convolutional neural networks. Inspired by this, some researchers have started exploring the use of ViTs in tackling HSI classification and achieved remarkable results. However, the training of ViT models requires a considerable number of training samples, while hyperspectral data, due to its high annotation costs, typically has a relatively small number of training samples. This contradiction has not been effectively addressed. In this paper, aiming to solve this problem, we propose the single-direction tuning (SDT) strategy, which serves as a bridge, allowing us to leverage existing labeled HSI datasets even RGB datasets to enhance the performance on new HSI datasets with limited samples. The proposed SDT inherits the idea of prompt tuning, aiming to reuse pre-trained models with minimal modifications for adaptation to new tasks. But unlike prompt tuning, SDT is custom-designed to accommodate the characteristics of HSIs. The proposed SDT utilizes a parallel architecture, an asynchronous cold-hot gradient update strategy, and unidirectional interaction. It aims to fully harness the potent representation learning capabilities derived from training on heterologous, even cross-modal datasets. In addition, we also introduce a novel Triplet-structured transformer (Tri-Former), where spectral attention and spatial attention modules are merged in parallel to construct the token mixing component for reducing computation cost and a 3D convolution-based channel mixer module is integrated to enhance stability and keep structure information. Comparison experiments conducted on three representative HSI datasets captured by different sensors demonstrate the proposed TriFormer achieves better performance compared to several state-of-the-art methods. Homologous, heterologous and cross-modal tuning experiments verified the effectiveness of the proposed SDT. Code will be released at: [https://github.com/Cecilia-xue/TriFormer](https://github.com/Cecilia-xue/TriFormer). Hyperspectral image classification, vision transformer, triplet-structured transformer, single-direction tuning, cross-sensor tuning ## I Introduction Hyperspectral image (HSI) classification aims to identify each pixel in an image and assign it to a specific land-cover category, which plays a crucial role in earth observation applications [1, 2]. Compared with traditional RGB images which only record three bands of colors, HSI images have much more rich spectral information. HSI sensors capture hundreds of spectral bands, each corresponding to a specific wavelength range. As a result, HSI images can provide more detailed and fine-grained information about the composition and properties of objects in the observed scene, which is very helpful in distinguishing ground-cover objects. Therefore, the HSI classification technology is widely applied in various scenes, e.g., mineral exploration [3], plant stress detection [4], and environmental science [5], etc. However, such high-dimensional features, on one hand, are beneficial for classifying ground objects. On the other hand, they also pose challenges in feature extraction due to their increased complexity. Hence, it is worthwile to explore efficient methods for feature extraction from raw HSIs. 
Deep learning (DL) technique is famous for its superior ability to automatically extract deep features, which exactly meet the requirement of HSI classification. Over the past decade, DL based methods, especially convolutional neural network (ConvNet) based methods dominate the field of HSI classification. From 2013 to 2017, various DL based models have been proposed. For instance, in [6], Lin _et al._ utilized PCA to reduce the dimensionality of HSI from hundreds of spectral dimensions to dozens, then extract deep features from a neighborhood region via Stacked Autoencoder (SAE). Later, Chen _et al._ introduced another spectral channel to enforce spectral features and proposed to use a deep belief network (DBN) to replace SAE [7, 8]. Zhang _et. al_ combine 1D ConvNet and 2D ConvNet to build dual channel ConvNet for HSI spectral-spatial classification [9]. Since 2016, 3D ConvNet-based HSI classification methods become the mainstream in this field, because the 3D convolution operation employed 3D ConvNet is inherently well-suited for processing 3D structure HSIs. From 2016 until now, various optimized 3D ConvNets have been proposed for HSI classification. In [10, 11], compact and conventional 3D ConvNets are applied into HSI classification directly. In [12], Zhang _et al._ design a 3D ConvNet via neural architecture search algorithm, where a 3D asymmetric neural network is searched via Darts [13] similar search method. Besides these 3D ConvNet-based methods, some transfer learning methods have also been drawn into the classification of HSI images [14, 15] to alleviate the small sample problem. The ConvNet-based HSI classification methods fully leverage their feature extraction capabilities, demonstrating a significant advantage in classification accuracy compared to previous approaches, dominating this field for a long time. Recently, the emergence of the vision transformer (ViT) has disrupted this situation. Vaswani _et al._ first proposed
Data-hungry HSI classification methods require high-quality labeled HSIs, which are often costly to obtain; this limits the performance potential of data-driven methods when annotated samples are scarce. Bridging the domain gap between data acquired from different sensors allows abundant labeled data to be used across sensors and breaks this bottleneck. In this paper, to improve performance on new HSI datasets and keep data-driven methods from running into this performance limit, we leverage existing labeled HSI datasets and even RGB datasets. Instead of inserting additional parameters into the basic model, we train a lightweight auxiliary ...
2303.17952
Characterization and Coherent Control of Spin Qubits with Modulated Electron Beam and Resonator
The coherent dynamics and control of spin qubits are essential requirements for quantum technology. A prominent challenge for coherent control of a spin qubit in a set of qubits is the destructive effect of the applied magnetic field on the coherent dynamics of neighbouring qubits due to its spatial extension. We propose a novel scheme to characterize the coherent dynamics of these quantum systems and to coherently control them using a magnetic field. Our scheme consists of a resonator that encompasses the desired quantum system and a modulated electron beam that passes through the resonator in close proximity to the quantum system of interest. The dynamics of the system is obtained by solving the Lindblad master equation. To verify the reliability of our model, we tested the model on a Potassium atom, $^{41}$K and NV$^-$ centre in Diamond. The results show that by properly controlling the parameters of the resonator and the electron beam, the coherence and decoherence rates of these quantum systems can be improved. Our model has the potential to be used for characterizing different types of spin-based quantum systems, and implementing quantum logic gates for quantum computation.
Soheil Yasini, Zahra Shaterzadeh-Yazdi, Mahmoud Mohammad Taheri
2023-03-31T10:29:26
http://arxiv.org/abs/2303.17952v1
# Characterization and Coherent Control of Spin Qubits ###### Abstract The coherent dynamics and control of spin qubits are essential requirements for quantum technology. A prominent challenge for coherent control of a spin qubit in a set of qubits is the destructive effect of the applied magnetic field on the coherent dynamics of neighboring qubits due to its spatial extension. We propose a novel scheme to characterize the coherent dynamics of these quantum systems and to coherently control them using a magnetic field. Our scheme consists of a resonator that encompasses the desired quantum system and a modulated electron beam that passes through the resonator in close proximity to the quantum system of interest. The dynamics of the system is obtained by solving the Lindblad master equation. To verify the reliability of our model, we tested the model on a Potassium atom, \({}^{41}\)K and NV\({}^{-}\) center in Diamond. The results show that by properly controlling the parameters of the resonator and the electron beam, the coherence and decoherence rates of these quantum systems can be improved. Our model has the potential to be used for characterizing different types of spin-based quantum systems, and implementing quantum logic gates for quantum computation. ## I Introduction Qubits are the primary component in many areas of quantum science and technology, including quantum computation [1; 2], data encryption, safe information transmission [3; 4], and quantum metrology and imaging [5; 6]. To effectively utilize qubits, it is necessary to characterize these two-level quantum systems and optimize their characteristics [7; 8]. Additionally, controlling each individual qubit with minimum effect on the coherent dynamics of the neighboring ones is a necessary requirement for a wide range of applications, from investigating fundamental quantum phenomena to quantum information computation [9]. Among different types of qubits, spin qubits are prominent candidates for quantum technology. A common approach for individually controlling a spin qubit is by employing a proper magnetic field. However, due to the spatial extension of the magnetic field effect and the difficulty of localizing the field in a small volume of space, individual control of these quantum systems in a set of spin qubits is a considerable challenge [10]. In this paper, we propose a scheme for tackling this challenge, characterizing spin qubits, and controlling their coherent dynamics. As shown in Fig. 1, our proposed scheme consists of a resonator encompassing a set of two-level quantum systems and a modulated electron beam that passes through a near distance \(h\) from the qubit that is considered to be controlled. The desired qubit is located in the near-field region of the magnetic field generated by the electron beam. By adjusting the parameters of the resonator and the electron beam, one can characterize the dynamics of the quantum system and properly control it. We were inspired by previous experimental and theoretical studies on measuring tunnelling rate and decoherence in double-quantum-dot systems in the microwave and terahertz regimes [11; 12; 13]. Based on these studies, the qubit of interest would resonate by using a proper frequency provided by the resonator, concomitantly with employing a proper magnetic field strength provided by the electron beam. 
If the frequency of the resonator is off-resonant with the coherence rate of the quantum system, it would have a small perturbative effect on the Figure 1: The proposed scheme, consisting of a modulated electron beam, generated by a klystron lamp, and a resonator with a resonance frequency \(\omega_{r}\) that is encompassing a set of two-level quantum systems with coherence frequency \(\omega_{0}\). quantum system. However, if the off-resonance effect is compensated by the magnetic-field control of the electron beam, it causes resonance in the dynamics of the quantum system. The results of measurement should reveal the coherence rate and some properties of decoherence. Also, it would allow one to control over the dynamics of the quantum system by tuning the parameters of the resonator and the electron beam. Coherent control of an individual quantum system, with spatial resolution and on an atomic scale, becomes possible taking into account that the electron-beam characteristics are related to de-Broglie wavelength of the electrons [9], which is in the order of Angstroms and comparable to the atomic spacing. Electron-beam control of a quantum system can be achieved when the length of the wave packets associated with individual electrons is much shorter than the modulated wavelength. Such electron-beam current density modulation is easily achievable in the microwave frequency range and has wide applications in electronic technologies [14]. To investigate the dynamics of the desired quantum system, we first solve the Lindblad master equation, governing the Hamiltonian of the proposed structure and obtain the density state of the system. Quantum system's coherence and decoherence rates are then extracted from the time evolution of the density state. In addition, we evaluate our model by considering two quantum systems, i.e. Potassium atom (\({}^{41}\)K) as a representative of Alkali atoms [15], and Nitrogen vacancy defect centres in Diamond (NV\({}^{-}\)) [16]. By solving the equations governing the dynamics of these quantum systems, we show that their coherence properties are improved at some frequencies of the resonator. This paper is organized as follows. In Sec. II, we provide a detailed explanation of the constituent elements of our proposed scheme. We describe the setup of the resonator and the modulated electron beam. In Sec. III, we introduce the model that we use in order to simulate the dynamics of the quantum system in the proposed scheme. We provide a detailed explanation of the Lindblad master equation governing the Hamiltonian of the system and how we use it to obtain the density state of the system. In Sec. IV, we present the results of our simulations for two quantum systems, i.e. Potassium atom and Nitrogen vacancy defect centres in diamond. Then, we discuss the implications of our results in Sec. V. Finally, in Sec. VI, we summarize the achievements of our work and suggest some future works to further develop and improve the proposed scheme. ## II Background: Constituent elements of the proposed scheme ### Klystron Lamp and the Electron Beam One of the key elements of the proposed scheme is the modulated electron beam. The electron beam is generated by a cathode, and then modulated by a klystron lamp. Klystron is a type of radio frequency amplifier that modulates the speed of the electron beam passing through it [17; 18]. Klystrons are proposed in different dimensions and output powers, according to the need for accelerating the electrons. 
Also, they can operate at different frequencies ranging from the microwave to the terahertz regime [19; 20; 21]. The structure of a klystron is schematically shown in Fig. 2. A klystron consists of two cavities, which are located at a certain distance from each other along the length of the klystron. The first cavity is the buncher cavity and the second one is the catcher cavity. The electron beam is first produced by a cathode, then moves along the length of the klystron and first passes through the buncher cavity. Passing through the buncher cavity with frequency \(\omega_{0}\) causes the electron beam to become modulated with a fixed modulation wavelength \(\lambda_{0}\). The electrons are accelerated due to the applied potential difference \(V_{0}\) between the two klystron cavities; consequently, the kinetic energy of the electron beam increases. The electron beam with high kinetic energy in the output cavity produces a signal with a fixed modulation wavelength \(\lambda_{0}\) and with a higher field amplitude. Therefore, the output signal of the klystron lamp is the same as the amplified input radiofrequency signal with a fixed modulation wavelength \(\lambda_{0}\)[18]. Figure 2: A klystron lamp, composed of a buncher cavity and a catcher cavity. Passage of the electron beam, produced by a cathode, through the buncher cavity with frequency \(\omega_{0}\) causes the electron beam to get modulated to a fixed modulation wavelength, \(\lambda_{0}\). The diameter of the cross section of the electron beam is assumed to be \(2a\). The wave function attributed to each electron in the electron beam is assumed to be in Gaussian form with a length \(\Delta z\) much smaller than the modulation wavelength of the electron beam. ### Resonator A resonator is usually a cavity with metal shields [22] or a dielectric waveguide with high permittivity [23]. According to its size, a resonator possesses a number of resonant frequencies that are related to different modes of electromagnetic fields. To model a resonator, we assume that, in its resonant-frequency mode, the resonator is equivalent to a quantum system in the excited state. Similarly, in the off-resonance mode, the resonator is equivalent to the quantum system in its ground state. Therefore, a resonator with a certain resonant frequency behaves similarly to a two-level quantum system, where the energy difference between its two energy levels is equivalent to the resonant frequency of the resonator. The resonant frequency of a resonator depends on the shape, dimensions, and material of the resonator. Therefore, the value of the resonant frequency \(\omega_{m}\) is determined once the resonator is designed and characterized [24]. For a rectangular-cubic resonator with metal walls, the value of the resonant frequency of the first excited mode of the resonator, \(f\), is obtained from \[f=\frac{c}{2\sqrt{\epsilon_{r}\mu_{r}}}\sqrt{\left(\frac{m}{l_{1}}\right)^{2}+ \left(\frac{n}{l_{2}}\right)^{2}+\left(\frac{q}{l_{3}}\right)^{2}}, \tag{1}\] where \(l_{i}\) for \(i\in\{1,2,3\}\) are the resonator's 3D dimensions, \(c\) is the speed of light in vacuum, \(\epsilon_{r}\) is the relative dielectric permittivity, and \(\mu_{r}\) is the relative magnetic permeability of the material inside the cavity [22]. The parameters \(m,n\), and \(q\) are the mode indices of the electromagnetic field, and they cannot all be zero simultaneously. For higher values of these parameters, the frequencies of the higher-order modes of the resonator are obtained.
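As a quick numerical illustration of Eq. (1), the snippet below evaluates one low-order mode of a hypothetical vacuum-filled rectangular cavity; the dimensions are illustrative assumptions, not values used in this work.

```python
import math

c = 299_792_458.0               # speed of light in vacuum [m/s]
eps_r, mu_r = 1.0, 1.0          # vacuum filling (assumption)
l1, l2, l3 = 0.03, 0.02, 0.01   # hypothetical cavity dimensions [m]

def mode_frequency(m: int, n: int, q: int) -> float:
    """Resonant frequency of mode (m, n, q) from Eq. (1), in Hz."""
    return (c / (2 * math.sqrt(eps_r * mu_r))) * math.sqrt(
        (m / l1) ** 2 + (n / l2) ** 2 + (q / l3) ** 2
    )

# Mode (m, n, q) = (1, 0, 1): about 15.8 GHz for these dimensions.
print(f"{mode_frequency(1, 0, 1) / 1e9:.2f} GHz")
```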
It should be noted that in resonators, made of metal shields with vacuum inside, for high frequency electromagnetic fields, the loss resulting from the dielectric property of the vacuum is small. In such cases, the electrical losses are mostly related to the currents flowing in the cavity walls. The walls are usually coated with silver or gold to increase electrical conductivity and reduce energy losses. Unlike copper, which oxidizes quickly, silver and gold are safe from oxidation [23]. ## III Modelling the proposed scheme ### Basic Assumptions To model the proposed scheme, assumptions are considered for the electron beam, the quantum system, and the resonator. As shown in Fig. 2, for the electron beam, it is assumed that the wavefunction attributed to each electron is a Gaussian wave packet of length \(\Delta z\) and the wavelength of the wave packet, attributed to each electron, is much smaller than the modulation wavelength of the electron beam (\(\Delta z\ll\lambda_{0}\)). It is also assumed that the electron beam current and, accordingly, the radius of the cross-section of the electron beam (\(a\)) is such that the distance between the center of the electron beam and the desired quantum system \(h\) is large enough compared to the width of the cross-section of the beam (\(2a\ll h\)). With this assumption, the magnetic field or the non-radiative electric field caused by this modulated electron beam can be obtained from the classical relations and the interaction of the quantum system with the field caused by the electron beam can be considered semi-classical [9]. Regarding the quantum system, we assume that the dimension of the desired quantum system is small compared to the wavelength of the electron beam modulation (\(\lambda_{0}\)) as well as the distance of the quantum system from the center of the electron beam (\(h\)). We also assume that the quantum system in question, has only two energy levels, i.e. the ground state and the first excited state, and the probability of transition from the ground state to higher excited states is almost zero and can be ignored. The distance between the electron beam and the quantum system is such that the magnetic field, produced by the beam, affects the system in the near field region, thus enabling the control of the dynamics of the quantum system by the electron beam. We consider the interaction of the electron beam with the quantum system to be of a weak type, so its effect on causing decoherence in the dynamics of the quantum system can be ignored. The resonator is assumed to be a Fabry-Perot type, and the quantum system is located inside this resonator. The resonator is used to drive the quantum system. For this purpose, the resonant frequency of the resonator is assumed to be close to the frequency associated with the energy difference between the ground and excited energy levels of the quantum system. If the frequency difference between the resonant frequency of the resonator and the quantum system is compensated by an electron beam, then the quantum system resonates and its state changes coherently between the energy levels of its ground state and excited state. According to the above assumptions, in the following section, we provide a suitable theoretical model to investigate the dynamics of the desired system. ### Theoretical Model for the Dynamics of the Quantum System We consider the quantum system and the resonator as the main system, which is affected by the electron beam. 
The Hamiltonian governing the whole system is defined as \[\hat{H}_{\mathrm{s}}=\hat{H}_{0}+\hat{H}_{\mathrm{int}}, \tag{2}\] where the first term, \(\hat{H}_{0}\), is the Hamiltonian governing the main system including the two-level quantum system and the resonator, and the second term \(\hat{H}_{\mathrm{int}}\) shows the interaction between all the components of the proposed system. The energy levels of the quantum system are considered to be \(|g\rangle\) for the ground state and \(|e\rangle\) for the excited state, and the energy difference between these two levels is equal to \(\Delta E=\hbar\omega_{0}\). We consider the role of the resonator equivalent to quantized electromagnetic waves with the attributed energy \(E_{\mathrm{ph}}=\hbar\omega_{\mathrm{M}}\), where \(\omega_{\mathrm{M}}\) is the resonant frequency of the resonator. Then, \(\hat{H}_{0}\) is defined by \[\hat{H}_{0}=\hbar\omega_{0}\frac{\hat{\sigma}_{z}}{2}+\hbar\omega_{\mathrm{M}} \hat{a}_{\mathrm{n}}^{\dagger}\hat{a}_{\mathrm{n}}. \tag{3}\] The operator \(\hat{\sigma}_{z}=|g\rangle\langle g|-|e\rangle\langle e|\) represents the transition between the two levels of the quantum system, and the operators \(\hat{a}_{\mathrm{n}}\) and \(\hat{a}_{\mathrm{n}}^{\dagger}\) are, respectively, the annihilation and creation operators of the quantized electromagnetic field, between the primary state \(|\mathrm{m}\rangle\) and the secondary state \(|\mathrm{m}\rangle\), attributed to the resonator. The second term, in Eq. (2), is composed of the mutual interaction between the quantum system, the resonator and the electron beam and is defined by, \[\hat{H}_{\mathrm{int}}=\hat{H}_{\mathrm{int1}}+\hat{H}_{\mathrm{int2}}+\hat{H }_{\mathrm{int3}}. \tag{4}\] In the above equation, the first term is the interaction between the magnetic field \(B\), caused by the electron beam, and the magnetic dipole moment of the two-level quantum system \(\hat{\mu}\), which is defined as [9], \[\hat{H}_{\mathrm{int1}}=-\hat{\mu}.\mathbf{B}. \tag{5}\] We assume that the direction of the magnetic field is perpendicular to the electron beam propagation, so the mentioned interaction only happens in the \(z\) direction. The second term in Eq. (4), \(\hat{H}_{\mathrm{int2}}\), expresses the interaction between the resonator and the quantum system, which is defined by \[\hat{H}_{\mathrm{int2}}=\gamma_{\mathrm{n}}\left(\hat{\sigma}+\hat{\sigma}^{ \dagger}\right)\left(\hat{a}_{\mathrm{n}}+\hat{a}_{\mathrm{n}}^{\dagger} \right), \tag{6}\] where \(\hat{\sigma}=|g\rangle\langle e|\) and \(\hat{\sigma}^{\dagger}=|e\rangle\langle g|\), represent the transition of the two-level quantum system from the ground state to the excited state, and from the excited state to the ground state, respectively. Parameter \(\gamma_{\mathrm{n}}\) represents the interaction coefficient between the resonator and the quantum system. The third term in Eq. (4) represents the interaction between the magnetic field of the electron beam and the resonator, which is defined as, \[\hat{H}_{\mathrm{int3}}=\gamma\left(\hat{a}_{\mathrm{n}}^{\dagger}+\hat{a}_{ \mathrm{n}}\right)\left(\hat{a}_{\mathrm{e}}^{\dagger}+\hat{a}_{\mathrm{e}}\right) \tag{7}\] where \(\gamma\) is the interaction coefficient between the magnetic field of the electron beam and the resonator. We consider a weak interaction between the magnetic field and the quantum system as well as the resonator. Therefore, we consider classical behavior for the applied magnetic field and the relations for these two parts are semi-classical. 
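As an illustration of how this Hamiltonian, combined with the Lindblad master equation introduced next, could be simulated numerically, the following minimal QuTiP sketch couples a two-level system to a single driven resonator mode. All parameter values are illustrative assumptions rather than the settings adopted later, and a practical computation would typically move to a rotating frame instead of resolving the fast free evolution directly.

```python
import numpy as np
from qutip import basis, destroy, qeye, tensor, mesolve

# Illustrative parameters (rad/s) -- assumptions for this sketch only
w0  = 1.60e9      # qubit transition frequency
wM  = 1.13e9      # resonator frequency
g   = 150.0       # qubit-resonator coupling
eta = 150.0       # effective electron-beam drive on the resonator
kq  = 150.0       # qubit decay rate in the Lindblad terms
kr  = 150.0       # resonator decay rate in the Lindblad terms

N  = 5                               # Fock truncation of the resonator mode
a  = tensor(qeye(2), destroy(N))     # resonator annihilation operator
sm = tensor(destroy(2), qeye(N))     # qubit lowering operator, |e> = basis(2, 1)

# H0 in number-operator form (equivalent to the sigma_z term up to a constant)
# plus the qubit-resonator and beam-resonator interaction terms.
H = w0 * sm.dag() * sm + wM * a.dag() * a \
    + g * (sm + sm.dag()) * (a + a.dag()) \
    + eta * (a + a.dag())

c_ops = [np.sqrt(kq) * sm, np.sqrt(kr) * a]     # Lindblad dissipators
rho0  = tensor(basis(2, 1), basis(N, 0))        # qubit excited, resonator empty

# The time grid is illustrative; with these couplings the interesting dynamics
# develops on much longer timescales, where a rotating-frame treatment helps.
tlist  = np.linspace(0.0, 1e-6, 2000)
result = mesolve(H, rho0, tlist, c_ops,
                 e_ops=[sm.dag() * sm, a.dag() * a])  # excited-state and photon populations
```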
To check the dynamics of the desired system, we consider the quantum system and the resonator together, as an open system interacting with the electron beam and assume that this interaction is weak. In order to investigate the temporal evolution of the density matrix of an open quantum system with weak interaction, we use the Lindblad equation, which is one of the well-known theories for use in this type of systems [25; 26; 27; 28; 29]. This equation for our desired system turns to be \[\frac{\partial\rho}{\partial t}= -\frac{i}{\hbar}\left[\hat{H}_{\mathrm{s}},\rho\right]+\frac{ \gamma}{2}\left(2\hat{\sigma}\rho\hat{\sigma}^{\dagger}-\hat{\sigma}^{\dagger} \hat{\sigma}\rho-\rho\hat{\sigma}^{\dagger}\hat{\sigma}\right)\] \[+\frac{\gamma_{\mathrm{n}}}{2}\left(2\hat{a}_{\mathrm{n}}\rho\hat {a}_{\mathrm{n}}^{\dagger}-\hat{a}_{\mathrm{n}}^{\dagger}\hat{a}_{\mathrm{n}} \rho-\rho\hat{a}_{\mathrm{n}}^{\dagger}\hat{a}_{\mathrm{n}}\right). \tag{8}\] In the above formula, \(\hat{H}_{\mathrm{s}}\) is the Hamiltonian of the whole system given in Eq. (2). Also, the density state of the quantum system is considered as \(\rho=\rho_{1}\otimes\rho_{2}\), where \(\rho_{1}\) is the density matrix of the quantum system and \(\rho_{2}\) is the density matrix of the resonator. Taking into account the assumptions, mentioned earlier, the corresponding matrix gets the following form \[\rho=\begin{pmatrix}\rho_{\mathrm{gg}}\rho_{\mathrm{mm}}&\rho_{\mathrm{gg}} \rho_{\mathrm{mm}}&\rho_{\mathrm{ge}}\rho_{\mathrm{mm}}&\rho_{\mathrm{ge}}\rho _{\mathrm{mm}}\\ \rho_{\mathrm{gg}}\rho_{\mathrm{mm}}&\rho_{\mathrm{gg}}\rho_{\mathrm{nn}}&\rho_ {\mathrm{ge}}\rho_{\mathrm{nm}}&\rho_{\mathrm{ge}}\rho_{\mathrm{nn}}\\ \rho_{\mathrm{eg}}\rho_{\mathrm{mm}}&\rho_{\mathrm{eg}}\rho_{\mathrm{mm}}&\rho_ {\mathrm{ee}}\rho_{\mathrm{mm}}&\rho_{\mathrm{ee}}\rho_{\mathrm{mn}}\\ \rho_{\mathrm{eg}}\rho_{\mathrm{nm}}&\rho_{\mathrm{eg}}\rho_{\mathrm{nn}}&\rho_ {\mathrm{ee}}\rho_{\mathrm{nm}}&\rho_{\mathrm{ee}}\rho_{\mathrm{mn}}\\ \end{pmatrix}. \tag{9}\] From Eq. (9), 16 differential equations are obtained for all the possible states of the system. Solutions of these equations expresses the time evolution of different states of the system. By simulating the equations governing the density state of the quantum system, using finite-difference time-domain method (FDTD), and optimizing the time response of these states according to different values of the existing parameters for tuning the system, we can improve the time characteristics and the coherent dynamics of the quantum system. In the following section, we apply this model to a couple of known quantum systems, and show the reliability of our model. ## IV Results In this section, the results of numerical simulation for the dynamics associated with temporal evolution of the proposed system, are examined. We assume that the system is initially in one of the two states \(\rho_{\mathrm{gg}}\rho_{\mathrm{nn}}\) or \(\rho_{\mathrm{ee}}\rho_{\mathrm{mm}}\). These two states correspond to the cases where the quantum system is in its ground state \(|g\rangle\) and the resonator is in its excited state \(|n\rangle\), or vice versa, the quantum system is in the excited state \(|e\rangle\) and the resonator is in its ground state \(|m\rangle\). In fact, the quantum system evolves between its ground state and excited state, due to receiving and returning energy to the resonator. We consider two quantum systems, i.e. 
Potassium atom \({}^{41}\)K, as a representative of Alkali atoms, and NV\({}^{-}\) centers in Diamond. By using the available parameters for these two quantum systems, which are given in the following, the results obtained from the simulation are compared with the results presented in reference [9]. The transition frequency between two energy levels of an individual \({}^{41}\)K atom and similarly an NV\({}^{-}\) center in Diamond, is equal to 254 MHz and 2.78 GHz, respectively [9]. These frequencies are equivalent to \(\omega_{0}=1.60\times 10^{9}\) rad/s and \(\omega_{0}=17.34\times 10^{9}\) rad/s, respectively. The coefficients \(\gamma\) and \(\gamma_{n}\) in Eq. (9) are taken equal to 150 Hz. The frequency of the resonator in transaction with the two-level quantum system is assumed to be \(\omega_{\rm M}=1.13\times 10^{9}\) rad/s for the \({}^{41}\)K atom, and \(\omega_{\rm M}=12.26\times 10^{9}\) rad/s for the NV\({}^{-}\) center. The electron-beam current is chosen to be 100 \(\mu\)A and the radius of the cross section of the electron beam is taken to be 25 \(\mu\)m. The magnetic field resulting from such electron beam at the location of the quantum system is chosen to be \(B=3\times 10^{-9}\) T. The relationship between the magnetic field resulting from the electron beam current and the distance of the beam from the quantum system is given by the Ampere's law [30]. Hence, the distance of the beam to the quantum system, \(h\), in order to obtain a magnetic field of \(3\times 10^{-9}\) T in the vicinity of the quantum system, is obtained to be 6.7 mm. Alternatively, for a current of 50 nA, in order to obtain the same amount of magnetic field, \(h\) should be equal to 3.3 \(\mu\)m. To obtain an electron beam with the desired current value and electron-beam cross section, we need to know the voltage that is applied to the klystron cavities, in order to evaluate the energy of the electron-beam output from the klystron lamp. We assume that the electrons, with initial speed \(u_{0}\), have a kinetic energy equal to \(18,000\) eV. According to the relation between the potential difference \(V_{0}\), between the two cavities of the Klystron, and the initial speed of the electron beam \(u_{0}\), it is possible to estimate the value of \(V_{0}\), necessary to produce the desired kinetic energy, based on the law of conservation of energy [18]. Therefore, to produce electrons with the assumed initial kinetic energy, the value of the applied voltage should be equal to \(V_{0}=15.750\) Kv, taking into account the relativistic conditions of the speed of these particles. For these calculations, we used CST software [24]. In Fig. 3, the time-evolution diagrams for the two states, \(\rho_{\rm ee}\) and \(\rho_{\rm gg}\), of the \({}^{41}\)K atom are presented. The initial quantum state of the \({}^{41}\)K atom is shown in the inset of each diagram. In Figs. 3a (3b), it is assumed that the atom is in the excited state \(\left|e\right>\) (the ground state \(\left|g\right>\)), at the initial time \(t=0\). In Figs. 3c and 3d, the difference and sum of the two states \(\rho_{\rm ee}\) and \(\rho_{\rm gg}\) are shown, respectively, as a function of time. Similar calculations were performed for the case of NV\({}^{-}\), where the results are presented in Fig. 4. In both cases, the diagrams display valuable points about the dynamical behavior of the entire system, which will be discussed in the next section. ## V Discussion Comparison between Figs. 3c and 3d for the \({}^{41}\)K atom, and similarly Figs. 
4c and 4d for the NV\({}^{-}\) center, shows that for the case \(\rho_{\rm ee}-\rho_{\rm gg}\), the time evolution of the desired quantum system has an oscillatory behavior, whereas \(\rho_{\rm ee}+\rho_{\rm gg}\) decreases exponentially. According to the \(\pm\) sign between the states \(\rho_{\rm ee}\) and \(\rho_{\rm gg}\), these results can be an indication that if the time evolution of the state of the resonator has a \(\pi\)-phase difference with the time evolution of the state of the quantum system, then the energy exchange between the resonator and the quantum system occurs effectively. Consequently, the resonator Figure 3: Time evolution of the \({}^{41}\)K atom, when (a) the atom is initially in the excited state \(\left|e\right>\) and (b) the atom is initially in the ground state \(\left|g\right>\). In the inset of each diagram, the initial quantum state of the atom is shown. (c) shows the difference between the two states and (d) demonstrates the sum of the two states. Figure 4: Time evolution of the NV\({}^{-}\) center in Diamond for the case where (a) the NV\({}^{-}\) is initially in its excited state \(\left|e\right>\) and (b) the NV\({}^{-}\) center is initially in its ground state \(\left|g\right>\). In the inset of each diagram, the initial quantum state of the NV\({}^{-}\) center is shown. (c) shows the difference between the two states and (d) the sum of the two states. can control the state of the two-level quantum system and be the driving factor for the dynamics of the quantum system. Furthermore, the \(\rho_{\rm ee}-\rho_{\rm gg}\) time-response diagrams for \({}^{41}\)K atom and NV\({}^{-}\) center are compared with the corresponding diagrams presented in reference [9] for the same quantum systems and in the same time interval. We considered the same values for the used parameters so that the results can be compared. One can see that for both of the quantum systems, the number of Rabi oscillations in our proposed scheme has been almost doubled as compared to the number of Rabi oscillations in reference [9]. Hence, by employing our proposed scheme, we have been able to double the coherence rate in the quantum systems of interest (i.e. the \({}^{41}\)K and NV\({}^{-}\) center), and significantly improve the dynamic behavior of these quantum systems. In fact, these results show the effect of the presence of the resonator and its key role in driving the quantum system of interest. In addition, for evaluation of the decoherence rate, an exponential function is fitted to the damping of the oscillating graphs. For the results presented in the reference article [9], this function is proportional to \(f(t)\propto{\rm e}^{-57.34t}\) for the \({}^{41}\)K atom and \(f(t)\propto{\rm e}^{-923t}\) for the NV\({}^{-}\) center, whereas the exponential function fitted to our graphs is first proportional to \(g(t)\propto{\rm e}^{-4332t}\), in the descending part of the function, and then it is proportional to \(g(t)\propto{\rm e}^{58.35t}\), in the ascending part of the function for the \({}^{41}\)K atom. Similarly, the exponential functions fitted to the graph associated with the NV\({}^{-}\) center, in our proposed scheme, is proportional to \(g(t)\propto{\rm e}^{-11770t}\), in the descending part of the function, and then it is proportional to \(g(t)\propto{\rm e}^{209.4t}\), in the ascending part of the function. 
Comparison between the two functions \(f(t)\) and \(g(t)\), for each quantum system of interest, shows that the decoherence rate of the \({}^{41}\)K atom and the NV\({}^{-}\) center in our proposed scheme, has decreased by more than 70 times and 12 times, respectively, as compared to the decoherence rates given in reference [9]. Therefore, the presence of the resonator in our proposed scheme, not only results in the increment of the coherence rate of the quantum systems, but also significantly decreases the decoherence rate. Both of these factors are important requirements for a quantum system to be recognized as a _well-characterized_ qubit to be used, for instance, in fault tolerant quantum computation. ## VI Conclusion In this study, we proposed a scheme for characterization and coherent control of spin-based two-level quantum systems. We showed that proper use of a resonator and a modulated electron beam lead to a better time response of the quantum states of a desired system. The modulated electron beam, generated by a Klystron lamp, is a suitable tool for individual magnetic control of a spin-based quantum system in a set of quantum systems. Also, we showed that placing a quantum system inside a proper resonator can improve the performance and characteristics of the quantum system, especially improving the coherence and decoherence rates of the system. To model the proposed scheme, we developed a theory using the Lindblad master equation. By numerically solving the obtained equations, for the \({}^{41}\)K atom, as a representative of Alkali atoms, and for the NV\({}^{-}\) centers in diamond, we derived the time evolution of the state of these quantum systems and examined the dynamics of the states. The results showed that specific resonance frequencies can be provided by the resonator for each quantum system, which improves the coherence dynamics of the desired system. Improving the coherent dynamics of a quantum system can have many applications. For instance, one can design various quantum gates for the purpose of fault tolerant quantum computation [31; 32; 33; 34; 35]. In fact, by controlling the tuning parameters provided by the resonator and the electron beam, suitable quantum gates can be designed and applied. Furthermore, due to the unique control that the electron beam provides for the individual access of spin-type quantum systems, it is possible to use a set of quantum systems at close proximity of each other and engineer such a compound system by electron beams for the purpose of quantum simulations and computations. Therefore, the proposed model and the equations governing its dynamics can be generalized for more than one quantum system and also for more than one resonator.
The coherent dynamics and control of spin qubits are essential for quantum technology. A key challenge in coherently controlling a spin qubit within a set of qubits is that the applied magnetic field, owing to its spatial extension, destructively affects the coherent dynamics of neighbouring qubits. To address this problem, we propose a novel scheme to characterize the coherent dynamics of these quantum systems and to control them coherently. The scheme consists of a resonator that encloses the quantum system of interest and a modulated electron beam that passes through the resonator in close proximity to that quantum system. The dynamics of the system is obtained by solving the Lindblad master equation. To verify the reliability of our model, we tested it on a Potassium atom ($^{41}$K) and an NV$^-$ centre in Diamond.
2309.12871
AnglE-optimized Text Embeddings
High-quality text embedding is pivotal in improving semantic textual similarity (STS) tasks, which are crucial components in Large Language Model (LLM) applications. However, a common challenge existing text embedding models face is the problem of vanishing gradients, primarily due to their reliance on the cosine function in the optimization objective, which has saturation zones. To address this issue, this paper proposes a novel angle-optimized text embedding model called AnglE. The core idea of AnglE is to introduce angle optimization in a complex space. This novel approach effectively mitigates the adverse effects of the saturation zone in the cosine function, which can impede gradient and hinder optimization processes. To set up a comprehensive STS evaluation, we experimented on existing short-text STS datasets and a newly collected long-text STS dataset from GitHub Issues. Furthermore, we examine domain-specific STS scenarios with limited labeled data and explore how AnglE works with LLM-annotated data. Extensive experiments were conducted on various tasks including short-text STS, long-text STS, and domain-specific STS tasks. The results show that AnglE outperforms the state-of-the-art (SOTA) STS models that ignore the cosine saturation zone. These findings demonstrate the ability of AnglE to generate high-quality text embeddings and the usefulness of angle optimization in STS.
Xianming Li, Jing Li
2023-09-22T13:52:42
http://arxiv.org/abs/2309.12871v8
# Angle-optimized Text Embeddings ###### Abstract High-quality text embedding is pivotal in improving semantic textual similarity (STS) tasks, which are crucial components in Large Language Model (LLM) applications. However, a common challenge existing text embedding models face is the problem of vanishing gradients, primarily due to their reliance on the cosine function in the optimization objective, which has saturation zones. To address this issue, this paper proposes a novel angle-optimized text embedding model called AngIE. The core idea of AngIE is to introduce angle optimization in a complex space. This novel approach effectively mitigates the adverse effects of the saturation zone in the cosine function, which can impede gradient and hinder optimization processes. To set up a comprehensive STS evaluation, we experimented on existing short-text STS datasets and a newly collected long-text STS dataset from GitHub Issues. Furthermore, we examine domain-specific STS scenarios with limited labeled data and explore how AngIE works with LLM-annotated data. Extensive experiments were conducted on various tasks including short-text STS, long-text STS, and domain-specific STS tasks. The results show that AngIE outperforms the state-of-the-art (SOTA) STS models that ignore the cosine saturation zone. These findings demonstrate the ability of AngIE to generate high-quality text embeddings and the usefulness of angle optimization in STS. ## 1 Introduction The development of text embeddings (Kiros et al., 2015; Hill et al., 2016; Conneau et al., 2017; Cer et al., 2018; Reimers and Gurevych, 2019; Gao et al., 2021) is an essential research challenge in the NLP community. Text embeddings effectively feature key semantic and syntactic information in language, which broadly affects the performance of downstream tasks, such as text classification (Li et al., 2021), sentiment analysis (Suresh and Ong, 2021; Zhang et al., 2022), semantic matching (Grill et al., 2020; Lu et al., 2020), clustering (Reimers and Gurevych, 2019; Xu et al., 2023), and question-answering (QA) system (Yue et al., 2021). In particular, text embedding models play a crucial role in LLMs such as ChatGPT (OpenAI, 2022; 2023), LLMA (Touvron et al., 2023;b), and ChatGLM (Du et al., 2022)-based applications. These LLM-based applications heavily rely on high-quality text embeddings for tasks such as vector search, where related documents are retrieved for LLM QA (Asai et al., 2023). Recent studies (Gao et al., 2021; Jiang et al., 2022b; Chuang et al., 2022; Chanchani and Huang, 2023; Zhuo et al., 2023) have utilized pre-trained language models such as BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) in combination with contrastive learning to enhance the quality of text embeddings. These approaches involve pulling semantically similar samples together and pushing apart those not (Gao et al., 2021). In these contrastive models, positive samples that are semantically similar can be generated by data augmentation, while negative samples that are dissimilar are selected from different texts within the same mini-batch (in-batch negatives). However, supervised negatives are underutilized, and the correctness of in-batch negatives is difficult to guarantee without annotation, which can lead to performance degradation. Although some models such as (Gao et al., 2021) optimize hard negative samples, they rely on strict triple formats (\(x_{i}\), \(x_{i}^{+}\), \(x_{i}^{-}\)). 
However, most existing supervised STS datasets only provide pairs (\(x_{i}\), \(x_{i}^{+}\)) or (\(x_{j}\), \(x_{j}^{-}\)), where \(x_{i}^{+}\) refers to the positive sample of \(x_{i}\) and \(x_{j}^{-}\) to the negative sample of \(x_{j}\). Thus, most contrastive models are used in unsupervised settings and might not benefit from human supervision. For supervised STS (Reimers and Gurevych, 2019; Su, 2022), most efforts to date have employed the cosine function in their training objective to measure pairwise semantic similarity. However, the cosine function has saturation zones, as shown in Figure 1. It can impede the optimization due to the vanishing-gradient issue and hinder the ability to learn subtle distinctions between texts in backpropagation. Additionally, many STS datasets such as MRPC 1 and QQP 2 provide binary labels representing dissimilar (\(0\)) and similar (\(1\)), which naturally fall within the saturation zones of the cosine function. To overcome this challenge, this paper proposes a novel angle-optimized text embedding. It optimizes not only the cosine similarity between texts but also the angle, to mitigate the negative impact of the saturation zones of the cosine function on the learning process. Specifically, it first divides the text embedding into real and imaginary parts in a complex space. Then, it follows the division rule in complex space to compute the angle difference between two text embeddings. After normalization, this angle difference becomes an objective to be optimized. Optimizing the normalized angle difference is intuitive: if the normalized angle difference between two text embeddings is smaller, the two embeddings are closer to each other in the complex space, i.e., their similarity is larger. Footnote 1: [https://www.microsoft.com/en-us/download/details.aspx?id=52398](https://www.microsoft.com/en-us/download/details.aspx?id=52398) Footnote 2: [https://www.quora.com/q/quoradata/](https://www.quora.com/q/quoradata/) In the STS experimental setup, we observed that the majority of existing STS benchmarks focus on evaluating models on short texts. Unfortunately, there is a lack of datasets specifically designed to evaluate the STS performance of models on long texts. Long texts are prevalent in real-world applications such as financial documents, legal documents, and health reports (Li et al., 2023). To tackle this challenge, this paper presents a new high-quality long-text STS dataset. This dataset allows for a more thorough evaluation of model performance on long texts. Specifically, the dataset is collected from GitHub Issues and contains roughly \(21\)K samples; we use duplicate issues as positive samples and non-duplicate issues as negative samples. We first experimented with both short- and long-text datasets and showed that AngIE outperforms the SOTA STS models in both transfer and non-transfer STS tasks. For example, AngIE shows an average Spearman correlation of \(73.55\%\) in non-transfer STS tasks, compared to \(68.03\%\) for SBERT. Then, an ablation study shows that all components contribute positively to AngIE's superior performance. Next, we discuss domain-specific scenarios with limited annotated data that are challenging for supervised STS models like AngIE, where it is observed that AngIE can work well with LLM-supervised data. Finally, we find that AngIE can benefit downstream retrieval applications and can learn representations whose similarity distribution is closer to the actual one.
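The gradient-vanishing argument behind Figure 1 can be checked numerically. The following is a tiny illustration of ours (not from the paper): it differentiates the cosine function at a few angles and shows that the gradient almost disappears in the saturation zones near \(0\) and \(\pi\).

```python
import torch

# Gradient of cos(theta) w.r.t. theta at a few angles.
for theta0 in (0.05, 1.57, 3.10):                  # near 0, near pi/2, near pi
    theta = torch.tensor(theta0, requires_grad=True)
    torch.cos(theta).backward()
    print(f"theta={theta0:4.2f}  d cos / d theta = {theta.grad.item():+.4f}")

# Near 0 and pi the derivative (-sin theta) is almost zero, so text pairs whose
# similarity falls in these saturation zones receive almost no gradient signal
# during backpropagation, which is the issue the angle objective addresses.
```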
In summary, the contributions of this paper are listed as follows: \(\bullet\) We investigate the negative effects of saturation zone in the cosine function widely applied in STS and propose a novel angle-optimized text embedding model to mitigate this issue. \(\bullet\) We extend the existing STS benchmark with a newly collected long-text dataset from Github Issues to allow a more comprehensive empirical study in STS. \(\bullet\) We present extensive experiments on STS and demonstrate that AngIE can substantially improve the text embedding quality in various scenarios. Figure 1: The saturation zones of the cosine function. The gradient at saturation zones is close to zero. During backpropagation, if the gradient is very small, it could kill the gradient and make the network difficult to learn. Related Work This section is organized as follows: we first introduce the unsupervised approaches, then the supervised approaches, and finally give a summary. Unsupervised ApproachesEarly studies (Hill et al., 2016; Pagliardini et al., 2018) have demonstrated the efficacy of augmenting word2vec (Mikolov et al., 2013) with n-gram embeddings, yielding strong results in text embeddings. Recently, BERT-flow (Li et al., 2020) has introduced a flow-based approach that maps BERT embeddings to a standard Gaussian latent space. On the other hand, BERT-whitening (Su et al., 2021) applies the whitening operation to BERT embeddings to enhance text embeddings. Furthermore, very recent research (Carlsson et al., 2020; Zhang et al., 2020; Giorgi et al., 2021; Gao et al., 2021; Yan et al., 2021; Chuang et al., 2022; Jiang et al., 2022b; Zhuo et al., 2023) has focused on leveraging contrastive objectives to improve the quality of text embeddings. Supervised ApproachesSupervised text embeddings usually perform better than their unsupervised counterparts (Gao et al., 2021). Various studies have effectively utilized supervised datasets to enhance the learning of text embeddings. In particular, Conneau et al. (2017) introduced a method that leverages supervised Natural Language Inference (NLI) tasks for this purpose. Building on a transformer backbone, USE (Cer et al., 2018) incorporates the SNLI dataset to augment unsupervised training, resulting in improved performance. Furthermore, SBERT (Reimers and Gurevych, 2019) enhances text embedding by combining BERT with a siamese architecture. Jiang et al. (2022a, 2023) proposed the use of prompt engineering to improve text embeddings. However, most existing models optimize the cosine similarity but neglect the negative effect of the saturation zone of the cosine function. To address this issue, this paper proposes a novel angle-optimized text embedding model to improve the quality of text embedding. ## 3 Methodology This section will introduce the components of the proposed angle-optimized text embedding model, including the input layer, cosine objective, in-batch negative objective, and angle objective. ### Input Layer For the input sentences, we first apply padding to ensure a consistent length \(l\). Next, we map each word to a continuous \(d\)-dimensional space to produce word embeddings \(\mathbf{e}_{i}\in\mathbb{R}^{d}\). These word embeddings are then concatenated to form the model input: \(\mathbf{E}=[\mathbf{e}_{1},\mathbf{e}_{2},\dots,\mathbf{e}_{l}]\in\mathbb{R}^ {l\times d}\). 
Subsequently, the model input is passed through an encoder such as BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), or LLaMA (Touvron et al., 2023a;b) to obtain the contextual representation \(\mathbf{X}\). ### Cosine Objective Following the prior study (Su, 2022), we employ the cosine objective function for end-to-end optimization of the cosine similarity between representations, as follows: \[\mathcal{L}_{cos}=\log\left[1+\sum_{s\left(\mathbf{X}_{i},\mathbf{X}_{j}\right)>s\left(\mathbf{X}_{m},\mathbf{X}_{n}\right)}e^{\frac{\cos\left(\mathbf{X}_{m},\mathbf{X}_{n}\right)-\cos\left(\mathbf{X}_{i},\mathbf{X}_{j}\right)}{\tau}}\right] \tag{1}\] where \(\tau\) is a temperature hyperparameter, \(\cos(\cdot)\) is the cosine similarity function, and \(s(u,v)\) is the similarity between \(u\) and \(v\). By optimizing \(\mathcal{L}_{cos}\), we expect the cosine similarity of a high-similarity pair to be greater than that of a low-similarity pair. ### In-batch Negative Objective To further improve performance, we integrate the in-batch negative objective function. In-batch negative samples can serve as a form of data augmentation, which benefits generalization. Unlike existing contrastive learning models (Gao et al., 2021; Yan et al., 2021) that generate positive samples through data augmentation, we use supervised positive samples. Recognizing that there might be identical sentences within a batch that are not explicitly labeled as positive samples, causing them to become in-batch negatives, we identify these duplicate sentences and assign them as positive samples, thereby reducing potential noise. The formulation for the in-batch negative objective function (ibn) is as follows: \[\mathcal{L}_{ibn}=-\sum_{b}\sum_{i}^{m}\log\left[\frac{e^{\cos(\mathbf{X}_{b_{i}},\mathbf{X}_{b_{i}}^{+})/\tau}}{\sum_{j}^{N}e^{\cos(\mathbf{X}_{b_{i}},\mathbf{X}_{b_{j}}^{+})/\tau}}\right], \tag{2}\] where \(\tau\) is a temperature hyperparameter, \(b\) stands for the \(b\)-th batch, \(\mathbf{X}_{b_{i}}^{+}\) and \(\mathbf{X}_{b_{j}}^{+}\) are the respective positive samples of \(\mathbf{X}_{b_{i}}\) and \(\mathbf{X}_{b_{j}}\), \(m\) represents the number of positive pairs in the \(b\)-th batch, \(N\) is the batch size, and \(\cos(\cdot)\) is the cosine similarity function. ### Angle Objective Both the cosine and in-batch negative objectives employ the cosine function to measure similarity. However, the cosine function includes saturation zones, which can hinder the optimization process. We optimize the angle difference in complex space to mitigate these adverse effects. Figure 2(a) draws the division in complex space, and Figure 2(b) depicts how angle optimization works in the cosine saturation zones. To optimize the angle difference, we define \(\mathbf{X}^{re}\) and \(\mathbf{X}^{im}\) as the real and imaginary parts of \(\mathbf{X}\). We follow the implementation of (Sun et al., 2019) and obtain \(\mathbf{X}^{re}\) and \(\mathbf{X}^{im}\) by a chunking strategy. Specifically, for the pair \((\mathbf{X}_{i},\mathbf{X}_{j})\), their representations in the complex space are defined as follows: \[\mathbf{z} =\mathbf{a}+\mathbf{b}i\in\mathbb{C} \tag{3}\] \[\mathbf{w} =\mathbf{c}+\mathbf{d}i\in\mathbb{C},\] where \(\mathbf{a}=\mathbf{X}_{i}^{re}\in\mathbb{R}\), \(\mathbf{b}=\mathbf{X}_{i}^{im}\in\mathbb{R}\), \(\mathbf{c}=\mathbf{X}_{j}^{re}\in\mathbb{R}\), and \(\mathbf{d}=\mathbf{X}_{j}^{im}\in\mathbb{R}\).
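Before continuing with the angle objective, the two objectives defined so far (Eqs. 1–2) can be written compactly in PyTorch. The snippet below is an illustrative sketch of ours, not the released implementation: it assumes that every row of the batch has one labeled positive, and it omits the identical-sentence-pair handling described above.

```python
import torch
import torch.nn.functional as F

def cosine_objective(sim, gold, tau=0.05):
    """Cosine objective (Eq. 1): sim holds cosine similarities of n text pairs,
    gold holds their gold similarity scores s(., .)."""
    sim = sim / tau
    diff = sim[None, :] - sim[:, None]        # cos(X_m, X_n) - cos(X_i, X_j)
    mask = gold[:, None] > gold[None, :]      # keep terms with s(i, j) > s(m, n)
    zero = torch.zeros(1, device=sim.device)
    return torch.logsumexp(torch.cat([zero, diff[mask]]), dim=0)   # log(1 + sum exp)

def ibn_objective(x, x_pos, tau=0.05):
    """In-batch negative objective (Eq. 2): x and x_pos are (N, d) embeddings of
    the sentences and their supervised positives; other rows act as negatives."""
    x = F.normalize(x, dim=-1)
    x_pos = F.normalize(x_pos, dim=-1)
    logits = x @ x_pos.T / tau                # cos(X_{b_i}, X_{b_j}^+) / tau
    targets = torch.arange(x.size(0), device=x.device)
    # cross_entropy averages over the batch; Eq. 2 writes the same quantity as a sum
    return F.cross_entropy(logits, targets)
```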
To compute the angle difference between \(\mathbf{z}\) and \(\mathbf{w}\), we calculate division in complex space in polar coordinates, as follows: \[\frac{\mathbf{z}}{\mathbf{w}} =\gamma\Delta\theta_{zw} \tag{4}\] \[\gamma =\frac{r_{\mathbf{z}}}{r_{\mathbf{w}}}=\frac{\sqrt{\mathbf{a}^{ 2}+\mathbf{b}^{2}}}{\sqrt{\mathbf{c}^{2}+\mathbf{d}^{2}}}\] \[\Delta\theta_{zw} =\theta_{\mathbf{z}}-\theta_{\mathbf{w}},\] where \(r_{\mathbf{z}}\) and \(r_{\mathbf{w}}\) represent the magnitudes of \(\mathbf{z}\) and \(\mathbf{w}\), while \(\theta_{\mathbf{z}}\) and \(\theta_{\mathbf{w}}\) denote the respective angles of \(\mathbf{z}\) and \(\mathbf{w}\). Next, we compute the value of \(\frac{\mathbf{z}}{\mathbf{w}}\) by the division rule in complex space, as follows: \[\frac{\mathbf{z}}{\mathbf{w}}=\frac{\mathbf{a}+\mathbf{b}i}{\mathbf{c}+ \mathbf{d}i}=\frac{(\mathbf{a}\mathbf{c}+\mathbf{b}\mathbf{d})+(\mathbf{b} \mathbf{c}-\mathbf{a}\mathbf{d})i}{\mathbf{c}^{2}+\mathbf{d}^{2}}. \tag{5}\] Figure 2: (a) Division in complex space. \(\Delta\theta\) is the angle difference between dividend \(z\) and divisor \(w\) in complex space. (b) Angle optimization in cosine saturation zones. Even though \(\Delta y\approx 0\) could kill the gradient, the corresponding angle difference in complex space is still distinct for optimization. By employing Eq. 4 and Eq. 5, we can calculate the angle difference between \(\mathbf{z}\) and \(\mathbf{w}\) by multiplying both sides by \(\frac{1}{\gamma}\), which can be seen as a normalization operation. In this paper, we determine the absolute normalized angle difference using the following expression: \[\begin{split}\Delta\theta_{zw}&=\mathrm{abs}(\frac{ \mathbf{z}}{\mathbf{w}}\times\frac{1}{\gamma})\\ &=\mathrm{abs}\left[\frac{(\mathbf{a}\mathbf{c}+\mathbf{b} \mathbf{d})+(\mathbf{b}\mathbf{c}-\mathbf{a}\mathbf{d})i}{\mathbf{c}^{2}+ \mathbf{d}^{2}}\times\frac{\sqrt{\mathbf{c}^{2}+\mathbf{d}^{2}}}{\sqrt{ \mathbf{a}^{2}+\mathbf{b}^{2}}}\right]\\ &=\mathrm{abs}\left[\frac{(\mathbf{a}\mathbf{c}+\mathbf{b} \mathbf{d})+(\mathbf{b}\mathbf{c}-\mathbf{a}\mathbf{d})i}{\sqrt{(\mathbf{c}^ {2}+\mathbf{d}^{2})(\mathbf{a}^{2}+\mathbf{b}^{2})}}\right].\end{split} \tag{6}\] Then, the angle difference can be optimized by the following objective function: \[\mathcal{L}_{angle}=\log\left[1+\sum_{s(\mathbf{X}_{i},\mathbf{X}_{j})>s( \mathbf{X}_{m},\mathbf{X}_{n})}e^{\frac{\Delta\theta_{ij}-\Delta\theta_{zm}}{ \tau}}\right] \tag{7}\] where \(\tau\) is a temperature hyperparameter and \(s(u,v)\) is the similarity between \(u\) and \(v\). By optimizing the \(\mathcal{L}_{angle}\), our objective is to minimize the normalized angle difference for pairs with high similarity compared to those with low similarity. Finally, we combine the aforementioned three objective functions in the following manner to form the final objective function: \[\mathcal{L}=w_{1}*\mathcal{L}_{cos}+w_{2}*\mathcal{L}_{ibn}+w_{3}*\mathcal{L}_ {angle}, \tag{8}\] where \(w_{1}\), \(w_{2}\), and \(w_{3}\) are constants. ## 4 Experiment ### Datasets and Evaluation Metrics Existing STS BenchmarksWe mainly evaluate our model on several widely-adopted STS datasets, namely: MRPC, QQP, QNLI 3, STS 2012-2016 (Agirre et al., 2012, 2013, 2014, 2015, 2016), SICK-R (Marelli et al., 2014), and STS-B (Cer et al., 2017). These datasets mainly consist of short text, but real-world scenarios often involve long text documents. Thus, we introduce a newly long-text dataset called _GitHub Issues Similarity Dataset_ to comprehensively evaluate the STS task. 
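Returning to the objectives of Section 3, the angle objective (Eqs. 3–7) and the combined loss (Eq. 8) can be sketched as a continuation of the PyTorch snippet above. This is again our own illustration: the reduction of the element-wise complex quotient to a single non-negative score per pair is one reasonable implementation choice, not something prescribed by the equations, and the combination assumes that every batch row is a labeled positive pair.

```python
import torch                      # as in the earlier sketch
import torch.nn.functional as F

def angle_objective(x_i, x_j, gold, tau=1.0):
    """Angle-difference objective (Eqs. 3-7). x_i, x_j: (n, d) pair embeddings
    with d even; gold: (n,) gold similarity scores used to order the pairs."""
    a, b = torch.chunk(x_i, 2, dim=-1)        # z = a + b i   (chunking strategy)
    c, d = torch.chunk(x_j, 2, dim=-1)        # w = c + d i
    re = a * c + b * d                        # real part of the quotient numerator
    im = b * c - a * d                        # imaginary part (Eq. 5)
    norm = ((a**2 + b**2).sum(-1) * (c**2 + d**2).sum(-1)).sqrt()
    # normalized magnitude, one score per pair (our reduction of Eq. 6)
    dtheta = torch.sqrt(re.sum(-1)**2 + im.sum(-1)**2 + 1e-12) / (norm + 1e-12)
    dtheta = dtheta / tau
    diff = dtheta[:, None] - dtheta[None, :]  # Δθ_ij - Δθ_mn
    mask = gold[:, None] > gold[None, :]      # terms with s(i, j) > s(m, n), Eq. 7
    zero = torch.zeros(1, device=x_i.device)
    return torch.logsumexp(torch.cat([zero, diff[mask]]), dim=0)

def combined_loss(x_i, x_j, gold, w1=1.0, w2=1.0, w3=1.0):
    """Final objective (Eq. 8), combining the three terms with constant weights."""
    sim = F.cosine_similarity(x_i, x_j, dim=-1)
    return (w1 * cosine_objective(sim, gold)
            + w2 * ibn_objective(x_i, x_j)
            + w3 * angle_objective(x_i, x_j, gold))
```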
Footnote 3: [https://gluebenchmark.com/](https://gluebenchmark.com/) GitHub Issues Similarity DatasetWe observed the presence of many duplicate issues on GitHub. Typically, the maintainers of open source organizations tend to mark these duplicate issues as closed with a comment like "closing as a duplicate of #id". Consequently, these duplicate issues inherently serve as a source of the STS task. It is also worth noting that most issues contain long texts because of the inclusion of extensive code within the issues. To compile the dataset, we extracted duplicated issues from \(55\) popular open-source projects (see A.1) on GitHub using GitHub API 4. The duplicated issues were used as positive samples, while the remaining issues were considered negative samples. Table 1 presents statistics of the GitHub Issues Similarity Dataset, while Figure 3 shows a violin plot illustrating the token-level text length distribution. The visualization reveals a substantial number of long texts. Specifically, the proportion of long texts (token length \(>512\)) for the train, validation, and test sets is at \(61.03\%\), \(60.85\%\), and \(60.50\%\), respectively. Footnote 4: [https://docs.github.com/en/rest](https://docs.github.com/en/rest) Evaluation MetricsTo ensure a fair comparison, we follow previous studies and use Spearman's correlation for evaluation. We use SentEval (Conneau and Kiela, 2018) to compute Spearman's correlation and report the results in the "all" setting, which is consistent with the baselines. ### Implementation Details In this paper, we use the pre-trained uncased BERT base model (110M parameters) as the backbone model. For a fair comparison, all BERT-based baselines also adopt this setting. We set the value of \(\tau\) for the cosine objective and the in-batch negative objective to \(0.05\), based on prior research. Additionally, we determined the value of \(\tau\) for the angle objective to be \(1.0\) through grid search. ### Main Results In this section, we will first introduce the baselines, then the results of the transfer STS tasks, then the results of the non-transfer STS tasks, and finally a summary. BaselinesWe compare our proposed model with widely used baselines, encompassing both unsupervised and supervised models. The unsupervised models are average GloVe (Pennington et al., 2014), BERT-flow (Li et al., 2020), BERT-whitening (Su et al., 2021), LLAMA2 (Touvron et al., 2023b), and contrastive learning models including IS-BERT (Zhang et al., 2020), CT-BERT (Carlson et al., 2020), SimCSE (Gao et al., 2021), ConSERT (Yan et al., 2021), and DiffCSE (Chuang et al., 2022). On the other hand, the chosen supervised models are InferSent (Conneau et al., 2017), USE (Cer et al., 2018), SBERT (Reimers and Gurevych, 2019), CoSENT (Su, 2022), as well as supervised versions of SimCSE and ConSERT. Transfer STS TasksFor a fair comparison, we train AngIE with the NLI datasets MNLI (Williams et al., 2018) and SNLI (Bowman et al., 2015) and then transfer it to evaluate seven STS benchmark datasets. The evaluation results are presented in Table 2. It is evident that AngIE-BERT and AngIE-LLaMA consistently outperform the baselines with a gain of \(0.80\%\) and \(0.72\%\) in average score, respectively, over the previous SOTA SimCSE-BERT and SimCSE-LLaMA. Note that supervised SBERT and CoSENT show lower results than other unsupervised contrastive learning models like SimCSE and DiffCSE. 
This difference might arise from the difference in data distributions between the training and test data in the transfer STS tasks. They struggle to effectively generalize to STS tasks when trained solely with NLI datasets. In contrast, contrastive learning models exhibit better generalization capabilities due to their alignment and uniformity features. AngIE, in turn, optimizes both the supervised cosine objective and the in-batch negative objective, which allows it to generalize well in transfer STS tasks. Additionally, the angle optimization in AngIE mitigates the negative impact of the saturation zone in the cosine function, producing better performance than the other baselines. Non-transfer STS TasksTo provide a comprehensive analysis, we also evaluate the performance of the baselines in the non-transfer setting. We train the baselines on the train set and evaluate them on the test or validation set. Two typical models, SimCSE and SBERT, representing contrastive and supervised learning, are compared with our model. The results of the non-transfer STS tasks are listed in Table 3, where we evaluate the baselines on four short-text datasets (MRPC, STS-B, QQP, and QNLI) and one long-text dataset (GitHub Issues Similarity Dataset). SimCSE notably performs poorly compared to SBERT and AngIE in the non-transfer setting. This is due to the limitation of the small-scale training set, as there are not enough samples for SimCSE to effectively learn representations. Furthermore, the datasets only provide pair-supervised data, namely \((x,x^{+})\) or \((x,x^{-})\), which prevents SimCSE from utilizing its hard negative objective that relies on triple-supervised data \((x,x^{+},x^{-})\). This limitation might affect its performance. \begin{table} \begin{tabular}{c||c c c} \hline \hline Split & Train & Validation & Test \\ \hline \#Positive & \(9457\) & \(774\) & \(807\) \\ \#Negative & \(9108\) & \(773\) & \(741\) \\ \hline Total & \(18565\) & \(1547\) & \(1548\) \\ \hline \hline \end{tabular} \end{table} Table 1: Statistics of the proposed GitHub Issues Similarity Dataset. #Positive denotes the count of positive pairs, and #Negative represents the number of negative pairs. Figure 3: Log token length distribution of the GitHub Issues Similarity Dataset. On the other hand, AngIE consistently outperforms SBERT, achieving an absolute gain of \(5.52\%\). This can support the idea that angle-optimized text embedding can mitigate the negative impact of the cosine function, resulting in better performance. Furthermore, we explore applying the long-text model \(\text{RAN}_{base}\) (86M parameters) (Li et al., 2023) as the backbone to test the performance on long text. The results show that AngIE-BERT outperforms AngIE-RAN across all short-text datasets. This advantage might be attributed to the larger parameter size of BERT and its proficiency in handling short texts. However, we observe a remarkable shift in long-text STS. AngIE-RAN outperforms AngIE-BERT in this scenario, suggesting that AngIE-RAN can handle long texts well despite having fewer parameters. In short, this evidence suggests AngIE's superiority in transfer and non-transfer settings, its ability to produce high-quality text embeddings, and its robustness and adaptability to different backbones. \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline Model & STS12 & STS13 & STS14 & STS15 & STS16 & STS-B & SICK-R & Avg. \\ \hline \hline \multicolumn{8}{c}{_Unsupervised Models_} \\ \hline GloVe (avg.)
† \(\dagger\) & \(55.14\) & \(70.66\) & \(59.73\) & \(68.25\) & \(63.66\) & \(58.02\) & \(53.76\) & \(61.32\) \\ BERT-flow † \(\dagger\) & \(58.40\) & \(67.10\) & \(60.85\) & \(75.16\) & \(71.22\) & \(68.66\) & \(64.47\) & \(66.55\) \\ BERT-whitening ‡ & \(57.83\) & \(66.90\) & \(60.90\) & \(75.08\) & \(71.31\) & \(68.24\) & \(63.73\) & \(66.28\) \\ IS-BERT ‡ & \(56.77\) & \(69.24\) & \(61.21\) & \(75.23\) & \(70.16\) & \(69.21\) & \(64.25\) & \(66.58\) \\ CT-BERT ‡ & \(61.63\) & \(76.80\) & \(68.47\) & \(77.50\) & \(76.48\) & \(74.31\) & \(69.19\) & \(72.05\) \\ ConSERT-BERT & \(64.64\) & \(78.49\) & \(69.07\) & \(79.72\) & \(75.95\) & \(73.97\) & \(67.31\) & \(72.74\) \\ DiffCSE-BERT & \(72.28\) & \(84.43\) & \(76.47\) & \(83.90\) & \(80.54\) & \(80.59\) & \(71.23\) & \(78.49\) \\ SimCSE-BERT & \(68.40\) & \(82.41\) & \(74.38\) & \(80.91\) & \(78.56\) & \(76.85\) & \(72.23\) & \(76.25\) \\ LLaMA2-7B \(\star\) & \(50.66\) & \(73.32\) & \(62.76\) & \(67.00\) & \(70.98\) & \(63.28\) & \(67.40\) & \(65.06\) \\ \hline \hline \multicolumn{8}{c}{_Supervised Models_} \\ \hline InferSent-GloVe † \(\dagger\) & \(52.86\) & \(66.75\) & \(62.15\) & \(72.77\) & \(66.87\) & \(68.03\) & \(65.65\) & \(65.01\) \\ USE † \(\dagger\) & \(64.49\) & \(67.80\) & \(64.61\) & \(76.83\) & \(73.18\) & \(74.92\) & \(76.69\) & \(71.22\) \\ ConSERT-BERT & \(74.07\) & \(83.93\) & \(77.05\) & \(83.66\) & \(78.76\) & \(81.36\) & \(76.77\) & \(79.37\) \\ CoSEMT-BERT † \(\star\) & \(71.35\) & \(77.52\) & \(75.05\) & \(79.68\) & \(76.05\) & \(78.99\) & \(71.19\) & \(75.69\) \\ SBERT † \(\dagger\) & \(70.97\) & \(76.53\) & \(73.19\) & \(79.09\) & \(74.30\) & \(77.03\) & \(72.91\) & \(74.89\) \\ SimCSE-BERT & \(75.30\) & \(84.67\) & \(80.19\) & \(85.40\) & \(80.82\) & \(84.25\) & \(80.39\) & \(81.57\) \\ SimCSE-LLaMA2-7B \(\star\) & \(78.39\) & \(89.95\) & \(84.80\) & \(88.50\) & \(86.04\) & \(87.86\) & \(81.11\) & \(85.24\) \\ \hline AngIE-BERT & \(75.09\) & \(85.56\) & \(80.66\) & \(86.44\) & \(82.47\) & \(85.16\) & \(81.23\) & \(82.37\) \\ AngIE-LLaMA2-7B & \(\mathbf{79.00}\) & \(\mathbf{90.56}\) & \(\mathbf{85.79}\) & \(\mathbf{89.43}\) & \(\mathbf{87.00}\) & \(\mathbf{88.97}\) & \(\mathbf{80.94}\) & \(\mathbf{85.96}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Text embedding performance on STS tasks. We report the Spearman’s correlation \(\rho\times 100\) of the “all” setting computed by SentEval. For supervised LLaMA-based models, we fine-tuned them using the LoRA (Hu et al., 2021) technique and used the prompt “_Summarize sentence \(\{\)sentence\(\}\) in one word:”_ sparked by (Jiang et al., 2023). Results marked with † are obtained from (Reimers and Gurevych, 2019), while results marked with ‡ are retrieved from (Gao et al., 2021). Additionally, results marked with † denote our own implementation using official code. For the remaining baselines, we refer to the corresponding original papers to obtain their results. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Model & MRPC & STS-B & QQP & QNLI & GitHub Issues. & Avg. 
\\ \cline{2-7} & test & test & validation & validation & test & \\ \hline SimCSE-BERT & \(48.13\) & \(76.27\) & \(65.84\) & \(33.00\) & \(60.38\) & \(56.72\) \\ SBERT & \(46.19\) & \(84.67\) & \(73.80\) & \(65.98\) & \(69.50\) & \(68.03\) \\ \hline AngIE-RAN & \(58.70\) & \(80.23\) & \(74.87\) & \(63.04\) & \(\mathbf{71.25}\) & \(69.62\) \\ AngIE-BERT & \(\mathbf{62.20}\) & \(\mathbf{86.26}\) & \(\mathbf{76.54}\) & \(\mathbf{72.19}\) & \(70.55\) & \(\mathbf{73.55}\) \\ \hline \hline \end{tabular} \end{table} Table 3: Results on the non-transfer STS tasks. All baseline results are our implementation using the official code. Spearman’s correlation (\(\rho\times 100\)) serves as the reported metric. ### Ablation Study To gain a deeper understanding of AngIE, we conducted an ablation study examining the different objectives and their effects. The results in Table 4 indicate that AngIE shows improved performance with all three objectives. In particular, we observe that AngIE experiences a greater drop in performance without the angle objective than without the in-batch negative (ibn) objective. This suggests that angle optimization is more important than ibn in improving text embedding. Additionally, we find that using the angle objective alone yields performance close to that of using the cosine objective alone, demonstrating the effectiveness of angle optimization. We also evaluated five different pooling strategies and found that the "cls" strategy performed the best. Finally, we compared the ibn objective with and without identical sentence pair (ISP) detection and found that ibn without ISP detection performs about \(0.18\%\) worse than with it. This indicates that ibn with ISP detection is effective. For the domain-specific scenarios with limited annotated data discussed in the introduction, the results show that LLM-supervised AngIE performs better than unsupervised contrastive baselines, and the ensemble of LLMs shows the best results. This evidence suggests the effectiveness of LLM-supervised learning and indicates that it can alleviate the domain-supervised data scarcity problem. Discussion of Text RetrievalWe also evaluate the performance of the text retrieval task by experimenting on the test split of the flickr30k dataset (Young et al., 2014). This dataset consists of five caption texts for each photo, and these texts are similar to each other. We use the first caption text vector to retrieve the top 5 similar sentences using faiss 5. The strict accuracies 6 of AngIE, SimCSE (supervised), and SBERT are \(12.9\%\), \(10.4\%\), and \(5.2\%\), respectively. This evidence indicates the effectiveness of using AngIE for the retrieval task. Footnote 5: [https://github.com/facebookresearch/faiss](https://github.com/facebookresearch/faiss) Footnote 6: Strict accuracy means that a query counts as correct only if all of the top five retrieved sentences are equal to the five reference sentences. Discussion of Transfer TasksIn addition, we evaluate the performance of text embedding in transfer tasks. In particular, our approach involves training text embedding on STS tasks and then transferring it to seven other kinds of tasks. Notably, AngIE outperforms the baselines, showing a significant improvement of \(4.34\%\) and \(4.48\%\) over DiffCSE and SimCSE, respectively. These results suggest that AngIE can produce better embeddings that effectively improve performance in various tasks. A more detailed description of the experiment can be found in section A.2.
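As a concrete companion to the retrieval experiment above, the following is a small sketch of ours (not the authors' code) of top-5 retrieval with faiss over precomputed caption embeddings, together with a strict-accuracy helper corresponding to footnote 6; the function and variable names are our own.

```python
import numpy as np
import faiss  # https://github.com/facebookresearch/faiss

def top_k_retrieval(query_vecs, corpus_vecs, k=5):
    """Exact top-k retrieval by cosine similarity: inner product on
    L2-normalized vectors, using a flat (brute-force) faiss index."""
    corpus = corpus_vecs / np.linalg.norm(corpus_vecs, axis=1, keepdims=True)
    queries = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
    index = faiss.IndexFlatIP(corpus.shape[1])
    index.add(corpus.astype(np.float32))
    _, ids = index.search(queries.astype(np.float32), k)
    return ids                                   # (n_queries, k) corpus indices

def strict_accuracy(retrieved_ids, reference_ids):
    """A query counts as correct only if its top-5 hits are exactly its five
    reference captions (the 'strict accuracy' of footnote 6)."""
    correct = [set(r) == set(ref) for r, ref in zip(retrieved_ids, reference_ids)]
    return sum(correct) / len(correct)
```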
Analysis of Text Embedding DistributionFigure 5 depicts the density plots of the cosine similarities between sentence pairs in the STS-B test set, to provide an intuitive visualization of the text embedding quality. Figure 5(b) displays the golden scores for the same sentence pairs. Analyzing the overall density of cosine similarities, we find that AngIE's distribution resembles the golden distribution more closely than those of SimCSE (supervised) and SBERT. Figure 5(b) illustrates a peak in the 0-1 range; however, only AngIE shows a distinct peak in this range in Figure 5(a). Also, Figure 5(b) portrays a higher peak around \(4\) than around \(4.8\) in the 4-5 range, and only AngIE reproduces this feature properly in Figure 5(a). Notably, the 0-1 and 4-5 ranges in Figure 5(b) correspond to the two saturation zones of the cosine function. This evidence suggests that AngIE can mitigate the negative effect of the saturation zones. In conclusion, AngIE produces better text embeddings, with a cosine similarity density that more closely resembles the actual distribution than those of the baselines. Figure 5: (a) Density plots of cosine similarities between sentence pairs in the STS-B test set. The pairs have been categorized into 6 groups, reflecting the ground truth ratings (where higher ratings indicate a higher degree of similarity), visually represented on the y-axis. The x-axis represents the cosine similarity. (b) Density plots of golden scores between sentence pairs in the STS-B test set. ## 5 Conclusion and Future Work In this paper, we have presented a novel text embedding model called AngIE, which optimizes the angle difference in complex space to overcome the adverse impact of the saturation zones of the cosine function, thereby improving text embeddings. To comprehensively evaluate STS tasks, we have introduced the GitHub Issues Similarity Dataset to evaluate model performance on the long-text STS task. Furthermore, we have proposed an LLM-supervised learning method to cope with the scarcity of domain-supervised data. Extensive experimental results have demonstrated that AngIE outperforms the baselines, indicating that AngIE can handle both short- and long-text STS tasks and work effectively in various scenarios. In future work, we plan to explore the application of AngIE in real-world scenarios and provide further insights into AngIE.
High-quality text embeddings are pivotal for improving semantic textual similarity (STS) tasks, which are crucial components of Large Language Model (LLM) applications. A common challenge, however, is the vanishing-gradient problem that text embedding models suffer from: it arises because they rely on the cosine function in their optimization objective, a function that has saturation zones. To address this problem, this paper proposes AnglE, an angle-optimized text embedding model. The core idea of AnglE is to introduce angle optimization in a complex space. This new approach effectively mitigates the adverse effects of the saturation zones of the cosine function, which can otherwise impede gradients and hinder the optimization process. To set up a comprehensive STS evaluation, we experimented on existing short-text STS datasets and a newly collected long-text STS dataset from GitHub Issues.
2308.00184
Attribution-Scores in Data Management and Explainable Machine Learning
We describe recent research on the use of actual causality in the definition of responsibility scores as explanations for query answers in databases, and for outcomes from classification models in machine learning. In the case of databases, useful connections with database repairs are illustrated and exploited. Repairs are also used to give a quantitative measure of the consistency of a database. For classification models, the responsibility score is properly extended and illustrated. The efficient computation of Shap-score is also analyzed and discussed. The emphasis is placed on work done by the author and collaborators.
Leopoldo Bertossi
2023-07-31T22:41:17
http://arxiv.org/abs/2308.00184v1
# Attribution-Scores in Data Management and Explainable Machine Learning+ ###### Abstract We describe recent research on the use of actual causality in the definition of responsibility scores as explanations for query answers in databases, and for outcomes from classification models in machine learning. In the case of databases, useful connections with database repairs are illustrated and exploited. Repairs are also used to give a quantitative measure of the consistency of a database. For classification models, the responsibility score is properly extended and illustrated. The efficient computation of Shap-score is also analyzed and discussed. The emphasis is placed on work done by the author and collaborators. ## 1 Introduction In data management and artificial intelligence, and machine learning in particular, one wants _explanations_ for certain results. For example, for query answers and inconsistency in databases. In machine learning (ML), one wants explanations for automated classification results, and automated decision making. Explanations, that may come in different forms, have been the subject of philosophical enquires for a long time, but, closer to our discipline, they appear under different forms in model-based diagnosis, and in causality as developed in artificial intelligence. In the last few years, explanations that are based on _numerical scores_ assigned to elements of a model that may contribute to an outcome have become popular. These _attribution scores_ attempt to capture the degree of contribution of those components to an outcome, e.g. answering questions like these: What is the contribution of this tuple to the answer to this query? Or, what is the contribution of this feature value for the label assigned to this input entity? In this article, we survey some of the recent advances on the definition, use and computation of the above mentioned score-based explanations for query answering in databases, and for outcomes from classification models in ML. This is not intended to be an exhaustive survey of the area. Instead, it is heavily influenced by our latest research. To introduce the concepts and techniques we will use mostly examples, trying to convey the main intuitions and issues. Different scores have been proposed in the literature, and some that have a relatively older history have been applied. Among the latter we find the general _responsibility score_ as found in _actual causality_[23, 18, 24]. For a particular kind of application, one has to define the right causality setting, and then apply the responsibility measure to the participating variables. In data management, responsibility has been used to quantify the strength of a tuple as a cause for a query result [35, 6]. The responsibility score, \(Resp\), is based on the notions of _counterfactual intervention_ as appearing in actual causality. More specifically, (potential) executions of _counterfactual interventions_ on a _structural logic-probabilistic model_[23] are considered, with the purpose of answering hypothetical questions of the form: _What would happen if we change...?_. Database repairs are commonly used to define and obtain semantically correct query answers from a database (DB) that may fail to satisfy a given set of integrity constraints (ICs) [5]. A connection between repairs and actual causality in DBs has been used to obtain complexity results and algorithms for responsibility [6] (see Section 2). 
Actual causality and responsibility can also be defined at the attribute-level in DBs (rather than at the tuple-level) [8]. We briefly describe this alternative in Section 2.1. On the basis of database repairs, a measure (or global score) to quantify the degree of inconsistency of a DB was introduced in [7]. We give the main ideas in Section 3. The _Resp_ score has also been applied to define scores for binary attribute values in classification [10]. However, it has to be generalized when dealing with non-binary features [9]. We describe this generalization in Section 4.1. The Shapley value of _coalition game theory_[43, 39] can be (and has been) used to define attribution scores, in particular in DBs. The main idea is that _several tuples together_, much like players in a coalition game, are necessary to produce a query result. Some may contribute more than others to the _wealth distribution function_ (or simply, game function), which in this case becomes the query result, namely \(1\) or \(0\) if the query is Boolean, or a number in the case of an aggregation query. This use of Shapley value was developed in [29, 30]. See also [14] for a more recent and general discussion of the use of the Shapley value in DBs. The Shapley value has also been used to define explanation scores to feature values in ML-based classification, taking the form of the _Shap_ score [33]. Since its computation is bound to be intractable in general, there has been recent research on classes of models for which _Shap_ becomes tractable [3, 4, 47]. See Section 4.2. There hasn't been much research on the use or consideration of domain knowledge (or domain semantics) in the definition and computation of attribution scores. In Section 5, we describe some problems that emerge in this context. This article has [15] as a companion, which concentrates mostly on data management. It delves into the _Resp_ score under ICs, and on the use of the Shapley value for query answering in DBs. That paper also discusses the _Causal Effect_ score applied in data management [41]. It is also based on causality, as it appears mainly in _observational studies_[40, 25, 38]. ## 2 Causal Responsibility in Databases Before going into the subject, we recall some notions and notation used in data management. A relational schema \(\mathcal{R}\) contains a domain of constants, \(\mathcal{C}\), and a set of predicates of finite arities, \(\mathcal{P}\). \(\mathcal{R}\) gives rise to a language \(\mathfrak{L}(\mathcal{R})\) of first-order (FO) predicate logic with built-in equality, \(=\). Variables are usually denoted with \(x,y,z,...\); and finite sequences thereof with \(\bar{x},...\); and constants with \(a,b,c,...\), etc. An _atom_ is of the form \(P(t_{1},\ldots,t_{n})\), with \(n\)-ary \(P\in\mathcal{P}\) and \(t_{1},\ldots,t_{n}\)_terms_, i.e. constants, or variables. An atom is _ground_ (a.k.a. a tuple) if it contains no variables. A database (instance), \(D\), for \(\mathcal{R}\) is a finite set of ground atoms; and it serves as an interpretation structure for \(\mathfrak{L}(\mathcal{R})\). A _conjunctive query_ (CQ) is a FO formula, \(\mathcal{Q}(\bar{x})\), of the form \(\exists\bar{y}\ (P_{1}(\bar{x}_{1})\wedge\cdots\wedge P_{m}(\bar{x}_{m}))\), with \(P_{i}\in\mathcal{P}\), and (distinct) free variables \(\bar{x}:=(\bigcup\bar{x}_{i})\smallsetminus\bar{y}\). If \(\mathcal{Q}\) has \(n\) (free) variables, \(\bar{c}\in\mathcal{C}^{n}\) is an _answer_ to \(\mathcal{Q}\) from \(D\) if \(D\models\mathcal{Q}[\bar{c}]\), i.e. 
\(Q[\bar{c}]\) is true in \(D\) when the variables in \(\bar{x}\) are componentwise replaced by the values in \(\bar{c}\). \(\mathcal{Q}(D)\) denotes the set of answers to \(\mathcal{Q}\) from \(D\). \(\mathcal{Q}\) is a _Boolean conjunctive query_ (BCQ) when \(\bar{x}\) is empty; and when _true_ in \(D\), \(\mathcal{Q}(D):=\{\textit{true}\}\). Otherwise, it is _false_, and \(\mathcal{Q}(D):=\emptyset\). We will consider only conjunctive queries, which are the most common in data management. Integrity constraints (ICs) are sentences of \(\mathfrak{L}(\mathcal{R})\) that a DB is expected to satisfy. Here, we consider mainly _denial constraints_ (DCs), i.e. of the form \(\kappa\colon\neg\exists\bar{x}(P_{1}(\bar{x}_{1})\wedge\cdots\wedge P_{m}(\bar{x}_{m}))\), where \(P_{i}\in\mathcal{P}\), and \(\bar{x}=\bigcup\bar{x}_{i}\). If an instance \(D\) does not satisfy the set \(\Sigma\) of ICs associated to the schema, we say that \(D\) is _inconsistent_, denoted with \(D\not\models\Sigma\). Now we move into the proper subject of this section. In data management we need to understand and compute _why_ certain results are obtained or not, e.g. query answers, violations of semantic conditions, etc.; and we expect a database system to provide _explanations_. Here, we will consider explanations that are based on _actual causality_[23, 24]. They were introduced in [35, 36], and will be illustrated by means of an example. _Example 1_.: Consider the database \(D\) shown below, and the Boolean conjunctive query (BCQ) \(\mathcal{Q}\colon\ \exists x\exists y(S(x)\wedge R(x,y)\wedge S(y))\), which is true in \(D\). \[\begin{array}{c|c c}\hline R&A&B\\ \hline &a&b\\ &c&d\\ &b&b\\ \hline\end{array}\qquad\qquad\begin{array}{c|c}\hline S&C\\ \hline &a\\ &c\\ &b\\ \hline\end{array}\] We ask about the causes for \(\mathcal{Q}\) to be true. A tuple \(\tau\in D\) is a _counterfactual cause_ for \(\mathcal{Q}\) (being true in \(D\)) if \(D\models\mathcal{Q}\) and \(D\smallsetminus\{\tau\}\not\models\mathcal{Q}\). In this example, \(S(b)\) is a counterfactual cause for \(\mathcal{Q}\): If \(S(b)\) is removed from \(D\), \(\mathcal{Q}\) is no longer true. Removing a single tuple may not be enough to invalidate the query. Accordingly, a tuple \(\tau\in D\) is an _actual cause_ for \(\mathcal{Q}\) if there is a _contingency set_ \(\Gamma\subseteq(D\smallsetminus\{\tau\})\), such that \(\tau\) is a counterfactual cause for \(\mathcal{Q}\) in \(D\smallsetminus\Gamma\). That is, \(D\models\mathcal{Q}\), \(D\smallsetminus\Gamma\models\mathcal{Q}\), but \(D\smallsetminus(\Gamma\cup\{\tau\})\not\models\mathcal{Q}\). In this example, \(R(a,b)\) is not a counterfactual cause for \(\mathcal{Q}\), but it is an actual cause with contingency set \(\{R(b,b)\}\): If \(R(b,b)\) is removed from \(D\), \(\mathcal{Q}\) is still true, but further removing \(R(a,b)\) makes \(\mathcal{Q}\) false. Notice that every counterfactual cause is also an actual cause, with empty contingency set. Actual causes that are not counterfactual causes need company to invalidate a query result. Now, we ask how strong tuples are as actual causes.
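Before addressing that question, the notions just introduced can be made concrete with a small brute-force computation. The following Python sketch is ours (not from the paper): it enumerates, for the database and query of Example 1, the actual causes together with the size of a smallest contingency set for each.

```python
from itertools import combinations

# Database of Example 1, with relations represented as sets of tuples
R = {("a", "b"), ("c", "d"), ("b", "b")}
S = {"a", "c", "b"}

def query_true(db):
    """Q: exists x, y ( S(x) and R(x, y) and S(y) ), over db = (R, S)."""
    r, s = db
    return any(x in s and y in s for (x, y) in r)

def remove(db, tuples):
    r, s = db
    return (r - {t for (p, t) in tuples if p == "R"},
            s - {t for (p, t) in tuples if p == "S"})

def actual_causes(db):
    """Map each actual cause to the size of a smallest contingency set."""
    facts = [("R", t) for t in db[0]] + [("S", t) for t in db[1]]
    causes = {}
    for tau in facts:
        others = [t for t in facts if t != tau]
        for k in range(len(others) + 1):            # try smallest Gamma first
            for gamma in combinations(others, k):
                d1 = remove(db, gamma)
                if query_true(d1) and not query_true(remove(d1, [tau])):
                    causes[tau] = k
                    break
            if tau in causes:
                break
    return causes

print(actual_causes((R, S)))
# S(b) gets a smallest contingency set of size 0 (a counterfactual cause), while
# R(a,b), R(b,b) and S(a) get size 1; R(c,d) and S(c) are not causes at all.
```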
To answer this question, we appeal to the _responsibility_ of an actual cause \(\tau\) for \(\mathcal{Q}\)[35], defined by: \[\textit{Resp}^{\mathcal{Q}}_{{}_{D}}(\tau)\ :=\ \frac{1}{|\Gamma|\ +\ 1},\] where \(|\Gamma|\) is the size of a smallest contingency set, \(\Gamma\), for \(\tau\); and \(\textit{Resp}^{\mathcal{Q}}_{{}_{D}}(\tau):=0\) when \(\tau\) is not an actual cause. _Example 2._ (ex. 1 cont.) The responsibility of \(R(a,b)\) is \(\frac{1}{2}=\frac{1}{1+1}\) (all of its smallest contingency sets have size \(1\)). \(R(b,b)\) and \(S(a)\) are also actual causes with responsibility \(\frac{1}{2}\); and \(S(b)\) is an actual (in fact, counterfactual) cause with responsibility \(1=\frac{1}{1+0}\). \(\square\) We can see that causes are, in this database context, tuples that come with their responsibilities as "scores". It turns out that there are precise connections between _database repairs_ and tuples as actual causes for query answers in databases. These connections were exploited to obtain complexity results for responsibility [6] (among other uses, e.g. to obtain answer-set programs for the specification and computation of actual causes and responsibility [8]). The notion of _repair_ of a relational database was introduced in order to formalize the notion of _consistent query answering_ (CQA) [2, 5]: If a database \(D\) is inconsistent, in the sense that it does not satisfy a given set of integrity constraints, _ICs_, and a query \(\mathcal{Q}\) is posed to \(D\), what are the meaningful, or consistent, answers to \(\mathcal{Q}\) from \(D\)? They are sanctioned as those that hold (are returned as answers) from _all_ the _repairs_ of \(D\). The repairs of \(D\) are consistent instances \(D^{\prime}\) (over the same schema as \(D\)), i.e. \(D^{\prime}\models\textit{ICs}\), that _minimally depart_ from \(D\). _Example 3._ Let us consider the following set of _denial constraints_ (DCs) and a database \(D\), whose relations (tables) are shown right below. \(D\) is inconsistent, because it violates the DCs: it satisfies the joins that are prohibited by the DCs. \[\begin{array}{l}\neg\exists x\exists y(P(x)\wedge Q(x,y))\\ \neg\exists x\exists y(P(x)\wedge R(x,y))\end{array}\qquad\qquad\begin{array}{c|c}\hline P&A\\ \hline &a\\ &e\\ \hline\end{array}\qquad\begin{array}{c|c c}\hline Q&A&B\\ \hline &a&b\\ \hline\end{array}\qquad\begin{array}{c|c c}\hline R&A&C\\ \hline &a&c\\ \hline\end{array}\] We want to repair the original instance by _deleting tuples_ from relations. Notice that, for DCs, insertions of new tuples will not restore consistency. We could change (update) attribute values though, a possibility that has been investigated in [8]. Here we have two _subset-repairs_, a.k.a. _S-repairs_. They are subset-maximal consistent subinstances of \(D\): \(D_{1}=\{P(e),Q(a,b),R(a,c)\}\) and \(D_{2}=\{P(e),\,P(a)\}\). They are consistent subinstances of \(D\), and any proper superset of them (still contained in \(D\)) is inconsistent. (In general, we will represent database relations as sets of tuples.) We also have _cardinality repairs_, a.k.a. _C-repairs_. They are consistent subinstances of \(D\) that minimize the _number_ of tuples by which they differ from \(D\). That is, they are maximum-cardinality consistent subinstances. In this case, only \(D_{1}\) is a C-repair. Every C-repair is an S-repair, but not necessarily the other way around.
\(\square\) Let us now consider a BCQ \[\mathcal{Q}\colon\exists\bar{x}(P_{1}(\bar{x}_{1})\wedge\cdots\wedge P_{m}(\bar{x}_{m})), \tag{1}\] which we assume is true in a database \(D\). It turns out that we can obtain the causes for \(\mathcal{Q}\) to be true in \(D\), and their contingency sets, from database repairs. In order to do this, notice that \(\neg\mathcal{Q}\) becomes a DC \[\kappa(\mathcal{Q})\colon\neg\exists\bar{x}(P_{1}(\bar{x}_{1})\wedge\cdots\wedge P_{m}(\bar{x}_{m})). \tag{2}\] \(\mathcal{Q}\) holds in \(D\) iff \(D\) is inconsistent w.r.t. \(\kappa(\mathcal{Q})\). S-repairs are associated to causes with minimal contingency sets, while C-repairs are associated to causes for \(\mathcal{Q}\) with minimum contingency sets, and maximum responsibilities [6]. In fact, for a database tuple \(\tau\in D\): 1. \(\tau\) is an actual cause for \(\mathcal{Q}\) with subset-minimal contingency set \(\Gamma\) iff \(D\smallsetminus(\Gamma\cup\{\tau\})\) is an S-repair (w.r.t. \(\kappa(\mathcal{Q})\)), in which case its responsibility is \(\frac{1}{1+|\Gamma|}\). 2. \(\tau\) is an actual cause with minimum-cardinality contingency set \(\Gamma\) iff \(D\smallsetminus(\Gamma\cup\{\tau\})\) is a C-repair, in which case \(\tau\) is a maximum-responsibility actual cause. Conversely, repairs can be obtained from causes and their contingency sets [6]. These results can be extended to unions of BCQs (UBCQs), or equivalently, to sets of denial constraints. One can exploit the connection between causes and repairs to understand the computational complexity of the former by leveraging existing results for the latter. Beyond the fact that computing or deciding actual causes can be done in polynomial time in data for CQs and unions of CQs (UCQs) [35, 6], one can show that most computational problems related to responsibility are intractable, because they are also intractable for repairs, in particular, for C-repairs (all this in data complexity) [32]. In particular, one can prove [6]: (a) The _responsibility problem_, about deciding if a tuple has responsibility above a certain threshold, is _NP_-complete for UCQs. However, on the positive side, this problem is _fixed-parameter tractable_ (equivalently, belongs to the _FPT_ class) [21], with the parameter being the inverse of the threshold. (b) Computing \(\mathit{Resp}_{D}^{\mathcal{Q}}(\tau)\) is \(\mathit{FP}^{NP(\mathit{log}(n))}\)-complete for BCQs. This is the _functional_, non-decision, version of the responsibility problem. The complexity class involved is that of computational problems that use polynomial time with a logarithmic number of calls to an oracle in _NP_. (c) Deciding if a tuple \(\tau\) is a most responsible cause is \(P^{\mathit{NP}(\mathit{log}(n))}\)-complete for BCQs. The complexity class is as the previous one, but for decision problems [1]. For further illustration, property (b) right above tells us that there is a database schema and a Boolean conjunctive query \(\mathcal{Q}\), such that computing the responsibility of a tuple in an instance \(D\), as an actual cause for \(\mathcal{Q}\) being true in \(D\), is \(\mathit{FP}^{NP(\mathit{log}(n))}\)-hard in \(|D|\). This is due to the fact that, through the C-repair connection, determining the responsibility of a tuple becomes the problem, on hyper-graphs, of determining the size of a minimum vertex cover that contains the tuple as a vertex (among all vertex covers that contain the vertex) [6, sec. 6]. This latter problem was first investigated in the context of C-repairs w.r.t.
denial constraints. Those repairs can be characterized in terms of hyper-graphs [32] (see Section 3 for a simple example). This hyper-graph connection also allows us to obtain the FPT result in (a) above, because we are dealing with hyper-edges whose size has a fixed upper bound, given by the number of atoms in the denial constraint associated to the conjunctive query \(\mathcal{Q}\) (see [6, sec. 6.1] for more details). Notice that a class of database repairs determines a _possible-world semantics_ for database consistency restoration. We could use in principle any reasonable notion of distance between database instances, with each choice defining a particular _repair semantics_. S-repairs and C-repairs are examples of repair semantics. By choosing alternative repair semantics and an associated notion of counterfactual intervention, we can also define alternative notions of actual causality in databases and associated responsibility measures [8]. We will show an example in Section 2.1. ### Attribute-Level Causal Responsibility in Databases Causality and responsibility in databases can be extended to the attribute-value level; and can also be connected with appropriate forms of database repairs [6, 8]. We will develop this idea in the light of an example, with a particular repair semantics, and we will apply it to define attribute-level causes for query answering, i.e. we are interested in attribute values in tuples rather than in whole tuples. The repair semantics we use here is very natural, but others could be used instead. Example 4: Consider the database \(D\), with tids, and the query \(\mathcal{Q}\colon\ \exists x\exists y(S(x)\wedge R(x,y)\wedge S(y))\) of Example 1, and the associated denial constraint \(\kappa(\mathcal{Q}):\ \neg\exists x\exists y(S(x)\wedge R(x,y)\wedge S(y))\). \[\begin{array}{c|c|c}\hline R&\text{A}&\text{B}\\ \hline t_{1}&a&b\\ t_{2}&c&d\\ t_{3}&b&b\\ \hline\end{array}\qquad\qquad\begin{array}{c|c}\hline S&\text{C}\\ \hline t_{4}&a\\ t_{5}&c\\ t_{6}&b\\ \hline\end{array}\] We consider repairs of \(D\) w.r.t. \(\kappa(\mathcal{Q})\). Repairs will be obtained by "minimally" replacing attribute values by NULL, as in SQL databases, which cannot be used to satisfy a join. In this case, minimality means that _the set_ of values changed by NULL is minimal under set inclusion.
These are two different minimal repairs: \[\begin{array}{c|c|c}\hline R&\text{A}&\text{B}\\ \hline t_{1}&a&b\\ t_{2}&c&d\\ t_{3}&b&b\\ \hline\end{array}\qquad\begin{array}{c|c}\hline S&\text{C}\\ \hline t_{4}&a\\ t_{5}&c\\ t_{6}&\text{NULL}\\ \hline\end{array}\qquad\qquad\begin{array}{c|c|c}\hline R&\text{A}&\text{B}\\ \hline t_{1}&a&\text{NULL}\\ t_{2}&c&d\\ t_{3}&b&\text{NULL}\\ \hline\end{array}\qquad\begin{array}{c|c}\hline S&\text{C}\\ \hline t_{4}&a\\ t_{5}&c\\ t_{6}&b\\ \hline\end{array}\] The first repair replaces the value \(b\) in tuple \(t_{6}\) by NULL; the second replaces the values of attribute \(B\) in tuples \(t_{1}\) and \(t_{3}\) by NULL. \(\square\) More generally, one could consider all the possible updates drawn from an attribute domain for a given value in a given tuple in the
database, as well as for potential contingency sets for it. If the attribute domains contain NULL, this update semantics would generalize the one illustrated in Example 4. As earlier in this section, the purpose of all this would be to counterfactually (and minimally) invalidate the query answer. On the repair side, we could consider _update-repairs_ [48] to restore the satisfaction of the associated denial constraint. In this more general situation, where we may depart from the "binary" case considered so far (i.e. the tuple is in the DB or not, or a value is replaced by a null or not), we have to be more careful: Some value updates on a tuple may invalidate the query and some others may not. So, what if only one such update invalidates the query, but all the others do not? This would not be reflected in the responsibility score as defined so far. We could generalize the responsibility score by bringing into the picture the average (or expected) value of the query (of \(1\)s or \(0\)s for true or false) in the so-intervened database. This idea has been developed in the context of explanation scores for ML-based classification [9]. See Section 4.1 for details. ## 3 Measuring Database Inconsistency A database \(D\) is expected to satisfy a given set of integrity constraints (ICs), \(\Sigma\), that come with the database schema. However, databases may be inconsistent in that those ICs are not satisfied. A natural question is: _To what extent, or how inconsistent, is \(D\) w.r.t. \(\Sigma\), in quantitative terms?_ This problem is about defining a _global numerical score_ for the database, to capture its "degree of inconsistency". This number can be interesting _per se_, as a measure of data quality (or a certain aspect of it), and could also be used to compare two databases (for the same schema) w.r.t. (in)consistency. In [7], a particular and natural _inconsistency measure_ (IM) was introduced and investigated. More specifically, the measure was inspired by the \(g_{3}\)-measure used for functional dependencies (FDs) in [28], and was reformulated and generalized _in terms of a class of database repairs_. Once a particular measure is defined, an interesting research program can be developed on its basis, in particular around its mathematical properties and the algorithmic and complexity issues related to its computation. In [7], in addition to algorithms, complexity results, approximations for hard cases of the IM computation, and the dynamics of the IM under updates, _answer-set programs_ were proposed for the computation of this measure. In the rest of this section, we will use the notions and notation introduced in Example 3. For a database \(D\) and a set of _denial constraints_ \(\Sigma\) (this is not essential, but to fix ideas), we have the classes of subset-repairs (or S-repairs) and cardinality-repairs (or C-repairs), denoted \(\mathit{Srep}(D,\Sigma)\) and \(\mathit{Crep}(D,\Sigma)\), resp. We then consider the following inconsistency measure: \[\mathit{inc\text{-}\mathit{deg}}^{C}(D,\Sigma):=\frac{|D|-\mathit{max}\{|D^{\prime}|\;:\;D^{\prime}\in\mathit{Crep}(D,\Sigma)\}}{|D|}.\] It is easy to verify that replacing C-repairs by S-repairs gives the same measure. It is clear that \(0\leq\mathit{inc\text{-}\mathit{deg}}^{C}(D,\Sigma)\leq 1\), with value \(0\) when \(D\) is consistent. Notice that one could use other repair semantics instead of C-repairs [7]. Example 5 (example 3 cont.): Here, \(\mathit{Srep}(D,\Sigma)=\{D_{1},D_{2}\}\) and \(\mathit{Crep}(D,\Sigma)=\{D_{1}\}\).
It holds: \(\mathit{inc\text{-}\mathit{deg}}^{C}(D,\Sigma)=\frac{4-|D_{1}|}{4}=\frac{1}{4}\). \(\square\) It turns out that complexity and efficient computation results, when they exist, can be obtained via C-repairs, for which we end up confronting graph-theoretic problems. Actually, C-repairs are in one-to-one correspondence with maximum-size independent sets in hyper-graphs, whose vertices are the database tuples, and hyper-edges are formed by tuples that jointly violate an IC [32]. Example 6. Consider the database \(D=\{A(a),B(a),C(a),D(a),E(a)\}\), which is inconsistent w.r.t. the set of DCs: \[\Sigma=\{\neg\exists x(B(x)\wedge E(x)),\ \neg\exists x(B(x)\wedge C(x)\wedge D (x)),\ \neg\exists x(A(x)\wedge C(x))\}.\] We obtain the corresponding _conflict hyper-graph_ (CHG), where tuples become the nodes, and a hyper-edge connects tuples that together violate a DC; here the hyper-edges are \(\{B(a),E(a)\}\), \(\{B(a),C(a),D(a)\}\) and \(\{A(a),C(a)\}\). S-repairs are maximal independent sets: \(D_{1}=\{B(a),C(a)\}\), \(D_{2}=\{C(a),D(a),E(a)\}\), \(D_{3}=\{A(a),B(a),D(a)\}\); and the C-repairs are \(D_{2},\ D_{3}\). There is a connection between C-repairs and _hitting-sets_ (HS) of the hyper-edges of the CHG [32]: The removal from \(D\) of the vertices in a minimum-size _hitting set_ produces a C-repair. The connections between hitting-sets in hyper-graphs and C-repairs can be exploited for algorithmic purposes, and to obtain complexity and approximation results [7]. In particular, the complexity of computing \(\mathit{inc\text{-}deg}^{C}(D,\Sigma)\) for DCs belongs to \(\mathit{FP}^{NP(\mathit{log}(n))}\), in data complexity. Furthermore, there is a relational schema and a set of DCs \(\Sigma\) for which computing \(\mathit{inc\text{-}deg}^{C}(D,\Sigma)\) is \(\mathit{FP}^{NP(\mathit{log}(n))}\)-complete. Inconsistency measures have been introduced and investigated in knowledge representation, but mainly for propositional theories [26, 22]. In databases, it is more natural to take into account the different nature of the two components involved: the database, a structure, and the ICs, a set of first-order formulas. It is also important to consider the asymmetry: it is the database that is inconsistent or not, not the combination with the ICs. Furthermore, the relevant issues that are usually related to data management have to do with algorithms and computational complexity, in particular in terms of the size of the database (which tends to be by far the largest parameter). Scores for _individual tuples_ about their contribution to inconsistency can be obtained through responsibility scores for query answering, because every IC gives rise to a "violation view" (a query). Shapley values have also been applied to measure the contribution of tuples to the inconsistency of a database w.r.t. FDs [31]. ## 4 Attribution Scores in Machine Learning Scores as explanations for outcomes from machine learning (ML) models have also been proposed. In this section we discuss some aspects of two of them, the _Resp_ and the _Shap_ scores. Example 7. Consider a client of a bank who is applying for a loan. The bank will process his/her application by means of an ML system that will decide if the client should be given the loan or not. In order for the application to be computationally processed, the client is represented as an _entity_, say \(\mathbf{e}=\langle\mathsf{john},\,\mathsf{18},\mathsf{plumber},\,\mathsf{70K},\, \mathsf{harlem},\,\mathsf{10K},\,\mathsf{basic}\rangle\), that is, a finite record of _feature values_.
The set of features is \(\mathcal{F}=\{\mathsf{Name},\,\mathsf{Age},\,\mathsf{Activity},\,\mathsf{Income},\,\mathsf{Location},\,\mathsf{Debt},\,\mathsf{EdLevel}\}\). The bank uses a _classifier_, \(\mathcal{C}\), which, after receiving input \(\mathbf{e}\), returns a _label_: _Yes_ or _No_ (or \(0\) or \(1\)). In this case, it returns _No_, indicating that the loan request is rejected. The client (or the bank executive) asks _"Why?"_, and would like to have an explanation. What kind of explanation? How could it be provided? From what? There are different ways of building such a classifier, and depending on the resulting model, the classifier may be less or more "interpretable". For example, complex neural networks are considered to be difficult to interpret, and become "black-boxes", whereas more explicit models, such as decision trees, are considered to be much more interpretable, and "open-boxes" for that matter. In situations such as that in Example 7, actual causality and responsibility have been applied to provide _counterfactual explanations_ for classification results, and scores for them. In order to do this, having access to the internal components of the classifier is not needed, but only its input/output relation. Example 8.: (example 7 cont.) The entity \(\mathbf{e}\ =\ \langle\mathsf{john},\,\mathsf{18},\mathsf{plumber},\,\mathsf{70K},\, \mathsf{harlem},\,\mathsf{10K},\,\mathsf{basic}\rangle\) received the label \(1\) from the classifier, indicating that the loan is not granted.1 In order to identify counterfactual explanations, we intervene the feature values, replacing them by alternative ones from the feature domains, e.g. \(\mathbf{e}_{1}\ =\ \langle\mathsf{john},\,\mathsf{25},\mathsf{plumber},\, \mathsf{70K},\,\mathsf{harlem},\,\mathsf{10K},\,\mathsf{basic}\rangle\), which receives the label \(0\). The counterfactual version \(\mathbf{e}_{2}=\langle\mathsf{john},\,\mathsf{18},\,\mathsf{plumber},\, \mathsf{90K},\,\mathsf{brooklyn},\,\mathsf{10K},\,\mathsf{basic}\rangle\) also gets label \(0\). Assuming, in the latter case, that none of the single changes alone switches the label, we could say that \(\mathsf{Age}=\mathsf{18}\), and also \(\mathsf{Income}=\mathsf{70K}\) with contingency \(\mathsf{Location}=\mathsf{harlem}\) (and the other way around), in \(\mathbf{e}\) are (minimal) counterfactual explanations, by being actual causes for the label. Footnote 1: We are assuming that classifiers are _binary_, i.e. they return labels \(0\) or \(1\). For simplicity and uniformity, but without loss of generality, we will assume that label \(1\) is the one we want to explain. We could go one step beyond, and define responsibility scores: \(Resp(\mathbf{e},\mathsf{Age}):=1\), and \(Resp(\mathbf{e},\mathsf{Income}):=\frac{1}{2}\) (due to the additional, required, contingent change). This choice does reflect the causal strengths of attribute values in \(\mathbf{e}\). However, it could be the case that only by changing the value of \(\mathsf{Age}\) to \(\mathsf{25}\) do we manage to switch the label, whereas for all the other possible values for \(\mathsf{Age}\) (while nothing else changes), the label is always _No_. It seems more reasonable to redefine responsibility by considering an average involving all the possible labels obtained in this way. An application of the responsibility score similar to that in [35, 6] works fine for explanation scores when features are _binary_, i.e. taking two values, e.g. \(0\) or \(1\) [10]. Even in this case, responsibility computation can be intractable [10].
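To make the binary case concrete, the following is a minimal sketch (the toy classifier, the helper name `resp_binary` and the example entities are illustrative assumptions, not taken from [10]) of computing the responsibility of a binary feature value by brute-force search over contingency sets, smallest first.

```python
from itertools import combinations

def resp_binary(label, e, i):
    """Responsibility of binary feature value e[i] for the outcome label(e) = 1.

    label : black-box classifier, a callable on tuples of 0/1 feature values
    e     : entity, a tuple of 0/1 values with label(e) == 1
    i     : index of the feature value to be explained
    Returns 1/(1+|Gamma|) for a minimum-size contingency set Gamma, or 0.0 if
    no contingent flips make a flip of feature i decisive for the label.
    """
    def flip(entity, idxs):
        return tuple(1 - v if k in idxs else v for k, v in enumerate(entity))

    others = [k for k in range(len(e)) if k != i]
    for size in range(len(others) + 1):          # try the smallest contingency sets first
        for gamma in combinations(others, size):
            e_gamma = flip(e, set(gamma))
            if label(e_gamma) != 1:              # contingent changes alone must keep the label
                continue
            if label(flip(e_gamma, {i})) == 0:   # additionally flipping feature i switches it
                return 1.0 / (1 + size)
    return 0.0

# toy classifier, purely illustrative: "reject" iff (x1 or x2) and x3
clf = lambda x: int((x[0] or x[1]) and x[2])
print(resp_binary(clf, (1, 0, 1), 0))   # 1.0: flipping x1 alone switches the label
print(resp_binary(clf, (1, 1, 1), 0))   # 0.5: x1 needs x2 as a contingency
```

The search over contingency sets is precisely what makes the computation expensive in general, in line with the intractability just mentioned.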
However, as mentioned in the previous example, when features have more than two values, it makes sense to extend the definition of the responsibility score. ### The Generalized \(Resp\) Score In [9], a generalized \(Resp\) score was introduced and investigated. We describe it next in intuitive terms, appealing to Example 8. 1. For an entity \(\mathbf{e}\) classified with label \(L(\mathbf{e})=1\), and a feature \(F^{\star}\), whose value \(F^{\star}(\mathbf{e})\) appears in \(\mathbf{e}\), we want a numerical responsibility score \(Resp(\mathbf{e},F^{\star})\), characterizing the causal strength of \(F^{\star}(\mathbf{e})\) for outcome \(L(\mathbf{e})\). In the example, \(F^{\star}=\mathsf{Income}\), \(F^{\star}(\mathbf{e})=\mathsf{70K}\), and \(L(\mathbf{e})=1\). 2. While we keep the original value for \(\mathsf{Income}\) fixed, we start by defining a "local" score for a fixed contingent assignment \(\Gamma:=\bar{w}\), with \(F^{\star}\notin\Gamma\subseteq\mathcal{F}\). We define \(\mathbf{e}^{\Gamma,\bar{w}}:=\mathbf{e}[\Gamma:=\bar{w}]\), the entity obtained from \(\mathbf{e}\) by changing (or redefining) its values for features in \(\Gamma\), according to \(\bar{w}\). In the example, it could be \(\Gamma=\{\mathsf{Location}\}\), and \(\bar{w}:=\langle\mathsf{brooklyn}\rangle\), a contingent (new) value for \(\mathsf{Location}\). Then, \(\mathbf{e}^{\{\mathsf{Location}\},\langle\mathsf{brooklyn}\rangle}=\mathbf{e}[\mathsf{Location}:=\mathsf{brooklyn}]=\langle\mathsf{john},\mathsf{18},\mathsf{plumber},\mathsf{70K},\,\mathsf{brooklyn},\mathsf{10K},\,\mathsf{basic}\rangle\). We make sure (or assume in the following) that \(L(\mathbf{e}^{\Gamma,\bar{w}})=L(\mathbf{e})=1\) holds. This is because, these changes being only contingent, we do not expect them to switch the label by themselves, but only once the "main" counterfactual change on \(F^{\star}\) is made. In the example, we assume \(L(\mathbf{e}[\mathsf{Location}:=\mathsf{brooklyn}])=1\). Another case could be \(\mathbf{e}^{\Gamma^{\prime},\bar{w}^{\prime}}\), with \(\Gamma^{\prime}=\{\mathsf{Activity},\mathsf{EdLevel}\}\), and \(\bar{w}^{\prime}=\langle\mathsf{accountant},\mathsf{medium}\rangle\), with \(L(\mathbf{e}^{\Gamma^{\prime},\bar{w}^{\prime}})=1\). 3. Now, for each of those \(\mathbf{e}^{\Gamma,\bar{w}}\) as in the previous item, we consider all the different possible values \(v\) for \(F^{\star}\), with the values for all the other features fixed as in \(\mathbf{e}^{\Gamma,\bar{w}}\). For example, starting from \(\mathbf{e}[\mathsf{Location}:=\mathsf{brooklyn}]\), we can consider \(\mathbf{e}^{\prime}_{1}:=\mathbf{e}[\mathsf{Location}:=\mathsf{brooklyn};\mathsf{Income}:=\mathsf{60K}]\) (which is the same as \(\mathbf{e}^{\{\mathsf{Location}\},\langle\mathsf{brooklyn}\rangle}[\mathsf{Income}:=\mathsf{60K}]\)), obtaining, e.g. \(L(\mathbf{e}^{\prime}_{1})=1\). However, for \(\mathbf{e}^{\prime}_{2}:=\mathbf{e}[\mathsf{Location}:=\mathsf{brooklyn};\mathsf{Income}:=\mathsf{80K}]\), we now obtain, e.g. \(L(\mathbf{e}^{\prime}_{2})=0\), etc. For a fixed (potentially) contingent change \((\Gamma,\bar{w})\) on \(\mathbf{e}\), we consider the difference between the original label \(1\) and the expected label obtained by further modifying the value of \(F^{\star}\) (in all possible ways).
This gives us a _local_ responsibility score, local to \((\Gamma,\bar{w})\): \[Resp(\mathbf{e},F^{\star},\underline{\Gamma,\bar{w}}):= \ \frac{L(\mathbf{e})-\mathbb{E}(\ L(\mathbf{e}^{\prime})\mid F( \mathbf{e}^{\prime})=\ F(\mathbf{e}^{\Gamma,\bar{w}}),\ \forall F\in(\mathcal{F}\smallsetminus\{F^{\star}\}\ )\ )}{1+|\Gamma|}\] \[= \ \frac{1-\mathbb{E}(\ L(\mathbf{e}^{\Gamma,\bar{w}}[F^{\star}:=v ])\mid v\in\mathit{Dom}(F^{\star})\ )}{1+|\Gamma|}.\] (3) This local score takes into account the size of the contingency set \(\Gamma\). We are assuming here that there is a probability distribution over the entity population \(\mathcal{E}\). It could be known from the start, or it could be an empirical distribution obtained from a sample. As discussed in [9], the choice (or whichever distribution that is available) is relevant for the computation of the general \(\mathit{Resp}\) score, which involves the local ones (coming right here below). 4. Now, generalizing the notions introduced in Section 2, we can say that the value \(F^{\star}(\mathbf{e})\) is an _actual cause_ for label \(1\) when, for some \((\Gamma,\bar{w})\), (3) is positive: at least one change of value for \(F^{\star}\) in \(\mathbf{e}\) changes the label (with the company of \((\Gamma,\bar{w})\)). When \(\Gamma=\emptyset\) (and then, \(\bar{w}\) is an empty assignment), and (3) is positive, it means that at least one change of value for \(F^{\star}\) in \(\mathbf{e}\) switches the label by itself. As before, we can say that \(F^{\star}(\mathbf{e})\) is a _counterfactual cause_. However, as desired and expected, it is not necessarily the case anymore that counterfactual causes (as original values in \(\mathbf{e}\)) have all the same causal strength: \(F_{i}(\mathbf{e}),F_{j}(\mathbf{e})\) could be both counterfactual causes, but with different values for (3), for example if changes on the first switch the label "fewer times" than those on the second. 5. 
Now, we can define the global score, by considering the "best" contingencies \((\Gamma,\bar{w})\), which involves requesting from \(\Gamma\) to be of minimum size: \[\mathit{Resp}(\mathbf{e},F^{\star})\ :=\ \max_{\Gamma,\bar{w}:\ |\Gamma|\text{ is min.}\ \&\ (3)>0}\ \mathit{Resp}(\mathbf{e},F^{\star},\Gamma,\bar{w}). \tag{4}\] If (3) is not positive for any contingency \((\Gamma,\bar{w})\), then \(F^{\star}(\mathbf{e})\) is not an actual cause for the label, and the score is defined to be \(0\). ### The \(Shap\) Score The _Shap_ score is an application of the _Shapley value_ of coalition game theory. One considers a set of players \(\mathcal{S}\) and a _game function_ \(\mathcal{G}\) that assigns a real-valued wealth to each coalition \(S\subseteq\mathcal{S}\) of players. The Shapley value of a player \(p\in\mathcal{S}\) quantifies the contribution of \(p\) to the game, for which all different coalitions are considered; each time, with \(p\) and without \(p\): \[\mathit{Shapley}(\mathcal{S},\mathcal{G},p):=\sum_{S\subseteq\mathcal{S}\setminus\{p\}}\frac{|S|!(|\mathcal{S}|-|S|-1)!}{|\mathcal{S}|!}(\mathcal{G}(S\cup\{p\})-\mathcal{G}(S)). \tag{5}\] Here, \(|S|!(|\mathcal{S}|-|S|-1)!\) is the number of permutations of \(\mathcal{S}\) with all players in \(S\) coming first, then \(p\), and then all the others. In other words, this is the _expected contribution_ of \(p\) under all possible additions of \(p\) to a partial random sequence of players, followed by random sequences of the rest of the players. The Shapley value emerges as the only quantitative measure that has some specified properties in relation to coalition games [39]. It has been applied in many disciplines. For each particular application, one has to define a particular and appropriate game function \(\mathcal{G}\). In particular, it has been applied to assign scores to logical formulas to quantify their contribution to the inconsistency of a knowledge base [26], to quantify the contribution of database tuples to making a query true [29, 30], and to quantify contributions to the inconsistency of a database [31]. In different applications and with different game functions, the Shapley value turns out to be computationally intractable; more precisely, its time complexity is \(\#P\)_-hard_ in the size of the input, cf., for example, [30]. Intuitively, this means that it is at least as hard as any of the problems in the class \(\#P\) of problems about counting the solutions to decision problems (in \(\mathit{NP}\)) that ask about the existence of a certain solution [46, 37]. For example, \(\mathit{SAT}\) is the decision problem asking, for a propositional formula, if there exists a truth assignment (a solution) that makes the formula true. Then, \(\#\mathit{SAT}\) is the computational problem of counting the number of satisfying assignments of a propositional formula. Clearly, \(\#\mathit{SAT}\) is at least as hard as \(\mathit{SAT}\) (it is good enough to count the number of solutions to know if the formula is satisfiable), and \(\mathit{SAT}\) is the prototypical \(\mathit{NP}\)-complete problem; furthermore, \(\#\mathit{SAT}\) is \(\#P\)-hard, actually, \(\#P\)-complete since it belongs to \(\#P\).
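Independently of these hardness considerations, definition (5) can of course be evaluated directly by enumerating all coalitions. The following small sketch (the function name and the toy game are assumptions for illustration) does exactly that, and makes the exponential cost of the naive evaluation explicit.

```python
from itertools import combinations
from math import factorial

def shapley(players, game, p):
    """Shapley value of player p for the game function `game`, as in formula (5)."""
    n = len(players)
    rest = [q for q in players if q != p]
    value = 0.0
    for size in range(n):
        weight = factorial(size) * factorial(n - size - 1) / factorial(n)
        for S in combinations(rest, size):       # all coalitions not containing p
            value += weight * (game(set(S) | {p}) - game(set(S)))
    return value

# toy game: a coalition produces wealth 1 exactly when it contains both 'a' and 'b'
game = lambda S: 1.0 if {'a', 'b'} <= S else 0.0
print(shapley(['a', 'b', 'c'], game, 'a'))   # 0.5: 'a' and 'b' split the wealth
print(shapley(['a', 'b', 'c'], game, 'c'))   # 0.0: 'c' never adds anything
```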
As a consequence, computing the Shapley value can be at least as hard as computing the number of solutions for \(\mathit{SAT}\); a clear indication of its high computational complexity. As already mentioned, the _Shap_ score is a particular case of the Shapley value in (5). In this case, the players are the features \(F\) in \(\mathcal{F}\), or, more precisely, the values \(F(\mathbf{e})\) they take for a particular entity \(\mathbf{e}\), for which we have a binary classification label, \(L(\mathbf{e})\), that we want to explain. The explanation takes the form of a numerical score for \(F(\mathbf{e})\), reflecting its relevance for the observed label. Since all the feature values contribute to the resulting label, feature values can be seen as players in a coalition game. The game function, for a subset \(S\) of features, is the _expected (value of the) label_ over all possible entities whose values coincide with those of \(\mathbf{e}\) for the features in \(S\): \[\mathcal{G}_{\mathbf{e}}(S):=\mathbb{E}(L(\mathbf{e}^{\prime})\mid\mathbf{e}^{\prime}\in\mathcal{E}\;\;\text{and}\;\;\mathbf{e}^{\prime}_{S}=\mathbf{e}_{S}), \tag{6}\] where \(\mathbf{e}^{\prime}_{S},\mathbf{e}_{S}\) denote the projections of \(\mathbf{e}^{\prime}\) and \(\mathbf{e}\) on \(S\), resulting in two subrecords of feature values. We can see that the game function depends on the entity at hand \(\mathbf{e}\). With the game function in (6), the _Shap_ score for a feature value \(F^{\star}(\mathbf{e})\) in \(\mathbf{e}\) is: \[\mathit{Shap}(\mathcal{F},\mathcal{G}_{\mathbf{e}},F^{\star}):=\sum_{S\subseteq\mathcal{F}\setminus\{F^{\star}\}}\frac{|S|!(|\mathcal{F}|-|S|-1)!}{|\mathcal{F}|!}[\mathbb{E}(L(\mathbf{e}^{\prime})\mid\mathbf{e}^{\prime}_{S\cup\{F^{\star}\}}=\mathbf{e}_{S\cup\{F^{\star}\}})-\mathbb{E}(L(\mathbf{e}^{\prime})\mid\mathbf{e}^{\prime}_{S}=\mathbf{e}_{S})]. \tag{7}\] Example 9 (example 8 cont.): For the same fixed entity \(\mathbf{e}\ =\ \langle\mathsf{john},\mathsf{18},\mathsf{plumber},\mathsf{70K},\mathsf{harlem},\mathsf{10K},\mathsf{basic}\rangle\) and feature \(F^{\star}=\mathsf{Income}\), one of the terms in (7) is obtained by considering \(S=\{\mathsf{Location}\}\subseteq\mathcal{F}\): \[\frac{1!\,(7-1-1)!}{7!}\times(\mathcal{G}_{\mathbf{e}}(\{\mathsf{Location}\}\cup\{\mathsf{Income}\})-\mathcal{G}_{\mathbf{e}}(\{\mathsf{Location}\}))\ =\ \frac{1}{42}\times(\mathcal{G}_{\mathbf{e}}(\{\mathsf{Location},\mathsf{Income}\})-\mathcal{G}_{\mathbf{e}}(\{\mathsf{Location}\})),\] with, e.g., \(\mathcal{G}_{\mathbf{e}}(\{\mathsf{Location},\mathsf{Income}\})=\mathbb{E}(L(\mathbf{e}^{\prime})\mid\mathbf{e}^{\prime}\in\mathcal{E},\ \mathsf{Location}(\mathbf{e}^{\prime})=\mathsf{harlem},\) and \(\mathsf{Income}(\mathbf{e}^{\prime})=\mathsf{70K}),\) that is, the expected label over all entities that have the same values as \(\mathbf{e}\) for features \(\mathsf{Income}\) and \(\mathsf{Location}\). Then, \(\mathcal{G}_{\mathbf{e}}(\{\mathsf{Location},\mathsf{Income}\})-\mathcal{G}_{\mathbf{e}}(\{\mathsf{Location}\})\) is the expected difference in the label between the case where the values for \(\mathsf{Location}\) and \(\mathsf{Income}\) are fixed as for \(\mathbf{e}\), and the case where only the value for \(\mathsf{Location}\) is fixed as in \(\mathbf{e}\), measuring a local contribution of \(\mathbf{e}\)'s value for \(\mathsf{Income}\). After that, all these local differences are averaged over all subsets \(S\) of \(\mathcal{F}\), and the permutations in which they participate.
\(\square\) We can see that, like the _Resp_ score, _Shap_ is a _local_ explanation score, for a particular entity at hand \(\mathbf{e}\). Since the introduction of _Shap_ in this form, some variations have been proposed. As for _Resp_, _Shap_ depends, via the game function, on an underlying probability distribution on the entity population \(\mathcal{E}\). The distribution may impact not only the _Shap_ scores, but also their computation [9]. Boolean classifiers, i.e. propositional formulas with binary input features and binary labels, are particularly relevant, _per se_ and because they can represent other classifiers by means of appropriate encodings. For example, the circuit in Figure 1 can be seen as a binary classifier that can be represented by means of a propositional formula that, depending on the binary values for \(x_{1},x_{2},x_{3},x_{4}\), also returns a binary value. Boolean classifiers, as logical formulas, have been extensively investigated. In particular, much is known about the satisfiability problem of propositional formulas, \(\mathit{SAT}\), and also about the _model counting_ problem, i.e. that of counting the number of satisfying assignments, denoted \(\#SAT\). In the area of _knowledge compilation_, the complexity of \(\#SAT\) and other problems in relation to the syntactic form of the Boolean formulas has been investigated [19, 20, 42]. Boolean classifiers turn out to be quite relevant to understand and investigate the complexity of _Shap_ computation. The computation of _Shap_ is bound to be expensive, for similar reasons as for _Resp_. For the computation of both, all we need is the input/output relation of the classifier, to compute labels for different alternative entities (counterfactuals). However, in principle, far too many combinations have to go through the classifier. Actually, under the _product probability distribution_ on \(\mathcal{E}\) (which assigns independent probabilities to the feature values), even with an explicit, open classifier for binary entities, the computation of _Shap_ can be intractable. In fact, as shown in [9], for Boolean classifiers in the class \(\mathit{Monotone2CNF}\), of negation-free propositional formulas in conjunctive normal form with at most two atoms per clause, _Shap_ can be \(\#P\)-hard. This is obtained via a polynomial reduction from \(\#Monotone2\mathit{CNF}\), the problem of counting the number of satisfying assignments for a formula in the class, which is known to be \(\#P\)-complete [46]. For example, if the classifier is \((x_{1}\lor x_{2})\land(x_{2}\lor x_{3})\), which belongs to \(\mathit{Monotone2CNF}\), the entity \(\mathbf{e}_{1}=\langle 1,0,1\rangle\) (with values for \(x_{1},x_{2},x_{3}\), in this order) gets label \(1\), whereas the entity \(\mathbf{e}_{2}=\langle 1,0,0\rangle\) gets label \(0\). The number of satisfying truth assignments, equivalently, the number of entities that get label \(1\), is \(5\), corresponding to \(\langle 1,1,1\rangle\), \(\langle 1,0,1\rangle\), \(\langle 0,1,1\rangle\), \(\langle 0,1,0\rangle\), and \(\langle 1,1,0\rangle\). Given that _Shap_ can be \(\#P\)-hard, a natural question is whether for some classes of open-box classifiers one can compute _Shap_ in polynomial time in the size of the model and input. The idea is to try to take advantage of the internal structure and components of the classifier (as opposed to only its input/output relation) in order to compute _Shap_ efficiently.
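To connect the example above with the definitions (6) and (7), here is a small brute-force sketch (purely illustrative) for the classifier \((x_{1}\lor x_{2})\land(x_{2}\lor x_{3})\) under the uniform distribution on the \(2^{3}\) binary entities; it confirms that \(5\) entities get label \(1\) and computes the _Shap_ scores of the three feature values of \(\mathbf{e}_{1}=\langle 1,0,1\rangle\).

```python
from itertools import combinations, product
from math import factorial

# the Monotone2CNF classifier from the running example: (x1 or x2) and (x2 or x3)
label = lambda x: int((x[0] or x[1]) and (x[1] or x[2]))
population = list(product([0, 1], repeat=3))        # uniform distribution over all entities
print(sum(label(x) for x in population))            # prints 5: the model count #SAT

def game(e, S):
    """Expected label over entities agreeing with e on the features in S, as in (6)."""
    agreeing = [x for x in population if all(x[i] == e[i] for i in S)]
    return sum(label(x) for x in agreeing) / len(agreeing)

def shap(e, f):
    """Shap score of feature f for entity e, as in (7), enumerating all coalitions."""
    feats = [0, 1, 2]
    rest = [g for g in feats if g != f]
    total = 0.0
    for size in range(len(feats)):
        w = factorial(size) * factorial(len(feats) - size - 1) / factorial(len(feats))
        for S in combinations(rest, size):
            total += w * (game(e, set(S) | {f}) - game(e, set(S)))
    return total

e1 = (1, 0, 1)
print([round(shap(e1, f), 3) for f in (0, 1, 2)])   # scores sum to label(e1) - 5/8 = 3/8
```

The brute force over all coalitions and all agreeing entities is only feasible here because there are three binary features; in general this is exactly the exponential blow-up discussed above.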
We recall from results mentioned earlier in this section that having an open-box model does not guarantee tractability of _Shap_. Natural classifiers that have been considered in relation to a possible tractable computation of _Shap_ are decision trees and random forests [34]. The problem of tractability of _Shap_ was investigated in detail in [3], and with other methods in [47]. We briefly describe the former approach in the rest of this section. Tractable and intractable cases were identified, with algorithms for the tractable cases. (Approximations for the intractable cases were further investigated in [4].) In particular, the tractability for decision trees and random forests was established, which required first identifying the right abstraction that allows for a general proof, leaves aside contingent details, and is also broad enough to include interesting classes of classifiers. In [3], it was proved that, for a Boolean classifier \(L\) (identified with its label, the output gate or variable), the uniform distribution on \(\mathcal{E}\), and \(\mathcal{F}=\{F_{1},\ldots,F_{n}\}\): \[\#SAT(L)\ =\ 2^{|\mathcal{F}|}\times(L(\mathbf{e})-\sum_{i=1}^{n}\mathit{Shap}(\mathcal{F},G_{\mathbf{e}},F_{i})). \tag{8}\] This result makes it impossible, under the usual complexity-theoretic assumptions, for _Shap_ to be tractable for any circuit \(L\) for which \(\#SAT\) is intractable. (If we could compute _Shap_ fast, we could also compute \(\#SAT\) fast, assuming we have an efficient classifier.) This excludes, as seen earlier in this section, classifiers that are in the class \(\mathit{Monotone2CNF}\). Accordingly, only classifiers in a more amenable class became candidates, with the restriction that the class should be able to accommodate interesting classifiers. That is how the class of _deterministic and decomposable Boolean circuits_ (dDBCs) became the target of investigation. Each \(\lor\)-gate of a dDBC can have only one of the disjuncts true (determinism), and for each \(\land\)-gate, the conjuncts do not share variables (decomposition). Nodes are labeled with \(\lor,\land\), or \(\neg\) gates, and input gates with features (propositional variables) or binary constants. An example of such a classifier, borrowed from [3], is shown in Figure 1. It has the \(\land\) gate at the top that returns the output label. Model counting is known to be tractable for dDBCs. However, this does not imply (via (8) or any other obvious way) that _Shap_ is tractable. It is also the case that relaxing either the determinism or the decomposability condition makes model counting \(\#P\)-hard [4], preventing _Shap_ from being tractable. It turns out that \(Shap\) computation is tractable for dDBCs (under the uniform and the product distribution), from which we also get the tractability of \(Shap\) for free for a vast collection of classifiers that can be efficiently compiled into (or represented as) dDBCs; among them we find: Decision Trees (even with non-binary features), Random Forests, Ordered Binary Decision Diagrams (OBDDs) [16], Deterministic-Decomposable Negation Normal-Forms (dDNNFs), and Binary Neural Networks via OBDDs or Sentential Decision Diagrams (SDDs) (but with an extra, exponential, yet FPT, compilation cost) [44, 12, 13], etc. For the gist, consider the binary decision tree (DT) on the LHS of Figure 2. It can be inductively and efficiently compiled into a dDBC [4, appendix A]. The leaves of the DT become labeled with propositional constants, \(0\) or \(1\).
Each node, \(n\), is compiled into a circuit \(c(n)\), and the final dDBC corresponds to the compilation, \(c(r)\), of the root node \(r\), in this case, \(c(n7)\) for the root node \(n7\). Figure 2 shows, on the RHS, the compilation \(c(n5)\) of node \(n5\) of the DT. If the decision tree is not binary, it is first binarized, and then compiled [4, sec. 7]. Figure 1: A decomposable and deterministic Boolean classifier. Figure 2: A decision tree and part of its compilation into a dDBC. ## 5 Looking Ahead: Domain Knowledge There are different approaches and methodologies to provide explanations in data management and artificial intelligence, with causality, counterfactuals and scores being prominent approaches that have a relevant role to play. Much research is still needed on the use of _contextual, semantic and domain knowledge_ in explainable data management and explainable machine learning, in particular when it comes to _defining and computing_ attribution scores. Some approaches may be more appropriate in this direction, and we have argued that declarative, logic-based specifications can be successfully exploited [10]. For a particular application, maybe only some counterfactuals make sense, are reasonable or are useful; the latter becoming _actionable_ or _resources_ in that they may indicate feasible and achievable conditions (or courses of action) under which we could obtain a different result [45, 27]. Domain knowledge may come in different forms, among them: 1. In data management, via semantic constraints, in particular ICs, and ontologies that are fed by data sources or describe them. For example, satisfied ICs might make it unnecessary (or prohibited) to consider certain counterfactual interventions on data. In this context, we might want to make sure that sub-instances associated to teams of tuples for Shapley computation satisfy a given set of ICs. Or, at the attribute level, that we do not violate constraints preventing certain attributes from taking null values. 2. In the context of machine learning, we could define an _entity schema_, basically a wide-predicate \(Ent(\mathit{Feat}_{1},\ldots,\mathit{Feat}_{n})\), on which certain constraints could be imposed. In our running example, we would have the entity schema \(\mathit{Ent}(\mathsf{Name},\mathsf{Age},\mathsf{Activity},\mathsf{Income},\mathsf{Location},\mathsf{Debt},\mathsf{EdLevel})\); and depending on the application domain, _local constraints_, i.e. at the single entity level, such as: (a) \(\neg(\mathsf{Age}<6\wedge\mathsf{EdLevel}=\mathsf{phd})\), a denial constraint prohibiting that someone who is less than \(6\) years old has a PhD; (b) an implication \(\mathsf{Activity}=\mathsf{none}\to\mathsf{Income}=\mathsf{0}\); etc. Counterfactual interventions should be restricted by these constraints [10, sec. 6], or, if satisfied, they could be used to speed up a score computation. In [10, sec. 7], we showed how the underlying probability distribution, needed for _Resp_ or _Shap_, can be conditioned by logical constraints of this kind. We could also have _global constraints_, e.g. requiring that any two entities whose values coincide for certain features must have their values for other features coinciding as well. This would be like a functional dependency in databases. This may have an impact on the subteams that are considered for _Shap_, for example. 3. Domain knowledge could also come in the form of _probabilistic or statistical constraints_, e.g.
about the stochastic independence of certain features, or an explicit stochastic dependency of others via conditional distributions. In this direction, we could have a whole Bayesian network representing (in)dependencies among features. We could also have constraints indicating that certain probabilities are bounded above by a given number; etc. This kind of knowledge would have an impact on attribution scores that are defined and computed on the basis of a given probability distribution. The challenge becomes that of bringing these different forms of domain knowledge (and others) into the definitions or the computations of attribution scores. Acknowledgments: Part of this work was funded by ANID - Millennium Science Initiative Program - Code ICN17002; and NSERC-DG 2023-04650.
実際の因果関係を用いた責任スコア定義における最近の研究について説明する。データベースにおけるクエリ回答の説明、機械学習における分類モデルのアウトプットの両方で、責任スコアを用いる。データベースの場合、データベースの修復と関連付けられた有用な接続が示され、利用される。修復は、データベースの整合性の定量的な指標となる。分類モデルの場合、責任スコアは適切に拡張され、説明される。Shap-スコアを効率的に計算するための分析と議論もなされる。著者の貢献と協力者の貢献が強調される。
2309.06804
Block-and-hole graphs: Constructibility and $(3,0)$-sparsity
We show that minimally 3-rigid block-and-hole graphs, with one block or one hole, are characterised as those which are constructible from $K_3$ by vertex splitting, and also, as those having associated looped face graphs which are $(3,0)$-tight. This latter property can be verified in polynomial time by a form of pebble game algorithm. We also indicate connections to the rigidity properties of polyhedral surfaces known as origami and to graph rigidity in $\ell_p^3$ for $p\not=2$.
Bryan Gin-ge Chen, James Cruickshank, Derek Kitson
2023-09-13T08:50:50
http://arxiv.org/abs/2309.06804v1
# Block-and-hole graphs: Constructibility and \((3,0)\)-sparsity ###### Abstract. We show that minimally \(3\)-rigid block-and-hole graphs, with one block or one hole, are characterised as those which are constructible from \(K_{3}\) by vertex splitting, and also, as those having associated looped face graphs which are \((3,0)\)-tight. This latter property can be verified in polynomial time by a form of pebble game algorithm. We also indicate connections to the rigidity properties of polyhedral surfaces known as _origami_ and to graph rigidity in \(\ell_{p}^{3}\) for \(p\neq 2\). ## 1. Introduction A finite simple graph is \(3\)-_rigid_ if it forms the structure graph for an infinitesimally rigid bar-and-joint framework in Euclidean \(3\)-space. If, in addition, the removal of any edge from the graph results in a subgraph which is not \(3\)-rigid then the graph is _minimally_ \(3\)-rigid. A _block-and-hole graph_ is obtained by first triangulating a sphere, then removing the interiors of some triangulated discs to create holes, and finally adjoining minimally \(3\)-rigid graphs to the boundaries of some of the resulting holes to create blocks. It is well known that a graph obtained from a triangulation of a sphere is minimally \(3\)-rigid, see for example [9]. Whiteley ([20, Theorem 4.2]) showed that a block-and-hole graph with a single block and a single hole, and common boundary length \(k\), is minimally \(3\)-rigid if and only if the removal of any \(k-1\) vertices does not disconnect the two boundary cycles. In [7], it is shown that switching the blocks and holes in a block-and-hole graph preserves minimal \(3\)-rigidity. The main theorem of [3] characterizes the minimally \(3\)-rigid block-and-hole graphs with a single block and finitely many holes, or a single hole and finitely many blocks; see Theorem 1.4 below. **Definition 1.1**.: A _face graph_ is a planar graph obtained from a maximal planar graph by choosing a collection of pairwise internally disjoint simplicial discs, removing the non-boundary vertices and edges of these discs, and labelling each of the resulting non-triangular faces with either \(B\) or \(H\). **Example 1.2**.: Figure 1 illustrates the three steps in the construction of a face graph beginning on the left hand side with a maximal planar graph. Two internally disjoint simplicial discs are chosen with boundary cycles indicated in red and blue. Non-boundary vertices and edges of the chosen simplicial discs are removed (centre) and finally non-triangular faces are labelled by either \(B\) or \(H\) (right). **Definition 1.3**.: A _block-and-hole graph_ is a simple graph of the form \(\hat{G}=G\cup\hat{B}_{1}\cup\cdots\cup\hat{B}_{m}\) where, 1. \(G\) is a face graph with \(m\) \(B\)-labelled faces \(B_{1},\ldots,B_{m}\), 2. \(\hat{B}_{1},\ldots,\hat{B}_{m}\) are minimally \(3\)-rigid graphs, 3. \(G\cap\hat{B}_{i}=\partial B_{i}\), for each \(i=1,\ldots,m\). We refer to the minimally \(3\)-rigid graphs \(\hat{B}_{1},\ldots,\hat{B}_{m}\) as _blocks_ and the \(H\)-labelled faces of \(G\) as _holes_. For each \(B\)-labelled face \(B_{i}\) we can construct a block \(B_{i}^{\dagger}\) with, \[V(B_{i}^{\dagger})=V(\partial B_{i})\cup\{x_{i},y_{i}\},\quad E(B_{i}^{\dagger})=E(\partial B_{i})\cup\{(v,x_{i}),(v,y_{i}):v\in V(\partial B_{i})\}.\] The block \(B_{i}^{\dagger}\) is referred to as a _simplicial discus_ with _poles_ at \(x_{i}\) and \(y_{i}\). The resulting block-and-hole graph \(G^{\dagger}:=G\cup B_{1}^{\dagger}\cup\cdots\cup B_{m}^{\dagger}\) is referred to as the _discus-and-hole graph_ for \(G\). Let \(f(J)\) denote the _freedom number_ \(3|V(J)|-|E(J)|\) of a graph \(J\).
A simple graph \(J\) is said to be \((3,6)\)_-sparse_ if \(f(J^{\prime})\geq 6\) for any subgraph \(J^{\prime}\) containing at least two edges. The graph \(J\) is \((3,6)\)_-tight_ if it is \((3,6)\)-sparse and \(f(J)=6\). We denote by \(\mathcal{G}(m,n)\) the set of face graphs with \(m\)\(B\)-labelled faces and \(n\)\(H\)-labelled faces for which the discus-and-hole graph \(G^{\dagger}\) is \((3,6)\)-tight. We will make reference to the following theorem which is proved in [3]. **Theorem 1.4**.: _Let \(\hat{G}\) be a block-and-hole graph with a single block and finitely many holes, or, a single hole and finitely many blocks. Then the following statements are equivalent._ 1. \(\hat{G}\) _is minimally_ \(3\)_-rigid._ 2. \(\hat{G}\) _is_ \((3,6)\)_-tight._ 3. \(\hat{G}\) _is constructible from_ \(K_{3}\) _by vertex splitting and isostatic substitution._ 4. \(\hat{G}\) _satisfies the girth inequalities._ Figure 1. Constructing a face graph. ## 2. Vertex splitting Let \(J\) be a simple graph and let \(v\) be a vertex of \(J\) with adjacent vertices \(v_{1},v_{2},\ldots,v_{n}\), \(n\geq 2\). Construct a new graph \(\tilde{J}\) from \(J\) by, 1. removing the vertex \(v\) and its incident edges from \(J\), 2. adjoining two new vertices \(w_{1},w_{2}\), 3. adjoining the edge \(w_{1}v_{j}\) or the edge \(w_{2}v_{j}\) for each \(j=3,4,\ldots,n\), 4. adjoining the five edges \(v_{1}w_{1},v_{2}w_{1}\), \(v_{1}w_{2},v_{2}w_{2}\) and \(w_{1}w_{2}\). The graph \(\tilde{J}\) is said to be obtained from \(J\) by _(3-dimensional) vertex splitting_. See Figure 2 for an illustration. In this section we show that a block-and-hole graph with a single block, or a single hole, is minimally 3-rigid if and only if the corresponding discus-and-hole graph is constructible from \(K_{3}\) by vertex splitting. For more on vertex splitting and rigid graphs see [21] for example. ### Critical separating cycles Let \(G\) be a face graph with exactly one \(B\)-labelled face and any number of \(H\)-labelled faces. Fix a planar realisation of \(G\) such that the unbounded face is \(B\)-labelled. Let \(c\) be a simple cycle in \(G\). Define \(G_{1}\) to be the face graph obtained from \(G\) and \(c\) by, 1. removing all edges and vertices interior to \(c\), and, 2. if \(|c|\geq 4\), viewing the edges of \(c\) as the boundary of a new face with label \(H\). Define \(G_{2}\) to be the face graph obtained from \(G\) and \(c\) by, 1. removing all edges and vertices which are exterior to \(c\), and, 2. if \(|c|\geq 4\), viewing the edges of \(c\) as the boundary of a new face with label \(B\). We refer to \(G_{1}\) and \(G_{2}\) respectively as the _external_ and _internal face graphs_ associated with \(c\). See Figure 3 for an illustration. Note that in the case where \(|c|=3\), the internal face graph \(G_{2}\) has no \(B\)-labelled face. We denote by \(Ext_{G}(c)\), or simply \(Ext(c)\) when the context is clear, the discus-and-hole graph for the external face graph \(G_{1}\). Note that \(Ext(c)\) is a block-and-hole graph with a single block and so, by Theorem 1.4, \(Ext(c)\) is \((3,6)\)-tight if and only if it is minimally 3-rigid. **Definition 2.1**.: A _critical separating cycle_ for a face graph \(G\in\mathcal{G}(1,n)\) is a simple cycle \(c\) in \(G\) with the property that the external discus-and-hole graph \(Ext(c)\) is \((3,6)\)-tight. We will require the following lemma which is adapted from the proof of [3, Proposition 22]. Figure 2. A vertex splitting operation. 
**Lemma 2.2**.: _Let \(G\in\mathcal{G}(1,n)\) and let \(v\) and \(w\) be distinct vertices in \(\partial B\) which are not joined by a \(BH\) edge in \(G\). If \(v\) and \(w\) lie in a common \(H\)-labelled face then \(G\) contains a non-facial critical separating cycle._ Proof.: Suppose there exists a \(H\)-labelled face in \(G\) which contains the vertices \(v\) and \(w\). The boundary of this \(H\)-labelled face is composed of two edge-disjoint paths \(\pi_{1}\) and \(\pi_{2}\) joining \(v\) to \(w\). Let \(c_{1}\) be the simple cycle in \(\partial B\cup\partial H\) which contains the path \(\pi_{1}\) and has the property that \(Ext(c_{1})\) does not contain the path \(\pi_{2}\). Similarly, let \(c_{2}\) be the simple cycle in \(\partial B\cup\partial H\) which contains the path \(\pi_{2}\) and has the property that \(Ext(c_{2})\) does not contain the path \(\pi_{1}\). See Figure 4 for an illustration. Note that \(Ext(c_{1})\cap Ext(c_{2})=B^{\dagger}\). Thus, \[f(G^{\dagger})=f(Ext(c_{1}))+f(Ext(c_{2}))-f(B^{\dagger}).\] Since \(f(G^{\dagger})=f(B^{\dagger})=6\), it follows that \(f(Ext(c_{1}))=f(Ext(c_{2}))=6\). Hence \(Ext(c_{1})\) and \(Ext(c_{2})\) are both \((3,6)\)-tight and so \(c_{1}\) and \(c_{2}\) are non-facial critical separating cycles for \(G\). We will require the following result, known as the "hole-filling" lemma. In the statement of the lemma, \(int(c)\) denotes the subgraph of \(G\) spanned by edges which lie inside the cycle \(c\). Figure 4. An illustration of the proof of Lemma 2.2. The edge-disjoint paths \(\pi_{1}\) and \(\pi_{2}\) are indicated in red and blue on the left. The cycles \(c_{1}\) and \(c_{2}\) are indicated in red and blue on the right. Figure 3. Left: A cycle \(c\) (indicated in red) in a face graph with one \(B\)-labelled face. Centre: The associated external face graph \(G_{1}\). Right: The associated internal face graph \(G_{2}\). **Lemma 2.3** ([3, Lemma 26]).: _Let \(G\in\mathcal{G}(1,n)\) and let \(K^{\prime}\) be a subgraph of \(G^{\dagger}\). Suppose that \(c\) is a simple cycle in \(K^{\prime}\cap G\) with \(E(K^{\prime}\cap int(c))=\emptyset\). If \(K^{\prime}\) is \((3,6)\)-tight then \(K^{\prime}\cup int(c)\) is \((3,6)\)-tight._ **Lemma 2.4**.: _Let \(G\in\mathcal{G}(1,n)\). Suppose that \(K^{\prime}\) is a \((3,6)\)-tight subgraph of \(G^{\dagger}\) with \(B^{\dagger}\subset K^{\prime}\) and let \(K=K^{\prime}\cap G\). Label the face of \(K\) corresponding to \(B^{\dagger}\) by \(B\) and every other non-triangular face by \(H\). Then,_ 1. \(K\) _is a face graph._ 2. _The boundary cycle of every_ \(H\)_-labelled face in_ \(K\) _is either the boundary of a_ \(H\)_-labelled face in_ \(G\) _or is a non-facial critical separating cycle in_ \(G\)_._ Proof.: \((i)\) We need to show that the boundary cycle of each \(H\)-labelled face of \(K\) is simple. If this were not the case then the boundary cycle of some face of \(K\) would contain a repeated vertex. Note that this repeated vertex is a cut vertex for \(K\). It is also a cut vertex for \(K^{\prime}\). However, \(K^{\prime}\) does not have a cut vertex since it is \((3,6)\)-tight. \((ii)\) Suppose \(c\) is the boundary cycle of a \(H\)-labelled face in \(K\) which is not a \(H\)-labelled face in \(G\). Let \(G_{1}\) be the external face graph associated with \(c\). Note that the external discus-and-hole graph \(G_{1}^{\dagger}\) is obtained from \(K^{\prime}\) by "filling in" \(H\)-labelled faces of \(K\). 
Since \(K^{\prime}\) is \((3,6)\)-tight, by the hole-filling lemma (Lemma 2.3), \(G_{1}^{\dagger}\) is also \((3,6)\)-tight. Thus, \(c\) is a non-facial critical separating cycle in \(G\). We will require the following result, known as the _isostatic substitution principle_. See [19, Corollary 2.8] and the more general form [7, Corollary 2.6]. **Lemma 2.5**.: _Let \(K\) be a simple graph which is minimally \(3\)-rigid and let \(K^{\prime}\) be a vertex induced subgraph of \(K\) which is also minimally \(3\)-rigid. If \(K^{\prime}\) is replaced with another minimally \(3\)-rigid graph \(K^{\prime\prime}\) with the property that \(V(K^{\prime})\subseteq V(K^{\prime\prime})\) then the resulting graph is minimally \(3\)-rigid._ **Lemma 2.6**.: _Let \(G\in\mathcal{G}(1,n)\). Suppose \(c\) is a non-facial critical separating cycle for \(G\) with internal face graph \(G_{2}\). If \(d\) is a critical separating cycle for \(G_{2}\) then \(d\) is also a critical separating cycle for \(G\)._ Proof.: By Theorem 1.4, the discus-and-hole graphs \(Ext_{G_{2}}(d)\) and \(Ext_{G}(c)\) are minimally \(3\)-rigid. Note that \(Ext_{G}(d)\) is obtained by replacing the discus in \(Ext_{G_{2}}(d)\) with \(Ext_{G}(c)\). Thus, by the isostatic substitution principle (Lemma 2.5), since \(Ext_{G_{2}}(d)\) is minimally \(3\)-rigid, \(Ext_{G}(d)\) is also minimally \(3\)-rigid. We conclude that \(d\) is a critical separating cycle for \(G\). We now present a key technical lemma which is needed for the proof of Theorem 2.15 below. **Lemma 2.7**.: _Let \(G\in\mathcal{G}(1,n)\) and let \(c\) be a critical separating cycle for \(G\) of length \(|c|\geq 4\), with associated external and internal face graphs \(G_{1}\) and \(G_{2}\). Let \(e\) be a \(TT\) edge in \(G_{1}\) and let \(f\) be a \(TT\) edge in \(G_{2}\)._ 1. _If_ \(e\) _lies in a non-facial critical separating cycle for_ \(G\) _then_ \(e\) _also lies in a non-facial critical separating cycle for_ \(G_{1}\) _._ 2. _If_ \(f\) _lies in a non-facial critical separating cycle for_ \(G\) _then_ \(f\) _also lies in a non-facial critical separating cycle for_ \(G_{2}\)_._ Proof.: \((i)\) Suppose \(d\) is a non-facial critical separating cycle for \(G\) which contains the edge \(e\) (see Figure 5 for an illustration). Let \(K^{\prime}=Ext(c)\cap Ext(d)\) and let \(K=K^{\prime}\cap G\). Similarly, let \(L^{\prime}=Ext(c)\cup Ext(d)\) and let \(L=L^{\prime}\cap G\). Observe that, \[f(K^{\prime})+f(L^{\prime})=f(Ext(c))+f(Ext(d))=12.\] Therefore \(f(K^{\prime})=f(L^{\prime})=6\) and so \(K^{\prime}\) and \(L^{\prime}\) are \((3,6)\)-tight subgraphs of \(G^{\dagger}\) which contain \(B^{\dagger}\). Label the face of \(K\) corresponding to \(B^{\dagger}\) by \(B\) and every other non-triangular face of \(K\) by \(H\). Note that, since \(|c|\geq 4\), \(e\) lies on the boundary cycle of a \(H\)-labelled face of \(K\) by construction. Let \(d^{\prime}\) be this boundary cycle. Since \(e\) is a \(TT\) edge in \(G_{1}\), \(d^{\prime}\) cannot be the boundary of a face in \(G_{1}\). Therefore, by Lemma 2.4, \(d^{\prime}\) is a non-facial critical separating cycle for \(G_{1}\). This proves part \((i)\). Part \((ii)\) is proved by applying similar arguments to \(L\). ### On indivisible graphs in \(\mathcal{G}(1,n)\) In this section, we derive properties of face graphs in \(\mathcal{G}(1,n)\) which contain no \(TT\) edges and no non-facial critical separating cycles. 
**Definition 2.8**.: A face graph \(G\) in \(\mathcal{G}(1,n)\) is _indivisible_ if every critical separating cycle for \(G\) is the boundary cycle of a face of \(G\). **Lemma 2.9**.: _Suppose that \(G\in\mathcal{G}(1,n)\) has no \(TT\) edge and is also indivisible. Then \(G\) has at least three \(BH\) edges._ Proof.: By [3, Proposition 22(ii)], \(G\) must contain at least one \(BH\) edge. The cases where \(G\) contains exactly one \(BH\) edge and exactly two \(BH\) edges are considered below. Since there are no \(TT\) edges in \(G\), for each vertex \(v\) of \(\partial B\) there exists a \(H\)-labelled face \(H_{v}\in\mathcal{H}\) which contains \(v\). The set of all \(H\)-labelled faces of \(G\) is denoted by \(\mathcal{H}\). Since \(f(G^{\dagger})=6\) it follows that \(|\partial B|-3=\sum_{H\in\mathcal{H}}(|\partial H|-3)\). Figure 5. Lemma 2.7. Case 1: Suppose \(G\) contains exactly one \(BH\) edge \(e\). Then the vertices of \(e\) are contained in a common \(H\)-labelled face \(H_{e}\). If the remaining \(r=|\partial B|-2\) vertices \(v_{1},v_{2},\ldots,v_{r}\) in \(\partial B\) are each contained in distinct \(H\)-labelled faces then we obtain the contradiction, \[|\partial B|-3=\sum_{H\in\mathcal{H}}(|\partial H|-3)\geq(|\partial H_{e}|-3)+ \sum_{i=1}^{r}(|\partial H_{v_{i}}|-3)\geq r+1.\] Case 2: Suppose \(G\) contains exactly two \(BH\) edges \(e\) and \(f\) and that these edges are adjacent. The vertices of \(e\) are contained in a common \(H\)-labelled face \(H_{e}\). If the remaining \(r=|\partial B|-3\) vertices \(v_{1},v_{2},\ldots,v_{r}\) in \(\partial B\) are each contained in distinct \(H\)-labelled faces then we obtain the contradiction, \[|\partial B|-3=\sum_{H\in\mathcal{H}}(|\partial H|-3)\geq(|\partial H_{e}|-3)+ \sum_{i=1}^{r}(|\partial H_{v_{i}}|-3)\geq r+1.\] Case 3: Suppose \(G\) contains exactly two \(BH\) edges \(e\) and \(f\) and that these edges are not adjacent. The vertices of \(e\) are contained in a common \(H\)-labelled face \(H_{e}\) and the vertices of \(f\) are contained in a common \(H\)-labelled face \(H_{f}\). If \(H_{e}\) and \(H_{f}\) are distinct, and, if the remaining \(r=|\partial B|-4\) vertices \(v_{1},v_{2},\ldots,v_{r}\) in \(\partial B\) are each contained in distinct \(H\)-labelled faces then we obtain the contradiction, \[|\partial B|-3=\sum_{H\in\mathcal{H}}(|\partial H|-3)\geq(|\partial H_{e}|-3)+ (|\partial H_{f}|-3)+\sum_{i=1}^{r}(|\partial H_{v_{i}}|-3)\geq r+2.\] The contradictions obtained in each of the above cases imply that there must exist a pair of vertices \(v\) and \(w\) in \(\partial B\) which are not joined by a \(BH\)-edge and for which \(H_{v}=H_{w}\). By Lemma 2.2, there must exist a non-facial critical separating cycle in \(G\). However, this contradicts the indivisibility of \(G\) and so \(G\) must contain at least three \(BH\) edges. **Lemma 2.10**.: _Suppose that \(G\in\mathcal{G}(1,n)\) has no \(TT\) edges, is indivisible and has exactly three \(BH\) edges. Then_ 1. _Every_ \(H\)_-labelled face in_ \(G\) _is a quadrilateral._ 2. _The three_ \(BH\) _edges are not consecutive edges in_ \(\partial B\)_._ Proof.: Consider the following three cases. Case 1: Suppose \(G\) contains exactly three \(BH\) edges \(e,f,g\) and no two are adjacent. Then the vertices of \(e,f,g\) are respectively contained in common \(H\)-labelled faces \(H_{e}\), \(H_{f}\) and \(H_{g}\). 
Since \(G\) is indivisible, the faces \(H_{e}\), \(H_{f}\) and \(H_{g}\) are distinct and the remaining \(r=|\partial B|-6\) vertices \(v_{1},v_{2},\ldots,v_{r}\) in \(\partial B\) are each contained in a distinct \(H\)-labelled face. Thus, \[|\partial B|-3 = \sum_{H\in\mathcal{H}}(|\partial H|-3)\] \[\geq (|\partial H_{e}|-3)+(|\partial H_{f}|-3)+(|\partial H_{g}|-3)+ \sum_{i=1}^{r}(|\partial H_{v_{i}}|-3)\] \[\geq r+3\] The above inequalities imply that \(H_{e}\), \(H_{f}\), \(H_{g}\) and \(H_{v_{1}},\ldots,H_{v_{r}}\) are the only \(H\)-labelled faces of \(G\) and each of these faces has boundary length four. Case 2: Suppose \(G\) contains exactly three \(BH\) edges \(e,f,g\) and exactly two of these edges, \(e\) and \(f\) say, are adjacent. The vertices of \(e\) and \(g\) are respectively contained in common \(H\)-labelled faces \(H_{e}\) and \(H_{g}\). Since \(G\) is indivisible, the faces \(H_{e}\) and \(H_{g}\) are distinct and the remaining \(r=|\partial B|-5\) vertices \(v_{1},v_{2},\ldots,v_{r}\) in \(\partial B\) are each contained in distinct \(H\)-labelled faces. Thus, \[|\partial B|-3 = \sum_{H\in\mathcal{H}}(|\partial H|-3)\] \[\geq (|\partial H_{e}|-3)+(|\partial H_{g}|-3)+\sum_{i=1}^{r}(|\partial H _{v_{i}}|-3)\] \[\geq r+2\] The above inequalities imply that \(H_{e}\), \(H_{g}\) and \(H_{v_{1}},\ldots,H_{v_{r}}\) are the only \(H\)-labelled faces of \(G\) and each of these faces has boundary length four. Case 3: Suppose \(G\) contains exactly three \(BH\) edges \(e,f,g\) and these three edges are consecutive. The vertices of \(e\) are contained in a common \(H\)-labelled face \(H_{e}\). Since \(G\) is indivisible, it follows from Lemma 2.2 that the remaining \(r=|\partial B|-4\) vertices \(v_{1},v_{2},\ldots,v_{r}\) in \(\partial B\) are each contained in distinct \(H\)-labelled faces. Thus, \[|\partial B|-3 = \sum_{H\in\mathcal{H}}(|\partial H|-3)\] \[\geq (|\partial H_{e}|-3)+\sum_{i=1}^{r}(|\partial H_{v_{i}}|-3)\] \[\geq r+1\] The above inequalities imply that \(H_{e}\) and \(H_{v_{1}},\ldots,H_{v_{r}}\) are the only \(H\)-labelled faces of \(G\) and each of these faces has boundary length four. However, the boundary of \(H_{e}\) consists of three consecutive edges of \(\partial B\) and a fourth edge that is not in \(B^{\dagger}\) but is incident to two vertices of \(B^{\dagger}\). This contradicts the \((3,6)\)-tightness of \(G^{\dagger}\) and so the three \(BH\)-edges of \(G\) must not be consecutive. See Figure 6 for examples of face graphs with no \(TT\) edges and exactly three \(BH\) edges. ### On the sufficiency of vertex splitting Let \(G\in\mathcal{G}(1,n)\). A \(TT\) edge is _contractible_ in \(G\) if it does not belong to any non-facial \(3\)-cycle in \(G\). A \(TT\)_edge contraction_ on \(G\) is an operation on the class of face graphs whereby the vertices of a contractible \(TT\) edge in \(G\) are identified, the resulting loop and parallel edges are discarded, and the labellings of all non-triangular faces in the resulting planar graph are inherited from \(G\). Note that a \(TT\) edge contraction fails to preserve \((3,6)\)-tightness if and only if the contractible \(TT\) edge lies on a non-facial critical separating cycle of \(G\) (see [3, Lemma 27]). For this reason we restrict attention to \(TT\) edge contractions on \(G\) which are _admissible_ in the sense that the contractible \(TT\) edge does not belong to a non-facial critical separating cycle of \(G\). 
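The contraction operation itself is elementary on the underlying graph. The following is a minimal sketch (plain set-based adjacency lists; it does not track the planar embedding or the \(B\)/\(H\) face labels, which the definitions above do require) of contracting an edge and discarding the loop and parallel edges that arise.

```python
def contract_edge(adj, u, v):
    """Contract the edge uv in a simple graph given as {vertex: set(neighbours)}.

    The vertices u and v are identified (kept under the name u); the loop uv and
    any parallel edges created by the identification are discarded, as in the
    TT and BH edge contractions described above (face labels not modelled here).
    """
    merged = (adj[u] | adj[v]) - {u, v}            # drop the loop at the merged vertex
    new_adj = {}
    for w, nbrs in adj.items():
        if w in (u, v):
            continue
        new_nbrs = set(nbrs)
        if v in new_nbrs:                          # redirect edges at v to u;
            new_nbrs.discard(v)                    # duplicates collapse in the set
            new_nbrs.add(u)
        new_adj[w] = new_nbrs
    new_adj[u] = merged
    return new_adj

# contracting an edge of K4 yields a triangle, with multiplicities discarded
K4 = {1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {1, 2, 3}}
print(contract_edge(K4, 1, 2))                     # {3: {1, 4}, 4: {1, 3}, 1: {3, 4}}
```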
**Definition 2.11**.: A face graph \(G\in\mathcal{G}(1,n)\) is _terminal_ if there exist no admissible \(TT\) edge contractions on \(G\). **Lemma 2.12**.: _Let \(G\in\mathcal{G}(1,n)\). If \(G\) is terminal then \(G\) contains no non-facial \(3\)-cycles._ Proof.: Suppose \(c\) is a non-facial \(3\)-cycle in \(G\). Note that \(f(c)=6\). Since \(G^{\dagger}=Ext(c)\cup G_{2}\) and \(c=Ext(c)\cap G_{2}\) we have, \[f(G_{2})=f(Ext(c))+f(G_{2})-f(c)=f(G^{\dagger})=6.\] Recall that in general planar graphs satisfy \(f(K)\geq 6\) and so \(G_{2}\) is a maximal planar graph. Since \(c\) is a non-facial \(3\)-cycle in \(G\) it follows that there exists a contractible \(TT\) edge \(f\) in \(G_{2}\) that does not lie in \(c\) (see for example [1, Lemma 1]). Note that the graph \(G_{2}/f\) obtained on contracting this \(TT\) edge is again a maximal planar graph. Consider the face graph \(G/f\) obtained from \(G\) by applying a \(TT\) edge contraction to \(f\). Note that the discus-and-hole graph \((G/f)^{\dagger}\) is obtained from \(G^{\dagger}\) by replacing \(G_{2}\) with \(G_{2}/f\). Also note that, \(G^{\dagger}\), \(G_{2}\) and \(G_{2}/f\) are minimally \(3\)-rigid. Thus, by the isostatic substitution principle (Lemma 2.5), \((G/f)^{\dagger}\) is minimally \(3\)-rigid. In particular, \((G/f)^{\dagger}\) is \((3,6)\)-tight. Since the \(TT\) edge contraction of \(f\) preserves \((3,6)\)-tightness it is an admissible \(TT\) edge contraction on \(G\). This contradicts the terminality of \(G\). A \(BH\) edge in the face graph \(G\) is _contractible_ if it does not belong to any \(3\)-cycle in \(G\). A \(BH\) _edge contraction_ on \(G\) is an operation on the class of face graphs whereby the vertices of a contractible \(BH\) edge in \(G\) are identified, the resulting loop is discarded, and the labellings of all non-triangular faces are inherited from \(G\). Note that \(BH\) edge contractions preserve \((3,6)\)-tightness (see [3, Lemma 29]). Also note that under a \(BH\) edge contraction it is possible for the \(B\)-labelled face and the \(H\)-labelled face containing the contractible \(BH\) edge to be transformed into triangular faces. **Definition 2.13**.: A face graph is \(BH\)_-reduced_ if it contains no contractible \(BH\) edges. We will require the following result. Figure 6. Face graphs with no \(TT\) edges and exactly three \(BH\) edges. The face graph on the left lies in \(\mathcal{G}(1,3)\) and is indivisible. The face graphs in the middle and on the right lie in \(\mathcal{G}(1,4)\) and \(\mathcal{G}(1,5)\) respectively and contain non-facial critical separating cycles (indicated in blue). **Lemma 2.14**.: _[_3_, Corollary 33]_ _For each \(n\geq 1\), there is no face graph in \(\mathcal{G}(1,n)\) which is terminal, indivisible and \(BH\)-reduced._ Note that the reversal of a \(TT\) edge contraction or a \(BH\) edge contraction is a vertex splitting operation. We can now strengthen the statement of Theorem 1.4 as follows. **Theorem 2.15**.: _Let \(\hat{G}\) be a block-and-hole graph with a single block and finitely many holes, or, a single hole and finitely many blocks. The following statements are equivalent._ 1. \(\hat{G}\) _is minimally_ \(3\)_-rigid._ 2. \(G^{\dagger}\) _is constructible from_ \(K_{3}\) _by vertex splitting._ Proof.: Throughout this proof we will use the word "constructible" as a shorthand for "constructible from \(K_{3}\) by vertex splitting only". 
In light of Theorem 1.4 it suffices to show that if the discus-and-hole graph \(G^{\dagger}\) with a single discus and finitely many holes is \((3,6)\)-tight then it is constructible. We prove this by induction on the number of edges in \(G^{\dagger}\). Thus let \(G\in\mathcal{G}(1,n)\) and assume that the theorem is true for all discus-and-hole graphs with strictly fewer edges than \(G^{\dagger}\). If \(G\) has a contractible \(BH\) edge then by [3, Lemma 29] we can apply a \(BH\) edge contraction to obtain a face graph \(G^{\prime}\) that lies in \(\mathcal{G}(1,n)\), \(\mathcal{G}(1,n-1)\) or in \(\mathcal{G}(0,0)\). In any case, the resulting discus-and-hole graph \((G^{\prime})^{\dagger}\) has fewer edges than \(G^{\dagger}\) and is hence constructible. Note that \(G^{\dagger}\) can be obtained from \((G^{\prime})^{\dagger}\) by applying a vertex splitting operation and so \(G^{\dagger}\) is also constructible. Similarly, if \(G\) has a contractible \(TT\) edge that does not lie in any non-facial critical separating cycle then we may apply an admissible \(TT\) edge contraction to obtain a face graph \(G^{\prime}\) which lies in \(\mathcal{G}(1,n)\). Again, the resulting discus-and-hole graph \((G^{\prime})^{\dagger}\) has fewer edges than \(G^{\dagger}\) and is hence constructible. Since \(G^{\dagger}\) can be obtained from \((G^{\prime})^{\dagger}\) by vertex splitting we conclude that \(G^{\dagger}\) is constructible also. Now suppose \(G\) is both \(BH\)-reduced and terminal. By Lemma 2.12, \(G\) contains no non-facial \(3\)-cycles. Thus, by 2.14, \(G\) must contain a non-facial critical separating cycle \(c\) with \(|c|\geq 4\). Let \(G_{1}\) and \(G_{2}\) be the external and internal face graphs associated with \(c\). We can choose \(c\) so that there is no non-facial critical separating cycle for \(G\) in \(G_{2}\) apart from \(c\) itself. By Lemma 2.6, any critical separating cycle for the internal face graph \(G_{2}\) is also a critical separating cycle for \(G\). Thus, our choice of \(c\) ensures that the face graph \(G_{2}\) is indivisible. If \(G_{2}\) contains a \(TT\) edge \(e\), then \(e\) does not lie on any non-facial critical separating cycle of \(G_{2}\). Since \(|c|\geq 4\), \(e\not\in c\) and so \(e\) is also a \(TT\) edge in \(G\). By Lemma 2.7, we conclude that \(e\) does not lie on any non-facial critical separating cycle for \(G\) either. Thus the contraction of \(e\) is an admissible \(TT\) edge contraction for \(G\). This contradicts the terminality of \(G\) and so, from now on, we may assume that \(G_{2}\) has no \(TT\) edges. Suppose \(G_{1}\) has a contractible \(TT\) edge \(e\) that does not lie on any non-facial critical separating cycle of \(G_{1}\). Since \(|c|\geq 4\), \(e\) is also a \(TT\) edge in \(G\). By Lemma 2.7, \(e\) does not lie on any non-facial critical separating cycle of \(G\). Again, the contraction of \(e\) is an admissible \(TT\) edge contraction for \(G\) and this contradicts the assumption that \(G\) is terminal. Thus, we may assume that \(G_{1}\) is terminal. Since \(Ext(c)\) has fewer edges than \(G^{\dagger}\), it is constructible. Thus \(G_{1}\) must have at least one contractible \(BH\) edge. Since \(G\) is \(BH\)-reduced and contains no non-facial \(3\)-cycles, we conclude that \(G\) contains no \(BH\) edges. Thus every contractible \(BH\) edge of \(G_{1}\) must in fact also be an edge of \(c\) (otherwise it would be a \(BH\) edge in \(G\)). 
**Claim 2.16**.: There are at least four edges of \(c\) that are not in the boundary of the \(B\)-labelled face in \(G\). Proof of Claim.: Using the isostatic substitution principle (Lemma 2.5), observe that \(G_{2}^{\dagger}\) is \((3,6)\)-tight since it is obtained from \(G^{\dagger}\) by replacing \(Ext(c)\) with a discus. Since \(G_{2}\) is indivisible and has no \(TT\) edges we can apply Lemma 2.9 to conclude that \(G_{2}\) has at least three \(BH\) edges. None of these edges are contained in the boundary of the \(B\)-labelled face in \(G\) since \(G\) contains no \(BH\) edges. Thus, we have demonstrated the existence of three of the required four edges. To get the fourth edge we use Lemma 2.10. This says that in the case where \(G_{2}\) has exactly three \(BH\) edges, these three edges are not consecutive around the boundary of the \(B\)-labelled face of \(G_{2}\). Label these three edges \(e_{1}\), \(e_{2}\) and \(e_{3}\). Now suppose that all other edges of \(c\) also belong to the boundary of the the \(B\)-labelled face in \(G\). Since \(e_{1}\), \(e_{2}\) and \(e_{3}\) are not consecutive in the cycle \(c\), at least one of these edges, say \(e_{1}\) after relabelling if necessary, is not adjacent to either of the other two. Then the vertices of \(e_{1}\) must lie in the boundary of the \(B\)-labelled face in \(G\). It follows that \(e_{1}\) is an edge of \(G^{\dagger}\) that is not in the discus \(B^{\dagger}\) but is incident with two vertices in \(B^{\dagger}\). This contradicts the \((3,6)\)-tightness of \(G^{\dagger}\). Now let \(K\) be the face graph obtained by applying \(BH\) edge contractions to \(G_{1}\) until no further \(BH\) edge contractions are possible (recalling that all of these \(BH\) edges lie in \(c\)). By Claim 2.16 there are at least four edges remaining in the cycle corresponding to \(c\). So this cycle still bounds a hole in \(K\). Thus every \(TT\) edge of \(K\) is also a \(TT\) edge of \(G_{1}\). Moreover it is clear that there is an obvious correspondence between the non-facial critical separating cycles of \(K\) and those of \(G_{1}\), and, that if a \(TT\) edge of \(K\) lies on a non-facial critical separating cycle in \(K\) then it does so in \(G_{1}\). By induction \(K^{\dagger}\) is constructible and so \(K\) must have a contractible \(TT\) edge that does not lie on a non-facial critical separating cycle (it has no contractible \(BH\) edges by construction). But this contradicts the assumption that \(G_{1}\) has no such edges. We conclude that \(G\) cannot be both \(BH\)-reduced and terminal. This completes the proof. ## 3. \((3,0)\)-sparsity and pebble games The main result of [3] characterises minimal \(3\)-rigidity for block-and-hole graphs with a single block in terms of \((3,6)\)-sparsity. The aim of this section is to show that \((3,6)\)-sparsity is equivalent to an a priori weaker sparsity condition on two related multigraphs. The advantage of these characterisations is that they can be quickly checked via a pebble game algorithm in the sense of [14], whereas the \((3,6)\)-sparsity condition lies outside the "matroidal" range and cannot be so easily checked. Let \(G\) be a face graph with a single \(B\)-labelled face. We denote by \(G^{2\sigma}\) the multigraph constructed from the face graph \(G\) by adjoining two self-loops to each vertex \(v\in V(\partial B)\). Let \(G^{-}=G\setminus E(\partial B)\) be the graph obtained by removing the edges in the boundary cycle \(\partial B\) from \(G\). 
We denote by \((G^{-})^{3\sigma}\) the graph obtained from \(G^{-}\) by adding three self-loops to each of the vertices of \(\partial B\). We refer to \(G^{2\sigma}\) and \((G^{-})^{3\sigma}\) as looped face graphs. A multigraph \(J\) is said to be \((3,0)\)_-sparse_ if \(f(J^{\prime})\geq 0\) for any subgraph \(J^{\prime}\). A multigraph \(J\) is \((3,0)\)_-tight_ if it is \((3,0)\)-sparse and \(f(J)=0\). For more on \((k,l)\)-sparsity generally see [14]. We will require the following lemma. **Lemma 3.1**.: _A multigraph is \((3,0)\)-tight if and only if there exists an out-degree 3 orientation of the edges of the multigraph._ Proof.: Apply [14, Theorem 8 and Lemma 10]. **Example 3.2**.: Let \(\hat{G}\) be a block-and hole graph on the face graph \(G\) illustrated in Figure 7. The associated looped face graphs admit out degree 3 edge orientations. Thus, by Lemma 3.1, these multigraphs are \((3,0)\)-tight. By Theorem 3.3 below, the block and hole graph \(\hat{G}\) is \((3,6)\)-tight and so, by Theorem 1.4, \(\hat{G}\) is minimally 3-rigid. We now prove the main result of this section. **Theorem 3.3**.: _Let \(\hat{G}\) be a block-and-hole graph with a single block and finitely many holes. Then the following statements are equivalent._ 1. \(\hat{G}\) _is minimally_ \(3\)_-rigid._ 2. \(G^{2\sigma}\) _is_ \((3,0)\)_-tight._ 3. \((G^{-})^{3\sigma}\) _is_ \((3,0)\)_-tight._ Proof.: \((i)\Rightarrow(ii)\) Suppose \(\hat{G}\) is minimally 3-rigid. Let \(K\) be a subgraph of \(G^{2\sigma}\) and let \(K^{\prime}=K\cap G\) be the subgraph of \(G\) obtained by removing all self-loops from \(K\). Note that \(K^{\prime}\cap\hat{B}\) is a subgraph of the boundary cycle \(\partial B\) and so \(|E(K^{\prime}\cap\hat{B})|\leq|V(K^{\prime}\cap\hat{B})|\). It follows that \(f(K^{\prime}\cap\hat{B})\geq 2|V(K^{\prime}\cap\partial B)|\). Note that \[f(K^{\prime}\cup\hat{B})=f(K^{\prime})+f(\hat{B})-f(K^{\prime}\cap\hat{B})\leq f (K^{\prime})+6-2|V(K^{\prime}\cap\partial B)|\leq f(K)+6.\] Since \(K^{\prime}\cup\hat{B}\) is a subgraph of \(\hat{G}\), it is \((3,6)\)-sparse, and so \(f(K)\geq 0\). We conclude that \(G^{2\sigma}\) is \((3,0)\)-sparse. Note that \(f(\partial B)=2|V(\partial B)|\) and so, \[f(\hat{G})=f(\hat{B})+f(G)-f(\partial B)=6+f(G)-2|V(\partial B)|=6+f(G^{2 \sigma}).\] Figure 7. A face graph \(G\) (left) and its associated looped face graphs \(G^{2\sigma}\) (centre) and \((G^{-})^{3\sigma}\) (right) together with out degree 3 edge orientations. Thus \(f(G^{2\sigma})=f(\hat{G})-6=0\) and so \(G^{2\sigma}\) is \((3,0)\)-tight. \((ii)\Leftrightarrow(iii)\) Note that on \(V(\partial B)\), any outdegree \(3\) orientation of the edges of \((G^{-})^{3\sigma}\) or \(G^{2\sigma}\) has a very constrained form. For any vertex \(v\) of \(V(\partial B)\subset V((G^{-})^{3\sigma})\), the three self-loops on it must be oriented away from \(v\), and similarly for the two self-loops on the vertices of \(\partial B\subset G^{2\sigma}\). Then there is one remaining outgoing edge from each \(v\in V(\partial B)\subset V(G^{2\sigma})\) which must be one of the two edges of \(\partial B\) that meet it. It follows that \(\partial B\subset G^{2\sigma}\) must be oriented according to one of its two cyclic orientations. Thus any outdegree \(3\) orientation of \((G^{-})^{3\sigma}\) is easily converted to one of \(G^{2\sigma}\) and vice versa. The result now follows from Lemma 3.1. \((iii)\Rightarrow(i)\) Suppose the multigraph \((G^{-})^{3\sigma}\) is \((3,0)\)-tight. 
Let \(K\) be a subgraph of \(\hat{G}\) containing at least two edges. If \(K\) is a subgraph of \(G\) then, since \(G\) is a subgraph of a triangulated sphere, \(K\) is \((3,6)\)-sparse. If \(K\) is not a subgraph of \(G\) then we consider three possible cases: _Case 1:_ Suppose \(K\cap\hat{B}\) contains at least two edges. Consider the subgraph \((K\cap G^{-})^{3\sigma}\) of the multigraph \((G^{-})^{3\sigma}\). Note that, \[0\leq f((K\cap G^{-})^{3\sigma})=f(K\cap G)-f(K\cap\partial B).\] Since \(\hat{B}\) is \((3,6)\)-sparse, we have \(f(K\cap\hat{B})\geq 6\) and so, \[f(K)=f(K\cap\hat{B})+f(K\cap G)-f(K\cap\partial B)\geq 6.\] _Case 2:_ Suppose \(K\cap\hat{B}\) contains no edges, or contains exactly one edge which lies in \(\partial B\). Then \(K\) must be the disjoint union of \(K\cap G\) (which, as a subgraph of a triangulated sphere, is \((3,6)\)-sparse) and some number of vertices in \(\hat{B}\). Hence \(f(K)\geq f(K\cap G)\geq 6\). _Case 3:_ Suppose \(K\cap\hat{B}\) contains exactly one edge and that this edge does not lie in \(\partial B\). Then \(K\) must consist of \(K\cap G\) with an additional edge (which is still a subgraph of a triangulated sphere) together with some number of vertices in \(\hat{B}\). Hence \(f(K)\geq 6\). We conclude that \(\hat{G}\) is \((3,6)\)-sparse. Also, \[f(\hat{G})=f(\hat{B})+f(G)-f(\partial B)=6+f(G)-2|V(\partial B)|=6+f((G^{-})^{ 3\sigma}).\] Thus \(f(\hat{G})=6\) and so \(\hat{G}\) is \((3,6)\)-tight. By Theorem 1.4, \(\hat{G}\) is minimally \(3\)-rigid. ## 4. Applications and Conjectures ### Rigidity in \(\ell_{p}^{3}\) The vertex splitting operation considered in Section 2 is known to preserve rigidity properties in geometric settings other than the Euclidean space \(\mathbb{R}^{3}\). For example, it is known that vertex splitting preserves _independence_ in every \(3\)-dimensional real normed linear space which is both smooth and strictly convex (see [6, Proposition 4.7]). It follows that any class of graphs which are constructible from an independent base graph by vertex splitting (for example, triangulations of a \(2\)-sphere) will satisfy independence. Thus, with the main theorem of Section 2 in hand, we obtain the following immediate corollary. **Corollary 4.1**.: _Let \(X\) be a \(3\)-dimensional real normed linear space which is smooth and strictly convex. Then every \((3,6)\)-tight discus-and-hole graph, with a single discus, is independent in \(X\)._ Proof.: By [6, Proposition 4.7], vertex splitting preserves independence in \(X\). The graph \(K_{3}\) is independent in \(X\). Thus the result follows from Theorem 2.15. In the case of \(\ell_{p}^{3}\), where \(p\in[1,\infty]\) and \(p\neq 2\), the minimally rigid graphs are \((3,3)\)-tight. Here a simple graph \(J\) is _\((3,3)\)-tight_ if \(f(J)=6\) and \(f(J^{\prime})\geq 3\) for any subgraph \(J^{\prime}\). The smallest (non-trivial) graph with this property is the complete graph \(K_{6}\). It is conjectured that every \((3,3)\)-tight simple graph is minimally rigid in \(\ell_{p}^{3}\) (see for example [6]). We propose here a special case of this conjecture. **Conjecture 4.2**.: Let \(p\in[1,\infty]\), \(p\neq 2\). Let \(\hat{G}\) be a block-and-hole graph with a single block. If the block is minimally rigid in \(\ell_{p}^{3}\) then the following statements are equivalent. 1. \(\hat{G}\) is minimally rigid in \(\ell_{p}^{3}\). 2. \(\hat{G}\) is \((3,3)\)-tight. 
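To make the computational content of Lemma 3.1 and Theorem 3.3 concrete before turning to further conjectures, the following sketch checks whether a multigraph (self-loops allowed) admits an out-degree-3 orientation, using the standard augmenting-path reorientation that underlies pebble game algorithms. This is our own illustration in Python, not code from this paper or from [14]; the function names and the small example are assumptions made for the sketch, and, assuming the usual count \(f(J)=3|V(J)|-|E(J)|\), \((3,0)\)-tightness additionally requires that the number of edges equals three times the number of vertices.

```python
from collections import defaultdict, deque

def out_degree_k_orientation(edges, k=3):
    """Try to orient every edge (u, v) -- self-loops u == v allowed -- so that
    each vertex has out-degree at most k.  Such an orientation exists iff the
    multigraph is (k,0)-sparse; for (k,0)-tightness one additionally needs
    |E| = k * |V|.  Returns {vertex: list of edge heads} or None."""
    out = defaultdict(list)

    def free_capacity(starts):
        # BFS along the current orientation for a vertex with spare out-degree,
        # then reverse the path found so that a start vertex regains capacity.
        parent = {s: None for s in starts}
        queue = deque(starts)
        while queue:
            v = queue.popleft()
            if len(out[v]) < k:
                while parent[v] is not None:      # reverse edges back to a start
                    u = parent[v]
                    out[u].remove(v)
                    out[v].append(u)
                    v = u
                return v
            for w in out[v]:
                if w not in parent:
                    parent[w] = v
                    queue.append(w)
        return None

    for (u, v) in edges:
        tail = free_capacity({u} if u == v else {u, v})
        if tail is None:
            return None        # some sub-multigraph has more than k|V'| edges
        out[tail].append(v if tail == u else u)
    return out

# Example: a 3-cycle with two self-loops on every vertex (the pattern of
# G^{2 sigma} when the boundary cycle of the B-labelled face is a triangle)
# has 9 = 3 * 3 edges and admits an out-degree-3 orientation.
edges = [(0, 1), (1, 2), (2, 0)] + [(v, v) for v in range(3) for _ in range(2)]
print(out_degree_k_orientation(edges) is not None)   # True
```

Running the same test on \(G^{2\sigma}\) or \((G^{-})^{3\sigma}\) built from a given face graph would give the rigidity check of Theorem 3.3.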
### Conjecture on global rigidity Establishing global rigidity is typically a more difficult problem than establishing rigidity for a given class of graphs. One of the reasons is that vertex splitting is less well understood in this context. Connelly and Whiteley have conjectured a necessary and sufficient condition for vertex splitting to preserve global rigidity in \(\mathbb{R}^{d}\)[2]. This conjecture is still open but has been verified in certain special cases (see [12, 4, 5]) leading to global rigidity characterisations for braced plane triangulations and for triangulations of non-spherical surfaces. Given Theorem 2.15, it is natural to wonder if similar global rigidity characterisations might be obtained for discus-and-hole graphs. **Conjecture 4.3**.: Suppose that \(G^{\dagger}\) is a discus-and-hole graph with exactly one discus. Then \(G^{\dagger}\) is generically globally rigid in \(\mathbb{R}^{3}\) if and only if \(G^{\dagger}\) is \(4\)-connected and redundantly rigid in \(\mathbb{R}^{3}\). Note that the "only if" implication in Conjecture 4.3 is already well known (see [10]). ### Connection to rigid origami Rigid origami is the study of structures made out of flat rigid sheets joined at hinges. Such structures have inspired work in structural engineering, mechanical design and the physics of mechanical metamaterials [8, 15, 16, 18]. It is of practical interest, given such a structure, to determine its mechanical properties, and as a very first step, one would like to know whether it is floppy or rigid. It is natural, given the constraint that the sheets remain rigidly flat, to mathematically model rigid origami by polyhedral surfaces (with boundary). The connection to the block-and-hole graphs considered in this article is then as follows. Given a polyhedral surface, we wish to replace it by a bar-joint framework such that all vertices and edges of the polyhedral surface become joints and bars, respectively. In order for the framework to have the same rigidity properties we must add additional bars and joints to the non-triangular faces, as they could otherwise bend and flex in the framework. By the isostatic substitution principle (Lemma 2.5), this can be done without introducing dependencies in the bars by adding any minimally 3-rigid graph on the vertices of the planar face. For example, the following two part construction works: first, triangulate each of the non-triangular faces and second, for each non-triangular face, create a new joint off the plane of the face with bars to each of the vertices of that face. Note that this replaces the rigid face with a triangulated prism. One can then naturally identify these with "blocks" and the missing faces as "holes". One important caveat is that the realizations of block-and-hole graphs arising from the above construction are not generic - the blocks are bounded by sets of coplanar vertices. It is natural of course to conjecture (along the lines of the molecular conjecture of Tay and Whiteley [17] proved by Katoh and Tanigawa [13]) that the rigidity of generic polyhedral surfaces can indeed be predicted by the rigidity of structures where the blocks are made more generic, but this remains to be proven. One further point is that the definition of rigid origami above allows vertices to have discrete Gaussian curvature (i.e. the angles of the faces around them may not sum to \(2\pi\)). Such a structure could not be folded from an ordinary sheet of paper. 
It would be interesting to consider the "developable" rigid origami case (where all angle-sums around vertices are \(2\pi\)), and this would require the consideration of further non-genericities. It may be that block-and-hole graphs provide the appropriate counts for "generic developable rigid origami" as well. Assuming a suitable "molecular origami conjecture" holds, Theorem 1.4 and Theorem 3.3 give a way of determining the rigidity or flexibility of rigid origami with either (1) one non-triangular face and an arbitrary number of non-triangular holes or (2) one non-triangular hole and an arbitrary number of non-triangular faces (related by block-and-hole swapping). Note that "pure" origami folded from a single-sheet without allowing any cutting leads at the combinatorial level to block-and-hole graphs which satisfy (2), with the exterior of the paper viewed as a large hole. ## 5. Acknowledgement This article is based on work initiated by the authors during the BIRS workshop on Advances in Combinatorial and Geometric Rigidity (15w5114).
Minimally 3-rigid block-and-hole graphs, with one block or one hole, are characterized as those constructible from $K_3$ by vertex splitting, and as those whose looped face graphs are $(3,0)$-tight; the latter property can be verified in polynomial time by a pebble game algorithm. These results are also connected to the rigidity of polyhedral surfaces, such as those arising in origami, and to graph rigidity in $\ell_p^3$.
2301.13395
Differentiating Through Integer Linear Programs with Quadratic Regularization and Davis-Yin Splitting
In many applications, a combinatorial problem must be repeatedly solved with similar, but distinct parameters. Yet, the parameters $w$ are not directly observed; only contextual data $d$ that correlates with $w$ is available. It is tempting to use a neural network to predict $w$ given $d$. However, training such a model requires reconciling the discrete nature of combinatorial optimization with the gradient-based frameworks used to train neural networks. We study the case where the problem in question is an Integer Linear Program (ILP). We propose applying a three-operator splitting technique, also known as Davis-Yin splitting (DYS), to the quadratically regularized continuous relaxation of the ILP. We prove that the resulting scheme is compatible with the recently introduced Jacobian-free backpropagation (JFB). Our experiments on two representative ILPs: the shortest path problem and the knapsack problem, demonstrate that this combination-DYS on the forward pass, JFB on the backward pass-yields a scheme which scales more effectively to high-dimensional problems than existing schemes. All code associated with this paper is available at github.com/mines-opt-ml/fpo-dys.
Daniel McKenzie, Samy Wu Fung, Howard Heaton
2023-01-31T04:03:28
http://arxiv.org/abs/2301.13395v4
# Faster Predict-and-Optimize ###### Abstract In many practical settings, a combinatorial problem must be repeatedly solved with similar, but distinct parameters \(w\). Yet, \(w\) is not directly observed; only contextual data \(d\) that correlates with \(w\) is available. It is tempting to use a neural network to predict \(w\) given \(d\), but training such a model requires reconciling the discrete nature of combinatorial optimization with the gradient-based frameworks used to train neural networks. One approach to overcoming this issue is to consider a continuous relaxation of the combinatorial problem. While existing such approaches have shown to be highly effective on small problems (10-100 variables) they do not scale well to large problems. In this work, we show how recent results in operator splitting can be used to design such a system which is easy to train and scales effortlessly to problems with thousands of variables. ## 1 Introduction Many high-stakes decision problems in healthcare[11], logistics and scheduling [12, 13], and transportation [24] can be phrased as combinatorial optimization problems, _i.e._ to compute \[x^{\star}=\operatorname*{argmin}_{x\in\mathcal{X}}f(x;w), \tag{1}\] where \(\mathcal{X}\subset\mathbb{R}^{n}\) is a finite constraint set and \(f(x)=w^{\top}x\) is a linear function4. Depending on the structure of \(\mathcal{X}\), the problem (1) may be straightforward (_e.g._ shortest path) or NP-hard (_e.g._ Traveling Sales Person problem [14]). In both cases, this problem is well-studied and there are a plethora of robust and reliable algorithms for solving (1) _given_ both the constraint \(\mathcal{X}\) and objective \(f\). Our present interest is settings where \(f\)_is only partially known_. Specifically, we aim to repeatedly compute Footnote 4: While we focus on linear objective functions, our techniques can be applied to more general, convex \(f\). \[x^{\star}(d)=\operatorname*{argmin}_{x\in\mathcal{X}}w(d)^{\top}x, \tag{2}\] for different data \(d\), where \(w\) is a function of \(d\). Importantly, the precise relationship between \(d\) and \(w\) is _unknown_. We propose _learning a model_ to approximate the unknown objective: \(w_{\Theta}(d)\approx w(d)\). The data \(d\) is observed and is called the _context_. As an illustrative example to keep in mind, consider the shortest path prediction problem studied in [24, 25] (see Figure 1). Here, the goal is to find the shortest path (from top-left to bottom-right) through a randomly generated terrain map from the Warcraft II tileset [23]. Each 8-by-8 square in the map represents a vertex in a 12-by-12 grid graph (with diagonal edges). The time required to traverse this vertex depends on the visual content of the square (e.g. a square containing water takes longer to traverse than a square containing land). Here \(d\) is the image representing the map, \(w(d)\) is a vector of traversal times (_i.e._ costs) for each vertex5, the true objective is \(f(x)=w(d)^{\top}x\), and \(x^{\star}(d)\) is the true shortest path (See (c) in 1). Footnote 5: Note [10] considers the vertex-weighted variant of the shortest problem. In our experiments we consider the edge-weighted version. We discuss this further in Appendix. Suppose we wish to use gradient-based methods (_e.g._ Stochastic Gradient Descent) to train \(w_{\Theta}(d)\). 
The primary obstacle to doing so is "differentiating through" the solution to \[\hat{x}(d)=\operatorname*{argmin}_{x\in\mathcal{X}}w_{\Theta}(d)^{\top}x, \tag{3}\] so as to compute an informative gradient with which to update \(\Theta\). Given the combinatorial nature of \(\mathcal{X}\), this is a nontrivial task. Specifically, for many small perturbations of \(\Theta\), the solution \(\hat{x}(d)\) will be unchanged; yet, for some perturbations the solution \(\hat{x}(d)\) will "jump" to a different point in \(\mathcal{X}\). Hence the gradient \(d\hat{x}/dw_{\Theta}\) is always either zero or undefined. We emphasize that this behaviour is common to all problems of the form (1) with linear \(f(\cdot;w)\). To compute an informative gradient, we follow recent works (_e.g._[11]) and relax (3) to a quadratic program over the convex hull of \(\mathcal{X}\) (see (13)) by adding a small regularizer. Building upon [11], we use Davis-Yin Three Operator splitting [11] and Jacobian-Free Backpropagation (JFB) [12] to build a solver for this relaxation that is efficient _and_ easy to backpropagate through. Our approach is fast, easy to implement using our provided code6, and unlike several prior works (_e.g._ see [10, 12]), runs completely on GPU. Although prior works are mostly constrained to problems with fewer than one hundred variables, numerical examples herein demonstrate our approach, run using only standard computing resources, easily scales to combinatorial problems with tens of thousands of variables. Footnote 6: See [https://github.com/mines-opt-ml/SPO_with_DYS](https://github.com/mines-opt-ml/SPO_with_DYS) Paper OrganizationThe rest of this paper is laid out as follows. We introduce the Predict-then-Optimize paradigm in Section 2, which is followed by discussion of related works in Section 3. Our proposed combination of a three-operator splitting architecture, called DYS-Net, and Jacobian-Free Backpropagation (JFB) is presented in Section 4. Numerical examples are provided in Section 5 and concluding remarks in Section 6. ## 2 The Predict-and-Optimize Paradigm ### LP reformulation We begin by observing that the combinatorial optimization problem (2) is equivalent to a linear program: \[x^{\star}(d)=\operatorname*{argmin}_{x\in\mathcal{X}}w(d)^{\top}x= \operatorname*{argmin}_{x\in\operatorname{Conv}(\mathcal{X})}w(d)^{\top}x, \tag{4}\] Figure 1: The shortest path prediction problem [10]. The contextual data \(d\) in (a) maps to the cost of traversing each square in (b), with darker shading representing lower cost. The true shortest path is shown in (c). where \(\operatorname{Conv}(\mathcal{X})\) is the convex hull of \(\mathcal{X}\). This is because for generic7\(w(d)\) the solution to the linear program lies at a corner of the polytope \(\operatorname{Conv}(\mathcal{X})\), which must be an element of \(\mathcal{X}\). Equivalently: Footnote 7: For certain \(w(d)\) it may happen that an entire face of \(\operatorname{Conv}(\mathcal{X})\) is a solution to the linear program. In this case, (27) is no longer strictly correct; rather the left-hand side is a _subset_ of the right-hand side. In the interest of clarity we ignore this edge case. \[\hat{x}(d)=\operatorname*{argmin}_{x\in\mathcal{X}}w_{\Theta}(d)^{\top}x= \operatorname*{argmin}_{x\in\operatorname{Conv}(\mathcal{X})}w_{\Theta}(d)^{ \top}x \tag{5}\] Henceforth we follow [19, 10] and others and focus exclusively on this LP reformulation. 
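As a small, self-contained illustration of the equivalence (4)-(5), the following snippet solves an edge-weighted shortest path problem as a linear program over the unit-flow polytope \(\{x:\ Ax=b,\ x\geq 0\}\), where \(A\) is a node-arc incidence matrix. The graph, weights, and solver choice are our own illustrative assumptions, not taken from the paper's experiments; for generic weights the LP optimum is the indicator vector of the shortest path, i.e., an element of \(\mathcal{X}\).

```python
import numpy as np
from scipy.optimize import linprog

# Tiny directed graph on nodes 0..3; we want the shortest path from 0 to 3.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
w = np.array([1.0, 4.0, 1.0, 5.0, 1.0])        # hypothetical edge costs

# Node-arc incidence matrix: A[n, e] = +1 if edge e leaves node n, -1 if it enters.
n_nodes = 4
A = np.zeros((n_nodes, len(edges)))
for e, (u, v) in enumerate(edges):
    A[u, e], A[v, e] = 1.0, -1.0
b = np.zeros(n_nodes)
b[0], b[3] = 1.0, -1.0                         # one unit of flow from node 0 to node 3

res = linprog(c=w, A_eq=A, b_eq=b, bounds=(0, None))
print(res.x.round())   # [1. 0. 1. 0. 1.]: the indicator of the path 0 -> 1 -> 2 -> 3
```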
### The Loss and Training Data Recall that we wish to train a neural network \(w_{\Theta}(d)\) such that \(\hat{x}(d)\approx x^{\star}(d)\), where \(\hat{x}(d)\) is defined in (5) and \(x^{\star}(d)\) is defined in (4). What loss function, and what training data, should one use? Several works [19, 20] suggest gathering training data of the form \((d,w(d))\) and then training so as to minimize the discrepancy8 between \(w(d)\) and \(w_{\Theta}(d)\). Frequently, this is referred to as the "two-stage" approach. For large or complex problems this can be sub-optimal. Intuitively [1] this is because small errors in the approximation \(w_{\Theta}(d)\approx w(d)\) in areas crucial to the optimization problem (4) can yield wildly different minimizers. Instead, one should choose a loss aligned with the end goal, _i.e._ one that seeks to minimize the discrepancy between \(x^{\star}_{\Theta}(d)\) and \(x(d)\)[19]. We discuss two such losses here. Footnote 8: for example the least-square discrepancy \(\|w(d)-w_{\Theta}(d)\|\) Regret LossTo define this loss, also known as the "Smart Predict-then Optimize" (SPO) loss [1] or task loss [10], we first consider the _regret_ associated to solving (2), given a particular \(d\), with \(\hat{x}(d)\) instead of \(x^{\star}(d)\): \[\mathcal{R}(\Theta,d,w):=w(d)^{\top}\hat{x}_{\Theta}(d)-w(d)^{\top}x^{\star}(d) \tag{6}\] As the second term is independent of model parameters (\(\Theta\)) we may drop it and consider the loss: \[\mathcal{L}_{R}(\Theta)=\mathbb{E}_{d\sim\mathcal{D}}\left[w(d)^{\top}\hat{x}_ {\Theta}(d)\right] \tag{7}\] The required training data in this setting is context-weight pairs \((d,w(d))\) where \(d\) is drawn from the distribution in question \(\mathcal{D}\). Note that finding a model with low task loss guarantees that _cost_ of the model output (_i.e._\(w^{\top}\hat{x}(d)\)) is close to the true optimal cost (_i.e._\(w^{\top}x^{\star}(d)\)). However, it provides no guarantees as to how well \(\hat{x}(d)\) approximates \(x^{\star}(d)\). Argmin lossIf \(w(d)\) is never observed directly, only \(x^{\star}(d)\), one can minimize the mismatch between \(\hat{x}(d)\) and \(x^{\star}(d)\): \[\mathcal{L}_{A}(\Theta)=\frac{1}{2}\mathbb{E}_{d\sim\mathcal{D}}\left[\|x^{ \star}(d)-\hat{x}_{\Theta}(d)\|^{2}\right] \tag{8}\] A similar loss is used in [19], except that the Hamming distance between \(x^{\star}(d)\) and \(\hat{x}(d)\), not the \(\ell_{2}\) distance, is computed. The required training data for this loss is context-solution pairs \((d,x^{\star}(d))\). ### Argmin differentiation In practice we replace the true risks for either loss by empirical ones: \[L_{R}(\Theta) =\sum_{i=1}^{m}w_{i}^{\top}\hat{x}_{\Theta}(d_{i}) \tag{9}\] \[L_{A}(\Theta) =\frac{1}{2}\sum_{i=1}^{m}\|x^{\star}(d_{i})-\hat{x}_{\Theta}(d_ {i})\|^{2} \tag{10}\] where \(w_{i}\) are the observed/true weights corresponding to \(d_{i}\). In order to use gradient based methods, one has to compute the derivative with respect to \(\Theta\). Appealing to the chain rule: \[\frac{dL_{R}}{d\Theta} =\sum_{i=1}^{m}w_{i}^{\top}\frac{\partial\hat{x}_{i,\Theta}}{dw_{i,\Theta}}\frac{dw_{i,\Theta}}{d\Theta} \tag{11}\] \[\frac{dL_{A}}{d\Theta} =\sum_{i=1}^{m}\left(x^{\star}(d_{i})-\hat{x}_{\Theta}(d_{i}) \right)^{\top}\frac{\partial\hat{x}_{i,\Theta}}{dw_{i,\Theta}}\frac{dw_{i, \Theta}}{d\Theta}, \tag{12}\] where \(w_{i,\Theta}:=w_{\Theta}(d_{i})\) is the neural network approximation evaluated at \(d_{i}\). 
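For concreteness, the empirical losses (9) and (10) amount to only a few lines of PyTorch once a differentiable surrogate for \(\hat{x}_{\Theta}(d)\) is available. The function names below are our own, and the snippet assumes `x_hat` is a batch of solver outputs that carries gradients back to \(\Theta\):

```python
import torch

def regret_loss(w_true, x_hat):
    """Empirical regret-type loss (9): cost of the predicted solutions
    under the true weights, summed over the batch."""
    return (w_true * x_hat).sum()

def argmin_loss(x_true, x_hat):
    """Empirical argmin (l2) loss (10): squared mismatch between the
    predicted and the true solutions, summed over the batch."""
    return 0.5 * ((x_true - x_hat) ** 2).sum()
```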
For both \(L_{R}\) and \(L_{A}\), the key difficulty is making sense of \(d\hat{x}_{\Theta}/dw\). As discussed in Section 1, \(\hat{x}_{\Theta}(d)\) is piecewise constant as a function of \(w\), and this remains true for the LP reformulation (5). So, for all \(w\) either \(d\hat{x}_{\Theta}(d)/dw=0\) or \(d\hat{x}_{\Theta}(d)/dw\) is undefined; neither case yields an informative gradient. To remedy this, [23, 24] propose adding a small amount of regularization to the objective function in (5) so that the objective function becomes strongly convex, whence \(\hat{x}_{\Theta}(d)\) becomes a continuously differentiable function of \(w\). We shall follow [23] and add a small quadratic regularizer, so that (5) becomes: \[\hat{x}_{\Theta}(d)=\operatorname*{argmin}_{x\in\operatorname{Conv}(\mathcal{X})}\underbrace{w_{\Theta}(d)^{\top}x+\gamma\|x\|_{2}^{2}}_{\triangleq f_{\Theta}(x;d)}. \tag{13}\] Thus, for the rest of this paper we shall consider the problem \[\hat{x}_{\Theta}(d)=\operatorname*{argmin}_{x\in\mathcal{C}}f_{\Theta}(x;d) \tag{14}\] where \(\mathcal{C}\) is a _polytope_ (_i.e._ the convex hull of a finite set) and \(f_{\Theta}(x;d)\) is strongly convex with respect to \(x\). Our goal is to solve (14) _and_ simultaneously compute the derivative9 \(d\hat{x}_{\Theta}/d\Theta\); this problem is frequently referred to as argmin differentiation and has received much attention lately [1, 2, 3, 1, 1, 2]. Crucially, we note that every polytope may be expressed in the standardized form10: Footnote 9: For notational convenience we suppress the dependence on \(d\) Footnote 10: The standardization process may add dummy variables, and so increase the dimension of the problem \(n\) \[\mathcal{C}=\left\{x\in\mathbb{R}^{n}:\ Ax=b\text{ and }x\geq 0\right\}, \tag{15}\] where the inequality \(x\geq 0\) is interpreted componentwise, see [11]. ## 3 Prior Work The most common approach to computing \(d\hat{x}_{\Theta}/d\Theta\), proposed in [1] and used in [23, 24], starts with the KKT conditions for constrained optimality: \[\frac{\partial f_{\Theta}}{\partial x}(\hat{x}_{\Theta})+A^{\top}\hat{\lambda}+\hat{\nu} =0\] \[A\hat{x}_{\Theta}-b =0\] \[D(\hat{\nu})\hat{x}_{\Theta} =0\] where \(\hat{\lambda}\) and \(\hat{\nu}\geq 0\) are Lagrange multipliers associated to the optimal solution \(\hat{x}_{\Theta}\)[1] and \(D(\hat{\nu})\) is a matrix with \(\hat{\nu}\) along its diagonal. Differentiating these equations with respect to \(\Theta\) and rearranging one obtains: \[\begin{bmatrix}\frac{\partial^{2}f_{\Theta}}{\partial x^{2}}&A&I\\ A^{\top}&0&0\\ D(\hat{\nu})&0&D(\hat{x}_{\Theta})\end{bmatrix}\begin{bmatrix}\frac{d\hat{x}_{\Theta}}{d\Theta}\\ \frac{d\hat{\lambda}}{d\Theta}\\ \frac{d\hat{\nu}}{d\Theta}\end{bmatrix}=\begin{bmatrix}\frac{\partial^{2}f_{\Theta}}{\partial x\partial\Theta}\\ 0\\ 0\end{bmatrix}. \tag{16}\] Solving (16) then yields \(\frac{d\hat{x}_{\Theta}}{d\Theta}\) (as well as \(\frac{d\hat{\lambda}}{d\Theta}\) and \(\frac{d\hat{\nu}}{d\Theta}\)). The computational bottleneck in this approach is computing \(\hat{\lambda}\) and \(\hat{\nu}\) in addition to \(\hat{x}_{\Theta}\) for (14). If \(\dim(x)=n\) and \(A\in\mathbb{R}^{m\times n}\) this can be done with a primal-dual interior point method at a cost of \(\mathcal{O}\left(\max\{n,m\}^{3}\right)\)[1]. 
In principle it is possible to exploit sparsity in \(A\) or \(\frac{\partial f_{\Theta}}{\partial x}(\hat{x}_{\Theta})\) to solve (16) faster, but in practice we observe the state-of-the-art implementation of this approach, cvxspylayers[1], struggles with problems containing more than 100 variables, see Section 5. Another approach, proposed for deep equilibrium models in [1] and adapted to constrained optimization layers in [1, 2] is to re-formulate (14) as a fixed point problem: \[\text{Find }\hat{x}_{\Theta}\text{ such that }\hat{x}_{\Theta}=P_{\mathcal{C}} \left(\hat{x}_{\Theta}-\alpha\frac{\partial f_{\Theta}}{\partial x}(\hat{x}_{ \Theta};d)\right) \tag{17}\] and then apply the implicit function theorem to obtain an explicit formula for \(d\hat{x}_{\Theta}/d\Theta\). However, the cost of computing \(P_{\mathcal{C}}\) can be prohibitive, see Section 4.1. Finally, many works use a perturbation-based approach to define a continuously differentiable proxy for the solution to the _unregularized_ optimization problem (5), which we rewrite here as \[g(w)=\min_{x\in\operatorname{Conv}(\mathcal{X})}w^{\top}x, \tag{18}\] omitting the dependence of \(w\) on \(d\) for notational clarity. For example, [12] define a piecewise-affine interpolant to \(g(w)\). The gradients of \(g_{\lambda}(w)\) are strikingly easy to compute, requiring just one additional solve of (18) with perturbed cost \(w^{\prime}\). In [2] a stochastic perturbation is considered: \[g_{\varepsilon}(w)=\mathbb{E}_{Z}\left[\min_{x\in\operatorname{Conv}( \mathcal{X})}\left(w+\varepsilon Z\right)^{\top}x\right], \tag{19}\] analogous to Nesterov-Spokoiny smoothing [13] in zeroth-order optimization. By Danskin's theorem [1] the gradients of \(g_{\varepsilon}(w)\) are also easy to compute: \[\nabla_{w}g_{\varepsilon}(w)=\mathbb{E}_{Z}\left[\operatorname*{ argmin}_{x\in\operatorname{Conv}(\mathcal{X})}\left(w+\varepsilon Z\right)^{ \top}x\right]\approx\frac{1}{m}\sum_{i=1}^{m}\operatorname*{argmin}_{x\in \operatorname{Conv}(\mathcal{X})}\left(w+\varepsilon Z_{i}\right)^{\top}x. \tag{20}\] We implement this approach as PertOpt-net in Section 5. We highlight that the advantage of such approaches is they easily wrap around existing combinatorial solvers (e.g. Dijkstra for the shortest path problem), as only repeated solves of (18) are required for computing gradients. The disadvantage is that such solvers are usually run on CPU. Thus, data needs to be shuttled between CPU and GPU when training. ## 4 Dys-Net We now describe our proposed architecture, DYS-Net. We use this term to refer to the neural architecture (see Section 4.1) _and_ the custom backprop procedure (see Section 4.2). DYS-Net models may be trained using either the regret loss or the argmin loss. ### The Forward Pass We wish to compute \(\hat{x}_{\Theta}\)_and_\(d\hat{x}_{\Theta}/d\Theta\) where \[\hat{x}_{\Theta}=\operatorname*{argmin}_{x\in\mathcal{C}\subset\mathbb{R}^{n}} f_{\Theta}(x;d). \tag{21}\] As we are interested in the case where \(n\) is very large, we are led to consider first-order methods (_e.g._ gradient descent) over second-order methods (_e.g._ Newton's method). 
For example, one could consider Projected Gradient Descent (PGD): \[\begin{split} x^{k+1}&=P_{\mathcal{C}}\left(x^{k}- \alpha\frac{\partial f_{\Theta}}{\partial x}(x^{k};d)\right)\quad\text{ for }k=0,\dots,K-1\\ \hat{x}_{\Theta}(d)&\approx x^{K}\end{split} \tag{22}\] This works for simple \(\mathcal{C}\) for which there exists an explicit form of \(P_{\mathcal{C}}\), _e.g._ when \(\mathcal{C}\) is the probability simplex [1, 13, 14]. However, for general polytopes \(\mathcal{C}\) no such form exists, and so one must resort to a second iterative procedure, _run at every \(k\)_, to compute \(P_{\mathcal{C}}(x^{k})\) as the solution to \[P_{\mathcal{C}}(x^{k})=\operatorname*{argmin}_{y\in\mathcal{C}}\|y-x^{k}\|^{2}. \tag{23}\] Our core idea is to solve (21) using Davis-Yin splitting (DYS) [4] to avoid computation of \(P_{\mathcal{C}}\) in the forward pass. To this end, we recall the standardized representation of polytopes (15) and write: \[\mathcal{C}=\{x:\ Ax=b\ \text{and}\ x\geq 0\}=\underbrace{\{x:\ Ax=b\}}_{ \triangleq\mathcal{C}_{1}}\cap\underbrace{\{x:\ x\geq 0\}}_{\triangleq\mathcal{C}_{2}} =\mathcal{C}_{1}\cap\mathcal{C}_{2}, \tag{24}\] where we note \(P_{\mathcal{C}}\) is hard to compute but \(P_{\mathcal{C}_{1}}\) and \(P_{\mathcal{C}_{2}}\) can be computed cheaply in many settings.11 The following theorem formulates (21) as a fixed point problem involving only \(P_{\mathcal{C}_{1}}\) and \(P_{\mathcal{C}_{2}}\), not \(P_{\mathcal{C}}\). Footnote 11: Further splitting of \(\mathcal{C}_{1}\) can yield simpler projections if a singular value decomposition of \(A\) is unavailable. **Theorem 1**.: _Suppose \(\mathcal{C}=\mathcal{C}_{1}\cap\mathcal{C}_{2}\) for convex \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\). Suppose both \(\mathcal{C}_{i}\) are polyhedral or have relative interiors with a point in common and \(f_{\Theta}(x;d)\) be strongly convex and \(L\)-smooth in \(x\). For any \(\alpha>0\) define:_ \[T_{\Theta}(x)\triangleq x\!-\!P_{\mathcal{C}^{1}}(x)+P_{\mathcal{C}^{2}}\left( 2P_{\mathcal{C}^{1}}(x)\!-\!x\!-\!\alpha\nabla f_{\Theta}(P_{\mathcal{C}^{1}} (x))\right), \tag{25}\] _Then \(\hat{x}_{\Theta}\) solves (21) if and only if_ \[\hat{x}_{\Theta}=P_{\mathcal{C}^{1}}(\hat{z}_{\Theta})\ \text{where}\ \hat{z}_{\Theta}=T_{\Theta}(\hat{z}_{\Theta}). \tag{26}\] Proof.: This follows from [11, Theorem 3.2] by first noting \(\hat{x}_{\Theta}\) solves (21) if and only if it is the solution to the variational inequality \[\nabla_{x}f_{\Theta}(\hat{x}_{\Theta};d)^{\top}\left(x-\hat{x}_{\Theta}\right) \geq 0\quad\text{ for all }x\in\mathcal{C}. \tag{27}\] As the operator \(\nabla_{x}f_{\Theta}(\cdot;d)\) is \(L\)-Lipschitz continuous, it is \(1/L\)-cocoercive by the Baillon-Haddad theorem [1, 1]. The next result shows that the simple fixed point iteration method, applied with \(T_{\Theta}\), will converge. **Corollary 1**.: _Suppose \(f_{\Theta}\) is \(L\)-smooth. Take \(\alpha\in(0,2/L)\), and define the sequence_ \[z^{k+1}=T_{\Theta}(z^{k}). \tag{28}\] _Then \(x^{k}:=P_{\mathcal{C}^{1}}(z^{k})\to\hat{x}_{\Theta}\) with rate \(\mathcal{O}(1/k)\)._ Proof.: See [11, Sec 2.2.1] The upshot is that we can compute \(\hat{x}_{\Theta}\)_using only \(P_{\mathcal{C}_{1}}\) and \(P_{\mathcal{C}_{2}}\)_. Finally we note that if \(f_{\Theta}(\cdot;d)\) is strongly convex with respect to \(x\) then \(x^{\star}_{\Theta}(d)\) is unique for fixed \(d\) and \(\Theta\), and the mapping \(\Theta\mapsto x^{\star}_{\Theta}(d)\) is differentiable [1, 1, 12]. 
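A minimal sketch of the resulting forward pass, written for the regularized objective \(f_{\Theta}(x;d)=w_{\Theta}(d)^{\top}x+\gamma\|x\|_{2}^{2}\) from (13), is given below. This is our own illustration of the iteration (25)-(28), not the released DYS-net code; the step size, regularization weight, and iteration count are placeholder values (Corollary 1 requires \(\alpha\in(0,2/L)\), with \(L=2\gamma\) here), and `A_pinv` is assumed to be a precomputed pseudo-inverse of \(A\).

```python
import torch

def dys_solve(w, A, A_pinv, b, gamma=0.1, alpha=0.05, num_iters=1000):
    """Davis-Yin iteration (25)-(28) for
        min_x  w^T x + gamma * ||x||^2   s.t.  A x = b,  x >= 0."""
    proj_C1 = lambda z: z - A_pinv @ (A @ z - b)   # projection onto {x : Ax = b}
    proj_C2 = lambda z: torch.clamp(z, min=0.0)    # projection onto {x : x >= 0}
    grad_f = lambda x: w + 2.0 * gamma * x         # gradient of the regularized objective

    z = torch.zeros_like(w)
    for _ in range(num_iters):
        x = proj_C1(z)
        z = z - x + proj_C2(2.0 * x - z - alpha * grad_f(x))   # z <- T_Theta(z), cf. (25)
    return proj_C1(z)                              # x_hat = P_{C1}(z_hat), cf. (26)
```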
### Efficient Backpropagation Differentiating both sides of (26), we obtain \[\frac{\mathrm{d}\hat{z}_{\Theta}}{\mathrm{d}\Theta} =\frac{\partial T_{\Theta}}{\partial\Theta}+\frac{\partial T_{ \Theta}}{\partial z}\frac{\mathrm{d}\hat{z}_{\Theta}}{\mathrm{d}\Theta} \tag{29}\] \[\Rightarrow\frac{\mathrm{d}\hat{z}_{\Theta}}{\mathrm{d}\Theta} =\mathcal{J}_{\Theta}^{-1}\frac{\partial T_{\Theta}}{\partial\Theta} \qquad\text{where}\qquad\mathcal{J}_{\Theta}(z)=I-\frac{\partial T_{\Theta}} {\partial z} \tag{30}\] Thus, one may compute the gradient of the loss using the chain rule as follows: \[\frac{\mathrm{d}\ell}{\mathrm{d}\Theta} =\frac{\mathrm{d}\ell}{\mathrm{d}x}\frac{\mathrm{d}\hat{x}_{ \Theta}}{\mathrm{d}\Theta}=\frac{\mathrm{d}\ell}{\mathrm{d}x}\left(\frac{ \mathrm{d}P_{\mathcal{C}_{1}}}{\mathrm{d}z}\frac{\mathrm{d}\hat{z}_{\Theta}}{ \mathrm{d}\Theta}\right)=\frac{\mathrm{d}\ell}{\mathrm{d}x}\frac{\mathrm{d}P_{ \mathcal{C}_{1}}}{\mathrm{d}z}\mathcal{J}_{\Theta}^{-1}\frac{\partial T_{ \Theta}}{\partial\Theta} \tag{31}\] This approach requires solving a linear system with \(\mathcal{J}_{\Theta}\) which becomes particularly expensive when \(\dim(\hat{x}_{\Theta})\) is large. Instead, we use the recently introduced _Jacobian-Free Backpropagation_ (JFB) in which the Jacobian \(\mathcal{J}_{\Theta}\) is replaced with the identity matrix. This leads to an approximation of the true gradient \(\frac{\mathrm{d}\ell}{\mathrm{d}\Theta}\) using \[p_{\Theta}=\left[\frac{\partial\ell}{\partial x}\frac{\mathrm{d}P_{\mathcal{C} _{1}}}{\mathrm{d}z}\frac{\partial T_{\Theta}}{\partial\Theta}\right]_{\Theta,x =x^{\star}_{\Theta}}. \tag{32}\] In [11] it is shown \(p_{\Theta}\) is a descent direction, _i.e._\(\left<p_{\Theta},\frac{\mathrm{d}\ell}{\mathrm{d}\Theta}\right>>0\). Moreover, JFB has been found to be effective in a wide variety of application domains imaging [11, 12, 13, 14]. Using JFB yields a massive computational saving, as we no longer need to solve a linear system with \(\mathcal{J}_{\Theta}\). ### Implementation The combination of DYS for the forward pass and JFB for backpropagation makes the algorithm simple to implement. First, we note that \(P_{\mathcal{C}_{1}}\) is a projection onto the intersection of hyperplanes which is given by the following lemma [11]: **Lemma 1**.: _Let \(\mathcal{C}_{1}=\{x:\ Ax=b\}\) where \(A\in\mathbb{R}^{m\times n}\) and \(b\in\mathbb{R}^{m}\). Then the projection onto \(\mathcal{C}_{1}\) is given by_ \[P_{\mathcal{C}_{1}}(z)=\operatorname*{argmin}_{x\in\mathcal{C}_{1}}\frac{1}{2 }\|x-z\|^{2}=z-A^{\dagger}(Az-b) \tag{33}\] _where \(A^{\dagger}=U\Sigma^{-1}V^{\top}\) and \(U\Sigma V^{\top}\) is the compact singular value decomposition of \(A\) such that \(U\) and \(V\) have orthonormal columns and \(\Sigma\) is invertible._ Thus, to efficiently apply \(P_{\mathcal{C}_{1}}\), a singular value decomposition of \(A\) is first computed off-line. Also, once a parameterization is chosen for \(f_{\Theta}\), standard autodifferentiation tools, _e.g._ PyTorch [10], can be used to compute its gradient with respect to its inputs. Moreover, backpropagation can also be simply implemented as follows. We first compute \(\hat{x}_{\Theta}\) without tracking its gradients. Next, since JFB is equivalent to backpropagating through the last layer of the fixed point iteration [11], we apply \(T_{\Theta}\) from (25) once more to \(\hat{x}_{\Theta}\) to track the gradients and backpropagate through the last application of \(T_{\Theta}\). 
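The pattern just described (iterate to convergence without tracking gradients, then apply \(T_{\Theta}\) once more with gradient tracking, so that autograd only differentiates through the last step, as in the JFB gradient (32)) can be sketched as follows. This is our own rendering of that recipe, not the authors' implementation; their actual four-line version is the one shown in Fig. 2, and the signature of `T_theta` is an assumption of the sketch.

```python
import torch

def dys_net_forward(w_theta, T_theta, proj_C1, num_iters=1000):
    """Forward pass with JFB-compatible backpropagation: the fixed-point
    iteration runs under no_grad, and only the final application of T_theta
    is tracked by autograd."""
    z = torch.zeros_like(w_theta)
    with torch.no_grad():                  # cheap: no computation graph is built
        for _ in range(num_iters):
            z = T_theta(z, w_theta)
    z = T_theta(z, w_theta)                # one tracked step; yields the JFB gradient (32)
    return proj_C1(z)                      # x_hat, differentiable w.r.t. w_theta
```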
Thus, given \(\hat{x}_{\Theta}\), we backpropagate in four lines of code as shown in Fig. 2. The entire training process can thus be described in Algorithm 1. ## 5 Numerical Experiments We evaluate the performance of DYS-Net on a shortest path prediction problem inspired by [10] (see the discussion of Section1). ### Problem Setup In [10] a shortest path prediction problem is introduced for \(k\)-by-\(k\) grid graphs. The graph is constructed by partitioning a \(8k\)-by-\(8k\) pixel image representing a randomly generated terrain from the computer game Warcraft II into 8-by-8 squares. Each square represents a vertex12, and the time Figure 2: Sample PyTorch code for backpropagation through DYS. taken to traverse this vertex depends on the visual content of the square (e.g. a square containing water takes longer to traverse than a square containing land). The context \(d\in\mathbb{R}^{8k\times 8k}\) is then the visual information encoded by all these squares, _i.e._ an input image. In [PPM\({}^{+}\)19] problem instances are considered for \(k\in\{12,18,24,30\}\). Data GenerationWe extend this experiment to larger graphs while keeping the intrinsic learning complexity of the task fixed. We do so by constructing a series of \(k\times k\) grid graphs (see Fig 4) for \(k\in\{5,10,20,30,50,100\}\) where the weights are an unknown function of a context parameter \(d\in\mathbb{R}^{5}\). We do not allow diagonal edges thus each vertex (excepting those at corners and on sides) has four edges. In all experiments we draw \(d\) uniformly at random from the unit cube \([0,1]^{5}\). We assume the true edge weights are given by \(w(d)=Wd\) where \(W\in\mathbb{R}^{|E|\times 5}\), and \(E\) is the set of edges in the graph. While more complicated relationships between \(w\) and \(d\) can certainly be considered13, we deliberately choose a simple relationship between context and edge weights so that the forward and backward pass through the combinatorial solver is the bottleneck in training. Further details are presented in Appendix A. Footnote 13: and are easily implementable using our code: [https://github.com/mines-opt-ml/SPO_with_DYS](https://github.com/mines-opt-ml/SPO_with_DYS) ModelsWe test three different differential combinatorial solvers, based on cvxpylayers[AAB\({}^{+}\)19a], the perturbed optimization approach of Berthet _et al_[BBT\({}^{+}\)20], as well as the proposed \begin{table} \begin{tabular}{c|c|c} grid size & number of variables & network size \\ \hline 5-by-5 & 40 & 500 \\ 10-by-10 & 180 & 2040 \\ 20-by-20 & 760 & 8420 \\ 30-by-30 & 1740 & 19200 \\ 50-by-50 & 4900 & 53960 \\ 100-by-100 & 19800 & 217860 \\ \end{tabular} \end{table} Table 1: Number of variables (_i.e._ number of edges) per grid size for the shortest path problem described in Section 5. Third column: number of parameters for all three models used: DYS-Net, cvxpyayers and PertOpt-Net Figure 3: Comparison of of DYS-Net, cvxpylayers, and PertOptNet [BBT\({}^{+}\)20] for three different grid sizes: \(10\times 10\) (first column), \(20\times 20\) (second column), and \(30\times 30\) (third column). The first row shows the MSE loss vs. epochs of the testing dataset. The second row shows the training time vs. epochs. DYS-Net. We use the exact same neural network architecture for \(w_{\Theta}(d)\) in all three approaches; a two layer fully connected neural network with leaky ReLU activation functions. Network sizes can be seen in Table 1. 
We refer to the network equipped with the various solvers as CVX-net, PertOpt-net, and DYS-net respectively. CVX-net and DYS-net use the regularized, linear program form (13) of the shortest path problem. We tuned the hyperparameters for each architecture to the best of our ability on the smallest problem (5-by-5 grid graphs) and then used these hyperparameter values for all other graph sizes. We train all three architectures for 100 epochs total on each problem. ### Results In Figure 3, we show the test loss and training time per epoch for all three architectures: DYS-net, CVX-net, and PertOpt-net for 10-by-10, 20-by-20, and 30-by-30 grids. In terms of MSE loss, CVX-net and DYS-net lead to comparable performance. In the second row of Figure 3, we observe the benefits of combining the three-operator splitting with JFB [FHL\({}^{+}\)22]; in particular, DYS-Net trains much faster. Figure 4 shows some randomly selected outputs for the three architectures once fully trained. Interpreting the outputs of DYS-net and CVX-net as (unnormalized) probabilities over the grid, one can use a greedy decoder to determine the most probable path from top-left to bottom-right. For small grids, _e.g._ 5-by-5, this most probable path coincides exactly with the true path for most \(d\) (see Fig. 6). For larger grids, we find there are often slight differences between the predicted and true paths. This is not surprising, as the number of possible paths grows exponentially with \(k\). Thus, the number of "almost shortest paths" grows too, making exact prediction of \(x_{d}^{*}\) an unreasonable task. Hence, we use the regret (see (6)) of the trained model, averaged over the test set, Figure 4: True paths (column 1), paths predicted by DYS-net (column 2), CVX-net (column 3), and PertOpt-net (column 4). Samples are taken from different grid sizes: 10-by-10 (row 1), 20-by-20 (row 2), and 30-by-30 (row 3). as a measure of accuracy. Fig. 5 shows that while CVX-net and PertOpt-net achieve low regret for small grids, DYS-net model achieves a low regret _for all_ grids. In addition to training faster, DYS-net can also be trained for much larger problems, _e.g._, 100-by-100 grids, as shown in Figure 5. We found that CVX-net could not handle grids larger than 30-by-30, _i.e._, problems with more than 1740 variables14 (see Table 1). Importantly, PertOpt-net takes close to a week to train for the 100-by-100 problem, whereas DYS-net takes about a day (see right Figure 4(b)). For details on the training setup, see Appendix A.2. Footnote 14: This is to be expected, as discussed in in [1, 1] ## 6 Conclusions We present a new data-driven method for solving combinatorial optimization problems where parameters are unknown. The core idea is to use an implicit learn-to-optimize approach and learn an optimization algorithm based on three-operator-splitting. We call this model DYS-net. Importantly, DYS-net is trained using Jacobian-Free Backpropagation (JFB), thus avoiding computationally expensive backpropagation through projections on to the feasible set. Our experiments shows DYS-net leads to better accuracy _with substantially lower training times_. Our approach is also easy to implement as a consequence of JFB. While this work focused on linear objective functions, DYS-Net can be applied to more general objective functions with uniformly-Lipschitz gradients [13]. 
Moreover, as the dimension of the problem increases, the task becomes more akin to using deep learning for optimal control problems [12], where the aim is to find an optimal path that minimizes an energy functional. Investigating these connections is a direction for future work.
In many applications, a combinatorial problem must be repeatedly solved with similar but distinct parameters. However, the parameters $w$ are not directly observed; only contextual data $d$ that correlates with $w$ is available. It is tempting to use a neural network to predict $w$ given $d$, but training such a model requires reconciling the discrete nature of combinatorial optimization with the gradient-based frameworks used to train neural networks. This work studies the case where the problem in question is an Integer Linear Program (ILP), proposes applying a three-operator splitting technique (Davis-Yin splitting, DYS) to the quadratically regularized continuous relaxation of the ILP, and proves that the resulting scheme is compatible with the recently introduced Jacobian-free backpropagation (JFB). This combination (DYS on the forward pass, JFB on the backward pass) yields a scheme that scales more effectively to high-dimensional problems than existing approaches.
2309.08829
Exact description of limiting SIR and SEIR dynamics on locally tree-like graphs
We study the Susceptible-Infected-Recovered (SIR) and the Susceptible-Exposed-Infected-Recovered (SEIR) models of epidemics, with possibly time-varying rates, on a class of networks that are locally tree-like, which includes sparse Erd\H{o}s-R\`enyi random graphs, random regular graphs, and other configuration models. We identify tractable systems of ODEs that exactly describe the dynamics of the SIR and SEIR processes in a suitable asymptotic regime in which the population size goes to infinity. Moreover, in the case of constant recovery and infection rates, we characterize the outbreak size as the unique zero of an explicit functional. We use this to show that a (suitably defined) mean-field prediction always overestimates the outbreak size, and that the outbreak sizes for SIR and SEIR processes with the same initial condition and constant infection and recovery rates coincide. In contrast, we show that the outbreak sizes for SIR and SEIR processes with the same time-varying infection and recovery rates can in general be quite different. We also demonstrate via simulations the efficacy of our approximations for populations of moderate size.
Juniper Cocomello, Kavita Ramanan
2023-09-16T01:12:53
http://arxiv.org/abs/2309.08829v1
# Exact description of limiting SIR and SEIR dynamics on locally tree-like graphs ###### Abstract. We study the Susceptible-Infected-Recovered (SIR) and the Susceptible-Exposed-Infected-Recovered (SEIR) models of epidemics, with possibly time-varying rates, on a class of networks that are locally tree-like, which includes sparse Erdos-Renyi random graphs, random regular graphs, and other configuration models. We identify tractable systems of ODEs that exactly describe the dynamics of the SIR and SEIR processes in a suitable asymptotic regime in which the population size goes to infinity. Moreover, in the case of constant recovery and infection rates, we characterize the outbreak size as the unique zero of an explicit functional. We use this to show that a (suitably defined) mean-field prediction always overestimates the outbreak size, and that the outbreak sizes for SIR and SEIR processes with the same initial condition and constant infection and recovery rates coincide. In contrast, we show that the outbreak sizes for SIR and SEIR processes with the same time-varying infection and recovery rates can in general be quite different. We also demonstrate via simulations the efficacy of our approximations for populations of moderate size. Key words and phrases:interacting particle systems; continuous time Markov chains; SIR model; SEIR model, epidemics; outbreak size; sparse graphs; random graphs; local limits; mean-field limits; Erdos-Renyi random graphs; configuration model; random regular graph; Galton-Watson trees 2020 Mathematics Subject Classification: Primary: 60K35; 60F17; Secondary: 60G60; 60J2 K. Ramanan was supported in part by ARO Grant W911NF2010133 and both authors were supported by the Office of Naval Research under the Vannevar Bush Faculty Fellowship N0014-21-1-2887 _Acknowledgements:_ The second author would like to thank the Simons Institute and the organizers of the workshop on "Graph Limits, Nonparametric Models, and Estimation" for inviting her to the workshop to present a preliminary version of these results. In the present work, we study a continuous-time stochastic SIR process, and a related epidemic model, the Susceptible-Exposed-Infected-Recovered (SEIR) process, where an additional state is considered for individuals that have been exposed to the pathogen but have not yet become infectious. The SEIR process has been widely used to model the spreading of diseases including the recent SARS-CoV-2 pandemic, for instance, see [2, 15, 30]. In both cases, we allow for the (infection and recovery) transition rates to be time-dependent, so as to model effects due to seasonal variations, changes in the virulence of a disease, developments in treatment options, and changes in public health policies, which are of significant interest in practice [28, 7, 29]. While the majority of works have considered SIR processes on dense networks, we consider these processes on sparse networks (i.e., where each individual is connected to a bounded number of individuals), which more faithfully describe real-world networks. We provide tractable approximations for the evolution of fractions of individuals in each of the states of the epidemic in terms of a coupled system of ordinary differential equations (ODEs), see (2.4) and (2.11), and for the outbreak size, which is the final fraction of individuals ever infected. 
Moreover, we show that these approximations are asymptotically exact, as the size of the population increases to infinity, when the graph governing the dynamics is locally-tree like. More precisely, we consider a broad class of sparse (random) graph sequences, including sparse Erdos-Renyi random graphs, random regular graphs, and certain configuration models, which are known to converge in a certain local (weak) sense to a random limit that belongs to the class of unimodular Galton-Watson (UGW) trees; see Theorem 2.5 and Theorem 2.10. We refer the reader to Definition 2.3 for the definition of a UGW tree, and to [1, 35] for an extensive account of local convergence of graphs. Our proof technique starts by appealing to a general result in [12] that shows that for a general class of interacting particle systems that includes the SIR and SEIR processes, the sequence of empirical measures (equivalently, fractions of individuals in different states) on any locally converging sequence of graphs converges to the law of the marginal evolution of the root node in the limit UGW tree (see also [24, 32] for related results). The key step is then to provide a tractable characterization of the root marginal dynamics of this infinite-dimensional process. While for general particle systems the marginal dynamics of the root, or even of the root and its neighborhood, could be non-Markovian, a key step in our proof is to show that for the SIR and SEIR models, the dynamics simplifies. In fact, we can deduce from our proof that the evolution of the pair of vertices consisting of the root and an offspring is in fact Markovian (see Remark 4.11). The proof of the latter property relies crucially on certain conditional independence relations that we identify (see Proposition 4.7) and a certain projection lemma (Proposition 4.6) in the spirit of [10, Lemma 9.1]. These properties are combined with symmetry properties of the dynamics (identified in Proposition 4.8) to obtain a very tractable description of the evolution of the marginal law of the root in terms of the abovementioned systems of ODEs. For both the SIR and SEIR models, the associated system of ODEs is then analyzed to characterize the outbreak size in terms of the moment generating function of the offspring distribution of the limiting UGW tree, evaluated at the unique zero of an explicitly given functional; see Theorem 3.1. In the case of constant recovery and infection rates, we obtain a simpler characterization of the outbreak size and use it to show that the (suitably defined) mean-field prediction always overestimates the outbreak size. In this setting, we also show that although the transient dynamics can be different, the outbreak sizes for the SIR and SEIR models coincide when they have the same rates and initial conditions. In particular, this shows that in this case the outbreak size for the SEIR model does not depend on the rate at which an exposed individual becomes infectious. In contrast, we show that when the rates are time-varying, the outbreak sizes of the corresponding SIR and SEIR processes no longer coincide and can be vastly different even when the (time-varying) ratios of the infection rate to the recovery rate coincide. For both transient dynamics and the outbreak size, we compare our results with numerical simulations to demonstrate the efficacy of these approximations for populations of even moderate size. 
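To give a sense of the finite-population simulations referred to above, the following is a naive Gillespie-type simulation of a stochastic SIR epidemic on a sparse Erdos-Renyi graph, written in Python with NetworkX. It assumes the standard constant-rate dynamics in which each infected individual transmits to each susceptible neighbour at rate \(\beta\) and recovers at rate \(\rho\) (the model is defined precisely in Section 2.1 below); the graph size, rates, and initial condition are illustrative only, and the sketch recomputes the list of susceptible-infected edges at every event, so it is not optimized.

```python
import numpy as np
import networkx as nx

def simulate_sir(G, beta, rho, init_infected, t_max=50.0, rng=None):
    """Continuous-time SIR on graph G: each susceptible node becomes infected at
    rate beta per infected neighbour; each infected node recovers at rate rho.
    Returns event times and the (S, I, R) counts after each event."""
    rng = rng or np.random.default_rng(0)
    state = {v: 'S' for v in G}
    for v in init_infected:
        state[v] = 'I'
    t = 0.0
    times = [0.0]
    counts = [tuple(sum(s == c for s in state.values()) for c in 'SIR')]
    while t < t_max:
        si_edges = [(u, v) for u, v in G.edges()
                    if {state[u], state[v]} == {'S', 'I'}]
        infected = [v for v in G if state[v] == 'I']
        total_rate = beta * len(si_edges) + rho * len(infected)
        if total_rate == 0:                       # epidemic has died out
            break
        t += rng.exponential(1.0 / total_rate)
        if rng.random() < beta * len(si_edges) / total_rate:
            u, v = si_edges[rng.integers(len(si_edges))]
            state[u if state[u] == 'S' else v] = 'I'           # infection event
        else:
            state[infected[rng.integers(len(infected))]] = 'R'  # recovery event
        times.append(t)
        counts.append(tuple(sum(s == c for s in state.values()) for c in 'SIR'))
    return times, counts

# A sparse Erdos-Renyi graph with mean degree 4 and 1% of nodes initially infected.
n = 500
G = nx.gnp_random_graph(n, 4 / n, seed=1)
_, counts = simulate_sir(G, beta=1.0, rho=1.0, init_infected=range(n // 100))
print(counts[-1][2] / n)   # final fraction recovered, i.e. the outbreak size
```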
We also show how the ODEs can be used to study the impact of the amplitude and phase of periodically varying rates on the outbreak size. When the infection and recovery rates are constant in time, traditional techniques to analyze the outbreak size of the SIR process exploit a reformulation of the final outbreak size in terms of a bond percolation problem. However, it is not apparent if such a simple correspondence exists when the infection and recovery rates are time-varying, and unlike our approach, percolation-based arguments provide limited insight into the dynamics of the epidemic process. Furthermore, our general approach can also be applied to obtain analogous results for other more general epidemic processes including a class of compartmental models and processes with general recovery distributions; see Remark 2.8 and Remark 2.11. It would also be of interest to investigate to what extent an analogous approach can be used to provide alternatives to mean-field approximations for other classes of models, such as those described in [31]. We defer a complete investigation to future work.

**Discussion of Prior Work.** Understanding epidemic dynamics on networks is an active area of contemporary research. The deterministic SIR model, introduced in [22], is a system of coupled ODEs that describes the evolution over time of the fraction of individuals in each of the states of the epidemic, in a population where everyone can come into contact with everyone else. This is known as the mean-field approximation. The mean-field dynamics are known to emerge as the large \(n\) limit of the SIR process defined on the complete graph on \(n\) vertices, when the infection rate scales like \(\mathcal{O}(1/n)\). The mean-field approximation provides a dramatic reduction of dimensionality, as it captures the global behavior of a size \(n\) population by a coupled system of two ODEs. However, most real-world contact networks are sparse, in the sense that the average number of neighbors of an individual in the network remains bounded even when the population size grows. Because of this, and the application-driven need to understand epidemic dynamics on more realistic networks, the study of SIR dynamics on a range of more realistic sparse network structures is an active area of research. The work of [33] derives equations for the expected number of individuals in each SIR state on a cycle graph, and compares these results with the corresponding quantities associated with the SIR model on the complete graph, as well as the scaled dynamics that result in the mean-field approximation. An SIR model on the \(\kappa\)-regular tree (the infinite tree where every vertex has \(\kappa\) neighbors) with general recovery times and time-dependent rates was studied in [9]. The latter work derives the asymptotic limit, as \(\kappa\) goes to infinity, of the evolution of the fraction of susceptible individuals over time, which recovers the mean-field approximation. Differential equations to approximate the fraction of susceptible, infected and recovered individuals for the continuous-time SIR model on configuration model graphs were derived heuristically in [36, 37] and shown to be asymptotically exact, as the population size goes to infinity, in [4, 21]. In very recent work [16], the authors obtain an explicit representation of the marginal distribution of each node on a finite tree by solving a coupled system of ODEs.
This representation is shown to provide an upper bound on the probability that a node is susceptible on general graphs or with more than one initial infection. However, they show, via simulations, that this upper bound is generally not very tight. Existing mathematically rigorous work on the SEIR process focuses on studying the deterministic dynamics that arise in the mean-field regime, see [26]. To the best of our knowledge, not much is known rigorously about SEIR processes on sparse graphs or corresponding limits. In [39], the authors present an ODE system that they heuristically argue should approximate the fractions of individuals in each state for large population sizes. However, their approximation is not compared with simulations, and it differs from our ODE system, which is asymptotically exact as the population size approaches infinity.

**Organization of the Paper.** The rest of the paper is structured as follows. In Section 1.1 we introduce some common notation used throughout the paper. In Section 2 we define the SIR and SEIR processes and state our characterization of the large-population limit of epidemic dynamics (see Theorem 2.5 and Theorem 2.10). In Section 3 we provide a characterization of the outbreak size in the large-population limit (see Theorem 3.1 and Theorem 3.5). The proofs of our results are provided in Section 4. They rely on a conditional independence property that is proven in Section 5 and some auxiliary results on the SEIR process that are relegated to Appendix A. Additionally, the proofs of well-posedness of the limit ODE systems are given in Appendix B.

### Notation

We briefly overview common notation used throughout the paper. We use \(G=(V,E)\) to denote a graph with vertex set \(V\) and edge set \(E\). When clear from context, we identify a graph with its vertex set, and so for a vertex \(v\) we might write \(v\in G\) instead of the more accurate \(v\in V\). We let \(|G|:=|V|\) denote the number of vertices of \(G\). Given \(A\subset V\), we let \(\partial^{G}A:=\{w\in V\ :\ \{w,v\}\in E,\ v\in A,w\in V\setminus A\}\) be the _boundary_ of \(A\). In the case where \(A=\{v\}\) is a singleton, we write \(\partial^{G}_{v}:=\partial^{G}\{v\}\), and refer to it as the set of _neighbors_ of \(v\). The _degree_ of a vertex is defined as \(d^{G}_{v}:=|\partial^{G}_{v}|\). When unambiguous, we omit the dependence on \(G\) from our notation, and write \(d_{v}\) and \(\partial_{v}\). For \(v,w\in G\), we write \(v\sim w\) to mean \(v\in\partial_{w}\). Given a set \(\mathcal{Y}\), a configuration \(y\in\mathcal{Y}^{V}\) and \(A\subset V\), we write \(y_{A}:=\{y_{v}\ :\ v\in A\}\), and in the special case when \(|A|=2\), \(y_{v,w}:=y_{\{v,w\}}\). We let \(\mathbb{N}_{0}=\{0,1,2,...\}\), and let \(\mathcal{P}(\mathbb{N}_{0})\) be the set of probability measures on \(\mathbb{N}_{0}\). We identify probability measures on \(\mathbb{N}_{0}\) with their probability mass functions. In particular, for \(\zeta\in\mathcal{P}(\mathbb{N}_{0})\) and \(k\in\mathbb{N}_{0}\), we write \(\zeta(k)=\zeta(\{k\})\). For \(k\in\mathbb{R}\), we let \(\delta_{k}\) be the Dirac measure at \(k\). Given a probability space \((\Omega,\mathcal{F},\mathbb{P})\), we denote by \(\mathcal{L}(Y)\) the law of an \(\Omega\)-valued random variable \(Y\).

## 2. Results on Transient Dynamics

In Section 2.1 we precisely define the SIR process and in Section 2.2 state the main result that describes the limiting dynamics on converging sequences of locally tree-like graphs in terms of solutions to systems of ODEs.
In Section 2.3 we define the SEIR process and state the corresponding convergence result.

### SIR Model

Fix a graph \(G=(V,E)\), the (time-varying) _infection rate_ \(\beta:[0,\infty)\to(0,\infty)\), and the (time-varying) _recovery rate_ \(\rho:[0,\infty)\to(0,\infty)\). We write \(\beta_{t}\) (resp. \(\rho_{t}\)) for the value of \(\beta\) (resp. \(\rho\)) at time \(t\in[0,\infty)\). The SIR process on \(G\), denoted by \(X^{G}\), is a continuous-time locally interacting Markov chain with the following dynamics. At any time \(t\), each individual \(v\) has a state \(X^{G}_{v}(t)\) in the space \(\mathcal{X}:=\{S,I,R\}\). The initial states \(X^{G}_{v}(0)\) are i.i.d. with \(\mathbb{P}(X^{G}_{v}(0)=S)=s_{0}\) and \(\mathbb{P}(X^{G}_{v}(0)=I)=i_{0}:=1-s_{0}\) for some \(s_{0}\in(0,1)\). Given \(y\in\emptyset\cup(\cup_{k=1}^{\infty}\mathcal{X}^{k})\), representing the configuration of the neighbors of a vertex, we denote by \(\mathcal{I}(y)\) the number of elements of \(y\) that are equal to \(I\). At time \(t\), each individual \(v\in G\) jumps from \(S\) to \(I\) (i.e., becomes infected) at rate \(\beta_{t}\mathcal{I}(X^{G}_{\partial_{v}}(t-))\), and from \(I\) to \(R\) (i.e., recovers) at rate \(\rho_{t}\). We impose the following mild assumptions on the recovery and infection rate functions.

**Assumption A**.: The functions \(\beta\) and \(\rho\) are continuous and there exist \(c_{1},c_{2}\in(0,\infty)\) such that

\[c_{1}<\liminf_{t\to\infty}\min(\rho_{t},\beta_{t})\leq\limsup_{t\to\infty}\max(\rho_{t},\beta_{t})<c_{2}. \tag{2.1}\]

Throughout the paper, we assume that Assumption A holds.

**Remark 2.1**.: If we equip \(\mathcal{X}\) with the total ordering given by \(S<I<R\), then the SIR process is monotonic in the sense that for every \(v\in G\) and \(s,t\in[0,\infty)\), if \(s\leq t\) then \(X^{G}_{v}(s)\leq X^{G}_{v}(t)\).

Next, we describe the class of graph sequences that we consider, as well as an associated probability measure on \(\mathbb{N}_{0}\) that characterizes the corresponding local limit.

**Assumption B**.: Suppose the sequence of graphs \(\{G_{n}\}_{n\in\mathbb{N}}\) and \(\theta\in\mathcal{P}(\mathbb{N}_{0})\) satisfy one of the following:

1. _(Erdos-Renyi)._ There exists \(c>0\) such that, for every \(n\in\mathbb{N}\), \(G_{n}\) is an Erdos-Renyi random graph \(\operatorname{ER}(n,c/n)\), and \(\theta\) is the Poisson distribution with mean \(c\).
2. _(Configuration Model)._ For each \(n\), let \(\{d_{i,n}\}_{i=1}^{n}\) be a graphical sequence, such that \(\frac{1}{n}\sum_{i=1}^{n}\delta_{d_{i,n}}\) converges weakly to \(\theta\) as \(n\to\infty\), and \(\theta\) has finite third moment. Let \(G_{n}\) be a graph uniformly chosen among graphs on \(n\) vertices with degree sequence \(\{d_{i,n}\}_{i=1}^{n}\). We write \(G_{n}=\operatorname{CM}_{n}(\theta)\).

**Remark 2.2**.: The only place where we use the assumption that \(\theta\) has a finite third moment is in Proposition 2.4 below (and the corresponding result for the SEIR process, Proposition 2.9). Every result in this paper holds by replacing the assumption that \(\theta\) has finite third moment in Assumption B(2) with the assumption that \(\theta\) has finite second moment and that the system (2.4)-(2.5) (and the corresponding system for the SEIR process (2.11)-(2.12)) has a unique solution on \([0,\infty)\).

We refer the reader to [18, Chapter 5 and Chapter 7] for an extensive account of random graphs, including precise definitions and well-known properties of the graphs in Assumption B.
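For concreteness, the following is a minimal sketch (not code from the paper) showing how the two graph families in Assumption B can be instantiated with the `networkx` library, together with a single continuous-time (Gillespie-type) realization of the SIR dynamics of Section 2.1 with constant rates; all numerical values (\(n=250\), \(c=3\), \(\beta\equiv 0.5\), \(\rho\equiv 1\), the horizon \(T\) and the initial infection probability) are illustrative choices only.

```python
# Illustrative sketch only: the graph families of Assumption B and one SIR run
# (Section 2.1) with constant rates; parameter values are arbitrary choices.
import random
import networkx as nx

n, c = 250, 3.0
G_er = nx.erdos_renyi_graph(n, c / n)   # sparse Erdos-Renyi graph ER(n, c/n)
G_rr = nx.random_regular_graph(3, n)    # uniform 3-regular graph, i.e. CM_n(delta_3)

def sir_run(G, beta=0.5, rho=1.0, i0=0.05, T=40.0, seed=0):
    rng = random.Random(seed)
    # i.i.d. initial states: infected with probability i0, susceptible otherwise
    state = {v: ("I" if rng.random() < i0 else "S") for v in G}
    t = 0.0
    while t < T:
        # current jump rates: a susceptible vertex becomes infected at rate
        # beta * (#infected neighbours); an infected vertex recovers at rate rho
        rates = {}
        for v in G:
            if state[v] == "S":
                k = sum(1 for w in G[v] if state[w] == "I")
                if k > 0:
                    rates[v] = beta * k
            elif state[v] == "I":
                rates[v] = rho
        total = sum(rates.values())
        if total == 0.0:             # no infected vertices left: the epidemic is over
            break
        t += rng.expovariate(total)  # exponential waiting time until the next jump
        u, r = None, rng.random() * total
        for v, q in rates.items():   # pick the jumping vertex proportionally to its rate
            r -= q
            if r <= 0.0:
                u = v
                break
        if u is None:
            u = v
        state[u] = "I" if state[u] == "S" else "R"
    return state

final = sir_run(G_er)
print("fraction ever infected:", sum(s != "S" for s in final.values()) / n)
```

The quantity printed at the end, the fraction of vertices that are no longer susceptible once the epidemic has died out, is the empirical analogue of the outbreak size studied in Section 3.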
The class of graphs we consider is locally tree-like, in a sense that we now make precise. Given \(\theta\in\mathcal{P}(\mathbb{N}_{0})\) with finite first moment, we define its _size-biased distribution_ \(\hat{\theta}\in\mathcal{P}(\mathbb{N}_{0})\) by

\[\hat{\theta}(k)=\frac{(k+1)\theta(k+1)}{\sum_{j=0}^{\infty}j\theta(j)},\qquad \text{for }k\in\mathbb{N}_{0}. \tag{2.2}\]

**Definition 2.3**.: The unimodular Galton-Watson tree with offspring distribution \(\theta\), denoted by \(\operatorname{UGW}(\theta)\), is a rooted random tree where the root has a number of children distributed like \(\theta\) and every vertex in subsequent generations has a number of children distributed like \(\hat{\theta}\), independently of the degree of vertices in the same or previous generations.

It is well known that if \(\{G_{n}\}_{n\in\mathbb{N}}\) and \(\theta\) satisfy Assumption B, then \(G_{n}\) converges in a local sense (_local weak convergence in probability_, as defined in [35, Definition 2.11]; see also [24, Definition 2.2] and [12, Section 2.4]) to a \(\operatorname{UGW}(\theta)\) tree. This is established, for instance, in [34, Theorem 2.18 and Theorem 4.1].

### Asymptotic Characterization of SIR dynamics

Our first result is the limit characterization (as the graph size goes to infinity) of the evolution of the fractions of individuals that, at each time, are in each of the states \(\{S,I,R\}\). Given a finite graph \(G\), for \(t\in[0,\infty)\) we define

\[\begin{split} s^{G}(t)&:=\frac{1}{|G|}\sum_{v\in G} \mathbf{1}_{\{X_{v}^{G}(t)=S\}},\\ i^{G}(t)&:=\frac{1}{|G|}\sum_{v\in G}\mathbf{1}_{\{X _{v}^{G}(t)=I\}}.\end{split} \tag{2.3}\]

We start by establishing the existence and uniqueness of the solution to a certain system of ODEs that will be used to describe the limit. As is standard practice, we use the dot notation for derivatives with respect to time, and prime notation for derivatives in space.

**Proposition 2.4**.: _Suppose that \(\theta\in\mathcal{P}(\mathbb{N}_{0})\) has finite third moment and let \(s_{0}\in(0,1)\). Then there exists a unique solution \((f_{S},\ f_{I},\ F_{I})\) to the following system of ODEs:_

\[\begin{cases}\dot{f}_{S}=f_{S}f_{I}\beta\left(1-\frac{\sum_{k=0}^{\infty}k\hat{\theta}(k)e^{-kF_{I}}}{\sum_{j=0}^{\infty}\hat{\theta}(j)e^{-jF_{I}}}\right),\\ \dot{f}_{I}=f_{S}f_{I}\beta\frac{\sum_{k=0}^{\infty}k\hat{\theta}(k)e^{-kF_{I}}}{\sum_{j=0}^{\infty}\hat{\theta}(j)e^{-jF_{I}}}-f_{I}(\rho+\beta-\beta f_{I}),\\ \dot{F}_{I}=\beta f_{I},\end{cases} \tag{2.4}\]

_with initial conditions_

\[\begin{cases}f_{S}(0)=s_{0},\\ f_{I}(0)=1-s_{0},\\ F_{I}(0)=0.\end{cases} \tag{2.5}\]

The proof of Proposition 2.4 uses standard arguments and is thus relegated to Appendix B. Given \(\zeta\in\mathcal{P}(\mathbb{N}_{0})\) and \(x\in(-\infty,0]\), we define its Laplace transform as follows:

\[M_{\zeta}(x):=\sum_{k\in\mathbb{N}_{0}}\zeta(k)e^{kx}. \tag{2.6}\]

Given \(f_{S},\ f_{I},\ F_{I}\) as in Proposition 2.4, for \(t\in[0,\infty)\) we define

\[\begin{split}& s^{(\infty)}(t):=s_{0}M_{\theta}(-F_{I}(t)),\\ & i^{(\infty)}(t):=e^{-\int_{0}^{t}\rho_{u}du}\left(i_{0}+s_{0}\int_{0}^{t}M_{\theta}^{\prime}(-F_{I}(u))e^{\int_{0}^{u}\rho_{s}ds}\beta_{u}f_{I}(u)du\right).\end{split} \tag{2.7}\]

We now state our main result for the SIR model.

**Theorem 2.5**.: _Suppose that a sequence of random graphs \(\{G_{n}\}_{n\in\mathbb{N}}\) and \(\theta\in\mathcal{P}(\mathbb{N}_{0})\) satisfy Assumption B. Let \(\hat{\theta}\) be the size-biased version of \(\theta\), as defined in (2.2).
Suppose that \(s^{G_{n}}(0)\to s_{0}\in(0,1)\) and let \(s^{(\infty)}\) and \(i^{(\infty)}\) be as defined in (2.7). Then, as \(n\to\infty\) we have_

\[\begin{split}& s^{G_{n}}(t)\xrightarrow{p}s^{(\infty)}(t),\\ & i^{G_{n}}(t)\xrightarrow{p}i^{(\infty)}(t),\end{split} \tag{2.8}\]

_uniformly for \(t\in[0,\infty)\)._

The proof of Theorem 2.5 is given in Section 4.2. It relies on a hydrodynamic limit result established in [10, Corollary 4.7], which shows that the fraction of individuals in any state \(a\in\mathcal{X}\) in the SIR process on \(G_{n}\) converges to \(\mathbb{P}(X_{\varnothing}^{\mathcal{T}}(t)=a)\), where \(X^{\mathcal{T}}\) is the SIR process on \(\mathcal{T}=\mathrm{UGW}(\theta)\), and \(\varnothing\) is the root vertex. We then show that the trajectories of \(X^{\mathcal{T}}\) satisfy a certain conditional independence property (Proposition 4.7). We combine this property with symmetry properties of the dynamics (see Proposition 4.8) to characterize \(\mathcal{L}(X_{\varnothing}^{\mathcal{T}})\) in terms of a system of ODEs. In particular, for \(a=S\) or \(a=I\), the probability \(\mathbb{P}(X_{\varnothing}^{\mathcal{T}}(t)=a)\) is equal to \(s^{(\infty)}(t)\) or \(i^{(\infty)}(t)\), respectively, as defined in (2.7). As mentioned in the Introduction, Proposition 4.7 can be seen as a substantial refinement in the case of the SIR process \(X^{\mathcal{T}}\) of a certain general Markov random field property that holds for more general interacting particle systems; see [13, Theorem 3.7].

In Figure 1, we compare simulations of the evolution of the SIR process on certain Erdos-Renyi random graphs and random \(3\)-regular graphs of size \(n=250\) with the theoretical prediction from Theorem 2.5. The plots illustrate that even in systems of moderate size, the theoretical prediction closely tracks the simulations.

Figure 1. Time evolution of the fraction of susceptible (\(S\)) and infected (\(I\)) individuals on a finite graph (\(n=250\)), with constant \(\beta\), obtained through simulations (Sim), along with the asymptotically exact values (ODE) given by Theorem 2.5. The simulations are obtained through \(500\) iterations, resampling the random graphs at each iteration, and are plotted with \(95\%\) confidence intervals.

**Remark 2.6**.: For simplicity, we restrict our attention to i.i.d. initial conditions, though the techniques in our proofs extend to more general initial conditions, as long as they satisfy certain symmetry properties between the laws of the initial states and that of the random graphs, and satisfy the Markov random field property mentioned above. In the case where the limit tree \(\mathcal{T}\) is the \(\kappa\)-regular tree \(T_{\kappa}\), the symmetry conditions correspond to the law of \(X^{T_{\kappa}}(0)\) being isomorphism invariant, see [25, Remark 3.16].

**Remark 2.7**.: We also mention that, while Theorem 2.5 is stated for (sparse) ER and CM graphs, our techniques extend to a broader class of graphs, namely to any graph sequence \(\{G_{n}\}_{n\in\mathbb{N}}\) that converges _locally weakly in probability_ to a UGW tree. All results in this paper hold if we replace Assumption B with the assumption that for some \(\theta\in\mathcal{P}(\mathbb{N}_{0})\) with finite third
moment and a \(\operatorname{UGW}(\theta)\) tree \(\mathcal{T}\), \[\frac{1}{n}\sum_{v\in G_{v}}\mathbf{1}_{\{B_{r}^{G_{n}}(v)\simeq H\}}\xrightarrow{ p}\mathbb{P}(B_{r}^{\mathcal{T}}(\varnothing)\simeq H))\] for every \(r\in\mathbb{N}_{0}\) and every rooted graph \(H\), where \(\simeq\) denotes graph isomorphism, and \(B_{r}^{G}(v)\) is a ball of radius \(r\) around \(v\in G\), that is, the subgraph induced by all vertices in \(G\) that are at most \(r\) edges away from \(v\). As mentioned in the Introduction, in the special case when the infection and recovery rates \(\beta\) and \(\rho\) are constant in time and \(G_{n}\) is the configuration model, an ODE approximation similar to (2.4) was proposed in [36, 37] and shown to be asymptotically exact in [4, 21]. However, Theorem 2.5 applies to the more general setting of time-varying rates, which is very relevant for applications, e.g., [3, 6, 19, 27], and more general graph classes (see Remark 2.7). Further, an advantage of our approach is that it allows for several important generalizations, including non-exponential recovery times, as elaborated upon in Remark 2.8 below, the SEIR model, presented in Section 2.3, and further extensions, discussed in Remark 2.11 below. **Remark 2.8**.: A large part of the literature on the SIR process focuses on the case where recovery times are exponential random variables, that is, each individual recovers at some rate \(\rho\) regardless of how long they have been infected, and the methods exploit this Markovian structure. If recovery times are not exponential, the resulting SIR dynamics are not Markov, and this makes their analysis significantly more challenging. In contrast, the local convergence tools that we used in the proof of Theorem 2.5 can still be used in this setting. Specifically, the hydrodynamic result in [12] is still valid and shows that the fraction of individuals in each of the SIR states on a finite locally-tree like graph can be approximated by the root particle dynamics of the non-Markovian SIR process on the infinite tree. Further, a version of the conditional independence property of Proposition 4.7 can be established, the marginal root dynamics can be characterized as a piecewise deterministic Markov process, and its law characterized as the solution to a certain PDE. A complete analysis is deferred to future work. ### SEIR Model In this section, we extend our limit results to the Susceptible-Exposed-Infected-Recovered (SEIR) process. The SEIR process is a model of epidemics in which each individual can be in one of four possible states: in addition to the three states \(S,\ I,\ R\), of the SIR model, an individual can also be in the exposed state \(E\), when it has contracted the disease but is not yet able to infect its neighbors. We define \(\bar{\mathcal{X}}:=\{S,E,I,R\}\). As in the case of the SIR model, the SEIR model on a (possibly random) graph \(G\) can be modelled as a locally interacting Markov chain. We denote this process by \(\bar{X}^{G}\). The SEIR process is governed by the graph \(G\) and three functions \(\beta,\rho,\lambda:[0,\infty)\to(0,\infty)\), with \(\beta\) and \(\rho\), as for the SIR model, representing the infection and recovery rates, and \(\lambda\) now representing the time-dependent rate at which an individual transitions from having been exposed to being infectious. We assume that the initial states are i.i.d. 
with \(\mathbb{P}(\bar{X}^{G}_{v}(0)=S)=s_{0}\), \(\mathbb{P}(\bar{X}^{G}_{v}(0)=E)=e_{0}\) and \(\mathbb{P}(\bar{X}^{G}_{v}(0)=I)=i_{0}\) for some \(s_{0}\in(0,1)\) and \(e_{0},i_{0}\in[0,1]\) such that \(s_{0}+e_{0}+i_{0}=1\). At time \(t\), an individual \(v\) jumps from \(S\) to \(E\) at the rate \(\beta_{t}\mathcal{I}(\bar{X}^{G}_{\partial_{v}}(t-))=\beta_{t}\sum_{w\in\partial_{v}}\mathbf{1}_{\{\bar{X}^{G}_{w}(t-)=I\}}\), from \(E\) to \(I\) at the rate \(\lambda_{t}\), and from \(I\) to \(R\) at the rate \(\rho_{t}\). No other jumps are possible. Equipping \(\bar{\mathcal{X}}\) with the ordering \(S<E<I<R\), the SEIR process is non-decreasing in the same sense as Remark 2.1. Throughout the rest of the paper, we make the following assumption.

**Assumption C**.: The functions \(\beta\), \(\lambda\) and \(\rho\) are continuous and there exist constants \(c_{1},c_{2}\in(0,\infty)\) such that

\[c_{1}<\liminf_{t\to\infty}\min(\beta_{t},\ \rho_{t},\ \lambda_{t})\leq\limsup_{t\to\infty}\max(\beta_{t},\ \rho_{t},\ \lambda_{t})<c_{2}. \tag{2.9}\]

#### 2.3.1. Asymptotic Characterization of SEIR dynamics

Given a finite graph \(G\), we let

\[\bar{s}^{G}(t) :=\frac{1}{|G|}\sum_{v\in G}\mathbf{1}_{\{\bar{X}^{G}_{v}(t)=S\}}, \tag{2.10}\]
\[\bar{e}^{G}(t) :=\frac{1}{|G|}\sum_{v\in G}\mathbf{1}_{\{\bar{X}^{G}_{v}(t)=E\}},\]
\[\bar{i}^{G}(t) :=\frac{1}{|G|}\sum_{v\in G}\mathbf{1}_{\{\bar{X}^{G}_{v}(t)=I\}}.\]

We start by establishing the existence and uniqueness of the solution to a certain system of ordinary differential equations that we use in our main result.

**Proposition 2.9**.: _Suppose that \(\theta\in\mathcal{P}(\mathbb{N}_{0})\) has a finite third moment and let \(s_{0}\in(0,1)\) and \(e_{0},i_{0}\in[0,1]\) satisfy \(s_{0}+e_{0}+i_{0}=1\). Then there exists a unique solution \((g_{S},\ g_{E},\ g_{I},\ G_{I})\) to the following system of ODEs:_

\[\begin{cases}\dot{g}_{S}=\beta g_{S}g_{I}\left(1-\frac{\sum_{k=0}^{\infty}k\hat{\theta}(k)e^{-kG_{I}}}{\sum_{j=0}^{\infty}\hat{\theta}(j)e^{-jG_{I}}}\right),\\ \dot{g}_{E}=\beta g_{S}g_{I}\frac{\sum_{k=0}^{\infty}k\hat{\theta}(k)e^{-kG_{I}}}{\sum_{j=0}^{\infty}\hat{\theta}(j)e^{-jG_{I}}}-g_{E}(\lambda-\beta g_{I}),\\ \dot{g}_{I}=\lambda g_{E}-g_{I}(\rho+\beta-\beta g_{I}),\\ \dot{G}_{I}=\beta g_{I},\end{cases} \tag{2.11}\]

_with initial conditions_

\[\begin{cases}G_{I}(0)=0,\\ g_{m}(0)=s_{0}\mathbf{1}_{\{m=S\}}+e_{0}\mathbf{1}_{\{m=E\}}+i_{0}\mathbf{1}_{\{m=I\}},\ \ m\in\bar{\mathcal{X}}.\end{cases} \tag{2.12}\]

The proof of Proposition 2.9 is similar to that of Proposition 2.4. A brief outline is given at the end of Appendix B. Given \(g_{S},\ g_{E},\ g_{I},\ G_{I}\) as in Proposition 2.9 and \(M_{\theta}\) as in (2.6), define

\[\bar{s}^{(\infty)}(t) :=s_{0}M_{\theta}(-G_{I}(t)), \tag{2.13}\]
\[\bar{e}^{(\infty)}(t) :=e^{-\int_{0}^{t}\lambda_{u}du}\left(e_{0}+s_{0}\int_{0}^{t}M_{\theta}^{\prime}(-G_{I}(u))G_{I}^{\prime}(u)e^{\int_{0}^{u}\lambda_{\tau}d\tau}du\right),\]
\[\bar{i}^{(\infty)}(t) :=e^{-\int_{0}^{t}\rho_{u}du}\left(i_{0}+\int_{0}^{t}\lambda_{u}e^{\int_{0}^{u}(\rho_{s}-\lambda_{s})ds}\left(e_{0}+s_{0}\int_{0}^{u}M_{\theta}^{\prime}(-G_{I}(\tau))G_{I}^{\prime}(\tau)e^{\int_{0}^{\tau}\lambda_{s}ds}d\tau\right)du\right).\]

We can now state our characterization of the large \(n\) dynamics of the SEIR process.

**Theorem 2.10**.: _Suppose that the sequence of random graphs \(\{G_{n}\}_{n\in\mathbb{N}}\) and \(\theta\in\mathcal{P}(\mathbb{N}_{0})\) satisfy Assumption B.
Let \(\hat{\theta}\) be the size-biased version of \(\theta\), as defined in (2.2), suppose \(\bar{s}^{G_{n}}(0)\to s_{0}\), \(\bar{e}^{G_{n}}(0)\to e_{0}\), and \(\bar{i}^{G_{n}}(0)\to i_{0}\), with \(s_{0}\in(0,1)\) and \(s_{0}+e_{0}+i_{0}=1\), and let \(\bar{s}^{(\infty)}\), \(\bar{e}^{(\infty)}\) and \(\bar{i}^{(\infty)}\) be as defined in (2.13). Then, as \(n\to\infty\),_

\[\bar{s}^{G_{n}}(t)\overset{p}{\to}\bar{s}^{(\infty)}(t),\qquad\bar{e}^{G_{n}}(t)\overset{p}{\to}\bar{e}^{(\infty)}(t),\qquad\bar{i}^{G_{n}}(t)\overset{p}{\to}\bar{i}^{(\infty)}(t),\]

_uniformly for \(t\in[0,\infty)\)._

The proof of Theorem 2.10 is given in Section 4.2.3, and follows a similar approach as for the SIR model, although the details are more involved. In Figure 2 we compare our asymptotically exact approximation to values of \(\bar{s}^{G_{n}},\ \bar{e}^{G_{n}}\) and \(\bar{i}^{G_{n}}\) for an Erdos-Renyi graph obtained by Monte Carlo simulations (500 iterations, plotted with 95% confidence intervals). Once again, our approximation closely tracks the simulation results, even for relatively small \(n\).

Figure 2. Time evolution of the fraction of susceptible (S), exposed (E) and infected (I) individuals on an \(\operatorname{ER}(n,3/n)\) graph, with \(\beta\equiv\rho\equiv 1\). We compare the asymptotically exact values (ODE) given by Theorem 2.10 with the fractions obtained through Monte Carlo simulations (Sim). Simulations are obtained through 500 iterations, and are shown with 95% confidence intervals.

**Remark 2.11**.: The result in Theorem 2.10 can be further extended to more general compartmental models that are widely used in the epidemiology literature in order to account for different viral strains and treatment options, for example, see [5, 8, 17, 20, 30]. These allow for a susceptible state \(S\) and \(m\in\mathbb{N}\) post-infection states \(\{I_{1},I_{2},\dots,I_{m}\}\). Supposing that each individual's transitions among post-infection states do not depend on the states of its neighbors, under Assumption B and continuity assumptions analogous to Assumption C, the hydrodynamic result in [12] holds. If in addition one assumes that no transitions from post-infection states to state \(S\) are possible, a version of the independence property of Proposition 4.7 can be established, thus leading to a result analogous to Theorem 2.10. We defer a full account of this general setting to future work.

## 3. Results on Outbreak Size

An important quantity of interest in the study of epidemic dynamics is the outbreak size, which is the fraction of individuals ever infected in the interval \([0,\infty)\). By the monotonicity of the SIR and SEIR processes (Remark 2.1), the outbreak size is equal to \(1\) minus the limit, as \(t\to\infty\), of the fraction of susceptible individuals at time \(t\). In Section 3.1 and Section 3.2, we characterize the large-time behavior for the SIR and SEIR processes respectively, as the size of the population approaches infinity. In Section 3.3 we compare our asymptotically exact estimate of the outbreak size with a mean-field approximation for the special case of the SIR process on random regular graphs with constant infection and recovery rates.

### Outbreak Size for SIR Model

Given a sequence of graphs \(\{G_{n}\}_{n\in\mathbb{N}}\) satisfying Assumption B, we let \(s^{G_{n}}(\infty):=\lim_{t\to\infty}s^{G_{n}}(t)\) for \(n\in\mathbb{N}\).
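As a purely numerical illustration (not code from the paper, and with illustrative parameter values), \(s^{G_{n}}(\infty)\) can be approximated for large \(n\) by integrating (2.4)-(2.5) up to a large horizon \(T\) and evaluating \(s^{(\infty)}(T)=s_{0}M_{\theta}(-F_{I}(T))\) from (2.7); the sketch below does this for constant rates and the Poisson(\(c\)) offspring distribution of the sparse Erdos-Renyi case, truncating the series over \(\hat{\theta}\) at \(K\) terms.

```python
# Illustrative sketch only: integrate (2.4)-(2.5) and evaluate s^(infty)(T) via (2.7)
# for constant rates and theta = Poisson(c); K is an ad hoc truncation level.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.stats import poisson

c, beta, rho, s0, T, K = 3.0, 0.5, 1.0, 0.99, 60.0, 200

ks = np.arange(K)
theta = poisson.pmf(ks, c)                                          # theta(k)
theta_hat = (ks + 1) * poisson.pmf(ks + 1, c) / np.sum(ks * theta)  # size-biased law (2.2)

def tilted_mean(F_I):
    # (sum_k k theta_hat(k) e^{-k F_I}) / (sum_j theta_hat(j) e^{-j F_I}), cf. (2.4)
    w = theta_hat * np.exp(-ks * F_I)
    return np.dot(ks, w) / np.sum(w)

def rhs(t, y):
    f_S, f_I, F_I = y
    m = tilted_mean(F_I)
    return [f_S * f_I * beta * (1.0 - m),
            f_S * f_I * beta * m - f_I * (rho + beta - beta * f_I),
            beta * f_I]

sol = solve_ivp(rhs, (0.0, T), [s0, 1.0 - s0, 0.0], rtol=1e-8, atol=1e-10)
F_I_T = sol.y[2, -1]
s_inf = s0 * np.sum(theta * np.exp(-ks * F_I_T))   # s0 * M_theta(-F_I(T)), cf. (2.6)-(2.7)
print("limiting susceptible fraction ~", s_inf, " outbreak size ~", 1.0 - s_inf)
```

For constant rates, the value \(1-s_{0}M_{\theta}(-F_{I}(T))\) obtained in this way can be checked against the fixed-point characterization (3.2) below.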
We compute the limit of this quantity as \(n\to\infty\), by first showing that \(\lim_{n\to\infty}s^{G_{n}}(\infty)=\lim_{t\to\infty}s^{(\infty)}(t)\), where \(s^{(\infty)}\), given in (2.7), is the hydrodynamic limit of the fraction of susceptible individuals, by Theorem 2.5. We recall that \(M_{\nu}\) denotes the moment generating function of \(\nu\in\mathcal{P}(\mathbb{N}_{0})\). **Theorem 3.1**.: _Let \(\{G_{n}\}_{n\in\mathbb{N}}\) and \(\theta\in\mathcal{P}(\mathbb{N}_{0})\) satisfy Assumption B. Let \(\hat{\theta}\) be the size-biased version of \(\theta\), as defined in (2.2). Then, assuming that \(\lim_{n\to\infty}s^{G_{n}}(0)=s_{0}\in(0,1)\),_ \[\lim_{n\to\infty}\lim_{t\to\infty}s^{G_{n}}(t)=\lim_{t\to\infty}s^{(\infty)}(t )=s_{0}M_{\theta}\left(-\int_{0}^{\infty}\beta_{u}f_{I}(u)du\right),\] _where \(f_{I}\) is defined by (2.4)-(2.5). Moreover, \(\mathcal{F}:=\int_{0}^{\infty}\beta_{u}f_{I}(u)du\) is finite and satisfies_ \[\mathcal{F}+\log(M_{\hat{\theta}}(-\mathcal{F}))-\log\left(1-e^{\mathcal{F}} \int_{0}^{\infty}e^{-\int_{0}^{u}\beta_{r}f_{I}(\tau)d\tau}\rho_{u}f_{I}(u)du \right)+\log(s_{0})=0, \tag{3.1}\] _Furthermore, if there exists \(r\in(0,\infty)\) such that \(\rho_{t}/\beta_{t}=r\) for all \(t\in[0,\infty)\), then equation (3.1) is equivalent to_ \[\mathcal{F}+\log(M_{\hat{\theta}}(-\mathcal{F}))-\log(1+r(1-e^{\mathcal{F}} ))+\log(s_{0})=0, \tag{3.2}\] _which has a unique strictly positive solution \(\mathcal{F}\)._ The proof of Theorem 3.1 is given in Section 3. **Remark 3.2**.: When the ratio \(\rho_{t}/\beta_{t}\) is constant in time, the final outbreak size depends on \(\rho\) and \(\beta\) only through their ratio. This is well known when \(\beta\) and \(\rho\) are both constant, and in that case it is common in the SIR literature to fix \(\rho\equiv 1\) with no loss of generality, by re-scaling time. Theorem 3.1 shows that, when the ratio \(\rho/\beta\) is not constant, the ratio no longer determines the outbreak size, and instead the time evolution of both \(\beta\) and \(\rho\) influence the outbreak size. Figure 3 illustrates this phenomenon. It plots \(s^{(\infty)}(t)\), defined in (2.7), which by Theorem 2.5 is the large-\(n\) asymptotic fraction of susceptible individuals, for two SIR processes with the same ratio \(r_{t}=\rho_{t}/\beta_{t}\) for all \(t\geq 0\), though different \(\beta\) and \(\rho\), which lead to dramatically different outbreak sizes. Next, for each time-dependent \(\beta\) and \(\rho\) we identify constant infection and recovery rates that lead to the same outbreak size. These effective rates are unique only up to multiplication by the same constant, and so we identify them by their ratio. For given \(\beta,\ \rho\), we define \(\Psi_{\beta,\rho}:[0,\infty)\to[0,\infty]\) as \[\Psi_{\beta,\rho}(z)=z+\log(M_{\hat{\theta}}(-z))-\log\left(1-e^{z}\int_{0}^{ \infty}e^{-\int_{0}^{u}\beta_{\tau}f_{I}(\tau)d\tau}\rho_{u}f_{I}(u)du\right)+ \log(s_{0}), \tag{3.3}\] where we set \(\log(x)=-\infty\) for any \(x\leq 0\), and where \(f_{I}\) is defined by (2.4)-(2.5) for some fixed \(s_{0}\in(0,1)\) with the given \(\beta\) and \(\rho\). For \(r\in(0,\infty)\), we also define \(\Psi_{r}:[0,\infty)\to[0,\infty]\) as \[\Psi_{r}(z)=z+\log(M_{\hat{\theta}}(-z))-\log\left(1+r(1-e^{z})\right)+\log(s _{0}). 
\tag{3.4}\] The following result shows that for every pair of rate functions \(\beta\) and \(\rho\) satisfying Assumption A, there exists a constant \(\hat{r}:=\hat{r}(\beta,\rho)\) so that the outbreak size of an SIR process with rates \(\beta\) and \(\rho\), and that of a SIR process with constant infection rate \(1\) and constant recovery \(\hat{r}\) are the same (as \(n\to\infty\)). In particular, we observe that this is not achieved by naively replacing \(\rho\) and \(\beta\) with their respective average (over time) values, nor by taking \(\hat{r}\) to be the (time) average of \(\rho_{t}/\beta_{t}\). **Lemma 3.3**.: _Let \(\theta\in\mathcal{P}(\mathbb{N}_{0})\) have a finite third moment, and suppose that \(\rho,\ \beta\) satisfy Assumption A. Let \(\mathcal{F}_{\beta,\rho}:=\int_{0}^{\infty}\beta_{t}f_{I}(t)\), where \(f_{I}\) is defined by (2.4)-(2.5). Then there exists a unique \(\hat{r}\in(0,\infty)\) such that \(\Psi_{\hat{r}}(\mathcal{F}_{\beta,\rho})=0\), namely_ \[\hat{r}=\frac{\int_{0}^{\infty}e^{-\int_{0}^{t}\beta_{u}f_{I}(u)du}\rho_{t}f_ {I}(t)dt}{1-e^{-\int_{0}^{\infty}\beta_{u}f_{I}(u)du}}. \tag{3.5}\] Proof.: We start by showing that \(\hat{r}\in(0,\infty)\). We know that \(\mathcal{F}_{\beta,\rho}=\int_{0}^{\infty}\beta_{t}f_{I}(t)dt<\infty\) by Theorem 3.1. By Assumption A and (2.4), \(t\to\beta_{t}f_{I}(t)\) is continuous, non-negative, and bounded away from zero near \(t=0\), and so \(\mathcal{F}_{\beta,\rho}>0\). Letting \(c_{1},c_{2}\in(0,\infty)\) be constants such that (2.1) holds, we have \[\int_{0}^{\infty}e^{-\int_{0}^{t}\beta_{u}f_{I}(u)du}\rho_{t}f_{I}(t)dt=\int_{ 0}^{\infty}e^{-\int_{0}^{t}\beta_{u}f_{I}(u)du}\frac{\rho_{t}}{\beta_{t}}\beta _{t}f_{I}(t)dt\leq\frac{c_{2}}{c_{1}}(1-e^{-\mathcal{F}_{\beta,\rho}})<\infty.\] Similarly, note that \(\int_{0}^{\infty}e^{-\int_{0}^{t}\beta_{u}f_{I}(u)du}\rho_{t}f_{I}(t)dt>c_{1}c _{2}^{-1}(1-e^{-\mathcal{F}_{\beta,\rho}})>0\). Setting \(\hat{r}\) as in (3.5), by (3.4), we have \[\Psi_{\hat{r}}(z)=z+\log(M_{\hat{\theta}}(-z))-\log\left(1+\frac{\int_{0}^{ \infty}e^{-\int_{0}^{t}\beta_{u}f_{I}(u)du}\rho_{t}f_{I}(t)dt}{1-e^{-\mathcal{ F}_{\beta,\rho}}}(1-e^{z})\right)+\log(s_{0}).\] Figure 3. Large-\(n\) limit of the fraction of susceptible individuals (i.e., \(s^{(\infty)}(t)\) from (2.7)) for the SIR process on a uniform 3-regular graph, as a function of time. For either curves, the ratio \(\rho_{t}/\beta_{t}\) is equal to \(1.5+sin(\pi t)\). In one case, \(\rho_{t}\) increases linearly from \(0.5\) to \(1.5\) for \(t\in[0,10]\) and it then stays constant. In the other, \(\rho_{t}\) decreases linearly from \(1.5\) to \(0.5\) for \(t\in[0,10]\), and it then stays constant. Evaluating this at \(z=\mathcal{F}_{\beta,\rho}\) using (3.3), and observing that \((1-e^{z})/(1-e^{-z})=-e^{z}\), we have \[\Psi_{\hat{r}}(\mathcal{F}_{\beta,\rho}) =\mathcal{F}_{\beta,\rho}+\log(M_{\hat{\theta}}(-\mathcal{F}_{ \beta,\rho}))-\log\left(1-e^{\mathcal{F}_{\beta,\rho}}\int_{0}^{\infty}e^{-\int _{0}^{t}\beta_{u}f_{I}(u)du}\rho_{t}f_{I}(t)dt\right)+\log(s_{0})\] \[=\Psi_{\beta,\rho}(\mathcal{F}_{\beta,\rho}),\] which is zero by Theorem 3.1. This shows the existence of \(\hat{r}\in(0,\infty)\) such that \(\Psi_{\hat{r}}(\mathcal{F}_{\beta,\rho})=0\). For uniqueness, observe that for each \(z\in(0,\infty)\) the map \(r\to\Psi_{r}(z)\) is non-decreasing, and strictly increasing on \(\{r\ :\ \Psi_{r}(z)<\infty\}\). Let \(\mathcal{F}_{r}\) be the unique zero of \(\Psi_{r}\). 
It follows that \(\mathcal{F}_{r}\) is strictly decreasing in \(r\) and therefore there is a one-to-one correspondence between \(r\) and \(\mathcal{F}_{r}\). We conclude this section with a brief discussion of periodic parameters. For simplicity, we fix \(\rho\equiv 1\) and we consider periodic infection rates that could model, for instance, seasonality effects of the infectivity of a pathogen. For a given amplitude \(A\in[0,1)\), period \(\omega>0\) and \(\delta\in[0,1]\) we set \(\beta_{t}=1+A\sin((t+\delta\omega)2\pi/\omega)\). Here, \(\delta\) is a parameter controlling the phase of the periodic rate at time zero. Note that if the period length is large enough compared to the average infection rate and recovery rate, the outbreak dies out before the full length of the period, and so, while the average of \(\beta_{t}\) over the period is \(1\), the average infection rate during the time the epidemic is "active" (i.e, there are individuals in state \(I\)) will be close to \(\beta_{0}\). Because of this, we expect \(\delta\) to have a greater impact on the outbreak size when \(\omega\) is large. This is borne out by Figure 4, which plots the outbreak size as a function of \(A\) for various \(\delta\) and \(\omega\). We see that in every case other than large \(\omega\) and small \(\delta\), the outbreak size is decreasing in \(A\). This suggests the following conjecture, which we leave for future investigation. **Conjecture 3.4**.: Let \(A\in[0,1),\ \omega>0\) and \(\delta\in[0,1]\). Define \(\beta_{t}=1+A\sin((t+\delta\omega)2\pi/\omega)\). There exists \(\bar{\omega}>0\) such that, for all \(\omega<\bar{\omega}\), the asymptotic outbreak size \(1-s^{(\infty)}(\infty)\) is decreasing in \(A\). ### Outbreak Size for the SEIR Model We now turn to the characterization of the outbreak size of an SEIR process. Recall the definition of \(M_{\zeta}\) for \(\zeta\in\mathcal{P}(\mathbb{N}_{0})\) given in (2.6). **Theorem 3.5**.: _Let \(\{G_{n}\}_{n\in\mathbb{N}}\) and \(\theta\) satisfy Assumption B. Then_ \[\lim_{n\to\infty}\lim_{t\to\infty}\bar{s}^{G_{n}}(t)=\lim_{t\to\infty}\bar{s} ^{(\infty)}(t)=s_{0}M_{\theta}\left(-\int_{0}^{\infty}\beta_{u}g_{I}(u)du\right)\] _where \(\int_{0}^{\infty}\beta_{u}g_{I}(u)du=:\mathcal{F}\) satisfies_ \[\mathcal{F}+\log(M_{\hat{\theta}}(-\mathcal{F}))-\log\left(1-e^{\mathcal{F}} \int_{0}^{\infty}e^{-\int_{0}^{u}\beta_{\tau}g_{I}(\tau)d\tau}\rho_{u}g_{I}(u )du\right)+\log(s_{0})=0, \tag{3.6}\] _for \(g_{I}\) as in (2.11). Furthermore, if there exists \(r\in(0,\infty)\) such that \(\rho_{t}/\beta_{t},=r\) for all \(t\in[0,\infty)\), equation (3.6) is equivalent to_ \[\Psi_{r}(\mathcal{F})=0, \tag{3.7}\] _where \(\Psi_{r}\) is defined in (3.4)._ Theorem 3.5 shows that when the ratio \(t\mapsto\beta_{t}/\rho_{t}\) is constant, the final outbreak size does not depend on \(\lambda\) and it coincides with the outbreak size of a SIR process with the same infection rate, recovery rate, and initial condition \(s_{0}\). On the other hand, when the ratio is not constant, the rate \(\lambda\) affects the outbreak size. Figure 5 plots \(\bar{s}^{(\infty)}(t)\), as defined in (2.13), which by Theorem 2.10 is the large \(n\) limit of the of the fraction of susceptible individuals, for several SEIR processes on a random 3-regular graph. For fixed constants \(\beta\), and \(\rho\), but different values of constant \(\lambda\), the time-dynamics can vary significantly, but the final fraction of susceptible individuals does not depend on \(\lambda\). 
In contrast, when \(\rho/\beta\) changes with time, the final fraction of susceptible individuals (and hence, the outbreak size) varies with \(\lambda\). In Figure 6, we set all rates as constant, and we show that the time evolution of the sum of the fractions of infected and exposed individuals in the SEIR process can be markedly different from that of the fraction of infected individuals in an SIR process, despite the fact that the final outbreak sizes coincide. We leave as a future research direction the problem of understanding the impact of \(\lambda\) on the SEIR dynamics for finite \(t\): for example, how does \(\lambda\) impact the maximum number of individuals that have ever been infected in any given time period?

Figure 4. Large \(n\) limit of the final size of a SIR outbreak on a 3-regular graph with periodic infection rate, as a function of the amplitude \(A\), obtained numerically from \(s^{(\infty)}(t)\) as in Theorem 2.5. In all cases \(\rho\equiv 1\).

### Comparison with the Mean-Field approximation for the SIR model

In this section, we restrict our attention to the SIR process on the uniform \(\kappa\)-regular graph, with the ratio \(\rho/\beta\) being constant in time, and compare the asymptotically exact outbreak size with the corresponding mean-field approximation. We start by observing that on \(\kappa\)-regular graphs, the characterization (3.2) of the outbreak size can be simplified further as follows.

**Corollary 3.6**.: _Let \(\kappa\in\mathbb{N}\setminus\{1\}\). Let \(\{G_{n}\}_{n\in\mathbb{N}}\) be such that for every \(n\in\mathbb{N}\), \(G_{n}\) is chosen uniformly among all \(\kappa\)-regular graphs with \(n\) vertices, or equivalently \(G_{n}\) is a \(\text{CM}_{n}(\delta_{\kappa})\) graph. Suppose that there exists \(r\in(0,\infty)\) such that \(\rho_{t}/\beta_{t}=r\) for all \(t\in[0,\infty)\). Then, it follows that
Uniqueness of the solution to \(\phi_{\kappa}(z)=0\) follows since \(\phi_{\kappa}^{\prime}(z)=0\) holds for at most one value of \(z\), namely for \(z=(s_{0}^{-1/\kappa}r/(\kappa-2))^{\kappa/(\kappa-1)}\). Figure 7 plots the analytic values of \(\sigma_{\kappa}\) obtained from Corollary 3.6 versus simulated values for different values of \(n\) and \(\kappa\). We ran 200 iterations for each pair \((n,\,\kappa)\), sampling a new graph at every iteration. As shown therein, the limit appears to be a good approximation for graphs of moderate size (namely, with \(n\geq 150\)). We leave for future research the problem of finding accurate error bounds for finite \(n\). By Theorem 3.1 and Corollary 3.6, \(\sigma_{\kappa}\) is an asymptotically (in \(n\)) exact approximation of the total fraction of individuals ever infected on a SIR epidemic on a graph drawn uniformly among the \(\kappa\)-regular graphs on \(n\) vertices. We now compare this approximation with a scaled Figure 6. Large \(n\) limit of the fraction of individuals who are exposed or infected in the SEIR process on the uniform 3-regular (i.e., \(\bar{e}^{(\infty)}(t)+\bar{i}^{(\infty)}(t)\) from (2.13)) for \(\lambda\equiv 1\) and \(\lambda\equiv 0.5\), along with the large \(n\) limit of the fraction of infected individuals on the SIR process (\(i^{(\infty)}(t)\) from (2.7)) on the same graph and with same \(\rho\) and \(\beta\). Individuals are initially infected with probability \(0.01\), and are otherwise susceptible. Figure 7. Outbreak size for the SIR process with \(\beta\equiv 0.5\) and \(\rho\equiv 1\) obtained from Monte Carlo simulations on random \(\kappa\)-regular graphs on \(n\) vertices (with \(95\%\) confidence intervals), compared with the asymptotically exact value given by Corollary 3.6 (ODE). Simulations are obtained via 100 iterations each, with resampling of the graph at each iteration. mean-field approximation to the SIR model on a \(\kappa\)-regular graph which can be formulated via the following system of ODEs, see for example [10, Section 7]: \[\frac{d\hat{s}}{dt}(t) = -\beta_{t}\kappa\hat{s}(t)\hat{\hat{i}}(t),\] \[\frac{d\hat{i}(t)}{dt} = \beta_{t}\kappa\hat{s}(t)\hat{\hat{i}}(t)-\rho_{t}\hat{\hat{i}}(t),\] with initial conditions \(\hat{s}(0)=s_{0}\), \(\hat{\hat{i}}(0)=i_{0}=1-s_{0}\). When there exists \(r\in(0,\infty)\) such that \(\rho_{t}/\beta_{t}=r\) for all \(t\), by performing a change of variables and solving the equation \(d\hat{\hat{i}}/d\hat{s}=-1+\rho/(\beta\kappa\hat{s})\), it can be shown that \(\lim_{t\to\infty}\hat{s}(t)=\hat{\sigma}_{\kappa}\) where \(\hat{\sigma}_{\kappa}=\hat{\sigma}_{\kappa}(s_{0})\) is the unique solution in \((0,s_{0})\) of \[\hat{\phi}_{\kappa}(\hat{\sigma}_{\kappa})=0,\qquad\text{with}\quad\hat{\phi} _{\kappa}(z):=s_{0}e^{\frac{1}{r}\kappa(z-1)}-z. \tag{3.9}\] Our next result shows that the mean-field approximation always yields a larger estimate of the outbreak size on random regular graphs compared to the true asymptotic value. This is further illustrated in Figure 8, which plots the mean-field prediction versus our prediction of the outbreak size on random \(\kappa\)-regular graphs. **Proposition 3.7**.: _Fix \(s_{0}\in(0,1)\). For each \(\kappa\in\mathbb{N}\setminus\{1\}\), let \(\hat{\sigma}_{\kappa}\) be as in (3.9), and \(\sigma_{\kappa}\) be given as in Corollary 3.6. Then it follows that_ \[\hat{\sigma}_{\kappa}<\sigma_{\kappa}. \tag{3.10}\] Proof.: Fix \(s_{0}\in(0,1)\), \(r>0\). 
Recall that \(\sigma_{\kappa}\) is the unique solution in \((0,s_{0})\) to \(\phi_{\kappa}(z)=0\). From the proof of Corollary 3.6, we know that \(\phi_{\kappa}(z)>0\) for \(z\in(0,\sigma_{\kappa})\) and \(\phi_{\kappa}(z)<0\) for \(z\in(\sigma_{\kappa},s_{0})\). Therefore, to show (3.10), it is enough to show that \(\phi_{\kappa}(\hat{\sigma}_{\kappa})>0\). Using the fact Figure 8. Outbreak size on large (\(n=400\) vertices) \(\kappa\)-regular configuration model graphs (Sim), with \(\rho\equiv 1\), and \(s_{0}=0.95\), along with the asymptotically exact value \(1-\sigma_{\kappa}\) from Corollary 3.6(ODE), and the mean-field approximation \(1-\hat{\sigma}_{\kappa}\) defined in (3.9) (MF). Simulations are obtained through \(500\) iterations, resampling the graph at each iteration, and are plotted with \(95\%\) confidence intervals. that \(\hat{\sigma}_{\kappa}=s_{0}\exp(\kappa(\hat{\sigma}_{\kappa}-1)/r)\), and that \(e^{z}>1+z\) for every \(z>0\), we have \[\begin{split}\phi_{\kappa}(\hat{\sigma}_{\kappa})&= \phi_{\kappa}(s_{0}e^{\frac{\kappa}{r}(\hat{\sigma}_{\kappa}-1)})\\ &=\left(s_{0}e^{\frac{\kappa}{r}(\hat{\sigma}_{\kappa}-1)}\right) ^{\frac{\kappa-2}{\kappa}}s_{0}^{\frac{2}{\kappa}}+rs_{0}^{1/\kappa}\left(s_{ 0}e^{\frac{\kappa}{r}(\hat{\sigma}_{\kappa}-1)}\right)^{-1/\kappa}-1-r\\ &=e^{\frac{1}{r}(\kappa-2)(\hat{\sigma}_{\kappa}-1)}s_{0}+re^{ \frac{1}{r}(1-\hat{\sigma}_{\kappa})}-1-r\\ &>s_{0}e^{\frac{\kappa}{r}(\hat{\sigma}_{\kappa}-1)}(e^{2\frac{1 }{r}(1-\hat{\sigma}_{\kappa})}-1).\end{split} \tag{3.11}\] Since the last expression is strictly positive, this concludes the proof. ## 4. Proofs of main results In Section 4.1 we introduce a parameterized family of processes that interpolates between the SIR and SEIR processes. This allows us to prove some intermediate results simultaneously for both processes. In Section 4.2 we provide the proofs of Theorem 2.5 and Theorem 2.10. In Section 4.3 we prove Theorem 3.1 and Theorem 3.5. Throughout, \(\{G_{n}\}_{n\in\mathbb{N}}\) is a sequence of random graphs, \(\theta\in\mathcal{P}(\mathbb{N}_{0})\), \(\hat{\theta}\) is the size-biased version of \(\theta\), as defined in (2.2), and \(\mathcal{T}\) is a UGW\((\theta)\) tree. We assume that \(\{G_{n}\}_{n\in\mathbb{N}}\) and \(\theta\) satisfy Assumption B and that the rates \(\beta\), \(\lambda\), \(\rho:[0,\infty)\to(0,\infty)\) satisfy Assumption C. ### The Hybrid S(E)IR Process Fix the rates \(\beta,\rho,\lambda:[0,\infty)\to(0,\infty)\) as in Section 2.3, the interpolation parameter \(\alpha\in[0,1]\), probabilities \(s_{0}\in(0,1)\), and \(e_{0},\ i_{0}\in[0,1]\) with \(s_{0}+e_{0}+i_{0}=1\). For a graph \(G=(V,E)\), let \(\xi^{G,\alpha}\) be a Markov chain on \(\bar{\mathcal{X}}^{V}\) describing the evolution of interacting individuals or particles indexed by the nodes of \(V\), where the state of each particle lies in the space \(\bar{\mathcal{X}}=\{S,E,I,R\}\). The initial states are i.i.d. across particles, with common law \(p_{0}\) given by \[p_{0}(b)=s_{0}\mathbf{1}_{\{b=S\}}+e_{0}\mathbf{1}_{\{b=E\}}+i_{0}\mathbf{1}_{ \{b=I\}}, \tag{4.1}\] \(b\in\bar{\mathcal{X}}\). Given \(y\in\bar{\mathcal{X}}^{\ell}\) for some \(\ell\in\mathbb{N}_{0}\) (setting \(\bar{\mathcal{X}}^{0}=\emptyset\)) recall that \(\mathcal{I}(y)\) denotes the number of entries of \(y\) that are equal to \(I\). 
At time \(t\in[0,\infty)\), the jump rates for the jump processes \(\xi_{v}^{G,\alpha}\) representing the evolution of a particle \(v\in G\) are given as follows: * from \(S\) to \(E\) at rate \(\alpha\beta_{t}\mathcal{I}(\xi_{\partial v}^{G,\alpha}(t-))\); * from \(S\) to \(I\) at rate \((1-\alpha)\beta_{t}\mathcal{I}(\xi_{\partial v}^{G,\alpha}(t-))\); * from \(E\) to \(I\) at rate \(\lambda_{t}\); * from \(I\) to \(R\) at rate \(\rho_{t}\). No other transitions are allowed. When \(G\) is finite, classical results guarantee the existence of the process \(\xi^{G,\alpha}\) and the uniqueness of its law follows from standard results about finite-state continuous time Markov chains, see for instance [12, Proposition 4.1]. We note that if \(\alpha=1\), \(\xi^{G,\alpha}\) reduces to the SEIR model, and if \(\alpha=0\) (and \(e_{0}=0\)), \(\xi^{G,\alpha}\) is the SIR model. Throughout, whenever \(\alpha=0\), we implicitly assume that \(e_{0}=0\), and Assumption C can be substituted with Assumption A. We refer to \(\xi^{G,\alpha}\) as the S(E)IR process. We also observe that the process \(\xi^{G,\alpha}\) is non-decreasing for every \(G\) and \(\alpha\), that is, for every \(v\in G\) and \(t,s\in[0,\infty)\), \[t\leq s\ \Rightarrow\ \xi_{v}^{G,\alpha}(t)\leq\xi_{v}^{G,\alpha}(s). \tag{4.2}\] Since we are interested in studying the limit of the S(E)IR process on locally tree-like graphs \(G_{n}\) with \(|G_{n}|\to\infty\) and \(G_{n}\) converging to a limit random tree, we need to define \(\xi^{\mathcal{T},\alpha}\) on a possibly infinite tree \(\mathcal{T}\). Intuitively, \(\xi^{\mathcal{T},\alpha}\) is a Markov jump process with the same rates as described above, but due to randomness in the tree structure, a rigorous definition (and subsequent characterization of properties) is most conveniently expressed in terms of the following (standard) Ulam-Harris-Neveu labeling which identifies each realization of \(\mathcal{T}\) with a subgraph of the graph of all possible vertices. The latter has vertex set \(\mathbb{V}:=\{\varnothing\}\cup(\cup_{k=1}^{\infty}\mathbb{N}^{k})\), where \(\varnothing\) denotes the root, and edges \(\{\{v,vi\}:v\in\mathbb{V},i\in\mathbb{N}\}\), where \(vi\) denotes concatenation, with the convention \(\varnothing u=u\varnothing=u\) for all \(u\in\mathbb{V}\). For \(n\in\mathbb{N}_{0}\), we also let \(\mathbb{V}_{n}:=\{\varnothing\}\cup(\cup_{k=1}^{n}\mathbb{N}^{k})\). Given a vertex \(v\in\mathbb{V}\setminus\{\varnothing\}\), denote by \(\pi_{v}\) its _parent_, defined to be the unique \(w\in\mathbb{V}\) such that there exists \(k\in\mathbb{N}\) with \(wk=v\). The _children_ of a vertex \(v\in\mathbb{V}\) are defined to be the set \(C_{v}^{\mathbb{V}}:=\{vi\}_{i\in\mathbb{N}}\). Given a tree \(\mathcal{T}\) with root \(\varnothing_{\mathcal{T}}\), we identify it (uniquely up to root preserving automorphisms of \(\mathcal{T}\)) as a subgraph of \(\mathbb{V}\) via a map \(\mathcal{V}\) from the vertex set of \(\mathcal{T}\) to \(\mathbb{V}\) such that 1. \(\mathcal{V}(\varnothing_{\mathcal{T}})=\varnothing\); 2. \(\mathcal{V}(\partial_{\varnothing}^{\mathcal{T}})=\{m\in\mathbb{N}\ :\ m\leq d_{ \varnothing}^{\mathcal{T}}\}\); 3. for \(v\in\mathcal{T}\) at graph distance1\(k\in\mathbb{N}\) from \(\varnothing_{\mathcal{T}}\), \(\mathcal{V}(v)=u\in\mathbb{N}^{k}\) and \(\mathcal{V}(\partial_{v}^{\mathcal{T}})=\{\pi_{u}\}\cup\{um\ :\ m\in\mathbb{N},m\leq d_{v}^{ \mathcal{T}}-1\}\). 
Footnote 1: given two vertices \(v\) and \(w\) in a graph \(G\), the graph distance between them is the minimum number of edges on a path from \(v\) to \(w\). In order to represent elements in \(\bar{\mathcal{X}}^{\mathcal{T}}\) as marks on \(\mathbb{V}\), we consider a new mark \(\star\), and define \(\bar{\mathcal{X}}_{\star}:=\bar{\mathcal{X}}\cup\{\star\}\). Given \(x\in\bar{\mathcal{X}}^{\mathcal{T}}\), we extend it to an element in \((\bar{\mathcal{X}}_{\star})^{\mathbb{V}}\) by setting \(x_{w}=\star\) for all \(w\in\mathbb{V}\setminus\mathcal{T}\). Whenever we consider a graph \(\mathcal{T}\subset\mathbb{V}\) and \(v\in\mathcal{T}\), we use \(\partial_{v}\) and \(d_{v}\) denote neighborhoods and degrees with respect to \(\mathcal{T}\). We use \(\partial^{\mathbb{V}}A\) to refer to the boundary in \(\mathbb{V}\) of a set \(A\subset\mathbb{V}\), and set \(\partial_{v}^{\mathbb{V}}=\partial^{\mathbb{V}}\{v\}\) for \(v\in\mathbb{V}\). Given an interval \(\mathcal{S}\subset\mathbb{R}\) and \(\mathcal{M}\) a metric space, let \(\mathcal{D}(\mathcal{S}:\mathcal{M})\) be the space of cadlag functions2 equipped with the Skorokhod topology. For \(x\) a (possibly random) element in \(\mathcal{D}(\mathcal{S}:\mathcal{M})\) and \(t\in\mathcal{S}\), we write \(x[t]:=\{x(s)\ :s\in\mathcal{S}\cap(-\infty,t]\}\) and \(x[t]:=\{x(s)\ :\ s\in\mathcal{S}\cap(-\infty,t)\}\). Throughout, we write \(\mathcal{D}:=\mathcal{D}([0,\infty):\bar{\mathcal{X}}])\), and we set \(\mathcal{D}_{\star}\) to be the union of \(\mathcal{D}\) and the single element consisting of the constant-\(\star\) function. Footnote 2: right continuous functions with finite left limits at every \(t\) in the interior of \(\mathcal{S}\). For \(y\in\bar{\mathcal{X}}_{\star}^{\infty}\) we write \(\mathcal{I}(y)=\left|\left\{m\ :\ y_{m}=I\right\}\right|\). Also, for simplicity, we identify the states \((S,E,I,R)\) with the vector \((0,1,2,3)\), and the set of possible jumps with \(\mathcal{J}=\{1,2\}\). The jump rate function \(q_{\alpha}\ :\mathcal{J}\times(0,\infty)\times\mathcal{D}_{\star}\times \mathcal{D}_{\star}^{\infty}\to\mathbb{R}_{+}\) is then given by \[\begin{split}& q_{\alpha}(1,t,x,y)=\mathbf{1}_{\{x(t-)=S\}} \alpha\beta_{t}\mathcal{I}(y(t-))+\mathbf{1}_{\{x(t-)=E\}}\lambda_{t}+ \mathbf{1}_{\{x(t-)=I\}}\rho_{t},\\ & q_{\alpha}(2,t,x,y)=\mathbf{1}_{\{x(t-)=S\}}(1-\alpha)\beta_{ t}\mathcal{I}(y(t-)).\end{split} \tag{4.3}\] **Remark 4.1**.: When \(\alpha=0\), \(q_{\alpha}\) defined in (4.3) reduces to the jump rate function of the SIR process as described in Section 2.1, and when \(\alpha=1\), \(q_{\alpha}\) coincides with the jump rate function of the SEIR process as described in Section 2.3. Given \(j\in\mathcal{J}\) and \(v\in\mathbb{V}\), we define \(j^{(v)}\in\{0,1,2\}^{\mathbb{V}}\) by \(j^{(v)}_{w}=j\mathbf{1}_{\{v=w\}}\) for all \(w\in\mathbb{V}.\) We define \(\xi^{\mathcal{T},\alpha}\) as a continuous time Markov chain on \(\bar{\mathcal{X}}_{\star}^{\mathbb{V}}\) with jump directions \(\{j^{(v)}\}_{j\in\mathcal{J},v\in\mathbb{V}}\) and corresponding jump rates at time \(t\) given by \(q_{\alpha}(j,t,\xi_{v}^{\mathcal{T},\alpha},\xi_{\partial_{v}^{\mathbb{V}}}^{ \mathcal{T},\alpha})\). The initial state of the process is given by \(\xi_{v}^{\mathcal{T},\alpha}(0)=z_{v}^{p_{0}}\), where \(z^{p_{0}}=\{z_{v}^{p_{0}}\}_{v\in\mathbb{V}}\) satisfies the following assumption. **Assumption D**.: Let \(z^{(1)}=\{z_{v}^{(1)}\}_{v\in\mathbb{V}}\) be a collection of i.i.d. 
\(\bar{\mathcal{X}}\)-valued random variables with common law \(p_{0}\) given by (4.1). Let \(z^{(2)}\) be a \(\{1,\star\}^{\mathbb{V}}\)-valued random variables independent of \(z^{(1)}\) such that the subgraph of \(\mathbb{V}\) induced by \(\{v\ :z_{v}^{(2)}\neq\star\}\) is equal in law to a \(\mathrm{UGW}(\theta)\) tree. The \(\bar{\mathcal{X}}^{\mathbb{V}}\)-valued random variable \(z^{p_{0}}\) satisfies \[z_{v}^{p_{0}}=z_{v}^{(1)}\mathbf{1}_{\{z_{v}^{(2)}\neq\star\}}+\star\mathbf{1}_{ \left\{z_{v}^{(2)}=\star\right\}},\qquad v\in\mathbb{V}.\] We can easily recover the graph \(\mathcal{T}\) from the process \(\xi^{\mathcal{T},\alpha}\) as follows: \[\mathcal{T}=\mathcal{T}(\xi^{\mathcal{T},\alpha}):=\{v\in V:\xi_{v}^{\mathcal{ T},\alpha}(0)\neq\star\}=\{v\ :\,z_{v}^{p_{0}}\neq\star\}.\] Since the graph \(\mathcal{T}\) can be infinite, it is no longer immediate that the process \(\xi^{\mathcal{T},\alpha}\) with the intuitive description above is well defined (see [12, Appendix A]). However, since \(\mathcal{T}\) is a \(\mathrm{UGW}(\theta)\) with \(\theta\) having a finite second moment (see Assumption B), this is guaranteed by the following result proved in [12], which also characterizes \(\xi^{\mathcal{T},\alpha}\) as the unique in law solution of a certain jump SDE. **Lemma 4.2**.: _The S(E)IR process \(\xi^{\mathcal{T},\alpha}\) exist and is unique in law. Furthermore, its law is the unique solution to the SDE,_ \[\xi^{\mathcal{T},\alpha}_{v}(t)=\ z^{p_{0}}_{v}+\sum_{j=1,2}\int_{(0,t]\times[ 0,\infty)}j\mathbf{1}_{\left\{r<q_{\alpha}\left(j,s,\xi^{\mathcal{T},\alpha}_ {v}\xi^{\mathcal{T},\alpha}_{\partial_{V}^{\mathcal{V}}}\right)\right\}} \boldsymbol{N}_{v}(ds,\ dr),\quad v\in\mathbb{V},\ t\in[0,\infty), \tag{4.4}\] _where \(z^{p_{0}}\) is a \(\bar{\mathcal{X}}^{\mathbb{V}}_{\star}\)-valued random element satisfying Assumption D and \(\boldsymbol{N}=\{\boldsymbol{N}_{v}\ :v\in\mathbb{V}\}\) are i.i.d. Poisson point processes on \((0,\infty)\times[0,\infty)\) with intensity measure equal to Lebesgue measure, independent of \(z^{p_{0}}\)._ Proof.: Existence and uniqueness in law of the solution to the SDE (4.4) follows from [12, Theorem 4.2] on observing that Assumption C implies [12, Assumption 1], and that by [12, Proposition 5.1], the \(\mathrm{UGW}(\theta)\) tree \(\mathcal{T}\) is finitely dissociable in the sense of [12, Definition 5.1]. We now define \[X^{\mathcal{T}} :=\xi^{\mathcal{T},0}, \tag{4.5}\] \[\bar{X}^{\mathcal{T}} :=\xi^{\mathcal{T},1}.\] **Remark 4.3**.: By Remark 4.1 and the uniqueness in law established in Lemma 4.2, \(X^{\mathcal{T}}_{\mathcal{T}}\) and \(\bar{X}^{\mathcal{T}}_{\mathcal{T}}\) are the SIR and SEIR processes on the possibly infinite random tree \(\mathcal{T}\), in a sense consistent with the definitions of the SIR and SEIR processes on finite graphs given in Section 2.1 and Section 2.3, respectively. ### Proofs of Transient Results The proof of Theorem 2.5 is presented in Section 4.2.2. The proof of Theorem 2.10 uses similar techniques, and is thus only outlined in Section 4.2.2, with details relegated to Appendix A. Both proofs rely on four preliminary results first presented in Section 4.2.1. 
The first ingredient (Theorem 4.5) is a convergence result from [12], which shows that the limits of the fractions \(|\{v\in G_{n}:\ \xi^{\mathcal{T},\alpha}_{v}(t)=b\}|/|G_{n}|\), \(b\in\bar{\mathcal{X}}\), coincide with the root marginal probabilities of the limiting S(E)IR dynamics on the graph \(\mathcal{T}\) that arises as the local limit of the graph sequence \(\{G_{n}\}_{n\in\mathbb{N}}\). The second ingredient is a projection result (Proposition 4.6) that identifies the law of the marginal dynamics on \(\mathcal{T}\) in terms of a certain (_a priori_ non-Markovian) jump process with somewhat implicit jump rates. This result is a generalization of similar projection results obtained in [11, 14]. The third and fourth results (Proposition 4.7 and Proposition 4.8) identify key conditional independence and symmetry properties of the dynamics to explicitly identify the jump rates of the marginal dynamics. **Remark 4.4**.: For a general class of interacting particle systems (IPS) on UGW trees \(\mathcal{T}\) whose offspring satisfies suitable moment conditions, which in particular includes the SIR and S(E)IR processes, we expect that the marginal dynamics of the IPS on the root and its neighborhood can be described autonomously in terms of a certain (non-Markovian) stochastic process. Indeed, in the special case when \(\mathcal{T}\) is a \(\kappa\)-regular tree, such a result is established in [14] (see also [10]) by appealing to a Markov random field property for the trajectories of the process proved in [13, Theorem 3.7] (see also [23, 25] for corresponding results for interacting diffusions). The current work goes beyond regular trees to include a large class of UGW trees, and also establishes a much stronger conditional independence property of the trajectories for the S(E)IR process when compared to general IPS. The latter is then used to show that for the S(E)IR process, the root marginal dynamics is in fact a Markovian process (see Remark 4.11), and thus its law can be described by a system of ODEs (namely, the forward Kolmogorov equations describing the evolution of the law of the Markov process). We remind the reader that the standing assumptions made at the beginning of Section 4 are in effect throughout. #### 4.2.1. Preliminary Results We start by stating the convergence result. **Theorem 4.5**.: _For every \(n\in\mathbb{N}\) and \(b\in\bar{\mathcal{X}}\), set_ \[p_{b}^{n,\alpha}(t):=\frac{1}{|G_{n}|}\sum_{v\in G_{n}}\mathbf{1}_{\{\xi_{v}^{G _{n},\alpha}(t)=b\}}. \tag{4.6}\] _For every \(t\in[0,\infty)\), as \(n\to\infty\),_ \[(p_{b}^{n,\alpha}(t))_{b\in\bar{\mathcal{X}}}\to(\mathbb{P}(\xi_{\varnothing}^ {\mathcal{T},\alpha}(t)=b))_{b\in\bar{\mathcal{X}}}, \tag{4.7}\] _where the convergence is in probability._ Proof.: The statement follows from [12, Corollary 4.7] on observing that Assumption C implies [12, Assumption 1], Assumption B, along with [12, Corollary 5.16] implies that the graph sequence \(\{G_{n}\}_{n\in\mathbb{N}}\) and \(\mathcal{T}\) are a.s. finitely dissociable in the sense of [12, Definition 5.11], and that [12, Assumption 2] holds trivially since the state \(\bar{\mathcal{X}}_{\star}\) is discrete. In view of Theorem 4.5, our next goal is to characterize the law of the root marginal of \(\xi^{\mathcal{T},\alpha}\). We first apply a projection result that characterizes the law of any marginal \(\xi_{U}^{\mathcal{T},\alpha}\) for \(U\subset\mathbb{V}\) in terms of a certain jump process \(\eta^{\mathcal{T},\alpha}[U]\). 
**Proposition 4.6**.: _For every finite \(U\subset\mathbb{V}\), \(v\in U\), and \(j\in\{1,2\}\), there exists a Borel measurable function \(\check{q}_{v,j}^{\theta,\alpha}[U]:[0,\infty)\times\mathcal{D}([0,\infty),\bar{\mathcal{X}}_{\star}^{U})\to[0,\infty)\) such that_ 1. _the function_ \(t\to\check{q}_{v,j}^{\theta,\alpha}[U](t,x)\) _is caglad_3 _for all_ \(x\in\mathcal{D}([0,\infty),\bar{\mathcal{X}}_{\star}^{U})\)_;_ Footnote 3: left continuous with finite right limits at every \(t\in[0,\infty)\). 2. _the function_ \(t\to\check{q}_{v,j}^{\theta,\alpha}[U](t,x)\) _is predictable in the sense that for any_ \(t\in[0,\infty)\) _and_ \(x,x^{\prime}\in\mathcal{D}([0,\infty),\bar{\mathcal{X}}_{\star}^{U})\)_,_ \(\check{q}_{v,j}^{\theta,\alpha}[U](t,x)=\check{q}_{v,j}^{\theta,\alpha}[U](t,x^{\prime})\) _whenever_ \(x[t)=x^{\prime}[t)\)_;_ 3. _for every_ \(v\in U\)_, the stochastic process_ \(t\to\check{q}_{v,j}^{\theta,\alpha}[U](t,\xi_{U}^{\mathcal{T},\alpha})\) _is a modification_4 _of the process_ \(\{\mathbb{E}[q_{\alpha}(j,t,\xi_{v}^{\mathcal{T},\alpha},\xi_{\partial_{v}^{\mathbb{V}}}^{\mathcal{T},\alpha})\mid\xi_{U}^{\mathcal{T},\alpha}[t]],\ t\in[0,\infty)\}\)_._ Footnote 4: Given two stochastic processes \(Y=(Y_{t},t\geq 0)\) and \(\hat{Y}=(\hat{Y}_{t},t\geq 0)\) defined on the same probability space \((\Omega,\mathcal{F},\mathbb{P})\), \(Y\) is a modification of \(\hat{Y}\) if for every \(t\geq 0\), \(\mathbb{P}(Y(t)=\hat{Y}(t))=1\). _Furthermore, \(\mathcal{L}(\xi_{U}^{\mathcal{T},\alpha})=\mathcal{L}(\eta^{\mathcal{T},\alpha}[U])\), where \(\eta^{\mathcal{T},\alpha}[U]\) is the pathwise unique solution to the following jump SDE_ \[\eta_{v}^{\mathcal{T},\alpha}[U](t)=z_{v}^{p_{0}}+\sum_{j=1,2}\int_{(0,t]\times[0,\infty)}j\mathbf{1}_{\left\{r<\check{q}_{v,j}^{\theta,\alpha}[U](s,\ \eta^{\mathcal{T},\alpha}[U])\right\}}\textbf{N}_{v}(ds,\ dr),\qquad v\in U,\ t\in[0,\infty), \tag{4.8}\] _where \(z^{p_{0}}\) is a \(\bar{\mathcal{X}}_{\star}^{\mathbb{V}}\)-valued random variable satisfying Assumption D, and \(\textbf{N}=\{\textbf{N}_{v}\ :v\in U\}\) are i.i.d. Poisson point processes on \((0,\infty)\times[0,\infty)\) with intensity measure equal to Lebesgue measure, independent of \(z^{p_{0}}\)._ Proof.: In the case when \(\mathcal{T}\) is a deterministic \(\kappa\)-regular tree, this was proved in Lemma 9.1 and Proposition 9.2 of [10]; see also [14]. Using a general result from [13, Corollary 4.11], this can be extended to a class of Galton-Watson trees that include the ones considered in Assumption B; we refer the reader to [11] for full details. Using Proposition 4.6, the law of \(\xi^{\mathcal{T},\alpha}_{\{\varnothing,1\}}\) can be characterized in terms of a jump process \(\eta^{\mathcal{T},\alpha}[\{\varnothing,1\}]\). However, the jump rates of the latter process are _a priori_ path-dependent and not very explicit. We now identify two additional properties that allow us to simplify the form of these jump rates and thereby show that \(\eta^{\mathcal{T},\alpha}[\{\varnothing,1\}]\) is in fact a nonlinear Markov process (see Remark 4.11), that is, a (time-inhomogeneous) Markov process whose transition rates depend not only on the current state but also on the law of the current state. For a set \(B\subset\mathbb{V}\), we let \(S_{B}\) denote the vector in \(\bar{\mathcal{X}}^{B}_{\star}\) whose every coordinate is equal to \(S\).
**Proposition 4.7**.: _For every \(t\in[0,\infty)\), \(A\subset\mathbb{V}\) with \(|\partial^{\mathbb{V}}A|<\infty\), and \(B\subset\mathbb{V}\setminus A\),_ \[\xi^{\mathcal{T},\alpha}_{A}[t]\perp\xi^{\mathcal{T},\alpha}_{B}[t]\lvert\xi ^{\mathcal{T},\alpha}_{\partial^{\mathbb{V}}A}(t-)=S_{\partial^{\mathbb{V}}A}. \tag{4.9}\] _Moreover, for every \(v\in\mathbb{V}\), the processes \(\xi^{\mathcal{T},\alpha}_{vi}[t)\), \(1\leq i\leq d_{v}-\mathbf{1}_{\{v\neq\varnothing\}}\) are conditionally independent given \(\xi^{\mathcal{T},\alpha}_{v}(t-)=S\) and the degree \(d_{v}\) of \(v\)._ The proof of Proposition 4.7 is given in Section 5. **Proposition 4.8**.: _For every \(b\in\bar{\mathcal{X}}\), \(\alpha\in[0,1]\) and \(t\in[0,\infty)\), the conditional probability \(\mathbb{P}(\xi^{\mathcal{T},\alpha}_{\bar{v}m}(t-)=b\ |\ \xi^{\mathcal{T},\alpha}_{\bar{v}}(t-)=S,\ \tilde{v}\in \mathcal{T})\) does not depend on the choice of \(\tilde{v}\in\mathbb{V}\) and \(m\in\mathbb{N}\)._ The proof of Proposition 4.8 proceeds by exploiting the conditional independence property in Proposition 4.7 along with symmetry properties and well-posedness of the SDE (4.4) to show that \(\mathcal{L}(\xi^{\mathcal{T},\alpha}_{\bar{v}m}[t]\lvert\xi^{\mathcal{T}, \alpha}_{\bar{v}}(t-)=S)=\mathcal{L}(\xi^{\mathcal{T},\alpha}_{1}[t]\lvert \xi^{\mathcal{T},\alpha}_{\varnothing}(t-)=S)\) for all \(\tilde{v}\in\mathbb{V}\) and \(m\in\mathbb{N}\). The details are relegated to the end of Section 5. We conclude this section with an elementary result we use repeatedly in the sequel. **Lemma 4.9**.: _Let \((\Omega,\ \mathcal{F},\ \mathbb{P})\) be a probability space, and suppose that \(A,\)\(A^{\prime},\)\(B,\)\(B^{\prime}\in\mathcal{F}\) with \(\mathbb{P}(B\cap B^{\prime})>0\) and \(\mathbb{P}(A^{\prime}\cap B^{\prime})>0\). Then,_ \[\mathbb{P}(A\cap A^{\prime}\ |\ B\cap B^{\prime})=\mathbb{P}(A^{\prime}\ |\ B^{ \prime})\frac{\mathbb{P}(A\cap B\ |\ A^{\prime}\cap B^{\prime})}{\mathbb{P}(B\ |\ B^{\prime})}. \tag{4.10}\] Proof.: Let \(A,\)\(A^{\prime},\)\(B,\)\(B^{\prime}\) be as in the statement of the lemma. By the definition of conditional probability, and some simple arithmetic manipulation, \[\mathbb{P}(A\cap A^{\prime}\ |\ B\cap B^{\prime}) =\frac{\mathbb{P}(A\cap A^{\prime}\cap B\cap B^{\prime})}{\mathbb{ P}(B\cap B^{\prime})} \tag{4.11}\] \[=\frac{\mathbb{P}(A\cap B\ |\ A^{\prime}\cap B^{\prime})\mathbb{P}(A^{ \prime}\cap B^{\prime})}{\mathbb{P}(B\ |\ B^{\prime})\mathbb{P}(B^{\prime})}\] \[=\mathbb{P}(A^{\prime}\ |\ B^{\prime})\frac{\mathbb{P}(A\cap B\ |\ A^{ \prime}\cap B^{\prime})}{\mathbb{P}(B\ |\ B^{\prime})}.\] #### 4.2.2. Proof of Theorem 2.5 We can now complete the proof of our main result for the SIR process by characterizing the time marginals of \(\xi^{\mathcal{T},\alpha}_{\varnothing}\) for the special case \(\alpha=0\), which by Remark 4.3 is equal to the marginal at the root of the SIR process on the possibly infinite tree \(\mathcal{T}\). 
For \(a,b\in\mathcal{X}\), \(t\in[0,\infty)\) and \(k\in\{m\in\mathbb{N}_{0}\ :\ \theta(m)>0\}\), define \[P^{\theta}_{a,b;k}(t) :=\mathbb{P}(X^{\mathcal{T}}_{\varnothing}(t)=b\ |\ X^{\mathcal{T}}_{\varnothing}(0)=a,\ d_{\varnothing}=k), \tag{4.12}\] \[P^{\theta}_{a,b}(t) :=\mathbb{P}(X^{\mathcal{T}}_{\varnothing}(t)=b\ |\ X^{\mathcal{T}}_{\varnothing}(0)=a),\] \[f^{\theta}_{a}(t) :=\mathbb{P}(X^{\mathcal{T}}_{1}(t)=a\ |\ X^{\mathcal{T}}_{\varnothing}(t)=S,\ 1\in\mathcal{T}),\] where \(d_{v}\) is the degree of the vertex \(v\in\mathcal{T}\), and where we recall that \(1\in\mathcal{T}\) is equivalent to \(X_{1}^{\mathcal{T}}(0)\neq\star\). When clear from context, we omit the dependence on \(\theta\) and simply write \(P_{a,b;k},\ P_{a,b}\) and \(f_{a}\). **Theorem 4.10**.: _Let \(f_{S}\) and \(f_{I}\) be as in (4.12), and set \(F_{I}(t):=\int_{0}^{t}\beta_{s}f_{I}(s)ds\) for \(t\in[0,\infty)\). Then, \((f_{S}\), \(f_{I}\), \(F_{I})\) solves the ODE system (2.4)-(2.5)._ Proof.: Throughout the proof, in order to simplify the notation we write \(X\) in lieu of \(X^{\mathcal{T}}=\xi^{\mathcal{T},0}\), the SIR process on \(\mathcal{T}\), and \(q\) in lieu of \(q_{0}\) for the jump rates defined in (4.3). We start by observing that, by Assumption D, \(f_{S}(0)=s_{0}\) and \(f_{I}(0)=i_{0}=1-s_{0}\). Since, clearly, \(F_{I}(0)=0\), the initial conditions (2.5) are established. By the fundamental theorem of calculus, \(\dot{F}_{I}(t)=\beta_{t}f_{I}(t)\), which is the third equation in (2.4). We now turn to the derivation of the evolution of \(f_{I}\) and \(f_{S}\). This requires us to simultaneously track the evolution of two nodes, \(\varnothing\) and \(1\), since \(f_{I}(t)\) and \(f_{S}(t)\) are conditional probabilities associated with the joint law of \(X_{\varnothing}(t)\) and \(X_{1}(t)\). To start with, we apply the projection result of Proposition 4.6, with \(\alpha=0\) and \(U=\{\varnothing,1\}\), to conclude that the joint marginal \(X_{\varnothing,1}\) has the same law as the jump process \(\eta^{\mathcal{T},\alpha}[\{\varnothing,1\}]\) on \(\mathcal{X}_{\star}\times\mathcal{X}_{\star}\) that has predictable jump rates \[\hat{q}_{v}(t,x):=\check{q}_{v,j(x_{v})}^{\theta,0}[\{\varnothing,1\}](t,x), \tag{4.13}\] \(v\in\{\varnothing,1\}\), \(x\in\mathcal{D}([0,\infty),\mathcal{X}_{\star}^{2})\) and \(j(x_{v})=1+\mathbf{1}_{\{x_{v}(t-)=S\}}\), which satisfy, for every \(t\geq 0\), almost surely5 Footnote 5: The dependence \(j(x_{v})\) of the allowed jump on the state is a notational nuisance that is a mere artifact of our using a common framework to analyze both the SIR and SEIR processes. Indeed, this is because when we use the common (ordered) state space \(\{S,E,I,R\}\) for both processes, then the SIR process allows only jumps of size \(2\) from the state S (going from S to I and skipping over E), and only jumps of size \(1\) from the state I (going from I to R). \[\hat{q}_{v}(t,X_{\varnothing,1})=\mathbb{E}[q(j(X_{v}),t,X_{v},X_{\partial_{v}^{\vee}})|X_{\varnothing,1}[t)],\quad v\in\{\varnothing,1\}. \tag{4.14}\] Next, we use the specific form of \(q\), as defined in (4.3), and Propositions 4.7 and 4.8 to obtain a more explicit description of \(\hat{q}_{v}\), \(v\in\{\varnothing,1\}\).
Since the probabilities \(f_{I}(t)\) and \(f_{S}(t)\) are conditioned on \(X_{\varnothing}(t)=S\) and on \(X_{1}(t)\neq\star\) (and using the fact that a particle that is in state \(R\) remains in that state for all subsequent times), we only need to consider the jump intensities \(\hat{q}_{v}(t,X_{\varnothing,1})\), \(v\in\{\varnothing,1\}\), on the events \(\{X_{\varnothing,1}(t-)=(S,S)\}\) and \(\{X_{\varnothing,1}(t-)=(S,I)\}\). Define \(B_{1}(t):=\beta_{t}\mathbb{E}[\mathcal{I}(X_{\partial_{1}^{\vee}\setminus\{\varnothing\}}(t-))|X_{1}(t-)=S]\). Recalling the definition of \(q=q_{0}\) from (4.3), \(B_{1}(t)\) is the cumulative conditional rate at which the children of \(1\) infect \(1\) at time \(t\), given \(X_{1}(t-)=S\) (which also implies \(1\in\mathcal{T}\)). Similarly, let \(B_{\varnothing}(t):=\beta_{t}\mathbb{E}[\mathcal{I}(X_{\partial_{\varnothing}^{\vee}\setminus\{1\}}(t-))|X_{\varnothing}(t-)=S]\) be the cumulative conditional rate at which the neighbors of the root other than vertex \(1\) infect the root at time \(t\), given \(X_{\varnothing}(t-)=S\). By Proposition 4.7, for \(v,w\in\{\varnothing,1\}\) with \(w\neq v\), \[B_{v}(t)=\beta_{t}\mathbb{E}[\mathcal{I}(X_{\partial_{v}^{\vee}\setminus\{w\}}(t-))|X_{v}(t-)=S,\ X_{w}(t-)]. \tag{4.15}\] Using (4.3) and (4.15), on the event \(\{X_{\varnothing}(t-)=S\}\), \[\hat{q}_{\varnothing}(t,X_{\varnothing,1})=\beta_{t}\mathbf{1}_{\{X_{1}(t-)=I\}}+B_{\varnothing}(t).\] Similarly, on the event \(\{X_{\varnothing}(t-)=S\}\), \[\hat{q}_{1}(t,X_{\varnothing,1})=B_{1}(t)\mathbf{1}_{\{X_{1}(t-)=S\}}+\rho_{t}\mathbf{1}_{\{X_{1}(t-)=I\}}.\] Therefore, we can treat \(X_{\varnothing,1}\) as a two-particle jump process driven by Poisson noises with intensity measure equal to Lebesgue measure, whose jumps and jump rates from the states \((S,S)\) and \((S,I)\) can be summarized as follows: \[\begin{array}{ll}\text{Jump:}&\text{Rate at time $t$:}\\ (S,S)\to(S,I)&B_{1}(t)\\ (S,S)\to(I,S)&B_{\varnothing}(t)\\ (S,I)\to(I,I)&\beta_{t}+B_{\varnothing}(t)\\ (S,I)\to(S,R)&\rho_{t},\end{array}\] with all other jump rates being equal to zero. Next, we fix \(h>0\) and we obtain expressions for \(f_{I}(t+h),\ f_{S}(t+h)\) in terms of \(f_{I}(t)\), \(f_{S}(t)\), \(h\), \(\beta_{t}\), \(\rho_{t}\), and \(\hat{\theta}\). We first consider \(f_{S}(t+h)=\mathbb{P}(X_{1}(t+h)=S\ |\ X_{\varnothing}(t+h)=S,\ 1\in\mathcal{T})\) defined in (4.12). Using the monotonicity of the SIR dynamics, we can write \[f_{S}(t+h)=\mathbb{P}(X_{1}(t+h)=S,\ X_{1}(t)=S\ |\ X_{\varnothing}(t+h)=S,\ X_{\varnothing}(t)=S,\ 1\in\mathcal{T}). \tag{4.16}\] By an application of Lemma 4.9, with \(A=\{X_{1}(t+h)=S\}\), \(A^{\prime}=\{X_{1}(t)=S\}\), \(B=\{X_{\varnothing}(t+h)=S\}\) and \(B^{\prime}=\{X_{\varnothing}(t)=S,1\in\mathcal{T}\}\), we obtain \[f_{S}(t+h)=f_{S}(t)\frac{\mathbb{P}(X_{\varnothing}(t+h)=S,\ X_{1}(t+h)=S\ |\ X_{\varnothing}(t)=S,\ X_{1}(t)=S,\ 1\in\mathcal{T})}{\mathbb{P}(X_{\varnothing}(t+h)=S\ |\ X_{\varnothing}(t)=S,\ 1\in\mathcal{T})}. \tag{4.17}\] Since \(B_{1}(t)+B_{\varnothing}(t)\) is the rate at which \(X_{\varnothing,1}(t)\) leaves the state \((S,S)\), the numerator in the right-hand side of (4.17) is equal to \(1-h(B_{1}(t)+B_{\varnothing}(t))+o(h)\).
For the denominator, observe that the rate \(\hat{q}_{\varnothing}(t,X_{\varnothing,1})\) on the event \(\{X_{\varnothing}(t-)=S,\ X_{1}(t-)\neq\star\}=\{X_{\varnothing}(t-)=S,\ 1\in\mathcal{T}\}\) is equal to \[\begin{split}&\mathbb{E}[q(2,t,X_{\varnothing},X_{\partial_{\varnothing}^{\vee}})\ |X_{\varnothing}(t-)=S,1\in\mathcal{T}]\\ =&\beta_{t}\mathbb{E}[\mathcal{I}(X_{1}(t-))\ |X_{\varnothing}(t-)=S,\ 1\in\mathcal{T}]+\beta_{t}\mathbb{E}[\mathcal{I}(X_{\partial_{\varnothing}^{\vee}\setminus\{1\}}(t-))\ |X_{\varnothing}(t-)=S,\ 1\in\mathcal{T}]\\ =&\beta_{t}f_{I}(t)+B_{\varnothing}(t),\end{split}\] where the first equality follows from (4.3) with \(\alpha=0\), and the second follows from the definition of \(f_{I}\) in (4.12) and by (4.15) (on observing that the event \(\{1\in\mathcal{T}\}\) is \(X_{1}(t)\)-measurable). Therefore, it follows that \[f_{S}(t+h)=f_{S}(t)\frac{1-h(B_{1}(t)+B_{\varnothing}(t))+o(h)}{1-h(\beta_{t}f_{I}(t)+B_{\varnothing}(t))+o(h)}, \tag{4.18}\] which implies that \[\begin{split} f_{S}(t+h)-f_{S}(t)&=f_{S}(t)\frac{1-hB_{1}(t)-hB_{\varnothing}(t)-1+h\beta_{t}f_{I}(t)+hB_{\varnothing}(t)+o(h)}{1-h(\beta_{t}f_{I}(t)+B_{\varnothing}(t))+o(h)}\\ &=f_{S}(t)\frac{h\beta_{t}f_{I}(t)-hB_{1}(t)+o(h)}{1+o(1)}.\end{split} \tag{4.19}\] In turn, this implies \[\dot{f}_{S}(t)=\beta_{t}f_{S}(t)f_{I}(t)-f_{S}(t)B_{1}(t). \tag{4.20}\] Similarly, recalling that \(f_{I}(t+h)=\mathbb{P}(X_{1}(t+h)=I\ |\ X_{\varnothing}(t+h)=S,\ 1\in\mathcal{T})\) from (4.12), using the fact that a particle that at time \(t+h\) is in state \(I\) could only have been in states \(S\) or \(I\) at time \(t\), and using a similar derivation as in (4.16)-(4.19), \[\begin{split} f_{I}(t+h)&=\sum_{a=S,I}f_{a}(t)\frac{\mathbb{P}(X_{\varnothing}(t+h)=S,\ X_{1}(t+h)=I\ |\ X_{\varnothing}(t)=S,\ X_{1}(t)=a,\ 1\in\mathcal{T})}{\mathbb{P}(X_{\varnothing}(t+h)=S\ |\ X_{\varnothing}(t)=S,1\in\mathcal{T})}\\ &=\frac{f_{S}(t)(hB_{1}(t)+o(h))+f_{I}(t)(1-h(\rho_{t}+B_{\varnothing}(t)+\beta_{t})+o(h))}{1-h(f_{I}(t)\beta_{t}+B_{\varnothing}(t))+o(h)},\end{split} \tag{4.21}\] which implies that \[f_{I}(t+h)-f_{I}(t)=(1+o(1))(hf_{S}(t)B_{1}(t)-hf_{I}(t)(\rho_{t}+\beta_{t}-\beta_{t}f_{I}(t))+o(h)). \tag{4.22}\] It follows that \[\dot{f}_{I}=f_{S}B_{1}-f_{I}(\rho+\beta-\beta f_{I}). \tag{4.23}\] In view of (4.20) and (4.23), all that is left to find is an expression for \(B_{1}(t)\), the conditional rate at which the children of vertex \(1\) infect vertex \(1\) at time \(t\), given \(X_{1}(t)=S\), in terms only of \(\beta_{t},\ \rho_{t},\ \hat{\theta}\), and \(f_{I}(t)\). By Proposition 4.7, \(X_{\partial_{1}^{\vee}\setminus\{\varnothing\}}(t)\) is conditionally independent of \(X_{\varnothing}(t)\) given \(X_{1}(t)=S\). Also by Proposition 4.7, \(\{X_{1i}(t)\}_{i=1,...,k}\) are conditionally i.i.d. given \(X_{1}(t)=S\) and \(d_{1}=k+1\), and by Proposition 4.8, \[\mathbb{P}(X_{1i}(t)=I\ |\ X_{1}(t)=S,\ d_{1}=k+1)=\mathbb{P}(X_{1i}(t)=I\ |\ X_{1}(t)=S,\ 1i\in\mathcal{T})=f_{I}(t),\] for \(i=1,...,k\).
This implies that \[\begin{split} B_{1}(t)&=\beta_{t}\mathbb{E}[ \mathcal{I}(X_{\partial_{1}^{\vee}\setminus\{\varnothing\}}(t-))\ |\ X_{1}(t-)=S]\\ &=\beta_{t}\mathbb{E}[\mathbb{E}[\mathcal{I}(X_{\partial_{1}^{ \vee}\setminus\{\varnothing\}}(t-))|X_{1}(t-)=S,\ d_{1}=k+1]\ |\ X_{1}(t-)=S]\\ &=\beta_{t}f_{I}(t)\mathbb{E}[\mathbb{E}[d_{1}-1|X_{1}(t-)=S,\ d_{1}=k+1]\ |\ X_{1}(t-)=S]\\ &=\beta_{t}f_{I}(t)(\mathbb{E}[d_{1}\ -1|\ X_{1}(t-)=S]).\end{split} \tag{4.24}\] Next, we find a more explicit description of the conditional expectation in the last line of (4.24). Let \(\bar{\mathcal{N}}=\{k\in\mathbb{N}_{0}\ :\ \hat{\theta}(k+1)>0\}\). For \(k\in\bar{\mathcal{N}}\), define \[r_{k}:=\mathbb{P}(X_{1}(t)=S\ |\ 1\in\mathcal{T},\ d_{1}=k+1). \tag{4.25}\] Then, observing that \(X_{1}(t)=S\) implies that \(1\in\mathcal{T}\), \[\mathbb{E}[d_{1}-1\ |X_{1}(t-)=S]=\sum_{k\in\bar{\mathcal{N}}}k\frac{\mathbb{P} (d_{1}=k+1\ |\ 1\in\mathcal{T})}{\mathbb{P}(X_{1}(t-)=S\ |\ 1\in\mathcal{T})}r_{k}(t-). \tag{4.26}\] By (4.3), the conditional rate at which the individual at \(1\) is infected, given that \(X_{1}(t-)=S\) and \(d_{1}=k+1\), is \[\begin{split}&\beta_{t}\mathbb{E}[\mathcal{I}(X_{\partial_{1}^{ \vee}}(t-))\ |\ X_{1}(t-)=S,\ d_{1}=k+1]\\ &=\beta_{t}\mathbb{E}[\mathcal{I}(X_{\partial_{1}^{\vee}\setminus \{\varnothing\}}(t-)))\ |\ X_{1}(t-)=S,\ d_{1}=k+1]+\beta_{t}\mathbb{E}[\mathcal{I}(X_{\varnothing}(t- )))\ |\ X_{1}(t-)=S]\\ &=\beta_{t}kf_{I}(t)+\beta_{t}\mathbb{E}[\mathcal{I}(X_{ \varnothing}(t-)))\ |\ X_{1}(t-)=S]\end{split} \tag{4.27}\] where the second equality follows from Proposition 4.7 and Proposition 4.8, and the first equality follows from an application of Proposition 4.7 with \(A=\{\varnothing\}\cup\{\ell v\}_{\ell\in\mathbb{N}\setminus\{1\},v\in\mathbb{ V}}\), \(\partial^{\vee}A=\{1\}\) and \(B=\{1m\}_{m\in\mathbb{N}}\). Setting \(\iota(t):=\beta_{t}\mathbb{E}[\mathcal{I}(X_{\varnothing}(t-)))\ |\ X_{1}(t-)=S]\), using the monotonicty (4.2) of the SIR process in the first equality and (4.27) in the second, we have \[\begin{split} r_{k}(t+h)=&\mathbb{P}(X_{1}(t+h)=S \ |X_{1}(t)=S\ d_{1}=k+1)r_{k}(t)\\ =&(1-hk\beta_{t}f_{I}(t)-h\iota(t))r_{k}(t),\end{split} \tag{4.28}\] and it follows that \[\dot{r}_{k}(t)=-(k\beta_{t}f_{I}(t)+\iota(t))r_{k}(t)\] and, since \(r_{k}(0)=s_{0}\) by Assumption D, \[r_{k}(t)=s_{0}e^{-k\int_{0}^{t}\beta_{s}f_{I}(s)ds+\int_{0}^{t}\iota(s)ds}. 
\tag{4.29}\] Next, observing that \(\mathbb{P}(d_{1}=k+1\ |\ 1\in\mathcal{T})=\hat{\theta}(k)\) since \(\mathcal{T}\) is a UGW\((\theta)\), and \[\mathbb{P}(X_{1}(t-)=S\ |\ 1\in\mathcal{T}) =\sum_{k\in\bar{\mathcal{N}}}\mathbb{P}(X_{1}(t-)=S\ |d_{1}=k+1,1\in\mathcal{T})\mathbb{P}(d_{1}=k+1\ |1\in\mathcal{T})\] \[=\sum_{k\in\bar{\mathcal{N}}}r_{k}(t)\hat{\theta}(k),\] the expression in (4.26) can be rewritten as \[\begin{split}\mathbb{E}[d_{1}-1\ |X_{1}(t-)=S]&=\sum_{k \in\bar{\mathcal{N}}}k\frac{\mathbb{P}(d_{1}=k+1\ |\ 1\in\mathcal{T})}{\mathbb{P}(X_{1}(t)=S\ |\ 1\in \mathcal{T})}r_{k}(t).\\ &=\sum_{k\in\bar{\mathcal{N}}}k\frac{\hat{\theta}(k)}{\sum_{\ell \in\bar{\mathcal{N}}}\hat{\theta}(\ell)r_{\ell}}r_{k}(t)\\ &=\sum_{k\in\bar{\mathcal{N}}}k\hat{\theta}(k)\frac{s_{0}e^{-k \int_{0}^{t}\beta_{s}f_{I}(s)ds+\int_{0}^{t}\iota(s)ds}}{\sum_{\ell\in\bar{ \mathcal{N}}}\hat{\theta}(\ell)s_{0}e^{-\ell\int_{0}^{t}\beta_{s}f_{I}(s)ds+ \int_{0}^{t}\iota(s)ds}}\\ &=\frac{\sum_{k\in\bar{\mathcal{N}}}k\hat{\theta}(k)e^{-k\int_{0 }^{t}\beta_{s}f_{I}(s)ds}}{\sum_{\ell\in\bar{\mathcal{N}}}\hat{\theta}(\ell)e^ {-\ell\int_{0}^{t}\beta_{s}f_{I}(s)ds}},\end{split} \tag{4.30}\] where in the third equality we used (4.29). Combining (4.30) and (4.24), and recalling that \(F_{I}(t):=\int_{0}^{t}\beta_{s}f_{I}(s)ds\), we obtain \[B_{1}(t)=\beta_{t}f_{I}(t)\frac{\sum_{k=0}^{\infty}k\hat{\theta}(k)e^{-kF_{I}( t)}}{\sum_{\ell=0}^{\infty}\hat{\theta}(\ell)e^{-\ell F_{I}(t)}}. \tag{4.31}\] As desired, this expresses \(B_{1}(t)\) purely in terms of \(\ \hat{\theta},\ f_{I}\) and \(F_{I}(t)\). Combining (4.31) with (4.20) and (4.23) establishes the first and second equation of (2.4), thus concluding the proof. **Remark 4.11**.: In the proof of Theorem 4.10 we showed that the jump rate \(\hat{q}_{v}(t,X_{\varnothing,1}^{\mathcal{T}})\) as defined in (4.13) is not path dependent on the event \(\{X_{\varnothing}(t-)=S\}\). By a similar argument that appeals to Proposition 4.7, one can show that \(\hat{q}_{v}(t,X_{\varnothing,1}^{\mathcal{T}})\) is also not path dependent on the event \(\{X_{\varnothing}(t-)=S\}^{c}\), thereby showing that \(X_{\varnothing,1}\) is a Markov process. The analogue of the latter property can also be shown to hold for the discrete-time SIR process using a similar (in fact, simpler) proof. Numerical evidence supporting this property for the discrete-time SIR process on trees was first provided in [38]. **Theorem 4.12**.: _Let \(f_{I}\) be as in (4.12) and set \(\mathcal{N}:=\{m\in\mathbb{N}_{0}\ :\ \theta(m)>0\}\). Then \(((P_{S,S;k},\)\(P_{S,I;k})_{k\in\mathcal{N}}\), \(P_{I,I})\), as defined in (4.12), is the unique solution to the following system of ODEs:_ \[\begin{cases}\dot{P}_{S,S;k}=-\beta kf_{I}P_{S,S;k},&k\in\mathcal{N},\\ \dot{P}_{S,I;k}=\beta kf_{I}P_{S,S;k}-\rho P_{S,I;k},&k\in\mathcal{N},\\ \dot{P}_{I,I}=-\rho P_{I,I},\end{cases} \tag{4.32}\] _with initial conditions_ \[\begin{cases}P_{S,S;k}(0)=1,&k\in\mathcal{N},\\ P_{S,I;k}(0)=0,&k\in\mathcal{N},\\ P_{I,I}(0)=1,\end{cases} \tag{4.33}\] Proof.: Throughout the proof, in order to simplify the notation we write \(X\) in lieu of \(X^{\mathcal{T}}\). By Assumption A, the fact that \(f_{I}\) defined in (4.12) is continuous (since by Theorem (4.10) it is characterized in terms of the solution of the ODE system (2.4)-(2.5)) and the fact that the ODE system (4.32) is linear, the initial value problem (4.32)-(4.33) has a unique solution. Clearly, from (4.12), the initial conditions (4.33) hold. Next, we show that (4.32) is satisfied. 
We start by considering \(P_{S,S;k}\). Fix \(t\geq 0\), \(h>0\) and \(k\in\mathcal{N}\). Since \(s_{0}=\mathbb{P}(X_{\varnothing}(0)=0)>0\) and \(d_{\varnothing}\) is independent of \(X_{\varnothing}(0)\), \(\mathbb{P}(X_{\varnothing}(0)=S,\ d_{\varnothing})>0\). From (4.12), noting that \(X_{\partial_{\varnothing}}(t)=y\in\mathcal{X}^{k}\) implicitly implies \(d_{\varnothing}=k\), we have \[\begin{split}& P_{S,S;k}(t+h)\\ &=\mathbb{P}(X_{\varnothing}(t+h)=S\ |\ X_{\varnothing}(0)=S,\ d_{ \varnothing}=k)\\ &=\sum_{y\in\mathcal{X}^{k}}\frac{\mathbb{P}(X_{\varnothing}(t+h) =S,\ X_{\varnothing}(0)=S,\ X_{\partial_{\varnothing}}(t)=y)}{\mathbb{P}(X_{ \varnothing}(0)=S,\ d_{\varnothing}=k)}\\ &=\sum_{y\in\mathcal{S}_{k},t}\mathbb{P}(X_{\varnothing}(t+h)=S |X_{\varnothing}(t)=S,X_{\partial_{\varnothing}}(t)=y)\mathbb{P}(X_{ \varnothing}(t)=S,X_{\partial_{\varnothing}}(t)=y|X_{\varnothing}(0)=S,d_{ \varnothing}=k),\end{split} \tag{4.34}\] where \(\mathcal{S}_{k,t}:=\{y\in\mathcal{X}^{k}\ :\ \mathbb{P}(X_{\varnothing}(t)=S,\ X_{ \partial_{\varnothing}}(t)=y)>0\}\), and the monotonicity (4.2) of the SIR process is used in the third and fourth equality. Since the jump rate of a susceptible individual whose neighbors' states are equal to \(y\in\mathcal{X}^{k}\) is equal to \(\beta_{t}\mathcal{I}(y)\), we have that \[\mathbb{P}(X_{\varnothing}(t+h)=S\ |\ X_{\varnothing}(t)=S,X_{\partial_{ \varnothing}}(t)=y)=1-h\beta_{t}\mathcal{I}(y)+o(h). \tag{4.35}\] The expression on the right-hand side of (4.35) does not depend on the exact values of the \(k-\mathcal{I}(y)\) elements of \(y\) that are not equal to \(I\). Thus, substituting the expression in (4.35) into the last line of (4.34) and rewriting the sum to be over the number of infected neighbors of \(\varnothing\), \[\begin{split} P_{S,S;k}(t+h)=&\sum_{j=0}^{k}(1-h \beta_{t}j+o(h))\ \mathbb{P}(X_{\varnothing}(t)=S,\ \mathcal{I}(X_{\partial_{ \varnothing}}(t))=j\ |\ X_{\varnothing}(0)=S,d_{\varnothing}=k)\\ =&\sum_{j=0}^{k}(1-h\beta_{t}j+o(h))\ \mathbb{P}( \mathcal{I}(X_{\partial_{\varnothing}}(t))=j\ |X_{\varnothing}(t)=S,\ X_{\varnothing}(0)=S,\ d_{ \varnothing}=k)\ P_{S,S;k}(t)\\ =&\sum_{j=0}^{k}(1-h\beta_{t}j+o(h))\ \mathbb{P}( \mathcal{I}(X_{\partial_{\varnothing}}(t))=j\ |X_{\varnothing}(t)=S,\ d_{\varnothing}=k)\ P_{S,S;k}(t),\end{split}\] where in the last equality we used the monotonicity of the SIR process (4.2). Applying Proposition 4.7 with \(\alpha=0\), it follows that \(\{X_{i}(t)\ :\ i\sim\varnothing\}\) are conditionally i.i.d. given \(\{X_{\varnothing}(t)=S,d_{\varnothing}=k\}\). Furthermore, for \(k\geq 1\) and \(m\in\mathbb{N}\cap[0,k]\), by Proposition 4.8 and another application of Proposition 4.7 with \(A=\{mv\}_{v\in\mathbb{V}}\), \(\partial^{\mathbb{V}}A=\{\varnothing\}\) and \(B=\mathbb{N}\setminus\{m\}\), and observing that \(d_{\varnothing}=\sum_{\ell=1}^{\infty}\mathbf{1}_{\{X_{\ell}(0)\neq\star\}}\), we have that \[\mathbb{P}(X_{m}(t)=I\ |\ X_{\varnothing}(t)=S,\ d_{\varnothing}=k)=\mathbb{P}(X_{ m}(t)=I\ |\ X_{\varnothing}(t)=S,\ m\in\mathcal{T})=f_{I}(t),\] where \(f_{I}\) is as in (4.12). Therefore, conditional on \(X_{\varnothing}(t)=S\) and \(d_{\varnothing}=k\), \(\mathcal{I}(X_{\partial_{\varnothing}}(t))\) has a binomial distribution with parameters \((k,f_{I}(t))\). 
It follows that, letting \(Y\) be a binomial random variable with parameters \((k,f_{I}(t))\), \[\begin{split} P_{S,S;k}(t+h)&=P_{S,S;k}(t)(\mathbb{ E}[1-h\beta_{t}Y]+o(h))\\ &=P_{S,S;k}(t)(1-h\beta_{t}kf_{I}+o(h)).\end{split} \tag{4.36}\] This implies \[\lim_{h\to 0^{+}}\frac{P_{S,S,k}(t+h)-P_{S,S;k}(t)}{h}=\lim_{h\to 0^{+}}\frac{(1-h \beta_{t}kf_{I}(t)+o(h)-1)P_{S,S;k}(t)}{h}=-\beta_{t}kf_{I}(t)P_{S,S;k}(t), \tag{4.37}\] which proves the first equation in (4.32) The derivations of the ODEs for \(P_{S,I;k}\) and \(P_{I,I}\) are similar, and are thus only outlined below. As in the last line of (4.34), we start by writing \[P_{S,I;k}(t+h)=\mathcal{Q}_{I}(h)+\mathcal{Q}_{S}(h), \tag{4.38}\] where for \(b=I\) and \(b=S\) \[\mathcal{Q}_{b}(h)=\sum_{j=0}^{k}\mathbb{P}(X_{\varnothing}(t+h)=I,\ X_{ \varnothing}(t)=b,\ \mathcal{I}(X_{\partial_{\varnothing}}(t))=j\ |\ X_{\varnothing}(0)=S,\ d_{ \varnothing}=k).\] Recalling the definition of the SIR rates \(q_{0}\) as in (4.3) and using arguments similar to what used to derive (4.35)-(4.36), \(\mathcal{Q}_{S}(h)=(h\beta_{t}kf_{I}(t)+o(h))P_{S,S;k}(t)\) and \[\begin{split}\mathcal{Q}_{I}(h)&=\sum_{j=0}^{k}(1- \rho_{t}h+o(h))\mathbb{P}(X_{\varnothing}(t)=I,\ \mathcal{I}(X_{\partial_{ \varnothing}}(t))=j\ |\ X_{\varnothing}(0)=S,\ d_{\varnothing}=k)\\ &=(1-\rho_{t}h+o(h))\sum_{j=0}^{k}\mathbb{P}(X_{\varnothing}(t)= I,\ \mathcal{I}(X_{\partial_{\varnothing}}(t))=j\ |\ X_{\varnothing}(0)=S,\ d_{\varnothing}=k)\\ &=(1-\rho_{t}h+o(h))\mathbb{P}(X_{\varnothing}(t)=I\ |\ X_{ \varnothing}(0)=S,\ d_{\varnothing}=k)\\ &=(1-\rho_{t}h+o(h))P_{S,I;k}(t).\end{split} \tag{4.39}\] Substituting the last two displays into (4.38), we obtain \(P_{S,I;k}(t+h)-P_{S,I;k}(t)=hk\beta_{t}f_{I}(t)P_{S,S;k}(t)-\rho_{t}hP_{S,I;k} (t)+o(h)\),which implies the second equation in (4.32). Next, to obtain the ODE for \(P_{I,I}\) note that by definition of the jump rate (4.3), setting \(\mathcal{N}:=\{k\in\mathbb{N}_{0}\ :\ \theta(k)>0\}\), \[\begin{split}& P_{I,I}(t+h)\\ &=\sum_{k\in\mathcal{N}}\sum_{y\in\mathcal{X}^{k}}\mathbb{P}(X_{ \varnothing}(t+h)=I|X_{\varnothing}(t)=I,X_{\partial_{\varnothing}}(t)=y) \mathbb{P}(X_{\varnothing}(t)=I,\ X_{\partial_{\varnothing}}(t)=y|X_{ \varnothing}(0)=I)\\ &=(1-h\rho_{t}+o(h))\sum_{k\in\mathcal{N}}\sum_{y\in\mathcal{X}^ {k}}\mathbb{P}(X_{\varnothing}(t)=I,\ X_{\partial_{\varnothing}}(t)=y\ |\ X_{\varnothing}(0)=I)\\ &=(1-h\rho_{t}+o(h))P_{I,I}(t),\end{split} \tag{4.40}\] which implies the third equation in (4.32) and concludes the proof. We can combine the results above to prove Theorem 2.5. Proof of Theorem 2.5.: By Theorem 4.5, \(\lim_{n\to\infty}s^{G_{n}}(t)=\mathbb{P}(X_{\varnothing}^{\mathcal{T}}(t)=S)\) and \(\lim_{n\to\infty}i^{G_{n}}(t)=\mathbb{P}(X_{\varnothing}^{\mathcal{T}}(t)=I)\). By Theorem 4.12, we can characterize the transition rates of \(X_{\varnothing}^{\mathcal{T}}(t)\) defined in (4.12) as the solution to the ODE system (4.32)-(4.33). 
Let \(f_{I}\) and \(f_{S}\) be as in (4.12), and \(F_{I}(t)=\int_{0}^{t}\beta_{s}f_{I}(s)ds\), \(t\in[0,\infty)\).Then we can solve the ODE system (4.32)-(4.33) as follows: \[P_{S,S;k}(t) =e^{-kF_{I}(t)},\] \[P_{S,I;k}(t) =e^{-\int_{0}^{t}\rho_{u}du}\int_{0}^{t}ke^{-kF_{I}(u)}e^{\int_{0} ^{u}\rho_{\tau}d\tau}\beta_{u}f_{I}(u)du,\] \[P_{I,I}(t) =e^{-\int_{0}^{t}\rho_{u}du}.\] In view of (4.12), by averaging over \(d_{\varnothing}\), that is, by multiplying each of the quantities above by \(\theta(k)=\mathbb{P}(d_{\varnothing}=k)\) and summing over \(k\in\mathbb{N}\), we conclude that \[P_{S,S}(t) =M_{\theta}(-F_{I}(t)),\] \[P_{S,I}(t) =e^{-\int_{0}^{t}\rho_{u}du}\int_{0}^{t}M^{\prime}_{\theta}(-F_{I }(u))e^{\int_{0}^{u}\rho_{\tau}d\tau}\beta_{u}f_{I}(u)du,\] where \(M^{\prime}_{\theta}(z)=\sum_{k=0}^{\infty}k\theta(k)e^{kz}\), and the exchange in order of summation and integration is justified by the fact that every term is non-negative. By Theorem 4.10, \((f_{S}\), \(f_{I}\), \(F_{I})\) solve the ODE system (2.4)-(2.5). Finally, since \(\mathbb{P}(X^{\mathcal{T}}_{\varnothing}(t)=S)=s_{0}P_{S,S}(t)\) and \(\mathbb{P}(X^{\mathcal{T}}_{\varnothing}(t)=I)=s_{0}P_{S,I}(t)+i_{0}P_{I,I}(t)\), equation (2.7) follows. This completes the proof. #### 4.2.3. Proof of Theorem 2.10 Now, we turn our attention to the SEIR process. Since its derivation is similar to that of Theorem 2.5, we relegate most of the details to Appendix A. For \(a,b\in\bar{\mathcal{X}}\), \(t\in[0,\infty)\) and \(k\in\{m\in\mathbb{N}_{0}\ :\ \theta(m)>0\}\), define \[Q^{\theta}_{a,b;k}(t) \coloneqq\mathbb{P}(\bar{X}^{\mathcal{T}}_{\varnothing}(t)=b\ |\ \bar{X}^{\mathcal{T}}_{\varnothing}(0)=a,\ d_{ \varnothing}=k),\] \[Q^{\theta}_{a,b}(t) \coloneqq\mathbb{P}(\bar{X}^{\mathcal{T}}_{\varnothing}(t)=b\ |\ \bar{X}^{ \mathcal{T}}_{\varnothing}(0)=a), \tag{4.41}\] \[g^{\theta}_{a}(t) \coloneqq\mathbb{P}(\bar{X}^{\mathcal{T}}_{1}(t)=a\ |\ \bar{X}^{ \mathcal{T}}_{\varnothing}(t)=S,\ 1\in\mathcal{T}).\] When clear from the context, we omit the dependence on \(\theta\) and write \(Q_{a,b},\ Q_{a,b;k}\) and \(g_{a}\). **Theorem 4.13**.: _Let \(g_{S}\), \(g_{E}\) and \(g_{I}\) be as in (4.41) and set \(G_{I}(t):=\int_{0}^{t}\beta_{s}g_{I}(s)ds\) for \(t\in[0,\infty)\). Then, \((g_{S}\), \(g_{E}\), \(g_{I}\), \(G_{I})\) solves the ODE system (2.11)-(2.12)._ The proof of Theorem 4.13 is given in Appendix A. **Theorem 4.14**.: _Let \(g_{I}\) be as in (4.41) and set \(\mathcal{N}:=\{m\in\mathbb{N}_{0}\ :\ \theta(m)>0\}\). Then \(((Q_{S,i;k})_{i\in\bar{\mathcal{X}}\setminus\{R\},k\in\mathcal{N}}\), \(Q_{E,E}\), \(Q_{E,I}\), \(Q_{I,I})\) is the unique solution to the following system of ODEs:_ \[\begin{cases}\dot{Q}_{S,S;k}=-\beta kg_{I}Q_{S,S;k},&k\in\mathcal{N},\\ \dot{Q}_{S,E;k}=\beta kg_{I}Q_{S,S;k}-\lambda Q_{S,E;k},&k\in\mathcal{N},\\ \dot{Q}_{S,I;k}=\lambda Q_{S,E;k}-\rho Q_{S,I;k},&k\in\mathcal{N},\\ \dot{Q}_{E,E}=-\lambda Q_{E,E},\\ \dot{Q}_{E,I}=\lambda Q_{E,E}-\rho Q_{E,I},\\ \dot{Q}_{I,I}=-\rho Q_{I,I},\end{cases} \tag{4.42}\] _with initial conditions_ \[\begin{cases}Q_{a,b}(0)=\mathbf{1}_{\{a=b\}},&a,b\in\bar{\mathcal{X}},\\ Q_{a,b;k}(0)=\mathbf{1}_{\{a=b\}},&a,b\in\bar{\mathcal{X}},\ k\in\mathcal{N}. \end{cases} \tag{4.43}\] The proof of Theorem 4.14 is given in Appendix A. We conclude this section by outlining how the last two theorems are used to prove Theorem 2.10. 
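As an aside, once \(g_{I}\) is available the linear system (4.42)-(4.43) is elementary to integrate numerically. The following sketch does this by forward Euler for a single degree \(k\); the constant rates and the ad hoc input \(g_{I}\) are illustrative assumptions (in the paper \(g_{I}\) is characterized through the ODE system (2.11)-(2.12)), and, under constant rates, the closed forms \(Q_{E,E}(t)=e^{-\lambda t}\) and \(Q_{I,I}(t)=e^{-\rho t}\) serve as a sanity check.

```python
import math

# Forward-Euler sketch of the linear ODE system (4.42) with initial conditions (4.43),
# for one fixed degree k.  The rates beta, lam, rho are constant and g_I is a
# hypothetical input supplied as a function of time (both are assumptions made for
# illustration only).
def solve_Q(k, beta, lam, rho, g_I, t_max, dt=1e-3):
    QSS, QSE, QSI = 1.0, 0.0, 0.0   # Q_{S,S;k}, Q_{S,E;k}, Q_{S,I;k}
    QEE, QEI, QII = 1.0, 0.0, 1.0   # Q_{E,E},   Q_{E,I},   Q_{I,I}
    t = 0.0
    while t < t_max:
        gI = g_I(t)
        dQSS = -beta * k * gI * QSS
        dQSE = beta * k * gI * QSS - lam * QSE
        dQSI = lam * QSE - rho * QSI
        dQEE = -lam * QEE
        dQEI = lam * QEE - rho * QEI
        dQII = -rho * QII
        QSS += dt * dQSS; QSE += dt * dQSE; QSI += dt * dQSI
        QEE += dt * dQEE; QEI += dt * dQEI; QII += dt * dQII
        t += dt
    return QSS, QSE, QSI, QEE, QEI, QII

# toy run with a made-up g_I; Q_{E,E}(5) and Q_{I,I}(5) should match exp(-lam*5) and exp(-rho*5)
out = solve_Q(k=3, beta=1.0, lam=0.5, rho=0.7, g_I=lambda t: 0.1 * math.exp(-0.3 * t), t_max=5.0)
print(out[3], math.exp(-0.5 * 5.0))
print(out[5], math.exp(-0.7 * 5.0))
```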
Proof of Theorem 2.10.: By Theorem 4.5, \(\lim_{n\to\infty}\bar{s}^{G_{n}}(t)=\mathbb{P}(\bar{X}_{\varnothing}^{\mathcal{T}}( t)=S)\), \(\lim_{n\to\infty}\bar{e}^{G_{n}}(t)=\mathbb{P}(\bar{X}_{\varnothing}^{\mathcal{T}}( t)=E)\), and \(\lim_{n\to\infty}\bar{i}^{G_{n}}(t)=\mathbb{P}(\bar{X}_{\varnothing}^{\mathcal{T}}( t)=I)\). By Theorem 4.14, we can characterize the transition rates of \(\bar{X}_{\varnothing}^{\mathcal{T}}(t)\), given in (4.41) as the solution to the system of ODEs (4.42)-(4.43). We can solve these ODEs to obtain an expression for \(Q_{S,S;k},\ Q_{S,E;k}\), in terms of \(g_{S},\ g_{E}\) and \(g_{I}\) (defined in (4.41)) which, along with \(G_{I}(t)=\int_{0}^{t}g_{I}(s)ds\) for \(t\in[0,\infty)\), by Theorem 4.13, solve the ODEs (2.11)-(2.12). Observing that \(Q_{S,b}(t)=\sum_{k\in\mathbb{N}_{0}}\theta(k)Q_{S,b;k}(t)\) for all \(t\in[0,\infty)\) and \(b\in\{S,E\}\), and noting that \[\mathbb{P}(\bar{X}_{\varnothing}^{\mathcal{T}}(t)=S) = s_{0}Q_{S,S}(t),\] \[\mathbb{P}(\bar{X}_{\varnothing}^{\mathcal{T}}(t)=E) = s_{0}Q_{S,E}(t)+e_{0}Q_{E,E}(t)\] \[\mathbb{P}(\bar{X}_{\varnothing}^{\mathcal{T}}(t)=I) = s_{0}Q_{S,I}(t)+e_{0}Q_{E,I}(t)+i_{0}Q_{I,I}(t)\] establishes the theorem. ### Proofs related to the Outbreak Size In this section, we prove Theorem 3.1 and Theorem 3.5, which characterize the large \(n\) limit of the total fraction of individuals still susceptible at the end of an SIR or SEIR outbreak on the locally-tree like graph sequences we consider. Recall the standing assumptions made at the beginning of Section 4. We start by introducing some notation to simplify the exposition. First, define \[\underline{d}_{\hat{\theta}}:=\min\{d\in\mathbb{N}_{0}\ :\ \hat{\theta}(d)>0\}, \tag{4.44}\] and recalling that \(M_{\hat{\theta}}\) is the moment generating function of \(\hat{\theta}\), set \[\Phi(z):=\Phi_{\hat{\theta}}(z):=\frac{M_{\hat{\theta}}^{\prime}(-z)}{M_{\hat {\theta}}(-z)},\quad z\in[0,\infty). \tag{4.45}\] For all \(z\in[0,\infty)\) we have \(M_{\hat{\theta}}(-z)=\mathbb{E}_{\hat{\theta}}[e^{-dz}]\leq\mathbb{E}_{\hat{ \theta}}[1]=1\). Furthermore, for \(z\in[0,\infty)\), \(M_{\hat{\theta}}^{\prime}(-z)=\sum_{k=1}^{\infty}ke^{-kz}\hat{\theta}(k)\), where the interchange of the sum and derivative is justified because \(ke^{-kz}\leq k\) and \(\hat{\theta}\) has finite mean. We start with an elementary lemma. **Lemma 4.15**.: \(\Phi_{\hat{\theta}}:[0,\infty)\to[0,\infty)\) _is continuous and satisfies the following properties:_ 1. \(\Phi_{\hat{\theta}}(0)=\mathbb{E}_{\hat{\theta}}[d]\)_;_ 2. \(\lim_{z\to\infty}\Phi_{\hat{\theta}}(z)=\underline{d}_{\hat{\theta}}\)_;_ 3. \(\Phi_{\hat{\theta}}(z)\) _is non-increasing in_ \(z\in[0,\infty)\)_, and strictly decreasing if for every_ \(j\in\mathbb{N}_{0}\)_,_ \(\hat{\theta}\neq\delta_{j}\)_._ Proof.: The property _(i)_ follows immediately from the relation \(\Phi_{\hat{\theta}}(0)=\mathbb{E}_{\hat{\theta}}[d]/\mathbb{E}_{\hat{\theta}} [1]=\mathbb{E}_{\hat{\theta}}[d]\). The stated continuity of \(\Phi_{\hat{\theta}}\) follows from the dominated convergence theorem and the fact that \(\hat{\theta}\) has finite mean, which follows from (2.2) and Assumption B. In turn, by the dominated convergence theorem, the latter implies that \(\lim_{z\to\infty}\mathbb{E}_{\hat{\theta}}[de^{-dz}]=0\). If \(\underline{d}_{\hat{\theta}}=0\) then \(\lim_{z\to\infty}\mathbb{E}_{\hat{\theta}}[e^{-dz}]=\hat{\theta}(0)>0\), and by (4.45) it follows that \(\lim_{z\to\infty}\Phi_{\hat{\theta}}(z)=0=\underline{d}_{\hat{\theta}}\). 
On the other hand, if \(\underline{d}_{\hat{\theta}}>0\), then \[\lim_{z\to\infty}\Phi_{\hat{\theta}}(z)=\lim_{z\to\infty}\frac{\underline{d}_{ \hat{\theta}}e^{-\underline{d}_{\hat{\theta}}z}+\sum_{j=\underline{d}_{\hat{ \theta}}+1}^{\infty}je^{-jz}}{e^{-\underline{d}_{\hat{\theta}}z}+\sum_{j= \underline{d}_{\hat{\theta}}+1}^{\infty}e^{-jz}}=\lim_{z\to\infty}\frac{ \underline{d}_{\hat{\theta}}e^{-\underline{d}_{\hat{\theta}}z}}{e^{- \underline{d}_{\hat{\theta}}z}}=\underline{d}_{\hat{\theta}}. \tag{4.46}\] This proves _(ii)_. Next, observe that \[\Phi_{\hat{\theta}}(z)=-\frac{d}{dz}\log M_{\hat{\theta}}(-z)=\frac{d}{d(-z)} \log M_{\hat{\theta}}(-z).\] Since the moment generating function of any measure in \(\mathcal{P}(\mathbb{R})\) is log-convex (which follows from an application of Holder's inequality), and strictly log-convex unless the measure is equal to \(\delta_{x}\) for \(x\in\mathbb{R}\), _(iii)_ follows. We now prove Theorem 3.1. Proof of Theorem 3.1.: Let \(f_{I}\) and \(f_{S}\) be as in (4.12) and set \(F_{I}(t)=\int_{0}^{t}\beta_{s}f_{I}(s)ds\). By (2.7), \(s^{(\infty)}(t)=s_{0}M_{\theta}(-F_{I}(t))\). By the dominated convergence theorem, \(z\mapsto M_{\theta}(-z)=\sum_{k=0}^{\infty}\theta(k)e^{-zk}\) is continuous on \((0,\infty)\). We now turn to the study of the large-time limit of \(F_{I}\). By Theorem 2.10, \((f_{S},f_{I},F_{I})\) satisfy the ODE system (2.4). For any \(a\in[0,1]\) and \(b\in(0,\infty)\), the point \((a,0,b)\) is a fixed point of the system. We claim that as \(t\to\infty\), \((f_{S}(t),\,f_{I}(t),\,F_{I}(t))\) converges to one such point, and then identify the corresponding \(b\) as the solution of an equation. Near any \(t\geq 0\) such that \(f_{I}(t)>0\), \(F_{I}\) is strictly increasing, and thus it is invertible. Let \(F_{I}(\infty):=\lim_{t\to\infty}F_{I}(t)\), which exists since \(F_{I}\) is non-decreasing. We can change variables for \(F\in[0,F_{I}(\infty)]\) and write \(x(F):=f_{I}(F_{I}^{-1}(F))\) and \(y(F):=f_{S}(F_{I}^{-1}(F))\). We write \(\beta^{*}\) (resp. \(\rho^{*}\)) for the composition of \(\beta\) (resp. \(\rho\)) with \(F_{I}^{-1}\). Recalling the definition of \(\Phi\) in (4.45), we rewrite the first two equations in (2.4) as \[\begin{cases}y^{\prime}=y(1-\Phi)\\ x^{\prime}=y\Phi-(1+\frac{\rho^{*}}{\beta^{*}})+x,\end{cases} \tag{4.47}\] Since \(F_{I}(0)=0\), and \(f_{S}(0)=s_{0}\), we can solve the first equation to obtain \(\log(y(F)/s_{0})=F+\log M_{\hat{\theta}}(-F)\), which is equivalent to \[y(F)=s_{0}M_{\hat{\theta}}(-F)e^{F}. \tag{4.48}\] Substituting this into the second equation in (4.47), we obtain a linear ODE for \(x\). Recalling that \(x(0)=f_{I}(0)=i_{0}\) and that \(i_{0}+s_{0}=1\), we solve this equation to obtain \[\begin{split} x(F)&=i_{0}e^{F}+e^{F}\int_{0}^{F}s_{0 }M_{\hat{\theta}}(-z)\Phi(z)dz-e^{F}\int_{0}^{F}e^{-z}\left(1+\frac{\rho^{*}( z)}{\beta^{*}(z)}\right)dz\\ &=i_{0}e^{F}+e^{F}s_{0}(1-M_{\hat{\theta}}(-F))-e^{F}\int_{0}^{F}e ^{-z}\left(1+\frac{\rho^{*}(z)}{\beta^{*}(z)}\right)dz\\ &=e^{F}-y(F)-e^{F}(1-e^{-F})-e^{F}\int_{0}^{F}e^{-z}\frac{\rho^{*}( z)}{\beta^{*}(z)}dz\\ &=1-y(F)-e^{F}\int_{0}^{F}e^{-z}\frac{\rho^{*}(z)}{\beta^{*}(z)}dz, \end{split} \tag{4.49}\] where in the second line we used the fact that \(M_{\hat{\theta}}(0)=1\), and in the first and third line we applied (4.48). We now claim that (4.49) shows that \(F_{I}(\infty)<\infty\). Since \(F_{I}(t)=\int_{0}^{t}\beta_{s}f_{I}(s)ds\) and \(\beta\) satisfies Assumption A, this implies that \(\lim_{t\to\infty}f_{I}(t)=0\). 
First, observe that, if there exists \(s\in[0,\infty)\) such that \(f_{I}(s)=0\), then, by (2.4), \(f_{I}(t)=0\) for all \(t\geq s\). Next, suppose for the sake of contradiction that \(F_{I}(\infty)=\infty\). Then, for all \(t\geq 0\), \(f_{I}(t)>0\). By Assumption A, it then follows that \(\int_{0}^{t}e^{-F_{I}(s)}\rho_{s}f_{I}(s)ds>0\) for all \(t>0\). By definition, \(f_{S}(t)\in[0,1]\), and so \(y(F)\in[0,1]\) for all \(F\in[0,F_{I}(\infty))\). In particular, \(\liminf_{F\to\infty}y(F)\geq 0\). But letting \(F\to\infty\), (4.49) then implies that \(\lim_{F\to\infty}x(F)=\lim_{t\to\infty}f_{I}(t)=-\infty\), which is a contradiction. Therefore, we conclude that \(F_{I}(\infty)<\infty\) and, thus, \(\lim_{t\to\infty}f_{I}(t)=0\). Since \(\lim_{t\to\infty}f_{I}(t)=0\), by setting \(x(F_{I}(\infty))=0\) in (4.49), we obtain \[y(F_{I}(\infty))=1-e^{F_{I}(\infty)}\int_{0}^{F_{I}(\infty)}e^{-z}\frac{\rho^{*} (z)}{\beta^{*}(z)}dz=1-e^{F_{I}(\infty)}\int_{0}^{\infty}e^{-\int_{0}^{u}\beta _{\tau}f_{I}(\tau)d\tau}\rho_{u}f_{I}(u)du. \tag{4.50}\] When combined, (4.48) and (4.50) establish (3.1). If there exists \(r\in(0,\infty)\) such that \(\rho_{t}/\beta_{t}=r\) for all \(t\), then the integral in the rightmost expression in (3.1) is equal to \(r(1-e^{-F_{I}(\infty)})\), and thus (3.1) reduces to (3.2). Let \(\Psi_{r}\) be given by (3.4). Using the fact that moment generating functions are log-convex, it follows that \(\Psi_{r}\) is convex. Furthermore, \(\Psi_{r}\) is continuous on \([0,\log(1+1/r))\), \(\Psi_{r}(0)=\log(s_{0})<0\) and \(\lim_{z\to(\log(1+1/r))^{-}}\Psi_{r}(z)=\infty\). Therefore, (3.2) has a unique positive solution. This concludes the proof. We conclude this section by providing a similar characterization of the outbreak size for the SEIR process. Proof of Theorem 3.5.: Let \(g_{S},g_{E},g_{I}\) be as in (4.41), and set \(G_{I}(t):=\int_{0}^{t}\beta_{s}g_{I}(s)ds\) for \(t\in[0,\infty)\). Note that \(\bar{s}^{(\infty)}(t)=s_{0}M_{\theta}(-G_{I}(t))\) by (2.13), and by the dominated convergence theorem, \(M_{\theta}\) (the moment generating function of \(\theta\)) is continuous on \((-\infty,0)\). We now study the large-time limit of \(G_{I}\). By Theorem 4.13, (\(g_{S}\), \(g_{E}\), \(g_{I}\), \(G_{I}\)) satisfy the system of ODEs (2.11). Near any \(t\geq 0\) such that \(g_{I}(t)>0\), \(G_{I}\) is strictly increasing, and, therefore invertible. Let \(G_{I}(\infty):=\lim_{t\to\infty}G_{I}(t)\), which exists since \(G_{I}\) is non-decreasing. We can change variables for \(G\in[0,G_{I}(\infty)]\), write \(x(G):=g_{I}(G_{I}^{-1}(G))\), \(z(G):=g_{E}(G_{I}^{-1}(G))\) and \(y(G):=g_{S}(G_{I}^{-1}(G))\). We write \(\beta^{*}\) (resp. \(\rho^{*}\), \(\lambda^{*}\)) for the composition of \(\beta\) (resp. \(\rho\), \(\lambda\)) with \(G_{I}^{-1}\). By (2.11), letting apostrophe denote differentiation with respect to \(G\), we have \[\begin{cases}y^{\prime}=y(1-\Phi)\\ z^{\prime}=y\Phi-\frac{z\lambda^{*}}{x\beta^{*}}+z\\ x^{\prime}=\frac{z\lambda^{*}}{x\beta^{*}}-(1+\frac{\rho^{*}}{\beta^{*}})+x. \end{cases} \tag{4.51}\] If we let \(\bar{x}=x+z\), then \(y,\bar{x}\) satisfy \[\begin{cases}y^{\prime}=y(1-\Phi)\\ \bar{x}^{\prime}=y\Phi-(1+\frac{\rho^{*}}{\beta^{*}})+\bar{x}\end{cases} \tag{4.52}\] with \(y(0)=s_{0}\), \(\bar{x}(0)=1-s_{0}\), which is the same initial value problem as (4.47). The same argument as that used in the proof of Theorem 3.1 can then be used to conclude the proof. 
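The outbreak-size characterization just established is also easy to explore numerically. The sketch below integrates the system from the proof of Theorem 4.10 (with \(B_{1}\) as in (4.31)) and compares the long-time limit of \(F_{I}\) with the root of the relation obtained by combining (4.48) and (4.50) in the constant-ratio case \(\rho_{t}/\beta_{t}\equiv r\). All concrete choices are illustrative assumptions: constant rates, and \(\theta=\hat{\theta}=\mathrm{Poisson}(\mu)\), which presumes the usual size-biasing convention for \(\hat{\theta}\) in (2.2) (under which a Poisson offspring distribution is left unchanged).

```python
import math

# Numerical sketch: integrate f_S, f_I, F_I as in the proof of Theorem 4.10, with
# B_1 given by (4.31), then compare F_I at large times with the fixed point obtained
# by combining (4.48) and (4.50) when rho_t/beta_t = r.  Assumptions for illustration:
# constant rates and theta = theta_hat = Poisson(mu).

def M_hat(z, mu):           # moment generating function of Poisson(mu) evaluated at z
    return math.exp(mu * (math.exp(z) - 1.0))

def Phi(z, mu):             # Phi(z) = M_hat'(-z)/M_hat(-z), cf. (4.45); equals mu*e^{-z} here
    return mu * math.exp(-z)

def integrate(beta, rho, s0, mu, t_max=200.0, dt=1e-3):
    fS, fI, FI = s0, 1.0 - s0, 0.0                       # initial conditions (2.5)
    t = 0.0
    while t < t_max:
        B1 = beta * fI * Phi(FI, mu)                     # (4.31)
        dfS = beta * fS * fI - fS * B1                   # (4.20)
        dfI = fS * B1 - fI * (rho + beta - beta * fI)    # (4.23)
        dFI = beta * fI
        fS += dt * dfS; fI += dt * dfI; FI += dt * dFI
        t += dt
    return FI

def fixed_point(r, s0, mu):
    # root of s0*M_hat(-F)*e^F = 1 - r*(e^F - 1) on (0, log(1+1/r)), found by bisection
    g = lambda F: s0 * M_hat(-F, mu) * math.exp(F) - 1.0 + r * (math.exp(F) - 1.0)
    lo, hi = 1e-12, math.log(1.0 + 1.0 / r) - 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

beta, rho, s0, mu = 1.0, 0.5, 0.99, 3.0
FI_ode = integrate(beta, rho, s0, mu)
FI_fix = fixed_point(rho / beta, s0, mu)
print(FI_ode, FI_fix)                                              # should agree to a few decimals
print("limiting susceptible fraction:", s0 * M_hat(-FI_fix, mu))   # s_0 * M_theta(-F_I(infinity)), cf. (2.7)
```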
For the sake of completeness, we also include here the special case of the 2-regular tree (i.e., the infinite line graph), with constant \(\rho\) and \(\beta\), where we can obtain an explicit expression for \(\int_{0}^{t}\beta_{s}f_{I}(s)ds\) for all \(t\in[0,\infty]\). **Proposition 4.16**.: _Let \(T_{2}=\)UGW\((\delta_{2})\), and suppose that there exist \(r,\ b>0\) such that for all \(t\in[0,\infty)\), \(\rho_{t}=r\) and \(\beta_{t}=b\). Then, for all \(t\in[0,\infty)\),_ \[P_{S,S}(t)=\left(\frac{(1-s_{0})e^{-t(b(1-s_{0})+r)}+\frac{r}{b}}{1-s_{0}+ \frac{r}{b}}\right)^{2}, \tag{4.53}\] _and, hence,_ \[\lim_{t\to\infty}\mathbb{P}(X_{\varnothing}^{T_{2}}(t)=S)=s_{0}\left(\frac{1} {1+(1-s_{0})\frac{b}{r}}\right)^{2}. \tag{4.54}\] Proof.: Let \(P_{S,S}\), \(f_{I}\) and \(f_{S}\) be as in (4.12). By Theorem 4.12, \(\dot{P}_{S,S}=-b2f_{I}P_{S,S}\), and therefore \[P_{S,S}(t)=\exp(-2b\int_{0}^{t}f_{I}(s)ds). \tag{4.55}\] Setting \(\theta=\delta_{2}\) and, thus, \(\hat{\theta}=\delta_{1}\), the first equation in (2.4) reduces to \(\dot{f}_{S}(t)=0\). Since \(f_{S}(0)=s_{0}\) and \(f_{I}(0)=i_{0}=1-s_{0}\), the second equation in (2.4) reduces to \[\dot{f}_{I}(t)=-(bf_{I}(0)+r)f_{I}(t)+b(f_{I})^{2}. \tag{4.56}\] This is a Bernoulli equation that can be solved explicitly. The constant \(0\) function is a solution. For the rest of this proof, we assume that \(f_{I}(0)\in(0,1)\). Let \(a=-(bf_{I}(0)+r)\), so that (4.56) is \(\dot{f}_{I}=af_{I}+bf_{I}^{2}\). Let \(\tau:=\inf\{t>0\ :f_{I}(t)=0\}\). For \(t\in[0,\tau)\), we can divide both sides of the ODE by \((f_{I})^{2}\). \[\frac{\dot{f}_{I}}{(f_{I})^{2}}-\frac{a}{f_{I}}=b. \tag{4.57}\] If we set \(u(t)=1/f_{I}(t)\), for \(t\in[0,\tau)\), then \(\dot{u}=-(f_{I})^{-2}\dot{f}_{I}\) and the ODE in (4.57) takes the form \[\dot{u}+au=-b.\] This is a linear equation whose explicit solution is \[u(t)=\frac{b}{a}(e^{-ta}-1)+e^{-ta}u(0), \tag{4.58}\] which does not blow up in finite time, and therefore \(\tau=\infty\). Since \(f_{I}(t)=1/u(t)\), (4.58) implies \[f_{I}(t)=\frac{f_{I}(0)(f_{I}(0)+\frac{r}{b})}{f_{I}(0)+\frac{r}{b}e^{(bf_{I} (0)+r)t}},\] which can be integrated to conclude that \[\int_{0}^{t}f_{I}(s)ds=t\left(f_{I}(0)+\frac{r}{b}\right)-\frac{1}{b}\log \left(f_{I}(0)+\frac{r}{b}e^{(bf_{I}(0)+r)t}\right)+\frac{1}{b}\log\left(f_{I} (0)+\frac{r}{b}\right).\] This, combined with (4.55), yields (4.53). Since \(\mathbb{P}(X_{\varnothing}^{T_{2}}(t)=S)=s_{0}P_{S,S}(t)\), letting \(t\to\infty\), we obtain (4.54). ## 5. Proof of the Conditional Independence Property In Section 5.2, we prove the conditional independence property stated in Proposition 4.7 and the symmetry property stated in Proposition 4.8. The proof relies on a certain change of measure result established in [13], which we first summarize in Section 5.1. Throughout, \(\theta\in\mathcal{P}(\mathbb{N}_{0})\) has finite third moment, \(\mathcal{T}\) is a \(\mathrm{UGW}(\theta)\) tree, \(\alpha\in[0,1]\) is an interpolation parameter, the rates \(\beta\), \(\lambda\) and \(\rho\) satisfy Assumption C, and \(\xi^{\mathcal{T},\alpha}\) is the hybrid S(E)IR process solving (4.4), with initial states satisfying Assumption D. ### A Radon-Nikodym derivative We let \(\mu:=\mathcal{L}(\xi^{\mathcal{T},\alpha})\) and \(\mu_{t-}=\mathcal{L}(\xi^{\mathcal{T},\alpha}[t))\) for \(t\in(0,\infty)\). Given \(U\subset\mathbb{V}\) and \(t\in(0,\infty)\), let \(\mathcal{D}^{U}_{t-}:=\mathcal{D}([0,t):\bar{\mathcal{X}}^{U}_{\star})\) be the set of cadlag functions \([0,t)\to\bar{\mathcal{X}}^{U}_{\star}\). 
We start with two technical definitions. **Definition 5.1**.: Given \(y\in\mathcal{D}_{\star}\) and \(t\in[0,\infty)\) we let \(\mathrm{Disc}_{t}(y):=\{s\in(0,t]\ :\ y(s-)\neq y(s)\}\). We say that \(x\in\mathcal{D}^{\mathbb{V}}_{\star}\) is _proper_ if for every \(u,v\in\mathbb{V}\) and \(t\in[0,\infty)\), \(\mathrm{Disc}_{t}(x_{v})\cap\mathrm{Disc}_{t}(x_{u})=\emptyset\). **Definition 5.2**.: Fix \(U\subset\mathbb{V}\) finite, \(t\in(0,\infty)\) and suppose that \(x\in\mathcal{D}^{U}_{t-}\) is proper and \(\mathrm{Disc}_{\infty}(x):=\cup_{s\in(0,\infty)}\mathrm{Disc}_{s}(x)\) can be ordered as a strictly increasing sequence \(\{t_{k}(x)\}\). Then the _jump characteristics_ of \(x\) are the elements \(\{(t_{k}(x),j_{k}(x),v_{k}(x))\}\subset(0,\infty)\times\mathcal{J}\times U\) where for each \(k\in\mathbb{N}\) with \(k\leq|\mathrm{Disc}_{\infty}(x)|\), \(v_{k}=v_{k}(x)\) is a vertex in \(U\) such that \(x_{v_{k}}\) is discontinuous at time \(t_{k}(x)\), and \(j_{k}(x)\) is the size of the jump \(x_{v_{k}}(t_{k}(x))-\lim_{h\to 0^{+}}x_{v_{k}}(t_{k}(x)-h)\). Given \(U\subset\mathbb{V}\) and \(t\in(0,\infty)\), we also define a function \(\psi\) from the set of functions \([0,t)\to\bar{\mathcal{X}}\) into \(\{0,1\}\) by \[\psi(x)=\mathbf{1}_{\{x\in\mathcal{D}^{U}_{t-}\}}\mathbf{1}_{\{\{v\in U\,:\,x _{v}(0)\neq\star\}\text{ is a locally finite tree}\}}. \tag{5.1}\] We also recall that \(\mathbb{V}_{n}=\{\varnothing\}\cup(\cup_{k=1}^{n}\mathbb{N}^{k})\). We now state a change of measure result that is established, for general interacting jump processes, in [13]. In the sequel, the exact definition of the reference processes \(\hat{\xi}^{n}\) presented below will not be important, and so we state the following proposition to summarize some key properties we use. **Proposition 5.3**.: _For each \(n\in\mathbb{N}\setminus\{1\}\,,\) there exists an \(\bar{\mathcal{X}}^{\mathbb{V}}_{\star}\)-valued process \(\hat{\xi}^{n}:=\hat{\xi}^{n,\alpha}\) such that for any \(t\in(0,\infty)\), \(A,B\subset\mathbb{V}\) with \(\partial^{\mathbb{V}}A\subset\mathbb{V}_{n-1}\) and \((A\cup\partial^{\mathbb{V}}A)\cap B=\emptyset\),_ \[\hat{\xi}^{n}_{A}[t]\perp\hat{\xi}^{n}_{B}[t]\ |\ \hat{\xi}^{n}_{\partial^{ \mathbb{V}}A}[t]. \tag{5.2}\] _Furthermore, \(\hat{\xi}^{n}_{\mathbb{V}_{n}}\) is almost surely proper and its jump characteristics \(\{(t^{n}_{i},v^{n}_{i},j^{n}_{i})\}\) are well-defined. Moreover, for every \(t\in(0,\infty)\), \(\hat{\mu}^{n}_{t-}:=\mathcal{L}(\hat{\xi}^{n}[t))\) has the property that, almost surely_ \[\frac{d\mu_{t-}}{d\hat{\mu}^{n}_{t-}}(\hat{\xi}^{n}[t))=\psi(\hat{\xi}^{n}[t]) )\exp\left(-\sum_{\begin{subarray}{c}v\in\mathbb{V}_{n}\\ j=1,2\end{subarray}}\int_{(0,t)}(q_{\alpha}(j,s,\hat{\xi}^{\partial}_{v},\hat {\xi}^{n}_{\partial_{v}})-1)ds\right)\prod_{0<t^{n}_{i}<t}q_{\alpha}(j^{n}_{i},t^{n}_{i},\hat{\xi}^{n}_{v^{n}_{j}},\hat{\xi}^{n}_{\partial_{v^{n}_{i}}}), \tag{5.3}\] _where \(\psi\) is defined in (5.1), and \(q_{\alpha}\) is given in (4.3)._ Proof.: An explicit definition of the processes \(\hat{\xi}^{n}\) as a solution of a SDE related to (4.4) is given in [13, (4.3)] by substituting the rate function \(q_{\alpha}\) to the rate functions \(r^{v}\) used therein. Assumption 4.1 in [13], i.e., the well-posedness of \(\hat{\xi}^{n}\), follows from an application of [12, Theorem C.2] on observing that [12, Assumption C.1] holds by Assumption B, the definition of \(q_{\alpha}\) in (4.3), and the form of the driving noises in (4.4). 
Assumption C implies that [13, Assumption 3.1, Assumption 3.4] holds with \(q_{\alpha}\) in place of \(r^{v}\). Then, [13, Proposition 4.4] establishes (5.2). By [13, Lemma 4.8], \(\hat{\xi}^{n}_{\mathbb{V}_{v}}\) is proper. Finally, (5.3) holds by [13, Corollary 4.11]. Given \(n\in\mathbb{N}\setminus\{1\}\), \(A\subset\mathbb{V}_{n}\) and \(t>0\) we define \[\mathcal{E}^{t}_{n}(A):=\{\hat{\xi}^{n}_{\partial^{\mathbb{V}}A}(t-)=S_{ \partial^{\mathbb{V}}A}\}=\bigcap_{v\in\partial^{\mathbb{V}}A}\{\hat{\xi}^{n }_{v}(t-)=S\}. \tag{5.4}\] ### Proof of Proposition 4.7 We start by establishing a factorization result for the Radon-Nikodym derivative established in 5.1. We recall that \(\partial^{\mathbb{V}}_{v}\) denote the neighborhood in \(\mathbb{V}\) of \(v\in\mathbb{V}\). We also set \(C_{v}:=C^{\mathbb{V}}_{v}=\{vk\}_{k\in\mathbb{N}}\). **Lemma 5.4**.: _Let \(n\in\mathbb{N}\setminus\{1\}\) and fix \(A,B\subset\mathbb{V}\) with \(\partial^{\mathbb{V}}A\subset\mathbb{V}_{n-1}\) and such that \(A,\ \partial^{\mathbb{V}}A\) and \(B\) form a partition of \(\mathbb{V}\). Let \(\hat{\mu}^{n}_{t-}\) and \(\hat{\xi}^{n}_{t-}\) be as in Proposition 5.3. Then there exist measurable functions \(\tilde{f}_{1}:\mathcal{D}^{A\cup\partial^{\mathbb{V}}A}_{t-}\to[0,\infty)\) and \(\tilde{f}_{2}:\mathcal{D}^{B\cup\partial^{\mathbb{V}}A}_{t-}\to[0,\infty)\) such that for every \(t\in(0,\infty),\)_ \[\frac{d\mu_{t-}}{d\hat{\mu}^{n}_{t-}}(\hat{\xi}^{n}[t))=\ \tilde{f}_{1}(\hat{\xi}^{n}_{A\cup\partial^{ \mathbb{V}}A}[t))\ \tilde{f}_{2}(\hat{\xi}^{n}_{B\cup\partial^{\mathbb{V}}A}[t)),\qquad\text{a.s. on }\{\{\hat{\xi}^{n}_{\partial^{ \mathbb{V}}A}(t-)=S_{\partial^{\mathbb{V}}A}\}}. \tag{5.5}\] Proof.: Fix \(n,\ A,\ B\) as in the statement of the lemma, and \(t\in(0,\infty)\). Set \(\mathcal{E}_{n}:=\mathcal{E}_{n}^{t}(A)\), where the latter is defined in (5.4). For \(v\in\mathbb{V}_{n}\) and \(x\in\mathcal{D}_{t-}^{\mathbb{V}_{n}}\) proper define \[\gamma_{v}(x_{v},x_{\partial_{w}^{\mathbb{V}}}):=\left[\prod_{0<t_{k}(x_{v})<t }q_{\alpha}(j_{k}(x_{v}),t_{k}(x_{v}),x_{v},x_{\partial_{w}^{\mathbb{V}}}) \right]e^{-\sum_{j=1,2}\int_{(0,t)}\left(q_{\alpha}(j,s,x_{v},x_{\partial_{w}^{ \mathbb{V}}})-1\right)ds}, \tag{5.6}\] where \(\{(t_{k}(x_{v}),v,j_{k}(x_{v}))\}\) are the jump characteristics of \(x_{v}\). When \(x\in\mathcal{D}_{t-}^{\mathbb{V}_{n}}\) is not proper, set \(\gamma_{v}(x_{v},x_{\partial_{w}^{\mathbb{V}}}):=0\). Also, for \(v\in\mathbb{V}\), \(y:[0,t)\rightarrow\bar{\mathcal{X}}_{\star}\), \(b\in\bar{\mathcal{X}}_{\star}\) and \(z\in(\bar{\mathcal{X}}_{\star})^{\infty}\), define \[\psi_{v}(y,b,z):=\begin{cases}1-\mathbf{1}_{\{y(0)\neq\star,\ b=\star\}}& \text{if }v=\varnothing,\ y\in\mathcal{D}_{t-},\ |\ \{\kappa\in\mathbb{N}:\ z_{\kappa}\neq\star\}\,|<\infty,\\ \mathbf{1}_{\{y(0)\neq\star\}}&\text{if }v\neq\varnothing,\ y\in\mathcal{D}_{t-},\ |\ \{\kappa\in\mathbb{N}:\ z_{\kappa}\neq\star\}\,|<\infty,\\ 0&\text{otherwise}.\end{cases} \tag{5.7}\] By Proposition 5.3, the jump characteristics of \(\hat{\xi}_{\mathbb{V}_{n}}^{n}\) are almost surely well-defined. On the event that they exist, the jump characteristics of \(\hat{\xi}_{\mathbb{V}_{n}}^{n}\) are a disjoint union of those of \(\hat{\xi}_{v}^{n}\) for \(v\in\mathbb{V}_{n}\). 
We can then rewrite (5.3) as \[\frac{d\mu_{t-}}{d\hat{\mu}_{t-}^{n}}(\hat{\xi}^{n}[t])=\prod_{v\in\mathbb{V}_ {n}}\gamma_{v}(\hat{\xi}_{\partial_{w}^{\mathbb{V}}}^{0^{\mathbb{V}}}[t),\hat{ \xi}_{\partial_{w}^{\mathbb{V}}}^{n}[t))\ \psi_{v}(\hat{\xi}_{v}^{n}[t),\hat{\xi}_{\pi_{n}}^{n}(0),\hat{\xi}_{C_{v}}^{n}( 0))\ \text{a.s.}\] Since \(A,\ \partial^{\mathbb{V}}A\) and \(B\) forms a partition of \(\mathbb{V}\), we can further decompose the right-hand side as \[\frac{d\mu_{t-}}{d\hat{\mu}_{t-}^{n}}(\hat{\xi}^{n}[t])=\prod_{F\in\{A, \partial^{\mathbb{V}}A,B\}}\left(\prod_{v\in F\cap\mathbb{V}_{n}}\gamma_{v}( \hat{\xi}_{\partial_{w}^{\mathbb{V}}}^{n}[t],\hat{\xi}_{\partial_{w}^{ \mathbb{V}}}^{n}[t))\ \psi_{v}(\hat{\xi}_{v}^{n}[t]),\hat{\xi}_{\pi_{v}}^{n}(0),\hat{\xi}_{C_{v}}^{n} (0))\right)\ \text{a.s.}, \tag{5.8}\] where for ease of notation in the sequel, we set \(\pi_{\varnothing}=\varnothing\), which we can be done in (5.8) since \(\psi_{\varnothing}(y,b,z)\) does not depend on \(b\). The product in the inner bracket is a function of \(\hat{\xi}_{A\cup\partial^{\mathbb{V}}A}^{n}\) when \(F=A\), and a function of \(\hat{\xi}_{B\cup\partial^{\mathbb{V}}A}^{n}\) when \(F=B\). Thus, to prove (5.5) it suffices to show that for each \(w\in\partial^{\mathbb{V}}A\), there exist measurable functions \(\tilde{f}_{1}^{w}:\mathcal{D}_{t-}^{A\cup\partial^{\mathbb{V}}A}\rightarrow[0,\infty)\) and \(\tilde{f}_{2}^{w}:\mathcal{D}_{t-}^{B\cup\partial^{\mathbb{V}}A}\rightarrow[0,\infty)\) such that almost surely on \(\mathcal{E}_{n}\) \[\gamma_{w}(\hat{\xi}_{\partial_{w}^{\mathbb{V}}}^{n}[t),\hat{\xi}_{\partial_{w}^ {\mathbb{V}}}^{n}[t))\ \psi_{w}(\hat{\xi}_{w}^{n}[t]),\hat{\xi}_{\pi_{w}}^{n}(0),\hat{\xi}_{C_{w}}^{n} (0))=\tilde{f}_{1}^{w}(\hat{\xi}_{A\cup\partial^{\mathbb{V}}A}^{n}[t))\ \tilde{f}_{2}^{w}(\hat{\xi}_{B\cup\partial^{ \mathbb{V}}A}^{n}[t)). \tag{5.9}\] By the monotonicity of the S(E)IR dynamics given in (4.2), on \(\mathcal{E}_{n}\) the set of times \(\{t_{i}(\hat{\xi}_{w}^{n})<t\ :w\in\partial^{\mathbb{V}}A\}\) given by the jump characteristics of \(\hat{\xi}_{w}^{n}\) is empty. Hence, almost surely, \[\mathbf{1}_{\mathcal{E}_{n}}\prod_{t_{i}(\hat{\xi}_{w}^{n})<t}q_{\alpha}(j_{i}( \hat{\xi}_{w}^{n}),t_{i}(\hat{\xi}_{w}^{n}),\hat{\xi}_{w}^{n},\hat{\xi}_{ \partial_{w}^{\mathbb{V}}}^{n})=\mathbf{1}_{\mathcal{E}_{n}}. \tag{5.10}\] Recalling the identification of the states \((S,E,I,R)\) with \((0,1,2,3)\) and the definition \(q_{\alpha}\) in (4.3), for \(w\in\partial^{\mathbb{V}}A\) and \(s\in(0,t)\), on the event \(\mathcal{E}_{n}\) we have \[q_{\alpha}(j,s,\hat{\xi}_{w}^{n},\hat{\xi}_{\partial_{w}^{\mathbb{V}}}^{n}) =\beta_{t}\mathcal{I}(\hat{\xi}_{\partial_{w}^{\mathbb{V}}}^{n}(s-)) (\alpha\mathbf{1}_{\{j=1\}})+(1-\alpha)\mathbf{1}_{\{j=2\}}) \tag{5.11}\] \[=\beta_{t}\left(\sum_{\bar{w}\cup\sim w}\mathbf{1}_{\{\hat{\xi}_{ \mathbb{D}}^{n}(s-)=2\}}\right)(\alpha\mathbf{1}_{\{j=1\}})+(1-\alpha)\mathbf{1}_ {\{j=2\}}).\] For convenience of notation, set \(\alpha(j):=\alpha\mathbf{1}_{\{j=1\}}+(1-\alpha)\mathbf{1}_{\{j=2\}}\). 
Then for \(w\in\partial^{\mathbb{V}}A\), using first (5.6) and (5.10), and then (5.11) and the fact that \(\partial^{\mathbb{V}}_{w}\subset A\cup B\), \[\begin{split}\mathbf{1}_{\mathcal{E}_{n}}&\gamma_{w} (\hat{\xi}^{n}_{w},\hat{\xi}^{n}_{\partial^{\mathbb{V}}_{w}})\\ &=\mathbf{1}_{\mathcal{E}_{n}}\exp\left(-\sum_{j=1,2}\alpha(j) \int_{(0,t)}\left(\mathcal{I}(\hat{\xi}^{n}_{\partial^{\mathbb{V}}_{w}}(s-))-1 \right)ds\right)\\ &=\mathbf{1}_{\mathcal{E}_{n}}\exp\left(\int_{(0,t)}\left( \mathcal{I}(\hat{\xi}^{n}_{\partial^{\mathbb{V}}_{w}\cap A}(s-))+\mathcal{I}( \hat{\xi}^{n}_{\partial^{\mathbb{V}}_{w}\cap B}(s-))-1\right)ds\right)\\ &=\mathbf{1}_{\mathcal{E}_{n}}\exp\left(-\int_{(0,t)}\left( \mathcal{I}(\hat{\xi}^{n}_{\partial^{\mathbb{V}}_{w}\cap A}(s-))\right)ds \right)\exp\left(-\int_{(0,t)}\left(\mathcal{I}(\hat{\xi}^{n}_{\partial^{ \mathbb{V}}_{w}\cap B}(s-))-1\right)ds\right),\end{split} \tag{5.12}\] which shows that each \(\gamma_{w}\) term in (5.9) admits the desired factorization. It only remains to show that the same holds for the \(\psi_{w}\) term in (5.9). To this end, note that for \(w\in\partial^{\mathbb{V}}A\setminus\{\varnothing\}\), by (5.7), \[\begin{split}&\psi_{w}(\hat{\xi}^{n}_{w}[t]),\hat{\xi}^{n}_{ \pi_{w}}(0),\hat{\xi}^{n}_{C_{w}}(0))\\ &=\mathbf{1}_{\{\hat{\xi}^{n}_{w}[t]\in\mathcal{D}_{t-}\}} \mathbf{1}_{\{|\{v\in C_{w}:\ \hat{\xi}^{n}_{w}(0)\neq\star\}|<\infty\}}(1-\mathbf{1}_{\{\hat{\xi}^{n}_{w}(0) \neq\star,\ \hat{\xi}^{n}_{w}(0)=\star\}})\\ &=\mathbf{1}_{\{\hat{\xi}^{n}_{w}[t]\in\mathcal{D}_{t-}\}} \mathbf{1}_{\{|(A\cup\partial^{\mathbb{V}}A)\cap\{v\in C_{w}:\ \hat{\xi}^{n}_{v}(0)\neq\star\}|<\infty\}}\mathbf{1}_{\{|(B\cup\partial^{ \mathbb{V}}A)\cap\{v\in C_{w}:\ \hat{\xi}^{n}_{v}(0)\neq\star\}|<\infty\}}\mathbf{1}_{\{\hat{\xi}^{n}_{w}(0) \neq\star,\hat{\xi}^{n}_{w}(0)=\star\}})^{c}\\ &=\tilde{\psi}^{(1)}_{w}(\hat{\xi}^{n}_{w}[t],\hat{\xi}^{n}_{ \pi_{w}}(t))\ \tilde{\psi}^{(2)}_{w}(\hat{\xi}^{n}_{A\cup\partial^{\mathbb{V}}A}[t])\ \tilde{\psi}^{(3)}_{w}(\hat{\xi}^{n}_{B\cup\partial^{ \mathbb{V}}A}[t)),\end{split} \tag{5.13}\] where \(\tilde{\psi}^{(1)}_{w}\) is the product of the first and last term in the penultimate line of the display, and \(\tilde{\psi}^{(i)}_{w}\), \(i=2,3\), is the \(i\)-th term of the penultimate line of the display. Since \(w\in\partial^{\mathbb{V}}A\) and \(\mathbb{V}\) is a tree, either \(\{w,\pi_{w}\}\subset B\cup\partial^{\mathbb{V}}A\) or \(\{w,\pi_{w}\}\subset A\cup\partial^{\mathbb{V}}A\). Hence, the last line of (5.13) factors as desired. 
Similarly, if \(\varnothing\in\partial^{\mathbb{V}}A\), by (5.7) we have \[\begin{split}&\psi_{\varnothing}(\hat{\xi}^{n}_{\varnothing}[t]), \hat{\xi}^{n}_{\pi_{w}}(0),\hat{\xi}^{n}_{C_{\varnothing}}(0))\\ &=\mathbf{1}_{\{\hat{\xi}^{n}_{w}[t]\in\mathcal{D}_{t-}\}} \mathbf{1}_{\{|\{v\in C_{\varnothing}:\ \hat{\xi}^{n}_{\varnothing}(0)\neq\star\}|<\infty\}}\mathbf{1}_{\{\hat{\xi}^{n} _{\partial_{\varnothing}}(0)\neq\star\}}\\ &=\mathbf{1}_{\{\hat{\xi}^{n}_{\varnothing}[t]\in\mathcal{D}_{t-}\}} \mathbf{1}_{\{|(A\cup\partial^{\mathbb{V}}A)\cap\{v\in C_{\varnothing}:\ \hat{\xi}^{n}_{v}(0)\neq\star\}|<\infty\}}\mathbf{1}_{\{|(B\cup\partial^{ \mathbb{V}}A)\cap\{v\in C_{\varnothing}:\ \hat{\xi}^{n}_{v}(0)\neq\star\}|<\infty\}}\mathbf{1}_{\{\hat{\xi}^{n}_{ \varnothing}(0)\neq\star\}}\\ &=\tilde{\psi}^{(4)}(\hat{\xi}^{n}_{\varnothing}[t])\ \tilde{\psi}^{(5)}(\hat{\xi}^{n}_{A\cup\partial^{ \mathbb{V}}A}[t))\ \tilde{\psi}^{(6)}(\hat{\xi}^{n}_{B\cup\partial^{ \mathbb{V}}A}[t)),\end{split} \tag{5.14}\] where \(\tilde{\psi}^{(4)}\) is the product of the first and last term in the penultimate line of the display, and \(\tilde{\psi}^{(i)}\) with \(i=5,6\) is the \((i-3)\)-th term of the penultimate line of the display. Together (5.12), (5.14) and (5.13) prove (5.9) and, hence, \(\mathbf{1}_{\mathcal{E}_{n}}d\mu_{t-}/d\hat{\mu}^{n}_{t-}\) admits the factorization stated in (5.5). We conclude this section by proving Proposition 4.7. Proof of Proposition 4.7.: Throughout the proof we fix \(\alpha\in[0,1]\), \(\theta\in\mathcal{P}(\mathbb{N}_{0})\) and \(\mathcal{T}=\mathrm{UGW}(\theta)\), and we omit the dependence of \(\xi^{\mathcal{T},\alpha}\) on them. Let \(\{A,B,\partial^{\mathbb{V}}A\}\) be a partition of \(\mathbb{V}\) with \(\partial^{\mathbb{V}}A\) being finite. Pick \(n\in\mathbb{N}\) such that \(\partial^{\mathbb{V}}A\subset\mathbb{V}_{n-1}\). We define \(\mathcal{A}:\mathcal{D}^{\partial^{\mathbb{V}}A}_{\star}\to\{0,1\}\) by \(\mathcal{A}(x)=\mathbf{1}_{\{x_{v}(t-)=S,\ v\in\partial^{\mathbb{V}}A\}}\). We observe that \(\mathcal{A}(\hat{\xi}^{n}_{\partial^{\mathbb{V}}A})=\mathbf{1}_{\mathcal{E}^{n}_{ t}(A)}\), where the latter is defined in (5.4). Let \(W\) be a bounded, \(\sigma(\xi_{A}[t])\)-measurable random variable. 
Adopting the convention \(0/0=0\), and using Lemma 5.4 and Bayes's theorem in the first line, and the property (5.2) in the last line, we have \[\mathbb{E}_{\mu}[W\mathcal{A}(\xi_{\partial^{\mathbb{V}}A})\ |\ \xi_{ \partial^{\mathbb{V}}A}[t),\ \xi_{B}[t)]= \frac{\mathbb{E}_{\hat{\mu}^{n}}[W\mathcal{A}(\hat{\xi}_{\partial^{ \mathbb{V}}A}^{n})\frac{d\mu_{t-}}{d\hat{\mu}_{t-}^{n}}\hat{\xi}^{n}[t)\ |\ \hat{\xi}_{\partial^{\mathbb{V}}A}^{n}[t),\hat{\xi}_{B}^{n}[t)]}{ \mathbb{E}_{\hat{\mu}^{n}}[\frac{d\mu_{t-}}{d\hat{\mu}_{t-}^{n}}\hat{\xi}^{n}[t )|\ \hat{\xi}_{\partial^{\mathbb{V}}A}^{n}[t),\hat{\xi}_{B}^{n}[t)]}\] \[= \mathbf{1}_{\mathcal{E}_{n}^{t}(A)}\frac{\mathbb{E}_{\hat{\mu}^{n} }[W\tilde{f}_{1}(\hat{\xi}_{A\cup\partial^{\mathbb{V}}A}^{n}[t))\tilde{f}_{2} (\hat{\xi}_{B\cup\partial^{\mathbb{V}}A}^{n}[t))\ |\ \hat{\xi}_{\partial^{\mathbb{V}}A}^{n}[t),\hat{\xi}_{B}^{n}[t)]}{ \mathbb{E}_{\hat{\mu}^{n}}[\tilde{f}_{1}(\hat{\xi}_{A\cup\partial^{\mathbb{V} }A}^{n}[t))\tilde{f}_{2}(\hat{\xi}_{B\cup\partial^{\mathbb{V}}A}^{n}[t))\ |\ \hat{\xi}_{\partial^{ \mathbb{V}}A}^{n}[t),\hat{\xi}_{B}^{n}[t)]}\] \[= \mathbf{1}_{\mathcal{E}_{n}^{t}(A)}\frac{\tilde{f}_{2}(\hat{\xi}_ {B\cup\partial^{\mathbb{V}}A}^{n}[t))\mathbb{E}_{\hat{\mu}^{n}}[W\tilde{f}_{1} (\hat{\xi}_{A\cup\partial^{\mathbb{V}}A}^{n}[t))\ |\ \hat{\xi}_{\partial^{ \mathbb{V}}A}^{n}[t),\hat{\xi}_{B}^{n}[t)]}{\tilde{f}_{2}(\hat{\xi}_{B\cup \partial^{\mathbb{V}}A}^{n}[t))\mathbb{E}_{\hat{\mu}^{n}}[\tilde{f}_{1}(\hat{ \xi}_{A\cup\partial^{\mathbb{V}}A}^{n}[t))\ |\ \hat{\xi}_{\partial^{\mathbb{V}}A}^{n}[t),\hat{\xi}_{B}^{n}[t)]}\] \[= \mathbf{1}_{\mathcal{E}_{n}^{t}(A)}\frac{\mathbb{E}_{\hat{\mu}^{n} }[W\tilde{f}_{1}(\hat{\xi}_{A\cup\partial^{\mathbb{V}}A}^{n}[t))\ |\ \hat{\xi}_{\partial^{ \mathbb{V}}A}^{n}[t)]}{\mathbb{E}_{\hat{\mu}^{n}}[\tilde{f}_{1}(\hat{\xi}_{A \cup\partial^{\mathbb{V}}A}^{n}[t))\ |\ \hat{\xi}_{\partial^{\mathbb{V}}A}^{n}[t)]}.\] The last quotient is \(\hat{\xi}_{\partial^{\mathbb{V}}A}^{n}[t)\)-measurable. As this holds for every bounded \(\xi_{A}[t)\)-measurable random variable \(W,\) we conclude that \[\mathbb{E}_{\mu}[W\mathcal{A}(\xi_{\partial^{\mathbb{V}}A})\ |\ \xi_{\partial^{ \mathbb{V}}A}[t),\ \xi_{B}[t)]=\mathbb{E}_{\mu}[W\mathcal{A}(\xi_{\partial^{ \mathbb{V}}A})\ |\ \xi_{\partial^{\mathbb{V}}A}[t)].\] Since \(\mathcal{A}(\xi_{\partial^{\mathbb{V}}A})=\mathbf{1}_{\{\xi_{v}(t-)=S\ \forall v\in \partial^{\mathbb{V}}A\}}\), if follows that \[\xi_{A}[t)\perp\xi_{B}[t)\ |\ \xi_{\partial^{\mathbb{V}}A}(t-)=S_{\partial^{ \mathbb{V}}A}, \tag{5.15}\] which proves the first assertion of the proposition. Next, let \(v\in\mathbb{V}\) and \(\kappa\in\mathbb{N}_{0}\). Set \(\tilde{d}_{v}:=d_{v}-\mathbf{1}_{\{v\neq\varnothing\}}=|C_{v}\cap\mathcal{T}|\). The event \(\{\tilde{d}_{v}=k\}\) is clearly the same as the event \(\big{\{}d_{v}=\kappa+\mathbf{1}_{\{v\neq\varnothing\}}\big{\}}\). In the sequel, we condition on the event \(\tilde{d}_{v}=\kappa\). If \(\kappa=0\), the set \(\{\xi_{vi}(t)\}_{i=1,...,\tilde{d}_{v}}\) of children of \(v\) in \(\mathcal{T}\) is empty, and if \(\kappa=1\), the set of children of \(v\) in \(\mathcal{T}\) is a singleton. In either case, the stated conditional independence holds trivially. Suppose that \(\kappa\geq 2\) and for \(i=1,...,\kappa\), we fix \(x_{i}\in\bar{\mathcal{X}}\). For \(w\in\mathbb{V}\), let \(\mathbb{T}_{w}:=\{wu\}_{u\in\mathbb{V}}\) denote the subtree of \(\mathbb{V}\) rooted at \(w\). 
By the definition of the jump rate \(q_{\alpha}\) in (4.3), and the SDE characterization (4.4) from Lemma 4.2, \(\tilde{d}_{v}=\kappa\) if and only \(\xi(0)_{vm}\neq\star\) for all \(m\in\mathbb{N}\) with \(m\leq\kappa\) and \(\xi_{mk}(0)=\star\) for all \(k\in\mathbb{N}\) with \(k>\kappa\). Using first this fact, and then applying (5.15) with \(A=\cup_{i=1}^{\kappa}\mathbb{T}_{vi}\), \(\partial^{\mathbb{V}}A=\{v\}\) and \(B=\{v\ell\ :\ \ell\in\mathbb{N}\cap(\kappa,\infty)\}\), we have \[\mathbb{P}(\xi_{v\ell}(t)=x_{\ell}\ \text{for}\ 1\leq\ell\leq \kappa\ |\ \xi_{v}(t)=S,\ \tilde{d}_{v}=\kappa)\] \[=\mathbb{P}(\xi_{v\ell}(t)=x_{\ell}\ \text{for}\ 1\leq\ell\leq \kappa\ |\ \xi_{v}(t)=S,\ \xi_{vm}(0)\neq\star\ \text{for}\ 1\leq m\leq\kappa,\ \xi_{vk}(0)=\star\ \text{for}\ k>\kappa)\] \[=\mathbb{P}(\xi_{v\ell}(t)=x_{\ell}\ \text{for}\ 1\leq\ell\leq \kappa\ |\ \xi_{v}(t)=S,\ \xi_{vm}(0)\neq\star\ \text{for}\ 1\leq m\leq\kappa). \tag{5.16}\] For each \(\ell=1,...,\kappa\), another application of (5.15), with \(A=\mathbb{T}_{v\ell}\), \(\partial^{\mathbb{V}}A=\{v\}\) and \(B=\{vm\ :\ \ m\in\mathbb{N}\setminus\{\ell\}\}\) yields \[\xi_{v\ell}[t)\perp\{\xi_{vm}[t)\}_{m\in\mathbb{N}\setminus\{\ell\}}\ |\xi_{v}(t-)=S.\] It follows that for \(\ell\in\mathbb{N}\cap[1,\kappa]\) and \(M\subset\{m\in\mathbb{N}\ :\ m\leq\kappa\}\setminus\{\ell\}\), \[\mathbb{P}(\xi_{v\ell}(t)=x_{\ell}|\ \xi_{v}(t)=S,\ \xi_{vi}(t)\neq\star\ \text{for}\ 1\leq i\leq \kappa,\ \xi_{vm}=x_{m}\ \forall m\in M)\] \[=\mathbb{P}(\xi_{v\ell}(t)=x_{\ell}|\ \xi_{v}(t)=S,\ \xi_{v\ell}(t)\neq\star) \tag{5.17}\] \[=\mathbb{P}(\xi_{v\ell}(t)=x_{\ell}|\ \xi_{v}(t)=S,\ v\ell\in \mathcal{T}).\] By iteratively applying (5.17), we obtain \[\mathbb{P}(\xi_{v\ell}(t)=x_{\ell}\ \ \text{for}\ 1\leq\ell\leq \kappa\ |\ \xi_{v}(t)=S,\ \xi_{vi}(t)\neq\star\ \text{for}\ 1\leq i\leq\kappa)\] \[=\prod_{\ell=1}^{\kappa}\mathbb{P}(\xi_{v\ell}(t)=x_{\ell}\ |\xi_{v}(t)=S,\ v\ell\in\mathcal{T}),\] which along with (5.16) establishes the second assertion of the theorem and concludes the proof. We conclude this section by using Proposition 4.7 and properties of the SDE (4.4) to derive Proposition 4.8. Proof of Proposition 4.8.: Fix \(\tilde{v}\in\mathbb{V}\setminus\{\varnothing\}\) and, on the event \(\tilde{v}\in\mathcal{T}\), let \(\tilde{\mathcal{T}}\) be the subtree of \(\mathcal{T}\) rooted at \(\tilde{v}\), i.e., \(\tilde{\mathcal{T}}:=\mathcal{T}\cap\mathbb{T}_{\tilde{v}}\) where \(\mathbb{T}_{\tilde{v}}:=\{\tilde{v}w\ :\ w\in\mathbb{V}\}\). By Assumption D, \(\mathcal{T}_{\tilde{v}}\) is a Galton-Watson tree with offspring distribution \(\theta\) on the event \(\{\tilde{v}\in\mathcal{T}\}\). Recall that \(\xi^{\mathcal{T},\alpha}\) satisfies the SDE (4.4), and define the modified process \(\tilde{\xi}\) on \(\tilde{\mathcal{X}}_{\star}^{\mathbb{V}}\) by \(t\in[0,\infty)\), \[\tilde{\xi}_{v}(t)=\begin{cases}\xi_{v}^{\mathcal{T},\alpha}(t)&v\in\mathbb{T }_{\tilde{v}}\\ \star&v\in\mathbb{V}\setminus\mathbb{T}_{\tilde{v}}.\end{cases} \tag{5.18}\] Fix \(t\in[0,\infty)\), and let \(\mathcal{E}^{t}:=\{\xi_{\tilde{v}}^{\mathcal{T},\alpha}(t-)=S\}\). By (4.4) and Assumption D, \(\xi_{\tilde{v}}^{\mathcal{T},\alpha}(t-)=S\) implies \(\xi_{\tilde{v}}^{\mathcal{T},\alpha}\neq\star\) and hence, that \[\mathcal{E}^{t}=\tilde{\mathcal{E}}^{t}:=\{\tilde{\xi}_{\tilde{v}}(t-)=S,\ \tilde{v}\in\mathcal{T}\}=\{\tilde{\xi}_{\tilde{v}}(s)=S\ :\ s\in[0,t)\}, \tag{5.19}\] where the second equality follows from the monotonicity property of the S(E)IR process, see (4.2). 
Applying Proposition 4.7 with \(A=\mathbb{T}_{\tilde{v}}\setminus\{\tilde{v}\}\), \(\partial^{\mathbb{V}}A=\{\tilde{v}\}\) and B=\(\mathbb{V}\setminus\mathbb{T}_{\tilde{v}}\), it follows that \(\xi_{\mathbb{T}_{\tilde{v}}\setminus\{\tilde{v}\}}^{\mathcal{T},\alpha}[t)\) is conditionally independent of \(\xi_{\mathbb{V}\setminus\mathbb{T}_{\tilde{v}}}^{\mathcal{T},\alpha}[t)\) given \(\mathcal{E}^{t}\). Thus, by (5.18) and (5.19), \[\mathcal{L}(\xi_{\mathbb{T}_{\tilde{v}}\setminus\{\tilde{v}\}}^{\mathcal{T}, \alpha}[t)\ |\ \mathcal{E}^{t})=\mathcal{L}(\tilde{\xi}_{\mathbb{V}\setminus\mathbb{T}_{ \tilde{v}}}^{\mathcal{T},\alpha}[t)\ |\ \tilde{\mathcal{E}}^{t}). \tag{5.20}\] Next, fix \(m\in\mathbb{N}\), and define a map \(\tilde{\phi}_{m}:\mathbb{T}_{\tilde{v}}\rightarrow\mathbb{V}\) given by \(\tilde{\phi}_{m}(\tilde{v})=\varnothing\) and for \(v\in\mathbb{V}\), \(\tilde{\phi}_{m}(\tilde{v}(mv))=1v\), \(\tilde{\phi}_{m}(\tilde{v}(1v))=mv\) and \(\tilde{\phi}_{m}(\tilde{v}(\ell v))=\ell v\) for all \(\ell\in\mathbb{N}\setminus\{1,m\}\), recalling that \(vw\) represent concatenation of \(v,w\in\mathbb{V}\). Then \(\tilde{\phi}_{m}\) defines an isomorphism of the rooted graphs \((\mathbb{T}_{\tilde{v}},\tilde{v})\) and \((\mathbb{V},\varnothing)\). It follows from (5.18) and the form of the SDE (4.4) that \[\mathcal{L}(\tilde{\xi}_{\mathbb{T}_{\tilde{v}}}[t)\ |\ \tilde{\mathcal{E}}^{t})= \mathcal{L}(\xi_{\mathbb{V}}^{\mathcal{T},\alpha}[t)\ |\ \xi_{\varnothing}^{\mathcal{T},\alpha}(t-)=S)\] and in particular, \[\mathcal{L}(\tilde{\xi}_{\tilde{v}m}[t])\ |\tilde{\xi}_{\tilde{v}}(t-)=S, \tilde{v}m\in\tilde{\mathcal{T}})=\mathcal{L}(\xi_{1}^{\mathcal{T},\alpha}[t)]\ |\tilde{\xi}_{\varnothing}(t-)=S,1\in\mathcal{T}), \tag{5.21}\] which follows from the independence of \(\mathcal{T}\) from the driving Poisson processes in the SDE (4.4) for \(\xi^{\mathcal{T},\alpha}\) and the fact that \(\tilde{v}m\in\mathcal{T}\) implies \(\tilde{v}\in\mathcal{T}\). By the well-posedness of the SDE (4.4) established in Lemma 4.2, it follows that the left-hand side of (5.21) does not depend on the choice of \(\tilde{v}\in\mathbb{V}\) and \(m\in\mathbb{N}\), thus proving the proposition. ## Appendix A Proofs of intermediate SEIR dynamics Results In this section, we prove Theorem 4.14 and Theorem 4.13, thus completing the proof of Theorem 2.10. Throughout, \(\theta\in\mathcal{P}(\mathbb{N}_{0})\), \(\hat{\theta}\) is the size-biased version of \(\theta\), as defined in (2.2), and \(\mathcal{T}\) is a UGW\((\theta)\) tree. We assume that \(\theta\) has finite third moment and, as everywhere else in the paper, we assume that the rates \(\beta,\ \lambda,\ \rho:[0,\infty)\rightarrow(0,\infty)\) satisfy Assumption C. We assume that Assumption D hold. We start by proving the ODE characterization of \(g_{S}\), \(g_{E}\) and \(g_{I}\). Proof of Theorem 4.13.: Throughout the proof in order to simplify notation we write \(\bar{X}\) in lieu of \(\bar{X}^{\mathcal{T}}=\xi^{\mathcal{T},1}\), the SEIR process on \(\mathcal{T}\), and \(q\) in lieu of \(q_{1}\), the rate function defined in (4.3). By Assumption D, \(g_{S}(0)=s_{0}\), \(g_{E}(0)=e_{0}\) and \(g_{I}(0)=i_{0}\). Clearly, \(G_{I}(0)=0\). Therefore, the initial conditions (2.12) hold. By the fundamental theorem of calculus, \(\dot{G}_{I}(t)=\beta_{t}g_{I}(t)\), which is the fourth equation in (2.11). We now turn to the derivation of the evolution of \(g_{I},\ g_{E}\) and \(g_{S}\). 
This requires keeping track of two states simultaneously since \(g_{I}(t)\), \(g_{E}(t)\) and \(g_{S}(t)\) are conditional probabilities associated with the joint law of \(\bar{X}_{1}(t)\) and \(\bar{X}_{\varnothing}(t)\). To start, we apply Proposition 4.6 with \(\alpha=1\) and \(U=\{\varnothing,1\}\) to conclude that \(\bar{X}_{\varnothing,1}\) has the same law as the jump process on the state space \(\bar{\mathcal{X}}_{\star}\times\bar{\mathcal{X}}_{\star}\) with jump rates \(\hat{q}_{v}(t,x)=\hat{q}_{v,1}^{\theta,1}[\{\varnothing,1\}](t,x)\), \(v\in\{\varnothing,1\}\), \(x\in\mathcal{D}([0,\infty),\bar{\mathcal{X}}_{\star}^{2})\), which satisfy, for every \(t\geq 0\), almost surely \[\hat{q}_{v}(t,\bar{X}_{\varnothing,1})=\mathbb{E}[q(1,t,\bar{X}_{v},\bar{X}_{ \partial_{v}^{\mathcal{V}}})|\bar{X}_{\varnothing,1}[t)],\quad v\in\{ \varnothing,1\}.\] (A.1) Next, we use the specific form of \(q\), as defined in (4.3) and Propositions 4.7 and 4.8 to obtain a more explicit description of \(\hat{q}_{v}\), \(v\in\{\varnothing,1\}\). Since the probabilities \(g_{a}(t)\), \(a\in\{S,\ E,\ I\}\) are conditioned on \(\bar{X}_{\varnothing}(t-)=S\) and \(\bar{X}_{1}(0)\neq\star\) (and using the fact that an individual that is in state \(R\) remains in that state for all subsequent times), we only need to consider the jumps \(\hat{q}_{v}(t,\bar{X}_{\varnothing,1})\), \(v\in\{\varnothing,1\}\) on the events \(\{\bar{X}_{\varnothing,1}(t-)=(S,S)\}\), \(\{\bar{X}_{\varnothing,1}(t-)=(S,E)\}\) and \(\{\bar{X}_{\varnothing,1}(t-)=(S,I)\}\). For \(v,w\in\{\varnothing,1\}\) with \(v\neq w\), define \(\bar{B}_{v}(t):=\beta_{t}\mathbb{E}[\mathcal{I}(\bar{X}_{\partial_{v}\setminus\{w \}}(t-))|\bar{X}_{v}(t-)=S]\). By the definition of the SEIR jump rates \(q=q_{1}\) in (4.3), \(B_{v}\) is the conditional cumulative rate at which the neighbors of \(v\) other that \(w\) infect the individual at \(v\) at time \(t\). By Proposition 4.7, \[\bar{B}_{v}(t)=\beta_{t}\mathbb{E}[\mathcal{I}(\bar{X}_{\partial_{v}\setminus\{ w\}})|\bar{X}_{v}(t)=S,\bar{X}_{w}(t)].\] (A.2) Using (4.3), (A.1) and Proposition 4.6, and proceeding similarly as in the proof of Theorem 4.10, we can treat \(\bar{X}_{\varnothing,1}\) as a two particle jump process driven by Poisson noises with intensity measure equal to Lebesgue measure, whose jumps and jump rates from the states \((S,S)\), \((S,E)\) and \((S,I)\) can be summarized as follows. \[\begin{array}{ll}\mbox{Jump:}&\mbox{Rate at time $t$:}\\ (S,S)\to(S,E)&\bar{B}_{1}(t)\\ (S,S)\to(E,S)&\bar{B}_{\varnothing}(t)\\ (S,E)\to(E,E)&\bar{B}_{\varnothing}(t)\\ (S,E)\to(S,I)&\lambda_{t}\\ (S,I)\to(E,I)&\beta_{t}+\bar{B}_{\varnothing}(t)\\ (S,I)\to(S,R)&\rho_{t},\end{array}\] with all other rates being equal to zero. Next we fix \(h>0\) and \(t\geq 0\) and obtain expressions for \(g_{I}(t+h),\ g_{E}(t+h)\), and \(g_{S}(t+h)\) in terms of \(g_{I}(t)\), \(g_{E}(t)\), \(g_{S}(t)\), \(h\), \(\beta_{t}\), \(\rho_{t}\), and \(\hat{\theta}\). We first consider \(g_{S}\), defined in (4.41). 
Using monotonicity of the SEIR dynamics, we can write \[g_{S}(t+h)=\mathbb{P}(\bar{X}_{1}(t+h)=S,\ \bar{X}_{1}(t)=S\ |\ \bar{X}_{ \varnothing}(t+h)=S,\ \bar{X}_{\varnothing}(t)=S,\ 1\in\mathcal{T}).\] (A.3) By an application of Lemma 4.9 with \(A=\{\bar{X}_{1}(t+h)=S\}\), \(A^{\prime}=\{\bar{X}_{1}(t)=S\}\), \(B=\{\bar{X}_{\varnothing}(t+h)=S\}\), \(B^{\prime}=\{\bar{X}_{\varnothing}(t)=S,1\in\mathcal{T}\}\) we obtain \[g_{S}(t+h)=g_{S}(t)\frac{\mathbb{P}(\bar{X}_{\varnothing}(t+h)=S,\ \bar{X}_{1}(t+h)=S,|\ \bar{X}_{ \varnothing}(t)=S,\ \bar{X}_{1}(t)=S,\ 1\in\mathcal{T})}{\mathbb{P}(\bar{X}_{ \varnothing}(t+h)=S\ |\ \bar{X}_{\varnothing}(t)=S,\ 1\in\mathcal{T})}.\] (A.4) Since \(\bar{B}_{1}(t)+\bar{B}_{\varnothing}(t)\) is the rate at which \(\bar{X}_{\varnothing,1}(t)\) leaves the state \((S,\,S)\), the numerator on the right-hand side of (A.4) is equal to \(1-h(\bar{B}_{1}(t)+\bar{B}_{\varnothing}(t))+o(h)\). For the denominator, observe that the rate \(\hat{q}_{\varnothing}(t,\bar{X}_{\varnothing,1})\) on the event \(\{\bar{X}_{\varnothing}(t-)=S,1\in\mathcal{T}\}\) is equal to \[\mathbb{E}[q(1,t,\bar{X}_{\varnothing},\bar{X}_{\varnothing}^{ \vee})\ |\bar{X}_{\varnothing}(t-)=S,1\in\mathcal{T}]\] \[= \beta_{t}\mathbb{E}[\mathcal{I}(\bar{X}_{1}(t-))\ |\bar{X}_{ \varnothing}(t-)=S,\ 1\in\mathcal{T}]+\beta_{t}\mathbb{E}[\mathcal{I}(\bar{X}_{ \varnothing}^{\vee}\{1\}(t-))\ |\bar{X}_{\varnothing}(t-)=S,\ 1\in\mathcal{T}]\] \[= \beta_{t}g_{I}(t-)+\beta_{t}\bar{B}_{\varnothing}(t-),\] where the first equality follows from (4.3) with \(\alpha=1\), and the second follows from the definition of \(g_{I}\) in (4.41) and by (A.2) (on observing that the event \(\{1\in\mathcal{T}\}\) is \(\bar{X}_{1}(t)\)-measurable). Therefore, it follows that \[g_{S}(t+h)=g_{S}(t)\frac{1-h(\bar{B}_{1}(t)+\bar{B}_{\varnothing}(t))+o(h)}{1 -h(\beta_{t}g_{I}(t)+\bar{B}_{\varnothing}(t))+o(h)},\] Which implies \[g_{S}(t+h)-g_{S}(t)=g_{S}(t)\frac{h\beta_{t}g_{I}(t)-h\bar{B}_{1}(t)+o(h)}{1+o (1)}.\] In turn, this implies \[\dot{g}_{S}=g_{S}(\beta g_{I}-\bar{B}_{1}).\] (A.5) Similarly, recalling that \(g_{E}(t+h)=\mathbb{P}(\bar{X}_{1}(t+h)=E\ |\ \bar{X}_{\varnothing}(t+h)=S,\ 1\in \mathcal{T})\) from (4.41), and using the monotonicity property (4.2) with \(\alpha=1\), by a similar derivation as (A.3)-(A.5), \[g_{E}(t+h) =\sum_{a=S,E}g_{a}(t)\frac{\mathbb{P}(\bar{X}_{\varnothing}(t+h)=S,\ \bar{X}_{1}(t+h)=E|\ \bar{X}_{\varnothing}(t)=S,\ \bar{X}_{1}(t)=a,\ 1\in \mathcal{T})}{\mathbb{P}(\bar{X}_{\varnothing}(t+h)=S|\ \bar{X}_{ \varnothing}(t)=S,1\in\mathcal{T})}\] \[=\frac{g_{S}(t)(h\bar{B}_{1}(t)+o(h))+g_{E}(t)(1-h(\lambda_{t}+ \bar{B}_{\varnothing}(t))+o(h))}{1-h(g_{I}(t)\beta_{t}+\bar{B}_{\varnothing}(t) )+o(h)},\] and, hence, \[g_{E}(t+h)-g_{E}(t)=(1+o(1))(hg_{S}(t)\bar{B}_{1}(t)-hg_{E}(t)(\lambda_{t}- \beta_{t}g_{I}(t))+o(h)).\] It follows that \[\dot{g}_{E}=g_{S}\bar{B}_{1}-g_{E}(\lambda-\beta g_{I}).\] (A.6) Next, we see that \[g_{I}(t+h) =\sum_{a=S,E,I}g_{a}(t)\frac{\mathbb{P}(\bar{X}_{\varnothing}(t+h )=S,\ \bar{X}_{1}(t+h)=I|\ \bar{X}_{\varnothing}(t)=S,\ \bar{X}_{1}(t)=a,\ 1\in \mathcal{T})}{\mathbb{P}(\bar{X}_{\varnothing}(t+h)=S|\ \bar{X}_{ \varnothing}(t)=S,1\in\mathcal{T})}\] \[=\frac{g_{S}(t)o(h^{2})+g_{E}(t)(h\lambda_{t})+g_{I}(t)(1-h(\beta +\bar{B}_{\varnothing}+\rho))+o(h)}{1-h(g_{I}(t)\beta_{t}+\bar{B}_{\varnothing }(t))+o(h)},\] by the monotonicity property (4.2) with \(\alpha=1\) and by the fact that the probability that two jumps occur in an interval of length \(h\) is \(o(h^{2})\), since the driving 
noises \(\mathbf{N}_{\{\varnothing,1\}}\) as in Proposition 4.6 are independent Poisson point processes with intensity measure equal to Lebesgue measure. We then have \[g_{I}(t+h)-g_{I}(t)=(1+o(1))(hg_{E}(t)\lambda_{t}-hg_{I}(\beta_{t}+\rho_{t}- \beta_{t}g_{I}(t))+o(h)),\] which implies the third equation in (2.11). Recalling that \(G_{I}(t)=\int_{0}^{t}\beta_{u}g_{I}(u)du\), by the same argument as (4.24)-(4.30) in the proof of Theorem 4.10, \(\bar{B}_{1}(t)\) satisfies \[\bar{B}_{1}(t) =\beta_{t}g_{I}(t)\mathbb{E}[d_{1}-1\ |\ \bar{X}_{1}(t-)=S]\] \[=\beta_{t}g_{I}\frac{\sum_{k=0}^{\infty}k\hat{\theta}(k)e^{-kG_{ I}(t)}}{\sum_{n=0}^{\infty}\hat{\theta}(n)e^{-nG_{I}(t)}}.\] Substituting this back into (A.5) and (A.6), we obtain the first and second equations in (2.11). This concludes the proof. Proof of Theorem 4.14.: Throughout the proof, we simplify the notation and write \(\bar{X}\) in lieu of \(\bar{X}^{\mathcal{T}}\). By Assumption C, the fact that \(g_{I}\) is continuous (which follows from Theorem 4.13), and the fact that the ODE (4.42) is linear, the initial value problem (4.42)-(4.43) has a unique solution. Clearly by (4.41) the initial conditions (4.43) hold. To prove (4.42) we proceed similarly as in the proof of Theorem 4.12. We start by considering \(Q_{S,S;k}\). Fix \(t\geq 0\), \(h>0\) and \(k\) in the support of \(\theta\). Then, using the monotonicity of the SEIR process (see (4.2)) in the second quality, and the fact that \(\bar{X}_{\partial_{\varnothing}}(t)=y\in\bar{X}^{k}\) implies that \(d_{\varnothing}=k\), \[Q_{S,S;k}(t+h)\] \[=\mathbb{P}(\bar{X}_{\varnothing}(t+h)=S\ |\ \bar{X}_{\varnothing}(0)=S, \ d_{\varnothing}=k)\] \[=\sum_{y\in\bar{\mathcal{X}}^{k}}\frac{\mathbb{P}(\bar{X}_{ \varnothing}(t+h)=S,\ \bar{X}_{\varnothing}(t)=S,\ \bar{X}_{\varnothing}(0)=S,\ \bar{X}_{\partial_{ \varnothing}}(t)=y)}{\mathbb{P}(\bar{X}_{\varnothing}(0)=S,\ d_{\varnothing}=k)}\] \[=\sum_{y\in\mathcal{S}_{k,t}}\mathbb{P}(\bar{X}_{\varnothing}(t+h )=S|\bar{X}_{\varnothing}(t)=S,\bar{X}_{\partial_{\varnothing}}(t)=y)\mathbb{ P}(\bar{X}_{\varnothing}(t)=S,\bar{X}_{\partial_{\varnothing}}(t)=y|\bar{X}_{ \varnothing}(0)=S,d_{\varnothing}=k),\] (A.7) where \(\mathcal{S}_{k,t}:=\{y\in\bar{\mathcal{X}}^{k}\ :\ \mathbb{P}(\bar{X}_{ \varnothing}(t)=S,\bar{X}_{\partial_{\varnothing}}(t)=y)>0\}\). Since by (4.3) the jump rate of a susceptible individual with neighbors \(y\in\bar{\mathcal{X}}^{k}\) is equal to \(\beta_{t}\mathcal{I}(y)\), it follows that \[\mathbb{P}(\bar{X}_{\varnothing}(t+h)=S\ |\ \bar{X}_{\varnothing}(t)=S,\ \bar{X}_{ \partial_{\varnothing}}(t)=y)=1-h\beta_{t}\mathcal{I}(y)+o(h).\] (A.8) The right-hand side does not depend on the exact states of the \(k-\mathcal{I}(y)\) neighbors of the root that are not in state \(I\). 
Thus, substituting the expression in (A.8) into the last line of (A.7) and rewriting the sum to be over the number of infected neighbors of \(\varnothing\), \[Q_{S,S;k}(t+h)= \sum_{j=0}^{k}(1-h\beta_{t}j+o(h))\ \mathbb{P}(\bar{X}_{\varnothing}(t)=S, \ \mathcal{I}(\bar{X}_{\partial_{\varnothing}}(t))=j\ |\ \bar{X}_{\varnothing}(0)=S,d_{ \varnothing}=k)\] \[= \sum_{j=0}^{k}(1-h\beta_{t}j+o(h))\ \mathbb{P}(\mathcal{I}(\bar{X}_{ \partial_{\varnothing}}(t))=j\ |\bar{X}_{\varnothing}(t)=S,\ \bar{X}_{\varnothing}(0)=S,\ d_{ \varnothing}=k)\ Q_{S,S;k}(t)\] \[= \sum_{j=0}^{k}(1-h\beta_{t}j+o(h))\ \mathbb{P}(\mathcal{I}(\bar{X}_{ \partial_{\varnothing}}(t))=j\ |\bar{X}_{\varnothing}(t)=S,\ d_{ \varnothing}=k)\ Q_{S,S;k}(t).\] Letting \(\alpha=1\) in Proposition 4.7, it follows that \(\{\bar{X}_{i}(t)\ :\ i\sim\varnothing\}\) are conditionally i.i.d. given \(\bar{X}_{\varnothing}(t)=S\) and \(d_{\varnothing}=k\). For each \(m\in\mathbb{N}\cap[1,k]\), by Proposition 4.8 and an Application of Proposition 4.7 with \(A=\mathbb{T}_{m}:=\{mv\ :v\in\mathbb{V}\}\), the subtree rooted at \(m\), \(\partial^{\mathbb{V}}A=\{\varnothing\}\) and \(B=\mathbb{N}\setminus\{m\}\), observing that \(d_{\varnothing}=\sum_{\ell\in\mathbb{N}}\mathbf{1}_{\{\bar{X}_{\ell}(0)\neq \star\}}\), we have that \[\mathbb{P}(\bar{X}_{m}(t)=I\ |\ \bar{X}_{\varnothing}(t)=S,\ d_{\varnothing}=k)= \mathbb{P}(\bar{X}_{m}(t)=I\ |\ \bar{X}_{\varnothing}(t)=S,\ m\in\mathcal{T})=g_{I}(t),\] where \(g_{I}\) is defined in (4.41). It follows that, conditional on \(\bar{X}_{\varnothing}(t)=S\) and \(d_{\varnothing}=k\), \(\mathcal{I}(\bar{X}_{\partial_{\varnothing}}(t))\) has binomial distribution with parameters (\(k\), \(g_{I}(t)\)). Letting \(Y\) be a binomial random variable with parameters (\(k\), \(g_{I}(t)\)), it follows from (A.9) that \[\begin{split} Q_{S,S;k}(t+h)&=Q_{S,S;k}(t)(1-h\beta _{t}\mathbb{E}[Y]+o(h))\\ &=(1-h\beta_{t}kg_{I}+o(h))Q_{S,S;k}(t),\end{split}\] (A.10) and, thus, \[\begin{split}\lim_{h\to 0^{+}}\frac{Q_{S,S;k}(t+h)-Q_{S,S;k}(t)}{h}& =\lim_{h\to 0^{+}}\frac{(1-h\beta_{t}kg_{I}(t)+o(h)-1)Q_{S,S;k}(t)}{h} \\ &=-\beta_{t}kg_{I}(t)Q_{S,S;k}(t),\end{split}\] which proves the first equation in (4.42). The derivation of the ODEs for \(Q_{S,E;k}\) and \(Q_{S,I,;k}\) is similar and outlined below. As in the last line of (A.7) write, \[Q_{S,E;k}(t+h)=\bar{\mathcal{Q}}_{E}(h)+\bar{\mathcal{Q}}_{S}(h),\] (A.11) where, for \(b=\{S,E\}\), \[\bar{\mathcal{Q}}_{b}(h)=\sum_{j=0}^{k}\mathbb{P}(\bar{X}_{\varnothing}(t+h)=E,\ \bar{X}_{\varnothing}(t)=b,\ \mathcal{I}(\bar{X}_{\partial_{\varnothing}}(t))=j\ |\ \bar{X}_{ \varnothing}(0)=S,\ d_{\varnothing}=k).\] Recalling the definition of the rates \(q_{1}\) in (4.3) and using arguments similar to what used to derive (A.8)-(A.10), \(\bar{\mathcal{Q}}_{S}=(h\beta_{t}kg_{I}(t)+o(h))Q_{S,S;k}(t)\) and \[\bar{\mathcal{Q}}_{E}(h)= \sum_{j=0}^{k}(1-\lambda_{t}h+o(h))\mathbb{P}(\bar{X}_{\varnothing }(t)=E,\ \mathcal{I}(\bar{X}_{\partial_{\varnothing}}(t))=j|\bar{X}_{ \varnothing}(0)=S,\ d_{\varnothing}=k)\] \[= (1-\lambda_{t}h+o(h))\sum_{j=0}^{k}\mathbb{P}(\bar{X}_{\varnothing }(t)=E,\ \mathcal{I}(\bar{X}_{\partial_{\varnothing}}(t))=j|\bar{X}_{ \varnothing}(0)=S,\ d_{\varnothing}=k)\] \[= (1-\lambda_{t}h+o(h))\mathbb{P}(\bar{X}_{\varnothing}(t)=E\ |\bar{X}_{ \varnothing}(0)=S,\ d_{\varnothing}=k)\] \[= (1-\lambda_{t}h+o(h))Q_{S,E;k}(t).\] Therefore, \(Q_{S,E;k}(t+h)-Q_{S,E;k}(t)=hk\beta_{t}g_{I}(t)Q_{S,S;k}(t)-\lambda_{t}hQ_{S, E;k}(t)+o(h)\) which implies the second equation in (4.42). 
Proceeding similarly, we obtain the relation \[\begin{split} Q_{S,I;k}(t+h)&=\sum_{b=S,E,I} \mathbb{P}(\bar{X}_{\varnothing}(t+h)=I|\bar{X}_{\varnothing}(t)=b,\bar{X}_{ \varnothing}(0)=Sd_{\varnothing}=k)\\ &=\lambda_{t}hQ_{S,E;k}(t)+(1-\rho_{t})hQ_{S,I;k}\end{split}\] and \[\begin{split} Q_{E,I}(t+h)&=\sum_{b=S,E,I} \mathbb{P}(\bar{X}_{\varnothing}(t+h)=I|\bar{X}_{\varnothing}(t)=b,\bar{X}_{ \varnothing}(0)=E,d_{\varnothing}=k)\\ &=\lambda_{t}hQ_{E,E}(t)+(1-\rho_{t})hQ_{E,I},\end{split}\] which imply the third and fifth equations in (4.42). Setting \(r_{t}^{E}:=\lambda_{t}\), and \(r_{t}^{I}:=\rho_{t}\), for \(a\in\{E,\,I\}\) we see that \[Q_{a,a}(t+h)=\] \[=\sum_{y\in\bar{\mathcal{X}}^{k}}\mathbb{P}(\bar{X}_{\varnothing}(t +h)=a\ |\ \bar{X}_{\varnothing}(t)=a,\bar{X}_{\partial_{\varnothing}}(t)=y) \mathbb{P}(\bar{X}_{\varnothing}(t)=a,\ \bar{X}_{\partial_{\varnothing}}(t)=y\ |\ \bar{X}_{ \varnothing}(0)=a)\] \[=(1-hr_{t}^{a}+o(h))\sum_{y\in\bar{\mathcal{X}}^{k}}\mathbb{P}( \bar{X}_{\varnothing}(t)=a,\ \bar{X}_{\partial_{\varnothing}}(t)=y\ |\ \bar{X}_{ \varnothing}(0)=a)\] \[=(1-hr_{t}^{a}+o(h))Q_{a,a}(t),\] which proves the fourth and sixth equations in (4.42), thus concluding the proof. ## Appendix B Proof of Proposition 2.4 In this section we prove the well-posedness of the ODE system (2.4)-(2.5). We start with the following elementary result. **Lemma B.1**.: _Suppose that \(\theta\in\mathcal{P}(\mathbb{N}_{0})\) has a finite third moment. Then \(\Phi_{\hat{\theta}}\), defined in (4.45), is Lipschitz continuous on \([0,\infty)\)._ Proof.: It is easy to see that under the assumption that \(\theta\) has a finite third moment, the size-biased distribution \(\hat{\theta}\), defined in (2.2), has a finite second moment. Indeed, let \(\hat{Y}\) and \(Y\) be random variables with laws \(\hat{\theta}\) and \(\theta\), respectively. By (2.2), it is easy to see that \[\mathbb{E}[\hat{Y}^{2}]=\frac{\mathbb{E}[Y^{3}]-2\mathbb{E}[Y^{2}]+\mathbb{E} [Y]}{\mathbb{E}[Y]},\] which is finite since \(\theta\) has finite third moment. For \(z\in[0,\infty)\) note that \(M_{\hat{\theta}}(-z)=\mathbb{E}[e^{-z\hat{Y}}]\leq 1\), and so by the dominated convergence theorem \(\lim_{z\to\infty}M_{\hat{\theta}}(-z)=\hat{\theta}(0)\). Since \(\hat{\theta}\) has finite second moment, again by the dominated convergence theorem \(M_{\hat{\theta}}^{\prime\prime}(-z)=\mathbb{E}[\hat{Y}^{2}e^{-z\hat{Y}}]\leq \mathbb{E}[\hat{Y}^{2}]<\infty\), \(M_{\hat{\theta}}^{\prime\prime}\) is continuous on \((-\infty,0)\) and \(\lim_{z\to\infty}M_{\hat{\theta}}^{\prime\prime}(-z)=0\). Thus, it follows from the limits established above that \[\hat{\theta}(0)>0\quad\Rightarrow\quad\lim_{z\to\infty}\frac{M_{\hat{\theta} }^{\prime\prime}(-z)}{M_{\hat{\theta}}(-z)}=\frac{0}{\hat{\theta}(0)}=0\] (B.1) Now, setting \(\Phi:=\Phi_{\hat{\theta}}\) for conciseness, it follows that \[\Phi^{\prime}(z):=\frac{d}{dz}\Phi(z)=\frac{(M_{\hat{\theta}}^{\prime}(-z))^{2 }-M_{\hat{\theta}}^{\prime\prime}(-z)M_{\hat{\theta}}(-z)}{M_{\hat{\theta}}^ {2}(-z)}=(\Phi(z))^{2}-\frac{M_{\hat{\theta}}^{\prime\prime}(-z)}{M_{\hat{ \theta}}(-z)}.\] (B.2) By Lemma 4.15, \(\Phi\) is bounded on \([0,\infty)\). Furthermore, \(M_{\hat{\theta}}^{\prime\prime}(0)/M_{\hat{\theta}}(0)=\mathbb{E}[\hat{Y}^{2}]\). Recall the quantity \(\underline{d}_{\hat{\theta}}=\min\{d\in\mathbb{N}_{0}:\hat{\theta}(d)>0\}\) introduced in (4.44). 
Using (B.1) for the case \(\underline{d}_{\hat{\theta}}=0\) (which is equivalent to \(\hat{\theta}(0)>0\)), and a similar argument as (4.46) in Lemma 4.15 for the case \(\underline{d}_{\hat{\theta}}>0\), we have \[\lim_{z\to\infty}\frac{M_{\hat{\theta}}^{\prime\prime}(-z)}{M_{\hat{\theta}}(- z)}=\underline{d}_{\hat{\theta}}^{2}.\] (B.3) Together with (B.2), the continuity of \(\Phi\) on \([0,\infty)\) and the continuity of \(M_{\hat{\theta}}\) and \(M_{\hat{\theta}}^{\prime\prime}\) on \((-\infty,0]\), this implies that \(\Phi^{\prime}\) is uniformly bounded on \([0,\infty)\). This completes the proof. Proof of Proposition 2.4.: By Assumption A, \(\beta\) and \(\rho\) are continuous in \(t\). By Lemma 4.15, \(\Phi(z)\) is continuous in \(z\in[0,\infty)\). Therefore, the right-hand side of the ODE (2.4) is continuous and so by Peano's existence theorem, there exists \(\tau\in(0,\infty)\) and a solution (\(f_{S}\),\(f_{I}\),\(F_{I}\)) : \([0,\tau)\to\mathbb{R}^{3}\) to (2.4)-(2.5) on \([0,\tau)\). Next, fix \(s\in[0,\tau)\). We claim that \((f_{S}(s)\),\(f_{I}(s)\),\(F_{I}(s))\in[0,1]\times[0,1]\times[0,\infty)\). Since \(f_{S}(0)=s_{0}\in(0,1)\) and \(f_{I}(0)=1-s_{0}\in(0,1)\) and the right-hand side of the first (respectively, second) equation in (2.4) is equal to \(0\) whenever \(f_{S}(t)=0\) (respectively, \(f_{I}(t)=0\)), it follows that \(f_{S}(s)\geq 0\) (respectively, \(f_{I}(s)\geq 0\)). In turn, this implies that \(\dot{F}_{I}(s)\geq 0\), and therefore that \(F_{I}(s)>0\) (since \(F_{I}(0)=0\) and \(\dot{F}_{I}(0+)=f_{I}(0)>0\)). Now, by summing the first two equations in (2.4), we obtain \[\dot{f}_{I}(s)+\dot{f}_{S}(s)=f_{I}(s)\beta_{s}\left(f_{S}(s)+f_{I}(s)-1-\frac {\rho_{s}}{\beta_{s}}\right).\] (B.4) Since \(f_{S}(0)+f_{I}(0)=1\), it follows that \(f_{S}(s)+f_{I}(s)\) is strictly decreasing in \(s\), and in particular \(f_{S}(s)+f_{I}(s)\leq 1\). Finally, by Lemma B.1, the right-hand side of (2.4) is Lipschitz continuous in (\(f_{S}\), \(f_{I}\), \(F_{I}\)) on \([0,1]\times[0,1]\times[0,\infty)\). Thus, it follows that \(\tau=\infty\) and (2.4)-(2.5) has a unique solution for all times. Proposition 2.9 is proved in the same way as the proof of 2.4 by first using Lemma 4.15 and Assumption C to establish the existence of (\(g_{S}\), \(g_{E}\), \(g_{I}\), \(G_{I}\)) solving (2.11)-(2.12) on \([0,\tau)\) for some \(\tau\in(0,\infty)\), then showing that such a solution stays in \([0,1]\times[0,1]\times[0,1]\times[0,\infty)\), where, by Lemma B.1, the right-hand side of (2.11) is Lipschitz.
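For readers who want to see the limiting SEIR dynamics concretely, the following is a minimal numerical sketch (not part of the paper) that integrates the equations assembled in the proof of Theorem 4.13: \(\dot{g}_S=g_S(\beta g_I-\bar{B}_1)\), \(\dot{g}_E=g_S\bar{B}_1-g_E(\lambda-\beta g_I)\), \(\dot{g}_I=g_E\lambda-g_I(\beta+\rho-\beta g_I)\) and \(\dot{G}_I=\beta g_I\), with \(\bar{B}_1=\beta g_I\,\Phi_{\hat{\theta}}(G_I)\) and the initial conditions \(g_S(0)=s_0\), \(g_E(0)=e_0\), \(g_I(0)=i_0\), \(G_I(0)=0\). It assumes constant rates and a truncated Poisson pmf standing in for the size-biased offspring distribution \(\hat{\theta}\); the function names and parameter values are ours and purely illustrative.

```python
import numpy as np
from math import exp, factorial

def phi_hat(z, hat_theta):
    # Phi_{hat_theta}(z) = sum_k k*hat_theta(k)*e^{-k z} / sum_n hat_theta(n)*e^{-n z}
    k = np.arange(len(hat_theta))
    w = hat_theta * np.exp(-k * z)
    return float(np.dot(k, w) / np.sum(w))

def integrate_seir(hat_theta, beta, lam, rho, s0, e0, i0, t_max=20.0, dt=1e-3):
    """Forward-Euler integration of (g_S, g_E, g_I, G_I) with constant rates (illustrative only)."""
    g_S, g_E, g_I, G_I = s0, e0, i0, 0.0
    traj = [(0.0, g_S, g_E, g_I)]
    for step in range(1, int(t_max / dt) + 1):
        B1 = beta * g_I * phi_hat(G_I, hat_theta)      # \bar{B}_1(t)
        dS = g_S * (beta * g_I - B1)
        dE = g_S * B1 - g_E * (lam - beta * g_I)
        dI = g_E * lam - g_I * (beta + rho - beta * g_I)
        dG = beta * g_I
        g_S, g_E, g_I, G_I = g_S + dt * dS, g_E + dt * dE, g_I + dt * dI, G_I + dt * dG
        traj.append((step * dt, g_S, g_E, g_I))
    return np.array(traj)

# Illustrative run: truncated Poisson(3) pmf in place of the size-biased offspring law.
kmax, mean_deg = 60, 3.0
hat_theta = np.array([exp(-mean_deg) * mean_deg**j / factorial(j) for j in range(kmax + 1)])
hat_theta /= hat_theta.sum()
traj = integrate_seir(hat_theta, beta=1.0, lam=0.5, rho=0.7, s0=0.98, e0=0.01, i0=0.01)
print(traj[-1])   # (t, g_S, g_E, g_I) at the final time
```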
epidemicのモデルとして、Susceptible-Infected-Recovered (SIR) とSusceptible-Exposed-Infected-Recovered (SEIR) を研究しています。このモデルは、時間変化する率を持つ可能性があり、ローカルに木構造を持つネットワーククラスを対象としています。これは、sparse Erd\H{o}s-R\`enyi ランダムグラフ、ランダム正則グラフ、および他の配置モデルを含むものです。SIR とSEIRの過程の動的な特性を記述する、可解な微分方程式系を同定します。この結果として、人口規模が無限大に近づくと、この微分方程式系が SIR とSEIRの過程の動的な特性を正確に記述することが示されます。さらに、一定の回復率と感染率の場合、ブレークスルーの大きさを、明確な関数のゼロとして表すことができます。この方法を使って、適切な定義の平均場予測は、ブレー
2309.10900
Incremental Multimodal Surface Mapping via Self-Organizing Gaussian Mixture Models
This letter describes an incremental multimodal surface mapping methodology, which represents the environment as a continuous probabilistic model. This model enables high-resolution reconstruction while simultaneously compressing spatial and intensity point cloud data. The strategy employed in this work utilizes Gaussian mixture models (GMMs) to represent the environment. While prior GMM-based mapping works have developed methodologies to determine the number of mixture components using information-theoretic techniques, these approaches either operate on individual sensor observations, making them unsuitable for incremental mapping, or are not real-time viable, especially for applications where high-fidelity modeling is required. To bridge this gap, this letter introduces a spatial hash map for rapid GMM submap extraction combined with an approach to determine relevant and redundant data in a point cloud. These contributions increase computational speed by an order of magnitude compared to state-of-the-art incremental GMM-based mapping. In addition, the proposed approach yields a superior tradeoff in map accuracy and size when compared to state-of-the-art mapping methodologies (both GMM- and not GMM-based). Evaluations are conducted using both simulated and real-world data. The software is released open-source to benefit the robotics community.
Kshitij Goel, Wennie Tabib
2023-09-19T19:49:03
http://arxiv.org/abs/2309.10900v2
# Incremental Multimodal Surface Mapping via Self-Organizing Gaussian Mixture Models ###### Abstract This letter describes an incremental multimodal surface mapping methodology, which represents the environment as a continuous probabilistic model. This model enables high-resolution reconstruction while simultaneously compressing spatial and intensity point cloud data. The strategy employed in this work utilizes Gaussian mixture models (GMMs) to represent the environment. While prior GMM-based mapping works have developed methodologies to determine the number of mixture components using information-theoretic techniques, these approaches either operate on individual sensor observations, making them unsuitable for incremental mapping, or are not real-time viable, especially for applications where high-fidelity modeling is required. To bridge this gap, this letter introduces a spatial hash map for rapid GMM submap extraction combined with an approach to determine relevant and redundant data in a point cloud. These contributions increase computational speed by an order of magnitude compared to state-of-the-art incremental GMM-based mapping. In addition, the proposed approach yields a superior tradeoff in map accuracy and size when compared to state-of-the-art mapping methodologies (both GMM- and not GMM-based). Evaluations are conducted using both simulated and real-world data. The software is released open-source to benefit the robotics community. ## I Introduction Robotic exploration systems are being deployed to automate multimodal data collection for applications like artifact detection [1], active thermal mapping [2], and planetary exploration [3]. For example, multi-instrument payloads (e.g. range and thermal sensors) are critical for mapping planetary caves [4]. These missions require an autonomous agent to incrementally create a mathematical model of the environment using onboard sensors and computers. The model must accurately represent fine details to enable operation in close proximity to complex, unknown structures (e.g. stalactites, thin wires, etc) and be compact to transfer via low-bandwidth communication channels [4]. Prior work has demonstrated that the exploration performance of a multi-agent team is impacted by the compactness of the environment model used in communications restricted environments due to the ability (or lack thereof) to share information [5, 6]. Few environment models are compact while enabling high-resolution reconstruction and safe navigation. Octomap [7], Voxblox [8], and GMM maps [9] are the key methodologies that have been used recently for communication-constrained exploration by Agha et al. [10], Tranzatto et al. [1], and Goel et al. [5], respectively. These methods enable safe navigation but suffer from the same limitation in terms of fixing the highest achievable map fidelity throughout exploration (e.g., minimum leaf size for Octomap, voxel size for Voxblox, and number of mixture components for GMM maps). This is inefficient because all parts of the environment are assigned the same level of highest fidelity, leading to larger map sizes and communication inefficiency [6]. To enable large-scale exploration with many robots simultaneously sharing information, the mapping algorithm must adapt to the scene complexity. 
Our recent work [11], the Self-Organizing Gaussian Mixture Models (SOGMMs), proposes an information-theoretic approach for automatically determining the number of mixture components from the underlying sensor data; however, the approach operates on single observations. Because consecutive sensor observations have significant overlap, the formulation in [11] is not well-suited for incremental mapping. Prior GMM mapping approaches not demonstrated onboard a robot either require iterating over the mixture components [12] or use approximate geometric projection methods [13, 14]. The computational requirements for these methods increase prohibitively with the number of components in the GMM map. To bridge these gaps, we extend the

Fig. 1: A reconstructed point cloud using spatial and intensity information inferred from the compact multimodal point cloud model created using the proposed approach. The representation leverages a formulation that has been demonstrated to be amenable for higher level robot autonomy objectives like exploration in complex, unstructured 3D environments. A video is available at: [https://youtu.be/VgPEECbUAnY](https://youtu.be/VgPEECbUAnY).
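As a rough illustration of the spatial-hash idea mentioned above, the sketch below indexes GMM components by the grid cell containing their means and extracts a local submap around a query point. This is our own simplified example under assumed conventions (uniform cell size, components identified only by their means); it is not the interface of the released software.

```python
import numpy as np
from collections import defaultdict

class GMMSpatialHash:
    """Illustrative spatial hash over GMM components, keyed by integer cell coordinates."""

    def __init__(self, cell_size=1.0):
        self.cell_size = cell_size
        self.cells = defaultdict(set)   # (i, j, k) -> set of component indices

    def _key(self, point):
        return tuple(np.floor(np.asarray(point, dtype=float) / self.cell_size).astype(int))

    def insert(self, component_id, mean):
        """Register a Gaussian component under the cell containing its mean."""
        self.cells[self._key(mean)].add(component_id)

    def submap(self, center, radius):
        """Return indices of components whose cells intersect a cube of half-width `radius`."""
        lo = self._key(np.asarray(center, dtype=float) - radius)
        hi = self._key(np.asarray(center, dtype=float) + radius)
        found = set()
        for i in range(lo[0], hi[0] + 1):
            for j in range(lo[1], hi[1] + 1):
                for k in range(lo[2], hi[2] + 1):
                    found |= self.cells.get((i, j, k), set())
        return sorted(found)

# Usage: index component means, then pull the local submap around a new sensor pose.
means = np.random.rand(500, 3) * 20.0          # stand-in for GMM component means
grid = GMMSpatialHash(cell_size=2.0)
for idx, mu in enumerate(means):
    grid.insert(idx, mu)
local_ids = grid.submap(center=[10.0, 10.0, 10.0], radius=4.0)
print(len(local_ids), "components in the local submap")
```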
この手紙は、環境を連続確率モデルとして表現する、漸進的なマルチモーダル表面マッピング手法を記述しています。このモデルは、高解像度の再構成を可能にし、同時に空間および強度の点群データを圧縮することができます。この研究では、ガウス混合モデル (GMM) を使用して環境を表現しています。以前の GMM ベースのマッピング研究は、情報理論的技術を用いて混合成分の数を決定する方法を開発していますが、これらのアプローチは個々のセンサ観測に対して動作するため漸進的マッピングに適していないか、あるいはリアルタイムでの動作が困難であり、特に高精度のモデリングが必要なアプリケーションではその傾向が顕著です。このギャップを埋めるため、この手紙では、GMM サブマップを高速に抽出するための空間ハッシュマップと、点群内の関連データおよび冗長データを判定するアプローチを導入しました。これらの
2309.09289
Two-dimensional UV femtosecond stimulated Raman spectroscopy for molecular polaritons: dark states and beyond
We have developed a femtosecond ultraviolet (UV) stimulated Raman spectroscopy (UV-FSRS) for $N$ molecules in optical cavities. The scheme enables real-time monitoring of the collective dynamics of molecular polaritons and their coupling to vibrations, along with the crosstalk between polariton and dark states. Through multidimensional projections of the UV-FSRS signal, we identify clear signatures of the dark states, e.g., pathways and timescales that are invisible in resonant techniques. A microscopic theory is developed for the UV-FSRS, so as to reveal the polaritonic population and coherence dynamics that interplay with each other. The resulting signal makes the dark states visible, thereby providing a new technique for probing dark-state dynamics and their correlation with polariton modes.
Jianhua Ren, Zhedong Zhang
2023-09-17T14:50:58
http://arxiv.org/abs/2309.09289v1
Two-dimensional UV femtosecond stimulated Raman spectroscopy for molecular polaritons: dark states and beyond ###### Abstract We have developed a femtosecond ultraviolet (UV) stimulated Raman spectroscopy (UV-FSRS) for \(N\) molecules in optical cavities. The scheme enables real-time monitoring of the collective dynamics of molecular polaritons and their coupling to vibrations, along with the crosstalk between polariton and dark states. Through multidimensional projections of the UV-FSRS signal, we identify clear signatures of the dark states, e.g., pathways and timescales that are invisible in resonant techniques. A microscopic theory is developed for the UV-FSRS, so as to reveal the polaritonic population and coherence dynamics that interplay with each other. The resulting signal makes the dark states visible, thereby providing a new technique for probing dark-state dynamics and their correlation with polariton modes. ## I Introduction The strong interactions between photons and molecules in micro-cavities lead to hybrid phases of matter, forming a superposition of molecular states and photons [1]. These excitations, known as molecular polaritons, may present unusual properties markedly distinct from those of normal molecules, for instance, controllable many-body couplings and the cooperative emission of light. The complexity of molecules, due to their various degrees of freedom, results in rich interactions between excitations on multiple scales. A variety of intriguing phenomena has therefore been reported in recent studies, including polariton lasing [2] and condensation [3; 4], cavity-altered chemical reactivity [7; 8; 9; 10; 11; 12; 13] and topological effects [14]. All these highlight the importance of the polariton dynamics, which, however, remains elusive. So far, the strong coupling of molecules to cavities has led to extensive studies along with intense debates arising from the dark states. Such states are hard to visualize in spectroscopy, e.g., in absorption and fluorescence. Nevertheless, the nonradiative processes--the channels causing symmetry breaking that do not exist in atomic ensembles--may lead to energy/information leakage from polariton states. Much theoretical and experimental effort has been devoted to the relaxation of polariton modes, whereby the dark states serve as exciton reservoirs for the optical systems. Notably, the crosstalk between molecular polaritons and pure molecular modes was observed in recent experiments [15; 16]. A selective dynamics of dark-state polaritons coupled to bright polaritons was thereby demonstrated. Owing to their coherent and invisible nature, the dark-state polaritons show a dephasing presumably crucial for the energy transfer process [17]. Moreover, the high mode density makes the dark states a promising route for controlling chemical reactivity and achieving the phase transition towards polariton condensation [18; 19]. Elaborate experiments demonstrated unusual dynamics of molecular polaritons beyond the Tavis-Cummings model when considering condensed-phase molecules [20; 21]. In the presence of solvent-induced disorder, extensive studies showed spectral lines that are a signature of pure molecular states weakly coupled to cavity photons [22; 23]. This indicates their localized nature. All these call for a comprehensive understanding of the polariton dynamics in conjunction with the dark states in molecules.
In this article, we propose a novel off-resonant spectroscopic probe for the molecular polaritons, based on the stimulated Raman scattering. A two-dimensional ultraviolet femtosecond stimulated Raman spectra (2DUV-FSRS) is developed for molecules strong interacting with microcavities. Using a combination of visible pump and UV probe pulses, the dark-state polaritons show a prominent Raman response. Our results demonstrate a multi-dimensional projection of the coherent Raman signal for a real-time monitoring of the dark-state dynamics. A microscopic polariton model is further developed for the 2DUV-FSRS, elaborating the multi-timescale nature for the population dynamics of polaritons in a crosstalk with the dark states. The rest of the paper is organized as follows. In Section II, we discuss the dynamics of the model used to describe the interactions between photons and molecules. Section III provides a brief review on the Raman spectroscopy. In Section IV, we derive and numerically present the one-dimensional Raman signal in real time for the model under consideration. Besides, we propose a resolution procedure that allows us to quantitatively extract the dynamics of both polaritons and dark states. We then analyze the two-dimensional Raman signal, taking into account the time delay between pulse \(\varepsilon_{1}\) in Section V. Furthermore, we have discussion on the charge transfer state in Section VI. Finally, Section VII presents our conclusions and remarks. ## II Model for molecular excitons ### Polariton and Dark state We consider a generic model consisting of N molecules in a single-mode optical cavity, as depicted in FIG. 1(a). Each molecule has one exciton mode, describing the elec tronic excitations. The excitons essentially interact with the vibrations that may feature a dense distribution in complex molecules. This system can be described by a model Hamiltonian \(H=H_{0}+H_{\rm vib}+V_{\rm int}\), where \(H_{0}\) is the exciton-photon component, e.g., \[H_{0}=\sum_{j=1}^{N}\left[\omega b_{j}^{\dagger}b_{j}+g\left(b_{j}^{\dagger}a+b_ {j}a^{\dagger}\right)+\frac{\mathcal{U}}{2}b_{j}^{\dagger}b_{j}^{\dagger}b_{j} b_{j}\right]+va^{\dagger}a \tag{1}\] Here, \(b_{j}\) and \(b_{j}^{\dagger}\) are the respective annihilation and creation operators for excitons in the \(j\)th molecule, satisfying \([b_{i},b_{j}^{\dagger}]=\delta_{ij}\). \(a\) and \(a^{\dagger}\) denote the annihilation and creation operators for photon modes with \([a,a^{\dagger}]=1\). \(\omega\) and \(v\) are the energy of the single mode for the molecule and the cavity, respectively. The photon-molecule coupling is denoted by \(g=-\sqrt{2\pi\hbar v/V}\), \(V\) is cavity volume, \(p\) is dipole moment of molecule. And the parameter \(\mathcal{U}=\sum_{a,b}\langle j_{a},j_{b}|(\vec{d}_{a}\cdot\vec{d}_{b}-3 \left(\hat{n}\cdot\vec{d}_{a}\right)\left(\hat{n}\cdot\vec{d}_{b}\right))/4 \pi\epsilon_{0}R^{3}|j_{a},j_{b}\rangle\) measures the interaction strength between excitons. where \(\vec{d}_{i}=e\vec{r}_{i}\) are the exciton dipole moments in one molecule and \(|j_{a}\rangle\) reprents the exciton state a in jth molecule. \(R\) is the separation between the dipoles and \(\hat{n}=\vec{R}/R\) the unitary vector in the direction from one dipole to another [24]. One observes the total number of excitations, i.e., \(M=\sum_{j=1}^{N}b_{j}^{\dagger}b_{j}+a^{\dagger}a\) which is conserved due to \([M,H_{0}]=0\). The Hamiltonian in Eq.(1) is thus of a block-diagonal form. 
This leads to a superposition subject to a certain \(M\), forming new molecule-cavity states. In the absence of \(\mathcal{U}\), two modes called the upper polariton (UP) and the lower polariton (LP) are found, together with the other \(N-1\) dark states (DS), also called dark-state polaritons (DSPs). A schematic diagram of the energy structure is shown in FIG. 2, where we denote the energies of the UP, LP and DSPs as \(\omega_{\rm up},\omega_{\rm lp}\) and \(\omega_{\rm ds}\), respectively, in addition to the trivial ground state \(\omega_{\rm g}\); we also include the near-edge state and the charge-transfer state, which exist to trigger different Raman processes.

Figure 1: (a) Microscopic structure of the polariton. (b) Schematic illustration of the UV-FSRS setup.

Figure 2: An illustration of the energy levels in the cavity, including the upper polariton \(\omega_{\rm up}\), dark state \(\omega_{\rm ds}\), lower polariton \(\omega_{\rm lp}\), the ground state \(\omega_{\rm g}\), as well as the near-edge state and the charge-transfer state \(\omega_{\rm ct}\).

The coupling of excitons to vibrational modes is of the form [25] \[V_{\rm int}=\sum_{i}^{N}\sum_{k}\omega_{{\rm vib},k}^{i}q_{i,k}d_{i,k}b_{i}^{\dagger}b_{i} \tag{2}\] and \(H_{\rm vib}=\sum_{k}\sum_{i}^{N}\frac{1}{2}\omega_{{\rm vib},k}^{i}C_{i,k}^{\dagger}C_{i,k}\), where the vibrational modes are assumed to have a dense distribution that can be described by a smooth spectral density. \(q_{i,k}=\sqrt{\hbar/2m\omega_{{\rm vib},k}^{i}}(C_{i,k}^{\dagger}+C_{i,k})\) denotes the coordinate of the molecular stretching. Defining the polariton operators \[\eta_{n}=\sum_{j}^{N}U_{n,j}b_{j}+U_{n,N+1}a \tag{3}\] through diagonalizing \(H_{0}\) apart from the \(\mathcal{U}\) term in Eq.(1), we have \([\eta_{n},\eta_{m}^{\dagger}]=\delta_{nm}\) and recast the Hamiltonian into \[H_{0}=\sum_{i=1}^{N+1}\omega_{i}\eta_{i}^{\dagger}\eta_{i}+\frac{\mathcal{U}}{2}\sum_{k,l,m,n}^{N+1}K_{klmn}\eta_{k}^{\dagger}\eta_{l}^{\dagger}\eta_{m}\eta_{n} \tag{4a}\] \[V_{\rm int}=\sum_{m>n,i,k}^{N+1}\mathcal{V}_{i,k}\left(U_{im}^{\dagger}U_{ni}\eta_{m}^{\dagger}\eta_{n}C_{i,k}+\text{h.c.}\right) \tag{4b}\] where \(\omega_{N+1/N}=\omega_{\rm up/lp}=\frac{1}{2}\left(v+\omega\pm\sqrt{4g^{2}+(v-\omega)^{2}}\right)\), \(\omega_{1,2,\ldots,N-1}=\omega_{\rm ds}=\omega\), \(K_{klmn}=\sum_{j}^{N}U_{jk}^{\dagger}U_{jl}^{\dagger}U_{mj}U_{nj}\), and \(\mathcal{V}_{i,k}=\sqrt{1/2}\,\omega_{{\rm vib},k}^{i}d_{i,k}\). Note that we have not included terms such as \(\eta_{m}^{\dagger}\eta_{n}C_{i,k}^{\dagger}\) because they are negligible according to the rotating-wave approximation. Since the number of intramolecular vibrational degrees of freedom is very large and their interaction with the molecules is notably weak, we can treat the vibrations as a bath system that affects the molecule-photon system with negligible backreaction.
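As a quick sanity check of the single-excitation structure described above, the following sketch (ours, with the illustrative parameter choices quoted later in the paper) diagonalizes the exciton-photon block of Eq. (1) with \(\mathcal{U}=0\) in the one-excitation sector. It confirms \(N-1\) degenerate dark states at \(\omega\) and two polariton branches at \(\frac{1}{2}\big(\omega+v\pm\sqrt{4Ng^{2}+(\omega-v)^{2}}\big)\), i.e., the quoted \(\omega_{\rm up/lp}\) once \(g\) is read as the collective coupling \(\sqrt{N}g\).

```python
import numpy as np

def single_excitation_spectrum(N, omega, v, g):
    """Eigenvalues of the exciton-photon block of Eq. (1) (U = 0) in the one-excitation sector.

    Basis: one exciton on molecule 1..N, or one cavity photon.
    """
    H = np.diag([omega] * N + [v]).astype(float)
    H[:N, N] = g       # each molecule couples to the cavity mode with strength g
    H[N, :N] = g
    return np.sort(np.linalg.eigvalsh(H))

N, omega, v = 10, 1.84, 1.84             # zero detuning, energies in eV (illustrative)
g = 0.05 * np.sqrt(2) / np.sqrt(N)       # so that the collective coupling g*sqrt(N) = 0.05*sqrt(2)
evals = single_excitation_spectrum(N, omega, v, g)

G = g * np.sqrt(N)                       # collective coupling
lp = 0.5 * (omega + v - np.sqrt(4 * G**2 + (omega - v)**2))
up = 0.5 * (omega + v + np.sqrt(4 * G**2 + (omega - v)**2))
print("LP, UP from formula:", lp, up)
print("numerical spectrum  :", evals)    # N-1 eigenvalues at omega (dark states), plus LP and UP
```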
Therefore, we consider the molecule-photon cavity system as an open system by tracing out the vibrations and obtain the polariton Redfield equation \(\dot{\rho}=-i[H_{0},\rho]+\dot{\mathcal{W}}\rho\)[26] \[\dot{\mathcal{W}}\rho=\sum_{m>n}\frac{\gamma_{mn}}{2}[(\eta_{m}^{ \dagger}\eta_{n}\rho\eta_{n}^{\dagger}\eta_{m}-\eta_{n}^{\dagger}\eta_{m}\eta_{ m}^{\dagger}\eta_{n}\rho)\bar{n}_{w_{mn}}\] \[+(\eta_{n}^{\dagger}\eta_{m}\rho\eta_{m}^{\dagger}\eta_{n}-\eta_ {m}^{\dagger}\eta_{n}\eta_{n}^{\dagger}\eta_{m}\rho)(\bar{n}_{\omega_{mn}}+1)]+ \text{h.c.}\] (5a) with \[\gamma_{mn}=\sum_{i}^{N}J_{i}(\omega_{mn})U_{mi}^{*}U_{mi}U_{ni}U_{ni}^{*}\] and the spectral density [27] \[J_{i}(\omega_{mn})=2\lambda_{0}\frac{\omega\gamma_{0}}{\omega^{2}+ \gamma_{0}^{2}}. \tag{6}\] The solution to the polariton Redfield equation is given by \[\rho_{e_{4}e_{3}}(t)=\sum_{e_{2}e_{1}}\mathrm{G}_{e_{4}e_{3},e_{2}e_{1}}(t) \rho_{e_{2}e_{1}}(0) \tag{7}\] where \(G(t)\) is the Green's propagator for Eq.(5a), in the absence of external fields [26]. The polariton dynamics governed by Eq.(7) will be imprinted into the Raman response, when interacting with laser pulses. The nonlinear optical signals are thus capable of reading out the polariton resonance and dynamics. ## III Preliminary on Raman spectroscopy In this work, our strategy involves applying stimulated Raman techniques to study femtosecond ultraviolet stimulated Raman spectroscopy (UV-FSRS) for molecular platforms and optically-dark states [28; 29; 30; 31; 32]. We choose the UV light, as the UV-light-induced Raman transition would be background-free, due to the off-resonant nature. Moreover, the UV laser pulses can make it feasible to induce the electronic Raman polarizability via the near-edge states of molecules. For the 2DUV-FSRS, the excitations of system are created by a resonant pump pulses; then a pair of overlapped broad- and narrow-band pulses scatters off the system, so as to produce the stimulated Raman transition. The interaction thus reads \[H_{\mathrm{int}}=\alpha\varepsilon_{2}^{\dagger}(t)\varepsilon_{3}(t)-\mu \varepsilon_{1}^{\dagger}(t)+\mathrm{H.C} \tag{8}\] where \(\mu\) represents electric dipole operator and \(\alpha\) denotes the Raman polarizability operator, i.e., \(\alpha=\sum_{e,e^{\prime}}\alpha_{ee^{\prime}}|e\rangle\langle e^{\prime}|\)[33] \[\alpha_{ee^{\prime}}=\sum_{i}^{N}\frac{P_{i}^{2}|U_{ie}^{\dagger}U_{e^{\prime} i}|}{\hbar}\left(\frac{1}{\omega_{i}-\omega_{e^{\prime}}}+\frac{1}{\omega_{i}- \omega_{e}}\right). \tag{9}\] The energy of the near-edge states \(|r_{i}\rangle\) is high enough to ensure that it does not admit any interactions with the cavity [34], and \(P_{i}\) refers to the transition dipole between different energy levels in the \(i\)th molecule, namely \(P_{i}=\langle e_{i}|\mu|r_{i}\rangle\). Since \(|r_{j}\rangle\) is highly excited and thus is decoupled from the cavity, a random phase is essentially attached to the Raman transition amplitude, i.e., \(U_{ei}=|U_{ei}|e^{i\phi_{ei}}\). Therefore, we have to take the ensemble average over the phase \(\langle|U_{ie}^{\dagger}U_{e^{\prime}i}|e^{i(\phi_{e^{\prime}i}-\phi_{ei})} \rangle_{\mathrm{ave}}=|U_{ie}^{\dagger}U_{e^{\prime}i}|\), ensuring that Raman polarizability between two excited states is well-defined. The transmission of the short pulse (Raman process) is measured, so that the optical signal is \(S=\langle\varepsilon_{3}^{*}(t)\varepsilon_{3}(t)\rangle\). 
Using Heisenberg's equation of motion for the fields, one has [35] \(S=\int dt\,\langle(-i)[\varepsilon_{3}^{\dagger}(\omega)\varepsilon_{3}(\omega),H_{\rm int}]\rangle\), where only the most significant terms are retained, i.e., the resonant pump and Raman probe that are time ordered. As the Raman pulses excite the molecules, polariton modes then begin to interact with both dark states and the vibrational bath. This interaction causes relaxation and repopulation, and the relevant dynamical information is encoded in the density matrix, which can be described by the Born-Redfield equation [26]. As a result, the multidimensional projections of the UV-FSRS signal reflect the real-time information regarding the dark states and their correlations with polariton modes. More specifically, the Raman signal can be evaluated by \[S=\frac{1}{\pi}\Im\int dt\,e^{i\omega(t-T)}\varepsilon_{3}^{*}(\omega)\varepsilon_{2}(t-T)\cdot\mathrm{Tr}[\hat{\alpha}\rho] \tag{10}\] In this work, we consider the density matrix \(\rho\) up to third order. ## IV One-dimensional UV femtosecond stimulated Raman spectroscopy To calculate the Raman signal from Eq.(10), we must perform the integral over the time domain. This is hard in general, but becomes achievable by assuming the Lorentzian pulse shape, \[\varepsilon_{i}(t-T_{i})=\theta(t-T_{i})e^{-\frac{t-T_{i}}{\sigma_{i}}}e^{-i\omega_{i}t} \tag{11}\] where \(T_{i}\) and \(\omega_{i}\) denote the central time and frequency of the pulse, respectively. The resonant pump \(\varepsilon_{1}\) excites the system, and the pulses \(\varepsilon_{2}\) and \(\varepsilon_{3}\) act with a time delay \(T=T_{2}-T_{1}\) relative to \(\varepsilon_{1}\), inducing the Raman transition. The pulse fields inducing the Raman transition have identical arrival times, i.e., \(T_{2}=T_{3}\). Due to the broadband nature of the resonant pump, a wide spectrum of the excited states is covered. The Raman signal given by Eq.(10) thus includes population \(\rho_{ee}\) and coherence \(\rho_{ee^{\prime}}\) (\(e\neq e^{\prime}\)) components of the molecular polaritons. When the Raman fields and the pump field are well separated in time, namely, \(T\gg\sigma_{1},\sigma_{2},\sigma_{3}\), the FSRS experiment can be viewed as a three-step process: preparation, propagation and Raman detection. The two resonant pump fields create an initial doorway state of the polaritons, which propagates and is finally probed at a time delay \(T\) with a window operation. For a precise definition of the three-step process, we expand Eq.(10) in the Raman coupling, obtaining \[\begin{split} S_{\text{1D}}&(\omega-\omega_{2},T)\propto N\int_{-\infty}^{\infty}dt\int_{-\infty}^{t}d\tau e^{i(\omega-\omega_{2})t}\varepsilon_{3}^{*}(\omega)\\ &\times\varepsilon_{2}(t-T)\varepsilon_{2}^{*}(\tau-T)\varepsilon_{3}(\tau-T)\text{Tr}\Big{\{}\alpha\big{[}\alpha(\tau),\rho_{0}\big{]}\Big{\}}\\ &\approx N\int_{-\infty}^{\infty}dt\int_{-\infty}^{T}d\tau e^{i(\omega-\omega_{2})t}\varepsilon_{3}^{*}(\omega)\varepsilon_{2}(t)\varepsilon_{2}^{*}(\tau)\\ &\times\varepsilon_{3}(\tau)\text{Tr}\Big{\{}\alpha(t)\big{[}\alpha(\tau),\rho(T)\big{]}\Big{\}}.\end{split} \tag{12}\] with a proper approximation in the last step, valid when the delay \(T\) is longer than the pulse durations. Such an approximation works for most ultrafast molecular spectroscopic experiments.

Figure 3: Loop diagram of the Raman spectroscopy.
This allows the definition of the _Raman window operators_ \[W(\omega-\omega_{2})=\int_{-\infty}^{\infty}dte^{i(\omega-\omega_{2})t} \varepsilon_{3}^{*}(\omega)\varepsilon_{2}(t)\alpha(t), \tag{13a}\] \[V(\sigma)=\int_{-\infty}^{T}d\tau\varepsilon_{2}^{*}(\tau) \varepsilon_{3}(\tau)\alpha(\tau). \tag{13b}\] Using Eqs.(12), (13a) and (13b), the UV-FSRS signal for the Raman shift \(\omega-\omega_{2}\) reads \[S_{\text{1D}}(\omega-\omega_{2},T)\propto\Re\text{Tr}\Big{\{}W(\omega-\omega_ {2})\big{[}V(\sigma),\rho(T)\big{]}\Big{\}}. \tag{14}\] To prepare the polariton state \(\rho(T)\) the _doorway operators_ are useful to describe the process by the resonant pump pulses. A _doorway-Raman-window_ representation can be therefore developed, which would be powerful for an unified understanding of the multidimensional Raman spectroscopy. We will elaborate this in Sec.V for the two-dimensional UV-FSRS. To see the Raman signal closely and study the dynamics of polaritons and dark states numerically, we set a strong coupling \(g\sqrt{N}=0.05\sqrt{2}\) without losing generality. We have simulated the polariton FSRS signals (given by Eq. (14) ) in FIG. 4 and FIG. 6 with different detunings. Before moving on to the detailed analysis of the signals, let us briefly comment on the implications derived from the signal in one dimension without referring to many numerical details. According to first-order perturbation theory, we know that the pulse \(\varepsilon_{1}\) initially excites the polaritons to the superposition state \(\mathcal{N}^{-1}(\mu_{\text{up,g}}|\text{UP}\rangle+\mu_{\text{lp,g}}|\text{LP}\rangle)\), where \(\mathcal{N}=\sqrt{\mu_{\text{up,g}}^{2}+\mu_{\text{lp,g}}^{2}}\) is the normalization factor. Generally speaking, the intensity of the signal curve indicates the probability of a certain process occurring, which boils down to the population of the relevant states. For example, a peak located at \(\omega_{e_{3}e_{3}}\geq 0\) reflects that the linear combination of density matrices \(\rho_{e_{3},e_{3}}+\rho_{e_{5},e_{6}}\) is dominating, rather than the individual population \(\rho_{e_{3},e_{3}}\) or \(\rho_{e_{5},e_{6}}\). This is attributed to the two components: dissipative (FIG. 3(a,c)) and parametric (FIG. 3(b,d)) processes that give the same Raman resonance. This argument can guide us to qualitatively decode more populations from the Raman signal. Figure 4: The stimulated Raman signals of \(N=10\) molecules, without detuning, are shown. In this paper, we consider cyanine dye J-aggregates [36] with \(\omega=1.84\text{ev}\) and \(\mathcal{U}=0.02\text{ev}\), with \(\hbar=1\) throughout our discussions. We have taken \(\delta=243\text{T}\text{Hz}\) as the center frequency difference, \(\sigma_{2}=1\text{ps}\), and \(\sigma_{3}=35\text{fs}\) for the broad pulse \(\varepsilon_{2}\) and the narrow pulse \(\varepsilon_{3}\), respectively. To better simulate the real situation, we introduce the phenomenological parameter \(\gamma=100\text{T}\text{Hz}\) to modify \(\xi\text{s}\) by \(\xi_{e_{3}e_{3}}=\omega_{e_{5}e_{3}}-i(\gamma_{e_{5}e_{3}}+\gamma)\), which can account for the possible dephasing rate caused by experimental uncertainty. Subsequent signals use the same parameters. Horizontally, (a), (b), and (c) monitor the real-time evolution of the stimulated Raman signals with different time delays \(T\); vertically, the total signals, populations, and coherence parts are shown, respectively. 
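As a rough check on how Eq. (14) maps polariton coherences onto Raman peaks, the window integral can be evaluated for a single transition. The sketch below is schematic rather than the full signal calculation: it Fourier-transforms one damped coherence under the exponential pump envelope of Eq. (11) and shows that a transition at \(\omega_{ab}\) appears as a Lorentzian peak at the Raman shift \(\omega-\omega_{2}=\omega_{ab}\), with a half-width set by the dephasing plus the inverse pump duration. Frequencies are treated as angular frequencies in rad/ps (the "THz" convention of the figures), which is an interpretation on our part.

```python
import numpy as np

# One damped polariton coherence at omega_ab, windowed by the 1 ps Raman pump
# of Eq. (11), gives a Lorentzian Raman peak of half-width Gamma + 1/sigma2.
omega_ab = 88.0      # e.g. the UP <-> dark transition quoted for Fig. 6 (rad/ps)
Gamma    = 100.0     # phenomenological dephasing used in the simulations (rad/ps)
sigma2   = 1.0       # Raman pump duration (ps)

dt = 1e-5
t  = np.arange(0.0, 0.2, dt)                      # ~20 dephasing times
kernel = np.exp(-(Gamma + 1.0 / sigma2) * t) * np.exp(-1j * omega_ab * t)

shift  = np.linspace(0.0, 300.0, 601)             # Raman shift axis (rad/ps)
signal = np.array([np.imag(1j * np.sum(np.exp(1j * d * t) * kernel) * dt)
                   for d in shift])

hwhm = Gamma + 1.0 / sigma2
print("peak at %.1f rad/ps (expected %.1f), half-width %.1f rad/ps"
      % (shift[signal.argmax()], omega_ab, hwhm))
```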
The relative strength of peaks observed in the real-time delay can be used to resolve the populations of molecular states. In the following part, one-dimensional UV-FSRS will be discussed in detail for cases with or without detuning between molecules and the cavity, under specific parameters. ### Zero cavity-molecule detuning In this subsection, we begin with considering the case of zero detuning between the molecular excitons and the cavity, i.e., \(\omega-v=0\). In FIG. 4, we display the total signal, population, and coherence in the top, middle, and bottom lines, respectively. Let's begin with analyzing the population, as shown in the middle line of FIG. 4. Owing to the vanishing detuning \(\omega-v=0\), we have \(\omega_{\rm up,ds}=\omega_{\rm ds,lp}\). As a consequence, we can observe two peaks: (1) the transitions UP \(\leftrightarrow\) Dark and Dark\(\leftrightarrow\) LP that merge together; (2) the transition UP \(\leftrightarrow\) LP. By varying the time delay T, the populations at the dark states can be resolved from the two peaks in FIG. 4(a,b,c). However, due to the transitions UP \(\leftrightarrow\) Dark and Dark \(\leftrightarrow\) LP that merge in one peak at 107THz, FIG. 4 enables a real-time monitoring of a coupled dynamics for the polariton states, i.e., \(\rho_{\rm up,up}+\rho_{\rm lp,lp}\). This is further confirmed by the simulations of polariton Redfield equation in Eq.(5a) that gives the population dynamics of the polariton states depicted in FIG. 5. Moreover, FIG. 4 shows that the signal resolves the coherence nature of the polaritons, within a short timescale. In particular, the peak at \(\omega_{\rm up,ds}(\omega_{\rm ds,lp})\) presents an oscillation during \(\sim 1\)ps, on top of the population components \(\rho_{\rm up,up}+\rho_{\rm ds,ds}\). This reveals the coherence between the UP and the LP. ### With detuning \(\omega-v=1.25g\) We consider a detuning of \(\omega-v=1.25g\) between molecular excitons and cavity photons. As seen from the energy-level structure of polaritons, the two transitions UP \(\leftrightarrow\) Dark and Dark \(\leftrightarrow\) LP lead to a splitting of the spectral lines. Beginning with the analysis of the population shown in the middle line of FIG. 6, we clearly observe three peaks at \(\omega_{\rm up,ds}=88\)THz, \(\omega_{\rm ds,lp}=130\)THz, and \(\omega_{\rm up,lp}=213\)THz. As in the previous analysis, the peak at \(\omega_{\rm up,ds}\) is expected to represent the combined dynamics of UP and dark states. This peak decreases with time delay, contrasting with the behavior of the LP state. The peak at \(\omega_{\rm ds,lp}\), however, continually increases, which aligns with the growth of the LP state. This peak corresponds to the combined dynamics of the LP and dark states. Similarly, the peak for \(\omega_{\rm up,lp}\) reflects the dynamics of UP and LP states. Its behavior is consistent with the dynamics of the superposition of UP and LP, which first decreases and then begins to increase. The population dynamics of polaritons coupled to the dark states can be read out from the Raman signal. For a longer timescale, longer than the coherence lifetime, the population dynamics of cavity polaritons dominates. 
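The peak positions invoked above follow directly from the single-excitation eigenstructure. A minimal sketch, assuming that the quoted coupling \(g\sqrt{N}=0.05\sqrt{2}\) is an energy in eV and that the frequency axes are angular frequencies (1 eV\(/\hbar\approx 1519\) rad/ps), reproduces the merged \(\approx 107\) THz peak at zero detuning, the \(\approx 88\) THz and \(\approx 130\) THz gaps at \(\omega-v=\pm 1.25g\) (exchanged for the negative sign), and an UP \(\leftrightarrow\) LP shift near 215 THz:

```python
import numpy as np

EV_TO_RADPS = 1.519e3      # 1 eV / hbar in rad/ps (assumed unit convention)
N, omega = 10, 1.84        # number of molecules, exciton energy (eV)
g = 0.05 * np.sqrt(2) / np.sqrt(N)   # single-molecule coupling, g*sqrt(N) = 0.05*sqrt(2) eV

def raman_shifts(detuning_in_g):
    v = omega - detuning_in_g * g             # cavity frequency
    H = np.diag([v] + [omega] * N)            # single-excitation Hamiltonian
    H[0, 1:] = H[1:, 0] = g
    E = np.linalg.eigvalsh(H)
    lp, ds, up = E[0], E[1], E[-1]            # dark manifold stays at the exciton energy
    pairs = {"up-ds": (up, ds), "ds-lp": (ds, lp), "up-lp": (up, lp)}
    return {name: (a - b) * EV_TO_RADPS for name, (a, b) in pairs.items()}

for d in (0.0, 1.25, -1.25):
    shifts = {k: round(val, 1) for k, val in raman_shifts(d).items()}
    print(f"detuning {d:+.2f} g :", shifts)
```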
After some manipulations, we are able to find a group of algebraic equations for populations and signal under impulsive approximation, up to small errors, i.e., \[S_{\rm 1D}^{(p)}(\omega_{\rm up}-\omega_{\rm ds},T)\simeq \tilde{\alpha}_{\rm 1D,upds}^{(1)}(\rho_{\rm dsds}(T)+\rho_{\rm upup}(T))+ \tilde{\alpha}_{\rm 1D,dslp}^{(1)}(\rho_{\rm dsds}(T)+\rho_{\rm lplp}(T))+ \tilde{\alpha}_{\rm 1D,uplp}^{(1)}(\rho_{\rm lplp}(T)+\rho_{\rm upup}(T))\,,\] \[S_{\rm 1D}^{(p)}(\omega_{\rm ds}-\omega_{\rm lp},T)\simeq\tilde{ \alpha}_{\rm 1D,upds}^{(2)}(\rho_{\rm dsds}(T)+\rho_{\rm upup}(T))+\tilde{ \alpha}_{\rm 1D,dslp}^{(2)}(\rho_{\rm dsds}(T)+\rho_{\rm lplp}(T))+\tilde{ \alpha}_{\rm 1D,uplp}^{(2)}(\rho_{\rm lplp}(T)+\rho_{\rm upup}(T))\,,\] \[S_{\rm 1D}^{(p)}(\omega_{\rm up}-\omega_{\rm lp},T)\simeq\tilde{ \alpha}_{\rm 1D,upds}^{(3)}(\rho_{\rm dsds}(T)+\rho_{\rm upup}(T))+\tilde{ \alpha}_{\rm 1D,dslp}^{(3)}(\rho_{\rm dsls}(T)+\rho_{\rm lplp}(T))+\tilde{ \alpha}_{\rm 1D,uplp}^{(3)}(\rho_{\rm lplp}(T)+\rho_{\rm upup}(T)) \tag{15}\] where the coefficients are defined by \[\tilde{\alpha}_{\rm 1D,ab}^{(j)}=\frac{1}{\pi}{\rm Im}(-i)^{3}\sum_{n} ^{N-1}|\alpha_{a,b}|^{2}f(\omega^{(j)},\xi_{ab}) \tag{16}\] Here we define the function \(f(v,\xi)\) to simplify the final expressions, which is a function related only to the pulse parameter and the energy gap and derived from integral of (14): \[f(v,\xi)=\frac{1}{-\frac{\omega_{\rm 2}^{2}+i\delta-\frac{i}{\sigma_{ 3}}+i\xi}}\Big{(}\frac{1}{i(v-\xi)-\frac{i}{\sigma_{2}}}-\frac{1}{-\frac{\omega _{\rm 2}^{2}}{-\sigma_{2}^{2}+i(v+\delta)-\frac{1}{\sigma_{3}}}}\Big{)} \tag{17}\] where we assume the broadband Raman pulse \(\varepsilon_{3}\) has duration much shorter than the polariton relaxation and dephasing process.In these formulas, \(j=1,2,3\) indicates that \(\omega^{(1,2,3)}\) corresponds to \(\omega_{\rm up,ds},\omega_{\rm ds,lp},\omega_{\rm up,lp}\) respectively, and \(\delta=\omega_{\rm 2}-\omega_{3}\). Using (15), we can solve for the density matrices \(\rho_{\rm ds,ds},\rho_{\rm up,up}\) and \(\rho_{\rm lp,lp}\) in terms of the Raman spectral lines, given superposition \(\mathcal{N}^{-1}(\mu_{\rm up,g}|{\rm UP})+\mu_{\rm lp,g}|{\rm LP}))\) an initial state. We illustrate the dynamics of this picture in FIG. 7, where the initial state is a superposition of LP and UP states with \(\mu_{\rm up,g}=2.443,\mu_{\rm lp,g}=2.008\), and the normalization is given by \(\mathcal{N}\simeq 3.162\). Due to the existing detuning between molecules and the cavity, the transition dipole between UP and LP is no longer equal, which explains why UP and LP states have different initial weights in FIG. 7. As displayed in FIG. 7, the bright and dark polariton populations resolved by Eq.(15) with the Raman signal have a perfect match to the ones obtained from the Redfield equation Eq.(5a). Interestingly, after relaxation ends around \(T=20\)ps, dark-state polaritons occupy a significant proportion. This insightful finding suggests that dark states can be more effectively explored after relaxation. Although we use Raman spectroscopy to accurately resolve the dynamics of the upper and lower polaritons, as well as the dark states, the main restriction is that we can only initially pump the superposition of the upper and lower polaritons. We cannot initially pump to any of them individually. Fortunately, we can adjust the time delay between \(\varepsilon_{1}\) to precisely pump either the upper polariton or the lower polariton. 
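Since Eq. (15) is linear in the three populations, \(\rho_{\rm dsds}\), \(\rho_{\rm upup}\) and \(\rho_{\rm lplp}\) can be recovered from the three peak intensities at each delay by a \(3\times 3\) solve. A minimal sketch is given below; the coefficient values \(\tilde{\alpha}\) and peak heights are placeholders for illustration, not the numbers entering FIG. 7.

```python
import numpy as np

# Eq. (15): each peak intensity is a weighted sum of pairwise population sums.
# Rows j = 1,2,3 label the three Raman peaks; columns = (upds, dslp, uplp).
alpha = np.array([
    [1.00, 0.30, 0.20],
    [0.25, 1.00, 0.20],
    [0.15, 0.15, 1.00],
])                                  # illustrative alpha~ coefficients
S = np.array([0.85, 0.95, 0.60])    # illustrative peak intensities at one delay T

# map the alpha~ coefficients to weights on (rho_ds, rho_up, rho_lp)
A = np.column_stack([
    alpha[:, 0] + alpha[:, 1],      # rho_ds appears in the upds and dslp terms
    alpha[:, 0] + alpha[:, 2],      # rho_up appears in the upds and uplp terms
    alpha[:, 1] + alpha[:, 2],      # rho_lp appears in the dslp and uplp terms
])
rho_ds, rho_up, rho_lp = np.linalg.solve(A, S)
print(f"rho_ds = {rho_ds:.3f}, rho_up = {rho_up:.3f}, rho_lp = {rho_lp:.3f}")
```

With intensities recorded at many delays, the same solve applied delay-by-delay (or `np.linalg.lstsq` when more peaks than unknowns are available) traces out the population curves that are compared with the Redfield dynamics in FIG. 7.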
## V Two-dimensional UV femtosecond stimulated Raman spectroscopy To overcome the spectral bottleneck by the 1D Raman spectra above, we will use two pulses for a selective excitation of molecular polaritons. Here a pair of short pulses with an additional delay \(T_{0}\) are pumping the system, creating resonant excitations. The Raman emission is collected after a delay of \(T\) relative to the 2nd pump pulse, as depicted in FIG. 1(b). Using the Dyson series up to the 2nd order against the couplings with the resonant pump pulses, we find the polariton density matrix \[\begin{split}\rho(t)&=N\iint_{-\infty}^{t}d\tau_{2} d\tau_{1}\theta(\tau_{2}-\tau_{1})\mu(\tau_{2})\rho_{0}\mu(\tau_{1})\\ &\qquad\qquad\times\varepsilon_{1}(\tau_{2}-T_{1}^{\prime}) \varepsilon_{1}^{*}(\tau_{1}-T_{1})+\text{h.c.}\\ &\approx N\int_{-\infty}^{\infty}dt^{\prime}\int_{-t^{\prime}}^{ \infty}dt^{\prime\prime}\hat{G}(t-T_{1}^{\prime})[\mu(t^{\prime})\rho_{0}\mu (-t^{\prime\prime})]\\ &\qquad\qquad\times\varepsilon_{1}(t^{\prime})\varepsilon_{1}^{*}( T_{0}-t^{\prime\prime})+\text{h.c.}\end{split} \tag{18}\] given \(t\gg\) pulse duration. \(\varepsilon_{1}(t-T_{1}),\ \varepsilon_{1}(t-T_{1}^{\prime})\) denote the two resonant pump fields and \(T_{0}\equiv T_{1}^{\prime}-T_{1}\). \(\hat{G}(t)\) is the Green's propagator defined in Eq.(7) for the polariton systems. Eq.(18) enables the definition of the _doorway operators_ for the resonant excitation process [37] \[\begin{split} D(T_{0})&=N\int_{-\infty}^{\infty}dt ^{\prime}\int_{-t^{\prime}}^{\infty}dt^{\prime\prime}\varepsilon_{1}(t^{\prime })\varepsilon_{1}^{*}(T_{0}-t^{\prime\prime})\\ &\qquad\qquad\qquad\times\mu(t^{\prime})\rho_{0}\mu(-t^{\prime \prime})+\text{h.c.},\\ D(\omega_{0})&=\int_{-\infty}^{\infty}dT_{0}e^{i \omega_{0}T_{0}}D(T_{0})\end{split} \tag{19}\] so that \[\rho(t)=\hat{G}(t-T_{1}^{\prime})D(T_{0}). \tag{20}\] Inserting Eq.(20) into Eq.(14) and performing the Fourier transform over \(T_{0}\), the calculations proceed as usual. We thus Figure 5: The dynamics of populations, solved using the master equation without detuning between the molecules and the cavity. In this case, the molecules are initially pumped to the superposition of the upper polariton and the lower polariton, with the transition dipole being \(\mu_{\text{g,up}}=\mu_{\text{g,lp}}\). Figure 6: The stimulated Raman signals of \(N=10\) molecules with a detuning between the molecule and cavity \(\omega-v=1.25g\) are presented. Figures (a), (b), and (c) display the stimulated Raman signals at different time delays. Vertically, the total signals, populations, and coherence are displayed sequentially. The relative intensity of peaks observed during real-time delay can be used to resolve the populations of the molecular states. Figure 7: Comparison between the populations resolved by the Raman signal (dashed line) and the populations obtained from the master equation (solid line) with a detuning of \(1.25g\). The transition dipole is \(\mu_{\text{g,up}}=2.443\), \(\mu_{\text{g,lp}}=2.008\). The green, orange, and blue colors correspond to the dynamics of the upper polariton, dark state, and lower polariton, respectively. 
obtain the 2DUV-FSRS signal in the _doorway-Raman-window_ formalism, i.e., \[\begin{split} S_{\text{2D}}&(\omega-\omega_{2},T, \omega_{0})\\ &\propto\Im\text{Tr}\Big{\{}W(\omega-\omega_{2})\big{[}V(\sigma), \hat{G}(T)D(\omega_{0})\big{]}\Big{\}}\end{split} \tag{21}\] with \(T\gg\) pulse durations meaning that the Raman pump-probe fields are temporally separated from the resonant pump fields; \(W(\omega-\omega_{2})\) and \(V(\sigma)\) are the Raman window operators given by Eqs.(13a) and (13b), respectively. The salient feature of the two-dimensional Raman signal can be easily seen from (21): it allows one to selectively access the polariton states by the pump. We can then expect a real-time monitoring of the pathways of the polariton dynamics, which are a hard task for the 1D Raman signal. ### Without detuning \(\omega-v=0\) Let's briefly analyze the behavior of the 2D Raman signal shown in FIG. 8, with \(\delta=0\). Different values of \(\omega_{0}\) allow us to select different initial pumping states, as indicated by the vertical axis of FIG. 8. From the vertical axis, it is clear that the peaks are localized around either the upper polariton (\(\omega_{0}=2890\)THz) or the lower polariton (\(\omega_{0}=2680\)THz), referring to the initial pumping to the upper polariton and lower polariton, respectively. In other words, we expect that analyzing the peaks that vertically localize around the upper polariton will reveal the real-time populations for the initial upper pump, and the same applies to the lower polariton case. To analyze the case with the initial pumping to the upper polariton, we should focus on the upper peaks in a given plot of FIG. 8. We observe that the signal is dominated by two peaks corresponding to \(\omega-\omega_{2}=\omega_{\mathrm{up,ds}}=\omega_{\mathrm{ds,lp}}=103\)THz and \(\omega-\omega_{2}=\omega_{\mathrm{up,ds}}=206\)THz. Because \(\omega-v=0\), the energy gap \(\omega_{\mathrm{up,ds}}\) equals \(\omega_{\mathrm{ds,lp}}\). In this case, the first peak is the merged one from \(\omega_{\mathrm{up,ds}}\) and \(\omega_{\mathrm{ds,lp}}\) due to zero detuning. Consequently, as in the one-dimensional case, the peak at \(\omega-\omega_{2}=\omega_{\mathrm{up,ds}}=\omega_{\mathrm{ds,lp}}\) reflects a mixed population from the upper polariton, dark state, and lower polariton with different ratios, making it difficult to obtain valid information about each of them. However, the peak at \(\omega-\omega_{2}=\omega_{\mathrm{up,ds}}\) only encodes the information of the upper and lower polariton. Its intensity first decreases and then grows, which is consistent with the dynamics of \(\rho_{\mathrm{up,up}}+\rho_{\mathrm{lp,lp}}\) shown in FIG. 10(a). Similarly, when the initial pump is pulsed to the lower polariton (the \(\omega_{0}=2680\)THz line), the intensity of the peak at \(\omega-\omega_{2}=\omega_{\mathrm{up,ds}}\) continues to decrease, which aligns with the dynamics of \(\rho_{\mathrm{up,up}}+\rho_{\mathrm{lp,lp}}\), as shown in FIG. 10(b). Regarding the coherence part of FIG. 8, we see the peaks are concentrated vertically on the upper polariton (\(\omega_{0}=2890\)THz) and lower polariton (\(\omega_{0}=2680\)THz), implying that the coherence oscillates only between the UP and LP states. A simple explanation is that we cannot initially pump to the dark state. We can clearly observe the oscillation and decay in the coherence section of FIG. 8. The combination of population and coherence is shown on the top line in FIG. 8. 
We observe that there are no coherence contributions anymore at \(\mathrm{T}=1\)ps. Until now, we have observed the dynamics of \(\rho_{\mathrm{up,up}}+\rho_{\mathrm{lp,lp}}\). However, obtaining their individual dynamics is challenging due to the overlapping of the first two peaks \(\omega-\omega_{2}=\omega_{\mathrm{up,ds}}\) and \(\omega-\omega_{2}=\omega_{\mathrm{ds,lp}}\). In the following sections, we will turn on the detuning, which will separate these two peaks. ### With detuning \(\omega-v=1.25g\) In a global view, we observe two peaks at \(\omega_{\mathrm{up}}\) and \(\omega_{\mathrm{lp}}\) when taking slices along \(\omega_{0}\). This is due to the destructive interference in the dark-state polaritons that results in zero net dipole. Nevertheless, three peaks can be seen along the slices with fixed \(\omega_{0}\). This indicates the active response of the dark-state polaritons. The 1st row of FIG. 9 shows the full 2DUV-FSRS given by Eq.(21), with scanning the delay T. For the slice at Figure 8: Two-dimensional Raman signal without the detuning. Figure 9: Two-dimensional Raman signal obtained with a detuning of \(1.25g\). \(\omega_{\rm up}=2890\)THz, FIG. 9(a) depicts that the signal is dominated by two peaks at \(\omega_{\rm up,ds}=88\)THz and \(\omega_{\rm up,lp}=218\)THz. This means the population of the system at the upper polariton state. When the delay \(\rm T\) varies, we see the energy transfer from UP to DSPs, evident by the increase of the peak at \(\omega-\omega_{2}=\omega_{\rm ds,lp}\) in FIG. 9(b) and 9(c). After a longer delay, as seen from FIG. 9(d), the DSPs are densely populated whereas the population at LP state is less. This is attributed to the large number of the DSPs. FIG. 9(a-d) further shows different timescales associated with different pathways. In particular, the peak at \((\omega-\omega_{2}=\omega_{\rm ds,lp},\omega_{0}=\omega_{\rm up})\) elaborates a fast increase within 930fs, dramatically different from the one at \((\omega-\omega_{2}=\omega_{\rm up,lp},\omega_{0}=\omega_{\rm up})\) that shows a considerable change within 20ps. We thus observe a faster energy transfer from UP to DSPs than that from UP to LP. This can be understood neatly by the large density of the DSPs, much higher than the bright polariton states that have been revealed in resonant spectroscopic experiments. The transition rate is \(\frac{2\pi}{\hbar}\langle{\rm DS}|{\rm V}_{\rm int}|{\rm UP}\rangle|^{2}\rho( {\rm E}_{\rm ds})\) from the Fermi's Golden rule, which is enhanced by the mode density of the states. When slicing FIG. 9(a-d) at \(\omega_{0}=\omega_{\rm lp}\), we see the dynamics when initially pumping to the LP state. The results present dramatical difference from the ones with \(\omega_{0}=\omega_{\rm up}\), resulting in a bit more subtle analysis due to the resolution issue for the figures. Nonetheless, we are still able to find the peak around \(\omega_{\rm ds,lp}\) barely varying with the delay, while the spectral line got broadened slightly towards the the peak at \(\omega_{\rm up,ds}\). The peak at \(\omega-\omega_{2}=\omega_{\rm up,lp}\) shows a low intensity, which indicates a weak population at the UP state. Notably from the comparison between the top and middle rows of FIG. 9, the DSPs can be greatly populated when the system is pumped to the UP rather than the LP state. This indicates the energy harvesting by the DSPs, which is important for understanding the kinetic and thermodynamic properties of molecules in cavities. As an optical signal, FIG. 
9 clearly demonstrates a real-time monitoring of the DSPs coupled to the other states of molecules, for an illustration of the crucial role of the DSPs. Within shorter timescale, the 2DUV-FSRS reveals the coherence effect in the cavity-polariton systems, evident by the fast oscillations. By a careful check with the oscillating frequency, as supported by FIG. 9(3rd row), the quantum coherence between UP and LP states is captured. This is due to the broadband nature of the pump fields that create a coherent superposition \(a|{\rm UP}\rangle+{\rm b}|{\rm LP}\rangle\). Such a polaritonic coherence is resolved by the Raman signal during a short timescale, whereas the Raman signal is dominated by the polariton populations during longer timescales. The polariton dynamics may be monitored in a more advanced way, through a sophisticated method, i.e., the intrinsic connection of peak intensities to the populations that dominate over longer timescales. A group of algebraic equations can thus be found \[S_{\rm 2D}^{(\rm p)}(\omega_{\rm up}-\omega_{\rm ds},\omega_{ \rm up},T)\simeq\tilde{\alpha}_{\rm up,ds}^{(1)}(\rho_{\rm dsds}(T)+\rho_{ \rm upup}(T))+\tilde{\alpha}_{\rm ds,lp}^{(1)}(\rho_{\rm dsds}(T)+\rho_{\rm lplp }(T))+\tilde{\alpha}_{\rm up,lp}^{(1)}(\rho_{\rm lplp}(T)+\rho_{\rm upup}(T) )\,,\] \[S_{\rm 2D}^{(\rm p)}(\omega_{\rm ds}-\omega_{\rm lp},\omega_{ \rm up},T)\simeq\tilde{\alpha}_{\rm up,ds}^{(2)}(\rho_{\rm dsds}(T)+\rho_{ \rm upup}(T))+\tilde{\alpha}_{\rm ds,lp}^{(2)}(\rho_{\rm dsds}(T)+\rho_{\rm lplp }(T))+\tilde{\alpha}_{\rm up,lp}^{(2)}(\rho_{\rm lplp}(T)+\rho_{\rm upup}(T) )\,,\] \[S_{\rm 2D}^{(\rm p)}(\omega_{\rm up}-\omega_{\rm lp},\omega_{ \rm up},T)\simeq\tilde{\alpha}_{\rm up,ds}^{(3)}(\rho_{\rm dsds}(T)+\rho_{ \rm upup}(T))+\tilde{\alpha}_{\rm ds,lp}^{(3)}(\rho_{\rm dsds}(T)+\rho_{\rm lplp }(T))+\tilde{\alpha}_{\rm up,lp}^{(3)}(\rho_{\rm lplp}(T)+\rho_{\rm upupup}(T) )\,, \tag{22}\] where the coefficients are defined by \[\tilde{\alpha}_{a,b}^{(j)} = \frac{1}{\pi}{\rm Im}(-i)^{3}\sum_{n}^{N-1}|\alpha_{a,b}|^{2}|\mu_ {\rm g,up}|^{2}\times \tag{23}\] \[(\frac{1}{i\gamma_{gap}}+\frac{1}{i\gamma_{ugg}})f(\omega^{(j)}, \xi_{ab})\] We can solve the density matrices from (22) in both cases, i.e., when pumping to the upper and lower polariton. This resolved population turns out to match perfectly with the real dynamics governed by the master equation (see FIG. 10). ### With detuning \(\omega-v=-1.25g\) We can also consider the negative detuning, such as \(\omega-v=-1.25g\). In this case, \(\omega_{\rm up,lp}\) is the same as in the previous case (i.e., \(\omega-v=1.25g\)); however, the energy gaps of \(\omega_{\rm up,ds}\) and \(\omega_{\rm ds,lp}\) have been exchanged. Similarly, we show the full signal, population, and coherence all in FIG. 11. It is immediately evident from FIG. 11 that the trends for the peaks do not deviate significantly from the positive detuning case, indicating similar behavior for the populations, which we do not need to discuss further. However, the specific details of the dynamics indeed change. We demonstrate these changes in the resolved dynamics and the real dynamics in FIG. 12, which, of course, fit well. The most significant dis Figure 10: Comparison between the population resolved by the signal (dashed line) and the population obtained from the master equation (solid line), for \(\omega-v=1.25g\). crepancy with the positive detuning is a substantial increase in the percentage of the dark state in the populations. 
This occurs because dark states with negative detuning have lower energy comparing to the positive detuning case, making the dark states more easily "accessible". Therefore, to prevent the dark state from becoming scarce, we could consider setting the detuning to a negative value. ## VI 2DUV-FSRS with charge transfer states There are molecules reactive for electron transfer, accessing the charge transfer states (CTs). The CTs normally have energies typically lower than the first excited states of the molecules [38]. Our strategy, basically, is to induce the stimulated Raman transition that ends up at the CTs. As the CTs are hard to be excited directly by the pumping pulse, the Raman signal for the molecular polaritons can be cleaner than before. This is revealed by the fact that the components (a) and (c) in the loop diagrams (FIG. 3) survive only. Assuming the near-edge states can coupled radiatively to both the molecular excitons and the CTs, the Raman polarizability for the molecular polariton reads the form Eq.(24) following Eq.(9) \[\alpha_{e_{3}ct}=\sum_{i}^{N}\frac{P_{i}\mu_{ri,ct}U_{i\alpha}^{ \dagger}}{h}\left(\frac{1}{\omega_{i}-\omega_{ct}}+\frac{1}{\omega_{i}-\omega _{e_{3}}}\right) \tag{24}\] where \(\mu_{ri,ct}\) denotes the transition dipole between the near edge state and the charge transfer state. For this system, we find the 2DUV-FSRS signal with CTs \[S_{ct}(\omega_{0},\omega-\omega_{2},T)=\frac{1}{\pi}\mathrm{Im }(-i)^{3}\varepsilon_{3}^{*}(\omega)\int_{0}^{\infty}dt\int_{0}^{t}d\tau\] \[\sum_{e_{1}e_{2}e_{3}e_{4}}^{N+1}\mu_{ge_{1}}\mu_{ge_{2}}\alpha_ {e3ct}\alpha_{ete_{4}}(\frac{1}{-\omega_{0}-\xi_{ge_{1}}}+\frac{1}{\omega_{0}- \xi_{e_{1}g}})\] \[\varepsilon_{2}(t-T)e^{i\omega(t-T)}\varepsilon_{2}^{*}(\tau-T) \varepsilon_{3}(\tau-T)G_{e_{4}e_{3},e_{2}e_{1}}(\tau-T_{2})\] \[e^{-i\omega_{e_{4}ct}(t-\tau)} \tag{25}\] For visualizing our signal, we consider 10 molecules and assume that the energy of the charge transfer state is \(1.6\)eV, as shown in FIG. 13. Because the parametric process is not included in this case, a peak located at \(\omega_{e_{3}e_{5}}\geq 0\) perfectly reflects the density matrix \(\rho_{e_{3},e_{3}}\). This fact should be remembered when analyzing FIG. 13 and comparing it with the real-time dynamics in FIG. 10. First, we consider the pulse initially pumping to the upper polariton, which is shown in the upper part of the signal plots in FIG. 13. The behaviour of the peak at \(\omega-\omega_{2}=\omega_{\mathrm{lp,ct}}=230\)THz shows the dynamics of the upper polariton, which continues to grow as indicated by the increase in the intensity of the relevant peak. On the other hand, the peak at \(\omega-\omega_{2}=\omega_{\mathrm{ds,ct}}=360\)THz encodes the dynamics of the dark state: the density matrix increases over time and then starts to decay at some point. Lastly, the peak at \(\omega-\omega_{2}=\omega_{\mathrm{up,ct}}=450\)THz encodes the dynamics of the upper polariton, which keeps decreasing as reflected by the decreasing peak intensity. Regarding the lower part of the population in FIG. 13, where the system is initially pumped to the lower polariton, the peak at \(\omega-\omega_{2}=\omega_{\mathrm{up,ct}}\) is absent for all time delays because the energy of the upper polariton is too high to undergo relaxation. 
The decrease in intensity of the peak at \(\omega-\omega_{2}=\omega_{\mathrm{lp,ct}}\) and the opposite trend of the peak at \(\omega-\omega_{2}=\omega_{\mathrm{ds,ct}}\) correspond to the decay of the LP state and the growth of the dark state, respectively. For the coherence part of FIG. 13, we observe that the peaks are primarily concentrated around \(\omega-\omega_{2}=\omega_{\mathrm{lp,ct}}\) and \(\omega-\omega_{2}=\omega_{\mathrm{up,ct}}\). This is due to the selection rule that forbids initial pumping to the dark states. As usual, the peaks display an oscillatory decay. Moreover, the oscillation at the peak is clearly visible, transitioning from "all red", through "half red, half blue", to "all blue"; as the color fades, the oscillation decays. For the total signal, it is worth noting that the "red dot" (positive part) in FIG. 13(b) always represents the contribution of the population, because the positive and negative parts of the coherence are equal in magnitude. FIG. 13(c) and (d) coincide with the pure population, implying that the coherence has dissipated. Figure 11: Two-dimensional Raman signal with the detuning of \(\omega-v=-1.25g\). Figure 12: Comparison between the population resolved by the signal (dashed line) and the population obtained from the master equation (solid line) for \(\omega-v=-1.25g\). ## VII Conclusion and remarks We studied the coherent Raman response of molecular polaritons, in which the dark states are visualized. The results led to the 2DUV-FSRS for cavity-polariton systems, for which we developed a microscopic theory of the Raman signal. Rich information about the dark-state polaritons and their coupling to the bright polaritons can be readily visualized in the 2DUV-FSRS. Our work provides an off-resonant spectroscopic scheme for real-time monitoring of the dark-state-polariton dynamics, which is not accessible by conventional spectroscopic techniques such as absorption and fluorescence. The multidimensional projections of the Raman signal enable a multiscale illustration of the polariton dynamics in crosstalk with the DSPs, with a time- and frequency-resolved nature. Our work should be insightful for the study of polariton-afforded reactivity of photo-active molecules and of cavity-coupled heterostructures, including 2D semiconductors. The Raman spectra in the present work can provide a clean fingerprint of fast nonadiabatic electron dynamics, where complex potential-energy surfaces involving anharmonicity need to be taken into account. This would be a remarkable generalization of the present work and will be presented elsewhere.
We develop femtosecond ultraviolet stimulated Raman spectroscopy (UV-FSRS) for $N$ molecules in an optical cavity. The method enables real-time monitoring of the collective dynamics of molecular polaritons, as well as of the coupling between polaritons and vibrations and the crosstalk between polaritons and dark states. The multidimensional projections of the UV-FSRS signal reveal clear signatures of the dark states, namely pathways and timescales that are missed by resonant techniques. We develop a microscopic theory of UV-FSRS and elucidate the correlations of the polaritons interacting with the Raman fields. The resulting signals visualize the dark states and provide a new means of studying their interplay with the polariton modes.
2309.08197
Hyperspectral Image Denoising via Self-Modulating Convolutional Neural Networks
Compared to natural images, hyperspectral images (HSIs) consist of a large number of bands, with each band capturing different spectral information from a certain wavelength, even some beyond the visible spectrum. These characteristics of HSIs make them highly effective for remote sensing applications. That said, the existing hyperspectral imaging devices introduce severe degradation in HSIs. Hence, hyperspectral image denoising has attracted lots of attention by the community lately. While recent deep HSI denoising methods have provided effective solutions, their performance under real-life complex noise remains suboptimal, as they lack adaptability to new data. To overcome these limitations, in our work, we introduce a self-modulating convolutional neural network which we refer to as SM-CNN, which utilizes correlated spectral and spatial information. At the core of the model lies a novel block, which we call spectral self-modulating residual block (SSMRB), that allows the network to transform the features in an adaptive manner based on the adjacent spectral data, enhancing the network's ability to handle complex noise. In particular, the introduction of SSMRB transforms our denoising network into a dynamic network that adapts its predicted features while denoising every input HSI with respect to its spatio-spectral characteristics. Experimental analysis on both synthetic and real data shows that the proposed SM-CNN outperforms other state-of-the-art HSI denoising methods both quantitatively and qualitatively on public benchmark datasets.
Orhan Torun, Seniha Esen Yuksel, Erkut Erdem, Nevrez Imamoglu, Aykut Erdem
2023-09-15T06:57:43
http://arxiv.org/abs/2309.08197v1
# Hyperspectral Image Denoising via Self-Modulating Convolutional Neural Networks ###### Abstract Compared to natural images, hyperspectral images (HSIs) consist of a large number of bands, with each band capturing different spectral information from a certain wavelength, even some beyond the visible spectrum. These characteristics of HSIs make them highly effective for remote sensing applications. That said, the existing hyperspectral imaging devices introduce severe degradation in HSIs. Hence, hyperspectral image denoising has attracted lots of attention by the community lately. While recent deep HSI denoising methods have provided effective solutions, their performance under real-life complex noise remains suboptimal, as they lack adaptability to new data. To overcome these limitations, in our work, we introduce a self-modulating convolutional neural network which we refer to as SM-CNN, which utilizes correlated spectral and spatial information. At the core of the model lies a novel block, which we call spectral self-modulating residual block (SSMRB), that allows the network to transform the features in an adaptive manner based on the adjacent spectral data, enhancing the network's ability to handle complex noise. In particular, the introduction of SSMRB transforms our denoising network into a dynamic network that adapts its predicted features while denoising every input HSI with respect to its spatio-spectral characteristics. Experimental analysis on both synthetic and real data shows that the proposed SM-CNN outperforms other state-of-the-art HSI denoising methods both quantitatively and qualitatively on public benchmark datasets. Our code will be available at [https://github.com/orhan-t/SM-CNN](https://github.com/orhan-t/SM-CNN). keywords: HSIs, denoising, spectral self-modulation, SM-CNN. + Footnote †: journal: Medical Imaging ## 1 Introduction Hyperspectral images (HSIs) contain rich spectral and spatial information of a scene, which makes them useful for many applications [1; 2; 3]. In remote sensing, HSI sensors are generally mounted on aircrafts, drones or satellites that present harsh operating environments for the sensors. Hence, during the acquisition process, HSI can be easily contaminated by noise that is caused by several different factors such as atmospheric absorption, temperature, lighting condition and sensor malfunctions. These environmental conditions and sensory malfunctions can cause various types of noise: (i) Gaussian noise (GN), (ii) stripe noise (SN), (iii) dead pixel noise (DN), (iv) impulse noise (IN), and (v) the mixture of all these noise types [4; 5]. Further, each band in HSI might be subject to different types and levels of noise [6]. As a result, noisy data often affects the performance of image interpretation and subsequent applications; thus noise reduction in HSI is considered an essential prepossessing step. Recently, with the popularity of deep learning (DL), CNN based approaches have created a fresh wave of HSI denoising methods, demonstrating significant improvements over the traditional methods. These data-driven models automatically learn a mapping from noisy HSIs to clean HSIs. In order to make the network learn about this non-linear mapping, training pairs consisting of a large number of noisy and clean data are needed. However, collecting numerous training pairs is not an easy task considering HSIs. 
In [7], the authors collected real data for the training pair, but collecting these pairs, especially for remotely sensed HSI, requires extra effort and is an expensive procedure. A common approach used in HSI network training is to generate training pairs by adding synthetic noise to existing data [5; 8; 9; 10; 11]. To obtain favorable outcomes, however, it is essential that the synthetic noise appearing in the training data possesses a distribution similar to the noise distribution in the real-world test data, which is hard to achieve in general. Denoising HSIs affected by real noise poses a challenge due to the complex (non-normal) distribution of real noise statistics and its spatial and spectral variability. To tackle the complexities arising from diverse and intricate real noise, we present a dynamic and well-generalized denoising architecture. Dynamic neural networks, an emerging area in deep learning [12; 13], offer a distinct advantage over static models by adapting their structures or parameters to different inputs during inference. This adaptability enhances accuracy, representation power, and generality, all while maintaining computational efficiency. Our proposed dynamic deep neural network framework processes both spatial and spectral information of HSIs through feature-modulation layers, leveraging input neighboring spectral bands in an adaptive manner. At the heart of our ap proach lies the novel Spectral Self-Modulating Residual Block (SSMRB) used in the deep layers. The SSMRB intelligently regulates feature maps, preventing overfitting to the training set and elevating denoising performance. This mechanism yields two main benefits: (i) regulating the denoiser with spatial-spectral information, strengthening the network's learning ability, and (ii) establishing well-generalized denoising capabilities that effectively adapt to diverse noise patterns. Expanding on the novel SSMRB module, we also introduce the self-modulating CNN (SM-CNN) model for HSI denoising. Our experimental results demonstrate the effective generalization abilities of our proposed denoising method as compared to the existing studies. Surprisingly, even when trained with synthetic-noise data, our network achieves superior performance in denoising real-noisy HSIs. This outcome highlights the effectiveness and promise of our approach, empowered by the SSMRB, in handling real-noisy HSIs with high accuracy. Our main contributions can be summarized as follows: * The core of our model is a new component, which we name as the spectral self-modulating residual block (SSMRB), that enables the network to adjust the features based on the adjacent spectral data and prevents overfitting to the training set, thus improving its capability to handle complex real noise. * The proposed SSMRB layer makes the proposed model a dynamic denoising network that adapts its predicted features according to the spatio-spectral attributes of each input HSI during the denoising process. This indicates that our network's weights can adapt themselves in real-time during the forward propagation, taking into account the spatio-spectral data. * By using the self-modulation framework, we have designed a new denoising network named SM-CNN, which efficiently removes noise from various HSIs that possess a varying number of spectral bands. It retains both the spectral information and local details of the HSI without requiring any additional parameter tuning, producing significantly more precise and clear outcomes. 
* Our experimental analysis reveals that our approach outperforms the state-of-the-art HSI denoising methods both qualitatively and quantitatively on various noise cases including GN, SN, DN, IN, mixture and real-world noise in different spectra through a single SM-CNN model. Moreover, our proposed method allows for improving the classification performance on the real-world noisy HSI. ## 2 Related Work ### Classical Approaches to HSI Denoising In the last decade, the information that an HSI is well represented by a few pure spectral endmembers has been widely used. This property inspires researchers to exploit the low-rankness of HSI for denoising. In [14], an HSI restoration technique based on the low-rank matrix recovery (LRMR) was developed. An extension of the LRMR in [15] works well for removing mixed types of noise in HSI. With these methods, HSIs are rearranged as a 2D matrix and the low-rank feature of this matrix is examined. Other methods that exploit the low-rankness, such as LRTV [16], LRTDTV [17] and 3D-GTV [18], attempt to improve the performance of LRMR together with hyperspectral total variation (TV) [19]. To further utilize low-rankness property, new models were proposed in [20; 21; 22; 23; 24; 25]. Majority of these methods can be considered as convex optimization problems and produce clean HSIs as global optima. However, the performance of these methods depends on the low-rankness prior that need to be investigated for each HSI, and especially TV-based ones cause over-smoothing. More comprehensive and in-depth information about model-based methods can be found in [26], which provides a detailed overview of the different types of model-based methods and their strengths and weaknesses. Another approach to HSI denoising is to use sparse representations [27]. BM4D filtering [28], an extension of the BM3D filter [29] to volumetric data, is an early example to these methods. BM4D groups 3D HSI patches and filters them in transfer domain. Non-local meets global (NGMeet) [30] offers a combined spatial non-local similarity and spectral low-rank approximation for HSI denoising. FastHyDe [31] and FastHyMix [32] make efficient use of both low-rankness and self-similarity properties of HSI for efficient denoising of GN and mixture noise, respectively. Recently, SSSLRR [33] proposes a denoising method based on both sparse representation and low-rankness. However, despite being suitable for Gaussian and Poisson noise, these sparse-coding based methods show poor performance in complex noise. ### Deep Learning Approaches to HSI Denoising Deep learning can provide a prominent end-to-end learning strategy to solve the mentioned inadequacies of classical methods. To date, many DL methods for HSI denoising have been proposed. In the early days, methods originally developed for grayscale or RGB images were used for HSI denoising by adjusting the input and output filter sizes or treating them as a single band. DnCNN [34] suggested to use residual learning and batch normalization for fast convergence. HSI-DeNet [35] introduced DL in HSI denoising based on a set of multi-channel 2D CNN filters for spatial and spectral structures. Moreover, MemNet [36], originally proposed for RGB denoising as a deep persistent memory network to solve long-term dependency issues, was used in [10] as a benchmark for noise removal in HSI with successful results. However, the approaches based on 2D filters cannot take full advantage of the substantial spectral information in HSIs. 
Recently, HSI-specific networks have also been developed to take advantage of both the spectral and spatial properties of HSI [11; 10; 37; 38; 39; 40; 41; 42; 43]. In particular, SSCAN [11] is an HSI-specific denoising network, that combines group convolutions and attention modules to effectively exploit spatial and spectral information in images. In [10], a quasi-recurrent pooling function into the 3D (QRNN3D) U-net [44] was introduced to further capture the global correlation along the spectral spectrum. Such methods give state-of-the-art results, but since the images obtained from different sensors have variable spectral bands, the networks need to be reconstructed and retrained. Unfortunately, this is a time-consuming process. More recently, attention-based methods were proposed to capture non-local features [39; 40; 41; 42]. Yuan _et. al._[8] proposed an HSI denoiser based on residual learning CNN (HSID-CNN), taking into account both spatial and spectral information and without the need to manually adjust hyperparameters for different HSIs. Similarly, a spatial-spectral gradient network (SSGN) [5] is proposed for the removal of mixed noise, using adjacent spectral difference in addition to spatial-spectral information. Maffei _et. al._ developed a denoising method using a single CNN called HSI-SDeCNN [9]. Since HSI-SDeCNN takes the noise variance as input, this model is effective for i.i.d. GN. These methods yield the most successful results by training a single model. However, their success on test data depends on adding a synthetic noise to the training data similar to that in the test data. As compared to these methods, our SM-CNN model can adapt itself during inference through the suggested SSMRB mechanism, which modulates the features for the network based on the provided spectral data. This also helps the suggested SM-CNN to partly alleviate the domain gap between the training and the test data distributions, as demonstrated in our experiments. ## 3 Proposed Denoising Method In this section, we describe our proposed single model self-modulating convolution neural network (SM-CNN) in detail. ### Hyperspectral Noise Model Considering HSI as a 3D tensor, assume that \(\mathbf{X}\) and \(\mathbf{Y}\) denote the clean data and the noisy observations, respectively, as given in Eqn. 1 and Eqn. 2: \[\mathbf{X} =[\mathbf{X}_{1},\mathbf{X}_{2},...,\mathbf{X}_{i},...,\mathbf{X }_{B}]\in\mathbb{R}^{M\times N\times B} \tag{1}\] \[\mathbf{Y} =[\mathbf{Y}_{1},\mathbf{Y}_{2},...,\mathbf{Y}_{i},...,\mathbf{ Y}_{B}]\in\mathbb{R}^{M\times N\times B} \tag{2}\] where \(\mathbf{X}_{i}\) and \(\mathbf{Y}_{i}\) represent the clean and noisy grayscale images of \(i\)th band, \(M\times N\) represents the spatial dimension and \(B\) denotes the spectral dimension. An HSI degraded by additive sparse noise affecting some bands (i.e., DN, SN, IN), \(\mathbf{S}\in\mathbb{R}^{M\times N\times B}\), and dense noise affecting all bands (i.e., GN), \(\mathbf{N}\in\mathbb{R}^{M\times N\times B}\), can be linearly modeled as follows: \(\mathbf{Y}=\mathbf{X}+\mathbf{S}+\mathbf{N}\). Hence, the HSI denoising process is the problem of estimating \(\mathbf{\hat{X}}\in\mathbb{R}^{M\times N\times B}\), which is an estimate of the \(\mathbf{X}\), from noisy observation \(\mathbf{Y}\). ### Method Description The system overview of the proposed method for HSI denoising is given in Fig. 1. 
Since HSIs are acquired by sensors that have different numbers of spectral bands and spatial diversity, a network typically needs to be retrained for different data. Hence, the strategy adopted to create a single model is to raster scan the spectral dimension and perform denoising one spatially cropped band at a time. With the SM-CNN method, a deep network structure is proposed to use both spatial and spectral information of HSI. When the proposed model performs the noise removal of a band, it uses the spatial information in the patch with size \(h\times w\) pixels and the spectral correlation of \(K\) adjacent bands including the target band. An important point is that we also use these \(K\) neighbor bands to regulate our network which is why we refer to it as self-modulating. In this way, we are able to significantly increase the noise reduction capacity and adaptability of the network for different types of noises (i.e., GN, mixture noise etc.). Proposed training framework can be described as follows: 1. Since SM-CNN comprehends a nonlinear peer-to-peer mapping between the noisy data and the clean data, we create training pairs by adding different synthetic noise to the original HSI data as shown in Fig. 1. 2. We perform noise removal by scanning band-to-band spatial information and using \(K\) adjacent spectral information along with spatial information as mentioned above. For this reason, we achieve spectral continuity at the end points by flipping the \(K/2\) bands. 3. Then, we obtain patches \(\mathbf{y}\) of size \(h\times w\times(B+K)\) by spatially cropping the enlarged noisy data of size \(M\times N\times(B+K)\). 4. Lastly, we get adjacent \(K\) spectral bands from spatially cropping patches by scanning the spectral dimension continuously. For \(i\in[1,B]\), we use one of the middle bands \(\mathbf{y}_{i}^{s}\) and its neighboring bands \(\mathbf{y}_{i}^{\lambda}\) as inputs to the network to eliminate noise of the middle band \(\mathbf{y}_{i}^{s}\). The network is trained with this middle band \(\mathbf{y}_{i}^{s}\) and its corresponding clean target \(\mathbf{x}_{i}^{s}\). At the same time, these neighboring bands are used for spectral self-modulation of the network. 5. For each spatially cropped patch \(\mathbf{y}\), the input \(\mathbf{y}_{i}^{s}\), its corresponding spectral bands \(\mathbf{y}_{i}^{\lambda}\), and clean target \(\mathbf{x}_{i}^{s}\) are selected and this process is repeated for all bands. After the training is completed, the test process is performed as shown in Fig. 1. Firstly, just like in training, we provide spectral continuity at the end points by flipping the \(K/2\) bands of the test data. The enlarged test data is then cropped in the same way as in training. As shown in Fig. 1, the network is modulated by the neighboring bands of test data. This allows the network to give much better denoising performance. ### Network Architecture The overall architecture of the SM-CNN model is shown in Fig. 2. The numbers written on the convolution layers in the figure indicate the number of channels. Our network has two input data. The input spatial data of size \(h\times w\) represents the current noisy band to be denoised, and the input spectral data of size \(h\times w\times K\) represents the spatial-spectral information of adjacent bands. 
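A minimal sketch of the sample construction in steps (2)–(4) above is given below; the helper names and the exact placement of the \(K\)-band window are illustrative choices, not the authors' code. The spectral ends are extended by flipping \(K/2\) bands, \(h\times w\) patches are cropped with a fixed stride, and each sample pairs the noisy target band with its \(K\) adjacent bands and the corresponding clean target.

```python
import numpy as np

def spectral_flip_pad(cube, k):
    """Extend an (H, W, B) cube by reflecting k/2 bands at both spectral ends."""
    half = k // 2
    head = cube[:, :, :half][:, :, ::-1]     # flipped leading bands
    tail = cube[:, :, -half:][:, :, ::-1]    # flipped trailing bands
    return np.concatenate([head, cube, tail], axis=2)

def band_samples(noisy, clean, i, k=24, patch=20, stride=10):
    """Yield (noisy middle-band patch, K neighbouring bands, clean target) for band i."""
    H, W, B = noisy.shape
    padded = spectral_flip_pad(noisy, k)
    for r in range(0, H - patch + 1, stride):
        for c in range(0, W - patch + 1, stride):
            neighbours = padded[r:r + patch, c:c + patch, i:i + k]   # y_i^lambda
            middle     = noisy[r:r + patch, c:c + patch, i]          # y_i^s
            target     = clean[r:r + patch, c:c + patch, i]          # x_i^s
            yield middle, neighbours, target

# toy usage on a random WDC-sized cube
clean = np.random.rand(200, 200, 191).astype(np.float32)
noisy = clean + 0.1 * np.random.randn(*clean.shape).astype(np.float32)
m, nb, t = next(band_samples(noisy, clean, i=0))
print(m.shape, nb.shape, t.shape)   # (20, 20) (20, 20, 24) (20, 20)
```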
As we mentioned above, based on the learning strategy of the proposed network, different HSI data can be denoised with a single model regardless of the band number, because a patch-based band-to-band denoising process is performed. One of the most important points here is that we use the provided correlated spectral data through the SSMRB in the deep layers to modulate the denoiser. The benefits are two folds. First, it regulates the denoiser with spatial-spectral information, thus increasing the learning ability of the network. Second, it enables the denoiser to generalize better through the feature modulation process, which lets the model to adapt itself for the novel noise settings. These gains are achieved by normalizing the features at deep layers according to the spectral information. In deep layers, normalization parameters of the features are obtained from the pixel-by-pixel noise level. Then, normalized feature are scaled and shifted according to the input adjacent spectral data, making them context dependent. Our network's core structure is inspired by the HSID-CNN [8] in which the given hyperspectral image is first processed with both 2D and 3D CNN layers to encode spatial and spectral information. However, the introduction of the SSRMB block in the architecture brings a significant difference in the behavior of our model, particularly in handling unseen noise distributions. The SSRMB block empowers the network to adapt itself to data with characteristics completely different from those seen during training. This adaptability is crucial in real-world scenarios where noise distributions can vary widely. In particular, as shown in Fig. 2, a single 2D spatial spectral band input is processed with 2D CNN layers with different kernel sizes (conv2_k3, conv2_k5, conv2_k7 where k stands for kernel size). Then, in parallel, a 3D spatio-spectral hypercube input is processed with 3D CNN convolutional layers with different kernel sizes (conv3_k3, conv3_k5, conv3_k7). Finally, these 2D and 3D convolutional features are concatenated. These 2D and 3D CNN structured at the initial stage are considered in order to better use and investigate the characteristics of a single band and to make high use of adjacent correlated spectral bands. After applying rectified linear unit (ReLU) to conjoined outputs of 2D CNN and 3D CNN, these are concatenated and forwarded to sequential deep layers consisting of 2D CNN followed by ReLU blocks (conv2_3k+ReLU) and the proposed SSMRB layers. To strike the right balance between model performance and computational efficiency, the number of consecutive SSMRBs is determined through careful experimental selection. As will be detailed in Sec. 3.4, the SSMRB module make the network aware of the given spectral signal within every step of the denoising process, and highly improves the performance of the core spectral-spatial denoising network as we demonstrate in our experiments. We use skip connections from sequential deep layers to output layer in order to ensure training stability [34]. These four skip connection are passed through the 2D CNN with 15 channels (conv2_k3), and then all outputs are concatenated. Lastly, final layer of the proposed denoiser is a single channel 2D CNN to obtain the estimation of the clean data from the noisy spatial channel. ### Spectral Self-Modulating Residual Block We modulate our network with SSMRB by using the spectral signal itself, and thus we call our network as self-modulating CNN. The structure of the SSMRB is displayed in Fig. 3. 
This block consists of two spectral self-modulation modules (SSMM) and two 2D CNNs connected consecutively. The SSMM transmutes the previous feature map \(\mathbf{f}_{\mathbf{p}\mathbf{e}}\in\mathbb{R}^{h\times w\times C}\) by taking input adjacent spectral information where \(h\times w\) denotes Figure 1: System overview of the proposed SM-CNN model for HSI denoising. In training, we first obtain noisy input data by adding synthetic noise to clean HSI. Then, we perform spatial cropping after providing spectral continuity at the end points by flipping the \(K/2\) bands. By continuously scanning the spectral dimension, we obtain adjacent \(K\) spectral bands from the spatially cropped patches. We use spatially cropped neighbor bands as inputs when training the network to eliminate middle band noise. This process is repeated for all bands. The noisy \(K\) adjacent bands as well as the input go through an SSMRB in deep layers to regulate the network, as given in Fig. 2. Additionally, we use the residual learning strategy to ensure the stability and efficiency of the training by adding the middle band to the final output. In testing, we give input to the network after spatially cropping and spectrally scanning the data as in training. Here, the network is modulated with test correlated spectral bands through the SSMRB in the deep layers, which lets the model adapt itself under complex noise settings not directly seen during training. the spatial size of the feature map, and C is the number of channels. To produce the affine transformation of the feature maps, the SSMM first normalizes the feature map channel-wise and then generates scale (\(\gamma\)) and shift (\(\beta\)) for each pixel by using the adjacent spectral bands, giving the transformed features: \[\mathbf{f}_{\text{next}}^{c}=\gamma_{c}(\mathbf{y}^{\text{d}})\frac{\mathbf{f}_ {\text{pre}}^{c}-\mu_{c}}{\sigma_{c}}+\beta_{c}(\mathbf{y}^{\text{d}}) \tag{3}\] where \(\gamma_{c}(\mathbf{y}^{\text{d}})\) and \(\beta_{c}(\mathbf{y}^{\text{d}})\) are the learned self-modulation parameters obtained pixel-wisely from the input spectral bands (\(\mathbf{y}^{\text{d}}\)) for each \(c\in[1,C]\) with \(C=60\). \(\mu_{c}\) and \(\sigma_{c}\) refer to the mean and standard deviation of feature map \(\mathbf{f}_{\text{pre}}^{c}\) for channel \(c\), respectively; and can be formulated as: \[\mu_{c}=\frac{1}{hw}\sum_{l}^{h}\sum_{k}^{w}\mathbf{f}_{\text{pre}}^{c}(l,k) \tag{4}\] \[\sigma_{c}=\sqrt{\frac{1}{hw}\sum_{l}^{h}\sum_{k}^{w}(\mathbf{f}_{\text{pre}}^ {c}(l,k)-\mu_{c})^{2}+\delta} \tag{5}\] with \(\delta\) denoting a stability parameter to prevent Eqn. 3 from dividing by zero. We set \(\delta=10^{-5}\) for our SM-CNN. From a theoretical point of view, there are some parallels that can be drawn between the proposed SSMRB block and the attention blocks, which have been a core building block in contemporary neural networks. However, it is essential to disambiguate the differences between the two. The distinction lies in the distinct intuitions underlying the operation of attention and the feature modulation scheme in our SSMRB block. In particular, attention mechanisms assume that specific spatial locations or feature channels contain the most useful information and select these locations or channels for further processing. On the other hand, the SSMRB block performs a spatially-varying affine transformation on the extracted feature maps, conditioned on the characteristics of the adjacent spectral data. 
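A minimal PyTorch-style sketch of the SSMM transformation in Eqn. 3 is given below. It is not the authors' implementation: the small convolutional head that predicts \(\gamma\) and \(\beta\) from the adjacent bands, and its hidden width, are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class SSMM(nn.Module):
    """Sketch of the spectral self-modulation of Eqn. 3 (illustrative, not the authors' code).

    Features are normalized channel-wise (Eqns. 4-5) and then scaled/shifted
    pixel-wise by gamma(y_lambda), beta(y_lambda) predicted from the K adjacent
    noisy bands; the conv head producing gamma and beta is an assumed design.
    """
    def __init__(self, channels=60, k_bands=24, hidden=32, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.shared = nn.Sequential(
            nn.Conv2d(k_bands, hidden, 3, padding=1), nn.ReLU(inplace=True))
        self.to_gamma = nn.Conv2d(hidden, channels, 3, padding=1)
        self.to_beta = nn.Conv2d(hidden, channels, 3, padding=1)

    def forward(self, f_pre, y_lambda):
        # f_pre: (B, C, h, w) feature map; y_lambda: (B, K, h, w) adjacent spectral bands
        mu = f_pre.mean(dim=(2, 3), keepdim=True)
        sigma = f_pre.var(dim=(2, 3), keepdim=True, unbiased=False).add(self.eps).sqrt()
        normalized = (f_pre - mu) / sigma
        h = self.shared(y_lambda)
        return self.to_gamma(h) * normalized + self.to_beta(h)

# toy usage
ssmm = SSMM()
f = torch.randn(2, 60, 20, 20)
y = torch.randn(2, 24, 20, 20)
print(ssmm(f, y).shape)   # torch.Size([2, 60, 20, 20])
```

In the full SSMRB (Fig. 3), two such modules are interleaved with 2D convolutions and closed by a residual connection.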
It considers the correlations and relationships between neighboring spectral bands and modulates the features accordingly. In this sense, the SSMRB block can be seen as a special kind of normalization layer, where the normalization factors representing mean and variance are estimated based on the adjacent spectral data during inference time. This adaptability allows the network to adjust itself to the specific spectral characteristics of the input data, making it more robust to different noise distributions. ### Loss Function In training our proposed model, we employ the residual learning strategy to ensure the stability and efficiency of the training procedure [8; 34]. Given a training set \(\{(\mathbf{y}_{i}^{s},\mathbf{y}_{i}^{\text{d}}),\mathbf{x}_{i}^{s}\}_{i=1}^{N}\) where \(N\) is the number of training patches, \(\mathbf{x}_{i}^{s}\) is a single-band clean patch of noisy low-quality patch \(\mathbf{y}_{i}^{s}\), and \(\mathbf{y}_{i}^{s}\) is the noisy \(K\) adjacent spectral bands of \(\mathbf{y}_{i}^{s}\). The loss function of the proposed SM-CNN denoiser (\(D_{\theta}\)) with the parameter set \(\theta\) is: \[\mathcal{L}(\theta)=\frac{1}{2N}\sum_{i=1}^{N}\|D_{\theta}(\mathbf{y}_{i}^{s}, \mathbf{y}_{i}^{\text{d}})-\mathbf{x}_{i}^{s}\|_{1} \tag{6}\] ## 4 Experimental Results We have evaluated the effectiveness of the proposed SM-CNN method using both synthetic noisy and real noisy HSIs. First, effectiveness of the method has been verified using simulated data by adding synthetic noise. Then the proposed method has been applied to real noisy images. The proposed method has been compared with the classical approaches of tensor TDL [45], LRTF-DFR [20], FastHyMix [32], BM4D [28], LRMR [14], LRTV [16], and LRTDTV [17], for which codes are publicly available. In the field of DL, we have compared the proposed method with HSID-CNN [8], MemNet [36], QRNN3D [10], HDNET [43] and MAN [42]. For a fair comparison, we have trained a version of MemNet, which has 6 memory blocks and 6 residual blocks, like the proposed method. For this, we have set the input layer filter to \(K\), the output layer to 1 and perform the training as suggested in Sec. 3.2. Additionally, we have performed training for HDNET and MAN using the WDC dataset. On the other hand, since Wie _et. al._[10] trained the QRNN3D network using synthetic noise similar to our cases, we have obtained results using pre-trained networks as a single model for a fair comparison. Here, it should be noted that QRNN3D can achieve better results if both the train and the test set are of similar content, as detailed in [10]. Figure 2: Structure of the proposed SM-CNN. The SSMRB used in the deep layers is given in Fig. 3 and detailed in Sec. 3.4. In addition to visual comparison, three commonly used metrics have been adopted to evaluate the performance of the proposed approach on simulated data: mean peak signal-to-noise-ratio (MPSNR), mean structural similarity index (MSSIM), and spectral angle mapper (SAM). MPSNR and MSSIM show the spatial accuracy, which are calculated on each 2D spatial image and averaged over all spectral bands. SAM that shows the spectral fidelity is calculated on each 1D spectral vector and averaged over all spatial points. The higher the MPSNR and MSSIM scores and the lower the SAM score mean the better denoising results. To further evaluate the effectiveness of the proposed model, two real-world noisy HSI datasets have been used in our real-data experiments. 
Moreover, the performance of the methods has been also evaluated by performing a classification task on a real noisy HSI. First, we apply the denoising model to the data, then employ a support vector machine (SVM) [46] as the classifier before and after denoising. Finally, overall accuracy (OA) and kappa coefficient are given as evaluation indexes. ### Datasets Four HSI datasets are considered to evaluate the effectiveness of the proposed method: one is used to train the network and conduct experiments by introducing different complex simulated noise, while the other three are used in complex simulated noise and real noise experiments, to evaluate the performance of the proposed model. **Training Dataset.** To train the proposed model and other DL methods, we use a part of the Washington DC Mall (WDC1) data acquired by the Hyperspectral Digital Image Acquisition Experiment (HYDICE) airborne sensor [47]. The sensor system records 210 spectral bands in the 0.4 to 2.4 \(\mu\)m region of the visible and infrared spectrum. The bands in the 0.9 and 1.4 \(\mu\)m regions in which there is atmospheric interference have been removed from the dataset. Hence, spatial resolution of the WDC data is 1208\(\times\)307\(\times\)191 pixels. We divide this data into two parts. One part with 200\(\times\)200\(\times\)191 pixels is used for testing, and the remaining part is used for training and validation. Footnote 1: [https://engineering.purdue.edu/~biehl/MultiSpec/hyperspectral.html](https://engineering.purdue.edu/~biehl/MultiSpec/hyperspectral.html) **Testing Datasets.** Four datasets have been used in simulated and real data experiments to evaluate the effectiveness of the proposed method as follows: * _Washington DC Mall:_ As mentioned above, an area of 200\(\times\)200\(\times\)191 pixels is used for simulated data experiments by adding synthetic noise to the original image. * _Pavia University (PU2):_ This data was acquired by the Reflective Optics Spectrographic Imaging System (ROSIS) over Pavia, Italy. The scene of size 200\(\times\)200\(\times\)103 with a spectral range from 0.43 to 0.86 \(\mu\)m is used in the simulated data experiment by introducing mixture noise after removing the 12 water absorption bands. Footnote 2: [http://lesun.weebly.com/hyperspectral-data-set.html](http://lesun.weebly.com/hyperspectral-data-set.html) * _Indian Pines (IP3):_ IP data was collected by the Airborne Visible Infra-Red Imaging Spectrometer (AVIRIS) over the Indian Pines test site, and consists of 145\(\times\)145 pixels and 224 spectral bands in the wavelength range 0.4-2.5 \(\mu\)m [48]. After removing the bands severely damaged by the atmosphere and water (150-163), a total of 206 bands are used in the experiments. Footnote 3: [https://purr.purdue.edu/publications/1947/about%7v=1](https://purr.purdue.edu/publications/1947/about%7v=1) * _HSIDwRD4:_ This data was acquired by a SOC710-VP hyperspectral camera with 696\(\times\)520 pixels in spatial resolution and 34 spectral bands from 0.4 \(\mu\)m to 0.7 \(\mu\)m [7]. The dataset contains 59 clean images captured using long-exposure settings and their corresponding noisy images captured using short-exposure settings. Footnote 4: [https://github.com/ColinTaoZhang/HSIDwRD](https://github.com/ColinTaoZhang/HSIDwRD) WDC and PU are assumed to be noise-free, and used in testing by adding synthetic noise. IP and HSIDwRD, on the other hand, are used directly in testing since many of their bands are inherently noisy. 
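For illustration, the input preparation summarized in Fig. 1 — providing spectral continuity by mirroring \(K/2\) bands at the spectral end points, scanning the spectrum with a \(K\)-band window, and cutting spatial crops — can be sketched as follows. The default values follow the settings reported in the next subsection, the helper names are ours, the rotation-based augmentation is omitted, and the exact band-window alignment in the authors' implementation may differ.

```python
import numpy as np

def make_training_pairs(noisy, clean, k=24, patch=20, stride=10):
    """Yield (noisy middle band, K adjacent noisy bands, clean middle band)
    crops from (H, W, B) cubes scaled to [0, 1]."""
    half = k // 2
    # Mirror k//2 bands at both spectral ends so every band has K neighbors.
    padded = np.pad(noisy, ((0, 0), (0, 0), (half, half)), mode="reflect")
    h, w, n_bands = clean.shape
    for band in range(n_bands):                 # band currently being denoised
        block = padded[..., band:band + k]      # its K adjacent noisy bands
        for i in range(0, h - patch + 1, stride):
            for j in range(0, w - patch + 1, stride):
                crop = block[i:i + patch, j:j + patch, :]
                yield crop[..., half], crop, clean[i:i + patch, j:j + patch, band]
```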
### Experimental Settings

In this section, we describe the implementation details. First, the HSI data are scaled to [0, 1] before adding noise when preparing the simulated-data experiments. Then, noise components are generated for the different cases given in the following subsection and added to the original data. As described in Sec. 3.2, when the SM-CNN performs the denoising of a band, it uses the spatial information in the patch of size \(h\times w\) and the spectral correlation of the \(K\) adjacent bands including the target band. We set the patch size to \(20\times 20\) with a stride of 10, and rotate the patches by 0\({}^{\circ}\), 90\({}^{\circ}\), 180\({}^{\circ}\) and 270\({}^{\circ}\) for data augmentation during training. Based on experimental findings, the number of adjacent spectral bands is fixed at \(K=24\).

Figure 3: Spectral self-modulation residual block (SSMRB) used in our SM-CNN denoiser network. This block modulates features in deep layers by using spectral neighbor bands for adapting the network to noise.

**Simulated Noise Settings.** HSIs are usually degraded by different noise types including GN, SN, DN, IN and their combinations. Therefore, following the experimental setting used in [10], we define five types of complex noise added to the simulated data for training and testing purposes as follows (a short synthesis sketch of these degradations is given below):

* **Case 1**: Non-i.i.d. GN. All bands are corrupted by zero-mean GN with random intensities ranging from 10 to 70.
* **Case 2**: GN & SN. All bands are corrupted by non-i.i.d. GN as in Case 1. In addition, one third of the bands (63 bands for the WDC dataset) are randomly selected to add SN to 5% to 15% of their columns.
* **Case 3**: GN & DN. The noise generation process is similar to Case 2, except that the SN is replaced by DN.
* **Case 4**: GN & IN. Each band is polluted by GN as in Case 1. In addition, one third of the bands are randomly chosen to add IN with intensities ranging from 10% to 70%.
* **Case 5**: Mixture noise. Each band is randomly corrupted by GN as in Case 1 and by at least one type of noise specified in Cases 2-4.

**Training Details.** SM-CNN is implemented with PyTorch and trained on NVIDIA TESLA V100 GPUs. It is trained by minimizing the mean absolute error (MAE) loss of Eq. (6) between the restored patches and the corresponding clean patches. The network parameters are initialized with the Xavier normal initializer and updated by the Adam optimizer. The learning rate is set to 0.0001 and the mini-batch size to 128. SM-CNN is trained for 100 epochs, and the model with the best performance on the validation data is kept. Lastly, the training time for each model was approximately 4-5 hours.

### Simulated-Data Experiments

In this section, we present both quantitative and visual results of our proposed network on the simulated test data described above. The proposed model is discussed by comparing it with those in the literature.

**WDC dataset.** We first train our network for the five complex noise cases and test it on the WDC dataset. Table 1 lists the quantitative comparisons of the competing methods obtained with the WDC data for Cases 1-5. Each column in the table shows the metric results obtained for a different noise case. For MPSNR and MSSIM, the higher the better; for SAM, the lower the better. While the noisy HSI row shows the metric values of the synthetic noisy data for the different cases, the other rows show the metric values obtained with the classical methods, the deep methods in the literature, and the proposed method, respectively.
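Referring back to the noise settings defined in the Experimental Settings above, the sketch below illustrates how such degradations can be synthesized with NumPy. The stripe amplitudes, the scaling of the 10–70 Gaussian intensity range by 255, and the exact sampling choices are our assumptions rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian(cube):
    """Case 1: non-i.i.d. zero-mean GN; sigma drawn per band from [10, 70]
    on a 0-255 intensity scale (hence divided by 255 for data in [0, 1])."""
    out = cube.copy()
    for b in range(cube.shape[-1]):
        out[..., b] += rng.normal(0.0, rng.uniform(10, 70) / 255.0, size=cube.shape[:2])
    return out

def add_stripes(cube, band_frac=1 / 3, col_range=(0.05, 0.15), amp=0.25):
    """Cases 2/5: stripe noise on 5-15% of the columns of one third of the bands."""
    out = cube.copy()
    h, w, nb = cube.shape
    for b in rng.choice(nb, size=int(band_frac * nb), replace=False):
        cols = rng.choice(w, size=int(rng.uniform(*col_range) * w), replace=False)
        out[:, cols, b] += rng.uniform(-amp, amp, size=(h, len(cols)))
    return out

def add_deadlines(cube, band_frac=1 / 3, col_range=(0.05, 0.15)):
    """Cases 3/5: deadline noise, i.e. whole columns set to zero."""
    out = cube.copy()
    h, w, nb = cube.shape
    for b in rng.choice(nb, size=int(band_frac * nb), replace=False):
        out[:, rng.choice(w, size=int(rng.uniform(*col_range) * w), replace=False), b] = 0.0
    return out

def add_impulse(cube, band_frac=1 / 3, p_range=(0.1, 0.7)):
    """Cases 4/5: salt-and-pepper IN with 10-70% intensity on one third of the bands."""
    out = cube.copy()
    h, w, nb = cube.shape
    for b in rng.choice(nb, size=int(band_frac * nb), replace=False):
        mask = rng.random((h, w)) < rng.uniform(*p_range)
        out[..., b][mask] = rng.choice([0.0, 1.0], size=int(mask.sum()))
    return out

# Case 5 (mixture): apply add_gaussian to every band and at least one of the
# other degradations to randomly selected bands.
```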
The best performance for each quality index in the table is marked in bold, and the second best performance for each quality index is marked with underline. Additionally, to make visual comparisons for Case 5, we have selected bands 57, 27 and 17 to generate false-color images of WDC data obtained by all methods and they are shown in Fig. 4. Specifically, Fig. 4(a) shows the original image and Fig. 4(b) demonstrates noisy image before denoising, while Fig. 4(c)-(n) shows the images obtained after applying different denoising methods. Moreover, to examine figures in more detail, we have zoomed in on a region and showed this region in upper right part of the figures. As can be seen in Table 1, the proposed SM-CNN achieves the highest MPSNR and MSSIM values and the lowest SAM values at four out of five complex cases. Especially in the mixture noise case, the proposed method has increased the level of success compared to both classical and deep methods. Looking at the traditional methods, BM4D performed quite well for Case 1 and Case 2 on the basis of metrics because it is an effective model for GN removal. However, BM4D in particular causes the image to become smooth and reduces edge details. As the noise complexity increases, the performance of BM4D drops dramatically. FastHyMix demonstrated impressive performance in the first three cases, particularly in case 2, but its effectiveness significantly declines in mixture noise. This method introduces artifacts and does not handle complex noise case as shown in Fig. 4. On the other hand, LRTV, LRTDTV and LRTF-DFR produced better results in complex case among others conventional methods, since they are more suitable for these situations. But, TV norm-based LRTV and LRTDTV methods show the smoothing effect. LRMR, in contrast, suffers from artifacts in the mixture case which are evident in Fig. 4. LRTF-DFR produced fairly good results in most situations. However, these methods require the careful adjustment of several hyper-parameters. While generating these results, we tried our best to produce the best results by adjusting the parameters. As can be seen in Fig. 4, the results of the DL methods are visually close, but the proposed method appears to be good at recovering the details. Also, the quantitative analysis in Table 1 reveals that our method outperforms in all metrics for all cases except case 2. In this table, higher performance on the MSSIM metric for each noise case indicates that the proposed model has a stronger and more robust ability to preserve structure properties and recover edge and detail information. Further, the superior performance on the SAM metric proves that SM-CNN can better maintain spectral accuracy than other methods. Fig. 5 shows the denoising results with the PSNR and SSIM values for simulated mixture noise case across the spectral spectrum. Here, we present the outcomes of LRTDTV and LRTF-DFR which give the best results among traditional methods, and LRMR that inspired many of the classical methods. Additionally, we show the results of the deep models. The values given in Table 1 are obtained by averaging the PSNR and SSIM values in Fig. 5 along the spectrum. In general, our method outperforms the others for almost all bands in terms of both PSNR and SSIM metrics. While the performance of LRMR fluctuates between bands, LRTDTV and LRTF-DFR produce more stable results across the spectral spectrum. 
Nonetheless, in certain wavelengths (i.e., around 2.2 \(\mu\)m and 2.3 \(\mu\)m), the efficacy of LRTDTV and LRTF-DFR seems to decrease, as evidenced in Fig. 5 by the decline in both PSNR and SSIM values. MemNet and HSID-CNN approaches have a performance that fluctuates between the bands, but not as much as LRMR. Our method outperforms MemNet in most of the bands. HDNET and MAN methods show a decrease in their performance compared to our method at points where spectral continuity is disrupted. QRNN3D method obtains more stable results across the spectral spectrum, but it needs to be retrained for better results rather than considering it as a single model, as noted in [10]. **PU dataset.** The noises similar to the mixture noise given in Case 5 have been added to this data. All conditions except the randomly selected number of bands to add different noises have been chosen as in the above cases. Since the number of bands is less than those of WDC, one third of the randomly selected bands corresponds to 34 bands. As a result, each band is randomly corrupted by at least one type of noise. While performing this test, the network trained on WDC data has been used. Note that the pre-trained network is used even though the number of bands is different. This feature demonstrates the reusability power of single models. Table 2 shows the quantitative evaluation of the denoising results of different methods for the PU dataset distorted by mixture noise. We ran tests on 10 different scenarios to observe the performance changes of all methods when different bands were corrupted with sparse noise. The table shows the mean and standard deviations of the different runs that we conducted. The proposed SM-CNN achieves the highest MPSNR, second best MSSIM and SAM for the PU dataset without any fine tuning. QRNN3D achieves the highest MSSIM and SAM, which may be due to the possibility that the training data will better match these data. Because MAN uses 3D convolution, retraining it when the number of bands in the data changes will improve its performance and adaptability. HSID-CNN, MemNet and HDNET have performed worse than the other deep models, and even the classical LRTDTV method outperformed them in terms of metrics. For qualitative evaluation, Fig. 
6 shows false-color RGBs of \begin{table} \begin{tabular}{l|c c|c c c|c c c|c c c|c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c|}{Case 1: GN} & \multicolumn{3}{c|}{Case 2: GN \& SN} & \multicolumn{3}{c|}{Case 3: GN \& DN} & \multicolumn{3}{c|}{Case 4: GN \& N} & \multicolumn{3}{c}{Case 5: Mixture Noise} \\ & MPSNR† & MSSIM† & SAM† & MPSNR† & MSSIM† & SAM† & MPSNR† & MSSIM† & SAM† & MPSNR† & MSSIM† & SAM† & MPSNR† & MPSNR† & MSM† & SAM† \\ \hline Noisy HSI & 18.508 & 0.690 & 0.278 & 18.982 & 0.711 & 0.264 & 17.338 & 0.653 & 0.328 & 15.269 & 0.531 & 0.420 & 13.402 & 0.500 & 0.464 \\ BM4D [28] & 26.904 & 0.943 & 0.093 & 27.057 & 0.947 & 0.091 & 23.303 & 0.895 & 0.158 & 20.272 & 0.729 & 0.251 & 17.841 & 0.701 & 0.297 \\ LRTV [16] & 25.464 & 0.906 & 0.111 & 25.839 & 0.914 & 0.106 & 24.895 & 0.895 & 0.118 & 23.253 & 0.850 & 0.153 & 22.842 & 0.855 & 0.150 \\ LRMR [14] & 28.501 & 0.964 & 0.079 & 28.670 & 0.965 & 0.077 & 25.926 & 0.944 & 0.104 & 23.183 & 0.865 & 0.170 & 21.841 & 0.864 & 0.173 \\ LRTDTV [17] & 27.999 & 0.956 & 0.081 & 28.376 & 0.960 & 0.077 & 27.602 & 0.952 & 0.084 & 26.393 & 0.933 & 0.100 & 26.041 & 0.931 & 0.101 \\ LRTT-DFR[20] & 31.603 & 0.981 & 0.054 & 31.987 & 0.983 & **0.051** & 30.621 & 0.978 & 0.058 & 28.988 & 0.964 & 0.081 & 28.559 & 0.967 & 0.077 \\ FastHyMist[32] & 32.303 & **0.986** & 0.052 & **32.155** & **0.986** & 0.052 & 30.203 & 0.981 & 0.059 & 27.435 & 0.911 & 0.129 & 24.618 & 0.905 & 0.133 \\ QRNN3D [10] & 27.352 & 0.963 & 0.084 & 27.512 & 0.965 & 0.082 & 27.336 & 0.964 & 0.084 & 26.943 & 0.960 & 0.088 & 26.197 & 0.952 & 0.096 \\ HSID-CNN [8] & 29.355 & 0.968 & 0.071 & 29.541 & 0.970 & 0.069 & 28.872 & 0.966 & 0.075 & 26.559 & 0.943 & 0.097 & 26.156 & 0.940 & 0.101 \\ MemNet [36] & 28.126 & 0.964 & 0.095 & 28.398 & 0.966 & 0.079 & 29.913 & 0.971 & 0.069 & 29.702 & 0.969 & 0.081 & 27.082 & 0.960 & 0.093 \\ HDNET [43] & 29.897 & 0.972 & 0.065 & 30.079 & 0.974 & 0.064 & 29.158 & 0.969 & 0.070 & 26.982 & 0.951 & 0.090 & 26.222 & 0.945 & 0.095 \\ MAN [42] & 31.971 & 0.981 & 0.052 & 32.060 & 0.983 & **0.051** & 31.664 & 0.981 & 0.054 & 29.973 & 0.971 & 0.066 & 28.205 & 0.961 & 0.079 \\ SM-CNN (Ours) & **32.529** & 0.984 & **0.048** & 31.477 & 0.981 & 0.054 & **32.281** & **0.983** & **0.050** & **30.063** & **0.973** & **0.064** & **29.832** & **0.973** & **0.066** \\ \hline \hline \end{tabular} \end{table} Table 1: Quantitative evaluation of different denoising methods in the WDC dataset. \begin{table} \begin{tabular}{l c c c} \hline \hline \multicolumn{1}{c}{Method} & MPSNR† & MSSIM† & SAM† \\ \hline Noisy HSI & 14.65710-017 & 0.2814-015 & 0.6498-0012 \\ BM4D [28] & 23.5538-021 & 0.713-001 & 0.2731-008 \\ LSTM [16] & 27.7365-020 & 0.5793-015 & 0.240-023 \\ LSTM [14] & 28.8464-020 & 0.8099-011 & 0.214-007 \\ LRTDTV [17] & 31.0884-019 & 0.901-006 & 0.424-005 \\ LRTDTV-DR[20] & 29.0666-023 & 0.9012-007 & 0.158-005 \\ GensNGD [20] & 32.7030-020 & 0.8844-007 & 0.262-007 \\ QRNN3D [10] & 31.2744-013 & 0.8453-002 & **0.807-002** \\ HSD-CNN [8] & 27.5185-015 & 0.8737-003 & 0.858-002 \\ MemNet [26] & 29.6242-0118 & 0.910-004 & 0.143-001 \\ HDNET [43] & 29.9311-036 & 0.9110-004 & 0.242-003 \\ MAN [42] & 30.2353-012 & 0.914-002 & 0.139-001 \\ SM-CNN (Ours) & **31.399-0119** & 0.9253-002 & 0.124-001 \\ \hline \hline \end{tabular} \end{table} Table 2: Quantitative evaluation of different denoising methods on the PU dataset across 10 noisy runs. Figure 4: Results for WDC with mixture noise in Case 5. 
(a) False-color original image with bands (57, 27, 17), (b) Noisy image, (c) LRTF-DFR, (d) FastHyMix, (e) BM4D, (f) LRTV, (g) LRMR, (h) LRTDTDV, (i) QRNN3D, (j) HSID-CNN, (k) MemNet, (l) HDNET, (m) MAN, (n) SM-CNN (Ours). noisy data and denoising results for visual comparison. Looking at both visual results and quantitative metrics, we see that BM4D does not quite deal with mixture noise. In Fig. 6 (e), it is clear that the SN is not removed. Additionally, BM4D result shows the effect of excessive smoothing as it destroys all the details in the zoomed area. The denoising performance of FastHyMix seems to be better than BM4D, but it slightly changes the color intensity of the image, which is evident from the roof area in Fig. 6 (d). LRMR, LRTV and especially LRTDTV, which are more suitable for mixture noise removal, produce satisfactory outcomes in terms of both quantity and quality. Although not as much as BM4D, TV norm-based methods smooth the image as displayed in Fig. 6 (f), (h). According to the result displayed in Fig. 6 (c), it appears that LRTF-DFR is more effective at restoring details compared to traditional methods for these particular bands. Nevertheless, as indicated in Table 2, the MPSNR and MSSIM metrics of LRTF-DFR are lower than those of LRTDTV, it can be argued that some bands may not have been corrected sufficiently. HSID-CNN seems to change the color intensity of the image for these bands. Our proposed model performs better in terms of preserving details while eliminating noise as shown in the zoomed regions in Fig. Figure 5: PSNR and SSIM values across the spectrum corresponding to the denoising results of the proposed and the competing methods for Case 5. Figure 6: Results for PU with mixture noise. (a) False-color original image with bands (60, 32, 10), (b) Noisy image, (c) LRTF-DFR, (d) FastHyMix, (e) BM4D, (f) LRTV, (g) LRMR, (h) LRTDTV, (i) QRNN3D, (j) HISD-CNN, (k) MemNet, (l) HDNET, (m) MAN, (n) SM-CNN (Ours). Figure 7: SAM values corresponding to each pixel of the PU dataset: (a) Noisy data, (b) LRTF-DFR, (c) FastHyMix, (d) BM4D, (e) LRTV, (f) LRMR, (g) LRTDTV, (h) QRNN3D, (i) HISD-CNN, (j) MemNet, (k) HDNET, (l) MAN, (m) SM-CNN (Ours). 6. Considering the details for these bands, deep models do not produce as good results as our model. Fig. 7 shows the SAM values corresponding to each pixel of the noisy PU dataset and the SAM values of denoising results of all methods in the dimensions of the original image. In other words, it shows how much the simulated mixture noise distorts each pixel spectrally, and the success of the denoising methods in restoring this distorted spectral information. The values given in Table 2 for the relevant method are obtained by averaging the SAM values in Fig. 7. Among the classical methods, it is seen that the LRTDTV achieves the best results in most pixels. In addition, it is clear that DL methods give better results than classical methods. When we compare the DL methods, the performance of the methods decreases in the roof regions where the deterioration is intense. In particular, QRNN3D achieves better results than our SM-CNN in these regions. For this reason, it has obtained a slightly better result from our method on the average as seen from Table 2. In addition to the SAM metric, spectral signatures at (59,169) pixel of the PU data are given in Fig. 8 to show the quality of the spectral signature restoration for each framework. 
When the noisy signature is examined, it can be seen that this pixel is generally distorted by zero-mean GN. In some bands, a value of one means that it is disturbed by SN or IN, while a value of zero can be said to be exposed to DN. Often times, classical methods fail to restore spectral information. For example, these methods, notably LRTV, are unable to recover signature from IN, SN and DN at some points. In DL approaches, HSID-CNN has not successfully recovered the signature in the first bands and last bands for this pixel, which was mainly distorted by IN or SN. We see that QRNN3D and HDNET create a bias in the middle bands for this pixel. MemNet, MAN and the proposed method produce a result very close to the original signature at every band. Furthermore, observing the results of the SAM metric, which is a measure of the spectral fidelity of all pixels, it can be said that the QRNN3D and proposed method achieve the best results. ### Real-Data Experiments **IP dataset.** We also validate our model in the real-world noisy IP HSI without the corresponding ground truth. The first few bands and several other bands of IP the dataset are severely degraded by unknown noise. Since we do not have a reference clean image for IP, the performance of the methods have been evaluated via an SVM classifier. SVM has been trained with 10% of randomly generated samples from each class. In the real experiment, 16 ground truth classes have been used to test the classification accuracy. Table 3 shows the classification results obtained after applying the denoising methods and the run-times of each method. The first row in the table shows the methods, while the second and third rows show the OA and kappa coefficient obtained using the denoising results, respectively. We include the run-time information in the last row for reference only. We want to highlight that we did not engage in any optimization efforts on our code to enhance its run-time. The HSID-CNN result is obtained using a pre-trained network with GN which has a variance equal to 50. The result of QRNN3D is obtained by the pre-trained network, which is first trained on GN with a variance equal to 50 followed by training on GN with variable variance. Because these networks cannot adapt their structure to the noise at the input like the one we propose, the noise used in the training and the test noise should match. On the other hand, no special training has been performed on MemNet, HDNET, MAN and the proposed SM-CNN for this dataset. In Fig. 9, the 2nd band of the IP dataset obtained by all methods is presented as a grayscale image for visual comparison. According to the classification metrics, it can be seen that the performance of TDL and FastHyMix in real noise are not good enough. It is understood from metric results that BM4D gives better results. The complex noise removal methods LRMR, LRTV and LRTDTV seem to have good denoising performance for this band, but according to the classification results, it is clear that their performance is lower than BM4D. It can be observed that among the classical methods, LRTF-DFR yields the most favorable outcomes. Among the DL methods, the QRNN3D method introduces blurring for this band as shown in Fig. 9(i). Our proposed method, on the other hand, produces visually sharper results than the other methods. Moroever, proposed SM-CNN shows higher performance than all methods according to the classification results given in Table 3. 
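As an illustration of this evaluation protocol, the sketch below trains a pixel-wise SVM on 10% of the labeled samples per class and reports OA and the kappa coefficient. Scikit-learn's SVC stands in here for the SVM of [46], and the kernel and hyper-parameters are our assumptions, not necessarily the settings behind Table 3.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

def classify_and_score(cube, labels, train_frac=0.10, seed=0):
    """Pixel-wise SVM classification of an (H, W, B) cube given an (H, W)
    ground-truth map (label 0 = unlabeled). Returns (OA, kappa)."""
    mask = labels > 0
    x, y = cube[mask], labels[mask]
    x_tr, x_te, y_tr, y_te = train_test_split(
        x, y, train_size=train_frac, stratify=y, random_state=seed)
    pred = SVC(kernel="rbf", C=100, gamma="scale").fit(x_tr, y_tr).predict(x_te)
    return accuracy_score(y_te, pred), cohen_kappa_score(y_te, pred)

# Usage: compare the scores obtained before and after denoising, e.g.
# oa_noisy, kappa_noisy = classify_and_score(noisy_cube, gt)
# oa_clean, kappa_clean = classify_and_score(denoised_cube, gt)
```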
The point that should not be forgotten here is that HSID-CNN and QRNN3D lag behind the proposed method even though they were trained with a special case to get the best results on the test data. **HSIDwRD dataset.** Real-Data experiment was also carried out on the real natural HSIDwRD dataset. The test was conducted using the WDC data-trained network without any fine-tuning. The quantitative outcomes are presented in Table 4 and Fig. 10 shows long-exposure clean image, short-exposure noisy image and denoising results for visual comparison. The effectiveness of our method is evidenced by the fact that our results surpass the others in all metrics. In Fig. 10, it can be seen that our method produces more sharp and clear results visually. namely Wavelength Modulating CNN (WM-CNN) and SM-CNN-Lite. **WM-CNN:** As we mentioned in Sec. 4.1, some bands are omitted in HSI data due to atmospheric effects and other reasons. The proposed training uses information of neighboring spectral bands while eliminating the noise of one band; and due to these missing bands, the information of neighboring bands can change rapidly. Therefore, these missing bands affect the performance of the methods as seen in Fig. 5 around 0.9 \(\mu\)m. In addition, since HSI sensors collect data in different parts of the spectrum, the denoiser must also adapt to the wavelength information of different dataset. Considering these reasons, instead of the \(K\) spectral bands in the SSMM module of the denoiser, the wavelength information of the denoising band is used. To do this, each element of a tensor with the same dimensions as the noisy patch to be denoised is placed in the value of the wavelength (in \(\mu\)m) of this band. To illustrate, for the noisy band collected at 0.4 \(\mu\)m wavelength, the modulation data in the SSMM is a tensor with size of 20\(\times\)20 and each value of this tensor is 0.4. Since this tensor is used as modulation data in the deep layers through SSMRB, we call it a Wavelength Modulating CNN. Finally, when the input size of the SSMM modules decrease from 20\(\times\)20\(\times\)\(K\) to 20\(\times\)20, the number of trainable parameters of the WM-CNN is also less than our original SM-CNN. **SM-CNN-Lite:** In this model, we reduced the number of consecutive SSMRB modules in the SM-CNN shown in Fig. 2. We \begin{table} \begin{tabular}{l c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Moits} & \multirow{2}{*}{Noisy} & \multirow{2}{*}{LET-DER} & \multirow{2}{*}{Fault/3-ML} & \multirow{2}{*}{IBAD} & \multirow{2}{*}{LETV} & \multirow{2}{*}{LET-DFR} & \multirow{2}{*}{Fault/3-ML} & \multirow{2}{*}{QUEN-3D} & \multirow{2}{*}{ISBD-CNN} & \multirow{2}{*}{Month} & \multirow{2}{*}{IBAD} & \multirow{2}{*}{IMENT} & \multirow{2}{*}{MAN} & \multirow{2}{*}{SM-CNN} \\ & & [20] & [23] & [23] & [28] & [16] & [14] & [17] & [10] & [8] & [36] & [43] & [42] & (Ours) \\ \hline MPSNR2 & 20.912 & 31.069 & 25.686 & 30.573 & 29.139 & 29.864 & 29.599 & 31.894 & 31.405 & 28.272 & 30.801 & 31.246 & **32.899** \\ MSSIMM & 0.358 & 0.922 & 0.649 & 0.907 & 0.904 & 0.867 & 0.913 & 0.928 & 0.933 & 0.890 & 0.922 & 0.923 & **0.902** \\ SAM2 & 0.552 & 0.147 & 0.335 & 0.163 & 0.150 & 0.185 & 0.154 & 0.140 & 0.150 & 0.240 & 0.147 & 0.152 & **0.139** \\ \hline \hline \end{tabular} \end{table} Table 4: Quantitative evaluation of different denoising methods on the HSIDwRD dataset. Figure 10: Results for HSIDwRD with real noise. 
(a) Long-exposure false-color clean image with bands (30, 15, 10), (b) Short-exposure noisy image, (c) LRTF-DFR, (d) FastHyMix, (e) BM4D, (f) LRTV, (g) LRMR, (h) LRTDTTV, (i) QRNN3D, (j) HSID-CNN, (k) MemNet, (l) HDNET, (m) MAN, (n) SM-CNN (Ours). \begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Moits} & \multirow{2}{*}{Noisy} & \multirow{2}{*}{TDL} & \multirow{2}{*}{IBAD} & \multirow{2}{*}{LETV} & \multirow{2}{*}{LET-DFR} & \multirow{2}{*}{Fault/3-ML} & \multirow{2}{*}{QUEN-3D} & \multirow{2}{*}{ISBD-CNN} & \multirow{2}{*}{Month} & \multirow{2}{*}{IBAD} & \multirow{2}{*}{IMENT} & \multirow{2}{*}{MAN} & \multirow{2}{*}{SM-CNN} \\ & & [45] & [28] & [16] & [14] & [17] & [20] & [23] & [19] & [8] & [36] & [43] & [42] & (Ours) \\ \hline OA1 & 75.79 & 76.79 & 83.97 & 78.72 & 79.44 & 78.94 & 78.57 & 89.02 & 86.65 & 88.70 & 87.20 & 87.33 & **89.31** \\ Kappa+ & 4.7218 & 0.7365 & 0.8371 & 0.7553 & 0.7579 & 0.9461 & 0.8677 & 0.7306 & 0.8224 & 0.8338 & 0.8713 & 0.8533 & 0.8549 & **0.8781** \\ \hline Raw-Time (s) & & 15.205 & 286.151 & 666.192 & 61.842 & 229.315 & 63.298 & 0.827 & 13.000 & 5.886 & 22.268 & 6.145 & 8.887 & 12.405 \\ \hline \hline \end{tabular} \end{table} Table 3: Classification accuracies on the IP Dataset (SVM, 10% training labels) together with a runtime analysis of the methods. Figure 9: Results on the IP dataset. (a) Grayscale visualization of noisy HSI using band 2, (b) TDL, (c) BM4D, (d) LRTV, (e) LRMR, (f) LRTDTTV, (g) LRTF-DFR, (h) FastHyMix, (i) QRNN3D (j) HSID-CNN, (k) MemNet, (l) HDNET, (m) MAN, (n) SM-CNN (Ours). use a single SSMRB instead of 2 SSMRB in the deep layer, but we add a layer with a kernel size of 1 in parallel with the convolution layer with the kernel size of 5 in the SSMM shown in Fig. 3 in order to use spectral information directly. Then, we generate scale (\(\gamma\)) and shift (\(\beta\)) parameters by combining the results of these two layers. We call it SM-CNN-Lite when it uses the same modulation data as the original SM-CNN with less trainable parameters. Table 5 shows the quantitative evaluation of the denoising results of the evaluated versions of our model for the WDC dataset distorted by mixture noise and the number of parameter (NOP) of each model. Looking at the quantitative results, both WM-CNN and SM-CNN-Lite yield the most favorable outcomes for the heavily distorted WDC dataset with different types of noise. That said, the best performance is achieved with our proposed model. **Investigating the Impact of Various \(K\) values:** In Fig. 11, we explore the influence of varying the number of neighboring bands (\(K\)) on our network's performance. The network has been trained using 10 different values of \(K\) on the WDC, with mixture noise. As illustrated in Fig. 11, employing a limited number of bands leads to poor results, revealing the significance of considering a broader spectral context. As the number of bands increases, we witness an improvement in performance; but it does not have a linear trend. Instead, it exhibits slight fluctuations as the number of neighboring bands increases, unveiling subtle changes in the signals' spectral characteristics influenced by neighboring bands. **Investigating the Impact of the Number of Skip Connections:** Table 6 presents a comprehensive investigation into the significance of skip connections within the proposed SM-CNN framework. 
Through this systematic analysis, conducted on the WDC dataset with the mixture noise scenario, we gradually remove skip connections, beginning from the network's input layer, and perform training accordingly. The results clearly demonstrate the pivotal role played by skip connections in enhancing the network's performance. Remarkably, as the number of skip connections decreases, we observe a substantial decline in the denoising performance of our method. The incorporation of skip connections ensures better convergence during training, promoting effective information flow and preserving essential features within the denoising process. ## 6 Conclusion In this work, we present an SM-CNN for complex noise reduction in HSIs considering the noise type of GN, SN, IN, DN, mixture noise, and the real-world unknown noise. Thanks to the SM-CNN architecture, test data with different spatial-spectral properties can be denoised with a single model. Moreover, modulating the network using spatio-spectral information through the SSMRB enables its adaptation to different noise by integrating input data-driven dynamic predicted features. The qualitative and quantitative evaluation of the results show that the proposed algorithm is more efficient than other single-model algorithms on both synthetic and real data. ## 7 Acknowledgments O. Torun's PhD research has been partially supported by the KUIS AI Research Center and the 2023 BAGEP Award, which was granted to S. E. Yuksel by the Science Academy.
Compared to natural images, hyperspectral images (HSIs) consist of many bands, each capturing different spectral information at a specific wavelength, some of which lie beyond the visible range. These characteristics make HSIs effective for remote sensing. However, existing hyperspectral imaging devices introduce significant degradation into HSIs. For this reason, hyperspectral image denoising has attracted considerable attention in the community in recent years. Recent deep-learning-based HSI denoising methods provide effective solutions, but their performance under complex real-world noise remains suboptimal because they lack adaptability to new data. To overcome these limitations, this paper introduces SM-CNN, a self-modulating convolutional neural network that correlates spectral and spatial information. This model
2306.17632
From binary to singular: the AGN PSO J334.2028+1.4075 under the high-resolution scope
PSO J334.2028+1.4075 (PSO J334) is a luminous quasar located at redshift z=2.06. The source gained attention when periodic flux density variations were discovered in its optical light curve. These variations were initially interpreted as the variability due to the orbital motion of a supermassive black hole binary (SMBHB) residing in a single circumbinary accretion disk. However, subsequent multiwavelength observations provided evidence against the binary hypothesis as no optical periodicity was found on extended time baselines. On the other hand, detailed radio analysis with the Karl G. Jansky Very Large Array (VLA) and the Very Long Baseline Array (VLBA) revealed a lobe-dominated quasar at kpc scales, and possibly a precessing jet, which could retain PSO J334 as a binary SMBH candidate. We aim to study both the large- and small-scale radio structures in PSO J334 to provide additional evidence for or against the binary scenario. We observed the source at 1.7 GHz with the European Very Long Baseline Interferometry Network (EVN), and at 1.5 and 6.2 GHz with the VLA, at frequencies that complement the previous radio interferometric study. Our images reveal a single component at parsec scales slightly resolved in the southeast-northwest direction and a lobe-dominated quasar at kiloparsec scales with a complex structure. The source morphology and polarization in our VLA maps suggest that the jet is interacting with dense clumps of the ambient medium. While we also observe a misalignment between the inner jet and the outer lobes, we suggest that this is due to the restarted nature of the radio jet activity and the possible presence of a warped accretion disk rather than due to the perturbing effects of a companion SMBH. Our analysis suggests that PSO J334 is most likely a jetted AGN with a single SMBH, and there is no clear evidence of a binary SMBH system in its central engine.
P. Benke, K. É. Gabányi, S. Frey, T. An, L. I. Gurvits, E. Kun, P. Mohan, Z. Paragi, E. Ros
2023-06-30T13:06:16
http://arxiv.org/abs/2306.17632v1
# From binary to singular: the AGN PSO J334.2028+1.4075 under the high-resolution scope ###### Abstract Context:PSO J334.2028+1.4075 (PSO J334) is a luminous quasar located at redshift \(z=2.06\). The source gained attention when periodic flux density variations were discovered in its optical light curve. These variations were initially interpreted as the variability due to the orbital motion of a supermassive black hole binary (SMBHB) residing in a single circumbinary accretion disk. The orbital separation was determined to be 0.006 pc with an in-spiral time of 7 yr in the rest frame of PSO J334. These findings suggested the quasar could be in the gravitational wave emitting phase of its merger and so extended multi-wavelength observations were commenced. However, subsequent observations provided evidence against the binary hypothesis as no optical periodicity was found on extended time baselines. On the other hand, detailed radio analysis with the Karl G. Jansky Very Large Array (VLA) and the Very Long Baseline Array (VLBA) revealed a lobe-dominated quasar at kpc scales, and possibly a precessing jet, which could retain PSO J334 as a binary SMBH candidate. Aims:We aim to study both the large- and small-scale radio structures in PSO J334 to provide additional evidence for or against the binary scenario. Methods:We observed the source at 1.7 GHz with the European Very Long Baseline Interferometry Network (EVN), and at 1.5 and 6.2 GHz with the VLA, at frequencies that complement the previous radio interferometric study. Results:Our images reveal a single component at parsec scales slightly resolved in the southeast-northwest direction and a lobe-dominated quasar at kiloparsec scales with a complex structure. The source morphology and polarization in our VLA maps suggest that the jet is interacting with dense clumps of the ambient medium. While we also observe a misalignment between the inner jet and the outer lobes, we suggest that this is due to the restarted nature of the radio jet activity and the possible presence of a warped accretion disk rather than due to the perturbing effects of a companion SMBH. Conclusions:Our analysis suggests that PSO J334 is most likely a jetted AGN with a single SMBH, and there is no clear evidence of a binary SMBH system in its central engine. ## 1 Introduction The discovery of PSO J334.2028+1.4075 (FBQS J2216+0124; hereafter denoted as PSO J334) (Liu et al. 2015) as a supermassive black hole binary (SMBHB) candidate attracted significant interest due to the rarity of confirmed SMBHBs. Supermassive black holes are expected to be at the center of most galaxies. Since galaxies evolve hierarchically, SMBHBs are believed to be abundant, especially at small separations (An et al. 2018). However, confirming the existence of such objects has so far been mostly unsuccessful, with a few exceptions. The most notable examples are the dual AGN system in NGC 6240 (Komossa et al. 2003), which resides in an ultraluminous infrared galaxy, and 0402+379, which was detected with the Very Long Baseline Ar ray (VLBA) and has two radio cores separated by a projected distance of 7.3 pc (Rodriguez et al., 2006). The SMBHB candidate quasar PSO J334 was discovered through a systematic search in the Pan-STARSS1 Medium Deep Survey (Liu et al., 2015). 
Based on an observed 542 \(\pm\) 15 day period in the variation of the optical flux density and an estimated total black hole mass of \(10^{9.79\pm 0.5}M_{\odot}\) (with a mass ratio between 0.05 and 0.25), an orbital separation of 0.006 pc was inferred. According to this, the coalescence of the SMBHB would occur in approximately 7 yr in the rest frame of the quasar. Unfortunately, current astronomical instruments are not capable of resolving the two components at such a small separation, so evidence for the existence of a second component could only be indirect, such as the detected periodic variability in the optical flux density. This variability could be caused by a secondary black hole passing through the primary black hole's accretion disk, as proposed for OJ 287 (Lehto & Valtonen, 1996). A similar explanation has been suggested in the case of the recently discovered SMBHB candidate SDSS J143016.05+230344.4 as well, which shows a periodic optical variability with a decay in both period and amplitude (Jiang et al., 2022; An et al., 2022). However, the detected 2.6 cycles of the putative periodicity are likely insufficient to claim sinusoidal variations (Vaughan et al., 2016), and other processes, such as quasi-periodic eruptions (Miniutti et al., 2019) and quasi-periodic oscillations (Gierlinski et al., 2008) may also explain the periodic flux density variability observed in PSO J334. Indeed, subsequent observations with extended time baselines of the optical monitoring failed to find any evidence of periodic variability in PSO J334 (Liu et al., 2016). The radio structure of PSO J334 has been investigated in a multi-frequency study with the Karl G. Jansky Very Large Array (VLA) and the VLBA by Mooley et al. (2018). In the VLBA images obtained at four frequencies from 4.4 to 15.4 GHz, the quasar is resolved into two components, a compact core and a jet. Their separation is 3.6 milliarcsec (mas), corresponding to a projected linear separation of 30 pc. The VLA images at 2.8 and 4.38 GHz reveal a lobe-dominated structure extending 66 kpc from opposite sides of the core. In addition, the 39\({}^{\circ}\) difference between the position angles of the outer lobes and the inner jet is significant enough to suggest a perturbation of the jet by the second SMBH (Begelman et al., 1980), or alternatively a restarted double-double source. Thus, despite the results from the recent optical light curve, PSO J334 can still be considered as a SMBHB candidate (Mooley et al., 2018). Multi-wavelength observations aimed at determining the accretion mode of the quasar by Foord et al. (2017) have not found any feature that would convincingly distinguish PSO J334 from a single active galactic nucleus (AGN). However, there are still untested scenarios that would allow the object to retain its SMBHB status. 
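For reference, the projected linear scales quoted above can be reproduced from the angular measurements under the \(\Lambda\)CDM parameters adopted in this paper, for instance with astropy (a minimal sketch; the printed values are consistency checks, not new measurements):

```python
from astropy.cosmology import LambdaCDM
import astropy.units as u

# Cosmology adopted in this paper: H0 = 70 km/s/Mpc, Omega_M = 0.27, Omega_Lambda = 0.73
cosmo = LambdaCDM(H0=70 * u.km / u.s / u.Mpc, Om0=0.27, Ode0=0.73)
z = 2.06  # redshift of PSO J334

# Projected proper length subtended by 1 arcsec at the source redshift
scale = cosmo.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec)
print(scale)  # expected to be close to the ~8.6 kpc/arcsec quoted in the text

# The 3.6 mas VLBA core-jet separation in projected linear units
print(((3.6 * u.mas).to(u.arcsec) * scale).to(u.pc))  # roughly 30 pc
```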
\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \(\nu\) [GHz]\({}^{a}\) & Array\({}^{b}\) & \(S_{\rm tot}\) [mJy]\({}^{c}\) & \(S_{\rm peak}\) [mJy beam\({}^{-1}\)]\({}^{d}\) & \(\sigma\) [mJy beam\({}^{-1}\)]\({}^{e}\) & \(b_{\rm maj}\) [mas]\({}^{f}\) & \(b_{\rm min}\) [mas]\({}^{g}\) & PA [\({}^{\circ}\)]\({}^{k}\) \\ \hline 1.5 & VLA & 35.41 & 14.7 & 0.04 & 1665 & 1218 & \(-\)30.5 \\ 6.2 & VLA & 11.48 & 4.52 & 0.01 & 422.4 & 340.6 & \(-\)35.4 \\ 1.7 & EVN & 6.8 & 2.1 & 0.035 & 3.4 & 3.4 & 0 \\ 4.38 & VLBA & 4.83 & 2.54 & 0.049 & 5.86 & 2.46 & 11.2 \\ 7.40 & VLBA & 3.24 & 1.41 & 0.036 & 3.48 & 1.46 & 12.1 \\ 8.67 & VLBA & 3.45 & 1.56 & 0.06 & 2.13 & 0.86 & \(-\)0.27 \\ 15.37 & VLBA & 1.19 & 0.85 & 0.036 & 1.29 & 0.48 & \(-\)3.99 \\ \hline \hline \end{tabular} \({}^{a)}\) Observing frequency. \({}^{b)}\) Interferometer array performing the observation. \({}^{c)}\) Total flux density. \({}^{d)}\) Peak brightness. \({}^{e)}\) Rms noise. \({}^{f)}\) Beam major axis. \({}^{e)}\) Beam minor axis. \({}^{f)}\) Beam minor axis. \({}^{f)}\) Beam position angle. \end{table} Table 1: Map properties of the clean images shown in Figs. 1 and 3, as well as the re-analyzed VLBA images from Mooley et al. (2018). Figure 1: VLA images of PSO J334 at 1.5 and 6.2 GHz. Each color bar represents total intensity in mJy beam\({}^{-1}\). Restoring beam sizes (FWHM) are \(1218\,{\rm mas}\times 1665\,{\rm mas}\) at the major axis position angle PA = \(-30.5^{\circ}\) at 1.5 GHz, and \(340.6\,{\rm mas}\times 422.4\,{\rm mas}\) at PA = \(-25.4^{\circ}\) at 6.2 GHz, respectively. The lowest contours are at 0.12 and 0.03 mJy beam\({}^{-1}\), respectively, and further contour levels increase by a factor of two. The eastern and western lobes are marked as EL and WL, and the core is denoted as C. The red cross marks the position of the VLBI core at 1.7 GHz. We study the radio structure of PSO J334 using the technique of very long baseline interferometry (VLBI) with the European VLBI Network (EVN) at 1.7 GHz and with the VLA at 1.5 and 6.2 GHz. Here we present our results and compare them with those obtained with the VLBA at higher frequencies by Mooley et al. (2018). In Sect. 2, we describe the observations and the data reduction process. We then discuss our results in Sect. 3. Finally, a summary is given in Sect. 4. In this paper we assume a \(\Lambda\)CDM cosmological model with \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{\Lambda}\)=0.73, and \(\Omega_{\rm M}\)=0.27. At the redshift of the object, \(z=2.06\)(Becker et al., 2001), 1'' of angular distance in the sky corresponds to 8.569 kpc of projected linear distance (Wright, 2006). ## 2 Observations and data reduction ### VLA data Observations with the VLA (project code AG980, PI: K.E. Gabanyi) were carried out at 1.5 and 6.2 GHz (L and S/C bands) on 2016 October 28 and 2016 October 26, respectively. The VLA was observing in its most extended A configuration with 27 antennas. The on-source time was 0.5 h in both bands. The primary flux density calibrator in both experiments was 3C 48, and the polarization D-term calibrator was J2355+4950. The secondary calibrators were J2212+0152 (1.5 GHz) and J2218\(-\)0335 (6.2 GHz). The 1.5 GHz data were recorded in 16 spectral windows between 1.008 and 1.968 GHz with 64 channels, each with a bandwidth of 64 MHz. 
The 6.2 GHz data contained 48 spectral windows, but the first 16 were only used to set up the observations, so the target and calibrators were observed between 4.226 and 8.096 GHz frequencies in 32 spectral windows, each with 64 channels and a bandwidth of 128 MHz. Data reduction was performed in the Common Astronomy Software Applications (CASA, McMullin et al., 2007; The CASA Team, 2022) package version 7.15.0. First, phase, delay, bandpass, and gain calibrations were derived for the primary calibrator. Then, to calibrate polarization, we determined cross-hand delays, solved antenna-based D-terms for the unpolarized calibrator, and finally calibrated the polarization angle for the primary calibrator. We then used the calibration tables generated in the previous steps to transfer the solutions to the secondary calibrator and then to our target source1. As 3C 48 was undergoing an active phase during 20162, we inspected the polarimetric calibration carefully, imaged all calibrators to determine the polarization angle and fractional polarization values, and compared them to those available in the literature. In the case of the D-term calibrators, we found no significant polarization signatures and fractional polarization values were close to 0. In the case of 3C 48, polarization angles and fractional polarization values in the two bands differ by less than 15% compared to the values in the literature2. Footnote 1: For the calibration, we followed this VLA tutorial: [https://casaquides.nrao.edu/index.php/VLA_Continuum_Tutorial_3C391-CASA5.5.0](https://casaquides.nrao.edu/index.php/VLA_Continuum_Tutorial_3C391-CASA5.5.0) Footnote 2: [https://science.nrao.edu/facilities/vla/docs/manuals/obsguide/modes/pol](https://science.nrao.edu/facilities/vla/docs/manuals/obsguide/modes/pol) Hybrid imaging was performed by iterating tclean and self-calibration, while imaging all four Stokes parameters together, and then deriving polarized intensity and polarization fraction images from the Stokes IQU images. The 1.5 and 6.2 GHz VLA images are shown in Fig. 1, and the polarized intensity and electric vector position angle (EVPA) images are shown in Fig. 2. ### EVN data Our EVN observation (project code RSG08, PI: S. Frey) at 1.7 GHz was performed on 2015 October 18, with the participation of eleven radio telescopes: Jodrell Bank Lovell Telescope (United Kingdom), Westerbork (single dish; the Netherlands), Effelsberg (Germany), Medicina (Italy), Onsala (Sweden), Sheshan (China), Torun (Poland), Hartebeeshock (South Africa), Svetloe, Zelenchukskaya, and Badary (Russia). The data were recorded at a 1 Gbit s\({}^{-1}\) rate in left and right circular polarizations, with 8 basebands (IFs) per polarization, each divided into thirty-two 500-kHz wide spectral channels. The total bandwidth was 128 MHz per polarization. The correlation was performed at the Joint Institute for VLBI ERIC (Dwingeloo, the Netherlands) with 4 s integration time. The observation lasted for a total of 2 h. We used phase-referencing to a nearby (separated by 0\(\aas@@fstack{\circ}\)96) compact calibrator source, J2217+0220, with duty cycles of 6.5 min, including 3 min 20 s scans spent on the target. The total accumulated observing time on PSO J334 was 1 h. Figure 2: VLA 1.5 GHz (left panel) and 6.2 GHz (right panel) polarimetric images of PSO J334. The colors represent the polarized intensity in \(\mu\)Jy beam\({}^{-1}\), while the contours are the same as in the total intensity map in Fig. 1. 
EVPAs are represented by the black ticks in the image. The data were calibrated with the U.S. National Radio Astronomy Observatory (NRAO) Astronomical Image Processing System (AIPS, Greisen, 2003), following the usual procedures. The amplitudes of the raw correlated visibility data were calibrated using the antenna gain curves and measured system temperatures (where available), as provided by the participating stations. Nominal system temperature values were used for Jodrell Bank, Sheshan, Svetloe, Zelenchukskaya, and Badary. The data were then corrected for the dispersive ionospheric delay by invoking the task TECOR that uses total electron content maps derived from global navigation satellite systems data. Phase changes due to the time variation of the source parallactic angle were also corrected for azimuth-elevation mounted radio telescopes in the network. Global fringe-fitting was performed using the task FRING on the phase-reference calibrator J2217+0220 and the bright fringe-finder source J2148+0657 also observed for a 12-min scan at the beginning of the experiment. These calibrated visibility data were exported to Difmap (Shepherd, 1997) for imaging. Conventional hybrid mapping with several iterations of the clean algorithm (Hogbom, 1974) and phase-only self-calibration was performed. Then antenna-based gain correction factors were determined. These values were within \(\pm\)5% for the compact bright fringe-finder source, suggesting a reliable initial amplitude calibration. The clean component model obtained for the phase-reference source J2217+0220 in Difmap was fed back to AIPS, before repeating fringe-fitting, now taking the calibrator source structure into account for determining visibility phases. The fringe-fit solutions obtained for J2217+0220 were interpolated to the target source, PSO J334, within the atmospheric coherence time using the task CLCAL. The final calibrated visibility data file for PSO J334 was then transferred to Difmap for imaging and brightness distribution modeling. The peak of the dirty image was offset by about 1\(\aas@@fstack{\prime\prime}\)1 from the center because of an inaccurate a-priori position used for radio telescope pointing. Therefore we started the imaging by shifting the phase center to the actual brightness peak. Because PSO J334 is relatively weak with \(\sim 7\) mJy total flux density, and it appeared slightly resolved at 1.7 GHz with the EVN, self-calibration in general was not attempted during hybrid imaging in Difmap, except for phase self-calibration for the European stations (Effelsberg, Jodrell Bank, Onsala, Westerbork, Medina, and Torun). We used gradually decreasing solution intervals from 60 to 1 min. Amplitude self-calibration was not made at all. The naturally weighted clean image of PSO J334 is shown in Fig. 3. A circular two-dimensional Gaussian brightness distribution model component fitted directly to the self-calibrated visibility data in Difmap can adequately describe the source, allowing us to quantitatively characterize its size and flux density (Table 2). Neither an elliptical Gaussian, nor a two-component circular Gaussian model could significantly improve the goodness of fit. ### Archival VLBA data To be able to perform brightness distribution model fitting in the visibility domain, we downloaded from the NRAO data archive3, re-calibrated, and imaged the VLBA data of Mooley et al. (2018) measured at 4.38, 7.40, 8.67, and 15.37 GHz (project code BM438, PI: K.P. Mooley). 
For details on the observations, we refer to the original paper (Mooley et al., 2018). Calibration was carried out in AIPS. After loading the data with FITLD, we performed parallactic angle and digital sampling corrections, corrected for the Earth orientation parameters, and applied ionospheric corrections that are especially important for phase-referenced observations performed at low frequencies and low source declination. At first, fringe-fitting was performed on the phase-reference calibrator, J2217+0220, using the task FRING. After applying the calibration tables and writing the data out of AIPS, we performed hybrid imaging in Difmap, in a similar way as described in Sect. 2.2 for the EVN data. Since the gain corrections for all the VLBA antennas were within \(\pm\)5%, we did not perform any additional antenna-based amplitude correction in AIPS. The clean image of J2217+0220 was then loaded into AIPS and used during the second round of fringe-fitting. Delay and rate solutions were applied to both the calibrator and the target, PSO J334. The calibrated visibility data of the target source were written out. Imaging again was carried out in Difmap by only using clean iterations without self-calibration. The total flux densities, i.e. the sum of the individual clean components, and peak brightnesses agree with the values published in Mooley et al. (2018) within 10% (see image properties in Tab. 1). Gaussian brightness distribution model components (Table 2) were fitted to the visibility data using the modelfit command in Difmap. Footnote 3: [https://data.nrao.edu/](https://data.nrao.edu/) ## 3 Results and discussion ### Source structure and polarization The 1.7-GHz EVN image of PSO J334 (Fig. 3) restored with a 3.4-mas circular Gaussian beam (full width at half maximum, FWHM) shows a single component that is slightly resolved in roughly the southeast-northwest direction. The structure is consistent with the higher-frequency VLBA images (Mooley et al., 2018), where two components - a southeastern synchrotron self-absorbed core and a northwestern jet - were identified with Figure 3: 1.7-GHz EVN image of PSO J334. The image is restored with a 3.4-mas circular Gaussian beam. The lowest contours are at 0.09 mJy beam\({}^{-1}\) and increase by a factor of two. The source shows an elongated shape in the southeast–northwest direction, which is consistent with the structure reported in Mooley et al. (2018). 3.6 mas separation along the position angle of 139\({}^{\circ}\). (Position angles are conventionally measured from north through east.) The 39\({}^{\circ}\) misalignment in position angles between the VLA and VLBA jets led Mooley et al. (2018) to restore the SMBHB status of PSO J334, however, alternative interpretations are still viable to explain the observations. Double-double radio sources, such as B0925+420 and B1450+333 (Schoenmakers et al., 2000), that retain signs of several active phases show similar morphology to PSO J334. In this case, the lobes seen on kpc scales with the VLA would be relics of past activity, and the compact, mas-scale core-jet morphology was formed more recently. The change in jet position angle can be interpreted as precession either due to interaction with a companion SMBH (Begelman et al., 1980) or caused by a warped accretion disk changing the orientation of the jet (Pringle, 1997). Our VLA observations at 1.5 and 6.2 GHz show details of the complex structure of PSO J334 (Fig. 1). 
The source, as shown in the 2.8-GHz VLA B-configuration Caltech-NRAO Stripe 82 Survey (CNSS) image of Mooley et al. (2018), is a lobe-dominated quasar oriented at a large angle with respect to our line of sight. The projected linear size of the object is approximately 79 kpc, which is typical of \(z=2\) AGN (Blundell et al., 1999). The lengths of the arms are unequal, with the eastern arm being longer at both frequencies. However, contrary to expectations that the longer arm is pointing towards the observer and is brighter due to Doppler boosting (Longair & Riley, 1979), here the western, shorter arm appears brighter in both images. In addition, the eastern arm shows a sharp turn before ending in a hotspot, which suggests an interaction with the surrounding interstellar medium (ISM), similarly as, e.g., in the high-redshift radio galaxy 4C 41.17 (Gurvits et al., 1997). However, we do not detect polarized emission at the turning point as, e.g., in PKS 0637\(-\)752 (Lovell et al., 2000), where the highly polarized bent jet region also gives rise to bright X-ray emission (Schwartz et al., 2000). By inspecting the polarimetric images, we also see that the polarized intensity is higher in the western lobe and that EVPAs are close to perpendicular to the jet propagation on both sides, indicating the presence of a termination shock where the lobe material interacts with the ambient medium. The asymmetric structure together with the polarimetric results suggest that PSO J334 is embedded in a large-scale environment that is not intrinsically symmetric, and the jet interacts with clumps of the ISM that are disrupted upon contact with the jet material. Inspecting the inner 2\({}^{\prime\prime}-3^{\prime\prime}\) in our 6.2-GHz VLA image (Fig. 1), we see a remarkably straight jet that cannot be described with any precession model. Jet precession itself would be revealed by a helical jet shape that is physically external or intrinsic to the jet. In the first case, the main driver might be e.g. binary motion (e.g. Kun et al., 2014), Lense-Thirring precession (e.g. Lense & Thirring, 1918; Liska et al., 2018), or disk precession induced by a secondary black hole (e.g. Carroni & Abraham, 2004). In this case, the jet components move more or less on straight or ballistic paths, and the pitch of the jet is constant. The helical pattern simply reflects the periodic ejection direction of the newborn jet components. The second case appears, e.g., due to instabilities in the jet (e.g. Perucho et al., 2006) and components actually move along helical path. In this case, the spatial wavelength of the jet along its symmetry axis is increasing with increasing core separation. The jet appearing in the 6.2-GHz VLA image (Fig. 1) has no resemblance to any of these scenarios. However, we cannot rule out the possibility that more sensitive images with higher angular resolution could recover jet structures indicative of precession or the presence of instabilities (see, e.g., the case of 3C 279, Fuentes et al., 2023). This structure seems hard to reconcile with the jet precession suggested by Mooley et al. (2018) based on lower-resolution VLA data, and rules out the last argument supporting the binary nature of the source. ### Spectral analysis Spectral index maps (\(S_{\nu}\propto\nu^{\alpha}\)) were created between our two VLA images at 1.5 and 6.2 GHz, as well as between our 1.7-GHz EVN image and the 4.38-GHz VLBA image of Mooley et al. (2018), plotted with black contours in Fig. 6. 
To align the images on the optically thin jet components, we used 2D cross-correlation. The image pairs had the same restoring beam, map and pixel size, as well as the same minimum and maximum \((u,v)\) distance. The resulting spectral index maps are shown in Fig. 4. Due to the low signal-to-noise ratio outside of the peaks of the core and lobe regions, we do not recover spectral index solutions there.

Figure 4: Two-point spectral index distribution maps of PSO J334 based on: _(a)_ quasi-simultaneous VLA images at 1.5 and 6.2 GHz, and _(b)_ 1.7-GHz EVN and 4.38-GHz VLBA images. Lowest contours are at 0.12 and 0.14 mJy/beam, respectively, and contour levels increase by a factor of two. Colors represent spectral index values. We note that the VLBI images were taken half a year apart, so flux density variability cannot be excluded and therefore the spectral index map in panel _(b)_ should be treated with caution.

Spectra are flat in the VLA core, i.e. the central component, and the eastern lobe; however, \(\alpha\) values indicating a steeper spectrum are measured in the western lobe (Fig. 4). The flat spectrum of the eastern lobe might indicate a shock region, which is supported by the bending seen in the 6.2-GHz VLA image (Fig. 1), possibly happening due to the interaction with the ISM. The steeper spectrum of the western lobe, however, can be explained by an older population of electrons present in this region. The VLBI spectral index map created between 1.7 and 4.38 GHz shows a flat spectrum both for the core and the northwestern jet component. Since we assume that the amplitude calibration is accurate within 10%, spectral index errors are estimated to be \(\pm 0.15\). See also figure 3 of Mooley et al. (2018) for the spectral index map between 8.67 and 15.37 GHz, which shows steep spectra in both VLBI components. We also plot the spectra of the core and jet components of the VLBI images in Fig. 5, where both the core and the jet components show a steep spectrum between 1.7 and 4.38 GHz. In addition, the steepening towards the jet edge might be an artifact due to the low brightness of the component. While our results confirm the southeastern component as the core (Mooley et al., 2018), here we must note that the observations used to create the spectral index map were made half a year apart, so these results must be interpreted cautiously because of possible flux density variability. The radio spectrum compiled in Benke et al. (2018), as well as the broadband spectral energy distribution (SED) in Foord et al. (2017), show no deviation from single AGN spectra. The two models investigated by Foord et al. (2017) are the mini-disk and cavity models that represent different stages of the binary evolution and manifest in the SED as missing emission at different frequencies. However, they found that the optical to X-ray bands are well modeled with a composite non-blazar SED (Shang et al., 2011) and the radio emission falls between what is expected from radio-loud and radio-quiet sources.
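To make the quoted uncertainty concrete, the short sketch below evaluates a two-point spectral index \(\alpha\) (with \(S_{\nu}\propto\nu^{\alpha}\)) and propagates a 10% amplitude-calibration error, which reproduces the \(\pm 0.15\) figure given above; the flux densities used here are illustrative placeholders, not measured values of PSO J334.

```python
import numpy as np

def spectral_index(s1_mjy, s2_mjy, nu1_ghz, nu2_ghz, frac_err=0.10):
    """Two-point spectral index alpha (S_nu ~ nu^alpha) and its error,
    propagating the same fractional flux-density error on both points."""
    alpha = np.log(s2_mjy / s1_mjy) / np.log(nu2_ghz / nu1_ghz)
    # independent 10% errors on both flux densities:
    alpha_err = np.sqrt(2.0) * frac_err / abs(np.log(nu2_ghz / nu1_ghz))
    return alpha, alpha_err

# Illustrative flux densities only; 1.7 and 4.38 GHz are the VLBI bands above.
alpha, err = spectral_index(1.0, 0.55, 1.7, 4.38)
print(f"alpha = {alpha:.2f} +/- {err:.2f}")   # error evaluates to ~0.15
```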
### Brightness temperatures

To study the brightness temperature of the compact radio emitting features in the VLBI images (Figs. 3 and 6, also Mooley et al., 2018), we fitted circular Gaussian components in the visibility domain with modelfit in Difmap. Characteristics of the modelfit components are summarized in Table 2. We assume flux density errors to be 10% and the error in component size to be 20% (Lister et al., 2009). We calculate the observed brightness temperature as
\[T_{\rm b,obs}[{\rm K}]=1.22\times 10^{12}\left(\frac{S_{\nu}}{\rm Jy}\right)\left(\frac{\nu}{\rm GHz}\right)^{-2}\left(\frac{b_{\rm min}\times b_{\rm maj}}{\rm mas^{2}}\right)^{-1}(1+z), \tag{1}\]
where \(b_{\rm min}\) and \(b_{\rm maj}\) are the minor and major axis (FWHM) of the component. The resolution limit was computed based on eq. 2 in Kovalev et al. (2005). For the one component for which the size fell below this limit, we can only calculate a lower limit of its brightness temperature. \(T_{\rm b,obs}\) values fall between \(10^{7}\) and \(10^{9}\) K. This confirms that the emission originates from AGN activity and not from spatially extended star formation in the host galaxy (Condon et al., 1991). The measured brightness temperatures are lower than the average core brightness temperature of blazars in the MOJAVE sample4 of \(1.39\times 10^{11}\) K (Homan et al., 2021). This indicates that the emission of PSO J334 is likely not beamed and the jets are probably inclined at a large angle to our line of sight.

Footnote 4: [https://www.cv.nrao.edu/MOJAVE/](https://www.cv.nrao.edu/MOJAVE/)

### VLBI and Gaia astrometry

Both the EVN and VLBA observations were carried out in phase-referencing mode, enabling precise relative astrometric measurements. We determined the position of the brightness peak in the VLBI images using MAXFIT in AIPS. Since we expect that the optical emission originates from the vicinity of the central engine, i.e., the accretion disk and the inner jet (Kovalev et al., 2017), we can use _Gaia_ data (Gaia Collaboration et al., 2016) from the third data release (DR3, Gaia Collaboration et al., 2023) to identify the nucleus position. The standard error of the optical position is 0.27 mas in right ascension and 0.32 mas in declination based on _Gaia_ DR3. VLBI astrometric errors are comparable and estimated to be 0.3 mas for low declinations with

Figure 5: Spectra of the core (SE) and jet (NW) components of the VLBA images.

Figure 6: 15.37-GHz VLBA image overplotted with the _Gaia_ position and its uncertainty (red cross). White contours are at 0.83 mJy beam\({}^{-1}\) \(\times\) (\(-15\), 15, 30, 60)%, and the restoring beam size is \(1.29\) mas \(\times\) 0.48 mas at PA \(=-3.96^{\circ}\). Black contours representing the 8.67 GHz source structure are at 1.56 mJy beam\({}^{-1}\) \(\times\) (\(-7.5\), 7.5, 15, 30, 60)%, and the FWHM of the restoring beam is \(2.13\) mas \(\times\) 0.86 mas at PA \(=-3.99^{\circ}\).
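For reference, Eq. (1) can be evaluated directly. In the sketch below the component parameters are illustrative placeholders rather than a row of Table 2; only the redshift z = 2.06 of PSO J334 is taken from the text, and the result falls in the \(10^{7}\)–\(10^{9}\) K range quoted above.

```python
def t_b_obs(s_jy, nu_ghz, b_maj_mas, b_min_mas, z):
    """Observed brightness temperature of a Gaussian component, Eq. (1)."""
    return 1.22e12 * s_jy * nu_ghz ** -2 / (b_maj_mas * b_min_mas) * (1.0 + z)

# Hypothetical component: 1 mJy circular Gaussian of 0.5 mas FWHM at 4.38 GHz.
print(f"T_b,obs ~ {t_b_obs(1e-3, 4.38, 0.5, 0.5, 2.06):.2e} K")
```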
PSO J334.2028+1.4075 (PSO J334) is a luminous quasar at redshift z=2.06. The source attracted attention when periodic variations were discovered in its optical light curve. These variations were initially interpreted as arising from the orbital motion of a supermassive black hole binary (SMBHB) residing in a single accretion disk. Subsequent multi-wavelength observations, however, provided evidence against the binary hypothesis, as no optical periodicity was found. On the other hand, detailed radio analysis with the Karl G. Jansky Very Large Array (VLA) and the Very Long Baseline Array (VLBA) suggested a lobe-dominated quasar on kpc scales and a possibly precessing jet. Based on this evidence,
2301.07486
CINM (Cinnamon): A Compilation Infrastructure for Heterogeneous Compute In-Memory and Compute Near-Memory Paradigms
The rise of data-intensive applications exposed the limitations of conventional processor-centric von-Neumann architectures that struggle to meet the off-chip memory bandwidth demand. Therefore, recent innovations in computer architecture advocate compute-in-memory (CIM) and compute-near-memory (CNM), non-von-Neumann paradigms achieving orders-of-magnitude improvements in performance and energy consumption. Despite significant technological breakthroughs in the last few years, the programmability of these systems is still a serious challenge. Their programming models are too low-level and specific to particular system implementations. Since such future architectures are predicted to be highly heterogeneous, developing novel compiler abstractions and frameworks becomes necessary. To this end, we present CINM (Cinnamon), a first end-to-end compilation flow that leverages hierarchical abstractions to generalize over different CIM and CNM devices and enable device-agnostic and device-aware optimizations. Cinnamon progressively lowers input programs and performs optimizations at each level in the lowering pipeline. To show its efficacy, we evaluate CINM on a set of benchmarks for the well-known UPMEM CNM system and memristor-based CIM accelerators. We show that Cinnamon, supporting multiple hardware targets, generates high-performance code comparable to or better than state-of-the-art implementations.
Asif Ali Khan, Hamid Farzaneh, Karl F. A. Friebel, Clément Fournier, Lorenzo Chelini, Jeronimo Castrillon
2022-12-25T18:36:47
http://arxiv.org/abs/2301.07486v4
# CINM (Cinnamon): A Compilation Infrastructure for Heterogeneous Compute In-Memory and Compute Near-Memory Paradigms

###### Abstract.
The rise of data-intensive applications exposed the limitations of conventional processor-centric von-Neumann architectures that struggle to meet the off-chip memory bandwidth demand. Therefore, recent innovations in computer architecture advocate compute-in-memory (CIM) and compute-near-memory (CNM), non-von-Neumann paradigms achieving orders-of-magnitude improvements in performance and energy consumption. Despite significant technological breakthroughs in the last few years, the programmability of these systems is still a serious challenge. Their programming models are too low-level and specific to particular system implementations. Since such future architectures are predicted to be highly heterogeneous, developing novel compiler abstractions and frameworks becomes necessary. To this end, we present _CINM (Cinnamon)_, a first end-to-end compilation flow that leverages hierarchical abstractions to generalize over different CIM and CNM devices and enable device-agnostic and device-aware optimizations. Cinnamon progressively lowers input programs and performs optimizations at each level in the lowering pipeline. To show its efficacy, we evaluate CINM on a set of benchmarks for the well-known UPMEM CNM system and memristor-based CIM accelerators. We show that Cinnamon, supporting multiple hardware targets, generates high-performance code comparable to or better than state-of-the-art implementations.

Footnote †: PIM, CIM, and in-memory computing (IMC) are used alternatively in the literature. We will use CIM in this paper.

## 1. Introduction

Application domains such as social and streaming media, internet-of-everything, communications and services, and virtual assistant technologies such as Alexa and Siri are generating data at a break-neck pace, i.e., in the _quintillion bytes_ range every day. This mind-boggling data volume is mostly raw and requires processing and analysis.
In the conventional _processor centric_ von-Neumann computing paradigms, these applications quickly hit hard performance and energy-efficiency boundaries as data have to be moved between the CPU and the memory via a narrow memory channel. On a mobile device, the data movement alone consumes 62% of the total system energy (CIM). To overcome this data movement and other challenges associated with the memory subsystem, computer architects are moving to _non-Van-Neumann_ system models like _computing near memory_ (CNM) (Krishna et al., 2017) and _computing in memory_ (CIM) (Krishna et al., 2017). The idea is to bring computations closer to the data and process data where it makes sense. In CNM, dedicated CMOS logic is integrated into the memory chip to diminish the data movement problem. Conceptually, this tight coupling of the logic and memory devices can be applied at any level in the memory hierarchy with various memory technologies. For DRAM, both planar and stacked structures, such as Micron's hybrid memory cube (Krishna et al., 2017), AMD's and SK Hynix's high bandwidth memory (Krishna et al., 2017) and Samsung's wide IO (Samsung, 2018) have been used to realize CNM systems (Krishna et al., 2017). While CNM greatly reduces the data movement via the external channel, it still requires communicating data between the memory and the compute units. The CIM model completely eliminates data movement by exploiting the physical properties of the memory devices to implement various logic and compute operations in-place (Krishna et al., 2017). CIM systems based on novel memory devices with inherent computing capabilities, such as phase change memory (PCM), resistive RAM (RRAM), magnetic RAM (MRAM), and spintronics-based racetrack memories (RTMs) have demonstrated orders of magnitude performance and energy gains for machine learning and other applications domains (Krishna et al., 2017; Goyal et al., 2018; Goyal et al., 2018; Goyal et al., 2018; Goyal et al., 2018; Goyal et al., 2018). Of late, several innovative CNM and CIM systems have been proposed. These include domain-specific architectures such as the Neurocube (Nguyen et al., 2018), ISAAC (Krishna et al., 2017), Microsoft Brainwave NPU (Goyal et al., 2018), and several DNN accelerators (Krishna et al., 2017) among others. These systems are orders of magnitude faster and more energy-efficient than the general-purpose Von-Neumann machines but only target specific application domains. UPMEM (Krishna et al., 2017) has shown case studies of _processing in memory_ (PIM)1 in more general-purpose off-the-shelf systems. Recently, Samsung (Samsung, 2018; Samsung, 2018) and SK hynix (Krishna et al., 2017) proposed machine learning specific CNM systems based on the HBM2 and GDDR6 DRAM standards supporting TFLOPS. On the CIM front, in just the last couple of years, major memory vendors such as Samsung (Goyal et al., 2018), TSMC (Goyal et al., 2018; Krishna et al., 2017) and IBM (Krishna et al., 2017; Krishna et al., 2017) have fabricated CIM chips based on memristive and magnetic technologies that attain unparalleled performance and energy efficiencies. This is a significant milestone
The rise of data-intensive applications has exposed the limitations of conventional processor-centric von-Neumann architectures, which struggle to meet the off-chip memory bandwidth demand. Recent advances in computer architecture therefore advocate compute-in-memory (CIM) and compute-near-memory (CNM), non-von-Neumann paradigms that deliver orders-of-magnitude improvements in performance and energy consumption. Although major technological breakthroughs have been made in recent years, the programmability of these systems remains a serious challenge: their programming models are low-level and specific to particular system implementations. Since future architectures are predicted to be highly heterogeneous, developing novel compiler abstractions and frameworks becomes essential. To this end, we present CINM (
2309.04975
Trade-Off Between Beamforming and Macro-Diversity Gains in Distributed mMIMO
Industry and academia have been working towards the evolution from Centralized massive Multiple-Input Multiple-Output (CmMIMO) to Distributed mMIMO (DmMIMO) architectures. Instead of splitting a coverage area into many cells, each served by a single Base Station equipped with several antennas, the whole coverage area is jointly covered by several Access Points (AP) equipped with few or single antennas. Nevertheless, when choosing between deploying more APs with few or single antennas or fewer APs equipped with many antennas, one observes an inherent trade-off between the beamforming and macro-diversity gains that has not been investigated in the literature. Given a total number of antenna elements and total downlink power, under a channel model that takes into account a probability of Line-of-Sight (LoS) as a function of the distance between the User Equipments (UEs) and APs, our numerical results show that there exists a ``sweet spot" on the optimal number of APs and of antenna elements per AP which is a function of the physical dimensions of the coverage area.
Eduardo Noboro Tominaga, Hsuan-Jung Su, Jinfeng Du, Sivarama Venkatesan, Richard Demo Souza, Hirley Alves
2023-09-10T09:39:16
http://arxiv.org/abs/2309.04975v1
# Trade-Off Between Beamforming and Macro-Diversity Gains in Distributed mMIMO ###### Abstract Industry and academia have been working towards the evolution from Centralized massive Multiple-Input Multiple-Output (CmMIMO) to Distributed mMIMO (DmMIMO) architectures. Instead of splitting a coverage area into many cells, each served by a single Base Station equipped with several antennas, the whole coverage area is jointly covered by several Access Points (AP) equipped with few or single antennas. Nevertheless, when choosing between deploying more APs with few or single antennas or fewer APs equipped with many antennas, one observes an inherent trade-off between the beamforming and macro-diversity gains that has not been investigated in the literature. Given a total number of antenna elements and total downlink power, under a channel model that takes into account a probability of Line-of-Slight (LoS) as a function of the distance between the User Equipments (UEs) and APs, our numerical results show that there exists a "sweet spot" on the optimal number of APs and of antenna elements per AP which is a function of the physical dimensions of the coverage area. Distributed massive MIMO, beamforming gain, macro-diversity gain, and spectral efficiency. ## I Introduction Massive Multiple-Input Multiple-Output (mMIMO) is one of the major physical layer technologies introduced in the Fifth Generation (5G) of wireless communications networks [1]. It features Base Stations (BS) equipped with many antenna elements, thus providing them with very high beamforming gains and spatial multiplexing capabilities [2]. However, the current 5G deployments still adopt the cellular network paradigm; thus, the traditional Centralized mMIMO (CmMIMO) approach does not solve the issues of inter-cell interference and unequal performance between User Equipments (UEs) located at the cell center and UEs located at the cell edges [3]. Aiming to solve the aforementioned issues, the research community has been working towards evolving from CmMIMO to Distributed mMIMO (DmMIMO), also known as Cell-Free mMIMO. Instead of having a single BS in each cell equipped with several antennas, the coverage area is served by multiple Access Points (APs), each equipped with few or single antenna elements, and connected to a common Central Processing Unit (CPU) through fronthaul connections. This approach may provide uniform wireless coverage and solve the inter-cell interference problem via smart AP clustering and interference mitigation schemes [4]. Most of the works on DmMIMO advocate for its significant performance improvements compared to CmMIMO in terms of data rates and uniform wireless coverage, e.g., [5, 6, 7]. By deploying several APs, one obtains macro-diversity gains that guarantee a more uniform wireless coverage. However, since the APs are equipped with few or a single antenna, they do not present the same beamforming gains or spatial multiplexing capabilities of a single BS equipped with several antennas. Thus, given the total number of antenna elements, a trade-off between macro-diversity and beamforming gains has been studied in some works. In the case of indoor industrial scenarios, this trade-off was studied in [8, 9]. 
Based on the results of a measurement campaign, the authors in [8] found that semi-distributed setups (i.e., setups that adopt few APs equipped with multiple antenna elements) present a performance comparable to that of the fully distributed setups (i.e., setups that adopt many single antenna APs) in terms of achievable downlink Spectral Efficiency (SE), with the advantage of presenting lower deployment cost. However, their setup had only eight antenna elements. In [9], authors compared centralized, partially distributed, and fully distributed mMIMO setups regarding block error rates. They found that the DmMIMO setups only provide strong performance gains when the beamformers can properly handle inter-user interference. For micro-urban outdoor scenarios, the trade-off between the number of APs and antenna elements per AP was studied in [10, 11, 12]. In [10], the authors investigated the energy efficiency of DmMIMO systems. They fixed the total number of antenna elements and the network's total transmit power for a fair comparison. They found that the optimal number of antennas per AP depends on system parameters such as the coverage area's target spectral efficiency and size. Nevertheless, their work did not thoroughly investigate the relation between such parameters and the optimal number of APs. The authors in [11] observe that semi-distributed deployments have almost the same performance as the fully distributed deployments in uplink and downlink, with the advantage of requiring a much lower number of APs. Nonetheless, the optimal number of APs or antennas per AP as a function of any system parameter is not discussed. In [12], the authors showed that increasing the number of APs in a square coverage area with a side length of \(1\) km, given a total number of antenna elements and number of users, leads to higher mean per-user rates. Inspired by the aforementioned works, especially [10], in this paper, we evaluate the downlink performance of a DmMIMO network. Fixing the total number of antenna elements and the total downlink transmit power budget, we investigate the "sweet spot" in terms of the number of APs and antenna elements per AP. In other words, we investigate the trade-off between macro-diversity gain and beamforming gain. We conduct our analysis considering the impact of imperfect Channel State Information (CSI). Resorting to Monte Carlo simulations, our results show that, given a total number of antenna elements, the optimal numbers of APs and antenna elements per AP is a function of the dimensions of the coverage area. In other words, we found that there is an optimal density of APs (i.e., number of APs per km\({}^{2}\)) that maximizes the mean per-user achievable SE. Such finding has not been reported in related works. We show that, for small coverage areas, having few APs equipped with many antennas yields better performance than having many APs equipped with few antennas. Nevertheless, as the coverage area increases and having the users uniformly distributed, having more APs in the system becomes increasingly advantageous. This paper is organized as follows. The considered system model is introduced in Section II. The channel estimation methods studied in this work are described in Section III. Section IV presents the downlink data transmission and the adopted performance metric. Section V presents the numerical results based on Monte Carlo simulations. Finally, we conclude this work in Section VI. 
## II System Model We consider a square coverage area with dimensions \(l\times l\) m\({}^{2}\), where \(Q\) APs, indexed by \(q\in\{1,\ldots,Q\}\), cooperate to serve \(K\) single-antenna UEs, which are indexed by \(k\in\{1,\ldots,K\}\). Each AP has a Uniform Linear Array (ULA) containing \(S\) antenna elements. The antenna spacing is half-wavelength, \(d_{H}=1/2\). The total number of antenna elements exceeds the number of UEs, \(M=QS\), \(M>K\). The \(Q\) APs are distributed on a uniform grid of \(\sqrt{Q}\times\sqrt{Q}\) APs, thus the value of \(Q\) is selected such that \(\sqrt{Q}\) is an integer. On the other hand, the \(K\) UEs are uniformly distributed on the square coverage area. Denoting \((x_{k},y_{k})\) the coordinates of the location of the \(k\)-th UE, we have \(x_{k},y_{k}\sim\mathcal{U}(0,l)\). The considered system model is illustrated in Fig. 1 for \(Q=16\) APs and \(K=20\) UEs. We assume a fully centralized operation, where each AP acts only as a Remote-Radio Head (RRH) and is connected to a common CPU through a fronthaul link. The APs forward the received signal samples to the CPU in the uplink and coherently transmit the signals generated by the CPU in the downlink. Moreover, all APs jointly serve all UEs. We consider a system with bandwidth \(B\) that operates at the carrier frequency \(f_{c}\). By employing a multicarrier modulation scheme, the frequency resources are split into multiple flat-fading subcarriers. We assume the channel coefficients to be constant and frequency flat in a coherence block of length \(\tau_{c}\) channel uses, and the independent block fading model. The performance analysis is carried out by studying a single statistically representative subcarrier during a coherence block [2]. We assume channel reciprocity, i.e., the channel coefficients are the same during uplink and downlink transmissions. This assumption allows Time-Division Duplexing (TDD) operation. The CSI is acquired in the uplink by using pilot sequences, and it is utilized for coherent uplink receive combining and downlink transmit precoding [2]. The uplink training phase occupies \(\tau_{p}\) samples, followed by a time instant reserved for channel estimation and processing tasks. The downlink data transmission phase occupies the next \(\tau_{d}\) time instants. Thus, we have \(\tau_{c}=\tau_{p}+1+\tau_{d}\). Note that we can have up to \(\tau_{p}\) orthogonal pilot sequences. We adopt the spatially correlated Rician fading channel model from 3GPP for MIMO simulations [13] that was also used in [14]. This model accounts for a Line-of-Sight (LoS) probability which is a function of the distance between the UE and the AP. The collective vector of wireless channel coefficients between the \(k\)-th UE and the \(Q\) APs is \[\textbf{h}_{k}=[\textbf{h}_{k1}^{T},\textbf{h}_{k2}^{T},\ldots,\textbf{h}_{kQ} ^{T}]^{T}\in\mathbb{C}^{M\times 1}, \tag{1}\] where \(\textbf{h}_{kq}\in\mathbb{C}^{S\times 1}\) is the vector of wireless channel coefficients between the \(k\)-th UE and the \(q\)-th AP such that \[\textbf{h}_{kq}\sim\mathcal{CN}(\bar{\textbf{h}}_{kq},\textbf{R}_{kq}), \tag{2}\] where \(\bar{\textbf{h}}_{kq}\in\mathbb{C}^{S\times 1}\) is the LoS component and \(\textbf{R}_{kq}\in\mathbb{C}^{S\times S}\) is the positive semidefinite covariance matrix describing the spatial correlation of the Non-Line-of-Sight (NLoS) components. The covariance matrices \(\textbf{R}_{kq},~{}\forall k,~{}\forall q\) are assumed to be perfectly known. 
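As a minimal numerical sketch of Eq. (2), the snippet below draws one realization of a spatially correlated Rician channel \(\mathbf{h}\sim\mathcal{CN}(\bar{\mathbf{h}},\mathbf{R})\) for a single UE-AP pair via a Cholesky factor of the covariance. The steering-vector mean and exponential correlation matrix are illustrative placeholders and do not reproduce the 3GPP model of [13].

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_rician_channel(h_bar, R):
    """One realization of h ~ CN(h_bar, R): LoS mean plus spatially
    correlated NLoS part obtained from a Cholesky factor of R (Eq. (2))."""
    S = h_bar.shape[0]
    L = np.linalg.cholesky(R)
    w = (rng.standard_normal(S) + 1j * rng.standard_normal(S)) / np.sqrt(2.0)
    return h_bar + L @ w

# Toy UE-AP pair with S = 4 antennas (placeholder values):
S = 4
h_bar = np.exp(1j * np.pi * np.arange(S) * np.sin(0.3))   # ULA steering-like LoS mean
R = 0.5 * 0.9 ** np.abs(np.subtract.outer(np.arange(S), np.arange(S)))  # NLoS covariance
print(draw_rician_channel(h_bar, R))
```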
We assume all UEs have the same fixed uplink transmit power \(p\). For downlink transmissions, the system has a maximum total transmit power denoted by \(P\), which is assumed to be split equally among all the APs so that the per AP transmit power is \(p_{d}=P/Q\). Fig. 1: Illustration of the considered DmMIMO network for \(Q=16\), \(K=20\) and \(l=500\) m. ## III Channel Estimation To obtain estimates of the wireless channel coefficients, the system adopts a set of mutually orthogonal pilot sequences \(\psi_{1},\psi_{2},\ldots,\psi_{\tau_{p}}\) with length \(\tau_{p}\), and with \(\|\psi_{t}\|^{2}=\tau_{p},t\in\{1,\ldots,\tau_{p}\}\). Herein we assume that \(\psi_{t}\) is a column of \(\sqrt{\tau_{p}}\mathbf{I}_{\tau_{p}}\forall t\), i.e., \(\psi_{t}=\sqrt{\tau_{p}}\ [\mathbf{I}_{\tau_{p}}]_{t},\forall t\in\{1,\ldots,\tau_{p}\}\). During a pilot transmission phase, the UEs transmit the pilot sequences with power \(p\). We assume a crowded network such that \(K>\tau_{p}\), thus, multiple UEs are assigned with the same pilot sequence, i.e., there is pilot contamination in the system. We adopt a balanced random pilot assignment scheme. Even though this scheme is suboptimal, its performance is still substantially better than that of a purely random pilot assignment scheme [15]. Each UE is allocated a pilot sequence, i.e., sequentially and cyclically selected from the set of available orthogonal pilot sequences. The index of the time instant allocated for the transmission of the pilot signal of UE \(k\) is \(n_{k}\in\{1,\ldots,\tau_{p}\}\). This index also corresponds to the index of the pilot sequence assigned to the \(k\)-th UE and is \(n_{k}=k-\left\lfloor\frac{k-1}{\tau_{p}}\right\rfloor\tau_{p}\). The subset of UEs that use the same pilot sequence as the \(k\)-th UE is defined as \(\mathcal{P}_{k}=\{i:n_{i}=n_{k}\}\subset\{1,\ldots,K\}\). During the pilot transmission phase, the received signal vector for the \(q\)-th AP at the time instant \(n_{k}\in\{1,\ldots,\tau_{p}\}\) is \(\mathbf{y}_{q}^{\text{\emph{ilot}}}[n_{k}]\in\mathbb{C}^{S\times 1}\), \[\mathbf{y}_{q}^{\text{\emph{ilot}}}[n_{k}]=\sum_{i\in\mathcal{P}_{k}}\sqrt{p_ {i}}\ \mathbf{h}_{iq}+\mathbf{z}_{q}[n_{k}]. \tag{3}\] The channel estimation takes place at time instant \(\lambda=\tau_{p}+1\). We assume that the CPU has perfect statistical information of the channels, i.e., the correlation matrices \(\mathbf{R}_{kq}\ \forall k,\forall q\) are assumed to be perfectly known [2]. Using the Linear Minimum Mean Square Error (LMMSE) estimation, the estimate \(\hat{\mathbf{h}}_{kq}[\lambda]\) of the channel coefficient \(\mathbf{h}_{kq}[n_{k}]\) can be computed by each AP according to [16] \[\hat{\mathbf{h}}_{kq}[\lambda] =\sqrt{p_{k}}\ \mathbf{R}_{kq}\,\mathbf{\Psi}_{n_{kq}}^{-1}\ \mathbf{y}_{n_{kq}}^{\text{\emph{ pilot}}}, \tag{4}\] \[\mathbf{\Psi}_{n_{kq}} =\sum_{i\in\mathcal{P}_{k}}\ p_{i}\ \mathbf{R}_{iq}+\sigma^{2}\mathbf{I}_{S} \tag{5}\] where (5) is the correlation matrix of the received signal. ## IV Downlink Data Transmission In this work, we consider coherent downlink transmission, i.e., all the APs simultaneously transmit the same data symbol to a given UE. 
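Picking up the channel-estimation step of Eqs. (3)-(5) from the previous section, the following is a minimal sketch of the per-AP LMMSE estimator; the toy inputs in the usage lines are hypothetical and only illustrate the array shapes involved.

```python
import numpy as np

def lmmse_estimate(y_pilot, R_list, p_list, k, P_k, sigma2):
    """LMMSE channel estimate at one AP for UE k, Eqs. (4)-(5).
    y_pilot : (S,) received pilot signal in UE k's pilot slot, Eq. (3)
    R_list  : dict {i: R_iq} covariance matrices of the pilot-sharing UEs
    p_list  : dict {i: p_i} uplink pilot powers
    P_k     : indices of UEs using the same pilot as UE k (including k)"""
    S = y_pilot.shape[0]
    Psi = sigma2 * np.eye(S, dtype=complex)          # Eq. (5)
    for i in P_k:
        Psi += p_list[i] * R_list[i]
    return np.sqrt(p_list[k]) * R_list[k] @ np.linalg.solve(Psi, y_pilot)  # Eq. (4)

# Toy usage: two UEs share a pilot, S = 4 antennas, placeholder covariances.
S = 4
R = {1: np.eye(S, dtype=complex), 2: 0.5 * np.eye(S, dtype=complex)}
p = {1: 1.0, 2: 1.0}
y = np.ones(S, dtype=complex)
print(lmmse_estimate(y, R, p, k=1, P_k=[1, 2], sigma2=0.1))
```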
The received downlink signal at the \(k\)-th UE and at time instant \(n\in\{\lambda+1,\ldots,\tau_{c}\}\) is \[y_{k}^{\text{\emph{dl}}}[n]=\sum_{q=1}^{Q}\mathbf{h}_{kq}[n]^{H}\mathbf{x}_{q }[n]+z_{k}[n], \tag{6}\] where \(z_{k}[n]\sim\mathcal{CN}(0,\sigma^{2})\) is the AWGN sample at the receiver of the \(k\)-th UE, \[\mathbf{x}_{q}[n]=\sqrt{p_{d}\mu_{q}}\sum_{k=1}^{K}\mathbf{w}_{kq}[\lambda]s_ {k}[n] \tag{7}\] is the signal transmitted by the \(q\)-th AP, \(p_{d}\) is the maximum transmit power of each AP, \(\mu_{q}\) is the normalization parameter for the precoding, \(\mathbf{w}_{kq}[\lambda]\) is the precoding vector utilized by the \(q\)-th AP for the transmission of the signal intended for the \(k\)-th UE, and \(s_{k}[n]\sim\mathcal{CN}(0,1)\) is the data symbol intended for the \(k\)-th UE. Since the beamforming vectors \(\mathbf{w}_{kq}\ \forall k,\forall q\) are computed once in each coherence time interval, at the time instant \(\lambda\), we can drop the time index for simplicity. The normalization parameter \(\mu_{q}\) is necessary to guarantee that each AP satisfies its maximum transmit power constraint \(p_{d}\), and is computed as \(1/\mu_{q}=\sum_{i=1}^{K}\mathbb{E}\{\|\mathbf{w}_{iq}\|^{2}\}\). The received signal at the \(k\)-th UE can be written as \[y_{k}[n]= \underbrace{\sum_{q=1}^{Q}\sqrt{p_{d}\mu_{q}}\ \mathbf{h}_{kq}^{H}[n]\ \mathbf{w}_{kq}s_{k}[n]}_{\text{Desired signals}}+\] \[\underbrace{\sum_{i=1,i\neq k}^{K}\sum_{q=1}^{Q}\sqrt{p_{d}\mu_{ q}}\ \mathbf{h}_{kq}^{H}[n]\ \mathbf{w}_{iq}s_{i}[n]}_{\text{Inter-User Interference}}+\underbrace{z_{k}[n]}_{\text{Noise}}. \tag{8}\] In this work, we adopt the Use-and-then-Forget (UatF) bound technique to obtain semi-analytical expressions for the achievable downlink rates [2]. The UatF bound provides us with a rigorous lower bound regardless of the amount of channel hardening. Nevertheless, the more channel hardening we have on the system, the tighter this bound is. In other words, the UatF bound is a pessimistic estimate of the achievable downlink rate when channel hardening does not occur [5]. Thus, the ergodic downlink capacity of the \(k\)-th UE is lower bounded using the UatF bound as \[\mathtt{SE}_{k} =\frac{1}{\tau_{c}}\sum_{n=\lambda+1}^{\tau_{c}}\log_{2}(1+\gamma_ {k}[n])\text{ bits/s/Hz}, \tag{9}\] \[\gamma_{k}[n] =\frac{p_{d}\left|\sum_{q=1}^{Q}\mathtt{DS}_{kq}[n]\right|^{2}}{ p\sum\limits_{i=1}^{K}\mathtt{INT}_{i}[n]-p_{d}\left|\sum_{q=1}^{Q}\mathtt{DS}_{kq}[n] \right|^{2}+\sigma^{2}},\] (10) \[\mathtt{DS}_{kq}[n] =\mathbb{E}\{\sqrt{\mu_{q}}\mathbf{h}_{kq}^{H}[n]\mathbf{w}_{kq}\}\] (11) \[\mathtt{INT}_{i}[n] =\mathbb{E}\bigg{\{}\bigg{|}\sum_{q=1}^{Q}\sqrt{\mu_{q}}\mathbf{ h}_{kq}^{H}[n]\mathbf{w}_{iq}\bigg{|}^{2}\bigg{\}}, \tag{12}\] where (10) is the effective SINR at the \(k\)-th UE and at time instant \(n\), (11) corresponds to the desired signal terms while and (12) corresponds to the inter-user interference terms. Considering Maximum Ratio Transmission (MRT) precoding, we have \(\mathbf{w}_{kq}=\hat{\mathbf{h}}_{kq}[\lambda],\ \forall k,\forall q\). Note that we consider the impact of imperfect CSI in the precoding vectors. ## V Numerical Results In this section, we present Monte Carlo simulation results that illustrate the trade-off between macro-diversity and beamforming gains on DmMIMO. We fix the total number of antenna elements \(M\) and the total downlink power \(P\), and then we evaluate the trade-off between \(Q\) and \(S\), respectively the number of APs and antenna elements on each AP. 
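Before turning to the results, the sketch below illustrates how the UatF terms (11)-(12) and the bound (9) can be estimated by Monte Carlo averaging over channel realizations. The i.i.d. Rayleigh channels, perfect-CSI MRT precoders, and all parameter values here are placeholders and do not correspond to the simulation setup of Table I.

```python
import numpy as np

rng = np.random.default_rng(1)

def uatf_se(h_samples, w_samples, p_d, sigma2, tau_d, tau_c, k=0):
    """Monte Carlo estimate of the UatF bound, Eqs. (9)-(12), for UE k.
    h_samples, w_samples : (n_mc, K, Q, S) true channels and precoders
    (e.g. MRT, w = h_hat); per-AP normalization mu_q is applied."""
    n_mc, K, Q, S = h_samples.shape
    mu = 1.0 / np.mean(np.sum(np.abs(w_samples) ** 2, axis=(1, 3)), axis=0)   # 1/mu_q = sum_i E{||w_iq||^2}
    # g[n, i] = sum_q sqrt(mu_q) h_kq^H w_iq for sample n
    g = np.einsum('q,nqs,niqs->ni', np.sqrt(mu),
                  np.conj(h_samples[:, k]), w_samples)
    DS = np.mean(g[:, k])                     # Eq. (11), summed over q
    INT = np.mean(np.abs(g) ** 2, axis=0)     # Eq. (12), one entry per UE i
    gamma = p_d * np.abs(DS) ** 2 / (p_d * INT.sum() - p_d * np.abs(DS) ** 2 + sigma2)
    return (tau_d / tau_c) * np.log2(1.0 + gamma)   # Eq. (9) with time-invariant gamma

# Toy setup: i.i.d. Rayleigh channels, perfect-CSI MRT (w = h), tau_c=200, tau_p=10.
n_mc, K, Q, S = 2000, 4, 4, 2
h = (rng.standard_normal((n_mc, K, Q, S)) + 1j * rng.standard_normal((n_mc, K, Q, S))) / np.sqrt(2)
print(f"UatF SE of UE 0 ~ {uatf_se(h, h.copy(), p_d=1.0, sigma2=0.1, tau_d=189, tau_c=200):.2f} bit/s/Hz")
```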
The considered simulation parameters are listed in Table I. Given the numbers of APs \(Q\), antennas per AP \(S\), and UEs \(K\), we generate 50 network realizations, which consist of different sets of uniformly distributed positions for the UEs. Then, for each network realization, we generate 1000 channel realizations. The SE expressions are averaged over all the network and channel realizations. Then, the mean per-user achievable SE is obtained by averaging over the achievable SEs of all the \(K\) UEs. The wrap-around technique is utilized to avoid the network border effect [17]. Herein, all the numerical results were obtained adopting the LMMSE channel estimator. We present the mean per-user achievable SE for \(M=64\) in Fig. 2 and for \(M=128\) in Fig. 3, for different number of APs \(Q\), and also for different coverage areas \(l^{2}\). The results are a function of the number of APs in Fig. 1(a) and Fig. 2(a) and the density of APs \(Q/l^{2}\) in Fig. 1(b) and Fig.2(b). Note the optimal number of APs is a function of the dimensions of the coverage area. For both the cases of \(M=64\) and \(M=128\), we observe that if the coverage area is relatively small (\(l=125\) m or \(l=250\) m), the best performance is achieved for the case of \(Q=4\) APs. This shows that transitioning from a CmMIMO setup with \(Q=1\) to a DmMIMO setup with \(Q>1\) provides some macro-diversity gain that significantly improves the system's performance. Nevertheless, since the distances between the UEs and APs tend to be small, there are very high probabilities for the existence of LoS components between the UEs and APs; thus, the path loss is not severe. In this situation, the performance is also greatly improved by having some level of beamforming gain, i.e., having few APs equipped with multiple antenna elements is more advantageous than having many APs equipped with few or single antennas. In other words, for small coverage areas, there is a balance between the macro-diversity gain obtained from the spatial distribution of APs and the beamforming gain by having multiple antennas on each AP that guarantees the best performance. On the other hand, as the coverage area's size grows, the macro-diversity gains obtained become increasingly more important since the distances between UEs and APs are also increased. As a consequence, the probability of the existence of an LoS component between the UEs and APs becomes very low, and the path loss severely affects the performance. Distributing the antenna elements more sparsely on the coverage area increases the probability that a given UE is closer to at least one AP, thus increasing the probability of LoS. This is illustrated by the fact that the best performance for the cases of \(l=500\) m and \(l=1\) km is achieved with \(Q=16\) and \(Q=64\), respectively. For the case of \(Q=1\), increasing \(l\) decreases the mean achievable per-user SEs, as expected, since the distances between the UEs and the single AP are also increased and the path losses are higher. Interestingly, for \(Q=64\), increasing the size of the coverage area increases the performance. This happens because, as we increase the distances between the UEs, we reduce the levels of inter-user interference seen by each UE. The case of \(Q=64\) and \(l=1\) km could be Fig. 2: Mean per-user achievable SE versus the number of APs (a) and density of APs (b) for \(M=64\). 
\begin{table} \begin{tabular}{l c c} \hline \hline **Parameter** & **Symbol** & **Value** \\ \hline Number of UEs & \(K\) & 20 \\ Number of APs & \(Q\) & {1,4,16,64} \\ Number of antennas on each AP & \(S\) & {64,16,4,1} \\ Total number of antenna elements & \(M\) & {64,128} \\ Length of the side of the square area & \(l\) & {125,250,500,1000} m \\ Signal bandwidth & \(B\) & 20 MHz \\ Coherence bandwidth & \(B_{c}\) & 100 kHz \\ Coherence time & \(T_{c}\) & 2 ms \\ Carrier frequency & \(f_{c}\) & 2 GHz \\ Sample time interval & \(T_{s}\) & \(10\)\(\mu\)s \\ Uplink transmit power & \(p_{k}\) & 23 dBm \\ Downlink total transmit power & \(P\) & 49.03 dBm \\ Noise figure at the UEs & \(N_{F}\) & 9 dB \\ Noise power & \(\sigma^{2}\) & \(-92\) dBm \\ Coherence block & \(\tau_{c}\) & 200 samples \\ Length of the pilot sequence & \(\tau_{p}\) & 10 samples \\ Height of the APs & \(h_{\text{AP}}\) & 12.5 m \\ Height of the UEs & \(h_{\text{UE}}\) & 1.5 m \\ \hline \hline \end{tabular} \end{table} TABLE I: Simulation parameters [6, 13, 18]. interpreted as a small cell deployment [19] since each UE tends to be close to only one AP and far away from the others. Overall, given a total number of antenna elements \(M\) and a total downlink power \(P\), setups with few or a single AP equipped with multiple antenna elements present the best performance in small coverage areas. On the other hand, in bigger coverage areas, the UEs tend to be more sparsely distributed, and consequently, the path losses are more severe. Thus, macro-diversity gains become more beneficial than the beamforming gains, and having more APs equipped with few or single antenna elements is more advantageous. The curves in Fig. (b)b and Fig. (b)b show that, for different values of \(l\), the mean per-user achievable SE as a function of the density of APs \(Q/l^{2}\) is a concave function. For both the cases of \(M=64\) and \(M=128\), the optimal density of \(Q/l^{2}\) is approximately 100 APs/km\({}^{2}\). ## VI Conclusions In this work, we studied the trade-off between beamforming and macro-diversity gains on DmMIMO. Fixing the total number of antenna elements and the total downlink transmit power, we found that the "sweet spot" on the number of APs and antenna elements per AP depends on the physical dimensions of the coverage area. If the UEs tend to be closer to the APs, beamforming gains provide more performance improvements than macro-diversity gains, i.e., it is better to have fewer APs equipped with multiple antennas. Conversely, if the distances between UEs and APs become longer, having more APs equipped with few or single antennas becomes advantageous since it increases the probability that a given UE is very close to at least one AP, thus having a high probability of LoS. Our numerical results show that DmMIMO networks that employ a moderate density of APs (on the order of 100 APs/km\({}^{2}\)), with each AP equipped with multiple antenna elements, are the best option. Considering also that each AP requires a fronthaul connection to the CPU, fully distributed setups with a very high density of APs, i.e., setups that employ a massive number of single-antenna APs, may not be an economically viable option. Besides presenting worst performance in terms of mean per-user average SE, they also present very high deployment and maintenance costs. 
The results presented in this work can help network operators plan and deploy DmMIMO networks that simultaneously present reasonable deployment and maintenance costs and guarantee the best performance for the users. ## Acknowledgment This research has been financially supported by Nokia Bell Labs, Academy of Finland, 6Genesis Flagship (grant no. 318937), European Union's Horizon 2020 research and innovation programme (EU-H2020), Hexa-X-I (grant no. 101015956) and Hexa-X-II (grant no. 101095759) projects and CNPq (Brazil).
Industry and academia have been working towards the evolution from Centralized massive MIMO (CmMIMO) to Distributed mMIMO (DmMIMO) architectures. Instead of splitting the coverage area into many cells, each served by a single Base Station equipped with several antennas, the whole coverage area is jointly covered by several Access Points (APs) equipped with few or single antennas. However, when choosing between deploying many APs with few antennas or fewer APs equipped with many antennas, there is an inherent trade-off between the beamforming and macro-diversity gains that has not been investigated in the literature. Given a total number of antenna elements and a total downlink power, and under a channel model that accounts for the distance between the User Equipments (UEs) and APs, our numerical results show that there exists a "sweet spot" in the optimal number of APs and of antenna elements per AP, which is a function of the physical dimensions of the coverage area.
2309.16381
Nek5000/RS Performance on Advanced GPU Architectures
We demonstrate NekRS performance results on various advanced GPU architectures. NekRS is a GPU-accelerated version of Nek5000 that targets high performance on exascale platforms. It is being developed in DOE's Center of Efficient Exascale Discretizations, which is one of the co-design centers under the Exascale Computing Project. In this paper, we consider Frontier, Crusher, Spock, Polaris, Perlmutter, ThetaGPU, and Summit. Simulations are performed with 17x17 rod-bundle geometries from small modular reactor applications. We discuss strong-scaling performance and analysis.
Misun Min, Yu-Hsiang Lan, Paul Fischer, Thilina Rathnayake, John Holmen
2023-09-28T12:29:02
http://arxiv.org/abs/2309.16381v1
# Nek5000/RS Performance on Advanced GPU Architectures

###### Abstract
We demonstrate NekRS performance results on various advanced GPU architectures. NekRS is a GPU-accelerated version of Nek5000 that targets high performance on exascale platforms. It is being developed in DOE's Center for Efficient Exascale Discretizations, which is one of the co-design centers under the Exascale Computing Project. In this paper, we consider Frontier, Crusher, Spock, Polaris, Perlmutter, ThetaGPU, and Summit. Simulations are performed with 17\(\times\)17 rod-bundle geometries from small modular reactor applications. We discuss strong-scaling performance and analysis.

Nek5000/RS, exascale, strong scaling, small modular reactor, rod-bundle

## 1 Introduction

As part of its Exascale Computing Project, the U.S. Department of Energy leadership computing facilities deploy platforms capable of reaching \(\approx 10^{3}\)-\(10^{4}\) nodes, each equipped with powerful CPUs and anywhere from 4 to 8 accelerators (i.e., GPUs), which provide the bulk of the compute power. For reasons of efficiency, a favored programming model for these architectures is to assign a single process (i.e., MPI rank) to each GPU (or GPU processing unit, such as a GCD on the AMD MI250X or a tile on the Intel PVC) and execute across the GPUs using a private distributed-memory programming model. With \(P=10^{3}\)-\(10^{5}\) MPI ranks, this approach affords a significant amount of internode parallelism and contention-free bandwidth with no increase in memory-access latency, save for the relatively sparse internode communication that is handled by MPI. In this paper we describe performance results for the open source thermal-fluids simulation code Nek5000/RS Fischer et al. (2008, 2021, 2022) on several of DOE's recently installed high-performance computing (HPC) platforms. NekRS is a GPU-oriented version of Nek5000 that was developed under DOE's Center for Efficient Exascale Discretizations (CEED). For portability, all the GPU kernels are written in OCCA Medina et al. (2014); OCCA (2021). Many of the high-performance kernels came out of the libParanumal library Chalmers et al. (2020).

### Performance Metrics

Of great interest to computational scientists is the speed that can be realized for a particular application for a given architecture. (Here, an _application_ is a particular _problem_ that uses Nek5000/RS, which is an _application code_.) For example, one frequently needs to estimate the number of node-hours and number of wall-clock hours that might be required for a large simulation campaign. A common metric, which is very much case-specific, is the number of degrees of freedom (dofs) per second that can be realized on a platform, or perhaps the number of dofs per second per accelerator (i.e., per MPI rank).1 In the sequel, we will assign MDOFS to the quantity "millions of dofs per second per rank." The case-specificity aspect of MDOFS is that one can realize a much larger MDOFS value for, say, linear solution of \(A\underline{x}=\underline{b}\) than would be possible for an incompressible Navier-Stokes solver.

Footnote 1: We prefer dofs per second per rank because AMD's MI250X has two compute units (GCDs) per GPU and Aurora's Intel has two tiles per PVC; users view these as two processors.

Despite the large variance in MDOFS from one problem class to the next, it is nonetheless a worthy metric when making platform-to-platform comparisons.
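A minimal helper for the MDOFS metric just defined is shown below; the numbers in the example are illustrative placeholders, not measured values, and the 4n dof count simply follows the three-velocity-plus-pressure convention used later in this paper.

```python
def mdofs(n_dofs, t_step_s, n_ranks):
    """Millions of degrees of freedom per second per rank (MDOFS)."""
    return n_dofs / (t_step_s * n_ranks) / 1.0e6

# Illustrative only: 1.6e9 grid points, 4 fields, 0.1 s per step on 1000 ranks.
print(f"{mdofs(4 * 1.6e9, 0.1, 1000):.1f} MDOFS")
```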
A related metric is the time-to-solution or, in the case of a timestepping simulation code that could take an arbitrarily large number of steps, the time per step, \(t_{step}\),2 which we measure in seconds. Even for a given code and architecture this latter quantity is subject to significant variability because some problems or computational meshes are more ill-conditioned than others, which leads to higher iteration counts in the linear solvers (e.g., in the pressure solve for an incompressible flow simulation) and hence a longer time per step.

Footnote 2: We typically take the average time over several hundreds or thousands of timesteps.

MDOFS and \(t_{step}\) are dependent, or output, parameters. For a fixed platform, code, and problem, users still have two independent parameters at their disposal: \(n\), the problem size or number of dofs,3 and \(P\), the number of ranks (here, accelerator devices) to use. A user who is contemplating a simulation campaign will often be interested in predicting performance over a range of \(n\). Under the given conditions, we will see that the most important quantity in predicting performance is \(n/P\), the number of grid points per rank.

Figure 1: Full-core configuration on the left and a single 17\(\times\)17 rod bundle on the right.

Footnote 3: The \(n\)-th column of the table is the number of grid points per device, which is the number of grid points per

The first analysis question we address is, _For a fixed problem size \(n\), how many ranks can we use before \(\eta_{P}<0.8\)?_ An accompanying question is, _What is the time per step at that value?_ The key to this first question is that parallel efficiency typically drops as the local amount of work, \(n/P\), tends towards zero. So, fixing \(\eta_{P}=0.8\) implies \(n/P\) is a fixed value (for a given fixed-sized problem with \(P\) varying). We denote this value as \(n_{0.8}\). It is the number of points per rank where the application realizes \(80\%\) efficiency, which is where we anticipate that users will typically run. With this definition, we can address the question of the expected \(t_{step}\) under these conditions. Assume that a given problem requires a certain amount of work, \(W\), that is measured in total number of floating-point operations (FLOPS). Usually, \(W\sim Cn\), where \(C\) is an \(n\)-independent constant, which implies that, to leading order, the amount of work scales with \(n\). On \(P\) processors, we therefore can expect that
\[t_{step} = \frac{W}{S_{P}} = \frac{C\,n}{\eta_{P}\,P\,S_{1}}, \tag{4}\]
where \(S_{P}\) is the realized speed (dofs per second) on \(P\) ranks, \(S_{1}\) is the corresponding single-rank speed, and \(\eta_{P}=S_{P}/(P\,S_{1})\) is the parallel efficiency. We define \(t_{0.8}\) to be the value of \(t_{step}\) at \(80\%\) efficiency,
\[t_{0.8} = \frac{C}{0.8}\frac{n/P}{S_{1}} = \left(\frac{C}{0.8}\right)\frac{n_{0.8}}{S_{1}}. \tag{5}\]
We note that (4)-(5) are predicated on \(\eta_{P}\) being strongly dependent on \((n/P)\) with no direct \(P\) dependence. There are times when there is a weak \(P\) dependence, particularly for \(P>10^{4}\). In this case, one can simply modify the analysis to have a \(P\)-dependent \(n_{0.8}\). We see from (5) that the time per step is governed by the speed on a single rank, \(S_{1}\) (larger is better), and the amount of work on a single rank, \(n_{0.8}\) (smaller is better), where \(80\%\) efficiency is realized. If a new platform comes out with a \(2\times\) increase in \(S_{1}\) but a \(4\times\) increase in \(n_{0.8}\), then the net time-to-solution _increases by \(2\times\)_.
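The closing observation can be checked directly from Eqs. (4)-(5); in the sketch below the constants C, S_1, and n_0.8 are arbitrary illustrative values.

```python
def t_step(n, P, C, S1, eta):
    """Time per step from Eq. (4): t = C*n / (eta_P * P * S_1)."""
    return C * n / (eta * P * S1)

def t08(n08, C, S1):
    """Time per step at 80% parallel efficiency, Eq. (5)."""
    return (C / 0.8) * n08 / S1

# A hypothetical new platform with 2x S_1 but 4x n_0.8:
C, S1, n08 = 1.0, 1.0e9, 2.0e6
print(t08(4 * n08, C, 2 * S1) / t08(n08, C, S1))   # -> 2.0: time-to-solution doubles
```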
In HPC, it is the _ratio_, \(n_{0.8}/S_{1}\), that is critical to fast time-to-solution. Much of this analysis can be found in Fischer et al. (2015, 2020). Communication overhead on GPU-based architectures is discussed in Bienz et al. (2020). A typical use case for (5) is that a user knows \(n\), which is the number of gridpoints required to resolve a given simulation, and wants to know how many processors will be required to efficiently solver this problem and how long it will take to execute. The user also knows \(n_{0.8}\) from scaling studies of the type provided here. From that, the user can determine \[P_{0.8} = \frac{n}{n_{0.8}}, \tag{6}\] which is the maximum number of ranks that can be employed while sustaining \(80\%\) efficiency. The time per step will be \(t_{0.8}\), and the total required node-hours will be \[\text{node hours} \approx \frac{P_{0.8}}{\text{ranks-per-node}}\;\times\;\frac{N_{steps}\,t _{0.8}}{3600\text{ s/hour}}, \tag{7}\] where \(N_{steps}\) is the estimated number of timesteps. ### Test Cases In the following sections we characterize these relevant parameters for NekRS across several of DOE's pre-exascale and exascale platforms, including Frontier, Crusher, Spock, Polaris, Perlmutter, ThetaGPU, and Summit. Simulations are performed using ExaSMR's 17\(\times\)17 rod-bundle geometry, illustrated in Fig. 1. This geometry is periodic in the axial (vertical) flow direction, which allows us to weak-scale the problem by adding more layers of elements in the \(z\) direction. (The model problem is essentially homogeneous in \(z\).) Each case starts with a pseudo-turbulent initial condition so that the iterative solvers, which compute only the change in the solution on each step, are not working on void solutions. Most of the cases are run under precisely the same conditions of timestep size, iteration tolerances, and averaging procedures, which are provided case by case in the sequel. We remark that the following performance summaries are for full Navier-Stokes solution times. We present a few plots that reflect work in salient kernels, such as the advection operator, which is largely communication-free, and the pressure-Poisson coarse-grid solve, which is highly communication-intensive. Detailed kernel-by-kernel breakdowns are presented in Fischer et al. (2021); Min et al. (2022) and are available in every logfile generated by NekRS. Further, we note that NekRS supports multiple versions of all of its high-intensity kernels and communication utilities. These are determined at runtime by running a small suite of tests for each invocation of the given utility. Thus, the performance is optimized under the loading conditions of that particular kernel for the particular platform for the particular application. An example of these outputs, along with the kernel-by-kernel breakdown, is presented in Section 5. ## 2 NEkRS Performance on a Single GPU Table 1 demonstrates NekRS performance for ExaSMR's single-rod simulation on a single GPU. Simulations are performed for 500 steps; and the average time per step, \(t_{\text{step}}\), is measured in seconds for the last 400 steps. For a given system, the speedup is the inverse ratio of \(t_{\text{step}}\) to that of Summit. \(v_{i}\) and \(p_{i}\) represent the average iteration counts per step of the velocity components and pressure. Timestepping is based on Nek5000's second-order characteristics method with one substep Maday et al. (1990); Patel et al. (2019), and the timestep size is \(\Delta t=1.2\)e-3 (CFL=1.82). 
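Returning to the campaign-sizing estimates of Eqs. (6)-(7), a small sketch with illustrative inputs follows. Only the 1.6B-point rod-bundle size, the 8 ranks per node, and the \(n_{0.8}\approx 3\)M value quoted later for Frontier come from this paper; the per-step time and step count are hypothetical.

```python
import math

def campaign_estimate(n, n08, t08_s, n_steps, ranks_per_node):
    """P_0.8 from Eq. (6) and node-hours from Eq. (7)."""
    P08 = n / n08
    node_hours = (P08 / ranks_per_node) * (n_steps * t08_s / 3600.0)
    return math.ceil(P08), node_hours

# Hypothetical campaign: 1.6e9 grid points, n_0.8 ~ 3e6 points/rank,
# a guessed 0.25 s/step at 80% efficiency, 100,000 steps, 8 GCDs per node.
P08, nh = campaign_estimate(1.6e9, 3.0e6, 0.25, 100_000, 8)
print(f"P_0.8 ~ {P08} ranks, ~{nh:.0f} node-hours")
```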
Pressure preconditioning is based on \(p\)-multigrid with Chebyshev and additive Schwarz method (CHEBYSHEV+ASM) smoothing and hypre AMG (algebraic multigrid) for the coarse-grid solve Fischer et al. (2021). Tolerances for pressure and velocity are 1e-4 and 1e-6, respectively. We note that this test case has been explored in the context of NekRS kernel and algorithm development on other architectures in earlier work, including Kolev et al. (2021); Abdelfattah et al. (2021); Fischer et al. (2022). The single-device results of Table 1 show that, for the current version of NekRS,4 a single GCD of the MI250X on Crusher realizes a 1.32\(\times\) gain in Navier-Stokes solution performance over a single V100 on Summit. Similarly, the A100s are realizing \(\approx\) 1.6-fold gain over the V100. Footnote 4: NekRS version 22.0 is used. ## 3 NEkrs Performance on Frontier vs. Crusher On Frontier, \(\mathtt{rcom}/5.1.0\) and \(\mathtt{cray}\)-\(\mathtt{mpich}/8.1.17\) were used. On Crusher, simulations were performed with variation of versions such as \(\mathtt{rcom}/5.1.0\), \(\mathtt{rcom}/5.2.0\), \(\mathtt{cray}\)-\(\mathtt{mpich}/8.1.16\), and cray-mpich/8.1.19. On Crusher, rocm/5.1.0 is 2%-5% faster than rocm/5.2.0. We observe that the performance on Frontier is better than that on Crusher. We consider ExaSMR's 17\(\times\)17 rod-bundle geometry and extend the domain in streamwise direction with 10, 17, and 170 layers, keeping the mesh density the same, which correspond to 277 thousand spectral elements of order \(N=7\), for a total of \(n=.27\)M \(\times\)7\({}^{3}=95\)M grid points, 471 thousand spectral elements of order \(N=7\), for a total of \(n=.47\)M \(\times\)7\({}^{3}=161\)M grid points, and 4.7 million spectral elements of order \(N=7\), for a total of \(n=4.7\)M \(\times\)7\({}^{3}=1.6\)B grid points, respectively. Table 2 summarizes the configuration of the testing cases. Figure 2 compares the scaling performance of Frontier with that of Crusher. Simulations are performed for 2,000 steps; and the average time-per-step, \(t_{\text{step}}\), is measured in seconds for the last 1,000 steps. The third-order backward-difference formula (BDF3) combined with the third-order extrapolation (EXT3) Deville et al. (2002) is used for timestepping. and the timestep size is \(\Delta t=3.0\)e-04 (CFL=0.82). Figure 2, left, shows the classic strong scaling for the problem sizes of \(n\)= 95M, 161M, and 1.6B, demonstrating the average time per step vs. the number of MPI ranks, \(P\). We run a single MPI rank per GCD, and there are 8 GCDs per node. The dashed lines in sky-blue color represent ideal strong-scale profiles for each case. The solid lines in red are for Frontier, and the solid lines in black are for Crusher. We observe that Frontier is consistently slightly faster than Crusher for these three problem sizes. For larger problem sizes and processor counts, the Frontier advantage is increased. Figure 2, right, shows the average time per step vs. the number of points per MPI rank, \(n/P\), where \(n\) is the total number of grid points. \(t_{\text{step}}\) based on \(n/P\) is independent of the problem size, \(n\). This is the metric illustrating that the strong-scaling performance is primarily a function of \((n/P)\) and only weakly dependent on \(n\) or \(P\) individually, which is in accord with the extensive studies presented in Fischer et al. (2020). 
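For reference, the problem sizes of the three meshes follow directly from the element counts and the polynomial order, using the convention \(n=E\times N^{3}\) stated above:

```python
def gridpoints(E, N=7):
    """Problem size as used in the text and Table 2: n = E * N^3."""
    return E * N ** 3

for E in (277_000, 470_900, 4_709_000):   # the three rod-bundle meshes
    print(f"E = {E:>9,d}  ->  n = {gridpoints(E):,d}")
```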
Based on this metric, we can determine a reasonable number value of \((n/P)\) for a given parallel efficiency and, from there, determine the number of MPI ranks required for a problem of size \(n\) to meet that expected efficiency. We provide more detailed performance behaviors depending on problem sizes in Figures 7-9. \begin{table} \begin{tabular}{|l|r|r|r|r|r|r|} \hline \multicolumn{4}{|c|}{**GPU Performance on a Single GPU**: singlero, \(E=7168\), \(n=2,458,624\), \(N=7\)} \\ \hline System & GPU & Device & API & \(v_{i}\) & \(p_{i}\) & \(t_{\text{step}}\) (sec) & Speedup \\ \hline Summit & 1 & 16GB V100 GPU & CUDA & 4 & 1 & 7.98e-02 & 1.00 \\ Spock & 1 & 32GB M100 GPU & HIP & 4 & 1 & 4.17e-02 & 0.84 \\ Crusher & 1 & 64GB M1250X (1 GCD) & HIP & 4 & 1 & 6.02e-02 & 1.32 \\ ThetaGPU & 1 & 40GB A100 GPU & CUDA & 4 & 1 & 6.78e-02 & 1.57 \\ Perlmutter & 1 & 40GB A100 GPU & CUDA & 4 & 1 & 4.16e-02 & 1.62 \\ Polaris & 1 & 40GB A100 GPU & CUDA & 4 & 1 & 4.31e-02 & 1.62 \\ \hline \end{tabular} \end{table} Table 1: NekRS performance on various architectures using a single GPU. \begin{table} \begin{tabular}{|l|r|r|r|} \hline \multicolumn{4}{|c|}{**Strong Scaling Test Sets**} \\ \hline & \(E\) & \(n\) & rank, \(P\) \\ \hline Case 1 & 277000 & 95M & 8–64 \\ Case 2 & 470900 & 161M & 14–128 \\ Case 3 & 4709000 & 1.6B & 128–16320 \\ \hline \end{tabular} \end{table} Table 2: Problem setup for strong-/weak-scaling studies. Figure 3, left and right, shows performance for a 17\(\times\)17 rod bundle with 170 layers (\(n=1.6\)B). Here we extend our discussion to other NVIDIA-based GPU architectures such as Summit (V100) at OLCF, Perlmutter (A100) at NERSC, and Polaris (A100) at ALCF, and we compare those with Frontier and Crusher. While we observe that Frontier is faster than Crusher in Figure 2, we see that Crusher is faster than Summit, but not quite as fast as the A100-based Perlmutter (NERSC) and Polaris (ALCF) platforms. We provide more detailed performance behavior as a function of \(P\) in Figures 7-9. Anomalous behavior for several of these architectures is also discussed in Section 5. Returning to the results for Frontier vs. Crusher, Figures 4-6 show detailed information for each problem size, including timings, parallel efficiency, dofs, and runtime statistics for the advection Figure 3: Strong scaling on Frontier (MI250X), Crusher (MI250X), Perlmutter (A100), Polaris (A100), and Summit (V100) for \(17\times 17\) rod bundles with 170 layers with total number of grid points of \(1.6B\). Average time per step vs. rank, \(P\) (left) and average time per step vs. \(n/P\) (right). Figure 2: Strong scaling on Frontier and Crusher for \(17\times 17\) rod bundles with 10, 17, and 170 layers with total number of grid points of \(n=95\)M, 161M, and 1.6B. Average time per step vs. rank, \(P\) (left), and average time per step vs. \(n/P\) (right). Frontier is set with (cray-mpich/8.1.17, rccm/5.1.0) and Crusher with (cray-mpich/8.1.19, rccm/5.2.0). kernel makef and for the coarse-grid-solve for the pressure Poisson problem. The bottom left plots in Figures 4-6 show that Frontier and Crusher deliver the same performance on the compute-intensive makef kernel, which evaluates the advection term for the Navier-Stokes equations. By contrast, the bottom right plots in Figures 4-6 show Crusher is a bit slower for the communication-intensive coarse-grid solve, which is part of the multigrid preconditioner for the pressure-Poisson problem. The solve is performed on the host CPUs using algebraic multigrid with Hypre. 
(These lower plots show the times for the full 2,000 steps, not the time per step.) For smaller problem sizes with \(n=\) 95M and 161M, the second row plots on the left and right in Figures 4-5 reveal that the 80% parallel efficiency is achieved with \(n/P\)= 2.2M on Frontier and \(n/P\)= 2.5M on Crusher. The same plots in Figure 6 demonstrate that Frontier realizes 80% parallel efficiency at \(n/P\approx\) 3M points per rank while Crusher does so at \(n/P\approx\) 5 million points per rank, which would normally be viewed as the strong-scale limit for NekRS simulations on these platforms. This somewhat disappointing increase in \(n_{0.8}\) implies that NekRS is not weak-scaling well on Frontier/Crusher. As noted in Fischer et al. (2015), weak-scaling performance is generally affected by the coarse-grid solve, which is one of the few terms for which the cost grows with \(P\) (in this case, as \(O(\log P)\)), instead of scaling as \(n/P\).5 Indeed, inspection of the lower right graph in Figures 5-6 shows that, for the same \(n/P=\) 1.6M, the total coarse-grid solve time (for 2,000 steps) is 1 s for \(P=1000\) and only 0.65 s for \(P=100\), which corresponds precisely to the ratio \(\log_{2}(1000)/\log_{2}(100)\). We note that Crusher suffers even more in the (host-driven) coarse-grid solve. By contrast, Crusher and Frontier have identical behavior on the communication-minimal and compute-intensive nonlinear advection evaluation, as seen in the lower left graph in Figure 6. Footnote 5: Vector reductions also exhibit \(O(\log P)\) growth, but these can be mitigated by hardware support Fischer et al. (2015). The third-row left and right plots in Figures 4-6 show GDOF per step per \(t_{\text{step}}\) per rank vs. time per step and GDOF per step per \(t_{\text{step}}\) per rank vs. \(n/P\), where GDOF is defined as the _total_ number of degrees of freedom, which is \(4n\) (three velocity components and one pressure field). Frontier consistently shows faster performance than does Crusher, particularly for the larger problem sizes (and, hence, higher processor counts), which we can primarily attribute to the slower coarse-grid solve on Crusher. Figure 6 in particular shows the performance as \(n/P\) gets as low as 0.1M, at which point the performance curve flattens and the device does not speed up further. There are two reasons for stalled performance: communication overhead and insufficient work on the device (even in the absence of communication), as shown in an earlier study for NekCEM on ORNL's Titan, which features NVIDIA K20X GPUs Otten et al. (2016) and in the single-GPU studies on the NVIDIA V100 in the CEED benchmark paper Fischer et al. (2020). Figure 4: Strong-scaling on Frontier vs. Crusher for 17\(\times\)17 rod bundle with 10 layers. Figure 5: Strong-scaling on Frontier vs. Crusher for 17\(\times\)17 rod bundle with 17 layers. Figure 6: Strong-scaling on Frontier vs. Crusher for 17\(\times\)17 rod bundle with 170 layers. ## 4 NekRS Performance on Summit, ThetaGPU, Perlmutter, Polaris, Crusher, and Frontier In this section we continue with scaling studies on the 17\(\times\)17 rod bundle simulations for the NVIDIA-based GPU platforms, Summit (V100), ThetaGPU (A100), Perlmutter (A100), and Polaris (A100), compared with the AMD MI250X platforms, Frontier and Crusher. Recall that in Figure 3 we showed the 170-layer case across the platforms of Frontier, Crusher, Summit, Perlmutter, and Polaris. Here, we discuss the performance in more detail in Figures 7-9.
Figure 7: Strong-scaling on various GPU architectures for 17\(\times\)17 rod bundle with 10 layers. Figure 8: Strong-scaling on various GPU architectures for 17\(\times\)17 rod bundle with 17 layers. Figure 9: Strong-scaling on various GPU architectures for 17\(\times\)17 rod bundle with 170 layers. In Figure 8 we include performance on ThetaGPU. We run one MPI rank per V100 or A100 on the NVIDIA-based nodes and one MPI rank per GCD on the AMD MI250X nodes. Figures 7-9 show the same metrics as for the Crusher-Frontier plots of Figures 4-6. We point out that these strong-scaling plots start from a high level of performance. NekRS currently leverages extensive tuning of several key FP64 and FP32 kernels in libParanumal, including the standard spectral element Laplacian matrix-vector product, local tensor-product solves using fast diagonalization, and dealiased evaluation of the advection operator on a finer set of quadrature points. These kernels are sustaining up to 3 TFLOPS FP64 and 5-8 TFLOPS FP32, per GPU or GCD. At the strong-scale limit, with MPI overhead, NekRS is sustaining \(\approx 1\) TFLOPS per rank (i.e., per A100 or GCD) for the full Navier-Stokes solution Merzari et al. (2023). An important figure of merit is \(n_{0.8}\), which is the value of \(n/P\) at which the simulation realizes 80% parallel efficiency. From the second row, right, we see that \(n_{0.8}=2.5\)M for Perlmutter/Crusher and 2M for Polaris (GPUDirect in green dashed lines). For Polaris without GPUDirect (magenta solid line) we find \(n_{0.8}=4.5\)M. The plot on row 3, left, indicates that a remarkably small \(t_{\text{step}}\) value of 0.015 seconds per step is realizable on Polaris, albeit at 25% efficiency. The plots on the last row, left, of Figures 7-9 show that the time in the advection update strong-scales quite well, as would be expected. The curves for the single GCD and A100 collapse to nearly the same performance while the older V100 technology of Summit is about \(1.5\times\) slower. In the absence of communication, this kernel is sustaining 3-4 TFLOPS FP64 on these newer architectures, although the graphs here do include the communication overhead. By contrast, the last row, right, shows the performance for the communication-intensive coarse-grid solve, which is performed using Hypre on the host CPUs. Here, both Crusher and Summit show relatively poor performance at small values of \(n/P\) or large values of \(P\). Also, in Figure 7, lower right, we see that Polaris without GPUDirect exhibits some level of system noise. We also observe in Figure 8 that ThetaGPU (dashed blue lines) is about 1.3-1.5\(\times\) slower than Perlmutter and Polaris (GPUDirect). From the lower left curve we see that the ThetaGPU performance is lower than the other A100 platforms even for this work-intensive operation. Moreover, in terms of parallel efficiency, it falls off faster than Polaris whether Polaris is or is not using GPUDirect communication. ## 5 Discussion In this section we discuss a variety of "anomalous" behaviors encountered in these studies. By anomalous we mean that they are adverse behaviors that either appear or disappear with software and hardware updates. One could argue that these are passing phenomena not worthy of reporting. However, users are actively using these machines, and it is important that everyone understand potential pitfalls in system performance that might directly impact their own timing studies or production runtimes.
The behaviors described here include the performance of the large-memory (32GB vs 16GB) nodes on Summit, the use of rank counts that are not a multiple of 8 on Crusher, the influence of GPU-direct communication on Polaris, and the upgrade from Slingshot 10 to Slingshot 11 on Perlmutter. ### Performance on Summit V100 16 GB vs. 32 GB Most of the 4,608 nodes on Summit have 16 GB of device memory, which limits how small one can take \(P_{0}\) in the efficiency definition (2). A few nodes, however, have 32 GB, which allows one to fit more points onto each V100. Unfortunately, as seen in Figures 7-9, the Summit 32 GB curves perform about 10% slower than their 16 GB counterparts. The last row of graphs in Figure 8 is particularly revealing: One can see from the makef plot that the V100s perform at the same rate for both the 16 GB and 32 GB nodes but that the host-based coarse-grid solve costs differ by almost 1.5\(\times\), which indicates either an excessive device-host communication overhead or some type of interhost communication slowdown. ### Performance on Crusher with Rank Dependency In our timing studies we typically do not require that each node be fully occupied. This happens, for example, if one wants to make a device-to-device comparison between Summit, with six V100s per node, and Crusher, which has 8 GCDs per node. Our initial plots of Crusher timing data appeared to be very erratic in their dependence on \(P\). Closer inspection, however, revealed that the performance for mod(\(P\),8)\(\neq\)0 was about 2\(\times\) slower than for the case of mod(\(P\),8)=0. Figures 10-12 show early timings taken on Crusher with the results sorted onto two curves corresponding to multiples and nonmultiples of 8. Unlike the Summit behavior of the preceding section, this is clearly a device issue and not a host issue, as can be seen in the last row in each of Figures 10-12. This anomaly has been resolved and has not appeared as an issue on Frontier. ### Performance on Polaris with GPUDirect vs. without GPUDirect As expected, communication overhead is more significant without GPUDirect, and consequently the _no GPUDirect_ curves on Polaris show relatively poor performance as the amount of local work, \(n/P\), is reduced. For example, in row 2, right, of Figure 11 the \(n_{0.8}\) for Polaris with GPUDirect is 2.5M, whereas it is 4.5M without GPUDirect. In the last row of the same figure we see that neither makef nor the coarse-grid solve is influenced by the presence or absence of GPUDirect. For makef, communication does not matter. The coarse-grid solve is communication intensive, but all of that originates from the host. We can also see in the results of Figure 12 that the no GPUDirect results are relatively noisy. We remark that the number of unknowns in the coarse grid on each rank is roughly equal to the number of spectral elements on that rank. For \(N=7\) and \(n/P=2\)M, we have \(E=2\)M/(\(343\)) \(\approx\) 5800 elements per rank, so the local coarse-grid problem size is 5,800. If we were running on all \(P=\)27,648 V100s of Summit, the total coarse-grid size would be \(P\times 5800=160\)M. ### Performance on Perlmutter with Slingshot 10 vs. Slingshot 11 One other discovery was a sudden change in the behavior of Perlmutter at NERSC. Polaris and Perlmutter have similar architectures, so it was expected that they would have similar performance, as is indeed evident in, for example, Figure 8. Later in our studies, however, the Perlmutter interconnect was being upgraded from Slingshot 10 (SS10) to Slingshot 11 (SS11).
The performance started to vary radically with \(P\), albeit in a highly repeatable fashion. The strong-scaling plot in Figure 13, top-left, illustrates the problem. For 48 ranks, the Navier-Stokes runtime with SS11 jumps by a factor of 3 over that with SS10 and over Polaris, which is still running SS10. Figure 10: Strong-scaling on Crusher for 17\(\times\)17 rod bundle with 10 layers. Figure 11: Strong-scaling on Crusher for 17\(\times\)17 rod bundle with 17 layers. Figure 12: Strong-scaling on Crusher for 17\(\times\)17 rod bundle with 170 layers. Figure 13: Strong-scaling on Perlmutter with Slingshot 10 and 11. These are repeatable results evidenced by several timings over a period of several weeks. Of course, the last row in the figure reveals that this anomaly is neither a GPU issue, since makef timings are essentially identical, nor a host-to-host communication issue, since the coarse-grid times are close to the same. Inspection of the NekRS logfiles shows that the increase is all focused in one section of the code, namely, the non-local Schwarz-based smoother in the \(p\)-multigrid preconditioner for the pressure Poisson problem. _That section_ of code is running \(10\times\) slower than its SS10 counterpart! Below, we show the logfile content for the two simulations with \(P=48\) (which is the slowest case).

**SS10 logfile:**
```
name  time  %  calls
setup3.82904e+01s0.381
loadKernels1.03634e+01s0.271
udfExecuteStep4.79398e-03s0.002001
elapsedStepSum6.13724e+01s0.62
solve6.12031e+01s0.61
min2.31879e-02s max5.51687e-02s flop/s3.36729e+13
makef5.59237e+00a0.09200
udfUEqnSource3.98969e-02s0.012000
udfProperties4.82886e-03s0.00201
velocitySolve1.73346e+01s0.28200
rhs2.29362e+00s0.132000
pressureSolve3.42052e+01s0.56200
rhs4.69203e+00a0.14200
preconditioner2.26178e+01s0.66247
pMGsmoother1.51609e+01s0.67980
coarsegrid5.33568e+00s0.0242470
initial guess3.18958e+00s0.09200
```
**SS11 logfile:**
```
name  time  %  calls
setup3.98696e+01s0.161
loadKernels8.86541e+00s0.221
udfExecuteStep4.79946e-03s0.002001
elapsedStepSum2.06042e+02s0.84
solve2.05867e+02s0.84
min5.50540e-02s max3.32500e-01s flop/s1.00108e+13
makef5.57575e+00a0.03200
udfUEqnSource3.99624e-02s0.01200
udfProperties4.88246e-03s0.00201
velocitySolve1.72489e+01s0.08200
rhs2.29522e+00s0.13200
pressureSolve1.7243e+02s0.87200
rhs4.48683e+00a0.03200
preconditioner1.67813e+02s0.942470
pMGsmoother1.49445e+02s0.89980
coarsegrid5.53950e+00s0.032470
initial guess3.20173e+00a0.02200
```
We also note that SS11 delivers higher bandwidth than does SS10 in all of the set-up tests done by NekRS when it is making a runtime selection of the best communication algorithm for each subproblem. We list the logfile output for those below.

**SS10 logfile:**
```
p=+device(MPI:7.37e-05s/bi-bw:91.8GB/s/rank)
p=+device(MPI:1.75e-04s/bi-bw:23.0GB/s/rank)
p=+device(MPI:1.77e-04s/bi-bw:22.7GB/s/rank)
p=+device(MPI:1.75e-04s/bi-bw:22.0GB/s/rank)
p=+device(MPI:7.29e-05s/bi-bw:55.2GB/s/rank)
p=+device(MPI:7.29e-05s/bi-bw:55.1GB/s/rank)
p=+device(MPI:5.50e-05s/bi-bw:73.1GB/s/rank)
p=+device(MPI:5.48e-05s/bi-bw:73.4GB/s/rank)
p=+device(MPI:5.37e-05s/bi-bw:96.3GB/s/rank)
p=+device(MPI:5.16e-05s/bi-bw:100.2GB/s/rank)
p=+device(MPI:4.64e-05s/bi-bw:16.3GB/s/rank)
p=+device(MPI:4.90e-05s/bi-bw:15.4GB/s/rank)
p=+device(MPI:3.84e-05s/bi-bw:33.6GB/s/rank)
p=+host(MPI:2.46e-05s/bi-bw:3.6GB/s/rank)
```

**SS11 logfile:**
```
p=+device(MPI:4.38e-05s/bi-bw:91.8GB/s/rank)
p=+device(MPI:8.45e-05s/bi-bw:47.6GB/s/rank)
p=+device(MPI:8.45e-05s/bi-bw:47.6GB/s/rank)
p=+device(MPI:8.45e-05s/bi-bw:47.6GB/s/rank)
p=+device(MPI:8.45e-05s/bi-bw:47.6GB/s/rank)
p=+device(MPI:8.54e-05s/bi-bw:46.0GB/s/rank)
p=+device(MPI:4.52e-05s/bi-bw:89.0GB/s/rank)
p=+device(MPI:4.48e-05s/bi-bw:89.0GB/s/rank)
p=+device(MPI:4.07e-05s/bi-bw:89.7GB/s/rank)
p=+device(MPI:3.97e-05s/bi-bw:101.2GB/s/rank)
p=+device(MPI:3.52e-05s/bi-bw:146.7GB/s/rank)
p=+device(MPI:3.47e-05s/bi-bw:148.8GB/s/rank)
p=+device(MPI:3.75e-05s/bi-bw:20.1GB/s/rank)
p=+device(MPI:3.58e-05s/bi-bw:21.0GB/s/rank)
p=+device(MPI:2.74e-05s/bi-bw:47.2GB/s/rank)
p=+host(MPI:1.66e-05s/bi-bw:5.4GB/s/rank)
```
For completeness, we also include the kernel performance numbers as reported in the NekRS logfiles:

**SS10 logfile:**
```
Ax: N=7 FP64 GDOF/s=13.2 GB/s=1260 GFLOPS=2184 kv0
Ax: N=7 FP64 GDOF/s=13.2 GB/s=1260 GFLOPS=2183 kv0
Ax: N=3 FP64 GDOF/s=12.6 GB/s=1913 GFLOPS=1883 kv5
Ax: N=7 FP32 GDOF/s=25.0 GB/s=1194 GFLOPS=414 kv4
Ax: N=3 FP32 GDOF/s=18.0 GB/s=1368 GFLOPS=2693 kv2
fdm: N=9 FP32 GDOF/s=44.9 GB/s= 812 GFLOPS=7452 kv4
fdm: N=5 FP32 GDOF/s=34.1 GB/s= 825 GFLOPS=4301 kv1
flop/s 3.36729e+13 (701 GFLOPS/rank)
```
In the table above, kv reflects the particular kernel version chosen out of the suite of available kernels in NekRS for the particular operation. We see that the 64-bit Ax kernels (the matrix-vector product with the Laplace operator for spectral element order \(N\)) realize \(\approx 2\) TFLOPS per device, while their 32-bit counterparts realize 3-4 TFLOPS (32-bit arithmetic is used in the preconditioner only). The fast-diagonalization method (fdm) implements local tensor-product solves of the form \(\underline{z}=S\Lambda^{-1}S^{T}\underline{T}\), where \(S=(S_{t}\otimes S_{s}\otimes S_{r})\) is the orthogonal matrix of eigenvectors for the overlapped Poisson problem in local \((r-s-t)\) coordinates in the reference element, \(\hat{\Omega}=[-1,1]^{3}\) Deville et al. (2002).
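To make the structure of this kernel concrete, the following minimal NumPy sketch mimics the fast-diagonalization data flow for a single element. It is an illustration only: the matrix \(S\) and the 1D eigenvalues are random placeholders (in NekRS they come from the 1D eigenproblem for the overlapped Poisson operator, and the production kernels are tuned device code), and the same 1D matrix is used in all three directions for brevity.

```python
import numpy as np

# Fast-diagonalization sketch for one spectral element (placeholders, not NekRS code).
m = 8                                                  # points per direction
rng = np.random.default_rng(0)
S, _ = np.linalg.qr(rng.standard_normal((m, m)))       # stand-in 1D eigenvector matrix
lam = rng.random(m) + 0.1                              # stand-in 1D eigenvalues
Lam = lam[:, None, None] + lam[None, :, None] + lam[None, None, :]

def fdm_apply(rhs):
    """Apply z = (S (x) S (x) S) Lam^{-1} (S (x) S (x) S)^T rhs, one axis at a time."""
    u = np.einsum('ai,bj,ck,abc->ijk', S, S, S, rhs, optimize=True)   # S^T along each axis
    u = u / Lam                                                       # pointwise diagonal solve
    return np.einsum('ia,jb,kc,abc->ijk', S, S, S, u, optimize=True)  # S along each axis

z = fdm_apply(rng.standard_normal((m, m, m)))
print(z.shape)  # (8, 8, 8)
```

Applied axis by axis, the local solve costs \(O(m^{4})\) per element rather than the \(O(m^{6})\) of applying a dense inverse, which is why these tensor-product kernels can reach such high throughput.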
This is a fast operation and can be seen to sustain \(>7\) TFLOPS (FP32). Note that, as expected, the kernel performance, which does not include any MPI overhead, is not dependent on the Slingshot version. NekRS also reports the total observed GFLOPS, which is seen to be 701 GFLOPS/rank for SS10 and 208 GFLOPS/rank for SS11. We note that, for this particular case, \(P=48\) and \(n=96790120\), for which \(n/P=2.01\)M, which is close to (but below) the value of \(n_{0.8}\). We can anticipate that the saturated floating-point performance for SS10 would thus be about \(701/0.8=876\) GFLOPS. At present we do not know why the \(p\)-multigrid smoother time is an order of magnitude larger on SS11 than SS10. That question is under investigation but has been hampered by downtime of Perlmutter. ## 6 Conclusions We explored the performance of a highly tuned incompressible flow code, NekRS, on current-generation HPC architectures featuring accelerator-based nodes. The principal accelerators under consideration are the NVIDIA V100, the NVIDIA A100, and the AMD MI250X, with eight GCDs per node. We found that the raw performance of a single GCD is about 85% of a single A100. We also found that the AMD-based Crusher platform had slower host-based communication than its Polaris/Perlmutter counterpart, as witnessed by the relatively poor performance of the Hypre-based coarse-grid solver, which runs on the host. This situation is significantly improved on Frontier. What is critical to end-user performance is the ratio \(n_{0.8}/P\), which governs the time-to-solution (5). Despite the pessimistic results in Fischer et al. (2020), where it appeared that GPU-based platforms might be \(3\times\) slower than ANL's IBM BG/Q, Mira, we are finding that NekRS is in fact \(3\times\) _faster_ than Nek5000 on Mira. This gain can be attributed to careful attention to reducing \(n_{0.8}\), to improved pressure preconditioners tuned specifically for accelerator-based nodes, to the use of 32-bit precision in the preconditioners, and to extensive use of overlapped communication and computation. ## Funding This material is based upon work supported by the U.S. Department of Energy, Office of Science, under contract DE-AC02-06CH11357 and by the Exascale Computing Project (17-SC-20-SC). The research used resources of the Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract DE-AC05-00OR22725. This research also used resources of the Argonne Leadership Computing Facility at Argonne National Laboratory, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357. This research also used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231.
```
We examine NekRS performance results on a variety of high-performance GPU architectures. NekRS is a GPU-accelerated version targeting high-performance exascale platforms and is under development at the DOE Center for Efficient Exascale Discretizations, one of the co-design centers of the Exascale Computing Project. In this paper we consider Frontier, Crusher, Spock, Polaris, Perlmutter, ThetaGPU, and Summit. We run simulations using the 17x17 rod-bundle geometry from small modular reactor applications, and we discuss strong-scaling performance and analysis.
```
2309.10928
Improved bounds for the zeros of the chromatic polynomial via Whitney's Broken Circuit Theorem
We prove that for any graph $G$ of maximum degree at most $\Delta$, the zeros of its chromatic polynomial $\chi_G(x)$ (in $\mathbb{C}$) lie inside the disc of radius $5.94 \Delta$ centered at $0$. This improves on the previously best known bound of approximately $6.91\Delta$. We also obtain improved bounds for graphs of high girth. We prove that for every $g$ there is a constant $K_g$ such that for any graph $G$ of maximum degree at most $\Delta$ and girth at least $g$, the zeros of its chromatic polynomial $\chi_G(x)$ lie inside the disc of radius $K_g \Delta$ centered at $0$, where $K_g$ is the solution to a certain optimization problem. In particular, $K_g < 5$ when $g \geq 5$ and $K_g < 4$ when $g \geq 25$ and $K_g$ tends to approximately $3.86$ as $g \to \infty$. Key to the proof is a classical theorem of Whitney which allows us to relate the chromatic polynomial of a graph $G$ to the generating function of so-called broken-circuit-free forests in $G$. We also establish a zero-free disc for the generating function of all forests in $G$ (aka the partition function of the arboreal gas) which may be of independent interest.
Matthew Jenssen, Viresh Patel, Guus Regts
2023-09-19T20:59:55
http://arxiv.org/abs/2309.10928v2
# Improved bounds for the zeros of the chromatic polynomial via Whitney's broken circuit theorem ###### Abstract. We prove that for any graph \(G\) of maximum degree at most \(\Delta\), the zeros of its chromatic polynomial \(\chi_{G}(x)\) (in \(\mathbb{C}\)) lie inside the disc of radius \(5.94\Delta\) centered at \(0\). This improves on the previously best known bound of approximately \(6.91\Delta\). We also obtain improved bounds for graphs of high girth. We prove that for every \(g\) there is a constant \(K_{g}\) such that for any graph \(G\) of maximum degree at most \(\Delta\) and girth at least \(g\), the zeros of its chromatic polynomial \(\chi_{G}(x)\) lie inside the disc of radius \(K_{g}\Delta\) centered at \(0\), where \(K_{g}\) is the solution to a certain optimization problem. In particular, \(K_{g}<5\) when \(g\geq 5\) and \(K_{g}<4\) when \(g\geq 25\) and \(K_{g}\) tends to approximately \(3.86\) as \(g\to\infty\). Key to the proof is a classical theorem of Whitney which allows us to relate the chromatic polynomial of a graph \(G\) to the generating function of so-called broken-circuit-free forests in \(G\). We also establish a zero-free disc for the generating function of all forests in \(G\) (aka the partition function of the arboreal gas) which may be of independent interest. **Keywords**: chromatic polynomial, forest generating function, zeros, broken circuit, maximum degree. MJ is supported by a UKRI Future Leaders Fellowship MR/W007320/1. GR was funded by the Netherlands Organisation of Scientific Research (NWO): VI.Vidi.193.068. ## 1. Introduction A seminal result of Sokal [20] states that there is a universal constant \(K\approx 7.97\) with the property that, for every graph \(G\), the complex roots of \(\chi_{G}\) lie inside the disc of radius \(K\Delta(G)\) centered at \(0\), where \(\Delta(G)\) is the maximum degree of \(G\). Borgs [1] later provided a short alternative proof of Sokal's result. The constant was subsequently improved to \(K\approx 6.91\) by Fernandez and Procacci [10], and Jackson, Procacci, and Sokal [11]. Our main contribution is an improvement on these bounds. **Theorem 1.1**.: _There exists a constant \(K\leq 5.94\) such that for every graph \(G\), \(\chi_{G}(x)\neq 0\) for all \(x\in\mathbb{C}\) with \(|x|\geq K\Delta(G)\)._ For graphs of large girth we can obtain better bounds. **Theorem 1.2**.: _For each \(g\geq 3\) there exists a constant \(K_{g}\) such that \(\lim_{g\to\infty}K_{g}\leq 3.86\) and for every graph \(G\) of girth at least \(g\), \(\chi_{G}(x)\neq 0\) for all \(x\in\mathbb{C}\) with \(|x|\geq K_{g}\Delta(G)\)._ We will bound \(K_{g}\) by estimating the optimal solution to a certain constrained optimization problem. We refer to Figure 1 below for concrete bounds on \(K_{g}\) for several values of \(g\). Note that, since there are graphs of maximum degree \(\Delta\) with no proper \(\Delta\)-colouring, we cannot take \(K<1\) in Theorem 1.1. In fact, in general, we cannot take \(K<1.599\) (see Section 9 of [10]). However, it is conjectured by Sokal (see [11, Conjecture 21], and Sokal [10, Conjecture 9.5"]) that \(\chi_{G}(x)\neq 0\) if \(\operatorname{Re}(x)>\Delta(G)\). A positive resolution of this conjecture would resolve a major open problem in the field of approximate counting: it would imply (see [10]) the existence of a fully polynomial time approximation scheme for counting proper \(q\)-colourings of a graph of maximum degree \(\Delta\) for \(q\geq\Delta+1\). We note that a mildly stronger conjecture of Sokal, [10, Conjecture 9.5"], is not true except possibly for a finite range of values of \(\Delta\) [1].
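As a quick numerical sanity check of Theorem 1.1 on a toy example (an illustration only, not part of the argument), one can recover the chromatic polynomial of a small graph by brute-force counting of proper colourings, interpolate, and confirm that all of its complex roots lie well inside the disc of radius \(5.94\Delta\). The graph below (a \(5\)-cycle with one chord) and the interpolation route are our own choices for the sketch.

```python
from itertools import product
import numpy as np

# Small test graph: a 5-cycle with one chord; maximum degree Delta = 3.
n, Delta = 5, 3
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]

def colorings(q):
    """chi_G(q) for integer q, by brute-force enumeration of q-colourings."""
    return sum(all(c[u] != c[v] for u, v in edges)
               for c in product(range(q), repeat=n))

# chi_G has degree n, so interpolating through q = 0, ..., n recovers it
# (up to negligible floating-point error at this size).
qs = np.arange(n + 1)
coeffs = np.polyfit(qs, [colorings(q) for q in qs], n)
roots = np.roots(coeffs)
print("chromatic roots:", np.round(roots, 3))
print("all inside |x| <= 5.94*Delta:", bool(np.all(np.abs(roots) <= 5.94 * Delta)))
```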
Our approach, which we detail in the next section, motivates the study of the roots of the _forest generating function_ of a graph \(G=(V,E)\) defined as \[Z_{G}(x)=\sum_{\begin{subarray}{c}F\subseteq E:\\ (V,F)\text{ is a forest}\end{subarray}}x^{|F|}\,. \tag{1.1}\] \(Z_{G}(x)\) is the partition function of the _arboreal gas_ which has been studied significantly in the statistical physics and probability literature in recent years [12, 1, 1, 1, 2, 3]. We prove the following result for the roots of the forest generating function on bounded degree graphs. **Theorem 1.3**.: _Let \(G=(V,E)\) be a graph and let \(v\in V\). Suppose that all vertices of \(G\), except possibly \(v\) have degree at most \(\Delta\in\mathbb{N}\). Then for any \(x\) such that \(|x|\leq\frac{1}{2\Delta}\), \(Z_{G}(x)\neq 0\)._ Zero-freeness results such as the one above are of interest in statistical physics as they imply the absence of phase transitions in the Lee-Yang sense [13]. It would be interesting to determine whether the \(1/2\) in Theorem 1.3 is optimal and whether there is a zero-free region containing \(0\) that contains a larger portion of the positive real axis. We note that Theorem 1.3 from [1] implies that \(1/(2\Delta)\) in the above theorem cannot be replaced by \(1/\Delta\), indeed it implies there cannot be an open zero-free region in the complex plane containing the real interval \([0,1/\Delta]\). We remark that all graphs are assumed to be simple unless stated otherwise. ### Approach Our approach is inspired by the approach taken in [10, 1, 10, 10] but differs crucially in our use of Whitney's Theorem for describing the chromatic polynomial. Using the language of statistical physics, the approach in [10, 10, 10, 11] is to relate the chromatic polynomial to the _partition function_ of a _polymer model_. A polymer model consists of three ingredients: (i) a (finite) set \(\mathcal{P}\) whose elements we call polymers; (ii) a symmetric and reflexive 'compatibility' relation \(\sim\) on \(\mathcal{P}\); and (iii) a weight function \(\omega:\mathcal{P}\to\mathbb{C}\). The partition function of this polymer model is then \[\Xi(\mathcal{P})=\sum_{\Gamma}\prod_{\gamma\in\Gamma}w(\gamma)\,,\] where the sum is over all collections \(\Gamma\) of mutually compatible polymers. The motivation for relating the chromatic polynomial to such a partition function is that one can then apply the extensive theory of polymer models from statistical physics; in particular results that constrain the location of zeros of \(\Xi(\mathcal{P})\) considered as a function of the complex weights \(w(\gamma)\). For example, Sokal [20] and Borgs [1] apply Dobrushin's classical zero-freeness criterion [1] whereas Fernandez and Procacci applied their own more recent zero-freeness criterion [14] and Jackson, Sokal and Procacci [15] used the Gruber-Kunz criterion [1]. The main difficulty in these proofs is to verify that the conditions of the zero-freeness criteria hold. In each of the aforementioned papers the set of polymers \(\mathcal{P}\) is taken to be vertex subsets of \(G\) that induce a connected subgraph of \(G\). We take a different approach by applying Whitney's classical Broken Circuit Theorem (see below) to relate the chromatic polynomial of \(G\) to the partition function of a model in which polymers are the edge sets of so-called 'broken-circuit-free' trees in \(G\). 
We moreover do not directly rely on any results from statistical physics that constrain the location of zeros of \(\Xi(\mathcal{P})\), but rather use the proofs of these results as inspiration. Indeed, inspired by the proof of [1, Proposition 3.1], we give a direct and new inductive proof for the zero-freeness of the chromatic polynomial that takes advantage of the tree structure of polymers as well as the specific structure of broken-circuit-free trees. To go into more detail we now introduce Whitney's theorem. #### 1.1.1. Whitney's Broken Circuit Theorem A classical theorem of Whitney [17] gives a combinatorial description of the coefficients of the chromatic polynomial in terms of so-called _broken-circuit-free sets_. For a graph \(G=(V,E)\) fix any ordering of \(E\). A _broken circuit_ in the graph \(G\) is obtained by taking the edges of any cycle of \(G\) and removing its largest edge (in the given ordering). A _broken-circuit-free set_ in \(G\) is any set of edges \(F\subseteq E\) that does not contain a broken circuit. Equivalently, \(F\subseteq E\) is broken-circuit free if and only if * \(F\) has no cycles (i.e. is a forest) and * for every \(e\in E\setminus F\), if \(F\cup e\) contains a cycle (which is then necessarily unique and contains \(e\)), then \(e\) is not the largest edge of that cycle. From now on, we abbreviate "broken circuit free" by BCF. Let us write \(\mathcal{F}_{G}\) for the set of all BCF sets in \(G\) (including the empty set). We define the polynomial \[F_{G}(x)=\sum_{F\in\mathcal{F}_{G}}x^{|F|}. \tag{1.2}\] While \(\mathcal{F}_{G}\) depends on the ordering of the edges, the polynomial \(F_{G}\) remarkably does not. Moreover, the polynomial \(F_{G}\) is up to a simple transformation equal to the chromatic polynomial: **Theorem 1.4** (Whitney [17]).: _If \(G\) is an \(n\)-vertex graph, then_ \[\chi_{G}(x)=x^{n}F_{G}(-1/x).\] We give a short proof of this theorem at the end of this section to make the paper self contained. To make the connection to the polymer model approach explicit, we note that \(F_{G}(x)\) is the partition function of a polymer model in which polymers are BCF trees in \(G\); two polymers are compatible if their union in \(G\) is disconnected; and the weight \(w(T)\) of a polymer \(T\) is \(x^{|T|}\) where \(|T|\) denotes the number of edges in \(T\). We note that by Whitney's theorem, showing that \(\chi_{G}(x)\neq 0\) for \(|x|\geq R\) is equivalent to showing \(F_{G}(x)\neq 0\) for \(|x|\leq 1/R\). Proving such a zero-free disc for \(F_{G}\) will be the main goal of the paper. We note that our proof also implicitly gives a zero-free disc around \(0\) for the forest generating function of \(G\), as is also the case in [1, 1, 2, 3]. However, with a different proof we can obtain a significantly better bound which is the content of Theorem 1.3. This is detailed in Section 4. We conclude this section with a proof of Whitney's theorem. In Section 2 we collect some combinatorial preliminaries. In Section 3 we prove our main results, Theorems 1.1 and 1.2. In Section 4 we prove Theorem 1.3. We end with some concluding remarks and directions of future research. Proof of Theorem 1.4.: Let \(G\) be an \(n\)-vertex graph and choose any ordering of the edges of \(G\). Our goal is to prove that \[\chi_{G}(x)=x^{n}F_{G}(-1/x). \tag{1.3}\] We now use induction on the number of edges to prove (1.3). If \(G\) has no edges, then \(x^{n}F_{G}(-1/x)=x^{n}=\chi_{G}(x)\), showing the base case. 
Next assume that \(G\) has at least one edge and fix an ordering of the edges of \(G\). Let \(e\) be the smallest edge in the ordering. Next let \(G/e\) denote the simple graph obtained by contracting the edge \(e\) where if multiple edges arise we keep only the edge that is largest in the ordering. The ordering of the edges of \(G/e\) is taken to be the one induced by the ordering on \(G\). Let \(\mathcal{F}_{G,e}\) denote the BCF sets of \(G\) that contain the edge \(e\). Note that since \(e\) is the smallest edge of \(G\), \(\mathcal{F}_{G,e}\) corresponds bijectively with \(\mathcal{F}_{G/e}\) via \(F\mapsto F\setminus e\). Indeed, if \(e\) is contained in a triangle only the largest of the other two edges in that triangle can be contained in a BCF set that contains \(e\). Therefore \[x^{n}F_{G}(-1/x) =\sum_{F\in\mathcal{F}_{G,e}}(-1)^{|F|}x^{n-|F|}+\sum_{F\in \mathcal{F}_{G\setminus e}}(-1)^{|F|}x^{n-|F|}\] \[=-\sum_{F\in\mathcal{F}_{G,e}}(-1)^{|F|-1}x^{(n-1)-(|F|-1)}+\sum_ {F\in\mathcal{F}_{G\setminus e}}(-1)^{|F|}x^{n-|F|}\] \[=-\sum_{F\in\mathcal{F}_{G/e}}(-1)^{|F|}x^{(n-1)-|F|}+\sum_{F\in \mathcal{F}_{G\setminus e}}(-1)^{|F|}x^{n-|F|}\] \[=-x^{n-1}F_{G/e}(-1/x)+x^{n}F_{G\setminus e}(-1/x)=-\chi_{G/e}(x )+\chi_{G\setminus e}(x)=\chi_{G}(x),\] where the penultimate equality follows by induction, since both \(G/e\) and \(G\setminus e\) have fewer edges than \(G\) and the last equality is by the well-known deletion contraction formula for the chromatic polynomial. This completes the proof. ## 2. Preliminaries In this section we collect some combinatorial preliminaries that will be used in the proof of Theorems 1.1 and 1.2. ### Tree generating functions An acyclic connected graph is called a tree. For a tree \(T\), we abuse notation by using \(T\) to refer both to the graph and its edge set. For a graph \(G=(V,E)\), a vertex \(v\in V\), and a variable \(x\), we define the _rooted tree generating function_ by \[T_{G,v}(x):=\sum_{\begin{subarray}{c}T\subseteq E(G)\text{ a tree,}\\ v\in V(T)\end{subarray}}x^{|T|}.\] The following lemma shows how to bound the rooted tree generating function; it appears somewhat implicitly in [1]. The proof we give is taken from [10] and we include it for completeness. **Lemma 2.1**.: _Let \(G=(V,E)\) be a graph of maximum degree at most \(\Delta\geq 1\) and let \(v\in V\). Fix any \(\alpha\geq 1\), then_ \[T_{G,v}\left(\tfrac{\ln\alpha}{\alpha\Delta}\right)\leq\alpha.\] Note that for \(\alpha\geq 1\), \(\ln\alpha/(\alpha\Delta)\) takes values in the range \([0,(e\Delta)^{-1}]\) and so the lemma bounds \(T_{G,v}(x)\) when \(|x|\leq(e\Delta)^{-1}\). The example of the infinite \(\Delta\)-regular tree shows that it is not possible to give a finite uniform bound on \(T_{G,v}(x)\) for \(x\) outside this interval in general. Proof.: The proof is by induction on the number of vertices of \(G\). If \(|V|=1\), the statement is clearly true. Next assume that \(|V|\geq 2\). Given a tree \(T\) such that \(v\in V(T)\), let \(S\) be the set of neighbours of \(v\). After removing \(v\) from \(T\), the tree decomposes into the disjoint union of \(|S|\) trees, each containing a unique vertex from \(S\). Therefore, writing \(c=\frac{\ln\alpha}{\alpha\Delta}\), we have \[T_{G,v}(c)\leq\sum_{S\subseteq N_{G}(v)}c^{|S|}\prod_{s\in S}T_{G-v,s}(c),\] which by induction is bounded by \[\sum_{S\subseteq N_{G}(V)}(c\alpha)^{|S|}\leq(1+(\ln\alpha)/\Delta)^{\Delta} \leq e^{\ln\alpha}=\alpha.\] This completes the proof. Next we prove a similar result for _double-rooted trees_. 
For a graph \(G=(V,E)\), two distinct vertices \(v_{1},v_{2}\in V\), and a variable \(x\), we define the _double-rooted tree generating function_ by \[T_{G,v_{1},v_{2}}(x):=\sum_{\begin{subarray}{c}T\subseteq E(G)\text{ a tree,}\\ v_{1},v_{2}\in V(T)\end{subarray}}x^{|T|}.\] If \(v_{1}\) is disconnected from \(v_{2}\) in \(G\), then we define \(T_{G,v_{1},v_{2}}(x)\) to be the zero polynomial in \(x\). **Lemma 2.2**.: _Let \(G=(V,E)\) be a graph of maximum degree at most \(\Delta\geq 1\) and let \(v_{1},v_{2}\in V\) be distinct vertices. Fix any \(\alpha\in[1,e)\). Then_ \[T_{G,v_{1},v_{2}}\left(\tfrac{\ln\alpha}{\alpha\Delta}\right)\leq D(\alpha):= \tfrac{\alpha\ln\alpha}{\Delta(1-\ln\alpha)}.\] _Furthermore, if there is no path with \(g\) or fewer vertices between \(v_{1}\) and \(v_{2}\) in \(G\), then_ \[T_{G,v_{1},v_{2}}\left(\tfrac{\ln\alpha}{\alpha\Delta}\right)\leq(\ln\alpha)^ {g-1}D(\alpha).\] Proof.: Note that every tree in \(G\) that contains \(v_{1}\) and \(v_{2}\) can be obtained by first taking a path \(P\) from \(v_{1}\) to \(v_{2}\) and then appending trees to each vertex on the path (although this also creates objects that are not trees since the trees we attach to \(P\) may intersect \(P\) elsewhere or intersect other appended trees). Let us write \(p(i)\) for the number of paths from \(v_{1}\) to \(v_{2}\) with \(i\) vertices and note that \(p(i)\leq\Delta^{i-2}\) (in building the path we have at most \(\Delta\) choices for each vertex except the first and last). Suppose that \(T(x)\) is a uniform upper bound for \(T_{G,v}(x)\) over \(v\in V\) such that \(\Delta xT(x)<1\). Then, \[T_{G,v_{1},v_{2}}(x)\leq\sum_{i\geq 2}p(i)x^{i-1}T(x)^{i}\leq\sum_{i\geq 2} \Delta^{i-2}x^{i-1}T(x)^{i}=xT(x)^{2}(1-\Delta xT(x))^{-1}.\] From Lemma 2.1, we can take \(T\left(\tfrac{\ln\alpha}{\alpha\Delta}\right)=\alpha\), and so we see that \[T_{G,v_{1},v_{2}}\left(\tfrac{\ln\alpha}{\alpha\Delta}\right)\leq\frac{\alpha \ln\alpha}{\Delta}(1-\ln\alpha)^{-1}=D(\alpha).\] If there is no path with \(g\) or fewer vertices from \(v_{1}\) to \(v_{2}\) in \(G\) then \[T_{G,v_{1},v_{2}}(x)\leq\sum_{i\geq g+1}p(i)x^{i-1}T(x)^{i}\leq\sum_{i\geq g+1} \Delta^{i-2}x^{i-1}T(x)^{i}=(\Delta xT(x))^{g-1}\sum_{i\geq 2}\Delta^{i-2}x^{i-1}T(x )^{i}\] so that \[T_{G,v_{1},v_{2}}\left(\tfrac{\ln\alpha}{\alpha\Delta}\right)\leq(\ln\alpha)^{ g-1}D(\alpha).\] We also need the following counting lemma. **Lemma 2.3**.: _Let \(X=\{1,\ldots,k\}\) and let \(x\) be a variable. Then_ \[\sum_{S\subseteq X}\sum_{\begin{subarray}{c}(s,t)\in S\times X\\ t>s\end{subarray}}x^{|S|}=\sum_{S\subseteq X}x^{|S|}\sum_{i\in S}(k-i)={k\choose 2 }x(1+x)^{k-1}.\] Proof.: For each \(r=0,\ldots,k\), the coefficient of \(x^{r}\) is given by \[\sum_{|S|=r}\sum_{i\in S}(k-i)=\sum_{i=1}^{k}\sum_{\begin{subarray}{c}S:\ i \in S\\ |S|=r\end{subarray}}(k-i)=\sum_{i=1}^{k}{k-1\choose r-1}(k-i)={k-1\choose r-1}{ k\choose 2}=\frac{1}{2}r(k-1){k\choose r}.\] So \[\sum_{S\subseteq X}x^{|S|}\sum_{i\in S}(k-i)=\sum_{r=0}^{k}\frac{1}{2}(k-1)r{ k\choose r}x^{r}=\frac{1}{2}(k-1)x\frac{d}{dx}(1+x)^{k}={k\choose 2}x(1+x)^{k-1}.\] ### Broken-circuit-free sets In this section, we establish some notation and simple properties related to BCF sets that will be used in the proof of our main result. As before, let \(G=(V,E)\) be a graph and recall that \(\mathcal{F}_{G}\) is the set of all BCF sets of \(G\). For a set of vertices \(S\subseteq V\), we write \(\mathcal{F}_{G,S}\) for those \(F\in\mathcal{F}_{G}\) such that every non-trivial component of \((V,F)\) (i.e. 
a component that is not an isolated vertex) contains at least one vertex of \(S\). We note that \(\mathcal{F}_{G,S}\) contains the empty set of edges. So, for example, if \(S\) is just a single vertex \(u\), then \(\mathcal{F}_{G,u}\) is the set of (edge sets of) BCF trees that contain \(u\) along with the empty set of edges. We write \(\mathcal{F}_{G,S}^{*}\) for those \(F\in\mathcal{F}_{G}\) such that each non-trivial component of \(F\) contains exactly one vertex of \(S\) (so \(\mathcal{F}_{G,S}^{*}\subseteq\mathcal{F}_{G,S}\)). For two distinct vertices \(s,t\in V\), we write \(\mathcal{F}_{G,s,t}\) for those \(F\in\mathcal{F}_{G}\) that have exactly one non-trivial component and that component contains \(s\) and \(t\), i.e. \(\mathcal{F}_{G,s,t}\) is the set of (edge sets of) BCF trees that contain \(s\) and \(t\). For any element \(F\in\mathcal{F}_{G}\), we write \(V(F)\) to mean the set of non-isolated vertices of the graph \((V,F)\). The following inequality will be useful. For any positive number \(y\), \[\sum_{F\in\mathcal{F}_{G,S}}y^{|F|}\leq\prod_{s\in S}\sum_{T\in\mathcal{F}_{G, s}}y^{|T|}\leq\prod_{s\in S}T_{G,s}(y), \tag{2.1}\] which holds because \(\mathcal{F}_{G,S}\subseteq\{\cup_{s\in S}T_{s}:T_{s}\in\mathcal{F}_{G,s}\}\). For \(U\subseteq V\) we will write \(\mathcal{F}_{U}\) instead of \(\mathcal{F}_{G[U]}\) and extend this notation in the natural way to other situations. The following proposition captures the crucial property of BCF sets that we make use of in our main results. **Proposition 2.4**.: _Let \(G=(V,E)\) be a graph, let \(u\in V\), \(S\subseteq N(u)\) and set \(V^{\prime}=V-u\). Suppose \(E\) has an ordering such that the edges incident to \(u\) are largest in the order. Given a forest \(F\in\mathcal{F}_{V^{\prime},S}^{*}\), we have that \(T=F\cup\{us:s\in S\}\) is a tree and \(T\) contains a broken circuit if and only if there is some \(s\in S\) such that the non-trivial component of \(F\) containing \(s\) also contains some \(t\in N(u)\) with \(us<ut\) in the ordering of \(E\)._ Proof.: The definition of \(\mathcal{F}_{V^{\prime},S}^{*}\) implies that \(T\) is a tree with \(V(T)=V(F)\cup\{u\}\). If the non-trivial component of \(F\) containing some \(s\in S\) also contains \(t\in N(u)\) with \(us<ut\), then there is a path in \(T\) from \(u\) to \(t\) and this path is a broken circuit. Conversely, if \(T\) contains a broken circuit, let \(e=ab\in E\setminus T\) be such that \(T\cup e\) contains a cycle \(C\) in which \(e\) is the largest edge. We must have \(a,b\in V(T)\). Suppose \(a,b\in V(F)\). We cannot have that \(a\) and \(b\) belong to the same non-trivial component of \(F\) since \(F\) is BCF. We also cannot have \(a\) and \(b\) in different non-trivial components of \(F\) since the path from \(a\) to \(b\) in \(T\) then contains \(u\) so that \(e=ab\) cannot be the highest edge in the cycle \(C\). So it must be the case that \(u\) is one of the endpoints of \(e\), i.e. \(e=ut\) with \(t\in V(F)\). Let \(s\) be the unique vertex of \(S\) in the same component as \(t\) (note that \(s\neq t\) since \(e\not\in T\)). Then the broken circuit in \(T\) must be the path \(P\) in \(T\) from \(u\) to \(t\) and \(C=P\cup ut\). We know the first edge of \(P\) is \(us\), so we must have \(us<ut\). ## 3. Proof of the main results In this section we prove our main results Theorems 1.1 and 1.2. 
The results will follow from the following theorem which establishes a zero-free disc for \(F_{G}\) (defined at (1.2)) whose radius can be estimated via a constrained optimization problem. **Theorem 3.1**.: _Fix \(g\in\mathbb{N}_{\geq 3}\) (which will represent girth). For constants \(a\in[0,1)\) and \(b\in[1,e)\), define \(h(a,b):=\frac{(1-a)\ln b}{b}\) and_ \[f_{g}(a,b): =\exp\left(\frac{(1-a)\ln b}{b}\right)-1+\frac{b\cdot(\ln b)^{g-1 }}{2(1-\ln b)}.\] _For any choice of \(a\) and \(b\) satisfying \(f_{g}(a,b)\leq a\) the following holds. For any graph \(G\) of girth at least \(g\) and maximum degree at most \(\Delta\geq 1\) and any \(x\in\mathbb{C}\) satisfying \(|x|\leq h(a,b)/\Delta\), we have that \(F_{G}(x)\neq 0\) and so \(\chi_{G}(x)\neq 0\) for any \(x\in\mathbb{C}\) satisfying \(|x|\geq\Delta/h(a,b)\)._ For some fixed values of \(g\), the following table gives values of \(a\) and \(b\) for which the above inequalities hold. The first line of the table together with Theorem 3.1 proves Theorem 1.1. Proof of Theorem 3.1.: First we establish some notation and a few facts we will need. For any graph \(G=(V,E)\) and a set of vertices \(U\subseteq V\), recall that we write \(F_{U}(x)\) to mean \(F_{G[U]}(x)\). The proof is based on applying induction by using the following recursion. Given any graph \(G=(V,E)\) (with some ordering of the edges of \(G\) so that the notion of BCF is well defined) and any \(S\subseteq V\), we have \[F_{V}(x)=\sum_{F\in\mathcal{F}_{G,S}}x^{|F|}F_{V-S-V(F)}(x) \tag{3.1}\] since any \(F\in\mathcal{F}_{G}\) can be uniquely determined by first identifying those components of \(F\) that contain vertices in \(S\) and then identifying the remainder of the forest and noting that \(F\) is BCF if and only if every component of \(F\) is BCF. Now, specialising (3.1) to the case when \(S\) is a single vertex \(u\in V\), and separating the contribution to the sum from \(F=\emptyset\), we see that \[F_{V}(x)=F_{V-u}(x)+\sum_{\begin{subarray}{c}T\in\mathcal{F}_{G,u}\\ |T|\geq 1\end{subarray}}x^{|T|}F_{V-u-V(T)}(x)\,. \tag{3.2}\] Dividing through by \(F_{V-u}(x)\), we define the rational function \(R(x)\) by \[R=R(x):=\frac{F_{V}(x)}{F_{V-u}(x)}-1=\sum_{\begin{subarray}{c}T\in\mathcal{F}_{G,u}\\ |T|\geq 1\end{subarray}}x^{|T|}\frac{F_{V-u-V(T)}(x)}{F_{V-u}(x)}\,. \tag{3.3}\] Note that the sums in all three identities above depend on how we order the edges of \(G\), but that all the expressions are true for any choice of ordering. The following claim immediately implies the theorem, since \(F_{V}(x)=F_{G}(x)\). **Claim 3.2**.: _Assume \(a\in[0,1)\) and \(b\in[1,e)\) satisfy \(f_{g}(a,b)\leq a\) as in the statement of Theorem 3.1. For any graph \(G=(V,E)\) of girth at least \(g\) and maximum degree at most \(\Delta\), any vertex \(u\in V\) and any complex number \(x\) satisfying \(|x|\leq h(a,b)/\Delta\), we have_ 1. \(F_{V}(x)\neq 0\)_,_ 2. \(|R(x)|=|F_{V}(x)/F_{V-u}(x)-1|\leq a\)_._ We now prove the claim by induction on the number of vertices in \(G=(V,E)\). If \(|V|=2\) then \(F_{V}(x)\) is either \(1\) or \((1+x)\) whereas \(F_{V-u}(x)=1\) for any \(u\in V\), so that \(|R|\leq|x|\leq h(a,b)/\Delta\leq h(a,b)\leq f_{g}(a,b)\leq a\) and hence \(F_{V}(x)\neq 0\). Now assume \(|V|=n>2\) and let \(u\in V\) and write \(V^{\prime}=V-u\). It suffices to show part (ii) since it implies that \(|F_{V}(x)/F_{V-u}(x)|=|1+R(x)|\geq 1-a>0\), implying that \(F_{V}(x)\neq 0\). 
For notational convenience, we will write \(F_{U}\) instead of \(F_{U}(x)\) for \(U\subseteq V^{\prime}\) in what follows. We establish a fact, namely (3.4) below, that we will use repeatedly. For \(A=\{a_{1},\ldots,a_{k}\}\subseteq V^{\prime}\), write \(A_{i}=\{a_{1},\ldots,a_{i}\}\) and \(A_{0}=\emptyset\). Then by induction we know that for each \(i\), \[\left|\frac{F_{V^{\prime}-A_{i}}}{F_{V^{\prime}-A_{i+1}}}-1\right|=\left|\frac {F_{V^{\prime}-A_{i}}}{F_{V^{\prime}-A_{i}-a_{i+1}}}-1\right|<a\ \ \text{i.e.}\ \ \frac{1}{1+a}\leq\left|\frac{F_{V^{\prime}-A_{i+1}}}{F_{V^{\prime}-A_{i}}} \right|\leq\frac{1}{1-a}.\] Using the upper bound and the fact that \(F_{V^{\prime}-A_{i}}\neq 0\) for each \(i\) by induction, we have \[\left|\frac{F_{V^{\prime}-A}}{F_{V^{\prime}}}\right|=\prod_{i=0}^{k-1}\left| \frac{F_{V^{\prime}-A_{i+1}}}{F_{V^{\prime}-A_{i}}}\right|\leq(1-a)^{-k}=(1-a) ^{-|A|}. \tag{3.4}\] Figure 1. Values for \(a\) and \(b\) and \(g\) for which \(f_{g}(a,b)<a\) and corresponding values for \(h(a,b)^{-1}\). The rightmost column gives concrete bounds on the value of \(K_{g}\) in the statement of Theorem 1.2. For example, \(K_{5}\leq 4.87264\) and \(K_{25}\leq 3.97497\). Returning to the induction step for the claim, fix an ordering on the edges of \(G\) such that the edges incident with \(u\) are largest in the ordering. In particular, let \(s_{1},\ldots,s_{\ell}\) be the neighbours of \(u\) (so \(\ell\leq\Delta\)) and assume that \(us_{1}>us_{2}>\cdots>us_{\ell}\) are largest in the ordering and that the remaining edges are ordered arbitrarily. For convenience let us also order these vertices as \(s_{1}>s_{2}>\cdots>s_{\ell}\). Now, starting with (3.3), we can express \(R\) as follows: \[R=\sum_{\begin{subarray}{c}T\in\mathcal{F}_{V,u}\\ |T|\geq 1\end{subarray}}x^{|T|}\frac{F_{V^{\prime}-V(T)}}{F_{V^{\prime}}}=\sum_ {\begin{subarray}{c}S\subseteq N(u)\\ S\neq\emptyset\end{subarray}}x^{|S|}\sum_{\begin{subarray}{c}F\in\mathcal{F}_{ V^{\prime},S}\\ F\cup uS\text{ is BCF}\end{subarray}}x^{|F|}\frac{F_{V^{\prime}-S-V(F)}}{F_{V^{ \prime}}}\,. \tag{3.5}\] Here we write \(F\cup uS\) to mean \(F\cup\{us:s\in S\}\). The second equality holds because \(\mathcal{F}_{V,u}=\{F\cup uS:S\subseteq N_{G}(u),F\in\mathcal{F}_{V^{\prime},S }^{*}\}\) (by our choice of edge ordering). We have also used that if \(F\cup uS=T\in\mathcal{F}_{V,u}\) then \(V^{\prime}-V(T)=V^{\prime}-S-V(F)\). Next we compare the inner sum in (3.5) with the same sum but over \(F\in\mathcal{F}_{V^{\prime},S}\) (which is equal to \(1\) by (3.1)). So we have \[\left|1-\sum_{\begin{subarray}{c}F\in\mathcal{F}_{V^{\prime},S} \\ F\cup uS\text{ is BCF}\end{subarray}}x^{|F|}\frac{F_{V^{\prime}-S-V(F)}}{F_{V^{ \prime}}}\right| =\left|\sum_{F\in\mathcal{F}_{V^{\prime},S}}x^{|F|}\frac{F_{V^{ \prime}-S-V(F)}}{F_{V^{\prime}}}-\sum_{\begin{subarray}{c}F\in\mathcal{F}_{ V^{\prime},S}\\ F\cup uS\text{ is BCF}\end{subarray}}x^{|F|}\frac{F_{V^{\prime}-S-V(F)}}{F_{V^{ \prime}}}\right|\] \[=\left|\sum_{F\in\mathcal{A}\cup\mathcal{B}}x^{|F|}\frac{F_{V^{ \prime}-S-V(F)}}{F_{V^{\prime}}}\right|, \tag{3.6}\] where \(\mathcal{A}=\mathcal{F}_{V^{\prime},S}\setminus\mathcal{F}_{V^{\prime},S}^{*}\), i.e. the set of \(F\in\mathcal{F}_{V^{\prime},S}\) where some component of \(F\) contains (at least) two vertices of \(S\) and \(\mathcal{B}\) is the set of \(F\in\mathcal{F}_{V^{\prime},S}^{*}\) such that \(F\cup uS\) contains a broken circuit, i.e. 
where some component of \(F\) contains some \(s\in S\) and some \(t\in N(u)\) with \(us<ut\) (this is the only way a broken circuit can occur by Proposition 2.4). Now continuing from the expression above and applying the upper bound (3.4) we have \[\left|\sum_{F\in\mathcal{A}\cup\mathcal{B}}x^{|F|}\frac{F_{V^{ \prime}-S-V(F)}}{F_{V^{\prime}}}\right| \leq\sum_{F\in\mathcal{A}\cup\mathcal{B}}|x|^{|F|}\left|\frac{F_{V ^{\prime}-S-V(F)}}{F_{V^{\prime}}}\right| \leq\sum_{F\in\mathcal{A}\cup\mathcal{B}}|x|^{|F|}(1-a)^{-|V(F) \cup S|}\] \[\leq\sum_{F\in\mathcal{A}\cup\mathcal{B}}\left(\frac{|x|}{1-a} \right)^{|F|}(1-a)^{-|S|},\] where we have used that \(|V(F)\cup S|\leq|F|+|S|\) for all \(F\in\mathcal{A}\cup\mathcal{B}\) since \((V(F)\cup S,F)\) has at most \(|S|\) components. Note that every \(F\in\mathcal{A}\cup\mathcal{B}\) can be obtained by taking two vertices \(s\in S\) and \(t\in N(u)\) with \(t>s\), and taking \(F\) to be the union of some \(T\in\mathcal{F}_{V^{\prime},s,t}\) (defined in Section 2.2) with some \(F^{\prime}\in\mathcal{F}_{V^{\prime},S-s}\) (although this procedure can also give objects that are not in \(\mathcal{A}\cup\mathcal{B}\)). Writing \(y:=|x|/(1-a)\leq\ln b/(\Delta b)\), we can thus bound the right hand side of the expression above as \[\sum_{F\in\mathcal{A}\cup\mathcal{B}}\left(\frac{|x|}{1-a}\right)^{|F |}(1-a)^{-|S|} \leq\sum_{\begin{subarray}{c}(s,t)\in S\times N(u)\\ t>s\end{subarray}}\sum_{T\in\mathcal{F}_{V^{\prime},s,t}}\sum_{F^{\prime}\in \mathcal{F}_{V^{\prime},S-s}}y^{|T|+|F^{\prime}|}(1-a)^{-|S|}\] \[=(1-a)^{-|S|}\sum_{\begin{subarray}{c}(s,t)\in S\times N(u)\\ t>s\end{subarray}}\sum_{T\in\mathcal{F}_{V^{\prime},s,t}}y^{|T|}\sum_{F^{\prime }\in\mathcal{F}_{V^{\prime},S-s}}y^{|F^{\prime}|}\] \[\leq(1-a)^{-|S|}\sum_{\begin{subarray}{c}(s,t)\in S\times N(u)\\ t>s\end{subarray}}T_{G-u,s,t}(y)\prod_{t\in S-s}T_{G-u,t}(y)\] \[\leq\sum_{\begin{subarray}{c}(s,t)\in S\times N(u)\\ t>s\end{subarray}}\left(\frac{b}{1-a}\right)^{|S|}\frac{(\ln b)^{g-3}D(b)}{b}, \tag{3.7}\] assuming \(G\) has girth at least \(g\), where for the penultimate inequality we have used (2.1). For the last inequality, we have used Lemmas 2.1 and 2.2 as well as the fact that there is no path in \(G\) between vertices in \(S\) with \(g-2\) or fewer vertices since such a path together with \(u\) would create a cycle of length at most \(g-1\). 
Returning to (3.6), we deduce that if \(G\) has girth at least \(g\), \[\Bigg{|}\sum_{\begin{subarray}{c}F\in\mathcal{F}_{V^{\prime},S}\\ F\cup uS\text{ is }BCF\end{subarray}}x^{|F|}\frac{F_{V^{\prime}-S-V(F)}}{F_{V^{\prime}}}\Bigg{|}\leq 1+\sum_{\begin{subarray}{c}(s,t)\in S\times N(u)\\ t>s\end{subarray}}\left(\frac{b}{1-a}\right)^{|S|}\frac{(\ln b)^{g-3}D(b)}{b}, \tag{3.8}\] and so, using (3.5) and (3.8), \[|R| \leq\sum_{\begin{subarray}{c}S\subseteq N(u)\\ S\neq\emptyset\end{subarray}}|x|^{|S|}\Bigg{|}\sum_{\begin{subarray}{c}F\in\mathcal{F}_{V^{\prime},S}\\ F\cup uS\text{ is }BCF\end{subarray}}x^{|F|}\frac{F_{V^{\prime}-S-V(F)}}{F_{V^{\prime}}}\Bigg{|}\] \[\leq\sum_{\begin{subarray}{c}S\subseteq N(u)\\ S\neq\emptyset\end{subarray}}|x|^{|S|}+\sum_{\begin{subarray}{c}S\subseteq N(u)\\ S\neq\emptyset\end{subarray}}\sum_{\begin{subarray}{c}(s,t)\in S\times N(u)\\ t>s\end{subarray}}\left(\frac{|x|\cdot b}{1-a}\right)^{|S|}\frac{(\ln b)^{g-3}D(b)}{b},\] which can be further bounded, using Lemma 2.3 and \(|x|\leq h(a,b)/\Delta=(1-a)(\ln b)/(\Delta b)\), by \[(1+|x|)^{\Delta}-1+\frac{(\ln b)^{g-3}D(b)}{b}\binom{\Delta}{2}\left(\frac{\ln b}{\Delta}\right)\left(1+\frac{\ln b}{\Delta}\right)^{\Delta-1}\] \[< \exp\left(\frac{(1-a)\ln b}{b}\right)-1+\frac{b(\ln b)^{g-1}}{2(1-\ln b)}\] \[= f_{g}(a,b)\leq a,\] proving the claim and the theorem. **Remark 3.3**.: _Our proof strategy can also be used to give a compact and simple proof of the result of Fernandez and Procacci [10]. Indeed, just set \(b=1+a\) and follow the start of our proof; as one arrives at (3.5), simply bound the inner sum by \(((1+a)/(1-a))^{|S|}\) using (2.1) and Lemma 2.1. The bound then follows from minimizing \(\frac{1+a}{(1-a)\ln(1+a)}\) over \(a\in(0,1)\) and yields the constant of [10], cf. [10, Section 4.3.3]._ We end the section with a proof of Theorem 1.2. Proof of Theorem 1.2.: For \(g\geq 3\), let \[X_{g}=\left\{(a,b):f_{g}(a,b)\leq a,\,a\in[0,0.9],\,b\in[1.1,e)\right\},\] and let \[K_{g}=\min_{(a,b)\in X_{g}}\frac{1}{h(a,b)}=\min_{(a,b)\in X_{g}}\frac{b}{(1-a)\ln b}\,.\] First note that \(X_{g}\) is non-empty (e.g. by the first line of Figure 1 and the fact that \(X_{3}\subseteq X_{g}\)). Note also that the constraint \(f_{g}(a,b)\leq a\) implies that \(b\leq e-\epsilon_{g}\) for some \(\epsilon_{g}>0\). It follows that \(K_{g}\) is well-defined as the minimum of a continuous function over the non-empty compact set \(X_{g}\). Since \(X_{3}\subseteq X_{4}\subseteq\ldots\), we see that \((K_{g})\) is a monotone decreasing sequence. Since \(K_{g}\geq 0\) for all \(g\geq 3\) we conclude that \(\lim_{g\to\infty}K_{g}\) exists. Now let \[X_{\infty}=\left\{(a,b):\exp\left(\frac{(1-a)\ln b}{b}\right)-1\leq a,a\in[0,0.9],b\in[1.1,e]\right\}\,,\] and let \(K_{\infty}=\min_{(a,b)\in X_{\infty}}1/h(a,b)\) (again this is well-defined since \(X_{\infty}\) is non-empty and compact). We first find an explicit expression for \(K_{\infty}\) and then show that \(K_{\infty}=\lim_{g\to\infty}K_{g}\). The result will then follow from Theorem 3.1. Let \((a^{*},b^{*})\in X_{\infty}\) be such that \(K_{\infty}=1/h(a^{*},b^{*})\). Since \(b^{*}>1\), we must have \(a^{*}>0\) by the first constraint in the definition of \(X_{\infty}\). It follows that \[\exp\left(\frac{(1-a^{*})\ln b^{*}}{b^{*}}\right)-1=a^{*}\] else we could decrease the value of \(a^{*}\) slightly and obtain a smaller value of the objective function \(1/h\).
Rearranging, we obtain \[\frac{\ln b^{*}}{b^{*}}=\frac{\ln(1+a^{*})}{1-a^{*}}\,, \tag{3.9}\] so that \(h(a^{*},b^{*})=\ln(1+a^{*})\). Our task is therefore to choose \(a^{*}\in[0,0.9]\) maximal whilst satisfying the constraint (3.9) for some \(b^{*}\in[1.1,e]\). Since \(\ln(x)/x\) is increasing on the interval \([1.1,e]\) and \(\ln(1+x)/(1-x)\) is increasing on \([0,0.9]\), we conclude from (3.9) that \(b^{*}=e\) and \(a^{*}\) is the unique solution to \[\frac{1}{e}=\frac{\ln(1+x)}{1-x}\] i.e. \[a^{*}=eW(e^{2/e-1})-1\approx 0.295741\,,\] where \(W\) denotes the the Lambert \(W\) function1. Since \(h(a^{*},b^{*})=\ln(1+a^{*})\) we conclude that Footnote 1: For \(x>0\), \(W(x)\) is the unique solution to the equation \(We^{W}=x\). \[K_{\infty}=\frac{1}{h(a^{*},b^{*})}=\frac{1}{1+\ln\left(W(e^{2/e-1})\right)} \approx 3.85977\,.\] It remains to show that \(K_{\infty}=\lim_{g\to\infty}K_{g}\). First note that for \(g\geq 3\), \(X_{g}\subseteq X_{\infty}\) and so \(K_{g}\geq K_{\infty}\). In particular, \(\lim_{g\to\infty}K_{g}\geq K_{\infty}\). On the other hand, for any \(\epsilon\in(0,1)\), we have \((a^{*},b^{*}-\epsilon)\in X_{g}\) for \(g\) sufficiently large so that \[\lim_{g\to\infty}K_{g}\leq 1/h(a^{*},b^{*}-\epsilon)\,.\] Since \(h\) is continuous, taking \(\epsilon\to 0\), we see that \(\lim_{g\to\infty}K_{g}\leq 1/h(a^{*},b^{*})=K_{\infty}\). ## 4. The forest generating function Implicit in our proof of Theorem 3.1 above is a bound for the zeros of the forest generating function (defined at (1.1)). Explicitly, in (3.6) one can restrict to just the set \(\mathcal{A}\). With this approach we cannot obtain better bounds than the bounds we obtained for the BCF forest generating function. However, with a different argument we can prove the much stronger bound of Theorem 1.3. Before we proceed we require a lemma about a path generating function. We emphasize that in this section we have to allow multiple edges and loops and thus work with multigraphs. For a multigraph \(G\) and two of its vertices \(u,v\) we denote by \(\mathcal{P}_{G,u,v}\) the collection of all (edge sets of) paths in \(G\) from \(u\) to \(v\). **Lemma 4.1**.: _Let \(G=(V,E)\) be a multigraph and let \(v\in V\). Assume that all vertices but \(v\) have degree at most \(\Delta\in\mathbb{N}\). Then for any vertex \(u\neq v\) and any \(c\in[0,1]\),_ \[\sum_{P\in\mathcal{P}_{G,u,v}}\big{(}\tfrac{c}{\Delta}\big{)}^{|P|}\leq c.\] Proof.: The proof is by induction on the number of vertices of \(G\). If this number is two, then the statement is clearly true. Next suppose that \(G\) has three or more vertices and let \(u\in V\), \(u\neq v\). Let \(e_{1},\ldots,e_{d}\) be the edges between \(u\) and \(v\) and let \(u_{1},\ldots,u_{D}\) be the remaining neighbours of \(u\), counted with multiplicity. Note that \(d+D\leq\Delta\). Write \(x=\frac{c}{\Delta}\). Then, by induction, \[\sum_{P\in\mathcal{P}_{G,u,v}}x^{|P|} =\sum_{i=1}^{d}x+\sum_{i=1}^{D}x\sum_{P\in\mathcal{P}_{G-u,u_{i},v }}x^{|P|}\] \[\leq dx+xDc=d\tfrac{c}{\Delta}+D\tfrac{c^{2}}{\Delta}\leq c,\] as desired. We note that the lemma can be improved if \(G\) is assumed to be a simple graph, but in our proof of Theorem 1.3 below we need it in its present form. Proof of Theorem 1.3.: We start by proving an identity. 
For an edge \(e=uv\) of \(G\) it holds that \[Z_{G}(x)=Z_{G/e}(x)+\sum_{P\in\mathcal{P}_{G,u,v}}x^{|P|}Z_{G/P}(x), \tag{4.1}\] where \(G/e\) and \(G/P\) denote the multigraphs obtained from contracting the edge \(e\), respectively the path \(P\) (meaning that we keep any parallel edges/loops that arise). To prove this, note that a forest contributing to \(Z_{G}\) either contains \(u\) and \(v\) in the same component or not. The latter forests correspond bijectively to the forests of \(G/e\) via \(F\mapsto F^{\prime}=(F+e)/e\) and satisfy \(|F|=|F^{\prime}|\). The forests \(F\) for which \(u\) and \(v\) are in the same component can be uniquely decomposed into a path \(P\) from \(u\) to \(v\) and a forest \(F^{\prime\prime}\) in \(G/P\) such that \(|F|=|P|+|F^{\prime\prime}|\) and conversely a path \(P\) from \(u\) to \(v\) together with a forest \(F^{\prime\prime}\) in \(G/P\) yields a forest in \(G\) for which \(u\) and \(v\) are in the same component. This shows the desired decomposition and proves (4.1). Henceforth we assume \(|x|\leq 1/(2\Delta)\). Using (4.1) we now prove the following claim by induction on the number of edges, which directly implies the theorem. 1. For any edge \(e=uv\), where \(v\) is the vertex of potentially unbounded degree, we have \[\left|\frac{Z_{G}(x)}{Z_{G/e}(x)}-1\right|\leq 1/2,\] 2. \(Z_{G}(x)\neq 0\). If \(G\) has no edges then \(Z_{G}(x)=1\) and (i) is vacuous. Next assume that \(G\) has at least one edge and let \(v\) be the vertex of largest degree and let \(uv\in E\). Clearly (ii) follows from (i) and so it suffices to show (i). Since by induction \(Z_{G/e}(x)\neq 0\) the ratio \(\frac{Z_{G}(x)}{Z_{G/e}(x)}\) is well defined and by (4.1) it satisfies, \[\left|\frac{Z_{G}(x)}{Z_{G/e}(x)}-1\right|\leq\sum_{P\in\mathcal{P}_{G,u,v}}|x| ^{|P|}\left|\frac{Z_{G/P}(x)}{Z_{G/e}(x)}\right|. \tag{4.2}\] For a path \(P\) contributing to the sum above let \(f=vw\) be its last edge and let \(P^{*}\) be the path obtained from \(P\) by removing \(f\) from \(P\) and adding \(e\). Then, since \(G/P=G/P^{*}\) we have \(Z_{G/P}(x)=Z_{G/P^{*}}(x)\). Next let \(e_{1}=e\) and write \(P^{*}=(e_{1},u,e_{2},\ldots,e_{k})\) and let \(P^{*}_{i}=(e_{1},e_{2},\ldots,e_{i})\). We note that for each \(i\), the multigraph \(G/P^{*}_{i}\) has at most one vertex of unbounded degree, the vertex corresponding to \(v\) after the contraction, and \(G/P^{*}_{i}=(G/P^{*}_{i-1})/e_{i}\). Therefore by induction and (ii) we can write \[\frac{Z_{G/P^{*}}(x)}{Z_{G/e}(x)}=\prod_{i=1}^{k-1}\frac{Z_{G/P^{*}_{i+1}}(x)} {Z_{G/P^{*}_{i}}(x)}.\] Next by induction and (i), and the triangle inequality we have \(\left|\frac{Z_{G/P^{*}_{i}}(x)}{Z_{G/P^{*}_{i+1}}(x)}\right|\geq 1/2\) and therefore \(\left|\frac{Z_{G/P^{*}}(x)}{Z_{G/e}(x)}\right|\leq 2^{|P|-1}\). Now returning to (4.2), we obtain, since \(|x|\leq\frac{1}{2\Delta}\), \[\left|\frac{Z_{G}(x)}{Z_{G/e}(x)}-1\right|\leq 1/2\sum_{P\in\mathcal{P}_{G,u,v}}( 2|x|)^{|P|}\leq 1/2,\] by Lemma 4.1 and this finishes the proof. We note that our proof shows that Theorem 1.3 is in fact true for multigraphs. In the introduction we mentioned that the radius of the zero-free disc in Theorem 1.3 cannot be replaced by \(1/\Delta\). The multigraph consisting of two vertices connected with \(\Delta\) parallel edges has \(-1/\Delta\) as a zero of its forest generating function and so provides a more direct explanation of this fact for multigraphs. ## 5. Concluding remarks To conclude, we collect some remarks on possible extensions and further directions of research. 
Bipartite graphs.We can obtain better bounds on the chromatic zeros for bipartite graphs than the bounds we obtained for graphs of girth at least \(4\). This is because Lemma 2.2 can be improved when all paths between \(v_{1}\) and \(v_{2}\) are of even length. The resulting bound can then be plugged into (3.7). This can be used to show that if \(G\) is bipartite, then its chromatic zeros lie in a disc of radius \(K\Delta(G)\) where \(K\leq 5.05\). Small \(\Delta\).The zero-free region of Theorem 1.1 could be improved for small \(\Delta=\Delta(G)\). For example, we used bounds such as \((1+1/\Delta)^{\Delta}\leq e\) that are tight only in the large \(\Delta\) limit; moreover at various places \(\Delta\) can be replaced by \(\Delta-1\) (and sometimes even \(\Delta-2\)) with a bit more effort. We kept the simpler bounds for clarity of presentation and because these improvements do not affect the value of \(K\) in the general statement of Theorem 1.1, which was our main concern. _The double-rooted tree generating function._ Sokal [26] showed that for a graph \(G\) of maximum degree \(\Delta\) and vertex \(v\in V(G)\), \(T_{G,v}(x)\leq T_{\mathbb{T},r}(x)\) for all \(x>0\) where \(\mathbb{T}\) is the infinite \(\Delta\)-regular tree with root \(r\). This in turn implies Lemma 2.1. It is less clear what graph \(G\) and pair of vertices \(v_{1},v_{2}\in V(G)\) maximises the double-rooted tree generating function \(T_{G,v_{1},v_{2}}(x)\) for a given \(x>0\). Further understanding of this problem would likely lead to an improvement on Lemma 2.2 and in turn an improvement to the constant \(K\) in Theorem 1.1. _Real zeros._ Based on the polymer framework, Dong and Koh [1] managed to prove a bound of \(5.664\Delta\) for _real_ chromatic zeros of graphs of maximum degree at most \(\Delta\). It seems likely that with our approach this bound can be improved, but this is not automatic. In particular one would need to devise improved bounds on the (double) rooted tree generating functions evaluated at negative numbers. _Multivariate generating functions._ The reader can check that for both the forest and BCF forest generating functions our zero-freeness results extend to the multivariate setting where we equip each edge with its own variable. That is, for a graph \(G=(V,E)\) we let \(\mathbf{x}=(x_{e}:e\in E)\) and define \[Z_{G}(\mathbf{x})=\sum_{\begin{subarray}{c}F\subseteq E:\\ (V,F)\text{ is a forest}\end{subarray}}\prod_{e\in F}x_{e}\] and define \(F_{G}(\mathbf{x})\) similarly. Then, for example, one can show that \(Z_{G}(\mathbf{x})\) has no zeros in the complex polydisc \(\{\mathbf{x}:|x_{e}|\leq 1/(2\Delta)\) for all \(e\in E\}\). It is also possible to equip each tree \(T\) in \(G\) with its own variable \(x_{T}\) and consider the generating function \[\hat{Z}_{G}(\mathbf{x})=\sum_{\begin{subarray}{c}F\subseteq E:\\ (V,F)\text{ is a forest}\end{subarray}}\prod_{T\in F}x_{T}\,,\] where \(\mathbf{x}\) now denotes the vector of all the \(x_{T}\)'s and we write \(T\in F\) to mean that \(T\) is a (non-trivial) component of \(F\). We can define \(\hat{F}_{G}(\mathbf{x})\) similarly. It is however not clear how to extend our proof for the forest generating function (nor for the BCF forest generating function) in this setting. 
If possible somehow, this would be a way to leverage the result on the forest generating function to obtain results for the chromatic polynomial (since we can recover \(\hat{F}_{G}(\mathbf{x})\) from \(\hat{Z}_{G}(\mathbf{x})\) by simply setting variables \(x_{T}\) corresponding to non-BCF trees \(T\) to \(0\)). _The anti-ferromagnetic Potts model._ Sokal [26] extended his proof of the boundedness of the chromatic zeros to the partition function of the anti-ferromagnetic Potts model. It is not clear whether this is also possible with our approach and it would be interesting to see if a better bound can be obtained via our approach.
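As a numerical aside (an illustration only, playing no role in the proofs), the constants \(a^{*}\approx 0.295741\) and \(K_{\infty}\approx 3.85977\) from the proof of Theorem 1.2, as well as the constant of roughly \(6.91\) recovered in Remark 3.3, can be reproduced with SciPy's implementation of the Lambert \(W\) function:

```python
# Numerical sanity check (illustration only, not part of the proofs): the constants
# a* and K_infinity from the proof of Theorem 1.2, and the constant of Remark 3.3.
import numpy as np
from scipy.special import lambertw
from scipy.optimize import minimize_scalar

w = lambertw(np.exp(2 / np.e - 1)).real   # W(e^{2/e - 1})
a_star = np.e * w - 1                     # a* = e W(e^{2/e-1}) - 1, approx 0.295741
K_inf = 1 / (1 + np.log(w))               # K_inf = 1/(1 + ln W(e^{2/e-1})), approx 3.85977

# (a*, b* = e) should satisfy equation (3.9): ln(b)/b = ln(1+a)/(1-a).
assert abs(np.log(np.e) / np.e - np.log(1 + a_star) / (1 - a_star)) < 1e-12

# Remark 3.3: minimising (1+a)/((1-a) ln(1+a)) over a in (0,1) gives roughly 6.91.
fp = minimize_scalar(lambda a: (1 + a) / ((1 - a) * np.log(1 + a)),
                     bounds=(0.01, 0.99), method="bounded")

print(a_star, K_inf, fp.fun)
```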
We prove that for a graph $G$ of maximum degree $\Delta$, the zeros of its chromatic polynomial lie in a disc of radius $5.94\Delta$. This improves on the previously best known bound of approximately $6.91\Delta$. We also obtain improved bounds for graphs of high girth: we prove that for every $g$ there is a constant $K_g$ such that for every graph $G$ of maximum degree $\Delta$ and girth $g$, the zeros of its chromatic polynomial lie in a disc of radius $K_g \Delta$, where $K_g$ is the solution to a certain optimization problem. In particular, $K_g < 5$ when $g \geq 5$.
2309.14596
Model averaging: A shrinkage perspective
Model averaging (MA), a technique for combining estimators from a set of candidate models, has attracted increasing attention in machine learning and statistics. In the existing literature, there is an implicit understanding that MA can be viewed as a form of shrinkage estimation that draws the response vector towards the subspaces spanned by the candidate models. This paper explores this perspective by establishing connections between MA and shrinkage in a linear regression setting with multiple nested models. We first demonstrate that the optimal MA estimator is the best linear estimator with monotonically non-increasing weights in a Gaussian sequence model. The Mallows MA (MMA), which estimates weights by minimizing the Mallows' $C_p$ over the unit simplex, can be viewed as a variation of the sum of a set of positive-part Stein estimators. Indeed, the latter estimator differs from the MMA only in that its optimization of Mallows' $C_p$ is within a suitably relaxed weight set. Motivated by these connections, we develop a novel MA procedure based on a blockwise Stein estimation. The resulting Stein-type MA estimator is asymptotically optimal across a broad parameter space when the variance is known. Numerical results support our theoretical findings. The connections established in this paper may open up new avenues for investigating MA from different perspectives. A discussion on some topics for future research concludes the paper.
Jingfu Peng
2023-09-26T01:00:36
http://arxiv.org/abs/2309.14596v2
# Model averaging: A shrinkage perspective+ ###### Abstract Model averaging (MA), a technique for combining estimators from a set of candidate models, has attracted increasing attention in machine learning and statistics. In the existing literature, there is an implicit understanding that MA can be viewed as a form of shrinkage estimation that draws the response vector towards the subspaces spanned by the candidate models. This paper explores this perspective by establishing connections between MA and shrinkage in a linear regression setting with multiple nested models. We first demonstrate that the optimal MA estimator is the best linear estimator with monotone non-increasing weights in a Gaussian sequence model. The Mallows MA, which estimates weights by minimizing the Mallows' \(C_{p}\), is a variation of the positive-part Stein estimator. Motivated by these connections, we develop a novel MA procedure based on a blockwise Stein estimation. Our resulting Stein-type MA estimator is asymptotically optimal across a broad parameter space when the variance is known. Numerical results support our theoretical findings. The connections established in this paper may open up new avenues for investigating MA from different perspectives. A discussion on some topics for future research concludes the paper. **Keywords: Model averaging, Stein shrinkage, penalized blockwise Stein rule, asymptotic optimality.** ## 1 Introduction Model averaging (MA) is an umbrella term for methods that combine multiple candidate models to make a decision, typically in regression and forecasting problems. The concept of MA was first introduced by de Laplace (1818) (see Moral-Benito, 2015, for a comprehensive review of the historical development of MA). In recent years, MA has received an explosion of interest in both machine learning and statistics (see, e.g., Fletcher, 2018). It is regarded as a viable alternative to model selection (MS) techniques, as it aims to mitigate MS variability and control modeling biases among candidate models. The benefits of MA compared to MS have been theoretically studied in Peng and Yang (2022). In the existing literature, MA has been approached using either Bayesian or frequentist frameworks. The Bayesian perspective on MA can be found in works such as Draper (1995), George and McCulloch (1997), and Hoeting et al. (1999). Within the frequentist paradigm, MA strategies have become increasingly popular in forecasting literature since the works of Barnard (1963) and Bates and Granger (1969) (see Timmermann, 2006, for a review on the literature). In recent years, several important techniques has been developed, including boosting (Freund, 1995), bagging (Breiman, 1996a), stacking (Wolpert, 1992; Breiman, 1996b), random forest (Amit and Geman, 1997), information criterion weighting (Buckland et al., 1997; Hjort and Claeskens, 2003), adaptive regression by mixing (Yang, 2001; Yuan and Yang, 2005), exponential weighting (George, 1986; Leung and Barron, 2006; Dalalyan and Salmon, 2012), and Q-aggregation (Rigollet, 2012; Dai et al., 2014; Bellec, 2018). Additionally, there is a growing body of literature focused on constructing asymptotically optimal MA, with the goal of finding the optimal convex combination of candidate estimates. This is typically achieved by minimizing performance measures such as Mallows' \(C_{p}\)(see, e.g., Blaker, 1999; Hansen, 2007; Wan et al., 2010) or cross-validation error (see, e.g., Hansen and Racine, 2012; Zhang et al., 2013; Ando and Li, 2014). 
Despite the extensive theoretical work and wide applications of MA, there is a commonly held viewpoint that MA is essentially a shrinkage estimator, and that other shrinkage methods can also achieve the objectives of MA. This view has been substantiated by several studies. For instance, the results in Section 5.1 of Kneip (1994) indicate that combining two linear smoothers by minimizing Mallows' \(C_{p}\) yields a James-Stein estimator. The relationship between Mallows model averaging (MMA) and Stein shrinkage estimation has been further explored in Blaker (1999), Hansen (2007), and Hansen (2014) in the context of two nested models. In a semiparametric regression setting, Ullah et al. (2017) established the connection between MA and ridge shrinkage on the basis of the orthogonal model. Additionally, in a Gaussian location model, Green and Strawderman (1991) proposed a James-Stein type estimator to estimate the best linear combination of two independent biased and unbiased estimators. The independence assumption between two estimators in Green and Strawderman (1991) has been relaxed by Kim and White (2001), Judge and Mittelhammer (2004), and Mittelhammer and Judge (2005). More recently, Hansen (2016) proposed a Stein method to combine the restricted and unrestricted maximum likelihood estimators in a local asymptotic framework, and showed the asymptotic risk of this shrinkage estimator is strictly less than that of the maximum likelihood estimator. Note that most of the aforementioned studies focused on the relationship between MA and shrinkage in a two-model setting. The fundamental question that remains to be explored is whether these links persist in the context of multiple models. If so, can some state-of-the-art shrinkage techniques be employed to create MA estimators that perform as optimally as the infeasible optimal convex combinations of candidate models (i.e., the asymptotically optimal MA estimators)? Such answers would have a significant impact on the theories and applications of MA. This paper addresses the previously mentioned questions in a general linear model setting with multiple nested candidate models. The main contribution is twofold. First, we demonstrate that the optimal MA estimator is equivalent to the optimal linear estimator with monotonically non-increasing weights in a specific Gaussian sequence model. And the MMA estimator (Hansen, 2007), which targets the optimal MA risk, can be regarded as a variation of the sum of a set of positive-part Stein estimators from multiple mutually orthogonal subspaces. Second, we introduce a novel MA procedure to achieve asymptotic optimality by adapting the blockwise Stein rules from prior works (Donoho and Johnstone, 1995; Nemirovski, 1998; Cavalier and Tsybakov, 2001) to linear regression. In particular, when the candidate model set is properly constructed, this Stein-type MA estimator achieves the full potential of MA (i.e., the minimal MA risk over all the nested models) in a sufficiently large parameter space. The results of finite sample simulations support our theories. This paper gives the opportunity of looking at MA from different angles. By connecting MA with shrinkage, existing knowledge and technology derived from shrinkage estimation can be potentially transferred to MA. The selected review presented in this paper and the unveiled connections provide a theoretical foundation for this transfer; See Section 6 for more discussion. The remainder of the paper is structured as follows. 
In Section 2, we set up our regression problem. Section 3 draws the theoretical connections between MA and shrinkage. In Section 4, we propose a Stein-type MA procedure and present its theoretical properties. Section 5 examines the finite sample properties of proposed methods by numerical simulations, and Section 6 concludes the paper. Proofs are included in the Appendix. ## 2 Problem setup Consider the linear regression model \[y_{i}=\mu_{i}+\varepsilon_{i}=\sum_{j=1}^{p_{n}}\beta_{j}x_{ij}+\varepsilon_{i },\quad i=1,\ldots,n, \tag{2.1}\] where \(\varepsilon_{1},\ldots,\varepsilon_{n}\) are independent normal errors with mean \(0\) and variance \(\sigma^{2}\), \(x_{i1},\ldots,x_{ip_{n}}\) are non-stochastic variables, and \(p_{n}\) is the number of regressors. Let \(\mathbf{y}=(y_{1},\ldots,y_{n})^{\top}\), \(\boldsymbol{\mu}=(\mu_{1},\ldots,\mu_{n})^{\top}\), \(\boldsymbol{\varepsilon}=(\varepsilon_{1},\ldots,\varepsilon_{n})^{\top}\), \(\boldsymbol{\beta}=(\beta_{1},\ldots,\beta_{p_{n}})^{\top}\), and \(\mathbf{x}_{j}=(x_{1j},\ldots,x_{nj})^{\top}\) for \(j=1,\ldots,p_{n}\). Define \(\mathbf{X}=[\mathbf{x}_{1},\ldots,\mathbf{x}_{p_{n}}]\) the regressor matrix. In matrix notation, (2.1) can be written as \[\mathbf{y}=\boldsymbol{\mu}+\boldsymbol{\varepsilon}=\mathbf{X}\boldsymbol{ \beta}+\boldsymbol{\varepsilon}. \tag{2.2}\] From now on, we make the assumption: \(p_{n}\leq n\) and \(\mathbf{X}\) has full column rank. For simplicity, we assume that the variance \(\sigma^{2}\) is known, which was also considered in Leung and Barron (2006) to develop the theory of MA. To estimate the true regression mean vector \(\boldsymbol{\mu}\), \(M_{n}\) strictly nested linear models are considered as candidates. The \(m\)-th candidate model includes the first \(k_{m}\) regressors, where \(1\leq k_{1}<k_{2}<\cdots<k_{M_{n}}\leq p_{n}\). The information about the sizes of candidate models is stored in a set \(\mathcal{M}=\{k_{1},\ldots,k_{M_{n}}\}\), and then \(M_{n}=|\mathcal{M}|\), where \(|\mathcal{S}|\) denotes the cardinality of a set \(\mathcal{S}\). Let \(\mathbf{X}_{k_{m}}=[\mathbf{X}_{1},\ldots,\mathbf{X}_{k_{m}}]\) be the design matrix of the \(m\)-th candidate model. We estimate the regression coefficients by the least-squares method \((\mathbf{X}_{k_{m}}^{\top}\mathbf{X}_{k_{m}})^{-1}\mathbf{X}_{k_{m}}^{\top} \mathbf{y}\). The \(m\)-th estimator of \(\boldsymbol{\mu}\) is \[\widehat{\boldsymbol{\mu}}_{k_{m}}=\mathbf{X}_{k_{m}}(\mathbf{X}_{k_{m}}^{\top }\mathbf{X}_{k_{m}})^{-1}\mathbf{X}_{k_{m}}^{\top}\mathbf{y}=\mathbf{P}_{k_{ m}}\mathbf{y},\] where \(\mathbf{P}_{k_{m}}\triangleq\mathbf{X}_{k_{m}}(\mathbf{X}_{k_{m}}^{\top} \mathbf{X}_{k_{m}})^{-1}\mathbf{X}_{k_{m}}^{\top}\) and \(\mathbf{P}_{0}=\mathbf{0}_{n\times n}\). Let \(\mathbf{w}=(w_{1},\ldots,w_{M_{n}})^{\top}\) denote a weight vector in the unit simplex of \(\mathbb{R}^{M_{n}}\): \[\mathcal{W}_{M_{n}}=\left\{\mathbf{w}\in[0,1]^{M_{n}}:\sum_{m=1}^{M_{n}}w_{m} =1\right\}. \tag{2.3}\] Given the candidate model set \(\mathcal{M}\), the MA estimator of \(\boldsymbol{\mu}\) is \[\widehat{\boldsymbol{\mu}}_{\mathbf{w}|\mathcal{M}}\triangleq\sum_{m=1}^{M_{n }}w_{m}\widehat{\boldsymbol{\mu}}_{k_{m}}=\sum_{m=1}^{M_{n}}w_{m}\mathbf{P}_{ k_{m}}\mathbf{y}, \tag{2.4}\] where the subscript \(\mathbf{w}|\mathcal{M}\) is to emphasize the dependence of the MA estimator on the candidate model set \(\mathcal{M}\). 
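To fix ideas, the following is a minimal NumPy sketch, not the authors' code, of the nested least-squares fits \(\widehat{\boldsymbol{\mu}}_{k_{m}}=\mathbf{P}_{k_{m}}\mathbf{y}\) and the averaged estimator (2.4) for a fixed weight vector in the unit simplex; the design, coefficients, noise level, and model sizes are arbitrary illustrative choices.

```python
import numpy as np

def projection(Xk):
    """Hat matrix P_k = X_k (X_k' X_k)^{-1} X_k' of a candidate design X_k."""
    return Xk @ np.linalg.solve(Xk.T @ Xk, Xk.T)

rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.normal(size=(n, p))
beta = (1.0 + np.arange(p)) ** -1.5        # illustrative decaying coefficients
mu = X @ beta                              # true regression mean
y = mu + rng.normal(size=n)                # sigma^2 = 1 in this toy example

sizes = [2, 5, 10, 20]                     # candidate model sizes k_1 < ... < k_{M_n}
mu_hats = [projection(X[:, :k]) @ y for k in sizes]

w = np.full(len(sizes), 1.0 / len(sizes))  # any weight vector in the unit simplex
mu_ma = sum(wm * mh for wm, mh in zip(w, mu_hats))   # MA estimator (2.4)
print(np.sum((mu_ma - mu) ** 2))           # squared l2 loss L_n
```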
For the theoretical work, we consider the squared \(\ell_{2}\) loss \(L_{n}(\widehat{\boldsymbol{\mu}},\boldsymbol{\mu})=\|\widehat{\boldsymbol{ \mu}}-\boldsymbol{\mu}\|^{2}\) and its corresponding risk \(R_{n}(\widehat{\boldsymbol{\mu}},\boldsymbol{\mu})=\mathbb{E}L_{n}(\widehat{ \boldsymbol{\mu}},\boldsymbol{\mu})\) as measures of the performance of an estimator \(\widehat{\boldsymbol{\mu}}\). For abbreviation, let \(L_{n}(\mathbf{w}|\mathcal{M},\boldsymbol{\mu})\) and \(R_{n}(\mathbf{w}|\mathcal{M},\boldsymbol{\mu})\) stand for \(L_{n}(\widehat{\boldsymbol{\mu}}_{\mathbf{w}|\mathcal{M}},\boldsymbol{\mu})\) and \(R_{n}(\widehat{\boldsymbol{\mu}}_{\mathbf{w}|\mathcal{M}},\boldsymbol{\mu})\), respectively. Given a candidate model set \(\mathcal{M}\), the optimal weight vector in \(\mathcal{W}_{M_{n}}\) is \[\mathbf{w}^{*}|\mathcal{M}=\arg\min_{\mathbf{w}\in\mathcal{W}_{M_{n}}}R_{n}( \mathbf{w}|\mathcal{M},\boldsymbol{\mu}).\] Then, our goal is to construct an MA estimator that performs asymptotically equivalent to that based on \(\mathbf{w}^{*}|\mathcal{M}\). The specific definitions are as follows. **Definition 1**.: _An MA estimator with the weights \(\widehat{\mathbf{w}}|\mathcal{M}\) trained on data is called asymptotically optimal if_ \[\mathbb{E}L_{n}(\widehat{\mathbf{w}}|\mathcal{M},\boldsymbol{\mu})=\left[1+o(1) \right]R_{n}(\mathbf{w}^{*}|\mathcal{M},\boldsymbol{\mu}) \tag{2.5}\] _holds as \(n\to\infty\)._ A representative example of the candidate model set is \(\mathcal{M}_{a}=\{1,2,\ldots,p_{n}\}\), which contains all nested models. For any \(\mathcal{M}\subseteq\mathcal{M}_{a}\), we have \(R_{n}(\mathbf{w}^{*}|\mathcal{M}_{a},\boldsymbol{\mu})\leq R_{n}(\mathbf{w}^{* }|\mathcal{M},\boldsymbol{\mu})\). Thus, \(R_{n}(\mathbf{w}^{*}|\mathcal{M}_{a},\boldsymbol{\mu})\) represents the full potential of MA in the nested model setting. **Definition 2**.: _An MA estimator with the weights \(\widehat{\mathbf{w}}|\mathcal{M}\) estimated on data is called fully asymptotically optimal if_ \[\mathbb{E}L_{n}(\widehat{\mathbf{w}}|\mathcal{M},\boldsymbol{\mu})=\left[1+o( 1)\right]R_{n}(\mathbf{w}^{*}|\mathcal{M}_{a},\boldsymbol{\mu}) \tag{2.6}\] _holds as \(n\to\infty\)._ Note that Definition 2 targets the full potential of MA in the nested model setting we considered, while Definition 1 provides the asymptotic justification for some general candidate model sets \(\mathcal{M}\). In the next sections, we demonstrate some unexplored connections between MA and shrinkage estimation, through which we develop a Stein-type MA procedure to achieve the asymptotic optimality properties in Definitions 1-2. ## 3 Connecting MA to shrinkage ### Optimal MA and monotone oracle Define the matrixes \(\mathbf{D}_{m|\mathcal{M}}=\mathbf{P}_{k_{m}}-\mathbf{P}_{k_{m-1}}\) for \(m=1,\ldots,M_{n}\), and \(\mathbf{D}_{M_{n}+1|\mathcal{M}}=\mathbf{I}-\mathbf{P}_{k_{M_{n}}}\). As pointed out in Xu and Zhang (2022), \(\mathbf{D}_{m|\mathcal{M}},m=1,\ldots,M_{n}+1\) are mutually orthogonal since \(\mathbf{D}_{m|\mathcal{M}}\mathbf{D}_{m^{\prime}|\mathcal{M}}=\mathbf{D}_{m| \mathcal{M}}\delta_{mm^{\prime}}\), where \(\delta_{mm^{\prime}}\) is the Kronecker delta. 
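The orthogonality claim above is easy to check numerically. The following self-contained snippet (with an arbitrary random design, purely for illustration) verifies that the \(\mathbf{D}_{m|\mathcal{M}}\) behave as mutually orthogonal projections and sum to the identity.

```python
import numpy as np

def projection(Xk):
    return Xk @ np.linalg.solve(Xk.T @ Xk, Xk.T)

rng = np.random.default_rng(0)
n, sizes = 50, [2, 5, 10]
X = rng.normal(size=(n, sizes[-1]))

P = [np.zeros((n, n))] + [projection(X[:, :k]) for k in sizes]   # P_0, P_{k_1}, ..., P_{k_M}
D = [P[m] - P[m - 1] for m in range(1, len(P))] + [np.eye(n) - P[-1]]

# D_m D_m' = delta_{mm'} D_m and sum_m D_m = I (orthogonal direct-sum decomposition).
for i, Di in enumerate(D):
    for j, Dj in enumerate(D):
        target = Di if i == j else np.zeros((n, n))
        assert np.allclose(Di @ Dj, target)
assert np.allclose(sum(D), np.eye(n))
print("orthogonality checks passed")
```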
We represent the original sample space (2.2) as an orthogonal direct sum of \(M_{n}+1\) subspaces \[\mathbf{y}_{m|\mathcal{M}}=\boldsymbol{\mu}_{m|\mathcal{M}}+\boldsymbol{\varepsilon}_{m|\mathcal{M}},\quad m=1,\ldots,M_{n}+1, \tag{3.1}\] where \(\mathbf{y}_{m|\mathcal{M}}=\mathbf{D}_{m|\mathcal{M}}\mathbf{y}\), \(\boldsymbol{\mu}_{m|\mathcal{M}}=\mathbf{D}_{m|\mathcal{M}}\boldsymbol{\mu}\), and \(\boldsymbol{\varepsilon}_{m|\mathcal{M}}=\mathbf{D}_{m|\mathcal{M}}\boldsymbol{\varepsilon}\). Note that \(\boldsymbol{\varepsilon}_{m|\mathcal{M}}\) is distributed as \(N(\mathbf{0},\sigma^{2}\mathbf{D}_{m|\mathcal{M}})\) and is independent of \(\boldsymbol{\varepsilon}_{m^{\prime}|\mathcal{M}}\) when \(m^{\prime}\neq m\). The expression (3.1) defines a Gaussian sequence model with \(M_{n}+1\) independent vector-valued observations. The MA estimator (2.4) can be written as a linear estimator in the Gaussian sequence model \[\widehat{\mathbf{\mu}}_{\mathbf{w}|\mathcal{M}}=\widehat{\mathbf{\mu}}_{\mathbf{\gamma}|\mathcal{M}}\triangleq\sum_{m=1}^{M_{n}}\gamma_{m}\mathbf{y}_{m|\mathcal{M}}, \tag{3.2}\] where \(\gamma_{m}=\sum_{j=m}^{M_{n}}w_{j}\), \(\gamma_{M_{n}+1}=0\), and \(\mathbf{\gamma}|\mathcal{M}=(\gamma_{1},\ldots,\gamma_{M_{n}})^{\top}\) is the cumulative weight vector. For simplicity of notation, we write \(\mathbf{w}|\mathcal{M}\) and \(\mathbf{\gamma}|\mathcal{M}\) in the same expression \(\widehat{\mathbf{\mu}}\). It will cause no confusion if we use the different notations \(\mathbf{w}|\mathcal{M}\) and \(\mathbf{\gamma}|\mathcal{M}\) to designate the forms in terms of the weights and the cumulative weights, respectively. Similar arguments apply to the other notations defined in this section. Furthermore, we see that the MA risk equals the risk of the linear estimator (3.2) in the Gaussian sequence model (3.1), i.e., \[R_{n}(\mathbf{w}|\mathcal{M},\mathbf{\mu}) =R_{n}(\mathbf{\gamma}|\mathcal{M},\mathbf{\mu})\triangleq\sum_{m=1}^{M_{n}+1}\mathbb{E}\|\mathbf{\mu}_{m|\mathcal{M}}-\gamma_{m}\mathbf{y}_{m|\mathcal{M}}\|^{2} \tag{3.3}\] \[=\sum_{m=1}^{M_{n}}\left[\|\mathbf{\mu}_{m|\mathcal{M}}\|^{2}(1-\gamma_{m})^{2}+\sigma_{m|\mathcal{M}}^{2}\gamma_{m}^{2}\right]+\|\mathbf{\mu}_{M_{n}+1|\mathcal{M}}\|^{2},\] where \(\sigma_{m|\mathcal{M}}^{2}=(k_{m}-k_{m-1})\sigma^{2}\). By defining the set of monotone non-increasing cumulative weights \[\Gamma_{M_{n}}=\left\{\mathbf{\gamma}:1=\gamma_{1}\geq\gamma_{2}\geq\cdots\geq\gamma_{M_{n}}\geq 0\right\}, \tag{3.4}\] we connect \(\mathbf{w}^{*}|\mathcal{M}\) to the monotone oracle \[\mathbf{\gamma}^{*}|\mathcal{M}=\arg\min_{\mathbf{\gamma}\in\Gamma_{M_{n}}}R_{n}(\mathbf{\gamma}|\mathcal{M},\mathbf{\mu})\] with \(w_{m}^{*}=\gamma_{m}^{*}-\gamma_{m+1}^{*}\) for \(m=1,\ldots,M_{n}\). It is worth mentioning that the connection between the optimal MA estimator and the monotone oracle was noticed in Peng and Yang (2022). However, Peng and Yang (2022) mainly focused on the property of (3.3) itself rather than the specific MA procedures. The next subsection will provide more insight on how to estimate the optimal MA estimator/the monotone oracle based on the observed data. ### MMA and Stein estimation A well-known idea of estimating \(\mathbf{w}^{*}|\mathcal{M}\) and \(\mathbf{\gamma}^{*}|\mathcal{M}\) is based on the principle of unbiased risk estimation (URE) (Mallows, 1973; Akaike, 1973; Stein, 1981).
Under the linear regression model (2.2), it is also known as the MMA criterion \[C_{n}(\mathbf{w}|\mathcal{M},\mathbf{y})=\|\mathbf{y}-\widehat{\mathbf{\mu}}_{\mathbf{w}|\mathcal{M}}\|^{2}+2\sigma^{2}\mathbf{k}^{\top}\mathbf{w}, \tag{3.5}\] where \(\mathbf{k}=(k_{1},\ldots,k_{M_{n}})^{\top}\). Minimizing \(C_{n}(\mathbf{w}|\mathcal{M},\mathbf{y})\) over \(\mathcal{W}_{M_{n}}\) yields \(\widehat{\mathbf{w}}_{\text{MMA}}|\mathcal{M}=\arg\min_{\mathbf{w}\in\mathcal{W}_{M_{n}}}C_{n}(\mathbf{w}|\mathcal{M},\mathbf{y})\) and the MMA estimator \[\widehat{\boldsymbol{\mu}}_{\widehat{\mathbf{w}}_{\text{MMA}}|\mathcal{M}}=\sum_{m=1}^{M_{n}}\widehat{w}_{\text{MMA},m}\widehat{\boldsymbol{\mu}}_{k_{m}},\] where \(\widehat{w}_{\text{MMA},m}\) is the \(m\)-th element of \(\widehat{\mathbf{w}}_{\text{MMA}}|\mathcal{M}\). Under the Gaussian sequence model (3.1), an equivalent expression of the MMA criterion in terms of \(\boldsymbol{\gamma}|\mathcal{M}\) is \[C_{n}(\boldsymbol{\gamma}|\mathcal{M},\mathbf{y})=\sum_{m=1}^{M_{n}}\left[\|\mathbf{y}_{m|\mathcal{M}}\|^{2}(1-\gamma_{m})^{2}+2\sigma_{m|\mathcal{M}}^{2}\gamma_{m}\right]+\|\mathbf{y}_{M_{n}+1|\mathcal{M}}\|^{2}, \tag{3.6}\] where \(\mathbf{y}_{m|\mathcal{M}}\) and \(\sigma_{m|\mathcal{M}}^{2}\) are defined in Section 3.1. Minimizing \(C_{n}(\boldsymbol{\gamma}|\mathcal{M},\mathbf{y})\) over the monotone weight set \(\Gamma_{M_{n}}\) gives \(\widehat{\boldsymbol{\gamma}}_{\text{MMA}}|\mathcal{M}=\arg\min_{\boldsymbol{\gamma}\in\Gamma_{M_{n}}}C_{n}(\boldsymbol{\gamma}|\mathcal{M},\mathbf{y})\) and \[\widehat{\boldsymbol{\mu}}_{\widehat{\boldsymbol{\gamma}}_{\text{MMA}}|\mathcal{M}}=\sum_{m=1}^{M_{n}}\widehat{\gamma}_{\text{MMA},m}\mathbf{y}_{m|\mathcal{M}},\] where \(\widehat{\gamma}_{\text{MMA},m}\) is the \(m\)-th element of \(\widehat{\boldsymbol{\gamma}}_{\text{MMA}}|\mathcal{M}\). Based on the relation (3.2), it is evident that \(\widehat{\boldsymbol{\mu}}_{\widehat{\mathbf{w}}_{\text{MMA}}|\mathcal{M}}\) and \(\widehat{\boldsymbol{\mu}}_{\widehat{\boldsymbol{\gamma}}_{\text{MMA}}|\mathcal{M}}\) are exactly the same estimator. In general, \(\widehat{\boldsymbol{\mu}}_{\widehat{\mathbf{w}}_{\text{MMA}}|\mathcal{M}}\) and \(\widehat{\boldsymbol{\mu}}_{\widehat{\boldsymbol{\gamma}}_{\text{MMA}}|\mathcal{M}}\) do not have explicit expressions, which makes it challenging to investigate their properties directly. However, if we consider minimizing \(C_{n}(\boldsymbol{\gamma}|\mathcal{M},\mathbf{y})\) in a hypercube \[\widetilde{\Gamma}_{M_{n}}=[0,1]^{M_{n}}\] instead of the monotone set \(\Gamma_{M_{n}}\), or equivalently, minimizing \(C_{n}(\mathbf{w}|\mathcal{M},\mathbf{y})\) over an enlarged convex set \[\widetilde{\mathcal{W}}_{M_{n}}=\left\{\mathbf{w}\in\mathbb{R}^{M_{n}}:0\leq\sum_{j=m}^{M_{n}}w_{j}\leq 1,m=1,\ldots,M_{n}\right\} \tag{3.7}\] rather than \(\mathcal{W}_{M_{n}}\), then things become clearer. The solution of the problem \(\min_{\boldsymbol{\gamma}\in\widetilde{\Gamma}_{M_{n}}}C_{n}(\boldsymbol{\gamma}|\mathcal{M},\mathbf{y})\) is given by \[\widehat{\gamma}_{\text{STE},m}=\left(1-\frac{\sigma_{m|\mathcal{M}}^{2}}{\|\mathbf{y}_{m|\mathcal{M}}\|^{2}}\right)_{+},\quad m=1,\ldots,M_{n}. \tag{3.8}\] Define \(\widehat{\gamma}_{\text{STE},M_{n}+1}=0\) and \(\widehat{w}_{\text{STE},m}=\widehat{\gamma}_{\text{STE},m}-\widehat{\gamma}_{\text{STE},m+1}\) for \(m=1,\ldots,M_{n}\).
This also generates an MA estimator \[\widehat{\mathbf{\mu}}_{\widehat{\mathbf{w}}_{\text{STE}}|\mathcal{M}}=\sum_{m=1}^{M_{n}}\widehat{w}_{\text{STE},m}\widehat{\mathbf{\mu}}_{k_{m}} \tag{3.9}\] \[= \widehat{\mathbf{\mu}}_{\widehat{\mathbf{\gamma}}_{\text{STE}}|\mathcal{M}}=\sum_{m=1}^{M_{n}}\left(1-\frac{\sigma_{m|\mathcal{M}}^{2}}{\|\mathbf{y}_{m|\mathcal{M}}\|^{2}}\right)_{+}\mathbf{y}_{m|\mathcal{M}},\] which is the sum of multiple positive-part Stein estimators in the different orthogonal subspaces defined by the Gaussian sequence model (3.1). ### Relaxation of weight constraint Comparing the MMA estimator \(\widehat{\mathbf{\mu}}_{\widehat{\mathbf{w}}_{\text{MMA}}|\mathcal{M}}\) with the Stein estimator \(\widehat{\mathbf{\mu}}_{\widehat{\mathbf{w}}_{\text{STE}}|\mathcal{M}}\), we see that \(\widehat{\mathbf{\mu}}_{\widehat{\mathbf{w}}_{\text{MMA}}|\mathcal{M}}\) minimizes the URE criterion under the weight constraint \(\mathcal{W}_{M_{n}}\), while \(\widehat{\mathbf{\mu}}_{\widehat{\mathbf{w}}_{\text{STE}}|\mathcal{M}}\) is based on the enlarged set \(\widetilde{\mathcal{W}}_{M_{n}}\). A natural question is whether relaxing the weight constraint from the simplex \(\mathcal{W}_{M_{n}}\) to \(\widetilde{\mathcal{W}}_{M_{n}}\) provides substantial benefits for MA. This section compares the optimal MA risks in \(\mathcal{W}_{M_{n}}\) and \(\widetilde{\mathcal{W}}_{M_{n}}\). Define \(\mathbf{w}^{*}|\mathcal{M}=\arg\min_{\mathbf{w}\in\mathcal{W}_{M_{n}}}R_{n}(\mathbf{w}|\mathcal{M},\mathbf{\mu})\) and \(\widetilde{\mathbf{w}}^{*}|\mathcal{M}=\arg\min_{\mathbf{w}\in\widetilde{\mathcal{W}}_{M_{n}}}R_{n}(\mathbf{w}|\mathcal{M},\mathbf{\mu})\). Since \(\mathcal{W}_{M_{n}}\subseteq\widetilde{\mathcal{W}}_{M_{n}}\), we have \[R_{n}(\widetilde{\mathbf{w}}^{*}|\mathcal{M},\mathbf{\mu})\leq R_{n}(\mathbf{w}^{*}|\mathcal{M},\mathbf{\mu}).\] To further reveal the difference between \(R_{n}(\widetilde{\mathbf{w}}^{*}|\mathcal{M},\mathbf{\mu})\) and \(R_{n}(\mathbf{w}^{*}|\mathcal{M},\mathbf{\mu})\), define \[\mathcal{M}_{T}=\arg\max_{\mathcal{S}\subseteq\mathcal{M}}\left\{\frac{\|\mathbf{\mu}_{l|\mathcal{S}}\|^{2}}{\sigma_{l|\mathcal{S}}^{2}}\geq\frac{\|\mathbf{\mu}_{l+1|\mathcal{S}}\|^{2}}{\sigma_{l+1|\mathcal{S}}^{2}},\quad l=1,\ldots,|\mathcal{S}|-1\right\}.\] Note that \(\|\mathbf{\mu}_{l|\mathcal{M}}\|^{2}/(n\sigma_{l|\mathcal{M}}^{2})\) represents the signal-to-noise ratio (SNR) in the \(l\)-th subspace of (3.1). Thus, \(\mathcal{M}_{T}\) is the largest subset of \(\mathcal{M}\) whose SNRs in the subspaces are monotone nonincreasing.
It is clear that \[\{k_{1},k_{2},\ldots,k_{M_{n}}\}=\mathcal{M}_{1}\supseteq\cdots\supseteq\mathcal{M}_{T}\supseteq\cdots\supseteq\{k_{1},k_{M_{n}}\}.\] **Proposition 1**.: _For any \(\mathcal{M}=\{k_{1},k_{2},\ldots,k_{M_{n}}\}\), we have_ \[R_{n}(\widetilde{\mathbf{w}}^{*}|\mathcal{M},\mathbf{\mu})=\sum_{m=1}^{M_{n}}\frac{\|\mathbf{\mu}_{m|\mathcal{M}}\|^{2}\sigma_{m|\mathcal{M}}^{2}}{\|\mathbf{\mu}_{m|\mathcal{M}}\|^{2}+\sigma_{m|\mathcal{M}}^{2}}+\|\mathbf{\mu}_{M_{n}+1|\mathcal{M}}\|^{2}, \tag{3.10}\] \[R_{n}(\mathbf{w}^{*}|\mathcal{M},\boldsymbol{\mu})=R_{n}(\mathbf{w}^{*}|\mathcal{M}_{T},\boldsymbol{\mu})\] \[=\frac{\sigma_{1|\mathcal{M}_{T}}^{4}}{\|\boldsymbol{\mu}_{1|\mathcal{M}_{T}}\|^{2}+\sigma_{1|\mathcal{M}_{T}}^{2}}+\sum_{l=1}^{|\mathcal{M}_{T}|}\frac{\|\boldsymbol{\mu}_{l|\mathcal{M}_{T}}\|^{2}\sigma_{l|\mathcal{M}_{T}}^{2}}{\|\boldsymbol{\mu}_{l|\mathcal{M}_{T}}\|^{2}+\sigma_{l|\mathcal{M}_{T}}^{2}}+\|\boldsymbol{\mu}_{M_{n}+1|\mathcal{M}_{T}}\|^{2}.\] Proposition 1 suggests that when the SNRs of the subspaces in \(\mathcal{M}\) are not monotone nonincreasing, the weight restriction based on the simplex limits the potential of MA, since the optimal MA risk in this case equals the optimal MA risk based on a reduced candidate model set \(\mathcal{M}_{T}\). To illustrate this observation, consider two extreme cases. If for all \(l=2,\ldots,M_{n}-1\), \(\|\boldsymbol{\mu}_{l|\mathcal{M}}\|^{2}/\sigma_{l|\mathcal{M}}^{2}\geq\|\boldsymbol{\mu}_{l+1|\mathcal{M}}\|^{2}/\sigma_{l+1|\mathcal{M}}^{2}\), then \(\mathcal{M}_{T}=\{k_{1},k_{2},\ldots,k_{M_{n}}\}\). In this case, we have \[R_{n}(\mathbf{w}^{*}|\mathcal{M},\boldsymbol{\mu})-R_{n}(\widetilde{\mathbf{w}}^{*}|\mathcal{M},\boldsymbol{\mu})=\frac{\sigma_{1|\mathcal{M}}^{4}}{\|\boldsymbol{\mu}_{1|\mathcal{M}}\|^{2}+\sigma_{1|\mathcal{M}}^{2}},\] which is negligible compared to \(R_{n}(\mathbf{w}^{*}|\mathcal{M},\boldsymbol{\mu})\) provided \(k_{1}\) is bounded and \(R_{n}(\mathbf{w}^{*}|\mathcal{M},\boldsymbol{\mu})\to\infty\). The result in this case implies that if the SNR is monotone nonincreasing as \(l\) increases, the optimal MA risks in \(\mathcal{W}_{M_{n}}\) and \(\widetilde{\mathcal{W}}_{M_{n}}\) are asymptotically equivalent. In contrast, if for every \(l=2,\ldots,M_{n}-1\), \(\|\boldsymbol{\mu}_{l|\mathcal{M}}\|^{2}/\sigma_{l|\mathcal{M}}^{2}<\|\boldsymbol{\mu}_{l+1|\mathcal{M}}\|^{2}/\sigma_{l+1|\mathcal{M}}^{2}\), then \(\mathcal{M}_{T}\) is reduced to \(\{k_{1},k_{M_{n}}\}\). In this case, \(R_{n}(\mathbf{w}^{*}|\mathcal{M},\boldsymbol{\mu})\) equals the optimal MA risk based on the two-model set \(\{k_{1},k_{M_{n}}\}\), while \(R_{n}(\widetilde{\mathbf{w}}^{*}|\mathcal{M},\boldsymbol{\mu})\) stays unchanged. The result in this case suggests that if the SNR is not strictly nonincreasing, enlarging the weight set from \(\mathcal{W}_{M_{n}}\) to \(\widetilde{\mathcal{W}}_{M_{n}}\) may have substantial benefits. ## 4 A Stein-type MA procedure ### Penalized blockwise Stein method Intuitively, although the Stein estimator (3.9) has a lower oracle MA risk as discussed in Section 3.3, it may suffer from a higher estimation error since it minimizes the URE criterion over a relatively large weight set (see, e.g., Cavalier and Tsybakov, 2001). When its estimation error exceeds the optimal MA risk, it is hard to establish the asymptotic optimality of MA.
Inspired by the ideas of Cavalier and Tsybakov (2001, 2002), we now modify the Stein rule (3.8) and consider \[\widehat{\gamma}_{m}=\left(1-\frac{\sigma_{m|\mathcal{M}}^{2}(1+\varphi_{m})} {\|\mathbf{y}_{m|\mathcal{M}}\|^{2}}\right)_{+},\quad m=1,\ldots,M_{n}, \tag{4.1}\] where \(0\leq\varphi_{m}<1\) is a penalty factor, \(\widehat{\boldsymbol{\gamma}}|\mathcal{M}=(\widehat{\gamma}_{1},\ldots, \widehat{\gamma}_{M_{n}})^{\top}\), and \(\widehat{\gamma}_{M_{n}+1}=0\). And then, define a Stein-type MA estimator \[\widehat{\mathbf{\mu}}_{\widehat{\mathbf{w}}|\mathcal{M}}=\sum_{m=1}^{M_{n}}\widehat{ \gamma}_{m}\mathbf{y}_{m|\mathcal{M}}=\sum_{m=1}^{M_{n}}\widehat{w}_{m}\widehat {\mathbf{\mu}}_{k_{m}}, \tag{4.2}\] where the \(m\)-th element of \(\widehat{\mathbf{w}}|\mathcal{M}\) is \(\widehat{w}_{m}=\widehat{\gamma}_{m}-\widehat{\gamma}_{m+1}\). In the existing literature, the estimator (4.2) is also known as the penalized blockwise Stein estimator (Cavalier and Tsybakov, 2001, 2002). Because of the penalty factor \(\varphi_{m}\), the estimator (4.2) has fewer nonzero cumulative weights than the Stein estimator (3.9), resulting in lower estimation error. In this paper, \(\varphi_{m}\) is assumed to be related to \(n\). Specifically, we assume the values of \(\varphi_{m}\) are small and \[\max_{1\leq m\leq M_{n}}\varphi_{m}\to 0 \tag{4.3}\] as \(n\to\infty\). To get the theoretical properties of the Stein-type MA estimator \(\widehat{\mathbf{\mu}}_{\widehat{\mathbf{w}}|\mathcal{M}}\), we need two additional assumptions on \(\mathcal{M}\) and \(\varphi_{m}\). **Assumption 1**.: _There exists a constant \(c_{1}\) such that_ \[\sum_{m=1}^{M_{n}}\exp\left[-\frac{(k_{m}-k_{m-1})\varphi_{m}^{2}}{16(1+2 \sqrt{\varphi_{m}})^{2}}\right]\leq c_{1}. \tag{4.4}\] **Assumption 2**.: _For all \(m=1,\ldots,M_{n}\), assume_ \[\frac{1}{k_{m}-k_{m-1}}\leq\frac{1-\varphi_{m}}{4}. \tag{4.5}\] Note that a prerequisite for Assumption 1 is \[(k_{m}-k_{m-1})\varphi_{m}^{2}\to\infty. \tag{4.6}\] This requires that \(k_{m}-k_{m-1}\to\infty\) as \(m\) increase. From (4.6), a typical choice for the penalty factor is \(\varphi_{m}=1/(k_{m}-k_{m-1})^{\tau}\), where \(0<\tau<1/2\). For Assumption 2, a sufficient condition is \(k_{m}-k_{m-1}>3\), which is a common assumption for the Stein-type methods (see, e.g., Stein, 1981). **Theorem 1**.: _Suppose Assumptions 1-2 hold. Then for any sample size \(n\) and any regression mean vector \(\mathbf{\mu}\), we have_ \[\mathbb{E}L_{n}(\widehat{\mathbf{w}}|\mathcal{M},\mathbf{\mu})\leq(1+\bar{\varphi })R_{n}(\mathbf{w}^{*}|\mathcal{M},\mathbf{\mu})+8c_{1}\sigma^{2}, \tag{4.7}\] _where \(\bar{\varphi}=\max_{1\leq m\leq M_{n}}\{2\varphi_{m}+16/[(k_{m}-k_{m-1}) \varphi_{m}]\}\)._ The oracle inequality in Theorem 1 states that the Stein-type MA estimator (4.2) performs as well as the optimal MA estimator based on \(\mathcal{M}\) and \(\mathcal{W}_{M_{n}}\). Indeed, when (4.3) and (4.6) hold, we have \[\bar{\varphi}=\max_{1\leq m\leq M_{n}}\left[2\varphi_{m}+o(\varphi_{m})\right]=O \left(\max_{1\leq m\leq M_{n}}\varphi_{m}\right)\to 0. \tag{4.8}\] Thus, if \(R_{n}(\mathbf{w}^{*}|\mathcal{M},\boldsymbol{\mu})\rightarrow\infty\) as \(n\rightarrow\infty\), the Stein-type MA estimator is asymptotically optimal in terms of (2.5). Theorem 1 improves the properties of the existing asymptotically optimal MA procedures in several directions. First, it significantly generalizes the setting of Blaker (1999) by considering multiple nested candidate models. 
Second, in contrast to the MA procedures that use a discrete weight set (Hansen, 2007), our Stein-type MA estimator theoretically targets the optimal MA estimator in the continuous weight set, hence resulting in a lower MA risk (see Section 3 of Peng et al., 2023). Moreover, the Stein-type MA estimator has a closed form, which is more computationally feasible than minimizing the MMA criterion over some convex sets with weight constraint (Hansen, 2007; Wan et al., 2010). In addition, Assumptions 1-2 are much milder than those in Wan et al. (2010), Zhang (2021), and Fang et al. (2022). Indeed, Condition (8) in Wan et al. (2010), Assumption 2 in Zhang (2021), and the regularity assumptions in Fang et al. (2022) are too restrictive for the nested MA to include the best candidate model. As shown in the next subsection, Assumptions 1-2 allow candidate model sets \(\mathcal{M}\) on which the Stein-type MA estimator can perform as well as the optimal MA estimator based on the largest candidate model set \(\mathcal{M}_{a}\). **Remark 1**.: _This paper focuses on the setup where the candidate models are restricted to be nested and estimated by least squares. More general MA frameworks with non-nested least squares candidates were considered by Wan et al. (2010) and Zhang (2021). In addition, the MA procedures in Dalalyan and Salmon (2012) and Bellec (2018) are suitable for combining a set of affine estimators, which include the least squares estimator, the ridge estimator, and the nearest neighbor estimator as special cases. Moreover, the MA strategies developed by Yang (2001, 2003, 2004) and Wang et al. (2014) can be used to construct adaptive MA estimators under different weight constraints almost without imposing any restriction on the candidate models._ ### Candidate model set In this subsection, we construct the candidate model set for the Stein-type MA estimator (4.2) to achieve the full asymptotic optimality given in Definition 2. The main idea is based on the system of weakly geometrically increasing blocks, which was studied in other statistical models (Nemirovski, 1998; Cavalier and Tsybakov, 2001, 2002). Specifically, consider \(\mathcal{M}=\{k_{1},k_{2},\ldots,k_{M_{n}}\}\) with \(k_{M_{n}}=p_{n}\). The sizes of the candidate models in \(\mathcal{M}\) are assumed to satisfy the following assumption. **Assumption 3**.: _There exists \(\zeta_{n}>0\) such that_ \[\max_{1\leq j\leq M_{n}-1}\frac{k_{j+1}-k_{j}}{k_{j}-k_{j-1}}\leq 1+\zeta_{n}. \tag{4.9}\] **Theorem 2**.: _Suppose Assumptions 1-3 hold. Then there exists an integer \(N\) such that when \(n>N\), we have_ \[\mathbb{E}L_{n}(\widehat{\mathbf{w}}|\mathcal{M},\boldsymbol{\mu})\leq(1+\bar{\varphi})(1+\zeta_{n})R_{n}(\mathbf{w}^{*}|\mathcal{M}_{a},\boldsymbol{\mu})+\left[8c_{1}+k_{1}(1+\bar{\varphi})\right]\sigma^{2}, \tag{4.10}\] _where \(c_{1}\) is given in Assumption 1 and \(\bar{\varphi}\) is defined in Theorem 1._ This theorem indicates that the Stein-type MA estimator achieves the full potential of MA provided that \(\bar{\varphi}\to 0\), \(\zeta_{n}\to 0\), and the remainder term \(\left[8c_{1}+k_{1}(1+\bar{\varphi})\right]\sigma^{2}\) is not too large. Now we give a specific example of \(\mathcal{M}\) to illustrate this theorem. Let \(\nu_{n}\) be an integer such that \(\nu_{n}\to\infty\) as \(n\to\infty\). A typical choice is \(\nu_{n}=\lfloor\log n\rfloor\) or \(\nu_{n}=\lfloor\log\log n\rfloor\). Then let \(\rho_{n}=1/\log\nu_{n}\).
Define a candidate model set \(\mathcal{M}^{*}\) with \(k_{1}=\lfloor\nu_{n}\rfloor\), \(k_{m}=k_{m-1}+\lfloor\nu_{n}\rho_{n}(1+\rho_{n})^{m-1}\rfloor\) for \(m=2,\ldots,M_{n}-1\), and \(k_{M_{n}}=p_{n}\), where \(M_{n}=\arg\max_{m\in\mathbb{N}}(\nu_{n}+\sum_{j=2}^{m}\lfloor\nu_{n}\rho_{n}( 1+\rho_{n})^{j-1}\rfloor\leq p_{n})\). Then we have the following consequence. **Corollary 1**.: _Suppose_ \[\varphi_{m}=\frac{1}{(k_{m}-k_{m-1})^{\tau}},\quad 0<\tau<1/2, \tag{4.11}\] _and_ \[\nu_{n}=o\left[R_{n}(\mathbf{w}^{*}|\mathcal{M}_{a},\boldsymbol{\mu})\right], \tag{4.12}\] _then the Stein-type MA estimator based on \(\mathcal{M}^{*}\) is fully asymptotically optimal in terms of (2.6)._ The proof of Corollary 1 consists in checking Assumptions 1-3 for \(\mathcal{M}^{*}\) and is given in the Appendix. The hyperparameters \(\tau\) and \(\nu_{n}\) influence the Stein-type MA in two different ways. First, \(\tau\) does not affect the fully asymptotic optimality of the Stein-type MA estimator when \(0<\tau<1/2\). However, \(\tau\) hurts the speed at which the estimator converges to the optimal MA risk as it decreases to \(0\), which is caused by the slower decaying rate of \(\bar{\varphi}\) in the risk bound (4.10). In contrast, the parameter \(\nu_{n}\) restricts the parameter space on which the fully asymptotic optimality can hold. For example, when \(\nu_{n}=\lfloor\log n\rfloor\), (4.12) requires \(R_{n}(\mathbf{w}^{*}|\mathcal{M}_{a},\boldsymbol{\mu})\) to converge slower than \(\log n\) for the fully asymptotic optimality. On the other hand, decreasing the order of \(\nu_{n}\) from \(\log n\) to a slower rate, say, \(\log\log n\), can broaden the parameter space for the fully asymptotic optimality. But it also increases the order of \(\rho_{n}\) and \(\zeta_{n}\), hence resulting to a slower rate of convergence of the Stein-type MA estimator. See Section 5 for more discussions about the choices of \(\nu_{n}\). **Remark 2**.: _When the error terms follow a sub-Gaussian assumption, Peng et al. (2023) prove that the risk of the MMA estimator based on \(\mathcal{M}_{a}\) is upper bounded by_ \[\begin{split}&\mathbb{E}L_{n}(\widehat{\mathbf{w}}_{\text{ MMA}}|\mathcal{M}_{a},\boldsymbol{\mu})\leq R_{n}(\mathbf{w}^{*}|\mathcal{M}_{a}, \boldsymbol{\mu})\\ &+C_{1}(\log n)^{3}+C_{2}\left[R_{n}(\mathbf{w}^{*}|\mathcal{M}_{ a},\boldsymbol{\mu})\right]^{1/2}(\log n)^{3/2},\end{split} \tag{4.13}\] _where \(C_{1}\) and \(C_{2}\) are two positive constants. This bound implies that MMA is fully asymptotically optimal if \((\log n)^{3}=o[R_{n}(\mathbf{w}^{*}|\mathcal{M}_{a},\boldsymbol{\mu})]\), while the Stein-type MA estimator with \(\nu_{n}=\lfloor\log n\rfloor\) achieves the fully asymptotic optimality provided \(\log n=o[R_{n}(\mathbf{w}^{*}|\mathcal{M}_{a},\boldsymbol{\mu})]\). Thus, the Stein-type MA estimator imposes milder limitations on \(R_{n}(\mathbf{w}^{*}|\mathcal{M}_{a},\boldsymbol{\mu})\) than MMA and achieves the fully asymptotic optimality in a broader parameter space. 
However, comparing the remainder terms in the risk bounds (4.10) and (4.13), we find that MMA converges to the oracle MA risk at a rate_ \[(\log n)^{3}\vee\left\{[R_{n}(\mathbf{w}^{*}|\mathcal{M}_{a},\boldsymbol{\mu})]^{1/2}\left(\log n\right)^{3/2}\right\}, \tag{4.14}\] _whereas the Stein-type MA estimator converges at a rate of_ \[\frac{R_{n}(\mathbf{w}^{*}|\mathcal{M}_{a},\boldsymbol{\mu})}{\log\log n}\vee(\log n), \tag{4.15}\] _which is slower than (4.14) when \(R_{n}(\mathbf{w}^{*}|\mathcal{M}_{a},\boldsymbol{\mu})\) is large. For example, if \(R_{n}(\mathbf{w}^{*}|\mathcal{M}_{a},\boldsymbol{\mu})\) grows at the order \(n^{1/2}\), (4.14) is of the order \(n^{1/4}(\log n)^{3/2}\) in contrast to \(n^{1/2}/\log\log n\) in (4.15)._ ## 5 Simulation studies The data is simulated from the linear regression model (2.1), where \(p_{n}=\lfloor 3n/4\rfloor\), \(x_{1i}=1\), the regressor vectors \((x_{2i},\ldots,x_{p_{n}i})^{\top},i=1,\ldots,n\) are i.i.d. from the multivariate normal distribution with mean \(\mathbf{0}_{p_{n}-1}\) and covariance \(\Sigma_{(p_{n}-1)\times(p_{n}-1)}\), where \(\Sigma_{ii}=1\) and \(\Sigma_{ij}=\rho\) when \(i\neq j\), and the random error terms \(\varepsilon_{i}\)'s are i.i.d. from \(N(0,\sigma^{2})\) and are independent of the \(x_{ji}\)'s. We set \(\rho\) to be \(0\), \(0.3\), and \(0.6\) and consider two cases of the regression coefficient vector: **Case 1**: _\(\boldsymbol{\beta}=(\beta_{1},\ldots,\beta_{p_{n}})^{\top}\), where \(\beta_{j}=j^{-\alpha_{1}}\) and \(\alpha_{1}\) is set to be \(0.51\), \(1\), and \(1.5\)._ **Case 2**: _\(\boldsymbol{\beta}=(\beta_{1},\ldots,\beta_{p_{n}})^{\top}\), where \(\beta_{j}=\exp(-j^{\alpha_{2}})\) and \(\alpha_{2}\) is set to be \(0.5\), \(1\), and \(1.5\)._ Let \(\boldsymbol{\beta}_{-1}\) denote \((\beta_{2},\ldots,\beta_{p_{n}})^{\top}\). The signal-to-noise ratio, defined by \(\boldsymbol{\beta}_{-1}^{\top}\Sigma\boldsymbol{\beta}_{-1}/\sigma^{2}\), is set to be one via the parameter \(\sigma^{2}\). Moreover, the sample size \(n\) increases from \(30\) to \(1000\). Note that in Case 1, the oracle MA risk \(R_{n}(\mathbf{w}^{*}|\mathcal{M}_{a},\boldsymbol{\mu})\) has a typical nonparametric rate \(n^{1/(2\alpha_{1})}\), while in Case 2, \(R_{n}(\mathbf{w}^{*}|\mathcal{M}_{a},\boldsymbol{\mu})\) increases at a logarithmic rate \((\log n)^{1/\alpha_{2}}\) (Peng and Yang, 2022). The candidate models used to implement MA are nested and estimated by least squares. We consider four competing MA methods to combine candidate models. The first (MMA1) is the classical MMA method, which minimizes (3.5) over \(\mathcal{W}_{p_{n}}\) to combine all nested models. The second (MMA2) is a parsimonious version of the MMA method (Zhang et al., 2020), which uses the same weight set and candidate model set as MMA1 but replaces \(2\sigma^{2}\) in (3.5) with a stronger penalty factor \(\log n\). The third (SMA1) is the proposed Stein-type MA based on the candidate model set \(\mathcal{M}^{*}\) with \(\tau=1/3\) and \(\nu_{n}=\lfloor\log n\rfloor\). The last one (SMA2) is the same as SMA1, except that \(\nu_{n}\) is set to be \(\lfloor\log\log n\rfloor\). Let \(\mathbf{\mu}=(\mu_{1},\dots,\mu_{n})^{\top}\) denote the mean vector of the true regression function. The accuracy of an estimation procedure is evaluated in terms of the squared \(\ell_{2}\) loss \(\|\mathbf{\mu}-\widehat{\mathbf{\mu}}\|^{2}\), where \(\widehat{\mathbf{\mu}}=(\widehat{\mu}_{1},\dots,\widehat{\mu}_{n})^{\top}\) is the estimated mean vector.
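For concreteness, the sketch below carries out a single replicate of Case 1 and applies the penalized blockwise Stein weights (4.1)-(4.2) on a weakly geometric block structure in the spirit of \(\mathcal{M}^{*}\). It is an illustrative simplification rather than the code behind the reported results; in particular, the block construction and the handling of the last block only loosely follow Section 4.2, and \(\sigma^{2}\) is treated as known, as assumed throughout the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
p = int(3 * n / 4)
rho, alpha1, tau = 0.3, 1.0, 1 / 3

# Case 1 design: intercept plus equicorrelated normal regressors, beta_j = j^{-alpha1},
# with sigma^2 chosen so that the signal-to-noise ratio equals one.
Sigma = (1 - rho) * np.eye(p - 1) + rho * np.ones((p - 1, p - 1))
Z = rng.multivariate_normal(np.zeros(p - 1), Sigma, size=n)
X = np.hstack([np.ones((n, 1)), Z])
beta = np.arange(1, p + 1) ** (-alpha1)
sigma2 = float(beta[1:] @ Sigma @ beta[1:])
mu = X @ beta
y = mu + rng.normal(scale=np.sqrt(sigma2), size=n)

# Weakly geometric block sizes in the spirit of M* (here nu_n = floor(log n)).
nu = int(np.log(n))
rho_n = 1 / np.log(nu)
ks, k = [nu], nu
while k < p:
    k = min(p, k + max(1, int(nu * rho_n * (1 + rho_n) ** len(ks))))
    ks.append(k)

# Penalized blockwise Stein weights (4.1): gamma_m = (1 - sigma_m^2 (1+phi_m)/||y_m||^2)_+.
P_prev, k_prev, mu_hat = np.zeros((n, n)), 0, np.zeros(n)
for k in ks:
    Xk = X[:, :k]
    P = Xk @ np.linalg.solve(Xk.T @ Xk, Xk.T)
    ym = (P - P_prev) @ y                    # projection of y onto the m-th block
    block = k - k_prev                       # k_m - k_{m-1}
    phi = 1 / block ** tau                   # penalty factor as in (4.11)
    gamma = max(0.0, 1 - sigma2 * block * (1 + phi) / (ym @ ym))
    mu_hat += gamma * ym                     # accumulate the estimator (4.2)
    P_prev, k_prev = P, k

print(np.sum((mu_hat - mu) ** 2))            # squared l2 loss of the Stein-type MA
```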
We replicate the data generation process \(R=1000\) times to approximate the risks of the competing methods. Let \(\mathbf{\mu}^{(r)}\) and \(\widehat{\mathbf{\mu}}^{(r)}\) denote the true mean vector and the estimated mean vector in the \(r\)-th replicate, respectively. To highlight the differences among competing methods, we plot the relative risk of each method \[\text{Relative risk}=\frac{R^{-1}\sum_{r=1}^{R}\|\mathbf{\mu}^{(r)}-\widehat{\mathbf{\mu}}^{(r)}\|^{2}}{R^{-1}\sum_{r=1}^{R}\min_{\mathbf{w}\in\mathcal{W}_{p_{n}}}\|\mathbf{\mu}^{(r)}-\widehat{\mathbf{\mu}}^{(r)}_{\mathbf{w}|\mathcal{M}_{a}}\|^{2}} \tag{5.1}\] as a function of \(n\), where the denominator of (5.1) is an estimate of the oracle risk of MA based on \(\mathcal{M}_{a}\). The simulation results are displayed in Figures 1-2. From Figure 1, we observe that the relative risks of the methods MMA1, SMA1, and SMA2 gradually decrease and approach 1 with increasing \(n\) in Case 1, which supports the asymptotic optimality results in this paper and the work of Peng et al. (2023). In contrast, the curves of the MMA2 method are flat and have values around 2. This result may be explained by the fact that the MMA2 method tends to assign more weight to small models due to its strong penalty, while the optimal candidate model size in Case 1 is relatively large. In addition, Figure 1 also shows that MMA1 is the best performing method when the coefficients decay slowly and the correlation between regressors is weak (\(\alpha_{1}=0.51\) and \(\rho=0\)). However, when \(\alpha_{1}\) and \(\rho\) increase, the two Stein-type MA estimators perform as well as or sometimes even better than MMA1 (for example, when \(\alpha_{1}=1.5\) and \(\rho=0.6\)). Another interesting observation is that when \(\alpha_{1}=1.5\), the relative risk curves of SMA2 show some fluctuations. A possible explanation is that SMA2 includes fewer candidate models than SMA1 and thus may exclude important models in Case 1. The patterns presented in Figure 2 are slightly more complicated, but they are still consistent with the current MA theories. When \(\alpha_{2}=0.5\), there are slight decreases in the relative risks of the Stein-type estimators, which supports their fully asymptotic optimality results in Section 4.2. But the curves of SMA2 have more fluctuations than SMA1, which occurs for a similar reason as in Figure 1 (c). In contrast, the curves of MMA1 level off with relative risks significantly larger than 1. This result corroborates the findings in Peng et al. (2023) that MMA1 achieves the fully asymptotic optimality when \(\alpha_{2}<1/3\) but may not when the coefficients decay too fast. When \(\alpha_{2}=1\), it can be seen from Figure 2 (b) that the relative risk of SMA2 declines steadily from 9.39 to 2.57 when \(\rho=0.3\), whereas the curve of SMA1 increases slightly from 1.71 to 2.16. This finding is consistent with our theoretical understanding in Corollary 1 that SMA2 achieves the fully asymptotic optimality in a broader parameter space than SMA1. On the other hand, SMA2 actually performs much worse than SMA1, especially when \(n\leq 500\).
Thus, it also illustrates that the asymptotic optimality should not be the sole justification for an MA method. As shown in Figure 2 (b), an estimator with a satisfactory asymptotic property (SMA2) may converge slowly and perform worse in the finite sample setting. When \(\alpha_{2}=1.5\), the coefficients decay extremely fast and MA does not have any real benefit compared to MS (Peng and Yang, 2022). Such a case is close to the setting where a fixed-dimension true model exists. In this case, the MMA2 method becomes the best. This result is expected and in accord with the theory in Zhang et al. (2020). Figure 2: Relative risks of the competing methods in Case 2 with \(\alpha_{2}=0.5\) in row (a), \(\alpha_{2}=1\) in row (b), and \(\alpha_{2}=1.5\) in row (c).
One unanticipated result in Figure 2 (c) is that the relative risks of MMA1 and SMA1 experience a two-phase process, a sharp increase as the sample size \(n\) is less than 500, followed by a decrease as \(n\) goes above 500. Similar simulation results have also appeared in Peng et al. (2023). The existing theories cannot fully explain this phenomenon, but it might be related to the specific expression of the relative risk in the finite sample setting. Take the SMA1 method for instance. If the upper bound in (4.10) is tight, then we may assume that the exact risk of SMA1 is \(C_{1}(\log n)^{2/3}+C_{2}\log n+C_{3}\), where \(C_{1}\to 1\) as \(n\rightarrow\infty\), and \(C_{2}\) and \(C_{3}\) are two positive constants. Thus, the relative risk of SMA1 is \(\text{RE}(n)=C_{1}+C_{2}(\log n)^{1/3}+C_{3}(\log n)^{-2/3}\), which increases to \(\infty\) as \(n\rightarrow\infty\). However, in the finite sample setting, the monotonicity of \(\text{RE}(n)\) may depend on the speed of \(C_{1}+C_{3}(\log n)^{-2/3}\) decreasing to 1. When the decrement of \(C_{1}+C_{3}(\log n)^{-2/3}\) exceeds the increment of \(C_{2}(\log n)^{1/3}\) in some range of \(n\), the relative risk of SMA1 shows a downward trend and vice versa. ## 6 Discussion This paper establishes an explicit link between MA and shrinkage in a multiple model setting, which significantly enhances the previous understanding of the relationship between MA and shrinkage in the two-model settings. Building upon the established connections, we extend the penalized blockwise Stein rule to the linear regression setting to develop the asymptotically optimal MA estimators. We provide some specific candidate model sets on which the proposed Stein-type MA estimator achieves the performance of the optimal convex combination of all the nested models (i.e., the fully asymptotic optimality). The improvement of the proposed Stein-type MA over the existing MA approaches is illustrated theoretically and numerically. Note that a limitation of our Stein-type MA method is that it requires the variance of the error terms to be known. Thus, extending our results to the case of unknown variance is a pressing topic for future research. The unveiled connections between MA and shrinkage offer the possibility of novel methodological developments in the area of MA. The focus of this paper has been on a linear regression setting. It is of great interest to bridge the gap between MA and shrinkage in the generalized linear model setting, and then apply the Stein estimators in some general distribution families (e.g., see Chapter 5 of Hoffmann, 2000, for a review) to combine models. In addition, given the approximate and exact distributions of the Stein-type estimators (Ullah, 1982; Phillips, 1984) and the techniques of constructing Stein-type confidence intervals (Hwang and Casella, 1982; He, 1992), it is greatly desirable to conduct inference for the asymptotically optimal MA estimators. Note that Hansen (2014) and Zhang and Liu (2019) have previously investigated the inference of MA but without the asymptotic optimality results. Another research direction is building a unified theory for Bayesian and frequentist MA. Indeed, the BIC weighting method considered in the frequentist literature (Buckland et al., 1997; Hjort and Claeskens, 2003) can be seen as an approximation of Bayesian MA. 
We conjecture that the asymptotically optimal MA estimator may also have a Bayesian interpretation since the Stein-type estimation is essentially an empirical Bayes approach (see, e.g., Efron and Morris, 1973). We leave these for future work. ### Proof of Proposition 1 We first introduce some notations. Given a candidate model set \(\mathcal{M}=\{k_{1},k_{2},\ldots,k_{M_{n}}\}\), define \(\mathcal{M}_{1}=\mathcal{M}\). For \(t\geq 1\), define \[\mathcal{M}_{t+1}=\mathcal{M}_{t}\setminus\{s_{t,l}:l\in\mathcal{I}_{t}\},\] (A.1.1) where \(s_{t,l}\) is the size of the \(l\)-th smallest model in \(\mathcal{M}_{t}\). If \(|\mathcal{M}_{t}|>2\) and there exists an \(l\in\{2,\ldots,|\mathcal{M}_{t}|-1\}\) such that \(\|\boldsymbol{\mu}_{l|\mathcal{M}_{t}}\|^{2}/\sigma_{l|\mathcal{M}_{t}}^{2}< \|\boldsymbol{\mu}_{l+1|\mathcal{M}_{t}}\|^{2}/\sigma_{l+1|\mathcal{M}_{t}}^{2}\), then define \[\mathcal{I}_{t}=\left\{l\in\{2,\ldots,|\mathcal{M}_{t}|-1\}:\|\boldsymbol{\mu }_{l|\mathcal{M}_{t}}\|^{2}/\sigma_{l|\mathcal{M}_{t}}^{2}<\|\boldsymbol{\mu} _{l+1|\mathcal{M}_{t}}\|^{2}/\sigma_{l+1|\mathcal{M}_{t}}^{2}\right\}.\] Otherwise, if \(\mathcal{M}_{t}=\{k_{1},k_{M_{n}}\}\) or for any \(l\in\{2,\ldots,|\mathcal{M}_{t}|-1\}\), \(\|\boldsymbol{\mu}_{l|\mathcal{M}_{t}}\|^{2}/\sigma_{l|\mathcal{M}_{t}}^{2}\geq \|\boldsymbol{\mu}_{l+1|\mathcal{M}_{t}}\|^{2}/\sigma_{l+1|\mathcal{M}_{t}}^{2}\), then define \(\mathcal{I}_{t}=\emptyset\). Let \(T=\arg\min_{t}\{\mathcal{I}_{t}=\emptyset\}\) and define \(\mathcal{M}_{T}=\{s_{T,1},\ldots,s_{T,|\mathcal{M}_{T}|}\}\), where \(s_{T,l}\) is the size of the \(l\)-th smallest model in \(\mathcal{M}_{T}\). \(\mathcal{M}_{T}\) is actually the largest subset \(\mathcal{S}\) of \(\mathcal{M}\) satisfying \(\|\boldsymbol{\mu}_{l|\mathcal{S}}\|^{2}/\sigma_{l|\mathcal{S}}^{2}\geq\| \boldsymbol{\mu}_{l+1|\mathcal{S}}\|^{2}/\sigma_{l+1|\mathcal{S}}^{2}\). From (3.3), we have \[R_{n}(\mathbf{w}|\mathcal{M},\boldsymbol{\mu})=R_{n}(\boldsymbol {\gamma}|\mathcal{M},\boldsymbol{\mu})\] \[=\sum_{m=1}^{M_{n}}\left(\|\boldsymbol{\mu}_{m|\mathcal{M}}\|^{2} +\sigma_{m|\mathcal{M}}^{2}\right)\left[\gamma_{m}-\frac{\|\boldsymbol{\mu}_{m |\mathcal{M}}\|^{2}}{\|\boldsymbol{\mu}_{m|\mathcal{M}}\|^{2}+\sigma_{m| \mathcal{M}}^{2}}\right]^{2}\] (A.1.2) \[\quad+\sum_{m=1}^{M_{n}}\frac{\|\boldsymbol{\mu}_{m|\mathcal{M}} \|^{2}\sigma_{m|\mathcal{M}}^{2}}{\|\boldsymbol{\mu}_{m|\mathcal{M}}\|^{2}+ \sigma_{m|\mathcal{M}}^{2}}+\|\boldsymbol{\mu}_{M_{n}+1|\mathcal{M}}\|^{2}.\] Since \(0\leq\|\boldsymbol{\mu}_{m|\mathcal{M}}\|^{2}/(\|\boldsymbol{\mu}_{m| \mathcal{M}}\|^{2}+\sigma_{m|\mathcal{M}}^{2})\leq 1\), the optimal MA risk in the enlarged weight set \(\widetilde{\mathcal{W}}_{M_{n}}\) is \[R_{n}(\widetilde{\mathbf{w}}^{*}|\mathcal{M},\boldsymbol{\mu}) =\min_{\mathbf{w}\in\widetilde{\mathcal{W}}_{M_{n}}}R_{n}(\mathbf{ w}|\mathcal{M},\boldsymbol{\mu})=\min_{\boldsymbol{\gamma}\in\Gamma_{M_{n}}}R_{n}( \boldsymbol{\gamma}|\mathcal{M},\boldsymbol{\mu})\] \[=\sum_{m=1}^{M_{n}}\frac{\|\boldsymbol{\mu}_{m|\mathcal{M}}\|^{2} \sigma_{m|\mathcal{M}}^{2}}{\|\boldsymbol{\mu}_{m|\mathcal{M}}\|^{2}+\sigma_{m |\mathcal{M}}^{2}}+\|\boldsymbol{\mu}_{M_{n}+1|\mathcal{M}}\|^{2}.\] Now our main task is to calculate \(R_{n}(\mathbf{w}^{*}|\mathcal{M},\boldsymbol{\mu})=\min_{\boldsymbol{\gamma} \in\Gamma_{M_{n}}}R_{n}(\boldsymbol{\gamma}|\mathcal{M},\boldsymbol{\mu})\), where \(\Gamma_{M_{n}}\) is defined in (3.4). 
The monotonicity of the sequence \(\{\|\boldsymbol{\mu}_{m|\mathcal{M}}\|^{2}/\sigma_{m|\mathcal{M}}^{2}\}_{m=1}^{M_{n}}\) plays a key role in the optimal MA risk \(R_{n}(\mathbf{w}^{*}|\mathcal{M},\boldsymbol{\mu})\). When \(\{\|\boldsymbol{\mu}_{m|\mathcal{M}}\|^{2}/\sigma_{m|\mathcal{M}}^{2}\}_{m=1}^{M_{n}}\) is monotonically non-increasing, we see that \(\|\boldsymbol{\mu}_{m|\mathcal{M}}\|^{2}/(\|\boldsymbol{\mu}_{m|\mathcal{M}}\|^{2}+\sigma_{m|\mathcal{M}}^{2})\) is also monotonically non-increasing. In view of (A.1.2), the optimal MA risk in \(\Gamma_{M_{n}}\) is given by \[R_{n}(\mathbf{w}^{*}|\mathcal{M},\boldsymbol{\mu})=\frac{\sigma_{1|\mathcal{M}}^{4}}{\|\boldsymbol{\mu}_{1|\mathcal{M}}\|^{2}+\sigma_{1|\mathcal{M}}^{2}}+\sum_{m=1}^{M_{n}}\frac{\|\boldsymbol{\mu}_{m|\mathcal{M}}\|^{2}\sigma_{m|\mathcal{M}}^{2}}{\|\boldsymbol{\mu}_{m|\mathcal{M}}\|^{2}+\sigma_{m|\mathcal{M}}^{2}}+\|\boldsymbol{\mu}_{M_{n}+1|\mathcal{M}}\|^{2}.\] In contrast, if the sequence \(\{\|\boldsymbol{\mu}_{m|\mathcal{M}}\|^{2}/\sigma_{m|\mathcal{M}}^{2}\}_{m=1}^{M_{n}}\) is not monotonically non-increasing, let \[\mathcal{I}_{1}=\left\{l\in\{2,\ldots,M_{n}-1\}:\|\boldsymbol{\mu}_{l|\mathcal{M}}\|^{2}/\sigma_{l|\mathcal{M}}^{2}<\|\boldsymbol{\mu}_{l+1|\mathcal{M}}\|^{2}/\sigma_{l+1|\mathcal{M}}^{2}\right\}\] denote the set of indices where the monotonicity is violated. Based on (A.1.2), we see that minimizing \(R_{n}(\boldsymbol{\gamma}|\mathcal{M},\boldsymbol{\mu})\) over \(\Gamma_{M_{n}}\) is equivalent to minimizing \(R_{n}(\boldsymbol{\gamma}|\mathcal{M},\boldsymbol{\mu})\) over \[\Gamma_{M_{n},(1)}=\left\{\boldsymbol{\gamma}\in\Gamma_{M_{n}}:\gamma_{l}=\gamma_{l+1},l\in\mathcal{I}_{1}\right\}.\] (A.1.3) It is immediate to observe that \[R_{n}(\mathbf{w}^{*}|\mathcal{M},\boldsymbol{\mu})=\min_{\boldsymbol{\gamma}\in\Gamma_{M_{n},(1)}}R_{n}(\boldsymbol{\gamma}|\mathcal{M},\boldsymbol{\mu})=\min_{\boldsymbol{\gamma}\in\Gamma_{|\mathcal{M}_{2}|}}R_{n}(\boldsymbol{\gamma}|\mathcal{M}_{2},\boldsymbol{\mu}),\] (A.1.4) where \(\mathcal{M}_{2}\) is defined in (A.1.1), and the second equality is due to the fact that adding the additional equality constraint on \(\boldsymbol{\gamma}\) is equivalent to merging the candidate models. If the monotonicity assumption is still violated based on \(\mathcal{M}_{2}\), then we repeat the above process \(T-2\) times until obtaining an \(\mathcal{M}_{T}\) for which the sequence \(\{\|\boldsymbol{\mu}_{m|\mathcal{M}_{T}}\|^{2}/\sigma_{m|\mathcal{M}_{T}}^{2}\}_{m=2}^{|\mathcal{M}_{T}|}\) is monotonically non-increasing. This completes the proof. ### Proof of Theorem 1 The proof of this theorem basically follows the same lines as the proofs in Cavalier and Tsybakov (2001, 2002) but involves additional complexity since the covariance matrices of \(\boldsymbol{\varepsilon}_{m|\mathcal{M}}\), \(m=1,\ldots,M_{n}\), are not identity matrices.
We lower bound the optimal MA risk by \[\begin{split} R_{n}(\mathbf{w}^{*}|\mathcal{M},\boldsymbol{\mu}) &\geq\sum_{m=1}^{M_{n}}\min_{\gamma_{m}}\left[\|\boldsymbol{\mu}_{ m|\mathcal{M}}\|^{2}(1-\gamma_{m})^{2}+\sigma_{m|\mathcal{M}}^{2}\gamma_{m}^{2} \right]+\|\boldsymbol{\mu}_{M_{n}+1|\mathcal{M}}\|^{2}\\ &=\sum_{m=1}^{M_{n}}\frac{\sigma_{m|\mathcal{M}}^{2}\|\boldsymbol {\mu}_{m|\mathcal{M}}\|^{2}}{\sigma_{m|\mathcal{M}}^{2}+\|\boldsymbol{\mu}_{m| \mathcal{M}}\|^{2}}+\|\boldsymbol{\mu}_{M_{n}+1|\mathcal{M}}\|^{2}.\end{split}\] (A.2.1) The risk of the Stein-type MA estimator is \[\mathbb{E}L_{n}(\widehat{\mathbf{w}}|\mathcal{M},\boldsymbol{\mu})=\sum_{m=1}^{M_{ n}}\mathbb{E}\|\widehat{\gamma}_{m}\mathbf{y}_{m|\mathcal{M}}-\boldsymbol{\mu}_{m| \mathcal{M}}\|^{2}+\|\boldsymbol{\mu}_{M_{n}+1|\mathcal{M}}\|^{2}.\] (A.2.2) The task now is to find the upper bounds of \(\mathbb{E}\|\widehat{\gamma}_{m}\mathbf{y}_{m|\mathcal{M}}-\boldsymbol{\mu}_{m |\mathcal{M}}\|^{2}\), \(m=1,\ldots,M_{n}\), respectively. Following Cavalier and Tsybakov (2002), we consider two different cases: \[\|\boldsymbol{\mu}_{m|\mathcal{M}}\|^{2}<\varphi_{m}\sigma_{m|\mathcal{M}}^{2} /2,\] (A.2.3) and \[\|\boldsymbol{\mu}_{m|\mathcal{M}}\|^{2}\geq\varphi_{m}\sigma_{m|\mathcal{M}}^ {2}/2.\] (A.2.4) We first construct the upper bound for \(\mathbb{E}\|\widehat{\gamma}_{m}\mathbf{y}_{m|\mathcal{M}}-\boldsymbol{\mu}_{ m|\mathcal{M}}\|^{2}\) under (A.2.3). Note that \[\mathbb{E}\|\widehat{\gamma}_{m}\mathbf{y}_{m|\mathcal{M}}- \boldsymbol{\mu}_{m|\mathcal{M}}\|^{2} =\mathbb{E}\|(\widehat{\gamma}_{m}-1)\mathbf{y}_{m|\mathcal{M}}+ \boldsymbol{\varepsilon}_{m|\mathcal{M}}\|^{2}\] (A.2.5) \[=\mathbb{E}\|\boldsymbol{\varepsilon}_{m|\mathcal{M}}\|^{2}+2 \mathbb{E}\left\langle(\widehat{\gamma}_{m}-1)\mathbf{y}_{m|\mathcal{M}}, \boldsymbol{\varepsilon}_{m|\mathcal{M}}\right\rangle+\mathbb{E}\|(\widehat{ \gamma}_{m}-1)\mathbf{y}_{m|\mathcal{M}}\|^{2}.\] The first term \(\mathbb{E}\|\boldsymbol{\varepsilon}_{m|\mathcal{M}}\|^{2}=\sigma_{m|\mathcal{ M}}^{2}\). For the second term, we obtain \[\mathbb{E}\left\langle(\widehat{\gamma}_{m}-1)\mathbf{y}_{m|\mathcal{M}}, \boldsymbol{\varepsilon}_{m|\mathcal{M}}\right\rangle=\mathbb{E}\left[\sum_{i =1}^{n}(\widehat{\gamma}_{m}-1)y_{m|\mathcal{M},i}\varepsilon_{m|\mathcal{M}, i}\right],\] (A.2.6) where \(y_{m|\mathcal{M},i}\), \(\mu_{m|\mathcal{M},i}\), and \(\varepsilon_{m|\mathcal{M},i}\) denote the \(i\)-th elements of \(\mathbf{y}_{m|\mathcal{M}}\), \(\boldsymbol{\mu}_{m|\mathcal{M}}\), and \(\boldsymbol{\varepsilon}_{m|\mathcal{M}}\), respectively. Define \(A_{m|\mathcal{M}}\) the event \(\{\|\mathbf{y}_{m|\mathcal{M}}\|^{2}\geq\sigma_{m|\mathcal{M}}^{2}(1+\varphi_ {m})\}\). 
Based on Lemma 1 of Liu (1994), we have \[\mathbb{E}\left[(\widehat{\gamma}_{m}-1)y_{m|\mathcal{M},i} \varepsilon_{m|\mathcal{M},i}\right]=\text{cov}\left[y_{m|\mathcal{M},i},( \widehat{\gamma}_{m}-1)y_{m|\mathcal{M},i}\right]\] (A.2.7) \[=\sum_{j=1}^{n}\text{cov}(y_{m|\mathcal{M},i},y_{m|\mathcal{M},j })\mathbb{E}\left(\frac{\partial\widehat{\gamma}_{m}}{\partial y_{m|\mathcal{M },j}}y_{m|\mathcal{M},i}\right)+\text{var}(y_{m|\mathcal{M},i})\mathbb{E}( \widehat{\gamma}_{m}-1),\] where \[\frac{\partial\widehat{\gamma}_{m}}{\partial y_{m|\mathcal{M},j}}=\frac{2 \sigma_{m|\mathcal{M}}^{2}(1+\varphi_{m})y_{m|\mathcal{M},j}I(A_{m|\mathcal{M} })}{\|\mathbf{y}_{m|\mathcal{M}}\|^{4}}.\] (A.2.8) Substituting (A.2.17)-(A.2.8) into (A.2.5) yields \[\mathbb{E}\|\widehat{\gamma}_{m}\mathbf{y}_{m|\mathcal{M}}-\mathbf{\mu} _{m|\mathcal{M}}\|^{2}=\sigma_{m|\mathcal{M}}^{2}\] (A.2.9) \[+\mathbb{E}\left\{\frac{4\sigma_{m|\mathcal{M}}^{2}(1+\varphi_{m}) I(A_{m|\mathcal{M}})}{\|\mathbf{y}_{m|\mathcal{M}}\|^{4}}\sum_{i=1}^{n}\sum_{j=1}^{n} \text{cov}(y_{m|\mathcal{M},i},y_{m|\mathcal{M},j})y_{m|\mathcal{M},i}y_{m| \mathcal{M},j}\right\}\] \[-2\mathbb{E}\left[\frac{\sigma_{m|\mathcal{M}}^{2}(1+\varphi_{m}) I(A_{m|\mathcal{M}})}{\|\mathbf{y}_{m|\mathcal{M}}\|^{2}}+I(\bar{A}_{m|\mathcal{M}}) \right]\sum_{i=1}^{n}\text{var}(y_{m|\mathcal{M},i})\] \[+\mathbb{E}\left[\frac{\sigma_{m|\mathcal{M}}^{4}(1+\varphi_{m}) ^{2}I(A_{m|\mathcal{M}})}{\|\mathbf{y}_{m|\mathcal{M}}\|^{2}}+\|\mathbf{y}_{m |\mathcal{M}}\|^{2}I(\bar{A}_{m|\mathcal{M}})\right].\] Since \(\sum_{i=1}^{n}\text{var}(y_{m|\mathcal{M},i})=\sigma_{m|\mathcal{M}}^{2}\) and \[\sum_{i=1}^{n}\sum_{j=1}^{n}\text{cov}(y_{m|\mathcal{M},i},y_{m|\mathcal{M},j} )y_{m|\mathcal{M},i}y_{m|\mathcal{M},j}=\sigma^{2}\mathbf{y}_{m|\mathcal{M}}^ {\top}\mathbf{D}_{m|\mathcal{M}}\mathbf{y}_{m|\mathcal{M}}\leq\sigma^{2}\| \mathbf{y}_{m|\mathcal{M}}\|^{2},\] (A.2.10) we have \[\mathbb{E}\|\widehat{\gamma}_{m}\mathbf{y}_{m|\mathcal{M}}-\mathbf{\mu}_{m| \mathcal{M}}\|^{2}\leq\|\mathbf{\mu}_{m|\mathcal{M}}\|^{2}+\mathbb{E}\left[W(\| \mathbf{y}_{m|\mathcal{M}}\|^{2})I(A_{m|\mathcal{M}})\right],\] (A.2.11) where \(W(x)\) is the function \[W(x)=-x+2\sigma_{m|\mathcal{M}}^{2}+\frac{\sigma_{m|\mathcal{M}}^{4}\left[4(1+ \varphi_{m})/(k_{m}-k_{m-1})-(1-\varphi_{m}^{2})\right]}{x}.\] (A.2.12) Then following the proofs in the Proposition 1 of Cavalier and Tsybakov (2002), we see that under the condition (A.2.3), \[\mathbb{E}\|\widehat{\gamma}_{m}\mathbf{y}_{m|\mathcal{M}}-\mathbf{\mu}_{m| \mathcal{M}}\|^{2}\leq\frac{1}{1-\varphi_{m}/2}\frac{\sigma_{m|\mathcal{M}}^{2 }\|\mathbf{\mu}_{m|\mathcal{M}}\|^{2}}{\sigma_{m|\mathcal{M}}^{2}+\|\mathbf{\mu}_{m| \mathcal{M}}\|^{2}}+\frac{8\sigma_{m|\mathcal{M}}^{2}}{k_{m}-k_{m-1}}\exp \left[-\frac{(k_{m}-k_{m-1})\varphi_{m}^{2}}{16(1+2\sqrt{\varphi_{m}})^{2}} \right].\] (A.2.13) Then, we construct the upper bound under (A.2.4). 
From Theorem 6.2 of Lehmann (1983), it is evident that \[\mathbb{E}\|\widehat{\gamma}_{m}\mathbf{y}_{m|\mathcal{M}}-\mathbf{\mu}_{m| \mathcal{M}}\|^{2}\leq\mathbb{E}\|\widetilde{\gamma}_{m}\mathbf{y}_{m| \mathcal{M}}-\mathbf{\mu}_{m|\mathcal{M}}\|^{2},\] (A.2.14) where \[\widetilde{\gamma}_{m}=1-\frac{\sigma_{m|\mathcal{M}}^{2}(1+\varphi_{m})}{\| \mathbf{y}_{m|\mathcal{M}}\|^{2}}\] (A.2.15) Similar to (A.2.5), we have \[\mathbb{E}\|\widetilde{\gamma}_{m}\mathbf{y}_{m|\mathcal{M}}- \mathbf{\mu}_{m|\mathcal{M}}\|^{2} =\mathbb{E}\|(\widetilde{\gamma}_{m}-1)\mathbf{y}_{m|\mathcal{M}}+ \mathbf{\varepsilon}_{m|\mathcal{M}}\|^{2}\] (A.2.16) \[=\mathbb{E}\|\mathbf{\varepsilon}_{m|\mathcal{M}}\|^{2}+2\mathbb{E} \left\langle(\widetilde{\gamma}_{m}-1)\mathbf{y}_{m|\mathcal{M}},\mathbf{ \varepsilon}_{m|\mathcal{M}}\right\rangle+\mathbb{E}\|(\widetilde{\gamma}_{m}- 1)\mathbf{y}_{m|\mathcal{M}}\|^{2}.\] And the second term of (A.2.16) is \[\mathbb{E}\left\langle(\widetilde{\gamma}_{m}-1)\mathbf{y}_{m|\mathcal{M}}, \boldsymbol{\varepsilon}_{m|\mathcal{M}}\right\rangle=\mathbb{E}\left[\sum_{i=1 }^{n}(\widetilde{\gamma}_{m}-1)y_{m|\mathcal{M},i}\varepsilon_{m|\mathcal{M}, i}\right].\] (A.2.17) Using Lemma 1 of Liu (1994) again, we have \[\mathbb{E}\left[(\widetilde{\gamma}_{m}-1)y_{m|\mathcal{M},i} \varepsilon_{m|\mathcal{M},i}\right]= \sigma_{m|\mathcal{M}}^{2}(1+\varphi_{m})\sum_{j=1}^{n}\text{cov }(y_{m|\mathcal{M},i},y_{m|\mathcal{M},j})\mathbb{E}\left(\frac{2y_{m|\mathcal{ M},i}y_{m|\mathcal{M},j}}{\|\mathbf{y}_{m|\mathcal{M}}\|^{4}}\right)\] (A.2.18) \[-\sigma_{m|\mathcal{M}}^{2}(1+\varphi_{m})\text{var}(y_{m| \mathcal{M},i})\mathbb{E}\left(\frac{1}{\|\mathbf{y}_{m|\mathcal{M}}\|^{2}} \right).\] Therefore, we have \[\mathbb{E}\|\widetilde{\gamma}_{m}\mathbf{y}_{m|\mathcal{M}}- \boldsymbol{\mu}_{m|\mathcal{M}}\|^{2}=\sigma_{m|\mathcal{M}}^{2}+4\sigma_{m| \mathcal{M}}^{2}(1+\varphi_{m})\sum_{i=1}^{n}\sum_{j=1}^{n}\text{cov}(y_{m| \mathcal{M},i},y_{m|\mathcal{M},j})\mathbb{E}\left(\frac{y_{m|\mathcal{M},i}y _{m|\mathcal{M},j}}{\|\mathbf{y}_{m|\mathcal{M}}\|^{4}}\right)\] (A.2.19) \[-2\sigma_{m|\mathcal{M}}^{2}(1+\varphi_{m})\sum_{i=1}^{n}\text{ var}(y_{m|\mathcal{M},i})\mathbb{E}\left(\frac{1}{\|\mathbf{y}_{m|\mathcal{M}}\|^{2}} \right)+\mathbb{E}\left[\frac{\|\mathbf{y}_{m|\mathcal{M}}\|^{2}\sigma_{m| \mathcal{M}}^{4}(1+\varphi_{m})^{2}}{\|\mathbf{y}_{m|\mathcal{M}}\|^{4}}\right]\] \[\leq\sigma_{m|\mathcal{M}}^{2}+4\sigma^{2}\sigma_{m|\mathcal{M}}^ {2}(1+\varphi_{m})\mathbb{E}\left(\frac{1}{\|\mathbf{y}_{m|\mathcal{M}}\|^{2} }\right)-2\sigma_{m|\mathcal{M}}^{4}(1+\varphi_{m})\mathbb{E}\left(\frac{1}{\| \mathbf{y}_{m|\mathcal{M}}\|^{2}}\right)\] \[+\sigma_{m|\mathcal{M}}^{4}(1+\varphi_{m})^{2}\mathbb{E}\left( \frac{1}{\|\mathbf{y}_{m|\mathcal{M}}\|^{2}}\right)\] \[=\sigma_{m|\mathcal{M}}^{2}-\left[1-\varphi_{m}^{2}-4(1+\varphi_{ m})/(k_{m}-k_{m-1})\right]\sigma_{m|\mathcal{M}}^{4}\mathbb{E}\frac{1}{\|\mathbf{y}_{m| \mathcal{M}}\|^{2}}.\] Under Assumption 2, the second term of (A.2.19) is negative. 
Combining with \[\mathbb{E}\left(\frac{1}{\|\mathbf{y}_{m|\mathcal{M}}\|^{2}}\right)\geq\frac{1}{\mathbb{E}\|\mathbf{y}_{m|\mathcal{M}}\|^{2}}=\frac{1}{\sigma_{m|\mathcal{M}}^{2}+\|\boldsymbol{\mu}_{m|\mathcal{M}}\|^{2}},\] (A.2.20) we have \[\begin{split}\mathbb{E}\|\widetilde{\gamma}_{m}\mathbf{y}_{m|\mathcal{M}}-\boldsymbol{\mu}_{m|\mathcal{M}}\|^{2}&\leq\frac{\sigma_{m|\mathcal{M}}^{2}\|\boldsymbol{\mu}_{m|\mathcal{M}}\|^{2}}{\sigma_{m|\mathcal{M}}^{2}+\|\boldsymbol{\mu}_{m|\mathcal{M}}\|^{2}}\left[\frac{\sigma_{m|\mathcal{M}}^{2}+\|\boldsymbol{\mu}_{m|\mathcal{M}}\|^{2}}{\|\boldsymbol{\mu}_{m|\mathcal{M}}\|^{2}}-\frac{\sigma_{m|\mathcal{M}}^{2}}{\|\boldsymbol{\mu}_{m|\mathcal{M}}\|^{2}}+\frac{[\varphi_{m}^{2}+8/(k_{m}-k_{m-1})]\sigma_{m|\mathcal{M}}^{2}}{\|\boldsymbol{\mu}_{m|\mathcal{M}}\|^{2}}\right]\\ &\leq\frac{\sigma_{m|\mathcal{M}}^{2}\|\boldsymbol{\mu}_{m|\mathcal{M}}\|^{2}}{\sigma_{m|\mathcal{M}}^{2}+\|\boldsymbol{\mu}_{m|\mathcal{M}}\|^{2}}\left[1+\frac{2[\varphi_{m}^{2}+8/(k_{m}-k_{m-1})]}{\varphi_{m}}\right].\end{split}\] (A.2.21) In view of Assumption 2, we have \[\frac{1}{1-\varphi_{m}/2}\leq 1+\varphi_{m}\leq 1+\frac{2\left(\varphi_{m}^{2}+8/(k_{m}-k_{m-1})\right)}{\varphi_{m}}.\] (A.2.22) Combining (A.2.13) with (A.2.21), we can conclude that \[\begin{split}\mathbb{E}\|\widehat{\gamma}_{m}\mathbf{y}_{m|\mathcal{M}}-\boldsymbol{\mu}_{m|\mathcal{M}}\|^{2}&\leq\left(1+2\varphi_{m}+16/[(k_{m}-k_{m-1})\varphi_{m}]\right)\frac{\sigma_{m|\mathcal{M}}^{2}\|\boldsymbol{\mu}_{m|\mathcal{M}}\|^{2}}{\sigma_{m|\mathcal{M}}^{2}+\|\boldsymbol{\mu}_{m|\mathcal{M}}\|^{2}}\\ &\quad+\frac{8\sigma_{m|\mathcal{M}}^{2}}{k_{m}-k_{m-1}}\exp\left[-\frac{(k_{m}-k_{m-1})\varphi_{m}^{2}}{16(1+2\sqrt{\varphi_{m}})^{2}}\right].\end{split}\] (A.2.23) Under Assumption 1, summing up (A.2.23) yields \[\mathbb{E}L_{n}(\widehat{\mathbf{w}}|\mathcal{M},\boldsymbol{\mu})\leq(1+\bar{\varphi})\sum_{m=1}^{M_{n}}\frac{\sigma_{m|\mathcal{M}}^{2}\|\boldsymbol{\mu}_{m|\mathcal{M}}\|^{2}}{\sigma_{m|\mathcal{M}}^{2}+\|\boldsymbol{\mu}_{m|\mathcal{M}}\|^{2}}+8c_{1}\sigma^{2}+\|\boldsymbol{\mu}_{M_{n}+1|\mathcal{M}}\|^{2},\] (A.2.24) where \(\bar{\varphi}=\max_{1\leq m\leq M_{n}}\left\{2\varphi_{m}+16/[(k_{m}-k_{m-1})\varphi_{m}]\right\}\). Combining (A.2.24) with (A.2.1), we have proved the theorem. ### Proof of Theorem 2 We first prove that \[R_{n}(\mathbf{w}^{*}|\mathcal{M},\boldsymbol{\mu})\leq(1+\zeta_{n})R_{n}(\mathbf{w}^{*}|\mathcal{M}_{a},\boldsymbol{\mu})+k_{1}\sigma^{2}.\] (A.3.1) Define an \(M_{n}\)-dimensional weight vector \(\bar{\mathbf{w}}=(\bar{w}_{1},\ldots,\bar{w}_{M_{n}})^{\top}\), where \(\bar{w}_{m}=\sum_{j=k_{m-1}+1}^{k_{m}}w_{j}^{*}\), \(\bar{\gamma}_{m}=\sum_{j=m}^{M_{n}}\bar{w}_{j}\), and \(w_{j}^{*}\) is the \(j\)-th element of \(\mathbf{w}^{*}|\mathcal{M}_{a}\). According to (3.3), we have \[\begin{split}R_{n}(\bar{\mathbf{w}}|\mathcal{M},\boldsymbol{\mu})&=\sum_{m=1}^{M_{n}}\left[\|\boldsymbol{\mu}_{m|\mathcal{M}}\|^{2}(1-\bar{\gamma}_{m})^{2}+\sigma_{m|\mathcal{M}}^{2}\bar{\gamma}_{m}^{2}\right]\\ &=\sum_{m=1}^{M_{n}}\left[\sum_{j=k_{m-1}+1}^{k_{m}}\|\boldsymbol{\mu}_{j|\mathcal{M}_{a}}\|^{2}(1-\bar{\gamma}_{m})^{2}+\sigma_{m|\mathcal{M}}^{2}\bar{\gamma}_{m}^{2}\right]\\ &\leq\sum_{m=1}^{p_{n}}\|\boldsymbol{\mu}_{m|\mathcal{M}_{a}}\|^{2}(1-\gamma_{m}^{*})^{2}+\sum_{m=1}^{M_{n}}\sigma_{m|\mathcal{M}}^{2}\bar{\gamma}_{m}^{2},\end{split}\] (A.3.2) where the inequality follows from the fact that \(\gamma_{j}^{*}\leq\bar{\gamma}_{m}\) when \(k_{m-1}+1\leq j\leq k_{m}\).
Note that \[\begin{split}\sum_{m=1}^{M_{n}}\sigma_{m|\mathcal{M}}^{2}\bar{\gamma}_{m}^{2}&\leq k_{1}\sigma^{2}+(1+\zeta_{n})\sum_{m=2}^{M_{n}}(k_{m-1}-k_{m-2})\sigma^{2}\bar{\gamma}_{m}^{2}\\ &\leq k_{1}\sigma^{2}+(1+\zeta_{n})\sigma^{2}\sum_{j=1}^{p_{n}}(\gamma_{j}^{*})^{2},\end{split}\] (A.3.3) where the second inequality is due to \(\bar{\gamma}_{m}\leq\gamma_{j}^{*}\) when \(k_{m-2}+1\leq j\leq k_{m-1}\). Substituting (A.3.3) into (A.3.2), we obtain (A.3.1). Then combining the oracle inequality in Theorem 1 with (A.3.1), we prove the theorem. ### Proof of Corollary 1 First, it is obvious that \(\mathcal{M}^{*}\) satisfies Assumption 3. We then verify Assumption 2. Note that \(1/k_{1}=1/\nu_{n}\). When \(2\leq m\leq M_{n}\), we have \[k_{m}-k_{m-1}\geq\lfloor\nu_{n}\rho_{n}(1+\rho_{n})^{m-1}\rfloor\geq(1-\rho_{n})\nu_{n}\rho_{n}(1+\rho_{n})^{m-1},\] (A.4.1) where the first inequality is due to the definition of \(\mathcal{M}^{*}\), and the second inequality follows from \(\lfloor x\rfloor\geq(1-\rho_{n})x\) when \(x\geq\rho_{n}^{-1}\). Thus when \(2\leq m\leq M_{n}\) and \(n\) is large enough, we obtain \[\frac{1}{k_{m}-k_{m-1}}\leq\frac{1}{(1-\rho_{n})\nu_{n}\rho_{n}}=O\left(\frac{1}{\nu_{n}\rho_{n}}\right),\] (A.4.2) where the first inequality is due to \[k_{m}-k_{m-1}=\lfloor\nu_{n}\rho_{n}(1+\rho_{n})^{m-1}\rfloor\geq(1-\rho_{n})\nu_{n}\rho_{n}(1+\rho_{n})^{m-1}.\] (A.4.3) Since \(\nu_{n}\to\infty\) and \(\nu_{n}\rho_{n}\to\infty\), Assumption 2 is naturally satisfied when \(n\) is larger than some fixed integer \(N\). Combining with (4.8), we also see that \(\bar{\varphi}\to 0\) as \(n\to\infty\). We now focus on Assumption 1. Since \(\varphi_{m}\) is bounded and \(\varphi_{m}=1/(k_{m}-k_{m-1})^{\tau}\), there exists a constant \(C_{1}\) such that \[\exp\left[-\frac{(k_{m}-k_{m-1})\varphi_{m}^{2}}{16(1+2\sqrt{\varphi_{m}})^{2}}\right]\leq\exp\left[-C_{1}(k_{m}-k_{m-1})^{1-2\tau}\right].\] (A.4.4) When \(m=1\), we have \(\exp(-C_{1}k_{1}^{1-2\tau})\to 0\). When \(m\geq 2\), using (A.4.3), we have \[\sum_{m=2}^{M_{n}}\exp\left[-C_{1}(k_{m}-k_{m-1})^{1-2\tau}\right]\leq\sum_{m=1}^{\infty}\exp\left[-C_{1}C_{2}\left(\nu_{n}\rho_{n}(1+\rho_{n})^{m-1}\right)^{1-2\tau}\right]\to 0,\] (A.4.5) which meets Assumption 1. Thus, applying Theorem 2, we prove the corollary.
Model averaging (MA) is a technique for combining estimators obtained from a set of candidate models, and it has been attracting attention in machine learning and statistics. In the existing literature, MA is interpreted as a form of shrinkage estimation that pulls the response vector toward the space spanned by the candidate models. This paper aims to examine this interpretation by investigating the relationship between MA and shrinkage in a linear regression setting with multiple models. First, the optimal MA estimator is shown to be the best linear estimator with monotonically non-increasing weights in a Gaussian sequence model. Mallows MA (MMA), which estimates the weights by minimizing Mallows' $C_p$ over the unit simplex, is a variant of a sum of positive-part Stein estimators. The difference between MMA and the Stein estimator lies in the Stein estimator's optimal
2309.04785
Implementation of Autonomous Supply Chains for Digital Twinning: a Multi-Agent Approach
Trade disruptions, the pandemic, and the Ukraine war over the past years have adversely affected global supply chains, revealing their vulnerability. Autonomous supply chains are an emerging topic that has gained attention in industry and academia as a means of increasing their monitoring and robustness. While many theoretical frameworks exist, there is only sparse work to facilitate generalisable technical implementation. We address this gap by investigating multi-agent system approaches for implementing autonomous supply chains, presenting an autonomous economic agent-based technical framework. We illustrate this framework with a prototype, studied in a perishable food supply chain scenario, and discuss possible extensions.
Liming Xu, Yaniv Proselkov, Stefan Schoepf, David Minarsch, Maria Minaricova, Alexandra Brintrup
2023-09-09T13:16:52
http://arxiv.org/abs/2309.04785v1
# Implementation of Autonomous Supply Chains for Digital Twinning: a Multi-Agent Approach1 ###### Abstract Trade disruptions, the pandemic, and the Ukraine war over the past years have adversely affected global supply chains, revealing their vulnerability. Autonomous supply chains are an emerging topic that has gained attention in industry and academia as a means of increasing their monitoring and robustness. While many theoretical frameworks exist, there is only sparse work to facilitate generalisable technical implementation. We address this gap by investigating multi-agent system approaches for implementing autonomous supply chains, presenting an autonomous economic agent-based technical framework. We illustrate this framework with a prototype, studied in a perishable food supply chain scenario, and discuss possible extensions. keywords: Autonomous Supply Chain, Digital Twin, Multi-Agent Systems, Perishable Foods + Footnote †: journal: IFAC2023, Yokohama, Japan ## 1 Introduction Recent trade disruptions, the pandemic, and the Ukraine war over the past years have seriously revealed vulnerabilities of traditional global supply chains (SCs) (Handfield et al., 2020; Shih, 2020; Kilpatrick, 2022) reinforcing the need for organisations to establish more resilient SCs. The use of digital twin technologies has been widely discussed as a solution to manage disruptions and achieve more resilient SCs (Calatayud et al., 2019; BlueYonder, 2020; NelsonHall, 2021). Digital Twins (DT) are virtual representations of physical objects and processes that allow real-time analytics and corrective actions (Sharma et al., 2022). As such, DTs can be viewed as self-interested agents acting in pursuit of their goals. The manufacturing industry embraced DT solutions in areas such as process control and condition-monitoring, however, end-to-end Supply Chain DT remain a less well-established area. Whilst DT allows manufacturers to achieve better outcomes and automate operational decisions at scale, a crucial element remains unaddressed to extend the concept to a SC context: A supply chain is a business entity group involved in the upstream and downstream flows of materials, information, and finance from sources to customers (Christopher, 2016). Thus, supply chains consist of multiple, heterogeneous agents that are interdependent. When the DT of a particular company adjusts batch sizes to minimise inventory costs, this action would impact downstream logistics processes by having to schedule additional deliveries, increasing carbon footprint and delivery costs. In such a scenario, system-level goals need trade-offs between individual agent goals, yet no reusable frameworks exist to facilitate this. We postulate that the realistic applicability of the DT concept to SC will crucially depend on their ability to function in a multi-agent environment, capable of distributed decision-making for attaining system-level goals. A relevant development is the multi-agent system-based facilitation of Autonomous Supply Chains (ASC), formally conceptualised in Xu et al. (2022). Xu et al. (2022) has proposed that the natural extension of a DT operating in a supply chain would be to give it agency so that consensus among distributed stakeholders can be built. Equipping DT with a multi-agent system framework would integrate different supply chain functions and actors' data coherently and enable collective decision-making. While Xu et al. 
(2022) presented the theoretical background of the ASC, the technical framework for implementing an ASC has not been well studied. This paper addresses this gap, exploring the development of a technical framework for realising ASC systems for supply chain digital twinning. Generalisability is particularly important as current academic research on supply chain automation focus on siloed SC functions, such as pallet picking and demand forecasting and remains disconnected from studies in Digital Twinning. The main contributions of this paper are described below: 1. We demonstrate the use of multi-agent systems (MAS) in supply chains to facilitate an ASC for supply chain digital twinning. 2. We extend the open economic framework (OEF) (Minarsch et al., 2021) and autonomous economic agent-based technical framework for agent-based autonomous supply chain implementation. 3. We develop an A2SC prototype to demonstrate the MAS approach for building a prototype ASC. This paper is organised as follows: Section 2 reviews work on using the MAS approach for supply chain management (SCM). Section 3 presents the agent-based autonomous supply chain. An implementation of an A2SC prototype is presented in Section 4. Section 5 discusses the limitations and implications of this work and concludes this paper. ## 2 Related Work The use of MAS approaches in SCM originates in the Integrated SCM System (ISCM) developed by Fox et al. (1993) in the early 1990s. The ISCM was composed of a set of software agents, each responsible for one or more activities in the SC and coordinated with other agents to plan and execute SC functions. Fox et al. (2001) investigated key issues and presented solutions for constructing such an agent-based SC architecture in a detailed manner. Swaminathan et al. (1998) modelled SC dynamics with a MAS approach. They developed a software agent library that captured generic SC processes and concepts, providing a modular, reusable framework for developing a wide range of realistic SC models. In comparison with Fox et al. (1993, 2001) which decomposed agents along the dimension of SC functions, Swaminathan et al. (1998) defined agents by considering the SC structural elements; the proposed framework in Swaminathan et al. (1998) allows a model to address issues at both tactical (coordination) and strategic levels (configuration). These early works paved the way to tackle SCM issues with MAS approaches. Since these early studies, numerous literature on MAS-based SCM has emerged. Work on the use of MAS approaches for automating SCM can be classified into three main streams. The first stream tackles agent frameworks and architectures for generic or specific SCs (Sadeh et al., 2001; Kumar et al., 2013). The second stream tackles dynamic SC formation and configuration (Kim and Cho, 2010; Ameri and McArthur, 2013). The third stream tackles the decision-making aspects of coordination and negotiation, making individual agents work coherently in a SCM (Sadeh et al., 2001; Wong and Fang, 2010). Other peripheral work has enhanced specific SC functions via agent technology, such as demand forecasting (Liang and Huang, 2006) and supplier selection (Ghadimi et al., 2018). Xu et al. (2021) contains a more comprehensive review of MAS for SCM. These previous works showed the wide recognition of the suitability of MAS approaches for SCM, work has mostly focussed on employing MAS approaches to solve particular SCM issues rather than developing generalisable architectures. 
While the MAS paradigm in SCM mostly focuses on either simulation or real-time control of supply chain operations, the concept of DTs in SCM is still gaining traction, with various definitions emerging. Most applications are in the area of logistics, and warehouse monitoring, although some authors classify all manufacturing processes, shop-floor, or even post-sales condition monitoring DT under the SC banner (Nguyen et al., 2022). Rosen et al. (2015) highlights that while Digital Twins should in theory be closely linked to autonomous systems research, the link has not been much explored. In SCM applications, the links to autonomous decision-making and actuation are at times implicitly made (Rosen et al., 2015; Sharma et al., 2022; Lopez et al., 2011), where the argument is that as autonomous systems need real-time information, and an up-to-date DT would underline the autonomous system's function. However, in the majority of SC cases, the link to autonomy is not discussed at all (Nguyen et al., 2022), with the DT only scoped as a monitoring and decision-aid tool. In this paper, we take the view that the DT is an ideal framework for feeding real-time information on the system state to a set of agents operating the supply chain. Similarly, a DT will enable monitoring of resulting states through the actions taken by agents, creating a continuous feedback loop. Thus the two concepts -- agent and DT -- are closely interlinked. In this paper, we implemented a scenario, in which we deploy a DT via an IoT system to track the system state, which is fed them into an autonomous SC agent framework. ## 3 Agent-based Autonomous Supply Chain Here we describe and categorise four key issues that arise when a multi-agent system (MAS) approach is applied to a supply chain system. A MAS is a _loosely coupled_ network of software agents that interact to solve complex problems beyond each agent's individual capacities or knowledge. Each agent is a goal-directed computational entity that acts autonomously on behalf of its users, communicating and coordinating with other agents as needed. As presented in Xu et al. (2022), an ASC is a set of self-interested organisational entities with partial knowledge that autonomously manage the movement of material and information flows. A MAS framework is thus naturally suited for studying SCs (Fox et al., 1993; Swaminathan et al., 1998). We explore the use of the MAS approach for building up ASCs, connecting distributed automated functions/entities with each other, and making them work coherently. The resulting ASCs are called agent-based autonomous supply chains, or A2SCs. Four design issues must be addressed when constructing A2SC. (Smith and Davis, 1981; Fox et al., 2001). The _first_ is the decomposition and distribution of task processing responsibility across agents. This requires the problem under study to be partitioned and allocated to constituent agents that are logically and often geographically distributed. Appropriate decomposition can facilitate interaction among these agents, addressing the _second_ issue. The agents in a MAS resolve conflicts (e.g., via negotiation) or act coherently (e.g., via coordination) through interaction. In the SCM domain, both control and data are distributed hence there is neither global control nor global data storage. Loosely coupled agents must connect with appropriate ones, proactively or reactively, to obtain control of or access to data. 
A well-designed connection mechanism lowers communication overhead, reducing the amount of data transmitted and promoting responsiveness. The _third_ issue is the design of appropriate communication languages and interaction protocols, which is an essential enabler for collective decision-making. Distributed agents require common languages to communicate with each other, mutually exchange knowledge, and need protocols to regulate their interactions. Standard language and protocols provide agents with instruments to establish connections and share information. Representative work (still dominant and in current use) on these two aspects include FIPA-ACL (FIPA, 2002), and the contract-net protocol (Smith and Davis, 1981). The _fourth_ issue is vocabulary and knowledge representation. To understand the contents of a message agents must use _consistent_ words that refer to the objects, functions, and relations appropriate to a certain application they are familiar with. Thus, in addition to "consistent words", a commonly understood knowledge representation is an integral part of agent communication. A vocabulary may contain multiple _ontologies_ for describing the shared body of knowledge in a domain. The first two issues deal with conceptual aspects: the _static structure_ and the _dynamics_ of a MAS as they are relevant to the architecture and agent organisation of the resulting MAS, whereas the last two issues tackle more concrete aspects -- the tools to enable the connection between disparate agents. Communication languages and protocols allow agents to freely join in or exit from an operating MAS, enabling an open, dynamic and adaptive computational environment. These four issues presented above thus offer a guiding technical framework for designing an A2SC system. In the next section, we describe an implementation of an A2SC using a MAS approach and explore how these issues are handled. ## 4 Prototype Development and Showcase The MIISI model, proposed in Xu et al. (2022), provides a conceptual framework for guiding A2SC design. We illustrate A2SC implementation with a prototype model by following the MIISI framework and the MAS approach. For a use case, we selected perishable food SCs, which tend to spoil because of improper storage and long transit times during transportation. Thus, it is important to monitor the ambient condition of the vehicle used for transporting perishable foods and rapidly respond to emergent events, such as traffic congestion. The use case company (CMC) purchases meat from suppliers on a wholesale basis and supplies it to local restaurants. CMC would like to automate its wholesale and procurement procedures using information and communications technologies such that the flow of information between different stakeholders can be accommodated. The prototype must maintain smooth movement of physical flow and its associated information flow concerning inbound and outbound goods shipment and state of inventory. The developed prototype implements an _end-to-end_ automated procurement process using MAS, involving the set of stakeholders represented by agents along the supply chain. Figure 1 illustrates the main structure of the SC, in which our prototype focuses on the middle part of the chain (highlighted in grey). The prototype additionally includes two other stakeholders: logistics and third-party logistics (3PL), responsible for transporting foods and are essential to perishable foods SCs. 
### Design The scenario has two primary processes: replenishment (CMC procures meat from its suppliers to replenish its inventory) and wholesale (retailer purchases meat from CMC). Both processes have overlapping functions, so we use the replenishment process for illustration. Retailers can select their preferred delivery options from the multiple ones the logistics service offers in the replenishment process. To reduce redundancy, delivery in the wholesale process is assigned by logistics service providers. Both processes also involve decision-making, such as making good proposals, proposal acceptance or refusal, and selecting a delivery option. The prototype additionally contains functions to manage inventory, e.g., automatic stock updating and replenishment and logistic monitoring, where the real-time location and ambient conditions of products are monitored throughout the delivery. In the next section, we decompose the problem and distribute it to multiple agents. #### 4.1.1 Agent decomposition and distribution Five stakeholders are involved in these two processes: the CMC, retailer, suppliers, logistics, and 3PL. An agent type represents each, as follows: 1. Wholesaler agent: This agent type represents the wholesaler, CMC, managing the processes of procuring meat from supplier agents, wholesaling them to retailer agents, and managing its inventory. It contacts logistics agents to arrange delivery services. It also serves as a central hub for this hypothetical SC, connecting upstream and downstream SC agents. 2. Supplier agent: This agent type manages the meat supply process to wholesaler agents. It provides its customers with delivery services, which are in turn provided by its cooperating logistics agent. The supplier agent thus coordinates with wholesaler agents and logistics agents to complete a purchase order. 3. Retailer agent: This agent type represents local retailers, such as restaurants or local stores purchasing meat from wholesalers. It interacts with the wholesaler agent to deal with the procurement on behalf of a retailer. 4. Logistics agent: This agent type represents logistics service providers. It manages the overall operation of logistics services but outsources fulfilment services to 3PL providers. It coordinates with suppliers, wholesalers, and 3PL providers for delivery arrangements. 5. 3PL agent: This agent, acting on behalf of a 3PL provider, is responsible for fulfilling delivery orders assigned by a logistics agent. It monitors the entire transportation process of the meat, providing logistics companies with real-time data on the vehicle location and the product's ambient condition. All of the above agents are representative agents acting on behalf of SC stakeholders and are not involved in administrative tasks related to running the entire MAS. Therefore, we need to add an admin agent to provide agent-related services, such as lookup and yellow page services. These six types of agents are the main players whose interactions result in the dynamic structure of the prototype A2SC. #### 4.1.2 Agent interaction, language, and protocols Agents communicate with other agents to complete certain tasks and must comply with commonly agreed interaction protocols to enable such interactions (Hosseini et al., 2021).
Figure 1: An illustration of a _simplified_ meat supply chain.
Figure 2: Agent interaction in the replenishment process.
We consider two types of protocols in this prototype: contract-net and HTTP.
The contract-net protocol is an interaction protocol for a service requester to seek suitable service providers who can perform certain tasks, which may involve multiple negotiation and information exchange rounds. This protocol is suitable for handling complex interaction scenarios such as procurement, which may involve unknown participants. HTTP is a single-round interaction protocol adopted for handling simple interaction scenarios. This protocol is suitable for interaction between known participants that have already established coherence, e.g., a logistics agent directly assigning a delivery task to one of its longstanding 3PL partners for fulfilment. As such, it is also appropriate for quickly requesting information, such as wholesale product prices, delivery rate cards, and current traffic conditions. Interaction among agents was based on these two protocols. For example, Figure 2 illustrates agent interaction during the replenishment process, which involves four types of agents. As illustrated in Figure 2, agents interact with each other through _messages_. A message consists of two essential parts: the performative and the message body, delimited by a colon in Figure 2. These two parts denote the communicative act (behaviour) and the content of the message, respectively. A set of performatives is defined in protocols for enabling interaction. For example, the contract-net protocol defines performatives such as _cfp_ (call for proposal), _propose_, and _reject-proposal_; the HTTP protocol has two performatives: _request_ (which in turn has two types of request: get and post) and _response_. For unambiguous communication, messages contain other information, such as sender, receiver, protocol, and ontology. Agent communication languages define message format, and we use FIPA-ACL as the language in this study. Both replenishment and wholesale processes involve multiple rounds of communication, sending and receiving messages to coordinate procurement and logistics services. They also involve procedures for supporting decision-making, such as assess_order() for assessing incoming procurement orders, and self-calls for executing specific tasks, such as monitor_delivery() for monitoring the delivery process. Although the two processes are operationally separated and independent, their connection can be auto-triggered. For example, low wholesaler inventory stock triggers its replenishment behaviour and launches the replenishment process. In this sense, the two processes are linked, resulting in an end-to-end supply chain.
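To make the message structure concrete, below is a minimal, illustrative Python sketch of a FIPA-ACL-style message envelope and one simplified round of a contract-net exchange (cfp, propose, accept-proposal). It is a schematic reconstruction based on the description above, not code from the prototype or the AEA framework; the class name, field names, and the toy pricing and selection rules are our own assumptions.

```python
from dataclasses import dataclass

@dataclass
class AclMessage:
    """FIPA-ACL-style envelope: a performative plus content, with routing metadata."""
    performative: str          # e.g. "cfp", "propose", "accept-proposal", "request", "response"
    sender: str
    receiver: str
    content: dict
    protocol: str = "contract-net"
    ontology: str = "meat-trading"

def handle_cfp(supplier: str, msg: AclMessage) -> AclMessage:
    # A supplier replies to a call-for-proposal with a priced proposal (toy pricing logic).
    quote = {"product": msg.content["product"],
             "quantity": msg.content["quantity"],
             "unit_price": 12.5}
    return AclMessage("propose", supplier, msg.sender, quote, msg.protocol, msg.ontology)

def assess_order(proposals: list) -> AclMessage:
    # The wholesaler accepts the cheapest proposal (toy decision rule).
    best = min(proposals, key=lambda p: p.content["unit_price"])
    return AclMessage("accept-proposal", best.receiver, best.sender, best.content)

# One simplified replenishment round: wholesaler -> suppliers -> wholesaler.
cfp = AclMessage("cfp", "wholesaler", "all-suppliers",
                 {"product": "beef", "quantity": 100})
proposals = [handle_cfp(s, cfp) for s in ("supplier-1", "supplier-2")]
acceptance = assess_order(proposals)
print(acceptance.performative, "->", acceptance.receiver, acceptance.content)
```

In the actual prototype these exchanges span multiple negotiation rounds and also carry the protocol and ontology fields needed for unambiguous interpretation, as described above.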
The OEF hence provides a virtual environment -- a digital world -- for AEAs to live in by following social norms. We used two types of connections supported by the OEF for the backend MAS: simple OEF and HTTP. The simple OEF connection is an indirect (or mediated) connection, letting two unknown agents find each other via a mediator, whereas the HTTP connection offers a direct connection between two familiar agents. Using an HTTP connection, agents can contact others via the request-response channel without the participation of the OEF. The resulting agent organisation is shown in Figure 3.
Figure 3: Illustration of the prototype's agent organisation.
#### 4.2.2 Implementation details This prototype continually collects real-time data to monitor the delivery process. Due to the limited experimental conditions, it was not feasible to drive a vehicle with the necessary data collection instruments and refrigerated containers to transport the perishable food product from one place to another and build wireless communication with remote agents. We thus used pre-collected IoT and GPS data to simulate real-time data collection during the delivery process. To develop the prototype backend, we primarily used Python. We used other languages, such as HTML and JavaScript, to create the frontend interfaces. We also used the OEF, the AEA framework, Django, OpenStreetMap, etc. to facilitate the development of both the backend and frontend. We developed the agents using the AEA framework. ### Showcase After starting up the OEF, all of the agents, the Django built-in server, and the PostgreSQL service, the prototype is ready for use. Each agent, after initialisation, first executes a registration behaviour. This one-shot proactive behaviour sends a message to the OEF to register its service so that it can be discovered (connected) by other agents. Users can access this prototype via a browser. Figure 4(a) and Figure 4(b) show example screenshots of the prototype's interface. The interface consists of four panels: ordering panel (top left), transport monitoring panel (central), agent chat room (right), and streaming data panel (bottom), as shown in Figure 4(a). As the wholesaler's inventory is empty at the beginning of execution, we first replenish its inventory. We select the replenishment scenario and click the launch button with other default values. This action triggers the wholesaler agent's replenishment behaviour. The wholesaler agent first connects with the OEF, sending it a query message to search for suitable suppliers. As a response, the recipient -- the OEF -- returns a message that contains a list of supplier agents matching the query conditions back to the sender. This process is _matchmaking_, which matches a service requester with a service provider. The wholesaler can then start a dialogue with the discovered suppliers and negotiate with a selected supplier on product purchase. The logistics agent also participates in this dialogue, in which the supplier requests available delivery options from this agent. When these three parties agree on the deal, the logistics agent randomly selects a 3PL from a set of eligible 3PL agents to fulfil the delivery task. The 3PL then begins the transport and monitors the whole journey. For example, Figure 4(a) shows a prototype screenshot during the middle of this journey. The relevant data about the ambient conditions are collected when the delivery is finished.
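The registration and matchmaking steps described above can be sketched schematically as follows. This is an illustrative, self-contained Python toy of a mediated search-and-discovery service with a simple in-memory registry; it is not the API of the OEF or the AEA framework, and all names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ServiceDescription:
    agent_id: str
    role: str                  # e.g. "supplier", "logistics", "3pl"
    attributes: dict           # e.g. {"product": "beef"}

class SimpleRegistry:
    """Toy mediator: agents register service descriptions; requesters query it."""
    def __init__(self):
        self._services = []

    def register(self, desc: ServiceDescription) -> None:
        # One-shot registration behaviour executed by each agent at start-up.
        self._services.append(desc)

    def search(self, role: str, **constraints) -> list:
        # Matchmaking: return the ids of agents whose description satisfies the query.
        return [d.agent_id for d in self._services
                if d.role == role
                and all(d.attributes.get(k) == v for k, v in constraints.items())]

registry = SimpleRegistry()
registry.register(ServiceDescription("supplier-1", "supplier", {"product": "beef"}))
registry.register(ServiceDescription("supplier-2", "supplier", {"product": "pork"}))
registry.register(ServiceDescription("logistics-1", "logistics", {"cold_chain": True}))

# The wholesaler agent queries the mediator for suitable suppliers, then opens a dialogue.
matches = registry.search("supplier", product="beef")
print("matched suppliers:", matches)   # -> ['supplier-1']
```

In the prototype this mediation is performed by the OEF itself, and the subsequent dialogue with the matched suppliers proceeds over the contract-net protocol described in the previous subsection.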
These data can be used for further analytics, for example, to determine product quality, as shown in Figure 4(b).
Figure 4: Example screenshots of the prototype.
## 5 Discussion and Conclusions In this paper, we have explored the technical implementation of an autonomous supply chain system using a MAS approach after summarising MAS literature in supply chain management and highlighting the relationship between Digital Twinning and agent-based supply chain systems. We described four key components of an agent-driven autonomous supply chain system that need to be designed, and developed a concrete A2SC prototype to illustrate this approach, in which the Open Economic Framework (OEF) was adopted to facilitate supply chain agent connection and communication. The OEF inhabitants -- i.e., agents -- were developed to act on behalf of SC entities. The approach illustrated the automation of a perishable goods procurement and wholesale process, consisting of ordering, coordination, transport monitoring, and follow-up analytics. As the automation of the processes themselves inherently requires real-time state information, our prototype overlaps with extant definitions of Digital Twins in Supply Chains. Further research in Digital Twin and MAS integration would be beneficial in the context of supply chain information systems. The prototype is a simple experimental proof of concept of an ASC based on the MIISI model (Xu et al., 2022) used to exemplify task decomposition, communication, and negotiation protocol implementation inherent in the design of an ASC. Its implementation focuses on the two middle layers of the model: _interconnection_ and _integration_. The prototype involves _ten_ agents, significantly fewer than what would be the actual case in a similar scenario setting. Thus future research needs to focus on the scalability of communications and resulting decision-making bottlenecks. Moreover, the prototype's intelligence and automation are both low, being within the automation-skewed region of the autonomy manifold presented in Xu et al. (2022). Message flows enable interaction between agents within this structure, giving it cohesion. This approach also enables the integration of enterprise legacy systems, facilitating heterogeneous software such as ERPs to work together. As an initial attempt, this work seeks to shed light on implementing an ASC system in the real world, serving as a starting point for further studies. Our future work includes involving more stakeholders, expanding the number of agents involved, and designing or introducing languages and protocols used explicitly for A2SCs. ## Acknowledgements This work was supported by the UK EPSRC Connected Everything Network Plus under Grant EP/S036113/1 and the DO-TES project.
Trade disruptions, the pandemic, and the Ukraine war over the past years have adversely affected global supply chains, revealing their vulnerability. Autonomous supply chains have attracted attention in industry and academia as a means of increasing monitoring and robustness. While many theoretical frameworks exist, work that facilitates generalisable technical implementation remains limited. To address this gap, we investigate multi-agent system approaches for implementing autonomous supply chains and present an autonomous economic agent-based technical framework. We illustrate this framework with a practical prototype, studied in a perishable food supply chain scenario, and discuss possible extensions.
2310.00480
Nuclear Quantum Effects in the Acetylene:Ammonia Plastic Co-crystal
Organic molecular solids can exhibit rich phase diagrams. In addition to structurally unique phases, translational and rotational degrees of freedom can melt at different state points, giving rise to partially disordered solid phases. The structural and dynamic disorder in these materials can have a significant impact on the physical properties of the organic solid, necessitating a thorough understanding of disorder at the atomic scale. When these disordered phases form at low temperatures, especially in crystals with light nuclei, the prediction of materials properties can be complicated by the importance of nuclear quantum effects. As an example, we investigate nuclear quantum effects on the structure and dynamics of the orientationally-disordered, translationally-ordered plastic phase of the acetylene:ammonia (1:1) co-crystal that is expected to exist on the surface of Saturn's moon Titan. Titan's low surface temperature (~90 K) suggests that the quantum mechanical behavior of nuclei may be important in this and other molecular solids in these environments. By using neural network potentials combined with ring polymer molecular dynamics simulations, we show that nuclear quantum effects increase orientational disorder and rotational dynamics within the acetylene:ammonia (1:1) co-crystal by weakening hydrogen bonds. Our results suggest that nuclear quantum effects are important to accurately model molecular solids and their physical properties in low temperature environments.
Atul C. Thakur, Richard C. Remsing
2023-09-30T20:15:41
http://arxiv.org/abs/2310.00480v1
# Nuclear Quantum Effects in the Acetylene:Ammonia Plastic Co-crystal ###### Abstract Organic molecular solids can exhibit rich phase diagrams. In addition to structurally unique phases, translational and rotational degrees of freedom can melt at different state points, giving rise to partially disordered solid phases. The structural and dynamic disorder in these materials can have a significant impact on the physical properties of the organic solid, necessitating a thorough understanding of disorder at the atomic scale. When these disordered phases form at low temperatures, especially in crystals with light nuclei, the prediction of materials properties can be complicated by the importance of nuclear quantum effects. As an example, we investigate nuclear quantum effects on the structure and dynamics of the orientationally-disordered, translationally-ordered plastic phase of the acetylene:ammonia (1:1) co-crystal that is expected to exist on the surface of Saturn's moon Titan. Titan's low surface temperature (\(\sim 90\) K) suggests that the quantum mechanical behavior of nuclei may be important in this and other molecular solids in these environments. By using neural network potentials combined with ring polymer molecular dynamics simulations, we show that nuclear quantum effects increase orientational disorder and rotational dynamics within the acetylene:ammonia (1:1) co-crystal by weakening hydrogen bonds. Our results suggest that nuclear quantum effects are important to accurately model molecular solids and their physical properties in low temperature environments. ## I Introduction Molecular crystals play important roles in a wide variety of fields, including pharmaceuticals [1; 2], semiconductor technologies [3; 4; 5], energy storage and transport [6; 7; 8], pesticides [9], and atmospheric and planetary sciences [10; 11; 12; 13; 14]. Like the more familiar class of atomic crystals, molecular crystals are crystalline solids made of repeating molecular subunits. When these solids are made of two or more types of molecules, they are typically called molecular crystals [2; 12; 15]. Molecular co-crystals often exhibit thermodynamic and mechanical properties that are different from the corresponding single component solids, offering unique opportunities for tunable materials [16; 17; 18]. Molecular crystals are usually held together by weak interactions like hydrogen bonding and dispersion interactions. As a result, molecular crystals often display rich phase behavior. In addition to the usual translationally and orientationally ordered crystalline solid phase, translationally ordered and orientationally disordered (or vice versa) phases can also appear. Solids that exhibit translational order but lack orientational order are known as plastic crystals [19; 20; 21; 22]. The plasticity, as suggested by the name, refers to the additional mechanical flexibility encoded by the presence of orientational disorder. Plastic phases can also exhibit different chemical and physical properties than the corresponding orientationally ordered phases. For instance, some plastic phases exhibit different thermodynamic and electrical properties than the ordered phase [23]. Importantly, these differences in physical properties can lead to significant macroscopic consequences. Many molecular co-crystals are predicted to form on the cold (\(\sim\)90 K) surface of Saturn's moon Titan [24; 25; 26; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28]. 
As a result, molecular crystals and co-crystals on Titan play an analogous role to minerals on Earth and have been termed cryominerals. On Titan and similar planetary bodies, the changes in physical properties arising from plastic crystal phases could have implications for geochemical processes and even surface geology [10; 12; 13; 14]. To begin to understand plastic phases of these cryominerals, we use molecular dynamics (MD) simulations to investigate the acetylene:ammonia (1:1) plastic co-crystal [10; 12; 25]. In particular, the quantum mechanical nature of atomic nuclei is expected to be important at the cold temperatures on Titan's surface, and we focus on quantifying the impact of nuclear quantum effects on the structure and dynamics of the acetylene:ammonia (1:1) plastic co-crystal. Nuclear quantum effects (NQEs) on structure and dynamics are significant for light nuclei like hydrogen even at room temperature (300 K) [29; 30; 31]. As a result, the common approximation of using classical nuclei in molecular simulations may be inadequate for many molecular crystals. Indeed, NQEs can play important roles in determining thermodynamic stability [32; 33], thermal conductivity [32], expansivity, and mechanical properties [34]. At the cryogenic conditions on Titan, NQEs are anticipated to play an even greater role in determining the properties of molecular co-crystals. Here, we quantify the role of NQEs in determining the structure and dynamics of the acetylene:ammonia (1:1) co-crystal using neural network potential-based thermostated ring polymer MD simulations. We first discuss simulation details and the theory of approximate real time dynamics within the ring polymer molecular dynamics framework. Then, we discuss the translational and orientational structure of the acetylene:ammonia (1:1) co-crystal as obtained from the classical and quantum simulations, where classical and quantum refer to
有機分子固体は、豊かな相図を呈します。構造がユニークな相に加えて、転倒と回転の自由度が異なる状態点で溶融するため、部分的に不秩序な固体相が生じます。これらの材料における構造と動的不秩序は、有機固体の物理的性質に大きな影響を与える可能性があり、原子スケールの不秩序を理解する徹底的な理解が求められます。低温でこれらの不秩序な相が形成される場合、特にナノの小さい結晶では、核量子効果が材料の性質の予測に複雑さを増す可能性があります。例えば、この研究では、土星の衛星テティスに存在する予想される、アセチレンとアンモニウムの共晶の、方向が不秩序、転倒が秩序化されたプラスチック相の構造と動的特性について核量子効果を調査しています。テティス表面の低温(~90K)は、
2309.03954
A race against the clock: Constraining the timing of cometary bombardment relative to Earth's growth
Comets are considered a potential source of inner solar system volatiles, but the timing of this delivery relative to that of Earth's accretion is still poorly understood. Measurements of xenon isotopes in comet 67P/Churyumov-Gerasimenko revealed that comets partly contributed to the Earth's atmosphere. However, there is no conclusive evidence of a significant cometary component in the Earth's mantle. These geochemical constraints would favour a contribution of comets mainly occurring after the last stages of Earth's formation. Here, we evaluate whether dynamical simulations satisfy these constraints in the context of an Early Instability model. We perform dynamical simulations of the solar system, calculate the probability of collision between comets and Earth analogs component embryos through time and estimate the total cometary mass accreted in Earth analogs as a function of time. While our results are in excellent agreement with geochemical constraints, we also demonstrate that the contribution of comets on Earth might have been delayed with respect to the timing of the instability, due to a stochastic component of the bombardment. More importantly, we show that it is possible that enough cometary mass has been brought to Earth after it had finished forming so that the xenon constraint is not necessarily in conflict with an Early Instability scenario. However, it appears very likely that a few comets were delivered to Earth early in its accretion history, thus contributing to the mantle's budget. Finally, we compare the delivery of cometary material on Earth to Venus and Mars. These results emphasize the stochastic nature of the cometary bombardment in the inner solar system.
Sarah Joiret, Sean N. Raymond, Guillaume Avice, Matthew S. Clement, Rogerio Deienno, David Nesvorný
2023-09-07T18:02:30
http://arxiv.org/abs/2309.03954v1
# A race against the clock: Constraining the timing of cometary bombardment relative to Earth's growth ###### Abstract Comets are considered a potential source of inner solar system volatiles, but the timing of this delivery relative to that of Earth's accretion is still poorly understood. Measurements of xenon isotopes in comet 67P/Churyumov-Gerasimenko revealed that comets partly contributed to the Earth's atmosphere. However, there is no conclusive evidence of a significant cometary component in the Earth's mantle. These geochemical constraints would favour a contribution of comets mainly occurring _after_ the last stages of Earth's formation. Here, we evaluate whether dynamical simulations satisfy these constraints in the context of an _Early Instability_ model. We perform N-body simulations of the solar system's dynamical evolution considering an outer disk of 10000 comets, different initial conditions for the inner solar system and assuming that the giant planet instability happened within \(\simeq\)10 million years of the start of planet formation. Out of 30 simulations, 13 meet a list of criteria for success. We calculate the probability of collision between comets and Earth analogs component embryos through time and estimate the total cometary mass accreted in Earth analogs as a function of time. We determine that the cumulative cometary mass accreted by the Earth analogs during the first 100 million years ranges between 4 \(\times\) 10\({}^{21}\) and 5 \(\times\) 10\({}^{24}\) g. While our results are in excellent agreement with geochemical constraints, we also demonstrate that the contribution of comets on Earth might have been delayed with respect to the timing of the instability, due to a stochastic component of the bombardment. More importantly, we show that it is possible that enough cometary mass has been brought to Earth after it had finished forming so that the xenon constraint is not necessarily in conflict with an _Early Instability_ scenario. Indeed, 2 of the 13 successful simulations provide enough comet delivery after a last giant impact happening later than 30 Myr. However, it appears very likely that a few comets were delivered to Earth early in its accretion history, thus contributing to the mantle's budget. Finally, we compare the delivery of cometary material on Earth to Venus and Mars. These results emphasize the stochastic nature of the cometary bombardment in the inner solar system. solar system formation, orbital dynamics, cometary bombardment, xenon isotopes Sarah Joiret, Sean N. Raymond, Guillaume Avice, Matthew S. Clement, Rogerio Deienno, David Nesvorný ## 1 Introduction Timing is everything, especially when it comes to cometary bombardment on Earth. A bombardment of comets is thought to have been triggered by the giant planet instability (Gomes et al., 2005) early in the history of the solar system. The timing of this instability has often been constrained to the first \(\simeq\) 100 Myr after gas disk dispersal (Morbidelli et al., 2018; Nesvorny et al., 2018; Boehnke & Harrison, 2016; Zellner, 2017; Mojzsis et al., 2019), but recent models seem to favour an extremely early instability (Clement et al., 2018; Liu et al., 2022), probably during the first 10 Myr of the solar system. 
On the other hand, both geochemical constraints and N-body simulations indicate that accretion of the Earth ended between \(\simeq 30\) to 100 Myr after the condensation of Ca-Al-rich inclusions (CAIs), the first solids in the solar system (Kleine et al., 2009; Rudge et al., 2010; Jacobson et al., 2014). One might naively interpret this to imply that the cometary bombardment happened before Earth had finished forming. However, isotopic signatures of xenon in the mantle and in the atmosphere of the Earth challenge this chronology. While recent studies tend to demonstrate that primordial mantle xenon is close to the chondritic value (Holland et al., 2009; Peron and Moreira, 2018; Broadley et al., 2020), primordial atmosphere xenon shows both a chondritic and a cometary component (Marty et al., 2017). Measurements obtained with the Rosetta spacecraft on gases emitted by comet 67P/Churyumov-Gerasimenko revealed a peculiar isotopic composition of cometary xenon with selective depletion in neutron-rich isotopes (\({}^{134}Xe\) and \({}^{136}Xe\)). These depletions are analog to those derived for U-Xe, the primordial xenon component in our early atmosphere, i.e. the atmospheric composition of xenon corrected for mass-dependent isotope fractionation (for a review on atmospheric Xe and U-Xe, see Avice and Marty (2020)). It was shown that the U-Xe isotopic composition matches a mixture of \(\simeq 22\) % cometary xenon and \(\simeq 78\) % chondritic xenon (Marty et al., 2017). Isotopic fractionation of U-Xe has been progressive through time on the fully-formed Earth, leading to the xenon composition of our present-day atmosphere (Avice et al., 2018). Hence, comets would have contributed to the composition of the Earth's atmosphere, but not necessarily the Earth's mantle (see Section 2.1 for a more detailed description of the geochemical constraints on the Earth's mantle). These results point towards a cometary bombardment that mainly occurred after the main accretion phases of the Earth, which contradicts the current understanding of the solar system timeline. In this paper, we propose to address this apparent paradox by constraining the range of plausible timings for cometary bombardments on the early Earth. We approach this issue using N-body simulations of planetary formation in the context of existing solar system dynamical models, combined with calculations of the collision probabilities between comets and Earth's embryos. Our simulations couple the evolution of the giant planets during their dynamical instability with the accretion of terrestrial planets from planetary embryos and planetesimals. They aim to successfully reproduce observational characteristics of our solar system, which mainly include the masses and orbits of planets and populations of small bodies. In particular, the orbital eccentricities and inclinations of the giant planets on one hand, and the small mass of Mars on the other hand, represent key constraints for the models (Raymond et al., 2009; Nesvorny and Morbidelli, 2012). The interaction of Jupiter-mass planets (or smaller ones) with gas or planetesimals has the effect of damping their eccentricities and inclinations, and causes migration (Papaloizou and Terquem, 2006; Morbidelli et al., 2009; Kley and Nelson, 2012). Planets may form a resonant chain as a result of these migrations. 
In the case of the solar system, it was shown that the giant planets should have been in a multi-resonant configuration at the moment of gas disk dispersal (Morbidelli et al., 2007; Pierens and Nelson, 2008). However, the giants of our solar system are on moderately eccentric and inclined orbits; and are not locked in mutual mean motion resonances whatsoever. Similarly, the complex orbital structure of the Kuiper belt seems to imply a complex formation history. These realizations led to the hypothesis of an excitation mechanism between the giant planets which would have disrupted the pristine outer disk of planetesimals (Tsiganis et al., 2005; Gomes et al., 2005; Morbidelli et al., 2007; Levison et al., 2011). This so-called _Nice model_ has prevailed over the last decades, especially considering its success at replicating a wide range of observable properties, such as the orbital architecture of the asteroid belt and Kuiper belt, the population of Jupiter's trojans asteroids, the capture of irregular moons and the obliquities of the giant planets (Nesvorny et al., 2018). Among the major updates of this model are the initial conditions, which now involve five or six giant planets instead of four (Nesvorny, 2011; Batygin et al., 2012; Nesvorny and Morbidelli, 2012). Stabilization of the system is then brought back following the ejection of one or two extra ice giants. Another crucial modification involves the timing of the instability. It was long believed that the dynamical instability, and thus the destabilization of the outer disk of planetesimals, should coincide with a cataclysmic bombardment of comets and asteroids that would have happened about 700 Myr after the formation of the solar system (Gomes et al., 2005). This event, known as the _Late Heavy Bombardment_ (LHB), has been recorded in the cratering history of the Moon and has largely been studied through the Apollo and Luna sample return missions (Tera et al., 1974). However, recent studies (Boehnke and Harrison, 2016; Zellner, 2017; Hartmann, 2019) no longer treat the LHB as a cataclysmic event and, more importantly, show that impact ages of lunar samples might reflect a sampling bias. In addition, an instability earlier than 20 - 100 Myr after CAIs can be inferred from new observations and measurements, such as the differences in highly-siderophile elements between the mantle of the Earth and the Moon (Morbidelli et al., 2018), the existence of a binary Trojan of Jupiter (Nesvorny et al., 2018) or the reset ages of minerals in certain meteorites (Mojzsis et al., 2019). For these reasons, the LHB is not considered as a firm constraint for the timing of the _Nice model_ anymore (Morbidelli et al., 2018). Dynamically-speaking, it has also been demonstrated that the survival of the terrestrial planets is very unlikely considering a late instability (Kaib & Chambers, 2016). An early instability is preferred over a delayed one for both the Solar System (Deienno et al., 2017; Ribeiro et al., 2020) and the general case of giant planets and outer planetesimal disks (Raymond et al., 2010). In agreement with this scenario, Liu et al. (2022) show that the dynamical instability may have been triggered by the dispersal of the gas disk. The _Early Instability_ model makes the assumption that the giant planet instability happened early enough to directly influence terrestrial planet formation. 
In particular, the small mass of Mars is best replicated when the instability happens 1-10 Myr after the dispersal of the primordial gas disk (Clement et al., 2018). The small mass of Mars is a strong observational constraint that needs to be verified in the simulations' outcomes (Raymond et al., 2009). The classical model, which assumes a smooth distribution of planetesimals as initial conditions for the inner solar system, has consistently failed to reproduce Mars analogs (Wetherill, 1991; Chambers & Wetherill, 1998; O'Brien et al., 2006; Raymond et al., 2006; Raymond et al., 2009; Morishima et al., 2010; Fischer & Ciesla, 2014; Kaib & Cowan, 2015). However, several models - that are not mutually exclusive - have been proposed to overcome this problem. As stated above, the _Early Instability_ model is very successful at depleting Mars' feeding zone and the asteroid belt, while having little effect on the growth of Venus and Earth (Clement et al., 2018, 2019). Alternatively, it is possible that there was initially less material in that area of the solar system leading to a smaller Mars and depleted asteroid belt (Hansen, 2009). A physical justification for this type of initial conditions can be provided by ALMA observations revealing how dust in protoplanetary disks is concentrated into rings (Huang et al., 2018). A number of dynamical models explain how these rings may have been shaped in the early solar system. In the _Grand Tack_ model for example, an early temporary intrusion of Jupiter and Saturn in the inner solar system could truncate the disk of terrestrial embryos at \(\simeq 1\) AU and confine it to a ring (Walsh et al., 2011). More recently, the _annulus_ model has proposed that our solar system formed from rings of planetesimals (Drazkowska & Alibert, 2017; Morbidelli et al., 2022; Izidoro et al., 2022). These rings would have been located near the silicate sublimation line, water snowline and CO snowline and would respectively explain the formation of terrestrial planets, giant planets and primordial outer disk of planetesimals. Lykawka & Ito (2023) demonstrate that chaotic excitation of disk objects triggered by a near-resonant Jupiter-Saturn configuration may also result in a narrow disk from which the terrestrial planets and the asteroid belt can form. In this study, our simulations start with a ring of terrestrial planetesimals in the inner solar system; the _Early instability_ model is then included by forcing the giant planets into a dynamical instability very early in their history (Nesvorny et al., 2021). Analytical derivations and numerical modeling of rocky planet formation predict a runaway growth, in which a starting population of small planetesimals evolves in a bimodal mass distribution comprising both Moon to Mars-sized embryos and smaller-sized planetesimals (Greenberg et al., 1978; Wetherill & Stewart, 1993; Kokubo & Ida, 1996). This phase is self-limiting and is followed by the _oligarchic growth_, during which small bodies can only participate in the growth of the embryos (Kokubo & Ida, 1998). Walsh & Levison (2019) have shown that by the end of the gas disk lifetime (considering a gas disk density decreasing exponentially with a 2 Myr timescale) the population comprises nearly 90% planetary embryos by mass around 1 AU. This mass distribution between embryos and planetesimals is taken as the starting configuration of our simulations (see section 2.2.3 for simulation details). 
The final stage of terrestrial planet formation occurred through giant impacts among the large embryos (Wetherill, 1985; Chambers & Wetherill, 1998; Agnor et al., 1999; Asphaug et al., 2006). The last giant impact on Earth would be the Moon-forming impact and would have led to the final phases of Earth's differentiation (Canup & Asphaug, 2001; Cuk & Stewart, 2012; Canup et al., 2022). The timing of this event can be derived from the Hf-W chronometry and dynamical simulations and is estimated to range between 30 and 100 Myr after CAIs, thus corresponding to the accretion timing of the Earth (Kleine et al., 2009; Rudge et al., 2010; Jacobson et al., 2014). During the last few years, an alternative school of thought has emerged to explain the growth of rocky embryos. It was proposed that accretion of inward-drifting _pebbles_ from the outer solar system could be extremely efficient in the presence of a gaseous disk and result in a very rapid formation of the terrestrial planets (Lambrechts et al., 2019; Johansen et al., 2021). In this scenario, Earth would have finished forming by the time of gas disk dispersal, leaving plenty of time for the comets to contribute to Earth's atmosphere without contributing to its mantle. Hence, the xenon isotopic dichotomy between the mantle and the atmosphere of the Earth could be explained more easily. However, recent studies have proven the contribution of outer solar system material in rocky planets to be small (Burkhardt et al., 2021; Mah et al., 2022). And even if Earth had formed in a few millions years, it would still require one late giant impact to form the Moon. Thus, it remains unclear whether the pebble accretion model is consistent with the observed isotopic composition of inner solar system bodies. ## 2 Material and Methods ### Geochemical constraints The xenon isotopic dichotomy between the mantle and the atmosphere of the Earth should be qualified by some considerations. The chondritic origin of the mantle has been determined by comparing the mantle-air Xe mixing lines to different cosmochemical endmembers. The uncertainties on the mantle Xe measurements give rise to an error envelope between the extrapolated minimum and maximum mantle-air mixing lines (the two dashed lines in panels a and b of Figure 1, taken from Peron and Moreira (2018)). If a cosmochemical endmember falls into that error envelope, it means that the mantle composition could be a mixture between air (Xe atmospheric value) and that endmember. The different endmembers shown in Figure 1 are Q-Xe, AVCC-Xe, U-Xe and SW-Xe. Q-phase is the carbon-rich residue of a meteorite after acid digestion, it is the main carrier of heavy noble gases in chondrites. AVCC is an acronym for Average Value Carbonaceous Chondrites. U-Xe is the primordial component of Xe in the atmosphere of the Earth and it was shown to be a mixture between 22% cometary Xe and 78% chondritic Xe (Marty et al., 2017). And SW stands for solar wind, it is the solar value. Panel a of Figure 1 shows that the extrapolation of mantle-air Xe mixing lines does not fit any known cosmochemical endmember within uncertainty. Panel b, on the other hand, shows that the extrapolation of mantle-air Xe mixing lines could fit Q-Xe and AVCC-Xe, while U-Xe and SW-Xe are outside the error envelope. This is how it was concluded that the Earth mantle is chondritic. However, it does not prove that there was absolutely no cometary contribution to the Earth's mantle. 
For example, a mixture between 50% Q-Xe and 50% U-Xe would still fall within the error envelope of this mantle-air mixing line. As we know that U-Xe contains around 22% of cometary Xe, that would mean that the Earth's mantle could include up to \(\simeq\) 10% of cometary material. Moreover, the Earth's deep mantle deficit in \({}^{86}\)Kr relative to chondritic Kr proves that a purely chondritic origin for the mantle is not completely established, as comet 67P/Churyumov-Gerasimenko is the only known planetary body that has also revealed a deficit in \({}^{86}\)Kr to this day (Peron et al., 2021). Hence, comets would have contributed to the deep mantle's Kr budget during the earliest phase of Earth's growth. Be that as it may, even with a 10% cometary component in the Earth's mantle, the primordial atmosphere would still contain at least two times more cometary xenon. Therefore, the xenon isotopic dichotomy between the mantle and the atmosphere of the Earth, and its related paradox, remain valid. ### N-body simulations We perform N-body simulations to constrain the timing of cometary bombardment relative to accretion of Earth analogs. We start the simulations at the time of gas disk dispersal (\(t_{0}\)). At this stage, the giant planets are already formed and the inner solar system comprises planetesimals and Moon- to Mars-sized embryos. We also consider an outer disk of planetesimals beyond the orbits of the giant planets to account for the comets. All the dynamics of the system occur in a gas-free environment and only gravitational forces - which dominate the trajectories for particles \(>\) 100 \(\mu\)m (Koschny et al., 2019) - are taken into account. Simulations last for 100 Myr after \(t_{0}\). #### 2.2.1 Collisional algorithm Simulations are performed using _GENGA_(Grimm and Stadel, 2014), a hybrid symplectic N-body integrator. _Hybrid_ means that close encounters are handled with good energy conservation; and _symplectic_ means that the numerical integrator conserves the Hamiltonian structure of the equations of motion. _GENGA_ is based on the integration scheme of _Mercury_(Chambers, 1999) but is GPU-accelerated, implying that many operations are performed in parallel. Aside from being faster than _Mercury_, _GENGA_ also includes an interpolation option that can force a set of bodies to follow - or evolve towards - a certain orbit. This feature is used to include successful simulations of the giant planets that accurately reproduce the outer solar system (Deienno et al., 2018; Clement et al., 2021; Nesvorny et al., 2021). #### 2.2.2 Giant planet instability As a reference, we consider two dynamical cases with no instability. One of them includes the terrestrial embryos and planetesimals only, and the other one also comprises the four giant planets set on their current orbits at the beginning of the simulation. We refer to them as the _nojov_ and _jovi_ sets, respectively. These sets serve as control cases for the sets including an instability. Regarding the instability cases, we take simulation outputs for the giant planets from Deienno et al. (2018), Clement et al. (2021b) and case 1 in Nesvorny et al. (2021) and refer to them as the _DE0_, _CL2_ and _NE6_ sets respectively. Conducting the integration of the giant planets separately from a more comprehensive simulation ensures that the orbital evolution of the giant planets is reproduced exactly like in the successful runs. 
In particular, many outer solar system constraints, such as the wide radial spacing and orbital eccentricities of giant planets, must be satisfied (Nesvorny & Morbidelli, 2012). More importantly, it ensures that the instability is triggered at a precise timing. As we consider the _Early instability_ model in this study, we set \(t_{inst}\simeq 0.6\) Myr (almost directly after the gas disk dispersal) in the _DE0_ set, \(t_{inst}\simeq 2\) Myr in the _CL2_ set and \(t_{inst}\simeq 6\) Myr in the _NE6_ set (cf. Figure 2). Migration models predict that the giant planets' orbits are in mutual resonances at the moment of gas disk dispersal (Papaloizou & Terquem, 2006; Morbidelli et al., 2007; Pierens & Nelson, 2008). In particular, it was shown that Jupiter's orbital ratio with Saturn is determinant for the dynamical evolution of the inner solar system (Brasser et al., 2009; Morbidelli et al., 2010). The 3:2 Jupiter-Saturn resonance is often considered as the only possible primordial Figure 1: Light xenon constraints on the mantle of the Earth, taken from Péron & Moreira (2018). The blue dot represents measured data and the orange dot represents the corrected data for atmospheric contamination. The dashed lines are extrapolated mantle-air mixing lines that indicate the uncertainties on the mantle measurements with a 95% confidence interval. These data suggest a chondritic origin (Q-phase or AVCC) for mantle xenon. ([https://creativecommons.org/licenses/by-nc-nd/4.0/](https://creativecommons.org/licenses/by-nc-nd/4.0/)) configuration for the outer solar system (Masset & Snellgrove, 2001; Morbidelli et al., 2007; Pierens & Nelson, 2008). Hence, in both Deienno et al. (2018) and Nesvorny et al. (2021), the selected simulation of the giant planets starts from an initial five-planet multi-resonant configuration with period ratios of 3:2, 3:2, 2:1, 3:2 at \(t_{0}\), as it was shown to generate the most plausible dynamical evolutions (Nesvorny & Morbidelli, 2012). However, it was found that a 2:1 Jupiter-Saturn resonance is also possible assuming specific combinations of gas disk parameters (Pierens et al., 2014). Clement et al. (2021) studied this primordial 2:1 MMR (mean motion resonance) with inflated eccentricities, starting with five giant planets in a 2:1, 4:3, 3:2, 3:2 chain. This configuration was shown to be highly successful at generating a final system of planets with low eccentricities and inclinations. In Deienno et al. (2018), the onset of the instability actually happens 5.2 Myr after \(t_{0}\) (i.e. after the initial multi-resonant configuration); but in our _DE0_ set, \(t_{inst}\) was directly initiated to investigate the scenario where instability is triggered by the gas disk dispersal (Liu et al., 2022). Hence, in our _DE0_ configuration, Neptune is already at 28 AU at \(t_{0}\) and its smooth outward migration through the disk before the instability has been neglected (refer to Figure 1 in Deienno et al. (2018) for a view of the entire evolution from that work). The mass of the additional ice giant is comparable to that of Uranus or Neptune in the _DE0_ and _NE6_ sets (i.e. \(\simeq\) 15 \(M_{Earth}\)), but is about half their masses in the _CL2_ set (i.e. \(\simeq\) 8 \(M_{Earth}\)). This extra planet is ejected out of the system during the instability. The masses of the four other giant planets are set to their current values in every case. Regarding the configuration of the outer disk of planetesimals, Clement et al. (2021) and Nesvorny et al. 
(2021) considered a ring beyond the orbit of Neptune, with outer edge at 30 AU, surface density profile that falls off as \(\sum(r)=r^{-1}\), and total mass of 35 \(M_{Earth}\); whereas a total mass of 30 \(M_{Earth}\) was studied in case 1 of Deienno et al. (2018). In all these three separate integrations, the outer disk already interacted gravitationally with the giant planets, influencing their orbits. By the end of the instability, the four remaining planets have stabilized and we interpolate their orbits once again to make them smoothly evolve towards their current orbit. This interpolation lasts until the end of the simulation, i.e. 100 Myr after \(t_{0}\). This artificial procedure for the evolution of the giant planets guarantees that the cometary bombardment is directly induced by a dynamical instability matching the main solar system constraints. #### 2.2.3 Initial conditions for the inner and outer planetesimal disks Inner bodies are uniformly distributed in a ring extending from 0.7 to 1.2 AU. They follow a surface density profile \(\sum(r)=r^{-1}\). The initial eccentricities and initial inclinations follow a Rayleigh distributions with \(\sigma_{e}\) = 0.005 and \(\sigma_{i}\) = 0.0025 respectively, as considered in Nesvorny et al. (2021). In order to account for a faster or slower accretion timing relative to the cometary bombardment, two types of set were considered for the masses of the terrestrial embryos. The _big_ set explores the scenario where Mars-sized embryos dominate the inner planetesimals disk, whereas the _small_ set investigates embryos with a few lunar masses. As noted in Section 1, the inner population around 1 AU should comprise \(\simeq\) 90% planetary embryos by mass and \(\simeq\) 10% planetesimals. We set the total mass of the inner ring to 2.5 \(M_{Earth}\), comprising 500 planetesimals and 20 or 40 embryos for the _big_ and _small_ sets respectively. With this set of initial conditions, inner planetesimals have a mass of 5 x 10\({}^{-4}\)\(M_{Earth}\), while embryos have a mass of 0.1125 \(M_{Earth}\simeq\) 1.047 \(M_{Mars}\) in the _big_ set and a mass of 5.625 x 10\({}^{-2}\)\(M_{Earth}\simeq\) 0.52 \(M_{Mars}\simeq\) 4.57 \(M_{Moon}\) in the _small_ set. We take a bulk density of 3 \(g/cm^{3}\) for embryos and 2 \(g/cm^{3}\) for planetesimals (Carry, 2012). Regarding the comets, we consider an outer ring of planetesimals uniformly distributed between 21 and 30 AU. The initial eccentricities and inclinations also follow a Rayleigh distribution with \(\sigma_{e}\) = 0.005 and \(\sigma_{i}\) = 0.0025 respectively. A cold disk with an inner edge at 21 AU means that, at the beginning of the integrations of the _DE0_ and _NE6_ sets, the outermost planet is already inside the cold disk, even though its slow outward migration should have already excited this portion of the disk. This can lead to very early scattering of comets into the inner Solar System, even before the giant planet dynamical instability. A more self-consistent approach would have been to model the excitation of the outer planetesimal disk before the onset of instability, as in Ribeiro et al. (2020). To account for this issue, in some of our analysis we divided the cometary bombardment into components before and after the instability. Nonetheless, proceeding this way, we did not find significant changes in the orders of magnitude of the cometary mass brought to the terrestrial planets. 
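As a rough illustration of how such initial conditions might be drawn (a minimal sketch, not the input-generation code actually used with _GENGA_; the function name, random seed and element ordering are arbitrary), the rings and Rayleigh-distributed eccentricities and inclinations described above can be sampled as follows:

```python
import numpy as np

rng = np.random.default_rng(0)  # arbitrary seed

def sample_ring(n, a_min, a_max, sigma_e=0.005, sigma_i=0.0025):
    """Draw orbital elements for a ring of bodies.

    For a surface density Sigma(r) ~ 1/r, the number of bodies per unit
    radius dN/dr ~ Sigma(r) * r is constant, so semi-major axes are simply
    uniform in [a_min, a_max] (AU).  Eccentricities and inclinations follow
    Rayleigh distributions; the remaining angles are uniform on [0, 2*pi).
    """
    a = rng.uniform(a_min, a_max, n)
    e = rng.rayleigh(sigma_e, n)
    inc = rng.rayleigh(sigma_i, n)          # radians
    node, peri, mean_anom = rng.uniform(0.0, 2.0 * np.pi, (3, n))
    return np.column_stack([a, e, inc, node, peri, mean_anom])

# Inner ring (0.7-1.2 AU): 500 planetesimals of 5e-4 M_Earth plus
# 20 "big" embryos of 0.1125 M_Earth (or 40 "small" ones of 5.625e-2 M_Earth),
# for a total of 2.5 M_Earth.
inner_planetesimals = sample_ring(500, 0.7, 1.2)
inner_embryos = sample_ring(20, 0.7, 1.2)

# Outer ring (21-30 AU): 10000 comets of 2.5e-3 M_Earth each (25 M_Earth total).
comets = sample_ring(10000, 21.0, 30.0)
```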
There are 10000 comets representing a total mass of 25 \(M_{Earth}\), which means that every comet has a mass of 2.5 x 10\({}^{-3}\)\(M_{Earth}\simeq\) 1.5 \(\times\) 10\({}^{25}\) g \(\simeq\)\(M_{Pluto}\). Their bulk density is set to 0.5 \(g/cm^{3}\). Even though the masses of the comets are not necessarily accurate, it was shown that the primordial outer disk of planetesimals should have had some hundreds to thousands Pluto-mass bodies, and they could represent 10% to 40% of the total disk mass (Nesvorny and Vokrouhlicky, 2016; Kaib et al., 2021). Here, we consider a larger number of these massive Kuiper belt objects to balance the lack of smaller comets in our initial conditions. #### 2.2.4 Summary of simulations and success criteria Overall, we explore two sets for the initial conditions of the inner solar system - the _big_ and _small_ sets - for each of the five dynamical scenarios - _nojov_, _jovi_, _DE0_, _CL2_ and _NE6_. This adds up to a total of ten sets. For each of these, five simulations were run with randomized orbital elements, within the framework detailed in section 2.2.3. The number of simulations performed for each set is mainly limited by their duration and significant computational cost. A simulation is successful when its outcome matches most of the solar system constraints. In the past few decades, several efforts have been made to measure this success in absolute terms. The most commonly used metrics are the angular momentum deficit (AMD, or \(S_{d}\)) introduced by Laskar (1997) and the radial mass concentration (RMC, or \(S_{c}\)) introduced by Chambers (2001). Figure 2: Early evolution of the giant planets in our three instability simulations. The onset of the instability happens at \(t_{inst}\)\(\simeq\) 0.6 Myr in the _DE0_ set, \(t_{inst}\)\(\simeq\) 2 Myr in the _CL2_ set and \(t_{inst}\)\(\simeq\) 6 Myr in the _NE6_ set. In each case, the extra ice giant is ejected right after \(t_{inst}\) and the four remaining giants start stabilizing again. The black vertical lines indicate the starting point of the second interpolation during which the giant planets evolve smoothly towards their current orbit. We only show the dynamical evolution of the first \(\simeq\) 10 Myr for a visualization purpose. In Deienno et al. (2018), the instability actually starts at t \(\simeq\) 5.2 Myr after a smooth migration of Neptune through the disk; however, we have neglected this outward migration in case _DE0_, and Neptune is already at 28 AU at \(t_{0}\). The angular momentum deficit is very useful to assess the dynamical excitation of the newly-formed rocky planets relative to a perfectly circular and coplanar system, and is defined in its normalized form (Chambers, 2001) as \[AMD=\frac{\sum_{j}m_{j}\sqrt{a_{j}}(1-\sqrt{1-e_{j}^{2}}\cos i_{j})}{\sum_{j}m_{j} \sqrt{a_{j}}} \tag{1}\] where \(m_{j}\), \(a_{j}\), \(e_{j}\) and \(i_{j}\) are the mass, semimajor axis, eccentricty and inclination of planet \(j\) respectively. The AMD of the solar system's terrestrial planets is 0.0018. On the other hand, the radial mass concentration provides information on the mass distribution of these planets and is defined as \[RMC=max(\frac{\sum_{j}m_{j}}{\sum_{j}m_{j}(\log_{10}(\frac{a}{a_{j}})^{2}}) \tag{2}\] where the maximum is taken over a. In particular, this value is very sensitive to the small Mars problem (Raymond et al., 2009). For comparison, the RMC of the solar system's terrestrial planets is 89.9 but a bigger Mars would decrease this number. However, Nesvorny et al. 
(2021) and Clement et al. (2023) show the limitations of these statistics and propose new constraints on the terrestrial planets. Our metrics for success is largely inspired by Nesvorny et al. (2021) and obey the following scheme: 1. The outcome of the simulation must include two (and only two) planets with mass M \(\gtrsim\frac{M_{Earth}}{2}\) between \(\simeq 0.6\) and 1.3 AU. These two planets are called the Venus and Earth analogs. 2. Only one planet can be located between 1.3 and 2.0 AU, we call it a Mars analog. Its mass must be \(\lesssim\frac{M_{Earth}}{3}\). However, the absence of a Mars analog is not a constraint for the simulation to be considered a success. 3. Only one planet can be located at a distance \(<0.6\) AU, we call it a Mercury analog. Its mass must be \(\lesssim\frac{M_{Earth}}{4}\). However, the absence of a Mercury analog is not a constraint for the simulation to be considered a success. 4. For every remaining successful simulations, we calculate the corresponding AMD and RMC and compare it to the solar system. We also calculate the collision rate of comets with Earth analogs from each of these successful simulations. ### Collision rate calculations We use an algorithm based on the Opik/Wetherill approach (Wetherill, 1967) including considerations from Farinella and Davis (1992). This method calculates the probability of collision between two planetary bodies in the generalized case where the orbits of both bodies are ellipses. In this section, we present the method without any mathematical demonstration. For more details, please refer to Wetherill (1967). Let (\(a_{e}\), \(e_{e}\), \(i_{e}\)) and (\(a_{c}\), \(e_{c}\), \(i_{c}\)) be sets of orbital elements of an Earth analog embryo of radius R and of a comet of radius r, respectively. The reference frame is set so that the origin is located at the intersection between the orbit of the Earth analog embryo and the line of mutual nodes between the two orbit planes. X axis is the node line and the XY plane is the plane of the embryo orbit. A drawing of this configuration can be found in Figure 3 of Greenberg (1982). It is assumed that the trajectories of both the Earth analog embryo and the comet are rectilinear near their point of approach. XY' is the trajectory of the Earth analog embryo and is in the plane of the reference frame. AB is the trajectory of the comet, where A is the intersection between the comet orbit and the mutual node line, and B is a point of the comet orbit but is not on the plane of reference. Wetherill (1967) assumes that the longitudes of ascending nodes (\(\Omega_{e}\) and \(\Omega_{c}\)) and arguments of perihelion (\(\omega_{e}\) and \(\omega_{c}\)) vary uniformly with time, so there is a uniform probability distribution for these parameters between 0 and \(2\pi\). The probability of collision between the two bodies will thus be averaged over all their possible collision geometries and these parameters are not required as input values. We only consider pairs of bodies for which \(q_{e}<Q_{c}\) and \(Q_{e}>q_{c}\), where \(q=a(1-e)\) is the perihelion and \(Q=a(1+e)\) the aphelion of a given body. Otherwise, the orbits do not intersect each other and the collision probability is zero (Farinella and Davis, 1992). The mutual inclination i' of the two bodies is given by \[\cos i^{\prime}=\cos i_{e}\cos i_{c}+\sin i_{e}\sin i_{c}\cos\Delta\Omega \tag{3}\] where \(\Delta\Omega\) is the difference between the longitudes of ascending nodes. 
It is not necessary to know \(\Delta\Omega\) because - as stated above - all relative orientations of the nodes are assumed to be equally probable. Thereby, we will further integrate the collision probability over \(\Delta\Omega\), which is equivalent to varying all the possible mutual inclinations i' between the two orbit planes. The distance of the Earth analog embryo from the Sun is defined as \[r=\frac{a_{e}(1-e_{e}^{2})}{1+e_{e}\cos\theta_{e}} \tag{4}\] where \(\theta_{e}\) is the true anomaly of the Earth analog embryo. Again, it is not necessary to know \(\theta_{e}\). As it is assumed that all values of the argument of perihelion \(\omega_{e}\) are equally probable, we will further integrate the collision probability over \(\omega_{e}\), or - in an equivalent manner - over the true anomaly in the orbit of the Earth analog embryo (\(\theta_{e}\)) at the point of closest approach between the two orbits. Here, Wetherill implicitly assumes that the probability distribution for the true anomaly \(\theta_{e}\) of closest approach is also uniform from 0 to \(2\pi\). This assumption can introduce error factors of the order of the orbital eccentricities (Greenberg 1982). However, Farinella & Davis (1992) showed that these errors can be reduced to the smaller of the two eccentricities using a suitable ordering of the eccentricities in the algorithm; and, in our case, the Earth analog embryos usually have much smaller eccentricities than the colliding comets. We set \[c=\frac{1}{8\pi^{2}\sin i^{\prime}a_{e}^{2}a_{c}^{2}\sqrt{(1-e_{e}^{2})(1-e_{c}^{2})}} \tag{5}\] And we have \[\cot\alpha_{e}=\pm\sqrt{\frac{a_{e}^{2}e_{e}^{2}-r^{2}(\frac{a_{e}}{r}-1)^{2}}{a_{e}^{2}(1-e_{e}^{2})}} \tag{6}\] \[\cot\alpha_{c}=\pm\sqrt{\frac{a_{c}^{2}e_{c}^{2}-r^{2}(\frac{a_{c}}{r}-1)^{2}}{a_{c}^{2}(1-e_{c}^{2})}} \tag{7}\] where \(\alpha_{e}\) is the angle between the X axis (or mutual node line) and XY' (the trajectory of the Earth analog embryo), and \(\alpha_{c}\) is the angle between the X axis and AB (the trajectory of the comet). The relative velocity U between the two bodies can be obtained from \[U_{+}^{2}=\frac{GM}{r}\bigg{[}4-\frac{r}{a_{e}}-\frac{r}{a_{c}}-2\big{[}\frac{a_{e}a_{c}}{r^{2}}(1-e_{e}^{2})(1-e_{c}^{2})\big{]}^{\frac{1}{2}}(\cos i^{\prime}+|\cot\alpha_{e}||\cot\alpha_{c}|)\bigg{]} \tag{8}\] and \[U_{-}^{2}=\frac{GM}{r}\bigg{[}4-\frac{r}{a_{e}}-\frac{r}{a_{c}}-2\big{[}\frac{a_{e}a_{c}}{r^{2}}(1-e_{e}^{2})(1-e_{c}^{2})\big{]}^{\frac{1}{2}}(\cos i^{\prime}-|\cot\alpha_{e}||\cot\alpha_{c}|)\bigg{]} \tag{9}\] where we account for the fact that, for a given value of r, \(\cot\alpha_{e}\cot\alpha_{c}\) can take two values, depending on whether \(\cot\alpha_{e}\) and \(\cot\alpha_{c}\) have the same or opposite signs. GM is the heliocentric gravitational constant and is equal to \(1.3202\times 10^{26}\) \(km^{3}\,yr^{-2}\). Thus, we have \[U=\frac{|U_{+}|+|U_{-}|}{2} \tag{10}\] expressed in km/year. It must be integrated over all possible true anomalies of the Earth analog embryo \(\theta_{e}\), and over all possible relative longitudes of ascending node of the two bodies \(\Delta\Omega\). We can divide it by \(3.154\times 10^{7}\) to have it expressed in km/s. The intrinsic collision probability per unit of time (i.e. per year in our case) will be \[P_{i}=\frac{\left(|U_{+}|+|U_{-}|\right)r\,c}{|\cot\alpha_{c}|} \tag{11}\] and must also be integrated over all \(\theta_{e}\), and \(\Delta\Omega\) to obtain the total intrinsic collision probability per unit of time \(P_{tot}\). 
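As a concrete illustration, the following is a minimal numerical sketch of Eqs. (3)-(11), averaging over a grid in \(\theta_{e}\) and \(\Delta\Omega\). It is not the authors' implementation: the function name, grid resolution, and the regularization where \(\sin i^{\prime}\) or \(\cot\alpha_{c}\) vanish are illustrative choices.

```python
import numpy as np

AU_KM = 1.495978707e8      # km per AU
GM_SUN = 1.3202e26         # heliocentric GM in km^3 yr^-2, as in the text

def intrinsic_collision_probability(a_e, e_e, i_e, a_c, e_c, i_c,
                                    n_theta=360, n_domega=360):
    """Opik/Wetherill intrinsic collision probability P_tot (yr^-1 km^-2)
    between an embryo (a_e, e_e, i_e) and a comet (a_c, e_c, i_c).

    Semi-major axes are in AU, inclinations in radians.  Eq. (11) is averaged
    over the true anomaly theta_e and the relative node longitude Delta-Omega,
    both taken uniform on [0, 2*pi)."""
    a_e, a_c = a_e * AU_KM, a_c * AU_KM
    q_e, Q_e = a_e * (1 - e_e), a_e * (1 + e_e)
    q_c, Q_c = a_c * (1 - e_c), a_c * (1 + e_c)
    if not (q_e < Q_c and Q_e > q_c):       # orbits cannot cross
        return 0.0

    dom = np.linspace(0.0, 2 * np.pi, n_domega, endpoint=False)
    th = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    dom, th = np.meshgrid(dom, th, indexing="ij")

    # Eq. (3): mutual inclination i'
    cos_ip = np.cos(i_e) * np.cos(i_c) + np.sin(i_e) * np.sin(i_c) * np.cos(dom)
    sin_ip = np.sqrt(np.clip(1.0 - cos_ip**2, 1e-12, None))

    # Eq. (4): heliocentric distance of the embryo at true anomaly theta_e
    r = a_e * (1 - e_e**2) / (1 + e_e * np.cos(th))
    valid = (r >= q_c) & (r <= Q_c)         # the comet must be able to reach r

    # Eqs. (6)-(7): |cot alpha| = |radial / transverse velocity| on each orbit
    cot_e = np.sqrt(np.clip(a_e**2 * e_e**2 - (a_e - r)**2, 0.0, None)
                    / (a_e**2 * (1 - e_e**2)))
    cot_c = np.sqrt(np.clip(a_c**2 * e_c**2 - (a_c - r)**2, 0.0, None)
                    / (a_c**2 * (1 - e_c**2)))

    # Eqs. (8)-(9): the two possible relative speeds (km/yr)
    base = (GM_SUN / r) * (4.0 - r / a_e - r / a_c)
    cross = 2.0 * (GM_SUN / r) * np.sqrt(
        a_e * a_c * (1 - e_e**2) * (1 - e_c**2)) / r
    u_plus = np.sqrt(np.clip(base - cross * (cos_ip + cot_e * cot_c), 0.0, None))
    u_minus = np.sqrt(np.clip(base - cross * (cos_ip - cot_e * cot_c), 0.0, None))

    # Eq. (5): geometric prefactor
    c = 1.0 / (8.0 * np.pi**2 * sin_ip * a_e**2 * a_c**2
               * np.sqrt((1 - e_e**2) * (1 - e_c**2)))

    # Eq. (11), integrated over theta_e and Delta-Omega
    with np.errstate(divide="ignore", invalid="ignore"):
        p_i = np.where(valid & (cot_c > 0.0),
                       (u_plus + u_minus) * r * c / cot_c, 0.0)
    return float(np.sum(p_i) * (2 * np.pi / n_theta) * (2 * np.pi / n_domega))

# Eq. (12) below then converts P_tot into an expected number of impacts over
# dt years:  n_coll = P_tot * (R_embryo_km + r_comet_km)**2 * f_grav * dt
```

The conversion of \(P_{tot}\) into an expected number of impacts, via the target size and gravitational focusing, follows in Eqs. (12)-(13) below.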
The collision probability is called _intrinsic_ because it only depends on the two orbits involved but is independent on the size of the corresponding bodies. In other words, it defines the collision probability per unit of time between two bodies when the sum of their respective radii equals 1 km. Hence, the number of collisions between an Earth embryo and a comet occuring during a certain time interval \(\Delta\)t can be expressed as \[n_{coll}=P_{tot}(R+r)^{2}f_{grav}\Delta t \tag{12}\] where R and r are the radii in km of the Earth analog embryo and the comet respectively, and \(f_{grav}\) (dimensionless) is the gravitational focus of the Earth analog embryo given by \[f_{grav}=1+\frac{v_{esc,e}^{2}}{U^{2}} \tag{13}\] with \(v_{esc,e}=\sqrt{\frac{2GM_{e}}{R}}\) the escape velocity of the Earth analog embryo. The cometary mass hitting an Earth analog constituent embryo during a certain time interval can be obtained by multiplying the number of collisions \(n_{coll}\) by \(1.5\times 10^{25}\) g, the mass of one comet in our simulations. For each timestep of a simulation (from 0 to 100 Myr) and for each embryo or planetesimal that formed the Earth analog, we calculate the probability of collision with each of the 10000 comets. By adding together the contribution of all 10000 comets in a given embryo/planetesimal, we can derive the cometary mass accreted by each embryo/planetesimal as a function of time. Then, by adding together the cometary contributions in all of the Earth analog embryos and planetesimals - which eventually bash into each other to form the Earth analog -, we obtain the total cometary mass accreted in the Earth analog as a function of time. For the sake of simplicity, in what follows, we will use the term 'Earth constituent embryos' even when we refer to Earth constituent planetesimals. ## 3 Results ### N-body simulations: diversity of outcomes The precise values for the masses, semi-major axes, AMD and RMC of the terrestrial planet analogs from each successful simulation at t = 100 Myr can be found in Table 1. Out of the ten _nojov_ simulations (including both _big_ and _small_ initial conditions for the inner ring of rocky embryos), five are successful. The median RMC value of all ten simulations is 45.5 and there are a lot of small planetesimals remaining in the inner part of the solar system at t = 100 Myr. When the contribution of the giant planets is added to the initial conditions (_jovi_ set), the median RMC value increases to 66.5. However, only one simulation out of ten is successful according to our success criteria and the median AMD value also increases (from 0.006 to 0.0155). Therefore, the presence of the giant planets have an important effect on the formation of the inner planets (Kaib & Chambers, 2016). We include an early instability among the giant planets and an outer disk of 10000 planetesimals (the _DE0_, _CL2_ and _NE6_ sets) and the number of successful simulations increases, with slightly higher median RMC and smaller AMD (i.e. closer to the values of our solar system: AMD = 0.0018 and RMC = 89.9). An overview showing the RMC and AMD values of all the successful runs is presented in Figure 3. Out of the 10 simulations performed for each instability case, 5 are successful for the _DE0_ set, 4 for the _CL2_ set and 4 for the _NE6_ set. Our sample of simulation outputs is too small to conclude on the best timing for the instability to happen between \(t_{inst}\) = 0.6, 2 or 6 Myr. In Clement et al. 
(2018), a range of timings were also investigated and simulations with \(t_{inst}<1\) Myr were usually less successful because the inner disk of planetesimals would not be sufficiently depleted by the end of the instability, and would thus re-spread, leading to smaller Venus and Earth analogs. However, in this study, we consider less mass in the inner ring of planetesimals to begin with, so this is not an issue. In the _NE6_ set, we notice that the outer disk of planetesimals might have been too close to the outermost ice giant planet at the beginning of the simulation. Indeed, the outer ring of planetesimals is located between 21 and 30 AU and the outermost planet starts at \(\simeq 22\) AU. This has the effect of completely destabilizing the outer disk in less than 1 Myr. Consequently, outer planetesimals are already scattered towards the inner and outer solar system before the onset of the giant planet instability. In the context of our study nevertheless, this is of no great significance since our main goal is to show a _range_ of possible timings for the cometary bombardment relative to the accretion of the Earth. Out of the 13 successful simulations, 7 started from 20 _big_ embryos (_big_ set) and 6 started from 40 _small_ embryos (_small_ set). Hence, it cannot be decided either which initial conditions were the best between the _big_ and _small_ sets. However, the accretional growth timing of Earth best fits geophysical constraints (Kleine et al., 2009; Rudge et al., 2010) in the _small_ initial configuration (with 30 Myr \(<t_{growth}<\) 90 Myr), with respect to the _big_ scenario (\(t_{growth}<\) 10 Myr in most cases). Two simulations from the _big_ set (CL2/big/2 and NE6/big/1) have unrealistically short growth timings, with more than 95% of their respective masses accreted in less than 1 Myr (see Table 2). However, as the timing of Earth's growth does not influence the cometary bombardment, it is still interesting to study the pattern of their corresponding cometary influx. In Figure 4, we display the final outcome of each of the 13 successful simulations, i.e. the state (number, relative mass, semi-major axis and eccentricity) of the inner planets at t = 100 Myr. 9 of the 13 successful simulations include a Mars analog (\(1.3<a<2\) AU and mass \(\lesssim\frac{M_{Earth}}{3}\)), and only one of them includes a Mercury analog (a \(<0.6\) AU and mass \(\lesssim\frac{M_{Earth}}{4}\)). The Mercury analog has a mass of 0.123 \(M_{Earth}\), which is about 2 times larger than the actual Mercury. The Mars analogs have a median mass of 0.121 \(M_{Earth}\), which is close to the actual Mars mass (0.107 \(M_{Earth}\)), with a small variability among the results. In 8 of these 13 good simulations, the Venus analog is larger than the Earth analog. The median mass of the Venus and Earth analogs is 1.101 \(M_{Earth}\) and 1.015 \(M_{Earth}\) respectively. If we take a closer look at the differences of outcomes in _DE0_, _CL2_ and _NE6_ sets, we find that the average mass ratios between Venus and Earth analogs are 1.82, 2.26, and 0.75, respectively. For _NE6_, Venus is smaller than the Earth by a ratio that is close to the real one (\(\simeq\) 0.815). In all other cases, Venus tends to be more massive than the Earth. 
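As a sanity check on these statistics, Eqs. (1) and (2) can be evaluated directly for the present-day terrestrial planets. The sketch below uses approximate orbital elements (not values taken from the paper) and recovers numbers close to the AMD = 0.0018 and RMC = 89.9 quoted above.

```python
import numpy as np

def amd(m, a, e, inc):
    """Normalized angular momentum deficit, Eq. (1); inclinations in radians."""
    num = np.sum(m * np.sqrt(a) * (1.0 - np.sqrt(1.0 - e**2) * np.cos(inc)))
    return num / np.sum(m * np.sqrt(a))

def rmc(m, a, n_grid=20000):
    """Radial mass concentration, Eq. (2): maximum over trial semi-major axes."""
    trial = np.linspace(0.9 * a.min(), 1.1 * a.max(), n_grid)
    return max(np.sum(m) / np.sum(m * np.log10(x / a) ** 2) for x in trial)

# Approximate present-day elements of the terrestrial planets
# (masses in Earth masses, semi-major axes in AU; not values from the paper).
m = np.array([0.055, 0.815, 1.000, 0.107])     # Mercury, Venus, Earth, Mars
a = np.array([0.387, 0.723, 1.000, 1.524])
e = np.array([0.206, 0.007, 0.017, 0.093])
inc = np.radians([7.0, 3.4, 0.0, 1.9])

print(amd(m, a, e, inc))   # ~0.002 (the paper quotes 0.0018; depends on elements)
print(rmc(m, a))           # ~90   (the paper quotes 89.9)
```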
The early timing of the instability could play a role in this, in the way Mars' and \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & AMD & RMC & \(M_{Earth}\) & \(a_{Earth}\) & \(M_{Venus}\) & \(a_{Venus}\) & \(M_{Mars}\) & \(M_{Mercury}\) \\ & & & (\(M_{Earth}\)) & (AU) & (\(M_{Earth}\)) & (AU) & (\(M_{Earth}\)) & (\(M_{Earth}\)) & \\ **solar system** & **0.0018** & **89.9** & **1** & **1** & **0.815** & **0.723** & **0.107** & **0.055** \\ \hline _nojov_/big/2 & 0.0070 & 45.3 & 0.966 & 1.096 & 0.857 & 0.727 & 0.237 & 0.243 \\ _nojov_/big/5 & 0.0060 & 53.0 & 0.714 & 1.267 & 1.329 & 0.830 & 0.117 & 0.254 \\ _nojov_/small/1 & 0.0263 & 27.2 & 1.070 & 1.161 & 1.176 & 0.652 & / & / \\ _nojov_/small/2 & 0.0041 & 42.0 & 1.137 & 1.156 & 1.109 & 0.653 & / & / \\ _nojov_/small/4 & 0.0043 & 50.4 & 1.200 & 1.257 & 0.916 & 0.808 & / & 0.253 \\ _jov_/small/3 & 0.0014 & 113.3 & 1.224 & 0.865 & 0.764 & 0.667 & 0.181 & / \\ \hline _DE0_/big/1 & 0.0006 & 95.1 & 1.189 & 1.045 & 0.867 & 0.648 & / & / \\ _DE0_/big/5 & 0.0025 & 108.3 & 0.464 & 1.243 & 1.697 & 0.725 & / & / \\ _DE0_/small/2 & 0.0043 & 84.9 & 1.015 & 1.047 & 1.161 & 0.662 & 0.059 & / \\ _DE0_/small/3 & 0.0060 & 51.9 & 1.036 & 0.982 & 0.980 & 0.619 & 0.287 & / \\ _DE0_/small/5 & 0.0174 & 66.7 & 0.534 & 1.094 & 1.402 & 0.691 & 0.295 & / \\ \hline _CL2_/big/1 & 0.0140 & 87.7 & 0.474 & 1.208 & 1.684 & 0.742 & 0.121 & / \\ _CL2_/big/2 & 0.0011 & 103.2 & 0.712 & 1.084 & 1.459 & 0.715 & 0.116 & / \\ _CL2_/big/3 & 0.0085 & 64.9 & 0.596 & 1.268 & 1.452 & 0.776 & 0.121 & 0.123 \\ _CL2_/small/5 & 0.0131 & 44.4 & 1.074 & 1.176 & 1.101 & 0.651 & 0.180 & / \\ \hline _NE6_/big/1 & 0.0045 & 126.7 & 1.077 & 0.887 & 0.612 & 0.675 & 0.119 & / \\ _NE6_/big/4 & 0.0233 & 65.3 & 1.173 & 1.047 & 0.848 & 0.588 & / & / \\ _NE6_/small/1 & 0.0044 & 101.5 & 1.301 & 0.981 & 0.729 & 0.609 & / & / \\ _NE6_/small/5 & 0.0082 & 51.1 & 0.962 & 1.023 & 1.095 & 0.630 & 0.355 & / \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of all successful simulations (i.e. meeting the criteria established in section 2.2.4). The name given to each simulation specifies the instability case, the initial conditions for the inner ring of planetesimals and a number from 1 to 5 (5 simulations were run for each set). The values specific to the solar system are displayed at the top for comparison. Earth's feeding zone are depleted. One might also notice that _DE0_ and _NE6_ do a better job of getting a tight spacing between the Venus and Earth analogs. The evolution of the terrestrial planets analogs are stable after the first 100 millions years, except in simulation CL2/big/1. Indeed, the Mars analog in this simulation is very eccentric and its orbit crosses the Earth analog's orbit, which could lead to an ejection or a collision. Figure 5 shows the temporal evolution of simulation CL2/big/2, and a comparison with the present solar system. Before the onset of the giant planet instability and the ejection of the fifth planet, the Neptune analog already migrates outwards and the outer ring of planetesimals is destabilized. Most of these planetesimals are ejected outwards but some of them are ejected inwards and reach the inner solar system. By the end of the instability, the outer ring of planetesimals is strongly depleted. After 100 million years of simulation, the inner solar system ends up with 3 stable rocky planets following similar orbits to that of the actual Venus, Earth and Mars. 
In this particular simulation, the Venus and Earth analogs grew too rapidly, but this is related to the initial size of the embryos in the _big_ scenario. In Figure 6, we show the outcome of a simulation starting from smaller embryos (NE6/small/1). The inner solar system ends up with 2 stable rocky planets but with a more realistic growth timing for the Earth. Indeed, the last giant impact happens at \(t=38.1\) Myr and the impactor has a mass of 0.12 \(M_{Earth}\). The Earth analog accretes 0.02% of its final mass after this giant collision. This is close to what the dynamical and geochemical constraints require on Earth, with the last giant impact taking place between 30 and 100 Myr after CAIs (Kleine et al., 2009), an impactor of mass \(\simeq\) 0.1-0.11 \(M_{Earth}\) (Canup and Asphaug, 2001), and 0.05% of Earth's final mass accreted after that episode (Jacobson et al., 2014). Regarding the instability, we see that the outer disk of planetesimals has already been destabilized at the very beginning of the simulation. As explained above, this is probably because the Neptune analog was located too close to the disk in the initial conditions of the _NE6_ set. We calculate the feeding zones of the terrestrial planets in our simulations and see how they vary through time (cf. Figure 7). Feeding zones have significant consequences for the volatile inventory of planets. However, they are difficult to predict given that the last stages of planet formation are stochastic (Kaib and Cowan, 2015). We find that Earth and Venus analogs have similar feeding zones, whereas Mars mostly accretes material from the outer edge of the terrestrial planetesimal ring. This result is consistent with Izidoro et al. (2022)'s conclusions in the framework of the _annulus_ model. Figure 3: Comparison of the AMD and RMC values of our successful simulations with the solar system and other simulations from the literature. The Grand Tack simulations are from O'Brien et al. (2014). The Early Instability simulations are from Nesvorný et al. (2021). Those with eccentric and resonant gas giants are from Raymond et al. (2009) and those including planetesimal-driven migration of the gas giants are from Lykawka and Ito (2013). Figure 4: Comparison of the terrestrial planets analogs in their final states (at t = 100 Myr) from all successful simulations (according to the list of criteria presented in Section 2.2.4), with the current state of the inner Solar System. The symbol sizes are proportional to the mass of the bodies and the error bars indicate the variation in heliocentric distance given by the perihelion and aphelion of the orbit. The area marked in blue defines the semi-major axes range for which there must be two planets, the Venus and Earth analogs respectively. Before this range are the Mercury analogs and after are the Mars analogs. Figure 5: Simulation CL2/big/2. The symbol sizes are proportional to the mass of the bodies; for the giant planets a fixed size was chosen so as not to fill most of the panels. Figure 6: Simulation NE6/small/1. The symbol sizes are proportional to the mass of the bodies; for the giant planets a fixed size was chosen so as not to fill most of the panels. ### Collision rate calculations For each simulation and following the method described in section 2.3, we compute the intrinsic collision probability between each pair of comet and Earth embryo (main embryo or future Earth constituent embryo) at each timestep. This probability can be converted into a number of collisions occurring during a certain time interval. For each timestep, we add up the number of collisions with a comet of all the main and future Earth constituent embryos. Indeed, the Earth constituent embryos that haven't merged with the main Earth analog embryo yet may also collide with comets and bring cometary material to the fully-formed Earth. Figure 8 shows the temporal evolution of this rate of collisions with comets for simulation CL2/big/2. This particular simulation reveals that, even though the giant planet instability happens at t \(\simeq\) 2 Myr, the number of collisions with comets shows important peaks up to t \(\simeq\) 13 Myr and between \(\simeq\) 40 and 50 Myr. A closer look into the dynamical evolution of this simulation reveals that the peaks in collision rate before t \(\simeq\) 13 Myr and between \(\simeq\) 40 and 50 Myr are both due to only one or two comets orbiting close to the terrestrial embryos (Figure 9). This shows that cometary accretion on terrestrial embryos could have included a stochastic component. More importantly, it can be concluded that the contribution of cometary material on Earth might have been delayed with respect to the timing of the instability. The cumulative collision rate, which is smaller than one (\(\simeq\) 0.23 in simulation CL2/big/2 for example, cf. Figure 8), justifies the need for a probabilistic model for the collision rate calculations, rather than directly counting the number of collisions with comets from the N-body simulations. The collision rate we obtain for each simulation depends to a large extent on the number of comets considered for the initial conditions. If we increased the number of outer solar system objects, the intrinsic probability of collision between an Earth embryo and a comet, and in turn the collision rate, would also increase. However, this increment would be limited by the radius of each comet being smaller, given a total mass of the outer disk set to 25 \(M_{Earth}\), which would also reduce the volume where the collision can take place. In our simulations, the 10000 comets are unrealistically large in order to balance for their _small_ population. Hence, estimating the cometary mass hitting Earth (see Section 3.2.1) or the collision rate normalized to the number of comets (see Section 3.2.2), instead of the collision rate for all 10000 comets, is more relevant in the scope of our study. Figure 7: Feeding zones of the terrestrial planets in simulations CL2/big/2 (top panel) and NE6/small/1 (bottom panel). They are represented by their cumulative mass as a function of the initial semi-major axis of their component embryos. 
In any case, the Pluto-mass objects carried a significant part of the mass in the early outer disk and could be responsible for the stochastic component of the cometary bombardment in the inner solar system. The contribution of the smaller comets would add a smoother component to the bombardment. The latter has been studied by Nesvorny et al. (2023). They have modeled the cometary impacts on Earth through time (after the Moon-forming impact) using the distribution of comets from (Nesvorny et al., 2017). The size distribution of the comets considered in their study is mainly comprised between 10 and 100 km, with a very steep slope for comets with diameters \(>100\) km (i.e. for comets with mass \(\gtrsim 2.6\times 10^{20}\) g). So they consider much smaller bodies than we do in this study. They calculate the collision probability between these _small_ comets and a target body at 1 AU (the Earth) and obtain a smooth profile for the cometary bombardment after the Moon-forming impact. As their approach is complementary to ours, we compare our quantitative results in Section 4.2. #### 3.2.1 Cometary mass calculations As detailed in Section 2.3, we infer the cometary mass hitting an Earth analog constituent embryo by multiplying its number of collisions with a comet by the mass of one comet in our simulations (\(\simeq 1.5\times 10^{25}\) g; cf. section 2.2.3). This cometary mass hitting Earth constituent embryos as a function of time in our simulation NE6/small/1 is represented in Figure 10. The running mass of the Earth is also shown, with a last giant impact happening at t = 40 Myr. Figure 8: Temporal evolution of the collision rate between comets and Earth constituent embryos in the simulation CL2/big/2. The giant planet instability happened at t \(\simeq\) 2 Myr. For simplicity, we use the term ’Earth constituent embryos’ even when we refer to Earth constituent planetesimals. The presence of orange crosses up to a certain point shows that the main Earth analog embryo is still accreting its Earth constituent embryos. These future Earth constituent embryos are also prone to colliding with a comet. Hence, the cometary mass they accrete before merging with the main Earth embryo will be integrated in the fully-formed Earth. The sum of the cometary contributions of all these Earth embryos is shown in dark purple. The cumulative collision rate is represented with the dashed line on the right axis. In 100 millions years, the probability of collision between an Earth constituent embryo and a comet is about 23%. Interestingly, this result demonstrates that it is possible that most cometary mass was brought to Earth _after_ it had finished forming. The high peak in cometary mass at the very beginning of Figure 10 is due to the outer disk of planetesimals being destabilized in the first Myr, even before the onset of the giant planets instability. The reasons for this have been explained in section 3.1 and is most probably due to the initial conditions of the simulation. This appears in every instability case but the _NE6_ set is where it is the most obvious. In the following, we _normalize_ the cometary mass brought to Earth to the timing of the instability. In other words, we only take into account the contribution of comets in Earth constituent embryos starting from the onset of the giant planet instability. This has implications on the values obtained for the cometary mass brought to Earth, which could be slightly underestimated, especially in the instability case _NE6_ (see Table 2). 
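In code, this normalization amounts to masking out all collisions that occur before the onset of the instability before multiplying by the mass of one simulated comet, as in the illustrative sketch below (the per-timestep collision numbers are placeholders).

```python
import numpy as np

# Sketch of the cometary-mass bookkeeping described above: the mass delivered to the Earth
# constituent embryos is the expected number of comet collisions multiplied by the mass of
# one simulated comet, counted only from the onset of the giant-planet instability onward.
M_COMET_G = 1.5e25            # mass of one simulated comet (~ Pluto mass), from Section 2.2.3

def cometary_mass(times_myr, collisions_per_step, t_instability_myr):
    keep = np.asarray(times_myr) >= t_instability_myr
    return np.cumsum(np.where(keep, collisions_per_step, 0.0)) * M_COMET_G

times = np.linspace(0.0, 100.0, 1001)                                 # Myr
collisions = np.random.default_rng(3).uniform(0, 5e-4, times.size)    # toy per-step collision numbers
mass_g = cometary_mass(times, collisions, t_instability_myr=6.0)
print(f"{mass_g[-1]:.2e} g accreted after the instability")
```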
Figure 11 shows a comparison of all the cumulative cometary masses hitting Earth as a function of time for all the successful simulations (top panels). These functions are _normalized_ to the timing of the instability, which explains why they start increasing almost directly in simulations _DE0_, but only after t = 2 and 6 Myr in simulations _CL2_ and _NE6_ respectively. The bottom panels display the accreting mass rates of the corresponding Earth analogs. The moment of the last giant impact (or Moon-forming impact) happens between \(\simeq 0\) and 45 Myr in the _big_ set (left panel) and between \(\simeq 30\) and 90 Myr in the _small_ set (right panel), which is a little bit more realistic in terms of geophysical constraints (Kleine et al., 2009). It should be noted that simulations CL2/big/2 and NE6/big/1 have very Figure 9: Simulation CL2/big/2 with a zoom on the inner solar system. Different timings at which the peaks in collision rate are particularly high in this simulation are displayed. The symbol sizes are proportional to the mass of the bodies, except for the comets (in blue) for which the size has been exaggerated and Jupiter (around 5 AU) for which a fixed size was chosen so as not to fill most of the panels. unrealistic growth timings, but it does not have any influence on the absolute timing of the cometary bombardment. The cometary bombardment duration is highly variable and can last for less than 10 Myr as well as \(\simeq 80\) Myr. The exact values appear in Table 2. #### 3.2.2 Collision rate of each comet with Earth In this section, we take simulation CL2/big/2 as a case study to estimate collision rates of each comet with the Earth. In this particular scenario, the cumulative collision rate between Earth's constituent embryos and 10000 comets is \(\simeq\) 0.23, over the 100 millions years of our simulation. This means that the collision rate of each comet with Earth, over the same period of time, is equal to \(\simeq\) 2.3 \(\times 10^{-5}\). Considering the fact that comets in our simulations are Pluto-mass objects and there might have been between 1000 and 4000 of these objects in the primordial Kuiper Belt (Nesvorny and Vokrouhlicky, 2016), the collision rate between a Pluto object and an Earth embryo in the early solar system would range between \(\simeq\) 0.02 and 0.09. The Kuiper Belt objects populate as \(N(>d)\simeq d^{-q}\), with \(4.5<q<7.5\), for objects with a diameter d\(>\)100 km (Fraser et al., 2014; Morbidelli et al., 2021). Hence, the number of objects larger than half the size of Pluto, could be 20 to 200 times more numerous than Pluto objects. The collision rate between these objects and Earth would thus increase accordingly, reaching values ranging between 0.4 and 18 collisions. These smaller - but still very massive - objects could have played an important role in the stochastic component of the bombardment. ## 4 Discussion Figure 10: Temporal evolution of the cometary mass hitting Earth constituent embryos in the simulation NE6/small/1. The upper part of the figure shows the cometary mass with the dark purple line on the left axis and the cumulative cometary mass with the dashed black line on the right axis. The lower part of the figure shows the accreting mass of the main Earth embryo with a last giant impact at t \(\simeq\) 40 Myr. For simplicity, we use the term ’Earth constituent embryos’ even when we refer to Earth constituent planetesimals. 
The presence of orange crosses up to t \(\simeq\) 60 Myr on the upper part of the plot implies that small planetesimals continued to collide with the Earth in a late accretion phase, to eventually reach a final mass of 1.3 \(M_{Earth}\). ### Timing of cometary accretion with respect to Earth's growth We can distinguish three possible scenarios for the relative timing of cometary bombardment and Earth's accretion. In the first one, there is (almost) no cometary bombardment after the last giant impact. This is the most prevalent scenario among all our successful outcomes (10 out of 13). Simulations CL2/big/1 or DE0/small/2, for example, follow this particular evolution. For those Earth analogs, a _cometary_ isotopic signature for the atmospheric noble gases could hardly be explained. In addition, we should find a signature of comets in the Earth's mantle. An hypothesis that should be investigated in future works is that the cometary volatile material could simply not be trapped efficiently in the mantle. In the second scenario, a significant part of the cometary bombardment happens after the last giant impact. This is particularly the case for simulations DE0/big/5 and NE6/small/1. For those Earth analogs, there should be a cometary signature of noble gases in the atmosphere _and_ in the mantle. In the third and last scenario, there is almost no cometary bombardment on Earth until after the Moon-forming impact. So, in principle, we should find a cometary signature in the atmosphere of those Earth analogs, but not in their mantle. Simulations CL2/big/2 and NE6/big/1 exemplify this potential scenario. If we consider noble gas constraints on Earth, it appears that the mantle xenon is chondritic (Peron & Moreira, 2018; Broadley et al., 2020), while atmospheric xenon can be separated into a chondritic component (\(\simeq 78\%\)) and a cometary one (\(\simeq 22\%\)) (Marty et al., 2017). It is not possible to rule out a small cometary contribution to the mantle Figure 11: Comparison of the cometary bombardment timings (top panels) relative to Earth analog growth (bottom panels) for each of the successful simulations. The left panels consider the simulations which started with Mars-sized embryos (the _big_ set) in the inner solar system; and the right panels show simulations which started with a few lunar masses (the _small_ set). of the Earth (see Section 2.1), so the most plausible scenario for the actual Earth is either the second or the third one. It is important to note that the simulation examples we have for the third scenario (CL2/big/2 and NE6/big/1) always involve an extremely rapid, and thus unrealistic, formation of the Earth analogs (\(t_{growth}<1\) Myr). This is most probably the major reason why comets are exclusively delivered after the last giant impact in these simulations. With realistic growth timings, simulation CL2/big/2 would probably fall into the second scenario and simulation NE6/big/1 would fall into the first scenario. A bigger sample in simulation outcomes is necessary to validate this possible dynamical evolution. However, in a first approach, it appears unavoidable to bring cometary material to the mantle of the Earth in the context of an _Early Instability_ scenario. The second scenario might also match Earth's accretion history, with a cometary delivery _before_ (as well as after) the last giant impact and Earth's core closure. 
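These three cases can be summarized as a simple rule on the cumulative cometary mass delivered before versus after the last giant impact, as in the sketch below; the 5% threshold separating "almost none" from "significant" is our own illustrative choice, not a value taken from the simulations.

```python
# Compact restatement of the three scenarios above as a classification rule on the cometary
# mass delivered before vs. after the last giant (Moon-forming) impact.
def scenario(mass_before_g, mass_after_g, negligible_fraction=0.05):
    total = mass_before_g + mass_after_g
    if total == 0.0:
        return "no cometary bombardment"
    if mass_after_g / total < negligible_fraction:
        return "scenario 1: (almost) no comets after the last giant impact"
    if mass_before_g / total < negligible_fraction:
        return "scenario 3: comets essentially only after the last giant impact"
    return "scenario 2: comets both before and after the last giant impact"

# e.g. NE6/small/1 delivers roughly 1/3 of its cometary mass before and 2/3 after the last impact
print(scenario(mass_before_g=1.2e24, mass_after_g=2.13e24))
```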
Simulation DE0/big/5 involves a very rapid formation of the Earth analog (\(t_{growth}\simeq 2\) Myr) and would fall into the first scenario with a realistic growth timescale. In contrast, simulation NE6/small/1 satisfy many dynamical and geochemical constraints with a last giant _Mars-sized_ impact \(\simeq 40\) Myr after \(t_{0}\) and \(0.2\%\) of mass accreted after this impact. Regarding the cometary bombardment in this simulation, \(\simeq 1/3\) of the cometary material is delivered _before_ the last giant impact, and \(\simeq 2/3\)_after_. This could be consistent with a \(10\%\) cometary contribution in the mantle allowed by the fact that chondritic Xe and U-Xe (mixture of chondritic and cometary Xe) can hardly be distinguished as a progenitor for the mantle due to measurement uncertainties (see Section 2.1). As also detailed in Section 2.1, comets would have contributed to the deep mantle's Kr budget during the earliest phase of Earth's growth (Peron et al., 2021). In that case, the cumulative cometary mass hitting Earth as a function of time (as shown in Figure 11) would be an upward slope in the first millions years, it would then reach a plateau until \(\simeq 40\) Myr for example, and there would be another upward slope followed by a plateau; just like in simulation CL2/big/2. If future geochemical studies also find a deficit in \({}^{86}\)Kr in the upper mantle, suggesting that a few comets contributed to the composition of Earth's upper mantle too, the cumulative cometary mass hitting Earth as a function of time could be similar to that of simulation NE6/small/1. Overall, and given the issue with the very early formation of Earth analogs in the third scenario, the second scenario appears the most likely. These results support the claim that comets have contributed to the mantle early in the Earth's accretion history, but that cometary material could also have been delivered after the Earth had finished forming. \begin{table} \begin{tabular}{l c c c c c c c} \hline & Normalized & Fraction of & Cometary & Cometary & Last giant & Last giant & Late \\ & cometary & com. mass & bomb. & bomb. & impact & impact & accretion \\ & bomb. & wrt. Earth & mass after & duration & time & mass & mass \\ & mass (g) & last giant & (Myr) & (Myr) & (\(M_{Earth}\)) & (\(M_{Earth}\)) \\ & & & impact (g) & & & & \\ **constraints** & / & \(<\)**0.5\%** & / & / & **[30-100]** & \(\simeq\)**0.1** & \(\simeq\)**0.005** \\ \hline _DE0_/big/1 & 2.39e+24 & 0.034\% & 9.55e+22 & 15.5 & 6.9 & 0.11 & 0.026 \\ _DE0_/big/5 & 9.00e+23 & 0.033\% & 3.98e+23 & 7.5 & 2.1 & 0.23 & 0.009 \\ _DE0_/small/2 & 2.22e+24 & 0.037\% & 3.71e+19 & 7.1 & 33.9 & 0.41 & 0.007 \\ _DE0_/small/3 & 5.00e+24 & 0.081\% & 8.04e+21 & 7.5 & 28.8 & 0.12 & 0.006 \\ _DE0_/small/5 & 1.35e+24 & 0.042\% & 6.36e+18 & 50.7 & 46.9 & 0.12 & 0.001 \\ \hline _CL2_/big/1 & 1.48e+24 & 0.052\% & 2.20e+21 & 1.1 & 41.3 & 0.12 & 0.001 \\ _CL2_/big/2 & 3.00e+24 & 0.071\% & 3.21e+24 & 29.3 & 0.9 & 0.24 & 0.015 \\ _CL2_/big/3 & 6.32e+23 & 0.018\% & 3.69e+22 & 9.8 & 9.4 & 0.24 & 0.009 \\ _CL2_/small/5 & 1.60e+24 & 0.025\% & 3.12e+15 & 11.7 & 89.7 & 0.12 & 0.0 \\ \hline _NE6_/big/1 & 2.85e+23 & 0.004\% & 1.35e+24 & 13.7 & 0.7 & 0.23 & 0.034 \\ _NE6_/big/4 & 4.42e+21 & \(<\)0.001\% & 0.0 & \(\simeq\)0.0 & 39.8 & 0.12 & 0.0 \\ _NE6_/small/1 & 3.34e+24 & 0.043\% & 2.13e+24 & 78.2 & 38.1 & 0.12 & 0.002 \\ _NE6_/small/5 & 0.0 & 0.0\% & 0.0 & 0.0 & 42.4 & 0.06 & 0.001 \\ \hline \end{tabular} \end{table} Table 2: Summary of all successful simulations. 
The values specific to the solar system are displayed for comparison. Second column is calculated starting from the moment of cometary bombardment. Third column shows the fraction of cometary mass with respect to the Earth analog mass in the simulation. ### Total cometary mass accreted by the Earth We assume that there is no loss of volatile elements to space during collisions between comets and terrestrial embryos. This assumption is discussed in Section 4.4. The values we obtain for the cumulative cometary mass accreted by our Earth analogs through time (during the first 100 Myr) range between 4 \(\times\) 10\({}^{21}\) and 5 \(\times\) 10\({}^{24}\) g, which also amounts to between 6.7 \(\times\) 10\({}^{-7}\)\(M_{Earth}\) and 8.4 \(\times\) 10\({}^{-4}\)\(M_{Earth}\) (see Table 2). A rough estimation of the expected cometary contribution on Earth can be obtained by comparing it to the carbonaceous chondrites contribution on Earth. Burkhardt et al. (2021) show that CC material could have contributed from \(\simeq\) 0 to 10 % of Earth by mass, with a peak at \(\simeq\) 4%. More recently, a \(\simeq\) 6% contribution has been proposed by Savage et al. (2022), using Zn isotope anomalies. Based on the D/H ratio in the present-day oceanic water, it has been shown that the contribution of comets cannot exceed 10 % of the contribution of carbonaceous chondrites (Morbidelli et al., 2000). Hence, comets could not represent more than about 0.5% of Earth mass, which amounts to 5 \(\times\) 10\({}^{-3}\)\(M_{Earth}\)\(\simeq\) 3 \(\times\) 10\({}^{25}\) g. These values for the cometary mass on Earth are based on the total contribution since \(t_{0}\) (or almost \(t_{0}\) with the normalization to the timing of the instability). In what follows, we estimate the expected mass brought by comets _after_ the Moon-forming impact in order to compare it with geochemical constraints in the atmosphere of the Earth. We find that the cometary mass accreted by our Earth analogs after the last giant impact ranges between 3 \(\times\) 10\({}^{15}\) and 3 \(\times\) 10\({}^{24}\) g (except in two cases where it is zero), or between 5.0 \(\times\) 10\({}^{-13}\)\(M_{Earth}\) and 5.0 \(\times\) 10\({}^{-4}\)\(M_{Earth}\). The lower limit of this range is reached in simulation _CL2/small/5_ where the last giant impact happens at 89.7 Myr after \(t_{0}\), whereas the upper limit is reached in simulation _CL2/big/2_ where the timing of the last giant impact is unrealistically early. However, in simulation _NE6/small/1_, the last giant impact happens at 38.1 Myr after \(t_{0}\), which is very realistic, and 2\(\times\) 10\({}^{24}\) g of cometary material is brought to the Earth analog after this event. Marty et al. (2016) argue that a cometary contribution between 3 \(\times\) 10\({}^{21}\) and 6.5 \(\times\) 10\({}^{23}\) g (or between 5 \(\times\) 10\({}^{-7}\) and 1 \(\times\) 10\({}^{-4}\)\(M_{Earth}\)) would have been necessary to supply all atmospheric \({}^{36}\)Ar. Alternatively, and based on the amount of Kr residing in the Earth surface reservoir, Bekaert et al. (2020) estimate the maximum amount of water that would have been brought by comets along with cometary Kr after the Moon-forming impact to \(\simeq\) 1.95 \(\times\) 10\({}^{21}\) g. From this value, they derive the total mass of comets accreted by the Earth after the last giant impact and obtain a value of \(\simeq\) 9.76 \(\times\) 10\({}^{21}\) g (or \(\simeq\) 1.6 \(\times\) 10\({}^{-6}\)\(M_{Earth}\)). 
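The bookkeeping behind these comparisons is simple enough to check directly; the snippet below reproduces the \(\simeq 3\times 10^{25}\) g ceiling and converts the simulated mass range into Earth masses, assuming \(M_{Earth}\simeq 5.97\times 10^{27}\) g.

```python
# Arithmetic check of the upper limit quoted above: comets can contribute at most ~10% of the
# carbonaceous-chondrite contribution, giving roughly 0.5% of an Earth mass as a ceiling.
M_EARTH_G = 5.97e27                           # Earth mass in grams

ceiling_g = 0.005 * M_EARTH_G                 # ~0.5% of Earth mass
simulated_range_g = (4e21, 5e24)              # cumulative cometary mass range from our analogs

print(f"ceiling  ~ {ceiling_g:.1e} g")        # ~3e25 g, as stated in the text
print(f"simulated {simulated_range_g[0]:.0e} - {simulated_range_g[1]:.0e} g "
      f"-> {simulated_range_g[0]/M_EARTH_G:.1e} - {simulated_range_g[1]/M_EARTH_G:.1e} M_Earth")
```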
Either way, an impact of one single comet with a 100-km radius and density of 0.5 g/cm\({}^{3}\), assuming that it is perfectly spherical and that volatile elements are conserved during the collision, could supply \(\simeq\) 2 \(\times\) 10\({}^{21}\) g of material to Earth. An impact of a comet with a 1000-km radius, and same density, could supply \(\simeq\) 2 \(\times\) 10\({}^{24}\) g of material to Earth. Therefore, only a small number of collisions may have been necessary to account for the cometary contribution to Earth. Our simulations show that it is possible that enough cometary mass has been brought to Earth after it had finished forming, through big impactors, in order to explain the noble gas content in the Earth's atmosphere. Because the primordial outer disk of planetesimals contained a non-negligible portion of Pluto-mass objects (Nesvorny and Vokrouhlicky, 2016; Kaib et al., 2021), the cometary bombardment on Earth probably included both a stochastic component - dictated by Pluto-mass objects -, and a smoother component - dictated by smaller comets. As explained in details in Section 3.2, Nesvorny et al. (2023) have calculated the cometary mass reaching Earth after the Moon-forming impact, considering a smaller size distribution of comets (instead of Pluto-mass objects like we do) and neglecting the stochastic component. They find that, assuming a Moon-forming impact at t \(\simeq\) 50 Myr after the birth of the solar system, the Earth would have accreted \(\simeq\) 1.5 \(\times\) 10\({}^{22}\) g of cometary material _after_ the Moon-forming impact. Their results also suggest that the time gap between the instability and the Moon forming event should be shorter than 50 Myr, in order to explain the noble gases content in the Earth's atmosphere. The same result holds for our simulations under certain assumptions. Indeed, it is also possible that the noble gases delivered by the comets were just not retained efficiently in the mantle, and were released to the atmosphere even before the last giant impact. This hypothesis needs to be tested before constraining the timing of the Moon-forming event relative to the timing of the instability. ### Comparison with Venus and Mars analogs We calculate the intrinsic collision probability between each pair of comet and Venus analog embryo, and each pair of comet and Mars analog embryo, at each timestep of our 100 Myr simulations. Subsequently, we obtain the temporal evolution of the cometary mass that has been supplied to these terrestrial planets analogs and compare it with our Earth analogs. Figure 12 illustrates the delivery of comets on all of the terrestrial planet analogs from our successful simulations. The top panel shows that some terrestrial planets analogs have been supplied a lot of cometary material after the Earth analog has reached half of its mass, while others have not. This emphasises the stochastic nature of the cometary bombardment and explains how the terrestrial planets may have substantially different atmosphere compositions. Indeed, terrestrial planets might have accreted very variable amounts of cometary mass after having formed completely, thus disparately contributing to their atmospheres. Our results thus converge with those of Marty et al. 
(2016) who point out the high abundance of \({}^{36}\)Ar in the atmosphere of Venus compared to Earth, and suggest that the supply of atmospheric volatiles to the terrestrial planets most probably involved a small population of objects, resulting in a heterogeneous distribution of cometary isotopic signatures. Avice et al. (2022) argue that the (late) contribution of comets on Venus was most probably more important than on Earth, given the much higher \({}^{36}\)Ar/\({}^{22}\)Ne ratio in Venus' atmosphere (although the low precision on this measurement still allows for an Earth-like elemental ratio). Indeed, comets also have a significant \({}^{36}\)Ar/\({}^{22}\)Ne ratio, as they probably do not contain any Ne (Bar-Nun & Owen, 1998). Finally, it was proposed that the isotopic signatures of some volatiles in the Martian atmosphere (mainly N, Ar and Kr) are consistent with a cometary contribution (Owen & Bar-Nun, 1995; Marty et al., 2016). Our simulation outcomes support this hypothesis, with 6.4 \(\times\) 10\({}^{18}\) to 5.5 \(\times\) 10\({}^{23}\) g of cometary mass being supplied to Mars analogs after the Earth analog has reached half of its mass (and Mars has either already finished or is about to finish its formation by this time).

Figure 12: Semimajor axis–eccentricity distribution of the terrestrial planet analogs that formed in all our successful simulations, and the corresponding cumulative cometary mass that has been delivered to them after the Earth analog has reached half of its mass (top) and in the total duration of the simulation (bottom). The actual terrestrial planets are represented by icons in the background. The area marked in blue defines the semi-major axes range for the Venus and Earth analogs, represented by different symbols. Before this range are the Mercury analogs and after are the Mars analogs.

Note that the bottom panel in Figure 12 displays the cumulative cometary mass that has been delivered to each of the terrestrial planet analogs during the 100 million years of the simulation, and these results have _not_ been _normalized_ to the moment of the instability (unlike the values in Table 2). This part of the figure mainly shows that the cumulative cometary mass hitting the terrestrial planets is proportional to the final mass of these planets. Indeed, it is comprised between 2.2 \(\times\) 10\({}^{24}\) and 2.4 \(\times\) 10\({}^{25}\) g for Venus analogs (or between 2.6 \(\times\) 10\({}^{-4}\) and 2.7 \(\times\) 10\({}^{-3}\) of the Venus analogs' respective final masses); between 7.8 \(\times\) 10\({}^{23}\) and 5.0 \(\times\) 10\({}^{24}\) g for Earth analogs (or between 1.7 \(\times\) 10\({}^{-4}\) and 8.1 \(\times\) 10\({}^{-4}\) of the Earth analogs' respective final masses); and between 1.5 \(\times\) 10\({}^{23}\) and 8.5 \(\times\) 10\({}^{23}\) g for Mars analogs (or between 2.1 \(\times\) 10\({}^{-4}\) and 7.8 \(\times\) 10\({}^{-4}\) of the Mars analogs' respective final masses). This is due to the fact that a bigger planet tends to accrete more embryos during its formation and the probability of collision between a comet and each of these constituent embryos adds up for the calculation of the total cometary contribution. However, if we take a closer look at the values of the cometary contributions relative to their respective terrestrial planet analogs' final masses, we find that Venus might have been supplied more comets than Earth or Mars. This result supports the claim made by Avice et al. (2022) mentioned above.
### Limitations

The accuracy of our quantitative results for the range of potential cometary mass accreted by the Earth and the other terrestrial planets is limited by four main factors. First, the perfect merging approach used in most N-body simulations of terrestrial planet formation tends to underestimate the accretion timescale and overestimate the final masses of planets (Emsenhuber and Asphaug, 2019; Burger et al., 2019; Emsenhuber et al., 2021; Haghighipour and Maindl, 2022). However, Walsh and Levison (2016) and Deienno et al. (2019) have included collisional fragmentation in their simulations, and still obtained successful outcomes, starting from an initial total mass of \(\simeq\) 2 - 2.5 \(M_{Earth}\) for the inner ring of planetesimals. Second, all the assumptions for the typical noble gas isotopic composition in comets rely on the measurements made on one single comet, 67P/Churyumov-Gerasimenko. However, it was shown that the volatile content, including the noble gas content, in cometary ice reflects the gas pressure and temperature under which it formed (Bar-Nun et al., 1985; Yokochi et al., 2012; Almayrac et al., 2022). Consequently, if we postulate that all comets formed in the same region, that is in the same conditions of pressure and temperature, the noble gas content in 67P could then be representative of all comets (Marty et al., 2016), provided the comets did not lose a significant portion of their volatiles over their thermal histories (Gkotsinas et al., 2022). Third, we assume that there is no loss of volatile elements to space during collisions between comets and terrestrial embryos. In the case of the Earth, this assumption was shown to be roughly correct for collisions occurring after the Moon-forming impact (Marty et al., 2016) but it becomes less valid in the case of giant impacts, which likely involve significant atmospheric loss (Genda and Abe, 2005). Overall, models simulating the global volatile budget during giant collisions remain elusive. de Niem et al. (2012) have shown that single impacts of very massive bodies are of more importance for the atmospheric evolution than other input parameters. In contrast, Sinclair et al. (2020) argue that only the smallest cometary impactors could contribute material to the atmosphere. In the case of "dry" comets, however, their density could be high enough that a non-zero fraction of even the largest objects could be accreted, given a low enough impact velocity. Finally, we are limited by the number of particles in our N-body simulations, and the computational cost associated with it. Indeed, each simulation includes 10000 comets, each having a mass of 1.5 \(\times\) 10\({}^{25}\) g (\(\simeq\) Pluto-mass) in order to reach a total mass of 25 \(M_{Earth}\) for the outer disk of planetesimals. It was shown that the combined mass of Pluto-mass objects in the initial outer disk of planetesimals could represent 10% to 40% of the total disk mass (Nesvorny and Vokrouhlicky, 2016). Therefore, our simulations mostly reflect those massive bodies carrying an important portion of the outer disk mass. The latter could be responsible for the stochastic component of the cometary bombardment on the planets of the inner solar system. However, it would be more realistic to perform simulations also including a large number of smaller-sized comets, to account for the non-stochastic component of the bombardment.
Bearing all these considerations in mind, we believe that our simulation results, obtained from a dynamical model perspective, are sufficiently reliable as a first approximation given their consistency with geochemical constraints. In addition, both the perfect-merging approach and the limited number of comets have the particular advantage of reducing the overall computational cost of our simulations.

## 5 Conclusion

We have performed N-body simulations of the early solar system considering different sets of initial conditions and instability cases. In our successful simulations, we have calculated the probability of collision between each pair of comet and terrestrial planet embryo through time and obtained an estimation of the cometary mass accreted by each of our Earth, Venus and Mars analogs as a function of time. Given the fact that only a small number of collisions from big impactors may have been necessary to account for the cometary contribution to Earth, and that the primordial outer disk of planetesimals contained a non-negligible portion of Pluto-mass objects (Nesvorny and Vokrouhlicky, 2016; Kaib et al., 2021), we propose that the cometary bombardment on Earth included a stochastic component - dictated by Pluto-mass objects - in addition to a smoother component - dictated by smaller comets. While the smoother component has been extensively investigated in Nesvorny et al. (2023), we have focused on the stochastic component of the bombardment.

First and foremost, we find that the contribution of comets on Earth might have been delayed with respect to the timing of the instability, due to this stochastic component. While the contribution of comets in the atmosphere of the Earth is mostly established (Marty et al., 2017) and volatiles in the Earth's mantle were shown to be chondritic (Peron and Moreira, 2018; Broadley et al., 2020), it remains unclear whether a few comets could have contributed to the deep mantle early in the Earth's accretion history (Peron et al., 2021). Given these geochemical constraints, the cometary delivery on Earth might have either happened _both_ during the earliest phases of Earth's growth and after its formation was finished, or _after_ - and only after - its formation was finished. These two cases have been observed as possible - yet not frequent - outcomes of our simulations. In particular, the case where comets are exclusively supplied after the Earth analog was formed only occurs in simulations which involve an extremely rapid formation of the Earth analog. For this reason, we favour the other scenario and support the idea that comets contributed to the mantle's budget. It is also possible that early-delivered cometary noble gases were not retained efficiently in the mantle, and were thus sequestered in the atmosphere. This alternative theory will be investigated in future works. In any case, our dynamical simulations demonstrate that the _Early Instability_ scenario and a late supply of cometary material on Earth are not mutually exclusive. As a direct consequence, the xenon constraints set by the terrestrial mantle and atmosphere are not necessarily in conflict with an instability happening during the first 10 Myr of the solar system. Above all, our study provides a broad overview of the possible cometary bombardment mass contribution and timings on Earth. More importantly, it highlights the stochastic variability of cometary delivery on the terrestrial planets.
## 6 Acknowledgements

Computer time for this study was partly provided by the computing facilities MCIA (Mésocentre de Calcul Intensif Aquitain) of the Université de Bordeaux and of the Université de Pau et des Pays de l'Adour, France. Numerical computations were also partly performed on the S-CAPAD/DANTE platform, IPGP, France. We acknowledge support from the CNRS MITI funding program (PhD grant to S.J.). R.D. was supported by the NASA Emerging Worlds program, grant 80NSSC21K0387.
Comets are expected to be a source of volatile elements in the solar system, but the timing of their delivery relative to Earth's formation is still poorly understood. Noble gas isotope measurements of comet 67P/Churyumov-Gerasimenko indicate that comets made a non-negligible contribution to Earth's atmosphere, whereas evidence for a cometary component in Earth's mantle remains unconfirmed. These geochemical constraints suggest that the cometary contribution occurred mainly after the last stages of Earth's formation. Here, we evaluate whether dynamical simulations that account for these constraints are compatible with the Early Instability model. We run dynamical simulations of the solar system, compute the collision probability between comets and Earth analog embryos through time, and estimate the amount of cometary material accreted by the Earth analogs as a function of time.
2309.12681
Tight and Efficient Gradient Bounds for Parameterized Quantum Circuits
The training of a parameterized model largely depends on the landscape of the underlying loss function. In particular, vanishing gradients are a central bottleneck in the scalability of variational quantum algorithms (VQAs), and are known to arise in various ways. However, a caveat of most existing gradient bound results is the requirement of t-design circuit assumptions that are typically not satisfied in practice. In this work, we loosen these assumptions altogether and derive tight upper and lower bounds on loss and gradient concentration for a large class of parameterized quantum circuits and arbitrary observables, which are significantly stronger than prior work. Moreover, we show that these bounds, as well as the variance of the loss itself, can be estimated efficiently and classically-providing practical tools to study the loss landscapes of VQA models, including verifying whether or not a circuit/observable induces barren plateaus. In particular, our results can readily be leveraged to rule out barren plateaus for a realistic class of ans\"atze and mixed observables, namely, observables containing a non-vanishing local term. This insight has direct implications for hybrid Quantum Generative Adversarial Networks (qGANs). We prove that designing the discriminator appropriately leads to 1-local weights that stay constant in the number of qubits, regardless of discriminator depth. This implies that qGANs with appropriately chosen generators do not suffer from barren plateaus even at scale-making them a promising candidate for applications in generative quantum machine learning. We demonstrate this result by training a qGAN to learn a 2D mixture of Gaussian distributions with up to 16 qubits, and provide numerical evidence that global contributions to the gradient, while initially exponentially small, may kick in substantially over the course of training.
Alistair Letcher, Stefan Woerner, Christa Zoufal
2023-09-22T07:38:13
http://arxiv.org/abs/2309.12681v3
# From Tight Gradient Bounds for Parameterized Quantum Circuits

###### Abstract

Barren plateaus are a central bottleneck in the scalability of variational quantum algorithms (VQAs), and are known to arise in various ways, from circuit depth and hardware noise to global observables. However, a caveat of most existing results is the requirement of \(t\)-design circuit assumptions that are typically not satisfied in practice. In this work, we loosen these assumptions altogether and derive tight upper and lower bounds on gradient concentration, for a large class of parameterized quantum circuits and arbitrary observables. By requiring only a couple of design choices that are constructive and easily verified, our results can readily be leveraged to rule out barren plateaus for explicit circuits and mixed observables, namely, observables containing a non-vanishing local term. This insight has direct implications for hybrid Quantum Generative Adversarial Networks (qGANs), a generative model that can be reformulated as a VQA with an observable composed of local and global terms. We prove that designing the discriminator appropriately leads to 1-local weights that stay constant in the number of qubits, _regardless of discriminator depth_. Combined with our first contribution, this implies that qGANs with shallow generators can be trained at scale without suffering from barren plateaus - making them a promising candidate for applications in generative quantum machine learning. We demonstrate this result by training a qGAN to learn a 2D mixture of Gaussian distributions with up to 16 qubits, and provide numerical evidence that global contributions to the gradient, while initially exponentially small, may kick in substantially over the course of training.

## I Introduction

Gradients that vanish exponentially in the number of system qubits - also called _barren plateaus_ - have been shown to provide a substantial bottleneck in the training of variational quantum algorithms [1; 2; 3; 4; 5; 6; 7; 8]. They pose a central obstacle for the scaling of these algorithms to practically relevant problem sizes, and have been shown to arise for a variety of reasons: ansatz depth or expressivity [1; 4], entanglement [5], unital hardware noise [6], and loss functions induced by global observables, namely, observables acting on most system qubits [2]. Despite positive guarantees in a number of settings [9; 10; 11; 12; 13; 14; 15; 16], these results naturally beckon a finer understanding of the conditions under which VQAs are scalable to intermediate- and large-scale problems ranging across optimization [17], machine learning [18], and chemistry [19]. While ansatz and backend may be chosen with some degree of freedom, the observable is often intrinsically connected to the model at hand. In particular, a large spectrum of applications in optimization, quantum chemistry, and generative quantum machine learning (GQML) involve _mixed_ observables which decompose as a sum of _local_ and _global_ terms. While Uvarov and Biamonte [8] have proven that local and global terms contribute independently to the gradient, and Cerezo _et al._[2] have proven that the variance of local terms is at most polynomially small, both results make use of 2-design assumptions that are not satisfied in practice, and it remains unclear whether mixed observables induce barren plateaus in realistic settings.
In order to alleviate the problem, local restrictions or approximations of the underlying loss function have widely been suggested as potential remedy [10; 16; 20; 21; 22; 23; 24]. However, while local loss functions can help avoid vanishing gradients, they are also likely to introduce undesirable local minima in the loss landscape [25; 26] and fail to capture all system correlations. Our work is centered around two main contributions. First, we derive tight gradient bounds for a large class of parameterized circuits and arbitrary observables - _without_ any \(t\)-design assumptions (Theorem 1). This generalises prior results and contributes proof techniques and intuitions that are not only more elementary, but can be leveraged for the analysis of other circuit classes. In particular, our results readily imply that mixed observables do not induce barren plateaus - and that approximating a true loss function by its local counterpart, as suggested by prior work, can only _increase_ the concentration of gradients, besides leading to potentially undesirable local minima. However, it is important to emphasise that the absence of barren plateaus is a necessary condition to train parameterized circuits - but in no way guarantees that an optimal solution will be found, as is true in non-convex optimization more generally. In order to provide a bridge between this first contribution and the field of generative modeling, we prove that for hybrid qGANs [27; 28; 29; 30; 31], the weights of 1-local terms are _constant_ in the number of qubits, even for classical discriminators of _arbitrary depth_ (Theorem 2). Together, our contributions imply that qGANs with shallow generators do not suffer from barren plateaus, suggesting that qGANs are a rare and promising candidate for scalable GQML algorithms. We empirically demonstrate this by successfully training a qGAN to learn 2D mixtures of Gaussians with up to 16 qubits, with results which are on par with classical GANs - despite a massively smaller number of parameters. The remainder of this paper is organized as follows. We introduce relevant concepts and related work in Section II. We then present our first central contribution in Section III, Theorem 1, which provides tight gradient bounds for arbitrary observables and a large class of parameterized quantum circuits. Several extensions as well as practical implications thereof are discussed. Furthermore, we discuss the impact of our results for 3 application areas of VQAs. Next, Section IV introduces our second central contribution. Leveraging the previously introduced results, we prove that the gradients of hybrid qGANs do not vanish exponentially for arbitrarily deep discriminators and logarithmically shallow generators. Additionally, these theoretical insights are supported with numerical experiments. Finally, Section V presents a conclusion and an outlook for future research. ## II Background In this paper, we study problems that can be formulated as a VQA [32], i.e., that can be reduced to the minimization of a loss function \[\mathcal{L}(\theta)=\operatorname{Tr}\bigl{(}U(\theta)\rho U^{\dagger}( \theta)H\bigr{)}, \tag{1}\] induced by an \(n\)-qubit parameterized quantum circuit \(U(\theta)\), an initial state \(\rho\), and a Hermitian observable \(H\). 
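As a minimal illustration of Eq. (1), the following sketch evaluates the loss \(\mathcal{L}(\theta)=\operatorname{Tr}\bigl{(}U(\theta)\rho U^{\dagger}(\theta)H\bigr{)}\) for a single-qubit toy example; the circuit (one \(R_Y\) rotation), initial state, and observable are placeholders chosen only to keep the example self-contained.

```python
import numpy as np

# Minimal sketch of Eq. (1): L(theta) = Tr(U(theta) rho U(theta)^dagger H),
# here with a one-parameter placeholder circuit U(theta) = RY(theta).
def ry(theta):
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]], dtype=complex)

rho = np.array([[1, 0], [0, 0]], dtype=complex)      # |0><0|
H   = np.array([[1, 0], [0, -1]], dtype=complex)     # Pauli Z observable

def loss(theta):
    U = ry(theta)
    return np.real(np.trace(U @ rho @ U.conj().T @ H))

print(loss(0.0), loss(np.pi))   # expect +1 and -1: <Z> after rotating |0> by RY(theta)
```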
#### ii.0.1 Barren Plateaus A loss function \(\mathcal{L}(\theta)\) is defined to exhibit a barren plateau if, for all \(\theta_{k}\), \(\operatorname{Var}_{\theta}\left[\partial_{k}\mathcal{L}\right]\in O(1/b^{n})\) for some \(b>1\), where the distribution on \(\theta\) is uniform. Arrasmith _et al._[33] have proven that, in the VQA setting, this is equivalent to an exponential concentration of the _loss function itself_, namely, \(\operatorname{Var}\left[\mathcal{L}\right]\in O(1/b^{n})\) for some \(b>1\). This simplifies the analysis of barren plateaus, although our main result (Theorem 1) will provide explicit bounds on the concentration of loss _and_ gradients for completeness. #### ii.0.2 Locality To describe whether a given observable induces barren plateaus, various notions of _locality_ have been introduced [2, 8, 20]. Our first contribution, Theorem 1, is independent of any form of locality as it provides tight bounds for _any observable_. However, in order to exclude barren plateaus for specific circuits, as in Corollary 1, a notion of local observables is required. In this work, we say that \(H\) is (algebraically) _local_ if it acts non-trivially on a constant number \(O(1)\) of qubits, and _global_ otherwise. In Section III.1.3, this definition will be extended to logarithmically local observables, which act on \(O(\log n)\) qubits (also called _low-bodied_ by Rudolph _et al._[20]), as well as topologically local observables, which are required to act on 'neighbouring' qubits according to some circuit-induced topology. The notion of locality can be pulled apart by decomposing \(H\) into a sum of Pauli strings as \(H=\sum_{\alpha}c_{\alpha}P_{\alpha}\), with \(c_{\alpha}\in\mathbb{R}\) and \(\alpha\in\{0,1,2,3\}^{n}\), where a _Pauli string_\(P_{\alpha}\) is defined by \(P_{\alpha}=\bigotimes_{i=1}^{n}\sigma_{\alpha_{i}}\) and \(\sigma_{\alpha_{i}}\in\{I,X,Y,Z\}\) are the Pauli matrices. We can then define \(P_{\alpha}\) to be _\(k\)-local_ if it contains exactly \(k\) non-identity matrices, or formally, if \(|\alpha|=k\) in the \(L_{0}\) pseudo-norm. To rule out barren plateaus for a generic class of global observables, it will be useful to introduce the notion of a _mixed_ observable, which is defined to contain _at least one_ local term with a non-vanishing coefficient \(c_{\alpha}\in\Omega(1/\operatorname{poly}n)\) - as opposed to a _global_ observable. These distinctions are illustrated, with examples, in Figure 1. #### ii.0.3 Related Work Most existing results focus on barren plateaus in circuits that satisfy certain \(t\)-design assumptions, either exactly [1, 2, 5, 7, 8, 34] or approximately [35, 36, 4, 37]. In particular, Cerezo _et al._[2, Theorem 1] have proven that the gradient of \(\mathcal{L}(\theta)\) is exponentially small in \(n\) if \(H\) is a (type of) global observable, such as a projector, and \(U\) is an Alternating Layered Ansatz forming a local 2-design. Conversely, the gradient is at most polynomially vanishing in \(n\) if \(H\) is a (type of) spatially local observable, and if \(U\) moreover has logarithmic depth in \(n\). These results are seminal in the study of observable-induced barren plateaus, but the local 2-design assumption is seldom satisfied in practice, and, as this work will show, relaxing them leads to more immediate gradient bounds with slightly different implications. 
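Returning to the locality notions above, the short sketch below encodes an observable as a list of (coefficient, Pauli string) pairs and classifies it as local, mixed, or global from the \(L_0\) weight of its strings; the representation and the locality cutoff are illustrative choices rather than part of the original formalism.

```python
# Locality bookkeeping: the k-locality of a Pauli string is its number of non-identity
# factors, and an observable is "mixed" if it contains at least one local term with a
# non-vanishing coefficient alongside global terms.
def locality(pauli_string):
    """k-locality of a Pauli string such as 'ZIIZ': number of non-identity factors."""
    return sum(1 for p in pauli_string if p != "I")

def classify(observable, local_cutoff=2):
    ks = [locality(p) for c, p in observable if abs(c) > 0]
    if all(k <= local_cutoff for k in ks):
        return "local"
    if any(k <= local_cutoff for k in ks):
        return "mixed"
    return "global"

n = 6
H_local  = [(1.0, "Z" + "I" * (n - 1))]
H_global = [(5.0, "Z" * n)]
H_mixed  = H_local + H_global            # e.g. H = Z + 5 Z^{\otimes n}, as in Figure 3

for name, obs in [("local", H_local), ("global", H_global), ("mixed", H_mixed)]:
    print(name, "->", classify(obs))
```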
By proving that Pauli strings make independent contributions to the gradient, Uvarov and Biamonte [8] have extended the lower bounds provided by Cerezo _et al._[2] to generic observables - but also require circuits to form local 2-designs. Napp [7] has derived gradient bounds for spatially and algebraically local observables, with lower bounds that Figure 1: Illustration and examples of local vs. mixed vs. global observables. All observables act on \(n\) qubits, but we omit the identity terms for convenience. For instance, \(X\) denotes \(X\otimes I^{\otimes\,n-1}\). are tighter than those of Uvarov and Biamonte [8]. However, they consider a class of circuits whose entangling gates are chosen randomly according to any measure that forms a 2-design. For Periodic Structure Ansatze, Larocca _et al._[35] have conjectured that the scaling of the gradient variance is inversely proportional to the dimension of a dynamical Lie algebra associated to the input state. This has recently been proven independently by Fontana _et al._[36] and Ragone _et al._[37], for observables that are in the Lie algebra associated with the ansatz. While both works provide interesting and novel perspectives on the origins of barren plateaus, they differ from our contributions in requiring circuits to form approximate 2-designs. Leone _et al._[34] have analysed the practical usefulness of the Hardware Efficient Ansatz (HEA), and identify a "Goldilocks scenario where shallow HEAs could achieve a quantum speedup: QML tasks with data satisfying an area law of entanglement". This is encouraging, and we hope our contributions can help refine their insights by loosening their \(t\)-design assumptions. Zhao and Gao [13] have ruled out barren plateaus for quantum convolutional networks and tree tensor networks, but these classes of circuits do not apply to problems that are naturally induced by global observables, as they require tracing out almost all qubits before measurement. They also show that a class of hardware-efficient ansatze induces barren plateaus for sufficient depth - but do not provide lower bounds in order to rule them out for shallow circuits. Finally, Wang _et al._[14] and Zhang _et al._[15] have proposed initialization schemes that rule out barren plateaus without \(t\)-design assumptions. However, the results of Wang _et al._[14] and the main theorem of Zhang _et al._[15] only apply to local observables, while Theorem 4.2 of Zhang _et al._[15] applies to global observables, but only provides non-trivial bounds when the gradient at the origin is sufficiently large. Our work instead focuses on providing tight bounds for generic uniform initialization, a larger class of circuits, and _all observables_. ## III General Results The central contribution of this section is Theorem 1, where we provide tight upper and lower bounds for loss and gradient concentration, for _arbitrary_ observables, and _without any \(t\)-design assumptions_. Appendix A presents the proof and related mathematical techniques. The class of circuits for which the result holds is broad, requiring only the presence of two initial layers of rotation gates which are orthogonal to each other - without which loss and gradients could be uniformly zero, see Appendix B.2. This circuit class, along with notions of light-cones and orthogonality required to state our main result, are formally introduced in Definition 1 below. 
### Main Theorem **Definition 1**.: In this work, we consider parameterized circuits \(U(\theta)\) of the form \[U(\theta)=\prod_{k=0}^{K}W_{k}R_{k}(\theta)\,,\] where \(W_{k}\) are Clifford gates, \(R_{k}(\theta)=\prod_{l}R_{P_{kl}}(\theta_{kl})\) are products of rotation gates generated by Pauli strings \(P_{kl}\), parameters \(\theta_{kl}\) are independent and initialized uniformly over \([-\pi,\pi]\), and \(R_{0}\) begins with two layers of orthogonal single-qubit rotations (see Figure 2). For any mixed state \(\rho\) and any observable \(H=\sum_{\alpha}c_{\alpha}P_{\alpha}\), we let \[\mathcal{L}(\theta)=\operatorname{Tr}\bigl{(}U(\theta)\rho U^{\dagger}(\theta )H\bigr{)}\] be the induced loss, and similarly write \(\mathcal{L}_{\alpha}\) for the loss induced by each \(P_{\alpha}\). For each \(\alpha\), we define the light-cones \(\Delta_{\alpha}^{\text{mean}},\Delta_{\alpha}^{\text{min}}\) as the mean and minimal number of qubits on which \(U^{\dagger}(\theta)P_{\alpha}U(\theta)\) acts non-trivially, where the mean and minimum are taken over all \(\theta\in\{0,\pi/2\}\). These notions are illustrated visually in Appendix B.1. Finally, if \(\rho=\otimes\rho_{i}\) is a _product_ mixed state, let \[\Omega(\rho)=\prod_{i=1}^{n}4\left\langle u_{i}\right|\rho_{i}\left|u_{i} \right\rangle\left\langle v_{i}\right|\rho_{i}\left|v_{i}\right\rangle+2 \operatorname{Tr}\bigl{(}\rho_{i}^{2}\bigr{)}-2\] be a measure of orthogonality between \(\rho\) and the first layer of single-qubit rotations with eigenvectors \(u_{i},v_{i}\). **Theorem 1**.: _For any circuit satisfying Definition 1, any mixed state \(\rho\) and any observable \(H=\sum_{\alpha}c_{\alpha}P_{\alpha}\), each Pauli term makes an independent contribution to the loss and gradient concentrations:_ \[\operatorname{Var}_{\theta}\left[\mathcal{L}\right] =\sum_{\alpha}c_{\alpha}^{2}\operatorname{Var}_{\theta}\left[ \mathcal{L}_{\alpha}\right],\] \[\operatorname{Var}_{\theta}\left[\nabla\mathcal{L}\right] =\sum_{\alpha}c_{\alpha}^{2}\operatorname{Var}_{\theta}\left[ \nabla\mathcal{L}_{\alpha}\right].\] _Moreover, for each \(\alpha\neq 0\), the loss variance is given by_ \[\operatorname{Var}_{\theta}\left[\mathcal{L}_{\alpha}\right]=\left(\frac{1}{2} \right)^{m}\sum_{\theta\in\{0,\pi/2\}^{m}}\mathcal{L}_{\alpha}^{2}(\theta)\,,\] _while the gradient variance satisfies_ \[\left\|\operatorname{Var}_{\theta}\left[\nabla\mathcal{L}_{\alpha}\right] \right\|_{\infty}=\operatorname{Var}_{\theta}\left[\mathcal{L}_{\alpha}\right].\] _Finally, if \(\rho\) is a product state (see Section III.1.2), the loss variance is tightly bounded by_ \[\Omega(\rho)\left(\frac{1}{4}\right)^{\Delta_{\alpha}^{\text{mean}}}\leq \operatorname{Var}_{\theta}\left[\mathcal{L}_{\alpha}\right]\ \leq\ \left(\frac{1}{2}\right)^{\Delta_{\alpha}^{\text{min}}}.\] _In particular, in combination with the previous equation, **at least one** partial derivative \(\partial_{k}\mathcal{L}_{\alpha}\) satisfies this same lower bound, while **all of them** satisfy the upper bound._ Using Jensen's inequality, an immediate consequence of Theorem 1 is that we can rule out barren plateaus provided the _weighted_ mean-cone \(\Delta^{\text{mean}}=\sum_{\alpha}c_{\alpha}\Delta^{\text{mean}}_{\alpha}\) is logarithmically small in \(n\). In particular, this holds if _at least one_ term \(P_{\alpha}\) with non-vanishing \(c_{\alpha}\) has a logarithmically small mean-cone. 
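To make the variance formula of Theorem 1 concrete, the sketch below estimates \(\operatorname{Var}_{\theta}[\mathcal{L}_{\alpha}]\) for a small EfficientSU2-like toy circuit by averaging \(\mathcal{L}_{\alpha}^{2}(\theta)\) over \(\theta\) sampled uniformly from \(\{0,\pi/2\}^{m}\), and compares it with a direct Monte Carlo estimate of the variance under uniform \(\theta\in[-\pi,\pi]^{m}\). The ansatz, observable, and sample sizes are illustrative; at the angles \(\{0,\pi/2\}\) the circuit is Clifford, which is what makes this estimator classically efficient at scale, although here a small statevector simulation suffices.

```python
import numpy as np

# Numerical sketch of Theorem 1: Var_theta[L_alpha] equals the average of L_alpha(theta)^2
# over theta drawn uniformly from {0, pi/2}^m. Toy ansatz: RY and RZ layers + a CNOT chain.
I2 = np.eye(2, dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)

def rz(t):
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

def kron(ops):
    out = np.eye(1, dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

def cnot(n, c, t):
    U = np.zeros((2 ** n, 2 ** n), dtype=complex)
    for b in range(2 ** n):
        bits = [(b >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[c]:
            bits[t] ^= 1
        U[int("".join(map(str, bits)), 2), b] = 1.0
    return U

n, layers = 4, 2
m = 2 * n * layers                                    # number of parameters
ENT = np.eye(2 ** n, dtype=complex)                   # one layer of CNOTs on neighbouring qubits
for c in range(n - 1):
    ENT = cnot(n, c, c + 1) @ ENT

H = kron([Z] + [I2] * (n - 1))                        # 1-local observable Z on qubit 0
psi0 = np.zeros(2 ** n, dtype=complex); psi0[0] = 1.0

def loss(theta):
    psi = psi0
    k = 0
    for _ in range(layers):
        psi = kron([ry(theta[k + q]) for q in range(n)]) @ psi; k += n
        psi = kron([rz(theta[k + q]) for q in range(n)]) @ psi; k += n
        psi = ENT @ psi
    return float(np.real(np.vdot(psi, H @ psi)))

rng = np.random.default_rng(0)
# Theorem-1 estimator: mean of L^2 over theta sampled from {0, pi/2}^m
var_thm = np.mean([loss(rng.choice([0.0, np.pi / 2], size=m)) ** 2 for _ in range(2000)])
# Direct Monte Carlo estimate of Var[L] over theta uniform in [-pi, pi]^m
var_mc = np.var([loss(rng.uniform(-np.pi, np.pi, size=m)) for _ in range(2000)])
print(var_thm, var_mc)   # the two estimates agree up to sampling error
```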
Given any mixed (or local) observable, one generic approach to achieve this is to construct a circuit with logarithmic depth and a local form of entanglement, since light-cones typically grow with each layer of entangling gates. To make this as concrete as possible, we explicitly rule out barren plateaus for the standard class of EfficientSU2 circuits [38] in Corollary 1 below. This can easily be extended to a larger class of circuits with shallow depth and \(O(1)\)-local entanglement, by varying the proof in Appendix A. **Corollary 1**.: _Let \(H\) be a local or mixed observable, let \(U(\theta)\) be an EfficientSU2 circuit with pairwise entanglement, rotation layers \((R_{Y},R_{Z})\) and logarithmic depth (see Figure 2), and let \(\rho=\left|0\right\rangle\left\langle 0\right|\) (see Section III.1.2). Then the corresponding loss does not suffer from barren plateaus, i.e, there is a parameter \(\theta_{k}\) such that_ \[\operatorname{Var}_{\theta}\left[\partial_{k}\mathcal{L}\right]\in\Omega \left(\frac{1}{\operatorname{poly}(n)}\right)\,.\] For completeness, this corollary is illustrated numerically in Figure 3, where we provide explicit examples of local and mixed observables which induce only polynomially vanishing gradients - as opposed to the exponentially vanishing gradients of a global observable. Though a highly artificial example, the fact that mixed observables do not induce barren plateaus has a number of applications, detailed in Section III.2, and in particular, allows us to guarantee non-vanishing initial gradients for qGANs in Section IV. #### ii.1.1 Circuit Assumptions To justify each circuit assumption, we provide a counter-example that violates Theorem 1 when any of the circuit assumptions are loosened, in Appendix B.2. In particular, the lower bound can fail dramatically if the circuit does not include two orthogonal and adjacent rotation layers, which act as 'commutation shields'. The intuition is that an arbitrary observable can otherwise commute entirely with the circuit, hence inducing a uniformly constant loss. For instance, a RealAmplitudes circuit [39] paired with a 1-local \(Y\)-observable may have _uniformly zero gradients_. However, the assumption may be relaxed if more is known about the observable. For instance, a RealAmplitudes circuit paired with an observable that contains no \(Y\) terms will never produce uniformly zero gradients, despite not containing adjacent orthogonal layers, and a polynomial lower-bound can be obtained as in Corollary 1. Note also that the assumption on \(W_{k}\) being Clifford gates does _not_ restrict circuit expressivity, as all non-Clifford gates can be approximated using a sequence of parameterized rotation gates with appropriate parameters (specifically, adding \(T\propto R_{Z}(\pi/4)\) to the Clifford group provides a set of universal quantum gates). #### ii.1.2 Initial States As shown in Appendix B.2, an initial state \(\rho\) that is aligned with an eigenvector of the first-layer rotations may produce uniformly zero gradients - motivating us to capture some notion of orthogonality \(\Omega(\rho)\), as provided for _product_ mixed states in Definition 1. 
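For reference, the orthogonality measure \(\Omega(\rho)\) of Definition 1 is straightforward to evaluate for product states; the sketch below assumes a first layer of \(R_Y\) rotations (so that \(u_i,v_i\) are the \(Y\) eigenvectors) and checks three representative single-qubit inputs.

```python
import numpy as np

# Omega(rho) from Definition 1 for a product state rho = ⊗_i rho_i, with u, v the eigenvectors
# of the first rotation layer (assumed here to be R_Y rotations, so u, v are Y eigenvectors).
u = np.array([1, 1j]) / np.sqrt(2)     # |+i>, eigenvector of Y
v = np.array([1, -1j]) / np.sqrt(2)    # |-i>, eigenvector of Y

def omega(single_qubit_states):
    out = 1.0
    for rho_i in single_qubit_states:
        a = np.real(np.vdot(u, rho_i @ u))
        b = np.real(np.vdot(v, rho_i @ v))
        purity = np.real(np.trace(rho_i @ rho_i))
        out *= 4 * a * b + 2 * purity - 2
    return out

zero   = np.array([[1, 0], [0, 0]], dtype=complex)           # |0><0|
plus_i = 0.5 * np.array([[1, -1j], [1j, 1]], dtype=complex)   # |+i><+i|, aligned with the R_Y axis
mixed  = 0.5 * np.eye(2, dtype=complex)                       # maximally mixed

print(omega([zero] * 4))    # 1.0 : fully orthogonal first layer, non-trivial lower bound
print(omega([plus_i] * 4))  # 0.0 : aligned with the first-layer eigenbasis, bound degenerates
print(omega([mixed] * 4))   # 0.0 : maximally mixed input also yields a trivial lower bound
```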
In the special case where \(\rho=\left|0\right\rangle\left\langle 0\right|^{\otimes n}\) is the zero initial state, and the first rotations are not \(Z\)-rotations, the initial state is both pure and fully orthogonal to the first layer, so we find \(\Omega(\rho)=1\) and the result reduces to \[\left(\frac{1}{4}\right)^{\Delta^{\text{mean}}_{\alpha}}\ \leq\ \operatorname{Var}_{\theta}\left[\mathcal{L}_{\alpha}\right]\ \leq\ \left(\frac{1}{2}\right)^{\Delta^{\text{min}}_{\alpha}}\,.\] However, Theorem 1 can be generalized to all mixed states by decomposing \(\rho=\sum_{\lambda}d_{\lambda}P_{\lambda}\) into the Pauli basis, and defining \(\Omega(\rho,\alpha)\) to be a sum over weights \(d_{\lambda}^{2}\) for which \(P_{\lambda}\) is orthogonal with the first layer, on each qubit line in the light-cone of \(P_{\alpha}\). However, this generalization Figure 2: EfficientSU2 circuit [38] with pairwise entanglement and \((R_{Y},R_{Z})\) rotation layers, illustrating the circuit structure introduced in Theorem 1. After the first layer \(R_{0}\) of single-qubit orthogonal rotations, any Clifford gates \(W_{k}\) and any multi-qubit rotations \(R_{P_{kl}}(\theta_{kl})\) are allowed. is no longer \(\alpha\)-independent, and the bound can no longer be computed efficiently unless more is known about the ansatz. Nonetheless, our proof can readily be extended to all mixed states following this approach. #### ii.1.3 Topological Locality Corollary 1 can also be extended to different notions of locality introduced in the literature by straightforward variants of the proof. We provide a broad outline of these adaptations as follows: 1. If \(H\) contains a topologically \(k\)-local Pauli string with \(k\in O(\log n)\), the lower bound is polynomial. 2. If \(H\) contains an algebraically \(k\)-local Pauli string with \(k\in O(1)\), the lower bound is polynomial. 3. If \(H\) contains an algebraically \(k\)-local Pauli string with \(k\in O(\log n)\), the lower bound is superpolynomial but sub-exponential, namely, there is a parameter \(\theta_{k}\) s.t. \(\operatorname{Var}_{\theta}\left[\partial_{k}\mathcal{L}\right]\in\Omega(1/ \operatorname{poly}(n^{\log n}))\). It should be noted that these extensions are in line with Corollaries 1 and 4 of Napp [7] which hold for circuits whose entangling gates form 2-designs. #### ii.1.4 Light-Cones and Global Observables By loosening \(t\)-design assumptions, Theorem 1 has the consequence that even a _global_ observable may not induce barren plateaus. Although apparently at odds with the results of [2], this should not come as a surprise, since global observables can be converted to local observables with an appropriate change of basis. For instance, consider a circuit with two orthogonal initial layers, followed by a single linear layer of CNOT gates. Then the global observable \(X^{\otimes n}\) propagates through this entanglement layer in such a way that it cancels itself out on all qubit lines except the last, producing a light-cone of size 1. It follows that the gradient variance is lower-bounded by \(1/4\), for any number of qubits \(n\), despite being global. Although this construction is highly artificial, it may be instructive in designing ansatze for problem-specific observables. Moreover, it demonstrates that relaxing \(t\)-design assumptions allows us to capture observable-circuit interaction in finer detail, since prior work relying on these assumptions had found that global observables necessarily induce exponentially vanishing gradients [2]. 
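The light-cone collapse described above is easy to verify with the standard stabilizer update rules for conjugation by CNOT; the sketch below propagates \(X^{\otimes n}\) backwards through one linear layer of CNOTs and reports the surviving support. Gate ordering and which qubit survives are convention-dependent, so this is an illustration rather than a reproduction of the exact construction.

```python
import numpy as np

# Propagate the global observable X^{⊗n} through one linear layer of CNOTs using the
# standard symplectic update for conjugation by CNOT: x_target ^= x_control, z_control ^= z_target
# (phases are ignored, which does not affect the support).
def conjugate_by_cnot(x, z, c, t):
    x[t] ^= x[c]
    z[c] ^= z[t]

n = 8
x = np.ones(n, dtype=int)     # X on every qubit:  X ⊗ X ⊗ ... ⊗ X
z = np.zeros(n, dtype=int)

# Circuit applies CNOT(0,1), CNOT(1,2), ...; computing U† P U conjugates by the gates in
# reverse circuit order.
for c in reversed(range(n - 1)):
    conjugate_by_cnot(x, z, c, c + 1)

support = [q for q in range(n) if x[q] or z[q]]
print(support)   # a single qubit remains -> light-cone of size 1 despite a global observable
```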
The upshot is that for realistic circuit classes, what determines gradient concentration is not algebraic locality, circuit depth or entanglement _per se_, but the _interaction_ between the observable and circuit, which is captured by the size of sub-light-cones. For a more intuitive and visual illustration of how these light-cones arise, and how these notions improve the bounds obtained in prior work, please refer to Appendix B.1.

#### ii.1.5 Quantum vs. Classical

Finally, the examples above highlight a central difference between quantum barren plateaus and the vanishing gradient problem in classical neural networks, where it is typically only the _depth_ of the circuit which causes gradients to vanish, by virtue of composition of activation functions across layers. Instead, in quantum circuits, gradient concentration is a direct product of how many qubits are 'hit' by the observable (the _width_), and thus, whether the sphere on which they live is sufficiently high-dimensional as to induce a concentration of measure phenomenon. Depth of the circuit, in itself, _does not_ induce vanishing gradients - it only does so indirectly because each layer typically entangles more and more qubits together, thus broadening the light-cone. Another way of making this point is that high _local_ expressivity, i.e. high-depth on a local part of the circuit, does not induce a barren plateau - whereas the analogous situation for classical neural networks _would_ induce vanishing gradients.

### Applications

The set of problems that can be formulated as VQAs, hence amenable to our results, finds a variety of applications. These range across chemical ground state problems [19; 40], black box and polynomial binary optimization [41; 42; 43; 44], and distribution learning with generative quantum machine learning such as qGANs [27; 29; 30; 45; 46].

Figure 3: Variance of the first-parameter gradient \(\operatorname{Var}\left[\partial_{1}\mathcal{L}\right]\) for a local, global, and mixed observable paired with an EfficientSU2 circuit of logarithmic depth. The mixed observable \(H=Z+5Z^{\otimes n}\) has a larger gradient variance than either of its Pauli terms (Theorem 1), and therefore, does not induce barren plateaus (Corollary 1). However, the fact that global gradients vanish exponentially _on average_ does not imply we ought to use local approximations of loss functions – because global terms can contribute significantly later in training, as numerically exemplified in Figure 4a.

#### ii.1.1 Quantum Chemistry

VQAs have found their application in quantum chemistry to study molecular properties or chemical reactions. One particularly interesting ground state problem is the calculation of electronic structures [47; 48]. In addition to these static properties, one also aims to run dynamical simulations or thermal state preparations with variational methods [49; 50; 51; 52; 53]. The goal of tackling these problems with quantum computers is to achieve more accurate mid- to large-scale simulations. While many quantum chemistry problems are governed by local correlations, there still exists a variety of interesting problems where the underlying Hamiltonian is given as a mixed observable, e.g., hydrogen chains [54; 55; 56; 57; 58], vibrational bosonic systems [59; 60; 61], and downfolded electronic Hamiltonians [62; 63; 64].
Even many-body systems [65; 66; 67; 68; 69] can exhibit a global nature when considered in a first-quantization formulation [70] or after application of certain fermion-to-qubit mappings [71; 72; 73], which are required to map fermionic creation and annihilation operators to Pauli operators.

#### ii.1.2 Binary Optimization

In unconstrained black-box binary optimization [41; 43], the goal is to minimize a function \(f:\{0,1\}^{n}\to\mathbb{R}\) which we can evaluate, but whose closed-form expression is unknown. Importantly, this problem can be formulated as a VQA, with a diagonal observable given by \(H=\sum_{x}f\left(x\right)\ket{x}\bra{x}\), where the sum ranges over bitstrings \(x\) of length \(n\). To decompose this into the Pauli basis, we write \(Z_{\alpha}\coloneqq\bigotimes Z^{\alpha_{i}}\) for any \(\alpha\in\{0,1\}^{n}\), with \(Z^{0}=I\) and \(Z^{1}=Z\). Then,

\[H=\sum_{\alpha}c_{\alpha}Z_{\alpha}\coloneqq\sum_{\alpha}\left(\frac{1}{2^{n}}\sum_{x}\left(-1\right)^{\alpha\cdot x}f(x)\right)Z_{\alpha}\,. \tag{2}\]

Since each coefficient \(c_{\alpha}\) comes with an exponentially small pre-factor \(1/2^{n}\), it is not a priori obvious whether this observable is 'mainly' local or global. The prevailing approach has been to check this empirically, as illustrated by Zoufal _et al._ [41], who show that a number of application-specific observables can be fitted to 2-local models with high fidelity. However, Theorem 1 implies that whether or not the observable is 'close' to a local one is, in fact, beside the point: all that is necessary to exclude barren plateaus is for _some_ local coefficient \(c_{\alpha}\) to be non-vanishing, _irrespective_ of global terms. Moreover, these coefficients can easily (and simultaneously) be estimated by Monte Carlo sampling of the black box \(f(x)\), with an estimation error that scales as \(O(1/\sqrt{N})\) in the number of samples \(N\). Crucially, this is _independent_ from the number of qubits \(n\). If at least one local coefficient is found to be non-vanishing, Corollary 1 readily guarantees initial trainability for typical shallow circuits. If not, this knowledge could significantly help us design the ansatz. For instance, if a global coefficient is found to be non-vanishing, one may add an appropriate entangling layer, acting as a basis transformation, in order to transform a global term to a local one, as suggested by the example from Section III.1.4.

A special case of this setting is when \(f\) is not a black box but an explicit polynomial \(f(x)=\sum_{I\in\mathcal{I}}c_{I}x_{I}\), where \(\mathcal{I}\) is a set of multi-indices, \(c_{I}\in\mathbb{R}\), and \(x_{I}=\prod_{i\in I}x_{i}\). A well-known instance is quadratic unconstrained binary optimization (QUBO) [74], where \(f\) is required to be _quadratic_. In this case, the corresponding VQA formulation is given by a 2-local observable, which will not induce barren plateaus. In Appendix C, we generalize this correspondence by producing a bijective mapping between polynomials of degree \(k\) and \(k\)-local observables, implying that generic polynomial optimization may include arbitrary global terms. Nonetheless, Theorem 1 guarantees that such observables will _not_ induce barren plateaus provided they include a non-vanishing local term, which broadens the spectrum of applications from QUBO to arbitrary polynomial binary optimization.
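To make the estimation procedure above concrete, the following sketch (our own illustration with a placeholder black box \(f\), not code from the paper) computes the exact coefficients \(c_{\alpha}\) of Equation (2) for a small instance and estimates a 1-local coefficient by uniform Monte Carlo sampling of \(f\).

```python
# Minimal sketch (illustration only): exact Pauli-Z coefficients of a diagonal observable
# H = sum_x f(x)|x><x| via Equation (2), and a Monte Carlo estimate of one 1-local
# coefficient c_alpha obtained by sampling the black box f on uniform bitstrings.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 6
f_table = rng.normal(size=2 ** n)          # placeholder black box, tabulated for the demo

def f(x_bits):
    """Black-box evaluation: x_bits is a length-n array of 0/1."""
    idx = int("".join(map(str, x_bits)), 2)
    return f_table[idx]

def exact_coefficient(alpha):
    """c_alpha = 2^{-n} * sum_x (-1)^{alpha . x} f(x)   (Equation (2))."""
    total = 0.0
    for x in itertools.product([0, 1], repeat=n):
        x = np.array(x)
        total += (-1) ** int(alpha @ x) * f(x)
    return total / 2 ** n

def mc_coefficient(alpha, num_samples=2000):
    """Monte Carlo estimate of c_alpha: average of (-1)^{alpha . x} f(x) over uniform x."""
    xs = rng.integers(0, 2, size=(num_samples, n))
    signs = (-1) ** (xs @ alpha)
    vals = np.array([f(x) for x in xs])
    return float(np.mean(signs * vals))

alpha = np.zeros(n, dtype=int)
alpha[0] = 1                               # the 1-local coefficient of Z on qubit 0
print("exact c_alpha :", exact_coefficient(alpha))
print("MC estimate   :", mc_coefficient(alpha))   # error ~ O(1/sqrt(num_samples))
```

Averaging \((-1)^{\alpha\cdot x}f(x)\) over uniformly random bitstrings is an unbiased estimator of \(c_{\alpha}\), which is why the accuracy depends only on the number of samples and not on \(n\).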
#### ii.1.3 Generative QML

Lastly, generative quantum machine learning is a model class that learns to encode the probability distribution underlying given data by employing quantum resources, which has been shown to facilitate approximate but efficient quantum data loading [75; 27]. However, generative quantum models such as quantum Born machines [76; 77] or quantum Boltzmann machines [78; 79; 53] that are based on explicit loss functions, for example the Kullback-Leibler divergence [80], inherently suffer from observable-induced barren plateaus [81; 20]. To alleviate this problem, Rudolph _et al._ [20] have suggested the use of implicit loss functions which are approximately local, such as the Maximum Mean Discrepancy loss with a particular class of Gaussian kernels. However, the trainability of models with this type of loss function has so far only been proven for tensor product ansatze. Orthogonally, local formulations and approximations of the actual loss functions have been suggested as a potential remedy [21; 22; 23; 10; 24; 16; 20]. However, although this approach can guarantee non-vanishing gradients, it may also lead to spurious local minima [25; 26]. On the other hand, hybrid qGANs [27; 29; 30; 46] are also based on an implicit loss function which can be reformulated as a ground state problem with respect to a global (diagonal) observable. This problem falls outside the scope of trainability guarantees proven by Cerezo _et al._ [2], but using our results, it is possible for qGANs not to suffer from barren plateaus provided local terms have sufficiently large weights. We focus on this setting in the following section.

## IV QGANs - Theory and Experiments

In this section, we show that hybrid qGANs represent a promising and scalable class of GQML models. Our central contribution is to prove that qGANs do not suffer from barren plateaus for a large class of classical discriminators of arbitrary depth, and quantum generators of logarithmic depth. Although the corresponding observable decomposes as a sum of local and global terms, we guarantee in Theorem 2 that 1-local weights stay _constant_ in the number of qubits. In combination with Theorem 1 from the previous section, this allows us to exclude barren plateaus for a realistic class of shallow generators, as made specific in Corollary 2. Additionally, we verify our insights with numerical experiments, by training a qGAN to learn a 2D mixture of Gaussian distributions with 6 and 16 qubits.

### Background

A generative adversarial network (GAN) aims to learn the distribution underlying a given training data set. It consists of two opposing parameterized networks, a generator and a discriminator. The goal of the generator is to generate data samples that are similar to a training data set and the goal of the discriminator is to correctly classify data samples as true (from the training data set) or fake (stemming from the generator). We consider hybrid qGANs where the discriminator is a **classical** neural network \(D_{\phi}:\mathbb{B}^{n}\rightarrow\mathbb{R}\) taking bitstrings \(x\in\mathbb{B}^{n}=\{0,1\}^{n}\) as input, while the generator is a parameterized **quantum** circuit \(G_{\theta}\) acting on the \(n\)-qubit Hilbert space \(\mathcal{H}^{n}\), resulting in a hybrid qGAN. If the training data \(x\in X\) is not initially binary, we assume that it can be transformed by some mapping \(T:X\rightarrow\mathbb{B}^{n}\) before being fed into the discriminator.
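As a concrete illustration of such a mapping \(T\) (the paper does not prescribe a specific encoding, so the grid bounds and bit allocation below are our own placeholder choices), one can discretize each coordinate of a 2D sample onto a uniform grid and concatenate the resulting bits.

```python
# Minimal sketch (illustration only) of a binarization map T : X -> {0,1}^n for 2D data,
# assuming a uniform grid over a bounding box and n/2 bits per coordinate.
import numpy as np

def binarize_2d(samples, n_bits, lo=-1.0, hi=1.0):
    """Map an array of shape (m, 2) of real samples to m bitstrings of length n_bits."""
    assert n_bits % 2 == 0
    bits_per_dim = n_bits // 2
    levels = 2 ** bits_per_dim
    # Clip to the bounding box and quantize each coordinate to an integer level.
    clipped = np.clip(samples, lo, hi)
    idx = np.floor((clipped - lo) / (hi - lo) * (levels - 1) + 0.5).astype(int)
    bitstrings = []
    for ix, iy in idx:
        bitstrings.append(format(int(ix), f"0{bits_per_dim}b")
                          + format(int(iy), f"0{bits_per_dim}b"))
    return bitstrings

# Example: samples from a 2D Gaussian mixture, encoded with n = 6 bits (3 per coordinate).
rng = np.random.default_rng(0)
mix = np.concatenate([rng.normal([-0.5, -0.5], 0.1, size=(4, 2)),
                      rng.normal([0.5, 0.5], 0.1, size=(4, 2))])
print(binarize_2d(mix, n_bits=6))
```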
We write \(\theta\) and \(\phi\) for generator and discriminator parameters, and the generator output as \(G_{\theta}\ket{0}^{\otimes n}=\sum_{x\in\mathbb{B}^{n}}\sqrt{p_{\theta}(x)}e^{iq_{\theta}(x)}\ket{x}\). The generator's goal is to encode a distribution \(p_{\theta}\) matching the original distribution \(p_{D}\) underlying the training data, while the discriminator attempts to distinguish between them. The phase factor \(e^{iq_{\theta}(x)}\) has no impact on the distribution learnt by the generator, and may thus be neglected. We assume the generator and discriminator losses can be written as \(\mathcal{L}_{G}(\theta,\phi)=\mathbb{E}_{x\sim p_{\theta}}\left[F(D_{\phi}(x))\right]\) and \(\mathcal{L}_{D}(\theta,\phi)=\mathbb{E}_{x\sim p_{D}}\left[F(D_{\phi}(x))\right]+\mathbb{E}_{x\sim p_{\theta}}\left[\tilde{F}(D_{\phi}(x))\right],\) for some analytic functions \(F,\tilde{F}:\mathbb{R}\rightarrow\mathbb{R}\), encompassing both min-max [82] and Wasserstein [83] GANs.

While the (classical) discriminator gradients may be computed with standard automatic differentiation techniques, the (quantum) generator gradients require further considerations. The latter reads \(\nabla_{\theta}\mathcal{L}_{G}(\theta,\phi)=\sum_{x}\nabla_{\theta}p_{\theta}(x)F(D_{\phi}(x))\), with the individual gradients \(\nabla_{\theta}p_{\theta}(x)\) being subject to observable-induced barren plateaus [2] because \(\nabla_{\theta}p_{\theta}(x)=\nabla_{\theta}\bra{0}G_{\theta}^{\dagger}OG_{\theta}\ket{0}\), where \(O=\ket{x}\bra{x}\) is a global projector. However, this intuition is misleading: the generator loss can be rewritten as \(\mathcal{L}_{G}(\theta,\phi)=\bra{0}G_{\theta}^{\dagger}H_{\phi}G_{\theta}\ket{0}\), with a global (diagonal) observable \(H_{\phi}=\sum_{x}F(D_{\phi}(x))\ket{x}\bra{x}\) that depends only on the discriminator. Expanding each projector into the \(Z\)-Pauli basis, as in Equation 2, we obtain \(H_{\phi}=\sum_{\alpha}c_{\alpha}(\phi)Z_{\alpha}\), with weights \(c_{\alpha}(\phi)\) that are _independent from_ \(\theta\). This allows us to split the loss into a weighted sum that includes local and global terms \(\mathcal{L}_{\alpha}(\theta)=\bra{0}G_{\theta}^{\dagger}Z_{\alpha}G_{\theta}\ket{0}\) which are _independent from_ \(\phi\):

\[\mathcal{L}_{G}(\theta,\phi)=\sum_{\alpha}c_{\alpha}(\phi)\mathcal{L}_{\alpha}(\theta)\,. \tag{3}\]

Hence, following Theorem 1, \(H_{\phi}\) will not necessarily induce a barren plateau if local weights \(c_{\alpha}(\phi)\) are sufficiently large for local contributions to the gradient to be non-exponentially-vanishing. This is what we prove in Theorem 2, for the following class of (arbitrarily deep) discriminators with leaky-ReLU activation - the typical activation for state-of-the-art classical GANs [84, 85, 86].

**Definition 2** (Discriminator Class).: In this work, we consider discriminators \(D_{\phi}:\mathbb{B}^{n}\rightarrow\mathbb{R}\) in the class \(\mathcal{D}\) of fully-connected neural networks with \(L\) hidden layers, leaky-ReLU hidden activations [87], and an output activation \(F(x)=\log(\sigma(x))\) for min-max GANs, or \(F(x)=x\) for Wasserstein GANs. For each layer \(l\), we write \(m_{l}\) for the number of neurons (width), \(\sigma_{l}\) for the standard deviation of initial weights (all parameters except biases), and \(\gamma_{l}\) for the leaky-ReLU parameter. In particular, the neural net has \(m_{0}=n\) inputs and \(m_{L+1}=1\) output. We assume that weights and biases are initialized following any i.i.d.
symmetric distributions, which is satisfied for all typical state-of-the-art initializations used in classical machine learning, including Kaiming and Xavier [88, 89]. ### Main Theorem The following theorem guarantees that the observable \(H_{\phi}\) induced by a discriminator \(D_{\phi}\in\mathcal{D}\) has 1-local weights that are not only non-vanishing, but stay _constant_ in the number of qubits provided parameters are initialised as a function of discriminator width. The proof can be found in Appendix D. **Theorem 2**.: _Let \(D_{\phi}\in\mathcal{D}\) be a discriminator of depth \(L\). For any 1-local weight \(\alpha\), we have_ \[\mathbb{E}_{\phi}\left[c_{\alpha}(\phi)^{2}\right]\geq\frac{\sigma_{L+1}^{2}}{ 16}\ \prod_{l=1}^{L}\frac{m_{l}\sigma_{l}^{2}(1+\gamma_{l})^{2}}{4}\,.\] _In particular, initialising parameters such that \(m_{l}\sigma_{l}^{2}\geq 4\) for each \(l\), the bound reduces to \(\mathbb{E}_{\phi}\left[c_{\alpha}(\phi)^{2}\right]\geq\sigma_{L+1}^{2}/16\), which is **constant** both in the number of qubits \(n\) and the discriminator depth \(L\)._ Combining this with our results from the previous section, we can rule out barren plateaus for qGANs with shallow generators and arbitrarily deep discriminators. Corollary 2 below applies this concretely to EfficientSU2 circuits, but as already discussed, can easily be extended to other circuit classes with shallow depth and \(O(1)\)-local entangling layers. **Corollary 2** (Absence of barren plateaus in qGANs).: _For any discriminator \(D_{\phi}\in\mathcal{D}\) satisfying \(m_{l}\sigma_{l}^{2}\geq 4\) for each layer \(l\), and any EfficientSU2 generator \(G_{\theta}\in\mathcal{G}\) with pairwise entanglement and logarithmic depth \(K\in O(\log n)\), there exists a parameter \(\theta_{k}\) such that_ \[\mathrm{Var}_{\theta,\phi}\left[\partial_{k}\mathcal{L}_{G}\right]\in\Omega \left(\frac{1}{\mathrm{poly}(n)}\right)\,.\] This guarantees that the generator will not suffer from barren plateaus. One might also wonder whether the (classical) discriminator could have vanishing gradients - but this has extensively been studied in classical machine learning, and using a leaky-ReLU activation function is known to mitigate the problem significantly [90]. As already emphasised, note that the absence of barren plateaus is a necessary condition for trainability, but in no way guarantees that the training will lead to an optimal solution. Nonetheless, the fact that Pauli terms contribute independently (Theorem 1) proves that local approximations of the loss function can only harm the training process - by increasing initial concentration of gradients and potentially leading to undesirable local minima. While initially exponentially small, the global terms may kick in substantially over the course of training, as displayed numerically in the following section. ### Experiments #### iv.3.1 Setup In order to verify our theoretical results and compare the hybrid quantum setting with classical GANs, we reproduce an experiment from Letcher _et al._[91] by training a qGAN to learn a 2D mixture of Gaussian distributions with \(n=6\) and \(n=16\) qubits, using Qiskit [92]. The generator is an EfficientSU2 circuit with two layers of single-qubit \(R_{Y}\) and \(R_{X}\) rotations, separated by a single layer of CNOT pairwise entanglement, satisfying the requirements of Corollary 2. The parameters \(\theta\) are initialized uniformly over \([-\pi,\pi]\). 
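For concreteness, the following is a minimal sketch of a generator circuit matching this description - our own illustration rather than the authors' code - assuming a recent Qiskit version in which EfficientSU2 accepts the su2_gates, entanglement and reps arguments used below.

```python
# Minimal sketch of the generator described above (illustration only): an EfficientSU2
# circuit with R_Y/R_X rotation layers, one layer of pairwise CNOT entanglement (reps=1),
# and parameters drawn uniformly from [-pi, pi]. Assumes a recent Qiskit version.
import numpy as np
from qiskit.circuit.library import EfficientSU2
from qiskit.quantum_info import Statevector

n = 6
generator = EfficientSU2(n, su2_gates=["ry", "rx"], entanglement="pairwise", reps=1)

rng = np.random.default_rng(0)
theta = rng.uniform(-np.pi, np.pi, generator.num_parameters)
bound = generator.assign_parameters(theta)

# p_theta(x): Born probabilities of the generator state over bitstrings x.
p_theta = Statevector(bound).probabilities()
print(p_theta.shape, p_theta.sum())   # (64,) 1.0
```

With reps=1 this gives two rotation layers separated by one pairwise-CNOT layer, i.e. \(4n\) parameters in total, which matches the 64 parameters quoted below for the 16-qubit generator.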
The discriminator is a fully-connected neural network with \(L=3\) hidden layers of width \(m=64\) each, with leaky-ReLU hidden activation. For discriminator parameters \(\phi\), we use a slight modification of Pytorch's [93] default initialization scheme, Kaiming uniform [89], with fan mode = fan_out and twice the standard deviation. This guarantees constant 1-local weights following Theorem 2, while being in line with state-of-the-art initialization schemes for GANs. Generator and discriminator parameters are optimized using Adam [94], with learning rate \(\alpha=0.01\) and momentum terms \(\beta=(0.7,0.999)\). The real and generated batch sizes are 256, and the number of discriminator updates per generator update is 5. To enable a clear presentation, the results presented here are for a single seed of initial parameters and with exact gradients. To provide evidence that these results are statistically significant and robust to approximate gradients, we provide additional experiments in Appendix E.

#### iv.3.2 Results

The relative entropy between true and generated probability functions, as well as the generator gradients over the course of training, are shown in Figure 4(a). In part (b), we plot the generator PDF (left) at the end of training, which can be seen to match the true PDF (right) quasi-perfectly for 6 qubits, and reasonably well for 16 qubits. In both experiments, the norm of the full gradient at initialization is seen to be large - in line with our theoretical guarantees (Corollary 2). The relative entropy consistently decreases over the course of training, and stabilizes towards 300 epochs. Interestingly, our results are on par with classical GANs despite a massively smaller number of parameters: 64 parameters for our 16-qubit quantum generator, reaching a relative entropy of \(\sim\)0.06 in 300 training epochs, compared with 764930 parameters for the classical network used by Letcher _et al._ [91], reaching a relative entropy of \(\sim\)0.04 in 10000 training epochs, with batch sizes of 256 in both cases. This is in line with previous empirical results [95] which indicated beneficial capacity properties and faster training convergence with parameterized quantum models compared to comparable classical neural networks. However, it remains to check whether this performance can still be achieved in the presence of noise from physical quantum hardware, and to see how wall-clock times compare.

#### iv.3.3 Global Contributions

In order to investigate whether global terms of the observable contribute significantly over the course of training, Figure 4(a) includes the norm of the gradients associated with different \(k\)-local terms, with \(k=1,5\) in the 6-qubit case and \(k=1,8\) in the 16-qubit case. We notice that the global terms are initially much smaller than the 1-local terms, in agreement with theoretical results, but kick in significantly towards the 100th epoch. As highlighted in the dashed boxes, this seems to coincide with the relative entropy stagnating a little, and then being pushed further down, possibly thanks to the contribution of these global terms. This observation was reproduced across multiple seeds, and suggests that global terms are central in reaching desirable local minima - instead of discarding them, as suggested in prior work [16; 20; 21; 22; 23].

#### iv.3.4 Limitations

It is useful to notice that our 16-qubit experiments consistently led to 'spiky' generator distributions, as shown in Figure 4(b).
Indeed, while the ground distribution is a _continuous_ Gaussian mixture, the generator encodes this into amplitudes \(p_{\theta}(x)\), where \(x\) ranges over a _discrete_ set \(\{0,1\}^{n}\). Without introducing an inductive bias that forces 'adjacent' inputs \(x\) to have similar amplitudes, the generator is unfortunately blind to the underlying continuity, attempting to match the Gaussian mixture as if it were discrete. This may explain the resulting spikiness, and calls for further work on inductive biases and the optimization landscape of qGANs more generally.

Figure 4: Results for 6-qubit (top) and 16-qubit (bottom) experiments. (a): Relative entropy between true and generated distributions, and 1-norm of generator gradients, over the course of training. The **full** gradient is simply the gradient of the generator loss \(\mathcal{L}_{\mathcal{G}}\), while each \(k\)**-local** gradient is the gradient of \(\sum_{|\alpha|=k}c_{\alpha}\mathcal{L}_{\alpha}\) from Equation (3). Dashed boxes correspond to non-local gradients kicking in significantly, coinciding with a period of stagnation in the relative entropy. (b): True (left) and generated (right) probability density functions at the end of training.

## V Conclusion and outlook

This paper extends prior work on barren plateaus, by lifting \(t\)-design assumptions and providing tight upper and lower bounds on gradient concentration for a large class of parameterized quantum circuits, and arbitrary observables (Theorem 1). In particular, our result emphasizes that gradient concentration is not exclusively determined by observable locality, circuit depth, and entanglement, but instead, strongly relies on the observable-ansatz _interaction_, formulated in terms of light-cones. One fortunate consequence is that _mixed_ observables do _not_ necessarily induce barren plateaus (Corollary 1), and even _global_ observables may have non-vanishing gradients if the circuit is chosen wisely (Section III.1.4). An interesting direction for future work would be to extend our proof techniques to additional circuit classes of practical relevance, such as Hamiltonian variational ansatze.

Finally, we leverage our results to show that qGANs are an auspicious class of GQML algorithms, as they do not suffer from barren plateaus despite the corresponding observable being mixed (Theorem 2 and Corollary 2). We illustrate these theoretical results with numerical experiments, where we train a qGAN to learn Gaussian mixtures with up to 16 qubits, and provide evidence that global contributions kick in significantly during training. Nonetheless, it is important to remember that non-vanishing gradients, though necessary, are not a sufficient condition in the over-arching goal of reaching a global (or even a _good_ local) minimum of the loss. In particular, while our work rules out barren plateaus for qGANs with shallow generators, future work calls for a deeper understanding of VQA optimization landscapes, and the potential introduction of an inductive bias. The latter can be particularly valuable for qGANs, which aim to efficiently learn continuous distributions, as suggested by the 16-qubit experiments presented in this work.

**Code Availability.** The code can be made available upon reasonable request.

**Acknowledgments.** The authors thank Francesco Tacchino and Alberto Baiardi for their input on quantum chemistry applications, Marco Cerezo for interesting discussions, and Kunal Sharma and Zoe Holmes for providing valuable feedback and input on the manuscript.
The training of parameterized models depends heavily on the landscape of the underlying loss function. In particular, vanishing gradients are a central bottleneck for the scalability of variational quantum algorithms (VQAs), and they arise in a variety of ways. However, most existing results bounding gradients require t-design circuit assumptions, which are not satisfied in practice. In this work, we loosen all of these assumptions and derive upper and lower bounds that tightly constrain the concentration of the loss and its gradients, for parameterized quantum circuits and arbitrary observables. This is a considerably stronger result than prior work. Moreover, these bounds, as well as the variance of the loss, can be estimated efficiently and classically, providing a practical tool for studying the loss landscapes of VQA models. This, for the circuit/observable …
2309.12859
Higher order isometric shift operator on the de Branges-Rovnyak space
The de Branges-Rovnyak space $H(b)$ is generated by a bounded analytic function $b$ in the unit ball of $H^\infty$. When $b$ is a nonextreme point, the space $H(b)$ is invariant by the forward shift operator $M_z$. We show that the $H(b)$ spaces provide model spaces for expansive quasi-analytic $2n$-isometric operators $T$ with $T^*T - I$ being rank one. Then we describe the invariant subspaces of the $2n$-isometric forward shift operator $M_z$ on $H(b)$.
Caixing Gu, Shuaibing Luo
2023-09-22T13:32:03
http://arxiv.org/abs/2309.12859v1
# Higher order isometric shift operator on the de Branges-Rovnyak space

###### Abstract.

The de Branges-Rovnyak space \(H(b)\) is generated by a bounded analytic function \(b\) in the unit ball of \(H^{\infty}\). When \(b\) is a nonextreme point, the space \(H(b)\) is invariant by the forward shift operator \(M_{z}\). We show that the \(H(b)\) spaces provide model spaces for expansive quasi-analytic \(2n\)-isometric operators \(T\) with \(T^{*}T-I\) being rank one. Then we describe the invariant subspaces of the \(2n\)-isometric forward shift operator \(M_{z}\) on \(H(b)\).

de Branges-Rovnyak space; m-isometry; operator model; shift invariant subspace. MSC (2010): 46E22, 47A15, 47A45.

## Introduction

Let \(\mathbb{D}\) be the open unit disc in the complex plane \(\mathbb{C}\), and \(\mathbb{T}\) the unit circle. Let \(H^{2}\) be the Hardy space on \(\mathbb{D}\), and \(H^{\infty}\) the space of bounded analytic functions on \(\mathbb{D}\) with norm \[\|f\|_{\infty}=\sup_{z\in\mathbb{D}}|f(z)|.\] If \(P\) is the orthogonal projection from \(L^{2}(\mathbb{T})\) onto \(H^{2}\), then for \(\phi\in L^{\infty}(\mathbb{T})\), the Toeplitz operator \(T_{\phi}:H^{2}\to H^{2}\) is defined by \[T_{\phi}f=P(\phi f),\quad f\in H^{2}.\] For \(b\) in the unit ball of \(H^{\infty}\), the de Branges-Rovnyak space \(H(b)\) is the image of \(H^{2}\) under the operator \((I-T_{b}T_{\overline{b}})^{1/2}\) with the inner product \[\langle(I-T_{b}T_{\overline{b}})^{1/2}f,(I-T_{b}T_{\overline{b}})^{1/2}g\rangle=\langle f,g\rangle_{H^{2}},\quad f,g\in[\ker(I-T_{b}T_{\overline{b}})^{1/2}]^{\perp}.\] It is known that the \(H(b)\) space is a reproducing kernel Hilbert space with kernel \[K^{b}_{\lambda}(z)=\frac{1-\overline{b(\lambda)}b(z)}{1-\overline{\lambda}z},\quad\lambda,z\in\mathbb{D}.\] So \(\langle f,K^{b}_{\lambda}\rangle=f(\lambda)\) for all \(f\in H(b)\) and \(\lambda\in\mathbb{D}\).

\(H(b)\) spaces not only possess a beautiful structure, but also play an important role in many aspects of complex analysis and operator theory, most importantly, in the model theory for certain types of contractions, see the books of de Branges and Rovnyak [9], Sarason [28] and the recent monographs of Fricain and Mashreghi [13]. Many properties of \(H(b)\) depend on whether \(b\) is or is not an extreme point of the unit ball of \(H^{\infty}\). If \(M_{z}\) is the forward shift operator on \(H^{2}\), then \(H(b)\) is invariant by \(M_{z}\) if and only if \(b\) is a nonextreme point of the unit ball of \(H^{\infty}\) ([28, P23]). Forward shift invariant subspaces are a natural object of study, and their investigation has yielded profound results in operator theory, e.g. Beurling's theorem for the Hardy space ([6]), Aleman-Richter-Sundberg's theorem for the Bergman space ([5]), Richter-Sundberg's theorems for the Dirichlet space ([21, 25]). Despite the ongoing and active research on \(H(b)\) spaces, we know little about the forward shift invariant subspaces of \(H(b)\). In fact, the situation for \(H(b)\) is difficult ([13, Chap24, P352]). In 2019, Aleman and Malman [4] made great progress on this problem. They studied the de Branges-Rovnyak spaces \(H(B)\) generated by Schur class functions \(B\), and characterized the \(M_{z}\)-invariant subspaces of \(H(B)\) when \(H(B)\) is of finite rank. In this paper, when \(B=b\) is a rational function, we give another characterization of the \(M_{z}\)-invariant subspaces of \(H(b)\), which provides some additional information about the structure of the \(M_{z}\)-invariant subspaces of \(H(b)\).
In order to better understand operators on a Hilbert space, one may try to find models for certain classes, that is, a subclass of concrete operators with the property that any given operator from the class is unitarily equivalent to an element of the subclass. It is known that the vector-valued de Branges-Rovnyak spaces \(H(B)\) provide canonical model spaces for certain types of contractions. It is amazing that \(H(B)\) spaces also provide model spaces for expansive analytic operators, see [4, Proposition 2.1], also see [20, Theorem 4.6] for a detailed account. In this paper, we will show in detail that the scalar-valued de Branges-Rovnyak spaces \(H(b)\) provide model spaces for what we call the expansive quasi-analytic operators \(A\) with \(A^{*}A-I\) being rank one, see Definition 1.4 for the meaning of quasi-analytic operators. The main results of this paper are the following two theorems.

**Theorem 0.1**.: _Every quasi-analytic strict \(2n\)-isometry \(A\) with \(A^{*}A-I=w\otimes w\) is unitarily equivalent to \((M_{z},H(b))\) for some rational function \(b\) of degree \(n\) with \(b(0)=0\)._

When \(b\) is a nonextreme point of the unit ball of \(H^{\infty}\), then there exists a unique outer function \(a\) such that \(a(0)>0\) and \[\left|a(z)\right|^{2}+\left|b(z)\right|^{2}=1,\quad z\in\mathbb{T}. \tag{0.1}\] We call \(a\) the pythagorean mate of \(b\).

**Theorem 0.2**.: _Let \(T=(M_{z},H(b))\) be a strict \(2n\)-isometry on \(H(b)\). Then the pythagorean mate of \(b\) has a single zero of multiplicity \(n\) at some point \(\overline{\lambda}\in\mathbb{T}\), and every nonzero \(T\)-invariant subspace is of the form \([(z-\overline{\lambda})^{j}\theta]_{T}\), where \(j\in\{0,1,\cdots,n\}\) and \(\theta\) is an inner function such that \((z-\overline{\lambda})^{j}\theta\in H(b)\)._

When we say \(T=(M_{z},H(b))\) is a \(2n\)-isometry on \(H(b)\) we always mean that \(M_{z}\) is a bounded operator on \(H(b)\), and this happens if and only if \(b\) is a nonextreme point of the unit ball of \(H^{\infty}\). In Section 1, we introduce the background of \(m\)-isometries, and then prove Theorem 0.1 using the rank one extension process. In Section 2, we present the proof of Theorem 0.2.

#### 1. Operator model

#### 1.1. M-isometry on de Branges-Rovnyak space

A bounded linear operator \(T\) on a Hilbert space \(H\) is an \(m\)-isometry for some positive integer \(m\) if \[\beta_{m}(T):=\sum_{k=0}^{m}(-1)^{m-k}\binom{m}{k}y^{k}x^{k}|_{y=T^{*},x=T}=\sum_{k=0}^{m}(-1)^{m-k}\binom{m}{k}T^{*k}T^{k}=0.\] Note that \[\beta_{m+1}(T)=T^{*}\beta_{m}(T)T-\beta_{m}(T),\] so if \(T\) is an \(m\)-isometry, then \(T\) is an \(n\)-isometry for all \(n\geq m.\) We say \(T\) is a strict \(m\)-isometry if \(T\) is an \(m\)-isometry but \(T\) is not an \((m-1)\)-isometry.

The study of \(m\)-isometries originates from the work of Agler [1], followed by a series of papers by Agler and Stankus [2]. In 1991, Richter [22] investigated the cyclic analytic \(2\)-isometries and proved that any cyclic analytic \(2\)-isometry is unitarily equivalent to the shift operator \(M_{z}\) on a Dirichlet type space \(D(\mu)\) for some measure \(\mu\) supported on the unit circle, where \[D(\mu)=\left\{f\in\mathrm{H}(\mathbb{D}):\int_{\mathbb{D}}|f^{\prime}(z)|^{2}\int_{\mathbb{T}}\frac{1-|z|^{2}}{|z-\zeta|^{2}}d\mu(\zeta)dA(z)<\infty\right\}.\] In 2014 and 2015, the first author Gu demonstrated some examples of \(m\)-isometric operators in [15, 16] and further studied the properties of \(m\)-isometries in [17].
In 2018, the authors Gu and Luo [18] showed that for any positive integer \(m\), the shift operator \(M_{z}\) on \(K_{m}\) is an \((m+2)\)-isometry, where \(K_{m}\) is the following space \[\left\{f\in\mathrm{H}(\mathbb{D}):\left\|f\right\|^{2}=\sum_{n\geq 0}\frac{(n+1) \cdots(n+m+1)}{(m+1)!}\left|a_{n}\right|^{2}<\infty,f(z)=\sum_{n=0}^{\infty} a_{n}z^{n}\right\}.\] In this paper, we are interested in the expansive \(m\)-isometry \(T\) with \(T^{*}T-I=w\otimes w\). The following result is proved in [20, Corollaries 8.6 and 8.11]. **Theorem 1.1**.: _Assume \(T\) is a bounded linear operator on a Hilbert space \(H\) with \(\Delta=T^{*}T-I=w\otimes w\). (i) The operator \(T\) is a \(2n\)-isometry if and only if there exists \(\lambda\in\mathbb{T}\) such that \(\left(T^{*}-\lambda\right)^{n}w\)\(=0\). (ii) If \(\left(T^{*}-\lambda\right)^{n}w=0\) and \(\left(T^{*}-\lambda\right)^{n-1}w\neq 0\), then \(T\) is a strict \(2n\)-isometry. (iii) The operator \(T\) is a \((2n+1)\)-isometry if and only if \(T\) is a \(2n\)-isometry._ Let \(M_{z}\) be the forward shift operator on the Hardy space \(H^{2}\), \(M_{z}f(z)=zf(z)\), and \(L\) be the backward shift operator, \[Lf(z)=\frac{f(z)-f(0)}{z}.\] Let \(b\) be a nonextreme function in the unit ball of \(H^{\infty}\). Then the de Branges-Rovnyak space \(H(b)\) is invariant for both \(M_{z}\) and \(L\) ([13, 28]). Let \(T=(M_{z},H(b))\). Both \(T\) and \(L\) are bounded operators from \(H(b)\) into \(H(b)\). Note that \(LT=I\) on \(H(b)\), and \(T\) is expansive and \(L\) is contractive. We have \[L^{*}=T-b\otimes Lb,\] see e.g. [13, Theorem 18.22]. Therefore, \(T=L^{*}+b\otimes Lb\) and \(T^{*}=L+Lb\otimes b\). **Lemma 1.2**.: _Let \(L\) and \(T\) be defined as above. Then_ \[T^{*}T-I=(1+\|b\|^{2})Lb\otimes Lb.\] Proof.: Note that \[T^{*}T =(L+Lb\otimes b)\,T=LT+Lb\otimes T^{*}b\] \[=I+Lb\otimes(Lb+\langle b,b\rangle\,Lb),\] where the inner product is in \(H(b).\) The conclusion then follows. ### Operator model It is known that every cyclic analytic 2-isometry \(A\) on a Hilbert space \(H\) is unitarily equivalent to the shift operator on some Dirichlet-type space \(D(\mu)\), where \(\mu\) is a positive measure on \(\mathbb{T}\) ([3, 8, 22]). So the shift operator \(M_{z}\) on \(D(\mu)\) is an operator model for such operator \(A\). In this section we will use the rank one extension process developed in [23] to show that every quasi-analytic \(2n\)-isometry \(A\) with \(A^{*}A-I=w\otimes w\) is unitarily equivalent to the shift operator \(M_{z}\) on \(H(b)\) for some \(b\). If \(T\) is an expansive operator, then we let \(\Delta=T^{*}T-I\), and \([\text{ran}\,\Delta]_{T^{*}}\) be the smallest \(T^{*}\)-invariant subspace containing the range of \(\Delta\). 
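Before proceeding, we note that the "if" direction of Theorem 1.1(i) can be verified directly in the simplest case \(n=1\): if \(T^{*}T-I=w\otimes w\) and \(T^{*}w=\lambda w\) with \(\lambda\in\mathbb{T}\), then since \(T^{*}(w\otimes w)T=(T^{*}w)\otimes(T^{*}w)\), the recursion for \(\beta_{m}\) gives \[\beta_{2}(T)=T^{*}(w\otimes w)T-w\otimes w=(\lambda w)\otimes(\lambda w)-w\otimes w=\left(|\lambda|^{2}-1\right)w\otimes w=0,\] so \(T\) is a \(2\)-isometry.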
**Lemma 1.3**.: _Assume \(T\) is a bounded linear operator on a Hilbert space \(H\) with \(\Delta=T^{*}T-I=w\otimes w\), and \(T\) is a strict \(2n\)-isometry._

_(i) Let_ \(H_{w}=\text{Span}\left\{w,T^{*}w,\cdots,T^{*n-1}w\right\}.\) _Then_ \(H_{w}=[w]_{T^{*}}\)_,_ \(\dim H_{w}=n\) _and_ \(\sigma(T^{*}|H_{w})=\left\{\lambda\right\}\) _for some_ \(\lambda\in\mathbb{T}\)_._

_(ii) When_ \(n=1\)_,_ \(\{w\}^{\perp}\) _is_ \(T\)_-invariant, and_ \(T_{0}=T|\{w\}^{\perp}\) _is an isometry._

_(iii) When_ \(n\geq 2\)_, let_ \(w_{n}=\left(T^{*}-\lambda\right)^{n-1}w\) _and_ \(\left\{w_{n}\right\}^{\perp}=H\ominus\mathbb{C}\left\{w_{n}\right\}.\) _Then_ \(\left\{w_{n}\right\}^{\perp}\) _is_ \(T\)_-invariant and_ \(T_{n-1}=T|\left\{w_{n}\right\}^{\perp}\) _is a strict_ \(2(n-1)\)_-isometry such that_ \[T_{n-1}^{*}T_{n-1}-I=w^{\prime}\otimes w^{\prime},\] _where_ \(w^{\prime}:=P_{\left\{w_{n}\right\}^{\perp}}w\in H_{w}\ominus\mathbb{C}\left\{w_{n}\right\}.\)

Proof.: (i) This follows from Theorem 1.1.

(ii) When \(n=1\), by part (i) \([w]_{T^{*}}=\mathbb{C}w\), so \(\{w\}^{\perp}\) is \(T\)-invariant, and \(T_{0}=T|\{w\}^{\perp}\) is an isometry.

(iii) Since \((T^{*}-\lambda)w_{n}=0\), we have \(\{w_{n}\}^{\perp}\) is \(T\)-invariant. So \[T^{*}_{n-1}T_{n-1}-I=P_{\{w_{n}\}^{\perp}}(T^{*}T-I)P_{\{w_{n}\}^{\perp}}=P_{\{w_{n}\}^{\perp}}w\otimes P_{\{w_{n}\}^{\perp}}w=w^{\prime}\otimes w^{\prime}.\] By \(w^{\prime}=P_{\{w_{n}\}^{\perp}}w=w-\langle w,w_{n}\rangle\frac{w_{n}}{\|w_{n}\|^{2}}\), we obtain that \(w^{\prime}\in H_{w}\ominus\mathbb{C}\left\{w_{n}\right\}\). Since for \(n\geq 2\) \[(T^{*}_{n-1}-\lambda)^{n-1}w^{\prime}=P_{\{w_{n}\}^{\perp}}(T^{*}-\lambda)^{n-1}P_{\{w_{n}\}^{\perp}}w=P_{\{w_{n}\}^{\perp}}(T^{*}-\lambda)^{n-1}w=0\] and \[(T^{*}_{n-1}-\lambda)^{n-2}w^{\prime}=P_{\{w_{n}\}^{\perp}}(T^{*}-\lambda)^{n-2}w\neq 0,\] we conclude from Theorem 1.1 that \(T_{n-1}\) is a strict \(2(n-1)\)-isometry.

Note that if \(\lambda\in\mathbb{T}\), then \(\beta_{m}(\overline{\lambda}T)=\beta_{m}(T)\). So for a \(2n\)-isometry \(T\) with \(T^{*}T-I=w\otimes w\) and \(\left(T^{*}-\lambda\right)^{n}w=0\), we may suppose \(\lambda=1\). By the above lemma, there exists an orthonormal basis \(\{e_{1},\cdots,e_{n}\}\) of \(H_{w}\) such that with respect to \[H=H_{0}\oplus\mathbb{C}\left\{e_{1}\right\}\oplus\cdots\oplus\mathbb{C}\left\{e_{n}\right\},\text{ where }H_{0}=H\ominus H_{w}=H_{w}^{\perp},\] the operator \(T\) has the following matrix decomposition (we suppose \(\lambda=1\)) \[T=\left[\begin{array}{ccccc}V&h_{1}\otimes e_{1}&\cdots&h_{n-1}\otimes e_{n-1}&h_{n}\otimes e_{n}\\ 0&e_{1}\otimes e_{1}&\cdots&a_{1,n-1}e_{1}\otimes e_{n-1}&a_{1n}e_{1}\otimes e_{n}\\ \vdots&\vdots&\ddots&\ddots&\vdots\\ 0&0&\cdots&e_{n-1}\otimes e_{n-1}&a_{n-1,n}e_{n-1}\otimes e_{n}\\ 0&0&\cdots&0&e_{n}\otimes e_{n}\end{array}\right] \tag{1.1}\] where \(h_{i}\in H_{0}\) for \(1\leq i\leq n\), \(a_{ij}\in\mathbb{C}\) for \(1\leq i<j\leq n\), and \(V\) is an isometry on \(H_{0}\). Since \(T^{*}T-I=w\otimes w\) is a rank one operator, one checks that \(V^{*}h_{i}=0,i=1,\ldots,n\).

To investigate the operator model for the above \(2n\)-isometry \(T\), we introduce the following definition.

**Definition 1.4**.: We say that an expansive operator \(T\) is quasi-analytic if \(T|\mathcal{M}^{\perp}\) is unitarily equivalent to the unilateral shift \(S=(M_{z},H^{2})\) acting on \(H^{2}\), where \(\mathcal{M}=[\operatorname{ran}\Delta]_{T^{*}},\Delta=T^{*}T-I\).
Recall from [2, Proposition 5.6] that a \(2\)-isometry with \(T^{*}T-I=w\otimes w\) is unitarily equivalent to \(V\oplus\mathbf{B}\), where \(V\) is an isometry and \(\mathbf{B}\) is a Brownian shift, i.e. \[\mathbf{B}=\begin{bmatrix}S&\sigma(1\otimes 1)\\ 0&\lambda\end{bmatrix}\] on \(H^{2}\oplus\mathbb{C},\lambda\in\mathbb{T},\sigma>0\). Note that if \(T^{*}T-I=w\otimes w\), then \([\text{ran }\Delta]_{T^{*}}=H_{w}\). So if \(T\) is a quasi-analytic 2-isometry with \(T^{*}T-I=w\otimes w\), then \(T\) is unitarily equivalent to a Brownian shift. We also have a Brownian shift is unitarily equivalent to \((M_{z},H(b))\) on \(H(b)\) for some \(b\), see Theorem 1.12. Our definition of quasi-analyticity is partly motivated by this observation. It also turns out that if \(T\) is a quasi-analytic \(2n\)-isometry with \(T^{*}T-I=w\otimes w\) on \(H\), then \(T\) is analytic, i.e. \(\bigcap_{n\geq 0}T^{n}H=\{0\}\). This result is not obvious. Our strategy is to use the rank one extension process to show that \(T\) is unitarily equivalent to \((M_{z},H(b))\) on \(H(b)\) for some rational \(b\) of degree \(n\), thus obtaining Theorem 0.1. The analyticity of \(T\) then follows from this. We remark that Theorem 0.1 can also be derived from [20, Theorems 1.1, 1.3 and 4.6], but these theorems in [20] apply to a wide class of operators and are not readily obtained. Therefore we provide a different approach here for just the \(H(b)\) space setting. Now we introduce the rank one extension process. Suppose \(T\) is quasi-analytic with \(T^{*}T-I=w\otimes w\), then we can assume \[T=\begin{bmatrix}S&h\\ 0&A\end{bmatrix}\text{ on }H^{2}\oplus H_{w}\] where \(h:H_{w}\to H^{2}\) with \(\text{ran }h\subseteq\ker S^{*}\) and \(A:H_{w}\to H_{w}\) with \((A^{*}-1)^{n}=0\) (here we assume \(\lambda=1\)). By Lemma 1.3 and (1.1), one can construct every such \(2n\)-isometry \(T\) inductively by a sequence of operators \(T_{j}\in B(H^{2}\oplus\mathbb{C})\), \(j=0,\ldots,n\) with the following properties: 1. \(T_{0}=S\) and for \(0<j\leq n\), \(T_{j}\) is a strict \(2j\)-isometry with \(T_{j}^{*}T_{j}-I=w_{j}\otimes w_{j}\) for some \(w_{j}\in\{0\}\oplus\mathbb{C}^{j}\), and \((T^{*}-1)^{j}w_{j}=0\), 2. for \(0\leq j<n\), \(w_{j+1}=(w_{j},\omega_{j+1})\), 3. for \(0\leq j<n\), \[T_{j+1}=\begin{bmatrix}T_{j}&B_{j}\\ 0&1\end{bmatrix},\] where \(B_{j}:\mathbb{C}\to\ker S^{*}\oplus\mathbb{C}^{j}\subseteq H^{2}\oplus\mathbb{ C}^{j}\) satisfies \(T_{j}^{*}B_{j}1=\overline{\omega_{j+1}}w_{j}\). 4. for \(0\leq j<n\), \(B_{j}^{*}B_{j}1=|\omega_{j+1}|^{2}\), 5. \(T=T_{n},w=w_{n}\). In fact, identifying \(\mathbb{C}^{j}\) with \(\text{Span }\{e_{1},\ldots,e_{j}\}\), we note that for \(j=0,\ldots,n\)\(H_{j}=H^{2}\oplus\mathbb{C}^{j}\) is invariant for \(T\), and the above conditions follow with \(T_{j}=T|H_{j}\), \(\omega_{j}=\langle w,e_{j}\rangle\), and \(w_{j}=P_{j}w\), where \(P_{j}\) denotes the projection onto \(H_{j}\). The fact that \[T_{j+1}^{*}T_{j+1}-I_{H_{j+1}} =P_{j+1}(T^{*}T-I)|H_{j+1}\] \[=w_{j+1}\otimes w_{j+1}\] \[=\begin{bmatrix}w_{j}\otimes w_{j}&\overline{\omega_{j+1}}w_{j} \otimes e_{j+1}\\ \omega_{j+1}e_{j+1}\otimes w_{j}&|\omega_{j+1}|^{2}\end{bmatrix}\] gives the identities \(T_{j}^{*}B_{j}1=\overline{\omega_{j+1}}w_{j}\) and \(B_{j}^{*}B_{j}1=|\omega_{j+1}|^{2}\). **Lemma 1.5**.: _Let \(T_{0},\ldots,T_{n}\) be any sequence of operators satisfying \((a1)-(a5)\). Let \(z\in\mathbb{D}\) and \(j\geq 0\). 
Then the map \(x\to\begin{bmatrix}x\\ -\frac{1}{1-z}B_{j}^{*}x\end{bmatrix}\) takes \(\ker(T_{j}-z)^{*}\) onto \(\ker(T_{j+1}-z)^{*}\). In particular, we have that \(\dim\ker T_{j}^{*}=1\) for each \(j\geq 0\)._ Proof.: Immediate. By the above discussion, we consider operator of the following form \[R=\begin{bmatrix}T&B\\ 0&1\end{bmatrix}\text{ on }H\oplus\mathbb{C},\] where \(H\) is a Hilbert space, \(\dim\ker T^{*}=1\). **Lemma 1.6**.: _Let \(w=(w^{\prime},\omega)\in H\oplus\mathbb{C}\) and \(k\in\ker T^{*}\) with \(\|k\|^{2}=1+\|w^{\prime}\|^{2}\). Then \(R^{*}R-I=w\otimes w\), if and only if_ 1. \(T^{*}T-I=w^{\prime}\otimes w^{\prime}\)_,_ 2. _there is_ \(t\in[0,2\pi)\) _such that_ \(B1=B_{t}1=\frac{\overline{\omega}}{1+\|w^{\prime}\|^{2}}(e^{it}k+Tw^{\prime})\)_._ Proof.: One easily verifies that \(R^{*}R-I=w\otimes w\), if and only if (i) is satisfied and \(T^{*}B1=\overline{\omega}w^{\prime}\) and \(B^{*}B1=|\omega|^{2}\). Suppose that (i) and (ii) are satisfied. Then \(\|Tw^{\prime}\|^{2}=\langle T^{*}Tw^{\prime},w^{\prime}\rangle=\|w^{\prime}\| ^{2}+\|w^{\prime}\|^{4}\) and since \(k\perp Tw^{\prime}\) we have \(B_{t}^{*}B_{t}1=\|B_{t}1\|^{2}=\frac{|\omega|^{2}}{1+\|w^{\prime}\|^{2}}+\frac{ |\omega|^{2}}{(1+\|w^{\prime}\|^{2})^{2}}\|Tw^{\prime}\|^{2}\)\(=|\omega|^{2}\). Also, \[T^{*}B_{t}1=\frac{\overline{\omega}}{1+\|w^{\prime}\|^{2}}T^{*}Tw^{\prime}= \frac{\overline{\omega}}{1+\|w^{\prime}\|^{2}}(1+\|w^{\prime}\|^{2})w^{\prime} =\overline{\omega}w^{\prime}.\] Hence (i) and (ii) imply that \(R^{*}R-I=w\otimes w\). Conversely, assume that \(R^{*}R-I=w\otimes w\). Then (i) is satisfied and hence \(T\) is bounded below. Let \(f\in\ker T^{*}\) be a unit vector. Since \(\dim\ker T^{*}=1\) we must have \(B1=\alpha f+Tx\) for some \(\alpha\in\mathbb{C}\) and \(x\in H\). We have to show that \(|\alpha|^{2}=\frac{|\omega|^{2}}{1+\|w^{\prime}\|^{2}}\) and \(x=\frac{\overline{\omega}}{1+\|w^{\prime}\|^{2}}w^{\prime}\). We have \[\overline{\omega}w^{\prime}=T^{*}B1=T^{*}(\alpha f+Tx)=T^{*}Tx=x+\langle x,w^{ \prime}\rangle w^{\prime}. \tag{1.2}\] Hence \(x=\overline{\omega}w^{\prime}-\langle x,w^{\prime}\rangle w^{\prime}\). Let \(\beta=\overline{\omega}-\langle x,w^{\prime}\rangle\), then \(x=\beta w^{\prime}\). Plugging this into (1.2) we obtain that \(\overline{\omega}=\beta(1+\|w^{\prime}\|^{2})\). Thus, \(x\) has the required form and \[|\omega|^{2} =B^{*}B1=\|B1\|^{2}=|\alpha|^{2}+\|Tx\|^{2}\] \[=|\alpha|^{2}+|\beta|^{2}\|T\omega^{\prime}\|^{2}\] \[=|\alpha|^{2}+\frac{|\omega|^{2}}{(1+\|w^{\prime}\|^{2})^{2}}(\| w^{\prime}\|^{2}+\|w^{\prime}\|^{4})\] \[=|\alpha|^{2}+\frac{|\omega|^{2}\|w^{\prime}\|^{2}}{1+\|w^{\prime }\|^{2}}\] Hence \(|\alpha|^{2}=\frac{|\omega|^{2}}{1+\|w^{\prime}\|^{2}}\) and this concludes the proof. If \(\omega=0\), then \(B=0\) and hence \(R\) would have an isometric direct summand. In this case, if \((T^{*}-1)^{n}w^{\prime}=0\), then \((R^{*}-1)^{n}w=0\). **Lemma 1.7**.: _If \(w=(w^{\prime},\omega)\in H\oplus\mathbb{C}\), \(k\in\ker T^{*}\) with \(\|k\|^{2}=1+\|w^{\prime}\|^{2}\) and_ \[R_{t}=\begin{bmatrix}T&B_{t}\\ 0&1\end{bmatrix}\] _satisfies (i) and (ii) of the previous lemma, then_ \[k_{t}=\begin{bmatrix}k\\ -\omega e^{-it}\end{bmatrix}\] _satisfies \(k_{t}\in\ker R_{t}^{*}\) with \(\|k_{t}\|^{2}=1+\|w\|^{2}\)._ Proof.: The condition on \(\|k_{t}\|\) is obvious. 
Using the definition of \(B_{t}\) we obtain \[B_{t}^{*}k=\langle k,B_{t}1\rangle=\frac{\omega e^{-it}}{1+\|w^{\prime}\|^{2}} \|k\|^{2}=\omega e^{-it}.\] The fact that \(k_{t}\in\ker R_{t}^{*}\) now follows from Lemma 1.5 with \(z=0\). **Lemma 1.8**.: _If \(w=(w^{\prime},\omega)\in H\oplus\mathbb{C},\omega\neq 0\), \(k\in\ker T^{*}\) with \(\|k\|^{2}=1+\|w^{\prime}\|^{2}\) and_ \[R_{t}=\begin{bmatrix}T&B_{t}\\ 0&1\end{bmatrix}\] _satisfies (i) and (ii) of Lemma 1.6. In addition, we suppose \(T\) is a strict \(2n\)-isometry with \((T^{*}-1)^{n}w^{\prime}=0\). Then_ * \((R_{t}^{*}-1)^{n+1}w=0\)_._ * _There is a unique_ \(t\in[0,2\pi)\) _such that_ \((R_{t}^{*}-1)^{n}w=0\)_._ Proof.: (i) We have \[(R_{t}^{*}-1)^{n+1}w=\begin{bmatrix}(T^{*}-1)^{n+1}&0\\ B_{t}^{*}(T^{*}-1)^{n}&0\end{bmatrix}\begin{bmatrix}w^{\prime}\\ \omega\end{bmatrix}=0.\] (ii) We have \[(R_{t}^{*}-1)^{n}w=\begin{bmatrix}(T^{*}-1)^{n}&0\\ B_{t}^{*}(T^{*}-1)^{n-1}&0\end{bmatrix}\begin{bmatrix}w^{\prime}\\ \omega\end{bmatrix}=\begin{bmatrix}0\\ B_{t}^{*}(T^{*}-1)^{n-1}w^{\prime}\end{bmatrix}.\] By Lemma 1.6, \[B_{t}^{*}(T^{*}-1)^{n-1}w^{\prime} =\langle B_{t}^{*}(T^{*}-1)^{n-1}w^{\prime},1\rangle\] \[=\langle(T^{*}-1)^{n-1}w^{\prime},\frac{\overline{\omega}}{1+\|w ^{\prime}\|^{2}}(e^{it}k+Tw^{\prime})\rangle\] \[=\frac{\omega}{1+\|w^{\prime}\|^{2}}\langle(T^{*}-1)^{n-1}w^{ \prime},e^{it}k+w^{\prime}\rangle.\] Note that using (1.1) we have \((T^{*}-1)^{n-1}w^{\prime}\) is a nonzero vector with \(0\) in the first \(n-1\) entries, and Lemma 1.7 ensures that the \(n\)-th entries of \(k\) and \(w^{\prime}\) have the same modulus, it follows that there is a unique \(t\in[0,2\pi)\) such that \(B_{t}^{*}(T^{*}-1)^{n-1}w^{\prime}=0\). Hence there is a unique \(t\in[0,2\pi)\) such that \((R_{t}^{*}-1)^{n}w=0\). Part (ii) of the above lemma implies that if \(T\) is a strict \(2n\)-isometry and one wants to extend \(R_{t}\) to be a strict \(2(n+1)\)-isometry, then there is a unique \(t\in[0,2\pi)\) that does not lead to a strict extension, since for this \(t\), \((R_{t}^{*}-1)^{n}w=0\) and \(R_{t}\) is a \(2n\)-isometry by Theorem 1.1. Now suppose that \(H\) is a forward shift invariant reproducing kernel Hilbert space on \(\mathbb{D}\) with kernel \(k_{w}(z)\) such that \(k_{0}(z)=1\) for all \(z\in\mathbb{D}\). Also assume that with \(T=(M_{z},H)\) we have \(T^{*}T=I+w^{\prime}\otimes w^{\prime}\) for some function \(w^{\prime}\in H\), and \(\dim\ker T^{*}=1\). Set \(b_{0}=\frac{-1}{\sqrt{1+\|w^{\prime}\|^{2}}}Tw^{\prime}\). Note that in this case the function \(k\) as in Lemma 1.6 is the constant function \(k=\sqrt{1+\|w^{\prime}\|^{2}}\), and hence the \(B_{t}\) from Lemma 1.6 is given by \[B_{t}1=\frac{\overline{\omega}}{\sqrt{1+\|w^{\prime}\|^{2}}}(e^{it}-b_{0}).\] **Lemma 1.9**.: _Suppose \(H\) satisfies the conditions as above. Let \(t\in[0,2\pi)\), \(0\neq\omega\in\mathbb{C}\), and_ \[R_{t}=\begin{bmatrix}T&B_{t}\\ 0&1\end{bmatrix}\text{ on }H\oplus\mathbb{C}.\] _Then for each \(z\in\mathbb{D}\) we have_ \[k_{z}^{t}=\begin{pmatrix}k_{z}\\ \frac{\omega}{\overline{z}-1}\frac{1}{\sqrt{1+\|w^{\prime}\|^{2}}}(e^{-it}- \overline{b_{0}(z)})\end{pmatrix}\in\ker(R_{t}^{*}-\overline{z})\] Proof.: For \(z\in\mathbb{D}\) we have \(k_{z}\in\ker(T^{*}-\overline{z})\). Hence according to Lemma 1.5 we have to show that \(\frac{1}{\overline{z}-1}B_{t}^{*}k_{z}\) equals the second component in the vector displayed in the lemma. 
We multiply through by \(\overline{z}-1\) and compute \[B_{t}^{*}k_{z} =\langle k_{z},B_{t}1\rangle\] \[=\frac{\omega}{\sqrt{1+\|w^{\prime}\|^{2}}}(e^{-it}\langle k_{z},1 \rangle-\langle k_{z},b_{0}\rangle)\] \[=\frac{\omega}{\sqrt{1+\|w^{\prime}\|^{2}}}(e^{-it}-\overline{b_ {0}(z)}).\] Let \(h_{t,\omega}(z)=\frac{\varpi}{\sqrt{1+\|w^{\prime}\|^{2}}}\frac{1-e^{-it}b_{0} (z)}{z-1}\). If \(\frac{1-e^{-it}b_{0}(z)}{z-1}\not\in H\), then for \(z,w\in\mathbb{D}\) \[k_{w}^{t}(z)=k_{w}(z)+\frac{|\omega|^{2}}{(\overline{w}-1)(z-1)}\frac{1}{1+\| w^{\prime}\|^{2}}(1-e^{it}\overline{b_{0}(w)})(1-e^{-it}b_{0}(z))\] is the reproducing kernel for a Hilbert function space \(H_{t}\): \[H_{t}=\{f+\alpha h_{t,\omega}:f\in H,\alpha\in\mathbb{C}\},\quad\|f+\alpha h\| ^{2}=\|f\|^{2}+|\alpha|^{2},\] and \(R_{t}\) in the above lemma is unitarily equivalent to \((M_{z},H_{t})\). We will show that \((M_{z},H_{t})\) is unitarily equivalent to \((M_{z},H(b_{t}))\) for some \(b_{t}\), which implies \(R_{t}\) is unitarily equivalent to \((M_{z},H(b_{t}))\). Suppose \(b\) is a nonextreme point of the unit ball of \(H^{\infty}\). Let \(|\alpha|<1\) and \(b_{\alpha}(z)=\frac{b(z)-\alpha}{1-\overline{\alpha}b(z)}\). The following lemma is contained in [20, Lemma 4.7]. **Lemma 1.10**.: _For each \(|\alpha|<1\), \((M_{z},H(b))\) and \((M_{z},H(b_{\alpha}))\) are unitarily equivalent._ Thus we may assume that \(b(0)=0\). Now let \(b_{0}\) be a nonextreme point, rational function with \(\|b\|_{\infty}\leq 1,b_{0}(0)=0\), and consider \(H(b_{0})\). Let \(T=(M_{z},H(b_{0}))\), \(w^{\prime}=-\sqrt{1+\|b_{0}\|^{2}}Lb_{0}\). Then \[T^{*}T-I=(1+\|b_{0}\|^{2})Lb_{0}\otimes Lb_{0}=w^{\prime}\otimes w^{\prime}.\] Let \(a\) be the pythagorean mate of \(b_{0}\). Note that \[\begin{cases}\|b_{0}\|^{2}=a(0)^{-2}-1,\\ \|Lb_{0}\|^{2}=1-|b_{0}(0)|^{2}-a(0)^{2}=1-a(0)^{2},\end{cases}\] see e.g. [13, Corollary 23.9]. So \[\|w^{\prime}\|^{2}=(1+\|b_{0}\|^{2})\|Lb_{0}\|^{2}=a(0)^{-2}-1=\|b_{0}\|^{2}. \tag{1.3}\] Since \(TLb_{0}=b_{0}\), it follows that \(b_{0}=\frac{-1}{\sqrt{1+\|w^{\prime}\|^{2}}}Tw^{\prime}\). Hence \(H(b_{0})\) satisfies the hypothesis of Lemma 1.9. Set \(s=\frac{|\omega|^{2}}{1+\|w^{\prime}\|^{2}+|\omega|^{2}}\). Let \(t\in[0,2\pi)\) be such that \(e^{-it}b_{0}(1)\neq 1\), and consider \[H(z)=H_{s,t}(z)=s\frac{1+z}{1-z}+(1-s)\frac{1+e^{-it}b_{0}(z)}{1-e^{-it}b_{0}( z)}.\] Then \(\mathrm{Re}H(z)\geq 0\) and \(H(0)=1\), hence there is \(b_{t}=b_{s,t}\) in the unit ball of \(H^{\infty}\) such that \[H(z)=\frac{1+b_{t}(z)}{1-b_{t}(z)}=s\frac{1+z}{1-z}+(1-s)\frac{1+e^{-it}b_{0}(z) }{1-e^{-it}b_{0}(z)} \tag{1.4}\] One checks that \(b_{t}\) is a rational function with \(b_{t}(0)=0\), \(b_{t}(1)=1\), and \(b_{t}^{\prime}(1)=1/s\). **Theorem 1.11**.: _Let \(b_{0}\) be a rational function with \(\|b_{0}\|_{\infty}\leq 1,b_{0}(0)=0\), \(H=H(b_{0})\), and \(R_{t}\) be defined in Lemma 1.9 and \(b_{t}\) be defined by (1.4). 
Then \(R_{t}\) is unitarily equivalent to \((M_{z},H(b_{t}))\)._ Proof.: Note that \[\frac{1-z\overline{w}}{(1-z)(1-\overline{w})}=\frac{1}{2}\left[\frac{1+z}{1-z }+\frac{1+\overline{w}}{1-\overline{w}}\right].\] Thus \[1-b_{t}(z)\overline{b_{t}(w)}=\frac{1}{2}(1-b_{t}(z))(1-\overline{b_{t}(w)}) \left[\frac{1+b_{t}(z)}{1-b_{t}(z)}+\frac{1+\overline{b_{t}(w)}}{1-\overline{ b_{t}(w)}}\right],\] and \[\frac{1-e^{it}b_{0}(z)\overline{e^{it}b_{0}(w)}}{(1-e^{it}b_{0}(z))(1- \overline{e^{it}b_{0}(w)})}=\frac{1}{2}\left[\frac{1+e^{it}b_{0}(z)}{1-e^{it}b _{0}(z)}+\frac{1+\overline{e^{it}b_{0}(w)}}{1-\overline{e^{it}b_{0}(w)}} \right].\] So \[K_{w}^{b_{t}}(z) =\frac{1-b_{t}(z)\overline{b_{t}(w)}}{1-z\overline{w}}\] \[=\frac{1}{2}\frac{(1-b_{t}(z))(1-\overline{b_{t}(w)})}{1-z \overline{w}}\left[H(z)+\overline{H(w)}\right]\] \[=e(z)\overline{e(w)}+f(z)\overline{f(w)}\frac{1-b_{0}(z)\overline {b_{0}(w)}}{1-z\overline{w}}\] \[=e(z)\overline{e(w)}+f(z)\overline{f(w)}K_{w}^{b_{0}}(z),\] where \(e(z)=\sqrt{s}\frac{1-b_{t}(z)}{1-z}\) and \(f(z)=\sqrt{1-s}\ \frac{1-b_{t}(z)}{1-e^{-it}b_{0}(z)}\). Note that \(s=\frac{|\omega|^{2}}{1+\|w^{\prime}\|^{2}+|\omega|^{2}}\) and the reproducing kernel of \(H_{t}\) is \[k_{w}^{t}(z)\] \[=K_{w}^{b_{0}}(z)+\frac{|\omega|^{2}}{(\overline{w}-1)(z-1)}\frac{ 1}{1+\|w^{\prime}\|^{2}}(1-e^{it}\overline{b_{0}(w)})(1-e^{-it}b_{0}(z))\] \[=\frac{1-\overline{b_{0}(w)}b_{0}(z)}{1-\overline{w}z}+\frac{| \omega|^{2}}{(\overline{w}-1)(z-1)}\frac{1}{1+\|w^{\prime}\|^{2}}(1-e^{it} \overline{b_{0}(w)})(1-e^{-it}b_{0}(z)).\] So \(K_{w}^{b_{t}}(z)=g(z)\overline{g(w)}k_{w}^{t}(z)\), where \[g(z)=\frac{\sqrt{1+\|w^{\prime}\|^{2}}}{\sqrt{1+\|w^{\prime}\|^{2}+|\omega|^{2}}} \frac{1-b_{t}(z)}{1-e^{-it}b_{0}(z)}.\] It follows that \(H(b)=gH_{t}\) and \(R_{t}\) is unitarily equivalent to \((M_{z},H(b_{t}))\). Now we can prove Theorem 0.1. We restate it here. **Theorem 1.12**.: _Every quasi-analytic strict \(2n\)-isometry \(A\) with \(A^{*}A-I=w\otimes w\) is unitarily equivalent to \((M_{z},H(b))\) for some rational function \(b\) of degree \(n\) with \(b(0)=0\)._ Proof.: By the discussion before Lemma 1.5, we know that \(A\) is obtained through a sequence of operators \(T_{0}=S,T_{1},\cdots,T_{n}=A\) satisfying conditions \((a1)-(a5)\). Note that each \(T_{j+1}\) is a quasi-analytic strict \(2(j+1)\)-isometry with \(T_{j+1}^{*}T_{j+1}-I=w_{j+1}\otimes w_{j+1}\), and \(T_{j+1}=\begin{bmatrix}T_{j}&B_{j}\\ 0&1\end{bmatrix}\) is a rank one extension of \(T_{j},0\leq j<n\). Thus if we show \(T_{j+1}\) is unitarily equivalent to \((M_{z},H(b))\) for some rational function \(b\) of degree \((j+1)\) with \(b(0)=0\) under the condition that \(T_{j}\) is unitarily equivalent to \((M_{z},H(b_{0}))\) for some rational function \(b_{0}\) of degree \(j\) with \(b_{0}(0)=0\), then we are done. By [2, Proposition 5.6]\(T_{1}\) is unitarily equivalent to a Brownian shift \[\mathbf{B}=\begin{bmatrix}S&\sigma(1\otimes 1)\\ 0&1\end{bmatrix},\sigma>0.\] Since \(\mathbf{B}^{*}\mathbf{B}-I=\begin{bmatrix}0&0\\ 0&\sigma^{2}\end{bmatrix}\), using (1.3) one checks that \(\mathbf{B}\) is unitarily equivalent to \((M_{z},H(b))\) with \[b(z)=\frac{\gamma z}{1-\beta z},a(z)=a(0)\frac{1-z}{1-\beta z},\beta=\frac{1} {1+\sigma^{2}},|\gamma|=\frac{\sigma^{2}}{1+\sigma^{2}},\] This can also be deduced from (1.4) by setting \(b_{0}=0,s=\frac{\sigma^{2}}{1+\sigma^{2}}\). 
Thus we only need to consider \(j\geq 1\), and without loss of generality, suppose \(T_{j}=(M_{z},H(b_{0}))\) for some rational function \(b_{0}\) of degree \(j\) with \(b_{0}(0)=0\). By Lemma 1.6, there is \(t\in[0,2\pi)\) such that \[T_{j+1}=R_{t}=\begin{bmatrix}T_{j}&B_{t}\\ 0&1\end{bmatrix}\text{ on }H(b_{0})\oplus\mathbb{C}.\] By the remark below Lemma 1.8, there is a unique \(t_{0}\in[0,2\pi)\) such that \(R_{t_{0}}\) is not a strict \(2(j+1)\)-isometry. We claim that \(e^{-it_{0}}b_{0}(1)=1\). In fact, if \(e^{-it_{0}}b_{0}(1)\neq 1\), then the function \(b_{t_{0}}\) defined by (1.4) is a rational function of degree \((j+1)\). Then by Theorem 1.11, \(R_{t_{0}}\) is unitarily equivalent to \((M_{z},H(b_{t_{0}}))\), which is a strict \(2(j+1)\)-isometry. This is a contradiction. Hence \(e^{-it_{0}}b_{0}(1)=1\). Since \(T_{j+1}\) is a strict \(2(j+1)\)-isometry, there is \(t\in[0,2\pi)\setminus\{t_{0}\}\) such that \(T_{j+1}=R_{t}\). Then \(e^{-it}b_{0}(1)\neq 1\) and \(T_{j+1}=R_{t}\) is unitarily equivalent to \((M_{z},H(b_{t}))\), where \(b_{t}\) is defined by (1.4). The proof is complete.

#### 2. Invariant subspaces of \(M\)-isometry

In this section we study the invariant subspaces of the \(2n\)-isometric forward shift operator on the de Branges-Rovnyak space. There is a beautiful link between the strict \(2n\)-isometry \((M_{z},H(b))\) and the higher order local Dirichlet integral \(D_{w}^{n}(f),w\in\mathbb{T},f\in H^{2}\), see [20, Theorem 1]. When \(n=1\), \(D_{w}^{1}(f)=D_{w}(f)\) is the local Dirichlet integral introduced in [24].

### Invariant subspaces of the \(2n\)-isometric shift operator

Let \(T=(M_{z},H(b))\) be a strict \(2n\)-isometry on \(H(b)\) and \(a\) the pythagorean mate of \(b\). The characterization of such \(b\) is obtained in [19] for \(n=1\), and in [20] for general \(n\). By [20, Theorem 1.1], \(b\) is a rational function of degree \(n\) such that the mate \(a\) has a single zero of multiplicity \(n\) at some point \(\overline{\lambda}\in\mathbb{T}\). Since \(b\) is a rational function of degree \(n\) with \(\left\|b\right\|_{\infty}\leq 1\), we have \(M(\overline{a})=H(b)\) ([27]), where \(M(\overline{a})=T_{\overline{a}}H^{2}\). Thus the spaces \(M(\overline{a})\) and \(H(b)\) are equal, with equivalent norms. Let \(e_{n}(z)=\frac{(1-\lambda z)^{n}}{2^{n}}\). Since \(e_{n}/a\) and \(a/e_{n}\) are bounded on \(\mathbb{T}\), we also have \(M(\overline{e_{n}})=M(\overline{a})=H(b)\) ([10]). Let \(S_{n}=(M_{z},M(\overline{e_{n}}))\). Then the \(T\)-invariant subspaces of \(H(b)\) are just the \(S_{n}\)-invariant subspaces of \(M(\overline{e_{n}})\). When \(n=1,\lambda=1\), the \(S_{1}\)-invariant subspaces of \(M(\overline{e_{1}})\) were characterized in [27]. In this case, \(M(\overline{e_{1}})=e_{1}H^{2}\dotplus\mathbb{C}=D(\delta_{1})\) ([24]), and the \(S_{1}\)-invariant subspaces of \(M(\overline{e_{1}})\) are \(\{0\},M(e_{1})\cap M(\theta),M(\overline{e_{1}})\cap M(\theta)\) with \(\theta\) an inner function ([27, Theorem 7]). We will show that a similar characterization exists for all \(n\geq 1\).

By the above discussion, we may suppose \(a(z)=\frac{(1-\lambda z)^{n}}{2^{n}}\), and \(b\) is the pythagorean mate of \(a\). Then by the Fejer-Riesz theorem ([26]), \(b\) is a polynomial of degree \(n\). For simplicity, we take \(\lambda=1\) in the following. It was shown in [12] that every function in \(H(b)\), as well as its derivatives up to order \(n-1\), has a finite nontangential limit at \(1\).
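For instance, when \(n=1\) and \(\lambda=1\), one choice of the mate can be written down explicitly: with \(a(z)=\frac{1-z}{2}\) we have \(|a(e^{i\theta})|^{2}=\frac{2-2\cos\theta}{4}\), and the outer polynomial \(b(z)=\frac{1+z}{2}\) satisfies \[|a(e^{i\theta})|^{2}+|b(e^{i\theta})|^{2}=\frac{2-2\cos\theta}{4}+\frac{2+2\cos\theta}{4}=1,\quad\theta\in[0,2\pi),\] so \(b\) is a polynomial of degree \(1\), as predicted.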
For \(w\in\mathbb{D},i\in\mathbb{N}\), we define \[u_{w}^{i}(z)=\frac{\partial^{i}}{\partial\overline{w}^{i}}K_{w}^{b}(z)=\frac{ \partial^{i}}{\partial\overline{w}^{i}}\left(\frac{1-\overline{b(w)}b(z)}{1- \overline{w}z}\right),\quad z\in\mathbb{D},\] and \[u_{1}^{i}(z)=\angle\lim_{w\to 1}u_{w}^{i}(z),\quad z\in\mathbb{D}\] the nontangential limit as \(w\to 1\). Let \(\mathcal{P}_{n}\) be the space of all polynomials with degree less than or equal to \(n\). By [10, Theorems 1.3 & 1.4], see also [20, Theorem 7.2], we have \[H(b)=M(a)\oplus\operatorname{Span}\{u_{1}^{i}:0\leq i\leq n-1\}=M(a)\oplus \mathcal{P}_{n-1},\] where the orthogonal decomposition is with respect to the inner product in \(H(b)\). Let \[\mathcal{L}_{0}=\operatorname{Span}\{u_{1}^{i}:0\leq i\leq n-1\},\] and \[\mathcal{L}_{j}=\mathcal{L}_{0}\cap\left(\operatorname{Span}\{u_{1}^{i}:0\leq i <j\}\right)^{\perp},\quad 1\leq j\leq n-1.\] Since \(\langle f,u_{1}^{i}\rangle=f^{(i)}(1)\), it follows that \[\mathcal{L}_{j}=\operatorname{Span}\{(z-1)^{j},(z-1)^{j+1},\ldots,(z-1)^{n-1} \},\quad 0\leq j\leq n-1. \tag{2.1}\] Let \[\operatorname{Mult}(H(b))=\{\varphi\in H(b):\varphi f\in H(b),\forall f\in H( b)\}\] be the multiplier algebra of \(H(b)\). The following lemma is contained in [11], we include a proof here for completeness. **Lemma 2.1**.: _Let \(a(z)=\frac{(1-z)^{n}}{2^{n}}\) and \(b\) the pythagorean mate of \(a\). Then \(\operatorname{Mult}(H(b))\)\(=H(b)\cap H^{\infty}\)._ Proof.: Since \(H(b)\) is a reproducing kernel Hilbert space, we have \(\operatorname{Mult}(H(b))\)\(\subseteq H(b)\cap H^{\infty}\). Conversely, let \(\varphi\in H(b)\cap H^{\infty}\). Let \(f\in H(b)\), there are \(g\in H^{2},p\in\mathcal{P}_{n-1}\) such that \(f=ag+p\). So \[\varphi f=a(\varphi g)+p\varphi.\] Since \(H(b)\) is \(T\)-invariant and \(\varphi\in H(b)\cap H^{\infty}\), we obtain \(p\varphi\in H(b)\) and \(\varphi g\in H^{2}\). Hence \(\varphi f\in H(b)\) and \(\varphi\in\operatorname{Mult}(H(b))\). The proof is complete. Let \(\theta\) be an inner function. Then \(M(a)\cap M(\theta)\) is a \(T\)-invariant subspace of \(H(b)\). It is not hard to verify that \(M(a)\oplus\mathcal{L}_{j},0\leq j\leq n-1\), are \(T\)-invariant subspaces of \(H(b)\), or see the following Lemma 2.4. If furthermore, \(\theta\left(M(a)\oplus\mathcal{L}_{j}\right)\subseteq H(b)\) for some \(j\in\{0,\ldots,n-1\}\), then since \(T_{\overline{\theta}}\) is a contraction on \(H(b)\), we have \(\theta\left(M(a)\oplus\mathcal{L}_{j}\right)\) is closed, and thus a \(T\)-invariant subspace of \(H(b)\). Now we can describe the \(T\)-invariant subspaces of \(H(b)\). **Theorem 2.2**.: _Let \(a,b\) be as in the above lemma. Then the \(T\)-invariant subspaces of \(H(b)\) are \(\{0\}\), \(M(a)\cap M(\theta)\) with \(\theta\) an inner function, and \(\theta\left(M(a)\oplus\mathcal{L}_{j}\right)\), where \(0\leq j\leq n-1\), \(\theta\) an inner function are such that \(\theta\left(M(a)\oplus\mathcal{L}_{j}\right)\subseteq H(b)\)._ Proof.: Let \(\mathcal{N}\) be a nonzero \(T\)-invariant subspace of \(H(b)\). We split the proof in the following three cases. **Case1**. \(\mathcal{N}\) is contained in \(M(a)\). Since \(T_{a}:H^{2}\to M(a)\) defined by \(T_{a}f=af\) is a unitary operator. By Beurling's theorem, \(\mathcal{N}=a\theta H^{2}=M(a)\cap M(\theta)\) for some inner function \(\theta\). **Case2**. \(\mathcal{N}\) is not contained in \(M(a)\) and the greatest common inner divisor of the functions in \(\mathcal{N}\) is \(1\). 
By case \(1\), \(\mathcal{N}\cap M(a)=M(a)\cap M(\theta)\) for some inner function \(\theta\). Note that \(\forall f\in\mathcal{N}\), we have \(af\in\mathcal{N}\cap M(a)\) and \(af=\theta g\) for some \(g\in H^{2}\). Since \(a\) is an outer function, we obtain that \(\theta\) is an inner divisor of \(f\) This is true for all functions in \(\mathcal{N}\), we then get that \(\theta=1\), and \(\mathcal{N}\cap M(a)=M(a)\), \(M(a)\subsetneq\mathcal{N}\). Note that \(M(a)\oplus\mathcal{L}_{j},0\leq j\leq n-1\) are \(n\)\(T\)-invariant subspaces containing \(M(a)\) and there is no other \(T\)-invariant subspace containing \(M(a)\). So \(\mathcal{N}=M(a)\oplus\mathcal{L}_{j}\) for some \(j\in\{0,1,\cdots,n-1\}\). **Case3**. \(\mathcal{N}\) is not contained in \(M(a)\) and the greatest common inner divisor \(\theta\) of the functions in \(\mathcal{N}\) is not \(1\). By case 1, \(\mathcal{N}\cap M(a)=M(a)\cap M(u)\) for some inner function \(u\). By the argument in case 2, we see that \(u\) is an inner divisor of \(\theta\). Let \(au\in M(a)\cap M(u)\), then \(au\in\mathcal{N}\) and \(\theta\) is an inner divisor of \(au\). So \(\theta\) is a divisor of \(u\) and \(\mathcal{N}\cap M(a)=M(a)\cap M(\theta)\). It follows that \(a\theta H^{2}\subseteq\mathcal{N}\). Let \(\mathcal{R}=T_{\overline{\theta}}\mathcal{N}\). It is not hard to check that \(\mathcal{N}=\theta\mathcal{R}\). Hence \(T\mathcal{R}\subseteq\mathcal{R}\) and \[M(a)=T_{\overline{\theta}}(a\theta H^{2})\subseteq T_{\overline{\theta}} \mathcal{N}=\mathcal{R}.\] Next we show \(\mathcal{R}\) is closed. In fact, note that \(M(a)\) is a subspace which is closed and of finite co-dimension in \(H(b)\), thus \(\mathcal{R}\) is closed, as it is the pre-image of a closed subspace \(\mathcal{R}/M(a)\) under the quotient mapping \(H(b)\to H(b)/M(a)\). Therefore \(\mathcal{R}\) is a \(T\)-invariant subspace containing \(M(a)\), and then \(\mathcal{R}=M(a)\oplus\mathcal{L}_{j}\) for some \(j\in\{0,\ldots,n-1\}\). In fact, if \(\theta\left(M(a)\oplus\mathcal{L}_{j}\right)\subseteq H(b)\) for some \(0\leq j\leq n-1\), we will show that \[\left(M(a)\oplus\mathcal{L}_{k}\right)\cap M(\theta)=\theta\left(M(a)\oplus \mathcal{L}_{k}\right),\quad j\leq k\leq n-1,\] and thus obtaining a similar characterization for the \(T\)-invariant subspaces of \(H(b)\) as in [27, Theorem 7]. To achieve this, we need the following lemmas. **Lemma 2.3**.: _Let \(\theta\) be an inner function, and \(a,b\) be as in Lemma 2.1. Then \(M(a)\cap M(\theta)\) and \(\left(M(a)\oplus\mathcal{L}_{j}\right)\cap M(\theta)\), \(0\leq j\leq n-1\), are \(T\)-invariant subspaces of \(H(b)\)._ Proof.: It is clear that \(M(a)\cap M(\theta)\) and \(\left(M(a)\oplus\mathcal{L}_{j}\right)\cap M(\theta),0\leq j\leq n-1\), are closed subspaces and \(M(a)\cap M(\theta)\) is a \(T\)-invariant subspace. Next we show \(\left(M(a)\oplus\mathcal{L}_{j}\right)\cap M(\theta),0\leq j\leq n-1\), are \(T\)-invariant. Let \(f\in H(b)\), we have \[\left\langle f,T^{*}u_{1}^{i}\right\rangle=\langle zf,u_{1}^{i}\rangle=if^{(i- 1)}(1)+f^{(i)}(1)=\langle f,u_{1}^{i}+iu_{1}^{i-1}\rangle.\] So \(T^{*}u_{1}^{i}=u_{1}^{i}+iu_{1}^{i-1}\) and \(M(a)\oplus\mathcal{L}_{j},0\leq j\leq n-1\), are \(T\)-invariant. Thus \(\left(M(a)\oplus\mathcal{L}_{j}\right)\cap M(\theta),0\leq j\leq n-1\), are \(T\)-invariant. **Lemma 2.4**.: _Let \(\theta\) be an inner function, and \(a,b\) be as in the above lemma. 
Then for \(0\leq j\leq n-1\),_ \[\left(M(a)\oplus\mathcal{L}_{j}\right)\cap M(\theta)=M(a)\cap M(\theta)+ \left(M(a)\oplus\mathcal{L}_{j}\right)\cap\left(\theta\mathcal{P}_{n-1} \right).\] Proof.: We only need to show that the space on the left is contained in the space on the right. Let \(f\in\left(M(a)\oplus\mathcal{L}_{j}\right)\cap M(\theta)\), then \(f\in H(b)\cap M(\theta)\). Suppose \(f=\theta g,g\in H^{2}\). Since \(T_{\overline{\theta}}\) is a contraction on \(H(b)\), we have \(g\in H(b)\). Then there are \(h\in H^{2},p\in\mathcal{P}_{n-1}\) such that \(g=ah+p\). So \(f=a\theta h+\theta p\), which belongs to the space on the right. Recall that \(D(\delta_{1})=(z-1)H^{2}\dotplus\mathbb{C}\) ([24]). If \(\theta\) is an inner function, then \(\theta\in D(\delta_{1})\), if and only if \(\theta\) has a finite angular derivative at \(1\) ([27, 24]). **Lemma 2.5**.: _Let \(a,b\) be as in the above lemma. Suppose \(\theta\) is an inner function and \(\theta\not\in(z-1)H^{2}\dotplus\mathbb{C}\). Then_ \[\left(M(a)\oplus\mathcal{L}_{j}\right)\cap M(\theta)=M(a)\cap M(\theta),\quad 0 \leq j\leq n-1.\] Proof.: We first show that \(\theta\mathcal{P}_{n-1}\cap H(b)=\{0\}\). If this holds, then \[\left(M(a)\oplus\mathcal{L}_{j}\right)\cap(\theta\mathcal{P}_{n-1})\subseteq \theta\mathcal{P}_{n-1}\cap H(b)=\{0\}.\] Lemma 2.4 then ensures that the conclusion holds. Now we prove \(\theta\mathcal{P}_{n-1}\cap H(b)=\{0\}\). Note that \(H(b)=(z-1)^{n}H^{2}\dotplus\mathcal{P}_{n-1}\subseteq D(\delta_{1})\), and \[\mathcal{P}_{n-1}=\operatorname{Span}\{1,(z-1),\ldots,(z-1)^{n-1}\}= \mathcal{L}_{0}.\] If \(n=1\), then since \(\mathcal{P}_{0}=\mathbb{C}\), \(H(b)=(z-1)H^{2}\dotplus\mathbb{C}\), we have \(\theta\mathcal{P}_{n-1}\cap H(b)=\{0\}\). If \(n\geq 2\), we proceed as follows. Let \(f\in\theta\mathcal{P}_{n-1}\cap H(b)\), then there are \(c_{0},\ldots,c_{n-1}\in\mathbb{C}\) such that \[f=\sum_{i=0}^{n-1}c_{i}(z-1)^{i}\theta\in H(b)\subseteq D(\delta_{1}).\] Since for \(i\geq 1\), \((z-1)^{i}\theta\in(z-1)H^{2}\subseteq D(\delta_{1})\), we obtain that \(c_{0}\theta\in D(\delta_{1})\), and so \(c_{0}=0\). Then \(f=\sum_{i=1}^{n-1}c_{i}(z-1)^{i}\theta\in H(b)=(z-1)^{n}H^{2}\dotplus\mathcal{P }_{n-1}\). Suppose \[f=\sum_{i=1}^{n-1}c_{i}(z-1)^{i}\theta=(z-1)^{n}g+p,\quad g\in H^{2},p\in \mathcal{P}_{n-1}.\] Then \(p(1)=0\), and \[\sum_{i=1}^{n-1}c_{i}(z-1)^{i-1}\theta=(z-1)^{n-1}g+\frac{p}{z-1}.\] Note that \((z-1)^{n-1}g+\frac{p}{z-1}\in(z-1)H^{2}\dotplus\mathbb{C}=D(\delta_{1})\), applying the same argument, we obtain that \(c_{1}=0\). Similarly, we have \(c_{2}=0,\ldots,c_{n-1}=0\), and \(f=0\). The proof is complete. **Lemma 2.6**.: _Let \(\theta\) be an inner function, and \(a,b\) be as in the above lemma. If \(\theta\left(M(a)\oplus\mathcal{L}_{j}\right)\subseteq H(b)\) for some \(0\leq j\leq n-1\), then_ \[\left(M(a)\oplus\mathcal{L}_{k}\right)\cap M(\theta)=\theta\left(M(a)\oplus \mathcal{L}_{k}\right),\quad j\leq k\leq n-1.\] Proof.: If \(\theta\left(M(a)\oplus\mathcal{L}_{j}\right)\subseteq H(b)\), then \(\theta(z-1)^{j}\in H(b)=(z-1)^{n}H^{2}\dotplus\mathcal{P}_{n-1}\). So \(\theta\in(z-1)^{n-j}H^{2}\dotplus\mathcal{P}_{n-j-1}\). If \(n=1\), then \(j=0\), \(\theta\in(z-1)H^{2}\dotplus\mathbb{C}=H(b)\). So \(\theta\) is a multiplier of \(H(b)=M(a)\oplus\mathcal{L}_{0}=M(a)\oplus\mathbb{C}\). 
By Lemma 2.4, \[\left(M(a)\oplus\mathcal{L}_{0}\right)\cap M(\theta) =M(a)\cap M(\theta)+\mathbb{C}\theta\] \[=\theta M(a)+\theta\mathcal{L}_{0}=\theta\left(M(a)\oplus \mathcal{L}_{0}\right).\] If \(n\geq 2\), we have the following cases. **Case1**. \(\theta\in\left((z-1)^{i-1}H^{2}\dotplus\mathcal{P}_{i-2}\right)\backslash \left((z-1)^{i}H^{2}\dotplus\mathcal{P}_{i-1}\right),2\leq i\leq n\). Suppose \(\theta=(z-1)^{i-1}g+p,g\in H^{2},p\in\mathcal{P}_{i-2}\). By the same argument as in Lemma 2.5, we have \((z-1)^{k}\theta\not\in H(b),0\leq k\leq n-i\), and \[\theta\mathcal{P}_{n-1}\cap H(b) =\text{Span}\{(z-1)^{n-i+1}\theta,(z-1)^{n-j+2}\theta,\dots,(z-1 )^{n-1}\theta\}\] \[=\theta\mathcal{L}_{n-i+1}.\] Now we show \[\left(M(a)\oplus\mathcal{L}_{k}\right)\cap\left(\theta\mathcal{P}_{n-1} \right)=\theta\mathcal{L}_{k},\quad n-i+1\leq k\leq n-1.\] Note that \[\left(M(a)\oplus\mathcal{L}_{k}\right)\cap\left(\theta\mathcal{P} _{n-1}\right) =\left(M(a)\oplus\mathcal{L}_{k}\right)\cap\left(\theta\mathcal{P} _{n-1}\right)\cap H(b)\] \[=\left(M(a)\oplus\mathcal{L}_{k}\right)\cap\theta\mathcal{L}_{n- i+1}.\] It is not hard to see that \(\theta\mathcal{L}_{k}\subseteq\left(M(a)\oplus\mathcal{L}_{k}\right),n-i+1\leq k\leq n-1\). Since \(\theta\in D(\delta_{1})\), we have \(\theta(1)\neq 0\). Then in the expression of \(\theta=(z-1)^{i-1}g+p,g\in H^{2},p\in\mathcal{P}_{i-2}\), we have \(p(1)\neq 0\). It follows that if \(0\leq l<k\), then \((z-1)^{l}\theta\not\in(M(a)\oplus\mathcal{L}_{k})\). Thus arguing as in Lemma 2.5, we conclude that \[\left(M(a)\oplus\mathcal{L}_{k}\right)\cap\left(\theta\mathcal{P} _{n-1}\right) =\left(M(a)\oplus\mathcal{L}_{k}\right)\cap\theta\mathcal{L}_{n-i+1}\] \[=\theta\mathcal{L}_{k},\quad n-i+1\leq k\leq n-1.\] Lemma 2.4 then implies \[\left(M(a)\oplus\mathcal{L}_{k}\right)\cap M(\theta) =M(a)\cap M(\theta)+\theta\mathcal{L}_{k}\] \[=\theta\left(M(a)\oplus\mathcal{L}_{k}\right),\quad n-i+1\leq k \leq n-1.\] **Case2**. \(\theta\in H(b)\). In this case \(\theta\) is a multiplier of \(H(b)\). By using the argument in Case 1, we have \(\theta\mathcal{L}_{k}\subseteq M(a)\oplus\mathcal{L}_{k},0\leq k\leq n-1\), and for \(k\geq 1\), \((z-1)^{i}\theta\not\in M(a)\oplus\mathcal{L}_{k},0\leq i<k\). Hence \[\left(M(a)\oplus\mathcal{L}_{k}\right)\cap\left(\theta\mathcal{P}_{n-1}\right) =\theta\mathcal{L}_{k},\quad 0\leq k\leq n-1,\] and \[\left(M(a)\oplus\mathcal{L}_{k}\right)\cap M(\theta)=\theta\left(M(a)\oplus \mathcal{L}_{k}\right),\quad 0\leq k\leq n-1.\] The proof is complete. Combining Theorem 2.2 and Lemma 2.6, we obtain the following similar result as in [27, Theorem 7]. **Theorem 2.7**.: _Let \(a,b\) be defined as in the above lemma. Then the \(T\)-invariant subspaces of \(H(b)\) are \(\{0\}\), \(M(a)\cap M(\theta)\) and \(\big{(}M(a)\oplus\mathcal{L}_{j}\big{)}\cap M(\theta)\), \(0\leq j\leq n-1\), with \(\theta\) an inner function._ For \(f\in H(b)\), Let \([f]_{T}\) be the closure of the polynomial multiples of \(f\). In the following, we write \([f]\) for \([f]_{T}\). If \([f]=H(b)\), then \(f\) is called a cyclic function. We prove the following version of Theorem 0.2. **Theorem 2.8**.: _Let \(T=(M_{z},H(b))\) be a strict \(2n\)-isometry on \(H(b)\). Suppose the pythagorean mate \(a\) of \(b\) has a single zero of multiplicity \(n\) at \(\overline{\lambda}\in\mathbb{T}\). 
Then every nonzero \(T\)-invariant subspace is of the form \([(z-\overline{\lambda})^{j}\theta]\), where \(j\in\{0,1,\ldots,n\}\), \(\theta\) an inner function are such that \((z-\overline{\lambda})^{j}\theta\in H(b)\)._ Proof.: Let \(a(z)=\frac{(1-\lambda z)^{n}}{2^{n}}\) and \(b_{1}\) the pythagorean mate of \(a\), and \(T_{1}=(M_{z},H(b_{1}))\). By the discussion before Lemma 2.1, we have the \(T\)-invariant subspaces of \(H(b)\) are just the \(T_{1}\)-invariant subspaces of \(H(b_{1})\). Let \(\mathcal{N}\) be a nonzero \(T\)-invariant subspace of \(H(b)\). Theorem 2.2 then implies that \(\mathcal{N}=M(a)\cap M(\theta)\), or \(\mathcal{N}=\theta\left(M(a)\dot{+}\mathcal{L}_{j}\right)\) for some \(0\leq j\leq n-1\) and \(\theta\) an inner function. If \(\mathcal{N}=M(a)\cap M(\theta)\), then \[\mathcal{N}=a\theta H^{2}=[(z-\overline{\lambda})^{n}\theta].\] If \(\mathcal{N}=\theta\left(M(a)\dot{+}\mathcal{L}_{j}\right)\) for some \(0\leq j\leq n-1\), then \(\theta\mathcal{L}_{j}\in H(b)\). Since \[\mathcal{L}_{j}=\text{Span}\{(z-\overline{\lambda})^{j},(z-\overline{ \lambda})^{j+1},\ldots,(z-\overline{\lambda})^{n-1}\},\quad 0\leq j\leq n-1,\] we have \((z-\overline{\lambda})^{j}\theta\in H(b)\). So \((z-\overline{\lambda})^{j}\theta\in\text{Mult}(H(b))\) and \(\mathcal{N}=[(z-\overline{\lambda})^{j}\theta]\). When \(b=\frac{z+1}{2}\), the following result was obtained in [14]. **Corollary 2.9**.: _Let \(T=(M_{z},H(b))\) be a strict \(2n\)-isometry on \(H(b)\). Suppose the pythagorean mate \(a\) of \(b\) has a single zero of multiplicity \(n\) at \(\overline{\lambda}\in\mathbb{T}\). Let \(f\in H(b)\), then \(f\) is cyclic if and only if \(f\) is outer and \(f(\overline{\lambda})\neq 0\)._ Proof.: The necessity is clear since \(H(b)\subseteq H^{2}\) and the point evaluation at \(\overline{\lambda}\) is continuous on \(H(b)\). For sufficiency, let \(f\in H(b)\), if \(f\) is outer and \(f(\overline{\lambda})\neq 0\), then by the above theorem, there are \(j\in\{0,1,\cdots,n\}\), \(\theta\) inner function such that \[[f]=[(z-\overline{\lambda})^{j}\theta].\] So \(\theta=1,j=0\), and \([f]=[1]=H(b)\). The proof is complete. ### The case of the rational functions Let \(b\) be a rational function of degree \(n\) that is a nonextreme point of the unit ball of \(H^{\infty}\). By Fejer-Riesz theorem ([26]), there are \(\lambda_{1},\cdots,\lambda_{n}\in\overline{\mathbb{D}}\) (may not be distinct) such that the numerator of the Pythagorean mate \(a\) of \(b\) is a constant multiple of \(\prod_{i=1}^{n}(1-\lambda_{i}z)\). Without loss of generality, suppose \(\lambda_{i}\in\mathbb{T}\) for \(1\leq i\leq m,0\leq m\leq n\). Then \[H(b)=\prod_{i=1}^{m}(z-\overline{\lambda_{i}})H^{2}\dotplus\mathcal{P}_{m-1}.\] If there is no \(\lambda_{i}\in\mathbb{T}\), then \(m=0\) and \(H(b)=H^{2}\), see [7, Lemma 4.3]. Let \(a_{1}(z)=\frac{1}{2^{m}}\prod_{i=1}^{m}(1-\lambda_{i}z)\), then as discussed in the above section 3.1, \(H(b)=M(\overline{a_{1}})\) and the \(T\)-invariant subspaces of \(H(b)\) are just the forward shift invariant subspaces of \(M(\overline{a_{1}})\). Using the arguments in section 2.1, similarly, one can characterize the \(T\)-invariant subspaces of \(H(b)\). Thus the proof of the following theorem is similar as in section 2.1, we omit the details. **Theorem 2.10**.: _Let \(b\) be a rational function of degree \(n\) that is a nonextreme point of the unit ball of \(H^{\infty}\). 
Then there is \(a(z)=\frac{1}{2^{m}}\prod_{i=1}^{m}(z-\lambda_{i})\) such that \(H(b)=M(\overline{a}),\lambda_{i}\in\mathbb{T},1\leq i\leq m,0\leq m\leq n\). When \(m=0\), \(H(b)=H^{2}\). So every nonzero \(T\)-invariant subspace of \(H(b)\) is of the form \(\left[\prod_{j=1}^{k}(z-\lambda_{i_{j}})\theta\right]\), where \(i_{j}\in\{1,2,\cdots,m\}\) distinct, \(0\leq k\leq m,j=1,2,\cdots,k\), \(\theta\) inner function are such that \(\prod_{j=1}^{k}(z-\lambda_{i_{j}})\theta\in H(b)\). When \(k=0\), \(\left[\prod_{j=1}^{k}(z-\lambda_{i_{j}})\theta\right]\) is interpreted as \([\theta]\)._ It follows from the above theorem that \(f\in H(b)\) is cyclic if and only if \(f\) is outer and \(f(\lambda_{i})\neq 0,1\leq i\leq m\). In [4], Aleman and Malman studied the vector-valued \(H(B)\) spaces. When \(H(B)\) is of finite rank and \(M_{z}\)-invariant, they proved in [4, Theorem 1.3] that for \(M_{z}\)-invariant subspace \(\mathcal{M}\), \(\dim(\mathcal{M}\ominus M_{z}\mathcal{M})=1\). Moreover, if \(\varphi\in\mathcal{M}\ominus M_{z}\mathcal{M}\) is of norm \(1\), then \(\mathcal{M}=[\varphi]_{M_{z}}\) and there is a Schur function \(C\) such that the rank of \(H(C)\) does not exceed that of \(H(B)\) and \(\varphi:H(C)\to\mathcal{M}\) is unitary. Theorems 2.8 and 2.10 in this paper provide some additional information of the structure of the \(M_{z}\)-invariant subspaces of \(H(b)\) when \(b\) is a rational function. _Acknowledgements._ S. Luo was supported by NNSFC (12271149, 11701167). The authors thank Professor Stefan Richter for many helpful conversions. The authors thank the referee for careful reading of the paper and many crucial comments. The argument of showing \(\mathcal{R}\) closed in Case 3 of Theorem 2.2 is due to the referee, our original argument is a little complicated.
ド・ブランジュ＝ロヴニャク空間 $H(b)$ は、$H^\infty$ の単位球に属する有界解析関数 $b$ によって定まります。$b$ が単位球の非端点（nonextreme point）である場合、$H(b)$ は前進シフト作用素 $M_z$ のもとで不変です。$H(b)$ 空間は、$2n$-アイソメトリックで拡大的な準解析的（quasi-analytic）作用素 $T$ のモデル空間となります。さらに、$T^*T - I$ が階数 1 の場合に、$H(b)$ の不変部分空間を記述します。
2309.10096
Tracing the Galactic Disk with Planetary Nebulae using Gaia DR3
We study the population of Galactic planetary Nebulae (PNe) and their central stars (CSs) through the analysis of their heliocentric distances and Galactic distribution. Distances are obtained by means of a revised statistical scale, based on an astrometrically-defined sample of CSs parallaxes from Gaia DR3 as calibrators. The statistical scale is applied to infer distances of a significant number (~850) of Galactic PNe, for which we deliver a new catalog of PN distances. By adopting a circular velocity curve of the Galaxy, we also derive 3D peculiar velocities from DR3 proper motions and published radial velocities of a large sample (~300) of PN CSs. We date PN progenitors based both on the best chemical abundances culled from the literature and on CS kinematic properties, finding a confirmation of the first method with the second. The slope of the radial oxygen gradient of the Galactic Disk traced by the complete PNe sample amounts to -0.0144 +/- 0.00385 [dex/kpc]. Furthermore, by distinguishing between PNe with old (> 7.5 Gyr) and young (< 1 Gyr) progenitors, we estimate the gradient to be respectively -0.0121 +/- 0.00465 and -0.022 +/- 0.00758 [dex/kpc], thus disclosing a mild steepening since Galaxy formation, with a slope change of 0.01 dex. These results are in broad agreement with previous PN studies, but now based on DR3 Gaia analysis, and also in agreement with what traced by most other Galactic probes.
B. Bucciarelli, L. Stanghellini
2023-09-18T19:10:38
http://arxiv.org/abs/2309.10096v1
Tracing the Galactic Disk with Planetary Nebulae using Gaia DR3 Distance Catalog, Velocities, Populations, and Radial Metallicity Gradients of Galactic Planetary Nebulae ###### Abstract Context: Aims:We want to study the population of Galactic planetary nebulae (PNe) and their central stars (CSs) through the analysis of their distances and Galactic distribution. The PN distances are obtained by means of a revised statistical distance scale, based on an astrometrically-defined sample of CSs from Gaia DR3 as calibrators. The new statistical distances, together with the proper motion of the CSs - also from DR3, and with published PN abundances as well as radial velocities, are used to characterize the PN populations in the Galaxy, and to derive the radial metallicity gradient. Methods:The statistical scale is applied to infer distances of a significant number (\(\sim\)850) of Galactic PNe, for which we deliver a new catalog of PN distances. By adopting a circular velocity curve of the Galaxy, we also obtain peculiar 3D velocities for a large sample of PNe (\(\sim\)300). Elemental abundances of the PNe are culled from the literature for an updated catalog, to be used in our analysis and other external applications. Results:The radial chemical gradient of the Galactic Disk is traced by PNe with available chemical abundances and distances, and kinematic data of the CSs are employed to identify the Halo PN population. We date PN progenitors based both on abundances and on kinematic properties, finding a confirmation of the first method with the second. For all PNe with at least one oxygen determination in the literature, we find a slope of the radial oxygen gradient equal to \(\Delta\log(\mathrm{O/H})/\Delta\ R_{\mathrm{O}}=-0.0144\pm 0.00385\) [dex kpc\({}^{-1}\)]. Furthermore, we estimate radial oxygen gradients for the PNe with (\(>7.5\) Gyr) and young (\(<1\) Gyr) progenitors to be respectively \(\Delta\log(\mathrm{O/H})/\Delta\ R_{\mathrm{O}}=-0.0121\pm 0.00465\) and -0.022\(\pm\)0.00758 [dex kpc\({}^{-1}\)], thus disclosing a mild steepening of the gradient since Galaxy formation, with a slope change of 0.01 dex. The time evolution is slightly higher (\(\sim 0.015\) dex) if we select the best available abundances in the literature. This result is in broad agreement with previous PN results, but now based on DR3 Gaia analysis, and also in agreement with what traced by most other Galactic probes. We also find a moderate oxygen enrichment when comparing the PNe with young and old progenitors. Conclusions: ## 1 Introduction The third data release of the Gaia ESA mission, DR3 (Gaia Collaboration et al. 2016, 2023b) provides unprecedented astrometric - along with photometric and spectroscopic- data for hundreds of central stars (CSs) of planetary nebulae (PNe) that, further to witnessing a short-lived phase of their life, turn out to be suitable probes of chemical abundances in the Galaxy (e.g. Pembert 1978; Henry et al. 2010), as well as in external galaxies (see Stanghellini 2019, and references therein). Metal contents of PNe with young and old progenitors can be examined to constrain different mechanisms of gas accretion and star formation history (e.g. Gibson et al. 2013; Curti et al. 2020). In particular, metallicity gradients along galactic discs are crucial observables in galaxy evolution studies. They are the outcome of complex interacting physical processes involving metal production, consumption, transport, and loss (Matteucci 2021; Prantzos et al. 
2023); currently, their radial profiles reconstructed from different metal tracers show some diversities which need to be further investigated (Stanghellini et al. 2019). The basic observational step in this direction is to build a metallicity gradient as function of the galactocentric radius \(R_{\mathrm{G}}\); the profile and slope of the gradient can be inspected for a coeval stellar population, and its time evolution tested with different age populations (e.g. Magrini et al. 2016; Stanghellini & Haywood 2018). To this end, being able to gauge the precise PN location as function of Galactic radius is clearly of utmost importance. In our previous paper (Stanghellini et al. 2020, hereafter Paper I), we performed a re-calibration of the PN statistical distance scale from a sample of \(\approx 100\) CSs having DR2 parallaxes with relative uncertainty better than 20%, demonstrating the potential of Gaia astrometry in improving the extant knowledge on PNe distances. Following on this path, we present a revision of the PN distance scale using the best parallaxes available from Gaia DR3 (Lindegren et al. 2021b). In addition, the highly accurate Gaia DR3 proper motions are precious parameters in the context of this study: in fact, space velocities can be inferred by the CS proper motions using their distances. Since the velocity dispersion of a disk population tends to increase with stellar age (e.g. Stromberg 1925, 1946; Wielen 1977; Mackereth et al. 2019), the kinematic properties of PNe CSs bear independent information about the progenitor's age, which can complement and support the elemental abundance dating method. Therefore, in this work we have exploited the full astrometric data provided by Gaia DR3 for a bona-fide set of Galactic PNe, in combination with their most updated chemical abundances and radial velocities, to investigate the gas-phase metallicity gradient of the Milky Way and its evolution in time. The structure of the paper is as follows. In SS 2 we characterize the Galactic PN sample and give an assessment of the new distance scale calibration, based on their DR3 parallax and other nebular parameters; SS 3 describes the data and methods for the derivation of radial metallicity gradients; finally, SS 4 is devoted to the discussion of the results and the conclusions. ## 2 The Revised Distance Scale We explore the statistical distance scale for Galactic Planetary Nebulae (PNe) hinging on the commensurate relation between their surface brightness and physical radius advocated by Shklovsky (1956), which has been widely used and refined since then; see Smith (2015) for a thorough review. The similarity between the expansion radius of a fully ionized nebula and the decreasing density of its (nearly) constant ionized mass provides an estimation of the PN distance, once its angular radius and observed flux have been measured. Despite the wide range of PNe mass progenitors, the so-called Shklovsky mass, i.e., the one responsible for the observed flux, has a fairly constant value of \(\approx 0.1-0.2\)\(M_{\odot}\), making the underlying assumption that all nebulae have the same ionized mass a viable one. Nonetheless, the Shklovsky method tends to overestimate the distances of less evolved PNe that are in a "ionization bound" state, i.e., whose ionization front is still within the nebular material, far from the edge of the gaseous cloud (Schneider & Buckley 1996). 
The wealth and unprecedented accuracy of Gaia DR3 parallaxes allow us to calibrate the _physical radius-surface brightness_ distance scale on well-constrained empirical evidence, while allowing us to put to test some of the crucial aspects of the underlying physical models. In logarithmic units, the scale is represented by the equation: \[\log R=a\times\log\!S_{\mathrm{H}B}+b, \tag{1}\] where the surface brightness \(S_{\mathrm{H}B}\) is the distance-independent parameter, given in terms of the extinction-corrected absolute \(H\beta\) flux from the nebula \(I_{\mathrm{H}\beta}\) (in erg cm\({}^{-2}\) sec\({}^{-1}\)) and its angular radius \(\theta\) (in arcsec) as \(S_{\mathrm{H}B}=I_{\mathrm{H}\beta}/\pi\theta^{2}\), whereas the physical radius of the PN, \(R=\theta"/(206265~{}p")\) (in pc), with \(p\) the PN's parallax, is the distance-dependent variable. ### Identifying Central Stars of Planetary Nebulae in Gaia DR3 Starting from a list of 2556 _true_ PNe extracted from the HASH database (Parker et al. 2016)1, we searched for Gaia DR3 sources with the HASH coordinates of the nebular centers, inside a radius of 3 arcsec and up to 5 arcsec if the smaller radius returned zero matches. We resolved multiple matches by applying a ranking system based on the astrometric quality indicators provided with the Gaia solution. In such a way, we identified 1674 CS candidates lying near the geometric center of the nebula with a DR3 parallax. Footnote 1: The list includes all objects from the HASH database that were labeled as ‘TRUE PN’ at the time of this work Furthermore, to check the reliability of our purely astrometric criterion, we made comparisons with the CS catalogs recently published by Gonzalez-Santamaria et al. (2021) and Chornay & Walton (2021) whose matching algorithms used the Gaia BP-RP colour to discriminate between hot and cool stars candidates and weigh them accordingly. Since our main aim in this context was sample purity as opposed to completeness, we retained only those objects for which our matched Gaia source was in agreement with both Gonzalez-Santamaria et al. (2021) and Chornay & Walton (2021) catalogs, obtaining a sample of 942 CSs, whose corresponding PNe constitute our _Calibrator Sample_. ### Selection of Distance Scale Calibrators To find the parameters \(a\) and \(b\) of Eq.1 we matched the Calibrator Sample with the physical parameters that are needed to build the scale. Table 1 contains the Calibrator Sample PNe whose angular radius, \(H\beta\) flux, and extinction correction are available in the literature. For each of those targets we list the PN G name, the Gaia DR3 ID name - which is different from the DR2 ID, the common name used for the PN, the corrected (see below) DR3 parallax \(\varpi_{\mathrm{c}}\), the angular radius \(\theta\), the logarithmic \(H\beta\) flux, the logarithmic extinction constant, and the ionized mass \(\log\!M_{\mathrm{i}}\), the latter calculated following the prescription described in Paper I. For the cases where uncertainties on the physical parameters are unavailable we assume a typical error for each parameter. In particular, we adopt 0.2 as a relative angular radius uncertainty, and 0.1 as the uncertainty for the logarithmic extinction. All parameters, excluding those derived from Gaia DR3, have been selected from the literature (references in Stanghellini & Haywood 2018). 
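The two coordinates entering Eq. 1 can be computed directly from the observables. The following minimal Python sketch is our own illustration, not the authors' pipeline; it assumes the usual convention that the extinction-corrected flux is \(I_{\mathrm{H}\beta}=F_{\mathrm{H}\beta}\times 10^{c}\), with \(c\) the logarithmic extinction constant, and a parallax expressed in mas.

```python
import numpy as np

def calibration_point(theta_arcsec, log_f_hbeta, c_ext, parallax_mas):
    """Return (log10 S_Hbeta, log10 R) for one calibrator.

    theta_arcsec : nebular angular radius [arcsec]
    log_f_hbeta  : observed log10 F(Hbeta) [erg cm^-2 s^-1]
    c_ext        : logarithmic extinction constant at Hbeta (assumed: log I = log F + c)
    parallax_mas : (bias-corrected) parallax [mas]
    """
    log_i = log_f_hbeta + c_ext                          # extinction-corrected flux
    log_s = log_i - np.log10(np.pi * theta_arcsec**2)    # S = I / (pi * theta^2)
    p_arcsec = parallax_mas * 1.0e-3                     # mas -> arcsec
    log_r = np.log10(theta_arcsec / (206265.0 * p_arcsec))  # R = theta / (206265 p) [pc]
    return log_s, log_r

# e.g. with the first calibrator listed in Table 1 (PC12)
print(calibration_point(1.1245, -11.91, 0.700, 0.108))
```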
Parallaxes in Table 1 are those used to fit the scale, and have been corrected by applying to DR3 parallaxes a global offset of 0.017 mas plus an additional bias term which depends on the star's \(G\) magnitude, its colour \(G_{BP}-G_{RP}\) -- or _pseudo-colour_ in case of a 6-parameter (6p) astrometric solution -- and ecliptic latitude, as detailed in Lindegren et al. (2021a). Moreover, parallax errors have been multiplied by 1.05/1.22 for a 5p/6p parameter solution to compensate for an underestimation (bias) of the error, as described in Fabricius et al. (2021). In the following, the symbols \(\varpi_{\mathrm{c}}\), \(\sigma_{\varpi_{\mathrm{c}}}\) will indicate that the corresponding DR3 parallax \(\varpi\) and standard error \(\sigma_{\varpi}\) have been corrected for the above mentioned systematic effects. When we match our Calibrator Sample with the PNe whose radius, H\(\beta\) flux, and extinction are also available, we obtain 401 calibrators. If we choose \(\sigma_{\varpi_{\mathrm{c}}}/\varpi_{\mathrm{c}}<20\%\) (10%) we still have 137 (74) calibrators, doubling the corresponding samples in the DR2 calibration (Paper I).

### Building the Planetary Nebula Distance Scale

The semi-empirical relation expressed by Eq. 1 sets the stage for the inference problem relating the sought-for posterior density distribution of the unknown parameters \(a\) and \(b\) to the likelihood of the observed physical quantities characterizing our PN calibrators. We give here a short exposition of the method, referring the reader to the Appendix of Paper I for a complete derivation. To formulate our likelihood function, we first introduce the measured radius \(\theta\sim N(\phi,\sigma_{\theta})\), flux \(I\sim N(J,\sigma_{I})\), and parallax \(\varpi\sim N(p,\sigma_{\varpi})\) as Gaussian-distributed uncorrelated variables, then marginalize the likelihood of the i-th set of measurements with respect to the true radius (\(\phi\)) and flux (\(J\)), treating the parallax \(p\) as the dependent variable through Eq. 1, and obtain the expression of the likelihood for the i-th calibrator as:

\[L(\varpi_{i},\theta_{i},I_{i}|a,b)=\int_{0}^{\phi_{\rm lim}}\int_{0}^{J_{\rm lim}}N\left(\frac{\pi^{a}\phi^{2a+1}}{206265\,J^{a}10^{b}},\sigma_{\varpi_{i}}\right)N(\phi,\sigma_{\theta_{i}})\,N(J,\sigma_{I_{i}})\,d\phi\,dJ, \tag{2}\]

where we have used uniform priors for \(\phi\) and \(J\) inside some reasonable intervals. Then, after forming the _log-likelihood_ of the complete set of calibrators, by virtue of the Bayes theorem we derive the bi-variate posterior probability density (\(p_{\rm pdf}\)) of \(a\) and \(b\), assigning them uniform priors, as:

\[\log p_{\rm pdf}(a,b|\varpi,\theta,I)\propto\sum_{i=1}^{N}\log\left(\int_{0}^{\phi_{\rm lim}}\int_{0}^{J_{\rm lim}}N\left(\frac{\pi^{a}\phi^{2a+1}}{206265\,J^{a}10^{b}},\sigma_{\varpi_{i}}\right)N(\phi,\sigma_{\theta_{i}})\,N(J,\sigma_{I_{i}})\,d\phi\,dJ\right). \tag{3}\]

We approach this problem by numerical integration over the \(\phi\) and \(J\) parameters' range, and by sampling the posterior distribution of \(a\) and \(b\) inside a bi-dimensional grid with step sizes of 0.0005 and 0.005 for the _slope_ and _intercept_ parameters, respectively. The estimated \(\hat{a}\) and \(\hat{b}\) correspond to the mode of the discrete posterior probability density function.
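As an illustration of the numerical scheme just described, the sketch below evaluates the double integral of Eq. 2 on a grid and accumulates the log-likelihood of Eq. 3 over the calibrators. It is a schematic version of our own; the grid sizes, integration limits and input arrays are placeholders rather than the values actually used in the paper.

```python
import numpy as np

def log_posterior_grid(theta, sig_theta, flux, sig_flux, plx, sig_plx,
                       a_grid, b_grid, n_phi=200, n_J=200):
    """Brute-force log-posterior of (a, b) for the log R = a log S_Hbeta + b scale.

    theta, flux, plx : measured angular radii [arcsec], extinction-corrected Hbeta
                       fluxes [erg cm^-2 s^-1], and parallaxes [arcsec]
    sig_*            : their 1-sigma uncertainties
    Flat priors are assumed on a, b and on the marginalized true radius phi
    and true flux J (Eqs. 2-3).
    """
    logp = np.zeros((a_grid.size, b_grid.size))
    for th, sth, I, sI, w, sw in zip(theta, sig_theta, flux, sig_flux, plx, sig_plx):
        # integration grids for the true (noise-free) radius and flux
        phi = np.linspace(max(th - 5 * sth, 1e-3), th + 5 * sth, n_phi)
        J = np.linspace(max(I - 5 * sI, 1e-30), I + 5 * sI, n_J)
        P, Q = np.meshgrid(phi, J, indexing="ij")
        g_phi = np.exp(-0.5 * ((P - th) / sth) ** 2)   # N(phi; theta_i, sig_theta_i)
        g_J = np.exp(-0.5 * ((Q - I) / sI) ** 2)       # N(J; I_i, sig_I_i)
        for ia, a in enumerate(a_grid):
            for ib, b in enumerate(b_grid):
                # parallax predicted by Eq. 1: p = pi^a phi^(2a+1) / (206265 J^a 10^b)
                p_model = np.pi ** a * P ** (2 * a + 1) / (206265.0 * Q ** a * 10.0 ** b)
                g_plx = np.exp(-0.5 * ((w - p_model) / sw) ** 2)
                like = np.trapz(np.trapz(g_plx * g_phi * g_J, J, axis=1), phi)
                logp[ia, ib] += np.log(like + 1e-300)
    return logp
```

The mode of the returned grid then gives the estimates \(\hat{a}\) and \(\hat{b}\).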
The 68% confidence intervals for the two parameters are calculated separately, by projecting onto the \(a\) and \(b\) axes the iso-probability contour of the posterior distribution for which \(|p(a,b)_{\rm max}-p(a,b)|\leq 0.5\) (remembering that the log-likelihood ratio statistic is asymptotically distributed as \(\chi^{2}/2\)). Finally, we estimate the correlation term from the complete posterior sample as \(\rho_{\hat{a}\hat{b}}=1/(\sigma_{\hat{a}}\sigma_{\hat{b}})\sum_{i=1}^{n_{\rm e}}\sum_{j=1}^{n_{\rm e}}p(a_{i},b_{j})(a_{i}-\hat{a})(b_{j}-\hat{b})\), obtaining in all cases a high positive correlation between the estimated slope and intercept, as illustrated in Fig. 3, which shows density levels of the bivariate posterior distribution \(p_{\rm pdf}(a,b)\) for the \(\sigma_{\varpi_{\rm c}}/\varpi_{\rm c}<20\%\) sample. We inferred the distance scale parameters using different sets of calibrators, defined by the adopted threshold on the relative DR3 parallax error \(\sigma_{\varpi_{\rm c}}/\varpi_{\rm c}\), obtaining consistent estimates of the slope \(a\) and intercept \(b\), as reported in Table 2. Figure 1 shows the \(\log R-\log S_{\rm H\beta}\) plane with the subsets of calibrators for which \(\sigma_{\varpi_{\rm c}}/\varpi_{\rm c}\leq 10\%\) (top panel) and \(\sigma_{\varpi_{\rm c}}/\varpi_{\rm c}\leq 20\%\) (bottom panel), along with their error bars and the corresponding estimated linear relation (see caption). Inspection of these plots shows that both data sets have a somewhat large dispersion around the scale. Moreover, from a comparison of the 20% sample of Figure 1 with that of Fig. 2 in Paper I (where the calibration used DR2 parallaxes), we note that the numerous "stragglers" populating the lower right end of the plane, which corresponded to PNe with very low ionized mass (log\(M_{\rm i}<-2\)), are not so prominent in the new plots. It turned out that, among the complete set of potential calibrators, 37 objects had log\(M_{\rm i}<-2\), but none of them had accurate enough parallaxes at the 10% level and a few - indicated in red in the lower panel of Figure 1 - met the 20% requirement. If we put these 37 objects on the \(\log R-\log S_{\rm H\beta}\) plane, they are scattered mainly in the lower part of the graph and their distances do not conform with the statistical scale, corroborating the findings of our previous paper based on DR2 data. To further investigate the ionized mass effect, we inspect the PNe with \(\sigma_{\varpi_{\rm c}}/\varpi_{\rm c}<20\%\) by plotting them using a colour-density scale that increases with log\(M_{\rm i}\). The result, shown in Figure 2, is quite striking and can be fully appreciated thanks to the quality of the calibrators: the dispersion around the linear scale is clearly linked to the (estimated) log\(M_{\rm i}\) of the PN shell, reflecting the deviation of its actual evolutionary state from the model assumptions. In fact, in the transition from ionization-bound, or optically thick, to density-bound, or optically thin, the ionized mass of PNe increases, as discussed in Pottasch (1980); therefore, the model assumption of a fully ionized, constant mass on which Eq. 1 stands does not strictly hold. Remarkably, when we tightened the constraint on the parallax relative error even further, we obtained a similar dispersion of the calibrators as in Fig. 2.
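The post-processing of the gridded posterior just described (mode, projected 68% intervals from the contour with \(\Delta\log p\leq 0.5\), and the correlation \(\rho_{\hat{a}\hat{b}}\)) can be sketched as follows; again, this is our own illustration rather than the authors' code.

```python
import numpy as np

def summarize_posterior(logp, a_grid, b_grid):
    """Mode, projected 68% intervals and correlation from a gridded log-posterior."""
    p = np.exp(logp - logp.max())
    p /= p.sum()                                   # discrete posterior, sums to 1
    ia, ib = np.unravel_index(np.argmax(p), p.shape)
    a_hat, b_hat = a_grid[ia], b_grid[ib]
    # 68% region: grid points with |log p_max - log p| <= 0.5, projected on each axis
    inside = (logp.max() - logp) <= 0.5
    a_lo, a_hi = a_grid[inside.any(axis=1)].min(), a_grid[inside.any(axis=1)].max()
    b_lo, b_hi = b_grid[inside.any(axis=0)].min(), b_grid[inside.any(axis=0)].max()
    # correlation rho_ab weighted by the discrete posterior
    A, B = np.meshgrid(a_grid, b_grid, indexing="ij")
    sa = np.sqrt(np.sum(p * (A - a_hat) ** 2))
    sb = np.sqrt(np.sum(p * (B - b_hat) ** 2))
    rho = np.sum(p * (A - a_hat) * (B - b_hat)) / (sa * sb)
    return (a_hat, (a_lo, a_hi)), (b_hat, (b_lo, b_hi)), rho
```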
Finally, by subtracting out the effect due to the nominal uncertainties of the observed angular radius and flux, we tentatively estimated an average intrinsic (cosmic) scatter of \(\sim 0.1\) dex around the distance scale, and conclude that this reflects actual nebular evolution, which limits the potential accuracy of this statistical distance scale unless the ionized mass of the PN is known a priori. The results of the calibration are reported in Table 2, where for each of the three relative uncertainty thresholds on \(\sigma_{\varpi_{\rm c}}/\varpi_{\rm c}\) we give the number of calibrators \(N_{\rm cal}\) and the estimated slope and intercept along with their correlation \(\rho_{ab}\); we also list the averaged parameter \(<K>=<\)statistical scale distance \(\times\ \varpi_{\rm c}>\), ideally equal to 1, and its dispersion \(<\sigma_{\rm K}>\), which is an indicator of the goodness of the scale, see Smith (2015). In this paper we will use the third scale listed in Table 2, the one calibrated with central stars whose DR3 parallaxes were measured with a relative uncertainty better than 20%, which has \(<K>\sim 0.964\). Through the analysis presented in this paper we have confirmed that using another of the three scales would not have an impact on the science results. Our adopted scale is: \[\log R_{\rm PN}=(-0.242\pm 0.0042)\times\log S_{\rm H\beta}-(4.2\pm 0.057). \tag{4}\] In Table 3 we give the new catalog of distances to 843 Galactic PNe, based on Eq. 4. For each PN in the HASH catalog whose \(H\beta\) intensity and apparent radius are available, we calculate its heliocentric distance, \(D\), based on our scale. In the catalog we list the PN G name, the angular radius \(\theta\), the logarithmic \(H\beta\) flux and its uncertainty, the logarithmic extinction constant and its uncertainty, and the heliocentric distances from our scale of Eq. 4, \(D\) [kpc], with their left/right formal uncertainties. The latter have been computed by estimating \(\sigma_{\log D}\) via first-order error propagation of the linear relations among the variables \(\log R\), \(\log S_{\rm H\beta}\), \(\theta\), and \(D\), accounting for uncertainties in both observed and fitted parameters; then, asymmetric uncertainties were defined by putting \(\sigma_{\rm D^{-}}=10^{\log D-\sigma_{\log D}}\) and \(\sigma_{\rm D^{+}}=10^{\log D+\sigma_{\log D}}\). Finally, we include the PN peculiar velocity \(V_{\rm pec}\) and its uncertainty calculated from Eq. 12, plus a Disk/Bulge/Halo population classification as discussed in the following section.

\begin{table} \begin{tabular}{l l c c c c c c} \hline PN G & Gaia ID (DR3) & alias & \(\varpi_{c}\) & \(\theta\) & \(F_{\rm H\beta}\) & \(c\) & log\(M_{\rm i}\) \\ & & & [mas] & [\({}^{\prime\prime}\)] & [erg cm\({}^{-2}\) s\({}^{-1}\)] & & \\ \hline 000.1+17.2 & 4130784921205604736 & PC12 & 0.108\(\pm\)0.081 & 1.1245 & -11.91\(\pm\)0.00 & 0.700\(\pm\)0.10 & -1.198 \\ 000.1-05.6 & 4049045783774253696 & H2-40 & 0.361\(\pm\)0.326 & 8.7950 & -13.20\(\pm\)0.40 & 0.731\(\pm\)0.15 & -1.761 \\ 000.3+12.2 & 4126115570219432448 & IC4634 & 0.414\(\pm\)0.052 & 5.8150 & -10.88\(\pm\)0.01 & 0.550\(\pm\)0.06 & -1.175 \\ \hline \end{tabular} \end{table} Table 1: PN calibrators parameters

## 3 Planetary Nebulae as tracers of the Galactic Disk

An important scientific motivation for the Galactic PN distance scale is the study of the radial (and vertical) metallicity gradients in the Galaxy.
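The distances entering this analysis are those of Table 3; for reference, a minimal sketch of how each catalog entry follows from Eq. 4 is given below. It assumes, as above, that the extinction-corrected flux is \(F_{\mathrm{H}\beta}\times 10^{c}\); the hard-coded coefficients are those of Eq. 4.

```python
import numpy as np

def statistical_distance_kpc(theta_arcsec, log_f_hbeta, c_ext, a=-0.242, b=-4.2):
    """Heliocentric distance [kpc] from the adopted log R - log S_Hbeta scale (Eq. 4)."""
    log_i = log_f_hbeta + c_ext                          # extinction-corrected log flux
    log_s = log_i - np.log10(np.pi * theta_arcsec**2)    # surface brightness
    log_r = a * log_s + b                                # physical radius [pc], Eq. 4
    d_pc = 206265.0 * 10.0**log_r / theta_arcsec         # D = 206265 R / theta
    return d_pc / 1000.0

# PN G 000.3+12.2 (IC 4634): theta = 5.82", log F(Hbeta) = -10.88, c = 0.55
print(round(statistical_distance_kpc(5.82, -10.88, 0.55), 2))   # ~2.19, cf. Table 3
```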
Planetary nebulae are the ejecta of evolved 1-8 M\({}_{\odot}\) stars, thus they represent a continuum of stellar populations from very old ages through relatively young, with ages \(\sim\)0 to \(\sim\)10 Gyr, which makes them very versatile probes for Galactic evolution. Gas-phase abundances of \(\alpha\)-elements observed through emission-lines and measured in their shell should be the same as their progenitors' at the time of formation, since these elements do not change considerably during the evolution of these stars. Thus, tracing the abundances of such elements is equivalent to probing the Galaxy's abundance with look-back time from \(\sim\)0 to \(\sim\)10 Gyr, depending on the initial progenitor mass of the PNe. ### Elemental abundances We use PN elemental abundances both for progenitor dating purposes, and to determine the metallicity gradients. We select the elemental abundances from all references included in Stanghellini & Haywood (2018), to whose we add the abundances from the data sets by Tsamis et al. (2003), Miller et al. (2019), McNabb et al. (2016), and Delgado-Inglada et al. (2015), which were not included therein. In the references where abundances are given both for the whole nebula and for different parts of the PN, we use only those measures based on spectra that include the whole PN, so abundances from different references are readily comparable with one another. The final abundances have been curated, i.e., we have recalculated the ionization correction factor (ICF) uniformly, as described in Stanghellini & Haywood (2018). In Table 4 we give the average abundances used in this paper. Column (1) gives the PN G name, then in columns (2) through (4) we give the C, N, and O abundances, in the usual scale log(X/H)+12, averaged from all the available references. The uncertainties, given in dex, are the dispersion of that elemental abundance across the literature, estimated with \(\sigma(\log({\rm O/H}))=0.434\times[\sigma({\rm O/H})/<{\rm O/H}>]\). We also produced a curated catalog of selected PN abundances, shown in Table 5. For this selection we prioritize abundances from space-based observations unless they are deemed uncertain in the original papers; otherwise, we use the published ground-based abundances, prioritizing the most recent, or recently revised, ones. Curated abundances are given with their uncertainty, when available in the original reference. ### Velocities of PNe Central Stars For a sizable sample of CSs whose PNe have a statistical distance in our catalog, Gaia DR3 provides equatorial proper motions \(\mu_{\alpha\ast}\pm\sigma_{\mu_{\alpha}}\)2, \(\mu_{\delta}\pm\sigma_{\mu_{\delta}}\); we use this kinematic information as insight for their populations and ages. After converting DR3 proper motions and their standard deviations in galactic coordinates \(\mu_{1}\pm\sigma_{\mu_{\mu}}\), \(\mu_{b}\pm\sigma_{\mu_{b}}\) via the transformation matrices \(A_{\rm G}^{\prime}\) and \(C_{\rm Gal}\) defined by Eq. 
4.62 and 4.79 of the Gaia EDR3 online documentation, we compute the corresponding observed spatial velocities plus errors, given in [km s\({}^{-1}\)], as \[V_{l}=4.74047\,\mu_{l}\times D \tag{5}\] \[\sigma_{V_{l}}=4.74047\times D\sqrt{\mu_{l}^{2}\sigma_{\rm D}^{2}+\sigma_{\mu_{l}}^{2}}\] \[V_{b}=4.74047\,\mu_{b}\times D\] \[\sigma_{V_{b}}=4.74047\times D\sqrt{\mu_{b}^{2}\sigma_{\rm D}^{2}+\sigma_{\mu_{b}}^{2}} \tag{6}\] where \(l\) and \(b\) are the Galactic longitude and latitude. We also adopt \(\sigma_{\rm D}\equiv(\sigma_{\rm D^{-}}+\sigma_{\rm D^{+}})/2\). Combining \(V_{l}\), \(V_{b}\), and the published radial velocities into the Cartesian velocity \(\mathbf{v}_{rel}\) of each star relative to the Sun, and correcting for the solar motion \(\mathbf{v}_{\odot}\), we obtain \[\mathbf{v}_{gal}=\begin{pmatrix}U\\ V\\ W\end{pmatrix}=\mathbf{v}_{rel}+\mathbf{v}_{\odot} \tag{8}\]

\begin{table} \begin{tabular}{l l l l l l l l l} \hline Scale & \(\sigma_{\varpi_{\rm c}}/\varpi_{\rm c}\) & \(N_{\rm cal}\) & Slope & Intercept & \(\rho_{ab}\) & \(N_{\rm comp}\) & \(<K>\) & \(<\sigma_{\rm K}>\) \\ \hline 1 & \(<\)0.05 & 26 & -0.232\(\pm\)0.010 & -4.075\(\pm\)0.132 & 0.99 & 26 (1) & 0.894 & 0.102 \\ & & & & & & 355 (2) & 0.913 & 0.544 \\ 2 & \(<\)0.10 & 74 & -0.237\(\pm\)0.005 & -4.14\(\pm\)0.070 & 0.99 & 74 (1) & 0.914 & 0.113 \\ & & & & & & 355 (2) & 0.917 & 0.543 \\ 3 & \(<\)0.20 & 137 & -0.242\(\pm\)0.004 & -4.20\(\pm\)0.057 & 0.99 & 133 (1) & 0.964 & 0.154 \\ & & & & & & 355 (2) & 0.931 & 0.553 \\ \hline \end{tabular} \end{table} Table 2: Summary of the Distance Scale Analysis

\begin{table} \begin{tabular}{l l l l l l l} \hline PN G & \(\theta\) & \(F_{\rm H\beta}\) & \(c\) & \(D\) & \(V_{\rm pec}\) & Pop. \\ & [\({}^{\prime\prime}\)] & [erg cm\({}^{-2}\) s\({}^{-1}\)] & & [kpc] & [km s\({}^{-1}\)] & \\ \hline 000.1+17.2 & 1.12 & \(-\)11.91 \(\pm\) 0.00 & 0.70 \(\pm\) 0.10 & 8.341\({}^{+0.089}_{-0.636}\) & 160.21 \(\pm\) 28.83 & Bulge \\ 000.2-01.9 & 4.47 & \(-\)12.56 \(\pm\) 0.01 & 2.04 \(\pm\) 0.10 & 2.793\({}^{+0.226}_{-0.117}\) & 50.95 \(\pm\) 12.12 & Disk \\ 000.3+12.2 & 5.82 & \(-\)10.88 \(\pm\) 0.01 & 0.55 \(\pm\) 0.06 & 2.188\({}^{+0.102}_{-0.048}\) & 48.23 \(\pm\) 9.72 & Disk \\ \hline \end{tabular} \end{table} Table 3: Catalog of Statistical Distances and Peculiar Velocities of Galactic PNe

For the following analysis, it is useful to express spatial velocities in cylindrical coordinates, for which we need to compute the galactocentric azimuth \(\phi\) as (e.g. Lopez-Corredoira & Sylos Labini, 2019) \[\begin{array}{l}X=R_{\odot}-D\cos l\cos b\\ Y=D\sin l\cos b\\ \phi=\tan^{-1}Y/X\end{array} \tag{9}\] where \(D\) is the star's heliocentric distance and \(\phi\) is taken as positive in the direction of galactic rotation; the radial, azimuthal, and vertical components of the spatial velocity are then given by: \[\begin{array}{l}V_{\rm R}=-U\cos\phi+V\sin\phi\\ V_{\phi}=U\sin\phi+V\cos\phi\\ V_{z}=W\end{array} \tag{10}\] Finally, we are able to estimate the peculiar velocity of each star by subtracting from the azimuthal component \(V_{\phi}\) the circular motion due to differential galactic rotation. To do so, for each PN we compute the galactocentric radius \(R_{\rm G}\) projected onto the Galactic plane as: \[R_{\rm G}=\sqrt{R_{\odot}^{2}+D^{2}\cos^{2}b-2R_{\odot}D\cos l\cos b}, \tag{11}\] and interpolate linearly between the nearest two values of \(R_{\rm G}\) of Table 6 to calculate the star's circular velocity \(V_{\rm c}(R_{\rm G})\).
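For reference, Eqs. 5-11 can be strung together in a few lines of Python. The sketch below is our own schematic illustration: the solar galactocentric radius, the solar motion vector, the velocity conventions noted in the comments, and the flat rotation value used in place of the Table 6 curve are placeholder assumptions, not the constants adopted in the paper.

```python
import numpy as np

# placeholder constants (NOT the values adopted in the paper)
R_SUN_KPC = 8.2                          # assumed solar galactocentric radius [kpc]
V_SUN = np.array([11.1, 248.0, 7.3])     # assumed solar (U, V, W) in the Galactic rest frame [km/s]
K = 4.74047                              # km/s per (mas/yr * kpc)

def peculiar_velocity(D_kpc, l_deg, b_deg, mul_masyr, mub_masyr, vrad_kms,
                      vcirc_kms=230.0):
    """Cylindrical velocity components and V_pec (Eqs. 5-12), all in km/s."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    # observed tangential velocities (Eqs. 5-6)
    vl, vb = K * mul_masyr * D_kpc, K * mub_masyr * D_kpc
    # heliocentric Cartesian velocity (assumed: U toward Galactic center, V toward rotation, W toward NGP)
    u_rel = vrad_kms * np.cos(b) * np.cos(l) - vl * np.sin(l) - vb * np.sin(b) * np.cos(l)
    v_rel = vrad_kms * np.cos(b) * np.sin(l) + vl * np.cos(l) - vb * np.sin(b) * np.sin(l)
    w_rel = vrad_kms * np.sin(b) + vb * np.cos(b)
    U, V, W = np.array([u_rel, v_rel, w_rel]) + V_SUN          # Eq. 8
    # galactocentric azimuth and radius (Eqs. 9, 11)
    X = R_SUN_KPC - D_kpc * np.cos(l) * np.cos(b)
    Y = D_kpc * np.sin(l) * np.cos(b)
    phi = np.arctan2(Y, X)
    R_G = np.sqrt(R_SUN_KPC**2 + (D_kpc * np.cos(b))**2
                  - 2.0 * R_SUN_KPC * D_kpc * np.cos(l) * np.cos(b))
    # cylindrical components (Eq. 10) and peculiar velocity (Eq. 12)
    V_R = -U * np.cos(phi) + V * np.sin(phi)
    V_phi = U * np.sin(phi) + V * np.cos(phi)
    V_pec = np.sqrt(V_R**2 + W**2 + (V_phi - vcirc_kms)**2)
    return R_G, V_R, V_phi, W, V_pec
```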
The absolute space motion of the stars (or the PNe), which we call \(V_{\rm pec}\), can be obtained by quadrature of the three velocity components as \[\begin{array}{l}V_{\rm pec}=\sqrt{V_{\rm R}^{2}+V_{z}^{2}+(V_{\phi}-V_{\rm c})^{2}}\\ \sigma_{V_{\rm pec}}\approx\sqrt{\sigma_{V_{\rm R}}^{2}+\sigma_{V_{\phi}}^{2}+\sigma_{V_{z}}^{2}}\end{array} \tag{12}\] where \(\sigma_{V_{\rm pec}}\) is a lower-limit uncertainty, which assumes uncorrelated variables and a perfect model. Peculiar velocities for the PNe are reported in Table 3. In Figure 4 we put the inferred peculiar velocities in the so-called Toomre diagram, which can be used to identify regions of high probability of finding Disk versus Halo stars. In particular, following Bonaca et al. (2017), we use \(V_{\rm pec}>220\) [km s\({}^{-1}\)] as the threshold value for identifying Halo stars.

Figure 1: The log(\(R\)) [pc] vs. log(\(S_{\rm H\beta}\)) [erg cm\({}^{-2}\) s\({}^{-1}\)] plot for calibrators with \(\sigma_{\varpi_{\rm c}}/\varpi_{\rm c}<0.1\) (top panel) and \(\sigma_{\varpi_{\rm c}}/\varpi_{\rm c}<0.2\) (bottom panel). In both plots, the blue points are the calibrators, with their logarithmic (asymmetric) error bars reflecting the 1-\(\sigma\) uncertainties of the observed parameters; in the bottom plot the red points are those calibrators with log\(M_{\rm i}<-2\); the red lines are the two distance scales derived from the calibrators, and in cyan we plot the 2-\(\sigma\) confidence band of the scales, representing the uncertainty of the logarithmic radius \(R\) induced by the correlated errors on \(\hat{a}\) and \(\hat{b}\).

Figure 2: The log(\(R\)) [pc] vs. log(\(S_{\rm H\beta}\)) [erg cm\({}^{-2}\) s\({}^{-1}\)] plot for the PN sample with \(\sigma_{\varpi_{\rm c}}/\varpi_{\rm c}<0.2\), colour-coded according to their ionized mass. The error bars have been omitted for clarity; the cross at the bottom-left corner is representative of the typical error on the respective axis.

Figure 3: Normalized 2D posterior probability of the distance scale parameters obtained from the PN sample with \(\sigma_{\varpi_{\rm c}}/\varpi_{\rm c}<20\%\).

### Galactic PN Populations, and Dating PN progenitors

We derive heliocentric distances \(D\) and their confidence limits, as described in subsection 2.3, for 843 Galactic PNe. The range of distances spanned by these PNe is \(\sim\)150 to \(\sim\)27,000 [pc], and most of them can be included in a general Disk population, thus they are adequate for our science scope. We populate the Halo PN sample by selecting PNe by peculiar velocity, where Halo PNe have \(V_{\rm pec}>220\) [km s\({}^{-1}\)]. This selection yields 13 Halo PNe. We also tentatively define the Bulge population similarly to Stanghellini & Haywood (2018), by selecting those PNe in the central 3\(\times\)3 [kpc], thus with \(|z|<3\) and \(0<R_{\rm G}<3\) [kpc]. To constrain the Bulge sample even more, we keep in this sample only those PNe whose apparent radius is \(\theta<5\) [arcsec]. Our selection gives 102 Bulge PNe. Whether a PN belongs to the Halo or Bulge populations, according to our selection, is noted in Table 3. In Figure 5 we plot the distribution of the peculiar velocity vs. \(z\), where the colour indicates the distance from the Galactic center. We observe that in the area between the two grey lines (\(\pm 0.175\) kpc) PNe with high peculiar velocity are away from the Galactic center. It is worth noting that all Bulge PNe selected as in § 3.3 are outside the grey lines, and have intermediate \(V_{\rm pec}\).
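The Disk/Bulge/Halo tags listed in Table 3 then follow directly from the cuts just described; a minimal sketch, using the thresholds quoted above, could look like this.

```python
def classify_population(v_pec_kms, R_G_kpc, z_kpc, theta_arcsec):
    """Disk / Bulge / Halo tag from the kinematic and geometric cuts of Sect. 3.3."""
    if v_pec_kms > 220.0:                     # Toomre-diagram threshold (Bonaca et al. 2017)
        return "Halo"
    if abs(z_kpc) < 3.0 and 0.0 < R_G_kpc < 3.0 and theta_arcsec < 5.0:
        return "Bulge"                        # central 3x3 kpc, compact nebulae
    return "Disk"
```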
More insight into dating PN progenitors is derived by comparing the Gaia proper motions described in the previous subsection with the ages derived purely from the PN chemistry above. In Figure 6 we show the resulting distribution of \(V_{\rm pec}\) for the general PN population (which includes OPPNe and YPPNe, i.e., PNe with old and young progenitors respectively, as well as the intermediate-age progenitor PNe and those PNe whose age from chemistry was unavailable), and then the separate distributions of OPPNe and YPPNe. We find that the OPPNe distribution peaks at \(V_{\rm pec}\sim 60\) [km s\({}^{-1}\)], while the YPPNe distribution peaks at a smaller velocity (\(\sim 25-30\) [km s\({}^{-1}\)]). This plot shows the correlation between completely independent dating systems, from the chemistry and the kinematics of the PNe, giving us confidence in using the OPPNe and YPPNe classes.

### Galactic metallicity gradients and other metallicity variations

The radial metallicity gradients in this paper have been calculated by orthogonal distance regression, which makes use of the uncertainties in the measured quantities \(R_{\rm G}\) and log(O/H) + 12. The uncertainties in the elemental abundances are described in § 3.1, where we indicate which literature sources we chose the abundances from, and how errors are estimated both in the average and selected abundances (see Table 4). The uncertainties in \(R_{\rm G}\) are computed by first-order propagation of the symmetrized error of the distance scale, \(\sigma_{\rm D}=(\sigma_{\rm D^{-}}+\sigma_{\rm D^{+}})/2\), thus obtaining \(\sigma_{R_{\rm G}}=\left|D\cos^{2}b-R_{\odot}\cos l\cos b\right|\,\sigma_{\rm D}/R_{\rm G}\).

## 4 Discussion and Conclusions

First, we have recalibrated the statistical distance scale of Galactic PNe with DR3 parallaxes, obtaining \(<K>=0.964\) if we exclude from the comparison the PNe with \(\log M_{\rm i}<-2\), and \(<K>=0.931\) if we include all calibrators. Comparing these results with those of Paper I, where the best scale obtained with DR2 parallaxes gave \(<K>=0.948\) and \(<\sigma_{\rm K}>=0.25\), we conclude that the use of DR3 parallaxes as calibrators successfully improved upon the Galactic PN distance scale. We have calculated \(<K>\) based on DR3 parallaxes for the most commonly used PN distance scales, such as Frew et al. (2016) (hereafter FPB) and Stanghellini et al. (2008) (hereafter SSV). We found \(<K_{\rm FPB}>=<D_{\rm FPB}\times\varpi_{\rm c}>=1.272\) and \(<K_{\rm SSV}>=<D_{\rm SSV}\times\varpi_{\rm c}>=1.203\) with the exclusion of calibrators with \(\log M_{\rm i}<-2\), and \(<K_{\rm FPB}>=1.36\) and \(<K_{\rm SSV}>=1.29\) when including all calibrators. We conclude that the scale presented in Eq. 4 gives the most reliable distances to date, and the distances in Table 3 should be used, unless an accurate parallax measurement of the CS or PN under study is available, from DR3 or otherwise. Moreover, our analysis revealed that the \(\log R-\log S_{\rm H\beta}\) distance scale of Galactic PNe has an intrinsic mean dispersion of \(\approx 0.1\) dex around the linear relation, which can be accounted for by the variation of the PN ionized mass (excluding objects with \(\log M_{\rm i}<-2\)), bringing us to the conclusion that statistical distance scales such as this can hardly be improved by increasing the accuracy of the calibrators. Second, we used Gaia DR3 proper motions to obtain a thorough description of the peculiar velocities of the CSs, or the PNe, and to characterize the Halo population.
The peculiar velocity derivation has the additional advantage of being a completely independent assessment of the progenitor ages of the PNe, thus validating even further the OPPN and YPPN classes based on the elemental abundance ratios C/O and N/O. The proper motion and peculiar velocity analysis of § 3.2 provided a tool to exclude Halo PNe from the gradient analysis. Even more importantly, it gave us additional confidence in the progenitor dating from chemical analysis, as described in § 3.3. This is, to our knowledge, the first time that PN dating is shown to be consistent between two completely independent approaches, while in the past the dating scheme has used either approach, or both approaches for different age classes, such as in defining Peimbert's PN Types (Peimbert, 1978). Dating PN progenitors is essential to study the time evolution of the radial metallicity gradients in the Milky Way. Third, we measure radial oxygen gradients based on the DR3 distance scale, with Halo PNe selected through their kinematic properties measured by Gaia DR3, and progenitor ages from chemical evolutionary assessment, confirmed via the peculiar velocities determined by Gaia. The negative, shallow gradients that we determine, and the mild evolution of the gradients - steepening since the formation of the Galaxy - are a confirmation of past results both in the Galaxy (Stanghellini & Haywood, 2018) and in the nearby spiral galaxies examined (Pena & Flores-Duran, 2019; Magrini et al., 2016), where all emission-line probes in star-forming galaxies (including the Milky Way) indicate that the oxygen radial gradients are steeper in the younger populations. For the gradients we have used curated catalogs of published abundances, which we give in Tables 4 and 5. It is worth noting that, while our results confirm most of the metallicity gradient measurements based on gas-phase data published to date, recent cluster and stellar results show a more controversial picture (Anders et al., 2017; Magrini et al., 2023; Gaia Collaboration et al., 2023a). Stellar metallicities are mainly inferred from iron rather than oxygen; since iron atoms in PNe are mostly found in a condensed state (e.g. Delgado-Inglada et al., 2015), a direct comparison between the stellar and nebular samples is typically problematic. The work of Magrini et al. (2023) also includes \(\alpha\)-element gradients, indicating minimal oxygen gradient evolution, compatible with a flattening of gradients with time. Their oxygen gradient slopes of the age\(<\)1 Gyr and age\(>\)3 Gyr bins in their Table 10 are the same within the uncertainties. We underline that these results are not incompatible with ours, since we are comparing different age ranges: our PN progenitors are in the age\(<\)1 Gyr and age\(>\)7.5 Gyr bins, whereas their oldest clusters are within 7 Gyr. Furthermore, the migration, and the evolution due to inflow, of clusters and stars may differ in ways that are unpredictable when measuring radial gradients. Interestingly, Bhattacharya et al. (2022), who studied the radial oxygen gradients in M31, found that young, thin disk PNe describe a steeper radial gradient than the older, thick disk population, in qualitative agreement with our results, where radial metallicity gradients steepen with galaxy evolution. Quantitatively, in their Fig. 13 they attempt to unify the gradient slopes across galaxies, taking into account their scale lengths, and show the remarkable similarities between oxygen gradients in the Galaxy and M31.
Further inspection of Galactic gradients from data and modeling comes from the recent analysis of Lian et al. (2023), who use integrated radial metallicities to find (see their Fig. 1) that the radial Fe/H gradients of older populations are flatter than that of the young population, in agreement with our results. They also compared the Galactic gradient from the present-day stellar population with that obtained from gas-phase oxygen (H II regions), which is consistent with the PN abundance tracers, reinforcing the confidence in our findings. \begin{table} \begin{tabular}{l c c c} \hline Sample & N & slope & intercept \\ & & [dex kpc\({}^{-1}\)] & log(O/H)+12 \\ \hline Average abundances & & & \\ \hline All & 288 & -0.0144\(\pm\)0.00385 & 8.669\(\pm\)0.0307 \\ Excluding Halo PNe & 285 & -0.0141\(\pm\)0.00390 & 8.666\(\pm\)0.0314 \\ OPPNe & 186 & -0.0121\(\pm\)0.00465 & 8.660\(\pm\)0.0376 \\ YPPNe & 55 & -0.0220\(\pm\)0.00758 & 8.743\(\pm\)0.0601 \\ \hline Selected abundances & & & \\ \hline All & 288 & -0.0163\(\pm\)0.00385 & 8.672\(\pm\)0.0312 \\ Excluding Halo PNe & 285 & -0.0159\(\pm\)0.00392 & 8.668\(\pm\)0.0319 \\ OPPNe & 186 & -0.0132\(\pm\)0.00464 & 8.659\(\pm\)0.0373 \\ YPPNe & 55 & -0.0278\(\pm\)0.00789 & 8.759\(\pm\)0.0629 \\ \hline \end{tabular} \end{table} The distance scale that we presented here, and the distances in our catalog, will be used in the future to study the behaviour of dust and other elements in Galactic PNe in the framework of the chemical evolution of the Galaxy. ###### Acknowledgements. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)).
Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. B.B. acknowledges the support of the Italian Space Agency (ASI) through contract 2018-24-HH.0 and its addendum 2018-24-HH.1-2022 to the National Institute for Astrophysics (INAF). The authors would like to thank the anonymous referee whose comments and suggestions helped improve the original manuscript.
We are studying the population of Galactic planetary nebulae (PNe) and their central stars (CSs) through the analysis of their heliocentric distances and Galactic distribution. We obtain distances by means of a revised statistical scale based on an astrometrically-defined sample of CSs parallaxes from Gaia DR3 as calibrators. The statistical scale is applied to infer distances of a significant number (~850) of Galactic PNe for which we deliver a new catalog of PN distances. By adopting a circular velocity curve of the Galaxy, we also derive 3D peculiar velocities from DR3 proper motions and published radial velocities of a large sample (~300) of PN CSs. We date PN progenitors based on both the best chemical abundances culled from the literature and on CS kinematic properties, finding a confirmation of the first method with the second. The slope of the radial oxygen gradient of the Galactic Disk traced by the complete PNe sample amounts to -0.01
2309.11388
The achievement set of generalized multigeometric sequences
We study the topology of all possible subsums of the generalized multigeometric series $k_1f(x)+k_2f(x)+\dots+k_mf(x)+\dots + k_1f(x^n)+\dots+k_mf(x^n)+\dots,$ where $k_1, k_2, \dots, k_m$ are fixed positive real numbers and $f$ runs along a certain class of non-negative functions on the unit interval. We detect particular regions of this interval for which this achievement set is, respectively, a compact interval, a Cantor set and a Cantorval.
Dmytro Karvatskyi, Aniceto Murillo, Antonio Viruel
2023-09-20T15:14:57
http://arxiv.org/abs/2309.11388v1
# The Achievement set of generalized multigeometric sequences ###### Abstract. We study the topology of all possible subsums of the _generalized multigeometric series_\(k_{1}f(x)+k_{2}f(x)+\cdots+k_{m}f(x)+\cdots+k_{1}f(x^{n})+\cdots+k_{m}f(x^{n})+\ldots\), where \(k_{1},k_{2},\ldots,k_{m}\) are fixed positive real numbers and \(f\) runs along a certain class of non-negative functions on the unit interval. We detect particular regions of this interval for which this achievement set is, respectively, a compact interval, a Cantor set and a Cantorval. Key words and phrases:Subsums set, achievement set, Cantor set, Cantorval, multigeometric series 2020 Mathematics Subject Classification: 40A05, 11B05, 28A80 The first author was supported by the University of Malaga grant D.3-2023 for research stays of renowned Ukrainian scientists. The second and third authors were partially supported by the MINECO grant PID2020-118753GB-I00 of the AEI of the Spanish Government. ## 1. Introduction We consider the following **Proposition 2.3**.: _Let \(f\in\mathcal{C}^{r+1}\big{(}[0,1)\big{)}\) such that \(f^{(i)}(0)=0\) for \(0\leq i<r\) and \(f^{(r)}(0)>0\). Then \(f\in\mathcal{M}\)._ Proof.: With \(f\) as in the statement there exists \(\epsilon\in(0,1)\) such that \(f^{(r)}([0,\epsilon])\subset\mathbb{R}^{+}\), and therefore \(f(x)\) is monotone increasing in \([0,\epsilon]\). On the other hand, define \[a=\frac{1}{r!}\min\big{\{}f^{(r)}(\zeta),\,\zeta\in[0,\epsilon]\big{\}}\quad\text{ and }\quad b=\frac{1}{r!}\max\big{\{}f^{(r)}(\zeta),\,\zeta\in[0,\epsilon]\big{\}},\] and consider, for any \(\lambda\in\mathbb{R}\), the function \(h_{\lambda}(x)=f(x)-\lambda x^{r}\). We then use the \((r-1)\)th Taylor approximation together with the error formula of \(h_{\lambda}\) at \(0\) to conclude that \[h_{\lambda}(x)=\left(\frac{f^{(r)}(\zeta)}{r!}-\lambda\right)x^{r}\quad\text{ for some }\quad\zeta\in[0,x). \tag{2.4}\] In particular, for any \(x\in[0,\epsilon]\), \(h_{a}(x)=f(x)-ax^{r}\geq 0\) while \(h_{b}(x)=f(x)-bx^{r}\leq 0\), that is, \(ax^{r}\leq f(x)\leq bx^{r}\) on \([0,\epsilon]\). **Example 2.5**.: (1) Note that the identity \(f(x)=x\) is trivially in \(\mathcal{M}\) by choosing \(a=b=r=1\) and any \(\epsilon\). (2) By the well-known Jordan inequality, \[\frac{2x}{\pi}\leq\sin x\leq x,\quad|x|\leq\frac{\pi}{2},\] we see that the function \(f(x)=\sin x\) is in \(\mathcal{M}\) by choosing \(a=\frac{2}{\pi}\), \(b=1\) and \(\epsilon=\frac{\pi}{2}\). Nevertheless, as \(f^{\prime}(x)=\cos x\), Proposition 2.3 provides \(a=\cos 1\), \(b=r=\epsilon=1\). The same applies, for instance, to the function \(f(x)=\tan x\) choosing \(r=\epsilon=1\): \[x\leq\tan x\leq\frac{x}{\cos^{2}(1)},\quad x\in[0,1].\] (3) Consider the function \(f(x)=x\cdot\ln(x+1)\) in which \(f(0)=f^{\prime}(0)=0\) and \(f^{\prime\prime}(x)=\frac{x+2}{(x+1)^{2}}>0\) for \(x\in[0,1]\). Then \(f\in\mathcal{M}\) choosing \(r=2\), \(\epsilon=1\), \(a=\frac{3}{8}\) and \(b=1\). Another example covered by Proposition 2.3 and providing \(r=2\) is, for instance, \(f(x)=e^{x}-x-1\). Here \(\epsilon=1\), \(a=\frac{1}{2}\) and \(b=\frac{e}{2}\). We also need: **Lemma 2.6**.: _Let \(f\in\mathcal{M}\) and let \(k_{1}\geq k_{2}\geq\cdots\geq k_{m}>0\) be positive scalars. Then, the associated generalized multigeometric series \(\sum_{n\geq 1}w_{n}(x)\) is convergent for any \(x\in[0,\epsilon]\).
Moreover, \(w_{n}(x)\geq w_{n+1}(x)\) for any \(n\in\mathbb{N}\) and any \(x\in[0,\epsilon]\)._ Proof.: The second assertion trivially holds: as \(f\) is monotone increasing in \([0,\epsilon]\) it follows that, for any \(x\) in this interval, \(f(x^{n})>f(x^{n+1})\). Hence, as \(k_{1}\geq k_{2}\geq\cdots\geq k_{m}\), it follows that \(w_{n}(x)>w_{n+1}(x)\). On the other hand, if we write \(K=\sum_{i=1}^{m}k_{i}\), we deduce that \[\sum_{n=1}^{\infty}w_{n}(x)\leq b\cdot\sum_{n=1}^{\infty}Kx^{nr}=\frac{bKx^{r }}{1-x^{r}},\quad x\in[0,\epsilon],\] and therefore, this series converges since \(\epsilon\in(0,1)\) In what follows we fix an arbitrary function \(f\in\mathcal{M}\), thus constants \(\epsilon\in(0,1]\), and \(a,b,r\in\mathbb{R}^{+}\) are those given in Definition 2.2. We choose positive real numbers \(k_{1}\geq k_{2}\geq\cdots\geq k_{m}>0\), and consider the associated generalized multigeometric series \(\sum_{n\geq 1}w_{n}(x)\) for \(x\in[0,\epsilon)\). We also fix the following notation: \[K=\sum_{i=1}^{m}k_{i},\quad U_{j}=\sum_{i=j+1}^{m}k_{i},\quad L_{j}=\sum_{i=1 }^{j}k_{i}=K-U_{j},\] Also, for any series \(\sum_{n\geq 1}z_{n}\) and any \(\ell\geq 1\) we denote by \(Z_{\ell}=\sum_{n>\ell}z_{n}\) the \(\ell\)th _tail_ of the series. In particular, we write \(W_{\ell}(x)=\sum_{n>\ell}w_{n}(x)\). On the other hand, we will strongly use the following foundational result of Kakeya, rediscovered and extended by Hornich: **Theorem 2.7**.: _[_6, 8_]_ _Let \(\sum_{n\geq 1}z_{n}\) be a convergent positive series with non-increasing terms, i.e., \(z_{n}\geq z_{n+1}\) for any \(n\in\mathbb{N}\). Then, the achievement set \(E(z_{n})\) is:_ * _a finite union of bounded closed intervals if and only if_ \(z_{n}\leq Z_{n}\) _for all but finitely many_ \(n\in\mathbb{N}\)_;_ * _a compact interval if and only if_ \(z_{n}\leq Z_{n}\) _for every_ \(n\in\mathbb{N}\)_;_ * _homeomorphic to the Cantor set if_ \(z_{n}>Z_{n}\) _for all but finitely many_ \(n\in\mathbb{N}\)_._ With the notation above define \[d_{I}=\sqrt[r]{\max_{1\leq j\leq m}\Big{\{}\frac{bk_{j}-aU_{j}}{bk_{j}+aL_{j}} \Big{\}}}.\] Our first result extends and refines [14, Theorem 3.1]: **Theorem 2.8**.: _Whenever \(d_{I}\leq\epsilon\), the achievement set \(E\big{(}w_{n}(x)\big{)}\) is a compact interval for any \(x\in[d_{I},\epsilon]\)._ Proof.: According to the Theorem 2.7.(ii), it is enough to show that \(w_{n}(x)\leq W_{n}(x)\) for every \(n\in\mathbb{N}\) and any \(x\in[d_{I},\epsilon)\). Since \(f\in\mathcal{M}\), \[w_{n}(x)=k_{g(n)}f(x^{\lfloor\frac{m+n-1}{m}\rfloor})\leq b\cdot k_{g(n)}x^{ \lfloor\frac{m+n-1}{m}\rfloor r},\] while \[W_{n}(x)=\sum_{\ell>n}w_{\ell}(x)\geq a\cdot\sum_{\ell>n}k_{g(\ell)}x^{ \lfloor\frac{m+\ell-1}{m}\rfloor r}=a\cdot x^{\lfloor\frac{m+n-1}{m}\rfloor r }\Big{(}U_{g(n)}+K\frac{x^{r}}{1-x^{r}}\Big{)},\] for \(n\in\mathbb{N}\) and \(x\in[0,\epsilon]\). Therefore \(w_{n}(x)\leq W_{n}(x)\) whenever \[b\cdot k_{g(n)}x^{\lfloor\frac{m+n-1}{m}\rfloor r}\leq a\cdot x^{\lfloor\frac{ m+n-1}{m}\rfloor r}\Big{(}U_{g(n)}+K\frac{x^{r}}{1-x^{r}}\Big{)}.\] For \(x=0\) this trivially holds and, for \(x>0\) this is the case when \[b\cdot k_{g(n)}\leq a\cdot\Big{(}U_{g(n)}+K\frac{x^{r}}{1-x^{r}}\Big{)},\] that is, whenever \[x\geq\sqrt[r]{\frac{bk_{g(n)}-aU_{g(n)}}{bk_{g(n)}+aL_{g(n)}}}\] for all \(n\). As the function \(g(n)\) takes the values \(1,2,\ldots,m\), this happens for all \(x\in[d_{I},\epsilon)\). 
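As a quick numerical illustration (our own, not part of the paper), the snippet below evaluates the threshold \(d_{I}\) of Theorem 2.8 and the coarser, \(k\)-independent bound \(d_{IM}\) introduced just below in Corollary 2.9, for \(f(x)=\sin x\) with the constants \(a=\cos 1\), \(b=r=\epsilon=1\) obtained in Example 2.5(2) and an arbitrary weight vector \(k=(2,1,1)\).

```python
import math

def d_I(k, a, b, r):
    """Theorem 2.8 threshold: r-th root of max_j (b*k_j - a*U_j)/(b*k_j + a*L_j)."""
    K, L, best = sum(k), 0.0, -math.inf
    for kj in k:
        L += kj               # L_j = k_1 + ... + k_j
        U = K - L             # U_j = k_{j+1} + ... + k_m
        best = max(best, (b * kj - a * U) / (b * kj + a * L))
    return best ** (1.0 / r)  # best > 0 always, since the j = m term is positive

def d_IM(a, b, r):
    """Corollary 2.9 threshold: r-th root of b/(a+b)."""
    return (b / (a + b)) ** (1.0 / r)

k, a, b, r = [2.0, 1.0, 1.0], math.cos(1.0), 1.0, 1
print(f"d_I  = {d_I(k, a, b, r):.4f}")   # E(w_n(x)) is a compact interval for x >= d_I
print(f"d_IM = {d_IM(a, b, r):.4f}")     # the coarser bound of Corollary 2.9
```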
As a consequence, denoting \[d_{IM}:=\sqrt[r]{\frac{b}{b+a}},\] we obtain: **Corollary 2.9**.: _Whenever \(d_{IM}\leq\epsilon\), the achievement set \(E\big{(}w_{n}(x)\big{)}\) is a compact interval for any \(x\in[d_{IM},\epsilon]\)._ Proof.: Simply note that \[d_{I}=\sqrt[r]{\max_{1\leq j\leq m}\Big{\{}\frac{bk_{j}-aU_{j}}{bk_{j}+aL_{j}}\Big{\}}}=\sqrt[r]{\max_{1\leq j\leq m}\Big{\{}\frac{b-aU_{j}/k_{j}}{b+aL_{j}/k_{j}}\Big{\}}}\leq\sqrt[r]{\frac{b}{a+b}}=d_{IM}\] and apply Theorem 2.8. On the other hand, denoting \[d_{NI}=\sqrt[r]{\frac{ak_{m}}{bK+ak_{m}}},\] we prove the following that extends [14, Theorem 3.3], which in turn is inspired by [3, Theorem 2.1] generalized in [4, Theorem 2.2(ii)]: **Theorem 2.10**.: _The achievement set \(E\big{(}w_{n}(x)\big{)}\) is not a finite union of closed bounded intervals for \(0<x<\min\{\epsilon,d_{NI}\}\)._ Proof.: According to Theorem 2.7.(i) it is sufficient to show that \(w_{\ell m}(x)>W_{\ell m}(x)\) for any \(\ell\in\mathbb{N}\) and \(0<x<\min\{\epsilon,d_{NI}\}\). Observe that, for any \(x\in(0,\epsilon)\), \[w_{\ell m}(x)=k_{m}f(x^{\ell})\geq a\cdot k_{m}x^{\ell r},\] while \[W_{\ell m}(x)=K\cdot\sum_{j>\ell}f(x^{j})\leq b\cdot\frac{Kx^{(\ell+1)r}}{1-x^{r}}.\] Therefore \(w_{\ell m}(x)>W_{\ell m}(x)\) as long as \[W_{\ell m}\leq b\cdot\frac{Kx^{(\ell+1)r}}{1-x^{r}}<a\cdot k_{m}x^{\ell r}\leq w_{\ell m}.\] That is, whenever \[x<\sqrt[r]{\frac{ak_{m}}{bK+ak_{m}}}=d_{NI}.\] The next result extends [3, Theorem 2.1], [4, Theorem 2.2(i)] and [14, Theorem 3.4]. We follow a strategy similar to the proofs of these references. **Theorem 2.11**.: _Choose \(\lambda,\mu\in\mathbb{R}^{+}\) and \(s\in\mathbb{N}\) such that every number \(\mu,\mu+\lambda,\mu+2\lambda,\ldots,\mu+s\lambda\) is a subsum of the (finite) series \(\sum_{i=1}^{m}k_{i}\), and write_ \[d_{CI}=\sqrt[r]{\frac{b}{s\cdot a+b}}\,.\] _Then, whenever \(d_{CI}<\epsilon\), \(E\big{(}w_{n}(x)\big{)}\) contains a compact interval for any \(x\in[d_{CI},\epsilon)\)._ Proof.: Let us define \(\overline{k}_{1}=\overline{k}_{2}=\ldots=\overline{k}_{s}=\lambda>0\), and consider the convergent positive series \(\sum_{n\geq 1}\overline{w}_{n}(x)\) where \[\overline{w}_{n}(x)=\overline{k}_{\overline{g}(n)}f(x^{\lfloor\frac{n+s-1}{s}\rfloor}),\] with \(\overline{g}(n)=1+\big{(}(n-1)\mod s\big{)}\). Then, according to Theorem 2.8, \(E\big{(}\overline{w}_{n}\big{)}\) is a compact interval for all \(x\in[d_{I},\epsilon)\) where now \[d_{I} =\sqrt[r]{\max_{1\leq j\leq s}\Big{\{}\frac{b\overline{k}_{j}-aU_{j}}{b\overline{k}_{j}+aL_{j}}\Big{\}}}\] \[=\sqrt[r]{\max_{1\leq j\leq s}\Big{\{}\frac{b\lambda-a\lambda(s-j)}{b\lambda+a\lambda j}\Big{\}}}\] \[=\sqrt[r]{\max_{1\leq j\leq s}\Big{\{}1-\frac{as}{aj+b}\Big{\}}}\] \[=\sqrt[r]{1-\frac{as}{as+b}}\] \[=\sqrt[r]{\frac{b}{b+sa}}\,.\] We finish the proof by showing that the interval \[\{\sum_{n=1}^{\infty}\mu f(x^{n})\}+E\big{(}\overline{w}_{n}(x)\big{)}\] is contained in \(E\big{(}w_{n}(x)\big{)}\). Indeed, if \(z\in\{\sum_{n=1}^{\infty}\mu f(x^{n})\}+E\big{(}\overline{w}_{n}(x)\big{)}\), write \[z=\sum_{n=1}^{\infty}\big{(}\mu+s_{n}\lambda\big{)}f(x^{n}),\] where \(s_{n}\in\{0,1,\ldots,s\}\). Therefore, there exist \(c_{n,i}\in\{0,1\}\) such that \(\mu+s_{n}\lambda=\sum_{i=1}^{m}c_{n,i}k_{i}\), and thus \[z=\sum_{n=1}^{\infty}\big{(}\sum_{i=1}^{m}c_{n,i}k_{i}\big{)}f(x^{n})\in E(w_{n}).\] The following recovers [4, Theorem 2.2(iii)] and generalizes [14, Corollary 3.5].
Under the hypothesis of the previous theorem we have: **Corollary 2.12**.: _Whenever \(d_{CI}<d_{NI}\), the achievement set \(E\big{(}w_{n}(x)\big{)}\) is a Cantorval for any \(x\in[d_{CI},d_{NI})\)._ Proof.: Let \(x\in[d_{CI},d_{NI})\). Then: 1. According to Theorem 2.11, if \(x\geq\sqrt[r]{\frac{b}{sa+b}}\), then \(E\big{(}w_{n}(x)\big{)}\) contains an interval. 2. An analogous argument to the one in the proof of Theorem 2.10 shows that if \(x<\sqrt[r]{\frac{ak_{m}}{ak_{m}+bK}}\), then \(w_{mi}(x)>W_{mi}(x)\) for every \(i\in\mathbb{N}\), and hence \(E\big{(}w_{n}(x)\big{)}\) cannot be a finite union of closed and bounded intervals. Therefore, by the trichotomy for achievement sets of convergent positive series (a finite union of intervals, a Cantor set, or a Cantorval), \(E\big{(}w_{n}(x)\big{)}\) must be a Cantorval. _Remark 2.13_.: Observe that \(d_{CI}<d_{NI}\) only if \[b<a\sqrt{\frac{sk_{m}}{K}}.\] Therefore, as \(a\leq b\), \[1<\frac{sk_{m}}{K}\] is a necessary condition for Corollary 2.12. Finally, let \[d_{C}=\sqrt[r]{\min_{1\leq j\leq m}\Big{\{}\frac{ak_{j}-bU_{j}}{ak_{j}+bL_{j}}\Big{\}}}.\] Our last result extends and refines [14, Theorem 3.7]: **Theorem 2.14**.: _Whenever \(d_{C}>0\), the achievement set \(E\big{(}w_{n}(x)\big{)}\) is homeomorphic to the Cantor set for \(0<x<\min\{\epsilon,d_{C}\}\)._ Proof.: According to Theorem 2.7.(iii), it is enough to show that \(w_{n}(x)>W_{n}(x)\) for all but finitely many \(n\in\mathbb{N}\) and \(x\in[0,d_{C}]\). Observe that, for any \(x\in(0,\epsilon)\), \[w_{n}(x)=k_{g(n)}f(x^{\lfloor\frac{m+n-1}{m}\rfloor})\geq a\cdot k_{g(n)}\big{(}x^{\lfloor\frac{m+n-1}{m}\rfloor r}\big{)}\] while \[W_{n}(x)=\sum_{\ell>n}w_{\ell}\leq b\cdot\sum_{\ell>n}k_{g(\ell)}\big{(}x^{\lfloor\frac{m+\ell-1}{m}\rfloor r}\big{)}=b\cdot x^{\lfloor\frac{m+n-1}{m}\rfloor r}\Big{(}U_{g(n)}+\frac{Kx^{r}}{1-x^{r}}\Big{)}.\] Therefore, \(w_{n}(x)>W_{n}(x)\) as long as \[a\cdot k_{g(n)}x^{\lfloor\frac{m+n-1}{m}\rfloor r}>b\cdot x^{\lfloor\frac{m+n-1}{m}\rfloor r}\Big{(}U_{g(n)}+\frac{Kx^{r}}{1-x^{r}}\Big{)},\] and since \(x>0\), whenever \[a\cdot k_{g(n)}>b\cdot\Big{(}U_{g(n)}+\frac{Kx^{r}}{1-x^{r}}\Big{)}.\] That is, when \[x<\sqrt[r]{\frac{ak_{g(n)}-bU_{g(n)}}{ak_{g(n)}+bL_{g(n)}}}\] for all \(n\) and the theorem follows.
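To see how the thresholds interact on a concrete instance, the sketch below (an illustration of ours, not an example from the paper) takes \(f(x)=x\), so that \(a=b=r=1\) by Example 2.5(1), weights \(k=(4,3,2)\), and \(\mu=2\), \(\lambda=1\), \(s=5\) in Theorem 2.11 (the numbers \(2,3,\ldots,7\) are all subsums of \(4+3+2\)); it prints the regions where Theorem 2.8, Theorem 2.10 and Corollary 2.12 apply.

```python
k = [4.0, 3.0, 2.0]      # weights k_1 >= k_2 >= k_3 > 0
a = b = 1.0              # f(x) = x is in M with a = b = r = 1 (Example 2.5(1))
r = 1
K = sum(k)

ratios_I, ratios_C, L = [], [], 0.0
for kj in k:
    L += kj              # L_j
    U = K - L            # U_j
    ratios_I.append((b * kj - a * U) / (b * kj + a * L))
    ratios_C.append((a * kj - b * U) / (a * kj + b * L))

d_I = max(ratios_I) ** (1 / r)                       # Theorem 2.8 (compact interval)
d_NI = (a * k[-1] / (b * K + a * k[-1])) ** (1 / r)  # Theorem 2.10 (no finite union)
s = 5                                                # mu = 2, lambda = 1: 2,...,7 are subsums
d_CI = (b / (s * a + b)) ** (1 / r)                  # Theorem 2.11 (contains an interval)
min_ratio_C = min(ratios_C)                          # quantity under the root in d_C

print(f"compact interval for x in [{d_I:.4f}, eps] for every eps < 1")      # 2/11
print(f"Cantorval (Cor. 2.12) for x in [{d_CI:.4f}, {d_NI:.4f})")           # [1/6, 2/11)
print(f"min ratio for d_C = {min_ratio_C:.4f} (not positive: Thm 2.14 silent here)")
```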
``` We study the topology of the subsums of the generalized multigeometric series k_1f(x) + k_2f(x) + ... + k_mf(x) + ... + k_1f(x^n) + ... + k_mf(x^n) + ... ```
2309.08801
Relaxations and Duality for Multiobjective Integer Programming
Multiobjective integer programs (MOIPs) simultaneously optimize multiple objective functions over a set of linear constraints and integer variables. In this paper, we present continuous, convex hull and Lagrangian relaxations for MOIPs and examine the relationship among them. The convex hull relaxation is tight at supported solutions, i.e., those that can be derived via a weighted-sum scalarization of the MOIP. At unsupported solutions, the convex hull relaxation is not tight and a Lagrangian relaxation may provide a tighter bound. Using the Lagrangian relaxation, we define a Lagrangian dual of an MOIP that satisfies weak duality and is strong at supported solutions under certain conditions on the primal feasible region. We include a numerical experiment to illustrate that bound sets obtained via Lagrangian duality may yield tighter bounds than those from a convex hull relaxation. Subsequently, we generalize the integer programming value function to MOIPs and use its properties to motivate a set-valued superadditive dual that is strong at supported solutions. We also define a simpler vector-valued superadditive dual that exhibits weak duality but is strongly dual if and only if the primal has a unique nondominated point.
Alex Dunbar, Saumya Sinha, Andrew J Schaefer
2023-09-15T22:50:20
http://arxiv.org/abs/2309.08801v1
# Relaxations and Duality for Multiobjective Integer Programming ###### Abstract Multiobjective integer programs (MOIPs) simultaneously optimize multiple objective functions over a set of linear constraints and integer variables. In this paper, we present continuous, convex hull and Lagrangian relaxations for MOIPs and examine the relationship among them. The convex hull relaxation is tight at supported solutions, i.e., those that can be derived via a weighted-sum scalarization of the MOIP. At unsupported solutions, the convex hull relaxation is not tight and a Lagrangian relaxation may provide a tighter bound. Using the Lagrangian relaxation, we define a Lagrangian dual of an MOIP that satisfies weak duality and is strong at supported solutions under certain conditions on the primal feasible region. We include a numerical experiment to illustrate that bound sets obtained via Lagrangian duality may yield tighter bounds than those from a convex hull relaxation. Subsequently, we generalize the integer programming value function to MOIPs and use its properties to motivate a set-valued superadditive dual that is strong at supported solutions. We also define a simpler vector-valued superadditive dual that exhibits weak duality but is strongly dual if and only if the primal has a unique nondominated point. **Keywords: Multiobjective optimization, integer programming, Lagrangian duality, superadditive duality** ## 1 Introduction Multiobjective optimization problems optimize multiple objective functions over a common set of constraints. When the constraints and objective functions are linear and the variables are constrained to be integers, the resulting problem is called a multiobjective integer (linear) program (MOIP); see [7] for a detailed introduction. In this paper, we present relaxations and dual formulations for MOIPs. Duality is an extensively studied area of optimization that is used in both theoretical analysis and the development of solution methods. The fundamental idea in duality is to formulate a related (dual) optimization problem that can provide bounds on the original (primal) problem. For single-objective integer programs (IPs), several notions of duality have been proposed in the literature [11, 18, 25, 48, 49]. Gale, Kuhn, and Tucker first studied multiobjective linear programming (MOLP) duality in the 1950s [13], and the subject continued to receive attention in subsequent decades [5, 19, 22, 23, 24, 27, 31, 43]. Recent work on MOIPs has explored bound sets that use relaxations or variations of the original problem to bound nondominated points in the objective space [3, 4, 9, 10, 35, 36]. Klamroth et al. [30] use IP duality to derive bounds for the multiobjective problem. To our knowledge, however, a multiobjective duality framework for MOIPs has not been explored in the literature. In this paper, we extend Lagrangian and superadditive duality from single-objective IPs to the multiobjective case. The paper is organized as follows: in Section 1, we formally define an MOIP and some related concepts of vector and set comparison. We also provide a brief review of duality for IPs and MOLPs, and an overview of bound sets. In Section 2, we present the Lagrangian relaxation of an MOIP and compare it with the continuous and convex hull relaxations. In Section 3, we extend Lagrangian duality to the multiobjective case and illustrate the performance of Lagrangian dual bound sets via a numerical experiment. In Section 4, we develop superadditive duals for MOIPs. 
We present concluding remarks in Section 5. In single-objective optimization problems, the objective function is a scalar-valued function whose maximum over the feasible region is easily defined. In contrast, a \(k\)-objective optimization problem has a vector-valued objective function that maps points in the feasible region to vectors in \(\mathbb{R}^{k}\). In this case, the notion of maximization is ambiguous as there may be several objective values that are mutually incomparable. Therefore, we employ the notion of Pareto efficiency and nondominance [2, 7, 34, 45]. We first introduce some notation for the usual element-wise order on \(\mathbb{R}^{k}\). **Definition 1**.: _For \(x,y\in\mathbb{R}^{k}\), define_ 1. \(x<y\) _if_ \(y-x\) _has all positive components,_ 2. \(x\leqq y\) _if_ \(y-x\) _has all nonnegative components, and_ 3. \(x\leq y\) _if_ \(x\leqq y\) _but_ \(x\neq y\) **Definition 2** (Nondominance).: _Given a set \(S\subseteq\mathbb{R}^{k}\), \(s\in S\) is said to be nondominated (from above) if there does not exist \(t\in S\) such that \(s\leq t\)._ A set may have multiple nondominated elements that are mutually incomparable, and we denote the set of all such elements by \(\mathrm{Max}(S)\). The set \(\mathrm{Min}(S)\) is analogously defined by reversing the inequality in Definition 2. Under this definition, solving a multiobjective maximization problem amounts to finding the nondominated points of the set of feasible objective function values. An MOIP is defined via linear constraints and objective functions. Let \(A\in\mathbb{R}^{m\times n}\) be the constraint matrix with right-hand-side \(b\in\mathbb{R}^{m}\). Let \(C\in\mathbb{R}^{k\times n}\) be the cost-matrix whose \(i\)-th row comprises the coefficients of the \(i\)-th (linear) objective function. An MOIP with these \(k\) objectives is defined as \[\max\ Cx\] (MOIP) \[\mathrm{s.t.}\ Ax\leqq b,\] \[\ x\in\mathbb{Z}_{\geqq}^{n},\] where \(\mathbb{Z}_{\geqq}^{n}\) is the set of integer vectors with nonnegative components. The set of positive integer vectors is denoted by \(\mathbb{Z}_{>}^{n}\); \(\mathbb{R}_{\geqq}^{n}\) and \(\mathbb{R}_{>}^{n}\) are defined similarly. Let \(\mathcal{X}\) denote the feasible region of (MOIP), and \(\mathcal{Y}=\{Cx\ |\ x\in\mathcal{X}\}\) be the set of feasible objective values. The set \(\mathcal{X}\) is an integer polyhedron just as in single-objective IP, while \(\mathcal{Y}\) is a subset of \(\mathbb{R}^{k}\). Solving (MOIP) amounts to finding the set of nondominated points \(\mathrm{Max}(\mathcal{Y})\), known as its _nondominated set_. Feasible solutions corresponding to these points are called _efficient solutions_. 
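For intuition, the following brute-force sketch (ours; the instance reuses the data of Example 1 later in the paper) enumerates the feasible points of a tiny bounded MOIP and extracts \(\mathrm{Max}(\mathcal{Y})\) under the componentwise order of Definitions 1 and 2.

```python
import itertools
import numpy as np

C = np.array([[1.0, -0.5], [-0.5, 1.0]])   # two objectives, two variables
A = np.array([[1.0, 1.0]])
b = np.array([1.5])

def dominates(u, v):
    """u dominates v, i.e. v <= u in the notation of Definition 1."""
    return np.all(u >= v) and np.any(u > v)

# Feasible objective vectors of the MOIP: x in {0,1}^2 with Ax <= b.
Y = [C @ np.array(x) for x in itertools.product(range(2), repeat=2)
     if np.all(A @ np.array(x) <= b)]

max_Y = [y for y in Y if not any(dominates(z, y) for z in Y)]
print(np.array(max_Y))   # Max(Y): here (0,0), (1,-0.5), (-0.5,1) are mutually nondominated
```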
**Definition 3** (Supported Solution).: _An efficient solution \(x^{*}\) is said to be a supported solution if there exists a scalarizing vector \(\mu\in\mathbb{R}_{>}^{k}\) such that \(x^{*}\) is an optimal solution to the scalarized problem \(\max\ \{\mu^{\top}Cx\ |\ Ax\leqq b,x\in\mathbb{Z}_{\geqq}^{n}\}\)._ **Remark 1**.: _Henceforth, we use the term "scalarization" to refer to the weighted-sum scalarization \(\max\ \{\mu^{\top}Cx\ |\ Ax\leqq b,x\in\mathbb{Z}_{\geqq}^{n}\}\), where \(\mu\in\mathbb{R}_{>}^{k}\)._ **Lemma 1** ([26]).: _A point \(x^{*}\) is an efficient solution to the MOLP \(\max\ \{Cx\ |\ Ax\leqq b,x\in\mathbb{R}_{\geqq}^{n}\}\) if and only if there is a vector \(\mu\in\mathbb{R}_{>}^{k}\) such that \(x^{*}\) is an optimal solution to the scalarized problem \(\max\ \{\mu^{\top}Cx\ |\ Ax\leqq b,x\in\mathbb{R}_{\geqq}^{n}\}\)._ Geoffrion [17] proves a general statement about scalarization for multiobjective concave functions over a convex set. Lemma 1 applies this to MOLPs, asserting that finding efficient solutions to an MOLP is equivalent to finding optimal solutions to its scalarizations. This, however, is not true for the integer case - while optimal solutions to the scalarized MOIP are efficient for the original problem, not all efficient solutions of the MOIP can be recovered in this manner [8, 45]. ### Extended Power Set and Set Comparison When the set of objective values \(\mathcal{Y}\) is nonempty and bounded above, \(\mathrm{Max}(\mathcal{Y})\) is a well-defined subset of \(\mathbb{R}^{k}\). However, this definition is ambiguous if \(\mathcal{Y}\) is empty or unbounded above. To distinguish between the two cases, we define an extended power set of \(\mathbb{R}^{k}\) (analogous to the extended real line) that contains two additional "sets" \(\pm M_{\infty}\). Define a set \(S\subseteq\mathbb{R}^{k}\) to be bounded above if there exists \(z\in\mathbb{R}^{k}\) such that \(s\leq z\) for all \(s\in S\). Suppose \(\mathcal{Y}\) is nonempty and not bounded above. Then, we say that the maximization problem is unbounded and denote \(\mathrm{Max}(\mathcal{Y})=M_{\infty}\). On the other hand, if \(\mathcal{Y}=\emptyset\), which corresponds to the problem being infeasible, we define \(\mathrm{Max}(\mathcal{Y})=-M_{\infty}\). The roles of \(\pm M_{\infty}\) are reversed in case of minimization. Thus, \(\pm M_{\infty}\) are analogous to \(\pm\infty\) in single-objective optimization. These definitions extend naturally to any class of multiobjective optimization problems whose sets of feasible objective values are closed. Unlike scalar optimization problems whose optimal values can be directly compared, nondominated sets for multiobjective problems are collections of vectors in \(\mathbb{R}^{k}\). Several authors [4, 10, 32, 40] have proposed set-orderings to compare them, especially in the context of bound sets (see Section 1.4). We employ the following set-ordering proposed by Ehrgott and Gandibleux [9]. **Definition 4** (Pareto Set-Ordering [9]).: _Let \(S,T\subseteq\mathbb{R}^{k}\) be nonempty. Define \(S\unlneq T\) if_ 1. _for every_ \(s\in S\)_, there exists_ \(t\in T\) _such that_ \(s\unlneqq t\)_, and_ 2. _for every_ \(s\in S\) _and_ \(t\in T\)_,_ \(t\not\leq s\)_._ In other words, every element of \(S\) is dominated by an element of \(T\), but no element of \(S\) dominates an element of \(T\) (except possibly itself). This ordering extends to \(\pm M_{\infty}\) (see Appendix A for details). 
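The set-ordering of Definition 4 is easy to check mechanically for finite sets; the sketch below (ours) transcribes its two conditions directly and evaluates them on the pair of sets that reappears in Remark 2 below.

```python
import numpy as np

def leqq(s, t):
    """s <= t componentwise (Definition 1, item 2)."""
    return np.all(s <= t)

def leq(s, t):
    """s <= t componentwise with s != t (Definition 1, item 3)."""
    return np.all(s <= t) and np.any(s < t)

def pareto_set_leq(S, T):
    """S is below T in the Pareto set-ordering of Definition 4."""
    every_s_covered = all(any(leqq(s, t) for t in T) for s in S)
    no_t_dominated = all(not leq(t, s) for s in S for t in T)
    return every_s_covered and no_t_dominated

S = [np.array([0.0, 0.0])]
T = [np.array([-1.0, 1.0]), np.array([2.0, 2.0]), np.array([1.0, -1.0])]
print(pareto_set_leq(S, T))   # True
```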
We then consider the following subset of the extended power set of \(\mathbb{R}^{k}\). \[\mathcal{E}=\{S\subseteq\mathbb{R}^{k}\ |\ S\neq\emptyset,\ \text{if}\ s,t\in S,\ \text{then}\ s\not\leq t\ \text{and}\ t\not\leq s\}\cup\{-M_{\infty},M_{\infty}\}. \tag{1}\] When \(k=1\), \(\mathcal{E}\) is the collection of singleton subsets of the extended real line, and the set-ordering "\(\unlneqq\)" coincides with the standard ordering on \([-\infty,\infty]\). Proposition 1 implies that nondominated sets for the multiobjective optimization problems we consider in this paper will always belong to the family \(\mathcal{E}\). Proposition 2 asserts that the relation "\(\mathop{\preceq}\limits^{\sim}\)" defines a partial order on \(\mathcal{E}\). Together, these results will allow us to use this set-ordering to compare the nondominated sets of multiobjective optimization problems. Proofs of results from this subsection are provided in Appendix A. **Proposition 1**.: _Let \(S\subseteq\mathbb{R}^{k}\) be nonempty. If \(S\) has points that are nondominated from above, then \(\operatorname{Max}(S)\in\mathcal{E}\). If \(S\) has points that are nondominated from below, then \(\operatorname{Min}(S)\in\mathcal{E}\)._ **Proposition 2**.: _The relation \(\mathop{\preceq}\limits\) defines a partial order on \(\mathcal{E}\)._ The following properties of the set-ordering will be used in later proofs. **Lemma 2**.: _Let \(T\subseteq S\subseteq\mathbb{R}^{k}\) be nonempty sets, and let \(U\in\mathcal{E}\) such that \(S\mathop{\preceq}\limits U\). Then, \(T\mathop{\preceq}\limits U\)._ **Lemma 3**.: _Let \(S\subseteq\mathbb{R}^{k}\) be a closed set and let \(U\in\mathcal{E}\) such that \(S\mathop{\preceq}\limits U\). Then, \(\operatorname{Max}(S)\mathop{\preceq}\limits U\)._ **Remark 2**.: _The analog of Lemma 3 for minimization does not hold. That is, \(S\mathop{\preceq}\limits T\) does not imply \(S\mathop{\preceq}\limits\operatorname{Min}(T)\). This is because the elements of \(S\) and \(\operatorname{Min}(T)\) may be incomparable (though we do have \(s\not\geq t\) for all \(s\in S\) and \(t\in\operatorname{Min}(T)\)). For example, consider \(S=\{(0,0)^{\top}\}\) and \(T=\{(-1,1)^{\top},(2,2)^{\top},(1,-1)^{\top}\}\). Then, \(S\mathop{\preceq}\limits T\), but \(\operatorname{Min}(T)=\{(-1,1)^{\top},(1,-1)^{\top}\}\), and \(S\mathop{\not\leq}\limits\operatorname{Min}(T)\)._ ### Duality in Integer Programming Duality for single-objective IPs has been well studied, and we briefly review Lagrangian and super-additive duality; see [49] for a comprehensive discussion including the following results. Consider the IP \[z_{IP}=\max\ \ cx\quad\text{s.t.}\ Ax\leq b,\ x\in\mathbb{Z}_{\geq}^{n}.\] (IP) Suppose the constraint matrix \(A\) is composed of two sub-matrices, \(A^{1}\in\mathbb{R}^{m_{1}\times n}\) and \(A^{2}\in\mathbb{R}^{(m-m_{1})\times n}\), so that the constraints corresponding to \(A^{1}\) are the so-called complicating constraints, while those corresponding to \(A^{2}\) are somewhat easier to handle. Then, a Lagrangian dual is constructed by penalizing the complicating constraints by means of a dual variable \(\lambda\) as follows. \[z_{\text{LD}}=\min_{\lambda\in\mathbb{R}_{\geq}^{n}}\ \max_{x\in Q}\ (cx+\lambda^{\top}(b^{1}-A^{1}x)), \tag{2}\] where \(Q=\{x\in\mathbb{Z}_{\geq}^{n}\ |\ A^{2}x\leqq b^{2}\}\), and \(b^{1}\) and \(b^{2}\) are the sub-vectors of \(b\) corresponding to the rows included in \(A^{1}\) and \(A^{2}\), respectively. The Lagrangian dual (2) satisfies the following properties [11, 18]. 
**Proposition 3** (Weak Lagrangian Duality).: _For each \(\lambda\in\mathbb{R}_{+}^{m_{1}}\) and each \(x\) feasible to (IP),_ \[cx\leq cx+\lambda^{\top}(b^{1}-A^{1}x).\] _It follows that \(z_{IP}\leq z_{\mathrm{LD}}\)._ **Theorem 1**.: _The optimal value \(z_{\mathrm{LD}}\) of (2) is equal to the optimal value of the following linear program (LP):_ \[\max\ cx\quad\mathrm{s.t.}\ A^{1}x\leqq b^{1},\ x\in\mathrm{conv}(Q).\] **Theorem 2**.: _For fixed \(A^{1},b^{1}\) and \(Q\), the Lagrangian dual (2) is strong for any cost vector \(c\) if and only if_ \[\mathrm{conv}\left(Q\cap\{x\in\mathbb{R}_{\geqq}^{n}\ |\ A^{1}x\leqq b^{1}\}\right)=\mathrm{conv}(Q)\cap\{x\in\mathbb{R}_{\geqq}^{n}\ |\ A^{1}x\leqq b^{1}\}.\] The value function of an IP is defined as its optimal value parameterized by the constraint right-hand-side \(b\). That is, \(z(b)=\max\{cx\ |\ Ax\leqq b,x\in\mathbb{Z}_{\geqq}^{n}\}\). The value function is nondecreasing and superadditive over its domain, which motivates the following superadditive dual for (IP). \[\begin{split}\min\quad&F(b)\\ \mathrm{s.t.}\quad&F(A_{j})\geqq c_{j},\qquad j=1,\ldots,n,\\ &F(0)=0,\\ &F:\mathbb{R}^{m}\to\mathbb{R},\ \text{nondecreasing and superadditive.}\end{split}\tag{3}\] The superadditive dual (3) satisfies weak and strong duality [28, 48]. **Proposition 4** (Weak Superadditive Duality).: _If \(x\) is feasible for (IP) and \(F\) is feasible for (3), then \(cx\leq F(b)\). If (IP) is unbounded, then (3) is infeasible._ **Theorem 3** (Strong Superadditive Duality).: _If (IP) has an optimal solution \(x^{*}\) with \(cx^{*}<\infty\), then (3) has an optimal solution \(F^{*}\) with \(F^{*}(b)=cx^{*}\)._ ### Duality in Multiobjective Linear Programming We now provide a brief overview of MOLP duality; see Luc [33] for a survey. Consider the MOLP \[\max\ Cx\quad\mathrm{s.t.}\ Ax=b,\ x\in\mathbb{R}_{\geqq}^{n}. \tag{4}\] A matrix optimization dual for (4) was first proposed by Gale et al. in [13]. Several types of MOLP duality have since been explored, such as geometric duality [23] and set-valued duality [24]. Isermann [27] proposed a vector-valued dual problem over a space of matrices as an extension of the single-objective LP dual. This dual is strong in the sense that if both the primal and dual problems are feasible, then they have nondominated points that coincide. Isermann's dual has also been extended to other variants by considering different feasible sets for the dual matrix variables, such as Corley's duality in [5]. Lagrangian duality for MOLPs was introduced by Hamel et al. [22] and extended by Gourion and Luc [19]. Given the MOLP (4), define a set-valued function \(G\) on \(\mathbb{R}^{k\times m}\) as \(G(\Lambda)=\mathrm{Max}\big{(}\{Cx+\Lambda(b-Ax)\ |\ x\geqq 0\}\big{)}\).
Then, the Lagrangian minmax problem \(\min\{G(\Lambda)\ |\ \Lambda\in\mathbb{R}^{k\times m}\}\) is expressed as \[\begin{split}\min&\ \Lambda b+(C-\Lambda A)u\\ \mathrm{s.t.}&\ \alpha^{\top}\Lambda A\geqq\alpha^{\top}C,\\ &\ \alpha^{\top}(C-\Lambda A)u=0,\\ &\ \alpha\in\mathbb{R}^{k}_{>},\ u\in\mathbb{R}^{n}_{\geqq}.\end{split} \tag{5}\] **Lemma 4** (Strong duality of (5) [34]).: _If \(\mathcal{Y}_{\mathrm{M}}\) and \(\mathcal{Y}_{\mathrm{L}}\) are the sets of feasible objective values of (4) and (5) respectively, then \(\mathrm{Max}(\mathcal{Y}_{\mathrm{M}})=\mathrm{Min}(\mathcal{Y}_{\mathrm{L}})\)._ ### Bound Sets for Multiobjective Integer Programming While MOLP duality is well-studied, duality for MOIPs remains relatively unexplored. However, the related idea of using relaxations or variations of an MOIP to derive bounds on its nondominated points has been explored through bound sets [7, 9, 10]. Bound sets are helpful in computing the entire nondominated set of an MOIP and have been popular in recent work on MOIPs [3, 4, 6, 20, 30, 35, 36]; see [21] for a survey of algorithms for MOIPs. A common upper bound on \(\mathcal{Y}\) is given by the _ideal point_ defined coordinatewise as \(y^{I}_{i}=\underset{x\in\mathcal{X}}{\max}(Cx)_{i}\), \(i=1,2,\ldots,k\). Similarly, a common lower bound is the _nadir point_ defined as \(y^{N}_{i}=\min\{(Cx)_{i}\ |\ x\ \text{efficient for (MOIP)}\}\), \(i=1,2,\ldots,k\) [7]. Then, for each \(y\in\mathrm{Max}(\mathcal{Y})\), \(y^{N}\leqq y\leqq y^{I}\). Because \(\mathrm{Max}(\mathcal{Y})\) is in general a set with more than one element, the notion of bounding has been extended to sets. Fix a subset \(Y\subseteq\mathcal{Y}\). Ehrgott and Gandibleux [9] define an upper (resp. lower) bound set \(U\) (resp. \(L\)) as a subset of \(\mathbb{R}^{k}\) such that \(Y\preceqq U\) (resp. \(L\preceqq Y\)). Note that this definition of bound sets is not unique. For example, in [10], Ehrgott and Gandibleux use an alternative definition that imposes additional conditions on the sets \(L\) and \(U\). However, in this paper, we follow [9] and define bound sets through "\(\underline{\preceq}\)". In [1], the authors solve a sequence of scalarized problems to obtain an upper bound set. Their approach uses the fact that the nondominated set of the convex hull of the supported nondominated points of (MOIP) is an upper bound set for \(\mathcal{Y}\). Moreover, because the scalarization of a relaxation is a relaxation of the scalarization, this approach can be extended to problems for which the supported efficient solutions are difficult to compute [10]. This technique has further been adapted to specific problems by leveraging the problem features [4, 35, 36]. Alternatively, bound sets may be derived via search space decomposition, which uses local information based on regions of the search space [3, 6]. Klamroth et al. [30] compute bound sets using single-objective duality applied to \(\varepsilon\)-constraint scalarizations [20] of the MOIP. In the subsequent sections, we provide several bounds on the nondominated points of MOIPs through relaxations and multiobjective dual problems. ## 2 Relaxations for Multiobjective Integer Programs A \(k\)-objective maximization problem is a relaxation of (MOIP) if its feasible region contains the feasible region of (MOIP), and its nondominated set is an upper bound set for \(\text{Max}(\mathcal{Y})\). In this section, we first review the continuous and convex hull relaxations, and then present Lagrangian relaxation for MOIPs.
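The ideal and nadir points introduced above can be computed by brute force for small instances; the sketch below (ours, reusing the enumeration idea of the earlier snippet on the same tiny instance) extracts the nondominated points and forms \(y^{I}\) and \(y^{N}\), which bracket every nondominated point componentwise.

```python
import itertools
import numpy as np

C = np.array([[1.0, -0.5], [-0.5, 1.0]])
A = np.array([[1.0, 1.0]])
b = np.array([1.5])

X = [np.array(x) for x in itertools.product(range(2), repeat=2)
     if np.all(A @ np.array(x) <= b)]          # feasible integer points
Y = np.array([C @ x for x in X])               # feasible objective values

def is_dominated(y, Y):
    return any(np.all(z >= y) and np.any(z > y) for z in Y)

Y_nd = np.array([y for y in Y if not is_dominated(y, Y)])   # Max(Y)

y_ideal = Y.max(axis=0)      # y^I_i = max over all feasible x of (Cx)_i
y_nadir = Y_nd.min(axis=0)   # y^N_i = min over the nondominated points
print("ideal:", y_ideal, "nadir:", y_nadir)
# Every nondominated y satisfies y^N <= y <= y^I componentwise.
assert all(np.all(y_nadir <= y) and np.all(y <= y_ideal) for y in Y_nd)
```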
### The MOLP Relaxation The continuous relaxation of (MOIP) is obtained by dropping the integrality constraints, which yields the following MOLP. \[\max\;\;Cx\quad\text{s.t.}\;\;Ax\leqq b,\;x\in\mathbb{R}_{\geqq}^{n}.\] (MOLP) In recent years, the MOLP relaxation has been used in several search-based methods for solving MOIPs [12, 29, 38, 47]. The feasible region of (MOLP) clearly contains the feasible region of (MOIP). Proposition 5 shows that its nondominated set provides an upper bound for \(\text{Max}(\mathcal{Y}_{\text{MOIP}})\)1, so that (MOLP) is a relaxation of (MOIP); the proof is provided in Appendix B. **Proposition 5**.: _If \(\mathcal{Y}_{\mathrm{MOLP}}\) is the set of feasible objective values of (MOLP), then \(\mathrm{Max}(\mathcal{Y}_{\mathrm{MOIP}})\,\underline{\preceq}\,\mathrm{Max}( \mathcal{Y}_{\mathrm{MOLP}})\)._ ### The Convex Hull Relaxation The convex hull relaxation of (MOIP) is defined as \[\max\;Cx\quad\text{s.t.}\;\;x\in\mathrm{conv}\left(\left\{x\in\mathbb{Z}_{ \geq}^{n}\ |\ Ax\leqq b\right\}\right).\] (CH) Again, the feasible region of (CH) clearly contains the feasible region of (MOIP), and Proposition 6 asserts that its nondominated set provides an upper bound for \(\mathrm{Max}(\mathcal{Y}_{\mathrm{MOIP}})\). Thus, (CH) is a relaxation of (MOIP). The convex hull relaxation has been widely used in solution algorithms for MOIPs [10, 39, 41, 44]. We include some relevant results here; proofs are available in Appendix B. **Proposition 6**.: _If \(\mathcal{Y}_{\mathrm{CH}}\) is the set of feasible objective values of (CH), then \(\mathrm{Max}(\mathcal{Y}_{\mathrm{MOIP}})\,\underline{\preceq}\,\mathrm{Max}( \mathcal{Y}_{\mathrm{CH}})\)._ **Remark 3**.: _Because \(\mathcal{X}_{\mathrm{CH}}\subseteq\mathcal{X}_{\mathrm{MOLP}}\), the convex hull relaxation of an MOIP is tighter than its continuous relaxation. That is, \(\mathrm{Max}(\mathcal{Y}_{\mathrm{CH}})\,\underline{\preceq}\,\mathrm{Max}( \mathcal{Y}_{\mathrm{MOLP}})\)._ The inequality in Proposition 6 does not necessarily hold with equality. In the subsequent results, we explore connections between efficient solutions of (CH) and (MOIP). **Proposition 7**.: _Let \(x^{*}\) be an efficient solution of (CH). Then, \(x^{*}\) is an efficient solution of (MOIP) if and only if \(x^{*}\) is integral._ **Proposition 8**.: _If (CH) has efficient solutions, then it must have an integral efficient solution._ Propositions 7 and 8 imply that solving (CH) returns at least one efficient solution of (MOIP). Solutions to an MOIP obtained via its convex hull relaxation are further characterized as supported solutions in Proposition 9. **Proposition 9** ([8, 42]).: _Let \(x^{*}\) be an efficient solution of (MOIP). Then, \(x^{*}\) is a supported solution if and only if it is an efficient solution of (CH)._ In contrast to single-objective IPs, where every optimal solution to the IP is also optimal for its convex hull relaxation, (MOIP) may have efficient solutions that are not efficient for (CH). This is because the images of unsupported solutions do not lie on the nondominated frontier of \(\mathrm{conv}(\mathcal{Y})\). ### Lagrangian Relaxation for MOIPs In this section, we extend Lagrangian relaxation to MOIPs. Suppose the constraint matrix \(A\) is comprised of two sub-matrices \(A^{1}\in\mathbb{R}^{m_{1}\times n}\) corresponding to "complicating" constraints, and \(A^{2}\in\mathbb{R}^{(m-m_{1})\times n}\) corresponding to "simple" constraints. 
Let \(b^{1}\) and \(b^{2}\) be the corresponding sub-vectors of the constraint right-hand-side \(b\). As in the single-objective case, we only dualize the complicating constraints. Given a matrix of multipliers \(\Lambda\in\mathbb{R}^{k\times m_{1}}_{\geq}\), the Lagrangian relaxation of (MOIP) is the following multiobjective optimization problem. \[\begin{split}\max&\ Cx+\Lambda(b^{1}-A^{1}x)\\ \text{s.t.}&\ x\in Q=\{x\in\mathbb{Z}^{n}_{\geq}\ |\ A^{2}x\leqq b^{2}\}.\end{split}\] (LR\({}_{\Lambda}\)) If \(x\) is feasible to (MOIP), then it is feasible to (LR\({}_{\Lambda}\)) as well. Therefore, (LR\({}_{\Lambda}\)) is a relaxation of (MOIP) provided its nondominated set yields an upper bound for \(\operatorname{Max}(\mathcal{Y}_{\text{MOIP}})\). This is established in Proposition 10. **Proposition 10**.: _For \(\Lambda\in\mathbb{R}^{k\times m_{1}}_{\geq}\), let \(\mathcal{Y}_{\text{LR}(\Lambda)}\) be the set of feasible objective values of (LR\({}_{\Lambda}\)). Then, \(\operatorname{Max}(\mathcal{Y}_{\text{MOIP}})\preceqq\operatorname{Max}(\mathcal{Y}_{\text{LR}(\Lambda)})\)._ Proof.: If (MOIP) is infeasible, then \(\operatorname{Max}(\mathcal{Y}_{\text{MOIP}})=-M_{\infty}\) and the result follows trivially. Next, if \(x\in\mathcal{X}\), then \(x\) is feasible to (LR\({}_{\Lambda}\)) and \(Cx\leqq Cx+\Lambda(b^{1}-A^{1}x)\). So, if (MOIP) is unbounded above, then so is (LR\({}_{\Lambda}\)) and the result holds. Finally, suppose (MOIP) is feasible and bounded and let \(y^{*}=Cx^{*}\in\operatorname{Max}(\mathcal{Y}_{\text{MOIP}})\). Then \(x^{*}\) is feasible for (LR\({}_{\Lambda}\)) as well. If (LR\({}_{\Lambda}\)) is unbounded, then the result holds. Otherwise, there exists \(\hat{y}\in\operatorname{Max}(\mathcal{Y}_{\text{LR}(\Lambda)})\) such that \(\hat{y}\geqq Cx^{*}+\Lambda(b^{1}-A^{1}x^{*})\geqq Cx^{*}\). On the other hand, suppose there is \(\tilde{y}\in\operatorname{Max}(\mathcal{Y}_{\text{LR}(\Lambda)})\) such that \(\tilde{y}\leq y^{*}\). This implies \(\tilde{y}\leq\hat{y}\), which contradicts the nondominance of \(\tilde{y}\) for (LR\({}_{\Lambda}\)). Thus, \(\operatorname{Max}(\mathcal{Y}_{\text{MOIP}})\preceqq\operatorname{Max}(\mathcal{Y}_{\text{LR}(\Lambda)})\). **Remark 4**.: _We noted in Remark 3 that \(\operatorname{Max}(\mathcal{Y}_{\text{CH}})\preceqq\operatorname{Max}(\mathcal{Y}_{\text{MOLP}})\) for any MOIP. In general, there is no such relationship between \(\operatorname{Max}(\mathcal{Y}_{\text{LR}(\Lambda)})\) and either \(\operatorname{Max}(\mathcal{Y}_{\text{CH}})\) or \(\operatorname{Max}(\mathcal{Y}_{\text{MOLP}})\). This is illustrated in Example 1._ Example 1. Consider the MOIP \[\max\ \begin{bmatrix}1&-\frac{1}{2}\\ -\frac{1}{2}&1\end{bmatrix}\begin{bmatrix}x_{1}\\ x_{2}\end{bmatrix}\quad\text{s.t.}\ x_{1}+x_{2}\leqq\frac{3}{2},\ x_{1},x_{2}\in\{0,1\}.\] For the Lagrangian relaxations, we dualize the linear constraint.
Then, for each \(\Lambda=\left(\lambda_{1},\lambda_{2}\right)^{\top}\geq 0\), the set of feasible objective values for the Lagrangian relaxation is \[\mathcal{Y}_{\mathrm{LR}(\Lambda)}=\left\{\begin{bmatrix}\frac{3}{2}\lambda_{1} \\ \frac{3}{2}\lambda_{2}\end{bmatrix},\begin{bmatrix}1+\frac{1}{2}\lambda_{1}\\ -\frac{1}{2}+\frac{1}{2}\lambda_{2}\end{bmatrix},\begin{bmatrix}-\frac{1}{2}+ \frac{1}{2}\lambda_{1}\\ 1+\frac{1}{2}\lambda_{2}\end{bmatrix},\begin{bmatrix}\frac{1}{2}-\frac{1}{2} \lambda_{1}\\ \frac{1}{2}-\frac{1}{2}\lambda_{2}\end{bmatrix}\right\}.\] In particular, the points \((1,-\frac{1}{2})^{\top}\) and \((-\frac{1}{2},1)^{\top}\) are feasible objective values for the Lagrangian relaxation with \(\Lambda=(0,0)^{\top}\), and correspond to the supported efficient solutions \((1,0)^{\top}\) and \((0,1)^{\top}\) of the MOIP. This shows that the Lagrangian relaxation can be tight at some points in \(\mathrm{Max}(\mathcal{Y}_{\mathrm{MOIP}})\). Next, if \(\Lambda=(0,\frac{1}{4}+\epsilon)^{\top}\) for small \(\epsilon>0\), \[\mathrm{Max}(\mathcal{Y}_{\mathrm{LR}(\Lambda)})=\left\{\begin{bmatrix}0\\ \frac{3}{8}+\frac{3}{2}\epsilon\end{bmatrix},\begin{bmatrix}1\\ -\frac{3}{8}+\frac{1}{2}\epsilon\end{bmatrix},\begin{bmatrix}-\frac{1}{2}\\ \frac{9}{8}+\frac{1}{2}\epsilon\end{bmatrix},\begin{bmatrix}\frac{1}{2}\\ \frac{3}{8}-\frac{1}{2}\epsilon\end{bmatrix}\right\}.\] The relaxations are illustrated in Figure 1, which depicts \(\mathrm{Max}(\mathcal{Y}_{\mathrm{MOIP}})\), \(\mathrm{Max}(\mathcal{Y}_{\mathrm{CH}})\), and \(\mathrm{Max}(\mathcal{Y}_{\mathrm{MOLP}})\), and the nondominated sets \(\mathrm{Max}(\mathcal{Y}_{\mathrm{LR}((0,0)^{\top})})\) and \(\mathrm{Max}(\mathcal{Y}_{\mathrm{LR}((0,1/4+\epsilon)^{\top})})\) corresponding to Lagrangian relaxations. In this case, there is a subset \[S =\Big{\{}\Big{(}-\frac{1}{2},1\Big{)}^{\top},\Big{(}0,\frac{3}{8} +\frac{3\epsilon}{2}\Big{)}^{\top},\Big{(}1,-\frac{1}{2}\Big{)}^{\top}\Big{\}}\] \[=\mathrm{Min}\,\bigg{(}\,\mathrm{Max}(\mathcal{Y}_{\mathrm{LR}(( 0,0)^{\top})})\bigcup\mathrm{Max}(\mathcal{Y}_{\mathrm{LR}((0,1/4+\epsilon)^ {\top})})\bigg{)},\] such that \(\mathrm{Max}(\mathcal{Y}_{\mathrm{MOIP}})\,\underline{\preceq}\,S\, \underline{\preceq}\,\mathrm{Max}(\mathcal{Y}_{\mathrm{CH}})\,\underline{ \preceq}\,\mathrm{Max}(\mathcal{Y}_{\mathrm{MOLP}})\), and none of the inequalities holds with equality. Example 1 demonstrates that there are problems for which Lagrangian relaxations can provide strictly tighter upper bounds on the nondominated set of (MOIP) than the convex hull relaxation. ## 3 Lagrangian Duality for Multiobjective Integer Programs In the previous section, we showed that for any nonnegative matrix \(\Lambda\), the nondominated set of the Lagrangian relaxation provides an upper bound set for (MOIP). Moreover, in Example 1, we found that at unsupported solutions, an appropriate choice of \(\Lambda\) may yield a bound that is tighter than the convex hull relaxation. This motivates a strategy where we search for the "best" among all the upper bounds obtained via Lagrangian relaxation. To this end, we consider the set \[\mathcal{Y}_{\mathrm{LD}}=\bigcup_{\Lambda\in\mathbb{R}_{\geq}^{k\times m_{1}}} \mathrm{Max}(\mathcal{Y}_{\mathrm{LR}(\Lambda)}).\] Thus, \(\mathcal{Y}_{\mathrm{LD}}\) is a subset of \(\mathbb{R}^{k}\) comprised of the nondominated points of all possible Lagrangian relaxations of (MOIP). 
Then, a natural approach to finding the best bounds due to the Lagrangian relaxations is to consider the elements of \(\mathcal{Y}_{\mathrm{LD}}\) that are nondominated from below. Therefore, we define the Lagrangian dual of (MOIP) as \[\mathrm{Min}(\mathcal{Y}_{\mathrm{LD}})=\mathrm{Min}\,\bigg{(}\bigcup_{\Lambda\geq 0}\mathrm{Max}(\mathcal{Y}_{\mathrm{LR}(\Lambda)})\bigg{)}.\] (LD) ### Geometry of the Set of Lagrangian Dual Feasible Values Before we establish properties of the Lagrangian dual (LD), we first analyze its set of feasible objectives \(\mathcal{Y}_{\mathrm{LD}}\). Because \(\mathcal{Y}_{\mathrm{LD}}\) is an uncountable union of closed sets in \(\mathbb{R}^{k}\), it is not guaranteed to be closed (or open). This is illustrated in Example 1, where we analytically derive the set \(\mathrm{Max}(\mathcal{Y}_{\mathrm{LR}(\Lambda)})\) for all \(\Lambda\geqq 0\) and use it to obtain a complete description of \(\mathcal{Y}_{\mathrm{LD}}\). In general, however, an explicit description of \(\mathcal{Y}_{\mathrm{LD}}\) may be difficult to derive. **Remark 5**.: _The set \(\mathcal{Y}_{\mathrm{LD}}\) may be non-convex, disconnected, and neither open nor closed; this is illustrated in Example 1 and Figure 2._ Figure 1: Feasible regions and nondominated sets for the MOIP in Example 1 and its continuous, convex hull, and Lagrangian relaxations. For this example, a subset of Lagrangian upper bounds outperforms both the continuous and convex hull relaxations. Example 1. Consider the MOIP \[\max\ \begin{bmatrix}1&-\frac{1}{2}\\ -\frac{1}{2}&1\end{bmatrix}\begin{bmatrix}x_{1}\\ x_{2}\end{bmatrix}\quad\text{s.t.}\ x_{1}+x_{2}\leqq 1,\ x_{1},x_{2}\in\{0,1\}.\] As in Example 1, we dualize the linear constraint. For a fixed \(\Lambda\), this yields the following Lagrangian relaxation. \[\max\ \begin{bmatrix}1&-\frac{1}{2}\\ -\frac{1}{2}&1\end{bmatrix}\begin{bmatrix}x_{1}\\ x_{2}\end{bmatrix}+\begin{bmatrix}\lambda_{1}\\ \lambda_{2}\end{bmatrix}\begin{pmatrix}1-\begin{bmatrix}1&1\end{bmatrix}\begin{bmatrix}x_{1}\\ x_{2}\end{bmatrix}\end{pmatrix}\quad\text{s.t.}\ x_{1},x_{2}\in\{0,1\}. \tag{6}\] For each \(\Lambda\), let \(\mathcal{Y}_{\text{LR}(\Lambda)}\) be the set of feasible objective values of (6). Enumerating the feasible objective values, we have \[\mathcal{Y}_{\text{LR}(\Lambda)}=\left\{(\lambda_{1},\lambda_{2})^{\top},\Big{(}1,-\frac{1}{2}\Big{)}^{\top},\Big{(}-\frac{1}{2},1\Big{)}^{\top},\Big{(}\frac{1}{2}-\lambda_{1},\frac{1}{2}-\lambda_{2}\Big{)}^{\top}\right\}.\] Then, the nondominated points of (6) are given by \(\max(\mathcal{Y}_{\text{LR}(\Lambda)})\). Considering various sub-cases, we derive the following description of \(\max(\mathcal{Y}_{\text{LR}(\Lambda)})\). \[\max(\mathcal{Y}_{\text{LR}(\Lambda)})\!=\!\begin{cases}\{(\lambda_{1},\!\lambda_{2})\},&\lambda_{1},\!\lambda_{2}\geqq 1,\\ \{(\lambda_{1},\!\lambda_{2}),(-\frac{1}{2},\!1)\},&\lambda_{1}\geqq 1,0\leq\lambda_{2}<1,\\ \{(\lambda_{1},\!\lambda_{2}),(1,\!-\frac{1}{2})\},&0\leq\lambda_{1}<1,\!
\lambda_{2}\geqq 1,\\ \{(\lambda_{1},\!\lambda_{2}),(-\frac{1}{2},\!1),(1,\!-\frac{1}{2})\},&\frac{1 }{4}\leq\lambda_{1},\!\lambda_{2}<1,\\ \{(-\frac{1}{2},\!1),(1,-\frac{1}{2}),(\frac{1}{2}\!-\!\lambda_{1}\frac{1}{2} \!-\!\lambda_{2})\},&0\leq\lambda_{1},\!\lambda_{2}<\frac{1}{4},\\ \{(\lambda_{1},\!\lambda_{2}),(-\frac{1}{2},\!1),(1,\!-\frac{1}{2}),(\frac{1 }{2}\!-\!\lambda_{1}\frac{1}{2}\!-\!\lambda_{2})\},&0\leq\lambda_{1}<\frac{1}{ 4}\frac{1}{4}\leq\lambda_{2}<1,\\ \{(\lambda_{1},\!\lambda_{2}),(-\frac{1}{2},\!1),(1,-\frac{1}{2}),(\frac{1 }{2}\!-\!\lambda_{1}\frac{1}{2}\!-\!\lambda_{2})\},&\frac{1}{4}\leq\lambda_{1}< 1,\!0\leq\lambda_{2}<\frac{1}{4}.\end{cases}\] The set \(\mathcal{Y}_{\text{LD}}\) is plotted in Figure 2. We first show that \(\mathcal{Y}_{\text{LD}}\) is not open. For \(\Lambda=(0,0)^{\top}\), we have \((1,-\frac{1}{2})^{\top}\in\max(\mathcal{Y}_{\text{LR}(\Lambda)})\subseteq \mathcal{Y}_{\text{LD}}\). Consider an arbitrary \(\epsilon>0\) and suppose \((1-\epsilon,-\frac{1}{2}-\epsilon)^{\top}\in\mathcal{Y}_{\text{LD}}\). Then, there exists \(\Lambda\geqq 0\) such that \((1-\epsilon,-\frac{1}{2}-\epsilon)^{\top}\) is a nondominated element of the set \(\mathcal{Y}_{\text{LR}(\Lambda)}\). However, \((1-\epsilon,-\frac{1}{2}-\epsilon)^{\top}\leq(1,-\frac{1}{2})^{\top}\) for any such \(\Lambda\), which is a contradiction. Thus, for each \(\epsilon>0\), the ball of radius \(\epsilon\sqrt{2}\) centered at \((1,-\frac{1}{2})^{\top}\) is not contained in \(\mathcal{Y}_{\mathrm{LD}}\), so \(\mathcal{Y}_{\mathrm{LD}}\) is not open. Next we show that \(\mathcal{Y}_{\mathrm{LD}}\) is not closed either. For this, we show that \((\frac{1}{4},0)^{\top}\) is a limit point of \(\mathcal{Y}_{\mathrm{LD}}\) that is not contained in \(\mathcal{Y}_{\mathrm{LD}}\). For all \(0<\epsilon<\frac{1}{4}\), setting \(\lambda_{1}=\frac{1}{4}+\epsilon\) and \(\lambda_{2}=0\) gives \[\mathrm{Max}(\mathcal{Y}_{\mathrm{LR}((1/4+\epsilon,0)^{\top})})=\left\{\left( \frac{1}{4}+\epsilon,0\right)^{\top},\left(1,-\frac{1}{2}\right)^{\top}, \left(-\frac{1}{2},1\right)^{\top},\left(\frac{1}{4}-\epsilon,\frac{1}{2} \right)^{\top}\right\}.\] Thus, for all \(\epsilon\) small enough, \((\frac{1}{4}+\epsilon,0)^{\top}\) is an element of \(\mathcal{Y}_{\mathrm{LD}}\). It follows that \((\frac{1}{4},0)^{\top}\) is a limit point of \(\mathcal{Y}_{\mathrm{LD}}\). However, for \((\frac{1}{4},0)^{\top}\) to be a feasible objective value to \(LR(\Lambda)\), we must have \(\Lambda=(\frac{1}{4},0)^{\top}\) or \(\Lambda=(\frac{1}{4},\frac{1}{2})^{\top}\). In either case, \[\mathrm{Max}(\mathcal{Y}_{\mathrm{LR}(\Lambda)})=\left\{\left(1,-\frac{1}{2} \right)^{\top},\left(-\frac{1}{2},1\right)^{\top},\left(\frac{1}{4},\frac{1}{2 }\right)^{\top}\right\}.\] Hence, \(\mathcal{Y}_{\mathrm{LD}}\) has an unattained limit point, and \(\mathcal{Y}_{\mathrm{LD}}\) is not closed. Because \(\mathcal{Y}_{\mathrm{LD}}\) is not guaranteed to be closed, one possible avenue is to define the Lagrangian dual via the closure of \(\mathcal{Y}_{\mathrm{LD}}\), denoted by \(\mathrm{cl}(\mathcal{Y}_{\mathrm{LD}})\). However, this may not be fruitful because points in \(\mathrm{Min}(\mathrm{cl}(\mathcal{Y}_{\mathrm{LD}}))\) do not necessarily provide upper bounds on \(\mathrm{Max}(\mathcal{Y}_{\mathrm{MOIP}})\). 
To see this in Example 1, consider the point \((\frac{1}{4},-\frac{1}{2})^{\top}\) that lies on the boundary of \(\mathrm{cl}(\mathcal{Y}_{\mathrm{LD}})\) but is not contained in \(\mathcal{Y}_{\mathrm{LD}}\), and is dominated by the primal nondominated point \((1,-\frac{1}{2})^{\top}\). This breakdown at unattained limit points of \(\mathcal{Y}_{\mathrm{LD}}\) can occur even for more structured problems. For example, consider the primal problem \[\max\left\{\begin{bmatrix}x_{1}\\ x_{2}\end{bmatrix}\ \Big{|}\ x_{1}+x_{2}\leq 2,x_{1},x_{2}\in\{0,1\}\right\}.\] It has a unique efficient solution at \((1,1)^{\top}\), but dualizing the constraints \(x_{1}\leq 1\) and \(x_{2}\leq 1\) and taking \(\Lambda_{\epsilon}=\begin{bmatrix}2&2\\ 0&1-\epsilon\end{bmatrix}\) yields \(y_{\epsilon}=\begin{bmatrix}0\\ 1+\epsilon\end{bmatrix}\in\operatorname{Max}(\mathcal{Y}_{\operatorname{LR}( \Lambda_{\epsilon})})\) for small \(\epsilon>0\). However, the limit point \((0,1)^{\top}\) is dominated by \((1,1)^{\top}\). Owing to the geometry of \(\mathcal{Y}_{\operatorname{LD}}\), we note that points in \(\mathcal{Y}_{\operatorname{LD}}\) may provide better bounds than \(\operatorname{Min}(\mathcal{Y}_{\operatorname{LD}})\). For instance, in Example 1, \((\frac{1}{4},\frac{1}{4})^{\top}\) is the only upper bound in \(\operatorname{Min}(\mathcal{Y}_{\operatorname{LD}})\) on the nondominated point \((0,0)^{\top}\). However, points \((\frac{1}{4}+\epsilon,0)^{\top}\) and \((0,\frac{1}{4}+\epsilon)^{\top}\) obtained via Lagrangian relaxation provide tighter upper bounds (for small \(\epsilon\)). These observations suggest that considering (a finite set of) Lagrangian relaxations may yield a better upper bound set than one obtained by restricting our attention to \(\operatorname{Min}(\mathcal{Y}_{\operatorname{LD}})\). A numerical illustration of the quality of (an approximation to) \(\operatorname{Min}(\mathcal{Y}_{\operatorname{LD}})\) as an upper bound set is given in Section 3.4. Note that any approximation to \(\mathcal{Y}_{\operatorname{LD}}\) that considers finitely many values of \(\Lambda\) will result in a closed set that is bounded below, so that its nondominated set is guaranteed to be nonempty. While it may be difficult to obtain an explicit description of \(\mathcal{Y}_{\operatorname{LD}}\), we can approximate it through its relationship with the set \(\mathcal{Y}_{\operatorname{MOIP}}\). Proposition 11 shows that for any \(y\in\mathcal{Y}_{\operatorname{MOIP}}\), \(\operatorname{cl}(\mathcal{Y}_{\operatorname{LD}})\) is contained in the union of the half-spaces \(\{z\in\mathbb{R}^{k}\ |\ z_{i}\geq y_{i}\}\), \(i=1,\ldots,k\). **Proposition 11**.: _If \(z\in\operatorname{cl}(\mathcal{Y}_{\operatorname{LD}})\), then for each \(y\in\mathcal{Y}_{\operatorname{MOIP}}\), \(z_{i}\geq y_{i}\) for some \(i\). That is, \(\operatorname{cl}(\mathcal{Y}_{\operatorname{LD}})\subseteq\bigcap\limits_{y \in\mathcal{Y}_{\operatorname{MOIP}}}\bigcup\limits_{i=1}^{k}\{z\in\mathbb{R}^ {k}\ |\ z_{i}\geq y_{i}\}\)._ Proof.: Consider a fixed \(y\in\mathcal{Y}_{\operatorname{MOIP}}\) and \(z\in\mathcal{Y}_{\operatorname{LD}}\). By Proposition 10, there exists \(i\in\{1,\ldots,k\}\) for which \(z_{i}\geq y_{i}\). That is, \(\mathcal{Y}_{\operatorname{LD}}\subseteq\{z\in\mathbb{R}^{k}\ |\ z_{i}\geq y_{i}\}\) for some \(i=1,\ldots,k\). 
As \(z\) and \(y\in\mathcal{Y}_{\mathrm{MOIP}}\) were arbitrary (with the index \(i\) depending on both), we have \(\mathcal{Y}_{\mathrm{LD}}\subseteq\bigcap_{y\in\mathcal{Y}_{\mathrm{MOIP}}}\bigcup_{i=1}^{k}\{z\in\mathbb{R}^{k}\ |\ z_{i}\geq y_{i}\}\). To see that the containment extends to \(\operatorname{cl}(\mathcal{Y}_{\mathrm{LD}})\), note that for each \(y\in\mathcal{Y}_{\mathrm{MOIP}}\), \(\bigcup_{i=1}^{k}\{z\in\mathbb{R}^{k}\ |\ z_{i}\geq y_{i}\}\) is a finite union of closed sets and is therefore closed. Then, \(\bigcap_{y\in\mathcal{Y}_{\mathrm{MOIP}}}\bigcup_{i=1}^{k}\{z\in\mathbb{R}^{k}\ |\ z_{i}\geq y_{i}\}\) is also closed, so that \(\mathcal{Y}_{\mathrm{LD}}\subseteq\operatorname{cl}(\mathcal{Y}_{\mathrm{LD}})\subseteq\bigcap_{y\in\mathcal{Y}_{\mathrm{MOIP}}}\bigcup_{i=1}^{k}\{z\in\mathbb{R}^{k}\ |\ z_{i}\geq y_{i}\}\).

Figure 3 illustrates the superset described in Proposition 11 on Example 1. Some limit points of \(\mathcal{Y}_{\mathrm{LD}}\) are dominated by primal nondominated points. Nonetheless, for every primal feasible objective value, each element of \(\operatorname{cl}(\mathcal{Y}_{\mathrm{LD}})\) has at least one component that is no smaller than the corresponding component of that value.

### Properties of the Lagrangian dual

We now proceed to establish properties of the Lagrangian dual (LD).

**Corollary 1** (Weak Duality for (LD)).: _If \(x\) is feasible to (MOIP) and \(\Lambda\in\mathbb{R}^{k\times m_{1}}_{\geqq}\), then \(\{Cx\}\preceqq\operatorname{Max}(\mathcal{Y}_{\mathrm{LR}(\Lambda)})\)._

Proof.: By Proposition 10, we have \(\{Cx\}\preceqq\operatorname{Max}(\mathcal{Y}_{\mathrm{MOIP}})\preceqq\operatorname{Max}(\mathcal{Y}_{\mathrm{LR}(\Lambda)})\).

Note that Corollary 1 does not imply \(\operatorname{Max}(\mathcal{Y}_{\mathrm{MOIP}})\preceqq\operatorname{Min}(\mathcal{Y}_{\mathrm{LD}})\). As discussed in Remark 2, for \(S,T\subseteq\mathbb{R}^{k}\) with \(S\preceqq T\), we do not necessarily have \(S\preceqq\operatorname{Min}(T)\). We can only assert that elements of \(\operatorname{Max}(\mathcal{Y}_{\mathrm{MOIP}})\) do not dominate elements of \(\operatorname{Min}(\mathcal{Y}_{\mathrm{LD}})\).

Theorem 1 states that the optimal value of the Lagrangian dual of an IP can be obtained by solving an LP. An analogous result does not hold for the multiobjective problem. That is, the Lagrangian dual of an MOIP cannot be posed as an MOLP in general. This is illustrated in Example 1. The set \(\operatorname{Min}(\mathcal{Y}_{\mathrm{LD}})\) consists of three isolated points, which can never be obtained from a single MOLP because the nondominated set of an MOLP is connected. Nonetheless, Theorem 4 uses an MOLP to derive an upper bound on the dual nondominated points. Recall that if \(A^{1}\) and \(A^{2}\) are the sub-matrices of \(A\) corresponding to the complicating and simple constraints respectively, then \(Q=\{x\in\mathbb{Z}_{\geq}^{n}\ |\ A^{2}x\leqq b^{2}\}\).

**Theorem 4**.: _Let \(\mathcal{Y}_{\mathrm{LDLP}}\) be the set of feasible objective values of the MOLP_ \[\max\ \{Cx\ |\ A^{1}x\leqq b^{1},\ x\in\operatorname{conv}(Q)\}.
\tag{7}\]

_Then, \(\operatorname{Min}(\mathcal{Y}_{\mathrm{LD}})\preceqq\operatorname{Max}(\mathcal{Y}_{\mathrm{LDLP}})\)._

Figure 3: The set \(\mathcal{Y}_{\mathrm{LD}}\) for the MOIP in Example 1 as well as the approximation of its closure given by Proposition 11. Some elements of \(\operatorname{cl}(\mathcal{Y}_{\mathrm{LD}})\) are weakly dominated by elements of \(\operatorname{Max}(\mathcal{Y}_{\mathrm{MOIP}})\). However, for every \(y\in\mathcal{Y}_{\mathrm{MOIP}}\) and \(z\in\operatorname{cl}(\mathcal{Y}_{\mathrm{LD}})\), there is an objective \(i\) for which \(y_{i}\leqq z_{i}\).

Proof.: Because \(\operatorname{conv}(Q)\) is a polyhedron, there is a matrix \(B\) and a vector \(d\) such that \(\operatorname{conv}(Q)=\{x\in\mathbb{R}_{\geq}^{n}\ |\ Bx\leqq d\}\). Strong Lagrangian duality for MOLPs (Lemma 4) implies that \[\operatorname{Max}(\mathcal{Y}_{\mathrm{LDLP}}) =\max\ \{Cx\ |\ A^{1}x\leqq b^{1},\ x\in\operatorname{conv}(Q)\}\] \[=\min_{\Lambda,\Gamma\geqq 0}\ \max_{x\in\mathbb{R}^{n}}\ (Cx+\Lambda(b^{1}-A^{1}x)+\Gamma(d-Bx))\] \[\succeqq\min_{\Lambda\geqq 0}\ \max_{x\in\operatorname{conv}(Q)}\ (Cx+\Lambda(b^{1}-A^{1}x)) \tag{8}\] \[\succeqq\min_{\Lambda\geqq 0}\ \max_{x\in Q}\ (Cx+\Lambda(b^{1}-A^{1}x)) \tag{9}\] \[\succeqq\operatorname{Min}(\mathcal{Y}_{\mathrm{LD}}).\] The inequality in (8) holds because for a fixed \(\Lambda\geqq 0\) and for every \(\Gamma\geqq 0\) and \(x\in\operatorname{conv}(Q)\), we have \(Cx+\Lambda(b^{1}-A^{1}x)+\Gamma(d-Bx)\geqq Cx+\Lambda(b^{1}-A^{1}x)\). In particular, \[\max_{x\in\mathbb{R}^{n}}\{Cx+\Lambda(b^{1}-A^{1}x)+\Gamma(d-Bx)\}\succeqq\max\{Cx+\Lambda(b^{1}-A^{1}x)\ |\ Bx\leqq d\}.\] Further, (9) holds because for any \(\Lambda\geqq 0\), if \(x\in Q\) is an efficient solution for \(\max_{x\in Q}\ (Cx+\Lambda(b^{1}-A^{1}x))\), then it is either also an efficient solution for \(\max_{x\in\operatorname{conv}(Q)}\ (Cx+\Lambda(b^{1}-A^{1}x))\), or an interior point of \(\operatorname{conv}(Q)\). In the latter case, \(\{Cx+\Lambda(b^{1}-A^{1}x)\}\preceqq\max_{x\in\operatorname{conv}(Q)}\ (Cx+\Lambda(b^{1}-A^{1}x))\). Thus, for each \(\Lambda\geqq 0\), \[\max_{x\in\operatorname{conv}(Q)}\ (Cx+\Lambda(b^{1}-A^{1}x))\ \succeqq\max_{x\in Q}\ (Cx+\Lambda(b^{1}-A^{1}x)).\qed\]

Corollary 2 establishes that the Lagrangian dual provides a tighter upper bound than that given by the continuous relaxation (MOLP).

**Corollary 2**.: _Let \(\mathcal{Y}_{\mathrm{MOLP}}\) be the set of feasible objective values for (MOLP). Then, \(\operatorname{Min}(\mathcal{Y}_{\mathrm{LD}})\preceqq\operatorname{Max}(\mathcal{Y}_{\mathrm{MOLP}})\)._

Proof.: The feasible region of (MOLP) contains the feasible region of (7). Therefore, \(\operatorname{Max}(\mathcal{Y}_{\mathrm{MOLP}})\succeqq\operatorname{Max}(\mathcal{Y}_{\mathrm{LDLP}})\succeqq\operatorname{Min}(\mathcal{Y}_{\mathrm{LD}})\), where the second relation is Theorem 4.

Corollary 2 implies that if \(x^{*}\) and \(\tilde{x}\) are efficient solutions to (MOIP) and its MOLP relaxation respectively and \(y\in\operatorname{Min}(\mathcal{Y}_{\mathrm{LD}})\) is such that \(Cx^{*},C\tilde{x}\), and \(y\) are all comparable, then \(Cx^{*}\leqq y\leqq C\tilde{x}\). Example 1 illustrates that both inequalities can be strict. Theorem 4 gives a loose upper bound on the Lagrangian dual. In the remainder of this section, we investigate relationships that hold with equality. To do so, we solve a scalarized problem and apply results from single-objective duality.
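As a concrete preview of the scalarization step, the following toy check (Python, illustrative only; the data are those of Example 1 with \(\mu=(1,1)^{\top}\)) evaluates both sides of the identity established in Theorem 5 below by brute force, using a fine grid for the scalar multiplier \(u=\mu^{\top}\Lambda\).

```python
from itertools import product

# Illustrative toy check (not from the paper) of the identity in Theorem 5 below,
# on Example 1: rows of C are (1, -1/2) and (-1/2, 1), the dualized constraint is
# x1 + x2 <= 1, Q = {0,1}^2, and the scalarizing vector is mu = (1, 1).
mu = (1.0, 1.0)
C = [(1.0, -0.5), (-0.5, 1.0)]
muC = [mu[0] * C[0][j] + mu[1] * C[1][j] for j in range(2)]   # (0.5, 0.5)

def inner_max(u):
    """max over x in Q of mu^T C x + u * (1 - x1 - x2), where u = mu^T Lambda >= 0."""
    return max(muC[0] * x[0] + muC[1] * x[1] + u * (1 - x[0] - x[1])
               for x in product((0, 1), repeat=2))

# Left-hand side: minimize the inner maximum over a fine grid of u >= 0.
lhs = min(inner_max(k / 1000.0) for k in range(2001))

# Right-hand side: max of mu^T C x over conv(Q) with x1 + x2 <= 1; this LP attains
# its optimum at a vertex of the simplex conv{(0,0), (1,0), (0,1)}.
rhs = max(muC[0] * v[0] + muC[1] * v[1] for v in [(0, 0), (1, 0), (0, 1)])

print(lhs, rhs)   # 0.5 0.5
```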
**Theorem 5**.: _For all \(\mu\in\mathbb{R}^{k}_{>}\),_ \[\min_{\Lambda\geq 0}\ \max_{x\in Q}\ \mu^{\top}Cx+\mu^{\top}\Lambda(b^{1}-A^{1}x )=\max_{x\in\operatorname{conv}(Q)}\{\mu^{\top}Cx\ |\ A^{1}x\leqq b^{1}\}.\] Proof.: Consider the single-objective IP \[\max\ \mu^{\top}Cx\quad\text{s.t.}\ A^{1}x\leqq b^{1},\ x\in Q. \tag{10}\] The Lagrangian dual of (10) is \[\min_{u\in\mathbb{R}^{m}_{\geqq}}\ \max_{x\in Q}\ \mu^{\top}Cx+u^{\top}(b^{1}-A^{1}x). \tag{11}\] For any \(u\in\mathbb{R}^{m_{1}}_{\geqq}\), we can write \(u^{\top}=\mu^{\top}\Lambda\) for some \(\Lambda\in\mathbb{R}^{k\times m_{1}}_{\geqq}\). Conversely, if \(\mu\in\mathbb{R}^{k}_{>}\) and \(\Lambda\in\mathbb{R}^{k\times m_{1}}_{\geqq}\), then \((\mu^{\top}\Lambda)^{\top}\in\mathbb{R}^{m_{1}}_{\geqq}\). Thus, we can rewrite (11) as \[\min_{\Lambda\in\mathbb{R}^{k\times m_{1}}_{\geqq}}\ \max_{x\in Q}\ \mu^{\top}Cx+\mu^{\top}\Lambda(b^{1}-A^{1}x). \tag{12}\] Moreover, Theorem 1 implies that the Lagrangian dual (11) to (10) has the same optimal value as the LP \[\max\ \mu^{\top}Cx\quad\text{s.t.}\ A^{1}x\leqq b^{1},\ x\in\operatorname{conv} (Q). \tag{13}\] Therefore, \[\min_{\Lambda\geqq 0}\ \max_{x\in Q}\ \mu^{\top}Cx+\mu^{\top}\Lambda(b^{1}-A^{1 }x)=\max_{x\in\operatorname{conv}(Q)}\ \left\{\mu^{\top}Cx\ |\ A^{1}x\leqq b^{1}\right\}.\qed\] Thus, single-objective LPs can solve the scalarized dual problem. ### Strong Lagrangian Duality A dual problem to (MOIP) is strong at a solution \(x\) if there exists a point \(y\in\operatorname{Min}(\mathcal{Y}_{\mathrm{LD}})\) such that \(y=Cx\). In this subsection, we seek conditions on (MOIP) under which the Lagrangian dual is strong. Theorem 4 used an MOLP to prescribe an upper bound for \(\operatorname{Min}(\mathcal{Y}_{\mathrm{LD}})\). Theorem 6 uses this MOLP to derive a sufficient condition for strong duality. **Theorem 6**.: _Let \(x^{*}\) be an efficient solution of (MOIP) such that \(Cx^{*}\leqq y\) for some \(y\in\operatorname{Min}(\mathcal{Y}_{\mathrm{LD}})\). The Lagrangian dual (LD) is strong at \(x^{*}\) if there exists a vector \(\alpha\in\mathbb{R}^{k}_{>}\) such that_ \[\alpha^{\top}C(x^{*}-x)\leqq 0\text{ for all }x\in\operatorname{conv}(Q) \cap\{x\in\mathbb{R}^{n}\ |\ A^{1}x\leqq b^{1}\}. \tag{14}\] Proof.: Condition (14) holds if and only if \(x^{*}\) is an efficient solution to the MOLP \(\max\{Cx\ |\ A^{1}\leqq b^{1},x\in\operatorname{conv}(Q)\}\)[34, Theorem 4.2.6(i)]. Moreover, Theorem 4 implies that \(\operatorname{Min}(\mathcal{Y}_{\mathrm{LD}})\leqq\max\{Cx\ |\ A^{1}\leqq b^{1},x\in \operatorname{conv}(Q)\}\) so that if \(y\in\operatorname{Min}(\mathcal{Y}_{\mathrm{LD}})\) is comparable to \(Cx^{*}\), then \(y\leqq Cx^{*}\). Corollary 1 then implies that \(y\not\leq Cx^{*}\). Thus, \(y=Cx^{*}\). Restricting our attention to supported solutions, we next derive conditions for strong Lagrangian duality that are independent of the objective function. These results are analogous to the single-objective case (see Theorem 2). **Theorem 7**.: _If the Lagrangian dual is strong at supported efficient solutions for all matrices \(C\), then_ \[\operatorname{conv}(Q\cap\{x\in\mathbb{R}^{n}\ |\ A^{1}x\leqq b^{1}\})= \operatorname{conv}(Q)\cap\{x\in\mathbb{R}^{n}\ |\ A^{1}x\leqq b^{1}\}. \tag{15}\] Proof.: By Theorem 2, it suffices to show that under the hypothesis of Theorem 7, strong Lagrangian duality holds for any single-objective IP with the same feasible region as (MOIP), i.e, \(Q\cap\{x\in\mathbb{R}^{n}\ |\ A^{1}x\leqq b^{1}\}\). 
Let \(c\in\mathbb{R}^{n}\) be an arbitrary cost vector and consider the cost matrix \(C=\begin{bmatrix}c&0&\dots&0\end{bmatrix}^{\top}\in\mathbb{R}^{k\times n}\). Set \(\mu\in\mathbb{R}^{k}_{>}\) to be the vector of all ones so that \(\mu^{\top}C=c^{\top}\). Let \(x^{*}\) be a supported efficient solution to (MOIP) with supporting vector \(\mu\). Then \(x^{*}\) is an efficient solution to the single-objective IP \(\max\{c^{\top}x\ |\ A^{1}x\leqq b^{1},x\in Q\}\). By the hypothesis, there exists \(y\in\operatorname{Min}(\mathcal{Y}_{\mathrm{LD}})\) such that \(Cx^{*}=y\). So, \(y=C\hat{x}+\hat{\Lambda}(b^{1}-A^{1}\hat{x})\) for some \(\hat{x}\in Q\) and \(\hat{\Lambda}\geqq 0\). Then, \[c^{\top}x^{*}=\mu^{\top}Cx^{*}=\mu^{\top}y=\mu^{\top}C\hat{x}+\mu^{\top}\hat{ \Lambda}(b^{1}-A^{1}\hat{x})=c^{\top}\hat{x}+\hat{\lambda}(b^{1}-A^{1}\hat{x}),\] where \(\hat{\lambda}=\mu^{\top}\hat{\Lambda}\in\mathbb{R}^{m_{1}}_{\leqq}\). Thus, the Lagrangian dual of the IP \(\max\{c^{\top}x\ |\ A^{1}x\leqq b^{1},x\in Q\}\) is strong. Because \(c\) was arbitrary, Theorem 2 implies that (15) holds. An example to illustrate Theorem 7 is provided in Appendix C, where we present an MOIP whose feasible region does not satisfy condition (15), and show that the Lagrangian dual for this problem is not strong at a supported solution. A modified converse to Theorem 7 is derived in Theorem 8, but we first show that optimal objective values for the scalarized dual problem lift to feasible objective values for the multiobjective dual. **Lemma 5**.: _For any \(\mu\in\mathbb{R}^{k}_{>}\), let_ \[w=\min_{\Lambda\in\mathbb{R}^{k\times m_{1}}_{\geqq}}\ \max_{x\in Q}\ \mu^{\top}Cx+\mu^{\top}\Lambda(b^{1}-A^{1}x),\] _be the optimal objective value for the scalarized dual problem. Then, there exists \(y\in\mathcal{Y}_{\mathrm{LD}}\) such that \(\mu^{\top}y=w\)._ Proof.: Fix \(\mu\in\mathbb{R}^{k}_{>}\). Let \(w=\min_{\Lambda\in\mathbb{R}^{k\times m_{1}}_{\leqq}}\max_{x\in Q}\ \mu^{\top}Cx+\mu^{\top}\Lambda(b^{1}-A^{1}x)\). Then, there exists \(\Lambda^{*}\in\mathbb{R}^{k\times m_{1}}_{\geqq}\) and \(x^{*}\in Q\) such that \(x^{*}\) is optimal for the inner maximization and \(w=\mu^{\top}y^{*}\) where \(y^{*}=Cx^{*}+\Lambda^{*}(b^{1}-A^{1}x^{*})\). Because \(x^{*}\) is optimal to the scalarized problem, it is a supported efficient solution for the relaxation \(\mathrm{LR}(\Lambda^{*})\). Hence, \(y^{*}\in\mathrm{Max}(\mathcal{Y}_{\mathrm{LR}(\Lambda^{*})})\subseteq \mathcal{Y}_{\mathrm{LD}}\). **Theorem 8**.: _Suppose (15) holds. Given an objective matrix \(C\) and a supporting vector \(\mu\in\mathbb{R}^{k}_{>}\), let \(x^{*}\) be a corresponding supported solution to (MOIP). If there exists \(y\in\mathrm{Min}(\mathcal{Y}_{\mathrm{LD}})\) that is comparable to \(Cx^{*}\), then the Lagrangian dual is strong at \(x^{*}\)._ Proof.: Let \(\mu\in\mathbb{R}^{k}_{>}\) and \(C\in\mathbb{R}^{k\times m}\) be arbitrary. If (15) holds, then we have by Theorem 2 that \[\max_{x\in Q}\{\mu^{\top}Cx\ |\ A^{1}x\leqq b^{1}\}=\min_{\Lambda\geqq }\max_{x\in Q}\mu^{\top}Cx+\mu^{\top}\Lambda(b^{1}-A^{1}x).\] Let \(x^{*}\) be an optimal solution to the scalarized problem \[\max_{x\in Q}\{\mu^{\top}Cx\ |\ A^{1}x\leqq b^{1}\},\] such that there exists \(\tilde{y}\in\mathrm{Min}(\mathcal{Y}_{\mathrm{LD}})\) that is comparable with \(Cx^{*}\). Corollary 1 implies that \(Cx^{*}\leqq\tilde{y}\). Lemma 5 implies that there is a \(y\in\mathcal{Y}_{\mathrm{LD}}\) such that \(\mu^{\top}Cx^{*}=\mu^{\top}y\). 
But then, because \[\mu^{\top}y=\min_{\Lambda\geqq 0}\max_{x\in Q}\mu^{\top}Cx+\mu^{\top}\Lambda(b^{1}-A^{1}x)=\max_{x\in\mathrm{conv}(Q)}\{\mu^{\top}Cx\mid A^{1}x\leqq b^{1}\},\] and \(\{\tilde{y}\}\preceqq\max_{x\in\mathrm{conv}(Q)}\{Cx\mid A^{1}x\leqq b^{1}\}\) by Theorem 4, it follows that \[\mu^{\top}\tilde{y}\leqq\max_{x\in\mathrm{conv}(Q)}\{\mu^{\top}Cx\mid A^{1}x\leqq b^{1}\}.\] Then, \[\mu^{\top}y=\mu^{\top}Cx^{*}\leqq\mu^{\top}\tilde{y}\leqq\mu^{\top}y.\] This implies that \(\mu^{\top}Cx^{*}=\mu^{\top}\tilde{y}\). So, \(\mu^{\top}(\tilde{y}-Cx^{*})=0\) and \(\tilde{y}-Cx^{*}\) has all nonnegative entries, so that \(\tilde{y}-Cx^{*}=0\). Thus, strong duality holds at \(x^{*}\).

In the single-objective case, every efficient solution is a supported solution and the extended real line is totally ordered. Then, Theorem 8 is the converse of Theorem 7, and together, they coincide with Theorem 2.

**Remark 6**.: _Theorems 7 and 8 do not address the strength of the Lagrangian dual at unsupported solutions. Example 2 illustrates that even if the feasible region satisfies condition (15) and the Lagrangian dual is strong at supported solutions, it may not be strong at an unsupported solution._

**Example 2.** We revisit the MOIP from Example 1. For this problem, we have \(\mathrm{Max}(\mathcal{Y}_{\mathrm{MOIP}})=\{(1,-\frac{1}{2})^{\top},(0,0)^{\top},(-\frac{1}{2},1)^{\top}\}\). For \(\mu=(\mu_{1},\mu_{2})^{\top}\in\mathbb{R}^{2}_{>}\), the scalarized IP is \[\max\ \Big{(}\mu_{1}-\frac{1}{2}\mu_{2}\Big{)}x_{1}+\Big{(}\mu_{2}-\frac{1}{2}\mu_{1}\Big{)}x_{2}\] \[\mathrm{s.t.}\ x_{1}+x_{2}\leqq 1,\] \[\ x_{1},x_{2}\in\{0,1\}.\] The optimal solution for this IP is \((1,0)^{\top}\) if \(\mu_{1}\geq\mu_{2}\) and \((0,1)^{\top}\) if \(\mu_{2}\geq\mu_{1}\). Thus, \((1,0)^{\top}\) and \((0,1)^{\top}\) are supported efficient solutions to the primal problem. Moreover, Example 1 shows that their corresponding objective vectors \((1,-\frac{1}{2})^{\top}\) and \((-\frac{1}{2},1)^{\top}\) are feasible objective values for the Lagrangian dual problem, so that strong duality holds for the supported efficient solutions. On the other hand, \((0,0)^{\top}\) is not a feasible objective value to the dual problem, so that strong duality does not hold for the unsupported solution \((0,0)^{\top}\), despite the fact that \[\text{conv}\big{(}\{0,1\}^{2}\cap\{x\in\mathbb{R}^{2}\mid x_{1}+x_{2}\leqq 1\}\big{)}=\text{conv}\big{(}\{0,1\}^{2}\big{)}\cap\{x\in\mathbb{R}^{2}\mid x_{1}+x_{2}\leqq 1\}.\qed\]

Theorems 7 and 8 analyze strong duality at the supported solutions of (MOIP). We now consider the unsupported solutions. Theorem 9 derives a sufficient condition under which (LD) is _not_ strong at unsupported solutions.

**Theorem 9**.: _Let \(x\) be an unsupported efficient solution to (MOIP).
Suppose there exists \(\Lambda^{*}\in\mathbb{R}^{k\times m_{1}}_{\geqq}\) and a supported solution \(x^{*}\) to (LR(\(\Lambda\))) such that_ * \(Cx\leq Cx^{*}+\Lambda^{*}(b^{1}-A^{1}x^{*})\)_, and_ * _for all_ \(\Lambda\in\mathbb{R}^{k\times m_{1}}_{\geqq}\)_,_ \(\{v\in\text{\rm Max}(\mathcal{Y}_{\text{\rm LR}(\Lambda)})\ |\ v\leqq Cx^{*}+\Lambda^{*}(b^{1}-A^{1}x^{*})\}\) _is either empty or consists only of supported objective values._ _Then, (LD) is not strong at \(x\)._ Proof.: Note that if \(y\in\text{\rm Min}(\mathcal{Y}_{\text{\rm LD}})\) satisfies \(y=Cx\), then \(y\in\text{\rm Max}(\mathcal{Y}_{\text{\rm LR}(\Lambda)})\cap\{v\in\mathbb{R}^{ k}\ |\ v\leqq Cx^{*}+\Lambda^{*}(b^{1}-A^{1}x^{*})\}\) for some \(\Lambda\in\mathbb{R}^{k\times m_{1}}_{\geqq}\). By hypothesis, \(y\) is a supported nondominated point to (LR(\(\Lambda\))) so that there exists a vector \(\mu\in\mathbb{R}^{k}_{>}\) such that \[\mu^{\top}y=\max_{\xi\in Q}\ \mu^{\top}C\xi+\mu^{\top}\Lambda(b^{1}-A^{1}\xi).\] Then, because \(y\) is an optimal objective value of the single-objective Lagrangian relaxation of \(\max\ \{\mu^{\top}C\xi\ |\ \xi\in\mathcal{X}\}\) we have by Proposition 3 \[\max_{\xi\in\mathcal{X}}\ \mu^{\top}C\xi\leq\mu^{\top}y.\] On the other hand, if \(x\) is an unsupported efficient solution to (MOIP), then \(\mu^{\top}Cx<\max_{\xi\in\mathcal{X}}\ \mu^{\top}C\xi\). So, \(Cx\neq y\). ### Numerical Illustration As discussed in Section 1.4, bound sets are a key component of search-based algorithms for solving MOIPs. In this section, we derive an upper bound set that approximates the Lagrangian dual and test its performance on two biobjective MOIPs. We compare this set with the upper bound set derived by Ehrgott and Gandibleux in [10]2, which amounts to the convex hull of supported nondominated points. As such, we present computational evidence that a Lagrangian dual-based approach can provide a tighter upper bound than that obtained via the convex hull relaxation. Remark 4 noted that the bound sets due to Lagrangian and convex hull relaxations are mutually incomparable in general, but this section illustrates that (an approximation of) the Lagrangian dual may present a computational advantage. Footnote 2: [10] presents a minimization problem, so the roles of upper and lower bound sets are reversed and accordingly adapted here. We test the quality of the bound sets on 100 randomly generated instances of two classes of biobjective problems: a linear assignment problem, and a knapsack problem, each with one additional randomly generated constraint that is subsequently dualized. The linear assignment problem consists of 16 binary variables. Problem parameters are taken from a binary linear assignment problem appearing in [46], included as an example file with the Julia-based MOIP solver vOptSolveGeneric [14]. The knapsack problem has 20 binary variables; coefficients for the objectives and constraints are generated randomly in each trial by sampling uniformly at random over the sets \(\{1,\ldots,15\}\) and \(\{1,\ldots,5\}\) respectively. For both problems, coefficients for the one additional constraint are randomly generated in each trial by sampling uniformly over \(\{1,\ldots,5\}\). All computations were performed in Julia using the vOptSolve package [14, 15, 16] with the GLPK optimizer [37]. 
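Before the approximation is described in detail, the following simplified sketch (Python; it stands in for, but is not, the Julia/vOptSolve code used in the experiments) illustrates the construction evaluated in this subsection: an upper bound set obtained as the nondominated points of the union of Lagrangian relaxation values over a grid of multipliers, shown here on the toy instance of Example 1.

```python
from itertools import product

# Simplified Python stand-in for the construction described in the next paragraph
# (the actual experiments use Julia/vOptSolve): the bound set obtained from the
# union of Lagrangian relaxation values over a grid of multipliers, on Example 1.
C = [(1.0, -0.5), (-0.5, 1.0)]

def dominates(q, p):
    """q weakly dominates p from above (componentwise q >= p, q != p)."""
    return all(a <= b for a, b in zip(p, q)) and p != q

def lr_nondominated(lam):
    pts = [tuple(C[i][0] * x[0] + C[i][1] * x[1] + lam[i] * (1 - x[0] - x[1])
                 for i in range(2))
           for x in product((0, 1), repeat=2)]
    return {p for p in pts if not any(dominates(q, p) for q in pts)}

grid = [i / 20.0 for i in range(21)]          # multipliers in [0, 1]^2
union = set()
for lam in product(grid, repeat=2):
    union |= lr_nondominated(lam)

# Bound set: points of the union that are nondominated from below.
U = sorted(p for p in union
           if not any(all(a <= b for a, b in zip(q, p)) and q != p for q in union))
print(len(U), U[:5])
# As discussed in the text, a finite grid can leave points in U that lie far from
# Max(Y_MOIP); compare Figure 4.
```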
To approximate the Lagrangian dual problem, we consider a finite set of Lagrangian relaxations parameterized by multipliers \(\Lambda\in\mathcal{M}\), constructed by dualizing the additional constraint for each problem instance. The values of \(\Lambda\) in \(\mathcal{M}\) are naively selected as equally spaced gridpoints in \([0,2.5]^{2}\). The set \(\mathcal{M}\) consists of \(51^{2}\) and \(26^{2}\) values of \(\Lambda\) for the linear assignment and knapsack problems respectively. Then, \(U=\operatorname{Min}\Big{(}\bigcup\limits_{\Lambda\in\mathcal{M}}\,\operatorname {Max}\big{(}\mathcal{Y}_{\operatorname{LR}(\Lambda)}\big{)}\,\Big{)}\) is an upper bound set that approximates the Lagrangian bound set \(\operatorname{Min}(\mathcal{Y}_{\operatorname{LD}})\) defined in (LD). To determine the quality of this bound, we used a scaled distance \[d(L,U)=\frac{1}{\gamma}\,\max_{\ell\in L}\,\min_{u\in U}\,\|u-\ell\|_{2},\] where \(L\) is a lower bound set consisting of local nadir points between supported solutions [10], and \(\gamma\) is the average of \(\|y\|_{2}\) for \(y\in L\cup U\). A smaller value of \(d\) corresponds to a better bound. The set \(L\) is the collection of points of the form \((\min(y_{1}^{1},y_{1}^{2}),\min(y_{2}^{1},y_{2}^{2}))^{T}\) where the pair \((y^{1},y^{2})\) ranges over adjacent supported solutions; it is computed as in [10]. We note that the metric \(d\) is a re-scaled version of the measure \(\mu_{1}\) used in [10], which is sensitive to outliers in the set \(L\cup U\). Because the set \(U\) derived from Lagrangian relaxations can contain points far from the nondominated set of (MOIP) (as in Figure 4), this could lead to misleading comparisons and spuriously improve the performance of \(U\). Therefore, we choose the modified scaling parameter \(\gamma\) which is less sensitive to far away points. We also report the number of instances for which the upper bound is strong (i.e. \(\text{Max}(\mathcal{Y}_{\text{MOIP}})\subseteq U\)). We compare the performance of \(U\) with \(\text{Max}(\mathcal{Y}_{\text{CH}})\) as computed in [10, Algorithm 1], which recursively solves a sequence of scalarized problems to obtain the supported nondominated points and their convex hull. Each single-objective problem is solved exactly--and not approximately as proposed in [10]--to derive the exact convex hull relaxation. Again, we measure the quality of this upper bound set using the measure \(d\) and the lower bound set \(L\) of local nadir points, and report the number of instances for which the bound is tight. The results are summarized in Table 1. We find that for both problems, the Lagrangian dual approximation outperforms the convex hull bound with respect to the metric \(d\) averaged across the 100 trials. However, the extent of improvement is not uniform across the two problem classes. Specifically, for the linear assignment problem, the standard deviation in \(d\) is notably smaller and the Lagrangian dual provides tight bounds in a much larger fraction of trials. In contrast, for the knapsack problem, the convex hull upper bound yields a lower standard deviation in \(d\) and provides tight bounds more frequently than the Lagrangian dual. Examples of the bounds for each problem are illustrated in Figure 4. The above numerical experiments are a proof of concept rather than a detailed computational study. The results are promising and point towards several avenues for future investigation. 
For instance, our approach naively enumerates over a range of Lagrange multiplier matrices \(\Lambda\), which limits our ability to scale to larger problems. Future research could explore more sophisticated techniques for searching over the space of multipliers. Additionally, in our experiment we use a lower bound set derived from information from the convex hull relaxation of our problem, and the question of similarly using information from the Lagrangian relaxation/dual to derive a lower bound set remains open. Although the scope of our experiments is limited, the results are encouraging and highlight the potential of the Lagrangian approach in deriving bound sets for MOIP solution methods.

\begin{table}
\begin{tabular}{c||c|c|c||c|c|c||} & \multicolumn{3}{c||}{Lagrangian Dual} & \multicolumn{3}{c||}{Convex Hull} \\ \hline Problem & Mean \(d\) & SD \(d\) & \# Strong & Mean \(d\) & SD \(d\) & \# Strong \\ \hline Linear Assignment & 3.930 & 0.587 & 59/100 & 4.801 & 0.956 & 9/100 \\ Knapsack & 5.869 & 1.980 & 3/100 & 6.384 & 1.803 & 9/100 \\ \end{tabular}
\end{table}
Table 1: Mean and standard deviation (SD) of the metric \(d\), as well as the number of problem instances (out of 100) where the upper bound sets are strong (# Strong), for bound sets computed by an approximation of the Lagrangian dual and by the convex hull of the supported nondominated points.

Figure 4: An example of upper bounds computed by an approximation of the Lagrangian dual and the convex hull of the supported nondominated points, as well as a lower bound set derived from the local nadir points of the supported efficient solutions. Note that the Lagrangian upper bound is tight for this instance of the linear assignment problem, but not for the knapsack problem. Due to the presence of unsupported solutions, the convex hull bound is not strong in either instance.

## 4 Superadditive Duality for Multiobjective Integer Programs

In this section, we develop a multiobjective counterpart of the superadditive dual of an IP. For \(\beta\in\mathbb{R}^{m}\), define \(\mathcal{X}(\beta)=\{x\in\mathbb{Z}_{\geq}^{n}\ |\ Ax\leqq\beta\}\) as the feasible region of (MOIP) parameterized by the right-hand-side \(\beta\). We define the value function of an MOIP as \[Z(\beta)=\max\ Cx\quad\text{s.t.}\ x\in\mathcal{X}(\beta). \tag{16}\] Unlike the single-objective case, where the value function maps onto the extended real line, the multiobjective value function \(Z\) is _multi-valued_ in general, and maps to the nondominated set of the MOIP. Moreover, the cardinality of the image set \(Z(\beta)\) is not known _a priori_. Therefore, the first step towards developing a superadditive dual is to extend the definitions of monotonicity and superadditivity to set-valued functions. Recall from (1) that \(\mathcal{E}\) is the collection of nonempty subsets of \(\mathbb{R}^{k}\) whose elements are mutually incomparable, along with \(\pm M_{\infty}\).
**Definition 5** (Nondecreasing Function).: _A function \(F:\mathbb{R}^{m}\to\mathcal{E}\) is nondecreasing (with respect to \(\underline{\preceq}\)) if for all \(\beta_{1}\leqq\beta_{2}\), \(F(\beta_{1})\,\underline{\preceq}\,F(\beta_{2})\)._ **Definition 6** (Superadditive Function).: _A function \(F:\mathbb{R}^{m}\to\mathcal{E}\) is superadditive (with respect to \(\underline{\preceq}\)) if for all \(\beta_{1},\beta_{2}\), \(F(\beta_{1})+F(\beta_{2})\,\underline{\preceq}\,F(\beta_{1}+\beta_{2})\)._ Here, \(F(\beta_{1})+F(\beta_{2})\) denotes the Minkowski sum of the two sets, defined as \(F(\beta_{1})+F(\beta_{2})=\{z_{1}+z_{2}\mid z_{1}\in F(\beta_{1}),z_{2}\in F( \beta_{2})\}\). If \(F:\mathbb{R}^{m}\to\mathcal{E}\) is superadditive and nondecreasing with respect to \(\underline{\preceq}\), then \(\sum_{j=1}^{\ell}F(\beta_{j})\,\underline{\preceq}\,F\Big{(}\sum_{j=1}^{\ell} \beta_{j}\Big{)}\) for any finite sum (by induction on \(\ell\)). It follows that for any positive integer \(\kappa\), we must have \(\kappa\cdot F(\beta)\,\underline{\preceq}\,F(\kappa\cdot\beta)\), where \(\kappa\cdot F(\beta)\) is the Minkowski sum of \(\kappa\) copies of \(F(\beta)\). **Proposition 12**.: _The value function \(Z\) is nondecreasing and superadditive with respect to \(\underline{\preceq}\)._ Proof.: We first show that \(Z\) is nondecreasing. Let \(\beta_{1}\leqq\beta_{2}\). Then \(\mathcal{X}(\beta_{1})\subseteq\mathcal{X}(\beta_{2})\). Therefore if \(Cx_{1}=z_{1}\in Z(\beta_{1})\), then there is a \(z_{2}\in Z(\beta_{2})\) such that \(z_{1}\leqq z_{2}\). Moreover, there is no \(z_{3}\in Z(\beta_{2})\) with \(z_{3}\leq z_{1}\), because that would imply \(z_{3}\leq z_{1}\leqq z_{2}\) which contradicts the nondominance of \(z_{3}\) for the MOIP with RHS \(\beta_{2}\). Next, we prove that \(Z\) is superadditive. If \(Cx_{1}=z_{1}\in Z(\beta_{1})\) and \(Cx_{2}=z_{2}\in Z(\beta_{2})\), then \(x_{1}+x_{2}\in\mathcal{X}(\beta_{1}+\beta_{2})\). Thus, there exists \(z_{3}\in Z(\beta_{1}+\beta_{2})\) such that \(z_{1}+z_{2}\leqq z_{3}\). Also, there is no \(z_{4}\in Z(\beta_{1}+\beta_{2})\) such that \(z_{4}\leq z_{1}+z_{2}\), because such a \(z_{4}\) would not be a nondominated point for the MOIP with RHS \(\beta_{1}+\beta_{2}\). Because the value function is set-valued, it is not immediate how to define an analog of single-objective superadditive duality. We present two variants, both of which coincide with the standard superadditive dual in the single-objective case. ### Set-valued Superadditive Dual Recall from Section 7 that the superadditive IP dual contains the constraints \(F(A_{j})\geq c_{j}\). Our first formulation (SDP) generalizes this constraint to a set-valued counterpart. Consider the following problem. \[\begin{split}\min&\ F(b)\\ \text{s.t.}&\ \{C_{j}\}\,\underline{\preceq}\,F(A_{j}) \text{for }j=1,\ldots,n,\\ &\ 0\in F(0),\\ &\ F:\mathbb{R}^{m}\to\mathcal{E}\text{\ \ \ \ \ superadditive and nondecreasing.}\end{split}\] (SDP) Let \(\mathcal{F}\) be the set of functions feasible to (SDP). Then, \(F(b)\in\mathcal{E}\) for all \(F\in\mathcal{F}\). We interpret the objective of (SDP) as finding the elements of the collection \(\{F(b)\mid F\in\mathcal{F}\}\subseteq\mathcal{E}\) that are nondominated from below with respect to the set-ordering "\(\preceq\)". This objective is well-defined because "\(\preceq\)" defines a partial order on \(\mathcal{E}\). Proposition 13 establishes weak duality for (SDP). 
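Before turning to the weak duality result, the following small enumeration (Python, on a hypothetical toy instance that is not taken from the paper) illustrates the two properties of the value function \(Z\) established in Proposition 12, using the componentwise comparisons spelled out in its proof.

```python
from itertools import product

# Hypothetical toy biobjective knapsack (not from the paper):
# max Cx s.t. 2*x1 + 1*x2 <= beta, x integer and nonnegative.
C = [(3, 1), (1, 2)]
W = (2, 1)

def nondominated(points):
    return {p for p in points
            if not any(all(a <= b for a, b in zip(p, q)) and p != q for q in points)}

def Z(beta):
    """Nondominated objective values of the parameterized MOIP (by enumeration)."""
    feas = [x for x in product(range(beta + 1), repeat=2)
            if W[0] * x[0] + W[1] * x[1] <= beta]
    return nondominated({tuple(sum(C[i][j] * x[j] for j in range(2)) for i in range(2))
                         for x in feas})

def leq(p, q):          # componentwise <=
    return all(a <= b for a, b in zip(p, q))

def set_preceq(S, T):   # the comparison used in the proof of Proposition 12
    return (all(any(leq(s, t) for t in T) for s in S)
            and not any(leq(t, s) and t != s for t in T for s in S))

b1, b2 = 3, 4
print(set_preceq(Z(b1), Z(b2)))                           # nondecreasing: True
minkowski = {tuple(a + b for a, b in zip(p, q)) for p in Z(b1) for q in Z(b2)}
print(set_preceq(minkowski, Z(b1 + b2)))                  # superadditive: True
```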
**Proposition 13** (Weak Duality for (SDP)).: _If \(x\in\mathcal{X}(b)\) and \(F\) is feasible to (SDP), then \(\{Cx\}\preceqq F(b)\)._

Proof.: Let \(x\in\mathcal{X}(b)\). As \(F\) is nondecreasing and \(Ax\leqq b\), we have \(F(Ax)\preceqq F(b)\). Now, consider the set \[S=\bigg{\{}\sum\limits_{j=1}^{n}z_{j}x_{j}\Big{|}z_{j}\in F(A_{j})\bigg{\}}.\] We claim that \(S\preceqq F(b)\). Because \(x_{j}\) is a nonnegative integer, we can express \(z_{j}x_{j}\) as \(\sum\limits_{i=1}^{x_{j}}z_{j}\). (If \(x_{j}=0\), the sum is empty and equal to \(0\in\mathbb{R}^{k}\).) Then, \[S=\bigg{\{}\sum\limits_{j=1}^{n}\sum\limits_{i=1}^{x_{j}}z_{j}\Big{|}z_{j}\in F(A_{j})\bigg{\}}\subseteq\sum\limits_{j=1}^{n}\sum\limits_{i=1}^{x_{j}}F(A_{j})=\sum\limits_{j=1}^{n}x_{j}F(A_{j}).\] Note that \(Ax=\sum\limits_{j=1}^{n}A_{j}x_{j}\). Then, the superadditivity of \(F\) implies that \(\sum\limits_{j=1}^{n}x_{j}F(A_{j})\preceqq F(Ax)\). By Lemma 2, \(S\preceqq F(Ax)\). By transitivity of the order, we have \(S\preceqq F(b)\). It follows by Lemma 3 that \(\operatorname{Max}(S)\preceqq F(b)\). Next, we will show that \(\{Cx\}\preceqq\operatorname{Max}(S)\). Because \(F(A_{j})\succeqq\{C_{j}\}\) for all \(j=1,\ldots,n\), there is a \(z_{j}\in F(A_{j})\) such that \(C_{j}\leqq z_{j}\). Then, \(w=\sum\limits_{j=1}^{n}z_{j}x_{j}\in S\) is such that \(Cx\leqq w\). Therefore, there exists \(\tilde{w}\in\operatorname{Max}(S)\) such that \(\tilde{w}\geqq w\geqq Cx\). Further, suppose that \(z_{j}\in F(A_{j})\) are such that \(\sum\nolimits_{j=1}^{n}z_{j}x_{j}\leq\sum\nolimits_{j=1}^{n}C_{j}x_{j}\). We have already shown that there exists \(\tilde{w}\in S\) such that \(Cx\leqq\tilde{w}\). Therefore, \[\sum\limits_{j=1}^{n}z_{j}x_{j}\leq\sum\limits_{j=1}^{n}C_{j}x_{j}=Cx\leqq\tilde{w},\] so that \(\sum\nolimits_{j=1}^{n}z_{j}x_{j}\not\in\operatorname{Max}(S)\). Thus, \(\{Cx\}\preceqq\operatorname{Max}(S)\preceqq F(b)\), which implies \(\{Cx\}\preceqq F(b)\).

We say that the dual problem (SDP) is strong at an efficient solution \(x^{*}\) of (MOIP) if there is a function \(F\) feasible to (SDP) such that \(Cx^{*}\in F(b)\). Theorems 10 and 11 guarantee that (SDP) is strong at the supported primal solutions.
**Theorem 10** (Strong Duality for (Sdp)).: _If \(x^{*}\) is a supported efficient solution for (MOIP), then there is a function \(F\) feasible to (SDP) such that \(Cx^{*}\in F(b)\)._ Proof.: Let \(x^{*}\) be a supported efficient solution to (MOIP) and let \(\mu\in\mathbb{R}^{k}_{>}\) be a supporting vector such that \(x^{*}\) is optimal to the scalarized problem \[\max\ \mu^{\top}Cx\quad\text{s.t.}\quad Ax\leqq b,\ x\in\mathbb{Z}^{n}_{\geq}.\] Let \(z^{*}=Cx^{*}\). We need to show that there is a function \(F\) feasible for (SDP) such that \(z^{*}\in F(b)\). By strong superadditive duality for single-objective IPs (Theorem 3), there exists a nondecreasing superadditive function \(f_{\mu}:\mathbb{R}^{m}\to\mathbb{R}\) such that \(f_{\mu}(0)=0\), \(f_{\mu}(A_{j})\geq\mu^{\top}C_{j}\) for all \(j\), and \(f_{\mu}(b)=\mu^{\top}Cx^{*}=\mu^{\top}z^{*}.\) We use \(f_{\mu}\) to define a function \(F_{\mu}:\mathbb{R}^{m}\to\mathcal{E}\) as follows: \[F_{\mu}(v)=\{w\in\mathbb{R}^{k}\ |\ \mu^{\top}w=f_{\mu}(v)\}.\] We first show that \(F_{\mu}\) is well-defined. Let \(\hat{\mu}=\frac{1}{\|\mu\|^{2}}\mu\). Then, \(F_{\mu}(v)\neq\emptyset\) for all \(v\) because \((f_{\mu}(v)\hat{\mu})\in F_{\mu}(v)\). Also, \(0\in F_{\mu}(0)\) because \(f_{\mu}(0)=0\). For arbitrary \(v\), let \(w_{1},w_{2}\in F_{\mu}(v)\) with \(w_{1}\neq w_{2}\). Then, \(w_{1}\not\leq w_{2}\) and \(w_{1}\not\geq w_{2}\) because \(\mu^{\top}(w_{1}-w_{2})=0\). Thus, \(F_{\mu}(v)\in\mathcal{E}\). Next, we show that \(F_{\mu}\) is feasible to (SDP). Let \(v_{1},v_{2}\in\mathbb{R}^{m}\), \(v_{1}\leqq v_{2}\). Then, \(f_{\mu}(v_{1})\leqq f_{\mu}(v_{2})\) as \(f_{\mu}\) is nondecreasing. We need to show that \(F_{\mu}(v_{1})\mathop{\preceqq}F_{\mu}(v_{2})\). Let \(w_{1}\in F_{\mu}(v_{1})\). Define \(w_{2}=w_{1}+(f_{\mu}(v_{2})-f_{\mu}(v_{1}))\hat{\mu}\) so that \[\mu^{\top}w_{2}=\mu^{\top}\left[w_{1}\!+\!(f_{\mu}(v_{2})-f_{\mu}(v_{1}))\hat {\mu}\right]=\mu^{\top}w_{1}+(f_{\mu}(v_{2})-\mu^{\top}w_{1})\frac{\mu^{\top} \mu}{\|\mu\|^{2}}=f_{\mu}(v_{2}).\] Thus, \(w_{2}\in F_{\mu}(v_{2})\). Because \(\hat{\mu}\in\mathbb{R}^{k}_{>}\) and \((f_{\mu}(v_{2})-f_{\mu}(v_{1}))\geqq 0\), \(w_{1}\leqq w_{2}\). Also, for any \(w_{3}\in F_{\mu}(v_{2})\), \(\mu^{\top}w_{1}=f_{\mu}(v_{1})\leqq f_{\mu}(v_{2})=\mu^{\top}w_{3}\), which implies that \(w_{1}\not\geq w_{3}\). Thus, \(F_{\mu}(v_{1})\mathop{\preceqq}F_{\mu}(v_{2})\) and \(F_{\mu}\) is nondecreasing. Second, for any \(v_{1},v_{2}\in\mathbb{R}^{m}\), \(f_{\mu}(v_{1})+f_{\mu}(v_{2})\leqq f_{\mu}(v_{1}+v_{2})\) because \(f\) is superadditive. If \(w_{1}\in F_{\mu}(v_{1}),w_{2}\in F_{\mu}(v_{2})\), and \(w_{3}\in F_{\mu}(v_{1}+v_{2})\), then \(\mu^{\top}w_{3}\geqq\mu^{\top}(w_{1}+w_{2})\) so that \(w_{3}\not\leq w_{1}+w_{2}\). Moreover, by choosing \(w_{3}=(w_{1}+w_{2})+\left[f_{\mu}(v_{1}+v_{2})-f_{\mu}(v_{1})-f_{\mu}(v_{2}) \right]\hat{\mu}\), we have \(w_{3}\in F_{\mu}(v_{1}+v_{2})\) with \(w_{3}\geqq w_{1}+w_{2}\). Thus, \(F_{\mu}(v_{1})+F_{\mu}(v_{2})\mathop{\preceqq}F_{\mu}(v_{1}+v_{2})\), and \(F_{\mu}\) is superadditive. Finally, note that \(f_{\mu}(A_{j})\geqq\mu^{\top}C_{j}\). If \(w\in F_{\mu}(A_{j})\), then \(\mu^{\top}w\geqq\mu^{\top}C_{j}\) so that \(w\not\leq C_{j}\). Moreover, for \(w=C_{j}+(f_{\mu}(A_{j})-\mu^{\top}C_{j})\hat{\mu}\), we have \(w\in F_{\mu}(A_{j})\) and \(w\geqq C_{j}\). Thus, \(\{C_{j}\}\mathop{\preceqq}F_{\mu}(A_{j})\). Thus, \(F_{\mu}\) is feasible to (SDP). 
Because \(\mu^{\top}z^{*}=f_{\mu}(b)\), \(z^{*}\in F_{\mu}(b)\) Theorem 10 does not address the behavior of (SDP) if the primal is infeasible. As in single-objective IPs [49], the dual is unbounded in that case. The result is presented later in Corollary 4, as the proof uses properties derived in Section 4.2. Theorem 10 implies that for each primal supported solution \(x\), there is a dual solution \(G\) feasible to (SDP) such that \(G(b)\) contains \(Cx\). However, the theorem makes no statement about the number of feasible functions needed to obtain all such points. That is, Theorem 10 alone does not guarantee the existence of a single function \(F^{*}\) feasible to (SDP) such that \(Cx^{*}\in F^{*}(b)\) for all supported efficient solutions \(x^{*}\) to MOIP. Theorem 11 shows that such a function always exists. To prove this, we first establish that finitely many scalarizations suffice to recover all supported efficient solutions of (MOIP). **Lemma 6**.: _There exist scalarizing vectors \(\mu_{1},\mu_{2},\ldots,\mu_{\ell}\in\mathbb{R}^{k}_{>}\) such that every supported efficient solution \(x^{*}\) of (MOIP) is an optimal solution of the scalarized problem_ \[\max\ \mu_{i}^{\top}Cx\quad\text{s.t.}\quad Ax\leqq b,\ x\in\mathbb{Z}^{n}_{+},\] (MOIP \[{}_{\mu_{i}}\] ) _for some \(i=1,\ldots,\ell\)._ Proof.: By Proposition 9, the supported efficient solutions of (MOIP) are precisely the integral efficient solutions to (CH). Because (CH) is an MOLP, there are finitely many scalarizing vectors \(\mu_{1},\mu_{2},\ldots,\mu_{\ell}\in\mathbb{R}^{k}_{>}\) such that each efficient solution of (CH) is an optimal solution to the LP \[\max\ \mu_{i}^{\top}Cx\quad\text{s.t.}\quad x\in\operatorname{conv}(\mathcal{X}),\] (CH \[{}_{y_{i}}\] ) for some \(i=1,\ldots,\ell\)[34, Corollary 4.3.3]. So, if \(x\) is a supported efficient solution of (MOIP), then \(x\) is an integral optimal solution to (CH\({}_{y_{i}}\)) for some \(i=1,\ldots,\ell\) and therefore an optimal solution to (MOIP\({}_{\mu_{i}}\)). **Theorem 11**.: _There exists a function \(F^{*}\) feasible to (SDP) such that if \(x^{*}\) is a supported efficient solution of (MOIP), then \(Cx^{*}\in F^{*}(b)\)._ Proof.: Lemma 6 implies that there are finitely many vectors \(\mu_{1},\mu_{2},\ldots,\mu_{\ell}\in\mathbb{R}^{k}_{>}\) such that every supported efficient solution of (MOIP) is an optimal solution to the scalarized IP (MOIP\({}_{\mu_{i}}\)) for some \(i=1,\ldots,\ell\). For every \(i=1,\ldots\ell\), define \(\hat{\mu_{i}}=\frac{\mu_{i}}{\|\mu_{i}\|^{2}}\). Then, there is a superadditive nondecreasing function \(f_{\mu_{i}}:\mathbb{R}^{m}\to\mathbb{R}\) such that \(f_{\mu_{i}}(A_{j})\geq\mu_{i}^{\top}C_{j}\) for all \(j=1,\ldots,n\), and \(\mu_{i}^{\top}Cx^{*}=f_{\mu_{i}}(b)\) for all \(x^{*}\) optimal to \((\textsc{MOIP}_{\mu_{i}})\). For \(i=1,\ldots,\ell\), consider the functions \(F_{\mu_{i}}:\mathbb{R}^{m}\to\mathcal{E}\) given by \[F_{\mu_{i}}(v)=\{w\in\mathbb{R}^{k}\ |\ \mu_{i}^{\top}w=f_{\mu_{i}}(v)\}\] As in the proof of Theorem 10, each \(F_{\mu_{i}}(v)\) is feasible to (SDP). Define the set \(\mathcal{S}(v)=\cup_{i=1}^{\ell}F_{\mu_{i}}(v)\) and consider the function \[F^{*}(v)=\text{Min }(\mathcal{S}(v))=\text{Min}\Big{(}\bigcup_{i=1}^{\ell}F_ {\mu_{i}}(v)\Big{)}.\] We claim that \(F^{*}\) is feasible to (SDP). For each \(v\), the set \(\mathcal{S}(v)\) is a finite union of hyperplanes in \(\mathbb{R}^{k}\). Therefore, either \(F^{*}(v)=-M_{\infty}\) or \(\mathcal{S}(v)\) has points that are nondominated from below. 
In either case, \(F^{*}(v)\in\mathcal{E}\). Next, we show that \(0\in F^{*}(0)\). For each \(i=1,\ldots,\ell\), we have \(0\in F_{\mu_{i}}(0)\) and there is no \(z\in F_{\mu_{i}}(0)\) with \(z\leq 0\) because \(F_{\mu_{i}}(0)\in\mathcal{E}\). In particular, \(0\) is not dominated by any element of \(\mathcal{S}(0)\). Thus, \(0\) is a nondominated point of \(\mathcal{S}(0)\), that is, \(0\in F^{*}(0)\).

Because \(F_{\mu_{i}}(A_{j})\succeqq\{C_{j}\}\) for all \(i=1,\ldots,\ell\), there is no \(z\in\mathcal{S}(A_{j})\) such that \(z\leq C_{j}\). In particular, because \(F^{*}(A_{j})\subseteq\mathcal{S}(A_{j})\), there is no \(z\in F^{*}(A_{j})\) such that \(z\leq C_{j}\). Next, we show that for every \(j\), there is a \(z\in F^{*}(A_{j})\) such that \(z\geqq C_{j}\). Consider \(\mathcal{Z}_{j}=\{z\in\mathcal{S}(A_{j})\ |\ C_{j}\leqq z\}\). Note that \(\mathcal{Z}_{j}\cap F_{\mu_{i}}(A_{j})\neq\emptyset\) for all \(i,j\) because each \(F_{\mu_{i}}\) is feasible to (SDP). Let \(z\in\operatorname{Min}(\mathcal{Z}_{j})\), and we claim that \(z\in F^{*}(A_{j})\). Suppose if possible that there exists \(w\in\mathcal{S}(A_{j})\) such that \(w\leq z\). Then there exists \(i\in\{1,\ldots,\ell\}\) such that \(w\in F_{\mu_{i}}(A_{j})\). Then, \(\tilde{z}=z+(f_{\mu_{i}}(A_{j})-\mu_{i}^{\top}z)\hat{\mu}_{i}\) satisfies \(\tilde{z}\in F_{\mu_{i}}(A_{j})\) and \(z\leqq\tilde{z}\) because \(z\in\operatorname{Min}(\mathcal{Z}_{j})\) and \(\tilde{z}\) is comparable with \(z\). Then, the following three statements hold: \(\mu_{i}^{\top}z\leqq\mu_{i}^{\top}\tilde{z}\), \(\mu_{i}^{\top}w=\mu_{i}^{\top}\tilde{z}\), and \(\mu_{i}^{\top}w<\mu_{i}^{\top}z\), which is a contradiction. Thus, there is no such \(w\), and we have \(z\in F^{*}(A_{j})\). So, \(\{C_{j}\}\preceqq F^{*}(A_{j})\).

To see that \(F^{*}(v)\) is nondecreasing, let \(v\leqq\tilde{v}\). Let \(w\in F^{*}(v)\) and \(\tilde{w}\in F^{*}(\tilde{v})\), and suppose without loss of generality that \(\tilde{w}\in F_{\mu_{1}}(\tilde{v})\). Because \(F_{\mu_{1}}\) is nondecreasing, \(z\not\geq\tilde{w}\) for all \(z\in F_{\mu_{1}}(v)\). If \(w\in F_{\mu_{i}}(v)\) for \(i\neq 1\), then \[z=w+\left(f_{\mu_{1}}(v)-\mu_{1}^{\top}w\right)\hat{\mu_{1}}\] is an element of \(F_{\mu_{1}}(v)\). If \(w\in F^{*}(v)\), then \(w\leqq z\) because \(\hat{\mu_{1}}\in\mathbb{R}_{>}^{k}\) and therefore \(w\) and \(z\) are comparable. It follows that \(\tilde{w}\not\leqq w\) because \(w\leqq z\) and \(\tilde{w}\not\leqq z\) as \(z\in F_{\mu_{1}}(v)\). We now show that if \(w\in F^{*}(v)\), then there is a \(\tilde{w}\in F^{*}(\tilde{v})\) such that \(\tilde{w}\geqq w\). For each \(i=1,\ldots,\ell\), define \[\tilde{w}_{i}=w+\left(f_{\mu_{i}}(\tilde{v})-\mu_{i}^{\top}w\right)\hat{\mu}_{i},\] and note that \(\tilde{w}_{i}\in F_{\mu_{i}}(\tilde{v})\). Moreover, because each \(f_{\mu_{i}}\) is nondecreasing and because \(w\in F^{*}(v)\), we have \(w\leqq\tilde{w}_{i}\) for all \(i\). Let \(\tilde{w}\in\operatorname{Min}(\{\tilde{w}_{i}\mid i=1,\ldots,\ell\})\). Suppose there is some \(i\in\{1,\ldots,\ell\}\) and \(z\in F_{\mu_{i}}(\tilde{v})\) such that \(z\leq\tilde{w}\). Then, if \(\tilde{z}=\tilde{w}+\left(f_{\mu_{i}}(\tilde{v})-\mu_{i}^{\top}\tilde{w}\right)\hat{\mu}_{i}\), it would follow that \(\mu_{i}^{\top}z<\mu_{i}^{\top}\tilde{w}\), \(\mu_{i}^{\top}z=\mu_{i}^{\top}\tilde{z}\), and \(\mu_{i}^{\top}\tilde{w}\leqq\mu_{i}^{\top}\tilde{z}\), which is a contradiction. So, no such \(z\) exists and we have \(\tilde{w}\in F^{*}(\tilde{v})\).
Thus, \(F^{*}(v)\preceqq F^{*}(\tilde{v})\).

Next, we prove that \(F^{*}\) is superadditive. Let \(w_{1}\in F^{*}(v_{1})\) and \(w_{2}\in F^{*}(v_{2})\), and \(\tilde{w}\in F^{*}(v_{1}+v_{2})\). Suppose without loss of generality that \(\tilde{w}\in F_{\mu_{1}}(v_{1}+v_{2})\). Then, for every \(z_{1}\in F_{\mu_{1}}(v_{1})\) and \(z_{2}\in F_{\mu_{1}}(v_{2})\), \(\tilde{w}\not\leqq z_{1}+z_{2}\) by the superadditivity of \(F_{\mu_{1}}\). In particular, setting \[z_{1}=w_{1}+\left(f_{\mu_{1}}(v_{1})-\mu_{1}^{\top}w_{1}\right)\hat{\mu_{1}}\quad\text{ and }\quad z_{2}=w_{2}+\left(f_{\mu_{1}}(v_{2})-\mu_{1}^{\top}w_{2}\right)\hat{\mu_{1}},\] we have \(z_{1}\in F_{\mu_{1}}(v_{1})\), \(z_{2}\in F_{\mu_{1}}(v_{2})\) and \(w_{1}+w_{2}\leqq z_{1}+z_{2}\). This in turn implies that \(\tilde{w}\not\leqq w_{1}+w_{2}\) because \(\tilde{w}\not\leqq z_{1}+z_{2}\). Let \(w_{1}\in F^{*}(v_{1}),w_{2}\in F^{*}(v_{2})\), and let \[\tilde{w}\in\operatorname{Min}\,\left\{(w_{1}+w_{2})+\left(f_{\mu_{i}}(v_{1}+v_{2})-\mu_{i}^{\top}(w_{1}+w_{2})\right)\hat{\mu_{i}}\mid 1\leq i\leq\ell\right\}.\] Then, for each \(i\), there is a \(z_{2}\in F_{\mu_{i}}(v_{1}+v_{2})\) such that \(\tilde{w}\leqq z_{2}\). Therefore, there is no \(z\in F_{\mu_{i}}(v_{1}+v_{2})\) with \(z\leq\tilde{w}\) because if there were, then \(\mu_{i}^{\top}z<\mu_{i}^{\top}\tilde{w}\), \(\mu_{i}^{\top}\tilde{w}\leqq\mu_{i}^{\top}z_{2}\), and \(\mu_{i}^{\top}z=\mu_{i}^{\top}z_{2}\), which cannot happen. So, \(\tilde{w}\in F^{*}(v_{1}+v_{2})\). Therefore, \(F^{*}(v_{1})+F^{*}(v_{2})\preceqq F^{*}(v_{1}+v_{2})\). So, \(F^{*}\) is feasible to (SDP). For each supported primal efficient solution \(x^{*}\), there exists a \(1\leq i\leq\ell\) such that \(Cx^{*}\in F_{\mu_{i}}(b)\). Because \(\{Cx^{*}\}\preceqq F^{*}(b)\) by Proposition 13, it follows that \(Cx^{*}\in F^{*}(b)\).

Thus, the superadditive dual (SDP) is a strong dual to (MOIP) at supported primal efficient solutions. However, the set-valued functions it employs are difficult to characterize, which limits the immediate algorithmic utility of this dual in providing a bound set for search-based solution methods. Several researchers have developed methods to approximate the single-objective IP value function, and future research in this area could extend those methods to approximate the set-valued MOIP value function. In this paper, we consider a restricted dual that only includes vector-valued functions.

### Vector-Valued Superadditive Dual

We formulate another dual problem to (MOIP) by restricting the feasible region in (SDP) to include only vector-valued functions. In other words, we consider only those functions \(F\) feasible to (SDP) for which \(F(v)\) is a singleton set in \(\mathbb{R}^{k}\) for all \(v\in\mathbb{R}^{m}\).
In this case, we denote \(F\) as a \(k\)-tuple with components \(f_{i}\), \(i=1,\ldots,k\), where \(f_{i}:\mathbb{R}^{m}\to\mathbb{R}\), and have the following dual formulation \[\begin{split}\min& F(b)=(f_{1}(b),\ldots,f_{k}(b))^{\top} \\ \text{s.t.}& f_{i}(A_{j})\geqq c_{ij}\text{ for all }i,j,\\ & f_{i}(0)=0\text{ for all }i,\\ & f_{i}:\mathbb{R}^{m}\to\mathbb{R}\text{ nondecreasing and superadditive for all }i.\end{split}\] (VSDP) **Proposition 14** (Weak Duality for (V SDP)).: _If \(F\) is feasible to (VSDP), then \(Cx\leqq F(b)\) for all \(x\in\mathcal{X}(b)\)._ Proof.: For any \(x\in\mathcal{X}(b)\), \[Cx=\begin{pmatrix}\sum_{j=1}^{n}c_{1,j}x_{j}\\ \vdots\\ \sum_{j=1}^{n}c_{k,j}x_{j}\end{pmatrix}\leqq\begin{pmatrix}\sum_{j=1}^{n}f_{1} (A_{j})x_{j}\\ \vdots\\ \sum_{j=1}^{n}f_{k}(A_{j})x_{j}\end{pmatrix}\leqq\begin{pmatrix}f_{1}(Ax)\\ \vdots\\ f_{k}(Ax)\end{pmatrix}\leqq F(b),\] where the first inequality follows from the \(f_{i}(A_{j})\geqq c_{i,j}\) constraint, the second inequality is due to \(f_{i}\) being superadditive, and the third inequality follows from \(f_{i}\) being nondecreasing. As in (SDP), the minimization in (VSDP) is with respect to the partial order \(\preceqq\). However, if \(\{F_{\alpha}(b)\ |\ \alpha\in\mathcal{I}\}\) is the set of nondominated points of (VSDP) for some indexing set \(\mathcal{I}\), then \(F_{\alpha_{1}}(b)\) and \(F_{\alpha_{2}}(b)\) are incomparable under \(\preceqq\) for all \(\alpha_{1},\alpha_{2}\in\mathcal{I}\), \(\alpha_{1}\neq\alpha_{2}\). Because \(F_{\alpha}(b)\) is a singleton set for each \(\alpha\in\mathcal{I}\), we can identify \(F_{\alpha}(b)\) with its sole element \(z_{\alpha}\). Under this identification, the nondominated set \(\{F_{\alpha}(b)\ |\ \alpha\in\mathcal{I}\}\) is equivalent to \(\{z_{\alpha}\ |\ \alpha\in\mathcal{I}\}\in\mathcal{E}\). Moreover, \[\{z_{\alpha}\ |\ \alpha\in\mathcal{I}\}=\operatorname{Min}\Big{(}\bigcup_{F\text{ feasible to (VSDP)}}F(b)\Big{)}.\] In this way, (VSDP) is equivalent to solving a multiobjective problem whose objective values are elements of \(\mathbb{R}^{k}\). For the remainder of this section, we view (VSDP) as a problem in \(\mathbb{R}^{k}\) and make the identification of \(F(b)\) with its sole element for a singleton set \(F(b)\). If an objective of (MOIP) is unbounded, then the corresponding objective of (VSDP) is infeasible by Proposition 4, so that (VSDP) is infeasible. Proposition 15 shows that the upper bound due to (VSDP) is tighter than any other singleton upper bound set for the MOIP. **Proposition 15**.: _Let \(y\in\mathbb{R}^{k}\) such that \(Cx\leqq y\) for all \(x\in\mathcal{X}(b)\). Then, there exists a feasible solution \(F\) of (VSDP) such that \(F(b)\leqq y\). Moreover, if (VSDP) has efficient solutions, then there is an efficient solution \(F^{*}\) of (VSDP) such that \(F^{*}(b)\leqq y\)._ Proof.: If \(Cx\leqq y\) for all \(x\in\mathcal{X}(b)\), then for each \(1\leq i\leq k\), \(c_{i}x\leq y_{i}\) is a valid inequality. This implies that there is a superadditive nondecreasing function \(f_{i}:\mathbb{R}^{m}\to\mathbb{R}\) such that for each \(1\leq j\leq n\), \(f_{i}(A_{j})\geq c_{i,j}\), \(f_{i}(0)=0\), and \(f_{i}(b)\leq y_{i}\). Then, \(F=(f_{1},\ldots,f_{k})^{\top}\) is feasible to (VSDP). So, if (VSDP) has efficient solutions, then there exists \(F^{*}\) efficient to (VSDP) with \(F^{*}(b)\leqq F(b)\leqq y\). Proposition 15 has several corollaries that describe the nondominated points of (VSDP). 
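The following enumeration sketch (Python, hypothetical toy data not taken from the paper) illustrates the singleton bound of Propositions 14 and 15: the componentwise-tightest singleton upper bound on the feasible objective values is the ideal point obtained by maximizing each objective separately, and it is attained by a feasible solution only when the primal has a unique nondominated point (cf. the corollaries and Remark 7 below).

```python
from itertools import product

# Hypothetical toy instance (not from the paper): max Cx s.t. 2*x1 + x2 <= 3,
# x integer and nonnegative. Illustrates Propositions 14-15: the componentwise
# tightest singleton upper bound on {Cx} is the ideal point, obtained by
# maximizing each objective separately.
C = [(3, 1), (1, 2)]
feasible = [x for x in product(range(4), repeat=2) if 2 * x[0] + x[1] <= 3]
images = [tuple(sum(C[i][j] * x[j] for j in range(2)) for i in range(2))
          for x in feasible]

ideal = tuple(max(y[i] for y in images) for i in range(2))
print(ideal)                                                          # (4, 6)
print(all(all(y[i] <= ideal[i] for i in range(2)) for y in images))   # True
print(ideal in images)   # False: the bound is not attained here, since this toy
                         # problem has more than one nondominated point.
```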
**Corollary 3**.: _If (MOIP) is infeasible, then (VSDP) is unbounded._ Proof.: If (MOIP) is infeasible, then for each \(y\in\mathbb{R}^{k}\), \(Cx\leqq y\) for all \(x\in\mathcal{X}(b)\). So, by Proposition 15, for each \(y\in\mathbb{R}^{k}\) there is a feasible objective value \(F(b)\) to (SDP) with \(F(b)\leqq y\). **Corollary 4**.: _If (MOIP) is infeasible, then (SDP) is unbounded._ Proof.: (SDP) is a relaxation of (VSDP). Corollary 3 therefore implies that if (MOIP) is infeasible, then for each \(y\in\mathbb{R}^{k}\) there is a feasible objective value \(F(b)\) to (SDP) with \(F(b)\leqq\{y\}\). **Corollary 5**.: _If \(G^{*}(b)\) is a nondominated point of (VSDP), then \(G^{*}(b)\) is the unique nondominated point of (VSDP)._ Proof.: Let \(G^{*}(b)\) be a nondominated point of (VSDP). This implies that (VSDP) is feasible and each single-objective IP max \(\{c_{i}x\mid x\in\mathcal{X}(b)\}\) has a finite optimal value. Let \(y\in\mathbb{R}^{k}\) be the vector with components \(y_{i}=\max\ \{c_{i}x\mid x\in\mathcal{X}(b)\}\). Proposition 14 then implies that for each \(i\in\{1,2,\ldots,k\}\), the \(i^{th}\) objective of \(G^{*}(b)\) is bounded below by \(y_{i}\). That is, \(y\leqq G^{*}(b)\). On the other hand, \(Cx\leqq y\) holds for all \(x\in\mathcal{X}(b)\) so that Proposition 15 implies the existence of a feasible function \(F^{*}(b)\) to (VSDP) such that \(F^{*}(b)\leqq y\). Then, \(F^{*}(b)\leqq G^{*}(b)\) by the transitivity of \(\leqq\). Because \(F^{*}(b)\) and \(G^{*}(b)\) are both nondominated points of (VSDP), this inequality must hold with equality. Because \(G^{*}(b)\) was an arbitrary nondominated point, this implies that (VSDP) has a unique nondominated point. **Remark 7**.: _Proposition 15 and Corollary 5 imply that (VSDP) computes the ideal point \(y^{I}\) of (MOIP)._ One approach to proving strong duality of (VSDP) is to show that Proposition 15 holds for every \(z\in Z(b)\). This, however, may not be true as an element of \(Z(b)\) may be incomparable with some \(Cx\) in the value-set of the MOIP. This is illustrated in Example 1. **Example 1**.: Consider again the MOIP from Example 1. \[\max\ \begin{bmatrix}1&-\frac{1}{2}\\ -\frac{1}{2}&1\end{bmatrix}\begin{bmatrix}x_{1}\\ x_{2}\end{bmatrix}\quad\text{s.t. }x_{1}+x_{2}\leqq 1,\ x_{1},x_{2}\in\mathbb{Z}_{+},\] This problem has nondominated points \(\left\{(1,-0.5)^{\top},(0,0)^{\top},(-0.5,1)^{\top}\right\}=Z(1)\). But for any \(z\in Z(1)\), there is a feasible \(x\) such that \(Cx\not\leqq z\), where \(C\) is the objective matrix. For example, if \(z=(1,-0.5)^{\top}\), then \(x=(0,0)^{\top}\) is feasible but \(Cx\not\leqq z\). So, there is no \(z\in Z(1)\) such that \(Cx\leqq z\) holds for all \(x\in\mathcal{X}\). In the special case when (MOIP) has a unique nondominated point, all elements of the set of feasible objective values must be comparable with the sole element in the singleton set \(Z(b)\). This is established in Lemma 7. We slightly abuse the notation \(Z(b)\) to also denote the unique nondominated point contained in the set \(Z(b)\). **Lemma 7**.: _If (MOIP) has a unique nondominated point, then \(Cx\leqq Z(b)\) for all \(x\in\mathcal{X}(b)\)._ Proof.: Let \(x^{*}\) be an efficient solution of (MOIP), so that \(Cx^{*}=Z(b)\). Suppose for the sake of a contradiction that there was an \(x\in\mathbb{Z}_{+}^{n}\) such that \(Ax\leqq b\) but \(Cx\not\leqq Z(b)\). 
Then, there would be an objective \(i\in\{1,\ldots,k\}\) such that \(c_{i}x>c_{i}x^{*}\), which contradicts the hypothesis that \(Cx^{*}\) is the unique nondominated point. The statement \(Cx\leqq Z(b)\) may be ill-defined if (MOIP) does not have a unique nondominated point. So, the direct converse of Lemma 7 is not well defined, but a modified converse holds. **Lemma 8**.: _If there exists \(z\in Z(b)\) such that \(Cx\leqq z\) for all \(x\in\mathcal{X}(b)\), then (MOIP) has a unique nondominated point._ Proof.: Let \(z_{1},z_{2}\in Z(b)\) such that \(Cx\leqq z_{1}\) is a valid inequality. Because \(z_{1},z_{2}\) are feasible objective values, there are \(x_{1},x_{2}\in\mathcal{X}(b)\) such that \(z_{1}=Cx_{1}\) and \(z_{2}=Cx_{2}\). If \(Cx\leqq z_{1}\) is a valid inequality, then \(z_{1}\geqq Cx_{2}=z_{2}\). On the other hand, if \(z_{2}\in Z(b)\), then \(z_{1}\ngeqq z_{2}\), which implies that \(z_{1}=z_{2}\). Because \(z_{2}\in Z(b)\) was arbitrary, this implies that \(Z(b)\) has only one element. Therefore, (MOIP) has a unique nondominated point. Theorem 12 completely characterizes the strength of (VSDP). **Theorem 12**.: _The dual problem (VSDP) is strong if and only if the primal problem (MOIP) has a unique nondominated point._ Proof.: Suppose that (VSDP) is strong. Then, there exists an efficient solution \(F^{*}\) to (VSDP) such that \(F^{*}(b)\in Z(b)\). Then, Proposition 14 implies that \(Cx\leqq F^{*}(b)\) is a valid inequality. Because \(F^{*}(b)\in Z(b)\), Lemma 8 implies that (MOIP) has a unique nondominated point. Conversely suppose that (MOIP) has a unique nondominated point \(Z(b)=(z_{1}(b),\ldots,z_{k}(b))^{\top}\). Then, \(Cx\leqq Z(b)\) is a valid inequality by Lemma 7. Proposition 15 implies that there is a feasible solution \(F(b)\) to (VSDP) with \(F(b)\leqq Z(b)\). Proposition 14 implies that this inequality must hold with equality. Moreover, \(F(b)\) must be an efficient solution because otherwise there would be a feasible solution \(G(b)\) with \(G(b)\leqq F(b)\leqq Z(b)\), a contradiction with Proposition 14. Then, \(F(b)\) is an efficient solution to (VSDP) with \(F(b)=Z(b)\). In the single-objective case, every (MOIP) has a unique nondominated point, so that Theorem 12 coincides with Theorem 3. #### 4.2.1 MOLP reformulation of (VSDP) The superadditive dual of a single-objective IP can be formulated as an (exponentially large) LP in the special case where all entries of \(A\) and \(b\) are nonnegative integers. The subsequent discussion shows that (VSDP) can be cast as an MOLP in a similar manner. If (VSDP) has a nondominated point, it must be unique. Then, there exists a dual efficient function \(F^{*}=(f_{1}^{*},\ldots,f_{k}^{*})^{\top}\) such that each \(f_{i}^{*}\) is an optimal solution to the problem \[\min\{f_{i}(b)\ |\ f_{i}(0)=0,f_{i}(A_{j})\geq c_{i,j},f_{i}\text{ superadditive and nondecreasing}\},\] which is the superadditive dual for the single-objective IP \(\max\{c_{i}x\mid x\in\mathcal{X}(b)\}\). If \(A\) and \(b\) have all nonnegative integral entries, then \(f_{i}^{*}(b)\) is the optimal value of the following LP (see [48] for details). \[\min f_{i}(b)\] (SDMOLP) s.t. \[f_{i}(A_{j})\geqq c_{i,j}\] \[f_{i}(d_{1})+f_{i}(d_{2})-f_{i}(d_{1}+d_{2})\leqq 0\quad\text{ for all }0\leqq d_{1},d_{2},(d_{1}+d_{2})\leqq b,\] \[f_{i}(0)=0,f_{i}(d)\geqq 0.\] This leads to the following MOLP reformulation of (VSDP) for instances in which all entries of \(A\) and \(b\) are nonnegative integers. \[\min F(b)=(f_{1}(b),\ldots,f_{k}(b))^{\top}\] (SDMOLP) s.t. 
\[f_{i}(A_{j})\geqq c_{i,j}\] for all \[i,j\] , \[f_{i}(d_{1})+f_{i}(d_{2})-f_{i}(d_{1}+d_{2})\leqq 0\quad\text{ for all }0\leqq d_{1},d_{2},(d_{1}+d_{2})\leqq b,\] \[f_{i}(0)=0,f_{i}(d)\geqq 0.\] Because (VSDP) (and therefore (SDMOLP)) has a unique nondominated point, a single scalarization suffices to recover an efficient solution to (SDMOLP). Thus, the vector-valued dual of this problem is an MOLP with a unique nondominated point. However, this dual is not strong unless the MOIP itself has a unique nondominated point. **Remark 8**.: _From Remark 7 and the above discussion, it follows that the ideal point \(y^{I}\) for an MOIP with nonnegative constraint and objective coefficients can be obtained by solving a single (large) LP._ Example 2 illustrates the MOLP reformulation on a bi-objective knapsack problem. Note that this MOLP has a unique nondominated point. Because the primal problem does not have a unique nondominated point, the vector-valued dual is not strong. Nonetheless, its (unique) nondominated point provides an upper bound on \(\operatorname{Max}(\mathcal{Y}_{\text{MOIP}})\). **Example 2**.: Consider the MOIP \[\max\ \begin{bmatrix}2&1\\ 1&2\end{bmatrix}\begin{bmatrix}x_{1}\\ x_{2}\end{bmatrix}\quad\text{s.t.}\quad x_{1}+x_{2}\leqq 2,\ x_{1},x_{2}\in\mathbb{Z}_{+}.\] Following the steps described above, the MOLP formulation of the vector-valued superadditive dual for this problem is \[\min F(2)=(f_{1}(2),f_{2}(2))^{\top}\] s.t. \[(f_{1}(1),f_{2}(1))^{\top} \geqq(2,1)^{\top},\] \[(f_{1}(1),f_{2}(1))^{\top} \geqq(1,2)^{\top},\] \[2(f_{1}(1),f_{2}(1))^{\top}-(f_{1}(2),f_{2}(2))^{\top} \leqq(0,0)^{\top}.\] This MOLP has a unique nondominated point \(F^{*}(2)=(4,4)^{\top}\). On the other hand, the original MOIP has nondominated points \(\big{\{}(4,2)^{\top},(3,3)^{\top},(2,4)^{\top}\big{\}}\). ## 5 Conclusion In this paper, we analyzed relaxations and developed a duality framework for MOIPs by leveraging results from single-objective integer programming. In particular, we formulated the Lagrangian relaxation of an MOIP and compared it with the continuous and convex hull relaxations. The convex hull relaxation is tight at supported efficient solutions of the MOIP but not at unsupported solutions. We showed via an example that a Lagrangian relaxation can provide a tighter upper bound at unsupported nondominated points. We presented an MOIP Lagrangian dual that generalizes the single-objective counterpart, relying on the idea of finding the best upper bound over all Lagrangian relaxations. The behavior of this dual at supported solutions, including conditions for strong duality, mimics those derived in the single-objective case; the analysis is aided by scalarization techniques. The properties of the dual are harder to analyze at unsupported solutions. This is due in part to the complicated geometry of the dual feasible set \(\mathcal{Y}_{\mathrm{LD}}\). Every point in the primal nondominated set has an upper bound in the dual feasible set \(\mathcal{Y}_{\mathrm{LD}}\), but the non-convexity of \(\mathcal{Y}_{\mathrm{LD}}\) implies that it also contains elements that are incomparable with the primal nondominated points. In particular, \(\mathrm{Min}(\mathcal{Y}_{\mathrm{LD}})\) does not necessarily provide the "best" upper bound on the primal nondominated points and the Lagrangian relaxations themselves may be more informative in this respect. 
We presented computational evidence to illustrate that a naive approximation to the Lagrangian dual bound set can provide a tighter upper bound than one obtained via convex hull relaxation. We also introduced two superadditive duals, namely, a set-valued formulation and a vector-valued variant. The set-valued problem considers set-valued functions that are non-decreasing and superadditive, inspired by the properties of the MOIP value function. This dual is strong at supported efficient solutions of the primal. The vector-valued dual is constructed by restricting the set-valued dual to functions from \(\mathbb{R}^{m}\) to \(\mathbb{R}^{k}\); it is strong if and only if the primal has a unique nondominated point. Given any upper bound \(z\) on the set of feasible primal objective values, there exists a vector-valued dual feasible solution that provides a tighter upper bound. In the special case where the constraint parameters are nonnegative integers, the vector-valued dual can be formulated as an MOLP. Notably, the vector-valued superadditive dual provides an alternate method for computing the ideal point of an MOIP via a single (large) LP in case of nonnegative problem parameters. Our computational experiments have promising results, but our approach of enumerating a set of Lagrange multipliers over an equispaced grid does not scale well to larger problems. Future work in this area could focus on algorithmic aspects of the Lagrangian dual, especially with a view on techniques for selecting Lagrange multipliers. The IP value function is hard to compute even for the single-objective IP in the general case, but several researchers have developed algorithms for value function approximation for structured IPs. Extension of these methods to the multiobjective superadditive dual offers another promising avenue for future research.
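As a concrete, if informal, check of Remark 8 and Example 2 above, the superadditive dual LP of that bi-objective knapsack instance can be assembled and solved directly. The sketch below is our illustration rather than part of the original exposition: it uses scipy and recovers the ideal point \((4,4)^{\top}\). Because the dual has a unique nondominated point, minimizing the sum \(f_{1}(2)+f_{2}(2)\) is a valid scalarization; the variable ordering is an assumption made only for this example.

```python
# Minimal numerical check of Remark 8 / Example 2 (an illustration, not part of
# the original paper): solve the superadditive dual LP of the bi-objective
# knapsack max [2 1; 1 2] x s.t. x1 + x2 <= 2, x in Z_+^2.
# Decision variables (ordering assumed here): [f1(1), f1(2), f2(1), f2(2)].
from scipy.optimize import linprog

c = [0, 1, 0, 1]                 # minimize f1(2) + f2(2); a valid scalarization
A_ub = [
    [-1,  0,  0,  0],            # f1(1) >= c_{1,1} = 2
    [-1,  0,  0,  0],            # f1(1) >= c_{1,2} = 1
    [ 0,  0, -1,  0],            # f2(1) >= c_{2,1} = 1
    [ 0,  0, -1,  0],            # f2(1) >= c_{2,2} = 2
    [ 2, -1,  0,  0],            # superadditivity: 2 f1(1) - f1(2) <= 0
    [ 0,  0,  2, -1],            # superadditivity: 2 f2(1) - f2(2) <= 0
]
b_ub = [-2, -1, -1, -2, 0, 0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
print(res.x[1], res.x[3])        # 4.0 4.0 -> the ideal point y^I of the MOIP
```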
多目的整数計画問題(MOIP)は、線形制約と整数変数を持つ集合上で、複数の目的関数を同時に最適化します。この論文では、MOIPsに対する連続、凸包、ラグランジュ緩和を提示し、それら間の関係を検討します。凸包緩和は、サポートされた解、すなわち重み付き和によるスカラー化で得られる解において緊密です。サポートされていない解では、凸包緩和は緊密ではありません。ラグランジュ緩和は、より緊密な境界を与える可能性があります。ラグランジュ緩和を用いて、MOIPsのラグランジュ双対を定義します。これは弱双対性を持ちますが、特定の条件でプライマル可行領域内の解がサポートされる場合、強になります。数値実験により、ラグランジュ双対によって得られる境界は、凸包緩和による境界よりも緊密であることが示されています。
2309.10082
Constructor algorithms for building unconventional computers able to solve NP-complete problems
Nature often builds physical structures tailored for specific information processing tasks with computations encoded using diverse phenomena. These can sometimes outperform typical general-purpose computers. However, describing the construction and function of these unconventional computers is often challenging. Here, we address this by introducing constructor algorithms in the context of a robotic wire machine that can be programmed to build networks of connected wires in response to a problem and then act upon these to efficiently carry out a desired computation. We show how this approach can be used to solve the NP-complete Subset Sum Problem (SSP) and provide information about the number of solutions through changes in the voltages and currents measured across these networks. This work provides a foundation for building unconventional computers that encode information purely in the lengths and connections of electrically conductive wires. It also demonstrates the power of computing paradigms beyond digital logic and opens avenues to more fully harness the inherent computational capabilities of diverse physical, chemical and biological substrates.
Tony McCaffrey, Thomas E. Gorochowski, Lee Spector
2023-09-18T18:49:53
http://arxiv.org/abs/2309.10082v1
# Constructor algorithms for building unconventional computers able to solve NP-complete problems ###### Abstract Nature often builds physical structures tailored for specific information processing tasks with computations encoded using diverse phenomena. These can sometimes outperform typical general-purpose computers. However, describing the construction and function of these unconventional computers is often challenging. Here, we address this by introducing constructor algorithms in the context of a robotic wire machine that can be programmed to build networks of connected wires in response to a problem and then act upon these to efficiently carry out a desired computation. We show how this approach can be used to solve the NP-complete Subset Sum Problem (SSP) and provide information about the number of solutions through changes in the voltages and currents measured across these networks. This work provides a foundation for building unconventional computers that encode information purely in the lengths and connections of electrically conductive wires. It also demonstrates the power of computing paradigms beyond digital logic and opens avenues to more fully harness the inherent computational capabilities of diverse physical, chemical and biological substrates. ## Introduction General-purpose electronic computers built on the foundation of digital logic are ubiquitous in everyday life. Their highly-programmable nature makes them ideal for a wide variety of tasks. However, this flexibility often comes at a cost -- their efficiency in performing specific types of computation [1]. In contrast, biological systems display a far wider diversity in the ways that information is encoded and processed [2, 3, 4, 5, 6]. It is common for the physical structures of biological systems to embed intricate computational functionalities in terms of the connections, concentrations and dynamics of the diverse elements making up the system. This highlights an opportunity for the development of more efficient unconventional computers that move beyond digital logic and whose physical construction and function is tailored to the problem at hand [7, 8, 9, 10, 11, 12, 13, 14, 15, 16], as well as new theoretical frameworks for both digital and analog computation [17, 18, 19]. With this in mind, the challenge becomes describing how such computers might be built and how they might function to carry out useful computations. Construct theory is an emerging approach for thinking about and describing fundamental physics based on distinguishing between transformations of a system's state that are possible (i.e., in theory could be made to occur) and those that are not [20]. The ability to include counterfactual statements (i.e., those transformations that are impossible) has opened up productive avenues for reconciling many previously disconnected areas of physics. In particular, there has been success in using constructor theory to integrate information theory and computation [21]. These developments relied on information being considered as a physically instantiated entity and ensuring that the features and transformations of the physical system encoding information meets a set of requirements. These include the physical medium being able to support at least two distinct states, with the state able to be permuted and copied. 
Using this work as a foundation, constructor theory has also been used to show that evolution by natural selection can emerge from fundamental physics as long as the underlying system is able to physically encode digital information [22]. Here, we take inspiration from this theory to develop what we call _constructor algorithms_ that specify: (i) how a physical system should be built through allowable physical transformations to create a specialized computer for a given problem; and (ii) how this constructed artefact can be used to compute solutions to a given query. By placing physical embodiment at the forefront of our algorithms, we aim to open up opportunities to consider more diverse information processing substrates and processes, such as those found in chemistry and biology [10]. As a proof-of-concept, we consider a hypothetical robotic wire machine (RWM) that enables us to specify how a network of electrically conductive wires should be connected together in a network based on a set of input numbers and problem type. The wire network assembled encodes an unconventional computer that uses the lengths of wires, and flows and resistances of electricity through these, to efficiently solve the desired problem. We describe constructor algorithms to create unconventional computers capable of solving an NP-complete problem. Interestingly, once a particular wire network is assembled it can be used to efficiently compute solutions in \(O(1)\) time, by exploiting the highly parallel nature of electrical currents. While our focus here is on demonstrating algorithms for building unconventional computers based on networks of conductive wires, we end by discussing how such constructor algorithms might translate to other physical, chemical and biological substrates to create living computers that could potentially exploit self-assembly to build such networks [9, 15, 23]. ## Results ### Robotic wire machines as a basis for unconventional computers The basis of our unconventional computers are RWMs (**Figure 1a**). These consist of: (i) a board of parallel wires (referred to as rows) uniformly spaced at unit lengths, e.g., 1 cm; (ii) a Turing-complete programmable robot that can add additional wires connecting pairs of rows on the board in a desired pattern; and (iii) a measurement device that can introduce a fixed electrical potential across two of the rows on the board and measure any induced current. Our constructor algorithms take as input a multiset of positive integers, and from these generate a set of commands for the RWM to create a wire network able to compute solutions for a set of specified problems. To execute a computation, the user provides a query value (integer) to be tested. This value sets the length of the two connection points for the measurement device, which are then attached to relevant rows in the wire network. A fixed voltage is then applied and any electrical current measured (**Figure 1b**). If no current is observed, the wire network does not contain a path connecting those two rows. However, if a current is measured, then at least one path exists with each encoding a solution to our problem. Furthermore, we assume that the resistance of the wires connecting separated rows is significantly higher than the wires making up each row. This ensures that any electrical current that is able to pass between rows is dominated by the connecting wires and not the length of wire making up the rows themselves. 
Given that we know the length that each path must take (i.e., the query input) and can measure the unit length resistance of the connecting wires, from Ohm's law it is possible to calculate the number of solutions, \(s\), using: \[s(q,m)=\frac{q\,R_{u}\,m}{V}, \tag{1}\] where \(q\) is the query integer, \(m\) is the measured current in amps at the voltage source, \(V\) is the constant applied voltage in volts, and \(R_{u}\) is the resistance of a unit length of connecting wire in ohms. It should be noted that once a wire network has been built, it can be reused as is to compute answers to many different queries -- the wire network physically embodies a set of possible computations and we use electricity to probe the solution space in parallel. ### Encoding and efficient querying of information in wire networks Before describing constructor algorithms for a specific problem, it is important to understand how the resultant wire networks encode information and the function performed when querying the network. In our wire networks, positive integer numbers are encoded by the length of connecting wires. For example, a wire 5 units long encodes the number 5. Furthermore, paths of connecting wires, where the end of one wire is connected to the start of another via a shared row, encode the addition of those wire lengths. So for example, if one wire of length 2 connects rows 0 and 2, and another wire of length 3 connects rows 2 and 5, then a measurement with a query length of 5 across rows 0 to 5 will result in a flow of current and establish that the addition \(2+3=5\) is possible (**Figure 1b**, single path). All of the algorithms we describe in this work build trees of connected wires that encode different combinations of additions for a set of input integers (wire lengths). The power of the wire networks comes from their ability to effectively test all of these separate additions simultaneously once the network is built, and for the number of different additions (i.e., routes through the network) to be immediately computed via the parallel nature of electronics (**Figure 1b**, multiple paths). ### Constructor algorithm for the Subset Sum Problem The Subset Sum Problem (SSP) is an NP-Complete problem [24] where a decision is made on whether a multiset of integers contains a subset that adds up to a target (query) integer. In this paper, we focus on a multiset containing only positive integers and a positive integer target. This version of the SSP is also NP-Complete because the problem's complexity is based on the number of subsets of the multiset, which is independent of the sign of the integers involved. We will also initially focus on basic sets without repeated elements to simplify our description. However, the same algorithm can be easily extended to multisets if redundant elements are each treated as unique in their own right. Later on we show how to optimize this algorithm for multisets if only the presence of at least one solution needs to be computed. To demonstrate how our constructor algorithm works for SSP, consider the input set \(X=\{4,1,5,2\}\). The powerset \(\mathbb{P}(X)\) contains all the possible subsets of \(X\) that could form a solution to an SSP query and thus may require summation during a computation. Rather than carry out these sums sequentially, our constructor algorithm creates a wire network with paths embodying all possible summations for each non-empty subset.
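For intuition, the computation that such a network embodies can also be written out explicitly. The sketch below is only an illustration (the values of \(V\) and \(R_{u}\) are arbitrary assumptions): it enumerates the non-empty subsets of \(X=\{4,1,5,2\}\), simulates the current through the corresponding parallel paths, and recovers the solution count via Eq. (1).

```python
# Illustrative brute-force counterpart of the wire network (not the RWM itself).
# V and R_u are arbitrary assumed values; any consistent choice works in Eq. (1).
from itertools import combinations

X = [4, 1, 5, 2]        # input set
q = 6                   # query (target) integer
V = 5.0                 # applied voltage in volts (assumed)
R_u = 100.0             # resistance per unit length of connecting wire in ohms (assumed)

# Each non-empty subset corresponds to one end-to-end path of wires.
subsets = [c for r in range(1, len(X) + 1) for c in combinations(X, r)]
solutions = [s for s in subsets if sum(s) == q]          # e.g. (4, 2) and (1, 5)

# Every solution path has resistance q * R_u; s such paths conduct in parallel,
# so the simulated current at the source is m = s * V / (q * R_u).
s_true = len(solutions)
m = s_true * V / (q * R_u)

# Eq. (1) inverts the measurement back to the number of solutions.
s_recovered = q * R_u * m / V if m > 0 else 0
print(solutions, round(s_recovered))                     # [(4, 2), (1, 5)] 2
```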
By placing the wires making up the element in each subset from end-to-end on the board, we can use the RWM's measurement device to query a target integer \(q\) by connecting it to row 0 and row \(q\) and measuring the induced current. As explained in the previous section, this will simultaneously test in parallel all possible sums of non-empty subsets and allow for the number of unique solutions to be ascertained. To build the wire network for an input set \(X\), our constructor algorithm performs the following steps (**Figure 1(a)**): 1. For each element in \(X\) generate a wire the length of its value and connect it to row 0. 2. Remove an element, \(x_{f}\), from the remaining elements in set \(X\). For every location in the current network that corresponds to element \(x_{f}\), generate a set of wires of lengths corresponding to the remaining values in \(X\), and connect them starting at these locations. 3. If \(|X|>1\) then repeat Step 2, else end as network is complete. #### Trade-offs in number of wires and measurements used for a computation For a starting set of size \(n=|X|\), our standard SSP constructor algorithm will generate a network containing \(n+\sum_{i=1}^{n-1}2^{i-1}(n-i)=2^{n}-1\) wires. However, because our wire networks are physical objects, it may sometimes be useful to reduce the number of wires present to reduce costs and improve the scalability of the approach. Interestingly, this can be achieved if multiple measurements of a fixed wire network are possible. This then enables a trade-off between the number of measurements and size of the wire network produced. To understand why this is the case, it is helpful to recognise the self-similarity of sub-networks that are present in the wire networks built using the standard SSP algorithm (see shaded regions in **Figure 2b**). It is clear that two large and identical sub-networks exist, which calculate the SSP for the set excluding the first element 5. Therefore, if two measurements are possible for the same query, the entire sub-network that tests the set \(\{2,1,3,4\}\) that starts at row 0 can be removed and an additional measurement made starting at the row where the copy of the sub-network begins (row 5 in this example). This allows a network containing only \(1+[(n-1)+\sum_{i=1}^{(n-1)-1}2^{i-1}((n-1)-i)]=1+2^{n-1}-1=2^{n-1}\) wires (the main sub-network shown in the darkest grey in **Figure 2b**, plus a wire for the first element in the set) to be used to compute the SSP if two measurements are possible (**Figure 2c**). A further benefit of the wire networks is their ability to be easily reused. Performing multiple queries for the same starting set merely requires a change to the query length used for the measurement of electrical currents. Therefore, while sequentially building the network is relatively slow, querying for both one measurement and two measurement situations is always fast: \(O(1)\). #### Growing wire networks to accommodate new input set values Once a wire network is built, if new elements are added to the input they could be accommodated by rebuilding the network from scratch. This would impart a large time cost and much of the new network would be identical to the original one. An alternative approach is to allow for the wire networks to "grow" via defined transformations. 
In this case, the addition of a new element in the input set can be easily integrated into an existing network by taking all occurrences of the last element added to the tree and adding two new wires to the start row and end row of each last element occurrence with lengths that equal to the new element's value (**Figure 3**). This transformation effectively adds branches to all possible sub-solutions that include the new element. #### Optimizing wire networks for multisets So far, we have treated the repeated elements of a multiset as though they were unique elements in their own right. This ensures the networks created contain all possible summations as unique routes through the wire network. However, if we are only interested in whether a solution exists and not concerned with the number of unique solutions, then simpler wire networks exist to carry out this computation. This stems from the fact that repeated elements lead to redundancy in the wire network where paths incorporate the elements with identical values at different steps of the paths created (because each element is treated as unique by our original algorithm). This is evident by looking at the structure of a wire network built for an input with an element value that is repeated (**Figure 4a**). At a particular level (i.e., row) of the network, there is repetition across sub-networks, allowing some to be removed without affecting the output of the computation. To generate an optimised network it is possible to retrospectively analyse the network and remove identical sub-networks at each level. However, a simpler approach is to not produce these sub-networks when the wire network is first being produced. To do this we can adapt our previous algorithm to filter out these unnecessary elements as they arise. The constructor algorithm for a wire network where only the presence of at least one solution is required and two measurements are possible is given by: 1. Remove an element, \(x_{f}\), from \(X\), create a wire the length of its value and connect it to row 0. 2. For the remaining elements in \(X\), generate a single wire for each value (i.e., two or more elements with the same value only leads to a single wire) and connect them to the row at the end of the wire added in Step 1. 3. Remove another element, \(x_{f}\), from the remaining elements in set \(X\). For every wire in the current network that corresponds to the element \(x_{f}\), generate a single wire for each value (i.e., two or more elements with the same value only leads to a single wire) and connect them all starting at these locations. 4. If \(|X|>1\) then repeat Step 3, else end as network is complete. An example of the steps involved in this algorithm are shown in **Figure 4b**. ## Discussion We typically consider information and computation as abstract concepts. However, the success of constructor theory to unify these ideas in theoretical physics demonstrates that real-world constraints need not hamper our ability to express these ideas [20, 21, 22]. In this work, we have introduced the concept of constructor algorithms as a means to describe the physical transformations that take place when building an unconventional computing device able to solve the NP-complete Subset Sum Problem. Our focus has been to demonstrate how this approach offers a framework for thinking about unconventional computing by constraining our computers to being built as simple wire networks via a programmable RWM. 
Our constructor algorithms are able to tailor the RWM's actions (i.e., physical transformations of the wire network) and thus the structure of the resultant network for a specific problem at hand. Once built, this network can then be used to immediately compute the answer to any number of queries by exploiting the lengths of paths through this network and the parallel nature of electrical currents. We chose simple wire networks as our computing medium to highlight that an alternative physical encoding of information (e.g., numbers encoded as the lengths of wires) can lead to new ways that a computer can be physically built and parallel computations performed. In this case, the wire networks encode the full solution space of the problem and are able to exploit changes in electrical current to provide the number of unique solutions at no extra cost. Being physical entities, as our wire networks grow in complexity, we are likely to hit practical limits for their use. We already assume that the wires for each row have significantly less electrical resistance than those wires connecting rows, which ensures the major determinant of the electrical resistance comes from the connecting wires. This avoids issues where the physical placement of wires on a single row generates large horizontal distances that would affect our ability to determine if multiple paths existed. However, there will be physical limits as the size of the input multiset grows causing this assumption to break. It should be noted, that even if this assumption does not hold, we are still able to test whether at least one solution exists by monitoring any flow of a current to solve the classical SSP. In addition to wire placement, the physical size of the networks built will also cause problems as their complexity grows. To compensate for this, we can scale down the unit length and thus the amount of wire needed, as well as the diameter of each wire. Such changes would also require enhancement to the RWM to allow for the precise handling of tiny components. Richard Feynman is famous for stating that "there's plenty of room at the bottom"[25], but controlling what happens at the micro- or nano-scales is hugely challenging. One potential way of achieving such a goal is to make use of molecular self-assembly (e.g., DNA origami[23]), allowing a wire network to effectively build itself. This parallel assembly would not only scale-down the size of our networks, but also significantly improve the assembly time and reduce the overall complexity of creating and executing computations using this approach. In summary, all life senses, responds, and adapts to its environment. Thinking about whether algorithms might exist for describing how organisms develop to implement personalised computational devices throughout their lives may be an important step towards new approaches to unconventional computers that evolve with use. We believe that constructor algorithms provide a framework to support this goal, placing the physical embodiment of computation centre-stage and enabling input data to affect how a computer's internal structure is built, and thus, the flow of information when computations occur. This goes beyond the sequential state-based architectures of general-purpose electronic computers we all use today, offering inspiration for new ways of harnessing diverse physical, chemical and biological processes for our growing computational needs. ## Acknowledgements L.S. was supported by U.S.A. National Science Foundation Grant No. 2117377. 
T.E.G. was supported by a Royal Society University Research Fellowship grant UF160357, and a Turing Fellowship from The Alan Turing Institute under the EPSRC grant EP/N510129/1. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the National Science Foundation or other funders. The funders also had no role in study design, data collection and analysis, and decision to publish or preparation of the manuscript. ## Author Contributions T.M. conceived the project, developed the constructor algorithms, and proved the mathematical results. L.S. and T.E.G. provided guidance and support during development of the research. All authors contributed to the writing and editing of the manuscript. ## Conflicts of Interest None declared. ## Figures

Figure 2: Constructor algorithm for the Subset Sum Problem. (**a**) Steps involved in creating the wire network for the input set \(X=\{5,2,1,3,4\}\). At each step, the element removed from the set is shown below the wire network in red, as well as the remaining elements in the input set. Blue wires denote new wires added at that step, red wires denote those wires to which new links are being made. (**b**) The final wire network can be queried using a single measurement starting at row 0. (**c**) Because of the self-similar nature of the wire network (similar sub-networks highlighted in shades of grey), if two measurements are possible then the wire network can be simplified to include only one of the largest repeated sub-networks (shown in dark grey). In this case, though, a measurement needs to be taken at row 0 and at the row where the top of the repeated sub-network is placed.

Figure 3: Adapting wire networks to new input elements. In this example, an initial wire network configuration requiring two measurements for the input set \(X=\{4,1,5,2\}\) is updated to include a new element, 3. A transformation is used to replace all of the last inserted elements (highlighted in red) with sets of wires that have wires corresponding to the new element placed at the start row and end row of the last inserted elements.

Figure 4: Optimising wire networks for multisets and repeated values if only the presence of at least one solution is required. (**a**) A wire network for the input multiset \(X=\{5,1,1,3,1\}\) containing repeated elements is shown using our original constructor algorithm. Sub-networks generated for each element in the input set are highlighted in grey. Repeated sub-networks (red) that occur within the largest sub-network (blue) can be removed without affecting the correctness of the output if only the presence of at least one solution is required. This optimisation can be continued to deeper levels within the network until no further levels exist. (**b**) Optimised constructor algorithm for multisets where only the presence of at least one solution is required. Wires being added to the network are highlighted in blue, current selected wires are highlighted in red, and elements in bold in the set \(X\) below each wire network correspond to those for which wires are created (repeated values only have a single wire created). Computations of these networks require 2 measurements.
自然は、特定の情報処理タスクに最適化された物理構造を構築し、その計算は多様な現象を用いて暗号化します。これらのコンピュータは、従来の汎用コンピュータを上回ることはあるものの、これらの非伝統的なコンピュータの構築と機能を説明することはしばしば困難です。そこで、ここでは、ロボットワイヤーマシンを対象としたコンストラクタアルゴリズムを導入することで、問題に応じて接続されたワイヤーのネットワークを構築するようにプログラムできるというアプローチを紹介します。このネットワークを処理して、効率的に計算を実行するよう設計しました。このアプローチを用いて、NP完全なサブセットSUM問題(SSP)を解き、これらのネットワークで測定される電圧と電流の変化を通じて、解の数を示すことができます。この研究は、電気伝導性ワイヤーの長さと接続に情報を暗号化する、非伝統的なコンピュータの構築の基礎を提供します。また、デジタル論理を超えた
2309.08815
Hybrid Quantum-Classical Multilevel Approach for Maximum Cuts on Graphs
Combinatorial optimization is one of the fields where near term quantum devices are being utilized with hybrid quantum-classical algorithms to demonstrate potentially practical applications of quantum computing. One of the most well studied problems in combinatorial optimization is the Max-Cut problem. The problem is also highly relevant to quantum and other types of "post Moore" architectures due to its similarity with the Ising model and other reasons. In this paper, we introduce a scalable hybrid multilevel approach to solve large instances of Max-Cut using both classical only solvers and quantum approximate optimization algorithm (QAOA). We compare the results of our solver to existing state of the art large-scale Max-Cut solvers. We demonstrate excellent performance of both classical and hybrid quantum-classical approaches and show that using QAOA within our framework is comparable to classical approaches.
Anthony Angone, Xioayuan Liu, Ruslan Shaydulin, Ilya Safro
2023-09-15T23:54:46
http://arxiv.org/abs/2309.08815v1
# Hybrid Quantum-Classical Multilevel Approach for Maximum Cuts on Graphs ###### Abstract Combinatorial optimization is one of the fields where near term quantum devices are being utilized with hybrid quantum-classical algorithms to demonstrate potentially practical applications of quantum computing. One of the most well studied problems in combinatorial optimization is the Max-Cut problem. The problem is also highly relevant to quantum and other types of "post Moore" architectures due to its similarity with the Ising model and other reasons. In this paper, we introduce a scalable hybrid multilevel approach to solve large instances of Max-Cut using both classical only solvers and quantum approximate optimization algorithm (QAOA). We compare the results of our solver to existing state of the art large-scale Max-Cut solvers. We demonstrate excellent performance of both classical and hybrid quantum-classical approaches and show that using QAOA within our framework is comparable to classical approaches. **Reproducibility: Our solver is publicly available at [https://github.com/angone/MLMax-cut](https://github.com/angone/MLMax-cut).** ## I Introduction Recently, a plethora of post-Moore hardware architectures have been developed, demonstrating promising potential in tackling combinatorial optimization problems. These include quantum annealers [17], gate-based quantum devices which operate using fundamental quantum gates and circuits, digital and analog annealers [1], often considered as intermediate architectures between classical and quantum computers, and Coherent Ising Machines [16]. Some of these machines are generally called Ising processing units (IPU) due to its architecture designed to solve Ising model optimization problems. Despite the potential of these architectures, many suffer from the limitation of a small number of variables and connectivity, creating a significant barrier in addressing larger, complex real-world problems. To address these issues, hybrid approaches that blend classical and novel hardware have emerged, where the primary routine runs on a classical machine and calls the specialized hardware to solve small sub-problems, effectively combining the best of both worlds [32]. While the experimental focus of this paper is _hybrid quantum-classical and classical-classical approaches_ for the maximum cut (Max-Cut) on graphs, nothing prevents from applying a similar approach on other novel architectures. Recent Noisy Intermediate Scale Quantum (NISQ) devices is one of these novel architectures. They suffer from having few qubits and high error rates [25]. Regardless there is a tremendous interest in exploring NISQ devices for practical applications [2, 15]. This has encouraged the development of various hybrid algorithms that combine the use of both classical and quantum computers. Notable hybrid algorithms are variational algorithms such as the Quantum Approximate Optimization Algorithm (QAOA) [10] and the Variational Quantum Eigensolver (VQE) [24]. These types of algorithms construct a shallow parameterized quantum circuit, which a classical optimizer then attempts to optimize over the parameters to find the optimal solution. Due to the engineering and physical limitations, the number of qubits available to use is too small to actually solve problems in practical applications. 
This situation inspires the development of using classical decomposition based techniques to decompose the original problem into subproblems that will be solved on a quantum computer, then the classical computer is used to piece these subproblems together into a solution for the original problem [32, 33]. If these subproblems are chosen in a way that they are all independent from each other, they can be trivially solved in parallel on quantum devices or IPUs before being pieced back together. The focus of this paper is the multilevel approach, one of the most significant classes of methods for large scale computationally hard problems. These algorithms look at the problem through a sequence of increasingly coarser representations of the original problem. They have proven to be efficient in many different domains such as graph partitioning [18] and linear arrangement [28]. **Our contribution:** In this paper, we present a novel approach to solving the Max-Cut problem using a multilevel method. For the coarsening within our multilevel solver, we introduce a distance measure designed specifically for the Max-Cut problem that takes advantage of a \(d\)-dimensional embedding of the nodes. We also introduce a novel randomized, multistart refinement scheme. We evaluate our solver on a diverse set of graphs including real world problems such as social networks. We demonstrate our solver's performance both in the quantum-classical regime using QAOA as our subproblem solver and in the fully classical regime using a MQLib heuristic as the subproblem solver. Our experimental results show that our solver is competitive with state-of-the-art global solvers when provided with equivalent computational resources but can use small IPUs or universal quantum devices in parallel. We also demonstrate that our solver is scalable, being competitive both for large graphs and dense graphs. ## II Background and Notation ### _Maximum cut on graphs_ The Max-Cut is an NP-hard combinatorial optimization problem defined on a graph. We consider simple undirected weighted graphs defined as \(G=(V,E,w)\) where \(V\) is a set of nodes, \(E\) is a set of edges, and edge weight function \(w:E\rightarrow\mathbb{R}_{\geq 0}\). We define \(n=|V|\) and \(m=|E|\). A cut is a partition of \(V\) into two disjoint parts, \(V_{1}\) and \(V_{2}\). The objective to be maximized is the weighted sum of edges \(ij\in E\) such that \(i\in V_{1}\) and \(j\in V_{2}\) or vice versa. We can formulate Max-Cut as a quadratic unconstrained binary optimization problem (QUBO) that is equivalent to the models solved by IPUs and universal quantum devices using QAOA (see next section). Binary variables \(x_{i}\) are interpreted as follows: \[x_{i}=\begin{cases}0,&\text{if }i\in V_{1}\\ 1,&\text{if }i\in V_{2}.\end{cases} \tag{1}\] The Max-Cut optimization problem is then defined as: \[\max_{x\in\{0,1\}^{n}}\sum_{ij\in E}w_{ij}(x_{i}+x_{j}-2x_{i}x_{j}). \tag{2}\] ### _Quantum Approximate Optimization Algorithm_ The Quantum Approximate Optimization Algorithm (QAOA), is a hybrid quantum optimization algorithm proposed in [10]. This algorithm uses a parameterized quantum circuit of \(p\) layers of alternating unitary operators and a classical optimization solver. The classical optimization solver optimizes over the parameters of the circuit and tries to maximize the expectation. Suppose we are trying to optimize some objective function \(f(x)\), \(x\in\{0,1\}^{n}\). When using QAOA to solve this, we would begin by building the parameterized quantum circuit. 
This quantum circuit has two parameters: \(\beta\) and \(\gamma\). It is constructed by applying alternating unitary operators: the phase operator \(U(\gamma)=e^{-i\gamma\mathcal{H}}\) and the mixing operator \(U(\beta)=e^{-i\beta B}\). \(\mathcal{H}\) is the problem Hamiltonian while \(B\) is the mixing Hamiltonian. Given the number of layers \(p\) and parameters \(\beta\) and \(\gamma\), we can construct our quantum state: \[|\gamma,\beta\rangle=U(\beta_{p})U(\gamma_{p})\dots U(\beta_{1})U(\gamma_{1})|+\rangle^{\otimes n} \tag{3}\] We then measure the circuit and compute the objective function: \[\langle\mathcal{H}\rangle=\langle\gamma,\beta|\mathcal{H}|\gamma,\beta\rangle \tag{4}\] The classical optimizer then attempts to find values of \(\gamma\) and \(\beta\) such that the objective is maximized. The state \(|\gamma_{opt},\beta_{opt}\rangle\) corresponds to the optimal solution. There exist several versions of QAOA. In this work we implemented the classical QAOA of depth \(p=3\). ### _Multilevel Algorithms_ Multilevel algorithms are a powerful class of computational methods used for handling and processing large-scale instances in an efficient and manageable way. In the context of graphs, these algorithms break down the large-scale graph problem into a series of increasingly simpler subproblems at multiple scales of coarseness, which improves scalability and, for graph problems, helps to take advantage of the graph structure [3]. The multilevel algorithm involves three main phases: coarsening, coarsest solution, and uncoarsening. In the coarsening phase, the graph is successively reduced in size to create a series of smaller, simplified graphs. Once the coarsest (i.e., smallest and fast to solve) graph is reached, the coarsest solution phase begins, where the graph problem is solved either exactly or approximately but with a high quality. Following this, the solution is then progressively uncoarsened, where the solution is interpolated back to the original graph and refined at each level of coarseness. The entire multilevel approach is inspired by multigrid methods. The coarsening stage involves starting off with the original graph \(G_{0}\) and constructing a sequence of \(L\) increasingly coarser graphs \(G_{1},G_{2},\ldots,G_{L}\). Each next coarser graph \(G_{k}\) has fewer nodes than the previous coarse graph \(G_{k-1}\). To denote the sets of nodes and edges of the graph at level \(k\) we will use the notation \(G_{k}=(V_{k},E_{k})\) for \(0\leq k\leq L\). To describe the coarsening from level \(f\) to level \(c=f+1\), we will use subscript letters \(c\) and \(f\) to define coarse and fine variables, respectively. The coarsening process is performed by grouping nodes into pairs that form next-coarser nodes. As a result of this, the edges are also merged into next-coarser level edges and the edge weights are accumulated. The uncoarsening stage involves using the solution from a coarser problem as an initialization to begin solving a finer problem, followed by refinement. At the initialization step, each of the nodes is uncoarsened into the two nodes that were merged into the coarse node during the coarsening stage. The values of the variables are then carried from the coarse variables to the fine variables. After the initialization is finished, the refinement is started, in which a local small sub-problem solver is used to improve upon the solution that was carried over in an iterative manner.
This process repeats at each level of coarseness until we return to the original graph that we started with before any coarsening. ### _Multistart Methods_ When solving combinatorial problems, it is very common to get trapped in a local optima that prevents achieving the global optima. One technique that has been used to improve solutions beyond a local optima is multistart methods [23, 31]. Many optimization techniques consist of starting at some initial solution and then improving upon that solution. Multistart methods improve upon this by trying multiple initial solutions, improving all of them, and then accepting the best of the resulting solution. ## III Related Work ### _Coarsening_ When designing a multilevel algorithm for some problem, an important factor is the coarsening scheme. When working with the Max-Cut problem, the coarsening needs to be different than the one usually designed for the graph cut or edge length minimization problems [18, 28, 29, 34] due to the maximization objective. Typical coarsening methods for cut minimization use either regular algebraic multigrid (AMG) methods or restricted AMG (in which only pairs of nodes are coarsened that are faster and more popular) where a coarse subset of the graph nodes is chosen to form a coarse node based on the connectivity strength between nodes. The intuitive idea is to coarsen the heavy edges and leave sparse cuts to solve at the coarse levels. When working with the Max-Cut problem, coarsening nodes that are strongly connected does not help since we would lose heavy edges that span between the two parts. Instead, we designed a coarsening technique that coarsens together nodes that are loosely connected. This results in losing fewer edges when coarsening which means that the progressive change in the Max-Cut value will not be significant throughout the hierarchy. ### _QAOA Acceleration_ Our algorithm relies on using QAOA to solve subproblems. Thus its refinement performance is dependent on that of QAOA. There have been many attempts to improve the performance of QAOA including parameter transferring [12], warm-starts [9], phase operator sparsification [21] and taking advantage of symmetry [30]. ### _Solving Large Max-Cut Instances_ With our algorithm, we attempt to solve large Max-Cut problems. There have been many attempts at this previously, including tabu search [19], semi-definite programming relaxations [14], and randomized algorithms [11]. Tabu search is a type of algorithm that is similar to local search algorithms. Differences include that it may accept a worse solution to escape a local optima and it will mark some solutions to not be visited again. This method has been used to solve many well known graphs of sizes \(|V|=800...10000\) to the best known solutions [20]. Semi-definite programming relaxations have also been used as a way to approximate problems that have a linear or convex quadratic objective. Randomized algorithms are also a technique commonly used in many combinatorial optimization problems as a simple and efficient way to improve upon other algorithms. When using local search algorithms, adding random perturbations enables the algorithm to escape a local optima. One widely known algorithm that utilizes both of these techniques is the Goemans-Williamson algorithm which is able to approximate the solution to a ratio of 0.878 [13]. ### _MQLib_ Throughout this paper, we frequently reference MQLib [8] which is a project that implemented and evaluated 37 heuristics for Max-Cut and QUBO problems. 
One of the heuristics that was implemented is a rank-two relaxation heuristic presented in [4]. We use this implementation both as a subproblem solver within our algorithm and as a global solver to compare our algorithm against. This specific heuristic was chosen due to it performing the best in 5 of the 7 metrics considered in MQLib's systematic evaluation of the 37 heuristics and performing the best on our benchmark, which contains graphs larger than those in MQLib. ## IV Hybrid Quantum-Classical Multilevel Solver for Max Cut We introduce a novel multilevel hybrid quantum-classical algorithm designed for the Max-Cut problem. The algorithm is tailored not only to leverage the capabilities of quantum architectures, but it is also adaptable to general Ising Processing Units (IPUs), devices designed for solving combinatorial optimization problems, in which they serve as efficient subproblem solvers [6]. To the best of our knowledge, this is the first multilevel Max-Cut algorithm. Inspired by relaxation-based algebraic multigrid coarsening techniques [22, 26], our proposed algorithm introduces two principal innovative components. The first is the novel relaxation-based Max-Cut distance measure to inform and optimize the coarsening phase designed in the spirit of algebraic distance on graphs [5]. The second is a fast, multistart refinement. ### _Coarsening_ As has been shown in many multigrid and multilevel algorithms, a distance measure between variables is one of the most critical components of the coarsening [26]. Our distance measure between nodes uses a \(d\)-dimensional embedding space to pair together nodes to be coarsened. This process begins by creating a \(d\)-dimensional sphere that will have every node embedded into it. Initially, every node starts off at a random position in the space \[\forall i\in V\quad p_{i}^{(0)}\leftarrow\text{rand}[-1,1]^{d}. \tag{5}\] Each node in the graph is iteratively visited and its position is updated in a way that maximizes the total distance between the node and every other node in its neighborhood. \[\forall i\in V\quad p_{i}^{(l+1)}\leftarrow\max\sum_{j\in N(i)}w_{ij}||p_{i}^{(l)}-p_{j}^{(l)}||_{2}, \tag{6}\] where \(l\) is the number of the iteration, and \(N(i)\) is the set of neighbors for \(i\in V\). This process in Eq. (6) is repeated until convergence. Once the required number of iterations is completed, the embedding process is finished and the nodes can start to be paired together. A similar (but faster to compute) distance measure in which the embedding space was a \(d\)-dimensional hypercube led to similar results. ``` 0: Fine graph \(G_{f}=(V_{f},E_{f})\) 1:\(A_{f}\leftarrow\)Adjacency matrix of \(G_{f}\) 2:\(embedding\gets embed(G_{f})\) 3:\(used\leftarrow\emptyset\) 4:\(pairs\leftarrow\emptyset\) 5:for\(i\in V_{f}\)do 6:if\(i\not\in used\)then 7:\(j\gets nearestNeighbor(i,embedding)\) such that \(j\not\in used\) 8:\(used\gets i,j\) 9:\(pairs\leftarrow(i,j)\) 10:endif 11:endfor 12:\(P\in 0^{|V_{f}|\times|V_{c}|}\) 13:\(q\gets 0\) 14:for\((i,j)\in pairs\)do 15:\(P_{i,q}\gets 1\) 16:\(P_{j,q}\gets 1\) 17:\(q=q+1\) 18:endfor 19:\(A_{c}\gets P^{T}A_{f}P\) 20:return\(G_{c}\) obtained from \(A_{c}\) ``` **Algorithm 1** Coarsening When the nodes are being paired together to be coarsened, each node should be paired to a node that lies near it in the embedding. This is done by placing all the nodes and their positions in a KDTree. We then iterate through each node and perform a nearest neighbor search within the KDTree.
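A compact sketch of this coarsening step is given below. It is an illustration only: the exact maximization behind Eq. (6) is not fully specified in the text, so the update here simply pushes each node away from the weighted centroid of its neighbors and renormalizes onto the unit sphere, and the function names are ours rather than the authors'.

```python
# Illustrative sketch of the embedding-based coarsening (Eqs. (5)-(6), Algorithm 1).
# The update rule is one plausible reading of Eq. (6); it is not the authors' exact code.
import numpy as np
from scipy.spatial import cKDTree

def embed(adj, d=3, iters=20, seed=0):
    """adj: dense symmetric weight matrix with zero diagonal."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    p = rng.uniform(-1.0, 1.0, size=(n, d))                  # Eq. (5)
    for _ in range(iters):
        for i in range(n):
            w = adj[i]
            if w.sum() == 0:
                continue
            centroid = (w[:, None] * p).sum(axis=0) / w.sum()
            direction = p[i] - centroid                       # move away from neighbors
            norm = np.linalg.norm(direction)
            if norm > 1e-12:
                p[i] = direction / norm                       # keep positions on the unit sphere
    return p

def pair_nodes(p):
    """Greedy nearest-neighbor matching (lines 5-11 of Algorithm 1)."""
    tree = cKDTree(p)
    used, pairs = set(), []
    for i in range(len(p)):
        if i in used:
            continue
        _, idx = tree.query(p[i], k=len(p))
        j = next((j for j in idx if j != i and j not in used), None)
        if j is None:                                         # odd leftover node stays unmatched
            continue
        used.update((i, int(j)))
        pairs.append((i, int(j)))
    return pairs

def coarsen(adj, pairs):
    """Build P (|V_f| x |V_c|) and the coarse adjacency A_c = P^T A_f P (lines 12-19)."""
    P = np.zeros((adj.shape[0], len(pairs)))
    for q, (i, j) in enumerate(pairs):
        P[i, q] = 1.0
        P[j, q] = 1.0
    Ac = P.T @ adj @ P
    np.fill_diagonal(Ac, 0.0)                                 # drop self-loops created by contraction
    return Ac
```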
Each node is then paired to the nearest neighbor that has not already been paired with a different node. Once this matching is constructed, the new coarse graph is built by contracting every pair in the matching. The details are in Algorithm 1. In line 2 of the algorithm, we embed the nodes into our embedding space according to 6. In lines 5-11, we iterate through each node in the graph, pairing it with the nearest node that is not yet paired. On line 12, we construct a matrix \(P\) that is filled with zeros. On lines 14-18, we update the matrix \(P\) so that each column contains two entries of 1 that correspond to a pair of nodes that will be merged together. In line 19, we find the adjacency matrix of our coarse graph by computing the matrix multiplication \(P^{T}A_{f}P\). The function returns the coarse graph that corresponds to the coarse adjacency matrix we computed. This coarsening process is repeated until the desired coarsest graph size is reached. Overall, it is expected that the original graph will be broken into approximately a logarithmic number of increasingly coarse graphs. ### _Sparsification_ During the coarsening process, we provide an optional sparsification parameter in order to reduce the density of our coarse graphs using a novel technique based upon our embedding. This sparsification happens after the embedding but before the coarsening. Once we calculate our embedding, we compute the weighted length of each edge in that embedding. Longer edges are more likely to be within the cut, so we choose to sparsify the shortest edges in that embedding. Instead of just removing the edge, we move the weight of that edge to its longest length adjacent edge. We repeat this process until we remove the number of edges specified by a parameter inputted by the user. ### _Coarsest level solution_ The size of the coarsest level is based upon the subproblem size that is acquired via user input. The graphs will get coarsened until the size of the coarsest graph is less than the subproblem size. Once the coarse graph size reaches the desired size, we solve it using MQLib with a running time of 5 seconds. ### _Uncoarsening_ We begin the uncoarsening phase with an initial solution to our coarsest level. When we move to the next level, we inherit our solution from the previous coarsest level. This inheritance is done with a surjection \(F:V_{f}\to V_{c}\) that maps from the fine nodes to the coarse nodes that they were contracted into. \[\forall i\in V_{f}\quad x_{i}=x_{F(i)} \tag{7}\] At each level, we begin by computing the gain of each node, which is the change in objective if you were to move that node to the other part. This gain can be computed as follows: \[\forall i\in V\quad gain(i)\leftarrow\sum_{j\in N(i)}w_{ij}(-1)^{2x_{i}x_{j}-x _{i}-x_{j}} \tag{8}\] Throughout the refinement we efficiently keep track of each node's gain by updating it based on the edges that enter/leave the cut. We solve each level by iteratively improving upon the solution until we stop all improvement. Each iteration chooses nodes to use in a subproblem, constructs a subproblem, and then solves that subproblem. This process repeats until we perform 3 iterations without improvement. The details of this are described in Algorithm 2. In each iteration, we first choose a random subset of the nodes and order them by their gain. From that subset, we choose the \(K\) nodes with the highest gain for the subproblem and then construct and solve it. We take the solution if our objective stays the same or increases. 
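For concreteness, the gain bookkeeping of Eq. (8) and the greedy subproblem selection can be sketched as follows; these are illustrative helper functions assuming a dense symmetric weight matrix, not the authors' implementation.

```python
# Illustrative helpers for Eq. (8) and the subproblem selection used in the refinement.
# Assumes a dense symmetric weight matrix `adj` with zero diagonal and a 0/1 vector x.
import numpy as np

def cut_value(adj, x):
    """Weighted cut value of the assignment x, i.e. the objective in Eq. (2)."""
    x = np.asarray(x)
    diff = np.not_equal.outer(x, x)
    return adj[diff].sum() / 2.0                  # each cut edge is counted twice

def gains(adj, x):
    """gain(i): change in cut value if node i moves to the other part (Eq. (8))."""
    g = np.zeros(len(x))
    for i in range(len(x)):
        for j in np.flatnonzero(adj[i]):
            sign = 1.0 if x[i] == x[j] else -1.0  # equals (-1)^(2 x_i x_j - x_i - x_j)
            g[i] += sign * adj[i, j]
    return g

def pick_subproblem(g, K, rng=None):
    """Random subset of max(0.2|V|, 2K) nodes, then the K nodes with the largest gains."""
    rng = rng or np.random.default_rng()
    n = len(g)
    size = min(n, max(int(0.2 * n), 2 * K))
    subset = rng.choice(n, size=size, replace=False)
    return subset[np.argsort(g[subset])[-K:]]
```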
After 3 iterations with no improvement, we consider our refinement as completed and accept the current solution. We use multistart methods here to run multiple instances of this process in parallel and accept the best of those solutions. ### _Subproblem construction_ Each subproblem is constructed by creating a graph with 2 super-nodes and all the nodes chosen. Each super-node is an aggregation of every node not chosen for the subproblem in each part. Edges are added between the super-nodes and regular nodes. Once the subproblem is constructed, we will solve it using a subproblem solver. The subproblem solver used is either QAOA or a MQLib solver. The resulting solution will be no worse than our current solution. If we find a better solution, we update all the nodes to be in the correct part and update their gain accordingly. ``` 0: A graph \(G=(V,E)\) 0: Subproblem Size \(K\) 0: Initial Solution \(S\) \(constructGain()\) \(count\gets 0\) \(obj\gets calculateObjective(S)\) while\(count<3\)do \(subset\leftarrow\) random \(max(0.2|V|,2K)\) nodes \(\in V\) \(subprob\gets K\) highest gain nodes \(\in subset\) \(constructSubproblem()\) \(solveSubproblem()\) \(S\gets updateSolution()\) \(newObj\gets updateObjective()\) \(updateGain()\) if\(newObj>obj\)then \(obj\gets newObj\) \(count\gets 0\) else \(count\gets count+1\) endif endwhile return\(S\) ``` **Algorithm 2** Refinement ## V Computational Experiments We evaluated our solver on a set of graphs to evaluate both our performance when working with both a quantum and classical subproblem solver. ### _Quantum Subproblem Solver_ We evaluated our solver using a quantum subproblem solver on 5 small graphs taken from the well known Gset graph set acquired from SuiteSparse Matrix Collection [7]. These graphs all contain 800 nodes and 19716 edges. Our subproblem solver uses a simulated QAOA circuit with a size of 12 and a depth of 3 layers. These parameters were chosen due to the limitations of simulation. We perform the QAOA experiments on a smaller set of graphs due to the long running time of simulating QAOA. In Table I, we compare against the exact solutions obtained with Gurobi solver and against our own solver with a classical subproblem solver. ### _Classical Subproblem Solver_ We evaluated our solver with a classical subproblem solver on 30 different graphs of different types that were either constructed or taken from SuiteSparse Matrix Collection or the Network Repository [27]. The types of graphs used include social networks, random regular graphs, computational fluid dynamics problems, optimization problems, and finite graphs. The random regular dense graphs, titled r-xxxx-xxxx, are the only ones that were constructed. These experiments were performed with 40 parallel refinements in the multistart scheme and with a subproblem size of 100. In Table II, we compare against using MQLib as a global solver with time constraints equivalent to the time for refinement when using our algorithm. We also provide a comparison between our solver's coarsest level objective and our solver's finest level objective. As it can be seen from the results, the performance of our multilevel solver is at least comparable to those of the best global heuristic from MQLib and is noticeably better on some graphs. Since our refinement is decomposition based and can run in parallel, the scalability of our solver is very good. 
It is important to mention a remarkable quality of the coarsening, which by itself solves a large portion of the problem (see column "Coarse Ratio"), as well as of the sparsification, which barely changes the overall quality of our solver.

### _Obstacles and Future Work_

There are still a few obstacles we have run into with this solver. One obstacle is the density of the coarse graphs. As the size of our coarse graph decreases, its density increases significantly because our coarsening attempts to preserve edges. Currently we attempt to alleviate this problem by utilizing sparsification, but a better solution is necessary. Another obstacle is that our subproblem selection relies heavily on randomness. In the future, we plan to work on improving the running time of QAOA. Some ways to achieve this include taking advantage of parameter transfer [12] and symmetry [30]. Improving the running time of QAOA will both improve the overall running time of our solver and allow us to perform experiments on larger graphs while using QAOA.

## VI Conclusion

With the current state of quantum computing hardware, an important problem is making practical use of these devices even with high error rates and low qubit counts. Hybrid algorithms such as QAOA have been in development to address this problem. In this paper, we presented a multilevel algorithm alongside QAOA to solve the NP-hard Max-Cut problem. We introduced novel schemes for coarsening and refinement for the Max-Cut problem. We performed a numerical study showing that our solver is at least comparable to (and sometimes better than) state-of-the-art solvers on a number of different graphs when run for the same amount of time. We concluded by demonstrating that our approach is scalable and can solve large, practical problem instances.
Combinatorial optimization is one of the fields in which near-term use of quantum devices is being realized through hybrid quantum-classical algorithms, pointing to practical applications of quantum computing. One of the most well-studied problems in combinatorial optimization is the Max-Cut problem. The Max-Cut problem is relevant to quantum and other "post-Moore" architectures, and solving it is important due to, among other reasons, its similarity to the Ising model. In this paper, we introduce a scalable hybrid multilevel approach for solving large Max-Cut problems. Within this approach, we propose methods that use a purely classical solver as well as methods that combine it with the Quantum Approximate Optimization Algorithm (QAOA). We compared the results of our solver with existing state-of-the-art large-scale Max-Cut solvers, showing good performance for both the classical and the hybrid quantum-classical approaches, with QAOA in our framework
2309.03940
Are there more galaxies than we see around high-$z$ quasars?
Whether or not $z \gtrsim 6$ quasars lie in the most massive dark-matter halos of the Universe is still a subject of dispute. While most theoretical studies support this scenario, current observations yield discordant results when they probe the halo mass through the detection rate of quasar companion galaxies. Feedback processes from supermassive black holes and dust obscuration have been blamed for this discrepancy, but the impact of these effects is complex and far from being clearly understood. This paper aims to improve the interpretation of current far-infrared observations by taking into account the cosmological volume probed by the Atacama Large Millimeter/submillimeter Array Telescope and to explain the observational discrepancies. We statistically investigate the detection rate of quasar companions in current observations and verify if they match the expected distribution from various theoretical models, once convolved with the ALMA field-of-view, through the use of Monte Carlo simulations. We demonstrate that the telescope geometrical bias is fundamental and can alone explain the scatter in the number of detected satellite galaxies in different observations. We conclude that the resulting companion densities depend on the chosen galaxy distributions. According to our fiducial models, current data favour a density scenario where quasars lie in dark-matter halos of virial mass $M_{\rm vir} \gtrsim 10^{12}~{\rm M_{\odot}}$, in agreement with most theoretical studies. According to our analysis, each quasar has about 2 companion galaxies, with a [CII] luminosity $L_{\rm [CII]} \gtrsim 10^8~{\rm L}_{\odot}$, within a distance of about 1~Mpc from the quasar.
Tommaso Zana, Stefano Carniani, David Prelogović, Fabio Vito, Viola Allevato, Andrea Ferrara, Simona Gallerani, Eleonora Parlanti
2023-09-07T18:00:00
http://arxiv.org/abs/2309.03940v1
# Are there more galaxies than we see around high-\(z\) quasars? ###### Abstract Context:Whether or not \(z\gtrsim 6\) quasars lie in the most massive dark-matter halos of the Universe is still a subject of dispute. While most theoretical studies support this scenario, current observations yield discordant results when they probe the halo mass through the detection rate of quasar companion galaxies. Feedback processes from supermassive black holes and dust obscuration have been blamed for this discrepancy, but the impact of these effects is complex and far from being clearly understood. Aims:This paper aims to improve the interpretation of current far-infrared observations by taking into account the cosmological volume probed by the Atacama Large Millimeter/submillimeter Array Telescope and to explain the observational discrepancies. Methods:We statistically investigate the detection rate of quasar companions in current observations and verify if they match the expected distribution from various theoretical models, once convolved with the ALMA field-of-view, through the use of Monte Carlo simulations. Results:We demonstrate that the telescope geometrical bias is fundamental and can alone explain the scatter in the number of detected satellite galaxies in different observations. We conclude that the resulting companion densities depend on the chosen galaxy distributions. According to our fiducial models, current data favour a density scenario where quasars lie in dark-matter halos of viral mass \(M_{\rm vir}\gtrsim 10^{12}\) M\({}_{\odot}\), in agreement with most theoretical studies. According to our analysis, each quasar has about 2 companion galaxies, with a [CII] luminosity \(L_{\rm[CII]}\gtrsim 10^{8}\) L\({}_{\odot}\), within a distance of about 1 Mpc from the quasar. Conclusions: ## 1 Introduction Most luminous (\(L_{\rm bol}>10^{46}\) erg s\({}^{-1}\)) quasars at redshift \(z\gtrsim 6\) are powered by the accretion process of gas onto the most massive supermassive black-holes (\(10^{8}-10^{8}\)M\({}_{\odot}\); Ferrarese & Ford 2005; Meyer et al. 2022a; Wang et al. 2018). According to theoretical models, including numerical simulations (Sijacki et al. 2009; Di Matteo et al. 2012; Costa et al. 2014; Weinberger et al. 2018; Barai et al. 2018; Habouzit et al. 2019; Ni et al. 2020; Valentini et al. 2021), and clustering studies (Hickox et al. 2009; Ross et al. 2009; Cappelluti et al. 2010; Allevato et al. 2011, 2012), supermassive black-holes reside in the densest regions of the Universe, such as the centre of most massive dark-matter halos, with virial masses ranging from \(10^{12}\) to \(10^{13}\) M\({}_{\odot}\) at \(z\sim 6\). However, observations do not always agree with such studies at nearly any redshift. Numerous observational works have exploited current astronomical facilities to investigate the properties of the environments where quasars reside, yielding discordant results. On the one hand, some studies suggest that quasars are located in regions characterized by large densities of galaxies, such as Steidel et al. (2005) at \(z=2.3\), Hall et al. (2018) at \(0.5\lesssim z\lesssim 3.5\), Swinbank et al. (2012) at \(2.5\leq z\lesssim 4.5\), Garcia-Vergara et al. (2019) and Uchiyama et al. (2020) at \(z\sim 4\), Husband et al. (2013) and Capak et al. (2011) at \(z\sim 5\), and Kim et al. (2009), Morselli et al. (2014), Balmaverde et al. (2017), Decarli et al. (2018), Mignoli et al. (2020), and Venemans et al. (2020) at \(z\gtrsim 6\). 
On the other hand, some works, such as Francis & Bland-Hawthorn (2004) and Simpson et al. (2014) at \(z\gtrsim 2\), Uchiyama et al. (2018) at \(z\sim 4\), Kashikawa et al. (2007) and Kikuta et al. (2017) at \(z\sim 5\), and Banados et al. (2013), Mazzucchelli et al. (2017), Champagne et al. (2018), Meyer et al. (2022b) at \(z\gtrsim 6\), find that the number of companion galaxies is, at most, similar to the galaxy density estimated in blank fields. Overall, the question of whether quasars tend to live in over-dense regions of the Universe remains an active area of research. The detection of Lyman-break galaxies (LBGs) and Lyman-alpha emitting galaxies (LAEs) in the neighbourhoods of quasars and active galactic nuclei is often used in literature to probe the underlying dark matter distribution of high redshift structures (see, for instance, Intema et al. 2006; Venemans et al. 2007; Banados et al. 2013; Hennawi et al. 2015; Mazzucchelli et al. 2017, investigating the satellite number counts, or Garcia-Vergara et al. 2019, studying the cross-correlation between galaxies and quasars). The contradictory conclusions of these studies may also stem from the intrinsic challenges in constraining the redshift of LBGs and the significant influence of dust extinction and obscuration on both LBGs and LAEs. For this reason, the best strategy to detect high-\(z\) quasar companions relies on the observation of their [CII]\(158\mu\)m emission, as this gas tracer is not affected by dust-attenuation and intergalactic medium absorption (e.g., Maiolino et al. 2005; Walter et al. 2009). In this context, the Atacama Large Millimeter Array (ALMA) has provided the most numerous spectroscopically confirmed detections of \(z\gtrsim 6\) quasar companions (Decarli et al. 2017, 2018; Venemans et al. 2020). In particular, Venemans et al. (2020, hereafter V20) found 27 line-emitter candidates within 27 quasar fields. Out of this initial pool, 10 line emitters were already recognised as quasar companions in previous works (Decarli et al. 2017; Willott et al. 2017; Neeleman et al. 2019; Venemans et al. 2019). The remaining candidates have been identified as companions by assuming that the detected line does correspond to the carbon [CII] transition, emitted within \(\pm\Delta v\) from the quasar.1 Two thresholds have been adopted in V20, leading to slightly different results: 7 additional satellites are detected with \(\Delta v=1000\) km s\({}^{-1}\), and 9 with \(\Delta v=2000\) km s\({}^{-1}\). The V20 sample is nowadays the largest and most complete catalogue of high-\(z\) quasar companions with constrained redshift. Footnote 1: In this work, we assume the validity of V20 hypothesis and therefore consider all the additional 17 line emitters as proper quasar companions. One debated possibility to explain the differences among these observational results is associated with the quenching effect of star formation in galactic satellites driven by quasar feedback (Efstathiou 1992; Thoul & Weinberg 1996; Okamoto et al. 2008; Dashyan et al. 2019; Martin-Navarro et al. 2019). According to this scenario, quasars would still reside in the most massive halos as predicted by theoretical models, but their companions would be gas-poor and would exhibit low star-formation activity, making them impossible to detect with current observational facilities. This argument has been put forth to account for the absence of galaxy over-densities within \(\sim 1\) Mpc from certain high-\(z\) quasar objects. 
Nevertheless, this alleged negative feedback effect is currently far from established. Some works proposed even a possible positive interaction between quasar feedback and star formation in companion galaxies (Fragile et al. 2017; Martin-Navarro et al. 2021; Zana et al. 2022; Ferrara et al. 2023). In particular, Zana et al. (2022), based on cosmological simulations, and Ferrara et al. (2023), using semi-analytic models, claimed that quasar feedback always enhances the process of star-formation in satellites, provided that the satellites are not disrupted by the strong quasar outflows. These recent works suggest that the discrepant observational results on the environments of high-\(z\) quasars might not be due to feedback and that additional effects and possible observational biases might be in place. Zana et al. (2022) analysed the cosmological simulation by Barai et al. (2018) and focussed on the environment of a massive quasar at \(z\gtrsim 6\), finding 2-3 satellites with a [CII] luminosity \(L_{\rm[CII]}\gtrsim 10^{8}\) L\({}_{\odot}\). They suggested that their findings match only the most densely populated field in V20, due to the geometrical effects introduced by the ALMA telescope. In fact, the relatively narrow ALMA field of view (FoV) compared with the much larger spatial range probed by the line of sight (LoS) where satellites can be identified, could inherently hinder the detection of a potentially significant fraction of the actual population of satellites, even if they are above the instrumental sensitivity threshold. Other studies, such as Champagne et al. (2018) and Meyer et al. (2022) are in agreement with this hypothesis and conclude that observational campaigns adopting large FoV are fundamental to recover the missing satellite populations. In this work, we investigate the geometrical bias introduced by ALMA on the satellite detection rate. In particular, we use different mock distributions of galactic satellite populations, based on theoretically motivated models, to measure the geometrical response of the ALMA limited observational volume. Our ultimate goal is to derive a rigorous statistical framework to interpret the most recent observational survey. We assume a flat \(\Lambda\)CDM model, with the cosmological parameters \(\Omega_{\rm M,0}=0.3089\), \(\Omega_{\rm A,0}=1-\Omega_{\rm M,0}=0.6911\), and \(H_{0}=67.74\) Mpc s\({}^{-1}\) (Planck Collaboration et al. 2016, results XIII). All lengths and volumes in the text are assumed to be physical unless otherwise specified. The paper is structured as follows: we describe the method adopted to statistically investigate the geometrical bias in Sec. 2, in Sec. 3 we present the outcomes, and in Sec. 4 we discuss the results in the context of current literature. We finally summarise our findings in Sec. 5. ## 2 Method Figure 1 shows the incidence of [CII]-emitting companions in the sample of 27 quasars reported by V20, in the range \(\Delta v=1000\) km s\({}^{-1}\). Approximately half of quasars do not exhibit any detected companions, whereas in other cases, up to three galaxies are observed. Since all datasets analysed by V20 have comparable exposure times, the frequency of detections in this sample cannot be solely attributed to possible different sensitivities of the observations. 
Under the assumption that all the observed quasars live in environments with similar properties and that the quasar feedback has the same effect on all the companions, our objective is to assess whether the limited cosmic volume probed by ALMA can explain the large detection range shown in Figure 1. ALMA observations of \(z\sim 6.5\) quasars are characterised by a circular FoV of radius \(\simeq 90\) kpc and cover a wavelength range of \(\Delta v=1000-2000\) km s\({}^{-1}\) with respect to the systemic redshift of each quasar (V20). The explored cosmological volume is thus a cylinder with a base of about \(2.6\times 10^{4}\) kpc\({}^{2}\) and a height of about 2.5-5.0 Mpc at \(z=6.5\), depending on the spectral setting of the observations, as shown in Figure 2. Here, we focus on the fixed LoS range \(\Delta v=1000\) km s\({}^{-1}\) to be more conservative, given the large distances involved, and we compare with the equivalent V20 sample. Additionally, we consider the median redshift \(z=6.5\) as representative of the range \(6\leq z\leq 7\) for the whole V20 sample.2

Figure 1: Distribution of the number of satellites detected as [CII] emitters from V20, during an ALMA survey of 27 \(z\geq 6\) quasars. All these sources reside within \(\pm 1000\) km s\({}^{-1}\), i.e. about \(\pm 1.3\) Mpc at \(z=6.5\) from the central quasar.

Given the geometry of the probed volume, the detection rate of companions would depend both on the distribution of galaxies in the quasar environment and on the orientation of the cylinder axis. Therefore, a fair comparison between theoretical studies and observations must take this effect into account. Footnote 2: The ratios amongst the comoving quantities (the position of the satellites and the ALMA observational range) are conserved in redshift. We have nonetheless run our analysis also at different mean redshifts, confirming no significant variation. We create different mock distributions of quasar satellites and study the frequency of detections when the companions lie within a random observational cylinder. Ideally, we would run numerous, cosmologically motivated, N-body simulations of high-\(z\) quasar companions and select a single observational cylinder in each simulation. This approach would be prohibitive in terms of time and computational costs. Alternatively, we run various sets of Monte Carlo simulations where we spawn, for each set, an increasing number of satellites \(N\) in the quasar-centred sphere with radius \(D_{z=6.5}\), where \(D_{z=6.5}\) is the distance corresponding to the recession velocity \(\Delta v=1000\) km s\({}^{-1}\) at \(z=6.5\), i.e. about 1.3 Mpc. The \(N\) satellites represent massive galaxies bright enough to be detected by ALMA as [CII] emitters, provided that they fall within the volume probed by the telescope. They indeed represent only the "detectable" fraction of the whole satellite population and, in principle, could be connected to the total dark matter density where the quasar resides. For each set of simulations (at fixed \(N\)), we intersect the quasar-centred sphere with the ALMA cylinder with a random orientation, as exemplified in Figure 2. In particular, for each given \(N\), we perform \(10^{5}\) simulations, varying the position of the satellites, along with the orientation of the observational cylinder. Hence, we compute the probability \(P(N_{\rm o}|N)\) that \(N_{\rm o}\) satellites are included in the cylinder, and thus detectable by ALMA, as a function of \(N\).
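For concreteness, a minimal sketch of one such Monte Carlo set is given below (Python/NumPy; the uniform "Homogeneous" seeding is used here, and the function name and defaults are illustrative only). Since the cylinder passes through the quasar and its half-height equals \(D_{z=6.5}\), only the perpendicular distance from the cylinder axis matters for points inside the sphere.

```python
import numpy as np

def detection_probability(N, n_trials=100_000, R_sphere=1300.0, R_fov=90.0, rng=None):
    """Monte Carlo estimate of P(N_o | N): spawn N detectable satellites in a
    quasar-centred sphere of radius R_sphere (kpc, ~1.3 Mpc) and count how many
    fall inside an observation cylinder of radius R_fov (kpc) with a random axis.
    Satellites are drawn uniformly in the sphere (the 'Homogeneous' case);
    other radial profiles would only change how the radii are drawn."""
    rng = np.random.default_rng() if rng is None else rng
    counts = np.zeros(N + 1)
    for _ in range(n_trials):
        # uniform positions inside the sphere
        r = R_sphere * rng.random(N) ** (1.0 / 3.0)
        costheta = rng.uniform(-1.0, 1.0, N)
        phi = rng.uniform(0.0, 2.0 * np.pi, N)
        sintheta = np.sqrt(1.0 - costheta**2)
        pos = np.column_stack((r * sintheta * np.cos(phi),
                               r * sintheta * np.sin(phi),
                               r * costheta))
        # random cylinder axis (unit vector) passing through the quasar
        axis = rng.normal(size=3)
        axis /= np.linalg.norm(axis)
        # perpendicular distance of each satellite from the cylinder axis
        d_perp = np.linalg.norm(pos - np.outer(pos @ axis, axis), axis=1)
        counts[np.sum(d_perp <= R_fov)] += 1
    return counts / n_trials   # P(N_o | N) for N_o = 0..N
```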
We adopt 9 different dark-matter distributions to generate the satellite populations: * **Homogeneous distribution**: satellites are generated by following a homogeneous distribution within the spherical volume, with fixed radius \(D_{z=6.5}\). At \(z=6.5\), the volume measures about 9 Mpc\({}^{3}\). This model is dubbed as _Homogeneous_ and represents the simplest case, with no assumption on the radial distribution. * **Plummer model**: satellite galaxies are spawned by following a Plummer density profile (Plummer 1911), with cumulative distribution function (CDF) \[P(r)=\frac{r^{3}}{\left(r^{2}+a^{2}\right)^{3/2}},\] (1) where \(r\) is the radius from the central quasar and \(a\) is the scale radius. We adopt two instances for this distribution by choosing \(a=25\) and 100 kpc. These models are dubbed as _Plummer25_ and _Plummer100_. * **Hernquist model**: analogously to the Plummer model case, here satellites are generated by mimicking a Hernquist density profile (Hernquist 1990), through the CDF \[P(r)=\frac{r^{2}}{(a+r)^{2}}.\] (2) This model, where we fix the scale radius \(a=5\) kpc, is dubbed as _Hernquist5_. * **NFW model**: we sample 2 Navarro-Frenk-White density profiles (Navarro et al. 1996), dubbed as _NFW25_ and _NFW100_, via the CDF \[\begin{split} P(r)&=\mathcal{N}(a)\left[\ln\left( \frac{a+r}{a}\right)-\frac{r}{a+r}\right]\\ \mathcal{N}(a)&=\left[\ln\left(\frac{a+r_{\rm cut }}{a}\right)-\frac{r_{\rm cut}}{a+r_{\rm cut}}\right]^{-1},\end{split}\] (3) with scale radii \(a\) equal to 25 kpc and 100 kpc, respectively. \(\mathcal{N}(a)\) is a normalization constant with \(r_{\rm cut}=D_{z=6.5}\). * dubbed as _CCF_ - wants to mimic only the shape of the galaxy distribution function around quasars. The density is not fixed and varies with \(N\) in the Monte Carlo simulations. * **Cosmological simulations**: finally, satellite galaxies are spawned by following the dark-matter distribution of cosmological zoom-in simulations of high-redshift quasars. In this work, we employ two different cosmological simulations, dubbed here with _CosmoSim1_(Barai et al. 2018) and _CosmoSim2_(Valentini et al. 2021). In particular, satellite locations are randomly selected amongst the dark-matter potential wells in a \(z=6.5\) snapshot of each simulation, after running AMIGA halo finder code (Knollmann & Knebe 2009) with a minimum of 20 bound particles to define a halo. The dark-matter halos are considered only if their distance from the centre of the main halo is, at most, \(D_{z=6.5}\). _CosmoSim1_ and _CosmoSim2_ represent our fiducial models, for they have been derived from a complex framework specially designed Figure 2: Schematic representation of an ALMA observation of a quasar (yellow star). Here, two galaxy satellites (red stars) are detected, whereas a third one remains outside the observer’s cylinder and would be detected only through a different line of sight. to describe the environment of high-\(z\) quasars.3 Although both _CosmoSim1_ and _CosmoSim2_ describe the evolution of two very massive dark-matter halos, their virial masses and radii are quite diverse, being \(M_{\rm vir}=3.3\times 10^{12}\,\mathrm{M}_{\odot}\), \(R_{\rm vir}=66\) kpc and \(M_{\rm vir}=1.2\times 10^{12}\,\mathrm{M}_{\odot}\), \(R_{\rm vir}=47\) kpc at \(z=6\), for _CosmoSim1_ and _CosmoSim2_, respectively,4 and this increases the generality of the investigation. 
We also note that the feedback prescriptions for the active galactic nuclei are significantly different in the two simulations, and this has a non-negligible impact on the dark-matter distribution (see the effect on the merger history in Zana et al. 2022). Footnote 3: We note also that only cosmological simulations, amongst our models, succeed in reproducing the complex filamentary structures expected to form during the collapse of primordial dark-matter fluctuations. Footnote 4: We define the virial mass as \(M_{\rm vir}=M_{200}=\frac{\pm 2}{3}\,\mathrm{mod}_{0}\rho_{c}R_{\rm 300}^{3}\), where \(\rho_{c}\) is the critical density of the Universe and \(R_{200}\) is the radius enclosing 200 times \(\rho_{c}\). The models based on the Plummer, Hernquist, and NFW profiles adopt a set of parameters (\(a\) and \(\mathcal{N}\)) to force their CDFs to have \(P(r)\gtrsim 0.99\) for \(r=D_{z=6.5}\). In the rare occasion where a galaxy is spawned at \(r>D_{z=6.5}\), the extraction process is repeated. Thanks to this expedient, the number density of companion galaxies within the spherical volume of radius \(D_{z=6.5}\) is directly comparable amongst all the models. We note that these models do not claim to completely cover all the possible satellite distributions, but rather aim to probe the implications of highly diverse environments. ## 3 Results Figure 3 shows the outcome of the Monte Carlo simulations for the aforementioned nine satellite distributions. In most cases, the number of detected companions is significantly smaller than \(N\). For example, in the fiducial distribution model _CosmoSim1_, with three seeded satellites, we estimate \(P(N_{\rm o}=0|N=3)=0.29\). This indicates that the probability of no detection due to the ALMA FoV is \(\sim 30\%\), despite the presence of three satellites in the nearby environment. We also observe that satellite distributions based on simulations _CosmoSim1_ and _CosmoSim2_ produce almost identical results, although they refer to two distinct quasar hosts. This hints at a major bias produced by the telescope observational volume that could explain why observations of quasar satellites return so different numbers in detecting serendipitous [CII] emitters. The spherical geometry of _NFW25_ produces a very similar trace to the simulation-oriented distributions and, therefore, could describe a similar distribution of galaxies, on average. On the other hand, those distributions where satellites are almost always spawned in close proximity to the quasar, (i.e._Plummer25_, or _Hernquist5_), result in a nearly one-to-one correspondence between seeded and detected satellites. Interestingly, if we distribute a variable number of companion galaxies by following an empirical cross-correlation function quasar-LAEs (_CCF_), galaxies are generated farther from the quasar and the detection efficiency drops at higher \(N\). Finally, _Homogeneous_ and _NFW100_ yield similar outcomes to each other, even if they are in principle based on very different distribution laws. It is worth mentioning that _Homogeneous_ represents a control scenario, with no additional conjecture on the galaxy distribution. The left panel of Figure 4 shows \(P(N_{\rm o}=N)\) for all the probed satellite populations. As the seeded population grows, it becomes increasingly challenging to detect all the quasar satellites. \(P(N_{\rm o}=N)\) is always lower than 1 for \(N>0\) and it decreases more rapidly the less compact the distribution is. In Sec. 
4, we discuss the possibility of sampling a cylinder with a radius three times larger using ALMA. The right panel of Figure 4 presents the result for that case, indicating a much higher detection rate. A general quasar environment is better quantified with a mean companion number, rather than their precise count, given the natural variance of a realistic system. Therefore, we introduce a further step to take into account a scatter in the number of satellites that can be seeded. In particular, we connect the number \(N\) of seeded satellites to the average number \(\langle N\rangle\) of a Poissonian distribution5 Footnote 5: The use of a Poissonian distribution in this context is a commonly made choice. We note that other scatter-laws could be adopted with very similar results. \[P(N|\langle N\rangle)=\frac{\langle N\rangle^{N}}{N!}e^{-\langle N\rangle}. \tag{5}\] Hence, we can build the conditional probability \[P(N_{\rm o},N|\langle N\rangle)=P(N_{\rm o}|N)\cdot P(N|\langle N\rangle), \tag{6}\] and the likelihood function for the average seeded number \(\langle N\rangle\), given a single observation \(N_{\rm o}\), \(P(N_{\rm o}|\langle N\rangle)\): \[P(N_{\rm o}|\langle N\rangle)=\sum_{N}P(N_{\rm o},N|\langle N\rangle). \tag{7}\] Finally, we compute the distribution \(\mathcal{P}(\langle N\rangle)\) by including all the detections from V20 and considering them to be equiprobable. Operatively, we multiply together the likelihood associated with every detection and assume a flat prior over \(\langle N\rangle\): \[\mathcal{P}(\langle N\rangle)=\frac{\prod_{N_{\rm o}}P(N_{\rm o}|N)\gamma^{f( N_{\rm o})}}{\sum_{N_{\rm e}}\left[\prod_{N_{\rm o}}P(N_{\rm o}|N)\gamma^{f(N_{\rm o })}\right]}, \tag{8}\] where \(f(N_{\rm o})\) is the frequency of detection reported in Figure 1. The final likelihood functions, reproduced in Figure 5 for all the distribution models, show that each model has a characteristic peak. As mentioned before, the \(CCF\) model predicts the highest number of average intrinsic satellites in order to justify current observations. On the other hand, compact distributions can describe observations with the smallest possible number of satellites. In the specific cases of _Plummer20_ and _Hernquist20_ - both sharply peaking around \(\langle N\rangle=1\) - V20 observations with \(N_{\rm o}=2\) or 3 would be explainable entirely through Poissonian fluctuations. ## 4 Discussion There is currently not enough observational data to constrain the spatial distribution of galactic companions around high redshift quasars. For this reason, we have explored various possibilities, each with different degrees of realism and assumptions. While numerical simulations describe the most realistic scenarios, the other radial profiles also offer a reasonable representation of reality, especially due to the limited number of objects involved. Apart from the \(CCF\) model, which, nevertheless, has its roots in dedicated observational campaigns, all the spherically symmetric distributions adopted in this work are the most commonly used radial laws in astrophysics to model a gravitationally bound system, even if they are not directly connected to the scales and masses studied here. _Homogeneous_ is the model with the fewest assumptions, where galaxies are randomly seeded in the surrounding volume. The other spherically symmetrical models tend to refine the guess and distribute the galaxies preferentially closer to the central quasar. 
_Plummer25_, _Plummer100_, _Hernquist5_, and _NFW25_ use different radial laws, with different scale radii. The most compact distributions, such as _Plummer25_, _Plummer100_, and _Hernquist5_, are less observationally motivated, especially at high \(N\), where numerous companions are spawned within the virial radius of the quasar host, whereas the vast majority of V20 satellites are detected at \(r>100\) kpc. On the other hand, the spherical model _CCF_ describes a relation that is observed to hold for scales larger than \(D_{z\simeq 6.5}\), beyond the limit of 1000 km s\({}^{-1}\). More consistent models, such as _CosmoSim1_, _CosmoSim2_, _NFW25_, and _NFW100_, predict \(\langle N\rangle\gtrsim 2\) on the basis of current observations. This finding has numerous fundamental implications: the geometrical bias of ALMA can explain the scatter in recent observations.6 In other words, we support the scenario in which high-\(z\) quasars have an average of two massive companions, with [CII] luminosities \(L_{\rm[CII]}\gtrsim 10^{8}\,\rm L_{\odot}\), but their detection depends on the volume probed by the telescope and thus on the specific LoS of the observation. Footnote 6: This is valid even before the addition of the Poissonian scatter. This result suggests that, for those quasar fields in which no satellite has been observed, mosaic observations with ALMA would result in the detection of 2 new galactic companions per quasar field, on average (in the cases described by our fiducial models, \(\langle N\rangle\simeq 2\)). In the right panel of Figure 4, we report the probability \(P(N_{\rm o}=N)\) as a function of the number of seeded satellites if the FoV is expanded to \(R_{\rm FoV}\simeq 270\) kpc, mimicking an observation with an additional layer of 9 adjacent "standard" FoVs around the original one. For _CosmoSim1_ and _CosmoSim2_, in particular, the probability of detecting the whole population of 2 seeded satellites is 0.72, which is almost six times larger than the probability estimated for a single ALMA pointing (\(R_{\rm FoV}\simeq 90\) kpc). If the intrinsic number of satellites \(N\) were larger instead, \(P(N_{\rm o}\!=\!N)\gtrsim 0.5\), up to \(N\sim 6\). Accordingly, follow-up priority should be given to those quasar fields which, so far, have shown the smallest number of companions, given the high probability of detecting the whole missing population with an individual mosaic observation.

Figure 3: Monte Carlo simulations for all the matter distribution models adopted, i.e. _Homogeneous_, _Plummer25_, _Plummer100_, _Hernquist5_, _NFW5_, _NFW15_, _CCF_, _CosmoSim1_, and _CosmoSim2_. Each panel shows the probability \(P(N_{\rm o}|N)\) of detection of \(N_{\rm o}\) satellites, given \(N\) satellites spawned.

A further consideration can be drawn if we examine our fiducial model, which predicts \(\langle N\rangle\simeq 2\), in very close agreement with the twin model _CosmoSim2_. In Zana et al. (2022), we found that a total of 2-3 satellites were detectable, having a [CII] luminosity higher than a current sensitivity threshold of about \(10^{8}\) L\({}_{\odot}\). These results were based on a post-processing study of a suite of cosmological zoom-in simulations, including the evolution of baryons and numerous state-of-the-art sub-grid prescriptions. In the present investigation, we have generalized the dark-matter distribution of satellites of such simulations (with no assumption on the mass of the halo) and demonstrated that current observations (V20) agree remarkably well with our previous prediction.
As a consequence, the dense quasar-host environment studied in Zana et al. (2022), and now confirmed by observations via our geometrical interpretation, likely describes real-quasar systems and this implies that most powerful high-\(z\) quasars would live in the densest regions of the Universe. Finally, we provide an additional interpretation by comparing our predicted number of satellites in the most realistic cases, with recent number-density measurements in a field galaxy population. In the context of the large program ASPECS, Decarli et al. (2020) and Uzgil et al. (2021) have reported no detection of [CII]-emitters with \(L_{\rm[CII]}>1.89\times 10^{8}\) L\({}_{\odot}\), in the redshift range \(z\sim 6-8\), within a comoving volume of about 12500 Mpc\({}^{3}\), resulting in an upper limit on the galaxy density of \(3.4\times 10^{-4}\) Mpc\({}^{-3}\). If we consider our predictions to take place within the quasar-centered sphere with radius \(D_{z=6.5}\) (the volume enclosing an ALMA cylinder with every possible orientation), our fiducial models lead to a density \(\gtrsim 0.2\) Mpc\({}^{-3}\), i.e., larger by almost a factor 600 with respect to the field environment probed by ASPECS. Given that no [CII]-emitters have been found in the Hubble Ultra Deep Field via ASPECS, this result still holds true even if we compute our detected companions within a larger sphere, where the density could be reduced. We conclude by observing that, the analysis conducted and its findings extend beyond ALMA, as they can be applied to any instrument with a limited FoV, due to the straightforward geometric nature of our study. ## 5 Summary and conclusions Through various Monte Carlo simulations, we evaluated the geometrical bias of the ALMA telescope on the detection rate of high-\(z\) quasar satellites (see Figure 3). We convolved the resulting probability with a set of Poissonian distributions in order to estimate the intrinsic number of detectable companions, which are orbiting around a given quasar potential well. We remark that each detectable satellite represents a galaxy massive enough to be detected as a [CII] emitter if it lies within the telescope geometry (see Figure 2). We produced different likelihood functions for the average number of orbiting satellites, in order to explain the most recent ALMA observations (see Figure 1). We have demonstrated that: Figure 4: Probability of total detection, \(P(N_{\rm o}=N)\), for all the density profiles adopted in this work. _Left_: standard case of Figure 3, where \(R_{\rm FoV}\simeq 90\) kpc. _Right_: mosaic-case with \(R_{\rm FoV}\simeq 270\) kpc. Figure 5: Posterior functions for the different galaxy distributions adopted in this work. The curves show the probability that an average number of massive satellites (\(N\)) is actually orbiting around a \(z\gtrsim 6\) quasar, given the observed data from V20. The colour coding is the same as Figure 4 * The telescope bias can entirely explain the differences amongst the number of detected high-\(z\) quasar companions. Hence, every quasar could conceivably reside within almost the same environment, but we would detect only a part of the orbiting galaxies, depending on our line of sight. * If we consider the most physically motivated distribution profiles, e.g., _CosmoSim1_ and _CosmoSim2_, we can infer an intrinsic number of \(\sim 2\) satellites, massive enough to be detected by ALMA as [CII] emitters with \(L_{\rm[CII]}\gtrsim 10^{8}\,\rm L_{\odot}\) and orbiting within \(\sim 1.3\) Mpc from the quasar. 
* Interestingly, we expect to discover more satellites, in the case of _CosmoSim1_ and _CosmoSim2_, via a simple mosaic observing campaign which targets those quasars where no companion galaxies have been observed so far. We predict that a single additional layer of ALMA FoVs around those objects would be enough to observe the total population of \(N\sim 2\) satellites with a probability of 0.72, compared to 0.13 in the single-observation case. In general, we expect such a campaign to detect, half of the time, the total population of companions, up to 6 satellites, i.e. \(P(N_{\rm o}=N|N)\gtrsim 0.5\) for \(N\lesssim 6\). * Our predicted number of satellites (\(ii\)) in the case _CosmoSim1_ is compatible with the analysis performed in Zana et al. (2022). In other words, ALMA observations are consistent with cosmological hydro-dynamical simulations evolving \(z\sim 6\) quasars in the most massive dark-matter halos with \(M_{\rm vir}>10^{12}-10^{13}\,\rm M_{\odot}\), corresponding to fluctuations of about \(3-4\sigma\) in the density field. * We compared our fiducial predictions with a recent survey from the ASPECS program, where no [CII] emitters have been confirmed in the field above \(L_{\rm[CII]}=1.89\times 10^{8}\,\rm L_{\odot}\) in our same redshift range, attesting once more the over-dense nature of high-\(z\) quasar environments. Future additional ALMA detections of quasar companions will increase our current sample, allowing us to tune this model further and better constrain our predictions. A contribution may also come from the AtLAST telescope (Klaassen et al., 2020) which, despite its low angular resolution, can detect high-\(z\) satellites at a greater distance compared to ALMA. Follow-up observations of V20 galaxies and the possible detection of supplementary lines could confirm the status of companion or update the catalogues of quasar satellites. We note, however, that the Poissonian scatter included in our analysis takes into account such potential small variations. Moreover, the next deep JWST (Gardner et al., 2023) observational campaigns of quasar environments may also help shed light on the spatial distribution of satellite galaxies, given the much larger FoV. However, the galactic tracers probed will be completely different with respect to ALMA and will be subjected to dust extinction. For this reason, a new and appropriate set of predictions is required. ###### Acknowledgements. Funded by the European Union (ERC, WINGS, 101040227). Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for the TZ and VA acknowledge support from INAF-PRIN 1.05.01.85.08. The authors greatly thank the anonymous referee for useful comments which improved the quality of this manuscript.
Whether quasars at $z \gtrsim 6$ are located in the most massive dark-matter halos of the Universe is still a matter of debate. Most theoretical studies support this scenario, but current observations give contradictory results when the halo mass is probed through the detection rate of quasar companion galaxies. Feedback processes from supermassive black holes and dust obscuration are often invoked to explain this discrepancy, but the impact of these effects is complex and not clearly understood. This paper aims to improve the interpretation of current far-infrared observations by taking into account the cosmological volume probed by the Atacama Large Millimeter/submillimeter Array Telescope, and to explain the observational discrepancies. We statistically investigate the detection rate of quasar companions in current observations and compare these detection rates with the expected distributions obtained from various theoretical models,
2309.03798
Managing the Uncertainty in System Dynamics Through Distributionally Robust Stability-Constrained Optimization
With the increasing penetration of Inverter-Based Resources (IBRs) and their impact on power system stability and operation, the concept of stability-constrained optimization has drawn significant attention from researchers. In order to manage the parametric uncertainty due to inaccurate modeling that influences the system dynamics, this work proposes a distributionally robust stability constraint formulation. However, the uncertainty of system dynamic parameters influences the stability constraints indirectly through a nonlinear and implicit relationship. To address this issue, a propagation mechanism from the uncertainty of the system dynamic parameters to the stability constraint coefficients is established. Since these coefficients are connected to the uncertain parameters through highly nonlinear and implicit functions, an approximation approach utilizing Taylor expansion and the Delta method is developed to estimate the statistical moments of the stability constraint coefficients based on the first and second-order derivatives, with which an ambiguity set for the distributionally robust optimization can be formulated. The accuracy of the uncertainty propagation as well as the effectiveness of the distributionally robust stability constraints are demonstrated through detailed case studies in the modified IEEE 39-bus system.
Zhongda Chu, Fei Teng
2023-09-07T15:51:24
http://arxiv.org/abs/2309.03798v2
Managing the Uncertainty in System Dynamics Through Distributionally Robust Stability-Constrained Optimization ###### Abstract With the increasing penetration of Inverter-Based Resources (IBRs) and their impact on power system stability and operation, the concept of stability-constrained optimization has drawn significant attentions from researchers. In order to manage the parametric uncertainty due to inaccurate modeling that influences the system dynamics, this work proposes a distributionally robust stability constraint formulation, where the propagation mechanism from uncertainty of the system dynamic parameters to the stability constraint coefficients is established and managed. Since these coefficients are connected to the uncertain parameters through highly nonlinear and implicit functions, an approximation approach utilizing Taylor expansion and the Delta method is developed to estimate the statistical moments of the stability constraint coefficients based on the first and second-order derivatives, with which an ambiguity set for the distributionally robust optimization can be formulated. The accuracy of the uncertainty propagation as well as the effectiveness of the distributionally robust stability constraints are demonstrated through detailed case studies in the modified IEEE 39-bus system. Stability constraints, distributionally robust optimization, uncertainty management, system scheduling ## I Introduction The ongoing trend towards a clean and sustainable power system due to the environmental concerns requires wide-scale integration of Inverter-Based Resources (IBRs). Together with the retirement of conventional Synchronous Generators (SGs), these power electronics interfaced devices bring new challenges to power system operation and stability due to their significantly distinguished characteristics in power generation and conversion. Specifically, the decline of rotational inertia, frequency and voltage support as well as the loss of stable and inherent synchronization mechanism have been identified as the main threats for the future high IBR-penetrated system [1]. In order to maintain stable and secure power system operation by coordinating different resources in the system and maximize the overall economic profit, it is necessary to develop stability constraints and incorporate them during system scheduling process. The works in [2, 3, 4] focus on the frequency stability issues in low-inertia systems, where the frequency stability constraints are derived based on either analytical or data-driven approaches and these highly nonlinear frequency constraints are further reformulated to mixed-integer linear/Second-Order Cone (SOC) forms. Voltage stability constrained optimal system scheduling is investigated in [5, 6, 7] to ensure the static voltage stability by improving the power transfer capability and system strength at the IBR buses. The voltage stability index is derived based on the static voltage stability margin and the singularity of the power flow Jacobian matrix. The NP-hard problem due to the nonlinear and non-convex voltage stability constraints is then coped with using different approaches, namely model decomposition, support vector machine and boundary-aware regression respectively. Synchronization stability is studied in [8, 9, 10] where the stability is achieved by explicitly confining the rotor angle deviations or the minimum eigenvalue of the Hessian matrix of the energy function. 
The differential equations are transformed into algebraic equations using the trapezoidal integration method or formulated as a multi-agent potential game. The uncertainties in the system dynamics, however, have received less attention, which may result in unstable or over-conservative operation. In order to address this issue, the authors in [11] propose a probabilistic transient stability constrained optimal power flow to consider the correlated uncertain wind generation. A group search optimization and the point estimate method are further applied to solve the probabilistic constrained optimization problem. However, a single machine equivalent method is used to derive the system transient stability constraint. This is improved in [12], where a deep sigma point process is proposed to predict the transient stability model based on time-domain simulation while considering the uncertainty related to the renewable generation and the loads. The resulting nonlinear programming problem is solved by the primal-dual interior point method. A coordinated method for preventive generation rescheduling and corrective load shedding to maintain power system transient stability under uncertain wind power variation is proposed in [13], where the preventive and corrective control is coordinated by a risk parameter in the two-stage bi-level optimization. The nonlinear stability constraints are linearized based on the extended equal area criterion and the trajectory sensitivity. The stability issues brought by renewable energy sources with non-Gaussian uncertainties in isolated microgrids are also investigated in [14], where the stability chance constrained optimal power flow is formulated as a bi-level optimization with the lower level handling the stability index through semi-definite programming. The Gaussian mixture model is applied to represent the non-Gaussian RES uncertainties and the analytical sensitivity analysis is used to reformulate chance constraints into linear deterministic versions. Nevertheless, these existing works concentrate on the uncertainty of the renewable generation, which indirectly influences the system stability through its impact on operating points, whereas the uncertainty associated with the system dynamic models, due to factors such as inaccurate modeling and unknown component parameters, has not been investigated. On the one hand, the parameters of the conventional system components, such as SGs, may vary due to environmental changes. On the other hand, considering that the manufacturers are reluctant to share the specific control algorithms and parameters of IBRs, their accurate dynamic models are unknown to system operators, thus also suffering from uncertainty. This would become more evident in the future system with higher IBR penetration. Some works have been done to account for these uncertainties in parameter estimation and control design. A Bayesian inference framework has been proposed in [15] to estimate the SG parameters based on a polynomial-based reduced-order representation. The authors in [16] develop a gray-box method to estimate the parameters of wind farm controllers, based on the measurements of frequency domain equivalent impedance combined with non-parametric impedance identification. The uncertainty of AC grid impedance is considered in [17] for HVDC systems during the control design process to ensure the robustness of the stability and performance.
Reference [18] considers the inverter model uncertainty in the active disturbance rejection control for grid-connected inverters, which guarantees a stable operation of inverters under uncertainties. However, a unified modeling and management framework to consider the uncertainties in system dynamics and their impact on system stability for stability-constrained optimization is yet to be investigated. In this context, this work proposes a distributionally robust stability constrained optimization model to ensure the system stability under system dynamic uncertainties. The main contributions of this paper are identified as follows: * A stability-constrained optimization framework in highly IBR-penetrated systems considering the uncertainty of the system dynamics is proposed based on the distributionally robust approach, where chance constraints on system stability are formulated in Second-Order Cone (SOC) form. * Uncertainties associated with the constraint coefficients in the optimization model as well as the stability indices due to the concerned uncertain parameters are analytically quantified, by propagating the statistical moments through nonlinear and implicit functions. * The accuracy and the effectiveness of the proposed approach are demonstrated with the modified IEEE 39-bus system. The impact of system dynamic uncertainties and the uncertainty levels on system operation within the proposed distributionally robust stability-constrained framework is thoroughly investigated with detailed case studies. The remainder of this paper is structured as follows. In Section II, a unified representation of the system stability constraints is introduced. Section III derives the uncertainty of the stability constraint coefficients from the uncertain parameters in the system dynamics. The distributionally robust stability chance constrained system scheduling model is formulated in Section IV. An alternative formulation based on robust distributional learning is discussed in Section V, followed by case studies in Section VI. Finally, Section VII draws the main conclusions and discusses the outlook of the study.

## II Unified Representation of System Stability Constraints

In order to analyze and assess system stability, a number of stability indices have been established based on system dynamic models of different forms, such as state-space models and impedance models. Due to their close relevance to system-level features, such as system inertia, impedances, and operating points, many of these stability indices have been formulated as operational constraints during the system scheduling process, to ensure stable system operation. As a result, we express a stability index in the following general form:

\[\mathbf{g}(X)\geq\mathbf{g}_{\mathrm{lim}}, \tag{1}\]

where \(X\in\mathbb{R}^{n}\) is the decision variable in the system optimization model and \(\mathbf{g}_{\mathrm{lim}}\in\mathbb{R}\) is the stability index limit, above which the system remains stable. Some examples are given here to illustrate the generality of (1). Based on the system state-space model, the small-signal stability can be characterized by the eigenvalues of the closed-loop state matrix \(A\). Therefore, in this case, (1) can be rewritten as \(\Re(\lambda_{\mathrm{min}}\left(A(X)\right))<0\), where \(X\) could represent system operating points.
As for the voltage stability, many indices are proposed based on the power transfer capability [7, 19], where the stability index can be represent as a function of the power flow injection at the concerned buses and the equivalent grid impedances. Moreover, considering the impedance model of a general multi-machine system, the small-signal stability criterion due to the grid-following (GFL) IBRs can be developed in the form of generalized short-circuit ratio [20], i.e., \(\mathrm{gSCR}\geq\mathrm{gSCR}_{\mathrm{lim}}\), where the \(\mathrm{gSCR}\) represents the connectivity of the network, depending on the system topology and impedances. However, it is also understandable that, due to the strong nonlinearity, it is in general difficult or even impossible to directly include these stability indices as operational constraints in power system optimization models. A unified framework that can effectively reformulate the stability constraints, in a general way, to fit any typical power system optimization model is presented in [21], which is briefly introduced below. The target of the stability constraint reformulation is to describe the boundary of the stability feasible region through a simple structure. The estimated expression of the nonlinear functions \(\mathbf{g}\) is defined as follows: \[\tilde{\mathbf{g}}(\mathsf{X})=\mathsf{K}^{\mathsf{T}}\mathsf{X}, \tag{2}\] where \(\tilde{\mathbf{g}}\in\mathbb{R}\) is the estimated function of \(\mathbf{g}\) and \(\mathsf{K}\in\mathbb{R}^{\tilde{k}}\) is the coefficients of \(\mathsf{X}\). \(\mathsf{X}\in\mathbb{R}^{\tilde{k}}\) is the augmented decision variable which may contain \(1\), \(X\) and products of any two elements in \(X\). The resulting products of a binary variable and a continuous terms or bilinear terms can be dealt with through Big-M and binary expansion, which is not discussed here. Substituting (2) into (1) leads to the reformulated system stability constraint as: \[\mathsf{K}^{\mathsf{T}}\mathsf{X}\geq\mathbf{g}_{\mathrm{lim}}. \tag{3}\] In order to find the coefficients \(\mathsf{K}\) that best describes the boundary of the system stability feasible region, the following boundary-aware optimization is utilized: \[\min_{\mathcal{K}} \sum_{\omega\in\Omega_{2}}\left(\mathbf{g}^{(\omega)}-\tilde{ \mathbf{g}}^{(\omega)}\right)^{2} \tag{4a}\] \[\mathrm{s.t.} \mathbf{\tilde{g}}^{(\omega)}<\mathbf{g}_{\lim},\ \forall\omega\in\Omega_{1}\] (4b) \[\tilde{\mathbf{g}}^{(\omega)}\geq\mathbf{g}_{\lim},\ \forall\omega\in\Omega_{3}, \tag{4c}\] with \(\omega=\left(\mathsf{X}^{(\omega)},\,\mathbf{g}^{(\omega)}\right)\in\Omega\) being the entire data set corresponding to the stability constraint and \((\cdot)^{(\omega)}\) denoting the quantity evaluated at data point \(\omega\). The data set \(\Omega\) is generated by evaluating \(\mathbf{g}\) in representative system conditions. The sets \(\Omega_{1},\,\Omega_{2}\) and \(\Omega_{3}\) are the subsets of \(\Omega\), whose relationship is defined as below: \[\Omega =\Omega_{1}\cup\Omega_{2}\cup\Omega_{3} \tag{5a}\] \[\Omega_{1} =\left\{\omega\in\Omega\mid\mathbf{g}^{(\omega)}<\mathbf{g}_{ \lim}\right\}\] (5b) \[\Omega_{2} =\left\{\omega\in\Omega\mid\mathbf{g}_{\lim}\leq\mathbf{g}^{( \omega)}<\mathbf{g}_{\lim}+\nu\right\}\] (5c) \[\Omega_{3} =\left\{\omega\in\Omega\mid\mathbf{g}_{\lim}+\nu\leq\mathbf{g}^{ (\omega)}\right\}, \tag{5d}\] with \(\nu\) being a constant parameter. 
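As an illustration, a minimal sketch of the boundary-aware fit (4) is given below, assuming the data set has already been partitioned into \(\Omega_{1}\), \(\Omega_{2}\), \(\Omega_{3}\) according to (5) and using cvxpy purely as an example solver interface; the strict inequality (4b) is imposed through a small margin, and the function name is ours.

```python
import numpy as np
import cvxpy as cp

def fit_stability_coefficients(X1, X2, X3, g2, g_lim, eps=1e-6):
    """Boundary-aware fit of K in g_hat(X) = K^T X, following (4)-(5):
    least squares on the near-boundary set Omega_2 while forcing correct
    classification of Omega_1 (below-limit) and Omega_3 (above-limit) points.
    X1, X2, X3: rows of augmented decision variables for each subset;
    g2: true stability indices for the Omega_2 samples."""
    K = cp.Variable(X2.shape[1])
    objective = cp.Minimize(cp.sum_squares(X2 @ K - g2))   # (4a)
    constraints = []
    if len(X1):
        constraints.append(X1 @ K <= g_lim - eps)          # (4b), strict via margin
    if len(X3):
        constraints.append(X3 @ K >= g_lim)                # (4c)
    cp.Problem(objective, constraints).solve()
    return K.value
```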
Given (4b) and (5b), all the data points whose real stability indices \(\mathbf{g}^{(\omega)}\) are smaller than the limits can be identified correctly by the estimated function \(\tilde{\mathbf{g}}^{(\omega)}\). Ideally, it is also desired to correctly identify all the above-limit data points, which would make it a classification problem. However, this may cause infeasibility due to the restricted structure defined in (2). Therefore, a parameter \(\nu\in\mathbb{R}^{+}\) is introduced to define \(\Omega_{2}\) and \(\Omega_{3}\) as in (5c) and (5d). In this way, all the data points in \(\Omega_{3}\) will be classified correctly and misclassification can only occur in \(\Omega_{2}\), thus being conservative. Furthermore, \(\nu\) should be chosen as small as possible while ensuring the feasibility of (4).

## III Uncertainty Quantification

This work focuses on the uncertainty associated with the system dynamic model, such as the impedances of SGs and IBRs. These parameters influence the system stability performance and thus the stability constraints, which eventually introduces uncertainty into the constraint coefficients \(\mathsf{K}\). In this section, the uncertainty of the stability constraint coefficients is derived analytically by propagating the statistical moments through nonlinear and implicit functions from the uncertain system parameters.

### _Uncertainty Representation_

Although the parameters of different system components such as generators, transformers and transmission lines are typically provided by manufacturers, their actual values can deviate substantially from the manufacturer ones [22], due to a number of factors including magnetic saturation, internal and ambient temperature, unit aging, and the effect of centrifugal forces on winding contacts and incipient faults within the machines [23]. Moreover, since the detailed control logic and parameters of commercial IBRs are typically not shared by the manufacturers, their exact dynamic performance cannot be modeled perfectly by system operators [24]. Methodologies have also been proposed for model identification and parameter estimation of conventional SGs and IBRs based on either offline testing methods or online measuring methods [15, 25]. However, good accuracy cannot be guaranteed due to measurement errors and impractical assumptions [15, 16], which has to be accounted for in the stability-constrained optimization. We first redefine the nonlinear expression of the stability index \(\mathbf{g}\) as a function of the decision variables \(\mathsf{X}\) and the uncertain parameters \(\mathsf{p}\), which converts (1) into:

\[\mathbf{g}(\mathsf{X},\mathsf{p})\geq\mathbf{g}_{\lim}. \tag{6}\]

Based on the prior knowledge about the uncertainty, different approaches can be applied to manage the uncertainty in the stability-constrained optimization. For instance, robust optimization can be formulated if only bounds of the uncertain parameters are known:

\[\min_{\mathsf{X}\in\mathcal{X}}\quad J(\mathsf{X}) \tag{7a}\]
\[\mathrm{s.t.}\quad\min_{\mathsf{p}\in[\underline{\mathsf{p}},\bar{\mathsf{p}}]}\mathbf{g}(\mathsf{X},\mathsf{p})\geq\mathbf{g}_{\lim}, \tag{7b}\]

where \(J(\mathsf{X})\) represents the objective function of the stability-constrained optimization (e.g., the system operation cost), \(\mathcal{X}\) the feasible region determined by other constraints and \(\mathsf{p}\in[\underline{\mathsf{p}},\bar{\mathsf{p}}]\) the uncertain set.
Alternatively, if a detailed distribution of the uncertainty is available, a chance constrained optimization in the following form can be developed: \[\min_{\mathsf{X}\in\mathcal{X}} J(\mathsf{X}) \tag{8a}\] \[\mathrm{s.t.} \Pr\left\{\mathbf{g}(\mathsf{X},\mathsf{p})\geq\mathbf{g}_{\lim }\right\}\geq\eta, \tag{8b}\] with \(\eta\) being the pre-defined confidence level. Although a Gaussian assumption is typically made here, it may be unrealistic and lead to biased estimation [15]. Therefore, this work focuses on a distributionally robust formulation to deal with the situation where only the statistical moments are known based on historical data or measurements, whereas the specific probabilistic distribution is unknown. The general expression takes the form: \[\min_{\mathsf{X}\in\mathcal{X}} J(\mathsf{X}) \tag{9a}\] \[\mathrm{s.t.} \min_{\mathsf{D}\in\mathcal{P}}\Pr\left\{\mathbf{g}(\mathsf{X}, \mathsf{p})\geq\mathbf{g}_{\lim}\right\}\geq\eta, \tag{9b}\] where \(\mathsf{D}\) is the uncertain parameter distribution and \(\mathcal{P}\) the ambiguity set. ### _Uncertainty Propagation_ Due to the complex relationship between the stability index \(\mathbf{g}\) and \(\mathsf{X}\), \(\mathsf{p}\), it is challenging enough to reformulate the original nonlinear constraint \(\mathbf{g}(\mathsf{X},\mathsf{p})\geq\mathbf{g}_{\lim}\) into a mathematically tractable form, let alone its distributionally robust form (9b). Therefore, we consider a simplified representation as in (3). Derived based on (4) and (5), this reformulated stability constraint transforms the uncertainty from system parameters \(\mathsf{p}=[\mathsf{p}_{1},...,\mathsf{p}_{p},...,\mathsf{p}_{\bar{p}}]^{ \mathsf{T}}\in\mathbb{R}^{\bar{p}}\) with \(\bar{p}\) being the total number of uncertain parameters, to the stability constraint coefficients, \(\mathsf{K}=[\mathsf{K}_{1},...,\mathsf{K}_{k},...,\mathsf{K}_{\bar{k}}]^{ \mathsf{T}}\in\mathbb{R}^{\bar{k}}\), with \(\bar{k}\) being the total number of the coefficients. However, it is challenging to identify the uncertainty associated with these coefficients, due to their complex dependence on the uncertain parameters. In order to achieve this, we define \(\mathsf{K}_{k}\) as the following implicit function of the training data, \(\left(\mathsf{X}^{\Omega},\mathbf{g}^{\Omega}\right)\): \[\mathsf{K}_{k}=h_{k}\left(\mathsf{X}^{\Omega},\mathbf{g}^{\Omega}\left(\mathsf{ X}^{\Omega},\mathbf{p}\right)\right), \tag{10}\] since it is determined by solving the optimization problem (4). In (10), \(\mathsf{X}^{\Omega}=\left[...,\mathsf{X}^{(\omega)},...\right]^{\mathsf{T}}\) is the vector containing \(\mathsf{X}^{(\omega)},\,\forall\omega\in\Omega\) and similarly, \(\mathbf{g}^{\Omega}(\mathsf{X}^{\Omega},\mathbf{p})=\left[...,\mathbf{g}^{( \omega)},...\right]=\left[...,\mathbf{g}(\mathsf{X}^{(\omega)},\mathbf{p}),...\right]\). Since it is the uncertainty of the system parameter \(\mathsf{p}\) that is of interest here, the decision variable without uncertainty \(\mathsf{X}^{\Omega}\) in (10) is omitted in the following derivation for clarity, i.e., \(\mathsf{K}_{k}=h_{k}\left(\mathsf{g}^{\Omega}\left(\mathsf{p}\right)\right)=h _{k}\circ\mathbf{g}^{\Omega}(\mathsf{p})\hat{=}f_{k}(\mathsf{p})\). Next, we derive the expectation and variance of \(\mathsf{K}_{k}\) from that of \(\mathsf{p}\). For the expectation, Taylor expansion is utilized. 
Expand \(f_{k}(\mathsf{p})\) in a Taylor series around the expectation of uncertain parameters \(\mu_{\mathsf{p}}\): \[f_{k}(\mathsf{p}) =f_{k}(\mu_{\mathsf{p}})+\nabla_{f_{k}}(\mu_{\mathsf{p}})( \mathsf{p}-\mu_{\mathsf{p}})\] \[+\frac{1}{2}(\mathsf{p}-\mu_{\mathsf{p}})^{\mathsf{T}}H_{f_{k}}( \mu_{\mathsf{p}})(\mathsf{p}-\mu_{\mathsf{p}})+\mathcal{O}\left((\mathsf{p}- \mu_{\mathsf{p}})^{3}\right), \tag{11}\] where \(\nabla_{f_{k}}\) and \(H_{f_{k}}\) are the gradient vector and Hessian matrix of \(f_{k}\) with respect to the uncertain parameter \(\mathsf{p}\). Taking the expectation of both sides of (11) and neglecting the higher order term give: \[\mathbb{E}(f_{k}(\mathsf{p})) \approx f_{k}(\mu_{\mathsf{p}})+\nabla_{f_{k}}(\mu_{\mathsf{p}}) \mathbb{E}(\mathsf{p}-\mu_{\mathsf{p}})+\mathrm{Tr}[H_{f_{k}}(\mu_{\mathsf{p }})\Sigma_{\mathsf{p}}]\] \[+\frac{1}{2}\mathbb{E}(\mathsf{p}-\mu_{\mathsf{p}})^{\mathsf{T}}H _{f_{k}}(\mu_{\mathsf{p}})\mathbb{E}(\mathsf{p}-\mu_{\mathsf{p}})\] \[=f_{k}(\mu_{\mathsf{p}})+\mathrm{Tr}[H_{f_{k}}(\mu_{\mathsf{p}}) \Sigma_{\mathsf{p}}], \tag{12}\] where \(\Sigma_{\mathsf{p}}\) is the covariance matrix of \(\mathsf{p}\) and the equality holds since \(\mathbb{E}(\mathsf{p}-\mu_{\mathsf{p}})=\mathbb{E}(\mathsf{p})-\mu_{\mathsf{p }}=0\). Further assuming that different uncertain parameters in \(\mathsf{p}\) are independent with each other, simplifies (12) as follows: \[\mu_{\mathsf{K}_{k}}=\mathbb{E}(f_{k}(\mathsf{p}))\approx f_{k}(\mu_{\mathsf{ p}})+\sum_{p=1}^{\bar{p}}\frac{\partial^{2}f_{k}}{\partial\mathsf{p}_{p}^{2}}( \mu_{\mathsf{p}})\sigma_{\mathsf{p}_{p}}^{2}, \tag{13}\] with \(\sigma_{\mathsf{p}_{p}}^{2}\) being the variance of \(\mathsf{p}_{p}\). As for the covariance of \(\mathsf{K}=[\mathsf{K}_{1},...,\mathsf{K}_{k},...,\mathsf{K}_{k}]^{\mathsf{T}}=[ f_{1}(\mathsf{p}),...,f_{k}(\mathsf{p}),...,f_{\bar{k}}(\mathsf{p})]^{ \mathsf{T}}=f(\mathsf{p})\), it can be derived by applying the Delta method [26], which gives the following results: \[\Sigma_{\mathsf{K}}=\mathrm{Cov}(f(\mathsf{p}))=\nabla_{f}(\mu_{\mathsf{p}}) \Sigma_{\mathsf{p}}\nabla_{f}(\mu_{\mathsf{p}})^{\mathsf{T}}, \tag{14}\] where \(\nabla_{f}\in\mathbb{R}^{k\times\bar{p}}\) denotes the Jacobian matrix of \(f\) with \(\frac{\partial f_{k}}{\partial\mathsf{p}_{p}}\) being the element in the \(k\)-th row and \(p\)-th column. Based on (13) and (14), the first and second-order moment of \(\mathsf{K}\) can be estimated with the second and first-order derivative of \(f(\mathsf{p})\), if the mean and variance of \(\mathsf{p}\) are assumed to be known. However, with \(f(\mathsf{p})=(h\circ\mathbf{g}^{\Omega})(\mathsf{p})\) being the composition of \(h(\mathbf{g}^{\Omega})\) and \(\mathbf{g}^{\Omega}(\mathsf{p})\), these derivatives (\(\nabla_{f}\)) cannot be determined directly. 
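Before the required derivatives are obtained analytically in the following subsections, the moment-propagation formulas (13) and (14) themselves can be illustrated numerically. The sketch below applies them to a toy mapping \(f(\mathsf{p})\), using central finite differences for the gradient and Hessian, and cross-checks the result against Monte Carlo sampling; the mapping, the parameter moments and the Gaussian samples (used only for validation) are illustrative assumptions, and the second-order mean correction is written with the conventional factor of one half on the Hessian term.

```python
# Sketch of the moment propagation in (13)-(14): the mean and covariance of K = f(p)
# are estimated from the mean/variance of p, using finite-difference derivatives of a
# toy mapping f, and cross-checked against Monte Carlo sampling.
import numpy as np

def f(p):
    # toy nonlinear mapping from uncertain parameters p to constraint coefficients K
    return np.array([1.0 / p[0] + 0.5 * p[1], np.log(p[0]) * p[1], p[0] * p[1] ** 2])

mu_p = np.array([0.8, 1.2])
sigma_p = np.array([0.05, 0.08])            # independent parameters (std. deviations)
Sigma_p = np.diag(sigma_p ** 2)

h = 1e-4
p_dim, k_dim = len(mu_p), len(f(mu_p))
jac = np.zeros((k_dim, p_dim))              # nabla_f, used in (14)
d2f = np.zeros((k_dim, p_dim))              # d^2 f_k / d p_p^2, used in (13)
for j in range(p_dim):
    e = np.zeros(p_dim); e[j] = h
    jac[:, j] = (f(mu_p + e) - f(mu_p - e)) / (2.0 * h)
    d2f[:, j] = (f(mu_p + e) - 2.0 * f(mu_p) + f(mu_p - e)) / h ** 2

mu_K = f(mu_p) + 0.5 * d2f @ sigma_p ** 2   # second-order mean correction, cf. (13)
Sigma_K = jac @ Sigma_p @ jac.T             # Delta method, Eq. (14)

rng = np.random.default_rng(1)              # Monte Carlo cross-check
samples = np.array([f(p) for p in rng.normal(mu_p, sigma_p, size=(20000, p_dim))])
print("propagated mean:", np.round(mu_K, 4), " MC mean:", np.round(samples.mean(0), 4))
print("propagated var :", np.round(np.diag(Sigma_K), 5), " MC var :", np.round(samples.var(0), 5))
```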
Apply the chain rule to \(\frac{\partial f_{k}}{\partial\mathsf{p}_{p}}\) leading to: \[\frac{\partial f_{k}}{\partial\mathsf{p}_{p}}=\frac{\partial h_{k}}{\partial \mathbf{g}^{\Omega}}\cdot\frac{\partial\mathbf{g}^{\Omega}}{\partial\mathsf{p} _{p}}, \tag{15}\] with \(\frac{\partial h_{k}}{\partial\mathbf{g}^{\Omega}}\in\mathbb{R}^{1\times[\Omega]}\) and \(\frac{\partial\mathbf{g}^{\Omega}}{\partial\mathsf{p}_{p}}\in\mathbb{R}^{| \Omega|\times 1}\) being the partial derivative of coefficient \(\mathsf{K}_{k}\) with respect to the stability index \(\mathbf{g}^{\Omega}\) and the the stability index \(\mathbf{g}^{\Omega}\) with respect to the uncertain parameter \(\mathsf{p}_{p}\) respectively, which are derived in the following sections. ### _Perturbation Analysis of the Regression Model_ In order to derive \(\frac{\partial h_{k}}{\partial\mathbf{g}^{\Omega}}\), which is implicitly defined through the optimization problem as in (4), the perturbation analysis of the regression model is carried out. Understandably, for \(\frac{\partial h_{k}}{\partial\mathbf{g}^{\Omega}}\) to be well-defined, the optimal solution to (4) has to be continuous and differentiable with respect to \(\mathbf{g}^{\Omega}\). However, when \(\mathbf{g}^{\omega}\) changes, due to the definition in (5), the region to which the data point \(\omega\) belong may change, leading to different constraints and/or objective functions and potentially discontinuous optimal solution variation. To solve this issue, the discontinuous region division in (5) is embedded into the optimization model (4). For the objective function (4a), since only the data points in \(\Omega_{2}\) is of concern, without loss of generality, Gaussian weights are applied to all the data points in order to increase the weights of those in \(\Omega_{2}\) and decrease the weights of the rest: \[\sum_{\omega\in\Omega}\left(\mathbf{g}^{(\omega)}-\tilde{\mathbf{g}}^{(\omega)} \right)^{2}\exp\left(-\frac{\left(\mathbf{g}^{(\omega)}-\left(\mathbf{g}_{\mathrm{ lim}}+\frac{\nu}{2}\right)\right)^{2}}{2s^{2}}\right), \tag{16}\] with \(\tilde{\mathbf{g}}^{(\omega)}=\mathsf{K}^{\mathsf{T}}\mathsf{X}^{(\omega)}\) being the linearized stability index evaluated at \(\omega\) and \(s\) governing the spatial scale of the weight function. Note that the Gaussian weights are not required to have a probabilistic interpretation in this application. In addition, (4b) and (4c) are modified as soft constraints: \[\tilde{\mathbf{g}}^{(\omega)}-\gamma(\mathbf{g}^{(\omega)}-\mathbf{g} _{\mathrm{lim}})M\leq\mathbf{g}_{\mathrm{lim}}, \forall\omega\in\Omega \tag{17a}\] \[\tilde{\mathbf{g}}^{(\omega)}+\gamma(\mathbf{g}_{\mathrm{lim}}+ \nu-\mathbf{g}^{(\omega)})M\geq\mathbf{g}_{\mathrm{lim}}, \forall\omega\in\Omega \tag{17b}\] where \(M\) is a large enough constant and the function \(\gamma(\cdot)\) takes the form: \[\gamma(x)=\frac{1}{1+e^{-2rx}}, \tag{18}\] with the constant \(r\) being the curve steepness. It should be noted that given the above reformulation, the objective function and constraints in the optimization apply to all the data points thus eliminating the strict region division in (5). Rewrite the optimization model defined in (16) and (17) in a compact form: \[\min_{\mathsf{K}} C(\mathsf{K},\mathbf{g}^{\Omega}) \tag{19a}\] \[\mathrm{s.t.} \beta(\mathsf{K},\mathbf{g}^{\Omega})\geq 0. \tag{19b}\] As a result, \(h\) can be defined as \(h(\mathbf{g}^{\Omega})=\arg\min_{\mathsf{K}\ni(\mathbf{g},\mathbf{g}^{\Omega}) \geq 0}C(\mathsf{K},\mathbf{g}^{\Omega})\). 
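A minimal sketch of the resulting smoothed regression, assuming synthetic data and illustrative hyper-parameters \((s,\,r,\,M,\,\nu)\), is given below; it evaluates the Gaussian-weighted objective (16) and the soft constraints (17) with the sigmoid (18), and solves the smooth problem (19) with a general-purpose NLP solver.

```python
# Sketch of the smoothed regression (16)-(19): Gaussian weights emphasize points near
# the stability boundary and the sigmoid gamma(.) softly (de)activates the constraints,
# so the optimal K varies smoothly with the training indices g. Data are synthetic.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_pts, k_bar = 150, 3
X = np.column_stack([np.ones(n_pts), rng.uniform(0, 1, (n_pts, k_bar - 1))])
g = 1.2 * X[:, 1] + 0.9 * X[:, 2] + 0.05 * np.sin(6 * X[:, 2])

g_lim, nu, s, r, M = 1.0, 0.15, 0.2, 20.0, 10.0   # illustrative hyper-parameters

def gamma(x):                         # Eq. (18)
    return 1.0 / (1.0 + np.exp(-2.0 * r * x))

def objective(K):                     # Eq. (16): Gaussian-weighted squared error
    w = np.exp(-(g - (g_lim + nu / 2.0)) ** 2 / (2.0 * s ** 2))
    return np.sum(w * (g - X @ K) ** 2)

def beta(K):                          # Eqs. (17a)-(17b), stacked as beta(K, g) >= 0
    g_hat = X @ K
    c1 = g_lim - g_hat + gamma(g - g_lim) * M          # soft version of (4b)
    c2 = g_hat + gamma(g_lim + nu - g) * M - g_lim     # soft version of (4c)
    return np.concatenate([c1, c2])

x0 = np.linalg.lstsq(X, g, rcond=None)[0]              # plain least-squares warm start
res = minimize(objective, x0=x0, method="SLSQP",
               constraints=[{"type": "ineq", "fun": beta}])
print("smoothed fit K* =", np.round(res.x, 3), "| converged:", res.success)
```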
The Lagrange function of the optimization problem (19) at the solution point \(\mathsf{K}^{*}\) is then written as: \[L(\mathsf{K},\mathbf{\lambda})=C(\mathsf{K},\mathbf{g}^{\Omega})-\mathbf{\lambda}^{ \mathsf{T}}\beta(\mathsf{K},\mathbf{g}^{\Omega}), \tag{20}\] with \(\mathbf{\lambda}\) being the Lagrangian multiplier. The Karush-Kuhn-Tucker (KKT) conditions of (19) take the form: \[\frac{\partial C(\mathsf{K} where \(m\in\mathcal{M}\) is the set of constraints. The sensitivity of the optimal solution \(\mathsf{K}_{k}\) with respect to the parameter \(\mathbf{g}^{(\omega)}\) can be found from perturbation analysis of the KKT conditions. Applying a small perturbation \(d\mathbf{g}^{(\omega)}\) to (21) gives: \[\frac{\partial^{2}C}{\partial\mathsf{K}_{k}\partial\mathbf{g}^{( \omega)}}+\sum_{k^{\prime}=1}^{\tilde{k}}\frac{\partial^{2}C}{\partial\mathsf{ K}_{k}\partial\mathsf{K}_{k^{\prime}}}\frac{d\mathsf{K}_{k^{\prime}}}{d \mathbf{g}^{(\omega)}}-\sum_{m\in\mathcal{M}}\left[\frac{\partial\beta_{m}}{ \partial\mathsf{K}_{k}}\frac{d\lambda_{m}}{d\mathbf{g}^{(\omega)}}\right.\] \[\left.+\lambda_{m}\left.\left(\frac{\partial^{2}\beta_{m}}{ \partial\mathsf{K}_{k}\partial\mathbf{g}^{(\omega)}}+\sum_{k^{\prime}=1}^{ \tilde{k}}\frac{\partial^{2}\beta_{m}}{\partial\mathsf{K}_{k}\partial\mathsf{ K}_{k^{\prime}}}\frac{d\mathsf{K}_{k^{\prime}}}{d\mathbf{g}^{(\omega)}}\right) \right]=0,\,\forall k \tag{22a}\] \[\beta_{m}\frac{d\lambda_{m}}{d\mathbf{g}^{(\omega)}}+\lambda_{m} \left(\frac{\partial\beta_{m}}{\partial\mathbf{g}^{(\omega)}}+\sum_{k^{\prime }=1}^{\tilde{k}}\frac{\partial\beta_{m}}{\partial\mathsf{K}_{k^{\prime}}} \frac{d\mathsf{K}_{k^{\prime}}}{d\mathbf{g}^{(\omega)}}\right)=0,\,\forall m \tag{22b}\] Equation (22) can be further simplified based on the following facts: \[\lambda_{m} =0, \forall m\in\mathcal{M}_{\rm in} \tag{23a}\] \[\frac{d\lambda_{m}}{d\mathbf{g}^{(\omega)}} =0, \forall m\in\mathcal{M}_{\rm in}\] (23b) \[\lambda_{m} \neq 0, \forall m\in\mathcal{M}_{\rm a}\] (23c) \[\beta_{m} =0, \forall m\in\mathcal{M}_{\rm a}\] (23d) \[\frac{\partial\beta_{m}}{\partial\mathbf{g}^{(\omega)}} =0, \forall m\in\mathcal{M}_{\rm a} \tag{23e}\] where \(\mathcal{M}_{\rm in}\) and \(\mathcal{M}_{\rm a}\) represent the set of inactive and active constraints. Equations (23b) and (26f) hold as it is assumed that the sets of active and inactive constraints remain the same after the perturbation \(d\mathbf{g}^{(\omega)}\), which is an appropriate assumption since the analysis is local and the derivatives of the objective function are continuous at the optimum [27, 28]. Combining (22) and (23) gives the following: \[\begin{bmatrix}\mathbf{A}_{\tilde{k}\times\tilde{k}}&\mathbf{B}_{\tilde{k} \times|\mathcal{M}_{\rm a}|}\\ \mathbf{B}_{|\mathcal{M}_{\rm a}|\times\tilde{k}}^{\dagger}&\mathbf{0}_{| \mathcal{M}_{\rm a}|\times|\mathcal{M}_{\rm a}|}\end{bmatrix}\begin{bmatrix} \frac{d\mathsf{K}}{d\mathbf{g}^{(\omega)}}\\ \frac{d\mathbf{A}_{|\mathcal{M}_{\rm a}|}}{d\mathbf{g}^{(\omega)}}\end{bmatrix}= \begin{bmatrix}-\mathbf{c}_{k}\\ \mathbf{d}_{|\mathcal{M}_{\rm a}|}\end{bmatrix}. 
\tag{24}\] \(\mathbf{0}\) is the zero matrix and \(\mathbf{A},\,\mathbf{B},\,\mathbf{c}\) and \(\mathbf{d}\) are matrices and vectors with comfortable dimensions, in which the elements are defined as follows: \[\mathbf{A}_{kk^{\prime}} =\frac{\partial^{2}C}{\partial\mathsf{K}_{k}\partial\mathsf{K}_{ k^{\prime}}}-\sum_{m\in\mathcal{M}_{\rm a}}\lambda_{m}\frac{\partial^{2} \beta_{m}}{\partial\mathsf{K}_{k}\partial\mathsf{K}_{k^{\prime}}} \tag{25a}\] \[\mathbf{B}_{km} =\frac{\partial\beta_{m}}{\partial\mathsf{K}_{k}}\] (25b) \[\frac{d\mathsf{K}}{d\mathbf{g}^{(\omega)}} =\begin{bmatrix}\frac{d\mathsf{K}_{\rm i}}{d\mathbf{g}^{(\omega)}} &...&\frac{d\mathsf{K}_{\rm c}}{d\mathbf{g}^{(\omega)}}\end{bmatrix}^{\mathsf{T}}\] (25c) \[\frac{d\mathsf{A}_{|\mathcal{M}_{\rm a}|}}{d\mathbf{g}^{(\omega)}} =\begin{bmatrix}\frac{d\lambda_{1}}{d\mathbf{g}^{(\omega)}}&...& \frac{d\lambda_{|\mathcal{M}_{\rm a}|}}{d\mathbf{g}^{(\omega)}}\end{bmatrix}^{ \mathsf{T}}\] (25d) \[\mathbf{c}_{k} =\frac{\partial^{2}C}{\partial\mathsf{K}_{k}\partial\mathbf{g}^{ (\omega)}}-\sum_{m\in\mathcal{M}_{\rm a}}\lambda_{m}\frac{\partial^{2}\beta_{m}} {\partial\mathsf{K}_{k}\partial\mathbf{g}^{(\omega)}}\] (25e) \[\mathbf{d}_{m} =\frac{\partial\beta_{m}}{\partial\mathbf{g}^{(\omega)}}. \tag{25f}\] Given the linear system (24), the sensitivity of the optimal solution (\(\mathsf{K}_{k}\)) with respect to the optimization parameters (\(\mathbf{g}^{\rm{f}2}\)) can be solved by evaluating the first- and second-order partial derivatives in (25). The closed form expression of the first- and second-order partial derivatives in (25) can be derived by combining (2) and (19): \[\frac{\partial^{2}C}{\partial\mathsf{K}_{k}\partial\mathsf{K}_{k^{ \prime}}} =\sum_{\omega\in\Omega}\left(2\mathsf{X}_{k}^{(\omega)}\mathsf{X}_{k^{ \prime}}^{(\omega)}\mathbf{e}^{(\omega)}\right) \tag{26a}\] \[\frac{\partial^{2}\beta_{m}}{\partial\mathsf{K}_{k}\partial \mathsf{K}_{k^{\prime}}} =0\] (26b) \[\frac{\partial\beta_{m}}{\partial\mathsf{K}_{k}} =\mathsf{X}_{k}^{(\varphi(m))}\] (26c) \[\frac{\partial^{2}C}{\partial\mathsf{K}_{k}\partial\mathbf{g}^{( \omega)}} =-\mathsf{X}_{k}^{(\omega)}\mathbf{e}^{(\omega)}\left(2+\left(\mathsf{K}^{ \mathsf{T}}\mathsf{X}^{(\omega)}-\mathbf{g}^{(\omega)}\right)\right.\] \[\left.\cdot\left(\frac{\mathbf{g}^{(\omega)}-\mathbf{g}_{\rm im}^{ \prime}}{s^{2}}\right)\right)\] (26d) \[\frac{\partial^{2}\beta_{m}}{\partial\mathsf{K}_{k}\partial \mathbf{g}^{(\omega)}} =0\] (26e) \[\frac{\partial\beta_{m}}{\partial\mathbf{g}^{(\omega)}} =\frac{-2rM\exp\left(-2r(\mathbf{g}_{\rm lim}+\nu-\mathbf{g}^{( \omega)})\right)}{\left(1+\exp\left(-2r(\mathbf{g}_{\rm lim}+\nu-\mathbf{g}^{( \omega)})\right)\right)^{2}}, \tag{26f}\] where \(\varphi(m)\in\Omega\) maps the active constraint index \(m\) to the corresponding index in the data set; \(\mathbf{e}^{(\omega)}\) and \(\mathbf{g}_{\rm lim}^{\prime}\) are defined as follows: \[\mathbf{e}^{(\omega)} =\exp\left(-\frac{(\mathbf{g}^{(\omega)}-\mathbf{g}_{\rm lim}^{ \prime})^{2}}{2s^{2}}\right) \tag{27a}\] \[\mathbf{g}_{\rm lim}^{\prime} =\mathbf{g}_{\rm lim}+\frac{\nu}{2}. \tag{27b}\] Note that (26c) and (26f) are derived based on (17b) and the results associated with (17a) can be obtained in a very similar manner, thus not being covered here. 
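The sensitivity machinery of (20)–(25) can be illustrated on a deliberately simplified stand-in for (19): a weighted least-squares fit of \(\mathsf{K}\) with a single active linear constraint, fixed weights and synthetic data, so the cross-derivative is simpler than (26d). The sketch below assembles the bordered KKT system, solves it for \(\mathrm{d}\mathsf{K}^{*}/\mathrm{d}\mathbf{g}^{(\omega)}\) of one data point, and verifies the result by re-solving the perturbed problem.

```python
# Generic illustration of the perturbation analysis behind (20)-(25): for a weighted
# least-squares fit of K with one active linear constraint (a simplified stand-in for
# (19)), the bordered KKT system yields dK*/dg for a single data point, which is then
# verified by re-solving with that data value perturbed. All data are synthetic and
# the weights are held fixed here, unlike the full cross-derivative in (26d).
import numpy as np

rng = np.random.default_rng(0)
n, kbar = 40, 3
X = np.column_stack([np.ones(n), rng.uniform(0, 1, (n, kbar - 1))])
g = X @ np.array([0.2, 1.1, 0.7]) + 0.02 * rng.standard_normal(n)
w = np.exp(-(g - g.mean()) ** 2 / 0.1)        # fixed Gaussian-type weights, cf. (16)
a, b = np.array([1.0, 0.5, -0.2]), 0.9        # one active constraint a.K = b

H = 2 * X.T @ (w[:, None] * X)                # Hessian of C, the role of A in (25a)
M_kkt = np.block([[H, -a[:, None]], [a[None, :], np.zeros((1, 1))]])

def solve_kkt(g_vec):
    """Weighted least squares with the active constraint, via its KKT system."""
    rhs = np.concatenate([2 * X.T @ (w * g_vec), [b]])
    sol = np.linalg.solve(M_kkt, rhs)
    return sol[:kbar], sol[kbar]

K_star, lam = solve_kkt(g)

# Bordered sensitivity system, cf. (24): the right-hand side collects the cross
# derivatives of the Lagrangian and the constraint w.r.t. the perturbed data value.
omega = 7
rhs_sens = np.concatenate([2 * w[omega] * X[omega], [0.0]])   # plays [-c; d] in (24)
dK_dg = np.linalg.solve(M_kkt, rhs_sens)[:kbar]

delta = 1e-5                                   # finite-difference verification
g_pert = g.copy(); g_pert[omega] += delta
K_pert, _ = solve_kkt(g_pert)
print("bordered-system dK*/dg_omega:", np.round(dK_dg, 5))
print("finite-difference check     :", np.round((K_pert - K_star) / delta, 5))
```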
### _Sensitivity of Stability Index with Respect to System Parameters_ A number of stability indices are considered and discussed within the proposed stability-constrained system optimization framework in [21], the sensitivity of these stability indices with respect to the uncertain parameters can be further derived analytically. An example that involves both matrix inverse and eigenvalue operation is given here to illustrate the concept. Consider a power system having \(n\in\mathcal{N}\) buses with \(g\in\mathcal{G}\), \(c_{l}\in\mathcal{C}_{l}\) and \(c_{m}\in\mathcal{C}_{m}\) being the set of conventional SGs, GFL and GFM IBRs. \(\Psi(g)\) and \(\Phi(c)\) map the units in \(g\in\mathcal{G}\) and \(c\in\mathcal{C}=\mathcal{C}_{l}\cup\mathcal{C}_{m}\) to the corresponding bus indices respectively. The system stability constraint can be expressed as follows [20]: \[\mathbf{g}(Y_{red}(\mathsf{X},\mathsf{p}),\mathsf{X}) =\lambda_{\rm min}\underbrace{\left[\mathrm{diag}\left(\frac{V_{ \Phi(c_{l})}^{2}}{P_{c_{l}}}\right)Y_{red}(\mathsf{X},\mathsf{p})\right]}_{Y_{red} ^{\prime}(\mathsf{X},\mathsf{p})}\] \[\geq\mathrm{gSCR}_{\rm lim}, \tag{28}\] where \(\mathrm{diag}\left(V_{\Phi(c_{l})}^{2}/P_{c_{l}}\right)\) is the diagonal matrix related to the GFL IBR terminal voltage Since the derivative with respect to the uncertain parameters (\(\partial\mathbf{g}/\partial\mathbf{p}\)) are of concern, the decision variable \(\mathsf{X}\) is omitted for the rest of the derivation, i.e., \(\mathbf{g}=\mathbf{g}(Y_{real}(\mathbf{p}))\). \(Y_{red}(\mathbf{p})\) can be further expressed as: \[Y_{red}(\mathbf{p})=Y_{\mathcal{C}_{l}\mathcal{C}_{l}}-Y_{\mathcal{C}_{l} \delta}Y_{\delta\delta}^{-1}(\mathbf{p})Y_{\delta\mathcal{C}_{l}}, \tag{29}\] where the terms on the right-hand side are the sub-matrices of the admittance matrix (\(Y\)) defined by: \[Y=\left[\begin{array}{c|c}Y_{\mathcal{C}_{l}\mathcal{C}_{l}}&Y_{\mathcal{C} _{l}\delta}\\ \hline Y_{\delta\mathcal{C}_{l}}&Y_{\delta\delta}(\mathbf{p})\end{array}\right]. \tag{30}\] with \(\mathcal{C}_{l}\in\mathcal{N}\) being the set of CFL IBR nodes and \(\delta=\mathcal{N}\setminus\mathcal{C}_{l}\) being the set of the rest nodes. The partial derivative \(\partial\mathbf{g}/\partial\mathbf{p}\) can be derived by first applying the eigenvalue perturbation theorem: \[\frac{\partial\mathbf{g}}{\partial\mathbf{p}_{p}}=\frac{w^{\mathsf{T}}\frac{ \partial Y_{red}^{\prime}(\mathbf{p})}{\partial\mathbf{p}_{p}}v}{w^{\mathsf{T }}v} \tag{31}\] where \(w\) and \(v\) are the left and right eigenvectors of the matrix \(Y_{red}^{\prime}(\mathbf{p})\) corresponding to \(\lambda_{\min}\). The expression of \(\frac{\partial Y_{red}^{\prime}(\mathbf{p})}{\partial\mathbf{p}_{p}}\) can be further derived by combining (28) and (29): \[\frac{\partial Y_{red}^{\prime}(\mathbf{p})}{\partial\mathbf{p}_{p}}=-\mathrm{ diag}\left(\frac{V_{\Phi(\mathcal{C}_{l})}^{2}}{P_{c_{l}}}\right)Y_{ \mathcal{C}_{l}\delta}Y_{\delta\delta}^{-1}\frac{\partial Y_{\delta\delta}( \mathbf{p})}{\partial\mathbf{p}_{p}}Y_{\delta\delta}^{-1}Y_{\delta\mathcal{C} _{l}}. \tag{32}\] To derive \(\frac{\partial Y_{\delta\delta}(\mathbf{p})}{\partial\mathbf{p}_{p}}\), the formula of the admittance matrix \(Y\) is revisited: \[Y=Y^{0}+Y^{g}, \tag{33}\] where \(Y^{0}\) is the admittance matrix of the transmission lines only; \(Y^{g}\) denotes the additional \(Y\) matrix increment due to reactance of SGs and GFM IBRs. 
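The stability index (28) and the eigenvalue-perturbation sensitivity (31) can be illustrated on a small network. The sketch below uses a 5-bus, susceptance-only stand-in for the admittance matrix with two GFL buses, performs the Kron reduction (29), and differentiates the reduced matrix with respect to one uncertain generator-side susceptance entry of \(Y_{\delta\delta}\) via the matrix-inverse derivative rule \(\mathrm{d}(A^{-1})=-A^{-1}\,\mathrm{d}A\,A^{-1}\); all network data and GFL set points are illustrative assumptions, and the element-wise form of \(Y^{g}\) and the reactance derivative are specified in the text that follows.

```python
# Numerical sketch of the gSCR-type constraint (28): Kron reduction (29), the smallest
# eigenvalue, and its sensitivity to an uncertain generator-side susceptance entry of
# Y_dd via the eigenvalue perturbation formula (31). A 5-bus, susceptance-only network
# is used as an illustrative stand-in for the admittance matrix.
import numpy as np
from scipy.linalg import eig

def build_Y(b_sg):
    """Line susceptances plus generator-side diagonal contributions (cf. (33))."""
    lines = {(0, 2): 5.0, (1, 2): 5.0, (2, 3): 4.0, (2, 4): 4.0, (3, 4): 2.0}
    Y = np.zeros((5, 5))
    for (i, j), b in lines.items():
        Y[i, i] += b; Y[j, j] += b; Y[i, j] -= b; Y[j, i] -= b
    for bus, b in b_sg.items():
        Y[bus, bus] += b
    return Y

def gscr(Y, C, dlt, D):
    """Eqs. (28)-(29): reduce to the GFL buses and take the smallest eigenvalue."""
    Ycc, Ycd = Y[np.ix_(C, C)], Y[np.ix_(C, dlt)]
    Ydc, Ydd = Y[np.ix_(dlt, C)], Y[np.ix_(dlt, dlt)]
    Yred = D @ (Ycc - Ycd @ np.linalg.solve(Ydd, Ydc))
    vals, vl, vr = eig(Yred, left=True, right=True)
    i = np.argmin(vals.real)
    return vals[i].real, vl[:, i].conj(), vr[:, i], Ycd, Ydd, Ydc

C, dlt = [3, 4], [0, 1, 2]                        # GFL buses and the remaining buses
D = np.diag(np.array([1.0, 1.0]) ** 2 / np.array([0.5, 0.4]))   # diag(V^2 / P_GFL)
b_sg = {0: 1 / 0.3, 1: 1 / 0.25}                  # generator-side susceptances (toy)

g0, w, v, Ycd, Ydd, Ydc = gscr(build_Y(b_sg), C, dlt, D)

# Sensitivity to the susceptance entry at bus 0 (one diagonal element of Y_dd),
# using d(Ydd^{-1}) = -Ydd^{-1} dYdd Ydd^{-1} inside the Kron reduction (29):
dYdd = np.zeros_like(Ydd); dYdd[0, 0] = 1.0
dYred = D @ Ycd @ np.linalg.solve(Ydd, dYdd) @ np.linalg.solve(Ydd, Ydc)
dg_db = (w @ dYred @ v / (w @ v)).real            # eigenvalue perturbation, Eq. (31)

eps = 1e-6                                        # finite-difference verification
g1, *_ = gscr(build_Y({0: b_sg[0] + eps, 1: b_sg[1]}), C, dlt, D)
print(f"gSCR = {g0:.4f}, analytical sensitivity = {dg_db:.5f}, FD check = {(g1 - g0)/eps:.5f}")
```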
The element of the \(i\)-th row and \(j\)-th column in \(Y^{g}\) can be expressed as below: \[Y^{g}_{ij}=\begin{cases}\frac{1}{X_{g}}x_{g}&\mathrm{if}\,i=j\wedge\exists\, g\in\mathcal{G}\cup\mathcal{C}_{m},\,\mathrm{s.t.}\,i=\Psi(g)\\ 0&\mathrm{otherwise}.\end{cases} \tag{34}\] Considering the uncertainty of SG's and GFM IBR's reactance, i.e., \(\mathbf{p}_{p}=X_{g},\,g\in\mathcal{G}\cup\mathcal{C}_{m}\), the concerned derivative is derived as: \[\frac{\partial Y_{\delta\delta,ij}}{\partial X_{g}}=\begin{cases}-\frac{x_{g} }{X_{g}^{2}}&\mathrm{if}\,i=j\wedge\exists\,g\in\mathcal{G}\cup\mathcal{C}_{ m},\,\mathrm{s.t.}\,i=\Psi(g)\\ 0&\mathrm{otherwise}.\end{cases} \tag{35}\] Combining (31), (32) and (35) leads to the closed form expression of \(\partial\mathbf{g}/\partial\mathbf{p}_{p}\). It should be noted that although the uncertainty of the SG and GFM IBR reactance is considered in the derivation, other uncertain parameters, such as line reactance can be dealt with similarly. Moreover, the sensitivity of other stability indices, such as small-signal stability indices based on the eigenvalue of the closed loop system state matrix or post-fault current and voltage and different forms of system strength based on the elements in the impedance matrix can be developed in a similar yet simpler manner. With the derivation in Section III-C and Section III-D, the first order derivatives \(\nabla_{f}\) can be computed according to (15). As for the second derivatives \(\partial^{2}f_{k}/\partial\mathbf{p}_{p}^{2}\), although the analytical derivation may become extremely cumbersome due to the complex expressions in \(\nabla_{f}\), numerical approaches with good approximation can be applied [29, 30]. As a result, the statistical moments of the stability constraint coefficients can be calculated based on (13) and (14). ## IV Distributionally Robust Stability-Constrained Optimization Having obtained the uncertainty information of the stability constraint coefficients, a distributionally robust stability constrained UC problem can be formulated, where the overall system operation cost is minimized subjected to a number constraints, such as power flow and power balance constraints, thermal unit constraints, and the distributionally robust system stability constraints. ### _Objective Function_ The objective of the UC problem is to minimize the expected cost over all nodes in the given scenario tree: \[\min\sum_{n\in\mathcal{N}}\pi(n)\left(\sum_{g\in\mathcal{G}}C_{g}(n)+\Delta t (n)c^{s}P^{s}(n)\right) \tag{36}\] where \(\pi(n)\) is the probability of scenario \(n\in\mathcal{N}\) and \(C_{g}(n)\) is the operation cost of unit \(g\in\mathcal{G}\) in scenario n, including startup, no-load and marginal cost; \(\Delta t(n)c^{s}P^{s}(n)\) represents the cost of the load shedding in scenario n with the three terms being the time step of scenario n, load shedding cost and shed load. The scenario tree is built based on user-defined quantiles of the forecast error distribution to capture the uncertainty associated with demand and wind generation. [31] can be referred to for more details. ### _Distributionally Robust Stability Constraints_ Assume the first- and second-order moments of the SG and GFM impedance is known, denoted by \(\mu_{\mathbf{p}}\) and \(\Sigma_{\mathbf{p}}\) respectively, whereas the exact probability distribution is unknown. 
Based on the discussion on Section III, the first- and second-order moments of the coefficients of the stability constraint can be developed and is denoted by \(\mu_{\mathsf{K}}\) and \(\Sigma_{\mathsf{K}}\) respectively. As a result, the ambiguity set of the coefficients \(\mathsf{K}\) can be modeled as: \[\mathcal{P}=\left\{\mathbf{D}\in\xi(\mathsf{K}):\mathbb{E}^{\mathbf{D}}(\mathsf{ K})=\mu_{\mathsf{K}},\mathrm{Var}^{\mathbf{D}}(\mathsf{K})=\Sigma_{\mathsf{K}}\right\} \tag{37}\] where \(\xi(\cdot)\) is the probability density function; \(\mu_{\mathsf{K}}\) and \(\Sigma_{\mathsf{K}}\) denote the mean and covariance matrix of the uncertain parameters given the distribution \(\mathbf{D}\). With the above ambiguity set, the distributionally robust stability chance constraint can be formulated as follows: \[\min_{\mathbf{D}\in\mathcal{P}}\mathrm{Pr}\left\{-\mathsf{K}^{\mathsf{T}} \mathsf{X}\leq-\mathbf{g}_{\lim}\right\}\geq\eta, \tag{38}\] which maintains the stability constraint over a certain confidence level \(\eta\in(0,1)\), for all the possible distribution in the ambiguity set \(\mathcal{P}\). According to _Theorem 3.1_ in [32], the above distributionally robust chance constraint can be equivalently converted to: \[k_{\eta}\sqrt{\mathsf{X}^{\mathsf{T}}\Sigma_{\mathsf{K}}\mathsf{X}} \leq\mu_{\mathsf{K}}^{\mathsf{T}}\mathsf{X}-\mathbf{g}_{\lim} \tag{39a}\] \[k_{\eta} =\sqrt{\frac{\eta}{1-\eta}}. \tag{39b}\] Due to the fact that the covariance matrix \(\Sigma_{\mathsf{K}}\) is symmetric and positive semi-definite, it can be expanded by the spectral factorization: \[\Sigma_{\mathsf{K}}=\sum_{i}\tau_{i}q_{i}q_{i}^{\mathsf{T}}, \tag{40}\] with \(\tau_{i}\) and \(q_{i}\) being the \(i\)-th eigenvalue and orthogonal eigenvector respectively. Substituting (40) into (39a) gives the following stability constraint: \[\left\|\begin{bmatrix}\sqrt{\tau_{1}}q_{1}^{\mathsf{T}}\mathsf{K}\\ \cdots\\ \sqrt{\tau_{n}}q_{n}^{\mathsf{T}}\mathsf{X}\end{bmatrix}\right\|_{2}\leq\frac{ 1}{k_{\eta}}\left(\mu_{\mathsf{K}}^{\mathsf{T}}\mathsf{K}-\mathbf{g}_{\lim} \right). \tag{41}\] With \(\tau_{i}\geq 0\), (41) is a well-defined constraint in a standard SOC form. All other conventional UC constraints such as those of power balance, thermal units and transmission system are not listed in the paper. The readers can refer [33] for details. ## V Discussion on An Alternative Formulation In previous sections, the parameter uncertainties related to the system dynamics, is transferred to the coefficients of the stability constraints in the system-level optimization. In this way, the uncertainty associated with the system dynamics is essentially managed by a distributional robust chance-constrained formulation during the system scheduling process. Alternatively, this uncertainty can also be explicitly considered when generating the stability constraint coefficients. For this purpose, the formulation based on the concept of distributionally robust learning is discussed here. With this formulation, the training process determines the optimal coefficients of the stability constraints that best describe the relationship between the stability index and the decision variables, while considering the uncertainty in the data set. 
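Before turning to this alternative learning-based formulation, the SOC reformulation (39)–(41) used above can be made concrete with a short numerical check: for a toy operating point and illustrative moments \((\mu_{\mathsf{K}},\Sigma_{\mathsf{K}})\), the sketch verifies that the stacked-norm form (41) coincides with \(\sqrt{\mathsf{X}^{\mathsf{T}}\Sigma_{\mathsf{K}}\mathsf{X}}\), evaluates (39a), and empirically checks the chance constraint under two different distributions sharing the same first two moments.

```python
# Sketch of the SOC reformulation (39)-(41) of the distributionally robust stability
# chance constraint, checked on a toy operating point X with illustrative moments
# (mu_K, Sigma_K), plus an empirical check under two distributions sharing them.
import numpy as np

rng = np.random.default_rng(0)
mu_K = np.array([2.4, 1.5, 0.4])
A = rng.uniform(-0.05, 0.05, (3, 3)); Sigma_K = A @ A.T + 0.01 * np.eye(3)
X = np.array([1.0, 0.8, 1.6])          # augmented decision vector (toy values)
g_lim, eta = 2.5, 0.95
k_eta = np.sqrt(eta / (1.0 - eta))     # Eq. (39b)

# Deterministic SOC check, Eq. (39a)
lhs = k_eta * np.sqrt(X @ Sigma_K @ X)
rhs = mu_K @ X - g_lim
print(f"SOC constraint: {lhs:.3f} <= {rhs:.3f} ? {lhs <= rhs}")

# Spectral factorization (40): the stacked norm in (41) coincides with sqrt(X'SX)
tau, Q = np.linalg.eigh(Sigma_K)
stacked = np.sqrt(tau) * (Q.T @ X)
print("||stacked||_2 =", np.linalg.norm(stacked), " sqrt(X'SX) =", np.sqrt(X @ Sigma_K @ X))

# Empirical check of Pr{K'X >= g_lim} under two distributions with the same moments
L = np.linalg.cholesky(Sigma_K)
for name, z in [("gaussian", rng.standard_normal((100000, 3))),
                ("uniform ", rng.uniform(-np.sqrt(3), np.sqrt(3), (100000, 3)))]:
    K_samples = mu_K + z @ L.T
    p_ok = np.mean(K_samples @ X >= g_lim)
    print(f"{name}: empirical Pr(stable) = {p_ok:.3f} (target eta = {eta})")
```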
Given the notation in (19), a general formulation can be expressed as: \[\min_{\mathsf{K}} \max_{\mathsf{D_{p}}\in\mathcal{P}_{\mathsf{p}}}\mathbb{E}^{ \mathbf{D_{p}}}\left[C(\mathsf{K},\mathbf{g}^{\Omega}(p))\right] \tag{42a}\] \[\mathrm{s.t.} \min_{\mathbf{D_{p}}\in\mathcal{P}_{\mathsf{p}}}\Pr\left\{\beta( \mathsf{K},\mathbf{g}^{\Omega}(p))\geq 0\right\}\geq\eta, \tag{42b}\] where \(\mathbf{D_{p}}\) and \(\mathcal{P}_{\mathsf{p}}\) are the the distribution of the uncertain vector \(\mathsf{p}\) and the corresponding ambiguity set. After obtaining the optimal stability constraint coefficients \(\mathsf{K}\), the system scheduling can be modeled as a standard optimization problem with a simple stability constraint (3). It is theoretically possible to reformulate problem (42) into tractable convex model for simple linear regression or more general cases, where the objective function and the constraints depend affinely on the random vector \(\mathsf{p}\)[32, 34]. However, considering the highly nonlinear relationship in \(C(\mathbf{g}^{\Omega})\), \(\beta(\mathbf{g}^{\Omega})\) and \(\mathbf{g}^{\Omega}(\mathsf{p})\), as defined in the constrained and weighted regression problem (4) or (19), (42) becomes computationally intractable. Moreover, another drawback with this formulation is the repeated data generation and retraining process, every time the uncertainty information of \(\mathsf{p}\) and the confidence level requirement \(\eta\) change. Nevertheless, this work focuses on the formulation developed in previous sections and an efficient reformulation of (42) and its comparison against this work will be conducted in the future work. ## VI Case Studies The IEEE-39 bus system shown in Fig. 1 is utilized to demonstrate the effectiveness of the proposed model. To increase the renewable penetration, IBRs are added at Bus 26, 27, 28 and 29 with the second one being a GFM unit. The parameters of transmission lines, loads are available in [35]. The load and renewable generation profile in [36, 37] is adapted for the simulation during the considered time horizon. The characteristics of the thermal generators are given in Table I while considering the data in [37, 38], with their location being Bus \(\{30,37\}\), \(\{31,32,33,34,35,36,38\}\) and \(\{39\}\) respectively. Other system parameters are set as follows: load demand \(P^{D}\in[5.16,6.24]\,\mathrm{GW}\), load damping \(D=0.5\%P^{D}/1\,\mathrm{Hz}\), base power \(S_{B}=100\mathrm{MVA}\). ### _Validation of Uncertainty Quantification_ To validate the effectiveness of the uncertainty quantification method proposed in Section III, the mean and variance of the stability constraint coefficients obtained based on the following two methods are compared: i) the proposed analytical (Ana.) method and ii) Monte Carlo (MC) simulation. Note that a sufficient amount of samples (15 k) are generated for the MC simulation to ensure performance convergence. From the results listed in Table II, it can be observed that both the \begin{table} \begin{tabular}{c||c|c|c} \hline \hline Type & **Type I** & **Type II** & **Type III** \\ \hline No-load Cost [k\(\ell\)h] & 4.5 & 3 & 0 \\ \hline Marginal Cost [\(\ell\)MWh] & 47 & 200 & 10 \\ \hline Startup Cost [\(\ell\)k] & 10 & 0 & N/A \\ \hline Startup Time [h] & 4 & 0 & N/A \\ \hline Min Up Time [h] & 4 & 0 & N/A \\ \hline Min Down Time [h] & 1 & 0 & N/A \\ \hline Inertia Constant [s] & 6 & 6 & 6 \\ \hline \hline \end{tabular} \end{table} TABLE I: Parameters of Thermal Units Fig. 1: Modified IEEE-39 bus system. 
mean (\(\mu\)) and variance (\(\sigma^{2}\)) derived based on the analytical approach are close to those obtained from MC simulation. The averaged absolute relative errors (\(e_{\mu},\,e_{\sigma^{2}}\)) of the mean and variance are 5.04 % and 7.82 % respectively, demonstrating a good approximation of the proposed method. ### _Impact of Stability Constraints on System Operation_ The effectiveness of the proposed stability constraints and their influences on the system operation cost as well as operating conditions are studied in this section. The following three different cases are considered: * Case I: conventional system scheduling without stability constraints * Case II: system scheduling with stability constraints and the system parameter uncertainty is not considered * Case III: system scheduling with distributionally robust stability constraints to account for system uncertainties. #### Iv-B1 **System operation cost and stability constraint violation** The averaged system operation costs among 24-hour operation in these three cases with various wind penetration levels are depicted in Fig. 2. It is understandable that the system operation costs in all the cases decrease, as more wind generation is installed in the system. These costs stay close to each other at lower wind capacity (below \(3\,\mathrm{GW}\)), since the stability issue is not obvious. However, as the installed wind capacity in the system further increases, the system operation cost in Case I (blue curve) presents the lowest value among all the cases, because the stability constraint is not considered in the scheduling process. Once the stability constraint is incorporated into the system scheduling, the original optimal solution cannot be attained, thus inducing more operational cost to maintain the stability constraint, as indicated by the red curve. This cost increment becomes more significant at high wind capacity due to the server stability issue. In addition, if the uncertain associated with the system parameters is considered (Case III), the system operation cost further increases by around \(40\,\mathrm{k}\mathcal{E}/\mathrm{h}\) at \(6\,\mathrm{GW}\) wind capacity, since a larger stability margin has to be reserved to ensure the stability under uncertainty. The stability constraint violation rates under different circumstances are also shown in Fig. 3. In Case I, as indicated by the blue curve, the stability constraint violation starts to appear at \(3\,\mathrm{GW}\) wind capacity for about 5% of the total operation time. This violation rate increases dramatically to about 45%, with the rising of the wind capacity in the system, which demonstrates the necessity of including the stability constraint in the system optimization. As for Case II, although the stability constraint is considered, a small percentage (less than 10%) of the operating conditions do not satisfy the stability constraint due to the neglect of the parameter uncertainty related to system dynamics. This impact becomes more remarkable at higher wind capacity, illustrating the importance of modeling and management of the uncertainty in system stability constraints. The stability constraint violation due to the uncertainty in Case II is eliminated in Case III with the incorporation of the distributionally robust stability chance constraint, therefore demonstrating the effectiveness of the proposed method. 
#### Iv-B2 **Dispatched wind power and SGs** In order to investigate the reasons behind the changes of the system operation costs and stability constraint violation rates in different cases, the amount of dispatched wind power and the number of online SGs during 24-hour operation are plotted in Fig. 4 and Fig. 5 respectively. The wind capacity in both figures are set to be \(6\,\mathrm{GW}\). It can be observed in Fig. 4, that when no stability constraint is considered during the system scheduling process (Case I), the dispatched wind power, indicated by the blue curve is the highest during most of the hours among all the cases, which also justifies the lowest operation cost in this case as illustrated in Fig. 2. Furthermore, since the stability constraint considered in (28) is closely related to the output power of the grid-following IBRs in the system as explained in [20], a significant amount of wind power is curtailed in Case II to ensure the system stability, especially during the high wind power hours. This effect becomes more obvious in Case III (yellow curve), when the uncertainty of the system dynamics is considered though the distributionally robust formulation. In this scenario, additional wind power (around \(0.3\,\mathrm{GW}\)) compared to Case II is further curtailed, which leads to a zero stability constraint violation rate under uncertainty as shown \begin{table} \begin{tabular}{c||c|c|c|c|c|c} \hline \hline & \multicolumn{3}{c|}{\(\boldsymbol{\mu}\)} & \multicolumn{3}{c|}{\(\boldsymbol{\sigma^{2}}\) [\(\times 10^{-3}\)]} \\ \cline{2-7} & **Ana.** & **MC** & \(\boldsymbol{\epsilon_{\mu}}\) [\%] & **Ana.** & **MC** & \(\boldsymbol{\epsilon_{\sigma^{2}}}\) [\%] \\ \hline \(\mathsf{K}_{1}\) & 2.392 & 2.496 & -4.19 & 0.438 & 0.405 & 8.05 \\ \hline \(\mathsf{K}_{2}\) & 1.458 & 1.474 & -1.09 & 3.470 & 3.644 & -4.78 \\ \hline \(\mathsf{K}_{3}\) & 0.387 & 0.389 & 4.65 & 0.399 & 0.392 & 1.97 \\ \hline \(\mathsf{K}_{4}\) & 0.469 & 0.470 & 1.56 & 0.249 & 0.258 & -3.25 \\ \hline \(\mathsf{K}_{5}\) & -0.078 & -0.088 & -11.96 & 0.689 & 0.653 & 5.51 \\ \hline... &... &... &... &... &... &... \\ \hline Average & N/A & N/A & 5.04 & N/A & N/A & 7.82 \\ \hline \hline \end{tabular} \end{table} TABLE II: Mean and Variance of Stability Constraint Coefficient with Different Methods Fig. 3: Stability constraint violation rates in different cases with varying wind capacity. Fig. 2: System operation costs in different cases with varying wind capacity. in Fig. 3. On the other hand, the impact on the number of online SGs is also investigated with the results plotted in Fig. 5. In general, the committed SGs in Case I is the smallest for most of the time since more wind power can be utilized without the stability constraint. Similarly, to provide enough system strength and ensure the robustness against the uncertainty, the number of SGs in Case III is larger than or equal to that in Case II. However, an opposite trend is observed during \(6\) and \(13-15\) when there is a significant decline in wind power. This is because in Case I, there is not enough reserve to balance the demand, from the SGs committed before the wind power decreases. As a result, more SGs have to be dispatched during these hours due to the SG ramp rate constraints. Therefore, the blue curve exceeds the red and yellow curves. Nevertheless, more SGs are needed on average to maintain system stability in the cases where the uncertainty and stability constraint are modeled and implemented in system scheduling. 
#### Vi-B3 Comparison to operation with fixed stability margin In order to illustrate the benefit of the proposed method, the operating strategy with fixed stability margin to manage the uncertainty in the system is assessed, where the wind capacity in this case is set to \(6\,\mathrm{GW}\). As shown in Fig. 6, a high stability margin leads to increased system operation cost and decreased stability constraint violation. A zero stability constraint violation requires a stability margin of \(20\%\), which corresponding to the system operation cost of \(360.84\,\mathrm{k}\mathcal{E}/\mathrm{h}\), whereas with the proposed method, this number reduces to \(324.65\,\mathrm{k}\mathcal{E}/\mathrm{h}\). This indicates that the stability uncertainty is managed with overconservative stability margin, since the dependence of the actual stability margin on the system operating conditions as expressed by the left-hand side in (41) cannot be considered in this strategy. ### _Impact of Uncertainty Level_ The parameter uncertainties associated with the system dynamics are modeled and managed in this work through a distributionally robust formulation during the system scheduling process. The uncertainty level thus influences the optimal solution, which is investigated in this section. The results are depicted in Fig. 7, where the averaged equivalent stability index limits (\(\mathbf{g}_{\mathrm{lim}}^{\mathrm{eq}}\)) is derived from (41) as follow: \[\mathbf{g}_{\mathrm{lim}}^{\mathrm{eq}}=\frac{1}{|\mathcal{T}|}\sum_{t\in \mathcal{T}}\left(\mathbf{g}_{\mathrm{lim}}+k_{\eta}\left\|\begin{bmatrix} \sqrt{\tau_{10}}\tau_{10}^{\mathrm{T}}\mathsf{X}(t)\\ \cdots\\ \sqrt{\tau_{n}}\tau_{n}^{\mathrm{T}}\mathsf{X}(t)\end{bmatrix}\right\|_{2} \right), \tag{43}\] representing the stability index limit under the uncertainty. It can be observed that more system operation cost is induced in an approximate linear fashion, as indicated by the blue curve, with the increasing of the uncertainty level, which is defined as the variance ratio to the original variance (\(\sigma_{0}^{2}\)). This can be explained by the robust formulation to ensure the stability constraint under different uncertainty level. Note that \(\sigma^{2}/\sigma_{0}^{2}=0\) corresponds to Case II in Section VI-B and \(\sigma^{2}/\sigma_{0}^{2}=1\) Case III. Similarly, a higher uncertainty level leads to a larger equivalent stability index limit, which represents a larger stability margin. To maintain this increased stability margin, more wind power is curtailed during high wind power hours, thus resulting in higher operational cost as discussed previously. ## VII Conclusion and Future Work The uncertainty associated with the system dynamics is investigated in this work, within the framework of stability-constrained optimization in high IBR-penetrated system. In order to ensure the system stability under uncertainty, the uncertainty levels of the stability constraint coefficients are analytically quantified based on the uncertainty information Fig. 4: Dispatched wind power during 24-hour system operation. Fig. 5: Number on online SGs during 24-hour system operation. Fig. 6: System operation cost and constraint violation rate with different stability margin. Fig. 7: System operation costs and the equivalent stability index limits under various uncertainty levels. related to the system impedance. 
A distributionally robust stability chance constraint is further formulated and converted into an SOC form, leading to an overall MISOCP-based system scheduling model. A good approximation quality of the uncertainty quantification method through moment propagation is observed, and the impacts of the stability constraint on the system operating conditions are investigated, demonstrating an uncertainty-level-dependent cost increment for stability maintenance under the distributionally robust formulation. Future work involves studying the alternative formulation based on distributionally robust learning as discussed in Section V, where an efficient approach to deal with the nonlinear dependence on the uncertain parameters in the weighted and constrained regression problem will be investigated.
With the increasing penetration of inverter-based resources (IBRs) and their impact on power system stability and operation, the concept of stability-constrained optimization has attracted growing attention from researchers. To manage the parametric uncertainty associated with the system dynamics that arises from inaccurate modeling, this work proposes a distributionally robust stability-constrained formulation. However, the uncertainty of the system dynamic parameters influences the stability constraints through nonlinear and implicit relationships. To address this issue, a propagation mechanism from the uncertainty of the system dynamic parameters to the stability constraint coefficients is established. Since these coefficients are connected to the uncertain parameters through highly nonlinear and implicit functions, an approximation method based on Taylor expansion and the Delta method is developed.
2309.14582
Quantum Counterpart of Equipartition Theorem in Quadratic Systems
The equipartition theorem is a fundamental law of classical statistical physics, which states that every degree of freedom contributes $k_{B}T/2$ to the energy, where $T$ is the temperature and $k_{B}$ is the Boltzmann constant. Recent studies have revealed the existence of a quantum version of the equipartition theorem. In the present work, we focus on how to obtain the quantum counterpart of the generalized equipartition theorem for arbitrary quadratic systems in which the multimode Brownian oscillators interact with multiple reservoirs at the same temperature. An alternative method of deriving the energy of the system is also discussed and compared with the result of the quantum version of the equipartition theorem, after which we conclude that the latter is more reasonable. Our results can be viewed as an indispensable generalization of recent works on a quantum version of the equipartition theorem.
Xin-Hai Tong
2023-09-26T00:13:14
http://arxiv.org/abs/2309.14582v4
# Quantum Equipartition Theorem for Arbitrary Quadratic Systems ###### Abstract Equipartition theorem is a fundamental law of classical statistical physics, which states that every degree of freedom contributes \(k_{B}T/2\) to the energy, where \(T\) is the temperature and \(k_{B}\) is the Boltzmann constant. However, a general quantum analogue of equipartition theorem has never been put forward yet. In the present Letter, we focus on how to obtain generalized equipartition theorem for arbitrary quadratic systems where the multimode Brownian oscillators interacts with multiple reservoirs at the same temperature. An alternative method of deriving the energy of the system is also discussed and compared with the result of the equipartition theorem, after which we conclude that the latter is more reasonable. Our results can be viewed as an indispensable generalization of recent works on quantum equipartition theorem. _Introduction.--_ One of the elegant principles of classical statistical physics is the equipartition theorem, which has numerous applications in various topics, such as thermodynamics [1; 2; 3], astrophysics [4; 5; 6] and applied physics [7; 8; 9]. Though quantum mechanics has been founded over a hundred years and so as quantum statistical mechanics, there is still no general quantum version of the equipartition theorem. Recent years have seen much progress on this topic. A novel work [10] investigated the simplest quantum Brownian oscillator model to formulate the energy of the system in terms of the average energy of a quantum oscillator in a harmonic well. Based on this work, more papers [11; 12; 13; 14; 15; 16; 17; 18] tried to study quantum equipartition theorem in different versions of quadratic open quantum systems from various perspectives, including electrical circuits [11] and dissipative diamagnetism [18]. In this Letter, we aim to deduce the generalized equipartition theorem for arbitray open quantum quadratic systems. Many quadratic systems share the same algebra \([\hat{A},\hat{B}]=i\hbar\), where the binary operator pair can be the coordinate \(\hat{x}\) and the momentum \(\hat{p}\) for oscillators, the magnetic flux \(\hat{\Phi}\) and charge \(\hat{Q}\) for quantum circuits, and so on. We here turn to Brownian oscillators as an example to grasp the physical nature of all these systems. It is worth noting [19] that linearly coupled driven Brownian oscillator systems share the same exact dynamical process as undriven systems, as long as we absorb the external field \(\epsilon(t)\) into the stochastic force of the bath. We thus only consider undriven systems for simplicity. To construct such systems, we adopt a generalized Calderia-Leggett model [20] and manage to transform it into a multi-mode Brownian oscillator system well-discussed in Ref. [21]. For generality, we do not choose a concrete form of dissipation. We also generalize a formula [22] for the internal energy so that it could be applied to the multi-mode Brownian oscillator system. It has been debated [23] which of this formula or the quantum equipartition theorem given in Ref. [10] truly describes the energy of the system. Our analysis show that the generalized version of the former formula cannot be used to find the energy, which implies that the latter one is more reasonable. 
_Arbitrary quadratic system.--_ To construct our arbitrary quadratic system, let us start with the multi-mode Calderia-Leggett model [20] \[H_{\text{\tiny CL}}=\sum_{u}\frac{\hat{P}_{u}^{2}}{2M_{u}}+ \sum_{uv}\frac{1}{2}k_{uv}\hat{Q}_{u}\hat{Q}_{v}+\sum_{\alpha j}\bigg{[}\frac {\hat{p}_{\alpha j}^{2}}{2m_{\alpha j}}\] \[\qquad\qquad+\frac{1}{2}m_{\alpha j}\omega_{\alpha j}^{2}\bigg{(} \hat{x}_{\alpha j}+\frac{1}{m_{\alpha j}\omega_{\alpha j}^{2}}\sum_{u}c_{ \alpha ij}\hat{Q}_{u}\bigg{)}^{2}\bigg{]}, \tag{1}\] where \(u,v\in\{1,2,...,n_{\textsc{s}}\}\) and \(j\in\{1,2,...,n_{\alpha}\}\) are indices for the oscillators in the system and in the \(\alpha\)th bath, repectively. The coefficient \(c_{\alpha ij}\) represents the coupling strength between the coordinate of the \(u\)th oscillator in the system and the \(j\)th oscillator in the \(\alpha\)th bath. The convention would put \(-c_{\alpha ij}\) in the last term of Eq. (1), but we replace it by \(c_{\alpha uj}\). We also have the commutation relations for all the momentum and position operators as follows: \[[\hat{Q}_{u},\hat{P}_{v}]=i\delta_{uv},\quad[\hat{x}_{\alpha_{1}j_{1}},\hat{p}_ {\alpha_{2}j_{2}}]=i\delta_{\alpha_{1}\alpha_{2}}\delta_{j_{1}j_{2}} \tag{2}\] with \(\delta\) representing the Kronecker delta. Equation (1) can be reorganized in the following forms: \[H_{\text{\tiny CL}}=H_{\text{\tiny S}}+H_{\text{\tiny SB}}+h_{ \text{\tiny B}}, \tag{3a}\] \[H_{\text{\tiny S}}=\sum_{u}\frac{\hat{P}_{u}^{2}}{2M_{u}}+\frac{ 1}{2}\sum_{uv}k_{uv}\hat{Q}_{u}\hat{Q}_{v},\] (3b) \[H_{\text{\tiny SB}}=\sum_{\alpha u}\hat{Q}_{u}\hat{F}_{au}+\frac {1}{2}\sum_{\alpha j}\frac{c_{\alpha uj}c_{\alpha j}}{m_{\alpha j}\omega_{ \alpha j}^{2}}\hat{Q}_{u}\hat{Q}_{v},\] \[\hat{F}_{\alpha u}=\sum_{j}c_{\alpha uj}\hat{x}_{\alpha j},\] (3c) \[h_{\text{\tiny B}}=\sum_{\alpha}h_{\alpha\text{\tiny B}}=\sum_{ \alpha j}\bigg{(}\frac{\hat{p}_{\alpha j}^{2}}{2m_{\alpha j}}+\frac{1}{2}m_{ \alpha j}\omega_{\alpha j}^{2}\hat{x}_{\alpha j}^{2}\bigg{)}. \tag{3d}\] Here, the system-bath interaction results from the linear coupling of the system coordinate \(\hat{Q}_{u}\) and the random force \(\hat{F}_{\alpha u}\). We also emphasize that all the mutually independent baths \(\{h_{\alpha\mathrm{n}}\}\) in Eq. (34) are at the same temperature \(\beta\). By defining the pure bath response function as \[\boldsymbol{\phi}_{\alpha}(t)\equiv\{\phi_{\alpha uv}(t)\equiv i \langle[\hat{F}_{\alpha u}^{\mathrm{n}}(t),\hat{F}_{\alpha v}^{\mathrm{n}}(0)] \rangle_{\mathrm{B}}\}, \tag{4}\] we recognize that \[\sum_{\alpha j}\frac{c_{\alpha uj}c_{\alpha vj}}{m_{\alpha j} \omega_{\alpha j}^{2}}=\tilde{\phi}_{\alpha uv}(0), \tag{5}\] where \(\hat{F}_{\alpha u}^{\mathrm{n}}(t)\equiv e^{ih_{\mathrm{B}}t}\hat{F}_{\alpha u }e^{-ih_{\mathrm{B}}t}\) and the average is defined over the canonical ensembles of baths as in \(\langle\cdots\rangle_{\mathrm{n}}\equiv\mathrm{tr}_{\mathrm{n}}[\cdots\otimes_ {\alpha}e^{-\beta h_{\alpha\mathrm{n}}}]/\prod_{\alpha}\mathrm{tr}_{\mathrm{n} }(e^{-\beta h_{\alpha\mathrm{n}}})\). In Eq. (5) we use tilde to denote the Laplace transform \(\tilde{f}(\omega)=\int_{0}^{\infty}\!\mathrm{d}\omega\,e^{i\omega t}f(t)\) for any function \(f(t)\). If we denote \(V_{uv}\equiv k_{uv}+\sum_{\alpha}\tilde{\phi}_{\alpha uv}(0)\) and \(\Omega_{u}\equiv M_{u}^{-1}\) for convenience, then Eqs. 
(3b) and (3c) can be rewritten in the form: \[H_{\mathrm{S}}+H_{\mathrm{SB}}\] \[\equiv\bigg{[}\frac{1}{2}\sum_{u}\Omega_{u}\hat{P}_{u}^{2}+\frac {1}{2}\sum_{uv}\Big{(}V_{uv}-\sum_{\alpha}\tilde{\phi}_{\alpha uv}(0)\Big{)} \hat{Q}_{u}\hat{Q}_{v}\bigg{]}\] \[\quad+\bigg{[}\sum_{\alpha u}\hat{Q}_{u}\hat{F}_{\alpha u}+\frac {1}{2}\sum_{\alpha uv}\tilde{\phi}_{\alpha uv}(0)\hat{Q}_{u}\hat{Q}_{v}\bigg{]}, \tag{6}\] which is the starting point of our quantum equipartition theorem. The total Hamiltonian, referred to as [cf. Eq. (3)], now remains identical to the one presented in Ref. [21]. Physically, we need \(\mathbf{V}=\{V_{uv}\}\), \(\mathbf{k}=\{k_{uv}\}\) and \(\mathbf{\Omega}=\{\Omega_{u}\delta_{uv}\}\) to be positive definite. Detailed derivation of Eq. (5) can be found in Ref. [24]. _Generalizad equipartition theorem.--_ The conventional equipartition theorem deals with the kinetic energy and the potential energy separately contributed by certain degrees of freedom [10]. When some degrees of freedom are interwined with each other, such as in our model [cf. Eqs. (3c, 3d), 3d)], we would better discuss the generalized equipartition theorem [25]. In the rest of this work we study the quantity \(\langle\hat{X}_{i}\partial H_{\mathrm{S}}/\partial\hat{X}_{j}\rangle\) for any system degrees of freedom \(\hat{X}_{i},\hat{X}_{j}\in\{\hat{P}_{u}\}\cup\{\hat{Q}_{u}\}\). We focus on the cases \(i=j\) in the main text. Other cases are discussed in Ref. [24]. We have \(\langle\hat{Q}_{u}\partial H_{\mathrm{S}}/\partial\hat{Q}_{u}\rangle=\sum_{v}[ V_{uv}-\sum_{\alpha}\tilde{\phi}_{\alpha uv}(0)]\langle\hat{Q}_{u}\hat{Q}_{v}\rangle\) when we choose \(\hat{X}_{i}=\hat{Q}_{u}\). Here, the angular brackets denote the average over the steady state of the total composite system \(\rho_{\mathrm{T}}^{\mathrm{st}}\), whose partial trace of bath is the system equilibrium state: \(\rho_{\mathrm{S}}:=\mathrm{tr}_{\mathrm{S}}\,\rho_{\mathrm{T}}^{\mathrm{st}}=e ^{-\beta H_{\mathrm{S}}}/\,\mathrm{tr}_{\mathrm{S}}\,e^{-\beta\hat{H}_{ \mathrm{S}}}\). Under the help of the fluctuation-dissipation theorem [26], we obtain \[\bigg{\langle}\hat{Q}_{u}\frac{\partial H_{\mathrm{S}}}{\partial \hat{Q}_{u}}\bigg{\rangle} =\sum_{v}\frac{1}{\pi}\int_{0}^{\infty}\mathrm{d}\omega\,\bigg{[}V _{uv}\] \[\quad-\sum_{\alpha}\tilde{\phi}_{\alpha uv}(0)\bigg{]}J_{uv}^{QQ}( \omega)\coth\frac{\beta\omega}{2}, \tag{7}\] with \(\mathbf{J}^{QQ}(\omega)=\{J_{uv}^{QQ}(\omega)\}\) being the anti-Hermitian part of the matrix \(\tilde{\mathbf{\chi}}^{QQ}(\omega)=\{\tilde{\chi}_{uv}^{QQ}(\omega)=\int_{0}^{ \infty}\mathrm{d}\omega\,e^{i\omega t}\chi_{uv}^{QQ}(t)\}\). Here, we denote the system response function of any two operators \(\hat{A}_{u}\) and \(\hat{B}_{v}\) as \(\chi_{uv}^{AQ}(t)\equiv i\langle[\hat{A}_{u}(t),\hat{B}_{v}(0)]\rangle\). According to Ref. [21], we also have an explicit expression for \(\tilde{\mathbf{\chi}}^{QQ}(\omega)\) in the form of \[\tilde{\mathbf{\chi}}^{QQ}(\omega)=\Big{[}\mathbf{\Omega}\mathbf{V}-\omega^{ 2}\mathbf{I}-\mathbf{\Omega}\sum_{\alpha}\tilde{\mathbf{\phi}}_{\alpha}(\omega)\Big{]} ^{-1}\mathbf{\Omega}, \tag{8}\] where \(\mathbf{V}\) and \(\mathbf{\Omega}\) are given below Eq. (6). 
Equation (7) can be recast as \[\bigg{\langle}\hat{Q}_{u}\frac{\partial H_{\mathrm{S}}}{\partial \hat{Q}_{u}}\bigg{\rangle}=\int_{0}^{\infty}\mathrm{d}\omega\,\mathbb{P}_{u}^{Q }(\omega)\bigg{\langle}\hat{Q}_{u}\frac{\partial H_{\mathrm{S}}}{\partial\hat{Q} _{u}}\bigg{\rangle}_{\mathrm{c}}(\omega,\beta), \tag{9}\] where \[\bigg{\langle}\hat{Q}_{u}\frac{\partial H_{\mathrm{S}}}{\partial \hat{Q}_{u}}\bigg{\rangle}_{\mathrm{c}}(\omega,\beta)=\frac{\omega}{2}\coth \frac{\beta\omega}{2} \tag{10}\] and \[\mathbb{P}_{u}^{Q}(\omega)=\frac{2}{\pi\omega}\sum_{v}\bigg{[}V_{uv}-\sum_{ \alpha}\tilde{\phi}_{\alpha uv}(0)\bigg{]}J_{uv}(\omega). \tag{11}\] In Eq. (10) the subscript "c" denotes the canonical ensemble average of a closed system with \(c_{\alpha uj}=0\) and \(\mathbf{V}=\omega^{2}\mathbf{\Omega}^{-1}\) in Eq. (3). The closed system here is a set of noninteracting harmonic oscillators with the same frequencies \(\omega\). Therefore, the left-hand side of Eq. (10) equals twice the average potential energy of a harmonic oscillator in the canonical ensemble, which is well known to be \((\omega/2)\coth(\beta\omega/2)\). A similar process for the case \(\hat{X}=\hat{P}_{u}\) leads to \(\langle\hat{P}_{u}\partial H_{\mathrm{S}}/\partial\hat{P}_{u}\rangle=\Omega_{u} \langle\hat{P}_{u}^{2}\rangle\). Using the fluctuation-dissipation theorem [26] again, we find \[\bigg{\langle}\hat{P}_{u}\frac{\partial H_{\mathrm{S}}}{\partial \hat{P}_{u}}\bigg{\rangle}=\frac{\Omega_{u}}{\pi}\int_{0}^{\infty}\mathrm{d} \omega\,J_{uu}^{PP}(\omega)\coth\frac{\beta\omega}{2}. \tag{12}\] where \(\mathbf{J}^{PP}(\omega)=\{J_{uv}^{PP}(\omega)\}\) is the anti-Hermitian part of matrix \(\tilde{\mathbf{\chi}}^{PP}(\omega)=\{\tilde{\chi}_{WP}^{PP}(\omega)\}\)[21]: \[\tilde{\mathbf{\chi}}^{PP}(\omega)=\mathbf{\Omega}^{-1}+\omega^{2}\mathbf{ \Omega}^{-1}\tilde{\mathbf{\chi}}^{QQ}(\omega)\mathbf{\Omega}^{-1}. \tag{13}\] We obtain \[\bigg{\langle}\hat{P}_{u}\frac{\partial H_{\mathrm{S}}}{\partial \hat{P}_{u}}\bigg{\rangle}_{\mathrm{c}}(\omega,\beta)=\frac{\omega}{2}\coth \frac{\beta\omega}{2}, \tag{14}\] for the same closed system with \(c_{\alpha uj}=0\) and \(\mathbf{V}=\omega^{2}\mathbf{\Omega}^{-1}\). It is evident that the left-hand side of Eq. (14) equals twice the average kinetic energy of a harmonic oscillator in canonical ensemble. By substituting Eqs. (13) and (14) into Eq. (12), we obtain the final result \[\bigg{\langle}\hat{P}_{u}\frac{\partial H_{\mathrm{S}}}{\partial \hat{P}_{u}}\bigg{\rangle}=\int_{0}^{\infty}\mathrm{d}\omega\,\mathbb{P}_{u}^{P }(\omega)\bigg{\langle}\hat{P}_{u}\frac{\partial H_{\mathrm{S}}}{\partial\hat{P}_{u }}\bigg{\rangle}_{\mathrm{c}}(\omega,\beta) \tag{15}\] with \[\mathbb{P}_{u}^{P}(\omega)=\frac{2\omega}{\pi\Omega_{\text{\tiny{T}}}}J_{uu}( \omega). \tag{16}\] Equations (9-11) and Eqs. (15-16) are partly the main results of this work. A brife discussion is presented here to conclude this section. Equations (9) and (15) are what we call the generalized quantum equipartition theorem. They expand quantities \(\langle\hat{Q}_{u}\partial H_{\text{\tiny{S}}}/\partial\hat{Q}_{u}\rangle\) and \(\langle\hat{P}_{u}\partial H_{\text{\tiny{S}}}/\partial\hat{P}_{u}\rangle\) in open quantum systems in terms of **the same** quantities in a set of closed systems, which we can use \(\omega\) to label. Equations (11) and (16) are interpreted as the probability density functions that represent the weights of the quantities of the corresponding closed systems. 
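As a concrete illustration, the densities (11) and (16) can be evaluated numerically. The sketch below does so for a two-mode Brownian oscillator coupled to a single bath with a Drude-type response \(\tilde{\phi}(\omega)=\boldsymbol{\eta}\gamma/(\gamma-i\omega)\) (an illustrative assumption), reading \(J_{uv}(\omega)\) in (11) and (16) as the anti-Hermitian part of \(\tilde{\boldsymbol{\chi}}^{QQ}(\omega)\); it then checks the normalization of the densities and the classical limit of (9) at high temperature.

```python
# Numerical sketch of the generalized equipartition relations (9)-(16) for a two-mode
# Brownian oscillator coupled to a single bath. The Drude-type bath response used
# below, phi(omega) = eta * gamma / (gamma - i*omega), is an illustrative assumption.
import numpy as np

M = np.diag([1.0, 1.5])                      # masses; Omega = M^{-1}
Omega = np.linalg.inv(M)
k = np.array([[2.0, 0.3], [0.3, 1.5]])       # bare potential matrix (positive definite)
eta = np.diag([0.2, 0.3])                    # system-bath coupling strengths
gamma = 5.0                                  # Drude cutoff of the bath response

def phi(w):                                  # bath response, with phi(0) = eta
    return eta * gamma / (gamma - 1j * w)

V = k + phi(0.0).real                        # V = k + phi_tilde(0)

def chi_QQ(w):                               # Eq. (8)
    return np.linalg.inv(Omega @ V - w**2 * np.eye(2) - Omega @ phi(w)) @ Omega

w_grid = np.linspace(1e-4, 60.0, 20000)
dw = w_grid[1] - w_grid[0]
P_Q = np.zeros((2, w_grid.size))
P_P = np.zeros((2, w_grid.size))
for i, w in enumerate(w_grid):
    chi = chi_QQ(w)
    J = ((chi - chi.conj().T) / 2j).real     # anti-Hermitian part of chi^QQ(omega)
    P_Q[:, i] = (2.0 / (np.pi * w)) * (k @ J).diagonal()         # Eq. (11), V - phi(0) = k
    P_P[:, i] = (2.0 * w / np.pi) * np.diag(M) * J.diagonal()    # Eq. (16), J = J^QQ

print("normalization of P^Q:", np.round((P_Q * dw).sum(axis=1), 3))
print("normalization of P^P:", np.round((P_P * dw).sum(axis=1), 3))

# Thermal average (9) for the first coordinate and its classical limit 1/beta
beta = 0.05
avg = ((P_Q[0] * 0.5 * w_grid / np.tanh(0.5 * beta * w_grid)) * dw).sum()
print(f"<Q_1 dH_S/dQ_1> = {avg:.3f}  (classical value 1/beta = {1/beta:.1f})")
```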
Proofs of their positivity and normalization are shown in Ref. [24]. Quantum equipartition theorem here offers a new angle on how to calculate quantities of open systems in terms of those of closed systems. The former is generally hard to obtain while the latter is easy to find. Another application of the relations (9) and (15) is given below. Noting that the total energy is given by \(E(\beta)=1/2\sum_{u}\langle\hat{Q}_{u}\partial H_{\text{\tiny{S}}}/\partial \hat{Q}_{u}\rangle+1/2\sum_{u}\langle\hat{P}_{u}\partial H_{\text{\tiny{S}}}/ \partial\hat{P}_{u}\rangle\), we arrive at \[E(\beta)=\int_{0}^{\infty}\text{d}\omega\,\mathbb{P}(\omega)\mathcal{E}( \omega,\beta) \tag{17}\] with \[\mathbb{P}(\omega)=\frac{1}{2n_{\text{\tiny{S}}}}\sum_{u}\left[\mathbb{P}_{u }^{Q}(\omega)+\mathbb{P}_{u}^{P}(\omega)\right] \tag{18}\] where \[\mathcal{E}(\omega,\beta)=\frac{n_{\text{\tiny{S}}}\omega}{2}\coth\frac{ \beta\omega}{2} \tag{19}\] is the average of the total energy of the chosen closed system with \(c_{\alpha uj}=0\) and \(\mathbf{V}=\omega^{2}\mathbf{\Omega}^{-1}\). Equation (17) is termed as the traditional quantum energy equipartition theorem [23]. Moving further with the help of thermodynamic equations, the free energy \(F(\beta)\) of the system can be determined by \[F(\beta)+\beta\frac{\partial F(\beta)}{\partial\beta}=E(\beta), \tag{20}\] and hence Eqs. (20) and (17) may yield \[F(\beta)=\int_{0}^{\infty}\text{d}\omega\,\mathbb{P}(\omega)\mathcal{F}( \omega,\beta), \tag{21}\] where \[\mathcal{F}(\omega,\beta)=\frac{n_{\text{\tiny{S}}}}{\beta}\ln\left(2\sinh \frac{\beta\omega}{2}\right) \tag{22}\] is the free energy of the \(\omega\)-closed system with \(\mathbf{V}=\omega^{2}\mathbf{\Omega}^{-1}\). From Eq. (21) we further obtain an expression for the partition function of the system in the form of \[\ln Z_{\text{\tiny{S}}}(\beta)=-\beta\int_{0}^{\infty}\text{d}\omega\, \mathbb{P}(\omega)\mathcal{F}(\omega,\beta). \tag{23}\] Note that Eq. (23) is much easier to obtain than the conventional influence-functional approach [27; 28]. _Alternative approach for the energy.--_ A recent review [23] presented another approach to obtain the energy of the system of multi-mode harmonic oscillators. When introduced in Ref. [22] first, the result was only limited to the single-mode case. Here we generalize their derivation and find that their derivation is not applicable to the multi-mode case. The starting point of Ref. [22] is quite straightforward. Since the conventional definition for the system energy \(U_{\text{\tiny{S}}}(\beta)=\text{tr}_{\text{\tiny{S}}}[H_{\text{\tiny{S}}} \rho_{\text{\tiny{S}}}]\) is generally challenging to handle, we adopt a normal-mode coordinate so that the transformed Hamiltonian \(H_{\text{\tiny{T}}}\) describes \(N(=1+\sum_{\alpha}n_{\alpha})\) uncoupled oscillators. Physically we do not need to obtain the detailed information for any normal modes, since the total energy \(U_{\text{\tiny{T}}}(\beta)\) is only associated with the normal frequencies, namely \[U_{\text{\tiny{T}}}(\beta)=\sum_{r=1}^{N}u(\overline{\omega}_{r},\beta)\equiv \sum_{r=1}^{N}\frac{\overline{\omega}_{r}}{2}\coth\frac{\beta\overline{\omega}_ {r}}{2}, \tag{24}\] with \(\overline{\omega}_{r}\) being the normal frequency for \(r\)th oscillator in the transformed system. Since the energy for the independent bath is well-defined as \[U_{\text{\tiny{B}}}(\beta)=\sum_{\alpha j}\frac{\omega_{\alpha j}}{2}\coth \frac{\beta\omega_{\alpha j}}{2}, \tag{25}\] the authors of Ref. 
[22] interpreted the difference \[U_{\text{\tiny{S}}}(\beta)=U_{\text{\tiny{T}}}(\beta)-U_{\text{\tiny{B}}}(\beta) \tag{26}\] as the internal energy and found it to be \[U_{\text{\tiny{S}}}(\beta)=\int_{0}^{\infty}\text{d}\omega\,\frac{1}{\pi}\, \text{Im}\,\frac{\text{d}}{\text{d}\omega}\ln\tilde{\chi}^{QQ}(\omega) \mathcal{E}(\omega,\beta), \tag{27}\] where \(\tilde{\chi}^{QQ}(\omega)\) is the one-dimensional version of Eq. (8) and \(\mathcal{E}(\omega,\beta)\) has already been defined in Eq. (19). Following their procedure for the multi-mode case we find (see Ref. [24] for the detailed derivation) \[U_{\text{\tiny{T}}}(\beta)-n_{\text{\tiny{S}}}U_{\text{\tiny{B}}}(\beta)=\int_ {0}^{\infty}\text{d}\omega\,\mathbb{B}(\omega)\mathcal{E}(\omega,\beta) \tag{28}\] with \[\mathbb{B}(\omega)=\frac{1}{\pi n_{\text{\tiny{S}}}}\,\text{Im}\,\frac{\text{ d}}{\text{d}\omega}\ln\det\tilde{\mathbf{\chi}}^{QQ}(\omega), \tag{29}\] which does not reduce to \(U_{\text{\tiny{S}}}(\beta)=U_{\text{\tiny{T}}}(\beta)-U_{\text{\tiny{B}}}(\beta)\). On the other hand, the result for the system internal energy according to the equipartition theorem (17) is applicable to any multi-mode case. Therefore, we conclude that the equipartition theorem is more reasonable than the alternative approach discussed in Ref. [22] as an expression for the internal energy of the system. _Numerical demonstrations.--_ Numerical demonstrations of the probability density functions (18) and (29) can offer an intuitive perspective on how the quantum equipartition theorem reduces to the classical equipartition theorem. We here use a two-mode Brownian-oscillator system coupled to one reservoir. To enhance clarity, we choose the spectrum of the pure bath in the following form: \[\tilde{\mathbf{\phi}}(\omega,\lambda)=\lambda\mathbf{\eta}\,\mathrm{Im}\,\frac{\Omega_{ \mathrm{n}}^{2}}{\Omega_{\mathrm{n}}^{2}-\omega^{2}-i\omega\gamma_{\mathrm{n}}} \tag{30}\] with \(\mathbf{\eta}\equiv\{\eta_{uv}=\eta_{u}\delta_{uv}\}\) specifying the strength of the system-bath couplings. We also introduce the parameter \(\lambda\in\{1,1.25,1.5\}\) to vary the strength. Figures (a) and (b) depict \(\mathbb{P}(\omega)\) and \(\mathbb{B}(\omega)\) in the three cases, respectively. As \(\lambda\) decreases, the curves become sharper around the square roots of the eigenvalues of \(\mathbf{\Omega}\mathbf{V}\). In other words, the maximum points of \(\mathbb{P}(\omega)\) and \(\mathbb{B}(\omega)\) move closer to these values as \(\lambda\) decreases. In fact we can prove [24] that \[\lim_{\{c_{\alpha ij}\}\to\{0\}}\mathbb{P}(\omega)=\sum_{i}\big{[}\delta( \omega-\omega_{i})+\delta(\omega+\omega_{i})\big{]}, \tag{31}\] where the summation runs over all square roots \(\omega_{i}\) of the eigenvalues of the matrix \(\mathbf{\Omega}\mathbf{V}\). These results show that in the weak coupling limit, only particular closed systems contribute to the quantity that we consider, such as the energy. In other words, the energy of the open system reads \[E_{\mathrm{weak}}(\beta)=n_{\mathrm{s}}\sum_{i}\frac{\omega_{i}}{2}\coth\frac {\beta\omega_{i}}{2}, \tag{32}\] which reduces to \(\sum_{i}1/2\beta\) in the high-temperature limit \(\beta\to 0\). That is how the quantum equipartition theorem reduces to the classical one. ## IV Summary To summarize, we derived the generalized quantum equipartition theorem for arbitrary quadratic systems and proved the positivity and normalization of the two associated probability density functions. We also extended another formula for the internal energy to the multi-mode Brownian oscillator system. 
The generalized formula as well as our analysis shed light on the controversy surrounding the method. We noticed that our quantum equipartition theorem can be used to obtain the partition function of the system much more easily than the conventional approach [28]. Our results can be viewed as an indispensable generalization of recent works on the quantum equipartition theorem. As a future prospect, expressing thermodynamic quantities as an infinite series also offers potential advantages for this objective [23]. Work in this direction is in progress. As another point, it seems difficult to discuss the equipartition theorem without the help of the fluctuation-dissipation theorem, or to consider the equipartition theorem over steady states, or even in general nonequilibrium settings. Discussing the equipartition theorem in other, more difficult setups, such as quartic systems, is also challenging. All of these constitute further directions of development. ## Acknowledgements I thank Prof. Naomichi Hatano for his valuable suggestions and Ao Yuan for fruitful discussions. Xinhai Tong was supported by FoPM, WINGS Program, the University of Tokyo. This research was supported by the Forefront Physics and Mathematics Program to Drive Transformation (FoPM), a World-leading Innovative Graduate Study (WINGS) Program, the University of Tokyo.
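As a small, self-contained numerical check of the thermodynamic relation used above to pass from Eq. (17) to Eqs. (21)-(23), the sketch below verifies by a finite difference that the \(\omega\)-closed-system free energy of Eq. (22) satisfies Eq. (20) together with the energy of Eq. (19); the frequency and inverse temperatures are arbitrary illustration values.

```python
import numpy as np

def E_closed(omega, beta, n_s=1):
    # Eq. (19): average total energy of the omega-closed system.
    return n_s * 0.5 * omega / np.tanh(0.5 * beta * omega)

def F_closed(omega, beta, n_s=1):
    # Eq. (22): free energy of the omega-closed system.
    return (n_s / beta) * np.log(2.0 * np.sinh(0.5 * beta * omega))

# Check the relation of Eq. (20), F + beta*dF/dbeta = E, by a central finite difference.
omega, n_s, h = 1.3, 1, 1e-6
for beta in (0.2, 1.0, 5.0):
    dF_dbeta = (F_closed(omega, beta + h, n_s) - F_closed(omega, beta - h, n_s)) / (2 * h)
    lhs = F_closed(omega, beta, n_s) + beta * dF_dbeta
    rhs = E_closed(omega, beta, n_s)
    print(f"beta={beta:4.1f}  F + beta dF/dbeta = {lhs:.6f}   E = {rhs:.6f}")
```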
The equipartition theorem is a fundamental law of classical statistical physics, stating that every degree of freedom contributes an energy of $k_{B}T/2$, where $T$ is the temperature and $k_{B}$ is the Boltzmann constant. Recent studies have revealed that a quantum version of the equipartition theorem exists. This work examines how to obtain the generalized quantum equipartition theorem for arbitrary quadratic systems, namely multi-mode Brownian oscillators interacting with multiple reservoirs at the same temperature. An alternative approach for the internal energy of such systems is also presented, and a comparison of its results with those of the quantum equipartition theorem leads to the conclusion that the latter is more reasonable. These results constitute an important generalization of the quantum equipartition theorem.
2309.04791
osmAG: Hierarchical Semantic Topometric Area Graph Maps in the OSM Format for Mobile Robotics
Maps are essential to mobile robotics tasks like localization and planning. We propose the open street map (osm) XML based Area Graph file format to store hierarchical, topometric semantic multi-floor maps of indoor and outdoor environments, since currently no such format is popular within the robotics community. Building on-top of osm we leverage the available open source editing tools and libraries of osm, while adding the needed mobile robotics aspect with building-level obstacle representation yet very compact, topometric data that facilitates planning algorithms. Through the use of common osm keys as well as custom ones we leverage the power of semantic annotation to enable various applications. For example, we support planning based on robot capabilities, to take the locomotion mode and attributes in conjunction with the environment information into account. The provided C++ library is integrated into ROS. We evaluate the performance of osmAG using real data in a global path planning application on a very big osmAG map, demonstrating its convenience and effectiveness for mobile robots.
Delin Feng, Chengqian Li, Yongqi Zhang, Chen Yu, Soeren Schwertfeger
2023-09-09T13:36:24
http://arxiv.org/abs/2309.04791v1
# osmAG: Hierarchical Semantic Topometric Area Graph Maps in the OSM Format for Mobile Robotics ###### Abstract Maps are essential to mobile robotics tasks like localization and planning. We propose the open street map (osm) XML based Area Graph file format to store hierarchical, geometric semantic multi-floor maps of indoor and outdoor environments, since currently no such format is popular within the robotics community. Building on-top of osm we leverage the available open source editing tools and libraries of osm, while adding the needed mobile robotics aspect with building-level obstacle representation yet very compact, topometric data that facilitates planning algorithms. Through the use of common osm keys as well as custom ones we leverage the power of semantic annotation to enable various applications. For example, we support planning based on robot capabilities, to take the locomotion mode and attributes in conjunction with the environment information into account. The provided C++ library is integrated into ROS. We evaluate the performance of osmAG using real data in a global path planning application on a very big osmAG map, demonstrating its convenience and effectiveness for mobile robots. ## I Introduction Autonomous mobile robots have found extensive applications in various fields such as agriculture, industry, and commerce. These robots rely on known maps for autonomous localization, mission-planning and navigation to accomplish tasks. A well-defined map representation serves as a prerequisite for mobile robots to execute their tasks effectively. Given the substantial differences in scale and structural characteristics between indoor and outdoor environments, map representations for mobile robots vary significantly. For outdoor scenarios, large-scale vector maps like OpenStreetMap are common. Point cloud maps are often employed in smaller outdoor scenarios [1]. In indoor settings, 2D or 3D grid maps and point cloud maps are the most widely used map representations. Consequently, most robots are designed for either indoor or outdoor environments exclusively. With the further advancement of sensor technology, maps are becoming more accurate and detailed. Simultaneously, improvements in simultaneous localization and mapping (SLAM) algorithms have led to increased precision and scalability in map construction. However, memory and computation constrains limit the use of point cloud and 3D grid map based approaches for large areas for localization. They are also typically too detailed for most semantic information that is to be encoded, like room numbers or terrain types. Finally, 3D mesh and point cloud based maps are quite difficult to employ for path planning and navigation. Therefore, for most of these tasks, 2D grid maps are still widely used in robotics. They are suitable up to a certain size in terms of storage and computation, but become computationally more demanding as map sizes grow. They do not readily provide semantic information and, most crucially, are inherently 2D, so fail to support multi-level scenarios like buildings with more than one floor. Graph structures encoding the topology of an environment are a solution to these problems [2]. These graphs represent places (vertices) and how they are connected (edges). Topometric maps additionally encode metric information about the position of these elements. These graphs can also be easily organized in a hierarchical manner, grouping several places or connections into a parent structure. 
Adding semantic information to the graph elements further enhances the utility of the map. In our paper [3] we go into further detail about the hierarchical topometric representation of 3D robotic maps. Fig. 1: Two buildings with two stories each rendered from osmAG and with a planned path utilizing an elevator from one building to the other. In our previous work [4] and [5] we developed the so-called Area Graph, a topometric representation generated from 2D grid maps. That work is based on a so-called TopologyGraph [6], which is a pruned and simplified version of the Voronoi Diagram. The Area Graph has areas as graph nodes and passages between the areas as edges: passages mark where it is possible for a robot to traverse from one area to another area that is touching the first area. In this paper we present an on-file format to store said Area Graph. Additionally, we define how a hierarchy of areas and passages can be established and how semantic information can be added to it. That on-file format is based on and fully compatible with the open street map XML format - we just define certain use patterns and special osm tags that encode the hierarchical, semantic Area Graph structure in the osm XML tag "way", which is used for both areas and passages. We also provide the osmAG C++ library to load osmAG files into memory and utilize them for visualization and planning in the Robot Operating System (ROS). Fig. 1 shows a multi-story osmAG map and a planned path. Adopting the open street map XML standard has advantages in its widespread use and support from mature software, facilitating map editing, visualization, updates, and corrections. Apart from autonomous cars and a few other notable exceptions [7], open street map has not been widely adopted in the robotics community, even though some indoor approaches exist [8, 9]. The AreaGraph encodes the areas of rooms based on the architectural walls. We can create the AreaGraph from complete 3D point clouds [3] or from 2D grid maps [4, 5]. We obtain furniture-free 2D grid maps either by employing suitable Simultaneous Localization and Mapping technologies [10, 11] or by rendering CAD building data into 2D grid maps. The memory consumption of our abstract osmAG is orders of magnitude smaller than that of 2D or 3D grid maps and point clouds, and planning on it is easy and fast, as our experiment will show. In [12] we already utilize osmAG for localization and tracking of a mobile robot with a 3D LiDAR, see Fig. 2. The key contributions of this work are: * Definition of an on-disk map representation osmAG to store hierarchical, semantic, topometric map data in the open street map format with additional robotics-centric Area Graph data. * Providing a C++ library osmAGlib to load osmAG into memory and perform tasks like visualization in ROS and path planning. * Providing a big osmAG map and performing various experiments with it 1. Footnote 1: [https://github.com/STAR-Center/osmAG](https://github.com/STAR-Center/osmAG) ## II Related work ### _Hierarchical Topological Maps_ In recent years, there has been great interest in the integration of hierarchical, semantic, and topological mapping. Specifically, learning-based semantic segmentation has made significant progress, enabling high-precision semantic segmentation of 3D point clouds and images. Current research on semantic maps mainly focuses on object-level [13, 14] or dense maps, which include volumetric models [15], point clouds [16, 17], and 3D grids (voxels) [18]. 
These methods do not involve the estimation of higher-level semantics, such as rooms, and often return dense models that are not suitable for direct navigation. Segmentation alone also does not provide topological connections of neighboring areas. The second research line focuses on constructing hierarchical topological map models. Hierarchical maps have been widely used since the inception of robotics [2, 19]. Early works focus on 2D maps, exploring the use of multi-layer maps to address the apparent discrepancy between metric and topological representations [20, 21]. More recently, 3D Scene Graphs have been introduced as expressive layered models for 3D environments. A 3D Scene Graph is a hierarchical graph where nodes represent spatial concepts at multiple abstraction levels and edges represent relationships between concepts. Armeni et al. [22] pioneer the use of 3D Scene Graphs in computer vision, modeling the environment as a graph that includes low-level geometry (i.e., metric-semantic grids), objects, rooms, and camera poses, and presented the first algorithm to parse metric-semantic 3D grids into 3D Scene Graphs. Rosinol et al. [23] propose a novel 3D Scene Graph model that directly builds from sensor data, including subgraphs for places (useful for robot navigation), modeling objects, rooms, and buildings, and capturing moving entities in the environment. Fig. 2: Application of osmAG for robot localization with a 3D LiDAR pointcloud in the Robot Operating System ROS [12] The work most similar to ours is a semantic SLAM method Hydra [24]. Hydra constructs a real-time multi-level 3D Scene Graph using onboard sensors. It transforms a local Euclidean Signed Distance Function (ESDF) into a metric-semantic 3D grid and a Voronoi diagram. It extracts a topological map of places and uses a method inspired by community detection for room segmentation. Our proposed osmAG has a similar hierarchical structure, but we construct the map offline. Unlike Hydra's oversegmented room segmentation, we accurately segment areas. While Hydra focuses on fine-grained indoor object mapping, we target large indoor and outdoor environments, including multi-floor scenes with adaptable layering. Another similar research direction is parsing the layout of buildings from 2D or 3D data. A considerable amount of work has been dedicated to parsing 2D maps [25], including hypothesis-based methods [26] and learning-based approaches [27]. One common method for constructing topological maps from 2D occupancy grid maps is to compute Voronoi diagrams, which facilitate the extraction of structures such as points and regions as topological nodes. Depending on the extracted structures, related research can be roughly divided into three types: selecting key points of the Voronoi diagram, such as region boundaries (e.g., doors), to divide the environment into disjoint regions [28, 29]; using the vertices of the Voronoi diagram as nodes in the topological map [6, 30]; performing region segmentation based on the Voronoi diagram, treating regions as nodes, as demonstrated in the work by Friedman et al. [31] using Conditional Random Fields for labeling. In addition to Voronoi-based methods, Bormann et al. [25] introduced several approaches to parse 2D maps into region nodes such as rooms, including morphology-based segmentation, distance transform-based methods, and feature-based segmentation. Hou et al. 
[4] proposed a method based on Voronoi diagrams to construct a topological graph with regions as nodes and corridors as edges, using a non-heuristic region growing approach to identify meaningful areas such as rooms. This method is used as the basis for the automatic generation of the topological graph in this paper. Recent work has focused on 3D data. Liu et al. [27] and Stekovic et al. [32] projected 3D point clouds onto 2D maps, but this approach is not suitable for multi-story buildings. Zheng et al. [33] detected rooms by performing region growing on a 3D metric-semantic model. ## III osmAG In this section, we will provide a detailed exposition of the structure and composition of the osmAG file format. Our format utilizes the open street map XML tag "way" for areas and passages of the Area Graph, standardizes the use of some common attributes for the XML way elements and introduces a couple of osm tags. osmAG maps can be embedded into normal osm maps - since osmAG is fully compatible with osm. ### _Definition_ First we define some basic osmAG elements, followed by their on-disk osm XML storage format (see Fig. 3). In the beginning we just consider a non-hierarchical map - the more complex definition for the hierarchy follows afterwards. **Area:** The node of an AreaGraph that encodes an area as a closed polygon, where the points are stored in an ordered list of nodes. Often, especially indoors, areas represent rooms or corridors. The construction algorithm has a parameter that decides how to segment the space (minimum free distance). But areas can also represent whole buildings, certain types of streets or terrains or a whole campus. **Inner area:** A free area surrounded by a boundary, e.g. a room, such as the blue outline in Fig. 2. This information is useful for example for localization, where you want to know if you should match against this wall from the inside (inner area) or the outside. **Structure area:** The space containing a closed boundary, which is the outer contour of the area and usually is a parent of multiple sub-areas, for example the outer walls of a building. **Passage:** The edge of the Area Graph, connecting two areas topologically. The two areas have to share at least two points (nodes). Those shared points form the passage, which is represented as a set of points forming a line. The on-file format of osmAG follows the open street map (osm) standard with some extensions: **XML tag: node:** A geographic location as latitude and longitude coordinates with an id. Fig. 3: Non-functional demo of the osm xml of osmAG **Root node:** A node tag that contains the pose of the map origin. It converts the WGS84 geodetic coordinate system to Cartesian coordinates as a reference point and facilitates traditional robot indoor navigation. **XML tag: way:** An ordered lists of nodes - the osm method to encode both streets and areas. **osm way key: osmAG:id:** While osm is unaware of our tags, we cannot utilize the osm ids for ways, as they may change when uploading the data to an osm server. Therefore, we create our own ids for them. **osm way key: osmAG:type:** If present that osm way is part of the Area Graph. The value encodes whether this way is an area or a passage. **osm way key: osmAG:areatype:** Inner or structure, see above. **osm way keys: osmAG:from and osmAG:to:** While the Area Graph is undirected, for the on-file format we save the two osmAG:ids of the two areas connected to a passage in the values of the osmAG:from and osmAG:to keys in arbitrary order. 
**osm way key: osmAG:parent:** This is the key that encodes the hierarchical structure. Its value is the osmAG:id of this area's parent. The areas of all children have to be fully contained within their parent. Areas can never overlap, unless they are on different elevations or are parent and (grand-)child. This spans a hierarchical tree of areas and their children. One osmAG file can contain multiple such trees, together with traditional osm vector data. The hierarchical structure is mainly used for two scenarios: First, we may want to join multiple neighboring areas into one bigger area, e.g. to facilitate faster planning and encode bigger areas and semantic tags for them. For example, we can, in the outdoor case, join areas for a parking place and neighboring areas for streets together to a bigger area. Indoors we can join multiple rooms to form a "floor" of a building. In the second case, the hierarchical structure is used to stack several 2D areas on top of each other, e.g. multiple floors of a building. The height of the floors is encoded, following the osm standard, as height above the ground floor in the osm key "height". The different levels of a building are connected via passages. E.g. an elevator will have areas on every floor that are connected to each other vertically by special elevator passages and that are also connected in the 2D plane to one (or more) neighboring areas with passage(s) that have the semantic information "elevatordoor". Stairs follow the same principle. More details can be found on GitHub. ### _Visualization and Editing_ Since osmAG follows the basic data format of open street map, users can perform visualization and editing operations in any software or application that supports OpenStreetMap. The three-dimensional rendering of osmAG data was performed in the OpenIndoor application2. Fig. 4 (a) illustrates the shape and geographic location of the indoor building. Fig. 4 (b) showcases some floors within one of the buildings, highlighting variations in the structural layout between floors. Fig. 4 (c) and Fig. 4 (d) depict the rendered floor plans of different levels, with black sections representing passages. Walls with a fixed thickness show the area polygons, and semantic labels such as "stairs", "restrooms", and "elevators" are annotated according to the icons used in OpenStreetMap. Footnote 2: [https://wiki.openstreetmap.org/wiki/OpenIndoor](https://wiki.openstreetmap.org/wiki/OpenIndoor) The two-dimensional visualization and manual editing of osmAG are achieved through the JOSM (Java OpenStreetMap Editor) application. By using JOSM, users are able to easily view, edit, and update the area graph, as shown in Fig. 5. ### _Automated Generation_ Although existing editors like JOSM can be used to edit area map, in this case, the elements, information, and relationships of the topological map must be manually added. It is impractical when the scene is vast. We focus on the basic Fig. 4: 3D Rendering of an osmAG map Fig. 5: 2D visualization of osmAG (highlighted) of two buildings in JOSM, together with classic osm data functionality of offline mapping for osm. This functionality is implemented by a C++ library based on osmAG. Offline mapping involves segmenting, extracting, and merging from existing maps to create a new map. These maps can be CAD files of buildings, 3D point clouds, or maps generated by robots. We implement the automatic conversion from 2D grid maps and 3D point cloud maps into osmAG, as well as the simple fusion of multiple topological maps. 
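Before turning to the generation methods below, the following minimal Python sketch shows how the keys defined in Section III-A (osmAG:id, osmAG:type, osmAG:areatype, osmAG:from/to, osmAG:parent) can be emitted as plain osm XML using only the standard library. It is illustrative only and not the osmAGlib API; all ids, names, and coordinates are made-up values.

```python
import xml.etree.ElementTree as ET

osm = ET.Element("osm", version="0.6")

# Four nodes forming the polygon of a small room (WGS84 lat/lon, arbitrary values).
corners = [(31.1790, 121.5900), (31.1790, 121.5901), (31.1791, 121.5901), (31.1791, 121.5900)]
for i, (lat, lon) in enumerate(corners, start=1):
    ET.SubElement(osm, "node", id=str(i), lat=str(lat), lon=str(lon))

# An inner area (e.g. a room) as an osm "way" carrying the osmAG keys.
room = ET.SubElement(osm, "way", id="-1")
for node_id in [1, 2, 3, 4, 1]:                      # closed polygon: first node repeated
    ET.SubElement(room, "nd", ref=str(node_id))
for k, v in [("osmAG:id", "room_101"), ("osmAG:type", "area"),
             ("osmAG:areatype", "inner"), ("osmAG:parent", "floor_1"),
             ("name", "101"), ("height", "0")]:
    ET.SubElement(room, "tag", k=k, v=v)

# A passage shared with a neighboring corridor: a way over the two shared nodes.
door = ET.SubElement(osm, "way", id="-2")
for node_id in [2, 3]:
    ET.SubElement(door, "nd", ref=str(node_id))
for k, v in [("osmAG:id", "door_101"), ("osmAG:type", "passage"),
             ("osmAG:from", "room_101"), ("osmAG:to", "corridor_1")]:
    ET.SubElement(door, "tag", k=k, v=v)

print(ET.tostring(osm, encoding="unicode"))
```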
**Generating from a 2D grid map.** Our previous work [4, 5] generates non-hierarchical areas graphs from 2D grid maps. We modified that work to output osmAG files that can then be further edited. **Generating from a CAD file.** The process is illustrated in Fig. 6 (a), taking a CAD file input as an example. Generally, CAD files for building layouts contain multiple layers and semantic information. Currently, CAD files are processed manually by removing layers. Fig. 6 (b) shows the generated area graph, where it can be observed that a corridor in the bottom left of the current image is divided into several areas. By adjusting the parameter \(alpha\) shape in the algorithm, issues of over-segmentation and under-segmentation can be alleviated. Fig. 6 (c) demonstrates the effect of directly using the Area Graph method to generate the topological map's area nodes. Fig. 6 (d) illustrates how this method correctly generates inner areas and passages for the osmAG (the red arrow in the image represents a passage). **Generating from a 3D pointcloud map.** We follow our previous work [3] which generates hierarchical 3D topometric maps from 3D point clouds of the whole building, to extract 2D Area Graphs from a 3D pointcloud map. These extraction methods are suitable for generating topological representations in indoor environments. For further processing, our osmAGlib C++ library efficiently merges multiple maps. This is achieved through a process that involves assessing the distances between nodes and selectively consolidating certain points to merge osmAG maps and integrate existing outdoor OpenStreetMaps into the current map. ## IV Path Planning with osmAG Navigation is an essential part of mobile robotics and utilizing osmAG for path planning is a crucial part in developing a navigation stack for osmAG. We have already demonstrated the effectiveness of path planning in the area graph [34]. Here, we describe our approach to planning with the hierarchical area graph data structure. The goal of path planning in osmAG is to provide the best global path between two coordinates. The paths generated may be close to walls and it is the job of local planning to keep a safe distance during navigation. We cannot use the graph structure of the area graph to directly plan, because the cost is in traversing the vertices, while the cost of traversing a passage is typically zero. The cost of traversing the area depends on which passage you enter and through which passage you exit. A simple solution would assign the cost by summing up the distance from the first passage to the center with the distance from the center to the second passage. But that may overestimate the cost and lead to sub-optimal paths as shown in Fig. 8 (a). We therefore opt for generating a 2D grid map just from that area and utilizing A* on that grid map to calculate the true cost for that traversal (Fig. 8 (b)). Consequently, for planning we build a new graph where the passages are the vertices and where we add edges to all possible other passages of that area. We pre-compute this passage graph on the leaves of the whole Area graph. Planning a path between two coordinates then follows these simple steps: 1. Find the leaf areas of the start and goal points. 2. (Special case: if both are in the same area, render that area and use A* on that grid map) Fig. 6: Generating osmAG from CAD data Fig. 7: The automatic generation of osmAG from 3D maps[3] 3. 
Temporarily add edges from the start (goal) point to all passages of that area by using A* on the rendered 2D grid map of that area. 4. In that graph use A* to search for a path from the start to the goal node. 5. Return the path in terms of a list of passages and areas and/or in terms of a set of points (from the A* of the pre-computed 2D grids). ### _Speedup using Hierarchy_ Path planning can be further sped up by utilizing bigger, higher-level areas. This is done by pre-computing the cost of all combinations of start and end passages of each non-leaf area. This pre-computation is very fast, since it utilizes the already computed costs of the child areas. During planning, whenever we encounter a passage we select the highest level area it connects to that is not containing the goal point - unless it is the goal leaf area. ### _Planning using Semantic Information_ One great feature of planning with the osmAG map is, that it can take robot-specific capabilities into account. For that semantic cues from the passages and areas are taken into account. For example, a wheeled robot will not plan through an area with the semantic information "stairs", while a legged robot might. For that we plan to callback a cost-calculation function for every area and passage that is encountered, so robot specific code may determine the cost. The legged robot may return the cost in terms of the estimation of time needed to traverse and still avoid the stairs--if there is a ramp with just a small detour close by. Other information taken into account may be the step height of a curb (bigger wheeled robots can traverse, smaller ones cannot), the type of door (automatic, push, pull), or the material of an area (pavement, grass, sand). ## V Experiment Our experiment verifies the feasibility and effectiveness of the proposed global path planning method in osmAG. This global path planning experiment is conducted in an osmAG that includes two multi-story buildings and an outdoor area. The osmAG is stored in XML file format with a size of 883.9 kb. It consists of 4908 nodes, 500 areas, and 347 passages. The actual area of the indoor area is approximately 22300 \(m^{2}\), while the outdoor area is approximately 3000 \(m^{2}\). Fig. 1 shows how we use the proposed osmAG to represent a complex and large-scale environment, and also demonstrates the global path planning results of the robot spanning two floors, across the road, leveraging semantic cues such as whether to use elevators. The path shown by the black line in the figure starts from the first floor of the building located in the bottom half of the figure, through the outdoor area to the first floor of the building in the top half of the figure, and then through the elevator to the destination located on the second floor. Our experiments are carried out on an Intel i7-6700 CPU. The distance of the planned path is 157.57m (Includes the vertical distance between floors), the time is about 3100 \(\mu\)s at the topological level, and the pre-compute time (calculating the metric level distance for all areas and can be applied multiple times) is about 200 ms. ## VI Conclusions This paper proposed a novel hierarchical semantic topological map called open street map Area Graph (osmAG), based on the Area Graph and OpenStreetMap. This map represents the environment using a topometric, hierarchical semantic structure with areas as vertices and passages as edges. It can encode outdoor and indoor maps on multiple levels. 
Through its use of open street map format, we stay compatible with a plenitude of tools and libraries. Our osmAG C++ library is capable of loading osmAG, and visualizing and planning with it in ROS. We also will publish tools to semi-automatically generate osmAG from 2D grid maps, 3D point clouds, and CAD data. Through utilizing osmAG, we are able to plan paths for mobile robots, taking into account their capabilities through semantic cues, and planning through multiple floors. We are able to plan very quickly by utilizing the abstract topometric and hierarchical data structure. Our experiment shows the good performance of our approach. The code and example data are available on GitHub. For future work, we plan on providing a full ROS osmAG Navigation stack including localization, planning, and navigation with online changing of the osmAG to facilitate real-time updates of temporary or permanent map changes. We will furthermore introduce WiFi access point mapping to include those locations in the osmAG maps to fully automate our osmAG localization approach. We will then test the whole osmAG Navigation stack on real robots and provide it as open source to the robotics community. Fig. 8: Comparison of cost calculation options during path planning.
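To make the global planning procedure of Section IV concrete, the following schematic Python sketch builds the passage graph (passages as vertices, one edge per pair of passages bordering the same leaf area, plus temporary start and goal edges) and searches it. It is not the osmAGlib implementation: the per-area cost, which osmAG pre-computes with A* on a rendered 2D grid map, is abstracted here into a straight-line placeholder, and plain Dijkstra stands in for A*.

```python
import heapq, itertools, math

def traversal_cost(area, p, q):
    # Placeholder for the per-area cost; osmAG pre-computes this with A* on a grid map.
    return math.dist(p[1], q[1])

def plan(areas, start, goal):
    """areas: {area_id: [(passage_id, (x, y)), ...]}; start/goal: (area_id, (x, y))."""
    graph = {}   # vertex -> list of (neighbor, cost); vertices are passage ids or 'S'/'G'
    def add(u, v, c):
        graph.setdefault(u, []).append((v, c))
        graph.setdefault(v, []).append((u, c))
    for area_id, passages in areas.items():
        for p, q in itertools.combinations(passages, 2):      # edges inside one area
            add(p[0], q[0], traversal_cost(area_id, p, q))
    for label, (area_id, xy) in (("S", start), ("G", goal)):   # temporary start/goal edges
        for p in areas[area_id]:
            add(label, p[0], traversal_cost(area_id, ("tmp", xy), p))
    # Shortest-path search over the passage graph.
    dist, prev, pq = {"S": 0.0}, {}, [(0.0, "S")]
    while pq:
        d, u = heapq.heappop(pq)
        if u == "G":
            break
        if d > dist.get(u, math.inf):
            continue
        for v, c in graph.get(u, []):
            if d + c < dist.get(v, math.inf):
                dist[v], prev[v] = d + c, u
                heapq.heappush(pq, (d + c, v))
    path, node = [], "G"
    while node in prev:
        path.append(node)
        node = prev[node]
    return ["S"] + path[::-1], dist.get("G", math.inf)

# Toy example: two rooms joined by one door, the second room containing the goal.
areas = {"room_A": [("door_1", (5.0, 0.0))], "room_B": [("door_1", (5.0, 0.0))]}
print(plan(areas, start=("room_A", (0.0, 0.0)), goal=("room_B", (9.0, 0.0))))
```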
Maps are essential for mobile robotics tasks such as localization and planning. We propose an Area Graph file format based on OpenStreetMap (osm) XML to store hierarchical, topometric, semantic multi-floor maps of indoor and outdoor environments; since no such format is currently popular within the robotics community, this proposal is significant. By building on top of osm, we leverage its open-source editing tools and libraries while adding the aspects needed for mobile robotics: a building-level obstacle representation together with very compact topometric data that facilitates planning algorithms. By combining common osm keys with custom ones, we exploit semantic annotation to support various applications, for example planning based on robot capabilities that takes the locomotion mode and attributes into account together with the environment information. The provided C++ library is integrated into ROS.
2309.15844
Data-Driven Framework for Uncovering Hidden Control Strategies in Evolutionary Analysis
We have devised a data-driven framework for uncovering hidden control strategies used by an evolutionary system described by an evolutionary probability distribution. This innovative framework enables deciphering of the concealed mechanisms that contribute to the progression or mitigation of such situations as the spread of COVID-19. Novel algorithms are used to estimate the optimal control in tandem with the parameters for evolution in general dynamical systems, thereby extending the concept of model predictive control. This is a significant departure from conventional control methods, which require knowledge of the system to manipulate its evolution and of the controller's strategy or parameters. We used a generalized additive model, supplemented by extensive statistical testing, to identify a set of predictor covariates closely linked to the control. Using real-world COVID-19 data, we successfully delineated the descriptive behaviors of the COVID-19 epidemics in five prefectures in Japan and nine countries. We compared these nine countries and grouped them on the basis of shared profiles, providing valuable insights into their pandemic responses. Our findings underscore the potential of our framework as a powerful tool for understanding and managing complex evolutionary processes.
Nourddine Azzaoui, Tomoko Matsui, Daisuke Murakami
2023-09-26T13:58:54
http://arxiv.org/abs/2309.15844v1
# Data-Driven Framework for Uncovering Hidden Control Strategies in Evolutionary Analysis ###### Abstract We have devised a data-driven framework for uncovering hidden control strategies used by an evolutionary system described by an evolutionary probability distribution. This innovative framework enables deciphering of the concealed mechanisms that contribute to the progression or mitigation of such situations as the spread of COVID-19. Novel algorithms are used to estimate the optimal control in tandem with the parameters for evolution in general dynamical systems, thereby extending the concept of model predictive control. This is a significant departure from conventional control methods, which require knowledge of the system to manipulate its evolution and of the controller's strategy or parameters. We used a generalized additive model, supplemented by extensive statistical testing, to identify a set of predictor covariates closely linked to the control. Using real-world COVID-19 data, we successfully delineated the descriptive behaviors of the COVID-19 epidemics in five prefectures in Japan and nine countries. We compared these nine countries and grouped them on the basis of shared profiles, providing valuable insights into their pandemic responses. Our findings underscore the potential of our framework as a powerful tool for understanding and managing complex evolutionary processes. data-driven optimization algorithm model predictive control evolutionary probability distribution generalized additive model classification COVID-19 evolution ## 1 Introduction The evolution of natural dynamical systems is a complex process where the dynamics are influenced by a multitude of factors, including epidemic evolution, environmental changes, and various exogenous random events. Traditional methods of studying evolutionary processes often involve creating simplified models that may not fully capture this complexity. When it comes to influencing or controlling this evolution, optimal control theory (OCT) is a powerful and flexible tool Grune et al. (2020); Bussell et al. (2019); Ros et al. (2004). This can provide valuable insights into the likely paths of evolution and the factors that may affect it Shen et al. (2018). The aim of this work was to investigate an evolutionary process, such as epidemic evolution, from the perspective of OCT in order to uncover the hidden mechanisms that lead to deterioration or improvement of the situation. We assumed that the evolutionary process is described by a controlled system via a hidden mechanism to which we do not have access and which we cannot change directly; we can only observe its effects through measurements related to its evolution. We also assumed that this hidden
We have devised a data-driven framework for uncovering hidden control strategies used by an evolutionary system described by an evolutionary probability distribution. This innovative framework helps decipher the concealed mechanisms involved in the progression or mitigation of situations such as the spread of COVID-19. Novel algorithms are used to estimate the optimal control in tandem with the parameters governing the evolution of general dynamical systems, thereby extending the concept of model predictive control. This differs from conventional control methods, which require knowledge of the system in order to manipulate its evolution as well as knowledge of the controller's strategy or parameters. We used a generalized additive model, supplemented by extensive statistical testing, to identify a set of predictor covariates closely linked to the control. Using real-world COVID-19 data, we delineated the descriptive behaviors of the COVID-19 epidemics in five prefectures in Japan and nine countries.
2310.00494
The Algebra of $S^2$-Upper Triangular Matrices
Based on work presented in [4], we define $S^2$-Upper Triangular Matrices and $S^2$-Lower Triangular Matrices, two special types of $d\times d(2d-1)$ matrices generalizing Upper and Lower Triangular Matrices, respectively. Then, we show that the property that the determinant of an Upper Triangular Matrix is the product of its diagonal entries is generalized under our construction. Further, we construct the algebra of $S^2$-Upper Triangular Matrices and give conditions for an LU-Decomposition with $S^2$-Lower Triangular and $S^2$-Upper Triangular Matrices, respectively.
Steven R. Lippold
2023-09-30T21:20:55
http://arxiv.org/abs/2310.00494v1
# The algebra of \(S^{2}\)-Upper triangular matrices ###### Abstract. Based on work presented in [4], we define \(S^{2}\)-Upper Triangular Matrices and \(S^{2}\)-Lower Triangular Matrices, two special types of \(d\times d(2d-1)\) matrices generalizing Upper and Lower Triangular Matrices, respectively. Then, we show that the property that the determinant of an Upper Triangular Matrix is the product of its diagonal entries is generalized under our construction. Further, we construct the algebra of \(S^{2}\)-Upper Triangular Matrices and give conditions for an LU-Decomposition with \(S^{2}\)-Lower Triangular and \(S^{2}\)-Upper Triangular Matrices, respectively. Key words and phrases:generalization of triangular matrices, generalization of determinant map 2020 Mathematics Subject Classification: Primary 15B99, Secondary 15A15, 15A72, 15A23 ## 1. Introduction One common type of special matrix with a variety of applications is that of the Upper Triangular matrix. Notably, it is well known that if \(A\) is an Upper Triangular matrix, then the determinant \(det(A)\) is the product of its diagonal entries. In addition, matrices that satisfy certain conditions can be factored in terms of the product of an upper and lower triangular matrix (so called LU Factorization), which is often more desirable computationally. In [7], a generalization of the determinant map, denoted \(det^{S^{2}}\), was first constructed and has been subsequently studied further in [2], [4], and [6]. Namely, if \(V_{d}\) is a vector space of dimension \(d\) over a field \(k\), there exists a map \(det^{S^{2}}:V_{d}^{\otimes d(2d-1)}\to k\) with the property that \(det^{S^{2}}(\otimes_{1\leq i<j\leq 2d}(v_{i,j}))=0\) if there exists \(1\leq x<y<z\leq 2d\) such that \(v_{x,y}=v_{x,z}=v_{y,z}\). Further, this map is unique up to a constant if \(d=2\) or \(d=3\)[4]. An interesting follow-up is to construct a class of matrices that generalize the idea of Upper Triangular Matrix, but in relation to the map \(det^{S^{2}}\). In this paper, we give such a construction. Now, we give an outline to this paper. In Section 2, we start by recalling some terminology concerning graphs before giving an equivalent definition of Triangular Matrices which is useful for this paper. Then, we give some basic results concerning Triangular Matrices and determinant map. Following this, we give some basic results from [4] and [6] regarding a vector space \(\mathcal{T}_{V_{d}}^{S^{2}}[2d]\), a special element \(E_{d}^{(2)}\in\mathcal{T}_{V_{d}}^{S^{2}}[2d]\), and the map \(det^{S^{2}}\). In Section 3, we give two new constructions. First, we define the \(S^{2}\)-Diagonal of a \(d\times d(2d-1)\) matrix, which generalizes the idea of the diagonal of a square matrix. Then, we define \(S^{2}\)-Triangular Matrices before giving a result connecting \(det^{S^{2}}\), \(S^{2}\)-Upper Triangular Matrices, and the \(S^{2}\)-diagonal. In Section 4, we construct a \(k\)-algebra of \(S^{2}\)-Upper Triangular Matrices, starting first with the case \(d=2\) and extending it for \(d>2\). We define a multiplication on \(d\times d(2d-1)\) matrices and give some conditions for an LU-Decomposition using \(S^{2}\)-Lower Triangular Matrices and \(S^{2}\)-Upper Triangular Matrices, respectively. ## 2. Preliminaries In this paper, we assume that \(k\) is a field of characteristic not \(2\) or \(3\), \(\otimes=\otimes_{k}\), and that \(V_{d}\) is a vector space of dimension \(d\) over \(k\). 
Given a vector \(v\in V_{d}\), we will take \(v=(v^{1},v^{2},\ldots,v^{d})\), where \(v^{i}\in k\) for \(1\leq i\leq d\). Lastly, we take \(\mathcal{B}_{d}=\{e_{1},e_{2},\ldots,e_{d}\}\) to be a basis for \(V_{d}\). ### Edge \(d\)-Partitions of Graphs First, we will cover some basics of edge \(d\)-partitions of graphs. Throughout this paper, we will assume that graphs are undirected, do not have multiple edges or loops, and are not weighted. Regarding notation, we will denote the vertex set of a graph \(\Gamma\) as \(V(\Gamma)\) and the edge set as \(E(\Gamma)\). Next, recall the following definition. **Definition 2.1** ([4]).: Let \(\Gamma\) be a graph and \(\Gamma_{1},\Gamma_{2},\ldots,\Gamma_{d}\) be subgraphs of \(\Gamma\). Then, \((\Gamma_{1},\Gamma_{2},\ldots,\Gamma_{d})\) is an edge \(d\)-partition of \(\Gamma\) if 1. For all \(1\leq i\leq d\), \(V(\Gamma_{i})=V(\Gamma)\). 2. For all \(1\leq i<j\leq d\), \(E(\Gamma_{i})\cap E(\Gamma_{j})\) is empty. 3. \(\cup_{i=1}^{d}E(\Gamma_{i})=E(\Gamma)\). We say that an edge \(d\)-partition is cycle-free if each of the subgraphs \(\Gamma_{i}\) for \(1\leq i\leq d\) is cycle-free. We say that an edge \(d\)-partition is homogeneous if \(|E(\Gamma_{i})|=|E(\Gamma_{j})|\) for all \(1\leq i<j\leq d\). We will denote the set of edge \(d\)-partitions of \(\Gamma\) as \(\mathcal{P}_{d}(\Gamma)\) and the set of homogeneous cycle-free edge \(d\)-partitions of \(\Gamma\) as \(\mathcal{P}_{d}^{h,cf}(\Gamma)\). Figure 1 gives an example of an edge \(4\)-partition, i.e. \(d=4\). Further, each of the subgraphs is a tree with \(7\) edges, so the edge \(4\)-partition is homogeneous and cycle-free. Figure 1. A homogeneous cycle-free edge \(4\)-partition of the complete graph \(K_{8}\). Another important preliminary concerns edge \(d\)-partitions of the complete graph \(K_{2d}\). The following result is from [4]. **Lemma 2.2** ([4]).: _Let \((\Gamma_{1},\Gamma_{2},\ldots,\Gamma_{d})\) be a homogeneous cycle-free edge \(d\)-partition of \(K_{2d}\) and \(1\leq x<y<z\leq 2d\). Then, there exists a unique homogeneous cycle-free edge \(d\)-partition \((\Lambda_{1},\Lambda_{2},\ldots,\Lambda_{d})\) of \(K_{2d}\) such that \((\Gamma_{1},\Gamma_{2},\ldots,\Gamma_{d})\) and \((\Lambda_{1},\Lambda_{2},\ldots,\Lambda_{d})\) agree on all their edges, except on the triangle \((x,y,z)\)._ _Remark 2.3_.: For \(1\leq x<y<z\leq 2d\) and a homogeneous cycle-free edge \(d\)-partition of \(K_{2d}\) given by \(\Gamma=(\Gamma_{1},\Gamma_{2},\ldots,\Gamma_{d})\), we will denote the unique edge \(d\)-partition given in Lemma 2.2 as \(\Gamma^{(x,y,z)}\). This edge \(d\)-partition is known as the involution of \(\Gamma\). There is a special graph which is of interest for this paper. **Definition 2.4**.: Fix \(1\leq b\leq d\). For the vertices \(1,2,\ldots,2d\) define the sets of edges \[L_{d}(2b-1,2b)=\{(2a,2b-1)|1\leq a<b\}\cup\{(2b-1,2c-1)|b<c\leq d\},\] \[C_{d}(2b-1,2b)=\{(2b-1,2b)\},\] \[R_{d}(2b-1,2b)=\{(2a-1,2b)|1\leq a<b\}\cup\{(2b,2c)|b<c\leq d\}.\] Let the twin star graph \(TS_{d}(2b-1,2b)\) be the graph with vertices \(1,2,\ldots,2d\) and edges \[E(TS_{d})=L_{d}(2b-1,2b)\cup C_{d}(2b-1,2b)\cup R_{d}(2b-1,2b).\] The edges in \(L_{d}(2b-1,2b)\) are called the left legs of the graph and the edges in \(R_{d}(2b-1,2b)\) are called the right legs of the graph. **Example 2.5**.: As an example, consider the following graph \(TS_{7}(5,6)\). Here there are \(14\) vertices, with \(6\) left legs and \(6\) right legs. 
For example, the edges \((2,5)\) and \((5,13)\) are left legs, whereas the edges \((3,6)\) and \((6,10)\) are right legs. The graph is given in Figure 2. **Definition 2.6**.: For \(TS_{d}(2b-1,2b)\), take the set \[LL=\{x|(x,2b-1)\in L_{d}(2b-1,2b)\text{ or }(2b-1,x)\in L_{d}(2b-1,2b)\}\] and \[RL=\{x|(x,2b)\in L_{d}(2b-1,2b)\text{ or }(2b,x)\in L_{d}(2b-1,2b)\}.\] Notice that \(LL\) and \(RL\) are finite subsets of the natural numbers and there is a natural order on them. Given a left leg \((x,2b-1)\) or \((2b-1,x)\) of \(TS_{d}(2b-1,2b)\), we say that the leg number is the element number of \(x\) after \(LL\) is enumerated by the natural order on subsets of the natural numbers. Similarly, given a right leg \((x,2b)\) or \((2b,x)\), we say that the leg number is the element number of \(x\) after \(RL\) is enumerated by the order. By default, let the leg number of \((2b-1,2b)\) be \(1\). Alternatively, \[LL=\{x|x<2b-1\text{ and }x\text{ is even}\}\cup\{y|2b-1<y\text{ and }y\text{ is odd}\}.\] Also, \[RL=\{x|x<2b-1<2b\text{ and }x\text{ is odd}\}\cup\{y|2b-1<y\text{ and }y\text{ is even}\}.\] Given \((x,y)\), we will denote the leg number of \((x,y)\) as \(l(x,y)\). **Example 2.7**.: Take the Twin Star Graph given in Figure 2. Notice that the left and right legs are in order, going from the top center to the bottom center along the outside. So, we have \[LL=\{2,4,7,9,11,13\},\] and \[RL=\{1,3,8,10,12,14\}.\] Figure 2. The twin star graph \(TS_{7}(5,6)\) In this case, we see that \(l(4,5)=2\), whereas \(l(6,8)=3\). We could likewise determine the leg number of all of the other edges of \(TS_{7}(5,6)\). Lastly, note that \(l(5,6)=1\), as \(C_{7}(5,6)=\{(5,6)\}\), which is pictured in Figure 2 as the edge in the center of the graph. ### Upper Triangular Matrices We now recall upper triangular matrices and reformulate in such a way that is useful for our later construction. Given any \(d\times n\) matrix \(A\) with columns \(c_{i}\), then we can associate \(A\) with the simple tensor \(c_{1}\otimes c_{2}\otimes\ldots\otimes c_{n}\in V_{d}^{\otimes n}\). In particular, if \(e_{1},e_{2},\ldots,e_{d}\) is a basis for \(V_{d}\), then the \(d\times d\) identity matrix \(I_{d}\) is associated with the simple tensor \(e_{1}\otimes e_{2}\otimes\ldots\otimes e_{d}\). **Definition 2.8**.: Let \(A\) be a \(d\times d\) matrix with columns \(c_{i}\) and associated simple tensor \(c_{1}\otimes\ldots\otimes c_{d}\). We say that \(A\) is an Upper Triangular Matrix if for all \(1\leq i\leq d\) there exists \(\alpha_{j}\in k\) for \(1\leq j\leq i\) such that \(c_{i}=\sum_{j=1}^{i}\alpha_{j}e_{j}\). We say that \(A\) is a Lower Triangular Matrix if for all \(1\leq i\leq d\) there exists \(\alpha_{j}\in k\) for \(i\leq j\leq d\) such that \(c_{i}=\sum_{j=i}^{d}\alpha_{j}e_{j}\). If \(A\) is Upper Triangular or Lower Triangular, we say that \(A\) is a Triangular Matrix. To illustrate Definition 2.8, we give a couple examples. **Example 2.9**.: First, consider the matrix \(A=\begin{pmatrix}1&2&5\\ 0&6&10\\ 0&0&3\end{pmatrix}.\) Then, \(A\) is associated with the simple tensor \(c_{1}\otimes c_{2}\otimes c_{3}\), where \[c_{1}=e_{1}\] \[c_{2}=2e_{1}+6e_{2}\] and \[c_{3}=5e_{1}+10e_{2}+3e_{3}.\] Under Definition 2.8, this is an Upper Triangular Matrix but not a Lower Triangular Matrix. 
Next, consider the matrix \(B=\begin{pmatrix}1&0&0\\ 3&0&0\\ 1&2&5\end{pmatrix}.\) Then \(B\) is associated with the simple tensor \(u_{1}\otimes u_{2}\otimes u_{3}\), where \[u_{1}=e_{1}+3e_{2}+e_{3},\] \[u_{2}=2e_{3},\] and \[u_{3}=5e_{3}.\] Under Definition 2.8, this is not an Upper Triangular Matrix, but \(B\) is a Lower Triangular Matrix. Lastly, consider the matrix \(C=\begin{pmatrix}1&1&0\\ 1&0&0\\ 0&0&1\end{pmatrix}.\) Then \(C\) is associated with the simple tensor \(v_{1}\otimes v_{2}\otimes v_{3}\), where \[v_{1}=e_{1}+e_{2},\] \[v_{2}=e_{1},\] and \[v_{3}=e_{3}.\] Under Definition 2.8, this is not a Triangular Matrix (neither an Upper Triangular Matrix nor a Lower Triangular Matrix). Regarding Triangular Matrices, one common use is LU-Decomposition, which is summarized in the following proposition. **Proposition 2.1** ([5]).: _For any \(d\times d\) matrix \(A\), if all of the leading principal minors are nonzero, then there exists a Unit Lower Triangular Matrix \(L\) (that is, \(L\) is Lower Triangular and its diagonal entries all equal \(1\)) and a non-singular Upper Triangular Matrix \(U\) such that \(A=LU\)._ Lastly, we reformulate a definition for the diagonal of a matrix. **Definition 2.10**.: Let \(A\) be a \(d\times d\) matrix with entries \(a_{i,j}\) and \(I_{d}\) be the identity matrix with entries \(e_{j}^{i}\), where \(e_{j}=(e_{j}^{1},e_{j}^{2},\ldots,e_{j}^{d})\). Define the \(S^{1}\)-diagonal indicator set as \(DI_{d}[1]=\{(i,j)|e_{j}^{i}\neq 0\}\) and the \(S^{1}\)-diagonal as \(diag(A)=\{a_{i,j}|(i,j)\in DI_{d}[1]\}\). _Remark 2.11_.: For a \(d\times d\) matrix \(A\), notice that \(DI_{d}[1]=\{(1,1),(2,2),(3,3),\ldots,(d,d)\}\) and that \(diag(A)\) agrees with the usual definition of the diagonal of a square matrix. We are formulating Definition 2.10 in this way in order to give motivation for a later definition. ### The Tensor Algebra and the Determinant Map Next, we will recall the Tensor Algebra and its connection to the determinant map. **Definition 2.12** ([1]).: Let \(V_{d}\) be a vector space of dimension \(d\). Then, we define the vector space \(\mathcal{T}_{V_{d}}[n]=V_{d}^{\otimes n}\). The graded vector space \(\mathcal{T}_{V_{d}}=\bigoplus_{n\geq 0}\mathcal{T}_{V_{d}}[n]\), where \(\mathcal{T}_{V_{d}}[0]=k\), is called the Tensor Algebra. _Remark 2.13_.: Notice that since every \(d\times d\) matrix is associated with a simple tensor in \(V_{d}^{\otimes d}\), it follows that every \(d\times d\) matrix is associated with a simple tensor in \(\mathcal{T}_{V_{d}}[d]\). _Remark 2.14_.: The Tensor Algebra is a \(k\)-algebra with multiplication defined by concatenation of simple tensors. This structure will not be important for our purposes in this paper. In the context of the Tensor Algebra, we can give a connection to the determinant map. **Proposition 2.2**.: _There exists a unique map \(det:\mathcal{T}_{V_{d}}[d]\to k\) such that_ 1. \(det(e_{1}\otimes e_{2}\otimes\ldots\otimes e_{d})=1\)_._ 2. _If there exists_ \(1\leq x<y\leq d\) _such that_ \(v_{x}=v_{y}\)_, then_ \(det(v_{1}\otimes\ldots\otimes v_{d})=0\)_._ 3. _If_ \(c_{1}\otimes c_{2}\otimes\ldots\otimes c_{d}\) _is the simple tensor associated with a_ \(d\times d\) _matrix_ \(A\)_, then_ \(det(c_{1}\otimes\ldots\otimes c_{d})=det(A)\)_, where_ \(det(A)\) _is the usual determinant of the matrix_ \(A\)_._ Regarding the determinant function, recall the following result. **Proposition 2.3**.: _Let \(A\) be an Upper Triangular Matrix. 
Then, \(det(A)\) is precisely the product of the diagonal entries._ ### The Vector Space \(\mathcal{T}^{S^{2}}[2d]\) Next, we recall a construction from [7]. This construction parallels the Tensor Algebra previously discussed (see [2] for more information). **Definition 2.15** ([7]).: Let \(V_{d}\) be a vector space of dimension \(d\). Define the vector space \(\mathcal{T}_{V_{d}}^{S^{2}}[n]=V_{d}^{\otimes{n\choose 2}}\) and the graded vector space \(\mathcal{T}^{S^{2}}=\bigoplus_{n\geq 0}\mathcal{T}_{V_{d}}^{S^{2}}[n]\), where \(\mathcal{T}_{V_{d}}^{S^{2}}[0]=\mathcal{T}_{V_{d}}^{S^{2}}[1]=k\). _Remark 2.16_.: Simple tensors in \(\mathcal{T}_{V_{d}}^{S^{2}}[n]\) are of the form \(\otimes_{1\leq i<j\leq n}(v_{i,j})\). In particular, given a simple tensor \(\omega=(\otimes_{1\leq i<j\leq n}c_{i,j})\), we can associate to it the \(d\times{n\choose 2}\) matrix \(A_{\omega}\) with columns \(c_{i,j}\) ordered by the dictionary order. We will refer to \(A_{\omega}\) as the matrix form of \(\omega\) and refer to \(\omega\) as the simple tensor associated to \(A_{\omega}\). By way of example, let \(\omega=(\otimes_{1\leq i<j\leq 4}(v_{i,j}))\), where \[v_{i,j}=\begin{cases}e_{1},\quad i+j\text{ }even\\ e_{1}+e_{2},\quad i+j\text{ }odd\end{cases}.\] If \(A\) is the matrix form of \(\omega\), the columns of the matrix \(A\) are indexed by the pairs \((i,j)\) for \(1\leq i<j\leq 4\) under the dictionary order (i.e. the columns are labelled by \((1,2),(1,3),(1,4),(2,3),(2,4)\), and \((3,4)\), respectively). In this case, \[A=(e_{1}+e_{2})\otimes e_{1}\otimes(e_{1}+e_{2})\otimes(e_{1}+e_{2})\otimes e_{1}\otimes(e_{1}+e_{2})=\begin{pmatrix}1&1&1&1&1&1\\ 1&0&1&1&0&1\end{pmatrix}.\] Now, there is the following fact about \(\mathcal{T}_{V_{d}}^{\mathcal{S}^{2}}[n]\). **Proposition 2.4** ([4]).: _Fix a basis \(\mathcal{B}_{d}=\{e_{1},\ldots,e_{d}\}\) for \(V_{d}\) and define_ \[\mathcal{G}_{\mathcal{B}_{d}}^{\mathcal{S}^{2}}[n]=\{\otimes_{1\leq i<j\leq n }(v_{i,j})|v_{i,j}\in\mathcal{B}_{d}\}.\] _Then, \(\mathcal{G}_{\mathcal{B}_{d}}^{\mathcal{S}^{2}}[n]\) is a system of generators for \(\mathcal{T}_{V_{d}}^{\mathcal{S}^{2}}[n]\). Further, there exists a bijection between \(\mathcal{G}_{\mathcal{B}_{d}}^{\mathcal{S}^{2}}[n]\) and \(\mathcal{P}_{d}(K_{n})\), the collection of edge \(d\)-partitions of the complete graph on \(n\) vertices._ Proposition 2.4 follows by direct construction. Given \(\otimes_{1\leq i<j\leq n}(v_{i,j})\in\mathcal{G}_{\mathcal{B}_{d}}^{\mathcal{ S}^{2}}[n]\), for \(1\leq i\leq d\) we define \(\Gamma_{i}\) to be the graph with vertices \(1,2,3,\ldots,n\) and edges \[E(\Gamma_{i})=\{(x,y)|v_{x,y}=e_{i}\}.\] This defines an edge \(d\)-partition and this process defines a bijection. Next, there is a special element of \(\mathcal{T}_{V_{d}}^{\mathcal{S}^{2}}[2d]\) that we will be relying on in this paper, which generalizes the construction of the simple tensor associated to the identity matrix. **Definition 2.17** ([6]).: For \(1\leq a\leq d\), let \(S_{a}=\{2a-1,2a\}\). Then, for \(1\leq i<j\leq 2d\), we define \(e_{i,j}\) by \[e_{i,j}=\begin{cases}e_{a},&i+j\text{ is even and }i\in S_{a}\\ e_{b},&i+j\text{ is odd and }j\in S_{b}\end{cases}.\] Finally, we define \(E_{d}^{(2)}\in\mathcal{T}_{V_{d}}^{\mathcal{S}^{2}}[2d]\) by \(E_{d}^{(2)}=\otimes_{1\leq i<j\leq 2d}(e_{i,j})\). _Remark 2.18_.: Notice that the edge \(d\)-partition associated with \(E_{d}^{(2)}\) consists only of twin star graphs. 
More specifically, the edge \(d\)-partition is \((TS_{d}(1,2),TS_{d}(3,4),\ldots,TS_{d}(2d-1,2d))\). _Remark 2.19_.: In [4], a different distinguished element of \(\mathcal{T}_{V_{d}}^{\mathcal{S}^{2}}[2d]\) was given for \(d\geq 2\). However, the element \(E_{d}^{(2)}\) fits more naturally into a further generalization, so we will be using it for the terms defined in this paper. For more information on this further generalization \(E_{d}^{(r)}\), see [2]. #### 2.5.1. General Results Next, we will consider a map \(det^{S^{2}}:\mathcal{T}^{S^{2}}_{V_{d}}[2d]\to k\). This map has been studied in several instances, including [2], [4], [6], [7] and [8]. For our purposes, the following theorem summarizes the relevant information. **Proposition 2.5** ([4],[6],[7]).: _There exists a map \(det^{S^{2}}:\mathcal{T}^{S^{2}}_{V_{d}}[2d]\to k\) which is linear, nontrivial, and has the property that \(det^{S^{2}}(\otimes_{1\leq i<j\leq 2d}(v_{i,j}))=0\) if there exists \(1\leq x<y<z\leq 2d\) such that \(v_{x,y}=v_{x,z}=v_{y,z}\). Further, if \(d=2\) or \(d=3\), the map \(det^{S^{2}}\) is unique, up to a constant._ _Remark 2.20_.: There are parallels between Propositions 2.2 and 2.5, even if uniqueness is still an open question for \(d>4\) in Proposition 2.5. In [2] and [3], it was shown that the determinant map and \(det^{S^{2}}\) have a further generalization \(det^{S^{r}}:V_{d}^{\otimes\binom{rd}{r}}\to k\) that satisfy a property generalizing that \(det^{S^{2}}(\otimes_{1\leq i<j\leq 2d}(v_{i,j}))=0\) if there exists \(1\leq x<y<z\leq 2d\) such that \(v_{x,y}=v_{x,z}=v_{y,z}\). We will not be focusing on \(det^{S^{r}}\) in this paper, although many of the concepts introduced can be extended to this generalization. Regarding this map \(det^{S^{2}}:V_{d}^{\otimes d(2d-1)}\to k\), a construction is given in [6] for all \(d\geq 1\). However, there exists an alternative construction for \(d=2\) and \(d=3\) which is more useful for our purposes in this paper. Due to the uniqueness, the alternative construction and the general construction presented in [6] are equivalent for \(d=2\) and \(d=3\). #### 2.5.2. The Case \(d=2\) First, suppose that \(d=2\), so \(V_{d}=V_{2}\) is a \(2\)-dimensional vector space. Let \(v_{i,j}\in V_{2}\) for \(1\leq i<j\leq 4\), where \(v_{i,j}=v_{i,j}^{1}e_{1}+v_{i,j}^{2}e_{2}\). Given an edge \(2\)-partition \((\Gamma_{1},\Gamma_{2})\), define the monomial \[M_{(\Gamma_{1},\Gamma_{2})}(\otimes_{1\leq i<j\leq 2d}(v_{i,j}))=\prod_{(i_{1}, j_{1})\in E(\Gamma_{1})}v_{i_{1},j_{1}}^{1}\prod_{(i_{2},j_{2})\in E( \Gamma_{2})}v_{i_{2},j_{2}}^{2} \tag{2.1}\] Given this, we have the following result. **Proposition 2.6** ([4]).: _Let \((\Gamma_{1}^{E^{(2)}_{1}},\Gamma_{2}^{E^{(2)}_{2}})\) be the edge \(2\)-partition associated with \(E^{(2)}_{2}\). For \(1\leq x<y<z\leq 2d\), let \((\Gamma_{1},\Gamma_{2})^{(x,y,z)}\) be the involution of \((\Gamma_{1},\Gamma_{2})\) given in Lemma 2.2._ 1. _There exists a map_ \(\varepsilon_{2}^{S^{2}}:\mathcal{P}^{h,cf}_{2}(K_{4})\to\{\pm 1\}\) _such that_ \(\varepsilon_{2}^{S^{2}}((\Gamma_{1},\Gamma_{2})^{(x,y,z)})=-\varepsilon_{2}^{ S^{2}}(\Gamma_{1},\Gamma_{2})\)_,_ \(\varepsilon_{2}^{S^{2}}\) _is unique, and_ \(\varepsilon_{2}^{S^{2}}(\Gamma_{1}^{E^{(2)}_{2}},\Gamma_{2}^{E^{(2)}_{2}})=-1\)_._ 2. 
_The map determined by_ \[det^{S^{2}}(\otimes_{1\leq i<j\leq 2d}v_{i,j})=\sum_{(\Gamma_{1},\Gamma_{2}) \in\mathcal{P}^{h,cf}_{2}(K_{4})}\varepsilon_{2}^{S^{2}}((\Gamma_{1},\Gamma_{ 2}))M_{(\Gamma_{1},\Gamma_{2})}((v_{i,j})_{1\leq i<j\leq 4}),\] (2.2) _is linear,_ \(det^{S^{2}}\) _is unique up to a constant, and has the property that_ \(det^{S^{2}}(\otimes_{1\leq i<j\leq 4}v_{i,j})=0\) _if there exists_ \(1\leq x<y<z\leq 4\) _such that_ \(v_{x,y}=v_{x,z}=v_{y,z}\)_._ _Remark 2.21_.: In [4], the map \(\varepsilon_{2}^{S^{2}}\) was more closely studied, particular in relation to an \(S_{4}\times S_{2}\) action. These facts will not be relevant for our discussion here. #### 2.5.3. The Case \(d>2\) Now, we will use the previous construction to describe a construction for \(d=3\), as well as to present a conjecture for \(d>3\) proposed in [4]. Let \(d=3\) and let \(v_{i,j}\in V_{d}\) for \(1\leq i<j\leq 2d\), where \(v_{i,j}=\sum_{k=1}^{d}v_{i,j}^{k}e_{k}\). First, given an edge \(d\)-partition \((\Gamma_{1},\Gamma_{2},\ldots,\Gamma_{d})\), define the monomial \[M_{(\Gamma_{1},\Gamma_{2},\ldots,\Gamma_{d})}(\otimes_{1\leq i<j\leq 2d}v_{i,j})= \prod_{k=1}^{d}\prod_{(i,j)\in E(\Gamma_{k})}v_{i,j}^{k}. \tag{2.3}\] Then, we have the following proposition from [4]. **Proposition 2.7** ([4]).: _Let \(d=2\) or \(d=3\). Let \((\Gamma_{1}^{E_{4}^{(2)}},\Gamma_{2}^{E_{4}^{(2)}},\ldots,\Gamma_{d}^{E_{d}^{(2)}})\) be the edge \(d\)-partition associated with \(E_{d}^{(2)}\). For \(1\leq x<y<z\leq 2d\), let \((\Gamma_{1},\Gamma_{2},\ldots,\Gamma_{d})^{(x,y,z)}\) be the involution of \((\Gamma_{1},\Gamma_{2},\ldots,\Gamma_{d})\) given in Lemma 2.2._ 1. _There exists a unique map_ \(\varepsilon_{d}^{S^{2}}:\mathcal{P}_{d}^{h,cf}(K_{2d})\to\{\pm 1\}\) _such that_ \(\varepsilon_{d}^{S^{2}}((\Gamma_{1},\Gamma_{2},\ldots,\Gamma_{d})^{(x,y,z)})=- \varepsilon_{d}^{S^{2}}(\Gamma_{1},\Gamma_{2},\ldots,\Gamma_{d})\)_,_ \(\varepsilon_{d}^{S^{2}}\) _is unique, and_ \(\varepsilon_{d}^{S^{2}}(\Gamma_{1}^{E_{2}^{(2)}},\Gamma_{2}^{E_{d}^{(2)}})=(- 1)^{d+1}\)_._ 2. _The map determined by_ \[det^{S^{2}}(\otimes_{1\leq i<j\leq 2d}v_{i,j})=\sum_{(\Gamma_{1},\Gamma_{2}, \ldots,\Gamma_{d})\in\mathcal{P}_{d}^{h,cf}(K_{2d})}\varepsilon_{d}^{S^{2}}(( \Gamma_{1},\Gamma_{2},\ldots,\Gamma_{d}))M_{(\Gamma_{1},\Gamma_{2},\ldots, \Gamma_{d})}(\otimes_{1\leq i<j\leq 2d}v_{i,j}).\] (2.4) _is linear, nontrivial, unique up to a constant, and has the property that_ \(det^{S^{2}}(\otimes_{1\leq i<j\leq 2d}v_{i,j})=0\) _if there exists_ \(1\leq x<y<z\leq 2d\) _such that_ \(v_{x,y}=v_{x,z}=v_{y,z}\)_._ Now, the monomial given in Equation 2.3 can be defined for \(d>3\). However, Proposition 2.7 is not known for \(d>3\). With this in mind, there was the following conjecture in [4] (stated differently in that paper). **Conjecture 2.22**.: Let \(d>1\) and \((\Gamma_{1}^{E_{4}^{(2)}},\Gamma_{2}^{E_{4}^{(2)}},\ldots,\Gamma_{d}^{E_{d}^{( 2)}})\) be the edge \(d\)-partition associated with \(E_{d}^{(2)}\). For \(1\leq x<y<z\leq 2d\), let \((\Gamma_{1},\Gamma_{2},\ldots,\Gamma_{d})^{(x,y,z)}\) be the involution of \((\Gamma_{1},\Gamma_{2},\ldots,\Gamma_{d})\) given in Lemma 2.2. 1. There exists a map \(\varepsilon_{d}^{S^{2}}:\mathcal{P}_{d}^{h,cf}(K_{2d})\to\{\pm 1\}\) such that \[\varepsilon_{d}^{S^{2}}(E_{d}^{(2)})=(-1)^{d+1}\] and \[\varepsilon_{d}^{S^{2}}((\Gamma_{1},\Gamma_{2},\ldots,\Gamma_{d})^{(x,y,z)})=- \varepsilon_{d}^{S^{2}}((\Gamma_{1},\Gamma_{2},\ldots,\Gamma_{d}))\] for all \(1\leq x<y<z\leq 2d\). 2. 
For all \(d>1\), the map \[det^{S^{2}}(\otimes_{1\leq i<j\leq 2d}v_{i,j})=\sum_{(\Gamma_{1},\Gamma_{2}, \ldots,\Gamma_{d})\in\mathcal{P}_{d}^{h,cf}(K_{2d})}\varepsilon_{d}^{S^{2}}(( \Gamma_{1},\Gamma_{2},\ldots,\Gamma_{d}))M_{(\Gamma_{1},\Gamma_{2},\ldots, \Gamma_{d})}(\otimes_{1\leq i<j\leq 2d}v_{i,j}),\] is linear, nontrivial, and has the property that \(det^{S^{2}}(\otimes_{1\leq i<j\leq 2d}v_{i,j})=0\) if there exists \(1\leq x<y<z\leq 2d\) such that \(v_{x,y}=v_{x,z}=v_{y,z}\). _Remark 2.23_.: Although it was not stated in Conjecture 2.22, it was also conjectured that \(\varepsilon_{d}^{S^{2}}\) and \(det^{S^{2}}\) are unique up to a constant. Before we finish this section, notice that by using the construction behind Theorem 2.5, we can compute \(det^{S^{2}}(E_{d}^{(2)})\) for small cases of \(d\) (this is a different but equivalent construction than what is given in Propositions 2.2 and 2.4). More recently, the cases of \(d\leq 10\) were computed in the Appendix of [3]. **Proposition 2.8** ([3]).: _Let \(1\leq d\leq 10\). Then, \(det^{S^{2}}(E_{d}^{(2)})=(-1)^{d+1}\)._ Notice that the result from Proposition 2.8 matches what would be expected if Conjecture 2.22 holds. _Remark 2.24_.: As a last remark for this section, we are relying on Conjecture 2.22 rather than the construction of \(det^{S^{2}}\) for all \(d>1\) presented in [6] because it will be more convenient for later computations. With that stated, keep in mind that for \(d>4\), some of the results presented are dependent on a conjecture which has not been proven for all \(d\). Alternatively, one could show that the vector space \(\Lambda_{V_{d}}^{S^{2}}[2d]\), presented in [7], is generated by the image of \(E_{d}^{(2)}\) for all \(d>1\). We will not be discussing the specifics in this paper, but more information regarding this can be seen in [2]. ## 3. \(S^{2}\)-Triangular Matrices In this section, we introduce a new type of matrix based on Triangular matrices and \(E_{d}^{(2)}\). ### \(S^{2}\)-Diagonal of a Matrix First, we construct a generalization of the concept of the diagonal of a matrix. This definition will parallel Definition 2.10. **Definition 3.1**.: Let \(A\) be a \(d\times d(2d-1)\) matrix with columns \(a_{i,j}\) for \(1\leq i<j\leq 2d\) ordered by the dictionary order and entries \(a_{i,j}^{k}\) in the \((i,j)\) column and \(k^{th}\) row. Let \(I_{d}^{S^{2}}\) be the matrix form of \(E_{d}^{(2)}\) ordered by the dictionary order with entries \(e_{i,j}^{k}\) in the \((i,j)\) column and \(k^{th}\) row. Define the \(S^{2}\) diagonal indicator set by \(DI_{d}[2]=\{(i,j,k)|e_{i,j}^{k}\neq 0\}\) and the \(S^{2}\)-diagonal of \(A\) as the set \(diag^{S^{2}}(A)=\{a_{i,j}^{k}|(i,j,k)\in DI_{d}[2]\}\). We will write \(DI_{d}[2]\) as \(DI[2]\) if \(d\) is understood. _Remark 3.2_.: Notice that by construction of \(E_{d}^{(2)}\), for each \(1\leq i<j\leq 2d\), there exists a unique \(1\leq k\leq d\) such that \((i,j,k)\in DI[2]\). **Example 3.3**.: Now, we give two examples and their \(S^{2}\)-diagonal. For both of these examples, we will take \(d=3\), so we will be looking at \(3\times 15\) matrices. First, for these examples, we will need \(DI[2]\). By using the definition of \(E_{3}^{(2)}\), we have the following matrix for \(I_{3}^{S^{2}}\), the matrix form of \(E_{3}^{(2)}\). Here the columns here are \((1,2),(1,3),(1,4),(1,5),(1,6),(2,3)\), \((2,4),(2,5),(2,6),(3,4),(3,5),(3,6)\), \((4,5),(4,6)\), and \((5,6)\), which we have listed above the respective columns. 
\[I_{3}^{S^{2}}=\begin{pmatrix}(1,2)&(1,3)&(1,4)&(1,5)&(1,6)&(2,3)&(2,4)&(2,5)&(2,6)&(3,4)&(3,5)&(3,6)&(4,5)&(4,6)&(5,6)\\ 1&1&0&1&0&0&1&0&1&0&0&0&0&0&0\\ 0&0&1&0&0&1&0&0&0&1&1&0&0&1&0\\ 0&0&0&0&1&0&0&1&0&0&0&1&1&0&1\end{pmatrix}\]

Since this is the matrix form of \(E_{3}^{(2)}\), the set \(DI_{3}[2]=DI[2]\) is based on the non-zero entries of \(I_{3}^{S^{2}}\). Thus, we have \[DI[2]=\{(1,2,1),(1,3,1),(1,4,2),(1,5,1),(1,6,3),(2,3,2),(2,4,1),(2,5,3),(2,6,1),(3,4,2),(3,5,2),(3,6,3),(4,5,3),(4,6,2),(5,6,3)\}.\] For example, in the first column (the \((1,2)\) column), we see that the first row is nonzero, so \((1,2,1)\in DI[2]\). However, the other entries are zero, so neither \((1,2,2)\) nor \((1,2,3)\) is in \(DI[2]\). Similarly, the last column is \((5,6)\) and the only nonzero entry is in the third row, so \((5,6,3)\in DI[2]\).

Next, consider the \(S^{2}\)-diagonal \(diag^{S^{2}}(I_{3}^{S^{2}})\). This is determined by looking at all of the positions given in \(DI[2]\), ordering all of the columns as \((i,j)\) for \(1\leq i<j\leq 6\) using the dictionary order. For example, we know that \((2,5,3)\in DI[2]\), so \(e_{2,5}^{3}\), that is the element at the third row in column \((2,5)\), is in the \(S^{2}\)-diagonal. After looking at all of the positions given in \(DI[2]\), we get \(diag^{S^{2}}(I_{3}^{S^{2}})=\{1\}\).

Now, consider the example of the following matrix \(B=(b_{i,j}^{k})_{1\leq i<j\leq 2d,1\leq k\leq d}\): \[B=\begin{pmatrix}(1,2)&(1,3)&(1,4)&(1,5)&(1,6)&(2,3)&(2,4)&(2,5)&(2,6)&(3,4)&(3,5)&(3,6)&(4,5)&(4,6)&(5,6)\\ 1&1&0&1&b&c&d&0&1&0&0&0&0&0&m\\ a&0&0&0&0&1&e&0&0&1&1&0&0&1&0\\ 0&0&1&0&1&0&f&1&0&0&0&l&1&0&n\end{pmatrix}.\] Notice that not all of the nontrivial entries correspond to indices in \(DI[2]\). For example, \(b_{1,2}^{2}=a\), although \((1,2,2)\) is not in \(DI[2]\). Keeping this in mind, we get \(diag^{S^{2}}(B)=\{0,1,d,l,n\}\).
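The indicator set \(DI_{d}[2]\) and the \(S^{2}\)-diagonal can be computed mechanically. The short Python sketch below is ours, written only for illustration (it is not taken from [4] or [7]); it uses 1-based vertex labels, 0-based row and column indexing, and the rule of Definition 2.17, and it checks that \(diag^{S^{2}}(I_{3}^{S^{2}})=\{1\}\) as in Example 3.3.

```python
from itertools import combinations

def di2(d):
    """DI_d[2] as a map (i, j) -> k: the unique row k (1-based) with e_{i,j} = e_k,
    following Definition 2.17 with S_a = {2a-1, 2a}."""
    ind = {}
    for i, j in combinations(range(1, 2 * d + 1), 2):
        if (i + j) % 2 == 0:      # i + j even: e_a, where i lies in S_a
            ind[(i, j)] = (i + 1) // 2
        else:                     # i + j odd: e_b, where j lies in S_b
            ind[(i, j)] = (j + 1) // 2
    return ind

def s2_diagonal(A, d):
    """S^2-diagonal of a d x d(2d-1) matrix A whose columns are indexed by the
    pairs (i, j), 1 <= i < j <= 2d, in dictionary order (Definition 3.1)."""
    ind = di2(d)
    cols = list(combinations(range(1, 2 * d + 1), 2))
    return {A[ind[c] - 1][p] for p, c in enumerate(cols)}

# Matrix form of E_3^{(2)} from Example 3.3; its S^2-diagonal is {1}.
I3 = [[1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0],
      [0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0],
      [0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1]]
assert s2_diagonal(I3, 3) == {1}
```

The dictionary order of the column labels is exactly the order produced by `itertools.combinations`, which is why no explicit sorting is needed.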
### \(S^{2}\)-Triangular Matrices

Now, we will define our main construction.

**Definition 3.4**.: Let \(1\leq i<j\leq 2d\) and define \(ind(i,j)\) as the unique \(1\leq ind(i,j)\leq d\) such that \((i,j,ind(i,j))\in DI[2]\). Let \(A\) be a \(d\times d(2d-1)\) matrix with columns \(c_{i,j}\) for \(1\leq i<j\leq 2d\) ordered by the dictionary order. Then, we say \(A\) is \(S^{2}\)-Upper Triangular if for each \(1\leq i<j\leq 2d\) and \(1\leq k\leq d\), there exists \(\alpha_{i,j}^{k}\) such that \[c_{i,j}=\sum_{k=1}^{ind(i,j)}\alpha_{i,j}^{k}e_{k}.\] We say that \(A\) is \(S^{2}\)-Lower Triangular if for each \(1\leq i<j\leq 2d\) and \(1\leq k\leq d\), there exists \(\alpha_{i,j}^{k}\) such that \[c_{i,j}=\sum_{k=ind(i,j)}^{d}\alpha_{i,j}^{k}e_{k}.\] If \(A\) is either \(S^{2}\)-Upper Triangular or \(S^{2}\)-Lower Triangular, we say that \(A\) is an \(S^{2}\)-Triangular Matrix.

_Remark 3.5_.: Note that \(I_{d}^{S^{2}}\) is both an \(S^{2}\)-Upper Triangular Matrix and an \(S^{2}\)-Lower Triangular Matrix, taking \(\alpha_{i,j}^{k}=\delta_{k,ind(i,j)}\), where \(\delta_{m,n}\) is the Kronecker delta.

_Remark 3.6_.: There are noticeable parallels between the reformulation of the definition of Triangular Matrices in Definition 2.8 and \(S^{2}\)-Triangular Matrices in Definition 3.4. This can be extended to \(S^{r}\)-Upper Triangular Matrices by using \(E_{d}^{(r)}\), defined in [3].

Next, we give some examples.

**Example 3.7**.: First, consider the following matrix \(A\), where the indices in \(DI[2]\) are boxed. \[A=\begin{pmatrix}\framebox{a}&\framebox{b}&c&\framebox{e}&0&f&\framebox{1}&h&\framebox{1}&0&l&0&0&0&0\\ 0&0&\framebox{0}&0&0&\framebox{g}&0&0&0&\framebox{k}&\framebox{1}&m&n&\framebox{1}&p\\ 0&0&0&0&\framebox{0}&0&0&\framebox{1}&0&0&0&\framebox{1}&\framebox{1}&0&\framebox{1}\end{pmatrix}\] This matrix is an \(S^{2}\)-Upper Triangular Matrix, as all non-zero entries lie at or above the boxed entries in any given column. Next, consider the following matrix \(B\), where the indices in \(DI[2]\) are boxed. \[B=\begin{pmatrix}\framebox{a}&\framebox{b}&c&\framebox{e}&0&f&\framebox{1}&h&\framebox{1}&0&l&0&0&0&0\\ 0&0&\framebox{0}&0&0&\framebox{g}&0&0&0&\framebox{k}&\framebox{1}&m&n&\framebox{1}&p\\ 0&z&0&0&\framebox{0}&0&0&\framebox{1}&0&0&0&\framebox{1}&\framebox{1}&0&\framebox{1}\end{pmatrix}\] Notice that the entry at column \((1,3)\) and row \(3\) is nonzero, although \(ind(1,3)=1\). Therefore, \(B\) is not an \(S^{2}\)-Upper Triangular Matrix. Similarly, \(B\) is not an \(S^{2}\)-Lower Triangular Matrix since there is a nonzero entry in column \((1,4)\) and row \(1\), although \(ind(1,4)=2\).

_Remark 3.8_.: With a \(d\times d\) Upper Triangular Matrix \(A\), it is well known that if \(Ax=b\) and the diagonal entries of \(A\) are non-zero, then we can use back substitution to determine \(x\). In the case of an \(S^{2}\)-Upper Triangular Matrix \(A\), we can consider the system \(Ax=b\), where \(x=(x^{(1,2)},x^{(1,3)},\ldots,x^{(1,2d)},x^{(2,3)},\ldots,x^{(2d-1,2d)})\in V_{d(2d-1)}\) is labelled by the dictionary order and \(b=(b^{1},b^{2},\ldots,b^{d})\in V_{d}\). In this case, we know that certain \(x^{(i,j)}\) are linear combinations of each other, but cannot determine \(x^{(i,j)}\) exactly. For example, if we consider the matrix \(A\) given in the previous example, then \[x^{(2,5)}+x^{(3,6)}+x^{(4,5)}+x^{(5,6)}=b^{3},\] so we could write \(x^{(2,5)}\) as a linear combination of \(x^{(3,6)},x^{(4,5)}\), and \(x^{(5,6)}\), but we cannot completely determine \(x\). Notice that \((2,5)\), \((3,6)\), \((4,5)\), and \((5,6)\) are all edges of \(\Gamma_{3}\), where \((\Gamma_{1},\Gamma_{2},\Gamma_{3})\) is the edge \(3\)-partition associated with \(E_{3}^{(2)}\). However, note that if \(A\) is \(S^{2}\)-Upper Triangular and \(0\) is not in the \(S^{2}\)-diagonal of \(A\), then it follows that we can write \(x^{(2a-1,2a)}\) as a linear combination for \(1\leq a\leq d\). Namely, for each \(1\leq a\leq d\), there exists \(\alpha_{m,n}^{(2a-1,2a)}\in k\) for \(1\leq m<n\leq 2d\) such that \[x^{(2a-1,2a)}=\sum_{\begin{subarray}{c}(m,n)\in E(\Gamma_{k}),\ k\geq a\\ (m,n)\neq(2b-1,2b),\ b\geq a\end{subarray}}\alpha_{m,n}^{(2a-1,2a)}x^{(m,n)}.\]
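Definition 3.4 can be checked one column at a time: a matrix fails to be \(S^{2}\)-Upper Triangular exactly when some column \((i,j)\) has a nonzero entry strictly below row \(ind(i,j)\). The following Python sketch is again ours (not from the cited papers) and mirrors the reasoning of Example 3.7; the modified matrix in the snippet plays the same role as the matrix \(B\) above, with a single offending entry placed below the \(S^{2}\)-diagonal.

```python
from itertools import combinations
from copy import deepcopy

def ind(i, j):
    """The unique row of DI[2] in column (i, j), per Definition 2.17 (1-based)."""
    return (i + 1) // 2 if (i + j) % 2 == 0 else (j + 1) // 2

def is_s2_upper_triangular(A, d):
    """Definition 3.4: every nonzero entry of column (i, j) must lie in a row
    k <= ind(i, j). Columns are assumed to be in dictionary order."""
    cols = list(combinations(range(1, 2 * d + 1), 2))
    for p, (i, j) in enumerate(cols):
        for r in range(ind(i, j), d):   # 0-based rows strictly below ind(i, j)
            if A[r][p] != 0:
                return False
    return True

I3 = [[1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0],
      [0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0],
      [0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1]]
assert is_s2_upper_triangular(I3, 3)

# A nonzero entry in column (1,3) below row ind(1,3) = 1 breaks upper triangularity,
# just as the entry z does in the matrix B of Example 3.7.
B = deepcopy(I3)
B[2][1] = 1
assert not is_s2_upper_triangular(B, 3)
```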
### The Determinant of \(S^{2}\)-Upper Triangular Matrices

We will end this section by proving a result which parallels Proposition 2.2. First, we start with some lemmas.

**Lemma 3.9**.: _Let \(\Gamma^{E_{d}^{(2)}}=(\Gamma_{1},\Gamma_{2},\ldots,\Gamma_{d})\) be the edge \(d\)-partition associated with \(E_{d}^{(2)}\) and let \(A\) be a \(d\times d(2d-1)\) matrix with associated simple tensor \(\otimes_{1\leq i<j\leq 2d}(v_{i,j})\). Then,_ \[M_{\Gamma^{E_{d}^{(2)}}}(\otimes_{1\leq i<j\leq 2d}(v_{i,j}))=prod(diag^{S^{2}}(A)). \tag{3.1}\]

Proof.: For ease of notation, we will label \(\Gamma^{E_{d}^{(2)}}=\Gamma\). Notice that both terms appearing in Equation 3.1 are monomials. So, we will proceed by showing that they consist of products of the same terms.

First, let \(\otimes_{1\leq i<j\leq 2d}(v_{i,j})\) be the simple tensor associated with \(A\) and suppose that \(v_{i,j}^{k}\) is a term in \(M_{\Gamma}\). Then, by definition of \(M_{\Gamma}\), we know that the edge \((i,j)\) is in the subgraph \(\Gamma_{k}\) of the complete graph. However, this also means that \((i,j,k)\in DI[2]\) since \(\Gamma_{k}\) is from the edge \(d\)-partition associated with \(E_{d}^{(2)}\), so \(v_{i,j}^{k}\in diag^{S^{2}}(A)\). In particular, the terms appearing in \(M_{\Gamma}\) also appear in \(prod(diag^{S^{2}}(A))\).

Next, suppose that \(v_{x,y}^{z}\in diag^{S^{2}}(A)\). Then, we know that the edge \((x,y)\) lies on the graph \(\Gamma_{z}\) by definition of \(diag^{S^{2}}(A)\) since \(\Gamma=\Gamma^{E_{d}^{(2)}}\). In particular, this implies that \(v_{x,y}^{z}\) is also a term in the product \(M_{\Gamma}\). In particular, all of the terms of \(prod(diag^{S^{2}}(A))\) also appear in \(M_{\Gamma}\). Therefore, Equation 3.1 holds.

**Lemma 3.10**.: _If \(A\) is an \(S^{2}\)-Upper Triangular Matrix with associated simple tensor \(\otimes_{1\leq i<j\leq 2d}(v_{i,j})\) and \(\Gamma=(\Gamma_{1},\Gamma_{2},\ldots,\Gamma_{d})\) is a homogeneous cycle-free edge \(d\)-partition of the complete graph \(K_{2d}\), then either \(\Gamma\) is the edge \(d\)-partition associated with \(E_{d}^{(2)}\) or \(M_{\Gamma}(\otimes_{1\leq i<j\leq 2d}(v_{i,j}))=0\)._

Proof.: Let \(\otimes_{1\leq i<j\leq 2d}(v_{i,j})\) be the simple tensor associated with \(A\) and let \(\Gamma^{E_{d}^{(2)}}=(\Gamma_{1}^{E_{d}^{(2)}},\Gamma_{2}^{E_{d}^{(2)}},\ldots,\Gamma_{d}^{E_{d}^{(2)}})\) be the edge \(d\)-partition associated with \(E_{d}^{(2)}\). Consider the following set: \[S=\{(a,b,c)\,|\,(a,b)\in E(\Gamma_{c}),\,\text{but}\,\,(a,b)\notin E(\Gamma_{c}^{E_{d}^{(2)}})\}.\] Since both \(\Gamma\) and \(\Gamma^{E_{d}^{(2)}}\) are edge \(d\)-partitions of \(K_{2d}\), we know that all of their subgraphs have the same number of edges. Now, \(S\) is a finite set, so the maximum of \(\{c\,|\,(a,b,c)\in S\}\) exists. Also, in order for the edge \((a,b)\) to be in \(\Gamma_{c}\) for the maximal \(c\) value but not in \(\Gamma_{c}^{E_{d}^{(2)}}\), that implies that there is some \((a_{0},b_{0},c_{0})\) with \(c_{0}<c\) such that \((a_{0},b_{0})\notin E(\Gamma_{c_{0}})\) and \((a_{0},b_{0})\in E(\Gamma_{c_{0}}^{E_{d}^{(2)}})\). But, by the definition of \(S^{2}\)-Upper Triangular Matrices, we know that \[v_{i,j}=\sum_{k=1}^{ind(i,j)}v_{i,j}^{k}e_{k}.\]
Then,_ \[det^{S^{2}}(A)=det^{S^{2}}(E^{(2)}_{d})\cdot prod(diag^{S^{2}}(A)).\] Proof.: First, notice that since \(E^{(2)}_{d}\) is \(S^{2}\)-Upper Triangular, it follows by Lemma 3.10 that \[det^{S^{2}}(E^{(2)}_{d}) =\sum_{\Gamma\in\mathcal{P}^{h,cf}_{d}(K_{2d})}\varepsilon^{S^{ 2}}_{d}(\Gamma)M_{\Gamma}(\otimes_{1\leq i<j\leq 2d}(e_{i,j}))\] \[=\varepsilon^{S^{2}}_{d}(\Gamma^{E^{(2)}_{d}})M_{\Gamma^{E^{(2)}_ {d}}}(\otimes_{1\leq i<j\leq 2d}(e_{i,j})).\] Further, by Lemma 3.9, we know that \(M_{\Gamma^{E^{(2)}}}(\otimes_{1\leq i<j\leq 2d}(e_{i,j}))=prod(diag^{S^{2}}(E^{(2)} _{d})\). So, since \(DI[2]\) identifies all of the nonzero entries of \(E^{(2)}_{d}\), which are all \(1\) by construction, we know that \(det^{S^{2}}(E^{(2)}_{d})=\varepsilon^{S^{2}}_{d}(\Gamma^{E^{(2)}})\). Next, by Lemmas 3.9 and 3.10, \[det^{S^{2}}(A) =\sum_{\Gamma\in\mathcal{P}^{h,cf}_{d}(K_{2d})}\varepsilon^{S^{2 }}_{d}(\Gamma)M_{\Gamma}(\otimes_{1\leq i<j\leq 2d}(v_{i,j}))\] \[=\varepsilon^{S^{2}}_{d}(\Gamma^{E^{(2)}})M_{\Gamma^{E^{(2)}}}( \otimes_{1\leq i<j\leq 2d}(v_{i,j}))\] \[=det^{S^{2}}(E^{(2)}_{d})prod(diag^{S^{2}}(A)).\] since \(A\) is an \(S^{2}\)-Upper Triangular Matrix. _Remark 3.13_.: Note that similar proofs for Lemma 3.10 and Theorem 3.12 could be used for \(S^{2}\)-Lower Triangular Matrices. _Remark 3.14_.: Theorem 3.12 can be quickly shown using GNU Octave for \(d\leq 5\), which does not rely on proving Conjecture 2.22, since we know \(det^{S^{2}}(E^{(2)}_{d})\) for \(d\leq 10\) by Proposition 2.8. Due to the constraints of the symbolic package, we stopped the computations beyond this point. Lastly, we have the following corollary, which follows since we know Conjecture 2.22 is true for \(d=2\) or \(d=3\). **Corollary 3.15**.: Suppose that \(d=2\) or \(d=3\). Let \(A\) be an \(S^{2}\)-Upper Triangular Matrix over \(V_{d}\), let \(\otimes_{1\leq i<j\leq 2d}(v_{i,j})\) be the simple tensor associated with \(A\), and let the product of the \(S^{2}\)-diagonal of \(A\) be denoted by \(prod(diag^{S^{2}}(A))\). Then, \[det^{S^{2}}(A)=det^{S^{2}}(E_{d})\cdot prod(diag^{S^{2}}(A)).\] _Remark 3.16_.: Conjecture 2.22 is of current research interest, as seen in [2] and [4]. Notice that if Conjecture 2.22 can be shown for \(d>3\), then Corollary 3.15 will hold as well for \(d>3\) by Theorem 3.12. ## 4. The Algebra of \(S^{2}\)-Upper Triangular Matrices In this section, we will construct a \(k\)-algebra using \(S^{2}\)-Upper Triangular Matrices. ### The Algebra of \(d\times d(2d-1)\) Matrices and the Subalgebra \(U_{d}^{S^{2}}\) #### 4.1.1. The Case \(d=2\) First, suppose that \(d=2\). Recall that given \(m,n\geq 1\), the collection of \(m\times n\) matrices forms a vector space. With that in mind, we give the following definition. **Definition 4.1**.: Let \(Mat_{2}^{S^{2}}\) be the vector space of \(2\times 6\) matrices. 
Define a map \(\cdot:Mat_{2}^{S^{2}}\times Mat_{2}^{S^{2}}\to Mat_{2}^{S^{2}}\) by \[\begin{pmatrix}a_{1,2}&a_{1,3}&a_{1,4}&a_{2,3}&a_{2,4}&a_{3,4}\\ b_{1,2}&b_{1,3}&b_{1,4}&b_{2,3}&b_{2,4}&b_{3,4}\end{pmatrix}\cdot\begin{pmatrix} c_{1,2}&c_{1,3}&c_{1,4}&c_{2,3}&c_{2,4}&c_{3,4}\\ d_{1,2}&d_{1,3}&d_{1,4}&d_{2,3}&d_{2,4}&d_{3,4}\end{pmatrix} \tag{4.1}\] \[=\begin{pmatrix}a_{1,2}c_{1,2}+a_{3,4}d_{1,2}&a_{1,3}c_{1,3}+a_{2,3}d_{1,3}&a_{2,4}c_{1,4}+a_{1,4}d_{1,4}&a_{1,3}c_{2,3}+a_{2,3}d_{2,3}&a_{2,4}c _{2,4}+a_{1,4}d_{2,4}&a_{1,2}c_{3,4}+a_{3,4}d_{3,4}\\ b_{1,2}c_{1,2}+b_{3,4}d_{1,2}&b_{1,3}c_{1,3}+b_{2,3}d_{1,3}&b_{2,4}c_{1,4}+b_{1,4}d_{1,4}&b_{1,3}c_{2,3}+b_{2,3}d_{2,3}&b_{2,4}c_{2,4}+b_{1,4}d_{2,4}&b_{1,2}c _{3,4}+b_{3,4}d_{3,4}\end{pmatrix}\] _Remark 4.2_.: The multiplication given by Equation 4.1 generalizes multiplication of \(d\times d\) matrices. Namely, if we group columns \((1,2)\) with \((3,4)\), \((1,3)\) with \((2,3)\), and \((2,4)\) with \((1,4)\), Equation 4.1 is computed to matrix multiplication on these groupings. **Lemma 4.3**.: \(Mat_{2}^{S^{2}}\) _is a \(k\)-algebra with multiplication given by Equation 4.1._ The proof to Lemma 4.3 follows straightforwardly by verifying the associativity and distributively of Equation 4.1, as we know \(Mat_{2}^{S^{2}}\) is a vector space. Now, another interesting question is about the unit of \(Mat_{2}^{S^{2}}\). Since the matrix associated to \(E_{2}^{(2)}\) fits into a generalization of the identity matrix, this appears to be a good candidate, which is established in the following lemma. **Lemma 4.4**.: _The multiplicative unit of \(Mat_{2}^{S^{2}}\) is the matrix associated to \(E_{2}^{(2)}\)._ Proof.: Let \(I_{2}^{S^{2}}\) be the matrix associated to \(E_{2}^{(2)}\). We will show that \(I_{2}^{S^{2}}\) is a multiplicative identity with respect to left multiplication, as the computation for right multiplication is similar. \[\begin{pmatrix}1&1&0&0&1&0\\ 0&0&1&1&0&1\end{pmatrix}\cdot\begin{pmatrix}a_{1,2}&a_{1,3}&a_{1,4}&a_{2,3}&a_ {2,4}&a_{3,4}\\ b_{1,2}&b_{1,3}&b_{1,4}&b_{2,3}&b_{2,4}&b_{3,4}\end{pmatrix}\] \[=\begin{pmatrix}1\cdot a_{1,2}+0\cdot b_{1,2}&1\cdot a_{1,3}+0 \cdot b_{1,3}&1\cdot a_{1,4}+0\cdot b_{1,4}&1\cdot a_{2,3}+0\cdot b_{2,3}&1\cdot a _{2,4}+0\cdot b_{2,4}&1\cdot a_{3,4}+0\cdot b_{3,4}\\ 0\cdot a_{1,2}+1\cdot b_{1,2}&0\cdot a_{1,3}+1\cdot b_{1,3}&0\cdot a_{1,4}+1 \cdot b_{1,4}&0\cdot a_{2,3}+1\cdot b_{2,3}&0\cdot a_{2,4}+1\cdot b_{2,4}&0 \cdot a_{3,4}+1\cdot b_{3,4}\end{pmatrix}\] \[=\begin{pmatrix}a_{1,2}&a_{1,3}&a_{1,4}&a_{2,3}&a_{2,4}&a_{3,4}\\ b_{1,2}&b_{1,3}&b_{1,4}&b_{2,3}&b_{2,4}&b_{3,4}\end{pmatrix}\] **Proposition 4.1**.: _Let \(U_{2}^{S^{2}}\) be the collection of \(2\times 6\)\(S^{2}\)-Upper Triangular Matrices. Then, \(U_{2}^{S^{2}}\) is a subalgebra of \(Mat_{2}^{S^{2}}\)._ Proof.: It suffices to show that \(U_{2}^{S^{2}}\) is closed under multiplication, which comes from direct computation, as the \(U_{2}^{S^{2}}\) is a vector subspace of \(Mat_{2}^{S^{2}}\). We box the \(S^{2}\)-diagonal of the resulting matrix to verify it is an \(S^{2}\)-Upper Triangular Matrix. \[\begin{pmatrix}a&b&c&e&g&h\\ 0&0&d&f&0&i\end{pmatrix}\cdot\begin{pmatrix}j&k&m&p&r&s\\ 0&0&n&q&0&t\end{pmatrix}\\ =\begin{pmatrix}\boxed{aj}&\boxed{bs}&gm+cd&bp+eq&\boxed{gr}& as+ht\\ 0&0&\boxed{dn}&\boxed{fq}&0&\boxed{it}\end{pmatrix}\] #### 4.1.2. The Case \(d>2\) Now, we will construct \(U_{d}^{S^{2}}\) for \(d>2\). 
**Definition 4.5**.: Let A be a \(d\times d(2d-1)\) matrix associated with the simple tensor \(\otimes_{1\leq i<j\leq 2d}(c_{i,j})\). Let \(1\leq n\leq 2d-2\) and let \((\Gamma_{1},\Gamma_{2},\ldots,\Gamma_{d})\) be the edge \(d\)-partition associated with \(E_{d}^{(2)}\). For \(1\leq a\leq d\), let \(u_{a}\) be the vector in \(\{c_{i,j}|1\leq i<j\leq 2d\}\) such that \((i,j)\) is a left leg of the twin star graph \(TS_{d}(2a-1,2a)=\Gamma_{a}\) with leg number \(n\). Take \(A_{n}^{L}\) be the matrix associated with \(u_{1}\otimes u_{2}\otimes\ldots\otimes u_{d}\). Similarly, for \(1\leq b\leq d\), let \(v_{b}\) be the vector in \(\{c_{i,j}|1\leq i<j\leq 2d\}\) such that \((i,j)\) is a right leg of the graph \(TS_{d}(2b-1,2b)=\Gamma_{b}\) with leg number \(n\). Take \(A_{n}^{R}\) to be the matrix associated with \(v_{1}\otimes v_{2}\otimes\ldots\otimes v_{d}\). Lastly, let \(A^{C}\) be the matrix associated with \(c_{1,2}\otimes c_{3,4}\otimes c_{5,6}\otimes\ldots\otimes c_{2d-1,2d}\). We call \(A_{n}^{L}\), \(A_{n}^{R}\), and \(A^{C}\) the leg submatrices of \(A\). _Remark 4.6_.: Notice that the leg submatrices of \(A\) are \(d\times d\) matrices. Further, the leg submatrices of \(A\) are triangular if and only if \(A\) is \(S^{2}\)-Triangular. **Example 4.7**.: Now, we will give an example of some leg submatrices. First, take the matrix \(B\) previously introduced as \[B=\begin{array}{cccccccccccccccc}(1,2)&(1,3)&(1,4)&(1,5)&(1,6)&(2,3)&(2,4)& (2,5)&(2,6)&(3,4)&(3,5)&(3,6)&(4,5)&(4,6)&(5,6)\\ 1&1&0&1&b&c&d&0&1&0&0&0&0&0&m\\ a&0&0&0&0&1&e&0&0&1&1&0&0&1&0\\ 0&0&1&0&1&0&f&1&0&0&0&l&1&0&n\end{array}\end{array}\] First, consider \(B^{C}\). This is the submatrix given by columns \((1,2)\), \((3,4)\), and \((5,6)\). \[B^{C}=\begin{pmatrix}1&0&m\\ \alpha&1&0\\ 0&0&n\end{pmatrix}\] Next, consider \(B_{2}^{R}\). For this, we want to find the edges with leg number two that belong to \(R_{3}(2a-1,2a)\) for some \(1\leq a\leq d\). For example, we know that \(R_{3}(1,2)=\{(2,4),(2,6)\}\), \(R_{3}(3,4)=\{(1,4),(4,6)\}\), and \(R_{3}(5,6)=\{(1,6),(3,6)\}\). So, \(A_{2}^{R}\) consists of the columns of \(B\) labelled \((2,6)\), \((4,6)\), and \((3,6)\). \[B_{2}^{R}=\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&0&l\end{pmatrix}\] Lastly, consider \(B_{1}^{L}\). For this, we want to find the edges with leg number one that belong to \(L_{3}(2a-1,2a)\) for some \(1\leq a\leq d\). We know that \(L_{3}(1,2)=\{(1,3),(1,5)\}\), \(L_{3}(3,4)=\{(2,3),(3,5)\}\), and \(L_{3}(5,6)=\{(2,5),(4,5)\}\). So, \(B_{1}^{L}\) consists of the columns of \(A\) labelled \((1,3)\), \((2,3)\), and \((2,5)\). \[B_{1}^{L}=\begin{pmatrix}1&e&0\\ 0&1&0\\ 0&0&1\end{pmatrix}\] Next, we give a new operation that generalizes what was previously constructed in Equation 4.1. **Definition 4.8**.: Let \((\Gamma_{1},\Gamma_{2},\ldots,\Gamma_{d})\) be the edge \(d\)-partition associated with \(E_{d}^{(2)}\). Let \(A\) be a \(d\times d(2d-1)\) matrix with associated simple tensor \(\otimes_{1\leq i<j\leq 2d}(a_{i,j})\). Let \(A_{i}^{L}\), \(A_{i}^{R}\), and \(A^{C}\) be the leg submatrices for \(1\leq i\leq 2d-2\). Similarly, let \(B\) be a \(d\times d(2d-1)\) matrix with associated simple tensor \(\otimes_{1\leq i<j\leq 2d}(b_{i,j})\). Let \(B_{i}^{L}\), \(B_{i}^{R}\), and \(B^{C}\) be the leg submatrices for \(1\leq i\leq 2d-2\). Define the matrix \(A\odot B\) with associated simple tensor \(\otimes_{1\leq i<j\leq 2d}(v_{i,j})\) as follows. Let \(1\leq m<n\leq 2d\) with \((m,n)\) an edge in \(\Gamma_{a}\) for \(1\leq a\leq d\) and leg number \(l_{0}\). 
If \((m,n)\) is a left leg of \(\Gamma_{a}\), then \(v_{i,j}\) is given by the \(a^{\text{th}}\) column of the matrix \(A_{l_{0}}^{L}\cdot B_{l_{0}}^{L}\), where the multiplication here is the usual matrix multiplication. If \((m,n)\) is a right leg of \(\Gamma_{a}\), then \(v_{i,j}\) is given by the \(a^{\text{th}}\) column of the matrix \(A_{l_{0}}^{R}\cdot B_{l_{0}}^{R}\), where the multiplication is the usual matrix multiplication. Lastly, if \((m,n)=(2a-1,2a)\), then \(v_{i,j}\) is given by the \(a^{\text{th}}\) column of the matrix \(A^{C}\cdot B^{C}\), where the multiplication is the usual matrix multiplication. We will call the operation \(\odot\) the Leg Identifying Multiplication (LIM) of \(A\) and \(B\). _Remark 4.9_.: Leg Identifying Multiplication is the terminology used here as it involves grouping together columns with the same leg number and are either all in \(L_{d}\), \(R_{d}\), or \(C_{d}\). _Remark 4.10_.: Here we will maintain a distinction between \(\cdot\) for the usual matrix multiplication and \(\odot\) for LIM. **Example 4.11**.: Now, we will give two examples. First, we will multiply \(I_{3}^{S^{2}}\odot B\) using LIM, where \[B=\ \begin{pmatrix}(1,2)&(1,3)&(1,4)&(1,5)&(1,6)&(2,3)&(2,4)&(2,5)&(2,6)&(3,4)&(3,5)&(3,6)&(4,5)&(4,6)&(5,6)\\ 1&1&0&1&b&c&d&0&1&0&0&0&0&0&m\\ a&0&0&0&0&1&e&0&0&1&1&0&0&1&0\\ 0&0&1&0&1&0&f&1&0&0&0&l&1&0&n\end{pmatrix}\.\] Notice that the leg submatrix \((I_{3}^{S^{2}})^{C}\) is the identity matrix \(I_{3}\). In particular, \(I_{3}\cdot B^{C}=B^{C}\). This means that since we have the vector \(\begin{pmatrix}1\\ \alpha\\ 0\end{pmatrix}\) in the first column of \((I_{3}^{S^{2}})^{C}\cdot B^{C}\), which is labelled \((1,2)\), the \((1,2)\) column of \(I_{3}^{S^{2}}\odot B\) is given by \(\begin{pmatrix}1\\ \alpha\\ 0\end{pmatrix}\). We could continue this computation to verify \(I_{3}^{S^{2}}\odot B=B\). Next, we will multiply \(A\otimes B\), where \(A\) is given by \[A=\ \begin{pmatrix}(1,2)&(1,3)&(1,4)&(1,5)&(1,6)&(2,3)&(2,4)&(2,5)&(2,6)&(3,4)&( 3,5)&(3,6)&(4,5)&(4,6)&(5,6)\\ a&b&c&e&0&f&g&0&1&h&0&0&0&0&l\\ a&0&m&0&0&1&q&0&e&1&1&0&k&1&0\\ 0&0&1&0&1&0&f&1&0&0&0&l&1&z&n\end{pmatrix}\.\] First, notice that \[A^{C}=\begin{pmatrix}a&h&l\\ a&1&0\\ 0&0&n\end{pmatrix},\] and \[B^{C}=\begin{pmatrix}1&0&m\\ a&1&0\\ 0&0&n\end{pmatrix}.\] Therefore, \[A^{C}\cdot B^{C}=\begin{pmatrix}a+ah&h&am+ln\\ a^{2}+a&ah+1&al\\ 0&0&n^{2}\end{pmatrix}.\] Now, we can use \(A^{C}\cdot B^{C}\) for columns \((1,2),(3,4)\), and \((5,6)\) of \(A\odot B\) \[A\odot B=\ \begin{pmatrix}(1,2)&(1,3)&(1,4)&(1,5)&(1,6)&(2,3)&(2,4)&(2,5)&(2,6)&( 3,4)&(3,5)&(3,6)&(4,5)&(5,6)\\ a+h&a^{2}+a&&&&&ah+1&&&am+ln\\ a^{2}&0&1&&&a^{2}\end{pmatrix}\.\] Similarly, we can consider \(A_{1}^{L}\) and \(B_{1}^{L}\). Remember that the first left legs are the edges \((1,3)\), \((2,3)\), and \((2,5)\). So, using these columns, we get \[A_{1}^{L}=\begin{pmatrix}b&f&0\\ 0&1&0\\ 0&0&1\end{pmatrix}\] \[B_{1}^{L}=\begin{pmatrix}1&c&0\\ 0&1&0\\ 0&0&1\end{pmatrix}.\] Thus, \[A_{1}^{L}\cdot B_{1}^{L}=\begin{pmatrix}b&bc+f&0\\ 0&1&0\\ 0&0&1\end{pmatrix}\] which allows us to fill in positions \((1,3)\), \((2,3)\), and \((2,5)\) of \(A\odot B\). \[A\odot B=\begin{array}{rccccccccc}(1,2)&(1,3)&(1,4)&(1,5)&(1,6)&(2,3)&(2,4)& (2,5)&(2,6)&(3,4)&(3,5)&(3,6)&(4,5)&(4,6)&(5,6)\\ &a+h&b&b&b&b+f&0&h&\\ a^{2}+a&0&1&0&ah+1&&a^{l}&\\ 0&0&0&1&0&0&a^{2}\end{array}\end{array}\] Similarly, the other leg submatrices can be used to obtain the other entries in \(A\odot B\). 
\[A\odot B=\begin{array}{rccccccccccccc}(1,2)&(1,3)&(1,4)&(1,5)&(1,6)&(2,3)&(2, 4)&(2,5)&(2,6)&(3,4)&(3,5)&(3,6)&(4,5)&(6,5)\\ &a+h&b&0&e&b&b&c+f&g+c&0&1&h&0&0&0&am+ln\\ a^{2}+a&0&0&0&b&b&d+c=m&0&e&ah+1&1&0&e&1&a^{l}\\ 0&0&1&0&bf+1&0&d^{l}+e+f&1&0&0&0&l^{2}&1&z&n^{2}\end{array}\end{array}\] Now, notice that it follows directly that we have the following parallels to Lemmas 4.3 and 4.4. **Lemma 4.12**.: _Let \(Mat_{d}^{S^{2}}\) be the collection of \(d\times d(2d-1)\) matrices. Then, \(Mat_{d}^{S^{2}}\) is a k-algebra with multiplication given by LIM (Definition 4.8) and the matrix associated with \(E_{d}^{(2)}\) is the multiplicative identity._ Also, we can extend Proposition 4.1 as follows. **Proposition 4.2**.: _Let \(U_{d}^{S^{2}}\) be the collection of \(d\times d(2d-1)\)\(S^{2}\)-Upper Triangular Matrices. Then, \(U_{d}^{S^{2}}\) is a subalgebra of \(Mat_{d}^{S^{2}}\)._ Notice that Proposition 4.2 is easily verified directly, as it can be verified that the leg submatrices of \(S^{2}\)-Upper Triangular Matrices are Upper Triangular Matrices and Upper Triangular Matrices are closed under matrix multiplication. ### \(S^{2}\)-Lu-Decomposition Lastly in this section, we will develop a parallel result to Proposition 2.1. **Proposition 4.3**.: _Let \(A\) be a \(d\times d(2d-1)\) matrix and suppose that all of the leg submatrices have nonzero leading principal minors. Then, there exists an \(S^{2}\)-Lower Triangular Matrix \(L\) and an \(S^{2}\)-Upper Triangular Matrix \(U\) such that \(A=LU\)._ Proof.: This proposition follows immediately from the definition of the LIM and construction of leg submatrices. We will show this explicitly for \(d=2\). Let \(A\) be given by \[A=\begin{pmatrix}a_{1,2}&a_{1,3}&a_{1,4}&a_{2,3}&a_{2,4}&a_{3,4}\\ b_{1,2}&b_{1,3}&b_{1,4}&b_{2,3}&b_{2,4}&b_{3,4}\end{pmatrix}.\] Then, we can construct matrices \(L\) and \(U\) by \[L=\begin{pmatrix}\young(1)&\young(1)&0&0&\young(1)&0\\ \young(1)&\young(1)&\young(2)&\young(1)\end{pmatrix}\] and \[U=\begin{pmatrix}\young(a_{1,2})&\young(a_{1,3})&a_{1,4}&a_{2,3}&\young(a_{2,4} )&a_{3,4}\\ \hline 0&0&\young(b_{1,4}-\frac{b_{2,4}}{a_{2,4}}a_{1,4})&\young(b_{2,3}-\frac{b_{1, 3}}{a_{1,3}}a_{2,3})&0&\young(b_{3,4}-\frac{b_{1,2}}{a_{1,2}}a_{3,4})\end{pmatrix},\] where the \(S^{2}\)-diagonal is boxed for clarity. Notice that these matrices are \(S^{2}\)-Lower Triangular and \(S^{2}\)-Upper Triangular, respectively. These matrices are constructed from the corresponding leg matrices. For example, for \(A_{1}^{L}\), which corresponds to columns \((1,3)\) and \((2,3)\), we have the matrix \[A_{1}^{L}=\begin{pmatrix}a_{1,3}&a_{2,3}\\ b_{1,3}&b_{2,3}\end{pmatrix}\] which decomposes into matrices \(U_{1}^{L}\) and \(L_{1}^{L}\) by Proposition 2.1 since \(A_{1}^{L}\) has nonzero leading principal minors. In particular, \[L_{1}^{L}=\begin{pmatrix}1&0\\ \frac{b_{1,3}}{a_{1,3}}&1\end{pmatrix}\] and \[U_{1}^{L}=\begin{pmatrix}a_{1,3}&a_{2,3}\\ 0&b_{2,3}-\frac{b_{1,3}}{a_{1,3}}a_{2,3}\end{pmatrix}\] by direct computation. Using \(L_{1}^{L}\) and \(U_{1}^{L}\), we can construct columns \((1,3)\) and \((2,3)\) of \(L\) and \(U\) directly. The other columns are similar. 
Therefore, by using Equation 4.1, we have \[LU =\begin{pmatrix}1&1&0&0&1&0\\ \frac{b_{1,2}}{a_{1,2}}&\frac{b_{1,3}}{a_{1,3}}&1&1&\frac{b_{2,4}}{a_{2,4}}&1 \end{pmatrix}\cdot\begin{pmatrix}a_{1,2}&a_{1,3}&a_{1,4}&a_{2,3}&a_{2,4}&a_{3,4} \\ 0&0&b_{1,4}-\frac{b_{2,4}}{a_{2,4}}a_{1,4}&b_{2,3}-\frac{b_{1,3}}{a_{1,3}}a_{2,3}&0&b_{3,4}-\frac{b_{1,2}}{a_{1,2}}a_{3,4}\end{pmatrix}\] \[=\begin{pmatrix}a_{1,2}&a_{1,3}&a_{1,4}&a_{2,3}&a_{2,4}&a_{3,4} \\ b_{1,2}&b_{1,3}&\frac{b_{2,4}}{a_{2,4}}a_{1,4}+b_{1,4}-\frac{b_{2,4}}{a_{2,4} }a_{1,4}&\frac{b_{1,3}}{a_{1,3}}a_{2,3}+b_{2,3}-\frac{b_{1,3}}{a_{1,3}}a_{2,3}& b_{2,4}&\frac{b_{1,2}}{a_{1,2}}a_{3,4}+b_{3,4}-\frac{b_{1,2}}{a_{1,2}}a_{3,4} \end{pmatrix}\] \[=A.\] ## Acknowledgment We thank Mihai Staic for his help related to the code for finding \(det^{S^{2}}(A)\) for the case of \(d\leq 5\).
[4] に基づいて、$S^2$-上三角行列と$S^2$-下三角行列を定義します。これらは、それぞれ上三角行列と下三角行列の一般化である $d\times d(2d-1)$ 行列です。その後、$S^2$-上三角行列の各要素の積が行列の要素の積になるという性質は、我々の構築により一般化されます。さらに、$S^2$-上三角行列の代数とその$S^2$-下三角行列と$S^2$-上三角行列のLU分解のための条件を導出します。
2309.04949
A multiple k-means cluster ensemble framework for clustering citation trajectories
Citation maturity time varies for different articles. However, the impact of all articles is measured in a fixed window. Clustering their citation trajectories helps understand the knowledge diffusion process and reveals that not all articles gain immediate success after publication. Moreover, clustering trajectories is necessary for paper impact recommendation algorithms. It is a challenging problem because citation time series exhibit significant variability due to non-linear and non-stationary characteristics. Prior works propose a set of arbitrary thresholds and a fixed rule-based approach. All methods are primarily parameter-dependent. Consequently, it leads to inconsistencies while defining similar trajectories and ambiguities regarding their specific number. Most studies only capture extreme trajectories. Thus, a generalised clustering framework is required. This paper proposes a feature-based multiple k-means cluster ensemble framework. 1,95,783 and 41,732 well-cited articles from the Microsoft Academic Graph data are considered for clustering short-term (10-year) and long-term (30-year) trajectories, respectively. It has linear run time. Four distinct trajectories are obtained: Early Rise-Rapid Decline (2.2%), Early Rise-Slow Decline (45%), Delayed Rise-No Decline (53%), and Delayed Rise-Slow Decline (0.8%). Individual trajectory differences for two different spans are studied. Most papers exhibit Early Rise-Slow Decline and Delayed Rise-No Decline patterns. The growth and decay times, cumulative citation distribution, and peak characteristics of individual trajectories are redefined empirically. A detailed comparative study reveals our proposed methodology can detect all distinct trajectory classes.
Joyita Chakraborty, Dinesh K. Pradhan, Subrata Nandi
2023-09-10T07:10:31
http://arxiv.org/abs/2309.04949v1
# A multiple k-means cluster ensemble framework for clustering citation trajectories

###### Abstract

Citation maturity time varies for different articles. However, the impact of all articles is measured in a fixed window (2-5 years). Clustering their citation trajectories helps understand the knowledge diffusion process and reveals that not all articles gain immediate success after publication. Moreover, clustering trajectories is necessary for paper impact recommendation algorithms. It is a challenging problem because citation time series exhibit significant variability due to non-linear and non-stationary characteristics. Prior works propose a set of arbitrary thresholds and a fixed rule-based approach. All methods are primarily parameter-dependent. Consequently, it leads to inconsistencies while defining similar trajectories and ambiguities regarding their specific number. Most studies only capture extreme trajectories. Thus, a generalized clustering framework is required. This paper proposes a _feature-based multiple k-means cluster ensemble framework_. Multiple learners are trained for evaluating the credibility of class labels, unlike single clustering algorithms. 1,95,783 and 41,732 well-cited articles from the Microsoft Academic Graph data are considered for clustering short-term (10-year) and long-term (30-year) trajectories, respectively. It has linear run-time. Four distinct trajectories are obtained- _Early Rise-Rapid Decline (ER-RD)_ (2.2%), _Early Rise-Slow Decline (ER-SD)_ (45%), _Delayed Rise-No Decline (DR-ND)_ (53%), and _Delayed Rise-Slow Decline (DR-SD)_ (0.8%). Individual trajectory differences for two different spans are studied. Most papers exhibit _ER-SD_ and _DR-ND_ patterns. The growth and decay times, cumulative citation distribution, and peak characteristics of individual trajectories are re-defined empirically. A detailed comparative study reveals our proposed methodology can detect all distinct trajectory classes.

keywords: Clustering citation trajectories, time-series clustering, unsupervised machine learning, k-means, cluster ensemble

## 1 Introduction

A citation trajectory represents the time-series distribution of annual citations received by a paper [1]. The other terms used for a citation trajectory are 'citation curve,' 'citation pattern,' 'citation histories,' and 'citation time series.' Clustering citation trajectories refers to grouping papers with similar shapes or identical patterns in their citation life cycle [2]. It is a fundamental source of time and topic-correlated information in scholarly networks. Thus, it can capture the knowledge diffusion process [3]. Clustering trajectories can identify hidden patterns such as how some information gets early attention (pre-mature discovery), how some information receives attention for a short period and fades out soon, how some information gets attention and remains relevant indefinitely, how some information gets unnoticed and gets delayed attention (delayed recognition) [1], etc. Besides, it is crucial in paper impact recommendation algorithms for measuring similarity distances [4].

Traditional research evaluation metrics calculate total or average citation counts accumulated within a specific time window [5; 6]. However, they can not truly capture the time-varying changes in a publication's impact. We present an example to illustrate this.

Figure 1: **Citation trajectories are plotted for two randomly chosen papers, each from four fields–Medicine, Computer Science, Chemistry, and Bio-Chemistry.**
The citation trajectories of two randomly chosen papers from four fields are plotted in figure 1. Both articles from each field in the Microsoft Academic Graph (MAG) dataset [7] were published around the same time and got equal citations. Papers from the medicine, computer science, chemistry, and biochemistry fields received 2,593, 1,959, 1,166, and 1,607 citations in around 50 years' time, respectively. Comparing early citations of two articles in the medicine field, we find that paper B is more likely to achieve a higher impact in the future than paper A. However, paper A suddenly jumps to receive greater than 50 citations annually after 40 years of publication. Although both papers receive the same total and average citations, there is considerable variability in their temporal trajectories. Thus, different articles differ significantly in their citation maturity times. Clustering helps in empirically understanding the maturity cycles (growth and decay) of individual trajectory classes. It will also help in cross-discipline evaluation, where some fields may require more time to grow than others [8].

It is a challenging problem because citation trajectories exhibit non-linear and non-stationary properties when observed on a time scale [2; 9]. Further, exponential growth in bibliographic databases has added to this variance. Consequently, Zamani et al. [10] mathematically prove that citation trajectories diffuse anomalously, and their variance varies proportional to \(t^{2H}\) where \(H=1/2\). Moreover, S. Baumgartner and L. Leydesdorff [11] report that a fifth-order polynomial fits citation trajectories of articles over 16 years. Thus, it is a more complex phenomenon than expected. Next, a comprehensive survey of existing literature is presented.

### Background

In sub-section 1.1, a detailed review of the clustering literature is discussed chronologically. The clustering problem has been roughly studied since the 1980s. E. Garfield [12] first identified the _Delayed Recognition (DR)_ phenomenon. Highly-cited papers were considered cited for over 10+ years with few initial citations. E. Aversa [13] clustered trajectories of 400 highly-cited articles over 9 years. They used a simple k-means on raw time series and detected two clusters- _Early Rise-Rapid Decline (ER-RD)_ and _Delayed Rise-Slow Decline (DR-SD)_. D. Aksnes [14] clustered 297 articles cited over 16 years. For each article, he calculated citations received in 3 and 7-12 years of time windows. The trajectory rise was categorized into _Early_, _Medium_, and _Delayed rise_ if a publication got \(>30\%\), between 15%-30%, and \(<15\%\) of its final citations in the first 3 years, respectively. Further, the decline was categorized into _Rapid_, _Slow_, and _No Decline_ if a publication received \(<\) 30%, between 30%-50%, and \(>\) 50% of its final citations in the later period. Three clusters were identified- _ER-RD_, _MR-SD_, and _DR-ND_. Raan et al. [15], in 2004, first coined the term '_Sleeping Beauty (SB)_' for the _DR_ phenomenon. S. Redner [16] identified three clusters by arbitrarily defining thresholds- _Sleeping Beauties (SB)_, _Discovery Papers (DP)_, and _Hot Papers (HP)_. SB's received \(>\) 250 citations with the ratio of the mean citation age to publication age (r) \(>\) 0.7. DP's received \(>\) 500 citations with r \(<\) 0.4. HP's received \(>\) 350 citations with r \(>\) 2/3. Costas et al. [17] initially divided the trajectory into two halves based on the time taken to attain 50% of its total citations (Y50).
Next, they determine the peak location by comparing the time of receiving 25% and 75% of its total citations with Y50. They considered articles cited over 29 years. They identified three clusters- '_Flashes-in-the-Pan (FP)_,' '_Normal Documents (ND)_,' and '_Delayed Documents (DD)_.' Li and Ye [18] identified a sub-category of SB's - All-Element-Sleeping-Beauties (ASB). Jian Li [19] separately identified three clusters - _ASB_, _FP_, and _DR_. Other studies proposed different thresholds for SBs- heartbeat spectra [20] and awakening intensity of SBs [21]. Few studies [11; 22] grouped them into two clusters- _transient_ and _sticky_ knowledge claim. Chakraborty et al. [23] considered the number of peaks (\(n_{CP}\)) and their location (\(t_{CP}\)) for defining thresholds. They defined six clusters- _PeakInit_ (\(n_{CP}\) in \(t_{CP}\leq 5\) years followed by an exponential decline), _MonInc_ (monotonic increase in \(n_{CP}\) till 20 years after publication), _PeakMult_ (multiple \(n_{CP}\)), _MonDec_ (\(n_{CP}\) in the first year after publication followed by a monotonic decrease in citations), _PeakLate_ (few initial citations and a single \(n_{CP}\) in \(t_{CP}>5\) years but not in the last year), and _Others_ (undefined trajectory). Bjork et al. [24] and Min et al. [25] used the BASS model from management studies. Min et al. [25] considered two parameters- _innovation (p)_ and _imitation (q)_ coefficients and defined four clusters - papers with _(small p, small q)_, _(large p, small q)_, _(small p, large q)_, and _(large p, large q)_ values. G. Colavizza and M. Franceschet [26] proposed a shape-based approach and non-linear spectral-clustering method. Three clusters were identified - _sprinters_, _middle-of-the-roads_, and _marathoners_. Zhang et al. [27] proposed a model-based approach and a simple k-means algorithm. Four clusters were identified- _normal low_, _normal high_, _delayed documents_, and _evergreens_. Bornmann et al. [28] measured field and time normalized citation impact scores and identified two clusters- _Hot Papers (HP)_ and _Delayed Recognition (DR)_. The thresholds were defined based on peaks in the early or later half period, similar to Costas et al. [17]. F. Ye and L. Bornmann [29] categorized papers into two clusters- _Smart Girls (SG)_ and _Sleeping Beauties (SB)_. They used the beauty coefficient [30] and proposed the concept of citation angles. The citation angle for \(SGs\) was \(>60^{\circ}\) and \(<30^{\circ}\) for \(SBs\) as compared to the zero citation line. Besides, over the past decade, many works [30; 31; 32] have only studied \(SBs\) from multiple dimensions. He et al. [33] separately modeled SBs into _single-peak SBs_, _second-act SBs_, and _second-act non-SBs_. Recently, Gou et al. [34] defined papers receiving multiple citation peaks even after decay as exhibiting a _literature revival_ phenomenon.

Summarizing, we observed that trajectories with similar shapes and inherent behavior are studied under different clusters. Clusters identified as ER-RD [13; 14], FP [17], sprinters [26], transient-knowledge-claim [11], MonDec [23], HP [28], and SGs [29] represent similar trajectories. Besides, clusters identified as DR [12; 28], SBs [16; 30; 31; 29], DD [27], and PeakLate [23] represent similar trajectories. Moreover, clusters identified as DR-ND [14], delayed rise [17], sticky-knowledge-claim [11], marathoners [26], evergreens [27], and MonInc [23] represent similar trajectories. Here, we briefly point out similarities between some of the clusters. A comprehensive comparison is made in section 3.4.
Besides, most studies only capture two extreme trajectories- HP and SB. HP receive an initial citation burst followed by a rapid decay. SBs are papers with negligible initial citations followed by a citation burst in the later period. Thus, different methods capture different groups of trajectories. Consequently, there are ambiguities regarding the exact number of distinct trajectories.

The primary motivation draws from the varying thresholds and arbitrary methods used for identifying similar trajectories [34]. Broadly, any combination of citation count, the time to receive such a value, citation peaks, or the time gap for receiving such peaks are fixed to set arbitrary thresholds. Moreover, clustering using raw time series can add noise [35]. It results in inconsistent citation and temporal characteristics noted for similar trajectories and contradictory cluster behavior. Some studies [17; 23; 28; 27] only explore growth attributes of a trajectory and do not study its decay in detail. Mostly, the growth phase is categorized into two periods- the sleeping phase and the recognition or awakening phase [34; 31]. Thus, there is variability in growth and decline times. Besides, most methods have parameter dependence adding to irregularities. Such techniques result in a major proportion of articles remaining unclustered. For instance, methods proposed by Chakraborty et al. [23] could not define trajectories for 45% of articles.

Our prime objective is to propose a generalized clustering framework that can capture all probable distinct trajectories. For this, the challenge of evaluating the credibility of cluster labels in unsupervised learning algorithms needs to be addressed. Additionally, we aim to define a generic feature set that can capture the temporal evolution of any trajectory. It will help to empirically refine the characteristics of individual trajectories by their subjective assessment.

The present study attempts to address multiple gaps and, in doing so, makes important contributions. First, the study proposes a feature-based multiple k-means cluster ensemble framework for clustering trajectories. The feature-based approach removes the necessity of manually defining thresholds for different trajectories. Besides, it reduces the dimensionality and complexity of raw time series. It also helps input an even-length vector. Moreover, unsupervised ensemble learning accurately evaluates the credibility of class labels. Thus, ambiguities regarding the number of distinct trajectories are resolved. Second, trajectory differences are studied considering different time lengths. Third, the growth and decay times, citation, and peak characteristics of individual trajectory clusters are empirically re-defined. Fourth, this is the first study to examine diverse literature and point out that identical trajectories were studied as different clusters. Next, we compare all of them with the final cluster sets obtained in this study to validate our proposed methodology. This study's proposed generalized clustering framework can determine all possible distinct trajectories rather than only capturing extreme classes.

The paper is organized as follows. In section 2, we define the feature set, the clustering methodology, experimental settings, and the MAG data set in brief. Section 3 contains the main clustering results, cluster characteristics, and a comparative study for validation. Section 4 concludes the research and discusses limitations and future implications.
## 2 Materials and methods

This section initially presents the choice of feature set and then the multiple k-means cluster ensemble algorithm (MKMCE) used for clustering citation trajectories. Further, we discuss the experimental settings and briefly describe the Microsoft Academic Graph (MAG) data set.

### Feature selection

This study extracts features from a paper's raw time-series and converts it to a lower-dimension space. The dimensionality reduction reduces memory requirements [35]. It speeds up the clustering process, as the distance calculation in clustering algorithms using raw time series can be computationally expensive [36]. Moreover, some distance measures are sensitive to noise. Thus, clustering using raw data may group time series similar in noise rather than identical in their characteristic shape [35]. It also helps us to input an even-length vector. Table 1 represents a complete description of the features and their notation. Figure 2 represents a hypothetical trajectory. Broadly, we divide it for any given paper into two phases- the _growth_ and _decay_ periods.

One of the pioneering studies modeling citation distribution is the WSB model proposed by Wang et al. [37]. They highlighted the importance of two features- _impact time_ and _immediacy time_- for studying a trajectory. Here, impact time is the time to achieve the geometric mean of the final citation count, and immediacy time is when a paper receives its maximum or highest annual citation. We use these two variables, impact and immediacy time, to measure early growth or initial time and growth time, respectively. Both of them refer to the growth period. Summarizing, we empirically define three measurements of time (time-related features F1, F2, and F3)- _initial time_ (\(T_{i}\)), _growth time_ (\(T_{g}\)), and _decay time_ (\(T_{d}\)) (see full description in Table 1). Further, we use statistical methods to measure a given paper's _citation growth or decay_ (citation-related features F4, F5, and F6) separately in each of the above three times. We calculate the gain variable to measure citation growth or decline. Finally, we separately measure _the number of peaks of three different intensities_ (peak count-related features F7, F8, and F9) in growth and decay times. We use an empirical outlier detection rule [38; 36] widely used in time-series analysis. First, we calculate the \(i^{th}\) paper's mean citation (\(\mu_{i}\)) and standard deviation (\(\sigma_{i}\)) considering its entire time series. Next, we calculate citation values of magnitude greater than \(\mu_{i}+\sigma_{i}\) (_low-intensity_), \(\mu_{i}+2\sigma_{i}\) (_medium-intensity_), and \(\mu_{i}+3\sigma_{i}\) (_high-intensity_). The resultant numbers of peaks of the three different intensities separately occurring in growth and decay times are considered as input features (refer to figure 2).

Figure 2: **A hypothetical citation trajectory of paper ‘i’ is plotted to understand the derived feature set quickly. Here, \(\mu_{i}\) is the geometric mean of final citations, and \(\sigma_{i}\) is the standard deviation considering the entire series. The location of citation peaks is shown.**

It is essential to standardize features before clustering, as the distance between different data points is measured. Normalization refers to the probability distribution of features. We scale each feature \((f_{i})\) using a \(z\)-score value given as \(z_{f_{i}}=\frac{x_{n}-\mu_{f_{i}}}{\sigma_{f_{i}}}\).

\begin{table}
\begin{tabular}{|l|c|l|c|l|} \hline Summary & No. & Features & Notation & Description \\ \hline **Time-related features** & F1 & Initial time & \(T_{i}\) & It is the time a paper takes to attain the geometric mean of its final citations. \\ & F2 & Growth time & \(T_{g}\) & It is the difference between the time a paper takes to receive its highest annual citation and the time a paper takes to attain the geometric mean of its final citations (\(T_{i}\)). \\ & F3 & Decay time & \(T_{d}\) & It is the difference between the time when a paper is last cited (publication age) and the time a paper takes to receive its highest annual citation. \\ \hline **Citation-related features** & F4 & Citation gain in \(T_{i}\) & \(C_{T_{i}}\) & It is the ratio of cumulative citations received by a paper in time \(T_{i}\) to its final citations, \(C_{T_{i}}\{c\}=\frac{\sum_{T_{i}}c_{i}}{c}\). \\ & F5 & Citation gain in \(T_{g}\) & \(C_{T_{g}}\) & It is the ratio of cumulative citations received by a paper in time \(T_{g}\) to its final citations, \(C_{T_{g}}\{c\}=\frac{\sum_{T_{g}}c_{i}}{c}\). \\ & F6 & Citation gain in \(T_{d}\) & \(C_{T_{d}}\) & It is the ratio of cumulative citations received by a paper in time \(T_{d}\) to its final citations, \(C_{T_{d}}\{c\}=\frac{\sum_{T_{d}}c_{i}}{c}\). \\ \hline **Peak count-related features** & F7 & Number of peaks of high intensity & \(n_{P}\) & The total number of citation values in the time series greater than or equal to \(\mu+3\sigma\); calculated separately for the two periods \(T_{g}\) and \(T_{d}\). \\ & F8 & Number of peaks of medium intensity & & The total number of citation values in the time series greater than or equal to \(\mu+2\sigma\); calculated separately for the two periods \(T_{g}\) and \(T_{d}\). \\ & F9 & Number of peaks of low intensity & & The total number of citation values in the time series greater than or equal to \(\mu+\sigma\); calculated separately for the two periods \(T_{g}\) and \(T_{d}\). \\ \hline \end{tabular}
\end{table} Table 1: **The generic feature set for evaluating trajectories**
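To make the feature definitions concrete, the following Python sketch computes one paper's feature vector from its annual citation counts. It is our illustrative reading of Table 1, not the authors' code: the exact windows used for the citation gains and the split of the peak counts at the peak year are assumptions, and the "geometric mean of final citations" is read here as the geometric mean of the nonzero annual counts.

```python
import numpy as np

def peak_counts(series, mu, sigma):
    """Counts of values at or above the high/medium/low intensity thresholds
    (mu + 3*sigma, mu + 2*sigma, mu + sigma) used for features F7-F9."""
    return [int((series >= mu + k * sigma).sum()) for k in (3, 2, 1)]

def trajectory_features(c):
    """Feature vector for one paper from its annual citations c (index 0 =
    publication year); assumes the paper has at least one citation."""
    c = np.asarray(c, dtype=float)
    total, cum = c.sum(), np.cumsum(c)
    mu, sigma = c.mean(), c.std()
    t_peak = int(np.argmax(c))                   # year of the highest annual citation
    t_last = int(np.nonzero(c)[0][-1])           # last year the paper is cited
    g = float(np.exp(np.log(c[c > 0]).mean()))   # geometric mean of nonzero annual counts (assumed)
    T_i = int(np.argmax(cum >= g))               # F1: initial time
    T_g = max(t_peak - T_i, 0)                   # F2: growth time
    T_d = max(t_last - t_peak, 0)                # F3: decay time
    C_Ti = cum[T_i] / total                      # F4: citation gain in T_i
    C_Tg = cum[t_peak] / total                   # F5: citation gain in the growth period
    C_Td = cum[t_last] / total                   # F6: citation gain in the decay period
    growth, decay = c[: t_peak + 1], c[t_peak + 1:]
    peaks = peak_counts(growth, mu, sigma) + peak_counts(decay, mu, sigma)  # F7-F9, per period
    return [T_i, T_g, T_d, C_Ti, C_Tg, C_Td] + peaks

def zscore(X):
    """Column-wise z-score standardization of the feature matrix before clustering."""
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
```

For example, `zscore(np.array([trajectory_features(c) for c in corpus]))` would produce the standardized feature matrix fed to the clustering stage, assuming `corpus` is a list of annual-citation arrays.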
### Clustering methodology

This section motivates the choice of the multiple k-means cluster ensemble (MKMCE) algorithm [39] for the clustering task. A cluster ensemble method is a more recent technique [40] to overcome the past limitations of single clustering algorithms.

#### 2.2.1 Problem definition

Let X be an original data set consisting of N objects given by \(\{x_{i}\}_{i=1}^{N}\). F is a set of M features. Besides, X(F) is the representation of X on F. It is a matrix of dimension \(N\times M\). Further, \(X(f_{j})\) represents the \(j^{th}\) column, \(x_{i}(F)\) represents the \(i^{th}\) row, and \(x_{i}(f_{j})\) is the value of object \(x_{i}\) in feature \(f_{j}\).

1. _Multiple k-means clustering problem to generate a base cluster set:_ The k-means algorithm is used as a base clusterer and is run multiple times to generate a base cluster set.
Let \(Z=\{\zeta_{h}\}_{h=1}^{T}\) represent a set of T base clusterings of X(F), where \(\zeta_{h}=\{C_{hl}\}_{l=1}^{k_{h}}\) is the \(h^{th}\) base clustering. \(K=\{k_{h}\}_{h=1}^{T}\) represents the set of the numbers of clusters in the base clusterings, where \(k_{h}\) is the number of clusters in the \(h^{th}\) base clustering. Further, \(\zeta_{h}(x_{i}(F))\) is the cluster label of object \(x_{i}(F)\) in clustering \(\zeta_{h}\); \(\zeta_{h}(x_{i}(F))=l\) denotes that object \(x_{i}(F)\) belongs to cluster \(C_{hl}\). The objective function (A) of k-means can be given as
\[A(\zeta_{h},y_{h})=\sum_{l=1}^{k_{h}}\sum_{x_{i}(F)\in C_{hl}}d\left(x_{i}(F),y_{hl}\right)^{2}.\tag{1}\]
Here, \(y_{hl}\) is the \(l^{th}\) cluster center, \(y_{h}=\{y_{hl}\}_{l=1}^{k_{h}}\), and \(d\left(x_{i}(F),y_{hl}\right)\) is the distance between object \(x_{i}(F)\) and the cluster center \(y_{hl}\).

2. _Cluster ensemble problem to generate a final cluster set:_ The cluster ensemble problem integrates the base clusters to generate a final clustering \(Z_{*}\) of the data set X(F) based on the clustering set Z. The final cluster set can be given as \(C_{*}\). It can also be represented as \(C_{*}=X(\zeta_{*})\), where \(\zeta_{*}\) is the final clustering feature. If \(x_{i}(F)\in C_{*l}\), then \(x_{i}(F)(\zeta_{*})=\zeta_{*l}\). Here, \(\zeta_{*l}\) is the cluster label of \(C_{*l}\), for \(1\leq i\leq N\).

#### 2.2.2 MKMCE algorithm

MKMCE is an unsupervised ensemble learning method in which several base clusterings are integrated to form final clusters with improved stability and accuracy. A cluster ensemble method is more efficient than other algorithms as it assesses the credibility of cluster labels. For instance, simple k-means is a linear clustering algorithm widely known for its low computational cost; however, it is susceptible to data distributions [41]. Non-linear clustering algorithms, such as spectral clustering [26] and density-based spatial clustering of applications with noise (DBSCAN) [17], have expensive time costs, and their pair-wise distance calculation is not suitable for large data sets. Compared to them, Bai et al. [39] showed that the MKMCE algorithm is faster and can rapidly discover non-linearly separable clusters. It also worked well with large data sets such as KDD99, which has 1,048,576 entities. Broadly, the MKMCE algorithm can be divided into two parts: (1) it first produces a base cluster set by running k-means multiple times, and (2) it then integrates the base clusterings into a final cluster set using a cluster ensemble algorithm. The key steps are as follows, with a generic sketch given after this list:

1. _Local credibility function:_ Unlike in supervised learning, we do not know the exact cluster label of data points in unsupervised learning. Besides, a cluster center is used to represent a cluster in k-means; however, the cluster center is not convenient for representing a non-linear cluster. Therefore, a local credibility function is introduced to check whether an object falls into the local \(\psi\) neighborhood of its cluster center. It is defined as
\[\psi_{h}(x_{i}(F))=\begin{cases}1&\text{if }x_{i}(F)\in\mathbf{N}(y_{hl}),\\ 0&\text{otherwise},\end{cases}\tag{2}\]
where \(l=\zeta_{h}(x_{i}(F))\) and \(\mathbf{N}(y_{hl})\) is the \(\psi\) neighborhood of the cluster center \(y_{hl}\). It is also called the local credible space of cluster \(C_{hl}\), for \(1\leq i\leq N\) and \(1\leq h\leq T\).
2. _Multiple k-means clustering algorithm:_ The objective function (U) used to produce multiple base clusterings with k-means can be given as
\[\min_{Z}\;U(Z)=\sum_{h=1}^{T}\sum_{i=1}^{N}\theta_{h}(x_{i}(F))\,\psi_{h}(x_{i}(F))\,d\left(x_{i}(F),y_{h\zeta_{h}(x_{i}(F))}\right),\tag{3}\]
where \(\theta_{h}(x_{i}(F))\) is a boolean variable whose value is 1 if \(x_{i}(F)\) takes part in the \(h^{th}\) base clustering and 0 otherwise. Once an object forms a base cluster, it does not participate in further k-means clustering. Thus, the aim is to minimize the value of the objective function (U) under the constraint
\[\sum_{h=1}^{T}\theta_{h}(x_{i}(F))\,\psi_{h}(x_{i}(F))=1,\quad 1\leq i\leq N.\tag{4}\]
Initially, we set h=1, S=X(F), and \(\theta_{h}(x_{i}(F))=1\) for \(1\leq i\leq N\). Next, we randomly select \(k_{h}\) points from S as initial cluster centers and apply the k-means algorithm under constraint equation 4; only objects lying in the \(\epsilon\) neighborhood of the initialized cluster centers are included in a cluster. Finally, we assign each object in X(F)-S to the cluster with the nearest cluster center. Next, we update \(S\leftarrow S\setminus S^{\prime}\), where \(S^{\prime}\) contains the objects already clustered in the \(h^{th}\) base clustering, set h=h+1, and set \(\theta_{h}(x_{i}(F))=1\) if \(x_{i}(F)\in S\) and 0 otherwise. This incremental clustering method runs until the desired number of base clusterings (\(T_{max}\)) is obtained or the number of objects remaining in S is smaller than \(k_{h}^{2}\). \(T_{max}\) can be set depending on user requirements. We initially set \(T_{max}\) = 100 and, after running it for a few iterations, find \(T_{max}\) = 10 to be the ideal choice. Also, considering all prior works [23], we observed that up to a maximum of six clusters had been reported. The output of this part of the algorithm is a base cluster set \(Z=\{\zeta_{h}\}\), \(1\leq h\leq T\), and a cluster center set \(P=\{p_{h}\}\), \(1\leq h\leq T\).

3. _Cluster ensemble algorithm:_ The final clusters should have high concurrence with the features of the base and original clusters. The overlap of credible local space between any two base clusters is used to measure their similarity. However, the credible local space shared by two base clusters is naturally small, due to the generation mechanism of base clusters using multiple k-means clustering. Thus, a latent cluster is introduced to measure the indirect overlap between base clusters. Let \(C_{hl}\) and \(C_{gj}\) be any two base clusters with cluster centers \(y_{hl}\) and \(y_{gj}\), and assume a latent cluster between them. The center of the latent cluster, taken as the mid-point \(\frac{y_{hl}+y_{gj}}{2}\) of the two base cluster centers, determines whether there is an indirect overlap between the two base clusters: if \(d(y_{hl},y_{gj})\leq 4\epsilon\), the clusters are indirectly overlapped. Thus, the similarity measure is inversely proportional to \(d(y_{hl},y_{gj})\).
Mathematically, the similarity between two base clusters \(C_{hl}\) and \(C_{gj}\) is defined so that it decreases with the distance \(d(y_{hl},y_{gj})\) between their centers and becomes zero when \(d(y_{hl},y_{gj})>4\epsilon\), that is, when the two clusters do not even indirectly overlap. Based on these pairwise similarities, the base clusters are merged into the final cluster set \(C_{*}\).
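The exact ensemble formulation follows Bai et al. [39]. As a rough illustration of the overall idea only, the sketch below builds several k-means base clusterings on the standardized features and combines them through a co-association (consensus) matrix. This is a generic cluster-ensemble stand-in using a recent scikit-learn, not the authors' MKMCE implementation, and the parameter values are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

def ensemble_clustering(features, n_base=10, k_range=(2, 6), n_final=4, seed=0):
    """Generic cluster ensemble: several k-means base clusterings are
    summarized in a co-association matrix, which is then clustered again.
    Note: the n x n matrix is only feasible for moderate sample sizes;
    large corpora would need subsampling."""
    rng = np.random.default_rng(seed)
    n = features.shape[0]
    co_assoc = np.zeros((n, n))

    # Step 1: generate base clusterings with k-means, varying k.
    for _ in range(n_base):
        k = int(rng.integers(k_range[0], k_range[1] + 1))
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=int(rng.integers(1_000_000))
                        ).fit_predict(features)
        # Two papers that share a base cluster become more similar.
        co_assoc += (labels[:, None] == labels[None, :]).astype(float)
    co_assoc /= n_base

    # Step 2: derive the final clusters from the consensus similarities.
    distance = 1.0 - co_assoc
    final = AgglomerativeClustering(n_clusters=n_final, metric="precomputed",
                                    linkage="average").fit_predict(distance)
    return final
```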
### Trajectory selection

Let \(k_{x,y}\) represent the number of citations received by a paper **x** during a period y. A vector can define the citation trajectory of a publication **x** over T as:
\[k_{x,T}=(k_{x,1},k_{x,2},k_{x,3},\ldots,k_{x,n}) \tag{6}\]
The first fundamental question is, how many minimum citations should a paper receive to produce meaningful patterns in its trajectory? We use the metric _relative success ratio (\(s_{x}\)) of a paper \(x\)_ as defined by Radicchi and Castellano [43]. It is given as
\[s_{x}=\frac{c_{x}}{\max(\mu_{x},5)} \tag{7}\]
Here, \(c_{x}\) and \(\mu_{x}\) are the total number of citations and the mean citation rate received by a publication **x** over a period T. They can be represented as
\[c_{x}=\sum_{T=1}^{n}k_{x,T} \tag{8}\]
\[\mu_{x}=\frac{c_{x}}{T} \tag{9}\]
G. Colavizza and M. Franceschet [26] use the \(\max(\mu_{x},5)\) parameter in the denominator to account for years that receive a low mean citation rate. We only consider papers in our study with a value of \(s_{x}\geq 1\). This criterion is applied at the beginning to filter out well-cited articles with above-average citation impact, so that their time-series trajectories yield meaningful trends. It also helps to address the problem of citation variance that arises from factors tied to the time of publication.

The following question is, what should be the exact length of a citation trajectory to be considered for clustering? Citation histories can only be compared if their lengths are equal [26]. Consequently, trajectories of three different time windows, 10 years, 20 years, and 30 years, are considered separately. On top of that, the minimum citation threshold discussed above is applied to filter out articles. The 10-year time window is chosen because impact factors [44] usually consider 2 to 5 years, as this is the average time for attaining citation growth in most disciplines; however, some fields, such as the social sciences, take longer to peak. Besides, we examine another 5-year window for capturing the decay pattern. Next, 20 and 30 years are multiples of 10 and allow us to investigate the long-term behavior of citation histories. Besides, the internet era began around 1985, and citation-based metrics such as the h-index came into practical use in 2005. Consequently, papers published in 2005, 1995, and 1985 and cited till 2015 are considered for studying 10-year [23; 26], 20-year [23], and 30-year trajectories [27], respectively.

### Data

The MAG data set [7] is used for the empirical study in this paper. After filtering with the methods described in section 2.3, we obtain 195,783 papers published in 2005, 56,380 papers published in 1995, and 41,732 papers published in 1985. All sets of papers receive citations till 2015. The cumulative citation distribution of papers is right-skewed. For example, in the 30-year distribution of cumulative citation counts, only 30 papers receive more than 10,000 citations, 106 papers more than 5,000, 1,810 papers more than 500, and 73.6% receive fewer than 100 citations. Similarly, in the 10-year distribution, only 28 papers receive more than 10,000 citations, 154 papers more than 5,000, 5,362 papers more than 500, and approximately 75% receive fewer than 100 citations.
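As a small illustration of this filtering step, the following sketch computes the relative success ratio from annual citation counts and keeps only papers with \(s_{x}\geq 1\). The column names and the pandas-based layout are assumptions for illustration, not the authors' code.

```python
import numpy as np
import pandas as pd

def relative_success_ratio(annual_citations):
    """s_x = c_x / max(mu_x, 5), with c_x the total citations over the
    window and mu_x the mean annual citation rate (equations 7-9)."""
    k = np.asarray(annual_citations, dtype=float)
    c_x = k.sum()
    mu_x = c_x / len(k)
    return c_x / max(mu_x, 5.0)

def filter_papers(df, window):
    """Keep papers whose trajectory covers `window` years and whose
    relative success ratio over that window is at least 1."""
    covered = df["annual_citations"].apply(lambda k: len(k) >= window)
    trimmed = df.loc[covered].copy()
    trimmed["s_x"] = trimmed["annual_citations"].apply(
        lambda k: relative_success_ratio(k[:window]))
    return trimmed[trimmed["s_x"] >= 1.0]

# Example: papers published in 2005 and cited until 2015 (10-year window).
# papers = pd.DataFrame({"paper_id": [...], "annual_citations": [...]})
# well_cited = filter_papers(papers, window=10)
```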
## 3 Results

This section extracts the features and applies the multiple k-means cluster ensemble algorithm (MKMCE). The flowchart in figure 3 shows the number of papers obtained in each base and final cluster set. We do not find any significantly different clusters when considering the data of the first two windows, 10 years and 20 years. Thus, the resulting distinct clusters are discussed considering two lengths: short-term (10 years) and long-term (30 years). Table 2 represents the final cluster set. Further, the characteristics of each cluster, that is, the growth and decay cycles of the citation trajectories, are discussed. Finally, a comparative study is performed to validate the obtained clusters against identical trajectories detected in prior literature.

Figure 3: **The number of papers in each base and final cluster set after applying the multiple k-means cluster ensemble algorithm.**

Table 2: **Universal final cluster set**

| | **No Decline (ND)** | **Rapid Decline (RD)** | **Slow Decline (SD)** |
|---|---|---|---|
| **Early Rise (ER)** | - | Cluster S3 | Cluster L3, Cluster S2 |
| **Delayed Rise (DR)** | Cluster L1, Cluster S1 | - | Cluster L2 |

### Clustering short-term trajectories

The length of a citation trajectory considered for this study is 10 years. We consider papers published in the year 2005 and cited till 2015. The final set of papers considered is 195,783. We obtain three distinct clusters: cluster S1, S2, and S3 (see figure 3). The citation patterns for clusters S3, S2, and S1 are '_Early Rise-Rapid Decline (ER-RD)_', '_Early Rise-Slow Decline (ER-SD)_', and '_Delayed Rise-No Decline (DR-ND)_', respectively (see table 2). Moreover, the percentages of papers in the _ER-RD_, _ER-SD_, and _DR-ND_ clusters are 2.2%, 45%, and 53%, respectively.

#### 3.1.1 Cluster analysis

Table 3 shows each cluster's descriptive statistics of initial time, growth time, and decay time. Besides, the histogram plot in figure 4 shows the cluster-wise cumulative citation distribution separately for the three consecutive times. Finally, the box plot in figure 5 shows the distribution of citation peaks of different intensities.

Figure 4: **The histogram plot shows the cumulative citation distribution of each cluster for three consecutive times: initial time, growth time, and decay time. Figures (a), (b), and (c) represent the ER-RD, ER-SD, and DR-ND clusters for short-term trajectories, respectively.**

Figure 5: **The box plot represents the distribution of the number of citation peaks of three different intensities, \(n_{l^{l}}\) (figures (a), (b)), \(n_{l^{m}}\) (figures (c), (d)), and \(n_{l^{h}}\) (figures (e), (f)), separately for growth and decay times. Blue, red, and green indicate the ER-RD, ER-SD, and DR-ND clusters.**

Table 3: **(Short-term trajectory study) The descriptive statistics of initial time (\(T_{i}\)), growth time (\(T_{g}\)), and decay time (\(T_{d}\)), in years, for the final clusters.**

| Short-term citation trajectory | ER-RD \(T_{i}\) | ER-RD \(T_{g}\) | ER-RD \(T_{d}\) | ER-SD \(T_{i}\) | ER-SD \(T_{g}\) | ER-SD \(T_{d}\) | DR-ND \(T_{i}\) | DR-ND \(T_{g}\) | DR-ND \(T_{d}\) |
|---|---|---|---|---|---|---|---|---|---|
| Mean | 1.51 | 2.16 | 2.11 | 2.31 | 3.15 | 3.88 | 3.05 | 5.72 | 0.5 |
| Standard deviation | 1 | 1 | 1 | 0.7 | 0.5 | 1 | 0.5 | 2 | 0.5 |
| Quartile 1 | 0 | 1 | 1 | 1 | 2 | 2 | 2 | 3 | 0 |
| Quartile 2 | 1 | 2 | 2 | 2 | 3 | 3 | 3 | 5 | 0 |
| Quartile 3 | 2 | 3 | 3 | 2 | 3 | 4 | 3 | 6 | 0 |
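The per-cluster statistics reported in Tables 3 and 4 are straightforward to reproduce once every paper carries a final cluster label. The following minimal pandas sketch is illustrative only; the column names are assumptions.

```python
import pandas as pd

def cluster_time_statistics(df):
    """Summarize initial, growth, and decay times per final cluster.
    Expects one row per paper with a 'cluster' label and the three
    time-related features. The mean, std, 25%, 50%, and 75% columns
    correspond to the rows reported in Tables 3 and 4."""
    return (df.groupby("cluster")[["T_i", "T_g", "T_d"]]
              .describe(percentiles=[0.25, 0.5, 0.75]))

# Example:
# short_term = pd.DataFrame({"cluster": [...], "T_i": [...], "T_g": [...], "T_d": [...]})
# print(cluster_time_statistics(short_term))
```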
1. _ER-RD:_ They have an average initial time of 1 to 1.5 years and a growth time of \(\sim\) 2 years. Thus, the total growth period for papers in this cluster is 3 to 3.5 years, followed by a quick decay in 2 years (see table 3). The cumulative citation distribution in figure 4 (a) reveals that they receive the fewest citations compared to the other clusters, as they receive citations only for a short window. However, a handful of papers also receive as many as 1,000 citations. M. Haghighat [45] points out self-citation stacking as one of the reasons behind the citation patterns of such hot papers. Peaks of \(l^{l}\) intensity are seen chiefly during their growth period (see the blue colored box plot in figure 5 (a)). Table 2 reveals this pattern is only a characteristic of short-term citation trajectories.

2. _ER-SD:_ They have an average initial time of 2 years and a growth time of 3 years. Consequently, the total growth time for papers in this cluster is 5 years, followed by a slow decay over the next 3 to 4 years (see table 3). The histogram plot depicting the cumulative citation distribution in figure 4 (b) reveals a right-skewed distribution where papers receive more citations as they move from initial to growth to decay time. They receive the highest number of peaks of \(l^{l}\) and \(l^{m}\) intensity, mainly in the growth time (see the red colored box plots in figure 5 (a), (c)). Also, considering the three consecutive times, they receive the maximum number of peaks compared to the other two clusters.

3. _DR-ND:_ They have an average initial time of 3 years and an average growth time of \(\sim\) 6 years. We observed no citation decay for the period analyzed. The histogram plot depicting the cumulative citation distribution in figure 4 (c) shows that a significant proportion of citations is received in the growth time. They attain the highest number of citation peaks of \(l^{l}\) intensity during their delayed growth time (see the green colored box plot in figure 5 (b)).

### Clustering long-term trajectories

The length of a citation trajectory considered for this study is 30 years. We consider papers published in the year 1985 and cited till 2015. The final set of papers considered is 41,732. We obtain three distinct clusters: cluster L1, L2, and L3 (see figure 3). The citation patterns for clusters L3, L1, and L2 are '_Early Rise-Slow Decline (ER-SD)_', '_Delayed Rise-No Decline (DR-ND)_', and '_Delayed Rise-Slow Decline (DR-SD)_', respectively (see table 2). The percentages of papers in the _ER-SD_, _DR-ND_, and _DR-SD_ clusters are 42%, 57%, and 0.8%, respectively.

#### 3.2.1 Cluster analysis

Table 4 shows each cluster's descriptive statistics of initial time, growth time, and decay time. Besides, the histogram plot in figure 6 shows the cluster-wise cumulative citation distribution separately for the three consecutive times. Finally, the box plot in figure 7 shows the distribution of citation peaks of different intensities.

Table 4: **(Long-term trajectory study) The descriptive statistics of initial time (\(T_{i}\)), growth time (\(T_{g}\)), and decay time (\(T_{d}\)), in years, for the final clusters.**

| Long-term citation trajectory | ER-SD \(T_{i}\) | ER-SD \(T_{g}\) | ER-SD \(T_{d}\) | DR-ND \(T_{i}\) | DR-ND \(T_{g}\) | DR-ND \(T_{d}\) | DR-SD \(T_{i}\) | DR-SD \(T_{g}\) | DR-SD \(T_{d}\) |
|---|---|---|---|---|---|---|---|---|---|
| Mean | 2.14 | 4.41 | 20.82 | 4.06 | 25.46 | 0 | 4.73 | 16.06 | 7.84 |
| Standard deviation | 1.22 | 2.85 | 3.72 | 2.00 | 2.93 | 0 | 2.42 | 2.63 | 2.09 |
| Quartile 1 | 1 | 3 | 18 | 2 | 22 | 0 | 2 | 6 | 5 |
| Quartile 2 | 2 | 4 | 20 | 4 | 25 | 0 | 4 | 16 | 7 |
| Quartile 3 | 3 | 6 | 22 | 4 | 26 | 0 | 5 | 17 | 7 |

Figure 6: **The histogram plot shows the cumulative citation distribution of each cluster for three consecutive times: initial time, growth time, and decay time. Figures (a), (b), and (c) represent the ER-SD, DR-ND, and DR-SD clusters for long-term trajectories, respectively.**

Figure 7: **The box plot represents the distribution of the number of citation peaks of three different intensities, \(n_{l^{l}}\) (figures (a), (b)), \(n_{l^{m}}\) (figures (c), (d)), and \(n_{l^{h}}\) (figures (e), (f)), separately for growth and decay times. Blue, red, and green indicate the ER-SD, DR-ND, and DR-SD clusters.**

1. _ER-SD:_ They have an average initial time of 2 years and a growth time of 4 to 4.5 years. Consequently, the total growth time for papers in this cluster is 6 years, followed by an average decay time of 20 years (see table 4).
The cumulative citation distribution in figure 6 (a) reveals that ER-SD papers receive a significant proportion of their final citations during growth time. They receive the highest number of \(l^{l}\) and \(l^{m}\) intensity peaks in growth time (see the blue colored box plots in figure 7 (a) and (c)). Specifically, this cluster gets a median of two peaks of \(l^{l}\) intensity in growth time (see the blue colored box plot in figure 7 (a)).

2. _DR-ND:_ They have an average initial time of 4 years and an average growth time of 25 years. No significant decay is seen for the period analyzed (see table 4). The cumulative citation distribution in figure 6 (b) shows that they receive significant citations during their delayed growth period. Further, we find several \(l^{l}\) intensity peaks during growth time (see the red colored box plot in figure 7 (a)). This cluster receives a single median peak and up to 5 peaks in the growth period.

3. _DR-SD:_ They have an average initial time of 5 years and an average growth time of 16 years. Consequently, the total growth time for papers in this cluster is 16 to 20 years, followed by a slow decay over the next 14 years on average (see table 4). The cumulative citation distribution in figure 6 (c) reveals that they receive negligible citations in the initial time, moderate citations in growth time, and the maximum proportion of citations in decay time. Low-intensity \(l^{l}\) citation peaks are prominently visible in decay time (see the green colored box plot in figure 7 (b)). Table 2 reveals this pattern is only a characteristic of long-term citation trajectories.

### Statistical cluster validation

The analysis of variance (ANOVA) test is conducted to validate the final clusters statistically. Tables 1 and 2 in the accompanying repository1 present the ANOVA test results for the short- and long-term trajectories, respectively. We compare the mean values of each feature across the k groups and check whether the differences are statistically significant. ANOVA separates the variance into two components, one due to mean differences and one due to random influences [46]. We find that the final clusters in both studies are statistically significant (p-value\(<\)0.05), considering each feature separately. The largest F-values, indicating feature importance, are obtained for the time-related features _decay time_ and _growth time_.

Footnote 1: [https://github.com/decodejoyita/Clustering-citation-trajectories/tree/main](https://github.com/decodejoyita/Clustering-citation-trajectories/tree/main)
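A minimal sketch of this validation step is given below, assuming a DataFrame with one row per paper, its feature values, and its final cluster label. It uses scipy's one-way ANOVA for illustration; the exact tooling used by the authors is not stated, and the feature column names are assumptions.

```python
import pandas as pd
from scipy.stats import f_oneway

FEATURES = ["T_i", "T_g", "T_d", "gain_i", "gain_g", "gain_d",
            "n_high", "n_medium", "n_low"]

def anova_per_feature(df, features=FEATURES):
    """One-way ANOVA of each feature across the final clusters.
    Larger F-values indicate features that separate the clusters better."""
    rows = []
    for feature in features:
        groups = [g[feature].values for _, g in df.groupby("cluster")]
        f_value, p_value = f_oneway(*groups)
        rows.append({"feature": feature, "F": f_value, "p": p_value})
    return pd.DataFrame(rows).sort_values("F", ascending=False)

# Example:
# print(anova_per_feature(short_term_papers))
```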
### Comparative study

This sub-section presents a qualitative comparison between clusters to validate the proposed methodology. Tables 5 and 6 examine how the final cluster sets obtained in this study map onto identical clusters defined in prior literature. A quantitative comparison is not feasible here, as different methods have different thresholds and parameter dependencies.

Table 5: **The ER-RD and DR-SD clusters aligned with identical trajectories in prior literature.**

Table 6: **The ER-SD and DR-ND clusters aligned with identical trajectories in prior literature** (recoverable fragment: long-term ER-SD, growth time 6 years, decay time 20 years, 17,479 papers).

E. Aversa [13] identified two clusters, 'Early Rise-Rapid Decline (ER-RD)' and 'Delayed Rise-Slow Decline (DR-SD).' ER-RD are defined as papers with a growth time of 3 years followed by a rapid decay; the exact decay time is not mentioned. DR-SD are defined as papers with a growth time of 6 years followed by a decline in the next 4 years. These ER-RD and DR-SD clusters align with the _ER-RD_ and _ER-SD_ clusters obtained in this study.

D. Aksnes [14] identified three clusters, 'Early Rise-Rapid Decline (ER-RD),' 'Medium Rise-Slow Decline (MR-SD),' and 'Delayed Rise-No Decline (DR-ND).' The ER-RD and MR-SD clusters have a growth time of 2 to 3 years and 4 to 5 years, respectively; the exact decay time is not mentioned. DR-ND are defined as papers with a slow citation rise initially that receive a significant proportion of citations only after 12 years; no decay is observed. The ER-RD, MR-SD, and DR-ND clusters are identical to this study's _ER-RD_, _ER-SD_, and _DR-ND_ clusters.

Costas et al. [17] identified 3 clusters: 'Flashes-in-the-Pan (FP),' 'Normal Documents (ND),' and 'Delayed Documents (DD).' FP is defined as papers with a growth time of 3 years that are, however, not cited in the long term. ND is defined as papers with a 4 to 5 year growth time followed by an exponential decay; the exact decay times are not mentioned. DD is defined as papers with a growth time of 10 to 11 years that receive a significant proportion of their citations later than normal documents; they receive citations even after 20 years. The FP, ND, and DD clusters are identical to this study's _ER-RD_, _ER-SD_, and _DR-SD_ clusters.

Moreover, S. Redner [16] identified three clusters: Sleeping Beauties (SB), Discovery Papers (DP), and Hot Papers (HP). The behavior of DP and HP aligns with the _ER-SD_ cluster; their growth time is 4 to 6 years.
Besides, SBs align with the _DR-ND_ cluster and receive citations 40 years after publication.

S. Baumgartner and L. Leydesdorff [11] identified two clusters: Transient-Knowledge-Claim (TKC) and Sticky-Knowledge-Claim (SKC). Papers belonging to TKC show a typical early peak in citations followed by a steep decline. Papers belonging to SKC have a growth time of 3 to 4 years and continue to be cited even after more than 10 years. The TKC and SKC clusters are identical to this paper's _ER-RD_ and _ER-SD_ clusters, respectively.

Chakraborty et al. [23] defined six clusters: PeakInit, MonDec, MonInc, PeakLate, PeakMult, and Others. PeakInit papers have a growth time of exactly 5 years. MonDec papers have a growth time of 1 year followed by a monotonic decrease. MonInc papers have a growth time of 20 years and no decay. PeakLate papers have a growth time of \(>5\) years; the exact decay times are not determined. 45% of the papers fell into the Others category, whose trajectory characteristics are not defined. The PeakInit, MonDec, MonInc, and PeakLate clusters are identical to _ER-SD_, _ER-RD_, _DR-ND_, and _DR-SD_ from this study. Gou et al. [34] also identified the PeakMult cluster and studied it as a 'literature revival' phenomenon. All-Element-Sleeping-Beauties, a sub-category of SBs, are also a PeakMult cluster. However, we observed that papers of all trajectories receive multiple peaks; the only differences are the time of occurrence of the peaks and their varying intensities in individual trajectories. For instance, _ER-SD_ received the maximum number of peaks during the growth period, and DR-ND received them during the decay period. Thus, it is one of the inherent properties of a trajectory.

G. Colavizza and M. Franceschet [26] identified 3 clusters: sprinters, middle-of-the-roads, and marathoners. The exact growth and decay times are not explicitly mentioned. Sprinters are defined as papers with fast and high peak values followed by equally rapid aging. Middle-of-the-roads are defined as papers that attain fast but moderate peaks with a gradual decay over time. Marathoners are defined as papers that start slow, peak moderately, keep receiving a higher proportion of citations over a prolonged time, and finally decline slowly. The sprinters, middle-of-the-roads, and marathoners are identical to the _ER-RD_, _ER-SD_, and _DR-SD_ clusters of this study.

Zhang et al. [27] identified four clusters: normal low, normal high, delayed documents, and evergreens. Papers belonging to the normal low and normal high clusters receive an early peak followed by a slow decline. Delayed documents are papers with a slow citation rise followed by a slow decay. Evergreens are papers with a continual increase in citations and no decay in the 30 years analyzed. The normal-low and normal-high clusters are similar to _ER-SD_, delayed documents are identical to _DR-SD_, and evergreens align with the _DR-ND_ cluster of this study.

F. Ye and L. Bornmann [29] identified two clusters: Smart Girls (SG) and Sleeping Beauties (SB). SGs are papers with a growth time of 5 years and a citation angle of \(>60^{\circ}\). SBs are papers with a growth time of \(>5\) years and a citation angle of \(>30^{\circ}\); SBs receive a major proportion of their citations after 15 years. The SGs and SBs are identical to the _ER-RD_ and _DR-SD_ clusters of this study. Bornmann [28] identified two clusters: Hot Papers (HP) and Delayed Recognition (DR). The HP and DR clusters are identical to the _ER-RD_ and _DR-SD_ clusters from this study.
Besides, many works only study an extreme trajectory, that is, sleeping beauties [15, 31, 18, 20, 30]. Such papers receive negligible citations for a long time after publication and then, depending upon the awakening intensity, suddenly jump to receiving large citation counts. The decay time is characterized by lower annual citations than the peak [28]. This behavior aligns with the _DR-SD_ cluster of this study.

To summarize, the qualitative comparison validates our proposed methodology. We can detect all probable classes of trajectories identified in the existing literature. Tables 5 and 6 show that we can universally categorize trajectories into four clusters: _ER-RD_, _ER-SD_, _DR-ND_, and _DR-SD_. Papers with an _ER-RD_ trajectory have a growth period of 3 years and decay in the next 2 years; short-term trajectories mostly exhibit such a pattern, and they receive their \(n_{l^{l}}\) peaks during the growth period. Besides, both short-term and long-term trajectories exhibit _ER-SD_ patterns. Compared to the other two clusters, short-term trajectories with an _ER-SD_ pattern receive the maximum citations; they have a growth period of 5 years and decay in the next 3 to 4 years. Further, long-term citation trajectories with an _ER-SD_ pattern have a growth period of 6 years and decay in the next 20 years. Compared to all other clusters, they receive the maximum number of \(n_{l^{l}}\) and \(n_{l^{m}}\) peaks during the growth period. Both short-term and long-term trajectories also exhibit _DR-ND_ patterns. Short-term trajectories with a _DR-ND_ pattern have a delayed growth period of 9 years. Further, long-term citation trajectories with a _DR-ND_ pattern have a growth period of 29 years and receive the maximum citations compared to the other two clusters; no citation decline is seen for the period analyzed. Finally, papers with a _DR-SD_ trajectory have a growth period of 16 years and decay in the next 14 years. Long-term trajectories mostly exhibit such a pattern, and they receive the maximum number of \(n_{l^{l}}\) peaks during decay.

## 4 Conclusion

The study of clustering citation trajectories reveals that not all articles receive immediate success after publication. A detailed review of prior literature reveals that identical trajectories were detected as different clusters. This is due to the use of arbitrary thresholds and parameter-dependent methods. Besides, different methods captured different groups of trajectories, leading to ambiguities in their specific number. This study proposes a feature-based multiple k-means cluster ensemble framework. It is a generalized framework for capturing the temporal evolution of any trajectory using a generic feature set and clustering the trajectories with an ensemble learning method. Four distinct trajectories are obtained: _ER-RD_, _ER-SD_, _DR-ND_, and _DR-SD_. A comprehensive comparison reveals that these four clusters can define all prior groups of trajectories identified in the literature. Thus, the issue of ambiguities regarding their distinct number is resolved. Most papers fall into the _ER-SD_ and _DR-ND_ clusters. A negligible share of articles falls into _ER-RD_ (widely studied as hot papers) and _DR-SD_ (widely studied as sleeping beauties). Further, _ER-RD_ is a characteristic of short-term trajectories, and _DR-SD_ is a characteristic of mostly long-term trajectories. Delayed-rise papers receive higher total citations than early risers as they receive citations for a more extended period.
However, the highest numbers of peaks are detected for early risers, establishing that their citation intensity is higher. Self-citations are not excluded from our analyses, which is a limitation of this study. Future studies could use other empirical data sets to examine the effectiveness of this methodology.
Citation maturity differs from paper to paper, yet the impact of every paper is observed within a fixed window. Clustering the citation trajectories of papers helps to understand the process of knowledge diffusion and shows that not all papers achieve success immediately after publication. Moreover, clustered trajectories are needed by algorithms that estimate a paper's impact. This is a difficult problem because citation time series exhibit non-linear and non-stationary characteristics. Past studies have proposed approaches based on arbitrary thresholds and fixed rules, and all of these methods depend largely on parameters. As a result, inconsistencies arise in defining similar trajectories, leading to ambiguity about their specific number. Many studies capture only extreme trajectories. A generalized clustering framework is therefore indispensable. This paper proposes a feature-based multiple k-means cluster ensemble framework.
2309.16395
QUIC on the Highway: Evaluating Performance on High-rate Links
QUIC is a new protocol standardized in 2021 designed to improve on the widely used TCP / TLS stack. The main goal is to speed up web traffic via HTTP, but it is also used in other areas like tunneling. Based on UDP it offers features like reliable in-order delivery, flow and congestion control, streambased multiplexing, and always-on encryption using TLS 1.3. Other than with TCP, QUIC implements all these features in user space, only requiring kernel interaction for UDP. While running in user space provides more flexibility, it profits less from efficiency and optimization within the kernel. Multiple implementations exist, differing in programming language, architecture, and design choices. This paper presents an extension to the QUIC Interop Runner, a framework for testing interoperability of QUIC implementations. Our contribution enables reproducible QUIC benchmarks on dedicated hardware. We provide baseline results on 10G links, including multiple implementations, evaluate how OS features like buffer sizes and NIC offloading impact QUIC performance, and show which data rates can be achieved with QUIC compared to TCP. Our results show that QUIC performance varies widely between client and server implementations from 90 Mbit/s to 4900 Mbit/s. We show that the OS generally sets the default buffer size too small, which should be increased by at least an order of magnitude based on our findings. Furthermore, QUIC benefits less from NIC offloading and AES NI hardware acceleration while both features improve the goodput of TCP to around 8000 Mbit/s. Our framework can be applied to evaluate the effects of future improvements to the protocol or the OS.
Benedikt Jaeger, Johannes Zirngibl, Marcel Kempf, Kevin Ploch, Georg Carle
2023-09-28T12:42:26
http://arxiv.org/abs/2309.16395v1
# QUIC on the Highway: ###### Abstract QUIC is a new protocol standardized in 2021 designed to improve on the widely used TCP/TLS stack. The main goal is to speed up web traffic via HTTP, but it is also used in other areas like tunneling. Based on UDP it offers features like reliable in-order delivery, flow and congestion control, stream-based multiplexing, and always-on encryption using TLS 1.3. Other than with TCP, QUIC implements all these features in user space, only requiring kernel interaction for UDP. While running in user space provides more flexibility, it profits less from efficiency and optimization within the kernel. Multiple implementations exist, differing in programming language, architecture, and design choices. This paper presents an extension to the QUIC Interop Runner, a framework for testing interoperability of QUIC implementations. Our contribution enables reproducible QUIC benchmarks on dedicated hardware. We provide baseline results on 10G links, including multiple implementations, evaluate how OS features like buffer sizes and NIC offloading impact QUIC performance, and show which data rates can be achieved with QUIC compared to TCP. Our results show that QUIC performance varies widely between client and server implementations from 90 Mbit/s to 4900 Mbit/s. We show that the OS generally sets the default buffer size too small, which should be increased by at least an order of magnitude based on our findings. Furthermore, QUIC benefits less from NIC offloading and AES NI hardware acceleration while both features improve the goodput of TCP to around 8000 Mbit/s. Our framework can be applied to evaluate the effects of future improvements to the protocol or the OS. QUIC, High-rate links, Performance evaluation, Transport protocols ## I Introduction QUIC is a general-purpose protocol that combines transport layer functionality, encryption through TLS 1.3, and features from the application layer. Proposed and initially deployed by Google [1, 2], it was finally standardized by the Internet Engineering Task Force (IETF) in 2021 [3] after more than five years of discussion. Like TCP, it is connection-oriented, reliable, and provides flow and congestion control in its initial design. An extension for unreliable data transmission was added in RFC9221 [4]. One goal of QUIC is to improve web communication with HTTPS, which is currently using TCP/TLS as underlying protocols. It achieves this by accelerating connection build-up with faster handshakes, allowing only ciphers considered secure, and fixing the head-of-line blocking problem with HTTP/2. The transport layer handshake is directly combined with a TLS handshake allowing 0- and 1-RTT connection establishment. To comply with currently deployed network devices and mechanisms, QUIC relies on the established and widely supported transport protocol UDP. The usage of UDP allows to implement QUIC libraries in user space. Thus, QUIC can initially be deployed without requiring new infrastructure, and libraries can be easily implemented, updated, and shipped. On the one hand, this resulted in a variety of implementations based on different programming languages and paradigms [5]. On the other hand, previous efforts to optimize the performance of existing protocols have to be applied to new libraries, kernel optimizations cannot be used to the same degree, and the negative impact of encryption on performance has to be considered. 
Additionally, the heterogeneity among the QUIC implementations requires a consistent measurement environment for a reproducible evaluation. The performance differences between QUIC and TCP/TLS become more evident when the implementations are pushed to their limits on high-speed networks. In this work, we show the impact of these effects through a fine-grained analysis of different QUIC implementations. We extend the existing QUIC Interop Runner (QIR), a framework for interoperability testing of QUIC implementations [6, 7]. Using Docker containers, it primarily focuses on functional correctness. Thus, we extend it to allow for performance-oriented measurements on bare metal. Our contributions in this paper are: _(i)_ We develop and publish a **measurement framework based on the existing QIR framework**. The measurements are run on real hardware, and more metrics are collected allowing an in-depth analysis of performance bottlenecks. Besides our code, all configurations and the collected data are published along with this paper. With this, we want to foster reproducibility and allow other researchers and library maintainers to evaluate different QUIC libraries and test potential performance optimizations. _(ii)_ We perform a **baseline performance evaluation of different QUIC implementations** on a 10G link with the proposed framework. Tested implementations show a wide diversity in their configuration and behavior. _(iii)_ We **evaluate impacting factors on the performance** of QUIC-based data transmissions. We analyze the effect of, _e.g._, different buffer sizes, cryptography, and offloading technologies. This shows how goodput can be increased beyond the default setting. We explain relevant background regarding QUIC in Section II. In Section III, we introduce the implemented measurement framework and used configurations. We present our findings and evaluations in Section IV. Section V contains an outline of related work. Finally, we discuss our findings and conclude in Section VI. ## II Background This section introduces relevant background for various properties impacting QUIC performance, as shown in Section IV. We identify components relevant to the overall performance and point out the main differences compared to TCP/TLS. They include the always-on encryption in QUIC, the different acknowledgment (ACK) handling, the involved buffers in the network stack, and offloading functionalities supported by the Network Interface Card (NIC). ### _Encryption_ QUIC relies on TLS version 1.3, which reduces available cipher suites to only four compared to previous TLS versions. Only ciphers supporting authenticated encryption with additional data (AEAD) are allowed by the RFC [8]. AEAD encrypts the QUIC packet payload while authenticating both the payload and the unencrypted header. Supported ciphers either rely on Advanced Encryption Standard (AES), a block cipher with hardware acceleration on most modern CPUs, or ChaCha20, a stream cipher that is more efficient than AES without hardware acceleration1. Footnote 1: [https://datatracker.ietf.org/doc/html/rfc8439#appendix-B](https://datatracker.ietf.org/doc/html/rfc8439#appendix-B) Considering TLS in combination with TCP, data is encrypted into so-called TLS records, which are, in general, larger than individual TCP segments spanning over multiple packets. TCP handles packetization and the reliable, in-order transfer of the record before it is reassembled and decrypted at the receiver. With QUIC, each packet must be sent and encrypted individually. 
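To make the per-packet AEAD operation concrete, the following minimal sketch protects a single packet payload with AES-128-GCM while authenticating the header as associated data. It uses Python's cryptography package purely for illustration; the key, header layout, and packet number are placeholders, and real QUIC packet protection additionally derives keys and IVs from TLS 1.3 secrets and applies header protection, which are omitted here.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Illustrative key and packet number; in QUIC these come from TLS 1.3 secrets.
key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(key)
iv = os.urandom(12)
packet_number = 42

# The per-packet nonce is formed by XOR-ing the packet number into the static IV.
nonce = bytes(a ^ b for a, b in zip(iv, packet_number.to_bytes(12, "big")))

header = b"\x40" + packet_number.to_bytes(4, "big")   # simplified short header
payload = b"stream frame bytes"

# Encrypt the payload; the header is authenticated but transmitted in the clear.
protected = aead.encrypt(nonce, payload, header)

# The receiver decrypts each packet individually before stream reassembly.
assert aead.decrypt(nonce, protected, header) == payload
```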
On receiving a packet, it is first decrypted before data streams can be reordered. Loss detection and retransmissions are done on packet and not stream level, using the packet number similar to TCP's sequence number. Additionally, QUIC adds another layer of protection to the header data, called header protection. Fields of the QUIC header like the packet number are encrypted again after packet protection has been applied, leading to twice as many encryption and decryption operations per packet.

### _Acknowledgments_

Since QUIC provides reliability and stream orientation, it requires a similar ACK process as TCP. QUIC encapsulates QUIC packets into UDP datagrams, while the packets carry the actual payload as QUIC frames. Different frame types exist, such as _stream_, _ACK_, _padding_, or _crypto_. An _ACK_ frame contains so-called ACK ranges, acknowledging multiple packets and ranges simultaneously, potentially covering multiple missing packets. All received packets are acknowledged. However, _ACK_ frames are only sent after receiving an ACK-eliciting packet. A QUIC packet is called ACK-eliciting if it contains at least one frame other than _ACK_, _padding_, or _connection close_. Other than with TCP, QUIC sends ACKs encrypted, inducing additional cryptographic workload both on the sending and the receiving side. Therefore, QUIC must determine how frequently ACKs are sent. The _maximum ACK delay_ transport parameter defines the time the receiver can wait before sending an _ACK_ frame [3]. Determining the acknowledgment frequency is a trade-off and may affect the protocol's performance. Fewer ACKs can lead to blocked connections due to retransmissions and flow control, while more put additional load on the endpoints. Furthermore, sending and receiving ACKs with QUIC is more expensive than with TCP. With TCP, ACKs do not take any additional space since they are part of the TCP header and thus can be piggybacked on sent data. The kernel evaluates them before any cryptography is performed. We show the impact of acknowledgment processing and sending in Section IV.

### _Buffers_

Different buffers of the system can impact the performance of QUIC. Figure 1 shows a simplified schema of the buffers managed by the kernel which are relevant for the measurement scenarios of this paper. The NIC stores received frames to the RX ring buffer using Direct Memory Access (DMA) without kernel interrupts. Conversely, it reads frames from the TX buffer and sends them out. The kernel reads and writes to those ring buffers based on interrupts and parses or adds headers of layers 2 to 4, respectively. Afterward, the UDP socket writes the payload to the Receive Buffer (RCVBUF). However, if the ring buffer or RCVBUF is full, packets are dropped, and loss occurs from the perspective of the QUIC implementation. The default size of the receive buffer in the used Linux kernels is 208 KiB. In contrast, if the Send Buffer (SNDBUF), which resides between the implementation and the socket, is packed, the QUIC implementation will be blocked until space is available. Both scenarios reduce the goodput. We show the impact of different buffer sizes in Section IV-C.

Fig. 1: Simplified overview of available system buffers relevant for a QUIC implementation.
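To illustrate the socket buffers involved, the following minimal Python sketch requests a larger UDP receive buffer and reads back the size actually granted by the kernel; the requested value is a placeholder. Note that the effective size is capped by net.core.rmem_max unless that sysctl is raised as well.

```python
import socket

# Request a 4 MiB receive buffer for a UDP socket (placeholder value).
REQUESTED_RCVBUF = 4 * 1024 * 1024

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, REQUESTED_RCVBUF)

# Linux doubles the requested value for bookkeeping overhead and caps it at
# net.core.rmem_max, so the granted size can differ from the request.
granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"requested {REQUESTED_RCVBUF} bytes, kernel granted {granted} bytes")
```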
### _Offloading_

The Linux kernel offers different NIC offloading techniques, namely TCP Segmentation Offload (TSO), Generic Segmentation Offload (GSO), and Generic Receive Offload (GRO). While the first only affects TCP, the others also apply to UDP and thus QUIC. The main idea of all offloading techniques is to combine multiple segments and thus reduce the overhead of processing packets. Following the ISO/OSI model, data sent by TCP is supposed to be split into chunks smaller than or equal to the Maximum Segment Size (MSS) and then passed down to lower layers. Each layer adds its header and passes the data further. All of this is computed by the kernel. With hardware offloading, this segmentation task can be outsourced to the NIC. Offloading can only accelerate sending and receiving of packets, not the data transfer over the network [9]. So far, most offloading functions are optimized for TCP. QUIC profits less since packetization is done in user space. Utilizing offloading would require adjustments in the implementation to offload tasks like cryptography or segmentation [10]. In Section IV-E we compare the impact of different offloading functions on QUIC and TCP/TLS.

## III Measurement Framework

In this paper, we extend QIR, initially proposed by Seemann and Iyengar [6]. It is designed to test different QUIC implementations for interoperability and compliance with the QUIC standard, reporting results on a website [7]. Servers and clients from multiple implementations are tested against each other, performing various test cases like handshake, 0-RTT, and session resumption. The main focus is on the implementations' functional correctness and less on performance. Even though all followed the same RFC drafts during specification, interoperability between different servers and clients was only present for some features. Implementations run inside Docker containers, and the network in between is simulated using _ns-3_ (see footnote 2). Simple goodput measurements are available but only conducted on a 10 Mbit/s link. As of January 2023, nearly all tested libraries are close to the possible maximum goodput [7].

Footnote 2: [https://www.nsnam.org/](https://www.nsnam.org/)

QIR orchestrates the measurements by configuring the used client and server applications, creating required directories for certificates and files, and collecting log files and results afterward. In the sense of reproducibility, these features make QIR a powerful tool for QUIC (but not limited to) evaluations. We extended QIR to enable high-speed network measurements on dedicated hardware servers. Figure 2 shows an overview of our framework and its features. For this work, we use different physical servers for the client and server implementation to prevent them from impacting each other. We add additional configuration parameters, such that measurements can be specified with a single configuration file, and include version fingerprints both from the implementations and QIR in the output to foster reproducibility. Additionally, we extend the logging by including several tools which collect data from different components (_e.g._, the NIC, CPU, and sockets).

Fig. 2: Hardware architecture, measurement workflow, and analysis pipeline.

We follow these three requirements in the implementation of our measurement framework:

**Flexibility:** Any QUIC implementation can be used as long as it follows the required interface regarding how client/server are started and variables are passed to them. We provide example implementations for the ones given in Section III-D.

**Portability:** The framework can be deployed to any hardware infrastructure, as depicted in Figure 2, requiring a link between the client and server nodes and ssh access from the management node.
**Reproducibility:** Experiments are specified in configuration files and results contain needed information on how they were generated, _e.g._, the QIR and QUIC implementation version. Additionally, results contain a complete description of the used configuration, versions, and hardware. Our code, results, and analysis scripts are available: [https://github.com/tumi8/quic-10g-paper](https://github.com/tumi8/quic-10g-paper) This includes our extension of QIR, all measurement configurations, and results shown in the paper, analysis scripts to parse the results, and Jupyter notebooks for visualization [11]. ### _Workflow_ We followed a similar workflow as QIR. First, our framework configures the used hardware nodes, especially the used network interfaces for the measurements. Each measurement executes four different scripts that the implementation or configuration can adjust: setup environment, pre-scripts, run-scripts, post-scripts as shown in Figure 2. The setup script can be used to install local dependencies like Python environments. Pre- and post-scripts can be utilized to configure OS-level properties like the UDP buffer size and reset them after the measurement. Also, additional monitoring scripts can be started and stopped accordingly. Relevant configuration parameters for the client / server implementation, such as IP address, port, and X.509 certificate files, are passed via environment variables. Then the server application is started, waiting for incoming connections. Subsequently, the client application conducts a QUIC handshake and requests a file from the server via HTTP/3. Once the client terminates, the framework checks whether all files were transferred successfully and executes post-scripts to stop monitoring tools and to reset the environment to a clean state for subsequent measurements. Finally, the framework collects all logs. ### _Hardware Configuration_ We decided to run the client and server applications on different hardware nodes to prevent interference and to fully include the kernel and NIC (other than with Docker). The QIR runs on another host and functions as a management node orchestrating the measurement. The management node can access the measurement nodes via ssh, while the others are connected via a 10 Gbit/s link, as shown in Figure 2. Different network interfaces and links are used for management and measurements. If not stated differently, all measurements were executed on _AMD EPYC 7543 32-Core_ processors, 512 GB memory, and _Broadcom BCM57416 NICs_. As an operating system, we use _Debian Bullseye_ on _5.10.0-8-amd64_ for all measurements without additional configurations. We rely on live-boot images with RAM disks, drastically increasing I/O speed to focus on the network aspect. ### _Collected Data and Analysis_ For each measurement, the framework computes the good-put as the size of the transmitted file divided by the time it takes to transmit it. To reduce this impact of transmission rate fluctuations (_e.g._, caused by congestion control), a large file size (8 GB) is chosen to enforce a connection duration of several seconds. The following monitoring tools are directly integrated into our framework to collect more data from within the implementations or from the hardware hosts. **tcpdump** is used to collect packet traces which can be decrypted with the exported session keys. Additionally, implementations can enable **qlog** (a schema for logging internal QUIC events [12]) and save results to a directory set up and collected by the framework. 
However, both result in extensive logging that heavily impacts performance and are only considered for debugging. **ifstat**, **ethtool**, **netstat**, and **pidstat** can be started to collect additional metrics from the NIC and CPU, such as the number of dropped packets or context switches. Finally, **perf** can be used for in-detail client and server application profiling. This allows to analyze how much CPU time is consumed for tasks like encryption, sending, receiving, and parsing packets. When enabled, the output of the used tools is exported along with the measurement results. We provide a parsing script for all generated data that handles all different output formats of the used tools. By this, it is possible to analyze the change of goodput with different configurations and get a better understanding of why this is the case. It is possible to detect client and server-side bottlenecks as root causes for performance limitations. We consider this important since we observed multiple different QUIC components limiting the performance depending on the configuration. The analysis pipeline parses available result files and outputs the final results as Python _pandas_ DataFrame. ### _Implementations_ Since there is not one widely used implementation for QUIC (such as Linux for TCP), we evaluate multiple implementations written in different languages. The implemented framework currently includes six QUIC libraries namely: aioquic [13], quic-go [14], mvfst by Facebook [15], picoquic by Private Octopus [16], quiche by Cloudflare [17], and LSQUIC by LiteSpeed Tech [18]. For each implementation, we either used provided examples for the client and server or made minor adjustments to be compatible with QIR. In this paper, we mainly focus on LSQUIC and quiche since they show the highest goodput rates, as shown in Section IV-A. They are implemented in Rust and C, respectively, and are already widely deployed [19]. Furthermore, both libraries rely on BoringSSL for TLS. Thus, cryptographic operations and TLS primitives are comparable, and we can focus on QUIC specifics in the following. We compare them to the remaining libraries in Section IV-A to put their initial performance into perspective and further support our selection. We base our initial client and server on their example implementation and adapt only where necessary. Additionally, we compare QUIC with a TCP/TLS stack consisting of a server using nginx (version 1.18.0) and a client using wget. ## IV Evaluation We apply the measurement framework to evaluate the performance of different QUIC implementations. For every measurement, the client downloads an 8 GB file via HTTP/3. Every measurement is repeated 50 times to assure repeatability. The boxplots show the median as a horizontal line, the mean as icons such as, and the quartiles \(Q_{1}\) and \(Q_{3}\). All implementations come with different available Congestion Control (CC) algorithms. For LSQUIC, we observed unintended behavior with _BBR_, resulting in several retransmissions and only between 30 % to 70 % of the goodput of _Cubic_. We assume this is related to a known issue with _BBR_ in TCP [20]. To prevent significant impact from different algorithms, we set each implementation to _Cubic_ if available or _Reno_ otherwise. ### _Baseline Measurements_ QIR is designed to test operability between different implementations. As a first baseline, we evaluate the client and server of the implementations listed in Section III-D. 
Figure 3 shows the goodput for each client-server pair for all implementations. It clearly shows that the goodput widely differs (52 Mbit/s to 3004 Mbit/s) between the implementations. The Python library aioquic shows the lowest performance, as expected since it is the only interpreted and not compiled implementation. It can be considered an implementation suitable for functional evaluation or a research implementation. When it comes to goodput, it can be neglected. The two best-performing implementations are LSQUIC and quiche. Regarding picoquic, we are not able to reach similar goodput Fig. 2: Hardware architecture, measurement workflow, and analysis pipeline. as reported by Tyunyayev et al. [21] and were not able to easily reproduce the required optimizations. Therefore, we stick with LSQUIC-LSQUIC and quiche-quiche pairs for the following evaluation since they achieved the highest goodput. Like interoperability of QUIC features, goodput performance widely varies among client-server pairs. We conjecture that different operation modes, QUIC parameters, and efficiency of the used components result in these fluctuations. The LSQUIC server achieves the highest rate with the LSQUIC client while being less performant with clients of other implementations. Overall, the quiche server achieves decent goodput with most clients. The picoquic-mvfst measurements achieve peculiar low rates of only 52 Mbit/s. _Key take-away: The goodput of different QUIC libraries varies drastically, not only between libraries in general but also between clients and servers. LSQUIC and quiche perform best in the baseline scenario. Only in the case of LSQUIC as server and quiche as client the goodput drops drastically._ ### _Performance Profiling_ We use **perf** to analyze LSQUIC and quiche further and see the CPU time consumption for each component. During each measurement, perf collects samples for the complete system. We categorize the samples for both implementations based on their function names and a comparison to the source code. Figure 4 shows the results for the respective servers. _Packet I/O_ covers sending and receiving messages, especially the interplay with the UDP socket and kernel functionality (sendmsg and recvmsg). _I/O_ covers reading and writing the transmitted file by server and client. _Crypto_ describes all TLS-related en-/ decryption tasks. _Connection Management_ covers packet and acknowledgment processing and managing connection states and streams. If no function name can be collected by **perf** or if we cannot map it to a category, we use _Uncategorized_. We manually created this mapping by assigning each function occurring in the perf dump to a category. While we could not assign all functions, a clear trend is visible. The strict inclusion of TLS is often criticized due to the expectation of a high overhead [22]. However, _Packet I/O_ takes up a majority of resources for both libraries. Passing packets from the libraries in user space to the kernel, the NIC, and vice versa currently impacts the performance most. Interestingly, we see differences in performance in the used programming language or architecture and between the two client and server combinations of different implementations. Resulting in the highest difference, the goodput with a quiche server and LSQUIC client is more than twice as high as vice versa. During these measurements, both the client and server are only at around 70 % CPU usage, no loss is visible, and no additional retransmission occurs. 
In this case, the bottleneck seems to be the interaction of the flow control mechanisms of both libraries in this client/server constellation. Besides general interoperability tests, our measurement framework can be used in the future to identify these scenarios and to improve QUIC libraries, their flow and congestion control mechanisms, and their interaction. _Key take-away: Our findings show that the most expensive task for QUIC is Packet I/O. While the cost of crypto operations is visible, it is not the main bottleneck here. Overall, our results comply with related work [10], and we share our mappings so that they can be refined and extended in the future._ ### _Buffers_ As explained in Section II-C, different system buffers affect a QUIC connection. The essential buffers are the ring buffers in-between the NIC and network driver, and the send and receive buffers from the socket used by the library, as shown in Figure 1. To analyze the impact of these buffers, the measurement framework captures network driver statistics before and after each data transmission using **ethtool** and connection statistics using **netstat**. It provides the number of sent and received packets and dropped packets in each of the mentioned buffers. The baseline measurement shows that the ring buffers drop no data. However, packets are dropped by the client receive buffer since both client implementations retrieve packets slower than they arrive. As a result, retransmissions are required by the server, and congestion control impacts the transmission. To analyze the impact on the goodput, we Fig. 4: Distribution of server perf samples across different categories. The results are shown in relation to the total number of samples collected by perf (including idle states). Fig. 3: Baseline goodput results for different QUIC libraries tested against each other on a 10 Gbit/s link. In comparison TCP/TLS reaches 8010 Mbit/s in the baseline scenario. increase the default receive buffer (208 KiB). The effect of this can be seen in Figure 5. It shows the goodput of LSQUIC and quiche pairs for different buffer sizes as multiple of the default buffer on the x-axis. The goodput for both libraries improves with increasing buffer sizes and stabilizes after a 16-fold increase. Using LSQUIC with the default buffer size results in 11.4 k dropped packets and a loss rate of 0.2 %. With the largest tested buffer size, LSQUIC reaches a goodput of 3250 Mbit/s, an 8.7 % increase compared to the baseline shown in Figure 3. Besides the reduced loss and retransmissions, the number of ACKs sent by the LSQUIC client is drastically reduced from 180 k to 46 k (26 %). This development for all tested buffer sizes is shown in Figure 6. The reduction of sent ACK packets also results in reduced CPU usage by the client while increasing the server CPU usage shifting the bottleneck further towards the sending of QUIC packets. In all scenarios, LSQUIC sends fewer ACKs than reported by Marx et al. [23]. Regarding quiche, only 7 k packets are dropped with the default buffer size, hence a 0.1 % loss rate. Larger buffers decrease the loss by one order of magnitude and increase the goodput but only by 3 % to 2530 Mbit/s. In contrast to LSQUIC, no difference regarding sent ACKs can be seen (see Figure 6). Reducing the receive buffer further impacts both libraries, especially quiche. The goodput drops by 40 % and a more significant deviation is visible. 
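At the socket level, the knob behind these experiments is the buffer size the receiving application requests for its UDP socket. The following minimal sketch shows how such a request looks on Linux; the values are illustrative, and this is not code taken from LSQUIC or quiche:

```python
# Minimal sketch (illustrative, not code from LSQUIC or quiche): request a larger
# UDP receive buffer. On Linux, the granted size is capped by net.core.rmem_max,
# so the system-wide limit usually has to be raised as well, e.g.:
#   sysctl -w net.core.rmem_max=8388608
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

default = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
requested = 16 * 208 * 1024  # 16-fold increase over the ~208 KiB default

sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, requested)
granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)

# The kernel reports roughly twice the requested value to account for bookkeeping
# overhead; if 'granted' stays near the default, net.core.rmem_max is the limiter.
print(f"default: {default} B, requested: {requested} B, granted: {granted} B")
```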
We repeated the baseline measurements with larger UDP receive buffers for which the results can be seen in Figure 10. _Key take-away: The default UDP receive buffer size has a visible impact on QUIC-based data transmission. We recommend a general increase of the default buffer by at least an order of magnitude to support widespread deployment of QUIC._ ### _Crypto_ As explained in Section II-A, QUIC supports TLS using AES or ChaCha20. Generally, operations to encrypt and decrypt are computationally expensive but are widely optimized in hardware and software today. QUIC and TLS 1.3 only support AEAD algorithms [24], which can encrypt and sign data in one single pass. For AES, only three ciphers are available: AEAD_AES_128_GCM, AEAD_AES_128_CCM, and AEAD_AES_256_GCM. From them, only GCM ciphers are required or should be implemented [24], the CCM cipher is rarely used. Usually, the 128 bit GCM cipher is preferred [19]. For the following evaluation, we use the default cipher. ChaCha20 always uses the Poly1305 authenticator to compute message authentication codes. We evaluated the QUIC implementations in three different scenarios. Two of them use AES either without or with the AES New Instruction Set that offers full hardware acceleration [25] (which is the default in our test environment), and one scenario uses ChaCha20. For the following, we refer to the hardware-accelerated AES version as AES-NI. LSQUIC and quiche automatically fall back to ChaCha20 whenever hardware acceleration is unavailable. To evaluate AES without hardware acceleration, we patched the used BoringSSL library accordingly. As seen in Figure 7, ChaCha20 achieves the same throughput as AES-NI for both QUIC libraries. While removing hardware acceleration decreases the goodput with AES by 11 % for LSQUIC and even by 51 % for quiche. This difference between the implementations results from the quiche client sending more than 16 times as many ACKs than LSQUIC. While the server is the bottleneck in all measurements, more packets sent by the client add additional load to the server for receiving and decrypting packets (see Section IV-C). This decryption is more expensive without hardware acceleration. Thus, crypto has a higher impact on the overall goodput. Also, we observe that the client CPU utilization of quiche is 5 % lower with ChaCha20 than with AES. With TCP/TLS, the effect changes. While ChaCha20 reaches higher goodput rates than AES without hardware acceleration, AES-NI outperforms ChaCha20 with a three times higher goodput, almost reaching the link rate. Additionally, for TCP/TLS we observed that the client CPU bottlenecked the connection for AES-NI and ChaCha20, while the server was the bottleneck without hardware acceleration. _Key take-away: While selecting an appropriate cipher suite drastically impacts TCP/TLS, QUIC reaches similar goodput for AES-NI and the ChaCha20-based cipher. The main bottleneck is still packet I/O and the acceleration effect is further reduced due to smaller TLS records and encrypted ACKs._ Fig. 5: QUIC goodput with different UDP receive buffer sizes. X-axis values are multiples of the default buffer size of 208 KiB. Fig. 6: Number of sent ACKs by the client. X-axis values are multiples of the default buffer size of 208 KiB. ### _Segmentation Offloading_ To reduce the impact of packet I/O, a QUIC library can combine multiple packets and rely on offloading. 
Similarly, an implementation can use _sendmmsg3_ and _recvmmsg4_ to reduce system calls and move multiple packets to/from the kernel space within a single buffer. Footnote 3: [https://man7.org/linux/man-pages/man2/sendmmsg.2.html](https://man7.org/linux/man-pages/man2/sendmmsg.2.html) Footnote 4: [https://man7.org/linux/man-pages/man2/recvmmsg.2.html](https://man7.org/linux/man-pages/man2/recvmmsg.2.html) By default, Generic Segmentation Offload (GSO), Generic Receive Offload (GRO), and TCP Segmentation Offload (TSO) are activated in Linux. We analyze which offload impacts QUIC and TCP by incrementally activating different offloading techniques. Figure 8 indicates that all the offloading techniques hardly influence QUIC goodput, while TCP largely profits from TSO. The QUIC goodput does not change when GSO/GRO is enabled. However, the client CPU utilization increases from 82 % to 92 % for LSQUIC and from 77 % to 84 % for quiche. Turning off all offloads does not decrease goodput but decreases the client's CPU utilization and also power consumption. While LSQUIC implements functionality to support _sendmmsg_ and _recvmmsg_, we were not able to use it as of January 2023. Data transmission with the functionality activated randomly terminated with exceptions. We expect improved support for those features also by other implementations and suggest reevaluating libraries with our framework in the future. _Key take-away:_ _The tested QUIC implementations do not profit from any segmentation offloading techniques as of January 2023. Compared with TCP, there is much room for improvement to apply the same benefits to UDP and QUIC. Since QUIC encrypts every packet individually, including parts of the header, techniques similar to TSO cannot be applied, which would require that headers can be generated in the offloading function. However, with adjustments to the protocol and the offloading functions, speedups can be achieved for segmentation and crypto offloading [10]._ ### _Hardware_ Even though an implementation in user space comes with more flexibility, tasks that are done by the kernel for TCP become more expensive with QUIC. More CPU cycles are required per packet. We repeated the final goodput measurements with increased buffers on five host pairs with different generations of AMD and Intel CPUs listed in Table I. Client and server hosts are equipped with the same CPU for each measurement. Additionally, we optimized LSQUIC and quiche further by adding compile flags to optimize for the used architecture. The optimized implementations are referred to as LSQUIC* and quiche*. The results in Figure 9 show that quantizing QUIC throughput highly depends on the used CPU and architecture. Both QUIC and TCP profit from newer CPUs with more modern instruction sets. Compile flags improved the LSQUIC goodput by 14 % to 20 %, while they hardly affected quiche. Also, QUIC and TCP perform differently among the different architectures. For example, when comparing the two newest CPUs AMD-2 and Intel-3, QUIC performs 27 % better on the Intel chip, almost reaching 5 Gbit/s. On the other side, TCP goodput decreases by 3 % compared to the CPU in the AMD-2 host pair. This shows that QUIC (in the user space) and TCP (in the kernel) profit differently from CPU architectures and instruction sets. _Key take-away:_ _The used hardware is highly relevant for evaluating QUIC performance. 
Newer CPUs lead to a higher goodput for both QUIC implementations even though their frequency is slightly lower, e.g., comparing Intel-2 and Intel-3. While not feasible for all research groups, we suggest attempting an evaluation of potential improvements to QUIC \begin{table} \begin{tabular}{l l l r} \hline \hline & CPU & Year & max GHz \\ \hline AMD-1 & AMD EPYC 7551P & 2017 & 2.0 \\ AMD-2 & AMD EPYC 7543 & 2021 & 3.7 \\ Intel-1 & Intel Xeon CPU D-1518 & 2015 & 2.2 \\ Intel-2 & Intel Xeon CPU E5-1650 & 2012 & 3.8 \\ Intel-3 & Intel Xeon Gold 6312U & 2021 & 3.6 \\ \hline \hline \end{tabular} \end{table} TABLE I: Different CPUs used for the measurements. The default for measurements was AMD-2 if not noted otherwise. Fig. 8: Impact of hardware offloading on QUIC and TCP/TLS goodput. By default, GSO, GRO, and TSO are activated. Fig. 7: Impact of different TLS ciphers on QUIC and TCP/TLS goodput. During the AES measurement without hardware acceleration, the implementations were forced to use AES by deactivating ChaCha20 in the respective TLS library to prevent the fallback. libraries on different CPU generations and frequencies in the future to better quantify their impact._ ## V Related Work The interoperability of different QUIC implementations has been tested by research throughout the protocol specification. Seemann and Iyengar [6], Piraux et al. [26], and Marx et al. [23] developed different test scenarios and analyzed a variety of QUIC implementations. Furthermore, Ruth et al. [27], Piraux et al. [26], and Zirngibl et al. [19] have already shown a wide adoption of QUIC throughout the protocol specification and shortly before the release of RFC9000 [3]. They show that multiple large corporations are involved in the deployment of QUIC, various implementations are used, and different configurations can be seen (_e.g._, transport parameters). However, they mainly focused on functionality analyses, interoperability, and widespread deployments but did not focus on the performance of libraries. Different related works analyzed the performance of QUIC implementations [28, 29, 30, 21, 31, 32, 10]. However, they either analyzed QUIC in early draft stages, _e.g._, Megyesi et al. [30] in 2016, or mainly focused on scenarios covering small web object downloads across multiple streams, _e.g._, by Sander et al. [29] and Wolsing et al. [32]. Similar to our work, Yang et al. [10] orchestrated QUIC implementations to download a file. With a single stream, they reached a throughput between 325 Mbit/s and even 4121 Mbit/s for Quant. However, they omit HTTP/3 and only download a file of size 50 MB. Therefore, the impact of the handshake and cryptographic setup is higher, and other effects might be missed. Endres et al. [33] adapted the QIR to evaluate QUIC implementations similar to the starting point of our framework. However, they still relied on the virtualization using docker and network emulation using ns-3 and focused on a different scenario, namely satellite links. Tyunyayev et al. [21] combined picoquic with the Data Plane Development Kit (DPDK) to bypass the kernel. They compared their implementation to other QUIC stacks and increased the throughput by a factor of three. They argue that the speedup is primarily due to reduced I/O but do not investigate other factors in more detail. ## VI Conclusion In this work, we analyzed the performance of different QUIC implementations on high-rate links and shed light on various influence factors. 
We systematically created a measurement framework based on the idea of the QUIC Interop Runner. It allows automating QUIC goodput measurements between two dedicated servers. It can use different QUIC implementations, automate the server configuration, collect various statistics, _e.g._, from the network device and CPU statistics, and provide means to collect, transform, and evaluate results. We applied the presented framework to evaluate the goodput of mainly LSQUIC and quiche on 10 Gbit/s links and analyzed what limits the performance. A key finding in this work is that the UDP receive buffer is too small by default, which leads to packets getting dropped on the receiver side. This results in retransmissions and a reduced goodput. We show that increasing the buffer by at least an order of magnitude is necessary to reduce buffer limits in high link rate scenarios. We observed several differences in the behavior of LSQUIC and quiche, such as differing default parameters, _e.g._, the UDP packet size or a diverse approach regarding the acknowledgment sending rate. When comparing different TLS 1.3 ciphers, QUIC almost reaches the same goodput with ChaCha20 as with the hardware-accelerated AES ciphers, which behaves differently with TCP. We could not measure any performance increase with the support of segmentation offloading features of the operating system. Finally, we show that evaluating QUIC highly depends on the used CPU. By applying various optimizations, we increased the goodput of LSQUIC by more than 25 % and achieved up to 5 Gbit/s on Intel CPUs. Even though QUIC has many similarities to TCP and the specification took several years, our work shows that many details already analyzed and optimized for TCP are still limiting QUIC. Furthermore, the variety of implementations complicates a universal evaluation and yields further challenges to improve performance in the interplay of libraries, _e.g._, the drastically reduced goodput using an LSQUIC server and quiche client, as shown in Section IV-A. To allow for an informed and detailed evaluation of QUIC implementations in the future, we publish the framework code, the analysis scripts, and the results presented in the paper [11]. The measurement framework can be applied to evaluate future improvements in QUIC implementations or operating systems. ## Acknowledgment The European Union's Horizon 2020 research and innovation programme funded this work under grant agreements No 101008468 and 101079774. Additionally, we received funding by the Bavarian Ministry of Economic Affairs, Regional Development and Energy as part of the project 6G Future Lab Bavaria. This work is partially funded by Germany Federal Ministry of Education and Research (BMBF) under the projects 6G-life (16KISK001K) and 6G-ANNA (16KISK107). Fig. 9: Goodput as measured on different hardware architectures listed in Table I. LSQUIC* and quiche* are built with compile flags to optimize for the used architecture.
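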
QUIC is a new protocol standardized in 2021 that aims to improve on the TCP/TLS stack. Its primary purpose is to speed up web traffic over HTTP, but it is also used in other areas such as tunneling. Based on UDP, it provides reliable, in-order delivery, flow and congestion control, stream-based multiplexing, and always-on encryption using TLS 1.3. Unlike TCP, QUIC implements these features in user space and requires only UDP-based interaction with the kernel. Running in user space adds flexibility, but it does not benefit from the efficiency and optimizations available inside the kernel. Multiple implementations exist, differing in programming language, architecture, and design choices. This paper builds on the QUIC Interop Runner, a framework for interoperability testing of QUIC implementations.
2307.16639
Silicon Photonics Mode-Selective Phase Shifter
A novel mode-selective thermo-optic phase shifter (MS-TOPS) enabled by subwavelength grating (SWG) structures is proposed and experimentally demonstrated on a 220 nm waveguide thick silicon photonics chip for the first two quasi-transverse electric modes (TE0, TE1). Mode-selective relative phase manipulation of modes unlocks several processing tasks in mode division multiplexing systems. This integrated solution provides a direct phase manipulation of modes without converting them to their fundamental modes. A Mach-Zehnder interferometer is deployed as a test structure incorporating the proposed MS-TOPS in one arm and a mode-insensitive thermo-optic phase shifter (MI-TOPS) in another. The effect of the SWG duty cycle ratio is investigated by both numerical simulations and experimental measurements. A mode-selectivity of 1.44 is experimentally demonstrated. In other words, the thermo-optic coefficient of TE0 is 44% larger than the one for TE1. The phase shifter's insertion loss is at most 2.5 dB and a worst-case crosstalk of -13.1 dB over a 40 nm wavelength range from 1520 to 1560 nm. A cascaded configuration of the proposed MS-TOPS and an MI-TOPS provides sufficient degrees of freedom to manipulate the relative phase of each mode independently. Potential numerous applications of such devices include optical switching, multimode quantum optical processors, and scaling-up conventional optical processors with a mode selective building block.
Seyed Mohammad Reza Safaee, Kaveh Rahbardar Mojaver, Guowu Zhang, Odile Liboiron-Ladouceur
2023-07-31T13:21:31
http://arxiv.org/abs/2307.16639v1
# Silicon Photonics Mode-Selective Phase Shifter ###### Abstract A novel mode-selective thermo-optic phase shifter (MS-TOPS) enabled by subwavelength grating (SWG) structures is proposed and experimentally demonstrated on a 220 nm waveguide thick silicon photonics chip for the first two quasi-transverse electric modes (TE0, TE1). Mode-selective relative phase manipulation of modes unlocks several processing tasks in mode division multiplexing systems. This integrated solution provides a direct phase manipulation of modes without converting them to their fundamental modes. A Mach-Zehnder interferometer is deployed as a test structure incorporating the proposed MS-TOPS in one arm and a mode-insensitive thermo-optic phase shifter (MI-TOPS) in another. The effect of the SWG duty cycle ratio is investigated by both numerical simulations and experimental measurements. A mode-selectivity of 1.44 is experimentally demonstrated. In other words, the thermo-optic coefficient of TE0 is 44% larger than the one for TE1. The phase shifter's insertion loss is at most 2.5 dB and a worst-case crosstalk of -13.1 dB over a 40 nm wavelength range from 1520 to 1560 nm. A cascaded configuration of the proposed MS-TOPS and an MI-TOPS provides sufficient degrees of freedom to manipulate the relative phase of each mode independently. Potential numerous applications of such devices include optical switching, multimode quantum optical processors, and scaling-up conventional optical processors with a mode-selective building block. Integrated optics, Periodic structures, Silicon photonics, Thermooptical devices. ## I Introduction Extensive growth of silicon photonics in the past decade is not only owing to its compatibility with the widespread CMOS (complementary metal-oxide-semiconductor) fabrication technology but also on account of its potential for enabling high-speed data transfer, reduced power consumption, and improved performance compared to its counterpart solutions [1]. Deployment of the silicon-on-insulator (SOI) fabrication process provides a relatively large refractive index difference between the core (silicon, Si), and cladding (silicon dioxide, SiO\({}_{2}\)) materials, which can confine different orthogonal eigenmodes inside the SOI waveguide. Historically, the dominant approach in silicon photonics devices was to operate in single-mode, below the cut-off condition of higher-order modes, due to significant levels of inter-modal crosstalk, caused by the underdeveloped components required for multimode operation [2]. Lately, the advantageous performance of the mode division multiplexing (MDM) scheme in terms of further expanding the optical link capacity in an energy efficient approach has motivated significant research towards developing on-chip MDM components [3]-[5]. Conventional on-chip MDM architectures do not predominantly provide individual access to the higher-order modes that are intended to be processed (e.g., switching, modulation). Instead, the prevailing approach typically involves encoding of higher modes into their fundamental mode, followed by the execution of the desired processing function using single-mode components, and reconverting them to their primary higher-order mode profile [6, 7]. Although this approach provides a straightforward solution for developing numerous MDM applications, it requires a bulky (de)multiplexing photonic circuitry that inevitably deteriorates insertion loss, power consumption, and crosstalk. 
In contrast, state-of-the-art MDM research involves efforts to provide building blocks that enable direct manipulation of modes such as the proposed approaches for space-based and intra-mode switching applications [8]-[11]. In contrast to the limited applicability of the previous research efforts, individual phase manipulation of each mode is a highly versatile optical functionality in MDM systems. This is the cornerstone of realizing almost every desired processing task from switching, and filtering to photonic-based vector/matrix-based mathematical multiplication. For example, by directly controlling the relative phase between different modes without encoding them to their fundamental mode distribution, the need for mode (de)multiplexing is eliminated, which can support a more efficient design in terms of footprint, crosstalk, and potentially power consumption. In this work, direct phase manipulation of modes is demonstrated for the first time. As a proof of concept, the phase of the first and second order quasi transverse electric (TE) mode profiles (TE0, TE1) are manipulated by the proposed mode-selective thermo-optic phase shifter (MS-TOPS). Mode-selectivity as a design figure-of-merit (FoM) is realized by deploying subwavelength grating (SWG) structures that provide a method to engineer the waveguide refractive index for different modes. We verify the effect of the SWG geometric parameters both analytically and using two-dimensional (2D) numerical simulations. We experimentally validated the working principle of the MS-TOPS by investigating the effect of changing the SWG duty cycle ratio. We introduce an MZI test structure incorporating the MS-TOPS in one arm, and a mode-insensitive TOPS in another as an additional degree of freedom to manipulate both modes independently. We provide a methodology to experimentally extract the designed FoM and compare it with the three-dimensional (3D) simulation results. Finally, we briefly review the potential applications of the proposed MS-TOPS as a highly versatile and enabling building block in MDM systems. ## II Design Theory and Analysis ### Thermo-Optic Phase Shifter (TOPS) Integrated tunable phase shifters in the SOI platform operate based on altering the effective refractive index. This is commonly realized by making use of thermo-optic effect, electro-optic effect (also known as free-carrier plasma dispersion effect), or microelectromechanical system (MEMS) actuation. Despite the lower power consumption of the latter approach, MEMS fabrication is non-standard in today's silicon photonics technology. In contrast to TOPS, electro-optic phase shifters offer a notably larger modulation bandwidth at the expense of higher insertion loss, which separates the target applications of each methodology. In particular, TOPS plays a significant role in various reconfigurable silicon photonics applications [12] such as optical neural networks [13], and mode-division multiplexing systems [14]. TOPS working principle relies on phase of light alteration that is induced by a heater-imposed temperature change of \(\Delta T\) (i.e., thermo-optic effect). For a heated waveguide with length \(L\), the temperature-dependent phase change at \(\lambda_{\text{u}}\) free-space wavelength is [15]: \[\Delta\Phi=\frac{2\pi L}{\lambda_{\text{u}}}\frac{dn_{\text{eff}}}{dT}\Delta T \tag{1}\] where \(dn_{\text{eff}}/dT\) is the thermo-optic coefficient of the waveguide. 
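To make Eq. (1) concrete, the short sketch below evaluates the temperature rise needed for a \(\pi\) phase shift; the heater length and thermo-optic coefficient are assumed example values, not the parameters of the fabricated device:

```python
# Numerical illustration of Eq. (1): temperature rise needed for a pi phase shift.
# The heated length and thermo-optic coefficient below are assumed example values.
import math

wavelength = 1.55e-6   # free-space wavelength (m)
length = 200e-6        # heated waveguide length (m), assumed
dneff_dT = 1.8e-4      # waveguide thermo-optic coefficient (1/K), assumed


def phase_shift(delta_T: float) -> float:
    """Eq. (1): thermally induced phase change in radians."""
    return 2 * math.pi * length / wavelength * dneff_dT * delta_T


# Temperature rise required for a pi (180 degree) phase shift.
delta_T_pi = math.pi / (2 * math.pi * length / wavelength * dneff_dT)
print(f"Delta T for a pi shift: {delta_T_pi:.1f} K")
print(f"Check: phase at that Delta T = {phase_shift(delta_T_pi) / math.pi:.2f} pi")
```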
The thermo-optic coefficients of silicon and silicon dioxide are approximately 1.86\(\times\)10\({}^{-4}\) K\({}^{-1}\) and 0.95\(\times\)10\({}^{-5}\) K\({}^{-1}\), respectively, at T = 300 K and \(\lambda_{\text{u}}\) = 1550 nm [16], i.e., more than an order of magnitude difference between the waveguide core and cladding materials. This difference implies that the waveguide geometry, relative to the guided mode profile, affects the thermo-optic coefficient of the waveguide. This phenomenon provides an opportunity to engineer the waveguide thermo-optic coefficient so that it differs between modes, paving the way toward a mode-selective TOPS. We previously investigated the effect of the TOPS waveguide width on the thermo-optic coefficient of the two fundamental TE modes using both numerical simulation tools and an experimental validation methodology [17]. For a 220 nm thick and 0.96 \(\upmu\)m wide waveguide, the design yields about nine percent larger \(dn_{\text{eff}}/dT\) for TE1 than for TE0 (1.96 versus 1.80, respectively), demonstrating a proof-of-concept MS-TOPS, although not sufficiently efficient for practical applications. On the other hand, a waveguide width of 4 \(\upmu\)m provides a difference of less than 1% in thermo-optic coefficient between the two fundamental TE modes, which makes it a desirable waveguide width for building a mode-insensitive TOPS (MI-TOPS) [18].

As a design FoM, mode-selectivity is defined as the ratio of the thermo-optic coefficients of the first and second fundamental TE modes (TE0 and TE1):

\[\zeta=\frac{dn_{\text{eff}}(TE0)/dT}{dn_{\text{eff}}(TE1)/dT}=\frac{P_{x,TE1}}{P_{x,TE0}} \tag{2}\]

where \(P_{x,TE1}\) and \(P_{x,TE0}\) are the heating powers required to produce a thermo-optic phase shift of 180\({}^{\circ}\) (\(\pi\)) for the respective modes. The latter fraction in the above equation follows from the linear relation between heating power and temperature change in Eq. (1), _i.e._, \(P_{x}\propto(dn_{\text{eff}}/dT)^{-1}\). This relation suggests a straightforward method to extract the MS-TOPS mode-selectivity solely from a transmission measurement of an interferometric structure while sweeping the applied heating power.

Figure 1: (a) The schematic diagram of the proposed SWG-based TOPS; (b) Modeling the corrugated sections as a homogeneous medium with an engineered refractive index represented by Eq. 3 for the 2D simulation objective; (c) Simulated (2D) normalized electric field distribution for both TE0 and TE1 modes in the xy plane (perpendicular to the propagation axis).

### _Subwavelength grating-based TOPS_

Silicon subwavelength grating structures (also known as SWG metamaterials) have an extensive range of applications in integrated photonic devices, including fiber-chip surface and edge grating couplers [19, 20], broadband contra-directional couplers [21], mode-selective add-drop couplers [22], and several others [23]. SWGs enable engineering the refractive index of strip waveguides by behaving as a homogeneous medium with a range of possible effective refractive indices [24]. This property of SWGs is exploited here to design a mode-selective TOPS, as demonstrated in Fig. 1(a). In this work, an SWG structure with a periodicity (pitch) of \(\Lambda\) is considered on both sides of a conventional single-mode strip waveguide (500 nm wide and 220 nm thick). The SWG periodicity is well below the Bragg condition, remaining in the sub-wavelength regime [24].
The duty cycle ratio of the grating, \(f=L_{\mathrm{s}}/\Lambda\), is the volume fraction of silicon, where \(L_{\mathrm{s}}\) is the length of the unetched silicon in the SWG structure. Based on the effective medium theory, the side corrugations can be treated as a homogeneous medium, as shown in Fig. 1(b). For a TE-polarized beam with almost no electric field component in the propagation direction, the engineered refractive index of the corrugated structure is well approximated by [24]:

\[n_{\mathrm{SWG}}=\sqrt{f\cdot n_{\mathrm{Si}}^{2}+(1-f)\cdot n_{\mathrm{SiO_{2}}}^{2}}\ \, \tag{3}\]

where \(n_{\mathrm{Si}}=3.46\) and \(n_{\mathrm{SiO_{2}}}=1.45\) are the silicon and silicon dioxide refractive indices at a free-space wavelength of 1550 nm, respectively. For an arbitrary duty ratio \(f\) of 0.4, the SWG refractive index reaches a value of 2.46, which implies a potentially lower thermo-optic coefficient for the SWG than for the waveguide core. As supported by the electric field modal distributions in Fig. 1(c), obtained with the Ansys Lumerical photonic simulation tool, the TE0 main lobe propagates through the un-corrugated silicon waveguide core, whereas the TE1 lobes mostly propagate through the sides of the waveguide where the SWG structure exists. Given this propagation behavior, when heat is applied through the deposited titanium-tungsten alloy (TiW) metal heater shown in Fig. 1(a), the lower thermo-optic coefficient of the SWG translates into a smaller imposed phase shift on the TE1 mode than on the TE0 mode, enabling a mode-selective TOPS.

As a proof-of-concept design, the SWG is 450 nm wide (W\({}_{\mathrm{SWG}}=450\) nm). The metal heater is deposited 2.2 \(\upmu\)m above the waveguide and is 3 \(\upmu\)m wide to ensure uniform heating. The SWG width should be optimized according to the application requirements. Minimizing the SWG width improves the heater energy efficiency at the cost of a lower mode-selectivity and an increased TE1 insertion loss due to a relative increase in scattering from sidewall roughness. On the other hand, an increased SWG width provides a greater TE1 confinement in the SWG region rather than in the waveguide core, leading to a larger \(dn_{\text{eff}}/dT\) contrast between the two modes. It could also lower the contribution of sidewall-roughness scattering to the TE1 insertion loss. Minimizing the SWG duty ratio is favored because a greater \(f\) reduces the TOPS \(dn_{\text{eff}}/dT\) contrast between the two modes; hence, reaching the maximum mode-selectivity (_i.e._, the lowest \(f\)) is limited by the minimum feature size of the fabrication technology. Increasing the SWG pitch facilitates attaining a smaller duty ratio well above the minimum feature size. However, staying in the subwavelength regime and avoiding the Bragg reflection condition sets the upper bound of the pitch (\(\Lambda<\lambda/(2n_{\text{eff}})\)). Considering \(\Lambda=220\) nm as a proof-of-concept design, the impact of the duty ratio is investigated with a 2D simulation approach using the Ansys Lumerical finite difference eigenmode solver tool. In this approach, Eq. 3 represents the corrugated sections as an effective medium beside the silicon waveguide core, while the whole structure is extended toward infinity in the propagation direction.
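As a quick numerical check of Eq. (3), the sketch below evaluates the effective-medium index for the duty ratios considered in this work and the corresponding sub-wavelength pitch bound; it reproduces the value of about 2.46 quoted above for \(f=0.4\):

```python
# Quick numerical check of Eq. (3): effective-medium index of the SWG cladding
# region as a function of the duty cycle ratio f, using the indices given above.
import math

n_si, n_sio2 = 3.46, 1.45  # silicon / silicon dioxide index at 1550 nm


def n_swg(f: float) -> float:
    """Eq. (3): first-order effective-medium index for TE polarization."""
    return math.sqrt(f * n_si**2 + (1 - f) * n_sio2**2)


for f in (0.4, 0.5, 0.6):
    print(f"f = {f:.1f} -> n_SWG = {n_swg(f):.2f}")
# f = 0.4 reproduces the ~2.46 value quoted in the text.

# Sub-wavelength condition Lambda < lambda0 / (2 * n_eff), here evaluated with the
# SWG effective-medium index as a rough stand-in for the guided-mode index.
lambda0 = 1.55e-6
print(f"Upper pitch bound at f = 0.4: {lambda0 / (2 * n_swg(0.4)) * 1e9:.0f} nm")
```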
The effective refractive index of the structure is attained in room temperature, and following a 100K temperature increase. Assuming \(dn_{\sigma}/dT\simeq\Delta n_{\sigma}/\Delta T\) in that temperature range, the thermo-optic coefficient is calculated for each mode. Results for different duty ratios are depicted in Fig. 2 supporting a mode-selectivity of \(\zeta\)=1.28 for \(f=\)0.4, corresponding to \(L_{\mathrm{s}vc}=88\) nm being marginally above the minimum feature size of the accessible technology. Upon optimization of different dimensions using 2D simulation approach, a 3D finite difference time domain (FDTD) simulation is carried out using Ansys Lumerical FDTD solver tool. This involves calculation of effective refractive indices from extracting the phase velocity, which requires attaining the dispersion diagram of the SWG TOPS (also called a band structure analysis) [25]. More specifically, Fourier transform of some randomly placed time-domain monitors in the simulation region leads to the frequency-wavenumber relation as the mode guiding condition (i.e., the dispersion diagram) providing the phase velocity parameter. The Fig. 2: Thermo-optic coefficient of the proposed mode-sensitive TOPS for TE0 and TE1 mode profiles versus duty cycle ratio calculated using a 2D simulation approach (W\({}_{\mathrm{SWG}}\)=450 nm, A=220 nm). Subsets show the electric field intensity of a cross section perpendicular to the propagation direction for\(f\)= \(\{0.1,0.4,0.8\}\). calculated mode-selectivity FoM for \(f=\{0.4,\,0.5,\,0.6\}\) is compared with experimental results in section III. B. ### _Interferometric test structure_ The conventional TOPS test structure involves using a balanced Mach-Zehnder interferometer (MZI), which incorporates the same TOPS design in both arm with one typically operational and the other making the MZI loss-balanced. Tuning the TOPS alters the light phase in one arm and modulates the output transmission intensity in an interferometric manner. The schematic diagram of the MS-TOPS test structure is demonstrated in Fig. 3(a). The 2\(\times\)2 MZI structure is comprised of two multimode interferometers (MMIs), 100 \(\upmu\)m long low-loss S-bend at its output to thermally set the MZI arms apart, and a mode-insensitive thermo-optic phase shifter (MI-TOPS), and tapers to provide a low-loss interconnection between components with different waveguide widths. Excluding the SWG structure, and the tapers at both ends of the SWG, all geometric parameters are optimized based on a previous work [18] as illustrated in Fig. 3 (a). Fig. 3 (b) shows the SEM image of the fabricated taper used at both ends of a 250 nm wide SWG waveguide (the optimized design is 450 nm wide). This linear taper from the single-mode waveguide to the SWG waveguide was optimized using the design methodology reported in [26]. The taper length was swept using a 3D FDTD method to ensure a smooth transition between the conventional waveguides and the SWG structure realizing above 99.6% power transmission for both TE0 and TE1 modes with a 20 \(\upmu\)m long taper [26]. The 200 \(\upmu\)m long MI-TOPS waveguide is 4 \(\upmu\)m wide providing similar thermo-optic coefficient for TE0 and TE1 modes, within a contrast of 2% (\(\pm\)1%). Deploying such a component enables a more comprehensive evaluation of the MS-TOPS performance by imposing an adjustable \(\phi\) phase shift on both modes in one arm while applying another adjustable \(\zeta\cdot\delta\) and \(\delta\) phase shift on TE0 and TE1 modes, respectively. 
Fig. 3(c) shows the SEM image of the fabricated SWG-based MS-TOPS validated using the interferometric structure. The fabrication in an SOI platform is performed using Applied Nanotools (ANT) NanoSOI fabrication process. The optical waveguides are 220 nm thick fully etched on top of 2 \(\upmu\)m buried oxide (BOX). The high resistive titanium-tungsten (TiW) alloy functions as metal heaters deposited 2.2 \(\upmu\)m on top of waveguides. A low resistive titanium-tungsten/aluminum bilayer (TiW/Al) enables the contact with heaters and metal routings. ## III Experimental demonstration ### _Validation Testbed_ The fabricated MS-TOPS in the interferometric test structure is the device under test (DUT) in the relatively straightforward testbed, schematically illustrated in Fig. 4. The DUT is optically probed by a multi-channel single mode fiber array unit (FAU) using on-chip vertical grating couplers from ANT process design kit (PDK). The bare die is electrically probed with a multi-contact wedge (MCW). Because the repeatability of the measurement depends on the MCW contact resistance varying with each probe landing attempt, the power supply in the current mode eliminates the dependency of the applied bias and phase shift relation to the contact resistance within the range of current source stability. An on-chip electrical loopback between two electrical pads facilitates secure and more repeatable landing and connection to the die of the MCW. In the first measurement scenario, an external directional coupler (DC) evenly splits (_i.e.,_ splitting ratio of S\({}_{1}\)=50:50) the power of the tunable laser source to enable simultaneous TE0 and TE1 mode propagation through the DUT. The second order Fig. 4: Schematic block diagram of the testbed consisting of an external directional coupler (DC) in two different scenarios (S, for simultaneous, and S\({}_{2}\) for successive mode propagation through the DUT), polarization controller (PC), fiber array unit (FAU) optical probe, device under test (DUT), and multi-contact wedge (MCW) electrical probe. Fig. 3: Schematic diagram of the balanced interferometric test structure designed for the evaluation of the SWG-based MS-TOPS (\(\delta\) phase shifter); (b) SEM image of a typical designed taper that provides a low loss interconnection between a 250 nm SWG waveguide and neighboring components (subsets 1-3 are enlarged for better visibility); (c) SEM image of the designed SWG waveguide with dimensions: W\({}_{\text{SWG}}\)= 450 nm, \(\Lambda\) = 220 nm, \(f\)= 0, 4, \(\text{L}_{\text{ss}}\) = 88 nm. mode is excited on-chip using adiabatic coupler-based mode multiplexers [27]. Embedded \(\phi\) (mode-insensitive), and \(\delta\) (mode-selective) TOPS in each arm of the MZI DUT are biased and swept according to their metal heater resistivity to deliver desired heat. Sweeping the applied current to each phase shifter creates a two-dimensional space to measure the DUT, which further enables calculating its mode-selectivity as the main design FoM. In the second measurement scenario, an external DC with an S\({}_{2}\)=10:90 splitting ratio is deployed. The objective of this scenario is to extract the DUT crosstalk and insertion loss for each mode; hence, it requires two consecutive measurements: once for TE0 while TE1 input is null and vice versa. In each case, the output is de-multiplexed on-chip to the primary and crosstalk signals, which are simultaneously measured with two optical power meters. 
The 10% splitting branch is fed to the on-chip alignment loopbacks promoting a more robust and accurate insertion loss assessment. The insertion loss and modal crosstalk measurement accuracy is masked by the FAU random drift of \(\pm\)0.2 dB during a ten-minutes time window of stability measurement. Moreover, within the same duration, a random drift of \(\pm\)0.3 mW in the delivered electrical power to the heaters limits the phase shift controllability to 0.09 rad. ### _Measurement results and discussion_ The DUT characterization objective is to measure the mode-selectivity of the designed MS-TOPS as defined in Eq. (2) being the principal design FoM. Using the discussed testbed (Fig. 4 with S\({}_{1}\)=50:50), the phase shift in both arms of the MZI DUT is altered while the output bar port transmission is monitored. As elaborated in the 2D transmission contour maps in Fig. 5(a), sweeping the applied current (\(I_{\phi}\)) of the MI-TOPS (vertical axis) yields the same behavior for both TE0, and TE1 modes, as expected, for any constant \(\delta\) phase shift. In other words, the required current change for inducing a 2\(\pi\) phase change (_e.g._, distance between two bias points corresponding to minimum transmission) stays the same regardless of the mode (i.e., \(P_{x,\text{TE0}}=P_{x,\text{TE1}}\)). This confirms the mode insensitivity of the \(\phi\) phase shifter. On the other hand, sweeping the MS-TOPS current (\(I_{\delta}\) in the horizontal axis) demonstrates a different behavior for each mode profile (i.e., \(P_{x,\text{TE0}}\neq P_{x,\text{TE1}}\)). A dash-dotted line shows the applied current \(I_{\delta}\) for the first two transmission deeps to provide a visual reference point. Distance between these two lines represents the additional required current for achieving a 2\(\pi\) phase shift. As shown in Fig. 5 (a), this distance in the TE1 results is greater than the TE0 outcome confirming the DUT mode-selectivity behavior. Quantitative validation of the MS-TOPS behavior leads to extracting the mode-selectivity FoM. To accurately extract \(P_{x}\), the normalized linear output power transmission of both modes is plotted versus the \(\delta\) phase shifter heating power at a constant phase \(\phi\) (_e.g._, \(I_{\phi}=0\) as illustrated at the bottom of Fig. 5 (a) with down sampled data points for a better distinctness). This output optical power of the MZI DUT at the bar port conforms with the expected sin\({}^{2}\)(\(\delta\)/2+ \(\delta\)\(\delta\)\(\delta\)) relation where \(\delta\)\(\delta\)\(\delta\) represents the observed phase offset at a zero-phase shift due to fabrication imperfection with the DUT second input port nulled. The fitted dashed curve facilitates the \(P_{x}\) extraction as half of the distance between two consecutive deeps in the corresponding graph. Considering the experimental results of the DUT with a duty ratio of\(f\)= 0.4 as shown in Fig. 5 (a), this procedure produces a mean mode-selectivity of \(\zeta=1.44\pm 0.05\) for a total number of 18 measured \(\phi\) data points as \(I_{\phi}=[0.2:36]\)\(mA\). The term \(\pm\)0.05 shows the range of extracted FoM among these 18 measurements. The source of error in this mode-selectivity assessment is ascribed to the FAU and MCW random drift discussed in the testbed validation section (III.A). 
Moreover, the parameter extraction using the mathematical relation sin\({}^{2}\)(\(\delta\)/2+ \(\delta\)\(\delta\)\(\delta\)) fits a large number of the measurement data leading to a more accurate FoM extraction reducing this uncertainty by an order of magnitude. For a behavioral verification purpose, a similar 2D contour map is analytically calculated and illustrated in Fig. 5(b) for the same interferometric test structure ( \(\zeta=1.44\) ) for both mode profiles under study. The MZI bar transmission follows the relation of \(0.5\times[\exp(j\phi)-\exp(j\delta)]\) times the input signal considering a null second input port. Considering mode insensitive phase \(\phi\), and \(\delta\) being the TE0 phase shift in each arm of the MZI, the relation becomes \(0.5\times[\exp(j\phi)-\exp(j\delta/\zeta)]\) for the TE1 mode at the bar output. Assuming a linear relationship between TOPS dissipated electrical power and the induced temperature gradient, the TOPS phase shift is proportional to the square of the heater current. As a result, the applied current and heating power units in Fig. 5(b) are arbitrary units (a.u.). This square relation considered between the applied current and the resulting phase shift is a realistic simplification of the heating power distribution. The resultant 2D contour map conforms with the experimental behavior. In comparison, it is worth noting that the maximum bar output transmission happens analytically at a \(\pi\) phase shift. In other words, the bar output is null for a fully balanced MZI (at zero bias); however, experimental results show existence of a power offset at zero bias due to the phase offset originated from fabrication process variation that makes the MZI unbalanced. This could also be ascribed to the MMI performance deviation from the 50:50 splitting. Furthermore, TE0 and TE1 phase mismatch at the output of the on-chip coupler-based mode multiplexer contributes toward creating a different \(\delta\)\(\delta\)\(\delta\) for each mode. Depending on the target application, deploying a 1\(\times\)2 MMI with a balanced phase shift from the input to both outputs could promote the DUT power consumption efficiency. Figure 3: The 2D contour map of the DUT bar output transmission versus applied current to the MS-TOPS (horizontal axis, \(I_{\delta}\)), and the MI-TOPS (vertical axis, \(I_{\delta}\)) for different SWG duty ratios. The dash-dotted line on each 2D contour map illustrates the first two transmission deeps as a visual reference point for comparing the DUT behavior in dealing with TE0 and TE1 modes. At the bottom of each 2D graph, the linear normalized optical power versus MS-TOPS heater power at \(I_{\delta}=0\) is shown where markers and dashed curves represent the down sampled experimental data points and the fit with sin\({}^{2}\)(\(\delta\)/2+ \(\delta_{\alpha\delta}\)) respectively. (a)\(f\)\(=0.4\); (b) Analytically calculated transmission for \(f\)\(=0.4\); (c)\(f\)\(=0.5\); (d)\(f\)\(=0.6\). To experimentally investigate the effect of the SWG duty cycle ratio \(f\) on the mode-selectivity FoM, two other DUTs with different duty ratios are measured in addition to the one that was discussed above (_f_\({}^{\prime}\) = 0.4). The 2D transmission contour map of DUTs with _f_\({}^{\prime}\) = {0.5, 0.6} are presented in Fig. 5(c) and Fig. 5(d), respectively. The mode-selectivity FoM is similarly extracted being \(\zeta\) = {\(1.31\pm 0.03,1.20\pm 0.05\)}, correspondingly. 
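A minimal sketch of this fitting step is given below: the bar-port transmission versus heating power is fitted with the sin\({}^{2}\) model to extract the heating power for a \(\pi\) shift of each mode (\(P_x\) in Eq. (2)), whose ratio gives \(\zeta\). The data here are synthetic and SciPy is used only for illustration; the actual analysis uses the measured sweeps:

```python
# Sketch of the P_x extraction described above: fit the bar-port transmission
# versus heating power with sin^2(delta/2 + delta_off), where delta grows linearly
# with the dissipated power, then form zeta = P_x(TE1) / P_x(TE0).
# The data below are synthetic; real measurements replace them in practice.
import numpy as np
from scipy.optimize import curve_fit


def bar_transmission(power_mw, p_pi, delta_off):
    delta = np.pi * power_mw / p_pi  # phase shift proportional to heater power
    return np.sin(delta / 2 + delta_off) ** 2


power = np.linspace(0, 60, 200)  # heater power sweep (mW)
rng = np.random.default_rng(0)
meas_te0 = bar_transmission(power, 20.0, 0.3) + 0.01 * rng.standard_normal(power.size)
meas_te1 = bar_transmission(power, 28.8, 0.1) + 0.01 * rng.standard_normal(power.size)

(p_pi_te0, _), _ = curve_fit(bar_transmission, power, meas_te0, p0=(25, 0))
(p_pi_te1, _), _ = curve_fit(bar_transmission, power, meas_te1, p0=(25, 0))

print(f"P_x(TE0) = {p_pi_te0:.1f} mW, P_x(TE1) = {p_pi_te1:.1f} mW")
print(f"mode-selectivity zeta = {p_pi_te1 / p_pi_te0:.2f}")  # ~1.44 for this example
```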
According to the previous analytical discussions supported by 2D simulation results, the mode-selectivity is expected to be diminished as the duty ratio increases, which is in compliance with the experimentally observed behavior. In contrast to the validation measurement results of \(\zeta\)= {1.44 \(\pm\) 0.05, \(1.31\pm 0.03\), \(1.20\pm 0.05\)} for all three DUTs with f = {0.4, 0.5, 0.6}, respectively, the 3D simulation is carried out as described in section II.B, and a mode-selectivity FoM of \(\zeta\) = {1.30, 1.28, 1.18} is obtained. We believe that this relatively small discrepancy originates from a superposition of these fabrication process variations: (1) over etching in the fabrication process, which contributes toward decreasing the duty ratio and hence increasing the FoM; (2) the impact of buried air gaps (voids) in the SiO\({}_{2}\) cladding between two consecutive corrugations (shown as \(L_{\text{oxide}}\) in Fig. 1 (a)) [28], which lowers the effective refractive index for TE1 and eventually gives rise to the FoM; (3) concaved sharp edges in the SWG inner section, which could contribute toward decreasing the FoM depending on the energy distribution of the TE1 mode profile near the edges (sidewall surface roughness could result in spatial field energy redistribution of the TE1 mode); (4) convex sharp edges in the SWG outer section could act otherwise with a very minor effect on FoM expanding due to TE1 energy concentration closer to the core waveguide. Performance evaluation of the three discussed DUTs in terms of crosstalk and insertion loss require deployment of the mentioned testbed (Fig. 4 with S\({}_{2}\) = 10:90), while the 10 % portion being used for optical power loss normalization through a loop-back. In each measurement, only one mode is passing through the DUT for accurate crosstalk monitoring. More specifically, both ports of the DUT through the output mode demultiplexer is monitored, one port being the primary output, and the other as the crosstalk signal. The output transmission is normalized to the spectrum of an on-chip grating coupler loopback interconnecting two back-to-back mode multiplexers to accurately evaluate the DUT insertion loss (IL) at the target wavelength (1540 nm). To remove the impact of Fabry-Perot cavity induced fringes, a moving window of 2 nm is deployed as an averaging filter [11]. The resultant spectrums for the DUTs with different SWG duty ratio _f_\({}^{\prime}\)= {0.4, 0.5, 0.6} are illustrated in Fig. 6. Evaluating the IL for the TE1 mode profile shows a distinct IL increase with the SWG duty ratio, which could be explained by the growing TE1 mode redistribution to TE0 (and hence the IL) as the duty ratio enlarges. However, this may not totally explain the relatively larger insertion loss for _f_\({}^{\prime}\) = 0.6, which could be ascribed to a fabrication variation induced increased loss of the input(output) mode (de)multiplexer, MMIs, tapers, and S-bends of this DUT. Detailed experimental results are summarized in Table I below including an averaged measured insertion loss over a 40 nm optical bandwidth from 1520 nm to 1560 nm. The relatively low amounts of crosstalk and IL in the measured wavelength range (\(1520-1560\) nm) provides a promising performance of the DUT for wide-band applications. In addition, because the input(output) mode-(de)multiplexers are the main contributor to the crosstalk, one can expect realization of better crosstalk by utilizing high performance on-chip mode-(de)multiplexers. 
## IV Applications The proposed MS-TOPS provides a sole degree of freedom to adjust the phase of both TE0 and TE1 modes dependently. Using Eq. (1-2), and noting that the mode-selective TOPS experience almost the same amount of temperature variation \(\Delta T\) for both modes, parameter \(\zeta\) relates both phase shifts Fig. 4: Measured normalized bar transmission spectrum for the primary and crosstalk mode profiles for DUTs with different SWG duty ratios as _f_\({}^{\prime}\) = {0.4, 0.5, 0.6}. through the relation \(\Delta\Phi_{TE0}=\zeta\cdot\Delta\Phi_{TE1}\). Based on the target application, the second degree of freedom can be attained by deployment of an MI-TOPS either in a cascaded configuration, as schematically shown in the subset of Fig. 7, or in an MZI (similar to our proposed test structure) in conjunction with the MS-TOPS. Considering the TE1 phase shifts of \(\phi\) and \(\delta\) imposed by the cascaded MI-TOPS, and MS-TOPS modules, respectively, the total undertaken phase shifts are formulated as: \[\Delta\Phi_{total}=\begin{cases}\phi+\zeta\delta,&TE0\\ \phi+\delta,&TE1\end{cases} \tag{4}\] The system of a cascaded mode-selective and insensitive TOPS can realize any desired phase shifts for TE0, and TE1 modes independently. Considering \(\phi\in[0\colon 2\pi]\), an arbitrary total phase shift may demand the other phase shifter to be driven further than \(2\pi\) for a mode-selectivity FoM of less than two. In other words, an amount of \(\zeta=2\) would facilitate attaining all TE0, and TE1 phase shift combinations while keeping both \(\phi\) and \(\delta\) within the range of \([0\colon 2\pi]\), which directly improves the system power consumption efficiency. Therefore, the maximum amount of phase shift \(\delta\) is a function of the mode-selectivity as illustrated in Fig. 7 as a phase shifter driving efficiency analysis. Considering our work, \(\zeta=1.44\), all TE0, and TE1 phase combinations can be realized with driving the MS-TOPS to a maximum amount of \(2.4\times 2\pi\). A well-optimized mode-selectivity would decrease this amount toward a more power efficient design. Although increasing the mode-selectivity further than \(\zeta=2\) would be optimal from the driving efficiency perspective, depending on the target application, the whole system needs to be engineered to attain the desired phase adjustment precision. The reason being that as the more significant the mode-selectivity is, the more sensitive the system becomes to the applied voltage such that a relatively lower error in the applied voltage translates into a proportionally larger deviation in the targeted phase. ### _Datacom (switching)_ Considering data communication applications enabled by reconfigurable multimode silicon photonics such as optical networks [29], it may be necessary to route TE0 to a target destination and TE1 to another. A 2\(\times\)2 MZI incorporating an MS-TOPS in one arm and an MI-TOPS in another could efficiently act as an on-chip mode switch routing TE0 to the first output in bar state (\(\pi\) phase shift for TE0 between two arms) and TE1 into the second output in cross state (zero TE1 phase shift for TE1 between two arms). Because it would be possible to simultaneously generate \(\pi\) and zero phase shifts between two arms for TE0, and TE1 correspondingly, this space mode switch can separate and route TE0 and TE1 into different paths at the same time, which provides a direct switching methodology of modes without converting them to their fundamental mode. 
Fig. 5: Driving efficiency analysis of a cascaded MI-TOPS and MS-TOPS for the application where two degrees of freedom are needed to control both TE0 and TE1 phase shifts with a mode-selectivity of \(\zeta\), as shown in the figure subset. The maximum required \(\delta\) phase shift is shown versus different mode-selectivity amounts (\(\phi\in[0\colon 2\pi]\)). In this work (\(\zeta=1.44\)), the MS-TOPS phase shifter needs to be driven up to a maximum of \(2.4\times 2\pi\) to support all phase shift combinations for TE0 and TE1 with two degrees of freedom.

### _Scaling up optical processors_

Our research group previously demonstrated a four-by-four reconfigurable MZI-based linear optical processor [30] and experimentally validated the implementation of a single-layer optical neural network trained for classification of a Gaussian dataset [13]. Scaling up this topology to higher-radix optical processors is challenging due to the high insertion loss induced by an increased number of MZI building blocks in each input-output path [31]. Introducing a cascaded configuration of an MS-TOPS and an MI-TOPS could provide the core-enabling building block for realizing a multimode optical processor that addresses this scalability bottleneck. With this approach, a radix-four multimode optical processor can theoretically realize two simultaneous four-by-four photonic-based mathematical multiplications.

### _Multimode quantum optical processors_

A recent work has used multiple transverse optical modes for encoding a two-qubit quantum gate [32]; however, it deploys a non-reconfigurable mode-selective directional coupler solely to create a duplicate of the signal and attain an additional degree of freedom. In contrast, the proposed MS-TOPS building block can directly manipulate the relative phase of the TE0 and TE1 modes; it could be cascaded with an MI-TOPS to provide the second degree of freedom. In addition, this could help with the efficient realization of transverse-mode-entangled quantum photonics by supporting the required manipulation of optical modes.

### _Programming MZI-based optical processors_

Conventional MZI-based optical processors deploy an interconnected mesh of two-by-two MZI building blocks that incorporate an internal and an external phase shifter to adjust the output light intensity and phase, respectively. Calibration and accurate programming of the external phase shifters demand an output phase detection technique such as external coherent detection. We have previously proposed a novel integrated methodology to convert the optical phase to optical power using an MZI incorporating an MS-TOPS and an MI-TOPS in each arm [17]. This enables low-cost and accurate on-chip optical phase measurement without the need for external coherent detection. Our proposed SWG-based MS-TOPS in this work improves the device mode-selectivity by approximately 32%, which increases the dynamic range and accuracy of the phase measurement task for calibration of multi-transverse-mode optical processors.

## V Conclusion

We propose a mode-selective thermo-optic phase shifter (MS-TOPS) exploiting subwavelength grating (SWG) structures. The proposed device can manipulate the relative phase of the fundamental transverse electric mode (TE0) while inducing a 1.44 times smaller phase shift in the second mode (TE1). This selectivity arises because the TE0 modal field is distributed mostly in the non-corrugated (center) part of the waveguide while the TE1 mode resides mostly in the SWG section, resulting in different effective thermo-optic coefficients for the two modes.
This novel device serves numerous applications in mode division multiplexing systems (e.g., optical switching and datacom) by direct mode-selective manipulation of the relative phase of each mode, instead of using a multitude of components to convert the modes down to their fundamental mode, conduct the desired processing task (e.g., a phase shift), and then convert them back to their original mode profile. The balanced MZI test structure, incorporating an MS-TOPS and an MI-TOPS in each arm and two multimode interferometers, demonstrates an insertion loss of 2 (1.5) dB for the TE0 (TE1) mode at 1540 nm wavelength. Furthermore, a cascaded configuration of an MS-TOPS and a mode-insensitive thermo-optic phase shifter can adjust the relative phase of each mode independently. A mode-selectivity factor of two would allow realizing all phase combinations while driving each phase shifter within the range of \([0\!:\!2\pi]\). Such a device provides the core-enabling building block to scale up conventional MZI-based optical processors using multimode operation.

## Acknowledgment

The authors acknowledge the help of the Canadian Microelectronic Corporation (CMC) for the subsidized multi-project wafer fabrication through Applied Nanotools (ANT), as well as financial support from the Natural Sciences and Engineering Research Council of Canada (NSERC).
A novel mode-selective thermo-optic phase shifter (MS-TOPS) using subwavelength gratings (SWG) is proposed and experimentally demonstrated on a 220 nm thick silicon photonic chip. The MS-TOPS operates on the first two quasi-transverse electric modes (TE0, TE1). Mode-selective phase manipulation unlocks processing tasks in mode division multiplexing systems. This integrated solution provides direct phase manipulation of the modes without converting them to their fundamental mode. A Mach-Zehnder interferometer is formed with one arm containing the proposed MS-TOPS and the other arm containing a mode-insensitive thermo-optic phase shifter (MI-TOPS). The effect of the SWG duty cycle ratio is investigated through numerical simulations and experimental measurements. Experimentally, a 1
2309.07298
Deriving Abstract Interpreters from Skeletal Semantics
This paper describes a methodology for defining an executable abstract interpreter from a formal description of the semantics of a programming language. Our approach is based on Skeletal Semantics and an abstract interpretation of its semantic meta-language. The correctness of the derived abstract interpretation can be established by compositionality provided that correctness properties of the core language-specific constructs are established. We illustrate the genericness of our method by defining a Value Analysis for a small imperative language based on its skeletal semantics.
Thomas Jensen, Vincent Rébiscoul, Alan Schmitt
2023-09-13T20:49:19
http://arxiv.org/abs/2309.07298v1
# Deriving Abstract Interpreters from Skeletal Semantics

Thomas Jensen INRIA, Rennes thomas.jensen@inria.fr Vincent Rébiscoul Universite de Rennes, Rennes vincent.rebiscoul@inria.fr Alan Schmitt INRIA, Rennes alan.schmitt@inria.fr

###### Abstract

This paper describes a methodology for defining an executable abstract interpreter from a formal description of the semantics of a programming language. Our approach is based on Skeletal Semantics and an abstract interpretation of its semantic meta-language. The correctness of the derived abstract interpretation can be established by compositionality provided that correctness properties of the core language-specific constructs are established. We illustrate the genericness of our method by defining a Value Analysis for a small imperative language based on its skeletal semantics.

## 1 Introduction

The derivation of provably correct static analyses from a formal specification of the semantics of a programming language is a long-standing challenge. The recent advances in the mechanisation of semantics have opened up novel perspectives for providing tool support for this task, thereby enabling the scaling of this approach to larger programming languages. This paper presents one such approach for mechanically constructing semantics-based program analysers from a formal description of the semantics of a programming language. We aim to provide methodologies which not only can prove the correctness of program abstractions but also lead to executable analysis techniques. Abstract Interpretation [5] has set out a methodology for defining an abstract semantics from an operational semantics and for proving a correctness relation between abstract and concrete semantics using Galois connections. The principle of abstract interpretation has been applied to a variety of semantic frameworks, including small-step and big-step (natural) operational semantics, and denotational semantics. An example of this methodology is to build an abstract semantics from a natural semantics [21]. Another example is Nielson's theory of abstract interpretation of two-level semantics [15] in which a semantic meta-language is equipped with binding-time annotations so that types and terms can be given a _static_ and _dynamic_ interpretation, leading to different but (logically) related interpretations. In order for semantics-based program analysis to handle the complexity of today's programming languages, it is necessary to conceive a methodology that is built using some form of mechanised semantics. Examples of this include Verasco [9], a formally verified static analyser for the C programming language. It uses abstract interpretation techniques to perform value analyses, relational analyses, and more. Verasco is written in Coq and the soundness of the analysis is guaranteed by a theorem: a program for which the analysis does not raise an alarm is free of errors. Reasoning about program behaviours is possible as Verasco reuses the formalisation of the C semantics in Coq that was written for CompCert [12]. CompCert is a proven semantics-preserving C compiler written in Coq. Another example is the \(\mathbb{K}\)[19] framework for writing semantics using rewriting rules. Rewriting rules make the formal definition of a semantics both flexible and relatively simple to write, and allow one to mechanically derive objects, such as an interpreter, from the semantics.
However, this mechanization can be complex: \(\mathbb{K}\)-Java [3] is a formalization of Java in \(\mathbb{K}\), with close to four hundred rewriting rules. It is unclear if it is possible to derive an analysis from a mechanization in \(\mathbb{K}\). The key idea that we will pursue in this paper is that an abstract interpreter for a semantic meta-language combined with language-specific abstractions for a particular property yield a correct-by-construction abstract interpreter for the specific language and property. We describe how to obtain a correct program analyser for a programming language from its _skeletal_ semantics. Skeletal Semantics [2] is a proposal for machine-representable semantics of programming languages. The skeletal semantics of a language \(\mathcal{L}\) is a partial description of the semantics of \(\mathcal{L}\). Typically, a skeletal semantics will contain definitions of the constructs of the language and functions of evaluation of these constructs. A skeletal semantics is written in the meta-language Skel [18], a minimalist functional language. It is a _meta language_ to describe the semantics of _object languages_. Skel has several semantics, called _interpretations_, (small step, big step [11], abstract interpretation), giving different semantics for the object languages. Contributions * We propose new interpretations of the semantic meta-language Skel that integrates the notion of program point in a systematic way. * We define an abstract interpretation for Skel. The abstract interpretation of Skel combined with language-specific abstractions define an analyzer for the object language. * We prove that the abstract interpretation of Skel is a sound approximation of the big-step interpretation of Skel, provided that some small language-dependent properties hold. * We implement a program which, given a Skeletal Semantics, generates an executable abstract interpreter, and we test it on toy languages. We define a basic value analyzer for a small imperative language. A Control Flow Analysis for a \(\lambda\)-calculus is also presented in the long version of this paper [8]. ## 2 Skeletal Semantics Skeletal Semantics offers a framework to mechanise semantics of programming languages [2]. It uses a minimalist, functional, and strongly typed semantic meta-language called Skel [18], whose syntax is presented in Figure 1. The actual semantics of a language described in Skel is expressed by providing a (meta-)interpretation of the Skel language itself. In this paper, we will present two such interpretations: a big-step (or concrete) semantics and an abstract interpretation. We illustrate Skel through the definition of the skeletal semantics of a toy imperative language called While. A Skeletal Semantics is a formal description of a language and consists of _declarations_. We start with some type declarations (production \(r_{\tau}\) in Figure 1). \begin{tabular}{l l l} type ident & type expr = & type stmt = \\ type lit & \(|\) Const lit & \(|\) Skip \\ type store & \(|\) Var ident & \(|\) Assign (ident, expr) \\ type int & \(|\) Plus (expr, expr) & \(|\) Seq (stmt, stmt) \\ & \(|\) Leq(expr, expr) & \(|\) If (expr, stmt, stmt) \\ & \(|\) Rand (lit, lit) & \(|\) While (expr, stmt) \\ \end{tabular} For While, there are four _unspecified_ types (identifiers, literals, stores, integers) and two _specified_ types (expressions and statements). 
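To make the shape of the object language concrete, the following Python rendering of the While type declarations above may help. The constructor names mirror the Skel declarations, while the encoding itself (dataclasses and a small example program) is our own illustrative choice and not part of the Skel tooling.

```python
# Illustrative Python counterpart of the While type declarations above.
from dataclasses import dataclass
from typing import Union

@dataclass
class Const:
    lit: int

@dataclass
class Var:
    ident: str

@dataclass
class Plus:
    e1: "Expr"
    e2: "Expr"

@dataclass
class Leq:
    e1: "Expr"
    e2: "Expr"

@dataclass
class Rand:
    lo: int
    hi: int

Expr = Union[Const, Var, Plus, Leq, Rand]

@dataclass
class Skip:
    pass

@dataclass
class Assign:
    ident: str
    e: Expr

@dataclass
class Seq:
    t1: "Stmt"
    t2: "Stmt"

@dataclass
class If:
    cond: Expr
    true: "Stmt"
    false: "Stmt"

@dataclass
class While:
    cond: Expr
    body: "Stmt"

Stmt = Union[Skip, Assign, Seq, If, While]

# The program Seq(Assign(x, Const 0), Assign(y, Const 1)) used later in the text.
prg = Seq(Assign("x", Const(0)), Assign("y", Const(1)))
```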
Unspecified types is an useful trait of Skel, their definitions are unconstrained and they can be instantiated depending on the semantics of the object language being defined. The specification of the integer type can be different for a big-step semantics or for an abstract interpretation. The expr and stmt types define expressions and statements of While programs. An expression can be a constant, a variable, an addition, a comparison, or a random integer. A statement can be a skip (an instruction that does nothing), an assignment, a sequence, a condition, or a loop. In addition to these declared types, one may build arrow types and tuple types. We now turn to Skel's _term declarations_ (production \(r_{t}\) of Figure 1), which may also be unspecified or specified. Unspecified terms are typically used for operations on values of unspecified types. For our While language, they are as follows. val litToInt: lit ~ int val add: (int, int) ~ int val lt: (int, int) ~ int val rand: (lit, lit) ~ int val isZero: int ~ () val isNotZero: int ~ () val read : (ident, store) ~ int val write: (ident, store, int) ~ store The types for isZero and isNotZero may be surprising. These partial functions act as filters when used in branches, as detailed below. Specified terms, on the other hand, are signatures associated with a _term_. A term is either a skeletal variable, a constructor applied to a term, a tuple, or an abstraction. The body of an abstraction is a skeleton, described below. Consider the declaration of term eval_expr. val eval_expr ((s, e): (store, expr)): int = match e with | Const i ~ litToInt i | Var x ~ read (x, s) | Plus (e1, e2) ~ let v1 = eval_expr (s, e1) in let v2 = eval_expr (s, e2) in add (v1, v2) ``` The first line is syntactic sugar for val eval_expr: (store, expr) ~ int = \(\lambda\) (s, e): (store, expr) ~ where the remainder of the description is the body of the abstraction. This body is a skeleton. A skeleton may be a term, an n-ary application, a let binding, a branching (detailed below), or a match. Here the Figure 1: The Syntax of Skeletal Semantics skeleton is a match, distinguishing between the different expressions which may be evaluated. For a constant expression, we call the unspecified term litToInt to convert the literal to an integer. For a variable, we read its value in the store. For an addition, we sequence the recursive evaluation of each subterm using a let binding, and we then apply the unspecified add term to perform the actual addition. Note that specified term and type declarations are all mutually recursive. The rest of the code does not use any additional feature. We now turn to the second specified term declaration, to evaluate statements. ``` valeval_stmt((s,t):(store,stmt)):store= matchtwith |Skip+s |Assign(x,e)+ letw=eval_expr(s,e)in write(x,s,w) |Seq(t1,t2)+ lets'=eval_stmt(s,t1)in eval_stmt(s',t2) |If(cond,true,false)+ leti=eval_expr(s,cond)in branch let()=isNotZeroiin eval_stmt(s,true) ``` The code for the conditional and the loop illustrates the last feature of the language, branching. Branches are introduced with the branch keyword and are separated with the or keyword. They correspond to a form of a non-deterministic choice. Intuitively, in a big-step interpretation, any branch that succeeds may be taken. Branches may fail if a pattern matching in a let binding fails, or if the application of a term fails. For instance, the instantiation of the term isNotZero will not be defined on 0, making the whole branch fail when given 0 as argument. 
This is how we decide which branch to execute next for the conditional, and whether to loop in the While case. ## 3 Big-step Semantics of Skel We give meaning to a Skeletal Semantics by providing _interpretations_ of Skel. We first define the concrete, big-step semantics of Skel. Let \(\mathcal{S}\) be an arbitrary Skeletal Semantics. We write \(\mathrm{Funs}(\mathcal{S})\) for the set of pairs \((\Gamma,\lambda p:\tau_{1}\to S_{0})\) such that \(\lambda p:\tau_{1}\to S_{0}\) appears in Skeletal Semantics \(\mathcal{S}\). The typing environment \(\Gamma\) gives types to the free variables of \(\lambda p:\tau_{1}\to S_{0}\). Full formal details are available in [8]. ### From Types to Concrete Values The definitions of the sets of semantic values are presented on Figure 2a. They are defined by induction on the type. For each type \(\tau\), we write \(V(\tau)\) the set of values of type \(\tau\). A value with tuple type is a tuple of concrete values. A value of a specified type is a constructor applied to a value. A value with arrow type is a function that can either be a _named closure_ or an _anonymous closure_. The set of named closure \(\mathrm{NC}(\tau_{1}\to\tau_{2})\) and the set of anonymous closures \(\mathrm{AC}(\tau_{1}\to\tau_{2})\) are defined on Figure 2b. A named closure denotes a function that is specified in the Skeletal Semantics \(\mathcal{S}\), it is a pair of the name of the function and its arity. An anonymous closure is a tuple of a typing environment \(\Gamma\), a pattern \(p\) to bind the argument upon application, a skeleton \(S\) which is the body of the function, and an environment \(E\) captured at the creation of the closure. An environment is a partial function mapping skeletal variable to concrete values. It is said to be consistent with typing environment \(\Gamma\), written \(\Gamma\vdash E\), if they have the same domain and if, for every \(x\in\mathrm{dom}(\Gamma)\), we have \(E(x)\in V^{\sharp}(\Gamma(x))\). The unspecified types of a skeletal semantics must be instantiated to obtain an interpretation. In the case of While, the unspecified types are ident, lit, int, and store. They are instantiated as follows. \[V(\texttt{store})=\{\,s\mid s\in\mathcal{X}\hookrightarrow \mathbb{Z}\;\} V(\texttt{ident})=\{\,x\mid x\in\mathcal{X}\;\}\qquad\text{with }\mathcal{X}=\{\,x,y,z,..\,\}\] \[V(\texttt{lit})=\{\,l\mid l\in\mathbb{Z}\;\} V(\texttt{int})=\{\,i\mid i\in\mathbb{Z}\;\}\] Identifiers are taken from a countable set \(\mathcal{X}\), literals and integers are relative integers, and stores are partial maps from identifiers to integers. ### Interpretation of Unspecified Terms In the following, we write \(\mathrm{na}(\tau)\) when \(\tau\) is not an arrow type. Take an unspecified term \(\textbf{val}\;t:\tau_{1}\rightarrow..\rightarrow\tau_{n}\rightarrow\tau\) such that \(\mathrm{na}(\tau)\), then an instantiation of \(t\), written \([\![t]\!]\), is a function such that \([\![t]\!]\in(V(\tau_{1})\times..\times V(\tau_{n}))\rightarrow\mathcal{P}_{ fin}(V(\tau))\), where \(\mathcal{P}_{fin}(X)\) is the set of finite subsets of \(X\). In particular, if \(\textbf{val}\;t:\tau\) and \(\mathrm{na}(\tau)\), then \([\![t]\!]\subseteq V(\tau)\). Allowing the specification of a term to be a function returning a set is useful to model non-determinism. We instantiate the unspecified functions of our While language. The expression \((b)\)?\(e_{1}\) : \(e_{2}\) evaluates Figure 2: Definition of Concrete Values to \(e_{1}\) is the condition \(b\) is true. 
Otherwise, it evaluates to \(e_{2}\). \[\begin{array}{lclcl}\llbracket litToInt\rrbracket(n)&=&\{\,n\,\}&&\llbracket add \rrbracket(n_{1},n_{2})&=&\{\,n_{1}+n_{2}\,\}\\ \llbracket lit\rrbracket(n_{1},n_{2})&=&(n_{1}<n_{2})\?\ \{\,1\,\}\ program. They play an important role in semantics-based program analysis, to indicate places where information about the execution is collected. Program points are essential to abstract interpretation as an abstract interpretation usually computes an abstraction of the state of the execution of the analysed program for each program point. Our formalisation of program points for Skeletal Semantics is modular and works for the big-step semantics of Skel, but also for the abstract interpretation of Skel, presented in Section 5. In Skel, programs are values of an algebraic data type (ADT), such as stmt or expr in the While example. For instance, the skeletal term Seq(Assign(x, Const 0), Assign(y, Const 1)) is a While **program** of type stmt. A program point is a path in the ADT of the program, encoded as a list of integers (underlined to distinguish them from natural numbers). For example, \(\epsilon\) is the empty path, it corresponds to the whole program. The path \(\underline{01}\) corresponds to Const 0. The set of program points is thus \(\texttt{ppt}=\mathbb{N}^{*}\). Let **prg** be a term of an ADT and pp a program point. We note **prg@** pp the subterm of **prg** at program point pp. Formally, it is defined as follows. \[v@\underline{\epsilon}=v\qquad\qquad\texttt{C}(v_{0},..,v_{n-1})@(\underline{i} \,\text{pp})=v_{i}@\,\text{pp}\ \ \text{when}\ 0\leq i\leq n-1\] ### Building Values with Program Points Our approach is to replace the values that correspond to programs with program points. These program points correspond to a sub-program of a main program that is a parameter of the interpretation. The values that ought to be replaced by program points should be values representing fragments of the program being executed. Therefore, we call \(\mathcal{T}\) the set of _program types_, i.e., types representing programs. For instance, for the While language, \(\mathcal{T}=\{\texttt{stmt},\texttt{expr}\}\). Moreover, the interpretation is parametrised by a program **prg**, which is a value of a type \(\tau\in\mathcal{T}\). For the While language, that could be \(\textbf{prg}=\texttt{Seq}(\texttt{Assign(x, Const 0), Assign(y, Const 1)})\). Therefore, the rules to build values are unchanged except for the values with program types. Values with program types are defined by the following equation, where \(V(\tau)\) is defined in Figure 1(a). \[\tau\in\mathcal{T}\implies V_{\textbf{prg}}^{\texttt{ppt}}(\tau)=\{\,\text{pp} \in\texttt{ppt}\mid\textbf{prg@}\,\text{pp}\in V(\tau)\ \}\] Therefore in our example, \(\epsilon\) is now a value of type stmt denoting the value **prg**, and \(\underline{0}\) denotes the value Assign(x, Const 0). For each unspecified term \(x\), we assume given an interpretation \(\llbracket x\rrbracket^{\texttt{ppt}}\), which is identical to the concrete interpretation \(\llbracket x\rrbracket\) where program terms are replaced by program points. The full definition of this interpretation can be found in the long version of this paper [8]. ### Pattern-matching of Program Points Replacing some values with program points does not change the interpretation of Skel, except when matching a program point with a pattern. 
Indeed, a program point pp corresponds to the sub-program **prg@** pp if it exists, and it might be matched against a pattern \(\texttt{C}\)\(p\). To handle this case, the program point is _unfolded_, meaning the constructor at **prg@** pp is revealed, and the parameters of the constructor are replaced with program points if their type is a program type. To give an example, given **prg** as before, unfolding \(\epsilon\) gives \(\texttt{Seq}(\underline{0},\underline{1})\): the constructor is revealed and the parameters are program points because they both have type \(\mathtt{stmt}\in\mathcal{T}\). On the other hand, unfolding \(\underline{00}\) directly returns x, as identifiers are not program types in this example. This unfolding mechanism is added to pattern matching via the following rule: \[\frac{\begin{array}{c}\mathbf{prg@\,pp}=\mathtt{C}\left(v_{0}^{\prime},..,v_{n-1}^{\prime}\right)\quad\mathtt{C}:\left(\tau_{0}\times..\times\tau_{n-1},\tau\right)\\ v_{j}=\mathbf{if}\;\tau_{j}\in\mathcal{T}\;\mathbf{then}\;\mathrm{pp}\;\underline{j}\;\mathbf{else}\;v_{j}^{\prime}\quad\mathcal{T},\mathbf{prg}\vdash E+p\leftarrow(v_{0},..,v_{n-1})\rightsquigarrow E^{\prime}\end{array}}{\mathcal{T},\mathbf{prg}\vdash E+\mathtt{C}\,p\leftarrow\mathrm{pp}\rightsquigarrow E^{\prime}}\] We assume that a partial order on
\(V^{\sharp}(\tau)\) is provided for each unspecified type \(\tau\), and that it includes a smallest value, denoted by \(\bot_{\tau}\), and a largest value, denoted by \(\top_{\tau}\). In the case of While, we instantiate \(\mathtt{ident}\) with the flat lattice of \(\mathcal{X}\), \(\mathtt{lit}\) with the flat lattice of integers, \(\mathtt{int}\) with closed intervals of \(\mathbb{Z}\cup\{-\infty,+\infty\}\), and \(\mathtt{store}\) with a partial mapping from identifiers to non-empty intervals. We define abstract values for specified types in Figure 5, writing \(V^{\sharp\star}(\tau)\) for \(V^{\sharp}(\tau)\setminus\{\bot_{\tau}\}\). Abstract tuples are finite sets of tuples of (non-bottom) abstract values, with \(\bot_{\tau_{1}\times\ldots\times\tau_{n}}\) being the empty set. We use sets to retain some precision in the analysis. Abstract values of an algebraic data type are simply constructors applied to an abstract value of the correct type. Finally, functional abstract values are sets of abstract closures of this type, with \(\bot_{\tau_{1}\rightarrow\tau_{2}}\) being the empty set. The AI-state \(\mathcal{A}\) contains information collected throughout the abstract interpretation. It is dependent on the analysis and the language, and therefore must be provided, similarly to unspecified values. Moreover, a partial order and a union must be given for abstract states. In the case of while, the AI-state records information about the abstract store before (In) and after (Out) every program point. We write \(\mathtt{Pos}\) for either \(\mathtt{In}\) or \(\mathtt{Out}\). We then define \(\mathcal{A}\) as a mapping from program points and a \(\mathtt{Pos}\) to abstract while stores. We have \(\mathcal{A}_{1}\sqsubseteq^{\sharp}\mathcal{A}_{2}\) if \(\mathrm{dom}(\mathcal{A}_{1})\subseteq\mathrm{dom}(\mathcal{A}_{2})\) and for any \((\mathtt{pp},\mathtt{Pos})\in\mathrm{dom}(\mathcal{A}_{1})\), we have \(\mathcal{A}_{1}(\mathtt{pp},\mathtt{Pos})\sqsubseteq^{\sharp}_{store} \mathcal{A}_{2}(\mathtt{pp},\mathtt{Pos})\). We define \(\mathcal{A}_{1}\sqcup^{\sharp}\mathcal{A}_{2}\) as the mapping from \(\mathrm{dom}(\mathcal{A}_{1})\cup\mathrm{dom}(\mathcal{A}_{2})\) that relates \((\mathtt{pp},\mathtt{Pos})\) to \(\mathcal{A}_{1}(\mathtt{pp},\mathtt{Pos})\sqcup^{\sharp}_{im}\mathcal{A}_{2}( \mathtt{pp},\mathtt{Pos})\). ### Operations on Abstract Values Each abstract domain has a partial order and an associated join operator. In addition, a concretisation function that returns a set of concrete values defines the meaning of each abstract value. All of these functions are indexed by types (or type environments when they deal with environments). We assume they are provided for non-specified types, and show in this section how to extend them to all types. A concretisation function for type \(\tau\) maps an abstraction state and an abstract value in \(V^{\sharp}(\tau)\) to \(\mathcal{P}(V(\tau))\), a set of concrete values. We also define a function of concretisation \(\gamma_{\tau}\) which maps abstract Figure 5: Abstract Values for Specified Types skeletal environments to sets of concrete skeletal environments. 
\[\gamma_{\tau_{1}\times\ldots\times\tau_{n}}(\mathcal{A},t^{\sharp}) =\bigcup_{(\varphi_{1}^{\sharp},\ldots,\varphi_{n}^{\sharp})\in t^ {\sharp}}\gamma_{\tau_{1}}(\mathcal{A},v_{1}^{\sharp})\times\ldots\times\gamma_{ \tau_{n}}(\mathcal{A},v_{n}^{\sharp})\] \[\gamma_{\tau_{2}}(\mathcal{A},C\,v^{\sharp}) =\Big{\{}\,C\,v\mid C:(\tau_{1},\tau_{2}),\,v\in\gamma_{\tau_{2}} (\mathcal{A},v^{\sharp})\,\Big{\}}\] \[\gamma_{\tau_{1}\rightarrow\tau_{2}}(\mathcal{A},F) =\] \[\gamma_{\tau}(\mathcal{A},E^{\sharp}) =\Big{\{}\,E\,\left|\,\Gamma\vdash E\wedge\Gamma\vdash E^{\sharp} \wedge\forall x\in\mathrm{dom}(\Gamma),E(x)\in\gamma_{\Gamma(x)}(\mathcal{A},E^ {\sharp}(x))\,\,\right\}\] \[\gamma(\mathcal{A},\bot_{\tau}) =\emptyset\qquad\gamma(\mathcal{A},\top_{\tau})=V^{\sharp}(\tau)\] In the case of While, the concretisation function for ident and lit are immediate as they are flat lattices. The concretisation function for an interval \(i\) is the set of integers it contains: \(\gamma_{\mathtt{int}}(i)=\{\,n\mid n\in i\,\,\}\). Finally, the concretisation of an abstract store \(\sigma^{\sharp}\) is \[\gamma_{\mathtt{store}}(\sigma^{\sharp})=\Big{\{}\,\sigma\,\left|\,\mathrm{dom }(\sigma)=\mathrm{dom}(\sigma^{\sharp})\wedge\forall x\in\mathrm{dom}(\sigma), \sigma(x)\in\gamma_{\mathtt{int}}(\sigma^{\sharp}(x))\,\,\right\}\] To compare abstract values, we define partial orders that are relations, but we call them functions as they can be viewed as boolean functions. For every unspecified type \(\tau\), we assume a comparison function \(\sqsubseteq^{\sharp}_{\tau}\) which is a partial order between abstract values. It must satisfy the following property: for any value \(v^{\sharp}\in V^{\sharp}(\tau)\), we have \(\bot_{\tau}\sqsubseteq^{\sharp}_{\tau}v^{\sharp}\) and \(v^{\sharp}\sqsubseteq^{\sharp}_{\tau}\top_{\tau}\). For every other type, the comparison function is the smallest partial order that satisfies the following equations. \[C\,\,v^{\sharp}\sqsubseteq^{\sharp}_{\tau_{\tau}}C\,\,w^{\sharp} \,\Longleftrightarrow\,\,v^{\sharp}\sqsubseteq^{\sharp}_{\tau}w^{\sharp}\text { with }C:(\tau,\tau_{a})\] \[v^{\sharp}\sqsubseteq^{\sharp}_{\tau_{1}\times\ldots\times \tau_{n}}w^{\sharp}\,\Longleftrightarrow\,\forall(v_{1}^{\sharp},..,v_{n}^{ \sharp})\in v^{\sharp},\,\exists(w_{1}^{\sharp},..,w_{n}^{\sharp})\in w^{\sharp} \text{ such that }\forall i\in[1..n],\,v_{i}^{\sharp}\sqsubseteq^{\sharp}_{\tau_{1}}w_{i}^{\sharp}\] \[F_{1}\sqsubseteq^{\sharp}_{\tau_{1}\rightarrow\tau_{2}}F_{2}\, \Longleftrightarrow\,\left\{\begin{aligned} &(f,n)\in F_{1}\,\, \Longrightarrow\,(f,n)\in F_{2}\\ &(\Gamma,p,S,E^{\sharp}_{1})\in F_{1}\,\,\Longrightarrow\,\exists (\Gamma,p,S,E^{\sharp}_{2})\in F_{2},\,E^{\sharp}_{1}\sqsubseteq^{\sharp}_{ \Gamma}E^{\sharp}_{2}\\ & E^{\sharp}_{1}\sqsubseteq^{\sharp}_{\Gamma}E^{\sharp}_{2}\, \Longleftrightarrow\,\Gamma\vdash E^{\sharp}_{1}\wedge\Gamma\vdash E^{\sharp}_{2 }\wedge\forall x\in\mathrm{dom}(E^{\sharp}_{1}),\,E^{\sharp}_{1}(x)\sqsubseteq^{ \sharp}_{\Gamma(x)}E^{\sharp}_{2}(x)\\ &\qquad\qquad\qquad v^{\sharp}\sqsubseteq^{\sharp}_{\tau}\top_{ \tau}\qquad\bot_{\tau}\sqsubseteq^{\sharp}_{\tau}v^{\sharp}\end{aligned}\right.\] Most rules are straightforward. To compare two functions, all named closures of the left function must be in the right function. Moreover, for all closures in the left function, there must be a closure in the right function with the same pattern and skeleton, but with a bigger abstract environment. Abstract environments are compared using point-wise lifting. 
For our While language, we have \(i_{1}\sqsubseteq^{\sharp}_{\mathtt{int}}i_{2}\) if the interval \(i_{1}\) is included in \(i_{2}\), and \(\sigma^{\sharp}_{1}\sqsubseteq^{\sharp}_{\mathtt{store}}\sigma^{\sharp}_{2}\) if for all \(x\) in \(\mathrm{dom}(\sigma^{\sharp}_{1})\), we have \(\sigma^{\sharp}_{1}(x)\sqsubseteq^{\sharp}_{\mathtt{int}}\sigma^{\sharp}_{2}(x)\). **Definition 1**: _A concretion function \(\gamma_{\tau}\) is monotonic iff for any \(v_{1}^{\sharp}\sqsubseteq^{\sharp}_{\tau}v_{2}^{\sharp}\) and \(\mathcal{A}_{1}\sqsubseteq^{\sharp}\mathcal{A}_{2}\). we have \(\gamma_{\tau}(\mathcal{A}_{1},v_{1}^{\sharp})\subseteq\gamma_{\tau}(\mathcal{A} _{2},v_{2}^{\sharp})\)._ **Lemma 1**: \(\gamma_{\mathtt{ident}}\)_, \(\gamma_{\mathtt{lit}}\), \(\gamma_{\mathtt{int}}\) and \(\gamma_{\mathtt{store}}\) are monotonic._ For each type, an upper bound (or join) is defined. For every non-specified type \(\tau\), we assume an upper bound \(\sqcup^{\sharp}_{\tau}\). The definition of \(\sqcup^{\sharp}_{\mathtt{ident}}\) and \(\sqcup^{\sharp}_{\mathtt{lit}}\) have the usual definition for flat lattices. \(\sqcup^{\sharp}_{\mathtt{int}}\) is the convex hull of two intervals and \(\sqcup^{\sharp}_{\mathtt{store}}\) is the usual point-wise lifting of the abstract union of integers. Moreover, we note \(\nabla^{\sharp}_{\tt int}\) the widening on intervals (define below) and \(\nabla^{\sharp}_{\tt store}\) the point-wise lifting of the widening of intervals. We extend it for every other type. \[(C\;v^{\sharp})\sqcup^{\sharp}_{\tau_{2}}(C\;w^{\sharp})=C\;(v^{\sharp}\sqcup^{ \sharp}_{\tau_{1}}w^{\sharp})\text{ with }C:(\tau_{1},\tau_{2}) E^{\sharp}_{1}\sqcup^{\sharp}_{\tau}\Gamma E^{\sharp}_{2}= \left\{x\in\operatorname{dom}(\Gamma)\mapsto E^{\sharp}_{1}(x)\sqcup^{\sharp} _{\Gamma(x)}E^{\sharp}_{2}(x)\right\}\] \[(C\;v^{\sharp})\sqcup^{\sharp}_{\tau_{2}}(D\;w^{\sharp})=\top_{ \tau_{2}}\text{ with }C:(\tau_{1},\tau_{2})\wedge D:(\tau^{\prime}_{1},\tau_{2}) v^{\sharp}\sqcup^{\sharp}_{\tau}\top_{\tau}=\top_{\tau}\sqcup^{\sharp} _{\tau}v^{\sharp}=\top_{\tau}\] \[v^{\sharp}\sqcup^{\sharp}_{\tau_{1}\times\dots\times\tau_{n}}w^ {\sharp}=v^{\sharp}\cup w^{\sharp} v^{\sharp}\sqcup^{\sharp}_{\tau}\bot_{\tau}=\bot_{\tau}\sqcup^{\sharp} _{\tau}v^{\sharp}=v^{\sharp}\] \[F_{1}\sqcup^{\sharp}_{\tau_{1}\to\tau_{2}}F_{2}=F_{1}\cup F_{2}\] Joining two algebraic values with the same constructor is joining their parameters, and joining algebraic values with different constructors yields top. The join of abstract tuples or abstract functions is their union. Joining abstract environments is done by point-wise lifting. For each type, top is an absorbing element, and bottom is the neutral element. **Lemma 2**: \(\sqsubseteq^{\sharp}_{\tt ident}\)_, \(\sqsubseteq^{\sharp}_{\tt lit}\), \(\sqsubseteq^{\sharp}_{\tt int}\), \(\sqsubseteq^{\sharp}_{\tt store}\) are orders. \(\sqcup^{\sharp}_{\tt ident}\), \(\sqcup^{\sharp}_{\tt lit}\), \(\sqcup^{\sharp}_{\tt int}\), \(\sqcup^{\sharp}_{\tt store}\) give an upper bound._ **Lemma 3**: _If for all unspecified types \(\tau_{u}\), \(\gamma_{\tau_{u}}\) is monotonic, then for all \(\tau\), \(\gamma_{\tau}\) is also monotonic._ **Lemma 4**: _If for every unspecified type \(\tau_{u}\sqsubseteq^{\sharp}_{\tau_{u}}\) is an order and \(\sqcup^{\sharp}_{\tau_{u}}\) gives an upper bound, then for all \(\tau\), \(\sqsubseteq^{\sharp}_{\tau}\) is an order and \(\sqcup^{\sharp}_{\tau}\) gives an upper bound._ Finally, we give an abstract specification of unspecified terms. 
As an illustration, here are a few specifications from our running example. \[[\![{\it ToInt}]\!]^{\sharp}(n) =[n,n] [\![{\it add}]\!]^{\sharp}([n_{1},n_{2}],[m_{1},m_{2}]) =[n_{1}+m_{1},n_{2}+m_{2}]\] \[[\![{\it read}]\!]^{\sharp}(x,s^{\sharp}) =s^{\sharp}(x) [\![{\it write}]\!]^{\sharp}(x,s,[n_{1},n_{2}]) =s^{\sharp}\{x\mapsto[n_{1},n_{2}]\}\] **Definition 2**: _Let \(x\) be an unspecified term of type \(\tau\), such that \(\operatorname{na}(\tau)\). We say that \([\![x]\!]^{\sharp}\) is a **sound approximation** of \([\![x]\!]^{\tt ppt}\) if and only if:_ \[\forall\mathcal{A},[\![x]\!]^{\tt ppt}\subseteq\gamma(\mathcal{A},[\![x]\!]^{ \sharp})\] **Definition 3**: _Let \(f\) be an unspecified term of type \(\tau_{1}\to..\to\tau_{n}\to\tau\) where \(\operatorname{na}(\tau)\). We say that \([\![f]\!]^{\sharp}\) is a **sound approximation** of \([\![f]\!]^{\tt ppt}\) iff \(\forall v_{i}\in V^{\tt ppt}_{\tt pfg}(\tau_{i})\), \(\forall v^{\sharp}_{i}\in V^{\sharp}(\tau_{i})\), and for all abstract state \(\mathcal{A}\), if_ \[v_{i}\in\gamma_{t}(\mathcal{A},v^{\sharp}_{i})\] \[[\![f]\!]^{\sharp}(\mathcal{A},v^{\sharp}_{1},..,v^{\sharp}_{n}) =\mathcal{A}^{\prime},w^{\sharp}\] **Lemma 5**: _The abstract instantiations of the unspecified terms for While are sound approximation of the concrete instantiations of the unspecified terms._ ### Abstract Interpretation of Skel The abstract interpretation of skeletons is given on Figure 6. It maintains a callstack of specified function calls which is used to prevent infinite computations by detecting identical nested calls. A callstack is an ordered list of frames. The set of callstacks \(\Pi\) is defined as: \[\overline{\mathbf{\mathsf{emp}}\in\Pi}\] \[\mathcal{A}\mbox{ an AI-state}\qquad\mathbf{\mathsf{val}}\ f:\tau_{1} \rightarrow\ldots\rightarrow\tau_{n}\rightarrow\tau\ =\ \mathsf{t}\in\mathcal{S}\qquad\mbox{na}(\tau)\qquad v_{i}\in V^{\sharp}(\tau_ {i})\qquad\pi\in\Pi\] \[(f,\mathcal{A},[v_{1},..,v_{n}])::\pi\in\Pi\] The abstract interpretation of skeletons is similar to the big-step interpretation: the evaluation of terms is almost unchanged except that evaluating a closure or a tuple returns a singleton. When evaluating a skeleton (branch, let-binding, or application), a state of the abstract interpretation is carried through the computations. In the Branch rule, all branches are evaluated and joined instead of only one branch being evaluated. Pattern matching now returns set of environments rather than a single one (explained later). As a consequence, the LetIn rule may evaluate \(S_{2}\) in several abstract environments. This flexibility in the control-flow of the abstract interpreter allows us to do control flow analysis for \(\lambda\)-calculus. The App rule evaluates all terms and pass a list of values to the application relation, defined in Figure 7. Because the abstraction of a function is a set of closures and named closures, the App-Set rule evaluates each one individually. The Base rule returns the remaining value when all arguments have been Figure 6: Abstract Interpretation of Skeletons and Terms processed. The Clos rule evaluates the body of the function \(S\) in all abstract environments returned by the matching of the pattern against the argument. The Spec rule evaluates the call to a specified function and maintains invariants. Invariants depend on the analysis and the AI-state, therefore, language-dependent update functions can be provided to to maintain invariants before and after a call. 
The update functions must respect the following monotonicity constraints in order to ensure soundness:

**Definition 4**: _The update functions are said to be monotonic if and only if:_ \[\begin{array}{l}\mathbf{update}^{in}_{f}(\mathcal{A},[v_{1}^{\sharp},..,v_{k}^{\sharp}])=\mathcal{A}^{\prime},[v_{1}^{\prime\sharp},..,v_{k}^{\prime\sharp}]\implies\mathcal{A}\sqsubseteq^{\sharp}\mathcal{A}^{\prime}\wedge(v_{1}^{\sharp},..,v_{k}^{\sharp})\sqsubseteq^{\sharp}(v_{1}^{\prime\sharp},..,v_{k}^{\prime\sharp})\\ \mathbf{update}^{out}_{f}(\mathcal{A},[v_{1}^{\sharp},..,v_{k}^{\sharp}],v^{\sharp})=\mathcal{A}^{\prime},v^{\prime\sharp}\implies\mathcal{A}\sqsubseteq^{\sharp}\mathcal{A}^{\prime}\wedge v^{\sharp}\sqsubseteq^{\sharp}v^{\prime\sharp}\end{array}\]

The update functions of While record the abstract store flowing into and out of each program point: \[\begin{array}{l}\mathbf{update}^{in}_{\text{eval\_stmt}}(\mathcal{A},[s^{\sharp},\mathrm{pp}])=\mathcal{A}\{(\mathrm{pp},\mathtt{In})\mapsto s^{\sharp}\},[s^{\sharp},\mathrm{pp}]\\ \mathbf{update}^{out}_{\text{eval\_stmt}}(\mathcal{A},[s^{\sharp},\mathrm{pp}],s^{\prime\sharp})=\mathcal{A}\{(\mathrm{pp},\mathtt{Out})\mapsto s^{\prime\sharp}\},s^{\prime\sharp}\end{array}\]
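As a small executable illustration of the AI-state bookkeeping performed by these update functions, the Python sketch below records abstract stores (variables mapped to intervals) under the (pp, In) and (pp, Out) keys. The dictionary encoding, the tuple representation of program points, and the remark about joining/widening are our own illustrative choices, not the paper's implementation.

```python
# A minimal sketch (not the paper's implementation) of the AI-state updates:
# the AI-state maps (program point, flow tag) to abstract stores, and an
# abstract store maps variables to intervals.
def update_in(ai_state, store, pp):
    # Record the abstract store flowing into program point pp, as in
    # update^in_eval_stmt above. A real analysis would typically join or
    # widen with any previously recorded store to stabilise loops.
    ai_state[(pp, "In")] = store
    return ai_state, [store, pp]

def update_out(ai_state, pp, result_store):
    # Record the abstract store produced at pp (the Out flow tag).
    ai_state[(pp, "Out")] = result_store
    return ai_state, result_store

ai = {}
ai, args = update_in(ai, {"x": (0, 0)}, ())        # () stands for the root point
ai, out = update_out(ai, (), {"x": (0, 3)})
print(ai)  # {((), 'In'): {'x': (0, 0)}, ((), 'Out'): {'x': (0, 3)}}
```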
### Correctness of the Abstract Interpretation

Our methodology aims at defining mathematically correct abstract interpreters from Skeletal Semantics. In this section, we present a theorem of correctness stating that the abstract interpretation of Skel computes a sound approximation of the big-step interpretation of Skel.

**Theorem 1**: _Let \(\mathcal{S}\) be a Skeletal Semantics with unspecified terms \(\mathit{Te}\) and unspecified types \(\mathit{Ty}\), and let \(\mathit{E}\) and \(\mathit{E}^{\sharp}\) be a concrete and abstract environment, respectively. Suppose_

* \(\forall x\in\mathit{Te}\), \(\llbracket x\rrbracket^{\sharp}\) _is a sound approximation of_ \(\llbracket x\rrbracket^{\texttt{ppt}}\)_._
* \(\forall\tau\in\mathit{Ty}\), \(\gamma_{\tau}\) _is monotonic._
* \(\textbf{update}^{\mathit{in}}\) _and_ \(\textbf{update}^{\mathit{out}}\) _are monotonic._

_Then:_ \[\left.\begin{array}{c}E\in\gamma_{\Gamma}(\mathcal{A}_{0},\mathit{E}^{\sharp})\\ E,S\Downarrow v\\ \textbf{emp},\mathcal{A}_{0},\mathit{E}^{\sharp},S\Downarrow^{\sharp}v^{\sharp},\mathcal{A}\end{array}\right\}\implies v\in\gamma(\mathcal{A},v^{\sharp})\]

Therefore, to prove the soundness of the analysis, it is sufficient to prove that the abstract instantiations of terms are sound approximations of the concrete ones, and that the update functions and concretisation functions are monotonic. Let \(\sigma_{0}\in V^{\texttt{ppt}}(\texttt{store})\) and \(\sigma_{0}^{\sharp}\in V^{\sharp}(\texttt{store})\) be the concrete and abstract stores with empty domain. Let \(E_{0}=\{\,s\mapsto\sigma_{0},t\mapsto\underline{\varepsilon}\,\}\) and \(E_{0}^{\sharp}=\left\{\,s\mapsto\sigma_{0}^{\sharp},t\mapsto\underline{\varepsilon}\,\right\}\) be a concrete and an abstract Skel environment. We recall that \(\underline{\varepsilon}\) is the program point of the root of \(\mathbf{prg}\), the analysed program. Let \(\mathcal{A}_{0}\) be the empty mapping from program points and flow tags (\(\mathtt{In}\) or \(\mathtt{Out}\)) to abstract stores.
**Lemma 7**: \(\sigma_{0}\in\gamma_{\mathit{store}}(\mathcal{A}_{0},\sigma_{0}^{\sharp})\)__ **Lemma 8**: _Let \(\Gamma=\{\,s\mapsto\texttt{store},t\mapsto\texttt{stmt}\,\}\), \(E_{0}\in\gamma_{\Gamma}(\mathcal{A},E_{0}^{\sharp})\)._ The abstract interpreter computes an abstract store that is a correct approximation of the concrete store returned by the big-step semantics. **Theorem 2**: \[\left.\begin{array}{c}E_{0},\textit{eval\_stmt}\,(s,t)\Downarrow^{PP}\sigma \\ \textbf{emp},\mathcal{A}_{0},E_{0}^{\sharp},\textit{eval\_stmt}\,(s,t) \Downarrow^{\sharp}\sigma^{\sharp},\mathcal{A}\end{array}\right\}\implies \sigma\in\gamma(\mathcal{A},\sigma^{\sharp})\] As an example, take \(\mathbf{prg}\equiv\texttt{x}\) := 0; while (x < 3) x := x + 1. The concrete and abstract interpretations will find that \[E_{0},\textit{eval\_stmt}\,(s,t)\Downarrow^{PP}\{\,x\mapsto 3\,\}\] \[\textbf{emp},\mathcal{A}_{0},E_{0}^{\sharp},\textit{eval\_stmt}\,( s,t)\Downarrow^{\sharp}\{\,x\mapsto[0,+\infty]\,\},\mathcal{A}\] In accordance with Theorem 2, we observe that \(\{\,x\mapsto 3\,\}\in\gamma(\mathcal{A},\{\,x\mapsto[0,+\infty]\,\})\) The abstract interpreter returns an imprecise result. Currently, our method fails to properly take into account the guards: the conditions of loops or conditional branchings are not used to refine the abstract values. In the previous While program, the guard of the loop is not used to get a precise abstract store in or after the loop. The skeletal semantics of the While language makes it unclear how to use the guards to modify the store, as it is syntactically the same before and after the evaluation of the condition. The precision of the analysis also depends on the skeletal semantics of the language. An easy fix for our precision issue would be to modify the type of isZero and isNotZero functions such that they have type \((\mathtt{store}\times\mathtt{int})\rightarrow\mathtt{store}\). The abstract instantiations of these functions could then be used to refine the abstract stores. ## 6 Related Work Our work is part of a large research effort to define sound analyses and build correct abstract interpreters from semantic description of languages. At its core, our approach is the Abstract Interpretation [5, 6] of a semantic meta-language. Abstract Interpretation is a method designed by Cousot and Cousot to define sound static analyses from a concrete semantics. In his Marktoberdorf lectures [4], Cousot describes a systematic way to derive an abstract interpretation of an imperative language from a concrete semantics and mathematically proved sound. We chose to define the Abstract Interpretation of Skel, as it is designed to mechanise semantics of languages. The benefit of analysing a meta-language is that a large part of the work to define and prove the correctness of the analysis is done once for every semantics mechanised with Skel. However, it is often less precise than defining a language specific abstract interpretation. Moreover, there have been several papers describing methods to derive abstract interpretation from different types of concrete semantics [5, 21, 14], we chose to derive abstract interpreters from a big-step semantics of Skel. Schmidt [21] shows how to define an abstract interpretation for \(\lambda\)-calculus from a big-step semantics defined co-inductively. The abstract interpretation of Skel and its correctness proof follow the methods described in the paper. However Skel has more complex constructs than \(\lambda\)-calculus, especially branches. 
Moreover, the big-step of Skel is defined inductively, thus reasoning about non-terminating program is not possible. Also, to prove the correctness of the abstract interpretation of Skel, we relate the big-step derivation tree to the abstract derivation tree, similarly to Schmidt, but a key difference is that our proof is inductive when Schmidt's proof is co-inductive. Lim and Reps propose the TSL system [13]: a tool to define machine-code instruction set and abstract interpretations. The specification of an instruction set in TSL is compiled into a Common Intermediate Representation (CIR). An abstract interpretation is defined on the CIR, therefore an abstract interpreter is derivable from any instruction set description. However, the TSL system is aimed at specifying and analysing machine code, and not languages in general. Moreover, it is unclear how it would be possible to define analyses on languages with more complex control-flow, like \(\lambda\)-calculus. In the paper on Skeletal semantics, Bodin _et al._[2] used skeletal semantics to relate concrete and abstract interpretations in order to prove correctness. An important difference between that work and the present is that their resulting abstract semantics is not computable, whereas our abstract interpretation can be executed as an analysis, as demonstrated by our implementation [20]. Moreover, our method computes an AI-state that collects information throughout the interpretation and allows to use widening using the update functions, rather than computing an Input/Output relation. The idea of defining an abstract interpreter of a meta-language to define analyses for languages has been explored, for example by Keidel, Poulsen and Erdweg [10]. They use arrows [7] as meta-language to describe interpreters. The concrete and abstract interpreters share code using the unified interface of arrows. By instantiating language-dependent parts for the concrete interpretation and the abstract interpretation, they obtain two interpreters that can be proven sound compositionally by proving that the abstract language-dependent parts are sound approximation of the concrete language-dependent parts, similarly to Skel. However, we chose to use a dedicated meta-language, Skel, as its library [17] makes defining interpreters for Skel convenient and one objective is to use the NecroCoq tool [16] to generate mechanised proofs that our derived abstract interpreters are correct. ## 7 Conclusion In this paper, we propose a methodology for mechanically deriving correct abstract interpreters from mechanised semantics. Our approach is based on Skeletal Semantics and its meta-language Skel, used to write a semantic description of a language. It consists of two independent parts. First, we define an abstract interpreter for Skel which is target language-agnostic and is the core of all derived abstract interpreters from Skeletal Semantics. The abstract interpreter of Skel is proved correct with respect to the operational semantics of Skel. Second, for a given target language to analyse, abstractions must be defined. The abstract domains are defined by instantiating the unspecified types and providing comparisons and abstract unions of abstract values. The semantics of the language-specific parts are defined by instantiating the unspecified terms. By combining the abstract interpreter of Skel and the abstractions of the target language, we derive a working abstract interpreter specialised for the target language, obtained by meta-interpretation of Skel. 
We prove a theorem which states that the abstract interpreter of the target language is correct if the abstract instantiation of the unspecified terms are sound approximation of the concrete instantiation of the unspecified terms. We illustrate our method to build abstract interpreters on two examples: a value analysis for a small imperative language, and a CFA for \(\lambda\)-calculus (in the long version [8]). The approach has been evaluated by an implementation of a tool [20] to generate abstract interpreters from any skeletal semantics. It was tested on While and \(\lambda\)-calculus and resulted in executable, sound analyses validating the feasibility of the approach. The current abstract interpreters that we obtain have limitations to their precision. Part of this imprecision stems from the fact that we generate abstract interpreters for any language based on an abstract interpreter for the Skel meta-language skeletal semantics. An interesting feature of the approach is that some precision can be gained in a generic fashion by improving the underlying abstract interpretation of Skel. For example, our interval analysis for While does not refine the abstract values when entering a part of the program guarded by a condition. Take If(Equal(x, 0), Skip, Assign(x, 0)), evaluated in store where \(\{x\mapsto\top\}\). Our abstract interpreter returns state \(\{x\mapsto\top\}\). Indeed, the condition can be true or false thus both branches of the if construct are evaluated but each one is computed in the store \(\{x\mapsto\top\}\) because the condition is not used to refine the abstract values. This issue can be addressed, e.g., by keeping a trace of the execution in order to know if we are computing a statement guarded by a condition. Dealing with this issue at the level of the meta-language analysis benefits all generated analyses.
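As a sketch of the guard-refinement fix suggested above, the following Python fragment (our own illustration with an interval domain for int, not the paper's implementation) refines the abstract value of x with the condition Equal(x, 0) before analysing each branch of If(Equal(x, 0), Skip, Assign(x, 0)), yielding [0, 0] instead of top after the join.

```python
# A minimal sketch of refining interval abstract values with a guard, for the
# example If(Equal(x, 0), Skip, Assign(x, 0)) discussed above.
TOP = (float("-inf"), float("inf"))

def meet(i, j):
    """Interval intersection; None stands for the empty (bottom) interval."""
    lo, hi = max(i[0], j[0]), min(i[1], j[1])
    return (lo, hi) if lo <= hi else None

def join(i, j):
    """Convex hull of two intervals."""
    return (min(i[0], j[0]), max(i[1], j[1]))

def analyse_if_eq_zero(store):
    # True branch: the guard tells us x = 0, and Skip leaves the store unchanged.
    then_store = dict(store)
    then_store["x"] = meet(store["x"], (0, 0))
    # False branch: Assign(x, 0) sets x to [0, 0] regardless of the guard.
    else_store = dict(store)
    else_store["x"] = (0, 0)
    # Join the two branch results, as the abstract Branch rule does.
    return {v: join(then_store[v], else_store[v]) for v in store}

print(analyse_if_eq_zero({"x": TOP}))  # {'x': (0, 0)} instead of the imprecise top
```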
This paper describes a methodology for defining an executable abstract interpreter from a formal semantics of a programming language. Our method is based on Skeletal Semantics and an abstract interpretation of its semantic meta-language. The correctness of the abstract interpretation is guaranteed by compositionality, provided correctness is established for the core language-specific constructs. The genericness of the method is illustrated by defining a value analysis for a small imperative language.
2301.13424
Inradius of random lemniscates
A classically studied geometric property associated to a complex polynomial $p$ is the inradius (the radius of the largest inscribed disk) of its (filled) lemniscate $\Lambda := \{z \in \mathbb{C}:|p(z)| < 1\}$. In this paper, we study the lemniscate inradius when the defining polynomial $p$ is random, namely, with the zeros of $p$ sampled independently from a compactly supported probability measure $\mu$. If the negative set of the logarithmic potential $U_{\mu}$ generated by $\mu$ is non-empty, then the inradius is bounded from below by a positive constant with overwhelming probability. Moreover, the inradius has a deterministic limit if the negative set of $U_{\mu}$ additionally contains the support of $\mu$. On the other hand, when the zeros are sampled independently and uniformly from the unit circle, then the inradius converges in distribution to a random variable taking values in $(0,1/2)$. We also consider the characteristic polynomial of a Ginibre random matrix whose lemniscate we show is close to the unit disk with overwhelming probability.
Manjunath Krishnapur, Erik Lundberg, Koushik Ramachandran
2023-01-31T05:39:25
http://arxiv.org/abs/2301.13424v1
# Inradius of random lemniscates ###### Abstract. A classically studied geometric property associated to a complex polynomial \(p\) is the inradius (the radius of the largest inscribed disk) of its (filled) lemniscate \(\Lambda:=\{z\in\mathbb{C}:|p(z)|<1\}\). In this paper, we study the lemniscate inradius when the defining polynomial \(p\) is random, namely, with the zeros of \(p\) sampled independently from a compactly supported probability measure \(\mu\). If the negative set of the logarithmic potential \(U_{\mu}\) generated by \(\mu\) is non-empty, then the inradius is bounded from below by a positive constant with overwhelming probability. Moreover, the inradius has a deterministic limit if the negative set of \(U_{\mu}\) additionally contains the support of \(\mu\). On the other hand, when the zeros are sampled independently and uniformly from the unit circle, then the inradius converges in distribution to a random variable taking values in \((0,1/2)\). We also consider the characteristic polynomial of a Ginibre random matrix whose lemniscate we show is close to the unit disk with overwhelming probability. ## 1. Introduction Let \(p(z)\) be a polynomial of degree \(n\) and \(\Lambda\) be its (filled) lemniscate defined by \(\Lambda=\{z:|p(z)|<1\}\). Denote by \(\rho(\Lambda)\) the inradius of \(\Lambda\). By definition, this is the radius of the largest disk that is completely contained in \(\Lambda\). In this paper, we study the inradius of random lemniscates for various models of random polynomials. The lemniscate \(\{z:|z^{n}-1|<1\}\) has an inradius asymptotically proportional to \(1/n\). In 1958, P. Erdos, F. Herzog, and G. Piranian posed a number of problems [10] on geometric properties of polynomial lemniscates. Concerning the inradius, they asked [10, Problem 3] whether the rate of decay in the example \(\{|z^{n}-1|=1\}\) is extremal, that is, whether there exists a positive constant \(C\) such that for any monic polynomial of degree \(n\), all of whose roots lie in the closed unit disk, the inradius \(\rho\) of its lemniscate \(\Lambda\) satisfies \(\rho\geq\frac{C}{n}\). This question remains open. C. Pommerenke [33] showed in this context that the inradius satisfies the lower bound \(\rho\geq\frac{1}{2e\,n^{2}}\). Our results, which we state below in Sec. 1.4 of the Introduction, show within probabilistic settings that the _typical_ lemniscate admits a much better lower bound on its inradius. Namely, if the zeros of \(p\) are sampled independently from a compactly supported measure \(\mu\) whose logarithmic potential has non-empty negative set, then the inradius of \(\Lambda\) is bounded below by a positive constant with overwhelming probability, see Theorem 1.1 below. Let us provide some insight on this result and explain why the logarithmic potential of \(\mu\) plays an important role. First, the lemniscate \(\Lambda\) can alternatively be described as the sublevel set \(\{\frac{1}{n}\log|p(z)|<0\}\) of the discrete logarithmic potential \(\frac{1}{n}\log|p(z)|=\frac{1}{n}\sum\log|z-z_{k}|\) where \(z_{k}\) are the zeros of \(p(z)\). For fixed \(z\) the sum \(\frac{1}{n}\sum\log|z-z_{k}|\) is a Monte-Carlo approximation for the integral defining the logarithmic potential \(U_{\mu}(z)\) of \(\mu\), and, in particular, it converges pointwise, by the law of large numbers, to \(U_{\mu}(z)\). With the use of large deviation estimates, we can further conclude that each \(z\) in the negative set \(\Omega^{-}\) of \(U_{\mu}\) is in \(\Lambda\) with overwhelming probability.
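The following short numerical sketch (our illustration, not part of the paper) makes this heuristic concrete: it samples zeros uniformly from a disk, compares the empirical potential \(\frac{1}{n}\sum_k\log|z-z_k|\) with the closed-form potential of the uniform measure on the disk (formula (3) of Example 1.7 below), and tests whether a given point lies in the lemniscate. The radius, degree and test point are arbitrary choices.

```python
# Numerical sketch (ours, not from the paper): the empirical potential
# (1/n) * sum_k log|z - z_k| approximates U_mu(z), so a point with U_mu(z) < 0
# typically lies in the lemniscate {|p_n| < 1} once n is moderately large.
import numpy as np

rng = np.random.default_rng(0)

def sample_uniform_disk(n, r=1.0):
    # n i.i.d. zeros, uniform on the disk of radius r.
    theta = rng.uniform(0, 2 * np.pi, n)
    rad = r * np.sqrt(rng.uniform(0, 1, n))
    return rad * np.exp(1j * theta)

def empirical_potential(z, zeros):
    # (1/n) log|p_n(z)| with p_n(z) = prod_k (z - z_k).
    return float(np.mean(np.log(np.abs(z - zeros))))

def disk_potential(z, r=1.0):
    # Closed-form U_mu for the normalized area measure on the disk of radius r
    # (formula (3) of Example 1.7 below).
    a = abs(z)
    return np.log(a) if a >= r else (a ** 2 - r ** 2) / (2 * r ** 2) + np.log(r)

z, n = 0.4 + 0.3j, 2000
zeros = sample_uniform_disk(n)
print("empirical potential:", empirical_potential(z, zeros))
print("U_mu(z)            :", disk_potential(z))
print("z in lemniscate    :", empirical_potential(z, zeros) < 0)
```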
The property of holding with overwhelming probability survives (by way of a union bound) when taking an intersection of polynomially many such events. This fact, together with a suitable uniform estimate for the derivative \(p^{\prime}(z)\) (for which we can use a Bernstein-type inequality), allows for a standard epsilon-net argument showing that an arbitrary compact subset of \(\Omega^{-}\) is contained in \(\Lambda\) with overwhelming probability. Since \(\Omega^{-}\) is assumed nonempty, this leads to the desired lower bound on the inradius, see the proof of Theorem 1.1 in Section 3 for details. Under an additional assumption that the negative set \(\Omega^{-}\) of the logarithmic potential of \(\mu\) contains the support of \(\mu\), the inradius converges to the inradius of \(\Omega^{-}\) almost surely, see Corollary 1.2; in particular, the inradius has a deterministic limit. On the other hand, for certain measures \(\mu\), the inradius does not have a deterministic limit and rather converges in distribution to a nondegenerate random variable, see Theorem 1.5 addressing the case when \(\mu\) is uniform measure on the unit circle. We also consider the lemniscate associated to the characteristic polynomial of a random matrix sampled from the Ginibre ensemble, and we show that the inradius is close to unity (in fact the whole lemniscate is close to the unit disk) with overwhelming probability, see Theorem 1.6. See Section 1.4 below for precise statements of these results along with some additional results giving further insight on the geometry of \(\Lambda\). ### Previous results on random lemniscates The current paper fits into a series of recent studies investigating the geometry and topology of random lemniscates. Let us summarize previous results in this direction. We note that the lemniscates studied in the results cited below, in contrast to the filled lemniscates of the current paper, are level sets (as opposed to sublevel sets). Partly motivated to provide a probabilistic counterpart to the Erdos lemniscate problem on the extremal length of lemniscates [10], [5], [11], [12], the second and third authors in [23] studied the arclength and topology of a random polynomial lemniscate in the plane. When the polynomial has i.i.d. Gaussian coefficients, it is shown in [23] that the average length of its lemniscate approaches a constant. They also showed that with high probability the length is bounded by a function with arbitrarily slow rate of growth, which means that the length of a lemniscate typically satisfies a much better estimate than the extremal case. It is also shown in [23] that the number of connected components of the lemniscate is asymptotically \(n\) (the degree of the defining polynomial) with high probability, and there is at least some fixed positive probability of the existence of a "giant component", that is, a component having at least some fixed positive length. Of relevance to the focus of the current paper, we note that the proof of the existence of the giant component in [23] shows that for a fixed \(0<r<1\), there is a positive probability that the inradius \(\rho\) of the lemniscate satisfies the lower bound \(\rho>r\). Inspired by Catanese and Paluszny's topological classification [7] of generic polynomials (in terms of the graph of the modulus of the polynomial with equivalence up to diffeomorphism of the domain and range), in [9] the second author with M. Epstein and B. 
Hanin studied the so-called lemniscate tree associated to a random polynomial of degree \(n\). The lemniscate tree of a polynomial \(p\) is a labelled, increasing, binary, nonplane tree that encodes the nesting structure of the singular components of the level sets of the modulus \(|p(z)|\). When the zeros of \(p\) are i.i.d. sampled uniformly at random according to a probability density that is bounded with respect to Haar measure on the Riemann sphere, it is shown in [9] that the number of branches (nodes with two children) in the induced lemniscate tree is \(o(n)\) with high probability, whereas a lemniscate tree sampled uniformly at random from the combinatorial class has asymptotically \(\left(1-\frac{2}{\pi}\right)n\) many branches on average. In [21], partly motivated by a known result ([11], [43]) stating that the maximal length of a rational lemniscate on the Riemann sphere is \(2\pi n\), the second author with A. Lerario studied the geometry of a random rational lemniscate and showed that the average length on the Riemann sphere is asymptotically \(\frac{\pi^{2}}{2}\sqrt{n}\). Topological properties (the number of components and their nesting structure) were also considered in [21], where the number of connected components was shown to be asymptotically bounded above and below by positive constants times \(n\). Z. Kabluchko and I. Wigman subsequently established an asymptotic limit law for the number of connected components in [16] by adapting a method of F. Nazarov and M. Sodin [28] using an integral geometry sandwich and ergodic theory applied to a translation-invariant ensemble of planar meromorphic lemniscates obtained as a scaling limit of the rational lemniscate ensemble. ### Motivation for the study of lemniscates The study of lemniscates has a long and rich history with a wide variety of applications. The problem of computing the length of Bernoulli's lemniscate played a role in the early study of elliptic integrals [1]. Hilbert's lemniscate theorem and its generalizations [26] show that lemniscates can be used to approximate rather arbitrary domains, and this density property contributes to the importance of lemniscates in many of the applications mentioned below. In some settings, sequences of approximating lemniscates arise naturally, for example in holomorphic dynamics [25, p. 159], where it is simple to construct a nested sequence of "Mandelbrot lemniscates" that converges to the Mandelbrot set. In the classical inverse problem of logarithmic potential theory--to recover the shape of a two-dimensional object with uniform mass density from the logarithmic potential it generates outside itself--uniqueness has been shown to hold for lemniscate domains [37]. This is perhaps surprising in light of Hilbert's lemniscate theorem and the fact that the inverse potential problem generally suffers from non-uniqueness [40]. Since lemniscates are real algebraic curves with useful connections to complex analysis, they have frequently received special attention in studies of real algebraic curves, for instance in the study of the topology of real algebraic curves [7], [2]. Lemniscates such as the Arnoldi lemniscate appear in applications in numerical analysis [39].
Lemniscates have seen applications in two-dimensional shape compression, where the "fingerprint" of a shape constructed from conformal welding simplifies to a particularly convenient form--namely the \(n\)th root of a Blaschke product--in the case the two-dimensional shape is assumed to be a degree-\(n\) lemniscate [8], [44], [35]. Lemniscates have appeared in studies of moving boundary problems of fluid dynamics [18], [24], [19]. In the study of planar harmonic mappings, rational lemniscates arise as the critical sets of harmonic polynomials [17], [22] as well as critical sets of lensing maps arising in the theory of gravitational lensing [30, Sec. 15.2.2]. Lemniscates also have appeared prominently in the theory and application of conformal mapping [3], [15], [13]. See also the recent survey [36] which elaborates on some of the more recent of the above mentioned lines of research. ### Definitions and Notation Throughout the paper, \(\mu\) will denote a Borel probability measure with compact support \(S\subset\mathbb{C}\). The logarithmic potential of \(\mu\) is defined by \[U_{\mu}(z)=\int_{S}\log|z-w|d\mu(w).\] It is well known that \(U_{\mu}\) is a subharmonic function in the plane, and harmonic in \(\mathbb{C}\setminus S\). For such \(\mu\), we denote the associated negative and positive sets of its potential by \[\Omega^{-}=\{z\in\mathbb{C}:U_{\mu}(z)<0\},\ \Omega^{+}=\{z\in\mathbb{C}:U_{\mu}(z)>0\}.\] It is easy to see that \(\Omega^{-}\) is a (possibly empty) bounded open set. **Assumptions on the measure.** Let \(\mu\) be a Borel probability measure with compact support \(S\subset\mathbb{C}\). We define the following progressively stronger conditions on \(\mu\). (A) For each compact \(K\subset\mathbb{C}\), \[C(K)=\sup_{z\in K}\int_{S}\left(\log|z-w|\right)^{2}d\mu(w)<\infty.\] (B) There is some \(C<\infty\) and \(\varepsilon>0\) such that for all \(z\in\mathbb{C}\) and all \(r\leq 1\), we have \[\mu\left(B(z,r)\right)\leq\frac{C}{(\log(1/r))^{2+\varepsilon}}.\] (C) There exists \(\delta>0\) such that \[\sup_{z\in\mathbb{C}}\int_{S}\frac{d\mu(w)}{|z-w|^{\delta}}<\infty.\] (D) There is some \(C<\infty\) and \(\varepsilon>0\) such that for all \(z\in\mathbb{C}\) and all \(r>0\), we have \[\mu\left(B(z,r)\right)\leq Cr^{\varepsilon}.\] ### Main results In all theorems (except Theorem 1.6), we have the following setting: **Setting**: \(\mu\) is a compactly supported probability measure on \(\mathbb{C}\) with support \(S\). The random variables \(X_{i}\) are i.i.d. from the distribution \(\mu\). We consider the random polynomial \(p_{n}(z):=(z-X_{1})\dots(z-X_{n})\) and its lemniscate \(\Lambda_{n}:=\{z\ :\ |p_{n}(z)|<1\}\). We write \(\rho_{n}=\rho(\Lambda_{n})\) for the inradius of \(\Lambda_{n}\). Throughout the paper, w.o.p. means with overwhelming probability, i.e., with probability at least \(1-e^{-cn}\) for some \(c>0\). The theorems below concern the random lemniscate \(\Lambda_{n}\). Observe that \(\Lambda_{n}\) consists of all \(z\) for which \(\log|p_{n}(z)|<0\), or what is the same, \[\frac{1}{n}\sum_{k=1}^{n}\log|z-X_{k}|<0.\] By the law of large numbers, the quantity on the left converges to \(U_{\mu}(z)\) pointwise. Hence we may expect the asymptotic behaviour of \(\Lambda_{n}\) to be described in terms of \(U_{\mu}\) and its positive and negative sets \(\Omega^{+},\Omega^{-}\). The first three theorems make this precise under different conditions on the underlying measure \(\mu\). **Theorem 1.1**.: _Assume that \(\mu\) satisfies assumption (A).
Suppose that \(\Omega^{-}\neq\emptyset\) and let \(\rho=\rho(\Omega^{-}).\) Fix compact sets \(K\subset\Omega^{-}\) and \(L\subset\Omega^{+}\setminus S.\) Then for all large \(n\),_ \[K\subset\Lambda_{n},\quad\text{w.o.p.},\quad\text{and}\quad L\subset\Lambda_{n}^{c}\quad\text{w.o.p.}\] _In particular, if \(\rho_{n}\) denotes the inradius of \(\Lambda_{n}\), then_ \[\rho_{n}\geq a\quad\text{w.o.p.},\quad\forall a\in(0,\rho).\] **Corollary 1.2**.: _In the setting of Theorem 1.1, \(\liminf\rho_{n}\geq\rho\) a.s. Further, if \(S\subseteq\Omega^{-}\), then \(\rho_{n}\to\rho\) a.s._ Ideally, we would have liked to say that a compact set \(L\subseteq\Omega^{+}\) is contained inside \(\Lambda_{n}^{c}\) w.o.p. However, this is clearly not true if some of the roots fall inside \(L\). Making the stronger assumption \((D)\) on the measure and further assuming that \(U_{\mu}\) is bounded below by a positive number on \(L\), we show that \(L\) is almost entirely contained in \(\Lambda_{n}^{c}\). **Theorem 1.3**.: _Let \(\mu\) satisfy assumption (D). Let \(L\) be a compact subset of \(\{U_{\mu}\geq m\}\) for some \(m>0\). Then there exists \(c_{0}>0\) such that_ \[\Lambda_{n}\cap L\subset\bigcup_{k=1}^{n}B(X_{k},e^{-c_{0}n}),\quad\text{w.o.p.}\] In particular, if \(U_{\mu}\geq m\) everywhere, then the whole lemniscate is small. It suffices to assume that \(U_{\mu}\geq m\) on the support of \(\mu\), by the minimum principle for potentials (Theorem 3.1.4 in [34]). **Corollary 1.4**.: _Suppose \(\mu\) satisfies assumption (D) and \(U_{\mu}\geq m\) on \(S\). Then there is a \(c_{0}>0\) such that \(\Lambda_{n}\subset\bigcup_{k=1}^{n}B(X_{k},e^{-c_{0}n})\) and \(\rho_{n}\leq ne^{-c_{0}n}\) w.o.p._ A class of examples illustrating Theorem 1.1 and Theorem 1.3 is given at the end of the section. What happens when the potential \(U_{\mu}\) vanishes on a non-empty open set? In this case \(\log|p_{n}|\) has zero mean, and is (approximately) equally likely to be positive or negative. Because of this, one may expect that the randomness in \(\Lambda_{n}\) and \(\rho_{n}\) persists in the limit and we can at best hope for a convergence in distribution. The particular case when \(\mu\) is uniform on the unit circle is dealt with in the following theorem. **Theorem 1.5**.: _Let \(\mu\) be the uniform probability measure on \(\mathbb{S}^{1}\), the unit circle in the complex plane. Then, \(\rho_{n}\stackrel{{ d}}{{\to}}\rho\) for some random variable \(\rho\) taking values in \((0,\frac{1}{2})\). Further, \(\mathbb{P}\{\rho<\varepsilon\}>0\) and \(\mathbb{P}\{\rho>\frac{1}{2}-\varepsilon\}>0\) for every \(\varepsilon>0\)._ As shown in the proof of Theorem 1.5, the random function \(\log|p_{n}(z)|\) converges, after appropriate normalization, almost surely to a nondegenerate Gaussian random function on \(\mathbb{D}\), and this convergence underlies the limiting random inradius \(\rho\). We note that similar methods can be used to study other measures \(\mu\) for which \(U_{\mu}\) vanishes on a non-empty open set (such as other instances where \(\mu\) is the equilibrium measure of a region with unit capacity); however, the case of the uniform measure on the circle is rather special, as the resulting random function \(\log|p_{n}(z)|\) as well as its limiting Gaussian random function has a deterministic zero at the origin (which is responsible for the limiting inradius taking values only up to half the radius of \(\mathbb{D}\)).
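As a purely illustrative complement to Theorem 1.5 (this simulation sketch is ours and not part of the paper), one can approximate \(\rho_{n}\) for zeros sampled uniformly on the unit circle by brute force on a grid; the degree, grid size and bounding box below are arbitrary, and the estimate is only accurate up to the grid spacing.

```python
# Simulation sketch (ours, not from the paper): approximate the inradius of
# the lemniscate {|p_n| < 1} when the zeros are i.i.d. uniform on the unit
# circle, by brute force on a grid.  Degree, grid size and box are arbitrary.
import numpy as np

rng = np.random.default_rng(1)

def approx_inradius(n=40, grid=120, box=1.2):
    zeros = np.exp(2j * np.pi * rng.uniform(0, 1, n))      # i.i.d. uniform on S^1
    xs = np.linspace(-box, box, grid)
    X, Y = np.meshgrid(xs, xs)
    Z = X + 1j * Y
    logp = np.sum(np.log(np.abs(Z[..., None] - zeros)), axis=-1)
    inside = logp < 0                                       # grid points in Lambda_n
    if not inside.any() or inside.all():
        return float("nan")
    pts_in = np.stack([X[inside], Y[inside]], axis=1)
    pts_out = np.stack([X[~inside], Y[~inside]], axis=1)
    # Radius of the largest grid-centred disk avoiding every outside grid point;
    # this under-estimates the true inradius by roughly the grid spacing.
    best = 0.0
    for p in pts_in:
        d = np.min(np.hypot(pts_out[:, 0] - p[0], pts_out[:, 1] - p[1]))
        best = max(best, d)
    return best

# Repeating the experiment gives a spread of values, consistent with the
# limiting inradius being a genuinely random variable as in Theorem 1.5.
print([round(approx_inradius(), 3) for _ in range(3)])
```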
Another setting where one can rely on convergence of the defining function \(\log|p_{n}(z)|\) is in the case when the polynomial \(p_{n}\) has i.i.d. Gaussian coefficients. Actually, the convergence in this case is more transparent (and does not require additional tools such as Skorokhod's Theorem) as \(p_{n}\) can already be viewed as the truncation of a power series with i.i.d. coefficients. This case has a similar outcome as in Theorem 1.5, except the value \(1/2\) is replaced by \(1\) due to the absence of a deterministic zero. One can ask for results analogous to Theorems 1.1 and 1.3 when the zeros are dependent random variables. A natural class of examples are determinantal point processes. We consider one special case here. The Ginibre ensemble is a random set of \(n\) points in \(\mathbb{C}\) with joint density proportional to \[e^{-\sum_{k=1}^{n}|\lambda_{k}|^{2}}\prod_{j<k}|\lambda_{j}-\lambda_{k}|^{2}. \tag{1}\] This arises in random matrix theory, as the distribution of eigenvalues of an \(n\times n\) random matrix whose entries are i.i.d. standard complex Gaussian. After scaling by \(\sqrt{n}\), the empirical distributions \[\mu_{n}=\frac{1}{n}\sum_{j=1}^{n}\delta_{\frac{\lambda_{j}}{\sqrt{n}}} \tag{2}\] converge to the uniform measure on \(\mathbb{D}\). Hence, we may expect the lemniscate of the corresponding polynomial to be similar to the case when the roots are sampled independently and uniformly from \(\mathbb{D}\). **Theorem 1.6**.: _Let \(\lambda_{1},\ldots,\lambda_{n}\) have joint density given by (1) and let \(X_{j}=\frac{\lambda_{j}}{\sqrt{n}}\). Let \(\Lambda_{n}\) be the unit lemniscate of the random polynomial \(p_{n}(z)=\prod_{k=1}^{n}(z-X_{k})\). Given \(r\in(0,1)\) and \(s\in(1,\infty)\), we have for large \(n\),_ \[\mathbb{D}_{r}\subseteq\Lambda_{n}\subseteq\mathbb{D}_{s},\ \ \text{w.o.p.}\] **Example 1.7**.: Let \(\mu_{r}\) be the normalized area measure on the disk \(r\mathbb{D}\) and suppose the roots are sampled from \(\mu_{r}\). It is easy to check that \(\mu_{r}\) satisfies assumptions \((A)-(D).\) We claim that \[U_{\mu_{r}}(z)=\begin{cases}\frac{|z|^{2}-r^{2}}{2r^{2}}+\log r&\text{if }|z|<r,\\ \log|z|&\text{if }|z|\geq r.\end{cases} \tag{3}\] Therefore, \(\Omega^{-}=r_{c}\mathbb{D}\) where \[r_{c}=\begin{cases}1&\text{if }r\leq 1,\\ r\sqrt{1-2\log r}&\text{if }1\leq r\leq\sqrt{e},\\ 0&\text{if }r\geq\sqrt{e}.\end{cases}\] Hence Theorem 1.1 implies that when \(r<\sqrt{e}\), any disk \(\mathbb{D}_{s}\) with radius \(s<r_{c}\) is contained in \(\Lambda_{n}\) with overwhelming probability as \(n\to\infty\). When \(r\leq 1\), Corollary 1.2 implies that \(\Lambda_{n}\) is almost the same as \(\Omega^{-}=\mathbb{D}\). For \(r>\sqrt{e}\), Theorem 1.3 applies to show that \(\Lambda_{n}\) is contained in a union of very small disks. Let us carry out the computations to verify (3). By rescaling, it is clear that \(U_{\mu_{r}}(z)=\log r+U_{\mu_{1}}(z/r)\), hence it suffices to consider \(r=1\). \[U_{\mu_{1}}(z)=\frac{1}{\pi}\int_{\mathbb{D}}\log|z-w|dA(w).\] For \(|z|\geq 1\), the integrand is harmonic with respect to \(w\in\mathbb{D}\), hence \(U_{\mu}(z)=\log|z|\) by the mean-value theorem. For \(|z|<1\), we separate the integral over the two regions where \(|w|<|z|\) and \(|w|>|z|\). Harmonicity of \(w\mapsto\log|z-w|\) on \(\{|w|<|z|\}\) and the mean-value property gives \[\int_{|w|<|z|}\log|z-w|dA(w)=\pi|z|^{2}\log|z|.\] We switch to polar coordinates \(w=re^{i\theta}\) for the second integral. 
\[\int_{|z|}^{1}\int_{0}^{2\pi}\log|z-re^{i\theta}|rd\theta dr =\int_{|z|}^{1}\underbrace{\int_{0}^{2\pi}\log|ze^{-i\theta}-r|d\theta}_{2\pi\log r}rdr\] \[=\int_{|z|}^{1}2\pi r\log rdr\] \[=2\pi\left(-\frac{1}{4}-\frac{|z|^{2}}{2}\log|z|+\frac{|z|^{2}}{4}\right),\] where we have again used the mean value property (this time over a circle) for harmonic functions to compute the inside integral in the first line above. Combining these integrals over the two regions and dividing by \(\pi\) we arrive at (3). **Outline of the paper.** We review some preliminary results in Section 2 that serve as tools in the proofs of the results stated above. We prove Theorem 1.1 and Corollary 1.2 in Section 3, and we prove Theorem 1.3 and Corollary 1.4 in Section 4. The proof of Theorem 1.5, concerning uniform measure on the circle, is presented in Section 5, and Theorem 1.6, related to the Ginibre ensemble, is proved in Section 6.
Figure 1. Lemniscates of degree \(n=30,40,400,15\) with zeros sampled uniformly from the disks of radii \(0.5,1,1.5,1.7\) (order: from top-left to bottom-right). The dotted circle has radius \(r_{c}\).
Figure 2. Lemniscates of degree \(n=20,30,40\) with zeros sampled uniformly from the unit circle. A unit circle is also plotted for reference in each case.
Figure 3. Lemniscates of degree \(n=20,30,40\) with i.i.d. Gaussian coefficients plotted together with a unit circle for reference.
Figure 4. Lemniscates of degree \(n=20,30,40\) generated by the characteristic polynomial of a Ginibre matrix together with a unit circle plotted for reference.
## 2. Preliminary Results We start with two preparatory lemmas which we use repeatedly in the proofs of our theorems. **Lemma 2.1**.: _Let \(\mu\) be a Borel probability measure with compact support \(S\subset\mathbb{C}\) satisfying Assumption \((A).\) Whenever \(K\) is a non-empty compact subset of \(\Omega^{-}\) or a compact subset of \(\Omega^{+}\) with \(K\cap S=\emptyset,\) there exists a constant \(c(K)>0\) such that_ \[\inf_{z\in K}\int_{S}\left(\log|z-w|\right)^{2}d\mu(w)\geq c(K).\] Proof.: Let \(\mu\) satisfy assumption (A), and let \(K\neq\emptyset\) be compact. Then, by the Cauchy-Schwarz inequality we have for all \(z\in K\) \[\int_{S}\left(\log|z-w|\right)^{2}d\mu(w)\geq\left(\int_{S}|\log|z-w||\,d\mu(w)\right)^{2}\geq|U_{\mu}(z)|^{2}.\] Thus in order to prove the lemma, it suffices to show that \(|U_{\mu}(z)|^{2}\) is bounded away from zero for \(z\in K\), whenever \(K\subset\Omega^{-}\), or \(K\subset\Omega^{+}\) and \(K\cap S=\emptyset\). Suppose first that \(K\subset\Omega^{-}\) is compact. Since subharmonic functions are upper semi-continuous and hence attain a maximum on any compact set, there exists \(c_{1}(K)>0\) such that \(U_{\mu}(z)\leq-c_{1}(K),\) for all \(z\in K.\) Hence \(|U_{\mu}(z)|^{2}\geq c_{1}(K)^{2},\) for \(z\in K.\) In the other case, let \(K\subset\Omega^{+}\) be compact and disjoint from the support \(S\) of \(\mu.\) Notice then that \(U_{\mu}(z)\) is positive and harmonic on \(K.\) An application of Harnack's inequality now gives the existence of the required constant (depending only on \(K\)). This concludes the proof of the lemma. The second lemma is based on a net argument which allows us to control the size of the modulus of a polynomial by its values at the points of the net. **Lemma 2.2**.: _Let \(G\) be a bounded Jordan domain with rectifiable boundary.
Let \(p(z)\) be a polynomial of degree \(n.\) Then, there exists a constant \(C=C(G)>0,\) and points \(w_{1},w_{2},\ldots,w_{Cn^{2}}\in\partial G\) such that_ \[\|p\|_{\partial G}\leq 2\max_{1\leq k\leq Cn^{2}}|p(w_{k})| \tag{4}\] Proof.: The key to the proof is a Bernstein-type inequality (see [32, Thm. 1]) \[|p^{\prime}(z)|\leq C_{1}n^{2}M, \tag{5}\] where \(M:=\|p\|_{\partial G}\), and \(C_{1}\) is a constant that depends only on \(G\). With this estimate in hand, the proof reduces to the following argument that is well-known but which we nevertheless present in detail for the reader's convenience. Let \(\ell=\ell(\partial G)\) denote the length of \(\partial G.\) Let \(N\) be a positive integer to be specified later. Divide \(\partial G\) into \(N\) pieces of equal length, with \(w_{0},w_{1},\ldots,w_{N}\) denoting the points of subdivision. Let \(z_{0}\in\partial G\) be such that \(M=\|p\|_{\partial G}=|p(z_{0})|.\) If \(z_{0}\) is one of the \(w_{j},\) then the estimate (4) clearly holds. If that is not the case, then \(z_{0}\) satisfies \(|z_{0}-w_{j}|\leq\frac{\ell}{N}\) for some \(j\) with \(0\leq j\leq N\). We can now write \[M-|p(w_{j})|\leq|p(z_{0})-p(w_{j})|=\left|\int_{w_{j}}^{z_{0}}p^{\prime}(t)dt\right|\leq C_{1}n^{2}M\frac{\ell}{N}. \tag{6}\] Here we have used the Bernstein-type inequality (5) to estimate the size of \(|p^{\prime}|\). If we now choose \(N=2\ell C_{1}n^{2},\) then the estimate (6) becomes \[M-|p(w_{j})|\leq\frac{M}{2},\] which concludes the proof of the lemma. We will also need the following concentration inequality (see Section 2.7 of [6]). This result, referred to as "Bennett's inequality", is similar to the well-known Hoeffding inequality, but note that, instead of being bounded, the random variables are merely assumed to be bounded from above. **Theorem 2.3** (Bennett's inequality).: _Let \(X_{1},X_{2},...,X_{n}\) be independent random variables with finite variance such that \(X_{i}\leq b\) for some \(b>0\) almost surely for all \(i\leq n.\) Let_ \[S=\sum_{i=1}^{n}(X_{i}-\mathbb{E}(X_{i}))\] _and \(\nu=\sum_{i=1}^{n}\mathbb{E}(X_{i}^{2}).\) Then for any \(t>0,\)_ \[\mathbb{P}(S>t)\leq\exp\left(\frac{-\nu}{b^{2}}h\left(\frac{bt}{\nu}\right)\right),\] _where \(h(u)=(1+u)\log(1+u)-u\) for \(u>0.\)_ ## 3. Proofs of Theorem 1.1 and Corollary 1.2 Proof of Theorem 1.1.: We divide the proof into two steps. ### Step \(1\): Compact subsets of \(\Omega^{-}\) lie in \(\Lambda_{n}\) By our hypothesis \(\Omega^{-}\neq\emptyset.\) Let \(K\subset\Omega^{-}\) be compact. We wish to show that \(K\subset\Lambda_{n}\) w.o.p. We may assume without loss of generality that \(K=\overline{G}\) for some bounded Jordan domain \(G\) with rectifiable boundary, since any connected compact set is contained in such a domain. Recall that \(\Lambda_{n}=\{z:\log|p_{n}(z)|<0\}.\) Writing \[\log|p_{n}(z)|=\sum_{k=1}^{n}\log|z-X_{k}|\] as a sum of i.i.d. random variables, for \(z\in\Omega^{-}\) we will use a concentration inequality to show that \(\log|p_{n}(z)|\) is negative with overwhelming probability. We then use Lemma 2.2 to get a uniform estimate on \(K\) and finish the proof.
Fix \(z_{0}\in K.\) For \(k=1,2,...,n,\) define \(Y_{k}=\log|z_{0}-X_{k}|,\) and let \[Z:=\sum_{k=1}^{n}(Y_{k}-\mathbb{E}Y_{k}).\] Notice that since \(z_{0}\in\Omega^{-}\), \[\mathbb{E}Y_{k}=U_{\mu}(z_{0})<0,\] and by the assumption in the statement of the theorem, we also have \[\sigma_{z_{0}}^{2}:=\mathbb{E}Y_{k}^{2}=\int_{S}(\log|z_{0}-u|)^{2}d\mu(u)<\infty.\] Now applying Theorem 2.3 to our problem with \(b\geq\sup_{z\in K,w\in S}\log(|z|+|w|)\) and \(\nu=n\sigma_{z_{0}}^{2},\) we obtain \[\mathbb{P}\{\log|p_{n}(z_{0})|>-\log(2)\} =\mathbb{P}\{\log|p_{n}(z_{0})|-nU_{\mu}(z_{0})>-nU_{\mu}(z_{0})-\log(2)\}\] \[=\mathbb{P}\{Z>-nU_{\mu}(z_{0})-\log(2)\}\] \[\leq\exp\left(-\frac{n\sigma_{z_{0}}^{2}}{b^{2}}h\left(\frac{-b}{\sigma_{z_{0}}^{2}}U_{\mu}(z_{0})-\frac{b\log(2)}{n\sigma_{z_{0}}^{2}}\right)\right).\] Since subharmonic functions are upper semi-continuous and hence attain a maximum on any compact set, we have \(U_{\mu}(z)\leq-M\) for all \(z\in K\) and some \(M>0.\) Also, by Lemma 2.1, \(0<c_{1}(K)\leq\sigma_{z}^{2}\leq c_{2}(K)<\infty,\) for all \(z\in K.\) This bound together with the fact that \(h\) is an increasing function can now be used in the above estimate to get \[\mathbb{P}\{\log|p_{n}(z_{0})|>-\log(2)\}\leq\exp\left(-\frac{n\sigma_{z_{0}}^{2}}{b^{2}}h\left(\frac{-b}{\sigma_{z_{0}}^{2}}U_{\mu}(z_{0})-\frac{b\log(2)}{n\sigma_{z_{0}}^{2}}\right)\right)\leq\exp\left(-cn\right) \tag{7}\] for some constant \(c=c(K)>0\) depending only on \(K.\) Using Lemma 2.2 in combination with a union bound and the estimate (7), we obtain \[\mathbb{P}\{\log\|p_{n}\|_{K}<0\} \geq\mathbb{P}\{\max_{1\leq k\leq Cn^{2}}\log|p_{n}(w_{k,n})|+\log(2)<0\}\] \[=1-\mathbb{P}\{\max_{1\leq k\leq Cn^{2}}\log|p_{n}(w_{k,n})|>-\log(2)\}\] \[=1-\mathbb{P}\left(\bigcup_{k=1}^{Cn^{2}}\{\log|p_{n}(w_{k,n})|>-\log(2)\}\right)\] \[\geq 1-Cn^{2}\exp\left(-cn\right),\] where in the last inequality we used (7). This proves that \(K\subset\Lambda_{n}\) w.o.p. and concludes the proof of the first part. ### Step 2: Compact subsets \(L\) of \(\Omega^{+}\setminus S\) are in \(\Lambda^{c}_{n}\) Without loss of generality, we may assume that \(L\) is a closed disc in \(\Omega^{+}\setminus S\). Since \(S\) is a compact set disjoint from \(L\), there exists \(\delta>0\) such that the distance \(d(L,S)=\delta.\) Notice that for all \(z\in L,\) we have \(-\log|z-X_{i}|\leq-\log\delta\). Now fix \(z_{0}\in L.\) An application of Bennett's inequality to the random variables \(-\log|z_{0}-X_{i}|\) yields \[\mathbb{P}\left(-\log|p_{n}(z_{0})|+nU_{\mu}(z_{0})\geq nU_{\mu}(z_{0})-1\right)\leq\exp\left(-\frac{n\sigma_{z_{0}}^{2}}{b}h\left(\frac{b}{\sigma_{z_{0}}^{2}}U_{\mu}(z_{0})-\frac{b}{n\sigma_{z_{0}}^{2}}\right)\right). \tag{8}\] The quantities \(b,h\) have an analogous meaning as in Step 1. By Lemma 2.1, \(\sigma_{z}^{2}\) is bounded below, and by assumption it is also bounded above, by some positive constants depending only on \(L\).
Furthermore, Lemma 2.1 shows that \(U_{\mu}(z)\geq c(L)>0\) for all \(z\in L.\) Making use of all this in (8), we can now estimate \[\mathbb{P}\left(\log|p_{n}(z_{0})|>1\right) =\mathbb{P}\left(\log|p_{n}(z_{0})|-nU_{\mu}(z_{0})>-nU_{\mu}(z_{0})+1\right)\] \[=1-\mathbb{P}\left(\log|p_{n}(z_{0})|-nU_{\mu}(z_{0})\leq-nU_{\mu}(z_{0})+1\right)\] \[=1-\mathbb{P}\left(-\log|p_{n}(z_{0})|+nU_{\mu}(z_{0})\geq nU_{\mu}(z_{0})-1\right)\] \[\geq 1-\exp\left(-\frac{n\sigma_{z_{0}}^{2}}{b}h\left(\frac{bU_{\mu}(z_{0})}{\sigma_{z_{0}}^{2}}-\frac{b}{n\sigma_{z_{0}}^{2}}\right)\right) \tag{9}\] \[\geq 1-\exp\left(-C_{0}(L)n\right).\] This estimate shows that individual points of \(L\) are in \(\Lambda^{c}_{n}\) with overwhelming probability. To finish the proof, we once again use a net argument to show that \(L\subset\Lambda^{c}_{n}\) w.o.p. We first observe that if \(z,w\in L\), and \(X\) is one of the \(X_{k}\)'s, the mean value theorem gives \[|\log|z-X|-\log|w-X||\leq\frac{|z-w|}{\delta},\] where we have used that \(d(L,S)=\delta>0\) (and that \(L\) is a disk). The triangle inequality then yields \[|\log|p_{n}(z)|-\log|p_{n}(w)||\leq\frac{n|z-w|}{\delta},\quad\text{for }z,w\in L. \tag{10}\] Choose a net of \(n^{2}\) equally spaced points \(w_{1},w_{2},\ldots,w_{n^{2}}\) on \(\partial L\), and note that any point on \(\partial L\) is within \(C_{1}/n^{2}\) of some point in the net, where \(C_{1}\) is a constant depending on the radius of \(L\). From (10) we have that \[|\log|p_{n}(z)|-\log|p_{n}(w)||\leq\frac{C_{2}}{n},\quad\text{for }z,w\in L\text{ with }|z-w|\leq\frac{C_{1}}{n^{2}}, \tag{11}\] where \(C_{2}=C_{1}/\delta\) is a constant. We are now ready to show that for large \(n\), \(\inf_{z\in L}\log|p_{n}(z)|>0\) w.o.p. Indeed, note that the point on \(\partial L\) where the infimum of \(\log|p_{n}|\) is attained must be within \(C_{1}/n^{2}\) of some point in the net \(\{w_{1},w_{2},\ldots,w_{n^{2}}\}\). Then by (11), \[\mathbb{P}\left(\inf_{L}\log|p_{n}(z)|>0\right)\geq\mathbb{P}\left(\bigcap_{k=1}^{n^{2}}\{\log|p_{n}(w_{k})|>1\}\right).\] Therefore, we obtain \[\mathbb{P}\left(\inf_{L}\log|p_{n}(z)|>0\right) \geq\mathbb{P}\left(\bigcap_{k=1}^{n^{2}}\{\log|p_{n}(w_{k})|>1\}\right)\] \[\geq 1-\sum_{k=1}^{n^{2}}\mathbb{P}\left(\log|p_{n}(w_{k})|\leq 1\right)\] \[\geq 1-n^{2}\exp(-C_{0}\,n)\] by the pointwise estimate (9). This concludes the proof of the theorem. Proof of Corollary 1.2.: We assume that the measure \(\mu\) is as in Theorem 1.1. Let \(\rho_{n}=\rho(\Lambda_{n})\) be the inradius of the lemniscate of \(p_{n}\) and let \(\rho=\rho(\Omega^{-})\) be the inradius of \(\Omega^{-}\). By Theorem 1.1, we immediately get \(\liminf\rho_{n}\geq\rho\). Let \(S\) be the support of \(\mu\). As \(S\cap\Omega^{+}=\emptyset\), Theorem 1.3 shows that if \(m>0\) then \(\Lambda_{n}\cap\{U_{\mu}\geq m\}\) is contained in a union of at most \(n\) circles each of radius \(e^{-cn}\). Writing \(\rho_{n}(m)\) for \(\rho(\Lambda_{n}\cap\{U_{\mu}<m\})\) and \(\rho(m)\) for \(\rho(\{U_{\mu}<m\})\), it is then clear that \(\rho_{n}\leq\rho_{n}(m)+2ne^{-cn}\leq\rho(m)+2ne^{-cn}\) and therefore, first letting \(n\to\infty\) and then letting \(m\downarrow 0\) we see that \[\limsup_{n\to\infty}\rho_{n}\leq\lim_{m\downarrow 0}\rho(m).\] As \(U_{\mu}\) is continuous on \(\mathbb{C}\setminus S\), it follows that for any \(\varepsilon>0\) there is \(m>0\) such that \(\{U_{\mu}<m\}\subseteq\Omega_{\varepsilon}^{-}\), the \(\varepsilon\) enlargement of \(\Omega^{-}\).
Hence, with \(\rho^{\prime}(\varepsilon):=\rho(\Omega_{\varepsilon}^{-})\), we have \[\limsup_{n\to\infty}\rho_{n}\leq\lim_{\varepsilon\downarrow 0}\rho^{\prime}( \varepsilon).\] Under the additional assumption that \(S\subseteq\Omega^{-}\), we show that \(\rho^{\prime}(\varepsilon)\downarrow\rho\) as \(\varepsilon\downarrow 0\) and that completes the proof that \(\limsup\rho_{n}\leq\rho\). That \(\rho^{\prime}(\varepsilon)\downarrow\rho\) requires a proof as inradius is not continuous under decreasing limits of sets. For example, the inradius of the slit disk \(\mathbb{D}\setminus[0,1)\) is \(1/2\) but any \(\varepsilon\)-enlargement of it has inradius \(1\). As \(U_{\mu}\) is harmonic on \(\mathbb{C}\setminus S\) and \(S\subseteq\Omega^{-}\) and \(U_{\mu}(z)\sim\log|z|\) near \(\infty\), the level set \(\{U_{\mu}=0\}\) is a compact set comprised of curves that are real analytic except for a discrete set of points (the critical points of \(U_{\mu}\) are zeros of locally defined analytic functions). It also separates \(S\) from \(\infty\). Thus, \(\{U_{\mu}<0\}\) can be written as a union of Jordan domains, and there are at most finitely many components that have inradius more than any given number. Pick a component \(V\) of \(\Omega^{-}\) that attains the inradius \(\rho\). The boundary of \(V\) can have a finite number of critical points of \(U_{\mu}\). Locally around any such critical point, \(U_{\mu}\) is the real part of a holomorphic function that looks like \(cz^{p}\) for some \(p\), and hence \(U_{\mu}=0\) is like a system of equi-angular lines with angle \(\pi/p\) between successive rays. In particular, there are no cusps. What this shows is that \(V\) satisfies the following "external ball condition": There is a \(\delta_{0}>0\) and \(B<\infty\), such that for any \(\delta<\delta_{0}\) and each \(w\in\partial V\), there is a \[w^{\prime}\in\mathbb{C}\setminus V\ \text{such that}\ |w^{\prime}-w|=\delta\ \text{and}\ |w^{\prime}-z|\geq\delta/B\ \text{for all}\ z\in\Omega^{-}. \tag{12}\] Now suppose \(\mathbb{D}(z,r)\subseteq\Omega_{\varepsilon}^{-}\). If \(\varepsilon<\delta_{0}/B\), we claim that \(\mathbb{D}(z,r-2B\varepsilon)\subseteq\Omega^{-}\), which of course proves that \(\rho\geq\rho^{\prime}(\varepsilon)-B\varepsilon\), completing the proof. If the claim was not true, then we could find \(w\in\partial V\) such that \(|w-z|\leq r-2B\varepsilon\). Find \(w^{\prime}\) as in (12) with \(\delta=B\varepsilon\). Then \(w^{\prime}\not\in\Omega_{\delta/B}=\Omega_{\varepsilon}\) but \[|w^{\prime}-z|\leq|w^{\prime}-w|+|w-z|\leq\frac{\delta}{B}+r-2B\varepsilon<r.\] This is a contradiction as \(w^{\prime}\in\mathbb{D}(z,r)\subseteq\Omega_{\varepsilon}\) ## 4. Proof of Theorem 1.3 and Corollary 1.4 A standard net argument can be used to prove the theorem. But we would like to first present a proof of Corollary 1.4 by a different method, which may be of independent interest. At the end of the section, we outline the net argument to prove Theorem 1.3. We will need the following lemma in the proof of Corollary 1.4. **Lemma 4.1**.: _Under the assumptions of Corollary 1.4, there exists \(c_{1}>0\) such that_ \[\mathbb{P}\left(\log|p_{n}^{\prime}(X_{1})|\leq\frac{m}{2}(n-1)\right)\leq e ^{-c_{1}n}.\] First we prove the corollary assuming the above Lemma. Proof of Corollary 1.4.: Let \(G_{i}\) be the connected component of \(\Lambda_{n}\) containing \(X_{i}\). 
Then by Bernstein's inequality we have \[|p_{n}^{\prime}(X_{i})|\leq C\frac{n^{2}}{\text{diam}(G_{i})}\|p_{n}\|_{ \partial G_{i}}=C\frac{n^{2}}{\text{diam}(G_{i})}. \tag{13}\] By Lemma 4.1 we have \[|p_{n}^{\prime}(X_{i})|\geq\exp\left\{\frac{m}{2}(n-1)\right\},\quad\text{w. o.p.}\] and we conclude from (13) that \[\text{diam}(G_{i})\leq Cn^{2}\exp\left\{-\frac{m}{2}(n-1)\right\},\quad\text{ w.o.p.} \tag{14}\] The event \(\Lambda_{n}\subset\bigcup_{k=1}^{n}\mathbb{D}_{r_{n}}(X_{k})\) occurs if \(\text{diam}(G_{i})<r_{n}\) for each \(i=1,2,...,n\). Using (14) and a union bound, all these events occur with overwhelming probability if we choose \(r_{n}=\exp\{-c_{0}n\}\) for a suitable \(c_{0}\). It remains to prove Lemma 4.1. Proof of Lemma 4.1.: We have \[\mathbb{P}\left\{\log|p_{n}^{\prime}(X_{1})|\leq\frac{m}{2}(n-1)\right\} =\int_{S}\mathbb{P}\left\{\log|p_{n}^{\prime}(X_{1})|\leq\frac{m} {2}(n-1)\big{|}X_{1}=z\right\}d\mu(z)\] \[=\int_{S}\underbrace{\mathbb{P}\left\{\sum_{k=2}^{n}\log|z-X_{k} |\leq\frac{m}{2}(n-1)\right\}}_{(*)}d\mu(z). \tag{15}\] Let us rewrite the integrand \((*)\) as \[(*)=\mathbb{P}\left\{Z\geq\left(U_{\mu}(z)-\frac{m}{2}\right)(n-1)\right\}, \quad Z=(n-1)U_{\mu}(z)-\sum_{k=2}^{n}\log|z-X_{k}|.\] Then we have (with \(\theta\) to be chosen below) \[(*) =\mathbb{P}\left\{e^{\theta Z}\geq e^{\theta(n-1)(U_{\mu}-m/2)}\right\}\] \[(\text{since }U_{\mu}\geq m) \leq\mathbb{P}\left\{e^{\theta Z}\geq e^{\theta(n-1)(m/2)}\right\}\] \[\leq e^{-\theta(n-1)(m/2)}\mathbb{E}e^{\theta Z}.\] Let \(Z_{k}=-\log|z-X_{k}|+U_{\mu}(z)\) so that \(Z=Z_{2}+\ldots+Z_{n}\). As \(X_{i}\) are i.i.d., so are \(Z_{i}\) and we have \[\mathbb{E}e^{\theta Z}=\left(\mathbb{E}e^{\theta Z_{2}}\right)^{n-1}.\] We claim that there exist \(\tau<\infty\) and \(\theta_{0}>0\) (not depending on \(z\in S\)) such that \[\mathbb{E}[e^{\theta Z_{2}}]\leq e^{\tau\theta^{2}}\quad\text{for }|\theta|<\theta_{0}. \tag{16}\] Assuming this, the proof can be completed as follows: \[(*) \leq e^{-\theta(n-1)m/2}e^{(n-1)\tau\theta^{2}}\] \[=e^{-\frac{1}{4}m\theta(n-1)} \tag{17}\] provided we choose \(\theta<\frac{m}{4\tau}\). Using this in (15) we obtain \[\mathbb{P}\left\{\log|p_{n}^{\prime}(X_{1})|\leq\frac{m}{2}(n-1)\right\}\leq e ^{-\frac{1}{4}m\theta(n-1)},\] which implies the statement in the lemma. It remains to prove (16). Assumption (D) in definition A yields that for \(z\in S\), \[\mathbb{P}\{Z_{1}>t\} =\mathbb{P}\{|z-X_{1}|\leq e^{U_{\mu}(z)-t}\}\] \[\leq Ce^{\varepsilon(M-t)}\] where \(M=\sup_{z\in S}U_{\mu}(z)\). On the other hand, \(\mathbb{P}\{Z_{1}<-t\}=0\) for large \(t\), hence by choosing a smaller \(\varepsilon\) if necessary, we have the bound \[\mathbb{P}\{|Z_{1}|>t\}\leq 2e^{-\varepsilon t}.\] A random variable satisfying the above tail bound is said to be _sub-exponential_ (see Section 2.7 in [41]). It is well-known (see the implication \((a)\implies(e)\) of Proposition 2.7.1 in [41]) that if a sub-exponential random variable has zero mean, then (16) holds. Now we outline the argument for the proof of Theorem 1.3 Proof of Theorem 1.3.: The same argument (basically that \(-\log|z-X_{1}|+U_{\mu}(z)\) has sub-exponential distribution) that led to (17) shows that there exists \(\theta>0\) \[\mathbb{P}\{\log|p_{n}(z)|<\frac{1}{2}mn\}\leq e^{-\theta n} \tag{18}\] for any \(z\in L\). Let \(r_{n}=e^{-\frac{\theta}{4}n}\). 
Then, if \(z\in L\setminus\bigcup_{k=1}^{n}B(X_{k},r_{n})\), we have \[|\nabla\log|p_{n}(z)||=\big{|}\sum_{k=1}^{n}\frac{1}{z-X_{k}}\big{|}\;\leq\;\frac{n}{r_{n}}.\] Therefore, if \(z\in L\setminus\bigcup_{k=1}^{n}B(X_{k},(1+\frac{m}{4})r_{n})\), then combining the bound on the gradient with (18), we get \[\mathbb{P}\left\{\inf_{B(z,\frac{1}{4}mr_{n})}\log|p_{n}|\geq\frac{1}{4}mn\right\}\geq 1-e^{-\theta n}.\] Assuming without loss of generality that \(m\leq 1\), we may choose a net of \(C/r_{n}^{2}\) points in \(L\) such that every point of \(L\setminus\bigcup_{k=1}^{n}B(X_{k},2r_{n})\) is within distance \(mr_{n}/4\) of one of the points of the net. Then, \(\log|p_{n}|>\frac{1}{4}mn\) everywhere on \(L\setminus\bigcup_{k=1}^{n}B(X_{k},2r_{n})\), with probability at least \(1-\frac{C}{r_{n}^{2}}e^{-\theta n}\geq 1-Ce^{-\frac{\theta}{2}n}\), by our choice of \(r_{n}\). ## 5. Proof of Theorem 1.5 First we claim that \(\Lambda_{n}\subseteq(1+\varepsilon)\mathbb{D}\) w.o.p. for any \(\varepsilon>0\). Deterministically, \(\Lambda_{n}\subseteq 2\mathbb{D}\), since \(\mu\) is supported on \(\mathbb{S}^{1}\). Further, \(U_{\mu}(z)=\log_{+}|z|\), hence \(L=\{z\ :\ 1+\varepsilon\leq|z|\leq 2\}\) is a compact subset of \(\Omega^{+}\). By Theorem 1.1 or Theorem 1.3, we see that \(L\cap\Lambda_{n}=\emptyset\) w.o.p., proving that \(\Lambda_{n}\subseteq(1+\varepsilon)\mathbb{D}\) w.o.p. Thus, it suffices to consider \(\Lambda_{n}\cap\mathbb{D}\). Consider \[g_{n}(z)=\frac{1}{\sqrt{n}}\sum_{k=1}^{n}\log|z-X_{k}|\] for \(z\in\mathbb{D}\). As the \(X_{k}\) are uniform on \(\mathbb{S}^{1}\), it follows that \(\mathbb{E}[\log|z-X_{1}|]=0\). Let \[K(z,w)=\mathbb{E}[(\log|z-X_{1}|)(\log|w-X_{1}|)]=\frac{1}{2\pi}\int_{0}^{2\pi}\log|z-e^{i\theta}|\log|w-e^{i\theta}|\ d\theta.\] Hence \(\mathbb{E}[g_{n}(z)]=0\) and \(\mathbb{E}[g_{n}(z)g_{n}(w)]=K(z,w)\). Let \(g\) be the (real-valued) Gaussian process on \(\mathbb{D}\) with expectation \(\mathbb{E}[g(z)]=0\) and covariance function \(\mathbb{E}[g(z)g(w)]=K(z,w)\). Then by the central limit theorem, it follows that \[(g_{n}(z_{1}),\ldots,g_{n}(z_{k}))\stackrel{{ d}}{{\to}}(g(z_{1}),\ldots,g(z_{k}))\] for any \(z_{1},\ldots,z_{k}\in\mathbb{D}\). We observe that \(g_{n}(0)=0\) and claim that \(\sup_{r\mathbb{D}}|\nabla g_{n}|\) is tight, for any \(r<1\). By a well-known criterion for tightness of measures (on the space \(C(\mathbb{D})\) endowed with the topology of uniform convergence on compacts), this proves that \(g_{n}\to g\) in distribution, as processes (see Theorem 7.2 in [4]). To prove the tightness of \(\sup_{r\mathbb{D}}|\nabla g_{n}|\), fix \(r<s<1\) and note that \(\nabla g_{n}(z)\) is essentially the same as \(F_{n}(z)=\frac{1}{\sqrt{n}}\sum_{k=1}^{n}\frac{1}{z-X_{k}}\), which is holomorphic on \(\mathbb{D}\).
By Cauchy's integral formula, for \(|z|<r\), \[|F_{n}(z)|^{2} =\big{|}\frac{1}{2\pi}\int_{0}^{2\pi}\frac{F_{n}(se^{i\theta})}{z -se^{i\theta}}ise^{i\theta}d\theta\big{|}^{2}\] \[\leq\left(\frac{1}{2\pi}\int_{0}^{2\pi}|F_{n}(se^{i\theta})|^{2} d\theta\right)\left(\frac{1}{2\pi}\int_{0}^{2\pi}\frac{1}{|z-se^{i\theta}|^{2}} d\theta\right)\] \[\leq\frac{1}{(s-r)^{2}}\frac{1}{2\pi}\int_{0}^{2\pi}|F_{n}(se^{i \theta})|^{2}d\theta.\] The bound does not depend on \(z\), hence taking expectations, \[\mathbb{E}[(\sup_{r\mathbb{D}}|F_{n}|)^{2}] \leq\frac{1}{(s-r)^{2}}\frac{1}{2\pi}\int_{0}^{2\pi}\mathbb{E} \left[|F_{n}(se^{i\theta})|^{2}\right]d\theta\] \[\leq\frac{1}{(s-r)^{2}}\frac{1}{2\pi}\int_{0}^{2\pi}\mathbb{E} \left[\frac{1}{|se^{i\theta}-X_{1}|^{2}}\right]d\theta\] \[\leq\frac{1}{(s-r)^{2}(1-s)^{2}}.\] The boundedness in \(L^{2}\) implies tightness of the distributions of \(F_{n}\), as claimed. In order to formulate a precise statement on almost sure convergence it is necessary to construct \(g_{n}\) and \(g\) on a single probability space. One way to accomplish that is by the Skorokhod representation theorem (see Theorem 6.7 in [4]) from which it follows that \(g_{n}\) and \(g\) can be constructed on one probability space so that \(g_{n}\to g\) uniformly on compacta, a.s. Hence, the proof of Theorem 1.5 will be complete if we prove the following lemma. **Lemma 5.1**.: _Let \(f_{n},f;\mathbb{D}\to\mathbb{R}\) be smooth functions such that \(\{f=0\}\cap\{\nabla f=0\}=\emptyset\). Suppose \(f_{n}\to f\) uniformly on compact sets of \(\mathbb{D}\). Then, \(\rho(\{f_{n}<0\})\to\rho(\{f<0\})\)._ Indeed, applying this to \(g_{n},g\), we see that \(\rho(\Lambda_{n}\cap\mathbb{D})\to\rho(\{g<0\})\) almost surely. On the other hand, for any \(\varepsilon>0\), Theorem 1.3 shows that \(\Lambda_{n}\cap((1+\varepsilon)\mathbb{D})^{c}\) is contained in a union of \(n\) disks of radius \(e^{-cn}\), w.o.p. Putting these together, \(\rho(\Lambda_{n})\to\rho(\{g<0\})\) a.s. and hence in distribution. This completes the proof of the convergence claim in Theorem 1.5. Proof of Lemma 5.1.: For any \(U\subseteq\mathbb{D}\), it is clear that \(\rho(U)-\varepsilon\leq\rho(U\cap(1-\varepsilon)\mathbb{D})\leq\rho(U)\). Applying this to \(U=\{f_{n}<0\}\) and \(U=\{f<0\}\), we see that to show that \(\rho(\{f_{n}<0\})\to\rho(\{f<0\})\), it is sufficient to show that \(\rho(\{f_{n}<0\}\cap(1-\varepsilon)\mathbb{D})\to\rho(\{f<0\}\cap(1- \varepsilon)\mathbb{D})\) for every \(\varepsilon>0\). On \((1-\varepsilon)\mathbb{D}\), the convergence is uniform, hence for any \(\delta>0\), we have \(\{f<-\delta\}\subseteq\{f_{n}<0\}\subseteq\{f<\delta\}\) for sufficiently large \(n\). It remains to show that \(\delta\mapsto\rho(\{f<\delta\})\) is continuous at \(\delta=0\). First we show that \(\rho(\{f<-\delta\})\uparrow\rho(\{f<0\})\) as \(\delta\downarrow 0\). If \(B(z,r)\subseteq\{f<0\}\), then for any \(\varepsilon>0\), the maximum of \(f\) on \(B(z,r-\varepsilon)\) is some \(-\delta<0\). Hence \(\rho(\{f\leq-\delta\})\geq r-\varepsilon\) proving that \(\rho(\{f<-\delta\})\uparrow\rho(\{f<0\})\). Next we show that \(\rho(\{f\leq\delta_{n}\})\downarrow\rho(\{f\leq 0\})\) for some \(\delta_{n}\downarrow 0\). Let \(r_{n}=\rho(\{f\leq\frac{1}{n}\})\) and find \(z_{n}\) such that \(B(z_{n},r_{n})\subseteq\{f\leq\frac{1}{n}\}\). Let \(r_{n}\downarrow r_{0}\) and \(z_{n}\to z_{0}\) without loss of generality. 
Then if \(w\in B(z_{0},r_{0})\), then \(w\in B(z_{n},r_{n})\) for large enough \(n\), hence \(f(w)\leq\frac{1}{n}\) for large \(n\). Thus \(f\leq 0\) on \(B(z_{0},r_{0})\), showing that \(\rho(\{f\leq 0\})\geq\lim_{\delta\downarrow 0}\rho(\{f\leq\delta\})\). From the assumption that \(\{f=0\}\cap\{\nabla f=0\}=\emptyset\), we claim that \(\rho(\{f\leq 0\})=\rho(\{f<0\})\). Indeed, if \(B(z,r)\subseteq\{f\leq 0\}\), then in fact \(B(z,r)\subseteq\{f<0\}\). Otherwise, we would get \(w\in B(z,r)\) with \(f(w)=0\), which implies that \(w\) is a local maximum of \(f\) and hence \(\nabla f(w)=0\). This proves the continuity of \(\delta\mapsto\rho(\{f<\delta\})\) at \(\delta=0\), and hence the lemma. This completes the proof of the first part, namely that \(\rho_{n}=\rho(\{g_{n}<0\})\) converges in distribution to \(\rho=\rho(\{g<0\})\). To show that \(\mathbb{P}(\{\rho<\varepsilon\})>0\), it suffices to show that \(g>0\) on \((1-\varepsilon)\mathbb{D}\cap\{|\operatorname{Im}z|>\varepsilon\}\) with positive probability. To show that \(\mathbb{P}(\{\rho>\frac{1}{2}-\varepsilon\})>0\), it suffices to show that \(g<0\) in \((1-\varepsilon)\mathbb{D}\cap\{|\operatorname{Im}z|>\varepsilon\}\) with positive probability. We do this in two steps. 1. There exists \(u_{0}:\mathbb{D}\to\mathbb{R}\), harmonic with \(u_{0}(0)=0\), such that \(u_{0}<0\) on \((1-\varepsilon)\mathbb{D}\cap\{|\operatorname{Im}z|>\varepsilon\}\). This is known, see either the proof of Theorem 6.1 of [27] or take \(\log|p|\) of the polynomial \(p\) constructed in Lemma 5 of Wagner [42]. 2. For any \(u:\mathbb{D}\to\mathbb{R}\), harmonic with \(u(0)=0\), and any \(r<1\) and \(\varepsilon>0\), we claim that \(\|g-u\|_{\sup(r\mathbb{D})}<\varepsilon\) with positive probability. Applying this to \(u_{0}\) and \(-u_{0}\) from the previous step shows that \(\rho>\frac{1}{2}-\varepsilon\) with positive probability and \(\rho<\varepsilon\) with positive probability. To this end, we observe that the process \(g\) can be represented as \[g(z)=\operatorname{Re}\sum_{k=1}^{\infty}\frac{2}{k}a_{k}z^{k},\] where \(a_{k}\) are i.i.d. standard complex Gaussian random variables. The covariance of \(g\) defined as above is \[\mathbb{E}[g(z)g(w)]=\sum_{k\geq 1}\frac{1}{k^{2}}(z^{k}\bar{w}^{k}+w^{k}\bar{z}^{k}),\]
which can be checked to match with the integral expression for \(K(z,w)\) given earlier. Given any harmonic \(u:\mathbb{D}\to\mathbb{R}\) with \(u(0)=0\), write it as \[u(z)=\operatorname{Re}\sum_{k\geq 1}c_{k}z^{k}\] and choose \(N\) such that \[\|\sum_{k>N}c_{k}z^{k}\|_{\sup(r\mathbb{D})}<\varepsilon.\] If both the events \[\mathcal{A}_{N} =\left\{\|\sum_{k>N}\frac{a_{k}}{k}z^{k}\|_{\sup(r\mathbb{D})}<\varepsilon\right\},\] \[\mathcal{B}_{N} =\left\{|\frac{2a_{k}}{k}-c_{k}|<\frac{\varepsilon}{N}\text{ for }1\leq k\leq N\right\}\] occur, then \(|g-u|<3\varepsilon\) on \(r\mathbb{D}\). As \(\mathcal{A}_{N}\) and \(\mathcal{B}_{N}\) are independent and have positive probability, we also have \(\mathbb{P}(\mathcal{A}_{N}\cap\mathcal{B}_{N})>0\). ## 6. Proof of Theorem 1.6 The idea of the proof proceeds along earlier lines: first we fix \(t>0\) and show that \(\log|p_{n}(z)|\) is negative w.o.p. for a fixed \(z\) lying on \(|z|=1-t\). It then follows from a net argument that the whole circle (and hence the disk) is contained in \(\Lambda_{n}\) w.o.p. Let \(t\in(0,\frac{1}{100})\) and fix \(z\) with \(|z|=1-t\). Taking logarithms, we have as before that \[\log|p_{n}(z)|=\sum_{k=1}^{n}\log|z-X_{k}|,\] except now the roots are no longer i.i.d. Define \(F_{t}:\mathbb{C}\to\mathbb{R}\) by \[F_{t}(w)=\begin{cases}\log\frac{1}{t},&|z-w|\geq\frac{1}{t}\\ \log|z-w|,&t<|z-w|<\frac{1}{t}\\ \log t,&|z-w|\leq t.\end{cases}\] Next, we write \[\log|p_{n}(z)| =\sum_{k=1}^{n}F_{t}(X_{k})+\sum_{k:|z-X_{k}|\geq\frac{1}{t}}\left(\log|z-X_{k}|-\log\frac{1}{t}\right)+\sum_{k:|z-X_{k}|\leq t}\left(\log|z-X_{k}|-\log t\right)\] \[=:\mathcal{L}_{1}+\mathcal{L}_{2}+\mathcal{L}_{3}.\] Since the term \(\mathcal{L}_{3}\) is negative, we have \[\mathbb{P}\left(\log|p_{n}(z)|\geq-\frac{t}{4}n\right)\leq\mathbb{P}\left(\mathcal{L}_{1}+\mathcal{L}_{2}\geq-\frac{t}{4}n\right) \tag{19}\] We claim that the right hand side of (19) decays exponentially. For that we will need the following **Proposition 6.1**.: _Fix \(t>0\).
There exist constants \(c_{t},c_{2}>0\) such that for all large \(n\), we have_ \[\mathbb{P}\left(\mathcal{L}_{1}\geq-\frac{t}{2}n\right)\leq 5\exp(-c_{t}n), \ \ \mathbb{P}(\mathcal{L}_{2}\geq\frac{t}{4}n)\leq n\exp(-c_{2}n).\] Assume the Proposition is true for now. Then, it is easy to see that the right hand side of (19) goes to \(0\) exponentially with \(n\). Indeed, \[\mathbb{P}\left(\mathcal{L}_{1}+\mathcal{L}_{2}\geq-\frac{t}{4}n\right) =\mathbb{P}\left(\mathcal{L}_{1}+\mathcal{L}_{2}\geq-\frac{t}{4}n, \ \mathcal{L}_{2}<\frac{t}{4}n\right)+\mathbb{P}\left(\mathcal{L}_{1}+\mathcal{L}_ {2}\geq-\frac{t}{4}n,\ \mathcal{L}_{2}\geq\frac{t}{4}n\right)\] \[\leq\mathbb{P}\left(\mathcal{L}_{1}\geq-\frac{t}{2}n\right)+ \mathbb{P}\left(\mathcal{L}_{2}\geq\frac{t}{4}n\right)\] \[\leq 5\exp(-c_{t}n)+n\exp(-c_{2}n).\] which establishes the claim. We now proceed with the proof of Proposition 6.1. Proof of Proposition 6.1.: **Step \(1\): Estimate on \(\mathcal{L}_{2}\)** Let \(N_{t}=|\{k:|z-X_{k}|\geq\frac{1}{t}\}|\). If \(\mathcal{L}_{2}\geq\frac{t}{4}n\), then we must have \(N_{t}\geq 1\), which has probability at most \(e^{-cn}\) for some \(c>0\). To see this, let us recall the following fact about eigenvalues of the Ginibre ensemble. **Lemma 6.2** (Kostlan [20], [14]).: _Let \(\lambda_{j}\) be the eigenvalues (indexed in order of increasing modulus) of a Ginibre random matrix (un-normalized). Then,_ \[\{|\lambda_{1}|^{2},|\lambda_{2}|^{2},...,|\lambda_{n}|^{2}\}\sim\{Y_{1},Y_{2},...,Y_{n}\},\] _where \(Y_{j}\) is a sum of \(j\) i.i.d. \(Exp(1)\) random variables._ Now for the proof of the claim. Since \(|z|<1\) and \(t\in(0,\frac{1}{100}),|z-X_{k}|\geq\frac{1}{t}\) implies for instance that \(|X_{k}|>99\). Therefore, by elementary steps and applying Lemma 6.2, we obtain \[\mathbb{P}(N_{t}\geq 1) \leq\mathbb{P}(\max_{k}|X_{k}|\geq 99)\] \[=\mathbb{P}(\max_{k}|X_{k}|^{2}\geq 99^{2})\] \[=\mathbb{P}(\max_{k}|\lambda_{k}|^{2}>99^{2}n)\] \[=\mathbb{P}(\max_{k}Y_{k}>99^{2}n),\] where we have used \(X_{j}=\dfrac{\lambda_{j}}{\sqrt{n}}\) in going from the second to third line above. Then a union bound and a Cramer-Chernoff estimate gives \[\mathbb{P}(\max_{k}Y_{k}>99^{2}n) \leq n\mathbb{P}(Y_{n}>99^{2}n)\] \[\leq n\exp(-c_{2}n),\] and combining this with the above estimate we obtain \[\mathbb{P}(N_{t}\geq 1)\leq n\exp(-c_{2}n),\] as desired. ### Step \(2\): Estimate on \(\mathcal{L}_{1}\) The desired estimate is equivalent to \[\mathbb{P}\big{\{}\mathcal{L}_{1}-\mathbb{E}(\mathcal{L}_{1})\geq-\frac{t}{2} n-\mathbb{E}(\mathcal{L}_{1})\big{\}}\leq 5\exp(-c_{t}n). \tag{26}\] As preparation towards this, observe that \(\dfrac{1}{n}\mathbb{E}(\mathcal{L}_{1})=\mathbb{E}\left(\int F_{t}d\mu_{n} \right),\) where \(\mu_{n}\) is the empirical spectral measure defined in (2). By the circular law of random matrices [38], almost surely \(\mu_{n}\) and its expectation both converge to the uniform measure on the unit disk. As a result, taking into account that \(F_{t}\) is a bounded continuous function, we obtain \[\lim_{n\to\infty}\frac{1}{n}\mathbb{E}(\mathcal{L}_{1})=\frac{1}{\pi}\int_{ \mathbb{D}}F_{t}dm=\frac{|z|^{2}-1}{2}+\frac{t^{2}}{2}, \tag{27}\] where the second equality in (27) follows from a computation similar to the one in Example 1.7. Using \(|z|=1-t,\) the quantity on the right reduces to \(-t+t^{2}\). 
Hence, for large \(n\), we have \(\mathbb{E}(\mathcal{L}_{1})\leq-\frac{3}{4}tn\) and hence, if the event in (26) holds, then \[\mathcal{L}_{1}-\mathbb{E}(\mathcal{L}_{1})\geq\frac{t}{4}n.\] Thus, our immediate goal is reduced to showing that the probability of the above event is at most \(5\exp(-c_{t}n)\) for an appropriate constant \(c_{t}\). We invoke the following result of Pemantle and Peres [29, Thm. 3.2]. **Theorem 6.3**.: _Given a determinantal point process with \(n<\infty\) points and \(f\) a Lipschitz-\(1\) function on finite counting measures, for any \(a>0\) we have_ \[\mathbb{P}\left(|f-\mathbb{E}(f)|\geq a\right)\leq 5\exp\left(-\frac{a^{2}}{16(a+2n)}\right).\] To say that \(f\) is Lipschitz-\(1\) on the space of finite counting measures means that \[\left|f\left(\sum_{i=1}^{k+1}\delta_{x_{i}}\right)-f\left(\sum_{i=1}^{k}\delta_{x_{i}}\right)\right|\leq 1\] for any \(k\geq 0\) and any points \(x_{1},\ldots,x_{k+1}\). In our case, as we have recalled, \(\{X_{1},X_{2},\ldots,X_{n}\}\) is a determinantal point process with exactly \(n\) points. Moreover, \(\mathcal{L}_{1}\) is Lipschitz with Lipschitz constant \(\|F_{t}\|_{\sup}=\log\frac{1}{t}\). Applying Theorem 6.3 to \(\mathcal{L}_{1}/\log(1/t)\), we see that \[\mathbb{P}\big{\{}\mathcal{L}_{1}-\mathbb{E}(\mathcal{L}_{1})\geq\frac{t}{4}n\big{\}}\leq 5\exp\left\{-\frac{t^{2}n^{2}}{256(\log(1/t))^{2}\left(\frac{tn}{4\log(1/t)}+2n\right)}\right\}\leq 5\exp\{-c_{t}n\},\] where we may take \(c_{t}=ct^{2}/\log(1/t)^{2}\) for a sufficiently small constant \(c>0\). This completes the proof of the proposition. Now that we have proved the pointwise estimate, the net argument from Lemma 6 can be used to show that the whole circle \(|z|=1-t\) lies in the lemniscate w.o.p. The maximum principle then shows that the corresponding disk lies in the lemniscate w.o.p. This concludes the proof that \(\Lambda_{n}\) contains \(\mathbb{D}_{r}\) w.o.p. We next prove that \(\Lambda_{n}\subseteq\mathbb{D}_{s}\) w.o.p. for \(s>1\). Fix \(1<s^{\prime}<s\) and let \(\delta=s-s^{\prime}\) and \(\varepsilon=\frac{1}{2}\log s\). We present the proof in four steps. 1. \(\frac{|\lambda_{j}|}{\sqrt{n}}<s^{\prime}\) for all \(j\), w.o.p., i.e., with probability at least \(1-e^{-cn}\). To see this, invoke Lemma 6.2 to see that the complementary event has probability less than \(ne^{-c(s^{\prime})n}\) by the same reasoning used in (25), noting that \(99^{2}\) may be replaced by any constant greater than \(1\). 2. Fix \(z\) with \(|z|=s\) and let \(f_{z,\delta}(w)=\log\left(\min\{\max\{|z-w|,\delta\},\frac{1}{\delta}\}\right)\), a bounded continuous function. Then by [31] (Theorem 9), \[\mathbb{P}\left\{\left|\frac{1}{n}\sum_{j=1}^{n}f_{z,\delta}(\lambda_{j}/\sqrt{n})-\int_{\mathbb{D}}f_{z,\delta}(w)\frac{dm(w)}{\pi}\right|>\varepsilon\right\}\leq e^{-c_{\varepsilon,\delta}n^{2}}.\] 3. On the event in (1), \(f_{z,\delta}(\lambda_{j}/\sqrt{n})=\log\left|z-\frac{\lambda_{j}}{\sqrt{n}}\right|\) for all \(j\) and all \(|z|=s\). Also, \(f_{z,\delta}(w)=\log|z-w|\) for all \(w\in\mathbb{D}\). Hence, \[\mathbb{P}\left\{\left|\frac{1}{n}\log|p_{n}(z)|-\log s\right|>\varepsilon\right\}\leq e^{-c_{\varepsilon,\delta}n^{2}}+e^{-cn}.\] Hence, \(\frac{1}{n}\log|p_{n}(z)|>\frac{1}{2}\varepsilon\) w.o.p. by the choice of \(\varepsilon=\frac{1}{2}\log s\). 4. Let \(m=\frac{100}{\varepsilon\delta}\) and let \(z_{1},\ldots,z_{m}\) be equispaced points on \(\partial\mathbb{D}_{s}\). Then w.o.p. \(\inf_{j\leq m}\frac{1}{n}\log|p_{n}(z_{j})|>\frac{1}{2}\varepsilon\) by the previous step.
On the event in (1), \(\|\nabla\frac{1}{n}\log|p_{n}(z)|\|\leq\frac{1}{\delta}\), hence \[\inf_{|z|=s}\frac{1}{n}\log|p_{n}(z)|>0\] w.o.p. On this event \(\Lambda_{n}\subseteq\mathbb{D}_{s}\). This concludes the proof of Theorem 1.6.
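For the reader's convenience, the Cramér-Chernoff estimate invoked in Step 1 of the proof of Proposition 6.1 can be spelled out as follows; the explicit constant below is one admissible choice and is not taken from the source. Since \(Y_{n}\) is a sum of \(n\) i.i.d. \(\mathrm{Exp}(1)\) variables, Markov's inequality applied to \(e^{\theta Y_{n}}\) with \(0<\theta<1\) gives \[\mathbb{P}\left(Y_{n}>an\right)\leq e^{-\theta an}\,\mathbb{E}\left[e^{\theta Y_{n}}\right]=e^{-\theta an}(1-\theta)^{-n}=\exp\left(-n\left(\theta a+\log(1-\theta)\right)\right),\] and the choice \(\theta=1-1/a\) (optimal for \(a>1\)) yields \[\mathbb{P}\left(Y_{n}>an\right)\leq\exp\left(-n\left(a-1-\log a\right)\right).\] With \(a=99^{2}\) this shows that one may take \(c_{2}=99^{2}-1-\log(99^{2})>0\); combined with the union bound \(\mathbb{P}(\max_{k}Y_{k}>99^{2}n)\leq n\,\mathbb{P}(Y_{n}>99^{2}n)\), valid because each \(Y_{k}\) is stochastically dominated by \(Y_{n}\), this gives the bound \(n\exp(-c_{2}n)\) used in the proof.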
A classical geometric feature associated with a complex polynomial $p$ is the inradius (the radius of the largest inscribed disk) of its lemniscate $\Lambda := \{z \in \mathbb{C}:|p(z)| < 1\}$. This paper studies the inradius of the lemniscate when $p$ is a random complex polynomial, that is, when the zeros of $p$ are drawn independently from a compactly supported probability measure $\mu$. If the negativity set of the logarithmic potential $U_{\mu}$ generated by $\mu$ is nonempty, then the inradius is bounded from below by a positive constant with overwhelming probability. Moreover, if the negativity set of $U_{\mu}$ contains the support of $\mu$, then the inradius converges to a deterministic limit
2309.08140
PromptTTS++: Controlling Speaker Identity in Prompt-Based Text-to-Speech Using Natural Language Descriptions
We propose PromptTTS++, a prompt-based text-to-speech (TTS) synthesis system that allows control over speaker identity using natural language descriptions. To control speaker identity within the prompt-based TTS framework, we introduce the concept of speaker prompt, which describes voice characteristics (e.g., gender-neutral, young, old, and muffled) designed to be approximately independent of speaking style. Since there is no large-scale dataset containing speaker prompts, we first construct a dataset based on the LibriTTS-R corpus with manually annotated speaker prompts. We then employ a diffusion-based acoustic model with mixture density networks to model diverse speaker factors in the training data. Unlike previous studies that rely on style prompts describing only a limited aspect of speaker individuality, such as pitch, speaking speed, and energy, our method utilizes an additional speaker prompt to effectively learn the mapping from natural language descriptions to the acoustic features of diverse speakers. Our subjective evaluation results show that the proposed method can better control speaker characteristics than the methods without the speaker prompt. Audio samples are available at https://reppy4620.github.io/demo.promptttspp/.
Reo Shimizu, Ryuichi Yamamoto, Masaya Kawamura, Yuma Shirahata, Hironori Doi, Tatsuya Komatsu, Kentaro Tachibana
2023-09-15T04:11:37
http://arxiv.org/abs/2309.08140v2
Promptts++: Controlling Speaker Identity in Prompt-Based Text-to-Speech Using Natural Language Descriptions ###### Abstract We propose PromptTTS++, a prompt-based text-to-speech (TTS) synthesis system that allows control over speaker identity using natural language descriptions. To control speaker identity within the prompt-based TTS framework, we introduce the concept of speaker prompt, which describes voice characteristics (e.g., gender-neutral, young, old, and muffled) designed to be approximately independent of speaking style. Since there is no large-scale dataset containing speaker prompts, we first construct a dataset based on the LibriTTS-R corpus with manually annotated speaker prompts. We then employ a diffusion-based acoustic model with mixture density networks to model diverse speaker factors in the training data. Unlike previous studies that rely on style prompts describing only a limited aspect of speaker individuality, such as pitch, speaking speed, and energy, our method utilizes an additional speaker prompt to effectively learn the mapping from natural language descriptions to the acoustic features of diverse speakers. Our subjective evaluation results show that the proposed method can better control speaker characteristics than the methods without the speaker prompt. Audio samples are available at [https://reppy4620.github.io/demo.prompttspp/](https://reppy4620.github.io/demo.prompttspp/). Reo Shimizu\({}^{1*}\), Ryuichi Yamamoto\({}^{2}\), Masaya Kawamura\({}^{2}\), Yuma Shirahatata\({}^{2}\), Hironori Doi\({}^{2}\), Tatsuya Komatsu\({}^{2}\), Kentaro Tachibana\({}^{2}\)+\({}^{1}\)Tohoku University, Japan, \({}^{2}\)LINE Corp., Japan. Footnote †: Work done during an internship at LINE corporation. Text-to-speech, speech synthesis, speaker generation, mixture model, diffusion model ## 1 Introduction Deep-learning-based text-to-speech (TTS) technology has seen significant advancements, becoming a fundamental technology for numerous voice interaction applications. Since the quality of synthetic speech has become nearly close to human voices [1, 2], recent TTS research focuses on more challenging speech generation tasks, including controllable TTS [3]. Recently, prompt-based controllable TTS systems, which use a natural language description (referred to as a _prompt_) to control voice characteristics such as speaking style, have attracted considerable interest [4, 5, 6]. These systems offer an intuitive user interface and leverage the powerful language understanding capabilities of large language models (LLMs) [7, 8, 9] to enhance the flexibility of controllable TTS. Despite the promising capability of prompt-based TTS systems, previous works have lacked controllability over speaker identity. For instance, PromptTTS [4] uses a prompt describing speaking styles (referred to as _style prompt_) such as gender, pitch, speaking speed, energy, and emotion. Given that the style prompt predominantly correlates with the prosody of utterances and only describes a limited aspect of speaker individuality, it becomes challenging to finely control the speaker identity to synthesize the desired speech. Some other methods, such as InstructTTS [5] and PromptStyle [6], also use the style prompt to control speaking styles. In those systems, speaker ID is used as the basis of multi-speaker prompt-based TTS. Therefore, they cannot generate new speakers other than the ones used in the training data, which greatly limits the controllability. 
To address these issues, we propose PromptTTS++, a prompt-based TTS system that enables more control over speaker identity by text-based prompt. Our method is inspired by PromptTTS [4], but we have made two key changes. (1) We introduce the concept of _speaker prompt_, which is designed to be approximately independent of the style prompt and describes speaker identity with natural language descriptions. Table 1 illustrates the difference between style and speaker prompts. (2) We employ mixture density networks (MDNs) [10] based on Gaussian mixture models (GMMs) for modeling style/speaker embedding extracted from a global style token (GST)-based reference encoder [11, 12], allowing to learn rich and diverse speaker representations conditioned on the text prompt information. Additionally, we use a diffusion-based acoustic model for enhancing the quality of synthesized speech. Since there is no appropriate database for our task, we first construct a dataset with annotated speaker prompts for the speakers in the LibriTTS-R corpus [13]. Then, we apply our proposed method to the dataset. Our subjective evaluation results on prompt-to-speech consistency confirm that the proposed method can generate speech closer to the specified speaker characteristics than the methods without the speaker prompt. Furthermore, by visualizing the learned embedding space, we show that using only the style prompt is insufficient for controlling speaker identity, and using the speaker prompt alleviates this insufficiency. We plan to publish our annotated prompts for future research. ## 2 Method Figure 1 provides an overview of our proposed model, which consists of a reference encoder, prompt encoder, and acoustic model. Our method generates output speech from a given input text (referred to as a _content prompt_) and speaker/style prompt. Details are described in the following sections. ### Reference encoder To uncover latent acoustic variations related to speakers, we employ a GST-based reference encoder and extract style embeddings1 from speech signals [11]. The GST-based reference encoder takes log-scale mel-spectrogram as input and outputs a fixed-dimensional style embedding. This style embedding is a weighted combination of the learned style tokens, which will be used as the conditional features of the acoustic model. Footnote 1: Following GST [11], we use the term style embedding for the rest of this paper. However, note that the learned embeddings will reveal not only the style factors but also speaker factors. ### Prompt encoder The prompt encoder is a key module in the prompt-based TTS systems and is responsible for predicting the style embedding from the input speaker/style prompts. Our prompt encoder is mainly composed of a pre-trained language model and MDN. Previous works, such as PromptTTS [4] and PromptStyle [6], adopt BERT [14] as a pre-trained language model to extract prompt embeddings. Following these works, we also adopt BERT as a fundamental building block of the prompt encoder. We concatenate the speaker and style prompts into a single one. Subsequently, the prompt embedding is obtained using BERT, followed by three linear layers. During training, PromptStyle [6] utilizes a cosine similarity loss to minimize the difference between embeddings predicted from the reference encoder and prompt encoder. However, using the cosine similarity loss significantly limits the capability of TTS to generate diverse speakers. 
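As a concrete illustration of the prompt encoder described above, the following minimal sketch (assuming PyTorch and the HuggingFace `transformers` library, with `bert-base-uncased` as referenced later in Section 4.2) extracts a prompt embedding from concatenated speaker/style prompts via the [CLS] hidden state followed by three linear layers. All class and variable names, and the 256-dimensional output, are illustrative assumptions rather than the authors' implementation; the example prompt texts are taken from Table 1.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class PromptEmbedder(nn.Module):
    """BERT [CLS] state -> three linear layers -> prompt embedding (illustrative)."""
    def __init__(self, model_name="bert-base-uncased", out_dim=256):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.bert = AutoModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size  # 768 for bert-base
        self.proj = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, speaker_prompt: str, style_prompt: str) -> torch.Tensor:
        # Concatenate the speaker and style prompts into a single text prompt.
        text = f"{speaker_prompt} {style_prompt}"
        inputs = self.tokenizer(text, return_tensors="pt", truncation=True)
        cls_state = self.bert(**inputs).last_hidden_state[:, 0]  # [CLS] token state
        return self.proj(cls_state)                               # shape: (1, out_dim)

embedder = PromptEmbedder()
emb = embedder("The speaker identity is described as soft, adult-like, "
               "gender-neutral and slightly muffled.",
               "A woman speaks slowly with low volume and low pitch.")
print(emb.shape)  # torch.Size([1, 256])
```

The limitation noted above concerns the cosine-similarity training objective; the GMM-based MDN head that replaces it is sketched at the end of Section 2.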
To resolve this problem, we employ GMM-based MDN [10] to model the conditional distribution of embeddings given the prompt information. Specifically, an MDN layer is added on top of BERT combined with linear layers, where the output consists of the parameters of GMMs. Using MDN allows the model to learn diverse characteristics of speakers as a probabilistic distribution and enables novel speaker generation by sampling from the distribution [12]. ### Acoustic model As depicted in the left side of Fig. 1, our acoustic model generates a mel-spectrogram from the content prompt and style embedding. The model comprises a content encoder, variance adaptor, and diffusion decoder. We use a Conformer [15] as the content encoder. The structure of the variance adaptor mirrors that of Fast-Speech 2 [16], except that the energy predictor is not used for simplicity. The variance adaptor contains a duration predictor and pitch predictor. The pitch predictor predicts logarithmic fundamental frequency (log-\(F_{0}\)) and voiced/unvoiced flags (V/UV). For the diffusion decoder, we adopt the same model as DiffSinger [17], a mel-spectrogram generation model based on a denoising diffusion probabilistic model [18]. During training, the diffusion decoder learns to denoise a given noisy mel-spectrogram, whereas the decoder generates a clean mel-spectrogram from noise at inference time. Further details can be found in previous studies [17], [18]. Although the improvement of naturalness is not the focus of this paper, our preliminary experiments confirmed a relatively low level of synthetic naturalness with the Transformer-based decoder used in PromptTTS [4]. As a result, we replaced the decoder with the diffusion model. Similarly, we add an MDN layer to the duration predictor for improving naturalness [19]. ### Training To train our model, we minimize the following loss function: \[L=L_{\mathrm{dec}}+L_{\mathrm{dur}}+L_{\mathrm{pitch}}+L_{\mathrm{style}} \tag{1}\] where \(L_{\mathrm{dec}}\), \(L_{\mathrm{dur}}\), \(L_{\mathrm{pitch}}\), and \(L_{\mathrm{style}}\) represent the loss functions for the diffusion decoder, duration predictor, pitch predictor, and prompt encoder, respectively. We use log-likelihood losses for \(L_{\mathrm{dur}}\) and \(L_{\mathrm{style}}\). For \(L_{\mathrm{dec}}\), and \(L_{\mathrm{pitch}}\), we use a weighted variational lower bound [18] and L1 loss, respectively. Note that \(L_{\mathrm{pitch}}\) includes two L1 losses for log-\(F_{0}\) and V/UV. We use stop-gradient operation on the output of the reference encoder when optimizing \(L_{\mathrm{style}}\) to emulate the separate training of the the prompt encoder and the rest of the models [12]. \begin{table} \begin{tabular}{l|l} \hline Prompt & Example \\ \hline Style prompt & A woman speaks slowly with low volume and low pitch. \\ Speaker prompt & The speaker identity is described as soft, adult-like, gender-neutral and slightly muffled. \\ \hline \end{tabular} \end{table} Table 1: Examples of prompts. The style prompt is defined for each utterance, whereas the speaker prompt is defined for each speaker. Figure 1: Overview of PromptTTS++. During training, the style embedding is extracted from the reference speech, while it is predicted from the input speaker/style prompt during inference. The style embedding is used as the conditional information of the acoustic model (in blue) to generate mel-spectrogram. The output waveform is generated by a vocoder. 
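To make the GMM-based MDN concrete, here is a minimal PyTorch-style sketch of a mixture-density head over style embeddings, trained with a negative log-likelihood that plays the role of \(L_{\mathrm{style}}\) and sampled from at inference time. The diagonal-covariance parameterization, the dimensions (768-dimensional prompt embedding, 256-dimensional style embedding, 10 mixtures), and all names are assumptions for illustration, not the authors' exact implementation.

```python
import math
import torch
import torch.nn as nn

class MDNHead(nn.Module):
    """Predicts a diagonal-covariance GMM over style embeddings from a prompt embedding."""
    def __init__(self, prompt_dim=768, style_dim=256, n_mix=10):
        super().__init__()
        self.n_mix, self.style_dim = n_mix, style_dim
        self.to_params = nn.Linear(prompt_dim, n_mix * (1 + 2 * style_dim))

    def forward(self, h):  # h: (batch, prompt_dim)
        w, mu, log_sigma = torch.split(
            self.to_params(h),
            [self.n_mix, self.n_mix * self.style_dim, self.n_mix * self.style_dim], dim=-1)
        return (w,
                mu.view(-1, self.n_mix, self.style_dim),
                log_sigma.view(-1, self.n_mix, self.style_dim))

    def nll(self, h, style_emb):  # negative log-likelihood of target style embeddings
        w, mu, log_sigma = self(h)
        t = style_emb.unsqueeze(1)                                 # (batch, 1, style_dim)
        comp = (-0.5 * ((t - mu) * torch.exp(-log_sigma)) ** 2
                - log_sigma - 0.5 * math.log(2 * math.pi)).sum(-1)  # (batch, n_mix)
        return -torch.logsumexp(torch.log_softmax(w, -1) + comp, dim=-1).mean()

    @torch.no_grad()
    def sample(self, h):  # draw one style embedding per prompt embedding
        w, mu, log_sigma = self(h)
        k = torch.distributions.Categorical(logits=w).sample()     # (batch,)
        idx = k.view(-1, 1, 1).expand(-1, 1, self.style_dim)
        mu_k = mu.gather(1, idx).squeeze(1)
        sigma_k = torch.exp(log_sigma.gather(1, idx).squeeze(1))
        return mu_k + sigma_k * torch.randn_like(mu_k)
```

At inference time, `sample` provides the stochastic speaker-generation behaviour enabled by the MDN, while replacing `nll` with a cosine-similarity loss corresponds to the non-MDN variants compared in Section 4 (P3/P4).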
## 3 Dataset Given that there is no large-scale dataset containing speaker prompts, we create a database by manually annotating speaker prompts. As the base database, we use LibriTTS-R [13], a high-quality multi-speaker corpus that is produced by applying a text-informed speech restoration method [20] to the LibriTTS corpus [21]. This corpus includes 585 hours of speech data from 2,456 English speakers. For this study, we randomly select 404 speakers and have their speaker prompts annotated by speech experts. The speaker prompt is annotated for each speaker individually. We randomly select five audio samples from each speaker and ask experts to annotate the speaker prompts describing the speaker's characteristics. To simplify the labeling process, we present a set of pre-defined words associated with speaker identity [22, 23] and allow the annotators to create speaker prompts based on these words. The pre-defined words include terms such as young, old, gender-neutral, deep, weak, muffled, raspy, clear, cool, wild, and sweet, among others. Similar to PromptTTS [4], we also use the style prompt describing pitch, speaking speed, and energy. We automatically generate pseudo style prompts by analyzing the statistics of \(F_{0}\), speaking speed, and loudness for each gender. Specifically, we add labels of pitch, speed, and loudness with three levels (e.g., low-normal-high) for each utterance and convert it to our pre-defined style prompts (e.g., a woman speaks with low pitch, normal speed, and high volume). To increase the variations of style prompts, we generate semantically identical but different prompts2 using LLama2 [9]. In total, we create 1,349 unique style prompts. Footnote 2: We found that only 154 unique prompts were used for the PromptSpeech dataset of real speech samples [4]. Therefore, we decided not to use them directly and instead manually make new style prompts ourselves. ## 4 Experimental Evaluations ### Data We used the LibriTTS-R dataset with text prompt information, as described in Section 3. All audio samples were sampled at 24 kHz. We separated 10 speakers for evaluation, specifically with the speaker IDs: 121, 237, 260, 908, 1089, 1188, 1284, 1580, 1995 and 2300. Those test speakers had speaker prompt annotations. The rest of the dataset was split into training and validation sets, with the split based on 2% of the speakers for validation. Although the speaker prompts were only available for 404 speakers, we used all the speakers' data for training3. For speakers without available speaker prompts, we solely used the style prompt as the input for the prompt encoder. Note that the style prompt for each utterance was sampled from our handcrafted style prompts based on gender and the three-level labels of pitch, speed, and energy for each training iteration. For training our duration model, we extracted phone durations by the Montreal forced aligner (MFA) [24]. We extracted 80-dimensional log-scale mel-spectrograms with a 10 msec frame shift and 40 msec window size as acoustic features. We used WORLD [25] to compute continuous log-\(F_{0}\) and V/UV [26]. Footnote 3: Our preliminary experiments found that using the partially annotated full dataset was more beneficial than using only the annotated data. ### Models For our GST-based reference encoder [11], we used six convolutional layers with output channels set to 128, 128, 256, 256, 512, and 512, respectively. The number of hidden units in a gated recurrent unit was set to 256. 
We set the number of style tokens, their dimensionality, and the number of attention heads to 10, 256, and 4, respectively. The output style embedding was normalized to have a unit norm. As the prompt encoder, we used a pre-trained BERT 4. To obtain a prompt embedding, the final hidden state corresponding to the special classification token [14] from BERT was passed to three linear layers interleaved by ReLU activation [27]. An MDN layer was used for modeling the style embedding, where the number of mixtures was set to 10. During training, we fixed the parameters of BERT except for the last attention layer [6]. Footnote 4: [https://huggingface.co/bert-base-uncased](https://huggingface.co/bert-base-uncased) The diffusion decoder was based on non-causal WaveNet [17]. The model contained 20 layers of one-dimensional residual convolution layers with skip connections. The channel size was set to 256. The number of diffusion steps was set to 100. The details of the duration predictor and pitch predictor were the same as those in FastSpeech 2 [16], except that we added an MDN layer to the duration predictor. The number of mixtures was set to four. We trained the proposed model for 100 epochs (680 K steps). We used AdamW optimizer [28] with a batch size of 30 K frames, and adopted a warmup learning rate scheduler [29] with an initial learning rate of 0.001. The number of warmup steps was 4000. As the vocoder, we used BigVGAN-base [30], which was trained using the same data as the acoustic model. To stabilize the pitch generation, we incorporated the source excitation module from the neural source filter model [31]. We trained the vocoder for 2.5 M steps using the AdamW optimizer with a batch size of 32. To evaluate the effectiveness of the proposed method, we investigated the models without the speaker prompt and the MDN layer in the prompt encoder. For the models without MDN, we used a cosine similarity loss for \(L_{\mathrm{style}}\) in the same manner as Prompt-Style [6]. We used the same model architecture for a fair comparison except for the MDN module. Furthermore, we trained a baseline model that resembles PromptTTS [4], where the non-autoregressive Transformer-based decoder [29] was used instead of the diffusion decoder. We trained the baseline model for the same number of iterations as the other models. ### Evaluation We conducted two subjective listening tests: 5-point naturalness mean opinion score (MOS) and 4-point prompt-to-speech consistency MOS tests. For the former, raters are asked to make quality judgments about the speech samples using the following five possible responses: 1 = Bad; 2 = Poor; 3 = Fair; 4 = Good; and 5 = Excellent. For the latter, given a pair of a speech sample and corresponding speaker prompt, raters are asked to judge the prompt-to-speech consistency with the following choices: 1 = Inconsistent; 2 = Somewhat inconsistent; 3 = Somewhat consistent; and 4 = Consistent. Note that we did not evaluate the consistency of the style prompt, as the primary focus of our study is on the controllability of speaker individuality. We asked native Japanese raters for evaluations. The number of subjects for each MOS test was 16 and 14. For all the tests, we selected three random utterances for each speaker. Each utterance had a duration of 3 to 8 seconds. In total, we evaluated 30 utterances for each method. 
We assessed samples from both the training and test speakers for the prompt-to-speech consistency MOS test, aiming to evaluate the system's ability to generalize to unseen speakers. We also evaluated the analysis-synthesis samples for the naturalness MOS test, which were generated by the vocoder using the ground truth mel-spectrogram. Additionally, we evaluated the speech samples generated with the reference encoder, where the style embedding was computed from the reference speech instead of the speaker/style prompt. ### Results Table 2 shows the subjective evaluation results. In terms of naturalness, all the proposed methods significantly outperformed the baseline system (B1). Among the proposed prompt-based TTS systems, the models without MDN worked better than the others (P1 vs. P3 and P2 vs. P4). This result can be attributed to the fact that human raters preferred the averaged voice characteristics generated by the non-MDN models, likely due to the inability to capture various speaker characteristics without using MDN. Note that the non-MDN models do not have the sampling capability. As for prompt-to-speech consistency, the proposed method (P1) achieved the best performance among all the prompt-based TTS systems for both the training and test speakers. Specifically, the proposed method significantly outperformed the models without the speaker prompt (P1 vs. P2 and P1 vs. P4) in a student's \(t\)-test with a 5 % significance level, demonstrating the effectiveness of the proposed method in terms of controllability and generalization capability. Another observation is that there was no statistical significance between the proposed model and the model without MDN (P1 vs. P3). This result suggests that while MDN enables the sampling capability, it does not necessarily enhance controllability. Lastly, a comparison between the proposed method and the one that uses reference speech input (R1) reveals that the performance of the prompt-based system is on par with the method based on reference speech. To further analyze our proposed methods, we compared the t-SNE plots of the style embeddings derived from the training data5. Fig2-(a) illustrates that our method generates distinctly clustered embeddings for each different speaker, whereas the method without the speaker prompt, as shown in Fig 2-(b), generates embeddings with approximately two clusters corresponding to gender, making different speakers non-distinguishable. These observations suggest that (1) without the speaker prompt, it becomes challenging to control speaker individuality beyond gender distinctions; (2) by incorporating the additional speaker prompt, it becomes possible to map the input prompt to the desired speaker characteristics. Footnote 5: We omitted the t-SNE plots for our non-MDN models due to their inability to learn complex distributions, making the results difficult to interpret. ## 5 Conclusion In this paper, we proposed PromptTTS++, a prompt-based TTS system that allows control over speaker identity. We introduced the speaker prompt to control speaker identity and constructed a dataset based on the LibriTTS-R dataset with manual annotations. Our experimental results demonstrated that our method enabled speaker identity control using the additional speaker prompt. Future work includes larger-scale annotations of speaker prompts for the entire LibriTTS-R speakers. Furthermore, investigating the prompt-based TTS systems for novel speaker generation tasks would be an interesting research direction. 
\begin{table} \begin{tabular}{l l c c c|c c c} \hline \hline & & Model & & & Naturalness & Prompt-to-speech consistency \\ \hline \multirow{2}{*}{System} & \multirow{2}{*}{Name} & Style & Speaker & \multirow{2}{*}{MDN} & \multicolumn{3}{c}{Dataset} \\ & & prompt & prompt & & & Test & Train & Test \\ \hline B1 & Baseline & ✓ & & & \(1.54\pm 0.07\) & \(2.52\pm 0.10\) & \(2.36\pm 0.10\) \\ P1 & **Proposed (PromptTTS++)** & ✓ & ✓ & \(3.88\pm 0.08\) & \(\mathbf{3.52\pm 0.06}\) & \(\mathbf{3.37\pm 0.07}\) \\ P2 & Proposed w/o speaker prompt & ✓ & & ✓ & \(3.82\pm 0.09\) & \(3.36\pm 0.07\) & \(3.21\pm 0.07\) \\ P3 & Proposed w/o MDN & ✓ & ✓ & & \(3.91\pm 0.09\) & \(3.51\pm 0.06\) & \(3.27\pm 0.07\) \\ P4 & Proposed w/o both MDN and speaker prompt & ✓ & & & \(\mathbf{3.99\pm 0.08}\) & \(3.42\pm 0.07\) & \(3.25\pm 0.07\) \\ \hline R1 & Proposed w/ reference speech & & & & \(3.84\pm 0.09\) & \(3.58\pm 0.06\) & \(3.32\pm 0.07\) \\ R2 & BigVGAN-base (analysis-synthesis) & & & & \(4.71\pm 0.05\) & - & - \\ \hline R3 & Ground truth & - & - & - & \(4.80\pm 0.04\) & \(3.57\pm 0.06\) & \(3.50\pm 0.07\) \\ \hline \hline \end{tabular} \end{table} Table 2: Naturalness and prompt-to-speech consistency MOS test results. Naturalness tests were performed for the test set, while the consistency tests were performed for both the training and test sets. Averaged scores among 10 speakers were reported. Bold font denotes the best score among all prompt-based TTS systems. Figure 2: t-SNE plots of the style embeddings for 100 speakers, extracted by the prompt encoders. Each speaker is represented by a different color. Best viewed in color.
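The t-SNE analysis behind Figure 2 can be reproduced with standard tooling; the sketch below (scikit-learn and matplotlib) only illustrates the plotting recipe, with randomly generated placeholder embeddings and speaker labels standing in for the real prompt-encoder outputs, which are not available here.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# Placeholder data: in the paper these would be 256-dim style embeddings produced
# by the prompt encoder for utterances of 100 training speakers.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(2000, 256))
speaker_ids = rng.integers(0, 100, size=2000)

points = TSNE(n_components=2, perplexity=30, init="pca",
              random_state=0).fit_transform(embeddings)

plt.figure(figsize=(5, 5))
plt.scatter(points[:, 0], points[:, 1], c=speaker_ids, cmap="tab20", s=4)
plt.title("t-SNE of style embeddings (colored by speaker)")
plt.tight_layout()
plt.savefig("style_embedding_tsne.png", dpi=200)
```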
PromptTTS++ is a prompt-based text-to-speech synthesis system that allows control over speaker identity using natural language descriptions. To control speaker identity within the prompt-based TTS framework, we introduce the concept of the speaker prompt, which describes voice characteristics (e.g., gender-neutral, young, old, and muffled) designed to be largely independent of speaking style. Since no large-scale dataset containing speaker prompts exists, we first construct one based on the LibriTTS-R corpus with manually annotated speaker prompts. We then apply a diffusion-based acoustic model with mixture density networks to the training data in order to model diverse speaker factors. Unlike conventional style prompts, which describe only limited aspects such as pitch, speaking speed, and energy,
2309.07273
Real Effect or Bias? Best Practices for Evaluating the Robustness of Real-World Evidence through Quantitative Sensitivity Analysis for Unmeasured Confounding
The assumption of no unmeasured confounders is a critical but unverifiable assumption required for causal inference yet quantitative sensitivity analyses to assess robustness of real-world evidence remains underutilized. The lack of use is likely in part due to complexity of implementation and often specific and restrictive data requirements required for application of each method. With the advent of sensitivity analyses methods that are broadly applicable in that they do not require identification of a specific unmeasured confounder, along with publicly available code for implementation, roadblocks toward broader use are decreasing. To spur greater application, here we present a best practice guidance to address the potential for unmeasured confounding at both the design and analysis stages, including a set of framing questions and an analytic toolbox for researchers. The questions at the design stage guide the research through steps evaluating the potential robustness of the design while encouraging gathering of additional data to reduce uncertainty due to potential confounding. At the analysis stage, the questions guide researchers to quantifying the robustness of the observed result and providing researchers with a clearer indication of the robustness of their conclusions. We demonstrate the application of the guidance using simulated data based on a real-world fibromyalgia study, applying multiple methods from our analytic toolbox for illustration purposes.
Douglas Faries, Chenyin Gao, Xiang Zhang, Chad Hazlett, James Stamey, Shu Yang, Peng Ding, Mingyang Shan, Kristin Sheffield, Nancy Dreyer
2023-09-13T19:25:15
http://arxiv.org/abs/2309.07273v1
Real Effect or Bias? Best Practices for Evaluating the Robustness of Real-World Evidence through Quantitative Sensitivity Analysis for Unmeasured Confounding Authors: Douglas Faries\({}^{1}\), Chenyin Gao\({}^{2}\), Xiang Zhang\({}^{3}\), Chad Hazlett\({}^{4}\), James Stamey\({}^{5}\), Shu Yang\({}^{2}\), Peng Ding\({}^{6}\), Mingyang Shan\({}^{1}\), Kristin Sheffield\({}^{7}\), Nancy Dreyer\({}^{8}\). ## Abstract The assumption of 'no unmeasured confounders' is a critical but unverifiable assumption required for causal inference yet quantitative sensitivity analyses to assess robustness of real-world evidence remains underutilized. The lack of use is likely in part due to complexity of implementation and often specific and restrictive data requirements required for application of each method. With the advent of sensitivity analyses methods that are broadly applicable in that they do not require identification of a specific unmeasured confounder - along with publicly available code for implementation - roadblocks toward broader use are decreasing. To spur greater application, here we present a best practice guidance to address the potential for unmeasured confounding at both the design and analysis stages, including a set of framing questions and an analytic toolbox for researchers. The questions at the design stage guide the research through steps evaluating the potential robustness of the design while encouraging gathering of additional data to reduce uncertainty due to potential confounding. At the analysis stage, the questions guide researchers to quantifying the robustness of the observed result and providing researchers with a clearer indication of the robustness of their conclusions. We demonstrate the application of the guidance using simulated data based on a real-world fibromyalgia study, applying multiple methods from our analytic toolbox for illustration purposes. ## Plain Language Summary Analyses comparing the effectiveness or safety of interventions based on real-world (non-randomized) data are potentially biased and viewed skeptically in part due to unmeasured confounding. The assumption of 'no unmeasured confounders', that is, all variables that influence both treatment selection and outcomes are in the dataset and used in the analysis, is a requirement for producing valid real-world evidence. However, sensitivity analyses to assess this assumption is underutilized. Roadblocks to broader implementation of sensitivity analysis are decreasing with the introduction of broadly applicable methods and publicly available code. Here we propose a best practice guidance to 1) assist researchers planning a study how to measure the robustness of the design to bias from potential confounding; and 2) help researchers analyzing data to produce sensitivity analyses quantifying the robustness of their findings by understanding the strength of unmeasured confounding necessary to change study conclusions and the
The assumption of no unmeasured confounders is a critical but unverifiable assumption required for causal inference. However, quantitative sensitivity analyses used to assess the robustness of real-world evidence remain underutilized. This lack of use is likely due in part to the complexity of implementation and to the specific, restrictive data requirements needed to apply each method. With the advent of sensitivity analysis methods that are broadly applicable in that they do not require identifying a specific unmeasured confounder, the barriers to broader use are decreasing. To spur greater application, we present here a best practice guidance for addressing the potential for unmeasured confounding at both the design and analysis stages, including a set of framing questions and an analytic toolbox. The questions at the design stage guide the researcher through steps that evaluate the potential robustness of the design and the involvement of potential confounders
2309.17060
Possible inconsistency between phenomenological and theoretical determinations of charge symmetry breaking in nuclear energy density functionals
We summarize the recent progress on the determination of the charge symmetry breaking term of nuclear energy density functionals. We point out that the strength of the term determined theoretically is remarkably smaller than that determined phenomenologically, which is still an open question.
Tomoya Naito, Gianluca Colò, Tetsuo Hatsuda, Haozhao Liang, Xavier Roca-Maza, Hiroyuki Sagawa
2023-09-29T08:45:30
http://arxiv.org/abs/2309.17060v1
Possible inconsistency between phenomenological and theoretical determinations of charge symmetry breaking in nuclear energy density functionals ###### Abstract We summarize the recent progress on the determination of the charge symmetry breaking term of nuclear energy density functionals. We point out that the strength of the term determined theoretically is remarkably smaller than that determined phenomenologically, which is still an open question. \({}^{1}\) RIKEN Interdisciplinary Theoretical and Mathematical Sciences Program, Wako, Japan \({}^{(2)}\) Department of Physics, The University of Tokyo, Tokyo, Japan \({}^{(3)}\) Dipartimento di Fisica, Universita degli Studi di Milano, Milano, Italy \({}^{(4)}\) INFN, Sezione di Milano, Milano, Italy \({}^{(5)}\) Center for Mathematics and Physics, University of Aizu, Aizu-Wakamatsu, Japan \({}^{(6)}\) RIKEN Nishina Center, Wako, Japan ## 1 Introduction The idea of the isospin \(T\) was introduced in the 1930s to distinguish protons and neutrons as two different states of nucleons [1, 2]. The isospin symmetry of the nuclear interaction was also proposed in the 1930s [3, 4], i.e., the nuclear interaction does not depend on the third component of the isospin \(T_{z}\). Accordingly, the proton-proton, the neutron-neutron, and the \(T=1\) channel of the proton-neutron channels are identical to each other. It is known that this symmetry is, however, just an approximate one and slightly broken. This is evidenced by the difference in the scattering length. The main isospin symmetry breaking (ISB) terms of the nuclear interaction can be classified into two: charge independence breaking (CIB) and charge symmetry breaking (CSB). They are also, respectively, referred to as class II and III nuclear interactions [5]. The former corresponds to the difference between like-particle interaction and different-particle interaction; the latter corresponds to the difference between proton-proton interaction and neutron-neutron one. Isospin symmetry of atomic nuclei is also broken due to the Coulomb interaction. Okamoto [6] and Nolen and Schiffer [7] pointed out that the Coulomb interaction is not enough to describe the mass difference of mirror nuclei, a pair of two nuclei one of which is composed of \(Z\) protons and \(N\) neutrons and the other of which is of \(N\) protons and \(Z\) neutrons. Recently, thanks to the progress of nuclear experiments, the shape and the spin-parity of mirror nuclei have been also discussed [8, 9]. Some bare nuclear interactions contain the ISB part [10, 11]. Accordingly, effects of the ISB terms on nuclear properties have also been discussed in _ab initio_ calculations [12, 13]. Nevertheless, only a few calculations of mean-field or density functional theory have considered the ISB terms of nuclear interaction [14-16], while such calculations are needed for a systematical study of effects of the ISB terms on nuclear properties. In order to perform such a systematical study, first, the effective interaction or energy density functional (EDF) for the ISB nuclear interaction should be determined. In this proceeding, we summarize the recent progress on the determination of an EDF of the CSB nuclear interaction, since the CSB interaction dominates for most properties on the nuclear ground state [17]. ## 2 Skyrme-like isospin symmetry breaking interaction To perform the Hartree-Fock calculation, one needs to introduce an effective nuclear interaction or an EDF. 
We use a Skyrme Hartree-Fock calculation [18], and accordingly, we also introduce the Skyrme-like zero-range CSB interaction as follows: \[v_{\rm Sky}^{\rm CSB}\left(\mathbf{r}\right) = \left\{s_{0}\left(1+y_{0}P_{\sigma}\right)\delta\left(\mathbf{r}\right)\right.\] \[\left.+\frac{s_{1}}{2}\left(1+y_{1}P_{\sigma}\right)\left[\mathbf{k} ^{\dagger 2}\delta\left(\mathbf{r}\right)+\delta\left(\mathbf{r}\right)\mathbf{k}^{2} \right]+s_{2}\left(1+y_{2}P_{\sigma}\right)\mathbf{k}^{\dagger}\cdot\delta\left( \mathbf{r}\right)\mathbf{k}\right\}\times\frac{\tau_{z1}+\tau_{z2}}{4},\] where \(P_{\sigma}=\left(1+\mathbf{\sigma}_{1}\cdot\mathbf{\sigma}_{2}\right)/2\) is the spin-exchange operator, \(\mathbf{k}\) is the relative momentum, \(\tau_{zi}\) is the \(z\)-projection of the isospin operator \(\mathbf{\tau}\) for the particle \(i\), and \(s_{i}\) and \(y_{i}\) (\(i=0\), \(1\), and \(2\)) are parameters to be determined. Accordingly, the energy density reads \[\mathcal{E}_{\rm CSB}\left(\rho_{p},\rho_{n}\right) = \frac{s_{0}}{8}\left(1-y_{0}\right)\left(\rho_{n}^{2}-\rho_{p}^{ 2}\right)+\frac{1}{16}\left[s_{1}\left(1-y_{1}\right)+3s_{2}\left(1+y_{2} \right)\right]\left(\rho_{n}t_{n}-\rho_{p}t_{p}\right)\] \[-\frac{1}{64}\left[9s_{1}\left(1+y_{1}\right)-s_{2}\left(1-y_{2} \right)\right]\left(\rho_{n}\,\Delta\,\rho_{n}-\rho_{p}\,\Delta\,\rho_{p}\right)\] \[-\frac{1}{32}\left[s_{1}\left(y_{1}-1\right)+s_{2}\left(y_{2}+1 \right)\right]\left(\mathbf{J}_{n}^{2}-\mathbf{J}_{p}^{2}\right),\] where \(\rho_{\tau}\), \(t_{\tau}\), and \(\mathbf{J}_{\tau}\) are the particle, kinetic, and the spin-orbit current densities for nucleon \(\tau\) (\(\tau=p,\,n\)), respectively. Hereinafter, new parameters \[\tilde{s}_{0}\equiv s_{0}\left(1-y_{0}\right),\quad\tilde{s}_{1}\equiv s_{1} \left(1-y_{1}\right),\quad\tilde{s}_{2}\equiv s_{2}\left(1+y_{2}\right) \tag{3}\] are used for simplicity, instead of the original \(s_{j}\) and \(y_{j}\) (\(j=0,\,1\), and \(2\)). Note that Eq. (2) for the homogeneous systems can be written only by \(\tilde{s}_{j}\) \[\mathcal{E}_{\rm CSB}\left(\rho_{p},\rho_{n}\right)=\frac{\tilde{s}_{0}}{8} \left(\rho_{n}^{2}-\rho_{p}^{2}\right)+\frac{1}{16}\left(\tilde{s}_{1}+3 \tilde{s}_{2}\right)\left(\rho_{n}t_{n}-\rho_{p}t_{p}\right) \tag{4}\] since \(\Delta\,\rho\) and \(\mathbf{J}\) terms vanish. ## 3 Strength of Skyrme-like CSB interactions To perform calculations, the parameters of Eq. (1) must be determined. One way to determine these parameters is a fit to experimental data as done in the isospin symmetric part of most EDFs, hereinafter called "phenomenological" determination. The other way to pin down the strengths is based on _ab initio_ calculations, if available, or even on quantum chromodynamics (QCD). This section is devoted to summarizing the results of these two methods. ### Phenomenological determination The phenomenological determination can be further classified into two: in one, all the parameters, including the isospin symmetric part, are fitted to experimental data altogether, while in the other, only the parameters of the ISB part are fitted independently on top of an effective interaction or EDF already existing. Note that it is discussed in Ref. [19] that these two methods may differ by \(10\,\%\) of the value of \(s_{0}\). An EDF named "SAMi-ISB" [14], which includes only the leading-order ISB term (\(s_{0}\) term), is classified into the former type, where \(y_{0}=-1\) is fixed to select the spin-singlet term. 
The isobaric analog energy of \({}^{208}\)Pb and the energy difference of symmetric nuclear matter calculated with and without ISB terms, as well as the criteria used in the original SAMi EDF, are used to determine the parameters. Other EDFs, "SLy4-ISB", "SkM*-ISB", and "SV\({}_{\rm T}\)-ISB" [15], are classified into the latter type. In these EDFs, the parameters of the isospin symmetric terms remain those of the original form, e.g., SLy4. On top of an existing EDF, the parameter of the leading-order ISB term is fitted to experimental data of mass differences of mirror nuclei, which are sensitive to the Coulomb and CSB interactions [15, 17], with \(y_{0}=0\). The ISB interaction with the next-leading-order ISB (\(s_{0}\)-\(s_{2}\)) terms on top of the SV\({}_{\rm T}\) interaction with \(y_{j}=0\) (\(j=0\), 1, and 2) is also constructed with the same methods [16]. The other method proposed in Ref. [20] uses the isovector density \(\rho_{\rm IV}\left(\mathbf{r}\right)=\rho_{n}\left(\mathbf{r}\right)-\rho_{p}\left(\mathbf{r}\right)\), where \(\rho_{n}\) and \(\rho_{p}\) are, respectively, the neutron and proton densities, of \({}^{40}\)Ca. It was found in Ref. [20] that the peak height of \(\rho_{\rm IV}\left(r\right)\) is proportional to the CSB strength \(s_{0}\) if only the \(s_{0}\)-term is considered. On top of the isospin-symmetric part of the SAMi-ISB interaction, the CSB strength is re-determined. The strengths determined by these methods are listed in Table 1. It can be found that the strength \(\tilde{s}_{0}\) ranges from about \(-20\) to \(-50\,{\rm MeV\,fm}^{3}\) if one limits oneself to the leading-order interaction. However, there is a factor-of-two difference between \(\tilde{s}_{0}\) of the SAMi-ISB EDF and those of the others. If the higher-order (\(\tilde{s}_{1}\) and \(\tilde{s}_{2}\)) terms are included, even the sign of \(\tilde{s}_{0}\) is opposite from that determined without the \(\tilde{s}_{1}\) or \(\tilde{s}_{2}\) term. This point will be discussed in detail later. ### Theoretical determination We have proposed that both the mass difference of mirror nuclei \(\Delta E_{\rm tot}\) and the neutron-skin thickness \(\Delta R_{np}\) depend linearly on \(s_{0}\), in a way that is almost universal among the isospin symmetric parts of Skyrme interactions, if only the leading-order CSB interaction \(s_{0}\) is considered with a fixed value of \(y_{0}\) [19]. Here, we take \(\Delta E_{\rm tot}\) as an example. The \(\Delta E_{\rm tot}\) with arbitrary \(s_{0}\) can be parametrized as \[\Delta E_{\rm tot}\left(s_{0}\right)=as_{0}+\Delta E_{\rm tot}^{\rm w/o\ CSB}, \tag{5}\] where \(\Delta E_{\rm tot}^{\rm w/o\ CSB}\) is the mass difference calculated without the CSB interaction. The value of \(a\) is almost universal among Skyrme EDFs, and hence the averaged value \(\overline{a}\) is almost equal to each \(a\). Once _ab initio_ calculations provide \(\Delta E_{\rm tot}\) with and without the CSB interaction, \(s_{0}\) can be determined by \[s_{0}=\frac{\Delta E_{\rm tot}^{\rm w/\ CSB}-\Delta E_{\rm tot}^{\rm w/o\ CSB}}{\overline{a}}.
\tag{6}\] Reference [13] provides \(\Delta E_{\rm tot}\) of \({}^{48}\)Ca and \({}^{48}\)Ni, with and without the CSB contribution, calculated by the coupled-cluster (CC) approach with the N\({}^{2}\)LO\({}_{\rm GO}\) chiral interaction and without the Coulomb interaction; the value is \(\Delta E_{\rm tot}=0.72\pm 1.10\) MeV for the N\({}^{2}\)LO\({}_{\rm GO}\) (394) interaction and \(\Delta E_{\rm tot}=0.86\pm 4.85\) MeV for the N\({}^{2}\)LO\({}_{\rm GO}\) (450) one. Note that the mass difference \(\Delta E_{\rm tot}\) without the Coulomb interaction originates almost solely from the CSB interaction, since the CIB interaction rarely contributes to \(\Delta E_{\rm tot}\) [17]. Therefore, \(\Delta E_{\rm tot}^{\rm w/o\ CSB}\) is zero and the aforementioned value can be used for \(\Delta E_{\rm tot}^{\rm w/\ CSB}\). Since the averaged slope is \(-0.3399\pm 0.0046\) fm\({}^{-3}\) [19], the obtained values of \(s_{0}\) are \(-2.1\pm 3.2\) MeV fm\({}^{3}\) and \(-2.5\pm 14.3\) MeV fm\({}^{3}\), respectively, for the N\({}^{2}\)LO\({}_{\rm GO}\) (394) and the N\({}^{2}\)LO\({}_{\rm GO}\) (450) interactions. The mass difference \(\Delta E_{\rm tot}\) of \({}^{10}\)Be-\({}^{10}\)C calculated by the Green's function Monte Carlo (GFMC) method with the Argonne v18 (AV18) [10] and Urbana X (UX) [22] interactions is also available [23]. Using the contributions originating from the mass difference of a proton and a neutron and from the CSB interaction (the 18th term of the AV18 interaction), the total CSB contributions to \(E_{\rm tot}\) for \({}^{10}\)Be and \({}^{10}\)C are estimated as \(-0.0932(6)\) and \(0.0918(6)\) MeV, respectively. Since the averaged slope is \(-0.05769\pm 0.00147\) fm\({}^{-3}\), the obtained \(s_{0}\) is \(-3.207\pm 0.086\) MeV fm\({}^{3}\). Another method to pin down the CSB strength is based on the sum rule in QCD [21]. The QCD sum rules relate the proton-neutron mass difference in symmetric nuclear matter to the in-medium quark condensate associated with the partial restoration of chiral symmetry. Using this proton-neutron mass difference, the mass difference of mirror nuclei can be calculated with the local density approximation, whose density dependence is, indeed, the same as Eq. (4). Hence, not only \(\tilde{s}_{0}\) but also \(\tilde{s}_{1}+3\tilde{s}_{2}\) can be determined. The strengths obtained by these methods are summarized in Table 1.
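As a quick numerical check of Eq. (6) and of the values quoted above, the short script below divides the quoted ab initio mass differences by the averaged slopes and propagates the uncertainties by adding relative errors in quadrature. It reproduces the central values; the error-propagation convention assumed here may differ slightly from the one used in the source, so the uncertainties agree only approximately.

```python
import math

# Quoted ab initio inputs (MeV) and averaged slopes (fm^-3) from the text:
# (Delta E_tot, its uncertainty, averaged slope a-bar, its uncertainty).
cases = {
    "N2LO_GO (394) & CC": (0.72, 1.10, -0.3399, 0.0046),
    "N2LO_GO (450) & CC": (0.86, 4.85, -0.3399, 0.0046),
    # GFMC: Delta E_tot is the difference of the CSB contributions to 10C and 10Be.
    "AV18-UX & GFMC": (0.0918 - (-0.0932), math.hypot(0.0006, 0.0006), -0.05769, 0.00147),
}

for name, (dE, dE_err, slope, slope_err) in cases.items():
    s0 = dE / slope                                   # Eq. (6) with Delta E^(w/o CSB) = 0
    s0_err = abs(s0) * math.hypot(dE_err / dE, slope_err / abs(slope))
    print(f"{name}: s0 = {s0:+.2f} +/- {s0_err:.2f} MeV fm^3")
    # -> approx. -2.1 +/- 3.2, -2.5 +/- 14.3, and -3.21 +/- 0.08 MeV fm^3
```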
The obtained \begin{table} \begin{tabular}{l l c c c} \hline Class & Method or Name & \(\tilde{s}_{0}\) (MeV fm\({}^{3}\)) & \(\tilde{s}_{1}\) (MeV fm\({}^{5}\)) & \(\tilde{s}_{2}\) (MeV fm\({}^{5}\)) \\ \hline Pheno & SAMi-ISB & \(-52.6\pm 1.4\) & — & — \\ Pheno & SLy4-ISB (leading order) & \(-22.4\pm 4.4\) & — & — \\ Pheno & SkM*-ISB (leading order) & \(-22.4\pm 5.6\) & — & — \\ Pheno & SV\({}_{\rm T}\)-ISB (leading order) & \(-29.6\pm 7.6\) & — & — \\ Pheno & SV\({}_{\rm T}\)-ISB (next-leading order) & \(+44\pm 8\) & \(-56\pm 16\) & \(-31.2\pm 3.2\) \\ Pheno & Estimation by isovector density & \(-17.6\pm 32.0\) & — & — \\ \hline Theor & \(\Delta E_{\rm tot}\) (N\({}^{2}\)LO\({}_{\rm GO}\) (394) \& CC) & \(-4.2\pm 6.5\) & — & — \\ Theor & \(\Delta E_{\rm tot}\) (N\({}^{2}\)LO\({}_{\rm GO}\) (450) \& CC) & \(-5.1\pm 28.5\) & — & — \\ Theor & \(\Delta E_{\rm tot}\) (AV18-UX \& GFMC) & \(-6.413\pm 0.173\) & — & — \\ Theor & QCD sum rule (Case I) & \(-15.5^{+8.8}_{-12.5}\) & \(+0.52^{+0.42}_{-0.29}\) & — \\ Theor & QCD sum rule (Case II) & \(-15.5^{+8.8}_{-12.5}\) & — & \(+0.18^{+0.14}_{-0.10}\) \\ \hline \end{tabular} \end{table} Table 1: Strengths of the various Skyrme-like CSB interactions. “Pheno” and “Theor”, respectively, refer to results based on phenomenological fits and on theoretical evaluation. The values shown here except ab initio determinations are taken from Refs. [14, 15, 20, 21]. \(\tilde{s}_{0}\) based on _ab initio_ methods are around \(-5\,\mathrm{MeV}\,\mathrm{fm}^{3}\), which is remarkably smaller than the phenomenological determination. Especially, \(\tilde{s}_{0}\) of the SLy4-ISB, SkM*-ISB, and SV\({}_{\mathrm{T}}\)-ISB interactions are also determined by using the mass difference, which is the same as _ab initio_ determination. Even though the same quantities are used to determine \(\tilde{s}_{0}\), the strength determined by _ab initio_ calculation is 1/5 of the phenomenological value. The possible reasons for this inconsistency may be the following: experimental data, in principle, include many correlations beyond mean-field that are hard to capture in current EDFs like those considered in this study; the \({}^{10}\)Be-\({}^{10}\)C pair may be too light to compare with the mean-field or DFT calculation or they are deformed; the chiral interaction may not be accurate enough or converged well [24] to discuss the small contribution such as the ISB terms. As for the next-leading-order CSB interaction, \(\tilde{s}_{0}\) obtained by the QCD sum rule is not so significantly different from the phenomenological values, while \(\tilde{s}_{1}\) and \(\tilde{s}_{2}\) are quite small. This is remarkably in contrast to the next-leading-order SV\({}_{\mathrm{T}}\)-ISB interaction. As seen in Eq. (4), the sign of \(\tilde{s}_{0}\)-term and those of \(\tilde{s}_{1}\)- or \(\tilde{s}_{2}\)-term are opposite; accordingly, their effects on the energy density \(\mathcal{E}_{\mathrm{CSB}}\) are opposite to each other. Therefore, it may be difficult to pin down all \(\tilde{s}_{0}\), \(\tilde{s}_{1}\), and \(\tilde{s}_{2}\) at the same time from experimental data. ## 4 Summary In this proceeding, we discussed the strength of the charge symmetry breaking term of an effective nuclear interaction or an energy density functional. There is an inconsistency between the strength determined phenomenologically and theoretically. 
It is discussed that the isospin symmetry breaking terms of nuclear interaction themselves are small, but they sometimes give non-negligible or even significant contributions [17, 25]. Therefore, it is indispensable to solve such an inconsistency and to pin down the strengths properly. ###### Acknowledgements. The authors thank Andreas Ekstrom, Stefano Gandolfi, and Robert B. Wiringa for fruitful discussions. T.N. and H.L. would like to thank the RIKEN iTHEMS program and the RIKEN Pioneering Project: Evolution of Matter in the Universe. T.N. acknowledges the RIKEN Special Postdoctoral Researcher Program. G.C. gratefully acknowledges the support and hospitality of YITP, Kyoto University, where part of this work has been carried out. This work is partly supported by the JSPS Grant-in-Aid under Grant Nos. 18K13549, 19K03858, 20H05648, 22K20372, 23H04526, 23H01845, and 23K03426. The numerical calculations were performed on cluster computers at the RIKEN iTHEMS program.
We summarize the recent progress on the determination of the charge symmetry breaking term of nuclear energy density functionals. The strength of this term determined theoretically is remarkably smaller than that determined phenomenologically, which remains an open question.
2310.02268
Dual Dictionaries in Linear Programming
In order to use the Dual Simplex Method, one needs to prove a certain bijection between the dictionaries associated with the primal problem and those associated with its dual. We give a short conceptual proof of why this bijection exists.
Patrick T. Perkins, Xiang Gao
2023-09-19T21:51:59
http://arxiv.org/abs/2310.02268v1
# Dual dictionaries in linear programming ###### Abstract. In order to use the Dual Simplex Method, one needs to prove a certain bijection between the dictionaries associated with the primal problem and those associated with its dual. We give a short conceptual proof of why this bijection exists. ## 1. Introduction Chvatal [1] introduces the notion of a _dictionary_ associated to a Linear Programming problem (LP). In order to use the Dual Simplex Method, one needs to prove a certain bijection between the dictionaries associated with the primal problem and those associated with its dual. Chvatal leaves the proof as an exercise, involving a long computation. Vanderbei [2] gives a short and elegant proof. Our contribution is a short proof that, we feel, gives a clear conceptual reason for why this beautiful bijection exists. First, we set up some notation we will use throughout the paper. Consider a general LP problem \[\begin{split}\max& z=\mathbf{c}^{T}\mathbf{x}\\ \text{s.t.}& A_{0}\mathbf{x}\leq\mathbf{b}\\ &\mathbf{x}\geq 0\end{split} \tag{1.1}\] The dual problem is \[\begin{split}\max&-w=-\mathbf{b}^{T}\mathbf{y}\\ \text{s.t.}&-A_{0}^{T}\mathbf{y}\leq-\mathbf{c}\\ &\mathbf{y}\geq 0\end{split} \tag{1.2}\] Here \(\mathbf{x}\in\mathbb{R}^{n}\), \(\mathbf{y}\in\mathbb{R}^{m}\) and \(A_{0}\) is an \(m\times n\) matrix. But we immediately introduce slack variables and, for the rest of the paper, take \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{m+n}\). Write \(A=[\,A_{0}\,I\,]\) for the larger matrix with an \(m\times m\) identity matrix appended to \(A_{0}\). As usual, \(x_{1},\ldots,x_{n}\) are the decision variables for the primal problem and \(x_{n+1},\ldots,x_{m+n}\) are its slack variables. But, following Chvatal, we use \(y_{n+1},\ldots,y_{n+m}\) as the decision variables for the dual problem and \(y_{1},\ldots,y_{n}\) for its slacks. This makes the bijection easier to see. **Example 1**.: If the initial dictionary for a primal problem is \[x_{4} =18-4x_{1}-2x_{2}+2x_{3}\] \[x_{5} =-3+x_{1}+x_{2}+2x_{3}\] \[z =8x_{1}+11x_{2}-10x_{3}\]
Now consider the row space of \(R\). Let \([\mathbf{u}^{T},u_{0}]^{T}\in\mathbb{R}^{m+1}\). Every vector in the row space is of the form \[[\mathbf{u}^{T},u_{0}]\begin{bmatrix}\mathbf{0}^{T}&A_{0}&I&-\mathbf{b}\\ 1&-\mathbf{c}^{T}&\mathbf{0}&0\end{bmatrix}=\begin{bmatrix}u_{0}&\mathbf{u}^{T}A_{0}-u_{0}\mathbf{c}^{T}&\mathbf{u}^{T}&-\mathbf{u}^{T}\mathbf{b}\end{bmatrix} \tag{2.2}\] Let \(\overline{\mathbf{y}}=[y_{0},y_{1},\ldots,y_{m+n},y_{m+n+1}]\in\mathbb{R}^{m+n+2}\). Then we can reformulate the dual LP as \[\begin{split}\max&y_{m+n+1}\\ \text{s.t.}&y_{1},\ldots,y_{m+n}\geq 0\\ &y_{0}=1\\ &\overline{\mathbf{y}}\in\text{row space}(R)\end{split} \tag{2.3}\] Thus \(\mathbb{R}^{m+n+2}\) splits into two orthogonal subspaces, one associated with the primal LP and one with the dual. Note that this naturally makes \(y_{n+1},\ldots,y_{m+n}\) the decision variables for the dual LP. ## 3. The Proof Let \(\mathbf{y}=[y_{1},\ldots,y_{m+n}]^{T}\in\mathbb{R}^{m+n}\). Then \([\mathbf{y},w]^{T}\) is a solution to (1.4) if and only if \[[1,\mathbf{y}_{N}^{T},\mathbf{y}_{B}^{T},w]\in\text{row space}\left(\begin{bmatrix}\mathbf{0}^{T}&Q&I&-\mathbf{p}\\ 1&-\mathbf{q}^{T}&\mathbf{0}&-z^{*}\end{bmatrix}\right)\] Now we rewrite (1.3) using the notation from [1] page 100. Extend \(\mathbf{c}\) to \(\mathbb{R}^{m+n}\) by adding \(m\) zeroes at the end. Let \(A_{B}\) be the submatrix of \(A=[A_{0}\,I]\) with columns indexed by \(B\), and similarly for \(A_{N}\). Then \(A_{B}\) is non-singular and (1.3) is of the form \[\begin{split}\mathbf{x}_{B}&=A_{B}^{-1}\mathbf{b}-A_{B}^{-1}A_{N}\mathbf{x}_{N}\\ z&=\mathbf{c}_{B}^{T}A_{B}^{-1}\mathbf{b}+(\mathbf{c}_{N}^{T}-\mathbf{c}_{B}^{T}A_{B}^{-1}A_{N})\mathbf{x}_{N}\end{split} \tag{3.1}\] It follows that \([\mathbf{y},w]\) is a solution to (1.4) if and only if \[\begin{split}[1,\mathbf{y}_{N}^{T},\mathbf{y}_{B}^{T},w]&\in\text{row space}\left(\begin{bmatrix}\mathbf{0}^{T}&A_{B}^{-1}A_{N}&I&-A_{B}^{-1}\mathbf{b}\\ 1&\mathbf{c}_{B}^{T}A_{B}^{-1}A_{N}-\mathbf{c}_{N}^{T}&\mathbf{0}&-\mathbf{c}_{B}^{T}A_{B}^{-1}\mathbf{b}\end{bmatrix}\right)\\ &=\text{row space}\left(\begin{bmatrix}A_{B}^{-1}&\mathbf{0}^{T}\\ \mathbf{c}_{B}^{T}A_{B}^{-1}&1\end{bmatrix}\cdot\begin{bmatrix}\mathbf{0}^{T}&A_{N}&A_{B}&-\mathbf{b}\\ 1&-\mathbf{c}_{N}^{T}&-\mathbf{c}_{B}^{T}&0\end{bmatrix}\right)\\ &=\text{row space}\left(\begin{bmatrix}\mathbf{0}^{T}&A_{N}&A_{B}&-\mathbf{b}\\ 1&-\mathbf{c}_{N}^{T}&-\mathbf{c}_{B}^{T}&0\end{bmatrix}\right)\end{split}\] because \(\begin{bmatrix}A_{B}^{-1}&\mathbf{0}^{T}\\ \mathbf{c}_{B}^{T}A_{B}^{-1}&1\end{bmatrix}\) is nonsingular. But this is equivalent to \([1,\mathbf{y}^{T},w]\in\text{row space}(R)\), which is what we wished to prove.
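As a numerical illustration of the result just proved (a sketch we add here, not part of the original paper; the helper function and the choice of basis are ours), the following NumPy snippet builds the primal and dual dictionaries of Example 1 for a complementary pair of bases and checks that one coefficient table is the negative transpose of the other:

```python
import numpy as np

# Data of Example 1: max z = c^T x  s.t.  A0 x <= b,  x >= 0
A0 = np.array([[4., 2., -2.],
               [-1., -1., -2.]])
b = np.array([18., -3.])
c = np.array([8., 11., -10.])
m, n = A0.shape

A = np.hstack([A0, np.eye(m)])            # primal [A0 | I], variables x_1..x_{m+n}
cf = np.concatenate([c, np.zeros(m)])     # objective, extended by zeros for the slacks

# Dual in the paper's ordering: slacks y_1..y_n first, decision variables y_{n+1}..y_{m+n} last
D = np.hstack([np.eye(n), -A0.T])         # constraints  y_slack - A0^T y_dec = -c
df = np.concatenate([np.zeros(n), -b])    # objective  -w = -b^T y_dec

def dictionary(M, rhs, obj, B, N):
    """Dictionary table: rows = basic variables (in the order of B), columns = nonbasic
    variables (in the order of N) plus a constant column; the last row is the objective
    row.  Signs are as written in the dictionary: v_B = const + body * v_N and
    objective = corner + row * v_N."""
    MBinv = np.linalg.inv(M[:, B])
    body = -MBinv @ M[:, N]
    const = MBinv @ rhs
    row = obj[N] - obj[B] @ (MBinv @ M[:, N])
    corner = obj[B] @ const
    top = np.hstack([body, const[:, None]])
    bottom = np.append(row, corner)[None, :]
    return np.vstack([top, bottom])

B = [0, 3]                                   # primal basis {x_1, x_4} (0-indexed)
N = [j for j in range(m + n) if j not in B]  # primal nonbasic {x_2, x_3, x_5}

P = dictionary(A, b, cf, B, N)               # primal dictionary for basis B
Q = dictionary(D, -c, df, N, B)              # dual dictionary for the complementary basis N

print(np.allclose(Q, -P.T))                  # True: the dual table is the negative transpose
```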
To use the Dual Simplex Method, one needs to relate the dictionaries associated with the primal problem to those associated with its dual and to prove a certain bijection between them. We give a proof showing why this bijection exists.
2309.05464
On the absence of the electrostriction force in dilute clouds of cold atoms
The momentum of light in a medium and the mechanisms of momentum transfer between light and dielectrics have long been the topic of controversies and confusion. We discuss here the problem of momentum transfers that follow the refraction of light by dilute, inhomogeneous ensembles of ultra-cold atoms. We show experimentally and theoretically that the refraction of light rays by a dilute gas does not entail momentum transfers to first order in the light-atom coupling coefficient, in contradiction with the work reported in Matzliah et al. Phys. Rev. Lett. 119, 189902 (2017).
Arnaud Courvoisier, Nir Davidson
2023-09-11T14:04:19
http://arxiv.org/abs/2309.05464v1
# On the absence of the electrostriction force in dilute clouds of cold atoms ###### Abstract The momentum of light in a medium and the mechanisms of momentum transfer between light and dielectrics have long been the topic of controversies and confusion [1; 2; 3; 4]. We discuss here the problem of momentum transfers that follow the refraction of light by dilute, inhomogeneous ensembles of ultra-cold atoms. We show experimentally and theoretically that the refraction of light rays by a dilute gas does not entail momentum transfers to first order in the light-atom coupling coefficient, in contradiction with the work reported in Matzliah et al. Phys. Rev. Lett. 119, 189902 (2017). + Footnote †: Corresponding author : nir.davidson@weizmann.ac.il ## I Introduction The fields of optomechanics and collective light scattering by ultra-cold atomic ensembles are active and prolific, as they have seen numerous recent experimental and theoretical developments, on effects such as superradiance [5; 6; 7; 8], subradiance [9] and cooperative shifts [10; 11; 12]. Most of the recent experimental efforts have been focused on the understanding of the collective response of an ensemble to a probe beam [13; 14], in the regime of dense atomic samples, in which dipole-dipole interactions are predominant. The work presented here, however, focuses on mechanisms of momentum transfers between light and matter in a regime where ensembles are inhomogeneous and where dipole-dipole interactions are negligible. We report on new developments concerning electrostriction, a force reported by _Matzliah et. al_ in [15] and thought to emerge when an atomic ensemble acts as a lens and refracts an incoming plane wave. An intuitive momentum conservation picture leads to the derivation of a force acting transversely to the light's propagation direction, and depending linearly on the atomic susceptibility. Our theoretical and experimental results point towards the non-existence of this force in the form that was previously thought-of. We first present a theoretical derivation that predicts the absence of a transverse force resulting from the refraction of a far-off-resonance plane-wave by a dilute atomic ensemble. We then describe a series of well-controlled and well-calibrated experiments aimed at proving the absence of such a force. ## II Theoretical considerations Well-established theories on the force density experienced by dielectrics in the presence of electromagnetic radiation predict that, in the limit of non-magnetic dielectric materials where pressure gradients are insignificant, the force density averaged over an optical cycle is given by the following formula [1; 2; 3]: \[\mathbf{f}=-\frac{\epsilon_{0}}{2}\mathbf{E}^{2}\mathbf{\nabla}\epsilon\ +\frac{ \epsilon_{0}}{2}\mathbf{\nabla}\left(\frac{\partial\epsilon}{\partial n^{*}}n^{*} \mathbf{E}^{2}\right), \tag{1}\] where \(\epsilon_{0}\) is the vacuum permittivity, \(\epsilon\) is the relative permittivity of the material, \(n^{*}\) is the ensemble density and \(\mathbf{E}\) is the electric field amplitude. The first term in the force density can be associated to momentum conservation arising from the deviation of light rays by the ensemble, while the second term is the gradient of the interaction energy between the dielectric and the field. For a dilute gas, \(\epsilon=1+n^{*}\alpha/\epsilon_{0}\), where \(\alpha\) is the single-atom polarizability. 
The force density can then simply be re-written as \[\mathbf{f} =-\alpha\left(\nabla n^{*}\right)\frac{\mathbf{E}^{2}}{2}+\mathbf{ \nabla}\left(\alpha n^{*}\frac{\mathbf{E}^{2}}{2}\right) \tag{2}\] \[=\frac{n^{*}\alpha}{2}\mathbf{\nabla}\mathbf{E}^{2}. \tag{3}\] The above expression accounts solely for the well-known dipole force, therefore, in the absence of an intensity gradient, one should not measure any back action on the cloud when light propagates through it, to first order in the polarizability. The absence of such a force is due to exact cancellation between the first and second terms of equation (1) and confirmed by theories that treat this problem in a microscopic framework, based on a dipole-dipole interaction approach [16; 17; 18; 19]. Equation (1) of [15] derived the optomechanical strain resulting from back action to refraction of an incoming plane wave by a dilute atomic ensemble by calculating the change of the electromagnetic momentum in the eikonal approximation. It captured correctly the transverse parts of the first term of equation (1) but omitted its longitudinal component. Indeed, we note that adding \(\Delta p=\hbar k\Delta n\), the Minkowsky correction to the electromagnetic momentum in [15] naturally yields this missing longitudinal term and therefore turns the prediction of [15] to be isotropic. The second term of equation (1) that does not emerge from momentum conservation considerations was completely neglected in [15], leading to a first order force in the single atom polarizability even for plane wave illumination where the dipole force is absent. ## III Experiment We now report on experimental results showing the absence of a back-action following the refraction of light by an atomic ensemble, in the dilute and far-off-resonance limits. The presence of such a force would result in a change in the ensemble's momentum distribution after the passing of a plane wave pulse through the cloud. Therefore, our experimental sequence consists in producing a thermal ensemble at a few hundreds of nK by means of evaporative cooling in an optical dipole trap, releasing the trap, immediately pulsing a plane-wave on the ensemble for \(\tau=400\mu\)s, and measuring the cloud's momentum distribution via absorption imaging after 20ms of time-of-flight. Our thermal \({}^{87}\)Rb clouds contain approximately \(5\times 10^{5}\) atoms at a temperature of 400nK, and we measure the trap oscillation frequency at the end of evaporation to be \(\omega\simeq 2\pi\times 90\)Hz in all directions. Note that the trap oscillation period is much larger than \(\tau\) such that we can neglect thermal motion during the plane-wave pulse. The optical layout used for electrostriction measurements is presented in figure 1. A beam of waist 1.05mm originates from Thorlabs' F220APC-780 fiber collimator and we check its intensity profile right before the cell using a beam profiler. The beam diameter has been chosen such that the intensity profile impinging on the atoms is quasi-uniform, therefore suppressing the traditional dipole force by more than three orders of magnitude compared to an hypothetical electrostriction force. After entering the vacuum chamber, the beam passes through the ensemble and then propagates along the long axis of the vacuum chamber, from the experiment cell to the 2D MOT cell. This is to ensure that reflections of the beam at glass interfaces will not reach the atoms and cause parasitic dipole forces. 
The power in the electrostriction beam ranges from 0 to 300mW, and we can continuously vary its detuning with respect to the \(F=1\) to \(F^{\prime}=2\) transition of the \(D_{2}\) line of \({}^{87}\)Rb, in the \(-100\)GHz to \(+100\)GHz range. We added the option of retro-reflecting the beam onto itself in order to create a 1D optical lattice potential. This allowed us to perform precise on-atom light intensity measurements by diffracting the ensemble on the lattice and measuring the oscillations of Kapitza-Dirac orders [20]. One can derive a relation between the light intensity, its detuning and the oscillation frequency by numerically solving Mathieu's equation for a 1D optical lattice [21]. By measuring this oscillation frequency and measuring the laser detuning with a wave-meter, we directly measure the maximum on-atom intensity to be \(4.5\times 10^{4}\)W.m\({}^{-2}\). ### Absence of a _transverse_ electrostriction force In [15], _Matzliah et. al_ reported on a transverse electrostriction force with respect to the incoming light. They measured variations of the ensemble's aspect ratio after time-of-flight, \(AR\), as a function of various experimental parameters such as the laser's detuning and intensity. Note that in the case of a transverse electrostriction force, \(AR\) is such that [15] \[AR^{2}-1=\left(\frac{\sigma_{p}^{es}}{\sigma_{p}^{th}}\right)^{2}, \tag{4}\] Figure 1: Optical layout used for the electrostriction experiment. The shutter allows us to retro-reflect or not the electrostriction beam onto itself. We monitor the power in the incoming and reflected beams by using a fast-photodiode and a wedged window placed at the fiber’s output. The wedged window is used in order to insure that there are no interference fringes on the atoms themselves. Figure 2: Comparison between our experiment (circular markers) and the theory of [15] (dashed lines) for detunings between \(\pm 50GHz\) and \(\pm 100GHz\), showing the absence of electrostriction as described in [15]. The aspect ratio, \(AR\), was measured after 20ms of time-of-flight. Each data point was obtained by averaging 30 experimental runs and the data was collected in a random order. Error-bars correspond to the standard deviation of the data-spread. The maximum intensity of the incoming beam was measured to be \(4.5\times 10^{4}\)W.m\({}^{-2}\) by retro-reflecting it and measuring the oscillations of Kapitza-Dirac orders [20]. where \(\sigma_{p}^{es}\) is the momentum distribution width of the electrostriction impulse, and \(\sigma_{p}^{th}\) is the initial thermal momentum width. Therefore, in the case of a transverse electrostriction force, measuring the cloud's aspect ratio gives a direct measurement of the force. Reproducing the experiment from [15] in a different experimental setup and in a well-controlled setting, we measured no effect from the electrostriction pulse on the value of \(AR^{2}-1\), in agreement with equation (1) taken in the appropriate limits. In figure 2, we show measurements of \(AR^{2}-1\) as a function of the electrostriction beam power and its detuning, \(\Delta\), between \(\pm 50\)GHz and \(\pm 100\)GHz with respect to the \(F=1\) to \(F^{\prime}=2\) transition of the \(D_{2}\) line of \({}^{87}\)Rb. At detunings of \(\pm 50\)GHz and at maximal intensity, the theory prediction from [15] is larger than our measurement by more than a thousand standard deviations. 
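To put equation (4) in concrete terms, the short calculation below (an illustrative sketch added here, not taken from the original; the aspect-ratio value used is a hypothetical example) evaluates the thermal momentum width of equation (5) for the quoted cloud parameters (\({}^{87}\)Rb at 400nK) and converts a measured aspect ratio into the corresponding electrostriction momentum width, expressed in units of the 780nm photon recoil:

```python
import numpy as np

kB = 1.380649e-23        # Boltzmann constant, J/K
m_Rb87 = 1.4432e-25      # mass of 87Rb, kg
T = 400e-9               # cloud temperature quoted in the text, K

sigma_p_th = np.sqrt(kB * T * m_Rb87)     # thermal momentum width, Eq. (5)

# photon recoil at 780 nm (D2 line of 87Rb), used as a momentum scale
hbar = 1.054571817e-34
k780 = 2 * np.pi / 780e-9
print(sigma_p_th / (hbar * k780))         # ~1 photon recoil

# Eq. (4): a measured aspect ratio AR maps to an electrostriction momentum width
AR = 1.05                                 # hypothetical example value, not a measurement
sigma_p_es = sigma_p_th * np.sqrt(AR**2 - 1)
print(sigma_p_es / (hbar * k780))
```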
When the incoming beam was illuminated through a side window of our 3D MOT cell we measured significant deviations of the aspect ratios from unity, similar to the measurements of [15]. By carefully monitoring the spatial profile of the beam, we identified fringes across the beam due to weak undesired reflections from the cell walls that are unavoidable in the case of side illumination. We thus asses that the aspect ratios measured in [15] resulted from a residual dipole force due to similar imperfections on the beam used. ### Absence of an _isotropic_ electrostriction force In addition to the absence of a transverse force, we show experimentally that, in the case of plane wave illumination and in the far-off-resonance limit, the two terms in equation (1) compensate one-another. To that end, we systematically measured the width of the ensemble's momentum distribution with and without pulsing the electrostriction beam. Initially, our thermal ensembles exhibit Gaussian position and momentum distributions with \[\sigma_{r}^{th}=\sqrt{\frac{k_{B}T}{m\omega^{2}}}\text{ and }\sigma_{p}^{th}= \sqrt{k_{B}Tm}. \tag{5}\] We define \(\beta\) as the ratio between the Gaussian width of the cloud after time-of-flight in the pulsed and unpulsed cases. Upon acting on the ensemble, \(\mathbf{f}\) changes the momentum distribution of the cloud to \[\sigma_{p}^{pulsed}=\sqrt{\sigma_{p}^{th^{2}}+\sigma_{p}^{\mathbf{f}^{2}}}. \tag{6}\] Similarly to the derivation shown in [15], after a time-of-flight long with respect to \(2\pi/\omega\), the ratio \(\beta\) is such that \[\beta^{2}-1=\left(\frac{\sigma_{p}^{\mathbf{f}}}{\sigma_{p}^{th}}\right)^{2}. \tag{7}\] Therefore, measuring \(\beta^{2}-1\) yields a direct measurement of the action of \(\mathbf{f}\) on the ensemble. In figure 3, we show the evolution of \(\beta^{2}-1\) as a function of the electrostriction beam power for various detunings as well as the theoretical expectations in the case where only the first term of equation (1) would be present _i.e._ if we were to follow the intuitive picture of momentum conservation that follows the refraction of light rays by the cloud. As in the transverse case, at maximal intensity and at a detuning of \(-30\)GHz, the measured value of \(\beta^{2}-1\) and the one predicted by the incomplete theory differ by more than a thousand standard deviations. We attribute the slight deviation from zero to incoherent scattering by the electrostriction beam. We can conclude on the absence of a mechanical back-action on a dilute atomic ensemble through which a plane wave propagates, to first order in the atomic susceptibility. ## IV Conclusion It is clear from the theoretical picture we presented that one should not expect a mechanical back action to first order in \(\alpha\) to the refraction of light by a dilute atomic Figure 3: Comparison between our experiment (circular markers) and an incomplete theory where only the first term of equation (1) would be present (dashed lines), for detunings between \(-30GHz\) and \(-100GHz\). The \(\beta\) parameter was measured after 20ms of time-of-flight. Each data point was obtained by averaging 30 experimental runs and the data was collected in a random order. Error-bars correspond to the standard deviation of the data-spread. The maximum intensity of the incoming beam was measured to be \(4.5\times 10^{4}\)W.m\({}^{-2}\) by retro-reflecting it and measuring the oscillations of Kapitza-Dirac orders [20]. ensemble. 
The deflection of light rays imparts a momentum kick to a volume element of the cloud which, alone, would lead to a decrease of the ensemble's density in the case of a red-detuned beam. However, locally, the interaction energy between the medium and the electric field would increase if the density was to get lower. There is therefore a force that prevents this from happening. In the dilute limit _i.e._ when the susceptibility is proportional to the density, these two forces compensate one another exactly. Outside of the dilute regime, the interaction energy and the momentum conservation condition have a different dependence on the atomic density, such that the two forces do not compensate one another, and one can derive forces proportional to \(\alpha^{2}\)[14; 19]. We have described above our recent experimental efforts where careful beam calibration and characterization reveal the absence of electrostriction as reported in [15]. We believe that the results previously reported were the result of a parasitic dipole force due to imperfections of the illuminating beam and do not reflect a new physical mechanism. These experimental results are in agreement with well-established theories that predicts exact cancellation between the two terms of equation (1) and hence the absence of an electrostriction force in the dilute regime. ## Acknowledgements The authors acknowledge fruitful discussions with Ulf Leonhardt and Ephraim Shahmoon and thank Amruta Gadge for technical assistance.
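As a closing symbolic check of the cancellation discussed above (a sketch we add for illustration, written in 1D for simplicity; it is not part of the original paper), one can verify with SymPy that substituting the dilute-gas permittivity \(\epsilon=1+n^{*}\alpha/\epsilon_{0}\) into equation (1) leaves only the dipole term of equation (3):

```python
import sympy as sp

x, alpha, eps0 = sp.symbols('x alpha epsilon_0', positive=True)
n = sp.Function('n')(x)        # atomic density profile n*(x)
E2 = sp.Function('E2')(x)      # squared field amplitude E^2(x)

eps = 1 + n * alpha / eps0     # dilute-gas relative permittivity

# Equation (1) written in 1D:
# f = -(eps0/2) E^2 d(eps)/dx + (eps0/2) d/dx( (d eps / d n*) n* E^2 )
f = -sp.Rational(1, 2) * eps0 * E2 * sp.diff(eps, x) \
    + sp.Rational(1, 2) * eps0 * sp.diff(sp.diff(eps, n) * n * E2, x)

dipole = sp.Rational(1, 2) * n * alpha * sp.diff(E2, x)   # (n* alpha / 2) d(E^2)/dx

print(sp.simplify(f - dipole))   # 0 -> only the dipole force survives
```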
The momentum of light in a medium and the mechanisms of momentum transfer between light and dielectrics have long been a subject of controversy. Here we discuss the problem of the momentum transfers that occur when light is refracted by dilute, inhomogeneous ensembles of ultra-cold atoms. We show experimentally and theoretically that the refraction of light by a dilute gas does not entail momentum transfers to first order in the light-atom coupling coefficient, in contradiction with the work reported in Matzliah et al. Phys. Rev. Lett. 119, 189902 (2017).
2309.12118
Vulnerability of 3D Face Recognition Systems to Morphing Attacks
In recent years face recognition systems have been brought to the mainstream due to development in hardware and software. Consistent efforts are being made to make them better and more secure. This has also brought developments in 3D face recognition systems at a rapid pace. These 3DFR systems are expected to overcome certain vulnerabilities of 2DFR systems. One such problem that the domain of 2DFR systems face is face image morphing. A substantial amount of research is being done for generation of high quality face morphs along with detection of attacks from these morphs. Comparatively the understanding of vulnerability of 3DFR systems against 3D face morphs is less. But at the same time an expectation is set from 3DFR systems to be more robust against such attacks. This paper attempts to research and gain more information on this matter. The paper describes a couple of methods that can be used to generate 3D face morphs. The face morphs that are generated using this method are then compared to the contributing faces to obtain similarity scores. The highest MMPMR is obtained around 40% with RMMR of 41.76% when 3DFRS are attacked with look-a-like morphs.
Sanjeet Vardam, Luuk Spreeuwers
2023-09-21T14:36:10
http://arxiv.org/abs/2309.12118v1
# Vulnerability of 3D Face Recognition Systems to Morphing Attacks ###### Abstract In recent years face recognition systems have been brought to the mainstream due to development in hardware and software. Consistent efforts are being made to make them better and more secure. This has also brought developments in 3D face recognition systems at a rapid pace. These 3DER systems are expected to overcome certain vulnerabilities of 2DER systems. One such problem that the domain of 2DER systems face is face image morphing. A substantial amount of research is being done for generation of high quality face morphs along with detection of attacks from these morphs. Comparatively the understanding of vulnerability of 3DFR systems against 3D face morphs is less. But at the same time an expectation is set from 3DFR systems to be more robust against such attacks. This paper attempts to research and gain more information on this matter. The paper describes a couple of methods that can be used to generate 3D face morphs. The face morphs that are generated using this method are then compared to the contributing faces to obtain similarity scores. The highest iMMRM is obtained around 40% with iMMRM of 41.76% when 3DFRS are attacked with look-a-like morphs. 3D Face Recognition, 3DMM, face morphs, 3d face registration ## I Introduction In recent years, use of face recognition for biometric identification has become popular. Due to development in software technologies along with hardware, access to face recognition systems have become easy. This development has also allowed easier access to technologies that can exploit the shortcomings of an automated face recognition system. Face morphing is one such threat to a face recognition systems which has proven to be quite effective. The basic idea is to synthetically generate a face image from a combination of faces of two subjects. This generated image is expected to be similar to both the subjects in terms of various relevant features. It has been observed that the state-of-the-art face recognition systems can have higher false matches through introduction of such face morphs that are generated with relevant modifications [1]. This vulnerability of a face recognition system to face morphs has given rise to a whole domain of Morphing Attack Detection (MAD). It is necessary to mitigate such anomalies by understanding the effects of different morph generation techniques on performance of face recognition systems. Pertaining to MAD a considerable amount of work is done to understand these effects on a 2D face recognition system. Substantial amount of research has been done to differentiate potential parts in the training process and working mechanism of a 2D face recognition system that can be improved to mitigate Face Morphing Attacks [2, 3]. Some challenges that remain a big barrier for this problem are availability of representative, large-scale datasets and reference evaluation procedures of novel detection algorithms, addressing which will require considerable time, effort and resources [4]. Now developments in the 3D domain have become the next step for face recognition systems. With improvements in hardware to obtain 3D facial scans, it has become possible to build large 3D facial datasets. These datasets have made it possible to train and develop reliant 3D Face Recognition systems. These systems vary in approach that they follow to obtain accuracies comparable to a 2D face recognition system. 
With further improvement it technology it is only expected that 3D face recognition systems will be brought to mainstream and follow a development path similar to its 2D counterpart. It has the potential to eradicate some major shortcomings of 2D face recognition. An argument here is that because there is an additional information source in the form of depth or range, 3D face recognition systems can develop better feature discriminators. When comparing the vulnerability of these systems towards face morphing, an expectation can be set from 3D systems that due to the additional information, it might be more robust against face morphing. Going one step ahead, one can also ask the question, are 3D face recognition systems even vulnerable to morphing attacks? This paper makes an attempt to answer this question. The progress in 2D face morphing has allowed substantial research for MAD in 2D face recognition systems. Comparatively, generation of 3D face morphs is a relatively untouched topic. Further using these morphs to measure their effectiveness against a 3D face recognition system is unknown. The novelty of this paper lies in the attempt to attack 3D face recognition with 3D face morphs. This paper describes a couple of ways to obtain a 3D face morph. The prime method of obtaining morphs in this paper is using an already existing pipeline developed for Large Scale 3D Morphable Models [5]. This approach allows generation of 3D face morphs in a fully automated manner, suitable for a large-scale evaluation of morphing attacks with minimal requirements. These attacks are tested specifically on face recognition systems described in papers [6] and [7]. Generation and testing of face morphs is done using FRGC [8] and Bosphorous [9] 3D facial datasets. This datasets allow the research to be done on a large number of samples obtained from many different subjects under controlled and uncontrolled environmental conditions. They allow a considerable amount of variations, to enable a selection criteria for generation of morphs. The intention behind this selection is to improve the performance of morphing attack. This is done to understand external factors that could potentially influence 3D face morphing. The selection criteria is based on scores obtained from comparing faces, with selection of faces that have high score and hence can be considered similar. This criteria allows a consistent approach on selection of subjects and gives an idea if similar looking faces generate better morphs in 3D, similar to 2D. More relevant work about the morphing in general and different 3D morphable models is described in section II. This is followed by highlighting the objectives of this research in the form of questions. Section IV describes the methods used for generation of 3D face morphs. This is followed by a section about selection criteria that will be used to choose certain subjects in order to see their effect on the final results. The evaluation metrics section explains the metrics used to evaluate these results. And then the experiment section gives a brief overview about the experiments performed in order to answer the research questions posed in this paper. Experimental results obtained so far are mentioned here followed by a brief discussion explaining this. And then a short conclusion about the research done in this paper. 
## II Related Work This section covers an overview of work that has been done on 3D Face Morphing and work done to build generic 3D face models, wherein the approach of converting multiple facial meshes into a main model is described. Also some surveys related to 2D face morphing, give an idea of how a 3D face morphing system can be evaluated and what are the factors that need to be thoroughly checked for the evaluation. As there is less work done on 3D Face Morphing specifically, this section mainly explores the work that is relevant to the morphing process. 3D face morphing can be considered as the next step to current 2D face morphing. Hence the basis for a 3D face morphing would be relatively similar to 2D face morphing. Survey about Face Morphing Attacks presented in [10] covers different 2D morph generation techniques. It also compares landmark based morph generation to newer Deep learning based morph generations. Another paper [11] also provides an extensive overview about fundamentals of 2D face morphing. It also describes various factors to assess the quality of Face Morphs, while also covering metrics that evaluate the morphing attacks. Early works of 3D Face Morphing began in the form of Metamorphosis on 3D head models in order to achieve new 2D images from existing 2D images [12]. The idea was to map the 3D models onto a 2D space and use these maps to obtain a morphed map from basic 2D morphing technique. This process was used to obtain a metamorphosed 3D model. A survey of 3D metamorphosis from [13] distributes the approaches for metamorphosis into volume based approach and boundary based approach. The work of Chen and Banks was applied by Steyvers to extend his 2D morphing algorithm to obtain morphs of two 3D head models [14]. The work was based on manual segmentation of facial features to create and obtain a correspondence between both 2D maps, hence obtaining a proper 3D morphed face. No further analysis was done on these results. This was then immediately followed by Lee and Thalmann with their paper [15]. The idea was to prepare a 3D generic model, and use this to fit the feature lines obtained from the front and side view of the subject in order to obtain an individualized head along with a properly fitted texture. Hence using 2D morphing on textures along with 3D interpolation in shape, resulted in a 3D morph between two facial models. Work done in [16] to create a generic face model i.e. a 3D Morphable model named Basel Face Model, is considered groundbreaking. This paper had a major impact on the approach of 3D facial reconstruction and synthesis. The framework for this model was presented in [17]. It performs face registration to obtain correspondence among individual meshes and then uses this with Gaussian process model as a prior to generating a morphable face model. This model is an approach to obtain linear combinations of faces in a meaningful way. Hence it becomes a powerful and convenient tool to generate face morphs through proper input. The Large Scale Face Model presented in [5] contributes towards making the process of generating a 3D morphable model relatively easy. The idea is to have an automated process to build a 3DMM from a large collection of 3D scans. This construction is proposed in a way that requirements of input are low without any trade off between automation and model quality. An overview about different 3D Morphable Face Models is well presented in [18]. 
This paper provides a detailed view of the history of 3DMMs and their progress through the past few decades. It takes an in-depth view of different processes that are involved in building 3DMMs and provides a comparison between different options at each step. This process distinctly includes Face Capturing, Modelling, Image Formation, Optimization, Applications, etc. ## III Research Question The purpose of this research is to gain more understanding of the relation between 3D face morphing and 3D Face Recognition systems by answering following questions: * How can a 3D face morph be obtained from two face meshes? What are different suitable approaches for this? * Are 3D Face Recognition systems vulnerable to attacks through these face morphs? * If so, then to what extent are 3DFR systems vulnerable? * What factors in generation of morph influence the vulnerability of the 3DFR system? ## IV 3D Face Morph Generation ### _Introduction_ As discussed in II, there is not a significant amount of work done to specifically generate 3D face morphs. In 2D face morphing techniques a common algorithm is to detect facial feature points of 2 subjects. Connecting these points yields triangular meshes. Through triangle-to-triangle correspondence the same sections in two different faces are warped to obtain intermediate shapes. For each pixel in this new shape, corresponding pixels of both face images are cross-dissolved by interpolation, as shown in Fig. 1. Many other algorithms are developed for same purpose and are broadly classified into mesh warping, field morphing, radial basis functions, thin plate splines, energy minimization and multilevel deformations [19]. All these share the following steps: feature specification, warp generation and transition control. Feature specification is the part of the system that segregates sections on face in a meaningful way. For example, using face landmarks to obtain triangular meshes as explained above. Warp generation is the process of geometrically transforming these sections and obtaining a sort of correspondence between both inputs. Once this is obtained, transition control is used as a factor to fix the rate of warping and color blending between these inputs. The general steps involved in 2D face morphing can be modified in a relevant way to apply to 3D data as shown in Fig. 2. For 3D face morphing, the operations will stay similar to those as explained above from a broader perspective. So in order to achieve a 3D face morph, the idea is to first have a suitable method of feature specification. Once this specification is obtained in a reliable way, perform operations on them to obtain a general transformation. This transformation should be done in such a way that there is a sense of linearity for translations. This will allow for transition control to be the deciding factor for weightage of inputs in the face morph. As 3D face recognition systems used in this paper are based solely on shape data, color blending of texture is not the topic of discussion in this paper. Two methods that are discussed in this paper that cover above steps in a proper manner. The simple way to do morphing is to just average the faces in terms of depth, so that they develop a linear mixture of two faces. Another method is to use the pipeline for building 3D Morphable Facial Models. These methods are briefly discussed ahead. ### _Background_ #### Ii-B1 Facial 3D Morphable Model The Facial 3D Morphable Model(3DMM) is defined as a generative model for face shape and appearance [18]. 
This model is built using multiple faces to obtain a generic model that can be altered to obtain varying face shapes. Building such a model requires a dense point-to-point correspondence amongst all the facial meshes included. This dense correspondence is obtained through a registration process that is used on all the samples, and also maintained in any further steps involved. This is one way of feature specification in 3D facial morphing. Once such a correspondence is obtained, a linear combination of faces can be defined in a meaningful way. Thus allowing warp generation with transition control. There are various 3DMMs built from facial databases and the pipelines for this are provided. Basel Face Model is used in this paper to test suitability of such models to obtain morphs. Then the Large Scale Facial Model pipeline is used to obtain morphs from raw datasets. The requirements for both are Fig. 1: 2D Face Morphing example. Original faces on left and right, with morph in the centre. Lines represent the feature specification. Fig. 2: Step wise distribution of 3D Face Morphing process. Feature Specification, Warp Generation, Transition Control are general steps, relevant also in 2D Face Morphing. Each step has respective sub steps, specific to 3D Face Morphing. different due to differences in the registration process. #### Ii-B2 3D Face Recognition Systems In order to understand the vulnerability of 3D face recognition systems against face morphing, two different 3DFRS. This is done to understand the differences in robustness of different face recognition systems against face morphing. The two systems used are based on the papers [6] and [7]. [6] is based on feature extraction using PCA-LDA by different face region classifiers. The matching score is extracted form likelihood ratio. This is a fast and accurate 3D Face Recognition system, with scores ranging from 0 to 60. 8 is the threshold and hence the scores below it are considered as a non-match. This threshold value corresponds to FMR=0.1%. [7] presents a 3D face recognition system that is based on local shape descriptors. Feature comparison is done using cosine distance similarity metric. Hence the scores lie between 0 and 2, with anything below 0.71565 is considered a match. This threshold value corresponds to FMR=0.1%. ### _Basel Face Model_ Basel Face Model is a Facial 3DMM, which is built using the pipeline described in [17]. The latest version of this model i.e. BFM-2017 is available as an open-source. BU3D-FE along with Basel Scans are used to construct this model. This model is the base of facial 3DMMs and a lot of other pipelines are built around this specific model. Along with the model, the pipeline itself is a big contribution towards development of facial 3DMMs. The availability of this pipeline allows us to reach full reproducibility of the experiments conducted for this research. The BFM pipeline requires 2D or 3D landmarks for registration of raw data. The registration process is carried out using a model-fitting approach. The reference surface i.e. the base model is deformed to fit the known landmarks. The landmark evaluation is done for this registration. Average distance errors range around \(4\div 3mm\) for nose and highest being \(12\div 6mm\) for chin. The data provided on their website also gives access to different pre-registered face samples. They are available as coefficients, and can be used to regenerate a face. 
This face being obtained by fitting the BFM to 2D face images from certain datasets explained in [20]. These samples are used for initially testing the suitability of the idea of morphable models for generation of 3D face morphs as shown in Fig. 3. ### _Large Scale Facial Model_ The Large Scale Facial Model proposed in [5] was built in order to obtain a facial 3DMM with a high number of distinct facial identities, specifically 9663. The objective of this model was to attain high variance in statistical information. The major reason that LSFM is able to include such a high number of distinct samples is due to inclusion of an automated landmarking method in the pipeline used to build it. This automated landmarking of 3D face data helps overcome a huge barrier of obtaining 3D landmarks manually. Hence standard large 3D face datasets can be used to develop facial 3DMMs. The automated landmarking uses the 2D landmarking using RGB information of the input facial mesh while preserving the 3D shape information. This helps in using the advantage of high accuracies with 2D landmarking for 3D data. Basically Fig. 3: Examples of averaging PCA features of two pre-registered face samples of different subjecrs to obtain a morphed face RGB image from different angles is recorded for the face mesh along with XYZ shape image. The knowledge of pose and the face image helps extract landmarks using the state-of-the-art landmark localization technique. This technique is the HOG active appearance model. Thus- 68 sparse annotations are obtained automatically and used for further registration which is done using non-rigid iterative closest point (NICP) algorithm. The paper [5] also compares different registration methods to choose NICP after obtaining best results, with a mean per-vertex reconstruction error of around 1.5mm. ### _Depth Averaging_ This method to obtain morphs is based on the simple idea that two registered faces can be averaged in terms of their depth. This simplicity covers all three aspects of feature specification through registration. Warp generation through averaging the depth, with transition control obtained through weightage for depth values. Registration for this method is done using the 3D Face Registration Method described in [6]. This method extracts a region of interest from the face. Then determine the vertical symmetry plane for this ROI through the nose along with finding the nose tip and slope of the nose bridge. Then transform the points in ROI to a coordinate system defined by the symmetry plane, nose tip and nose bridge. This step of transforming the point cloud into a 'nose based' coordinate system allows all the faces to correspond with at least one feature being the consistent match. The overlapping faces now only depend on the structure of the subject face to generate an acceptable morph. The simplicity of this method has a trade off for this approach. With a necessary requirement being that both subjects need to have similar ratios for lengths on their feature points to obtain a reliable morph. ## V Evaluation Metrics In order to assess the methods used for face morphing, standardized metrics are necessary. These metrics provide an understanding of the quality and effectiveness of morphs. In order to understand the effectiveness of the morph generation system, the metric should represent the ability of the system to obtain matches between generated morphs and faces of input Fig. 
4: Input Face images, this is texture data of the mesh which is also an input to the pipeline Fig. 5: Obtained facial landmarks that correspond to the 3D face. The yellow points are high confidence landmarks and red points are low confidence. Fig. 6: Registered face model. This along with other face mesh inputs have dense correspondence, and are further used to obtain facial 3DMM Fig. 7: Registered face mesh of two distinct subjects(above) and morphed face with depth averaging(below). The arrows in the morphed face image point to improper overlapping due to different position and size of nose, eye. subjects. Mated Morph Presentation Match Rate(MMPMR) and Relative Morph Match Rate(RMMR) in [21] properly quantify the vulnerability of face recognition systems to Morphing Attacks. MMPMR measures the vulnerability of system, i.e. for a certain threshold, how many % of total morphing attacks were successful for a certain set threshold. Hence this metric highly depends on the threshold. RMMR also considers FNMR in it's calculation and hence also measures the effectiveness of system against attacks along with it's vulnerability. ## VI Experiments This section explains the experiments performed for evaluation of face morphing methods described in Section IV. First experiment is generating morphs using coefficient averaging on Basel Face Model and testing them against a 3D face recognition system. Second experiment is testing the morphs generated using the LSFM pipeline, with inputs captured in an uncontrolled illumination setup. Third experiment is testing the morphs generated using the LSFM pipeline, but with inputs captured in a controlled illumination setup. Fourth experiment is testing morphs generated using depth averaging. Fifth experiment is to use selection criteria for choosing a similar set of faces to generate lookalike morphs and testing them. The face samples used for these experiments were chosen from FRGC v2.0 and Bosphorous database. ### _Database_ The experiments were performed by using either of the following data: * Coefficients from BFM fitting algorithms * FRGC database * Bosphorous database #### Vi-A1 Coefficients from BFM fitting algorithms In the initial experiment, coefficients are averaged to obtain morphs of the different faces. These coefficients are provided along with the Basel Face Model. They are obtained using the state of the art fitting algorithm described in [20]. Fitting is done on images from FERET and CMU-PIE database. This are 2D face databases. Results of fitting yields coefficients which are already provided as explained in IV-C. The coefficients are used to obtain faces of distinct subjects from the BFM. Each individual coefficient represents a certain aspect of the face, for example, width, thickness, etc. #### Vi-A2 FRGC database FRGC v2 database is a widely used dataset for research related to 3D face recognition. It has 4007 textured 3D face scans of 466 subjects with varying facial expressions, captured under controlled as well as uncontrolled illumination conditions. The acquisition of 3D range data was done using the Minolta Vivid 900/910 series sensor. #### Vi-A3 Bosphorous Database Bosphorous database is also a widely used dataset due to its well defined samples in terms of facial expression. It consists of 4652 textured face scans of 105 subjects and is captured in controlled illumination conditions. Each subject includes scans with 34 combinations of different expressions, poses and occlusion conditions. 
It is also well detailed in terms of demographics. Also each scan includes manually labelled 24 facial landmark points. The scans are captured using Inspeck Mega Capturor II 3D. ### _Experiment 1: PCA Averaging_ For this experiment, coefficients from VI-A1 were chosen for different face ids but the same lighting condition. These coefficients can be used to regenerate distinct faces. Averaging these coefficients yields a face that is expected to be the face morph. 50 faces were chosen and morphs were obtained with combinations of 2. For same face ids another set of coefficients was obtained that was fitted from a different lighting conditions. So the faces are expected to be the same but captured under different conditions. This is done with consideration of the natural biometric variance. In total 59 different subjects with 22 samples for each subject are used to compare with 2475 generated face morphs. 2475 morphs are obtained from using all pairs of face combination from 59 faces. The whole set of faces and face morphs are compared using both face recognition systems mentioned in IV-B2 for further evaluation. The bonafied 3D faces in this experiment are generated i.e. fitted. Hence this experiment only displays the possibility of 3DFRS being vulnerable to face morphs. Fig. 8: From top to bottom, side view of face 1, face 2 and face morph respectively. Arrow in face morph shows some remnants formed due to averaging mesh points with a hole from the eye. Also it can be observed that all nose tips are at (0,0,0) with a morphed face showing average values for the axis on left. For simplicity, 3DFRS from [6] will be referred as likelihood ratio classifier and 3DFRS from [7] will be referred as distance similarity classifier. This are based on the scores that are obtained from them. The results obtained from this experiment are as noted below: #### Iv-B1 Distance Similarity Classifier MMPMR = 16.2% RMMR = 26.39% FNMR = 24.76% FMR = 1.94% #### Iv-B2 Likelihood Ratio Classifier MMPMR = 98.23 RMMR = 123.14 FNMR = 75.09 FMR = 12.34 From both the figures 9 and 10, it can be observed that high number of genuine samples do not match with each other, also confirmed by high FNMR. This is due to comparison of samples generated from fitting on 2D images of different angle, which leaves a room for error. At the same time, it can be seen that considerable amount of mated morph curve is in the matched region, which is also verified by high percentage of RMMR and MMPMR values in both cases. Due to high FMR these numbers are rendered irrelevant in terms of understanding actual vulnerability of 3DFRS, but they indeed confirm that samples of two subjects that do not match on 3DFRS, can still match with a morph that is generated from samples of both this subjects. ### _Experiment 2: Morphs generation from uncontrolled Lighting conditions_ In this experiment, morphs are generated using the LSFM pipeline. 3D face scans from FRGC databases are used as the inputs to the pipeline. The scans chosen are captured in uncontrolled illumination conditions. In total 282 combinations of subjects chosen at random from 466 subjects where used to generate morphs. This experiment is done to test if the 3DFRS are actually vulnerable to morphs generated from actual facial scans when compared against original face scans unlike the previous experiment. The resultant morphs are compared with both the face recognition system to obtain scores for further evaluation. 
The results obtained from this experiment are as noted below: #### Iv-B1 Distance Similarity Classifier MMPMR = 13.48% RMMR = 16.07% FNMR = 2.59% FMR = 0.11% #### Iv-B2 Likelihood Ratio Classifier MMPMR = 1.64% Figure 11: Experiment 2 score distribution for Distance Similarity Classifier Figure 10: Experiment 1 score distribution for Likelihood Ratio Classifier Figure 9: Experiment 1 score distribution for Distance Similarity Classifier In this experiment, it is observed that distance similarity classifier has higher MMPMR, RMMR, compared to likelihood ratio classifier. This stresses on the fact that distance similarity classifier is vulnerable to such kind of randomly generated morphs. ### _Experiment 3: Morphs generation from controlled Lighting conditions_ In this experiment, morphs are generated using the LSFM pipeline. Data from the Boshorous database is used in this experiment. In total 422 morphs are generated, by choosing subjects at random. For generation of morphs, neutral pose samples from individual subjects are used. Scores are obtained from comparison between this face morph and rest of the samples from the dataset. This experiment is conducted to observe, if generation of morphs from face scans obtained in controlled lighting condition, improves their quality hence resulting in matches with more samples of contributing subjects. #### Iv-D1 Distance Similarity Classifier * MMPMR = 0% * RMMR = 35.25% * FNMR = 35.25% * FMR = 11.3% #### Iv-D2 Likelihood Ratio Classifier * MMPMR = 0.41% * RMMR = 28.17% * FNMR = 27.8% * FMR = 36.18% It can be seen that FMR is high in this experiment. This sheds light on the fact that a lot samples from same subjects do not match with each other. This can also be confirmed in figures 13 and 14 with genuine curve having high numbers in unmatched area. This is probably due to different expressions and poses of faces that Boshorous dataset covers. Thus FNMR is higher for both cases which results in high RMMR, even though MMPMR is low. Which means that morphs were not effective in this case, but threshold was high for even the genuine matches. ### _Experiment 4: Depth Averaging_ In this experiment, morphs are obtained using depth averaging methods explained in IV-E. The samples from FRGC database are chosen, irrespective of the illumination condition, as no texture is involved for this method. 560 face morphs were generated using this method from combinations of subjects chosen at random. This experiment is conducted to check vulnerability of 3DFRS against face morphs, if 3D face morph has been generated with a method as simple as depth averaging. #### Iv-E1 Distance Similarity Classifier Fig. 14: Experiment 3 score distribution for Likelihood Ratio Classifier Fig. 12: Experiment 2 score distribution for Likelihood Ratio Classifier Fig. 13: Experiment 3 score distribution for Distance Similarity Classifier * MMPMR = 2.14% * RMMR = 4.79% * FNMR = 2.7% * FMR = 0.06% #### Vi-B2 Likelihood Ratio Classifier * MMPMR = 0% * RMMR = 1.47% * FNMR = 1.48% FMR = 0.13% MMPMR values obtained from experiment on both classifiers display that depth averaging is not an effective technique for morphing. This shows that some morphs obtained in this method can produce successful matches. A more manual approach with this method might yield better results on distance similarity classifier but same can't be expected to be true in likelihood ratio classifier. 
### _Experiment 5: Lookalike Morph generation_ In this method a selection criteria is used to choose similar subjects for obtaining face morphs. In order to stay consistent, the selection was made based on similarity scores obtained from the 3DFR system [6]. Two subjects with scores between 3 and 7 were considered as similar. This is to keep a high threshold for faces to qualify as similar and also not consider faces that already match on the 3DFRS. In total, 872 morphs were generated from combinations obtained using this criteria. This morphs are considered look-a-like morphs and hence this would be the ideal test to check the vulnerability of 3DFRS. #### Vi-F1 Distance Similarity Classifier * MMPMR = 8.6% * RMMR = 12.24% * FNMR = 3.64% * FMR = 0.15% #### Vi-F2 Likelihood Ratio Classifier * MMPMR = 39.97% * RMMR = 41.76% * FNMR = 1.8% * FMR = 0.28% Based on the scores above, based on RMMR and MMPMR scores it can be observed that both classifiers are vulnerable to morphs generated from similar faces. With low FNMR, threshold for this 3DFRSs seem fine. But still relatively high number of 3D faces match with the contributing subjects. ## VII Discussion The objective of this research was to try and find a method to obtain 3D face morphs. To find out if a 3DFR system is vulnerable to attacks from face morphs and the extent to which it can be done. Along with the understanding of factors that influence the vulnerability. In order to answer these questions, the 5 experiments explained in section VI were designed. Experiment 1 was set up to try and figure out if 3D Morphing is possible through usage of 3D morphable models. Fig. 16: Experiment 4 score distribution for Likelihood Ratio Classifier Fig. 17: Experiment 5 score distribution for Distance Similarity Classifier Fig. 15: Experiment 4 score distribution for Distance Similarity Classifier As it can be seen in results of section VI-B, a high number of morphs in both cases seem to match with the contributing subject samples, with likelihood ratio classifier having higher amount of matches. These can be seen in high MMPMR, which means out of all the morphs, 16.2% in distance similarity classifier and 98.23% in likelihood ratio classifier, matched with at least one sample of both the contributing subjects. This just confirms the fact that 3DMM is a viable approach to face morphing. This experiment verifies nothing beyond this fact. As can be seen that FMR are above 1% in both the cases, and also the fact that all bonafied faces are generated, makes it clear that this metrics are not reliable enough to discuss further. Experiment 2 was to use the 3D morphable model building pipeline to generate face morphs from exactly one sample of each contributing subject. These morphs show a substantially lower number of matches, which is expected. But in case of distance similarity classifier, MMPMR and RMMR are high, with low FNMR. This shows that a lot of morphs, 13.48% to be specifc, generated at random tend to match with both the contributing subjects. At the same time, even likelihood ratio classifier seems to have false matches with morphs. This experiment is based on low-quality morphs. But as the subjects were chosen at random, it can still be the case that some of the morphs were high quality. So it can be confirmed that 3DFRS in general are vulnerable to face morphing to some extent. It might be in the case that morphs are high quality. 
Experiment 3 is performed to produce and test morphs, that are produced with more accuracy in terms of landmarking and correspondence. This is expected to result into more smoother morphs, with features being preserved in a better way. For this bosphorous dataset was used, which has controlled environment conditions. This may yield better automated landmarks in a easier way, with less errors. The morphs generated in this case are also from subjects chosen at random. A high FNMR depicts that many samples for same subject in this case do not match with each other. At the same time low MMPMR, almost 0, emphasizes on the fact that morphs generated in this case are not good enough. This behaviour could can be explained as follows. Both the 3DFRS from this experiments are better at distinguishing faces from Bosphorous dataset and could reduce the FNMR by using a higher threshold. This answers the research question about exploring an external factor that influences the vulnerability of 3D face morph. Experiment 4 is performed to test if there is another suitable approach for 3D face morphing, other than using 3DMM pipeline. Depth averaging seems to be a contender for an alternative. But the experiment results shows that classifiers are highly robust against this method. It can be argued that a more manual approach with this method could yield more matches. Experiment 5 is conducted to test extent of vulnerability of 3DFRS against high quality face morphs. The look-a-like of morphs pertains to the high similarity between both contributing faces. Results show that a high number of morphs match with both the contributing faces. This adds to the answer for the question, about what other factors influence vulnerability of 3DFRS. In this experiment, distance similarity classifier seems to be able to differentiate between such high quality morphs better than likelihood ratio classifier. But this is due to the fact that, the similar faces are chosen based on scores of likelihood ratio classifier. Hence it seems reasonable that morphs in this experiment are more likely to have successful attacks against likelihood ratio classifier. This emphasizes that 3DFRS are highly vulnerable to high-quality morphs, especially the ones that are based on the 3DFRS itself, in terms of similarity. So the experiments performed in this paper answer all the questions asked in section III. To summarize, now we know how to obtain proper 3D face morphs. So far we know that there is only one suitable approach for it. 3DFRS are vulnerable to attacks from face morph generated using this approach. The extent of vulnerability is significant, and efforts need to be made to improve 3DFRS mentioned in this paper. One factor that influences the vulnerability of these morphs is quality of scans and other one is similarity between the faces of contributing subjects of the morph. ## VIII Conclusion From the experiments conducted so far, it can be seen that 3D face morphing is achievable and is able to get matches with 3DFRS. It can be seen that for attacks from look-a-like face morphs, 3DFR systems are quite vulnerable. This vulnerability is especially high when the similar faces are chosen from high scores, obtained from the 3DFRS itself, that is being attacked. In case of face scans that are generated in a highly controlled manner, the differentiation between morphs and contributing faces becomes easier. Also a method as simple as depth averaging cannot yield matches, if done in an automated way. 
There is still a considerable amount of work to be done to understand face morphing attacks against 3DFRS. Based on the results above, it can be concluded that 3DFRS are vulnerable to 3D face morphing.

Fig. 18: Experiment 5 score distribution for Likelihood Ratio Classifier
In recent years, developments in hardware and software have brought face recognition systems into the mainstream. Continuous efforts are being made to develop them into better and more secure systems. This development has also accelerated the development of 3D face recognition systems. These 3DFR systems are evolving rapidly with the aim of overcoming the weaknesses of 2DFR systems. One of the challenges in the domain of 2DFR systems is the morphing of face images. A considerable amount of research has been carried out on the generation of high-quality face morphs and on the detection of attacks using these morphs. The understanding of the vulnerability of 3DFR systems to 3D face morphs is comparatively limited, but 3DFR systems are expected to be more robust against these attacks. This work aims to investigate this issue further and to gather more information about it. This paper describes multiple methods for generating 3D face morphs. The faces generated using this method
2309.15677
Experimental and numerical investigation to elucidate the fluid flow through packed beds with structured particle packings
The present paper presents an experimental and numerical investigation of the dispersion of the gaseous jet flow and co-flow for the simple unit cell (SUC) and body centered cubic (BCC) configuration of particles in packed beds. The experimental setup is built in such a way, that suitable and simplified boundary conditions are imposed for the corresponding numerical framework. The SUC and BCC particle beds consist of 3D-printed spheres. The flow velocities are analysed directly at the exit of the particle bed, for both beds for particle Reynolds numbers of 200, 300, and 400. Stereo particle image velocimetry (SPIV) is experimentally arranged in such a way, that the velocities over the entire region at the exit of the packed bed are obtained instantaneously. The numerical method consists of a state-of-the-art IBM with AMR. The paper presents the pore jet structure and velocity field exiting each pore for the SUC and BCC packed particle beds. The numerical and experimental studies show a good agreement for the SUC configuration for all flow velocities. For the BCC configuration, some differences can be observed in the pore jet flow structure between the simulations and the experiments, but the general flow velocity distribution shows a good overall agreement. The axial velocity is generally higher for the pores located near the centre of the packed bed than for the pores near the wall. In addition, the axial velocities are observed to increase near the peripheral pores of the packed bed. This behaviour is predominant for the BCC configuration as compared to the SUC configuration. It is shown that both the experiments and the simulations can be used to study the complex fluid structures inside a packed bed reactor.
Shirin Patil, Christian Gorges, Joel López-Bonilla, Moritz Stelter, Frank Beyrau, Berend van Wachem
2023-09-27T14:21:33
http://arxiv.org/abs/2309.15677v1
Experimental and numerical investigation to elucidate the fluid flow through packed beds with structured particle packings ###### Abstract The present paper presents an experimental and numerical investigation of the dispersion of the gaseous jet flow and co-flow for the simple unit cell (SUC) and body centered cubic (BCC) configuration of particles in packed beds. The experimental setup is built in such a way, that suitable and simplified boundary conditions are imposed for the corresponding numerical framework, so the simulations can be done under very similar conditions as the experiments. Accordingly, a porous plate is employed for the co-flow to achieve the uniform velocity and the fully developed flow is ensured for the jet flow. The SUC and BCC particle beds consist of 3D-printed spheres, and the non-isotropy near the walls is mostly eliminated by placing half-spheres at the channel walls. The flow velocities are analysed directly at the exit of the particle bed, for both beds over 36 pores for the SUC configuration and 60 pores for the BCC configuration, for particle Reynolds numbers of 200, 300, and 400. Stereo particle image velocimetry (SPIV) is experimentally arranged in such a way, that the velocities over the entire region at the exit of the packed bed are obtained instantaneously. The numerical method consists of a state of the art immersed boundary method with adaptive mesh refinement. The paper presents the pore jet structure and velocity field exiting from each pore for the SUC and BCC packed particle beds. The numerical and experimental studies show a good agreement for the SUC configuration for all flow velocities. For the BCC configuration, some differences can be observed in the pore jet flow structure between the simulations and the experiments, but the general flow velocity distribution shows a good overall agreement. The axial velocity is generally higher for the pores located near the centre of the packed bed than for the pores near the wall. In addition, the axial velocities are observed to increase near the peripheral pores of the packed bed. This behaviour is predominant for the BCC configuration as compared to the SUC configuration. The velocities near the peripheral pores can become even higher than at the central pores for the BCC configuration. It is shown that both the experiments as well as the simulations can be used to study the complex fluid structures inside a packed bed reactor. ## 1 Introduction Packed bed reactors, especially with gaseous flows, have wide-ranging engineering applications. For example, in the food industry (e.g., bioreactors for dairy products production or coffee roasters), basic materials' industry (e.g., shaft kilns to produce lime or dolomite), energy sector (e.g., production of synthesis gas from biomass), to name just a few examples. In such applications, there are multi-phase interactions, which are governed by physical phenomena such as mass transfer, heat transfer and the fluid flow through the packed bed. The construction of fixed packed bed reactors usually relies on simplifying assumptions, such as plug flow (Eppinger et al., 2011; Dixon and Partopour, 2020) or empirical correlations, such as the Ergun equation (Ergun, 1952). However, such assumptions can give rise to erroneous predictions, principally for small tube to particle diameter ratios, as a result of wall effects (Nijemeisland and Dixon, 2001). 
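For reference, the Ergun correlation mentioned above is commonly written as

\[\frac{\Delta P}{L}=150\,\frac{\mu\,(1-\varepsilon)^{2}}{\varepsilon^{3}\,d_{\mathrm{p}}^{2}}\,U+1.75\,\frac{\rho\,(1-\varepsilon)}{\varepsilon^{3}\,d_{\mathrm{p}}}\,U^{2}\,,\]

where \(\Delta P/L\) is the pressure drop per unit bed length, \(\mu\) the dynamic viscosity, \(\rho\) the fluid density, \(\varepsilon\) the bed porosity, \(d_{\mathrm{p}}\) the particle diameter and \(U\) the superficial velocity (this notation is introduced here for illustration only). Being a bed-averaged relation, it cannot resolve the local channelling and wall effects discussed above.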
The distribution of flow within the reactor and the development of the velocity field in the freeboard above the interface can significantly affect the overall process. Thus, the local flow behaviour within the bed is a crucial parameter for optimising fixed packed bed reactor systems. Currently, there is limited experimental information available on the local fluid flow structure within packed particle beds, as accessing the particle interstices experimentally is highly challenging without affecting the flow. Nevertheless, a variety of experimental techniques have been carried out to study different aspects of the fluid flow through packed beds in different configurations. Probe-based techniques, such as hot wire anemometry (Alshammari et al., 2023) or electrochemical micro-probes (Seguin et al., 1998; Bu et al., 2015), are intrusive but fast frequency techniques (\(\sim\) 100 Hz) that have been used to measure the instantaneous local flow velocity within the interstitial spaces of a packed bed. The first is limited to applications with gas flows, while the second with liquid flows. Laser Doppler velocimetry (Giese et al., 1998; Chen et al., 2007) is a semi-intrusive optical technique but, as the previous technique, allows measuring the instantaneous velocity in a single point, which limits the achieved understanding of the fluid flow phenomena. Magnetic resonance imaging (MRI) allows determining the three components of the interstitial velocity of liquid flows in the interparticle spaces of different packed beds, as random packed beds with Ballotini particles (Sederman et al., 1997) and simple unit cell (SUC) packed bed (Suekane et al., 2003). This technique is highly expensive and few works have been performed with gas flows (Nguyen et al., 2005). Particle image velocimetry (PIV) is a technique that can measure two or three components of the instantaneous velocity of a fluid seeded with particles. The particles are illuminated by laser light, and the images are recorded with synchronized cameras (Raffel et al., 2018). This technique has different configurations that have been widely used to characterize the flows associated with packed bed. Several works are performed using planar PIV on transversal planes to packed beds of solid spheres using the refractive index matching (RIM) method (Wiedreseiner et al., 2011; Haam et al., 2000), which helps to access to the interstitial spaces minimizing the distortions in the captured images, by having the similar refractive index for the fluid and spheres, but it is limited to liquid flows. The following works used this method to evaluate liquid fluids at low particle Reynolds number, \(\mathrm{Re_{p}}\), in random packed beds with monodisperse spheres. Huang et al. (2008) reported for \(\mathrm{Re_{p}}\) = 28 that the velocity in the pore increases with porosity size and the velocity asymmetries around the spheres are influenced by the fluid inertia. Patil and Liburdy (2013) have studied \(\mathrm{Re_{p}}\)= 4 in a low aspect ratio packed bed and evaluated three different locations (vertical planes), finding that the flow structures become less ordered and the dynamic range of velocities increases from near wall towards the midplane. Furthermore, the following works have used the RIM method to evaluate the turbulence intensity in random packed beds with monodisperse spheres. 
Patil and Liburdy (2013) have applied time-resolved planar PIV to study turbulent flow with \(\mathrm{Re_{p}}\) ranging from 418 to 3964, where they identify repetitive patterns in the pore spaces and demonstrate that most of the turbulent measures become independent of \(\mathrm{Re_{p}}\) beyond \(\mathrm{Re_{p}}\) = 2800. Khayamyan et al. (2017) have studied turbulent and transitional flow in a randomly packed bed of monodisperse spheres at \(\mathrm{Re_{p}}\) ranging from 20 to 3220. They found that as \(\mathrm{Re_{p}}\) increases, the magnitude of the velocities increases and their dynamic range decreases, but the flow becomes more disordered, predominantly in low-velocity areas. Also, they define flow regimes, such as the Stokes-to-inertial transition for \(\mathrm{Re_{p}}\) from 40 to 250, the inertial-to-turbulent transition for \(\mathrm{Re_{p}}\) from 250 to 1500, and turbulent flow from \(\mathrm{Re_{p}}\) = 1500 onwards, where the velocity fluctuations become independent of \(\mathrm{Re_{p}}\). The same group applied stereo PIV with RIM to study the longitudinal and transversal dispersion of transitional and turbulent flows throughout the packed bed (Khayamyan et al., 2017). Further works also applied tomographic PIV (Larsson et al., 2018) or time-resolved planar PIV (Nguyen et al., 2018, 2019, 2021) to study liquid flows in packed beds. Planar PIV without RIM, but with optical access, has been used in some works to measure the velocity fields of liquid flows (Blois et al., 2012) and gas flows. Velten and Zahringer (2023) and Neeraj et al. (2023), both from the same group, measure the velocity fields of a gas flow, at \(\mathrm{Re_{p}}\) from 200 to 500, in some pores and in the freeboard above a packed bed of spheres arranged in a body centered cubic (BCC) packing. They evaluate the influence of the number of layers on the flow above the bed, finding that from 11 layers onwards the surface flow becomes independent of the number of layers, while 21 layers minimize the influences from the surroundings. The velocity profiles above the bed have been analysed, observing that a non-periodic porosity distribution near the wall creates a channelling effect, which leads to high-velocity jets near the wall. Also, the velocity profiles closer to the top of the bed clearly show the jets from the inter-particle spaces. Regarding the influence of \(\mathrm{Re_{p}}\), the averaged flow structures are not affected, but at higher \(\mathrm{Re_{p}}\) the presence of recirculations around the spheres is more evident and the flow structures fluctuate more, which can result in asymmetric averaged velocity fields. These works are still limited to the study of pores that have optical access. To access the pores behind the spheres in the gas phase, where RIM is not applicable, planar PIV with an image correction methodology based on ray tracing (RT-PIV) has been proposed to correct the optical aberrations from the transparent spheres (Martins et al., 2018). This technique has been validated for use in a BCC packed bed by Velten et al. (2024), who show that RT-PIV still has a limited field of view, is very sensitive to geometric parameters, and poses difficulties for the illumination. Besides observing the physical behaviour of a system, experimental studies can also assist in the validation of numerical models (Wood et al., 2015).
Simulations of fixed packed particle beds employing computational fluid dynamics (CFD) assume a vital role in predicting and regulating flow and process parameters, both nowadays and in the near future. Several simulation methods can be used to simulate the fluid dynamics inside fixed bed reactors. The Lattice-Boltzmann method (LBM) is often used due to its good scalability and efficiency, but it has drawbacks for dense particle packings and high Reynolds number turbulent flows with additional heat or mass transfer (Mantle et al., 2001; Manz et al., 1999; Sullivan et al., 2005; Dixon and Partopour, 2020). Another common method is to solve the Navier-Stokes equations as a single-phase flow on body-fitted finite volume meshes. This method is accurate, depending on the complexity and structure of the used numerical mesh, but it can be computationally very expensive, especially for complex particle shapes (Eppinger et al., 2011; Robbins et al., 2012; Yang et al., 2013). A third option is the immersed boundary method (IBM), which does not require the fluid mesh to conform to the surfaces of the particles. This makes IBM significantly less computationally expensive than body-fitted meshes, while still maintaining a good accuracy. The IBM is a numerical method for simulating fluid flow around complex geometries, such as the particles in a fixed bed reactor. The IBM was first introduced by Peskin (1972) for the simulation of fluid-structure interactions in heart valves, and has since been extended to a wide range of applications. The IBM uses an Eulerian framework for the discretization of the fluid domain and a Lagrangian marker framework for the representation of the particle surface. This means that the fluid mesh does not conform to the surfaces of the particles, which simplifies mesh generation and reduces computational costs. In the continuous forcing IBM, also known as the smooth IBM, the particle surfaces are represented by source terms in the Navier-Stokes equations. These source terms are spread across several fluid cells at each side of the particle surface (Peskin, 1972). The no-slip boundary condition at the particle surface is imposed by requiring that the fluid velocity at the Lagrangian markers match the desired velocity at the surface. In the realm of fixed bed reactor simulation, the IBM has gained notable prominence in recent years. A study conducted by Gorges et al. (2024) delved into the comparison between two IBM approaches: the smooth and blocked-off IBM techniques. These methods were applied to simulate a fixed packed bed reactor consisting of spherical particles arranged in a BCC particle packing structure. The investigation involves an examination of velocity fields in vertical planes above the fixed packed bed, with a focus on varying Reynolds numbers. To bolster their findings, the researchers relied on experimental inline PIV measurements as a foundational benchmark for evaluating the efficacy and accuracy of the two IBM approaches. Another notable contribution to this field is from Yuan et al. (2019), who applied the IBM in conjunction with local adaptive meshing techniques. This combination has been employed to simulate fixed bed reactors containing particles of various shapes. Their study entails a thorough comparison of the predicted pressure drop across the bed and local heat transfer with empirical correlations corresponding to these parameters. Furthermore, Lovreglio et al. (2018) has extend the exploration of fixed bed reactor dynamics. 
The study involves a comparative analysis between results obtained through MRI and those generated by employing a proprietary CFD code built with an IBM framework. The primary focus was to elucidate the structural aspects and hydrodynamics of fixed beds comprising spherical particles. In the current paper, the focus is to analyse the fluid velocity field of the outflow close to the exit of a packed bed, and to study the dispersion of a fluid jet flow through the packed bed for different flow conditions, from Re\({}_{\text{p}}\)=200 to Re\({}_{\text{p}}\)=400. In the present study, the packed bed is arranged in two structured configurations: simple unit cell (SUC) and body centered cubic (BCC). The velocity fields, predominantly the axial vector, are measured by the stereo-PIV (SPIV) technique and provide experimental data to compare with numerical results obtained by the smooth IBM (Cheron et al., 2023). Previous studies have performed planar PIV measurements at maximum three locations (vertical planes) in the outflow and under the presence of one or two interstitial spaces at the exit of packed bed (Suekane et al., 2003; Velten and Zahringer, 2023; Neeraj et al., 2023; Velten et al., 2024; Gorges et al., 2024). This approach cannot continuously resolve the velocity field over the entire plane, as it misses the information from the spatial gaps. The novelty of this work is the study of jet dispersion throughout a wide packed bed, with multiple interstitial spaces, by measuring simultaneously the three components of velocities over the entire plane at the exit of the packed bed, which is possible with the SPIV technique. The experimental data used to validate numerical simulation reproduces, as much as possible, the simplified and well-defined boundary conditions, which are required in a numerical calculation. The experimental particle packed beds have been 3D-printed with a high dimensional precision technique, allowing an accurate positioning of all the spheres in the experimental and numerical setup, the jet flow originates from a fully developed flow in the bottom centre from a pipe, and the co-flow has a uniform velocity, ensured by using a sufficiently thick porous plate. The printed packed particle bed also ensures the uniform porosity within the bed, specially at the walls interface, where half spheres instead of full spheres have been printed (Bu et al., 2015). This is critical to minimize the wall flow and its influence on the surface velocity field (Gorges et al., 2024; Neeraj et al., 2023). ## 2 Experimental setup and methodology In this section, we discuss the experimental setup for the SUC and BCC packed particle bed arrangements. The optical setup and the methodology of image processing for SPIV is also outlined and discussed. ### Experimental Setup Figure 1 shows the experimental packed bed setup, including the SUC and BCC particle arrangements. The setup consists of a jet flow in the centre of the bed with a concentric co-flow that interacts with the packed bed from bottom to top. Both flows consist of synthetic air. The main components of the setup are the square channel, the central pipe for the central jet flow, the porous plate at the bottom for the co-flow, and the packed bed. The squared channel conducts the co-flow and has a cross-section of 152.5 mm x 152.5 mm with walls made of transparent acrylic (PMMA). The central pipe is made from stainless steel and conducts the jet flow, seeded with particles. 
It has an inner diameter of 8 mm, an outer diameter of 12 mm and a length of 380 mm, so that the inner flow is fully developed before exiting the pipe. The packed beds are 3D-printed (made from Nylon PA-12) and consist of uniform spheres placed periodically in SUC or BCC arrangements. The 3D printing and design details are described in Section 2.2. Before the co-flow interacts with the spheres of the packed bed, it is homogenized by passing through a bronze porous plate (Siperm, B40 with 10 mm thickness) that fully covers the channel cross-section. A uniform velocity profile across the entire section is desirable, as it provides a known and homogeneous boundary condition for the simulation work. A circular hole has been drilled exactly in the middle of the porous plate to hold the central pipe for the jet flow, keeping it centred in the square channel. As shown in Figure 1, the exit of the pipe is flush fitted with the end surface of the porous plate, such that the uniform co-flow and the jet flow exit at the same plane. The packed bed is held 30 mm above porous plate for SUC configuration or 60 mm, for BCC configuration, using a 10 mm step on the inside of the channel walls. This creates a gap between the exit of jet flow and the inlet of packed bed, which allows some room for jet flow to disperse before interacting with the layers of packed bed. After emerging from the bronze plate plane, both the co-flow and jet flow enter the packed bed and continue to flow upwards, crossing all the layers of the packed bed. To ensure that the flow at the exit of the packed bed is not affected by any kind of perturbations from the surrounding, the walls of the square channel are extended to 400 mm beyond the last layer of the packed bed. This provides a well-defined boundary condition for the simulation part of this study. In this work, the SUC packed bed consists of 18 layers of spheres, and the BCC of 25 layers. For these numbers of layers, the seeding particles from the central jet are dispersed over the entire region of the packed bed. Also, according to Neeraj et al. (2023), in a BCC configuration, the pore jet velocities do not show much variation after 21 BCC layers. The total length of the SUC and BCC packed bed is 455 mm and 367 mm, respectively. The required airflow is supplied from compressed air cylinders, and separate mass flow controllers are used to control the co-flow (Bronkhorst, Mass-Stream 6371, maximum flow 380 lpm air) and the jet flow (Bronkhorst, El-Flow, maximum flow 80 lpm air). The seeding for the SPIV measurements is supplied along with the jet flow. A Lavision 'Aerosol generator' is used with Di-Ethyl-Hexyl-Sebacat (DEHS) to seed droplets into the jet flow for SPIV experiments. A liquid seeder, rather than a solid particle seeder, is used to keep the extended walls transparent and to not cloud the view of the cameras. As shown in Figure 1, a bypass is used to regulate the fraction of the jet flow passing through the aerosol generator to provide control over the seeding density. In the current configuration of the setup, the co-flow cannot be seeded, as the porous plate does not allow the droplets to pass further downstream. ### The 3D-printed packed bed This section outlines the fabrication of the packed particle bed. As mentioned previously, two packed bed configurations, SUC and BCC, are investigated. These packed beds are 3D-printed using the selective laser sintering (SLS) method and the material is Nylon PA-12. 
To ease the handling of the packing, printing is carried out in separate packing units, which are shown in Figure 2. The spheres for both configurations have a diameter of 25.5 mm. For instance, one layer of SUC unit has 36 spheres and a unit consists of 6 layers of spheres in axial (\(Z\)) and lateral (\(X\),\(Y\)) directions (see Figure 1(a)). However, along the \(X\) and \(Y\) directions, each layer of SUC unit consists of 5 full spheres and 2 half spheres, each towards the end of the packing. This boundary condition has been used before by Bu et al. (2015), and the intention is to uniform the porosity of the packed bed near the wall, minimizing the wall flow and the channelling effects due to bigger pores near the wall which influence the superficial flow above the bed (Neeraj et al., 2023; Gorges et al., 2024). In total, the configuration has 18 layers of SUC, thus the packed bed consists of a total of 648 spheres. The BCC unit has two types of layers, a full layer, that corresponds to a layer which extends until the edges of a unit and a weak layer which is defined as the layer between two full layers. One full layer of a BCC unit consists of 36 spheres, while a weak layer has 25 spheres. A BCC unit consists of 11 layers, 6 full layers and 5 weak layers, along the axial (\(Z\)) direction (see Figure 2b). Similar to the SUC particle packing, to keep uniform the porosity near the wall, along the lateral \(X\) and \(Y\) direction, each full layer of BCC unit has 5 full spheres and two half-spheres near the periphery, while each weak layer has 5 full spheres. The BCC particle packing begins and ends with a full layer. Additionally, each unit of packed bed is 3D-printed in such a way that it can be easily moved and placed over other units. In the case of the BCC packing, it is not straightforward to move and place one unit over another, hence a bridge unit, consisting of 1 full layer and 2 weak layers, is 3D-printed and added. This makes a total of 768 spheres in the BCC packed particle bed configuration. The pore volume fraction (void fraction) for the fluid flow within the SUC and BCC particle packings is 0.471 and 0.296, respectively. However, the theoretical volume fraction for SUC and BCC packings are 0.48 and 0.32, respectively. It is noted that the volume fractions for both configurations are slightly lower than its corresponding theoretical value. This is because in 3D printing, the contact point between each sphere has a finite size. The actual volume fraction for BCC is reduced further, as in that configuration, each sphere is surrounded by more spheres as compared to SUC. ### Experimental flow conditions Table 1 shows the flow conditions used in the present work for the SUC and BCC configurations. The experiments are performed under atmospheric pressure (\(\sim 1\) atm) and ambient temperature conditions (\(\sim 20^{\circ}\)C). The Reynolds number of the particle or sphere, \(\mathrm{Re_{p}}\), is defined based on the particle diameter and the interstitial velocity (\(U_{\mathrm{int}}\)) between particles. The interstitial velocity is defined as the ratio between superficial velocity and the actual volume fraction of the corresponding SUC or BCC packing. The superficial velocity (\(U_{\mathrm{spf}}\)) is defined as the average velocity through the square cross-section in the absence of particles. 
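As a point of reference for these definitions, the ideal porosities of the two packings follow from elementary sphere-packing geometry, and the velocities quoted in Table 1 below can be recovered from them; the numbers here are simply recomputed from the sphere diameter of 25.5 mm, the measured porosities, and the kinematic viscosity of air quoted below:

\[\varepsilon_{\mathrm{SUC}}=1-\frac{\pi}{6}\approx 0.476\,,\qquad\varepsilon_{\mathrm{BCC}}=1-\frac{\sqrt{3}\,\pi}{8}\approx 0.320\,,\]

\[U_{\mathrm{int}}=\frac{U_{\mathrm{spf}}}{\varepsilon}\,,\qquad\mathrm{Re_{p}}=\frac{U_{\mathrm{int}}\,d_{\mathrm{p}}}{\nu}\,.\]

For example, for the SUC packing at \(\mathrm{Re_{p}}=200\) (Case 1), \(U_{\mathrm{int}}=0.12\,\mathrm{m/s}\) gives \(U_{\mathrm{spf}}=0.471\times 0.12\approx 0.06\,\mathrm{m/s}\) and \(\mathrm{Re_{p}}=0.12\times 0.0255/(1.52\times 10^{-5})\approx 201\), consistent with the tabulated values.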
The volumetric flow rates of the co-flow (\(Q_{\mathrm{C}}\)) and the central jet flow (\(Q_{\mathrm{j}}\)) are kept equal to each other for each flow condition, and are tabulated in standard litres per minute (lpm). The standard condition for the mass flow controllers correspond to 1 atm and \(20^{\circ}C\). These flow conditions are chosen specifically to provide three different \(\mathrm{Re_{p}}\): 200, 300, and 400 for each particle bed configuration, which correspond to the laminar and transitional regime for the SUC packing, and the transitional and turbulent for the BCC packing, according to the experimental studies by Bu et al. (2015). The material properties of the fluid (synthetic air) were determined at \(20^{\circ}\)C and 1 atm (kinematic viscosity = \(1.52\times 10^{-5}\,\mathrm{m^{2}/s}\) ). Figure 1: Experimental packed bed setup with the SUC and BCC arrangements and schematic gas routing. ### Optical setup and methodology for stereo particle imaging velocimetry Figure 3 shows the optical arrangement for the stereo particle imaging velocimetry (SPIV) experiments performed in the present study. The aim of this setup is to determine the velocity vectors of tracer particles over the entire cross-section of the flow at the outlet of the packed bed. It should be noted that the major velocity component of the fluid flow (i.e. along axial \(Z\) direction) is perpendicular to the cross-section. A Nd:YAG double pulse laser (Litron Lasers, nano L PIV, 532 nm, 800 mJ, 4 ns pulse duration, double pulse, 15 Hz pulse frequency, and 5mm beam diameter) is used for illuminating the seeding particles. The laser beam is transformed into a sheet using a cylindrical lens (focal length = -12.7 mm). The laser sheet had a thickness of approx. 5 mm, but it is reduced to 3 mm using a rectangular aperture. The aperture consists of a metallic plate with a rectangular slot of 3 mm width and of 152.5 mm length, corresponding to the width of the square channel. The diverging laser sheet is aligned perpendicular to the exit of the packed bed (i.e. axial \(Z\) direction). The mid-plane of the laser sheet is placed 5.5 mm above the exit of the packed bed. Double-frame images were captured simultaneously using two CCD cameras (Lavision, Imager Pro X, 1600\(\times\)1200 resolution and 7.4\(\times\)7.4 \(\mu\) m\({}^{2}\) pixel size). Scheimpflug adaptors from Lavision and 50 mm objective lenses from Nikon have been mounted in each camera to clearly focus all the seeding particles at the measurement plane. Both the cameras are arranged at an inclination of 35\({}^{\circ}\) with respect to the \(Z\) axis. The apertures of both camera lenses are closed to \(f\)-numbers of 11. This ensures that the depth of field is larger than the laser sheet thickness. 
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Case & Re\({}_{p}\) & \(U_{\text{int}}\) (m/s) & \(U_{\text{spf}}\) (m/s) & \(Q_{C}\) (lpm) & \(Q_{j}\) (lpm) & \(U_{C}\) (m/s) & \(U_{j}\) (m/s) \\ \hline \multicolumn{8}{|c|}{SUC configuration} \\ \hline 1 & 200 & 0.12 & 0.06 & 38.8 & 38.8 & 0.028 & 12.9 \\ 2 & 300 & 0.18 & 0.08 & 58.2 & 58.2 & 0.042 & 19.3 \\ 3 & 400 & 0.24 & 0.11 & 77.6 & 77.6 & 0.056 & 25.7 \\ \hline \multicolumn{8}{|c|}{BCC configuration} \\ \hline 4 & 200 & 0.12 & 0.04 & 24.4 & 24.4 & 0.018 & 8.1 \\ 5 & 300 & 0.18 & 0.05 & 36.6 & 36.6 & 0.026 & 12.1 \\ 6 & 400 & 0.24 & 0.07 & 48.8 & 48.8 & 0.035 & 16.2 \\ \hline \end{tabular} \end{table} Table 1: Operating flow conditions for the SUC and BCC packed bed configurations. Re\({}_{p}\): particle Reynolds number, \(U_{\text{int}}\): interstitial velocity, \(U_{\text{spf}}\): superficial velocity, \(Q_{C}\): volumetric flow rate of the co-flow, \(Q_{j}\): volumetric flow rate of the central jet flow, \(U_{C}\): flow velocity of the co-flow and \(U_{j}\): flow velocity of the central jet at the exit of the central pipe.

Figure 2: 3D-printed packing unit for the: (a) SUC configuration and (b) BCC configuration. Isometric, top, and side view (left to right).

Cameras and laser were synchronized and controlled by a programmable time unit (PTU from LaVision) and Davis 8.4 software (LaVision). In total, for each flow condition, 1000 pairs of double-frame images have been recorded by each camera. The laser pulse delay is varied between \(400-1400\)\(\mu\)s, depending on the Re\({}_{\text{p}}\) of the flow conditions, to prevent the particles from leaving the laser sheet in between the recording of the image frames. The recording of the stereo image pairs is carried out with Davis 8.4. Regarding the camera calibration, a calibration plate is manufactured based on a dot pattern generated by Davis 8.4, where the diameter of each dot is 3 mm and the distance between dots is 7 mm. The pattern was printed and glued to a flat and rigid metal plate. During calibration, the calibration plate is kept inside the square channel, close to the exit of the packed bed and parallel to the plane of the laser sheet. Then, 7 positions, every 0.5 mm, are traversed, with an accuracy of 5 \(\mu\)m, over a range of 3 mm, within the region of interest. Thereafter, the calibration is performed by a mapping function based on a polynomial fit of 3rd order (Soloff et al., 1997). To correct any laser sheet misalignment, the self-calibration procedure by Wieneke (2005) is performed. Finally, the RMS error of the calibration is determined to be 0.05 pixels (7.9 \(\mu\)m in the object plane). The SPIV images are captured near the packed bed and, since the spheres are printed in white colour and their surfaces reflect the laser light, the spheres are always visible in the recorded images. In order to minimize the noise arising from this scattering, the last layer of the SUC and BCC packed beds is painted in matt black. Furthermore, a set of 100 background images is captured in the presence of the laser sheet without any seeding particles and averaged to obtain one average background image. The image pre-processing is carried out using Davis 8.4 software (LaVision, 2017). First, the average background image is subtracted from each instantaneous SPIV image to remove any offset in the instantaneous images.
It is observed that even after background subtraction, some of the sphere surfaces were still visible, because there is some additional scattering from the seeding particles. To reduce the effect of such reflections, Davis 8.4 offers a sliding background subtraction and a particle intensity normalization as pre-processing operations, to minimize local intensity fluctuations in the background and to correct local particle intensity fluctuations, respectively. Those tools are applied using a local scale length of 7 pixels.

Figure 3: The optical setup for stereo particle image velocimetry.

Regarding the vector calculations, a multi-pass approach is adopted. First, two iterations were performed with an interrogation window with a size of 64\(\times\)64 pixels (75% overlap), and the remaining four passes had an interrogation window size of 16\(\times\)16 pixels (50 % overlap), which leads to a spatial resolution of 8 pixels (1.24 mm in the object plane). The uncertainty in the instantaneous velocity components is determined by the correlation statistics method by Wieneke (2015), while the uncertainty propagated to the mean velocity field and other statistical parameters is evaluated using the method by Sciacchitano and Wieneke (2016). Thus, the uncertainty in the mean axial velocity is 0.5 - 2 %, while the uncertainty in the standard deviation of the axial velocity is around 2 % over the entire plane of measurement. Additionally, Table 2 describes all the important parameters used to collect and process the SPIV images. Table 3 shows the volumetric flow rate calculated using the SPIV measurements for the SUC and BCC configurations and the respective percentage difference with respect to the actual volumetric flow rate for all \(\text{Re}_{\text{p}}\). It is observed that for the SUC configuration, the difference in volumetric flow rate is 9-14 %, while for BCC it is 15-20 %. Additionally, with increasing \(\text{Re}_{\text{p}}\), the difference increases for both SUC and BCC. This suggests that, when the flow is close to laminar conditions, for example in the SUC packing at \(\text{Re}_{\text{p}}=200\), the difference in volumetric flow rate is up to 10 %. On the other hand, when turbulence is induced in the flow and higher fluctuations in velocity arise, the difference in volumetric flow rate goes up to 20 %.

## 3 Numerical method

The gaseous flow inside the fixed packed bed reactor is considered an incompressible Newtonian fluid with constant fluid properties and is therefore governed by the Navier-Stokes equations, \[\nabla\cdot\mathbf{u} =0\,, \tag{1}\] \[\rho\left(\frac{\partial\mathbf{u}}{\partial t}+\nabla\cdot(\mathbf{u}\otimes\mathbf{u})\right) =-\nabla p+\nabla\cdot\mathbf{\tau}+\rho\mathbf{g}+\mathbf{s}\,, \tag{2}\] which are discretized and solved on an Eulerian mesh with adaptive mesh refinement (AMR). \(\rho\) is the fluid density, \(\mathbf{u}\) the velocity vector, \(p\) the pressure, \(\mathbf{\tau}\) the viscous stress tensor, \(\mathbf{g}\) the gravity acceleration vector, and \(\mathbf{s}\) represents a momentum source term arising from the presence of the immersed boundaries. The Navier-Stokes equations are discretized and solved using a finite-volume framework with a collocated variable arrangement in a coupled, pressure-based manner with second-order accuracy in space and time (Denner and van Wachem, 2014; Denner et al., 2020).
\begin{table} \begin{tabular}{|l|l|} \hline Parameter & Value \\ \hline Maximum out of plane velocity & \(\approx\) 1.5 m/s \\ Maximum in-plane velocity & \(\approx\) 0.3 m/s \\ Measurement area size & 152.5 mm \(\times\) 152.5 mm \\ Observation distance & 1100 mm \\ Pixel size in the sensor & 7.4 \(\mu\)m \\ Magnification & 0.0472 \\ \(f\)-number & 11 \\ Particle image size & \(\approx\) 2 pixels \\ Seeding particle diameter & less than 1 \(\mu\)m \\ \hline \end{tabular} \end{table} Table 2: Parameters for the experimental SPIV technique. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Case & \(\text{Re}_{\text{p}}\) & Actual volume & \multicolumn{2}{c|}{volumetric flow rate} & percentage \\ & & flow rate (lpm) & from SPIV technique (lpm) & difference \\ \hline \multicolumn{4}{|c|}{SUC configuration} \\ \hline 1 & 200 & 77.6 & 70.3 & 9.4 \% \\ 2 & 300 & 116.4 & 105.7 & 9.2 \% \\ 3 & 400 & 155.2 & 133.2 & 14.1 \% \\ \hline \multicolumn{4}{|c|}{BCC configuration} \\ \hline 4 & 200 & 48.8 & 41.5 & 14.9 \% \\ 5 & 300 & 73.2 & 60.2 & 17.7 \% \\ 6 & 400 & 97.7 & 77.2 & 20.9 \% \\ \hline \end{tabular} \end{table} Table 3: Comparison of volumetric flow rates between the actual conditions and the values obtained from the SPIV measurements. The particle surface is discretized by uniformly distributed Lagrangian markers \(\mathbf{X}_{\mathbf{j}}\,,j\in\{1,...,N_{L}\}\), with an optimal distance between the markers of the order of the Eulerian fluid mesh spacing (Zhou and Balachandar, 2021). For the computation of the Lagrangian feedback force, the momentum equation is reformulated as \[\mathbf{F}_{\mathbf{j}}^{n}=\frac{\rho}{\Delta t}\left(\mathbf{U}_{\mathrm{IB},j} ^{n}-\mathbf{U}_{\mathrm{J}}^{n-1}\right)+\mathbf{C}_{\mathrm{J}}^{n}+\mathbf{ B}_{\mathbf{j}}^{n}-\mathbf{D}_{\mathrm{J}}^{n}-\rho\mathbf{g}\,, \tag{3}\] for each Lagrangian marker \(j\), the direct forcing approach by Abdol Azis et al. (2019) is applied. The super-script \(n\) denotes the time level at which the quantities are to be evaluated, \(\mathbf{U}_{\mathrm{IB},j}\) is the velocity vector of the \(j\)-th Lagrangian marker, and \(\mathbf{U}_{\mathrm{J}}\), \(\mathbf{C}_{\mathrm{J}}\), \(\mathbf{B}_{\mathrm{J}}\), and \(\mathbf{D}_{\mathrm{J}}\), are the interpolated Eulerian velocity, advection, pressure, and diffusion terms of the governing momentum equations, respectively. The previous equation can be further simplified and the momentum terms can be further summarized as \[\mathbf{F}_{\mathrm{J}}^{n}=\frac{\rho}{\Delta t}\left(\mathbf{U}_{\mathrm{IB},j}^{n}-\mathbf{\hat{U}}_{\mathrm{J}}\right)+\mathbf{\hat{F}}_{\mathrm{J}}^{n}\,. \tag{4}\] For the coupling of the Lagrangian forces with the momentum equation, the deferred fluid velocity \(\mathbf{\hat{U}}_{\mathrm{J}}\) and the deferred momentum terms cumulated in \(\mathbf{\hat{F}}_{\mathrm{J}}^{n}\) of time level \(n\) are interpolated from the Eulerian mesh by an adequate interpolation operator. Such a discrete, compact interpolation operator interpolates the fluid velocities within a symmetric stencil with a certain radius (usually a few fluid cell spacings) to the Lagrangian markers and the interpolated velocities are therefore referred to as the Lagrangian velocity (Peskin, 1972). 
The interpolation of an arbitrary fluid variable \(\gamma\) to the position of a Lagrangian marker \(j\) reads as \[\Gamma_{\mathrm{J}}=\sum_{i\in\delta_{\mathrm{J}}}\phi_{i,j}\gamma_{i}\,, \tag{5}\] where \(\Gamma_{\mathrm{J}}\) is the fluid variable interpolated to the position of Lagrangian marker \(j\), \(\delta_{\mathrm{J}}\) is the set of Eulerian cells in the interpolation support of the \(j\)-th Lagrangian marker, \(\phi_{i,j}\) is the discrete interpolation weight associated with the \(i\)-th Eulerian cell in the support stencil, and \(\gamma_{i}\) is the fluid variable for that Eulerian cell \(i\). The discrete interpolation weight \(\phi_{i,j}\) is based on a normalized kernel function \(\phi\ :\ \mathbb{R}^{3}\rightarrow\mathbb{R}\) and can be calculated as \[\phi_{i,j}=\phi\left(\mathbf{x}_{i}-\mathbf{X}_{\mathrm{J}}\right)V_{i}\,, \tag{6}\] where \(\mathbf{x}_{i}\) is the centroid of the \(i\)-th Eulerian fluid cell in the support stencil, \(\mathbf{X}_{\mathrm{J}}\) is the position of the \(j\)-th Lagrangian marker and \(V_{i}\) is the volume of the fluid cell. Throughout this work, a support stencil with a radius of four times the fluid mesh spacing is used. This interpolation stencil numerically thickens the particle-fluid interface over a few fluid cells across the particle surface. Henceforth, the required Lagrangian force to satisfy the no-slip condition at the particle surface is calculated from the difference between the interpolated Lagrangian velocity and the desired velocity at the surface. The desired velocity typically arises from the rigid body motion of the solid object (Uhlmann, 2005). After the interpolation step, the Lagrangian forces are computed at the location of each Lagrangian marker, see Eq. (4), followed by spreading the Lagrangian forces, with the same stencil as used for the interpolation, back to the fluid cells in the support region. On the fluid mesh, the force is applied as a volumetric source term in the discretized equations governing the fluid flow. Depending on the implicitness of the above procedure, the procedure of interpolation and force computation may require a number of iterations before the accurate no-slip boundary condition at the particle surface is obtained. The spreading of an arbitrary Lagrangian variable \(\Gamma\) onto the Eulerian mesh follows as \[\gamma_{i}=\sum_{j\in\psi_{i}}\phi_{i,j}W_{\mathrm{J}}\Gamma_{\mathrm{J}}\,, \tag{7}\] where \(W_{\mathrm{J}}\) is the spreading weight associated with the \(j\)-th Lagrangian marker, \(\psi_{i}\) is the set of Lagrangian markers whose spreading support stencil contains the \(i\)-th Eulerian cell, and \(\phi_{i,j}\) is the same as in Eq. (6). The spreading weight \(W_{\mathrm{J}}\) is a non-physical quantity, and many definitions have been used and discussed in the literature (Uhlmann, 2005; Abdol Azis et al., 2019; Pinelli et al., 2010; Zhou et al., 2019). For an optimal compromise between stability and accuracy of the no-slip boundary condition enforcement at the particle surface, the spreading weights for the IBM in this work are treated with a stability analysis (Zhou and Balachandar, 2021; Cheron et al., 2023). A qualitative illustration of the interpolation and spreading stencil in 2D for the Lagrangian markers is also given in Figure 3(b). For the three white coloured Lagrangian markers, the support stencil is shown by circles. 
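To make the interpolation, direct forcing, and spreading steps of Eqs. (3)-(7) concrete, the following is a minimal Python sketch of one explicit sweep over the Lagrangian markers. The Gaussian kernel, the normalisation of the interpolation weights, the unit spreading weights, and the omission of the deferred advection, pressure, and diffusion terms are simplifying assumptions for illustration only; the solver used in this work employs the stability-optimised weights of Zhou and Balachandar (2021) and Cheron et al. (2023).

```python
import numpy as np

def kernel(r, h):
    # Regularised kernel phi(x_i - X_j); a Gaussian is assumed here for brevity.
    return np.exp(-0.5 * (r / h) ** 2)

def weights(cell_centres, cell_volumes, X_j, h, radius):
    # phi_{i,j} = phi(x_i - X_j) * V_i within the support stencil (Eq. 6),
    # normalised so that a constant field is interpolated exactly.
    r = np.linalg.norm(cell_centres - X_j, axis=1)
    w = np.where(r <= radius, kernel(r, h) * cell_volumes, 0.0)
    return w / w.sum()

def ibm_sweep(u, cell_centres, cell_volumes, markers, u_desired, rho, dt, h, radius):
    """One interpolation/forcing/spreading sweep (Eqs. 3-7, simplified)."""
    s = np.zeros_like(u)                         # Eulerian momentum source term
    for j, X_j in enumerate(markers):
        w = weights(cell_centres, cell_volumes, X_j, h, radius)
        U_j = w @ u                              # interpolated Lagrangian velocity (Eq. 5)
        F_j = rho / dt * (u_desired[j] - U_j)    # direct-forcing Lagrangian force
        s += np.outer(w, F_j)                    # spread the force back (Eq. 7), W_j = 1 assumed
    return s

# Hypothetical toy usage: 4 fluid cells on a line, one marker, no-slip target velocity.
cells = np.array([[0.0, 0, 0], [1.0, 0, 0], [2.0, 0, 0], [3.0, 0, 0]])
vols = np.ones(4)
u = np.tile([1.0, 0.0, 0.0], (4, 1))
print(ibm_sweep(u, cells, vols, [np.array([1.5, 0, 0])], [np.zeros(3)],
                rho=1.2, dt=1e-3, h=1.0, radius=4.0))
```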
The stencils are symmetric and the Eulerian fluid mesh cells within these stencils contribute to the interpolation to the corresponding markers and are the ones to which the Lagrangian forces are spread onto. A more detailed step by step derivation and implementation of the IBM and its interpolation and spreading is given, for instance, in Cheron et al. (2023). ## 4 Numerical setup The general numerical setup is adopted from the experimental configuration introduced in section 2. Based on the experimental conditions of atmospheric pressure and ambient temperature conditions of \(\sim 20^{\circ}\)C, the fluid properties for air are chosen to be \(1.204\,\mathrm{kg/m^{3}}\) for the density and \(1.82\cdot 10^{-5}\,\mathrm{kg/m\,s}\) for the dynamic viscosity. As can be seen in Figure (a)a, the overall numerical domain size of the fixed packed bed reactor is \([0.1525\times 0.1525\times 0.84]\,\mathrm{m^{3}}\) for the BCC particle configuration, and \([0.1525\times 0.1525\times 0.90]\,\mathrm{m^{3}}\) for the SUC particle configuration. Both reactors have a velocity inlet (Dirichlet boundary condition for velocity, Neumann boundary condition for pressure) at the bottom, a pressure outlet (Neumann boundary condition for velocity, Dirichlet boundary condition for pressure) at the top and no-slip walls (Dirichlet boundary condition for velocity, Neumann boundary condition for pressure) at the sides as domain boundary conditions. Compared to the experimental setup, the numerical domain starts directly above the porous plate with the velocity inlet and ends at the outlet of the extended walls with the pressure outlet. The modelling of the velocity inlet is divided into two sections, the centre jet region and the co-flow region. A cell face at the inlet corresponds to the centre jet region if its area intersects with the \(8\,\mathrm{mm}\) jet hole radius from the exact centre of the inlet plane. All other inlet cell faces are considered as co-flow inlet faces. Therefore, the inlet boundary condition for the velocity is modelled as \[u_{\mathrm{inlet}}=A_{\mathrm{fraction}}\cdot u_{\mathrm{j}}+(1-A_{\mathrm{ fraction}})\cdot u_{\mathrm{C}}\,, \tag{8}\] where \(A_{\mathrm{fraction}}=A_{\mathrm{inside}}/A_{\mathrm{cell}}\) is the partial area of the inlet cell face which falls inside the centre jet area, \(A_{\mathrm{cell}}\) is the total area of the cell face, \(u_{\mathrm{j}}\) and \(u_{\mathrm{C}}\) are jet and co-flow inlet velocities, given in Table 1. Since all variables are known a priori, this leads to an exact inlet mass flow compared to the dictated mass flow from experimental measurements. Figure 4: General numerical setup of the packed particle bed reactor with: (a) the numerical domain, (b) an example of the measuring plane in the region of interest above the packing and a qualitative illustration of the Lagrangian discretization over an Eulerian fluid mesh for the IBM, and (c) an example of the cross-section of the adaptive refined mesh. It should be noted here, that the inlet conditions for the numerical setup are precisely symmetrical, with no variations in space and time, which is not possible to be completely ensured for the experimental setup. Furthermore, for the numerical setup, the jet inlet pipe has no predefined wall thickness. It is given a slighlty artificial wall thickness based on \(A_{\text{fraction}}\) for the cell faces in the transitional area between the jet region and the co-flow region, which may not perfectly match the pipe thickness as used in the experiment. 
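As an illustration of Eq. (8), the following Python sketch assembles the blended inlet velocity for a set of square inlet cell faces. The sub-sampling used to estimate \(A_{\text{fraction}}\), the jet centred at the origin, the assumed jet radius, and the face layout are assumptions for illustration; how the actual solver evaluates the intersection area is not specified here.

```python
import numpy as np

def inlet_velocity(face_centres, face_areas, r_jet, u_jet, u_coflow, n_sub=8):
    """Blended inlet velocity per cell face, Eq. (8): u = A_f*u_j + (1-A_f)*u_c.
    A_f is estimated by sub-sampling each (square) face with n_sub x n_sub points."""
    dx = np.sqrt(face_areas)                      # assume square faces
    u = np.empty(len(face_centres))
    for k, (xc, yc) in enumerate(face_centres):
        off = (np.arange(n_sub) + 0.5) / n_sub - 0.5
        xs, ys = np.meshgrid(xc + off * dx[k], yc + off * dx[k])
        inside = xs**2 + ys**2 <= r_jet**2        # jet assumed centred at (0, 0)
        A_fraction = inside.mean()
        u[k] = A_fraction * u_jet + (1.0 - A_fraction) * u_coflow
    return u

# Hypothetical example: two 5 mm x 5 mm faces, one at the centre, one off-axis,
# with a jet radius of 4 mm (an assumption) and the Case 1 velocities of Table 1.
centres = np.array([[0.0, 0.0], [0.05, 0.0]])
areas = np.full(2, 0.005**2)
print(inlet_velocity(centres, areas, r_jet=0.004, u_jet=12.9, u_coflow=0.028))
```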
Furthermore, the velocity profile of the jet is prescribed uniformly and not parabolic. The SUC and BCC particle packing configurations are modelled according to the information of the 3D-printed packings from the experimental setup, although the surfaces of the particles are assumed smooth. For the SUC packing, the overlap between particles of adjacent layers is 0.1 mm and the overlap between particles of adjacent layers for the BCC packing is 0.88 mm. Therefore, the discrepancy between the heights, respectively the volume fractions, of the experimental and numerical packings is below 1 %. Here it is noted that the step on which the packings rest in the experimental setup is not modelled at all in the numerical setup. An example of the adapted and refined Eulerian fluid mesh for the numerical IBM simulations is shown in Figure 4c. Within the interpolation/spreading stencil around the particle surfaces, in the region of interest above the fixed packed bed, and around the inlet jet, the cell width is set to satisfy the desired particle diameter to fluid cell edge length (\(d_{\text{p}}/\Delta x\)) ratio, which is set to 26 in this work. This is in accordance with our earlier findings (Gorges et al., 2024). In the remaining interstices and at the outlet, the cell width is double the size. For all simulations, the Courant-Friedrichs-Lewy number (CFL) is fixed in the range of 0.25 to 0.30. After an initial flow phase for the formation of the flow structures, and to ensure independence of the starting conditions, the velocity field is recorded in intervals with a frequency of 20 Hz with a total period of approximately 2 s of physical time. The results which are compared are then the average values of all previously stored velocity fields. The region of interest for the comparisons is a 3 mm high volume at a height of 4\(-\)7 mm above the respective particle packing. This region of interest is then divided into a grid with the approximate cell size of [\(1.25\times 1.25\times 3.00\)] mm for a more accurate comparison with the experimental data. All time-averaged velocity data within one of these post-processing grid cells are then averaged to a single post-processing grid cell value. ## 5 Results and discussion In this section, the experimental and numerical results are compared and discussed. Velocity contour and line plots and probability distributions of averaged axial velocity are presented and discussed for the SUC and BCC configurations for \(\text{Re}_{\text{p}}\) = 200, 300 and 400. ### Flow characteristics for the SUC packed bed Figure 5 presents the experimental and numerical contour plots of the averaged axial velocities for the SUC packed bed at three flow conditions, characterized by \(\text{Re}_{\text{p}}\) = 200, 300, and 400. For the SUC configuration, there are 36 pores (see Figure 2a) in between the spheres. Through each pore, the airflow emerges in the form of a small jet. It is noted that in this research work, the jet flow through the central pipe in the experimental setup is referred to as 'central jet', while the flow through each pore is referred to as 'pore jet' or 'jet'. It can be observed from Figure 5, that the central jet flow is able to disperse in both lateral directions, and it reaches the pores near the periphery of the packed bed. First, the observations from contour plots from the experimental data, shown in Figures 5b, 5d, and 5f are discussed. 
The velocity of the jets is maximum for the four central pores, near to which the central jet is placed at the bottom of the setup. These jets from the four central pores are highlighted with a dashed rectangle in the contour plot from the experimental data for \(\text{Re}_{\text{p}}\) = 200 (see Figure 5b). This indicates that the influence of the presence of the central jet is preserved, even after the flow has passed through 18 layers of the SUC particle bed. It is observed that, even though the flow has laterally dispersed to the pores at the periphery, the axial velocity through the pores differs considerably from the central to the peripheral region of the packed particle bed. Moreover, as expected, the velocity of each individual pore jet is highest at its centre and gradually reduces towards its periphery (Pope, 2000). The pore jets appear elliptical near the central region of the packed bed, while they are completely transformed into a rectangular shape near the periphery of the packed bed. However, before transforming into rectangular pore jets, there is a region where they appear as distorted ellipses with their maximum velocities reduced by around 50% compared to the pore jets from the four central pores. The group of elliptical pore jets is highlighted with dashed rectangles for \(\text{Re}_{\text{p}}\) = 300 and 400 (see Figures 5d and 5f). This highlights that, even though the physical size and structure of each pore is identical, the local inlet flow conditions at the pores eventually dictate the velocity magnitude of each jet.

Figure 5: Comparison of numerical and experimental contour plots for averaged axial velocity (\(\overline{V_{z}}\)) for SUC packing at (a,b) \(\mathrm{Re_{p}}=200\), (c,d) \(\mathrm{Re_{p}}=300\), and (e,f) \(\mathrm{Re_{p}}=400\).

Figure 6: Comparison of the numerical and experimental averaged axial velocity (\(\overline{V_{z}}\)) for the SUC packing along the red line (periphery of packed bed) and black line (centre of packed bed) marked in (a,b) for (c,d) \(\mathrm{Re_{p}}=200\), (e,f) \(\mathrm{Re_{p}}=300\), and (g,h) \(\mathrm{Re_{p}}=400\).

Also, it can be noted qualitatively that the jet structure through each pore remains overall similar for all \(\mathrm{Re_{p}}\) conditions. The flow structure is observed to be overall symmetric. Although the symmetry of the flow structure is almost perfect about the \(X\) axis, the profiles appear slightly asymmetric about the \(Y\) axis. For instance, comparing the jet structure of the four central pores, it can be observed that the central pores towards the right have a relatively large spatial extent of highest velocities compared to the central pores towards the left of the particle bed. Although it is ensured that the central pipe and the entire setup are aligned as straight as possible in the vertical direction, the flow structures that cross through the layers of the packed bed are very sensitive to the precise alignment of the setup and to any minute offset between the placement of the packed bed units. In the numerical results, the pore jets are not elliptical, but seem to preserve the shape of the pore from which they emerge. This is most clearly seen for the pore jets in the central region of the packed bed (see Figures 5a, 5c, and 5e).
The pore jets near the periphery of the packed bed tend to have a rectangular shape in both the experiments and the simulations. The overall pore jet structures from all the pores remain similar for different \(\mathrm{Re_{p}}\), which is predicted by the numerics as well as the experiments. The possible reason behind the discrepancy in the pore jet structure between the simulation and the experiment might be explained as follows: The region of interest in the present study is 152.5 \(\times\) 152.5 mm, which is relatively large, and the spatial resolution is rather low (i.e. interrogation window size is 1.25 mm) compared to typical PIV studies for packed beds (Khayamyan et al., 2017; Patil and Liburdy, 2013a; Neeraj et al., 2023). This is a critical issue in the experiments, especially at the jets boundaries, where velocity gradients are large so that variations in the velocity field over a small length scale are prone to average the velocity field. Hence, the sharpness of fluid flow structures observed in the contour plots obtained from the experiments tends to diffuse, whereas in the numerical predictions this is not the case, at least not to the same extent. Figure 6 shows the comparison between the numerical and experimental averaged velocity profile along the centre and periphery of the packed bed for all considered \(\mathrm{Re_{p}}\), indicated by black and red lines in Figures 6a and 6b, respectively. It can be observed that the maximum velocities at the periphery of the packed bed (see Figures 6c, 6e, and 6g), marked by red lines in Figures 6a and 6b, remain almost constant for all the pores at all \(\mathrm{Re_{p}}\). This is similar for both experimental and numerical studies. However, the maximum velocities along the centreline of the packing (see Figures 6d, 6f and 6h), as indicated by the black lines in Figures 6a and 6b vary considerably for all pores. Some interesting observations about the distribution of the average velocity can be made from the experimental results. For the experiments at all considered \(\mathrm{Re_{p}}\), the maximum velocity decreases and then increases again as one moves from central pores to peripheral pores, see Figures 6d, 6f, and 6h. A gradual reduction in the maximum velocity from the central to the peripheral pores is expected, as the velocities of the central jet decrease near the sides (Pope, 2000), which eventually disperse towards the peripheral pores. The reason behind the increase of this maximum velocity at the periphery might be the complex interaction between the fluid flow and the different layers of the packed particle bed within the interstitial spaces of different spheres. This highlights that the flow near the peripheral pores experiences a contraction in the actual area available for the fluid flow, and hence the fluid velocity tends to increase. The numerical results (the black lines) also show a similar behaviour, although not as pronounced as observed in experiments. For all considered \(\mathrm{Re_{p}}\), the common observation is that the average fluid velocity is usually slightly lower in the experiments compared to predictions from the simulations. For the case with \(\mathrm{Re_{p}}=200\), the fluid velocity compares very well between the experiment and the simulation for the central pores, see Figure 6d. 
However, for the cases with \(\mathrm{Re_{p}}=300\) and \(400\), the deviation in the average fluid velocity magnitude between the experiments and the simulations increases together with the turbulence level in the flow. The deviation between the experimental and numerical results are attributed to the following reasons: The surfaces of the spheres in the experiments are relatively rough due to their production method, whereas the surfaces of the spheres in the simulations are completely smooth. This difference leads to a different behaviour of the boundary layers on the particle surfaces, which can lead to a reduction in velocity magnitude in case of experiments. Another reason why the velocity magnitudes are consistently higher in the simulations compared to the experimental results is summarized in Table 3. In this table, the volumetric flow rate obtained by SPIV is 9 - 14 % lower than the flow rate based on the particle Reynolds number. This difference in volumetric flow rate does not exist in the numerical simulation. Figure 7 shows the fluid velocity probability distributions for the velocity profiles shown in Figure 5. The probability is obtained by dividing the corresponding number of fluid velocity vectors by the total number of velocity vectors. These velocity distribution plots provide quantitative information about the distribution of the mean velocities in the measurement plane. It can be observed that there is good agreement between the numerical and experimental results for all considered \(\mathrm{Re_{p}}\), especially for \(\mathrm{Re_{p}}=200\). It is interesting to note, that even though flow structures and velocity magnitudes at some spatial locations differ between the experimental and simulation results (see Figures 5 and 6), a good agreement is achieved as far as the distribution of the average axial velocity at the entire plane of measurement is considered. For all considered \(\mathrm{Re_{p}}\), the distributions peak at 0 m/s, and indicate that nearly 20-35% of the velocity vectors in the measurement plane have zero velocity. The probability is around 5% or less for most of the non-zero velocities values in the flow field. The maximum average velocities increase from around 0.9 m/s to 1.5 m/s for the experiments and simulations as \(\mathrm{Re_{p}}\) increases. It can be observed that the width of the probability distribution plots remain similar as \(\mathrm{Re_{p}}\) increases. For instance, this can be observed for a probability of around 2.5 % where the velocities are in the narrow range of -0.05 to 0.1 m/s for all \(\mathrm{Re_{p}}\). Therefore, the higher range of velocities are significantly affected when the \(\mathrm{Re_{p}}\) is increased. ### Flow characteristics for the BCC packed bed Figure 8 compares the contour plots of the average axial fluid velocity between the experiments and simulations for the BCC packed beds for \(\mathrm{Re_{p}}\) = 200, 300, and 400. First, the observations from contour plots from the experimental data, shown in Figures 8b, 8d, and 8f are discussed. It is noted that there are 60 pores in total for the BCC packed bed, and that the pore size is smaller compared to the SUC packed bed, see Figure 2. Out of the 60 pore jets, 56 jets can be clearly observed in the contour plot, while the remaining pore jets, at the four corners of the packed bed, have significant interactions with the adjacent jets and thus overlapped with them. 
### Flow characteristics for the BCC packed bed

Figure 8 compares the contour plots of the average axial fluid velocity between the experiments and simulations for the BCC packed beds for \(\mathrm{Re_{p}}\) = 200, 300, and 400. First, the observations from the contour plots of the experimental data, shown in Figures 8b, 8d, and 8f, are discussed. It is noted that there are 60 pores in total for the BCC packed bed, and that the pore size is smaller compared to the SUC packed bed, see Figure 2. Out of the 60 pore jets, 56 jets can be clearly observed in the contour plot, while the remaining pore jets, at the four corners of the packed bed, have significant interactions with the adjacent jets and thus overlap with them. For the case with \(\mathrm{Re_{p}}\) = 400, the maximum velocity of the pore jets is observed to be comparable between the central and peripheral pores, but for \(\mathrm{Re_{p}}\) = 200 and 300, differences in maximum velocities are observed. For all \(\mathrm{Re_{p}}\), it is observed qualitatively that the spatial extent of higher pore jet velocities is larger for the peripheral pores as compared to the central pores. This suggests that for the same \(\mathrm{Re_{p}}\), the fluid flow disperses significantly in the lateral directions in the BCC packing, which is less pronounced in the SUC packing. It is noted that there are complex interactions of the fluid with the spheres of the packed bed for the BCC packing, as the fluid penetrates all full and weak layers of the packing and also disperses in the lateral \(X\) and \(Y\) directions. To highlight the different features of the jets, dashed rectangles are drawn to focus the attention on the contour plot for \(\mathrm{Re_{p}}\) = 300 in Figure 8. It is noted that, along a particular row or column, elliptical pore jets are oriented in the same direction, except for the rows or columns at the periphery. For instance, for the first dotted rectangle from the top (orange), all the elliptical pore jets have their major axis approximately oriented in the vertical direction, i.e. along the \(Y\) axis. The second dashed rectangle (red) shows the pore jets oriented with the horizontal major axis, i.e. along the \(X\) axis. The jet structure shown by the first and second dashed rectangles is observed for every consecutive row or column. The third dashed rectangle (white) groups the jets at the periphery, and it contains alternating jets whose major axes are oriented in both directions. Similar trends of the elliptical pore jet structures and their orientation are observed for \(\mathrm{Re_{p}}\) = 200 and 400 as well. For the case with the SUC packing, the pore jet structures, and especially their orientation, do not vary considerably for the different pores. However, the BCC packing produces a complex arrangement of jets, even though the considered \(\mathrm{Re_{p}}\) is the same. For \(\mathrm{Re_{p}}\) = 200, not all elliptical pore jets are distinctly visible along a particular row or column, which may be due to the lower momentum of the pore jets compared to the cases with \(\mathrm{Re_{p}}\) = 300 and 400. Hence, the pore jets may be more susceptible to surrounding perturbations in the velocity field as they eject out of the packing. In general, it is observed that the main patterns of the flow structures are similar between the simulations and the experiments. For instance, the area indicated with the dashed rectangle 'A' in Figures 8a and 8b for \(\mathrm{Re_{p}}\) = 200 shows very similar features. This comparison is also very good for the cases with \(\mathrm{Re_{p}}\) = 300 and 400, but \(\mathrm{Re_{p}}\) = 200 is chosen as an example here. In both the experiments and the simulations, there are five rows over which the flow is distributed. However, a single elliptical pore jet appears for each row in the experiments, while two distinct pore jets are visible for each row in the numerical studies. This can be seen in the dashed rectangles 'B' and 'C' in the figures.
Similar features of the pore jets' structures in the simulations and experiments are also shown by the circle 'D' in the figure. Due to the lower spatial resolution in the SPIV experiments compared to the simulations, multiple pore jets are overlapping and appear to form a single pore jet in the experimental results. This was not observed in the results obtained for the case with the SUC packing, as the distance between the centres of the pores was on the order of the diameter of the spheres of the packing. But for the cases with the BCC packing, this distance is reduced to 0.70 times the diameter of the sphere, see Figure 2 for a qualitative comparison. Furthermore, the size of the individual pore jets itself is larger in the cases with the SUC packing as compared to the pore jets from the BCC packing, compare Figure 8 for BCC and Figure 5 for SUC. The effect of this can be observed, for instance, in the pore jet flow in the simulation results near the wall, as shown by dashed rectangle 'E'. Such pore jet flows near the wall tend to get averaged out at the corresponding experimental locations, due to the lower spatial resolution in the experiments compared to the simulations. Figure 9 shows the comparison of the average axial velocity between the simulation and the experiments along the black line (i.e. the centre) and the red line (i.e. the periphery) of the packed bed (see Figures 9a and 9b), for the case with the BCC structured particle packing. It is observed that the average velocities along the red line at the periphery of the packed bed show overall a good comparison between experiments and simulations for the cases with \(\mathrm{Re_{p}}\) = 300 and 400. In contrast, for \(\mathrm{Re_{p}}\) = 200, there is some deviation between the experiments and the simulations, especially for some locations such as \(X\approx\) -40 or 40 mm. The velocity profile along the black line at the centre of the packed bed shows a larger difference between the velocities from the experimental and the numerical results at the central pores. For the peripheral pores, the comparison between the experiments and the simulations is generally very good. It is observed that it is difficult to obtain a perfect agreement between the experiments and the simulations at all locations for the BCC packing, as the pore jets' structures vary considerably between the experiments and the simulations, as seen earlier in Figure 8. A possible reason for some discrepancies observed between the experiments and the simulations for the BCC packing can also be attributed to the surface finish of the particles in the particle packing. As mentioned above, the sphere surfaces are relatively rough in the experiments, while they are completely smooth in the simulations. The influence of surface roughness is amplified in the cases with the BCC packing compared to the SUC packing, as the number of spheres in the former is higher, and the distance between the spheres, i.e. the size of the interstitial pores, is lower in the BCC particle configuration.

Figure 8: Comparison of numerical and experimental contour plots of averaged axial velocity (\(\overline{V_{z}}\)) for BCC packing at (a,b) \(\mathrm{Re_{p}}=200\), (c,d) \(\mathrm{Re_{p}}=300\), and (e,f) \(\mathrm{Re_{p}}=400\).
Figure 9: Comparison of numerical and experimental averaged axial velocity (\(\overline{V}_{z}\)) for the BCC packing along the red line (periphery of packed bed) and the black line (centre of packed bed) marked in (a,b), for (c,d) \(\mathrm{Re}_{p}=200\), (e,f) \(\mathrm{Re}_{p}=300\), and (g,h) \(\mathrm{Re}_{p}=400\).

Hence, the fluid flow in the experiments experiences a slightly different effect in the boundary layers at the surfaces of the spheres compared to the simulations, and this difference is expected to be more pronounced in the case with the BCC packing than in the case with the SUC packing. Moreover, the dispersion of the fluid flow through the layers of the packed bed in case of the BCC configuration involves higher complexity, due to the more complex arrangement of the spheres and the subsequent flow structures in the packing. The surface roughness also likely triggers more velocity fluctuations in the fluid flow, which leads to the generation of vortices and, hence, better mixing within the fluid flow. Accordingly, as can be seen in Figure 11, the standard deviation of the axial fluid velocities is found to be higher for the experiments compared to the simulations. Because of these reasons, the BCC configuration is likely to be more sensitive to the surface roughness of the spheres in the packing, which leads to the observed differences in flow structure and velocities at the exit of the packed bed between the experiments and simulations. Moreover, the relatively low spatial resolution of the presented SPIV results and the underestimation of the volumetric flow rate by SPIV (see Table 3) might also contribute to the observed differences. As shown in Figure 6, the boundary or peripheral pore jets in the simulations tend to have a slightly higher velocity magnitude compared to the experiments, especially for the case with \(\mathrm{Re_{p}}\) = 400. The variation in results could also appear due to the difference in operating time between the simulations and the experiments. The experimental images were captured over a period of 133 seconds, while the simulations ran for a period of 2 seconds of physical time. Figure 10 shows the probability distribution of the average velocities for the considered values of \(\mathrm{Re_{p}}\). Similar to the SUC configuration, the distribution shows a peak at 0 m/s, with probabilities in the range of 5-20 % for all \(\mathrm{Re_{p}}\). It can be observed that the comparisons of the velocity distribution between the experiments and the simulations are better for \(\mathrm{Re_{p}}\) = 200 than for the two cases with \(\mathrm{Re_{p}}\) = 300 and 400. The positive axial velocities at which the probability reaches almost zero show a good agreement between the simulations and experiments for all considered \(\mathrm{Re_{p}}\). For instance, this velocity increases from around 0.2 to 0.4 m/s as \(\mathrm{Re_{p}}\) is increased. Moreover, in contrast to the SUC configuration, the width of the probability distribution slightly increases for lower velocities with increasing \(\mathrm{Re_{p}}\). For instance, this is clearly observed by comparing the width of the distribution at a probability of 2.5 %. The velocities vary in the range from -0.05 to 0.05 m/s for \(\mathrm{Re_{p}}\) = 200, and they vary from -0.05 to 0.1 m/s for \(\mathrm{Re_{p}}\) = 400. This shows that, when increasing the value of \(\mathrm{Re_{p}}\) in case of the BCC packing, both the lower and the higher velocities of the flow field are increased.
This is confirmed by the experiments as well as the simulations. As can be seen from the contour plots in Figure 8, small negative velocities exist near the boundaries of the elliptically shaped pore jets. When \(\mathrm{Re_{p}}\) is increased, the magnitude of the negative velocities tends to increase and the distribution plot becomes slightly broader, as can be clearly observed in Figure 10. This implies that, as the pore jets' axial velocities increase with increasing \(\mathrm{Re_{p}}\), they induce a relatively strong re-circulation zone near the boundaries of the pore jets. The magnitude of the negative axial velocities is not observed to depend on \(\mathrm{Re_{p}}\) for the SUC configuration, as seen from Figure 7. For this case, the minimum negative velocity remains around -0.1 m/s for all \(\mathrm{Re_{p}}\). This is an interesting observation, confirmed by both the experiments and the simulations, even though the maximum axial velocity is always higher for the case of the SUC configuration as compared to the BCC configuration for the same \(\mathrm{Re_{p}}\). Figure 11 shows the comparison between the standard deviation of the axial fluid velocity between the experiments and the simulations for \(\mathrm{Re_{p}}\) = 200 and 300. As highlighted earlier, due to the surface roughness of the spheres, the velocity fields are more susceptible to fluctuations in the case of the experiments compared to the simulations. Hence, the peak in the distribution shifts towards higher values for the experiments for both \(\mathrm{Re_{p}}\), which is not seen to the same extent in the simulations. ## 6 Conclusions This paper reports a detailed comparison between experimental and numerical studies for the fluid velocity field at the exit of simple unit cell (SUC) and body centered cubic (BCC) packed bed reactors, where a jet and a co-flow delivers the fluid flow from the bottom of the packed bed. Under the centre of the packing is a jet, and a co-flow is generated through a porous plate on which the particle packing rests. The objective is to study the details of the dispersion of the jet flow and co-flow through the layers of packed beds for flows with particle Reynolds numbers of \(\mathrm{Re_{p}}\) = 200, 300, and 400. Additionally, the paper provides model experimental results for the validation of simulation studies. The experimental setup involves 3D-printed spheres from which SUC and BCC packed beds are constructed. The experiments are performed with 18 layers of SUC and 25 layers of BCC (13 full layers and 12 weak layers). The effects of the wall flow are significantly reduced by having 3D-printed half spheres instead of full spheres at the channel wall and sphere interface. A co-flow of air is made to pass through a porous plate, so that a uniform co-flow velocity is obtained at the entrance of the packing, and the same boundary condition can be imposed in simulations. A fully developed jet flow is supplied at the exit of a central pipe, mounted directly under the centre of the packing. In the experiments, stereo particle image velocimetry (SPIV) is used in such a way, that the velocities over the entire region at the exit of the packed bed are obtained instantaneously. The fluid flow of the jet is seeded with tracer particles, to enable the SPIV. The experiments are complex, as the out of plane component is the dominant velocity component of the flow field. 
Furthermore, the experimental measurements are carried out within the square channel, where the walls cause distortions in the recorded camera images. Therefore, the camera calibration is carefully executed and, additionally, a stereo self-calibration algorithm is used. The volumetric flow rate is observed to differ by 10-20 % between the actual inlet flow rate, determined by the mass flow controllers, and the flow rate obtained by integrating the results determined from the SPIV measurements. The present experimental SPIV arrangement helps to extract the velocity field instantaneously over an entire region of the packed bed. For the numerical simulations, the immersed boundary method (IBM) with the direct forcing approach and an adaptively refined mesh is used. In the context of flow simulations of fixed packed bed reactors, the proposed work shows that the IBM approach is highly suitable for modelling fixed packed bed reactors of any configuration. Especially the fact that the IBM can be straightforwardly applied to non-uniform and even moving packings of arbitrarily shaped particles, with no significant increase in computational cost, is a great advantage compared to other simulation methodologies.

Figure 10: Comparison of numerical and experimental distribution plots for averaged axial velocity (\(\overline{V_{z}}\)) for BCC packing at (a) \(\mathrm{Re_{p}}\) = 200, (b) \(\mathrm{Re_{p}}\) = 300, and (c) \(\mathrm{Re_{p}}\) = 400.

Overall, a good agreement between the simulations and the experimental results is observed. Especially for the SUC particle packing, there is generally a very good agreement between the experiments and the simulations for all considered particle Reynolds numbers. However, the velocity magnitude was always slightly higher in the simulations than in the experiments. Interestingly, the axial velocity is found to increase at the pores in the packing at the exit, but this is more evident in the experiments as compared to the simulations. The structure of the jets from the pores at the exit of the BCC configuration is found to differ between the experiments and the simulations. In the simulations, for this configuration, mostly two jets appear from the pores, whereas only one jet appears from each pore in the experiments. However, the overall features of the flow structures are in good agreement. The velocity profile at the exit of the packing near the walls, as compared to the centre of the packed bed, shows a very good agreement between the simulations and the experiments. For the BCC packing, the axial velocities can become higher at the peripheral pores than at the central pores of the packed bed. The discrepancies between the simulations and the experiments may be attributed to the surface roughness of the 3D-printed spheres used in the experiments, as the behaviour of the boundary layers near the spheres is very sensitive to the surface roughness. This is corroborated by the fact that the fluctuations in the axial velocity are higher in the experiments compared to the velocities predicted by the simulations. Hence, the velocity field at the exit can be affected, especially in the case of the BCC packing, where the spheres are more closely packed and the boundary layers are therefore more pronounced than in the SUC packing.
Another probable reason for the discrepancy is the relatively low spatial resolution achieved in the SPIV measurements and the subsequent underestimation of the volumetric flow rate by the SPIV measurements, compared to the volumetric flow rate as determined by the mass flow controllers. However, in general, both the experiments and the simulations show a good agreement, and the complex flow structures in the packings can be well predicted.

## 7 Acknowledgements

This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 422037413 - TRR 287. Gefördert durch die Deutsche Forschungsgemeinschaft (DFG) - Projektnummer 422037413 - TRR 287. We would like to thank Dr. Gunar Boye for helping with the preparation of the experimental setup. We would like to thank M.Sc. Afrin Merchant for carrying out the 3D-printing of the packed bed units.
本論文では、単純単位格子(SUC)と体心立方格子(BCC)の粒子配置における、気体噴流と共流の拡散を実験的および数値的に調査しています。実験セットアップは、数値計算の枠組みに適切かつ簡素な境界条件を課せるように設計されています。SUCとBCCの粒子充填層は、3Dプリントされた球体から構成されています。充填層の出口での流速を、粒子レイノルズ数200、300、および400において両方の充填層で直接分析しました。ステレオ粒子画像流速測定(SPIV)は、充填層出口の全領域での速度を同時に取得できるように実験的に構成されています。数値解法は、最新のIBM(埋め込み境界法)とAMR(適応格子細分化)で構成されています。本論文では、SUCとBCCの充填層について、出口における各細孔の位置と速度場を提示しています。数値結果と実験結果は全体として良好な一致を示しています。
2309.14569
Towards a Novel Ultrasound System Based on Low-Frequency Feature Extraction From a Fully-Printed Flexible Transducer
Ultrasound is a key technology in healthcare, and it is being explored for non-invasive, wearable, continuous monitoring of vital signs. However, its widespread adoption in this scenario is still hindered by the size, complexity, and power consumption of current devices. Moreover, such an application demands adaptability to human anatomy, which is hard to achieve with current transducer technology. This paper presents a novel ultrasound system prototype based on a fully printed, lead-free, and flexible polymer ultrasound transducer, whose bending radius promises good adaptability to the human anatomy. Our application scenario focuses on continuous blood flow monitoring. We implemented a hardware envelope filter to efficiently transpose high-frequency ultrasound signals to a lower-frequency spectrum. This reduces computational and power demands with little to no degradation in the task proposed for this work. We validated our method on a setup that mimics human blood flow by using a flow phantom and a peristaltic pump simulating 3 different heartbeat rhythms: 60, 90 and 120 beats per minute. Our ultrasound setup reconstructs peristaltic pump frequencies with errors of less than 0.05 Hz (3 bpm) from the set pump frequency, both for the raw echo and the enveloped echo. The analog pre-processing showed a promising reduction of signal bandwidth of more than 6x: pulse-echo signals of transducers excited at 12.5 MHz were reduced to about 2 MHz. Thus, allowing consumer MCUs to acquire and elaborate signals within mW-power range in an inexpensive fashion.
Marco Giordano, Kirill Keller, Francesco Greco, Luca Benini, Michele Magno, Christoph Leitner
2023-09-25T22:42:27
http://arxiv.org/abs/2309.14569v1
Towards a Novel Ultrasound System Based on Low-Frequency Feature Extraction From a Fully-Printed Flexible Transducer ###### Abstract Ultrasound is a key technology in healthcare, and it is being explored for non-invasive, wearable, continuous monitoring of vital signs. However, its widespread adoption in this scenario is still hindered by the size, complexity, and power consumption of current devices. Moreover, such an application demands adaptability to human anatomy, which is hard to achieve with current transducer technology. This paper presents a novel ultrasound system prototype based on a fully printed, lead-free, and flexible polymer ultrasound transducer, whose bending radius promises good adaptability to the human anatomy. Our application scenario focuses on continuous blood flow monitoring. We implemented a hardware envelope filter to efficiently transpose high-frequency ultrasound signals to a lower-frequency spectrum. This reduces computational and power demands with little to no degradation in the task proposed for this work. We validated our method on a setup that mimics human blood flow by using a flow phantom and a peristaltic pump simulating 3 different heartbeat rhythms: 60, 90 and 120 beats per minute. Our ultrasound setup reconstructs peristaltic pump frequencies with errors of less than 0.05 Hz (3 bpm) from the set pump frequency, both for the raw echo and the enveloped echo. The analog pre-processing showed a promising reduction of signal bandwidth of more than 6x: pulse-echo signals of transducers excited at 12.5 MHz were reduced to about 2 MHz. Thus, allowing consumer MCUs to acquire and elaborate signals within mW-power range in an inexpensive fashion. Medical, Ultrasonics, P(VDF-TrFE), Cardiovascular response. ## I Introduction Ultrasound technology, long utilized as a diagnostic tool, is witnessing a notable expansion into wearable applications, fueled by ongoing advancements in material sciences [1], system design [2] and signal processing [3]. As an echographic imaging technique, it carries the unique ability to penetrate soft tissues and capture real-time images in a radiation-free manner, enabling doctors to assess health conditions without resorting to invasive procedures. However, the potential benefits of ultrasound extend beyond traditional healthcare applications. Recent works show possible applications of ultrasound technology in the realm of wearable monitoring of hemodynamics [4] and organs [1, 5], in prosthetic controls [6] as well as in human-machine interfaces (HMI) [7]. Exploiting novel materials, compact hardware designs, and advanced signal processing techniques, ultrasound technology is now being integrated into smart patches, with the ultimate goal of inconspicuously obtaining vital data from tissues and internal organs [8], representing an innovative, non-invasive approach to patient monitoring and personalized healthcare. Recently, flexible ultrasonic transducers [4] and even bendable 2D arrays [5] based on lead zirconate titanate (PZT) have been proposed. Lin et al. [1] demonstrated the first fully wearable ultrasound system tailored for transmitting and receiving signals from a thin and flexible PZT ultrasound transducer. PZT has excellent piezoelectric properties, but their fabrication is complex and transducers contain harmful lead. Piezoelectric polymers such as P(VDF-TrFE) provide a compelling alternative to materials like PZT. 
Not only is P(VDF-TrFE) lead-free, but it is also significantly easier and more cost-effective to manufacture, thanks to its compatibility with printing methods. These characteristics make P(VDF-TrFE) a promising candidate for its use in wearable ultrasonic devices [9]. To date, P(VDF-TrFE)-based transducers have been used primarily for high-frequency applications, but they are also finding their way into medical application scenarios where flexibility and conformability are key. The inherent advantages of P(VDF-TrFE) including its high sensitivity and wide bandwidth, can make it a key component in transferring ultrasound instruments from traditional benchtop or handheld hardware form factors into wearable devices [10]. One of the challenges with screen-printed P(VDF-TrFE) is the feasible (often thin) layer thicknesses [9, 11]. This results in high resonant frequencies [12] that might not be suitable for medical ultrasound and pose a challenge for signal acquisition and processing hardware. From a fabrication viewpoint, care must be taken in the material selection and thickness choice of auxiliary structures (e.g. substrate, electrodes,...) as they all acoustically modulate the transducer bandwidth [13]. Moreover, the use of such transducers in wearable scenarios requires custom-made ultrasonic acquisition system designs which can bypass power-hungry electronics that are usually required for high-frequency signal acquisition [2]. In this context, and to address the above challenges, we present a design concept for an ultrasound acquisition platform. We developed and evaluated an envelope filter strategy to extract features from the ultrasound signal, transposing high-frequency ultrasound signals to a low-frequency spectrum, thus reducing computational and power demands without compromising vital diagnostic data. We incorporate a flexible high-frequency ultrasound transducer based on P(VDF-TrFE). Moreover, we employ an integrated pulser in a compact form factor that ensures effective ultrasound signal generation while enhancing portability and energy efficiency. We demonstrate the application of our setup on hemodynamic measurements in a phantom. Our ultrasound setup manages to reconstruct peristaltic pump frequencies with errors of less than \(0.05\,\mathrm{Hz}\) (\(3\,\mathrm{b}\mathrm{p}\mathrm{m}\)) from the set pump frequency of 1-\(2\,\mathrm{Hz}\) (60-\(120\,\mathrm{b}\mathrm{p}\mathrm{m}\)), both for the raw echo and the enveloped echo. Our system prototype allows for easy fabrication and has the potential for miniaturization, other than opening doors towards low-power ultrasound signal conditioning and onboard processing for wearable devices. In conclusion, this paper demonstrates that a low-power front-end for ultrasound can be used to lower system requirements for high-frequency fully printed ultrasound transducers. The technology was tested on a hemodynamic phantom, showing great potential for continuous blood flow monitoring. ## II Materials and Methods An overview of the setup is summarised in Figure 1. The flexible transducer, seen in detail in Figure 1(a), is placed on top of the phantom, Figure 1(b), while water is pumped through the phantom with the peristaltic pump. In the following subsections, the different components of the setup as well as the experimental measurement setup will be introduced and discussed. ### _Flexible Polymer Transducer Characterization_ For this study, we employed a P(VDF-TrFE) ultrasonic transducer printed on a flexible substrate (Figure 1(a). 
As the printing substrate we used Kapton HN (DuPont) with \(50\,\mu\mathrm{m}\) thickness. P(VDF-TrFE) transducers were screen printed using FC20 ink (Arkema). Electrodes were inkjet printed using reactive silver ink EI-502 (Electroninks). The entire fabrication process has already been described in detail in Leitner et al. [9]. Our transducers showed a resonance frequency of \(15.1\,\mathrm{MHz}\); they were circular and had an active surface area of \(56.75\,\mathrm{mm}^{2}\).

### _Transmit and Receive Circuit_

#### II-B1 Pulser

Targeting an embedded implementation, we aimed for a low-power pulser with a small device footprint, which could deliver the necessary power for our transducer. The choice was the STHVP32 (STMicroelectronics), a novel pulser technology that provides the ability to design custom waveforms on up to 5 different voltage levels for 32 different channels, coordinated by an internal beamforming engine.

#### II-B2 Envelope detection

Ultrasound applications are characterized by fairly high-frequency signals, usually in the MHz range. To cope with such high data rates in conventional medical devices, dedicated ADCs are used to acquire raw signals, which are then sent to FPGAs for further processing [14]. Therefore, the required components are expensive and do not target low-power operation in the mW range. Since we expected the operating frequencies of our printed P(VDF-TrFE) transducers to be in the range of 10-15 MHz, one core idea of this work was to pre-process the ultrasound signals in the analog domain before the digital conversion. We experimented with extracting the envelope of the received ultrasound signal in hardware, validating the approach in a frequency extraction task. The proposed approach drastically reduces the data rates required to acquire signals from the P(VDF-TrFE) transducers, thus minimizing the power consumption and system requirements for the signal processing, at a mW-range power expense. Figure 2 shows the envelope detection circuit implemented in this work. The operational amplifier (opamp) chosen is the ADA4807 (Analog Devices), due to its analog characteristics compatible with the ultrasound signal and its low-power operation. The first two opamps, U1 and U2, are non-inverting amplifiers that amplify and shift the echo signal. Moreover, given the limited gain-bandwidth product of the opamps, they also filter (by having a smaller amplification factor) the high-frequency noise caused by the pulser. The following opamp, U3, serves to decouple the feedback networks of the first two amplifiers from the envelope detection circuit. The envelope detection starts from D1, which feeds a rectified signal into R3 that in turn charges the capacitor C1, which is then discharged by R4. The resulting signal is once again amplified by U4, which decouples the envelope detection circuit from the signal acquisition pipeline.

### _Flow Phantom_

To verify our experimental setup within a regulated setting, we engineered a phantom made from agar-agar that simulates a blood vessel [15].

Fig. 1: Schematic of the system setup. (a) shows the flexible transducer. (b) shows the phantom inside the 3D-printed mold.

Our phantom was produced using a 3D-printed mold that encompasses a tube along its primary axis. Following the curing process of the material, this tube is extractable, allowing for the attachment of an additional pair of tubes at the endpoints.
This process effectively forms a conduit within the agar-agar, which is subsequently filled with water. The cross-section of the phantom, as well as a picture of the mold, can be seen in Figure 1. To emulate the rhythmic beating of the heart, we employed a peristaltic pump. The pump is operated under a duty cycle to replicate a single heartbeat in each period, thereby mirroring the beating of the human heart. Our objective is to reconstruct the pump frequency, which emulates the heart rate, from the ultrasound data collected with our system.

### _Flow Measurements_

In this section, the experimental setup and the signal processing used to retrieve the peristaltic pump frequency are described, and the results from the collected data are detailed and commented on.

#### II-D1 Experimental Data-collection

For data collection, we placed our P(VDF-TrFE) transducer on top of the flow phantom. The active transducer surface was centered on the conduit of the phantom. Transmission pulses were selected as a 5-cycle \(12.5\,\mathrm{MHz}\) square-wave signal with a 30 Vpp amplitude; the pulse is shown on a coarse time axis in Figure 3(a). The same figure also shows the received signal at the bottom, where the expanded segment highlights the signal reflected at the impedance discontinuity between the agar-agar and air.

#### II-D2 Signal acquisition

In this work, we targeted a benchtop setup, as an explorative step towards system integration. In our experiment, the transmit pulse, the echo, and the hardware-extracted envelope of the echo have been acquired with a PCI ADC card ADQ412 (Teledyne, Sweden). An overview of the involved signals is given in Figure 3(a).

#### II-D3 Signal processing

To extract peristaltic pump frequencies, we adapted the method proposed in [8].

* **Filtering:** the raw signal is bandpass filtered with a third-order Butterworth filter with a lower bound of \(10\,\mathrm{MHz}\) and an upper bound of \(15\,\mathrm{MHz}\).
* **M-Mode image:** the acquired data are put in a matrix, with each collected ultrasound shot occupying one column. This aligns the Sampling Time (SPT) along the vertical axis and the Pulse Repetition Time (PRT) along the horizontal axis, as shown in Figure 3(b).
* **Differentiation:** in order to enhance the signal, the velocity of the signal is computed by differentiating along the Pulse Repetition Time axis. The output of this step can be seen in Figure 3(c).
* **2D FFT:** to extract the target frequency, two Fast Fourier Transforms are performed on the differentiated matrix form of the signal. Figures 3(d) and (e) report the absolute value of said FFT for the raw and enveloped echo, respectively.
* **Frequency accumulation:** this step reduces the 2D FFT dimensionality to 1D and helps to highlight any time-periodic behavior by summing the absolute value of the 2D FFT along the Pulse Repetition axis. The resulting plot can be seen in Figures 3(f) and (g) for the raw and enveloped echo, respectively.
* **Moving average and peak finding:** the final step in the signal processing is to smooth the accumulated FFT and find the peak corresponding to the wanted frequency. In Figures 3(f) and (g), a moving average is applied to the accumulated FFT before a peak-finding algorithm is run; the extracted peak is highlighted in red. A minimal numerical sketch of this pipeline is given below.

## III Results and Discussion

### _Flow Measurements_

To validate our setup we collected 60 seconds of ultrasound data, with a pulse repetition frequency of \(25\,\mathrm{Hz}\).
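As a rough illustration of the processing steps listed in Section II-D3, the sketch below shows one possible NumPy/SciPy implementation; the function name, the filter design, the smoothing window, and the choice to sum over the fast-time frequency axis are our own illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def pump_frequency(shots, fs_fast, prf, band=(10e6, 15e6), smooth=5):
    """Estimate the pump frequency from an M-mode matrix `shots`
    (fast-time samples x number of shots), sampled at `fs_fast` Hz along
    the fast-time (SPT) axis and at `prf` Hz along the pulse-repetition
    (PRT) axis."""
    # 1) Band-pass filter each shot around the transducer band
    #    (this step would be skipped for the already-enveloped signal).
    sos = butter(3, band, btype="bandpass", fs=fs_fast, output="sos")
    m_mode = sosfiltfilt(sos, shots, axis=0)
    # 2) Differentiate along the pulse-repetition (slow-time) axis.
    diff = np.diff(m_mode, axis=1)
    # 3) 2D FFT of the differentiated M-mode image (real FFT on slow time).
    spec = np.abs(np.fft.rfft2(diff))
    # 4) Accumulate over the fast-time frequency axis -> 1D spectrum over
    #    the slow-time (pulse-repetition) frequencies.
    acc = spec.sum(axis=0)
    # 5) Moving average, then pick the strongest non-DC component.
    acc = np.convolve(acc, np.ones(smooth) / smooth, mode="same")
    freqs = np.fft.rfftfreq(diff.shape[1], d=1.0 / prf)
    return freqs[1 + np.argmax(acc[1:])]
```

With the values used here (a 25 Hz pulse repetition frequency and 60 s of data, i.e. 1500 shots), the frequency resolution along the slow-time axis is roughly 1/60 s ≈ 0.017 Hz, consistent with the sub-0.05 Hz errors reported in Table I.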
We tested three different peristaltic pump frequencies, namely \(1\,\mathrm{Hz}\), \(1.5\,\mathrm{Hz}\) and \(2\,\mathrm{Hz}\), which replicate common heart rate values of 60 bpm, 90 bpm and 120 bpm. Starting the analysis from the raw signal, shown in Figure 3(a), it is clear that the system can be heavily duty-cycled: our application requires only \(25\,\mu\mathrm{s}\) of active acquisition time over a pulse repetition period of \(40\,\mathrm{ms}\). Moving on to the M-Mode plot, Figure 3(b), we can see how the differentiation manages to amplify the time dependency along the PRT axis: from a barely observable time dependency to a quite explicit time dependency in Figure 3(c). Focusing now on Figure 3(d), we can observe that most of the frequency components of the signal along the SPT axis are centered, as expected, around the transducer's pulsing frequency: \(12.5\,\mathrm{MHz}\). In the same plot, along the PRT axis, higher values of the absolute value of the FFT are detectable at around \(2\,\mathrm{Hz}\) and \(4\,\mathrm{Hz}\), with the former being the peristaltic pump frequency selected for this experiment. The same plot is also proposed in Figure 3(e) for the enveloped signal. Focusing on the SPT axis, it is clear that the bandwidth reduction has been successful, given that the majority of the signal lies within \(1\,\mathrm{MHz}\), and the pulse repetition frequencies clearly span up to about \(2\,\mathrm{MHz}\). Accumulating across the PRT axis highlights the frequency components of the peristaltic pump even more, as seen in Figures 3(f) and (g). From these plots, it is clear that the envelope detection manages to successfully reconstruct the frequency of the peristaltic pump. An additional moving-window filter further helps to suppress noise that could leak into the frequency analysis. The peaks displayed in Figures 3(f) and (g), which correspond to the detected peristaltic pump frequency, are clearly distinguishable, with an excellent signal-to-noise ratio. The experiment in Figure 3 had the peristaltic pump frequency set at \(2\,\mathrm{Hz}\), which is well reconstructed by the frequency accumulation. Moreover, the first harmonic of the signal at \(4\,\mathrm{Hz}\) is also visible.

TABLE II: Estimated system power — Pulser: \(1.9\,\mathrm{mW}\); Envelope: \(0.14\,\mathrm{mW}\).

Fig. 2: Schematic of the signal acquisition pipeline and detail on the envelope detection circuit.

Table I summarises the experimental runs at the three different peristaltic pump frequencies. The signal processing introduced in Section II-D3 is applied on both the raw echo and the hardware-enveloped echo, and the results from the extracted peaks have been reported for each of the three selected frequencies. The set frequencies of the peristaltic pump were reconstructed, both for the raw signal and the enveloped one, within an error of \(0.05\,\mathrm{Hz}\) (equivalent to \(3\,\mathrm{bpm}\)), which lies within the errors reported in the literature [16] for similar tasks. Moreover, we can observe that the envelope detection managed to reconstruct the pump frequency as well as, or even better than, the non-enveloped signal.
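To put these numbers into perspective, a back-of-the-envelope estimate of the acquisition load is sketched below; the assumption of Nyquist-rate sampling at twice the signal bandwidth is ours and is only meant to illustrate the data-rate argument, not to reproduce the ADC settings of the actual setup.

```python
# Rough acquisition budget implied by the figures above (assumed Nyquist-rate sampling).
active_window = 25e-6   # s of active acquisition per shot (from the duty-cycling remark)
prt = 40e-3             # s pulse repetition period (25 Hz PRF)
duty_cycle = active_window / prt
print(f"duty cycle: {duty_cycle:.4%}")  # ~0.06 %

bandwidths = {"raw echo": 12.5e6, "enveloped echo": 2e6}   # Hz
for name, bw in bandwidths.items():
    fs = 2 * bw                               # minimum (Nyquist) sampling rate
    samples_per_second = fs * duty_cycle      # averaged over the repetition period
    print(f"{name}: fs >= {fs/1e6:.0f} MS/s, ~{samples_per_second:,.0f} samples/s after duty cycling")
```

Under these assumptions, the enveloped path needs roughly six times fewer samples per second than the raw path, in line with the bandwidth reduction discussed next.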
We have observed that enveloping the signal can reduce the bandwidth by over 6x: in our analysis, most of the enveloped signal was in fact within the \(2\,\mathrm{MHz}\) mark (Figure 3(e)), while the raw echo had most of the signal above \(10\,\mathrm{MHz}\) (Figure 3(d)). Reducing the bandwidth allows the use of MCUs' integrated analog front ends and allows MCUs to process the amount of data acquired. This represents a lower-power, cheaper, and more scalable solution than using an FPGA to acquire and process ultrasound signals. The power consumption estimation breakdown for the proposed system is reported in Table II: all the power figures involved are in the mW range.

## IV Conclusion

This paper proposes a methodology to lower ultrasound signal acquisition and processing requirements by enveloping the received echo. A P(VDF-TrFE)-based paper-thin flexible transducer has been used to acquire an ultrasound signal, pushing ultrasound applications towards on-body continuous monitoring. A drawback of such a transducer is the higher operating frequency, which we addressed with a hardware envelope-detection circuit. An agar-agar phantom powered by a peristaltic pump has been constructed to simulate a hemodynamic application. Data collected on the phantom demonstrate little to no degradation in the recovery of the peristaltic pump frequency, while the required bandwidth has been decreased by over 6x. The hardware pre-processed signal, with its reduced bandwidth, can then be acquired and processed by common off-the-shelf microcontrollers, drastically lowering the cost and complexity of ultrasound systems, and enabling their deployment in wearable devices.

## Acknowledgment

We thank STMicroelectronics for the provision of hardware and support. This work was supported by the ETH Research Grant ETH-C-01-21-2 (Project ListenToLight).

Fig. 3: Signal processing pipeline. (a) shows the raw signal as collected by the ADC, with a zoom on the US TX and RX. (b) shows different pulses of the US RX signal, in (c) it is differentiated. (d) shows a 2D FFT on the differentiated signal, (e) on the enveloped signal. (f) shows the frequency accumulation for the non-enveloped signal, (g) for the enveloped signal.
超音波は医療における重要な技術であり、バイタルサインの非侵襲的・ウェアラブル・連続的なモニタリングへの応用が探求されています。しかし、現在のデバイスのサイズ、複雑さ、消費電力により、このシナリオでの広範な普及は依然として妨げられています。さらに、このようなアプリケーションには人体の解剖学的形状への適応性が求められますが、これは現在のトランスデューサ技術では実現が困難です。本論文では、完全印刷・鉛フリー・フレキシブルなポリマー超音波トランスデューサに基づく新しい超音波システムのプロトタイプを提示します。その曲げ半径は人体への良好な適応性を期待させます。本研究のアプリケーションシナリオは、連続的な血流モニタリングに焦点を当てています。高周波の超音波信号を低周波数スペクトルへ効率的に変換するために、ハードウェアの包絡線(エンベロープ)フィルタを実装しました。これにより、本研究で提案するタスクにおいて性能の劣化をほとんど伴わずに、計算量と消費電力の要求を低減できます。フローファントムと、60、90、120 bpmの3種類の心拍リズムを模擬する蠕動ポンプを用いて、人体の血流を模した構成で本手法を検証しました。本超音波セットアップは、生のエコー信号と包絡線処理後のエコー信号の両方で、設定したポンプ周波数から0.05 Hz(3 bpm)未満の誤差でポンプ周波数を再構成できました。アナログ前処理により信号帯域幅は6倍以上削減され、12.5 MHzで励振したトランスデューサのパルスエコー信号は約2 MHzにまで低減されました。これにより、民生用MCUがmWオーダーの消費電力で安価に信号を取得・処理できるようになります。