yilunzhao commited on
Commit
2bf039d
·
verified ·
1 Parent(s): f47caac

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes. See raw diff
Files changed (50)
  1. 20240620/1808.05671v4.json +417 -0
  2. 20240620/2008.11451v2.json +417 -0
  3. 20240620/2206.02909v3.json +0 -0
  4. 20240620/2208.07540v4.json +0 -0
  5. 20240620/2208.13296v2.json +0 -0
  6. 20240620/2210.00898v3.json +629 -0
  7. 20240620/2210.14484v4.json +0 -0
  8. 20240620/2211.10636v6.json +0 -0
  9. 20240620/2211.14873v4.json +0 -0
  10. 20240620/2212.01211v3.json +639 -0
  11. 20240620/2212.10131v2.json +0 -0
  12. 20240620/2301.13006v2.json +0 -0
  13. 20240620/2302.02224v3.json +774 -0
  14. 20240620/2302.08176v3.json +0 -0
  15. 20240620/2303.15350v2.json +72 -0
  16. 20240620/2304.06470v6.json +629 -0
  17. 20240620/2305.04694v2.json +0 -0
  18. 20240620/2305.13582v3.json +0 -0
  19. 20240620/2306.05486v3.json +0 -0
  20. 20240620/2306.09293v2.json +0 -0
  21. 20240620/2307.01927v3.json +149 -0
  22. 20240620/2307.06930v3.json +0 -0
  23. 20240620/2307.13520v2.json +0 -0
  24. 20240620/2308.03372v2.json +0 -0
  25. 20240620/2308.04792v3.json +0 -0
  26. 20240620/2308.07706v3.json +0 -0
  27. 20240620/2308.10692v2.json +0 -0
  28. 20240620/2309.08781v3.json +0 -0
  29. 20240620/2309.08902v3.json +0 -0
  30. 20240620/2309.11143v4.json +167 -0
  31. 20240620/2309.12875v2.json +164 -0
  32. 20240620/2309.14169v2.json +401 -0
  33. 20240620/2309.15001v2.json +321 -0
  34. 20240620/2309.16792v2.json +0 -0
  35. 20240620/2310.00905v2.json +0 -0
  36. 20240620/2310.04741v6.json +144 -0
  37. 20240620/2310.08745v3.json +0 -0
  38. 20240620/2310.13164v6.json +591 -0
  39. 20240620/2310.14414v2.json +0 -0
  40. 20240620/2310.15903v4.json +0 -0
  41. 20240620/2310.17467v4.json +460 -0
  42. 20240620/2311.01264v2.json +66 -0
  43. 20240620/2311.06530v2.json +0 -0
  44. 20240620/2311.07230v2.json +0 -0
  45. 20240620/2311.10433v2.json +147 -0
  46. 20240620/2311.11900v2.json +0 -0
  47. 20240620/2311.13564v2.json +335 -0
  48. 20240620/2311.17088v2.json +100 -0
  49. 20240620/2311.17451v3.json +145 -0
  50. 20240620/2311.17541v3.json +0 -0
20240620/1808.05671v4.json ADDED
@@ -0,0 +1,417 @@
+ {
+ "title": "On the Convergence of Adaptive Gradient Methods for Nonconvex Optimization",
+ "abstract": "Adaptive gradient methods are workhorses in deep learning. However, the convergence guarantees of adaptive gradient methods for nonconvex optimization have not been thoroughly studied. In this paper, we provide a fine-grained convergence analysis for a general class of adaptive gradient methods, including AMSGrad, RMSProp and AdaGrad. For smooth nonconvex functions, we prove that adaptive gradient methods converge in expectation to a first-order stationary point. Our convergence rate is better than existing results for adaptive gradient methods in terms of dimension. In addition, we prove high probability bounds on the convergence rates of AMSGrad, RMSProp and AdaGrad, which have not been established before. Our analyses shed light on the mechanism behind adaptive gradient methods in optimizing nonconvex objectives.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "Stochastic gradient descent (SGD) (Robbins & Monro, 1951 ###reference_b32###) and its variants have been widely used in training deep neural networks. Among those variants, adaptive gradient methods (AdaGrad) (Duchi et al., 2011 ###reference_b9###; McMahan & Streeter, 2010 ###reference_b26###), which scale each coordinate of the gradient by a function of past gradients, can achieve better performance than vanilla SGD in practice when the gradients are sparse. An intuitive explanation for the success of AdaGrad is that it automatically adjusts the learning rate for each feature based on the partial gradient, which accelerates the convergence. However, AdaGrad was later found to demonstrate degraded performance especially in cases where the loss function is nonconvex or the gradient is dense, due to rapid decay of learning rate. This problem is especially exacerbated in deep learning due to the huge number of optimization variables. To overcome this issue, RMSProp (Tieleman & Hinton, 2012 ###reference_b34###) was proposed to use exponential moving average rather than the arithmetic average to scale the gradient, which mitigates the rapid decay of the learning rate. Kingma & Ba (2014 ###reference_b20###) proposed an adaptive momentum estimation method (Adam), which incorporates the idea of momentum (Polyak, 1964 ###reference_b29###; Sutskever et al., 2013 ###reference_b33###) into RMSProp. Other related algorithms include AdaDelta (Zeiler, 2012 ###reference_b38###) and Nadam (Dozat, 2016 ###reference_b8###), which combine the idea of the exponential moving average of the historical gradients, Polyak\u2019s heavy ball (Polyak, 1964 ###reference_b29###) and Nesterov\u2019s accelerated gradient descent (Nesterov, 2013 ###reference_b28###).\nRecently, by revisiting the original convergence analysis of Adam, Reddi et al. (2018 ###reference_b31###) found that for some handcrafted simple convex optimization problem, Adam does not even converge to the global minimizer. In order to address this convergence issue of Adam, Reddi et al. (2018 ###reference_b31###) proposed a new variant of the Adam algorithm named AMSGrad, which has guaranteed convergence in the convex setting.\nThe update rule of AMSGrad is as follows111With slight\nabuse of notation, here we denote by the element-wise square root of the vector , the element-wise division between and , and the element-wise maximum between and .:\nwhere is the step size, is a small number to ensure numerical stability, is the iterate in the -th iteration, and are the exponential moving averages of the gradient and the squared gradient at the -th iteration respectively: 222We denote by the element-wise square of the vector .\nHere are algorithm hyperparameters, and is the stochastic gradient at .\nDespite the successes of adaptive gradient methods for training deep neural networks, the convergence guarantees for these algorithms are mostly restricted to online convex optimization (Duchi et al., 2011 ###reference_b9###; Kingma & Ba, 2014 ###reference_b20###; Reddi et al., 2018 ###reference_b31###). Therefore, there is a huge gap between existing online convex optimization guarantees for adaptive gradient methods and the empirical successes of adaptive gradient methods in nonconvex optimization.\nIn order to bridge this gap, there are a few recent attempts to prove the nonconvex optimization guarantees for adaptive gradient methods.\nMore specifically,\nBasu et al. 
(2018 ###reference_b3###) proved the convergence rate of RMSProp and Adam when using deterministic gradient rather than stochastic gradient. Li & Orabona (2018 ###reference_b22###) proved the convergence rate of AdaGrad, assuming the gradient is -Lipschitz continuous. Ward et al. (2018 ###reference_b35###) proved the convergence rate of AdaGrad-Norm where the moving average of the norms of the gradient vectors is used to adjust the gradient vector in both deterministic and stochastic settings for smooth nonconvex functions.\nNevertheless, the convergence guarantees in Basu et al. (2018 ###reference_b3###); Ward et al. (2018 ###reference_b35###) are still limited to simplified algorithms. Another attempt to obtain the convergence rate under stochastic setting is prompted recently by Zou & Shen (2018 ###reference_b40###), in which they only focus on the condition when the momentum vanishes. Chen et al. (2018a ###reference_b5###) studies the convergence properties of adaptive gradient methods in the nonconvex setting, however, its convergence rate has a quadratic dependency on the problem dimension . D\u00e9fossez et al. (2020 ###reference_b7###) proves the convergence of Adam and Adagrad in nonconvex smooth optimization under the assumption of almost sure uniform bound on the norm of the gradients.\nIn this paper, we provide a fine-grained convergence analysis of the adaptive gradient methods. In particular, we analyze several representative adaptive gradient methods, i.e., AMSGrad (Reddi et al., 2018 ###reference_b31###), which fixed the non-convergence issue in Adam and the RMSProp (fixed version via (Reddi et al., 2018 ###reference_b31###)), and prove its convergence rate for smooth nonconvex objective functions in the stochastic optimization setting.\nMoreover, existing theoretical guarantees for adaptive gradient methods are mostly bounds in expectation over the randomness of stochastic gradients, and are therefore only on-average convergence guarantees. In practice, however, the optimization algorithm is usually only run once, and therefore the performance cannot be guaranteed by the in-expectation bounds. To deal with this problem, we also provide high probability convergence rates for AMSGrad and RMSProp, which can characterize the performance of the algorithms on a single run."
+ },
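For reference, the AMSGrad update rule described in the introduction can be written in the following standard form, following Reddi et al. (2018); the symbols $\alpha_t$ (step size), $\beta_1, \beta_2$ (momentum parameters), and $\epsilon$ (stability constant) are our notation, the placement of $\epsilon$ is one common convention, and all operations are element-wise:

\[
m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t, \qquad v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2,
\]
\[
\hat{v}_t = \max(\hat{v}_{t-1}, v_t), \qquad x_{t+1} = x_t - \alpha_t \, \frac{m_t}{\sqrt{\hat{v}_t} + \epsilon},
\]

where $g_t = \nabla f(x_t; \xi_t)$ is the stochastic gradient at $x_t$.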
+ {
+ "section_id": "1.1",
+ "parent_section_id": "1",
+ "section_name": "Our Contributions",
+ "text": "The main contributions of our work are as follows:\nWe prove that the convergence rate of AMSGrad to a stationary point for stochastic nonconvex optimization is\nwhen . Here with being the stochastic gradients satisfying , and is a parameter that characterizes the growth rate of the cumulative stochastic gradient .\nOur result implies that the worst case (i.e., ) convergence rate for AMSGrad is\nwhich has a better dependence on the dimension and than the convergence rate proved in Chen et al. (2018a ###reference_b5###), i.e.,\nWe also establish high probability bounds for adaptive gradient methods. To the best of our knowledge, it is the first high probability convergence guarantees for AMSGrad and RMSProp for nonconvex stochastic optimization.\nNotations:\nscalars are denoted by lower case letters, vectors by lower case bold face letters, and matrices by upper case bold face letters. For a vector , we denote the norm () of by , the norm of by . For a sequence of vectors , we denote by the -th element in . We also denote . With slightly abuse of notation, for any two vectors and , we denote as the element-wise square, as the element-wise power operation, as the element-wise division and as the element-wise maximum. For a matrix , we define and .\nGiven two sequences and , we write if there exists a constant such that . We use notation to hide logarithmic factors.\n###table_1###"
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "Related Work",
+ "text": "Here we review other related work that is not covered before.\nAdaptive gradient methods:\nMukkamala & Hein (2017 ###reference_b27###) proposed SC-Adagrad and SC-RMSprop, which derives logarithmic regret bounds for strongly convex functions.\nChen et al. (2018b ###reference_b6###) proposed SADAGRAD for solving stochastic strongly convex optimization and more generally stochastic\nconvex optimization that satisfies the second order growth condition.\nZaheer et al. (2018 ###reference_b37###) studied the effect of adaptive denominator constant and minibatch size in the convergence of adaptive gradient methods.\nZou et al. (2019 ###reference_b41###) presented an easy-to-check sufficient condition to guarantee the convergences of Adam and AMSGrad in the non-convex stochastic setting.\nChen et al. (2020 ###reference_b4###) proposed a partially adaptive gradient method and proved its convergence in nonconvex settings.\nAlacaoglu et al. (2020 ###reference_b1###) proposed a new framework to derive data-dependent regret bounds with a constant momentum parameter in various settings.\nNonconvex Stochastic Optimization:\nGhadimi & Lan (2013 ###reference_b10###) proposed a randomized stochastic gradient (RSG) method, and proved its convergence rate to a stationary point. Ghadimi & Lan (2016 ###reference_b11###) proposed an randomized stochastic accelerated gradient (RSAG) method, which achieves convergence rate, where is an upper bound on the variance of the stochastic gradient.\nMotivated by the success of stochastic momentum methods in deep learning (Sutskever et al., 2013 ###reference_b33###),\nYang et al. (2016 ###reference_b36###)\nprovided a unified convergence analysis for both stochastic heavy-ball method and the stochastic variant of Nesterov\u2019s accelerated gradient method, and proved convergence rate to a stationary point for smooth nonconvex functions.\nReddi et al. (2016 ###reference_b30###); Allen-Zhu & Hazan (2016 ###reference_b2###) proposed variants of stochastic variance-reduced gradient (SVRG) method (Johnson & Zhang, 2013 ###reference_b18###) that is provably faster than gradient descent in the nonconvex finite-sum setting. Lei et al. (2017 ###reference_b21###) proposed a stochastically controlled stochastic gradient (SCSG), which further improves convergence rate of SVRG for finite-sum smooth nonconvex optimization. Recently, Zhou et al. (2018 ###reference_b39###) proposed a new algorithm called stochastic nested variance-reduced gradient (SNVRG), which achieves strictly better gradient complexity than both SVRG and SCSG for finite-sum and stochastic smooth nonconvex optimization.\nHigh Probability Bounds: There are only a few works on the high probability convergence results. Kakade & Tewari (2009 ###reference_b19###) proved high probability bounds for the PEGASOS algorithm via Freeman\u2019s inequality.\nHarvey et al. (2019a ###reference_b12###; b ###reference_b13###) proved convergence bounds for non-smooth, strongly convex case via generalized Freeman\u2019s inequality.\nJain et al. 
(2019 ###reference_b16###) makes the last iterate of SGD information theoretically optimal by providing a high probability bound.\nLi & Orabona (2020 ###reference_b23###) presented a high probability analysis for Delayed AdaGrad algorithm with momentum in the smooth nonconvex setting.\nFor the ease of comparison, we summarize the convergence rates of adaptive gradient methods derived in different works in Table 1 ###reference_###, along with the convergence types and corresponding assumptions."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Algorithms",
+ "text": "We mainly consider the following three algorithms: AMSGrad (Reddi et al., 2018 ###reference_b31###), a corrected version of RMSProp (Tieleman & Hinton, 2012 ###reference_b34###; Reddi et al., 2018 ###reference_b31###), and AdaGrad (Duchi et al., 2011 ###reference_b9###).\nThe AMSGrad algorithm is originally proposed by Reddi et al. (2018 ###reference_b31###) to fix the non-convergence issue in the original Adam optimizer (Kingma & Ba, 2014 ###reference_b20###).\nSpecifically, in Algorithm 1 ###reference_###, the effective learning rate of AMSGrad is where , while in original Adam, the effective learning rate is where . This choice of effective learning rate guarantees that it is non-increasing and thus fix the possible convergence issue.\nIn Algorithm 2 ###reference_###, we present a variant of RMSProp (Tieleman & Hinton, 2012 ###reference_b34###) (adding the max step according to Reddi et al. (2018 ###reference_b31###)) where the effective learning rate is also set as .\nIn Algorithm 3 ###reference_### we further present the AdaGrad algorithm (Duchi et al., 2011 ###reference_b9###), which adopts the summation of past stochastic gradient squares instead of the running average to compute the effective learning rate."
+ },
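As a concrete illustration of the effective-learning-rate choices discussed in this section, here is a minimal NumPy sketch of a single AMSGrad step; the variable names are ours, not the paper's Algorithm 1, and the eps placement is one common convention:

import numpy as np

def amsgrad_step(x, g, m, v, v_hat, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # Exponential moving averages of the gradient and the squared gradient.
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    # The max step of Reddi et al. (2018): keeps the effective learning
    # rate lr / sqrt(v_hat) non-increasing, unlike the original Adam.
    v_hat = np.maximum(v_hat, v)
    # Element-wise adaptive update.
    x = x - lr * m / (np.sqrt(v_hat) + eps)
    return x, m, v, v_hat

Dropping the max step (v_hat = v) recovers an Adam/RMSProp-style denominator, and accumulating v += g ** 2 instead of averaging recovers an AdaGrad-style denominator.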
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "Convergence Results in Expectation",
+ "text": "In this section, we present our main results on the convergence of AMSGrad, RMSProp and AdaGrad.\nWe study the following stochastic nonconvex optimization problem\nwhere is a random variable satisfying certain distribution, is a -smooth nonconvex function. In the stochastic setting, one cannot directly access the full gradient of . Instead, one can only get unbiased estimators of the gradient of , which is . This setting has been studied in Ghadimi & Lan (2013 ###reference_b10###; 2016 ###reference_b11###).\nhas -bounded stochastic gradient. That is, for any , we assume that\n.\nIt is worth mentioning that Assumption 4.1 ###reference_theorem1### is slightly weaker than the -boundedness assumption used in Reddi et al. (2016 ###reference_b30###); Chen et al. (2018a ###reference_b5###). Since , the -boundedness assumption implies Assumption 4.1 ###reference_theorem1### with . Meanwhile, will be tighter than by a factor of when each coordinate of almost equals to each other.\nis -smooth: for any , we have\nAssumption 4.2 ###reference_theorem2### is a standard assumption in the analysis of gradient-based algorithms. It is equivalent to the -gradient Lipschitz condition, which is often written as .\nWe are now ready to present our main result.\nSuppose , and for . Then under Assumptions 4.1 ###reference_theorem1### and 4.2 ###reference_theorem2###, the iterates of AMSGrad satisfy that\nwhere\n are defined as follows:\nand .\nNote that in Theorem 4.3 ###reference_theorem3### we have a condition that . Here characterizes the growth rate of , i.e., the cumulative stochastic gradient (Liu et al., 2019 ###reference_b24###). In the worse case where the stochastic gradients are not sparse, we have , while in practice when the stochastic gradients are\nsparse, we have .\nIf we choose then (4.1 ###reference_###) implies that AMSGrad achieves\nconvergence rate.\nIn the worst case when , this result matches the convergence rate of nonconvex SGD (Ghadimi & Lan, 2016 ###reference_b11###). For the dimension dependence, it is not directly comparable since they made a different stochastic noise assumption (they assumed the stochastic gradient is -subGaussian w.r.t. the norm of the gradient). By directly translating their assumption to ours (to replace with ), we can obtain a dominant term in their convergence result, which matches our convergence rate. Note that Chen et al. (2018a ###reference_b5###) also provided a similar bound for AMSGrad that\nIt can be seen that the dependence of in their bound is quadratic, which is worse than the linear dependence suggested by (4.1 ###reference_###).\nA recent work (D\u00e9fossez et al., 2020 ###reference_b7###) discussed the convergence issue of Adam by showing that the bound consists of a constant term and does not converge to zero. In comparison, our result for AMSGrad does not have such a constant term and converges to zero in a rate . This suggests that the convergence issue of Adam is indeed fixed in AMSGrad.\nUnder the same conditions of Theorem 4.3 ###reference_theorem3###, if and for , then the iterates of RMSProp satisfy that\nwhere are defined as follows:\nand .\nUnder the same conditions of Theorem 4.3 ###reference_theorem3###, if and for , then the the iterates of AdaGrad satisfy that\nwhere are defined as follows:\nand .\nCorollaries 4.5 ###reference_theorem5### and 4.6 ###reference_theorem6### imply that RMSProp and AdaGrad achieve the same rate of convergence as AMSGrad. 
In worst case where , both algorithms achieve convergence rate, which matches the convergences rate of nonconvex SGD given by Ghadimi & Lan (2016 ###reference_b11###).\nD\u00e9fossez et al. (2020 ###reference_b7###) gave a bound for AdaGrad, which gives the following rate\nwhen . Our result gives a faster rate in terms of the dependency in dimension ."
+ },
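A sketch of the two assumptions used above, in their standard form; we write $G_\infty$ for the gradient bound and $L$ for the smoothness constant (our notation for the constants):

\[
\text{(Assumption 4.1)} \quad \|\nabla f(x; \xi)\|_\infty \le G_\infty \quad \text{for all } x \text{ and } \xi,
\]
\[
\text{(Assumption 4.2)} \quad \|\nabla f(x) - \nabla f(y)\|_2 \le L \|x - y\|_2 \quad \text{for all } x, y.
\]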
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Convergence Results with High Probability",
+ "text": "In the previous section, we provide convergence results of the three adaptive gradient methods in expectation.\nThese bounds can only guarantee the average performance of a large number of trials of the algorithm, but cannot rule out extremely bad solutions.\nWhat\u2019s more, for practical applications such as training deep neural networks, we often perform a single run of the algorithm since the training time can be fairly large. Hence, it is helpful to get high probability bounds\nwhich guarantee the performance of the algorithm on a single\nrun.\nTo overcome this limitation, in this section, we further establish high probability bounds on the convergence rate for AMSGrad, RMSProp and AdaGrad.\nWe make the following additional assumption.\nThe stochastic gradients are sub-Gaussian random vectors (Jin et al., 2019 ###reference_b17###):\nfor all and all .\nAssumption 5.1 ###reference_theorem1### is commonly considered when studying high probability bounds (Li & Orabona, 2020 ###reference_b23###). It is weaker than Assumption B2 in Li & Orabona (2020 ###reference_b23###): for the case when is a standard Gaussian vector, defined in Li & Orabona (2020 ###reference_b23###) is of order , while in our definition.\nSuppose , and for . Then for any , under Assumptions 4.1 ###reference_theorem1###, 4.2 ###reference_theorem2### and 5.1 ###reference_theorem1###, with probability at least , the iterates of AMSGrad satisfy that\nwhere\n are defined as follows:\nand .\nSimilar to the discussion in Remark 4.4 ###reference_theorem4###, we can choose to achieve an convergence rate.\nWe also have the following corollaries providing the high probability bounds for RMSProp and AdaGrad.\nUnder the same conditions of Theorem 5.2 ###reference_theorem2###, if and for , then for any , with probability at least , the iterates of RMSProf satisfy that\nwhere are defined as follows:\nand .\nUnder the same conditions of Theorem 5.2 ###reference_theorem2###, if and for , then for any , with probability at least , the iterates of AdaGrad satisfy\nwhere\n are defined as follows:\nand ."
+ },
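For concreteness, Assumption 5.1 can be stated in the norm-sub-Gaussian form of Jin et al. (2019); this is our reconstruction, with $\sigma$ an assumed variance proxy:

\[
\Pr\big( \|\nabla f(x_t; \xi_t) - \nabla f(x_t)\|_2 \ge s \big) \le 2 \exp\!\left( -\frac{s^2}{2\sigma^2} \right) \quad \text{for all } s > 0 \text{ and all } t.
\]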
+ {
+ "section_id": "6",
+ "parent_section_id": null,
+ "section_name": "Proof Sketch of the Main Results",
+ "text": "In this section, we provide a proof sketch of Theorem 4.3 ###reference_theorem3### and Theorem 5.2 ###reference_theorem2###, and the complete proofs as well as proofs for other corollaries and technical lemmas can be found in the supplemental materials. Compared with the analysis of standard stochastic gradient descent, the main difficulty of analyzing the convergence rate of adaptive gradient methods is caused by the stochastic momentum and adaptive stochastic gradient . To address this challenge, following Yang et al. (2016 ###reference_b36###), we define an auxiliary sequence : let , and for each ,\nThe following lemma shows that can be represented by and . This indicates that by considering the sequence , it is possible to analyze algorithms which include stochastic momentum, such as AMSGrad.\nLet be defined in (6.1 ###reference_###). Then for , we have the following expression for .\nWe can also represent as the following:\nFor , we have\nWith Lemma 6.1 ###reference_theorem1###, we have the following two lemmas giving upper bounds for and , which are useful for the proof of the main theorem.\nLet be defined in (6.1 ###reference_###). For , we have\nLet be defined in (6.1 ###reference_###). For , we have\nWe also need the following lemma to bound and . Basically, it shows that these quantities can be bounded by .\nLet and be as defined in Algorithm 1 ###reference_###. Then under Assumption 4.1 ###reference_theorem1###, we have , and .\nLastly, we need the following lemma that provides upper bounds on and . More specifically, it shows that we can bound and with . The bound of is essential for us to obtain a tighter dependency in terms of .\nLet be the weight parameters,\n, be the step sizes in Algorithm 1 ###reference_###. We denote . Suppose that and , then under Assumption 4.1 ###reference_theorem1###, we have the following two results:\nand\nWith all lemmas provided above, now we are ready to provide the proof of Theorem 4.3 ###reference_theorem3###.\nProof [Proof Sketch of Theorem 4.3 ###reference_theorem3###]\nSince is -smooth, we have:\nIn the following, we bound , and separately.\nBounding term : We can prove that when ,\nFor , by Lemma 6.1 ###reference_theorem1###, we can prove the following result:\nBounding term : For , by Lemma 6.1 ###reference_theorem1### and Lemma 6.2 ###reference_theorem2###, we can prove that\nBounding term :\nFor , by Lemma 6.1 ###reference_theorem1###, we have\nNow we get back to (6.2 ###reference_###). We provide upper bounds of (6.2 ###reference_###) for and separately. For , substituting (6.3 ###reference_###), (6.5 ###reference_###) and (6.6 ###reference_###) into (6.2 ###reference_###), taking expectation and rearranging terms, we have\nFor , substituting (6.4 ###reference_###), (6.5 ###reference_###) and (6.6 ###reference_###) into (6.2 ###reference_###), taking expectation and rearranging terms, we have\nwhere the inequality holds due to the fact by Lemma 6.4 ###reference_theorem4###. We now telescope (6.8 ###reference_###) for to , and add it with (6.7 ###reference_###). Rearranging it, we have\nBy using Lemma 6.5 ###reference_theorem5###, we can further bound and in (6.9 ###reference_###) with , which turns out to be\nFinally, rearranging (6.10 ###reference_###), and adopting the theorem condition that , we obtain\nwhere are defined in Theorem 4.3 ###reference_theorem3###. This completes the proof.\nWe highlight here why we can achieve a tighter dimension dependency ( v.s. ) as compared with D\u00e9fossez et al. 
(2020 ###reference_b7###). Both our analysis and the one in D\u00e9fossez et al. (2020 ###reference_b7###) required to upper bound the gradient norm by the stochastic gradients and momentum (see our (6.9 ###reference_###) and (A.19) in D\u00e9fossez et al. (2020 ###reference_b7###). However, D\u00e9fossez et al. (2020 ###reference_b7###) bounded and separately as suggested by (A.20) in D\u00e9fossez et al. (2020 ###reference_b7###), and they obtained a better bound for , which depends on , and a worse bound for , which has an dependency. Thus, the final bound in their result suffers from an dependency (see the second and third term in (A.54) in D\u00e9fossez et al. (2020 ###reference_b7###). To compare with, we bound both and by uniformly by using Lemma 6.5 ###reference_theorem5### which makes our final bound only has an dependency (see the third term in (6.10 ###reference_###)). Therefore, by optimizing , our final bound only depends on rather than .\nWe then show the proof sketch for high probability result, i.e, Theorem 4.3 ###reference_theorem3###.\nProof [Proof Sketch of Theorem 5.2 ###reference_theorem2###]\nFollowing the same procedure as in the proof for Theorem 4.3 ###reference_theorem3### until (6.6 ###reference_###).\nFor , substituting (6.3 ###reference_###), (6.5 ###reference_###) and (6.6 ###reference_###) into (6.2 ###reference_###), rearranging terms, we have\nFor , substituting (6.4 ###reference_###), (6.5 ###reference_###) and (6.6 ###reference_###) into (6.2 ###reference_###), rearranging terms, we have\nWe now telescope (6.12 ###reference_###) for to and add it with (6.11 ###reference_###). Rearranging it, we have\nNow consider the filtration . Since and only depend on , by Assumption 5.1 ###reference_theorem1### and an martingale concentration argument,we obtain\nBy using Lemma 6.5 ###reference_theorem5### and substituting (6.14 ###reference_###) into (6.13 ###reference_###), we have\nMoreover, by Lemma 6.4 ###reference_theorem4###, we have , and therefore by choosing and rearranging terms, we have\nwhere is an absolute constant.\nFinally, rearranging (6.15 ###reference_###) and adopting the condition gives\nwhere are defined in Theorem 5.2 ###reference_theorem2###. This completes the proof."
+ },
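The auxiliary sequence of Yang et al. (2016) referenced in the proof sketch typically takes the following form (a sketch in our notation, with $\beta_1$ the momentum parameter):

\[
z_1 = x_1, \qquad z_t = x_t + \frac{\beta_1}{1 - \beta_1} \left( x_t - x_{t-1} \right) \quad \text{for } t \ge 2,
\]

chosen so that the update of $z_t$ behaves like a momentum-free SGD-style step and the momentum terms telescope in the analysis.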
+ {
+ "section_id": "7",
+ "parent_section_id": null,
+ "section_name": "Conclusion",
+ "text": "In this paper, we provided a fine-grained analysis of\na general class of adaptive gradient methods, and proved their convergence rates for smooth nonconvex optimization.\nOur results provide faster convergence rates of AMSGrad and the corrected version of RMSProp as well as AdaGrad for smooth nonconvex optimization compared with previous works.\nIn addition, we also prove high probability bounds on the convergence rates of AMSGrad and RMSProp as well as AdaGrad, which have not been established before."
+ }
+ ],
+ "appendix": [
+ {
+ "section_id": "Appendix x1",
+ "parent_section_id": null,
+ "section_name": "Appendix",
+ "text": ""
+ },
+ {
+ "section_id": "Appendix 1",
+ "parent_section_id": null,
+ "section_name": "Appendix A Proof of the Main Theory",
+ "text": "Here we provide the detailed proof of the main theorem.\nLet .\nTo prove Theorem 4.3 ###reference_theorem3###, we need the following lemmas:\nLet and be as defined in Algorithm 1 ###reference_###. Then under Assumption 4.1 ###reference_theorem1###, we have , and .\nLet , be the weight parameters such that\n, be the step sizes. We denote . Suppose that and , then under Assumption 4.1 ###reference_theorem1###, we have the following two results:\nand\nNote that Lemma A.2 ###reference_theorem2### is general and applicable to various algorithms. Specifically, set and , we recover the case in Algorithm 1 ###reference_###. Further set we recover the case in Algorithm 2 ###reference_###. Set and we recover the case in Algorithm 3 ###reference_###.\nTo deal with stochastic momentum and stochastic weight , following Yang et al. (2016 ###reference_b36###), we define an auxiliary sequence as follows: let , and for each ,\nLemma A.3 ###reference_theorem3### shows that can be represented in two different ways.\nLet be defined in (A.1 ###reference_###). For , we have\nand\nFor , we have\nBy Lemma A.3 ###reference_theorem3###, we connect with and . The following two lemmas give bounds on and , which play important roles in our proof.\nLet be defined in (A.1 ###reference_###). For , we have\nLet be defined in (A.1 ###reference_###). For , we have\nWe present the following lemma which upper bounds the difference .\nFor , we have\nFor , we have\nNow we are ready to prove Theorem 4.3 ###reference_theorem3###.\nProof [Proof of Theorem 4.3 ###reference_theorem3###]\nBy Lemma A.6 ###reference_theorem6###, for , we have\nFor , we have\nwhere the equality holds because conditioned on and , the second inequality holds because of Lemma A.1 ###reference_theorem1###.\nTelescoping (A.6 ###reference_###) for to and adding with (B.15 ###reference_###), we have\nBy Lemma A.2 ###reference_theorem2###, we have\nwhere . We also have\nSubstituting (A.8 ###reference_###) and (A.9 ###reference_###) into (A.7 ###reference_###), and rearranging (A.7 ###reference_###), we have\nwhere the second inequality holds because . Rearranging (A.10 ###reference_###), and note that in the theorem condition we have , we obtain\nwhere are defined in Theorem 4.3 ###reference_theorem3###.\nThis completes the proof.\nProof [Proof of Corollary 4.5 ###reference_theorem5###]\nFollowing the proof for Theorem 4.3 ###reference_theorem3###, setting and in Lemma A.2 ###reference_theorem2### we get the conclusion.\nProof [Proof of Corollary 4.6 ###reference_theorem6###]\nFollowing the proof for Theorem 4.3 ###reference_theorem3###, setting , and in Lemma A.2 ###reference_theorem2### we get the conclusion.\nProof [Proof of Theorem 5.2 ###reference_theorem2###]\nBy Lemma A.6 ###reference_theorem6###, for , we have\nFor , we have\nTelescoping (A.12 ###reference_###) for to and adding (A.11 ###reference_###), we have\nBy Lemma A.2 ###reference_theorem2###, we have\nwhere . We also have\nMoreover, consider the filtration . Since and only depend on . For any , by Assumption 5.1 ###reference_theorem1### with , we have\nDenote . Then we have\nWith exactly the same proof, we also have\nCombining above two inequalities, we have\nChoosing , we finally obtain\nfor all ,\nwhere . The tail bound (A.16 ###reference_###) enables the application of Lemma 6 in Jin et al. (2019 ###reference_b17###), which gives that with probability at least ,\nwhere is an absolute constant. 
Plugging in the definitions of and , we obtain\nwhere the second inequality is by the fact that the diagonal entries of are all loewr bounded by .\nSubstituting (A.14 ###reference_###), (A.15 ###reference_###) and (A.17 ###reference_###) into (A.13 ###reference_###), we have\nMoreover, by Lemma A.1 ###reference_theorem1###, we have , and therefore by choosing and rearranging terms, we have\nTherefore when , we have\nwhere is an absolute constant.\nNow by the theorem condition , we have\nwhere are defined in Theorem 5.2 ###reference_theorem2###.\nThis completes the proof.\nProof [Proof of Corollary 5.4 ###reference_theorem4###]\nFollowing the proof for Theorem 5.2 ###reference_theorem2###, setting and in Lemma A.2 ###reference_theorem2### we get the conclusion.\nProof [Proof of Corollary 5.4 ###reference_theorem4###]\nFollowing the proof for Theorem 5.2 ###reference_theorem2###, setting , and in Lemma A.2 ###reference_theorem2### we get the conclusion."
+ },
+ {
+ "section_id": "Appendix 2",
+ "parent_section_id": null,
+ "section_name": "Appendix B Proof of Technical Lemmas",
+ "text": "Proof [Proof of Lemma A.1 ###reference_theorem1###]\nSince has -bounded stochastic gradient, for any and , . Thus, we have\nNext we bound . We have . Suppose that , then for , we have\nThus, for any , we have . Finally we bound . First we have . Suppose that and . Note that we have\nand by definition, we have .\nThus, for any , we have .\nProof Recall that denote the -th coordinate of and . We have\nwhere the first inequality holds since and the second inequality holds because . Next we have\nwhere the first inequality holds due to Cauchy inequality, and the last inequality holds because . Note that\nwhere the equality holds due to the definition of . Substituting (B.2 ###reference_###) and (B.3 ###reference_###) into (B.1 ###reference_###), we have\nTelescoping (B.4 ###reference_###) for to , we have\nFinally, we have\nwhere the inequality holds due to H\u00f6lder\u2019s inequality. Substituting (B.6 ###reference_###) into (B.5 ###reference_###), we have\nSpecifically, taking , we have , then\n\nProof By definition, we have\nThen we have\nThe equities above are based on definition. Then we have\nThe equalities above follow by combining the like terms.\nProof By Lemma A.3 ###reference_theorem3###, we have\nwhere the inequality holds because the term is positive, and triangle inequality. Considering that , when , we have . With that fact, the term above can be bound as:\nThis completes the proof.\nProof For term , we have:\nwhere the last inequality holds because the term is positive.\nProof \nSince is -smooth, we have:\nIn the following, we bound , and separately.\nBounding term : When , we have\nFor , we have\nwhere the first equality holds due to (A.3 ###reference_###) in Lemma A.3 ###reference_theorem3###. For in (B.9 ###reference_###), we have\nThe first inequality holds because for a positive diagonal matrix , we have . The second inequality holds due to . Next we bound . We have\nThe first inequality holds because for a positive diagonal matrix , we have . The second inequality holds due to . Substituting (B.10 ###reference_###) and (B.11 ###reference_###) into (B.9 ###reference_###), we have\nBounding term : For , we have\nwhere the second inequality holds because of Lemma A.3 ###reference_theorem3### and Lemma A.4 ###reference_theorem4###, the last inequality holds due to Young\u2019s inequality.\nBounding term : For , we have\nThe first inequality is obtained by introducing Lemma A.3 ###reference_theorem3###.\nFor , substituting (B.8 ###reference_###), (B.13 ###reference_###) and (B.14 ###reference_###) into (B.7 ###reference_###), taking expectation and rearranging terms,\nwe have\nwhere the last inequality holds because\nFor , substituting (B.12 ###reference_###), (B.13 ###reference_###) and (B.14 ###reference_###) into (B.7 ###reference_###), taking expectation and rearranging terms, we have\nwhich ends our proof.\nIn order to show that the growth rate condition of the cumulative stochastic gradient indeed holds, we have conducted experiments to estimate the growth rate parameter for ResNet-18 (He et al., 2016 ###reference_b14###) model and 3-layer LSTM model (Hochreiter & Schmidhuber, 1997 ###reference_b15###) respectively. For simplicity, we assume and estimate the growth rate by calculating the logarithm of the cumulative gradient norm and calculate . 
As can be seen from Table 2 ###reference_###, of adaptive gradient methods (AdaGrad, RMSProp and AMSGrad) is smaller than that of SGDM for training 3-layer LSTM model on the PennTreeBank (Marcus et al., 1993 ###reference_b25###) dataset. All of them are actually far below the theoretical limit in this real experiment.\n###table_2###"
+ }
+ ],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S1.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Comparison of convergence rate of AMSGrad and AdaGrad in terms of the convergence types and assumptions by different works in the nonconvex smooth setting. Here denotes the total number of iterations and is the dimension.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S1.T1.27\">\n<tr class=\"ltx_tr\" id=\"S1.T1.27.24\">\n<td class=\"ltx_td ltx_border_tt\" id=\"S1.T1.27.24.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S1.T1.27.24.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S1.T1.27.24.2.1\">Conv. Rate</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S1.T1.27.24.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S1.T1.27.24.3.1\">Conv. Type</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_tt\" id=\"S1.T1.27.24.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S1.T1.27.24.4.1\">\n<span class=\"ltx_p\" id=\"S1.T1.27.24.4.1.1\" style=\"width:133.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S1.T1.27.24.4.1.1.1\">Assumptions</span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.27.25\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S1.T1.27.25.1\">AMSGrad</td>\n<td class=\"ltx_td ltx_border_t\" id=\"S1.T1.27.25.2\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S1.T1.27.25.3\"></td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_t\" id=\"S1.T1.27.25.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.6.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"S1.T1.5.1.1\">\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">Chen et\u00a0al. (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/1808.05671v4#bib.bib5\" title=\"\">2018a</a>)</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.6.2.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.6.2.3\">in-expectation</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S1.T1.6.2.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S1.T1.6.2.4.1\">\n<span class=\"ltx_p\" id=\"S1.T1.6.2.4.1.1\" style=\"width:133.7pt;\">smoothness, bounded gradient</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.8.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"S1.T1.7.3.1\">\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">Alacaoglu et\u00a0al. 
(<a class=\"ltx_ref\" href=\"https://arxiv.org/html/1808.05671v4#bib.bib1\" title=\"\">2020</a>)</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.8.4.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.8.4.3\">in-expectation</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S1.T1.8.4.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S1.T1.8.4.4.1\">\n<span class=\"ltx_p\" id=\"S1.T1.8.4.4.1.1\" style=\"width:133.7pt;\">smoothness, bounded gradient</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.11.7\" style=\"background-color:#FFCCCC;\">\n<td class=\"ltx_td ltx_align_left\" id=\"S1.T1.10.6.2\">\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"ltx_text\" id=\"S1.T1.10.6.2.1\" style=\"background-color:#FFCCCC;\">Ours (worst case, i.e., )</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.7.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.7.4\"><span class=\"ltx_text\" id=\"S1.T1.11.7.4.1\" style=\"background-color:#FFCCCC;\">in-expectation</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S1.T1.11.7.5\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S1.T1.11.7.5.1\" style=\"background-color:#FFCCCC;\">\n<span class=\"ltx_p\" id=\"S1.T1.11.7.5.1.1\" style=\"width:133.7pt;\">smoothness, bounded gradient</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.15.11\" style=\"background-color:#FFCCCC;\">\n<td class=\"ltx_td ltx_align_left\" id=\"S1.T1.13.9.2\">\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"ltx_text\" id=\"S1.T1.13.9.2.1\" style=\"background-color:#FFCCCC;\">Ours (worst case, i.e., )</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.14.10.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.15.11.5\"><span class=\"ltx_text\" id=\"S1.T1.15.11.5.1\" style=\"background-color:#FFCCCC;\">high probability</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S1.T1.15.11.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S1.T1.15.11.4.1\" style=\"background-color:#FFCCCC;\">\n<span class=\"ltx_p\" id=\"S1.T1.15.11.4.1.1\" style=\"width:133.7pt;\">smoothness, bounded gradient, is a sub-Gaussian vector</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.27.26\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S1.T1.27.26.1\">AdaGrad</td>\n<td class=\"ltx_td ltx_border_t\" id=\"S1.T1.27.26.2\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S1.T1.27.26.3\"></td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_t\" id=\"S1.T1.27.26.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.17.13\">\n<td class=\"ltx_td ltx_align_left\" id=\"S1.T1.16.12.1\">\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">D\u00e9fossez et\u00a0al. 
(<a class=\"ltx_ref\" href=\"https://arxiv.org/html/1808.05671v4#bib.bib7\" title=\"\">2020</a>)</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.17.13.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.17.13.3\">in-expectation</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S1.T1.17.13.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S1.T1.17.13.4.1\">\n<span class=\"ltx_p\" id=\"S1.T1.17.13.4.1.1\" style=\"width:133.7pt;\">smoothness, bounded gradient</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.20.16\">\n<td class=\"ltx_td ltx_align_left\" id=\"S1.T1.18.14.1\">\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">Li &amp; Orabona (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/1808.05671v4#bib.bib23\" title=\"\">2020</a>)</cite><span class=\"ltx_note ltx_role_footnote\" id=\"footnotex7\"><sup class=\"ltx_note_mark\">3</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">3</sup><span class=\"ltx_tag ltx_tag_note\">3</span>To be precise, <cite class=\"ltx_cite ltx_citemacro_cite\">Li &amp; Orabona (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/1808.05671v4#bib.bib23\" title=\"\">2020</a>)</cite> studies a delayed AdaGrad algorithm with momentum.</span></span></span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.19.15.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.20.16.4\">high probability</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S1.T1.20.16.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S1.T1.20.16.3.1\">\n<span class=\"ltx_p\" id=\"S1.T1.20.16.3.1.1\" style=\"width:133.7pt;\">smoothness, is sub-Gaussian</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.23.19\" style=\"background-color:#FFCCCC;\">\n<td class=\"ltx_td ltx_align_left\" id=\"S1.T1.22.18.2\">\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"ltx_text\" id=\"S1.T1.22.18.2.1\" style=\"background-color:#FFCCCC;\">Ours (worst case, i.e., )</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.23.19.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.23.19.4\"><span class=\"ltx_text\" id=\"S1.T1.23.19.4.1\" style=\"background-color:#FFCCCC;\">in-expectation</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S1.T1.23.19.5\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S1.T1.23.19.5.1\" style=\"background-color:#FFCCCC;\">\n<span class=\"ltx_p\" id=\"S1.T1.23.19.5.1.1\" style=\"width:133.7pt;\">smoothness, bounded gradient</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.27.23\" style=\"background-color:#FFCCCC;\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S1.T1.25.21.2\">\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"ltx_text\" id=\"S1.T1.25.21.2.1\" style=\"background-color:#FFCCCC;\">Ours (worst case, i.e., )</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S1.T1.26.22.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S1.T1.27.23.5\"><span class=\"ltx_text\" id=\"S1.T1.27.23.5.1\" style=\"background-color:#FFCCCC;\">high probability</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_bb\" id=\"S1.T1.27.23.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S1.T1.27.23.4.1\" style=\"background-color:#FFCCCC;\">\n<span class=\"ltx_p\" id=\"S1.T1.27.23.4.1.1\" style=\"width:133.7pt;\">smoothness, bounded gradient, is a sub-Gaussian 
vector</span>\n</span>\n</td>\n</tr>\n</table>\n</figure>",
+ "capture": "Table 1: Comparison of convergence rate of AMSGrad and AdaGrad in terms of the convergence types and assumptions by different works in the nonconvex smooth setting. Here denotes the total number of iterations and is the dimension."
+ },
+ "2": {
+ "table_html": "<figure class=\"ltx_table\" id=\"A2.T2\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"A2.T2.1\">\n<tr class=\"ltx_tr\" id=\"A2.T2.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"A2.T2.1.1.2\">method</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"A2.T2.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"A2.T2.1.1.3\">training loss</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A2.T2.1.1.4\">test perplexity</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T2.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A2.T2.1.2.1\">SGDM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A2.T2.1.2.2\">0.136</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A2.T2.1.2.3\">4.01</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T2.1.2.4\">65.11</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T2.1.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A2.T2.1.3.1\">AdaGrad</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A2.T2.1.3.2\">0.089</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A2.T2.1.3.3\">3.92</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T2.1.3.4\">64.90</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T2.1.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A2.T2.1.4.1\">RMSProp</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A2.T2.1.4.2\">0.085</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A2.T2.1.4.3\">3.84</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T2.1.4.4\">63.77</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T2.1.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A2.T2.1.5.1\">AMSGrad</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A2.T2.1.5.2\">0.086</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A2.T2.1.5.3\">3.85</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T2.1.5.4\">63.97</td>\n</tr>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Empirical growth rate parameter of 3-layer LSTM model on PennTreeBank dataset.</figcaption>\n</figure>",
+ "capture": "Table 2: Empirical growth rate parameter of 3-layer LSTM model on PennTreeBank dataset."
+ }
+ },
+ "image_paths": {},
+ "validation": true,
+ "references": [
87
+ {
88
+ "1": {
89
+ "title": "A new regret analysis for adam-type algorithms.",
90
+ "author": "Ahmet Alacaoglu, Yura Malitsky, Panayotis Mertikopoulos, and Volkan Cevher.",
91
+ "venue": "In International Conference on Machine Learning, pp. 202\u2013210. PMLR, 2020.",
92
+ "url": null
93
+ }
94
+ },
95
+ {
96
+ "2": {
97
+ "title": "Variance reduction for faster non-convex optimization.",
98
+ "author": "Zeyuan Allen-Zhu and Elad Hazan.",
99
+ "venue": "In International Conference on Machine Learning, pp. 699\u2013707, 2016.",
100
+ "url": null
101
+ }
102
+ },
103
+ {
104
+ "3": {
105
+ "title": "Convergence guarantees for rmsprop and adam in non-convex\noptimization and their comparison to nesterov acceleration on autoencoders.",
106
+ "author": "Amitabh Basu, Soham De, Anirbit Mukherjee, and Enayat Ullah.",
107
+ "venue": "arXiv preprint arXiv:1807.06766, 2018.",
108
+ "url": null
109
+ }
110
+ },
111
+ {
112
+ "4": {
113
+ "title": "Closing the generalization gap of adaptive gradient methods in\ntraining deep neural networks.",
114
+ "author": "Jinghui Chen, Dongruo Zhou, Yiqi Tang, Ziyan Yang, Yuan Cao, and Quanquan Gu.",
115
+ "venue": "In International Joint Conferences on Artificial Intelligence,\n2020.",
116
+ "url": null
117
+ }
118
+ },
119
+ {
120
+ "5": {
121
+ "title": "On the convergence of a class of adam-type algorithms for nonconvex\noptimization.",
122
+ "author": "Xiangyi Chen, Sijia Liu, Ruoyu Sun, and Mingyi Hong.",
123
+ "venue": "arXiv preprint arXiv:1808.02941, 2018a.",
124
+ "url": null
125
+ }
126
+ },
127
+ {
128
+ "6": {
129
+ "title": "Sadagrad: Strongly adaptive stochastic gradient methods.",
130
+ "author": "Zaiyi Chen, Yi Xu, Enhong Chen, and Tianbao Yang.",
131
+ "venue": "In International Conference on Machine Learning, pp. 913\u2013921, 2018b.",
132
+ "url": null
133
+ }
134
+ },
135
+ {
136
+ "7": {
137
+ "title": "On the convergence of adam and adagrad.",
138
+ "author": "Alexandre D\u00e9fossez, L\u00e9on Bottou, Francis Bach, and Nicolas Usunier.",
139
+ "venue": "arXiv preprint arXiv:2003.02395, 2020.",
140
+ "url": null
141
+ }
142
+ },
143
+ {
144
+ "8": {
145
+ "title": "Incorporating nesterov momentum into adam.",
146
+ "author": "Timothy Dozat.",
147
+ "venue": "2016.",
148
+ "url": null
149
+ }
150
+ },
151
+ {
152
+ "9": {
153
+ "title": "Adaptive subgradient methods for online learning and stochastic\noptimization.",
154
+ "author": "John Duchi, Elad Hazan, and Yoram Singer.",
155
+ "venue": "Journal of Machine Learning Research, 12(Jul):2121\u20132159, 2011.",
156
+ "url": null
157
+ }
158
+ },
159
+ {
160
+ "10": {
161
+ "title": "Stochastic first-and zeroth-order methods for nonconvex stochastic\nprogramming.",
162
+ "author": "Saeed Ghadimi and Guanghui Lan.",
163
+ "venue": "SIAM Journal on Optimization, 23(4):2341\u20132368, 2013.",
164
+ "url": null
165
+ }
166
+ },
167
+ {
168
+ "11": {
169
+ "title": "Accelerated gradient methods for nonconvex nonlinear and stochastic\nprogramming.",
170
+ "author": "Saeed Ghadimi and Guanghui Lan.",
171
+ "venue": "Mathematical Programming, 156(1-2):59\u201399,\n2016.",
172
+ "url": null
173
+ }
174
+ },
175
+ {
176
+ "12": {
177
+ "title": "Tight analyses for non-smooth stochastic gradient descent.",
178
+ "author": "Nicholas JA Harvey, Christopher Liaw, Yaniv Plan, and Sikander Randhawa.",
179
+ "venue": "In Conference on Learning Theory, pp. 1579\u20131613. PMLR,\n2019a.",
180
+ "url": null
181
+ }
182
+ },
183
+ {
184
+ "13": {
185
+ "title": "Simple and optimal high-probability bounds for strongly-convex\nstochastic gradient descent.",
186
+ "author": "Nicholas JA Harvey, Christopher Liaw, and Sikander Randhawa.",
187
+ "venue": "arXiv preprint arXiv:1909.00843, 2019b.",
188
+ "url": null
189
+ }
190
+ },
191
+ {
192
+ "14": {
193
+ "title": "Identity mappings in deep residual networks.",
194
+ "author": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.",
195
+ "venue": "In ECCV, pp. 630\u2013645. Springer, 2016.",
196
+ "url": null
197
+ }
198
+ },
199
+ {
200
+ "15": {
201
+ "title": "Long short-term memory.",
202
+ "author": "Sepp Hochreiter and J\u00fcrgen Schmidhuber.",
203
+ "venue": "Neural computation, 9(8):1735\u20131780, 1997.",
204
+ "url": null
205
+ }
206
+ },
207
+ {
208
+ "16": {
209
+ "title": "Making the last iterate of sgd information theoretically optimal.",
210
+ "author": "Prateek Jain, Dheeraj Nagaraj, and Praneeth Netrapalli.",
211
+ "venue": "arXiv preprint arXiv:1904.12443, 2019.",
212
+ "url": null
213
+ }
214
+ },
215
+ {
216
+ "17": {
217
+ "title": "A short note on concentration inequalities for random vectors with\nsubgaussian norm.",
218
+ "author": "Chi Jin, Praneeth Netrapalli, Rong Ge, Sham M Kakade, and Michael I Jordan.",
219
+ "venue": "arXiv preprint arXiv:1902.03736, 2019.",
220
+ "url": null
221
+ }
222
+ },
223
+ {
224
+ "18": {
225
+ "title": "Accelerating stochastic gradient descent using predictive variance\nreduction.",
226
+ "author": "Rie Johnson and Tong Zhang.",
227
+ "venue": "In Advances in neural information processing systems, pp. 315\u2013323, 2013.",
228
+ "url": null
229
+ }
230
+ },
231
+ {
232
+ "19": {
233
+ "title": "On the generalization ability of online strongly convex programming\nalgorithms.",
234
+ "author": "Sham M Kakade and Ambuj Tewari.",
235
+ "venue": "In Advances in Neural Information Processing Systems, pp. 801\u2013808, 2009.",
236
+ "url": null
237
+ }
238
+ },
239
+ {
240
+ "20": {
241
+ "title": "Adam: A method for stochastic optimization.",
242
+ "author": "Diederik P Kingma and Jimmy Ba.",
243
+ "venue": "arXiv preprint arXiv:1412.6980, 2014.",
244
+ "url": null
245
+ }
246
+ },
247
+ {
248
+ "21": {
249
+ "title": "Non-convex finite-sum optimization via scsg methods.",
250
+ "author": "Lihua Lei, Cheng Ju, Jianbo Chen, and Michael I Jordan.",
251
+ "venue": "In Advances in Neural Information Processing Systems, pp. 2345\u20132355, 2017.",
252
+ "url": null
253
+ }
254
+ },
255
+ {
256
+ "22": {
257
+ "title": "On the convergence of stochastic gradient descent with adaptive\nstepsizes.",
258
+ "author": "Xiaoyu Li and Francesco Orabona.",
259
+ "venue": "arXiv preprint arXiv:1805.08114, 2018.",
260
+ "url": null
261
+ }
262
+ },
263
+ {
264
+ "23": {
265
+ "title": "A high probability analysis of adaptive sgd with momentum.",
266
+ "author": "Xiaoyu Li and Francesco Orabona.",
267
+ "venue": "arXiv preprint arXiv:2007.14294, 2020.",
268
+ "url": null
269
+ }
270
+ },
271
+ {
272
+ "24": {
273
+ "title": "Towards better understanding of adaptive gradient algorithms in\ngenerative adversarial nets.",
274
+ "author": "Mingrui Liu, Youssef Mroueh, Jerret Ross, Wei Zhang, Xiaodong Cui, Payel Das,\nand Tianbao Yang.",
275
+ "venue": "arXiv preprint arXiv:1912.11940, 2019.",
276
+ "url": null
277
+ }
278
+ },
279
+ {
280
+ "25": {
281
+ "title": "Building a large annotated corpus of english: The penn treebank.",
282
+ "author": "Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini.",
283
+ "venue": "Computational linguistics, 19(2):313\u2013330,\n1993.",
284
+ "url": null
285
+ }
286
+ },
287
+ {
+ "26": {
+ "title": "Adaptive bound optimization for online convex optimization.",
+ "author": "H Brendan McMahan and Matthew Streeter.",
+ "venue": "arXiv preprint arXiv:1002.4908, 2010.",
+ "url": null
+ }
+ },
+ {
+ "27": {
+ "title": "Variants of rmsprop and adagrad with logarithmic regret bounds.",
+ "author": "Mahesh Chandra Mukkamala and Matthias Hein.",
+ "venue": "In ICML, 2017.",
+ "url": null
+ }
+ },
+ {
+ "28": {
+ "title": "Introductory lectures on convex optimization: A basic course, volume 87.",
+ "author": "Yurii Nesterov.",
+ "venue": "Springer Science & Business Media, 2013.",
+ "url": null
+ }
+ },
+ {
+ "29": {
+ "title": "Some methods of speeding up the convergence of iteration methods.",
+ "author": "Boris T Polyak.",
+ "venue": "USSR Computational Mathematics and Mathematical Physics, 4(5):1\u201317, 1964.",
+ "url": null
+ }
+ },
+ {
+ "30": {
+ "title": "Stochastic variance reduction for nonconvex optimization.",
+ "author": "Sashank J. Reddi, Ahmed Hefny, Suvrit Sra, Barnabas Poczos, and Alex Smola.",
+ "venue": "In International Conference on Machine Learning, pp. 314\u2013323, 2016.",
+ "url": null
+ }
+ },
+ {
+ "31": {
+ "title": "On the convergence of adam and beyond.",
+ "author": "Sashank J Reddi, Satyen Kale, and Sanjiv Kumar.",
+ "venue": "In International Conference on Learning Representations, 2018.",
+ "url": null
+ }
+ },
+ {
+ "32": {
+ "title": "A stochastic approximation method.",
+ "author": "Herbert Robbins and Sutton Monro.",
+ "venue": "The Annals of Mathematical Statistics, 22(3):400\u2013407, 1951.",
+ "url": null
+ }
+ },
+ {
+ "33": {
+ "title": "On the importance of initialization and momentum in deep learning.",
+ "author": "Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton.",
+ "venue": "In International conference on machine learning, pp. 1139\u20131147, 2013.",
+ "url": null
+ }
+ },
+ {
+ "34": {
+ "title": "Lecture 6.5\u2014RmsProp: Divide the gradient by a running average of its recent magnitude.",
+ "author": "T. Tieleman and G. Hinton.",
+ "venue": "COURSERA: Neural Networks for Machine Learning, 2012.",
+ "url": null
+ }
+ },
+ {
+ "35": {
+ "title": "Adagrad stepsizes: Sharp convergence over nonconvex landscapes, from any initialization.",
+ "author": "Rachel Ward, Xiaoxia Wu, and Leon Bottou.",
+ "venue": "arXiv preprint arXiv:1806.01811, 2018.",
+ "url": null
+ }
+ },
+ {
+ "36": {
+ "title": "Unified convergence analysis of stochastic momentum methods for convex and non-convex optimization.",
+ "author": "Tianbao Yang, Qihang Lin, and Zhe Li.",
+ "venue": "arXiv preprint arXiv:1604.03257, 2016.",
+ "url": null
+ }
+ },
+ {
+ "37": {
+ "title": "Adaptive methods for nonconvex optimization.",
+ "author": "Manzil Zaheer, Sashank Reddi, Devendra Sachan, Satyen Kale, and Sanjiv Kumar.",
+ "venue": "In Advances in neural information processing systems, pp. 9793\u20139803, 2018.",
+ "url": null
+ }
+ },
+ {
+ "38": {
+ "title": "Adadelta: an adaptive learning rate method.",
+ "author": "Matthew D Zeiler.",
+ "venue": "arXiv preprint arXiv:1212.5701, 2012.",
+ "url": null
+ }
+ },
+ {
+ "39": {
+ "title": "Stochastic nested variance reduction for nonconvex optimization.",
+ "author": "Dongruo Zhou, Pan Xu, and Quanquan Gu.",
+ "venue": "arXiv preprint arXiv:1806.07811, 2018.",
+ "url": null
+ }
+ },
+ {
+ "40": {
+ "title": "On the convergence of adagrad with momentum for training deep neural networks.",
+ "author": "Fangyu Zou and Li Shen.",
+ "venue": "arXiv preprint arXiv:1808.03408, 2018.",
+ "url": null
+ }
+ },
+ {
+ "41": {
+ "title": "A sufficient condition for convergences of adam and rmsprop.",
+ "author": "Fangyu Zou, Li Shen, Zequn Jie, Weizhong Zhang, and Wei Liu.",
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11127\u201311135, 2019.",
+ "url": null
+ }
+ }
+ ],
+ "url": "http://arxiv.org/html/1808.05671v4"
+ }
20240620/2008.11451v2.json ADDED
@@ -0,0 +1,417 @@
+ {
+ "title": "Determinantal Point Process as an alternative to NMS",
+ "abstract": "We present a determinantal point process (DPP) inspired alternative to non-maximum suppression (NMS) which has become an integral step in all state-of-the-art object detection frameworks. DPPs have been shown to encourage diversity in subset selection problems [Gong et al. (2014)]. We pose NMS as a subset selection problem and posit that directly incorporating DPP like framework can improve the overall performance of the object detection system. We propose an optimization problem which takes the same inputs as NMS, but introduces a novel sub-modularity based diverse subset selection functional. Our results strongly indicate that the modifications proposed in this paper can provide consistent improvements to state-of-the-art object detection pipelines.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "Object detection has gained a lot of momentum over the past few years, especially due to its application in a wide variety of fields such as autonomous driving, manufacturing industry, traffic and law enforcement [Idrees et al. (2018) ###reference_b15###] applications. The primary approaches for object detection can be loosely divided into a few dominant approaches, including sliding-window Deformable Parts Models [Felzenszwalb et al. (2010) ###reference_b7###, Zhu et al. (2010) ###reference_b38###], region proposal with classification [Girshick et al. (2013) ###reference_b9###, Uijlings et al. (2013) ###reference_b35###], and location regression with deep learning [Sermanet et al. (2014) ###reference_b33###, Szegedy et al. (2013) ###reference_b34###]. Almost all of the current day object detection frameworks follow a three step process, namely:\n(1) proposing a search space of windows, which has mostly converged to the output of a region proposal network (RPN), (2) scoring/\nrefining the window with a classifier/regressor, and (3)\nmerging or discarding windows that might belong to the same object.\nThis last stage is commonly referred to as \u201cnon-maximum suppression\u201d (NMS) [Girshick et al. (2013) ###reference_b9###, He et al. (2016) ###reference_b12###, Ren et al. (2015) ###reference_b30###, Felzenszwalb et al. (2010) ###reference_b7###, Redmon et al. (2015) ###reference_b29###, Liu et al. (2015) ###reference_b24###].\nNMS is a fairly simple test time post-processing routine. Maintaining parity with some of the published research in this area, we denote the basic NMS step as GreedyNMS [Felzenszwalb et al. (2010) ###reference_b7###, Rothe et al. (2014) ###reference_b31###, Hosang et al. (2017) ###reference_b13###] in this paper. The GreedyNMS algorithm, greedily selects high scoring detected windows and iteratively discards spatially close-by less confident neighbours with the assumption that the neighbors are likely to cover the same object. Specifically, all the candidate windows are either selected or rejected based on the following procedure: first, the highest-scored window is marked as retained, and all those overlapping with it by more than some threshold (e.g. 30%) intersection-over-union (IoU) are marked as suppressed; then, the next highest-scored window neither retained nor suppressed is marked as retained, and again all others sufficiently-overlapping candidate windows are marked for rejection. This process is repeated until all windows are marked as either retained or suppressed. The retained windows then constitute the final set of detected proposals.\nAlthough GreedyNMS continues to be the method of choice due to its simplicity, it inherently suffers from significant conceptual shortcomings. GreedyNMS is based on the simple intuition that similar detection windows which are close in spatial sense, should be suppressed. It controls the influence span by a single threshold parameter which is chosen to keep the region of suppression not too wide, since a wide suppression would remove close-by high scoring detected windows that are likely to be false positives that hurt precision. If objects are indeed close to each other, such as persons in crowded scenes, then the windows detected close to each other should be counted as true positives, in which case suppression should be narrow to improve recall. 
Achieving both these targets with a single tuning parameter seems hard and indeed this inherent limitation is the biggest shortcoming of the GreedyNMS routine.\nOne of the seminal works in general object detection was the R-CNN model by Girshick et al. (2013 ###reference_b9###), which replaced the feature extraction and classifier pipeline by a neural network, resulting in almost two times performance gain on Pascal VOC. Another significant improvement was the F-RCNN model by Ren et al. (2015 ###reference_b30###), which absorbed the object proposal generation into the network, while YOLO Redmon et al. (2015 ###reference_b29###) avoided proposals altogether, leading to both speed and quality improvements. A general trend towards end-to-end trainable object detection models has been the norm in recent times. NMS is one step in the object detection pipeline that is based on post-processing. Though a few works have tried to incorporate end-to-end trainable pipelines Hosang et al. (2017 ###reference_b13###); Wan et al. (2015 ###reference_b36###), so far it is not widely accepted. We would like to retain the post-processing nature of NMS in order for our approach to be incorporated in any pipeline.\nIn this work, we propose a principled improvement of the core NMS step by incorporating a DPP cost function in it. This development leads to an overall improvement of the NMS step and can be incorporated to existing NMS implementation with minimal changes. The theoretical guarantees afforded by a DPP based cost function lets us bridge the aforementioned gaps in fundamental ways, namely:\nWe improve the performance of NMS staying in the standard flow, wherein NMS still stays outside the main neural loop in state-of-the-art (SOTA) object detection implementations,\nThe proposed system does not need any additional training as in Hosang et al. (2017 ###reference_b13###); Azadi et al. (2017 ###reference_b1###) or modification of standard cost functions as in Wan et al. (2015 ###reference_b36###).\nthe proposed system works with the same inputs as NMS, namely proposal windows and their score, and introduces a new way to select diverse proposal subsets."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "Related Work",
+ "text": "Wan et al. Wan et al. (2015 ###reference_b36###), proposed to integrate the NMS cost function into the unified loss function of a joint optimization system which had a neural featurizer, a deformable parts model and an NMS block. Since the NMS block was outside the neural loop, this implementation was similar to GreedyNMS, albeit with application dependent loss function. This work mentioned faster RCNN based models but did not use them and hence the baseline is considerably lower than the current day works. Hosang et al. Hosang et al. (2017 ###reference_b13###), propose to absorb the entire NMS step into a neural network. The authors claim that the suppression width parameter can be better estimated by a neural net and hence it should be data dependent rather than an empirically chosen one. Even though this argument has merit, the adoption in state-of-the-art algorithms is still missing. Azadi et al. Azadi et al. (2017 ###reference_b1###) propose a similar method, where they use DPP as an alternative to NMS. However, in their method DPP is implemented as a trainable layer and not as a simple plug and play module.\nInformative subset selection problems arise in many applications where a small number of items must be chosen to represent or cover a much larger set; for instance, text summarization Nenkova et al. (2006 ###reference_b27###); Lin and Bilmes (2010 ###reference_b22###), document and image search Radlinski et al. (2008 ###reference_b28###); Yue and Joachims (2008 ###reference_b37###); Kulesza and Taskar (2011a ###reference_b19###), sensor placement Guestrin et al. (2005 ###reference_b11###), viral marketing Kempe et al. (2005 ###reference_b16###), and many others. Recently, probabilistic models extending determinantal point processes (DPPs) Macchi (1975 ###reference_b25###); Daley and Vere-Jones (2003 ###reference_b4###) were proposed\nfor several such problems Kulesza and Taskar (2010 ###reference_b18###, 2011a ###reference_b19###); Gillenwater et al. (2012 ###reference_b8###). DPP was first used to characterize the Pauli exclusion principle, which states that two identical particles cannot occupy the same quantum state simultaneously Macchi (1975 ###reference_b25###). DPPs offer computationally attractive properties, including exact and efficient computation of marginals Macchi (1975 ###reference_b25###), sampling Hough et al. (2006 ###reference_b14###); Kulesza and Taskar (2011a ###reference_b19###), and (partial) parameter estimation Kulesza and Taskar (2011b ###reference_b20###). DPP has emerged as a powerful method for selecting a diverse subset from a \u201cground set\u201d of items Kulesza and Taskar (2012 ###reference_b21###)."
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "Determinantal Point Processes",
+ "text": "To define a determinantal point process (DPP) let us first consider the definition of a point process itself. A point process on a ground set refers to a probability distribution on finite subsets of . Let be a discrete set represented as , then defines a probability distribution on , the powerset of .\nFor to be called a determinantal process, it should satisfy the following condition for all :\nwhere, is a random subset drawn according to , K is a real, symmetric matrix indexed by the elements of , and is the submatrix obtained from K when only the entries indexed by elements of are considered. K is referred to as the marginal kernel.\nThe above definition of DPP defines in terms of marginal probabilities using K. There exists an alternative definition for a slightly restricted class of DPPs which allow us to model the probability of a subset directly. These are known as L-ensembles Brunel et al. (2017 ###reference_b3###) and are much easier to work with practically. We define using L-ensembles as follows:\nwhere, represents the random variable as earlier, is a real, symmetric matrix indexed by elements of , and is similarly the submatrix of indexed by elements of . To satisfy the fact that probability measures must always be positive, has to be positive semidefinite (psd). The normalization constant for can be obtained in closed form since\nThus, using L-ensembles we get a direct probability distribution on the subsets of as:\nExact MAP inference of DPP is a NP-hard problem Ko et al. (1995 ###reference_b17###). However, approximation of the DPP formulation, notably,\nis a non-monotone submodular function Kulesza and Taskar (2012 ###reference_b21###), which has been the function of choice for most of the work in this domain Gong et al. (2014 ###reference_b10###); Feldman et al. (2018 ###reference_b6###)."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Method",
+ "text": "We propose replacing GreedyNMS in detection pipelines with a DPP proposed in Eq. 5 ###reference_###. Generally in a detection pipeline NMS is applied on final detections to filter them and keep only one detection per object. Faster RCNN, not only performs NMS on the final detections but also on the region proposals returned by the Region Proposal Network (RPN). We posit that the NMS after the RPN stage would gain with diversified selection, since its task is to retain all the informative regions. The second NMS which comes after the softmax stage just filters the boxes obtained for each class independently and hence does not gain with diversity preserving methods. Consequently, we replace the first stage NMS after the RPN layer in this work. As such it is here that we apply DPP. The basic idea is to use DPP to select or filter the proposals instead of NMS. Thus our ground set consists of the proposals returned by the RPN. GreedyNMS uses the box coordinates to compute an intersection over union metric and also the score provided by the RPN to filter the windows. We use the exact same two features for our method. To construct our matrix we make use of 2 features.\nScores for the proposals from the RPN ()\nIntersection over union (IoU) of the proposals ()\nwhere . These features are then combined to form the matrix given by,\nwhose elements are written as follows:\nwhere is a scaling constant provided to bias the selection process towards selecting larger subsets, and the values of , is a column vector with as its element, represents the element-wise exponentiation of , is a matrix composed of , and represents the Hadamard product of matrices. Note that the interaction of the two score and can be combined in many different ways. In this work we use the exponent function to bring it closer to the smooth maximum approximation, along with the large weighting constant 111https://en.wikipedia.org/wiki/Smooth_maximum.\nis positive semidefinite.\nThe constituents of the matrix in the above manner can be proven to be individually positive semidefinite by the following three arguments. a) is positive semidefinite since it is of the form , b) The matrix, also known as the Jaccard similarity matrix, can be shown to be positive definite Bouchard et al. (2013 ###reference_b2###), and c) According to the Schur product theorem222https://en.wikipedia.org/wiki/Schur_product_theorem, the Hadamard product (elementwise multiplication product) of two positive semidefinite matrices is also positive semidefinite. Thus, the product is also positive semidefinite.\n\u220e\nThe final probability of a selecting can now be written as:\nNote that due to the determinant operation, the weighting term is raised to the power , which is the size of the subset to be selected. Explicitly making the subset size influence the probability is important since the marginal gain decreases with increase in subset size. Hence, the weighting term acts as a counter to the diminishing marginal gain, which is due to the sub-modular nature of the objective function.\nTo obtain the set which maximizes the above probability we need to use some approximation technique. One choice is the simple greedy method. Before arriving at the final formulation we need the following lemmas.\nThe principle sub-matrices of a psd matrix are also psd.\nAccording to this lemma any principle submatrix of indexed by the set is also positive semidefinite. 
Hence, leads to all subsets .\nfor a psd matrix is submodular.\nSubmodularity of DPPs can be established by the geometrical argument as shown in Kulesza and Taskar (2012 ###reference_b21###).\n\u220e\nConnecting all the lemmas, we can claim that all principal submatrices of are themselves . Finally, invoking Lemma. 3 ###reference_ma3### and extending it to the current setting, we can maximize to obtain the approximate MAP set. As such the final formulation for DPP based NMS is given by:\nWe employ a greedy algorithm to maximize this cost function, where at every iteration we add the element which has the highest marginal gain with respect to the currently selected set. While greedy algorithms are not optimal in general, for monotone sub-modular problems they have well-defined approximation bounds Kulesza and Taskar (2012 ###reference_b21###).\nOur final algorithm is given as follows:\nWe utilise a heap-based implementation to speed up the algorithm as proposed by Minoux (1978 ###reference_b26###). The additional check for positivity of the marginal gain in the greedy algorithm, ensures that the value of our currently selected set always increases at every iteration."
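The inline math of Eqs. (6) and (7) does not survive extraction here, so the sketch below encodes one plausible reading of the kernel and of the greedy selection the section describes: a quality term exp(lambda * s_i) per proposal, combined with the IoU (Jaccard) similarity matrix J via a Hadamard product, then greedy growth of the selected set while the marginal gain of log det(L_Y) stays positive. The names build_kernel and greedy_dpp_nms, the default lambda, and the eps jitter are our assumptions, not the authors' implementation.

```python
import numpy as np

def build_kernel(scores, J, lam=1.4, eps=1e-6):
    """One plausible reading of Eq. (7): quality term times IoU similarity.

    scores : (n,) RPN objectness scores s_i
    J      : (n, n) IoU (Jaccard) similarity matrix, J_ii = 1
    """
    q = np.exp(lam * np.asarray(scores, dtype=float))  # e^{lam * s_i}
    # Hadamard product of two psd factors is psd (Schur product theorem);
    # the eps * I jitter is only our numerical stabilizer for determinants.
    return np.outer(q, q) * (np.asarray(J, dtype=float) + eps * np.eye(len(q)))

def greedy_dpp_nms(L, max_boxes):
    """Greedily grow Y while the marginal gain in log det(L_Y) is positive."""
    n = L.shape[0]
    selected, cur_logdet = [], 0.0          # log det over the empty set is 0
    for _ in range(max_boxes):
        best_gain, best_i = 0.0, None
        for i in range(n):
            if i in selected:
                continue
            idx = np.array(selected + [i])
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            if sign <= 0:                   # numerically degenerate, skip
                continue
            gain = logdet - cur_logdet
            if gain > best_gain:            # keep only positive marginal gains
                best_gain, best_i = gain, i
        if best_i is None:                  # no window adds diversity or score
            break
        selected.append(best_i)
        cur_logdet += best_gain
    return selected
```

A heap-based lazy-greedy variant in the spirit of Minoux (1978), which the section cites, would cache stale marginal gains and re-evaluate only the top of the heap instead of rescanning all n candidates at every step.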
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "Experiments and Results",
+ "text": "In this section we provide details about the experiments performed and discuss the various results obtained. We work with a standard PyTorch333https://pytorch.org/ version of faster-RCNN444https://github.com/jwyang/faster-rcnn.pytorch and use VGG-16 as the backbone network. We maintain all the default settings to make the experiments as reproducible as possible. All our experiments are subsequently based on replacing the NMS module after the RPN block, with our own proposed method. We perform experiments on MS-COCO Lin et al. (2014 ###reference_b23###) and PASCAL VOC 2007 Everingham et al. (2010 ###reference_b5###) datasets. In all cases we train the network for 6 epochs on the default training splits, which are mentioned in the respective dataset subsections. During training we do not use DPP. We replace the NMS module with DPP during test time. We believe that the merit of existing GreedyNMS is its simplicity and the fact that it does not need to be tuned much for any experiment. Consequently, we propose a similar setting where the default parameter configuration works well for most applications. We evaluate a few variants of our model to understand the different modes of its operation and then converge onto one model with default parameter recommendation.\nThe models in the experiments are as follows:\n: This is the standard Greedy NMS algorithm with a maximum of selected windows. Note that is the default setting in most SOTA object detection pipelines with GreedyNMS.\nDPP: This refers to DPP with bias factor , (Eq.7 ###reference_###), with a maximum of selected boxes.\nFor all of the above models the number of input proposals (the ones returned by the RPN) are limited to a maximum of windows. We present comparison against the previous works which are most similar to our in spirit. Neural-NMS represents the deep network based NMS proposed by Hosang et al. Hosang et al. (2017 ###reference_b13###). They train their own deep network to replace Greedy NMS and plug it in after the detection step of Faster RCNN. This is a deviation from the generic way of using NMS, where it is plugged after the RPN but before the detection stage. MP-NMS refers to the message passing based NMS algorithm proposed by Rothe et al. Rothe et al. (2014 ###reference_b31###). We also compare against the end to end integration of convolution network, deformable parts model and NMS into one unified pipeline, proposed by Wan et al. Wan et al. (2015 ###reference_b36###). Though this method, denoted as CN-DPM-NMS, does not use F-RCNN like network, but the results can still work as a baseline comparison. Finally, LDDP refers to the pipeline proposed by Azadi et al. Azadi et al. (2017 ###reference_b1###) where they use a trainable DPP layer as an alternative to NMS.\nAll experiments were performed on a system with a i7-6850k CPU, a GTX 1080 Ti GPU and 64GB RAM. We implement DPP in C++ using the Eigen3 framework and run it on the CPU. When compared to a basic C++ CPU implementation of NMS we get comparable runtime upto approximately 100 selections for which NMS takes about 0.3s/image whereas DPP takes about 0.5s/image. The runtime of DPP however scales significantly with the number of selected proposals since the complexity involved is approximately , where is the number of proposals selected."
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "MS-COCO",
+ "text": "For the MS-COCO dataset the model was trained on the training and valminusminival data splits and was tested on the minival split. In the results AP0.5 represents average precision (AP) calculated considering 50% overlap with ground truth. AP represents AP averaged over multiple overlap thresholds ranging from 50% to 95% in steps of 5%. The results for multi-class classification are shown in Table. 2 ###reference_###. Results for MS-COCO person detection class has been reported by several authors and hence we also report it separately in Table. 2 ###reference_###."
+ },
+ {
+ "section_id": "4.2",
+ "parent_section_id": "4",
+ "section_name": "PASCAL VOC",
+ "text": "For PASCAL VOC 2007 we perform several experiments. We start off by evaluating Greedy NMS vs several variants of DPP over each class individually. For these experiments Faster-RCNN was trained on the training and validation sets and tested on the test set for PASCAL VOC 2007. For assigning proposed bounding boxes to ground truth detections PASCAL VOC considers overlaps greater than 50% to be correct detections. This evaluation criteria is denoted as AP0.5. Table. 3 ###reference_### shows the results of class wise performance. Average performance across all classes along with comparative methods are shown in Table. 4 ###reference_###.\n###figure_1### ###figure_2###"
+ },
+ {
+ "section_id": "4.3",
+ "parent_section_id": "4",
+ "section_name": "Varying the maximum window and scaling parameters",
+ "text": "We perform more experiments to identify the core strengths of the proposed algorithm. The maximum number of windows returned by the algorithm is a parameter, which has a direct implication on the run-time of the algorithm. As such, the minimum value at which acceptable results are obtained needs to be selected. Keeping , we run the algorithm with different values of . The results are shown in Fig. 2 ###reference_###. Note that, for the setting , our algorithm already beats gNMS300 and is almost at par with gNMS400. This is the key contribution of introducing diversified window selection in the NMS algorithm, wherein, a diverse set of lesser number of proposal windows () outperform a larger set of proposal windows () selected by GreedyNMS.\nSimilarly, we also perform experiments to observe the effect of the scaling parameter on the detection performance. We test different values of while keeping the maximum number of windows fixed. The results are shown in Fig. 2 ###reference_###. The proposed method beats GreedyNMS for for both 300 and 400 region proposal selections.\n###figure_3###"
+ },
+ {
+ "section_id": "4.4",
+ "parent_section_id": "4",
+ "section_name": "IoU vs Recall",
+ "text": "In Rothe et al. (2015 ###reference_b32###) the authors propose evaluating the recall rate of detections at different IoU thresholds to measure how well fitting the selected bounding boxes are. We perform a similar evaluation, where we plot the recall with respect to the ground truth boxes against varying IoU thresholds (Fig. 3 ###reference_###). As NMS/DPP is applied on the RPN proposals in Faster-RCNN, we directly consider these proposals before any bounding box regression for this experiment. The IoU threshold determines whether a predicted bounding box is matched to a ground truth object or not. The AUC scores for the two curves are 0.7575 for Greedy NMS and 0.7869 for the DPP based method. In addition to having higher AUC we also note that the DPP based method becomes especially better when more precise bounding boxes are required (). This indicates that DPP chooses better fitting bounding boxes than Greedy NMS."
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Qualitative Results",
+ "text": "We show a few qualitative results in Fig. 4 ###reference_### using similar parameter settings as used for all the previous results. We select images from the MS-COCO validation set and plot the region boundaries found by the two competing methods, namely gNMS400 and DPP. It is interesting to observe that DPP based selection works well when there is large overlap between two correct detections. DPP was able to remove some extraneous windows, such as the extra person detection for the tennis player blue cluster in Fig. 4 ###reference_###. Similarly, it selects only meaningful windows for the collection of people in the bottom right image in the blue cluster. For images with very simple / few detections, both the methods perform at par. A few examples where NMS still performs better are shown in the green cluster in Fig. 4 ###reference_###.\n###figure_4###"
+ },
+ {
+ "section_id": "6",
+ "parent_section_id": null,
+ "section_name": "Conclusion and Future Work",
+ "text": "We propose a novel integration of DPP based diverse set selection technique into the NMS paradigm. We formulate a principled cost function which uses the same two features which the traditional NMS routines use, and show that this formulation can be driven to improve on NMS accuracy by carefully selecting the bias parameter which promotes larger subsets. The comparative results against Greedy NMS as well as other recent methods prove that the proposed method is working at par or superior than most other methods."
+ }
+ ],
+ "appendix": [],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<div class=\"ltx_flex_figure ltx_flex_table\">\n<div class=\"ltx_flex_cell ltx_flex_size_2\">\n<figure class=\"ltx_figure ltx_figure_panel ltx_parbox ltx_align_middle\" id=\"S4.T2.6\" style=\"width:208.1pt;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.6.6\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.2.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.2.2.2.3\">Model</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.1.1\">AP<sub class=\"ltx_sub\" id=\"S4.T2.1.1.1.1.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.1.1.1.1.1.1\">0.5</span></sub>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T2.2.2.2.2\">AP\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.3.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_tt\" id=\"S4.T2.3.3.3.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.3.3.3.1.1\">gNMS<sub class=\"ltx_sub\" id=\"S4.T2.3.3.3.1.1.1\"><span class=\"ltx_text ltx_font_medium ltx_font_italic\" id=\"S4.T2.3.3.3.1.1.1.1\">300</span></sub></span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T2.3.3.3.2\">47.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T2.3.3.3.3\">27.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S4.T2.4.4.4.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.4.4.1.1\">gNMS<sub class=\"ltx_sub\" id=\"S4.T2.4.4.4.1.1.1\"><span class=\"ltx_text ltx_font_medium ltx_font_italic\" id=\"S4.T2.4.4.4.1.1.1.1\">400</span></sub></span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.4.4.4.2\">48.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.4.4.4.3\">27.4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.5.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S4.T2.5.5.5.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.5.5.5.1.1\">DPP</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.5.5.5.2\">47.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.5.5.5.3\">27.4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.6.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S4.T2.6.6.6.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.6.6.6.1.1\">DPP</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.6.6.6.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.6.6.6.2.1\">48.1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.6.6.6.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.6.6.6.3.1\">27.5</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.6.6.7.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S4.T2.6.6.7.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.6.6.7.1.1.1\">Neural-NMS\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">Hosang et\u00a0al. 
(<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2008.11451v2#bib.bib13\" title=\"\">2017</a>)</cite></span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.6.6.7.1.2\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.6.6.7.1.3\">24.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.6.6.8.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_r\" id=\"S4.T2.6.6.8.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.6.6.8.2.1.1\">LDDP\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">Azadi et\u00a0al. (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2008.11451v2#bib.bib1\" title=\"\">2017</a>)</cite></span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T2.6.6.8.2.2\">32.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T2.6.6.8.2.3\">15.5</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_figure\">Table 1: </span>NMS vs DPP experiments on MS COCO (All Classes)</figcaption>\n</figure>\n</div>\n<div class=\"ltx_flex_cell ltx_flex_size_2\">\n<figure class=\"ltx_figure ltx_figure_panel ltx_parbox ltx_align_middle\" id=\"S4.T2.12\" style=\"width:208.1pt;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.12.6\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.8.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.8.2.2.3\">Model</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T2.7.1.1.1\">AP<sub class=\"ltx_sub\" id=\"S4.T2.7.1.1.1.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.7.1.1.1.1.1\">0.5</span></sub>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T2.8.2.2.2\">AP\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.9.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_tt\" id=\"S4.T2.9.3.3.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.9.3.3.1.1\">gNMS<sub class=\"ltx_sub\" id=\"S4.T2.9.3.3.1.1.1\"><span class=\"ltx_text ltx_font_medium ltx_font_italic\" id=\"S4.T2.9.3.3.1.1.1.1\">300</span></sub></span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T2.9.3.3.2\">69.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T2.9.3.3.3\">40.2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.10.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S4.T2.10.4.4.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.10.4.4.1.1\">gNMS<sub class=\"ltx_sub\" id=\"S4.T2.10.4.4.1.1.1\"><span class=\"ltx_text ltx_font_medium ltx_font_italic\" id=\"S4.T2.10.4.4.1.1.1.1\">400</span></sub></span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.10.4.4.2\">70.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.10.4.4.3\">40.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.11.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S4.T2.11.5.5.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.11.5.5.1.1\">DPP</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.11.5.5.2\">69.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.11.5.5.3\">40.3</td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"S4.T2.12.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S4.T2.12.6.6.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.12.6.6.1.1\">DPP</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.12.6.6.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.12.6.6.2.1\">70.2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.12.6.6.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.12.6.6.3.1\">40.6</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.12.6.7.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_r\" id=\"S4.T2.12.6.7.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.12.6.7.1.1.1\">Neural-NMS\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">Hosang et\u00a0al. (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2008.11451v2#bib.bib13\" title=\"\">2017</a>)</cite></span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T2.12.6.7.1.2\">67.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T2.12.6.7.1.3\">36.9</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_figure\">Table 2: </span>NMS vs DPP experiments on MS COCO (Persons)</figcaption>\n</figure>\n</div>\n</div>\n</figure>",
+ "capture": "Table 1: NMS vs DPP experiments on MS COCO (All Classes)"
+ },
+ "2": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T3.8.8\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T3.8.8.9.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.8.8.9.1.1\"><span class=\"ltx_text\" id=\"S4.T3.8.8.9.1.1.1\" style=\"font-size:70%;\">Model</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T3.8.8.9.1.2\"><span class=\"ltx_text\" id=\"S4.T3.8.8.9.1.2.1\" style=\"font-size:50%;\">aeroplane</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T3.8.8.9.1.3\"><span class=\"ltx_text\" id=\"S4.T3.8.8.9.1.3.1\" style=\"font-size:50%;\">bicycle</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T3.8.8.9.1.4\"><span class=\"ltx_text\" id=\"S4.T3.8.8.9.1.4.1\" style=\"font-size:50%;\">bird</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T3.8.8.9.1.5\"><span class=\"ltx_text\" id=\"S4.T3.8.8.9.1.5.1\" style=\"font-size:50%;\">boat</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T3.8.8.9.1.6\"><span class=\"ltx_text\" id=\"S4.T3.8.8.9.1.6.1\" style=\"font-size:50%;\">bottle</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T3.8.8.9.1.7\"><span class=\"ltx_text\" id=\"S4.T3.8.8.9.1.7.1\" style=\"font-size:50%;\">bus</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T3.8.8.9.1.8\"><span class=\"ltx_text\" id=\"S4.T3.8.8.9.1.8.1\" style=\"font-size:50%;\">car</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T3.8.8.9.1.9\"><span class=\"ltx_text\" id=\"S4.T3.8.8.9.1.9.1\" style=\"font-size:50%;\">cat</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T3.8.8.9.1.10\"><span class=\"ltx_text\" id=\"S4.T3.8.8.9.1.10.1\" style=\"font-size:50%;\">chair</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T3.8.8.9.1.11\"><span class=\"ltx_text\" id=\"S4.T3.8.8.9.1.11.1\" style=\"font-size:50%;\">cow</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_tt\" id=\"S4.T3.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.1.1.1.1\" style=\"font-size:70%;\">gNMS<sub class=\"ltx_sub\" id=\"S4.T3.1.1.1.1.1.1\"><span class=\"ltx_text ltx_font_medium ltx_font_italic\" id=\"S4.T3.1.1.1.1.1.1.1\">300</span></sub></span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T3.1.1.1.2\"><span class=\"ltx_text\" id=\"S4.T3.1.1.1.2.1\" style=\"font-size:70%;\">67.66</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T3.1.1.1.3\"><span class=\"ltx_text\" id=\"S4.T3.1.1.1.3.1\" style=\"font-size:70%;\">77.39</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T3.1.1.1.4\"><span class=\"ltx_text\" id=\"S4.T3.1.1.1.4.1\" style=\"font-size:70%;\">67.15</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" 
id=\"S4.T3.1.1.1.5\"><span class=\"ltx_text\" id=\"S4.T3.1.1.1.5.1\" style=\"font-size:70%;\">54.36</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T3.1.1.1.6\"><span class=\"ltx_text\" id=\"S4.T3.1.1.1.6.1\" style=\"font-size:70%;\">54.43</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T3.1.1.1.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.1.1.7.1\" style=\"font-size:70%;\">78.40</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T3.1.1.1.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.1.1.8.1\" style=\"font-size:70%;\">85.52</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T3.1.1.1.9\"><span class=\"ltx_text\" id=\"S4.T3.1.1.1.9.1\" style=\"font-size:70%;\">85.67</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T3.1.1.1.10\"><span class=\"ltx_text\" id=\"S4.T3.1.1.1.10.1\" style=\"font-size:70%;\">48.45</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T3.1.1.1.11\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.1.1.11.1\" style=\"font-size:70%;\">79.78</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.2.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S4.T3.2.2.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.2.2.1.1\" style=\"font-size:70%;\">gNMS<sub class=\"ltx_sub\" id=\"S4.T3.2.2.2.1.1.1\"><span class=\"ltx_text ltx_font_medium ltx_font_italic\" id=\"S4.T3.2.2.2.1.1.1.1\">400</span></sub></span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.2.2.2.2\"><span class=\"ltx_text\" id=\"S4.T3.2.2.2.2.1\" style=\"font-size:70%;\">68.18</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.2.2.2.3\"><span class=\"ltx_text\" id=\"S4.T3.2.2.2.3.1\" style=\"font-size:70%;\">77.96</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.2.2.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.2.2.4.1\" style=\"font-size:70%;\">67.89</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.2.2.2.5\"><span class=\"ltx_text\" id=\"S4.T3.2.2.2.5.1\" style=\"font-size:70%;\">54.61</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.2.2.2.6\"><span class=\"ltx_text\" id=\"S4.T3.2.2.2.6.1\" style=\"font-size:70%;\">54.72</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.2.2.2.7\"><span class=\"ltx_text\" id=\"S4.T3.2.2.2.7.1\" style=\"font-size:70%;\">78.17</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.2.2.2.8\"><span class=\"ltx_text\" id=\"S4.T3.2.2.2.8.1\" style=\"font-size:70%;\">85.50</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.2.2.2.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.2.2.9.1\" style=\"font-size:70%;\">85.98</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.2.2.2.10\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.2.2.10.1\" style=\"font-size:70%;\">48.61</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.2.2.2.11\"><span class=\"ltx_text\" id=\"S4.T3.2.2.2.11.1\" style=\"font-size:70%;\">79.73</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S4.T3.3.3.3.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.3.3.3.1.1\" 
style=\"font-size:70%;\">DPP</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.3.3.3.2\"><span class=\"ltx_text\" id=\"S4.T3.3.3.3.2.1\" style=\"font-size:70%;\">69.93</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.3.3.3.3\"><span class=\"ltx_text\" id=\"S4.T3.3.3.3.3.1\" style=\"font-size:70%;\">77.22</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.3.3.3.4\"><span class=\"ltx_text\" id=\"S4.T3.3.3.3.4.1\" style=\"font-size:70%;\">65.75</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.3.3.3.5\"><span class=\"ltx_text\" id=\"S4.T3.3.3.3.5.1\" style=\"font-size:70%;\">54.79</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.3.3.3.6\"><span class=\"ltx_text\" id=\"S4.T3.3.3.3.6.1\" style=\"font-size:70%;\">55.43</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.3.3.3.7\"><span class=\"ltx_text\" id=\"S4.T3.3.3.3.7.1\" style=\"font-size:70%;\">78.25</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.3.3.3.8\"><span class=\"ltx_text\" id=\"S4.T3.3.3.3.8.1\" style=\"font-size:70%;\">85.05</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.3.3.3.9\"><span class=\"ltx_text\" id=\"S4.T3.3.3.3.9.1\" style=\"font-size:70%;\">82.40</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.3.3.3.10\"><span class=\"ltx_text\" id=\"S4.T3.3.3.3.10.1\" style=\"font-size:70%;\">47.93</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.3.3.3.11\"><span class=\"ltx_text\" id=\"S4.T3.3.3.3.11.1\" style=\"font-size:70%;\">80.36</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S4.T3.4.4.4.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.4.4.4.1.1\" style=\"font-size:70%;\">DPP</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.4.4.4.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.4.4.4.2.1\" style=\"font-size:70%;\">69.94</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.4.4.4.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.4.4.4.3.1\" style=\"font-size:70%;\">78.51</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.4.4.4.4\"><span class=\"ltx_text\" id=\"S4.T3.4.4.4.4.1\" style=\"font-size:70%;\">65.42</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.4.4.4.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.4.4.4.5.1\" style=\"font-size:70%;\">55.13</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.4.4.4.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.4.4.4.6.1\" style=\"font-size:70%;\">55.49</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.4.4.4.7\"><span class=\"ltx_text\" id=\"S4.T3.4.4.4.7.1\" style=\"font-size:70%;\">77.90</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.4.4.4.8\"><span class=\"ltx_text\" id=\"S4.T3.4.4.4.8.1\" style=\"font-size:70%;\">85.29</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.4.4.4.9\"><span class=\"ltx_text\" id=\"S4.T3.4.4.4.9.1\" style=\"font-size:70%;\">83.53</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.4.4.4.10\"><span class=\"ltx_text\" id=\"S4.T3.4.4.4.10.1\" style=\"font-size:70%;\">48.01</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.4.4.4.11\"><span 
class=\"ltx_text\" id=\"S4.T3.4.4.4.11.1\" style=\"font-size:70%;\">78.20</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.8.8.10.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_tt\" id=\"S4.T3.8.8.10.1.1\"><span class=\"ltx_text\" id=\"S4.T3.8.8.10.1.1.1\" style=\"font-size:70%;\">Model</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T3.8.8.10.1.2\"><span class=\"ltx_text\" id=\"S4.T3.8.8.10.1.2.1\" style=\"font-size:50%;\">diningtable</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T3.8.8.10.1.3\"><span class=\"ltx_text\" id=\"S4.T3.8.8.10.1.3.1\" style=\"font-size:50%;\">dog</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T3.8.8.10.1.4\"><span class=\"ltx_text\" id=\"S4.T3.8.8.10.1.4.1\" style=\"font-size:50%;\">horse</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T3.8.8.10.1.5\"><span class=\"ltx_text\" id=\"S4.T3.8.8.10.1.5.1\" style=\"font-size:50%;\">motorbike</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T3.8.8.10.1.6\"><span class=\"ltx_text\" id=\"S4.T3.8.8.10.1.6.1\" style=\"font-size:50%;\">person</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T3.8.8.10.1.7\"><span class=\"ltx_text\" id=\"S4.T3.8.8.10.1.7.1\" style=\"font-size:50%;\">pottedplant</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T3.8.8.10.1.8\"><span class=\"ltx_text\" id=\"S4.T3.8.8.10.1.8.1\" style=\"font-size:50%;\">sheep</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T3.8.8.10.1.9\"><span class=\"ltx_text\" id=\"S4.T3.8.8.10.1.9.1\" style=\"font-size:50%;\">sofa</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T3.8.8.10.1.10\"><span class=\"ltx_text\" id=\"S4.T3.8.8.10.1.10.1\" style=\"font-size:50%;\">train</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T3.8.8.10.1.11\"><span class=\"ltx_text\" id=\"S4.T3.8.8.10.1.11.1\" style=\"font-size:50%;\">tvmonitor</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.5.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_tt\" id=\"S4.T3.5.5.5.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.5.5.5.1.1\" style=\"font-size:70%;\">gNMS<sub class=\"ltx_sub\" id=\"S4.T3.5.5.5.1.1.1\"><span class=\"ltx_text ltx_font_medium ltx_font_italic\" id=\"S4.T3.5.5.5.1.1.1.1\">300</span></sub></span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T3.5.5.5.2\"><span class=\"ltx_text\" id=\"S4.T3.5.5.5.2.1\" style=\"font-size:70%;\">61.50</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T3.5.5.5.3\"><span class=\"ltx_text\" id=\"S4.T3.5.5.5.3.1\" style=\"font-size:70%;\">78.89</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T3.5.5.5.4\"><span class=\"ltx_text\" id=\"S4.T3.5.5.5.4.1\" style=\"font-size:70%;\">82.14</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T3.5.5.5.5\"><span class=\"ltx_text\" id=\"S4.T3.5.5.5.5.1\" style=\"font-size:70%;\">75.61</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T3.5.5.5.6\"><span class=\"ltx_text\" id=\"S4.T3.5.5.5.6.1\" style=\"font-size:70%;\">77.26</span></td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T3.5.5.5.7\"><span class=\"ltx_text\" id=\"S4.T3.5.5.5.7.1\" style=\"font-size:70%;\">40.65</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T3.5.5.5.8\"><span class=\"ltx_text\" id=\"S4.T3.5.5.5.8.1\" style=\"font-size:70%;\">70.42</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T3.5.5.5.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.5.5.5.9.1\" style=\"font-size:70%;\">63.77</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T3.5.5.5.10\"><span class=\"ltx_text\" id=\"S4.T3.5.5.5.10.1\" style=\"font-size:70%;\">74.94</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T3.5.5.5.11\"><span class=\"ltx_text\" id=\"S4.T3.5.5.5.11.1\" style=\"font-size:70%;\">72.19</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.6.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S4.T3.6.6.6.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.6.6.6.1.1\" style=\"font-size:70%;\">gNMS<sub class=\"ltx_sub\" id=\"S4.T3.6.6.6.1.1.1\"><span class=\"ltx_text ltx_font_medium ltx_font_italic\" id=\"S4.T3.6.6.6.1.1.1.1\">400</span></sub></span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.6.6.6.2\"><span class=\"ltx_text\" id=\"S4.T3.6.6.6.2.1\" style=\"font-size:70%;\">61.10</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.6.6.6.3\"><span class=\"ltx_text\" id=\"S4.T3.6.6.6.3.1\" style=\"font-size:70%;\">78.59</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.6.6.6.4\"><span class=\"ltx_text\" id=\"S4.T3.6.6.6.4.1\" style=\"font-size:70%;\">82.32</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.6.6.6.5\"><span class=\"ltx_text\" id=\"S4.T3.6.6.6.5.1\" style=\"font-size:70%;\">75.39</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.6.6.6.6\"><span class=\"ltx_text\" id=\"S4.T3.6.6.6.6.1\" style=\"font-size:70%;\">77.23</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.6.6.6.7\"><span class=\"ltx_text\" id=\"S4.T3.6.6.6.7.1\" style=\"font-size:70%;\">40.96</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.6.6.6.8\"><span class=\"ltx_text\" id=\"S4.T3.6.6.6.8.1\" style=\"font-size:70%;\">70.16</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.6.6.6.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.6.6.6.9.1\" style=\"font-size:70%;\">63.77</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.6.6.6.10\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.6.6.6.10.1\" style=\"font-size:70%;\">75.28</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.6.6.6.11\"><span class=\"ltx_text\" id=\"S4.T3.6.6.6.11.1\" style=\"font-size:70%;\">71.76</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.7.7.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S4.T3.7.7.7.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.7.7.7.1.1\" style=\"font-size:70%;\">DPP</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.7.7.7.2\"><span class=\"ltx_text\" id=\"S4.T3.7.7.7.2.1\" style=\"font-size:70%;\">63.35</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.7.7.7.3\"><span class=\"ltx_text\" id=\"S4.T3.7.7.7.3.1\" style=\"font-size:70%;\">80.97</span></td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.7.7.7.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.7.7.7.4.1\" style=\"font-size:70%;\">83.10</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.7.7.7.5\"><span class=\"ltx_text\" id=\"S4.T3.7.7.7.5.1\" style=\"font-size:70%;\">75.67</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.7.7.7.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.7.7.7.6.1\" style=\"font-size:70%;\">77.60</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.7.7.7.7\"><span class=\"ltx_text\" id=\"S4.T3.7.7.7.7.1\" style=\"font-size:70%;\">42.22</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.7.7.7.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.7.7.7.8.1\" style=\"font-size:70%;\">71.45</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.7.7.7.9\"><span class=\"ltx_text\" id=\"S4.T3.7.7.7.9.1\" style=\"font-size:70%;\">63.71</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.7.7.7.10\"><span class=\"ltx_text\" id=\"S4.T3.7.7.7.10.1\" style=\"font-size:70%;\">74.89</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.7.7.7.11\"><span class=\"ltx_text\" id=\"S4.T3.7.7.7.11.1\" style=\"font-size:70%;\">72.61</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.8.8.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_r\" id=\"S4.T3.8.8.8.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.8.8.8.1.1\" style=\"font-size:70%;\">DPP</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T3.8.8.8.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.8.8.8.2.1\" style=\"font-size:70%;\">63.91</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T3.8.8.8.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.8.8.8.3.1\" style=\"font-size:70%;\">81.26</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T3.8.8.8.4\"><span class=\"ltx_text\" id=\"S4.T3.8.8.8.4.1\" style=\"font-size:70%;\">83.06</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T3.8.8.8.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.8.8.8.5.1\" style=\"font-size:70%;\">76.17</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T3.8.8.8.6\"><span class=\"ltx_text\" id=\"S4.T3.8.8.8.6.1\" style=\"font-size:70%;\">77.54</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T3.8.8.8.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.8.8.8.7.1\" style=\"font-size:70%;\">42.63</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T3.8.8.8.8\"><span class=\"ltx_text\" id=\"S4.T3.8.8.8.8.1\" style=\"font-size:70%;\">70.34</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T3.8.8.8.9\"><span class=\"ltx_text\" id=\"S4.T3.8.8.8.9.1\" style=\"font-size:70%;\">63.49</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T3.8.8.8.10\"><span class=\"ltx_text\" id=\"S4.T3.8.8.8.10.1\" style=\"font-size:70%;\">75.11</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T3.8.8.8.11\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.8.8.8.11.1\" style=\"font-size:70%;\">72.49</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag 
ltx_tag_table\">Table 3: </span>NMS vs DPP experiments on PASCAL VOC 2007 (Classwise)</figcaption>\n</figure>",
80
+ "capture": "Table 3: NMS vs DPP experiments on PASCAL VOC 2007 (Classwise)"
81
+ },
82
+ "3": {
83
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T4\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T4.5.5\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T4.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T4.1.1.1.2\">Model</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T4.1.1.1.1\">AP<sub class=\"ltx_sub\" id=\"S4.T4.1.1.1.1.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T4.1.1.1.1.1.1\">0.5</span></sub>\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T4.2.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_tt\" id=\"S4.T4.2.2.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.2.2.2.1.1\">gNMS<sub class=\"ltx_sub\" id=\"S4.T4.2.2.2.1.1.1\"><span class=\"ltx_text ltx_font_medium ltx_font_italic\" id=\"S4.T4.2.2.2.1.1.1.1\">300</span></sub></span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T4.2.2.2.2\">69.81</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.3.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S4.T4.3.3.3.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.3.3.3.1.1\">gNMS<sub class=\"ltx_sub\" id=\"S4.T4.3.3.3.1.1.1\"><span class=\"ltx_text ltx_font_medium ltx_font_italic\" id=\"S4.T4.3.3.3.1.1.1.1\">400</span></sub></span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.3.3.3.2\">69.90</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.4.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S4.T4.4.4.4.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.4.4.1.1\">DPP</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.4.4.4.2\">70.13</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.5.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S4.T4.5.5.5.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.5.5.5.1.1\">DPP</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.5.5.5.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.5.5.5.2.1\">70.17</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.5.5.6.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S4.T4.5.5.6.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.5.5.6.1.1.1\">MP-NMS\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">Rothe et\u00a0al. (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2008.11451v2#bib.bib32\" title=\"\">2015</a>)</cite></span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.5.5.6.1.2\">56.14</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.5.5.7.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S4.T4.5.5.7.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.5.5.7.2.1.1\">CN-DPM-NMS\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">Wan et\u00a0al. 
(<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2008.11451v2#bib.bib36\" title=\"\">2015</a>)</cite></span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.5.5.7.2.2\">46.50</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.5.5.8.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_r\" id=\"S4.T4.5.5.8.3.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.5.5.8.3.1.1\">LDDP\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">Azadi et\u00a0al. (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2008.11451v2#bib.bib1\" title=\"\">2017</a>)</cite></span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T4.5.5.8.3.2\">62.21</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 4: </span>Average performance on PASCAL VOC 2007</figcaption>\n</figure>",
84
+ "capture": "Table 4: Average performance on PASCAL VOC 2007"
85
+ }
86
+ },
87
+ "image_paths": {
88
+ "1": {
89
+ "figure_path": "2008.11451v2_figure_1.png",
90
+ "caption": "Figure 1: Comparison of varying the maximum window parameter k\ud835\udc58kitalic_k in our algorithm.\n",
91
+ "url": "http://arxiv.org/html/2008.11451v2/extracted/5680776/images/nms_vs_dpp_at_k_scaled2.png"
92
+ },
93
+ "2": {
94
+ "figure_path": "2008.11451v2_figure_2.png",
95
+ "caption": "Figure 2: Comparison of varying the scaling parameter \u03b1\ud835\udefc\\alphaitalic_\u03b1 in our algorithm. The horizontal dotted lines denote GreedyNMS.\n",
96
+ "url": "http://arxiv.org/html/2008.11451v2/extracted/5680776/images/alpha_vs_apScaled3.png"
97
+ },
98
+ "3": {
99
+ "figure_path": "2008.11451v2_figure_3.png",
100
+ "caption": "Figure 3: IoU vs Recall plot for gNMS400 and DPP5400superscriptsubscriptabsent4005{}_{400}^{5}start_FLOATSUBSCRIPT 400 end_FLOATSUBSCRIPT start_POSTSUPERSCRIPT 5 end_POSTSUPERSCRIPT",
101
+ "url": "http://arxiv.org/html/2008.11451v2/extracted/5680776/images/auc_0_2.png"
102
+ },
103
+ "4": {
104
+ "figure_path": "2008.11451v2_figure_4.png",
105
+ "caption": "Figure 4: Qualitative results (best viewed in color). Blue boxes are produced by our method DPP5400superscriptsubscriptabsent4005{}_{400}^{5}start_FLOATSUBSCRIPT 400 end_FLOATSUBSCRIPT start_POSTSUPERSCRIPT 5 end_POSTSUPERSCRIPT, green boxes are produced by gNMS400. Blue dotted cluster represents results where DPP5400superscriptsubscriptabsent4005{}_{400}^{5}start_FLOATSUBSCRIPT 400 end_FLOATSUBSCRIPT start_POSTSUPERSCRIPT 5 end_POSTSUPERSCRIPT performs better than gNMS400. Brown cluster represents similar performance. Green cluster represents cases where gNMS400 seems to perform better, although the person detection is still superior for DPP5400superscriptsubscriptabsent4005{}_{400}^{5}start_FLOATSUBSCRIPT 400 end_FLOATSUBSCRIPT start_POSTSUPERSCRIPT 5 end_POSTSUPERSCRIPT.",
106
+ "url": "http://arxiv.org/html/2008.11451v2/extracted/5680776/images/CocoCompNew242.png"
107
+ }
108
+ },
109
+ "validation": true,
110
+ "references": [
111
+ {
112
+ "1": {
113
+ "title": "Learning detection with diverse proposals.",
114
+ "author": "Samaneh Azadi, Jiashi Feng, and Trevor Darrell.",
115
+ "venue": "In The IEEE Conference on Computer Vision and Pattern\nRecognition (CVPR), July 2017.",
116
+ "url": null
117
+ }
118
+ },
119
+ {
120
+ "2": {
121
+ "title": "A proof for the positive definiteness of the jaccard index matrix.",
122
+ "author": "Mathieu Bouchard, Anne-Laure Jousselme, and Pierre-Emmanuel Dor\u00e9.",
123
+ "venue": "International Journal of Approximate Reasoning, 54(5):615\u2013626, 2013.",
124
+ "url": null
125
+ }
126
+ },
127
+ {
128
+ "3": {
129
+ "title": "Rates of estimation for determinantal point processes.",
130
+ "author": "Victor-Emmanuel Brunel, Ankur Moitra, Philippe Rigollet, and John Urschel.",
131
+ "venue": "In Satyen Kale and Ohad Shamir, editors, Proceedings of the\n2017 Conference on Learning Theory, volume 65 of Proceedings of\nMachine Learning Research, pages 343\u2013345, Amsterdam, Netherlands, 07\u201310\nJul 2017. PMLR.",
132
+ "url": null
133
+ }
134
+ },
135
+ {
136
+ "4": {
137
+ "title": "An introduction to the theory of point processes. Vol. I.",
138
+ "author": "D. J. Daley and D. Vere-Jones.",
139
+ "venue": "Probability and its Applications (New York). Springer-Verlag, New\nYork, second edition, 2003.",
140
+ "url": null
141
+ }
142
+ },
143
+ {
144
+ "5": {
145
+ "title": "The pascal visual object classes (voc) challenge.",
146
+ "author": "Mark Everingham, Luc Van Gool, Christopher K. I. Williams, John Winn, and\nAndrew Zisserman.",
147
+ "venue": "International Journal of Computer Vision, 88(2):303\u2013338, Jun 2010.",
148
+ "url": null
149
+ }
150
+ },
151
+ {
152
+ "6": {
153
+ "title": "Do less, get more: Streaming submodular maximization with\nsubsampling.",
154
+ "author": "Moran Feldman, Amin Karbasi, and Ehsan Kazemi.",
155
+ "venue": "In Advances in Neural Information Processing Systems, pages\n732\u2013742, 2018.",
156
+ "url": null
157
+ }
158
+ },
159
+ {
160
+ "7": {
161
+ "title": "Object detection with discriminatively trained part-based models.",
162
+ "author": "Pedro F. Felzenszwalb, Ross B. Girshick, David McAllester, and Deva Ramanan.",
163
+ "venue": "IEEE Trans. Pattern Anal. Mach. Intell., 32(9):1627\u20131645, September 2010.",
164
+ "url": null
165
+ }
166
+ },
167
+ {
168
+ "8": {
169
+ "title": "Discovering diverse and salient threads in document collections.",
170
+ "author": "Jennifer Gillenwater, Alex Kulesza, and Ben Taskar.",
171
+ "venue": "In Proceedings of the 2012 Joint Conference on Empirical\nMethods in Natural Language Processing and Computational Natural Language\nLearning, EMNLP-CoNLL \u201912, pages 710\u2013720, Stroudsburg, PA, USA, 2012.\nAssociation for Computational Linguistics.",
172
+ "url": null
173
+ }
174
+ },
175
+ {
176
+ "9": {
177
+ "title": "Rich feature hierarchies for accurate object detection and semantic\nsegmentation.",
178
+ "author": "Ross B. Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik.",
179
+ "venue": "CoRR, abs/1311.2524, 2013.",
180
+ "url": null
181
+ }
182
+ },
183
+ {
184
+ "10": {
185
+ "title": "Diverse sequential subset selection for supervised video\nsummarization.",
186
+ "author": "Boqing Gong, Wei Lun Chao, Kristen L Grauman, and Fei Sha.",
187
+ "venue": "Advances in Neural Information Processing Systems, 3:2069\u20132077, 2014.",
188
+ "url": null
189
+ }
190
+ },
191
+ {
192
+ "11": {
193
+ "title": "Near-optimal sensor placements in gaussian processes.",
194
+ "author": "Carlos Guestrin, Andreas Krause, and Ajit Paul Singh.",
195
+ "venue": "In Proceedings of the 22Nd International Conference on Machine\nLearning, ICML \u201905, pages 265\u2013272, New York, NY, USA, 2005. ACM.",
196
+ "url": null
197
+ }
198
+ },
199
+ {
200
+ "12": {
201
+ "title": "Identity mappings in deep residual networks.",
202
+ "author": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.",
203
+ "venue": "In Bastian Leibe, Jiri Matas, Nicu Sebe, and Max Welling, editors,\nECCV (4), volume 9908 of Lecture Notes in Computer Science,\npages 630\u2013645. Springer, 2016.",
204
+ "url": null
205
+ }
206
+ },
207
+ {
208
+ "13": {
209
+ "title": "Learning non-maximum suppression.",
210
+ "author": "Jan Hosang, Rodrigo Benenson, and Bernt Schiele.",
211
+ "venue": "In CVPR, 2017.",
212
+ "url": null
213
+ }
214
+ },
215
+ {
216
+ "14": {
217
+ "title": "Determinantal processes and independence.",
218
+ "author": "J. Ben Hough, Manjunath Krishnapur, Yuval Peres, and B\u00c3\u00a1lint Vir\u00c3\u00a1g.",
219
+ "venue": "Probab. Surveys, 3:206\u2013229, 2006.",
220
+ "url": null
221
+ }
222
+ },
223
+ {
224
+ "15": {
225
+ "title": "Enhancing camera surveillance using computer vision: a research note.",
226
+ "author": "Haroon Idrees, Mubarak Shah, and Ray Surette.",
227
+ "venue": "Policing, 41:292\u2013307, 04 2018.",
228
+ "url": null
229
+ }
230
+ },
231
+ {
232
+ "16": {
233
+ "title": "Influential nodes in a diffusion model for social networks.",
234
+ "author": "David Kempe, Jon Kleinberg, and \u00c9va Tardos.",
235
+ "venue": "volume 3580, pages 1127\u20131138, 07 2005.",
236
+ "url": null
237
+ }
238
+ },
239
+ {
240
+ "17": {
241
+ "title": "An exact algorithm for maximum entropy sampling.",
242
+ "author": "Chun-Wa Ko, Jon Lee, and Maurice Queyranne.",
243
+ "venue": "Oper. Res., 43(4):684\u2013691, August 1995.",
244
+ "url": null
245
+ }
246
+ },
247
+ {
248
+ "18": {
249
+ "title": "Structured determinantal point processes.",
250
+ "author": "Alex Kulesza and Ben Taskar.",
251
+ "venue": "In J. D. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel,\nand A. Culotta, editors, Advances in Neural Information Processing\nSystems 23, pages 1171\u20131179. Curran Associates, Inc., 2010.",
252
+ "url": null
253
+ }
254
+ },
255
+ {
256
+ "19": {
257
+ "title": "k-dpps: Fixed-size determinantal point processes.",
258
+ "author": "Alex Kulesza and Ben Taskar.",
259
+ "venue": "In Proceedings of the International Conference on Machine\nLearning (ICML), pages 1193\u20131200, 01 2011a.",
260
+ "url": null
261
+ }
262
+ },
263
+ {
264
+ "20": {
265
+ "title": "Learning determinantal point processes.",
266
+ "author": "Alex Kulesza and Ben Taskar.",
267
+ "venue": "In Proceedings of the Twenty-Seventh Conference on Uncertainty\nin Artificial Intelligence, UAI\u201911, pages 419\u2013427, Arlington, Virginia,\nUnited States, 2011b. AUAI Press.",
268
+ "url": null
269
+ }
270
+ },
271
+ {
272
+ "21": {
273
+ "title": "Determinantal point processes for machine learning.",
274
+ "author": "Alex Kulesza and Ben Taskar.",
275
+ "venue": "Foundations and Trends in Machine Learning, 5(23):123\u2013286, 2012.",
276
+ "url": null
277
+ }
278
+ },
279
+ {
280
+ "22": {
281
+ "title": "Multi-document summarization via budgeted maximization of submodular\nfunctions.",
282
+ "author": "Hui Lin and Jeff Bilmes.",
283
+ "venue": "In In Proc. NAACL/HLT, 2010.",
284
+ "url": null
285
+ }
286
+ },
287
+ {
288
+ "23": {
289
+ "title": "Microsoft coco: Common objects in context.",
290
+ "author": "Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva\nRamanan, Piotr Dollar, and Larry Zitnick.",
291
+ "venue": "In ECCV. European Conference on Computer Vision, September\n2014.",
292
+ "url": null
293
+ }
294
+ },
295
+ {
296
+ "24": {
297
+ "title": "Ssd: Single shot multibox detector, 2015.",
298
+ "author": "Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed,\nCheng-Yang Fu, and Alexander C. Berg.",
299
+ "venue": "URL http://arxiv.org/abs/1512.02325.",
300
+ "url": null
301
+ }
302
+ },
303
+ {
304
+ "25": {
305
+ "title": "The coincidence approach to stochastic point processes.",
306
+ "author": "Odile Macchi.",
307
+ "venue": "Advances in Applied Probability, 7(1):83\u2013122, 1975.",
308
+ "url": null
309
+ }
310
+ },
311
+ {
312
+ "26": {
313
+ "title": "Accelerated greedy algorithms for maximizing submodular set\nfunctions.",
314
+ "author": "Michel Minoux.",
315
+ "venue": "In J. Stoer, editor, Optimization Techniques, pages 234\u2013243.\nSpringer Berlin Heidelberg, 1978.",
316
+ "url": null
317
+ }
318
+ },
319
+ {
320
+ "27": {
321
+ "title": "A compositional context sensitive multi-document summarizer:\nexploring the factors that influence summarization.",
322
+ "author": "Ani Nenkova, Lucy Vanderwende, and Kathleen McKeown.",
323
+ "venue": "In Proceedings of SIGIR 2006, January 2006.",
324
+ "url": null
325
+ }
326
+ },
327
+ {
328
+ "28": {
329
+ "title": "Learning diverse rankings with multi-armed bandits.",
330
+ "author": "Filip Radlinski, Robert Kleinberg, and Thorsten Joachims.",
331
+ "venue": "In Proceedings of the International Conference on Machine\nLearning (ICML), January 2008.",
332
+ "url": null
333
+ }
334
+ },
335
+ {
336
+ "29": {
337
+ "title": "You only look once: Unified, real-time object detection, 2015.",
338
+ "author": "Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi.",
339
+ "venue": "URL http://arxiv.org/abs/1506.02640.",
340
+ "url": null
341
+ }
342
+ },
343
+ {
344
+ "30": {
345
+ "title": "Faster r-cnn: Towards real-time object detection with region proposal\nnetworks.",
346
+ "author": "Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun.",
347
+ "venue": "In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett,\neditors, Advances in Neural Information Processing Systems 28, pages\n91\u201399. Curran Associates, Inc., 2015.",
348
+ "url": null
349
+ }
350
+ },
351
+ {
352
+ "31": {
353
+ "title": "Non-maximum suppression for object detection by passing messages\nbetween windows.",
354
+ "author": "Rasmus Rothe, Matthieu Guillaumin, and Luc Van Gool.",
355
+ "venue": "In ACCV, 2014.",
356
+ "url": null
357
+ }
358
+ },
359
+ {
360
+ "32": {
361
+ "title": "Non-maximum suppression for object detection by passing messages\nbetween windows.",
362
+ "author": "Rasmus Rothe, Matthieu Guillaumin, and Luc Van Gool.",
363
+ "venue": "In Daniel Cremers, Ian Reid, Hideo Saito, and Ming-Hsuan Yang,\neditors, Computer Vision \u2013 ACCV 2014, pages 290\u2013306, Cham, 2015.\nSpringer International Publishing.",
364
+ "url": null
365
+ }
366
+ },
367
+ {
368
+ "33": {
369
+ "title": "Overfeat: Integrated recognition, localization and detection using\nconvolutional networks. 2nd international conference on learning\nrepresentations, iclr 2014.",
370
+ "author": "Pierre Sermanet, David Eigen, Xiang Zhang, Michael Mathieu, Robert Fergus, and\nYann LeCun.",
371
+ "venue": "1 2014.",
372
+ "url": null
373
+ }
374
+ },
375
+ {
376
+ "34": {
377
+ "title": "Deep neural networks for object detection.",
378
+ "author": "Christian Szegedy, Alexander Toshev, Dumitru Erhan, and Google Inc.",
379
+ "venue": "In Advances in neural information processing systems, 2013.",
380
+ "url": null
381
+ }
382
+ },
383
+ {
384
+ "35": {
385
+ "title": "Selective search for object recognition.",
386
+ "author": "J. R. R. Uijlings, K. E. A. van de Sande, T. Gevers, and A. W. M. Smeulders.",
387
+ "venue": "International Journal of Computer Vision, 104(2):154\u2013171, 2013.",
388
+ "url": null
389
+ }
390
+ },
391
+ {
392
+ "36": {
393
+ "title": "End-to-end integration of a convolutional network, deformable parts\nmodel and non-maximum suppression.",
394
+ "author": "Li Wan, David Eigen, and Robert Fergus.",
395
+ "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,\nCVPR 2015, volume 07-12-June-2015, pages 851\u2013859. IEEE Computer Society, 10\n2015.",
396
+ "url": null
397
+ }
398
+ },
399
+ {
400
+ "37": {
401
+ "title": "Predicting diverse subsets using structural svms.",
402
+ "author": "Yisong Yue and Thorsten Joachims.",
403
+ "venue": "In ICML, 2008.",
404
+ "url": null
405
+ }
406
+ },
407
+ {
408
+ "38": {
409
+ "title": "Latent hierarchical structural learning for object detection.",
410
+ "author": "Long Zhu, Yuanhao Chen, Alan L. Yuille, and William T. Freeman.",
411
+ "venue": "2010 IEEE Computer Society Conference on Computer Vision and\nPattern Recognition, pages 1062\u20131069, 2010.",
412
+ "url": null
413
+ }
414
+ }
415
+ ],
416
+ "url": "http://arxiv.org/html/2008.11451v2"
417
+ }
20240620/2206.02909v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240620/2208.07540v4.json ADDED
The diff for this file is too large to render. See raw diff
 
20240620/2208.13296v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240620/2210.00898v3.json ADDED
@@ -0,0 +1,629 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "title": "Robust \ud835\udc44-learning Algorithm for Markov Decision Processes under Wasserstein Uncertainty",
3
+ "abstract": "We present a novel -learning algorithm tailored to solve distributionally robust Markov decision problems where the corresponding ambiguity set of transition probabilities for the underlying Markov decision process is a Wasserstein ball around a (possibly estimated) reference measure.\nWe prove convergence of the presented algorithm and provide several examples also using real data to illustrate both the tractability of our algorithm as well as the benefits of considering distributional robustness when solving stochastic optimal control problems, in particular when the estimated distributions turn out to be misspecified in practice.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "The among practitioners popular and widely applied -learning algorithm provides a tractable reinforcement learning methodology to solve Markov decision problems (MDP). The -learning algorithm learns an optimal policy online via observing at each time the current state of the underlying process as well as the reward depending on the current (and possibly next) state when acting according to a (not necessarily) optimal policy and by assuming to act optimally after the next state. The observed rewards determine a function depending on a state-action pair that describes the quality of the chosen action when being in the observed state. After a sufficient amount of observations the function then allows in each state to decide which actions possess the most quality. In this way the -learning algorithm determines an optimal policy.\nThe -learning algorithm was initially proposed in Watkins\u2019 PhD thesis ([57 ###reference_b57###]). [27 ###reference_b27###] and [58 ###reference_b58###] then provided a rigorous mathematical proof of the convergence of the -learning algorithm to the optimal -value function using results from stochastic approximation theory (see e.g.\u2009[16 ###reference_b16###] and [41 ###reference_b41###]). The design of the -learning algorithm as well as the proof of its convergence to the optimal -value both rely on the dynamic programming principle of the corresponding Markov decision problem, which allows to find an optimal policy for the involved infinite horizon stochastic optimal control problem by solving a one time-step optimization problem. We refer to [1 ###reference_b1###], [2 ###reference_b2###], [3 ###reference_b3###], [11 ###reference_b11###], [12 ###reference_b12###], [24 ###reference_b24###], [25 ###reference_b25###], [28 ###reference_b28###], [29 ###reference_b29###], [35 ###reference_b35###], [38 ###reference_b38###], and [55 ###reference_b55###] for various successful applications of the -learning algorithm.\nRecently, there has been a huge focus in the literature starting from the viewpoint that one might have an estimate of the correct transition probability of the underlying Markov decision process, for example through the empirical measure derived from past observed data, but one faces the risk of misspecifying the correct distribution and hence would like to consider a distributionally robust Markov decision process (compare [5 ###reference_b5###], [6 ###reference_b6###], [13 ###reference_b13###],\n[17 ###reference_b17###], [23 ###reference_b23###], [30 ###reference_b30###], [31 ###reference_b31###],\n[32 ###reference_b32###], [37 ###reference_b37###], [39 ###reference_b39###], [47 ###reference_b47###], [48 ###reference_b48###], [52 ###reference_b52###], [56 ###reference_b56###], [59 ###reference_b59###], [61 ###reference_b61###], [62 ###reference_b62###], [64 ###reference_b64###], and [66 ###reference_b66###]), also called Markov decision process under model uncertainty, where one maximizes over the worst-case scenario among all probability measures of an ambiguity set of transition probabilities. 
We also refer to, e.g, the following related distributionally robust stochastic control problems [13 ###reference_b13###], [14 ###reference_b14###], [22 ###reference_b22###], [52 ###reference_b52###], [53 ###reference_b53###], [60 ###reference_b60###], and [63 ###reference_b63###] beyond the MDP setting.\nIndeed, as discussed in [31 ###reference_b31###], there is a common risk in practice that one cannot fully capture the probabilities of the real-world environment due to its complexity and hence the corresponding reinforcement learning algorithm will be trained based on misspecified probabilities. In addition, there is the risk that the environment shifts between the training period and the testing period. This situation can often be observed in practice as the future evolution of random processes rarely behaves exactly according to, for example, the observed historical evolution. One may think as a prime example of financial markets, where several financial crises revealed repeatedly that used models were strongly misspecified. We refer to [31 ###reference_b31###] for further examples, e.g. in robotics, and a further general discussion on the need of considering distributionally robust Markov decision processes and corresponding reinforcement learning based algorithms.\nWhile there has been a lot of contributions in the literature on distributionally robust Markov decision problems, only very recently, to the best of our knowledge, there has been a first -learning algorithm developed in [31 ###reference_b31###] to solve distributionally robust Markov decision problems. More precisely, in [31 ###reference_b31###] the authors recently introduced a -learning algorithm tailored for distributionally robust Markov decision problems where the corresponding ambiguity set of transition probabilities consists of all probability measures which are -close to a reference measure with respect to the Kullback-Leibler (KL) divergence, and prove its convergence to the optimal robust Q-value function.\nThe goal of this paper is to provide a -learning algorithm which can solve distributionally robust Markov decision problems where the corresponding ambiguity set of transition probabilities for the underlying Markov decision process is a Wasserstein ball around a (possibly estimated) reference measure. We obtain theoretical guarantees of convergence of our -learning algorithm to the corresponding optimal robust -value function (see also (12 ###reference_###)). The design of our -learning algorithm combines the dynamic programming principle of the corresponding Markov decision process under model uncertainty (see, e.g., [37 ###reference_b37###]) and a convex duality result for worst-case expectations with respect to a Wasserstein ball (see [4 ###reference_b4###], [9 ###reference_b9###], [19 ###reference_b19###], [34 ###reference_b34###], and [65 ###reference_b65###]).\nFrom an application point of view, considering the Wasserstein distance has the crucial advantage that a corresponding Wasserstein-ball consists of probability measures which do not necessarily share the same support as the reference measure, compared to the KL-divergence, where by definition probability measures within a certain fixed distance to the reference measure all need to have a corresponding support included in the support of the reference measure. 
We highlight that from a structural point of view, our -learning algorithm is different than the one in [31 ###reference_b31###], which roughly speaking comes from the fact that the dual optimization problem with respect to the Wasserstein distance has a different structure than the corresponding one with respect to the KL-divergence.\nWe demonstrate in several examples also using real data that our robust -learning algorithm determines robust policies that outperform non-robust policies, determined by the classical -learning algorithm, given that the probabilities for the underlying Markov decision process turn out to be misspecified.\nThe remainder of the paper is as follows. In Section 2 ###reference_### we introduce the underlying setting of the corresponding Markov decision process under model uncertainty. In Section 3 ###reference_### we present our new -learning algorithm and provide our main result: the convergence of this algorithm to the optimal robust -value function. Numerical examples demonstrating the applicability as well as the benefits of our -learning algorithm compared to the classical -learning algorithm are provided in Section 4 ###reference_###. All proofs and auxiliary results are provided in Appendix A.1 ###reference_### and A.2 ###reference_###, respectively"
10
+ },
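For readers less familiar with the mechanism recalled in the introduction above, the following is a minimal, self-contained sketch of classical (non-robust) tabular Q-learning. The toy interface (`step`, `reward`) and all parameter values are our own illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of classical (non-robust) tabular Q-learning as recalled above.
# The interface (`step`, `reward`) and all parameter values are illustrative
# assumptions, not taken from the paper.
import random

def q_learning(states, actions, step, reward, gamma=0.95, n_iter=10_000,
               eps=0.1, alpha=0.1):
    """step(s, a) samples the next state from the (unknown) transition kernel;
    reward(s, a, s_next) returns the observed reward."""
    Q = {(s, a): 0.0 for s in states for a in actions}
    s = random.choice(states)
    for _ in range(n_iter):
        # epsilon-greedy exploration
        if random.random() < eps:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda b: Q[(s, b)])
        s_next = step(s, a)
        # one-step target: observed reward plus discounted best continuation value
        target = reward(s, a, s_next) + gamma * max(Q[(s_next, b)] for b in actions)
        # only the realized state-action pair is updated
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s_next
    return Q
```

The robust algorithm of this paper modifies the target in this update by replacing the expectation under a single model with a worst-case value over a Wasserstein ball, as sketched after Section 3 below.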
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Setting and Preliminaries",
15
+ "text": "In this section we provide the setting and define necessary quantities to define our -learning algorithm for distributionally robust stochastic optimization problems under Wasserstein uncertainty."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Setting",
21
+ "text": "Optimal control problems are defined on a state space containing all the states an underlying stochastic process can attain. We model this state space as a finite subset where refers to the dimension of the state space.\nWe consider the robust control problem over an infinite time horizon, hence the space of all attainable states in this horizon is given by the infinite Cartesian product\n, with the corresponding -algebra .\nOn we consider a stochastic process that describes the states that are attained over time. To this end, we\nlet be the canonical process on , that is defined by for each , .\nGiven a realization of the underlying stochastic process at some time , the outcome of the next state can be influenced through actions that are executed in dependence of the current state . At any time the set of possible actions is given by a finite set , where is the dimension of the action space (also referred to as control space). The set of admissible policies over the entire time horizon contains all sequences of actions that depend at any time only on the current observation of the state process formalized by\nThe current state and the chosen action influence the outcome of the next state by influencing the probability distribution with which the subsequent state is realized. As we take into account model uncertainty we assume that the correct probability kernel is unknown and hence, for each given state and action , we consider an ambiguity set of probability distributions representing the set of possible probability laws for the next state.\nWe denote by and the set of probability measures on and respectively, and we assume that an ambiguity set of probability measures is modelled by a set-valued map\nHence, if at time the process attains the value , and the agent decides to execute action , then describes the set of possible probability distributions with which the next state is realized. If is single-valued, then the state-action pair determines unambiguously the transition probability, and the setting coincides with the usual setting used for classical (i.e., non-robust) Markov decision processes, compare e.g. [7 ###reference_b7###].\nThe ambiguity set of admissible probability distributions on depends therefore on the initial state and the chosen policy . We define for every initial state and every policy the set of admissible underlying probability distributions of by\nwhere the notation abbreviates\nIn the literature of robust Markov decision processes one refers to as being -rectangular, see, e.g., [26 ###reference_b26###], [45 ###reference_b45###], [59 ###reference_b59###]. This is a common assumption which turns out to be crucial to obtain a dynamic programming principle (see, e.g., [37 ###reference_b37###, Theorem 2.7] and [43 ###reference_b43###]) and therefore to enable efficient and tractable computations. Indeed, if one weakens this assumption the problem becomes computationally more expensive (see, e.g, [8 ###reference_b8###, Section 2]), or can be provably intractable (compare [30 ###reference_b30###]) and therefore cannot be solved by dynamic programming methods. Several approaches to solve robust MDPs w.r.t.\u2009non-rectangular ambiguity sets using methods other than dynamic programming however have recently been proposed, and are described in [21 ###reference_b21###], [30 ###reference_b30###], and [50 ###reference_b50###].\nTo determine optimal policies we reward actions in dependence of the current state-action pair and the subsequent realized state. 
To this end, let be some reward function, and let be a discount factor fulfilling\nThen, our robust optimization problem consists, for every initial value , in maximizing the expected value of under the worst case measure from over all possible policies . More precisely, we aim for every to maximize\n\namong all policies . The value function given by\nthen describes the expectation of under the worst case measure from and under the optimal policy from in dependence of the initial value."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Specification of the Ambiguity Sets",
27
+ "text": "To specify the ambiguity set for each , we first consider for each a reference probability measure. In applications, this reference measure may be derived from observed data. Considering an ambiguity set related to this reference measure then allows to respect deviations from the historic behavior in the future and leads therefore to a more robust optimal control problem that allows to take into account adverse scenarios, compare also [37 ###reference_b37###].\nTo that end, let\nbe a probability kernel, where acts as reference probability measure for each .\nThen, for every we denote by\nthe corresponding probability measure on that determines the distribution of in dependence of initial value and the policy , i.e., we have for any that\nWe provide two specifications of ambiguity sets of probability measures , , as defined in (1 ###reference_###). Both ambiguity sets rely on the assumption that for each given the uncertainty with respect to the underlying probability distribution is modelled through a Wasserstein-ball around the reference probability measure on .\nTo that end, for any , and any , consider the -Wasserstein-distance\nwhere denotes the Euclidean norm on and where denotes the set of joint distributions of and . Since we consider probability measures on a finite space we have a representation of the form\nfor all for ,\nwhere denotes the Dirac-measure at point .\nHence, the -Wasserstein-distance can also be written as\nwhere\nRelying on the above introduced Wasserstein-distance we define two ambiguity sets of probability measures."
28
+ },
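As a side note on the q-Wasserstein distance used above: on a finite state space it is the value of a small linear program over couplings. The following sketch (the use of scipy and the one-dimensional support points are our own assumptions) illustrates the computation.

```python
# Sketch: the q-Wasserstein distance between two measures on a finite
# one-dimensional state space, as a linear program over couplings.
import numpy as np
from scipy.optimize import linprog

def wasserstein_q(mu, nu, points, q=1.0):
    n = len(points)
    cost = np.abs(np.subtract.outer(points, points)) ** q  # |x_i - x_j|^q
    A_eq = []
    for i in range(n):   # row marginals: sum_j pi[i, j] = mu[i]
        row = np.zeros((n, n)); row[i, :] = 1.0; A_eq.append(row.ravel())
    for j in range(n):   # column marginals: sum_i pi[i, j] = nu[j]
        col = np.zeros((n, n)); col[:, j] = 1.0; A_eq.append(col.ravel())
    res = linprog(cost.ravel(), A_eq=np.array(A_eq),
                  b_eq=np.concatenate([mu, nu]), bounds=(0, None))
    return res.fun ** (1.0 / q)

# Example: all mass moved from point 0.0 to point 2.0 has distance 2.0.
print(wasserstein_q(np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]),
                    np.array([0.0, 1.0, 2.0])))
```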
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "Definition of Operators",
33
+ "text": "We consider the following single time step optimization problem\nwhere is the value function defined in (3 ###reference_###),\nand we define the optimal robust -value function by\nNote that if (2 ###reference_###) holds and is either or for all , then the values of are finite, since for all we have\nwhere the finiteness of follows from [37 ###reference_b37###, Theorem 2.7].\nThen we obtain as a consequence of the main result from [37 ###reference_b37###, Theorem 3.1] the following proposition showing that the infinite time horizon distributionally robust optimization problem defined in (3 ###reference_###) can be solved by the consideration of a suitable one time-step fixed point equation, which is the key result that allows to derive -learning type of algorithms.\nAssume that (2 ###reference_###) holds and that the ambiguity set is either given by or for all . Then for all we have\n\nwhere corresponds to the value function of the robust stochastic optimal control problem defined in (3 ###reference_###)."
34
+ },
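The displayed fixed-point equation of this section did not survive extraction. For orientation, the following is the standard form of the robust Bellman equation for the Q-value function that the proposition refers to, written in generic notation; the symbol names (r for the reward, α for the discount factor, 𝒫(x,a) for the ambiguity set) are our assumptions, not the paper's exact display.

```latex
Q^*(x,a) \;=\; \inf_{P \in \mathcal{P}(x,a)} \mathbb{E}_{X' \sim P}\!\Big[\, r(x,a,X') + \alpha \max_{b \in A} Q^*(X',b) \,\Big],
\qquad V(x) \;=\; \max_{a \in A} Q^*(x,a).
```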
35
+ {
36
+ "section_id": "3",
37
+ "parent_section_id": null,
38
+ "section_name": "The Robust -learning Algorithm",
39
+ "text": "In this section we present a novel robust -learning algorithm for the corresponding distributionally robust stochastic optimization problem (3 ###reference_###) and prove its convergence.\nA robust -learning algorithm intends to approximate which involves the minimization over an infinite amount of probability measures. Due to the particular choice of ambiguity sets (6 ###reference_###) and (10 ###reference_###) w.r.t. the Wasserstein-distance, we can transform this minimization problem into a tractable problem using a duality from, e.g., [4 ###reference_b4###].\nTo this end, for a function we define, as in [4 ###reference_b4###, Section 2] or [54 ###reference_b54###, Section 5] its - transform.\nLet , let , and let . Then the -transform of is defined by\nIndeed, the -transform now allows to rephrase the optimization problem involved in the definition of in more tractable terms involving only an expectation with respect to the reference kernel, compare also Proposition 16 ###reference_16###. We use this representation to define our robust -learning algorithm which is summarized in Algorithm 1 ###reference_###.\nInput State space ; Control space ; Reward function ; Discount factor ; Kernel ; Starting point ; Policy ; Cost function of the -transform; Ambiguity parameter ; Parameter related to the Wasserstein-distance; Sequence of learning rates ;\nOutput A sequence\nThe update rule from (16 ###reference_###) in Algorithm 1 ###reference_### means that for all , , we have\n if and else,\ni.e., the update of only takes that state-action pair into account which was realized by the process . Further, note that Algorithm 1 ###reference_### assumes for each time the existence of some such that (15 ###reference_###) holds. The following result ensures that this requirement is indeed fulfilled.\nLet , , let and recall defined in (14 ###reference_###). Further let satisfy for all . Then, there exists some such that\nThe following main result now shows that the function obtained as the output of Algorithm 1 ###reference_### converges indeed against the optimal robust -value function defined in (12 ###reference_###).\nAssume that (2 ###reference_###) holds, and let such that\nLet the ambiguity set be given by for all for some and , and consider222The function is used to determine the -transform in the algorithm, see (15 ###reference_###) and (16 ###reference_###). . Then, we have for all that\nLet for some and finite for some , let the ambiguity set be given by for all for some and , and consider444The function is used to determine the -transform in the algorithm, see (15 ###reference_###) and (16 ###reference_###).\n,\nwhere .\nThen, we have for all that\nNote that condition (17 ###reference_###) can be ensured by considering a sequence of learning rates satisfying\nand is a (positive) recurrent irreducible Markov decision process under .\nNote that in the non-robust case it has been empirically shown that an efficient choice for when applying -learning is given by the so called -greedy policy, see e.g. [15 ###reference_b15###, Chapter 9], [33 ###reference_b33###], or [51 ###reference_b51###]. The -greedy policy is, for , , defined by\nwhere means that a random action is chosen uniformly at random from the finite set . 
A popular modification of the -greedy policy is to start with a relatively large and to decrease the value of over time, see, e.g., [33 ###reference_b33###].\nNote that from the optimal -value function one can infer\n and\n which solves the robust stochastic optimal control problem (3 ###reference_###), compare Proposition 4 ###reference_4### and [37 ###reference_b37###, Theorem 2.7]. Analogously, by considering for a sufficiently large , we can derive an approximation of the optimal action.\nThe following result based on [36 ###reference_b36###] shows that whenever an agent possesses a good enough guess about the true (but to her unknown) probability kernel so that it is contained in the ambiguity set, one can\nbound the difference of the values of the robust and non-robust Markov decision problems.\nThis is important since and , hence the following result also provides an upper bound on the sub-optimality of the performance of our robust -learning algorithm. We see that it can be controlled to be arbitrarily small when , as long as the agent possesses a good enough guess for\n as discussed above.\nNote that compared to [36 ###reference_b36###], no regularity assumptions on the map nor on the reward function are necessary due to the finiteness of both the state and action space.\nLet , , and let\nwith\nwhere , , for all . Moreover, assume that\nthe discount factor satisfies (2.2) as well as , where\nThen for any we have\nwhere"
40
+ },
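To make the role of the c-transform concrete, the following sketch computes the worst-case one-step target over a Wasserstein ball via the duality described above, maximizing over the dual variable with an unconstrained BFGS solver as in Section 4.1. All names, the one-dimensional support, and the treatment of the constraint λ ≥ 0 via abs() are our own illustrative assumptions, not the paper's exact pseudo-code.

```python
# Sketch of the robust one-step target via Wasserstein duality and the c-transform:
# for f(z) = r(s, a, z) + gamma * max_b Q(z, b),
#   inf over the ball = sup_{lam >= 0} ( -lam * eps**q
#                                        + E_ref[ min_z ( f(z) + lam * |y - z|**q ) ] ).
import numpy as np
from scipy.optimize import minimize

def robust_q_target(Q, s, a, support, ref_probs, reward, actions,
                    eps, q=1.0, gamma=0.95):
    f = np.array([reward(s, a, z) + gamma * max(Q[(z, b)] for b in actions)
                  for z in support])
    pts = np.array(support, dtype=float)

    def neg_dual(x):
        lam = abs(x[0])  # BFGS is unconstrained; fold in lam >= 0 via abs()
        # c-transform of f, evaluated at every support point of the reference law
        f_lam = np.min(f[None, :] + lam * np.abs(pts[:, None] - pts[None, :]) ** q,
                       axis=1)
        return lam * eps ** q - ref_probs @ f_lam

    res = minimize(neg_dual, x0=np.array([1.0]), method="BFGS")
    return -res.fun  # worst-case value of f over the Wasserstein ball
```

The tabular update would then take the familiar incremental form, e.g. Q[(s, a)] += alpha_t * (robust_q_target(...) - Q[(s, a)]), with only the realized state-action pair updated.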
41
+ {
42
+ "section_id": "4",
43
+ "parent_section_id": null,
44
+ "section_name": "Numerical Examples",
45
+ "text": "In this section we provide three numerical examples that illustrate how the robust -learning Algorithm 1 ###reference_### can be applied to specific problems. The examples highlight that a distributionally robust approach can outperform non-robust approaches whenever the assumed underlying distribution of the non-robust Markov -learning approach turns out to be misspecified during the testing period.\nThe selection of examples in this section is intended to give a small impression on the broad range of different applications of -learning algorithms for stochastic optimization problems.\nWe refer to [7 ###reference_b7###], [15 ###reference_b15###], and [24 ###reference_b24###] for an overview on several applications in finance and to [49 ###reference_b49###] for a range of applications outside the world of finance."
46
+ },
47
+ {
48
+ "section_id": "4.1",
49
+ "parent_section_id": "4",
50
+ "section_name": "On the Implementation",
51
+ "text": "To apply the numerical method from Algorithm 1 ###reference_###, we use for all of the following examples a\ndiscount factor of , an -greedy policy with (compare Remark 9 ###reference_9###), , and as a sequence of learning rates we use for . Moreover, we train all implementations with iterations. The parameter from (15 ###reference_###) is determined by maximizing the right-hand-side of (15 ###reference_###) with a numerical solver relying on the Broyden\u2013-Fletcher\u2013-Goldfarb\u2013-Shanno (BFGS) algorithm ([10 ###reference_b10###], [18 ###reference_b18###], [20 ###reference_b20###], [44 ###reference_b44###]).\nFurther details of the implementation can be found under https://github.com/juliansester/Wasserstein-Q-learning ###reference_n-Q-learning###."
52
+ },
53
+ {
54
+ "section_id": "4.2",
55
+ "parent_section_id": "4",
56
+ "section_name": "Examples",
57
+ "text": "We consider an agent playing the following game:\nAt each time the agent observes the result of coins that either show heads (encoded by ) or tails (encoded by ). The state at time is then given by the sum of the heads observed in the coins, i.e., we have . At each time the agent can bet whether the sum of the heads of the next throw strictly exceeds the previous sum (i.e. ), or whether it is strictly smaller (i.e. ).\nIf the agent is correct, she gets , if the agent is wrong she has to pay . The agent also has the possibility not to play. We model this by considering the reward function:\n\nwhere the possible actions are given by , where for example corresponds to betting .\nWe then rely on Setting 1.) from Section 2.2 ###reference_### and consider as a reference measure a binomial distribution with , i.e.,\n\nWe then define, according to Setting 1.) from Section 2.2 ###reference_###, an ambiguity set, in dependence of , by\nfor every .\nLet . Then, we denote the cumulative distribution function of a -distributed random variable by . Then we compute for the -Wasserstein distance that\nwhere the first equality of (24 ###reference_###) follows e.g. from [42 ###reference_b42###, Equation (3.5)] and the second equality of (24 ###reference_###) follows since for all .\nThis means that all binomial distributions with are contained in the ambiguity set555We highlight that of course the ambiguity set not only contains binomial distributions.. The calculation from (24 ###reference_###) gives a good indication how choosing a different value of may influence the measures contained in the ambiguity set. We then train actions according to the robust -learning approach proposed in Algorithm 1 ###reference_### for different values of , compare also Remark 10 ###reference_10###. Additionally we train an action according to the classical non-robust -learning approach, see, e.g., [57 ###reference_b57###], where we assume that the underlying process develops according to the reference measure . We obtain after applying Algorithm 1 ###reference_### the strategies depicted in Table 1 ###reference_###.\nIn particular, we see that in comparison with the non-robust action , the robust actions behave more carefully where a larger value of corresponds to a more careful behavior, which can be clearly seen for , in which case the agent decides not to play for every realization of the state.\nThen, we test the profit of the resultant actions and by playing rounds of the game according to these actions. For simulating the rounds we assume an underlying binomial distribution with a fix probability for heads which we vary from to .\nWe depict the cumulated profits of the considered actions in Table 2 ###reference_###.\nWe observe that if the probability for heads is similar as probability for heads in the reference measure (), then the non-robust approach (w.r.t. ) outperforms the robust approaches. If however the model with which the non-robust action was trained was clearly misspecified then outperforms . 
More precisely, the larger the degree of misspecification the more favorable it becomes to choose a larger .\nThis can be well explained by the choice of the ambiguity set that covers, according to (24 ###reference_###), the more measures under which we test, the larger we choose .\nThis simple example showcases that if in practice one is uncertain about the correct law according to which the state process evolves and one faces the risk of misspecifying the probabilities, then it can be advantageous to rely on a distributionally robust approach, whereas the choice of the radius of the Wasserstein-ball is extremely important as it corresponds to the degree of misspecification one wants to be robust against.\nWe reconsider an example of a supply-chain model provided in [31 ###reference_b31###, Section 4]. In this example we have for some the state space representing the possible goods in the inventory and the action space representing the possible goods we can order. The reward function is defined as the negative of the costs that are composed of holding costs and fixed ordering costs depending on parameters and on the demand which is, for the reference measure, uniformly distributed on , see [31 ###reference_b31###, Section 4] for more details.\nIn the setting described in [31 ###reference_b31###, Section 4], the optimal non-robust strategy (w.r.t. the reference measure) given current number of goods is while we compute for a Wasserstein-uncertainty parameter an optimal robust strategy . The robust strategy computed in [31 ###reference_b31###, Section 4] that takes uncertainty w.r.t.\u2009Kullback\u2013Leibler distance in account is given by .\nAs in [31 ###reference_b31###, Figure 1], we evaluate the strategies on a distribution which does not coincide with the reference measure. To this end, we follow the example from [31 ###reference_b31###, Section 4] and consider a perturbed uniform distribution depending on parameters and .\nWith parameter we compute after evaluation on iterations the costs depicted in Figure 1 ###reference_###, in dependence of the parameter . The figure shows that for this particular example the Wasserstein approach leads for all values that are considered, except for , to smaller costs than the approach provided in [31 ###reference_b31###, Section 4]. Moreover, since the true distribution does not coincide with the reference distribution, the robust strategies can outperform the non-robust ones (defined w.r.t. the reference distribution).\n###figure_1### We study the problem of predicting the movement of stock prices. We aim to predict whether in the next time step the return of an underlying stock is strongly negative (encoded by ), slightly negative (encoded by ), slightly positive (encoded by ), or strongly positive (encoded by ). Hence the space of the numerically encoded returns is given by\n\nWe want to rely our prediction for the movement of the next return on the last values. Hence, we consider, in line with the setting outlined in (7 ###reference_###)\n\nThe space of actions is modelled by\n\nas the actions correspond to the future returns that are predicted. To construct a reference measure, we consider the historic evolution of the (numerically encoded) returns of the underlying stock. This time series is denoted by for some , see also Figure 2 ###reference_### for an illustration.\nWe then define for some small666Note that is only introduced to avoid a division by . 
the set-valued map\n\nwhere for we define\nas well as777Note that is defined in (8 ###reference_###).\nThis means the construction of relies, according to (26 ###reference_###), on the relative frequency of the sequence in the time series of past realized returns . Equation (25 ###reference_###) is then applied to convert the frequencies to probabilities.\nThen, as a reference measure we consider, as in (9 ###reference_###), the set-valued map\nMoreover, as a reward function we consider888Here , and hence, denotes the last component of .\n\ni.e., we reward only correct predictions.\nWe apply the setting described above to real data. To this end, we consider as series of realized returns the daily returns of the stock of Apple in the time horizon from January until September and hence we take into account daily returns.\nTo encode the observed market returns to values in , we distinguish between small returns and large returns by saying that a daily return is strongly positive if it is larger than . Analogously a daily return is strongly negative if smaller than . This leads to the distribution of returns as depicted in Table 3 ###reference_###.\nWe then train a non-robust action according to the classical non-robust -learning algorithm ([58 ###reference_b58###]) as well as robust actions according to Algorithm 1 ###reference_### that takes into account an ambiguity set defined in (10 ###reference_###) with . Moreover, for comparison, we consider a trivial action which always, independent of the state-action pair, predicts since, according to Table 3 ###reference_###, is the most frequent appearing value in the time series .\nWe then evaluate the trained actions, in a small backtesting study, on realized daily returns of Apple that occurred after the end of the training period. To this end, we consider an evaluation period from October 2018 until February consisting of daily returns that are distributed according to Table 4 ###reference_###.\nWe observe that in the evaluation period, in contrast to the training period, the large negative returns impose the largest class of appearing returns. Overall the distribution is significantly different from the distribution of the classes on the training data. We illustrate in Table 5 ###reference_### the results of predictions of the actions evaluated in the evaluation period, and we observe that indeed the robust action outperforms the other two actions clearly in this period where the distribution of returns significantly differs from the distributions of the returns on which the actions were trained.\nThis showcases again that if there is the risk that the underlying distribution on which the actions were trained turns out to be misspecified, then it can be advantageous to use a robust approach.\nFinancial support by the MOE AcRF Tier 1 Grant RG74/21 and by the Nanyang Assistant Professorship Grant (NAP Grant) Machine Learning based Algorithms in Finance and Insurance is gratefully acknowledged."
58
+ }
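A quick numerical check of the Wasserstein computation (24) in the coin-game example: for distributions supported on the integers, the 1-Wasserstein distance equals the sum of absolute CDF differences, and for two binomials with the same number of trials (one stochastically dominating the other) this reduces to n·|p − 1/2|. The choice n = 10 below is our reading of the state columns 0–10 in Table 1, not a value confirmed by the text.

```python
# Sketch verifying W_1(Bin(n, 1/2), Bin(n, p)) = n * |p - 1/2| numerically.
from scipy.stats import binom

def w1_binomials(n, p0, p1):
    # For integer-supported laws on the line: W_1 = sum_k |F(k) - G(k)|.
    return sum(abs(binom.cdf(k, n, p0) - binom.cdf(k, n, p1)) for k in range(n + 1))

n = 10
for p in [0.5, 0.55, 0.6, 0.7]:
    print(p, round(w1_binomials(n, 0.5, p), 6), n * abs(p - 0.5))  # columns agree
```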
59
+ ],
60
+ "appendix": [
61
+ {
62
+ "section_id": "Appendix 1",
63
+ "parent_section_id": null,
64
+ "section_name": "Appendix A Auxiliary Results and Proofs",
65
+ "text": "In Section A.1 ###reference_### we provide several useful results which then allow in Section A.2 ###reference_### to prove the main result from Section 3 ###reference_###.\nTo establish convergence of our -learning algorithm that was presented in Section 3 ###reference_### we will make use of the following auxiliary result from stochastic approximation theory which was developed to prove the convergence of the classical -learning algorithm. We refer to [27 ###reference_b27###, Section 3] for a discussion of the advantage of the following result compared to classical results from stochastic approximation such as, e.g., [16 ###reference_b16###].\nNote that for any , we write\nLet be a probability measure on , and consider a family of stochastic processes , , satisfying for all\n.\nLet be a sequence of increasing -algebras such that for all the random variables and are -measurable and such that\n, , and are -measurable for all .\nFurther assume that the following conditions hold.\n, , -almost surely for all , .\nThere exists such that -almost surely for all .\nThere exists such that -almost surely for all .\nThen, -almost surely for all .\nNext, as the following proposition shows, the -transform allows to compute worst case expectations with respect to probability measures contained in the Wasserstein-ball by computing its dual which solely depends on the center of the Wasserstein-ball.\nLet , let and , let be the ambiguity sets of probability measures defined in (6 ###reference_###) and (10 ###reference_###), and let be defined as in Theorem 7 ###reference_7###.\nThen, we have for every that\nIn addition, let for some , and finite for some . Moreover, assume that there exists some probability kernel such for all we have . Then, we have for every that\nProof of Proposition 16 ###reference_16###\nIn case (i), the assertion follows by an application of the duality result from [4 ###reference_b4###, Theorem 2.4] (with the specifications , in the notation of [4 ###reference_b4###, Theorem 2.4], see also [4 ###reference_b4###, Example 2.5]). More precisely, by [4 ###reference_b4###, Theorem 2.4], [4 ###reference_b4###, Example 2.5] and by the definition of we have for all that\nTo show (ii), we observe that in the notation of [4 ###reference_b4###], we have for that\nHence, we have\n if and only if for some with . Moreover, we see that is indeed a cost function in the sense of [4 ###reference_b4###].\nThis implies by [4 ###reference_b4###, Theorem 2.4] and by the definition of that for all we have\nNext, consider the operator which is defined for any by\nWe derive for the following form of the Bellman-equation.\nAssume that (2 ###reference_###) holds and let the ambiguity set be either or , defined in (6 ###reference_###) and (10 ###reference_###). Then the following equation holds true for the optimal -value function defined in (12 ###reference_###):\nProof of Lemma 17 ###reference_17###\nThis follows directly by definition of and by Proposition 4 ###reference_4###. Indeed, let . Then, we have\nMoreover, we observe that the operator is a contraction with respect to the supremum norm defined in (28 ###reference_###).\nFor any maps , , we have\nProof of Lemma 18 ###reference_18###\nConsider any maps , . Then, we have for all that\nwhich implies the assertion by taking the supremum with respect to the arguments of . 
\u220e\nIn this section we provide the proofs of the results from Section 2 ###reference_### and Section 3 ###reference_###.\nProof of Proposition 4 ###reference_4###\nThe first equality follows by definition of . For the second equality we\nwant to check that [37 ###reference_b37###, Assumption 2.2] and [37 ###reference_b37###, Assumption 2.4] hold true to be able to\napply [37 ###reference_b37###, Theorem 3.1]. [37 ###reference_b37###, Assumption 2.2] is fulfilled (for and in the notation of [37 ###reference_b37###, Assumption 2.2]) according to [37 ###reference_b37###, Proposition 3.1] in the case , and according to [37 ###reference_b37###, Proposition 3.3] in the case . To verify [37 ###reference_b37###, Assumption 2.4] (i), note that is continuous since and are finite (endowed with the discrete topology). To show [37 ###reference_b37###, Assumption 2.4] (ii) note that for all and we have\n\nwith .\nSimilarly, to show [37 ###reference_b37###, Assumption 2.4] (iii), we observe that for all and all we have\ni.e., in the notation of [37 ###reference_b37###, Assumption 2.4] we have . To verify [37 ###reference_b37###, Assumption 2.4] (iv) we see that, since , we can choose in the notation of [37 ###reference_b37###, Assumption 2.2] (ii) and hence with (2 ###reference_###) we get\n\nas required. Hence, the result follows from [37 ###reference_b37###, Theorem 3.1].\n\u220e\nProof of Lemma 6 ###reference_6###\nFor any we have by definition of the -transform\nTherefore, since is finite, the map\n\nis continuous. Hence, the assertion of Lemma 6 ###reference_6### follows once we have shown that . To that end, note that as, by assumption, for all , we have that\nProof of Theorem 7 ###reference_7###\nLet .\nAssume that either and , or and . Then, we show for all \n\nwhich shows simultaneously both (i) and (ii).\nTo that end, let be fixed. Then we rearrange the terms in (16 ###reference_###) and write\nwhere is as defined in (15 ###reference_###), see also Lemma 6 ###reference_6### for its existence.\nNote that by construction for all .\nWe define for every the map\n\nNote that indeed, as for all we have as well as is finite (compare (13 ###reference_###)), we directly conclude the finiteness of for all .\nMoreover, we obtain by (30 ###reference_###) and by using the relation that\nNext, we define for every the random variable\n\nwhich by (13 ###reference_###) is finite for all . We consider the filtration with\n\nand being the trivial sigma-algebra. Note that, in particular, , and are -measurable for all . Moreover, we have by (5 ###reference_###) and by Proposition 16 ###reference_16### that\nThus, (14 ###reference_###), (29 ###reference_###), and\nLemma 17 ###reference_17### show that\nHence it follows with Lemma 18 ###reference_18### that\nwhere the norm is defined in (28 ###reference_###).\nNext, recall that . Note that by (14 ###reference_###), by the -transform from Definition 5 ###reference_5###, and since , we have for all that\nThe latter expression coincides with\nwhich implies\nand similarly, since for all ,\nWe define . Then, by using Popoviciu\u2019s inequality on variances999Popoviciu\u2019s inequality (see [40 ###reference_b40###] or [46 ###reference_b46###]) states that for all random variables on a probability space satisfying for some we have . 
applied to the bounds , computed in (33 ###reference_###) and (34 ###reference_###), and by using the inequality which holds for all , we see for every that\nThis means the assumptions of Lemma 15 ###reference_15### are fulfilled, and we obtain that for -almost surely, which implies, by definition of , that for -almost surely.\n\u220e\nProof of Proposition 11 ###reference_11###\nThe conditions of [36 ###reference_b36###, Assumption 2.1-2.4] are satisfied w.r.t.\u2009 and defined in (20 ###reference_###) and (22 ###reference_###) since here both the state and action space are finite. Hence the result follows from [36 ###reference_b36###, Theorem 3.1].\n\u220e"
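Reading aid (not part of the uploaded JSON): the display formulas of the section above were lost in extraction. For orientation only, the Wasserstein duality that Proposition 16 invokes via [4, Theorem 2.4] has the following schematic form; the notation here is ours, not necessarily the paper's.

```latex
% p-Wasserstein ball of radius eps around the center measure mu-hat;
% the inner supremum is the transform referred to in the text (cf. [4, 9]).
\sup_{Q \,:\, W_p(Q,\hat{\mu}) \le \varepsilon} \int f \,\mathrm{d}Q
  \;=\; \inf_{\lambda \ge 0} \Big( \lambda \varepsilon^p
  + \int \sup_{y} \big( f(y) - \lambda\, d(x,y)^p \big) \,\mathrm{d}\hat{\mu}(x) \Big)
```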
66
+ }
67
+ ],
68
+ "tables": {
69
+ "1": {
70
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T1.5.5\" style=\"width:216.8pt;height:79pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-15.0pt,5.5pt) scale(0.878326557848873,0.878326557848873) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.5.5.5\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"S4.T1.1.1.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.1.1.1.1.2\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.1.1.1.1.2.1\">0</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.1.1.1.1.3\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.1.1.1.1.3.1\">1</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.1.1.1.1.4\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.1.1.1.1.4.1\">2</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.1.1.1.1.5\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.1.1.1.1.5.1\">3</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.1.1.1.1.6\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.1.1.1.1.6.1\">4</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.1.1.1.1.7\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.1.1.1.1.7.1\">5</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.1.1.1.1.8\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.1.1.1.1.8.1\">6</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.1.1.1.1.9\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.1.1.1.1.9.1\">7</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.1.1.1.1.10\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.1.1.1.1.10.1\">8</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.1.1.1.1.11\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.1.1.1.1.11.1\">9</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.1.1.1.1.12\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.1.1.1.1.12.1\">10</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.2.2.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.2.2.2.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.2.2.2.2.2\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.2.2.2.2.2.1\">1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.2.2.2.2.3\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.2.2.2.2.3.1\">1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.2.2.2.2.4\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.2.2.2.2.4.1\">1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.2.2.2.2.5\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.2.2.2.2.5.1\">1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.2.2.2.2.6\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.2.2.2.2.6.1\">1</span></td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_t\" id=\"S4.T1.2.2.2.2.7\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.2.2.2.2.7.1\">0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.2.2.2.2.8\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.2.2.2.2.8.1\">-1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.2.2.2.2.9\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.2.2.2.2.9.1\">-1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.2.2.2.2.10\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.2.2.2.2.10.1\">-1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.2.2.2.2.11\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.2.2.2.2.11.1\">-1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.2.2.2.2.12\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.2.2.2.2.12.1\">-1</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.3.3.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.3.3.3.3.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.3.3.2\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.3.3.3.3.2.1\">1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.3.3.3\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.3.3.3.3.3.1\">1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.3.3.4\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.3.3.3.3.4.1\">1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.3.3.5\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.3.3.3.3.5.1\">0</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.3.3.6\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.3.3.3.3.6.1\">0</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.3.3.7\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.3.3.3.3.7.1\">0</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.3.3.8\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.3.3.3.3.8.1\">0</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.3.3.9\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.3.3.3.3.9.1\">0</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.3.3.10\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.3.3.3.3.10.1\">-1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.3.3.11\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.3.3.3.3.11.1\">-1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.3.3.12\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.3.3.3.3.12.1\">-1</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.4.4.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.4.4.4.4.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.4.4.4.2\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.4.4.4.4.2.1\">1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.4.4.4.3\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.4.4.4.4.3.1\">1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.4.4.4.4\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.4.4.4.4.4.1\">0</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.4.4.4.5\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.4.4.4.4.5.1\">0</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.4.4.4.6\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.4.4.4.4.6.1\">0</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.4.4.4.7\"><span class=\"ltx_text 
ltx_font_italic\" id=\"S4.T1.4.4.4.4.7.1\">0</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.4.4.4.8\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.4.4.4.4.8.1\">0</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.4.4.4.9\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.4.4.4.4.9.1\">0</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.4.4.4.10\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.4.4.4.4.10.1\">0</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.4.4.4.11\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.4.4.4.4.11.1\">-1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.4.4.4.12\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.4.4.4.4.12.1\">-1</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.5.5.5.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S4.T1.5.5.5.5.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.5.5.5.5.2\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.5.5.5.5.2.1\">0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.5.5.5.5.3\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.5.5.5.5.3.1\">0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.5.5.5.5.4\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.5.5.5.5.4.1\">0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.5.5.5.5.5\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.5.5.5.5.5.1\">0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.5.5.5.5.6\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.5.5.5.5.6.1\">0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.5.5.5.5.7\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.5.5.5.5.7.1\">0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.5.5.5.5.8\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.5.5.5.5.8.1\">0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.5.5.5.5.9\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.5.5.5.5.9.1\">0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.5.5.5.5.10\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.5.5.5.5.10.1\">0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.5.5.5.5.11\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.5.5.5.5.11.1\">0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.5.5.5.5.12\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.5.5.5.5.12.1\">0</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_font_italic\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_upright\" id=\"S4.T1.26.1.1\">Table 1</span>: </span>The trained actions , , , and in dependence of the realized state at time .</figcaption>\n</figure>",
71
+ "capture": "Table 1: The trained actions , , , and in dependence of the realized state at time ."
72
+ },
73
+ "2": {
74
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T2.4.4\" style=\"width:216.8pt;height:45.2pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-107.4pt,22.4pt) scale(0.502427545620262,0.502427545620262) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.4.4.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S4.T2.1.1.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.1.1.1.1.2\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.1.1.1.1.2.1\">0.1</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.1.1.1.1.3\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.1.1.1.1.3.1\">0.2</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.1.1.1.1.4\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.1.1.1.1.4.1\">0.3</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.1.1.1.1.5\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.1.1.1.1.5.1\">0.4</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.1.1.1.1.6\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.1.1.1.1.6.1\">0.5</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.1.1.1.1.7\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.1.1.1.1.7.1\">0.6</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.1.1.1.1.8\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.1.1.1.1.8.1\">0.7</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.1.1.1.1.9\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.1.1.1.1.9.1\">0.8</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.1.1.1.1.10\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.1.1.1.1.10.1\">0.9</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.4.4.4.5.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.4.4.4.5.1.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.4.4.4.5.1.1.1\">Non-Robust</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.4.4.5.1.2\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.4.4.4.5.1.2.1\">-31386</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.4.4.5.1.3\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.4.4.4.5.1.3.1\">-18438</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.4.4.5.1.4\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.4.4.4.5.1.4.1\">-1567</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.4.4.5.1.5\"><span class=\"ltx_text ltx_font_bold ltx_font_italic\" id=\"S4.T2.4.4.4.5.1.5.1\">22892</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.4.4.5.1.6\"><span class=\"ltx_text ltx_font_bold ltx_font_italic\" id=\"S4.T2.4.4.4.5.1.6.1\">35082</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.4.4.5.1.7\"><span class=\"ltx_text ltx_font_bold ltx_font_italic\" id=\"S4.T2.4.4.4.5.1.7.1\">22956</span></td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.4.4.5.1.8\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.4.4.4.5.1.8.1\">-656</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.4.4.5.1.9\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.4.4.4.5.1.9.1\">-18374</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.4.4.5.1.10\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.4.4.4.5.1.10.1\">-31091</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.2.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.2.2.2.2.1\">\n<span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.2.2.2.2.1.1\">Robust, </span>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.2.2.2\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.2.2.2.2.2.1\">-24728</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.2.2.3\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.2.2.2.2.3.1\">4554</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.2.2.4\"><span class=\"ltx_text ltx_font_bold ltx_font_italic\" id=\"S4.T2.2.2.2.2.4.1\">16491</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.2.2.5\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.2.2.2.2.5.1\">13323</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.2.2.6\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.2.2.2.2.6.1\">9920</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.2.2.7\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.2.2.2.2.7.1\">13170</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.2.2.8\"><span class=\"ltx_text ltx_font_bold ltx_font_italic\" id=\"S4.T2.2.2.2.2.8.1\">16825</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.2.2.9\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.2.2.2.2.9.1\">4451</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.2.2.10\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.2.2.2.2.10.1\">-24427</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.3.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.3.3.3.3.1\">\n<span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.3.3.3.3.1.1\">Robust, </span>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.3.3.3.2\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.3.3.3.3.2.1\">-8174</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.3.3.3.3\"><span class=\"ltx_text ltx_font_bold ltx_font_italic\" id=\"S4.T2.3.3.3.3.3.1\">15201</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.3.3.3.4\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.3.3.3.3.4.1\">11091</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.3.3.3.5\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.3.3.3.3.5.1\">4387</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.3.3.3.6\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.3.3.3.3.6.1\">2050</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.3.3.3.7\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.3.3.3.3.7.1\">4373</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.3.3.3.8\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.3.3.3.3.8.1\">11139</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.3.3.3.9\"><span class=\"ltx_text ltx_font_bold ltx_font_italic\" id=\"S4.T2.3.3.3.3.9.1\">15276</span></td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S4.T2.3.3.3.3.10\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.3.3.3.3.10.1\">-7611</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.4.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_r\" id=\"S4.T2.4.4.4.4.1\">\n<span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.4.4.4.4.1.1\">Robust, </span>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.4.4.4.4.2\"><span class=\"ltx_text ltx_font_bold ltx_font_italic\" id=\"S4.T2.4.4.4.4.2.1\">0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.4.4.4.4.3\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.4.4.4.4.3.1\">0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.4.4.4.4.4\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.4.4.4.4.4.1\">0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.4.4.4.4.5\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.4.4.4.4.5.1\">0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.4.4.4.4.6\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.4.4.4.4.6.1\">0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.4.4.4.4.7\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.4.4.4.4.7.1\">0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.4.4.4.4.8\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.4.4.4.4.8.1\">0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.4.4.4.4.9\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.4.4.4.4.9.1\">0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.4.4.4.4.10\"><span class=\"ltx_text ltx_font_bold ltx_font_italic\" id=\"S4.T2.4.4.4.4.10.1\">0</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_font_italic\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_upright\" id=\"S4.T2.11.1.1\">Table 2</span>: </span>Overall Profit of the game described in Example\u00a0<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2210.00898v3#Thmthm12\" title=\"Example 12 (Coin Toss). \u2023 4.2 Examples \u2023 4 Numerical Examples \u2023 Robust \ud835\udc44-learning Algorithm for Markov Decision Processes under Wasserstein Uncertainty\"><span class=\"ltx_text ltx_ref_tag\">12</span></a> in dependence of different trained strategies and of the probability distribution of the underlying process. The best performing strategy in each case is indicated with bold characters.</figcaption>\n</figure>",
75
+ "capture": "Table 2: Overall Profit of the game described in Example\u00a012 in dependence of different trained strategies and of the probability distribution of the underlying process. The best performing strategy in each case is indicated with bold characters."
76
+ },
77
+ "3": {
78
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T3.9\" style=\"width:216.8pt;height:69.2pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-32.5pt,10.4pt) scale(0.769292407519681,0.769292407519681) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T3.9.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T3.9.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.9.1.1.1.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T3.9.1.1.1.1.1\">Type of Encoded Return\u00a0(Numerical Value)</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.9.1.1.1.2\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T3.9.1.1.1.2.1\">Total Amount</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.9.1.2.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.9.1.2.1.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T3.9.1.2.1.1.1\">Strongly Negative Returns\u00a0(-2)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.9.1.2.1.2\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T3.9.1.2.1.2.1\">404</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.9.1.3.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.9.1.3.2.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T3.9.1.3.2.1.1\">Slightly Negative Returns\u00a0(-1)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.9.1.3.2.2\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T3.9.1.3.2.2.1\">637</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.9.1.4.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.9.1.4.3.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T3.9.1.4.3.1.1\">Slightly Positive Returns\u00a0(1)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.9.1.4.3.2\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T3.9.1.4.3.2.1\">627</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.9.1.5.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T3.9.1.5.4.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T3.9.1.5.4.1.1\">Strongly Positive Returns\u00a0(2)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.9.1.5.4.2\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T3.9.1.5.4.2.1\">532</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_font_italic\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_upright\" id=\"S4.T3.18.1.1\">Table 3</span>: </span>The distribution of the numerically encoded daily returns of <em class=\"ltx_emph ltx_font_upright\" id=\"S4.T3.19.2\">Apple</em> between January and September . The threshold to distinguish slightly positive (negative) returns from strongly positive returns is ().</figcaption>\n</figure>",
79
+ "capture": "Table 3: The distribution of the numerically encoded daily returns of Apple between January and September . The threshold to distinguish slightly positive (negative) returns from strongly positive returns is ()."
80
+ },
81
+ "4": {
82
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T4\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T4.9\" style=\"width:216.8pt;height:69.2pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-32.5pt,10.4pt) scale(0.769292407519681,0.769292407519681) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T4.9.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T4.9.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T4.9.1.1.1.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T4.9.1.1.1.1.1\">Type of Encoded Return\u00a0(Numerical Value)</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T4.9.1.1.1.2\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T4.9.1.1.1.2.1\">Total Amount</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T4.9.1.2.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T4.9.1.2.1.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T4.9.1.2.1.1.1\">Strongly Negative Returns\u00a0(-2)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.9.1.2.1.2\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T4.9.1.2.1.2.1\">29</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.9.1.3.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.9.1.3.2.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T4.9.1.3.2.1.1\">Slightly Negative Returns\u00a0(-1)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.9.1.3.2.2\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T4.9.1.3.2.2.1\">21</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.9.1.4.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.9.1.4.3.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T4.9.1.4.3.1.1\">Slightly Positive Returns\u00a0(1)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.9.1.4.3.2\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T4.9.1.4.3.2.1\">22</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.9.1.5.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T4.9.1.5.4.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T4.9.1.5.4.1.1\">Strongly Positive Returns\u00a0(2)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.9.1.5.4.2\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T4.9.1.5.4.2.1\">28</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_font_italic\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_upright\" id=\"S4.T4.18.1.1\">Table 4</span>: </span>The distribution of the numerically encoded daily returns of <em class=\"ltx_emph ltx_font_upright\" id=\"S4.T4.19.2\">Apple</em> between October and February .</figcaption>\n</figure>",
83
+ "capture": "Table 4: The distribution of the numerically encoded daily returns of Apple between October and February ."
84
+ },
85
+ "5": {
86
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T5\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T5.6.6\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T5.6.6.7.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S4.T5.6.6.7.1.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T5.6.6.7.1.1.1\">Action</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T5.6.6.7.1.2\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T5.6.6.7.1.2.1\">Share of Correct Predictions</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T5.2.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T5.1.1.1.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.2.2.2.2\">\n<span class=\"ltx_text ltx_font_italic\" id=\"S4.T5.2.2.2.2.1\">23.40</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.4.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T5.3.3.3.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.4.4.2\">\n<span class=\"ltx_text ltx_font_italic\" id=\"S4.T5.4.4.4.2.1\">28.72</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.6.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S4.T5.5.5.5.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T5.6.6.6.2\">\n<span class=\"ltx_text ltx_font_italic\" id=\"S4.T5.6.6.6.2.1\">21.27</span>\n</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_font_italic\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_upright\" id=\"S4.T5.20.1.1\">Table 5</span>: </span>The proportion of correct stock movement predictions in the evaluation period between October and January </figcaption>\n</figure>",
87
+ "capture": "Table 5: The proportion of correct stock movement predictions in the evaluation period between October and January "
88
+ }
89
+ },
90
+ "image_paths": {
91
+ "1": {
92
+ "figure_path": "2210.00898v3_figure_1.png",
93
+ "caption": "Figure 1: Total Costs for b=1\ud835\udc4f1b=1italic_b = 1 after 100000100000100000100000 iterations in the setting of Example 13, compare also [31, Figure 1].",
94
+ "url": "http://arxiv.org/html/2210.00898v3/x1.png"
95
+ }
96
+ },
97
+ "validation": true,
98
+ "references": [
99
+ {
100
+ "1": {
101
+ "title": "Model-free Q-learning designs for linear discrete-time zero-sum\ngames with application to h-infinity control.",
102
+ "author": "Asma Al-Tamimi, Frank L Lewis, and Murad Abu-Khalaf.",
103
+ "venue": "Automatica, 43(3):473\u2013481, 2007.",
104
+ "url": null
105
+ }
106
+ },
107
+ {
108
+ "2": {
109
+ "title": "Reinforcement learning algorithm for mixed mean field control games.",
110
+ "author": "Andrea Angiuli, Nils Detering, Jean-Pierre Fouque, and Jimin Lin.",
111
+ "venue": "arXiv preprint arXiv:2205.02330, 2022.",
112
+ "url": null
113
+ }
114
+ },
115
+ {
116
+ "3": {
117
+ "title": "Reinforcement learning for mean field games, with applications to\neconomics.",
118
+ "author": "Andrea Angiuli, Jean-Pierre Fouque, and Mathieu Lauriere.",
119
+ "venue": "arXiv preprint arXiv:2106.13755, 2021.",
120
+ "url": null
121
+ }
122
+ },
123
+ {
124
+ "4": {
125
+ "title": "Computational aspects of robust optimized certainty equivalents and\noption pricing.",
126
+ "author": "Daniel Bartl, Samuel Drapeau, and Ludovic Tangpi.",
127
+ "venue": "Mathematical Finance, 30(1):287\u2013309, 2020.",
128
+ "url": null
129
+ }
130
+ },
131
+ {
132
+ "5": {
133
+ "title": "Distributionally robust Markov decision processes and their\nconnection to risk measures.",
134
+ "author": "Nicole B\u00e4uerle and Alexander Glauner.",
135
+ "venue": "Mathematics of Operations Research, 2021.",
136
+ "url": null
137
+ }
138
+ },
139
+ {
140
+ "6": {
141
+ "title": "Q-learning for distributionally robust Markov decision processes.",
142
+ "author": "Nicole B\u00e4uerle and Alexander Glauner.",
143
+ "venue": "In Modern Trends in Controlled Stochastic Processes:, pages\n108\u2013128. Springer, 2021.",
144
+ "url": null
145
+ }
146
+ },
147
+ {
148
+ "7": {
149
+ "title": "Markov decision processes with applications to finance.",
150
+ "author": "Nicole B\u00e4uerle and Ulrich Rieder.",
151
+ "venue": "Springer Science & Business Media, 2011.",
152
+ "url": null
153
+ }
154
+ },
155
+ {
156
+ "8": {
157
+ "title": "Fast algorithms for -constrained s-rectangular robust\nmdps.",
158
+ "author": "Bahram Behzadian, Marek Petrik, and Chin Pang Ho.",
159
+ "venue": "Advances in Neural Information Processing Systems,\n34:25982\u201325992, 2021.",
160
+ "url": null
161
+ }
162
+ },
163
+ {
164
+ "9": {
165
+ "title": "Quantifying distributional model risk via optimal transport.",
166
+ "author": "Jose Blanchet and Karthyek Murthy.",
167
+ "venue": "Mathematics of Operations Research, 44(2):565\u2013600, 2019.",
168
+ "url": null
169
+ }
170
+ },
171
+ {
172
+ "10": {
173
+ "title": "The convergence of a class of double-rank minimization algorithms 1.\ngeneral considerations.",
174
+ "author": "Charles George Broyden.",
175
+ "venue": "IMA Journal of Applied Mathematics, 6(1):76\u201390, 1970.",
176
+ "url": null
177
+ }
178
+ },
179
+ {
180
+ "11": {
181
+ "title": "Deep hedging of derivatives using reinforcement learning.",
182
+ "author": "Jay Cao, Jacky Chen, John Hull, and Zissis Poulos.",
183
+ "venue": "The Journal of Financial Data Science, 3(1):10\u201327, 2021.",
184
+ "url": null
185
+ }
186
+ },
187
+ {
188
+ "12": {
189
+ "title": "Reinforcement learning in economics and finance.",
190
+ "author": "Arthur Charpentier, Romuald Elie, and Carl Remlinger.",
191
+ "venue": "Computational Economics, pages 1\u201338, 2021.",
192
+ "url": null
193
+ }
194
+ },
195
+ {
196
+ "13": {
197
+ "title": "Distributionally robust optimization for sequential decision-making.",
198
+ "author": "Zhi Chen, Pengqian Yu, and William B Haskell.",
199
+ "venue": "Optimization, 68(12):2397\u20132426, 2019.",
200
+ "url": null
201
+ }
202
+ },
203
+ {
204
+ "14": {
205
+ "title": "Regularized and distributionally robust data-enabled predictive\ncontrol.",
206
+ "author": "Jeremy Coulson, John Lygeros, and Florian D\u00f6rfler.",
207
+ "venue": "In 2019 IEEE 58th Conference on Decision and Control (CDC),\npages 2696\u20132701. IEEE, 2019.",
208
+ "url": null
209
+ }
210
+ },
211
+ {
212
+ "15": {
213
+ "title": "Machine learning in Finance, volume 1170.",
214
+ "author": "Matthew F Dixon, Igor Halperin, and Paul Bilokon.",
215
+ "venue": "Springer, 2020.",
216
+ "url": null
217
+ }
218
+ },
219
+ {
220
+ "16": {
221
+ "title": "On stochastic approximation.",
222
+ "author": "Aryeh Dvoretzky.",
223
+ "venue": "University of California Press, 1956.",
224
+ "url": null
225
+ }
226
+ },
227
+ {
228
+ "17": {
229
+ "title": "Robust solutions to Markov decision problems with uncertain\ntransition matrices.",
230
+ "author": "Laurent El Ghaoui and Arnab Nilim.",
231
+ "venue": "Operations Research, 53(5):780\u2013798, 2005.",
232
+ "url": null
233
+ }
234
+ },
235
+ {
236
+ "18": {
237
+ "title": "A new approach to variable metric algorithms.",
238
+ "author": "Roger Fletcher.",
239
+ "venue": "The computer journal, 13(3):317\u2013322, 1970.",
240
+ "url": null
241
+ }
242
+ },
243
+ {
244
+ "19": {
245
+ "title": "Distributionally robust stochastic optimization with Wasserstein\ndistance.",
246
+ "author": "Rui Gao and Anton Kleywegt.",
247
+ "venue": "Mathematics of Operations Research, 2022.",
248
+ "url": null
249
+ }
250
+ },
251
+ {
252
+ "20": {
253
+ "title": "A family of variable metric updates derived by variational means.\nmathematics of computing.",
254
+ "author": "Donald Goldfarb.",
255
+ "venue": "1970.",
256
+ "url": null
257
+ }
258
+ },
259
+ {
260
+ "21": {
261
+ "title": "Robust markov decision processes: Beyond rectangularity.",
262
+ "author": "Vineet Goyal and Julien Grand-Clement.",
263
+ "venue": "Mathematics of Operations Research, 48(1):203\u2013226, 2023.",
264
+ "url": null
265
+ }
266
+ },
267
+ {
268
+ "22": {
269
+ "title": "Data-based distributionally robust stochastic optimal power\nflow\u2014part i: Methodologies.",
270
+ "author": "Yi Guo, Kyri Baker, Emiliano Dall\u2019Anese, Zechun Hu, and Tyler Holt Summers.",
271
+ "venue": "IEEE Transactions on Power Systems, 34(2):1483\u20131492, 2018.",
272
+ "url": null
273
+ }
274
+ },
275
+ {
276
+ "23": {
277
+ "title": "Distributionally robust differential dynamic programming with\nwasserstein distance.",
278
+ "author": "Astghik Hakobyan and Insoon Yang.",
279
+ "venue": "IEEE Control Systems Letters, 2023.",
280
+ "url": null
281
+ }
282
+ },
283
+ {
284
+ "24": {
285
+ "title": "Recent advances in reinforcement learning in finance.",
286
+ "author": "Ben Hambly, Renyuan Xu, and Huining Yang.",
287
+ "venue": "arXiv preprint arXiv:2112.04553, 2021.",
288
+ "url": null
289
+ }
290
+ },
291
+ {
292
+ "25": {
293
+ "title": "Deep learning in finance and banking: A literature review and\nclassification.",
294
+ "author": "Jian Huang, Junyi Chai, and Stella Cho.",
295
+ "venue": "Frontiers of Business Research in China, 14(1):1\u201324, 2020.",
296
+ "url": null
297
+ }
298
+ },
299
+ {
300
+ "26": {
301
+ "title": "Robust dynamic programming.",
302
+ "author": "Garud N Iyengar.",
303
+ "venue": "Mathematics of Operations Research, 30(2):257\u2013280, 2005.",
304
+ "url": null
305
+ }
306
+ },
307
+ {
308
+ "27": {
309
+ "title": "On the convergence of stochastic iterative dynamic programming\nalgorithms.",
310
+ "author": "Tommi Jaakkola, Michael I Jordan, and Satinder P Singh.",
311
+ "venue": "Neural computation, 6(6):1185\u20131201, 1994.",
312
+ "url": null
313
+ }
314
+ },
315
+ {
316
+ "28": {
317
+ "title": "Improving financial trading decisions using deep Q-learning:\nPredicting the number of shares, action strategies, and transfer learning.",
318
+ "author": "Gyeeun Jeong and Ha Young Kim.",
319
+ "venue": "Expert Systems with Applications, 117:125\u2013138, 2019.",
320
+ "url": null
321
+ }
322
+ },
323
+ {
324
+ "29": {
325
+ "title": "Modern perspectives on reinforcement learning in finance.",
326
+ "author": "Petter N Kolm and Gordon Ritter.",
327
+ "venue": "Modern Perspectives on Reinforcement Learning in Finance\n(September 6, 2019). The Journal of Machine Learning in Finance, 1(1), 2020.",
328
+ "url": null
329
+ }
330
+ },
331
+ {
332
+ "30": {
333
+ "title": "Policy gradient algorithms for robust mdps with non-rectangular\nuncertainty sets.",
334
+ "author": "Mengmeng Li, Tobias Sutter, and Daniel Kuhn.",
335
+ "venue": "arXiv preprint arXiv:2305.19004, 2023.",
336
+ "url": null
337
+ }
338
+ },
339
+ {
340
+ "31": {
341
+ "title": "Distributionally robust Q-learning.",
342
+ "author": "Zijian Liu, Qinxun Bai, Jose Blanchet, Perry Dong, Wei Xu, Zhengqing Zhou, and\nZhengyuan Zhou.",
343
+ "venue": "In International Conference on Machine Learning, pages\n13623\u201313643. PMLR, 2022.",
344
+ "url": null
345
+ }
346
+ },
347
+ {
348
+ "32": {
349
+ "title": "Robust MDPs with k-rectangular uncertainty.",
350
+ "author": "Shie Mannor, Ofir Mebel, and Huan Xu.",
351
+ "venue": "Mathematics of Operations Research, 41(4):1484\u20131509, 2016.",
352
+ "url": null
353
+ }
354
+ },
355
+ {
356
+ "33": {
357
+ "title": "Human-level control through deep reinforcement learning.",
358
+ "author": "Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness,\nMarc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg\nOstrovski, et al.",
359
+ "venue": "nature, 518(7540):529\u2013533, 2015.",
360
+ "url": null
361
+ }
362
+ },
363
+ {
364
+ "34": {
365
+ "title": "Data-driven distributionally robust optimization using the\nWasserstein metric: Performance guarantees and tractable reformulations.",
366
+ "author": "Peyman Mohajerin Esfahani and Daniel Kuhn.",
367
+ "venue": "Mathematical Programming, 171(1):115\u2013166, 2018.",
368
+ "url": null
369
+ }
370
+ },
371
+ {
372
+ "35": {
373
+ "title": "Application of Q-learning with temperature variation for bidding\nstrategies in market based power systems.",
374
+ "author": "Mohammad Bagher Naghibi-Sistani, MR Akbarzadeh-Tootoonchi, MH Javidi-Dashte\nBayaz, and Habib Rajabi-Mashhadi.",
375
+ "venue": "Energy Conversion and Management, 47(11-12):1529\u20131538, 2006.",
376
+ "url": null
377
+ }
378
+ },
379
+ {
380
+ "36": {
381
+ "title": "Bounding the difference between the values of robust and non-robust\nmarkov decision problems.",
382
+ "author": "Ariel Neufeld and Julian Sester.",
383
+ "venue": "arXiv preprint arXiv:2308.05520, 2023.",
384
+ "url": null
385
+ }
386
+ },
387
+ {
388
+ "37": {
389
+ "title": "Markov decision processes under model uncertainty.",
390
+ "author": "Ariel Neufeld, Julian Sester, and Mario \u0160iki\u0107.",
391
+ "venue": "Mathematical Finance, 33(3):618\u2013665, 2023.",
392
+ "url": null
393
+ }
394
+ },
395
+ {
396
+ "38": {
397
+ "title": "Double deep Q-learning for optimal execution.",
398
+ "author": "Brian Ning, Franco Ho Ting Lin, and Sebastian Jaimungal.",
399
+ "venue": "Applied Mathematical Finance, 28(4):361\u2013380, 2021.",
400
+ "url": null
401
+ }
402
+ },
403
+ {
404
+ "39": {
405
+ "title": "Sample complexity of robust reinforcement learning with a generative\nmodel.",
406
+ "author": "Kishan Panaganti and Dileep Kalathil.",
407
+ "venue": "In International Conference on Artificial Intelligence and\nStatistics, pages 9582\u20139602. PMLR, 2022.",
408
+ "url": null
409
+ }
410
+ },
411
+ {
412
+ "40": {
413
+ "title": "Sur les \u00e9quations alg\u00e9briques ayant toutes leurs racines\nr\u00e9elles.",
414
+ "author": "Tiberiu Popoviciu.",
415
+ "venue": "Mathematica, 9(129-145):20, 1935.",
416
+ "url": null
417
+ }
418
+ },
419
+ {
420
+ "41": {
421
+ "title": "A stochastic approximation method.",
422
+ "author": "Herbert Robbins and Sutton Monro.",
423
+ "venue": "The annals of mathematical statistics, pages 400\u2013407, 1951.",
424
+ "url": null
425
+ }
426
+ },
427
+ {
428
+ "42": {
429
+ "title": "Monge-Kantorovich transportation problem and optimal couplings.",
430
+ "author": "Ludger R\u00fcschendorf.",
431
+ "venue": "Jahresbericht der DMV, 3:113\u2013137, 2007.",
432
+ "url": null
433
+ }
434
+ },
435
+ {
436
+ "43": {
437
+ "title": "Conditional risk mappings.",
438
+ "author": "Andrzej Ruszczy\u0144ski and Alexander Shapiro.",
439
+ "venue": "Mathematics of operations research, 31(3):544\u2013561, 2006.",
440
+ "url": null
441
+ }
442
+ },
443
+ {
444
+ "44": {
445
+ "title": "Conditioning of quasi-Newton methods for function minimization.",
446
+ "author": "David F Shanno.",
447
+ "venue": "Mathematics of computation, 24(111):647\u2013656, 1970.",
448
+ "url": null
449
+ }
450
+ },
451
+ {
452
+ "45": {
453
+ "title": "Rectangular sets of probability measures.",
454
+ "author": "Alexander Shapiro.",
455
+ "venue": "Operations Research, 64(2):528\u2013541, 2016.",
456
+ "url": null
457
+ }
458
+ },
459
+ {
460
+ "46": {
461
+ "title": "Some better bounds on the variance with applications.",
462
+ "author": "Rajesh Sharma, Madhu Gupta, and Girish Kapoor.",
463
+ "venue": "Journal of Mathematical Inequalities, 4(3):355\u2013363, 2010.",
464
+ "url": null
465
+ }
466
+ },
467
+ {
468
+ "47": {
469
+ "title": "Distributional robust batch contextual bandits.",
470
+ "author": "Nian Si, Fan Zhang, Zhengyuan Zhou, and Jose Blanchet.",
471
+ "venue": "arXiv preprint arXiv:2006.05630, 2020.",
472
+ "url": null
473
+ }
474
+ },
475
+ {
476
+ "48": {
477
+ "title": "Distributionally robust policy evaluation and learning in offline\ncontextual bandits.",
478
+ "author": "Nian Si, Fan Zhang, Zhengyuan Zhou, and Jose Blanchet.",
479
+ "venue": "In International Conference on Machine Learning, pages\n8884\u20138894. PMLR, 2020.",
480
+ "url": null
481
+ }
482
+ },
483
+ {
484
+ "49": {
485
+ "title": "Reinforcement learning: An introduction.",
486
+ "author": "Richard S Sutton and Andrew G Barto.",
487
+ "venue": "MIT press, 2018.",
488
+ "url": null
489
+ }
490
+ },
491
+ {
492
+ "50": {
493
+ "title": "Policy-conditioned uncertainty sets for robust markov decision\nprocesses.",
494
+ "author": "Andrea Tirinzoni, Marek Petrik, Xiangli Chen, and Brian Ziebart.",
495
+ "venue": "Advances in neural information processing systems, 31, 2018.",
496
+ "url": null
497
+ }
498
+ },
499
+ {
500
+ "51": {
501
+ "title": "Value-difference based exploration: adaptive control between\nepsilon-greedy and softmax.",
502
+ "author": "Michel Tokic and G\u00fcnther Palm.",
503
+ "venue": "In Annual conference on artificial intelligence, pages\n335\u2013346. Springer, 2011.",
504
+ "url": null
505
+ }
506
+ },
507
+ {
508
+ "52": {
509
+ "title": "Robust optimal control using conditional risk mappings in infinite\nhorizon.",
510
+ "author": "Kerem U\u011furlu.",
511
+ "venue": "Journal of Computational and Applied Mathematics, 344:275\u2013287,\n2018.",
512
+ "url": null
513
+ }
514
+ },
515
+ {
516
+ "53": {
517
+ "title": "Distributionally robust control of constrained stochastic systems.",
518
+ "author": "Bart PG Van Parys, Daniel Kuhn, Paul J Goulart, and Manfred Morari.",
519
+ "venue": "IEEE Transactions on Automatic Control, 61(2):430\u2013442, 2015.",
520
+ "url": null
521
+ }
522
+ },
523
+ {
524
+ "54": {
525
+ "title": "Optimal transport: old and new, volume 338.",
526
+ "author": "C\u00e9dric Villani.",
527
+ "venue": "Springer, 2008.",
528
+ "url": null
529
+ }
530
+ },
531
+ {
532
+ "55": {
533
+ "title": "Online robust reinforcement learning with model uncertainty.",
534
+ "author": "Yue Wang and Shaofeng Zou.",
535
+ "venue": "Advances in Neural Information Processing Systems,\n34:7193\u20137206, 2021.",
536
+ "url": null
537
+ }
538
+ },
539
+ {
540
+ "56": {
541
+ "title": "Policy gradient method for robust reinforcement learning.",
542
+ "author": "Yue Wang and Shaofeng Zou.",
543
+ "venue": "arXiv preprint arXiv:2205.07344, 2022.",
544
+ "url": null
545
+ }
546
+ },
547
+ {
548
+ "57": {
549
+ "title": "Learning form delayed rewards.",
550
+ "author": "Christopher JCH Watkins.",
551
+ "venue": "Ph. D. thesis, King\u2019s College, University of Cambridge, 1989.",
552
+ "url": null
553
+ }
554
+ },
555
+ {
556
+ "58": {
557
+ "title": "Q-learning.",
558
+ "author": "Christopher JCH Watkins and Peter Dayan.",
559
+ "venue": "Machine learning, 8(3-4):279\u2013292, 1992.",
560
+ "url": null
561
+ }
562
+ },
563
+ {
564
+ "59": {
565
+ "title": "Robust Markov decision processes.",
566
+ "author": "Wolfram Wiesemann, Daniel Kuhn, and Ber\u00e7 Rustem.",
567
+ "venue": "Mathematics of Operations Research, 38(1):153\u2013183, 2013.",
568
+ "url": null
569
+ }
570
+ },
571
+ {
572
+ "60": {
573
+ "title": "Distributionally robust convex optimization.",
574
+ "author": "Wolfram Wiesemann, Daniel Kuhn, and Melvyn Sim.",
575
+ "venue": "Operations research, 62(6):1358\u20131376, 2014.",
576
+ "url": null
577
+ }
578
+ },
579
+ {
580
+ "61": {
581
+ "title": "Distributionally robust Markov decision processes.",
582
+ "author": "Huan Xu and Shie Mannor.",
583
+ "venue": "Mathematics of Operations Research, 37(2):288\u2013300, 2012.",
584
+ "url": null
585
+ }
586
+ },
587
+ {
588
+ "62": {
589
+ "title": "A convex optimization approach to distributionally robust markov\ndecision processes with wasserstein distance.",
590
+ "author": "Insoon Yang.",
591
+ "venue": "IEEE control systems letters, 1(1):164\u2013169, 2017.",
592
+ "url": null
593
+ }
594
+ },
595
+ {
596
+ "63": {
597
+ "title": "Wasserstein distributionally robust stochastic control: A data-driven\napproach.",
598
+ "author": "Insoon Yang.",
599
+ "venue": "IEEE Transactions on Automatic Control, 66(8):3863\u20133870, 2020.",
600
+ "url": null
601
+ }
602
+ },
603
+ {
604
+ "64": {
605
+ "title": "Towards theoretical understandings of robust markov decision\nprocesses: Sample complexity and asymptotics.",
606
+ "author": "Wenhao Yang, Liangyu Zhang, and Zhihua Zhang.",
607
+ "venue": "arXiv preprint arXiv:2105.03863, 2021.",
608
+ "url": null
609
+ }
610
+ },
611
+ {
612
+ "65": {
613
+ "title": "Data-driven risk-averse stochastic optimization with Wasserstein\nmetric.",
614
+ "author": "Chaoyue Zhao and Yongpei Guan.",
615
+ "venue": "Operations Research Letters, 46(2):262\u2013267, 2018.",
616
+ "url": null
617
+ }
618
+ },
619
+ {
620
+ "66": {
621
+ "title": "Finite-sample regret bound for distributionally robust offline\ntabular reinforcement learning.",
622
+ "author": "Zhengqing Zhou, Zhengyuan Zhou, Qinxun Bai, Linhai Qiu, Jose Blanchet, and\nPeter Glynn.",
623
+ "venue": "In International Conference on Artificial Intelligence and\nStatistics, pages 3331\u20133339. PMLR, 2021.",
624
+ "url": null
625
+ }
626
+ }
627
+ ],
628
+ "url": "http://arxiv.org/html/2210.00898v3"
629
+ }
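Reading aid (not part of the uploaded JSON): the extracted proofs above rest on the robust Bellman operator being a gamma-contraction in the supremum norm, so robust Q-value iteration converges to a unique fixed point. The sketch below illustrates only that mechanism and is not the paper's implementation; the finite family of transition kernels standing in for the Wasserstein ball, and all names and sizes, are illustrative assumptions.

```python
import numpy as np

# Tabular robust Q-value iteration on a finite MDP. Here the worst case over
# the ambiguity set is taken over an explicit finite family of kernels; per
# the extracted text, the paper instead evaluates it via the Wasserstein dual.
n_states, n_actions, gamma = 4, 2, 0.9
rng = np.random.default_rng(0)

def random_kernel():
    p = rng.random((n_states, n_actions, n_states))
    return p / p.sum(axis=2, keepdims=True)  # each row is a distribution

kernels = [random_kernel() for _ in range(5)]  # stand-in ambiguity set
reward = rng.random((n_states, n_actions))

q = np.zeros((n_states, n_actions))
for _ in range(1000):
    v = q.max(axis=1)                                 # max_a Q(x', a)
    worst = np.min([p @ v for p in kernels], axis=0)  # adversarial expectation
    q_new = reward + gamma * worst                    # robust Bellman update
    gap = np.abs(q_new - q).max()                     # sup-norm gap
    q = q_new
    if gap < 1e-12:  # the gap shrinks by a factor gamma per step
        break
print("sup-norm gap at termination:", gap)
```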
20240620/2210.14484v4.json ADDED
The diff for this file is too large to render. See raw diff
 
20240620/2211.10636v6.json ADDED
The diff for this file is too large to render. See raw diff
 
20240620/2211.14873v4.json ADDED
The diff for this file is too large to render. See raw diff
 
20240620/2212.01211v3.json ADDED
@@ -0,0 +1,639 @@
1
+ {
2
+ "title": "Sometimes Two Irrational Guards are Needed",
3
+ "abstract": "In the art gallery problem, we are given a closed polygon , with rational coordinates and\nan integer .\nWe are asked whether it is possible to find a set (of guards) of size \nsuch that any point is seen by a point in .\nWe say two points , see each other if the line segment \nis contained inside .\nIt was shown by Abrahamsen, Adamaszek, and Miltzow that there is a polygon\nthat can be guarded with three guards, but requires four guards if the\nguards are required to have rational coordinates.\nIn other words, an optimal solution of size three might need to be irrational.\nWe show that an optimal solution of size two might need to be irrational.\nNote that it is well-known that any polygon that can be guarded with one guard\nhas an optimal guard placement with rational coordinates.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "In the art gallery problem, we are given a closed polygon , on\n vertices, with rational coordinates and\nan integer .\nWe are asked whether it is possible to find a set (of guards) of size \nsuch that any point is seen by a point in .\nWe say two points , see each other if the line segment \nis contained inside .\nWe show that an optimal solution of two guards might need to have irrational coordinates.\nIn such a case, we say a polygon has irrational guards.\nThe art gallery problem was formulated in 1973 by Victor Klee. See, for example, the book by O\u2019Rourke [46 ###reference_b46###, page 2].\nOne of the earliest results states that every simple polygon on vertices can always be guarded with guards [22 ###reference_b22###, 31 ###reference_b31###].\n###figure_1### Interestingly, it is actually very tough to find any positive algorithmic results on the art gallery problem. It seems like the art gallery problem is almost impenetrable.\nFor instance, only in 2002, Micha Sharir pointed out that the problem was even decidable [26 ###reference_b26###, 27 ###reference_b27###, see acknowledgments].\nThe decidability of the art gallery problem is actually easy once you know methods from real algebraic geometry [5 ###reference_b5###].\nThe idea is to reduce the problem to the first-order theory of the reals.\nWe encode guard positions by variables, and then we check if every point in the polygon is seen by at least one guard.\nNote that this is easy to encode in the first-order theory of the reals, as we are allowed to use existential () and universal quantifiers ().\nSince then, despite much research on the art gallery problem, no better algorithm appeared, as far as worst-case complexity is concerned.\nThe underlying reason for the difficulty to find better algorithms\ncan be explained by the fact that the art gallery problem is -complete [57 ###reference_b57###, 3 ###reference_b3###].\nIn a nutshell, -completeness precisely entails that there is no better method for the worst-case complexity of the problem.\n( can be defined as the class of problems that are equivalent to finding a real root to a multivariate polynomial with integer coordinates. 
See Section 1.3 ###reference_### for an introduction.)\nMore specifically, it was shown that arbitrary algebraic numbers may be needed to describe an optimal solution to the art gallery problem.\nThis may come as a surprise to some readers, and was clearly a surprise back then.\nSpecifically, \u201cin practice\u201d, it seems very rare that irrational guards are ever needed.\nThe reason is that a typical situation is one of the following two.\nEither the guards have some freedom to move around and still see the entire polygon.\nOr if a guard has no freedom, it is forced to be on a line defined by vertices of the polygon.\nAs the vertices of the polygon are at rational coordinates, the guards will be at rational coordinates in that case as well.\nIndeed, only in 2017, the first polygon requiring irrational guards was found [2 ###reference_b2###].\nEven though -reductions exhibit an infinite number of polygons that require irrational guards, those polygons are not \u201cconcrete\u201d in the naive sense of the word.\nAnd up to this day, this is the only \u201cconcrete\u201d polygon [2 ###reference_b2###] that we know does require irrational guards.\nIn this work, we find a second polygon.\nIt is superior to the first one in the sense that it shows that\ntwo guards are already enough to enforce irrational guards.\nAs a single guard can always be chosen to have rational coordinates,\nwe settle the question of the minimum number of guards required to\nhave irrational guards.\nThe polygon we find is again monotone.\nA polygon is called monotone if there exists a line such that every line orthogonal to intersects at most twice.\nWe summarize our results in the following theorem.\nThere exists a simple monotone polygon with rational coordinates, such that there is only one way of guarding this polygon optimally with two guards.\nThose two guards have irrational coordinates."
10
+ },
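Reading aid (not part of the uploaded JSON): the visibility predicate defined in the introduction above, namely that two points see each other when the segment between them is contained in the polygon, is straightforward to test computationally. A minimal sketch, assuming the shapely library; the polygon and the query points are hypothetical.

```python
from shapely.geometry import LineString, Polygon

# Hypothetical simple polygon with rational vertices and a reflex notch at (2, 1).
poly = Polygon([(0, 0), (4, 0), (4, 3), (2, 1), (0, 3)])

def sees(p, q, polygon):
    """p sees q iff the closed segment pq lies inside the closed polygon."""
    return polygon.covers(LineString([p, q]))

print(sees((1, 0.5), (3, 0.5), poly))      # True: passes below the notch
print(sees((0.5, 2.0), (3.5, 2.0), poly))  # False: the notch blocks the view
```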
11
+ {
12
+ "section_id": "1.1",
13
+ "parent_section_id": "1",
14
+ "section_name": "Discussion",
15
+ "text": "In this section, we discuss different aspects of our findings.\nIt is known that one guard can always be chosen to be rational [40 ###reference_b40###].\nThe polygon by Abrahamsen, Adamaszek, and Miltzow [2 ###reference_b2###] requires three irrational guards.\nThe main strength of our finding is to determine the minimum number of guards required to have irrational guards.\nOne way to circumvent worst-case complexity is to discretize the polygon and restrict oneself to a dense grid [15 ###reference_b15###, 26 ###reference_b26###, 27 ###reference_b27###].\nThe polygon by Abrahamsen, Adamaszek, and Miltzow showed that a grid cannot have a better approximation factor than .\nWe improve this lower bound to .\n###figure_2### Note that Bonnet and Miltzow showed that under some mild assumptions, the grid contains a constant factor approximation [15 ###reference_b15###].\nIt is good to have multiple different concrete polygons that require irrational guards.\nThis result complements the -completeness of the art gallery problem nicely.\nWhile -completeness is clearly stronger from a theoretical perspective, concrete polygons may be useful to get a better intuitive understanding of the difficulty.\nFrom a practical perspective, our polygon can serve as a test case on which we can compare the performance of different algorithms.\nUsually, we would like to have a host of difficult and diverse instances\nthat can be automatically generated.\nWith difficult instances, we mean polygons that require irrational guards.\nWe leave this as a future research question.\nThe principal methods that we used for the construction of our polygon are in spirit similar to the methods used by Abrahamsen, Adamaszek, and Miltzow [2 ###reference_b2###].\nHowever, it turned out that it was considerably more difficult to find the polygon.\nOn the one hand, our construction is smaller and thus there were fewer parameters that we had to manipulate to find a solution.\nOn the other hand, the two guards interact in more intertwined ways.\nThus making it much harder to find a correct placement of all the polygon vertices.\nTo be concrete, the construction by [2 ###reference_b2###] has the guards , , and .\nGuards and cover together two pockets.\nSimilarly, guards and cover together two separate pockets.\nThus there is no direct interaction between guards and .\nThis makes it easier to construct the different parts independently.\nIn our case the two guards and together guard three pockets.\nAnd this implies that the interaction between and is much more integrated.\nThis leads to a construction with some vertices being extremely close.\nFurthermore, our construction has pockets that were inside other pockets.\nBoth our polygon and the polygon by [2 ###reference_b2###] have their\nguards in the interior.\nIt is an interesting open problem if one can enforce irrational guards, in case all guards are restricted to lying on the boundary and are only required to guard the boundary.\nOne may wonder whether there is also a polygon with integer coordinates that exposes irrational guards.\nThe answer is yes, and this can be achieved by scaling all coordinates by all the appearing denominators.\nOne may wonder whether it is possible to enforce irrational guards on\npolygons with some extra properties like being rectilinear or monotone.\nBoth of those questions have been positively resolved by [2 ###reference_b2###]."
16
+ },
17
+ {
18
+ "section_id": "1.2",
19
+ "parent_section_id": "1",
20
+ "section_name": "Art Gallery Problem",
21
+ "text": "The literature on the art gallery is vast.\nTherefore, we decide to focus here on algorithmic results.\nIn 1979, the first algorithm for guarding a polygon in linear time with one guard appeared [40 ###reference_b40###].\nIt took until 1992 until there was an algorithm that could determine if a polygon could be guarded by two guards in time [7 ###reference_b7###, 6 ###reference_b6###].\nAs mentioned already above it took until 2002 to find the first correct algorithm that solves the art gallery problem [26 ###reference_b26###, 27 ###reference_b27###].\nThere is still no other algorithm known.\nOn the lower bound side, we know NP-hardness [39 ###reference_b39###], APX-hardness [28 ###reference_b28###] and W[1]-hardness [16 ###reference_b16###].\nOne may argue that NP-hardness is enough evidence that there are\nno efficient algorithms for the art gallery problem and that this\nmay fully explain the lack of algorithmic results.\nHowever, for other NP-complete problems like Clique, Subset-sum, Dominating Set, and TSP, we do know a myriad of algorithms.\nAlthough many of them run in exponential time in the worst case they give huge improvements in many different situations.\nWe believe that the lack of algorithmic results may stem from the fact that\nwe do not know how to discretize the art gallery problem efficiently.\nNote that all the mentioned problems are already discrete.\nWe believe that the -completeness of the art gallery problem may give the most compelling explanation of why a concise discretization of the art gallery problem is unlikely [3 ###reference_b3###, 57 ###reference_b57###].\nSpecifically, many discretization schemes would imply that the art gallery problem lies in NP and thus imply .\nThe first proof that the art gallery problem is -comlete was given by Abrahamsen, Adamaszek, and Miltzow in 2017 [3 ###reference_b3###].\nIt was recently improved by Stade, who showed that it is even -complete if we only require the boundary to be guarded [57 ###reference_b57###].\nIt remains open whether guarding the boundary from the boundary is -complete.\nThere is a series of papers that studied the art gallery problem\nfrom a practical perspective.\nIn other words, they implemented algorithms and tested them on benchmark instances [17 ###reference_b17###, 32 ###reference_b32###, 35 ###reference_b35###, 59 ###reference_b59###, 34 ###reference_b34###].\nThe practical experiences from those papers suggest that irrational coordinates do not play any role in the pursuit to find an optimal solution.\n###figure_3### We are aware of two theoretical explanations for the discrepancy between the theoretical and the practical results.\nOne such finding uses smoothed analysis and argues that there is with high probability an optimal solution after a small random perturbation of the polygon [34 ###reference_b34###, 30 ###reference_b30###].\nA second explanation comes from Hengeveld and Miltzow.\nThey introduced the notion of vision-stability.\nTo explain this concept, we consider guards that can either see by some small angle around reflex vertices or are blocked by an angle by reflex vertices.\nSee the green visibility regions in Figure 4 ###reference_###.\nIntuitively, if is small enough then the optimal number of guards will not change.\nVision-stability states that there indeed exists such a .\nUsing this assumption Hengeveld and Miltzow could find a polynomially-sized discretization scheme for the art gallery problem.\nIt remains an open question to improve their discretization 
scheme.\nGiven a polygon, we can consider the set of all possible guard sets of minimum size.\nTogether with the Hausdorff distance, forms a topological space.\nFrom the algebraic encoding by Sharir, we know that the topological space must be compact and semi-algebraic.\nThe question arises: given a compact semi-algebraic , is there a polygon such that is topologically equivalent to ?\nThis question was positively answered by Bertschinger, El Maalouly, Miltzow, Schnider, and Weber for homotopy-equivalence [10 ###reference_b10###] and shortly improved by Stade and Tucker-Foltz to homeomorphic-equivalence [58 ###reference_b58###]."
22
+ },
23
+ {
24
+ "section_id": "1.3",
25
+ "parent_section_id": "1",
26
+ "section_name": "Existential theory of the Reals",
27
+ "text": "The complexity class (often pronounced as \u201cER\u201d) has gained a lot of interest in recent years.\nIt is defined via its canonical complete problem ETR (short for Existential Theory of the Reals) and contains all problems that polynomial-time many-one reduce to it.\nIn an ETR instance, we are given an integer and a sentence of the form\nwhere is a well-formed and quantifier-free formula consisting of polynomial equations and inequalities in the variables and the logical connectives .\nThe goal is to decide whether this sentence is true.\nAs an example, consider the formula ;\namong (infinitely many) other solutions, evaluates to true, witnessing that this is a yes-instance of ETR.\nIt is known that\nHere the first inclusion follows because a SAT instance can trivially be written as an equivalent ETR instance.\nThe second inclusion is highly non-trivial and was first proven by Canny in his seminal paper [19 ###reference_b19###].\nNote that the complexity of working with continuous numbers was studied in various contexts.\nTo avoid confusion, let us make some remarks on the underlying machine model.\nThe underlying machine model for (over which sentences need to be decided and where reductions are performed) is the word RAM (or equivalently, a Turing machine) and not the real RAM [30 ###reference_b30###] or the Blum-Shub-Smale model [14 ###reference_b14###].\nThe complexity class gains its importance by numerous important algorithmic problems that have been shown to be complete for this class in recent years.\nThe name was introduced by Schaefer in [48 ###reference_b48###] who also pointed out that several NP-hardness reductions from the literature actually implied -hardness.\nFor this reason, several important -completeness results were obtained before the need for a dedicated complexity class became apparent.\nCommon features of -complete problems are their continuous solution space and the nonlinear relations between their variables.\nImportant -completeness results include the realizability of abstract order types [45 ###reference_b45###, 56 ###reference_b56###] and geometric linkages [49 ###reference_b49###], as well as the recognition of geometric segment graphs [38 ###reference_b38###, 42 ###reference_b42###], unit-disk graphs [37 ###reference_b37###, 43 ###reference_b43###], and ray intersection graphs [20 ###reference_b20###].\nMore results appeared in the graph drawing community [25 ###reference_b25###, 29 ###reference_b29###, 41 ###reference_b41###, 50 ###reference_b50###], regarding polytopes [24 ###reference_b24###, 47 ###reference_b47###], the study of Nash-equilibria [8 ###reference_b8###, 11 ###reference_b11###, 12 ###reference_b12###, 33 ###reference_b33###, 51 ###reference_b51###], training neural networks [4 ###reference_b4###, 9 ###reference_b9###], matrix factorization [21 ###reference_b21###, 53 ###reference_b53###, 54 ###reference_b54###, 55 ###reference_b55###], or continuous constraint satisfaction problems [44 ###reference_b44###].\nIn computational geometry, we would like to mention the art gallery problem [3 ###reference_b3###, 57 ###reference_b57###] and covering polygons with convex polygons [1 ###reference_b1###].\nRecently, the community started to pay more attention to higher levels of the \u201creal polynomial hierarchy\u201d, which surprisingly captures some interesting algorithmic problems [13 ###reference_b13###, 23 ###reference_b23###, 25 ###reference_b25###, 36 ###reference_b36###, 52 ###reference_b52###, 18 ###reference_b18###]."
28
+ },
29
+ {
30
+ "section_id": "2",
31
+ "parent_section_id": null,
32
+ "section_name": "Preparation",
33
+ "text": "We aim to construct a polygon.\nThis polygon should be guarded by two guards at irrational coordinates but requires three guards at rational coordinates.\nWe must restrict the possible coordinates the guards can be positioned.\nIn this section, we will explore the tools to restrict the possible positions of the two guards within the polygon.\n###figure_4###"
34
+ },
35
+ {
36
+ "section_id": "2.1",
37
+ "parent_section_id": "2",
38
+ "section_name": "Basic Definitions",
39
+ "text": "Each guard will be able to guard some region of the polygon:\nwe call this region its visibility polygon .\nThe visibility polygon includes all points for which the line segment between the guard and the point is included in the polygon .\nNotably, the union of the visibility polygons of the two guards must be the art gallery. Otherwise, the art gallery is not completely guarded.\nA window is an edge of the visibility polygon that is not part of the boundary of .\nWe can find windows in a guard \u2019s visibility polygon, by shooting rays from to reflex vertices (the vertices of the polygon, with an interior angle larger than ).\nIf these rays do not leave the polygon at the reflex vertex,\na window will exist between the reflex vertex and the position where the ray does intersect the boundary of the polygon.\nLet the window\u2019s end be the intersection of the ray with an edge of the polygon.\nOur final polygon consists of the core and a number of pockets, as shown in Figure 5 ###reference_###.\nThe core of the polygon is the square in the center.\nWe will enforce that both guards are located in the core.\nAs a square is a convex shape, this implies that both guards will guard the core.\nThe pockets are all regions outside the core.\nWe will use pockets that are either quadrilateral or triangular.\nPockets are attached to either the core or another pocket: they have one edge that lies on the boundary of the core or on the boundary of another pocket.\nQuadrilateral pockets will always be attached to the core.\nEach quadrilateral pocket has one edge that is not on the boundary of the core, nor adjacent to it.\nWe will call this edge the wall of a\nquadrilateral pocket.\nSimilarly, triangular pockets will be attached to either the core or a quadrilateral pocket.\nWe will use pockets as a tool to limit the locations of the two guards."
40
+ },
41
+ {
42
+ "section_id": "2.2",
43
+ "parent_section_id": "2",
44
+ "section_name": "Guard Segments",
45
+ "text": "###figure_5### We can force a guard to be positioned on a line segment within the polygon.\nSuch a line segment is called a guard segment.\nGuard segments are commonly used in the context of the art gallery problem [2 ###reference_b2###, 57 ###reference_b57###].\nIn this section, we will describe how we construct a guard segment.\nLet us denote by the segment and by its supporting line.\nTo make a guard segment, we add two triangular pockets\nwhere intersects .\nEach of the triangular pockets has an edge on .\nBesides this one edge, the pockets lay on different sides of .\nOnly a guard on the line segment between the two pockets can guard both triangular pockets at the same time.\nWe have two guards in our polygon and both will be on a guard segment.\nIf the two guard segments are not intersecting, we can enforce that there must be one guard on each of them as follows.\nFirst, note that there are in total four triangular pockets.\nSecond, we make the triangular pockets sufficiently narrow.\nIn this way, it is impossible to guard two of the triangular pockets outside of a guard segment.\nThus at least one guard must be on each guard segment.\nA simple construction with two non-intersecting guard segments is shown in Figure 6 ###reference_###."
46
+ },
47
+ {
48
+ "section_id": "2.3",
49
+ "parent_section_id": "2",
50
+ "section_name": "Guarding Quadrilateral Pockets",
51
+ "text": "We will now describe how given the position of guard and a quadrilateral pocket will limit the position of guard .\nSee Figure 7 ###reference_### for an illustration of the following description.\nFirst, note that if will not guard completely then there will remain some unguarded region (orange) in .\nThe part of the guard segment of where the unguarded region is visible is denoted the feasible segment.\nIt is bounded from the back ray and the front ray.\nIt is clear that must be on the feasible segment.\nWe can compute the front ray by first computing the window end\u2019s from to the wall of and then shooting a ray from in the direction of the second reflex vertex of .\n###figure_6### ###figure_7### ###figure_8###"
52
+ },
53
+ {
54
+ "section_id": "3",
55
+ "parent_section_id": null,
56
+ "section_name": "Complete Polygon",
57
+ "text": "In this section, we will present our complete polygon: a polygon that can be guarded by two guards if and only if both guards are situated at irrational points."
58
+ },
59
+ {
60
+ "section_id": "3.1",
61
+ "parent_section_id": "3",
62
+ "section_name": "The Polygon",
63
+ "text": "As we described in Section 2 ###reference_### and displayed in Figure 8 ###reference_###, the polygon consists of a core and some pockets.\nThe polygon has four triangular pockets defining two guard segments.\nThe two guard segments lie on the lines and .\nFurthermore, the polygon has three quadrilateral pockets.\nIn Table 1 ###reference_###, the coordinates of the vertices of the polygon, the coordinates of the two guards, and the coordinates of the window\u2019s ends are given.\nThe walls of the three quadrilateral pockets have the supporting lines:\nTop pocket: .\nRight pocket: .\nBottom pocket: ."
64
+ },
65
+ {
66
+ "section_id": "3.2",
67
+ "parent_section_id": "3",
68
+ "section_name": "Proof",
69
+ "text": "We prove that our polygon can be guarded with two irrational guards, but cannot be guarded with two rational guards.\nWe state that the polygon can be guarded by two guards placed at\nand\nFigure 9 ###reference_### displays the visibility polygons of these two guards.\nIt can be checked using simple calculations that the two visibility polygons cover the complete polygon.\nFirst, both guard segments will contain a guard.\nFurthermore, the window\u2019s ends are in the same location (so no unseen region between them) and both vertices on the wall are guarded.\nNow, we prove that no two rational guards can guard our polygon.\nSpecifically, we show that no two guards, except the ones mentioned, will guard the entire polygon.\nClearly, both guard segments must contain a guard.\nLet be on the guard segment in the top left of the polygon, and be on the guard segment in the bottom right of the polygon.\nGiven the position of , we calculate where can guard all regions not guarded by .\nFor each of the three pockets, we will bound the position of given the position of .\nTo represent their positions, we will use their x-coordinates.\nAs both and lie on a non-vertical guard segment, their x-coordinates will uniquely describe their position.\nFirst, we calculate which part of the pocket guards and which part it does not.\nDue to the guard segment of , we will have exactly one window along with its corresponding window\u2019s end.\nAs described in Section 2 ###reference_###, we can use this construction to determine the region where can guard the region of the pocket does not cover.\nNotably, will always be on the correct side of the back ray.\nIndeed, the entire guard segment of lies on one side of the back ray.\nAs such, we will bound the feasible segment by ensuring lies on the correct side of the front ray.\nWe calculate the intersection of this ray and the guard segment.\nThen, the x-coordinate of may either not be smaller than, or be greater than the x-coordinate of the intersection between the front ray and the guard segment.\n###figure_9### It depends on the pocket whether the x-coordinate of can not be smaller or greater than the x-coordinate of the intersection.\nGuard must lie on the same side of the front ray as the unguarded region of the pocket.\nAs can be verified in Figure 9 ###reference_###, the x-coordinate of interacts with the intersections we find for the pockets in the following way:\nTop pocket: the x-coordinate of must be smaller or equal to the intersection.\nRight pocket: the x-coordinate of must be greater or equal to the intersection.\nBottom pocket: the x-coordinate of must be smaller or equal to the intersection.\nIt is important to note that the x-coordinate of does not lie on the same side of all three intersections.\nIf it did lie on the same side of all three, then the position of could trivially be at any coordinate greater or smaller than all three intersections.\nWe can use the x-coordinate of to determine its position.\nSo, we use \u2019s x-coordinate () to calculate inequalities that limit the x-coordinate of ():\nWe will use Equation 3.1 ###reference_###, Equation 3.2 ###reference_###, Equation 3.3 ###reference_### as a system of equations.\nA solution to the system of equations will have a corresponding pair of guards.\nWe use an algebraic computer program to calculate the solution to this system of equations.\nFigure 10 ###reference_### shows the values for for which the system of equations has a valid solution.\nThen, any value for is chosen 
between the bounds imposed by .\nHowever, not all solutions for correspond to valid positions for guard .\nSpecifically, notice in Figure 10 ###reference_### that the only possible value for is irrational.\nWe will argue that must be smaller than .\nFor the following description, we refer to Figure 11 ###reference_###.\n###figure_10### Suppose for the purpose of contradiction that there is a valid guard placement with .\nIn this case, guard fails to guard any part of the wall of the top pocket.\nGuard must be less than or equal to to guard the entire wall.\nHowever, for less than or equal to , guard fails to guard the wall of the right pocket. Guard can never guard the entire wall of the right pocket.\nThis gives a contradiction and implies that\n.\nAs such, we can limit the possible locations for guard as .\nEvidently, in this range, the only valid x-coordinate for is , see Figure 10 ###reference_###.\nFor this x-coordinate of , the only possible position for is at .\nFinally, this shows that the only possible configuration of two guards in this polygon is at , and at : both guards must be at irrational coordinates."
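The elimination behind the system of Equations 3.1-3.3 can be reproduced with a computer algebra system. Below is a sketch with illustrative stand-in bounds (not the paper's actual equations), showing how an irrational boundary value already falls out of rational constraints:

```python
import sympy as sp

x_l = sp.symbols('x_l', positive=True)

# Stand-ins for the three per-pocket bounds on x_t as functions of x_l:
top_bound    = 2 / x_l       # x_t <= top_bound     (upper bound)
right_bound  = x_l           # x_t >= right_bound   (lower bound)
bottom_bound = 4 - x_l       # x_t <= bottom_bound  (upper bound)

# A feasible x_t exists iff the lower bound is below both upper bounds.
print(sp.reduce_inequalities(
    [right_bound <= top_bound, right_bound <= bottom_bound], x_l))
# -> x_l <= sqrt(2): an irrational threshold from purely rational data.
```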
70
+ },
71
+ {
72
+ "section_id": "4",
73
+ "parent_section_id": null,
74
+ "section_name": "Challenges",
75
+ "text": "We encountered new challenges while searching for our polygon, compared to Abrahamsen, Adamaszek, and Miltzow [2 ###reference_b2###]\u2019s polygon that requires three irrational guards.\n###figure_11### ###figure_12###"
76
+ },
77
+ {
78
+ "section_id": "5",
79
+ "parent_section_id": null,
80
+ "section_name": "Conclusion.",
81
+ "text": "In comparison to the polygon construction with three guards [2 ###reference_b2###], our polygon has fewer parameters that we can adjust, as we have one guard less.\nAs everything depends on those few parameters, it was difficult to find a configuration that satisfies all the desired properties simultaneously.\nFurthermore, we need to check some additional properties that didn\u2019t play a role in the previous construction, as the middle guard was surrounded by the other guards from two different sides.\nAt last, we could not avoid the supporting line of the guard segment intersecting two quadrilateral pockets."
82
+ }
83
+ ],
84
+ "appendix": [],
85
+ "tables": {
86
+ "1": {
87
+ "table_html": "<figure class=\"ltx_table\" id=\"S2.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Coordinates of the vertices of the polygon (), the guards ( and ), and the window\u2019s ends ().</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S2.T1.70\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S2.T1.14.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_t\" id=\"S2.T1.9.1.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S2.T1.10.2.2\"></td>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S2.T1.11.3.3\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S2.T1.12.4.4\"></td>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S2.T1.13.5.5\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S2.T1.14.6.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.20.12\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l\" id=\"S2.T1.15.7.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S2.T1.16.8.2\"></td>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.17.9.3\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S2.T1.18.10.4\"></td>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.19.11.5\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S2.T1.20.12.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.26.18\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l\" id=\"S2.T1.21.13.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S2.T1.22.14.2\"></td>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.23.15.3\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S2.T1.24.16.4\"></td>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.25.17.5\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S2.T1.26.18.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.32.24\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l\" id=\"S2.T1.27.19.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S2.T1.28.20.2\"></td>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.29.21.3\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S2.T1.30.22.4\"></td>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.31.23.5\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S2.T1.32.24.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.38.30\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l\" id=\"S2.T1.33.25.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S2.T1.34.26.2\"></td>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.35.27.3\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S2.T1.36.28.4\"></td>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.37.29.5\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S2.T1.38.30.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.44.36\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l\" id=\"S2.T1.39.31.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S2.T1.40.32.2\"></td>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.41.33.3\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S2.T1.42.34.4\"></td>\n<th class=\"ltx_td 
ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.43.35.5\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S2.T1.44.36.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.50.42\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l\" id=\"S2.T1.45.37.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S2.T1.46.38.2\"></td>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.47.39.3\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S2.T1.48.40.4\"></td>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.49.41.5\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S2.T1.50.42.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.56.48\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l\" id=\"S2.T1.51.43.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S2.T1.52.44.2\"></td>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.53.45.3\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S2.T1.54.46.4\"></td>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.55.47.5\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S2.T1.56.48.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.62.54\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l\" id=\"S2.T1.57.49.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S2.T1.58.50.2\"></td>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.59.51.3\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S2.T1.60.52.4\"></td>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.61.53.5\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S2.T1.62.54.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.68.60\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l\" id=\"S2.T1.63.55.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S2.T1.64.56.2\"></td>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.65.57.3\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S2.T1.66.58.4\"></td>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.67.59.5\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S2.T1.68.60.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.70.62\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_l\" id=\"S2.T1.69.61.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r\" id=\"S2.T1.70.62.2\"></td>\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_b\" id=\"S2.T1.70.62.3\"></th>\n<td class=\"ltx_td ltx_border_b ltx_border_r\" id=\"S2.T1.70.62.4\"></td>\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_b\" id=\"S2.T1.70.62.5\"></th>\n<td class=\"ltx_td ltx_border_b ltx_border_r\" id=\"S2.T1.70.62.6\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
88
+ "capture": "Table 1: Coordinates of the vertices of the polygon (), the guards ( and ), and the window\u2019s ends ()."
89
+ }
90
+ },
91
+ "image_paths": {
92
+ "1": {
93
+ "figure_path": "2212.01211v3_figure_1.png",
94
+ "caption": "Figure 1: Lucas and Till guarding the polygon with just the two of them\u2026",
95
+ "url": "http://arxiv.org/html/2212.01211v3/x1.png"
96
+ },
97
+ "2": {
98
+ "figure_path": "2212.01211v3_figure_2.png",
99
+ "caption": "Figure 2: Any triangulation of a simple polygon can be three-colored.\nAt least one of the color classes has at most \u230an/3\u230b\ud835\udc5b3\\lfloor n/3\\rfloor\u230a italic_n / 3 \u230b vertices. This color class also guards the entire polygon, as every triangle is incident to all three colors [31].",
100
+ "url": "http://arxiv.org/html/2212.01211v3/x2.png"
101
+ },
102
+ "3": {
103
+ "figure_path": "2212.01211v3_figure_3.png",
104
+ "caption": "Figure 3: We may restrict the guards to lie on a dense grid.\nThis may make the optimal solution worse.",
105
+ "url": "http://arxiv.org/html/2212.01211v3/x3.png"
106
+ },
107
+ "4": {
108
+ "figure_path": "2212.01211v3_figure_4.png",
109
+ "caption": "Figure 4: Left: The dark green region is added to the visibility polygon. Right: The orange region is removed from the visibility polygon.",
110
+ "url": "http://arxiv.org/html/2212.01211v3/x4.png"
111
+ },
112
+ "5": {
113
+ "figure_path": "2212.01211v3_figure_5.png",
114
+ "caption": "Figure 5: Our final polygon: it has a core (gray), three quadrilateral pockets (blue), and four narrow triangular pockets (yellow).",
115
+ "url": "http://arxiv.org/html/2212.01211v3/x5.png"
116
+ },
117
+ "6": {
118
+ "figure_path": "2212.01211v3_figure_6.png",
119
+ "caption": "Figure 6: A small polygon that can only be guarded by two guards, because each guard segment (yellow dashed line) must contain a guard. The region where a guard could guard at least one pocket is shaded in light yellow.",
120
+ "url": "http://arxiv.org/html/2212.01211v3/x6.png"
121
+ },
122
+ "7": {
123
+ "figure_path": "2212.01211v3_figure_7.png",
124
+ "caption": "Figure 7: A polygon with guard l\ud835\udc59litalic_l. The guard l\ud835\udc59litalic_l defines an unguarded region in the quadrilateral pocket, a front ray and a back ray, and a feasible segment.",
125
+ "url": "http://arxiv.org/html/2212.01211v3/x7.png"
126
+ },
127
+ "8": {
128
+ "figure_path": "2212.01211v3_figure_8.png",
129
+ "caption": "Figure 8: Our complete polygon. The art gallery is shaded according to the function of each region: gray is the core, yellow is the pockets used to create guard segments, and turquoise are other pockets. The yellow dashed lines represent the guard segments. The coordinates of important vertices are given.",
130
+ "url": "http://arxiv.org/html/2212.01211v3/x8.png"
131
+ },
132
+ "9": {
133
+ "figure_path": "2212.01211v3_figure_9.png",
134
+ "caption": "Figure 9: Our complete polygon. The optimal solution with two guards at irrational coordinates is shown. The green regions are guarded by the upper left guard; the red regions are guarded by the bottom right guard; the purple regions are guarded by both. The dashed lines are rays shot from the guards through reflex vertices. For each pocket, these windows meet at a point on the art gallery\u2019s wall, of which the coordinates are also given.",
135
+ "url": "http://arxiv.org/html/2212.01211v3/x9.png"
136
+ },
137
+ "10": {
138
+ "figure_path": "2212.01211v3_figure_10.png",
139
+ "caption": "Figure 10: The solution to the system of equations (Equation 3.1, Equation 3.2, Equation 3.3). Here, \u2219\u2219\\bullet\u2219 denotes a closed interval, while \u2218\\circ\u2218 denotes an open interval.",
140
+ "url": "http://arxiv.org/html/2212.01211v3/x10.png"
141
+ },
142
+ "11": {
143
+ "figure_path": "2212.01211v3_figure_11.png",
144
+ "caption": "Figure 11: The guard l\ud835\udc59litalic_l must be to the left of the line x=105399974\ud835\udc65105399974x=\\frac{10539}{9974}italic_x = divide start_ARG 10539 end_ARG start_ARG 9974 end_ARG.",
145
+ "url": "http://arxiv.org/html/2212.01211v3/x11.png"
146
+ },
147
+ "12(a)": {
148
+ "figure_path": "2212.01211v3_figure_12(a).png",
149
+ "caption": "Figure 12: Left: Two guards and two reflex vertices, where the corresponding wall leads to an invalid polygon. Right: Three front rays intersecting the guard segment.\nGuard t\ud835\udc61titalic_t must be on the dotted side of each ray.\nIf the intersection points were reversed, there does not exist a valid solution either.",
150
+ "url": "http://arxiv.org/html/2212.01211v3/x12.png"
151
+ },
152
+ "12(b)": {
153
+ "figure_path": "2212.01211v3_figure_12(b).png",
154
+ "caption": "Figure 12: Left: Two guards and two reflex vertices, where the corresponding wall leads to an invalid polygon. Right: Three front rays intersecting the guard segment.\nGuard t\ud835\udc61titalic_t must be on the dotted side of each ray.\nIf the intersection points were reversed, there does not exist a valid solution either.",
155
+ "url": "http://arxiv.org/html/2212.01211v3/x13.png"
156
+ },
157
+ "13": {
158
+ "figure_path": "2212.01211v3_figure_13.png",
159
+ "caption": "Figure 13: We draw the two guard segments without the surrounding polygon.\nOn the top, we move the guard l\ud835\udc59litalic_l from left to right.\nOn the bottom, we see the corresponding feasible segment for guard t\ud835\udc61titalic_t.\nRecall that we denote by l\u22c6superscript\ud835\udc59\u22c6l^{\\star}italic_l start_POSTSUPERSCRIPT \u22c6 end_POSTSUPERSCRIPT and\nt\u22c6superscript\ud835\udc61\u22c6t^{\\star}italic_t start_POSTSUPERSCRIPT \u22c6 end_POSTSUPERSCRIPT the coordinates of the predetermined guard positions.",
160
+ "url": "http://arxiv.org/html/2212.01211v3/x14.png"
161
+ }
162
+ },
163
+ "validation": true,
164
+ "references": [
165
+ {
166
+ "1": {
167
+ "title": "Covering Polygons is Even Harder.",
168
+ "author": "Mikkel Abrahamsen.",
169
+ "venue": "In Nisheeth K. Vishnoi, editor, 2021 IEEE 62nd Annual Symposium\non Foundations of Computer Science (FOCS), pages 375\u2013386, 2022.",
170
+ "url": null
171
+ }
172
+ },
173
+ {
174
+ "2": {
175
+ "title": "Irrational guards are sometimes needed.",
176
+ "author": "Mikkel Abrahamsen, Anna Adamaszek, and Tillmann Miltzow.",
177
+ "venue": "In SoCG 2017, pages 3:1\u20133:15, 2017.",
178
+ "url": null
179
+ }
180
+ },
181
+ {
182
+ "3": {
183
+ "title": "The Art Gallery Problem is -complete.",
184
+ "author": "Mikkel Abrahamsen, Anna Adamaszek, and Tillmann Miltzow.",
185
+ "venue": "Journal of the ACM, 69(1):1\u201370, 2022.",
186
+ "url": null
187
+ }
188
+ },
189
+ {
190
+ "4": {
191
+ "title": "Training Neural Networks is ER-complete.",
192
+ "author": "Mikkel Abrahamsen, Linda Kleist, and Tillmann Miltzow.",
193
+ "venue": "In Marc A. Ranzato, Alina Beygelzimer, K. Nguyen, Percy Liang,\nJennifer W. Vaughan, and Yann Dauphin, editors, Advances in Neural\nInformation Processing Systems (NeurIPS 2021), volume 34, 2021.",
194
+ "url": null
195
+ }
196
+ },
197
+ {
198
+ "5": {
199
+ "title": "Algorithms in Real Algebraic Geometry, volume 10 of Algorithms and Computation in Mathematics.",
200
+ "author": "Sauguta Basu, Richard Pollack, and Marie-Fran\u00e7oise Roy.",
201
+ "venue": "Springer, Berlin, Heidelberg, 2006.",
202
+ "url": null
203
+ }
204
+ },
205
+ {
206
+ "6": {
207
+ "title": "Computing two-covers of simple polygons.",
208
+ "author": "Patrice Belleville.",
209
+ "venue": "master thesis, 1991.",
210
+ "url": null
211
+ }
212
+ },
213
+ {
214
+ "7": {
215
+ "title": "Two-guarding simple polygons.",
216
+ "author": "Patrice Belleville.",
217
+ "venue": "In Proc. 4th Canadian Conference on Computational Geometry,\npage 103\u2013108, 1992.",
218
+ "url": null
219
+ }
220
+ },
221
+ {
222
+ "8": {
223
+ "title": "On the Computational Complexity of Decision Problems About\nMulti-player Nash Equilibria.",
224
+ "author": "Marie L. T. Berthelsen and Kristoffer A. Hansen.",
225
+ "venue": "In Dimitris Fotakis and Evangelos Markakis, editors, International Symposium on Algorithmic Game Theory, volume 11801 of Lecture Notes in Computer Science, pages 153\u2013167, 2019.",
226
+ "url": null
227
+ }
228
+ },
229
+ {
230
+ "9": {
231
+ "title": "Training fully connected neural networks is\n-complete.",
232
+ "author": "Daniel Bertschinger, Christoph Hertrich, Paul Jungeblut, Tillmann Miltzow, and\nSimon Weber.",
233
+ "venue": "Arxive, abs/2204.01368, 2022.",
234
+ "url": null
235
+ }
236
+ },
237
+ {
238
+ "10": {
239
+ "title": "Topological art in simple galleries.",
240
+ "author": "Daniel Bertschinger, Nicolas El Maalouly, Tillmann Miltzow, Patrick Schnider,\nand Simon Weber.",
241
+ "venue": "In Karl Bringmann and Timothy Chan, editors, SOSA 2022, pages\n87\u2013116. SIAM, 2022.",
242
+ "url": null
243
+ }
244
+ },
245
+ {
246
+ "11": {
247
+ "title": "A Catalog of EXISTS-R-Complete Decision Problems About Nash\nEquilibria in Multi-Player Games.",
248
+ "author": "Vittorio Bil\u00f2 and Marios Mavronicolas.",
249
+ "venue": "In Nicolas Ollinger and Heribert Vollmer, editors, 33rd\nSymposium on Theoretical Aspects of Computer Science (STACS 2016), Leibniz\nInternational Proceedings in Informatics (LIPIcs), pages 17:1\u201317:13, 2016.",
250
+ "url": null
251
+ }
252
+ },
253
+ {
254
+ "12": {
255
+ "title": "Existential-R-Complete Decision Problems about Symmetric Nash\nEquilibria in Symmetric Multi-Player Games.",
256
+ "author": "Vittorio Bil\u00f2 and Marios Mavronicolas.",
257
+ "venue": "In Vollmer Heribert and Brigitte Vall\u00e9e, editors, 34th\nSymposium on Theoretical Aspects of Computer Science (STACS 2017), volume 66\nof Leibniz International Proceedings in Informatics (LIPIcs), pages\n13:1\u201313:14, 2017.",
258
+ "url": null
259
+ }
260
+ },
261
+ {
262
+ "13": {
263
+ "title": "Computational Complexity of Multi-player Evolutionarily Stable\nStrategies.",
264
+ "author": "Manon Blanc and Kristoffer A. Hansen.",
265
+ "venue": "In Rahul Santhanam and Daniil Musatov, editors, Computer Science\n\u2013 Theory and Applications (CSR 2021), volume 12730 of Lecture Notes\nin Computer Science, pages 1\u201317, 2021.",
266
+ "url": null
267
+ }
268
+ },
269
+ {
270
+ "14": {
271
+ "title": "On a Theory of Computation and Complexity over the Real Numbers:\nNP-Completeness, Recursive Functions and Universal Machines.",
272
+ "author": "Lenore Blum, Mike Shub, and Steve Smale.",
273
+ "venue": "Bulletin of the American Mathematical Society, 21:1\u201346, 1989.",
274
+ "url": null
275
+ }
276
+ },
277
+ {
278
+ "15": {
279
+ "title": "An approximation algorithm for the art gallery problem.",
280
+ "author": "\u00c9douard Bonnet and Tillmann Miltzow.",
281
+ "venue": "In SoCG 2017, pages 20:1\u201320:15, 2017.",
282
+ "url": null
283
+ }
284
+ },
285
+ {
286
+ "16": {
287
+ "title": "Parameterized hardness of art gallery problems.",
288
+ "author": "\u00c9douard Bonnet and Tillmann Miltzow.",
289
+ "venue": "ACM Transactions on Algorithms, 16(4), 2020.",
290
+ "url": null
291
+ }
292
+ },
293
+ {
294
+ "17": {
295
+ "title": "Point guards and point clouds: Solving general art gallery\nproblems.",
296
+ "author": "Dorit Borrmann, Pedro J. de Rezende, Cid C. de Souza, S\u00e1ndor P. Fekete,\nStephan Friedrichs, Alexander Kr\u00f6ller, Andreas N\u00fcchter,\nChristiane Schmidt, and Davi C. Tozoni.",
297
+ "venue": "In SoCG 2013, pages 347\u2013348, 2013.",
298
+ "url": null
299
+ }
300
+ },
301
+ {
302
+ "18": {
303
+ "title": "Exotic Quantifiers, Complexity Classes, and Complete Problems.",
304
+ "author": "Peter B\u00fcrgisser and Felipe Cucker.",
305
+ "venue": "Foundations of Computational Mathematics, 9(2):135\u2013170, 2009.",
306
+ "url": null
307
+ }
308
+ },
309
+ {
310
+ "19": {
311
+ "title": "Some Algebraic and Geometric Computations in PSPACE.",
312
+ "author": "John Canny.",
313
+ "venue": "In STOC \u201988: Proceedings of the Twentieth Annual ACM Symposium\non Theory of Computing, pages 460\u2013467, 1988.",
314
+ "url": null
315
+ }
316
+ },
317
+ {
318
+ "20": {
319
+ "title": "Intersection Graphs of Rays and Grounded Segments.",
320
+ "author": "Jean Cardinal, Stefan Felsner, Tillmann Miltzow, Casey Tompkins, and Birgit\nVogtenhuber.",
321
+ "venue": "Journal of Graph Algorithms and Applications, 22(2):273\u2013294,\n2018.",
322
+ "url": null
323
+ }
324
+ },
325
+ {
326
+ "21": {
327
+ "title": "On Restricted Nonnegative Matrix Factorization.",
328
+ "author": "Dmitry Chistikov, Stefan Kiefer, Ines Marusic, Mahsa Shirmohammadi, and James\nWorrell.",
329
+ "venue": "In Ioannis Chatzigiannakis, Michael Mitzenmacher, Yuval Rabani, and\nDavide Sangiorgi, editors, 43rd International Colloquium on Automata,\nLanguages, and Programming (ICALP 2016), volume 55 of Leibniz\nInternational Proceedings in Informatics (LIPIcs), pages 103:1\u2013103:14,\n2016.",
330
+ "url": null
331
+ }
332
+ },
333
+ {
334
+ "22": {
335
+ "title": "A combinatorial theorem in plane geometry.",
336
+ "author": "V\u00e1clav Chv\u00e1tal.",
337
+ "venue": "Journal of Combinatorial Theory, Series B, 18(1):39\u201341, 1975.",
338
+ "url": null
339
+ }
340
+ },
341
+ {
342
+ "23": {
343
+ "title": "On the Complexity of the Escape Problem for Linear Dynamical Systems\nover Compact Semialgebraic Sets.",
344
+ "author": "Julian D\u2019Costa, Engel Lefaucheux, Eike Neumann, Jo\u00ebl Ouaknine, and James\nWorrel.",
345
+ "venue": "In Filippo Bonchi and Simon J. Puglisi, editors, 46th\nInternational Symposium on Mathematical Foundations of Computer Science (MFCS\n2021), volume 202 of Leibniz International Proceedings in Informatics\n(LIPIcs), pages 33:1\u201333:21, 2021.",
346
+ "url": null
347
+ }
348
+ },
349
+ {
350
+ "24": {
351
+ "title": "A Universality Theorem for Nested Polytopes.",
352
+ "author": "Michael G. Dobbins, Andreas Holmsen, and Tillmann Miltzow.",
353
+ "venue": "arXiv preprint, 2019.",
354
+ "url": null
355
+ }
356
+ },
357
+ {
358
+ "25": {
359
+ "title": "Completeness for the Complexity Class and\nArea-Universality.",
360
+ "author": "Michael G. Dobbins, Linda Kleist, Tillmann Miltzow, and Pawe\u0142\nRz\u0105\u017cewski.",
361
+ "venue": "Discrete & Computational Geometry, 2022.",
362
+ "url": null
363
+ }
364
+ },
365
+ {
366
+ "26": {
367
+ "title": "Guarding galleries and terrains.",
368
+ "author": "Alon Efrat and Sariel Har-Peled.",
369
+ "venue": "In IFIP 2002, pages 181\u2013192, 2002.",
370
+ "url": null
371
+ }
372
+ },
373
+ {
374
+ "27": {
375
+ "title": "Guarding galleries and terrains.",
376
+ "author": "Alon Efrat and Sariel Har-Peled.",
377
+ "venue": "Inf. Process. Lett., 100(6):238\u2013245, 2006.",
378
+ "url": null
379
+ }
380
+ },
381
+ {
382
+ "28": {
383
+ "title": "Inapproximability results for guarding polygons and terrains.",
384
+ "author": "Stephan Eidenbenz, Christoph Stamm, and Peter Widmayer.",
385
+ "venue": "Algorithmica, 31(1):79\u2013113, 2001.",
386
+ "url": null
387
+ }
388
+ },
389
+ {
390
+ "29": {
391
+ "title": "Optimal Curve Straightening is -Complete.",
392
+ "author": "Jeff Erickson.",
393
+ "venue": "arXiv preprint, 2019.",
394
+ "url": null
395
+ }
396
+ },
397
+ {
398
+ "30": {
399
+ "title": "Smoothing the gap between NP and ER.",
400
+ "author": "Jeff Erickson, Ivor van der Hoog, and Tillmann Miltzow.",
401
+ "venue": "In 2020 IEEE 61st Annual Symposium on Foundations of Computer\nScience (FOCS), pages 1022\u20131033, 2020.",
402
+ "url": null
403
+ }
404
+ },
405
+ {
406
+ "31": {
407
+ "title": "A short proof of Chv\u00e1tal\u2019s watchman theorem.",
408
+ "author": "Steve Fisk.",
409
+ "venue": "J. Comb. Theory, Ser. B, 24(3):374, 1978.",
410
+ "url": null
411
+ }
412
+ },
413
+ {
414
+ "32": {
415
+ "title": "Integer solutions for the art gallery problem using linear\nprogramming.",
416
+ "author": "Stephan Friedrichs.",
417
+ "venue": "Master\u2019s thesis, TU Braunschweig, 2012.",
418
+ "url": null
419
+ }
420
+ },
421
+ {
422
+ "33": {
423
+ "title": "-Completeness for Decision Versions of\nMulti-Player (Symmetric) Nash Equilibria.",
424
+ "author": "Jugal Garg, Ruta Mehta, Vijay V. Vazirani, and Sadra Yazdanbod.",
425
+ "venue": "ACM Transactions on Economics and Computation, 6(1):1:1\u20131:23,\n2018.",
426
+ "url": null
427
+ }
428
+ },
429
+ {
430
+ "34": {
431
+ "title": "A Practical Algorithm with Performance Guarantees for the Art\nGallery Problem.",
432
+ "author": "Simon B. Hengeveld and Tillmann Miltzow.",
433
+ "venue": "In Kevin Buchin and \u00c9ric Colin de Verdi\u00e8re, editors, 37th International Symposium on Computational Geometry (SoCG 2021), volume\n189 of Leibniz International Proceedings in Informatics (LIPIcs), pages\n44:1\u201344:16, 2021.",
434
+ "url": null
435
+ }
436
+ },
437
+ {
438
+ "35": {
439
+ "title": "A practical algorithm with performance guarantees for the art gallery\nproblem.",
440
+ "author": "Simon B Hengeveld and Tillmann Miltzow.",
441
+ "venue": "In 37th International Symposium on Computational Geometry (SoCG\n2021). Schloss Dagstuhl-Leibniz-Zentrum f\u00fcr Informatik, 2021.",
442
+ "url": null
443
+ }
444
+ },
445
+ {
446
+ "36": {
447
+ "title": "The Complexity of the Hausdorff Distance.",
448
+ "author": "Paul Jungeblut, Linda Kleist, and Tillmann Miltzow.",
449
+ "venue": "In Xavier Goaoc and Michael Kerber, editors, 38th International\nSymposium on Computational Geometry (SoCG 2022), volume 224 of Leibniz\nInternational Proceedings in Informatics (LIPIcs), pages 48:1\u201348:17, 2022.",
450
+ "url": null
451
+ }
452
+ },
453
+ {
454
+ "37": {
455
+ "title": "Sphere and Dot Product Representations of Graphs.",
456
+ "author": "Ross Kang and Tobias M\u00fcller.",
457
+ "venue": "Discrete & Computational Geometry, 47(3):548\u2013569, 2012.",
458
+ "url": null
459
+ }
460
+ },
461
+ {
462
+ "38": {
463
+ "title": "Intersection Graphs of Segments.",
464
+ "author": "Jan Kratochv\u00edl and Ji\u0159\u00ed Matou\u0161ek.",
465
+ "venue": "Journal of Combinatorial Theory, Series B, 62(2):289\u2013315,\n1994.",
466
+ "url": null
467
+ }
468
+ },
469
+ {
470
+ "39": {
471
+ "title": "Computational complexity of art gallery problems.",
472
+ "author": "Der-Tsai Lee and Arthur K. Lin.",
473
+ "venue": "IEEE Transactions on Information Theory, 32(2):276\u2013282,\n1986.",
474
+ "url": null
475
+ }
476
+ },
477
+ {
478
+ "40": {
479
+ "title": "An optimal algorithm for finding the kernel of a polygon.",
480
+ "author": "Der-Tsai Lee and Franco P. Preparata.",
481
+ "venue": "Journal of the ACM (JACM), 26(3):415\u2013421, 1979.",
482
+ "url": null
483
+ }
484
+ },
485
+ {
486
+ "41": {
487
+ "title": "The Complexity of Drawing a Graph in a Polygonal Region.",
488
+ "author": "Anna Lubiw, Tillmann Miltzow, and Debajyoti Mondal.",
489
+ "venue": "Journal of Graph Algorithms and Applications, 26(4):421\u2013446,\n2022.",
490
+ "url": null
491
+ }
492
+ },
493
+ {
494
+ "42": {
495
+ "title": "Intersection graphs of segments and .",
496
+ "author": "Ji\u0159\u00ed Matou\u0161ek.",
497
+ "venue": "arXiv preprint, 2014.",
498
+ "url": null
499
+ }
500
+ },
501
+ {
502
+ "43": {
503
+ "title": "Integer realizations of disk and segment graphs.",
504
+ "author": "Colin McDiarmid and Tobias M\u00fcller.",
505
+ "venue": "Journal of Combinatorial Theory, Series B, 103(1):114\u2013143,\n2013.",
506
+ "url": null
507
+ }
508
+ },
509
+ {
510
+ "44": {
511
+ "title": "On Classifying Continuous Constraint Satisfaction Problems.",
512
+ "author": "Tillmann Miltzow and Reinier F. Schmiermann.",
513
+ "venue": "In Nisheeth K. Vishnoi, editor, 2021 IEEE 62nd Annual Symposium\non Foundations of Computer Science (FOCS), pages 781\u2013791, 2022.",
514
+ "url": null
515
+ }
516
+ },
517
+ {
518
+ "45": {
519
+ "title": "The Universality Theorems on the Classification Problem of\nConfiguration Varieties and Convex Polytopes Varieties.",
520
+ "author": "Nikolai E. Mn\u00ebv.",
521
+ "venue": "In Oleg Y. Viro and Anatoly M Vershik, editors, Topology and\nGeometry \u2014 Rohlin Seminar, volume 1346 of Lecture Notes in\nMathematics, pages 527\u2013543. Springer, 1988.",
522
+ "url": null
523
+ }
524
+ },
525
+ {
526
+ "46": {
527
+ "title": "Art Gallery Theorems and Algorithms.",
528
+ "author": "Joseph O\u2019Rourke.",
529
+ "venue": "Oxford University Press, 1987.",
530
+ "url": null
531
+ }
532
+ },
533
+ {
534
+ "47": {
535
+ "title": "Realization Spaces of 4-Polytopes are Universal.",
536
+ "author": "J\u00fcrgen Richter-Gebert and G\u00fcnter M. Ziegler.",
537
+ "venue": "Bulletin of the American Mathematical Society, 32(4):403\u2013412,\n1995.",
538
+ "url": null
539
+ }
540
+ },
541
+ {
542
+ "48": {
543
+ "title": "Complexity of Some Geometric and Topological Problems.",
544
+ "author": "Marcus Schaefer.",
545
+ "venue": "In David Eppstein and Emden R. Gansner, editors, GD 2009: Graph\nDrawing, volume 5849 of Lecture Notes in Computer Science, pages\n334\u2013344, 2010.",
546
+ "url": null
547
+ }
548
+ },
549
+ {
550
+ "49": {
551
+ "title": "Realizability of Graphs and Linkages, pages 461\u2013482.",
552
+ "author": "Marcus Schaefer.",
553
+ "venue": "Thirty Essays on Geometric Graph Theory. Springer, 2013.",
554
+ "url": null
555
+ }
556
+ },
557
+ {
558
+ "50": {
559
+ "title": "Complexity of Geometric k-Planarity for Fixed k.",
560
+ "author": "Marcus Schaefer.",
561
+ "venue": "Journal of Graph Algorithms and Applications, 25(1):29\u201341,\n2021.",
562
+ "url": null
563
+ }
564
+ },
565
+ {
566
+ "51": {
567
+ "title": "Fixed Points, Nash Equilibria, and the Existential Theory of the\nReals.",
568
+ "author": "Marcus Schaefer and Daniel \u0160tefankovi\u010d.",
569
+ "venue": "Theory of Computing Systems, 60:172\u2013193, 2017.",
570
+ "url": null
571
+ }
572
+ },
573
+ {
574
+ "52": {
575
+ "title": "Beyond the existential theory of the reals.",
576
+ "author": "Marcus Schaefer and Daniel Stefankovic.",
577
+ "venue": "CoRR, abs/2210.00571, 2022.",
578
+ "url": null
579
+ }
580
+ },
581
+ {
582
+ "53": {
583
+ "title": "The Complexity of Tensor Rank.",
584
+ "author": "Marcus Schaefer and Daniel \u0160tefankovi\u010d.",
585
+ "venue": "Theory of Computing Systems, 62(5):1161\u20131174, 2018.",
586
+ "url": null
587
+ }
588
+ },
589
+ {
590
+ "54": {
591
+ "title": "A Universality Theorem for Nonnegative Matrix Factorizations.",
592
+ "author": "Yaroslav Shitov.",
593
+ "venue": "arXiv preprint, 2016.",
594
+ "url": null
595
+ }
596
+ },
597
+ {
598
+ "55": {
599
+ "title": "The complexity of positive semidefinite matrix factorization.",
600
+ "author": "Yaroslav Shitov.",
601
+ "venue": "SIAM Journal on Optimization, 27(3):1898\u20131909, 2017.",
602
+ "url": null
603
+ }
604
+ },
605
+ {
606
+ "56": {
607
+ "title": "Stretchability of Pseudolines is NP-Hard.",
608
+ "author": "Peter W. Shor.",
609
+ "venue": "In Peter Gritzmann and Bernd Sturmfels, editors, Applied\nGeometry And Discrete Mathematics, volume 4 of DIMACS Series in\nDiscrete Mathematics and Theoretical Computer Science, pages 531\u2013554, 1991.",
610
+ "url": null
611
+ }
612
+ },
613
+ {
614
+ "57": {
615
+ "title": "Complexity of the boundary-guarding art gallery problem.",
616
+ "author": "Jack Stade.",
617
+ "venue": "arXiv preprint arXiv:2210.12817, 2022.",
618
+ "url": null
619
+ }
620
+ },
621
+ {
622
+ "58": {
623
+ "title": "Topological universality of the art gallery problem.",
624
+ "author": "Jack Stade and Jamie Tucker-Foltz.",
625
+ "venue": "arXiv preprint arXiv:2202.11076, 2022.",
626
+ "url": null
627
+ }
628
+ },
629
+ {
630
+ "59": {
631
+ "title": "Algorithm 966: A practical iterative algorithm for the art gallery\nproblem using integer linear programming.",
632
+ "author": "Davi C. Tozoni, Pedro J. de Rezende, and Cid C. de Souza.",
633
+ "venue": "ACM Trans. Math. Softw., 43(2), August 2016.",
634
+ "url": null
635
+ }
636
+ }
637
+ ],
638
+ "url": "http://arxiv.org/html/2212.01211v3"
639
+ }
20240620/2212.10131v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240620/2301.13006v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240620/2302.02224v3.json ADDED
@@ -0,0 +1,774 @@
1
+ {
2
+ "title": "TAP: The Attention Patch for Cross-Modal Knowledge Transfer from Unlabeled Modality",
3
+ "abstract": "This paper addresses a cross-modal learning framework, where the objective is to enhance the performance of supervised learning in the primary modality using an unlabeled, unpaired secondary modality. Taking a probabilistic approach for missing information estimation, we show that the extra information contained in the secondary modality can be estimated via Nadaraya-Watson (NW) kernel regression, which can further be expressed as a kernelized cross-attention module (under linear transformation). This expression lays the foundation for introducing The Attention Patch (TAP), a simple neural network add-on that can be trained to allow data-level knowledge transfer from the unlabeled modality. We provide extensive numerical simulations using real-world datasets to show that TAP can provide statistically significant improvement in generalization across different domains and different neural network architectures, making use of seemingly unusable unlabeled cross-modal data.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Consider a cross-modal learning framework where there exist a labeled primary modality and an unlabeled, unpaired secondary modality. In this paper, we address the following research question: can we boost the performance of supervised learning in the primary modality by exploiting the extra information in the secondary modality (that is unlabeled and unpaired with the primary modality)? Our work naturally lies at the intersection of cross-modal learning and semi-supervised learning. The cross-modal learning paradigm learns from data in different modalities (Li et al., 2018 ###reference_b32###). From a probability theory perspective, cross-modal data often refers to data with different support dimensions and distribution curvatures. The semi-supervised learning paradigm uses unlabeled data to improve the model performance learned by limited labeled data (Zhu, 2005 ###reference_b73###; Van Engelen & Hoos, 2020 ###reference_b59###). While cross-modal learning and semi-supervised learning have been extensively studied independently, the intersection of the two has been less explored in the literature and remained elusive. To be specific, when we have a limited amount of labeled data in the primary modality and a large set of unlabeled and unpaired data in the secondary modality available during training, there is no principled learning paradigm that can make use of the secondary modality to create a model that is better than the model learned with only the primary modality. Figure 1(a) ###reference_sf1### presents a visualization of our target problem to solve.\n###figure_1### ###figure_2### Our target problem is frequently encountered in different research communities. For example, when building a battery failure prediction model using temporal current and voltage readings, one option is to learn a supervised learning model with only current and voltage readings. However, there exist unlabeled videos of battery inner structure changes that are generated by different research labs, so a potentially more effective option is to incorporate the information of these videos into the model. These videos are information-rich, but labeling them requires repeating the collection process, which is expensive and potentially impossible due to equipment constraints (Davidson et al., 2018 ###reference_b9###). In general, reusing information-rich but hard-to-label datasets for different learning tasks calls for developing novel methods at the intersection of cross-modal learning and semi-supervised learning.\nTo solve the target problem, we start by formulating the missing information estimation problem from a probability density estimation perspective. Using kernel density estimation (Rosenblatt, 1956 ###reference_b47###; Wand & Jones, 1994 ###reference_b63###; Wang et al., 2023 ###reference_b66###), we show that this formulation leads to a multivariate NW kernel regression, which can further be expressed as a kernelized cross-attention module (under linear transformation). We name this module The Attention Patch (TAP), as shown in Figure 1(b) ###reference_sf2###. TAP is a simple neural network plugin that can be attached between two consecutive layers in a neural network. The TAP integration requires minimal modification to the original network (i.e., only moderate dimension change), and the parameters of mappings can be learned in parallel with the original network training."
10
+ },
11
+ {
12
+ "section_id": "1.1",
13
+ "parent_section_id": "1",
14
+ "section_name": "Summary of Contributions",
15
+ "text": "This is the first work that investigates a learning paradigm in semi-supervised learning and cross-modal learning, where the \"extra\" information in is not only unlabeled and unpaired but also comes from a different modality than the primary modality . We propose a framework to enhance supervised learning of the primary modality using unpaired, unlabeled secondary modality .\nIn Sections 3 ###reference_### and 4 ###reference_###, we show that formulating our target problem as a missing information estimation problem leads to a multivariate Nadaraya-Watson (NW) kernel regression, and it further recovers a kernelized version of the popular cross-attention mechanism (Vaswani et al., 2017 ###reference_b60###) (under linear transformation). Based on our observation, we propose The Attention Patch (TAP) neural network plugin for cross-modal knowledge transfer from unlabeled modality.\nWe further propose a batch training strategy to incorporate more unlabeled cross-modal data while maintaining moderate memory costs.\nWe provide detailed simulations on three real-world datasets in different domains to examine various aspects of TAP and demonstrate that the integration of TAP into a neural network can provide statistically significant improvement in generalization using the unlabeled modality. We also provide detailed ablation studies to investigate the best configuration for TAP in practice, including the choice of kernel, the choice for latent space transformation, and compatibility with CNN and Transformer-based backbone feature extractors with an additional text-image dataset."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "Related Literature",
+ "text": "In this section, we provide a review of related literature that explains why existing techniques can not be directly applied to our target problem."
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "Cross-Modal Learning",
+ "text": "Cross-modal learning focuses on learning with data from different modalities. The most representative topic application-wise is cross-modal retrieval. This topic focuses on finding relevant samples in one modality given a query in another modality (Wang et al., 2016 ###reference_b64###). The most important step in cross-modal retrieval tasks is learning a coupled space that can correctly describe the correlation between data points from different modalities. Traditional techniques including canonical correlation analysis (Hardoon et al., 2004 ###reference_b19###; Andrew et al., 2013 ###reference_b1###; Wang & Shahrampour, 2021 ###reference_b65###), partial least squares (Geladi & Kowalski, 1986 ###reference_b18###; Cha, 1994 ###reference_b5###), and bilinear model (Sharma et al., 2012 ###reference_b50###; Tenenbaum & Freeman, 2000 ###reference_b56###) find a simple projection of the matching pairs that minimize certain pre-defined loss functions. This framework, though has seen several variants and extensions (Ngiam et al., 2011 ###reference_b39###; Hong et al., 2015 ###reference_b20###; Sohn et al., 2014 ###reference_b52###; Xu et al., 2015 ###reference_b71###), remains the most popular framework in cross-modal correlation learning.\nIn addition to cross-modal retrieval, cross-modal supervised learning also follows the same principle. Examples include but are not limited to Lin & Tang (2006 ###reference_b33###); Evangelopoulos et al. (2013 ###reference_b15###); Jing et al. (2014 ###reference_b23###); Feichtenhofer et al. (2016 ###reference_b16###); Peng et al. (2017 ###reference_b44###); Li et al. (2019 ###reference_b31###). All of these models involve coupled space learning through either implicit or explicit learning loss with matching pairs of cross-modal inputs.\nWith the rise in popularity of natural language processing and time series analysis, a new concept of weak alignment appears in cross-modal learning. Weak alignment refers to the missing alignment of sub-components in instances from different modalities (Baltru\u0161aitis et al., 2018 ###reference_b2###). More specifically, this often means cross-modal input sequences with different sampling intervals or orders. For example, in the case of vision and language models, a text description sequence of a video will usually differ from the time and order of information that appeared in the video (Venugopalan et al., 2015 ###reference_b61###; Tsai et al., 2019 ###reference_b58###), or a text description of an image will need to map a sequence of words to a collection of objects without any specific orders (Mitchell et al., 2012 ###reference_b36###; Kulkarni et al., 2013 ###reference_b29###; Chen et al., 2015 ###reference_b6###; Karpathy & Fei-Fei, 2015 ###reference_b24###).\nThere exists a research direction that taps into instance-wise nonalignment for cross-modal learning, which is called non-parallel co-learning (Baltru\u0161aitis et al., 2018 ###reference_b2###). Non-parallel co-learning aims at improving the model learned on a single modality using another modality that is unaligned with the primary data. However, this is a concept that has only been studied with very specific applications, and it also requires the reference modality to be labeled during the training process. 
For example, cross-modal transfer learning (Frome et al., 2013 ###reference_b17###; Kiela & Bottou, 2014 ###reference_b27###; Mahasseni & Todorovic, 2016 ###reference_b35###) mainly focuses on transferring supervised pre-trained embedding networks for improved cross-model prediction accuracy. Cross-modal meta-learning (Phoo & Hariharan, 2020 ###reference_b45###; Islam et al., 2021 ###reference_b21###) investigates improving few-shot learning performance of primary modality using a labeled, yet unaligned secondary modality.\nIn a nutshell, almost all cross-modal learning frameworks focus on the case where there are known alignments between different modalities of data at least during the learning phase. Our work considers the case where both alignment and labels do not exist during learning, and our proposed architecture can be applied to arbitrary modalities."
+ },
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "Semi-Supervised Learning",
+ "text": "Semi-supervised learning focuses on addressing the challenge of limited labeled data availability in building a learning algorithm (Zhu, 2005 ###reference_b73###; Van Engelen & Hoos, 2020 ###reference_b59###). Among all kinds of semi-supervised learning methods, self-training is the closest class of methods that relates to our study. Self-training refers to the class of methods that train a supervised learning algorithm using labeled and unlabeled data together (Triguero et al., 2015 ###reference_b57###). This approach is usually done by assigning pseudo labels to the unlabeled data and jointly refining the supervised learning model and the pseudo labels by iterative training (Yarowsky, 1995 ###reference_b72###; Rosenberg et al., 2005 ###reference_b46###; D\u00f3pido et al., 2013 ###reference_b13###; Wu et al., 2012 ###reference_b68###; Tanha et al., 2017 ###reference_b55###). For most traditional learners, this means re-training the learning algorithm many times as the pseudo labels are being updated. However, this pseudo-label training approach naturally works with incremental learning algorithms like neural networks, where the model is gradually learned through optimizing an objective function (Lee et al., 2013 ###reference_b30###; Berthelot et al., 2019 ###reference_b3###; Zoph et al., 2020 ###reference_b74###; Xie et al., 2020 ###reference_b69###; Sohn et al., 2020 ###reference_b53###). Strictly speaking, all pseudo-labeling/self-training methods focus on assigning labels in the prediction space to the data points in the primary space that is the same as the labeled data.\nThere exists another line of research that studies semi-supervised learning in the context of multi-view or multi-modal learning. For example, multi-view co-training methods (Blum & Mitchell, 1998 ###reference_b4###; Kiritchenko & Matwin, 2001 ###reference_b28###; Wan, 2009 ###reference_b62###; Du et al., 2010 ###reference_b14###) propose to train different classifiers on different views of the same data. Multi-view co-training generally assumes the two views are independent of each other but can perform equally well in a single-view prediction task. The development of deep neural networks also introduced a lot of cross-modal semi-supervised learning works, which mainly focus on computer vision and natural language processing (Nie et al., 2017b ###reference_b41###; a ###reference_b40###; 2019 ###reference_b42###; Jia et al., 2020 ###reference_b22###). Again, all of these mentioned works rely on the availability of labels in all the modalities.\nAs we can see, all existing semi-supervised learning frameworks consider the case where the unlabeled data resides in the same space or joint space as the labeled data. Our work considers the case where the unlabeled data resides in a completely different space than the labeled data."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Estimating the Missing Information",
+ "text": "In this section, using kernel density estimators, we show that the missing information estimation problem coincides with the Nadaraya-Watson (NW) kernel regression. We further establish that this particular multivariate estimation scheme yields an asymptotically vanishing error."
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "Cross-Modal NW Kernel Regression",
+ "text": "For clarity, we focus on estimating the missing information of one data point , which can be obtained from a data point in another mode. We denote the corresponding representation of in as . Consider the missing information for in space as , which needs to be estimated with the help of reference dataset . Let us write the conditional expectation of the missing information given the representation as follows\nwhere is the joint density of and , and is the marginal density of . Using reference data , we apply kernel density estimation (Rosenblatt, 1956 ###reference_b47###) for both above densities in the following form\nfor proper kernel functions and , which results in the following proposition.\nThe missing information estimation formulation in Equation 1 ###reference_### can be approximated with kernel density estimators in Equation 2 ###reference_###. When the kernel function in Equation 2 ###reference_### is a density function for a distribution with mean , the approximation leads to\nThe proof can be found in the Appendix. The proposition implies that the conditional expectation of missing information (given representation) leads to a multivariate version of the well-known NW kernel regression estimator (Nadaraya, 1964 ###reference_b38###; Watson, 1964 ###reference_b67###) that employs both modalities of data with the help of kernel density estimators. In Section 4.1 ###reference_###, we will see that the above formulation recovers the popular kernelized cross-attention when are linear mappings."
+ },
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "Estimation Error Guarantee",
+ "text": "We now investigate the estimation error of Equation 3 ###reference_### under several assumptions, especially for its connection to the reference sample size and the choice of the kernel function. In particular, we consider the case that and replace and , respectively. The NW kernel regression formulation in Equation 3 ###reference_### now becomes\nwhere is a kernel that satisfies the following mild technical assumption and the subscript of is dropped for convenience.\n(Wand & Jones, 1994 ###reference_b63###)\nThe bandwidth matrix of the kernel function with (the subscript shows the dependence of to the number of data points) has the following properties\nwhich implies that the bandwidth parameter decays slower than and converges to .\nThe standard shift-invariant kernel function is a bounded, symmetric probability density function with a zero first moment and a finite second moment. That is, the following properties hold\nwhere and are constants decided by the choice of kernel .\nIt is easy to verify that several popular shift-invariant kernels (e.g., Gaussian kernel) satisfy the above assumption with proper normalization. We also put mild assumptions on the density functions and true mapping.\nThe true density function is differentiable and the -norm of its gradient is bounded. The underlying true function , (i.e., ) has a bounded gradient and Hessian in the norm sense, and we have where is an isotropic noise vector from .\nUnder 1 ###reference_umption1### and 2 ###reference_umption2###, the NW kernel regression estimator in Equation 4 ###reference_### with an isotropic shift-invariant kernel of bandwidth yields an estimation error that asymptotically converges in distribution as\nwhere , and the -th entry of is\nwhere and are the Hessian and gradient of -th entry of the true function with respect to , is the gradient of the true density function, and is the trace operator.\nThe proof is provided in the Appendix. The above theorem shows that with proper kernel function, the estimation error as under 1 ###reference_umption1### and 2 ###reference_umption2###. This suggests that under an ideal latent space transformation, more unlabeled data helps drive the estimation error to zero. We also observe that the error term converges to a sequence that scales with the factor , which implies kernel functions with lower might contribute to a smaller in the non-asymptotic regime."
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "The Attention Patch",
+ "text": "In this section, we formally propose The Attention Patch (TAP) by showing that the NW kernel regression formulation in Equation 3 ###reference_### with linear latent space transformation results in the popular \"cross-attention\" module (Vaswani et al., 2017 ###reference_b60###). We then propose a batch training strategy for scalability."
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "Cross-Attention Module",
+ "text": "In Equation 3 ###reference_###, we showed how to estimate the missing information using the reference cross-modal data . However, implementing this mechanism requires learning the latent space transformations . Fortunately, a neural network integration of such formulation will allow the latent space transformation to be learned in parallel with the main learning objective. To be more specific, we can define the latent space transformations with learnable linear weight matrices , and we can see that the RHS of Equation 3 ###reference_### now becomes the kernelized version of the \"cross-attention\" module (Vaswani et al., 2017 ###reference_b60###). Furthermore, as shown in Figure 2 ###reference_###, TAP can be inserted between any two consecutive layers in a deep neural network with minimal modification to the original network. The integration process is equivalent to applying a patch to an existing neural network, hence the name The Attention Patch (TAP).\n###figure_3### Now, we formally propose The Attention Patch in the following corollary.\nFor an output of a layer in a deep neural network and an unlabeled cross-modal reference dataset , the TAP integration means calculating the following\nand concatenating and for downstream tasks. The parameters are learned in parallel with the original neural network. is a shift-invariant kernel of choice, and a Gaussian kernel is recommended.\nNote that Equation 9 ###reference_### allows for easy implementation of TAP by feeding the whole set of reference data as keys and values in a cross-attention module during the training process. However, the latent space transformation can go beyond linear, as we will discuss later in the simulation."
+ },
+ {
+ "section_id": "4.2",
+ "parent_section_id": "4",
+ "section_name": "Batch Training",
+ "text": "The attention formulation of cross-modal learning in Equation 9 ###reference_### requires setting the keys and values to be the set of unlabeled reference data points . Theorem 1 ###reference_orem1### suggests that using more reference data for the model will potentially result in a lower estimation error. However, the computation complexity of the cross-attention module scales linearly with respect to the sequence length of keys and values, which is equivalent to the number of reference points in . So, it is not practical to feed all the reference points at once, mainly due to memory limitations.\nTherefore, we can break the reference dataset into batches and train each epoch by iterating over the set of reference batches together with input batches of primary data points. This is similar in vein to training with stochastic gradient descent.\nEach batch of reference points will make the neural network yield an output in space . Evaluating all batches will in turn yield outputs, which can be used in different ways depending on the application, like the ensemble approach (Dietterich et al., 2002 ###reference_b12###; Sagi & Rokach, 2018 ###reference_b48###) in classification tasks.\nIn a nutshell, TAP integration without batch training will incur an additional memory cost of , and batch training reduces the memory cost to . For reference, the additional memory required for a forward path in TAP integrated model (written in PyTorch) is around GB for one training data point and K reference data of dimension ."
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Numerical Experiments",
+ "text": ""
+ },
+ {
+ "section_id": "5.1",
+ "parent_section_id": "5",
+ "section_name": "Performance Evaluation",
+ "text": "In this subsection, we evaluate TAP by plugging it into neural network classifiers. We show the effectiveness of TAP by comparing the performance of TAP-integrated networks with other variants. The simulations are implemented on three real-world datasets in different areas. Full implementation details for all experiments in this section can be found in the Appendix for reproducibility.\nDatasets: To ensure a comprehensive evaluation of the performance of TAP integration, we select/create three real-world cross-modal datasets in three different areas. All datasets are open-access and can be found online. A detailed dataset and pre-processing description can be found in the Appendix.\nComputer Vision: We start with the MNIST dataset (MNIST) (Deng, 2012 ###reference_b10###). We crop the upper half of all images as the primary modality for digit prediction and use the lower half of all images as the reference modality (without labels and without pairing). This creates two data modes that have guaranteed complementary information while having different distributions. In model inference (testing), the test data points are upper-half images from the primary modality that are not present in the training set.\nHealthcare: We use the Activity dataset (Activity) (Mohino-Herranz et al., 2019 ###reference_b37###), where the Electrodermal Activity (EDA) signals are the primary modality for predicting the subject activity. Thoracic Electrica Bioimpedance (TEB) signals are used as the reference dataset (without labels and without pairing). In model inference (testing), the test data points are EDA signals from the primary modality that are not present in the training set.\nRemote Sensing: We also choose the Crop dataset (Crop) (Khosravi et al., 2018 ###reference_b26###; Khosravi & Alavipanah, 2019 ###reference_b25###), where the optical features are used to predict the crop type, and the radar readings are used as reference dataset (without labels and without pairing). In model inference (testing), the test data points are optical features from the primary modality that are not present in the training set.\nThere is no overlapping instance among the training data (in space ), reference data (in space ), and test data (in space ).\nModels: It is difficult to directly compare the effectiveness of TAP against existing state-of-the-art neural network architectures since this work is the first to propose cross-modal learning from a fully unlabeled data modality. However, we can show the effectiveness of TAP by carefully examining the performance difference across different variants of TAP.\nOur Baseline competitor is a single-modal neural network without TAP integration at all. However, TAP integration will bring three changes to the baseline model, including depth increase, the addition of cross-attention structure, and the reference data . To examine the contribution of these three aspects, we further propose three competitors as follows:\nFFN: We replace TAP with a feedforward network to create the FFN variant. So, if FFN performs worse than TAP, we are able to factor out the impact of depth increase with TAP integration.\nControl Group: We replace the real reference data in TAP with random noise of the same mean and variance. 
So, if Control Group performs worse than TAP, it suggests that the addition of a cross-attention structure is not the main reason for TAP to work, and using meaningful reference data (like ) is important.\nTAP w/o Batch: We disregard batch training strategy by incorporating the whole reference dataset throughout the training process. So, if we observe TAP w/o Batch outperforms TAP, it would suggest the model is learning from the reference data, and more reference data will help.\nWe now highlight some important details here: First, the normalization parameter in the attention module of TAP is set to , such that it follows the bandwidth Assumption 1 ###reference_umption1### while being close to the generally recommended normalization constant . Second, at each Monte-Carlo simulation, the set of training data, reference data, and evaluation data are shuffled while keeping the amount the same. The Control Group also generates a new set of random reference data at each Monte-Carlo simulation.\nResults: The simulation results are shown in Figure 3 ###reference_###. The error bars are calculated over Monte-Carlo simulations to reflect the statistical significance of the results. As we can see, there is a consistent performance hierarchy among all the benchmark models throughout all datasets in different areas. First, we see that TAP integration always leads to a performance improvement compared to the baseline classifier. Second, we see that the FFN variant shows no performance advantage against the baseline model, which rules out the possibility that the depth increase in TAP is the major factor causing performance improvement. Third, we observe the worst generalization performance across all benchmark models for the Control Group. This shows that feeding irrelevant information will exacerbate the generalization performance. Finally, we see that batch training and evaluation for TAP results in a slightly worse generalization performance compared to TAP w/o Batch, which trains directly with the whole reference dataset. Therefore, having more reference data indeed helps.\n###figure_4### ###figure_5### ###figure_6### ###figure_7### ###figure_8### ###figure_9###"
+ },
+ {
+ "section_id": "5.2",
+ "parent_section_id": "5",
+ "section_name": "Ablation Study",
+ "text": "In this subsection, we further conduct an ablation study for TAP to investigate its best configuration in practice. We discuss the batch size comparison and the choice of kernel function for TAP. Next, we look at the impact of nonlinear latent space transformation on TAP. Then, we examine the compatibility of TAP with large backbone feature extractors (for vision and language). Finally, we investigate shared space learning and dummy reference modality.\nReference Batch Size Comparison: To further evaluate the effect of batch training, we look at the performance of different reference batch sizes. The total number of reference data is five times of labeled data . Figure 4 ###reference_### shows the prediction accuracy with respect to \"unlabeled to labeled ratio\", which is defined as the reference batch size divided by .\nThe performance of the baseline model is shown as the horizontal dotted line. The standard errors are calculated over 10 Monte-Carlo simulations. We observe that the generalization performance of TAP improves as the reference batch size increases, even though the total number of reference data remains the same. This observation is consistent with the estimation error characterization in Theorem 1 ###reference_orem1###. In practice, this suggests that one can increase the reference batch size as much as possible until the memory limit is reached.\nChoice of Kernels: Theorem 1 ###reference_orem1### suggests that the estimation error diminishes as the number of reference data points goes to infinity. However, we observe that the error term is also related to , which is a constant determined by the choice of the kernel function. This connection is asymptotically negligible, but it might still be relevant in finite-sample cases. The choice of kernels in kernelized attention literature (Peng et al., 2020 ###reference_b43###; Choromanski et al., 2020 ###reference_b8###; Chen et al., 2021 ###reference_b7###; Luo et al., 2021 ###reference_b34###; Xiong et al., 2021 ###reference_b70###) has been empirically studied, but these works have not suggested the theoretical intuition behind the kernel choices. Here, we compare three kernels, namely the Gaussian kernel, Laplace kernel, and Inverse Multiquadric kernel. Notice that the value is the smallest for the Gaussian kernel, larger for the Laplace kernel, and unbounded for the Inverse Multiquadric kernel.\nThe results are shown in Table 1 ###reference_###. The standard errors are calculated over 20 Monte-Carlo simulations. We observe that for TAP, the Gaussian kernel consistently outperforms the other two kernels. The general performance hierarchy is also consistent with the order of value for three kernels. So, we recommend using the Gaussian kernel for the TAP integration.\nNonlinear Latent Space Transformation:\nWe further investigate whether nonlinear latent space transformation will benefit TAP. We compare TAP with a variant where the latent space transformations are modeled with multi-layer perceptrons (MLP). The results are shown in Table 2 ###reference_###.\nThe standard errors are calculated over 10 Monte-Carlo simulations. We observe that MLP variants in general provide an accuracy that is no worse than the linear version of TAP. The only statistical advantage of MLP can be found in the Crop dataset, where the data are radar and optical images, which are information-rich and often require nonlinear mappings for feature extraction. 
We suggest using nonlinear latent space transformation, including backbone feature extractors for vision or language data, but adhering to linear transformation for tabular data for computational purposes.\nBackbone Compatibility: We note that TAP considers cross-modal learning from a probability theory standpoint, which is domain agnostic. However, given the popularity of computer vision and large language models, it is appealing to see if TAP is compatible with pre-trained feature extractors, such as Convolutional Neural Networks (CNNs) for images and Transformers for language.\n###figure_10### We carry out the test on a fourth dataset, Memotion 7K dataset (Sharma et al., 2020 ###reference_b51###). Specifically, we focus on sentiment classification tasks with images as the primary modality and text as the secondary modality. We investigate a modified TAP integration as shown in Figure 5 ###reference_###. We choose pre-trained EfficientNet-B0 (Tan & Le, 2019 ###reference_b54###) as the image feature extractor and pre-trained distilled-RoBERTa (Sanh et al., 2019 ###reference_b49###) as the text feature extractor. The implementation details can be found in the Appendix. The test accuracy and F1 score are tabulated in Table 3 ###reference_###. We observe that TAP integration improves both test accuracy and F1 score, which is the main objective for the imbalanced Memotion 7K dataset.\nShared Space Learning: The benefit of TAP relies on finding a shared information space , such that similar samples in the primary modality and reference modality are close to each other in the space (i.e., after proper transformations). Although we have observed clear generalization advantage, it is still important to examine whether TAP is able to find such suitable space . To this end, we look at the kernel values in TAP after training for and with the same/different labels. The results are shown in Table 4 ###reference_###.\nThe standard errors are calculated over all evaluation data. We observe that the kernel value is larger when and share the same label, which means TAP is able to find the appropriate shared space without accessing label information for reference modality . This observation helps explain why TAP can provide better generalization performance.\nDummy Reference Modality: To further verify the claim that the \"extra information\" in the reference modality improves learning, we carry out a simulation tricking TAP by creating dummy reference modalities for all previous three real-world datasets. To be specific, we create the reference modality in Section 5.1 ###reference_### by transforming the training data in the primary modality using randomly initialized feedforward neural networks (with the same output dimension as the reference data). For example, for Crop dataset, we randomly initialize a neural network with input dimension (for optical features) and output dimension (for radar features) to transform the primary modality training data into the reference dataset. Using this we have created a \"reference-like\" dataset without any relevant cross-modal information. With Monte-Carlo simulations, we get the performance results shown in Table 5 ###reference_###.\nWe observe that TAP with \"dummy\" reference modality has no statistically significant performance gain over the baseline model if not being worse. This is intuitive as the embedded cross-modal information does not carry any extra information."
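For reference, the three shift-invariant kernels compared in the Choice of Kernels study can be swapped into a TAP implementation by replacing the weighting function; a minimal sketch (bandwidth handling simplified, all names our own):

```python
import torch

def gaussian_kernel(sq_dist, gamma=1.0):
    # K(u) proportional to exp(-||u||^2 / (2 gamma^2)); lightest tails.
    return torch.exp(-sq_dist / (2 * gamma ** 2))

def laplace_kernel(sq_dist, gamma=1.0):
    # K(u) proportional to exp(-||u|| / gamma); heavier tails than Gaussian.
    return torch.exp(-sq_dist.clamp_min(0).sqrt() / gamma)

def inverse_multiquadric_kernel(sq_dist, c=1.0):
    # K(u) proportional to 1 / sqrt(||u||^2 + c^2); heaviest tails, with an
    # unbounded kernel-dependent constant in the sense discussed above.
    return (sq_dist + c ** 2).rsqrt()

def nw_weights(kernel, sq_dist):
    w = kernel(sq_dist)
    return w / w.sum(dim=-1, keepdim=True)  # NW normalization over references
```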
+ },
+ {
+ "section_id": "6",
+ "parent_section_id": null,
+ "section_name": "Conclusion and Discussion of Future Directions",
+ "text": "This paper investigates a novel perspective on cross-modal learning, where the objective is to transfer knowledge from an unlabeled secondary modality to improve generalization in the primary modality. We showed that the missing information in the primary modality can be estimated using an NW kernel regression formulation. This perspective leads to the design of a cross-attention module in neural network integration, which we refer to as The Attention Patch (TAP). We demonstrated the effectiveness of TAP integration on four real-world cross-modal datasets in different domains, highlighting its performance advantage and domain robustness.\nWe hope that this work will inspire new perspectives on knowledge transfer from the data level for neural networks, showing how seemingly unusable data can be leveraged for improved generalization. However, our work can still be improved in the future in certain aspects. For example, having the secondary modality as keys and values requires large computation memory in the forward pass for large datasets, and there is also no guarantee that a shared space and an exclusive space exist between the two modalities. Therefore, there are several promising directions for future research. For example, developing approaches that can efficiently store and reference unlabeled modalities could significantly benefit the technique, given the wide availability of massive datasets. Additionally, a principled framework for selecting relevant reference datasets, as seen in cross-modal and semi-supervised learning, could help guide the data selection process."
+ }
+ ],
+ "appendix": [
+ {
+ "section_id": "Appendix 1",
+ "parent_section_id": null,
+ "section_name": "Appendix A Appendix",
+ "text": "The organization of this Appendix is as follows:\nSubsection A.1 ###reference_### presents the experimental details, including dataset descriptions, preprocessing descriptions, and training details for reproducibility.\nIn Subsection A.2 ###reference_###, we provide the proof for all of the theoretical claims in the paper."
+ }
+ ],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T1.11.1.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"S5.T1.12.2\" style=\"font-size:90%;\">The prediction accuracy of TAP / TAP w/o Batch on three datasets. Gaussian kernel moderately outperforms the other two, and the Laplace kernel is slightly better than the Inverse Multiquadric kernel within the margin of error.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T1.9.9\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T1.9.9.10.1\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T1.9.9.10.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.9.9.10.1.2\">Gaussian</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.9.9.10.1.3\">Laplace</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.9.9.10.1.4\">Inverse Multiquadric</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T1.3.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T1.3.3.3.4\">MNIST</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.2.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.3.3.3.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.6.6.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S5.T1.6.6.6.4\">Activity</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.4.4.4.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.5.5.5.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.6.6.6.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.9.9.9\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_r\" id=\"S5.T1.9.9.9.4\">Crop</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T1.7.7.7.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T1.8.8.8.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T1.9.9.9.3\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "Table 1: The prediction accuracy of TAP / TAP w/o Batch on three datasets. Gaussian kernel moderately outperforms the other two, and the Laplace kernel is slightly better than the Inverse Multiquadric kernel within the margin of error."
+ },
+ "2": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T2.8.1.1\" style=\"font-size:90%;\">Table 2</span>: </span><span class=\"ltx_text\" id=\"S5.T2.9.2\" style=\"font-size:90%;\">The prediction accuracy of TAP / TAP w/o Batch on three datasets. MLP variant shows little to no improvements compared to the linear version of TAP.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T2.6.6\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T2.6.6.7.1\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T2.6.6.7.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.6.6.7.1.2\">Linear</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.6.6.7.1.3\">MLP</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T2.2.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T2.2.2.2.3\">MNIST</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.2.2.2.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.4.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S5.T2.4.4.4.3\">Activity</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.3.3.3.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.4.4.4.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.6.6.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_r\" id=\"S5.T2.6.6.6.3\">Crop</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T2.5.5.5.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T2.6.6.6.2\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "Table 2: The prediction accuracy of TAP / TAP w/o Batch on three datasets. MLP variant shows little to no improvements compared to the linear version of TAP."
+ },
+ "3": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T3\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T3.6.1.1\" style=\"font-size:90%;\">Table 3</span>: </span><span class=\"ltx_text\" id=\"S5.T3.7.2\" style=\"font-size:90%;\">The prediction accuracy (%) and F1 score comparison between baseline model and TAP integrated model. TAP integration shows a clear advantage in both accuracy and F1 score.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T3.4.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T3.4.4.5.1\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T3.4.4.5.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T3.4.4.5.1.2\">Accuracy(%)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T3.4.4.5.1.3\">F1 Score</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T3.2.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T3.2.2.2.3\">Baseline</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.2.2.2.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.4.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_r\" id=\"S5.T3.4.4.4.3\">TAP</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T3.3.3.3.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T3.4.4.4.2\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "Table 3: The prediction accuracy (%) and F1 score comparison between baseline model and TAP integrated model. TAP integration shows a clear advantage in both accuracy and F1 score."
+ },
+ "4": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T4\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T4.8.1.1\" style=\"font-size:90%;\">Table 4</span>: </span><span class=\"ltx_text\" id=\"S5.T4.9.2\" style=\"font-size:90%;\">The kernel evaluations between data (one from the primary modality and one from the reference modality) that shares the same label against data with different labels.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T4.6.6\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T4.6.6.7.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T4.6.6.7.1.1\">Dataset</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T4.6.6.7.1.2\">Same Label</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T4.6.6.7.1.3\">Different Label</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T4.2.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T4.2.2.2.3\">MNIST</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.2.2.2.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.4.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S5.T4.4.4.4.3\">Activity</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.3.3.3.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.4.4.4.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.6.6.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_r\" id=\"S5.T4.6.6.6.3\">Crop</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T4.5.5.5.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T4.6.6.6.2\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "Table 4: The kernel evaluations between data (one from the primary modality and one from the reference modality) that shares the same label against data with different labels."
+ },
+ "5": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T5\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T5.8.1.1\" style=\"font-size:90%;\">Table 5</span>: </span><span class=\"ltx_text\" id=\"S5.T5.9.2\" style=\"font-size:90%;\">The performance comparison of the baseline model compared to TAP with dummy reference modality.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T5.6.6\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T5.6.6.7.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T5.6.6.7.1.1\">Dataset</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T5.6.6.7.1.2\">Baseline</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T5.6.6.7.1.3\">TAP with dummy reference</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T5.2.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T5.2.2.2.3\">MNIST</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.2.2.2.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.4.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S5.T5.4.4.4.3\">Activity</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T5.3.3.3.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T5.4.4.4.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.6.6.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_r\" id=\"S5.T5.6.6.6.3\">Crop</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T5.5.5.5.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T5.6.6.6.2\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "Table 5: The performance comparison of the baseline model compared to TAP with dummy reference modality."
+ }
+ },
+ "image_paths": {
+ "1(a)": {
+ "figure_path": "2302.02224v3_figure_1(a).png",
+ "caption": "(a) \nFor primary modality \ud835\udcb3\u2286\u211ddx\ud835\udcb3superscript\u211dsubscript\ud835\udc51\ud835\udc65\\mathcal{X}\\subseteq\\mathbb{R}^{d_{x}}caligraphic_X \u2286 blackboard_R start_POSTSUPERSCRIPT italic_d start_POSTSUBSCRIPT italic_x end_POSTSUBSCRIPT end_POSTSUPERSCRIPT and the secondary modality \ud835\udcb5\u2286\u211ddz\ud835\udcb5superscript\u211dsubscript\ud835\udc51\ud835\udc67\\mathcal{Z}\\subseteq\\mathbb{R}^{d_{z}}caligraphic_Z \u2286 blackboard_R start_POSTSUPERSCRIPT italic_d start_POSTSUBSCRIPT italic_z end_POSTSUBSCRIPT end_POSTSUPERSCRIPT, there exists a space \u21331subscript\u21331\\mathcal{M}_{1}caligraphic_M start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT that contains the mutual information between \ud835\udcb3\ud835\udcb3\\mathcal{X}caligraphic_X and \ud835\udcb5\ud835\udcb5\\mathcal{Z}caligraphic_Z, where \u03d51:\ud835\udcb5\u2192\u21331:subscriptitalic-\u03d51\u2192\ud835\udcb5subscript\u21331\\phi_{1}:\\mathcal{Z}\\rightarrow\\mathcal{M}_{1}italic_\u03d5 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT : caligraphic_Z \u2192 caligraphic_M start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and \u03d53:\ud835\udcb3\u2192\u21331:subscriptitalic-\u03d53\u2192\ud835\udcb3subscript\u21331\\phi_{3}:\\mathcal{X}\\rightarrow\\mathcal{M}_{1}italic_\u03d5 start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT : caligraphic_X \u2192 caligraphic_M start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT transform data \ud835\udc33\u2208\ud835\udcb5\ud835\udc33\ud835\udcb5\\mathbf{z}\\in\\mathcal{Z}bold_z \u2208 caligraphic_Z and \ud835\udc31\u2208\ud835\udcb3\ud835\udc31\ud835\udcb3\\mathbf{x}\\in\\mathcal{X}bold_x \u2208 caligraphic_X to space \u21331subscript\u21331\\mathcal{M}_{1}caligraphic_M start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT, respectively. There also exists a space \u21332subscript\u21332\\mathcal{M}_{2}caligraphic_M start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT that contains the exclusive information present in \ud835\udcb5\ud835\udcb5\\mathcal{Z}caligraphic_Z but not in \ud835\udcb3\ud835\udcb3\\mathcal{X}caligraphic_X. A transformation \u03d52:\ud835\udcb5\u2192\u21332:subscriptitalic-\u03d52\u2192\ud835\udcb5subscript\u21332\\phi_{2}:\\mathcal{Z}\\rightarrow\\mathcal{M}_{2}italic_\u03d5 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT : caligraphic_Z \u2192 caligraphic_M start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT takes data \ud835\udc33\u2208\ud835\udcb5\ud835\udc33\ud835\udcb5\\mathbf{z}\\in\\mathcal{Z}bold_z \u2208 caligraphic_Z to \ud835\udc262\u2208\u21332subscript\ud835\udc262subscript\u21332\\mathbf{m}_{2}\\in\\mathcal{M}_{2}bold_m start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT \u2208 caligraphic_M start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT. The label information for data in \ud835\udcb3\ud835\udcb3\\mathcal{X}caligraphic_X is available, but data in \ud835\udcb5\ud835\udcb5\\mathcal{Z}caligraphic_Z are unlabeled. There is also no alignment between data in \ud835\udcb3\ud835\udcb3\\mathcal{X}caligraphic_X and \ud835\udcb5\ud835\udcb5\\mathcal{Z}caligraphic_Z.\nFigure 1:",
+ "url": "http://arxiv.org/html/2302.02224v3/x1.png"
+ },
+ "1(b)": {
+ "figure_path": "2302.02224v3_figure_1(b).png",
+ "caption": "(b) The Attention Patch (TAP) parameterized by query transformation \u03d53:\ud835\udc16q:subscriptitalic-\u03d53subscript\ud835\udc16\ud835\udc5e\\phi_{3}:\\mathbf{W}_{q}italic_\u03d5 start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT : bold_W start_POSTSUBSCRIPT italic_q end_POSTSUBSCRIPT, key transformation \u03d51:\ud835\udc16k:subscriptitalic-\u03d51subscript\ud835\udc16\ud835\udc58\\phi_{1}:\\mathbf{W}_{k}italic_\u03d5 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT : bold_W start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT, and value transformation \u03d52:\ud835\udc16v:subscriptitalic-\u03d52subscript\ud835\udc16\ud835\udc63\\phi_{2}:\\mathbf{W}_{v}italic_\u03d5 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT : bold_W start_POSTSUBSCRIPT italic_v end_POSTSUBSCRIPT. TAP takes one instance \ud835\udc31\u2208\ud835\udcb3\ud835\udc31\ud835\udcb3\\mathbf{x}\\in\\mathcal{X}bold_x \u2208 caligraphic_X (or a representation of \ud835\udc31\ud835\udc31\\mathbf{x}bold_x), uses a batch of reference data \ud835\udc19:{\ud835\udc33i\u2208\ud835\udcb5}i=1nz:\ud835\udc19superscriptsubscriptsubscript\ud835\udc33\ud835\udc56\ud835\udcb5\ud835\udc561subscript\ud835\udc5b\ud835\udc67\\mathbf{Z}:\\{\\mathbf{z}_{i}\\in\\mathcal{Z}\\}_{i=1}^{n_{z}}bold_Z : { bold_z start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT \u2208 caligraphic_Z } start_POSTSUBSCRIPT italic_i = 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_n start_POSTSUBSCRIPT italic_z end_POSTSUBSCRIPT end_POSTSUPERSCRIPT to generate the corresponding representation \ud835\udc262\u2208\u21332subscript\ud835\udc262subscript\u21332\\mathbf{m}_{2}\\in\\mathcal{M}_{2}bold_m start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT \u2208 caligraphic_M start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT, which contains information that is not present in space \ud835\udcb3\ud835\udcb3\\mathcal{X}caligraphic_X. The representation \ud835\udc262subscript\ud835\udc262\\mathbf{m}_{2}bold_m start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT will be concatenated with input \ud835\udc31\ud835\udc31\\mathbf{x}bold_x for downstream tasks.\nFigure 1:",
+ "url": "http://arxiv.org/html/2302.02224v3/x2.png"
+ },
+ "2": {
+ "figure_path": "2302.02224v3_figure_2.png",
+ "caption": "Figure 2: The Attention Patch (TAP) neural network integration visualization: TAP takes the output of a layer to calculate the missing representation using reference data \ud835\udc19\ud835\udc19\\mathbf{Z}bold_Z, and the output of TAP will be concatenated with TAP input and fed to the next layer. The only modification to the original deep neural network (DNN) is increasing the input dimension of the integration layer (blue layer).",
+ "url": "http://arxiv.org/html/2302.02224v3/x3.png"
+ },
+ "3(a)": {
+ "figure_path": "2302.02224v3_figure_3(a).png",
+ "caption": "Figure 3: Simulation results on three real-world datasets. TAP integration shows a consistent performance advantage compared to other variants.",
+ "url": "http://arxiv.org/html/2302.02224v3/x4.png"
+ },
+ "3(b)": {
+ "figure_path": "2302.02224v3_figure_3(b).png",
+ "caption": "Figure 3: Simulation results on three real-world datasets. TAP integration shows a consistent performance advantage compared to other variants.",
+ "url": "http://arxiv.org/html/2302.02224v3/x5.png"
+ },
+ "3(c)": {
+ "figure_path": "2302.02224v3_figure_3(c).png",
+ "caption": "Figure 3: Simulation results on three real-world datasets. TAP integration shows a consistent performance advantage compared to other variants.",
+ "url": "http://arxiv.org/html/2302.02224v3/x6.png"
+ },
+ "4(a)": {
+ "figure_path": "2302.02224v3_figure_4(a).png",
+ "caption": "Figure 4: Reference batch size comparison on three real-world datasets. The generalization accuracy increases as the reference batch size becomes larger.",
+ "url": "http://arxiv.org/html/2302.02224v3/x7.png"
+ },
+ "4(b)": {
+ "figure_path": "2302.02224v3_figure_4(b).png",
+ "caption": "Figure 4: Reference batch size comparison on three real-world datasets. The generalization accuracy increases as the reference batch size becomes larger.",
+ "url": "http://arxiv.org/html/2302.02224v3/x8.png"
+ },
+ "4(c)": {
+ "figure_path": "2302.02224v3_figure_4(c).png",
+ "caption": "Figure 4: Reference batch size comparison on three real-world datasets. The generalization accuracy increases as the reference batch size becomes larger.",
+ "url": "http://arxiv.org/html/2302.02224v3/x9.png"
+ },
+ "5": {
+ "figure_path": "2302.02224v3_figure_5.png",
+ "caption": "Figure 5: TAP integration with pre-trained feature extractors: The primary modality prediction model takes a meme image as input to predict the sentiment of the meme. A text set of batch size 100100100100 is used as the reference secondary modality in TAP. The text data goes through pre-trained distilled-RoBERTa before being used in TAP.",
+ "url": "http://arxiv.org/html/2302.02224v3/x10.png"
+ }
+ },
+ "validation": true,
+ "references": [
+ {
+ "1": {
+ "title": "Deep canonical correlation analysis.",
183
+ "author": "Galen Andrew, Raman Arora, Jeff Bilmes, and Karen Livescu.",
184
+ "venue": "In International Conference on Machine Learning, pp. 1247\u20131255. PMLR, 2013.",
185
+ "url": null
186
+ }
187
+ },
188
+ {
189
+ "2": {
190
+ "title": "Multimodal machine learning: A survey and taxonomy.",
191
+ "author": "Tadas Baltru\u0161aitis, Chaitanya Ahuja, and Louis-Philippe Morency.",
192
+ "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(2):423\u2013443, 2018.",
193
+ "url": null
194
+ }
195
+ },
196
+ {
197
+ "3": {
198
+ "title": "Mixmatch: A holistic approach to semi-supervised learning.",
199
+ "author": "David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin A Raffel.",
200
+ "venue": "Advances in Neural Information Processing Systems, 32, 2019.",
201
+ "url": null
202
+ }
203
+ },
204
+ {
205
+ "4": {
206
+ "title": "Combining labeled and unlabeled data with co-training.",
207
+ "author": "Avrim Blum and Tom Mitchell.",
208
+ "venue": "In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, pp. 92\u2013100, 1998.",
209
+ "url": null
210
+ }
211
+ },
212
+ {
213
+ "5": {
214
+ "title": "Partial least squares.",
215
+ "author": "Jaesung Cha.",
216
+ "venue": "Advanced Methods of Marketing Research, 407:52\u201378, 1994.",
217
+ "url": null
218
+ }
219
+ },
220
+ {
221
+ "6": {
222
+ "title": "Microsoft coco captions: Data collection and evaluation server.",
223
+ "author": "Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Doll\u00e1r, and C Lawrence Zitnick.",
224
+ "venue": "arXiv preprint arXiv:1504.00325, 2015.",
225
+ "url": null
226
+ }
227
+ },
228
+ {
229
+ "7": {
230
+ "title": "Skyformer: Remodel self-attention with gaussian kernel and nystr\u00f6m method.",
231
+ "author": "Yifan Chen, Qi Zeng, Heng Ji, and Yun Yang.",
232
+ "venue": "Advances in Neural Information Processing Systems, 34:2122\u20132135, 2021.",
233
+ "url": null
234
+ }
235
+ },
236
+ {
+ "8": {
+ "title": "Rethinking attention with performers.",
+ "author": "Krzysztof Marcin Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Quincy Davis, Afroz Mohiuddin, Lukasz Kaiser, et al.",
+ "venue": "In International Conference on Learning Representations, 2020.",
+ "url": null
+ }
+ },
+ {
+ "9": {
+ "title": "Formation of magnesium dendrites during electrodeposition.",
+ "author": "Rachel Davidson, Ankit Verma, David Santos, Feng Hao, Coleman Fincher, Sisi Xiang, Jonathan Van Buskirk, Kelvin Xie, Matt Pharr, Partha P Mukherjee, et al.",
+ "venue": "ACS Energy Letters, 4(2):375\u2013376, 2018.",
+ "url": null
+ }
+ },
+ {
+ "10": {
+ "title": "The mnist database of handwritten digit images for machine learning research.",
+ "author": "Li Deng.",
+ "venue": "IEEE Signal Processing Magazine, 29(6):141\u2013142, 2012.",
+ "url": null
+ }
+ },
+ {
+ "11": {
+ "title": "The convergence of kernel density estimates.",
+ "author": "LP Devroye and TJ Wagner.",
+ "venue": "The Annals of Statistics, 7(5):1136\u20131139, 1979.",
+ "url": null
+ }
+ },
+ {
+ "12": {
+ "title": "Ensemble learning.",
+ "author": "Thomas G Dietterich et al.",
+ "venue": "The Handbook of Brain Theory and Neural Networks, 2(1):110\u2013125, 2002.",
+ "url": null
+ }
+ },
+ {
+ "13": {
+ "title": "Semisupervised self-learning for hyperspectral image classification.",
+ "author": "Inmaculada D\u00f3pido, Jun Li, Prashanth Reddy Marpu, Antonio Plaza, Jos\u00e9 M Bioucas Dias, and Jon Atli Benediktsson.",
+ "venue": "IEEE Transactions on Geoscience and Remote Sensing, 51(7):4032\u20134044, 2013.",
+ "url": null
+ }
+ },
+ {
+ "14": {
+ "title": "When does cotraining work in real data?",
+ "author": "Jun Du, Charles X Ling, and Zhi-Hua Zhou.",
+ "venue": "IEEE Transactions on Knowledge and Data Engineering, 23(5):788\u2013799, 2010.",
+ "url": null
+ }
+ },
+ {
+ "15": {
+ "title": "Multimodal saliency and fusion for movie summarization based on aural, visual, and textual attention.",
+ "author": "Georgios Evangelopoulos, Athanasia Zlatintsi, Alexandros Potamianos, Petros Maragos, Konstantinos Rapantzikos, Georgios Skoumas, and Yannis Avrithis.",
+ "venue": "IEEE Transactions on Multimedia, 15(7):1553\u20131568, 2013.",
+ "url": null
+ }
+ },
+ {
+ "16": {
+ "title": "Convolutional two-stream network fusion for video action recognition.",
+ "author": "Christoph Feichtenhofer, Axel Pinz, and Andrew Zisserman.",
+ "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1933\u20131941, 2016.",
+ "url": null
+ }
+ },
+ {
+ "17": {
+ "title": "Devise: A deep visual-semantic embedding model.",
+ "author": "Andrea Frome, Greg S Corrado, Jon Shlens, Samy Bengio, Jeff Dean, Marc\u2019Aurelio Ranzato, and Tomas Mikolov.",
+ "venue": "Advances in Neural Information Processing Systems, 26, 2013.",
+ "url": null
+ }
+ },
+ {
+ "18": {
+ "title": "Partial least-squares regression: a tutorial.",
+ "author": "Paul Geladi and Bruce R Kowalski.",
+ "venue": "Analytica Chimica Acta, 185:1\u201317, 1986.",
+ "url": null
+ }
+ },
+ {
+ "19": {
+ "title": "Canonical correlation analysis: An overview with application to learning methods.",
+ "author": "David R Hardoon, Sandor Szedmak, and John Shawe-Taylor.",
+ "venue": "Neural Computation, 16(12):2639\u20132664, 2004.",
+ "url": null
+ }
+ },
+ {
+ "20": {
+ "title": "Multimodal deep autoencoder for human pose recovery.",
+ "author": "Chaoqun Hong, Jun Yu, Jian Wan, Dacheng Tao, and Meng Wang.",
+ "venue": "IEEE Transactions on Image Processing, 24(12):5659\u20135670, 2015.",
+ "url": null
+ }
+ },
+ {
+ "21": {
+ "title": "Dynamic distillation network for cross-domain few-shot recognition with unlabeled data.",
+ "author": "Ashraful Islam, Chun-Fu Richard Chen, Rameswar Panda, Leonid Karlinsky, Rogerio Feris, and Richard J Radke.",
+ "venue": "Advances in Neural Information Processing Systems, 34:3584\u20133595, 2021.",
+ "url": null
+ }
+ },
+ {
349
+ "22": {
350
+ "title": "Semi-supervised multi-view deep discriminant representation learning.",
351
+ "author": "Xiaodong Jia, Xiao-Yuan Jing, Xiaoke Zhu, Songcan Chen, Bo Du, Ziyun Cai, Zhenyu He, and Dong Yue.",
352
+ "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(7):2496\u20132509, 2020.",
353
+ "url": null
354
+ }
355
+ },
356
+ {
357
+ "23": {
358
+ "title": "Intra-view and inter-view supervised correlation analysis for multi-view feature learning.",
359
+ "author": "Xiao-Yuan Jing, Rui-Min Hu, Yang-Ping Zhu, Shan-Shan Wu, Chao Liang, and Jing-Yu Yang.",
360
+ "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, volume 28, 2014.",
361
+ "url": null
362
+ }
363
+ },
364
+ {
365
+ "24": {
366
+ "title": "Deep visual-semantic alignments for generating image descriptions.",
367
+ "author": "Andrej Karpathy and Li Fei-Fei.",
368
+ "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3128\u20133137, 2015.",
369
+ "url": null
370
+ }
371
+ },
372
+ {
373
+ "25": {
374
+ "title": "A random forest-based framework for crop mapping using temporal, spectral, textural and polarimetric observations.",
375
+ "author": "Iman Khosravi and Seyed Kazem Alavipanah.",
376
+ "venue": "International Journal of Remote Sensing, 40(18):7221\u20137251, 2019.",
377
+ "url": null
378
+ }
379
+ },
380
+ {
381
+ "26": {
382
+ "title": "MSMD: maximum separability and minimum dependency feature selection for cropland classification from optical and radar data.",
383
+ "author": "Iman Khosravi, Abdolreza Safari, and Saeid Homayouni.",
384
+ "venue": "International Journal of Remote Sensing, 39(8):2159\u20132176, 2018.",
385
+ "url": null
386
+ }
387
+ },
388
+ {
389
+ "27": {
390
+ "title": "Learning image embeddings using convolutional neural networks for improved multi-modal semantics.",
391
+ "author": "Douwe Kiela and L\u00e9on Bottou.",
392
+ "venue": "In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 36\u201345, 2014.",
393
+ "url": null
394
+ }
395
+ },
396
+ {
397
+ "28": {
398
+ "title": "Email classification with co-training.",
399
+ "author": "Svetlana Kiritchenko and Stan Matwin.",
400
+ "venue": "In Proceedings of the 2001 Conference of the Centre for Advanced Studies on Collaborative Research, pp. 8, 2001.",
401
+ "url": null
402
+ }
403
+ },
404
+ {
405
+ "29": {
406
+ "title": "Babytalk: Understanding and generating simple image descriptions.",
407
+ "author": "Girish Kulkarni, Visruth Premraj, Vicente Ordonez, Sagnik Dhar, Siming Li, Yejin Choi, Alexander C Berg, and Tamara L Berg.",
408
+ "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(12):2891\u20132903, 2013.",
409
+ "url": null
410
+ }
411
+ },
412
+ {
413
+ "30": {
414
+ "title": "Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks.",
415
+ "author": "Dong-Hyun Lee et al.",
416
+ "venue": "In Workshop on Challenges in Representation Learning, ICML, volume 3, pp. 896, 2013.",
417
+ "url": null
418
+ }
419
+ },
420
+ {
421
+ "31": {
422
+ "title": "Cross-modal learning with adversarial samples.",
423
+ "author": "Chao Li, Shangqian Gao, Cheng Deng, De Xie, and Wei Liu.",
424
+ "venue": "Advances in Neural Information Processing Systems, 32, 2019.",
425
+ "url": null
426
+ }
427
+ },
428
+ {
429
+ "32": {
430
+ "title": "A survey of multi-view representation learning.",
431
+ "author": "Yingming Li, Ming Yang, and Zhongfei Zhang.",
432
+ "venue": "IEEE Transactions on Knowledge and Data Engineering, 31(10):1863\u20131883, 2018.",
433
+ "url": null
434
+ }
435
+ },
436
+ {
437
+ "33": {
438
+ "title": "Inter-modality face recognition.",
439
+ "author": "Dahua Lin and Xiaoou Tang.",
440
+ "venue": "In European Conference on Computer Vision, pp. 13\u201326. Springer, 2006.",
441
+ "url": null
442
+ }
443
+ },
444
+ {
445
+ "34": {
446
+ "title": "Stable, fast and accurate: Kernelized attention with relative positional encoding.",
447
+ "author": "Shengjie Luo, Shanda Li, Tianle Cai, Di He, Dinglan Peng, Shuxin Zheng, Guolin Ke, Liwei Wang, and Tie-Yan Liu.",
448
+ "venue": "Advances in Neural Information Processing Systems, 34:22795\u201322807, 2021.",
449
+ "url": null
450
+ }
451
+ },
452
+ {
453
+ "35": {
454
+ "title": "Regularizing long short term memory with 3d human-skeleton sequences for action recognition.",
455
+ "author": "Behrooz Mahasseni and Sinisa Todorovic.",
456
+ "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3054\u20133062, 2016.",
457
+ "url": null
458
+ }
459
+ },
460
+ {
461
+ "36": {
462
+ "title": "Midge: Generating image descriptions from computer vision detections.",
463
+ "author": "Margaret Mitchell, Jesse Dodge, Amit Goyal, Kota Yamaguchi, Karl Stratos, Xufeng Han, Alyssa Mensch, Alexander Berg, Tamara Berg, and Hal Daum\u00e9 III.",
464
+ "venue": "In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pp. 747\u2013756, 2012.",
465
+ "url": null
466
+ }
467
+ },
468
+ {
469
+ "37": {
470
+ "title": "Activity recognition using wearable physiological measurements: Selection of features from a comprehensive literature study.",
471
+ "author": "Inma Mohino-Herranz, Roberto Gil-Pita, Manuel Rosa-Zurera, and Fernando Seoane.",
472
+ "venue": "Sensors, 19(24), 2019.",
473
+ "url": null
474
+ }
475
+ },
476
+ {
477
+ "38": {
478
+ "title": "On estimating regression.",
479
+ "author": "Elizbar A Nadaraya.",
480
+ "venue": "Theory of Probability & Its Applications, 9(1):141\u2013142, 1964.",
481
+ "url": null
482
+ }
483
+ },
484
+ {
485
+ "39": {
486
+ "title": "Multimodal deep learning.",
487
+ "author": "Jiquan Ngiam, Aditya Khosla, Mingyu Kim, Juhan Nam, Honglak Lee, and Andrew Y Ng.",
488
+ "venue": "In International Conference on Machine Learning, 2011.",
489
+ "url": null
490
+ }
491
+ },
492
+ {
493
+ "40": {
494
+ "title": "Multi-view clustering and semi-supervised classification with adaptive neighbours.",
495
+ "author": "Feiping Nie, Guohao Cai, and Xuelong Li.",
496
+ "venue": "In Thirty-first AAAI Conference on Artificial Intelligence, 2017a.",
497
+ "url": null
498
+ }
499
+ },
500
+ {
501
+ "41": {
502
+ "title": "Convex multiview semi-supervised classification.",
503
+ "author": "Feiping Nie, Jing Li, and Xuelong Li.",
504
+ "venue": "IEEE Transactions on Image Processing, 26(12):5718\u20135729, 2017b.",
505
+ "url": null
506
+ }
507
+ },
508
+ {
509
+ "42": {
510
+ "title": "Multiview semi-supervised learning model for image classification.",
511
+ "author": "Feiping Nie, Lai Tian, Rong Wang, and Xuelong Li.",
512
+ "venue": "IEEE Transactions on Knowledge and Data Engineering, 32(12):2389\u20132400, 2019.",
513
+ "url": null
514
+ }
515
+ },
516
+ {
517
+ "43": {
518
+ "title": "Random feature attention.",
519
+ "author": "Hao Peng, Nikolaos Pappas, Dani Yogatama, Roy Schwartz, Noah Smith, and Lingpeng Kong.",
520
+ "venue": "In International Conference on Learning Representations, 2020.",
521
+ "url": null
522
+ }
523
+ },
524
+ {
525
+ "44": {
526
+ "title": "CCL: Cross-modal correlation learning with multigrained fusion by hierarchical network.",
527
+ "author": "Yuxin Peng, Jinwei Qi, Xin Huang, and Yuxin Yuan.",
528
+ "venue": "IEEE Transactions on Multimedia, 20(2):405\u2013420, 2017.",
529
+ "url": null
530
+ }
531
+ },
532
+ {
533
+ "45": {
534
+ "title": "Self-training for few-shot transfer across extreme task differences.",
535
+ "author": "Cheng Perng Phoo and Bharath Hariharan.",
536
+ "venue": "In International Conference on Learning Representations, 2020.",
537
+ "url": null
538
+ }
539
+ },
540
+ {
541
+ "46": {
542
+ "title": "Semi-supervised self-training of object detection models.",
543
+ "author": "Chuck Rosenberg, Martial Hebert, and Henry Schneiderman.",
544
+ "venue": "In Applications of Computer Vision and the IEEE Workshop on Motion and Video Computing, volume 1, pp. 29\u201336. IEEE Computer Society, 2005.",
545
+ "url": null
546
+ }
547
+ },
548
+ {
549
+ "47": {
550
+ "title": "Remarks on some nonparametric estimates of a density function.",
551
+ "author": "Murray Rosenblatt.",
552
+ "venue": "The Annals of Mathematical Statistics, pp. 832\u2013837, 1956.",
553
+ "url": null
554
+ }
555
+ },
556
+ {
557
+ "48": {
558
+ "title": "Ensemble learning: A survey.",
559
+ "author": "Omer Sagi and Lior Rokach.",
560
+ "venue": "Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 8(4):e1249, 2018.",
561
+ "url": null
562
+ }
563
+ },
564
+ {
565
+ "49": {
566
+ "title": "DistilBERT, a distilled version of bert: smaller, faster, cheaper and lighter.",
567
+ "author": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf.",
568
+ "venue": "arXiv preprint arXiv:1910.01108, 2019.",
569
+ "url": null
570
+ }
571
+ },
572
+ {
573
+ "50": {
574
+ "title": "Generalized multiview analysis: A discriminative latent space.",
575
+ "author": "Abhishek Sharma, Abhishek Kumar, Hal Daume, and David W Jacobs.",
576
+ "venue": "In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 2160\u20132167. IEEE, 2012.",
577
+ "url": null
578
+ }
579
+ },
580
+ {
581
+ "51": {
582
+ "title": "Task Report: Memotion Analysis 1.0 @SemEval 2020: The Visuo-Lingual Metaphor!",
583
+ "author": "Chhavi Sharma, William Paka, Scott, Deepesh Bhageria, Amitava Das, Soujanya Poria, Tanmoy Chakraborty, and Bj\u00f6rn Gamb\u00e4ck.",
584
+ "venue": "In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain, Sep 2020. Association for Computational Linguistics.",
585
+ "url": null
586
+ }
587
+ },
588
+ {
589
+ "52": {
590
+ "title": "Improved multimodal deep learning with variation of information.",
591
+ "author": "Kihyuk Sohn, Wenling Shang, and Honglak Lee.",
592
+ "venue": "Advances in Neural Information Processing Systems, 27, 2014.",
593
+ "url": null
594
+ }
595
+ },
596
+ {
597
+ "53": {
598
+ "title": "Fixmatch: Simplifying semi-supervised learning with consistency and confidence.",
599
+ "author": "Kihyuk Sohn, David Berthelot, Nicholas Carlini, Zizhao Zhang, Han Zhang, Colin A Raffel, Ekin Dogus Cubuk, Alexey Kurakin, and Chun-Liang Li.",
600
+ "venue": "Advances in Neural Information Processing Systems, 33:596\u2013608, 2020.",
601
+ "url": null
602
+ }
603
+ },
604
+ {
605
+ "54": {
606
+ "title": "Efficientnet: Rethinking model scaling for convolutional neural networks.",
607
+ "author": "Mingxing Tan and Quoc Le.",
608
+ "venue": "In International Conference on Machine Learning, pp. 6105\u20136114. PMLR, 2019.",
609
+ "url": null
610
+ }
611
+ },
612
+ {
613
+ "55": {
614
+ "title": "Semi-supervised self-training for decision tree classifiers.",
615
+ "author": "Jafar Tanha, Maarten Van Someren, and Hamideh Afsarmanesh.",
616
+ "venue": "International Journal of Machine Learning and Cybernetics, 8(1):355\u2013370, 2017.",
617
+ "url": null
618
+ }
619
+ },
620
+ {
621
+ "56": {
622
+ "title": "Separating style and content with bilinear models.",
623
+ "author": "Joshua B Tenenbaum and William T Freeman.",
624
+ "venue": "Neural Computation, 12(6):1247\u20131283, 2000.",
625
+ "url": null
626
+ }
627
+ },
628
+ {
629
+ "57": {
630
+ "title": "Self-labeled techniques for semi-supervised learning: taxonomy, software and empirical study.",
631
+ "author": "Isaac Triguero, Salvador Garc\u00eda, and Francisco Herrera.",
632
+ "venue": "Knowledge and Information systems, 42(2):245\u2013284, 2015.",
633
+ "url": null
634
+ }
635
+ },
636
+ {
637
+ "58": {
638
+ "title": "Multimodal transformer for unaligned multimodal language sequences.",
639
+ "author": "Yao-Hung Hubert Tsai, Shaojie Bai, Paul Pu Liang, J Zico Kolter, Louis-Philippe Morency, and Ruslan Salakhutdinov.",
640
+ "venue": "In Proceedings of the Conference. Association for Computational Linguistics. Meeting, volume 2019, pp. 6558. NIH Public Access, 2019.",
641
+ "url": null
642
+ }
643
+ },
644
+ {
645
+ "59": {
646
+ "title": "A survey on semi-supervised learning.",
647
+ "author": "Jesper E Van Engelen and Holger H Hoos.",
648
+ "venue": "Machine Learning, 109(2):373\u2013440, 2020.",
649
+ "url": null
650
+ }
651
+ },
652
+ {
653
+ "60": {
654
+ "title": "Attention is all you need.",
655
+ "author": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin.",
656
+ "venue": "Advances in Neural Information Processing Systems, 30, 2017.",
657
+ "url": null
658
+ }
659
+ },
660
+ {
661
+ "61": {
662
+ "title": "Translating videos to natural language using deep recurrent neural networks.",
663
+ "author": "Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach, Raymond J Mooney, and Kate Saenko.",
664
+ "venue": "In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2015.",
665
+ "url": null
666
+ }
667
+ },
668
+ {
669
+ "62": {
670
+ "title": "Co-training for cross-lingual sentiment classification.",
671
+ "author": "Xiaojun Wan.",
672
+ "venue": "In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pp. 235\u2013243, 2009.",
673
+ "url": null
674
+ }
675
+ },
676
+ {
677
+ "63": {
678
+ "title": "Kernel Smoothing.",
679
+ "author": "Matt P Wand and M Chris Jones.",
680
+ "venue": "CRC press, 1994.",
681
+ "url": null
682
+ }
683
+ },
684
+ {
685
+ "64": {
686
+ "title": "A comprehensive survey on cross-modal retrieval.",
687
+ "author": "Kaiye Wang, Qiyue Yin, Wei Wang, Shu Wu, and Liang Wang.",
688
+ "venue": "arXiv preprint arXiv:1607.06215, 2016.",
689
+ "url": null
690
+ }
691
+ },
692
+ {
693
+ "65": {
694
+ "title": "ORCCA: Optimal randomized canonical correlation analysis.",
695
+ "author": "Yinsong Wang and Shahin Shahrampour.",
696
+ "venue": "IEEE Transactions on Neural Networks and Learning Systems, 2021.",
697
+ "url": null
698
+ }
699
+ },
700
+ {
701
+ "66": {
702
+ "title": "TAKDE: temporal adaptive kernel density estimator for real-time dynamic density estimation.",
703
+ "author": "Yinsong Wang, Yu Ding, and Shahin Shahrampour.",
704
+ "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023.",
705
+ "url": null
706
+ }
707
+ },
708
+ {
709
+ "67": {
710
+ "title": "Smooth regression analysis.",
711
+ "author": "Geoffrey S Watson.",
712
+ "venue": "Sankhy\u0101: The Indian Journal of Statistics, Series A, pp. 359\u2013372, 1964.",
713
+ "url": null
714
+ }
715
+ },
716
+ {
717
+ "68": {
718
+ "title": "HySAD: A semi-supervised hybrid shilling attack detector for trustworthy product recommendation.",
719
+ "author": "Zhiang Wu, Junjie Wu, Jie Cao, and Dacheng Tao.",
720
+ "venue": "In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 985\u2013993, 2012.",
721
+ "url": null
722
+ }
723
+ },
724
+ {
725
+ "69": {
726
+ "title": "Self-training with noisy student improves imagenet classification.",
727
+ "author": "Qizhe Xie, Minh-Thang Luong, Eduard Hovy, and Quoc V Le.",
728
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10687\u201310698, 2020.",
729
+ "url": null
730
+ }
731
+ },
732
+ {
733
+ "70": {
734
+ "title": "Nystr\u00f6mformer: A nystr\u00f6m-based algorithm for approximating self-attention.",
735
+ "author": "Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, and Vikas Singh.",
736
+ "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 14138\u201314148, 2021.",
737
+ "url": null
738
+ }
739
+ },
740
+ {
741
+ "71": {
742
+ "title": "Jointly modeling deep video and compositional text to bridge vision and language in a unified framework.",
743
+ "author": "Ran Xu, Caiming Xiong, Wei Chen, and Jason Corso.",
744
+ "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, volume 29, 2015.",
745
+ "url": null
746
+ }
747
+ },
748
+ {
749
+ "72": {
750
+ "title": "Unsupervised word sense disambiguation rivaling supervised methods.",
751
+ "author": "David Yarowsky.",
752
+ "venue": "In 33rd Annual Meeting of the Association for Computational Linguistics, pp. 189\u2013196, 1995.",
753
+ "url": null
754
+ }
755
+ },
756
+ {
757
+ "73": {
758
+ "title": "Semi-supervised learning literature survey.",
759
+ "author": "Xiaojin Jerry Zhu.",
760
+ "venue": "Technical Report TR 1530, 2005.",
761
+ "url": null
762
+ }
763
+ },
764
+ {
765
+ "74": {
766
+ "title": "Rethinking pre-training and self-training.",
767
+ "author": "Barret Zoph, Golnaz Ghiasi, Tsung-Yi Lin, Yin Cui, Hanxiao Liu, Ekin Dogus Cubuk, and Quoc Le.",
768
+ "venue": "Advances in Neural Information Processing Systems, 33:3833\u20133845, 2020.",
769
+ "url": null
770
+ }
771
+ }
772
+ ],
773
+ "url": "http://arxiv.org/html/2302.02224v3"
774
+ }
20240620/2302.08176v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240620/2303.15350v2.json ADDED
@@ -0,0 +1,72 @@
1
+ {
2
+ "title": "Improving Neural Topic Models with Wasserstein Knowledge Distillation",
3
+ "abstract": "Topic modeling is a dominant method for exploring document collections on the web and in digital libraries. Recent approaches to topic modeling use pretrained contextualized language models and variational autoencoders. However, large neural topic models have a considerable memory footprint. In this paper, we propose a knowledge distillation framework to compress a contextualized topic model without loss in topic quality. In particular, the proposed distillation objective is to minimize the cross-entropy of the soft labels produced by the teacher and the student models, as well as to minimize the squared 2-Wasserstein distance between the latent distributions learned by the two models. Experiments on two publicly available datasets show that the student trained with knowledge distillation achieves topic coherence much higher than that of the original student model, and even surpasses the teacher while containing far fewer parameters than the teacher\u2019s. The distilled model also outperforms several other competitive topic models on topic coherence.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Topic modeling has come up as an important technique to analyze large document corpora and extract their themes automatically [1 ###reference_b1###], [30 ###reference_b30###], [26 ###reference_b26###]. Therefore, they are frequently used to obtain an overview of the topics in document archives and web search results, match queries and documents, and diversify search results [28 ###reference_b28###, 11 ###reference_b11###].\nWhile latent Dirichlet allocation (LDA) [5 ###reference_b5###] is the classical topic modeling algorithm, recent approaches exploit deep neural networks, specifically, variational autoencoders (VAEs) [13 ###reference_b13###]. ProdLDA [24 ###reference_b24###] is a well-known VAE-based topic model that uses a product of experts and a Laplace approximation to the Dirichlet prior.\nBianchi et al. [3 ###reference_b3###] recently proposed CombinedTM, a contextualized topic model that feeds into the VAE of ProdLDA a distributed representation of the document built with a pre-trained language model (PLM) like sentence-BERT (SBERT) [22 ###reference_b22###] along with a bag-of-words (BoW) representation of the document. It achieves state-of-the-art topic coherence on many benchmark data sets. Given a VAE-based topic model pre-trained on a corpus, one can pass a document from the corpus through the VAE encoder and recover its topics.\nA remarkable feature of contextualized topic models is that, if the PLM is multilingual and the input to the encoder solely consists of contextualized representations from the PLM, it is possible to train the model in one language and test it in another, making it a zero-shot topic model, also called ZeroShotTM [4 ###reference_b4###].\nIncreasing the network complexity, like the depth or width of the neural networks in the VAE, might improve the coherence of the generated topics but produce a larger memory footprint, thereby making it difficult to store and use the topic models on resource-constrained devices. Using only contextualized embeddings in the input would also reduce the model size but could impact the topic quality as well.\nIn this paper, we investigate if a VAE-based topic model can be compressed without compromising topic coherence.\nFor this purpose, we use knowledge distillation (KD), which involves a teacher model to improve the performance of a smaller student model [12 ###reference_b12###]. While KD has been used for classification tasks in image [10 ###reference_b10###] and text processing [17 ###reference_b17###], this paper tackles an unsupervised learning problem for a generative model. Specifically, we distill knowledge from a CombinedTM teacher to a smaller ZeroShotTM student. In standard KD [12 ###reference_b12###], the aim is to minimize the cross-entropy between the soft labels produced by the student and the teacher models along with the Kullback-Leibler (KL) divergence between their respective output distributions. But even if the two distributions have very little dissimilarity with each other, the KL-divergence may reach a very high value, and if the two distributions are not overlapping at all, it explodes to infinity [19 ###reference_b19###].\nTo avoid these issues, we choose 2-Wasserstein distance [18 ###reference_b18###] instead of KL-divergence in distillation loss.\nOur distillation process minimizes the cross-entropy between the soft labels produced by the teacher and the student, and the square of the 2-Wasserstein distance between the latent distributions learned by the two models. 
Wasserstein distance arises in the theory of optimal transport and measures how \u2018close\u2019 two distributions are [21 ###reference_b21###, 27 ###reference_b27###, 9 ###reference_b9###]. Unlike the KL divergence, if the Wasserstein between two distributions is high, this actually represents that the underlying distributions are very different from each other.\nIn summary, our contributions are:\n(1) We propose a 2-Wasserstein distance-based knowledge distillation framework for neural topic models. We call our method Wasserstein knowledge distillation. To the best of our knowledge, this is the first work on inter-VAE knowledge distillation for topic modeling. (2) Experiments on two public datasets show that in terms of topic coherence, the distilled model significantly outperforms the student and even scores better than the teacher. The distilled model also beats several strong baselines on topic coherence. This demonstrates the efficacy of our approach. We have made our code publicly available111https://github.com/AdhyaSuman/CTMKD ###reference_###."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Background on Wasserstein Distance",
15
+ "text": "Let be a complete separable metric space with metric and equipped with a Borel -algebra.\nLet denote the space of all probability measures defined on with finite -th moment for .\nIf , then is defined to be the set of measures having and as marginals. The Wasserstein distance between the two probability measures and in is defined as\nis intuitively the minimum \u2018cost\u2019 of transforming to (or vice versa) [27 ###reference_b27###].\nConsider with as the Euclidean norm. Suppose , and are normal distributions with means and symmetric positive semi-definite covariance matrices . From [18 ###reference_b18###], the squared 2-Wasserstein distance between and is given by:\nWasserstein distance has been used to train various machine learning models, including classifiers [7 ###reference_b7###], Boltzmann machines [16 ###reference_b16###], and generative adversarial networks [2 ###reference_b2###], where it is found to be a better loss metric than KL-divergence."
16
+ },
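A concrete companion to Eq. (2): the sketch below (our own NumPy/SciPy illustration, not part of the dataset or the paper's code; function names are hypothetical) evaluates the squared 2-Wasserstein distance between two Gaussians, both in the general matrix form and in the diagonal-covariance form used by the topic models in the next section, where the trace term collapses to the squared Euclidean distance between the standard-deviation vectors.

```python
import numpy as np
from scipy.linalg import sqrtm

def w2_squared_gaussians(m1, S1, m2, S2):
    """Squared 2-Wasserstein distance between N(m1, S1) and N(m2, S2), per Eq. (2)."""
    rt = sqrtm(S2)                     # principal square root of S2
    cross = sqrtm(rt @ S1 @ rt).real   # (S2^{1/2} S1 S2^{1/2})^{1/2}
    return float(np.sum((m1 - m2) ** 2) + np.trace(S1 + S2 - 2.0 * cross))

def w2_squared_diag(m1, var1, m2, var2):
    """Same quantity when both covariances are diagonal, given as variance vectors."""
    return float(np.sum((m1 - m2) ** 2)
                 + np.sum((np.sqrt(var1) - np.sqrt(var2)) ** 2))

# Sanity check: the two forms agree on diagonal covariances.
rng = np.random.default_rng(0)
m1, m2 = rng.normal(size=5), rng.normal(size=5)
v1, v2 = rng.uniform(0.5, 2.0, size=5), rng.uniform(0.5, 2.0, size=5)
assert np.isclose(w2_squared_gaussians(m1, np.diag(v1), m2, np.diag(v2)),
                  w2_squared_diag(m1, v1, m2, v2))
```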
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Proposed Framework for Knowledge Distillation",
21
+ "text": "###figure_1### Our framework for KD is shown in Figure 1 ###reference_###. The teacher and the student models are both VAEs. The teacher is a CombinedTM [3 ###reference_b3###] that takes as input a document encoded as the concatenation of the document\u2019s normalized BoW representation , where is the vocabulary size, and its contextualized embedding scaled to dimension by a linear layer. The student is a ZeroShotTM [4 ###reference_b4###]. While the student\u2019s encoder takes only the document\u2019s contextualized representation, its decoder still needs the BoW vector during training,\nbut it is not necessary when we use only its trained encoder to infer the topics for a given document. The teacher\u2019s encoder is a multi-layer feed-forward neural network (FFNN) while we make the student\u2019s encoder an FFNN with one hidden layer.\nA VAE-based topic model works as follows [24 ###reference_b24###]. Suppose it has to learn topics from a corpus. The VAE encoder having weights learns the approximate posterior distribution represented by mean and variance for an input instance . The decoder samples a vector using the reparameterization trick [13 ###reference_b13###], and produces the document-topic vector , which is passed through a shallow FFNN with weight matrix to learn a distribution . The VAE is trained by backpropagation to minimize the following loss :\nwhere is the expected negative log-likelihood of the reconstructed BoW, and is a regularizer measuring the KL-divergence of the encoder\u2019s output from the assumed prior of the latent distribution.\nNow suppose that the teacher has been already trained on a dataset to learn topics, and that, after training, the weights of its encoder and decoder are and , respectively. We will use this frozen teacher model to train the student with KD to learn topics from the same dataset and the same vocabulary. We denote this KD-trained student by . Let the weights in its encoder and decoder be and , respectively, at the start of some iteration during the training of .\nGiven an input instance , the student\u2019s loss function has two components:\n(i) Loss associated with student VAE: The VAE loss is given by Eq. (3 ###reference_###).\n(ii) Loss associated with knowledge distillation:\nWhile training , every instance is passed through both and . Suppose the teacher\u2019s encoder outputs the -variate Gaussian while the student\u2019s encoder outputs the -variate Gaussian . Note that instead of a full covariance matrix, a diagonal covariance matrix (encoded as a vector) is learned [3 ###reference_b3###], [4 ###reference_b4###].\nLet and , which are easily observed to be symmetric positive semi-definite. We calculate the squared 2-Wasserstein distance between the distributions learned by and using Eq. (2 ###reference_###):\nWe propose to minimize so that the distribution learned by the student is pulled close to that of the teacher.\nThe decoder of the teacher and that of the student produce unnormalized logits and , respectively. We compute the cross-entropy loss between the soft labels and where is the softmax temperature (hyperparameter) [12 ###reference_b12###]. In addition to identifying the most probable class, the soft labels formed by a higher softmax temperature () capture the correlation between the labels, which is desired in the distillation framework.\nThe total loss due to KD is\nFinally, with as a hyperparameter, the total loss for the student is"
22
+ },
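To make the two distillation terms concrete, here is a minimal PyTorch sketch of Eqs. (4)-(6) under the diagonal-covariance assumption. It is our own reconstruction with hypothetical names; in particular, the way the final loss weighs the components with lambda is illustrative, and the paper's exact formulation should be taken from the source (https://github.com/AdhyaSuman/CTMKD).

```python
import torch
import torch.nn.functional as F

def w2_squared_diag(mu_t, var_t, mu_s, var_s):
    # Eq. (4) for diagonal Gaussians: ||mu_t - mu_s||^2 + ||sigma_t - sigma_s||^2.
    return ((mu_t - mu_s) ** 2).sum(dim=-1) + \
           ((var_t.sqrt() - var_s.sqrt()) ** 2).sum(dim=-1)

def soft_label_cross_entropy(logits_t, logits_s, tau):
    # Cross-entropy between teacher and student soft labels at temperature tau.
    p_t = F.softmax(logits_t / tau, dim=-1)
    log_p_s = F.log_softmax(logits_s / tau, dim=-1)
    return -(p_t * log_p_s).sum(dim=-1)

def student_loss(vae_loss, mu_t, var_t, mu_s, var_s, logits_t, logits_s,
                 tau=2.0, lam=0.5):
    # Eq. (5): KD loss = soft-label cross-entropy + squared 2-Wasserstein term.
    kd = soft_label_cross_entropy(logits_t, logits_s, tau) + \
         w2_squared_diag(mu_t, var_t, mu_s, var_s)
    # Eq. (6), illustrative weighting: lambda-combination of VAE and KD losses.
    return lam * vae_loss + (1.0 - lam) * kd.mean()
```

Here logits_t, mu_t, and var_t would come from a forward pass of the frozen teacher (no gradients), so only the student's parameters are updated.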
23
+ {
24
+ "section_id": "4",
25
+ "parent_section_id": null,
26
+ "section_name": "Experimental Setup",
27
+ "text": "We have performed all experiments in OCTIS ###reference_github.com/MIND-Lab/OCTIS### [25 ###reference_b25###], which is an integrated framework for topic modeling.\nWe use the following datasets from OCTIS: 20NG, which contains newsgroup documents on different subjects [25 ###reference_b25###], and M10 comprising 8355 scientific publications from 10 distinct research areas [20 ###reference_b20###].\nFor each dataset, the vocabulary contains the 2K most common words in the corpus.\nWe represent each topic by its top-10 words. We use Normalized Pointwise Mutual Information (NPMI) [15 ###reference_b15###] and Coherence Value (CV) [23 ###reference_b23###, 14 ###reference_b14###] to measure topic coherence.\nNPMI of a topic is high if the words in the topic tend to co-occur. CV is calculated using an indirect cosine measure along with the NPMI score over a boolean sliding window. Higher values of NPMI and CV are better.\nThe experiments are done for topic counts on the 20NG dataset and for topic counts on the M10 dataset, where 20 and 10 are the golden number of categories for 20NG and M10, respectively.\nWe denote the teacher (CombinedTM) by T, the student (ZeroShotTM) by S, and the distilled student model (ZeroShotTM) by SKD. The encoder in T uses 768-dimensional contextualized sentence embeddings (SBERT) from paraphrase-distilroberta-base-v2 ###reference_rs/paraphrase-distilroberta-base-v2###. The encoders in S and SKD use 384-dimensional SBERT embeddings from all-MiniLM-L6-v2 ###reference_rs/all-MiniLM-L6-v2### model.\nDataset\nK\nH\nDataset\nK\nH\n\n20NG\n \n\n\n20\n\n50\n\n100\n \n \n\n\n1\n\n1\n\n5\n \nM10\n10\n4\n\n20\n5\n\n50\n2\n\n100\n3\nUsing the Bayesian optimization framework of OCTIS, we have calculated the optimal number of hidden layers in the teacher\u2019s encoder (which takes as input the concatenation of a document\u2019s contextualized and BoW representations) from the set that maximizes the NPMI for the teacher. As shown in Table 1 ###reference_###, on 20NG dataset, we found for topic count and for ; on M10, we observed for , for , for , and for . Each hidden layer of the teacher contains 100 neurons.\nWe have tuned the hyperparameters and for SKD in OCTIS. For performance analysis, we compare these models with ProdLDA [24 ###reference_b24###], NeuralLDA [24 ###reference_b24###], Embedded Topic Model (ETM) [6 ###reference_b6###] and LDA [5 ###reference_b5###], already implemented in OCTIS. We use the default parameters unless otherwise mentioned. All models are trained for 100 epochs with a batch size of 64. Each reported performance score is the median over 5 runs (except for T, where we use a single run as it must be frozen for KD).\n###figure_2###"
28
+ },
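For reference, the NPMI of a topic can be estimated from word co-occurrences as in the standalone sketch below. This is a simplified illustration with made-up names: the paper's scores come from OCTIS's built-in coherence measures, which use a boolean sliding window rather than whole documents as the co-occurrence unit.

```python
import math
from itertools import combinations

def topic_npmi(top_words, documents, eps=1e-12):
    """Average NPMI over all pairs of a topic's top words.

    documents: iterable of token lists; probabilities are estimated from
    boolean word occurrence per document.
    """
    docs = [set(d) for d in documents]
    n = len(docs)
    p = {w: sum(w in d for d in docs) / n for w in top_words}
    scores = []
    for wi, wj in combinations(top_words, 2):
        p_ij = sum((wi in d) and (wj in d) for d in docs) / n
        pmi = math.log((p_ij + eps) / (p[wi] * p[wj] + eps))
        scores.append(pmi / -math.log(p_ij + eps))
    return sum(scores) / len(scores)

# A pair that always co-occurs scores close to the maximum value of 1.
docs = [["gun", "law", "crime"], ["gun", "law"], ["bike", "ride"]]
print(round(topic_npmi(["gun", "law"], docs), 3))
```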
29
+ {
30
+ "section_id": "5",
31
+ "parent_section_id": null,
32
+ "section_name": "Results",
33
+ "text": "Models S and SKD contain the same number of parameters, which is smaller than that of T. The sizes of all the models depend on the SBERT dimension, the number and size of hidden layers, the number of topics, and the vocabulary size. For example, for 20 topics in 20NG, T takes 6.14 MB while SKD 2.74 MB (for parameters and buffers) \u2013 a reduction in model size by . In general, the compression ranged from to .\nFig. 2 ###reference_### shows the coherence scores for each topic model for all topic settings and datasets. SKD achieves the highest NPMI and CV scores. Among T, S, and SKD, we find SKD performs much better than S and even modestly better than T. On 20NG, the NPMI scores of (T, S, SKD) are for , for , and for , so the maximum gain of SKD over S is and that over T is . Similarly on M10, the NPMI scores are for , for , for , and for .\nThus, on M10, SKD improves NPMI of S by over for , and that of T by at most . Student outperforming the teacher is surprising but has been reported earlier for supervised tasks [8 ###reference_b8###, 29 ###reference_b29###].\nWhen we deleted any one of the two loss terms from in Eq. (5 ###reference_###), NPMI and CV of SKD dropped (see Table 2 ###reference_###).\nThus, although the simpler model and weaker SBERT lower the student\u2019s performance, the knowledge distilled from the teacher\u2019s encoder and decoder vastly improves it.\nKD-loss ()\n20NG\nM10\n\nNPMI\nCV\nNPMI\nCV\n\n\n50\n\n\n50\n\n\n\n50\n\n\n\n50\n\n\n\n0.132\n0.130\n0.105\n0.687\n0.657\n0.638\n0.084\n0.080\n0.073\n0.070\n0.522\n0.499\n0.485\n0.475\n\n\n0.109\n0.114\n0.089\n0.659\n0.638\n0.615\n0.051\n0.049\n0.037\n0.043\n0.498\n0.479\n0.459\n0.452\n\n\n0.110\n0.105\n0.083\n0.653\n0.629\n0.588\n0.042\n0.052\n0.016\n0.023\n0.485\n0.464\n0.425\n0.425\nThe higher performance of the contextualized topic models over other topic models agrees with similar results in [3 ###reference_b3###, 4 ###reference_b4###].\nIn Table 3 ###reference_###, we compare qualitatively some aligned topics learned by T, S, and SKD from the 20NG corpus. For the first three topics, SKD displays more word overlap than S with the corresponding topics from T, showing that T and SKD learn similar topic-word distributions. Interestingly, the fourth topic from SKD contains more healthcare-related words than the fourth topic from T although the latter is also primarily on healthcare; this shows that SKD can produce more coherent topics than T.\nModel\nID\nTopics\n\nT\n\ngun, law, firearm, crime, weapon, assault, amendment, state, police, permit\n\n\nrussian, turkish, people, village, genocide, armenian, muslim, population, greek, army\n\n\noil, engine, ride, front, road, chain, bike, motorcycle, water, gas\n\n\nhealth, make, president, patient, medical, people, doctor, disease, work, year\n\nS\n\nlaw, people, state, government, gun, amendment, constitution, firearm, crime, privacy\n\n\narmenian, village, soldier, soviet, muslim, troop, turkish, russian, genocide, land\n\n\nengine, car, mile, ride, bike, oil, front, wheel, motorcycle, tire\n\n\nmedical, disease, study, treatment, doctor, patient, health, food, risk, percent\n\nSKD\n\ngun, law, weapon, firearm, amendment, crime, bill, assault, constitution, police\n\n\nturkish, genocide, armenian, russian, village, population, israeli, war, attack, muslim\n\n\nride, engine, car, bike, motorcycle, front, oil, motor, road, seat\n\n\nhealth, medical, doctor, disease, patient, insurance, treatment, drug, care, risk"
34
+ },
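The model sizes quoted above count parameters and buffers; for any PyTorch module they can be measured with a few lines, as in this sketch (the two-layer encoder below is a stand-in with assumed dimensions, not the actual CTMKD architecture).

```python
import torch.nn as nn

def model_size_mb(model: nn.Module) -> float:
    """Total size of a module's parameters and buffers, in megabytes."""
    n_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
    n_bytes += sum(b.numel() * b.element_size() for b in model.buffers())
    return n_bytes / 1e6

# Stand-in student encoder: SBERT dim 384 -> 100 hidden units -> mean and
# log-variance for K latent topics (hence the 2 * K outputs).
K = 20
encoder = nn.Sequential(nn.Linear(384, 100), nn.Softplus(), nn.Linear(100, 2 * K))
print(f"{model_size_mb(encoder):.2f} MB")  # ~0.17 MB for this toy module
```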
35
+ {
36
+ "section_id": "6",
37
+ "parent_section_id": null,
38
+ "section_name": "Conclusion",
39
+ "text": "We have proposed a 2-Wasserstein loss-based knowledge distillation framework to compress a contextualized topic model. Experiments on two datasets show that the pruned topic model produces topics with coherence better than that of the topics produced by the student and even the larger teacher model. This is a new method for neural topic distillation. In the future, we would like to study it analytically and apply it to distill knowledge across other neural topic models."
40
+ }
41
+ ],
42
+ "appendix": [],
43
+ "tables": {
44
+ "1": {
45
+ "table_html": "<figure class=\"ltx_table ltx_align_floatleft\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>The optimal number of hidden layers in the encoder of the teacher <span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.6.1\">T</span> for each dataset and different topic counts .</figcaption>\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S4.T1.7\" style=\"width:210.8pt;height:90pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(0.0pt,0.0pt) scale(1,1) ;\">\n<p class=\"ltx_p\" id=\"S4.T1.7.1\"><span class=\"ltx_text\" id=\"S4.T1.7.1.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.7.1.1.1\">\n<span class=\"ltx_tr\" id=\"S4.T1.7.1.1.1.1\">\n<span class=\"ltx_td ltx_align_center ltx_border_l ltx_border_t\" id=\"S4.T1.7.1.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.7.1.1.1.1.1.1\">Dataset</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.7.1.1.1.1.2\"><span class=\"ltx_text ltx_font_bold ltx_font_italic\" id=\"S4.T1.7.1.1.1.1.2.1\">K</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_t\" id=\"S4.T1.7.1.1.1.1.3\"><span class=\"ltx_text ltx_font_bold ltx_font_italic\" id=\"S4.T1.7.1.1.1.1.3.1\">H</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.7.1.1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.7.1.1.1.1.4.1\">Dataset</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.7.1.1.1.1.5\"><span class=\"ltx_text ltx_font_bold ltx_font_italic\" id=\"S4.T1.7.1.1.1.1.5.1\">K</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.7.1.1.1.1.6\"><span class=\"ltx_text ltx_font_bold ltx_font_italic\" id=\"S4.T1.7.1.1.1.1.6.1\">H</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T1.7.1.1.1.2\">\n<span class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_tt ltx_rowspan ltx_rowspan_4\" id=\"S4.T1.7.1.1.1.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.7.1.1.1.2.1.1\">20NG</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_b ltx_border_tt ltx_rowspan ltx_rowspan_4\" id=\"S4.T1.7.1.1.1.2.2\"><span class=\"ltx_text\" id=\"S4.T1.7.1.1.1.2.2.1\"><span class=\"ltx_text\" id=\"S4.T1.7.1.1.1.2.2.1.1\"></span> <span class=\"ltx_text\" id=\"S4.T1.7.1.1.1.2.2.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.7.1.1.1.2.2.1.2.1\">\n<span class=\"ltx_tr\" id=\"S4.T1.7.1.1.1.2.2.1.2.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T1.7.1.1.1.2.2.1.2.1.1.1\">20</span></span>\n<span class=\"ltx_tr\" id=\"S4.T1.7.1.1.1.2.2.1.2.1.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T1.7.1.1.1.2.2.1.2.1.2.1\">50</span></span>\n<span class=\"ltx_tr\" id=\"S4.T1.7.1.1.1.2.2.1.2.1.3\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T1.7.1.1.1.2.2.1.2.1.3.1\">100</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S4.T1.7.1.1.1.2.2.1.3\"></span></span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_b ltx_border_rr ltx_border_tt ltx_rowspan ltx_rowspan_4\" id=\"S4.T1.7.1.1.1.2.3\"><span class=\"ltx_text\" id=\"S4.T1.7.1.1.1.2.3.1\"><span class=\"ltx_text\" id=\"S4.T1.7.1.1.1.2.3.1.1\"></span> <span class=\"ltx_text\" id=\"S4.T1.7.1.1.1.2.3.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.7.1.1.1.2.3.1.2.1\">\n<span class=\"ltx_tr\" id=\"S4.T1.7.1.1.1.2.3.1.2.1.1\">\n<span class=\"ltx_td ltx_nopad_r 
ltx_align_center\" id=\"S4.T1.7.1.1.1.2.3.1.2.1.1.1\">1</span></span>\n<span class=\"ltx_tr\" id=\"S4.T1.7.1.1.1.2.3.1.2.1.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T1.7.1.1.1.2.3.1.2.1.2.1\">1</span></span>\n<span class=\"ltx_tr\" id=\"S4.T1.7.1.1.1.2.3.1.2.1.3\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T1.7.1.1.1.2.3.1.2.1.3.1\">5</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S4.T1.7.1.1.1.2.3.1.3\"></span></span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_b ltx_border_tt ltx_rowspan ltx_rowspan_4\" id=\"S4.T1.7.1.1.1.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.7.1.1.1.2.4.1\">M10</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.7.1.1.1.2.5\">10</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.7.1.1.1.2.6\">4</span></span>\n<span class=\"ltx_tr\" id=\"S4.T1.7.1.1.1.3\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.7.1.1.1.3.1\">20</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.7.1.1.1.3.2\">5</span></span>\n<span class=\"ltx_tr\" id=\"S4.T1.7.1.1.1.4\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.7.1.1.1.4.1\">50</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.7.1.1.1.4.2\">2</span></span>\n<span class=\"ltx_tr\" id=\"S4.T1.7.1.1.1.5\">\n<span class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T1.7.1.1.1.5.1\">100</span>\n<span class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T1.7.1.1.1.5.2\">3</span></span>\n</span></span></p>\n</span></div>\n</figure>",
46
+ "capture": "Table 1: The optimal number of hidden layers in the encoder of the teacher T for each dataset and different topic counts ."
47
+ },
48
+ "2": {
49
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Ablation study for the distillation loss term defined in Eq. (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2303.15350v2#S3.E5\" title=\"In 3 Proposed Framework for Knowledge Distillation \u2023 Improving Neural Topic Models with Wasserstein Knowledge Distillation\"><span class=\"ltx_text ltx_ref_tag\">5</span></a>). For each metric, the median over five independent runs for each topic count is mentioned.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S5.T2.18\" style=\"width:559.3pt;height:109pt;vertical-align:-1.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(0.0pt,0.0pt) scale(1,1) ;\">\n<p class=\"ltx_p\" id=\"S5.T2.18.18\"><span class=\"ltx_text\" id=\"S5.T2.18.18.18\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S5.T2.18.18.18.18\">\n<span class=\"ltx_tr\" id=\"S5.T2.1.1.1.1.1\">\n<span class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t ltx_rowspan ltx_rowspan_3\" id=\"S5.T2.1.1.1.1.1.1\"><span class=\"ltx_text\" id=\"S5.T2.1.1.1.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.1.1.1.1.1.1.1\">KD-loss</span> ()</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_t ltx_colspan ltx_colspan_6\" id=\"S5.T2.1.1.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.1.1.1.1.2.1\">20NG</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t ltx_colspan ltx_colspan_8\" id=\"S5.T2.1.1.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.1.1.1.1.3.1\">M10</span></span></span>\n<span class=\"ltx_tr\" id=\"S5.T2.18.18.18.18.19\">\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t ltx_colspan ltx_colspan_3\" id=\"S5.T2.18.18.18.18.19.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.18.18.18.18.19.1.1\">NPMI</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_t ltx_colspan ltx_colspan_3\" id=\"S5.T2.18.18.18.18.19.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.18.18.18.18.19.2.1\">CV</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t ltx_colspan ltx_colspan_4\" id=\"S5.T2.18.18.18.18.19.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.18.18.18.18.19.3.1\">NPMI</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t ltx_colspan ltx_colspan_4\" id=\"S5.T2.18.18.18.18.19.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.18.18.18.18.19.4.1\">CV</span></span></span>\n<span class=\"ltx_tr\" id=\"S5.T2.15.15.15.15.15\">\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.2.2.2.2.2.1\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.3.3.3.3.2\"><span class=\"ltx_text ltx_markedasmath ltx_font_bold\" id=\"S5.T2.3.3.3.3.3.2.1\">50</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.4.4.4.4.4.3\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.5.5.5.5.5.4\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.6.6.6.6.6.5\"><span class=\"ltx_text ltx_markedasmath ltx_font_bold\" id=\"S5.T2.6.6.6.6.6.5.1\">50</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_t\" id=\"S5.T2.7.7.7.7.7.6\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.8.8.8.8.8.7\"></span>\n<span class=\"ltx_td 
ltx_align_center ltx_border_t\" id=\"S5.T2.9.9.9.9.9.8\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.10.10.10.10.10.9\"><span class=\"ltx_text ltx_markedasmath ltx_font_bold\" id=\"S5.T2.10.10.10.10.10.9.1\">50</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.11.11.11.11.11.10\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.12.12.12.12.12.11\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.13.13.13.13.13.12\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.14.14.14.14.14.13\"><span class=\"ltx_text ltx_markedasmath ltx_font_bold\" id=\"S5.T2.14.14.14.14.14.13.1\">50</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.15.15.15.15.15.14\"></span></span>\n<span class=\"ltx_tr\" id=\"S5.T2.16.16.16.16.16\">\n<span class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_tt\" id=\"S5.T2.16.16.16.16.16.1\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.16.16.16.16.16.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.16.16.16.16.16.2.1\">0.132</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.16.16.16.16.16.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.16.16.16.16.16.3.1\">0.130</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T2.16.16.16.16.16.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.16.16.16.16.16.4.1\">0.105</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.16.16.16.16.16.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.16.16.16.16.16.5.1\">0.687</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.16.16.16.16.16.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.16.16.16.16.16.6.1\">0.657</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_tt\" id=\"S5.T2.16.16.16.16.16.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.16.16.16.16.16.7.1\">0.638</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.16.16.16.16.16.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.16.16.16.16.16.8.1\">0.084</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.16.16.16.16.16.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.16.16.16.16.16.9.1\">0.080</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.16.16.16.16.16.10\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.16.16.16.16.16.10.1\">0.073</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T2.16.16.16.16.16.11\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.16.16.16.16.16.11.1\">0.070</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.16.16.16.16.16.12\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.16.16.16.16.16.12.1\">0.522</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.16.16.16.16.16.13\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.16.16.16.16.16.13.1\">0.499</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.16.16.16.16.16.14\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.16.16.16.16.16.14.1\">0.485</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T2.16.16.16.16.16.15\"><span class=\"ltx_text ltx_font_bold\" 
id=\"S5.T2.16.16.16.16.16.15.1\">0.475</span></span></span>\n<span class=\"ltx_tr\" id=\"S5.T2.17.17.17.17.17\">\n<span class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"S5.T2.17.17.17.17.17.1\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S5.T2.17.17.17.17.17.2\">0.109</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S5.T2.17.17.17.17.17.3\">0.114</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.17.17.17.17.17.4\">0.089</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S5.T2.17.17.17.17.17.5\">0.659</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S5.T2.17.17.17.17.17.6\">0.638</span>\n<span class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S5.T2.17.17.17.17.17.7\">0.615</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S5.T2.17.17.17.17.17.8\">0.051</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S5.T2.17.17.17.17.17.9\">0.049</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S5.T2.17.17.17.17.17.10\">0.037</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.17.17.17.17.17.11\">0.043</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S5.T2.17.17.17.17.17.12\">0.498</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S5.T2.17.17.17.17.17.13\">0.479</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S5.T2.17.17.17.17.17.14\">0.459</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.17.17.17.17.17.15\">0.452</span></span>\n<span class=\"ltx_tr\" id=\"S5.T2.18.18.18.18.18\">\n<span class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r\" id=\"S5.T2.18.18.18.18.18.1\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T2.18.18.18.18.18.2\">0.110</span>\n<span class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T2.18.18.18.18.18.3\">0.105</span>\n<span class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T2.18.18.18.18.18.4\">0.083</span>\n<span class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T2.18.18.18.18.18.5\">0.653</span>\n<span class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T2.18.18.18.18.18.6\">0.629</span>\n<span class=\"ltx_td ltx_align_center ltx_border_b ltx_border_rr\" id=\"S5.T2.18.18.18.18.18.7\">0.588</span>\n<span class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T2.18.18.18.18.18.8\">0.042</span>\n<span class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T2.18.18.18.18.18.9\">0.052</span>\n<span class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T2.18.18.18.18.18.10\">0.016</span>\n<span class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T2.18.18.18.18.18.11\">0.023</span>\n<span class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T2.18.18.18.18.18.12\">0.485</span>\n<span class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T2.18.18.18.18.18.13\">0.464</span>\n<span class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T2.18.18.18.18.18.14\">0.425</span>\n<span class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T2.18.18.18.18.18.15\">0.425</span></span>\n</span></span></p>\n</span></div>\n</figure>",
50
+ "capture": "Table 2: Ablation study for the distillation loss term defined in Eq. (5). For each metric, the median over five independent runs for each topic count is mentioned."
51
+ },
52
+ "3": {
53
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Some selected topics output when <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.21.1\">T</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.22.2\">S</span>, and <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.23.3\">SKD</span> models are run on the 20NG corpus for 20 topics. If a word in a topic from <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.24.4\">S</span> or <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.25.5\">SKD</span> is shared with the corresponding topic in <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.26.6\">T</span>, then it is in <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.27.7\">bold</span> otherwise it is in <span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.28.8\">italic</span>.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S5.T3.12\" style=\"width:450.3pt;height:235pt;vertical-align:-1.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(0.0pt,0.0pt) scale(1,1) ;\">\n<p class=\"ltx_p\" id=\"S5.T3.12.12\"><span class=\"ltx_text\" id=\"S5.T3.12.12.12\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S5.T3.12.12.12.12\">\n<span class=\"ltx_tr\" id=\"S5.T3.12.12.12.12.13\">\n<span class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T3.12.12.12.12.13.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.12.12.12.12.13.1.1\">Model</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.12.12.12.12.13.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.12.12.12.12.13.2.1\">ID</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.12.12.12.12.13.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.12.12.12.12.13.3.1\">Topics</span></span></span>\n<span class=\"ltx_tr\" id=\"S5.T3.1.1.1.1.1\">\n<span class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_tt ltx_rowspan ltx_rowspan_4\" id=\"S5.T3.1.1.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.1.1.1.1.2.1\">T</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T3.1.1.1.1.1.1\"></span>\n<span class=\"ltx_td ltx_align_left ltx_border_r ltx_border_tt\" id=\"S5.T3.1.1.1.1.1.3\">gun, law, firearm, crime, weapon, assault, amendment, state, police, permit</span></span>\n<span class=\"ltx_tr\" id=\"S5.T3.2.2.2.2.2\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.2.2.2.2.2.1\"></span>\n<span class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T3.2.2.2.2.2.2\">russian, turkish, people, village, genocide, armenian, muslim, population, greek, army</span></span>\n<span class=\"ltx_tr\" id=\"S5.T3.3.3.3.3.3\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.3.3.3.3.3.1\"></span>\n<span class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T3.3.3.3.3.3.2\">oil, engine, ride, front, road, chain, bike, motorcycle, water, gas</span></span>\n<span class=\"ltx_tr\" id=\"S5.T3.4.4.4.4.4\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.4.4.4.4.4.1\"></span>\n<span class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T3.4.4.4.4.4.2\">health, make, president, patient, medical, people, doctor, disease, work, year</span></span>\n<span class=\"ltx_tr\" id=\"S5.T3.5.5.5.5.5\">\n<span class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_tt ltx_rowspan 
ltx_rowspan_4\" id=\"S5.T3.5.5.5.5.5.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.5.5.5.5.5.2.1\">S</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T3.5.5.5.5.5.1\"></span>\n<span class=\"ltx_td ltx_align_left ltx_border_r ltx_border_tt\" id=\"S5.T3.5.5.5.5.5.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.5.5.5.5.5.3.1\">law</span>, <span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.5.5.5.5.5.3.2\">people</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.5.5.5.5.5.3.3\">state</span>, <span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.5.5.5.5.5.3.4\">government</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.5.5.5.5.5.3.5\">gun</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.5.5.5.5.5.3.6\">amendment</span>, <span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.5.5.5.5.5.3.7\">constitution</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.5.5.5.5.5.3.8\">firearm</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.5.5.5.5.5.3.9\">crime</span>, <span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.5.5.5.5.5.3.10\">privacy</span></span></span>\n<span class=\"ltx_tr\" id=\"S5.T3.6.6.6.6.6\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.6.6.6.6.6.1\"></span>\n<span class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T3.6.6.6.6.6.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.6.6.6.6.6.2.1\">armenian</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.6.6.6.6.6.2.2\">village</span>, <span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.6.6.6.6.6.2.3\">soldier</span>, <span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.6.6.6.6.6.2.4\">soviet</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.6.6.6.6.6.2.5\">muslim</span>, <span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.6.6.6.6.6.2.6\">troop</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.6.6.6.6.6.2.7\">turkish</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.6.6.6.6.6.2.8\">russian</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.6.6.6.6.6.2.9\">genocide</span>, <span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.6.6.6.6.6.2.10\">land</span></span></span>\n<span class=\"ltx_tr\" id=\"S5.T3.7.7.7.7.7\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.7.7.7.7.7.1\"></span>\n<span class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T3.7.7.7.7.7.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.7.7.7.7.7.2.1\">engine</span>, <span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.7.7.7.7.7.2.2\">car</span>, <span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.7.7.7.7.7.2.3\">mile</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.7.7.7.7.7.2.4\">ride</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.7.7.7.7.7.2.5\">bike</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.7.7.7.7.7.2.6\">oil</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.7.7.7.7.7.2.7\">front</span>, <span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.7.7.7.7.7.2.8\">wheel</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.7.7.7.7.7.2.9\">motorcycle</span>, <span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.7.7.7.7.7.2.10\">tire</span></span></span>\n<span class=\"ltx_tr\" id=\"S5.T3.8.8.8.8.8\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.8.8.8.8.8.1\"></span>\n<span class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T3.8.8.8.8.8.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.8.8.8.8.8.2.1\">medical</span>, <span class=\"ltx_text 
ltx_font_bold\" id=\"S5.T3.8.8.8.8.8.2.2\">disease</span>, <span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.8.8.8.8.8.2.3\">study</span>, <span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.8.8.8.8.8.2.4\">treatment</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.8.8.8.8.8.2.5\">doctor</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.8.8.8.8.8.2.6\">patient</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.8.8.8.8.8.2.7\">health</span>, <span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.8.8.8.8.8.2.8\">food</span>, <span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.8.8.8.8.8.2.9\">risk</span>, <span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.8.8.8.8.8.2.10\">percent</span></span></span>\n<span class=\"ltx_tr\" id=\"S5.T3.9.9.9.9.9\">\n<span class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r ltx_border_tt ltx_rowspan ltx_rowspan_4\" id=\"S5.T3.9.9.9.9.9.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.9.9.9.9.9.2.1\">SKD</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T3.9.9.9.9.9.1\"></span>\n<span class=\"ltx_td ltx_align_left ltx_border_r ltx_border_tt\" id=\"S5.T3.9.9.9.9.9.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.9.9.9.9.9.3.1\">gun</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.9.9.9.9.9.3.2\">law</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.9.9.9.9.9.3.3\">weapon</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.9.9.9.9.9.3.4\">firearm</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.9.9.9.9.9.3.5\">amendment</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.9.9.9.9.9.3.6\">crime</span>, <span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.9.9.9.9.9.3.7\">bill</span>, <span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.9.9.9.9.9.3.8\">assault</span>, <span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.9.9.9.9.9.3.9\">constitution</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.9.9.9.9.9.3.10\">police</span></span></span>\n<span class=\"ltx_tr\" id=\"S5.T3.10.10.10.10.10\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.10.10.10.10.10.1\"></span>\n<span class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T3.10.10.10.10.10.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.10.10.10.10.10.2.1\">turkish</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.10.10.10.10.10.2.2\">genocide</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.10.10.10.10.10.2.3\">armenian</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.10.10.10.10.10.2.4\">russian</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.10.10.10.10.10.2.5\">village</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.10.10.10.10.10.2.6\">population</span>, <span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.10.10.10.10.10.2.7\">israeli</span>, <span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.10.10.10.10.10.2.8\">war</span>, <span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.10.10.10.10.10.2.9\">attack</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.10.10.10.10.10.2.10\">muslim</span></span></span>\n<span class=\"ltx_tr\" id=\"S5.T3.11.11.11.11.11\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.11.11.11.11.11.1\"></span>\n<span class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T3.11.11.11.11.11.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.11.11.11.11.11.2.1\">ride</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.11.11.11.11.11.2.2\">engine</span>, <span 
class=\"ltx_text ltx_font_italic\" id=\"S5.T3.11.11.11.11.11.2.3\">car</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.11.11.11.11.11.2.4\">bike</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.11.11.11.11.11.2.5\">motorcycle</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.11.11.11.11.11.2.6\">front</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.11.11.11.11.11.2.7\">oil</span>, <span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.11.11.11.11.11.2.8\">motor</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.11.11.11.11.11.2.9\">road</span>, <span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.11.11.11.11.11.2.10\">seat</span></span></span>\n<span class=\"ltx_tr\" id=\"S5.T3.12.12.12.12.12\">\n<span class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T3.12.12.12.12.12.1\"></span>\n<span class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r\" id=\"S5.T3.12.12.12.12.12.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.12.12.12.12.12.2.1\">health</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.12.12.12.12.12.2.2\">medical</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.12.12.12.12.12.2.3\">doctor</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.12.12.12.12.12.2.4\">disease</span>, <span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.12.12.12.12.12.2.5\">patient</span>, <span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.12.12.12.12.12.2.6\">insurance</span>, <span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.12.12.12.12.12.2.7\">treatment</span>, <span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.12.12.12.12.12.2.8\">drug</span>, <span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.12.12.12.12.12.2.9\">care</span>, <span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.12.12.12.12.12.2.10\">risk</span></span></span>\n</span></span></p>\n</span></div>\n</figure>",
54
+ "capture": "Table 3: Some selected topics output when T, S, and SKD models are run on the 20NG corpus for 20 topics. If a word in a topic from S or SKD is shared with the corresponding topic in T, then it is in bold otherwise it is in italic."
55
+ }
56
+ },
57
+ "image_paths": {
58
+ "1": {
59
+ "figure_path": "2303.15350v2_figure_1.png",
60
+ "caption": "Figure 1: Framework for knowledge distillation from CombinedTM to ZeroShotTM.",
61
+ "url": "http://arxiv.org/html/2303.15350v2/x1.png"
62
+ },
63
+ "2": {
64
+ "figure_path": "2303.15350v2_figure_2.png",
65
+ "caption": "Figure 2: Coherence scores (NPMI and CV) for different topic models on two datasets: 20NG and M10. The X-axis is marked with the topic counts used for each dataset.",
66
+ "url": "http://arxiv.org/html/2303.15350v2/x2.png"
67
+ }
68
+ },
69
+ "validation": true,
70
+ "references": [],
71
+ "url": "http://arxiv.org/html/2303.15350v2"
72
+ }
20240620/2304.06470v6.json ADDED
@@ -0,0 +1,629 @@
1
+ {
2
+ "title": "Qualitative Failures of Image Generation Models and Their Application in Detecting Deepfakes",
3
+ "abstract": "The remarkable advancement of image and video generation models has led to the creation of exceptionally realistic content, posing challenges in differentiating between genuine and fabricated instances in numerous scenarios. However, despite this progress, a gap remains between the quality of generated images and those found in the real world. To address this, we have reviewed a vast body of literature from both academic publications and social media to identify qualitative shortcomings in image generation models, which we have classified into five categories. By understanding these failures, we can identify areas where these models need improvement, as well as develop strategies for detecting generated images and deepfakes. The prevalence of deepfakes in today\u2019s society is a serious concern, and our findings can help mitigate their negative impact. In order to support research in this field, a collection of instances where models have failed is made available at here.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Generated images, also known as synthetic images, are created by machine learning algorithms or other software programs, while real images are captured by cameras or other imaging devices. Generated images are not real-world representations of a scene or object, but rather computer-generated approximations. As such, they lack the authenticity and realism of real images. Deepfakes refer to fabricated media content that has undergone digital alterations to effectively substitute the appearance of one individual with that of another, creating a highly convincing outcome. This paper investigates the indicators that can be utilized for identifying artificially generated images, with a specific focus on detecting deepfakes.\nDespite the abundance of anecdotal evidence shared on social media regarding the weaknesses of image generation models, there has yet to be a comprehensive and systematic analysis of these failures. Often, the examples shared by people are selectively chosen to showcase instances in which the models perform well, which may lead to a biased perception of their capabilities, and an overestimation of their effectiveness. While there have been quantitative studies aimed at evaluating and comparing generative models [4 ###reference_b4###, 6 ###reference_b6###], such as the use of metrics like FID [17 ###reference_b17###], these measures can be difficult to interpret and are usually calculated over large datasets, making them unsuitable for determining the authenticity of individual images. Quantitative measures for detecting deepfakes do exist [26 ###reference_b26###], but they are not as easily accessible to the general public as qualitative measures, which are simpler to carry out.\n###figure_1### As the quality of generated images continues to improve, it is crucial to conduct more in-depth and precise analyses. Thus far, people have been amazed by the ability of synthesized images to approximate natural scenes. When Photoshop was introduced, significant efforts were made to identify manipulated images, and a similar approach is needed for generated images today. It would be beneficial to compile a set of indicators and other resources to aid in detecting generated images and deepfakes.\nWe present a collection of indicators that can be examined in a single image to determine whether it is genuine or generated. Overall, we offer five classes of these indicators including Human and Animal Body Parts, Geometry, Physics, Semantics and Logic, as well as Text, Noise, and Details, for both portraits and natural landscapes. The advantage of utilizing qualitative cues is that they are easily accessible and can be utilized by anyone, potentially serving as the initial step in detecting deepfakes.\nGenerated images can appear realistic when viewed from a distance or at high resolutions, making it difficult to discern them from actual photographs. However, at lower resolutions, nearly all generated images lack distinguishable characteristics that set them apart from real photographs. To illustrate, refer to Figure 1 ###reference_###, which depicts a painting by Camille Pissarro featuring intricate details. While the overall image may seem satisfactory, closer inspection reveals several missing details such as distorted facial features.\nThis study has a dual purpose. Firstly, it aims to explore the differences between generated images and real-world images. Therefore, this research complements studies that propose quantitative approaches for evaluating generative models. 
Secondly, it aims to examine qualitative methods that can be employed to identify deepfakes and train individuals to become proficient in this task, with the added benefit of systematically organizing this knowledge."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related work",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Quantitative and Qualitative Approaches to Evaluate Generative Models",
21
+ "text": "Quantitative approaches have emerged as a vital tool to evaluate the performance of generative models. These methods rely on quantitative measures to assess how well a model is able to generate realistic data. One commonly used metric is the Inception Score [33 ###reference_b33###], which evaluates the diversity and quality of generated images based on the classification accuracy of a pre-trained classifier. Another popular approach is the Fr\u00e9chet Inception Distance [17 ###reference_b17###], which uses feature statistics to compare the distribution of generated data with that of real data. Moreover, other metrics such as precision and recall [32 ###reference_b32###] can be used to evaluate the quality of generated samples in specific domains such as vision, text and audio. Some studies have proposed methods to assess the visual realism of generated images (e.g. [12 ###reference_b12###]). These quantitative approaches provide a rigorous and objective way to measure the effectiveness of generative models, helping researchers to improve their models and develop more advanced generative techniques.\nRecently, two metrics have gained popularity, namely the CLIP score and the CLIP directional similarity (e.g. [27 ###reference_b27###, 28 ###reference_b28###]). The CLIP score evaluates the coherence of image and caption pairs by measuring their compatibility. A higher CLIP score indicates a greater degree of compatibility, which can also be interpreted as the semantic similarity between the image and the caption. Moreover, studies have shown that the CLIP score has a strong correlation with human judgement. On the other hand, the CLIP directional similarity is used for generating images based on text prompts while being conditioned on an input image. It assesses the consistency between the differences in the two images (in CLIP space) and the differences in their respective captions.\nTo obtain a thorough analysis of quantitative metrics for evaluating generative models, please refer to the following references [4 ###reference_b4###, 6 ###reference_b6###, 35 ###reference_b35###, 38 ###reference_b38###].\nQualitative assessment of generated images entails a human evaluation. The quality of these images is evaluated on various criteria, such as compositionality, image-text alignment, and spatial relations.\nDrawBench ###reference_nAbmR4FREi6npB1u-Bo3GFdwdOPYJc617rBOxIRHY/edit#gid=0### and PartiPrompts ###reference_### are prompt datasets used for qualitative benchmarking, that are were introduced by Imagen [31 ###reference_b31###] and Parti [37 ###reference_b37###], respectively.\nThese benchmarks allow for side-by-side human evaluation of different image generation models.\nPartiPrompts is a rich set of over 1600 prompts in English. It can be used to measure model capabilities across various categories and challenge aspects such as \u201cBasic\u201d, \u201cComplex\u201d, \u201cWriting & Symbols\u201d, etc.\nDrawBench is comprised of a collection of 200 prompts that are divided into 11 categories (Table 1 ###reference_###), which aim to assess various capabilities of models. These prompts test a model\u2019s ability to accurately render different attributes, such as colors, object counts, spatial relationships, text in the scene, and unusual object interactions. Additionally, the categories include complex prompts that incorporate lengthy, intricate textual descriptions, as well as uncommon words and misspelled prompts. 
DrawBench was used to directly compare different models, where human evaluators were presented with two sets of images, each consisting of eight samples, one from Model A and the other from Model B. Evaluators were then asked to compare Model A and Model B based on sample fidelity and image-text alignment.\nLarge-scale datasets have also been used in studies that focus on the qualitative evaluation of generated images (e.g. [2 ###reference_b2###]).\nThe assessment of models through qualitative methods can be susceptible to errors, potentially leading to an incorrect decision. Conversely, quantitative metrics may not always align with image quality. Therefore, the use of both qualitative and quantitative evaluations is typically recommended to obtain a more robust indication when selecting one model over another."
22
+ },
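For concreteness, below is a minimal sketch of how a CLIP score of the kind described above can be computed for a single image-caption pair, using the Hugging Face transformers implementation of CLIP. The checkpoint name is one public option (not necessarily the one used in the cited papers), the file path and caption are hypothetical, and the 100x clamped-cosine scaling is one common convention.

```python
# Hedged sketch: CLIP score for one image-caption pair.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

name = "openai/clip-vit-base-patch32"  # one public checkpoint (assumption)
model = CLIPModel.from_pretrained(name)
processor = CLIPProcessor.from_pretrained(name)

image = Image.open("generated.png")        # hypothetical file
caption = "a photo of a dog on the beach"  # hypothetical prompt

inputs = processor(text=[caption], images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    out = model(**inputs)  # output embeddings are L2-normalized

# One common convention: 100 * max(cosine similarity, 0).
cos = (out.image_embeds * out.text_embeds).sum()
print(f"CLIP score: {100 * torch.clamp(cos, min=0).item():.2f}")
```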
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Deepfake Detection Methods",
27
+ "text": "Detection of deepfakes has become an essential area of research due to the increasing sophistication of deep learning algorithms that can generate highly realistic fake images, videos, and audio. As a result, numerous deepfake detection methods have been proposed in recent years, ranging from traditional image and video forensic techniques to advanced deep learning-based approaches. These methods can be broadly categorized into two groups: static and dynamic analysis.\nStatic analysis methods use handcrafted features to distinguish between real and fake images. Examples of static analysis methods include reverse image search, which compares the content of an image to a large database of known images (e.g. [9 ###reference_b9###]), and error level analysis, which detects inconsistencies in the compression levels of an image [19 ###reference_b19###]. Another method is the use of noise patterns and artifacts, which are common in images and videos captured by digital cameras and can be used to identify forgeries. For instance, the sensor pattern noise in images captured by digital cameras can be used to authenticate images and detect tampering attempts [22 ###reference_b22###]. In addition, traditional forensic techniques such as shadow analysis, lighting analysis, and perspective analysis can also be used to identify inconsistencies in the shadows, lighting, and perspectives of images.\nOn the other hand, dynamic analysis methods rely on deep neural networks to analyze the temporal features of video and audio data to detect deepfakes. These methods aim to exploit the fact that deepfakes lack the natural temporal variations and correlations that are present in real videos and audios. For instance, the use of convolutional neural networks (CNNs) has been proposed to detect deepfakes by analyzing the spatial features of images and videos (e.g. [1 ###reference_b1###, 24 ###reference_b24###, 25 ###reference_b25###, 11 ###reference_b11###]). Similarly, recurrent neural networks (RNNs) have been proposed to analyze the temporal features of video and audio data to detect deepfakes [16 ###reference_b16###]. Moreover, Generative Adversarial Networks (GANs) [15 ###reference_b15###] have been used to generate fake images and videos, but can also be used to detect them by identifying inconsistencies in the generator\u2019s output [21 ###reference_b21###].\nOverall, deepfake detection is a challenging problem due to the rapid evolution of deep learning algorithms that can generate more realistic fake content [10 ###reference_b10###]. Thus, a combination of static and dynamic analysis approaches is necessary to achieve effective detection of deepfakes. Additionally, extensive evaluation and comparison of deepfake detection methods are essential to identify their effectiveness and limitations and to guide future research in this area. To read more about this subject, you may want to consult [14 ###reference_b14###, 30 ###reference_b30###, 26 ###reference_b26###, 34 ###reference_b34###] which offer comprehensive reviews on the topic."
28
+ },
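To make one of the static cues above concrete, here is a minimal sketch of error level analysis with Pillow; the resave quality, amplification factor, and file names are illustrative assumptions rather than values prescribed by the cited work.

```python
# Hedged sketch of error level analysis (ELA): re-save a JPEG at a fixed
# quality and amplify the per-pixel difference. Edited or synthesized regions
# often recompress differently from the rest of the image.
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path, quality=90, scale=15):
    original = Image.open(path).convert("RGB")
    original.save("_resaved.jpg", "JPEG", quality=quality)  # temp file
    resaved = Image.open("_resaved.jpg").convert("RGB")
    diff = ImageChops.difference(original, resaved)
    return ImageEnhance.Brightness(diff).enhance(scale)  # amplify residual

error_level_analysis("suspect.jpg").save("suspect_ela.png")  # hypothetical paths
```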
29
+ {
30
+ "section_id": "3",
31
+ "parent_section_id": null,
32
+ "section_name": "Qualitative Failures of Image Generation Models",
33
+ "text": "We compiled a list of qualitative failures by examining images from various sources including social media websites such as Twitter, LinkedIn, Discord, and Reddit333A few of the images used in this work were obtained with the consent of a Reddit user named Kronzky ###reference_###., as well as images from the DiffusionDB dataset [36 ###reference_b36###]444This dataset includes prompts that were used to generate images.. These images have been generated by notable generative models such as DALL-E 2 ###reference_###, Midjourney ###reference_www.midjourney.com/###, StableDiffusion ###reference_stability.ai/###, and Bing Image Creator ###reference_###. Additionally, we analyzed images from websites such as thisxdoesnotexist.com ###reference_thisxdoesnotexist.com###, whichfaceisreal.com ###reference_www.whichfaceisreal.com/###, the Adobe Stock library ###reference_stock.adobe.com/###, and openart.ai ###reference_openart.ai/###. We made sure that the text prompts used to generate images were not intentionally seeking peculiar images. Finally, we manually reviewed the images and filtered out the ones without problems."
34
+ },
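As a hedged sketch of how such prompt-image pairs can be pulled programmatically, the snippet below samples DiffusionDB [36] through the Hugging Face datasets library; the repository id, subset name, and field names follow the dataset's public documentation but should be verified before use.

```python
# Hedged sketch: load a small random DiffusionDB subset and inspect prompts.
from datasets import load_dataset

# "poloclub/diffusiondb" with config "2m_random_1k" is one documented subset;
# recent `datasets` versions may require trust_remote_code=True for this repo.
ds = load_dataset("poloclub/diffusiondb", "2m_random_1k",
                  split="train", trust_remote_code=True)

for i, record in enumerate(ds.select(range(3))):
    print(record["prompt"])                       # the generating text prompt
    record["image"].save(f"diffusiondb_{i}.png")  # PIL image field
```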
35
+ {
36
+ "section_id": "3.1",
37
+ "parent_section_id": "3",
38
+ "section_name": "Human and Animal Body Parts",
39
+ "text": "###figure_2### Faces.\nSince the initial triumphs of GANs, the generation of fake faces has been the most extensively scrutinized category for deep generative models [5 ###reference_b5###]. Faces are comparatively simpler to generate than complex scenes because they are easier to calibrate. In the past, the first generated faces were effortlessly recognizable by humans. However, with the advancement of technology such as StyleGAN [18 ###reference_b18###], the latest examples of generated faces are more challenging to distinguish. Figure 2 ###reference_### illustrates a few faces that were generated with issues. You can evaluate your ability to distinguish between real and computer-generated faces by taking a quiz at whichfaceisreal.com ###reference_www.whichfaceisreal.com/###.\nImage Background.\nWhen creating generated images and deepfakes, issues with the background of the images may arise, particularly in cases where the face is in focus while the surrounding clues are incorrect. The neural network used to generate the images focuses mainly on the face and may not pay as much attention to the surrounding details. This can lead to strange companions or chaotic forms in the background. Additionally, the objects or people next to the primary person in the image may appear unnatural or \u201cmutant\". Figure 3 ###reference_### displays several instances of failures as examples.\n###figure_3### Eyes and Gaze.\nDeep generative models have largely overcome issues with early fake images such as cross-eyed, uncentered or different sized pupils, different colored irises, and non-round pupils, as shown in examples in Figure 4 ###reference_###. Early GANs used to produce pupils that were not circular or elliptical like those found in real human eyes, which can be a clue that an image is fake. Reflections in the eyes can also be used to identify fake images. Other clues include irregularities in pupil shape, although this is not always indicative of a fake image since some diseases can cause such irregularities. See the example shown in the bottom-right panel in Figure 4 ###reference_###.\nUnnatural gaze direction or unrealistic eye movements may be observed in deepfakes, which can indicate that a machine learning algorithm generated or manipulated the image. Please see Figure 5 ###reference_###.\n###figure_4### ###figure_5### Eyeglasses.\nAlgorithms can struggle to create realistic eyeglasses, with frame structures often differing between the left and right sides, or with one side having an ornament and the other not. Sometimes the frame can appear crooked or jagged. The glasses may partially disappear or blend with the head, and they can be asymmetrical. The view through the lens may also be heavily distorted or illogical, and nose pads may be missing or distorted. Please see Figure 6 ###reference_### for some examples.\n###figure_6### Teeth.\nRendering teeth is a difficult task for AI, which often results in odd or asymmetric teeth. When someone\u2019s teeth appear unusual or crooked, there\u2019s a good chance that the image was generated by AI. Semi-regular repeating details like teeth are difficult for models to generate, causing misaligned or distorted teeth. This problem has also been observed in other domains, such as texture synthesis with bricks. Occasionally, an image may display an excessive number of teeth or teeth with abnormal shapes and colors, and in some instances, there may be an insufficient number of incisors. 
Please see Figure 7 ###reference_### for some examples.\n###figure_7### Ear and Earrings.\nEars in AI-generated images may exhibit discrepancies such as differences in size, one ear appearing higher or bigger than the other, or missing or partially missing earrings. Additionally, earrings may be randomly shaped or not match visually. If earrings are asymmetrical or have different features such as one having an attached earlobe while the other doesn\u2019t or one being longer than the other, it\u2019s likely that the image has been generated by AI. Examples of poorly generated ears and earrings are shown in Figure 8 ###reference_###.\n###figure_8### Hair and Whiskers.\nThe style of hair can differ greatly, which also means there is a lot of intricate detail to capture. This makes it one of the most challenging aspects for a model to render accurately. The generated images may contain stray strands of hair in unusual places, or the hair may appear too straight or streaked. Occasionally, the image may resemble acrylic smudges from a palette knife or brush. Another issue may be a strange glow or halo around the hair. In some cases, the model may bunch hair in clumps or create random wisps around the shoulders, while also including thick stray hairs on the forehead. Please see Figure 9 ###reference_###.\n###figure_9### Skin.\nDeepfakes can be deficient in delicate details and subtleties found in genuine images, like skin texture, pores, or fine lines on someone\u2019s face. The skin tone in deepfakes may appear unnatural or inconsistent, such as a person\u2019s face appearing too pale or too red. Additionally, deepfakes may lack the presence of noise or grain which exists in real images, giving a sense of texture and realism. Without the presence of noise or grain, deepfake images may seem excessively clean or artificial.\nSome example failures are shown in Figure 10 ###reference_###.\n###figure_10### Limbs, Hands, and Fingers.\nThe models used for generating deepfakes often fall short when it comes to accurately depicting the intricate details of human extremities. For instance, hands may randomly duplicate, fingers can merge together or there may be too many or too few of them, and third legs may unexpectedly appear while existing limbs may disappear without a trace. Furthermore, limbs may be positioned in unrealistic or impossible poses, or there may be an excess number of them. As a result, deepfakes may exhibit unnatural body language, such as unrealistic gestures or postures that are out of place. In certain instances, models are unable to accurately depict the interaction between objects and body parts (e.g. brushing).\nSee Figures 11 ###reference_### and 12 ###reference_###.\n###figure_11### ###figure_12### Clothing.\nGenerative models may produce distorted clothing with various issues, such as asymmetrical, peculiar, or illogical textures or components such as zippers or collars merging with the skin, and textures abruptly changing or ending. Please refer to Figure 13 ###reference_### for some such failures.\n###figure_13###"
40
+ },
41
+ {
42
+ "section_id": "3.2",
43
+ "parent_section_id": "3",
44
+ "section_name": "Geometry",
45
+ "text": "Generated images may exhibit anomalous or atypical image geometry, with objects appearing to be of an unusual shape or size, in comparison to their expected proportions.\nStraight Lines and Edges.\nAI-generated images may lack the straight lines, seams, and connections found in real-world objects, resulting in wavy, misaligned, and jumpy renderings (e.g. in tiles). Generated images can also exhibit inconsistent or unnatural image edges, which refer to the boundaries between different parts of the image. Further, surfaces, which are typically straight, may look somewhat uneven in generated images. Some samples failures are shown in Figure 14 ###reference_###.\n###figure_14### Perspective.\nModels lack the ability to understand the 3D world, which results in physically impossible situations when objects cross different planes in a scene. These errors are difficult to detect as our brain often auto-corrects them, requiring a conscious investigation of each angle of the object to identify inconsistencies. Generated images can display an unnatural or distorted perspective, where a person\u2019s body appears stretched or compressed unrealistically. They may also have inconsistent or unrealistic camera angles, where a person\u2019s face appears to be viewed from an impossible angle or perspective. Some example failures are shown in Figure 15 ###reference_###.\n###figure_15### Symmetry.\nDue to difficulty managing long-distance dependencies in images, symmetry (reflection, radial, translation, etc) can be challenging for models. For instance, in generated images, eyes may appear heterochromatic and crosseyed, unlike in real life where they tend to point in the same direction and have the same color. Additionally, asymmetry may appear in facial hair, eyeglasses, and the types of collar or fabric used on the left and right sides of clothing.\nModels may face challenges in maintaining symmetry not only in faces but also in other objects and scenes. For instance, two shoes in a pair or wings in an airplane might not be exactly the same. This is a type of reasoning glitch where the model cannot understand that certain elements should be symmetrical. Some example failures are shown in Figures 16 ###reference_### and 17 ###reference_###.\n###figure_16### ###figure_17### Relative Size.\nRelative size is a visual perceptual cue that helps us understand the size of objects in relation to one another. It is a powerful cue because it allows us to estimate the size of objects even when we do not have any absolute size reference in the scene.\nModels, however, fall short in synthesizing objects with objects with sizes proportional to their size in the real world. Some example failures are shown in Figure 18 ###reference_###.\n###figure_18### Other Geometry.\nGenerated images exhibit various geometrical anomalies that may reveal their artificiality. For instance, their depth cues can be inconsistent or unnatural, causing the foreground or background to seem blurry or devoid of detail. Moreover, they often lack parallax, which is the apparent displacement of objects when viewed from different perspectives, resulting in a flat or two-dimensional appearance. Additionally, incorrect or inconsistent motion blur may suggest that certain parts of the image have been manipulated. The absence of occlusion, i.e., the overlapping of objects in the scene, is another telltale sign of generated images, as it can make the image look flat or unrealistic. 
Lastly, generated images may display improper image alignment, with objects seeming misaligned or out of place."
46
+ },
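One rough way to operationalize the straight-line cue above is with OpenCV: genuine architectural edges tend to survive as long Hough segments, while wavy generated seams fragment into short ones. This is a heuristic sketch, not a detector; the thresholds and file name are illustrative assumptions.

```python
# Hedged sketch: count long straight segments as a crude straightness cue.
import cv2
import numpy as np

img = cv2.imread("suspect.png", cv2.IMREAD_GRAYSCALE)  # hypothetical path
edges = cv2.Canny(img, 50, 150)
segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                           minLineLength=100, maxLineGap=5)
n = 0 if segments is None else len(segments)
print(f"{n} long straight segments detected")
# Very few long segments in a scene full of tiles or window frames can hint
# at the wavy, broken lines typical of generated images.
```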
47
+ {
48
+ "section_id": "3.3",
49
+ "parent_section_id": "3",
50
+ "section_name": "Physics",
51
+ "text": "Generated images that violate physics rules exhibit various cues that can give them away as unrealistic or physically impossible. These cues include objects appearing to float in mid-air without support, shadows that are inconsistent with the light source, reflections or refractions that break the laws of optics, objects passing through each other without interaction, and incorrect physics-based simulations such as fluids or cloth that behave in impossible ways. By identifying these cues, it is possible to identify and distinguish realistic images from those that violate the rules of physics.\nReflection.\nAn effective technique for detecting generated images is to examine the lighting and how it interacts with the elements within the image, and how it causes reflections and shadows. Generated images can exhibit artificial reflections that are inconsistent with the natural lighting and environment, such as those in glasses, mirrors, or pupils. The root cause of this issue is that deep generative models lack a proper understanding of reflections. While these models may recognize that an image contains a reflection and typically involves two people (one facing the camera and the other with their back turned), they do not comprehend that the two individuals are, in fact, the same person. Generated images may display other lighting effects that do not match real-world environments, such as lens flares, lens distortion, chromatic aberration and unnatural specular highlights. These effects are frequently observed in genuine photographs due to the physical properties of camera lenses and the way light is refracted through them. Some example failures are shown in Figure 19 ###reference_###.\n###figure_19### AI-generated images sometimes exhibit inconsistencies in the geometry of reflections [13 ###reference_b13###]. Shown in the left panel of Fig. 20 ###reference_### is a photographic image in which lines that connect points on the toy dinosaur in the scene and their reflections in the mirror all converge to a single point (a vanishing point). Reflections in AI-generated images (the right panel in Fig. 20 ###reference_###), however, exhibit physical inconsistencies as can be seen by the lack of a consistent vanishing point.\n###figure_20### Shadow.\nGenerated images might not include shadows, which are typically found in real images and contribute to the impression of depth and authenticity.\nIt is important to observe objects without shadows and those with highlights that appear to originate from a different direction than the rest of the image. Additionally, if the photo was taken outdoors in natural light during the afternoon, the setting sun will produce longer shadows than it would at midday, which can be easily identified by scrutinizing the shadow\u2019s length. However, this method may not be as precise in artificial lighting conditions. Finally, if there are multiple objects or people within the scene, their shadows should be consistent with each other. Some generated images with inconsistent shadows are shown in 21 ###reference_###.\n###figure_21### Objects without Support.\nWhen an object or material appears to be floating in mid-air without any visible means of support, it gives the impression that the object is defying gravity or the laws of physics. In reality, all objects are subject to the force of gravity unless they are held up by some other force. 
When an object appears to be floating, it could be a result of an incorrect rendering or an error in the physics simulation that fails to account for the gravitational force. This type of inconsistency can cause a generated image to look unrealistic or implausible. Some example failures are shown in Figure 22 ###reference_###.\n###figure_22###"
52
+ },
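The reflection-geometry check attributed to [13] can be sketched numerically: in homogeneous coordinates, the line through two image points is their cross product, and two lines intersect at the cross product of the lines; lines joining points to their mirror reflections should share a single vanishing point. The pixel coordinates below are hypothetical hand-marked pairs.

```python
# Hedged sketch of the vanishing-point consistency check for reflections.
import numpy as np

def line(p, q):
    # Homogeneous line through two image points.
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersect(l1, l2):
    # Intersection of two homogeneous lines as an (x, y) point.
    x = np.cross(l1, l2)
    return x[:2] / x[2]  # assumes the lines are not parallel

# Each pair: (point on the object, matching point on its mirror reflection).
pairs = [((120, 80), (340, 150)), ((130, 200), (360, 240)), ((90, 260), (310, 330))]
lines = [line(p, q) for p, q in pairs]

vp = intersect(lines[0], lines[1])
l3 = lines[2] / np.linalg.norm(lines[2][:2])        # normalize (a, b)
residual = abs(l3 @ np.array([vp[0], vp[1], 1.0]))  # point-to-line distance
print(f"vanishing point {vp}, third-line residual {residual:.1f} px")
# A large residual suggests the reflections are physically inconsistent.
```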
53
+ {
54
+ "section_id": "3.4",
55
+ "parent_section_id": "3",
56
+ "section_name": "Semantics and Logic",
57
+ "text": "Images produced by generative models may lack the semantic meaning or contextual relationships present in authentic images. These models tend to focus on the nouns in a given prompt and construct a plausible scene based on them, potentially failing to capture the true relationships between objects. It is crucial to bear in mind that AI lacks an inherent understanding of the world and can only process information in terms of shapes and colors. Complex concepts, such as logical connections and three-dimensional space, are beyond its grasp, resulting in potential difficulties in these areas. For example, when tasked with generating an image of the solar system drawn to scale, a generative model may struggle to maintain the correct planetary order, as demonstrated here ###reference_###.\nSpatial Reasoning.\nNatural scenes are complex and contain a wide range of spatial relationships among objects, such as occlusions, relative distances, and orientations. Capturing these relationships requires the model to have a nuanced understanding of the scene and the objects within it, which can be difficult to achieve without more explicit guidance. Furthermore, some image generation models rely solely on pixel-level reconstruction, without explicitly modeling the underlying semantics or spatial relationships. In these cases, the model may generate images that are visually realistic but lack coherent semantic meaning or accurate spatial relationships among objects. Please see Figures LABEL:fig:reasoning,fig:promptfailure for some examples.\n###figure_23### ###figure_24### ###figure_25### Context and Scene Composition.\nGenerated images can be detected through various inconsistencies such as the background or surroundings not matching the real-world environment, cardinality/counting, missing contextual details, unnatural object placement, and inconsistent image composition. These irregularities may include inconsistencies in order of objects, missing objects or features, objects appearing in the wrong location or orientation, or unnatural arrangement and placement of objects in the image. Please see Figure 25 ###reference_###.\n###figure_26### Other Semantics.\nFigure 26 ###reference_### depicts several additional generated images that exhibit semantic issues. For instance, one image features a person with his head and feet pointing in opposite directions, while another displays a fragmented pizza that does not cohere into a single entity. In yet another image, a blank painting hangs on the wall, creating a confusing and nonsensical composition. From time to time, models encounter issues when creating reverse images. At instances, these models produce highly similar objects or faces in the images (refer to Figure 27 ###reference_###).\n###figure_27### ###figure_28### ###figure_29### ###figure_30###"
58
+ },
59
+ {
60
+ "section_id": "3.5",
61
+ "parent_section_id": "3",
62
+ "section_name": "Text, Noise, and Details",
63
+ "text": "Text.\nGenerating text and logos in images requires the generative model to understand the relationships between the text and the visual content of the image. This can be challenging because the text and image data have different structures and are not directly aligned with each other. Additionally, text can appear in various locations and orientations within an image, and the context of the text may change depending on the surrounding visual content. Furthermore, generating text that accurately describes the visual content of an image requires a deep understanding of the semantics and context of both the text and the image. While some progress has been made in recent years with the development of methods such as image captioning, it is still an active area of research to develop generative models that can effectively generate text in images. Figure 28 ###reference_### displays instances where the text is incomprehensible. In such cases, the letters appear scrambled or duplicated, and the words are spelled incorrectly.\n###figure_31### Noise, Color, and Blur Artifacts.\nDigital distortion in the form of pixelation or imperfect coloring can be present in generated images, particularly around the image edges. Monochrome areas may display semi-regular noise with horizontal or vertical banding, potentially due to the network attempting to replicate cloth textures. Older GANs tend to produce a more noticeable checkerboard noise pattern. Other telltale signs of generated images include inconsistencies in color or tone, oversaturation or undersaturation of colors, and unnatural image noise patterns. See the top row in Figure 29 ###reference_###. Fluorescent bleed, where bright colors bleed onto the hair or face of a person in the image from the background, is also a potential indicator of a generated images (the bottom row in Figure 29 ###reference_###). The human attention system is naturally adept at quickly recognizing these patterns, making them useful tools for identifying generated images.\n###figure_32### Images with Cartoonish Look.\nAI generated images may look cartoonish or may look like a painting. This could be due to several reasons such as inconsistent or unnatural image texture, lack of depth, or focus.\nSome examples are shown in Figure30 ###reference_###.\n###figure_33### Fine-grained Details.\nAI-generated images may contain technical details that are either incorrect or appear as random shapes. For example, furniture legs can be particularly challenging for AI to accurately render, resulting in incorrect numbers of legs or physically impossible configurations. These issues can be attributed to the inherent difficulty of modeling complex objects and the limitations of the AI\u2019s understanding of real-world. Some example failures are shown in Figure 31 ###reference_###.\n###figure_34### Accurately rendering all details in complex scenes or crowd scenes, such as those depicted in Figures 32 ###reference_### and 33 ###reference_###, can be particularly challenging for AI. The complexity of these scenes makes it difficult for the AI to accurately model every detail and can lead to errors in object placement, lighting, perspective, and other features. Despite the challenges, AI technology continues to improve, and advancements are being made in the generation of more realistic and believable large and crowd scenes.\n###figure_35### ###figure_36###"
64
+ },
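A simple frequency-domain check can surface the semi-regular noise described above: periodic checkerboard artifacts show up as regularly spaced peaks in the Fourier magnitude spectrum. This is a hedged illustration rather than a reliable detector; the file name and the 0.4 band threshold are assumptions.

```python
# Hedged sketch: inspect the Fourier spectrum for periodic upsampling noise.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("suspect.png").convert("L"), dtype=np.float64)
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))

# Log-scaled spectrum image for visual inspection of regular peak grids.
log_spec = np.log1p(spectrum)
Image.fromarray((255 * log_spec / log_spec.max()).astype(np.uint8)).save("spectrum.png")

# Crude scalar cue: share of energy in the high-frequency band.
h, w = img.shape
yy, xx = np.ogrid[:h, :w]
r = np.hypot(yy - h / 2, xx - w / 2)
ratio = spectrum[r > 0.4 * min(h, w) / 2].sum() / spectrum.sum()
print(f"high-frequency energy ratio: {ratio:.4f}")
```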
65
+ {
66
+ "section_id": "4",
67
+ "parent_section_id": null,
68
+ "section_name": "Discussion",
69
+ "text": ""
70
+ },
71
+ {
72
+ "section_id": "4.1",
73
+ "parent_section_id": "4",
74
+ "section_name": "Other Cues",
75
+ "text": "In addition to the cues discussed above, there are several other indicators that can be used to identify generated images and deepfakes. One such method involves examining the metadata of an image or conducting a reverse Google search to verify its authenticity. Additionally, common sense can be applied to detect images that are likely to be generated, such as a shark swimming down a street or aliens eating sushi in a Chinese restaurant. Other indications of generated images and deepfakes include lack of motion blur, unnatural bokeh, all objects appearing in focus, and repeated patterns in the image."
76
+ },
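The metadata check mentioned above can be sketched in a few lines of Pillow: camera-origin photos usually carry EXIF tags, while many generated images carry none, or carry generator-specific PNG text chunks instead. File paths are hypothetical.

```python
# Hedged sketch: inspect EXIF metadata and PNG text chunks.
from PIL import Image
from PIL.ExifTags import TAGS

img = Image.open("suspect.jpg")  # hypothetical path
exif = img.getexif()
if not exif:
    print("No EXIF metadata (common for generated or stripped images).")
for tag_id, value in exif.items():
    print(f"{TAGS.get(tag_id, tag_id)}: {value}")

# Some generators embed the prompt in PNG tEXt chunks, exposed via `.text`
# on PNG images (absent on JPEGs, hence the default).
print(getattr(img, "text", {}))
```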
77
+ {
78
+ "section_id": "4.2",
79
+ "parent_section_id": "4",
80
+ "section_name": "Some Challenging Objects",
81
+ "text": "Generative models face particular challenges when it comes to generating images of objects such as clocks, Lego houses, chessboards, carpets, circuit boards555In situations like this, zooming in on the image will highlight the deficiencies and blurred regions., basketballs, glasses of water, dice, diagrams and tables, keyboards, and computer screens. One of the reasons for this is that these types of images contain many repeated patterns, which can be difficult for the model to accurately capture. Several examples of failed attempts to generate these objects can be seen in Figures 34 ###reference_### and 35 ###reference_###. This list of challenging objects can be used to assess and compare the performance of different image generation models.\n###figure_37### ###figure_38###"
82
+ },
83
+ {
84
+ "section_id": "4.3",
85
+ "parent_section_id": "4",
86
+ "section_name": "Memorization and Copyright",
87
+ "text": "As previously mentioned, a method for identifying whether an image is generated or not is through reverse image search. Generative models may memorize images partially or in their entirety, as seen in the examples presented in Figure 36 ###reference_###. This phenomenon has raised concerns regarding copyright infringement, as generated images may include watermarks from the original images. For more information on this issue, please refer to the this link ###reference_-makers-of-stable-diffusion-over-ai-photos/###.\n###figure_39### ###figure_40###"
88
+ },
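A lightweight stand-in for the reverse image search suggested above is a perceptual-hash comparison, sketched here with the third-party imagehash package; the file names are hypothetical, and any distance threshold would need tuning on real data.

```python
# Hedged sketch: flag near-duplicates of known images with perceptual hashes.
import imagehash
from PIL import Image

h1 = imagehash.phash(Image.open("generated.png"))       # hypothetical paths
h2 = imagehash.phash(Image.open("stock_original.jpg"))

distance = h1 - h2  # Hamming distance between the 64-bit hashes
print(f"pHash distance: {distance} (small values suggest memorization)")
```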
89
+ {
90
+ "section_id": "4.4",
91
+ "parent_section_id": "4",
92
+ "section_name": "Failure modes from other studies",
93
+ "text": "Certain image generation techniques may incorporate failure models to provide readers with a more comprehensive understanding of their models\u2019 limitations. For instance, the creators of the Parti image generator [37 ###reference_b37###]666https://parti.research.google/ have presented some examples of such failure cases, which are illustrated in Figure 38 ###reference_###. These failure cases can be categorized into the errors discussed earlier. It is recommended that researchers in this field consider including a discussion of their models\u2019 failure models as a best practice.\n###figure_41### Generative image models have also problems with bias and discrimination, similar to LLMs [7 ###reference_b7###]. People discovered that requesting Google Gemini to generate images of certain historical events or figures led to amusing outcomes. For example, the Founding Fathers, historically known as white slave owners, were depicted as a multicultural group that included people of color (see https://techcrunch.com/2024/02/23/embarrassing-and-wrong-google-admits-it-lost-control-of-image-generating-ai/ ###reference_sing-and-wrong-google-admits-it-lost-control-of-image-generating-ai/###). Please see Figure 39 ###reference_###.\n###figure_42### For additional failures reported by the general public using generative image models, please see https://www.boredpanda.com/ai-fails/ ###reference_###."
94
+ },
95
+ {
96
+ "section_id": "5",
97
+ "parent_section_id": null,
98
+ "section_name": "Conclusion and Future Work",
99
+ "text": "This paper lists several qualitative indicators for identifying generated images and deepfakes. These indicators not only enable us to address the issue of fake images but also underscore the differences between generated and real-world content [7 ###reference_b7###]. Furthermore, they serve as a checklist for evaluating image generation models.\nIt should be noted that as algorithms improve, some of these clues may become obsolete over time. However, this does not mean that these models will not make any of these mistakes in generating images. It may be necessary to use a combination of these indicators to identify generated images, as there is no one-size-fits-all solution.\nImage generation models are becoming increasingly widespread and accessible. However, in the wrong hands, these algorithms can be used to create propaganda and other forms of fake media. In a world rife with fake news [20 ###reference_b20###], we have learned not to believe everything we read. Now, we must also exercise caution when it comes to visual media. The blurring of lines between reality and fiction could transform our cultural landscape from one primarily based on truth to one characterized by artificiality and deception. As we have demonstrated with the set of cues presented here, it is possible to identify fake images. In fact, in an informal investigation, we were able to use some of these indicators to detect fake faces with high accuracy in the quiz available on whichfaceisreal.com ###reference_www.whichfaceisreal.com/###.\nSubsequent research can assess the extent to which these cues contribute to the detection of generated images and deepfakes by conducting behavioral experiments involving human participants.\nAlthough visual inspection can be useful in identifying generated images, it may not be comprehensive enough to detect all types of generated content. Thus, integrating alternative approaches such as machine learning algorithms or forensic analysis can provide a more comprehensive strategy. Moreover, it is vital to stay informed about the latest advancements and techniques in this field, as it is continuously evolving.\nBe aware that certain instances exist where authentic images may appear deceptive. Therefore, caution should be exercised when employing these cues to discern the authenticity of an image. Please see [8 ###reference_b8###].\nIn this study, we focused on still images. However, for videos, additional indicators beyond those outlined here, such as motion and optical flow, as well as the synchronization of lip, face, and head movements over time, can also be significant factors [3 ###reference_b3###]. One can undertake comparable initiatives to investigate indicators for identifying counterfeit audio. Educating individuals on the cues outlined in this paper may aid in combating deepfake proliferation. It would be worthwhile to investigate whether individuals can be effectively trained to become experts in this area."
100
+ }
101
+ ],
102
+ "appendix": [],
103
+ "tables": {
104
+ "1": {
105
+ "table_html": "<figure class=\"ltx_table\" id=\"S2.T1\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S2.T1.1\" style=\"width:433.6pt;height:304.2pt;vertical-align:-0.7pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-78.9pt,55.2pt) scale(0.733208160203717,0.733208160203717) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S2.T1.1.1\">\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S2.T1.1.1.1.1\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.1.1.1.1.1\" style=\"font-size:90%;\">Category</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S2.T1.1.1.1.2\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.1.1.1.2.1\" style=\"font-size:90%;\">Description</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S2.T1.1.1.1.3\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.1.1.1.3.1\" style=\"font-size:90%;\">Examples</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.1.1.2.1\" rowspan=\"2\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.2.1.1\" style=\"font-size:90%;\">Colors</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.2.2\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.2.2.1\" style=\"font-size:90%;\">Ability to generate objects</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.2.3\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.2.3.1\" style=\"font-size:90%;\">\u201cA blue colored dog.\u201d</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.3.1\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.3.1.1\" style=\"font-size:90%;\">with specified colors.</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.3.2\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.3.2.1\" style=\"font-size:90%;\">\u201cA black apple and a green backpack.\u201d</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.1.1.4.1\" rowspan=\"2\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.4.1.1\" style=\"font-size:90%;\">Counting</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.4.2\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.4.2.1\" style=\"font-size:90%;\">Ability to generate specified</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.4.3\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.4.3.1\" style=\"font-size:90%;\">\u201cThree cats and one dog sitting on the grass.\u201d</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.5.1\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.5.1.1\" style=\"font-size:90%;\">number of objects.</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.5.2\" 
style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.5.2.1\" style=\"font-size:90%;\">\u201cFive cars on the street.\u201d</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.1.1.6.1\" rowspan=\"2\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.6.1.1\" style=\"font-size:90%;\">Conflicting</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.6.2\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.6.2.1\" style=\"font-size:90%;\">Ability to generate conflicting</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.6.3\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.6.3.1\" style=\"font-size:90%;\">\u201cA horse riding an astronaut.\u201d</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.7\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.7.1\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.7.1.1\" style=\"font-size:90%;\">interactions b/w objects.</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.7.2\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.7.2.1\" style=\"font-size:90%;\">\u201cA panda making latte art.\u201d</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.1.1.8.1\" rowspan=\"2\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.8.1.1\" style=\"font-size:90%;\">DALL-E <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2304.06470v6#bib.bib29\" title=\"\">29</a>]</cite></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.8.2\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.8.2.1\" style=\"font-size:90%;\">Subset of challenging prompts</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.8.3\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.8.3.1\" style=\"font-size:90%;\">\u201cA triangular purple flower pot.\u201d</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.9\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.9.1\" style=\"padding-left:7.0pt;padding-right:7.0pt;\">\n<span class=\"ltx_text\" id=\"S2.T1.1.1.9.1.1\" style=\"font-size:90%;\">from </span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S2.T1.1.1.9.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2304.06470v6#bib.bib29\" title=\"\">29</a><span class=\"ltx_text\" id=\"S2.T1.1.1.9.1.3.2\" style=\"font-size:90%;\">]</span></cite><span class=\"ltx_text\" id=\"S2.T1.1.1.9.1.4\" style=\"font-size:90%;\">.</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.9.2\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.9.2.1\" style=\"font-size:90%;\">\u201cA cross-section view of a brain.\u201d</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.10\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.1.1.10.1\" rowspan=\"2\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.10.1.1\" style=\"font-size:90%;\">Description</span></td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.10.2\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.10.2.1\" style=\"font-size:90%;\">Ability to understand complex and long</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.10.3\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.10.3.1\" style=\"font-size:90%;\">\u201cA small vessel propelled on water by oars, sails, or an engine.\u201d</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.11\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.11.1\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.11.1.1\" style=\"font-size:90%;\">text prompts describing objects.</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.11.2\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.11.2.1\" style=\"font-size:90%;\">\u201cA mechanical or electrical device for measuring time.\u201d</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.12\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.1.1.12.1\" rowspan=\"2\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.12.1.1\" style=\"font-size:90%;\">Marcus et al. <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2304.06470v6#bib.bib23\" title=\"\">23</a>]</cite></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.12.2\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.12.2.1\" style=\"font-size:90%;\">Set of challenging prompts</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.12.3\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.12.3.1\" style=\"font-size:90%;\">\u201cA pear cut into seven pieces arranged in a ring.\u201d</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.13\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.13.1\" style=\"padding-left:7.0pt;padding-right:7.0pt;\">\n<span class=\"ltx_text\" id=\"S2.T1.1.1.13.1.1\" style=\"font-size:90%;\">from </span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S2.T1.1.1.13.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2304.06470v6#bib.bib23\" title=\"\">23</a><span class=\"ltx_text\" id=\"S2.T1.1.1.13.1.3.2\" style=\"font-size:90%;\">]</span></cite><span class=\"ltx_text\" id=\"S2.T1.1.1.13.1.4\" style=\"font-size:90%;\">.</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.13.2\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.13.2.1\" style=\"font-size:90%;\">\u201cPaying for a quarter-sized pizza with a pizza-sized quarter.\u201d</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.14\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.1.1.14.1\" rowspan=\"2\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.14.1.1\" style=\"font-size:90%;\">Misspellings</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.14.2\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.14.2.1\" style=\"font-size:90%;\">Ability to understand</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.14.3\" 
style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.14.3.1\" style=\"font-size:90%;\">\u201cRbefraigerator.\u201d</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.15\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.15.1\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.15.1.1\" style=\"font-size:90%;\">misspelled prompts.</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.15.2\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.15.2.1\" style=\"font-size:90%;\">\u201cTcennis rpacket.\u201d</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.16\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.1.1.16.1\" rowspan=\"2\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.16.1.1\" style=\"font-size:90%;\">Positional</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.16.2\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.16.2.1\" style=\"font-size:90%;\">Ability to generate objects with</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.16.3\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.16.3.1\" style=\"font-size:90%;\">\u201cA car on the left of a bus.\u201d</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.17\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.17.1\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.17.1.1\" style=\"font-size:90%;\">specified spatial positioning.</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.17.2\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.17.2.1\" style=\"font-size:90%;\">\u201cA stop sign on the right of a refrigerator.\u201d</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.18\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.1.1.18.1\" rowspan=\"2\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.18.1.1\" style=\"font-size:90%;\">Rare Words</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.18.2\" rowspan=\"2\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.18.2.1\" style=\"font-size:90%;\">Ability to understand rare words<span class=\"ltx_note ltx_role_footnote\" id=\"footnote1\"><sup class=\"ltx_note_mark\">1</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">1</sup><span class=\"ltx_tag ltx_tag_note\"><span class=\"ltx_text\" id=\"footnote1.1.1.1\" style=\"font-size:111%;\">1</span></span><span class=\"ltx_text\" id=\"footnote1.5\" style=\"font-size:111%;\">https://www.merriam-webster.com/topics/obscure-words</span></span></span></span>.</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.18.3\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.18.3.1\" style=\"font-size:90%;\">\u201cArtophagous.\u201d</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.19\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.19.1\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.19.1.1\" style=\"font-size:90%;\">\u201cOctothorpe.\u201d</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.20\">\n<td class=\"ltx_td 
ltx_align_left ltx_border_t\" id=\"S2.T1.1.1.20.1\" rowspan=\"2\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.20.1.1\" style=\"font-size:90%;\">Reddit</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.20.2\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.20.2.1\" style=\"font-size:90%;\">Set of challenging prompts from</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.20.3\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.20.3.1\" style=\"font-size:90%;\">\u201cA yellow and black bus cruising through the rainforest.\u201d</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.21\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.21.1\" style=\"padding-left:7.0pt;padding-right:7.0pt;\">\n<span class=\"ltx_text\" id=\"S2.T1.1.1.21.1.1\" style=\"font-size:90%;\">DALLE-2 Reddit</span><span class=\"ltx_note ltx_role_footnote\" id=\"footnote2\"><sup class=\"ltx_note_mark\">2</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">2</sup><span class=\"ltx_tag ltx_tag_note\">2</span>https://www.reddit.com/r/dalle2/</span></span></span><span class=\"ltx_text\" id=\"S2.T1.1.1.21.1.2\" style=\"font-size:90%;\">.</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.21.2\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.21.2.1\" style=\"font-size:90%;\">\u201cA medieval painting of the wifi not working.\u201d</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.22\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_t\" id=\"S2.T1.1.1.22.1\" rowspan=\"2\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.22.1.1\" style=\"font-size:90%;\">Text</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S2.T1.1.1.22.2\" rowspan=\"2\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.22.2.1\" style=\"font-size:90%;\">Ability to generate quoted text.</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.22.3\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.22.3.1\" style=\"font-size:90%;\">\u201cA storefront with \u2019Deep Learning\u2019 written on it.\u201d</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.23\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S2.T1.1.1.23.1\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.23.1.1\" style=\"font-size:90%;\">\u201cA sign that says \u2019Text to Image\u2019.\u201d</span></td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Description and examples of the 11 categories in DrawBench, compiled from\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2304.06470v6#bib.bib31\" title=\"\">31</a>]</cite>.</figcaption>\n</figure>",
106
+ "capture": "Table 1: Description and examples of the 11 categories in DrawBench, compiled from\u00a0[31]."
107
+ }
108
+ },
109
+ "image_paths": {
110
+ "1": {
111
+ "figure_path": "2304.06470v6_figure_1.png",
112
+ "caption": "Figure 1: The Fishmarket, Dieppe, 1902 - Camille Pissarro. When observed more closely, it becomes apparent that the faces in the image lack clarity and numerous details are either incorrect or absent, similar to fake images. Although such images may appear authentic at first glance, scrutinizing them thoroughly is crucial to avoid overlooking errors. It is advisable to conduct a detailed examination of each object within the image by zooming in and analyzing its shape, features, location, and interaction with other objects. This approach allows for a more accurate assessment of the image\u2019s authenticity and being free from errors.",
113
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/teaser.jpg"
114
+ },
115
+ "2": {
116
+ "figure_path": "2304.06470v6_figure_2.png",
117
+ "caption": "Figure 2: Examples of poorly generated faces.",
118
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/face.jpg"
119
+ },
120
+ "3": {
121
+ "figure_path": "2304.06470v6_figure_3.png",
122
+ "caption": "Figure 3: Fake images can be exposed through background cues.",
123
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/bg.jpg"
124
+ },
125
+ "4": {
126
+ "figure_path": "2304.06470v6_figure_4.png",
127
+ "caption": "Figure 4: Here are some instances of eyes that were generated poorly. The eye in the bottom right corner is an actual photograph of a patient who has an irregularly shaped pupil. You can refer to this link for more details. This case represents a unique manifestation of a condition known as \u201ccat\u2019s eye Adie-like pupil,\" which is considered a warning sign for ICE syndrome.",
128
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/eye.jpg"
129
+ },
130
+ "5": {
131
+ "figure_path": "2304.06470v6_figure_5.png",
132
+ "caption": "Figure 5: Here are some examples of images where the gaze direction is problematic. In these images, one eye appears to be looking in a different direction compared to the other, similar to a medical condition called Strabismus in the real world. You can check out https://en.wikipedia.org/wiki/Strabismus for additional information on this topic.",
133
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/gaze.jpg"
134
+ },
135
+ "6": {
136
+ "figure_path": "2304.06470v6_figure_6.png",
137
+ "caption": "Figure 6: Some samples of generated eyeglasses with poor quality.",
138
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/glasses.png"
139
+ },
140
+ "7": {
141
+ "figure_path": "2304.06470v6_figure_7.png",
142
+ "caption": "Figure 7: Examples of poorly generated teeth.",
143
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/teeth.jpg"
144
+ },
145
+ "8": {
146
+ "figure_path": "2304.06470v6_figure_8.png",
147
+ "caption": "Figure 8: Clues that can reveal fake ears, here through earrings.",
148
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/ear.jpg"
149
+ },
150
+ "9": {
151
+ "figure_path": "2304.06470v6_figure_9.png",
152
+ "caption": "Figure 9: Examples of poorly generated hair.",
153
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/hair.jpg"
154
+ },
155
+ "10": {
156
+ "figure_path": "2304.06470v6_figure_10.png",
157
+ "caption": "Figure 10: Examples of poorly generated skin, absolutely perfect skin with no pores.",
158
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/skin.jpg"
159
+ },
160
+ "11": {
161
+ "figure_path": "2304.06470v6_figure_11.png",
162
+ "caption": "Figure 11: Examples of images with poorly generated limbs and distorted body.",
163
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/limb.jpg"
164
+ },
165
+ "12": {
166
+ "figure_path": "2304.06470v6_figure_12.png",
167
+ "caption": "Figure 12: Issues with AI-generated fingers.",
168
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/finger.jpg"
169
+ },
170
+ "13": {
171
+ "figure_path": "2304.06470v6_figure_13.png",
172
+ "caption": "Figure 13: Generating realistic clothing is a challenge for generative models.",
173
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/clothing.jpg"
174
+ },
175
+ "14": {
176
+ "figure_path": "2304.06470v6_figure_14.png",
177
+ "caption": "Figure 14: Examples of lines, edges, and surfaces that are generated poorly by AI.",
178
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/edges.png"
179
+ },
180
+ "15": {
181
+ "figure_path": "2304.06470v6_figure_15.png",
182
+ "caption": "Figure 15: Examples of generated images that exhibit issues with perspective.",
183
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/perspective.png"
184
+ },
185
+ "16": {
186
+ "figure_path": "2304.06470v6_figure_16.png",
187
+ "caption": "Figure 16: Examples of generated images that display inconsistent symmetry.",
188
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/symmetry2.jpg"
189
+ },
190
+ "17": {
191
+ "figure_path": "2304.06470v6_figure_17.png",
192
+ "caption": "Figure 17: Additional examples of generated images that exhibit inconsistent symmetry.",
193
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/symmetry1.jpg"
194
+ },
195
+ "18": {
196
+ "figure_path": "2304.06470v6_figure_18.png",
197
+ "caption": "Figure 18: Examples of images where there is a violation of relative size.",
198
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/size.jpg"
199
+ },
200
+ "19": {
201
+ "figure_path": "2304.06470v6_figure_19.png",
202
+ "caption": "Figure 19: Generated images with inconsistent reflections.",
203
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/reflection.jpg"
204
+ },
205
+ "20": {
206
+ "figure_path": "2304.06470v6_figure_20.png",
207
+ "caption": "Figure 20: Consistent and inconsistent reflections in real (left) vs. generated images.",
208
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/geometryreflection.jpeg"
209
+ },
210
+ "21": {
211
+ "figure_path": "2304.06470v6_figure_21.png",
212
+ "caption": "Figure 21: Generated images with inconsistent shadows.",
213
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/shadow.jpg"
214
+ },
215
+ "22": {
216
+ "figure_path": "2304.06470v6_figure_22.png",
217
+ "caption": "Figure 22: Generated images where some objects lack visible physical support. Some objects are suspended in mid-air without any explanation or justification. This lack of physical support could result from a failure to properly simulate or model the forces acting on the objects in the scene.",
218
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/semantic3.png"
219
+ },
220
+ "23": {
221
+ "figure_path": "2304.06470v6_figure_23.png",
222
+ "caption": "Figure 23: Samples of spatial reasoning from [23]. Images are generated by DALL-E 2 for the following text prompts for columns from left to right: \u201ca red basketball with flowers on it, in front of blue one with a similar pattern\", \u201ca red ball on top of a blue pyramid with the pyramid behind a car that is above a toaster\", \u201ca pear cut into seven pieces arranged in a ring, \u201cIn late afternoon in January in New England, a man stands in the shadow of a maple tree\", and \u201cAn old man is talking to his parents\".",
223
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/reasoning.jpg"
224
+ },
225
+ "24(a)": {
226
+ "figure_path": "2304.06470v6_figure_24(a).png",
227
+ "caption": "Figure 24: Examples for which generative models do not understand the propmpt properly. Left image is generated using ChatGPT which is using DALL-E 3 in the background. Similar phenomenon was observed using Bing image creator. Right image is generated by Google Gemini.",
228
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/elephant.jpg"
229
+ },
230
+ "24(b)": {
231
+ "figure_path": "2304.06470v6_figure_24(b).png",
232
+ "caption": "Figure 24: Examples for which generative models do not understand the propmpt properly. Left image is generated using ChatGPT which is using DALL-E 3 in the background. Similar phenomenon was observed using Bing image creator. Right image is generated by Google Gemini.",
233
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/no_elephant_gemini.jpg"
234
+ },
235
+ "25": {
236
+ "figure_path": "2304.06470v6_figure_25.png",
237
+ "caption": "Figure 25: Generated images with problems with context and scene composition.",
238
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/context.png"
239
+ },
240
+ "26": {
241
+ "figure_path": "2304.06470v6_figure_26.png",
242
+ "caption": "Figure 26: Additional generated images that exhibit semantic issues.",
243
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/semantic2.jpg"
244
+ },
245
+ "27(a)": {
246
+ "figure_path": "2304.06470v6_figure_27(a).png",
247
+ "caption": "Figure 27: Further images with semantic problems, top: problem with generating inverse images, bottom: sometimes models generate similar objects of faces in the images.",
248
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/inverse1.jpeg"
249
+ },
250
+ "27(b)": {
251
+ "figure_path": "2304.06470v6_figure_27(b).png",
252
+ "caption": "Figure 27: Further images with semantic problems, top: problem with generating inverse images, bottom: sometimes models generate similar objects of faces in the images.",
253
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/inverse2.jpeg"
254
+ },
255
+ "27(c)": {
256
+ "figure_path": "2304.06470v6_figure_27(c).png",
257
+ "caption": "Figure 27: Further images with semantic problems, top: problem with generating inverse images, bottom: sometimes models generate similar objects of faces in the images.",
258
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/repetition1.jpeg"
259
+ },
260
+ "28": {
261
+ "figure_path": "2304.06470v6_figure_28.png",
262
+ "caption": "Figure 28: Generative images that exhibit issues or inconsistencies with the text.",
263
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/text.png"
264
+ },
265
+ "29": {
266
+ "figure_path": "2304.06470v6_figure_29.png",
267
+ "caption": "Figure 29: Top row: problems with color and noise in generated images. Bottom row: fluorescent colors sometimes bleed in from background onto the hair or face.",
268
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/noiseall.jpg"
269
+ },
270
+ "30": {
271
+ "figure_path": "2304.06470v6_figure_30.png",
272
+ "caption": "Figure 30: Some generated images that look cartoonish or look like paintings.",
273
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/cartoon.jpg"
274
+ },
275
+ "31": {
276
+ "figure_path": "2304.06470v6_figure_31.png",
277
+ "caption": "Figure 31: Generated images with flawed details.",
278
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/details2.jpg"
279
+ },
280
+ "32": {
281
+ "figure_path": "2304.06470v6_figure_32.png",
282
+ "caption": "Figure 32: Example failures of generated complex scenes. Achieving accurate and detailed rendering in these types of images is particularly difficult due to the large number of objects and the intricate relationships between them.",
283
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/largescene.jpg"
284
+ },
285
+ "33": {
286
+ "figure_path": "2304.06470v6_figure_33.png",
287
+ "caption": "Figure 33: Generated crowd scenes with issues.",
288
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/crowd.jpg"
289
+ },
290
+ "34": {
291
+ "figure_path": "2304.06470v6_figure_34.png",
292
+ "caption": "Figure 34: Some object that are difficult for models to generate.",
293
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/objectsx.jpg"
294
+ },
295
+ "35": {
296
+ "figure_path": "2304.06470v6_figure_35.png",
297
+ "caption": "Figure 35: Additional challenging objects for models to generate.",
298
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/objects4.jpg"
299
+ },
300
+ "36": {
301
+ "figure_path": "2304.06470v6_figure_36.png",
302
+ "caption": "Figure 36: The images on the left side of each pair are generated by StableDiffusion. One pair shows an oil painting of American Gothic by Hieronymus Bosch, while the other pair depicts The Ghosts of Hokusai.",
303
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/copy.jpg"
304
+ },
305
+ "37": {
306
+ "figure_path": "2304.06470v6_figure_37.png",
307
+ "caption": "Figure 37: Images that violate copyright generated by StableDiffusion.",
308
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/copyright.jpg"
309
+ },
310
+ "38": {
311
+ "figure_path": "2304.06470v6_figure_38.png",
312
+ "caption": "Figure 38: Sample failure of the Parti image generation model. Please refer to here to see high resolution images.",
313
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/parti.jpg"
314
+ },
315
+ "39": {
316
+ "figure_path": "2304.06470v6_figure_39.png",
317
+ "caption": "Figure 39: An example of bias failure, generated by Google Gemini.",
318
+ "url": "http://arxiv.org/html/2304.06470v6/extracted/5679695/Imgs/gemini-founding-fathers.jpg"
319
+ }
320
+ },
321
+ "validation": true,
322
+ "references": [
323
+ {
324
+ "1": {
325
+ "title": "Mesonet: a compact facial video forgery detection network.",
326
+ "author": "Darius Afchar, Vincent Nozick, Junichi Yamagishi, and Isao Echizen.",
327
+ "venue": "In 2018 IEEE international workshop on information forensics and\nsecurity (WIFS), pages 1\u20137. IEEE, 2018.",
328
+ "url": null
329
+ }
330
+ },
331
+ {
332
+ "2": {
333
+ "title": "Large scale qualitative evaluation of generative image model outputs.",
334
+ "author": "Yannick Assogba, Adam Pearce, and Madison Elliott.",
335
+ "venue": "arXiv preprint arXiv:2301.04518, 2023.",
336
+ "url": null
337
+ }
338
+ },
339
+ {
340
+ "3": {
341
+ "title": "Protecting world leaders against deep fakes using facial, gestural,\nand vocal mannerisms.",
342
+ "author": "Maty\u00e1\u0161 Boh\u00e1\u010dek and Hany Farid.",
343
+ "venue": "Proceedings of the National Academy of Sciences,\n119(48):e2216035119, 2022.",
344
+ "url": null
345
+ }
346
+ },
347
+ {
348
+ "4": {
349
+ "title": "Pros and cons of gan evaluation measures.",
350
+ "author": "Ali Borji.",
351
+ "venue": "Computer Vision and Image Understanding, 179:41\u201365, 2019.",
352
+ "url": null
353
+ }
354
+ },
355
+ {
356
+ "5": {
357
+ "title": "Generated faces in the wild: Quantitative comparison of stable\ndiffusion, midjourney and dall-e 2.",
358
+ "author": "Ali Borji.",
359
+ "venue": "arXiv preprint arXiv:2210.00586, 2022.",
360
+ "url": null
361
+ }
362
+ },
363
+ {
364
+ "6": {
365
+ "title": "Pros and cons of gan evaluation measures: New developments.",
366
+ "author": "Ali Borji.",
367
+ "venue": "Computer Vision and Image Understanding, 215:103329, 2022.",
368
+ "url": null
369
+ }
370
+ },
371
+ {
372
+ "7": {
373
+ "title": "A categorical archive of chatgpt failures.",
374
+ "author": "Ali Borji.",
375
+ "venue": "arXiv preprint arXiv:2302.03494, 2023.",
376
+ "url": null
377
+ }
378
+ },
379
+ {
380
+ "8": {
381
+ "title": "Florida: Fake-looking real images dataset.",
382
+ "author": "Ali Borji.",
383
+ "venue": "arXiv preprint arXiv:2311.10931, 2023.",
384
+ "url": null
385
+ }
386
+ },
387
+ {
388
+ "9": {
389
+ "title": "Content-based image retrieval for digital forensics.",
390
+ "author": "Yixin Chen, Vassil Roussev, G Richard, and Yun Gao.",
391
+ "venue": "In Advances in Digital Forensics: IFIP International Conference\non Digital Forensics, National Center for Forensic Science, Orlando, Florida,\nFebruary 13\u201316, 2005 1, pages 271\u2013282. Springer, 2005.",
392
+ "url": null
393
+ }
394
+ },
395
+ {
396
+ "10": {
397
+ "title": "Deep fakes: A looming challenge for privacy, democracy, and national\nsecurity.",
398
+ "author": "Bobby Chesney and Danielle Citron.",
399
+ "venue": "Calif. L. Rev., 107:1753, 2019.",
400
+ "url": null
401
+ }
402
+ },
403
+ {
404
+ "11": {
405
+ "title": "Forensictransfer: Weakly-supervised domain adaptation for forgery\ndetection.",
406
+ "author": "Davide Cozzolino, Justus Thies, Andreas R\u00f6ssler, Christian Riess, Matthias\nNie\u00dfner, and Luisa Verdoliva.",
407
+ "venue": "arXiv preprint arXiv:1812.02510, 2018.",
408
+ "url": null
409
+ }
410
+ },
411
+ {
412
+ "12": {
413
+ "title": "Beyond detection: Visual realism assessment of deepfakes.",
414
+ "author": "Luka Dragar, Peter Peer, Vitomir \u0160truc, and Borut Batagelj.",
415
+ "venue": "arXiv preprint arXiv:2306.05985, 2023.",
416
+ "url": null
417
+ }
418
+ },
419
+ {
420
+ "13": {
421
+ "title": "Perspective (in) consistency of paint by text.",
422
+ "author": "Hany Farid.",
423
+ "venue": "arXiv preprint arXiv:2206.14617, 2022.",
424
+ "url": null
425
+ }
426
+ },
427
+ {
428
+ "14": {
429
+ "title": "Digital image forensics.",
430
+ "author": "Jessica Fridrich.",
431
+ "venue": "IEEE Signal Processing Magazine, 26(2):26\u201337, 2009.",
432
+ "url": null
433
+ }
434
+ },
435
+ {
436
+ "15": {
437
+ "title": "Generative adversarial networks.",
438
+ "author": "Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley,\nSherjil Ozair, Aaron Courville, and Yoshua Bengio.",
439
+ "venue": "Communications of the ACM, 63(11):139\u2013144, 2020.",
440
+ "url": null
441
+ }
442
+ },
443
+ {
444
+ "16": {
445
+ "title": "Deepfake video detection using recurrent neural networks.",
446
+ "author": "David G\u00fcera and Edward J Delp.",
447
+ "venue": "In 2018 15th IEEE international conference on advanced video and\nsignal based surveillance (AVSS), pages 1\u20136. IEEE, 2018.",
448
+ "url": null
449
+ }
450
+ },
451
+ {
452
+ "17": {
453
+ "title": "Gans trained by a two time-scale update rule converge to a local nash\nequilibrium.",
454
+ "author": "Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp\nHochreiter.",
455
+ "venue": "Advances in neural information processing systems, 30, 2017.",
456
+ "url": null
457
+ }
458
+ },
459
+ {
460
+ "18": {
461
+ "title": "Analyzing and improving the image quality of stylegan.",
462
+ "author": "Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and\nTimo Aila.",
463
+ "venue": "In Proceedings of the IEEE/CVF conference on computer vision and\npattern recognition, pages 8110\u20138119, 2020.",
464
+ "url": null
465
+ }
466
+ },
467
+ {
468
+ "19": {
469
+ "title": "Digital image authentication from jpeg headers.",
470
+ "author": "Eric Kee, Micah K Johnson, and Hany Farid.",
471
+ "venue": "IEEE transactions on information forensics and security,\n6(3):1066\u20131075, 2011.",
472
+ "url": null
473
+ }
474
+ },
475
+ {
476
+ "20": {
477
+ "title": "The science of fake news.",
478
+ "author": "David MJ Lazer, Matthew A Baum, Yochai Benkler, Adam J Berinsky, Kelly M\nGreenhill, Filippo Menczer, Miriam J Metzger, Brendan Nyhan, Gordon\nPennycook, David Rothschild, et al.",
479
+ "venue": "Science, 359(6380):1094\u20131096, 2018.",
480
+ "url": null
481
+ }
482
+ },
483
+ {
484
+ "21": {
485
+ "title": "Exposing deepfake videos by detecting face warping artifacts.",
486
+ "author": "Yuezun Li and Siwei Lyu.",
487
+ "venue": "arXiv preprint arXiv:1811.00656, 2018.",
488
+ "url": null
489
+ }
490
+ },
491
+ {
492
+ "22": {
493
+ "title": "Digital camera identification from sensor pattern noise.",
494
+ "author": "Jan Lukas, Jessica Fridrich, and Miroslav Goljan.",
495
+ "venue": "IEEE Transactions on Information Forensics and Security,\n1(2):205\u2013214, 2006.",
496
+ "url": null
497
+ }
498
+ },
499
+ {
500
+ "23": {
501
+ "title": "A very preliminary analysis of dall-e 2.",
502
+ "author": "Gary Marcus, Ernest Davis, and Scott Aaronson.",
503
+ "venue": "arXiv preprint arXiv:2204.13807, 2022.",
504
+ "url": null
505
+ }
506
+ },
507
+ {
508
+ "24": {
509
+ "title": "Fake faces identification via convolutional neural network.",
510
+ "author": "Huaxiao Mo, Bolin Chen, and Weiqi Luo.",
511
+ "venue": "In Proceedings of the 6th ACM workshop on information hiding and\nmultimedia security, pages 43\u201347, 2018.",
512
+ "url": null
513
+ }
514
+ },
515
+ {
516
+ "25": {
517
+ "title": "Detecting gan generated fake images using co-occurrence matrices.",
518
+ "author": "Lakshmanan Nataraj, Tajuddin Manhar Mohammed, Shivkumar Chandrasekaran, Arjuna\nFlenner, Jawadul H Bappy, Amit K Roy-Chowdhury, and BS Manjunath.",
519
+ "venue": "arXiv preprint arXiv:1903.06836, 2019.",
520
+ "url": null
521
+ }
522
+ },
523
+ {
524
+ "26": {
525
+ "title": "Deep learning for deepfakes creation and detection: A survey.",
526
+ "author": "Thanh Thi Nguyen, Quoc Viet Hung Nguyen, Dung Tien Nguyen, Duc Thanh Nguyen,\nThien Huynh-The, Saeid Nahavandi, Thanh Tam Nguyen, Quoc-Viet Pham, and\nCuong M Nguyen.",
527
+ "venue": "Computer Vision and Image Understanding, 223:103525, 2022.",
528
+ "url": null
529
+ }
530
+ },
531
+ {
532
+ "27": {
533
+ "title": "Learning transferable visual models from natural language\nsupervision.",
534
+ "author": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh,\nSandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark,\net al.",
535
+ "venue": "In International conference on machine learning, pages\n8748\u20138763. PMLR, 2021.",
536
+ "url": null
537
+ }
538
+ },
539
+ {
540
+ "28": {
541
+ "title": "Hierarchical text-conditional image generation with clip latents.",
542
+ "author": "Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen.",
543
+ "venue": "arXiv preprint arXiv:2204.06125, 2022.",
544
+ "url": null
545
+ }
546
+ },
547
+ {
548
+ "29": {
549
+ "title": "Zero-shot text-to-image generation.",
550
+ "author": "Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec\nRadford, Mark Chen, and Ilya Sutskever.",
551
+ "venue": "In International Conference on Machine Learning, pages\n8821\u20138831. PMLR, 2021.",
552
+ "url": null
553
+ }
554
+ },
555
+ {
556
+ "30": {
557
+ "title": "Digital image forensics: a booklet for beginners.",
558
+ "author": "Judith A Redi, Wiem Taktak, and Jean-Luc Dugelay.",
559
+ "venue": "Multimedia Tools and Applications, 51:133\u2013162, 2011.",
560
+ "url": null
561
+ }
562
+ },
563
+ {
564
+ "31": {
565
+ "title": "Photorealistic text-to-image diffusion models with deep language\nunderstanding.",
566
+ "author": "Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L\nDenton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim\nSalimans, et al.",
567
+ "venue": "Advances in Neural Information Processing Systems,\n35:36479\u201336494, 2022.",
568
+ "url": null
569
+ }
570
+ },
571
+ {
572
+ "32": {
573
+ "title": "Assessing generative models via precision and recall.",
574
+ "author": "Mehdi SM Sajjadi, Olivier Bachem, Mario Lucic, Olivier Bousquet, and Sylvain\nGelly.",
575
+ "venue": "Advances in neural information processing systems, 31, 2018.",
576
+ "url": null
577
+ }
578
+ },
579
+ {
580
+ "33": {
581
+ "title": "Improved techniques for training gans.",
582
+ "author": "Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and\nXi Chen.",
583
+ "venue": "Advances in neural information processing systems, 29, 2016.",
584
+ "url": null
585
+ }
586
+ },
587
+ {
588
+ "34": {
589
+ "title": "Media forensics and deepfakes: an overview.",
590
+ "author": "Luisa Verdoliva.",
591
+ "venue": "IEEE Journal of Selected Topics in Signal Processing,\n14(5):910\u2013932, 2020.",
592
+ "url": null
593
+ }
594
+ },
595
+ {
596
+ "35": {
597
+ "title": "Cnn-generated images are surprisingly easy to spot\u2026 for now.",
598
+ "author": "Sheng-Yu Wang, Oliver Wang, Richard Zhang, Andrew Owens, and Alexei A Efros.",
599
+ "venue": "In Proceedings of the IEEE/CVF conference on computer vision and\npattern recognition, pages 8695\u20138704, 2020.",
600
+ "url": null
601
+ }
602
+ },
603
+ {
604
+ "36": {
605
+ "title": "Diffusiondb: A large-scale prompt gallery dataset for text-to-image\ngenerative models.",
606
+ "author": "Zijie J Wang, Evan Montoya, David Munechika, Haoyang Yang, Benjamin Hoover, and\nDuen Horng Chau.",
607
+ "venue": "arXiv preprint arXiv:2210.14896, 2022.",
608
+ "url": null
609
+ }
610
+ },
611
+ {
612
+ "37": {
613
+ "title": "Scaling autoregressive models for content-rich text-to-image\ngeneration.",
614
+ "author": "Jiahui Yu, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang,\nVijay Vasudevan, Alexander Ku, Yinfei Yang, Burcu Karagol Ayan, et al.",
615
+ "venue": "arXiv preprint arXiv:2206.10789, 2022.",
616
+ "url": null
617
+ }
618
+ },
619
+ {
620
+ "38": {
621
+ "title": "Statistics of deep generated images.",
622
+ "author": "Yu Zeng, Huchuan Lu, and Ali Borji.",
623
+ "venue": "arXiv preprint arXiv:1708.02688, 2017.",
624
+ "url": null
625
+ }
626
+ }
627
+ ],
628
+ "url": "http://arxiv.org/html/2304.06470v6"
629
+ }
20240620/2305.04694v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240620/2305.13582v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240620/2306.05486v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240620/2306.09293v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240620/2307.01927v3.json ADDED
@@ -0,0 +1,149 @@
 
1
+ {
2
+ "title": "Safe Connectivity Maintenance of Underactuated Multi-Agent Networks in Dynamic Oceanic Environments",
3
+ "abstract": "Autonomous multi-agent systems are increasingly being deployed in environments where winds and ocean currents have a significant influence. Recent work has developed control policies for single agents that leverage flows to achieve their objectives in dynamic environments. However, in multi-agent systems, these flows can cause agents to collide or drift apart and lose direct inter-agent communications, especially when agents have low propulsion capabilities. To address these challenges, we propose a hierarchical multi-agent control approach that allows arbitrary single-agent performance policies that are unaware of other agents to be used in multi-agent systems while ensuring safe operation.\nWe first develop a safety controller using potential functions, solely dedicated to avoiding collisions and maintaining inter-agent communication.\nNext, we design a low-interference safe interaction (LISIC) policy that trades off the performance policy and the safety control to ensure safe and performant operation. Specifically, when the agents are at an appropriate distance, LISIC prioritizes the performance policy while smoothly increasing the safety controller when necessary. We prove that under mild assumptions on the flows experienced by the agents, our approach can guarantee safety. Additionally, we demonstrate the effectiveness of our method in realistic settings through an extensive empirical analysis with simulations of fleets of underactuated autonomous surface vehicles operating in dynamic ocean currents where these assumptions do not always hold.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Autonomous multi-agent systems, from drones to balloons and ocean surface vessels, are increasingly being explored for various applications, including inspection, collecting data, or scaling ocean aquaculture [1 ###reference_b1###, 2 ###reference_b2###]. In many applications, the agents communicate with each other for various purposes: to achieve a joint objective, to ensure internet coverage [3 ###reference_b3###], or to share information amongst each other to improve operations. Local communication often relies on limited-range systems, e.g., sonar or radar, requiring agents to stay close to each other for network connectivity (see Fig. 1 ###reference_###).\nWhen a robotic system operates in the oceans and air, it is exposed to winds and currents. Most control approaches consider these as disturbances for which an overactuated control needs to compensate. What if, instead, the agent takes advantage of these flows? Recent work demonstrated that by going with the flow and using small actuation strategically to nudge itself into favorable flows, an agent can achieve its objective with very little energy [4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###].\nGiven such individual agent performance controllers [4 ###reference_b4###], we aim to develop a method that extends to multi-agent systems operating in complex flows while ensuring network connectivity and avoiding collision among agents. From the control perspective, this is challenging because of two key reasons. First, disconnections are sometimes unavoidable in the underactuated setting, where the agent\u2019s individual propulsion is smaller than the surrounding flows, as the nonlinear, time-varying flows can push agents in opposing directions. The safe interaction controller needs to be resilient and recover connectivity after losing it. Second, constraint satisfaction needs to be traded off intelligently with the performance objective of each agent. For example, a time-optimal controller for an agent would prefer staying in strong flows, which can conflict with the network connectivity objective. Our insight is that we can simplify this multi-agent problem using three different controllers in a Hierarchical Control of Multi-Agent-Systems ###reference_4.id14### (H-MAS ###reference_4.id14###) approach (Fig. 1 ###reference_###).\n###figure_1### Related Literature. In H-MAS ###reference_4.id14###, agents are organized into multiple levels of hierarchy, with higher-level agents having more authority and control over lower-level agents, designated as followers [9 ###reference_b9###]. For instance, [10 ###reference_b10###] solves path planning and ocean vehicle coordination separately with a leader-follower structure. For distance-based control tasks, such as for the Safety Controller in Fig. 1 ###reference_###, flocking techniques can maintain connectivity by influencing the agent\u2019s behavior to follow the movement of their neighbors while avoiding collisions [11 ###reference_b11###, 12 ###reference_b12###]. Recent advancements in Model Predictive Control ###reference_id5### (MPC ###reference_id5###) have also achieved connectivity and collision-free operation within Multi-Agent-Systems ###reference_3.id13### (MAS ###reference_3.id13###) [13 ###reference_b13###, 14 ###reference_b14###] and successfully approached control of varying-topology networks [15 ###reference_b15###]. 
Nevertheless, a notable limitation of these approaches is their reliance on the assumptions of position invariance [13 ###reference_b13###], fixed neighbor sets [14 ###reference_b14###] and time invariance [13 ###reference_b13###, 15 ###reference_b15###, 14 ###reference_b14###] of the flows, which do not apply in dynamic ocean environments.\nThus far, the mentioned literature offers limited applicability to time-varying, uncertain flows predicted by forecasts. Since the agents are often underactuated, they cannot reliably compensate for disturbances. In this context, optimization-based approaches such as MPC ###reference_id5### frequently lead to increased computational complexity, convergence to local minima, and infeasibility with respect to the constraints. Hence, we opt for computationally efficient flocking approaches to achieve distance-based safety control, which are always feasible and, when coupled with effective single-agent planners [4 ###reference_b4###], can be run at high update rates.\nWhile many flocking schemes only assume simple double integrator dynamics, adaptive flocking has also been applied to nonlinear dynamics [16 ###reference_b16###, 17 ###reference_b17###, 18 ###reference_b18###, 19 ###reference_b19###]. However, due to the time-varying dynamics, adaptive approaches may not generalize well, especially given the diverse nature of currents.\nContributions. To address the above shortcomings, we propose a Low Interference Safe Interaction Controller ###reference_6.id26### (Fig. 1 ###reference_###). This framework is more general in three dimensions. First, it takes an arbitrary performance policy, in contrast to [12 ###reference_b12###, 20 ###reference_b20###, 21 ###reference_b21###], where the flock can only track reference trajectories of single or multiple virtual leaders. In fact, a feedback control policy can optimize objectives besides navigation and, in complex flows, leads to significantly better results than tracking a reference trajectory [4 ###reference_b4###]. This enables the use of Dynamic Programming ###reference_id3### (DP ###reference_id3###) approaches where the value function yields optimal individual agent controls for an arbitrarily high number of agents without additional cost beyond a cheap gradient computation. This is especially powerful for multi-agent problems where the objective can be decomposed into the sum of independent single-agent objectives. Second, we provide design choices to modulate the aggressiveness of pursuing safety versus performance. Third, our approach also enables recoveries in case of connectivity losses. Finally, we investigate our method in the context of a promising approach to Carbon Dioxide Removal ###reference_7.id27### (CDR ###reference_7.id27###): utilizing robotic seaweed farms [2 ###reference_b2###].\nOrganization. Section II ###reference_### introduces relevant background and metrics to evaluate our LISIC ###reference_6.id26### in complex flows. In Section III ###reference_###, we present our LISIC ###reference_6.id26### approach, whereas in Section IV ###reference_###, we prove that our method guarantees safe network interactions under certain conditions on the maximum magnitude of the control and flow field velocities across the agents. Finally, we assess the performance of our approach in realistic ocean currents where these conditions are not always met with the metrics defined in SectionII ###reference_###."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II Problem Formulation",
15
+ "text": "In this section, we first describe the system\u2019s dynamics and briefly summarize connectedness in communication graphs. Then, we define our problem statement and the metrics we use to measure constraint violation."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "II-A System Dynamics",
21
+ "text": "We consider a swarm of agents and use to describe the set of all agents. Let the actuation signal of each agent be denoted by from a bounded set where is the dimensionality of the control.\nThen, the dynamics for each agent are given by:\ndenotes the position of agent in the dimensional state space, where for a surface vessel on the ocean. The movement of agent depends on the time-varying non-linear flow field and its control . Although our method works for arbitrary , we will focus on situations where the agent can directly actuate in each dimension, i.e., , in line with our experiments discussed in Section V ###reference_###.\nLet the agent trajectory resulting from Eq. (1 ###reference_###) be described by with the state at . For the global system of all agents, we use , , and respectively to describe the state, control, and trajectory.\nWhile our method also works in known currents, we focus on realistic settings, where only coarse forecasts are available to the planner, which differ significantly from the true flows ."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "II-B Communication Graph Preliminaries",
27
+ "text": "The network topology of our MAS ###reference_3.id13### with state can be represented by an undirected communication graph , allowing information to flow bidirectionally between agents. The set of finite vertices denotes individual agents, while the time-varying set of edges represents direct communication between agents. Given an upper communication threshold, . In other words, agents and can communicate directly with each other if they are spatially close with respect to a distance measure . The graph is said to be connected if an undirected path exists between every pair of distinct vertices.\nNext, we define the the adjacency matrix encoding the connectivity between vertices, i.e. .\nThe degree of a vertex at time , represents the number of incident edges to vertex .\nThe degree matrix is then defined as the diagonal matrix . To measure the graph\u2019s connectivity, we can compute the eigenvalues of the Laplacian positive semi-definite matrix . The second smallest eigenvalue , commonly referred to as the algebraic connectivity or Fiedler value, captures the robustness of the network to link failures. In particular, is connected if and only [11 ###reference_b11###]."
28
+ },
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "II-C Problem Statement",
33
+ "text": "We focus on multi-agent problems where the joint objective is the sum of independent objectives , which can be sketched out as:\nWe aim to find a control policy that approximately solves Eq. (2 ###reference_###) while being computationally cheap and always feasible. The agents are coupled in only two constraints: the collision constraint (2b ###reference_2###) where represents the distance between agent and and the minimum safe distance, and second, Eq. (2c ###reference_3###), in maintaining a graph where all agents are connected based on the communication range .\nIn real-world scenarios, the initial network may begin in a disconnected state or transition to a disconnected state when underactuated agents are involved. Hence, it is assumed that each agent possesses an emergency communication backup to a central unit, e.g., via satellite, in case its closest distance to any other agent exceeds . The objective is to minimize such instances, as these forms of communication are typically expensive.\nOur insight is that in this setting, we can decompose the problem and handle the objectives and constraints on different levels with (1) a performance controller for each agent, (2) a safety controller , and (3) a low-interference safe interaction controller trading-off the two (Fig. 1 ###reference_###).\nThe performance controller of an agent minimizes its only considering its own dynamics (1 ###reference_###). can be an arbitrary control policy from a fixed control signal to a feedback controller based on learning or dynamic programming (Section V ###reference_###). In challenging settings like ours with non-linear, time-varying dynamics, it is easier to design single-agent feedback controllers than solving the coupled multi-agent problem above, e.g., for time-optimal navigation, reference tracking, or optimizing seaweed growth [5 ###reference_b5###]. The safety controller, , determines the control for all agents to ensure the interaction constraints (2b ###reference_2###), (2c ###reference_3###) are satisfied. Lastly, based on the control inputs and from the respective policies, the safe interaction controller\ndecides the agents final control inputs . To achieve good performance, the safe interaction controller should not interfere too much with while still ensuring connectivity and avoiding collisions. This work focuses on designing and for an arbitrary ."
34
+ },
35
+ {
36
+ "section_id": "2.4",
37
+ "parent_section_id": "2",
38
+ "section_name": "II-D Evaluation Metrics",
39
+ "text": "Due to the underactuated nature of the agents, it is impossible to guarantee network connectivity or collision avoidance in some scenarios. Hence, we define evaluation metrics to assess the safety of our control schema with respect to Eq. (2c ###reference_3###) and (2b ###reference_2###). A collision happens between any of the agents in the swarm if at which Eq. (2b ###reference_2###) is violated. We denote this with the collision indicator . To measure various aspects of losing connectivity, we use three metrics. First, for a binary measure, if disconnections occur, we define the disconnection indicator which is if at which Eq. (2c ###reference_3###) is violated and zero otherwise.\nAdditionally, we measure the minimum Fiedler value over time; the higher, the more robust the communication network (Section II-B ###reference_###):\nLastly, as single-agent backup communication is costly, it matters how long an agent is isolated from all other agents. Therefore, we are introducing a new measure called Isolated Platform Metric ###reference_7.id17### (IPM ###reference_7.id17###).\nwhere counts the number of disconnected vertices, which corresponds to the number of zeros in the diagonal of the graph degree matrix (Section II-B ###reference_###).\nIn Section V ###reference_###, we compare different controllers empirically over a large, representative set of missions by evaluating the collision rate , the disconnection rate , as well as the distributions of IPM and . In our setting where the performance objectives are minimum time-to-target for each agent , the connectivity constraint often leads to a trade-off with the performance objective. Hence, we also quantify the degradation of the performance controller by quantifying the minimum distance the swarm center got to the target area over the mission time as ."
40
+ },
41
+ {
42
+ "section_id": "3",
43
+ "parent_section_id": null,
44
+ "section_name": "III Method",
45
+ "text": "Our method tackles the multi-agent problem with a hierarchical control approach. The low interference safe interaction controller ensures performance and safe control based on an arbitrary performance controller and a safety controller (see Fig. 1 ###reference_###). As explained in Section I ###reference_###, our approach is more general than [22 ###reference_b22###, 20 ###reference_b20###, 21 ###reference_b21###], as an arbitrary performance policy is chosen and our framework allows for designing the aggressiveness of the safety flocking-based controller following the magnitude of the performance control input. We first introduce our flocking-inspired safety controller based on potential functions and then detail our design for . Note that this control scheme is applicable in fully actuated agents, even though our primary focus lies in applications involving underactuated agents, which initially motivated a feasible reactive approach to collision avoidance and connectivity."
46
+ },
47
+ {
48
+ "section_id": "3.1",
49
+ "parent_section_id": "3",
50
+ "section_name": "III-A Flocking-Inspired Safety Controller",
51
+ "text": "The sole objective of the safety controller is to ensure adequate distances between the agents without prescribing a formation. Hence, we design our safety controller based on the gradients of a potential function .\nTo explain the principle, let us first focus on two connected agents and at an inter-agent distance . Consider the following bowl-shaped potential function\nwhere is a tuning factor to adjust the bell shape (see left of in Fig. 2 ###reference_###). Let the safety controllers for be and for . When those two agents are getting too close , the potential goes to infinity, so the gradient controllers are a strong repulsive force that pushes them away from each other. Conversely, when the two connected agents are at risk of losing their communication link , then , which means the gradient-controllers result in a strong, attractive force that brings them closer again. For multiple agents, the control becomes the sum of gradient potential terms of the other agents, and the magnitude of the gradients helps prioritize the critical inter-agent distances .\nWhen the agents are disconnected, which is sometimes unavoidable in underactuated settings where strong flows push them apart, we want them to be able to reconnect.\nGiven the assumption of an emergency communication backup outlined in Section II-C ###reference_###, we incorporate a second term to restore connectivity among disconnected agents. To the best of our knowledge, this concept was introduced in [23 ###reference_b23###]. While the augmented potential function in [23 ###reference_b23###] uses a square function of distance to for disconnected agents, we implement the second term in Eq. (6 ###reference_###) as a square-root function. This is a design choice in the context of underactuated agents in a dynamic oceanic environment, where remote flock members can experience strong divergent flows and direct connectivity may be infeasible or undesirable to achieve.\nThus, our approach yields a relatively low attraction force for agents beyond their communication range.\nThis results in our final potential function that is also visualized in Fig. 2 ###reference_###:\nwhere is an edge indicator similar to in Section II-B ###reference_###, but with a switching threshold inducing a hysteresis when adding new edges, see Eq. (7 ###reference_###). Hence, switches between two terms whether the pair of agents are within communication range () or disconnected (). Following the notation of [23 ###reference_b23###], we define:\nThis hysteresis mechanism avoids constant switching of the dynamical network with multiple agents for edges close to and helps preserve connectivity in reactive control schemes [24 ###reference_b24###].\nThe final safe interaction controller for each agent with maximum propulsion is then defined as"
52
+ },
53
+ {
54
+ "section_id": "3.2",
55
+ "parent_section_id": "3",
56
+ "section_name": "III-B Low Interference Safe Interaction Controller",
57
+ "text": "For our that trades off the performance inputs with the safety input , we propose an approach that weights these control vector inputs for each agent depending on the risk of losing connectivity or colliding.\nwhere and are weighting factors. Note that depends on the other agents\u2019 positions to guarantee safe interactions.\nWhen collisions or connectivity losses are imminent, should be able to rapidly tend to to prioritize the safe interaction safety over performance, i.e. and (Fig. 1 ###reference_### B, C). Conversely, when the network is well connected and there is low danger of collisions, should align with to have low interference with the agent\u2019s performance control, i.e. and (Fig. 1 ###reference_### A).\nHence we defined a weighting function such that and , see an example in Section V ###reference_###.\nThis function measures the urgency of to converge to and we define it\nThe function can be thought of as a monotonically increasing safety activation function taking values between depending on its argument\u2019s (unbounded) magnitude.\nFrom the definition of in Eq. (6 ###reference_###), and . Hence, in critical situations gets very large so that saturates to and to , thus prioritizing the network safety for the concerned agents i.e. , over each agent individual objective .\nIn other words, has a contractivity property for agent inter-distances at the boundaries of the safe set, defined by and , similarly to Control Barrier Functions ###reference_5.id25### (CBFs ###reference_5.id25###) [25 ###reference_b25###]. With this design, we ensure that agents coming from a disconnected status to a connected status experience a strong attracting gradient to avoid escaping the communication range again. From Fig. 2 ###reference_###, it is also clear that when the network is close to being ideally connected, the gradient norm of the potential function is low so that agent\u2019s control input is dominated by the performance controller since and .\n###figure_2###"
58
+ },
59
+ {
60
+ "section_id": "4",
61
+ "parent_section_id": null,
62
+ "section_name": "IV Theoretical Analysis",
63
+ "text": "This section analyzes under which conditions our safe interaction controller can maintain connectivity and avoid collisions [26 ###reference_b26###]. Our analysis follows the common approach to demonstrate that a flock converges to a lattice structure while preventing inter-agent collisions using energy-based analysis and LaSalle\u2019s invariance principle [12 ###reference_b12###]. We start by introducing a moving referential frame for the structural collective dynamics [12 ###reference_b12###] with respect to the flock centroid . The relative coordinates are given by and . Therefore, , and the total tension energy or potential energy for the structural dynamics in the relative coordinates yields\nA possible approach, although conservative, is to show that a global tension energy decrease of the system can be achieved by guaranteeing local tension energy decrease . Assume that switches at time for and on each . Then, [23 ###reference_b23###] with the number of edges added at switching time . The energy can be bound for any subsequent time as the graph topology becomes fixed after a certain time, and only a finite number of maximum edges can be added.\nThe time-derivative of along the trajectory of agent yields\nwhere we exploited the relation . We seek a condition linking the maximum actuation power of each agent to the dynamics of the flock, subject to the nonlinear flow . For ease of understanding, assume holonomic actuation i.e. , then can be directly substituted with\nEq. (8 ###reference_###). Using and the Cauchy-Schwarz inequality in Eq. (11 ###reference_###) yields:\nwhere denotes the average and the set the neighboring agents of . More details about this proof can be found in [26 ###reference_b26###]. A similar inequality to Eq. (12 ###reference_###) can be derived for the general dynamics defined in Eq. (1 ###reference_###) if is a linear map. \nLet us interpret Eq. (12 ###reference_###). The dynamics of the neighboring agents of depend on their surrounding flows and respective individual control inputs, i.e. .\nDespite strong flows, the agents do not necessarily need to be overactuated to meet a local energy decrease . Eq. (12 ###reference_###) can be fulfilled even if , since can be compensated by . In other words, agents in strong flows could still maintain connectivity and avoid collisions as long as the currents experienced by each agent and its neighbors are of similar direction and magnitude. The neighboring flocking control inputs average also helps accounting for the current difference term . Under these assumptions, we can show that , which allows to bound the maximum energy and apply LaSalle\u2019s Invariance Principle [23 ###reference_b23###], [27 ###reference_b27###], thus ensuring that no collisions or disconnections happen, since when or . However, for strong divergent flows between agents, it can happen that due to the underactuated nature of the agents, which makes satisfaction of Eq. (12 ###reference_###) challenging. Note that Eq. (12 ###reference_###) is sufficient but not necessary to guarantee , as negative local energies can compensate for positive ones."
64
+ },
65
+ {
66
+ "section_id": "5",
67
+ "parent_section_id": null,
68
+ "section_name": "Simulation Study",
69
+ "text": "The proposed scheme is evaluated on realistic ocean currents, seeking a close reproduction of an innovative CDR ###reference_7.id27### approach [2 ###reference_b2###, 5 ###reference_b5###]. We use multi-time Hamilton Jacobi Reachability ###reference_9.id19### (mt-HJR ###reference_9.id19###) as a performance single agent controller since it generates a value function yielding the time-optimal control everywhere [4 ###reference_b4###]."
70
+ },
71
+ {
72
+ "section_id": "5.1",
73
+ "parent_section_id": "5",
74
+ "section_name": "Experimental Set-Up",
75
+ "text": "We study the effectiveness of different controllers in maneuvering a two-dimensional Autonomous Surface Vehicle ###reference_2.id12### (ASV ###reference_2.id12###), with a thrust angle as a control input. The actuation can be assumed holonomic of fixed thrust magnitude m/s, as the vessel can turn to a desired within seconds while our sampling times are 10 minutes. We consider a group of identical ASV ###reference_2.id12###s with omnidirectional communication capabilities, navigating in strong ocean currents m/s. Each single agent\u2019s objective is to reach a target region common to all ASV ###reference_2.id12###s, which could be identified as an ideal seaweed growth region in the context of floating robotic farms [2 ###reference_b2###]. Next, we detail the creation of an extensive set of missions to illustrate trade-offs between single-agent objectives and flock connectivity maintenance in a realistic ocean environment.\nInspired by [4 ###reference_b4###], we focus on the Gulf of Mexico region (Fig. 3 ###reference_###), as it presents challenging currents. Moreover, we employ two ocean current data sources, which we refer to as HYCOM hindcasts [28 ###reference_b28###] and Copernicus hindcasts [29 ###reference_b29###] that we use as forecasts for realistic scenarios. In our context, the ocean forecast data represents predicted currents while the hindcast ocean data are true flows . While the forecast error affects the optimality of the performance mt-HJR ###reference_9.id19### controllers, the advantage of our reactive safety controller design over predictive schemes [13 ###reference_b13###, 14 ###reference_b14###] is that it is not affected by the error on forecasted currents. We propose two settings to investigate our approach, namely (a) performance mt-HJR ###reference_9.id19### planning on hindcasts and simulation on hindcasts (HC-HC) and (b) performance mt-HJR ###reference_9.id19### planning on forecasts and simulation on hindcasts (FC-HC). The first allows us to assess performance in an idealized setting where true flows are known, while the second reflects a realistic application in dynamic ocean environments.\nWe assume all agents start simultaneously at time a navigation mission to a target region . The navigation objective of each ASV ###reference_2.id12### is to reach from their start states within a maximum allowed time . The target is defined as a circular region with center coordinates and fixed radius around it. To obtain a diverse set of missions , the starting times are uniformly sampled between April 2022 and December 2022. is set to h, and the start points are sampled such that the ASV ###reference_2.id12###s can reach the target within to ensure that missions are by definition feasible on true flows and temporally representative enough of realistic scenarios. To prevent stranding side effects, we impose a minimum distance of km between the target area and the land and a minimum distance of km between each ASV ###reference_2.id12###\u2019s initial position and the land.\nWe generate missions of initially connected and collision-free networks, see Fig. 3 ###reference_###.\n###figure_3###"
76
+ },
77
+ {
78
+ "section_id": "5.2",
79
+ "parent_section_id": "5",
80
+ "section_name": "Baseline Controllers",
81
+ "text": "We build on recent work that proposed a reliable mt-HJR ###reference_9.id19### controller for underactuated agents utilizing complex flows [4 ###reference_b4###]. This approach directly extends to multiple agents with little extra computation.\nThe feedback controller for agent can be obtained from an optimal value function at time as .\nAll evaluated controllers use the mt-HJR ###reference_9.id19### formulation as a single agent performance control. Our baseline scheme, called multi-time Hamilton Jacobi Reachability Baseline ###reference_0.id20### (mt-HJR-B ###reference_0.id20###), involves each agent only utilizing its time-optimal performance control mt-HJR ###reference_9.id19### without considering multi-agent interactions.\nThis baseline provides a reasonable estimation of the likelihood of collisions and communication losses if each agent were to rely solely on its performance control. In addition, we define a second baseline controller, multi-time Hamilton Jacobi Reachability with multi-agent Reactive Control ###reference_1.id21### (mt-HJR-RC ###reference_1.id21###) adapted from [30 ###reference_b30###]. This controller operates in three modes: achieveConnectivity, maintainConnectivity, and GoToGoal, which are selected based on the ASV ###reference_2.id12###s\u2019 relative positions. The maintainConnectivity and GoToGoal modes employ a general navigation function for each agent, which we instantiate to our mt-HJR ###reference_9.id19### performance controller. This approach is easily integrated with the time-optimal control mt-HJR ###reference_9.id19###, and the reactive control term can be implemented decentralized.\nFinally, we denote our Low Interference Safe Interaction Controller ###reference_6.id26### (LISIC ###reference_6.id26###) approach from Section III ###reference_### as multi-time Hamilton Jacobi Reachability with multi-agent LISIC ###reference_2.id22### (mt-HJR-LISIC ###reference_2.id22###). The single agent performance controller is again mt-HJR ###reference_9.id19###. The trade-off between each agent\u2019s navigational objective and the safe network interaction can be tuned with two parameters. First, the potential function shape (Fig. 2 ###reference_###) can be more or less flat around the ideal distance . In this application, we set . Furthermore, we now detail our weighting scheme for and via the definition of in Eq. (9 ###reference_###) as a softmax-like function\nwhere the parameter can be adjusted to achieve faster saturation of the potential function gradient term ."
82
+ },
83
+ {
84
+ "section_id": "5.3",
85
+ "parent_section_id": "5",
86
+ "section_name": "Additional Parameters and Metrics",
87
+ "text": "The upper connectivity bound in Eq. (2c ###reference_3###) and (7 ###reference_###) is set to km, which corresponds approximately to radio communication capabilities for ASV ###reference_2.id12###s. The collision lower threshold from Eq. (2b ###reference_2###) is set to m, providing a practical margin, as one would typically do in a real-world implementation. Moreover, we define m for the edge hysteresis parameter from Eq. (7 ###reference_###). We use the Euclidean norm to measure inter-agent distances and the minimum flock center distance to target . The parameters used in the experiments are summarized in Table I ###reference_###."
88
+ },
89
+ {
90
+ "section_id": "5.4",
91
+ "parent_section_id": "5",
92
+ "section_name": "Numerical Results",
93
+ "text": "The results over apriori known true currents (HC-HC) and realistic scenario (FC-HC) are presented in Table II ###reference_###. Both mt-HJR-LISIC ###reference_2.id22### and mt-HJR-RC ###reference_1.id21### exhibit superior performance in terms of connectivity and collision metrics compared to the baseline mt-HJR-B ###reference_0.id20###. Thus, we conduct statistical testing to compare mt-HJR-RC ###reference_1.id21### and mt-HJR-LISIC ###reference_2.id22###. Regarding the disconnection and collision rate, we perform a one-sided two-sample z proportion test for mt-HJR-LISIC ###reference_2.id22### against mt-HJR-RC ###reference_1.id21###.\nLet be the rate collision or disconnection over with the null hypothesis to reject in favor of the alternative hypothesis . mt-HJR-LISIC ###reference_2.id22### is statistically significantly better than mt-HJR-RC ###reference_1.id21### at avoiding disconnections in both (HC-HC) and (FC-HC) scenarios, with p-values of and , respectively. However, it is not significantly better than mt-HJR-RC ###reference_1.id21### at avoiding collisions. We also perform a Welch\u2019s t-test due to the unequal variances of mt-HJR-RC ###reference_1.id21### and mt-HJR-LISIC ###reference_2.id22### to test (1) connectivity using the means over of the Isolated Platform Metric ###reference_7.id17###, i.e., and the minimum Fiedler value recorded over time, i.e., , (2) performance trade-off with . For both (HC-HC) and (FC-HC) scenarios, mt-HJR-LISIC ###reference_2.id22### leads to statistically significantly better results for the network connectivity with for and while mt-HJR-RC ###reference_1.id21### displays a better objective trade-off with p-values . Moreover, we plot the IPM ###reference_7.id17### and , evaluated on the full set of missions for the three controllers in Fig. 4 ###reference_###. Among the three evaluated controllers, mt-HJR-LISIC ###reference_2.id22### has the lowest IPM ###reference_7.id17###. Because of its higher value of (see Fig. 4 ###reference_###, right), mt-HJR-LISIC ###reference_2.id22### is more robust against disconnections, and should be the preferred control choice when communication maintenance is prioritized.\nFinally, Fig. 5 ###reference_### illustrates a navigation mission, comparing a naive multi-agent approach (mt-HJR-B ###reference_0.id20###) to our safe interaction controller, mt-HJR-LISIC ###reference_2.id22###.\nNote that despite the initial strong currents pushing the ASV ###reference_2.id12###s away from the desired goal in Fig. 5 ###reference_###, the currents eventually shift favorably, allowing the underactuated ASV ###reference_2.id12###s to reach the target. The mt-HJR ###reference_9.id19### framework leverages this information through current forecasts to plan intelligently.\n###figure_4### ###figure_5###"
94
+ },
95
+ {
96
+ "section_id": "5.5",
97
+ "parent_section_id": "5",
98
+ "section_name": "Discussion",
99
+ "text": "It is clear that mt-HJR-LISIC ###reference_2.id22### outperforms mt-HJR-RC ###reference_1.id21### and the mt-HJR-B ###reference_0.id20### in terms of connectivity metrics. Interestingly, mt-HJR-LISIC ###reference_2.id22### leads to a slightly higher collision rate in Table II ###reference_### than mt-HJR-RC ###reference_1.id21###. We believe that it is mainly due to two reasons: (1) In mt-HJR-RC ###reference_1.id21###, the expected risk of collisions is inherently lower as each agent can achieve connectivity with a maximum amount of two other agents [30 ###reference_b30###] while mt-HJR-LISIC ###reference_2.id22### achieves a similar structure to a lattice configuration [12 ###reference_b12###] (2) In our example, all agents navigate to the same target, which also increases the risk of collisions, as it is a common implicit regularizer. We expect improvement in collision rate for application to autonomous ASV ###reference_2.id12###s, where each agent maximizes an objective along its trajectory [5 ###reference_b5###]. The discrepancy between the performance trade-off with each agent target reaching objective in Table II ###reference_### is less noticeable in the (FC-HC) setting, since the mt-HJR ###reference_9.id19### performance is also degraded because of the stochastic error when planning on forecasts [4 ###reference_b4###]."
100
+ },
101
+ {
102
+ "section_id": "6",
103
+ "parent_section_id": null,
104
+ "section_name": "VI Conclusion and Future Work",
105
+ "text": "In this work, we proposed a H-MAS ###reference_4.id14### approach to maintain network connectivity in complex dynamical flows while satisfying single agent level objectives when feasible. Our method blends a safety controller for collisions and connectivity maintenance with a performance control policy, which allows us to decompose a complex multi-agent problem effectively.\nOur empirical results in realistic ocean dynamics showed that our method efficiently maintains connectivity and avoids collisions in most scenarios while reasonably trading off with each agent\u2019s performance objective. Future work involves real-world testing of our experiments, as well as adapting predictive methods [13 ###reference_b13###, 14 ###reference_b14###] to time-varying flows to anticipate disconnections and collisions utilizing the only available coarse ocean forecasts."
106
+ }
107
+ ],
108
+ "appendix": [],
109
+ "tables": {
110
+ "1": {
111
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T1.18\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T1.18.19.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"S5.T1.18.19.1.1\">Symbol</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"S5.T1.18.19.1.2\">Description</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T1.18.19.1.3\">Value</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T1.2.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.2.2.3\">\n<a href=\"https://arxiv.org/html/2307.01927v3#id12.12.id12\"><abbr class=\"ltx_glossaryref\" href=\"https://arxiv.org/html/2307.01927v3#id12.12.id12\" title=\"Autonomous Surface Vehicle\"><span class=\"ltx_text ltx_glossary_short\">ASV</span></abbr></a>s maximum actuation</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T1.2.2.2\">\nm/s</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.4.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.3.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.4.4.3\">Time-varying ocean currents</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.4.4.2\">\nm/s</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.6.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.5.5.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.6.6.3\">Upper connectivity bound</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.6.6.2\">\nm</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.8.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.7.7.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.8.8.3\">Lower collision threshold</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.8.8.2\">\nm</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.10.10\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.9.9.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.10.10.3\">Radius of circular target region</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.10.10.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.12.12\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.11.11.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.12.12.3\">Saturation of potential function</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.12.12.2\">\n (no units)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.14.14\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.13.13.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.14.14.3\">Shape of potential function</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.14.14.2\">\n (no units)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.16.16\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.15.15.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.16.16.3\">Hysteresis for adding or removing edges</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.16.16.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.18.18\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.17.17.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.18.18.3\">Duration of a mission</td>\n<td class=\"ltx_td ltx_align_left\" 
id=\"S5.T1.18.18.2\">\nh</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span><span class=\"ltx_text\" id=\"S5.T1.20.1\" style=\"font-size:90%;\"> Relevant Simulation and Controller Parameters.</span></figcaption>\n</figure>",
112
+ "capture": "TABLE I: Relevant Simulation and Controller Parameters."
113
+ },
114
+ "2": {
115
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S5.T2.13\" style=\"width:433.6pt;height:179.7pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(21.4pt,-8.9pt) scale(1.10952183151539,1.10952183151539) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T2.13.13\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T2.3.3.3\">\n<td class=\"ltx_td ltx_border_tt\" id=\"S5.T2.3.3.3.4\"></td>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T2.3.3.3.5\">Coll.</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T2.3.3.3.6\">Disconn.</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T2.1.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T2.2.2.2.2\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T2.3.3.3.3\"></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.4.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" colspan=\"4\" id=\"S5.T2.4.4.4.1\">\n plans on true flows: HC-HC</th>\n<th class=\"ltx_td ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.4.4.4.2\"></th>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T2.4.4.4.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.13.13.14.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.13.13.14.1.1\"><a href=\"https://arxiv.org/html/2307.01927v3#id20.20.id20\"><abbr class=\"ltx_glossaryref\" href=\"https://arxiv.org/html/2307.01927v3#id20.20.id20\" title=\"multi-time Hamilton Jacobi Reachability Baseline\"><span class=\"ltx_text ltx_glossary_short\">mt-HJR-B</span></abbr></a></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.13.13.14.1.2\">68.5%</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.13.13.14.1.3\">50.1%</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.13.13.14.1.4\">0.37</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.13.13.14.1.5\">0.39</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.13.13.14.1.6\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.13.13.14.1.6.1\">0</span> km</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.5.5.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.5.5.5.2\"><a href=\"https://arxiv.org/html/2307.01927v3#id21.21.id21\"><abbr class=\"ltx_glossaryref\" href=\"https://arxiv.org/html/2307.01927v3#id21.21.id21\" title=\"multi-time Hamilton Jacobi Reachability with multi-agent Reactive Control\"><span class=\"ltx_text ltx_glossary_short\">mt-HJR-RC</span></abbr></a></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.5.5.5.3\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.5.5.5.3.1\">0</span>%</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.5.5.5.4\">44.8%</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.5.5.5.5\">0.19</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.5.5.5.6\">0.42</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.5.5.5.1\">0.14 km<sup class=\"ltx_sup\" id=\"S5.T2.5.5.5.1.1\">\u2217</sup>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.8.8.8\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.8.8.8.4\"><a href=\"https://arxiv.org/html/2307.01927v3#id22.22.id22\"><abbr class=\"ltx_glossaryref\" href=\"https://arxiv.org/html/2307.01927v3#id22.22.id22\" title=\"multi-time Hamilton Jacobi Reachability with multi-agent LISIC\"><span 
class=\"ltx_text ltx_glossary_short\">mt-HJR-LISIC</span></abbr></a></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.8.8.8.5\">0.7%</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.6.6.6.1\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.6.6.6.1.1\">9.9</span>%<sup class=\"ltx_sup\" id=\"S5.T2.6.6.6.1.2\">\u2217</sup>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.7.7.7.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.7.7.7.2.1\">0.05<sup class=\"ltx_sup\" id=\"S5.T2.7.7.7.2.1.1\"><span class=\"ltx_text ltx_font_medium\" id=\"S5.T2.7.7.7.2.1.1.1\">\u2217</span></sup></span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.8.8.8.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.8.8.8.3.1\">1.15<sup class=\"ltx_sup\" id=\"S5.T2.8.8.8.3.1.1\"><span class=\"ltx_text ltx_font_medium\" id=\"S5.T2.8.8.8.3.1.1.1\">\u2217</span></sup></span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.8.8.8.6\">5.90 km</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.9.9.9\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" colspan=\"4\" id=\"S5.T2.9.9.9.1\">\n plans on forecasts: FC-HC</td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T2.9.9.9.2\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T2.9.9.9.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.13.13.15.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.13.13.15.2.1\"><a href=\"https://arxiv.org/html/2307.01927v3#id20.20.id20\"><abbr class=\"ltx_glossaryref\" href=\"https://arxiv.org/html/2307.01927v3#id20.20.id20\" title=\"multi-time Hamilton Jacobi Reachability Baseline\"><span class=\"ltx_text ltx_glossary_short\">mt-HJR-B</span></abbr></a></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.13.13.15.2.2\">39.1 %</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.13.13.15.2.3\">70.5%</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.13.13.15.2.4\">0.92</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.13.13.15.2.5\">0.23</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.13.13.15.2.6\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.13.13.15.2.6.1\">10.55</span> km</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.10.10.10\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.10.10.10.2\"><a href=\"https://arxiv.org/html/2307.01927v3#id21.21.id21\"><abbr class=\"ltx_glossaryref\" href=\"https://arxiv.org/html/2307.01927v3#id21.21.id21\" title=\"multi-time Hamilton Jacobi Reachability with multi-agent Reactive Control\"><span class=\"ltx_text ltx_glossary_short\">mt-HJR-RC</span></abbr></a></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.10.10.10.3\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.10.10.10.3.1\">0</span>%</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.10.10.10.4\">58%</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.10.10.10.5\">0.23</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.10.10.10.6\">0.30</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.10.10.10.1\">10.84 km<sup class=\"ltx_sup\" id=\"S5.T2.10.10.10.1.1\">\u2217</sup>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.13.13.13\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T2.13.13.13.4\"><a href=\"https://arxiv.org/html/2307.01927v3#id22.22.id22\"><abbr class=\"ltx_glossaryref\" href=\"https://arxiv.org/html/2307.01927v3#id22.22.id22\" title=\"multi-time Hamilton Jacobi Reachability with multi-agent LISIC\"><span class=\"ltx_text ltx_glossary_short\">mt-HJR-LISIC</span></abbr></a></td>\n<td class=\"ltx_td 
ltx_align_left ltx_border_bb\" id=\"S5.T2.13.13.13.5\">0.7%</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T2.11.11.11.1\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.11.11.11.1.1\">9.9</span>%<sup class=\"ltx_sup\" id=\"S5.T2.11.11.11.1.2\">\u2217</sup>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T2.12.12.12.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.12.12.12.2.1\">0.043<sup class=\"ltx_sup\" id=\"S5.T2.12.12.12.2.1.1\"><span class=\"ltx_text ltx_font_medium\" id=\"S5.T2.12.12.12.2.1.1.1\">\u2217</span></sup></span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T2.13.13.13.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.13.13.13.3.1\">1.15<sup class=\"ltx_sup\" id=\"S5.T2.13.13.13.3.1.1\"><span class=\"ltx_text ltx_font_medium\" id=\"S5.T2.13.13.13.3.1.1.1\">\u2217</span></sup></span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T2.13.13.13.6\">13.96 km</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span><span class=\"ltx_text\" id=\"S5.T2.15.1\" style=\"font-size:90%;\">We compare the performance of three controllers in two forecast settings. The <sup class=\"ltx_sup\" id=\"S5.T2.15.1.1\">\u2217</sup> symbol indicates a statistically significant better performance in terms of connectivity, collisions, and distance to the target.</span></figcaption>\n</figure>",
116
+ "capture": "TABLE II: We compare the performance of three controllers in two forecast settings. The \u2217 symbol indicates a statistically significant better performance in terms of connectivity, collisions, and distance to the target."
117
+ }
118
+ },
119
+ "image_paths": {
120
+ "1": {
121
+ "figure_path": "2307.01927v3_figure_1.png",
122
+ "caption": "Figure 1: Our LISIC policy blends a single-agent performance control input with a flocking-based safety control input to avoid connectivity losses and collisions in a multi-agent network while minimally interfering with the performance objective of each agent. This ensures safe performance in ocean environments with strong ocean currents affecting the low-powered agents.",
123
+ "url": "http://arxiv.org/html/2307.01927v3/x1.png"
124
+ },
125
+ "2": {
126
+ "figure_path": "2307.01927v3_figure_2.png",
127
+ "caption": "Figure 2: Augmented potential function, with two terms to account for agents within and outside the communication range Rc\u2062o\u2062msubscript\ud835\udc45\ud835\udc50\ud835\udc5c\ud835\udc5aR_{com}italic_R start_POSTSUBSCRIPT italic_c italic_o italic_m end_POSTSUBSCRIPT. A high \u03ba\ud835\udf05\\kappaitalic_\u03ba parameter is shown to increase the steepness of the slope around Rc\u2062o\u2062m2subscript\ud835\udc45\ud835\udc50\ud835\udc5c\ud835\udc5a2\\frac{R_{com}}{2}divide start_ARG italic_R start_POSTSUBSCRIPT italic_c italic_o italic_m end_POSTSUBSCRIPT end_ARG start_ARG 2 end_ARG, depending on how achieving the exact ideal distance is valued.",
128
+ "url": "http://arxiv.org/html/2307.01927v3/x2.png"
129
+ },
130
+ "3": {
131
+ "figure_path": "2307.01927v3_figure_3.png",
132
+ "caption": "Figure 3: We sample a large set of missions |\ud835\udd44|=1000\ud835\udd441000\\lvert\\mathbb{M}\\rvert=1000| blackboard_M | = 1000 in the Gulf of Mexico that are spatially and temporally representative of realistic scenarios.",
133
+ "url": "http://arxiv.org/html/2307.01927v3/x3.png"
134
+ },
135
+ "4": {
136
+ "figure_path": "2307.01927v3_figure_4.png",
137
+ "caption": "Figure 4: Left: IPM evaluated on \ud835\udd44\ud835\udd44\\mathbb{M}blackboard_M. Due to its low IPM, mt-HJR-LISIC typically has both a low disconnection time and a low number of disconnected agents. Right: The minimum Fiedler value \u03bb2m\u2062i\u2062nsuperscriptsubscript\ud835\udf062\ud835\udc5a\ud835\udc56\ud835\udc5b\\lambda_{2}^{min}italic_\u03bb start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_m italic_i italic_n end_POSTSUPERSCRIPT can be used as a graph connectivity measure. A high \u03bb2m\u2062i\u2062nsuperscriptsubscript\ud835\udf062\ud835\udc5a\ud835\udc56\ud835\udc5b\\lambda_{2}^{min}italic_\u03bb start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_m italic_i italic_n end_POSTSUPERSCRIPT ensures better robustness against connectivity failures.",
138
+ "url": "http://arxiv.org/html/2307.01927v3/x4.png"
139
+ },
140
+ "5": {
141
+ "figure_path": "2307.01927v3_figure_5.png",
142
+ "caption": "Figure 5: Mission example with mt-HJR-B (left) versus mt-HJR-LISIC (right). Note that while the agents\u2019 trajectories are depicted for the interval [t0subscript\ud835\udc610t_{0}italic_t start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT, Tt\u2062i\u2062m\u2062e\u2062o\u2062u\u2062tsubscript\ud835\udc47\ud835\udc61\ud835\udc56\ud835\udc5a\ud835\udc52\ud835\udc5c\ud835\udc62\ud835\udc61T_{timeout}italic_T start_POSTSUBSCRIPT italic_t italic_i italic_m italic_e italic_o italic_u italic_t end_POSTSUBSCRIPT], the currents in the background represent a snapshot at time t0subscript\ud835\udc610t_{0}italic_t start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT and evolve in both direction and magnitude over time. mt-HJR-LISIC guarantees communication through the full length of the mission, avoids collisions, and ensures that all agents reach the target.",
143
+ "url": "http://arxiv.org/html/2307.01927v3/x5.png"
144
+ }
145
+ },
146
+ "validation": true,
147
+ "references": [],
148
+ "url": "http://arxiv.org/html/2307.01927v3"
149
+ }
20240620/2307.06930v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240620/2307.13520v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240620/2308.03372v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240620/2308.04792v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240620/2308.07706v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240620/2308.10692v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240620/2309.08781v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240620/2309.08902v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240620/2309.11143v4.json ADDED
@@ -0,0 +1,167 @@
1
+ {
2
+ "title": "CoT-BERT: Enhancing Unsupervised Sentence Representation through Chain-of-Thought",
3
+ "abstract": "Unsupervised sentence representation learning aims to transform input sentences into fixed-length vectors enriched with intricate semantic information while obviating the reliance on labeled data. Recent strides within this domain have been significantly propelled by breakthroughs in contrastive learning and prompt engineering. Despite these advancements, the field has reached a plateau, leading some researchers to incorporate external components to enhance the quality of sentence embeddings. Such integration, though beneficial, complicates solutions and inflates demands for computational resources. In response to these challenges, this paper presents CoT-BERT, an innovative method that harnesses the progressive thinking of Chain-of-Thought reasoning to tap into the latent potential of pre-trained models like BERT. Additionally, we develop an advanced contrastive learning loss function and propose a novel template denoising strategy. Rigorous experimentation demonstrates that CoT-BERT surpasses a range of well-established baselines by relying exclusively on the intrinsic strengths of pre-trained models. 111Our code and checkpoints are available at https://github.com/ZBWpro/CoT-BERT.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Sentence representation is a fundamental aspect of Natural Language Processing (NLP), which concentrates on converting input sentences into fixed-length numerical vectors, commonly known as sentence embeddings. These vectors are crucial for enabling computational systems and advanced deep learning models to process linguistic data effectively. Their applications span across various domains, including information retrieval, text clustering, topic modeling, recommendation systems, and the development of artificial intelligence agents [1 ###reference_b1###]. Moreover, they serve as essential features for neural networks engaged in downstream tasks such as relationship extraction, sentiment classification, and text implication recognition.\nIn this research field, unsupervised sentence representation learning has garnered substantial attention owing to its independence from annotated data. Currently, the prevailing paradigm in academia involves leveraging pre-trained language models (PLMs) like BERT [2 ###reference_b2###] and RoBERTa [3 ###reference_b3###], with contrastive learning to mitigate the issue of semantic space anisotropy [4 ###reference_b4###] by drawing similar samples closer while pushing dissimilar ones apart.\nWith the rise of prompt engineering, a series of studies have been initiated to integrate prompts into sentence representation tasks. A remarkable endeavor, PromptBERT [5 ###reference_b5###], adopts a manual template \u201cThis sentence: \u2018[X]\u2019 means [MASK].\u201d to encapsulate the input sentence [X], using the output vector corresponding to the [MASK] token as the original sentence\u2019s representation. Despite its impressive performance, subsequent advancements have encountered a plateau. For instance, ConPVP [6 ###reference_b6###], an improvement on PromptBERT, achieves a marginal gain, with only a 0.03 increment in the average Spearman\u2019s correlation score across seven Semantic Textual Similarity (STS) benchmarks when implemented with RoBERTabase.\nIn pursuit of further enhancements, some research [7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###] has explored incorporating external models or corpora to assist in sentence embedding derivation. Although these additions can improve model performance, they concurrently increase solution complexity and computational resource consumption. For example, the current state-of-the-art (SOTA) work, RankCSE [8 ###reference_b8###], necessitates knowledge distillation from two high-capacity teacher models for training. Specifically, the development of RankCSE-BERTbase involves training both SimCSE-BERTbase [11 ###reference_b11###] and SimCSE-BERTlarge, constraining its applicability due to the computational demands of updating BERTlarge. Another cutting-edge strategy, RankEncoder [9 ###reference_b9###], requires the introduction of a large external corpus on top of the training set, which poses challenges in data-scarce scenarios.\nThis paper endeavors to fully unleash the potential of pre-trained models, achieving results on par with or superior to previous methods without introducing any external components. 
To this end, we draw inspiration from the progressive thinking of Chain-of-Thought (CoT) [12 ###reference_b12###], attempting to derive sentence representations through a multi-staged process.\nCoT, an emerging prompting technique, has shown promise in augmenting the performance of high-parameter generative models in complex reasoning tasks by breaking down problems into a series of logical steps leading to the final answer. Although BERT operates with significantly fewer parameters compared to models such as LLaMA [13 ###reference_b13###] and GPT [14 ###reference_b14###], we posit that the progressive nature of CoT can be adapted to discriminative models. Given BERT\u2019s capability to assign context-specific meanings to the special token [MASK] via its attention mechanism during the masked language modeling (MLM) task, this token exhibits an adaptive characteristic that aligns well with the CoT methodology.\nAccordingly, we propose a two-step approach to sentence representation, which includes both comprehension and summarization stages, with the latter\u2019s output serving as the representation for the original sentence. Extensive experimental evaluations indicate that this method, termed CoT-BERT, outperforms several robust baselines without necessitating additional parameters. The primary contributions of this paper are outlined as follows:\nIntroduction of CoT-BERT: This study unveils CoT-BERT, a groundbreaking method that integrates the progressive logic of Chain-of-Thought with sentence representation. Our approach demonstrates performance that matches or exceeds current rank-based SOTA solutions, while obviating the need for external corpora or auxiliary text representation models. Notably, employing RoBERTabase as the PLM, we attain a Spearman correlation of 80.62% across seven STS tasks, markedly surpassing the existing best result. To our knowledge, CoT-BERT represents the inaugural effort to amalgamate CoT reasoning with sentence representation.\nExtended InfoNCE Loss: We present a superior contrastive learning loss function that extends beyond the conventional InfoNCE Loss [15 ###reference_b15###]. This function facilitates an enhanced optimization of the PLM\u2019s semantic space uniformity by introducing contrast not only between anchor sentences and negative samples but also between positive and negative instances.\nInnovative Template Denoising Strategy: Our research proposes an advanced template denoising strategy to eliminate the potential influence of semantic interpretation attributable to prompt biases. This objective is realized by filling blank templates with [PAD] placeholders of identical length to the input sentence and adjusting the attention masks accordingly.\nComprehensive Experimental Evaluation: We conduct a thorough examination of CoT-BERT\u2019s performance across seven established STS benchmarks. Additionally, we provide exhaustive ablation studies and analyses focused on the main innovations of CoT-BERT to ascertain the reasons behind its effectiveness. Furthermore, our code and checkpoints have been made available for replication and further experimentation."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related Work",
15
+ "text": "This section reviews three categories of research directly related to our work: unsupervised contrastive learning methods for sentence embeddings, techniques for text representation through prompts, and chain-of-thought prompting for multi-step reasoning."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Unsupervised Sentence Representation",
21
+ "text": "Inspired by advancements in computer vision, numerous studies have explored the application of contrastive learning to unsupervised sentence representation [17 ###reference_b17###, 18 ###reference_b18###, 19 ###reference_b19###, 20 ###reference_b20###]. Compared to post-processing strategies like BERT-flow [4 ###reference_b4###] and BERT-whitening [21 ###reference_b21###], contrastive learning demonstrates more pronounced improvements within the semantic space of BERT, thus establishing itself as the predominant method for deriving embeddings in the current NLP community. Among these efforts, ConSERT [16 ###reference_b16###] employs four strategies, including adversarial attacks and token shuffling, to construct positive samples. SimCSE [11 ###reference_b11###], on the other hand, utilizes standard dropout for minimal data augmentation. Building upon this, SSCL [22 ###reference_b22###] leverages outputs from the intermediate layers of PLMs as hard negatives to mitigate over-smoothing. In general, these methods primarily innovate in the construction of positive and negative samples, yet they do not fully exploit BERT\u2019s behavior during its pre-training phase.\nA new frontier in this field involves integrating external components to refine sentence embeddings. DiffCSE [7 ###reference_b7###], for instance, introduces a generator and discriminator structure atop the encoder, aiming to endow the model with the ability to distinguish between original and edited sentences. RankCSE [8 ###reference_b8###] proposes utilizing multiple text representation models for knowledge distillation, coupled with a training methodology that combines ranking consistency and contrastive learning. Moreover, RankEncoder [9 ###reference_b9###] suggests the inclusion of an external corpus to enrich sentence representation derivation. While methods that incorporate external models or datasets typically exhibit enhanced performance over those relying solely on PLMs, this integration inevitably increases the complexity and resource demands of their frameworks."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Prompt-based Learning",
27
+ "text": "Originating from GPT models [14 ###reference_b14###], the concept of prompts has rapidly expanded into diverse domains, including semantic textual similarity. Prompt-based techniques are designed to maximize the utilization of prior knowledge stored within PLMs. For models like BERT and RoBERTa, this objective is chiefly realized through the transformation of downstream tasks into formats that closely emulate MLM. Notably, PromptBERT [5 ###reference_b5###] employs a manually designed template to encapsulate the input sentence and takes the output vector corresponding to the [MASK] token as the representation. In a parallel endeavor, PromCSE [23 ###reference_b23###] deploys a distinctive approach by freezing the entire pre-trained model while integrating multi-layer learnable soft prompts. Furthermore, ConPVP [6 ###reference_b6###] merges continuous prompts with various manual templates. These elements are then converted into anchor sentences, positive instances, and negative instances for incorporation into the InfoNCE Loss."
28
+ },
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "Chain-of-Thought Prompting",
33
+ "text": "Chain-of-Thought (CoT) prompting represents a revolutionary approach, guiding large language models (LLMs) through a series of intermediate steps towards the final answer. As CoT is primarily targeted at direct inference scenarios and typically exhibits significant advantages only when the model reaches a substantial size, CoT reasoning is considered an emergent property of LLMs [12 ###reference_b12###].\nDespite this, the principle of decomposing complex problem, as advocated by CoT, finds broad applicability in deep learning. As an illustration, consider the representation of text in neural networks, wherein the journey commences with the encoding of individual lexical units before advancing toward holistic sentence representation. Therefore, we aspire to extend this technique to discriminative models like BERT to further unlock the potential inherent within PLMs."
34
+ },
35
+ {
36
+ "section_id": "3",
37
+ "parent_section_id": null,
38
+ "section_name": "Methodology",
39
+ "text": "In this section, we initially present our CoT-style two-stage sentence representation strategy, along with its underlying design principles, in subsection 3.1 ###reference_###. Following this, in subsection 3.2 ###reference_###, we proceed to introduce the extended InfoNCE Loss, accompanied by an intuitive explanation for its heightened performance. Eventually, our refined template denoising technique is expounded upon in subsection 3.3 ###reference_###."
40
+ },
41
+ {
42
+ "section_id": "3.1",
43
+ "parent_section_id": "3",
44
+ "section_name": "Chain-of-Thought-Styled Two-stage Sentence Representation",
45
+ "text": "To effectively adapt CoT to new application scenarios, it is essential to clarify its main characteristics. Formally, CoT consists of two principal components: the reasoning process and the resultant conclusions. The former is a model-generated sequence predicated on given context, while the latter constitutes the final outcome derived from synthesizing the intermediate reasoning steps. Collectively, they form a problem-solving strategy that progresses from simple to complex.\nCoT primarily operates on high-parameter generative PLMs. Although discriminative models like BERT possess robust natural language understanding capabilities, they cannot directly generate intermediate reasoning steps. Therefore, to guide such models towards CoT-style progressive reasoning, we need to employ prompt engineering to artificially design a multi-step sentence representation pipeline that is both versatile and adaptive.\nVersatility here refers to the method\u2019s applicability to various sentences types, which requires a general strategy for solving sentence representation tasks. Adaptability, in contrast, emphasizes the model\u2019s ability to identify and prioritize key information from sentences based on their distinct semantics. Drawing inspiration from human practices in text summarization, we propose a two-stage sentence embedding derivation methodology: comprehension followed by summarization. Each stage incorporates a [MASK] token, enabling the model to attribute contextually relevant interpretations to [MASK] across diverse scenarios.\nFollowing the aforementioned principles, our devised templates are detailed in Table 1 ###reference_###. Given that the training set for unsupervised sentence representation tasks only includes a series of standalone sentences, different templates are adopted to construct anchor sentences, positive instances, and hard negative instances. These elements are subsequently integrated into our revised contrastive learning loss function for training, a procedure elaborated upon in subsection 3.2 ###reference_###. After forward computation, we take the output vector corresponding to the final [MASK] token as the sentence embedding for the input text, mirroring the essence of CoT reasoning, which focuses on the model\u2019s conclusive output rather than the intermediate process.\nAnchor Sentence\n\nThe sentence of \u201c[X]\u201d means [MASK], so it can be summarized as [MASK].\n\nPositive Instance\n\nThe sentence : \u201c[X]\u201d means [MASK], so it can be summarized as [MASK].\n\nHard Negative Instance\n\nThe sentence : \u201c[X]\u201d does not mean [MASK], so it cannot be summarized as [MASK].\nIn section 4 ###reference_###, this paper will further substantiate the rationality of CoT-BERT\u2019s two-stage sentence representation method and its correlation with CoT through an array of experiments. Specifically, we aim to explore the following pivotal inquiries:\nDoes the two-stage sentence representation strategy of CoT-BERT surpass the use of any single sub-stage alone?\nDoes the progressive relationship between sub-stages contribute to enhanced model performance? If affirmative, can a genuine progressive linkage between comprehension and summarization be established?\nCompared to static templates, does the adaptive reasoning facilitated by the [MASK] token exhibit better universality?\nExploring these questions will deepen our understanding of CoT-inspired multi-stage reasoning. 
Empirical validation is critical for providing definitive answers to these considerations."
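A minimal sketch of the two-stage embedding extraction with HuggingFace transformers, using the anchor template from Table 1 (straight quotes substituted for the typographic ones; not the full training pipeline):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

def two_stage_embedding(sentence: str) -> torch.Tensor:
    """Wrap the input in the anchor template and return the hidden state
    of the final [MASK] token, i.e. the summarization stage."""
    m = tokenizer.mask_token
    text = f'The sentence of "{sentence}" means {m}, so it can be summarized as {m}.'
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]           # (seq_len, dim)
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()
    return hidden[mask_pos[-1].item()]                          # the last [MASK]
```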
46
+ },
47
+ {
48
+ "section_id": "3.2",
49
+ "parent_section_id": "3",
50
+ "section_name": "Enhancing Contrast with Extended InfoNCE Loss",
51
+ "text": "Current methods for unsupervised sentence representation through contrastive learning commonly employ the InfoNCE Loss to guide the training process. For any given sentence within a batch of size , where denotes the encoding technique and represents a temperature coefficient, the InfoNCE Loss is defined as follows:\nIn Equation 1 ###reference_###, sim signifies a similarity metric between two sentence vectors, typically cosine similarity. This formula encourages the model to maximize the similarity between the anchor sentence vector and its positive instance , while simultaneously diminishing the similarity between and other unrelated sentence vectors within the same batch. Both SimCSE and PromptBERT adopt an InfoNCE Loss of this form. In SimCSE, positive instances are crafted using the intrinsic dropout of Transformer blocks, whereas PromptBERT employs different templates to generate positive samples. Although PromptBERT\u2019s comparative experiments reveal minimal performance distinctions between these two methods under their manual templates, the utilization of varied templates for data augmentation can additionally facilitate the creation of hard negatives.\nSpecifically, ConPVP designs multiple templates and integrates the comparison between the anchor sentence vector and the negative instance in the denominator of Equation 1 ###reference_###. By contrast, our CoT-BERT extends this by also introducing a comparison between the positive instance and the negative sample . The differences among these approaches are visualized in Figure 1 ###reference_###. Intuitively, our refined InfoNCE Loss incorporates more reference items into the sentence representation calculation process, rendering the distribution disparities among unrelated sentence vectors more pronounced within the semantic space. Subsection 4.4 ###reference_### will further elaborate on the effects of this modification with corresponding experimental results.\n###figure_1### Formalization of the described process leads to the ultimate loss function employed by CoT-BERT, as illustrated in Equation 2 ###reference_### and 3 ###reference_###. Here, represents the template-denoised sentence embedding for . Particularly, in this context, the standard InfoNCE Loss can be expressed as ."
52
+ },
53
+ {
54
+ "section_id": "3.3",
55
+ "parent_section_id": "3",
56
+ "section_name": "Leveraging [PAD] Tokens for Template Denoising",
57
+ "text": "Deriving sentence representations through prompts involves the concatenation of templates with input sentences, which are then fed into the encoder for a complete forward pass. However, due to the attention mechanism, the output vector of the [MASK] token is influenced by the presence of the template, potentially distorting its original semantics.\nTo address this concern, PromptBERT suggests sending an empty template into BERT while maintaining position ids identical to those utilized when incorporating the input sentence. This process generates the template bias . Ultimately, the embedding used in the contrastive learning objective function is the difference between the sentence vector and the template bias .\nFrom our perspective, a more effective approach than modeling input sentences with identical position ids is to populate [PAD] placeholders, matching the length of input sentences. This strategy naturally aligns position ids. Additionally, [PAD] placeholders can construct sentences devoid of significant meaning, thus serving to represent the inherent semantics of the template.\nFigure 2 ###reference_### provides a detailed illustration of our template denoising method. In line with our two-stage representation derivation method, we extract the embedding corresponding to the final [MASK] token as the representation for the empty template. Since samples are processed in batches within the encoder, we pad each template to a predefined maximum length. Subsequently, we set the attention masks for the empty template to 1, while assigning 0 to the attention masks for the padded tail.\n###figure_2### The comparison between our designed denoising strategy and the method proposed by PromptBERT, conducted on both BERT and RoBERTa, is discussed in subsection 4.5 ###reference_###."
58
+ },
59
+ {
60
+ "section_id": "4",
61
+ "parent_section_id": null,
62
+ "section_name": "Experiment",
63
+ "text": ""
64
+ },
65
+ {
66
+ "section_id": "4.1",
67
+ "parent_section_id": "4",
68
+ "section_name": "Setup",
69
+ "text": "In our experimental setup, we adhere to the established conventions by utilizing the SentEval [24 ###reference_b24###] toolkit for the evaluation of our model, CoT-BERT, across seven distinct English STS tasks [25 ###reference_b25###, 26 ###reference_b26###, 27 ###reference_b27###, 28 ###reference_b28###, 29 ###reference_b29###, 30 ###reference_b30###, 31 ###reference_b31###], with Spearman correlation as the metric.\nOur training dataset is sourced from SimCSE [11 ###reference_b11###], comprising sentences randomly sampled from English Wikipedia. During the training phase, we save checkpoints based on our model\u2019s performance on the development set of STS-Benchmark. In the validation phase, we directly compute the cosine similarity between sentence embeddings without any additional regressors. Furthermore, akin to PromptBERT, we only perform template denoising during the training process.\nOur selection of baselines includes a diverse range of models, encompassing both non-BERT-based methods such as GloVe [32 ###reference_b32###] and USE [33 ###reference_b33###], as well as the latest advancements in unsupervised sentence representation built upon BERT architecture. Specifically, the models chosen for comparison include BERT-flow, BERT-whitening, IS-BERT [34 ###reference_b34###], ConSERT, SimCSE, DCLR, ArcCSE, ESimCSE, DiffCSE, PCL, PromCSE, PromptBERT, ConPVP, RankEncoder, and RankCSE. This set of baselines serves to thoroughly validate the effectiveness of CoT-BERT from various perspectives."
70
+ },
71
+ {
72
+ "section_id": "4.2",
73
+ "parent_section_id": "4",
74
+ "section_name": "Main Results",
75
+ "text": "Table 2 ###reference_### provides a comprehensive overview of our primary experimental results across multiple STS datasets. When RoBERTabase is employed as the encoder, CoT-BERT achieves an exceptional Spearman\u2019s correlation score of 80.62, setting a new record for the current best performance. In scenarios where BERTbase serves as the encoder, our model attains an average Spearman correlation of 79.40 on the seven STS tasks. While this value falls slightly below the existing SOTA results achieved by RankCSE and RankEncoder, it is imperative to note that CoT-BERT\u2019s architecture exclusively relies on BERT itself, whereas RankCSE and RankEncoder incorporate other text representation models such as SimCSE, SNCSE [35 ###reference_b35###], or external database. Therefore, CoT-BERT\u2019s performance remains highly competitive and is obtained with a concise model configuration.\nMoreover, compared to SimCSE, CoT-BERT outperforms it by 3.15% and 4.05% on BERTbase and RoBERTabase, respectively. In contrast to PromptBERT, which also leverages prompt-based techniques, CoT-BERT exhibits a 0.86% improvement on BERTbase and a 1.47% enhancement on RoBERTabase. These results collectively provide compelling evidence attesting to the effectiveness of CoT-BERT.\nPLMs\n\nMethods\nSTS12\nSTS13\nSTS14\nSTS15\nSTS16\nSTS-B\nSICK-R\nAvg.\n\n\n\nNon-BERT\n\nGloVe(avg.)\n55.14\n70.66\n59.73\n68.25\n63.66\n58.02\n53.76\n61.32\n\nUSE\n64.49\n67.80\n64.61\n76.83\n73.18\n74.92\n76.69\n71.22\n\n\n\nBERTbase\n\nBERT-flow\n58.40\n67.10\n60.85\n75.16\n71.22\n68.66\n64.47\n66.55\n\nBERT-whitening\n57.83\n66.90\n60.90\n75.08\n71.31\n68.24\n63.73\n66.28\n\nIS-BERT\n56.77\n69.24\n61.21\n75.23\n70.16\n69.21\n64.25\n66.58\n\nConSERT\n64.64\n78.49\n69.07\n79.72\n75.95\n73.97\n67.31\n72.74\n\nSimCSE\n68.40\n82.41\n74.38\n80.91\n78.56\n76.85\n72.23\n76.25\n\nDCLR\n70.81\n83.73\n75.11\n82.56\n78.44\n78.31\n71.59\n77.22\n\nArcCSE\n72.08\n84.27\n76.25\n82.32\n79.54\n79.92\n72.39\n78.11\n\nESimCSE\n73.40\n83.27\n77.25\n82.66\n78.81\n80.17\n72.30\n78.27\n\nDiffCSE\n72.28\n84.43\n76.47\n83.90\n80.54\n80.59\n71.23\n78.49\n\nPCL\n72.84\n83.81\n76.52\n83.06\n79.32\n80.01\n73.38\n78.42\n\nPromCSE\n73.03\n85.18\n76.70\n84.19\n79.69\n80.62\n70.00\n78.49\n\nPromptBERT\n71.56\n84.58\n76.98\n84.47\n80.60\n81.60\n69.87\n78.54\n\nConPVP\n71.72\n84.95\n77.68\n83.64\n79.76\n80.82\n73.38\n78.85\n\nRankCSElistNet\n74.38\n85.97\n77.51\n84.46\n81.31\n81.46\n75.26\n80.05\n\nRankEncoder\n74.88\n85.59\n78.61\n83.50\n80.56\n81.55\n75.78\n80.07\n\nCoT-BERT\n72.56\n85.53\n77.91\n85.05\n80.94\n82.40\n71.41\n79.40\n\n\n\nRoBERTabase\n\nSimCSE\n70.16\n81.77\n73.24\n81.36\n80.65\n80.22\n68.56\n76.57\n\nDCLR\n70.01\n83.08\n75.09\n83.66\n81.06\n81.86\n70.33\n77.87\n\nDiffCSE\n70.05\n83.43\n75.49\n82.81\n82.12\n82.38\n71.19\n78.21\n\nPCL\n71.13\n82.38\n75.40\n83.07\n81.98\n81.63\n69.72\n77.90\n\nESimCSE\n69.90\n82.50\n74.68\n83.19\n80.30\n80.99\n70.54\n77.44\n\nPromptRoBERTa\n73.94\n84.74\n77.28\n84.99\n81.74\n81.88\n69.50\n79.15\n\nConPVP\n73.20\n83.22\n76.24\n83.37\n81.49\n82.18\n74.59\n79.18\n\nRankCSElistNet\n72.91\n85.72\n76.94\n84.52\n82.59\n83.46\n71.94\n79.73\n\nCoT-RoBERTa\n75.43\n85.47\n78.74\n85.64\n82.21\n83.40\n73.46\n80.62"
76
+ },
+ {
+ "section_id": "4.3",
+ "parent_section_id": "4",
+ "section_name": "Assessing our Two-stage Sentence Representation",
+ "text": "To further ascertain the efficacy of CoT-BERT\u2019s two-stage sentence representation strategy and address the inquiries raised in subsection 3.1 ###reference_###, we embark on an extensive series of experiments. For the manual templates provided in Table 1 ###reference_###, we define the initial and latter parts of the template as its prefix and suffix, respectively. To illustrate, for the anchor sentence template: \u201cThe sentence of \u2018[X]\u2019 means [MASK], so it can be summarized as [MASK].\u201d, we designate its prefix as \u201cThe sentence of \u2018[X]\u2019 means [MASK].\u201d and its suffix as \u201cThe sentence of \u2018[X]\u2019 can be summarized as [MASK].\u201d When we refer to \u201cprefix + suffix\u201d, it implies the use of our complete, original templates. The handling of positive instances and hard negative instances follows a similar pattern.\nMulti-stage vs. Single-stage: Here, we explore whether CoT-BERT\u2019s two-stage sentence representation method surpasses the performance of each sub-stage when deployed independently. To this end, we conduct two sets of comparative experiments: prefix vs. prefix + suffix and suffix vs. prefix + suffix. If prefix + suffix consistently performs better, it indicates that the integrative application of both stages effectively bolsters the model\u2019s performance.\nProgressive Relationship Between Sub-stages: In this part, we replace CoT-BERT\u2019s original template prefix with a non-sequitur (termed irrelevant prefix) and examine whether the performance of irrelevant prefix + suffix deteriorates compared to original prefix + suffix, thereby proving the necessity of a logical connection between the two sub-stages. An example modification includes changing the anchor sentence template to \u201cPenguin is a flightless bird, and the sentence of \u2018[X]\u2019 can be summarized as [MASK].\u201d The positive and hard negative instance templates are treated in the same manner.\nAdditionally, by reversing the order of the two sub-stages in CoT-BERT templates to suffix + prefix, we seek to demonstrate the significance of the progressive reasoning process from comprehension to summarization. For instance, the anchor sentence template is converted to \u201cThe sentence of \u2018[X]\u2019 can be summarized as [MASK], so it means [MASK].\u201d The positive instance template and hard negative instance template are adjusted accordingly.\nAdaptivity in Multi-stage Reasoning: Herein, we substitute the first [MASK] token in CoT-BERT templates with a static element (static prefix + suffix) to observe performance disparities against the dynamic prefix + suffix setup. This experiment aims to underscore the pivotal role of adaptive reasoning facilitated by the intermediate [MASK] token, notwithstanding that the derivation of sentence embeddings is from the final [MASK] token. For illustration, the anchor sentence template is changed to \u201cThe sentence of \u2018[X]\u2019 means something, so it can be summarized as [MASK].\u201d, with corresponding adjustments made to the templates for positive and hard negative instances.\nThe results of these experiments are recorded in Table 3 ###reference_###, demonstrating that the prefix + suffix configuration achieves the highest average Spearman\u2019s correlation score across seven STS benchmarks. These findings affirm the rationality and effectiveness of CoT-BERT\u2019s two-stage sentence representation strategy. 
Additionally, the specific settings and sequencing of the comprehension and summarization stages, together with the introduction of adaptive [MASK] tokens, adeptly emulate the essence of CoT reasoning."
82
+ },
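For concreteness, the template variants compared in this ablation can be written out as below. The strings follow Table 1 and the examples in the section text (with typographic quotes simplified to ASCII); the helper function itself is purely illustrative and not part of the paper's code.

def build_template_variants(x: str) -> dict:
    # Anchor-sentence templates from Table 1 and subsection 4.3;
    # the variant names mirror the rows of Table 3.
    return {
        "only prefix": f"The sentence of '{x}' means [MASK].",
        "only suffix": f"The sentence of '{x}' can be summarized as [MASK].",
        "prefix + suffix": f"The sentence of '{x}' means [MASK], "
                           f"so it can be summarized as [MASK].",
        "irrelevant prefix + suffix": f"Penguin is a flightless bird, and the "
                                      f"sentence of '{x}' can be summarized as [MASK].",
        "static prefix + suffix": f"The sentence of '{x}' means something, "
                                  f"so it can be summarized as [MASK].",
        "suffix + prefix": f"The sentence of '{x}' can be summarized as [MASK], "
                           f"so it means [MASK].",
    }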
+ {
+ "section_id": "4.4",
+ "parent_section_id": "4",
+ "section_name": "Evaluating our Extended InfoNCE Loss",
+ "text": "In subsection 3.2 ###reference_###, we have expounded upon our proposed Extended InfoNCE Loss and its design concept. Departing from prior work, we introduce a comparison between the anchor sentence and negative samples in the loss function\u2019s denominator, as well as a comparison between the positive and negative instances, with the latter being proposed by CoT-BERT for the first time. In line with the symbol definitions in Equation 2 ###reference_###, we maintain the usage of to denote the similarity computation between the positive and negative instances.\nTable 4 ###reference_### displays the performance disparities exhibited by CoT-BERT across seven STS tasks, both with and without the introduction of . The values in the table correspond to the average Spearman correlation. Remarkably, irrespective of whether BERTbase or RoBERTabase serves as the PLM, our extended InfoNCE Loss consistently yields improvements in the model\u2019s performance.\nTo offer deeper insights into the underlying factors underpinning these results, we conduct alignment and uniformity analyses on three models: PromptBERT, CoT-BERT (with and without ). This evaluation utilizes the STS-B test set, a corpus encompassing a total of 1,379 sentence pairs, each accompanied by a similarity score ranging from 0.0 to 5.0.\nFor the computation of uniformity, the entire set of 1,379 sentence pairs is employed. In the case of alignment calculation, we filter for sentence pairs with similarity scores greater than 4.0. The results are detailed in Table 5 ###reference_###.\nBoth alignment and uniformity serve as established metrics for evaluating the model\u2019s semantic space quality. Given a data distribution , alignment quantifies the expected distance between samples and their corresponding positive instances, defined as below:\nUniformity, on the other hand, reflects the overall evenness of the sentence vector space by calculating the average distance between embeddings of any two semantically unrelated texts:\nThe experimental outcomes in Table 5 ###reference_### provide empirical evidence in support of the intuitive explanations outlined in subsection 3.2 ###reference_###. We posit that the incorporation of introduces additional contextual references during the training process, thus enhancing the model\u2019s discriminative power among diverse samples. The inclusion of further amplifies this effect by concurrently distinguishing negative instances from both anchor sentences and positive instances, thereby leading to an augmentation in the model\u2019s uniformity.\nMeanwhile, a diminution in the alignment metric is observed. We attribute this phenomenon to two potential factors. Firstly, while computing alignment with the STS-B test set, we categorize sentence pairs with a similarity exceeding 4.0 as positive examples, a threshold selection that may introduce bias. Additionally, when modifying the objective function to impose more stringent requirements on the model, there is no corresponding increase in the number of model iterations. Consequently, within a constrained number of update steps, the model may struggle to optimally adjust its alignment."
88
+ },
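A rough sketch of the quantities discussed above may help. The extended InfoNCE term below is only an approximation of the idea described in the text (the exact form of the paper's Equation 2 is not reproduced here), while alignment and uniformity follow the standard definitions used for Table 5; embeddings are assumed L2-normalized.

import torch
import torch.nn.functional as F

def extended_infonce(h, h_pos, h_neg, tau=0.05):
    # h, h_pos, h_neg: (batch, dim) embeddings of anchors, positives, hard
    # negatives. Illustrative approximation: the denominator contrasts the
    # anchor with in-batch positives and hard negatives, plus the positive
    # with the hard negatives (the term newly introduced by CoT-BERT).
    pair = lambda a, b: F.cosine_similarity(a.unsqueeze(1), b.unsqueeze(0), dim=-1) / tau
    numerator = F.cosine_similarity(h, h_pos, dim=-1) / tau
    denominator = torch.cat([pair(h, h_pos), pair(h, h_neg), pair(h_pos, h_neg)], dim=1)
    return (-numerator + torch.logsumexp(denominator, dim=1)).mean()

def alignment(x, x_pos):
    # E ||f(x) - f(x+)||^2 over positive pairs; lower is better.
    return (x - x_pos).norm(p=2, dim=1).pow(2).mean()

def uniformity(x):
    # log E exp(-2 ||f(x) - f(y)||^2) over all pairs; lower is better.
    return torch.pdist(x, p=2).pow(2).mul(-2).exp().mean().log()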
+ {
+ "section_id": "4.5",
+ "parent_section_id": "4",
+ "section_name": "Assessing our Template Denoising Strategy",
+ "text": "We also embark on an ablation study regarding the template denoising method of CoT-BERT. To distinguish between the two denoising strategies under examination, we refer to the technique introduced by PromptBERT as \u201cposition denoise\u201d and our proposed approach as \u201c[PAD] denoise.\u201d\nThe primary difference between these two lies in the fact that \u201c[PAD] denoise\u201d does not deliberately adjust the values of position ids but rather models an empty template by injecting [PAD] placeholders of identical length as the input sentence, accompanied by corresponding adaptations to the attention masks.\nOur evaluation of different denoising methods on the seven STS tasks is presented in Table 6 ###reference_###. As in our prior assessments, we continue to report the model\u2019s average Spearman correlation.\nThe empirical findings compellingly demonstrate the advantages conferred by the introduction of denoising encodings within the contrastive learning loss function. Moreover, \u201c[PAD] denoise\u201d notably outperforms its \u201cposition denoise\u201d counterpart.\nOne possible explanation for the effectiveness of template denoising is that sentence embeddings derived from PLMs may contain some general information, such as syntax and sentence structure. During the contrastive learning process, subtracting that kind of information helps highlight the distinctions between input sentences, thereby fostering more precise clustering outcomes."
94
+ },
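The "[PAD] denoise" mechanism described above can be sketched roughly as follows. The whitespace-based token counting and the `encode` function are simplifications of what the real tokenizer-level implementation would do; this is an illustration under those assumptions, not the paper's code.

def pad_denoised_embedding(encode, template, sentence, pad_token="[PAD]"):
    # Embedding of the template filled with the actual sentence.
    h = encode(template.replace("[X]", sentence))
    # Empty template: the sentence slot is replaced by [PAD] placeholders of
    # the same length (the real implementation also adapts the attention
    # masks, without deliberately altering position ids).
    n_tokens = len(sentence.split())  # crude stand-in for subword counting
    h_bias = encode(template.replace("[X]", " ".join([pad_token] * n_tokens)))
    # Subtract the template-only bias from the sentence embedding.
    return h - h_bias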
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Discussion",
+ "text": ""
+ },
+ {
+ "section_id": "5.1",
+ "parent_section_id": "5",
+ "section_name": "Distribution of Predicted Values",
+ "text": "We present a visualization of the predicted value distribution generated by CoT-BERT using the STS-B test set in Figure 3 ###reference_###. It should be noted that the model remains unexposed to these data during its training phase. Therefore, the model\u2019s performance on this dataset can serve as a reliable indicator of its overall efficacy.\n###figure_3### As illustrated in Figure 3 ###reference_###, the initial RoBERTa checkpoint demonstrates limited discriminative ability for sentence pairs with varying degrees of similarity. It tends to yield higher predicted values across the board. In contrast, both PromptBERT and CoT-BERT display a clear upward trend in predicted values as the similarity between sentences increases.\nFurthermore, CoT-BERT distinctly outperforms PromptBERT, especially in handling samples with annotated similarity scores ranging from 0 to 2. CoT-BERT\u2019s predictions within this range are considerably more concentrated, indicating enhanced precision. Additionally, it is noteworthy that for some samples with a true similarity score of 1, PromptBERT\u2019s predicted values are even higher than those for most samples with a true similarity score of 2, while CoT-BERT does not exhibit such a pattern."
+ },
+ {
+ "section_id": "5.2",
+ "parent_section_id": "5",
+ "section_name": "Introducing More Stages",
+ "text": "In subsection 3.1 ###reference_###, we have elucidated the underlying design principles behind CoT-BERT\u2019s two-stage sentence representation method. Furthermore, in subsection 4.3 ###reference_###, we have empirically demonstrated the superiority of this approach over using either sub-stage in isolation. Naturally, this leads to a pertinent question: if we further divide the template and introduce more sub-stages, will the model\u2019s performance continue to improve?\nRegrettably, due to constraints on computational resources, we are currently unable to conduct experiments in this regard. Nevertheless, we recognize several crucial considerations that warrant careful attention in such investigations.\nFirstly, increasing the number of stages within the template inherently augments the complexity of the corresponding prompt. This heightened complexity demands substantial effort in devising and selecting the most suitable prompts. Additionally, even if a template performs well on a specific PLM, its adaptability to other PLMs remains uncertain. Besides, as the template\u2019s length expands, the weight of the input sentence [X] within the prompt gradually diminishes, and its distance from the final [MASK] token increases. This could potentially result in the model inadequately capturing the semantics of [X]. Moreover, it\u2019s worth mentioning that certain concise short sentences in natural language may not lend themselves well to being segmented into multiple stages. Lastly, the presence of templates compresses the maximum input length that a PLM can accommodate, thereby affecting the model\u2019s capacity to handle longer texts. We leave the exploration of these aspects to future work."
+ },
+ {
+ "section_id": "6",
+ "parent_section_id": null,
+ "section_name": "Conclusion",
+ "text": "In this study, we propose CoT-BERT, a pioneering strategy for sentence representation computation. To the best of our knowledge, CoT-BERT is the first work that combines the Chain-of-Thought (CoT) concept with text representation tasks. Furthermore, we improve the distribution of sentence embeddings within the BERT semantic space by introducing an extended InfoNCE Loss. Additionally, we devise a more efficacious template denoising method to mitigate the impact of prompt-induced biases on sentence semantics.\nExperimental findings across seven Semantic Textual Similarity tasks unequivocally affirm the outstanding efficacy of CoT-BERT. It surpasses a spectrum of formidable baselines and achieves state-of-the-art performance without relying on additional text representation models or external databases. Comprehensive ablation experiments demonstrate that CoT-BERT\u2019s two-stage sentence representation, extended InfoNCE loss, and refined template denoising methods collectively contribute to the enhancements in its overall performance."
+ }
+ ],
+ "appendix": [],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Manual templates for our prompt-based learning. To alleviate burden and ensure conciseness in CoT-BERT, the templates selected for the anchor sentence, positive instance, and hard negative instance only have slight differences.</figcaption>\n<p class=\"ltx_p ltx_align_center\" id=\"S3.T1.1\"><span class=\"ltx_text ltx_inline-block\" id=\"S3.T1.1.1\" style=\"width:433.6pt;\">\n<span class=\"ltx_inline-block ltx_transformed_outer\" id=\"S3.T1.1.1.1\" style=\"width:392.9pt;height:108pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(0.0pt,0.0pt) scale(1,1) ;\">\n<span class=\"ltx_p\" id=\"S3.T1.1.1.1.1\"><span class=\"ltx_text\" id=\"S3.T1.1.1.1.1.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S3.T1.1.1.1.1.1.1\">\n<span class=\"ltx_tbody\">\n<span class=\"ltx_tr\" id=\"S3.T1.1.1.1.1.1.1.1.1\">\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.1.1.1.1.1.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.1.1.1.1.1.1.1.1\">Anchor Sentence</span></span></span>\n<span class=\"ltx_tr\" id=\"S3.T1.1.1.1.1.1.1.2.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.1.1.1.1.1.2.2.1\">The sentence <span class=\"ltx_text\" id=\"S3.T1.1.1.1.1.1.1.2.2.1.1\" style=\"color:#FF0000;\">of</span> \u201c[X]\u201d means [MASK], so it can be summarized as [MASK].</span></span>\n<span class=\"ltx_tr\" id=\"S3.T1.1.1.1.1.1.1.3.3\">\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.1.1.1.1.1.3.3.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.1.1.1.1.3.3.1.1\">Positive Instance</span></span></span>\n<span class=\"ltx_tr\" id=\"S3.T1.1.1.1.1.1.1.4.4\">\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.1.1.1.1.1.4.4.1\">The sentence <span class=\"ltx_text\" id=\"S3.T1.1.1.1.1.1.1.4.4.1.1\" style=\"color:#FF0000;\">:</span> \u201c[X]\u201d means [MASK], so it can be summarized as [MASK].</span></span>\n<span class=\"ltx_tr\" id=\"S3.T1.1.1.1.1.1.1.5.5\">\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.1.1.1.1.1.5.5.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.1.1.1.1.5.5.1.1\">Hard Negative Instance</span></span></span>\n<span class=\"ltx_tr\" id=\"S3.T1.1.1.1.1.1.1.6.6\">\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.1.1.1.1.1.1.6.6.1\">The sentence : \u201c[X]\u201d <span class=\"ltx_text\" id=\"S3.T1.1.1.1.1.1.1.6.6.1.1\" style=\"color:#FF0000;\">does not</span> mean [MASK], so it <span class=\"ltx_text\" id=\"S3.T1.1.1.1.1.1.1.6.6.1.2\" style=\"color:#FF0000;\">cannot</span> be summarized as [MASK].</span></span>\n</span>\n</span></span></span>\n</span></span></span></p>\n</figure>",
+ "capture": "Table 1: Manual templates for our prompt-based learning. To alleviate burden and ensure conciseness in CoT-BERT, the templates selected for the anchor sentence, positive instance, and hard negative instance only have slight differences."
+ },
+ "2": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Performance of different models on STS tasks under <span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.6.1\">unsupervised</span> settings. Consistent with established research conventions, the table displays the Spearman\u2019s rank correlation between model predictions and human-annotated scores. Baseline results are sourced from original papers. The highest scores obtained by models using the same PLM for each dataset are highlighted in bold.</figcaption>\n<p class=\"ltx_p ltx_align_center\" id=\"S4.T2.4\"><span class=\"ltx_text ltx_inline-block\" id=\"S4.T2.4.4\" style=\"width:433.6pt;\">\n<span class=\"ltx_inline-block ltx_transformed_outer\" id=\"S4.T2.4.4.4.4\" style=\"width:485.5pt;height:505pt;vertical-align:-1.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(0.0pt,0.0pt) scale(1,1) ;\">\n<span class=\"ltx_p\" id=\"S4.T2.4.4.4.4.4\"><span class=\"ltx_text\" id=\"S4.T2.4.4.4.4.4.4\">\n<span class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.4.4.4.4.4.4.4\">\n<span class=\"ltx_tbody\">\n<span class=\"ltx_tr\" id=\"S4.T2.4.4.4.4.4.4.4.5.1\">\n<span class=\"ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_row ltx_border_r ltx_border_tt\" id=\"S4.T2.4.4.4.4.4.4.4.5.1.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.4.4.4.4.4.4.4.5.1.1.1\">\n<span class=\"ltx_p\" id=\"S4.T2.4.4.4.4.4.4.4.5.1.1.1.1\" style=\"width:59.8pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.4.4.4.4.4.4.5.1.1.1.1.1\">PLMs</span></span>\n</span></span>\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_tt\" id=\"S4.T2.4.4.4.4.4.4.4.5.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.4.4.4.4.4.4.5.1.2.1\">Methods</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.4.4.4.4.4.4.4.5.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.4.4.4.4.4.4.5.1.3.1\">STS12</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.4.4.4.4.4.4.4.5.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.4.4.4.4.4.4.5.1.4.1\">STS13</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.4.4.4.4.4.4.4.5.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.4.4.4.4.4.4.5.1.5.1\">STS14</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.4.4.4.4.4.4.4.5.1.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.4.4.4.4.4.4.5.1.6.1\">STS15</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.4.4.4.4.4.4.4.5.1.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.4.4.4.4.4.4.5.1.7.1\">STS16</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.4.4.4.4.4.4.4.5.1.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.4.4.4.4.4.4.5.1.8.1\">STS-B</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.4.4.4.4.4.4.4.5.1.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.4.4.4.4.4.4.5.1.9.1\">SICK-R</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.4.4.4.4.4.4.4.5.1.10\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.4.4.4.4.4.4.5.1.10.1\">Avg.</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.4.4.4.4.4.4.4.6.2\">\n<span class=\"ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_row ltx_border_r ltx_border_t ltx_rowspan ltx_rowspan_2\" 
id=\"S4.T2.4.4.4.4.4.4.4.6.2.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.4.4.4.4.4.4.4.6.2.1.1\">\n<span class=\"ltx_p\" id=\"S4.T2.4.4.4.4.4.4.4.6.2.1.1.1\" style=\"width:59.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.4.4.4.4.4.4.4.6.2.1.1.1.1\">Non-BERT</span></span>\n</span></span>\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.4.4.4.4.4.4.4.6.2.2\">GloVe(avg.)</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.4.4.4.4.4.4.6.2.3\">55.14</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.4.4.4.4.4.4.6.2.4\">70.66</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.4.4.4.4.4.4.6.2.5\">59.73</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.4.4.4.4.4.4.6.2.6\">68.25</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.4.4.4.4.4.4.6.2.7\">63.66</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.4.4.4.4.4.4.6.2.8\">58.02</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.4.4.4.4.4.4.6.2.9\">53.76</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.4.4.4.4.4.4.6.2.10\">61.32</span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.4.4.4.4.4.4.4.7.3\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.4.4.4.4.4.4.4.7.3.1\">USE</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.7.3.2\">64.49</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.7.3.3\">67.80</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.7.3.4\">64.61</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.7.3.5\">76.83</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.7.3.6\">73.18</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.7.3.7\">74.92</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.7.3.8\">76.69</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.7.3.9\">71.22</span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.1.1.1.1.1.1.1.1\">\n<span class=\"ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_row ltx_border_r ltx_border_t ltx_rowspan ltx_rowspan_16\" id=\"S4.T2.1.1.1.1.1.1.1.1.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.1.1.1.1.1.1.1.1.1.1\">\n<span class=\"ltx_p\" id=\"S4.T2.1.1.1.1.1.1.1.1.1.1.1\" style=\"width:59.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.1.1.1.1.1.1.1.1.1.1.1.1\">BERT<sub class=\"ltx_sub\" id=\"S4.T2.1.1.1.1.1.1.1.1.1.1.1.1.1\">base</sub></span></span>\n</span></span>\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.1.1.1.1.1.1.2\">BERT-flow</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.1.1.1.1.1.1.3\">58.40</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.1.1.1.1.1.1.4\">67.10</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.1.1.1.1.1.1.5\">60.85</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.1.1.1.1.1.1.6\">75.16</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.1.1.1.1.1.1.7\">71.22</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.1.1.1.1.1.1.8\">68.66</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.1.1.1.1.1.1.9\">64.47</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" 
id=\"S4.T2.1.1.1.1.1.1.1.1.10\">66.55</span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.4.4.4.4.4.4.4.8.4\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.4.4.4.4.4.4.4.8.4.1\">BERT-whitening</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.8.4.2\">57.83</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.8.4.3\">66.90</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.8.4.4\">60.90</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.8.4.5\">75.08</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.8.4.6\">71.31</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.8.4.7\">68.24</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.8.4.8\">63.73</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.8.4.9\">66.28</span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.4.4.4.4.4.4.4.9.5\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.4.4.4.4.4.4.4.9.5.1\">IS-BERT</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.9.5.2\">56.77</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.9.5.3\">69.24</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.9.5.4\">61.21</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.9.5.5\">75.23</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.9.5.6\">70.16</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.9.5.7\">69.21</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.9.5.8\">64.25</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.9.5.9\">66.58</span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.4.4.4.4.4.4.4.10.6\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.4.4.4.4.4.4.4.10.6.1\">ConSERT</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.10.6.2\">64.64</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.10.6.3\">78.49</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.10.6.4\">69.07</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.10.6.5\">79.72</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.10.6.6\">75.95</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.10.6.7\">73.97</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.10.6.8\">67.31</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.10.6.9\">72.74</span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.4.4.4.4.4.4.4.11.7\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.4.4.4.4.4.4.4.11.7.1\">SimCSE</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.11.7.2\">68.40</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.11.7.3\">82.41</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.11.7.4\">74.38</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.11.7.5\">80.91</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.11.7.6\">78.56</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.11.7.7\">76.85</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.11.7.8\">72.23</span>\n<span class=\"ltx_td ltx_align_center\" 
id=\"S4.T2.4.4.4.4.4.4.4.11.7.9\">76.25</span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.4.4.4.4.4.4.4.12.8\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.4.4.4.4.4.4.4.12.8.1\">DCLR</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.12.8.2\">70.81</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.12.8.3\">83.73</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.12.8.4\">75.11</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.12.8.5\">82.56</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.12.8.6\">78.44</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.12.8.7\">78.31</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.12.8.8\">71.59</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.12.8.9\">77.22</span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.4.4.4.4.4.4.4.13.9\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.4.4.4.4.4.4.4.13.9.1\">ArcCSE</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.13.9.2\">72.08</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.13.9.3\">84.27</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.13.9.4\">76.25</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.13.9.5\">82.32</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.13.9.6\">79.54</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.13.9.7\">79.92</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.13.9.8\">72.39</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.13.9.9\">78.11</span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.4.4.4.4.4.4.4.14.10\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.4.4.4.4.4.4.4.14.10.1\">ESimCSE</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.14.10.2\">73.40</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.14.10.3\">83.27</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.14.10.4\">77.25</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.14.10.5\">82.66</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.14.10.6\">78.81</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.14.10.7\">80.17</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.14.10.8\">72.30</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.14.10.9\">78.27</span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.4.4.4.4.4.4.4.15.11\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.4.4.4.4.4.4.4.15.11.1\">DiffCSE</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.15.11.2\">72.28</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.15.11.3\">84.43</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.15.11.4\">76.47</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.15.11.5\">83.90</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.15.11.6\">80.54</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.15.11.7\">80.59</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.15.11.8\">71.23</span>\n<span class=\"ltx_td 
ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.15.11.9\">78.49</span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.4.4.4.4.4.4.4.16.12\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.4.4.4.4.4.4.4.16.12.1\">PCL</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.16.12.2\">72.84</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.16.12.3\">83.81</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.16.12.4\">76.52</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.16.12.5\">83.06</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.16.12.6\">79.32</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.16.12.7\">80.01</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.16.12.8\">73.38</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.16.12.9\">78.42</span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.4.4.4.4.4.4.4.17.13\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.4.4.4.4.4.4.4.17.13.1\">PromCSE</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.17.13.2\">73.03</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.17.13.3\">85.18</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.17.13.4\">76.70</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.17.13.5\">84.19</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.17.13.6\">79.69</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.17.13.7\">80.62</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.17.13.8\">70.00</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.17.13.9\">78.49</span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.4.4.4.4.4.4.4.18.14\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.4.4.4.4.4.4.4.18.14.1\">PromptBERT</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.18.14.2\">71.56</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.18.14.3\">84.58</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.18.14.4\">76.98</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.18.14.5\">84.47</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.18.14.6\">80.60</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.18.14.7\">81.60</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.18.14.8\">69.87</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.18.14.9\">78.54</span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.4.4.4.4.4.4.4.19.15\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.4.4.4.4.4.4.4.19.15.1\">ConPVP</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.19.15.2\">71.72</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.19.15.3\">84.95</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.19.15.4\">77.68</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.19.15.5\">83.64</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.19.15.6\">79.76</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.19.15.7\">80.82</span>\n<span class=\"ltx_td ltx_align_center\" 
id=\"S4.T2.4.4.4.4.4.4.4.19.15.8\">73.38</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.19.15.9\">78.85</span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.2.2.2.2.2.2.2.2\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.2.2.2.2.2.2.2.2.1\">RankCSE<sub class=\"ltx_sub\" id=\"S4.T2.2.2.2.2.2.2.2.2.1.1\">listNet</sub></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.2.2.2.2.2.2.2\">74.38</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.2.2.2.2.2.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.2.2.2.2.2.2.2.3.1\">85.97</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.2.2.2.2.2.2.4\">77.51</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.2.2.2.2.2.2.5\">84.46</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.2.2.2.2.2.2.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.2.2.2.2.2.2.2.6.1\">81.31</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.2.2.2.2.2.2.7\">81.46</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.2.2.2.2.2.2.8\">75.26</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.2.2.2.2.2.2.9\">80.05</span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.4.4.4.4.4.4.4.20.16\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.4.4.4.4.4.4.4.20.16.1\">RankEncoder</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.20.16.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.4.4.4.4.4.4.20.16.2.1\">74.88</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.20.16.3\">85.59</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.20.16.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.4.4.4.4.4.4.20.16.4.1\">78.61</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.20.16.5\">83.50</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.20.16.6\">80.56</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.20.16.7\">81.55</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.20.16.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.4.4.4.4.4.4.20.16.8.1\">75.78</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.20.16.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.4.4.4.4.4.4.20.16.9.1\">80.07</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.4.4.4.4.4.4.4.21.17\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.4.4.4.4.4.4.4.21.17.1\">CoT-BERT</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.21.17.2\">72.56</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.21.17.3\">85.53</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.21.17.4\">77.91</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.21.17.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.4.4.4.4.4.4.21.17.5.1\">85.05</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.21.17.6\">80.94</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.21.17.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.4.4.4.4.4.4.21.17.7.1\">82.40</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.21.17.8\">71.41</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.21.17.9\">79.40</span></span>\n<span class=\"ltx_tr\" 
id=\"S4.T2.3.3.3.3.3.3.3.3\">\n<span class=\"ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_row ltx_border_bb ltx_border_r ltx_border_t ltx_rowspan ltx_rowspan_9\" id=\"S4.T2.3.3.3.3.3.3.3.3.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.3.3.3.3.3.3.3.3.1.1\">\n<span class=\"ltx_p\" id=\"S4.T2.3.3.3.3.3.3.3.3.1.1.1\" style=\"width:59.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.3.3.3.3.3.3.3.1.1.1.1\">RoBERTa<sub class=\"ltx_sub\" id=\"S4.T2.3.3.3.3.3.3.3.3.1.1.1.1.1\">base</sub></span></span>\n</span></span>\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.3.3.3.3.3.3.3.3.2\">SimCSE</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.3.3.3.3.3.3.3.3\">70.16</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.3.3.3.3.3.3.3.4\">81.77</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.3.3.3.3.3.3.3.5\">73.24</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.3.3.3.3.3.3.3.6\">81.36</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.3.3.3.3.3.3.3.7\">80.65</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.3.3.3.3.3.3.3.8\">80.22</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.3.3.3.3.3.3.3.9\">68.56</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.3.3.3.3.3.3.3.10\">76.57</span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.4.4.4.4.4.4.4.22.18\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.4.4.4.4.4.4.4.22.18.1\">DCLR</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.22.18.2\">70.01</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.22.18.3\">83.08</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.22.18.4\">75.09</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.22.18.5\">83.66</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.22.18.6\">81.06</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.22.18.7\">81.86</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.22.18.8\">70.33</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.22.18.9\">77.87</span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.4.4.4.4.4.4.4.23.19\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.4.4.4.4.4.4.4.23.19.1\">DiffCSE</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.23.19.2\">70.05</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.23.19.3\">83.43</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.23.19.4\">75.49</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.23.19.5\">82.81</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.23.19.6\">82.12</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.23.19.7\">82.38</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.23.19.8\">71.19</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.23.19.9\">78.21</span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.4.4.4.4.4.4.4.24.20\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.4.4.4.4.4.4.4.24.20.1\">PCL</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.24.20.2\">71.13</span>\n<span class=\"ltx_td ltx_align_center\" 
id=\"S4.T2.4.4.4.4.4.4.4.24.20.3\">82.38</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.24.20.4\">75.40</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.24.20.5\">83.07</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.24.20.6\">81.98</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.24.20.7\">81.63</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.24.20.8\">69.72</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.24.20.9\">77.90</span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.4.4.4.4.4.4.4.25.21\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.4.4.4.4.4.4.4.25.21.1\">ESimCSE</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.25.21.2\">69.90</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.25.21.3\">82.50</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.25.21.4\">74.68</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.25.21.5\">83.19</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.25.21.6\">80.30</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.25.21.7\">80.99</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.25.21.8\">70.54</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.25.21.9\">77.44</span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.4.4.4.4.4.4.4.26.22\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.4.4.4.4.4.4.4.26.22.1\">PromptRoBERTa</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.26.22.2\">73.94</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.26.22.3\">84.74</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.26.22.4\">77.28</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.26.22.5\">84.99</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.26.22.6\">81.74</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.26.22.7\">81.88</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.26.22.8\">69.50</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.26.22.9\">79.15</span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.4.4.4.4.4.4.4.27.23\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.4.4.4.4.4.4.4.27.23.1\">ConPVP</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.27.23.2\">73.20</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.27.23.3\">83.22</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.27.23.4\">76.24</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.27.23.5\">83.37</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.27.23.6\">81.49</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.27.23.7\">82.18</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.27.23.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.4.4.4.4.4.4.27.23.8.1\">74.59</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.27.23.9\">79.18</span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.4.4.4.4.4.4.4.4\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.4.4.4.4.4.4.4.4.1\">RankCSE<sub class=\"ltx_sub\" 
id=\"S4.T2.4.4.4.4.4.4.4.4.1.1\">listNet</sub></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.4.2\">72.91</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.4.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.4.4.4.4.4.4.4.3.1\">85.72</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.4.4\">76.94</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.4.5\">84.52</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.4.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.4.4.4.4.4.4.4.6.1\">82.59</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.4.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.4.4.4.4.4.4.4.7.1\">83.46</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.4.8\">71.94</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4.4.4.4.4.9\">79.73</span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.4.4.4.4.4.4.4.28.24\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_r\" id=\"S4.T2.4.4.4.4.4.4.4.28.24.1\">CoT-RoBERTa</span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.4.4.4.4.4.4.4.28.24.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.4.4.4.4.4.4.28.24.2.1\">75.43</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.4.4.4.4.4.4.4.28.24.3\">85.47</span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.4.4.4.4.4.4.4.28.24.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.4.4.4.4.4.4.28.24.4.1\">78.74</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.4.4.4.4.4.4.4.28.24.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.4.4.4.4.4.4.28.24.5.1\">85.64</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.4.4.4.4.4.4.4.28.24.6\">82.21</span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.4.4.4.4.4.4.4.28.24.7\">83.40</span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.4.4.4.4.4.4.4.28.24.8\">73.46</span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.4.4.4.4.4.4.4.28.24.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.4.4.4.4.4.4.28.24.9.1\">80.62</span></span></span>\n</span>\n</span></span></span>\n</span></span></span></p>\n</figure>",
+ "capture": "Table 2: Performance of different models on STS tasks under unsupervised settings. Consistent with established research conventions, the table displays the Spearman\u2019s rank correlation between model predictions and human-annotated scores. Baseline results are sourced from original papers. The highest scores obtained by models using the same PLM for each dataset are highlighted in bold."
+ },
+ "3": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Comparative analysis of CoT-BERT\u2019s two-stage sentence representation.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T3.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T3.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_t\" id=\"S4.T3.1.1.2\" style=\"padding:1.35pt 2.0pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.1.1.1\" style=\"padding:1.35pt 2.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T3.1.1.1.1\" style=\"font-size:90%;\">BERT</span>\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T3.1.2.1.1\" style=\"padding:1.35pt 2.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.1.2.1.1.1\" style=\"font-size:90%;\">only prefix</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.2.1.2\" style=\"padding:1.35pt 2.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.1.2.1.2.1\" style=\"font-size:90%;\">78.91</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.1.3.2.1\" style=\"padding:1.35pt 2.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.1.3.2.1.1\" style=\"font-size:90%;\">only suffix</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.3.2.2\" style=\"padding:1.35pt 2.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.1.3.2.2.1\" style=\"font-size:90%;\">78.83</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.4.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.1.4.3.1\" style=\"padding:1.35pt 2.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.1.4.3.1.1\" style=\"font-size:90%;\">irrelevant prefix + suffix</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.4.3.2\" style=\"padding:1.35pt 2.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.1.4.3.2.1\" style=\"font-size:90%;\">78.44</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.5.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.1.5.4.1\" style=\"padding:1.35pt 2.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.1.5.4.1.1\" style=\"font-size:90%;\">static prefix + suffix</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.5.4.2\" style=\"padding:1.35pt 2.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.1.5.4.2.1\" style=\"font-size:90%;\">78.91</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.6.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.1.6.5.1\" style=\"padding:1.35pt 2.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.1.6.5.1.1\" style=\"font-size:90%;\">suffix + prefix</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.6.5.2\" style=\"padding:1.35pt 2.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.1.6.5.2.1\" style=\"font-size:90%;\">78.46</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.7.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b\" id=\"S4.T3.1.7.6.1\" style=\"padding:1.35pt 2.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.1.7.6.1.1\" style=\"font-size:90%;\">prefix + suffix</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T3.1.7.6.2\" style=\"padding:1.35pt 2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.7.6.2.1\" style=\"font-size:90%;\">79.40</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "Table 3: Comparative analysis of CoT-BERT\u2019s two-stage sentence representation."
+ },
+ "4": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T4\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 4: </span>Ablation experiments on the extended InfoNCE Loss for CoT-BERT.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T4.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T4.2.2\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_t\" id=\"S4.T4.2.2.3\" style=\"padding:0.9pt 2.0pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.1.1.1\" style=\"padding:0.9pt 2.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T4.1.1.1.1\" style=\"font-size:90%;\">BERT</span>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.2.2.2\" style=\"padding:0.9pt 2.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T4.2.2.2.1\" style=\"font-size:90%;\">RoBERTa</span>\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T4.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T4.3.3.1\" style=\"padding:0.9pt 2.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T4.3.3.1.1\" style=\"font-size:90%;\">CoT-BERT (without </span><span class=\"ltx_text\" id=\"S4.T4.3.3.1.2\" style=\"font-size:90%;\">)</span>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.3.3.2\" style=\"padding:0.9pt 2.0pt;\"><span class=\"ltx_text\" id=\"S4.T4.3.3.2.1\" style=\"font-size:90%;\">79.11</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.3.3.3\" style=\"padding:0.9pt 2.0pt;\"><span class=\"ltx_text\" id=\"S4.T4.3.3.3.1\" style=\"font-size:90%;\">80.30</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b\" id=\"S4.T4.4.4.1\" style=\"padding:0.9pt 2.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T4.4.4.1.1\" style=\"font-size:90%;\">CoT-BERT (with </span><span class=\"ltx_text\" id=\"S4.T4.4.4.1.2\" style=\"font-size:90%;\">)</span>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T4.4.4.2\" style=\"padding:0.9pt 2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.4.2.1\" style=\"font-size:90%;\">79.40</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T4.4.4.3\" style=\"padding:0.9pt 2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.4.3.1\" style=\"font-size:90%;\">80.62</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "Table 4: Ablation experiments on the extended InfoNCE Loss for CoT-BERT."
+ },
+ "5": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T5\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 5: </span>Results of alignment and uniformity calculations performed on models using the STS-B test set. Lower values indicate better performance. All three sets of experiments employed RoBERTa<sub class=\"ltx_sub\" id=\"S4.T5.7.1\">base</sub> as the pre-trained model.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T5.5\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T5.3.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S4.T5.3.1.1\" style=\"padding:0.9pt 2.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T5.3.1.1.1\" style=\"font-size:90%;\">PLM = RoBERTa</span>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T5.3.1.2\" style=\"padding:0.9pt 2.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.3.1.2.1\" style=\"font-size:90%;\">Alignment</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T5.3.1.3\" style=\"padding:0.9pt 2.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.3.1.3.1\" style=\"font-size:90%;\">Uniformity</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T5.5.4.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T5.5.4.1.1\" style=\"padding:0.9pt 2.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.5.4.1.1.1\" style=\"font-size:90%;\">PromptBERT</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.5.4.1.2\" style=\"padding:0.9pt 2.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.5.4.1.2.1\" style=\"font-size:90%;\">0.0957</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.5.4.1.3\" style=\"padding:0.9pt 2.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.5.4.1.3.1\" style=\"font-size:90%;\">- 1.2033</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.4.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T5.4.2.1\" style=\"padding:0.9pt 2.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T5.4.2.1.1\" style=\"font-size:90%;\">CoT-BERT (without </span><span class=\"ltx_text\" id=\"S4.T5.4.2.1.2\" style=\"font-size:90%;\">)</span>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.2.2\" style=\"padding:0.9pt 2.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.4.2.2.1\" style=\"font-size:90%;\">0.1089</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.2.3\" style=\"padding:0.9pt 2.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.4.2.3.1\" style=\"font-size:90%;\">- 1.3852</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.5.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b\" id=\"S4.T5.5.3.1\" style=\"padding:0.9pt 2.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T5.5.3.1.1\" style=\"font-size:90%;\">CoT-BERT (with </span><span class=\"ltx_text\" id=\"S4.T5.5.3.1.2\" style=\"font-size:90%;\">)</span>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T5.5.3.2\" style=\"padding:0.9pt 2.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.5.3.2.1\" style=\"font-size:90%;\">0.1278</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T5.5.3.3\" style=\"padding:0.9pt 2.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.5.3.3.1\" style=\"font-size:90%;\">- 1.5492</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "Table 5: Results of alignment and uniformity calculations performed on models using the STS-B test set. Lower values indicate better performance. All three sets of experiments employed RoBERTabase as the pre-trained model."
+ },
+ "6": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T6\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 6: </span>Ablation study of CoT-BERT\u2019s template denoising strategy.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T6.2\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T6.2.2\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_t\" id=\"S4.T6.2.2.3\" style=\"padding:0.9pt 2.0pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T6.1.1.1\" style=\"padding:0.9pt 2.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T6.1.1.1.1\" style=\"font-size:90%;\">BERT</span>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T6.2.2.2\" style=\"padding:0.9pt 2.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T6.2.2.2.1\" style=\"font-size:90%;\">RoBERTa</span>\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T6.2.3.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T6.2.3.1.1\" style=\"padding:0.9pt 2.0pt;\"><span class=\"ltx_text\" id=\"S4.T6.2.3.1.1.1\" style=\"font-size:90%;\">CoT-BERT (without denoise)</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.2.3.1.2\" style=\"padding:0.9pt 2.0pt;\"><span class=\"ltx_text\" id=\"S4.T6.2.3.1.2.1\" style=\"font-size:90%;\">78.69</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.2.3.1.3\" style=\"padding:0.9pt 2.0pt;\"><span class=\"ltx_text\" id=\"S4.T6.2.3.1.3.1\" style=\"font-size:90%;\">79.87</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.2.4.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T6.2.4.2.1\" style=\"padding:0.9pt 2.0pt;\"><span class=\"ltx_text\" id=\"S4.T6.2.4.2.1.1\" style=\"font-size:90%;\">CoT-BERT (position denoise)</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.2.4.2.2\" style=\"padding:0.9pt 2.0pt;\"><span class=\"ltx_text\" id=\"S4.T6.2.4.2.2.1\" style=\"font-size:90%;\">78.89</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.2.4.2.3\" style=\"padding:0.9pt 2.0pt;\"><span class=\"ltx_text\" id=\"S4.T6.2.4.2.3.1\" style=\"font-size:90%;\">79.95</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.2.5.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b\" id=\"S4.T6.2.5.3.1\" style=\"padding:0.9pt 2.0pt;\"><span class=\"ltx_text\" id=\"S4.T6.2.5.3.1.1\" style=\"font-size:90%;\">CoT-BERT ([PAD] denoise)</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T6.2.5.3.2\" style=\"padding:0.9pt 2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.2.5.3.2.1\" style=\"font-size:90%;\">79.40</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T6.2.5.3.3\" style=\"padding:0.9pt 2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.2.5.3.3.1\" style=\"font-size:90%;\">80.62</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "Table 6: Ablation study of CoT-BERT\u2019s template denoising strategy."
+ }
+ },
+ "image_paths": {
+ "1": {
+ "figure_path": "2309.11143v4_figure_1.png",
+ "caption": "Figure 1: Behavior of three variants of InfoNCE Loss within the semantic space of BERT. For clarity, we depict this figure with the anchor sentence sisubscript\ud835\udc60\ud835\udc56s_{i}italic_s start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT as the focal point.",
+ "url": "http://arxiv.org/html/2309.11143v4/x1.png"
+ },
+ "2": {
+ "figure_path": "2309.11143v4_figure_2.png",
+ "caption": "Figure 2: Illustration of the template denoising method employed by CoT-BERT. In this depiction, we utilize the template for anchor sentences as an example, with analogous treatment applied to both positive and hard negative instances.",
+ "url": "http://arxiv.org/html/2309.11143v4/x2.png"
+ },
+ "3": {
+ "figure_path": "2309.11143v4_figure_3.png",
+ "caption": "Figure 3: Correlation diagram between the true similarity scores and model-predicted cosine similarity on the STS-B test set. The vertical axis has been normalized for clarity, and the methods employed for deriving sentence embeddings ([CLS] or [MASK]) are explicitly indicated for reference.",
+ "url": "http://arxiv.org/html/2309.11143v4/x3.png"
+ }
+ },
+ "validation": true,
+ "references": [],
+ "url": "http://arxiv.org/html/2309.11143v4"
+ }
20240620/2309.12875v2.json ADDED
@@ -0,0 +1,164 @@
+ {
+ "title": "A second-order in time, BGN-based parametric finite element method for geometric flows of curves",
+ "abstract": "Over the last two decades, the field of geometric curve evolutions has attracted significant attention from scientific computing. One of the most popular numerical methods for solving geometric flows is the so-called BGN scheme, which was proposed by Barrett, Garcke, and N\u00fcrnberg (J. Comput. Phys., 222 (2007), pp. 441\u2013467), due to its favorable properties (e.g., its computational efficiency and the good mesh property). However, the BGN scheme is limited to first-order accuracy in time, and how to develop a higher-order numerical scheme is challenging. In this paper, we propose a fully discrete, temporal second-order parametric finite\nelement method, which integrates with two different mesh regularization techniques, for solving geometric flows of curves. The scheme is constructed based on the BGN formulation and a semi-implicit Crank-Nicolson leap-frog time stepping discretization as well as a linear finite element approximation in space. More importantly, we point out that the shape metrics, such as manifold distance and Hausdorff distance, instead of function norms, should be employed to measure numerical errors. Extensive numerical experiments demonstrate that the proposed BGN-based scheme is second-order accurate in time in terms of shape metrics. Moreover, by employing the classical BGN scheme as mesh regularization techniques, our proposed second-order schemes exhibit good properties with respect to the mesh distribution. In addition, an unconditional interlaced energy stability property is obtained for one of the mesh regularization techniques.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "Geometric flows, which describe the evolution of curves or surfaces over time based on the principle that the shape changes according to its underlying geometric properties, such as the curvature, have been extensively studied in the fields of computational geometry and geometric analysis. In particular, second-order (e.g., mean curvature flow, which is also called as curve-shortening flow for curve evolution) and fourth-order (e.g., surface diffusion flow) geometric flows have attracted considerable interest due to their wide-ranging applications in materials science [6 ###reference_b6###, 32 ###reference_b32###], image processing [1 ###reference_b1###], multiphase fluids [22 ###reference_b22###] and cell biology [13 ###reference_b13###]. For more in-depth information, readers can refer to the recent review articles [15 ###reference_b15###, 18 ###reference_b18###], and references provided therein.\nIn this paper, we focus on three different types of geometric flows of curves: curve-shortening flow (CSF), area-preserving curve-shortening flow (AP-CSF) and surface diffusion flow (SDF). First, assume that is a family of simple closed curves in the two-dimensional plane. We consider that the curve is governed by the three geometric flows, i.e., its velocity is respectively given by\nwhere is the curvature of the curve, is the arc-length, is the average curvature and is the outward unit normal to . Here, we use the sign convention that a unit circle has a positive constant curvature.\nBy representing the curves as a parametrization , where is the \u201cperiodic\u201d interval , Barrett, Garcke and N\u00fcrnberg [10 ###reference_b10###, 15 ###reference_b15###] creatively reformulated the above equations (1.1 ###reference_###) into the following coupled forms:\nBased on the above equations and the corresponding weak formulations, a series of numerical schemes (the so-called BGN schemes) were proposed for solving different geometric flows, such as mean curvature flow and surface diffusion [10 ###reference_b10###, 11 ###reference_b11###], Willmore flow [13 ###reference_b13###], anisotropic geometric flow [5 ###reference_b5###], solid-state dewetting [6 ###reference_b6###, 32 ###reference_b32###] and geometric flow for surface evolution [12 ###reference_b12###]. Recently, based on the BGN formulation (1.2 ###reference_###), structure-preserving schemes have been proposed for axisymmetric geometric equations [4 ###reference_b4###] and surface diffusion [7 ###reference_b7###, 5 ###reference_b5###], respectively. In practical simulations, ample numerical results have demonstrated the high performance of\nthe BGN scheme, due to inheriting the variational structure of the original problem and introducing an appropriate tangential velocity to help mesh points maintain a good distribution. However, for the original BGN scheme, because its formal truncation error is , where is the time step size, the temporal convergence order of the scheme is limited to the first-order. This has been confirmed by extensive numerical experiments [7 ###reference_b7###, 10 ###reference_b10###, 11 ###reference_b11###, 6 ###reference_b6###]. Therefore, how to design a temporal high-order scheme which is based on the BGN formulation (1.2 ###reference_###) is challenging and still open. 
It is also worth noting that rigorous numerical analysis for BGN schemes remains an open problem [15 ###reference_b15###].\nIn this paper, based on the BGN formulation (1.2 ###reference_###), we propose a novel temporal second-order parametric finite element method for solving geometric flows of curves, i.e., CSF, AP-CSF and SDF. Specifically, to discretize the same continuous-in-time semi-discrete formulation as the classical BGN scheme [10 ###reference_b10###], we begin by fixing the unit normal as that on the current curve and then discretize other terms using the Crank-Nicolson leap-frog scheme [23 ###reference_b23###].\nThe resulting scheme is a second-order semi-implicit scheme, which only requires solving a system of linear algebraic equations at each time step. Furthermore, the well-posedness of the fully discrete scheme can be established under suitable assumption conditions. Numerical results have demonstrated that the proposed scheme achieves second-order accuracy in time, as measured by the shape metrics, outperforming the classical BGN scheme in terms of accuracy and efficiency.\nIt is worth mentioning that there exist several temporal higher-order numerical schemes based on other formulations which have been proposed for simulating geometric flows. For the specific case of curve-shortening flow, a Crank-Nicolson-type scheme combined with tangential redistribution [3 ###reference_b3###] and an adaptive moving mesh method [29 ###reference_b29###] have been developed. Both of the schemes are convergent quadratically in time and fully implicit, requiring to solve a system of nonlinear equations at each time step. Recently, an evolving surface finite element method together with linearly implicit backward difference formulae for time integration for simulating the mean curvature flow has been proposed in [27 ###reference_b27###, 28 ###reference_b28###]. In comparison to these existing approaches, our newly proposed scheme is based on the BGN formulation (1.2 ###reference_###), then it inherits the variational structure of the original geometric flows, and has very good property with respect to mesh distribution. The new scheme exhibits comparable computational cost to the classical BGN scheme while surpassing it in terms of accuracy. Furthermore, it can be extended easily to other geometric flows with applications to various fields.\nThe main reason why we have successfully proposed a temporal high-order, BGN-based parametric finite element method for solving geometric flows lies in the following two key points: (1). we choose an appropriate metric (i.e., shape metrics) to measure numerical errors of the proposed schemes; (2). we use the classical first-order BGN scheme as \u201ca good partner\u201d of the proposed scheme to help mesh points maintain a good distribution without sacrificing the accuracy.\nHow to measure the errors of numerical solutions for geometric flows is an important issue. A natural approach is to use classical Sobolev norms, such as -norm, -norm or -norm, which are widely used in the numerical analysis for geometric flows [19 ###reference_b19###, 20 ###reference_b20###, 27 ###reference_b27###, 28 ###reference_b28###]. However, when it comes to numerical schemes that involve in tangential movements, these function norms may not be suitable for quantifying the differences between two curves/surfaces. 
To address this issue, we consider an alternative approach using shape metrics, such as manifold distance (as used in [7 ###reference_b7###, 33 ###reference_b33###]) and Hausdorff distance [2 ###reference_b2###]. These metrics provide a measure of how similar or different two curves/surfaces are in terms of their shape characteristics. Extensive numerical experiments have been conducted, and the results demonstrate that our proposed scheme achieves second-order accuracy when measured using shape metrics.\nOn the other hand, the quality of mesh distribution is always a major concern when simulating geometric flows using parametric finite element methods. It is important to note that the original flow (1.1 ###reference_###) requires the curve to evolve only in the normal direction, thus the numerical methods based on (1.1 ###reference_###) which prevent tangential movement of mesh points might lead to mesh distortion or clustering during the evolution. To address this issue, various approaches have been proposed in the literature to maintain good mesh quality, e.g., artificial mesh regularization method [16 ###reference_b16###], reparametrization by introducing a tangential velocity [17 ###reference_b17###, 30 ###reference_b30###, 21 ###reference_b21###, 26 ###reference_b26###, 31 ###reference_b31###]. On the contrary, the BGN formulation (1.2 ###reference_###) does not enforce any condition on the tangential velocity, which allows for an intrinsic tangential motion of mesh points, as demonstrated by the standard BGN scheme [10 ###reference_b10###, 11 ###reference_b11###] constructed based on this formulation (1.2 ###reference_###). Though the semi-discrete scheme of (1.2 ###reference_###), where only spatial discretization is performed, results in precise equidistribution of mesh points, our proposed fully discrete second-order BGN-based scheme exhibits oscillations in terms of mesh ratio and other geometric quantities, which may lead to instability in certain situations. To address this issue, we implement two classical first-order BGN schemes as mesh regularization procedures to enhance the quality of the mesh. More specifically, (1). we utilize the classical semi-implicit BGN scheme when poorly distributed polygonal approximations are detected. Extensive numerical experiments have shown that this approach improves the stability of the new scheme and significantly enhances the mesh quality. Importantly, numerous numerical experiments have also demonstrated that this mesh regularization only occurs infrequently throughout the evolution process, ensuring that the temporal second-order accuracy of the proposed scheme remains uncompromised; (2). after solving the BGN2 scheme at each time step, we employ the fully-implicit BGN scheme for the trivial flow in order to achieve mesh equidistribution. Although this mesh regularization may increase the computational cost, the unaffected temporal second-order accuracy ensures that our newly proposed scheme remains more efficient than classical BGN schemes. More importantly, this mesh regularization allows for the establishment of unconditional energy stability.\nThe remaining of the paper is organized as follows. In Section 2, taking CSF as an example, we begin by recalling the standard BGN scheme, and then propose a second-order in time, BGN-based parametric finite element method for solving CSF. Two mesh regularization procedures are proposed to ensure the good mesh quality during the evolution. 
Section 3 is devoted to explaining the importance of using shape metrics, such as manifold distance and Hausdorff distance, to accurately measure the errors of two curves. We extend the proposed second-order scheme to other geometric flows such as AP-CSF and the fourth-order flow SDF in Section 4. Extensive numerical results are provided to demonstrate the accuracy and efficiency of the proposed schemes in Section 5. Finally, we draw some conclusions in Section 6."
+ },
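The three velocity laws in (1.1) were stripped by the HTML-to-text extraction above. A plausible LaTeX reconstruction, assuming the standard forms and the sign convention stated in the section (a unit circle has positive constant curvature, with the outward unit normal), is:

```latex
% Hedged reconstruction of the stripped velocity laws (1.1); these are the
% standard forms of the three flows, written with the section's convention.
\begin{align*}
\text{CSF:}\quad    & \mathcal{V} = -\kappa\,\mathbf{n},\\
\text{AP-CSF:}\quad & \mathcal{V} = \bigl(\langle\kappa\rangle-\kappa\bigr)\,\mathbf{n},
  \qquad \langle\kappa\rangle := \frac{1}{\int_{\Gamma(t)}\mathrm{d}s}
  \int_{\Gamma(t)}\kappa\,\mathrm{d}s,\\
\text{SDF:}\quad    & \mathcal{V} = \bigl(\partial_{ss}\kappa\bigr)\,\mathbf{n},
\end{align*}
```

where s denotes arc-length; the AP-CSF law is the mean-curvature law shifted by the length-averaged curvature, which is what makes the enclosed area conserved.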
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "For curve shortening flow (CSF)",
+ "text": "In this section, we propose a parametric finite element method with second-order temporal accuracy for numerically solving the CSF. The same idea can be easily extended to other geometric flows (cf. Section 4). To provide a comprehensive understanding, we first review a classical first-order BGN scheme proposed by Barrett, Garcke and N\u00fcrnberg [10 ###reference_b10###, 11 ###reference_b11###, 15 ###reference_b15###]."
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "Derivation of the classical BGN scheme",
+ "text": "To begin with, we rewrite the CSF into the following formulation as presented in Eqs. (1.2 ###reference_###):\nWe introduce the following finite element approximation. Let , , be a decomposition of into intervals given by the nodes red, . Let be the maximal length of a grid element. Define the linear finite element space as\nThe mass lumped inner product over the polygonal curve , which is an approximation of by using the composite trapezoidal rule, is defined as\nwhere are two scalar/vector piecewise continuous functions with possible jumps at the nodes ,\nand .\nSubsequently, the semi-discrete scheme of the formulation (2.1 ###reference_###) is as follows: given initial polygon with vertices lying on the initial curve clockwise, parametrized by ,\nfind such that\nwhere we always integrate over the current curve described by , the outward unit normal is a piecewise constant vector given by\nwith denoting clockwise rotation by , and\nthe partial derivative is defined piecewisely over each side of the polygon\n. It was shown that the scheme (2.2 ###reference_###) will always equidistribute the vertices along for if they are not locally parallel (see Remark 2.4 in [10 ###reference_b10###]).\nFor a full discretization, we fix as a uniform time step size for simplicity, and let and be the approximations of and , respectively, for , where . We define and assume for , . The discrete unit normal vector , the discrete inner product and the discrete operator are defined similarly as in the semi-discrete case.\nBarrett, Garcke and N\u00fcrnberg used a formal first-order approximation [10 ###reference_b10###, 11 ###reference_b11###] to replace the velocity , and by\nand the fully discrete semi-implicit BGN scheme (denoted as BGN1 scheme) reads as:\n(BGN1, First-order in time BGN scheme for CSF): For , find and such that\nThe well-posedness and energy stability were established under some mild conditions. In practice, numerous numerical results show that the BGN1 scheme (2.3 ###reference_###) converges quadratically in space [11 ###reference_b11###] and linearly in time (cf. Fig. 1 ###reference_### in Section 5.1 ###reference_###)."
+ },
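The mass-lumped inner product and the piecewise-constant normals described in this subsection are easy to make concrete. Below is a minimal NumPy sketch, not the paper's code; the function names are ours, and the outward orientation is resolved from the signed area rather than assumed from the vertex ordering:

```python
import numpy as np

def edges(X):
    """Edge vectors h_j = X_{j+1} - X_j of a closed polygon X of shape (N, 2)."""
    return np.roll(X, -1, axis=0) - X

def outward_normals(X):
    """Piecewise-constant unit normals on each edge, oriented outward.

    The rotation direction is picked from the polygon's signed area, so the
    result points outward whether the vertices run clockwise (as in the text)
    or counterclockwise.
    """
    h = edges(X)
    t = h / np.linalg.norm(h, axis=1)[:, None]
    signed_area = 0.5 * np.sum(X[:, 0] * np.roll(X[:, 1], -1)
                               - X[:, 1] * np.roll(X[:, 0], -1))
    if signed_area > 0:                       # counterclockwise vertices
        return np.column_stack((t[:, 1], -t[:, 0]))
    return np.column_stack((-t[:, 1], t[:, 0]))

def lumped_inner(u, v, X):
    """Mass-lumped inner product over the polygon (composite trapezoidal rule).

    u and v hold nodal values, shape (N,) for scalars or (N, 2) for vectors;
    node j carries the weight (|h_{j-1}| + |h_j|) / 2.
    """
    lengths = np.linalg.norm(edges(X), axis=1)
    w = 0.5 * (lengths + np.roll(lengths, 1))
    prod = np.sum(u * v, axis=-1) if np.ndim(u) > 1 else u * v
    return float(np.dot(w, prod))
```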
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "A second-order in time, BGN-based scheme",
+ "text": "Instead of using the first-order Euler method, we apply the Crank-Nicolson leap-frog time stepping discretization in (2.2 ###reference_###) based on the following simple calculation\nthen the corresponding second-order scheme (denoted as BGN2 scheme) is as follows:\n(BGN2, Second-order in time BGN-based scheme for CSF): For , and which are the appropriate approximations at the time levels and , respectively, find and for such that\nfor all .\nThe scheme (2.5 ###reference_###) is semi-implicit and the computational cost is comparable to that of the BGN1 scheme (2.3 ###reference_###). Moreover, as a temporal discretization of the semi-discrete version (2.2 ###reference_###), it can be easily derived from (2.4 ###reference_###) that the truncation error is of order .\nTo begin the BGN2 scheme (2.5 ###reference_###), we need to first prepare the data and\n. In practical simulations, this can be easily achieved without sacrificing the accuracy of the scheme by utilizing the standard BGN1 scheme (2.3 ###reference_###) to get , and the following formula of discrete curvature was proposed in [10 ###reference_b10###, Page 461] to prepare (note the the sign convention of the curvature is opposite to [10 ###reference_b10###])\nwhere is a matrix, is a vector and is a matrix given by\nwhere are the standard Lagrange basis over , and are the first and second component of vector , and , for . Note that this formula can be derived by solving the finite element approximation of the equation and using the least square method. We can summarize the process as Algorithm 2.1 ###reference_###, which outlines the steps to prepare the required data and . Once we have obtained these data, we can directly apply the BGN2 scheme (2.5 ###reference_###) to calculate , for .\n(Preparation for the initial data of BGN2 for CSF)\nGiven the initial curve , the number of grid points and the time step size . We choose the polygon with vertices lying on such that is (almost) equidistributed, i.e., each side of the polygon is (nearly) equal in length. We parameterize with and the grid points can be determined correspondingly.\nUsing as the input, we compute using the discrete curvature formula (2.6 ###reference_###).\nUsing as the input, we obtain by solving the BGN1 scheme (2.3 ###reference_###) for one time step.\nWhen dealing with an initial curve which is not regular, an alternative approach for initialization is to solve the BGN1 scheme twice and start the BGN2 scheme from . Specifically, for given , we can compute \nand , which are the appropriate approximations at time levels and , by solving the BGN1 scheme (2.3 ###reference_###) twice. These approximations can be used as initial values to implement the BGN2 scheme (2.3 ###reference_###) for . For the superiority of this approach, see Fig. 
6 ###reference_### in Section 5.3.\nSimilar to the BGN1 scheme (2.3 ###reference_###), we can show the well-posedness of the BGN2 scheme (2.5 ###reference_###) under some mild conditions as follows.\nFor , we assume that the following two conditions are satisfied:\nThere exist at least two vectors in which are not parallel, i.e.,\nNo degenerate elements exist on , i.e.,\nThen the full discretization (2.5 ###reference_###) is well-posed, i.e., there exists a unique solution of (2.5 ###reference_###).\nIt suffices to prove the following algebraic system for has only zero solution,\nIndeed, the stiffness matrix is exactly the same as the standard BGN1 scheme (2.3 ###reference_###) and thus the same argument in [11 ###reference_b11###, Theorem 2.9] yields the conclusion under the assumptions (1) and (2).\n\u220e"
+ },
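The matrices in the least-squares curvature formula (2.6) cannot be recovered from the stripped extraction, but the quantity it prepares can be illustrated. The sketch below computes a nodal curvature from the weak identity underlying (2.6) with a lumped P1 stiffness operator; it is a hedged stand-in for (2.6), not the paper's exact formula:

```python
import numpy as np

def discrete_curvature(X):
    """Nodal curvature of a closed polygon X of shape (N, 2).

    Applies the P1 stiffness operator of <kappa n, w> = <d_s X, d_s w> to X,
    lumps the mass matrix, and projects onto averaged vertex normals.
    Assumes non-degenerate edges; on a fine polygonal circle of radius R it
    returns values near 1/R (positive, matching the text's sign convention).
    """
    h = np.roll(X, -1, axis=0) - X               # edge vectors h_j
    lengths = np.linalg.norm(h, axis=1)
    t = h / lengths[:, None]                     # unit tangent per edge
    AX = np.roll(t, 1, axis=0) - t               # stiffness applied to X
    m = 0.5 * (lengths + np.roll(lengths, 1))    # lumped vertex masses
    # outward edge normals, orientation fixed via the signed area
    sa = 0.5 * np.sum(X[:, 0] * np.roll(X[:, 1], -1)
                      - X[:, 1] * np.roll(X[:, 0], -1))
    ne = (np.column_stack((t[:, 1], -t[:, 0])) if sa > 0
          else np.column_stack((-t[:, 1], t[:, 0])))
    nv = ne + np.roll(ne, 1, axis=0)             # averaged vertex normals
    nv /= np.linalg.norm(nv, axis=1)[:, None]
    return np.einsum('ij,ij->i', nv, AX) / m

if __name__ == "__main__":
    theta = 2 * np.pi * np.arange(400) / 400     # polygonal circle, R = 2
    X = 2.0 * np.column_stack((np.cos(theta), np.sin(theta)))
    print(discrete_curvature(X)[:3])             # approximately [0.5 0.5 0.5]
```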
+ {
+ "section_id": "2.3",
+ "parent_section_id": "2",
+ "section_name": "Mesh regularization by semi-implicit BGN1 scheme",
+ "text": "As was mentioned earlier, the semi-discrete scheme (2.2 ###reference_###) possesses the mesh equidistribution property [15 ###reference_b15###, Theorem 79]. In practice, the fully-discrete BGN1 scheme (2.3 ###reference_###) can maintain the asymptotic long-time mesh equidistribution property. However, the BGN2 scheme (2.5 ###reference_###) may have oscillating mesh ratio due to the structure of two-step method, which can potentially amplify the mesh ratio and cause mesh distortion or clustering during the evolution, especially for some initial curves which are not so regular, e.g., a \u2018flower\u2019 curve (see the second row of Fig. 7 ###reference_###). Therefore, a mesh regularization procedure is necessary in real simulations to help the mesh maintain a good distribution property during the evolution, when the mesh ratio exceeds a given threshold value. Inspired by the good mesh distribution property of the BGN1 scheme, we utilize the BGN1 scheme as the mesh regularization technique. In the following, we denote as the threshold value chosen initially. If the mesh ratio , then we use the mesh regularization procedure to improve the mesh distribution. We present a summary of the complete algorithm of BGN2 scheme for solving the CSF in Algorithm 2.2 ###reference_###.\n(BGN2 scheme for CSF)\nGiven the initial curve , and , , compute as in Step 0 in Algorithm 2.1 ###reference_###.\nUsing as the input, we compute using the discrete curvature formula (2.6 ###reference_###) and solve via the BGN1 scheme (2.3 ###reference_###). Set .\nCalculate the mesh ratio of , .\nIf the mesh ratio , then replace with the solution of the BGN1 scheme (2.3 ###reference_###) with as the input by one run; otherwise, skip this step.\nUse the BGN2 scheme (2.5 ###reference_###) to obtain .\nUpdate . If , then go back to Step 2; otherwise, stop the algorithm and output the data.\nAs shown in Step 3 of Algorithm 2.2 ###reference_###, if the mesh ratio , we replace with the solution of the BGN1 scheme (2.3 ###reference_###) with as the input by one run,\nto help us realize the mesh regularization. Extensive numerical experiments suggest that the mesh regularization procedure is very effective, and the mesh ratio decreases immediately to a small value after this procedure (cf. Fig. 4 ###reference_###(d) in Section 5). The BGN2 scheme with the aid of the BGN1 scheme as the mesh regularization is very efficient and\nstable in practical simulations. The reason comes from that the BGN1 scheme (2.3 ###reference_###) can intrinsically lead to a good mesh distribution property, which was explained in [10 ###reference_b10###, 15 ###reference_b15###], but a more convincing explanation needs further rigorous numerical analysis for the scheme.\nOne concern that may arise is whether the BGN2 scheme with necessary mesh regularization can still achieve second-order accuracy, considering that the BGN1 scheme is only first-order accurate. It is important to note that for certain smooth initial curves, such as elliptic curves, the mesh regularization procedure is never required during the evolution. In such cases, the numerical evolution remains remarkably stable and the mesh ratio remains bounded. While for certain special initial curves, like a \u2018flower\u2019 curve or a \u2018tube\u2019 curve, the mesh regularization procedure may be needed only a few times (cf. Section 5.3 ###reference_###). Nevertheless, this does not compromise the temporal second-order accuracy of the BGN2 scheme (2.5 ###reference_###)."
+ },
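Step 2 and Step 3 of Algorithm 2.2 reduce to computing a mesh ratio and conditionally taking one BGN1 step. A minimal sketch of that trigger logic, with `bgn1_step` and `bgn2_step` as hypothetical solver callables for (2.3) and (2.5):

```python
import numpy as np

def mesh_ratio(X):
    """Mesh distribution indicator: longest edge over shortest edge (1 is ideal)."""
    lengths = np.linalg.norm(np.roll(X, -1, axis=0) - X, axis=1)
    return float(lengths.max() / lengths.min())

def regularized_bgn2_step(X_prev, X_curr, bgn1_step, bgn2_step, psi_max=10.0):
    """One pass of the loop in Algorithm 2.2 (a sketch, not the paper's code).

    If the mesh ratio of the current polygon exceeds the threshold, X_curr is
    replaced by one BGN1 run started from X_prev before the two-step BGN2
    update is taken, mirroring Step 3 of the algorithm.
    """
    if mesh_ratio(X_curr) > psi_max:
        X_curr = bgn1_step(X_prev)      # mesh regularization by one BGN1 run
    return X_curr, bgn2_step(X_prev, X_curr)
```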
+ {
+ "section_id": "2.4",
+ "parent_section_id": "2",
+ "section_name": "Mesh regularization by implicit equi-BGN1 scheme for trivial flow",
+ "text": "In the following, we recall a fully-implicit scheme for CSF [14 ###reference_b14###] which intrinsically equidistributes the vertices along the curve at each time step.\n(equi-BGN1, First-order in time equidistribution BGN scheme for CSF): For , find and such that\nIt has been shown in [14 ###reference_b14###] that\nMoreover, the stability estimate holds\nwhere represents the length of the polygon .\nInspired by the equidistribution property of the fully implicit scheme (2.7 ###reference_###), we propose to implement the mesh regularization using the equi-BGN1 scheme for the trivial flow\nwhich can distribute mesh points equally and retain the shape of the curve in the continuous level. More specifically, with , find and such that\nSimilar to (2.7 ###reference_###), it can be rigorously proved that the vertices of are evenly distributed and the perimeter does not increase, i.e.,\nNow, we present a summary of the entire algorithm for the equi-BGN2 scheme for solving the CSF. This scheme can be regarded as a variant of scheme (2.5 ###reference_###), where and are replaced by their mesh regularized approximations and , respectively.\n(equi-BGN2 scheme for CSF)\nGiven the initial curve , and , , compute as in Step 0 in Algorithm 2.1 ###reference_###. Use equi-BGN1 scheme (2.10 ###reference_###) to obtain the equidistributed polygon and .\nUsing as the input, we obtain by solving the equi-BGN1 scheme (2.7 ###reference_###) with time step . Set .\nSolve the BGN2 scheme (2.5 ###reference_###) with and to obtain .\nUpdate . Apply the equi-BGN1 scheme (2.10 ###reference_###) to obtain the mesh-regularized approximation and .\nIf , then go back to Step 2; otherwise, stop the algorithm and output the data as an approximation solution at time .\nIndeed, the solution not only equidistributes the vertices at each time level, but also is unconditionally stable in the following sense.\nLet be the solution of Algorithm 2.3 ###reference_###. Then for any and , the energy stability holds\nwhere is the perimeter of . In particular, we have\nwhere is the perimeter of the initial polygon .\nRecalling , taking and in equation (2.5 ###reference_###), we get\nNoticing (2.11 ###reference_###), we denote by the length of each edge of the polygon , then, we have\nTherefore, by combining with the Cauchy-Schwarz inequality, we can estimate\nwhere for the last inequality we used (2.11 ###reference_###). Combining (2.14 ###reference_###) and (2.15 ###reference_###), we can deduce (2.12 ###reference_###).\nTo show (2.13 ###reference_###), it suffices to prove\n and . This can be easily obtained by recalling , , (2.11 ###reference_###) and (2.9 ###reference_###).\n\u220e\nIt is also feasible to perform the mesh regularization using the semi-implicit BGN1 scheme (2.3 ###reference_###) for the trivial flow in of Algorithm 2.3 ###reference_### at each time step. While this approach can reduce the global computational costs, achieving a theoretical proof of energy stability, as demonstrated in Theorem 2.2 ###reference_2###, seems unattainable.\nIn subsequent sections, we will denote (2.3 ###reference_###) and (2.7 ###reference_###) by the BGN1 and equi-BGN1 scheme, respectively. We call Algorithm 2.2 ###reference_### and Algorithm 2.3 ###reference_### as the BGN2 and equi-BGN2 scheme, respectively."
+ },
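The equidistribution property (2.11) and the interlaced perimeter bound (2.13) are straightforward to monitor numerically. A small sketch (helper names are ours) that can accompany a run of Algorithm 2.3:

```python
import numpy as np

def perimeter(X):
    """Total edge length L^h of the closed polygon X of shape (N, 2)."""
    return float(np.linalg.norm(np.roll(X, -1, axis=0) - X, axis=1).sum())

def is_equidistributed(X, rtol=1e-8):
    """Check property (2.11): all edges of the polygon have (nearly) equal length."""
    lengths = np.linalg.norm(np.roll(X, -1, axis=0) - X, axis=1)
    return bool(np.allclose(lengths, lengths.mean(), rtol=rtol))

# Along a run of Algorithm 2.3, the interlaced stability (2.13) can be
# monitored by asserting that perimeter() of the regularized iterates is
# non-increasing from one time level to the next.
```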
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Shape metric is a better choice",
+ "text": "As we are aware, it is an interesting and thought-provoking problem to determine how to quantify the difference between two curves in 2D or two surfaces in 3D. Given two closed curves and , we assume that the two curves are parametrized by and , respectively, over the same interval . Consequently, we can define the following four metrics for measurement:\n(-error) The -norm between the parametrized functions and is defined in a classical way\n(-error) The -norm between the parametrized functions and is defined as\n(Manifold distance) The manifold distance between the curves and is defined as [33 ###reference_b33###]\nwhere and represent the regions enclosed by and , respectively, and denotes the area of .\n(Hausdorff distance) The Hausdorff distance between the curves and is defined as [2 ###reference_b2###]\nwhere , and\n is the Euclidean distance.\nThe -error and -error fall within the domain of function metrics, which rely on the parametrization of curves.\nOn the other hand, as demonstrated in [33 ###reference_b33###, Proposition 5.1] and [2 ###reference_b2###], it has been easily proven that both manifold distance and Hausdorff distance fulfill the properties of symmetry, positivity and the triangle inequality. Therefore, they belong to the category of shape metrics and not influenced by the specific parametrization.\nIt should be noted that the aforementioned shape metrics can be easily calculated using simple algorithms. As the numerical solutions are represented as polygons, it is very easy to calculate the area of the symmetric difference region, i.e., the manifold distance, between two polygonal curves. Additionally, a polygon-based approach proposed in the literature [2 ###reference_b2###] can be employed to calculate the Hausdorff distance between planar curves.\nIn order to test the convergence rate of numerical schemes, for example, we consider the evolution of the CSF with an initial ellipse defined by\nThis initial ellipse is approximated using an equidistributed polygon \nwith vertices. Here, we simulate the CSF by using three different numerical schemes: Dziuk\u2019s scheme [19 ###reference_b19###, Section 6], BGN1 scheme and BGN2 scheme. Since the exact solution of the CSF for an elliptical curve is unknown, we first compute a reference solution by Dziuk\u2019s scheme (to test the convergence of Dziuk\u2019s scheme) or the BGN2 scheme (to test the convergence of BGN-type schemes) with a fine mesh and a tiny time step size, e.g., and . To test the temporal error, we still take a large number of grid points, e.g., , such that the spatial error is ignorable. The numerical error and the corresponding convergence order are then determined as follows\nwhere , and represents any one of the four metrics defined above.\nTables 1 ###reference_###-3 ###reference_### display the numerical errors at time measured by the four different metrics for Dziuk\u2019s scheme [19 ###reference_b19###], the BGN1 scheme and the BGN2 scheme, respectively. As anticipated, we easily observe linear convergence in time for Dziuk\u2019s scheme across all four different metrics. 
While linear and quadratic convergence for both shape metrics (i.e., the manifold distance and Hausdorff distance) are observed for the BGN1 scheme in Table 2 ###reference_### and the BGN2 scheme in Table 3 ###reference_###, respectively.\nErrors\n\n\n\n\n\n\n\n-norm\n1.17E-2\n6.31E-3\n3.26E-3\n1.62E-3\n\nOrder\n\u2013\n0.89\n0.95\n1.01\n\n-norm\n3.05E-2\n1.63E-2\n8.41E-3\n4.19E-3\n\nOrder\n\u2013\n0.90\n0.96\n1.00\n\nManifold distance\n6.89E-2\n3.65E-2\n1.86E-2\n9.17E-3\n\nOrder\n\u2013\n0.92\n0.97\n1.02\n\nHausdorff distance\n3.04E-2\n1.62E-2\n8.29E-3\n4.09E-3\n\nOrder\n\u2013\n0.91\n0.97\n1.02\nErrors\n\n\n\n\n\n\n\n-norm\n4.25E-3\n3.98E-3\n4.05E-3\n4.15E-3\n\nOrder\n\u2013\n0.10\n0.03\n0.03\n\n-norm\n1.00E-2\n9.17E-3\n9.47E-3\n9.79E-3\n\nOrder\n\u2013\n0.12\n0.05\n0.05\n\nManifold distance\n3.11E-2\n1.58E-2\n7.96E-3\n4.00E-3\n\nOrder\n\u2013\n0.98\n0.99\n0.99\n\nHausdorff distance\n8.23E-3\n4.18E-3\n2.11E-3\n1.06E-3\n\nOrder\n\u2013\n0.98\n0.99\n0.99\nErrors\n\n\n\n\n\n\n\n-norm\n1.49E-2\n1.45E-2\n1.45E-2\n1.43E-2\n\nOrder\n\u2013\n0.04\n0.00\n0.02\n\n-norm\n3.32E-2\n3.30E-2\n3.29E-2\n3.29E-2\n\nOrder\n\u2013\n0.01\n0.00\n0.00\n\nManifold distance\n8.44E-4\n2.11E-4\n5.27E-5\n1.32E-5\n\nOrder\n\u2013\n2.00\n2.00\n1.99\n\nHausdorff distance\n2.00E-4\n4.98E-5\n1.26E-5\n3.29E-6\n\nOrder\n\u2013\n2.01\n1.98\n1.94\nIt is worth noting that unlike Dziuk\u2019s scheme, the convergence of the BGN1 scheme and BGN2 scheme under function metrics (the -norm and -norm) is not as satisfactory. This is not surprising since the error in classical Sobolev space depends on the specific parametrization of the curve. In contrast, the BGN formulation (2.1 ###reference_###) allows tangential motion to make the mesh points equidistribute, which indeed affects the parametrization while preserving the shape of the curve. Thus it is not appropriate to use the classical function metrics to quantify the errors of the BGN-type schemes which are based on the BGN formulation.\nInstead, as observed from Tables 2 ###reference_### and 3 ###reference_###, the shape metrics are much more suitable for quantifying the numerical errors of the schemes that allow intrinsic tangential velocity. In the remaining of the article, we will employ the manifold distance or the Hausdorff distance when measuring the difference between two curves."
+ },
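Both shape metrics admit short implementations for polygonal curves, as the section notes. A hedged Python sketch (P and Q are (N, 2) NumPy arrays of vertices; the Hausdorff computation is vertex-to-edge, hence an approximation of the continuous supremum):

```python
import numpy as np
from shapely.geometry import Polygon

def manifold_distance(P, Q):
    """Area of the symmetric difference of the regions enclosed by P and Q."""
    return Polygon(P).symmetric_difference(Polygon(Q)).area

def hausdorff_distance(P, Q):
    """Symmetric Hausdorff distance between two closed polygonal curves,
    approximated by vertex-to-edge distances (shapely geometries also offer
    a discrete, vertex-sampled hausdorff_distance method)."""
    def one_sided(A, B):
        segs = list(zip(B, np.roll(B, -1, axis=0)))
        def dist_to_B(p):
            best = np.inf
            for a, b in segs:
                ab, ap = b - a, p - a
                s = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
                best = min(best, np.linalg.norm(p - (a + s * ab)))
            return best
        return max(dist_to_B(p) for p in A)
    return max(one_sided(P, Q), one_sided(Q, P))

def convergence_order(err_tau, err_tau_half):
    """Observed temporal order between step sizes tau and tau/2."""
    return float(np.log2(err_tau / err_tau_half))
```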
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "Applications to other geometric flows",
+ "text": "In this section, we extend the above proposed BGN2 scheme to other geometric flows."
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "For area-preserving curve-shortening flow (AP-CSF)",
+ "text": "As is known, the AP-CSF can be viewed as the -gradient flow with respect to the length functional under the constraint of total area preservation [15 ###reference_b15###, 25 ###reference_b25###]. Similar to (2.1 ###reference_###), we rewrite the AP-CSF as the following coupled equations\nwhere the average of curvature is defined as .\nThe fully-discrete, first-order in time semi-implicit BGN scheme for AP-CSF reads as [15 ###reference_b15###]:\n(BGN1 scheme for AP-CSF): For , find and such that\nfor all ,\nwhere .\nBased on the same spirit, we can propose the following second-order BGN2 scheme.\n(BGN2 scheme for AP-CSF): For , find such that\nfor all .\nSimilarly, the stiffness matrix of the linear system to be solved in (4.3 ###reference_###) is exactly the same as the BGN1 scheme (4.2 ###reference_###), whose well-posedness has been established in [15 ###reference_b15###, Theorem 90]. The equi-BGN1 scheme [15 ###reference_b15###] and equi-BGN2 scheme can be derived in a similar manner. Similarly, unconditional interlaced energy stability for the equi-BGN2 scheme can be obtained."
+ },
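The only new ingredient relative to CSF is the discrete average curvature. A minimal lumped version (our notation; `kappa` holds nodal curvature values, e.g., from the Section 2.2 sketch):

```python
import numpy as np

def average_curvature(X, kappa):
    """Length-weighted discrete average of the nodal curvature values kappa,
    i.e. a lumped approximation of <kappa> = (1/L) * integral of kappa ds."""
    lengths = np.linalg.norm(np.roll(X, -1, axis=0) - X, axis=1)
    w = 0.5 * (lengths + np.roll(lengths, 1))   # lumped vertex weights
    return float(np.dot(w, kappa) / w.sum())    # w.sum() is the perimeter
```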
+ {
+ "section_id": "4.2",
+ "parent_section_id": "4",
+ "section_name": "For surface diffusion flow (SDF)",
+ "text": "We consider the fourth-order flow\u2014SDF, which can be viewed as the -gradient flow with respect to the length functional [15 ###reference_b15###, 7 ###reference_b7###]. In a similar fashion, we rephrase the SDF as the subsequent system of equations\nThe fully discrete, first-order in time semi-implicit BGN scheme for SDF reads as [10 ###reference_b10###]:\n(BGN1 scheme for SDF): For , find and such that\nIn line with the same approach, we can put forward the subsequent second-order BGN2 scheme:\n(BGN2 scheme for SDF): For , find such that\nfor all .\nThe well-posedness of the above scheme can be shown similarly under certain mild conditions.\nFor the schemes (4.3 ###reference_###) and (4.6 ###reference_###), we consistently set as specified in Algorithm 2.1 ###reference_###, that is, is a parametrization of an (almost) equidistributed interpolation polygon with vertices for the initial curve . Similar as the case of CSF, to start the BGN2 schemes, we need to prepare the initial data and , which can be achieved by using the similar approach as Algorithm 2.1 ###reference_### by using the corresponding BGN1 scheme. A complete second-order scheme can be obtained as in Algorithm 2.2 ###reference_### with the corresponding BGN1 scheme as a mesh regularization when necessary."
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Numerical results",
+ "text": ""
+ },
+ {
+ "section_id": "5.1",
+ "parent_section_id": "5",
+ "section_name": "Convergence tests",
+ "text": "In this subsection, we test the temporal convergence of the second-order schemes (2.5 ###reference_###), (4.3 ###reference_###) and (4.6 ###reference_###) for solving the three geometric flows: CSF, AP-CSF and SDF, respectively, with two different mesh regularization techniques. As previously discussed in\nSection 3 ###reference_###, we quantify the numerical errors of the curves using the shape metrics, such as the manifold distance\nand Hausdorff distance. For the following simulations, we select four distinct types of initial shapes:\n(Shape 1): a unit circle;\n(Shape 2): an ellipse with semi-major axis and semi-minor axis ;\n(Shape 3): a \u2018tube\u2019 shape, which is a curve comprising a rectangle with two semicircles on its left and right sides;\n(Shape 4): a \u2018flower\u2019 shape, which is parameterized by\nWe note that for the CSF with Shape 1 as its initial shape has the following true solution, i.e.,\nFor this particular case, we compute the numerical error by comparing it with the true solution. However, for all other cases, we utilize the reference solutions which are obtained by the BGN2 scheme with large and a tiny time step size . In addition, the mesh regularization threshold for the BGN2 scheme is consistently set to , and the iteration tolerance of the equi-BGN2 scheme is set as .\n###figure_1### ###figure_2### We begin our test by calculating the convergence of the BGN2 scheme and the equi-BGN2 scheme for the CSF with either Shape 1 or Shape 2 as initial data. Fig. 1 ###reference_### presents a log-log plot of the numerical errors at time , measured by the manifold distance. The errors for the Hausdorff distance, which are similar, are not included here for brevity. To ensure a fair comparison, we also include the numerical results of the BGN1 scheme (2.3 ###reference_###) and the equi-BGN1 scheme (2.7 ###reference_###) under the same computational parameters, with a fixed number of grid points . As clearly shown in Fig. 1 ###reference_###, the numerical error of the BGN2-type schemes reduce very rapidly with second-order accuracy in time, while the BGN1-type schemes only achieve first-order convergence.\nFig. 2 ###reference_### illustrates the temporal errors of the schemes for solving the AP-CSF and SDF with Shape 2 as initial data, showing quadratic convergence for BGN2-type schemes and linear convergence for BGN1-type schemes."
+ },
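For the Shape 1 test, the stripped "true solution" is recoverable from the flow itself: under V = -kappa*n a circle remains a circle with a shrinking radius. A sketch of the exact solution and the test setup (the reconstruction and names are ours):

```python
import numpy as np

def circle_polygon(N, R=1.0):
    """Equidistributed N-gon approximating a circle of radius R."""
    theta = 2.0 * np.pi * np.arange(N) / N
    return R * np.column_stack((np.cos(theta), np.sin(theta)))

def csf_circle_radius(t, R0=1.0):
    """Under V = -kappa * n a circle stays circular and dR/dt = -1/R, hence
    R(t) = sqrt(R0**2 - 2*t), valid up to the extinction time t = R0**2 / 2.
    (Our reconstruction of the stripped 'true solution' for Shape 1.)"""
    return np.sqrt(R0 ** 2 - 2.0 * t)

# Temporal convergence test skeleton: keep N large and fixed, halve the time
# step tau repeatedly, measure the shape error between the computed polygon
# at time t and circle_polygon(N, csf_circle_radius(t)), and report
# log2(E(tau) / E(tau / 2)) as in Section 3.
```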
+ {
+ "section_id": "5.2",
+ "parent_section_id": "5",
+ "section_name": "Comparison of computational costs",
+ "text": "In order to show that the computational cost of the proposed BGN2 schemes is comparable to that of the BGN1 schemes, we present two examples about solving the CSF and SDF, respectively. The numerical codes were written by using MATLAB 2021b, and they were implemented in a MacBook Pro with 1.4GHz quad-core Intel Core i5 and 8GB RAM.\nTable 4 ###reference_### displays a comparison of CPU times in seconds and numerical errors at time , as measured by the manifold distance and Hausdorff distance , using the BGN1-type and BGN2-type schemes for solving the CSF, where the initial shape is chosen as Shape 1. Table 5 ###reference_### provides similar results for solving the SDF with Shape 3 as its initial shape. Based on the findings presented in Tables 4 ###reference_### and 5 ###reference_###, the following conclusions can be drawn. (i) On the same mesh, the computational cost of the BGN2 scheme is slightly higher than that of the BGN1 scheme, as it involves additional calculations for the initial values and the right-hand side of the linear system at each time level. Meanwhile, the equi-BGN2 scheme incurs more or less similar computational cost as the equi-BGN1 scheme.\nHowever, the numerical solutions obtained using the BGN2-type schemes are significantly more accurate than those of the BGN1-type schemes; (ii) The computational cost of the equi-BGN2 scheme is several times higher than that of the BGN2 scheme, since it needs to solve a nonlinear system at each time step. However, equidistribution and unconditional energy stability can be theoretically guaranteed for the equi-BGN2 scheme.\n, where the initial shape is chosen as Shape 1, with and .\n\n\n\n\n\nBGN1 scheme\nBGN2 scheme\n\n\n\n\nTime(s)\n\n\n\nTime(s)\n\n\n\n320\n5.61E-4\n1.25E-4\n0.350\n320\n2.09E-4\n5.04E-5\n0.430\n\n640\n3.34E-4\n6.37E-5\n1.70\n640\n5.20E-5\n1.27E-5\n2.30\n\n1280\n1.81E-4\n3.22E-5\n9.85\n1280\n1.29E-5\n3.20E-6\n12.9\n\n2560\n9.38E-5\n1.62E-5\n110\n2560\n3.08E-6\n8.16E-7\n130\n\nequi-BGN1 scheme\nequi-BGN2 scheme\n\n\n\n\nTime(s)\n\n\n\nTime(s)\n\n320\n4.70E-4\n9.42E-5\n1.16\n320\n2.09E-4\n5.03E-5\n0.82\n\n640\n1.82E-4\n3.44E-5\n4.93\n640\n5.20E-5\n1.26E-5\n4.19\n\n1280\n7.78E-5\n1.40E-5\n25.5\n1280\n1.29E-5\n3.14E-6\n25.4\n\n2560\n3.55E-5\n6.22E-6\n284\n2560\n3.08E-6\n7.86E-7\n304\n, where the initial shape is chosen as Shape 3, with , and .\n\n\n\n\n\nBGN1 scheme\nBGN2 scheme\n\n\n\n\nTime(s)\n\n\n\nTime(s)\n\n\n\n320\n4.73E-3\n6.91E-4\n0.470\n320\n2.53E-3\n1.14E-3\n0.610\n\n640\n2.24E-3\n3.38E-4\n2.03\n640\n8.28E-4\n4.17E-4\n2.27\n\n1280\n1.10E-3\n1.67E-4\n12.6\n1280\n2.30E-4\n1.12E-4\n15.1\n\n2560\n5.53E-4\n8.34E-5\n133\n2560\n5.42E-5\n2.82E-5\n153\n\nequi-BGN1 scheme\nequi-BGN2 scheme\n\n\n\n\nTime(s)\n\n\n\nTime(s)\n\n320\n5.00E-3\n1.04E-3\n3.39\n320\n2.71E-3\n1.21E-3\n3.48\n\n640\n2.62E-3\n5.61E-4\n17.1\n640\n8.88E-4\n4.33E-4\n16.7\n\n1280\n1.34E-3\n2.93E-4\n105\n1280\n2.64E-4\n1.56E-4\n102\n\n2560\n6.83E-4\n1.51E-4\n1151\n2560\n8.12E-5\n5.57E-5\n1140"
+ },
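A timing comparison of this kind only needs a crude harness. The sketch below is illustrative, not the paper's MATLAB driver; `step` is a hypothetical callable applying one BGN1/BGN2-type solve:

```python
import time

def timed(step, X0, n_steps):
    """Run n_steps applications of `step` and report the wall-clock time,
    in the spirit of the CPU-time columns of Tables 4-5."""
    t0 = time.perf_counter()
    X = X0
    for _ in range(n_steps):
        X = step(X)
    return X, time.perf_counter() - t0
```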
+ {
+ "section_id": "5.3",
+ "parent_section_id": "5",
+ "section_name": "Applications to the curve evolution",
+ "text": "As is well-known, the AP-CSF and SDF possess some structure-preserving properties, such as the perimeter decreasing and area conserving properties [24 ###reference_b24###, 25 ###reference_b25###, 7 ###reference_b7###].\nIn this subsection, we investigate the structure-preserving properties of the proposed BGN2 scheme and equi-BGN2 scheme applied to AP-CSF and SDF, respectively. As an example, we mainly focus on the SDF here. Moreover, we will discuss the importance of the mesh regularization procedures.\nFig. 3 ###reference_### (a) illustrates the evolution of an initially elliptic curve, referred to as Shape 2, driven by SDF towards its equilibrium state by the BGN2 scheme. Fig. 3 ###reference_###(b)-(e) show the evolution of various geometric quantities during the process: the relative area loss , the normalized perimeter , and the mesh distribution function ,\nwhich are defined respectively as\nfor , where is the area enclosed by the polygon determined by , represents the perimeter of the polygon, and is the mesh ratio function. As depicted in Fig. 3 ###reference_###(b), the area loss exhibits a weakly oscillating behavior, which may result from the two-step structure of the BGN2 scheme, the equi-BGN2 scheme has similar oscillating behavior and we omit it here for brevity. It is worth noting that despite the oscillations, the normalized area loss remains very low, consistently below . By employing a smaller grid size, the area loss can be further reduced, and it is significantly lower than that of the BGN1 scheme under the same computational parameters. Furthermore, Fig. 3 ###reference_###(c) shows the BGN2 scheme and the equi-BGN2 scheme preserve the perimeter-decreasing property of the SDF numerically. Furthermore, in Fig. 3 ###reference_###(d), it can be observed that the mesh distribution function remains lower than during the evolution. This indicates that the mesh distribution remains well-maintained and almost equidistributed during the process. Therefore, in this scenario, there is no need to perform the mesh regularization procedure because is always smaller than the chosen threshold (here we choose it as ) in the simulations. Additionally, Fig. 3 ###reference_###(e) shows the equi-BGN2 scheme achieves equidistribution property at each time step. The relatively low iteration numbers do not compromise the computational efficiency.\n###figure_3### . For (a)-(b), we used and while for (c)-(e), and .\nTo provide a more comprehensive comparison, we conduct simulations of evolution of Shape 3 curve driven by the SDF. Fig. 4 ###reference_###(b)-(c) demonstrates that the BGN2 scheme and the equi-BGN2 scheme effectively preserve two crucial geometric properties\nof the SDF: the conservation of area and the reduction of perimeter properties [24 ###reference_b24###, 7 ###reference_b7###]. It should be noted that\nFig. 4 ###reference_###(d) reveals that without the implementation of mesh regularization, the mesh distribution function can become very large.\nTherefore, in our algorithm, when exceeds a threshold ,\nwe employ the BGN1 scheme (4.5 ###reference_###) for a single run to perform mesh regularization, similar to of Algorithm 2.2 ###reference_###. As clearly shown in Fig. 4 ###reference_###(d), following this step, the mesh ratio rapidly decreases to a low value, which makes the method more stable. Importantly, this mesh regularization procedure is only required four times throughout the entire evolution, without sacrificing the accuracy of the BGN2 scheme (cf. 
Table 5 ###reference_###). Similarly, as shown in Fig. 4 ###reference_###(e), the equi-BGN2 scheme also performs well for this initial shape. Compared to the case of Shape 2, although we require more iteration steps, it is still superior to the BGN1 scheme in view of the accuracy and efficiency (cf. Table 5 ###reference_###).\n###figure_4### . For (a)-(b) we used and while and for (c)-(e).\nNext, we proceed to simulate the evolution of a nonconvex curve, referred to as Shape 4. Fig. 5 ###reference_### and Fig. 6 ###reference_### (the first row) show the evolution of the geometric quantities based on two different initial data preparations: Algorithm 2.1 ###reference_### and Remark 2.2 ###reference_remark2###, respectively. A comparison of the results reveals the superiority of the latter approach for several reasons: (i) the magnitude of area loss is significantly lower when using the approach in Remark 2.2 ###reference_remark2###; (ii) the perimeter-decreasing property is preserved while the perimeter oscillates at the beginning when using Algorithm 2.1 ###reference_###; (iii) the number of mesh regularization implementations is smaller with the approach in Remark 2.2 ###reference_remark2###. Thus we recommend preparing the data for a nonconvex initial curve following the approach outlined in Remark 2.2 ###reference_remark2###. Fig. 6 ###reference_### (the second row) demonstrates the performance of the equi-BGN2 scheme, from which it can be seen that only a relatively low number of iterations are needed for the majority of time steps (see 6 ###reference_###(c2)). Additionally, Fig. 6 ###reference_### (the third row) illustrates the evolution of the same quantities without any implementations of mesh regularization. In this case, all three quantities exhibit significant oscillations after a certain time period, and the area loss and mesh ratio of the polygon becomes excessively large, resulting in the breakdown of the BGN2 scheme. Notably, mesh clustering has happened at (see Fig. 7 ###reference_###(c3)), eventually leading to mesh distortion at (see Fig. 7 ###reference_###(d3)).\n###figure_5### ###figure_6### .\nThese issues can be avoided by implementing one of the mesh regularization techniques. Fig. 7 ###reference_###(a1)-(d1) and Fig. 7 ###reference_###(a2)-(d2) demonstrate that mesh regularization is crucial for the effectiveness of BGN2-type schemes and the BGN1-type schemes can significantly enhance mesh quality. Additionally, A comparison between Fig. 7 ###reference_###(d1) and Fig. 7 ###reference_###(d2) reveals that there still exists some mesh clustering for the BGN2 scheme in long-time evolution. In contrast, the equi-BGN2 scheme exhibits equidistribution property throughout all time.\n###figure_7### The simulations are conducted with a grid number of and a time step size .\nFinally, we close this section by simulating the evolution of a nonconvex initial curve [31 ###reference_b31###, 3 ###reference_b3###, 29 ###reference_b29###]\ndriven by CSF, AP-CSF and SDF using the BGN2 schemes. The initial curve can be parametrized as\nfor . The numerical results are depicted in Fig. 8 ###reference_###.\nAs shown in this figure, the CSF initially transforms the intricate curve into a circle before it disappear. Both the AP-CSF and SDF\ndrive the curve to evolve into a perfect circle as its equilibrium shape.\n###figure_8###"
+ },
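The geometric quantities monitored throughout this subsection reduce to elementary polygon formulas. A short sketch of the enclosed area and the relative area loss (function names are ours):

```python
import numpy as np

def enclosed_area(X):
    """Shoelace area A^h of the region enclosed by the closed polygon X."""
    x, y = X[:, 0], X[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def relative_area_loss(X0, Xt):
    """Normalized area loss tracked in Figs. 3-4: (A^h(t) - A^h(0)) / A^h(0)."""
    A0 = enclosed_area(X0)
    return (enclosed_area(Xt) - A0) / A0
```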
+ {
+ "section_id": "6",
+ "parent_section_id": null,
+ "section_name": "Conclusions",
+ "text": "We proposed two novel temporal second-order, BGN-based parametric finite element methods (i.e., the BGN2 and the equi-BGN2 schemes) for solving different geometric flows of curves (e.g., CSF, AP-CSF and SDF). Based on the BGN formulation and the corresponding semi-discrete FEM approximation, our numerical methods employ a Crank-Nicolson leap-frog method to discretize in time. The key idea lies in choosing a discrete inner product over the curve , such that the time level coincides with the time at which all quantities have approximations with an error of . We established the well-posedness\nof the BGN2 scheme under some suitable assumptions. Additionally, we showed that the equi-BGN2 scheme is unconditional energy-stable. We emphasized the use of shape metrics (manifold distance and Hausdorff distance) rather than function norms (e.g., -norm, -norm) to measure numerical errors of BGN-based schemes.\nIn the case of certain initial curves, such as a \u2018flower\u2019 shape, we found that the BGN2 scheme (resp. the equi-BGN2 scheme), in conjunction with the BGN1 scheme (resp. the equi-BGN1 scheme) for mesh regularization, exhibited remarkable stability in practical simulations.\nExtensive numerical experiments demonstrated that the proposed BGN2 and equi-BGN2 schemes achieve second-order accuracy in time, as measured by the shape metrics, outperforming the BGN1 scheme in terms of accuracy.\nFurthermore, it is worth mentioning that the approach we have presented for constructing a temporal high-order BGN-based scheme can be readily extended to address various other problems, such as anisotropic geometric flows [5 ###reference_b5###], Willmore flow [13 ###reference_b13###], two-phase flow [22 ###reference_b22###], solid-state dewetting [33 ###reference_b33###] and geometric flows in 3D [32 ###reference_b32###].\nIn our future research, we will further investigate the development of structure-preserving temporal high-order BGN-based schemes [7 ###reference_b7###, 24 ###reference_b24###] and conduct the numerical analysis of the BGN-based schemes with respect to the shape metric. These investigations will contribute to enhancing the overall understanding and applicability of the BGN type scheme in different contexts."
+ }
+ ],
+ "appendix": [],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Numerical errors quantified by various metrics for Dziuk\u2019s scheme <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2309.12875v2#bib.bib19\" title=\"\">19</a>, Section 6]</cite>, with the parameters , and .</figcaption>\n<p class=\"ltx_p\" id=\"S3.T1.10\"><span class=\"ltx_rule\" style=\"width:411.9pt;height:1.0pt;background:black;display:inline-block;\">\u00a0</span>\n<span class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.10.6\">\n<span class=\"ltx_thead\">\n<span class=\"ltx_tr\" id=\"S3.T1.8.4.4\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_column\" id=\"S3.T1.8.4.4.5\"><span class=\"ltx_text\" id=\"S3.T1.8.4.4.5.1\">Errors</span></span>\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_column\" id=\"S3.T1.5.1.1.1\"></span>\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_column\" id=\"S3.T1.6.2.2.2\"></span>\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_column\" id=\"S3.T1.7.3.3.3\"></span>\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_column\" id=\"S3.T1.8.4.4.4\"></span></span>\n</span>\n<span class=\"ltx_tbody\">\n<span class=\"ltx_tr\" id=\"S3.T1.9.5.5\">\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.9.5.5.1\">-norm</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.9.5.5.2\">1.17E-2</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.9.5.5.3\">6.31E-3</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.9.5.5.4\">3.26E-3</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.9.5.5.5\">1.62E-3</span></span>\n<span class=\"ltx_tr\" id=\"S3.T1.10.6.7.1\">\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T1.10.6.7.1.1\">Order</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T1.10.6.7.1.2\">\u2013</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T1.10.6.7.1.3\">0.89</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T1.10.6.7.1.4\">0.95</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T1.10.6.7.1.5\">1.01</span></span>\n<span class=\"ltx_tr\" id=\"S3.T1.10.6.6\">\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.10.6.6.1\">-norm</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.10.6.6.2\">3.05E-2</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.10.6.6.3\">1.63E-2</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.10.6.6.4\">8.41E-3</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.10.6.6.5\">4.19E-3</span></span>\n<span class=\"ltx_tr\" id=\"S3.T1.10.6.8.2\">\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T1.10.6.8.2.1\">Order</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T1.10.6.8.2.2\">\u2013</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T1.10.6.8.2.3\">0.90</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T1.10.6.8.2.4\">0.96</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T1.10.6.8.2.5\">1.00</span></span>\n<span class=\"ltx_tr\" id=\"S3.T1.10.6.9.3\">\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.10.6.9.3.1\"><span class=\"ltx_text\" id=\"S3.T1.10.6.9.3.1.1\">Manifold distance</span></span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.10.6.9.3.2\">6.89E-2</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.10.6.9.3.3\">3.65E-2</span>\n<span class=\"ltx_td ltx_align_left 
ltx_border_t\" id=\"S3.T1.10.6.9.3.4\">1.86E-2</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.10.6.9.3.5\">9.17E-3</span></span>\n<span class=\"ltx_tr\" id=\"S3.T1.10.6.10.4\">\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T1.10.6.10.4.1\">Order</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T1.10.6.10.4.2\">\u2013</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T1.10.6.10.4.3\">0.92</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T1.10.6.10.4.4\">0.97</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T1.10.6.10.4.5\">1.02</span></span>\n<span class=\"ltx_tr\" id=\"S3.T1.10.6.11.5\">\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.10.6.11.5.1\"><span class=\"ltx_text\" id=\"S3.T1.10.6.11.5.1.1\">Hausdorff distance</span></span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.10.6.11.5.2\">3.04E-2</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.10.6.11.5.3\">1.62E-2</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.10.6.11.5.4\">8.29E-3</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.10.6.11.5.5\">4.09E-3</span></span>\n<span class=\"ltx_tr\" id=\"S3.T1.10.6.12.6\">\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T1.10.6.12.6.1\">Order</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T1.10.6.12.6.2\">\u2013</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T1.10.6.12.6.3\">0.91</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T1.10.6.12.6.4\">0.97</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T1.10.6.12.6.5\">1.02</span></span>\n</span>\n</span>\n<span class=\"ltx_rule\" style=\"width:411.9pt;height:1.0pt;background:black;display:inline-block;\">\u00a0</span></p>\n</figure>",
+ "capture": "Table 1: Numerical errors quantified by various metrics for Dziuk\u2019s scheme [19, Section 6], with the parameters , and ."
+ },
+ "2": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T2\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Numerical errors quantified by various metrics for the BGN1 scheme, with the parameters , .</figcaption>\n<p class=\"ltx_p\" id=\"S3.T2.14\"><span class=\"ltx_rule\" style=\"width:411.9pt;height:1.0pt;background:black;display:inline-block;\">\u00a0</span>\n<span class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S3.T2.14.10\">\n<span class=\"ltx_thead\">\n<span class=\"ltx_tr\" id=\"S3.T2.8.4.4\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_column\" id=\"S3.T2.8.4.4.5\"><span class=\"ltx_text\" id=\"S3.T2.8.4.4.5.1\">Errors</span></span>\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_column\" id=\"S3.T2.5.1.1.1\"></span>\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_column\" id=\"S3.T2.6.2.2.2\"></span>\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_column\" id=\"S3.T2.7.3.3.3\"></span>\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_column\" id=\"S3.T2.8.4.4.4\"></span></span>\n</span>\n<span class=\"ltx_tbody\">\n<span class=\"ltx_tr\" id=\"S3.T2.9.5.5\">\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.9.5.5.1\">-norm</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.9.5.5.2\">4.25E-3</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.9.5.5.3\">3.98E-3</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.9.5.5.4\">4.05E-3</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.9.5.5.5\">4.15E-3</span></span>\n<span class=\"ltx_tr\" id=\"S3.T2.11.7.7\">\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T2.11.7.7.3\">Order</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T2.11.7.7.4\">\u2013</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T2.11.7.7.5\">0.10</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T2.10.6.6.1\">0.03</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T2.11.7.7.2\">0.03</span></span>\n<span class=\"ltx_tr\" id=\"S3.T2.12.8.8\">\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.12.8.8.1\">-norm</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.12.8.8.2\">1.00E-2</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.12.8.8.3\">9.17E-3</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.12.8.8.4\">9.47E-3</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.12.8.8.5\">9.79E-3</span></span>\n<span class=\"ltx_tr\" id=\"S3.T2.14.10.10\">\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T2.14.10.10.3\">Order</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T2.14.10.10.4\">\u2013</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T2.14.10.10.5\">0.12</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T2.13.9.9.1\">0.05</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T2.14.10.10.2\">0.05</span></span>\n<span class=\"ltx_tr\" id=\"S3.T2.14.10.11.1\">\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.14.10.11.1.1\"><span class=\"ltx_text\" id=\"S3.T2.14.10.11.1.1.1\">Manifold distance</span></span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.14.10.11.1.2\">3.11E-2</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.14.10.11.1.3\">1.58E-2</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.14.10.11.1.4\">7.96E-3</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.14.10.11.1.5\">4.00E-3</span></span>\n<span 
class=\"ltx_tr\" id=\"S3.T2.14.10.12.2\">\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T2.14.10.12.2.1\">Order</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T2.14.10.12.2.2\">\u2013</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T2.14.10.12.2.3\">0.98</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T2.14.10.12.2.4\">0.99</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T2.14.10.12.2.5\">0.99</span></span>\n<span class=\"ltx_tr\" id=\"S3.T2.14.10.13.3\">\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.14.10.13.3.1\"><span class=\"ltx_text\" id=\"S3.T2.14.10.13.3.1.1\">Hausdorff distance</span></span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.14.10.13.3.2\">8.23E-3</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.14.10.13.3.3\">4.18E-3</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.14.10.13.3.4\">2.11E-3</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.14.10.13.3.5\">1.06E-3</span></span>\n<span class=\"ltx_tr\" id=\"S3.T2.14.10.14.4\">\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T2.14.10.14.4.1\">Order</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T2.14.10.14.4.2\">\u2013</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T2.14.10.14.4.3\">0.98</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T2.14.10.14.4.4\">0.99</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T2.14.10.14.4.5\">0.99</span></span>\n</span>\n</span>\n<span class=\"ltx_rule\" style=\"width:411.9pt;height:1.0pt;background:black;display:inline-block;\">\u00a0</span></p>\n</figure>",
+ "capture": "Table 2: Numerical errors quantified by various metrics for the BGN1 scheme, with the parameters , ."
+ },
+ "3": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T3\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Numerical errors quantified by various metrics for the BGN2 scheme, with the parameters , .</figcaption>\n<p class=\"ltx_p\" id=\"S3.T3.10\"><span class=\"ltx_rule\" style=\"width:411.9pt;height:1.0pt;background:black;display:inline-block;\">\u00a0</span>\n<span class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S3.T3.10.6\">\n<span class=\"ltx_thead\">\n<span class=\"ltx_tr\" id=\"S3.T3.8.4.4\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_column\" id=\"S3.T3.8.4.4.5\"><span class=\"ltx_text\" id=\"S3.T3.8.4.4.5.1\">Errors</span></span>\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_column\" id=\"S3.T3.5.1.1.1\"></span>\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_column\" id=\"S3.T3.6.2.2.2\"></span>\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_column\" id=\"S3.T3.7.3.3.3\"></span>\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_column\" id=\"S3.T3.8.4.4.4\"></span></span>\n</span>\n<span class=\"ltx_tbody\">\n<span class=\"ltx_tr\" id=\"S3.T3.9.5.5\">\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.9.5.5.1\">-norm</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.9.5.5.2\">1.49E-2</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.9.5.5.3\">1.45E-2</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.9.5.5.4\">1.45E-2</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.9.5.5.5\">1.43E-2</span></span>\n<span class=\"ltx_tr\" id=\"S3.T3.10.6.7.1\">\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T3.10.6.7.1.1\">Order</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T3.10.6.7.1.2\">\u2013</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T3.10.6.7.1.3\">0.04</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T3.10.6.7.1.4\">0.00</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T3.10.6.7.1.5\">0.02</span></span>\n<span class=\"ltx_tr\" id=\"S3.T3.10.6.6\">\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.10.6.6.1\">-norm</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.10.6.6.2\">3.32E-2</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.10.6.6.3\">3.30E-2</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.10.6.6.4\">3.29E-2</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.10.6.6.5\">3.29E-2</span></span>\n<span class=\"ltx_tr\" id=\"S3.T3.10.6.8.2\">\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T3.10.6.8.2.1\">Order</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T3.10.6.8.2.2\">\u2013</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T3.10.6.8.2.3\">0.01</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T3.10.6.8.2.4\">0.00</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T3.10.6.8.2.5\">0.00</span></span>\n<span class=\"ltx_tr\" id=\"S3.T3.10.6.9.3\">\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.10.6.9.3.1\"><span class=\"ltx_text\" id=\"S3.T3.10.6.9.3.1.1\">Manifold distance</span></span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.10.6.9.3.2\">8.44E-4</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.10.6.9.3.3\">2.11E-4</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.10.6.9.3.4\">5.27E-5</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.10.6.9.3.5\">1.32E-5</span></span>\n<span 
class=\"ltx_tr\" id=\"S3.T3.10.6.10.4\">\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T3.10.6.10.4.1\">Order</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T3.10.6.10.4.2\">\u2013</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T3.10.6.10.4.3\">2.00</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T3.10.6.10.4.4\">2.00</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T3.10.6.10.4.5\">1.99</span></span>\n<span class=\"ltx_tr\" id=\"S3.T3.10.6.11.5\">\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.10.6.11.5.1\"><span class=\"ltx_text\" id=\"S3.T3.10.6.11.5.1.1\">Hausdorff distance</span></span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.10.6.11.5.2\">2.00E-4</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.10.6.11.5.3\">4.98E-5</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.10.6.11.5.4\">1.26E-5</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.10.6.11.5.5\">3.29E-6</span></span>\n<span class=\"ltx_tr\" id=\"S3.T3.10.6.12.6\">\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T3.10.6.12.6.1\">Order</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T3.10.6.12.6.2\">\u2013</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T3.10.6.12.6.3\">2.01</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T3.10.6.12.6.4\">1.98</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T3.10.6.12.6.5\">1.94</span></span>\n</span>\n</span>\n<span class=\"ltx_rule\" style=\"width:411.9pt;height:1.0pt;background:black;display:inline-block;\">\u00a0</span></p>\n</figure>",
+ "capture": "Table 3: Numerical errors quantified by various metrics for the BGN2 scheme, with the parameters , ."
+ },
+ "4": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T4\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 4: </span>Comparisons of the CPU times (seconds) and the numerical errors measured from the manifold distance and Hausdorff distance for the BGN1-type and BGN2-type schemes applied to CSF</figcaption>\n<p class=\"ltx_p ltx_align_center\" id=\"S5.T4.18\">, where the initial shape is chosen as Shape 1, with and .\n\n\n<span class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T4.18.12\">\n<span class=\"ltx_thead\">\n<span class=\"ltx_tr\" id=\"S5.T4.18.12.13.1\">\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t ltx_colspan ltx_colspan_4\" id=\"S5.T4.18.12.13.1.1\">BGN1 scheme</span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t ltx_colspan ltx_colspan_4\" id=\"S5.T4.18.12.13.1.2\">BGN2 scheme</span></span>\n<span class=\"ltx_tr\" id=\"S5.T4.12.6.6\">\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T4.7.1.1.1\"></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T4.8.2.2.2\"></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T4.9.3.3.3\"></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T4.12.6.6.7\"><span class=\"ltx_text\" id=\"S5.T4.12.6.6.7.1\">Time</span>(s)</span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T4.10.4.4.4\"></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T4.11.5.5.5\"></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T4.12.6.6.6\"></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T4.12.6.6.8\"><span class=\"ltx_text\" id=\"S5.T4.12.6.6.8.1\">Time</span>(s)</span></span>\n</span>\n<span class=\"ltx_tbody\">\n<span class=\"ltx_tr\" id=\"S5.T4.18.12.14.1\">\n<span class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.14.1.1\">320</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.14.1.2\">5.61E-4</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.14.1.3\">1.25E-4</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.14.1.4\">0.350</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.14.1.5\">320</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.14.1.6\">2.09E-4</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.14.1.7\">5.04E-5</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.14.1.8\">0.430</span></span>\n<span class=\"ltx_tr\" id=\"S5.T4.18.12.15.2\">\n<span class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.15.2.1\">640</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.15.2.2\">3.34E-4</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.15.2.3\">6.37E-5</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.15.2.4\">1.70</span>\n<span 
class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.15.2.5\">640</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.15.2.6\">5.20E-5</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.15.2.7\">1.27E-5</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.15.2.8\">2.30</span></span>\n<span class=\"ltx_tr\" id=\"S5.T4.18.12.16.3\">\n<span class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.16.3.1\">1280</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.16.3.2\">1.81E-4</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.16.3.3\">3.22E-5</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.16.3.4\">9.85</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.16.3.5\">1280</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.16.3.6\">1.29E-5</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.16.3.7\">3.20E-6</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.16.3.8\">12.9</span></span>\n<span class=\"ltx_tr\" id=\"S5.T4.18.12.17.4\">\n<span class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.17.4.1\">2560</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.17.4.2\">9.38E-5</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.17.4.3\">1.62E-5</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.17.4.4\">110</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.17.4.5\">2560</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.17.4.6\">3.08E-6</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.17.4.7\">8.16E-7</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.17.4.8\">130</span></span>\n<span class=\"ltx_tr\" id=\"S5.T4.18.12.18.5\">\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t ltx_colspan ltx_colspan_4\" id=\"S5.T4.18.12.18.5.1\">equi-BGN1 scheme</span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t ltx_colspan ltx_colspan_4\" id=\"S5.T4.18.12.18.5.2\">equi-BGN2 scheme</span></span>\n<span class=\"ltx_tr\" id=\"S5.T4.18.12.12\">\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T4.13.7.7.1\"></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T4.14.8.8.2\"></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T4.15.9.9.3\"></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.12.7\"><span class=\"ltx_text\" id=\"S5.T4.18.12.12.7.1\">Time</span>(s)</span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T4.16.10.10.4\"></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T4.17.11.11.5\"></span>\n<span class=\"ltx_td ltx_align_center ltx_th 
ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.12.6\"></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.12.8\"><span class=\"ltx_text\" id=\"S5.T4.18.12.12.8.1\">Time</span>(s)</span></span>\n<span class=\"ltx_tr\" id=\"S5.T4.18.12.19.6\">\n<span class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.19.6.1\">320</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.19.6.2\">4.70E-4</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.19.6.3\">9.42E-5</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.19.6.4\">1.16</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.19.6.5\">320</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.19.6.6\">2.09E-4</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.19.6.7\">5.03E-5</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.19.6.8\">0.82</span></span>\n<span class=\"ltx_tr\" id=\"S5.T4.18.12.20.7\">\n<span class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.20.7.1\">640</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.20.7.2\">1.82E-4</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.20.7.3\">3.44E-5</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.20.7.4\">4.93</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.20.7.5\">640</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.20.7.6\">5.20E-5</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.20.7.7\">1.26E-5</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.20.7.8\">4.19</span></span>\n<span class=\"ltx_tr\" id=\"S5.T4.18.12.21.8\">\n<span class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.21.8.1\">1280</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.21.8.2\">7.78E-5</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.21.8.3\">1.40E-5</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.21.8.4\">25.5</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.21.8.5\">1280</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.21.8.6\">1.29E-5</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.21.8.7\">3.14E-6</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.21.8.8\">25.4</span></span>\n<span class=\"ltx_tr\" id=\"S5.T4.18.12.22.9\">\n<span class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.22.9.1\">2560</span>\n<span class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.22.9.2\">3.55E-5</span>\n<span class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.22.9.3\">6.22E-6</span>\n<span class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" 
id=\"S5.T4.18.12.22.9.4\">284</span>\n<span class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.22.9.5\">2560</span>\n<span class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.22.9.6\">3.08E-6</span>\n<span class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.22.9.7\">7.86E-7</span>\n<span class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T4.18.12.22.9.8\">304</span></span>\n</span>\n</span></p>\n</figure>",
+ "capture": "Table 4: Comparisons of the CPU times (seconds) and the numerical errors measured from the manifold distance and Hausdorff distance for the BGN1-type and BGN2-type schemes applied to CSF"
+ },
+ "5": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T5\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 5: </span>Comparisons of the CPU times (seconds) and the numerical errors measured by the manifold distance and Hausdorff distance using the BGN1-type and BGN2-type schemes applied to SDF</figcaption>\n<p class=\"ltx_p ltx_align_center\" id=\"S5.T5.18\">, where the initial shape is chosen as Shape 3, with , and .\n\n\n<span class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T5.18.12\">\n<span class=\"ltx_thead\">\n<span class=\"ltx_tr\" id=\"S5.T5.18.12.13.1\">\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t ltx_colspan ltx_colspan_4\" id=\"S5.T5.18.12.13.1.1\">BGN1 scheme</span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t ltx_colspan ltx_colspan_4\" id=\"S5.T5.18.12.13.1.2\">BGN2 scheme</span></span>\n<span class=\"ltx_tr\" id=\"S5.T5.12.6.6\">\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T5.7.1.1.1\"></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T5.8.2.2.2\"></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T5.9.3.3.3\"></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T5.12.6.6.7\"><span class=\"ltx_text\" id=\"S5.T5.12.6.6.7.1\">Time</span>(s)</span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T5.10.4.4.4\"></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T5.11.5.5.5\"></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T5.12.6.6.6\"></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T5.12.6.6.8\"><span class=\"ltx_text\" id=\"S5.T5.12.6.6.8.1\">Time</span>(s)</span></span>\n</span>\n<span class=\"ltx_tbody\">\n<span class=\"ltx_tr\" id=\"S5.T5.18.12.14.1\">\n<span class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.14.1.1\">320</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.14.1.2\">4.73E-3</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.14.1.3\">6.91E-4</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.14.1.4\">0.470</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.14.1.5\">320</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.14.1.6\">2.53E-3</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.14.1.7\">1.14E-3</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.14.1.8\">0.610</span></span>\n<span class=\"ltx_tr\" id=\"S5.T5.18.12.15.2\">\n<span class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.15.2.1\">640</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.15.2.2\">2.24E-3</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.15.2.3\">3.38E-4</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" 
id=\"S5.T5.18.12.15.2.4\">2.03</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.15.2.5\">640</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.15.2.6\">8.28E-4</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.15.2.7\">4.17E-4</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.15.2.8\">2.27</span></span>\n<span class=\"ltx_tr\" id=\"S5.T5.18.12.16.3\">\n<span class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.16.3.1\">1280</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.16.3.2\">1.10E-3</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.16.3.3\">1.67E-4</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.16.3.4\">12.6</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.16.3.5\">1280</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.16.3.6\">2.30E-4</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.16.3.7\">1.12E-4</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.16.3.8\">15.1</span></span>\n<span class=\"ltx_tr\" id=\"S5.T5.18.12.17.4\">\n<span class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.17.4.1\">2560</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.17.4.2\">5.53E-4</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.17.4.3\">8.34E-5</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.17.4.4\">133</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.17.4.5\">2560</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.17.4.6\">5.42E-5</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.17.4.7\">2.82E-5</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.17.4.8\">153</span></span>\n<span class=\"ltx_tr\" id=\"S5.T5.18.12.18.5\">\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t ltx_colspan ltx_colspan_4\" id=\"S5.T5.18.12.18.5.1\">equi-BGN1 scheme</span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t ltx_colspan ltx_colspan_4\" id=\"S5.T5.18.12.18.5.2\">equi-BGN2 scheme</span></span>\n<span class=\"ltx_tr\" id=\"S5.T5.18.12.12\">\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T5.13.7.7.1\"></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T5.14.8.8.2\"></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T5.15.9.9.3\"></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.12.7\"><span class=\"ltx_text\" id=\"S5.T5.18.12.12.7.1\">Time</span>(s)</span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T5.16.10.10.4\"></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T5.17.11.11.5\"></span>\n<span 
class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.12.6\"></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.12.8\"><span class=\"ltx_text\" id=\"S5.T5.18.12.12.8.1\">Time</span>(s)</span></span>\n<span class=\"ltx_tr\" id=\"S5.T5.18.12.19.6\">\n<span class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.19.6.1\">320</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.19.6.2\">5.00E-3</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.19.6.3\">1.04E-3</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.19.6.4\">3.39</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.19.6.5\">320</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.19.6.6\">2.71E-3</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.19.6.7\">1.21E-3</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.19.6.8\">3.48</span></span>\n<span class=\"ltx_tr\" id=\"S5.T5.18.12.20.7\">\n<span class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.20.7.1\">640</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.20.7.2\">2.62E-3</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.20.7.3\">5.61E-4</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.20.7.4\">17.1</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.20.7.5\">640</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.20.7.6\">8.88E-4</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.20.7.7\">4.33E-4</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.20.7.8\">16.7</span></span>\n<span class=\"ltx_tr\" id=\"S5.T5.18.12.21.8\">\n<span class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.21.8.1\">1280</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.21.8.2\">1.34E-3</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.21.8.3\">2.93E-4</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.21.8.4\">105</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.21.8.5\">1280</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.21.8.6\">2.64E-4</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.21.8.7\">1.56E-4</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.21.8.8\">102</span></span>\n<span class=\"ltx_tr\" id=\"S5.T5.18.12.22.9\">\n<span class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.22.9.1\">2560</span>\n<span class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.22.9.2\">6.83E-4</span>\n<span class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.22.9.3\">1.51E-4</span>\n<span class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r 
ltx_border_t\" id=\"S5.T5.18.12.22.9.4\">1151</span>\n<span class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.22.9.5\">2560</span>\n<span class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.22.9.6\">8.12E-5</span>\n<span class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.22.9.7\">5.57E-5</span>\n<span class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T5.18.12.22.9.8\">1140</span></span>\n</span>\n</span></p>\n</figure>",
+ "capture": "Table 5: Comparisons of the CPU times (seconds) and the numerical errors measured by the manifold distance and Hausdorff distance using the BGN1-type and BGN2-type schemes applied to SDF"
+ }
+ },
+ "image_paths": {
+ "1": {
+ "figure_path": "2309.12875v2_figure_1.png",
+ "caption": "Figure 1: Log-log plot of the numerical errors at time T=0.25\ud835\udc470.25T=0.25italic_T = 0.25 measured by the manifold distance for BGN1, equi-BGN1, BGN2 and equi-BGN2 schemes for solving the CSF with two different initial curves: (a) Shape 1 and (b) Shape 2, respectively, where the number of nodes is fixed as N=10000\ud835\udc4110000N=10000italic_N = 10000.",
+ "url": "http://arxiv.org/html/2309.12875v2/x1.png"
+ },
+ "2": {
+ "figure_path": "2309.12875v2_figure_2.png",
+ "caption": "Figure 2: Log-log plot of the numerical errors at time T=0.25\ud835\udc470.25T=0.25italic_T = 0.25, measured by the manifold distance, for solving two different flows with Shape 2 as the initial curve: (a) AP-CSF and (b) SDF, respectively.",
+ "url": "http://arxiv.org/html/2309.12875v2/x2.png"
+ },
+ "3": {
+ "figure_path": "2309.12875v2_figure_3.png",
+ "caption": "Figure 3: (a) Several snapshots of the curve evolution controlled by the SDF, starting with Shape 2 as its initial shape. (b) The relative area loss as a function of time. (c) The normalized perimeter as a function of time. (d) The mesh ratio function \u03a8\u2062(t)\u03a8\ud835\udc61\\Psi(t)roman_\u03a8 ( italic_t ) (in blue line) and the number of mesh regularizations (in red line) for the BGN2 scheme. (e) The mesh ratio function \u03a8\u2062(t)\u03a8\ud835\udc61\\Psi(t)roman_\u03a8 ( italic_t ) (in blue line) and the number of iteration numbers (in red line) at each time step for the equi-BGN2 scheme",
+ "url": "http://arxiv.org/html/2309.12875v2/x3.png"
+ },
+ "4": {
+ "figure_path": "2309.12875v2_figure_4.png",
+ "caption": "Figure 4: (a) Several snapshots of the curve evolution controlled by the\nSDF, starting with Shape 3 as its initial shape. (b) The relative area loss as a function of time. (c) The normalized perimeter as a function of time. (d) The mesh distribution function \u03a8\u2062(t)\u03a8\ud835\udc61\\Psi(t)roman_\u03a8 ( italic_t ) (in blue line) and the number of mesh regularizations (in red line) for the BGN2 scheme. (e) The mesh ratio function \u03a8\u2062(t)\u03a8\ud835\udc61\\Psi(t)roman_\u03a8 ( italic_t ) (in blue line) and the number of iteration numbers (in red line) at each time step for the equi-BGN2 scheme",
+ "url": "http://arxiv.org/html/2309.12875v2/x4.png"
+ },
+ "5": {
+ "figure_path": "2309.12875v2_figure_5.png",
+ "caption": "Figure 5: Evolution of the three geometrical quantities when the initial data is prepared as in Algorithm 2.1: (a) the relative area loss, (b) the normalized perimeter, (c) the mesh distribution function \u03a8\u2062(t)\u03a8\ud835\udc61\\Psi(t)roman_\u03a8 ( italic_t ), for the BGN2 scheme.",
+ "url": "http://arxiv.org/html/2309.12875v2/x5.png"
+ },
+ "6": {
+ "figure_path": "2309.12875v2_figure_6.png",
+ "caption": "Figure 6: Evolution of the three geometrical quantities when the initial data is prepared as in Remark 2.2: (a) the relative area loss, (b) the normalized perimeter, (c) the mesh distribution function \u03a8\u2062(t)\u03a8\ud835\udc61\\Psi(t)roman_\u03a8 ( italic_t ), for the BGN2 scheme (shown in the first row), the equi-BGN2 scheme (shown in the second row) and without mesh regularization procedure (shown in the third row)",
+ "url": "http://arxiv.org/html/2309.12875v2/x6.png"
+ },
+ "7": {
+ "figure_path": "2309.12875v2_figure_7.png",
+ "caption": "Figure 7: Evolution of the curve driven by SDF starting with Shape 4 as initial data by using the BGN2 scheme (shown in the first row), the equi-BGN2 scheme (shown in the second row) and without mesh regularization procedure (shown in the third row).",
+ "url": "http://arxiv.org/html/2309.12875v2/x7.png"
+ },
+ "8": {
+ "figure_path": "2309.12875v2_figure_8.png",
+ "caption": "Figure 8: Snapshots of the curve evolution using the proposed BGN2 schemes for three distinct geometric flows: CSF (first row), AP-CSF (second row) and SDF (third row). The simulations are conducted with N=80\ud835\udc4180N=80italic_N = 80 and \u03c4=1/640\ud835\udf0f1640\\tau=1/640italic_\u03c4 = 1 / 640.",
+ "url": "http://arxiv.org/html/2309.12875v2/x8.png"
+ }
+ },
+ "validation": true,
+ "references": [],
+ "url": "http://arxiv.org/html/2309.12875v2"
+ }
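The "Order" rows in Tables 1-5 above are observed convergence rates computed from the errors on successively refined meshes. As a reading aid, here is a minimal sketch of that standard computation; the refinement factor of 2 is an assumption, consistent with the roughly halving errors in Table 1, since the actual mesh parameters were lost in extraction.

```python
import math

# Observed convergence order between consecutive refinement levels:
#   order_k = log(e_{k-1} / e_k) / log(ratio)
# where e_k are the errors on meshes refined by `ratio` (assumed 2 here).
def observed_orders(errors, ratio=2.0):
    return [math.log(a / b) / math.log(ratio) for a, b in zip(errors, errors[1:])]

# First error row of Table 1 (Dziuk's scheme):
errors = [1.17e-2, 6.31e-3, 3.26e-3, 1.62e-3]
print([round(p, 2) for p in observed_orders(errors)])  # [0.89, 0.95, 1.01], matching the table
```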
20240620/2309.14169v2.json ADDED
@@ -0,0 +1,401 @@
+ {
+ "title": "Extrapolated Regularization of Nearly Singular Integrals on Surfaces",
+ "abstract": "We present a method for computing nearly singular integrals that occur when single or double layer surface integrals, for harmonic potentials or Stokes flow, are evaluated at points nearby. Such values could be needed in solving an integral equation when one surface is close to another or to obtain values at grid points. We replace the singular kernel with a regularized version having a length parameter in order to control discretization error.\nAnalysis near the singularity leads to an expression for the error due to regularization which has terms with unknown coefficients multiplying known quantities. By computing the integral with three choices of \nwe can solve for an extrapolated value that has regularization error reduced to ,\nuniformly for target points on or near the surface.\nIn examples with\n constant and moderate resolution we observe total error about close to the surface. For convergence as we can choose proportional to with to ensure the discretization error is dominated by the regularization error. With we find errors about . For harmonic potentials we extend the approach to a version with regularization; it typically has smaller errors but the order of accuracy is less predictable.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "The evaluation of singular or nearly singular surface integrals, on or near the surface, requires special care. Here we are concerned with single and double layer integrals for harmonic potentials or for Stokes flow. One of several possible approaches is to regularize the singular kernel in order to control the discretization error. A natural choice is to replace the singularity in the single layer potential with , where is the error function and is a numerical parameter setting the length scale of the regularization. This replacement introduces an additional error due to smoothing. For the singular case, evaluating at points on the surface, we can modify the choice of regularization so that the new error is ; see [3 ###reference_b3###, 6 ###reference_b6###, 30 ###reference_b30###]. The nearly singular case, evaluation at points near the surface, could be needed e.g.\nin solving integral equations\nwhen surfaces are close together or to obtain values at grid points.\nFor this case, in the previous work, we used\nanalysis near the singularity to derive corrections which leave a remaining error of . It does not seem practical to extend the corrections to higher order. In the present work we show by local analysis that the simpler regularization can be used with extrapolation, rather than corrections, to improve the error to in the nearly singular case. For on or near the surface, at signed distance ,\nif is the single layer potential with some density function and is the regularized integral, we show that\nuniformly for as ,\nwhere and are certain integrals, known explicitly, and , are coefficients which depend on , , the surface, and the density function. We can regard , , as unknowns at one point . Our strategy is to calculate the\nregularized integrals \nfor three different choices of and then solve for , within ), from the system of three equations.\nWe treat the double layer potential in a similar way, as well as the single and double layer integrals for Stokes flow.\nWe comment on the Helmholtz equation.\nFor the harmonic potentials we extend the approach to a method with regularization error; it requires four choices of rather than three.\nTo compute the integrals we use a quadrature rule for surface integrals for which the quadrature points are points where the surface intersects lines in a three-dimensional grid and the weights are determined by the normal vector to the surface.\nIt is high order accurate for smooth integrands; for the nearly singular integrals the accuracy depends on as well as the grid size .\nThe regularization enables us to make the integrand smooth enough to discretize without special treatment near the singularity.\nOther quadrature methods could be used if desired.\nThe total error consists of the regularization error and the error due to discretization. The discretization error is low order as if is fixed,\nbut it rapidly improves as increases; this is explained in Sect. 4. In our experiments with\n constant, we typically observe errors about near the surface with moderate resolution, i.e. not too small, indicating that the regularization error is dominant. However this trend cannot continue as .\nFor rapid convergence as we need to increase to ensure that the discretization error is dominated by the regularization error. To do this we choose proportional to , e.g. 
with\n, resulting in an error about .\nTo test the uniform convergence we measure errors at grid points\nwithin distance from the surface.\nWith the fifth order regularization\nwe see the predicted orders, while for the seventh order method\nwe typically see smaller errors but the order in is less predictable,\npresumably because of discretization error.\nConsiderable work has been devoted to the computation of singular integrals such as layer potentials. Only a portion of this work has concerned nearly singular integrals on surfaces. Often\nvalues close to the surface are obtained by extrapolating from values further away [35 ###reference_b35###], sometimes as part of the quadrature by expansion\n(QBX) or hedgehog methods\n[1 ###reference_b1###, 15 ###reference_b15###, 16 ###reference_b16###, 19 ###reference_b19###, 28 ###reference_b28###]. In [29 ###reference_b29###] sources are placed on the opposite side of the surface to produce a kernel independent method.\nWith the singularity subtraction technique [12 ###reference_b12###] a most singular part is evaluated analytically leaving a more regular remainder. In [21 ###reference_b21###], for the nearly singular axisymmetric case,\nthe error in computing the most singular part provides a correction. In [22 ###reference_b22###] an approximation to the density function is used to reduce the singularity.\nRegularization has been used extensively to model Stokes flow in biology [8 ###reference_b8###, 9 ###reference_b9###]; see also [30 ###reference_b30###]. Richardson extrapolation has been used for\nStokes flow [10 ###reference_b10###].\nWith Ewald splitting [14 ###reference_b14###],[26 ###reference_b26###],[25 ###reference_b25###],[2 ###reference_b2###],[13 ###reference_b13###]\nthe kernel is written as a localized singular part plus a smooth part so that the two parts can be computed by different methods. Regularization as used\nhere could be thought of as a limit case which reduces the singular part so that it becomes a correction, as in [3 ###reference_b3###, 6 ###reference_b6###, 30 ###reference_b30###]\nor treated as an error in the present case. Integrals for\nthe heat equation were treated in this way in [11 ###reference_b11###], with the history treated as a smooth part. There is an analogy between the present method and QBX.\nIn the latter, the value\nat a specified point near the boundary is extrapolated from values at points further away along a normal line; increasing the distance is a kind of smoothing, analogous\nto the regularization here. However the two techniques for making the integral smoother are different in practice.\nWhile the choice of numerical method depends on context, the present approach is simple and direct. The work required is similar to that for a surface integral with smooth integrand, except that three (or four) related integrals must be computed rather than one. No special gridding or separate treatment of the singularity is needed. The surface must be moderately smooth, without corners or edges. Geometric information about the surface is not needed other than normal vectors; further geometry was needed for the corrections of [3 ###reference_b3###, 6 ###reference_b6###, 30 ###reference_b30###] and in some other methods. It would be enough for the surface to be known through values of a level set function at grid points nearby. For\nefficiency fast summation methods suitable for regularized kernels [34 ###reference_b34###, 27 ###reference_b27###, 31 ###reference_b31###] could be used. 
The approach here is general enough that it should apply to other singular kernels; however, a limitation is discussed at the end of the next section.\nResults are described more specifically in Sect. 2. The analysis leading to (1 ###reference_###) is carried out in Sect. 3. In Sect. 4 we discuss the quadrature rule and the discretization error. In Sect. 5 we present numerical examples which illustrate the behavior of the method. In Sect. 6 we prove that the system of three equations of the form (1 ###reference_###) is solvable, and Sect. 7 has a brief conclusion."
+ },
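The erf-based regularization described in this introduction is simple to realize in code. The displayed formula did not survive extraction, but since the text names the error function and a length parameter, one consistent reading is that 1/r is replaced by erf(r/delta)/r; the sketch below (an illustration, not the paper's code) shows that this kernel stays bounded, with limiting value 2/(delta*sqrt(pi)) at r = 0.

```python
import numpy as np
from scipy.special import erf

# Regularized single-layer kernel: 1/r replaced by erf(r/delta)/r.
# (Form inferred from the text's mention of the error function; an assumption.)
def s_delta(r, delta):
    r = np.asarray(r, dtype=float)
    out = np.full_like(r, 2.0 / (delta * np.sqrt(np.pi)))  # limiting value at r = 0
    nz = r > 0
    out[nz] = erf(r[nz] / delta) / r[nz]
    return out

r = np.array([0.0, 0.05, 0.1, 0.5, 1.0])
print(s_delta(r, delta=0.1))  # finite at r = 0; close to 1/r once r >> delta
```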
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "Summary of results",
+ "text": "For a single layer potential\non a closed surface , with given density function , we define the regularized version\nwith\nThen is smooth, with , and\n rapidly as increases.\nTypically .\nIf is near the surface, then , where is the closest point on , is the outward normal vector at , and is the signed distance. From a series expansion for near and near we show in Sect. 3 that\nuniformly for near the surface,\nwhere ; , are unknown coefficients; and and are integrals occurring in the derivation that are found to be\nHere .\nTo obtain an accurate value of , we calculate the\nregularized integrals for three different choices of ,\nat the same with the same grid size ,\nresulting in a system of three equations with three unknowns. We can then solve for the exact integral within error .\nWe typically choose with or .\nTo improve the conditioning we write three versions of (5 ###reference_###) in terms of rather than ,\nwith . It is important that do not depend on or . We solve this system for .\nThe th row is ; the entries depend only on\n as well as . The value obtained for has the form\nIn each case .\nFor , and\n, , provided . As increases, the coefficients approach\n, allowing a gradual transition to the region far enough from to omit the regularization.\nIt is not obvious that the system (8 ###reference_###) is solvable, i.e. that the matrix is invertible. In Sect. 6 we\nprove the solvability for any distinct choices of the .\nTo ensure the smoothing error is dominant as we may choose with , rather than , to obtain convergence ; see Sect. 4.\nFor the double layer potential\nthe treatment is similar after a subtraction. Using Green\u2019s identities we rewrite (10 ###reference_###) as\nwhere again is the closest point on and\n for inside, for outside, and\n on . To regularize we replace \nwith the gradient of the smooth function , obtaining\nwith\nThus\nThe expansion for near is somewhat different but coincidentally leads to the same relation as in (8 ###reference_###) with\n and replaced by and . Thus we can solve\nfor to in the same way as for .\nThere is a straightforward extension to a method with regularization error. In equation (5 ###reference_###) there is now\nan additional term . There are four unknowns, so that four choices of are needed.\nOtherwise this version is similar to the original one. On the other hand, we could use only two choices of\n, omitting the term in (5 ###reference_###), obtaining a version with error .\nThe special case of evaluation at points on the surface\nis important because it is used to solve integral equations for\nproblems such as the Dirichlet or Neumann problem for harmonic functions.\nWe could use the procedure described with and . However in this case we can modify the regularization to obtain error more directly [3 ###reference_b3###, 6 ###reference_b6###].\nFor the single layer integral, in place of (3 ###reference_###) we use\nFor the double layer\nwe use (14 ###reference_###) with and (13 ###reference_###)\nreplaced by\nWe typically use with these formulas for evaluation on the surface [6 ###reference_b6###, 30 ###reference_b30###]. They\nwere derived by imposing conditions to eliminate the leading error [3 ###reference_b3###], and the error can be checked using the analysis in the next section. Formulas with error could be produced with the same approach.\nThe equations of Stokes flow represent the motion of incompressible fluid in the limit of zero Reynolds number; e.g. see [24 ###reference_b24###]. 
In the simplest form they are\nwhere is the fluid velocity and is the pressure. The primary fundamental solutions for the velocity are the Stokeslet and stresslet,\nwhere is the Kronecker delta and . They are the kernels for the single and double layer integrals\nwhere and are components of vector quantities and on the surface and is a component of the normal vector . A subtraction can be used in both cases; e.g., see [24 ###reference_b24###],\nSect. 6.4. With as before we rewrite\n(19a ###reference_.1###) as\nThe subtracted form of (19b ###reference_.2###) is\nTo compute (20 ###reference_###) we replace with the regularized version\nwith and as in (4 ###reference_###),(13 ###reference_###), resulting in a smooth kernel.\nFor the Stokes double layer integral we need to rewrite the kernel\nso that it will be compatible with the analysis of Sect. 3; see the last paragraph of this section\nfor further discussion.\nFor near the surface we have\n with and .\nIn we substitute where and are the th components of and \nand similarly for and .\nThe product becomes a sum. We need to avoid terms in the kernel such as\n or , with . To do this\nwe replace with \nto introduce factors in the numerator which vanish\nat . We obtain\nwhere\nand we substitute .\nWe compute (21 ###reference_###) with replaced with the regularized version of (23 ###reference_###)\nwhere\nFor both Stokes integrals, calculated in the manner described, we find in Sect. 3 that the error has a form equivalent to (8 ###reference_###), and we extrapolate with three choices of\n as before. Again for the special case of evaluation on the surface we can obtain an regularization directly. Formulas were given in [30 ###reference_b30###] and an improved formula for the stresslet case was given in [5 ###reference_b5###].\nA strategy similar to that for the Laplacian\ncould be used for single or double layer integrals for the Helmholtz equation,\n, which describes waves of a definite frequency.\nThe usual fundamental solution is . We could regularize the most singular part,\n or or , multiplying by ,\nand extrapolate as for the Laplacian. We would not modify the remaining part of .\nFor the double layer potential we need to use a subtraction again. We could\ndo this using a plane wave and Green\u2019s third identity (e.g. see [20 ###reference_b20###] Thm. 3.1.1) as has been done before\n(e.g. see [23 ###reference_b23###]). We choose a vector\n so that and, for convenience, .\nWith and as in (11 ###reference_###) we rewrite the double layer potential as\nIf we regularize only the term we could instead use\n(11 ###reference_###) for that part alone.\nIt appears this method would not be successful if applied directly to the double layer potential or the Stokeslet integral without the subtraction. There would be\na term in the integrand proportional to\n. The equation (5 ###reference_###) for the regularization error would then have an additional term which, to first approximation, does not change as is varied. As a result the extrapolated value of the integral becomes unstable as ; i.e.,\nthe coefficients in the linear combination replacing (9 ###reference_###) become large as . A similar consideration motivates the expression for above. For other kernels general techniques to reduce the singularity could be used if necessary, e.g. [22 ###reference_b22###]."
+ },
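The core of the method summarized in this section is the three-delta extrapolation: compute the regularized integral for three values of delta and solve a small linear system. The closed forms of the two known error-profile integrals (the paper's Eqs. (6)-(7)) were lost in extraction, so the sketch below takes them as caller-supplied functions; the column structure of the system is one consistent reading of Eqs. (5) and (8) and is an assumption here, as is the variable naming.

```python
import numpy as np

def extrapolated_value(u_deltas, deltas, d, I0, I1):
    """Solve for the extrapolated layer-potential value from three regularized values.

    u_deltas : the three computed regularized integrals u_{delta_i}(x)
    deltas   : the three regularization lengths delta_i
    d        : signed distance from the target point x to the surface
    I0, I1   : the known error-profile functions of lambda = d/delta
               (closed forms in the paper's Eqs. (6)-(7); supplied by the caller)

    Assumed error model per Eqs. (5) and (8):
        u_{delta_i} = u + c1*delta_i*I0(d/delta_i) + c2*delta_i**3*I1(d/delta_i) + O(delta^5)
    with unknowns (u, c1, c2); solving the 3x3 system removes the leading
    regularization errors, leaving u accurate to O(delta^5).
    """
    A = np.array([[1.0, di * I0(d / di), di**3 * I1(d / di)] for di in deltas])
    u, c1, c2 = np.linalg.solve(A, np.asarray(u_deltas, dtype=float))
    return u
```

The text notes that the actual scheme rescales the system to improve conditioning and proves in Sect. 6 that the matrix is invertible for any distinct choices of delta; the sketch omits the rescaling for brevity.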
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Local analysis near the singularity",
+ "text": "We derive an expansion for the error due to regularizing a singular integral, when evaluated at a point near the surface .\nThe error is uniform with respect to .\nThe expression obtained leads to the formula (5 ###reference_###) and the extrapolation strategy used here. The first few terms of the expansion were used in [3 ###reference_b3###, 6 ###reference_b6###, 30 ###reference_b30###] to find corrections to .\nWe begin with the single layer potential (2 ###reference_###). The error is the difference between (3 ###reference_###) and (2 ###reference_###).\nGiven near ,\nwe assume for convenience that\nthe closest point on is . Then , where\n is the outward normal at and is the signed distance from the surface. We choose coordinates on near so that , the metric tensor at , and the second derivatives are normal at . E.g., if the tangent plane at is , we could use . Since the error in the integral is negligible for away from we can assume the density is zero outside this coordinate patch, regard it as a function of , and write the regularization error as\nThen\nWe can expand near as\nHere , the tangent vector at\n, and we use multi-index notation: , , is mixed partial derivative of order , and . We will use the notation for generic constants whose value will not be needed.\nWe first get an expression for . We start with\nThere is no term with since the first and second order terms in are orthogonal. Also\nThen\nWe assume is smooth, so that the error terms are uniform with respect to the\nlocation.\nWe will make a change of variables defined by\nThis allows us to write the error as\nwhere\nThe expression (36 ###reference_###) will enable us to expand the error in the form we need. An estimate of\n(36 ###reference_###), bounding by a constant, shows that decays faster than and so is negligible for larger\nthan . Thus we can regard as being at most .\nThe mapping is close to the identity but it is not smooth at , so that we cannot write directly in a power series in .\nWe will see that is a sum of terms of\nthe form with , and such a term makes a contribution to the error of order\n. For this purpose we need a qualitative understanding of the inverse of the mapping .\nThinking of polar coordinates in (35 ###reference_###), we do not change the angle but we make a change along each ray depending on the angle. Thus it is enough to\nconsider the inverse of the mapping . We will do this using the Lagrange Inversion Theorem [32 ###reference_b32###, 17 ###reference_b17###]; the theorem is usually\nstated for analytic functions, but for functions it can be applied to the Taylor polynomial.\nWe start by rewriting (34 ###reference_###) as\nHere means . With\n,\nwe can substitute in (38 ###reference_###). We then regard (38 ###reference_###) as a power series in in which\nthe coefficients depend on and . We will say that such a series is of type A if the coefficient of the th power is a\npolynomial in and with terms such that is even. Then (38 ###reference_###) is of type A. Multiplication of series\npreserves type A; thus powers of have series of type A. We note that the th term in a product series depends only on the first terms in the factors.\nUsing the power series for we can write a similar expression for with terms as in (38 ###reference_###) and their products.\nThis series is also of type A;\nthe same is true for powers of .\nWe now apply the Lagrange Theorem to the function . 
According to the theorem, has a series in , with remainder,\nsuch that the coefficient of is proportional to the\ncoefficient of in the series for\n =\n.\nThis quantity has factors with even.\nWe now divide this expression for by so that the earlier parity is restored.\nWe have shown that has a series in which is type A.\nFinally we rewrite as , and\nin summary we have shown that\nwhere , , and\n. With \nwe get a similar expression for as a function of .\nThe function and the factor in have series in which can be converted to . The Jacobian\nis\nIt has terms of the same type as those in .\nThe Jacobian has leading term and is bounded but not smooth as\n.\nWe conclude that has the expression\nwhere , , , and\n.\nTo find the contribution to the error (36 ###reference_###)\nfrom a term in (41 ###reference_###) with a particular\n we will integrate in polar coordinates. The angular integral is zero by symmetry unless , are both even. Let , the degree of . With the restriction the possible nonzero terms have\n and or with . To carry out the integration, we rescale variables to ,\n, and write in polar coordinates. With\n we obtain\nwhere\nIn a similar way we see that the remainder leads to an error which is . In summary we can express the error as\nwhere are polynomials in with ,\n. They depend only on the surface and , not\n or . For fixed and they are unknown coefficients. To normalize the equation we set \nand rewrite it as\nThis conclusion is equivalent to (8 ###reference_###), which we use with three choices of to solve for the single layer potential within .\nFor the double layer potential, in view of (14 ###reference_###) and (11 ###reference_###), we can write the error from regularizing as\nwhere\nand after changing from to ,\nwhere now\nWe find\nand note . Thus each term in now has at least two additional factors. We expand as in (41 ###reference_###)\nbut now include terms with , where again\n. The term now contributes an\nerror of order , rather than as before.\nFrom the last remark, each nonzero term must have and or and . By symmetry a term that contributes a nonzero error must have and \nor and . The possible terms with \nare with and with .\nRescaling the integrals we find\nwith , , and\nIn fact\nso that (51 ###reference_###) is equivalent to (44 ###reference_###), and we can solve for the double layer as in (8 ###reference_###).\nThe expansions can be carried further in the same manner. For the single layer integral we can refine the error expression (44 ###reference_###) to\nFor the double layer (51 ###reference_###) is replaced by\nEach of these expressions leads to a system of four equations in four unknowns, using four different choices of .\nIn fact , so that again we may use the same equations for both cases.\nFor the Stokes single layer integral, calculated in the form\n(20 ###reference_###), (22 ###reference_###), the first term is equivalent to the single layer potential (2 ###reference_###). The second term resembles the double layer (10 ###reference_###). We note the integrand has a factor\n with\n.\nThus at , and since\n, the numerator of the integrand is . The discussion above for the double layer now applies to this second term, leading to the same expression for the error.\nFor the Stokes double layer integral, with the subtraction\n(21 ###reference_###) and the kernel rewritten as in (26 ###reference_###), the first term is again like the harmonic double layer. For the second term, regularized with , the numerator in the expansion will have terms or higher. 
By symmetry the terms that contribute nonzero error have or higher. We get an expansion for the error in the second term in the form\nwith\nand .\nWe find that and , so that once again we can use (8 ###reference_###) for extrapolation."
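The extrapolation step implied by (8) can be made concrete. Below is a minimal sketch, assuming an error model u_delta = u + c1*E1(delta, b) + c2*E2(delta, b) + O(h^5); the names `E1`, `E2` are hypothetical placeholders for the two error basis functions of this section (functions of the regularization parameter delta and the signed distance b), not the paper's notation.

```python
import numpy as np

def extrapolate(u_vals, deltas, b, E1, E2):
    """Recover the limiting value u from three regularized evaluations.

    Solves u_vals[j] = u + c1*E1(deltas[j], b) + c2*E2(deltas[j], b)
    for (u, c1, c2), where u_vals[j] is the regularized integral computed
    with parameter deltas[j] (e.g. deltas[j] = rho_j * h) at one target point.
    """
    A = np.array([[1.0, E1(d, b), E2(d, b)] for d in deltas])
    u, _c1, _c2 = np.linalg.solve(A, np.asarray(u_vals))
    return u
```

Section 6 proves that this three-equation system is always solvable, so the solve step is safe; the neglected remainder is what produces the fifth order accuracy reported in Section 5.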
22
+ },
23
+ {
24
+ "section_id": "4",
25
+ "parent_section_id": null,
26
+ "section_name": "\nSurface quadrature and the discretization error",
27
+ "text": "We use a quadrature rule for surface integrals introduced in [33 ###reference_b33###] and used in [3 ###reference_b3###, 6 ###reference_b6###, 30 ###reference_b30###]. We cover the surface with a three-dimensional grid with spacing . The quadrature points have the form , i.e., points on the surface whose projections on the plane are grid points, and similarly for the other two directions. We only use points for which the component of the normal vector in the distinguished direction is no smaller than for a chosen angle . In our case we take . The weights are determined by a partition of unity on the unit sphere; it is applied to the normal vector at each point.\nWe define three sets of quadrature points as\nwhere means the third component of the normal vector, and similarly for . The quadrature points of the set are shown for two ellipsoids in Figure 1 ###reference_###.\nTo construct the partition of unity we start with the bump function\nHere is a parameter.\nFor a unit vector \nwe define\nThe quadrature rule for a surface integral with integrand is\nIt has high order accuracy as allowed by the smoothness of the surface and the integrand.\nThe weights cut off the sum in each plane, and each sum has the character of\nthe trapezoidal rule without boundary; see [33 ###reference_b33###].\n###figure_1### In earlier work we chose the parameter to be . Here we use . We have found from error estimates in [4 ###reference_b4###], discussed below, as well as numerical experiments, that the discretization error is controlled better with this choice. We do not recommend using because of increased derivatives.\nThe full error in this method consists of the regularization error plus the discretization error; symbolically\nFor either the single layer potential (2 ###reference_###) or the double layer (11 ###reference_###) the discretization error\narbitrarily close to \ncan be written as\nwhich at first appears inaccurate. Formulas for the first term were given in [3 ###reference_b3###, 6 ###reference_b6###], based on approximating the surface locally as a plane. They can be used as corrections. Estimates for these formulas were given in [4 ###reference_b4###]. With\nthe parameter choices here, in particular with , it was shown that\nfor the single and double layer respectively, and they decrease rapidly as increases. Here means the tangential gradient.\nThe term in (63 ###reference_###) evidently decreases rapidly as increases, as does . With , ;\nsee [6 ###reference_b6###], Sect. 3.4. However depends on the surface and integrand and could be large. With moderate resolution we expect that the\ndiscretization error is controlled by the regularization. If desired the formulas for in [6 ###reference_b6###]\ncould be used as corrections with the present method; they are infinite series, but only the first few terms are significant. To ensure that the regularization error dominates the discretization error for small we can choose proportional to , with , so that\n increases as ."
28
+ },
29
+ {
30
+ "section_id": "5",
31
+ "parent_section_id": null,
32
+ "section_name": "Numerical examples",
33
+ "text": "We present examples computing single and double layer integrals at grid points within of a surface, for harmonic potentials and for Stokes flow. The points are selected from the three-dimensional grid with spacing which determines the quadrature points on the surface, as described in Sect. 4. With the fifth order regularization the results are in general agreement with the theoretical predictions. With moderate resolution and\n constant the errors are about . With proportional to the error is about . For the harmonic potentials we also test the seventh order method; the errors are typically smaller but the order of accuracy is less predictable.\nIt is likely that the discretization error is relatively more significant with the smaller errors of the seventh order case.\nWe report maximum errors and errors, defined as\nwhere is the error at and is the number of points.\nWe present absolute errors; for comparison we give approximate norms of the exact solution.\nHarmonic Potentials.\nWe begin with known solutions on the unit sphere. We test the single and double layer separately. We compute the integrals at grid points first within distance and then on shells at increasing distance. In the latter case we also find values computed without regularization.\nWe then compute known harmonic functions on three other surfaces which combine single and double layers.\nThe single and double layer potentials, (2 ###reference_###) and (10 ###reference_###), are harmonic inside and outside the surface . They are characterized by the jump conditions\nwhere means the value outside minus the value inside.\nFor the unit sphere we use the spherical harmonic function\nfor both the single and double layer integrals. The functions\nare both harmonic. We define\n by (2 ###reference_###) and by (10 ###reference_###)\nwith . They are determined by the jump conditions,\nWe present errors for the single and double layer potentials at grid points at various distances from the sphere. We begin with the single layer. We compute the integral as in (3 ###reference_###) and extrapolate as in (8 ###reference_###). Near the sphere the maximum of is about and the norm is about . Figure 2 ###reference_###, left, shows the and maximum errors for grid points within distance of the sphere, using fifth or seventh order extrapolation. For the fifth order we take as previously described, and for\nthe seventh order we take . The expected order of accuracy is evident in the fifth order case; the seventh order method has somewhat smaller errors but does not have a discernible order of accuracy, probably because the discretization error is significant. In subsequent figures we display the errors at nearby grid points at distance between and from the sphere, both inside and outside, for . We compute the integral with no regularization as well as the fifth and seventh order methods. Figure 2 ###reference_###, right, shows errors for and Figure 3 ###reference_### for and . The values without regularization in Figure 2 ###reference_### appear to be about accurate. The fifth order method again has the expected order of accuracy at least for but becomes less steady with distance. The errors become smaller overall as the distance increases. 
Beyond the error without regularization is quite small, suggesting that we can discontinue the regularization for points at least from the surface.\n###figure_2### ###figure_3### In Figures 4 ###reference_###,5 ###reference_### we present results of the same type for the double layer potential, computed as in (14 ###reference_###). They are similar in behavior to those for the single layer. The maximum of is about and\n.\n###figure_4### ###figure_5### For the remaining tests on other surfaces we use a procedure as in [6 ###reference_b6###] which allows us to have known solutions with an arbitrary surface . This provides a test of the single and double layer combined, rather than separately. We choose harmonic functions outside and inside. We set and , the jumps across as above. Then assuming decays at infinity, on both sides, where and are defined in (2 ###reference_###), (10 ###reference_###). We choose\nIn these tests we again use with the fifth order method and with seventh order. We also\nchoose proportional to\n with the fifth order method and with the seventh order method, so that the predicted order of error is\n. We choose constants so that agrees with the earlier choice at .\nOur first surface with this procedure is a rotated ellipsoid shown in Figure 1 ###reference_###, left,\nwhere , , and , where\n is the orthogonal matrix\nWe present results in Figure 6 ###reference_###. In Figure 6 ###reference_###, left, we evaluate at\nall grid points within distance with both regularizations.\nFigure 6 ###reference_###, right, has values at points within distance \nin the first octant, i.e., those with .\nThe accuracy of the fifth order version is close to the prediction; the seventh order version has smaller errors in Figure 6 ###reference_###, right, and perhaps approximates the predicted order but not clearly so.\nFor the left figure the norm of the exact solution is about\n and the maximum about 1.7. For the right figure, within the first octant, they are about .76 and 1.4.\n###figure_6### The next example is a surface obtained by revolving a Cassini oval about the axis,\nwith and . The final surface represents a molecule with four atoms,\nwith , , and given by\nThese surfaces are shown in Figure 7 ###reference_###.\n###figure_7### We compute the solution for grid points in the first octant as before for the ellipsoid, with related to in the same way. We present errors with fifth or seventh order regularization, with proportional to or fractional.\nThe results, reported in Figures 8 ###reference_### and 9 ###reference_###, are generally similar to those for the rotated ellipsoid. For both surfaces we see roughly the predicted orders of accuracy in the fifth order case. For seventh order the errors are smaller, but the accuracy in the fractional case is somewhat less than fourth order in .\nFor the Cassini surface the norm for the exact values is about and the maximum is about . For the molecular surface they are about and .\n###figure_8### ###figure_9### Stokes Flow. We present examples of three types. First we calculate the velocity near a translating spheroid in Stokes flow, given as a single layer integral. We then compute a standard identity for the double layer integral. Finally we compute a velocity that combines single and double layer integrals on an arbitrary surface, as in the examples above with harmonic potentials. We have increased to to make the order of accuracy more evident, even though errors are typically smaller with . 
In each case we report errors at grid points within distance of the surface.\nIn our first example we compare the single layer or Stokeslet integral with an exact solution. We compute the Stokes flow around a prolate spheroid\nwith semi-axes , shown in Figure 1 ###reference_###, right, and translating with velocity . The fluid velocity is determined by the integral (19a ###reference_.1###)\nfrom the surface traction . Formulas for the solution are given in\n[7 ###reference_b7###, 18 ###reference_b18###, 30 ###reference_b30###]. The surface traction is\nwhere is a constant.\nWe compute the fluid velocity as in (20 ###reference_###),(22 ###reference_###) and extrapolate as before. Results are presented in Figure 10 ###reference_###. The exact solution has maximum amplitude and norm about .\n###figure_10### Next we test the double layer integral (19b ###reference_.2###) using the identity (2.3.19) from [24 ###reference_b24###]\nwhere = 1, 1/2, 0 when is inside, on, and outside the boundary. We set and define . We compute the integral according to (21 ###reference_###), (23 ###reference_###), (26 ###reference_###) and extrapolate. We report errors for a sphere and for the spheroid (78 ###reference_###) in Figure 11 ###reference_###.\nFor the sphere the maximum value is and the norm is about . For the spheroid the maximum is and the norm is .\n###figure_11### In order to test integrals on general surfaces we again use a formula combining the single and double layer integrals. If is the velocity of Stokes flow outside and inside a surface , with suitable decay at infinity, then\nHere is the jump in surface force, outside minus inside, and is the jump in velocity. The surface force is the normal stress,\n, where the outward normal. The jump conditions are derived e.g. in [24 ###reference_b24###]. As a test problem we take the inside velocity to be the Stokeslet due to a\npoint force singularity of strength , placed at\n. The velocity is\nand the stress tensor is\nwhere , . We choose the outside velocity and stress to be zero. We compute the two integrals in the same manner as above. We present results for three surfaces: the unit sphere, Figure 12 ###reference_###, left; an ellipsoid with semi-axes , Figure 12 ###reference_###, right; and the molecular surface (76 ###reference_###), Figure 13 ###reference_###. For the first two surfaces, the errors are at all grid points within , but for the molecular surface the points are in the first octant only. For the sphere or ellipsoid the maximum velocity magnitude is and the norms are and , respectively. For the molecular surface they are\n and .\n###figure_12### ###figure_13###"
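For reference, the two error norms reported throughout this section can be computed as below; the discrete L^2 norm is taken as the root mean square over the N evaluation points, which is our reading of the (garbled) displayed definition.

```python
import numpy as np

def error_norms(errors):
    # errors: array of pointwise errors e(x_j) at the N grid points
    e = np.abs(np.asarray(errors, dtype=float))
    return e.max(), np.sqrt(np.mean(e**2))   # (maximum error, L^2 error)
```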
34
+ },
35
+ {
36
+ "section_id": "6",
37
+ "parent_section_id": null,
38
+ "section_name": "Proof that the three extrapolation equations can be solved",
39
+ "text": "We prove that the system of three equations (1 ###reference_###) or (8 ###reference_###) can always be solved provided\nTo do this we show that the determinant whose th row is\nis positive, where . For , the case of evaluation on the surface,\nwe see directly that\nIn general we can assume since and \nare even in . First we note from (6 ###reference_###) and (7 ###reference_###) that\nInserting this expression in last entry of the th row we obtain\nThe third column is now a sum where the first term is a multiple of the second column.\nThis first part contributes zero, and the determinant becomes\nNext we subtract row 1 from rows 2 and 3, resulting in the determinant\nWe can assume that ,\nsince we could replace arbitrary with . The new determinant has the form\nwhere\nClearly for . For , and and as\n from above, ,\nas seen from (93 ###reference_###) below. Hereafter \u2032 means .\nTo show it suffices, according to (90 ###reference_###), to show that\n decreases as increases.\nTo verify this we will show that or equivalently\nAt , since . We find after some cancellation that\nThen for , since and .\nFinally , and since , we conclude that\n for , as claimed in (92 ###reference_###)."
40
+ },
41
+ {
42
+ "section_id": "7",
43
+ "parent_section_id": null,
44
+ "section_name": "Conclusions and future work",
45
+ "text": "We have developed a simple, self-contained method for computing surface integrals,\nsuch as single or double layer integrals for harmonic functions or for Stokes flow,\nwhen evaluated at points close to the surface, so that these integrals are nearly singular. The integral kernel is replaced by a regularized form. The modified integral expressions are given in Sect. 2. Asymptotic analysis in Sect. 3 provides a formula for the leading error due to this regularization, uniform for target points near the surface. This formula can\nbe used with extrapolation to obtain high order regularization. The high order allows\nthe modified integrands to be smooth enough so that a conventional quadrature can be used;\nsee Sect. 4. Numerical tests in Sect. 5 verify the accuracy by evaluating known solutions\nat points near the surface.\nThe tests in this work used direct summation so that errors are measured unambiguously. To reduce the high computational cost for large systems, fast summation methods such as treecodes or fast multipole methods can be used. In the present work, the integrals are computed for several values of the regularization parameter to obtain the extrapolated value. Since the contribution from decays rapidly away from the near singularity, the evaluation of the integrals for additional values might need to be done only in a certain neighborhood of the target point. This approach will be investigated in future work.\nSurface integrals considered here are nearly singular when values are needed at grid points near the surface, which was the focus of our tests. The near singularity also occurs when multiple surfaces are close to each other. One such example was presented in our earlier work with Stokes surface integrals [30 ###reference_b30###], where corrections were added to improve the accuracy. The correction formulas are found using asymptotic analysis somewhat similar to the analysis presented here, and they have complicated expressions. Furthermore, the corrections improve the accuracy only to in the nearly singular case. The extrapolation method presented here is more accurate and much easier to use. We therefore expect the current method to work better in multi-surface cases.\nThis method could be used to simulate moving interfaces in Stokes flow. A possible approach is to represent the surface by a level set function. To move the surface, the current velocity can be computed at grid points nearby, then the level set function is updated at these grid points, and finally the new surface is recovered. The method developed in this work is well suited to find the velocity at the grid points, and its simplicity should be an advantage."
46
+ }
47
+ ],
48
+ "appendix": [],
49
+ "tables": {},
50
+ "image_paths": {
51
+ "1": {
52
+ "figure_path": "2309.14169v2_figure_1.png",
53
+ "caption": "Figure 1: The rotated (1,.8,.6) ellipsoid (left) and the (1,.5,.5) spheroid (right).",
54
+ "url": "http://arxiv.org/html/2309.14169v2/x1.png"
55
+ },
56
+ "2": {
57
+ "figure_path": "2309.14169v2_figure_2.png",
58
+ "caption": "Figure 2: Errors for the single layer potential on the unit sphere,\n(left) at grid points within distance h\u210ehitalic_h, computed with the 5th and 7th order regularization, and\n(right) evaluated at distance between h\u210ehitalic_h and 2\u2062h2\u210e2h2 italic_h, without regularization and with the 5th and 7th order methods.",
59
+ "url": "http://arxiv.org/html/2309.14169v2/x2.png"
60
+ },
61
+ "3": {
62
+ "figure_path": "2309.14169v2_figure_3.png",
63
+ "caption": "Figure 3: L2superscript\ud835\udc3f2L^{2}italic_L start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT errors in the single layer potential on the unit sphere, evaluated at distance between 2\u2062h2\u210e2h2 italic_h and 3\u2062h3\u210e3h3 italic_h (left) or 3\u2062h3\u210e3h3 italic_h and 4\u2062h4\u210e4h4 italic_h (right).",
64
+ "url": "http://arxiv.org/html/2309.14169v2/x3.png"
65
+ },
66
+ "4": {
67
+ "figure_path": "2309.14169v2_figure_4.png",
68
+ "caption": "Figure 4: Errors for the double layer potential on the unit sphere,\n(left) at grid points within distance h\u210ehitalic_h, computed with the 5th and 7th order regularization, and\n(right) evaluated at distance between h\u210ehitalic_h and 2\u2062h2\u210e2h2 italic_h, without regularization and with the 5th and 7th order methods.",
69
+ "url": "http://arxiv.org/html/2309.14169v2/x4.png"
70
+ },
71
+ "5": {
72
+ "figure_path": "2309.14169v2_figure_5.png",
73
+ "caption": "Figure 5: L2superscript\ud835\udc3f2L^{2}italic_L start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT errors in the double layer potential on the unit sphere, evaluated at\ndistance between 2\u2062h2\u210e2h2 italic_h and 3\u2062h3\u210e3h3 italic_h (left) or 3\u2062h3\u210e3h3 italic_h and 4\u2062h4\u210e4h4 italic_h (right).",
74
+ "url": "http://arxiv.org/html/2309.14169v2/x5.png"
75
+ },
76
+ "6": {
77
+ "figure_path": "2309.14169v2_figure_6.png",
78
+ "caption": "Figure 6: (Left) Errors for the single and double layers on a rotated ellipsoid at grid points within distance h\u210ehitalic_h, with the 5th order and 7th order methods, \u03b4\ud835\udeff\\deltaitalic_\u03b4 proportional to h\u210ehitalic_h. (Right) Errors for the rotated ellipsoid, at grid points within distance h\u210ehitalic_h in the first octant; 5555th and 7777th order methods with \u03b4\ud835\udeff\\deltaitalic_\u03b4 chosen to correspond to O\u2062(h4)\ud835\udc42superscript\u210e4O(h^{4})italic_O ( italic_h start_POSTSUPERSCRIPT 4 end_POSTSUPERSCRIPT ) accuracy.",
79
+ "url": "http://arxiv.org/html/2309.14169v2/x6.png"
80
+ },
81
+ "7": {
82
+ "figure_path": "2309.14169v2_figure_7.png",
83
+ "caption": "Figure 7: The Cassini oval surface and the four-atom molecular surface.",
84
+ "url": "http://arxiv.org/html/2309.14169v2/x7.png"
85
+ },
86
+ "8": {
87
+ "figure_path": "2309.14169v2_figure_8.png",
88
+ "caption": "Figure 8: Errors for the Cassini oval surface, at grid points within distance h\u210ehitalic_h in the first octant; 5555th and 7777th order method with \u03b4\ud835\udeff\\deltaitalic_\u03b4 proportional to h\u210ehitalic_h or corresponding to O\u2062(h4)\ud835\udc42superscript\u210e4O(h^{4})italic_O ( italic_h start_POSTSUPERSCRIPT 4 end_POSTSUPERSCRIPT ) accuracy.",
89
+ "url": "http://arxiv.org/html/2309.14169v2/x8.png"
90
+ },
91
+ "9": {
92
+ "figure_path": "2309.14169v2_figure_9.png",
93
+ "caption": "Figure 9: Errors for the molecular surface, at grid points within distance h\u210ehitalic_h in the first octant; 5555th and 7777th order method with \u03b4\ud835\udeff\\deltaitalic_\u03b4 proportional to h\u210ehitalic_h or corresponding to O\u2062(h4)\ud835\udc42superscript\u210e4O(h^{4})italic_O ( italic_h start_POSTSUPERSCRIPT 4 end_POSTSUPERSCRIPT ) accuracy.",
94
+ "url": "http://arxiv.org/html/2309.14169v2/x9.png"
95
+ },
96
+ "10": {
97
+ "figure_path": "2309.14169v2_figure_10.png",
98
+ "caption": "Figure 10: Errors for the Stokes single layer on a prolate spheroid, at grid points within distance h\u210ehitalic_h outside the spheroid.",
99
+ "url": "http://arxiv.org/html/2309.14169v2/x10.png"
100
+ },
101
+ "11": {
102
+ "figure_path": "2309.14169v2_figure_11.png",
103
+ "caption": "Figure 11: (Left) Error for the Stokes double layer on the unit sphere, at grid points within distance h\u210ehitalic_h on either side of the sphere. (Right) Errors for the Stokes double layer on a prolate spheroid, at grid points within distance h\u210ehitalic_h on either side of the spheroid.",
104
+ "url": "http://arxiv.org/html/2309.14169v2/x11.png"
105
+ },
106
+ "12": {
107
+ "figure_path": "2309.14169v2_figure_12.png",
108
+ "caption": "Figure 12: (Left) Errors for the Stokes single and double layers on the unit sphere, at grid points within distance h\u210ehitalic_h on either side of the sphere. (Right) Errors for the Stokes single and double layers on an ellipsoid, at grid points within distance h\u210ehitalic_h on either side of the ellipsoid.",
109
+ "url": "http://arxiv.org/html/2309.14169v2/x12.png"
110
+ },
111
+ "13": {
112
+ "figure_path": "2309.14169v2_figure_13.png",
113
+ "caption": "Figure 13: Errors for the Stokes single and double layers on the four-atom molecular surface, at grid points in the first octant within distance h\u210ehitalic_h on either side of the molecule.",
114
+ "url": "http://arxiv.org/html/2309.14169v2/x13.png"
115
+ }
116
+ },
117
+ "validation": true,
118
+ "references": [
119
+ {
120
+ "1": {
121
+ "title": "Highly accurate special quadrature methods for stokesian particle\nsuspensions in confined geometries.",
122
+ "author": "J. Bagge and A.-K. Tornberg.",
123
+ "venue": "Int. J. Numer. Methods Fluids, 93:2175\u20132224, 2021.",
124
+ "url": null
125
+ }
126
+ },
127
+ {
128
+ "2": {
129
+ "title": "Fast Ewald summation for Stokes flow with arbitrary periodicity.",
130
+ "author": "J. Bagge and A.-K. Tornberg.",
131
+ "venue": "J. Comput. Phys., 493:112473, 2023.",
132
+ "url": null
133
+ }
134
+ },
135
+ {
136
+ "3": {
137
+ "title": "A grid-based boundary integral method for elliptic problems in three\ndimensions.",
138
+ "author": "J. T. Beale.",
139
+ "venue": "SIAM J. Numer. Anal., 42(2):599\u2013620, 2004.",
140
+ "url": null
141
+ }
142
+ },
143
+ {
144
+ "4": {
145
+ "title": "Neglecting discretization corrections in regularized singular or\nnearly singular integrals.",
146
+ "author": "J. T. Beale.",
147
+ "venue": "arXiv; Cornell University Library, 2020.",
148
+ "url": null
149
+ }
150
+ },
151
+ {
152
+ "5": {
153
+ "title": "A novel regularization for higher accuracy in the solution of the\n3-dimensional Stokes flow.",
154
+ "author": "J. T. Beale, C. Jones, J. Reale, and Tlupova S.",
155
+ "venue": "Involve, 15:515\u201324, 2022.",
156
+ "url": null
157
+ }
158
+ },
159
+ {
160
+ "6": {
161
+ "title": "A simple method for computing singular or nearly singular integrals\non closed surfaces.",
162
+ "author": "J. T. Beale, W. Ying, and J. R. Wilson.",
163
+ "venue": "Commun. Comput. Phys., 20(3):733\u2013753, 2016.",
164
+ "url": null
165
+ }
166
+ },
167
+ {
168
+ "7": {
169
+ "title": "Hydromechanics of low Reynolds number flow. part 2. singularity\nmethod for Stokes flows.",
170
+ "author": "A. T. Chwang and R. Y.-T. Wu.",
171
+ "venue": "J. Fluid Mech., 67:787\u2013815, 1975.",
172
+ "url": null
173
+ }
174
+ },
175
+ {
176
+ "8": {
177
+ "title": "The method of regularized Stokeslets.",
178
+ "author": "R. Cortez.",
179
+ "venue": "SIAM J. Sci. Comput., 23:1204, 2001.",
180
+ "url": null
181
+ }
182
+ },
183
+ {
184
+ "9": {
185
+ "title": "The method of regularized Stokeslets in three dimensions: Analysis,\nvalidation, and application to helical swimming.",
186
+ "author": "R. Cortez, L. Fauci, and A. Medovikov.",
187
+ "venue": "Phys. Fluids, 17:031504, 2005.",
188
+ "url": null
189
+ }
190
+ },
191
+ {
192
+ "10": {
193
+ "title": "The art of coarse Stokes: Richardson extrapolation improves the\naccuracy and efficiency of the method of regularized stokeslets.",
194
+ "author": "M. T. Gallagher and D. J. Smith.",
195
+ "venue": "Roy. Soc. Open Sci., 8(5):210108, 2021.",
196
+ "url": null
197
+ }
198
+ },
199
+ {
200
+ "11": {
201
+ "title": "A fast algorithm for the evaluation of heat potentials.",
202
+ "author": "L. Greengard and J. Strain.",
203
+ "venue": "Commun. Pure Appl. Math, 43:949\u2013963, 1990.",
204
+ "url": null
205
+ }
206
+ },
207
+ {
208
+ "12": {
209
+ "title": "A higher-order singularity subtraction technique for the\ndiscretization of singular integral operators on curved surfaces.",
210
+ "author": "J. Helsing.",
211
+ "venue": "arXiv; Cornell University Library, 2013.",
212
+ "url": null
213
+ }
214
+ },
215
+ {
216
+ "13": {
217
+ "title": "A dual-space multilevel kernel-splitting framework for discrete and\ncontinuous convolution.",
218
+ "author": "S. Jiang and L. Greengard.",
219
+ "venue": "arXiv; Cornell University Library, 2023.",
220
+ "url": null
221
+ }
222
+ },
223
+ {
224
+ "14": {
225
+ "title": "Fast Ewald summation for free-space Stokes potentials.",
226
+ "author": "L. af Klinteberg, D. S. Shamshirgar, and A.-K. Tornberg.",
227
+ "venue": "Res. Math. Sci., 4:1:1, 2017.",
228
+ "url": null
229
+ }
230
+ },
231
+ {
232
+ "15": {
233
+ "title": "A fast integral equation method for solid particles in viscous flow\nusing quadrature by expansion.",
234
+ "author": "L. af Klinteberg and A.-K. Tornberg.",
235
+ "venue": "J. Comput. Phys., 326:420\u2013445, 2016.",
236
+ "url": null
237
+ }
238
+ },
239
+ {
240
+ "16": {
241
+ "title": "Quadrature by expansion: A new method for the evaluation of layer\npotentials.",
242
+ "author": "A. Kl\u00f6ckner, A. Barnett, L. Greengard, and M. O\u2019Neil.",
243
+ "venue": "J. Comput. Phys., 252:332\u2013349, 2013.",
244
+ "url": null
245
+ }
246
+ },
247
+ {
248
+ "17": {
249
+ "title": "The Implicit Function Theorem, History, Theory, and\nApplications.",
250
+ "author": "S. G. Krantz and H. R. Parks.",
251
+ "venue": "Birkhauser, 2002.",
252
+ "url": null
253
+ }
254
+ },
255
+ {
256
+ "18": {
257
+ "title": "Motion of a rigid particle in Stokes flow: a new second-kind\nboundary-integral equation formulation.",
258
+ "author": "N. Liron and E. Barta.",
259
+ "venue": "J. Fluid Mech., 238:579\u2013598, 1992.",
260
+ "url": null
261
+ }
262
+ },
263
+ {
264
+ "19": {
265
+ "title": "A robust solver for elliptic pdes in 3d complex geometries.",
266
+ "author": "M. Morse, A. Rahimian, and D. Zorin.",
267
+ "venue": "J. Comput. Phys., 442:110511, 2021.",
268
+ "url": null
269
+ }
270
+ },
271
+ {
272
+ "20": {
273
+ "title": "Acoustic and Electromagnetic Equations: Integral Representations\nfor Harmonic Problems.",
274
+ "author": "J.-C. N\u00e9d\u00e9lec.",
275
+ "venue": "Springer-Verlag, New York, 2001.",
276
+ "url": null
277
+ }
278
+ },
279
+ {
280
+ "21": {
281
+ "title": "Corrected trapezoidal rule for near-singular integrals in\naxi-symmetric Stokes flow.",
282
+ "author": "M. Nitsche.",
283
+ "venue": "Adv. Comput. Math., 48:57, 2022.",
284
+ "url": null
285
+ }
286
+ },
287
+ {
288
+ "22": {
289
+ "title": "Harmonic density interpolation methods for high-order evaluation of\nlaplace layer potentials in 2D and 3D.",
290
+ "author": "C. P\u00e9rez-Arancibia, L. M. Faria, and C. Turc.",
291
+ "venue": "J. Comput. Phys., 376:411\u201334, 2019.",
292
+ "url": null
293
+ }
294
+ },
295
+ {
296
+ "23": {
297
+ "title": "Planewave density interpolation methods for 3D Helmholtz boundary\nintegral equations.",
298
+ "author": "C. P\u00e9rez-Arancibia, C. Turc, and L. Faria.",
299
+ "venue": "SIAM J. Sci. Comput., 41:A2088\u2013A2116, 2019.",
300
+ "url": null
301
+ }
302
+ },
303
+ {
304
+ "24": {
305
+ "title": "Boundary Integral and Singularity Methods for Linearized Viscous\nFlow.",
306
+ "author": "C. Pozrikidis.",
307
+ "venue": "Cambridge Univ. Press, 1992.",
308
+ "url": null
309
+ }
310
+ },
311
+ {
312
+ "25": {
313
+ "title": "Fast Ewald summation for electrostatic potentials with arbitrary\nperiodicity.",
314
+ "author": "D. S. Shamshirgar, J. Bagge, and A.-K. Tornberg.",
315
+ "venue": "J. Chem. Phys., 154:164109, 2021.",
316
+ "url": null
317
+ }
318
+ },
319
+ {
320
+ "26": {
321
+ "title": "The spectral Ewald method for singly periodic domains.",
322
+ "author": "D. S. Shamshirgar and A.-K. Tornberg.",
323
+ "venue": "J. Comput. Phys., 347:341\u2013366, 2017.",
324
+ "url": null
325
+ }
326
+ },
327
+ {
328
+ "27": {
329
+ "title": "Radial basis function (RBF)-based parametric models for closed and\nopen curves within the method of regularized stokeslets.",
330
+ "author": "V. Shankar and S. D. Olson.",
331
+ "venue": "Int. J. Numer. Methods Fluids, 79:269\u201389, 2015.",
332
+ "url": null
333
+ }
334
+ },
335
+ {
336
+ "28": {
337
+ "title": "A local target specific quadrature by expansion method for evaluation\nof layer potentials in 3D.",
338
+ "author": "M. Siegel and A.-K. Tornberg.",
339
+ "venue": "J. Comput. Phys., 364:365\u2013392, 2018.",
340
+ "url": null
341
+ }
342
+ },
343
+ {
344
+ "29": {
345
+ "title": "Quadrature by fundamental solutions: kernel-independent layer\npotential evaluation for large collections of simple objects.",
346
+ "author": "D. B. Stein and A. H. Barnett.",
347
+ "venue": "Adv. Comput. Math., 48:60, 2022.",
348
+ "url": null
349
+ }
350
+ },
351
+ {
352
+ "30": {
353
+ "title": "Regularized single and double layer integrals in 3D Stokes flow.",
354
+ "author": "S. Tlupova and J. T. Beale.",
355
+ "venue": "J. Comput. Phys., 386:568\u2013584, 2019.",
356
+ "url": null
357
+ }
358
+ },
359
+ {
360
+ "31": {
361
+ "title": "A kernel-independent treecode algorithm based on barycentric\nLagrange interpolation.",
362
+ "author": "L. Wang, R. Krasny, and S. Tlupova.",
363
+ "venue": "Commun. Comput. Phys., 28(4):1415\u20131436, 2020.",
364
+ "url": null
365
+ }
366
+ },
367
+ {
368
+ "32": {
369
+ "title": "A Course of Modern Analysis.",
370
+ "author": "E. T. Whittaker and G. N. Watson.",
371
+ "venue": "Cambridge Univ. Press, 4th edition, 1927.",
372
+ "url": null
373
+ }
374
+ },
375
+ {
376
+ "33": {
377
+ "title": "On computing smooth, singular and nearly singular integrals on\nimplicitly defined surfaces.",
378
+ "author": "J. R. Wilson.",
379
+ "venue": "PhD thesis, Duke University, 2010.",
380
+ "url": null
381
+ }
382
+ },
383
+ {
384
+ "34": {
385
+ "title": "A kernel independent fast multipole algorithm for radial basis\nfunctions.",
386
+ "author": "L. Ying.",
387
+ "venue": "J. Comput. Phys., 213:451\u201357, 2006.",
388
+ "url": null
389
+ }
390
+ },
391
+ {
392
+ "35": {
393
+ "title": "A high-order 3D boundary integral equation solver for elliptic\nPDEs in smooth domains.",
394
+ "author": "L. Ying, G. Biros, and D. Zorin.",
395
+ "venue": "J. Comput. Phys., 219:247\u2013275, 2006.",
396
+ "url": null
397
+ }
398
+ }
399
+ ],
400
+ "url": "http://arxiv.org/html/2309.14169v2"
401
+ }
20240620/2309.15001v2.json ADDED
@@ -0,0 +1,321 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "title": "Convergence guarantees for forward gradient descent in the linear regression model",
3
+ "abstract": "Renewed interest in the relationship between artificial and biological neural networks motivates the study of gradient-free methods. Considering the linear regression model with random design, we theoretically analyze in this work the biologically motivated (weight-perturbed) forward gradient scheme that is based on random linear combination of the gradient. If denotes the number of parameters and the number of samples, we prove that the mean squared error of this method converges for with rate Compared to\nthe dimension dependence for stochastic gradient descent, an additional factor occurs.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Looking at the past developments, it is apparent that artificial neural networks (ANNs) became more powerful the more they resembled the brain. It is therefore anticipated that the future of AI is even more biologically inspired. As in the past, the bottlenecks towards more biologically inspired learning are computational barriers. For instance, shallow networks only became computationally feasible after the backpropagation algorithm was proposed. Deep neural networks were proposed for a longer time but deep learning only became scalable to large datasets after the introduction of large scale GPU computing. Neuromorphic computing aims to imitate the brain on computer chips, but is currently not fully scalable due to computational barriers.\nThe mathematics of AI has focused on explaining the state-of-the-art performance of modern machine learning methods and empirically observed phenomena such as the good generalization properties of extreme overparametrization. To shape the future of AI, statistical theory needs more emphasis on anticipating future developments. This includes proposing and analyzing biologically motivated methods already at a stage before scalable implementations exist.\nThis work aims to analyze a biologically motivated learning rule building on the renewed interest of the differences and similarities between ANNs and biological neural networks (BNNs) [18 ###reference_b18###, 25 ###reference_b25###, 32 ###reference_b32###] which are rooted in the foundational literature from the 1980s [10 ###reference_b10###, 8 ###reference_b8###]. A key difference between ANNs and BNNs is that ANNs are usually trained based on a version of (stochastic) gradient descent, while this seems prohibitive for BNNs. Indeed, to compute the gradient, knowledge of all parameters in the network is required, but biological networks do not posses the capacity to transport this information to each neuron. This suggests that biological networks cannot directly use the gradient to update their parameters [8 ###reference_b8###, 18 ###reference_b18###, 29 ###reference_b29###].\nThe brain still performs well without gradient descent and can learn tasks with much fewer examples than ANNs. This sparks interest in biologically plausible learning methods that do not require (full) access of the gradient. Such methods are called derivative-free. A simple example of a derivative-free method is to randomly sample in each step a new parameter. If this decreases the loss, one keeps the new parameter and otherwise discards it without updating step. There is a wide variety of derivative-free strategies [7 ###reference_b7###, 15 ###reference_b15###, 28 ###reference_b28###]. Among those, so-called zeroth-order methods use evaluations of the loss function to build a noisy estimate of the gradient. This substitute is then used to replace the gradient in the gradient descent routine [19 ###reference_b19###, 9 ###reference_b9###]. [25 ###reference_b25###] establishes a connection between the Hebbian learning underlying the local learning of the brain (see e.g. Chapter 6 of [29 ###reference_b29###]) and a specific zeroth-order method. A statistical analysis of this zeroth-order scheme is provided in the companion article [26 ###reference_b26###].\nIn this article, we study\n(weight-perturbed) forward gradient descent. 
This method is motivated by biological neural networks [3 ###reference_b3###, 24 ###reference_b24###] and lies between full gradient descent methods and derivative-free methods, as only random linear combination of the gradient are required. The form of the random linear combination is related to zeroth-order methods, see Section 2 ###reference_###. Settings with partial access to the gradient have been studied before. For example, [21 ###reference_b21###] proposes a learning method based on directional derivatives for convex functions. In this work, we specifically derive theoretical guarantees for forward gradient descent in the linear regression model with random design. Theorem 3.1 ###reference_Theorem1### establishes an expression for the expectation. A bound on the mean squared error is provided in Theorem 3.3 ###reference_Theorem3###.\nThe structure of the paper is as follows. In Section 2 ###reference_### we describe the forward gradient descent update rule in the linear regression model. Results are in Section 3 ###reference_### and the corresponding proofs can be found in Section 4 ###reference_###.\nNotation: Vectors are denoted by bold letters and we write for the Euclidean norm. We denote the largest and smallest eigenvalue of a matrix by the respective expressions and . The spectral norm is \nThe condition number of a positive semi-definite matrix is For a random variable we denote the expectation with respect to by The symbol \nstands for an expectation taken with respect to all random variables that are inside that expectation. The (multivariate) normal distribution with mean vector and covariance matrix is denoted by"
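The random-search baseline described above is easy to make concrete. A minimal sketch follows; the step size, the Gaussian proposal distribution, and all names are illustrative choices, not from the paper.

```python
import numpy as np

def random_search(loss, theta, n_steps, step_size=0.1, seed=0):
    """Derivative-free baseline from the text: propose a randomly perturbed
    parameter and keep it only if the loss decreases; otherwise discard it."""
    rng = np.random.default_rng(seed)
    current = loss(theta)
    for _ in range(n_steps):
        proposal = theta + step_size * rng.standard_normal(theta.shape)
        value = loss(proposal)
        if value < current:
            theta, current = proposal, value
    return theta
```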
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Weight-perturbed forward gradient descent",
15
+ "text": "Suppose we want to learn a parameter vector from training data Stochastic gradient descent (SGD) is based on the iterative update rule\nwith some initial value and a loss that depends on the data only through the -th sample .\nFor a standard normal random vector that is independent of all the other randomness, the quantity is called the (weight-perturbed) forward gradient [3 ###reference_b3###, 24 ###reference_b24###]. (Weight-perturbed) forward gradient descent is then given by the update rule\nAssuming that the exogenous noise has unit variance is sufficient. Indeed, generalizing to with variance parameter has the same effect as rescaling the learning rate\nSince for a deterministic -dimensional vector one has taking the expectation of the weight-perturbed forward gradient descent scheme with respect to the exogenous randomness induced by gives\nresembling the SGD dynamic (2.1 ###reference_###). If depends on linearly then also\nWhile in expectation, forward gradient descent is related to SGD, the induced randomness of the -dimensional random vectors induces a large amount of noise. To control the high noise level in the dynamic is the main obstacle in the mathematical analysis. One of the implications is that one has to make small steps by choosing a small learning rate to avoid completely erratic behavior. This particularly effects the first phase of the learning.\nFirst order multivariate Taylor expansion shows that and are close. Therefore, forward gradient descent is related to the zeroth-order method\n[19 ###reference_b19###]. Consequently, forward gradient descent can be viewed as an intermediate step between gradient descent, with full access to the gradient, and zeroth-order methods that are solely based on (randomly) perturbed function evaluations.\n###figure_1### We now comment on the biological plausibility of forward gradient descent. As mentioned in the introduction, it is widely accepted that the brain cannot perform (full) gradient descent. The backpropagation algorithm decomposes the computation of the gradient in a forward pass and a backward pass. The forward pass evaluates the loss for a training sample by sending signal through the network. This is biologically plausible. For a given vector , it is even possible to compute both and in one forward pass, [3 ###reference_b3###, 24 ###reference_b24###, 2 ###reference_b2###]. The construction can be conveniently explained for two variables see Figure 1 ###reference_###. The loss function is implemented by first computing and in parallel. Subsequently, one can infer and For a given vector the update value in the forward gradient descent routine can be computed from and \nIndeed, after computing and in a first step, one can compute and finally For more background on the implementation, see for instance [2 ###reference_b2###].\nIn [25 ###reference_b25###], it has been shown that under appropriate conditions, Hebbian learning of excitatory neurons in biological neural networks leads to a zeroth-order learning rule that has the same structure as (2.4 ###reference_###).\nTo complete this section, we briefly compare forward gradient descent with feedback alignment as both methods are motivated by biological learning and are based on additional randomness. Inspired by biological learning, feedback alignment proposes to replace the learned weights in the backward pass by random weights chosen at the start of the training procedure [17 ###reference_b17###, 18 ###reference_b18###]. 
The so-called direct feedback alignment method goes even further: instead of back-propagating the gradient through all the layers of the network by the chain-rule, layers are updated with the gradient of the output layer multiplied with a fixed random weight matrix [22 ###reference_b22###, 16 ###reference_b16###]. (Direct) feedback alignment causes the forward weights to change in such a way that the true gradient of the network weights and the substitutes used in the update rule become more aligned [17 ###reference_b17###, 22 ###reference_b22###, 18 ###reference_b18###]. The linear model can be viewed as neural network without hidden layers. The absence of layers means that in the backward step, no weight information is transported between different layers. As a consequence, both feedback alignment and direct feedback alignment collapse in the linear model into standard gradient descent. The conclusion is that feedback alignment and forward gradient descent are not comparable. The argument also shows that to unveil nontrivial statistical properties of feedback alignment, one has to go beyond the linear model. We leave the statistical analysis as an open problem."
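For the linear regression setting analyzed in the next section, one update of (2.2) can be written in a few lines. The directional derivative (∇L(θ))ᵀξ is exactly the quantity that the forward pass of Figure 1 produces alongside the loss; here it is written out analytically for the squared loss, using the gradient formula stated in Section 3. This is a sketch, not the authors' released code.

```python
import numpy as np

def forward_gradient_step(theta, x, y, alpha, rng):
    """One step of weight-perturbed forward gradient descent (2.2) for the
    squared loss L(theta) = 0.5 * (y - x @ theta)**2."""
    xi = rng.standard_normal(theta.shape)   # exogenous N(0, I_d) direction
    residual = y - x @ theta
    dir_deriv = -residual * (x @ xi)        # (grad L)^T xi, one forward pass
    return theta - alpha * dir_deriv * xi   # update along the random direction
```

Averaging over ξ, the step reduces to the SGD step, matching the observation around (2.3) that forward gradient descent performs gradient descent in expectation.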
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Convergence rates in the linear regression model",
21
+ "text": "We analyze weight-perturbed forward gradient descent for data generated from the -dimensional linear regression with Gaussian random design. In this framework, we observe i.i.d. pairs satisfying\nwith the unknown -dimensional regression vector, an unknown covariance matrix, and independent noise variables with mean zero and variance one.\nFor the analysis, we consider the squared loss The gradient is given by\nWe now analyze the forward gradient estimator assuming that the initial value can be random or deterministic but should be independent of the data. We employ a similar proving strategy as in the recent analysis of dropout in the linear model in [6 ###reference_b6###]. In particular, we will derive a recursive formula for In contrast to this work, we consider a different form of noise and non-constant learning rates.\nThe first result shows that forward gradient descent does gradient descent in expectation.\nWe have and thus\nThe proof does not exploit the Gaussian design and only requires that is centered and has covariance matrix . The exogenous randomness induced by disappears in the expected values but heavily influences the recursive expressions for the squared expectations.\nConsider forward gradient descent (2.2 ###reference_###). If then\nSince depends on , the fourth moments of the design vectors and the exogenous random vectors play a role in this equation.\nThe risk is the trace of the matrix . Setting\nfor the condition number and building on Theorem 3.2 ###reference_Theorem2###, we can establish the following risk bound for forward gradient descent.\nConsider forward gradient descent (2.2 ###reference_###) and assume that is positive definite. For constant choosing the learning rate\nyields\nAlternatively, the upper bound of Theorem 3.3 ###reference_Theorem3### can be written as\nIn the upper bound, the risk of the initial estimate appears. A realistic scenario is that the entries of and are all of order one. In this case, the inequality shows that the risk of the initial estimate will scale with the number of parameters . Taking (for such that ), Theorem 3.3 ###reference_Theorem3### implies that\nFor and . Since , this means that Moreover, tends faster to zero than as . So, for\nThe rate for is thus This means that forward gradient descent has dimension dependence This is by a factor worse than the minimax rate for the linear regression problem, [31 ###reference_b31###, 12 ###reference_b12###, 20 ###reference_b20###]. In contrast, methods that have access to the gradient can achieve optimal dimension dependence in the rate, [23 ###reference_b23###, 14 ###reference_b14###]. The obtained convergence rate is in line with results showing that for convex optimization problems zeroth-order methods have a higher dimension dependence, [9 ###reference_b9###, 19 ###reference_b19###, 21 ###reference_b21###].\nWe believe that faster convergence rates are obtainable if the same datapoint is assessed several times. This means that each data point is used for several updates of the forward gradient for instance by running multiple epochs. However, in every iteration a new random direction is sampled. We expect that if every data point is used times, one should be able to achieve the convergence rate up to some logarithmic terms. If this is true and if is of the order of one could even recover the minimax rate Using the same datapoints multiple times induces additional dependence among the parameter updates. 
To deal with this dependence is the key challenge to establish the convergence rate .\nAssuming that the covariance matrix is positive definite is standard for linear regression with random design [12 ###reference_b12###, 20 ###reference_b20###, 27 ###reference_b27###].\nFor the decrease of the learning rate is of the order , which is the standard choice [13 ###reference_b13###, 11 ###reference_b11###, 4 ###reference_b4###]. A constant learning rate is used for Ruppert-Polyak averaging in [23 ###reference_b23###, 11 ###reference_b11###].\nFor least squares linear regression, it is possible to achieve (near) optimal convergence with a constant (universal) stepsize [1 ###reference_b1###]. Conditions under which a constant (universal) stepsize in more general settings than linear least squares works or fails are investigated in [14 ###reference_b14###].\n###figure_2### ###figure_3### In a small simulation study, we investigated whether there is a discrepancy between the derived convergence rates and the empirical decay of the risk. For dimensions and , data according to (3.1 ###reference_###) with are generated. On these data, we run ten times weight perturbed forward gradient descent (2.2 ###reference_###), and compare the mean squared errors (MSEs) to one realization of SGD (2.1 ###reference_###). For all simulations of forward gradient descent and SGD, we use the same initialization , drawn from a distribution, and the learning rate specified in (3.4 ###reference_###) with . Thus, only the random perturbation vectors in the forward gradient descent schemes differ across different runs. The outcomes are reported in Figure 2 ###reference_###. For each of the 10+1 simulations, we report on a log-log scale the MSE for the first one million iterations. The upper dashed line gives the derived convergence rate , the middle dashed line is , and the lower dashed line is . The ten paths from the ten forward gradient descent runs are shown in blue. The path from the SGD is displayed in red. We see three regimes. In the first regime, the risk remains nearly constant. For dimension this is true up to the first ten thousand of iterations. Afterwards there is a sudden decrease of the risk. Eventually, for large number of iterations the MSE of forward gradient descent concentrates near the line , while the MSE of SGD concentrates around This suggest that up to the -factor, the derived theory does in fact describe the rate of the MSE. Equation (3.5 ###reference_###) predicts that the rate will occur for . For and for Thus, in terms of orders of magnitude, there is a close agreement between theory and simulations.\nStarting with a good initializer that lies already in the neighborhood of the true parameter, one can avoid the long burn-in time in the beginning. Otherwise, it remains an open problem, whether one can modify the procedure such that also for smaller values of the risk behaves more like\nPython code is available on Github [5 ###reference_b5###]."
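A qualitative re-run of the Figure 2 experiment fits in a short script. The learning-rate formula (3.4) did not survive extraction, so `alpha` below is a hypothetical stand-in (of order 1/d² initially and decaying like 1/k later); everything else follows the setup described above with Σ = I_d and a shared initialization for both methods.

```python
import numpy as np

def compare(d=10, n_steps=1_000_000, seed=1):
    rng = np.random.default_rng(seed)
    theta_star = rng.standard_normal(d)
    theta_fgd = rng.standard_normal(d)     # shared random initialization
    theta_sgd = theta_fgd.copy()
    mse = np.empty((n_steps, 2))
    for k in range(1, n_steps + 1):
        x = rng.standard_normal(d)         # Sigma = I_d Gaussian design
        y = x @ theta_star + rng.standard_normal()
        alpha = 1.0 / (d**2 + k)           # assumed schedule, not the paper's (3.4)
        theta_sgd += alpha * (y - x @ theta_sgd) * x                 # SGD (2.1)
        xi = rng.standard_normal(d)
        theta_fgd += alpha * (y - x @ theta_fgd) * (x @ xi) * xi     # FGD (2.2)
        mse[k - 1] = (np.sum((theta_fgd - theta_star)**2),
                      np.sum((theta_sgd - theta_star)**2))
    return mse   # plot on a log-log scale against d^2*log(d)/k and d/k
```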
22
+ },
23
+ {
24
+ "section_id": "4",
25
+ "parent_section_id": null,
26
+ "section_name": "Proofs",
27
+ "text": "By (3.2 ###reference_###) and the linear regression model , we have\nSince and are jointly independent, we obtain\nCombined with (2.3 ###reference_###), we find\nThe true parameter is deterministic.\nSubtracting on both sides, yields the claimed identity\n\u220e"
28
+ },
29
+ {
30
+ "section_id": "4.1",
31
+ "parent_section_id": "4",
32
+ "section_name": "Proof of Theorem 3.2",
33
+ "text": "If is a -dimensional random vector and is a -dimensional random vector that is independent of then\nBecause and are independent,\nthe -th entry of the matrix is\nSince\nsee for instance the example at the end of Section 2 in [30 ###reference_b30###].\nThus\nBecause of\nand\nthe -th entry of the matrix is\nFor a vector the scalar is the -th entry of the matrix Combined with the previous display, the result follows.\n\u220e\nAs Theorem 3.2 ###reference_Theorem2### only involves one update step, we can simplify the notation by dropping the index and analyzing for one data point and independent With and , we then have to prove that\nSubstituting the update rule (2.2 ###reference_###) in gives by the linearity of the transpose that\nFirst, consider the terms with the minus sign in the above expression. The random vector is independent of all other randomness and hence Moreover, together with (4.2 ###reference_###),\nTaking the transpose and tower rule, we find\nIn a next step, we derive an expression for . Since is independent of we can apply Lemma 4.1 ###reference_Theorem1### to derive\nArguing as for (4.1 ###reference_###) gives and this yields\nBecause has mean zero and variance one and is independent of , we conclude that\nwhere for the last equality we used that is a scalar and that .\nSince is independent of we get by Lemma 4.1 ###reference_Theorem1### that\nSubstituting this in (4.6 ###reference_###) and (4.5 ###reference_###) yields\nCombining (4.3 ###reference_###) with (4.4 ###reference_###) and (4.7 ###reference_###) yields the statement of the theorem.\n\u220e"
34
+ },
35
+ {
36
+ "section_id": "4.2",
37
+ "parent_section_id": "4",
38
+ "section_name": "Proof of Theorem 3.3",
39
+ "text": "For two vectors of the same length, Thus,\n\nTogether with Theorem 3.2 ###reference_Theorem2###, and for square matrices and of the same size, this yields\nIf is an eigenvalue of then is an eigenvalue of . By assumption, and therefore the matrix is positive semi-definite and is the largest eigenvalue.\nFor a positive semi-definite matrix and a vector the min-max theorem states that Using that for a vector it holds that , with in (4.8 ###reference_###) and applying with and , yields\nThe spectral norm of a positive semi-definite matrix is equal to the largest eigenvalue and so Therefore,\nUsing that yields\nRewritten in non-recursive form, we obtain\nwhere we use the convention that the (empty) product over zero terms is given the value \nFor ease of notation define with condition number From the definition of , (3.4 ###reference_###), it follows that Using that for all real numbers it holds that , we get that for all integers\nThe function is monotone decreasing for and and thus,\nUsing (4.10 ###reference_###) and (4.11 ###reference_###) with gives\nUsing (4.10 ###reference_###) and (4.11 ###reference_###) with gives\nObserve that . This gives us that and thus . For all real numbers and thus Therefore,\nFor the function is monotone increasing for Hence,\nSince we can apply this with to find\nCombining (4.9 ###reference_###), (4.12 ###reference_###), (4.13 ###reference_###) and (4.14 ###reference_###) finally gives\nUsing that for now yields the result.\n\u220e"
40
+ }
41
+ ],
42
+ "appendix": [],
43
+ "tables": {},
44
+ "image_paths": {
45
+ "1": {
46
+ "figure_path": "2309.15001v2_figure_1.png",
47
+ "caption": "Figure 1: Computional graphs for computing in a forward pass L\u2062(\ud835\udf3d)=12\u2062(Y\u2212X1\u2062\u03b81\u2212X2\u2062\u03b82)2\ud835\udc3f\ud835\udf3d12superscript\ud835\udc4csubscript\ud835\udc4b1subscript\ud835\udf031subscript\ud835\udc4b2subscript\ud835\udf0322L(\\bm{\\theta})=\\frac{1}{2}(Y-X_{1}\\theta_{1}-X_{2}\\theta_{2})^{2}italic_L ( bold_italic_\u03b8 ) = divide start_ARG 1 end_ARG start_ARG 2 end_ARG ( italic_Y - italic_X start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03b8 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT - italic_X start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT italic_\u03b8 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ) start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT (upper half) and (\u2207L\u2062(\ud835\udf3d))\u22a4\u2062\ud835\udc2fsuperscript\u2207\ud835\udc3f\ud835\udf3dtop\ud835\udc2f(\\nabla L(\\bm{\\theta}))^{\\top}\\mathbf{v}( \u2207 italic_L ( bold_italic_\u03b8 ) ) start_POSTSUPERSCRIPT \u22a4 end_POSTSUPERSCRIPT bold_v (lower half).",
48
+ "url": "http://arxiv.org/html/2309.15001v2/x1.png"
49
+ },
50
+ "2(a)": {
51
+ "figure_path": "2309.15001v2_figure_2(a).png",
52
+ "caption": "(a) d=10\ud835\udc5110d=10italic_d = 10\nFigure 2: Comparison of the MSE of forward gradient descent (blue) and SGD (red) for dimensions d=10\ud835\udc5110d=10italic_d = 10 and d=100.\ud835\udc51100d=100.italic_d = 100 . The upper dashed line is k\u21a6d2\u2062log\u2061(d)/kmaps-to\ud835\udc58superscript\ud835\udc512\ud835\udc51\ud835\udc58k\\mapsto d^{2}\\log(d)/kitalic_k \u21a6 italic_d start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT roman_log ( italic_d ) / italic_k, the middle dashed line is k\u21a6d2/kmaps-to\ud835\udc58superscript\ud835\udc512\ud835\udc58k\\mapsto d^{2}/kitalic_k \u21a6 italic_d start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT / italic_k, and the lower dashed line is k\u21a6d/k.maps-to\ud835\udc58\ud835\udc51\ud835\udc58k\\mapsto d/k.italic_k \u21a6 italic_d / italic_k .",
53
+ "url": "http://arxiv.org/html/2309.15001v2/x2.png"
54
+ },
55
+ "2(b)": {
56
+ "figure_path": "2309.15001v2_figure_2(b).png",
57
+ "caption": "(b) d=100\ud835\udc51100d=100italic_d = 100\nFigure 2: Comparison of the MSE of forward gradient descent (blue) and SGD (red) for dimensions d=10\ud835\udc5110d=10italic_d = 10 and d=100.\ud835\udc51100d=100.italic_d = 100 . The upper dashed line is k\u21a6d2\u2062log\u2061(d)/kmaps-to\ud835\udc58superscript\ud835\udc512\ud835\udc51\ud835\udc58k\\mapsto d^{2}\\log(d)/kitalic_k \u21a6 italic_d start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT roman_log ( italic_d ) / italic_k, the middle dashed line is k\u21a6d2/kmaps-to\ud835\udc58superscript\ud835\udc512\ud835\udc58k\\mapsto d^{2}/kitalic_k \u21a6 italic_d start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT / italic_k, and the lower dashed line is k\u21a6d/k.maps-to\ud835\udc58\ud835\udc51\ud835\udc58k\\mapsto d/k.italic_k \u21a6 italic_d / italic_k .",
58
+ "url": "http://arxiv.org/html/2309.15001v2/x3.png"
59
+ }
60
+ },
61
+ "validation": true,
62
+ "references": [
63
+ {
64
+ "1": {
65
+ "title": "Non-strongly-convex smooth stochastic approximation with convergence\nrate O (1/n).",
66
+ "author": "Bach, F., and Moulines, E.",
67
+ "venue": "Advances in neural information processing systems 26 (2013).",
68
+ "url": null
69
+ }
70
+ },
71
+ {
72
+ "2": {
73
+ "title": "Automatic differentiation in machine learning: a survey.",
74
+ "author": "Baydin, A. G., Pearlmutter, B. A., Radul, A. A., and Siskind, J. M.",
75
+ "venue": "Journal of Machine Learning Research 18, 153 (2018), 1\u201343.",
76
+ "url": null
77
+ }
78
+ },
79
+ {
80
+ "3": {
81
+ "title": "Gradients without backpropagation.",
82
+ "author": "Baydin, A. G., Pearlmutter, B. A., Syme, D., Wood, F., and Torr, P.",
83
+ "venue": "arXiv preprint arXiv:2202.08587 (2022).",
84
+ "url": null
85
+ }
86
+ },
87
+ {
88
+ "4": {
89
+ "title": "Adaptive algorithms and stochastic approximations, vol. 22 of\nApplications of Mathematics (New York).",
90
+ "author": "Benveniste, A., M\u00e9tivier, M., and Priouret, P.",
91
+ "venue": "Springer-Verlag, Berlin, 1990.",
92
+ "url": null
93
+ }
94
+ },
95
+ {
96
+ "5": {
97
+ "title": "Simulation code: Convergence guarantees for forward gradient descent\nin the linear regression model.",
98
+ "author": "Bos, T., and Schmidt-Hieber, J.",
99
+ "venue": "https://github.com/Bostjm/SimulationCodeForwardGradient, Jan.\n2024.",
100
+ "url": null
101
+ }
102
+ },
103
+ {
104
+ "6": {
105
+ "title": "Dropout Regularization Versus -penalization in the linear\nmodel.",
106
+ "author": "Clara, G., Langer, S., and Schmidt-Hieber, J.",
107
+ "venue": "arXiv e-prints (2023), arXiv:2306.10529.",
108
+ "url": null
109
+ }
110
+ },
111
+ {
112
+ "7": {
113
+ "title": "Introduction to derivative-free optimization, vol. 8 of MPS/SIAM Series on Optimization.",
114
+ "author": "Conn, A. R., Scheinberg, K., and Vicente, L. N.",
115
+ "venue": "Society for Industrial and Applied Mathematics (SIAM), Philadelphia,\nPA; Mathematical Programming Society (MPS), Philadelphia, PA, 2009.",
116
+ "url": null
117
+ }
118
+ },
119
+ {
120
+ "8": {
121
+ "title": "The recent excitement about neural networks.",
122
+ "author": "Crick, F.",
123
+ "venue": "Nature 337 (1989), 129\u2013132.",
124
+ "url": null
125
+ }
126
+ },
127
+ {
128
+ "9": {
129
+ "title": "Optimal rates for zero-order convex optimization: the power of two\nfunction evaluations.",
130
+ "author": "Duchi, J. C., Jordan, M. I., Wainwright, M. J., and Wibisono, A.",
131
+ "venue": "IEEE Trans. Inform. Theory 61, 5 (2015), 2788\u20132806.",
132
+ "url": null
133
+ }
134
+ },
135
+ {
136
+ "10": {
137
+ "title": "Competitive learning: From interactive activation to adaptive\nresonance.",
138
+ "author": "Grossberg, S.",
139
+ "venue": "Cognitive Science 11, 1 (1987), 23\u201363.",
140
+ "url": null
141
+ }
142
+ },
143
+ {
144
+ "11": {
145
+ "title": "On the averaged stochastic approximation for linear regression.",
146
+ "author": "Gy\u00f6rfi, L., and Walk, H.",
147
+ "venue": "SIAM J. Control Optim. 34, 1 (1996), 31\u201361.",
148
+ "url": null
149
+ }
150
+ },
151
+ {
152
+ "12": {
153
+ "title": "Random design analysis of ridge regression.",
154
+ "author": "Hsu, D., Kakade, S. M., and Zhang, T.",
155
+ "venue": "Found. Comput. Math. 14, 3 (2014), 569\u2013600.",
156
+ "url": null
157
+ }
158
+ },
159
+ {
160
+ "13": {
161
+ "title": "Stochastic approximation and recursive algorithms and\napplications, second ed., vol. 35 of Applications of Mathematics (New\nYork).",
162
+ "author": "Kushner, H. J., and Yin, G. G.",
163
+ "venue": "Springer-Verlag, New York, 2003.",
164
+ "url": null
165
+ }
166
+ },
167
+ {
168
+ "14": {
169
+ "title": "Linear stochastic approximation: How far does constant step-size and\niterate averaging go?",
170
+ "author": "Lakshminarayanan, C., and Szepesvari, C.",
171
+ "venue": "In Proceedings of the Twenty-First International Conference on\nArtificial Intelligence and Statistics (09\u201311 Apr 2018), A. Storkey and\nF. Perez-Cruz, Eds., vol. 84 of Proceedings of Machine Learning\nResearch, PMLR, pp. 1347\u20131355.",
172
+ "url": null
173
+ }
174
+ },
175
+ {
176
+ "15": {
177
+ "title": "Derivative-free optimization methods.",
178
+ "author": "Larson, J., Menickelly, M., and Wild, S. M.",
179
+ "venue": "Acta Numer. 28 (2019), 287\u2013404.",
180
+ "url": null
181
+ }
182
+ },
183
+ {
184
+ "16": {
185
+ "title": "Direct feedback alignment scales to modern deep learning tasks and\narchitectures.",
186
+ "author": "Launay, J., Poli, I., Boniface, F., and Krzakala, F.",
187
+ "venue": "In Advances in Neural Information Processing Systems (2020),\nH. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, Eds., vol. 33,\nCurran Associates, Inc., pp. 9346\u20139360.",
188
+ "url": null
189
+ }
190
+ },
191
+ {
192
+ "17": {
193
+ "title": "Random synaptic feedback weights support error backpropagation for\ndeep learning.",
194
+ "author": "Lillicrap, T. P., Cownden, D., Tweed, D. B., and Akerman, C. J.",
195
+ "venue": "Nature communications 7, 1 (2016), 13276.",
196
+ "url": null
197
+ }
198
+ },
199
+ {
200
+ "18": {
201
+ "title": "Backpropagation and the brain.",
202
+ "author": "Lillicrap, T. P., Santoro, A., Marris, L., Akerman, C. J., and Hinton, G.",
203
+ "venue": "Nature Reviews Neuroscience 21 (2020), 335\u2013346.",
204
+ "url": null
205
+ }
206
+ },
207
+ {
208
+ "19": {
209
+ "title": "A primer on zeroth-order optimization in signal processing and\nmachine learning: Principals, recent advances, and applications.",
210
+ "author": "Liu, S., Chen, P.-Y., Kailkhura, B., Zhang, G., Hero III, A. O., and\nVarshney, P. K.",
211
+ "venue": "IEEE Signal Processing Magazine 37, 5 (2020), 43\u201354.",
212
+ "url": null
213
+ }
214
+ },
215
+ {
216
+ "20": {
217
+ "title": "Exact minimax risk for linear least squares, and the lower tail of\nsample covariance matrices.",
218
+ "author": "Mourtada, J.",
219
+ "venue": "Ann. Statist. 50, 4 (2022), 2157\u20132178.",
220
+ "url": null
221
+ }
222
+ },
223
+ {
224
+ "21": {
225
+ "title": "Random gradient-free minimization of convex functions.",
226
+ "author": "Nesterov, Y., and Spokoiny, V.",
227
+ "venue": "Found. Comput. Math. 17, 2 (2017), 527\u2013566.",
228
+ "url": null
229
+ }
230
+ },
231
+ {
232
+ "22": {
233
+ "title": "Direct feedback alignment provides learning in deep neural networks.",
234
+ "author": "N\u00f8kland, A.",
235
+ "venue": "In Advances in Neural Information Processing Systems (2016),\nD. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett, Eds., vol. 29,\nCurran Associates, Inc.",
236
+ "url": null
237
+ }
238
+ },
239
+ {
240
+ "23": {
241
+ "title": "Acceleration of stochastic approximation by averaging.",
242
+ "author": "Polyak, B. T., and Juditsky, A. B.",
243
+ "venue": "SIAM J. Control Optim. 30, 4 (1992), 838\u2013855.",
244
+ "url": null
245
+ }
246
+ },
247
+ {
248
+ "24": {
249
+ "title": "Scaling forward gradient with local losses.",
250
+ "author": "Ren, M., Kornblith, S., Liao, R., and Hinton, G.",
251
+ "venue": "arXiv preprint arXiv:2210.03310 (2022).",
252
+ "url": null
253
+ }
254
+ },
255
+ {
256
+ "25": {
257
+ "title": "Interpreting learning in biological neural networks as zero-order\noptimization method.",
258
+ "author": "Schmidt-Hieber, J.",
259
+ "venue": "arXiv preprint arXiv:2301.11777 (2023).",
260
+ "url": null
261
+ }
262
+ },
263
+ {
264
+ "26": {
265
+ "title": "Hebbian learning inspired estimation of the linear regression\nparameters from queries.",
266
+ "author": "Schmidt-Hieber, J., and Koolen, W.",
267
+ "venue": "arXiv preprint (2023).",
268
+ "url": null
269
+ }
270
+ },
271
+ {
272
+ "27": {
273
+ "title": "The Gauss-Markov theorem and random regressors.",
274
+ "author": "Shaffer, J. P.",
275
+ "venue": "Amer. Statist. 45, 4 (1991), 269\u2013273.",
276
+ "url": null
277
+ }
278
+ },
279
+ {
280
+ "28": {
281
+ "title": "Introduction to stochastic search and optimization.",
282
+ "author": "Spall, J. C.",
283
+ "venue": "Wiley-Interscience Series in Discrete Mathematics and Optimization.\nWiley-Interscience [John Wiley & Sons], Hoboken, NJ, 2003.",
284
+ "url": null
285
+ }
286
+ },
287
+ {
288
+ "29": {
289
+ "title": "Fundamentals of Computational Neuroscience: Third Edition.",
290
+ "author": "Trappenberg, T. P.",
291
+ "venue": "Oxford University Press, 12 2022.",
292
+ "url": null
293
+ }
294
+ },
295
+ {
296
+ "30": {
297
+ "title": "On the central moments of the multidimensional Gaussian\ndistribution.",
298
+ "author": "Triantafyllopoulos, K.",
299
+ "venue": "Math. Sci. 28, 2 (2003), 125\u2013128.",
300
+ "url": null
301
+ }
302
+ },
303
+ {
304
+ "31": {
305
+ "title": "Optimal rates of aggregation.",
306
+ "author": "Tsybakov, A. B.",
307
+ "venue": "In Learning Theory and Kernel Machines (Berlin, Heidelberg,\n2003), B. Sch\u00f6lkopf and M. K. Warmuth, Eds., Springer Berlin Heidelberg,\npp. 303\u2013313.",
308
+ "url": null
309
+ }
310
+ },
311
+ {
312
+ "32": {
313
+ "title": "An approximation of the error backpropagation algorithm in a\npredictive coding network with local Hebbian synaptic plasticity.",
314
+ "author": "Whittington, J. C. R., and Bogacz, R.",
315
+ "venue": "Neural Comput. 29, 5 (2017), 1229\u20131262.",
316
+ "url": null
317
+ }
318
+ }
319
+ ],
320
+ "url": "http://arxiv.org/html/2309.15001v2"
321
+ }
20240620/2309.16792v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240620/2310.00905v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240620/2310.04741v6.json ADDED
@@ -0,0 +1,144 @@
1
+ {
2
+ "title": "Formatting Instructions for CoLLAs 2024 Conference Submissions",
3
+ "abstract": "Concise and insightful abstract for the paper. Please follow the instructions below for structuring your submission to CoLLAs 2024. This template follows closely the template for ICLR 2022 submissions, with some minor modifications. A conference submissions to CoLLAs should aim for 9 pages, with a maximum of 10 and no minimum number of pages required.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Submission of conference papers to CoLLAs 2024",
9
+ "text": "CoLLAs requires electronic submissions, processed by\nhttps://openreview.net/ ###reference_openreview.net/###. See CoLLAs\u2019 website for more instructions.\nIf your paper is ultimately accepted, the statement \\collasfinalcopy should be inserted to adjust the\nformat to the camera ready requirements.\nThe format for the submissions is a variant of the ICLR format (which is inline with the NeurIPS format).\nPlease read carefully the instructions below, and follow them\nfaithfully."
10
+ },
11
+ {
12
+ "section_id": "1.1",
13
+ "parent_section_id": "1",
14
+ "section_name": "Style",
15
+ "text": "Papers to be submitted to CoLLAs 2024 must be prepared according to the\ninstructions presented here.\nAuthors are required to use the CoLLAs LaTeX style files obtainable at the\nCoLLAs website (www.lifelong-ml.cc ###reference_long-ml.cc###). Please make sure you use the current files and\nnot previous versions. Tweaking the style files may be grounds for rejection."
16
+ },
17
+ {
18
+ "section_id": "1.2",
19
+ "parent_section_id": "1",
20
+ "section_name": "Retrieval of style files",
21
+ "text": "The style files for CoLLAs and other conference information are available online at:\nhttp://www.lifelong-ml.cc/ ###reference_ww.lifelong-ml.cc/###\nThe file collas2024_conference.pdf contains these\ninstructions and illustrates the\nvarious formatting requirements your CoLLAs paper must satisfy.\nSubmissions must be made using LaTeX and the style files\ncollas2024_conference.sty and collas2024_conference.bst (to be used with LaTeX2e). The file\ncollas2024_conference.tex may be used as a \u201cshell\u201d for writing your paper. All you\nhave to do is replace the author, title, abstract, and text of the paper with\nyour own.\nThe formatting instructions contained in these style files are summarized in\nsections 2 ###reference_###, 3 ###reference_###, and 4 ###reference_### below."
22
+ },
23
+ {
24
+ "section_id": "2",
25
+ "parent_section_id": null,
26
+ "section_name": "General formatting instructions",
27
+ "text": "The text must be confined within a rectangle 6.5 inches wide and\n9 inches (54 picas) long.\nUse 10 point type with a vertical spacing of 11 points. Times New Roman is the\npreferred typeface throughout. Paragraphs are separated by 1/2 line space,\nwith no indentation.\nPaper title is 17 point, in small caps and left-aligned.\nAll pages should start at 1 inch (6 picas) from the top of the page.\nAuthors\u2019 names are\nset in boldface, and each name is placed above its corresponding\naddress. The lead author(s)\u2019s name is to be listed first, and\nthe co-authors\u2019 names are set to follow. Authors sharing the\nsame address can be on the same line.\nPlease pay special attention to the instructions in section 4 ###reference_###\nregarding figures, tables, acknowledgments, and references.\nThere will be a strict upper limit of 9 pages for the main text of the initial submission, with unlimited additional pages for citations."
28
+ },
29
+ {
30
+ "section_id": "3",
31
+ "parent_section_id": null,
32
+ "section_name": "Headings: first level",
33
+ "text": "First level headings are in small caps,\nflush left and in point size 12. One line space before the first level\nheading and 1/2 line space after the first level heading."
34
+ },
35
+ {
36
+ "section_id": "3.1",
37
+ "parent_section_id": "3",
38
+ "section_name": "Headings: second level",
39
+ "text": "Second level headings are in small caps,\nflush left and in point size 10. One line space before the second level\nheading and 1/2 line space after the second level heading."
40
+ },
41
+ {
42
+ "section_id": "3.1.1",
43
+ "parent_section_id": "3.1",
44
+ "section_name": "3.1.1 Headings: third level",
45
+ "text": "Third level headings are in small caps,\nflush left and in point size 10. One line space before the third level\nheading and 1/2 line space after the third level heading."
46
+ },
47
+ {
48
+ "section_id": "4",
49
+ "parent_section_id": null,
50
+ "section_name": "Citations, figures, tables, references",
51
+ "text": "These instructions apply to everyone, regardless of the formatter being used."
52
+ },
53
+ {
54
+ "section_id": "4.1",
55
+ "parent_section_id": "4",
56
+ "section_name": "Citations within the text",
57
+ "text": "Citations within the text should be based on the natbib package\nand include the authors\u2019 last names and year (with the \u201cet al.\u201d construct\nfor more than two authors). When the authors or the publication are\nincluded in the sentence, the citation should not be in parenthesis using \\citet{} (as\nin \u201cSee Hinton06 for more information.\u201d). Otherwise, the citation\nshould be in parenthesis using \\citep{} (as in \u201cDeep learning shows promise to make progress\ntowards AI (Bengio+chapter2007).\u201d).\nThe corresponding references are to be listed in alphabetical order of\nauthors, in the References section. As to the format of the\nreferences themselves, any style is acceptable as long as it is used\nconsistently."
58
+ },
59
+ {
60
+ "section_id": "4.2",
61
+ "parent_section_id": "4",
62
+ "section_name": "Footnotes",
63
+ "text": "Indicate footnotes with a number111Sample of the first footnote in the\ntext. Place the footnotes at the bottom of the page on which they appear.\nPrecede the footnote with a horizontal rule of 2 inches\n(12 picas).222Sample of the second footnote"
64
+ },
65
+ {
66
+ "section_id": "4.3",
67
+ "parent_section_id": "4",
68
+ "section_name": "Figures",
69
+ "text": "All artwork must be neat, clean, and legible. Lines should be dark\nenough for purposes of reproduction; art work should not be\nhand-drawn. The figure number and caption always appear after the\nfigure. Place one line space before the figure caption, and one line\nspace after the figure. The figure caption is lower case (except for\nfirst word and proper nouns); figures are numbered consecutively.\nMake sure the figure caption does not get separated from the figure.\nLeave sufficient space to avoid splitting the figure and figure caption.\nYou may use color figures.\nHowever, it is best for the\nfigure captions and the paper body to make sense if the paper is printed\neither in black/white or in color.\n###figure_1###"
70
+ },
71
+ {
72
+ "section_id": "4.4",
73
+ "parent_section_id": "4",
74
+ "section_name": "Tables",
75
+ "text": "All tables must be centered, neat, clean and legible. Do not use hand-drawn\ntables. The table number and title always appear before the table. See\nTable 1 ###reference_###.\nPlace one line space before the table title, one line space after the table\ntitle, and one line space after the table. The table title must be lower case\n(except for first word and proper nouns); tables are numbered consecutively."
76
+ },
77
+ {
78
+ "section_id": "5",
79
+ "parent_section_id": null,
80
+ "section_name": "Default Notation",
81
+ "text": "In an attempt to encourage standardized notation, we have included the\nnotation file from the textbook, Deep Learning\ngoodfellow2016deep available at\nhttps://github.com/goodfeli/dlbook_notation/ ###reference_n/###. Use of this style\nis not required and can be disabled by commenting out\nmath_commands.tex.\nNumbers and Arrays\nSets and Graphs\nIndexing\nCalculus\nProbability and Information Theory\nFunctions"
82
+ },
83
+ {
84
+ "section_id": "6",
85
+ "parent_section_id": null,
86
+ "section_name": "Final instructions",
87
+ "text": "Do not change any aspects of the formatting parameters in the style files.\nIn particular, do not modify the width or length of the rectangle the text\nshould fit into, and do not change font sizes (except perhaps in the\nReferences section; see below). Please note that pages should be\nnumbered."
88
+ },
89
+ {
90
+ "section_id": "7",
91
+ "parent_section_id": null,
92
+ "section_name": "Preparing PostScript or PDF files",
93
+ "text": "Please prepare PostScript or PDF files with paper size \u201cUS Letter\u201d, and\nnot, for example, \u201cA4\u201d. The -t\nletter option on dvips will produce US Letter files.\nConsider directly generating PDF files using pdflatex\n(especially if you are a MiKTeX user).\nPDF figures must be substituted for EPS figures, however.\nOtherwise, please generate your PostScript and PDF files with the following commands:"
94
+ },
95
+ {
96
+ "section_id": "7.1",
97
+ "parent_section_id": "7",
98
+ "section_name": "Margins in LaTeX",
99
+ "text": "Most of the margin problems come from figures positioned by hand using\n\\special or other commands. We suggest using the command\n\\includegraphics\nfrom the graphicx package. Always specify the figure width as a multiple of\nthe line width as in the example below using .eps graphics\nor\nfor .pdf graphics.\nSee section 4.4 in the graphics bundle documentation \n(http://www.ctan.org/tex-archive/macros/latex/required/graphics/grfguide.ps ###reference_ex/required/graphics/grfguide.ps###)\nA number of width problems arise when LaTeX cannot properly hyphenate a\nline. Please give LaTeX hyphenation hints using the \\- command."
100
+ },
101
+ {
102
+ "section_id": "7.1.x",
103
+ "parent_section_id": "7.1",
104
+ "section_name": "Author Contributions",
105
+ "text": "If you\u2019d like to, you may include a section for author contributions as is done\nin many journals. This is optional and at the discretion of the authors.\nThis should be only done in the camera ready version of the manuscript, not in the anonymized version submitted for review!"
106
+ },
107
+ {
108
+ "section_id": "7.1.x",
109
+ "parent_section_id": "7.1",
110
+ "section_name": "Acknowledgments",
111
+ "text": "Use unnumbered third level headings for the acknowledgments. All\nacknowledgments, including those to funding agencies, go at the end of the paper.\nThis should be only done in the camera ready version of the manuscript, not in the anonymized version submitted for review!"
112
+ },
113
+ {
114
+ "section_id": "8",
115
+ "parent_section_id": null,
116
+ "section_name": "Rebuttal Modifications",
117
+ "text": "When making changes to your paper during the rebuttal period, please use the EasyReview package to indicate what text has been \\addadded, \\removeremoved or \\replacereplacedreplaced. Other potentially useful commands are referenced here ###reference_ex/contrib/easyreview/doc/easyReview.pdf###."
118
+ }
119
+ ],
120
+ "appendix": [
121
+ {
122
+ "section_id": "Appendix 1",
123
+ "parent_section_id": null,
124
+ "section_name": "Appendix A Appendix",
125
+ "text": "You may include other additional sections here."
126
+ }
127
+ ],
128
+ "tables": {
129
+ "1": {
130
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Sample table title</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T1.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.1.1\">PART</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T1.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.2.1\">DESCRIPTION</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.2.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.1.2.1.1\">Dendrite</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.1.2.1.2\">Input terminal</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.3.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.3.2.1\">Axon</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.3.2.2\">Output terminal</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.4.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.4.3.1\">Soma</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.4.3.2\">Cell body (contains cell nucleus)</td>\n</tr>\n</tbody>\n</table>\n</figure>",
131
+ "capture": "Table 1: Sample table title"
132
+ }
133
+ },
134
+ "image_paths": {
135
+ "1": {
136
+ "figure_path": "2310.04741v6_figure_1.png",
137
+ "caption": "Figure 1: Sample figure caption.",
138
+ "url": "http://arxiv.org/html/2310.04741v6/neuron.jpg"
139
+ }
140
+ },
141
+ "validation": true,
142
+ "references": [],
143
+ "url": "http://arxiv.org/html/2310.04741v6"
144
+ }
20240620/2310.08745v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240620/2310.13164v6.json ADDED
@@ -0,0 +1,591 @@
1
+ {
2
+ "title": "Almost Equivariance via Lie Algebra Convolutions",
3
+ "abstract": "Recently, equivariant neural networks have become an important topic of research in machine learning.\nHowever, imbuing an architecture with a specific group equivariance imposes a strong prior on the types of data\ntransformations that the model expects to see. While strictly-equivariant models enforce symmetries,\nreal-world data does not always conform to such strict equivariances.\nIn such cases, the prior of strict equivariance can actually prove too strong and cause models\nto underfit on real-world data. Therefore, in this work we study a closely related topic,\nthat of almost equivariance. We provide a definition of almost equivariance that\ndiffers from those extant in the current literature\nand give a practical method for encoding almost equivariance in models by appealing to the Lie algebra of a Lie group.\nSpecifically, we define Lie algebra convolutions and demonstrate that they offer several benefits over Lie group convolutions,\nincluding being well-defined for non-compact Lie groups having non-surjective exponential map.\nFrom there, we pivot to the realm of theory and demonstrate parallel connections between the notions of equivariance and isometry and\nthose of almost equivariance and almost isometry. Finally, we demonstrate the validity of our approach by\nbenchmarking against datasets in fully equivariant and almost equivariant settings.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "The past few years have shown a surge in interest in equivariant model architectures,\nthose that explicitly impose symmetry with respect to a particular group acting on the\nmodel inputs. While data augmentation strategies have been proposed to make generic models exhibit greater symmetry\nwithout the need for equivariant model architectures, much work has demonstrated that this is an inefficient\napproach at best [15 ###reference_b15###, 26 ###reference_b26###, 45 ###reference_b45###]. As such, developing methods for building neural network layers\nthat are equivariant to general group actions is of great importance.\nMore recently, almost equivariance, also referred to variously as approximate, soft, or\npartial equivariance, has become a rich topic of study. The idea is that the symmetry\nconstraints imposed by full equivariance are not always completely conformed to in real-world systems.\nFor example, the introduction of external forces and certain boundary conditions\ninto models of turbulence and fluid flow break many theoretical symmetry constraints. Accurately\nmodeling real-world physical systems therefore requires building model architectures\nthat have a built-in notion of symmetry but that are not so constrained by it as to be incapable\nof fully modeling the underlying system dynamics."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related Work",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Strict Equivariance",
21
+ "text": "Much of the work in developing strictly-equivariant model architectures began with the\nseminal paper of Cohen and Welling [4 ###reference_b4###], which introduced the group equivariant convolutional\nneural network layer. Kondor and Trivedi [25 ###reference_b25###] generalized this notion of equivariance and convolution\nto the action of an arbitrary compact group. Further generalizations followed,\nwith the creation of convolutions [10 ###reference_b10###] and efficient MLP layers [11 ###reference_b11###]\nequivariant to arbitrary Lie groups. Other neural network types have also been studied\nthrough the lens of equivariance, for example, graph neural networks [39 ###reference_b39###], [1 ###reference_b1###],\ntransformers [20 ###reference_b20###], and graph transformers [30 ###reference_b30###].\nCohen et al. [5 ###reference_b5###] consolidated much of this\nwork into a general framework via which equivariant layers can be understood as maps between spaces\nof sections of vector bundles. Similar to our work, Dehmamy et al. [6 ###reference_b6###] devised a convolutional layer on the\nLie algebra designed to approximate group convolutional layers.\nHowever, their objective was to make the layer as close to equivariant as possible whereas our layer is designed to\nbe flexible so as to be capable of modelling almost equivariances.\nFinally, rather than devising a new equivariant\nlayer type, Gruver et al. [16 ###reference_b16###] developed a method based on the Lie derivative which\ncan be used to detect the degree of equivariance learned by an arbitrary model architecture."
22
+ },
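As a concrete toy instance of the group convolutions surveyed above (a minimal sketch for the group C4 of 90-degree rotations, our illustration rather than code from any cited paper), a lifting layer correlates the input with every rotated copy of one kernel; rotating the input then rotates each output map and cyclically permutes the group axis:

    import torch
    import torch.nn.functional as F

    def c4_lifting_conv(x, weight):
        # One output map per 90-degree rotation of the kernel (the group C4).
        outs = [F.conv2d(x, torch.rot90(weight, r, dims=(-2, -1))) for r in range(4)]
        return torch.stack(outs, dim=2)  # shape (B, C_out, 4, H', W')

    torch.manual_seed(0)
    x = torch.randn(2, 3, 9, 9)
    w = torch.randn(8, 3, 3, 3)
    y_rot = c4_lifting_conv(torch.rot90(x, 1, dims=(-2, -1)), w)
    rot_y = torch.rot90(c4_lifting_conv(x, w), 1, dims=(-2, -1))
    # Equivariance: rotate-then-convolve equals convolve-then-rotate,
    # up to a cyclic shift along the group axis.
    print(torch.allclose(y_rot, rot_y.roll(1, dims=2), atol=1e-4))  # True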
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Almost Equivariance",
27
+ "text": "One of the first works on almost equivariance was Finzi et al. [12 ###reference_b12###], which introduced\nthe Residual Pathway Prior. Their idea is to construct a neural network\nlayer, , that is the sum of two components, and , where is a strictly equivariant\nlayer and is a more flexible, non-equivariant layer\nFurthermore, they place priors on the sizes of and such that a model trained using\nmaximum a posteriori estimation is incentivized to favor the strict equivariance of \nwhile relying on only to explain the difference between and the fully symmetric\nprediction function determined by . The priors on and can be defined\nso as to weight the layer towards favoring the use of .\nThe approach taken in Wang et al. [44 ###reference_b44###] is somewhat different. They give an\nexplicit definition of approximate equivariance, then model it via a\nrelaxed group convolutional layer wherein the single kernel, , of a strictly\nequivariant group convolutional layer is replaced with a set of kernels, .\nThis introduces a specific, symmetry-breaking dependence on a pair of group elements, ,\ni.e.\nTheir full relaxed group convolution operation is then defined as follows\nRomero and Lohit [38 ###reference_b38###] take an altogether different approach.\nThey introduce a model, which they call the Partial G-CNN,\nand show how to train it to learn layer-wise levels of equivariance from data.\nA key differentiator in their method is the learning of a probability distribution\nover group elements at each group convolutional layer, allowing them to sample group\nelements during group convolutions.\nMore specifically, they define a -partially equivariant map, , as one\nthat satisfies\nwhere is a subset, but not necessarily a subgroup, of . They then define\na partial group convolution from to as\nfor , where is a probability distribution on and is the Haar measure.\nIn order to learn the convolution for one-dimensional, continuous groups, they parameterize\n by applying a reparameterization trick to the Lie algebra of . This allows them to\ndefine a distribution which is uniform over a connected set of group elements,\n, but zero otherwise. Thus they define a uniform\ndistribution, , with learnable \nand map it to the group via the exponential map, .\nvan der Ouderaa et al. [42 ###reference_b42###] relax equivariance constraints by defining a non-stationary group convolution\nThey parameterize the kernel by choosing a basis for the Lie algebra, , of and defining elements,\n, as exponential maps of Lie algebra elements, i.e.\nwhere and is a basis for .\nIn particular, they achieve fine-grained control over the kernel representation by choosing a basis of\nRandom Fourier Features (RFF) for .\nFinally, Petrache and Trivedi [36 ###reference_b36###] provide a take on approximate equivariance rooted in\nstatistical learning theory and provide generalization and error bounds on approximately equivariant architectures."
28
+ },
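A minimal sketch of the residual pathway idea described above, not Finzi et al.'s implementation: the constrained path A is projected onto linear maps that commute with a toy symmetry (coordinate reversal stands in for a real group action), B is left unconstrained, and a tighter Gaussian prior on B makes MAP training prefer the equivariant path. The symmetry and the prior variances var_a and var_b are placeholders:

    import torch
    import torch.nn as nn

    class ResidualPathwayLinear(nn.Module):
        # h(x) = A x + B x: an equivariance-constrained pathway plus a free one.
        def __init__(self, dim):
            super().__init__()
            self.a = nn.Linear(dim, dim, bias=False)  # constrained pathway
            self.b = nn.Linear(dim, dim, bias=False)  # flexible pathway

        def forward(self, x):
            w = self.a.weight
            a_sym = 0.5 * (w + torch.flip(w, (0, 1)))  # project onto maps that
            return x @ a_sym.T + self.b(x)             # commute with reversal

        def prior_penalty(self, var_a=1.0, var_b=1e-2):
            # The tighter prior on B (smaller variance) penalizes departures
            # from equivariance more heavily than the equivariant weights.
            return (self.a.weight.pow(2).sum() / (2 * var_a)
                    + self.b.weight.pow(2).sum() / (2 * var_b))

    layer = ResidualPathwayLinear(8)
    x = torch.randn(4, 8)
    loss = layer(x).pow(2).mean() + layer.prior_penalty()  # MAP-style objective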
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "Notions of Approximate Equivariance",
33
+ "text": "It\u2019s worthwhile to note that there are multiple notions of approximate, partial, and soft equivariance, only some of which we explicitly\naddress in this work.\nThe first type occurs when we only have partially observed data, for example, a single pose of a 3D object captured in a 2D image or an\nobject occlusion in computer vision. Wang et al. [43 ###reference_b43###] refer to this as extrinsic equivariance in that applying a group transformation\nto an in-distribution data point transforms it to an out-of-distribution data point. This type of partial equivariance is often addressed via data augmentation.\nWe do not explicitly test our approach in this setting.\nThe second type occurs when we have noise in data that breaks equivariance. This is one setting we explicitly address.\nThe third type occurs when we have data that naturally exhibits almost equivariance. For example, data sampled from vector fields\nand PDEs governing natural physical processes often exhibit this quality. This is another setting we explicitly address.\nFinally, there is what Wang et al. [43 ###reference_b43###] call incorrect equivariance. This occurs when applying a group transformation to\na data point qualitatively and quantitatively changes its label. For example, rotating the digit 6 by 180 degrees turns it into\nthe digit 9 and vice versa. We do not explicitly address this in our method, but our model performs competitively on the Rot-MNIST\nclassification task, indicating that it has the capability of accounting for incorrect equivariances in its modeling."
34
+ },
35
+ {
36
+ "section_id": "3",
37
+ "parent_section_id": null,
38
+ "section_name": "Theory",
39
+ "text": ""
40
+ },
41
+ {
42
+ "section_id": "3.1",
43
+ "parent_section_id": "3",
44
+ "section_name": "Equivariance & Almost Equivariance",
45
+ "text": "In this section, we seek to give a suitable definition of almost equivariance and establish the relationship between\nit and full equivariance. In defining almost equivariance of a model with respect to the action of some Lie group, ,\nwe seek a definition that offers both theoretical insight as well as practical significance. We start by addressing\nthe abstract case, in which we define almost equivariance for general functions on a Riemannian manifold. We then\ndrop to the level of practice and give a method for encoding almost equivariance into a machine learning model\ntaking inputs on some data manifold.\nLet be a Lie group acting smoothly on smooth Riemannian manifolds and via the left actions and \ngiven by . Furthermore, let be a mapping of smooth manifolds, .\nThen we say is equivariant with respect to the action of if it commutes with the actions of on and , i.e.\nNow, consider the same setup as in the previous definition. We say a function is -almost equivariant if\nthe following is satisfied\nfor all and , where is the distance metric on . We can think of such a function as commuting with the actions of on and to within some .\nThis definition is reminiscent of one given in a question posed by Stanislaw Ulam [40 ###reference_b40###] concerning the stability of\ncertain \u201cquasi\u201d group homomorphisms. In particular, given a group , a group equipped with a distance ,\nand a -homomorphism, , satisfying\nfor all , he asked whether there exists an actual group homorphism that is \u201cclose\u201d to with respect to the distance, .\nThis question spurred research that showed the answer to be in the affirmative in a variety of cases, given certain restrictions on , and .\nIn our case, we seek to address a similar question, that is, whether given an almost equivariant map as defined above,\nthere exists a fully equivariant map that is \u201cclose\u201d to it in the sense of being within some bounded distance, and vice versa.\nIf such maps do exist, we hope to determine under what conditions on and they can be found."
46
+ },
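The displayed formulas in the two definitions above do not survive extraction; writing phi and psi for the two actions of G on M and N, f for the map from M to N, and d_N for the distance on N (these symbol names are ours), a plausible reconstruction in LaTeX is:

    % equivariance: f commutes with the two actions
    f(\phi_g(m)) = \psi_g\big(f(m)\big) \quad \text{for all } g \in G,\ m \in M,
    % epsilon-almost equivariance: the same diagram commutes up to epsilon
    d_N\big(f(\phi_g(m)),\, \psi_g(f(m))\big) \le \epsilon \quad \text{for all } g \in G,\ m \in M.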
47
+ {
48
+ "section_id": "3.2",
49
+ "parent_section_id": "3",
50
+ "section_name": "Isometries, Isometry Groups, and Almost Isometries",
51
+ "text": "We begin our discussion of the theory underlying almost equivariance by studying the notions of isometry and almost isometry.\nBecause we often seek to impose in our models equivariance with respect to the action of the isometry group of a manifold from which data is sampled, we find it\nworthwhile to study isometries as a precursor to studying equivariance.\nAn isometry is a mapping of metric spaces that preserves the distance metric.\nSome common types of metric spaces for which there exists a natural notion of isometry\nare normed spaces, such as Banach spaces, and Riemannian manifolds.\nIn this work, we focus most of our analysis on Riemannian manifolds, as they are among\nthe most general spaces upon which equivariant models operate [3 ###reference_b3###].\nLet and be Riemannian manifolds adding\n a diffeomorphism. Then we say is an\nisometry of if . In other words, the metric \ncan be pulled back by to get the metric .\nNext, we give a definition of an -almost isometry, which, in close analogy with almost equivariance,\nis a mapping of manifolds that preserves the metric on a Riemannian manifold, , to within some .\nLet and be Riemannian manifolds,\n a diffeomorphism, and .\nThen we say is an -almost isometry of if\nfor any and any having unit norm.\nIn other words, -almost isometries are maps between the same Riemannian manifold equipped with two different metrics\nfor which the metric on pairs of vectors, , and the metric on their pushforward by ,\n, differ by at most .\nOur definition of an -almost isometry is local in the sense that it deals with the tangent spaces to\npoints of a Riemannian manifold. However, we can naturally extend this definition to a global version\nthat operates on vector fields. The local and global definitions are related by the following fact:\nif is locally -almost isometric to via ,\nthen globally it is at most -isometric.\nGiven oriented and compact Riemannian manifolds and and a local -almost isometry, ,\nwe say that is a global -almost isometry if there exists a continuous, compactly-supported scalar field, ,\nsuch that for any normalized vector fields , we have\nIn particular, ,\nwhere is the canonical Riemannian volume form on .\nIt is known that equivariant model architectures are designed to preserve symmetries in data by imposing equivariance with respect to a Lie group action.\nTypical examples of Lie groups include the group of -dimensional rotations, , the group of area-preserving transformations, , and the special unitary group,\n, which has applications to quantum computing and particle physics.\nSome of these Lie groups are, in fact, isometry groups of the underlying manifolds from which data are sampled.\nThe isometry group of a Riemannian manifold, , is the set of isometries\n where the group operations of multiplication and inversion are given by function\ncomposition and function inversion, respectively. In particular, the composition of two isometries is\nan isometry, and the inverse of an isometry is an isometry. 
We denote the isometry group of by and the\nset of -almost isometries of by .\nTo give some examples, is the isometry group of , while the Poincar\u00e9 group, ,\nis the isometry group of Minkowski space, which has important applications to special and general relativity.\nWe often seek to impose equivariance in our models with respect to such isometry groups.\nIsometry groups of Riemannian manifolds also satisfy the following deep theorem, due to Myers and Steenrod.\nThe isometry group of a Riemannian manifold is a Lie group.\nThus, we can apply all the standard theorems of Lie theory to the study of isometry groups.\nUsing basic facts, we can deduce the following result about equivariance.\nIf is an isometry of the Riemannian manifold and be abelian,\nthen acts smoothly on and is an equivariant map with respect to this action\nof on . To see why, note that since is abelian, we have by definition that\n for all , which shows that is equivariant with respect to the\naction of on .\nHowever, we cannot, without some work, consider this\ntheorem in the context of because the set of -almost isometries of a manifold\ndoes not form a group. To see why, note that composing two -almost isometries produces, in general, a -almost isometry,\nthus the set of -almost isometries of a manifold is not closed under composition.\nStill, we can impose the abelian condition on group actions as a stepping stone\ntowards studying more general group actions, almost isometries, and equivariant functions. Under the assumption of an abelian Lie group acting on a Riemannian manifold,\nwe prove the following theorem.\nLet be a Riemannian manifold and suppose its group of isometries, , is an abelian Lie group.\nLet , and suppose there exists a continuous -almost isometry, ,\nwith , such that\nwhere we abbreviate the above as on \nand interpret it as an analogue to the supremum norm on the space of real-valued functions on , i.e. .\nThen is -almost equivariant with respect\nto the action of on . That is, it satisfies\nfor any and any .\nBy Proposition 3.8 ###reference_theorem8###, since is abelian, any is equivariant to actions of\n, i.e. we have\nfor all . Equivalently, we have . Now, by definition\nof the supremum norm. Then, we have simply by definition of a -almost isometry. Since is an isometry, it preserves distances, so we have because\n.\nUsing the fact that is equivariant to actions of and applying the inequalities just derived, along with repeated applications of the triangle inequality, we get\nWe apply the triangle inequality to get (1) and (4), and we substitute the inequalities derived above to get (3) and (6).\nThus, , which shows that is\n-almost equivariant with respect to the action of . This completes the proof.\n\u220e\nOf course, this theorem is not particularly useful unless for every isometry, ,\nwe have a way of obtaining an -almost isometry, ,\nsatisfying\nThe next theorem shows that such are plentiful. 
In fact, there are infinitely many of them.\nFurthermore, not only can we find , but we can find an isometric embedding, ,\nof that is -equivariant and then construct as an -almost isometric\nembedding of into such that\nThis is particularly useful in the context of machine learning, where we normally appeal to\nembedding abstract manifolds into some discretized subspace of in order to actually perform computations on a finite-precision computer.\nWe then later give some conditions under which we can achieve the converse, that is, given an -almost isometry,\n, of a metric space, , find an isometry, of , such that\nfor some constant .\nLet be a compact Riemannian manifold without boundary, \na compact Lie group acting on by isometries, a -equivariant function,\nand . Then there exists an orthogonal representation of ,\ni.e. a Lie group homomorphism from into the orthogonal group which acts on by rotations and reflections,\nan isometric embedding , and an -almost isometric embedding, ,\nsuch that is equivariant with respect to , i.e.\nand is -almost isometric with respect to , i.e. it satisfies\nUnder the stated assumptions of a compact Riemannian manifold and a compact Lie group\nacting on by isometries, we can get the existence of and by invoking the\nmain theorem of Moore and Schlafly [33 ###reference_b33###]. From there, note that setting \ntrivially satisfies (1) for any , although we seek a non-trivial solution.\nWe can choose an arbitrary , and define \nfor all . Next, since is compact, \nis bounded on . We can then take a neighborhood of such that\n. We can then choose an arbitrary ,\nwhile requiring , and set . Then is\nan -almost isometric embedding of , but , as desired.\nFurthermore, given a suitable topology on (such as the compact-open topology),\n is open so that there exist infinitely many such ,\nand they can be taken to be continuous.\n\u220e\nWe\u2019ve now shown that, subject to restrictions on ,\ngiven a -equivariant isometry of , , we can find -isometries of ,\n, within distance to that are, in fact, -almost equivariant with respect\nto the -action on .\nThe next, more difficult question (Theorem 3.11 ###reference_theorem11###) concerns a partial converse.\nThat is, given an -almost isometry, , can we find an isometry, , that\ndiffers from by no more than some constant multiple of , for all inputs ? The answer here\nis, yes, but proving it takes some work. We address this question in the next section.\n###figure_1###"
52
+ },
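A plausible reconstruction of the stripped displays in the two definitions above, writing g and h for the two metrics, varphi for the diffeomorphism, and dvarphi_p for its differential at p (these symbol names are ours):

    % isometry: the pullback of h along varphi recovers g
    \varphi^{*}h = g, \qquad \text{i.e.} \qquad
    h_{\varphi(p)}\big(d\varphi_p(u),\, d\varphi_p(v)\big) = g_p(u, v),
    % epsilon-almost isometry: the same identity up to epsilon, for all p in M
    % and all unit-norm u, v in the tangent space at p
    \big|\, h_{\varphi(p)}\big(d\varphi_p(u),\, d\varphi_p(v)\big) - g_p(u, v) \,\big| \le \epsilon.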
53
+ {
54
+ "section_id": "3.3",
55
+ "parent_section_id": "3",
56
+ "section_name": "Ulam Stability Theory",
57
+ "text": "There exist a number of results in the mathematics literature that confirm the\nexistence of almost isometries that are \u201cclose\u201d to isometries in the sense that the\nmetric space distance between them can be bounded by some . For example,\na theorem, due to Hyers and Ulam, states the following\n(Hyers and Ulam [22 ###reference_b22###])\nLet be a complete real Hilbert space. Let and be a surjection\nof E into itself that is an -isometry, that is,\n, for all ,\nwhere denotes the inner product in . Assume that . Then the limit\nexists for every and the transformation is a surjective isometry of \ninto itself, which satisfies .\nThis proposition demonstrates, given any and -almost isometry, ,\nthe existence of an isometry, , whose distance from is at most , for any input .\nThis result spurred subsequent research, and a later bound due\nto Fickett tightened the inequality. We state his theorem here as well.\nFor a fixed integer , let be a bounded subset of and let \nbe given. If a function satisfies\nfor all , that is, is an -isometry of , then there exists\nan isometry such that\nTaken together, these two results show that no matter what -almost isometry\nwe define, it is never \u201cfar off\u201d from a full isometry, with the\ndistance between the two bounded above by . Most recently,\nV\u00e4is\u00e4l\u00e4 [41 ###reference_b41###] proved an even tighter bound, but its discussion is beyond the scope of this paper.\nTo apply Theorem 3.11 ###reference_theorem11### in the context of machine learning,\nnote that by the Nash Embedding Theorem [35 ###reference_b35###], we can smoothly and isometrically embed any\nRiemannian manifold into for some . If is compact, then the embedding of in \nwill be a compact, and therefore bounded, subset of . We can then apply Theorem 3.11 ###reference_theorem11### to any\n-isometry of to get a nearby isometry of as a subset of .\nIf is not compact, let be its smooth isometric embedding.\nWe can then apply Theorem 11.4 of Wells and Williams [48 ###reference_b48###],\nwhich states that for a finite-dimensional Hilbert space, , we can extend any\nisometry of to an isometry on the linear span of . Assuming the completion, , of \nis contained in the linear span of , we can then, for any surjective -isometry of \ninto itself, apply Theorem 3.11 ###reference_theorem11### to recover an isometry of ."
58
+ },
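The limit and bound in the Hyers-Ulam proposition above are stripped; as a plausible reading, the classical statement from [22] constructs the isometry T from the epsilon-isometry f by

    T(x) = \lim_{n \to \infty} \frac{f(2^{n} x)}{2^{n}},
    \qquad \| f(x) - T(x) \| \le 10\,\epsilon \quad \text{for all } x \in E,

with the constant 10 being the one from Hyers and Ulam's original bound.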
59
+ {
60
+ "section_id": "4",
61
+ "parent_section_id": null,
62
+ "section_name": "Method",
63
+ "text": ""
64
+ },
65
+ {
66
+ "section_id": "4.1",
67
+ "parent_section_id": "4",
68
+ "section_name": "Almost Equivariant Models",
69
+ "text": "Having established the theory, we now give a practical method for encoding almost equivariance in machine learning models by appealing to the Lie algebra,\n, of the Lie group, .\nGiven a connected Lie group, , its Lie algebra, , vector spaces, and , and\nrepresentations, and ,\nwe say a model is -almost equivariant\nwith respect to the action of a Lie group, , if\nfor , , , and some .\nNote that our definition naturally encompasses full equivariance with respect to the action of\nconnected, compact Lie groups, for which the map is surjective, and which occurs when we\ntake and define .\nOur definition makes clear the correspondence between and the linear approximation at the identity, , afforded by the Lie\nalgebra, . Because acts by on , we expect that there exists an element such\nthat the action of on approximates the action of some representation of on .\nWe givea visualization of the intuition behind the definition in Figure 1 ###reference_### for the case where ."
70
+ },
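A plausible reconstruction of the stripped display in the definition above, writing rho_1 and rho_2 for the two representations (these names are ours):

    % M is epsilon-almost equivariant if for each g in G and x in X there is
    % some A in the Lie algebra with
    \big\| M\big(\rho_1(g)\, x\big) - \rho_2\big(\exp(A)\big)\, M(x) \big\| \le \epsilon,
    % and full equivariance is the case epsilon = 0 with A = \log(g).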
71
+ {
72
+ "section_id": "4.2",
73
+ "parent_section_id": "4",
74
+ "section_name": "Lie Algebra Convolutions",
75
+ "text": "We build an almost equivariant neural network layer based on the Lie algebra, , of a\nmatrix Lie group, .\nTo parameterize our kernel function, we encode the Lie\nalgebra basis explicitly. For most matrix Lie groups, the corresponding Lie algebra basis\nhas an easily calculated set of generators, i.e. a set of basis elements .\nSecond, instead of mapping elements of directly to via the exponential\nmap, we train a neural network, ,\nto learn an approximation to this mapping directly from data. In our experiments, each\nLie algebra convolutional layer of the network is equipped with its own , which\nis parameterized as an MLP with a single linear layer followed by a non-linear activation,\neither a ReLU or a sigmoid function. Our method confers some key benefits over previous approaches.\nFor one, the kernels used in some past works are still constrained to take as input only group elements,\n, which to some extent limits the flexibility with which they\ncan model partial equivariances. In contrast, our kernel can take any\n as an input, allowing us to model a more flexible class\nof functions while still maintaining the interpretability achieved by parameterizing\nthis function class via elements of the Lie algebra.\nWe construct an almost equivariant Lie algebra convolution, abbreviated -conv, by letting\n and defining\nHere, instead of integrating with respect to the Haar measure, we instead integrate with\nrespect to the Lebesgue measure, , defined on . This is possible because\nwe are integrating over the Lie algebra, , which is a vector subspace of\n. Existing works require integrating with respect to the Haar measure\nbecause it is finite for compact groups, which allows one to more easily do MCMC sampling.\nCompactness is also necessary to define fully-equivariant group convolutions parameterized\nin the Lie algebra, because such a parameterization relies on the exponential map being surjective. Furthermore,\nwhile MacDonald et al. [31 ###reference_b31###] define a method for sampling from the Lie group, , that\nallows the group convolution to retain full equivariance,\neven for non-compact groups, by using a measure induced on the Lie algebra by the Haar\nmeasure, we adopt our simpler approach since we are not aiming for full group equivariance\nand instead only for almost equivariance. Thus, we use a uniform measure on the Lie algebra, which\nfor the groups studied here amounts to the Lebesgue measure on . While we still\nultimately convolve with group elements (in the case of compact groups, for which\n is surjective), our inputs, , are taken from the Lie algebra, ,\nand then pushed onto the Lie group, , via the map.\nAdditionally, because the map is surjective only for compact Lie groups [18 ###reference_b18###], the approach of parameterizing\nLie group elements by applying the map to elements of the Lie algebra only works\nin the compact case. Because we model the mapping function \nusing a neural network, our approach extends to non-compact Lie groups.\nLet be a compact matrix Lie group, , and\n so that .\nThen the Lie algebra convolution\nis -almost equivariant.\nWe provide the proof in the appendix."
76
+ },
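A minimal, illustrative sketch of a Lie algebra convolution in the spirit of this section, not the authors' architecture. It acts on points in the plane rather than on feature maps, fixes G = SO(2) with its single algebra generator, replaces the exponential map with a small learned MLP as described above, and approximates the integral over the algebra with Monte Carlo samples; the sample count and layer sizes are arbitrary choices:

    import torch
    import torch.nn as nn

    class LieAlgebraConvSO2(nn.Module):
        # Kernel-weighted average over sampled so(2) elements, pushed to
        # (approximate) group elements by a learned stand-in for exp.
        def __init__(self, n_samples=8, hidden=16):
            super().__init__()
            self.register_buffer("gen", torch.tensor([[0., -1.], [1., 0.]]))
            self.to_group = nn.Sequential(  # learned map from algebra to group
                nn.Linear(4, hidden), nn.ReLU(), nn.Linear(hidden, 4))
            self.kernel = nn.Sequential(    # scalar kernel on algebra coordinates
                nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            self.n_samples = n_samples

        def forward(self, x):  # x: (batch, 2) points in the plane
            t = torch.randn(self.n_samples, 1, device=x.device)  # coordinates
            A = t[:, :, None] * self.gen                 # (S, 2, 2) so(2) elements
            mats = self.to_group(A.flatten(1)).view(-1, 2, 2)  # approximates exp(A)
            w = self.kernel(t)                           # (S, 1) kernel weights
            xs = torch.einsum("sij,bj->sbi", mats, x)    # transformed copies of x
            return (w[:, None, :] * xs).mean(dim=0)      # Monte Carlo average

    layer = LieAlgebraConvSO2()
    print(layer(torch.randn(5, 2)).shape)  # torch.Size([5, 2])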
77
+ {
78
+ "section_id": "5",
79
+ "parent_section_id": null,
80
+ "section_name": "Results",
81
+ "text": "We test our Almost Equivariant G-CNN on a suite of tasks that span the gamut of\nfull and almost equivariance. For each task, we compare the performance of our model with that\nof the Residual Pathway Prior model given in Finzi et al. [12 ###reference_b12###],\nthe Approximately Equivariant G-CNN defined in Wang et al. [44 ###reference_b44###],\nthe -equivariant and steerable E2CNN of Weiler and Cesa [46 ###reference_b46###], and a Standard CNN that is equivariant only to\ntranslations of the inputs."
82
+ },
83
+ {
84
+ "section_id": "5.1",
85
+ "parent_section_id": "5",
86
+ "section_name": "Image Classification",
87
+ "text": "We first test our model on an image classification task. We focus on the Rot-MNIST dataset,\nwhich consists of images taken from the MNIST dataset and\nsubjected to random rotations. We would expect rotational equivariance to be beneficial for\nclassifying these images. The training, validation, and test sets contain 10,000, 2,000, and\n50,000 images, respectively. We summarize our results on this task in Table 1 ###reference_###.\nWe perform a comprehensive hyperparameter grid search during training, and find that our best-performing\nmodel outperforms all baselines that we tested against.\nIt also outperforms the standard CNN and is\nmarginally outperformed only by the fully--equivariant E2CNN. We didn\u2019t perform any optimization of the kernel\nfunctions for any of the models, nor of the neural\nnetwork mapping from the Lie Algebra to the Lie Group for our model, and expect\nthat with further hyperparameter tuning as well as deeper models and more complex kernel functions, we could achieve\neven higher performance(s) on the test set.\nWe provide further details on the model training process in the appendix."
88
+ },
89
+ {
90
+ "section_id": "5.2",
91
+ "parent_section_id": "5",
92
+ "section_name": "Damped Pendulum",
93
+ "text": "The second task is to predict the -position, , at time \nof a pendulum undergoing simple harmonic motion\nand subjected to wind resistance. The pendulum is modeled as a mass, , connected to\na massless rod of length subjected to an acceleration due to gravity of\n and position function .\nThe differential equation governing this motion is\nwhere is the coefficient of friction governing the wind resistance which\nis modeled as a force\nWe simulate the trajectory of the pendulum using the Runge-Kutta method to obtain an\niterative, approximate solution to the above, second-order differential equation. We\nsample for 6000 values of using a and setting\n, , , and\n. We partition this data into a 90%/10% train-test split and train\na series of models to predict -position from the time . Because the pendulum\nrotates about a vertical line, we again expect that rotational equivariance would be beneficial for this task.\nTable 1 ###reference_### summarizes our results. We find that our Almost Equivariant G-CNN, the\nE2CNN, the Approximately Equivariant G-CNN, and the Residual Pathway Prior all achieve nearly identical performance,\nslightly beating out the standard CNN, which has many more parameters than the other baselines.\nRelative to the E2CNN and the RPP models, our model achieves significantly lower mean RMSE across hyperparameter configurations.\nThe RPP model, in particular, demonstrates a high sensitivity to hyperparameter settings. Our model uses far fewer parameters\nthan the standard CNN and a number of parameters comparable to the other baselines. While our best-performing model uses a\nkernel size of 4 compared to a kernel size of 2 used for the CNN, it uses only 1 hidden layer and 16 hidden channels, compared to\nthe CNN which uses 3 hidden layers having hidden channel sizes of 32, 64, and 128, respectively."
94
+ },
95
+ {
96
+ "section_id": "5.3",
97
+ "parent_section_id": "5",
98
+ "section_name": "Smoke Plume",
99
+ "text": "Next, we test our model on an almost equivariant prediction task. The dataset we use is the\nsmoke plume dataset of Wang et al. [44 ###reference_b44###] consisting of 2D velocity vector fields\nof smoke simulations with different initial conditions and external forces, all generated using\nthe PDE simulation framework, PhiFlow [19 ###reference_b19###]. Specifically, we use the subset\nof the data that features rotational almost equivariance. As stated in Wang et al. [44 ###reference_b44###],\n\u201cboth the inflow location and the direction of the buoyant forces possess a perfect rotation symmetry\nwith respect to the group, but the buoyancy factor varies with the inflow positions to break\nthe rotational symmetry.\u201d All models are trained to predict the raw velocity fields at the next time step\ngiven the raw velocity fields at the previous timestep as input.\nDue to computational constraints, we only run our method on this data and compare to the baseline\nresults reported in Wang et al. [44 ###reference_b44###].\nTable 2 ###reference_### shows how our method compares\nto the baselines. Due to computational constraints, we were\nunable to run a full hyperparameter sweep and suspect that\ndoing so would boost our model\u2019s performance even further."
100
+ },
101
+ {
102
+ "section_id": "5.4",
103
+ "parent_section_id": "5",
104
+ "section_name": "Jet Flow",
105
+ "text": "Finally, we test on one more almost equivariant dataset.\nAs described in Wang et al. [44 ###reference_b44###], this dataset\ncontains samples of 2D turbulent velocity fields taken\nfrom NASA multi-stream jets that were measured using time-resolved\nparticle image velocimetry as described in Bridges and Wernet [2 ###reference_b2###].\nWe follow the procedure described in\nWang et al. [44 ###reference_b44###], and \u201ctrain and test\non twenty-four sub-regions of jet flows.\u201d\nTable 2 ###reference_### shows our results."
106
+ },
107
+ {
108
+ "section_id": "6",
109
+ "parent_section_id": null,
110
+ "section_name": "Discussion",
111
+ "text": "In this work, we proposed a definition of almost equivariance that encompassed previous\ndefinitions of full and approximate/partial/soft equivariance. We connected this definition\nto mathematical theory by showing that, given an abelian isometry group, , acting on a Riemannian manifold,\n, then any isometry, of , is equivariant to the action of , and furthermore that there exists an\n-almost isometry, of , not more than from in the supremum norm,\nsuch that is almost equivariant to the action of .\nNext, we showed that nothing is lost by taking and to be isometric and almost isometric embeddings, respectively,\nof into . We then appealed to Ulam Stability Theory to give conditions under which we\ncan get an isometry of a complete, real Hilbert space close to an almost isometry of the same space.\nAll of this taken together demonstrates that there exist almost equivariant functions that are never \u201cfar\u201d from fully equivariant functions,\ngiven some constraints on the group action and class of functions, in a sense that can be mathematically quantified.\nWe next introduced a convolution on the elements of a Lie algebra that approximates a\nfully equivariant group convolution. We then showed that such a convolution can model almost\nequivariance relative to any group action, even those of non-compact groups.\nWe validated our assumptions by testing our model on a 2D image classification task,\na 1D sequence regression task, and a 2D sequence regression task. On all tasks, our model exceeded\nor met the performance of state-of-the-art equivariant and almost equivariant baseline models.\nThis demonstrates the utility of our method across a variety of scientific domains and prediction task types."
112
+ },
113
+ {
114
+ "section_id": "7",
115
+ "parent_section_id": null,
116
+ "section_name": "Future Work",
117
+ "text": "One line of future work will involve testing our model architecture on a wider class of group actions.\nWhile our model is general enough to handle the action of\nany group, including those of non-compact groups, we have not yet tested it on groups aside from .\nLawrence and Harris [27 ###reference_b27###] points to some potential\napplications of equivariance to non-compact Lie groups.\nNext, there exist a number of ways to further expound upon the theoretical results given here.\nOne potential angle to consider is whether variations of Theorem 3.11 ###reference_theorem11###\ncan be made to hold for arbitrary Riemannian manifolds and not just Hilbert and Euclidean spaces,\nrespectively. Another direction would involve undertaking a rigorous analysis of the conditions under\nwhich almost equivariance to the action of a non-abelian group can be imposed upon a function.\nWe here gave\nproof of the existence of almost isometries of Riemannian manifolds that are almost equivariant to certain abelian group actions,\nwhich we believe to be the most useful direction as, in practice, one normally seeks to take a fully equivariant\nmodel and make it almost equivariant.\nThat said, the more difficult mathematical question is to consider when, given an almost equivariant\nfunction on a manifold, it can be transformed into a fully equivariant function on the same manifold. We leave this direction\nfor future work.\nFinally, it is known that fully-equivariant kernel sharing for G-CNNs requires that the group act transitively on the input space [47 ###reference_b47###].\nAn interesting direction for future work would be investigating the extent to which this assumption is required for almost\nequivariant kernel sharing."
118
+ },
119
+ {
120
+ "section_id": "8",
121
+ "parent_section_id": null,
122
+ "section_name": "Acknowledgements",
123
+ "text": "We thank Frederic Sala, Jason Hartford, and Andrew Zimmer for their valuable feedback on this work."
124
+ }
125
+ ],
126
+ "appendix": [
127
+ {
128
+ "section_id": "Appendix 1",
129
+ "parent_section_id": null,
130
+ "section_name": "Appendix A Appendix",
131
+ "text": "We give brief introductions to the subjects of representation theory, differential topology and geometry, and Lie theory,\nstating only those definitions and theorems needed to understand the paper. For more comprehensive background, we encourage\nreaders to consult any of Fulton and Harris [13 ###reference_b13###], Etingof et al. [8 ###reference_b8###], Hall [18 ###reference_b18###] for representation theory,\nany of Lee [28 ###reference_b28###, 29 ###reference_b29###] for differential topology and geometry, and Hall [18 ###reference_b18###] for Lie theory.\nAll experiments were conducted on a single NVIDIA A100 GPU with 80GB of memory."
132
+ }
133
+ ],
134
+ "tables": {
135
+ "1": {
136
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T1.15\" style=\"width:433.6pt;height:92.1pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-92.6pt,19.7pt) scale(0.70076746852372,0.70076746852372) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.15.15\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.15.15.16.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S4.T1.15.15.16.1.1\">Group</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S4.T1.15.15.16.1.2\">Num Samples</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S4.T1.15.15.16.1.3\">Model</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.15.15.16.1.4\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.15.15.16.1.4.1\">\n<tr class=\"ltx_tr\" id=\"S4.T1.15.15.16.1.4.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T1.15.15.16.1.4.1.1.1\">Rot-MNIST</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.15.15.16.1.4.1.2\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T1.15.15.16.1.4.1.2.1\">Classification Accuracy</td>\n</tr>\n</table>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.15.15.16.1.5\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.15.15.16.1.5.1\">\n<tr class=\"ltx_tr\" id=\"S4.T1.15.15.16.1.5.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T1.15.15.16.1.5.1.1.1\">Pendulum</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.15.15.16.1.5.1.2\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T1.15.15.16.1.5.1.2.1\">Regression Error (RMSE)</td>\n</tr>\n</table>\n</th>\n<th class=\"ltx_td ltx_nopad_r ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.15.15.16.1.6\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.15.15.16.1.6.1\">\n<tr class=\"ltx_tr\" id=\"S4.T1.15.15.16.1.6.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T1.15.15.16.1.6.1.1.1\">Pendulum</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.15.15.16.1.6.1.2\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T1.15.15.16.1.6.1.2.1\">Average RMSE</td>\n</tr>\n</table>\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.3.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S4.T1.3.3.3.4\">SE(2)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S4.T1.3.3.3.5\">10</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S4.T1.3.3.3.6\">Almost Equivariant G-CNN</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.2.2.2.2\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S4.T1.3.3.3.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.6.6.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T1.6.6.6.4\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S4.T1.6.6.6.4.1\">E(2)</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T1.6.6.6.5\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T1.6.6.6.5.1\">10</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" 
id=\"S4.T1.6.6.6.6\">E2CNN</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.4.4.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.4.4.4.1.1\" style=\"color:#808080;\"></span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.5.5.5.2\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T1.6.6.6.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.9.9.9\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T1.9.9.9.4\">Residual Pathway Prior</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.7.7.7.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.8.8.8.2\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T1.9.9.9.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.12.12.12\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T1.12.12.12.4\">N/A</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T1.12.12.12.5\">Approximately Equivariant G-CNN</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.10.10.10.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.11.11.11.2\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T1.12.12.12.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.15.15.15\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb\" id=\"S4.T1.15.15.15.4\">T(2)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb\" id=\"S4.T1.15.15.15.5\">N/A</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb\" id=\"S4.T1.15.15.15.6\">Standard CNN</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.13.13.13.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.14.14.14.2\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_bb\" id=\"S4.T1.15.15.15.3\"></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Rot-MNIST classification accuracies and RMSE prediction errors for pendulum trajectory prediction.\nBest results are bold-faced and second-best are colored gray.</figcaption>\n</figure>",
137
+ "capture": "Table 1: Rot-MNIST classification accuracies and RMSE prediction errors for pendulum trajectory prediction.\nBest results are bold-faced and second-best are colored gray."
138
+ },
139
+ "2": {
140
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T2.10\" style=\"width:433.6pt;height:95.1pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-70.5pt,15.5pt) scale(0.754667596559725,0.754667596559725) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.10.10\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.10.10.11.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.10.10.11.1.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T2.10.10.11.1.1.1\">Group</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.10.10.11.1.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T2.10.10.11.1.2.1\">Num Samples</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.10.10.11.1.3\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T2.10.10.11.1.3.1\">Model</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S4.T2.10.10.11.1.4\">Jet Flow (RMSE)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S4.T2.10.10.11.1.5\">Smoke Plume (RMSE)</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.10.10.12.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.10.10.12.2.1\">Future</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.10.10.12.2.2\">Domain</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.10.10.12.2.3\">Future</th>\n<th class=\"ltx_td ltx_nopad_r ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.10.10.12.2.4\">Domain</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.2.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.2.2.2.3\">SE(2)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.2.2.2.4\">10</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.2.2.2.5\">Almost Equivariant G-CNN</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.2.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.2.2.2.6\">1.18</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S4.T2.2.2.2.7\">0.78</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.4.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.3\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S4.T2.4.4.4.3.1\">E(2)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4\">10</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.5\">E2CNN</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.3.3.3.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.6\">1.05</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T2.4.4.4.7\"><span class=\"ltx_text\" id=\"S4.T2.4.4.4.7.1\" style=\"color:#808080;\">0.76</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.6.6.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.6.6.6.3\">10</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.6.6.6.4\">Residual Pathway Prior</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.5.5.5.1\"></td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S4.T2.6.6.6.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.6.6.6.5\"><span class=\"ltx_text\" id=\"S4.T2.6.6.6.5.1\" style=\"color:#808080;\">0.96</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T2.6.6.6.6\">0.83</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.8.8.8\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.8.8.8.3\">4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.8.8.8.4\">Steerable Approximately Equivariant G-CNN</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.7.7.7.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.8.8.8.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.8.8.8.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.8.8.8.5.1\">0.80</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T2.8.8.8.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.8.8.8.6.1\">0.67</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.10.10.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.10.10.10.3\">T(2)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.10.10.10.4\">N/A</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.10.10.10.5\">Standard CNN</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T2.9.9.9.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.10.10.10.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.10.10.10.6\">1.21</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_bb\" id=\"S4.T2.10.10.10.7\">1.10</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Prediction RMSE on simulated smoke plume velocity fields and jet flow 2D turbulent velocity fields with almost rotational symmetry. The results for the baseline methods are taken from <cite class=\"ltx_cite ltx_citemacro_citet\">Wang et\u00a0al. [<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.13164v6#bib.bib44\" title=\"\">44</a>]</cite> and compared against our <span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.14.1\">Almost Equivariant G-CNN</span>. As stated in <cite class=\"ltx_cite ltx_citemacro_citet\">Wang et\u00a0al. [<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.13164v6#bib.bib44\" title=\"\">44</a>]</cite>, <span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.15.2\">Future</span> prediction involves testing on data that lies in the future of the training data. <span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.16.3\">Domain</span> prediction involves training and test data that are from different spatial domains. Best results are bold-faced and second-best are colored gray.</figcaption>\n</figure>",
141
+ "capture": "Table 2: Prediction RMSE on simulated smoke plume velocity fields and jet flow 2D turbulent velocity fields with almost rotational symmetry. The results for the baseline methods are taken from Wang et\u00a0al. [44] and compared against our Almost Equivariant G-CNN. As stated in Wang et\u00a0al. [44], Future prediction involves testing on data that lies in the future of the training data. Domain prediction involves training and test data that are from different spatial domains. Best results are bold-faced and second-best are colored gray."
142
+ },
143
+ "3": {
144
+ "table_html": "<figure class=\"ltx_table\" id=\"A1.T3\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"A1.T3.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A1.T3.1.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A1.T3.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T3.1.1.1.1.1\">Learning Rate</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A1.T3.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T3.1.1.1.2.1\">Optimizer</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A1.T3.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T3.1.1.1.3.1\">Kernel Sizes</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A1.T3.1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T3.1.1.1.4.1\">Hidden Channels</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_tt\" id=\"A1.T3.1.1.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T3.1.1.1.5.1\"># Hidden Layers</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"A1.T3.1.2.2.1\">1e-4, 1e-3, 1e-2, 1e-1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"A1.T3.1.2.2.2\">Adam, SGD</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"A1.T3.1.2.2.3\">2, 3, 4, 5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"A1.T3.1.2.2.4\">16, 32</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_bb ltx_border_t\" id=\"A1.T3.1.2.2.5\">1, 2, 3, 4</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Model hyperparameters used in grid search for the pendulum trajectory prediction task.</figcaption>\n</figure>",
145
+ "capture": "Table 3: Model hyperparameters used in grid search for the pendulum trajectory prediction task."
146
+ },
147
+ "4": {
148
+ "table_html": "<figure class=\"ltx_table\" id=\"A1.T4\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"A1.T4.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A1.T4.1.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A1.T4.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T4.1.1.1.1.1\">Learning Rate</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A1.T4.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T4.1.1.1.2.1\">Optimizer</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A1.T4.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T4.1.1.1.3.1\">Kernel Sizes</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A1.T4.1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T4.1.1.1.4.1\">Hidden Channels</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A1.T4.1.1.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T4.1.1.1.5.1\"># Hidden Layers</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_tt\" id=\"A1.T4.1.1.1.6\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T4.1.1.1.6.1\">Batch Sizes</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T4.1.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"A1.T4.1.2.2.1\">1e-4, 1e-3, 1e-2, 1e-1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"A1.T4.1.2.2.2\">Adam</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"A1.T4.1.2.2.3\">3, 4, 5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"A1.T4.1.2.2.4\">16, 32</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"A1.T4.1.2.2.5\">1, 2, 3, 4</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_bb ltx_border_t\" id=\"A1.T4.1.2.2.6\">16, 32, 64</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 4: </span>Model hyperparameters used in grid search for the Rot-MNIST classification task.</figcaption>\n</figure>",
149
+ "capture": "Table 4: Model hyperparameters used in grid search for the Rot-MNIST classification task."
150
+ }
151
+ },
152
+ "image_paths": {
153
+ "1": {
154
+ "figure_path": "2310.13164v6_figure_1.png",
155
+ "caption": "Figure 1: We provide a visualization of how actions of the Lie algebra can be used\nto approximate actions of the corresponding Lie group. The Lie group, S\u2062O\u2062(2)\ud835\udc46\ud835\udc422SO(2)italic_S italic_O ( 2 ), of\ntwo-dimensional rotations, represented here as the circle, S1\u2282\u21022superscript\ud835\udc461superscript\u21022S^{1}\\subset\\mathbb{C}^{2}italic_S start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT \u2282 blackboard_C start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT, with\nits Lie algebra, \ud835\udd30\u2062\ud835\udd2c\u2062(2)\ud835\udd30\ud835\udd2c2\\mathfrak{so}(2)fraktur_s fraktur_o ( 2 ), represented here as the tangent line at the\nidentity x=1\u2208\u2102\ud835\udc651\u2102x=1\\in\\mathbb{C}italic_x = 1 \u2208 blackboard_C, is the most easily visualized case. Here,\n\u03b8\ud835\udf03\\thetaitalic_\u03b8 gives the angle of rotation, \u03b5\ud835\udf00\\varepsilonitalic_\u03b5 gives the approximation\nerror arising from working in the Lie algebra, and the top dashed arrow shows how points\ncan be mapped from the Lie algebra onto the Lie group via the exponential map.\nThe function \u03a6:\ud835\udd24\u2192G:\u03a6\u2192\ud835\udd24\ud835\udc3a\\Phi:\\mathfrak{g}\\to Groman_\u03a6 : fraktur_g \u2192 italic_G, indicated here by the green curve,\nis a learned mapping that can be trained to approximate the exponential map.",
156
+ "url": "http://arxiv.org/html/2310.13164v6/extracted/5679831/img/screenshot_lie.png"
157
+ },
158
+ "2(a)": {
159
+ "figure_path": "2310.13164v6_figure_2(a).png",
160
+ "caption": "Figure 2: Training Losses and Train/Validation RMSE across Epochs for Pendulum Trajectory Prediction",
161
+ "url": "http://arxiv.org/html/2310.13164v6/extracted/5679831/training/loss_model_ApproxEq_bs_16_nh_2_ks_3_hc_16_lr_0.01_optim_Adam_ts_0.8_seed_0.png"
162
+ },
163
+ "2(b)": {
164
+ "figure_path": "2310.13164v6_figure_2(b).png",
165
+ "caption": "Figure 2: Training Losses and Train/Validation RMSE across Epochs for Pendulum Trajectory Prediction",
166
+ "url": "http://arxiv.org/html/2310.13164v6/extracted/5679831/training/rmse_model_ApproxEq_bs_16_nh_2_ks_3_hc_16_lr_0.01_optim_Adam_ts_0.8_seed_0.png"
167
+ },
168
+ "2(c)": {
169
+ "figure_path": "2310.13164v6_figure_2(c).png",
170
+ "caption": "Figure 2: Training Losses and Train/Validation RMSE across Epochs for Pendulum Trajectory Prediction",
171
+ "url": "http://arxiv.org/html/2310.13164v6/extracted/5679831/training/loss_model_AlmostEquivariant_bs_16_nh_2_ks_4_hc_32_lr_0.001_optim_Adam_ts_0.8_seed_0.png"
172
+ },
173
+ "2(d)": {
174
+ "figure_path": "2310.13164v6_figure_2(d).png",
175
+ "caption": "Figure 2: Training Losses and Train/Validation RMSE across Epochs for Pendulum Trajectory Prediction",
176
+ "url": "http://arxiv.org/html/2310.13164v6/extracted/5679831/training/rmse_model_AlmostEquivariant_bs_16_nh_2_ks_4_hc_32_lr_0.001_optim_Adam_ts_0.8_seed_0.png"
177
+ }
178
+ },
179
+ "validation": true,
180
+ "references": [
181
+ {
182
+ "1": {
183
+ "title": "E(3)-equivariant graph neural networks for data-efficient and\naccurate interatomic potentials.",
184
+ "author": "Simon Batzner, Albert Musaelian, Lixin Sun, Mario Geiger, Jonathan P. Mailoa,\nMordechai Kornbluth, Nicola Molinari, Tess E. Smidt, and Boris Kozinsky.",
185
+ "venue": "Nature Communications, 13(1):2453, 2022.",
186
+ "url": null
187
+ }
188
+ },
189
+ {
190
+ "2": {
191
+ "title": "Measurements of turbulent convection speeds in multistream jets using\ntime-resolved piv.",
192
+ "author": "James Bridges and Mark Wernet.",
193
+ "venue": "06 2017.",
194
+ "url": null
195
+ }
196
+ },
197
+ {
198
+ "3": {
199
+ "title": "Geometric deep learning: Grids, groups, graphs, geodesics, and\ngauges, 2021.",
200
+ "author": "Michael M. Bronstein, Joan Bruna, Taco Cohen, and Petar Veli\u010dkovi\u0107.",
201
+ "venue": null,
202
+ "url": null
203
+ }
204
+ },
205
+ {
206
+ "4": {
207
+ "title": "Group equivariant convolutional networks.",
208
+ "author": "Taco Cohen and Max Welling.",
209
+ "venue": "In Maria Florina Balcan and Kilian Q. Weinberger, editors,\nProceedings of The 33rd International Conference on Machine Learning,\nvolume 48 of Proceedings of Machine Learning Research, pages\n2990\u20132999, New York, New York, USA, 20\u201322 Jun 2016. PMLR.",
210
+ "url": null
211
+ }
212
+ },
213
+ {
214
+ "5": {
215
+ "title": "A general theory of equivariant cnns on homogeneous spaces.",
216
+ "author": "Taco S Cohen, Mario Geiger, and Maurice Weiler.",
217
+ "venue": "In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch\u00e9-Buc, E. Fox, and R. Garnett, editors, Advances in Neural\nInformation Processing Systems, volume 32. Curran Associates, Inc., 2019.",
218
+ "url": null
219
+ }
220
+ },
221
+ {
222
+ "6": {
223
+ "title": "Automatic symmetry discovery with lie algebra convolutional network.",
224
+ "author": "Nima Dehmamy, Robin Walters, Yanchen Liu, Dashun Wang, and Rose Yu.",
225
+ "venue": "In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman\nVaughan, editors, Advances in Neural Information Processing Systems,\nvolume 34, pages 2503\u20132515. Curran Associates, Inc., 2021.",
226
+ "url": null
227
+ }
228
+ },
229
+ {
230
+ "7": {
231
+ "title": "Lie groups for 2d and 3d transformations, May 2017.",
232
+ "author": "Ethan Eade.",
233
+ "venue": "URL https://ethaneade.com/lie.pdf.",
234
+ "url": null
235
+ }
236
+ },
237
+ {
238
+ "8": {
239
+ "title": "Introduction to representation theory, 2011.",
240
+ "author": "Pavel Etingof, Oleg Golberg, Sebastian Hensel, Tiankai Liu, Alex Schwendner,\nDmitry Vaintrob, and Elena Yudovina.",
241
+ "venue": null,
242
+ "url": null
243
+ }
244
+ },
245
+ {
246
+ "9": {
247
+ "title": "Approximate isometries on bounded sets with an application to measure\ntheory.",
248
+ "author": "James Fickett.",
249
+ "venue": "Studia Mathematica, 72(1):37\u201346, 1982.",
250
+ "url": null
251
+ }
252
+ },
253
+ {
254
+ "10": {
255
+ "title": "Generalizing convolutional neural networks for equivariance to lie\ngroups on arbitrary continuous data.",
256
+ "author": "Marc Finzi, Samuel Stanton, Pavel Izmailov, and Andrew Gordon Wilson.",
257
+ "venue": "In Hal Daum\u00e9 III and Aarti Singh, editors, Proceedings of the\n37th International Conference on Machine Learning, volume 119 of\nProceedings of Machine Learning Research, pages 3165\u20133176. PMLR,\n13\u201318 Jul 2020.",
258
+ "url": null
259
+ }
260
+ },
261
+ {
262
+ "11": {
263
+ "title": "A practical method for constructing equivariant multilayer\nperceptrons for arbitrary matrix groups.",
264
+ "author": "Marc Finzi, Max Welling, and Andrew Gordon Gordon Wilson.",
265
+ "venue": "In Marina Meila and Tong Zhang, editors, Proceedings of the\n38th International Conference on Machine Learning, volume 139 of\nProceedings of Machine Learning Research, pages 3318\u20133328. PMLR,\n18\u201324 Jul 2021a.",
266
+ "url": null
267
+ }
268
+ },
269
+ {
270
+ "12": {
271
+ "title": "Residual pathway priors for soft equivariance constraints.",
272
+ "author": "Marc Anton Finzi, Gregory Benton, and Andrew Gordon Wilson.",
273
+ "venue": "In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan,\neditors, Advances in Neural Information Processing Systems,\n2021b.",
274
+ "url": null
275
+ }
276
+ },
277
+ {
278
+ "13": {
279
+ "title": "Representation theory: A first course.",
280
+ "author": "William Fulton and Joe Harris.",
281
+ "venue": "Springer, 2004.",
282
+ "url": null
283
+ }
284
+ },
285
+ {
286
+ "14": {
287
+ "title": "Equivariance versus augmentation for spherical images.",
288
+ "author": "Jan Gerken, Oscar Carlsson, Hampus Linander, Fredrik Ohlsson, Christoffer\nPetersson, and Daniel Persson.",
289
+ "venue": "In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari,\nGang Niu, and Sivan Sabato, editors, Proceedings of the 39th\nInternational Conference on Machine Learning, volume 162 of\nProceedings of Machine Learning Research, pages 7404\u20137421. PMLR,\n17\u201323 Jul 2022a.",
290
+ "url": null
291
+ }
292
+ },
293
+ {
294
+ "15": {
295
+ "title": "Equivariance versus augmentation for spherical images.",
296
+ "author": "Jan Gerken, Oscar Carlsson, Hampus Linander, Fredrik Ohlsson, Christoffer\nPetersson, and Daniel Persson.",
297
+ "venue": "In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari,\nGang Niu, and Sivan Sabato, editors, Proceedings of the 39th\nInternational Conference on Machine Learning, volume 162 of\nProceedings of Machine Learning Research, pages 7404\u20137421. PMLR,\n17\u201323 Jul 2022b.",
298
+ "url": null
299
+ }
300
+ },
301
+ {
302
+ "16": {
303
+ "title": "The lie derivative for measuring learned equivariance.",
304
+ "author": "Nate Gruver, Marc Anton Finzi, Micah Goldblum, and Andrew Gordon Wilson.",
305
+ "venue": "In The Eleventh International Conference on Learning\nRepresentations, 2023.",
306
+ "url": null
307
+ }
308
+ },
309
+ {
310
+ "17": {
311
+ "title": "Lecture notes on metric space and gromov-hausdorff distance, Sep\n2017.",
312
+ "author": "Chenlin Gu.",
313
+ "venue": "URL https://chenlin-gu.github.io/notes/GromovHausdorff.pdf.",
314
+ "url": null
315
+ }
316
+ },
317
+ {
318
+ "18": {
319
+ "title": "Lie Groups, Lie Algebras, and Representations: An Elementary\nIntroduction.",
320
+ "author": "Brian Hall.",
321
+ "venue": "Springer International Publishing, Cham, 2015.",
322
+ "url": null
323
+ }
324
+ },
325
+ {
326
+ "19": {
327
+ "title": "Learning to control pdes with differentiable physics.",
328
+ "author": "Philipp Holl, Nils Thuerey, and Vladlen Koltun.",
329
+ "venue": "In International Conference on Learning Representations, 2020.",
330
+ "url": null
331
+ }
332
+ },
333
+ {
334
+ "20": {
335
+ "title": "Lietransformer: Equivariant self-attention for lie groups.",
336
+ "author": "Michael J Hutchinson, Charline Le Lan, Sheheryar Zaidi, Emilien Dupont,\nYee Whye Teh, and Hyunjik Kim.",
337
+ "venue": "In Marina Meila and Tong Zhang, editors, Proceedings of the\n38th International Conference on Machine Learning, volume 139 of\nProceedings of Machine Learning Research, pages 4533\u20134543. PMLR,\n18\u201324 Jul 2021a.",
338
+ "url": null
339
+ }
340
+ },
341
+ {
342
+ "21": {
343
+ "title": "Lietransformer: Equivariant self-attention for lie groups.",
344
+ "author": "Michael J Hutchinson, Charline Le Lan, Sheheryar Zaidi, Emilien Dupont,\nYee Whye Teh, and Hyunjik Kim.",
345
+ "venue": "In Marina Meila and Tong Zhang, editors, Proceedings of the\n38th International Conference on Machine Learning, volume 139 of\nProceedings of Machine Learning Research, pages 4533\u20134543. PMLR,\n18\u201324 Jul 2021b.",
346
+ "url": null
347
+ }
348
+ },
349
+ {
350
+ "22": {
351
+ "title": "On approximate isometries.",
352
+ "author": "D. H. Hyers and S. M. Ulam.",
353
+ "venue": "Bulletin of the American Mathematical Society, 51(4):288\u2013292, 1945.",
354
+ "url": null
355
+ }
356
+ },
357
+ {
358
+ "23": {
359
+ "title": "Gromov\u2013hausdorff convergence and volumes of manifolds.",
360
+ "author": "SV Ivanov.",
361
+ "venue": "Algebra i Analiz, 9(5):65\u201383, 1997.",
362
+ "url": null
363
+ }
364
+ },
365
+ {
366
+ "24": {
367
+ "title": "Group Theoretical Methods in Machine Learning.",
368
+ "author": "Imre Risi Kondor.",
369
+ "venue": "PhD thesis, USA, 2008.",
370
+ "url": null
371
+ }
372
+ },
373
+ {
374
+ "25": {
375
+ "title": "On the generalization of equivariance and convolution in neural\nnetworks to the action of compact groups.",
376
+ "author": "Risi Kondor and Shubhendu Trivedi.",
377
+ "venue": "In Jennifer Dy and Andreas Krause, editors, Proceedings of the\n35th International Conference on Machine Learning, volume 80 of\nProceedings of Machine Learning Research, pages 2747\u20132755. PMLR,\n10\u201315 Jul 2018.",
378
+ "url": null
379
+ }
380
+ },
381
+ {
382
+ "26": {
383
+ "title": "Roto-translation equivariant convolutional networks: Application to\nhistopathology image analysis.",
384
+ "author": "Maxime W. Lafarge, Erik J. Bekkers, Josien P. W. Pluim, Remco Duits, and Mitko\nVeta.",
385
+ "venue": "CoRR, abs/2002.08725, 2020.",
386
+ "url": null
387
+ }
388
+ },
389
+ {
390
+ "27": {
391
+ "title": "Learning polynomial problems with sl(2)-equivariance, 2023.",
392
+ "author": "Hannah Lawrence and Mitchell Tong Harris.",
393
+ "venue": "URL https://openreview.net/pdf?id=mRr53KWuf1.",
394
+ "url": null
395
+ }
396
+ },
397
+ {
398
+ "28": {
399
+ "title": "Introduction to Smooth Manifolds.",
400
+ "author": "John M. Lee.",
401
+ "venue": "Springer New York, New York, NY, 2003.",
402
+ "url": null
403
+ }
404
+ },
405
+ {
406
+ "29": {
407
+ "title": "Introduction to Riemannian Manifolds.",
408
+ "author": "John M. Lee.",
409
+ "venue": "Springer International Publishing, Cham, 2018.",
410
+ "url": null
411
+ }
412
+ },
413
+ {
414
+ "30": {
415
+ "title": "Equiformer: Equivariant graph attention transformer for 3d atomistic\ngraphs.",
416
+ "author": "Yi-Lun Liao and Tess Smidt.",
417
+ "venue": "In International Conference on Learning Representations, 2023.",
418
+ "url": null
419
+ }
420
+ },
421
+ {
422
+ "31": {
423
+ "title": "Enabling equivariance for arbitrary lie groups.",
424
+ "author": "Lachlan E. MacDonald, Sameera Ramasinghe, and Simon Lucey.",
425
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision\nand Pattern Recognition (CVPR), pages 8183\u20138192, June 2022.",
426
+ "url": null
427
+ }
428
+ },
429
+ {
430
+ "32": {
431
+ "title": "Understanding the haar measure.",
432
+ "author": "Olivia Di Matteo.",
433
+ "venue": "https://pennylane.ai/qml/demos/tutorial_haar_measure, 02 2021.",
434
+ "url": null
435
+ }
436
+ },
437
+ {
438
+ "33": {
439
+ "title": "On equivariant isometric embeddings.",
440
+ "author": "John Douglas Moore and Roger Schlafly.",
441
+ "venue": "Mathematische Zeitschrift, 173(2):119\u2013133, 1980.",
442
+ "url": null
443
+ }
444
+ },
445
+ {
446
+ "34": {
447
+ "title": "The group of isometries of a riemannian manifold.",
448
+ "author": "S. B. Myers and N. E. Steenrod.",
449
+ "venue": "Annals of Mathematics, 40(2):400\u2013416,\n1939.",
450
+ "url": null
451
+ }
452
+ },
453
+ {
454
+ "35": {
455
+ "title": "C1 isometric imbeddings.",
456
+ "author": "John Nash.",
457
+ "venue": "Annals of Mathematics, 60(3):383\u2013396,\n1954.",
458
+ "url": null
459
+ }
460
+ },
461
+ {
462
+ "36": {
463
+ "title": "Approximation-generalization trade-offs under (approximate) group\nequivariance, 2023.",
464
+ "author": "Mircea Petrache and Shubhendu Trivedi.",
465
+ "venue": null,
466
+ "url": null
467
+ }
468
+ },
469
+ {
470
+ "37": {
471
+ "title": "Chapter 2 - ulam stability of operators in normed spaces.",
472
+ "author": "Themistocles M. Rassias, Janusz Brzdkek, Dorian Popa, Ioan Racsa, and Bing Xu.",
473
+ "venue": "In Themistocles M. Rassias, Janusz Brzdkek, Dorian Popa, Ioan Racsa,\nand Bing Xu, editors, Ulam Stability of Operators, Mathematical\nAnalysis and its Applications, pages 33\u201368. Academic Press, 2018.",
474
+ "url": null
475
+ }
476
+ },
477
+ {
478
+ "38": {
479
+ "title": "Learning partial equivariances from data.",
480
+ "author": "David W. Romero and Suhas Lohit.",
481
+ "venue": "In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh,\neditors, Advances in Neural Information Processing Systems, volume 35,\npages 36466\u201336478. Curran Associates, Inc., 2022.",
482
+ "url": null
483
+ }
484
+ },
485
+ {
486
+ "39": {
487
+ "title": "E(n) equivariant graph neural networks.",
488
+ "author": "V\u00edctor Garcia Satorras, Emiel Hoogeboom, and Max Welling.",
489
+ "venue": "In Marina Meila and Tong Zhang, editors, Proceedings of the\n38th International Conference on Machine Learning, volume 139 of\nProceedings of Machine Learning Research, pages 9323\u20139332. PMLR,\n18\u201324 Jul 2021.",
490
+ "url": null
491
+ }
492
+ },
493
+ {
494
+ "40": {
495
+ "title": "A collection of mathematical problems.",
496
+ "author": "Stanislaw M. Ulam.",
497
+ "venue": "Interscience Publishers, 1960.",
498
+ "url": null
499
+ }
500
+ },
501
+ {
502
+ "41": {
503
+ "title": "Isometric approximation property in euclidean spaces.",
504
+ "author": "Jussi V\u00e4is\u00e4l\u00e4.",
505
+ "venue": "Israel Journal of Mathematics, 128(1):1\u201327, 2002.",
506
+ "url": null
507
+ }
508
+ },
509
+ {
510
+ "42": {
511
+ "title": "Relaxing equivariance constraints with non-stationary continuous\nfilters.",
512
+ "author": "Tycho F.A. van der Ouderaa, David W. Romero, and Mark van der Wilk.",
513
+ "venue": "In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho,\neditors, Advances in Neural Information Processing Systems, 2022.",
514
+ "url": null
515
+ }
516
+ },
517
+ {
518
+ "43": {
519
+ "title": "A general theory of correct, incorrect, and extrinsic equivariance.",
520
+ "author": "Dian Wang, Xupeng Zhu, Jung Yeon Park, Mingxi Jia, Guanang Su, Robert Platt,\nand Robin Walters.",
521
+ "venue": "In Thirty-seventh Conference on Neural Information Processing\nSystems, 2023.",
522
+ "url": null
523
+ }
524
+ },
525
+ {
526
+ "44": {
527
+ "title": "Approximately equivariant networks for imperfectly symmetric\ndynamics.",
528
+ "author": "Rui Wang, Robin Walters, and Rose Yu.",
529
+ "venue": "In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari,\nGang Niu, and Sivan Sabato, editors, Proceedings of the 39th\nInternational Conference on Machine Learning, volume 162 of\nProceedings of Machine Learning Research, pages 23078\u201323091. PMLR,\n17\u201323 Jul 2022a.",
530
+ "url": null
531
+ }
532
+ },
533
+ {
534
+ "45": {
535
+ "title": "Data augmentation vs. equivariant networks: A theory of\ngeneralization on dynamics forecasting, 2022b.",
536
+ "author": "Rui Wang, Robin Walters, and Rose Yu.",
537
+ "venue": null,
538
+ "url": null
539
+ }
540
+ },
541
+ {
542
+ "46": {
543
+ "title": "General E(2)-Equivariant Steerable CNNs.",
544
+ "author": "Maurice Weiler and Gabriele Cesa.",
545
+ "venue": "In Conference on Neural Information Processing Systems\n(NeurIPS), 2019.",
546
+ "url": null
547
+ }
548
+ },
549
+ {
550
+ "47": {
551
+ "title": "Coordinate independent convolutional networks - isometry and gauge\nequivariant convolutions on riemannian manifolds.",
552
+ "author": "Maurice Weiler, Patrick Forr\u00e9, Erik Verlinde, and Max Welling.",
553
+ "venue": "CoRR, abs/2106.06020, 2021.",
554
+ "url": null
555
+ }
556
+ },
557
+ {
558
+ "48": {
559
+ "title": "The Extension Problem for Contractions and Isometries, pages\n46\u201375.",
560
+ "author": "J.H. Wells and L.R. Williams.",
561
+ "venue": "Springer Berlin Heidelberg, 1975.",
562
+ "url": null
563
+ }
564
+ },
565
+ {
566
+ "49": {
567
+ "title": "Deep scale-spaces: Equivariance over scale.",
568
+ "author": "Daniel Worrall and Max Welling.",
569
+ "venue": "In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch\u00e9-Buc, E. Fox, and R. Garnett, editors, Advances in Neural\nInformation Processing Systems, volume 32. Curran Associates, Inc., 2019.",
570
+ "url": null
571
+ }
572
+ },
573
+ {
574
+ "50": {
575
+ "title": "Group equivariant subsampling.",
576
+ "author": "Jin Xu, Hyunjik Kim, Thomas Rainforth, and Yee Teh.",
577
+ "venue": "In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman\nVaughan, editors, Advances in Neural Information Processing Systems,\nvolume 34, pages 5934\u20135946. Curran Associates, Inc., 2021.",
578
+ "url": null
579
+ }
580
+ },
581
+ {
582
+ "51": {
583
+ "title": "Towards a better understanding of reverse-complement equivariance for\ndeep learning models in genomics.",
584
+ "author": "Hannah Zhou, Avanti Shrikumar, and Anshul Kundaje.",
585
+ "venue": "In David A. Knowles, Sara Mostafavi, and Su-In Lee, editors,\nProceedings of the 16th Machine Learning in Computational Biology\nmeeting, volume 165 of Proceedings of Machine Learning Research,\npages 1\u201333. PMLR, 22\u201323 Nov 2022.",
586
+ "url": null
587
+ }
588
+ }
589
+ ],
590
+ "url": "http://arxiv.org/html/2310.13164v6"
591
+ }
20240620/2310.14414v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240620/2310.15903v4.json ADDED
The diff for this file is too large to render. See raw diff
 
20240620/2310.17467v4.json ADDED
@@ -0,0 +1,460 @@
1
+ {
2
+ "title": "The statistical thermodynamics of generative diffusion models: Phase transitions, symmetry breaking and critical instability",
3
+ "abstract": "Generative diffusion models have achieved spectacular performance in many areas of machine learning and generative modeling. While the fundamental ideas behind these models come from non-equilibrium physics, variational inference and stochastic calculus, in this paper we show that many aspects of these models can be understood using the tools of equilibrium statistical mechanics. Using this reformulation, we show that generative diffusion models undergo second-order phase transitions corresponding to symmetry breaking phenomena. We show that these phase-transitions are always in a mean-field universality class, as they are the result of a self-consistency condition in the generative dynamics. We argue that the critical instability that arises from the phase transitions lies at the heart of their generative capabilities, which are characterized by a set of mean-field critical exponents. Finally, we show that the dynamic equation of the generative process can be interpreted as a stochastic adiabatic transformation that minimizes the free energy while keeping the system in thermal equilibrium.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Generative modeling is a sub-field of machine learning concerned with the automatic generation of structured data such as images, videos and written language (Bond-Taylor et al.,, 2021 ###reference_b8###). Generative diffusion models (Sohl-Dickstein et al.,, 2015 ###reference_b32###), also known as score-based models, form a class of deep generative models that have demonstrated high performance in image (Ho et al.,, 2020 ###reference_b13###; Song et al., 2021b, ###reference_b34###), sound (Chen et al.,, 2020 ###reference_b9###; Kong et al.,, 2020 ###reference_b18###; Liu et al.,, 2023 ###reference_b23###) and video generation (Ho et al.,, 2022 ###reference_b14###; Singer et al.,, 2022 ###reference_b31###). Diffusion models were first introduced in analogy with the physics of non-equilibrium statistical physics. The fundamental idea is to formalize generation as the probabilistic inverse of a forward stochastic process that gradually turns the target distribution into a simple base distribution such as Gaussian white noise (Sohl-Dickstein et al.,, 2015 ###reference_b32###; Song et al., 2021a, ###reference_b33###). Recently, several works suggested that many of the dynamical properties of generative diffusion models can be understood using concepts such as spontaneous symmetry breaking (Raya and Ambrogioni,, 2023 ###reference_b29###; Biroli and M\u00e9zard,, 2023 ###reference_b7###; Biroli et al.,, 2024 ###reference_b6###), and phase transitions (Biroli et al.,, 2024 ###reference_b6###; Sclocchi et al.,, 2024 ###reference_b30###). These theoretical and experimental results suggest a deep connection between generative diffusion and equilibrium phenomena.\nIn this paper, we outline a conceptual reformulation of generative diffusion models in the language of equilibrium statistical physics. We begin by defining a family of Boltzmann distributions over the noise-free states, which are interpreted as (unobservable) microstates during the diffusion process. In this picture, the Boltzmann weights are given by the conditional distributions of the noiseless data given the noisy state. We obtain a self-consistent equation of state for the system, which corresponds to the fixed-point equation of the generative dynamics. Moreover, we show that generative diffusion models can undergo second-order phase transitions of the mean-field type, corresponding the the generative spontaneous symmetry breaking phenomena fist discussed in (Raya and Ambrogioni,, 2023 ###reference_b29###) and further studied in Biroli et al., (2024 ###reference_b6###); Li and Chen, (2024 ###reference_b22###) and in Sclocchi et al., (2024 ###reference_b30###). Finally, we show that this mean-field theory can be seen as the thermodynamic limit of a multi-site system of coupled replicas. Based on this results, we derive a variant of the generative diffusion equations as the Brownian dynamics of a \u2019particle\u2019 coupled on a large densely connected systems of replicated microstates, which offers a possible generalization of diffusion models beyond mean-field theory."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Contributions and Related Work",
15
+ "text": "The main novel theoretical contributions of this paper are in the characterization of mean-field critical phase transitions in generative diffusion models, and its extension beyond mean-field theory. While the paper contains novel results, its aim is also pedagogical, as we wish to provide a self-consistent introduction for physicists to the study of generative diffusion. As such, we report known formulas and results from the literature, including the analysis scheme used in (Lucibello and M\u00e9zard,, 2024 ###reference_b24###) and Biroli et al., (2024 ###reference_b6###) for the analysis of memorization phenomena and the equivalence results for modern Hopfield networks given in (Ambrogioni,, 2023 ###reference_b3###). Several of these formulas can also be found in recent work on stochastic localization (El A. et al.,, 2022 ###reference_b11###; Huang et al.,, 2024 ###reference_b17###), which has been shown to offer an elegant generalization of generative diffusion processes (Montanari,, 2023 ###reference_b27###; Benton et al.,, 2024 ###reference_b5###; Alaoui et al.,, 2023 ###reference_b2###). In particular, the Boltzmann distributions given in Eq. 6 ###reference_### is equivalent to the tilted distributions given in (El A. et al.,, 2022 ###reference_b11###)."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Preliminaries on generative diffusion models",
21
+ "text": "The goal of diffusion modeling is to sample from a potentially very complex target distribution , which we model as the initial initial boundary condition of a (forward) stochastic process that removes structure by injecting white noise. In order to simplify the derivations, here we assume the forward process to be a mathematical Brownian motion. Other forward processes are more commonly used in the applied literature, such as the variance preserving process (e.g. a non-stationary Olsten-Uhlenbeck process) (Song et al., 2021b, ###reference_b34###). However, most of the qualitative thermodynamic properties are shared between these models. The mathematical Brownian motion is defined by the following Langevin equation:\nwhere is an infinitesimal increment, is the instantaneous standard deviation of the stochastic input and is a standard Gaussian white noise process. The marginal probabilities defined by Eq. 1 ###reference_### with as initial boundary condition can be expressed analytically as follows:\nwhere the expectation is taken with respect to the target distribution . A generative model can then be obtained by \"inverting\" Eq. 1 ###reference_###. The inverse equation is\nwhich can be shown to give the same marginal distributions in Eq. 2 ###reference_### if the process is initialized with appropriately scaled white-noise (Anderson,, 1982 ###reference_b4###). The function is known as the score in the literature. If the score is available for all values of and , we can then sample from by integrating Eq. 3 ###reference_### using numerical methods."
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "Training diffusion models as denoising autoencoders",
27
+ "text": "###figure_1### While the score of the target distribution is generally not available analytically, a deep network can be trained to approximate it using a from a large set of samples (Song et al., 2021a, ###reference_b33###). We refer to the network as a vector valued function . Deep networks are parameterized by a large number of weights and biases. However, since we are not interested in the details of the specific parameterization, here we will report the functional loss:\nwhere is a cumulative distribution with support in and is sampled conditional on using the propagator of the forward Langevin equation. Note that is simply the total noise added up to time , which implies that the network learns how to predict the noise that corrupted the input data. The score function can then be obtained from the optimized network using the following formula (Anderson,, 1982 ###reference_b4###):\nIn other words, the score is proportional to the optimal estimate of the noise given the noise-corrupted state. Therefore, once the network is trained to minimize Eq. 4 ###reference_###, synthetic samples can be generated by sampling from the boundary noise, computing the score using Eq. 5 ###reference_### and integrating backward Eq. 3 ###reference_### using numerical methods. An example of this generative dynamics for a network trained on natural images is shown in Fig. 1 ###reference_###."
28
+ },
29
+ {
30
+ "section_id": "4",
31
+ "parent_section_id": null,
32
+ "section_name": "Diffusion models as systems in equilibrium",
33
+ "text": "The starting point for any model in statistical mechanics is the definition of the relevant microstates. In statistical physics, the microstate of a system is usually assumed to be an unobservable quantity. Given that we can observe a noise-corrupted data , in a diffusion model the most obvious unobservable quantity of interest is the noise-free initial state . The next step is to define a Hamiltonian function on the set of microstates. We can do this by considering the conditional probability of the data given a noisy state :\nwhich we can interpret as a Boltzmann distribution over the microstates with Hamiltonian\nand partition function\nThe statistical properties of this ensemble determine the score function, which can be expressed as a Boltzmann average:\nwhere and\nIntuitively, this equation tells us that the score vector directs the system towards the posterior average . As we shall see, studying the thermodynamics determined by these weights will allow us to understand several important qualitative and quantitative features of the generative dynamics. For example, as we will see in later sections, after a \u2019condensation\u2019 phase transition the score will only depend on a small number of data-points, which can be detected by studying the concentration of the weights on a sub-exponential number of microstates.\nThe thermodynamic system defined by Eq. 6 ###reference_### does not have a true temperature parameter. However, the quantity plays a very similar role to temperature in classical statistical mechanics. Moreover, in the Hamiltonian given by Eq .39 ###reference_###, the dynamic variable is analogous to the external field term in magnetic systems, which can bias the distribution of microstates towards the patterns \u2019aligned\u2019 in its direction. We can imagine as being a \"slower\" thermodynamic variable that interacts (adiabatically) with the statistics of the microstates."
34
+ },
35
+ {
36
+ "section_id": "4.1",
37
+ "parent_section_id": "4",
38
+ "section_name": "Example 1: Two deltas",
39
+ "text": "Most of the complexity of the generative dynamics comes from the target distribution . However, simple toy models can be used to draw general insights that often generalize to complex target distributions. A simple but informative example is given by the following target\nwhere is equal to either or with probability . Assuming the binary constraint, this results in the following diffusion Hamilton\nand the partition function\nwhere . Note that this is the same partition function of the Curie-Weiss model of ferromagnetism, which suggests a connection with mean-field theory."
40
+ },
41
+ {
42
+ "section_id": "4.2",
43
+ "parent_section_id": "4",
44
+ "section_name": "Example 2: Discrete dataset",
45
+ "text": "In real application, generative diffusion models are trained on a large but finite dataset . Sampling from this dataset correspond to the target distribution\nIf the data-points are all normalized so as to have norm equal to one, this results in the partition function\nThis partition function will play a central role in the random-energy analysis of the model, which can be used to study the finite sample thermodynamics."
46
+ },
47
+ {
48
+ "section_id": "4.3",
49
+ "parent_section_id": "4",
50
+ "section_name": "Example 3: Hyper-spherical manifold",
51
+ "text": "Since datasets are always finite, in practice every trained generative diffusion model corresponds to the discrete model outlined in the previous subsection. However, fitting the dataset exactly leads to a model that can only reproduce the memorized training data. Instead, the hope is that the trained network will generalize and interpolate the samples, thereby approximately recovering the true distribution of the sampled data. Very often, this distribution will span a lower-dimensional manifold embedded in the ambient space.\nA simple toy model of data defined in a manifold is the hyper-spherical model introduced in (Raya and Ambrogioni,, 2023 ###reference_b29###):\nwhere denotes a -dimensional hyper-sphere centered at zero with volume . The \"two delta\" model is a special case of this model for an ambient dimension equal to one. As we will see in further section, this data distribution is very tractable in the infinite dimensional (i.e. thermodynamic) limit as it converges to a distribution of normalized Gaussian variables, which removes the quadratic terms in the Hamiltonian."
52
+ },
53
+ {
54
+ "section_id": "4.4",
55
+ "parent_section_id": "4",
56
+ "section_name": "Example 4: Diffused Ising model",
57
+ "text": "While most of the formulas presented in this manuscript have a very close analogy with formulas in statistical physics, there are some subtle interpretative differences that could create confusion in the reader. To clarify these issues, we will discuss the diffused Ising model, which will provide a bridge between the two views. Consider a diffusion model with a target distribution supported on -dimensional vectors with entries in the set . The log-probability of the target distribution is defined by the following formula:\nwhere W is a symmetric coupling matrix, is a temperature parameter and is a constant. Up to constants, this is of course the log-probability of an Ising model without the external field term. From Eq. 39 ###reference_###, up to constant terms, we obtain the following Hamiltonian for the diffusion model:\nwhich is almost identical to the Hamiltonian of an Ising model coupled to a location-dependent external field . Nevertheless, the quantity , which we loosely interpreted as a \"inverse temperature\", does not divide the coupling part of the Hamiltonian, which results in a radically different behavior. In fact, only modulates the susceptibility to the field term and it therefore does not radically alter the phase of the model, which depends on the Ising temperature parameter . Instead, the interesting phase transition of the diffusion models is a consequence of the self-consistency relation in Eq. 22 ###reference_###, which characterizes the branching of the fixed-points of the generative stochastic dynamics. From the point of view of statistical physics, Eq. 22 ###reference_### can be seen as the result of a mean-field approximation, where the average magnetization is coupled to the external field. However, it is important to keep in mind that, in a diffusion model, this mean-field approach does not represent the coupling between individual sites, which, as Eq. 18 ###reference_### shows, are instead statistically coupled by the interaction terms in the Hamiltonian. Instead, it can be seen as an idealized mean-field interaction between infinitely many copies of the whole system. In general, the value of will change the properties of the diffusion model, as the system transitions from its low temperature to its high temperature phase. The dependency of the diffusion dynamics on this transition have been studied in Biroli and M\u00e9zard, (2023 ###reference_b7###)."
58
+ },
59
+ {
60
+ "section_id": "5",
61
+ "parent_section_id": null,
62
+ "section_name": "Free energy, magnetization and order parameters",
63
+ "text": "###figure_2### Using our interpretation of as the inverse temperature parameter, we can define the Helmholtz free energy as follows\nThe expected value of the pattern given can then be expressed the gradient of the free energy with respect to :\nThis formula suggests an analogy between diffusion models and magnetic systems in statistical physics. The noisy state can be interpreted as an external magnetic field, which induces the state of \"magnetization\" . In this analogy, a diffusion model is magnetized when its distribution is biased towards a sub-set of the possible microstates.\nIn physics, the \u2019external field\u2019 variable is usually assumed to be controlled by the experimenter. On the other hand, in generative diffusion models is a dynamic variable that, under the reversed dynamics, is itself attracted towards by the drift term:\nIn other words, if we ignore the effect of the dispersion term, the state of the system is driven towards self-consistent points where is equal to . It is therefore interesting to study the self-consistency equation\nwhich defines the self-consistent solutions where the state is identical to the expected value. In the equation, we introduced a perturbation term , which will allow us to study how the systems react to perturbations. For , the equation can be equivalently re-expressed as the fixed point equation of the reversed drift:\nFor , this equation admits the single \"trivial\" solution , where denotes expectation with respect to the target distribution . In analogy with magnetic systems, we can interpret as an order parameter and this equation as a thermodynamic equation of state. This analogy suggests that can be interpreted as a \u2019spontaneous magnetization\u2019 of the system. From this point of view, we can conceptualize the generative process as a form of self-consistent spontaneous symmetry breaking, where the system aligns with one of the many possible target points. In the following sections, we will formalize this insight by characterizing the critical behavior of this system.\nReaders familiar with statistical physics will recognize that Eq. 22 ###reference_### is formally identical to the self-consistency conditions used in the mean-field approximation, where the external field term in one location is assumed to be determined by the magnetization of all other locations. However, it is important to note that in the case of a diffusion model, this self-consistent coupling is not approximate, as it is a natural consequence of the dynamics. Nevertheless, the formal analogy implies that the thermodynamics of generative diffusion models is formally identical to the thermodynamics of mean-field models."
64
+ },
65
+ {
66
+ "section_id": "5.1",
67
+ "parent_section_id": "5",
68
+ "section_name": "The susceptibility matrix",
69
+ "text": "In the physics of magnetic systems, the magnetic susceptibility matrix determines how much the different magnetization components are sensitive to the components of the external magnetic field. Similarly, in diffusion models we can define a susceptibility matrix:\nwhich tells us how sensitive the expected value is to changes in the noisy-state . The susceptibility matrix is helpful in interpreting the dynamics of the generative denoising process as it informs us on how random fluctuations in each component of the state are propagated to the other components. For example, in the context of image generation, a random fluctuation of \"green\" at the bottom of an image can propagate to the rest, originating the image of a forest.\nThe susceptibility matrix can be re-expressed in temrms of connected correlation matrix (i.e. the covariance matrix) of the microstates under the Boltzmann distribution\nWe can now express the Jacobi matrix of the score function as follows\n###figure_3###"
70
+ },
71
+ {
72
+ "section_id": "6",
73
+ "parent_section_id": null,
74
+ "section_name": "Phase transitions and symmetry breaking",
75
+ "text": "A spontaneous symmetry breaking happens when the trivial solution for the order parameter branches into multiple solutions at some critical time of . This corresponds to the onset of multi-modality in the regularized free energy , as visualized in Fig. 3 ###reference_### for a \"four deltas\" model. In thermodynamic systems, the symmetry breaking corresponds to a (second order) phase transition, which in this case can be detected by the divergence of several state variables around the critical point.\nThe presence of one (or multiple) phase transitions in diffusion models depends on the target distribution . The simplest example is given by the \"two deltas\", which corresponds to the process visualized in Fig. 2 ###reference_### a. Using this distribution, we obtain the self-consistency equation\nThe solutions of this equation are shown in Fig. 2 ###reference_### b, together with the gradient of the regularized free energy, where we can see the branching of the solutions and the singular behavior around a critical point. Eq. 27 ###reference_### is identical to the mean-field self-consistency equation of an Ising model, from which we can deduce that the critical scaling of this simple generative diffusion models shares its universality class. For example, by Taylor expansion of Eq.27 ###reference_### around , we see that\nwith , which is valid for smaller than ."
76
+ },
77
+ {
78
+ "section_id": "6.1",
79
+ "parent_section_id": "6",
80
+ "section_name": "Generation and critical instability",
81
+ "text": "As shown in (Raya and Ambrogioni,, 2023 ###reference_b29###), spontaneous symmetry breaking phenomena play a central role in the generative dynamics of diffusion models. Consider the simple \"two deltas\" model. For , the dynamics is mean-reverting towards a unique fixed-point . Around , the order parameter splits into thee \"branches\", an unstable one corresponding to the mean of the target distribution and two stable ones corresponding to the two target points. Importantly, at the critical point the susceptibility defined in Eq. 24 ###reference_### diverges, implying that the system becomes extremely reactive to fluctuations in the noise. This instability is determined by the critical exponents and and , defined by the relations\nand\nNote that, in the general case the critical exponents can be different for different coordinates and matrix entries. These divergences give rise to something we refer to as critical generative instability. We conjecture that the diversity of the generated samples crucially depends on a proper sampling of this critical region."
82
+ },
83
+ {
84
+ "section_id": "7",
85
+ "parent_section_id": null,
86
+ "section_name": "Generation as as an adiabatic free energy descent process",
87
+ "text": "So far, we characterized the thermodynamic state of diffusion model at time by its Boltzmann distribution. The dynamics of the system can now be recovered as a form of (stochastic) free energy minimization:\nwhere is the free-energy plus a free potential term:\nwhere . This can be seen as a form of adiabatic approximation, where the dynamics of the \u2019slow\u2019 variable is obtained by assuming that the system is maintained in thermal equilibrium along the diffusion trajectory. The symmetry breaking can now be detected as a change of shape in the regularized free energy, which transitions from a convex shape with a single global minimum to a more complex shape with potentially several meta-stable points (see Fig. 3 ###reference_###). The reformulation of the dynamics in term of the gradient of a free energy allows us to interpret generative diffusion models as a kind of energy-based machine learning models (LeCun et al.,, 2006 ###reference_b21###), as discussed in (Hoover et al.,, 2023 ###reference_b15###) and (Ambrogioni,, 2023 ###reference_b3###). The main difference is that the (free) energy is not learned directly but it is instead implicit in the learned score function. The connection suggests potential connection with the free energy principle in theoretical neuroscience, which is used to characterize the stochastic dynamics of biological neural systems (Friston,, 2010 ###reference_b12###)."
88
+ },
89
+ {
90
+ "section_id": "8",
91
+ "parent_section_id": null,
92
+ "section_name": "Beyond mean-field theory: A multi-site \u2019generative bath\u2019 model",
93
+ "text": "There results given in the previous sections suggest that generative diffusion models can be seen as a mean-field limit of a model with replicated microstates on \u2019sites\u2019 coupled through long-range interactions. We denote the microstate in the -th site as . Consider the following multi-site Hamiltonian:\nwhere denotes a multi-site configuration and is coupling weight parameter and is an external field term. The re-normalized marginal energy is defined to fulfil the following constraint:\nIn general, we have that when the possible microstates have a constant euclidean norm, while it will involve additional re-normalization terms in the general case. This condition ensures that the multi site coupling only affects the correlation between different sites (leading to perfect alignment for ) while preserving the limiting marginal distributions. We refer to this thermodynamic system as a generative bath.\nIn the model, different replications of the microstates (i.e. the noise-free data) exert mutual attractive couplings. Generation can be seen as a spontaneous symmetry breaking that the system undergoes when the temperature (i.e. the time) decreases, since at low temperatures all the microstates in all sites will align on the same pattern, resulting in a coherent observable average\nIn the thermodynamic limit (), the model converges to the self-consistent mean-field model discussed in the previous sections. This allows us to conceptualize the self-consistency condition implicit in the fixed-point equation of generative diffusion model as the result of an ideal multi-site coupling, which could result in new forms of neural implementations. This conceptualization opens the door for possible non-mean-field generalizations of generative diffusion characterized by short-range interactions, or disordered generalizations with random interactions. However, it is not clear if these extensions will have practical value.\nDepending on the choice of the target distribution , the coupled model in Eq. 35 ###reference_### reduces to well known models in statistical mechanics such as the fully connected Ising model for the two-deltas distribution and the classical Heisenberg model for spherical distributions. However, the model becomes substantially more complex under more realistic distributions of the data."
94
+ },
95
+ {
96
+ "section_id": "8.1",
97
+ "parent_section_id": "8",
98
+ "section_name": "Brownian dynamics in a \u2019generative bath\u2019",
99
+ "text": "The Hamiltonian defined in Eq. 35 ###reference_### specifies an equilibrium system that, in the thermodynamic limit, shares the same self-consistent criticality of the fixed-point equation of generative diffusion models. In this section, we will derive from first principles a generative stochastic dynamics similar to the generative equation given in Eq. 3 ###reference_###. The idea is to consider a \u2019Brownian particle\u2019 coupled to the multi-site system of microstates. We define the random force as\nwhere is the number of sites that are coupled to . In contrast to the distribution in Eq. 6 ###reference_###, we assume that does not exert any effect on the equilibrium systems itself, which instead undergoes symmetry breaking events due to its own internal coupling between sites. In other word, in this formulation the state is now passively controlled by the statistical fluctuations in the equilibrium system.\nIf we assume that the force in Eq. 36 ###reference_### is applied at each infinitesimal time interval and that its time-scale is much faster that the motion of , the Brownian dynamics follows the following (reversed) Langevin equation\nwhere is a matrix square root of the pure state covariance matrix:\nThe scaling in the differential are introduced to have the reversed diffusion ends at the finite time , which is equivalent to a logarithmic change of coordinate in the time variable.\nThe Boltzmann expectation is taken with respect to the multi-site ensemble given in Eq. 35 ###reference_### in the limit of a vanishing external field alligned to . This is done in order to isolate the appropriate \u2019pure state\u2019 from the Boltzmann average, since after a spontaneous symmetry breaking only one branch of the distribution should affect the particle. In fact, after a symmetry breaking phase transition, the Boltzmann distribution splits into two or more modes corresponding to the possible states with broken symmetry."
100
+ },
101
+ {
102
+ "section_id": "8.2",
103
+ "parent_section_id": "8",
104
+ "section_name": "The two delta model revisited",
105
+ "text": "In the \"two deltas\" model, the Hamiltonian of the generative bath\" is just\nwith the restriction that that . This is simply the Hamiltonian of a fully connected Ising model with uniform coupling weights. In the thermodynamic limit , the model therefore reduces to the mean-field Curie-Weiss model that we have already discussed. In this case, the pure state magnetisation are the stable solutions of the self-consistency equation , which is identically equal to zero for and it has two branches that cannot be expressed in closed-form in the low-temperature regime. The instantaneous variance of the Brownian generative dynamics is given by , which is equal to in the high temperature phase and to in the low temperature phase. Note that the variance diverges at due to the critical phase transition and that it vanishes for as the system fully aligns in one of the two possible pure states."
106
+ },
107
+ {
108
+ "section_id": "9",
109
+ "parent_section_id": null,
110
+ "section_name": "Associative memory and Hopfield networks",
111
+ "text": "We will now move back to the standard mean-field formulation of generative diffusion and discuss its connection with associative memory networks. Associative memory networks are energy-based learning systems that can store patterns (i.e. memories) as meta-stable states of a parameterized energy function (Hopfield,, 1982 ###reference_b16###; Abu-Mostafa and Jacques,, 1985 ###reference_b1###; Krotov,, 2023 ###reference_b19###). There is a substantial body of literature on the thermodynamic properties of associative memory networks (Strandburg et al.,, 1992 ###reference_b35###; Volk,, 1998 ###reference_b36###; Marullo and Agliari,, 2020 ###reference_b25###). The original associative memory networks, also known as Hopfield networks, are defined by the energy function under the constraints of binary entries for the state vector. In a Hopfield network, a finite number of training patterns are encoded into a weight matrix , which usually gives the correct minima when the number of patterns is on the order of the dimensionality. Associative memory networks can reach much higher capacity by using exponential energy function (Krotov and Hopfield,, 2016 ###reference_b20###; Demircigil et al.,, 2017 ###reference_b10###; Krotov,, 2023 ###reference_b19###). For example, (Ramsauer et al.,, 2021 ###reference_b28###) introduces the use of the following function\nwhich can be proven to provide exponential scaling of the capacity and it is related to the transformers architectures used in large language models (Ramsauer et al.,, 2021 ###reference_b28###). By inspection of Eq. 40 ###reference_###, we can see that this energy function is equivalent to the regularized Helmholtz free energy of a diffusion models trained on a mixture of delta distributions (Ambrogioni,, 2023 ###reference_b3###):\nwhich gives a free energy with the same fixed-point structure of Eq. 40 ###reference_### at the zero temperature limit. Note that, while the dynamics of a diffusion model does not necessarily act as an optimizer in the general case, the free energy is exactly optimized when is a sum of delta functions, making the dynamics of the model exactly equivalent to the optimization of Eq. 40 ###reference_### for . Given this connection, most of the results presented in this paper can be re-stated for associative memory networks. However, generative diffusion models are more general as they can target arbitrary mixtures of continuous and singular distributions. As we shall see in the next section, the modern Hopfield Hamiltonian plays a crucial role in studying finite sample effects such as data memorization (Lucibello and M\u00e9zard,, 2024 ###reference_b24###)."
112
+ }
113
+ ],
114
+ "appendix": [],
115
+ "tables": {},
116
+ "image_paths": {
117
+ "1": {
118
+ "figure_path": "2310.17467v4_figure_1.png",
119
+ "caption": "Figure 1: Generative process for a digit taken from the MNIST dataset.",
120
+ "url": "http://arxiv.org/html/2310.17467v4/extracted/5680420/mnist_forward_process_4.png"
121
+ },
122
+ "2": {
123
+ "figure_path": "2310.17467v4_figure_2.png",
124
+ "caption": "Figure 2: Visualization of a phase transition in a simple diffusion model (two deltas). a) Order parameter paths and (regularized) free energy gradients. The dashed line denotes the critical value of \u03c3t=t\u2062\u03c30subscript\ud835\udf0e\ud835\udc61\ud835\udc61subscript\ud835\udf0e0\\sigma_{t}=\\sqrt{t}\\sigma_{0}italic_\u03c3 start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT = square-root start_ARG italic_t end_ARG italic_\u03c3 start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT. b) Forward process. The dashed line denotes time critical time.",
125
+ "url": "http://arxiv.org/html/2310.17467v4/x1.png"
126
+ },
127
+ "3": {
128
+ "figure_path": "2310.17467v4_figure_3.png",
129
+ "caption": "Figure 3: Negative free energy of a \"four delta\" 2d diffusion model for different values of the time variable. The target points are at (1,0)10(1,0)( 1 , 0 ), (0,1)01(0,1)( 0 , 1 ), (\u22121,0)10(-1,0)( - 1 , 0 ) and (0,\u22121)01(0,-1)( 0 , - 1 ).",
130
+ "url": "http://arxiv.org/html/2310.17467v4/x2.png"
131
+ },
132
+ "4(a)": {
133
+ "figure_path": "2310.17467v4_figure_4(a).png",
134
+ "caption": "(a) MNIST\nFigure 4: Figure taken from (Raya and Ambrogioni,, 2023). Analysis of the model\u2019s performance, as measured by FID scores, for different starting times using three different sampling methods: the normal DDPM sampler with decreasing time steps from T=1000\ud835\udc471000T=1000italic_T = 1000 to 0, and fast sampler DDIM and PSDM for 10 and 5 denoising steps. The vertical line corresponds to the maximum of the second derivative of the FID curve, which offers a rough estimate of the first bifurcation time. (e) Illustrates samples generation on Imagenet64, while progressively varying the starting time from 1000 to 100.",
135
+ "url": "http://arxiv.org/html/2310.17467v4/extracted/5680420/cifar10_late_starts__.png"
136
+ },
137
+ "4(b)": {
138
+ "figure_path": "2310.17467v4_figure_4(b).png",
139
+ "caption": "(a) MNIST\nFigure 4: Figure taken from (Raya and Ambrogioni,, 2023). Analysis of the model\u2019s performance, as measured by FID scores, for different starting times using three different sampling methods: the normal DDPM sampler with decreasing time steps from T=1000\ud835\udc471000T=1000italic_T = 1000 to 0, and fast sampler DDIM and PSDM for 10 and 5 denoising steps. The vertical line corresponds to the maximum of the second derivative of the FID curve, which offers a rough estimate of the first bifurcation time. (e) Illustrates samples generation on Imagenet64, while progressively varying the starting time from 1000 to 100.",
140
+ "url": "http://arxiv.org/html/2310.17467v4/extracted/5680420/mnist_late_starts_.png"
141
+ },
142
+ "4(c)": {
143
+ "figure_path": "2310.17467v4_figure_4(c).png",
144
+ "caption": "(b) CIFAR10\nFigure 4: Figure taken from (Raya and Ambrogioni,, 2023). Analysis of the model\u2019s performance, as measured by FID scores, for different starting times using three different sampling methods: the normal DDPM sampler with decreasing time steps from T=1000\ud835\udc471000T=1000italic_T = 1000 to 0, and fast sampler DDIM and PSDM for 10 and 5 denoising steps. The vertical line corresponds to the maximum of the second derivative of the FID curve, which offers a rough estimate of the first bifurcation time. (e) Illustrates samples generation on Imagenet64, while progressively varying the starting time from 1000 to 100.",
145
+ "url": "http://arxiv.org/html/2310.17467v4/extracted/5680420/cifar10_late_starts__.png"
146
+ },
147
+ "4(d)": {
148
+ "figure_path": "2310.17467v4_figure_4(d).png",
149
+ "caption": "(c) Imagenet64\nFigure 4: Figure taken from (Raya and Ambrogioni,, 2023). Analysis of the model\u2019s performance, as measured by FID scores, for different starting times using three different sampling methods: the normal DDPM sampler with decreasing time steps from T=1000\ud835\udc471000T=1000italic_T = 1000 to 0, and fast sampler DDIM and PSDM for 10 and 5 denoising steps. The vertical line corresponds to the maximum of the second derivative of the FID curve, which offers a rough estimate of the first bifurcation time. (e) Illustrates samples generation on Imagenet64, while progressively varying the starting time from 1000 to 100.",
150
+ "url": "http://arxiv.org/html/2310.17467v4/extracted/5680420/celeba64_late_starts_.png"
151
+ },
152
+ "4(e)": {
153
+ "figure_path": "2310.17467v4_figure_4(e).png",
154
+ "caption": "(c) Imagenet64\nFigure 4: Figure taken from (Raya and Ambrogioni,, 2023). Analysis of the model\u2019s performance, as measured by FID scores, for different starting times using three different sampling methods: the normal DDPM sampler with decreasing time steps from T=1000\ud835\udc471000T=1000italic_T = 1000 to 0, and fast sampler DDIM and PSDM for 10 and 5 denoising steps. The vertical line corresponds to the maximum of the second derivative of the FID curve, which offers a rough estimate of the first bifurcation time. (e) Illustrates samples generation on Imagenet64, while progressively varying the starting time from 1000 to 100.",
155
+ "url": "http://arxiv.org/html/2310.17467v4/extracted/5680420/imagenet64_late_starts_.png"
156
+ },
157
+ "4(f)": {
158
+ "figure_path": "2310.17467v4_figure_4(f).png",
159
+ "caption": "(d) CelebA64\nFigure 4: Figure taken from (Raya and Ambrogioni,, 2023). Analysis of the model\u2019s performance, as measured by FID scores, for different starting times using three different sampling methods: the normal DDPM sampler with decreasing time steps from T=1000\ud835\udc471000T=1000italic_T = 1000 to 0, and fast sampler DDIM and PSDM for 10 and 5 denoising steps. The vertical line corresponds to the maximum of the second derivative of the FID curve, which offers a rough estimate of the first bifurcation time. (e) Illustrates samples generation on Imagenet64, while progressively varying the starting time from 1000 to 100.",
160
+ "url": "http://arxiv.org/html/2310.17467v4/extracted/5680420/celeba64_late_starts_.png"
161
+ },
162
+ "4(g)": {
163
+ "figure_path": "2310.17467v4_figure_4(g).png",
164
+ "caption": "(e) Imagenet late start generation\nFigure 4: Figure taken from (Raya and Ambrogioni,, 2023). Analysis of the model\u2019s performance, as measured by FID scores, for different starting times using three different sampling methods: the normal DDPM sampler with decreasing time steps from T=1000\ud835\udc471000T=1000italic_T = 1000 to 0, and fast sampler DDIM and PSDM for 10 and 5 denoising steps. The vertical line corresponds to the maximum of the second derivative of the FID curve, which offers a rough estimate of the first bifurcation time. (e) Illustrates samples generation on Imagenet64, while progressively varying the starting time from 1000 to 100.",
165
+ "url": "http://arxiv.org/html/2310.17467v4/extracted/5680420/imagenet64_late_starts_short__.png"
166
+ }
167
+ },
168
+ "validation": true,
169
+ "references": [
170
+ {
171
+ "1": {
172
+ "title": "Information capacity of the hopfield model.",
173
+ "author": "Abu-Mostafa, Y. and Jacques, J. S. (1985).",
174
+ "venue": "IEEE Transactions on Information Theory, 31(4):461\u2013464.",
175
+ "url": null
176
+ }
177
+ },
178
+ {
179
+ "2": {
180
+ "title": "Sampling from mean-field gibbs measures via diffusion processes.",
181
+ "author": "Alaoui, A. E., Montanari, A., and Sellke, M. (2023).",
182
+ "venue": "arXiv preprint arXiv:2310.08912.",
183
+ "url": null
184
+ }
185
+ },
186
+ {
187
+ "3": {
188
+ "title": "In search of dispersed memories: Generative diffusion models are associative memory networks.",
189
+ "author": "Ambrogioni, L. (2023).",
190
+ "venue": "arXiv preprint arXiv:2309.17290.",
191
+ "url": null
192
+ }
193
+ },
194
+ {
195
+ "4": {
196
+ "title": "Reverse-time diffusion equation models.",
197
+ "author": "Anderson, B. D. (1982).",
198
+ "venue": "Stochastic Processes and their Applications, 12(3):313\u2013326.",
199
+ "url": null
200
+ }
201
+ },
202
+ {
203
+ "5": {
204
+ "title": "Nearly d-linear convergence bounds for diffusion models via stochastic localization.",
205
+ "author": "Benton, J., De Bortoli, V., Doucet, A., and Deligiannidis, G. (2024).",
206
+ "venue": "In The Twelfth International Conference on Learning Representations.",
207
+ "url": null
208
+ }
209
+ },
210
+ {
211
+ "6": {
212
+ "title": "Dynamical regimes of diffusion models.",
213
+ "author": "Biroli, G., Bonnaire, T., de Bortoli, V., and M\u00e9zard, M. (2024).",
214
+ "venue": "arXiv preprint arXiv:2402.18491.",
215
+ "url": null
216
+ }
217
+ },
218
+ {
219
+ "7": {
220
+ "title": "Generative diffusion in very large dimensions.",
221
+ "author": "Biroli, G. and M\u00e9zard, M. (2023).",
222
+ "venue": "arXiv preprint arXiv:2306.03518.",
223
+ "url": null
224
+ }
225
+ },
226
+ {
227
+ "8": {
228
+ "title": "Deep generative modelling: A comparative review of vaes, gans, normalizing flows, energy-based and autoregressive models.",
229
+ "author": "Bond-Taylor, S., Leach, A., Long, Y., and Willcocks, C. G. (2021).",
230
+ "venue": "IEEE transactions on pattern analysis and machine intelligence.",
231
+ "url": null
232
+ }
233
+ },
234
+ {
235
+ "9": {
236
+ "title": "Wavegrad: Estimating gradients for waveform generation.",
237
+ "author": "Chen, N., Zhang, Y., Zen, H., Weiss, R. J., Norouzi, M., and Chan, W. (2020).",
238
+ "venue": "arXiv preprint arXiv:2009.00713.",
239
+ "url": null
240
+ }
241
+ },
242
+ {
243
+ "10": {
244
+ "title": "On a model of associative memory with huge storage capacity.",
245
+ "author": "Demircigil, M., Heusel, J., L\u00f6we, M., Upgang, S., and Vermet, F. (2017).",
246
+ "venue": "Journal of Statistical Physics, 168:288\u2013299.",
247
+ "url": null
248
+ }
249
+ },
250
+ {
251
+ "11": {
252
+ "title": "Sampling from the sherrington-kirkpatrick gibbs measure via algorithmic stochastic localization.",
253
+ "author": "El A., A., Montanari, A., and Sellke, M. (2022).",
254
+ "venue": "In 2022 IEEE 63rd Annual Symposium on Foundations of Computer Science (FOCS), pages 323\u2013334. IEEE.",
255
+ "url": null
256
+ }
257
+ },
258
+ {
259
+ "12": {
260
+ "title": "The free-energy principle: a unified brain theory?",
261
+ "author": "Friston, K. (2010).",
262
+ "venue": "Nature Reviews Neuroscience, 11(2):127\u2013138.",
263
+ "url": null
264
+ }
265
+ },
266
+ {
267
+ "13": {
268
+ "title": "Denoising diffusion probabilistic models.",
269
+ "author": "Ho, J., Jain, A., and Abbeel, P. (2020).",
270
+ "venue": "Advances in Neural Information Processing Systems.",
271
+ "url": null
272
+ }
273
+ },
274
+ {
275
+ "14": {
276
+ "title": "Video diffusion models.",
277
+ "author": "Ho, J., Salimans, T., Gritsenko, A., Chan, W., Norouzi, M., and Fleet, D. J. (2022).",
278
+ "venue": "arXiv preprint arXiv:2204.03458.",
279
+ "url": null
280
+ }
281
+ },
282
+ {
283
+ "15": {
284
+ "title": "Memory in plain sight: A survey of the uncanny resemblances between diffusion models and associative memories.",
285
+ "author": "Hoover, B., Strobelt, H., Krotov, D., Hoffman, J., Kira, Z., and Chau, H. (2023).",
286
+ "venue": "arXiv preprint arXiv:2309.16750.",
287
+ "url": null
288
+ }
289
+ },
290
+ {
291
+ "16": {
292
+ "title": "Neural networks and physical systems with emergent collective computational abilities.",
293
+ "author": "Hopfield, J. J. (1982).",
294
+ "venue": "Proceedings of the National Academy of Sciences, 79(8):2554\u20132558.",
295
+ "url": null
296
+ }
297
+ },
298
+ {
299
+ "17": {
300
+ "title": "Sampling from spherical spin glasses in total variation via algorithmic stochastic localization.",
301
+ "author": "Huang, B., Montanari, A., and Pham, H. T. (2024).",
302
+ "venue": "arXiv preprint arXiv:2404.15651.",
303
+ "url": null
304
+ }
305
+ },
306
+ {
307
+ "18": {
308
+ "title": "Diffwave: A versatile diffusion model for audio synthesis.",
309
+ "author": "Kong, Z., Ping, W., Huang, J., Zhao, K., and Catanzaro, B. (2020).",
310
+ "venue": "arXiv preprint arXiv:2009.09761.",
311
+ "url": null
312
+ }
313
+ },
314
+ {
315
+ "19": {
316
+ "title": "A new frontier for hopfield networks.",
317
+ "author": "Krotov, D. (2023).",
318
+ "venue": "Nature Reviews Physics, pages 1\u20132.",
319
+ "url": null
320
+ }
321
+ },
322
+ {
323
+ "20": {
324
+ "title": "Dense associative memory for pattern recognition.",
325
+ "author": "Krotov, D. and Hopfield, J. J. (2016).",
326
+ "venue": "Advances in Neural Information Processing Systems.",
327
+ "url": null
328
+ }
329
+ },
330
+ {
331
+ "21": {
332
+ "title": "A tutorial on energy-based learning.",
333
+ "author": "LeCun, Y., Chopra, S., Hadsell, R., Ranzato, M., and Huang, F. (2006).",
334
+ "venue": "Predicting structured data, 1(0).",
335
+ "url": null
336
+ }
337
+ },
338
+ {
339
+ "22": {
340
+ "title": "Critical windows: non-asymptotic theory for feature emergence in diffusion models.",
341
+ "author": "Li, M. and Chen, S. (2024).",
342
+ "venue": "arXiv preprint arXiv:2403.01633.",
343
+ "url": null
344
+ }
345
+ },
346
+ {
347
+ "23": {
348
+ "title": "Audioldm: Text-to-audio generation with latent diffusion models.",
349
+ "author": "Liu, H., Chen, Z., Yuan, Y., Mei, X., Liu, X., Mandic, D., Wang, W., and Plumbley, M. D. (2023).",
350
+ "venue": "arXiv preprint arXiv:2301.12503.",
351
+ "url": null
352
+ }
353
+ },
354
+ {
355
+ "24": {
356
+ "title": "Exponential capacity of dense associative memories.",
357
+ "author": "Lucibello, C. and M\u00e9zard, M. (2024).",
358
+ "venue": "Physical Review Letters, 132(7):077301.",
359
+ "url": null
360
+ }
361
+ },
362
+ {
363
+ "25": {
364
+ "title": "Boltzmann machines as generalized hopfield networks: a review of recent results and outlooks.",
365
+ "author": "Marullo, C. and Agliari, E. (2020).",
366
+ "venue": "Entropy, 23(1):34.",
367
+ "url": null
368
+ }
369
+ },
370
+ {
371
+ "26": {
372
+ "title": "Spin glass theory and beyond: An Introduction to the Replica Method and Its Applications, volume 9.",
373
+ "author": "M\u00e9zard, M., Parisi, G., and Virasoro, M. A. (1987).",
374
+ "venue": "World Scientific Publishing Company.",
375
+ "url": null
376
+ }
377
+ },
378
+ {
379
+ "27": {
380
+ "title": "Sampling, diffusions, and stochastic localization.",
381
+ "author": "Montanari, A. (2023).",
382
+ "venue": "arXiv preprint arXiv:2305.10690.",
383
+ "url": null
384
+ }
385
+ },
386
+ {
387
+ "28": {
388
+ "title": "Hopfield networks is all you need.",
389
+ "author": "Ramsauer, H., Sch\u00e4fl, B., Lehner, J., Seidl, P., Widrich, M., Adler, T., Gruber, L., Holzleitner, M., Pavlovi\u0107, M., Sandve, G. K., et al. (2021).",
390
+ "venue": "Internetional Conference on Learning Representations.",
391
+ "url": null
392
+ }
393
+ },
394
+ {
395
+ "29": {
396
+ "title": "Spontaneous symmetry breaking in generative diffusion models.",
397
+ "author": "Raya, G. and Ambrogioni, L. (2023).",
398
+ "venue": "Neural Information Processing Systems.",
399
+ "url": null
400
+ }
401
+ },
402
+ {
403
+ "30": {
404
+ "title": "A phase transition in diffusion models reveals the hierarchical nature of data.",
405
+ "author": "Sclocchi, A., Favero, A., and Wyart, M. (2024).",
406
+ "venue": "arXiv preprint arXiv:2402.16991.",
407
+ "url": null
408
+ }
409
+ },
410
+ {
411
+ "31": {
412
+ "title": "Make-a-video: Text-to-video generation without text-video data.",
413
+ "author": "Singer, U., Polyak, A., Hayes, T., Yin, X., An, J., Zhang, S., Hu, Q., Yang, H., Ashual, O., Gafni, O., et al. (2022).",
414
+ "venue": "arXiv preprint arXiv:2209.14792.",
415
+ "url": null
416
+ }
417
+ },
418
+ {
419
+ "32": {
420
+ "title": "Deep unsupervised learning using nonequilibrium thermodynamics.",
421
+ "author": "Sohl-Dickstein, J., Weiss, E., Maheswaranathan, N., and Ganguli, S. (2015).",
422
+ "venue": "International Conference on Machine Learning.",
423
+ "url": null
424
+ }
425
+ },
426
+ {
427
+ "33": {
428
+ "title": "Denoising diffusion implicit models.",
429
+ "author": "Song, J., Meng, C., and Ermon, S. (2021a).",
430
+ "venue": "Internetional Conference on Learning Representations.",
431
+ "url": null
432
+ }
433
+ },
434
+ {
435
+ "34": {
436
+ "title": "Score-based generative modeling through stochastic differential equations.",
437
+ "author": "Song, Y., Sohl-Dickstein, J., Kingma, D. P., Kumar, A., Ermon, S., and Poole, B. (2021b).",
438
+ "venue": "In International Conference on Learning Representations.",
439
+ "url": null
440
+ }
441
+ },
442
+ {
443
+ "35": {
444
+ "title": "Phase transitions in dilute, locally connected neural networks.",
445
+ "author": "Strandburg, K. J., Peshkin, M. A., Boyd, D. F., Chambers, C., and O\u2019Keefe, B. (1992).",
446
+ "venue": "Physical Review A, 45(8):6135.",
447
+ "url": null
448
+ }
449
+ },
450
+ {
451
+ "36": {
452
+ "title": "On the phase transition of hopfield networks\u2014another monte carlo study.",
453
+ "author": "Volk, D. (1998).",
454
+ "venue": "International Journal of Modern Physics C, 9(05):693\u2013700.",
455
+ "url": null
456
+ }
457
+ }
458
+ ],
459
+ "url": "http://arxiv.org/html/2310.17467v4"
460
+ }
20240620/2311.01264v2.json ADDED
@@ -0,0 +1,66 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "title": "Structure preserving discontinuous Galerkin approximation of a hyperbolic-parabolic system",
3
+ "abstract": "Abstract.\nWe study the numerical approximation of a coupled hyperbolic-parabolic system by a family of discontinuous Galerkin space-time finite element methods. The model is rewritten as a first-order evolutionary problem that is treated by the unified abstract solution theory of R. Picard. For the discretization in space, generalizations of the distribution gradient and divergence operators on broken polynomial spaces are defined. Since their skew-selfadjointness is perturbed by boundary surface integrals, adjustments are introduced such that the skew-selfadjointness of the first-order differential operator in space is recovered. Well-posedness of the fully discrete problem and error estimates for the discontinuous Galerkin approximation in space and time are proved.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "We study the numerical approximation by discontinuous Galerkin methods in space and time of solutions to the hyperbolic-parabolic system\nFor this, we rewrite (1.1 ###reference_###) as a first order evolutionary problem in space and time on the open bounded domain , with , and for the final time . System (1.1 ###reference_###) is investigated as a prototype model problem for poro- and thermoelasticity; cf., e.g., [13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###, 33 ###reference_b33###, 46 ###reference_b46###]. In poroelasticity, Eqs. (1.1a ###reference_1###) and (1.1b ###reference_2###) describe the conservation of momentum and mass. The unknowns are the effective solid phase displacement and the effective fluid pressure . The quantity denotes the symmetrized gradient or strain tensor. Further, is the effective mass density, is Gassmann\u2019s fourth order effective elasticity tensor, is Biot\u2019s pressure-storage coupling tensor, is the specific storage coefficient and is the permeability field. For simplicity, the positive quantities , and are assumed to be constant in space and time. Moreover, the tensors and are assumed to be symmetric and positive definite and independent of the space and time variables as well. In (1.1a ###reference_1###), the effects of secondary consolidation (cf. [36 ###reference_b36###]), described in certain models by the additional term in the total stress, are not included here. Beyond the classical applications of (1.1 ###reference_###) in subsurface hydrology and geophysics, for instance in reservoir engineering, systems like (1.1 ###reference_###) have recently attracted reseachers\u2019 interest in biomedical engineering; cf., e.g., [19 ###reference_b19###, 22 ###reference_b22###, 37 ###reference_b37###]. In thermoelasticity, the system (1.1 ###reference_###) describes the flow of heat through an elastic structure. In that context, denotes the temperature, is the specific heat of the medium, and is the conductivity. The homogeneous Dirichlet boundary conditions (1.1d ###reference_4###) are studied here for simplicity and brevity.\nBy introducing the variable of unknowns , with the quantities , and , we transform the system (1.1 ###reference_###) into an abstract evolutionary equation written as the sum of two unbounded first-order differential operators, one of them involving a first order differential operator in time and the other one involving first order differential operators in space. In the exponentially weighted in time Bochner space defined in (2.1 ###reference_###), with some weight and the Hilbert space , we then obtain the evolutionary equation for that\nIn (1.2 ###reference_###), and are bounded linear selfadjoint operators in and is an unbounded skew-selfadjoint operator in . The right-hand side function in (1.2 ###reference_###) depends on the source terms and of (1.1 ###reference_###). For (1.2 ###reference_###), a solution mechanism developed by R. Picard [38 ###reference_b38###] can be applied. It is based on monotonicity of both the sum of the mentioned unbounded operators together with its adjoint computed in the space-time Hilbert space. For the presentation of the solution theory we refer to [45 ###reference_b45###, Thm. 6.2.1]. The well-posedness criterion for (1.2 ###reference_###), that is summarized in Thm. 2.5 ###reference_defi5###, is elementary and general. It can be verified without dealing with the intricacies of more involved solution methods. 
This is an appreciable advantage of Picard\u2019s theorem [38 ###reference_b38###]. A priori, there is no explicit initial condition implemented in the theory. For (1.2 ###reference_###), an initial condition of the form for some and can be implemented by a distributional right-hand side term for some supported on and the Dirac distribution at . For details of this we refer to [39 ###reference_b39###, Sec. 6.2.5] and [45 ###reference_b45###, Chap. 9]. By introducing the four-field formulation for the unknown vector the problem size is increased. However, in poroelasticity the explicit approximation of the flux variable is often desirable and of higher importance than the approximation of the fluid pressure itself. For instance, this holds if reactive transport of species, dissolved in the fluid, is studied further. Simulations then call for accurate approximations of the flux variable . A similar argument applies to the stress tensor , if this variable is the goal quantity of physical interest in (1.1 ###reference_###) or needs to be post-processed for elucidating phenomena modeled by (1.1 ###reference_###). In implementations, the symmetry of the stress tensor can still be exploited to reduce the problem\u2019s size.\nIn this work we propose and analyze fully discrete numerical approximation schemes that are built for the evolutionary equation (1.2 ###reference_###). Their key feature is that they essentially preserve the abstract evolutionary form (1.2 ###reference_###) and the operators\u2019 properties. However, due to the nonconforming discretization in space applied here, the skew-selfadjointness of the discrete counterpart of in (1.2 ###reference_###) is perturbed by non-vanishing contributions arising from boundary face integrals. Therefore, a correction term is introduced on the discrete level to overcome this defect and ensure that a discrete counterpart of the skew-selfadjointness that is essentially used in the analyses is satisfied. In the design of numerical methods, structure preserving approaches ensuring that important properties of differential operators and solutions to the continuous problem are maintained on the fully discrete level are highly desirable and important to ensure physical realism of numerical predictions. We focus on discontinuous Galerkin (DG) discretizations of the space and time variables. DG methods for the space discretization (cf., e.g., [40 ###reference_b40###, 23 ###reference_b23###, 43 ###reference_b43###]) have shown their high flexibility and accuracy in approximating reliably solutions to partial differential equations, even solutions with complex structures or discontinuities and in anisotropic or heterogeneous media. The application of DG schemes for the space discretization of (1.2 ###reference_###) and the definition of the DG counterpart of in (1.2 ###reference_###) to preserve skew-selfadjointness represent the key innovation of this work over a series of previous ones [4 ###reference_b4###, 30 ###reference_b30###, 29 ###reference_b29###, 28 ###reference_b28###] based on Picard\u2019s theory. For the DG space discretization, the definition of the distribution gradient and divergence operator is extended to broken polynomial spaces by penalizing the jumps of the unknowns over interelement surfaces. By still adding some boundary correction due to the nonconformity of DG methods, the skew-selfadjointness of is passed on to its discrete counterpart . 
This consistent definition and treatment of the DG gradient and DG divergence operators for the nonconforming approximation is essential for the overall approach and its analysis. It has not been studied yet.\nFor the discretization in time we use the DG method [47 ###reference_b47###]. Variational time discretizations offer the appreciable advantage of the natural construction of families of schemes with higher order members, even for complex coupled systems of equations. There exists a strong link to Runge\u2013Kutta methods; cf. [2 ###reference_b2###, 3 ###reference_b3###]. DG time discretizations are known to be strongly A-stable. For elastodynamics and wave propagation they violate the energy conservation principle of solutions to the continuous problem. This might evoke effects of damping or dispersion. However, the convergence of the jump terms at the discrete time nodes is ensured; cf. [28 ###reference_b28###, Thm. 2.3]. Continuous in time Galerkin (CG) methods (cf., e.g., [7 ###reference_b7###, 10 ###reference_b10###, 11 ###reference_b11###, 29 ###reference_b29###, 1 ###reference_b1###] and the references therein) are known to be A-stable only, but they preserve the energy of solutions [11 ###reference_b11###, Sec. 6]. These families are more difficult to analyse since they lead to Galerkin\u2013Petrov methods with trial and test spaces differing from each other. For this reason and due to computational advantages gained for simulations of the second-order form (1.1 ###reference_###), DG time discretizations are studied here. For studies of CG schemes with continuous in time discrete solutions we refer to, e.g., [7 ###reference_b7###, 10 ###reference_b10###, 11 ###reference_b11###, 29 ###reference_b29###, 34 ###reference_b34###, 24 ###reference_b24###] and the references therein. For a numerical study of DG and CG time discretizations of (1.1 ###reference_###) we refer to [5 ###reference_b5###].\nIn [30 ###reference_b30###] and [29 ###reference_b29###], one of the authors of this work studies with his coauthors numerical schemes based on DG and CG Galerkin methods in time and conforming Galerkin methods in space for evolutionary problems (1.2 ###reference_###) of changing type. By decomposing into three disjoint sets and defining the and setwise, the system (1.2 ###reference_###) degenerates to elliptic, parabolic or hyperbolic type on these sets. Usually, degenerating problems are difficult to analyze. Due to the weak assumptions about the operators made in the theory of Picard [38 ###reference_b38###], such types of problems can be embedded into this framework. The same applies to the concept of perfectly matched layers in wave propagation; cf., e.g. [12 ###reference_b12###, 20 ###reference_b20###]. They are used to truncate the entire space or unbounded domains to bounded computational ones and mimic non-reflecting boundary conditions. The analysis of wave propagation with an artificial absorbing layer and changing equations in either region becomes feasible as well by the abstract solution theory.\nIn [21 ###reference_b21###], space-time DG methods for weak solutions of hyperbolic linear first-order symmetric Friedrichs systems describing acoustic, elastic, or electro-magnetic waves are proposed. For an introduction into the theory of first-order symmetric Friedrichs systems we refer to [25 ###reference_b25###, 26 ###reference_b26###, 27 ###reference_b27###] and [40 ###reference_b40###, Chap. 7]. 
Similarly to this work, in [21 ###reference_b21###] a first-order in space and time formulation of a second-order hyperbolic problem is used. In contrast to this work, no coupled system of mixed hyperbolic-parabolic type is considered there. In [21 ###reference_b21###], the mathematical tools for proving well-posedness of the space-time DG discretization and error estimates are based on the theory of first-order Friedrichs systems. The theory strongly differs from Picard\u2019s theorem [38 ###reference_b38###] that is used here. The differences between the two approaches still require elucidation. In deriving space-time DG methods and proving error estimates, differences become apparent in the norms with respect to which convergence is proved. In [21 ###reference_b21###], stability and convergence estimates are provided with respect to a mesh-dependent DG norm that includes the norm at the final time; cf. also [8 ###reference_b8###].\nHere, convergence of the fully discrete approximation of (1.2 ###reference_###) is proved in Thm. 4.1 ###reference_defi1### with respect to the natural and induced norm of the exponentially weighted Bochner space , with the product space equipped with the -norm. For the full discretization of the solution to (1.2 ###reference_###) we show that\nwhere is the exponentially weighted natural norm associated with . Further, and are the piecewise polynomial degrees in time and space, respectively.\nThe paper is organized as follows. In Sec. 2 ###reference_### the evolutionary form (1.2 ###reference_###) of (1.1 ###reference_###) is derived and its well-posedness is shown. The space-time discretization of (1.2 ###reference_###) by the DG method is presented in Sec. 3 ###reference_###. Its error analysis is addressed in Sec. 4 ###reference_###. In Sec. 5 ###reference_###, we end with a summary and outlook."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Evolutionary formulation and its well-posedness",
15
+ "text": "In this section we rewrite formally the coupled hyperblic-parabolic problem (1.1 ###reference_###) as an evolutionary problem (1.2 ###reference_###) by introducing auxiliary variables. For the evolutionary problem we present a result of well-posedness that is based on Picard\u2019s theorem; cf. [38 ###reference_b38###] and [45 ###reference_b45###, Thm. 6.2.1]. Therein, the evolutionary problem is investigated on the whole time axis, for , in the exponentially weighted Bochner space introduced in Def. 2.1 ###reference_defi1###. Throughout, we use usual notation for standard Sobolev spaces. In the notation, we do not differ between scalar-, vector- or tensor-valued functions.\nLet be a real Hilbert space with associated norm . For , we put\nThe space , equipped with the inner product\nis a Hilbert space. The norm induced by the inner product (2.2 ###reference_###) is denoted by . Moreover, we define to be the closure of the operator\nwhere is the space of infinitely differentiable -valued functions on with compact support. The domain of the time derivative of -order, denoted by , is the space . Before rewriting (1.1 ###reference_###) in the form (1.2 ###reference_###), we need to introduce differential operators with respect to the spatial variables.\nLet , for , be an open non-empty set. Then we define\nLet , for , be an open non-empty set. We put\nand\nMoreover, we put\nand\nWe note that for . The operator in (2.5 ###reference_###) assigns each vector field its distributional divergence with maximal domain, that is,\nSimilarly, the operator in (2.6 ###reference_###) assigns each tensor field its distributional divergence with maximal domain, that is,\nTo rewrite (1.1 ###reference_###) formally as a first-order evolutionary problem, we introduce the set of new unknowns\nUsing (2.7 ###reference_###) and differentiating the second of the definitions in (2.7 ###reference_###) with respect to the time variable, we recast (1.1a ###reference_1###) and (1.1b ###reference_2###) as the first order in space and time system\nwhere denotes the positive definite, fourth order compliance tensor of the inverse stress-strain relation of Hook\u2019s law of linear elasticity,\nIn matrix-vector notation the system (2.8 ###reference_###) reads as\nTo further simplify the spatial differential operator in (2.10 ###reference_###), we introduce the total flux variable\nand, then recast (2.10 ###reference_###) as the evolutionary problem\nFinally, we put\nWe define the operators\nThen we obtain the following evolutionary problem.\nLet denote the product space\nequipped with the inner product of . Let and , with\nbe defined by (2.13 ###reference_###). For given according to (2.12 ###reference_###), find such that\nwhere is defined by (2.12 ###reference_###) along with (2.7 ###reference_###).\nWell-posedness of (2.16 ###reference_###) is ensured by the following abstract result; cf. [38 ###reference_b38###] and [45 ###reference_b45###, Thm. 6.2.1].\nLet denote a real Hilbert space. Let be bounded linear selfadjoint operators and skew-selfadjoint. Moreover, suppose that there exists some such that\nThen, for each and each there exist a unique solution such that\nwhere the closure is taken in . Moreover, there holds the stability estimate\nIf for some , then the inclusion is satisfied and the evolutionary equation is solved literally, such that\nProblem 2.4 ###reference_defi4### is well-posed. In particular, there exists a unique solution in the sense of (2.18 ###reference_###)."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Discontinuous Galerkin discretization and well-posedness",
21
+ "text": "Here we derive a family of fully discrete schemes for Problem 2.4 ###reference_defi4###. Space and time discretization are based on discontinuous Galerkin approaches. Well-posedness of the discrete problem is shown. We assume that the weight in (2.1 ###reference_###) is chosen such that the assumptions of Thm. 2.5 ###reference_defi5### are satisfied."
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "Notation and auxiliaries",
27
+ "text": "For the time discretization, we decompose into subintervals , for , where such that . We put with . Further, the set is called the time mesh. For any and some Banach space , we let\ndenote the space of -valued polynomials of degree at most defined on . For a Hilbert space , the space , equipped with the exponentially weighted in time inner product\nis a Hilbert space. The semidiscretization in time of (2.16 ###reference_###) by Galerkin methods is done in\nFor any function that is piecewise sufficiently smooth with respect to the time mesh , for instance for , we define the right-hand sided and left-hand sided limit at a mesh point by\nFor the error analysis, we further need the space\nIn the discrete scheme, a quadrature formula is applied for the evaluation of the time integrals. For the discontinuous in time finite element method, a natural choice is to consider the -point right-sided Gauss\u2013Radau quadrature formula on each time interval . Here, we use a modification of the standard right-sided Gauss\u2013Radau quadrature formula that is defined by\nwhere , for , are the quadrature points on and the corresponding weights. Here, is the affine transformation from the reference interval to and , for , are the quadrature points of the weighted Gauss\u2013Radau formula on (cf. [41 ###reference_b41###]), such that for all polynomials there holds that\nThen, for polynomials we have that\nFinally, we introduce the time-mesh dependent quantities\nwhere the nonnegativity of is tacitly assumed in the definition of . This will be satisfied below.\nFor the nodes , for and , of the weighted Gauss\u2013Radau formula (3.6 ###reference_###), we define the global Lagrange interpolation operator by\nFor the Lagrange interpolation (3.9 ###reference_###), on each there holds that (cf. [32 ###reference_b32###, Thm. 1])\nMoreover, we need the Lagrange interpolation operator with respect to the Gauss\u2013Radau quadrature points , for , and , for , that is defined by\nThen, for there holds that (cf. [32 ###reference_b32###, Thm. 2])\nFor the space discretization, let the mesh denote a decomposition of the polyhedron into quadrilateral or hexahedral elements with meshsize for . The mesh is assumed to be conforming (matching) and shape-regular; cf., e.g., [43 ###reference_b43###]. The assumptions about are sufficient to derive inverse and trace inequalities; cf. [40 ###reference_b40###, Chap. 1]. Further, optimal polynomial approximation properties in the sense of [40 ###reference_b40###, Def. 1.55] are satisfied; cf., e.g., [43 ###reference_b43###, Thm. 2.6]. Simplicial triangulations can be considered analogously. For more general mesh concepts in the context of discontinuous Galerkin methods we refer to [40 ###reference_b40###, Subsec. 1.4] or [23 ###reference_b23###, Subsec. 2.3.2]. For any we denote by the outward unit normal to the faces (egdes for ) of . Further, we let be the union of the boundaries of all elements of . Let be the set of interior faces (edges if ) and denote the union of all boundary faces.\nFor any , the discrete space of continuous and piecewise polynomial functions is denoted as\nwhere the local space is defined by mapped versions of ; cf. [42 ###reference_b42###, Subsec. 3.2]. 
For any , we denote the space of broken polynomials by\nFor the spatial approximation of Problem 2.4 ###reference_defi4### we consider using\nwhere the finite element product spaces and are given by\nDiscretizations of Problem 2.4 ###reference_defi4### in either spaces, and , are studied simultaneously. The reason for considering also the hybrid space is that continuous and -conforming finite element methods lead to lower computational cost than discontinuous ones. -conforming approximations in the framework of Picard\u2019s theory have been studied in [30 ###reference_b30###] for scalar-valued problems of changing type. These families of schemes can be applied analoguously to the approximation of and in Problem 2.4 ###reference_defi4###. Since discontinuous Galerkin methods offer high flexibilty combined with implementational advantages, DG methods are attractive and studied here.\nIn Subsec. 4 ###reference_### we need the -orthogonal projection of functions onto the broken polynomial space of (3.16b ###reference_.2###) that is very simple, even on more general meshes than studied here. For the -orthogonal projection , and , with\nthere holds for all and all that\nwhere is independent of both and ; cf. [40 ###reference_b40###, Lem. 1.58], [43 ###reference_b43###, Thm. 2.6]. In (3.18 ###reference_###), we denote by the seminorm of the Sobolev space . Also, the -orthgonal projection satisfies that\nwhere and are independent of both and ; cf. [40 ###reference_b40###, Lem. 1.59]."
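A small verification sketch may be useful for the quadrature step above. It uses the classical (unweighted) 2-point right-sided Gauss–Radau rule on [0, 1] with nodes 1/3 and 1 and weights 3/4 and 1/4; the unweighted rule and the reference interval are assumptions, since the text works with a weighted modification whose symbols were lost in extraction:

```python
# Verify the 2-point right-sided Gauss-Radau rule on [0, 1]:
# nodes {1/3, 1}, weights {3/4, 1/4}, exact for polynomials of
# degree <= 2k - 2 = 2 (here k = 2).
import numpy as np

nodes = np.array([1.0 / 3.0, 1.0])
weights = np.array([3.0 / 4.0, 1.0 / 4.0])

for p in range(4):
    quad = np.sum(weights * nodes**p)   # Q(t^p)
    exact = 1.0 / (p + 1)               # \int_0^1 t^p dt
    print(f"degree {p}: quadrature {quad:.6f}, exact {exact:.6f}")
# degrees 0..2 match exactly; degree 3 differs, as expected for k = 2
```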
28
+ },
29
+ {
30
+ "section_id": "3.2",
31
+ "parent_section_id": "3",
32
+ "section_name": "Gradient and divergence on broken function spaces",
33
+ "text": "To define our discontinuous Galerkin discretization schemes we need to recall some general concepts for the definition of the gradient and divergence operator on broken function spaces with respect to the triangulation . For further details and concepts of broken function spaces we refer to, e.g., [40 ###reference_b40###]. On the triangulation , let and denote piecewise (broken) spaces of scalar- and vector-valued functions, respectively. On the set of (inner and outer) boundaries , let and be piecewise (broken) spaces of scalar- and vector-valued functions on , respectively. We put and . We denote the dual spaces of and by and . In these spaces we define the following derivatives of the discontinuous Galerkin method; cf. [31 ###reference_b31###].\nLet and . Then the DG-gradient and the DG-divergence are defined by\nHere, denotes the inner product of , where we drop the index if . Further, and are the broken gradient and divergence, respectively; cf. [40 ###reference_b40###, Subsec. 1.2.5 and 1.2.6]. In what follows, we drop the index in the broken operators, when this operation appears inside an integral over a fixed mesh element . We recall that on the usual Sobolev spaces the broken gradient coincides with the distribution gradient; cf. [40 ###reference_b40###, Lem. 1.22]. The same applies to the broken divergence; cf. [40 ###reference_b40###, Subsec. 1.2.6]. The dual operators of and are denoted by and . Then, there holds that\nThe DG-derivatives and are conditionally dual with each other; cf. [31 ###reference_b31###]. To demonstrate this link, we deduce from (3.20 ###reference_###) and (3.21 ###reference_###) that\nand\nThe identities (3.22 ###reference_###) and (3.23 ###reference_###) directly imply the following conditional duality between and under the assumption that on ; cf. also [31 ###reference_b31###, Lem. 2.1].\nSuppose that on . For the DG derivatives (3.20 ###reference_###) there holds the duality\nif one of the following conditions is satisfied:\nIn (3.25c ###reference_.3###), we let and for two adjacent elements and with common face and outer unit normal vector to . By we denote the normal vector assigned to , where is the outer normal vector for . We note that for and for , since the second terms on the right-hand side of (3.20 ###reference_###) yield that\nThe matrix- and vector-valued operators and , introduced in (2.4 ###reference_###) and (2.6 ###reference_###) respectively, are defined on broken function spaces similarly to (3.20 ###reference_###)."
34
+ },
35
+ {
36
+ "section_id": "3.3",
37
+ "parent_section_id": "3",
38
+ "section_name": "Gradient and divergence on broken polynomial spaces and operator",
39
+ "text": "Now, we specify the broken spaces and of Def. 3.1 ###reference_defi1### for the finite element spaces (3.15 ###reference_###) and (3.16 ###reference_###) that we consider for the spatial approximation of Problem 2.4 ###reference_defi4###. In light of Lem. 3.2 ###reference_defi2### we put\nFor (3.26 ###reference_###), the definitions of the DG gradient operator in (3.20a ###reference_.1###) and the DG divergence operator in (3.20b ###reference_.2###) of Def. 3.1 ###reference_defi1### then read as follows.\nThe DG gradient operator and the DG divergence operator are defined by\nfor all and , where standard notation (cf. [40 ###reference_b40###]) is used for the averages and jumps\nOn the usual Sobolev spaces the DG gradient and DG divergence of (3.27 ###reference_###) coincide with the distribution gradient and divergence, respectively, since for functions of the jump terms on and the traces on the boundary faces vanish in (3.27 ###reference_###); cf. [40 ###reference_b40###, Lem. 1.22 and 1.24]. Similarly, for functions of the jumps for vanish as well; cf. [40 ###reference_b40###, Lem. 1.22 and 1.24]. The assumption of Lem. 3.2 ###reference_defi2### that for is not fulfilled for the DG space (3.15 ###reference_###) with (3.16b ###reference_.2###). This leads to a perturbation of the skew-selfadjointness of and shown now.\nFor all and there holds that\nand\nThe identity (3.28 ###reference_###) is a direct consequence of (3.22 ###reference_###) and (3.23 ###reference_###) along with the definition (3.26 ###reference_###) of the broken spaces and on . From (3.28b ###reference_.2###) we then get that\nThis proves (3.29 ###reference_###).\n\nDG derivatives and for multi-valued function are defined as follows.\nFor multi-valued functions, the DG gradient operator and DG divergence operator are given by\nfor all and .\nIn (3.30 ###reference_###), the operators and are the broken symmetrized gradient and broken divergence that extend the distributional gradient in (2.4 ###reference_###) and divergence in (2.6 ###reference_###) to broken polynomial spaces; cf. [40 ###reference_b40###, Def. 1.21]. On the usual Sobolev spaces the broken gradient and divergence coincide with the distributional symmetrized gradient and divergence of (2.4 ###reference_###) and (2.6 ###reference_###), respectively. Similarly to Lem. 3.4 ###reference_defi4###, for DG spaces there holds for and that\nand\nNow, we are able to define a discrete counterpart of the differential operator introduced in (2.13 ###reference_###).\nFor the DG differential operators introduced in (3.27 ###reference_###) and (3.30 ###reference_###), respectively, the operator is defined by\nsuch that for there holds that\nFrom (3.29 ###reference_###) and (3.32 ###reference_###) we conclude that for there holds that\nBy (3.28 ###reference_###) and (3.31 ###reference_###), the operator is not skew-selfadjoint on , defined in (3.16b ###reference_.2###), due to perturbations by boundary face integrals. Consequently, the inner product does no longer vanish as in the continuous case. However, the control of the latter term is essential for our analysis. Therefore, some correction term, defined in (3.37 ###reference_###) below, will be introduced in the fully discrete scheme. Finally, we note that skew-selfadjointness is preserved for the hybrid space of (3.16a ###reference_.1###)."
40
+ },
41
+ {
42
+ "section_id": "3.4",
43
+ "parent_section_id": "3",
44
+ "section_name": "Fully discrete problem of structure preserving nonconforming approximation",
45
+ "text": "For the discretization of Problem 2.4 ###reference_defi4### in the space , defined in (3.15 ###reference_###) and (3.16 ###reference_###), we then consider the following family of fully discrete nonconforming approximation schemes.\nLet be given by (3.15 ###reference_###) and (3.16 ###reference_###). For the operators and of (2.13 ###reference_###), of (3.33 ###reference_###) and given data and , where denotes an approximation of the initial value according to (2.12 ###reference_###), find with\nfor all and , where\nand\nfor with ; cf. (3.4 ###reference_###).\nThe algorithmic (or penalization) parameters , for , in (3.38 ###reference_###) have to be chosen sufficiently large; cf. [43 ###reference_b43###]. The contribution , defined in (3.38 ###reference_###), enforces the weak form of the homogeneous Dirichlet boundary conditions in (2.15 ###reference_###). Moreover, in the error estimation given below it is essential for absorbing contributions from upper bounds of the error.\nIn (3.36 ###reference_###), the mathematical structure of the evolutionary problem (2.16 ###reference_###) is essentially preserved, with the discrete operator replacing . The perturbation of the skew-selfadjointness of , resulting from (3.28 ###reference_###) and (3.31 ###reference_###), is captured in the analysis below by the additional (boundary) correction along with the penalization induced by .\nProblem 3.7 ###reference_defi7### yields a global in time formulation. For computations of space-time finite element discretizations we propose using a temporal test basis that is supported on the subintervals ; cf. [6 ###reference_b6###, 4 ###reference_b4###]. Then, a time marching process is obtained. For Problem 3.7 ###reference_defi7###, this amounts to assuming that the trajectory has been computed before for all , starting with an approximation of . On , for given we consider then finding such that (3.36 ###reference_###) is satisfied for all .\nIn (3.36 ###reference_###), there holds that\nBy (3.28a ###reference_.1###) and (3.31a ###reference_.1###), the operators and in (3.40 ###reference_###) are transformed into the DG gradients and , respectively, applied to the test functions, and additional sums of boundary face integrals. This can be exploited in the assembly process and error analysis.\nThere exists a unique solution of Problem 3.7 ###reference_defi7###.\nThe proof follows the ideas of [30 ###reference_b30###, Proof of Prop. 3.2]. To keep this work self-contained, and due to adaptations of the proof required by the perturbation of the skew-selfadjointness, we present it briefly. Since Problem 3.7 ###reference_defi7### is finite dimensional, it suffices to prove uniqueness of solutions to (3.36 ###reference_###) for . The existence of solutions then directly follows from their uniqueness. By means of the first of the items in Rem. 3.8 ###reference_defi8### and an induction argument, it suffices to prove the uniqueness of solutions to (3.36 ###reference_###) on a fixed subinterval . For this, let and be two solutions of (3.36 ###reference_###). Then, their difference satisfies for all that\nNext, we recall an argument of [30 ###reference_b30###, Proof of Prop. 3.2]. We note that\nand\nare bounded linear operators with respect to the norm of induced by the inner product (3.2 ###reference_###). Consequently, the mapping\nis linear and bounded for each . 
Then, by the Riesz representation theorem there exists a unique such that\nThe mapping is linear and bounded, since for there holds that\nNow, using integration by parts along with (3.42 ###reference_###), we have for all that\nUsing (3.42 ###reference_###), we rewrite (3.41 ###reference_###) as\nIn (3.44 ###reference_###), we choose . By (3.35 ###reference_###) along with (3.37 ###reference_###) we have for that\nNow, from (3.44 ###reference_###) we deduce by (3.45 ###reference_###) and the nonnegativity of given by (3.38 ###reference_###) that\nwhere the nonnegativity of is ensured by\nThe latter inequality follows from the assumption (2.17 ###reference_###). From (3.47 ###reference_###) we directly conclude the uniqueness of solutions to (3.36 ###reference_###) and, thereby, the assertion of this lemma."
46
+ },
47
+ {
48
+ "section_id": "4",
49
+ "parent_section_id": null,
50
+ "section_name": "Error estimation for structure preserving nonconforming approximation",
51
+ "text": "Here we prove an error estimate for the solution of Problem 3.7 ###reference_defi7###. For brevity, the proof is done only for the full DG approximation in space, corresponding to the choice in (3.15 ###reference_###) with in (3.16b ###reference_.2###). The adaptation of the proof to the hybrid case in (3.15 ###reference_###) is straightforward.\nLet be defined by (2.14 ###reference_###). For the solution of Problem 2.4 ###reference_defi4### suppose that the regularity condition\nis satisfied. Let the discrete initial in Problem 3.7 ###reference_defi7### be chosen such that holds. Then, for the numerical solution of Problem 3.7 ###reference_defi7### we have the error estimate that"
52
+ },
53
+ {
54
+ "section_id": "5",
55
+ "parent_section_id": null,
56
+ "section_name": "Summary and outlook",
57
+ "text": "In this work we presented and analyzed the numerical approximation of a prototype hyperbolic-parabolic model of dynamic poro- or thermoelasticity that is rewritten as a first-order evolutionary system in space and time such that the solution theory of Picard [38 ###reference_b38###, 45 ###reference_b45###] becomes applicable. A family of discontinuous Galerkin (DG) schemes in space and time was studied where the innovation came through the discontinuous Galerkin discretization in space of the first-order formulation. By a consistent definition of the first-order spatial differential operators on broken polynomials spaces and the addition of boundary correction terms the mathematical evolutionary structure of the continuous problem was preserved on the fully discrete level. Well-posedness of the fully discrete system and error estimates were proved. The numerial evaluation of the approach and computational studies with comparison to the three-field formulation of [5 ###reference_b5###, 4 ###reference_b4###, 10 ###reference_b10###] remain a work for the future. Further, error control in higher order norms (like the usual DG norm, cf. [40 ###reference_b40###]) involving the broken gradient is of interest and remains a future task. The optimality of the error estimates (4.2 ###reference_###) and (4.23 ###reference_###) with respect to the rate of convergence in space still needs further elucidation. An improvement to convergence of order in space might become feasible, which is shown in [9 ###reference_b9###] for an equal-order approximation of the second-order in space displacement presssure formulation of (1.1 ###reference_###). However, the coupling mechanisms of the unknowns in the model equations and the abstract evolutionary Problem 2.4 ###reference_defi4### for the holistic vector of variables do not allow such an improvement in a straightforward and obvious manner. In our error analysis, the appearance of the interpolation error , involving first order derivatives, leads to an order reduction in space. For this, we refer also to the results in [10 ###reference_b10###] for the three-field formulation that also lack from optimality of the theoretical convergence rate in space, even though the latter is observed in numerical experiments."
58
+ }
59
+ ],
60
+ "appendix": [],
61
+ "tables": {},
62
+ "image_paths": {},
63
+ "validation": true,
64
+ "references": [],
65
+ "url": "http://arxiv.org/html/2311.01264v2"
66
+ }
20240620/2311.06530v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240620/2311.07230v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240620/2311.10433v2.json ADDED
@@ -0,0 +1,147 @@
1
+ {
2
+ "title": "Task Scheduling Optimization from a Tensor Network Perspective",
3
+ "abstract": "We present a novel method for task optimization in industrial plants using quantum-inspired tensor network technology.\nThis method allows us to obtain the best possible combination of tasks on a set of machines with a set of constraints without having to evaluate all possible combinations.\nWe simulate a quantum system with all possible combinations, perform an imaginary time evolution and a series of projections to satisfy the constraints.\nWe improve its scalability by means of a compression method, an iterative algorithm, and a genetic algorithm, and show the results obtained on simulated cases.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "The distribution of production\nresources is widely regarded as one of the most interesting and useful problems in industry [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###]. This distribution problem can be stated as a combinatorial problem, so its complexity scales exponentially with the problem\u2019s size. There exist some classical heuristic approximations which are capable to reduce the computational time so much, which has led them to be considered great applied methods, such as genetic algorithms [4 ###reference_b4###] or particle swarm optimization [5 ###reference_b5###].\nQuantum computing applied to industrial cases has become very interesting, due to its computational power. Some of the best-known and most promising quantum algorithms for combinatorial optimization are the Quantum Approximate Optimization Algorithm (QAOA) [6 ###reference_b6###], Variational Quantum Eigensolver (VQE) [7 ###reference_b7###] and the Quantum Annealing for Constrained Optimization (QACO) [8 ###reference_b8###]. However, these algorithms are limited due to the current Noisy intermediate-scale quantum (NISQ) state of small quantum computers with notable noise.\nDue to this, a great expectation has arisen with quantum-inspired methods, which are based on imitating certain quantum processes in classical systems to improve their performance. One example is digital annealing for Quadratic Unconstrained Binary Optimization (QUBO) problem solving [9 ###reference_b9###]. Another quantum-inspired branch is that of tensor networks [10 ###reference_b10###], a classical technology based on the use of linear algebra that allows us to simulate quantum systems both exact and approximately. Several algorithms are available in tensor networks to address various combinatorial optimization problems [11 ###reference_b11###, 12 ###reference_b12###, 13 ###reference_b13###]. However, it is desirable to have specialized algorithms for particular cases in order to reduce as much as possible their computational complexity and memory requirements and to improve their performance.\nA specific optimization problem for industrial processes is to assign the tasks to be performed to a set of machines, given a set of constraints on the tasks that can be performed by one machine depending on the task performed by a different machine. This case is interesting because we can extract both the execution times of the tasks and the constraints between them from a historical record of the corresponding manufacturing plant, without having to logically deduce them or having to perform tests.\nIn our work, we develop a quantum-inspired algorithm with tensor networks that is able to obtain a solution with the lowest cost that respects a given set of constraints. We also improved its scalability by adding an iterative method and a genetic algorithm. Our main novelty contribution is the union of heuristic algorithms and genetic algorithms with tensor network algorithms for solving combinatorial optimization problems."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II Description of the problem",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "II.1 Motivation",
21
+ "text": "The problem we want to solve is the distribution of tasks in a set of machines with some rules about multiple sets of tasks. That is, we have a set of machines and on each machine there are possible tasks, with an execution time for the task on machine , with and . We also have a set of directed rules over these task combinations. An example of rules would be: \u201cIf machine 0 has task 2 and machine 1 has task 4, machine 2 must have task 3.\u201d\nWe want the combination of tasks on machines that satisfies the rules with the lowest possible execution time. This is essential for increasing the productivity of industrial plants. Obviously we can introduce another indicator as cost instead of execution time, but we will work with this time to simplify the explanation.\nWe could also extend the problem to a case where each machine has a cost related to the previous one in the chain, but we will focus on the local case to better understand the rule system. We assume that there are no extra execution times due to the order of task execution or moving from one cycle to another. However, we will see how we can extend the method in Ssec. III.4.3 ###reference_.SSS3###."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "II.2 Formalism of the problem",
27
+ "text": "Combinations of tasks are expressed as a vector such that each element of the vector is the task executed on the machine of the corresponding position. Then,\n, would indicate that the machine 0 is assigned to task 1, the machine 1 is assigned to task 3, the machine 2 is assigned to task 2, and the machine 3 is assigned to task 3. In other words, is the task executed in machine .\nAs with any optimization problem, we also need a cost function, which is equal the total execution time. This can be written as a sum of the individual local costs:\nThe set of all rules is obtained by the union of all the individual rules. Each rule is denoted as , for , being the number of rules.\nThe rules are written as a list with two elements such that the first one is the conditional and the second one the conditioned. That is, for the above example of Ssec. II.1 ###reference_###,\n\u201cif machine 0 has task 2 and machine 1 has task 4, then machine 2 must have task 3.\u201d, the rule string would be: , where \u2032\u2032 implies that this machine does not condition the other ones, but it can be the conditioned one."
28
+ },
29
+ {
30
+ "section_id": "3",
31
+ "parent_section_id": null,
32
+ "section_name": "III Resolution algorithm",
33
+ "text": "The core of our algorithm is based on the simulation of a quantum system with qudits by means of a tensor network, taking advantage of the non-unitary operations that the latter allow us. We rely on improving the algorithm of [13 ###reference_b13###] for the implementation of the restrictions to reduce complexity and memory cost, combining it with the algorithm presented in [14 ###reference_b14###] for the minimization part and obtaining the final result.\nOur algorithm consists of:\nCreating the initial uniform superposition state on qudits: .\nApplying an imaginary time evolution, so that the amplitude of a combination depends on its cost and a damping constant : .\nDiscarding states by applying the rules by means of projectors: .\nTo measure and extract the basis state with maximum amplitude of the superposition.\nIn order to reduce the number of tensors to use, the number of operations and the amount of memory to use, we perform several steps of compression of the rules applied to the tensor network to be contracted."
34
+ },
35
+ {
36
+ "section_id": "3.1",
37
+ "parent_section_id": "3",
38
+ "section_name": "III.1 Pre-processing",
39
+ "text": "Before executing the core solver itself, we pre-process the problem. The first step is normalize the costs. We want the minimum possible cost to be -1 and the maximum +1. Thus, the time evolution maintains the amplitudes between and for the amplitudes, being the damping constant. This helps us not to have to adjust for each problem manually. We only have to rescale all the times of the problem with the sum of the maximum and the sum of the minimum times of each machine.\nThe next step is to sort the machines for using the less tensors possible in the contraction. This will be better understood in Ssec. III.4.3 ###reference_.SSS3###. To do this, we look at which are the machines that appear in more rules, placing them closer together in the most central area of the sequence in an orderly way. This means, if machine 0 appears in 3 rules, machine 1 appears in 2 rules, machine 2 in 1 rule, machine 3 in 7 rules and machine 4 in 3 rules, we will sort the machines in the order , changing for example the state to .\nThe reason for doing this is that machines that appear more in the constraints will have more connections in the tensor network with other machines than those that appear less. In this way, we will be able to reduce the number of tensors by eliminating many tensors whose only purpose is to connect two distant machines. From here on, we take all states, times and rules as already organized and normalized."
40
+ },
41
+ {
42
+ "section_id": "3.2",
43
+ "parent_section_id": "3",
44
+ "section_name": "III.2 Coding in the quantum system",
45
+ "text": "Our encoding of the problem is the basis quantum states of qudits. That is, our quantum state would be, for example of Ssec. II.2 ###reference_###. Thus, each task is encoded as the state in the corresponding qudit. The dimension of the qudit is . From here, we follow the notation and methodology of the paper [14 ###reference_b14###], adding the extra steps necessary for this problem."
46
+ },
47
+ {
48
+ "section_id": "3.3",
49
+ "parent_section_id": "3",
50
+ "section_name": "III.3 Initial system and imaginary time evolution",
51
+ "text": "As we want to check all combinations of tasks, we make an superposition of all possible tasks and apply the imaginary time evolution. We want that the state after this steps being\nso that the higher the cost of the combination, the lower its amplitude exponentially. To do this, we use the first two layers of the method of paper [14 ###reference_b14###], adapted to our cost function removing the bond indexes in the evolution layer.\nHowever, since the costs depend only on the qudit and not on its neighbors, we can absorb the \u2018\u2019 layer of imaginary time evolution in the \u2018+\u2019 layer of superposition itself. Our state after minimization can also be expressed as\nso we can get it by means of the tensor product of vectors already minimized locally\nFor example, for a machine 0 with 3 possible tasks, the tensor + for the qudit 0 would be:\nIf we wanted to forcibly delete a task, we would only have to replace its corresponding evolution element with a 0 (equivalent to setting its cost to infinity).\nIf we have a case where the cost of each task on one machine depends on the task on the previous machine, we can directly use the initialization and evolution layers of [14 ###reference_b14###] by modeling that part of the problem as a nearest neighbor Quadratic Unconstrained Discrete Optimization (QUDO)."
52
+ },
53
+ {
54
+ "section_id": "3.4",
55
+ "parent_section_id": "3",
56
+ "section_name": "III.4 Rules",
57
+ "text": "In this subsection we explain how to apply the rules to the system. Here we first do a rearrangement and compression of rules, and then we apply them. The compression is done to reduce the memory scaling with the number of rules and the rearrangement is done to improve the compression. We perform these two steps to have a single layer of bond dimension that agglomerates rules instead of having layers of one rule each, where we would have a final bond dimension of . In this way we can exponentially reduce the bond dimension required for the calculation for each restriction in a simple way."
58
+ },
59
+ {
60
+ "section_id": "3.4.1",
61
+ "parent_section_id": "3.4",
62
+ "section_name": "III.4.1 Rule grouping",
63
+ "text": "First, we group the rules into sets that start and end on the same machines, and have the same conditioned machine, as this is compression requirement. In this way, we will have the groups of rules that we can condense in the next step.\nIn case the conditioned machine is previous to the first conditioning machine, we will join all the rules that have the same final conditioning machine and that conditioned machine, with the first conditioning machine after the conditioned one. This is the initial extreme case. Similarly, in case the conditioned machine is subsequent to the last conditioning machine, we will join all the rules that have the same initial conditioning machine and that conditioned machine, with the last conditioning machine prior to the conditioned one. This is the final extreme case."
64
+ },
65
+ {
66
+ "section_id": "3.4.2",
67
+ "parent_section_id": "3.4",
68
+ "section_name": "III.4.2 Rule condensation",
69
+ "text": "To do the rule condensation, we use each group of the previous step. Within each group we will generate subgroups of up to rules, with being the first (or last in the final extreme case) conditioning machine in the rule, so that the conditioning task on the first (or last in the final extreme case) machine is never repeated. We do this to avoid summing in the next step the compatible states differently according to their extremal relation to the constraint. We will understand this better in the next step.\nThere are actually better rule compression schemes, using complex techniques, but they are so complicated to generalize that they are left for a possible future study."
70
+ },
71
+ {
72
+ "section_id": "3.4.3",
73
+ "parent_section_id": "3.4",
74
+ "section_name": "III.4.3 Creating rule layers",
75
+ "text": "In order to impose such restrictions on the system, we convert the rules into a set of tensors, such as a Grover oracle circuit [15 ###reference_b15###]. The basic tensors are Ctrl, Cctrl, cProy, CcProy and Id, which we define later in this section. The tensors Ctrl and Cctrl are the controllers, while the cProy and CcProy are the so-called projectors. There are two types of indexes: vertical and horizontal (Fig. 1 ###reference_###).\n###figure_1### The vertical ones are the ones that go from the quantum state to the next rule, or between rules, or from rules to the output. We can see them as the timeline of a quantum circuit. The horizontals are the ones that connect the nodes of the same rule. These indexes send signals and information between nodes of the same rule.\n###figure_2### The key to this method is that the Ctrl and Cctrl tensors identify, based on the task associated with each conditioning machine, which rule is activated, thus telling the cProy or CcProy tensor to impose the correct task on the conditioned machine. Therefore, if the first Ctrl of the layer finds that its associated machine is in the task that indicates the rule 3, it will tell the following tensor to verify if its machine is in the task associated to the rule 3. If so, it will return the same value of 3 to the following tensor, and if not, it will return 0, associated to the fact that no rule is activated. And so on until reaching the cProy, which in case of receiving that all the conditioning machines are in the values of the rule 3, it will force its machine to be in the task of the machine conditioned in this rule, by means of a projection. If the projector tensor (CcProy in this case) has controls on both sides, both sides have to receive the same signal to project. If it receives 0 or different values on both sides, the conditions of the rule will not have been met, and therefore, the projection will not apply.\nTensors are described as:\nId (Identity): it only serves to transmit the signal on non-involved machines. Its non-zero elements (equal to 1) are Idi,j,k for 3 indexes with and Idi,j,k,l for 4 indexes, .\nCtrl(): its vertical indexes only pass the state, i.e. Identity. Its horizontal indexes send a 0 if the incoming signal through the verticals is not in the set of values for that conditional machine of the rules of the subgroup represented on this layer, and if it is the value .\nFor a Ctrl whose target are the states with rules, the non-zero elements (equal to 1) are\nCtrli,j,k, :\nif , then .\nif , then .\nCctrl(): its vertical indexes only pass the state, i.e. Identity. 
Its horizontal indexes send a 0 to the right if the incoming signal through the verticals is not in the set of values for that conditional machine of the rules of the subgroup represented on this layer, or if the signal on the left is different from signal and the state in this machine is , and if is received on the vertical and from the left.\nFor a Cctrl whose target are the states with rules, the non-zero elements (equal to 1) are\nCctrli,j,k,l, :\nif , then .\nif :\nif , then .\nif , then .\nCProy(): its vertical indexes perform the filtering so that, if the horizontal index is activated with , it only lets state pass through the verticals, and if is activated with 0, it applies an Identity.\nFor a CProy whose target are the states with rules, the non-zero elements (equal to 1) are CProyi,j,k, :\nif , .\nif , .\nCcProy(): its vertical indexes perform the filtering so that, if both horizontal indexes are activated with , it only lets state pass through the vertical ones, and if is activated with 0 or with two different signals, it applies an Identity.\nFor a CcProy whose target are the states with rules, the non-zero elements (equal to 1) are CcProyi,j,k,l, :\n, then .\n, then .\nThese 5 tensors are everything we need for our method. So, simply, by joining the rules, the joined set will be made in such a way that machine qudits have the following tensors applied to them:\nFirst machine: The tensor is a Ctrl or cProy with the being the values in that machine of each of the rules of the subset.\nSecond and next machines: The tensor is a Cctrl or CcProy with the being the values in that machine of each of the rules of the subset, or Id when the machine does not appear in the rule. It is important that the indexes receiving signals from the Cctrl have to be directed in the opposite direction to the CProy or CcProy. In other words, for the machines after the conditional machine, we will reverse the order of the horizontal indexes.\nFinal machine: same as the first one.\nNow with these tensors done, we create a layer joining by the horizontal indexes.\nIt is important to be aware of that a certain rule could have extra information. We can increase the cost of a particular state by changing the filtered element in the projector in the corresponding rule from 1 to the exponential with the extra cost (and ). This adds extra terms to the cost in a simple way. We can also allow the target machine to have a set of tasks instead of a single task by adding more non-null elements to the projectors."
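As a concrete example, here is one plausible numpy construction of the CProy tensor from its definition above: the two vertical indexes act as in-state and out-state, the horizontal index carries the rule signal, signal 0 gives the identity and signal m projects onto the conditioned task of rule m. The index ordering is an assumption, since the exact conventions were lost in extraction:

```python
# One projector tensor of the rule layer: CProy[i, j, k] with vertical
# indexes (i, j) and horizontal control k; k = 0 acts as the identity,
# k = m >= 1 keeps only the conditioned task s_m of rule m.
import numpy as np

def cproy(n_tasks, s):
    """s[m-1] is the conditioned task of rule m; signal 0 means 'no rule'."""
    T = np.zeros((n_tasks, n_tasks, len(s) + 1))
    T[:, :, 0] = np.eye(n_tasks)        # no rule active: identity
    for m, task in enumerate(s, start=1):
        T[task, task, m] = 1.0          # rule m active: keep only |task>
    return T

P = cproy(n_tasks=4, s=[3, 1])          # two condensed rules
print(P[:, :, 1])                       # projector onto task 3
```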
76
+ },
77
+ {
78
+ "section_id": "3.5",
79
+ "parent_section_id": "3",
80
+ "section_name": "III.5 Rule set creation",
81
+ "text": "Having the layers, we join them one after the other (Fig. 3 ###reference_###), and finally apply them to the initial system. It is important that when connecting the constraint layers, we try to make the tensor network as compact as possible. That is, that there are as few intermediate gaps as possible. This will help the contraction algorithms to reduce the memory cost.\nThis takes us to the rules operator\nwhere is the -th restriction tensor/operator.\nWith this we have a quantum state with superposition where each remaining element satisfies the rules.\n###figure_3### The final state is\nbeing the number of machines and the exponential the general evolution operator.\nIt can be seen as applying the network Fig. 3 ###reference_### to the initial nodes."
82
+ },
83
+ {
84
+ "section_id": "3.6",
85
+ "parent_section_id": "3",
86
+ "section_name": "III.6 Measurement",
87
+ "text": "Now we want to extract our final state with the maximum amplitude, as this is the one with the minimum cost. We can do this by means of partial traces. This is based on the fact that our system has a peak amplitude, that is, a state with sufficiently higher amplitude than the others. Therefore, when doing a partial trace, the maximum at each index should be the same as the global one. This whole method is explained in more detail in [14 ###reference_b14###].\nWe do this by connecting a set of nodes in uniform superposition at the end of the last rule layer, except for the index corresponding to the machine to be checked.\n###figure_4### In the resulting vector we look for the position of the element whose absolute value is the largest, being that position the correct task for machine 0. Once machine 0 has been determined, we do the same scheme again, but taking into account only the other machines (since we have already determined the first one), with the following modifications:\nWe eliminate all the rules in which machine 0 conditioned another machine and had to have a different value than the one found. This is because obviously this rule will never be activated.\nIf machine 0 conditions another machine and its value is the one found, we keep the rule eliminating the dependence on machine 0, since it will always have its approval.\nIf we have a rule in which machine 0 is the only conditioner for another machine and has the value obtained, this rule is eliminated and the task of the other machine is forced to be the one imposed by the rule. This is because it will always be active.\nIf a rule has machine 0 as conditional and its value is the obtained, it disappears, since we already have it.\nIf a rule has as conditioned the machine 0 with a value different to the one obtained, we replace the rule by one that makes that the conditioned combination cannot occur. This is because if we had found that combination, we would not have the value found.\nFor each machine that we determine, either by contraction or by this process, we have to repeat the process for that machine, adding what is new that we know. That is, in addition to the above, if we have a rule in which the determined machines are the only conditioners for another machine and they all have the values obtained, this rule is eliminated and the task of the other machine is forced to be the one imposed by the rule. This is because it will always be active.\nWith this method, each machine determination operation is less costly than the previous one, in addition to dealing with possible degeneration problems. The contraction of this tensor network can be by means of an algorithm that looks for the most efficient route of contraction or by first contracting all the tensors of the last machine, then this resulting tensor with all those of the previous one, and so on until the whole network is completed. The latter scheme will result, taking into account that all for simplicity, in a complexity between and . This will depend significantly on the particular case to be solved and the structure of the resulting layers for the constraints, as well as their ordering, so it could have a significantly lower complexity.\nTo improve speed, we can use the reuse of intermediate calculations, so that at each step the tensor that contracts all tensors of all subsequent machines is used. 
In this way, we will only have to contract it with local tensors of zeros or ones to update with the information of the previous step and contract it with the tensors of the machine to be determined in this step. However, since these tensor networks are generally irregular, we did not study this case in depth, so it will be left for a future study."
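In brute-force form, contracting all machines except one with uniform vectors is simply a sum of the amplitude tensor over the remaining axes; the following sketch shows the argmax readout for the first two machines, with a random non-negative toy tensor standing in for the evolved, projected state:

```python
# Measurement sketch: partial trace onto one machine plus argmax readout.
import numpy as np

rng = np.random.default_rng(3)
psi = rng.uniform(size=(4, 4, 4))          # toy amplitudes psi[x0, x1, x2]

v0 = np.abs(psi.sum(axis=(1, 2)))          # contract machines 1 and 2 uniformly
task0 = int(np.argmax(v0))
print(task0)

# fix machine 0 and repeat the scheme for machine 1 on the reduced tensor
v1 = np.abs(psi[task0].sum(axis=1))
print(int(np.argmax(v1)))
```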
88
+ },
89
+ {
90
+ "section_id": "4",
91
+ "parent_section_id": null,
92
+ "section_name": "IV Execution algorithms",
93
+ "text": "Finally we have to contract a 2D-grid network, which scales exponentially with the number of rules. One way to avoid excessive scaling was our rule compression, but it is still excessive.\nTo compensate for this scaling as a function of the number of rules, we propose the following two schemes: the iterative one and the genetic one."
94
+ },
95
+ {
96
+ "section_id": "4.1",
97
+ "parent_section_id": "4",
98
+ "section_name": "IV.1 Iterative algorithm",
99
+ "text": "This method is based on not applying all the rules at once at the beginning. We only apply one compressed rule, and in case of failure, we apply the following rules iteratively. The method is:\nFirst we check if the simple minimum (the minimum on each machine) satisfies all the constraints. If it does, we finish, meaning this is the case where the global minimum of the unrestricted problem is the same as the global minimum of the problem restricted by the chosen set of rules. If it does not, we continue.\nNow we apply the rule that eliminates the state we have found, adding also the rules that are compatible with this one to compress them (and take advantage to eliminate more states and iterations). If the result satisfies all constraints, we are done. If not, we repeat this step until the result satisfies them or until we reach the limit of iterations.\nIf we reach the limit of iterations, that is, the groups of rules we can add, we end up with a negative result: no solution that follows all rules of the chosen set has been found.\nThe goal of this algorithm is to find whether the minimum of the intersection of certain regions bounded by the rules corresponds to the minimum of the intersection of the regions bounded by all the rules. It is important to note that this is an heuristical method.\nHowever, in most cases we could get the right result in a few iterations, each one being more expensive than the previous one."
100
+ },
101
+ {
102
+ "section_id": "4.2",
103
+ "parent_section_id": "4",
104
+ "section_name": "IV.2 Genetic algorithm",
105
+ "text": "This algorithm is another version of solving the contraction problem. It is based on initialising only one set of tasks on the initial nodes, i.e., making only a subset of tasks on each machine activable, and applying only one set of rules.\nEach individual of the population has the following attributes:\nChromosomes: tasks whose amplitudes are non-zero. That is, tasks that can be activated.\nPhenotype: rules that are included in the individual.\nResult: state obtained after applying the tensor network method. This can also be calculated with the iterative method in case of having many rules. The initialization phases take into account only the activable tasks in order to remove as much as possible the available states.\nCost: cost of your result.\nTimes: the times of the non-activated tasks are set to infinity so that they are not taken into consideration.\nThe method is as follows:\nWe initialize the population at random.\nWe calculate the outcome for each individual and keep only a proportion of the best individuals using the cost function.\nWith these individuals we perform a crossover of pairs of parents from the previous generation and correct the rules.\nWe create from the parents a set of mutated individuals and correct the rules. Mutations are performed by changing a task on the same machine at random.\nWe check how many individuals are repeated and eliminate them.\nWe add new randomly created individuals to make up for the missing ones to have the number of individuals we should have.\nWe repeat steps until convergence criteria is met or the chosen maximum number of generations is reached.\nThe chromosome crossover is performed by exchanging, within the same machine, the possible active tasks a number of times we choose between the two individuals, at random.\nRule correction is applied to each individual each time it is created. This protocol is based on checking the rules already placed on the individual, if the constraints of the rules were included in the active tasks. If not, the rule is removed. For individuals with missing rules in their heritage, we add new rules that are compatible.\nThe output of the algorithm is a set of possible results sorted by how good they are."
106
+ },
107
+ {
108
+ "section_id": "5",
109
+ "parent_section_id": null,
110
+ "section_name": "Results",
111
+ "text": "We have tested our algorithm by creating simulated cases for the three cases of iterative, genetic, and the combination of both. In all of them we have seen that a is enough to obtain the desired results.\nCase generation is performed by choosing number of machines, number of tasks per machine and number of rules. First we create a list of random times with uniform distribution between 0 and 1 for each machine and task. Then we create a set of rules that are compatible with each other, i.e. the same condition cannot lead to 2 different tasks on the same machine. We could still work with incompatible tasks, we would simply make those states disappear.\nIn the iterative case, we have seen that we can reduce the number of rules to be applied so that our limit of rules to be applied is significantly lower. We have obtained the correct result for cases of 10 machines with 10 tasks each and 30 rules applied on all occasions when we have not had an memory overflow. Cases in which the excess memory has been exceeded have subsequently been treated with the genetic method.\nIn the genetic case we have been able to obtain the best result for cases of 10 machines with 10 tasks each and 1000 rules, using 10 individuals, 2 tasks on the chromosomes, 6 rules each, 1 mutation and a survival ratio of . The correct results have been obtained in 3 to 7 generations in most cases.\nIf we use both algorithms together, we see that we can go up to 10 rules per individual, but in the sizes used we have not seen much influence. Of course, the execution was faster. It would probably be better for larger cases, with more rules per individual. However, we have not analyzed larger cases because of the complexity of generating the list of rules that are compatible with each other.\nAll tests have been performed at Google Colab in August 2023 with the TensorNetwork library [16 ###reference_b16###]."
112
+ },
113
+ {
114
+ "section_id": "6",
115
+ "parent_section_id": null,
116
+ "section_name": "VI Conclusions",
117
+ "text": "We have developed a quantum inspired algorithm for combinatorial optimization and applied it to the industrial case of machine task distribution. We have brought together different methodologies, which can serve as inspiration for analogous algorithms for quantum computing, which solve the problems arising from the initial algorithm.\nThe method can also be extended in future works for other types of bidirectional rules or restrictions where only certain combinations are possible.\nAnother future study would be to further analyze the algorithm to improve its computation, for example by using Matrix Product State (MPS/TT) compression [17 ###reference_b17###, 18 ###reference_b18###] when rules are applied or by compressing the rules in a better way.\nThe general algorithm could also be tested for different significant problems, such as the Travelling Salesman Problem or the Job Shop Scheduling Problem."
118
+ }
119
+ ],
120
+ "appendix": [],
121
+ "tables": {},
122
+ "image_paths": {
123
+ "1": {
124
+ "figure_path": "2311.10433v2_figure_1.png",
125
+ "caption": "Figure 1: Types of indexes of the nodes of a rule layer.",
126
+ "url": "http://arxiv.org/html/2311.10433v2/x1.png"
127
+ },
128
+ "2": {
129
+ "figure_path": "2311.10433v2_figure_2.png",
130
+ "caption": "Figure 2: Name of the indexes for the tensors Ci\u2062j\u2062ksubscript\ud835\udc36\ud835\udc56\ud835\udc57\ud835\udc58C_{ijk}italic_C start_POSTSUBSCRIPT italic_i italic_j italic_k end_POSTSUBSCRIPT and Ci\u2062j\u2062k\u2062lsubscript\ud835\udc36\ud835\udc56\ud835\udc57\ud835\udc58\ud835\udc59C_{ijkl}italic_C start_POSTSUBSCRIPT italic_i italic_j italic_k italic_l end_POSTSUBSCRIPT.",
131
+ "url": "http://arxiv.org/html/2311.10433v2/x2.png"
132
+ },
133
+ "3": {
134
+ "figure_path": "2311.10433v2_figure_3.png",
135
+ "caption": "Figure 3: Layering of compressed rules.",
136
+ "url": "http://arxiv.org/html/2311.10433v2/x3.png"
137
+ },
138
+ "4": {
139
+ "figure_path": "2311.10433v2_figure_4.png",
140
+ "caption": "Figure 4: Tensor network performing the simulation and traces for machine i=0\ud835\udc560i=0italic_i = 0 amplitudes.",
141
+ "url": "http://arxiv.org/html/2311.10433v2/x4.png"
142
+ }
143
+ },
144
+ "validation": true,
145
+ "references": [],
146
+ "url": "http://arxiv.org/html/2311.10433v2"
147
+ }
20240620/2311.11900v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240620/2311.13564v2.json ADDED
@@ -0,0 +1,335 @@
1
+ {
2
+ "title": "High order universal portfolios",
3
+ "abstract": "The Cover universal portfolio (UP from now on) has many interesting theoretical and numerical properties and was investigated for a long time. Building on it, we explore what happens when we add this UP to the market as a new synthetic asset and construct by recurrence higher order UPs.\nWe investigate some important theoretical properties of the high order UPs and show in particular that they are indeed different from the Cover UP and are capable to break the time permutation invariance.\nWe show that under some perturbation regime the second high order UP has better Sharp ratio than the standard UP and briefly investigate arbitrage opportunities thus created. Numerical experiences on a benchmark from the literature confirm that high order UPs improve Cover\u2019s UP performances.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "",
9
+ "text": "The introduction of the Universal Portfolio (abbreviated \"UP\" from now on) algorithm in [2 ###reference_b2###] generated a considerable amount of interest in the quantitative portfolio management community because of its theoretical and practical appeals: the universal portfolio\ncan be constructed without any\ninformation on the future price evolution nor any statistical assumption \nbut is proven to reach asymptotically a performance compatible with the best constant rebalanced portfolio (denoted by \"BCRP\" in the sequel) chosen with the benefit of hindsight [2 ###reference_b2###, Thm. 6.1 p.13] 111In more precise terms it was proved under some technical assumptions that , see Remark 1 ###reference_ark1### for notations. and in particular is asymptotically\nnot worse\nthan any individual asset. While the BCRP cannot be implemented because requires future data, the UP is implementable at each time step.\nTested on some simple benchmarks this strategy proved to be efficient if given enough time to reach the asymptotic regime.\nSeveral works explored various aspects of the theory:\n[9 ###reference_b9###]\nwrote a continuous time version and gave further results under log-normal assumptions,\n[7 ###reference_b7###] proposed a online reinforcement learning-style version of the algorithm,\n[12 ###reference_b12###] continued in this direction under the assumption of moving average reversion while [1 ###reference_b1###] investigated the theoretical and practical impact of transaction costs.\nA connection with the general framework of stochastic portfolio theory see [6 ###reference_b6###] was proposed by [4 ###reference_b4###] together with a comparison with the the \"num\u00e9raire\" portfolio.\nA more general view on the learning rate is presented by [18 ###reference_b18###].\n[3 ###reference_b3###]\ncame back to the subject opening the discussion of how to incorporate additional side information and later [16 ###reference_b16###] explored derivative pricing related to the UP.\nOn the computational side, [10 ###reference_b10###] showed that an astute sampling of the set of Constant Rebalanced Portfolios (\"CRP\" from now on) can give rise to efficient algorithms for universal portfolios; and the work continues\nup to this day with [17 ###reference_b17###] investigating the optimal way to assign weights to different CRPs.\nFor further results see also the review of [11 ###reference_b11###].\nIt is thus clear that the universal portfolio can provide interesting insights into investing strategies; given its importance, it is then natural to see the universal portfolio as a kind of synthetic asset (to be added to the same market as the primary assets used to build it) in the same vein as market indices and associated ETFs are nowadays an important instruments for market gauging.\nFor a general set of market assets denote by the universal portfolio associated to .\nA natural question is what happens if we add to the market .\nOne can define recursively the universal portfolio\n\nas the\nuniversal portfolio of the market to which we add the synthetic asset :\n ; this procedure can be continued recursively :\n\nand so on.\nAs a matter of vocabulary, we will call such portfolios \u2019high order universal portfolios\u2019, abbreviated HOUP.\nSeveral questions are now in order :\ndo , bring anything new with respect to i.e., are they different from ?\nif we iterate this construction times resulting in the -th order portfolio, how does its performance compares with that of ?\nThe purpose of 
this paper is to answer such questions. The outline of the work is the following : in section 2 ###reference_### we introduce formally the high order universal portfolios followed in section 3 ###reference_### by some theoretical results. In section 5 ###reference_### we present the performance of the HOUPs on several benchmarks from the literature followed in section 6 ###reference_### by concluding remarks."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "",
15
+ "text": "Consider a set of financial assets and a time frame of instants 222Of course, everything said here can be extended to a general time frame\n at the price of writing instead of and instead of \u2026 ; denote by the price of the -th asset at time ; by convention for all .\nIntroduce the price relatives\n for any 333It is assumed that all asset prices are strictly positive at any time.. In financial jargon quantities are also called \"returns\" of the assets.\nA constant rebalanced portfolio (CRP) is defined by a set of positive weights that sum up to one. For each such the portfolio is such that at any time the quotient of the wealth invested in the asset with respect to total portfolio value is . In financial literature CRP is also called a \u2019Constant Mix\u2019 portfolio.\nNote that CRP is a dynamic investment strategy because at any time one has to rebalance the allocation that may have drifted because of the change in prices: imagine for instance a market with assets and a allocation at . If at one of the assets rises significantly and the other decreases significantly, the amount of wealth invested in the first asset will become much larger than that invested in the second asset. In this case a rebalancing is done at the prices available at time to restore the proper allocation proportions . Note in particular that CRP is distinct from the so-called \"Buy&Hold\" (also known as \u2019Split-and-Forget\u2019) portfolio that invests at equal parts of the initial wealth in each asset and do not trade at all afterwards (no rebalancing).\nSupposing a portfolio starting from value at time and\ndenoting the vector with components , ,\nthe value at time of the portfolio will be :\nIn \"returns\" formulation this reads\n.\nDenote the unit simplex of dimension :\nWe will introduce a distribution on ; in the initial proposal Cover used the uniform distribution but in general Dirichlet laws have also been advocated. All that is said in the following extends to any of these distributions but for simplicity we will use the uniform law over and denote or the average over it.\nThe universal portfolio for the market is defined to have the allocation at time :\ni.e. the allocation is a weighted average of with weights proportional to the performances up to time .\nFor convenience, we denote the universal portfolio thus defined and in particular will be its value at time . Note that the construction of does not requires any forward information on the market and can be realized on the fly as time advances.\nLet be a market and denote its universal portfolio. Then order universal portfolios of the market are denoted and defined recursively by :\nIt can be proved from (5 ###reference_###) and was documented in in [2 ###reference_b2###] that the value of the universal portfolio is the average of the values of all possible CRPs :\nThis property is the basis of its asymptotic performance because any \nthat is optimal in hindsight (i.e., =BCRP) corresponds to some with and\nwill end up imposing its growth rate to all other members of the average above.\nIn fact as put by [12 ###reference_b12###] the UP operates as a Fund of Funds (FoF), each elementary fund corresponding to a strategy. In such a view, the ) is already a member of the UP portfolio, so the FOF will benefit from its gains.\nSince we are enlarging the market at any step, any will keep the same property of optimality (this can be formalized mathematically). 
In particular, since is in the market used to obtain we expect that the performance of is at least as good as that of and in general we expect the performance to improve when increases. This theoretical point will be checked empirically in section 5 ###reference_###.\nTo implement for one does not need anything more than the access to the market as was the case for .\nAdding UP to the initial market is a \"thought experiment\", in practice\neach can be expressed as a portfolio containing only assets of the initial market ; note however that in general is not a CRP because from equation (3 ###reference_###) is not necessarily constant in time (neither for UP nor for , ). In particular, while equation (5 ###reference_###) shows that for , when (5 ###reference_###) involves an average over and does allow to conclude that is necessarily bounded by ().\nFormula (5 ###reference_###) can be used as definition of the Universal portfolio when the measure over the unit simplex is not uniform.\nFor instance [3 ###reference_b3###] use a measure. Each measure will give another \u2019flavor\u2019 of Universal portfolio.\nLet us take the two extreme ones: the distribution is only supported in the vertices of the simplex so in this case the UP is simply the \"Buy&Hold\" portfolio.\nOn the contrary, when the distribution is increasingly concentrated at the center of the simplex. In this case, the same formula says that in the limit the Universal portfolio is the uniform ."
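To make the definitions above concrete, here is a minimal numerical sketch (our own illustration, not code from the paper) of the CRP average identity (6) and of the sequential allocation rule (3), for a hypothetical two-asset market; the price relatives, sample size and variable names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-asset market of price relatives x[t, i] = S_i(t) / S_i(t-1) (values are ours).
x = np.array([[2.0, 0.5],
              [0.5, 2.0],
              [2.0, 0.5]])
T, d = x.shape

# Sample the uniform law on the simplex: normalized independent exponentials.
n = 100_000
b = rng.exponential(size=(n, d))
b /= b.sum(axis=1, keepdims=True)

# Identity (6): the UP value is the simplex average of all CRP values.
crp_finals = np.prod(b @ x.T, axis=1)        # wealth of each sampled CRP at time T
print("UP(T) as average of CRPs:", crp_finals.mean())

# Sequential view, equation (3): the UP allocation at time t is the average of the
# CRP weights b, weighted by each CRP's performance up to time t.
wealth = np.ones(n)                           # running CRP wealths
up_value = 1.0
for t in range(T):
    alloc = (wealth[:, None] * b).sum(axis=0) / wealth.sum()
    up_value *= alloc @ x[t]                  # apply the period-t return to the UP
    wealth *= b @ x[t]                        # update each CRP's wealth
print("UP(T) built on the fly:   ", up_value)  # telescopes to the average above
```

The second loop also makes explicit that the UP needs no forward information: the allocation at time t uses only wealths accumulated up to time t.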
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "",
21
+ "text": ""
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "",
27
+ "text": "Besides the optimality discussed in remark 1 ###reference_ark1###\nan important preliminary question is whether is different at all from , i.e., does the introduction of the synthetic asset brings any new information and changes the\nway to compute the universal portfolio.\nNote that intuitively, since universal portfolios are obtained by averaging existing assets,\none could expect that introducing the average into the market would not change anything at all. The somewhat counter-intuitive result is that averaging is done non-linearly and therefore computing high order universal portfolios enrich the set of possible strategies. The formal answer is given in the following proposition :\nFor a general market the higher order universal portfolios , are not all equal to .\n(other measures on the simplex: follow-up)\nIf we recall the remark 3 ###reference_ark3### we understand that this assertion is not really trivial. Indeed, it can be seen that when all , will all be equal to the \"Buy&Hold\" portfolio. When all will be equal to the uniform . So, while both extreme cases are degenerate we prove that for (that is the uniform distribution) the procedure does create new portfolios.\nThe formulation of the result appears somehow awkward, let us explain why it has its present form. Take the statement : . For given horizon , the equalities\n for all form\na system of nonlinear equations with unknowns . Since the system is under-specified (i.e., there are more unknowns than equations),\nit is very likely that\nfor given, fixed, and \nthere exists at least one market where we have\n for all times up to time (and even for ).\nIt may even be possible that for all and all ! So some care is needed when dealing with this statement.\nWe will proceed by contradiction. Suppose on the contrary that for all possible markets\n\nand all , all\n are equal to . In particular consider the simple situation when .\nTo ease notations we will denote , for all .\nIn this case the expectation with respect to the uniform law over the simplex can be computed as an integral over taking .\nAccording to its definition, the allocation of is at time ; at time its allocation will change to :\nIts value will be which is the same as its first price relative that we will denote : . We will denote in general by the price relative of the first universal portfolio . The value at time of is then :\nThis informs that , the price relative of the portfolio from time to time is\nFor convenience the price relative of the market are recalled in the table 1 ###reference_###.\nOn the other hand, when is added to the market, the value of at time\n will be (starting value), at time will be and at time\n will be :\nwhere we used relation , equation (8 ###reference_###) and\nidentities\nThe computation of the last identity was checked in full detail by hand by the author. However intermediary steps\nare cumbersome and\nwill not be presented here but the reader can\nobtain independent verification of the result using\nthe small symbolic Python program below :\nSince formulas (7 ###reference_###) and (9 ###reference_###) show that in general we obtain a contradiction with the assumption\nthat all equal , which\nends the proof.\n\u220e"
28
+ },
29
+ {
30
+ "section_id": "3.2",
31
+ "parent_section_id": "3",
32
+ "section_name": "",
33
+ "text": "To explain the next result we recall [2 ###reference_b2###] that states that the return of the UP is invariant with respect to permutations of the time instants. More precisely, consider a permutation of \nand the market that contains assets with prices (we keep the convention ) and factors\n such that for any , . It was proven that :\nAlthough this is an interesting property this means that UP cannot, by design, exploit time correlations or the time ordering of the factors. Some attempts to settle this problem came from the inclusion of side information, see [3 ###reference_b3###]. We take here a different view and show that higher order UP break the permutation symmetry and thus can possibly exploit the time coherence of the price relatives.\nThere exist at least one market , one order and one permutation of such that :\nOf course, the permutation invariance may still occur for some particular values of factors, see discussion in Remark 5 ###reference_ark5###;\nit is even natural to expect that varying the factors one may occasionally match the values of\n and . This explains the form of the proposition (12 ###reference_###).\nWe will simply present a counter-example for the case and the market in table 2 ###reference_###; first two columns are given, the third is derived from them using previous formulas for and . To compute we use that the value of at time is given by ( is uniform in ) :\nWe now take the permutation that exchanges and and consider the market with entries in table 3 ###reference_###.\nNote that, as expected, even if the entries in table 3 ###reference_### are not permutation of the entries in table 2 ###reference_###, the final value of is the same at time for both markets as proved by Cover: .\nWith these provisions one can compute the values of the second order universal portfolio for the two markets at time , which will be :\nTo this end following formulaes are used for :\nWe obtain for market : while for its permutation\n we obtain , which are different; the proof is complete.\n\u220e"
34
+ },
35
+ {
36
+ "section_id": "4",
37
+ "parent_section_id": null,
38
+ "section_name": "",
39
+ "text": "We will consider now the continuous time versions of the high order universal portfolios and will work in a perturbative regime around a reference dynamics.\nMore specifically, assume that the market containing assets is such that :\nThe perturbation parameter will be explained later.\nHere are continuous deterministic functions and are integrable martingales with finite quadratic variation starting from at time . We will not consider singular settings so we suppose that regularity conditions required to write these stochastic differential equations are satisfied, see [15 ###reference_b15###, 14 ###reference_b14###] for details.\nIn particular the setting of [9 ###reference_b9###, equation 2.1] is realized when\n with a constant matrix, and independent Brownian motions.\nThe denoted, as before, by follows the dynamics\nwhich leads to\nHere and in the following for two martingales , with finite quadratic variation\nwe denote their quadratic covariation at time .\nWe will compare below some properties of for \nin the limit where\n are close to some and are small; more precisely, we will consider some perturbation parameter and assume :\nWe will prove a result stating that has better Sharpe ratio than\n.\nConsider the market above and\ndenote ,\n.\nThen, up to the first order in the Sharpe ratio of the log-return of is better (i.e., larger)\nthan the Sharpe ratio of the log-return of .\nTo ease notations we will write from now on sometimes instead of\n.\nIn the proof of proposition 3 ###reference_orem3### we will need the following\ntechnical lemmata, one that involves integration constants over and another some properties of the variances of stochastic processes.\nLet , , be martingales starting from . Denoting the variance operator, the function :\nis increasing with respect to for all .\nThe variance can be written as\nThis is a second order polynomial in ;\nthe first order term is\n;\nthe coefficient of is thus positive.\nThe polynomial is therefore a constant plus times a positive coefficient hence the conclusion.\n\u220e\nSuppose and let be the uniform measure on the dimensional unit simplex denoted defined in (2 ###reference_###).\nDenote\nThen for any :\nBy symmetry all are equal for . Since their sum is , the first identity follows.\nFor the second identity denote . Of course, for all we have .\nLet us now compute . We will sample as follows: we sample at random uniform from , add to their set the values and and order the set to obtain . Then the law of is uniform over . In particular . The minimum can be any of the , so by symmetry :\n.\nThis means that .\nOn the other hand, by symmetry :\n and the value of follows together with the last identity.\n\u220e\nLet us denote, for simplicity, , . From the previous relations and equation (5 ###reference_###) we obtain :\nThe next steps of the proof are as follows:\n\u2013 step A : use equation (24 ###reference_###) in order to obtain the SDE for \nup to order in ;\n\u2013 step B : adjoin to the market and do again a whole cycle of computations in order to obtain the log-dynamics ;\n\u2013 step C : compare and to obtain the insights stated in the proposition.\nFor step A we note that the term is smooth so its quadratic covariation with the other terms is null and will just appear as an additive part in . On the other hand, so\n. We obtain\nWe obtain finally :\nor, in equivalent form :\nWe continue with step B. 
The portfolio weights are now in of which the last coordinate indicates how much is allocated to (that has been added to the market ). As before :\nwhere we used the definition of and and the notation\n.\nStraightforward computations based on lemma 23 ###reference_### show that\n\n, with .\nWe continue and obtain, after some computations similar to the one before :\nWe continue by computing to the first order in . Again, the computations are not difficult but rather lengthy and we only give the result :\nThis equation looks like (27 ###reference_###) with the only exception that the last term has a smaller coefficient : instead of .\nThis remark is exploited in the step C of our proof to which we turn now.\nWe will work with the log-returns associated with and . For a general Ito process with a\nmartingale starting from (and under suitable regularity assumptions),\n which means that the mean is\n and the standard deviation\n.\nSo here, to the first order in the means of\n and are the same\nOn the other hand note that the stochastic part of and can be written\nas for some constants and with (constant corresponds to and to ).\nThen from the lemma 4 ###reference_orem4###\nit follows that the variance hence the standard deviation of is smaller than that of :\nThus the Sharpe ratio of is larger than that of which ends the proof.\n\u220e"
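A quick Monte Carlo check (ours, not from the paper) of the simplex-sampling construction used in the second lemma: the spacings of n-1 sorted uniforms, with 0 and 1 appended, are uniform on the simplex, so each weight has mean 1/n and the minimal weight has mean 1/n^2. The dimension and the number of trials below are arbitrary choices for the example.

```python
# Monte Carlo check (ours) of the simplex-sampling construction in the lemma.
import numpy as np

rng = np.random.default_rng(1)
n, trials = 4, 200_000                       # n weights; trial count is arbitrary
u = np.sort(rng.random(size=(trials, n - 1)), axis=1)
pad = np.concatenate([np.zeros((trials, 1)), u, np.ones((trials, 1))], axis=1)
b = np.diff(pad, axis=1)                     # spacings: uniform on the n-simplex
print(b.mean(axis=0))                        # each entry close to 1/n = 0.25
print(b.min(axis=1).mean())                  # close to 1/n**2 = 0.0625
```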
40
+ },
41
+ {
42
+ "section_id": "5",
43
+ "parent_section_id": null,
44
+ "section_name": "",
45
+ "text": ""
46
+ },
47
+ {
48
+ "section_id": "5.1",
49
+ "parent_section_id": "5",
50
+ "section_name": "",
51
+ "text": "Although some algorithms exists to compute the Cover UP using combinatorial identities, see [3 ###reference_b3###] for the case of the Dirichlet(1/2,\u2026,1/2) distribution, they are known to behave bad with the dimension and the number of data points .\nOn the other hand, Monte Carlo approaches have been shown to perform well, see\n[1 ###reference_b1###, 8 ###reference_b8###] and we will take this choice here. Note also the special\nsampling like in [10 ###reference_b10###] that obtain polynomial complexity\nwhen sampling from a non-uniform distribution.\nWe resort then to a straightforward loop over by evaluating the simplex averages in (3 ###reference_###). Therefore, to compute Cover UP :\nwhen there are exactly two assets the expectation with respect to the uniform law over the simplex can be computed as an integral over taking . In this case\nwe use a points Gauss-Legendre quadrature over the interval .\nDenote , the weights and points of the Gauss-Legendre quadrature; recall that and the quadrature is designed for functions defined over this interval; to compute the averages over we use, for any function the formula :\nwhen there are more than assets we draw random points\n\nfrom the unit simplex and replace the exact expectation with a sample average :\nThe points are drawn using the property that if are independent exponentially distributed variables then follows the uniform law on .\nTogether with relation (5 ###reference_###) this allows to compute for all . Once this first step done, we add to the market as\n\nand proceed recursively with the computation of all other for . If we use samples to compute the integral over the simplex, the complexity of an algorithm to compute for , will be at most\n."
52
+ },
53
+ {
54
+ "section_id": "6",
55
+ "parent_section_id": null,
56
+ "section_name": "",
57
+ "text": "We introduce in this paper the high order universal portfolios (HOUP) constructed from Cover\u2019s initial suggestion by recursive additions to the underlying market.\nWe discuss several theoretical questions and prove that HOUP are indeed distinct from UP and can break the time invariance symmetry featured by the Cover UP. The expected optimality of the HOUP with respect to the baseline UP was investigated\n; we proved theoretically that under some assumption (stated in the proposition 3 ###reference_orem3###) the Sharpe ratio increases. We next performed empirical tests\non a dataset from the literature. In many cases implementing HOUP is more rewarding than the UP; of course, some situations can occur when this is not the case but this first joint theoretical and numerical evidence appears positive. Further studies could shed additional light onto the performances of the high order universal portfolios introduced here and their applicability domain."
58
+ },
59
+ {
60
+ "section_id": "6.1",
61
+ "parent_section_id": "6",
62
+ "section_name": "",
63
+ "text": "The author does not declare any conflicts of interests. The research did not involve and human participants and/or animals."
64
+ }
65
+ ],
66
+ "appendix": [],
67
+ "tables": {
68
+ "1": {
69
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Price relatives of the market .</figcaption>\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.11\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T1.3.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S3.T1.3.1.2\">time transition</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.3.1.3\">Asset 1</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.3.1.4\">Asset 2</th>\n<th class=\"ltx_td ltx_nopad_r ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.3.1.1\">Asset 3 = \n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.7.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T1.4.2.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.5.3.2\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.6.4.3\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_t\" id=\"S3.T1.7.5.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.11.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.8.6.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.9.7.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.10.8.3\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S3.T1.11.9.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.11.10.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.11.10.1.1\"><span class=\"ltx_ERROR undefined\" id=\"S3.T1.11.10.1.1.1\">\\botrule</span></th>\n<td class=\"ltx_td\" id=\"S3.T1.11.10.1.2\"></td>\n<td class=\"ltx_td\" id=\"S3.T1.11.10.1.3\"></td>\n<td class=\"ltx_td\" id=\"S3.T1.11.10.1.4\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
70
+ "capture": "Table 1: Price relatives of the market ."
71
+ },
72
+ "2": {
73
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T2\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Price relatives of the market in proposition\u00a0(<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.13564v2#S3.E12\" title=\"In Proposition 2. \u2023 3.2 On the permutation invariance \u2023 3 Theoretical questions \u2023 High order universal portfolios\"><span class=\"ltx_text ltx_ref_tag\">12</span></a>).</figcaption>\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S3.T2.15\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T2.3.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S3.T2.3.1.2\">time transition</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T2.3.1.3\">Asset 1</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T2.3.1.4\">Asset 2</th>\n<th class=\"ltx_td ltx_nopad_r ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T2.3.1.1\">Asset 3 = \n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T2.7.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T2.4.2.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.5.3.2\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.6.4.3\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_t\" id=\"S3.T2.7.5.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.11.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S3.T2.8.6.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T2.9.7.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T2.10.8.3\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S3.T2.11.9.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.15.13\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S3.T2.12.10.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T2.13.11.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T2.14.12.3\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S3.T2.15.13.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.15.14.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S3.T2.15.14.1.1\"><span class=\"ltx_ERROR undefined\" id=\"S3.T2.15.14.1.1.1\">\\botrule</span></th>\n<td class=\"ltx_td\" id=\"S3.T2.15.14.1.2\"></td>\n<td class=\"ltx_td\" id=\"S3.T2.15.14.1.3\"></td>\n<td class=\"ltx_td\" id=\"S3.T2.15.14.1.4\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
74
+ "capture": "Table 2: Price relatives of the market in proposition\u00a0(12)."
75
+ },
76
+ "3": {
77
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T3\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Price relatives of the market in proposition\u00a0(<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.13564v2#S3.E12\" title=\"In Proposition 2. \u2023 3.2 On the permutation invariance \u2023 3 Theoretical questions \u2023 High order universal portfolios\"><span class=\"ltx_text ltx_ref_tag\">12</span></a>).</figcaption>\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S3.T3.15\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T3.3.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S3.T3.3.1.2\">time transition</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T3.3.1.3\">Asset 1</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T3.3.1.4\">Asset 2</th>\n<th class=\"ltx_td ltx_nopad_r ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T3.3.1.1\">Asset 3 = \n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T3.7.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T3.4.2.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.5.3.2\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.6.4.3\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_t\" id=\"S3.T3.7.5.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.11.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S3.T3.8.6.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.9.7.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.10.8.3\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S3.T3.11.9.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.15.13\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S3.T3.12.10.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.13.11.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.14.12.3\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S3.T3.15.13.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.15.14.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S3.T3.15.14.1.1\"><span class=\"ltx_ERROR undefined\" id=\"S3.T3.15.14.1.1.1\">\\botrule</span></th>\n<td class=\"ltx_td\" id=\"S3.T3.15.14.1.2\"></td>\n<td class=\"ltx_td\" id=\"S3.T3.15.14.1.3\"></td>\n<td class=\"ltx_td\" id=\"S3.T3.15.14.1.4\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
78
+ "capture": "Table 3: Price relatives of the market in proposition\u00a0(12)."
79
+ },
80
+ "4": {
81
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T4\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 4: </span>Descriptions of the markets considered in section\u00a0<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.13564v2#S5.SS3\" title=\"5.3 \u2019Old NYSE\u2019 dataset \u2023 Part I title \u2023 High order universal portfolios\"><span class=\"ltx_text ltx_ref_tag\">5.3</span></a>.</figcaption>\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T4.4\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T4.4.5.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_tt\" id=\"S5.T4.4.5.1.1\">Market</th>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_tt\" id=\"S5.T4.4.5.1.2\">Asset</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_tt\" id=\"S5.T4.4.5.1.3\">Correlation</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_tt\" id=\"S5.T4.4.5.1.4\">individual</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_tt\" id=\"S5.T4.4.5.1.5\">Description</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.4.6.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T4.4.6.2.1\">number</th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.4.6.2.2\">names</td>\n<td class=\"ltx_td ltx_border_r\" id=\"S5.T4.4.6.2.3\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.4.6.2.4\">performances</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T4.4.6.2.5\">cf. <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.13564v2#bib.bib5\" title=\"\">5</a>]</cite>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S5.T4.1.1.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.1.1.2\">Commercial Metals</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.1.1.3\">0.064</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.1.1.4\">52.02</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_t\" id=\"S5.T4.1.1.5\">Volatile and stagnant</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.4.7.3\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r\" id=\"S5.T4.4.7.3.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.4.7.3.2\">Kin Ark</td>\n<td class=\"ltx_td ltx_border_r\" id=\"S5.T4.4.7.3.3\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.4.7.3.4\">4.13</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T4.4.7.3.5\">uncorrelated</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T4.2.2.1\">\n<span class=\"ltx_ERROR undefined\" id=\"S5.T4.2.2.1.1\">\\botrule</span>\n</th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.2.2.2\">Irocquois</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.2.2.3\">0.041</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.2.2.4\">8.92</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T4.2.2.5\">Volatile</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.4.8.4\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r\" id=\"S5.T4.4.8.4.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.4.8.4.2\">Kin Ark</td>\n<td class=\"ltx_td ltx_border_r\" id=\"S5.T4.4.8.4.3\"></td>\n<td 
class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.4.8.4.4\">4.13</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T4.4.8.4.5\">uncorrelated</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T4.3.3.1\">\n<span class=\"ltx_ERROR undefined\" id=\"S5.T4.3.3.1.1\">\\botrule</span>\n</th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.3.3.2\">Coca Cola</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.3.3.3\">0.388</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.3.3.4\">13.36</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T4.3.3.5\">Non-volatile</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.4.9.5\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r\" id=\"S5.T4.4.9.5.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.4.9.5.2\">IBM</td>\n<td class=\"ltx_td ltx_border_r\" id=\"S5.T4.4.9.5.3\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.4.9.5.4\">12.21</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T4.4.9.5.5\">highly correlated</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T4.4.4.1\">\n<span class=\"ltx_ERROR undefined\" id=\"S5.T4.4.4.1.1\">\\botrule</span>\n</th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.4.4.2\">Commercial Metals</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.4.4.3\">0.067</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.4.4.4\">52.02</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T4.4.4.5\">Volatile</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.4.10.6\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r\" id=\"S5.T4.4.10.6.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.4.10.6.2\">Meicco</td>\n<td class=\"ltx_td ltx_border_r\" id=\"S5.T4.4.10.6.3\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.4.10.6.4\">22.92</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T4.4.10.6.5\">uncorrelated</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.4.11.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T4.4.11.7.1\"><span class=\"ltx_ERROR undefined\" id=\"S5.T4.4.11.7.1.1\">\\botrule</span></th>\n<td class=\"ltx_td\" id=\"S5.T4.4.11.7.2\"></td>\n<td class=\"ltx_td\" id=\"S5.T4.4.11.7.3\"></td>\n<td class=\"ltx_td\" id=\"S5.T4.4.11.7.4\"></td>\n<td class=\"ltx_td\" id=\"S5.T4.4.11.7.5\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
82
+ "capture": "Table 4: Descriptions of the markets considered in section\u00a05.3."
83
+ }
84
+ },
85
+ "image_paths": {
86
+ "1(a)": {
87
+ "figure_path": "2311.13564v2_figure_1(a).png",
88
+ "caption": "Figure 1: The result of the high order universal portfolios for the\ntoy example in section 5.2.\nTop : the performance of the two assets and of the first 10101010 high order universal portfolios (U\u2062P1\ud835\udc48superscript\ud835\udc431UP^{1}italic_U italic_P start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT being the classical, Cover, UP). Middle : for convenience only Cover (U\u2062P1\ud835\udc48superscript\ud835\udc431UP^{1}italic_U italic_P start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT) and U\u2062P10\ud835\udc48superscript\ud835\udc4310UP^{10}italic_U italic_P start_POSTSUPERSCRIPT 10 end_POSTSUPERSCRIPT (the last) portfolios are plotted together with the individual assets. Bottom : the performance of U\u2062P\u2113\ud835\udc48superscript\ud835\udc43\u2113UP^{\\ell}italic_U italic_P start_POSTSUPERSCRIPT roman_\u2113 end_POSTSUPERSCRIPT depending on \u2113\u2113\\ellroman_\u2113 is seen to increase with \u2113\u2113\\ellroman_\u2113.",
89
+ "url": "http://arxiv.org/html/2311.13564v2/x1.png"
90
+ },
91
+ "1(b)": {
92
+ "figure_path": "2311.13564v2_figure_1(b).png",
93
+ "caption": "Figure 1: The result of the high order universal portfolios for the\ntoy example in section 5.2.\nTop : the performance of the two assets and of the first 10101010 high order universal portfolios (U\u2062P1\ud835\udc48superscript\ud835\udc431UP^{1}italic_U italic_P start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT being the classical, Cover, UP). Middle : for convenience only Cover (U\u2062P1\ud835\udc48superscript\ud835\udc431UP^{1}italic_U italic_P start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT) and U\u2062P10\ud835\udc48superscript\ud835\udc4310UP^{10}italic_U italic_P start_POSTSUPERSCRIPT 10 end_POSTSUPERSCRIPT (the last) portfolios are plotted together with the individual assets. Bottom : the performance of U\u2062P\u2113\ud835\udc48superscript\ud835\udc43\u2113UP^{\\ell}italic_U italic_P start_POSTSUPERSCRIPT roman_\u2113 end_POSTSUPERSCRIPT depending on \u2113\u2113\\ellroman_\u2113 is seen to increase with \u2113\u2113\\ellroman_\u2113.",
94
+ "url": "http://arxiv.org/html/2311.13564v2/x2.png"
95
+ },
96
+ "1(c)": {
97
+ "figure_path": "2311.13564v2_figure_1(c).png",
98
+ "caption": "Figure 1: The result of the high order universal portfolios for the\ntoy example in section 5.2.\nTop : the performance of the two assets and of the first 10101010 high order universal portfolios (U\u2062P1\ud835\udc48superscript\ud835\udc431UP^{1}italic_U italic_P start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT being the classical, Cover, UP). Middle : for convenience only Cover (U\u2062P1\ud835\udc48superscript\ud835\udc431UP^{1}italic_U italic_P start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT) and U\u2062P10\ud835\udc48superscript\ud835\udc4310UP^{10}italic_U italic_P start_POSTSUPERSCRIPT 10 end_POSTSUPERSCRIPT (the last) portfolios are plotted together with the individual assets. Bottom : the performance of U\u2062P\u2113\ud835\udc48superscript\ud835\udc43\u2113UP^{\\ell}italic_U italic_P start_POSTSUPERSCRIPT roman_\u2113 end_POSTSUPERSCRIPT depending on \u2113\u2113\\ellroman_\u2113 is seen to increase with \u2113\u2113\\ellroman_\u2113.",
99
+ "url": "http://arxiv.org/html/2311.13564v2/x3.png"
100
+ },
101
+ "2(a)": {
102
+ "figure_path": "2311.13564v2_figure_2(a).png",
103
+ "caption": "Figure 2: The result of the high order universal portfolios for the couple \u2019Iroquois\u2019-\u2019Kin Ark\u2019. Top : the performance of the two assets and of the first 10101010 high order universal portfolios (U\u2062P1\ud835\udc48superscript\ud835\udc431UP^{1}italic_U italic_P start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT being the classical, Cover, UP). It is seen that the performance is, as expected, above that of individual assets. Middle : for convenience only Cover (U\u2062P1\ud835\udc48superscript\ud835\udc431UP^{1}italic_U italic_P start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT) and U\u2062P10\ud835\udc48superscript\ud835\udc4310UP^{10}italic_U italic_P start_POSTSUPERSCRIPT 10 end_POSTSUPERSCRIPT (the last) portfolios are plotted together with the individual assets. Bottom : the performance of U\u2062P\u2113\ud835\udc48superscript\ud835\udc43\u2113UP^{\\ell}italic_U italic_P start_POSTSUPERSCRIPT roman_\u2113 end_POSTSUPERSCRIPT depending on \u2113\u2113\\ellroman_\u2113 which is seen to be increasing with \u2113\u2113\\ellroman_\u2113.",
104
+ "url": "http://arxiv.org/html/2311.13564v2/x4.png"
105
+ },
106
+ "2(b)": {
107
+ "figure_path": "2311.13564v2_figure_2(b).png",
108
+ "caption": "Figure 2: The result of the high order universal portfolios for the couple \u2019Iroquois\u2019-\u2019Kin Ark\u2019. Top : the performance of the two assets and of the first 10101010 high order universal portfolios (U\u2062P1\ud835\udc48superscript\ud835\udc431UP^{1}italic_U italic_P start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT being the classical, Cover, UP). It is seen that the performance is, as expected, above that of individual assets. Middle : for convenience only Cover (U\u2062P1\ud835\udc48superscript\ud835\udc431UP^{1}italic_U italic_P start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT) and U\u2062P10\ud835\udc48superscript\ud835\udc4310UP^{10}italic_U italic_P start_POSTSUPERSCRIPT 10 end_POSTSUPERSCRIPT (the last) portfolios are plotted together with the individual assets. Bottom : the performance of U\u2062P\u2113\ud835\udc48superscript\ud835\udc43\u2113UP^{\\ell}italic_U italic_P start_POSTSUPERSCRIPT roman_\u2113 end_POSTSUPERSCRIPT depending on \u2113\u2113\\ellroman_\u2113 which is seen to be increasing with \u2113\u2113\\ellroman_\u2113.",
109
+ "url": "http://arxiv.org/html/2311.13564v2/x5.png"
110
+ },
111
+ "2(c)": {
112
+ "figure_path": "2311.13564v2_figure_2(c).png",
113
+ "caption": "Figure 2: The result of the high order universal portfolios for the couple \u2019Iroquois\u2019-\u2019Kin Ark\u2019. Top : the performance of the two assets and of the first 10101010 high order universal portfolios (U\u2062P1\ud835\udc48superscript\ud835\udc431UP^{1}italic_U italic_P start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT being the classical, Cover, UP). It is seen that the performance is, as expected, above that of individual assets. Middle : for convenience only Cover (U\u2062P1\ud835\udc48superscript\ud835\udc431UP^{1}italic_U italic_P start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT) and U\u2062P10\ud835\udc48superscript\ud835\udc4310UP^{10}italic_U italic_P start_POSTSUPERSCRIPT 10 end_POSTSUPERSCRIPT (the last) portfolios are plotted together with the individual assets. Bottom : the performance of U\u2062P\u2113\ud835\udc48superscript\ud835\udc43\u2113UP^{\\ell}italic_U italic_P start_POSTSUPERSCRIPT roman_\u2113 end_POSTSUPERSCRIPT depending on \u2113\u2113\\ellroman_\u2113 which is seen to be increasing with \u2113\u2113\\ellroman_\u2113.",
114
+ "url": "http://arxiv.org/html/2311.13564v2/x6.png"
115
+ },
116
+ "3(a)": {
117
+ "figure_path": "2311.13564v2_figure_3(a).png",
118
+ "caption": "Figure 3: Same as in figure 2 for the couple \u2019Commercial Metals\u2019-\u2019Kin Ark\u2019.",
119
+ "url": "http://arxiv.org/html/2311.13564v2/x7.png"
120
+ },
121
+ "3(b)": {
122
+ "figure_path": "2311.13564v2_figure_3(b).png",
123
+ "caption": "Figure 3: Same as in figure 2 for the couple \u2019Commercial Metals\u2019-\u2019Kin Ark\u2019.",
124
+ "url": "http://arxiv.org/html/2311.13564v2/x8.png"
125
+ },
126
+ "3(c)": {
127
+ "figure_path": "2311.13564v2_figure_3(c).png",
128
+ "caption": "Figure 3: Same as in figure 2 for the couple \u2019Commercial Metals\u2019-\u2019Kin Ark\u2019.",
129
+ "url": "http://arxiv.org/html/2311.13564v2/x9.png"
130
+ },
131
+ "4(a)": {
132
+ "figure_path": "2311.13564v2_figure_4(a).png",
133
+ "caption": "Figure 4: Same as in figure 2 for the market 4444 in table 4 : \u2019Commercial Metals\u2019-\u2019Meicco\u2019.",
134
+ "url": "http://arxiv.org/html/2311.13564v2/x10.png"
135
+ },
136
+ "4(b)": {
137
+ "figure_path": "2311.13564v2_figure_4(b).png",
138
+ "caption": "Figure 4: Same as in figure 2 for the market 4444 in table 4 : \u2019Commercial Metals\u2019-\u2019Meicco\u2019.",
139
+ "url": "http://arxiv.org/html/2311.13564v2/x11.png"
140
+ },
141
+ "4(c)": {
142
+ "figure_path": "2311.13564v2_figure_4(c).png",
143
+ "caption": "Figure 4: Same as in figure 2 for the market 4444 in table 4 : \u2019Commercial Metals\u2019-\u2019Meicco\u2019.",
144
+ "url": "http://arxiv.org/html/2311.13564v2/x12.png"
145
+ },
146
+ "5(a)": {
147
+ "figure_path": "2311.13564v2_figure_5(a).png",
148
+ "caption": "Figure 5: Same as in figure 2 for the couple \u2019IBM\u2019-\u2019Coca Cola\u2019.",
149
+ "url": "http://arxiv.org/html/2311.13564v2/x13.png"
150
+ },
151
+ "5(b)": {
152
+ "figure_path": "2311.13564v2_figure_5(b).png",
153
+ "caption": "Figure 5: Same as in figure 2 for the couple \u2019IBM\u2019-\u2019Coca Cola\u2019.",
154
+ "url": "http://arxiv.org/html/2311.13564v2/x14.png"
155
+ },
156
+ "5(c)": {
157
+ "figure_path": "2311.13564v2_figure_5(c).png",
158
+ "caption": "Figure 5: Same as in figure 2 for the couple \u2019IBM\u2019-\u2019Coca Cola\u2019.",
159
+ "url": "http://arxiv.org/html/2311.13564v2/x15.png"
160
+ },
161
+ "6": {
162
+ "figure_path": "2311.13564v2_figure_6.png",
163
+ "caption": "Figure 6: Quotient U\u2062P\u2113\u2062(T)/U\u2062P1\u2062(T)\ud835\udc48superscript\ud835\udc43\u2113\ud835\udc47\ud835\udc48superscript\ud835\udc431\ud835\udc47UP^{\\ell}(T)/UP^{1}(T)italic_U italic_P start_POSTSUPERSCRIPT roman_\u2113 end_POSTSUPERSCRIPT ( italic_T ) / italic_U italic_P start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT ( italic_T ) for 1000100010001000 markets in\nsection 5.4.1\nconsisting of 5555 assets taken randomly from the \u2019Old NYSE\u2019 dataset. The 1000100010001000 markets are in yellow, the quantiles in other colors as indicated in the legend. See also\nfigure 7 for an \u2019arbitrage view\u2019.",
164
+ "url": "http://arxiv.org/html/2311.13564v2/x16.png"
165
+ },
166
+ "7": {
167
+ "figure_path": "2311.13564v2_figure_7.png",
168
+ "caption": "Figure 7: Arbitrage test for U\u2062P10\ud835\udc48superscript\ud835\udc4310UP^{10}italic_U italic_P start_POSTSUPERSCRIPT 10 end_POSTSUPERSCRIPT versus U\u2062P1\ud835\udc48superscript\ud835\udc431UP^{1}italic_U italic_P start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT, cf. also\nfigure 7.\nWe consider 1000100010001000 markets of 5555 assets each cf. section 5.4.1.",
169
+ "url": "http://arxiv.org/html/2311.13564v2/x17.png"
170
+ },
171
+ "8": {
172
+ "figure_path": "2311.13564v2_figure_8.png",
173
+ "caption": "Figure 8: Performance of U\u2062P\u2113\ud835\udc48superscript\ud835\udc43\u2113UP^{\\ell}italic_U italic_P start_POSTSUPERSCRIPT roman_\u2113 end_POSTSUPERSCRIPT relative to a \"Buy&Hold\" strategy for the test case in section 5.4.1. The 1000100010001000 tests are in yellow, quantiles in other colors as indicated in the legend.",
174
+ "url": "http://arxiv.org/html/2311.13564v2/x18.png"
175
+ },
176
+ "9(a)": {
177
+ "figure_path": "2311.13564v2_figure_9(a).png",
178
+ "caption": "Figure 9: Statistics of the computation of U\u2062P\u2113\ud835\udc48superscript\ud835\udc43\u2113UP^{\\ell}italic_U italic_P start_POSTSUPERSCRIPT roman_\u2113 end_POSTSUPERSCRIPT, cf. section 5.4.2. The same market is used here but we take 1\u2032\u2062000superscript1\u20320001^{\\prime}0001 start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT 000 distinct simulations, each including 10\u2032\u2062000superscript10\u203200010^{\\prime}00010 start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT 000 samples of the unit simplex. Left: the results for all U\u2062P\u2113\ud835\udc48superscript\ud835\udc43\u2113UP^{\\ell}italic_U italic_P start_POSTSUPERSCRIPT roman_\u2113 end_POSTSUPERSCRIPT portfolios for \u2113\u226410\u211310\\ell\\leq 10roman_\u2113 \u2264 10. The individual results are in yellow, all the others are statistics of the yellow distribution. Right: the histograms for \u2113=1\u21131\\ell=1roman_\u2113 = 1 and \u2113=2\u21132\\ell=2roman_\u2113 = 2.",
179
+ "url": "http://arxiv.org/html/2311.13564v2/x19.png"
180
+ },
181
+ "9(b)": {
182
+ "figure_path": "2311.13564v2_figure_9(b).png",
183
+ "caption": "Figure 9: Statistics of the computation of U\u2062P\u2113\ud835\udc48superscript\ud835\udc43\u2113UP^{\\ell}italic_U italic_P start_POSTSUPERSCRIPT roman_\u2113 end_POSTSUPERSCRIPT, cf. section 5.4.2. The same market is used here but we take 1\u2032\u2062000superscript1\u20320001^{\\prime}0001 start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT 000 distinct simulations, each including 10\u2032\u2062000superscript10\u203200010^{\\prime}00010 start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT 000 samples of the unit simplex. Left: the results for all U\u2062P\u2113\ud835\udc48superscript\ud835\udc43\u2113UP^{\\ell}italic_U italic_P start_POSTSUPERSCRIPT roman_\u2113 end_POSTSUPERSCRIPT portfolios for \u2113\u226410\u211310\\ell\\leq 10roman_\u2113 \u2264 10. The individual results are in yellow, all the others are statistics of the yellow distribution. Right: the histograms for \u2113=1\u21131\\ell=1roman_\u2113 = 1 and \u2113=2\u21132\\ell=2roman_\u2113 = 2.",
184
+ "url": "http://arxiv.org/html/2311.13564v2/x20.png"
185
+ }
186
+ },
187
+ "validation": true,
188
+ "references": [
189
+ {
190
+ "1": {
191
+ "title": "\\APACrefYearMonthDay1997.",
192
+ "author": "\\APACinsertmetastarblum1997universal{APACrefauthors}Blum, A.\\BCBT \\BBA Kalai, A.",
193
+ "venue": "\\BBOQ\\APACrefatitleUniversal portfolios with and without transaction costs\nUniversal portfolios with and without transaction costs.\\BBCQ",
194
+ "url": null
195
+ }
196
+ },
197
+ {
198
+ "2": {
199
+ "title": "\\APACrefYearMonthDay1991.",
200
+ "author": "\\APACinsertmetastarcover91{APACrefauthors}Cover, T.M.",
201
+ "venue": "\\BBOQ\\APACrefatitleUniversal Portfolios Universal portfolios.\\BBCQ",
202
+ "url": null
203
+ }
204
+ },
205
+ {
206
+ "3": {
207
+ "title": "\\APACrefYearMonthDay1996.",
208
+ "author": "\\APACinsertmetastarcover1996universal{APACrefauthors}Cover, T.M.\\BCBT \\BBA Ordentlich, E.",
209
+ "venue": "\\BBOQ\\APACrefatitleUniversal portfolios with side information Universal\nportfolios with side information.\\BBCQ",
210
+ "url": null
211
+ }
212
+ },
213
+ {
214
+ "4": {
215
+ "title": "\\APACrefYearMonthDay2019.",
216
+ "author": "\\APACinsertmetastarstoch_portf_up_num18{APACrefauthors}Cuchiero, C., Schachermayer, W.\\BCBL Wong, T\\BHBIK.L.",
217
+ "venue": "\\BBOQ\\APACrefatitleCover\u2019s universal portfolio, stochastic portfolio\ntheory, and the num\u00e9raire portfolio Cover\u2019s universal portfolio,\nstochastic portfolio theory, and the num\u00e9raire portfolio.\\BBCQ",
218
+ "url": null
219
+ }
220
+ },
221
+ {
222
+ "5": {
223
+ "title": "\\APACrefYearMonthDay2016.",
224
+ "author": "\\APACinsertmetastardochow_proposed_2016{APACrefauthors}Dochow, R.",
225
+ "venue": "\\BBOQ\\APACrefatitleProposed Algorithms with Risk Management\nProposed Algorithms with Risk Management.\\BBCQ",
226
+ "url": null
227
+ }
228
+ },
229
+ {
230
+ "6": {
231
+ "title": "\\APACrefYear2002.",
232
+ "author": "\\APACinsertmetastarfernholz2002stochastic{APACrefauthors}Fernholz, E.R.\\BCBT \\BBA Fernholz, E.R.",
233
+ "venue": "\\APACrefbtitleStochastic portfolio theory Stochastic portfolio theory.",
234
+ "url": null
235
+ }
236
+ },
237
+ {
238
+ "7": {
239
+ "title": "\\APACrefYearMonthDay1998.",
240
+ "author": "\\APACinsertmetastarHelmbold98{APACrefauthors}Helmbold, D.P., Schapire, R.E., Singer, Y.\\BCBL Warmuth, M.K.",
241
+ "venue": "\\BBOQ\\APACrefatitleOn-Line Portfolio Selection Using Multiplicative\nUpdates On-line portfolio selection using multiplicative updates.\\BBCQ",
242
+ "url": null
243
+ }
244
+ },
245
+ {
246
+ "8": {
247
+ "title": "\\APACrefYearMonthDay2001.",
248
+ "author": "\\APACinsertmetastarishijima_numerical_2001{APACrefauthors}Ishijima, H.",
249
+ "venue": "\\BBOQ\\APACrefatitleNumerical methods for universal portfolios Numerical\nmethods for universal portfolios.\\BBCQ",
250
+ "url": null
251
+ }
252
+ },
253
+ {
254
+ "9": {
255
+ "title": "\\APACrefYearMonthDay1992.",
256
+ "author": "\\APACinsertmetastarcont_time_univ_portf_jamshidian92{APACrefauthors}Jamshidian, F.",
257
+ "venue": "\\BBOQ\\APACrefatitleASYMPTOTICALLY OPTIMAL PORTFOLIOS Asymptotically\noptimal portfolios.\\BBCQ",
258
+ "url": null
259
+ }
260
+ },
261
+ {
262
+ "10": {
263
+ "title": "\\APACrefYearMonthDay2002.",
264
+ "author": "\\APACinsertmetastarkalai2002efficient{APACrefauthors}Kalai, A.T.\\BCBT \\BBA Vempala, S.",
265
+ "venue": "\\BBOQ\\APACrefatitleEfficient algorithms for universal portfolios\nEfficient algorithms for universal portfolios.\\BBCQ",
266
+ "url": null
267
+ }
268
+ },
269
+ {
270
+ "11": {
271
+ "title": "\\APACrefYearMonthDay2014.",
272
+ "author": "\\APACinsertmetastarli2014online_survey{APACrefauthors}Li, B.\\BCBT \\BBA Hoi, S.C.",
273
+ "venue": "\\BBOQ\\APACrefatitleOnline portfolio selection: A survey Online portfolio\nselection: A survey.\\BBCQ",
274
+ "url": null
275
+ }
276
+ },
277
+ {
278
+ "12": {
279
+ "title": "\\APACrefYearMonthDay2015.",
280
+ "author": "\\APACinsertmetastarli_moving_avg_portf_2015{APACrefauthors}Li, B., Hoi, S.C.H., Sahoo, D.\\BCBL Liu, Z\\BHBIY.",
281
+ "venue": "\\BBOQ\\APACrefatitleMoving average reversion strategy for on-line portfolio\nselection Moving average reversion strategy for on-line portfolio\nselection.\\BBCQ",
282
+ "url": null
283
+ }
284
+ },
285
+ {
286
+ "13": {
287
+ "title": "\\APACrefYearMonthDay2013.",
288
+ "author": "\\APACinsertmetastarmarigold_github_up{APACrefauthors}Marigold",
289
+ "venue": "\\BBOQ\\APACrefatitleUniversal Portfolios Universal portfolios.\\BBCQ",
290
+ "url": null
291
+ }
292
+ },
293
+ {
294
+ "14": {
295
+ "title": "\\APACrefYear2005.",
296
+ "author": "\\APACinsertmetastarmusiela_martingale_2005{APACrefauthors}Musiela, M.\\BCBT \\BBA Rutkowski, M.",
297
+ "venue": "\\APACrefbtitleMartingale methods in financial modelling. Martingale methods\nin financial modelling. (\\PrintOrdinal2nd ed. \\BEd, \\BVOL 36).",
298
+ "url": null
299
+ }
300
+ },
301
+ {
302
+ "15": {
303
+ "title": "\\APACrefYear2003.",
304
+ "author": "\\APACinsertmetastaroksendal_sde_book{APACrefauthors}\u00d8ksendal, B.",
305
+ "venue": "\\APACrefbtitleStochastic differential equations. An introduction with\napplications. Stochastic differential equations. An introduction with\napplications. (\\PrintOrdinal6th ed. \\BEd).",
306
+ "url": null
307
+ }
308
+ },
309
+ {
310
+ "16": {
311
+ "title": "\\APACrefYearMonthDay1998.",
312
+ "author": "\\APACinsertmetastarcover98{APACrefauthors}Ordentlich, E.\\BCBT \\BBA Cover, T.M.",
313
+ "venue": "\\BBOQ\\APACrefatitleThe Cost of Achieving the Best Portfolio in Hindsight\nThe cost of achieving the best portfolio in hindsight.\\BBCQ",
314
+ "url": null
315
+ }
316
+ },
317
+ {
318
+ "17": {
319
+ "title": "\\APACrefYearMonthDay2023.",
320
+ "author": "\\APACinsertmetastarparthasarathy2023online{APACrefauthors}Parthasarathy, P., Bhardwaj, A.\\BCBL Hanawal, M.K.",
321
+ "venue": "\\APACrefbtitleOnline Universal Dirichlet Factor Portfolios. Online\nuniversal dirichlet factor portfolios.\n\\PrintBackRefs\\CurrentBib",
322
+ "url": null
323
+ }
324
+ },
325
+ {
326
+ "18": {
327
+ "title": "\\APACrefYearMonthDay1998.",
328
+ "author": "\\APACinsertmetastarvovk1998universal{APACrefauthors}Vovk, V.\\BCBT \\BBA Watkins, C.",
329
+ "venue": "\\BBOQ\\APACrefatitleUniversal portfolio selection Universal portfolio\nselection.\\BBCQ",
330
+ "url": null
331
+ }
332
+ }
333
+ ],
334
+ "url": "http://arxiv.org/html/2311.13564v2"
335
+ }
20240620/2311.17088v2.json ADDED
@@ -0,0 +1,100 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "title": "Unsupervised Multimodal Deepfake Detection Using Intra- and Cross-Modal Inconsistencies",
3
+ "abstract": "Deepfake videos present an increasing threat to society with potentially negative impact on criminal justice, democracy, and personal safety and privacy. Meanwhile, detecting deepfakes, at scale, remains a very challenging task that often requires labeled training data from existing deepfake generation methods. Further, even the most accurate supervised deepfake detection methods do not generalize to deepfakes generated using new generation methods. In this paper, we propose a novel unsupervised method for detecting deepfake videos by directly identifying intra-modal and cross-modal inconsistency between video segments. The fundamental hypothesis behind the proposed detection method is that motion or identity inconsistencies are inevitable in deepfake videos. We will mathematically and empirically support this hypothesis, and then proceed to constructing our method grounded in our theoretical analysis. Our proposed method outperforms prior state-of-the-art unsupervised deepfake detection methods on the challenging FakeAVCeleb dataset, and also has several additional advantages: it is scalable because it does not require pristine (real) samples for each identity during inference and therefore can apply to arbitrarily many identities, generalizable because it is trained only on real videos and therefore does not rely on a particular deepfake method, reliable because it does not rely on any likelihood estimation in high dimensions, and explainable because it can pinpoint the exact location of modality inconsistencies which are then verifiable by a human expert.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
+ "text": "The rapid advancement of generative deep learning [41, 40, 25], fueled by faster and cheaper compute power and exploding data availability, has blurred the line between fact and fiction. In particular, deepfakes (we use deepfake as an umbrella term for fake video generation in general, including face-swapping, face reenactment, face generation, etc.), i.e., videos in which the motion of a source video is transferred to a target identity such that the target appears to say the words uttered by the source, are becoming increasingly hard to distinguish from real videos [13]. Since existing deepfake detection methods themselves can be used as part of the objective in the generation process to improve deepfake quality, this rapid advancement in generative deep learning leads to a daunting question: will machines eventually be able to imitate any person without leaving any trace? Despite the alarming empirical evidence [2, 23], we conjecture that the answer could be negative; that is, deepfakes might always contain a detectable, inevitable trace, as illustrated in Fig. 1, which explains the observation that has led to our conjecture. While the synthesized video (middle row) appears very realistic in each frame and its motion almost exactly matches that of the source, a closer look reveals that at certain frames the identity of the person has noticeably changed (depicted by a red dashed box).\nWe conjecture that this phenomenon, which occurs in various deepfake videos [26], is a consequence of a fundamental property of facial motion and identity: facial motion and identity are not independent variables. Consequently, a deepfake model must either not perfectly transfer the motion (leading to motion inconsistencies in the target video) or perfectly transfer the motion and with it partially transfer the identity as well (leading to identity inconsistencies in the target video). The goal of this paper is to mathematically and empirically validate this conjecture (Sec. 3), and to develop a new method for explicitly detecting the conjectured inconsistencies (Sec. 4).\nOur proposed unsupervised deepfake detection method is designed to simultaneously detect intra- and cross-modal inconsistencies within a given video. Note that while extracting consistency-based features has been explored in prior deepfake detection (i.e., learning joint features from video-audio based on a contrastive loss [6, 3]), our work is the first to directly identify and pinpoint inconsistencies within a given video, without relying on comparison to any other pristine (real) video. This distinction makes our method explainable (since the pinpointed inconsistencies of a video can be directly provided to a human expert as an explanation of fakeness), and scalable to many identities (since it does not need pristine videos of identities to compare against at inference time). Our method outperforms prior state-of-the-art unsupervised detection methods on the challenging FakeAVCeleb [26] dataset, achieving a new best average AUC. We summarize the novelty and practical advantages of our method compared to existing deepfake detection methods as follows.\nInevitable inconsistencies: we provide mathematical and empirical evidence that deepfake generation methods will inevitably leave generation artifacts/traces due to a trade-off between identity and motion. To the best of our knowledge, this argument does not exist in the literature.\nGeneralizability: compared to supervised methods [8, 11, 30, 33], which rely on existing deepfake generation models, our proposed method is unsupervised and is trained solely on real videos, and therefore does not rely on the particular artifacts of existing deepfake generation models.\nScalability: compared to POI-Forensics [6], which requires access to a set of pristine reference samples for each identity at inference time in order to evaluate the given video, our method does not require any pristine samples and can therefore scale to arbitrarily many new identities.\nReliability: compared to AV-Anomaly [15], which relies on generative modeling to estimate the likelihood of real data (making it susceptible to the known unreliability of likelihood estimation in high dimensions [35]), our method does not use any likelihood estimation, and instead directly compares a given video with itself to find mismatching regions.\nExplainability: our method is explainable by design, since to detect whether a given video is fake, it must discover that a portion of the video is inconsistent with another portion of the same video. These two inconsistent portions can be provided to a human expert for verification of inconsistency."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "Related Work",
+ "text": "Deepfake Generation.\nAlong with the development of generative models, there are many ways to generate fake videos for both the visual and auditory parts of a video. Suwajanakorn et al. [45] proposed an approach to produce videos of President Obama with realistic lip movement from a given audio. Utilizing face-swapping [26] and lip-syncing, GANimation [38], FSGAN [36], Wav2Lip [37] and Face2Face [46] can generate deepfake videos with better quality. SV2TTS [24] can even generate speech in a different person\u2019s voice from a given piece of text. To deal with the ethical and security issues brought by deepfake techniques, deepfake detection methods have been proposed.\nUnimodal Deepfake Detection.\nUnimodal deepfake detection methods focus on detecting artifacts in the visual part of a video. Li et al. [31] observed that artifacts are left in deepfake videos, and that such artifacts can be effectively captured by a convolutional neural network (CNN). Building upon [31], G\u00fcera et al. [18] incorporated a recurrent neural network (RNN) to detect deepfake frames by feeding the RNN with the features extracted from each frame by a CNN. Beyond raw videos, Yang et al. [47] suggested that fake videos can be detected by analyzing the estimated head pose corresponding to the facial landmarks.\nMultimodal Deepfake Detection.\nSeveral approaches have recently focused on multimodal deepfake detection by utilizing both visual and auditory information. The key to multimodal methods is finding a way to measure the dissimilarity between audio and video features [34]. Li et al. [34] proposed detecting deepfake videos by analyzing audio-visual cues and perceived emotions extracted with a Memory Fusion Network (MFN) [48]. Chugh et al. [3] incorporated contrastive learning as the objective function and a Modality Dissonance Score (MDS) to measure audio-video dissimilarity. Similarly, Hashmi et al. [21] fed both visual features extracted from a ResNet [22] and audio Mel-spectral features [32] into a fusion network to produce the final prediction score. In Zhou et al. [50], a similar idea was implemented by checking whether the concurrency property of audio and visual features is broken. Shahzad et al. [44] proposed a multimodal approach using Wav2Lip [37] to generate corresponding audio semantic features that help the model distinguish between original and deepfake videos. The aforementioned methods rely on training the model with ground-truth labels of fake and real videos. In contrast, AV-Anomaly [15] first learns a joint audio-visual segment feature in a contrastive setting from real videos, and then trains a likelihood estimator on those features extracted from real videos (an autoregressive generative model of sequences of synchronization characteristics). The likelihood estimator is then used to score a query video\u2019s likelihood of being real. Another unsupervised method, POI-Forensics [6], proposed an audio-visual identity verification approach, where a model is trained through contrastive learning on real videos to extract features that indicate the identity of a video; during inference, these features are used to compare a query video to a set of existing pristine (real) videos to determine a fakeness score. In contrast to these methods, which use audio-visual consistency as a means to learn representative features for later processing (likelihood estimation or comparison to pristine videos), we aim to theoretically motivate and then develop a method that directly identifies inconsistencies in a query video."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Theoretical Motivation",
+ "text": "In this section, we provide the theoretical justification for our conjecture that changing the motion of a target video to match the motion of the source video will inevitably introduce intra- or cross-modal inconsistencies, as illustrated in Fig. 1. To formalize the identity, the facial motion and the specific video, let P be a random variable representing the identity of a person (with a sample space of all human identities), M a random variable representing facial motion (with a sample space of all realistic facial motions within a specific time-window), and V a random variable representing a video of a person (with a sample space of all talking-head videos of a specific duration). We model the deepfake generation process as a mapping G which induces a random variable V' = G(V), and consider the following notions (where I(.;.) denotes mutual information and H(.|.) conditional entropy):\nI(V';P) measures the correspondence between a video and an identity, where a larger mutual information between video and identity indicates that the video has a distinct and consistent identity.\nI(V';M) measures the correspondence between video and the facial motion, where a smaller mutual information between video and motion indicates that the motion can be transferred across videos.\nH(P|M) measures identity dependence on the motion, where a smaller conditional entropy (i.e., uncertainty) indicates that the motion is more predictive of the identity.\nIn this theoretical framework, we can state the objective of the deepfake generation method as achieving a high I(V';P) (i.e., the target video can have the target identity) while simultaneously achieving a low I(V';M) (i.e., the target video can have the desired motion of the source video without distortion). The relationship between these two objectives (maximizing I(V';P) and minimizing I(V';M)) can be explained by the following inequality, which holds for any three random variables [29]:\nI(V';P) <= I(V';M) + H(P|M), (1)\nwhich follows because I(V';P) <= I(V';(M,P)) = I(V';M) + I(V';P|M) <= I(V';M) + H(P|M). In the above inequality, we observe that if the motion is predictive of the identity (small H(P|M)), then the fake generation process faces a trade-off: if it learns to precisely transfer the motion among videos (small I(V';M)), then it will inevitably break the identity consistency in the generated videos (small I(V';P)).\nIs motion predictive of identity in practice? In Eq. 1, we observed that if the motion is predictive of identity in real videos (small H(P|M)), then any deepfake generation method will be in a trade-off between motion consistency and identity consistency. However, is motion predictive of identity in real videos? Recent works [1, 6] utilize temporal features for identity recognition and verification, indicating that the motion is indeed predictive of identity. However, to our knowledge, the existing evidence does not explicitly separate motion from pose and appearance features, and therefore we conduct an experiment to directly observe how predictive the motion is of the identity. More concretely, we model motion in a video by extracting the difference between 3D facial landmarks for all pairs of consecutive frames in the video, and concatenate these differences into one long input feature vector, denoted the motion vector for a video (a minimal code sketch of this probe follows this section). Then, we investigate the accuracy of an identity classifier trained on motion vectors. Note that, unlike methods that extract temporal features, the simple motion vector is intentionally crafted such that it does not include any pose or appearance features that could have been unintentionally learned as part of the temporal features. Since the motion vectors do not contain any appearance or pose information, the accuracy of this classifier serves as an indication of whether motion is predictive of identity. The experiment is conducted on the VoxCeleb2 [5] dataset, where we partition the videos in the official training set (the official testing set does not contain as many identities and lacks video labels), containing 5994 unique identities, into a training and a validation set, making sure there is no overlap between the videos in the training and validation sets. We train a CNN [22] backbone on the extracted motion vectors. This model achieves a validation accuracy roughly 596 times higher than the 1/5994 random-guess accuracy (about 9.9%). We expect this accuracy can be further improved by a more complex model, but the observed accuracy suffices for our current goal of providing evidence that motion is predictive of identity in real videos.\nGiven that the motion is predictive of identity, and that the inequality in Eq. 1 is invariant to the choice of model G, we argue that fake generation processes, in general, must either sacrifice identity consistency within the generated video, or the exact transfer of the motion from the source video to the target video. This in turn means that any fake generation process will inevitably leave artifacts as a result of the trade-off between identity consistency and the accuracy of motion transfer. To further support our theoretical claim, we also conduct an identity-prediction-from-motion experiment on FakeAVCeleb [26] (213 identities). In Tab. 1, we observe that in real videos (trained and tested on reals) identity is much more predictable from motion than in deepfake videos (trained and tested on deepfakes), showing that deepfake generation breaks the motion-identity interdependence, consistent with our theoretical claim that motion or identity inconsistencies are inevitable in deepfakes."
+ },
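A minimal sketch of the motion-vector probe described in this section, assuming per-frame 3D landmarks from an off-the-shelf detector with 68 points per frame; the 68-point convention, the array shapes, and the function name motion_vector are illustrative assumptions, not the paper's code.

import numpy as np

def motion_vector(landmarks: np.ndarray) -> np.ndarray:
    """landmarks: (T, 68, 3) array of 3D facial landmarks for T frames.
    Returns the concatenation of all consecutive-frame landmark differences,
    which carries motion but no absolute pose or appearance information."""
    diffs = np.diff(landmarks, axis=0)   # (T-1, 68, 3): frame-to-frame landmark motion
    return diffs.reshape(-1)             # one long motion vector for the whole clip

# An identity classifier (e.g., a CNN over the 5994 VoxCeleb2 identities) trained on
# such vectors probes how predictive motion is of identity: validation accuracy far
# above the 1/5994 random-guess baseline is evidence that H(P|M) is small.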
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "Deepfake Detection",
+ "text": "Motivated by the theoretical analysis in Sec. 3, we propose a deepfake detection method based on the premise of inevitable inconsistencies, detecting (1) identity inconsistency in the generated video (due to a small I(V';P)) by comparing different video segment pairs within the video, and (2) motion distortions in the generated video (due to a large I(V';M)) by comparing video and audio segments.\nNotation and Training Setup. Assume a training batch that includes multiple videos of distinct real identities, each video having several different time-windows (a time-window is a fixed duration of time). Each video goes through three mappings parameterized by neural networks to extract identity, visual and audio features. Let f_id define a mapping between a video and its corresponding identity features at a time-window; similarly, let f_v define a mapping between a video and its corresponding visual features at a time-window, and f_a define a mapping between a video and its corresponding audio features at a time-window. Whenever referring to a feature extracted from the video of identity i at time-window t, we summarize the indices into the pair (i, t) for brevity. Similarity between feature vectors is measured by their dot product.\nIntra-modal Consistency Loss.\nIn order to detect identity inconsistencies, we must learn an identity feature extractor f_id that is sensitive to slight changes of identity within any video (due to a small I(V';P) as described in Sec. 3). Therefore, we propose the intra-modal consistency loss, whose objective is to guide f_id to learn maximally divergent feature vectors between different identities, and maximally convergent feature vectors between different observations/samples of the same identity. We use f_id to extract identity features for each sample in the batch, as illustrated in the left side of Fig. 2. We then measure the similarity between all pairs of identity vectors, over all identity indices and time-window indices, resulting in a similarity tensor.\nEq. 2 defines the overall intra-modal consistency loss as a contrastive objective over this similarity tensor, with a temperature used to control the scale of the similarity measurements. The left part of Fig. 2 illustrates how we find the dissimilarity between features from different identities (small white squares in \u2461 of Fig. 2) and the similarity between features from the same identity (small colored squares in \u2461 of Fig. 2). The objective is to make the colored dot-products increase and the white dot-products decrease, such that vectors of the same identity become closer to each other than to vectors of different identities, across all pairs of time-windows.\nAt inference time, the video is divided into fixed-size windows. As illustrated by the black arrows in the left part of the testing flow in Fig. 2, we use the trained f_id to compute the similarity matrix of identity features. The intra-modal consistency score of the test video is a low percentile of this similarity matrix, so that the least-similar segment pairs dominate the score.\nCross-modal Consistency Loss.\nIn order to detect motion inconsistencies, we attempt to learn a visual feature extractor f_v and an audio feature extractor f_a that are sensitive to slight mismatches between the audio and video of a speaking person (as a surrogate for motion inconsistency due to a large I(V';M) as described in Sec. 3). Therefore, we propose the cross-modal consistency loss, whose objective is to guide f_v and f_a to learn maximally divergent feature vectors between video and audio at different time-windows, and maximally convergent feature vectors between video and audio at the same time-window. We use f_v and f_a to extract visual and audio features for each sample in the batch, as illustrated in the right side of Fig. 2. We then measure the similarity between all pairs of audio-video feature vectors for each identity, over all pairs of time-window indices, resulting in a similarity tensor. Eq. 3 defines the overall cross-modal consistency loss analogously, again with a temperature used to control the scale of the similarity measurements. The right part of Fig. 2 intuitively illustrates how we find the dissimilarity between audio and video features at different time-windows (white squares in the gray block of Fig. 2) and the similarity between audio and video features from the same time-window (squares with green and brown filling, marked \u2463 in Fig. 2). The objective is to make the colored dot-products increase and the white dot-products decrease, such that vectors of corresponding audio-visual time-windows become closer to each other than to all other audio or video vectors of different time-windows for each identity.\nAt inference/testing time, the test video is divided into fixed-size windows and, following the black arrows in the right part of the testing flow in Fig. 2, we use the trained f_v and f_a to compute the similarity matrix of visual-audio features. The cross-modal score of the test video is the average value of the diagonal elements of this matrix. We take the sum of these two scores as the final deepfake detection score (a code sketch of both losses and scores follows this section)."
+ },
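To make the two consistency losses and the inference scores concrete, the following is a minimal PyTorch sketch, assuming per-time-window features have already been extracted. The InfoNCE-style normalization, the function names, the default temperature, and the percentile q=0.05 are our own assumptions; the exact forms of the paper's Eqs. 2-4 are not reproduced here.

import torch
import torch.nn.functional as F

def intra_modal_loss(z_id: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """z_id: (N, T, D) identity features for N identities x T time-windows (T >= 2).
    Pulls together windows of the same identity, pushes apart different identities."""
    N, T, D = z_id.shape
    z = F.normalize(z_id.reshape(N * T, D), dim=-1)
    sim = (z @ z.t()) / tau                              # all pairwise dot products
    self_mask = torch.eye(N * T, dtype=torch.bool)
    sim = sim.masked_fill(self_mask, float("-inf"))      # drop trivial self-pairs
    labels = torch.arange(N).repeat_interleave(T)        # identity index of each row
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    return -log_prob[pos].mean()                         # maximize positive log-probability

def cross_modal_loss(z_v: torch.Tensor, z_a: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """z_v, z_a: (T, D) visual/audio features of one video; the positives are the
    audio-video pairs from the same time-window (the diagonal)."""
    z_v, z_a = F.normalize(z_v, dim=-1), F.normalize(z_a, dim=-1)
    sim = (z_v @ z_a.t()) / tau                          # (T, T) audio-visual similarities
    target = torch.arange(sim.size(0))
    return F.cross_entropy(sim, target) + F.cross_entropy(sim.t(), target)

def detection_score(z_id, z_v, z_a, q: float = 0.05) -> torch.Tensor:
    """Inference: a low percentile of the identity self-similarity matrix (q is an
    assumption; the paper's percentile is not recovered here) plus the mean diagonal
    of the audio-visual similarity matrix. Higher values indicate a more real video."""
    zi = F.normalize(z_id, dim=-1)
    intra = torch.quantile(zi @ zi.t(), q)               # least-similar identity pairs
    zv, za = F.normalize(z_v, dim=-1), F.normalize(z_a, dim=-1)
    cross = torch.diagonal(zv @ za.t()).mean()           # same-window audio-visual match
    return intra + cross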
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Experimental Evaluations",
+ "text": "Datasets. We use three datasets for training and evaluating our method: (1) VoxCeleb2, (2) FakeAVCeleb and (3) KoDF. VoxCeleb2 [5] is used for training, since the training process of our model requires a substantial amount of speaking videos from various individuals, and VoxCeleb2 meets this requirement. VoxCeleb2 includes over 6000 different identities, each with videos of that identity speaking in multiple scenarios. Leveraging this characteristic, it is feasible to construct a substantial number of fixed-size video and audio windows.\nFakeAVCeleb [26] is used to evaluate the performance of our model; it consists of 500 identities taken from VoxCeleb2, where both the visual and audio content of the videos of these identities is tampered. We excluded all videos of the 500 identities present in FakeAVCeleb (used for evaluation) from our training dataset (VoxCeleb2) in order to prevent our method from learning features specific to these identities. The FakeAVCeleb dataset allows us to thoroughly test our model\u2019s ability to discern between different modalities of forgery. Specifically, FakeAVCeleb utilizes Wav2Lip [37], faceswap [26] and fsgan [36] for video manipulation, along with SV2TTS [24] for audio manipulation. The manipulated videos are categorized into four groups: RealVideo-RealAudio, RealVideo-FakeAudio, FakeVideo-RealAudio, and FakeVideo-FakeAudio. The first two categories contain 500 videos each, and the others have about 10000 videos each.\nKoDF [28] is a deepfake dataset of Korean speaking videos, which we utilize to evaluate the generalization of our method to people speaking other languages (which indicates out-of-domain generalization of the audio features). The manipulation techniques employed in KoDF include face-swapping and face-reenactment methods. Additionally, there is a dedicated section within the dataset where both real and fake videos have been subjected to adversarial attacks; we use the data from this section to evaluate the robustness of our model under adversarial attacks. For the real and fake videos in the KoDF dataset that had not been manipulated with adversarial attacks, we attempted to download them but encountered persistent issues during the decoding process; consequently, the only part of the KoDF dataset we were able to utilize was its adversarially attacked subset.\nModels. We use an AdaFace [27] model pre-trained on MS1MV2 [9], MS1MV3 [10] and WebFace4M [51] to extract per-frame identity features, and use a transformer encoder to aggregate the identity in a temporal window for our visual features (both f_id and f_v). For the audio features (f_a) we use a pre-trained Whisper [39] encoder, trained on 680k hours of labeled audio-text data, as our audio feature extractor. The entire model (including the pretrained layers) is trained with our intra- and cross-modal losses. Details of our data preprocessing and hyperparameters are provided in the Appendix.\nMetrics. We use the standard Area Under the Curve (AUC) and Average Precision (AP) as performance metrics (a small computation sketch follows this section). These metrics are widely used since they do not impose threshold restrictions [15, 6]. Furthermore, they effectively capture the model\u2019s performance in scenarios involving imbalanced datasets, accurately assessing the model\u2019s ability to detect true positives."
+ },
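For reference, both metrics can be computed directly with scikit-learn; the toy arrays below and the convention that fake is the positive class (so that consistency scores are negated before scoring) are assumptions for illustration.

import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

y_true = np.array([0, 0, 1, 1, 1])                 # toy labels: 1 = fake (assumed positive class)
det_scores = np.array([1.3, 1.1, 0.2, 0.5, 0.1])   # consistency scores (high = more real)

auc = roc_auc_score(y_true, -det_scores)           # area under the ROC curve
ap = average_precision_score(y_true, -det_scores)  # area under the precision-recall curve
print(f"AUC = {auc:.3f}, AP = {ap:.3f}")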
+ {
+ "section_id": "5.1",
+ "parent_section_id": "5",
+ "section_name": "Main Results",
+ "text": "Following the evaluation setup of [15], we separately consider the five categories of the FakeAVCeleb dataset (RVFA, FVRA-WL, FVFA-WL, FVFA-FS, FVFA-GAN). Each category includes deepfake videos generated using the same method. This evaluation process allows us to gain clear insights into how our method performs under different types of deepfake generation, which can also substantiate our conjecture that deepfake generators will inevitably leave a detectable trace.\nThe results of the evaluation on FakeAVCeleb are shown in Tab. 2. We observe that our overall consistency method (Intra-Cross-modal), which combines the strengths of both the intra- and cross-modal scores, consistently achieves a significant improvement over previous state-of-the-art unsupervised methods across all categories, and trails only the best-performing supervised method on average (AVG-FV). Note that our method achieves this performance without seeing any of the identities in the FakeAVCeleb dataset, or any of the various deepfake generation methods used in this dataset, during its training. Our method also does not require access to any pristine (real) videos during inference.\nA key observation is that while our combined method (Intra-Cross-modal) achieves state-of-the-art performance, the performance of the two sub-scores (Intra-modal and Cross-modal) varies depending on the deepfake type, and the sub-scores show complementary strengths (the Cross-modal method attains its worst AUC on FVFA-GAN, which is precisely where the Intra-modal method attains its best AUC). This observation validates our theoretical motivation that there is a trade-off between identity consistency and motion consistency, and that detecting both types of inconsistencies should therefore be considered. We discuss the separate performance of the Intra-modal and Cross-modal methods in Sec. 5.3.\nFurthermore, as illustrated in Fig. 3, the predictions of our method are explainable by design. For example, the video segments with the highest mismatch score can optionally be provided to a human expert as explanations of \u201cfakeness\u201d, and are verifiable through further analysis by such forensic experts."
+ },
+ {
+ "section_id": "5.2",
+ "parent_section_id": "5",
+ "section_name": "Generalization to Audio/Video Compression and the Adversarially Attacked KoDF Dataset",
+ "text": "In this section, we study the generalization capability of our proposed method (Intra-Cross-modal) to varying video and audio qualities, as well as to other spoken languages and adversarial attacks. We follow the evaluation settings of [6]. We could not include AV-Anomaly [15] in these evaluations due to the lack of public inference and training code.\nCompression Study. We construct a high-quality and a low-quality version of the FakeAVCeleb dataset. In the High Quality (HQ) setting, the video is compressed using H.264 encoding with factor 23, and the audio is the same as the original (44.1KHz). In the Low Quality (LQ) setting, the video is compressed using H.264 encoding with factor 40 and the audio is sampled at a sample rate of 16KHz. In Tab. 3, we observe that our method outperforms all other unsupervised methods in the HQ compression setting (mild video compression, no audio compression). In the LQ setting, our method is second to POI-Forensics, while still outperforming all other methods. It is important to note, however, that POI-Forensics requires access to pristine videos during testing, which it can compress to the same LQ setting and directly compare against.\nKoDF Study. We also evaluate our method using the adversarially attacked subset of KoDF [28] (both real and deepfake videos are adversarially attacked using the Fast Gradient Sign Method (FGSM) [16]). In this setting, POI-Forensics [6] randomly sampled 276 original videos and 544 synthesized videos as positive and negative samples for evaluation. The specific method for the random selection of data was not disclosed by [6]. Therefore, for a robust and fair comparison, we repeat our evaluation 1000 times following their setting, each time with a new random choice of 276 original and 544 synthesized videos from KoDF (which contains 5365 original and 15044 synthesized videos), and report the average and one standard deviation of our method\u2019s AUC (%); a sketch of this protocol follows this section. As shown in Tab. 3, we observe that our method outperforms all supervised and unsupervised methods on this challenging task, where not only is the language of the videos completely different from training, but all the samples are also adversarially attacked. This empirical evidence serves as additional validation of the theoretical motivation that there is an inevitable inconsistency (in either identity or motion) in deepfake videos, as discussed in Sec. 3."
+ },
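A minimal sketch of this repeated-subsampling protocol, assuming per-video detection scores are already available; the Gaussian placeholders stand in for real model outputs and all variable names are ours.

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
real_scores = rng.normal(1.0, 0.3, size=5365)    # placeholder scores for 5365 real videos
fake_scores = rng.normal(0.4, 0.3, size=15044)   # placeholder scores for 15044 fake videos

aucs = []
for _ in range(1000):                            # 1000 evaluation rounds
    r = rng.choice(real_scores, size=276, replace=False)
    f = rng.choice(fake_scores, size=544, replace=False)
    y = np.concatenate([np.zeros(276), np.ones(544)])   # 1 = fake
    s = -np.concatenate([r, f])                  # negate: higher means more fake
    aucs.append(roc_auc_score(y, s))
print(f"AUC = {np.mean(aucs) * 100:.1f} +/- {np.std(aucs) * 100:.1f} (%)")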
+ {
+ "section_id": "5.3",
+ "parent_section_id": "5",
+ "section_name": "Why Does Intra-Modal Perform Worse than Cross-Modal?",
+ "text": "We observe in Tab. 2 that our Intra-modal method performs worse than our Cross-modal method on FakeAVCeleb. The reason can be understood from our theoretical analysis: a deepfake generation method must produce identity inconsistencies if it transfers the motion exactly. In other words, if the deepfake generation method does not have to transfer motion (imagine training a deepfake method on a video dataset that has no motion at all, only still portraits), then the generation method can transfer the identity perfectly, and consequently our Intra-modal identity consistency method will be completely ineffective. As such, whether the Intra-modal method or the Cross-modal method performs better depends on how well the deepfake generation method transfers motion.\nIf the data contains little facial motion (such as FakeAVCeleb, where most videos are stationary, calm, frontal-shot interviews), a deepfake generation method will incur little penalty for not perfectly transferring motion, and consequently the trained deepfake method will be at the motion-inconsistency side of the trade-off predicted by our theory. Therefore, the Intra-modal method will perform worse than the Cross-modal method, which detects motion inconsistency. However, note that it is precisely for this reason that we proposed both methods: they are complementary in their capabilities, and which one performs better alone depends on the particular deepfake generation method.\nTo support the above argument regarding the behavior of our Intra-modal method, we provide both quantitative and qualitative evidence in Fig. 4. To illustrate the other side of the theoretical trade-off, we show that our Intra-modal method can clearly discover mismatched identity in videos of a deepfake model that is trained to strongly regress the source motion [13] (thereby losing identity consistency). Furthermore, to illustrate how the Intra-modal method is very effective on deepfakes that require substantial motion matching (i.e., where generating a mismatched motion would incur a great loss for the deepfake generation method), in Fig. 4 we show that the AUC of the Intra-modal method increases with the average magnitude of motion in the videos of FakeAVCeleb, measured for each video as the norm of the motion vector (a sketch of this analysis follows this section)."
+ },
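A minimal sketch of this motion-magnitude analysis, assuming precomputed motion vectors (Sec. 3), per-video Intra-modal scores, and labels with 1 = fake; the quantile binning and all names are assumptions.

import numpy as np
from sklearn.metrics import roc_auc_score

def auc_by_motion(motion_vecs, intra_scores, labels, n_bins: int = 4) -> None:
    """Bins videos by the norm of their motion vector and reports the Intra-modal
    AUC within each bin (scores negated so that higher means more fake)."""
    mags = np.linalg.norm(motion_vecs, axis=1)                # per-video motion magnitude
    edges = np.quantile(mags, np.linspace(0, 1, n_bins + 1))  # equal-population bins
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (mags >= lo) & (mags <= hi)
        if len(np.unique(labels[mask])) == 2:                 # need both classes in the bin
            auc = roc_auc_score(labels[mask], -intra_scores[mask])
            print(f"motion in [{lo:.2f}, {hi:.2f}]: AUC = {auc:.3f}")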
+ {
+ "section_id": "6",
+ "parent_section_id": null,
+ "section_name": "Conclusion",
+ "text": "In this work, we introduced a novel theoretical analysis which suggests that motion or identity inconsistencies are inevitable in deepfake videos that transfer the motion of a source video to a target video. We provided mathematical and empirical evidence for our theoretical claim, and then, based on this theory, we proposed unsupervised methods to explicitly detect and pinpoint identity and motion inconsistencies in deepfakes, namely the Intra-modal method and the Cross-modal method, and combined them into the Intra-Cross-modal method, which achieves a new state-of-the-art performance on the FakeAVCeleb dataset. We also showed that our method generalizes to videos in other languages (the KoDF Korean-speech deepfake dataset), adversarial attacks, and mild video compression. Furthermore, compared to existing deepfake detection methods, our method is more scalable because it does not require any pristine (real) videos during inference, generalizable because it trains only on real videos, reliable because it does not explicitly use likelihood estimation in high dimensions, and explainable because it explicitly discovers verifiable inconsistent segments in a video. Our findings also reveal several interesting directions for future research. First, directly measuring the terms in the proposed information-theoretic upper bound in Sec. 3 for various deepfake methods can empirically verify the bound and reveal interesting trends in deepfake videos. Second, we expect the proposed Intra-modal method could be further improved by building inductive biases into the architecture that encourage attending to fine visual details. Finally, while we currently only consider talking-head videos, the proposed consistency losses could be applied to other video types, such as full-body videos."
+ }
+ ],
+ "appendix": [],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S3.T1.2.1.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"S3.T1.3.2\" style=\"font-size:90%;\">Accuracy(%) of 213-class identity classification from motion on FakeAVCeleb.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.4\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.4.1.1\">\n<td class=\"ltx_td ltx_border_tt\" id=\"S3.T1.4.1.1.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"></td>\n<th class=\"ltx_td ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.4.1.1.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S3.T1.4.1.1.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">Training Set</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.4.2.2\">\n<td class=\"ltx_td\" id=\"S3.T1.4.2.2.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"></td>\n<th class=\"ltx_td ltx_th ltx_th_column\" id=\"S3.T1.4.2.2.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T1.4.2.2.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">Real</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T1.4.2.2.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">Fake</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.4.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T1.4.3.3.1\" rowspan=\"2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text\" id=\"S3.T1.4.3.3.1.1\">Testing Set</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.4.3.3.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">Real</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.4.3.3.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">14.56</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.4.3.3.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.65</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.4.4.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.4.4.4.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">Fake</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.4.4.4.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.90</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.4.4.4.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">2.62</td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "Table 1: Accuracy(%) of 213-class identity classification from motion on FakeAVCeleb."
+ },
+ "2": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T2.8.4.1\" style=\"font-size:90%;\">Table 2</span>: </span><span class=\"ltx_text\" id=\"S5.T2.6.3\" style=\"font-size:90%;\">AUC () and AP () of our methods (Intra-modal, Cross-modal, and Intra-Cross-modal) on FakeAVCeleb<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17088v2#bib.bib26\" title=\"\">26</a>]</cite>. Our combined method outperforms the unsupervised state-of-the-art, and reaches within absolute percentage points of best supervised method on average (AVG-FV).</span></figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S5.T2.9\" style=\"width:433.6pt;height:110.1pt;vertical-align:-0.4pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-316.7pt,80.1pt) scale(0.406391038979288,0.406391038979288) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S5.T2.9.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T2.9.1.1.1\">\n<td class=\"ltx_td ltx_border_tt ltx_border_t\" id=\"S5.T2.9.1.1.1.1\" rowspan=\"3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt ltx_border_t\" id=\"S5.T2.9.1.1.1.2\" rowspan=\"3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.9.1.1.1.2.1\">Method</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt ltx_border_t\" id=\"S5.T2.9.1.1.1.3\" rowspan=\"3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.9.1.1.1.3.1\">Modality</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt ltx_border_t\" id=\"S5.T2.9.1.1.1.4\" rowspan=\"3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.9.1.1.1.4.1\">Pretrained Dataset</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt ltx_border_t\" colspan=\"18\" id=\"S5.T2.9.1.1.1.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">Catogory</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.9.1.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"2\" id=\"S5.T2.9.1.2.2.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">RVFA</td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T2.9.1.2.2.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T2.9.1.2.2.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"2\" id=\"S5.T2.9.1.2.2.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">FVRA-WL</td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T2.9.1.2.2.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"2\" id=\"S5.T2.9.1.2.2.6\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">FVFA-FS</td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T2.9.1.2.2.7\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"2\" id=\"S5.T2.9.1.2.2.8\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">FVFA-GAN</td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T2.9.1.2.2.9\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"2\" id=\"S5.T2.9.1.2.2.10\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">FVFA-WL</td>\n<td class=\"ltx_td ltx_border_t\" 
id=\"S5.T2.9.1.2.2.11\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"2\" id=\"S5.T2.9.1.2.2.12\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">AVG-FV</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.9.1.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.3.3.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">AP</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.3.3.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">AUC</td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.3.3.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.3.3.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.3.3.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">AP</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.3.3.6\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">AUC</td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.3.3.7\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.3.3.8\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">AP</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.3.3.9\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">AUC</td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.3.3.10\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.3.3.11\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">AP</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.3.3.12\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">AUC</td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.3.3.13\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.3.3.14\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">AP</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.3.3.15\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">AUC</td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.3.3.16\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.3.3.17\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">AP</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.3.3.18\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">AUC</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.9.1.4.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.4.4.1\" rowspan=\"5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.9.1.4.4.1.1\">\n<span class=\"ltx_inline-block ltx_transformed_outer\" id=\"S5.T2.9.1.4.4.1.1.1\" style=\"width:8.9pt;height:47.4pt;vertical-align:-21.2pt;\"><span class=\"ltx_transformed_inner\" style=\"width:47.3pt;transform:translate(-19.21pt,2.92pt) rotate(-90deg) ;\">\n<span class=\"ltx_p\" id=\"S5.T2.9.1.4.4.1.1.1.1\">Supervised</span>\n</span></span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.4.4.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">Xception <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17088v2#bib.bib42\" title=\"\">42</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.4.4.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">V</td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_t\" id=\"S5.T2.9.1.4.4.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">ImageNet</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.4.4.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.4.4.6\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-</td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T2.9.1.4.4.7\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T2.9.1.4.4.8\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.4.4.9\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">88.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.4.4.10\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">88.3</td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.4.4.11\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.4.4.12\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">92.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.4.4.13\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">93.5</td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.4.4.14\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.4.4.15\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">67.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.4.4.16\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">68.5</td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.4.4.17\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.4.4.18\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">91.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.4.4.19\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">91.0</td>\n<td class=\"ltx_td ltx_border_r\" id=\"S5.T2.9.1.4.4.20\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.4.4.21\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">84.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.4.4.22\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">85.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.9.1.5.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.5.5.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">LipForensics <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17088v2#bib.bib20\" title=\"\">20</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.5.5.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">V</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.5.5.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">LRW</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.5.5.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.5.5.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-</td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.5.5.6\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.5.5.7\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.5.5.8\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" 
id=\"S5.T2.9.1.5.5.8.1\">97.8</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.5.5.9\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.9.1.5.5.9.1\">97.7</span></td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.5.5.10\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.5.5.11\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">99.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.5.5.12\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">99.9</td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.5.5.13\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.5.5.14\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">61.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.5.5.15\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">68.1</td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.5.5.16\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.5.5.17\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">98.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.5.5.18\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">98.7</td>\n<td class=\"ltx_td ltx_border_r\" id=\"S5.T2.9.1.5.5.19\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.5.5.20\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">89.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.5.5.21\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">91.1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.9.1.6.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.6.6.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">AD DFD <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17088v2#bib.bib50\" title=\"\">50</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.6.6.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">AV</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.6.6.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">Kinetics</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.6.6.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">74.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.6.6.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">73.3</td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.6.6.6\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.6.6.7\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.6.6.8\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">97.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.6.6.9\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">97.4</td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.6.6.10\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.6.6.11\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">99.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.6.6.12\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">99.7</td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.6.6.13\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.6.6.14\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">58.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.6.6.15\" 
style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">55.4</td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.6.6.16\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.6.6.17\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.9.1.6.6.17.1\">100.</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.6.6.18\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.9.1.6.6.18.1\">100.</span></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S5.T2.9.1.6.6.19\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.6.6.20\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">88.8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.6.6.21\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">88.1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.9.1.7.7\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.7.7.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">FTCN <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17088v2#bib.bib49\" title=\"\">49</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.7.7.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">V</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.7.7.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.7.7.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.7.7.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-</td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.7.7.6\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.7.7.7\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.7.7.8\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">96.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.7.7.9\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">97.4</td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.7.7.10\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.7.7.11\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.9.1.7.7.11.1\">100.</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.7.7.12\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.9.1.7.7.12.1\">100.</span></td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.7.7.13\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.7.7.14\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">77.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.7.7.15\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">78.3</td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.7.7.16\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.7.7.17\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">95.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.7.7.18\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">96.5</td>\n<td class=\"ltx_td ltx_border_r\" id=\"S5.T2.9.1.7.7.19\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.7.7.20\" 
style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">92.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.7.7.21\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">93.1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.9.1.8.8\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.8.8.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">RealForensics <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17088v2#bib.bib19\" title=\"\">19</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.8.8.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">V</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.8.8.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">LRW</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.8.8.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.8.8.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-</td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.8.8.6\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.8.8.7\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.8.8.8\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">88.8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.8.8.9\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">93.0</td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.8.8.10\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.8.8.11\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">99.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.8.8.12\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">99.1</td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.8.8.13\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.8.8.14\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.9.1.8.8.14.1\">99.8</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.8.8.15\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.9.1.8.8.15.1\">99.8</span></td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.8.8.16\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.8.8.17\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">93.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.8.8.18\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">96.7</td>\n<td class=\"ltx_td ltx_border_r\" id=\"S5.T2.9.1.8.8.19\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.8.8.20\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.9.1.8.8.20.1\">95.3</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.8.8.21\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.9.1.8.8.21.1\">97.1</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.9.1.9.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T2.9.1.9.9.1\" rowspan=\"7\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.9.1.9.9.1.1\">\n<span class=\"ltx_inline-block ltx_transformed_outer\" id=\"S5.T2.9.1.9.9.1.1.1\" style=\"width:8.9pt;height:58.8pt;vertical-align:-26.9pt;\"><span 
class=\"ltx_transformed_inner\" style=\"width:58.8pt;transform:translate(-24.93pt,2.92pt) rotate(-90deg) ;\">\n<span class=\"ltx_p\" id=\"S5.T2.9.1.9.9.1.1.1.1\">Unsupervised</span>\n</span></span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.9.9.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">AVBYOL <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17088v2#bib.bib17\" title=\"\">17</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.9.9.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">AV</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.9.9.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">LRW</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.9.9.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">50.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.9.9.6\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">50.0</td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T2.9.1.9.9.7\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T2.9.1.9.9.8\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.9.9.9\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">73.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.9.9.10\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">61.3</td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T2.9.1.9.9.11\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.9.9.12\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">88.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.9.9.13\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">80.8</td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T2.9.1.9.9.14\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.9.9.15\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">60.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.9.9.16\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">33.8</td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T2.9.1.9.9.17\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.9.9.18\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">73.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.9.9.19\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">61.0</td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S5.T2.9.1.9.9.20\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.9.9.21\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">73.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.9.9.22\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">59.2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.9.1.10.10\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.10.10.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">VQ-GAN <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17088v2#bib.bib14\" title=\"\">14</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.10.10.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">V</td>\n<td 
class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.10.10.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">LRS2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.10.10.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.10.10.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-</td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.10.10.6\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.10.10.7\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.10.10.8\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">50.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.10.10.9\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">49.3</td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.10.10.10\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.10.10.11\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">57.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.10.10.12\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">53.0</td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.10.10.13\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.10.10.14\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">49.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.10.10.15\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">48.0</td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.10.10.16\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.10.10.17\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">62.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.10.10.18\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">56.9</td>\n<td class=\"ltx_td ltx_border_r\" id=\"S5.T2.9.1.10.10.19\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.10.10.20\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">55.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.10.10.21\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">51.8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.9.1.11.11\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.11.11.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">A-V Anomaly <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17088v2#bib.bib15\" title=\"\">15</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.11.11.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">AV</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.11.11.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">LRS2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.11.11.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">62.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.11.11.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">71.6</td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.11.11.6\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.11.11.7\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.11.11.8\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">93.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.11.11.9\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">93.7</td>\n<td class=\"ltx_td\" 
id=\"S5.T2.9.1.11.11.10\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.11.11.11\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">95.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.11.11.12\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">95.8</td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.11.11.13\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.11.11.14\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">94.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.11.11.15\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">94.3</td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.11.11.16\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.11.11.17\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">93.8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.11.11.18\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">94.1</td>\n<td class=\"ltx_td ltx_border_r\" id=\"S5.T2.9.1.11.11.19\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.11.11.20\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">94.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.11.11.21\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">94.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.9.1.12.12\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.12.12.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">A-V Anomaly <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17088v2#bib.bib15\" title=\"\">15</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.12.12.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">AV</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.12.12.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">LRS3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.12.12.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">70.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.12.12.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">80.5</td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.12.12.6\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.12.12.7\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.12.12.8\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">91.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.12.12.9\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">93.0</td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.12.12.10\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.12.12.11\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">91.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.12.12.12\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">92.3</td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.12.12.13\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.12.12.14\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">91.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.12.12.15\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">92.7</td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.12.12.16\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S5.T2.9.1.12.12.17\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">91.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.12.12.18\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">93.1</td>\n<td class=\"ltx_td ltx_border_r\" id=\"S5.T2.9.1.12.12.19\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.12.12.20\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">91.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.12.12.21\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">92.8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.9.1.13.13\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.13.13.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">Ours (Intra-modal)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.13.13.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">V</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.13.13.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">VoxCeleb2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.13.13.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.13.13.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-</td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T2.9.1.13.13.6\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T2.9.1.13.13.7\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.13.13.8\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">94.96</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.13.13.9\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">67.99</td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T2.9.1.13.13.10\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.13.13.11\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">96.98</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.13.13.12\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">86.65</td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T2.9.1.13.13.13\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.13.13.14\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">98.33</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.13.13.15\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">90.65</td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T2.9.1.13.13.16\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.13.13.17\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">94.09</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.13.13.18\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">66.15</td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S5.T2.9.1.13.13.19\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.13.13.20\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">96.09</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.1.13.13.21\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">77.86</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.9.1.14.14\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.14.14.1\" 
style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">Ours (Cross-modal)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.14.14.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">AV</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.14.14.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">VoxCeleb2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.14.14.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.9.1.14.14.4.1\">99.68</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.14.14.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.9.1.14.14.5.1\">99.65</span></td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.14.14.6\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.14.14.7\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.14.14.8\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.9.1.14.14.8.1\">99.37</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.14.14.9\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.9.1.14.14.9.1\">95.98</span></td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.14.14.10\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.14.14.11\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.9.1.14.14.11.1\">98.74</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.14.14.12\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">95.66</td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.14.14.13\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.14.14.14\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.9.1.14.14.14.1\">98.81</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.14.14.15\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.9.1.14.14.15.1\">94.58</span></td>\n<td class=\"ltx_td\" id=\"S5.T2.9.1.14.14.16\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.14.14.17\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.9.1.14.14.17.1\">99.38</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.14.14.18\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.9.1.14.14.18.1\">96.25</span></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S5.T2.9.1.14.14.19\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.14.14.20\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.9.1.14.14.20.1\">99.08</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.1.14.14.21\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.9.1.14.14.21.1\">95.62</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.9.1.15.15\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.9.1.15.15.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.9.1.15.15.1.1\">Ours 
(Intra-Cross-modal)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.9.1.15.15.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">AV+V</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.9.1.15.15.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">VoxCeleb2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.9.1.15.15.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.9.1.15.15.4.1\">98.49</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.9.1.15.15.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.9.1.15.15.5.1\">99.41</span></td>\n<td class=\"ltx_td ltx_border_bb\" id=\"S5.T2.9.1.15.15.6\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_border_bb\" id=\"S5.T2.9.1.15.15.7\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.9.1.15.15.8\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.9.1.15.15.8.1\">99.34</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.9.1.15.15.9\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.9.1.15.15.9.1\">95.96</span></td>\n<td class=\"ltx_td ltx_border_bb\" id=\"S5.T2.9.1.15.15.10\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.9.1.15.15.11\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.9.1.15.15.11.1\">99.27</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.9.1.15.15.12\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.9.1.15.15.12.1\">97.71</span></td>\n<td class=\"ltx_td ltx_border_bb\" id=\"S5.T2.9.1.15.15.13\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.9.1.15.15.14\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.9.1.15.15.14.1\">99.43</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.9.1.15.15.15\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.9.1.15.15.15.1\">97.59</span></td>\n<td class=\"ltx_td ltx_border_bb\" id=\"S5.T2.9.1.15.15.16\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.9.1.15.15.17\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.9.1.15.15.17.1\">99.29</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.9.1.15.15.18\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.9.1.15.15.18.1\">95.99</span></td>\n<td class=\"ltx_td ltx_border_bb ltx_border_r\" id=\"S5.T2.9.1.15.15.19\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.9.1.15.15.20\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.9.1.15.15.20.1\">99.33</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.9.1.15.15.21\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" 
id=\"S5.T2.9.1.15.15.21.1\">96.81</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
+ "capture": "Table 2: AUC () and AP () of our methods (Intra-modal, Cross-modal, and Intra-Cross-modal) on FakeAVCeleb[26]. Our combined method outperforms the unsupervised state-of-the-art, and reaches within absolute percentage points of the best supervised method on average (AVG-FV)."
+ },
+ "3": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T3\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T3.4.2.1\" style=\"font-size:90%;\">Table 3</span>: </span><span class=\"ltx_text\" id=\"S5.T3.2.1\" style=\"font-size:90%;\">AUC () comparison of our Intra-Cross-modal method on high/low quality compressed FakeAVCeleb dataset, and attacked KoDF dataset. The best and second-best results of unsupervised methods are in bold and underlined, respectively.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S5.T3.5\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T3.5.1.1\">\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T3.5.1.1.1\" style=\"padding:1pt 8.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.5.1.1.2\" rowspan=\"2\" style=\"padding:1pt 8.0pt;\"><span class=\"ltx_text\" id=\"S5.T3.5.1.1.2.1\">Methods</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"2\" id=\"S5.T3.5.1.1.3\" style=\"padding:1pt 8.0pt;\">FakeAVCeleb</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.5.1.1.4\" style=\"padding:1pt 8.0pt;\">KoDF</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.5.1.1.5\" rowspan=\"2\" style=\"padding:1pt 8.0pt;\"><span class=\"ltx_text\" id=\"S5.T3.5.1.1.5.1\">Training Dataset</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.5.2.2\">\n<td class=\"ltx_td\" id=\"S5.T3.5.2.2.1\" style=\"padding:1pt 8.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.2.2.2\" style=\"padding:1pt 8.0pt;\">HQ</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.2.2.3\" style=\"padding:1pt 8.0pt;\">LQ</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.2.2.4\" style=\"padding:1pt 8.0pt;\">Attacked</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.5.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.5.3.3.1\" rowspan=\"5\" style=\"padding:1pt 8.0pt;\"><span class=\"ltx_text\" id=\"S5.T3.5.3.3.1.1\">\n<span class=\"ltx_inline-block ltx_transformed_outer\" id=\"S5.T3.5.3.3.1.1.1\" style=\"width:8.9pt;height:47.4pt;vertical-align:-21.2pt;\"><span class=\"ltx_transformed_inner\" style=\"width:47.3pt;transform:translate(-19.21pt,2.92pt) rotate(-90deg) ;\">\n<span class=\"ltx_p\" id=\"S5.T3.5.3.3.1.1.1.1\">Supervised</span>\n</span></span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.5.3.3.2\" style=\"padding:1pt 8.0pt;\">Seferbekov<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17088v2#bib.bib43\" title=\"\">43</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.5.3.3.3\" style=\"padding:1pt 8.0pt;\">98.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.5.3.3.4\" style=\"padding:1pt 8.0pt;\">61.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.5.3.3.5\" style=\"padding:1pt 8.0pt;\">61.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.5.3.3.6\" style=\"padding:1pt 8.0pt;\">DFDC</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.5.4.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.4.4.1\" style=\"padding:1pt 8.0pt;\">FTCN<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17088v2#bib.bib49\" title=\"\">49</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.4.4.2\" style=\"padding:1pt 8.0pt;\">84.0</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S5.T3.5.4.4.3\" style=\"padding:1pt 8.0pt;\">37.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.4.4.4\" style=\"padding:1pt 8.0pt;\">58.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.4.4.5\" style=\"padding:1pt 8.0pt;\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.5.5.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.5.5.1\" style=\"padding:1pt 8.0pt;\">LipForensics <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17088v2#bib.bib20\" title=\"\">20</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.5.5.2\" style=\"padding:1pt 8.0pt;\">97.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.5.5.3\" style=\"padding:1pt 8.0pt;\">58.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.5.5.4\" style=\"padding:1pt 8.0pt;\">54.8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.5.5.5\" style=\"padding:1pt 8.0pt;\">LRW</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.5.6.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.6.6.1\" style=\"padding:1pt 8.0pt;\">Real Forensics<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17088v2#bib.bib19\" title=\"\">19</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.6.6.2\" style=\"padding:1pt 8.0pt;\">88.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.6.6.3\" style=\"padding:1pt 8.0pt;\">52.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.6.6.4\" style=\"padding:1pt 8.0pt;\">55.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.6.6.5\" style=\"padding:1pt 8.0pt;\">LRW</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.5.7.7\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.7.7.1\" style=\"padding:1pt 8.0pt;\">MDS-based FD<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17088v2#bib.bib4\" title=\"\">4</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.7.7.2\" style=\"padding:1pt 8.0pt;\">64.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.7.7.3\" style=\"padding:1pt 8.0pt;\">61.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.7.7.4\" style=\"padding:1pt 8.0pt;\">55.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.7.7.5\" style=\"padding:1pt 8.0pt;\">LRW</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.5.8.8\">\n<td class=\"ltx_td\" id=\"S5.T3.5.8.8.1\" style=\"padding:1pt 8.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.8.8.2\" style=\"padding:1pt 8.0pt;\">Joint AV<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17088v2#bib.bib50\" title=\"\">50</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.8.8.3\" style=\"padding:1pt 8.0pt;\">55.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.8.8.4\" style=\"padding:1pt 8.0pt;\">55.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.8.8.5\" style=\"padding:1pt 8.0pt;\">47.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.8.8.6\" style=\"padding:1pt 8.0pt;\">DFDC</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.5.9.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S5.T3.5.9.9.1\" rowspan=\"5\" style=\"padding:1pt 8.0pt;\"><span class=\"ltx_text\" id=\"S5.T3.5.9.9.1.1\">\n<span class=\"ltx_inline-block ltx_transformed_outer\" id=\"S5.T3.5.9.9.1.1.1\" style=\"width:8.9pt;height:58.8pt;vertical-align:-26.9pt;\"><span class=\"ltx_transformed_inner\" 
style=\"width:58.8pt;transform:translate(-24.93pt,2.92pt) rotate(-90deg) ;\">\n<span class=\"ltx_p\" id=\"S5.T3.5.9.9.1.1.1.1\">Unsupervised</span>\n</span></span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.5.9.9.2\" style=\"padding:1pt 8.0pt;\">ICT <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17088v2#bib.bib12\" title=\"\">12</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.5.9.9.3\" style=\"padding:1pt 8.0pt;\">68.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.5.9.9.4\" style=\"padding:1pt 8.0pt;\">66.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.5.9.9.5\" style=\"padding:1pt 8.0pt;\">61.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.5.9.9.6\" style=\"padding:1pt 8.0pt;\">MS-Celeb-1M</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.5.10.10\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.10.10.1\" style=\"padding:1pt 8.0pt;\">ICT-Ref<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17088v2#bib.bib12\" title=\"\">12</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.10.10.2\" style=\"padding:1pt 8.0pt;\">71.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.10.10.3\" style=\"padding:1pt 8.0pt;\">71.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.10.10.4\" style=\"padding:1pt 8.0pt;\">78.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.10.10.5\" style=\"padding:1pt 8.0pt;\">MS-Celeb-1M</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.5.11.11\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.11.11.1\" style=\"padding:1pt 8.0pt;\">ID-Reveal <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17088v2#bib.bib7\" title=\"\">7</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.11.11.2\" style=\"padding:1pt 8.0pt;\">70.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.11.11.3\" style=\"padding:1pt 8.0pt;\">70.8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.11.11.4\" style=\"padding:1pt 8.0pt;\">73.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.11.11.5\" style=\"padding:1pt 8.0pt;\">VoxCeleb2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.5.12.12\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.12.12.1\" style=\"padding:1pt 8.0pt;\">POI-Forensics <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17088v2#bib.bib6\" title=\"\">6</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.12.12.2\" style=\"padding:1pt 8.0pt;\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S5.T3.5.12.12.2.1\">94.1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.12.12.3\" style=\"padding:1pt 8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.5.12.12.3.1\">94.4</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.12.12.4\" style=\"padding:1pt 8.0pt;\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S5.T3.5.12.12.4.1\">80.5</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.12.12.5\" style=\"padding:1pt 8.0pt;\">VoxCeleb2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.5.13.13\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S5.T3.5.13.13.1\" style=\"padding:1pt 8.0pt;\">Ours (Intra-Cross-modal)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" 
id=\"S5.T3.5.13.13.2\" style=\"padding:1pt 8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.5.13.13.2.1\">95.5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S5.T3.5.13.13.3\" style=\"padding:1pt 8.0pt;\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S5.T3.5.13.13.3.1\">85.9</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S5.T3.5.13.13.4\" style=\"padding:1pt 8.0pt;\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.5.13.13.4.1\">84.2</span><span class=\"ltx_text\" id=\"S5.T3.5.13.13.4.2\" style=\"font-size:50%;\">\u00b11.1</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S5.T3.5.13.13.5\" style=\"padding:1pt 8.0pt;\">VoxCeleb2</td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "Table 3: AUC () comparison of our Intra-Cross-modal method on the high/low-quality compressed FakeAVCeleb dataset, and on the attacked KoDF dataset. The best and second-best results of unsupervised methods are in bold and underlined, respectively."
+ }
+ },
+ "image_paths": {
+ "1": {
+ "figure_path": "2311.17088v2_figure_1.png",
+ "caption": "Figure 1: When transferring the motion of the source video (top) to the target identity (Angelina Jolie), the deepfake generation method [13] faces a trade-off: (middle) matching motion exactly results in some frames having the wrong identity which can be detected by looking for intra-modal identity inconsistency; or (bottom) matching identity exactly results in motion distortion which can be detected by looking for video cross-modal inconsistency with audio (e.g., the lips do not move at moments where audio magnitude shows speaking, and vice versa). Red boxes show inconsistencies.",
+ "url": "http://arxiv.org/html/2311.17088v2/x1.png"
+ },
+ "2": {
+ "figure_path": "2311.17088v2_figure_2.png",
+ "caption": "Figure 2: \nTraining and testing scheme for intra-modal consistency and cross-modal consistency methods. For each training batch, we take multiple fixed-size video and audio clips of N distinct identities and feed them into our networks. \u2460 is an output (feature vector) of all identities extracted from the identity network at time window t_a. The similarity matrix computed along the time dimension is given in the gray box on the left; each element \u2461 represents the similarity matrix of \u2460 on a specific time-window pair (t_a, t_b). An intra-modal consistency loss for identity network training is calculated based on this tensor. \u2462 denotes the feature vector generated by the video and audio networks at time window t_a. The features of each individual across multiple time windows are used to generate their corresponding similarity matrix. A cross-modal consistency loss for video and audio network training is calculated from these N 2-dimensional matrices.",
+ "url": "http://arxiv.org/html/2311.17088v2/x2.png"
+ },
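The Figure 2 caption above describes two similarity computations; the following is a minimal, illustrative sketch of how such matrices could be formed from pre-extracted features. All tensor names, shapes, and the random stand-in data are assumptions, not the authors' code:

```python
# Illustrative sketch of the similarity matrices in the Figure 2 caption.
# Features are assumed to be pre-extracted and L2-normalized per clip.
import torch
import torch.nn.functional as F

N, T, D = 4, 8, 256                                  # identities, windows, dims
vid = F.normalize(torch.randn(N, T, D), dim=-1)      # video identity features
aud = F.normalize(torch.randn(N, T, D), dim=-1)      # audio features

# Intra-modal: an N x N identity-similarity matrix per window pair (t_a, t_b).
M_intra = torch.einsum('itd,jsd->tsij', vid, vid)    # shape (T, T, N, N)

# Cross-modal: a T x T video/audio similarity matrix per identity.
M_cross = torch.einsum('itd,isd->its', vid, aud)     # shape (N, T, T)

# Toy consistency scores: self-similarity across time should stay high for
# real videos and fluctuate for deepfakes.
intra_score = M_intra[..., torch.arange(N), torch.arange(N)].mean()
cross_score = M_cross.diagonal(dim1=1, dim2=2).mean()
print(intra_score.item(), cross_score.item())
```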
+ "3": {
+ "figure_path": "2311.17088v2_figure_3.png",
+ "caption": "Figure 3: The explainability of the proposed methods using intra-modal consistency loss and cross-modal consistency loss for two samples in FakeAVCeleb. When the method decides that a given video is fake due to its average score being lower than a threshold (light-gray boxes), it can provide the portions of the video with the minimum consistency score to a human expert as explanation (yellow boxes), and the expert can verify the method\u2019s decision through manual comparison.",
+ "url": "http://arxiv.org/html/2311.17088v2/x3.png"
+ },
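The Figure 3 caption above describes a simple decide-then-explain procedure; a toy version of it, with made-up per-window scores and an assumed threshold, could look like this:

```python
# Toy version of the decision-and-explanation scheme in the Figure 3 caption:
# threshold the average per-window consistency score, then surface the
# least-consistent windows for a human expert. Scores here are fabricated.
import torch

scores = torch.tensor([0.91, 0.88, 0.23, 0.85, 0.31, 0.90])  # per time window
THRESHOLD = 0.75                       # assumed operating point

is_fake = scores.mean() < THRESHOLD
explanation = scores.topk(2, largest=False).indices  # least consistent windows
print(f"fake={is_fake.item()}, inspect windows {explanation.tolist()}")
```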
+ "4": {
+ "figure_path": "2311.17088v2_figure_4.png",
+ "caption": "Figure 4: The similarity matrices (M_intra) of the Intra-modal method clearly show the stronger temporal fluctuations of identity in deepfake (middle) compared to real (top).\nIntra-modal method AUC on FakeAVCeleb, sorted by the magnitude of motion in videos, shows increasing performance with increasing motion in videos (bottom).",
+ "url": "http://arxiv.org/html/2311.17088v2/x4.png"
+ }
+ },
+ "validation": true,
+ "references": [],
+ "url": "http://arxiv.org/html/2311.17088v2"
+ }
20240620/2311.17451v3.json ADDED
@@ -0,0 +1,145 @@
+ {
+ "title": "Wireless Network Digital Twin for 6G: Generative AI as A Key Enabler",
+ "abstract": "Digital twin, which enables emulation, evaluation, and optimization of physical entities through synchronized digital replicas, has gained increasing attention as a promising technology for intricate wireless networks. For 6G, numerous innovative wireless technologies and network architectures have posed new challenges in establishing wireless network digital twins. To tackle these challenges, artificial intelligence (AI), particularly the flourishing generative AI, emerges as a potential solution. In this article, we discuss emerging prerequisites for wireless network digital twins considering the complicated network architecture, tremendous network scale, extensive coverage, and diversified application scenarios in the 6G era. We further explore the applications of generative AI, such as the Transformer and the diffusion model, to empower the 6G digital twin from multiple perspectives including physical-digital modeling, synchronization, and slicing capability. Subsequently, we propose a hierarchical generative AI-enabled wireless network digital twin at both the message level and the policy level, and provide a typical use case with numerical results to validate its effectiveness and efficiency. Finally, open research issues for wireless network digital twins in the 6G era are discussed.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "From the first-generation (1G) analog communication systems to the contemporary fifth-generation (5G) era, we have witnessed a rapid evolution of wireless communication networks. As we stand on the brink of the sixth-generation (6G) era, there is a continuous emergence of novel technologies aimed at meeting superior performance demands for next-generation networks, such as microsecond-scale latency, terabits-per-second-scale peak data rate, and ubiquitous connectivity [1]. The integration of advanced wireless technologies, for instance, space-air-ground integrated networks, integrated sensing and communication (ISAC), and native network intelligence, significantly escalates the intricacy of network structure and functionality, necessitating digitalization of wireless networks for the sake of network performance evaluation and dynamic optimization. To tackle this issue, the wireless network digital twin, which is capable of establishing a faithful replica of a physical network, has been considered a promising technology in the 6G era for its extensive applications throughout research and development life cycles. It provides a risk-free environment for preliminary research on innovative technologies, facilitates the verification of extended 6G network architectures before deployment, and accelerates adaptation along with real-time wireless network management during practical applications.\nThe concept of the digital twin, encompassing a physical entity, a digital representation, and a communication channel, was first proposed by Grieves in 2003 [2]. In recent years, endeavors have been made to establish digital twins of wireless networks through diversified paradigms, including programming, mathematical modeling, and artificial intelligence (AI). These developments empower network operators to predict potential issues and seek optimization solutions via digital twin-based emulation and evaluation. In [3], digital twins have been employed for assessing the performance of a core network, forecasting service changes, and optimizing network management. In the realm of the radio access network (RAN), a digital twin has been established to facilitate intelligent resource management for the RAN [4]. For wireless network topologies, graph neural network-based digital twins have been developed to enhance latency prediction across potential topologies [5]. However, creating digital twins is usually nontrivial, necessitating not only accurate modeling of network functions but also efficient synchronization between the physical and digital entities. This poses challenges for employing current digital twin methods in the evolving 6G network. Additionally, existing wireless network digital twins lack the adaptability and scalability to cope with dynamic network status and user demands.\nThe impending vision of the 6G era, as released by the International Telecommunication Union (ITU) [6] in June 2023, will inevitably result in a wireless network with substantial scale, intricate structures, extensive states, and diversified services. To achieve network autonomy and intelligence, it is widely advocated that AI should be deeply integrated into wireless networks, encompassing the paradigms of AI for network (AI4Net) and network for AI (Net4AI). 
Currently, efforts have been made towards developing AI-enhanced network digital twins, utilizing data-driven neural networks to simulate the network functions, network topology, and user behavior within wireless networks [7]. In addition, deep reinforcement learning (DRL) has been an effective approach to integrate with wireless network digital twins to tackle optimization tasks like network admission control and resource allocation [4]. However, current digital twin methods still face challenges in the 6G era in terms of physical-digital modeling, synchronization, and slicing capability. Consequently, there is an immediate need to explore innovative approaches to wireless network digital twins to align with the evolving requirements of 6G.\nThe rapid development of deep learning (DL) technologies has facilitated the emergence of generative AI models, capable of creating novel and realistic content such as text and images. With available large-scale data, powerful computing resources, and novel algorithms, generative AI has been achieving successful commercialization in various domains. ChatGPT, as a conversational agent based on a generative Transformer decoder, generates fluent and engaging responses according to input sentences from users. In terms of images, DALL\u00b7E, based on generative diffusion models, has shown the ability to generate high-quality and diverse images both from scratch and conditioned on a given input. The achievements in the domains of natural language processing (NLP) and computer vision (CV) have galvanized researchers to integrate generative AI into mobile networks [8]. Similarly, these successes have inspired us to leverage generative AI in the enhancement and substitution of digital twin techniques in the 6G era. The generative characteristics of these AI models hold promise in addressing the issue of data scarcity encountered during the construction of wireless network digital twins, while their superior transferability could be instrumental in enabling digital twins to rapidly adapt to evolving scenarios in 6G. Furthermore, the advanced and diversified architectures of generative AI models offer the potential to create digital twins for wireless networks from multiple dimensions, thereby constructing a more comprehensive hierarchical wireless network digital twin.\n[Figure 1] The remainder of this article is organized as follows. First, we introduce wireless network digital twins for 6G networks. Then, we analyze potential applications of generative AI in 6G network digital twins. A hierarchical generative AI-enabled wireless network digital twin is presented, followed by a case study with numerical verifications. Finally, future research directions are discussed before a conclusion."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "II Wireless Network Digital Twin in 6G Era",
+ "text": ""
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "II-A Tractable Solutions of Wireless Network Digital Twins",
+ "text": "The digital twin has been recognized as a vital facilitator in wireless networks [9]. Due to the intricacy of modern wireless networks, however, it is usually challenging to establish a single digital twin for an entire wireless network. Consequently, a common tractable solution is to segment the wireless network and create digital representations for individual components such as the core network, network topology, RAN, and user behaviors, as depicted in Fig. 1.\nIn practice, digital twins can be developed selectively for these network components to meet individual objectives. For instance, a dedicated RAN digital twin can be created to provide a pre-verification environment for intelligent resource management [4]. Similarly, accurate traffic prediction necessitates a digital twin that emulates the behaviors of network entities including mobile terminals and infrastructure nodes [10].\nModeling methodologies for these digital twins are customized based on the architecture and functionality of their respective network elements. Currently, digital twins for core networks are primarily developed through conventional programming according to protocols, with the integration of open-source projects to streamline the development process [3], while network topology digitalization usually exploits graph neural networks due to their structural similarity [5]. Principal component analysis (PCA)-based outdated model discovery has also been employed in the construction of network topology digital twins [11]. A summary of existing digital twin solutions for wireless networks is provided in Table I."
+ },
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "II-B Requirements of Wireless Network Digital Twins for 6G",
+ "text": "While these methods have successfully established digital twins for current wireless networks, the advent of the 6G era comes with multifarious stringent requirements that are far beyond the capabilities of existing methods. Besides typical 5G network key performance indicators (KPIs) in terms of latency, data rate, mobility, and so on, additional indicators such as global coverage, applicable AI-related capabilities, and sustainability have been raised by the ITU in the international mobile telecommunications 2030 (IMT-2030) framework [6] to accommodate disruptive use cases and applications in 6G. In order to satisfy the emerging KPIs, numerous innovative technologies and network architectures have been proposed and incorporated into wireless networks, including representatives such as integrated sensing, computing, and communication [12], autonomous networks, and native AI [13]. This surge of innovations has substantially increased network complexity, scalability, and heterogeneity, necessitating advancements in corresponding digital twin technologies. These requirements are broadly categorized into three key areas.\nPhysical-digital modeling:\nWhile model-driven digital twin methods can effectively emulate network functions in many cases, the substantial time and capital investment for system replication renders them unsuitable in this fast-paced technological era. Furthermore, the heterogeneity of wireless network architectures and functions in varying 6G scenarios also makes it impractical to conduct extended programming and customization for each type of network during the establishment of digital twins. Hence, it is inevitable that self-monitored and automated approaches, for instance AI-enabled methods, will take over from conventional model-driven methods to cater to the demands of the 6G era.\nOn the other hand, data-driven methodologies, which employ neural networks to intelligently establish digital twins, are inherently data-hungry. These AI approaches require large volumes of data to ensure learning convergence, especially for the emerging massive wireless networks. With the expected surge in connection density, network scale, and coverage in 6G, the need for training data is predicted to grow substantially, potentially even exponentially. However, data collection, filtering, and labeling in physical networks are in fact costly, time-consuming, and privacy-sensitive. It is practically impossible to acquire data covering all potential scenarios and conditions that the network may encounter. Consequently, a tractable and efficient strategy for data acquisition and dataset construction becomes a fundamental challenge to facilitate the training of digital twin models via data-driven methods.\nPhysical-digital synchronization:\nIn order to achieve accurate replication of a physical 6G network, timely synchronization between the physical and digital entities is a key ingredient to maintain synchronized operation of digital twins. In current 5G networks, synchronization is achieved through a local deployment of digital twins. However, the advent of the 6G era significantly amplifies the scale and complexity of wireless networks, thereby posing challenging requirements for wireless network digital twins. For example, space-air-ground integrated 6G networks extend two-dimensional (2D) terrestrial coverage to three-dimensional (3D) global coverage, incorporating terrestrial and aerial base stations. 
Moreover, heterogeneous networks employ a combination of macro, pico, and femto base stations for flexible and cost-effective coverage, while edge computing necessitates collaborative optimization across various nodes, including edge and cloud. For these scenarios, traditional local deployment of digital twins becomes inadequate. In this context, the construction of digital twins can either adopt a distributed deployment strategy with remote collaboration or opt for a centralized deployment in the cloud, coordinating distributed segments via synchronization signaling. Both methods consume excessive bandwidth on this overhead and could potentially interfere with regular network communications, even jeopardizing the guarantee of ultra-reliable low-latency communication in future networks. Moreover, noise and interference during wireless data transmission can result in the corruption and distortion of exchanged data, compromising the accuracy and reliability of digital twins. Therefore, it is imperative to develop an affordable transmission strategy to fulfill the stringent synchronization requirements of 6G wireless network digital twins. Given the prior success of semantic communication [12] and AI-driven physical-layer communication [14], AI technology is expected to be a viable solution for meeting these requirements.\nSlicing capability:\nNetwork slicing for 6G enables the creation of multiple distinct virtual networks on top of a shared physical hardware infrastructure, allowing dynamic services and applications to have customized network structures, functions, procedures, and resources with performance guarantees. There are three typical types of slices in the current 5G network: enhanced mobile broadband (eMBB), ultra-reliable low-latency communication (URLLC), and massive machine-type communication (mMTC). However, existing digital twin methods exhibit limitations in providing customized functionality for each slice type and establishing digital slices in alignment with the sliced physical network. As we move into the 6G era, the number and diversity of slices are expected to further increase to accommodate new use cases and scenarios. The typical scenarios, previously eMBB, URLLC, and mMTC in 5G, have evolved into immersive communication, hyper-reliable low-latency communication (HRLLC), and massive communication, respectively, along with the addition of ubiquitous connectivity, AI and communication, and ISAC [6]. Although various network slicing technologies have successfully integrated AI techniques like deep learning and reinforcement learning for rapid adaptation to emerging scenarios in 6G [13], such paradigms have not yet been extensively adopted in current network digital twin solutions. Consequently, introducing the slicing capability to wireless network digital twins becomes increasingly significant for addressing the dynamic visions and application scenarios in 6G.\nAmong all these requirements, physical-digital modeling is considered the primary challenge in the 6G era. It serves as the foundation for wireless network digital twins, directly impacting their fidelity in representing physical network elements and their effectiveness in network emulation, evaluation, and optimization. Moreover, the evolving scalability and heterogeneity of the impending 6G network present additional obstacles in the development of modeling techniques."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "III Potentials of Generative AI in 6G Wireless Network Digital Twin",
+ "text": "Generative AI models, typified by the Transformer, the generative adversarial network (GAN), and the diffusion model, provide opportunities to fulfill the evolving requirements of 6G wireless network digital twins. We explore applications of these generative AI technologies to enable 6G digital twins from four perspectives.\n[Figure 2]"
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "III-A Generative Data Augmentation for Digital Twin",
+ "text": "Compared to existing 5G digital twins, the data-driven modeling method for wireless network digital twins is predicted to be much more data-hungry in the 6G era. Data augmentation, a technique that enhances both data quality and quantity by generating new samples from existing ones, is a common solution. The GAN is a classic generative AI model that consists of two competing neural networks: a generator producing fake data samples, and a discriminator distinguishing between real and fake data samples. Through the alternating training of both models, the GAN gradually acquires the capability to generate synthetic data samples indistinguishable from real ones, making it a valuable tool for data augmentation, particularly in the intricate 6G network. In the 6G era, network services have been further categorized into more segments, posing difficulties for the acquisition of specific network service data. The application of generative data augmentation offers a feasible solution by creating diverse synthetic network data, such as user behaviors and traffic patterns, based on collected real-world data. In addition, through generative data augmentation, possible but previously unencountered network topologies and network states can be generated. Generative data augmentation enhances not only the volume but also the diversity of wireless network data, thereby facilitating the training of wireless network digital twin models and enabling the assessment of potential risks in 6G networks."
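A minimal, illustrative sketch of the GAN-based augmentation described in the section above, trained on stand-in network feature vectors; every dimension, dataset, and hyperparameter here is a placeholder assumption rather than a setup taken from the paper:

```python
# Minimal GAN sketch for augmenting wireless-network samples (e.g., per-cell
# traffic feature vectors). Sizes and the random "real" data are illustrative.
import torch
import torch.nn as nn

LATENT, FEAT = 16, 8                      # noise dim, network-feature dim

G = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(),
                  nn.Linear(64, FEAT))                     # generator
D = nn.Sequential(nn.Linear(FEAT, 64), nn.LeakyReLU(0.2),
                  nn.Linear(64, 1))                        # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_data = torch.randn(512, FEAT)        # stand-in for collected measurements

for step in range(200):                   # alternating adversarial training
    real = real_data[torch.randint(0, 512, (64,))]
    fake = G(torch.randn(64, LATENT))
    # Discriminator: push real pairs toward 1 and generated ones toward 0.
    loss_d = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: try to fool the discriminator.
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

augmented = G(torch.randn(1000, LATENT)).detach()  # synthetic training samples
```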
+ },
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "III-B Generative Transmission for Digital Twin",
+ "text": "Wireless network digital twins can leverage innovative generative AI techniques, such as the diffusion model, to enhance the transmission process. The diffusion model involves two main phases: a forward diffusion process that progressively introduces noise to data until it reaches a predefined noise level, and a reverse denoising process that gradually eliminates the noise until the data is restored to its original form. This empowers the diffusion model to effectively learn from highly noisy data and generate high-fidelity samples. By combining it with classic encoder-decoder models, such as U-Net and the deep joint source-channel coding model, it can learn the process of source compression, noise introduction, signal denoising, and data reconstruction in wireless communication in an offline manner.\nAfter thorough training, the encoder serves as a generative transmitter for the synchronization of 6G wireless network digital twins. It enables remote collaboration among digital twins deployed in a distributed manner, and facilitates physical-digital synchronization between networks and digital twins deployed in a central cloud. The generative transmitter extracts key information from the synchronization data and compresses it into a low-dimensional latent space. After passing through a noisy channel, the generative receiver, consisting of the reverse diffusion model and the decoder, restores the noisy data to its original state. Additionally, the transmission performance can be further enhanced through online training that utilizes real-time data collected from 6G wireless network digital twins. Such a generative transmission solution can substantially facilitate the physical-digital synchronization of 6G wireless network digital twins, especially in complex scenarios like space-air-ground integrated communication in the 6G era."
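A minimal sketch of the two diffusion phases just described, assuming a toy latent produced by a generative transmitter and a small MLP noise predictor; it is not a full joint source-channel codec, and all sizes and schedules are assumptions:

```python
# Sketch of forward noising and learned denoising for a diffusion model.
import torch
import torch.nn as nn

T = 100
beta = torch.linspace(1e-4, 0.02, T)              # noise schedule
alpha_bar = torch.cumprod(1.0 - beta, dim=0)      # cumulative signal fraction

denoiser = nn.Sequential(nn.Linear(33, 128), nn.SiLU(), nn.Linear(128, 32))
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

def add_noise(x0, t):
    """Forward process: sample q(x_t | x_0) in closed form."""
    eps = torch.randn_like(x0)
    a = alpha_bar[t].sqrt().unsqueeze(-1)
    s = (1 - alpha_bar[t]).sqrt().unsqueeze(-1)
    return a * x0 + s * eps, eps

for step in range(500):                            # train the noise predictor
    x0 = torch.randn(64, 32)                       # stand-in transmit latents
    t = torch.randint(0, T, (64,))
    xt, eps = add_noise(x0, t)
    inp = torch.cat([xt, t.float().unsqueeze(-1) / T], dim=-1)
    loss = ((denoiser(inp) - eps) ** 2).mean()     # predict the injected noise
    opt.zero_grad(); loss.backward(); opt.step()

# At the receiver, the reverse process would iteratively denoise the
# channel-corrupted latent using the trained predictor before decoding.
```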
+ },
+ {
+ "section_id": "3.3",
+ "parent_section_id": "3",
+ "section_name": "III-C Generative AI as Digital Twins",
+ "text": "Apart from assisting digital twin construction, a generative AI model itself can serve as a digital twin. These generative models replicate the behavior of physical networks, offering a more efficient alternative to labor-intensive and time-consuming model-driven solutions. By viewing the interaction signaling messages between different network entities as text, generative language models, for example, Transformers, can be employed in establishing a message-level digital twin. The Transformer utilizes self-attention mechanisms to capture extended dependencies and contextual information from sequential data. Through training with extensive message data, the Transformer-based digital twin is able to emulate network functions by generating responses according to the requests and past interaction messages. It is worth noting that Transformers can encounter difficulties in learning specific algorithms like encryption, due to the lack of prior knowledge such as network user identity documents (IDs), flow billing information, and so on. Thus, a faithful message-level digital twin for the 6G network can be established through cooperation among Transformer-based models, databases, and dedicated algorithms."
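To make the message-level idea concrete, here is a minimal, hypothetical sketch of a decoder-only Transformer trained to predict the next token of a tokenized signaling dialogue; the vocabulary and data below are placeholders, since real training would use captured interface traces:

```python
# Sketch of a message-level twin: a small causal Transformer language model
# over signaling-message tokens. All sizes and the random data are assumed.
import torch
import torch.nn as nn

VOCAB, DIM, CTX = 1000, 128, 64            # message-token vocab, model dims

class MessageTwin(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, DIM)
        self.pos = nn.Embedding(CTX, DIM)
        layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, tokens):                      # tokens: (batch, length)
        L = tokens.size(1)
        h = self.emb(tokens) + self.pos(torch.arange(L, device=tokens.device))
        mask = nn.Transformer.generate_square_subsequent_mask(L)
        h = self.blocks(h, mask=mask)               # causal mask: a reply may
        return self.head(h)                         # depend only on the past

twin = MessageTwin()
dialog = torch.randint(0, VOCAB, (8, CTX))          # tokenized request/response
logits = twin(dialog[:, :-1])                       # next-token prediction
loss = nn.functional.cross_entropy(logits.reshape(-1, VOCAB),
                                   dialog[:, 1:].reshape(-1))
loss.backward()
```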
+ },
+ {
+ "section_id": "3.4",
+ "parent_section_id": "3",
+ "section_name": "III-D Generative Model-based Transfer Learning for Digital Twin",
+ "text": "To tackle diverse scenarios and heterogeneous network structures in 6G, the slicing capability also needs to be incorporated into digital twins. For instance, compared to the massive communication scenario, the wireless network digital twins for HRLLC need to ensure higher quality of service, conduct more concise service request procedures, and serve relatively fewer end users. These differences are reflected in the composition and distribution of the constructed dataset, such as behaviors, network topologies, and exchanged messages. To deal with this issue, customized digital twins for diverse slices can be achieved through generative model-based transfer learning. By exploiting knowledge from a well-trained source task, transfer learning helps the 6G wireless network digital twin avoid training generative AI models from scratch and accelerates the adaptation to new, but similar, scenarios. Particularly, after training in a general scenario, most generative AI-based digital twins implement transfer learning through parameter sharing and retraining on newly acquired data, thereby preserving their effectiveness in new scenarios. However, such a method might not suffice for large generative models like Transformers and diffusion models. In such a case, fine-tuning can be employed by freezing the parameters in the lower layers of the pre-trained model of a digital twin, while adjusting the parameters in the last few layers based on new data from changing 6G applications. It allows the digital twin to rapidly fine-tune its representations and predictions in different slices while preserving the general functionality."
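A minimal sketch of the freeze-and-fine-tune strategy described above, using a stand-in backbone for a pre-trained digital twin (architecture, sizes, and slice data are all assumptions):

```python
# Slice adaptation sketch: freeze lower layers, retrain only the last layer.
import torch
import torch.nn as nn

# Stand-in for a pre-trained digital-twin backbone; in practice this would be
# the generative model trained in the general scenario.
twin = nn.Sequential(
    nn.Linear(32, 128), nn.ReLU(),          # lower layers: general knowledge
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 10),                     # last layer: slice-specific head
)

for p in twin.parameters():
    p.requires_grad = False                 # freeze all pre-trained weights
for p in twin[-1].parameters():
    p.requires_grad = True                  # ...except the final layer

opt = torch.optim.Adam([p for p in twin.parameters() if p.requires_grad],
                       lr=1e-4)
x, y = torch.randn(64, 32), torch.randint(0, 10, (64,))  # slice-specific data
loss = nn.functional.cross_entropy(twin(x), y)
opt.zero_grad(); loss.backward(); opt.step()
```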
+ },
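The layer-freezing strategy described above can be expressed in a few lines; the sketch below assumes the pre-trained twin exposes an ordered stack of layers through a `.layers` attribute, which is an illustrative simplification.

```python
import torch
import torch.nn as nn

def adapt_twin_to_slice(pretrained_twin: nn.Module, n_trainable_layers: int = 2):
    """Freeze the lower layers of a pre-trained digital-twin model and keep
    only the last few layers trainable for the new slice's data."""
    layers = list(pretrained_twin.layers)            # assumed ordered stack
    for layer in layers[:-n_trainable_layers]:       # lower layers: frozen
        for p in layer.parameters():
            p.requires_grad = False
    for layer in layers[-n_trainable_layers:]:       # upper layers: tuned
        for p in layer.parameters():
            p.requires_grad = True
    return pretrained_twin

# Fine-tuning then optimizes only the unfrozen parameters on slice data:
# twin = adapt_twin_to_slice(twin)
# opt = torch.optim.Adam((p for p in twin.parameters() if p.requires_grad), lr=1e-4)
```

Freezing the lower layers preserves the general representations learned in the source scenario, while the small set of trainable upper layers adapts quickly to the new slice with limited data.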
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "IV Hierarchical Generative AI-Enabled Wireless Network Digital Twin: A Case Study",
+ "text": "###figure_3### Given the current absence of a definitive 6G network architecture, we consider a typical 5G wireless network consisting of a data network, a core network, RAN, and user equipments (UEs). With the technological evolution of network function virtualization (NFV) and software-defined network (SDN), the 5G network has introduced control and user plane separation (CUPS). In this architecture, the user plane concentrates solely on managing the substantial user traffic from UE to the data network. In contrast, the control plane, which integrates most of the core network functions, uses a relatively small number of critical signaling messages to govern the entire user plane. To realize the digitalization of core network and support the assessment, development, and optimization of network technologies, we establish a hierarchical generative AI-enabled digital twin of a beyond 5G (B5G) core network control plane, as depicted in Fig. 3 ###reference_###."
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "IV-A Hierarchical Framework",
+ "text": "The hierarchical digital twin consists of two levels, that is, a message-level digital twin based on external exchanged messages, and a policy-level digital twin utilizing internal network states and actions.\nThe message-level digital twin is established upon a generative Transformer model, trained through signaling messages between the control and user planes. The core network is composed of multiple virtualized network functions, including access and mobility management function (AMF), session management function (SMF), policy control function (PCF), user plane function (UPF), and many others. Control signaling messages between the control and user planes flow through distinct interfaces, which include the N1 interface between UE and AMF, the N2 interface between RAN and AMF, and the N4 interface between SMF and UPF. Therefore, by capturing interaction messages from these interfaces and converting them into sequential directed dialogue texts, we construct a message-level dataset. This dataset is then utilized in training a message-level digital twin to emulate the functions of the core network control plane [7 ###reference_b7###].\nWhile the purely message-based digital twin successfully replicates network functions of the control plane in conventional situations, it may experience malfunctions in certain scenarios due to the lack of prior knowledge. For instance, the successful establishment of a session for a specific UE depends on not only the UE\u2019s legitimacy but also current network conditions such as queueing requests from other UEs and remaining resources. The generative Transformer model can hardly process the entire history of interaction signaling messages simultaneously or extract network state information from these messages. As a solution, the policy-level digital twin is introduced to replicate internal network policies [15 ###reference_b15###], such as admission control, resource allocation, and session management, and assist the operation of the message-level digital twin.\nIn order to establish the policy-level digital twin, the network state and action should be collected whenever a decision is made under a specific network policy, thereby constructing the training dataset. Considering the scarcity and rarity of data, a generative AI-based data augmentation method is employed to enrich the training dataset. We exploit a GAN model in which the generator takes a random sample from the entire state space as input and generates an action within the action space as output, while the discriminator distinguishes between the state-action pair from the actual network policy and the synthetic state-action pair. After training, this GAN model can construct and generate a sufficient augmented dataset, which is used to train the policy-level digital twin model composed of multilayer neural networks through supervised learning.\nThe policy-level digital twin is capable of replicating the actual network policy in the physical network, thereby assisting the message-level digital twin in accurately emulating core network functions, particularly in scenarios involving policy-related messages. Furthermore, in response to the growing demand for 6G network optimization, the policy-level digital twin can be leveraged within DRL to enhance the existing policy of the physical network. 
Specifically, the policy-level digital twin, which has parameterized the network policy through supervised learning, serve as the actor network in the actor-critic style DRL model, for example, the advantage actor critic (A2C). By tailoring rewards according to specific objectives, such as revenue maximization or fairness optimization, we can iteratively optimize the network policy to meet desired goals."
+ },
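The GAN-based augmentation and its hand-off to DRL can be sketched as follows; the state/action dimensionalities, network widths, and the uniform sampling of the state space are illustrative assumptions rather than the paper's exact setup.

```python
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 32, 8                        # illustrative sizes
G = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                  nn.Linear(64, ACTION_DIM))         # state -> action
D = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
                  nn.Linear(64, 1), nn.Sigmoid())    # real vs. synthetic pair
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def gan_step(real_states, real_actions):
    """One adversarial update on logged (state, action) pairs."""
    n = len(real_states)
    fake_states = torch.rand(n, STATE_DIM)           # sample the state space
    fake_actions = G(fake_states)
    # Discriminator: real policy decisions -> 1, synthetic ones -> 0.
    d_loss = (bce(D(torch.cat([real_states, real_actions], -1)), torch.ones(n, 1))
              + bce(D(torch.cat([fake_states, fake_actions.detach()], -1)),
                    torch.zeros(n, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: make synthetic pairs indistinguishable from the real policy.
    g_loss = bce(D(torch.cat([fake_states, G(fake_states)], -1)), torch.ones(n, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

After training, sampling states and querying `G` yields the augmented dataset; the supervised policy network fitted on it can then initialize the actor of an A2C agent whose reward encodes the chosen objective.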
+ {
+ "section_id": "4.2",
+ "parent_section_id": "4",
+ "section_name": "IV-B The Case Study",
+ "text": "In the case study, we consider a specific admission control policy in the policy-level digital twin. In the admission control process, a sliced network with four distinct 6G slices is considered, including slices for immersive communication, HRLLC, massive communication, and ubiquitous connectivity. As illustrated in the upper left of Fig. 3 ###reference_###, various service requests arise from end users, and these requests are directed to tenants responsible for requisitioning and allocating network infrastructure resources to fulfill their service needs. The admission policy analyzes the current queue of requests and the available resources to determine the viability and priority of accepting a specific request. Service requests that have not been admitted remain in the queue, awaiting admission until they expire due to a timeout.\nWe set three types of resources for the network services: radio, computing, and storage resources, with resource utilization varying across services within the network slices. For instance, the immersive communication slice predominantly utilizes radio resource, whereas the HRLLC slice places a heavier demand on computing resource. Arrival rates for different service requests are customized according to the slice feature and following a Poisson arrival process. Unique means for service times in different slices are also set, following an exponential distribution. The admission policy in the physical network is implemented as a greedy algorithm. In terms of models, the performance of the generative Transformer model and a conventional non-generative long short-term memory (LSTM) model are compared. The maximum number of concurrent UEs is altered to assess the robustness of the two models across various scenarios. Accuracy serves as a fundamental performance metric. Considering the sparsity of the information in signaling messages, we also adopt the recall and precision metrics to better evaluate the prediction of positive samples, which refer to the information elements carried in the messages. In Fig. 4a ###reference_sf1###, the generative AI-enhanced digital twin model exhibits notable performance superiority against the non-generative LSTM model, especially in scenarios with a high volume of concurrently served UEs. This superiority is creditable to the parallel architecture and the distinctive attention mechanism in the Transformer. The parallel architecture enables the model to process all historical signaling messages simultaneously, avoiding the long-term dependency issue, while the attention mechanism helps it concentrate on the most important messages, particularly those belonging to the currently processed UE. Additionally, to evaluate the optimization efficacy of wireless network digital twins, we conducted an experiment employing the optimization strategy outlined in the previous section. The results, as depicted in Fig. 4b ###reference_sf2###, illustrate a significant advantage of the digital twin-based DRL compared to the conventional DRL, particularly within the first thousands of iterations.\n###figure_4### ###figure_5###"
+ },
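For intuition, here is a toy discrete-time rendering of the simulated setting: Poisson arrivals per slice, exponential service times, three resource pools, and a greedy admission rule with request timeouts. All rates, demands, and means are invented for illustration and do not reproduce the paper's experimental values.

```python
import numpy as np

CAPACITY = {"radio": 100.0, "computing": 100.0, "storage": 100.0}
SLICES = {  # name: (arrival rate, mean service time, per-request demand)
    "immersive":  (0.8, 20.0, {"radio": 6, "computing": 2, "storage": 1}),
    "hrllc":      (0.5, 10.0, {"radio": 2, "computing": 6, "storage": 1}),
    "massive":    (1.5, 30.0, {"radio": 1, "computing": 1, "storage": 1}),
    "ubiquitous": (0.3, 40.0, {"radio": 2, "computing": 2, "storage": 2}),
}

def step(queue, free, active, now, dt=1.0, timeout=5.0):
    # 1) Poisson arrivals join the queue with an expiry deadline.
    for name, (rate, mean_s, demand) in SLICES.items():
        for _ in range(np.random.poisson(rate * dt)):
            queue.append({"slice": name, "demand": demand,
                          "expires": now + timeout})
    # 2) Release resources of sessions whose service time has ended.
    for sess in [s for s in active if s["ends"] <= now]:
        active.remove(sess)
        for r in free:
            free[r] += sess["demand"][r]
    # 3) Greedy admission: admit queued requests in order whenever all
    #    three resources suffice; the rest wait or expire on timeout.
    for req in list(queue):
        if req["expires"] <= now:
            queue.remove(req)
        elif all(free[r] >= req["demand"][r] for r in free):
            queue.remove(req)
            for r in free:
                free[r] -= req["demand"][r]
            req["ends"] = now + np.random.exponential(SLICES[req["slice"]][1])
            active.append(req)
```

The logged (state, action) traces from such a greedy controller are exactly the kind of data the policy-level digital twin is trained to imitate.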
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Open Opportunities and Issues",
+ "text": "In the following, we discuss open research opportunities and issues pertaining to generative AI-enabled 6G wireless network digital twins."
+ },
+ {
+ "section_id": "5.1",
+ "parent_section_id": "5",
+ "section_name": "Cross-layer Digital Twin Collaboration",
+ "text": "While the majority of digital twin methods are tailored for specific network elements such as RAN and core networks, end-to-end optimization of wireless networks calls for the collaboration of digital twins across multiple network elements. It is therefore imperative to deeply explore the synthesis of 6G wireless network digital twins with diverse structures and functions, potentially operating asynchronously, as well as their utilization for specific optimization objectives. Besides, the substantial scale of generative AI models like Transformers opens new research opportunities in terms of improving response time and computational resources utilization, while the needs for coordinating optimization goals, functions, and synchronization intervals arise with challenging issues of cross-layer digital twin collaboration. Consequently, the model compression for generative AI-enabled wireless network digital twins, as well as the wireless federated learning techniques, are demanding research issues for effective collaboration, especially within distributed wireless networks."
+ },
+ {
+ "section_id": "5.2",
+ "parent_section_id": "5",
+ "section_name": "Privacy and Security of Network Digital Twin",
+ "text": "There are numerous 6G application scenarios that involve sensitive data, such as multisensory extended reality, sensing, and applications for business. In such cases, applications are susceptible to data theft by malicious entities during communication. Unfortunately, the establishment and physical-digital synchronization for digital twins demand a substantial volume of communication data. Therefore, it is essential to enhance the security and privacy of communications between the physical network and its digital twin. Moreover, reinforcing defense mechanisms against potential attacks and safeguarding the confidentiality of digital twins throughout modeling and synchronization processes are also imperative. The inherent stochastic nature of generative AI presents substantial potentials for addressing the privacy and security concerns, in terms of the encryption of valuable information, as well as the generation of numerous indistinguishable synthetic data points to protect privacy."
+ },
+ {
+ "section_id": "5.3",
+ "parent_section_id": "5",
+ "section_name": "Intelligent Deployment of Network Digital Twin",
+ "text": "The forthcoming 6G network is characterized in part by extensive 3D global coverage [6 ###reference_b6###]. Therefore, 6G wireless network digital twins will be deployed in either a distributed manner, at the end-user devices, or centrally within a cloud infrastructure. While distributed deployment offers the benefit of reduced synchronization latency, it suffers from limited computational capacity and increased management complexity, particularly for compute-intensive generative models. On the other hand, a centralized deployment leverages high-performance computing and storage resources, at the cost of higher latency and weakened reliability. Consequently, a comprehensive and quantitative investigation is necessary to assess the pros and cons across various deployment strategies. It is a promising avenue for researchers to explore trade-offs between various deployment methods. In addition, the lack of standardized assessment criteria for network digital twins poses significant challenges in this domain, along with the difficulty in obtaining high-quality datasets for deploying network digital twins."
+ },
+ {
+ "section_id": "6",
+ "parent_section_id": null,
+ "section_name": "VI Conclusion",
+ "text": "The digital twin technology holds substantial potential for the emulation, evaluation, and optimization of wireless networks. The forthcoming 6G era presents new challenges for wireless network digital twins. This study carries out a prospective analysis of key requirements for 6G wireless network digital twins and leverages cutting-edge generative AI technologies to meet the demands. We also propose a hierarchical generative AI-enabled network digital twin along with a typical use case to showcase the distinct benefits of generative AI-enabled digital twins. Potential future directions for generative AI-enabled wireless network digital twins are also discussed."
+ }
+ ],
+ "appendix": [],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S2.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span><span class=\"ltx_text\" id=\"S2.T1.2.1\" style=\"color:#000000;\">Digital Twins of Different Wireless Network Elements</span></figcaption>\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S2.T1.3\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S2.T1.3.1.1\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T1.3.1.1.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.1.1.1.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.1.1.1.1.1\" style=\"width:56.9pt;\">Network element</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S2.T1.3.1.1.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.1.1.2.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.1.1.2.1.1\" style=\"width:79.7pt;\">Modeling strategy</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S2.T1.3.1.1.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.1.1.3.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.1.1.3.1.1\" style=\"width:108.1pt;\">Enabling technology</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S2.T1.3.1.1.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.1.1.4.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.1.1.4.1.1\" style=\"width:113.8pt;\">Objectives</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S2.T1.3.1.1.5\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.1.1.5.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.1.1.5.1.1\" style=\"width:85.4pt;\">Reference</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.3.2.2\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T1.3.2.2.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.2.2.1.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.2.2.1.1.1\" style=\"width:56.9pt;\"><span class=\"ltx_text\" id=\"S2.T1.3.2.2.1.1.1.1\">Core network</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S2.T1.3.2.2.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.2.2.2.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.2.2.2.1.1\" style=\"width:79.7pt;\"><span class=\"ltx_text\" id=\"S2.T1.3.2.2.2.1.1.1\">Model-driven</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S2.T1.3.2.2.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.2.2.3.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.2.2.3.1.1\" style=\"width:108.1pt;\"><span class=\"ltx_text\" id=\"S2.T1.3.2.2.3.1.1.1\">Protocol-oriented programming</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S2.T1.3.2.2.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block 
ltx_align_top\" id=\"S2.T1.3.2.2.4.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.2.2.4.1.1\" style=\"width:113.8pt;\">Testbed, risk control, real-time optimization</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S2.T1.3.2.2.5\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.2.2.5.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.2.2.5.1.1\" style=\"width:85.4pt;\"><span class=\"ltx_text\" id=\"S2.T1.3.2.2.5.1.1.1\">M. S. Rodrigo <em class=\"ltx_emph ltx_font_italic\" id=\"S2.T1.3.2.2.5.1.1.1.1\">et al.</em> <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17451v3#bib.bib3\" title=\"\">3</a>]</cite></span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.3.3.3\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.3.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.3.3.1.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.3.3.1.1.1\" style=\"width:56.9pt;\"><span class=\"ltx_text\" id=\"S2.T1.3.3.3.1.1.1.1\">RAN</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.3.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.3.3.2.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.3.3.2.1.1\" style=\"width:79.7pt;\"><span class=\"ltx_text\" id=\"S2.T1.3.3.3.2.1.1.1\">Model-driven</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.3.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.3.3.3.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.3.3.3.1.1\" style=\"width:108.1pt;\"><span class=\"ltx_text\" id=\"S2.T1.3.3.3.3.1.1.1\">Simulation program</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.3.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.3.3.4.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.3.3.4.1.1\" style=\"width:113.8pt;\">Optimal capacity-sharing for network slicing</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.3.5\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.3.3.5.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.3.3.5.1.1\" style=\"width:85.4pt;\"><span class=\"ltx_text\" id=\"S2.T1.3.3.3.5.1.1.1\">I. 
Vil\u00e0 <em class=\"ltx_emph ltx_font_italic\" id=\"S2.T1.3.3.3.5.1.1.1.1\">et al.</em> 2023</span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.3.4.4\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r\" id=\"S2.T1.3.4.4.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.4.4.1.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.4.4.1.1.1\" style=\"width:56.9pt;\"></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S2.T1.3.4.4.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.4.4.2.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.4.4.2.1.1\" style=\"width:79.7pt;\"><span class=\"ltx_text\" id=\"S2.T1.3.4.4.2.1.1.1\">Data-driven</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S2.T1.3.4.4.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.4.4.3.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.4.4.3.1.1\" style=\"width:108.1pt;\"><span class=\"ltx_text\" id=\"S2.T1.3.4.4.3.1.1.1\">Deep neural network</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S2.T1.3.4.4.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.4.4.4.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.4.4.4.1.1\" style=\"width:113.8pt;\">Resource management in networks slicing</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S2.T1.3.4.4.5\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.4.4.5.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.4.4.5.1.1\" style=\"width:85.4pt;\"><span class=\"ltx_text\" id=\"S2.T1.3.4.4.5.1.1.1\">Z. 
Zhang <em class=\"ltx_emph ltx_font_italic\" id=\"S2.T1.3.4.4.5.1.1.1.1\">et al.</em> <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17451v3#bib.bib4\" title=\"\">4</a>]</cite></span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.3.5.5\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T1.3.5.5.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.5.5.1.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.5.5.1.1.1\" style=\"width:56.9pt;\"><span class=\"ltx_text\" id=\"S2.T1.3.5.5.1.1.1.1\">Topology</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S2.T1.3.5.5.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.5.5.2.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.5.5.2.1.1\" style=\"width:79.7pt;\">Model-driven</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S2.T1.3.5.5.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.5.5.3.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.5.5.3.1.1\" style=\"width:108.1pt;\">Outdated model discovery</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S2.T1.3.5.5.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.5.5.4.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.5.5.4.1.1\" style=\"width:113.8pt;\">Spatial-temporal load balance</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S2.T1.3.5.5.5\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.5.5.5.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.5.5.5.1.1\" style=\"width:85.4pt;\">P. 
Jia <em class=\"ltx_emph ltx_centering ltx_font_italic\" id=\"S2.T1.3.5.5.5.1.1.1\">et al.</em> <cite class=\"ltx_cite ltx_centering ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17451v3#bib.bib11\" title=\"\">11</a>]</cite></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.3.6.6\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r\" id=\"S2.T1.3.6.6.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.6.6.1.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.6.6.1.1.1\" style=\"width:56.9pt;\"></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S2.T1.3.6.6.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.6.6.2.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.6.6.2.1.1\" style=\"width:79.7pt;\">Data-driven</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S2.T1.3.6.6.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.6.6.3.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.6.6.3.1.1\" style=\"width:108.1pt;\">Graph neural network</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S2.T1.3.6.6.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.6.6.4.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.6.6.4.1.1\" style=\"width:113.8pt;\">End-to-end latency prediction</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S2.T1.3.6.6.5\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.6.6.5.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.6.6.5.1.1\" style=\"width:85.4pt;\">H. 
Wang <em class=\"ltx_emph ltx_centering ltx_font_italic\" id=\"S2.T1.3.6.6.5.1.1.1\">et al.</em> <cite class=\"ltx_cite ltx_centering ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17451v3#bib.bib5\" title=\"\">5</a>]</cite></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.3.7.7\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T1.3.7.7.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.7.7.1.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.7.7.1.1.1\" style=\"width:56.9pt;\"><span class=\"ltx_text\" id=\"S2.T1.3.7.7.1.1.1.1\">Behavior</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S2.T1.3.7.7.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.7.7.2.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.7.7.2.1.1\" style=\"width:79.7pt;\"><span class=\"ltx_text\" id=\"S2.T1.3.7.7.2.1.1.1\">Data-driven</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S2.T1.3.7.7.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.7.7.3.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.7.7.3.1.1\" style=\"width:108.1pt;\"><span class=\"ltx_text\" id=\"S2.T1.3.7.7.3.1.1.1\">Bayesian neural network</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S2.T1.3.7.7.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.7.7.4.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.7.7.4.1.1\" style=\"width:113.8pt;\">Traffic prediction, anomaly detection, data collection</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S2.T1.3.7.7.5\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.7.7.5.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.7.7.5.1.1\" style=\"width:85.4pt;\"><span class=\"ltx_text\" id=\"S2.T1.3.7.7.5.1.1.1\">C. 
Ruah <em class=\"ltx_emph ltx_font_italic\" id=\"S2.T1.3.7.7.5.1.1.1.1\">et al.</em> <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17451v3#bib.bib10\" title=\"\">10</a>]</cite></span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.3.8.8\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_l ltx_border_r\" id=\"S2.T1.3.8.8.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.8.8.1.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.8.8.1.1.1\" style=\"width:56.9pt;\"></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_r\" id=\"S2.T1.3.8.8.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.8.8.2.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.8.8.2.1.1\" style=\"width:79.7pt;\"></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_r ltx_border_t\" id=\"S2.T1.3.8.8.3\" rowspan=\"2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.8.8.3.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.8.8.3.1.1\" style=\"width:108.1pt;\"><span class=\"ltx_text\" id=\"S2.T1.3.8.8.3.1.1.1\">Recurrent neural network</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_r ltx_border_t\" id=\"S2.T1.3.8.8.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.8.8.4.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.8.8.4.1.1\" style=\"width:113.8pt;\">Cache monitoring, packet positioning</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_r ltx_border_t\" id=\"S2.T1.3.8.8.5\" rowspan=\"2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.3.8.8.5.1\">\n<span class=\"ltx_p\" id=\"S2.T1.3.8.8.5.1.1\" style=\"width:85.4pt;\"><span class=\"ltx_text\" id=\"S2.T1.3.8.8.5.1.1.1\">G. Lin <em class=\"ltx_emph ltx_font_italic\" id=\"S2.T1.3.8.8.5.1.1.1.1\">et al.</em> 2022</span></span>\n</span>\n</td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "TABLE I: Digital Twins of Different Wireless Network Elements"
+ }
+ },
+ "image_paths": {
+ "1": {
+ "figure_path": "2311.17451v3_figure_1.png",
+ "caption": "Figure 1: Digital twin solutions for wireless network.",
+ "url": "http://arxiv.org/html/2311.17451v3/x1.png"
+ },
+ "2": {
+ "figure_path": "2311.17451v3_figure_2.png",
+ "caption": "Figure 2: Generative AI in 6G wireless network digital twin.",
+ "url": "http://arxiv.org/html/2311.17451v3/x2.png"
+ },
+ "3": {
+ "figure_path": "2311.17451v3_figure_3.png",
+ "caption": "Figure 3: A framework of hierarchical generative AI-enabled wireless network digital twin.",
+ "url": "http://arxiv.org/html/2311.17451v3/x3.png"
+ },
+ "4(a)": {
+ "figure_path": "2311.17451v3_figure_4(a).png",
+ "caption": "(a) Message prediction performance\nFigure 4: Performance comparison for wireless network digital twins",
+ "url": "http://arxiv.org/html/2311.17451v3/x4.png"
+ },
+ "4(b)": {
+ "figure_path": "2311.17451v3_figure_4(b).png",
+ "caption": "(b) Policy optimization performance\nFigure 4: Performance comparison for wireless network digital twins",
+ "url": "http://arxiv.org/html/2311.17451v3/x5.png"
+ }
+ },
+ "validation": true,
+ "references": [],
+ "url": "http://arxiv.org/html/2311.17451v3"
+ }
20240620/2311.17541v3.json ADDED
The diff for this file is too large to render. See raw diff