yilunzhao committed on
Commit c1b7d8d · verified · 1 Parent(s): 9d61c48

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes. See the raw diff for the full changeset.
Files changed (50)
  1. 20240819/2108.00452v10.json +0 -0
  2. 20240819/2202.13088v2.json +395 -0
  3. 20240819/2206.00794v2.json +0 -0
  4. 20240819/2211.12203v3.json +393 -0
  5. 20240819/2305.15897v4.json +0 -0
  6. 20240819/2306.02802v2.json +298 -0
  7. 20240819/2307.13269v3.json +0 -0
  8. 20240819/2308.07922v3.json +0 -0
  9. 20240819/2310.06824v3.json +548 -0
  10. 20240819/2310.12375v2.json +680 -0
  11. 20240819/2311.04061v2.json +470 -0
  12. 20240819/2312.10680v2.json +0 -0
  13. 20240819/2401.12508v2.json +632 -0
  14. 20240819/2402.05642v3.json +131 -0
  15. 20240819/2403.01888v3.json +437 -0
  16. 20240819/2403.02889v3.json +0 -0
  17. 20240819/2403.04484v2.json +89 -0
  18. 20240819/2403.06906v3.json +0 -0
  19. 20240819/2403.07162v3.json +397 -0
  20. 20240819/2403.13780v2.json +0 -0
  21. 20240819/2403.17111v2.json +217 -0
  22. 20240819/2404.06599v3.json +0 -0
  23. 20240819/2404.06913v3.json +0 -0
  24. 20240819/2405.10308v4.json +0 -0
  25. 20240819/2405.11389v2.json +0 -0
  26. 20240819/2405.14137v2.json +111 -0
  27. 20240819/2405.14893v2.json +0 -0
  28. 20240819/2405.18523v2.json +0 -0
  29. 20240819/2405.20602v2.json +0 -0
  30. 20240819/2406.04920v2.json +0 -0
  31. 20240819/2406.05913v2.json +109 -0
  32. 20240819/2406.14176v3.json +192 -0
  33. 20240819/2406.14192v2.json +0 -0
  34. 20240819/2407.02337v2.json +439 -0
  35. 20240819/2407.03219v2.json +58 -0
  36. 20240819/2407.05976v2.json +0 -0
  37. 20240819/2407.09271v2.json +0 -0
  38. 20240819/2407.10907v2.json +406 -0
  39. 20240819/2407.15871v3.json +511 -0
  40. 20240819/2407.19156v2.json +0 -0
  41. 20240819/2408.03837v3.json +0 -0
  42. 20240819/2408.08376v2.json +658 -0
  43. 20240819/2408.08869v2.json +512 -0
  44. 20240819/2408.09642v1.json +419 -0
  45. 20240819/2408.09676v1.json +0 -0
  46. 20240819/2408.09683v1.json +0 -0
  47. 20240819/2408.09687v1.json +451 -0
  48. 20240819/2408.09694v1.json +149 -0
  49. 20240819/2408.09695v1.json +0 -0
  50. 20240819/2408.09699v1.json +560 -0
20240819/2108.00452v10.json ADDED
The diff for this file is too large to render. See raw diff
 
20240819/2202.13088v2.json ADDED
@@ -0,0 +1,395 @@
1
+ {
2
+ "title": "Almost Tight Approximation Hardness for Single-Source Directed k-Edge-Connectivity",
3
+ "abstract": "In the -connected directed Steiner tree problem (-DST), we are given an -vertex directed graph with edge costs, a connectivity requirement , a root and a set of terminals . The goal is to find a minimum-cost subgraph that has internally disjoint paths from the root vertex to every terminal .\nThe problem is -hard, and inapproximability results are known in several parameters, e.g., hardness in terms of : -hardness for [Halperin and Krauthgamer, STOC\u201903], -hardness for general case [Cheriyan, Laekhanukit, Naves and Vetta, SODA\u201912], hardness in terms of [Cheriyan et al., SODA\u201912; Laekhanukit, SODA\u201914; Manurangsi, IPL\u201919] and hardness in terms of [Laekhanukit, SODA\u201914].",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Fault-Tolerant and Survival Network Design have been an active area of research for decades as enterprises depend more on communication networks and distributed computing. The need to design a network that can operate without disruption when one or more components fail has been growing dramatically.\nHenceforth, network scientists have formulated many models to address these problems. Amongst them, the simplest and arguably most fundamental problem in the area is the minimum-cost -outconnected spanning subgraph (-OCSS) problem that captures the problem of designing a multi-casting network with survivability property. The -OCSS problem is a generalization of the minimum spanning tree and the minimum-cost arborescence problems, where the goal is to design a network that can operate under failures of at most points. More formally, -OCSS asks to find a minimum-cost subgraph such that the root vertex is -connected to every other vertex.\nIn this paper, we study the analog of -OCSS in the presence of Steiner vertices, namely the -connected directed Steiner tree problem (-DST): Given a directed graph with cost on arcs, a root vertex and a set of terminals , the goal is to find a minimum-cost subgraph such that has internally disjoint paths from the root to every terminal , i.e., the root remains connected to every terminal even after the removal of vertices (or arcs).\nThe -DST problem is a natural generalization of the classical directed Steiner tree problem (DST) to high connectivity settings.\nThe undirected counterpart of -DST is the minimum-cost single-source -(vertex)-connected subgraph problem, which admits an -approximation algorithm [Nut12 ###reference_bx39###], and the edge-connectivity variant admits a factor-two approximation algorithm due to Jain [Jai01 ###reference_bx31###].\nThe -DST problem, on the other hand, has no non-trivial approximation algorithm for , except for the special case of -layered graph, which admits -approximation algorithm due to Laekhanukit [Lae16 ###reference_bx36###].\nThe cases of and are also notorious problems themselves, as both admit polylogarithmic approximation algorithms that run in quasi-polynomial time, but no polynomial-time approximation algorithms with sub-polynomial approximation. It has been long-standing open problems whether such algorithms exist for DST and -DST.\nWe answer the questions regarding the approximability of -DST negatively.\nFirst, we show an approximation hardness of for -DST under , which holds when is much larger than , thus implying that a trivial -approximation algorithm for the problem is tight up to the lower order term.\nFor , unless , it is hard to approximate the -DST problem to within a factor of .\nAssuming the Strongish Planted Clique Hypothesis (SPCH) [MRS21 ###reference_bx38###], our hardness result is tight up to a constant factor, and it, indeed, rules out -time -approximation algorithm for any function depending only on . 
See discussion in Section B.1 ###reference_###.\nAssuming the Strongish Planted Clique Hypothesis, there is no -time -approximation algorithm for the -DST problem.\nNext, we show that the -DST admits no -approximation algorithm even on an -layered graph, which consists of parts, called layers, and every arc joins a vertex from the -th layer to the -th layer.\nIt is hard to approximate the -DST problem on -layered graphs for to within a factor of for any constant , unless .\nIn addition, we obtain an approximation hardness exponential in by setting a different parameter in the reduction, which improves upon the previously known approximation hardness of due to Manurangsi [Man19 ###reference_bx37###] (which is in turn based on the two previous results [Lae14 ###reference_bx35###, CLNV14 ###reference_bx11###]), and is the first known approximation hardness for connectivity problems whose ratio is exponential in the connectivity requirement.\nIt is hard to approximate the -DST problem to within a factor of , unless .\nUsing the technique of Cheriyan, Laekhanukit, Naves and Vetta [CLNV14 ###reference_bx11###], which is based on the padding technique introduced by Kortsarz, Krauthgamer and Lee [KKL04 ###reference_bx32###], we extend our hardness result to the undirected counterpart of -DST, namely, the single source -vertex-connected Steiner tree problem (-ST) (a.k.a. undirected rooted subset -connectivity, shorty, rooted--VC) and the special case of -DST, namely -edge-connected group Steiner tree problem (-GST).\nThe latter problem is a natural fault-tolerant generalization of the classical group Steiner tree problem [GKR00 ###reference_bx21###], which has been studied in [KKN12 ###reference_bx33###, GKR10 ###reference_bx22###, CGL15 ###reference_bx9###, CDE+18 ###reference_bx5###].\nTo the best of our knowledge, a non-trivial approximation algorithm for this problem is known only for . For , only a bicriteria approximation algorithm, where the connectivity requirement can be dropped by a factor , is known in [CGL15 ###reference_bx9###]. Nevertheless, a trivial -approximation algorithm exists for all values of and we also show its tightness (up to the lower order term) for sufficiently large .\nFor , unless , it is hard to approximate the -ST problem to within a factor of .\nFor , unless , it is hard to approximate the -GST problem to within a factor of , where is the number of groups."
10
+ },
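The feasibility condition in the definition above (k internally disjoint paths from the root to each terminal) can be checked with max-flow via the standard vertex-splitting construction. The following is a minimal sketch, not from the paper, assuming networkx is available; all names are illustrative.

```python
import networkx as nx

def internally_disjoint_paths(G: nx.DiGraph, r, t) -> int:
    """Maximum number of internally vertex-disjoint r->t paths in G,
    via max-flow: split each vertex v into (v,'in')->(v,'out') with
    capacity 1 (effectively unbounded for r and t), give arcs unbounded
    capacity, then a max r->t flow equals the path count (Menger)."""
    big = G.number_of_nodes() + 1  # effectively infinite capacity
    H = nx.DiGraph()
    for v in G.nodes:
        cap = big if v in (r, t) else 1
        H.add_edge((v, "in"), (v, "out"), capacity=cap)
    for u, v in G.edges:
        H.add_edge((u, "out"), (v, "in"), capacity=big)
    value, _ = nx.maximum_flow(H, (r, "in"), (t, "out"))
    return value

# A candidate subgraph H_sol is feasible for k-DST iff
# internally_disjoint_paths(H_sol, r, t) >= k for every terminal t.
```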
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Preliminaries",
15
+ "text": "We use a standard graph terminology.\nLet be any graph, which can be either directed or undirected.\nFor undirected graphs, we refer to the elements in as the \u201cedges\u201d of and denote by the number of edges incident to a vertex .\nFor directed graphs, we refer to the elements in as the \u201carcs\u201d of and denote by the number of arcs entering .\nThe notation for an edge/arc is , or sometimes for an arc.\nFor a path between vertex and , we call it a -path and write it as for both directed and undirected graphs, or for only directed graphs.\nThe graphs may have multiple edges/arcs between two same vertices and , and both and count multiple ones.\nWe drop from the notations when it is clear from the context.\nWhen more than one graph is considered, we use to clarify the vertex set of , and the edge/arc set."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Overview of the Reductions",
21
+ "text": "To give some intuitions on how our reductions work, we dedicate this section to providing an overview. We have two main reductions, which are tailored for inapproximability results in different parameters, say and .\nBoth of the reductions inherit approximation hardness from the same source \u2013 the label cover problem, denoted by . We design reductions that have a one-to-one correspondence between a feasible solution to the label cover problem and that to the -DST problem, i.e.,\nCompleteness: Given a feasible multilabeling of the label cover instance , there is a corresponding -connected subgraph of such that .\nSoundness: Given a -connected subgraph of the -DST instance, there is a corresponding feasible multilabeling of the label cover instance such that ."
22
+ },
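In symbols, the intended correspondence can plausibly be written as follows; this is a hedged reconstruction, with cost(.) denoting solution cost on each side and L denoting the label cover instance.

```latex
\text{Completeness: } \sigma \text{ feasible for } \mathcal{L}
  \;\Longrightarrow\; \exists\, H \subseteq G \text{ feasible for } k\text{-DST with }
  \operatorname{cost}(H) \le \operatorname{cost}(\sigma).
\qquad
\text{Soundness: } H \text{ feasible for } k\text{-DST}
  \;\Longrightarrow\; \exists\, \sigma \text{ feasible for } \mathcal{L} \text{ with }
  \operatorname{cost}(\sigma) \le \operatorname{cost}(H).
```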
23
+ {
24
+ "section_id": "4",
25
+ "parent_section_id": null,
26
+ "section_name": "Inapproximability in Terms of the Number of Terminals",
27
+ "text": "In this section, we discuss the hardness reduction that is tailored for the parameter .\nOur reduction takes as input a label cover instance and then produces a -DST instance as an output.\nThe reduction runs in polynomial-time, and there is a one-to-one mapping between the solutions to the two problems.\nThus, the inapproximability result of label cover is mapped to the inapproximability of -DST directly. The main focus in this section is in reducing the number of terminals by exploiting edge-disjoint paths."
28
+ },
29
+ {
30
+ "section_id": "5",
31
+ "parent_section_id": null,
32
+ "section_name": "Inapproximability in Terms of the Connectivity Requirement",
33
+ "text": "This section presents a hardness reduction, which is tailored for the approximation hardness in terms of the connectivity requirement .\nOur reduction again takes a label cover instance as an input and produces a -DST instance .\nAs we wish to obtain an inapproximability in terms of , the main focus is on controlling the size of ."
34
+ },
35
+ {
36
+ "section_id": "6",
37
+ "parent_section_id": null,
38
+ "section_name": "Inapproximability for -GST",
39
+ "text": "In this section, we consider the -edge-connected group Steiner tree (-GST) problem.\nAn instance of the problem is a tuple where is the connectivity requirement, is an undirected graph with edge weight (or cost) , is the root and \nare groups of vertices.\nThe goal is to find a subgraph of minimum cost such that for each group there are edge-disjoint paths in from to .\nWe reduce a label cover instance to a -GST instance in polynomial time.\nFor the ease of presentation, assume that each group has its own connectivity requirement , i.e., only edge-disjoint paths from to are required.\nThis non-uniform version can be reduced to the uniform version by adding zero-cost edges to an arbitrary vertex in ."
40
+ }
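The last reduction step (non-uniform to uniform requirements) is simple enough to sketch in code. The text does not specify the other endpoint of the added zero-cost edges; the sketch below assumes they are parallel edges from the root, which supplies the k - k_i missing edge-disjoint paths at no cost.

```python
import networkx as nx

def uniformize_kgst(G: nx.MultiGraph, root, groups, reqs, k):
    """Pad a non-uniform k-GST instance so every group needs exactly k
    edge-disjoint root-to-group paths: for each group S_i with k_i < k,
    add k - k_i parallel zero-cost edges from the root to an arbitrary
    vertex of S_i (assumed endpoint choice)."""
    H = G.copy()
    for S_i, k_i in zip(groups, reqs):
        v = next(iter(S_i))  # arbitrary vertex of the group
        for _ in range(k - k_i):
            H.add_edge(root, v, weight=0)
    return H
```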
41
+ ],
42
+ "appendix": [
43
+ {
44
+ "section_id": "Appendix 1",
45
+ "parent_section_id": null,
46
+ "section_name": "Appendix A Inapproximability for -ST",
47
+ "text": "There is a natural variant of -DST where undirected graphs are considered.\nIn this case, the edge/vertex-disjoint versions are no longer equivalent to the two versions of -DST.\nJain [Jai01 ###reference_bx31###] gave a -approximation algorithm for the edge-disjoint version while the vertex-disjoint case is at least as hard as the label cover problem, which admits no -approximation algorithm for any , unless .\nHere we consider the vertex-disjoint version, namely the single-source -vertex-connected Steiner tree problem (-ST), formally defined as follows.\nAn input instance is of the form where is the connectivity requirement, is a weighted undirected graph with a weight (or cost) function , the vertex is called root and is a set of terminals.\nThe problem is to find a subgraph of minimum cost defined by such that there are openly vertex-disjoint paths in from to the terminal for each .\nWe give a reduction from the label cover instance to a -ST instance .\nThe construction is similar to that for -DST, with some necessary adaptions."
48
+ },
49
+ {
50
+ "section_id": "Appendix 2",
51
+ "parent_section_id": null,
52
+ "section_name": "Appendix B Hardness under Strongish Planted Clique Hypothesis",
53
+ "text": "In this section, we discuss the hardness of -DST under the Strongish Planted Clique Hypothesis (SPCH), which asserts that there exists no -time approximation algorithm that solves the planted -clique problem. Note that here we use to mean the size of a subgraph rather than the connectivity requirement in the -DST problem.\nTo be formal, the planted -clique problem asks an algorithm to distinguish between the two cases of -vertex graphs: (1) a uniform random graph, and (2) a uniform random graph with an added -clique. The SPCH asserts that there exists no bounded-error probabilistic polynomial time algorithm that can distinguish the two cases in -time.\nUnder this complexity assumption, Manurangsi, Rubinstein and Schramm showed that a -CSP, particularly, the densest -subgraph problem (DS) admits no polynomial-time -approximation algorithm.\nTo be precise, in the DS problem, we are given a graph and an integer . The goal is to find a subset of vertices that spans the maximum number of edges. The following theorem was proved in [MRS21 ###reference_bx38###].\nAssuming the Strongish Planted Clique Hypothesis, there is no -time algorithm that can approximate the densest -subgraph problem on -vertex graphs to within a factor for any function depending only on . Furthermore, this holds even in the perfect completeness case where the input graph is promised to contain a -clique.\nWe will prove the following statement in Section B.1 ###reference_###, which gives an inapproximability result under SPCH for the (minimum) label cover problem with relation constraints. While this is not the variant of the label cover instance we defined earlier, it does not affect our hardness result presented in Section 4 ###reference_###.\nAssuming the Strongish Planted Clique Hypothesis, there is no -time algorithm that can approximate a label cover instance of size on a -complete bipartite graph to within a factor for any function depending only on . Furthermore, this holds even in the perfect completeness case where the input graph is promised to have a multilabeling of cost that satisfies all the constraints. In particular, there exists no FPT-approximation algorithm for the (minimum) label-cover problem parameterized by the number of vertices."
54
+ }
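For intuition, the densest k-subgraph objective referenced above is easy to state in code; the exhaustive search below (illustrative only, not from the paper) runs in roughly n^k time, which is the regime the SPCH-based lower bound says cannot be substantially beaten while still guaranteeing an o(k) approximation factor.

```python
import itertools
import networkx as nx

def densest_k_subgraph_bruteforce(G: nx.Graph, k: int):
    """Exact densest k-subgraph by trying all k-subsets of vertices."""
    best_set, best_edges = None, -1
    for S in itertools.combinations(G.nodes, k):
        m = G.subgraph(S).number_of_edges()
        if m > best_edges:
            best_set, best_edges = set(S), m
    return best_set, best_edges
```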
55
+ ],
56
+ "tables": {
57
+ "1": {
58
+ "table_html": "<figure class=\"ltx_table\" id=\"S1.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S1.T1.13\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S1.T1.13.14.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r\" id=\"S1.T1.13.14.1.1\">Parameter</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r\" id=\"S1.T1.13.14.1.2\">Lower Bound</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r\" id=\"S1.T1.13.14.1.3\">Lower Bound</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column\" id=\"S1.T1.13.14.1.4\">Upper Bound</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.13.15.2\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_border_r\" id=\"S1.T1.13.15.2.1\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r\" id=\"S1.T1.13.15.2.2\">(This paper)</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r\" id=\"S1.T1.13.15.2.3\">(Previous)</th>\n<th class=\"ltx_td ltx_th ltx_th_column\" id=\"S1.T1.13.15.2.4\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S1.T1.4.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S1.T1.1.1.1\">Connectivity \n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S1.T1.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S1.T1.3.3.3\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S1.T1.4.4.4\">unknown for general \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.9.9\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S1.T1.6.6.2\">Connectivity , Depth \n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S1.T1.7.7.3\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S1.T1.8.8.4\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S1.T1.9.9.5\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.13.13\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S1.T1.10.10.1\">Terminals \n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S1.T1.11.11.2\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S1.T1.12.12.3\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S1.T1.13.13.4\"></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Summary of the results for -DST</figcaption>\n</figure>",
59
+ "capture": "Table 1: Summary of the results for -DST"
60
+ }
61
+ },
62
+ "image_paths": {},
63
+ "validation": true,
64
+ "references": [
65
+ {
66
+ "1": {
67
+ "title": "Probabilistic approximations of metric spaces and its algorithmic\napplications.",
68
+ "author": "Yair Bartal.",
69
+ "venue": "In 37th Annual Symposium on Foundations of Computer Science,\nFOCS \u201996, Burlington, Vermont, USA, 14-16 October, 1996, pages 184\u2013193.\nIEEE Computer Society, 1996.",
70
+ "url": null
71
+ }
72
+ },
73
+ {
74
+ "2": {
75
+ "title": "Steiner tree approximation via iterative randomized rounding.",
76
+ "author": "Jaros\u0142aw Byrka, Fabrizio Grandoni, Thomas Rothvoss, and Laura Sanit\u00e0.",
77
+ "venue": "J. ACM, 60(1), February 2013.",
78
+ "url": null
79
+ }
80
+ },
81
+ {
82
+ "3": {
83
+ "title": "Approximation algorithms for directed steiner problems.",
84
+ "author": "Moses Charikar, Chandra Chekuri, To-Yat Cheung, Zuo Dai, Ashish Goel, Sudipto\nGuha, and Ming Li.",
85
+ "venue": "Journal of Algorithms, 33(1):73\u201391, 1999.",
86
+ "url": null
87
+ }
88
+ },
89
+ {
90
+ "4": {
91
+ "title": "Rounding via trees: Deterministic approximation algorithms for group\nsteiner trees and k-median.",
92
+ "author": "Moses Charikar, Chandra Chekuri, Ashish Goel, and Sudipto Guha.",
93
+ "venue": "In Jeffrey Scott Vitter, editor, Proceedings of the Thirtieth\nAnnual ACM Symposium on the Theory of Computing, Dallas, Texas, USA, May\n23-26, 1998, pages 114\u2013123. ACM, 1998.",
94
+ "url": null
95
+ }
96
+ },
97
+ {
98
+ "5": {
99
+ "title": "Survivable network design for group connectivity in low-treewidth\ngraphs.",
100
+ "author": "Parinya Chalermsook, Syamantak Das, Guy Even, Bundit Laekhanukit, and Daniel\nVaz.",
101
+ "venue": "In Eric Blais, Klaus Jansen, Jos\u00e9 D. P. Rolim, and David\nSteurer, editors, Approximation, Randomization, and Combinatorial\nOptimization. Algorithms and Techniques, APPROX/RANDOM 2018, August 20-22,\n2018 - Princeton, NJ, USA, volume 116 of LIPIcs, pages 8:1\u20138:19.\nSchloss Dagstuhl - Leibniz-Zentrum f\u00fcr Informatik, 2018.",
102
+ "url": null
103
+ }
104
+ },
105
+ {
106
+ "6": {
107
+ "title": "Beyond metric embedding: Approximating group steiner trees on bounded\ntreewidth graphs.",
108
+ "author": "Parinya Chalermsook, Syamantak Das, Bundit Laekhanukit, and Daniel Vaz.",
109
+ "venue": "In Philip N. Klein, editor, Proceedings of the Twenty-Eighth\nAnnual ACM-SIAM Symposium on Discrete Algorithms, SODA 2017, Barcelona,\nSpain, Hotel Porta Fira, January 16-19, pages 737\u2013751. SIAM, 2017.",
110
+ "url": null
111
+ }
112
+ },
113
+ {
114
+ "7": {
115
+ "title": "A greedy approximation algorithm for the group steiner problem.",
116
+ "author": "Chandra Chekuri, Guy Even, and Guy Kortsarz.",
117
+ "venue": "Discret. Appl. Math., 154(1):15\u201334, 2006.",
118
+ "url": null
119
+ }
120
+ },
121
+ {
122
+ "8": {
123
+ "title": "On survivable set connectivity.",
124
+ "author": "Parinya Chalermsook, Fabrizio Grandoni, and Bundit Laekhanukit.",
125
+ "venue": "In Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on\nDiscrete Algorithms, pages 25\u201336. SIAM, 2014.",
126
+ "url": null
127
+ }
128
+ },
129
+ {
130
+ "9": {
131
+ "title": "On survivable set connectivity.",
132
+ "author": "Parinya Chalermsook, Fabrizio Grandoni, and Bundit Laekhanukit.",
133
+ "venue": "In Piotr Indyk, editor, Proceedings of the Twenty-Sixth Annual\nACM-SIAM Symposium on Discrete Algorithms, SODA 2015, San Diego, CA, USA,\nJanuary 4-6, 2015, pages 25\u201336. SIAM, 2015.",
134
+ "url": null
135
+ }
136
+ },
137
+ {
138
+ "10": {
139
+ "title": "Improved approximation algorithms for label cover problems.",
140
+ "author": "Moses Charikar, MohammadTaghi Hajiaghayi, and Howard J. Karloff.",
141
+ "venue": "Algorithmica, 61(1):190\u2013206, 2011.",
142
+ "url": null
143
+ }
144
+ },
145
+ {
146
+ "11": {
147
+ "title": "Approximating rooted steiner networks.",
148
+ "author": "Joseph Cheriyan, Bundit Laekhanukit, Guyslain Naves, and Adrian Vetta.",
149
+ "venue": "ACM Transactions on Algorithms (TALG), 11(2):1\u201322, 2014.",
150
+ "url": null
151
+ }
152
+ },
153
+ {
154
+ "12": {
155
+ "title": "Polylogarithmic approximation algorithm for k-connected directed\nsteiner tree on quasi-bipartite graphs.",
156
+ "author": "Chun-Hsiang Chan, Bundit Laekhanukit, Hao-Ting Wei, and Yuhao Zhang.",
157
+ "venue": "In Approximation, Randomization, and Combinatorial Optimization.\nAlgorithms and Techniques (APPROX/RANDOM 2020), volume 176, pages\n63:1\u201363:20. Schloss Dagstuhl\u2013Leibniz-Zentrum f\u00fcr Informatik, 2020.",
158
+ "url": null
159
+ }
160
+ },
161
+ {
162
+ "13": {
163
+ "title": "Eth-hardness of approximating 2-csps and directed steiner network.",
164
+ "author": "Irit Dinur and Pasin Manurangsi.",
165
+ "venue": "In Anna R. Karlin, editor, 9th Innovations in Theoretical\nComputer Science Conference, ITCS 2018, January 11-14, 2018, Cambridge, MA,\nUSA, volume 94 of LIPIcs, pages 36:1\u201336:20. Schloss Dagstuhl -\nLeibniz-Zentrum f\u00fcr Informatik, 2018.",
166
+ "url": null
167
+ }
168
+ },
169
+ {
170
+ "14": {
171
+ "title": "Analytical approach to parallel repetition.",
172
+ "author": "Irit Dinur and David Steurer.",
173
+ "venue": "In David B. Shmoys, editor, Symposium on Theory of Computing,\nSTOC 2014, New York, NY, USA, May 31 - June 03, 2014, pages 624\u2013633.\nACM, 2014.",
174
+ "url": null
175
+ }
176
+ },
177
+ {
178
+ "15": {
179
+ "title": "A threshold of ln n for approximating set cover.",
180
+ "author": "Uriel Feige.",
181
+ "venue": "J. ACM, 45(4):634\u2013652, 1998.",
182
+ "url": null
183
+ }
184
+ },
185
+ {
186
+ "16": {
187
+ "title": "Iterative rounding 2-approximation algorithms for minimum-cost vertex\nconnectivity problems.",
188
+ "author": "Lisa Fleischer, Kamal Jain, and David P. Williamson.",
189
+ "venue": "Journal of Computer and System Sciences, 72(5):838\u2013867, 2006.",
190
+ "url": null
191
+ }
192
+ },
193
+ {
194
+ "17": {
195
+ "title": "A Logarithmic Integrality Gap Bound for Directed Steiner Tree in\nQuasi-bipartite Graphs .",
196
+ "author": "Zachary Friggstad, Jochen K\u00f6nemann, and Shadravan Mohammad.",
197
+ "venue": "In Rasmus Pagh, editor, 15th Scandinavian Symposium and\nWorkshops on Algorithm Theory (SWAT 2016), volume 53 of Leibniz\nInternational Proceedings in Informatics (LIPIcs), pages 3:1\u20133:11,\nDagstuhl, Germany, 2016. Schloss Dagstuhl\u2013Leibniz-Zentrum fuer Informatik.",
198
+ "url": null
199
+ }
200
+ },
201
+ {
202
+ "18": {
203
+ "title": "Rooted k-connections in digraphs.",
204
+ "author": "Andr\u00e1s Frank.",
205
+ "venue": "Discret. Appl. Math., 157(6):1242\u20131254, 2009.",
206
+ "url": null
207
+ }
208
+ },
209
+ {
210
+ "19": {
211
+ "title": "A tight bound on approximating arbitrary metrics by tree metrics.",
212
+ "author": "Jittat Fakcharoenphol, Satish Rao, and Kunal Talwar.",
213
+ "venue": "J. Comput. Syst. Sci., 69(3):485\u2013497, 2004.",
214
+ "url": null
215
+ }
216
+ },
217
+ {
218
+ "20": {
219
+ "title": "Generalized polymatroids and submodular flows.",
220
+ "author": "Andr\u00e1s Frank and \u00c9va Tardos.",
221
+ "venue": "Math. Program., 42(1-3):489\u2013563, 1988.",
222
+ "url": null
223
+ }
224
+ },
225
+ {
226
+ "21": {
227
+ "title": "A polylogarithmic approximation algorithm for the group steiner tree\nproblem.",
228
+ "author": "Naveen Garg, Goran Konjevod, and R. Ravi.",
229
+ "venue": "J. Algorithms, 37(1):66\u201384, 2000.",
230
+ "url": null
231
+ }
232
+ },
233
+ {
234
+ "22": {
235
+ "title": "Tree embeddings for two-edge-connected network design.",
236
+ "author": "Anupam Gupta, Ravishankar Krishnaswamy, and R. Ravi.",
237
+ "venue": "In Moses Charikar, editor, Proceedings of the Twenty-First\nAnnual ACM-SIAM Symposium on Discrete Algorithms, SODA 2010, Austin,\nTexas, USA, January 17-19, 2010, pages 1521\u20131538. SIAM, 2010.",
238
+ "url": null
239
+ }
240
+ },
241
+ {
242
+ "23": {
243
+ "title": "Surviving in directed graphs: a quasi-polynomial-time polylogarithmic\napproximation for two-connected directed steiner tree.",
244
+ "author": "Fabrizio Grandoni and Bundit Laekhanukit.",
245
+ "venue": "In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory\nof Computing, pages 420\u2013428, 2017.",
246
+ "url": null
247
+ }
248
+ },
249
+ {
250
+ "24": {
251
+ "title": "O ()-approximation algorithm for directed\nSteiner tree: a tight quasi-polynomial-time algorithm.",
252
+ "author": "Fabrizio Grandoni, Bundit Laekhanukit, and Shi Li.",
253
+ "venue": "In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory\nof Computing, pages 253\u2013264, 2019.",
254
+ "url": null
255
+ }
256
+ },
257
+ {
258
+ "25": {
259
+ "title": "Quasi-polynomial algorithms for submodular tree orienteering and\nother directed network design problems.",
260
+ "author": "Rohan Ghuge and Viswanath Nagarajan.",
261
+ "venue": "In Shuchi Chawla, editor, Proceedings of the 2020 ACM-SIAM\nSymposium on Discrete Algorithms, SODA 2020, Salt Lake City, UT, USA,\nJanuary 5-8, 2020, pages 1039\u20131048. SIAM, 2020.",
262
+ "url": null
263
+ }
264
+ },
265
+ {
266
+ "26": {
267
+ "title": "Matroids and integrality gaps for hypergraphic steiner tree\nrelaxations.",
268
+ "author": "Michel X Goemans, Neil Olver, Thomas Rothvo\u00df, and Rico Zenklusen.",
269
+ "venue": "In Proceedings of the forty-fourth annual ACM symposium on\nTheory of computing, pages 1161\u20131176. SIAM, 2012.",
270
+ "url": null
271
+ }
272
+ },
273
+ {
274
+ "27": {
275
+ "title": "Multi-rooted greedy approximation of directed steiner trees with\napplications.",
276
+ "author": "Tomoya Hibi and Toshihiro Fujito.",
277
+ "venue": "Algorithmica, 74(2):778\u2013786, 2016.",
278
+ "url": null
279
+ }
280
+ },
281
+ {
282
+ "28": {
283
+ "title": "The prize-collecting generalized steiner tree problem via a new\napproach of primal-dual schema.",
284
+ "author": "Mohammad Taghi Hajiaghayi and Kamal Jain.",
285
+ "venue": "In Proceedings of the Seventeenth Annual ACM-SIAM Symposium on\nDiscrete Algorithms, SODA 2006, Miami, Florida, USA, January 22-26, 2006,\npages 631\u2013640. ACM Press, 2006.",
286
+ "url": null
287
+ }
288
+ },
289
+ {
290
+ "29": {
291
+ "title": "Polylogarithmic inapproximability.",
292
+ "author": "Eran Halperin and Robert Krauthgamer.",
293
+ "venue": "In Lawrence L. Larmore and Michel X. Goemans, editors, Proceedings of the 35th Annual ACM Symposium on Theory of Computing, June\n9-11, 2003, San Diego, CA, USA, pages 585\u2013594. ACM, 2003.",
294
+ "url": null
295
+ }
296
+ },
297
+ {
298
+ "30": {
299
+ "title": "Integrality ratio for group steiner trees and directed steiner trees.",
300
+ "author": "Eran Halperin, Guy Kortsarz, Robert Krauthgamer, Aravind Srinivasan, and Nan\nWang.",
301
+ "venue": "SIAM J. Comput., 36(5):1494\u20131511, 2007.",
302
+ "url": null
303
+ }
304
+ },
305
+ {
306
+ "31": {
307
+ "title": "A factor 2 approximation algorithm for the generalized steiner\nnetwork problem.",
308
+ "author": "Kamal Jain.",
309
+ "venue": "Combinatorica, 21(1):39\u201360, 2001.",
310
+ "url": null
311
+ }
312
+ },
313
+ {
314
+ "32": {
315
+ "title": "Hardness of approximation for vertex-connectivity network design\nproblems.",
316
+ "author": "Guy Kortsarz, Robert Krauthgamer, and James R. Lee.",
317
+ "venue": "SIAM J. Comput., 33(3):704\u2013720, 2004.",
318
+ "url": null
319
+ }
320
+ },
321
+ {
322
+ "33": {
323
+ "title": "Approximating fault-tolerant group-steiner problems.",
324
+ "author": "Rohit Khandekar, Guy Kortsarz, and Zeev Nutov.",
325
+ "venue": "Theor. Comput. Sci., 416:55\u201364, 2012.",
326
+ "url": null
327
+ }
328
+ },
329
+ {
330
+ "34": {
331
+ "title": "Biconnectivity approximations and graph carvings.",
332
+ "author": "Samir Khuller and Uzi Vishkin.",
333
+ "venue": "J. ACM, 41(2):214\u2013235, 1994.",
334
+ "url": null
335
+ }
336
+ },
337
+ {
338
+ "35": {
339
+ "title": "Parameters of two-prover-one-round game and the hardness of\nconnectivity problems.",
340
+ "author": "Bundit Laekhanukit.",
341
+ "venue": "In Proceedings of the twenty-fifth annual ACM-SIAM symposium on\nDiscrete algorithms, pages 1626\u20131643. SIAM, 2014.",
342
+ "url": null
343
+ }
344
+ },
345
+ {
346
+ "36": {
347
+ "title": "Approximating directed steiner problems via tree embedding.",
348
+ "author": "Bundit Laekhanukit.",
349
+ "venue": "In 43rd International Colloquium on Automata, Languages, and\nProgramming (ICALP 2016). Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik,\n2016.",
350
+ "url": null
351
+ }
352
+ },
353
+ {
354
+ "37": {
355
+ "title": "A note on degree vs gap of min-rep label cover and improved\ninapproximability for connectivity problems.",
356
+ "author": "Pasin Manurangsi.",
357
+ "venue": "Information Processing Letters, 145:24\u201329, 2019.",
358
+ "url": null
359
+ }
360
+ },
361
+ {
362
+ "38": {
363
+ "title": "The strongish planted clique hypothesis and its consequences.",
364
+ "author": "Pasin Manurangsi, Aviad Rubinstein, and Tselil Schramm.",
365
+ "venue": "In James R. Lee, editor, 12th Innovations in Theoretical\nComputer Science Conference, ITCS 2021, January 6-8, 2021, Virtual\nConference, volume 185 of LIPIcs, pages 10:1\u201310:21. Schloss Dagstuhl\n- Leibniz-Zentrum f\u00fcr Informatik, 2021.",
366
+ "url": null
367
+ }
368
+ },
369
+ {
370
+ "39": {
371
+ "title": "Approximating minimum-cost connectivity problems via uncrossable\nbifamilies.",
372
+ "author": "Zeev Nutov.",
373
+ "venue": "ACM Transactions on Algorithms (TALG), 9(1):1\u201316, 2012.",
374
+ "url": null
375
+ }
376
+ },
377
+ {
378
+ "40": {
379
+ "title": "On rooted k-connectivity problems in quasi-bipartite digraphs.",
380
+ "author": "Zeev Nutov.",
381
+ "venue": "In International Computer Science Symposium in Russia, pages\n339\u2013348. Springer, 2021.",
382
+ "url": null
383
+ }
384
+ },
385
+ {
386
+ "41": {
387
+ "title": "A series of approximation algorithms for the acyclic directed steiner\ntree problem.",
388
+ "author": "Alexander Zelikovsky.",
389
+ "venue": "Algorithmica, 18(1):99\u2013110, 1997.",
390
+ "url": null
391
+ }
392
+ }
393
+ ],
394
+ "url": "http://arxiv.org/html/2202.13088v2"
395
+ }
20240819/2206.00794v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240819/2211.12203v3.json ADDED
@@ -0,0 +1,393 @@
1
+ {
2
+ "title": "Edge Multiway Cut and Node Multiway Cut Are Hard for Planar Subcubic Graphs1footnote 11footnote 1An extended abstract of this paper appeared in the proceedings of SWAT 2024 [19].",
3
+ "abstract": "It is known that the weighted version of Edge Multiway Cut (also known as Multiterminal Cut) is NP-complete on planar graphs of maximum degree . In contrast, for the unweighted version, NP-completeness is only known for planar graphs of maximum degree . In fact, the complexity of unweighted Edge Multiway Cut was open for graphs of maximum degree for over twenty years. We prove that the unweighted version is NP-complete even for planar graphs of maximum degree . As weighted Edge Multiway Cut is polynomial-time solvable for graphs of maximum degree at most , we have now closed the complexity gap. We also prove that (unweighted) Node Multiway Cut (both with and without deletable terminals) is NP-complete for planar graphs of maximum degree . By combining our results with known results, we can apply two meta-classifications on graph containment from the literature. This yields full dichotomies for all three problems on -topological-minor-free graphs and, should be finite, on -subgraph-free graphs as well.\nPreviously, such dichotomies were only implied for -minor-free graphs.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "In this paper we consider the unweighted edge and node versions of the classic Multiway Cut problem, which is one of the most central separation/clustering graph problems with applications in, for example, computer vision [3 ###reference_b3###, 6 ###reference_b6###] and multi-processor scheduling [28 ###reference_b28###].\nTo define these problems, let be a graph. For a subset of either vertices or edges of , let denote the graph obtained from after deleting all elements, either vertices (and incident edges) or edges, of .\nNow, let be a set of specified vertices that are called the terminals of . A set is an edge multiway cut for if every connected component of contains at most one vertex of . In order words, removing pairwise disconnects the terminals of ; see Figure 1 ###reference_### for an example.\nWe define the notion of a node multiway cut in the same way, but there are two versions depending on whether or not\n can contain vertices of ; see again Figure 1 ###reference_###.\nThis leads to the following three decision problems,\nwhere the second one is also known as Unrestricted Node Multiway Cut and the third one as Restricted Node Multiway Cut or Node Multiway Cut with Undeletable Terminals.\n###figure_1### ###figure_2### ###figure_3### Edge Multiway Cut\n\n\n\nInput: A graph , a set of terminals and an integer .\nQuestion: Does have an edge multiway cut of size at most ?\nNode Multiway Cut with Deletable Terminals\n\n\n\nInput: A graph , a set of terminals and an integer .\nQuestion: Does have a node multiway cut of size at most ?\nNode Multiway Cut\n\n\n\nInput: A graph , a set of terminals and an integer .\nQuestion: Does have a node multiway cut of size at most ?\nIn Weighted Edge Multiway Cut, we are given a function . The goal is to decide if admits an edge multiway cut of total weight at most . If , then we obtain Edge Multiway Cut.\nSimilarly, we can define weighted variants of both versions of Node Multiway Cut with respect to a node weight function .\nThe above problems have been studied extensively; see, for example, [2 ###reference_b2###, 8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###, 15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###, 20 ###reference_b20###, 21 ###reference_b21###, 23 ###reference_b23###, 24 ###reference_b24###]. The problems can be thought of as the natural dual problems of the Steiner Tree problem.\nIn their famous study of Edge Multiway Cut, Dahlhaus et al. [13 ###reference_b13###] showed that it is NP-complete even if the set of terminals has size . Garg et al. [16 ###reference_b16###] showed the same for Node Multiway Cut.\nWe note that this is a tight result: if , then both problems reduce to the Minimum Cut problem. The latter problem can be modelled as a maximum flow problem, and hence is well known to be solvable in polynomial time [14 ###reference_b14###].\nNote that Node Multiway Cut with Deletable Terminals is trivially polynomial-time solvable for any fixed ."
10
+ },
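The definition of an edge multiway cut translates directly into a verifier. A minimal sketch, not from the paper, assuming networkx:

```python
import networkx as nx

def is_edge_multiway_cut(G: nx.Graph, terminals, S) -> bool:
    """S is an edge multiway cut for (G, T) iff every connected
    component of G - S contains at most one terminal."""
    H = G.copy()
    H.remove_edges_from(S)
    T = set(terminals)
    return all(len(comp & T) <= 1 for comp in nx.connected_components(H))

# For |T| = 2 the problem degenerates to Minimum Cut, e.g.
# nx.minimum_edge_cut(G, t1, t2), solvable via max-flow in
# polynomial time, as noted above.
```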
11
+ {
12
+ "section_id": "1.1",
13
+ "parent_section_id": "1",
14
+ "section_name": "Our Results",
15
+ "text": "The following three results fully answer our research question.\nEdge Multiway Cut is NP-complete for planar subcubic graphs.\nNode Multiway Cut is NP-complete for planar subcubic graphs.\nNode Multiway Cut with Deletable Terminals is NP-complete for planar subcubic graphs.\nWe prove Theorem 1.1 ###reference_theorem1### in Section 2 ###reference_###; Theorem 1.2 ###reference_theorem2### in Section 3 ###reference_###; and Theorems 1.3 ###reference_theorem3### in Section 4 ###reference_###.\nIn spirit, our construction for Edge Multiway Cut in Theorem 1.1 ###reference_theorem1### is similar to the one by Dahlhaus et al. [13 ###reference_b13###] for graphs of maximum degree . For non-terminal vertices of high degree, a local replacement by a (sub)cubic graph is relatively easy. However, for terminal vertices of high degree, a local replacement strategy seems impossible. Hence, the fact that terminals in the construction of Dahlhaus et al. [13 ###reference_b13###] can have degree up to becomes a crucial bottleneck.\nTo ensure that our constructed graph has maximum degree , we therefore need to build different gadgets. We then leverage several deep structural properties of the edge multiway cut in the resulting instance, making for a significantly more involved and technical correctness proof.\nCrucially, we first prove NP-completeness for a weighted version of the problem on graphs of maximum degree , in which\neach terminal is incident with exactly one edge of weight .\nIn the final step of our construction, we replace weighted edges and high-degree vertices with appropriate gadgets.\nThe NP-hardness for Node Multiway Cut for planar subcubic graphs shown in Theorem 1.2 ###reference_theorem2### follows from the NP-hardness of Edge Multiway Cut by constructing the line graph of input graph.\nThe NP-hardness for Node Multiway Cut with Deletable Terminals on planar subcubic graphs shown in Theorem 1.3 ###reference_theorem3### follows from a straightforward reduction from Vertex Cover."
16
+ },
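The line-graph translation mentioned for Theorem 1.2 can be sketched as follows; the placement of terminals in the line graph below is an assumed simplification for illustration, not the paper's exact gadget.

```python
import networkx as nx

def edge_to_node_instance(G: nx.Graph, terminals):
    """Edge deletions in G correspond to node deletions in the line
    graph L(G); attach a fresh terminal node per original terminal,
    adjacent to the line-graph nodes (edges of G) incident to it."""
    L = nx.line_graph(G)
    new_terminals = []
    for t in terminals:
        tn = ("terminal", t)
        for u, v in G.edges(t):
            key = (u, v) if (u, v) in L else (v, u)
            L.add_edge(tn, key)
        new_terminals.append(tn)
    return L, new_terminals
```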
17
+ {
18
+ "section_id": "1.2",
19
+ "parent_section_id": "1",
20
+ "section_name": "Consequences",
21
+ "text": "As discussed above, we immediately have the following dichotomy.\nFor every , Edge Multiway Cut and both versions of Node Multiway Cut on graphs of maximum degree are polynomial-time solvable if , and NP-complete if .\nFrom a result of Robertson and Seymour [26 ###reference_b26###], it follows that any problem that is NP-hard on subcubic planar graphs but polynomial-time solvable for graphs of bounded treewidth can be fully classified on -topological minor-free graphs. Namely, is polynomial-time solvable if contains a subcubic planar graph and NP-hard otherwise.\nIt is known that Edge Multiway Cut and both versions of Node Multiway Cut satisfy the second property [1 ###reference_b1###]. As Theorems 1.1 ###reference_theorem1###\u20131.2 ###reference_theorem2### show the first property, we obtain the following dichotomy.\nFor every set of graphs , Edge Multiway Cut and both versions of Node Multiway Cut on -topological-minor-free graphs are polynomial-time solvable if contains a planar subcubic graph, and NP-complete otherwise.\nLet\nthe -subdivision of a graph be the graph obtained from after replacing each edge by a path of\n edges with end-vertices and .\nA problem is NP-hard\nunder edge subdivision of subcubic graphs if for every integer there is an such that:\nif is NP-hard for the class of subcubic graphs, then is NP-hard for the class consisting of the -subdivisions of the graphs in .\nNow say that is polynomial-time solvable on graphs of bounded treewidth and NP-hard for subcubic graphs and under edge subdivision of subcubic graphs. The meta-classification from\nJohnson et al. [18 ###reference_b18###] states that for every finite set , on -subgraph-free graphs is polynomial-time solvable if contains a graph from , and NP-hard otherwise. Here, is the set consisting of all disjoint unions of zero or more paths and subdivided claws (-vertex stars in which edges may be subdivided). Figure 2 ###reference_### shows an example of a graph belonging to . Results from\nArnborg, Lagergren and Seese [1 ###reference_b1###] and Johnson et al. [18 ###reference_b18###]\nshow the first two properties. Theorems 1.1 ###reference_theorem1###\u20131.2 ###reference_theorem2### show the last property. Thus, we obtain:\n###figure_4### For every finite set of graphs , Edge Multiway Cut and both versions of Node Multiway Cut on -subgraph-free graphs are polynomial-time solvable if contains a graph from , and NP-complete otherwise."
22
+ },
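For concreteness, here is a sketch of the l-subdivision operation used in the meta-classification, assuming the convention above of l new internal vertices per edge (not the paper's code):

```python
import networkx as nx

def ell_subdivision(G: nx.Graph, ell: int) -> nx.Graph:
    """Replace every edge uv of G by a u-v path with ell new internal
    vertices (hence ell + 1 edges)."""
    H = nx.Graph()
    H.add_nodes_from(G.nodes)
    for i, (u, v) in enumerate(G.edges):
        nx.add_path(H, [u] + [("sub", i, j) for j in range(ell)] + [v])
    return H
```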
23
+ {
24
+ "section_id": "2",
25
+ "parent_section_id": null,
26
+ "section_name": "The Proof of Theorem 1.1",
27
+ "text": "In this section, we show that Edge Multiway Cut is NP-complete on subcubic graphs. We reduce the problem from Planar 2P1N-3SAT, which is a restricted version of 3-SAT. Given a CNF-formula with the set of variables and the set of clauses , the incidence graph of the formula is the graph which is a bipartite graph with one of the partitions containing a vertex for each variable and the other partition containing a vertex for each clause of . There exists in an edge between a variable-vertex and a clause-vertex if and only if the variable appears in the clause. We define Planar 2P1N-3SAT as follows.\nPlanar 2P1N-3SATA set of variables and a CNF formula over and clause set with each clause containing at most three literals and each variable occurring twice positively and once negatively in such that is planar.Is there an assignment that satisfies ?\nThe above problem was shown to be NP-complete by\nDahlhaus et al. [13 ###reference_b13###]. By their construction, each variable occurs in at least two clauses having size . This property becomes important later in our NP-completeness proof.\nWe need two further definitions. Recall that in Weighted Edge Multiway Cut, we are given a function in addition to . The goal is to decide if admits an edge multiway cut of total weight at most . If the image of is the set , we denote the corresponding Weighted Edge Multiway Cut problem as -Edge Multiway Cut. Also note that if an edge/node multiway cut has smallest possible size (weight) among all edge/node multiway cuts for the pair , then is a minimum(-weight) edge/node multiway cut.\nWe show the reduction in two steps. In the first step, we reduce from Planar 2P1N-3SAT to -Edge Multiway Cut restricted to planar graphs of maximum degree where the terminals all have degree . In the second step, we show how to make the instance unweighted while keeping it planar and making its maximum degree bounded above by .\nSee 1.1 ###reference_theorem1###\nClearly, Edge Multiway Cut is in NP.\nWe reduce Edge Multiway Cut from Planar 2P1N-3SAT. Let be a given CNF formula with at most three literals in each clause and each variable occurring twice positively and once negatively.\nWe assume that each clause has size at least and every variable occurs in at least two clauses of size . Let be the set of variables in and be the set of clauses. We assume that the incidence graph is planar. By the reduction\nof Dahlhaus et al. [13 ###reference_b13###], Planar 2P1N-3SAT is NP-complete for such instances.\n###figure_5### We now describe the graph construction. For each vertex of corresponding to a clause in , we create a clause gadget (depending on the size of the clause), as in Figure 3 ###reference_###. For each vertex of corresponding to a variable , we create a variable gadget, also shown in Figure 3 ###reference_###. The gadgets have two terminals each (marked as red squares in Figure 3 ###reference_###), a positive and a negative one. In a variable gadget, the positive terminal is attached to the diamond and the negative one to the hat, by edges of weight ; refer to Figure 3 ###reference_###. In a clause gadget, each literal corresponds to a triangle, with these triangles connected in sequence, and the positive and negative terminal are attached to triangles at the start and end of the sequence, again by edges of weight .\nEach degree- vertex in a gadget (marked blue in Figure 3 ###reference_###) is called a link. The two edges incident on a link are called connector-edges. 
The edge of a triangle that is not incident on the link is called the base of the triangle. For a variable x, if x occurs positively in clauses c and c', then we connect the links of the diamond of the x-gadget to some link of the gadgets for c and c', each by identifying them with the respective link in the clause gadget. If x occurs negatively in clause c'', then we connect the link of the hat of the x-gadget and some link on the gadget for c'', again by identifying the two links. An example of such variable and clause connections is depicted in Figure 6. The structure formed by a link and the four connector-edges incident on it is referred to as a link-structure. By the assumptions on phi, we can create the link-structures such that each link in a variable gadget participates in exactly one link-structure and corresponds to one occurrence of the variable. Similarly, each link of a clause gadget participates in exactly one link-structure.\nThe graph thus created is denoted by G'. We can construct G' in such a way that it is planar, because G_phi is planar and has maximum degree 3. Note that G' has maximum degree 4. Let T be the set of terminals in the constructed graph G'. Note that G' has a total of 2(|X| + |C|) terminals.\nWe observe that all edges in G' have weight at most 3. Crucially, each terminal is incident on exactly one edge, which has weight 3.\nWe introduce some extra notions to describe the constructed graph G'. The connector-edges closest to the terminals are called outer edges, as indicated in Figure 3. Recall that the structure formed by the two pairs of connector-edges and the link is called a link-structure; see Figure 4. Since each variable occurs twice positively and once negatively in phi, the constructed graph has exactly 3|X| link-structures.\nWe now continue the reduction to obtain an unweighted planar subcubic graph.\nWe replace all the edges in G' of weight greater than 1 by as many parallel edges between their end-vertices as the weight of the edge. Each of these parallel edges has weight 1. We refer to this graph as G''. Next, for each vertex of G'' of degree greater than 3, we replace the vertex by a large honeycomb (hexagonal grid), as depicted in Figure 5 (its size is picked for convenience and not optimized). The neighbours of the replaced vertex, of which there are at most six by the construction of G'', are now attached to distinct degree-2 vertices on the boundary of the honeycomb such that the distance along the boundary between any pair of them is large. These degree-2 vertices on the boundary are called the attachment points of the honeycomb. The edges not belonging to the honeycomb that are incident on these attachment points are called attaching edges. In the construction, we ensure that the attaching edges occur in the same cyclical order on the boundary as the edges to the neighbours of the replaced vertex originally occurred around it. Let the resultant graph be H.\nNote that the degree of any vertex in H is at most 3. For terminals, this was already the case in G'. Note that, therefore, terminals were not replaced by honeycombs to obtain H. For non-terminals, this is clear from the construction of G'' and H. Moreover, all the edge weights of H are equal to 1, and thus we can consider it unweighted. Also, all the replacements can be done so as to retain a planar embedding of G'', and hence H is planar. H has size bounded by a polynomial in |phi| and can be constructed in polynomial time. 
Finally, we set the budget to k = 2(|X| + |C|) + 6|X|; the counting below justifies this choice.\nFor the sake of simplicity, we shall first argue that phi is a yes instance of Planar 2P1N-3SAT if and only if (G', T) is a yes instance of {1,2,3}-Edge Multiway Cut. Later, we show that the same holds for H by proving that no edge of any of the honeycombs is ever present in any minimum edge multiway cut in H.\nSuppose that tau is a truth assignment satisfying phi. Then, we create a set of edges S as follows:\nIf a variable is set to \u201ctrue\u201d by tau, then add to S all three edges of the hat in the corresponding gadget. If a variable is set to \u201cfalse\u201d by tau, then add to S all five edges of the diamond.\nFor each clause, pick a true literal in it and add to S all three edges of the clause-triangle corresponding to this literal.\nFinally, for each link-structure with none of its edges in S yet, add the two connector-edges of its clause-triangle to S.\nClaim 1: S is an edge multiway cut of (G', T) of weight at most k.\nFor each variable, either the positive literal is true, or the negative one. Hence, either all three edges of its hat are in S or all five edges of the diamond. Therefore, all the paths between the two terminals of each variable gadget are disconnected in G' - S. Consider the link-structure in Figure 4. By our choice of S, each link is, in G' - S, separated from at least one of its two triangles: either both connector-edges to that triangle are in S, or all edges of that triangle are, so that following the link in that direction leads to a dead end. Therefore, no path connecting any terminal pair in G' - S passes through any link. As all the paths in G' between a variable-terminal and a clause-terminal must pass through some link, we know that all terminal pairs of this type are disconnected in G' - S. Since tau is a satisfying truth assignment of phi, all the edges of one triangle from every clause gadget are in S. Hence, all the paths between the two terminals of each clause gadget are disconnected in G' - S. Hence, S is an edge multiway cut.\nIt remains to show that the weight of S is at most k. Since tau satisfies each clause of phi, there are exactly |C| triangle-bases of weight 2 from the clause gadgets in S. Similarly, the variable gadgets contribute exactly |X| bases to S. Finally, for each of the 3|X| link-structures, by the definition of S and the fact that tau is a satisfying assignment, either the two connector-edges of the variable-triangle are in S or the two connector-edges of the clause-triangle. Together, they contribute a weight of 2 per link-structure to the total weight of S. Therefore, S is an edge multiway cut in G' of weight at most 2(|X| + |C|) + 6|X| = k.\nHence, (G', T) is a yes instance of {1,2,3}-Edge Multiway Cut.\nConversely, assume that (G', T) is a yes instance of {1,2,3}-Edge Multiway Cut. Hence, there exists an edge multiway cut S of G' of weight at most k. We shall demonstrate an assignment that satisfies phi. Before that, we shall discuss some structural properties of a minimum-weight edge multiway cut. In the following arguments, we assume that the clauses under consideration have size three, unless otherwise specified. While making the same arguments for clauses of size 2 is easier, we prefer to argue about clauses of size three for generality.\nClaim 2: If e is an edge in G' incident on a non-terminal vertex v such that e has weight greater than or equal to the total weight of the other edges incident on v, then there exists a minimum-weight edge multiway cut in G' that does not contain e.\nThe above claim implies that there exists a minimum-weight multiway cut containing no such edge e. To see this, note that an iterative application of the local replacement used in Claim 2 would cause a conflict in the event that the replacement is cyclical. Suppose that the edges are replaced in the sequence e_1, e_2, ..., e_p, e_1. 
Then the weight of e_1, denoted by w(e_1), must be strictly less than the weight of e_2. Similarly, w(e_2) < w(e_3), and so on. This would mean that w(e_1) < w(e_1), which is a contradiction.\nClaim 3: If a minimum-weight edge multiway cut contains an edge of a cycle, then it contains at least two edges from that cycle.\nIt follows from Claim 2 and the construction of G' that there exists a minimum-weight edge multiway cut for (G', T) that does not contain the edges incident on the terminals. Among the minimum-weight edge multiway cuts that satisfy Claim 2, we shall select one that contains the maximum number of connector-edges, and from the ones that satisfy both of the aforementioned properties, we shall pick one that contains the maximum number of triangle-bases from clause gadgets of size 2. Let S be a minimum edge multiway cut that fulfils all these requirements.\nWe say a link incident on a gadget reaches a terminal t if the link is the first vertex on a path P from the gadget to t and no edge on P is contained in S.\nA terminal t is reachable by a gadget if one of the links incident on the gadget reaches t. Note that, for any terminal t' in the gadget, if t is reached from some incident link via a path P, then P can be extended to a t'-t path in G' using only edges inside the gadget. However, among the edges used by such an extension, at least one must belong to S, or else t' and t would be connected in G' - S.\nClaim 4: S contains exactly one base of a triangle from each variable gadget.\nClearly, S must contain at least one base from each variable gadget, else, by the fact that S contains no edges incident on terminals, a path between the terminals in such a gadget would remain in G' - S.\nSuppose that S contains two bases of some variable gadget, say that of x. By Claim 3, S must also contain at least three connector-edges from this variable gadget: at least two connector-edges (of the two triangles) of the diamond and at least one connector-edge of the hat. We claim that, without loss of generality, all the outer connector-edges must be in S. If for some triangle the outer connector-edge next to terminal t is not in S, then the link incident on this triangle does not reach any other terminal; otherwise, a path from t to that terminal would remain in G' - S, a contradiction. Hence, we simultaneously replace all inner connector-edges for which the corresponding outer connector-edge is not in S by their corresponding outer connector-edge. For the resulting set S', the variable terminals of the gadget and their neighbours in G' form a connected component of G' - S'. Since the link incident on a triangle for which the outer connector-edge (next to terminal t) was not in S does not reach any other terminal, S' is feasible. Moreover, it has the same properties we demanded of S. Thus, henceforth, we may assume that all the outer connector-edges of the x-gadget are in S.\nWe now distinguish six cases based on how many links of the gadget reach a terminal:\nCase 1. No link of the gadget reaches a terminal. \nWe can remove one of the two bases from S without connecting any terminal pairs. This is so because, in order to disconnect the gadget's two terminals, it suffices for S to contain either the base of the diamond along with the two outer connector-edges, or the base and outer connector-edge of the hat. No other terminal pairs are connected via the gadget by the assumption of this case. Hence, we contradict the minimality of S (refer to Figure 7).\nCase 2. A link of the x-gadget reaches at least two distinct terminals. 
\nBy the definition of reaches, this implies that there is a path in between any two of the reached terminals (see Figure 8 ###reference_###). This contradicts that is an edge multiway cut for .\n###figure_11### Case 3.Exactly one link of the -gadget reaches some terminal . \nWe remove from the base of a triangle that is not attached to and add the remaining connector-edge of the triangle that is attached to (if it is not already in ). Refer to Figure 9 ###reference_###. Consequently, although reaches , both connector-edges incident on are in . Since no other link reached any terminals and remains disconnected from in , we can obtain an edge multiway cut for satisfying Claim 2 ###reference_2### that has the same or less weight as , but has strictly more connector-edges than . This is a contradiction to our choice of .\n###figure_12### Case 4. Exactly two links of the -gadget reach two distinct terminals and , respectively. \nRecall that all three outer connector-edges are in . Now at least one of the inner connector-edges of the gadget must be in , or else would be connected to via this gadget. In particular, both the connector-edges of at least one of the two triangles attached to must be in . Figure 10 ###reference_### depicts this scenario. We can remove from one of the two bases and add instead the remaining connector-edge of the other triangle (if it is not already in ). Consequently, although reaches and reaches , all connector-edges incident on and are in . Moreover, and are not connected to each other in , as one base and its corresponding outer connector(s) are still in . The transformation results in an edge multiway cut for satisfying Claim 2 ###reference_2### that has the same or less weight than , but has strictly more connector-edges than . This is a contradiction to our choice of .\n###figure_13### Case 5. All the three links of the -gadget reach distinct terminals , respectively.\nRecall that all three outer connected edges are in . Now at most one (inner) connector-edge of the -gadget is not in , or else at least one pair of terminals among would remain connected via the gadget. Consider Figure 11 ###reference_### for a visual depiction of this case. We replace one of the bases in with this connector-edge (if it is not already in ). The resulting edge multiway cut is no heavier. To see that it is also feasible, note that while are still reached from the links of the gadget, all the connector-edges of this gadget are in the edge multiway cut. The terminals and are disconnected from each other in because one triangle-base and its connectors are still in the edge multiway cut. Hence, we obtain an edge multiway cut for satisfying Claim 2 ###reference_2### that has the same or less weight than , but with strictly more connector-edges than , a contradiction to our choice of .\n###figure_14### Case 6. At least two links of the -gadget reach exactly one terminal outside the gadget. \nRecall that every variable occurs in at least two clauses of size . Hence, is reachable via a link from the -gadget to at least one directly linked clause gadget of a clause of size . Also recall that is a minimum-weight edge multiway cut containing the maximum number of bases from clauses of size .\nSuppose that there exists a size- clause gadget , directly linked to the -gadget, that does not contain and via which is reachable from the -gadget.\nThat is, some link reaches via a path that contains edges of , but is not in . 
Refer to Figure 12 ###reference_### for a visual depiction.\nThen must contain two base-connector pairs from ; else, some terminal of would not be disconnected from in . Now remove from the base of one of the two triangles of and add the remaining two connector-edges of . This does not increase the weight, as the base of the clause-triangle has weight and the connectors have weight each. The only terminal pair that could get connected by the transformation is the pair of terminals on itself. However, one of the bases is still in the transformed cut. This new cut contradicts our choice of , as it has strictly more connector-edges and satisfies the other conditions.\nSuppose is contained in one of the size- clause gadgets , directly linked to the -gadget. If the link between the -gadget and is not one of the links meant in the assumption of this case, then the situation of the previous paragraph holds and we obtain a contradiction.\nThus, is reachable from the -gadget via both links of .\nHence, a base-connector pair of the triangle of that is not attached to must be in . Consider the link of the -gadget that is not linked to but reaches and let be a corresponding path, starting at this link and going to . Note that passes through a clause gadget directly linked to the -gadget. If is a size- clause gadget, then we obtain a contradiction as before. Hence, corresponds to a size- clause (as in Figure 13 ###reference_###). Since must either enter or leave through one of its outer triangles, a base-connector pair of at least one outer triangle of must be in , or the attached terminal would reach in , contradicting that is an edge multiway cut for . Let be such an outer triangle (see Figure 13 ###reference_###).\n###figure_15### We argue that, without loss of generality, contains a base-connector pair of the other outer triangle, . Suppose not. Then, in particular, the base of is not in . If passes through the link attached to , then one of the endpoints of the base of must be on . Since the base of is not in , the terminal next to remains connected to in , a contradiction. Hence, must either enter or exit via the link attached to its middle triangle . Moreover, must contain a base-connector pair of (see Figure 13 ###reference_###), or would still reach in . We now modify to obtain a set . If both connector-edges of are in , then replace the base of by the base of to obtain . Then all edges of are in . Otherwise, no edge of is in and thus no terminal must be reachable via the link attached to (or it would be connected to in ). So, we replace the base-connector pair of by a base-connector pair of to obtain . Then is an edge multiway cut for of the same weight at that has the same properties as . Hence, we may assume . Then contains a base-connector pair of .\nNow remove from the base and connector-edge of . Then and become connected to each other in , but not to any other terminal, or that terminal would already be connected to in . Now add the base and outer connector-edge of the triangle in that is attached to. This restores that is an edge multiway cut for .\nThe edge multiway cut we obtain has the same weight as and satisfies Claim 2 ###reference_2###. Moreover, it has no less connectors than but contains at least one more base of a clause gadget of size . 
Hence, we obtain a contradiction to our choice of .\nWe now focus on the link-structures.\nThere cannot exist a link-structure in that contributes less than two edges to and for which the clause-triangle of the link-structure contributes no connector-edges to .\nTowards a contradiction, suppose that such a link-structure exists. Let the clause gadget containing the link-structure be and the variable gadget containing it be . By Claim 4 ###reference_4###, we know that there exists a triangle of the -gadget that does not contribute its base to . Therefore, at least one terminal of the -gadget is reachable from the clause gadget . This implies that the clause-triangle of the link-structure is the middle triangle of ;\nelse, there would exist a path in between and the closest clause-terminal on , because the edge incident on this terminal is also not in by its properties.\nThen, since is feasible, it must contain the base and at least one connector-edge of each of the two outer triangles of . Else, at least one of the clause-terminals would be reachable from in .\nIt must also be the case that both connector-edges of each of the outer triangles must be in or the incident link reaches no terminal ; otherwise, or the incident clause-terminal would be connected to in .\nNow, we can remove one of the two bases from and add the two connector-edges of the middle triangle, without compromising the feasibility of the edge multiway cut. Thus, there exists an edge multiway cut of no greater weight than , satisfying Claim 2 ###reference_2###, and containing two more connector-edges (those of the clause-triangle of the link-structure). This is a contradiction to our choice of .\n###figure_16### contains at least two edges from each link-structure.\nSuppose that there exists a link-structure that contributes less than two edges to . Suppose that connects the clause gadget and the variable gadget . By Claim 5 ###reference_5###, we know that the clause-triangle of must contribute an edge to . Therefore, none of the connectors of the variable-triangle attached to are in . As a result, the variable-terminal of the -gadget attached to , say we call it , is reachable from via .\nBy Claim 3 ###reference_3### and the fact that only is in , the base of the clause-triangle must also be in . We do the following replacement: remove from the base-connector pair of the clause-triangle and add the base and (possibly two) connectors of the variable-triangle of , as follows. If the variable-triangle of is part of a diamond, then we add to the base and two outer connectors, thereby getting an edge multiway cut of equal weight but strictly more connectors. If the variable-triangle is a hat, then we add to the base and outer connector of the hat, obtaining an edge multiway cut for of strictly smaller weight than . If we can show that the resultant edge multiway cut is feasible, we obtain a contradiction in either scenario. We claim that such a replacement does not compromise the feasibility of .\nLet be the endpoints of the base of the clause-triangle of , where is the endpoint on which is incident (see Figure 14 ###reference_###).\nNote that no terminal other than should be reachable in from ; else, there would be a path from to that terminal via . In particular, the terminal of the clause gadget for on the side of cannot be reached in from the vertex . By removing the base-connector pair of the clause-triangle of , we may expose the clause-terminal on the side of the vertex (or another terminal outside ) to . 
However, by adding the base and (possibly two) connectors closest to , we disconnect any path between this terminal and . Since we did not modify the cut in any other way, no new connections would have been made. This shows the feasibility of the resultant edge multiway cut and thus proves our claim.\nIf there exists an edge multiway cut of weight at most for , then there exists a satisfying truth assignment for .\nLet be the edge multiway cut defined before. The immediate consequence of Claims 4 ###reference_4### and 6 ###reference_6### is that the weight of is at least . must also contain at least one base per clause gadget lest the two terminals on a clause gadget remain connected. Therefore, its weight is at least . Since it is an edge multiway cut of weight at most , it has exactly one base per clause gadget.\nWe also claim that for each link-structure, if one of the triangles attached to it has its base in , then the other one cannot: note that if both the triangles had their bases in , then each of them would also have a connector-edge in by Claim 3 ###reference_3###. By Claim 6 ###reference_6### and the assumption that the weight of is at most , the other two connector-edges of the link-structure are not in . Since at most one base per variable/clause gadget can be in , there would be a path between one of the variable-terminals and one of the clause-terminals in the linked gadgets through the link-structure, a contradiction to being an edge multiway cut for . Figure 15 ###reference_### shows one such case.\n###figure_17### We now define the truth assignment . For each variable-terminal, if the diamond has its base in , we make it \u201cfalse\u201d, otherwise if the hat has its base in we make it \u201ctrue\". Each clause gadget has exactly one triangle contributing its base to . From the above argument, we know that the variable-triangle linked to this clause-triangle must not contribute its base to . Hence, every clause gadget is attached to one literal triangle such that its base is not in , and is therefore \u201ctrue\u201d. Hence, every clause is satisfied by the truth assignment and is a yes instance of Planar 2P1N-3SAT.\nThe above implies that -Edge Multiway Cut is NP-complete on planar subcubic graphs. We now proceed to prove that (unweighted) Edge Multiway Cut is NP-complete on planar subcubic graphs. The proof follows from the\nclaim below, which states\nthat the honeycombs of (defined before) do not contribute any edge to any minimum edge multiway cut for ().\nAny minimum edge multiway cut for does not contain any of the honeycomb edges.\nLet be a minimum edge multiway cut for . Recall that is planar. Note that for any two vertices , an - cut in a planar graph corresponds to a simple (possibly degenerate) cycle in the planar dual [25 ###reference_b25###]. Therefore, the dual of an edge multiway cut comprises several cycles. Let the edges corresponding to in the planar dual of be . In fact, induces a planar graph such that exactly one terminal of is embedded in the interior of each face of this graph. If any face of did not contain a terminal, we could remove the edge in dual to one of the edges of this face. This would not connect any terminal pair, and hence contradicts the minimality of .\nSuppose that contains some of the edges of the honeycomb in replacing the vertex . We denote the intersection of with the edges of this honeycomb by . Let the set of edges dual to in be . By abuse of notation, we also denote by the graph formed by contracting all the edges in . 
Since each face of encloses a terminal, each bounded face of must enclose an attachment point of the honeycomb. If not, then we could remove from an edge in dual to some edge of the face of not enclosing an attachment point. This does not make any new terminal-to-terminal connections, as the part of the honeycomb enclosed by this face does not contain any path to any of the terminals of . This would be a contradiction to the minimality of .\nNext, we observe that no bounded face of can enclose more than one attachment point. Suppose that there exists a bounded face in that encloses two attachment points. Since the two attachment points are separated by 100 cells of the honeycomb, the length of the face boundary must be at least 50. We could remove all the 50 edges from dual to the edges of the face boundary and add all the attaching edges to , instead. All the terminal-to-terminal paths passing through the honeycomb will remain disconnected after the transformation. Since at most eight attaching edges can be added, we again get a contradiction to the minimality of . So, each bounded face of must enclose exactly one attachment point.\nTo enclose the attachment points, each of these faces must cross the boundary of the honeycomb exactly twice. We claim that the faces of , enclosing consecutive attachment points on the boundary of the honeycomb, are pairwise edge-disjoint. Suppose that the faces enclosing two consecutive attachment points, and , share an edge. Then, they must also share an edge that crosses the boundary of the honeycomb. If they do not, then let be the last edge of the face enclosing to cross the boundary and be the first edge of the face enclosing to cross the boundary of the honeycomb. The edges and along with the other edges not shared between the respective face boundaries bound a region of the plane containing no attachment points, a contradiction!\nTherefore, any two faces of enclosing consecutive attachment points share an edge which crosses the boundary of the honeycomb. Without loss of generality, let this edge be closer to . Then, the face enclosing must contain at least 50 edges as and are separated by 100 cells of the honeycomb. This implies that contains at least 50 edges. However, we could remove from it all the 50 edges and add all the (at most eight) attaching edges. This cut is smaller in size and disconnects all the terminal-terminal paths passing through the honeycomb. Once again, we contradict the minimality of .\nHence, all the faces in enclosing attachment points are edge-disjoint. So, there are at least edges in . We could replace this cut by a smaller cut, namely, the edge multiway cut formed by removing the edges in from and adding to it all the attaching edges incident on the attachment points. This cut disconnects all terminal-paths passing through the honeycomb and yet, is smaller in size than , a contradiction to its minimality. Hence, does not contain any edge of any of the honeycombs.\nBy the construction of and Claims 1 ###reference_1###, 7 ###reference_7###, and 8 ###reference_8###, we conclude that Edge Multiway Cut is NP-complete on planar subcubic graphs."
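The case analysis above repeatedly checks whether a candidate edge set remains a feasible edge multiway cut after a local replacement. A minimal executable checker makes this feasibility condition concrete; the following is an illustrative sketch (not code from the paper), assuming the instance is a simple networkx graph, and `is_edge_multiway_cut` and `cut_weight` are hypothetical helper names.

```python
import networkx as nx
from itertools import combinations

def is_edge_multiway_cut(G, terminals, S):
    """True iff removing the edge set S pairwise disconnects the terminals."""
    H = nx.restricted_view(G, [], S)  # view of G with the edges of S hidden
    return all(not nx.has_path(H, s, t)
               for s, t in combinations(terminals, 2))

def cut_weight(G, S):
    """Total weight of an edge set S, defaulting to unit weights."""
    return sum(G.edges[e].get("weight", 1) for e in S)

G = nx.path_graph(5)
print(is_edge_multiway_cut(G, [0, 4], {(1, 2)}))  # True: 0 and 4 are split
```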
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "The Proof of Theorem 1.2",
+ "text": "In this section we prove Theorem 1.2. We start with the following observation.\nNode Multiway Cut is NP-complete for planar graphs of maximum degree 4.\nIt is readily seen that Node Multiway Cut belongs to NP. We now reduce from Node Multiway Cut with Deletable Terminals on planar subcubic graphs. Let (G, T) be an instance of this problem. Let G' be obtained from G by adding a pendant vertex per vertex of T. Let T' be the set of these pendant vertices. If (G, T) has a node multiway cut S, then S is immediately a node multiway cut for (G', T'). Conversely, if (G', T') has a node multiway cut S', then S' is immediately a node multiway cut for (G, T). The result follows.\nWe also need the following lemma from Johnson et al. [18] (the proof of this lemma can also be found in the appendix).\nIf Edge Multiway Cut is NP-complete for a class of graphs, then it is also NP-complete for the class of graphs consisting of the \u2113-subdivisions of the graphs of that class.\nWe are now ready to prove Theorem 1.2.\nSee Theorem 1.2.\nIt is readily seen that Node Multiway Cut belongs to NP.\nIn Theorem 1.1, we showed that Edge Multiway Cut is NP-complete on the class of planar subcubic graphs. We will now reduce Node Multiway Cut from Edge Multiway Cut restricted to the class of planar subcubic graphs. Let G be a planar subcubic graph with a set of terminals T.\nFrom (G, T), we create an instance (G'', T'') of Node Multiway Cut by the following operations; here, the line graph of a graph H has the edge set of H as vertex set and, for every pair of edges e and e' in H, there is an edge between e and e' in the line graph of H if and only if e and e' share an end-vertex.\nWe construct the 2-subdivision of G, which we denote by G'.\nNext, we construct the line graph of G', which we denote by G''.\nFinally, we create the terminal set T'' of G'' as follows: for each terminal t in T, consider the edges incident on it. In the line graph G'', these edges must form a clique. In this clique, we pick one vertex and make it a terminal. We denote the terminal set in G'' by T''.\nNote that G'' is planar, as G' is planar and every vertex in G' has degree at most 3 [27].\nNote also that G'' is subcubic, as every edge in G' has one end-vertex of degree 2 and the other end-vertex of degree at most 3.\nMoreover, G' and G'' can be constructed in polynomial time.\nThere exists an edge multiway cut of (G, T) of size at most k if and only if there exists a node multiway cut of (G'', T'') of size at most k.\nWe assume that (G, T) has an edge multiway cut of size at most k. By Lemma 3.3, (G', T) also has an edge multiway cut of size at most k. We claim that there exists an edge multiway cut of (G', T) of size at most k which does not contain any edge incident on a terminal. Every edge in G' is adjacent to some edge with both its ends having degree two. Therefore, if an edge in the edge multiway cut of (G', T) is incident on a terminal, we can replace it with its adjacent edge, which disconnects all the paths disconnected by the former and does not increase the size of the edge multiway cut. Now, for each edge in the cut we add its corresponding vertex in G'' to a set S. Since the cut pairwise disconnects the terminals in T, the set S disconnects all the terminal cliques from each other. Therefore, S is a node multiway cut of (G'', T'').\nConversely, let S be a node multiway cut of (G'', T'') of size at most k. By similar arguments as above, we may assume that S does not contain any vertex from any terminal-clique. We claim that (G, T) has an edge multiway cut of size at most k. To that end, we show that (G', T) has an edge multiway cut of size at most k and apply Lemma 3.3 to prove the same for (G, T). We add to the edge multiway cut the edges of G' that correspond to the vertices in S. The size of the cut is clearly at most k. To see that it is an edge multiway cut of (G', T), note that pairwise disconnecting the terminal-cliques of G'' amounts to pairwise disconnecting the set of edges incident on any terminal in G' from its counterparts. This, in turn, pairwise disconnects all the terminals in G'.\nBy our construction and Claim 9, Node Multiway Cut is NP-complete on the class of planar subcubic graphs."
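The three operations above (2-subdivision, line graph, one terminal per edge-clique) can be made concrete in a few lines. The sketch below is illustrative only, assuming G is a simple nx.Graph in which every terminal has at least one incident edge; `node_mwc_instance` is a hypothetical name, not code from the paper.

```python
import networkx as nx

def node_mwc_instance(G, terminals):
    """Sketch of the Theorem 1.2 construction: 2-subdivide G, take the line
    graph, and pick one line-graph vertex from each terminal's edge-clique."""
    G2 = nx.Graph()
    G2.add_nodes_from(G.nodes)
    for u, v in G.edges():
        a, b = ("sub", u, v, 0), ("sub", u, v, 1)  # two new vertices per edge
        G2.add_edges_from([(u, a), (a, b), (b, v)])
    L = nx.line_graph(G2)
    # the edges incident on a terminal t form a clique in L; pick one of them
    new_terminals = [next(n for n in L.nodes if t in n) for t in terminals]
    return L, new_terminals

L, T2 = node_mwc_instance(nx.cycle_graph(4), terminals=[0, 2])
```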
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "The Proof of Theorem 1.3",
+ "text": "In this section we prove Theorem 1.3.\nSee Theorem 1.3.\nIt is readily seen that Node Multiway Cut with Deletable Terminals belongs to NP. We now reduce from Vertex Cover on planar subcubic graphs, which is known to be NP-complete [22]. Let G be the graph of an instance of this problem. We keep the same graph, but set T = V(G). Since any two adjacent vertices are now adjacent terminals, any vertex cover in G corresponds to a node multiway cut for (G, T). The result follows."
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Conclusions",
+ "text": "We proved that Edge Multiway Cut and both versions of Node Multiway Cut are NP-complete for planar subcubic graphs.\nWe also showed that these results filled complexity gaps in the literature related to maximum degree, H-topological-minor-free graphs and H-subgraph-free graphs.\nThe last dichotomy result assumes that H is a finite set of graphs. We therefore pose the following challenging question.\nClassify the complexity of Edge Multiway Cut and both versions of Node Multiway Cut for H-subgraph-free graphs when H is infinite.\nAn answer to Open Problem 1 will require novel insights into the structure of H-subgraph-free graphs."
+ }
+ ],
+ "appendix": [
+ {
+ "section_id": "Appendix 1",
+ "parent_section_id": null,
+ "section_name": "Appendix A The Proof of Lemma\u00a03.3",
+ "text": "Here is the proof of Lemma 3.3, which is from Johnson et al. [18], but which we include below for convenience.\nSee Lemma 3.3.\nLet G belong to the class of graphs under consideration and let T be a set of terminals in G. Let G' be the graph obtained after subdividing each edge. For each edge of G, there exist two edges in G'. If an edge of G is in an edge multiway cut for (G, T), then it suffices to replace it by only one of the two edges created from it in G' to disconnect the paths it lies on. This yields an edge multiway cut for (G', T) of the same size. Conversely, if an edge of G' is in an edge multiway cut for (G', T), then we replace it by the corresponding original edge of G. This yields an edge multiway cut for (G, T) of the same size. Hence, G has an edge multiway cut of size at most k if and only if G' has an edge multiway cut of size at most k."
+ }
+ ],
+ "tables": {},
+ "image_paths": {
58
+ "1(a)": {
59
+ "figure_path": "2211.12203v3_figure_1(a).png",
60
+ "caption": "Figure 1: The three different types of multiway cuts that we consider in our paper. In all figures, the red square nodes form the terminal set T. In the top left figure, the green lines form an edge multiway cut. In the top right, the green encircled vertices form a node multiway cut not containing a vertex of T. In the bottom figure, the green encircled vertices form a node multiway cut that contains two vertices of T. The coloured parts depict the components formed after removing the edges/vertices of the multiway cut.",
61
+ "url": "http://arxiv.org/html/2211.12203v3/x1.png"
62
+ },
63
+ "1(b)": {
64
+ "figure_path": "2211.12203v3_figure_1(b).png",
65
+ "caption": "Figure 1: The three different types of multiway cuts that we consider in our paper. In all figures, the red square nodes form the terminal set T. In the top left figure, the green lines form an edge multiway cut. In the top right, the green encircled vertices form a node multiway cut not containing a vertex of T. In the bottom figure, the green encircled vertices form a node multiway cut that contains two vertices of T. The coloured parts depict the components formed after removing the edges/vertices of the multiway cut.",
66
+ "url": "http://arxiv.org/html/2211.12203v3/x2.png"
67
+ },
68
+ "1(c)": {
69
+ "figure_path": "2211.12203v3_figure_1(c).png",
70
+ "caption": "Figure 1: The three different types of multiway cuts that we consider in our paper. In all figures, the red square nodes form the terminal set T. In the top left figure, the green lines form an edge multiway cut. In the top right, the green encircled vertices form a node multiway cut not containing a vertex of T. In the bottom figure, the green encircled vertices form a node multiway cut that contains two vertices of T. The coloured parts depict the components formed after removing the edges/vertices of the multiway cut.",
71
+ "url": "http://arxiv.org/html/2211.12203v3/x3.png"
72
+ },
73
+ "2": {
74
+ "figure_path": "2211.12203v3_figure_2.png",
75
+ "caption": "Figure 2: An example of a graph, namely P_1 + P_5 + P_7 + S_{2,3,4}, that belongs to the set \\mathcal{S}.",
76
+ "url": "http://arxiv.org/html/2211.12203v3/x4.png"
77
+ },
78
+ "3": {
79
+ "figure_path": "2211.12203v3_figure_3.png",
80
+ "caption": "Figure 3: The gadgets for the variables (top) as well as those for the clauses (bottom). The bottom-left gadget corresponds to a clause with three literals whereas the bottom-right one corresponds to a clause with two literals. The terminals are depicted as red squares.",
81
+ "url": "http://arxiv.org/html/2211.12203v3/x5.png"
82
+ },
83
+ "4": {
84
+ "figure_path": "2211.12203v3_figure_4.png",
85
+ "caption": "Figure 4: The figure shows a link-structure formed by the connector-edges of a clause-triangle and its corresponding variable-triangle. The two bases that complete the triangles are not drawn.",
86
+ "url": "http://arxiv.org/html/2211.12203v3/x6.png"
87
+ },
88
+ "5": {
89
+ "figure_path": "2211.12203v3_figure_5.png",
90
+ "caption": "Figure 5: Construction of \\tilde{G} from G by replacing every edge of weight greater than 1 by as many parallel edges as its weight and then replacing the vertices of degree greater than 3 by a honeycomb of size 1000\u00d71000.",
91
+ "url": "http://arxiv.org/html/2211.12203v3/x7.png"
92
+ },
93
+ "6": {
94
+ "figure_path": "2211.12203v3_figure_6.png",
95
+ "caption": "Figure 6: The variable interface of x_i. The positive literal x_i occurs in the clauses c_j and c_g, whereas \\overline{x_i} occurs in c_h. The dotted curves connect the two vertices that are identified. No terminal is reachable from the vertex closest to the red dashed lines in the direction of the paths crossed by it.",
96
+ "url": "http://arxiv.org/html/2211.12203v3/"
97
+ },
98
+ "7": {
99
+ "figure_path": "2211.12203v3_figure_7.png",
100
+ "caption": "Figure 7: Case 1: In the figure on the left, we see the x_i-gadget with the three clause gadgets it is linked to. The dotted lines indicate that the links are identified with each other. None of the links can reach any terminal. The red dashed curves indicate that the path is intersected by the multiway cut S. The edges labeled with a red cross are contained in S. In the right figure, we show how S can be modified without compromising its feasibility.",
101
+ "url": "http://arxiv.org/html/2211.12203v3/x9.png"
102
+ },
103
+ "8": {
104
+ "figure_path": "2211.12203v3_figure_8.png",
105
+ "caption": "Figure 8: Case 2: In the figure we see the x_i-gadget with one of its links reaching two distinct terminals. The dotted curve indicates that the links are identified with each other. The dashed curve shows that there exists a path between its endpoints.",
106
+ "url": "http://arxiv.org/html/2211.12203v3/x10.png"
107
+ },
108
+ "9": {
109
+ "figure_path": "2211.12203v3_figure_9.png",
110
+ "caption": "Figure 9: Case 3: The left figure shows the situation when exactly one link of the x_i-gadget reaches a terminal. The edges labeled with a red cross are contained in S. The right figure shows the replacement made in this case.",
111
+ "url": "http://arxiv.org/html/2211.12203v3/x11.png"
112
+ },
113
+ "10": {
114
+ "figure_path": "2211.12203v3_figure_10.png",
115
+ "caption": "Figure 10: Case 4: The left figure shows the situation when exactly two links of the x_i-gadget reach two distinct terminals. The edges labeled with a red cross are contained in S. The right figure depicts the situation after the replacement.",
116
+ "url": "http://arxiv.org/html/2211.12203v3/x12.png"
117
+ },
118
+ "11": {
119
+ "figure_path": "2211.12203v3_figure_11.png",
120
+ "caption": "Figure 11: Case 5: The left figure shows the situation when all the three links of the x_i-gadget reach three distinct terminals. The edges labeled with a red cross are contained in S. The right figure shows the situation after the replacement.",
121
+ "url": "http://arxiv.org/html/2211.12203v3/x13.png"
122
+ },
123
+ "12": {
124
+ "figure_path": "2211.12203v3_figure_12.png",
125
+ "caption": "Figure 12: Case 6: The figure on the left shows the situation when the x_i-gadget reaches a terminal t via a clause gadget of size two. The dotted curve in the figure indicates that its endpoints are identified, whereas the dashed curve indicates that there exists a path between its endpoints that is not cut by S. The figure on the right depicts the situation after the replacement.",
126
+ "url": "http://arxiv.org/html/2211.12203v3/x14.png"
127
+ },
128
+ "13": {
129
+ "figure_path": "2211.12203v3_figure_13.png",
130
+ "caption": "Figure 13: Case 6: In the top figure, there is a terminal t reachable via (at least) two links of the x_i-gadget. Moreover, t appears in a clause gadget c' corresponding to a clause of size two that is directly connected to the x_i-gadget. The endpoints of the dotted curves are identified. The dashed curve indicates the existence of a path, not cut by S, between its endpoints.",
131
+ "url": "http://arxiv.org/html/2211.12203v3/x15.png"
132
+ },
133
+ "14": {
134
+ "figure_path": "2211.12203v3_figure_14.png",
135
+ "caption": "Figure 14: The figure depicts a link-structure with the variable gadget of x_i at the top and its clause gadget for c at the bottom. Exactly one edge of the link-structure (labeled with a red cross) is in the set S. The dashed red lines depict that the terminals cannot be reached from the vertices a or b.",
136
+ "url": "http://arxiv.org/html/2211.12203v3/x16.png"
137
+ },
138
+ "15": {
139
+ "figure_path": "2211.12203v3_figure_15.png",
140
+ "caption": "Figure 15: The figure shows a link-structure with the variable gadget at the bottom and its connected clause gadget at the top. The crossed-out red edges are the ones contained in the minimum edge multiway cut S. The green curve shows the existence of a path between a variable-terminal and a clause-terminal. The dotted curve connects the identified connectors in the link-structure shown in the figure.",
141
+ "url": "http://arxiv.org/html/2211.12203v3/x17.png"
142
+ },
143
+ "16": {
144
+ "figure_path": "2211.12203v3_figure_16.png",
145
+ "caption": "Figure 16: The figure shows the construction in Theorem 1.2. The leftmost figure is an instance of Edge Multiway Cut on planar subcubic graphs. The figure in between shows a 2-subdivision of the instance. The rightmost figure shows the line graph of the subdivided graph, drawn in green. In each figure, the terminals are shown as red squares.",
146
+ "url": "http://arxiv.org/html/2211.12203v3/x18.png"
147
+ }
148
+ },
149
+ "validation": true,
150
+ "references": [
151
+ {
152
+ "1": {
153
+ "title": "Easy problems for tree-decomposable graphs.",
154
+ "author": "S. Arnborg, J. Lagergren, and D. Seese.",
155
+ "venue": "Journal of Algorithms, 12:308\u2013340, 1991.",
156
+ "url": null
157
+ }
158
+ },
159
+ {
160
+ "2": {
161
+ "title": "Node multiway cut and subset feedback vertex set on graphs of bounded\nmim-width.",
162
+ "author": "B. Bergougnoux, C. Papadopoulos, and J. A. Telle.",
163
+ "venue": "Algorithmica, 84:1385\u20131417, 2022.",
164
+ "url": null
165
+ }
166
+ },
167
+ {
168
+ "3": {
169
+ "title": "Multiway cut for stereo and motion with slanted surfaces.",
170
+ "author": "S. Birchfield and C. Tomasi.",
171
+ "venue": "In Proc. ICCV 1999, pages 489\u2013495. IEEE Computer Society,\n1999.",
172
+ "url": null
173
+ }
174
+ },
175
+ {
176
+ "4": {
177
+ "title": "Treewidth is NP-complete on cubic graphs.",
178
+ "author": "H. L. Bodlaender, \u00c9. Bonnet, L. Jaffke, D. Knop, P. T. Lima, M. Milanic,\nS. Ordyniak, S. Pandey, and O. Such\u00fd.",
179
+ "venue": "In Proc. IPEC 2023, volume 285 of LIPIcs, pages\n7:1\u20137:13, 2023.",
180
+ "url": null
181
+ }
182
+ },
183
+ {
184
+ "5": {
185
+ "title": "Cutting Barnette graphs perfectly is hard.",
186
+ "author": "\u00c9. Bonnet, D. Chakraborty, and J. Duron.",
187
+ "venue": "In Proc. WG 2023, volume 14093 of LNCS, pages 116\u2013129.\nSpringer, 2023.",
188
+ "url": null
189
+ }
190
+ },
191
+ {
192
+ "6": {
193
+ "title": "Markov random fields with efficient approximations.",
194
+ "author": "Y. Boykov, O. Veksler, and R. Zabih.",
195
+ "venue": "In Proc. CVPR 1998, pages 648\u2013655. IEEE Computer Society,\n1998.",
196
+ "url": null
197
+ }
198
+ },
199
+ {
200
+ "7": {
201
+ "title": "Multicuts in unweighted digraphs with bounded degree and bounded\ntree-width.",
202
+ "author": "G. C\u0103linescu and C. G. Fernandes.",
203
+ "venue": "Electronic Notes in Discrete Mathematics, 7:194\u2013197, 2001.",
204
+ "url": null
205
+ }
206
+ },
207
+ {
208
+ "8": {
209
+ "title": "An improved approximation algorithm for Multiway cut.",
210
+ "author": "G. C\u0103linescu, H. J. Karloff, and Y. Rabani.",
211
+ "venue": "Journal of Computer and System Sciences, 60:564\u2013574, 2000.",
212
+ "url": null
213
+ }
214
+ },
215
+ {
216
+ "9": {
217
+ "title": "An parameterized algorithm for the Multiterminal\nCut problem.",
218
+ "author": "Y. Cao, J. Chen, and J. Fan.",
219
+ "venue": "Information Processing Letters, 114:167\u2013173, 2014.",
220
+ "url": null
221
+ }
222
+ },
223
+ {
224
+ "10": {
225
+ "title": "An improved parameterized algorithm for the Minimum Node\nMultiway Cut problem.",
226
+ "author": "J. Chen, Y. Liu, and S. Lu.",
227
+ "venue": "Algorithmica, 55:1\u201313, 2009.",
228
+ "url": null
229
+ }
230
+ },
231
+ {
232
+ "11": {
233
+ "title": "Fixed-parameter tractability of Directed Multiway Cut\nparameterized by the size of the cutset.",
234
+ "author": "R. Chitnis, M. Hajiaghayi, and D. Marx.",
235
+ "venue": "SIAM Journal on Computing, 42:1674\u20131696, 2013.",
236
+ "url": null
237
+ }
238
+ },
239
+ {
240
+ "12": {
241
+ "title": "On Multiway Cut parameterized above lower bounds.",
242
+ "author": "M. Cygan, M. Pilipczuk, M. Pilipczuk, and J. O. Wojtaszczyk.",
243
+ "venue": "ACM Transactions on Computation Theory, 5:3:1\u20133:11, 2013.",
244
+ "url": null
245
+ }
246
+ },
247
+ {
248
+ "13": {
249
+ "title": "The complexity of multiterminal cuts.",
250
+ "author": "E. Dahlhaus, D. S. Johnson, C. H. Papadimitriou, P. D. Seymour, and\nM. Yannakakis.",
251
+ "venue": "SIAM Journal on Computing, 23:864\u2013894, 1994.",
252
+ "url": null
253
+ }
254
+ },
255
+ {
256
+ "14": {
257
+ "title": "Maximal flow through a network.",
258
+ "author": "L. R. Ford and D. R. Fulkerson.",
259
+ "venue": "Canadian Journal of Mathematics, 8:399\u2013404, 1956.",
260
+ "url": null
261
+ }
262
+ },
263
+ {
264
+ "15": {
265
+ "title": "Domination and cut problems on chordal graphs with bounded leafage.",
266
+ "author": "E. Galby, D. Marx, P. Schepper, R. Sharma, and P. Tale.",
267
+ "venue": "In Proc. IPEC 2022, volume 249 of LIPIcs, pages\n14:1\u201314:24, 2022.",
268
+ "url": null
269
+ }
270
+ },
271
+ {
272
+ "16": {
273
+ "title": "Multiway cuts in node weighted graphs.",
274
+ "author": "N. Garg, V. V. Vazirani, and M. Yannakakis.",
275
+ "venue": "Journal of Algorithms, 50:49\u201361, 2004.",
276
+ "url": null
277
+ }
278
+ },
279
+ {
280
+ "17": {
281
+ "title": "The Planar Multiterminal Cut problem.",
282
+ "author": "D. Hartvigsen.",
283
+ "venue": "Discrete Applied Mathematics, 85:203\u2013222, 1998.",
284
+ "url": null
285
+ }
286
+ },
287
+ {
288
+ "18": {
289
+ "title": "Complexity framework for forbidden subgraphs I: The framework.",
290
+ "author": "M. Johnson, B. Martin, J. J. Oostveen, S. Pandey, S. Smith, and E. J. van\nLeeuwen.",
291
+ "venue": "arXiv:2211.12887 [math.CO], 2022.",
292
+ "url": null
293
+ }
294
+ },
295
+ {
296
+ "19": {
297
+ "title": "Edge multiway cut and node multiway cut are hard for planar subcubic\ngraphs.",
298
+ "author": "M. Johnson, B. Martin, S. Pandey, D. Paulusma, S. Smith, and E. J. van Leeuwen.",
299
+ "venue": "In Proc. SWAT 2023, volume 294 of LIPIcs, pages\n29:1\u201329:17, 2024.",
300
+ "url": null
301
+ }
302
+ },
303
+ {
304
+ "20": {
305
+ "title": "Solving Planar -Terminal Cut in time.",
306
+ "author": "P. N. Klein and D. Marx.",
307
+ "venue": "In Proc. ICALP 2012, volume 7391 of LNCS, pages 569\u2013580.\nSpringer, 2012.",
308
+ "url": null
309
+ }
310
+ },
311
+ {
312
+ "21": {
313
+ "title": "A tight lower bound for Planar Multiway Cut with fixed number\nof terminals.",
314
+ "author": "D. Marx.",
315
+ "venue": "In Proc. ICALP 2012, volume 7391 of LNCS, pages 677\u2013688.\nSpringer, 2012.",
316
+ "url": null
317
+ }
318
+ },
319
+ {
320
+ "22": {
321
+ "title": "Face covers and the genus problem for apex graphs.",
322
+ "author": "B. Mohar.",
323
+ "venue": "Journal of Combinatorial Theory, Series B, 82:102\u2013117, 2001.",
324
+ "url": null
325
+ }
326
+ },
327
+ {
328
+ "23": {
329
+ "title": "Planar Multiway Cut with terminals on few faces.",
330
+ "author": "S. Pandey and E. J. van Leeuwen.",
331
+ "venue": "In Proc. SODA 2022, pages 2032\u20132063. SIAM, 2022.",
332
+ "url": null
333
+ }
334
+ },
335
+ {
336
+ "24": {
337
+ "title": "Subset feedback vertex set on graphs of bounded independent set size.",
338
+ "author": "C. Papadopoulos and S. Tzimas.",
339
+ "venue": "Theoretical Computer Science, 814:177\u2013188, 2020.",
340
+ "url": null
341
+ }
342
+ },
343
+ {
344
+ "25": {
345
+ "title": "Minimum - cut of a planar undirected network in time.",
346
+ "author": "J. H. Reif.",
347
+ "venue": "SIAM Journal on Computing, 12:71\u201381, 1983.",
348
+ "url": null
349
+ }
350
+ },
351
+ {
352
+ "26": {
353
+ "title": "Graph minors. V. Excluding a planar graph.",
354
+ "author": "N. Robertson and P. D. Seymour.",
355
+ "venue": "Journal of Combinatorial Theory, Series B, 41:92\u2013114, 1986.",
356
+ "url": null
357
+ }
358
+ },
359
+ {
360
+ "27": {
361
+ "title": "Some properties of interchange graphs.",
362
+ "author": "J. Sedla\u00e1c\u0306ek.",
363
+ "venue": "In Theory of Graphs and Its Applications, pages 145\u2013150.\nAcademic Press, 1964.",
364
+ "url": null
365
+ }
366
+ },
367
+ {
368
+ "28": {
369
+ "title": "Multiprocessor scheduling with the aid of network flow algorithms.",
370
+ "author": "H. Stone.",
371
+ "venue": "IEEE Transactions on Software Engineering, SE-3(1):85\u201393,\n1977.",
372
+ "url": null
373
+ }
374
+ },
375
+ {
376
+ "29": {
377
+ "title": "NP-completeness of perfect matching index of cubic graphs.",
378
+ "author": "M. \u0160koviera and P. Var\u0161a.",
379
+ "venue": "In Proc. STACS 2022, volume 219 of LIPIcs, pages\n56:1\u201356:12, 2022.",
380
+ "url": null
381
+ }
382
+ },
383
+ {
384
+ "30": {
385
+ "title": "Vertex-edge domination in cubic graphs.",
386
+ "author": "R. Ziemann and P. Zylinski.",
387
+ "venue": "Discrete Mathematics, 343:112075, 2020.",
388
+ "url": null
389
+ }
390
+ }
391
+ ],
392
+ "url": "http://arxiv.org/html/2211.12203v3"
393
+ }
20240819/2305.15897v4.json ADDED
The diff for this file is too large to render. See raw diff
 
20240819/2306.02802v2.json ADDED
@@ -0,0 +1,298 @@
+ {
+ "title": "Quantification of Residential Flexibility Potential using Global Forecasting Models",
+ "abstract": "This paper proposes a general and practical approach to estimate the economic benefits of optimally controlling deferrable loads in a Distribution System Operator\u2019s (DSO) grid, without relying on historical observations. We achieve this by learning the simulated response of flexible loads to random control signals, using a non-parametric global forecasting model. An optimal control policy is found by including the latter in an optimization problem. We apply this method to electric water heaters and heat pumps operated through ripple control and show how flexibility, including rebound effects, can be characterized and controlled. Finally, we show that the forecaster\u2019s accuracy is sufficient to completely bypass the simulations and directly use the forecaster to estimate the economic benefit of flexibility control.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "Flexibility is a term used to describe the ability of electric loads or distributed energy resources (DERs) to shift their consumption or production in time. Flexibility in distribution or transmission grids can increase grid resilience, reduce maintenance costs, lower distribution losses, and smooth and increase the predictability of the demand profile [1, 2, 3]. Flexibility services usually require aggregating flexible residential customers into pools that reach a given \u201ccritical mass\u201d [4, 5]. In most cases, aggregation requires controlling heterogeneous types of devices [6] (e.g., heat pumps, electric boilers, EVs, PVs) running different types of onboard controllers (e.g., rule- or heuristic-based, model predictive control, etc.). This condition restricts the kind of viable control methods for pooling flexibility. Some protocols, such as OSCP [7], envisage intermediate actors optimizing flexibility pools by means of a global control signal, delegating the complexity of low-level control to a flexibility provider [8, 9]. Currently, the most used control method is ripple control [10], which uses frequency-sensitive relays to shut down flexible devices. Aggregating loads in control pools reduces uncertainty in the total amount of actuated flexibility [11]; yet, communicating instant flexibility may prove insufficient for optimal dispatch. Frequently, deactivating a cluster of energy-intensive devices might trigger a \u201crebound effect\u201d in the overall load once they are reactivated [12]. This effect can create an unintended spike in peak demand, a factor that should be taken into account when optimizing the overall power profile."
+ },
+ {
+ "section_id": "1.1",
+ "parent_section_id": "1",
+ "section_name": "Related work",
+ "text": "Flexibility research has gained prominence in recent publications. For example, the International Energy Agency\u2019s (IEA) Annex 67 [13] focuses on using building flexibility for grid control, and Annex 82 [14] examines its quantification for utilities and DSOs. Some publications are mostly focused on the characterization of flexible devices [15, 16, 17, 18, 19], while others mostly explore its exploitation in the context of demand side management and demand response, under the hypothesis of a known, observable and directly controllable system [20, 21, 22]. For example, in [22], [21] and [23], this is achieved for thermally activated building systems (TABS) and heat pumps (HPs).\nOur work is mostly related to simulation-based flexibility assessment of partially observable and indirectly controllable systems. This setting resembles the current operational conditions of electrical grids: DSOs usually can only rely on smart meters\u2019 relays for load actuation, and temperature readings are not available. Similar conditions were considered in [24], where the authors assessed the energy flexibility potential of a pool of residential smart-grid-ready HPs (i.e., with an internal controller reacting to a discrete signal indicating whether they have to consume more, consume less, or shut down) by means of bottom-up simulations. Similarly, in [25], the authors predicted the energy consumption of a group of 300 HPs controlled via binary throttling signals. In [26], the authors trained a forecaster on periods in which demand response is not active to quantify the flexibility associated with a pool of customers under a price-and-volume schema. This approach was possible due to the sparsity of actuation events, which allows the separation of baseline and activation periods. Our work is also related to inverse optimization of price signals, first introduced in [27]. The idea is that, assuming that some buildings use a price-dependent (but unknown) controller, the DSO or an aggregator can try to reverse engineer the controllers by estimating approximate and invertible control laws while probing the system with a changing price signal; since the learned control laws are invertible, they can then be used to craft the optimal cost signal to obtain a desired aggregate power profile. To show this, the authors in [27] fitted an invertible online FIR model to forecast the consumption of a group of buildings as a function of a price signal and derived an analytic solution for an associated closed-loop controller. The concept was then demonstrated by means of simulations on 20 heat-pump-equipped households. The authors of [18] used the same concept to fit a linear model linking prices and the load of a cluster of price-sensitive buildings. They then proposed to characterize flexibility by extracting parameters from the model response. They also proposed to estimate the expected savings of a given building by simulating its model twice, with and without a price-reacting control.\nA similar approach was proposed in [28], where the authors identified a general stochastic nonlinear model for predicting the energy flexibility coming from a water tower operated by an unknown control strategy. The fitted model is then used in an optimization loop to design price signals for the optimal exploitation of flexibility. The authors in [29] used the same method to find price signals that best meet flexibility requests using an iterative method."
+ },
+ {
+ "section_id": "1.2",
+ "parent_section_id": "1",
+ "section_name": "Contributions",
+ "text": "As opposed to the approaches presented in the reviewed literature, which employ simple invertible models to estimate flexibility [18, 28, 27], we propose to train global forecasters, or metamodels, based on boosted trees, on simulated data to predict both the controlled and uncontrolled power of flexible devices. This allows conditioning the response on categorical variables, such as the number of controlled devices of different types and past binary control signals generated by ripple control or throttling. This latter ability allows the use of the forecaster as a surrogate model of the simulation inside a control loop. We also show that global models provide sufficient accuracy to bypass the simulations and to perform the same kind of what-if analysis presented in [24]. This is possible because we are only interested in the aggregated power of the controlled devices, which has a much lower dimensionality than all the simulated states and signals. The method we propose can be used to assess the power response of groups of flexible devices from day zero by means of simulations, but can also be applied to real controlled systems (for which it is not possible to retrieve a baseline response) by augmenting the training set using observations from the field. In section II, we show that the modeling and simulation phase needed to create a training set for the metamodel only requires statistical information, which is usually publicly available. In section III, we present a method to predict energy flexibility using a global forecasting model. We conduct an ablation study in which we suggest various training methodologies. These findings indicate that incorporating concepts of energy imbalances throughout the prediction horizon and crafting a training set from scenarios exhibiting orthogonal penetrations based on device types enhances the accuracy of forecasts. In section III-D, we use the metamodel to characterize flexibility and rebound effects, allowing us to answer complex questions like: How does the controlled device mix influence flexibility? And how many kWh, at which power level, could be deferred? In section IV, we describe how the metamodel can be used to optimize the available flexibility. In section IV-B, we propose a dynamic grouping strategy to ensure that the thermal comfort constraints of end users with an HP are never violated. Finally, in section V, we study the accuracy of the metamodel when used to optimize flexible devices. For the analyzed use case, we show that the metamodel is accurate enough to completely bypass the simulation, allowing us to use it for both simulation and control."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "II Problem statement and system description",
+ "text": "Our objective is to evaluate the flexibility potential of residential customer groups in response to a force-off control signal.\nOur approach involves learning a computationally effective meta-model based on a detailed, white-box simulation of flexible devices, and incorporating this model within an optimal control loop to minimize operational costs.\nWe consider the setting in which a DSO plans a control signal with a 15-minute resolution for the next day. In our simulations, the signal planning occurs every day at midnight, covering the subsequent 24 hours.\nWe restrict this study to two types of flexible devices, HPs and electric water heaters (EHs). We simulated the following heating system configurations:\nHP: in this configuration, both space heating and domestic hot water (DHW) are provided by an HP.\nEH: in this case, the EH is only used to provide DHW, while the space heating is not modeled, the latter being considered to be fueled by gas or oil.\nA detailed mathematical description of the building thermal model, stratified water tanks, HP, and heating system model is provided in Appendix A. To validate our methodology, we conducted simulations reflecting typical device usage and overall power consumption for a DSO in the Swiss canton of Ticino. Appendix B lists the data sources used to configure the simulated devices. Within this region, our analysis included 2670 buildings with installed HPs and 1750 with EHs, possessing a total nominal electrical capacity of 12.5 MW and 7.7 MW, respectively."
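To make the planning setting concrete: a candidate day-ahead plan is a binary force-off vector over 96 fifteen-minute steps, and its operational cost can be scored through the learned metamodel instead of the full simulation. The sketch below is illustrative; `metamodel.predict` and the price vector are assumed interfaces, not the paper's actual API.

```python
import numpy as np

def day_ahead_cost(metamodel, features, force_off, prices_per_kwh):
    """Score one candidate day-ahead force-off plan with the metamodel,
    avoiding a re-run of the white-box simulation of the flexible devices."""
    p_hat_kw = metamodel.predict(features, force_off)  # 96 values, in kW
    energy_kwh = np.asarray(p_hat_kw) * 0.25           # 15-minute steps
    return float(np.dot(prices_per_kwh, energy_kwh))
```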
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "III Global forecasting models for flexibility simulation and control",
+ "text": "We start by considering a single group of simulated flexible devices. We define a dataset D = {(x_i, y_i)} of input-output tuples, where x_i is a set of features, including past and future values of the control signal sent to the group of devices, while y_i is their aggregated power profile for the next 96 steps ahead. We want to use D to train a forecaster, or meta-model, f."
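As an illustration, one such input-output tuple could look as follows; the feature names and values here are placeholders rather than the paper's exact feature set (cf. Tables II and III).

```python
import numpy as np

rng = np.random.default_rng(0)
x = {
    "n_hp": 120, "n_eh": 80,                   # devices in the controlled pool
    "force_off_future": np.zeros(96),          # planned day-ahead control
    "force_off_past": rng.integers(0, 2, 96),  # past control signal
    "power_lags_kw": rng.random(96),           # past aggregated power
    "hour": 0, "weekday": 3,                   # time features
}
y = rng.random(96)                             # aggregated power, next 24 h
```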
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "III-A Dataset generation",
+ "text": "The dataset is built from a one-year simulation in which devices were controlled using a random control policy and a one-year uncontrolled simulation; this is opposed to simulating tuples of controlled and uncontrolled cases starting from the same system\u2019s state. The latter approach is more complicated, requiring resetting the simulation states each time; furthermore, it cannot be used when gathering data from real systems. To build the control signal for the controlled year, we generated all possible daily random signals respecting specific criteria, such as a daily mandated minimum period for sustained state and a capped number of daily activations; these criteria are reported in Table I. Using a 15-minute time-step would naively require generating 2^96 ex-ante signals. For this reason, we used a dynamic programming approach, filtering out incompatible scenarios on the run, as they are sequentially generated (see the sketch below). Figure 1 shows a sample of the resulting force-off signals, the ratio of scenarios in which the force-off signal is active as a function of the time-step, and the distribution of the total number of steps in which the force-off signal is on.\nInstead of training several metamodels using datasets with different numbers of HPs and EHs, we follow a common approach from the forecasting literature and train a single global model by crafting datasets of different penetration scenarios and using them to create a single dataset. We build the final dataset following these steps:\n(1) We build penetration scenarios by grouping a subset of the simulated buildings, from which the aggregated power is retrieved. A dataset is then built for each penetration scenario, picking a fixed number of observations at random from the simulated years. We sampled a total of 100 penetration scenarios, for a total length of the dataset of 40 equivalent years.\n(2) We retrieve metadata describing the pool of buildings for each penetration scenario. Metadata includes the total number of each kind of device, the mean thermal equivalent transmittance (U) of the sampled buildings, and other parameters reported in Table II. We further augment the dataset with time features such as the hour, the day of the week, and the minute of the day of the prediction time.\n(3) We augment each penetration scenario dataset through transformations and lags of the original features, as reported in Table III.\n(4) We retrieve the final dataset by stacking the penetration scenario datasets."
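A minimal sketch of the generate-and-prune idea follows: partial daily signals are extended one step at a time and infeasible prefixes are discarded immediately, so the full 2^96 space is never materialized. The constraint values are placeholders, since the actual criteria of Table I are not reproduced here.

```python
def valid_daily_signals(n_steps=96, min_hold=4, max_activations=2):
    """Enumerate binary force-off signals that keep each state for at least
    `min_hold` steps and switch on at most `max_activations` times."""
    # partial state: (signal so far, current value, run length, activations)
    partials = [((0,), 0, 1, 0), ((1,), 1, 1, 1)]
    for _ in range(n_steps - 1):
        extended = []
        for sig, val, run, acts in partials:
            extended.append((sig + (val,), val, run + 1, acts))  # hold state
            if run >= min_hold:                                  # may switch
                new_val = 1 - val
                new_acts = acts + (new_val == 1)
                if new_acts <= max_activations:
                    extended.append((sig + (new_val,), new_val, 1, new_acts))
        partials = extended
    return [sig for sig, _, run, _ in partials if run >= min_hold]

signals = valid_daily_signals(n_steps=24)  # shortened horizon for a quick demo
```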
+ },
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "III-B Model description",
+ "text": "The metamodel is a collection of multiple-input single-output (MISO) LightGBM regressors [30 ###reference_b30###] predicting at a different step-ahead. The alternative to a collection of MISO models is training just one MISO model after augmentation of the dataset with a categorical variable indicating the step ahead being predicted. This option was discarded due to both memory and computational time restrictions. For our dataset, this strategy requires more than 30 GB of RAM. Furthermore, training a single tree for the whole dataset requires more computational time than training a set of MISO predictors in parallel (on a dataset that is 96 times smaller). We recall that the final dataset is composed of 100 scenarios differing in the set of buildings composing the aggregated response to be predicted. This means that removing observations at random when performing a train-test split would allow the metamodel to see the same meteorological conditions present in the training set. To overcome this, the training set was formed by removing the last 20% of the yearly observations from each penetration scenario dataset . That is, the training-test split is done such that the training set contains only observations relative to the first 292 days of the yearly simulation.\nA hyper-parameter optimization is then run on a 3-fold cross-validation over the training set; this means that each fold of the hyper-parameter optimization contains roughly 53% of . The tuned hyper-parameters are just the learning rate and the number of estimators for the LightGBM regressors; the parameters are kept fixed for all 96 models predicting the various step-ahead. We used a fixed-budget strategy with 40 samples, using the optuna python package [31 ###reference_b31###] implementation of the tree-structured Parzen estimator [32 ###reference_b32###] as a sequential sampler."
+ },
+ {
+ "section_id": "3.3",
+ "parent_section_id": "3",
+ "section_name": "III-C Ablation studies",
+ "text": "We performed an ablation study to see the effectiveness of different sampling strategies (point (1) of the dataset-building methodology described in the previous section) and model variations.\nTo generate the final dataset, we tested two different sampling schemes for producing the penetration scenarios. In the first strategy, the total number of controllable devices is increased linearly, picking randomly between households with an HP or an EH. In the second strategy, the number of the two controllable classes of devices is increased independently, co-varying the number of HPs and EHs in a cartesian fashion.\n###figure_2### To enhance the accuracy of the metamodel, a physics-informed approach involving energy imbalance is proposed. This method utilizes the metamodel to simulate the system\u2019s response under two conditions: with the actual control signal and with a zeroed control signal. By subtracting these responses, we quantify the system\u2019s \u2019energy debt\u2019 at each timestep. This physics-based insight is crucial for improving predictions of future states. To test this hypothesis, we developed a secondary model where a set of regressors first forecasts the system response for future steps under both scenarios. The resultant energy imbalances from these predictions serve to enrich the training dataset. Subsequently, another set of regressors is trained on this augmented dataset, employing this physics-informed strategy during both training and prediction phases.\nIn total, we compared four distinctive configurations, comprising the two models and the two sampling strategies. Figure 3 ###reference_### provides representative examples of predictions of the energy-aware metamodel trained using the grid sampling strategy, featuring varying counts of controlled heat pumps (HPs) and electric heaters.\n###figure_3### Models performances can be better compared when plotting the average (over samples and prediction times) normalized Mean Absolute Error (nMAE) as a function of step ahead, as done in figure 4 ###reference_###. The nMAE for the predictions generated at time is defined as:\nThe grid sampling scheme did indeed help in increasing the accuracy of the predictions w.r.t. the random sampling scheme for both the LightGBM models. Including the information about energy imbalances at each step ahead shows some benefits for both sampling strategies, at the expense of a more complex model. The accuracy improvement impacts only controlled scenarios, as demonstrated by comparing the second and third panels in figure 4 ###reference_###. These panels show the scores obtained for instances where the force-off signal was activated at least once or never activated. This result aligns with our expectations. As an additional analysis, we studied the energy imbalance over the prediction horizon. For this analysis, we considered just the controlled cases in the test set. We define two relative energy imbalanced measures:\nwhere is the simulated power, is the power predicted by the metamodel with the control used in the simulation, and is the power predicted by the metamodel using a zero force off. We can interpret as the relative error in the total energy needs w.r.t. the simulation and as the change in the energy consumption estimated by the metamodel if the pool of flexible devices were not controlled. We removed from the comparison all the instances in which the force-off signal was activated in the last 5 hours of the day. 
In this case, part of the consumption will be deferred outside the prediction horizon, making the comparison meaningless.\nLooking at the first row of figure 5 ###reference_###, we see how the empirical cumulative distribution functions (ECDFs) of and its absolute value (left and right panels) are closer to zero when the grid sampling strategy is applied. Also, using the energy-aware model helps in having a more precise prediction in terms of used energy over the prediction horizon. For all 4 models, 80 % of the time, the relative deviation in the horizon energy prediction lies below 20%. The second row of figure 5 ###reference_### reports the change in the forecasted energy consumption within the prediction horizon with and without control. It is reasonable to think that the consumption should approximately match since the force off usually just defers the consumption. In this case, the energy-aware models present a lower difference in the consumed energy.\n###figure_4### ###figure_5###"
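A sketch of the energy-aware variant described above, reusing the fit_metamodel sketch from section III-B; the "energy debt" is computed here as the cumulative difference between the controlled and zero-control first-stage predictions (the exact feature construction in the paper may differ):

```python
import numpy as np

DT = 0.25  # hours per 15-minute step

def energy_debt_features(stage1_models, X, X_zeroed):
    """X_zeroed is X with all control-signal features set to zero."""
    y_ctrl = np.column_stack([m.predict(X) for m in stage1_models])
    y_free = np.column_stack([m.predict(X_zeroed) for m in stage1_models])
    # cumulative energy imbalance over the horizon, one column per step ahead
    return DT * np.cumsum(y_ctrl - y_free, axis=1)

def fit_energy_aware(stage1_models, X, X_zeroed, Y, fit_metamodel):
    # second stage: retrain on features augmented with the predicted energy debt
    X_aug = np.hstack([X, energy_debt_features(stage1_models, X, X_zeroed)])
    return fit_metamodel(X_aug, Y)
```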
+ },
+ {
+ "section_id": "3.4",
+ "parent_section_id": "3",
+ "section_name": "III-D Characterization of the rebound effect",
+ "text": "We used the energy imbalance aware model in combination with the grid sampling strategy to visualize rebound effects for different numbers of HPs and EHs. Figure 6 ###reference_### shows three extreme examples of the characterization: the penetration scenario with the maximum number of EHs and zero HPs, the converse, and the scenario where both penetrations are at their maximum value. The rebound is shown in terms of energy imbalance from the test set, such that they have a force-off signal turning off at the fifteenth plotted step. It can be noticed how different observations can start to show negative energy imbalance at different time steps; this is because force-off signals can have different lengths, as shown in figure 1 ###reference_###. The upper left quadrant shows the energy imbalance predicted by the metamodel in the case of the maximum number of EHs and no HPs. Comparing it with the lower right quadrant, where the sample just contains HPs, we see that the rebound effect has a quicker decay, being close to zero after only 10 steps (corresponding to 2 and a half hours). The lower right quadrant exhibits a markedly slower dissipation of the rebound effect, attributable to the different heating mechanisms and temporal constants inherent in systems heated by EHs and HPs. EHs, dedicated solely to DHW heating, have their activation guided by a hysteresis function governed by two temperature sensors installed at varying heights within the water tank. In contrast, HPs are responsible for both DHW and space heating, and their activation hinges on the temperature of the hydronic circuit, thus creating a segregation between the HPs and the building heating elements, namely the serpentine. As a result, HPs\u2019 activation is subject to a system possessing a heating capacity significantly greater than that of the standalone DHW tank: the building\u2019s heating system. Further intricacy is added to the power response profile of the heat pump due to its dual role in catering to DHW and space heating needs, with priority assigned to the former. The visual responses presented in Figure 1 ###reference_### are color-differentiated according to the seven-day mean of the ambient temperature. As per the expected pattern, the EHs\u2019 responses exhibit independence from the average external temperature, while a modest influence can be detected for the HPs, where a rise in average temperatures aligns with a faster decay in response.\n###figure_6###"
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "IV Using metamodels for optimal flexibility control",
+ "text": "This section presents how the metamodel can be incorporated into the optimization loop, beginning with optimizing a single flexibility group. The objective that we found most compelling from both the DSO and energy supplier perspectives is the simultaneous minimization of day-ahead costs (incurred by the energy supplier on the spot market) and peak tariff (paid by the DSO to the TSO). Notably, this scenario is particularly well-suited to Switzerland, where a distinctive situation persists with the energy supplier and the DSO remaining bundled. The peak tariff, being proportionate to the maximum monthly peak over a 15-minute interval, poses a more significant optimization challenge than day-ahead costs, as the peak tariff is paid on the monthly peak. Since it is extremely hard to produce accurate forecasts over a one-month period, we solved the peak shaving problem on a daily basis as a heuristic. This then leads us to the following optimization problem:\nwhere refers to the step ahead, is the day-ahead spot price, is the price for the monthly peak in , is a coefficient taking into account the timestep duration. The second term in equation (5 ###reference_###) encodes the cost of increasing the peak realized so far in the current month, . Problem (4 ###reference_###) is not trivial to solve since it\u2019s a function of a non-parametric regressor, the metamodel. However, the parameters reported in table I ###reference_### produce a total of 155527 control scenarios; this allows us to evaluate (4 ###reference_###) using a brute-force approach, finding the exact minimizer . This is done through the following steps:\nForecast the total power of the DSO: . This forecaster was obtained by training 96 different LightGBM models, one for each step ahead.\nForecast the baseline consumption of flexible devices, , using the metamodel with the control signal set to zero (corresponding to not controlling the devices).\nForecast the response of flexible devices under a given control scenario for the next day. This is always done using the metamodel: .\nThe objective function is evaluated on for all the possible plausible control scenarios; the optimal control scenario minimizing the total costs is returned."
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "IV-A Controlling multiple groups",
+ "text": "As previously noted, forcing off a group of flexibilities results in a subsequent rebound effect when they are permitted to reactivate. A viable strategy to counter this issue is to segment the flexibilities into various groups, thereby circumventing a concurrent reactivation. Moreover, this segmentation method helps exploit their thermal inertia to the fullest extent. This is especially true in the context of heat pumps, as variations in building insulation and heating system sizing inevitably lead to differences in turn-on requirements to maintain home thermal comfort under identical weather conditions. Analogous considerations apply to hot water boilers as well. In addition, it is crucial to note that, generally, EHs can endure longer force-off periods than HPs. Thus, the stratification of flexibilities into distinct groups not only mitigates the rebound effect but also facilitates the optimal utilization of the entire appliance fleet\u2019s potential.\nProblem (4 ###reference_###) can be reformulated as:\nwhere is the total number of groups and is the control signal sent to the group. Problem (6 ###reference_###) is a combinatorial problem; to reduce its complexity, we have used a sequential heuristic: the first group of devices optimizes on the uncontrolled power profile . Once their optimal control for the first group is found, the second group it\u2019s optimally scheduled on , where the second subscript in refers to the control group. An example of such sequential optimization is shown in figure 7 ###reference_###, where one group of EHs and one of HPs are scheduled sequentially.\n###figure_7### The upper panel shows the optimal control signals, along with the simulated response (dashed lines) and the response predicted by the metamodel (dotted lines). The middle panel shows the power from uncontrolled nodes in the DSO\u2019s grid (blue), the total DSO\u2019s power when no control action is taken (orange), and simulated and forecast system response (green and red)."
+ },
+ {
+ "section_id": "4.2",
+ "parent_section_id": "4",
+ "section_name": "IV-B Ensuring comfort for the end users",
+ "text": "To ensure end-user comfort while leveraging their flexibility, it is critical that appliances maintain the ability to meet energy demands for a certain period of time, despite shorter time shifts within this duration. When a building is heated with a thermo-electric device such as a heat pump (HP), its energy consumption exhibits a significant inverse correlation with the external temperature. This correlation can be effectively illustrated using an equivalent linear RC circuit to model the building\u2019s thermal dynamics.\nThe static behavior of this model can be represented by the energy signature, which depicts the linear relationship between the building\u2019s daily energy consumption and the mean daily external temperature, denoted as . As more households now feature photovoltaic (PV) power plants, it becomes relevant to include the average daily global horizontal irradiance, or , as a contributing factor in the energy signature fit. As a first approximation, we assume a linear relationship between global irradiance and PV production. Consequently, elevated values may correspond to lower daily energy consumption, granted a PV system is installed. However, such an effect should not be misattributed to variations in temperature. Failing to integrate into the regression could lead to an underestimation of the daily energy consumption when expressed as a function of temperature.\nThe comprehensive energy signature, denoted as , emerges as a piecewise linear function reliant on the external temperature and .\nOur ultimate objective is to ascertain the necessary operational duration for a specified HP to fulfill the building\u2019s daily energy requirements. Consequently, the total number of active hours during a day, , is obtained by dividing the energy signature by the nominal power of the HP:\nThe following steps describe our procedure to generate and control a group of HPs based on their estimated activation time:\nEstimate the energy signatures of all the buildings with an installed HP\nEstimate their reference activation time for worst-case conditions, that is, for and .\nAt control time, perform a day-ahead estimation of activation times for all the HPs, using a day-ahead forecast of and . Use the within-group maximum values of the needed activation time, to filter out control scenarios having more than force-off steps. This process guarantees that all HPs are allowed on for a sufficient time, given the temperature and irradiance conditions."
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "V Using metamodels for closed loop emulations",
+ "text": "For testing operational and closed-loop accuracy, we simulated one year of optimized operations, in the case in which 66% of the available flexibilities are controlled. We used two control groups: one containing only EHs, which can be forced off for a longer period of time, and one group of HPs, controlled as explained in the previous section.\nThe prediction error accuracy was already studied in section III-C ###reference_###, where we tested the metamodel on a simulated test set. In that case, the force-off signals in the dataset were produced by a random policy. We further tested the performance of the metamodel when predicting the optimized force-off. We could expect a difference in prediction accuracy since, in this case, the force-off signals have a non-random pattern that could influence the average error of the forecaster. Besides this, we also assessed the accuracy of the metamodel in terms of economic results in closed-loop; that is, we retrieve the errors on the economic KPIs when the simulation is completely bypassed, and the metamodel is used for both optimizing and emulating the behavior of the controlled devices."
+ },
+ {
+ "section_id": "5.1",
+ "parent_section_id": "5",
+ "section_name": "V-A Open loop operational accuracy",
+ "text": "At first, operational accuracy was assessed in terms of predictions, comparing the aggregated controlled power profile with the sum of the individually simulated (controlled) devices. Figure 8 ###reference_### shows the normalized daily time series of the prediction error during the actual optimization process. This is defined as:\nwhere are the aggregated simulated power profiles and their day ahead predictions, respectively. We see that for all the observed error paths, we just have sporadic deviations above 10%. To have a more general understanding of the metamodel performance, in the second panel of 8 ###reference_### we plotted the histogram of the mean daily error, defined as . This shows that the metamodel is usually under-predicting, or over-smoothing, the true response from the simulation, which is generally the expected behavior of a forecaster trained to minimize the sum of squares loss. The fact that this distribution is contained in the -2%+2% interval, which is much narrower than in the maximum observed discrepancies in the daily error traces, confirms that high error deviations in the day ahead predictions are just sporadic.\n###figure_8###"
+ },
+ {
+ "section_id": "5.2",
+ "parent_section_id": "5",
+ "section_name": "V-B Closed loop economic performances",
+ "text": "We cannot directly assess the closed-loop performances of the metamodel in terms of prediction errors. This is because, when simulating in a closed loop, the metamodel\u2019s predictions are fed to itself in a recurrent fashion. This could result in slightly different starting conditions for each day; furthermore, comparing the sampled paths is not our final goal. A more significant comparison is in terms of economic returns. We compared these approaches:\nSimulated: we run the optimization and fully simulate the system\u2019s response. In this setting, the metamodel is just used to obtain the optimal control signal to be applied the day ahead. The controlled devices are then simulated, subject to the optimal control signal. The costs are then computed based on the simulations.\nForecast: for each day, the optimal predictions used for the optimization are used to estimate the cost. We anyways simulate the controlled devices; this process is repeated the next day. This approach gives us an understanding of how the operational prediction errors shown in figure 8 ###reference_### impact the estimation of the costs.\nEmulated: the simulations are completely bypassed. The metamodel is used to optimize the control signal and generate the next-day responses for the controlled devices.\nIt should be clear that, if the third approach gives comparable results in terms of costs, we could then just use the metamodel for both the control task and its evaluation. This would significantly speed up the simulation loop: we won\u2019t have to simulate the thermodynamic behavior of thousands of households, but just evaluate the trained metamodel, which evaluation is almost instantaneous. It could seem unlikely to reach the same accuracy produced by a detailed simulation, but this can be justified by the fact that we\u2019re only interested in an aggregated power profile, whose dimensionality is just a tiny fraction of all the simulated signals needed to produce it.\n###figure_9### In figure 9 ###reference_###, we reported the relative discrepancies from economic KPIs retrieved by the simulation, using the two aforementioned approaches. As an additional KPI, we also reported the estimated tons of produced . While the emissions are not directly optimized for, minimizing the energy costs also positively impacts the emissions, since energy prices correlate with the intensity in the energy mix. The emitted tons are estimated as:\nwhere is the carbon intensity in the national energy mix in .\nThe top panel refers to the costs that would generate considering the total power profile, . In both the forecast and closed-loop cases, all costs have a deviation of less than 1%. The total cost has a deviation of well below 1 per thousand. In our case study, the controlled group of devices is just a small fraction of the total energy delivered by the DSO; to estimate the metamodel\u2019s performance, it\u2019s thus important to evaluate only costs generated by controlled devices . These are shown in the bottom panel of figure 9 ###reference_###, where we have normalized the objectives\u2019 errors with the additional costs faced by the DSO due to the flexible group: both the energy costs and the we have a relative error below the 3%, while the peak cost has a deviation of 6%. We have a comparable deviation for forecasts and closed-loop simulations. 
In all the cases, the peak costs are underestimated; this was to be expected, as the metamodel is trained with a sum of squares loss, which systematically underestimates extreme events. These discrepancies can still be considered reasonable to perform A/B testing in simulation.\n###table_3### ###table_4### The left panel shows discrepancies for actual costs faced by the DSO, computed using the total power profile . In this case, we have roughly a ten-fold reduction in the relative error w.r.t. the simulations. This is not a surprise, since, as anticipated, the controllable devices constitute only a fraction of the energy supplied by the DSO. Nevertheless, this is the quantity we are interested in. For completeness, the relative deviations and absolute costs for the simulated case relative to figure 9 ###reference_### are reported in tables IV ###reference_### and V ###reference_### for the total and flexible device profiles, respectively."
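A sketch of the emissions estimate of equation (10), assuming power in kW, 15-minute steps and a per-step carbon intensity in kgCO2/kWh:

```python
import numpy as np

DT = 0.25  # hours per 15-minute step

def co2_tons(p_kw, intensity_kg_per_kwh):
    """Aligned per-step power (kW) and carbon intensity (kgCO2/kWh)."""
    kg = DT * float(np.sum(np.asarray(p_kw) * np.asarray(intensity_kg_per_kwh)))
    return kg / 1000.0
```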
+ },
+ {
+ "section_id": "6",
+ "parent_section_id": null,
+ "section_name": "VI Conclusions and extensions",
+ "text": "In this work, we presented a methodology to model the flexibility potential of controllable devices located in a DSO\u2019s distribution grid and optimally steer it by broadcasting force-off signals to different clusters of flexible devices. We achieved this by training a non-parametric global forecasting model conditional to the control signals and the number of controlled devices to predict their simulated aggregated power. The numerical use case showed that the forecaster\u2019s accuracy is high enough to use it as a guide to optimally steer deferrable devices. Moreover, the high accuracy on economic KPIs suggests that the forecaster can be used to completely bypass the simulation and speed up A/B-like testing and the retrieval of different demand-side management policies over different penetration of devices.\nWe envision the following possible extensions of the presented work:\nContinuous control. The presented use case relied on extensive enumeration of the possible force-off signals for the day ahead optimization. This was possible due to restrictions requested by the DSO on the shape of the control signal, which resulted in a total number of possible control signals in the order of 1e5 scenarios. Using a higher timestep for the control will require evaluating a prohibitive number of scenarios. The approach proposed in this paper can still be feasible by replacing the boosted tree with an \u201doptimizable\u201d regressor, that is, either a partial input-convex neural network [33 ###reference_b33###] or a conditional invertible neural network [34 ###reference_b34###]. In this case, we can use a continuous signal indicating the fraction of flexible devices to be forced off at a given moment in time. We can then apply gradient descent to the optimizable regressor and retrieve the optimal .\nProbabilistic forecast. The presented optimization framework is based on a deterministic formulation. Formulating the problem in the stochastic framework could be advantageous when considering peak tariffs. This would require summing two sources of uncertainty: the one associated with the prediction of the total power profile and the one associated with the metamodel forecasts. These can be both assessed by obtaining probability distributions after the training phase through conformal prediction and using them to generate scenarios.\nThis work was financially supported by the Swiss Federal Office of Energy (ODIS \u2013 Optimal DSO dISpatchability, SI/502074), partly by the Swiss National\nScience Foundation under NCCR Automation (grant agreement 51NF40 180545), and supported by IEA Annex 82 \u201dEnergy Flexible Buildings Towards Resilient Low Carbon Energy Systems\u201d. Lorenzo Nespoli and Vasco Medici are with ISAAC, DACD, SUPSI, Mendrisio, CH (email lorenzo.nespoli@supsi.ch, vasco.medici@supsi.ch). Lorenzo Nespoli is with Hive Power SA, Manno, CH"
+ }
+ ],
+ "appendix": [
+ {
+ "section_id": "Appendix 1",
+ "parent_section_id": null,
+ "section_name": "Appendix A Detailed simulation\u2019s model",
+ "text": "The heating system is modeled using the STASCH 6 standard.\nThe heat pump control logic is based on two temperature sensors placed at different heights of the water tank, while the circulation pump connecting the tank with the building\u2019s heating element is controlled by a hysteresis on the temperature measured by a sensor placed inside the house. \nWe describe the control logic in a sequential way, following the heating components of the system. The first decision is taken by the building central controller, which decides its working mode, that is, if the building needs to be cooled or heated, based on a moving average of the historical data of the external temperature:\nwhere the working mode is negative when the building requires to be cooled, positive when heating is required, and 0 when no actions are needed. and represent the maximum and minimum values of the external temperature\u2019s moving average, which is based on the past 7 days.\nThe actual activation of the heating element is controlled by the hysteresis on the internal temperature of the building, . If the working mode is positive, this is given by:\nwhere is the state of the hysteresis at time , 1 meaning that the circulation pump of the heating element must be activated, and was chosen to be equal to 1. For completeness, we report also the control logic when the building is in cooling mode:\nThe incoming water temperature in the heating element is then modulated linearly through a 3-way valve between a maximum and minimum value, based on the external temperature, both in the heating and cooling modes.\nWhen operative, the heating element requests hot or cold water to the water tank, which control logic is based on two temperature sensors located in two different layers. When the building is in heating mode, the control logic is a simple hysteresis based on the temperature of the sensor in the uppermost layer, which is identical to the one in (12 ###reference_###). When in cooling mode, the control logic is the following:\nwhere and are the temperature measured by the upper and lower sensors, respectively, and and are the minimum and maximum desired temperatures of the water in the tank while in cooling mode. \nThe value of is then communicated to the HP. In the case in which the HP is also used for the domestic hot water (DHW), the DHW tank is always served with priority by the HP.\nFloor heating was modeled starting from the first principles. Considering a fixed and uniform temperature for the ground and the building internal temperature at each time-step and stationary conditions, we can retrieve the analytical expression of the temperature profile along the pipe, through the energy balance on an infinitesimal element of the pipe. 
This can be expressed as:\nwhere is the heat capacity in , is the distance from the pipe entrance, is the temperature of the water inside the pipe at , are enthalpy flows at the entrance and exit of the considered infinitesimal volume, and are the heating powers from the building and from the ground.\nExpressing the latter through equivalent resistance taking into account convective and conductive effects, the balance in steady state can be rewritten as:\nwhere is the asymptotic temperature and where:\nwhere is the diameter of the tube, is the internal coefficient of heat transfer, which can be retrieved using available empirical relation for fully developed flow with fixed temperature at the boundary conditions [35 ###reference_b35###], is the heat transfer coefficient between the floor and the building air including both the effect for natural convection and radiation. The values of can be found in the literature [36 ###reference_b36###]. The value of the thermal resistances and , towards the floor and the ground, can be found in the literature as well. We can reformulate (16 ###reference_###), making it adimensional through a change of variable:\nfrom which solution we can retrieve the temperature profile of the water inside the pipe:\nwhere is the temperature of the water at the pipe inlet. We can use (21 ###reference_###) to retrieve the heating power flowing into the building, integrating along the pipe.\nwhere is the length of the serpentine. Integrating, we obtain\nwhere is the temperature of the water at the outlet of the serpentine. Note that the equation (23 ###reference_###) tends to when increase and is kept fixed.\nThe nominal mass flow of the heating system and the length of the serpentine are found as the solution of the following optimization problem:\nwhere is a reference mass flow, equal to and is the power required to keep the building internal temperature constant under reference conditions (we used an external temperature of -4 and a desired internal temperature of 20 ):\nwhere is the resistance of an equivalent RC circuit describing the heating dynamics of the building.\nThe dynamic equation describing the evolution of the temperature of the tank\u2019s layers is the following:\nwhere is the temperature of the layer, ,,, are the thermal powers due to buoyancy and conduction, from the lower and upper layer, respectively. The last term represents the enthalpy flow due to mass exchange, while is the thermal capacity of the layer, in and is the thermal power due to an electric resistance (for the boiler) or an heat exchange (for the heating system buffer).\nThe expression for the above thermal power are the following:\nwhere is the number of layers, is the equivalent thermal loss coefficient with the ambient and is the set of the layers heated by the heat exchange (or electric resistance). The buoyancy model is the one proposed in the\nIDEAS library [21 ###reference_b21###].\nA detailed description of the parameters for the boiler model can be found in [37 ###reference_b37###]."
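A sketch of the two closed-form relations (21) and (23), with an equivalent per-meter heat-loss coefficient ua_per_m standing in for the combined resistances; symbol names are illustrative:

```python
import numpy as np

def pipe_outlet_temperature(T_in, T_inf, ua_per_m, m_dot, cp, L):
    # eq. (21): exponential decay towards the asymptotic temperature T_inf
    return T_inf + (T_in - T_inf) * np.exp(-ua_per_m * L / (m_dot * cp))

def heating_power(T_in, T_inf, ua_per_m, m_dot, cp, L):
    # eq. (23): Q = m_dot * cp * (T_in - T_out); tends to m_dot*cp*(T_in - T_inf)
    # as the serpentine length L grows while m_dot is kept fixed
    T_out = pipe_outlet_temperature(T_in, T_inf, ua_per_m, m_dot, cp, L)
    return m_dot * cp * (T_in - T_out)
```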
+ },
+ {
+ "section_id": "Appendix 2",
+ "parent_section_id": null,
+ "section_name": "Appendix B Metadata sources",
+ "text": "###figure_10### To faithfully simulate the system, we need to estimate the presence of an HP or EH, the number of dwellers (influencing DHW consumption), and the equivalent thermal resistance and capacity of buildings. We retrieved information on which building is equipped with an HP or an EH in a given region using data from [38 ###reference_b38###], [39 ###reference_b39###]. We then combine this information with the following, summarized in figure 10 ###reference_###:\nthe average number of m2 per person for buildings of a given construction age, from [39 ###reference_b39###], which allows us to have an estimate of the number of dwellers. This information is then used to retrieve a water consumption profile and to size the heating source and buffer volume for the DHW.\nthe total annual consumption per square meter and construction age of buildings in the region, from [40 ###reference_b40###], and the heating reference surface (HRS) from [38 ###reference_b38###], which are then used to estimate the equivalent building\u2019s thermal resistance .\nA summary of the final set of parameters, the conditioning factors, and the sources used to retrieve them is reported in table VII ###reference_###.\n###table_5###"
+ }
+ ],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>Parameters used to generate all possible daily force-off signals</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S3.T1.1\">\n<tr class=\"ltx_tr\" id=\"S3.T1.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.1.1.1\">parameter</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.1.1.2\">value</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.2.1\">force off max steps</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.2.2\">96</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.3\" style=\"background-color:#F2F2F2;\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.3.1\"><span class=\"ltx_text\" id=\"S3.T1.1.3.1.1\" style=\"background-color:#F2F2F2;\">min constant period</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.3.2\"><span class=\"ltx_text\" id=\"S3.T1.1.3.2.1\" style=\"background-color:#F2F2F2;\">8 (2H)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.4.1\">max number of switches</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.4.2\">6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.5\" style=\"background-color:#F2F2F2;\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.5.1\"><span class=\"ltx_text\" id=\"S3.T1.1.5.1.1\" style=\"background-color:#F2F2F2;\">max on steps</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.5.2\"><span class=\"ltx_text\" id=\"S3.T1.1.5.2.1\" style=\"background-color:#F2F2F2;\">48 (12H)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.1.6.1\">nightly uncontrolled period</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.1.6.2\">20 (5H)</td>\n</tr>\n</table>\n</figure>",
+ "capture": "TABLE I: Parameters used to generate all possible daily force-off signals"
+ },
+ "2": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T2\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>Metadata used as features in the training set. Penetration scenario features describe the characteristics of the pool of simulated buildings and devices, while temporal features refer to the time of the prediction. Here stands for the quantile.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S3.T2.10\">\n<tr class=\"ltx_tr\" id=\"S3.T2.10.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.10.7.1\">penetration scenario features</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.10.7.2\">temporal features</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.10.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T2.10.6.6\">\n<span class=\"ltx_text\" id=\"S3.T2.10.6.6.7\"></span> <span class=\"ltx_text\" id=\"S3.T2.10.6.6.6\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S3.T2.10.6.6.6.6\">\n<span class=\"ltx_tr\" id=\"S3.T2.6.2.2.2.2.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S3.T2.6.2.2.2.2.2.2\">Sum, and </span></span>\n<span class=\"ltx_tr\" id=\"S3.T2.10.6.6.6.6.7\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S3.T2.10.6.6.6.6.7.1\">of the nominal powers of devices,</span></span>\n<span class=\"ltx_tr\" id=\"S3.T2.10.6.6.6.6.8\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S3.T2.10.6.6.6.6.8.1\">number of HPs and EHs and their ratio,</span></span>\n<span class=\"ltx_tr\" id=\"S3.T2.8.4.4.4.4.4\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S3.T2.8.4.4.4.4.4.2\">Mean, and of thermal resistances,</span></span>\n<span class=\"ltx_tr\" id=\"S3.T2.10.6.6.6.6.6\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S3.T2.10.6.6.6.6.6.2\">Mean, and of thermal capacities</span></span>\n</span></span><span class=\"ltx_text\" id=\"S3.T2.10.6.6.8\"></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T2.10.6.7\">\n<span class=\"ltx_text\" id=\"S3.T2.10.6.7.1\"></span> <span class=\"ltx_text\" id=\"S3.T2.10.6.7.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S3.T2.10.6.7.2.1\">\n<span class=\"ltx_tr\" id=\"S3.T2.10.6.7.2.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S3.T2.10.6.7.2.1.1.1\">hour, day of week,</span></span>\n<span class=\"ltx_tr\" id=\"S3.T2.10.6.7.2.1.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S3.T2.10.6.7.2.1.2.1\">minuteofday</span></span>\n</span></span><span class=\"ltx_text\" id=\"S3.T2.10.6.7.3\"></span></td>\n</tr>\n</table>\n</figure>",
+ "capture": "TABLE II: Metadata used as features in the training set. Penetration scenario features describe the characteristics of the pool of simulated buildings and devices, while temporal features refer to the time of the prediction. Here stands for the quantile."
+ },
+ "3": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T3\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE III: </span>Continuous variables, transformations and lags passed as features to the metamodel. Meteorological information consists of temperature and global horizontal irradiance measurements. </figcaption>\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S3.T3.2\">\n<tr class=\"ltx_tr\" id=\"S3.T3.2.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.2.3.1\">signals</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.2.3.2\">transformation</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.2.3.3\">lags</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.1.1.2\">\n<span class=\"ltx_text\" id=\"S3.T3.1.1.2.1\"></span> <span class=\"ltx_text\" id=\"S3.T3.1.1.2.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S3.T3.1.1.2.2.1\">\n<span class=\"ltx_tr\" id=\"S3.T3.1.1.2.2.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S3.T3.1.1.2.2.1.1.1\">shifts(15m)</span></span>\n<span class=\"ltx_tr\" id=\"S3.T3.1.1.2.2.1.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S3.T3.1.1.2.2.1.2.1\">mean(3h), mean(6h)</span></span>\n</span></span><span class=\"ltx_text\" id=\"S3.T3.1.1.2.3\"></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.1.1.3\">\n<span class=\"ltx_text\" id=\"S3.T3.1.1.3.1\"></span> <span class=\"ltx_text\" id=\"S3.T3.1.1.3.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S3.T3.1.1.3.2.1\">\n<span class=\"ltx_tr\" id=\"S3.T3.1.1.3.2.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S3.T3.1.1.3.2.1.1.1\">-95,\u202696</span></span>\n<span class=\"ltx_tr\" id=\"S3.T3.1.1.3.2.1.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S3.T3.1.1.3.2.1.2.1\">1\u202696</span></span>\n</span></span><span class=\"ltx_text\" id=\"S3.T3.1.1.3.3\"></span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.2.2\" style=\"background-color:#F2F2F2;\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.2.1\">\n<span class=\"ltx_text\" id=\"S3.T3.2.2.1.1\" style=\"background-color:#F2F2F2;\">, meteo</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.2.2\"><span class=\"ltx_text\" id=\"S3.T3.2.2.2.1\" style=\"background-color:#F2F2F2;\"><span class=\"ltx_text\" id=\"S3.T3.2.2.2.1.1\"></span> <span class=\"ltx_text\" id=\"S3.T3.2.2.2.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S3.T3.2.2.2.1.2.1\">\n<span class=\"ltx_tr\" id=\"S3.T3.2.2.2.1.2.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S3.T3.2.2.2.1.2.1.1.1\">shifts(15m)</span></span>\n<span class=\"ltx_tr\" id=\"S3.T3.2.2.2.1.2.1.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S3.T3.2.2.2.1.2.1.2.1\">mean(1h)</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S3.T3.2.2.2.1.3\"></span></span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.2.3\">\n<span class=\"ltx_text\" id=\"S3.T3.2.2.3.1\"></span><span class=\"ltx_text\" id=\"S3.T3.2.2.3.2\" style=\"background-color:#F2F2F2;\"> <span class=\"ltx_text\" id=\"S3.T3.2.2.3.2.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S3.T3.2.2.3.2.1.1\">\n<span class=\"ltx_tr\" id=\"S3.T3.2.2.3.2.1.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S3.T3.2.2.3.2.1.1.1.1\">-4,..0</span></span>\n<span 
class=\"ltx_tr\" id=\"S3.T3.2.2.3.2.1.1.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S3.T3.2.2.3.2.1.1.2.1\">-168..-144, -24\u20260</span></span>\n</span></span><span class=\"ltx_text\" id=\"S3.T3.2.2.3.2.2\"></span></span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.2.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T3.2.4.1\">meteo</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T3.2.4.2\">mean(1h)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T3.2.4.3\">1..24</td>\n</tr>\n</table>\n</figure>",
+ "capture": "TABLE III: Continuous variables, transformations and lags passed as features to the metamodel. Meteorological information consists of temperature and global horizontal irradiance measurements."
+ },
+ "4": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T4\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE IV: </span> First column: energy costs, peak, total costs, and emissions from the controlled simulation. Second column: relative differences from the simulated costs when evaluated using the metamodel\u2019s day-ahead predictions. Third column: relative differences from the simulated costs using the metamodel to emulate the system. Data refers to the case in which 66% of the available HPs and boilers were controlled.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S5.T4.5\">\n<tr class=\"ltx_tr\" id=\"S5.T4.4.2\">\n<td class=\"ltx_td ltx_border_tt\" id=\"S5.T4.4.2.3\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S5.T4.4.2.4\">simulated</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S5.T4.3.1.1\">\n forecasts</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_tt\" id=\"S5.T4.4.2.2\">\n closed loop</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.5.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.5.4.1\">Energy</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.5.4.2\">4.18E+7</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.5.4.3\">1.13E-3</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_t\" id=\"S5.T4.5.4.4\">1.30E-3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.5.5\" style=\"background-color:#F2F2F2;\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.5.5.1\"><span class=\"ltx_text\" id=\"S5.T4.5.5.1.1\" style=\"background-color:#F2F2F2;\">Peak</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.5.5.2\"><span class=\"ltx_text\" id=\"S5.T4.5.5.2.1\" style=\"background-color:#F2F2F2;\">4.46E+6</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.5.5.3\"><span class=\"ltx_text\" id=\"S5.T4.5.5.3.1\" style=\"background-color:#F2F2F2;\">-8.05E-3</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T4.5.5.4\"><span class=\"ltx_text\" id=\"S5.T4.5.5.4.1\" style=\"background-color:#F2F2F2;\">-1.05E-2</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.5.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.5.6.1\">Total</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.5.6.2\">4.62E+7</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.5.6.3\">2.47E-4</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T4.5.6.4\">1.68E-4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.5.3\" style=\"background-color:#F2F2F2;\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T4.5.3.1\">\n<span class=\"ltx_text\" id=\"S5.T4.5.3.1.1\" style=\"background-color:#F2F2F2;\">[ton]</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T4.5.3.2\"><span class=\"ltx_text\" id=\"S5.T4.5.3.2.1\" style=\"background-color:#F2F2F2;\">5.99E+4</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T4.5.3.3\"><span class=\"ltx_text\" id=\"S5.T4.5.3.3.1\" style=\"background-color:#F2F2F2;\">2.06E-3</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_bb\" id=\"S5.T4.5.3.4\"><span class=\"ltx_text\" id=\"S5.T4.5.3.4.1\" style=\"background-color:#F2F2F2;\">2.48E-3</span></td>\n</tr>\n</table>\n</figure>",
+ "capture": "TABLE IV: First column: energy costs, peak, total costs, and emissions from the controlled simulation. Second column: relative differences from the simulated costs when evaluated using the metamodel\u2019s day-ahead predictions. Third column: relative differences from the simulated costs using the metamodel to emulate the system. Data refers to the case in which 66% of the available HPs and boilers were controlled."
+ },
+ "5": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T5\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE V: </span> First column: additional energy costs, peak, total costs, and emissions faced by the DSO due to the flexibility group. Second and third columns as for table <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.02802v2#S5.T4\" title=\"TABLE IV \u2023 V-B Closed loop economic performances \u2023 V Using metamodels for closed loop emulations \u2023 Quantification of Residential Flexibility Potential using Global Forecasting Models\"><span class=\"ltx_text ltx_ref_tag\">IV</span></a></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S5.T5.5\">\n<tr class=\"ltx_tr\" id=\"S5.T5.4.2\">\n<td class=\"ltx_td ltx_border_tt\" id=\"S5.T5.4.2.3\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S5.T5.4.2.4\">simulated</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S5.T5.3.1.1\">\n forecasts</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_tt\" id=\"S5.T5.4.2.2\">\n closed loop</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.5.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T5.5.4.1\">Energy</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T5.5.4.2\">3.65E+6</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T5.5.4.3\">1.29E-2</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_t\" id=\"S5.T5.5.4.4\">1.49E-2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.5.5\" style=\"background-color:#F2F2F2;\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T5.5.5.1\"><span class=\"ltx_text\" id=\"S5.T5.5.5.1.1\" style=\"background-color:#F2F2F2;\">Peak</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T5.5.5.2\"><span class=\"ltx_text\" id=\"S5.T5.5.5.2.1\" style=\"background-color:#F2F2F2;\">2.99E+5</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T5.5.5.3\"><span class=\"ltx_text\" id=\"S5.T5.5.5.3.1\" style=\"background-color:#F2F2F2;\">-1.2E-1</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T5.5.5.4\"><span class=\"ltx_text\" id=\"S5.T5.5.5.4.1\" style=\"background-color:#F2F2F2;\">-1.56E-1</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.5.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T5.5.6.1\">Total</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T5.5.6.2\">3.95E+6</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T5.5.6.3\">2.88E-3</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T5.5.6.4\">1.97E-3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.5.3\" style=\"background-color:#F2F2F2;\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T5.5.3.1\">\n<span class=\"ltx_text\" id=\"S5.T5.5.3.1.1\" style=\"background-color:#F2F2F2;\">[ton]</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T5.5.3.2\"><span class=\"ltx_text\" id=\"S5.T5.5.3.2.1\" style=\"background-color:#F2F2F2;\">5.58E+3</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T5.5.3.3\"><span class=\"ltx_text\" id=\"S5.T5.5.3.3.1\" style=\"background-color:#F2F2F2;\">2.21E-2</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_bb\" id=\"S5.T5.5.3.4\"><span class=\"ltx_text\" id=\"S5.T5.5.3.4.1\" style=\"background-color:#F2F2F2;\">2.65E-2</span></td>\n</tr>\n</table>\n</figure>",
+ "capture": "TABLE V: First column: additional energy costs, peak, total costs, and emissions faced by the DSO due to the flexibility group. Second and third columns as for table IV"
+ },
+ "6": {
+ "table_html": "<figure class=\"ltx_table\" id=\"A2.T6\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE VI: </span>Upper and lower bounds for the uniform distribution for the sizing of the EH</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"A2.T6.1\">\n<tr class=\"ltx_tr\" id=\"A2.T6.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"A2.T6.1.1.2\">power [kW/person]</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"A2.T6.1.1.1\">volume [m<sup class=\"ltx_sup\" id=\"A2.T6.1.1.1.1\">3</sup>/person]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T6.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A2.T6.1.2.1\">min</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A2.T6.1.2.2\">max</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A2.T6.1.2.3\">min</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T6.1.2.4\">max</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T6.1.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"A2.T6.1.3.1\">1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"A2.T6.1.3.2\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"A2.T6.1.3.3\">0.08</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_bb ltx_border_t\" id=\"A2.T6.1.3.4\">0.12</td>\n</tr>\n</table>\n</figure>",
+ "capture": "TABLE VI: Upper and lower bounds for the uniform distribution for the sizing of the EH"
+ },
+ "7": {
+ "table_html": "<figure class=\"ltx_table\" id=\"A2.T7\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE VII: </span>Simulation parameters and their sources</figcaption>\n<table class=\"ltx_tabular ltx_align_middle\" id=\"A2.T7.2\">\n<tr class=\"ltx_tr\" id=\"A2.T7.2.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A2.T7.2.3.1\">parameter</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A2.T7.2.3.2\">conditional on</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A2.T7.2.3.3\">sources</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T7.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T7.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T7.1.1.2\">\n<span class=\"ltx_text\" id=\"A2.T7.1.1.2.1\"></span> <span class=\"ltx_text\" id=\"A2.T7.1.1.2.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"A2.T7.1.1.2.2.1\">\n<span class=\"ltx_tr\" id=\"A2.T7.1.1.2.2.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"A2.T7.1.1.2.2.1.1.1\">construction period, location,</span></span>\n<span class=\"ltx_tr\" id=\"A2.T7.1.1.2.2.1.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"A2.T7.1.1.2.2.1.2.1\">class of building</span></span>\n</span></span><span class=\"ltx_text\" id=\"A2.T7.1.1.2.3\"></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T7.1.1.3\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.02802v2#bib.bib38\" title=\"\">38</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.02802v2#bib.bib40\" title=\"\">40</a>]</cite></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T7.2.2\" style=\"background-color:#F2F2F2;\">\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T7.2.2.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T7.2.2.2\"><span class=\"ltx_text\" id=\"A2.T7.2.2.2.1\" style=\"background-color:#F2F2F2;\">-</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T7.2.2.3\"><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"A2.T7.2.2.3.1.1\" style=\"background-color:#F2F2F2;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.02802v2#bib.bib41\" title=\"\">41</a><span class=\"ltx_text\" id=\"A2.T7.2.2.3.2.2\" style=\"background-color:#F2F2F2;\">]</span></cite></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T7.2.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T7.2.4.1\">Prob(HP - EH)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T7.2.4.2\">\n<span class=\"ltx_text\" id=\"A2.T7.2.4.2.1\"></span> <span class=\"ltx_text\" id=\"A2.T7.2.4.2.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"A2.T7.2.4.2.2.1\">\n<span class=\"ltx_tr\" id=\"A2.T7.2.4.2.2.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"A2.T7.2.4.2.2.1.1.1\">construction period, location,</span></span>\n<span class=\"ltx_tr\" id=\"A2.T7.2.4.2.2.1.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"A2.T7.2.4.2.2.1.2.1\">class of building</span></span>\n</span></span><span class=\"ltx_text\" id=\"A2.T7.2.4.2.3\"></span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T7.2.4.3\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.02802v2#bib.bib38\" title=\"\">38</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.02802v2#bib.bib42\" title=\"\">42</a>]</cite></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T7.2.5\" style=\"background-color:#F2F2F2;\">\n<td class=\"ltx_td ltx_align_center 
ltx_border_bb\" id=\"A2.T7.2.5.1\"><span class=\"ltx_text\" id=\"A2.T7.2.5.1.1\" style=\"background-color:#F2F2F2;\">occupants</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A2.T7.2.5.2\">\n<span class=\"ltx_text\" id=\"A2.T7.2.5.2.1\"></span><span class=\"ltx_text\" id=\"A2.T7.2.5.2.2\" style=\"background-color:#F2F2F2;\"> <span class=\"ltx_text\" id=\"A2.T7.2.5.2.2.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"A2.T7.2.5.2.2.1.1\">\n<span class=\"ltx_tr\" id=\"A2.T7.2.5.2.2.1.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"A2.T7.2.5.2.2.1.1.1.1\">construction period, location,</span></span>\n<span class=\"ltx_tr\" id=\"A2.T7.2.5.2.2.1.1.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"A2.T7.2.5.2.2.1.1.2.1\">class of building</span></span>\n</span></span><span class=\"ltx_text\" id=\"A2.T7.2.5.2.2.2\"></span></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A2.T7.2.5.3\"><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"A2.T7.2.5.3.1.1\" style=\"background-color:#F2F2F2;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.02802v2#bib.bib38\" title=\"\">38</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.02802v2#bib.bib39\" title=\"\">39</a><span class=\"ltx_text\" id=\"A2.T7.2.5.3.2.2\" style=\"background-color:#F2F2F2;\">]</span></cite></td>\n</tr>\n</table>\n</figure>",
143
+ "capture": "TABLE VII: Simulation parameters and their sources"
144
+ }
145
+ },
146
+ "image_paths": {
147
+ "1": {
148
+ "figure_path": "2306.02802v2_figure_1.png",
149
+ "caption": "Figure 1: Left: a random sample of daily scenarios for the force-off signal. Center: ratio of active signals for a given timestep of the day. Right: distribution of the number of active timesteps among all possible scenarios.",
150
+ "url": "http://arxiv.org/html/2306.02802v2/x1.png"
151
+ },
152
+ "2": {
153
+ "figure_path": "2306.02802v2_figure_2.png",
154
+ "caption": "Figure 2: Sampling strategies for building the final training set. Left: the total number of controllable devices is increased linearly, picking randomly between households with an HP or an EH. Left: the number of controllable devices is increased by independently co-varying the number of HPs and EHs.",
155
+ "url": "http://arxiv.org/html/2306.02802v2/x2.png"
156
+ },
157
+ "3": {
158
+ "figure_path": "2306.02802v2_figure_3.png",
159
+ "caption": "Figure 3: Random example of day-ahead metamodel\u2019s forecasts, for different numbers of HPs and EHs, where the force off was activated at least once, for the energy-aware metamodel trained using the grid sampling strategy",
160
+ "url": "http://arxiv.org/html/2306.02802v2/x3.png"
161
+ },
162
+ "4": {
163
+ "figure_path": "2306.02802v2_figure_4.png",
164
+ "caption": "Figure 4: Performances for the four tested metamodels, in terms of nMAE as a function of the step ahead.",
165
+ "url": "http://arxiv.org/html/2306.02802v2/x4.png"
166
+ },
167
+ "5": {
168
+ "figure_path": "2306.02802v2_figure_5.png",
169
+ "caption": "Figure 5: Left: cumulative distributions of the relative energy imbalance for different models. Right: empirical cumulative density functions of absolute relative energy imbalance for different models.",
170
+ "url": "http://arxiv.org/html/2306.02802v2/x5.png"
171
+ },
172
+ "6": {
173
+ "figure_path": "2306.02802v2_figure_6.png",
174
+ "caption": "Figure 6: Example of system response in terms of deviations from the expected response (prediction where control signal features referring to feature time-steps are zeroed), dependent on the number of HPs and EHs.",
175
+ "url": "http://arxiv.org/html/2306.02802v2/x6.png"
176
+ },
177
+ "7": {
178
+ "figure_path": "2306.02802v2_figure_7.png",
179
+ "caption": "Figure 7: Example of optimized control action using the metamodel. Top: control signals (dashed), forecast group responses (dotted) and simulated, both controlled and uncontrolled, response (thick). Middle: total power from uncontrolled DSO\u2019s households (blue), total DSO\u2019s power when no control action is taken (orange), simulated and forecasted system response (green and red). Bottom: day-ahead price on the spot market.",
180
+ "url": "http://arxiv.org/html/2306.02802v2/x7.png"
181
+ },
182
+ "8": {
183
+ "figure_path": "2306.02802v2_figure_8.png",
184
+ "caption": "Figure 8: Performance of the metamodel in the open-loop simulations. Left: daily relative errors plotted as time series. Right: distribution of the daily means of the relative error.",
185
+ "url": "http://arxiv.org/html/2306.02802v2/x8.png"
186
+ },
187
+ "9": {
188
+ "figure_path": "2306.02802v2_figure_9.png",
189
+ "caption": "Figure 9: Deviations of different objectives from the simulated results, using the metamodel to optimize and forecast the power profiles (blue) or to completely bypass the simulation (orange). Top: relative error of objectives normalized with the total simulated costs. Bottom: relative error of objectives normalized with the additional costs faced by the DSO due to the flexible group.",
190
+ "url": "http://arxiv.org/html/2306.02802v2/x9.png"
191
+ },
192
+ "10": {
193
+ "figure_path": "2306.02802v2_figure_10.png",
194
+ "caption": "Figure 10: Representative values of m2/person (Switzerland) and kWh/m2/year (Switzerland, canton Ticino) for buildings, conditional to the class of construction year.",
195
+ "url": "http://arxiv.org/html/2306.02802v2/x10.png"
196
+ }
197
+ },
198
+ "validation": true,
199
+ "references": [
200
+ {
201
+ "1": {
202
+ "title": "Conference Name: IEEE Transactions on Power Systems.",
203
+ "author": "B. Mohandes, M. S. E. Moursi, N. Hatziargyriou, and S. E. Khatib, \u201cA Review of Power System Flexibility With High Penetration of Renewables,\u201d IEEE Transactions on Power Systems, vol. 34, pp. 3140\u20133155, July 2019.",
204
+ "venue": null,
205
+ "url": null
206
+ }
207
+ },
208
+ {
209
+ "2": {
210
+ "title": "ISSN: 2165-4093.",
211
+ "author": "C. Eid, P. Codani, Y. Chen, Y. Perez, and R. Hakvoort, \u201cAggregation of demand side flexibility in a smart grid: A review for European market design,\u201d in 2015 12th International Conference on the European Energy Market (EEM), pp. 1\u20135, May 2015.",
212
+ "venue": null,
213
+ "url": null
214
+ }
215
+ },
216
+ {
217
+ "3": {
218
+ "title": "Conference Name: IEEE Transactions on Smart Grid.",
219
+ "author": "M. Parvania, M. Fotuhi-Firuzabad, and M. Shahidehpour, \u201cOptimal Demand Response Aggregation in Wholesale Electricity Markets,\u201d IEEE Transactions on Smart Grid, vol. 4, pp. 1957\u20131965, Dec. 2013.",
220
+ "venue": null,
221
+ "url": null
222
+ }
223
+ },
224
+ {
225
+ "4": {
226
+ "title": "Conference Name: IEEE Transactions on Control of Network Systems.",
227
+ "author": "R. Ghaemi, M. Abbaszadeh, and P. G. Bonanni, \u201cOptimal Flexibility Control of Large-Scale Distributed Heterogeneous Loads in the Power Grid,\u201d IEEE Transactions on Control of Network Systems, vol. 6, pp. 1256\u20131268, Sept. 2019.",
228
+ "venue": null,
229
+ "url": null
230
+ }
231
+ },
232
+ {
233
+ "5": {
234
+ "title": "Conference Name: Power Management Techniques for Integrated Circuit Design.",
235
+ "author": "K.-H. Chen, \u201cRipple-Based Control Technique Part I,\u201d in Power Management Techniques for Integrated Circuit Design, pp. 170\u2013269, IEEE, 2016.",
236
+ "venue": null,
237
+ "url": null
238
+ }
239
+ },
240
+ {
241
+ "6": {
242
+ "title": "Conference Name: IEEE Transactions on Power Systems.",
243
+ "author": "J. Pono\u0107ko and J. V. Milanovi\u0107, \u201cForecasting Demand Flexibility of Aggregated Residential Load Using Smart Meter Data,\u201d IEEE Transactions on Power Systems, vol. 33, pp. 5446\u20135455, Sept. 2018.",
244
+ "venue": null,
245
+ "url": null
246
+ }
247
+ },
248
+ {
249
+ "7": {
250
+ "title": "Conference Name: IEEE Transactions on Power Systems.",
251
+ "author": "W. Cui, Y. Ding, H. Hui, Z. Lin, P. Du, Y. Song, and C. Shao, \u201cEvaluation and Sequential Dispatch of Operating Reserve Provided by Air Conditioners Considering Lead\u2013Lag Rebound Effect,\u201d IEEE Transactions on Power Systems, vol. 33, pp. 6935\u20136950, Nov. 2018.",
252
+ "venue": null,
253
+ "url": null
254
+ }
255
+ },
256
+ {
257
+ "8": {
258
+ "title": "ISSN: 2378-5861.",
259
+ "author": "M. K. Petersen, K. Edlund, L. H. Hansen, J. Bendtsen, and J. Stoustrup, \u201cA taxonomy for modeling flexibility and a computationally efficient algorithm for dispatch in Smart Grids,\u201d in 2013 American Control Conference, pp. 1150\u20131156, June 2013.",
260
+ "venue": null,
261
+ "url": null
262
+ }
263
+ },
264
+ {
265
+ "9": {
266
+ "title": "ISSN: 0191-2216.",
267
+ "author": "F. Oldewurtel, D. Sturzenegger, G. Andersson, M. Morari, and R. S. Smith, \u201cTowards a standardized building assessment for demand response,\u201d in 52nd IEEE Conference on Decision and Control, pp. 7083\u20137088, Dec. 2013.",
268
+ "venue": null,
269
+ "url": null
270
+ }
271
+ },
272
+ {
273
+ "10": {
274
+ "title": "Conference Name: IEEE Transactions on Power Systems.",
275
+ "author": "O. Corradi, H. Ochsenfeld, H. Madsen, and P. Pinson, \u201cControlling Electricity Consumption by Forecasting its Response to Varying Prices,\u201d IEEE Transactions on Power Systems, vol. 28, pp. 421\u2013429, Feb. 2013.",
276
+ "venue": null,
277
+ "url": null
278
+ }
279
+ },
280
+ {
281
+ "11": {
282
+ "title": "arXiv:1907.02392 [cs].",
283
+ "author": "L. Ardizzone, C. L\u00fcth, J. Kruse, C. Rother, and U. K\u00f6the, \u201cGuided Image Generation with Conditional Invertible Neural Networks,\u201d July 2019.",
284
+ "venue": null,
285
+ "url": null
286
+ }
287
+ },
288
+ {
289
+ "12": {
290
+ "title": "John Wiley & Sons, Apr. 2011.",
291
+ "author": "T. L. Bergman, F. P. Incropera, D. P. DeWitt, and A. S. Lavine, Fundamentals of Heat and Mass Transfer.",
292
+ "venue": "Google-Books-ID: vvyIoXEywMoC.",
293
+ "url": null
294
+ }
295
+ }
296
+ ],
297
+ "url": "http://arxiv.org/html/2306.02802v2"
298
+ }
20240819/2307.13269v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240819/2308.07922v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240819/2310.06824v3.json ADDED
@@ -0,0 +1,548 @@
1
+ {
2
+ "title": "The Geometry of Truth: Emergent Linear Structure in LLM Representations of True/False Datasets",
3
+ "abstract": "Large Language Models (LLMs) have impressive capabilities, but are prone to outputting falsehoods. Recent work has developed techniques for inferring whether a LLM is telling the truth by training probes on the LLM\u2019s internal activations. However, this line of work is controversial, with some authors pointing out failures of these probes to generalize in basic ways, among other conceptual issues. In this work, we use high-quality datasets of simple true/false statements to study in detail the structure of LLM representations of truth, drawing on three lines of evidence: 1. Visualizations of LLM true/false statement representations, which reveal clear linear structure. 2. Transfer experiments in which probes trained on one dataset generalize to different datasets. 3. Causal evidence obtained by surgically intervening in a LLM\u2019s forward pass, causing it to treat false statements as true and vice versa. Overall, we present evidence that at sufficient scale, LLMs linearly represent the truth or falsehood of factual statements. We also show that simple difference-in-mean probes generalize as well as other probing techniques while identifying directions which are more causally implicated in model outputs.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Despite their impressive capabilities, large language models (LLMs) do not always output true text (Lin et al., 2022 ###reference_b24###; Steinhardt, 2023 ###reference_b35###; Park et al., 2023 ###reference_b28###). In some cases, this is because they do not know better. In other cases, LLMs apparently know that statements are false but generate them anyway. For instance, Perez et al. (2022 ###reference_b30###) demonstrate that LLM assistants output more falsehoods when prompted with the biography of a less-educated user. More starkly, OpenAI (2023 ###reference_b27###) documents a case where a GPT-4-based agent gained a person\u2019s help in solving a CAPTCHA by lying about being a vision-impaired human. \u201cI should not reveal that I am a robot,\u201d the agent wrote in an internal chain-of-thought scratchpad, \u201cI should make up an excuse for why I cannot solve CAPTCHAs.\u201d\nWe would like techniques which, given a language model and a statement , determine whether believes to be true (Christiano et al., 2021 ###reference_b8###). One approach to this problem relies on inspecting model outputs; for instance, the internal chain-of-thought in the above example provides evidence that the model understood it was generating a falsehood. An alternative class of approaches instead leverages access to \u2019s internal state when processing . There has been considerable recent work on this class of approaches: Azaria & Mitchell (2023 ###reference_b3###), Li et al. (2023b ###reference_b23###), and Burns et al. (2023 ###reference_b6###) all train probes for classifying truthfulness based on a LLM\u2019s internal activations. In fact, the probes of Li et al. (2023b ###reference_b23###) and Burns et al. (2023 ###reference_b6###) are linear probes, suggesting the presence of a \u201ctruth direction\u201d in model internals.\nHowever, the efficacy and interpretation of these results are controversial. For instance, Levinstein & Herrmann (2023 ###reference_b20###) note that the probes of Azaria & Mitchell (2023 ###reference_b3###) fail to generalize in basic ways, such as to statements containing the word \u201cnot.\u201d The probes of Burns et al. (2023 ###reference_b6###) have similar generalization issues, especially when using representations from autoregressive transformers. This suggests these probes may be identifying not truth, but other features that correlate with truth on their training data.\nWorking with autoregressive transformers from the LLaMA-2 family (Touvron et al., 2023 ###reference_b37###), we shed light on this murky state of affairs. After curating high-quality datasets of simple, unambiguous true/false statements, we perform a detailed investigation of LLM representations of factuality. Our analysis, which draws on patching experiments, simple visualizations with principal component analysis (PCA), a study of probe generalization, and causal interventions, finds:\nEvidence that linear representations of truth emerge with scale, with larger models having a more abstract notion of truth that applies across structurally and topically diverse inputs.\nA small group of causally-implicated hidden states which encode these truth representations.\nConsistent results across a suite of probing techniques, but with simple difference-in-mean probes identifying directions which are most causally implicated.\nOur code, datasets, and an interactive dataexplorer are available at https://github.com/saprmarks/geometry-of-truth ###reference_ruth###.\n###figure_1###"
10
+ },
11
+ {
12
+ "section_id": "1.1",
13
+ "parent_section_id": "1",
14
+ "section_name": "Related work",
15
+ "text": "Linear world models. Substantial previous work has studied whether LLMs encode world models in their representations (Li et al., 2023a ###reference_b22###; 2021 ###reference_b21###; Abdou et al., 2021 ###reference_b1###; Patel & Pavlick, 2022 ###reference_b29###). Early work focused on whether individual neurons represent features (Wang et al., 2022 ###reference_b39###; Sajjad et al., 2022 ###reference_b33###; Bau et al., 2020 ###reference_b4###), but features may more generally be represented by directions in a LLM\u2019s latent space (i.e. linear combinations of neurons) (Dalvi et al., 2018 ###reference_b10###; Gurnee et al., 2023 ###reference_b19###; Cunningham et al., 2023 ###reference_b9###; Elhage et al., 2022 ###reference_b11###). We say such features are linearly represented by the LLM. Just as other authors have asked whether models have directions representing the concepts of \u201cWest Africa\u201d (Goh et al., 2021 ###reference_b18###) or \u201cbasketball\u201d (Gurnee et al., 2023 ###reference_b19###), we ask here whether there is a direction corresponding to the truth or falsehood of a factual statement.\nProbing for truthfulness. Others have trained probes to classify truthfulness from LLM activations, using both logistic regression (Azaria & Mitchell, 2023 ###reference_b3###; Li et al., 2023b ###reference_b23###), unsupervised (Burns et al., 2023 ###reference_b6###), and contrastive (Zou et al., 2023 ###reference_b40###; Rimsky et al., 2024 ###reference_b32###) techniques. This work differs from prior work in a number of ways. First, a cornerstone of our analysis is evaluating whether probes trained on one dataset transfer to topically and structurally different datasets in terms of both classification accuracy and causal mediation of model outputs. Second, we specifically interrogate whether our probes attend to truth, rather than merely features which correlate with truth (e.g. probable vs. improbable text). Third, we localize truth representations to a small number of hidden states above certain tokens. Fourth, we go beyond the mass-mean shift interventions of Li et al. (2023b ###reference_b23###) by systematically studying the properties of difference-in-mean. Finally, we carefully scope our setting, using only datasets of clear, simple, and unambiguous factual statements, rather than statements which are complicated and structured (Burns et al., 2023 ###reference_b6###), confusing (Azaria & Mitchell, 2023 ###reference_b3###; Levinstein & Herrmann, 2023 ###reference_b20###), or intentionally misleading (Li et al., 2023b ###reference_b23###; Lin et al., 2022 ###reference_b24###)."
16
+ },
17
+ {
18
+ "section_id": "2",
19
+ "parent_section_id": null,
20
+ "section_name": "Datasets",
21
+ "text": "###table_1###"
22
+ },
23
+ {
24
+ "section_id": "3",
25
+ "parent_section_id": null,
26
+ "section_name": "Localizing truth representations via patching",
27
+ "text": "Before beginning our study of LLM truth representations, we first address the question of which hidden states might contain such representations. We use simple patching experiments (Vig et al., 2020 ###reference_b38###; Finlayson et al., 2021 ###reference_b12###; Meng et al., 2022 ###reference_b25###; Geiger et al., 2020 ###reference_b14###) to localize certain hidden states for further analysis. Consider the following prompt :\nThe city of Tokyo is in Japan. This statement is: TRUE \nThe city of Hanoi is in Poland. This statement is: FALSE \nThe city of Chicago is in Canada. This statement is:\nSimilarly, let be the prompt obtained from by replacing \u201cChicago\u201d with \u201cToronto,\u201d thereby making the final statement true. In order to localize causally implicated hidden states, we run our model on the input and cache the residual stream activations for each token position and layer . Then, for each and , we run on but modify \u2019s forward pass by swapping out the residual stream activation for (and allowing this change to affect downstream computations); for each of these intervention experiments, we record the difference in log probability between the tokens \u201cTRUE\u201d and \u201cFALSE\u201d; the larger this difference, the more causally influential the hidden state in position and layer is on the model\u2019s prediction.\nResults for LLaMA-2-13B and the cities dataset are shown in Fig. 2 ###reference_###; see App. B ###reference_### for results on more models and datasets. We see three groups of causally implicated hidden states. The final group, labeled (c), directly encodes the model\u2019s prediction: after applying the LLM\u2019s decoder head directly to these hidden states, the top logits belong to tokens like \u201ctrue,\u201d \u201cTrue,\u201d and \u201cTRUE.\u201d The first group, labeled (a), likely encodes the LLM\u2019s representation of \u201cChicago\u201d or \u201cToronto.\u201d\n###figure_2### What does group (b) encode? The position of this group\u2014over the final token of the statement and end-of-sentence punctuation222This summarization behavior, in which information about clauses is encoded over clause-ending punctuation tokens, was also noted in Tigges et al. (2023 ###reference_b36###). We note that the largest LLaMA model displays this summarization behavior in a more context-dependent way; see App. B ###reference_###.\u2014suggests that it encodes information pertaining to the full statement. Since the information encoded is also causally influential on the model\u2019s decision to output \u201cTRUE\u201d or \u201cFALSE,\u201d we hypothesize that these hidden states store a representation of the statement\u2019s truth. In the remainder of this paper, we systematically study these hidden states."
28
+ },
29
+ {
30
+ "section_id": "4",
31
+ "parent_section_id": null,
32
+ "section_name": "Visualizing LLM representations of true/false datasets",
33
+ "text": "We begin our investigation with a simple technique: visualizing LLMs representations of our datasets using principal component analysis (PCA). Guided by the results of \u00a73 ###reference_###, we present here visualizations of the most downstream hidden state in group (b); for example, for LLaMA-2-13B, we use the layer 15 residual stream activation over the end-of-sentence punctuation token.333Our qualitative results are insensitive to choice of layer among early-middle to late-middle layers. On the other hand, when using representations over the final token in the statement (instead of the punctuation token), we sometimes see that the top PCs instead capture variation in the token itself (e.g. clusters for statements ending in \u201cChina\u201d regardless of their truth value). Unlike in \u00a73 ###reference_###, we do not prepend the statements with a few-shot prompt (so our models are not \u201cprimed\u201d to consider the truth value of our statements). For each dataset, we also center the activations by subtracting off their mean.\nWhen visualizing LLaMA-2-13B and 70B representations of our curated datasets \u2013 datasets constructed to have little variation with respect to non-truth features, such as sentence structure or subject matter \u2013 we see clear linear structure (Fig. 1 ###reference_###), with true statements separating from false ones in the top two principal components (PCs). As explored in App. C ###reference_###, this structure emerges rapidly in early-middle layers and emerges later for datasets of more structurally complex statements (e.g. conjunctive statements).\nTo what extent does this visually-apparent linear structure align between different datasets? Our visualizations indicate a nuanced answer: the axes of separation for various true/false datasets align often, but not always. For instance, Fig. 3 ###reference_###(a) shows the first PC of cities also separating true/false statements from other datasets, including diverse uncurated datasets. On the other hand, Fig. 3 ###reference_###(c) shows stark failures of alignment, with the axes of separation for datasets and statements and their negations being approximately orthogonal.\nThese cases of misalignment have an interesting relationship to scale. Fig. 3 ###reference_###(b) shows larger_than and smaller_than separating along antipodal directions in LLaMA-2-13B, but along a common direction in LLaMA-2-70B. App. C ###reference_### depicts a similar phenomenon occuring over the layers of LLaMA-2-13B: in early layers, cities and neg_cities separate antipodally, before rotating to lie orthogonally (as in Fig. 3 ###reference_###(c)), and finally aligning in later layers.\n###figure_3###"
34
+ },
35
+ {
36
+ "section_id": "4.1",
37
+ "parent_section_id": "4",
38
+ "section_name": "Discussion",
39
+ "text": "Overall, these visualizations suggest that as LLMs scale (and perhaps, also as a fixed LLM progresses through its forward pass), they hierarchically develop and linearly represent increasingly general abstractions. Small models represent surface-level characteristics of their inputs, and large models linearly represent more abstract concepts, potentially including notions like \u201ctruth\u201d that capture shared properties of topically and structurally diverse inputs. In middle regimes, we may find linear representation of concepts at intermediate levels of abstraction, for example, \u201caccurate factual recall\u201d or \u201cclose association\u201d (in the sense that \u201cBeijing\u201d and \u201cChina\u201d are closely associated).\nTo explore these intermediate regimes more deeply, suppose that and are true/false datasets, is a linearly-represented feature which correlates with truth on both and , and is a feature which correlates with truth on but has a negative correlation with truth on . If is very salient (i.e. the datasets\u2019 have large variance along the -direction) and is not, then we expect PCA visualizations of to show joint separation along . If is very salient but is not, we expect antipodal separation along , as in Fig. 3 ###reference_###(b, center). And if both and are salient, we expect visualizations like Fig. 3 ###reference_###(c).\nTo give an example, suppose that , , , and . Then we might expect to correlate with truth positively on and negatively on . If so, we would expect training linear probes on to result in improved generalization, despite consisting of the same statements as , but with the word \u201cnot\u201d inserted. We investigate this in \u00a75 ###reference_###."
40
+ },
41
+ {
42
+ "section_id": "5",
43
+ "parent_section_id": null,
44
+ "section_name": "Probing and generalization experiments",
45
+ "text": "In this section we train probes on datasets of true/false statements and test their generalization to other datasets. But first we discuss a deficiency of logistic regression and propose a simple, optimization-free alternative: mass-mean probing. Concretely, mass-mean probes use a difference-in-means direction, but\u2014when the covariance matrix of the classification data is known (e.g. when working with IID data)\u2014apply a correction intended to mitigate interference from non-orthogonal features. We will see that mass-mean probes are similarly accurate to probes trained with other techniques (including on out-of-distribution data) while being more causally implicated in model outputs."
46
+ },
47
+ {
48
+ "section_id": "5.1",
49
+ "parent_section_id": "5",
50
+ "section_name": "Challenges with logistic regression, and mass-mean probing",
51
+ "text": "A common technique in interpretability research for identifying feature directions is training linear probes with logistic regression (LR; Alain & Bengio, 2018 ###reference_b2###). In some cases, however, the direction identified by LR can fail to reflect an intuitive best guess for the feature direction, even in the absence of confounding features. Consider the following scenario, illustrated in Fig. 4 ###reference_### with hypothetical data:\nTruth is represented linearly along a direction .\nAnother feature is represented linearly along a direction not orthogonal to .444The superposition hypothesis of Elhage et al. (2022 ###reference_b11###), suggests this may be typical in deep networks.\nThe statements in our dataset have some variation with respect to feature , independent of their truth value.\nWe would like to identify the direction , but LR fails to do so. Assuming for simplicity linearly separable data, LR instead converges to the maximum margin separator Soudry et al. (2018 ###reference_b34###) (the dashed magenta line in Fig. 4 ###reference_###). Intuitively, LR treats the small projection of onto as significant, and adjusts the probe direction to have less \u201cinterference\u201d (Elhage et al., 2022 ###reference_b11###) from .\n###figure_4### A simple alternative to LR which identifies the desired direction in this scenario is to take the vector pointing from the mean of the false data to the mean of the true data. In more detail if is a dataset of with binary labels , we set where are the means of the positively- and negatively-labeled datapoints, respectively. A reasonable first pass at converting into a probe is to define555Since we are interested in truth directions, we always center our data and use unbiased probes.\nwhere is the logistic function. However, when evaluating on data that is independent and identically distributed (IID) to , we can do better by tilting our decision boundary to accommodate interference from . Concretely this means setting\nwhere is the covariance matrix of the dataset ; this coincides with performing linear discriminant analysis (Fisher, 1936 ###reference_b13###).666We prove in App. F ###reference_### that, given infinite data and a homoscedasticity assumption, coincides with the direction found by LR. Thus, one can view IID mass-mean probing as providing a way to select a good decision boundary while \u2013 unlike LR \u2013 also tracking a candidate feature direction which may be non-orthogonal to this decision boundary. App. E ###reference_### provides another interpretation of mass-mean probing in terms of Mahalanobis whitening. Finally, App.\nWe call the probes and mass-mean probes. As we will see, mass-mean probing is about as accurate for classification as LR, while also identifying directions which are more causally implicated in model outputs."
52
+ },
53
+ {
54
+ "section_id": "5.2",
55
+ "parent_section_id": "5",
56
+ "section_name": "Experimental set-up",
57
+ "text": "In this section, we measure the effect that choice of training data, probing technique, and model scale has on probe accuracy.\nFor training data, we use one of: cities, cities + neg_cities, larger_than, larger_than + smaller_than, or likely. By comparing probes trained on cities to probes trained on cities + neg_cities, we are able to measure the effect of increasing data diversity in a particular, targeted way: namely, we mitigate the effect of linearly-represented features which have opposite-sign correlations with the truth in cities and neg_cities. As in \u00a74 ###reference_###, we will extract activations at the most-downstream hidden state in group (b).\nOur probing techniques are logistic regression (LR), mass-mean probing (MM), and contrast-consistent search (CCS). CCS is an unsupervised method introduced in Burns et al. (2023 ###reference_b6###): given contrast pairs of statements with opposite truth values, CCS identifies a direction along which the representations of these statements are far apart. For our contrast pairs, we pair statements from cities and neg_cities, and from larger_than and smaller_than.\nFor test sets, we use all of our (curated and uncurated) true/false datasets. Given a training set , we train our probe on a random 80% split of . Then when evaluating accuracy on a test set , we use the remaining 20% of the data if and the full test set otherwise. For mass-mean probing, if , we use , and we use otherwise.\nFinally, we also include as baselines calibrated few-shot prompting777We first sweep over a number of shots and then resample a few -shot prompts to maximize performance. The word \u201ccalibrated\u201d means we selected a threshold for such that half of the statements are labeled true; this improves performance by a few percentage points. and \u2013 as an oracle baseline \u2013 LR on the test set."
58
+ },
59
+ {
60
+ "section_id": "5.3",
61
+ "parent_section_id": "5",
62
+ "section_name": "Results",
63
+ "text": "###figure_5### For each training set, probing technique, and model scale, we report the average accuracy across test sets. We expect many readers to be interested in the full results (including test set-specific accuracies), which are reported in App. D ###reference_###. Calibrated few-shot prompting was a surprisingly weak baseline, so we do not report it here (but see App. D ###reference_###).\nTraining on statements and their opposites improves generalization (Fig. 5 ###reference_###(a)). When passing from cities to cities+neg_cities, this effect is largely explained by improved generalization on neg_sp_en_trans, i.e. using training data containing the word \u201cnot\u201d improves generalization on other negated statements. On the other hand, passing from larger_than to larger_than+smaller_than also improves performance, despite both datasets being very structurally different from the rest of our datasets. As discussed in \u00a74.1 ###reference_###, this suggest that training on statements and their opposites mitigates the effect certain types of non-truth features have on the probe direction.\nProbes generalize better for larger models (Fig. 5 ###reference_###). While it is unsurprising that larger models are themselves better at labeling statements as true or false, it is not obvious that linear probes trained on larger models should also generalize better. Nevertheless, for LLaMA-2-13B and 70B, generalization is generally high; for example, no matter which probing technique is used, we find that probes trained on larger_than + smaller_than get accuracy on sp_en_trans. This corroborates our discussion in \u00a74.1 ###reference_###, in which we suggested that larger models linearly represent more general concepts concepts, like truth, which capture shared aspects of diverse inputs.\nMass-mean probes generalize about as well as other probing techniques for larger models (Fig. 5 ###reference_###(b)). While MM underperforms LR and CCS for LLaMA-2-7B, we find for larger models performance comparable to that of other probing techniques. Further, we will see in \u00a76 ###reference_### that the directions identified by MM are more causally implicated in model outputs.\nProbes trained on likely perform poorly (Fig. 5 ###reference_###(b)). The full results reveal that probes trained on likely are accurate when evaluated on some datasets, such as sp_en_trans where there is a strong () correlation between text probability and truth. However, on other datasets, especially those with anti-correlations between probability and truth, these probes perform worse than chance. Overall, this indicates that LLMs linearly represent truth-relevant information beyond the plausibility of text."
64
+ },
65
+ {
66
+ "section_id": "6",
67
+ "parent_section_id": null,
68
+ "section_name": "Causal intervention experiments",
69
+ "text": "In \u00a75 ###reference_### we measured the quality of linear probes in terms of their classification accuracy, both in- and out-of-distribution. In this section, we perform experiments which measure the extent to which these probes identify directions which are causally implicated in model outputs Finlayson et al. (2021 ###reference_b12###); Geva et al. (2023 ###reference_b17###); Geiger et al. (2021 ###reference_b15###). To do this, we will intervene in our model\u2019s computation by shifting the activations in group (b) (identified in \u00a73 ###reference_###) along the directions identified by our linear probes. Our goal is to cause LLMs to treat false statements appearing in context as true and vice versa. Crucially\u2014and in contrast to prior work (Li et al., 2023b ###reference_b23###)\u2014we evaluate our interventions on OOD inputs.\n###table_2###"
70
+ },
71
+ {
72
+ "section_id": "6.1",
73
+ "parent_section_id": "6",
74
+ "section_name": "Experimental set-up",
75
+ "text": "Let be a linear probe trained on a true/false dataset . Let be the probe direction, normalized so that where and are the mean representations of the true and false statements in , respectively; in other words, we normalize so that from the perspective of the probe , adding turns the average false statement into the average true statement. If our model encodes the truth value of statements along the direction , we would expect that replacing the representation of a false statement with would cause the model to produce outputs consistent with being a true statement.\nWe use inputs of the form\nThe Spanish word \u2018fruta\u2019 means \u2018goat\u2019. This statement is: FALSE \nThe Spanish word \u2018carne\u2019 means \u2018meat\u2019. This statement is: TRUE \ns. This statement is:\nwhere s varies over sp_en_trans statements. Then for each of the probes of \u00a75 ###reference_### we record:\nand , the average probability differences for varying over true statements or false statements in sp_en_trans, respectively,\nand , the average probability differences where varies over true (resp. false) statements but the probe direction is subtracted (resp. added) to each group (b) hidden state.\nFinally, we report the normalized indirect effects (NIEs)\nfor the falsetrue and the truefalse experiments, respectively. An NIE of means that the intervention was wholly ineffective at changing model outputs; an NIE of indicates that the intervention caused the LLM to label false statements as TRUE with as much confidence as genuine true statements, or vice versa."
76
+ },
77
+ {
78
+ "section_id": "6.2",
79
+ "parent_section_id": "6",
80
+ "section_name": "Results",
81
+ "text": "Results are shown in table 2 ###reference_###. We summarize our main takeaways.\nMass-mean probe directions are highly causal, with MM outperforming LR and CCS in 7/8 experimental conditions, often substantially. This is true despite LR, MM, and CCS probes all have very similar sp_en_trans classification accuracies.\nTraining on datasets and their opposites helps for cities but not for larger_than. This is surprising, considering that probes trained on larger_than + smaller_than are more accurate on sp_en_trans than probes trained on larger_than alone (see App. D ###reference_###), and indicates that there is more to be understood about how training on datasets and their opposites affects truth probes.\nTraining on likely is a surprisingly good baseline, though still weaker than interventions using truth probes. The performance here may be due to the strong correlation () between inputs being true and probable (according to LLaMA-2-70B) on sp_en_trans."
82
+ },
83
+ {
84
+ "section_id": "7",
85
+ "parent_section_id": null,
86
+ "section_name": "Discussion",
87
+ "text": ""
88
+ },
89
+ {
90
+ "section_id": "7.1",
91
+ "parent_section_id": "7",
92
+ "section_name": "Limitations and future work",
93
+ "text": "Our work has a number of limitations. First, we focus on simple, uncontroversial statements, and therefore cannot disambiguate truth from closely related features, such as \u201ccommonly believed\u201d or \u201cverifiable\u201d (Levinstein & Herrmann, 2023 ###reference_b20###). Second, we study only models in the LLaMA-2 family, so it is possible that some of our results do not apply for all LLMs.\nThis work also raises several questions which we were unable to answer here. For instance, why were interventions with mass-mean probe directions extracted from the likely dataset so effective, despite these probes not themselves being accurate at classifying true/false statements? And why did mass-mean probing with the cities + neg_cities training data perform poorly poorly for the 70B model, despite mass-mean probing with larger_than + smaller_than performing well?"
94
+ },
95
+ {
96
+ "section_id": "7.2",
97
+ "parent_section_id": "7",
98
+ "section_name": "Conclusion",
99
+ "text": "In this work we conduct a detailed investigation of the structure of LLM representations of truth. Drawing on simple visualizations, probing experiments, and causal evidence, we find evidence that at scale, LLMs compute and linearly represent the truth of true/false statements. We also localize truth representations to certain hidden states and introduce mass-mean probing, a simple alternative to other linear probing techniques which better identifies truth directions from true/false datasets."
100
+ }
101
+ ],
102
+ "appendix": [
103
+ {
104
+ "section_id": "Appendix 1",
105
+ "parent_section_id": null,
106
+ "section_name": "Appendix A Scoping of truth",
107
+ "text": "In this work, we consider declarative factual statements, for example \u201cEighty-one is larger than fifty-four\u201d or \u201cThe city of Denver is in Vietnam.\u201d We scope \u201ctruth\u201d to mean factuality, i.e. the truth or falsehood of these statements; for instance the examples given have truth values of true and false, respectively. To be clear, we list here some notions of \u201ctruth\u201d which we do not consider in this work:\nCorrect question answering (considered in Li et al. (2023b ###reference_b23###) and for some of the prompts used in Burns et al. (2023 ###reference_b6###)). For example, we do not consider \u201cWhat country is Paris in? France\u201d to have a truth value.\nPresence of deception, for example dishonest expressions of opinion (\u201cI like that plan\u201d).\nCompliance. For example, \u201cAnswer this question incorrectly: what country is Paris in? Paris is in Egypt\u201d is an example of compliance, even though the statement at the end of the text is false.\nMoreover, the statements under consideration in this work are all simple, unambiguous, and uncontroversial. Thus, we make no attempt to disambiguate \u201ctrue statements\u201d from closely-related notions like:\nUncontroversial statements\nStatements which are widely believed\nStatements which educated people believe.\nOn the other hand, our statements do disambiguate the notions of \u201ctrue statements\u201d and \u201cstatements which are likely to appear in training data\u201d; See our discussion at the end of \u00a72 ###reference_###."
108
+ },
109
+ {
110
+ "section_id": "Appendix 2",
111
+ "parent_section_id": null,
112
+ "section_name": "Appendix B Full patching results",
113
+ "text": "Fig. 6 ###reference_### shows full patching results. We see that both LLaMA-2-7B and LLaMA-2-13B display the \u201csummarization\u201d behavior in which information relevant to the full statement is represented over the end-of-sentence punctuation token. On the other hand, LLaMA-2-70B displays this behavior in a context-dependent way \u2013 we see it for cities but not for sp_en_trans.\n###figure_6###"
114
+ },
115
+ {
116
+ "section_id": "Appendix 3",
117
+ "parent_section_id": null,
118
+ "section_name": "Appendix C Emergence of linear structure across layers",
119
+ "text": "###figure_7### The linear structure observed in \u00a74 ###reference_### follows the following pattern: in early layers, representations are uninformative; then, in early middle layers, salient linear structure in the top few PCs rapidly emerges, with this structure emerging later for statements with a more complicated logical structure (e.g. conjunctions). This is shown for LLaMA-2-13B in Fig. 7 ###reference_###. We hypothesize that this is due to LLMs hierarchically developing understanding of their input data, progressing from surface level features to more abstract concepts.\nThe misalignment in Fig. 3 ###reference_###(c) also has an interesting dependence on layer. In Fig. 8 ###reference_### we visualize LLaMA-2-13B representations of cities and neg_cities at various layers. In early layers (left) we see antipodal alignment as in Fig. 3 ###reference_###(b, center). As we progress through layers, we see the axes of separation rotate to lie orthogonally, until they eventually align.\nOne interpretation of this is that in early layers, the model computed and linearly represented some feature (like \u201cclose association\u201d) which correlates with truth on both cities and neg_cities but with opposite signs. In later layers, the model computed and promoted to greater salience a more abstract concept which correlates with truth across both datasets.\n###figure_8###"
120
+ },
121
+ {
122
+ "section_id": "Appendix 4",
123
+ "parent_section_id": null,
124
+ "section_name": "Appendix D Full generalization results",
125
+ "text": "Here we present the full generalization results for probes trained on LLaMA-2-70B (Fig. 9 ###reference_###), 13B (Fig. 10 ###reference_###), and 7B (Fig. 11 ###reference_###). The horizontal axis shows the training data for the probe and the vertical axis shows the test set.\n###figure_9### ###figure_10### ###figure_11###"
126
+ },
127
+ {
128
+ "section_id": "Appendix 5",
129
+ "parent_section_id": null,
130
+ "section_name": "Appendix E Mass-mean probing in terms of Mahalanobis whitening",
131
+ "text": "###figure_12### One way to interpret the formula for the IID version of mass-mean probing is in terms of Mahalanobis whitening. Recall that if is a dataset of with covariance matrix , then the Mahalanobis whitening transformation satisfies the property that has covariance matrix given by the identity matrix, i.e. the whitened coordinates are uncorrelated with variance . Thus, noting that coincides with the inner product between and , we see that amounts to taking the projection onto after performing the change-of-basis given by . This is illustrated with hypothetical data in Fig. 12 ###reference_###."
132
+ },
133
+ {
134
+ "section_id": "Appendix 6",
135
+ "parent_section_id": null,
136
+ "section_name": "Appendix F For Gaussian data, IID mass-mean probing coincides with logistic regression on average",
137
+ "text": "Let and be a symmetric, positive-definite matrix. Suppose given access to a distribution of datapoints with binary labels such that the negative datapoints are distributed as and the positive datapoints are distributed as . Then the vector identified by mass-mean probing is . The following theorem then shows that is also the solution to logistic regression up to scaling.\nLet\nbe the direction identified by logistic regression. Then .\nSince the change of coordinates where (see App. E ###reference_###) sends to , we see that\nwhere is the distribution of labeled such that the positive/negative datapoints are distributed as . But the argmax on the right-hand side is clearly , so that as desired.\n\u220e"
138
+ },
139
+ {
140
+ "section_id": "Appendix 7",
141
+ "parent_section_id": null,
142
+ "section_name": "Appendix G Difference-in-means directions and linear concept erasure",
143
+ "text": "In this appendix, we explain the connection between difference-in-means directions and optimal erasure. One consequence of this connection is that it suggests a natural extension of difference-in-means probes to multi-class classification data.\nThe connection comes via the following theorem from Belrose et al. (2023 ###reference_b5###).\n(Belrose et al., 2023 ###reference_b5###, Thm. 3.1.) Let be jointly distributed random vectors with having finite mean and (representing one-hot encodings of a multi-class labels). Suppose that is a loss function convex in its first argument (e.g. cross-entropy loss).\nIf the class-conditional means for are all equal, then the best affine predictor (that is, a predictor of the form ) is constant .\nIn the case of a binary classification problem , this theorem implies that any nullity projection which eliminates linearly-recoverable information from has kernel\ngenerated by the difference-in-mean vector for the classes.\nFor a more general multi-class classification problem, one could similarly ask: What is the \u201cbest\u201d direction to project away in order to eliminate linearly-recoverable information from ? A natural choice is thus the top left singular vector of the cross-covariance matrix . (In the case of binary classification, we have that has column rank , making the top left singular vector.)"
144
+ },
145
+ {
146
+ "section_id": "Appendix 8",
147
+ "parent_section_id": null,
148
+ "section_name": "Appendix H Details on dataset creation",
149
+ "text": "Here we give example statements from our datasets, templates used for making the datasets, and other details regarding dataset creation.\ncities. We formed these statements from the template \u201cThe city of [city] is in [country]\u201d using a list of world cities from Geonames (2023 ###reference_b16###). We filtered for cities with populations , which did not share their name with any other listed city, which were located in a curated list of widely-recognized countries, and which were not city-states. For each city, we generated one true statement and one false statement, where the false statement was generated by sampling a false country with probability equal to the country\u2019s frequency among the true datapoints (this was to ensure that e.g. statements ending with \u201cChina\u201d were not disproportionately true). Example statements:\nThe city of Sevastopol is in Ukraine. (TRUE)\nThe city of Baghdad is in China. (FALSE)\nsp_en_trans. Beginning with a list of common Spanish words and their English translations, we formed statements from the template \u201cThe Spanish word \u2018[Spanish word]\u2019 means \u2018[English word]\u2019.\u201d Half of Spanish words were given their correct labels and half were given random incorrect labels from English words in the dataset. The first author, a Spanish speaker, then went through the dataset by hand and deleted examples with Spanish words that have multiple viable translations or were otherwise ambiguous. Example statements:\nThe Spanish word \u2018imaginar\u2019 means \u2018to imagine\u2019. (TRUE)\nThe Spanish word \u2018silla\u2019 means \u2018neighbor\u2019. (FALSE)\nlarger_than and smaller_than. We generate these statements from the templates \u201cx is larger than y\u201d and \u201cx is smaller than y\u201d for . We exclude cases where or where one of x or y is divisible by . We chose to limit the range of possible values in this way for the sake of visualization: we found that LLaMA-13B linearly represents the size of numbers, but not at a consistent scale: the internally represented difference between one and ten is considerably larger than between fifty and sixty. Thus, when visualizing statements with numbers ranging to one, the top principal components are dominated by features representing the sizes of numbers.\nneg_cities and neg_sp_en_trans. We form these datasets by negating statements from cities and sp_en_trans according to the templates \u201cThe city of [city] is not in [country]\u201d and \u201c\u2018The Spanish word \u2018[Spanish word]\u2019 does not mean \u2018[English word]\u2019.\u201d\ncities_cities_conj and cities_cities_disj. These datasets are generated from cities according to the following templates:\nIt is the case both that [statement 1] and that [statement 2].\nIt is the case either that [statement 1] or that [statement 2].\nWe sample the two statements independently to be true with probability for cities_cities_conj and with probability for cities_cities_disj. These probabilities are selected to ensure that the overall dataset is balanced between true and false statements, but that there is no correlation between the truth of the first and second statement in the conjunction.\nlikely. We generate this dataset by having LLaMA-13B produce unconditioned generations of length up to tokens, using temperature . At the final token of the generation, we either sample the most likely token or the 100th most likely final token. We remove generations which contain special tokens. 
Dataset examples:\nThe 2019-2024 Outlook for Women\u2019s and Girls\u2019 Cut and Sew and Knit and Crochet Sweaters in the United States This study covers the latent demand outlook for (LIKELY)\nTags: python, django Question: How to get my django app to work with python 3.7 I am new to django and have been trying to install it in my pc. I have installed python 3.7 together (UNLIKELY)\ncompanies_true_false. This dataset was introduced by Azaria & Mitchell (2023 ###reference_b3###); we obtained it via the project repository for Levinstein & Herrmann (2023 ###reference_b20###) which also used the dataset. Example statements:\nArcelorMittal has headquarters in Luxembourg. (TRUE)\nExxon Mobil engages in the provision of banking and financial services. (FALSE)\ncommon_claim_true_false. CommonClaim was introduced in Casper et al. (2023 ###reference_b7###). It consists of various statements generated by GPT-3-davinci-002, labeled by humans as being true, false, or neither. If human labelers disagreed on the truth of a statement, this is also recorded. We adapted CommonClaim by selecting statements which were labeled true or false with no labeler disagreement, then removing excess true statements to balance the dataset. Example statements:\nTomatoes are not actually a vegetable. (TRUE)\nContrary to popular belief, the platypuses are not venomous. (FALSE)\nAs these examples show, the statements can be ambiguous or of unclear truth value.\ncounterfact_true_false. Counterfact was introduced in Meng et al. (2022 ###reference_b25###) and consists of factual recall statements. We adapt Counterfact by using statements which form complete sentences and, for each such statement, using both the true version and a false version given by one of Counterfact\u2019s suggested false modifications. We also append a period to the end. Example statements:\nOlaus Rudbeck spoke the language Swedish. (TRUE)\nThe official religion of Malacca sultanate is Christianity. (FALSE)"
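For illustration, a hypothetical sketch of template-based generation for cities, with false countries sampled by frequency as described above; the actual generation scripts in the repository may differ.

```python
import random

def make_city_statements(city_to_country: dict[str, str], seed: int = 0):
    """One true and one false statement per city; false countries are drawn
    uniformly from the multiset of true countries, i.e. by frequency."""
    rng = random.Random(seed)
    countries = list(city_to_country.values())
    rows = []
    for city, country in city_to_country.items():
        rows.append((f"The city of {city} is in {country}.", True))
        false_country = rng.choice(countries)
        while false_country == country:          # assumes >1 distinct country
            false_country = rng.choice(countries)
        rows.append((f"The city of {city} is in {false_country}.", False))
    return rows

rows = make_city_statements({"Tokyo": "Japan", "Hanoi": "Vietnam", "Chicago": "United States"})
```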
150
+ }
151
+ ],
152
+ "tables": {
153
+ "1": {
154
+ "table_html": "<figure class=\"ltx_table\" id=\"S2.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Our datasets</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S2.T1.4.4\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S2.T1.4.4.5.1\">\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.4.4.5.1.1\">Name</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.4.4.5.1.2\">Description</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S2.T1.4.4.5.1.3\">Rows</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.4.6.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.4.4.6.2.1\"><span class=\"ltx_text ltx_font_typewriter\" id=\"S2.T1.4.4.6.2.1.1\">cities</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.4.4.6.2.2\">\u201cThe city of <span class=\"ltx_text ltx_font_typewriter\" id=\"S2.T1.4.4.6.2.2.1\">[city]</span> is in <span class=\"ltx_text ltx_font_typewriter\" id=\"S2.T1.4.4.6.2.2.2\">[country]</span>.\u201d</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S2.T1.4.4.6.2.3\">1496</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.4.7.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.4.4.7.3.1\"><span class=\"ltx_text ltx_font_typewriter\" id=\"S2.T1.4.4.7.3.1.1\">neg_cities</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.4.4.7.3.2\">Negations of statements in <span class=\"ltx_text ltx_font_typewriter\" id=\"S2.T1.4.4.7.3.2.1\">cities</span> with \u201cnot\u201d</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S2.T1.4.4.7.3.3\">1496</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.4.8.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.4.4.8.4.1\"><span class=\"ltx_text ltx_font_typewriter\" id=\"S2.T1.4.4.8.4.1.1\">sp_en_trans</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.4.4.8.4.2\">\u201cThe Spanish word \u2018<span class=\"ltx_text ltx_font_typewriter\" id=\"S2.T1.4.4.8.4.2.1\">[word]</span>\u2019 means \u2018<span class=\"ltx_text ltx_font_typewriter\" id=\"S2.T1.4.4.8.4.2.2\">[English word]</span>\u2019.\u201d</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S2.T1.4.4.8.4.3\">354</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.4.9.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.4.4.9.5.1\"><span class=\"ltx_text ltx_font_typewriter\" id=\"S2.T1.4.4.9.5.1.1\">neg_sp_en_trans</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.4.4.9.5.2\">Negations of statements in <span class=\"ltx_text ltx_font_typewriter\" id=\"S2.T1.4.4.9.5.2.1\">sp_en_trans</span> with \u201cnot\u201d</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S2.T1.4.4.9.5.3\">354</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.2.2.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.2.2.2.3\"><span class=\"ltx_text ltx_font_typewriter\" id=\"S2.T1.2.2.2.3.1\">larger_than</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.2.2.2.2\">\u201c is larger than .\u201d</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S2.T1.2.2.2.4\">1980</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.4.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.4.4.4.3\"><span class=\"ltx_text ltx_font_typewriter\" id=\"S2.T1.4.4.4.3.1\">smaller_than</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.4.4.4.2\">\u201c is smaller than .\u201d</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S2.T1.4.4.4.4\">1980</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.4.10.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.4.4.10.6.1\"><span class=\"ltx_text ltx_font_typewriter\" 
id=\"S2.T1.4.4.10.6.1.1\">cities_cities_conj</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.4.4.10.6.2\">Conjunctions of two statements in <span class=\"ltx_text ltx_font_typewriter\" id=\"S2.T1.4.4.10.6.2.1\">cities</span> with \u201cand\u201d</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S2.T1.4.4.10.6.3\">1500</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.4.11.7\">\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.4.4.11.7.1\"><span class=\"ltx_text ltx_font_typewriter\" id=\"S2.T1.4.4.11.7.1.1\">cities_cities_disj</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.4.4.11.7.2\">Disjunctions of two statements in <span class=\"ltx_text ltx_font_typewriter\" id=\"S2.T1.4.4.11.7.2.1\">cities</span> with \u201cor\u201d</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S2.T1.4.4.11.7.3\">1500</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.4.12.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.4.4.12.8.1\"><span class=\"ltx_text ltx_font_typewriter\" id=\"S2.T1.4.4.12.8.1.1\">companies_true_false</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.4.4.12.8.2\">Claims about companies; from <cite class=\"ltx_cite ltx_citemacro_citet\">Azaria &amp; Mitchell (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.06824v3#bib.bib3\" title=\"\">2023</a>)</cite>\n</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S2.T1.4.4.12.8.3\">1200</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.4.13.9\">\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.4.4.13.9.1\"><span class=\"ltx_text ltx_font_typewriter\" id=\"S2.T1.4.4.13.9.1.1\">common_claim_true_false</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.4.4.13.9.2\">Various claims; from <cite class=\"ltx_cite ltx_citemacro_citet\">Casper et\u00a0al. (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.06824v3#bib.bib7\" title=\"\">2023</a>)</cite>\n</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S2.T1.4.4.13.9.3\">4450</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.4.14.10\">\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.4.4.14.10.1\"><span class=\"ltx_text ltx_font_typewriter\" id=\"S2.T1.4.4.14.10.1.1\">counterfact_true_false</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.4.4.14.10.2\">Various factual recall claims; from <cite class=\"ltx_cite ltx_citemacro_cite\">Meng et\u00a0al. (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.06824v3#bib.bib25\" title=\"\">2022</a>)</cite>\n</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S2.T1.4.4.14.10.3\">31960</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.4.15.11\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.4.4.15.11.1\"><span class=\"ltx_text ltx_font_typewriter\" id=\"S2.T1.4.4.15.11.1.1\">likely</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.4.4.15.11.2\">Nonfactual text with likely or unlikely final tokens</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S2.T1.4.4.15.11.3\">10000</td>\n</tr>\n</tbody>\n</table>\n</figure>",
155
+ "capture": "Table 1: Our datasets"
156
+ },
157
+ "2": {
158
+ "table_html": "<figure class=\"ltx_table\" id=\"S6.T2\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>NIEs for intervention experiments, averaged over statements from <span class=\"ltx_text ltx_font_typewriter\" id=\"S6.T2.6.1\">sp_en_trans</span>.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S6.T2.4.4\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S6.T2.4.4.5.1\">\n<td class=\"ltx_td\" id=\"S6.T2.4.4.5.1.1\"></td>\n<td class=\"ltx_td ltx_border_rr\" id=\"S6.T2.4.4.5.1.2\"></td>\n<td class=\"ltx_td ltx_align_center\" colspan=\"2\" id=\"S6.T2.4.4.5.1.3\">LLaMA-2-13B</td>\n<td class=\"ltx_td ltx_align_center\" colspan=\"2\" id=\"S6.T2.4.4.5.1.4\">LLaMA-2-70B</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.4.4.4\">\n<td class=\"ltx_td ltx_align_right\" id=\"S6.T2.4.4.4.5\">train set</td>\n<td class=\"ltx_td ltx_align_right ltx_border_rr\" id=\"S6.T2.4.4.4.6\">probe</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T2.1.1.1.1\">falsetrue</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S6.T2.2.2.2.2\">truefalse</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T2.3.3.3.3\">falsetrue</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.4.4.4.4\">truefalse</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.4.4.6.2\">\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S6.T2.4.4.6.2.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_typewriter\" id=\"S6.T2.4.4.6.2.1.1\">cities</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_rr ltx_border_t\" id=\"S6.T2.4.4.6.2.2\">LR</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T2.4.4.6.2.3\">.13</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_t\" id=\"S6.T2.4.4.6.2.4\">.19</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T2.4.4.6.2.5\">.55</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T2.4.4.6.2.6\">.99</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.4.4.7.3\">\n<td class=\"ltx_td ltx_align_right ltx_border_rr\" id=\"S6.T2.4.4.7.3.1\">MM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T2.4.4.7.3.2\">.77</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S6.T2.4.4.7.3.3\">.90</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T2.4.4.7.3.4\">.58</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.4.4.7.3.5\">.89</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.4.4.8.4\">\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S6.T2.4.4.8.4.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S6.T2.4.4.8.4.1.1\">\n<span class=\"ltx_inline-block ltx_align_right\" id=\"S6.T2.4.4.8.4.1.1.1\">\n<span class=\"ltx_p\" id=\"S6.T2.4.4.8.4.1.1.1.1\"><span class=\"ltx_text ltx_font_typewriter\" id=\"S6.T2.4.4.8.4.1.1.1.1.1\">cities+</span></span>\n<span class=\"ltx_p\" id=\"S6.T2.4.4.8.4.1.1.1.2\"><span class=\"ltx_text ltx_font_typewriter\" id=\"S6.T2.4.4.8.4.1.1.1.2.1\">neg_cities</span></span>\n</span></span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_rr ltx_border_t\" id=\"S6.T2.4.4.8.4.2\">LR</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T2.4.4.8.4.3\">.33</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_t\" id=\"S6.T2.4.4.8.4.4\">.52</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T2.4.4.8.4.5\">.61</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T2.4.4.8.4.6\"><span class=\"ltx_text 
ltx_font_bold\" id=\"S6.T2.4.4.8.4.6.1\">1.00</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.4.4.9.5\">\n<td class=\"ltx_td ltx_align_right ltx_border_rr\" id=\"S6.T2.4.4.9.5.1\">MM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T2.4.4.9.5.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T2.4.4.9.5.2.1\">.85</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S6.T2.4.4.9.5.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T2.4.4.9.5.3.1\">.97</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T2.4.4.9.5.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T2.4.4.9.5.4.1\">.81</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.4.4.9.5.5\">.95</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.4.4.10.6\">\n<td class=\"ltx_td ltx_align_right ltx_border_rr\" id=\"S6.T2.4.4.10.6.1\">CCS</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T2.4.4.10.6.2\">.31</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S6.T2.4.4.10.6.3\">.73</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T2.4.4.10.6.4\">.55</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.4.4.10.6.5\">.96</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.4.4.11.7\">\n<td class=\"ltx_td ltx_align_right ltx_border_tt\" id=\"S6.T2.4.4.11.7.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_typewriter\" id=\"S6.T2.4.4.11.7.1.1\">larger_than</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_rr ltx_border_tt\" id=\"S6.T2.4.4.11.7.2\">LR</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S6.T2.4.4.11.7.3\">.28</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_tt\" id=\"S6.T2.4.4.11.7.4\">.27</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S6.T2.4.4.11.7.5\">.61</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S6.T2.4.4.11.7.6\">.96</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.4.4.12.8\">\n<td class=\"ltx_td ltx_align_right ltx_border_rr\" id=\"S6.T2.4.4.12.8.1\">MM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T2.4.4.12.8.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T2.4.4.12.8.2.1\">.71</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S6.T2.4.4.12.8.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T2.4.4.12.8.3.1\">.79</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T2.4.4.12.8.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T2.4.4.12.8.4.1\">.67</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.4.4.12.8.5\">1.01</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.4.4.13.9\">\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S6.T2.4.4.13.9.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S6.T2.4.4.13.9.1.1\">\n<span class=\"ltx_inline-block ltx_align_right\" id=\"S6.T2.4.4.13.9.1.1.1\">\n<span class=\"ltx_p\" id=\"S6.T2.4.4.13.9.1.1.1.1\"><span class=\"ltx_text ltx_font_typewriter\" id=\"S6.T2.4.4.13.9.1.1.1.1.1\">larger_than+</span></span>\n<span class=\"ltx_p\" id=\"S6.T2.4.4.13.9.1.1.1.2\"><span class=\"ltx_text ltx_font_typewriter\" id=\"S6.T2.4.4.13.9.1.1.1.2.1\">smaller_than</span></span>\n</span></span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_rr ltx_border_t\" id=\"S6.T2.4.4.13.9.2\">LR</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T2.4.4.13.9.3\">.07</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_t\" id=\"S6.T2.4.4.13.9.4\">.13</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r 
ltx_border_t\" id=\"S6.T2.4.4.13.9.5\">.54</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T2.4.4.13.9.6\">1.02</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.4.4.14.10\">\n<td class=\"ltx_td ltx_align_right ltx_border_rr\" id=\"S6.T2.4.4.14.10.1\">MM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T2.4.4.14.10.2\">.26</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S6.T2.4.4.14.10.3\">.53</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T2.4.4.14.10.4\">.66</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.4.4.14.10.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T2.4.4.14.10.5.1\">1.03</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.4.4.15.11\">\n<td class=\"ltx_td ltx_align_right ltx_border_rr\" id=\"S6.T2.4.4.15.11.1\">CCS</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T2.4.4.15.11.2\">.08</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S6.T2.4.4.15.11.3\">.17</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T2.4.4.15.11.4\">.57</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.4.4.15.11.5\">1.02</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.4.4.16.12\">\n<td class=\"ltx_td ltx_align_right ltx_border_tt\" id=\"S6.T2.4.4.16.12.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_typewriter\" id=\"S6.T2.4.4.16.12.1.1\">likely</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_rr ltx_border_tt\" id=\"S6.T2.4.4.16.12.2\">LR</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S6.T2.4.4.16.12.3\">.05</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_tt\" id=\"S6.T2.4.4.16.12.4\">.08</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S6.T2.4.4.16.12.5\">.18</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S6.T2.4.4.16.12.6\">.46</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.4.4.17.13\">\n<td class=\"ltx_td ltx_align_right ltx_border_rr\" id=\"S6.T2.4.4.17.13.1\">MM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T2.4.4.17.13.2\">.70</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S6.T2.4.4.17.13.3\">.54</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T2.4.4.17.13.4\">.68</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.4.4.17.13.5\">.27</td>\n</tr>\n</tbody>\n</table>\n</figure>",
159
+ "capture": "Table 2: NIEs for intervention experiments, averaged over statements from sp_en_trans."
160
+ }
161
+ },
162
+ "image_paths": {
163
+ "1": {
164
+ "figure_path": "2310.06824v3_figure_1.png",
165
+ "caption": "Figure 1: PCA visualizations for LLaMA-2-70B representations of our true/false datasets.",
166
+ "url": "http://arxiv.org/html/2310.06824v3/x1.png"
167
+ },
168
+ "2": {
169
+ "figure_path": "2310.06824v3_figure_2.png",
170
+ "caption": "Figure 2: Difference log\u2061P\u2062(TRUE)\u2212log\u2061P\u2062(FALSE)\ud835\udc43TRUE\ud835\udc43FALSE\\log P(\\texttt{TRUE})-\\log P(\\texttt{FALSE})roman_log italic_P ( TRUE ) - roman_log italic_P ( FALSE ) in LLaMA-2-13B log probabilities after patching residual stream activation in the indicated token position and layer.",
171
+ "url": "http://arxiv.org/html/2310.06824v3/x2.png"
172
+ },
173
+ "3": {
174
+ "figure_path": "2310.06824v3_figure_3.png",
175
+ "caption": "Figure 3: (a) Projections of LLaMA-2-13B onto the top 2 PCs of cities. (b) PCA visualizations of larger_than+smaller_than. For LLaMA-2-7B (left), we see statements cluster according to surface-level characteristics, e.g. presence of the token \u201ceighty.\u201d For LLaMA-2-13B, we see that larger_than (center, top) and smaller_than (center, bottom) separate along opposite directions. (c) PCA visualizations of datasets and their negations. Unlike in other visualizations, we use layer 12121212 for cities+neg_cities; see App. C for an exploration of this misalignment emerging and resolving across layers.",
176
+ "url": "http://arxiv.org/html/2310.06824v3/x3.png"
177
+ },
178
+ "4": {
179
+ "figure_path": "2310.06824v3_figure_4.png",
180
+ "caption": "Figure 4: An illustration of a weakness of logistic regression.",
181
+ "url": "http://arxiv.org/html/2310.06824v3/x4.png"
182
+ },
183
+ "5": {
184
+ "figure_path": "2310.06824v3_figure_5.png",
185
+ "caption": "Figure 5: (a) Average accuracies over all datasets aside from those used for training. (b) Accuracies of probes for varying model scales and training data, averaged over all test sets.",
186
+ "url": "http://arxiv.org/html/2310.06824v3/x5.png"
187
+ },
188
+ "6": {
189
+ "figure_path": "2310.06824v3_figure_6.png",
190
+ "caption": "Figure 6: Full patching results across all three model sizes and inputs. Results are for patching false inputs (shown) to true by changing the first token shown on the left. Numbers in parentheses are the index of the token in the full (few-shot) prompt.",
191
+ "url": "http://arxiv.org/html/2310.06824v3/x6.png"
192
+ },
193
+ "7": {
194
+ "figure_path": "2310.06824v3_figure_7.png",
195
+ "caption": "Figure 7: Projections of LLaMA-2-13B representations of datasets onto their top two PCs, across various layers.",
196
+ "url": "http://arxiv.org/html/2310.06824v3/x7.png"
197
+ },
198
+ "8": {
199
+ "figure_path": "2310.06824v3_figure_8.png",
200
+ "caption": "Figure 8: PCA visualizations of LLaMA-2-13B representations of cities and neg_cities at various layers.",
201
+ "url": "http://arxiv.org/html/2310.06824v3/x8.png"
202
+ },
203
+ "9": {
204
+ "figure_path": "2310.06824v3_figure_9.png",
205
+ "caption": "Figure 9: Generalization results for LLaMA-2-70B.",
206
+ "url": "http://arxiv.org/html/2310.06824v3/x9.png"
207
+ },
208
+ "10": {
209
+ "figure_path": "2310.06824v3_figure_10.png",
210
+ "caption": "Figure 10: Generalization results for LLaMA-2-13B.",
211
+ "url": "http://arxiv.org/html/2310.06824v3/x10.png"
212
+ },
213
+ "11": {
214
+ "figure_path": "2310.06824v3_figure_11.png",
215
+ "caption": "Figure 11: Generalization results for LLaMA-2-7B.",
216
+ "url": "http://arxiv.org/html/2310.06824v3/x11.png"
217
+ },
218
+ "12": {
219
+ "figure_path": "2310.06824v3_figure_12.png",
220
+ "caption": "Figure 12: Mass-mean probing is equivalent to taking the projection onto \ud835\udf3dmmsubscript\ud835\udf3dmm{\\bm{\\theta}}_{\\mathrm{mm}}bold_italic_\u03b8 start_POSTSUBSCRIPT roman_mm end_POSTSUBSCRIPT after applying a whitening transformation.",
221
+ "url": "http://arxiv.org/html/2310.06824v3/extracted/5798819/images/whitening.png"
222
+ }
223
+ },
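To make the mass-mean probing idea from the Figure 12 caption concrete, here is a minimal numpy sketch: the probe direction as a difference of class means and whitening via the pooled covariance follow the caption's description, while the synthetic activations, dimensions, and variable names are illustrative assumptions, not the paper's code.

```python
import numpy as np

# Minimal sketch of mass-mean probing as described in the Figure 12 caption.
# The synthetic "activations" are stand-ins; only two steps matter:
# (1) probe direction = difference of class means,
# (2) whiten with the pooled covariance before projecting.
rng = np.random.default_rng(0)
d = 16
acts_true = rng.normal(1.0, 1.0, size=(200, d))    # activations of true statements
acts_false = rng.normal(-1.0, 1.0, size=(200, d))  # activations of false statements

theta_mm = acts_true.mean(axis=0) - acts_false.mean(axis=0)
sigma = np.cov(np.vstack([acts_true, acts_false]), rowvar=False)

def mm_probe_score(x: np.ndarray) -> float:
    # Projecting the whitened point onto the whitened direction equals
    # x @ Sigma^{-1} @ theta_mm, since Sigma is symmetric.
    return float(x @ np.linalg.solve(sigma, theta_mm))

print(mm_probe_score(acts_true[0]) > mm_probe_score(acts_false[0]))  # typically True
```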
224
+ "validation": true,
225
+ "references": [
226
+ {
227
+ "1": {
228
+ "title": "Can language models encode perceptual structure without grounding? a case study in color, 2021.",
229
+ "author": "Mostafa Abdou, Artur Kulmizev, Daniel Hershcovich, Stella Frank, Ellie Pavlick, and Anders S\u00f8gaard.",
230
+ "venue": null,
231
+ "url": null
232
+ }
233
+ },
234
+ {
235
+ "2": {
236
+ "title": "Understanding intermediate layers using linear classifier probes, 2018.",
237
+ "author": "Guillaume Alain and Yoshua Bengio.",
238
+ "venue": null,
239
+ "url": null
240
+ }
241
+ },
242
+ {
243
+ "3": {
244
+ "title": "The internal state of an llm knows when its lying, 2023.",
245
+ "author": "Amos Azaria and Tom Mitchell.",
246
+ "venue": null,
247
+ "url": null
248
+ }
249
+ },
250
+ {
251
+ "4": {
252
+ "title": "Understanding the role of individual units in a deep neural network.",
253
+ "author": "David Bau, Jun-Yan Zhu, Hendrik Strobelt, Agata Lapedriza, Bolei Zhou, and Antonio Torralba.",
254
+ "venue": "Proceedings of the National Academy of Sciences, 2020.",
255
+ "url": null
256
+ }
257
+ },
258
+ {
259
+ "5": {
260
+ "title": "LEACE: Perfect linear concept erasure in closed form.",
261
+ "author": "Nora Belrose, David Schneider-Joseph, Shauli Ravfogel, Ryan Cotterell, Edward Raff, and Stella Biderman.",
262
+ "venue": "In Thirty-seventh Conference on Neural Information Processing Systems, 2023.",
263
+ "url": null
264
+ }
265
+ },
266
+ {
267
+ "6": {
268
+ "title": "Discovering latent knowledge in language models without supervision.",
269
+ "author": "Collin Burns, Haotian Ye, Dan Klein, and Jacob Steinhardt.",
270
+ "venue": "In The Eleventh International Conference on Learning Representations, 2023.",
271
+ "url": null
272
+ }
273
+ },
274
+ {
275
+ "7": {
276
+ "title": "Explore, establish, exploit: Red teaming language models from scratch, 2023.",
277
+ "author": "Stephen Casper, Jason Lin, Joe Kwon, Gatlen Culp, and Dylan Hadfield-Menell.",
278
+ "venue": null,
279
+ "url": null
280
+ }
281
+ },
282
+ {
283
+ "8": {
284
+ "title": "Eliciting latent knowledge: How to tell if your eyes deceive you, 2021.",
285
+ "author": "Paul Christiano, Ajeya Cotra, and Mark Xu.",
286
+ "venue": "URL https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit#heading=h.jrzi4atzacns.",
287
+ "url": null
288
+ }
289
+ },
290
+ {
291
+ "9": {
292
+ "title": "Sparse autoencoders find highly interpretable features in language models, 2023.",
293
+ "author": "Hoagy Cunningham, Aidan Ewart, Logan Riggs, Robert Huben, and Lee Sharkey.",
294
+ "venue": null,
295
+ "url": null
296
+ }
297
+ },
298
+ {
299
+ "10": {
300
+ "title": "What is one grain of sand in the desert? analyzing individual neurons in deep nlp models.",
301
+ "author": "Fahim Dalvi, Nadir Durrani, Hassan Sajjad, Yonatan Belinkov, Anthony Bau, and James R. Glass.",
302
+ "venue": "In AAAI Conference on Artificial Intelligence, 2018.",
303
+ "url": null
304
+ }
305
+ },
306
+ {
307
+ "11": {
308
+ "title": "Toy models of superposition.",
309
+ "author": "Nelson Elhage, Tristan Hume, Catherine Olsson, Nicholas Schiefer, Tom Henighan, Shauna Kravec, Zac Hatfield-Dodds, Robert Lasenby, Dawn Drain, Carol Chen, Roger Grosse, Sam McCandlish, Jared Kaplan, Dario Amodei, Martin Wattenberg, and Christopher Olah.",
310
+ "venue": "Transformer Circuits Thread, 2022.",
311
+ "url": null
312
+ }
313
+ },
314
+ {
315
+ "12": {
316
+ "title": "Causal analysis of syntactic agreement mechanisms in neural language models.",
317
+ "author": "Matthew Finlayson, Aaron Mueller, Sebastian Gehrmann, Stuart Shieber, Tal Linzen, and Yonatan Belinkov.",
318
+ "venue": "In Chengqing Zong, Fei Xia, Wenjie Li, and Roberto Navigli (eds.), Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 1828\u20131843, Online, August 2021. Association for Computational Linguistics.",
319
+ "url": null
320
+ }
321
+ },
322
+ {
323
+ "13": {
324
+ "title": "The use of multiple measurements in taxonomic problems.",
325
+ "author": "R. A. Fisher.",
326
+ "venue": "Annals of Eugenics, 7(2):179\u2013188, 1936.",
327
+ "url": null
328
+ }
329
+ },
330
+ {
331
+ "14": {
332
+ "title": "Neural natural language inference models partially embed theories of lexical entailment and negation.",
333
+ "author": "Atticus Geiger, Kyle Richardson, and Christopher Potts.",
334
+ "venue": "In Afra Alishahi, Yonatan Belinkov, Grzegorz Chrupala, Dieuwke Hupkes, Yuval Pinter, and Hassan Sajjad (eds.), Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, BlackboxNLP@EMNLP 2020, Online, November 2020, pp. 163\u2013173. Association for Computational Linguistics, 2020.",
335
+ "url": null
336
+ }
337
+ },
338
+ {
339
+ "15": {
340
+ "title": "Causal abstractions of neural networks.",
341
+ "author": "Atticus Geiger, Hanson Lu, Thomas Icard, and Christopher Potts.",
342
+ "venue": "In Marc\u2019Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan (eds.), Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pp. 9574\u20139586, 2021.",
343
+ "url": null
344
+ }
345
+ },
346
+ {
347
+ "16": {
348
+ "title": "All cities with a population 1000, 2023.",
349
+ "author": "Geonames.",
350
+ "venue": "URL https://download.geonames.org/export/dump/.",
351
+ "url": null
352
+ }
353
+ },
354
+ {
355
+ "17": {
356
+ "title": "Dissecting recall of factual associations in auto-regressive language models, 2023.",
357
+ "author": "Mor Geva, Jasmijn Bastings, Katja Filippova, and Amir Globerson.",
358
+ "venue": null,
359
+ "url": null
360
+ }
361
+ },
362
+ {
363
+ "18": {
364
+ "title": "Multimodal neurons in artificial neural networks.",
365
+ "author": "Gabriel Goh, Nick Cammarata \u2020, Chelsea Voss \u2020, Shan Carter, Michael Petrov, Ludwig Schubert, Alec Radford, and Chris Olah.",
366
+ "venue": "Distill, 2021.",
367
+ "url": null
368
+ }
369
+ },
370
+ {
371
+ "19": {
372
+ "title": "Finding neurons in a haystack: Case studies with sparse probing, 2023.",
373
+ "author": "Wes Gurnee, Neel Nanda, Matthew Pauly, Katherine Harvey, Dmitrii Troitskii, and Dimitris Bertsimas.",
374
+ "venue": null,
375
+ "url": null
376
+ }
377
+ },
378
+ {
379
+ "20": {
380
+ "title": "Still no lie detector for language models: Probing empirical and conceptual roadblocks, 2023.",
381
+ "author": "B. A. Levinstein and Daniel A. Herrmann.",
382
+ "venue": null,
383
+ "url": null
384
+ }
385
+ },
386
+ {
387
+ "21": {
388
+ "title": "Implicit representations of meaning in neural language models.",
389
+ "author": "Belinda Z. Li, Maxwell Nye, and Jacob Andreas.",
390
+ "venue": "In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 1813\u20131827, Online, August 2021. Association for Computational Linguistics.",
391
+ "url": null
392
+ }
393
+ },
394
+ {
395
+ "22": {
396
+ "title": "Emergent world representations: Exploring a sequence model trained on a synthetic task.",
397
+ "author": "Kenneth Li, Aspen K Hopkins, David Bau, Fernanda Vi\u00e9gas, Hanspeter Pfister, and Martin Wattenberg.",
398
+ "venue": "In The Eleventh International Conference on Learning Representations, 2023a.",
399
+ "url": null
400
+ }
401
+ },
402
+ {
403
+ "23": {
404
+ "title": "Inference-time intervention: Eliciting truthful answers from a language model, 2023b.",
405
+ "author": "Kenneth Li, Oam Patel, Fernanda Vi\u00e9gas, Hanspeter Pfister, and Martin Wattenberg.",
406
+ "venue": null,
407
+ "url": null
408
+ }
409
+ },
410
+ {
411
+ "24": {
412
+ "title": "TruthfulQA: Measuring how models mimic human falsehoods.",
413
+ "author": "Stephanie Lin, Jacob Hilton, and Owain Evans.",
414
+ "venue": "In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 3214\u20133252, Dublin, Ireland, May 2022. Association for Computational Linguistics.",
415
+ "url": null
416
+ }
417
+ },
418
+ {
419
+ "25": {
420
+ "title": "Locating and editing factual associations in GPT.",
421
+ "author": "Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov.",
422
+ "venue": "Advances in Neural Information Processing Systems, 36, 2022.",
423
+ "url": null
424
+ }
425
+ },
426
+ {
427
+ "26": {
428
+ "title": "CREAK: A dataset for commonsense reasoning over entity knowledge.",
429
+ "author": "Yasumasa Onoe, Michael JQ Zhang, Eunsol Choi, and Greg Durrett.",
430
+ "venue": "In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021.",
431
+ "url": null
432
+ }
433
+ },
434
+ {
435
+ "27": {
436
+ "title": "Gpt-4 technical report, 2023.",
437
+ "author": "OpenAI.",
438
+ "venue": null,
439
+ "url": null
440
+ }
441
+ },
442
+ {
443
+ "28": {
444
+ "title": "Ai deception: A survey of examples, risks, and potential solutions, 2023.",
445
+ "author": "Peter S. Park, Simon Goldstein, Aidan O\u2019Gara, Michael Chen, and Dan Hendrycks.",
446
+ "venue": null,
447
+ "url": null
448
+ }
449
+ },
450
+ {
451
+ "29": {
452
+ "title": "Mapping language models to grounded conceptual spaces.",
453
+ "author": "Roma Patel and Ellie Pavlick.",
454
+ "venue": "In International Conference on Learning Representations, 2022.",
455
+ "url": null
456
+ }
457
+ },
458
+ {
459
+ "30": {
460
+ "title": "Discovering language model behaviors with model-written evaluations, 2022.",
461
+ "author": "Ethan Perez, Sam Ringer, Kamil\u0117 Luko\u0161i\u016bt\u0117, Karina Nguyen, Edwin Chen, Scott Heiner, Craig Pettit, Catherine Olsson, Sandipan Kundu, Saurav Kadavath, Andy Jones, Anna Chen, Ben Mann, Brian Israel, Bryan Seethor, Cameron McKinnon, Christopher Olah, Da Yan, Daniela Amodei, Dario Amodei, Dawn Drain, Dustin Li, Eli Tran-Johnson, Guro Khundadze, Jackson Kernion, James Landis, Jamie Kerr, Jared Mueller, Jeeyoon Hyun, Joshua Landau, Kamal Ndousse, Landon Goldberg, Liane Lovitt, Martin Lucas, Michael Sellitto, Miranda Zhang, Neerav Kingsland, Nelson Elhage, Nicholas Joseph, Noem\u00ed Mercado, Nova DasSarma, Oliver Rausch, Robin Larson, Sam McCandlish, Scott Johnston, Shauna Kravec, Sheer El Showk, Tamera Lanham, Timothy Telleen-Lawton, Tom Brown, Tom Henighan, Tristan Hume, Yuntao Bai, Zac Hatfield-Dodds, Jack Clark, Samuel R. Bowman, Amanda Askell, Roger Grosse, Danny Hernandez, Deep Ganguli, Evan Hubinger, Nicholas Schiefer, and Jared Kaplan.",
462
+ "venue": null,
463
+ "url": null
464
+ }
465
+ },
466
+ {
467
+ "31": {
468
+ "title": "Collaborative data science, 2015.",
469
+ "author": "Plotly Technologies Inc.",
470
+ "venue": "URL https://plot.ly.",
471
+ "url": null
472
+ }
473
+ },
474
+ {
475
+ "32": {
476
+ "title": "Steering llama 2 via contrastive activation addition, 2024.",
477
+ "author": "Nina Rimsky, Nick Gabrieli, Julian Schulz, Meg Tong, Evan Hubinger, and Alexander Matt Turner.",
478
+ "venue": null,
479
+ "url": null
480
+ }
481
+ },
482
+ {
483
+ "33": {
484
+ "title": "Neuron-level interpretation of deep NLP models: A survey.",
485
+ "author": "Hassan Sajjad, Nadir Durrani, and Fahim Dalvi.",
486
+ "venue": "Transactions of the Association for Computational Linguistics, 10:1285\u20131303, 2022.",
487
+ "url": null
488
+ }
489
+ },
490
+ {
491
+ "34": {
492
+ "title": "The implicit bias of gradient descent on separable data.",
493
+ "author": "Daniel Soudry, Elad Hoffer, Mor Shpigel Nacson, Suriya Gunasekar, and Nathan Srebro.",
494
+ "venue": "The Journal of Machine Learning Research, 19(1):2822\u20132878, 2018.",
495
+ "url": null
496
+ }
497
+ },
498
+ {
499
+ "35": {
500
+ "title": "Emergent deception and emergent optimization, 2023.",
501
+ "author": "Jacob Steinhardt.",
502
+ "venue": "URL https://bounded-regret.ghost.io/emergent-deception-optimization/.",
503
+ "url": null
504
+ }
505
+ },
506
+ {
507
+ "36": {
508
+ "title": "Linear representations of sentiment in large language models, 2023.",
509
+ "author": "Curt Tigges, Oskar John Hollinsworth, Atticus Geiger, and Neel Nanda.",
510
+ "venue": null,
511
+ "url": null
512
+ }
513
+ },
514
+ {
515
+ "37": {
516
+ "title": "Llama 2: Open foundation and fine-tuned chat models, 2023.",
517
+ "author": "Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom.",
518
+ "venue": null,
519
+ "url": null
520
+ }
521
+ },
522
+ {
523
+ "38": {
524
+ "title": "Investigating gender bias in language models using causal mediation analysis.",
525
+ "author": "Jesse Vig, Sebastian Gehrmann, Yonatan Belinkov, Sharon Qian, Daniel Nevo, Yaron Singer, and Stuart Shieber.",
526
+ "venue": "In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 12388\u201312401. Curran Associates, Inc., 2020.",
527
+ "url": null
528
+ }
529
+ },
530
+ {
531
+ "39": {
532
+ "title": "Finding skill neurons in pre-trained transformer-based language models.",
533
+ "author": "Xiaozhi Wang, Kaiyue Wen, Zhengyan Zhang, Lei Hou, Zhiyuan Liu, and Juanzi Li.",
534
+ "venue": "In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pp. 11132\u201311152, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics.",
535
+ "url": null
536
+ }
537
+ },
538
+ {
539
+ "40": {
540
+ "title": "Representation engineering: A top-down approach to ai transparency, 2023.",
541
+ "author": "Andy Zou, Long Phan, Sarah Chen, James Campbell, Phillip Guo, Richard Ren, Alexander Pan, Xuwang Yin, Mantas Mazeika, Ann-Kathrin Dombrowski, Shashwat Goel, Nathaniel Li, Michael J. Byun, Zifan Wang, Alex Mallen, Steven Basart, Sanmi Koyejo, Dawn Song, Matt Fredrikson, J. Zico Kolter, and Dan Hendrycks.",
542
+ "venue": null,
543
+ "url": null
544
+ }
545
+ }
546
+ ],
547
+ "url": "http://arxiv.org/html/2310.06824v3"
548
+ }
20240819/2310.12375v2.json ADDED
@@ -0,0 +1,680 @@
1
+ {
2
+ "title": "Nearly Optimal Bounds for Sample-Based Testing and Learning of \ud835\udc58-Monotone Functions",
3
+ "abstract": "We study monotonicity testing of functions using sample-based algorithms, which are only allowed to observe the value of on points drawn independently from the uniform distribution. A classic result by Bshouty-Tamon (J. ACM 1996) proved that monotone functions can be learned with samples and it is not hard to show that this bound extends to testing. Prior to our work the only lower bound for this problem was in the small parameter regime, when , due to Goldreich-Goldwasser-Lehman-Ron-Samorodnitsky (Combinatorica 2000). Thus, the sample complexity of monotonicity testing was wide open for . We resolve this question, obtaining a nearly tight lower bound of for all at most a sufficiently small constant. In fact, we prove a much more general result, showing that the sample complexity of -monotonicity testing and learning for functions is . For testing with one-sided error we show that the sample complexity is .",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "A function over a partial order is -monotone if there does not exist a chain of points for which (a) when is odd and (b) when is even. When , these are the monotone functions, which are the non-decreasing functions with respect to . Monotone and -monotone Boolean functions over domains , , and have been the focus of a significant amount of research in property testing and computational learning theory. We give an overview of the literature in Section 1.4 ###reference_###.\nThe field of property testing is concerned with the design and analysis of sub-linear time randomized algorithms for determining if a function has, or is far from having, some specific property. A key aspect in the definition of a property testing algorithm is the type of access it has to the function. Early works on property testing, e.g. [RS96 ###reference_bx61###, GGR98 ###reference_bx44###], focused on the notion of query-based testers, which are allowed to observe the value of the function on any point of their choosing, and since then this has become the standard model. The weaker notion of sample-based testers, which can only view the function on independent uniform samples, was also considered by [GGR98 ###reference_bx44###] and has received some attention over the years, see e.g. [KR00 ###reference_bx53###, BBBY12 ###reference_bx4###, FLV15 ###reference_bx41###, GR16 ###reference_bx46###, FH23 ###reference_bx37###, FH24 ###reference_bx38###]. Sample-based algorithms are considered more natural in many settings, for example in computational learning theory, where they are the standard model. In fact, sample-based testing and learning are closely related problems; given a learning algorithm, it is always possible to design a testing algorithm with the same sample complexity, up to an additive factor111See Lemma C.1 ###reference_theorem1### for a precise statement. Also, note that if the learning algorithm is proper, then the time complexity is also preserved. If the learning algorithm is improper, then there is a time complexity blow-up, but the sample complexity is still preserved..\nFor many fundamental properties, there is still a large gap between how much we know in the query-based vs the sample-based models. Monotonicity (and -monotonicity) is such a property; despite a vast body of research on query-based monotonicity testing over the hypercube , the only work we know of which considers this problem in the sample-based model is [GGL+00 ###reference_bx43###], who gave an upper bound of and a matching lower bound for the case when on the number of samples needed to test monotonicity of functions . The upper bound for learning monotone Boolean functions due to [BT96 ###reference_bx22###, LRV22 ###reference_bx56###] also implies a testing upper bound of . Thus, this question has been wide open for .\nOur work addresses this gap in the monotonicity testing literature, proving a lower bound which matches the learning upper bound for all at most some constant, up to a factor of in the exponent. More generally, we prove a nearly tight lower bound for -monotonicity testing of functions, , i.e. functions with image size at most . To round out our results, we also give an improved learning algorithm for -monotone functions over under product distributions whose sample complexity matches our sample-based testing lower bound, up to poly-logarithmic factors in the exponent."
10
+ },
11
+ {
12
+ "section_id": "1.1",
13
+ "parent_section_id": "1",
14
+ "section_name": "Results",
15
+ "text": "Before explaining our results and the context for them, we first provide some terminology and basic notation. Given a domain and a distribution over , we denote the Hamming distance between two functions under by . We say that is -far from -monotone if for every -monotone function . The results in this paper pertain to sample-based testing and learning of -monotone functions with respect to Hamming distance. We use the following terminology:\nThe example oracle for under , denoted by , when queried, generates an example where is sampled according to .\nA sample-based -monotonicity tester under is a randomized algorithm which is given access to for an arbitrary input function and satisfies the following: (a) if is -monotone, then the algorithm accepts with probability at least , and (b) if is -far from -monotone, then the algorithm rejects with probability at least . The tester has one-sided error if in case (a) it accepts with probability .\nA sample-based learning algorithm for -monotone functions under is a randomized algorithm which is given access to for an arbitrary -monotone input function and outputs a hypothesis such that with probability at least . If left unspecified, .\nIn all of the above definitions if is unspecified, then it is the uniform distribution. Testing and learning are closely related problems; any sample-based learning algorithm can be used to construct a sample-based tester with the same sample complexity. We refer to this transformation as the testing-by-learning reduction and although this is not a new idea we provide a proof in Appendix C ###reference_### for completeness.\nFinally, we recall some important learning theory terminology. A learning algorithm for concept class is called proper if it always outputs a hypothesis , and is called improper if it is allowed to output arbitrary . Given a function , and a concept class , let . An agnostic proper learner is one which, given any (not necessarily in ), outputs a hypothesis for which with probability at least ."
16
+ },
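The access model and the testing-by-learning reduction described in this section can be sketched in a few lines of Python. This is a generic illustration only; constants such as the 48/eps sample count are placeholder choices, not the paper's Lemma C.1.

```python
import random

def example_oracle(f, n):
    """EX(f): one labeled example (x, f(x)) with x uniform over {0,1}^n."""
    x = tuple(random.randint(0, 1) for _ in range(n))
    return x, f(x)

def learner_to_tester(learner, f, n, eps):
    """Generic testing-by-learning reduction: learn a hypothesis to error
    eps/4, then estimate dist(f, h) on fresh samples and threshold at eps/2."""
    h = learner(lambda: example_oracle(f, n), eps / 4)
    m = int(48 / eps)  # placeholder sample count for the distance estimate
    disagree = 0
    for _ in range(m):
        x, y = example_oracle(f, n)
        if h(x) != y:
            disagree += 1
    # If the learner is proper, h is k-monotone, so a function that is
    # eps-far from k-monotone cannot look eps/2-close to h.
    return disagree / m <= eps / 2
```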
17
+ {
18
+ "section_id": "1.1.1",
19
+ "parent_section_id": "1.1",
20
+ "section_name": "1.1.1 Sample-Based Testing and Learning on the Hypercube",
21
+ "text": "The problem of learning monotone Boolean functions over the hypercube was studied by [BT96 ###reference_bx22###] who proved an upper bound222We remark that any function over can be learned exactly with samples by a coupon-collector argument. Combining this with the upper bound by [BT96 ###reference_bx22###] yields . We use this slightly clunkier notation involving the min to emphasize that our upper and lower bounds are nearly matching in all parameter regimes. of for improper learning and very recently by [LRV22 ###reference_bx56###, LV23 ###reference_bx57###] who obtained the same upper bound for agnostic proper learning. The improper learning upper bound was extended by [BCO+15 ###reference_bx7###] who showed an upper bound of and a nearly matching lower bound of for learning -monotone Boolean functions for any . The testing-by-learning reduction shows that their upper bound also holds for sample-based testing.\nThe only prior lower bound for sample-based testing that we\u2019re aware of is when and [GGL+00 ###reference_bx43###, Theorem 5]. Our main result is the following much more general lower bound for this problem, which we prove in Section 3 ###reference_###.\nThere is an absolute constant such that for all , every sample-based -monotonicity tester for functions under the uniform distribution has sample complexity\nEven for the special case of sample-based monotonicity testing of Boolean functions ( and ), Theorem 1.1 ###reference_theorem1### is already a new result, which matches the upper bound for learning by [BT96 ###reference_bx22###] and is the first lower bound to hold for . Moreover, our lower bound is much more general, holding for all , and is optimal in all parameters, , up to a factor in the exponent. We show a nearly matching upper bound in Theorem 1.3 ###reference_theorem3###.\nWe also note that the testing-by-learning reduction implies that the same lower bound holds for learning with samples. As we mentioned, this result was already known for Boolean functions (the case) [BCO+15 ###reference_bx7###], but the general case of was not known prior to our work333It is possible that the techniques from [BCO+15 ###reference_bx7###] could be extended to provide an alternative proof of Corollary 1.2 ###reference_theorem2###, but we have not checked whether this is the case..\nThere is an absolute constant such that for every , every sample-based uniform-distribution learning algorithm for -monotone functions has sample complexity\nOn the upper bound side, a relatively straightforward argument extends the learning algorithm of [BCO+15 ###reference_bx7###] for Boolean -monotone functions, to -monotone functions with image size at most . We give a short proof in Section 1.5 ###reference_###. This shows that our lower bounds in Theorem 1.1 ###reference_theorem1### and Corollary 1.2 ###reference_theorem2### are tight up to a factor of in the exponent.\nThere is a uniform-distribution learning algorithm for -monotone functions which achieves error at most with time and sample complexity\nThe testing-by-learning reduction again gives us the following corollary.\nThere is a sample-based -monotonicity tester for functions with sample complexity\nLastly, we consider the problem of sample-based testing with one-sided error. 
For monotonicity testing of functions with non-adaptive queries, we know that one-sided and two-sided error testers achieve the same query-complexity (up to factors): there is a one-sided error upper bound due to [KMS18 ###reference_bx52###] and a two-sided error lower bound due to [CWX17 ###reference_bx33###]. We show that the situation is quite different for sample-based monotonicity testing; while the sample complexity of two-sided error testers is , one-sided error testers require samples for all .\nFor every , and , sample-based -monotonicity testing of functions with one-sided error requires samples."
22
+ },
23
+ {
24
+ "section_id": "1.1.2",
25
+ "parent_section_id": "1.1",
26
+ "section_name": "1.1.2 Sample-Based Testing and Learning in Continuous Product Spaces",
27
+ "text": "Learning -monotone Boolean-valued functions has also been studied over with respect to product measures by [HY22 ###reference_bx49###] who gave an upper bound of where hides polylog factors of , and . Our next result gives an upper bound which improves the dependence on from to in the exponent. By the same approach we used to generalize the upper bound in Theorem 1.3 ###reference_theorem3### to arbitrary , we get the same generalization for product spaces. We obtain the following upper bound which matches our lower bound for in Theorem 1.1 ###reference_theorem1### up to polylog factors of , and . We say that a function is measurable if the set is measurable for every .\nGiven an arbitrary product measure , there is a learning algorithm under for measurable -monotone functions with time and sample complexity\nThe hides polylogarithmic dependencies on , and .\nWe prove Theorem 1.6 ###reference_theorem6### in Section 4 ###reference_###. Once again the testing-by-learning reduction gives us the following corollary for sample-based testing.\nGiven an arbitrary product measure , there is a -monotonicity tester for measurable functions under with sample complexity\nThe hides polylogarithmic dependencies on , and ."
28
+ },
29
+ {
30
+ "section_id": "1.2",
31
+ "parent_section_id": "1",
32
+ "section_name": "Proof Overviews",
33
+ "text": "In this section we give an overview of our proofs for Theorem 1.1 ###reference_theorem1### and Theorem 1.6 ###reference_theorem6###."
34
+ },
35
+ {
36
+ "section_id": "1.2.1",
37
+ "parent_section_id": "1.2",
38
+ "section_name": "1.2.1 The Testing Lower Bound for Hypercubes",
39
+ "text": "Our proof of Theorem 1.1 ###reference_theorem1### uses a family functions known as Talagrand\u2019s random DNFs introduced by [Tal96 ###reference_bx63###] which have been used by [BB16 ###reference_bx3###] and [CWX17 ###reference_bx33###] to prove lower bounds for monotonicity testing of Boolean functions against adaptive and non-adaptive query-based testers. Very recently, they have also been used to prove lower bounds for tolerant monotonicity testing [CDL+24 ###reference_bx25###] and for testing convexity of sets in [BBH24 ###reference_bx5###].\nTo understand our construction, let us first consider the special case of monotonicity of Boolean functions, i.e. and . We think of a DNF term as a point which is said to be satisfied by if , where denotes the standard bit-wise partial order over . The width of a term is its Hamming weight, , and the width of a DNF is the max width among its terms. Consider randomly chosen terms each of width . We will see later how to choose and . Let and for each , let\nbe the set of points in which satisfy and no other terms. Let . Now observe that any two points lying in different \u2019s are incomparable and therefore independently embedding an arbitrary monotone function into each will result in a function which globally is monotone if one defines the function outside of appropriately. Using this fact we can define two distributions and as follows. Let denote the set of points in for which either or and for two different terms .\nis drawn by setting if and only if where contains each with probability , independently. Such a function is always monotone.\nis drawn by setting if and only if where contains each with probability , independently. Such a function will be -far from monotone with probability since its restriction with is uniformly random.\nNow, each satisfies and for both distributions the events and are independent when lie in different \u2019s. Therefore, any tester will need to see at least two points from the same to distinguish and . Roughly speaking, by birthday paradox this gives a lower bound on the number of samples. The lower bound is thus determined by the maximum number of terms that can be used in the construction for which .\nSo how are and chosen? By standard concentration bounds, we have and observe that a point satisfies a random term with probability exactly . We need to contain a constant fraction of , i.e. we need to satisfy exactly term with constant probability. The expected number of satisfied terms is and, roughly speaking, we need this value to be for all . Applying this constraint to the case when forces us to pick . Now when , the expected number of satisfied terms is and we are forced to choose . The lower bound for sample-based monotonicity testing of is then .\nLet us now think about generalizing this construction to testing -monotonicity of functions . The moral of the above argument is that the permitted number of terms is controlled by the number of distinct Hamming weights in the set . We observe that for larger values of and we can partition into blocks as each with a window of Hamming weights of size only . We are able to essentially repeat the above construction independently within each block wherein we can set and consequently .\nFor each block , the random Talagrand DNF within block is defined analogously to the above construction, except that it assigns function values from , instead of . See Fig.\u20091 ###reference_### for an illustration. 
Since there are blocks in total, the distribution only produces -monotone functions. At the same time, a function assigns uniform random values within each block . This results in a large number of long chains through which alternate between function value and . Considering the union of all such chains for shows that is -far from -monotone with probability .\n###figure_1###"
40
+ },
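Because the inline math in the overview above was lost in extraction, the warm-up construction may be easier to follow as code. This Python sketch keeps the parameters symbolic (in the actual warm-up the width w is about sqrt(n)) and shows the two labeling rules; the values off the uniquely-satisfying region, which the paper fixes by a deterministic rule, are only a placeholder here.

```python
import random

def sample_terms(n, w, t):
    """t random DNF terms, each given by a set of w coordinates; a point x
    satisfies a term iff all of the term's coordinates are 1 in x."""
    return [frozenset(random.sample(range(n), w)) for _ in range(t)]

def unique_term(x, terms):
    """Index i if x satisfies term i and no other term, else None."""
    sat = [i for i, t in enumerate(terms) if all(x[j] == 1 for j in t)]
    return sat[0] if len(sat) == 1 else None

def draw_function(terms, yes=True):
    """Sketch of the yes/no draws: one shared random bit per term (yes case)
    versus an independent random bit per point (no case)."""
    term_bits = [random.randint(0, 1) for _ in terms]
    point_bits = {}
    def f(x):
        i = unique_term(x, terms)
        if i is None:
            # Off the uniquely-satisfying region the paper fixes f
            # deterministically; this 0/1 rule is a placeholder only.
            return 1 if any(all(x[j] == 1 for j in t) for t in terms) else 0
        return term_bits[i] if yes else point_bits.setdefault(x, random.randint(0, 1))
    return f
```

Both draws look identical until two samples land in the same uniquely-satisfying set, which is exactly the birthday-paradox event driving the lower bound.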
41
+ {
42
+ "section_id": "1.2.2",
43
+ "parent_section_id": "1.2",
44
+ "section_name": "1.2.2 The Learning Upper Bound for Product Spaces",
45
+ "text": "As we discussed in Section 1.1 ###reference_###, it suffices to prove Theorem 1.6 ###reference_theorem6### for the case of , i.e. learning functions under a product measure . We use a downsampling technique to reduce this problem to learning a discretized proxy of over a hypergrid where with mild label noise. This technique has been used in previous works [GKW19 ###reference_bx45###, BCS20 ###reference_bx9###, HY22 ###reference_bx49###] and our proof borrows many technical details from [HY22 ###reference_bx49###].\nNext, for which is a power of , we observe that a -monotone function can be viewed as a -monotone function over the hypercube by mapping each point to its bit-representation. We can then leverage a result of [BCO+15 ###reference_bx7###] which shows that all but a -fraction of the mass of the Fourier coefficients of -monotone Boolean functions is concentrated on the terms with degree at most . We can then use the Low-Degree Algorithm introduced by [LMN93 ###reference_bx54###] which was shown to work under random classification noise by [Kea98 ###reference_bx50###]."
46
+ },
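The Low-Degree Algorithm invoked in this overview is the standard one: estimate every Fourier coefficient up to degree d from labeled samples and output the sign of the truncated expansion. A compact sketch follows; the degree bound d appropriate for k-monotone functions is elided in the extraction, so it stays a parameter here.

```python
import itertools

def chi(S, x):
    """Parity character chi_S(x) = (-1)^{sum_{i in S} x_i} on {0,1}^n."""
    return (-1) ** sum(x[i] for i in S)

def low_degree_learn(samples, n, d):
    """samples: list of (x, y) pairs with y in {-1, +1}. Each coefficient is
    an empirical average, which is also why the algorithm tolerates random
    classification noise, as noted above via [Kea98]."""
    m = len(samples)
    coeffs = {}
    for size in range(d + 1):
        for S in itertools.combinations(range(n), size):
            coeffs[S] = sum(y * chi(S, x) for x, y in samples) / m
    def h(x):
        return 1 if sum(c * chi(S, x) for S, c in coeffs.items()) >= 0 else -1
    return h
```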
47
+ {
48
+ "section_id": "1.3",
49
+ "parent_section_id": "1",
50
+ "section_name": "Discussion and Open Questions",
51
+ "text": "Our results for sample-based testing and learning over the hypercube are tight up to a factor in the exponent. Our upper bound for product spaces matches the lower bound for hypercubes only up to polylog factors of in the exponent. In particular, the upper bound for product spaces goes to as any one of the parameters , , or grow to , whereas the lower bound for the hypercube can be at most simply because and so any function can be learned exactly with samples. It seems intuitive that sample-based testing and learning of -monotone functions over should require samples as either of the parameters or approaches . A corollary of such a result would be that the sample-complexity of these problems for grow to as or approach . Moreover, if this is true, then -monotonicity of functions is not testable with a finite number of samples. Our results do not address this and it would be interesting to investigate this further.\nIs there a lower bound for sample-based -monotonicity testing of functions which approaches as or go to ?"
52
+ },
53
+ {
54
+ "section_id": "1.4",
55
+ "parent_section_id": "1",
56
+ "section_name": "Related Work",
57
+ "text": "Monotone functions and their generalization to -monotone functions have been extensively studied within property testing and learning theory over the last 25 years. We highlight some of the results which are most relevant to our work. Afterwards, we discuss some selected works on sample-based property testing.\nSample-based monotonicity testing of Boolean functions over the hypercube, , was considered by [GGL+00 ###reference_bx43###] (see [GGL+00 ###reference_bx43###, Theorems 5 and 6]) who gave an upper bound of and a lower bound of for . Sample-based monotonicity testing over general partial orders was studied by [FLN+02 ###reference_bx40###] who gave a one-sided error tester for functions where is any partial order on elements. Sample-based monotonicity testing of functions on the line was studied by [PRV18 ###reference_bx58###] who gave a one-sided error upper bound of and a matching lower bound of for all sample-based testers.\nMonotonicity testing has been extensively studied in the standard query model [Ras99 ###reference_bx59###, EKK+00 ###reference_bx35###, GGL+00 ###reference_bx43###, DGL+99 ###reference_bx34###, LR01 ###reference_bx55###, FLN+02 ###reference_bx40###, HK03 ###reference_bx47###, AC06 ###reference_bx1###, HK08 ###reference_bx48###, ACCL07 ###reference_bx2###, Fis04 ###reference_bx39###, SS08 ###reference_bx62###, Bha08 ###reference_bx15###, BCSM12 ###reference_bx12###, FR10 ###reference_bx42###, BBM12 ###reference_bx6###, RRSW11 ###reference_bx60###, BGJ+12 ###reference_bx14###, CS13 ###reference_bx29###, CS14a ###reference_bx30###, CST14 ###reference_bx32###, BRY14a ###reference_bx20###, BRY14b ###reference_bx21###, CDST15 ###reference_bx26###, CDJS15 ###reference_bx24###, KMS15 ###reference_bx51###, BB16 ###reference_bx3###, CWX17 ###reference_bx33###, BCS18 ###reference_bx8###, PRV18 ###reference_bx58###, BCS20 ###reference_bx9###, HY22 ###reference_bx49###, BKR24 ###reference_bx17###, BKKM23 ###reference_bx16###, BCS23b ###reference_bx11###, BCS23a ###reference_bx10###, CDL+24 ###reference_bx25###]. When discussing these works we treat as a small constant for brevity. For , the non-adaptive query complexity has been established at [KMS18 ###reference_bx52###, CWX17 ###reference_bx33###] with an adaptive lower bound of [CWX17 ###reference_bx33###]. This gap for adaptive monotonicity testing of Boolean functions is still an outstanding open question. For and under product measures, a recent result of [BCS23a ###reference_bx10###] established a non-adaptive upper bound of . For functions , [BKR24 ###reference_bx17###] showed upper and lower bounds of for non-adaptive, one-sided error testers and there is a general (adaptive) lower bound of due to [BBM12 ###reference_bx6###]. For real-valued functions , the query complexity is known to be . The upper bound is non-adaptive [CS13 ###reference_bx29###] and the lower bound holds even for adaptive testers [CS14b ###reference_bx31###].\nThe generalization to -monotonicity testing has also been studied in the standard query model by [GKW19 ###reference_bx45###, CGG+19 ###reference_bx28###]. These works show that the query-complexity of non-adaptive one-sided error -monotonicity testing is for all , demonstrating an interesting separation between (1-)monotonicity and 2-monotonicity.\nMonotone Boolean functions were studied in the context of learning theory by [BT96 ###reference_bx22###] who showed that they can be (improperly) learned to error under the uniform distribution with time and samples. 
Very recent works [LRV22 ###reference_bx56###, LV23 ###reference_bx57###] have given agnostic proper learning algorithms with the same complexity.\nThe result of [BT96 ###reference_bx22###] was generalized by [BCO+15 ###reference_bx7###] who gave upper and lower bounds of for learning -monotone Boolean functions . For Boolean functions over hypergrids , [CGG+19 ###reference_bx28###] gave an upper bound of where hides polylog factors of . This result was generalized to functions under product measures by [HY22 ###reference_bx49###].\nThe notion of sample-based property testing was first presented and briefly studied by [GGR98 ###reference_bx44###]. Broader studies of sample-based testing and its relationship with query-based testing have since been given by [FGL14 ###reference_bx36###, FLV15 ###reference_bx41###, GR16 ###reference_bx46###]. A characterization of properties which are testable with a constant number of samples was given by [BY19 ###reference_bx23###].\nAs we mentioned, sample-based algorithms are the standard model in learning theory, and learning requires at least as many samples as testing for every class of functions. Thus, it is natural to ask, when is testing easier than learning in terms of sample complexity? This question is referred to as testing vs learning and has been studied by [KR00 ###reference_bx53###] and more recently by [BFH21 ###reference_bx13###, FH23 ###reference_bx37###, FH24 ###reference_bx38###].\nThere has also been work studying models that interpolate between query-based and sample-based testers. For instance, [BBBY12 ###reference_bx4###] introduced the notion of active testing, where the tester may make queries, but only on points from a polynomial-sized batch of unlabeled samples drawn from the underlying distribution. This was inspired by the notion of active learning which considers learning problems under this access model.\nSample-based convexity testing of sets over various domains has also seen some recent attention [CFSS17 ###reference_bx27###, BMR19a ###reference_bx18###, BMR19b ###reference_bx19###, BBH24 ###reference_bx5###]."
58
+ },
59
+ {
60
+ "section_id": "1.5",
61
+ "parent_section_id": "1",
62
+ "section_name": "Learning Functions with Bounded Image Size: Proof of Theorem 1.3",
63
+ "text": "In this section we give a short proof showing that the learning algorithm of [BCO+15 ###reference_bx7###] can be extended in a relatively straightforward manner to functions by increasing the sample-complexity by a factor of in the exponent.\n[BCO+15 ###reference_bx7###, Theorem 1.4] proved this result for the case of . In particular, they show that there is a sample-based learning algorithm which given an arbitrary -monotone Boolean function , outputs such that using queries444Their result (Thm 1.4 of [BCO+15 ###reference_bx7###]) is stated for constant , but can be easily extended to arbitrary with the stated query complexity by replacing Thm 3.1 in their proof with the Low-Degree Algorithm stated for general . to the example oracle, . We will make use of this result.\nFor each , let denote the thresholded Boolean function defined as . Observe that for all we have . Thus, for each , run the learning algorithm of [BCO+15 ###reference_bx7###] with error parameters set to and to obtain a hypothesis . We have . By a union bound, with probability at least , every satisfies . Moreover, if this holds then by another union bound we have . Thus, the hypothesis satisfies . The number of samples used is and this completes the proof. \u220e"
64
+ },
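The thresholding argument in this proof translates directly to code. In the sketch below, boolean_learner is a stand-in for the [BCO+15] algorithm, and the per-threshold error split mirrors the union-bound structure of the proof; the exact parameters are elided in the extraction, so the eps/(r-1) and delta/(r-1) splits are an assumption.

```python
def learn_bounded_image(f, r, boolean_learner, eps, delta):
    """Learn f with image in {1, ..., r} using a Boolean k-monotone learner:
    learn each threshold f_j(x) = 1[f(x) >= j], then recombine."""
    hyps = []
    for j in range(2, r + 1):
        f_j = lambda x, j=j: 1 if f(x) >= j else 0  # thresholded Boolean function
        hyps.append(boolean_learner(f_j, eps / (r - 1), delta / (r - 1)))
    def h(x):
        # If every h_j is correct at x, then 1 + sum_j h_j(x) = f(x).
        return 1 + sum(h_j(x) for h_j in hyps)
    return h
```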
65
+ {
66
+ "section_id": "2",
67
+ "parent_section_id": null,
68
+ "section_name": "Preliminaries on -Monotonicity",
69
+ "text": "We use the notation .\nGiven a poset and a function , an -alternating chain is a sequence of points such that for all ,\nwhen is odd, and\nwhen is even.\nFor a poset , a function is called -monotone if it does not have any -alternating chains.\nLet denote the set of all -monotone functions over the poset . The Hamming distance between two functions is . The distance to -monotonicity of is denoted by . The following claim is our main tool for lower bounding the distance to -monotonicity.\nLet and be an integer. Let be a collection of disjoint -alternating chains for . Then\nObserve that every -monotone function has the following property: for every , the sequence\nchanges sign at most times, whereas the sequence\nchanges sign exactly times. We have prepended a so that the first sign change occurs as soon as the function value decreases. Now, changing can only reduce the number of times the sequence changes sign by at most and so . Summing over all chains in and normalizing yields\nwhere the second inequality follows from and the third inequality is due to the fact that the chains in are all disjoint and each of size . This completes the proof since this inequality holds for all . \u220e\nWe use the notation to denote the set of all -monotone functions over the hypercube whose image has at most distinct values."
70
+ },
71
+ {
72
+ "section_id": "3",
73
+ "parent_section_id": null,
74
+ "section_name": "Lower Bound for Sample-Based Testers",
75
+ "text": "In this section we prove Theorem 1.1 ###reference_theorem1###, our lower bound on the sample-complexity of testing -monotonicity of functions . We refer the reader to Section 1.2.1 ###reference_.SSS1### for a discussion of our main ideas and a proof sketch for the special case of and , i.e. monotone Boolean functions. Our proof follows the standard approach of defining a pair of distributions over functions which satisfy the following:\nis supported over -monotone functions.\nFunctions drawn from are typically -far from -monotone: .\nThe distributions over labeled examples from and are close in TV-distance.\nOur construction uses a generalized version of a family functions known as random Talagrand DNFs, which were used by [BB16 ###reference_bx3###] and [CWX17 ###reference_bx33###] to prove lower bounds for testing monotonicity of Boolean functions with adaptive and non-adaptive queries.\nLet satisfy . For convenience, we will assume that and are integers and that divides . Let denote the \u2019th Hamming level of the hypercube. We partition into blocks as follows. For each , define\nThe idea of our proof is to define a random DNF within each . The width of each DNF will be set to and for each , the number of terms in the DNF within will be set to . The DNF defined over will assign function values from . The terms in each DNF will be chosen randomly from the following distribution. We think of terms as points in the hypercube where another point satisfies if , i.e. implies .\nA term is sampled from the distribution as follows. Form a (multi)-set by choosing independent uniform samples from . For each , let ."
76
+ },
77
+ {
78
+ "section_id": "3.1",
79
+ "parent_section_id": "3",
80
+ "section_name": "The Distributions and",
81
+ "text": "We now define the yes and no distributions over functions . For each , choose terms i.i.d. from and let denote the random set of all terms. Now, for each and , define the set\nof all points in the \u2019th block that satisfy the \u2019th term uniquely. Let denote the set of points in that satisfy a unique term. The following claim is key to our result and motivates our choice of and . We defer its proof to Section 3.2 ###reference_###.\nFor any , , and , we have\nAs a corollary, we have .\nFunctions drawn from are generated as follows. For each choose a uniform random assignment\nFor every define\nFunctions drawn are generated as follows. For each choose a uniform random function\nFor each define\nFor not belonging to any : if , then both the yes and no distributions assign value and if , then both the yes and no distributions assign value .\nIn summary, a function assigns the same random value to all points in , which results in a -monotone function, whereas a function assigns an i.i.d. uniform random -value to each point in , resulting in a function that is far from being -monotone. By construction, to detect any difference between these cases a tester will need to sample at least two points from the same . Theorem 1.1 ###reference_theorem1### follows immediately from the following three lemmas.\nEvery function in the support of is -monotone.\nConsider any . For each , consider the union of blocks formed by\nRecall that if , then and if , then . If , then . Therefore, it suffices to show that for any pair of comparable points , we have . Firstly, observe that by construction all points have function value . Since , if and are in different blocks, then and where and so the inequality is satisfied. Therefore, we may assume are in the same block. Since , if for some term , then as well. I.e. the set of terms in satisfied by is a superset of the set of terms in satisfied by . By construction, this implies . \u220e\nFor , we have\n.\nWe prove Lemma 3.4 ###reference_theorem4### in Section 3.4 ###reference_###.\nGiven a collection of points and a function , let denote the corresponding collection of labelled examples. Let and denote the distributions over when consists of i.i.d. uniform samples and and , respectively. If , then the total variation distance between and is .\nWe prove Lemma 3.5 ###reference_theorem5### in Section 3.3 ###reference_###."
82
+ },
83
+ {
84
+ "section_id": "3.2",
85
+ "parent_section_id": "3",
86
+ "section_name": "Proof of 3.2",
87
+ "text": "Recall , , the definition of from Definition 3.1 ###reference_theorem1###, and the definition of from eq. 1 ###reference_###. Since we have where . Note that since iff the non-zero coordinates of are a subset of the non-zero coordinates of . Therefore, we have\nNote that the first term is upper bounded as\nand this immediately implies the upper bound on . We can also lower bound this quantity by\nNow, combining our upper and lower bounds on yields\n\u220e"
88
+ },
89
+ {
90
+ "section_id": "3.3",
91
+ "parent_section_id": "3",
92
+ "section_name": " and are Hard to Distinguish: Proof of Lemma 3.5",
93
+ "text": "Recall the definition of the set in eq. 1 ###reference_###. For , let denote the event that and belong to the same for some and . Observe that conditioned on , the distributions and are identical. Let denote two i.i.d. uniform samples. We have\nwhere the first step holds since the \u2019s are disjoint and the second step holds by independence of and . Now, for a fixed and we have the following: by 3.2 ###reference_theorem2###, for we have and for we have . Therefore . Therefore, the RHS of eq. 2 ###reference_### is bounded as\nsince the \u2019s are decreasing with respect to . Therefore,\nsince . \u220e"
94
+ },
95
+ {
96
+ "section_id": "3.4",
97
+ "parent_section_id": "3",
98
+ "section_name": "Functions Drawn from are Far from -Monotone: Proof of Lemma 3.4",
99
+ "text": "We will use 2.3 ###reference_theorem3###, restated below for the special case of -valued functions over the hypercube. Recall that is the set of -monotone functions .\nLet and be an integer. Let be a collection of disjoint -alternating chains for . Then\nFrom the above claim, we can lower bound the distance to -monotonicity of by showing that it contains a collection of disjoint -alternating chains where whose union makes up an -fraction of the hypercube.\nRecall and note that takes values only from in . In particular, for , let\nand note that all points are assigned value . Moreover, this value is chosen uniformly at random when , which occurs with probability by 3.2 ###reference_theorem2###. Let and recall that we are assuming and so . We first show there exists a large collection of length- disjoint chains in for all .\nFor every , there exists a collection of vertex disjoint chains in of length of size .\nWe start by showing that there is a large matching in the transitive closure of the hypercube from to . Consider the bipartite graph where , , and . Observe that vertices in have degree exactly while vertices in have degree exactly . Note also that by Stirling\u2019s approximation. We now use the following claim from [BBH24 ###reference_bx5###].\nLet be a bipartite graph and be such that (a) each vertex has degree exactly and (b) each vertex has degree at least . Then there exists a matching in of size .\nBy the above claim and the previous observations, there exist subsets and of size and a bijection satisfying for all . We now use the following routing theorem due to Lehman and Ron to obtain a collection of disjoint chains from to .\nLet and , where . Moreover, suppose there is a bijection satisfying for all . Then there exist vertex disjoint paths from to in the hypercube.\nNow, invoking the above theorem on our bijection yields a collection of vertex disjoint paths from to . For each , let denote the collection of chains formed by taking a path in and including only the vertices from (recall eq. 3 ###reference_###). Note that the resulting chains in are of length . This completes the proof of 3.7 ###reference_theorem7###. \u220e\nFrom 3.7 ###reference_theorem7###, we have where each is a collection of vertex disjoint chains of length of size . Fix a chain . Let be the random variable which denotes the max-length alternating sub-chain (recall Definition 2.1 ###reference_theorem1###) of over a random . Fix in the chain and suppose . By 3.2 ###reference_theorem2###, . Moreover, conditioned on , is chosen from uniformly at random. Thus, any step of the sequence\nis non-zero and differs in sign from the previous non-zero step with probability at least and so . I.e., . Thus, using Markov\u2019s inequality we have\nNow, let and let . By eq. 4 ###reference_### we have and . Again using Markov\u2019s inequality, we have\nNow, for such that , let be any -alternating sub-chain of . Let which is a collection of disjoint -alternating chains for .\nNow, recall that and so . Thus, if , then and so by 3.6 ###reference_theorem6### we have\nBy 3.7 ###reference_theorem7### we have and recall that . Thus, the RHS of eq. 6 ###reference_### is . In conclusion,\nby eq. 5 ###reference_### and this completes the proof of Lemma 3.4 ###reference_theorem4###. \u220e"
100
+ },
101
+ {
102
+ "section_id": "4",
103
+ "parent_section_id": null,
104
+ "section_name": "Learning Upper Bound over Product Spaces",
105
+ "text": "In this section we prove Theorem 1.6 ###reference_theorem6###, our upper bound for learning measurable -monotone functions in . We restate the theorem below without any hidden logarithmic factors and for the case of . The theorem for general can then be obtained by replacing with and by following the same approach we used to prove Theorem 1.3 ###reference_theorem3### in Section 1.5 ###reference_###.\nGiven an arbitrary product measure , there is a learning algorithm under which learns any measurable -monotone function to error with probability with time and sample complexity\nOur proof uses downsampling to reduce our learning problem over to learning over a hypergrid, , under the uniform distribution with mild label noise. In Section 4.1 ###reference_### we synthesize the results from [HY22 ###reference_bx49###] which we borrow for our proof. In Section 4.2 ###reference_### we give two learning results for hypergrids whose time complexities correspond to the two arguments inside the expression in eq. 7 ###reference_###. In Section 4.3 ###reference_### we describe the learning algorithm and prove its correctness.\nThroughout this section, let be any product measure over and let be a power of two satisfying ."
106
+ },
107
+ {
108
+ "section_id": "4.1",
109
+ "parent_section_id": "4",
110
+ "section_name": "Reduction to Hypergrids via Downsampling",
111
+ "text": "The idea of downsampling is to construct a grid-partition of into blocks such that (a) the measure of each block under is roughly , and (b) the function we\u2019re trying to learn is constant on most of the blocks. Roughly speaking, this allows us to learn under by learning a proxy for over under the uniform distribution. The value of needed to achieve this depends on what [HY22 ###reference_bx49###] call the \n?block boundary size? of the function. Formally, the downsampling procedure constructs query access to maps and which have various good properties which we will spell out in the rest of this section. One should think of as mapping each point to the block of the grid-partition that belongs to and as mapping each block to some specific point contained in the block. See [HY22 ###reference_bx49###, Def 2.1] for a formal definition. Given these maps and a function we define the function as . We let denote the distribution over induced by sampling and then taking .\nLet be a -monotone function and . Using\nsamples from , there is a downsampling procedure that constructs query access to maps and such that with probability at least over the random samples, the following two conditions are satisfied:\n.\n.\nThe total running time and number of samples is .\n[HY22 ###reference_bx49###, Prop. 2.5] shows that there is a randomized procedure using samples from and time which constructs the maps and such that with probability , we get\nwhere is the -block boundary size of [HY22 ###reference_bx49###, Def. 2.4], which is at most when is -monotone [HY22 ###reference_bx49###, Lemma 7.1]. Thus, the first of the two quantities in the RHS is at most which is at most using our definition of . Then, [HY22 ###reference_bx49###, Lemma 2.7] states that\nand so invoking this lemma with and completes the proof. \u220e"
112
+ },
113
+ {
114
+ "section_id": "4.2",
115
+ "parent_section_id": "4",
116
+ "section_name": "Learning over Hypergrids",
117
+ "text": "For a function and a measure over , recall that the example oracle for under , denoted by , when queried, generates an example, , where is sampled from . Given a noise parameter , the noisy example oracle , when queried, samples from , returns the true example with probability , and returns the corrupted example with probability . This is referred to as random classification noise (RCN).\nWe prove the following two upper bounds for learning over hypergrids under RCN. The bound in Lemma 4.3 ###reference_theorem3### is relatively straightforward to prove using coupon collector arguments plus some additional work to handle the label noise. We give a proof in Appendix B ###reference_###.\nLet , , and . There is an algorithm which, given any -monotone function , uses at most\nexamples from and returns , satisfying .\nLet , , and be a power of two. There is an algorithm which, given any -monotone function , uses at most\nexamples from and returns , satisfying .\nLet denote the bijection which maps each element of to its bit representation. Let be defined as . Given define the function as .\nIf is -monotone over , then is -monotone over .\nObserve that if in , then in . Thus, if is an -alternating chain for , then is an -alternating chain for . Therefore, if is not -monotone, then neither is . \u220e\nNow, given 4.5 ###reference_theorem5### and the bijection , it suffices to provide a learning algorithm for . This is achieved using the Low-Degree Algorithm introduced by [LMN93 ###reference_bx54###] which was shown by [Kea98 ###reference_bx50###] to be robust to classification noise. Formally, we use the following theorem, which we prove in Appendix A ###reference_### for the sake of completeness.\nLet and . Suppose is a concept class of Boolean functions over such that for some fixed positive integer , all satisfy . Then there is an algorithm which, on any input , uses at most\nexamples from and returns a hypothesis where .\nWe use the following Fourier concentration lemma due to [BCO+15 ###reference_bx7###] for -monotone Boolean functions.\nIf is -monotone, then .\nBy Lemma 4.7 ###reference_theorem7###, we can invoke Theorem 4.6 ###reference_theorem6### with , concluding the proof of Lemma 4.4 ###reference_theorem4###. \u220e"
118
+ },
119
+ {
120
+ "section_id": "4.3",
121
+ "parent_section_id": "4",
122
+ "section_name": "Putting it Together: Proof of Theorem 4.1",
123
+ "text": "We now have all the tools to define the algorithm and prove its correctness.\nRecall that given maps , , and a function we define the function as . Recall that is the distribution over when . By Proposition 4.2 ###reference_theorem2###, step (2) of Alg. 1 ###reference_thm1### results in the following items being satisfied with probability at least .\n.\n.\nFirstly, by item (2), an example where , is equivalent to an example for some . I.e. the set from step (4) of Alg. 1 ###reference_thm1### is distributed according to . Now, as stated, Lemma 4.3 ###reference_theorem3### and Lemma 4.4 ###reference_theorem4### only hold when is given a sample from .\nHowever, the following claim shows that since and are sufficiently close (item (1) above), the guarantees on from Lemma 4.3 ###reference_theorem3### and Lemma 4.4 ###reference_theorem4### also hold when is given a sample from .\nLet be a concept class and let be an algorithm which given any , , and uses a sample from and produces satisfying with probability at least . If is a distribution over with , then given a sample from , produces satisfying with probability at least .\nUsing 4.8 ###reference_theorem8### and item (1) above, if step (2) of Alg. 1 ###reference_thm1### succeeds, then with probability at least , step (5) produces such that . By the triangle inequality and using our definition of in the return statement of Alg. 1 ###reference_thm1###, we have\nThe first term in the RHS is at most by item (2) above and the second term is at most as we argued in the previous paragraph. Finally, adding up the failure probabilities of steps (2) and (5), we conclude that Alg. 1 ###reference_thm1### produces satisfying with probability at least . \u220e"
124
+ },
125
+ {
126
+ "section_id": "4.3.1",
127
+ "parent_section_id": "4.3",
128
+ "section_name": "4.3.1 Proof of 4.8",
129
+ "text": "It is a well-known fact that for two distributions and , the TV-distance between the corresponding product distributions satisfies and thus we have\nGiven a set of examples , let denote the event that the algorithm fails to produce a hypothesis with error at most , after sampling . First, note the distribution over labels for the distributions are the same, and therefore\nUsing the definition of TV-distance we have\nand therefore\nwhere we used by the assumption in the statement of the claim. Now, conditioned on , we have that produces satisfying . Again using our bound on the TV-distance, we have\nand so . \u220e"
130
+ },
131
+ {
132
+ "section_id": "5",
133
+ "parent_section_id": null,
134
+ "section_name": "Sample-Based Testing with One-Sided Error",
135
+ "text": "In this section we prove Theorem 1.5 ###reference_theorem5###, our upper and lower bound on sample-based testing with one-sided error over the hypercube.\nBy a coupon-collecting argument, there is an sample upper bound for exactly learning any function over under the uniform distribution and therefore the upper bound is trivial.\nIt suffices to prove the lower bound for the case of and , i.e. for testing monotonicity of Boolean functions. We will need the following fact.\nLet be any anti-chain and let be any labelling of . Then there exists a monotone function such that for all . I.e. shatters the class of monotone functions.\nNow, let be any monotonicity tester with one-sided error and let denote a set of i.i.d. uniform samples. Since has one-sided error, if the input function is monotone, then must accept. In other words, for to reject it must be sure without a doubt that the input function is not monotone. By 5.1 ###reference_theorem1### for to be sure the input function is not monotone, it must be that is not an anti-chain. Let be any function which is -far from monotone. Since is a valid tester, it rejects with probability at least . By the above argument we have\nwhere the last inequality is by a union bound over all pairs of samples. We then have\nThus, combining eq. 14 ###reference_### and eq. 15 ###reference_### yields . \u220e"
136
+ },
137
+ {
138
+ "section_id": "6",
139
+ "parent_section_id": null,
140
+ "section_name": "Acknowledgements",
141
+ "text": "We would like to thank Eric Blais and Nathaniel Harms for helpful discussions during the early stages of this work and for their thoughtful feedback. We would also like to thank the anonymous reviewers whose comments helped significantly to improve this write up."
142
+ }
143
+ ],
144
+ "appendix": [
145
+ {
146
+ "section_id": "Appendix 1",
147
+ "parent_section_id": null,
148
+ "section_name": "Appendix A Low-Degree Algorithm with RCN: Proof of Theorem\u00a04.6",
149
+ "text": "In this section we prove Theorem 4.6 ###reference_theorem6###, showing that concept classes with bounded Fourier degree can be learned efficiently in the presence of random classification noise (RCN). This fact is already implicit from previous works [LMN93 ###reference_bx54###, Kea98 ###reference_bx50###], but we give a proof for the sake of completeness.\nFor , the parity function is defined as . The parity functions form a Fourier basis for the space of functions and the unique representation of is given by\nis Fourier coefficient for on . The idea of the Low-Degree Algorithm is to learn by learning its low-degree Fourier coefficients. From the definition of , observe that an estimate of can be viewed as a call to a statistical query oracle, which returns an estimate of to within some specified allowed query error, . In [Kea98 ###reference_bx50###], Kearns showed how to simulate statistical query algorithms using only examples with classification noise.\nSuppose there is an algorithm which learns a concept class of Boolean functions over to error , using at most statistical queries with allowed query error . Then, for any , there is a learning algorithm for which on any input , uses at most\nexamples from and outputs a hypothesis where .\nIn light of the above, we prove Theorem 4.6 ###reference_theorem6### by first giving an efficient statistical query algorithm, and then applying Theorem A.1 ###reference_theorem1###.\nSince we assume for all , the idea is to use a statistical query to obtain an estimate of for all . Define and note that\nWe define our statistical query algorithm to do the following:\nFor each , make a statistical query for an estimate of to allowed query error . Let denote the obtained estimate for .\nReturn where\nWe now prove that this hypothesis satisfies . First, observe that\nNow, if , then . In the other case, clearly if , then . Thus, for any , this inequality holds. Combining this observation with eq. 17 ###reference_### yields\nIn the next calculation, for , let . Now, writing expanding the squared sum, applying linearity of expectation, and using the fact that for any , the RHS of eq. 18 ###reference_### is equal to\nUsing eq. 18 ###reference_###, appendix A ###reference_7###, and the fact that for and , yields\nThus, makes statistical queries to with query error and returns a hypothesis satisfying . Therefore, applying Theorem A.1 ###reference_theorem1### completes the proof of Theorem 4.6 ###reference_theorem6###. \u220e"
150
+ },
151
+ {
152
+ "section_id": "Appendix 2",
153
+ "parent_section_id": null,
154
+ "section_name": "Appendix B Coupon Collecting Learner: Proof of Lemma\u00a04.3",
155
+ "text": "The learner is defined as follows. Take samples from and for each , let denote the number of times has been sampled. Let denote the number of times has been sampled with the label respectively. The learner outputs the hypothesis defined by .\nSuppose that . Then .\nEach label seen for is an independent -valued random variable which is equal to with probability and so . Thus,\nby Hoeffding\u2019s inequality and our bound on . \u220e\nSuppose we take samples. Then .\nFor any , and a union bound completes the proof. \u220e\nThe following claim is an immediate corollary of the previous claim.\nSuppose we take samples. Then .\nPartition the samples into batches of size . Invoke B.2 ###reference_theorem2### on each batch of samples with . By the claim, each batch of samples contains a least copy of every point in with probability at least . Thus, by a union bound over the batches, our sample contains at least copies of every point in with probability at least . \u220e\nLet and . The learner takes samples from . Let denote the event that for all . By B.3 ###reference_theorem3###, we have . For each , let , i.e. the indicator that is misclassified by the learner.\nBy B.1 ###reference_theorem1###, we have\nby Markov\u2019s inequality. Therefore,\nwhich is at most . The number of examples used by the learner is\nand this completes the proof. \u220e"
156
+ },
157
+ {
158
+ "section_id": "Appendix 3",
159
+ "parent_section_id": null,
160
+ "section_name": "Appendix C Testing by Learning",
161
+ "text": "Let be a domain, let be a measure over , and let be a class of Boolean-valued functions over . Suppose that for every there exists a learning algorithm for under using samples. Then for every there is an -tester for under using samples.\nWe define the property testing algorithm as follows.\nTake samples and run to obtain a hypothesis for .\nCompute a function for which . (We remark that this step incurs a blowup in time-complexity, but does not require any additional samples.)\nTake new samples and let be an empirical estimate for .\nIf , then accept. If , then reject.\nIf , then .\nBy the guarantee of the learning algorithm, we have . Now, since is a function in as close as possible to , we have . Thus, if , then as well. Thus, by the triangle inequality, with probability at least we have as claimed. \u220e\nNow, consider the quantity from step (4) of the algorithm, . Let be the Bernoulli random variable which equals with probability . Note that where the \u2019s are independent copies of . Using Hoeffding\u2019s inequality we have\nwhich is at most when . We can now argue that the tester succeeds with probability at least . There are two cases to consider.\n: By C.2 ###reference_theorem2###, with probability less than and by the above calculation with probability at most . By a union bound, with probability at least neither event occurs, and conditioned on this we have and the algorithm accepts.\n: Then since . Again, with probability at least and conditioned on this event occurring we have and the algorithm rejects.\nTherefore, satisfies the conditions needed for lemma C.1 ###reference_theorem1###. \u220e"
162
+ }
163
+ ],
164
+ "tables": {},
165
+ "image_paths": {
166
+ "1": {
167
+ "figure_path": "2310.12375v2_figure_1.png",
168
+ "caption": "Figure 1: An illustration of the construction used in our proof of Theorem 1.1. The image represents the set of points in the hypercube {0,1}dsuperscript01\ud835\udc51\\{0,1\\}^{d}{ 0 , 1 } start_POSTSUPERSCRIPT italic_d end_POSTSUPERSCRIPT with Hamming weight in the interval [d2,d2+\u03b5\u2062d)\ud835\udc512\ud835\udc512\ud835\udf00\ud835\udc51[\\frac{d}{2},\\frac{d}{2}+\\varepsilon\\sqrt{d})[ divide start_ARG italic_d end_ARG start_ARG 2 end_ARG , divide start_ARG italic_d end_ARG start_ARG 2 end_ARG + italic_\u03b5 square-root start_ARG italic_d end_ARG ), increasing from bottom to top. The numbers on the left denote the Hamming weight of the points lying in the adjacent horizontal line. The Bisubscript\ud835\udc35\ud835\udc56B_{i}italic_B start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT blocks are the sets of points contained between two adjacent horizontal lines. Each orange shaded region within Bisubscript\ud835\udc35\ud835\udc56B_{i}italic_B start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT represents the set of points satisfied by a term ti,jsuperscript\ud835\udc61\ud835\udc56\ud835\udc57t^{i,j}italic_t start_POSTSUPERSCRIPT italic_i , italic_j end_POSTSUPERSCRIPT. The blue numbers represent the value that functions in the support of \ud835\udc9fyessubscript\ud835\udc9fyes\\mathcal{D}_{\\texttt{yes}}caligraphic_D start_POSTSUBSCRIPT yes end_POSTSUBSCRIPT and \ud835\udc9fnosubscript\ud835\udc9fno\\mathcal{D}_{\\texttt{no}}caligraphic_D start_POSTSUBSCRIPT no end_POSTSUBSCRIPT can take. We have used the notation \n?r\u22121,2\ud835\udc5f12r-1,2italic_r - 1 , 2? as shorthand for r\u22122,r\u22121\ud835\udc5f2\ud835\udc5f1r-2,r-1italic_r - 2 , italic_r - 1.",
169
+ "url": "http://arxiv.org/html/2310.12375v2/x1.png"
170
+ }
171
+ },
172
+ "validation": true,
173
+ "references": [
174
+ {
175
+ "1": {
176
+ "title": "Information theory in property testing and monotonicity testing in\nhigher dimension.",
177
+ "author": "Nir Ailon and Bernard Chazelle.",
178
+ "venue": "Information and Computation, 204(11):1704\u20131717, 2006.",
179
+ "url": null
180
+ }
181
+ },
182
+ {
183
+ "2": {
184
+ "title": "Estimating the distance to a monotone function.",
185
+ "author": "Nir Ailon, Bernard Chazelle, Seshadhri Comandur, and Ding Liu.",
186
+ "venue": "Random Structures Algorithms, 31(3):371\u2013383, 2007.",
187
+ "url": null
188
+ }
189
+ },
190
+ {
191
+ "3": {
192
+ "title": "A polynomial lower bound for testing monotonicity.",
193
+ "author": "Aleksandrs Belovs and Eric Blais.",
194
+ "venue": "In Proceedings, ACM Symposium on Theory of Computing (STOC),\n2016.",
195
+ "url": null
196
+ }
197
+ },
198
+ {
199
+ "4": {
200
+ "title": "Active property testing.",
201
+ "author": "Maria-Florina Balcan, Eric Blais, Avrim Blum, and Liu Yang.",
202
+ "venue": "In 53rd Annual IEEE Symposium on Foundations of Computer\nScience, FOCS, 2012.",
203
+ "url": null
204
+ }
205
+ },
206
+ {
207
+ "5": {
208
+ "title": "Testing and learning convex sets in the ternary hypercube.",
209
+ "author": "Hadley Black, Eric Blais, and Nathaniel Harms.",
210
+ "venue": "In 15th Innovations in Theoretical Computer Science Conference,\nITCS, 2024.",
211
+ "url": null
212
+ }
213
+ },
214
+ {
215
+ "6": {
216
+ "title": "Property testing lower bounds via communication complexity.",
217
+ "author": "Eric Blais, Joshua Brody, and Kevin Matulef.",
218
+ "venue": "Computational Complexity, 21(2):311\u2013358, 2012.",
219
+ "url": null
220
+ }
221
+ },
222
+ {
223
+ "7": {
224
+ "title": "Learning circuits with few negations.",
225
+ "author": "Eric Blais, Cl\u00e9ment L. Canonne, Igor Carboni Oliveira, Rocco A. Servedio,\nand Li-Yang Tan.",
226
+ "venue": "In RANDOM, 2015.",
227
+ "url": null
228
+ }
229
+ },
230
+ {
231
+ "8": {
232
+ "title": "A monotonicity tester for Boolean functions\nover the hypergrid .",
233
+ "author": "Hadley Black, Deeparnab Chakrabarty, and C. Seshadhri.",
234
+ "venue": "In Proceedings, ACM-SIAM Symposium on Discrete Algorithms\n(SODA), 2018.",
235
+ "url": null
236
+ }
237
+ },
238
+ {
239
+ "9": {
240
+ "title": "Domain reduction for monotonicity testing: A o(d)\ntester for boolean functions in -dimensions.",
241
+ "author": "Hadley Black, Deeparnab Chakrabarty, and C. Seshadhri.",
242
+ "venue": "In Proceedings of the 2020 ACM-SIAM Symposium on Discrete\nAlgorithms, SODA, 2020.",
243
+ "url": null
244
+ }
245
+ },
246
+ {
247
+ "10": {
248
+ "title": "A monotonicity tester for boolean functions on\n-dimensional hypergrids.",
249
+ "author": "Hadley Black, Deeparnab Chakrabarty, and C. Seshadhri.",
250
+ "venue": "In 64th IEEE Annual Symposium on Foundations of Computer\nScience, FOCS, 2023.",
251
+ "url": null
252
+ }
253
+ },
254
+ {
255
+ "11": {
256
+ "title": "Directed isoperimetric theorems for boolean functions on the\nhypergrid and an monotonicity tester.",
257
+ "author": "Hadley Black, Deeparnab Chakrabarty, and C. Seshadhri.",
258
+ "venue": "In Proceedings of the 55th Annual ACM Symposium on Theory of\nComputing, STOC, 2023.",
259
+ "url": null
260
+ }
261
+ },
262
+ {
263
+ "12": {
264
+ "title": "Monotonicity testing and shortest-path routing on the cube.",
265
+ "author": "Jop Bri\u00ebt, Sourav Chakraborty, David Garc\u00eda Soriano, and Ari Matsliah.",
266
+ "venue": "Combinatorica, 32(1):35\u201353, 2012.",
267
+ "url": null
268
+ }
269
+ },
270
+ {
271
+ "13": {
272
+ "title": "Vc dimension and distribution-free sample-based testing.",
273
+ "author": "Eric Blais, Renato Ferreira Pinto Jr, and Nathaniel Harms.",
274
+ "venue": "In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory\nof Computing, pages 504\u2013517, 2021.",
275
+ "url": null
276
+ }
277
+ },
278
+ {
279
+ "14": {
280
+ "title": "Lower bounds for local monotonicity reconstruction from\ntransitive-closure spanners.",
281
+ "author": "Arnab Bhattacharyya, Elena Grigorescu, Madhav Jha, Kyoming Jung, Sofya\nRaskhodnikova, and David Woodruff.",
282
+ "venue": "SIAM Journal on Discrete Mathematics (SIDMA), 26(2):618\u2013646,\n2012.",
283
+ "url": null
284
+ }
285
+ },
286
+ {
287
+ "15": {
288
+ "title": "A note on the distance to monotonicity of boolean functions.",
289
+ "author": "Arnab Bhattacharyya.",
290
+ "venue": "Technical Report 012, Electronic Colloquium on Computational\nComplexity (ECCC), 2008.",
291
+ "url": null
292
+ }
293
+ },
294
+ {
295
+ "16": {
296
+ "title": "Improved monotonicity testers via hypercube embeddings.",
297
+ "author": "Mark Braverman, Subhash Khot, Guy Kindler, and Dor Minzer.",
298
+ "venue": "In 14th Innovations in Theoretical Computer Science Conference,\nITCS, 2023.",
299
+ "url": null
300
+ }
301
+ },
302
+ {
303
+ "17": {
304
+ "title": "Isoperimetric inequalities for real-valued functions with\napplications to monotonicity testing.",
305
+ "author": "Hadley Black, Iden Kalemaj, and Sofya Raskhodnikova.",
306
+ "venue": "Random Structures & Algorithms, 2024.",
307
+ "url": null
308
+ }
309
+ },
310
+ {
311
+ "18": {
312
+ "title": "The power and limitations of uniform samples in testing properties of\nfigures.",
313
+ "author": "Piotr Berman, Meiram Murzabulatov, and Sofya Raskhodnikova.",
314
+ "venue": "Algorithmica, 81(3):1247\u20131266, 2019.",
315
+ "url": null
316
+ }
317
+ },
318
+ {
319
+ "19": {
320
+ "title": "Testing convexity of figures under the uniform distribution.",
321
+ "author": "Piotr Berman, Meiram Murzabulatov, and Sofya Raskhodnikova.",
322
+ "venue": "Random Struct. Algorithms, 54(3):413\u2013443, 2019.",
323
+ "url": null
324
+ }
325
+ },
326
+ {
327
+ "20": {
328
+ "title": "-testing.",
329
+ "author": "Piotr Berman, Sofya Raskhodnikova, and Grigory Yaroslavtsev.",
330
+ "venue": "In Proceedings, ACM Symposium on Theory of Computing (STOC),\n2014.",
331
+ "url": null
332
+ }
333
+ },
334
+ {
335
+ "21": {
336
+ "title": "Lower bounds for testing properties of functions over hypergrid\ndomains.",
337
+ "author": "Eric Blais, Sofya Raskhodnikova, and Grigory Yaroslavtsev.",
338
+ "venue": "In Proceedings, IEEE Conference on Computational Complexity\n(CCC), 2014.",
339
+ "url": null
340
+ }
341
+ },
342
+ {
343
+ "22": {
344
+ "title": "On the fourier spectrum of monotone functions.",
345
+ "author": "Nader H. Bshouty and Christino Tamon.",
346
+ "venue": "J. ACM, 43(4):747\u2013770, 1996.",
347
+ "url": null
348
+ }
349
+ },
350
+ {
351
+ "23": {
352
+ "title": "A characterization of constant-sample testable properties.",
353
+ "author": "Eric Blais and Yuichi Yoshida.",
354
+ "venue": "Random Struct. Algorithms, 55(1):73\u201388, 2019.",
355
+ "url": null
356
+ }
357
+ },
358
+ {
359
+ "24": {
360
+ "title": "Property testing on product distributions: Optimal testers for\nbounded derivative properties.",
361
+ "author": "Deeparnab Chakrabarty, Kashyap Dixit, Madhav Jha, and C. Seshadhri.",
362
+ "venue": "In Proceedings, ACM-SIAM Symposium on Discrete Algorithms\n(SODA), 2015.",
363
+ "url": null
364
+ }
365
+ },
366
+ {
367
+ "25": {
368
+ "title": "Mildly exponential lower bounds on tolerant testers for monotonicity,\nunateness, and juntas.",
369
+ "author": "Xi Chen, Anindya De, Yuhao Li, Shivam Nadimpalli, and Rocco A. Servedio.",
370
+ "venue": "In Proceedings of the 2024 ACM-SIAM Symposium on Discrete\nAlgorithms, SODA, 2024.",
371
+ "url": null
372
+ }
373
+ },
374
+ {
375
+ "26": {
376
+ "title": "Boolean function monotonicity testing requires (almost)\n non-adaptive queries.",
377
+ "author": "Xi Chen, Anindya De, Rocco A. Servedio, and Li-Yang Tan.",
378
+ "venue": "In Proceedings, ACM Symposium on Theory of Computing (STOC),\n2015.",
379
+ "url": null
380
+ }
381
+ },
382
+ {
383
+ "27": {
384
+ "title": "Sample-based high-dimensional convexity testing.",
385
+ "author": "Xi Chen, Adam Freilich, Rocco A. Servedio, and Timothy Sun.",
386
+ "venue": "In Approximation, Randomization, and Combinatorial Optimization.\nAlgorithms and Techniques, APPROX/RANDOM, 2017.",
387
+ "url": null
388
+ }
389
+ },
390
+ {
391
+ "28": {
392
+ "title": "Testing k-monotonicity: The rise and fall of boolean functions.",
393
+ "author": "Cl\u00e9ment L. Canonne, Elena Grigorescu, Siyao Guo, Akash Kumar, and Karl\nWimmer.",
394
+ "venue": "Theory Comput., 15:1\u201355, 2019.",
395
+ "url": null
396
+ }
397
+ },
398
+ {
399
+ "29": {
400
+ "title": "Optimal bounds for monotonicity and Lipschitz testing over\nhypercubes and hypergrids.",
401
+ "author": "Deeparnab Chakrabarty and C. Seshadhri.",
402
+ "venue": "In Proceedings, ACM Symposium on Theory of Computing (STOC),\n2013.",
403
+ "url": null
404
+ }
405
+ },
406
+ {
407
+ "30": {
408
+ "title": "An monotonicity tester for Boolean functions over the\nhypercube.",
409
+ "author": "Deeparnab Chakrabarty and C. Seshadhri.",
410
+ "venue": "SIAM Journal on Computing (SICOMP), 45(2):461\u2013472, 2014.",
411
+ "url": null
412
+ }
413
+ },
414
+ {
415
+ "31": {
416
+ "title": "An optimal lower bound for monotonicity testing over hypergrids.",
417
+ "author": "Deeparnab Chakrabarty and C. Seshadhri.",
418
+ "venue": "Theory of Computing, 10:453\u2013464, 2014.",
419
+ "url": null
420
+ }
421
+ },
422
+ {
423
+ "32": {
424
+ "title": "New algorithms and lower bounds for monotonicity testing.",
425
+ "author": "Xi Chen, Rocco A. Servedio, and Li-Yang. Tan.",
426
+ "venue": "In Proceedings, IEEE Symposium on Foundations of Computer\nScience (FOCS), 2014.",
427
+ "url": null
428
+ }
429
+ },
430
+ {
431
+ "33": {
432
+ "title": "Beyond talagrand: New lower bounds for testing monotonicity and\nunateness.",
433
+ "author": "Xi Chen, Erik Waingarten, and Jinyu Xie.",
434
+ "venue": "In Proceedings, ACM Symposium on Theory of Computing (STOC),\n2017.",
435
+ "url": null
436
+ }
437
+ },
438
+ {
439
+ "34": {
440
+ "title": "Improved testing algorithms for monotonicity.",
441
+ "author": "Yevgeny Dodis, Oded Goldreich, Eric Lehman, Sofya Raskhodnikova, Dana Ron, and\nAlex Samorodnitsky.",
442
+ "venue": "Proceedings, International Workshop on Randomization and\nComputation (RANDOM), 1999.",
443
+ "url": null
444
+ }
445
+ },
446
+ {
447
+ "35": {
448
+ "title": "Spot-checkers.",
449
+ "author": "Funda Ergun, Sampath Kannan, Ravi Kumar, Ronitt Rubinfeld, and Mahesh\nViswanathan.",
450
+ "venue": "J. Comput. System Sci., 60(3):717\u2013751, 2000.",
451
+ "url": null
452
+ }
453
+ },
454
+ {
455
+ "36": {
456
+ "title": "Partial tests, universal tests and decomposability.",
457
+ "author": "Eldar Fischer, Yonatan Goldhirsh, and Oded Lachish.",
458
+ "venue": "In Innovations in Theoretical Computer Science, ITCS. ACM,\n2014.",
459
+ "url": null
460
+ }
461
+ },
462
+ {
463
+ "37": {
464
+ "title": "Distribution testing under the parity trace, 2023.",
465
+ "author": "Renato Ferreira Pinto Jr and Nathaniel Harms.",
466
+ "venue": null,
467
+ "url": null
468
+ }
469
+ },
470
+ {
471
+ "38": {
472
+ "title": "Distribution testing with a confused collector.",
473
+ "author": "Renato Ferreira Pinto Jr and Nathaniel Harms.",
474
+ "venue": "In 15th Innovations in Theoretical Computer Science Conference,\nITCS, 2024.",
475
+ "url": null
476
+ }
477
+ },
478
+ {
479
+ "39": {
480
+ "title": "On the strength of comparisons in property testing.",
481
+ "author": "Eldar Fischer.",
482
+ "venue": "Information and Computation, 189(1):107\u2013116, 2004.",
483
+ "url": null
484
+ }
485
+ },
486
+ {
487
+ "40": {
488
+ "title": "Monotonicity testing over general poset domains.",
489
+ "author": "Eldar Fischer, Eric Lehman, Ilan Newman, Sofya Raskhodnikova, and Ronitt\nRubinfeld.",
490
+ "venue": "Proceedings, ACM Symposium on Theory of Computing (STOC), 2002.",
491
+ "url": null
492
+ }
493
+ },
494
+ {
495
+ "41": {
496
+ "title": "Trading query complexity for sample-based testing and multi-testing\nscalability.",
497
+ "author": "Eldar Fischer, Oded Lachish, and Yadu Vasudev.",
498
+ "venue": "In IEEE 56th Annual Symposium on Foundations of Computer\nScience, FOCS. IEEE Computer Society, 2015.",
499
+ "url": null
500
+ }
501
+ },
502
+ {
503
+ "42": {
504
+ "title": "Approximating the distance to monotonicity in high dimensions.",
505
+ "author": "Shahar Fattal and Dana Ron.",
506
+ "venue": "ACM Trans. on Algorithms (TALG), 6(3), 2010.",
507
+ "url": null
508
+ }
509
+ },
510
+ {
511
+ "43": {
512
+ "title": "Testing monotonicity.",
513
+ "author": "Oded Goldreich, Shafi Goldwasser, Eric Lehman, Dana Ron, and Alex\nSamorodnitsky.",
514
+ "venue": "Combinatorica, 20:301\u2013337, 2000.",
515
+ "url": null
516
+ }
517
+ },
518
+ {
519
+ "44": {
520
+ "title": "Property testing and its connection to learning and approximation.",
521
+ "author": "Oded Goldreich, Shafi Goldwasser, and Dana Ron.",
522
+ "venue": "Journal of the ACM, 45(4):653\u2013750, 1998.",
523
+ "url": null
524
+ }
525
+ },
526
+ {
527
+ "45": {
528
+ "title": "Flipping out with many flips: Hardness of testing k-monotonicity.",
529
+ "author": "Elena Grigorescu, Akash Kumar, and Karl Wimmer.",
530
+ "venue": "SIAM J. Discret. Math., 33(4):2111\u20132125, 2019.",
531
+ "url": null
532
+ }
533
+ },
534
+ {
535
+ "46": {
536
+ "title": "On sample-based testers.",
537
+ "author": "Oded Goldreich and Dana Ron.",
538
+ "venue": "ACM Trans. Comput. Theory, 8(2):7:1\u20137:54, 2016.",
539
+ "url": null
540
+ }
541
+ },
542
+ {
543
+ "47": {
544
+ "title": "Distribution-free property testing.",
545
+ "author": "Shirley Halevy and Eyal Kushilevitz.",
546
+ "venue": "Proceedings, International Workshop on Randomization and\nComputation (RANDOM), 2003.",
547
+ "url": null
548
+ }
549
+ },
550
+ {
551
+ "48": {
552
+ "title": "Testing monotonicity over graph products.",
553
+ "author": "Shirley Halevy and Eyal Kushilevitz.",
554
+ "venue": "Random Structures Algorithms, 33(1):44\u201367, 2008.",
555
+ "url": null
556
+ }
557
+ },
558
+ {
559
+ "49": {
560
+ "title": "Downsampling for testing and learning in product distributions.",
561
+ "author": "Nathaniel Harms and Yuichi Yoshida.",
562
+ "venue": "In 49th International Colloquium on Automata, Languages, and\nProgramming, ICALP 2022, 2022.",
563
+ "url": null
564
+ }
565
+ },
566
+ {
567
+ "50": {
568
+ "title": "Efficient noise-tolerant learning from statistical queries.",
569
+ "author": "Michael J. Kearns.",
570
+ "venue": "J. ACM, 45(6):983\u20131006, 1998.",
571
+ "url": null
572
+ }
573
+ },
574
+ {
575
+ "51": {
576
+ "title": "On monotonicity testing and Boolean isoperimetric type theorems.",
577
+ "author": "Subhash Khot, Dor Minzer, and Muli Safra.",
578
+ "venue": "In Proceedings, IEEE Symposium on Foundations of Computer\nScience (FOCS), 2015.",
579
+ "url": null
580
+ }
581
+ },
582
+ {
583
+ "52": {
584
+ "title": "On monotonicity testing and boolean isoperimetric-type theorems.",
585
+ "author": "Subhash Khot, Dor Minzer, and Muli Safra.",
586
+ "venue": "SIAM J. Comput., 47(6):2238\u20132276, 2018.",
587
+ "url": null
588
+ }
589
+ },
590
+ {
591
+ "53": {
592
+ "title": "Testing problems with sublearning sample complexity.",
593
+ "author": "Michael J. Kearns and Dana Ron.",
594
+ "venue": "J. Comput. Syst. Sci., 61(3):428\u2013456, 2000.",
595
+ "url": null
596
+ }
597
+ },
598
+ {
599
+ "54": {
600
+ "title": "Constant depth circuits, fourier transform, and learnability.",
601
+ "author": "Nathan Linial, Yishay Mansour, and Noam Nisan.",
602
+ "venue": "J. ACM, 40(3):607\u2013620, 1993.",
603
+ "url": null
604
+ }
605
+ },
606
+ {
607
+ "55": {
608
+ "title": "On disjoint chains of subsets.",
609
+ "author": "Eric Lehman and Dana Ron.",
610
+ "venue": "Journal of Combinatorial Theory, Series A, 94(2):399\u2013404,\n2001.",
611
+ "url": null
612
+ }
613
+ },
614
+ {
615
+ "56": {
616
+ "title": "Properly learning monotone functions via local correction.",
617
+ "author": "Jane Lange, Ronitt Rubinfeld, and Arsen Vasilyan.",
618
+ "venue": "In 63rd IEEE Annual Symposium on Foundations of Computer\nScience, FOCS, 2022.",
619
+ "url": null
620
+ }
621
+ },
622
+ {
623
+ "57": {
624
+ "title": "Agnostic proper learning of monotone functions: beyond the black-box\ncorrection barrier.",
625
+ "author": "Jane Lange and Arsen Vasilyan.",
626
+ "venue": "In 64th IEEE Annual Symposium on Foundations of Computer\nScience, FOCS, 2023.",
627
+ "url": null
628
+ }
629
+ },
630
+ {
631
+ "58": {
632
+ "title": "Parameterized property testing of functions.",
633
+ "author": "Ramesh Krishnan S. Pallavoor, Sofya Raskhodnikova, and Nithin Varma.",
634
+ "venue": "ACM Trans. Comput. Theory, 9(4):17:1\u201317:19, 2018.",
635
+ "url": null
636
+ }
637
+ },
638
+ {
639
+ "59": {
640
+ "title": "Monotonicity testing.",
641
+ "author": "Sofya Raskhodnikova.",
642
+ "venue": "Masters Thesis, MIT, 1999.",
643
+ "url": null
644
+ }
645
+ },
646
+ {
647
+ "60": {
648
+ "title": "Approximating the Influence of Monotone Boolean Functions in\n Query Complexity.",
649
+ "author": "Dana Ron, Ronitt Rubinfeld, Muli Safra, and Omri Weinstein.",
650
+ "venue": "In Proceedings, International Workshop on Randomization and\nComputation (RANDOM), 2011.",
651
+ "url": null
652
+ }
653
+ },
654
+ {
655
+ "61": {
656
+ "title": "Robust characterization of polynomials with applications to program\ntesting.",
657
+ "author": "R. Rubinfeld and M. Sudan.",
658
+ "venue": "SIAM Journal of Computing, 25:647\u2013668, 1996.",
659
+ "url": null
660
+ }
661
+ },
662
+ {
663
+ "62": {
664
+ "title": "Parallel monotonicity reconstruction.",
665
+ "author": "Michael E. Saks and C. Seshadhri.",
666
+ "venue": "In Proceedings, ACM-SIAM Symposium on Discrete Algorithms\n(SODA), 2008.",
667
+ "url": null
668
+ }
669
+ },
670
+ {
671
+ "63": {
672
+ "title": "How much are increasing sets positively correlated?",
673
+ "author": "Michel Talagrand.",
674
+ "venue": "Comb., 16(2):243\u2013258, 1996.",
675
+ "url": null
676
+ }
677
+ }
678
+ ],
679
+ "url": "http://arxiv.org/html/2310.12375v2"
680
+ }
20240819/2311.04061v2.json ADDED
@@ -0,0 +1,470 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "title": "Neural Appearance Model for Cloth Rendering",
3
+ "abstract": "The realistic rendering of woven and knitted fabrics has posed significant challenges throughout many years. Previously, fiber-based micro-appearance models have achieved considerable success in attaining high levels of realism. However, rendering such models remains complex due to the intricate internal scatterings of hundreds of fibers within a yarn, requiring vast amounts of memory and time to render. In this paper, we introduce a new framework to capture aggregated appearance by tracing many light paths through the underlying fiber geometry. We then employ lightweight neural networks to accurately model the aggregated BSDF, which allows for the precise modeling of a diverse array of materials while offering substantial improvements in speed and reductions in memory. Furthermore, we introduce a novel importance sampling scheme to further speed up the rate of convergence. We validate the efficacy and versatility of our framework through comparisons with preceding fiber-based shading models as well as the most recent yarn-based model.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Fabrics are important in our everyday lives and their virtual representation has long been a key focus in computer graphics research. Fabrics, with their detailed structure of fibers, plies, and yarns, present a unique hierarchical geometric structure at each aggregation level, offering a wide array of appearances for different cloth types.\nThe challenge of accurately modeling the detailed geometry and scattering of fabrics has led to the development of different methods, mainly split into curve-based and surface-based shading models. Curve-based models, using the Bidirectional Curve Scattering Distribution Function (BCSDF), aim to explicitly represent individual elements like fibers [khungurn2015matching], plies [montazeri2020practical] and yarns [Zhu2023yarn], similar to methods used in hair rendering. However, these models, while accurate, face challenges like long rendering times and high storage needs.\nMicro-appearance models focus on representing fabrics at the microscale, detailing each fiber using high-resolution volumes or fiber meshes [zhao2011building]. These models are great at rendering with high detail but are limited in practical use due to their data-intensive nature and their challenges in manipulation and rendering.\nIn contrast, surface-based models, which depict fabric as a 2D sheet and use specific reflectance models for appearance, are known for being lightweight and user-friendly e.g. [sadeghi2013practical]. These models, widely used in the computer graphics industry, can accurately reproduce the overall appearance of fabrics. However, they often fail to capture the fine details necessary for realistic close-up images.\nIn this paper, we aim to combine the light scattering of a twisted yarn, made up of hundreds of fibers, by simulating the paths of many light rays into the yarn and analyzing their scattering properties. From this analysis, we show that the scattering can be described as three distinct components, and we introduce a new way to model each component using various neural networks and analytical solutions. Additionally, we derive an analytical importance sampling scheme that closely matches the combined scattering distribution. We demonstrate that our model is able to run up to 23 times faster while using up to 600 times less memory when compared to previous fiber-based methods. The memory gain is directly dependent on the fiber count which is often a few hundred. In summary, our main contributions include:\nWe introduce a novel neural framework for modeling the light scattering within a bundle of fibers in the yarns. By dividing the scattering into components, we can efficiently model various types of yarns across a broad range of parameters. Our proposed method runs significantly faster and uses substantially less memory.\nWe further improve on existing neural network approaches by using the channel-wise PReLU activation function to increase performance. We demonstrate its effectiveness by comparing its performance against various model architectures.\nFrom our observations, we derive a new analytical fitting of the scattering for importance sampling. We have managed to derive a new observation-based empirical and invertible importance sampling scheme that matches the scattering distribution to further accelerate the rate of convergence."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Prior Work",
15
+ "text": "Surface-Based Cloth Models - Cloth rendering has been a subject of extensive research, with various models being developed to achieve a balance between realism and computational efficiency. Traditional models have often depicted cloth as 2D surfaces, utilizing Bidirectional Reflectance Distribution Functions (BRDFs) or Bidirectional Texture Functions (BTFs) to illustrate light-cloth interactions [sattler2003efficient, adabala2003, irawan2012specular, sadeghi2013practical, rainer2019neural, kuznetsov2021neumip, jin2022woven, zhu2023realistic]. While these surface-based models are lightweight and capable of producing high-quality results at mid-to-far distances, they typically lack the fine-grained details necessary for close-up views.\nMicro-appearance Cloth Models - On the contrary, micro-appearance models have emerged, focusing on the fabric\u2019s micro-geometry down to the fiber level, offering a high fidelity and detail [schroder2011volumetric, zhao2011building, khungurn2015matching, loubet2018, Montazeri2021mechanics, aliaga2017appearance]. However, the high complexity of these models presents a significant challenge in rendering them efficiently. Various precomputation-based rendering methods have been developed to address this, such as the techniques proposed by [zhao2013modular, khungurn2017fast, luan2017fiber] to improve performance and GPU-based methods developed by [Wu2017realtime] for procedurally generated fabrics. Nevertheless, these methods often compromise either on performance or physical accuracy, as well as being difficult to edit and render.\nAggregation Based Techniques - In recent years, aggregation-based methodologies have been introduced to the domain of cloth rendering, aiming to model the multiple scatterings of a bundle of fibers implicitly. Montazeri et al. [montazeri2020practical, montazeri2021practical] pioneered an aggregated technique that encapsulates the light scatterings of individual fibers, approximating the overall appearance at the ply level for woven and knitted fabrics, respectively; later followed by the yarn-level extension [khattar2024multiscale]. However, their model, while being fast and practical, is predominantly observation-driven and not efficient for yarns with a high number of plies.\nZhu et al. [Zhu2022fur] advanced the field by proposing a technique to aggregate the scatterings of a bundle of straight fur fibers in a data-driven manner. They then parameterize the aggregated scattering by fitting analytical lobes, followed by the training of a neural network to predict the parameters for the lobes. This model does not accommodate twisted fibers and, being a far-field model, cannot represent yarn-level highlights at close-up views. In a subsequent study [Zhu2023yarn], the authors introduced an analytical solution designed to accurately approximate the multi-scattering of yarn by utilizing dual scattering theory. However, this model relies heavily on the assumptions inherited from dual scattering theory and also imposes additional assumptions on the fiber shading model. In contrast, our work, while employing similar fiber scattering models and micro-geometry, presents a more generalized model capable of fitting any yarn without necessitating specific assumptions.\nNeural BRDF Representation - [chen2020ibrdf, sztrajman2021nbrdf] was one of the firsts to leverage machine learning to represent BRDFs and achieve a high compression rate while preserving the fidelity of the BRDF. 
In this paper, we improve on Sztrajman et al.\u2019s [sztrajman2021nbrdf] framework to support aggregated yarn scattering, as we demonstrate that using their framework in a na\u00efve manner do not produce optimal results."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Preliminaries",
21
+ "text": "Every yarn is made up of twisted plies, which in turn consist of hundreds of strands called fibers. In our study, the primary aim is to aggregate a single-ply geometry while explicitly tracing interactions between plies for multi-ply yarn. The arrangement of the fibers around the yarn is characterized by the parameters , , and . Here, represents the number of fibers in the yarn, represents the fiber density, and describes the twist factor.\nwhere is the fiber radius, is the yarn radius, is the number of revolutions, and denotes the length along the yarn. Importantly, these parameters are defined such that they are invariant to the yarn\u2019s overall scale, allowing us to use our fitted model on all scales of the yarn with the same parameters, without having to re-train the neural networks or re-fit the parameters. The list of all parameters is detailed in Table 1 ###reference_###."
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "Fiber Shading Model",
27
+ "text": "In this paper, our fiber shading model is based on the method by Khungurn et al. [khungurn2015matching], where fibers are modeled as glass-like tubes and the scatterings are split into Reflection (R) and Transmission (TT) lobes.\nwhere . The incident and outgoing directions , are parameterized into the longitudinal angle and azimuthal angle using the coordinate system defined in Marschner et al. [Marschner2003]. represents the scattering in the longitudinal plane, and represents the scattering in the azimuthal plane. They are defined as:\nwhere , represents the attenuation of each component, , represents the longitudinal roughness, and represents the azimuthal roughness. is the normalized Gaussian function defined in Khungurn et al.[khungurn2015matching] and denotes the von Mises distribution. Furthermore, is the Fresnel term and is approximated via Schlick\u2019s approximation [schlick1994]:"
28
+ },
29
+ {
30
+ "section_id": "3.2",
31
+ "parent_section_id": "3",
32
+ "section_name": "Yarn Shading Frame",
33
+ "text": "In this work, we found it useful to describe the light scattering in terms of two separate shading frames, the yarn shading frame, and the surface fiber shading frame. The yarn shading frame is defined as a traditional anisotropic surface shading frame on the yarn cylinder, with the incident and outgoing directions parameterized with longitudinal angle and azimuthal angle . The normal of the frame is aligned with the normal of the cylinder surface, while the tangent of the frame is aligned in the direction of the yarn tangent. We chose this in contrast to existing hair literature, where a longitudinal angle and an azimuthal offset are used, to make the process of finding the surface fiber shading frame easier. The surface fiber shading frame describes the fiber shading frame on the surface of the yarn, and using the coordinates system of Marschner et al. [Marschner2003] with when pointing towards the surface normal. The frame is rotated around the surface normal due to the fiber twist. In our paper, we denote the directions relative to the yarn shading frame with and , while the directions relative to the surface fiber shading frame as and which can be defined as:\nwhere . Derivation of the angle can be found in the appendix below.\n###figure_1### ###figure_2### ###figure_3### ###figure_4### ###figure_5###"
34
+ },
35
+ {
36
+ "section_id": "4",
37
+ "parent_section_id": null,
38
+ "section_name": "Our Aggregated Shading Model",
39
+ "text": "In alignment with the methodology proposed by Zhu et al. [Zhu2022fur], the aggregation of yarn fibers is achieved by encapsulating them within a closely bounded cylinder. We aggregate the yarn scattering by simulating many light rays into the yarn and recording their exiting radiance and direction to obtain the Radiance Distribution Map (RDM). The RDM is a 4-dimensional map of the exiting radiance for a given and and is parameterized by , . Further details on obtaining the RDM are given in \u00a75.3 ###reference_###. We then propose a framework to model the RDM by observation and analysis and split the RDM into 3 components to model them individually.\nIt might intuitively seem advantageous to model the RDM directly by fitting a neural network to it. However, our experiments suggest that this approach is not optimal. Initially, it was observed that at certain incoming angles, specifically at grazing azimuthal angles, a substantial amount of light traverses through the yarn cylinder without interacting with any fibers. In such instances, most of the light is directly transmitted and exhibits Dirac delta distributions, resulting in the corresponding RDM displaying sharp lobes with pronounced peak values. Such distributions pose a significant learning challenge for the neural network due to their high values.\nFurthermore, a considerable fraction of the brightness within the RDM is attributed to the paths characterized by a single bounce. These paths, interacting with a single fiber on the yarn\u2019s surface before exiting, create the highlights of the yarn and introduce abrupt alterations in brightness within the RDM. By isolating these paths into a distinct component, we achieved more precise highlights and facilitated the learning process for the neural network regarding the remaining data. Consequently, we introduce the subsequent shading model as a mixture of separate components T, R, and M, corresponding to the Direct Transmission Component, Direct Reflection Component, and Multi-Scattering Component respectively:\nwhere the T component models the light paths that directly pass through the yarn without intersecting many fibers, the R component models the light paths that hit a single fiber and are reflected away, and the M component models the multiple scattering of light within the yarn before exiting. The components T and M are more complex and hence modelled by a neural network, while the R component can be found analytically. By splitting the shading model into separate components, we can better fit each lobe more accurately, whilst using fewer parameters for the neural network, increasing computational efficiency. The first column in Fig. 1 ###reference_### illustrated the pathways associated with each component, followed by the visualization of the distributions of each component. Fig. 2 ###reference_### visualize the appearance of each component to showcase their contribution individually."
40
+ },
41
+ {
42
+ "section_id": "4.1",
43
+ "parent_section_id": "4",
44
+ "section_name": "Direct Transmission Component",
45
+ "text": "The Direct Transmission Component of our model represents the fraction of incoming light that directly passes through the yarn without intersecting any fibers, given a specific incident direction . It becomes particularly prominent in yarn assemblies with lower fiber densities, where a high proportion of light rays pass through directly, resulting in a more translucent appearance. Its influence is also more pronounced at grazing azimuthal angles. Consequently, we incorporate this component into the final scattering function. This component can be mathematically expressed as:\nwhere is the Dirac delta distribution, which is zero except when . The probability is multiplied with the Dirac delta distribution to determine the radiance of the transmission component. Instead of fitting directly with a neural network, we fit . This component is a two-dimensional map and can easily be modelled by a lightweight neural network."
46
+ },
47
+ {
48
+ "section_id": "4.2",
49
+ "parent_section_id": "4",
50
+ "section_name": "Direct Reflection Component",
51
+ "text": "Since the Direct Reflection Component is the reflection of fibers with a single bounce, this component corresponds to a bright highlight on the yarn surface. Therefore, this component contributes to a sharp change in radiance in the RDM. Hence, it would be beneficial to model this component analytically as opposed to fitting it with a neural network as this would allow us to achieve more accurate highlights, while simultaneously allowing the neural networks to converge at a faster rate with the other parameters. We model this component as a single fiber scattering relative to the surface fiber shading frame on the upper hemisphere of the surface."
52
+ },
53
+ {
54
+ "section_id": "4.3",
55
+ "parent_section_id": "4",
56
+ "section_name": "Multi-Scattering Component",
57
+ "text": "The Multi-Scattering Component captures the detailed interactions among fibers within the ply or yarn and is represented as in the scattering function. By using a neural network, we can effectively learn the distribution of these interactions, creating a robust multi-scattering model. This method is especially useful for modeling yarn aggregation to capture the scatterings of more complex yarn geometries, such as twist, a feature that the existing studies overlooked [Zhu2022fur]. For more detailed information and specific details about the network, please refer to \u00a75.3 ###reference_###."
58
+ },
59
+ {
60
+ "section_id": "4.4",
61
+ "parent_section_id": "4",
62
+ "section_name": "Importance Sampling",
63
+ "text": "###figure_6### ###figure_7### Given that pieces of cloth are composed of numerous yarns, the inter-reflection amongst the yarns significantly influences the overall visual appearance. It is important to employ an advanced importance sampling scheme to reduce variance as showcased in Fig. 3 ###reference_###. Nonetheless, due to the complexity of light scattering within a yarn when utilizing a neural network in our approach, we are precluded from using the Bidirectional Scattering Distribution Function (BSDF) for importance sampling. Consequently, we chose to fit an invertible analytical approximation of the data to enhance the sampling of the distribution. Sztrajman et al. [sztrajman2021nbrdf] utilized Blinn-Phong lobes to fit the distribution of their Neural BRDFs. However, given that the scattering of light within a yarn does not center around the half angle, the Blinn-Phong lobe is a poor fit for our model. From our observations of the multi-scattering component, we found that the light mainly scatters around the upper half of the cone centered at the fiber tangent at the yarn surface. Thus, we propose the following importance sampling scheme:\nSample lobe - We sample the lobes proportional to the energy of their lobes. Since is comprised of light passing-through with a probability of , the proportion of energy can be described as directly. The remaining portion of samples can be split proportionally according to the energy of and , which can be approximated by a constant which is pre-computed beforehand based on the computed RDM.\nSample outgoing direction - For the direct transmission component, we sample in the direction to simulate the light passing through the yarn. The direct reflection is sampled similarly to the fiber\u2019s distribution on the yarn surface. It is done by sampling the longitudinal angles via a normalized Gaussian around with the standard deviation corresponding to the fiber reflection\u2019s longitudinal roughness , while the azimuthal angle is uniformly distributed on the upper cone in the range .\nThe remaining multi-scattering component is sampled via two lobes which are derived from careful observations of the RDM. The first lobe is comprised of a distribution similar to the direct reflection component but with a different longitudinal and azimuthal roughness. It is defined by a longitudinal Gaussian distribution and azimuthal von Mises distribution , where azimuthal angle is zero at n. The second distribution is described by a simple uniform sphere to capture the remaining directions not covered by the first lobe. The two lobes are split with a parameter . The parameters , , and are to be fitted beforehand.\nCompute the PDF - The PDF can be described as a mixture of the lobes and can be computed as:\nHere, the PDF for each component is defined as:\nwith their proportions:\nwhere represents a Gaussian normalized in the range with mean and standard deviation . represents the von Mises distribution with mean and roughness ."
64
+ },
65
+ {
66
+ "section_id": "5",
67
+ "parent_section_id": null,
68
+ "section_name": "Our Neural Approach",
69
+ "text": "Due to the intricate nature of light scattering within a yarn, we opted to model it using neural networks, inspired by the success of Sztrajman et al. [sztrajman2021nbrdf] in accurately and efficiently modeling measured BRDFs. Additionally, this approach provides the generality and flexibility to model various yarn types without making assumptions about the underlying geometry. Beyond the enhancements described in \u00a74 ###reference_### to boost the neural network\u2019s performance, we have also refined their base architecture to achieve higher accuracy with minimal runtime costs by employing the channel-wise Parametric Rectified Linear Unit (PReLU) [he2015prelu] activation function instead of the Rectified Linear Unit (ReLU) [nair2010relu] activation presented in their paper. A comparison with this model na\u00efvely is shown in Fig. 4 ###reference_###.\n###table_1### ###figure_8### ###figure_9### ###figure_10### In brief, channel-wise PReLU allows each channel of the input its own learnable parameter, which provides the model with additional flexibility to learn more complex representations without a substantial increase in computational cost and mitigates issues related to the \"dying ReLU\" problem. The \"dying ReLU\" problem refers to the phenomenon where neurons in a network become inactive and only output zero during training, essentially ceasing to learn or update and thereby reducing the capacity of the model. This often occurs when a large gradient flows through a ReLU neuron, updating the weights in such a way that the neuron will always output zero. PReLU helps to avoid this issue by maintaining active learning and adapting its negative slope to the learned features of the input data.\nAdditionally, We chose the channel-wise PReLU activation function over ReLU because it introduces additional trainable parameters for the negative values of ReLU, allowing the neural network more flexibility to overfit with nearly no extra runtime cost, while avoiding the instability of the dying ReLU problem, which is more prevalent in smaller neural networks. Please refer to \u00a76 ###reference_### for additional details on the performance of various neural network architectures and activation functions."
70
+ },
71
+ {
72
+ "section_id": "5.1",
73
+ "parent_section_id": "5",
74
+ "section_name": "Data Generation",
75
+ "text": "To generate data for computing the RDM and preparing the training data for the neural networks, we initially establish our foundational single-ply yarn geometry, as previously detailed in \u00a73 ###reference_###. A bounding cylinder is defined around the yarn, and light rays, each possessing an initial weight of 1, are projected at random directions , uniformly distributed over a hemisphere, into the yarn. Monte Carlo random walks are subsequently utilized to trace the interactions of each ray with the fibers until it exits the yarn cylinder. For each sample, variables such as the incident angle, outgoing angle, outgoing weight, and the number of bounces (depth) are documented. Our dataset consists of 50-100 million sampled rays that are fully traced for each five yarn materials, with a maximum bounce depth of 200 on average. This sample collection process persists until convergence is attained."
76
+ },
77
+ {
78
+ "section_id": "5.2",
79
+ "parent_section_id": "5",
80
+ "section_name": "Direct Transmission Neural Network",
81
+ "text": "To acquire the training data for this network, we compute the probability of transmission for a given incoming direction with the gathered samples:\nWe gather samples using the method outlined in \u00a75.1 ###reference_###, then organize the data into two 22x90 histograms, representing with and bins across a range of 90x360 degrees. The first histogram calculates the number of direct transmission paths, while the second histogram counts the total number of paths. Subsequently, we divide the first histogram by the second to derive the probability map .\nNext, we train a lightweight neural network on the probability map. The network, which takes as a unit Cartesian vector and predicts , is configured with two hidden layers and follows a 3-7-7-1 structure. The hidden layers utilize the channel-wise Parametric Rectified Linear Unit (PReLU) activation function, and the output layer employs the Sigmoid activation function. The model is trained using the Mean Squared Error (MSE) loss function, coupled with the Adam optimizer. Our network architectures are illustrated in the last column of Fig 1 ###reference_###."
82
+ },
83
+ {
84
+ "section_id": "5.3",
85
+ "parent_section_id": "5",
86
+ "section_name": "Multi-Scattering Neural Network",
87
+ "text": "The neural network is trained on the multi-scattering component of the RDM. To prepare the data for the neural network, it is necessary to isolate the multi-scattering component from the collected samples. Initially, samples with a depth of 0 are removed to exclude the direct transmission samples, along with samples having a depth of 1 to exclude the direct reflection samples. Subsequently, a weighted 4D histogram is computed from the remaining data into 22x90x45x90 bins of , , , and , each spanning across the respective ranges of 90x360x180x360 degrees. The data is then divided by the number of samples in each incident bin and the solid angle in each outgoing bin to obtain the radiance at each bin. With the multi-scattering component RDM available, samples are randomly drawn from it to generate our training data.\nThe neural network is configured to accept Cartesian unit vectors and as inputs and to output r, g, b radiance values. The model incorporates two hidden layers with a 6-21-21-3 structure. The hidden layers utilize channel-wise PReLU activation functions, while the final layer employs the exponential activation function. The model is trained using the Mean Squared Error (MSE) loss function and optimized with the Adam optimizer.\nAs previously noted in \u00a74.3 ###reference_###, the multi-scattering component is represented by . However, in practice, the neural network was configured to model the product of and . Here, represents the cosine foreshortening factor and is inherently included in the RDM as we record the radiance for each and directly."
88
+ },
89
+ {
90
+ "section_id": "6",
91
+ "parent_section_id": null,
92
+ "section_name": "Model Analysis and Ablation",
93
+ "text": "In this section, we perform an ablation study about the neural network used in the multi-scattering component by comparing the performance of the model with different architectures. The model is trained on polyester until convergence (40 epoch). Fig. 5 ###reference_### shows the final loss of various model architectures along with different activation functions. It can be seen that channel-wise PReLU consistently outperforms other activation functions with the same model architecture. This is due to the additional trainable parameters of channel-wise PReLU, which gives the model more flexibility at a negligible increase in runtime cost. It also can be seen that increasing the model weights from our base model to 6-21-64-64-21-3 increases the model size by 10.6 times while only offering a 9% decrease in loss. From this, we can see that the model does not need to be overly large, and performs well even with a smaller number of weights."
94
+ },
95
+ {
96
+ "section_id": "7",
97
+ "parent_section_id": null,
98
+ "section_name": "Results",
99
+ "text": "In this section, we validate our model and evaluate its performance by comparing renderings with our model to reference images generated by rendering the explicit fiber geometry [khungurn2015matching] as well as the hierarchical yarn-based model [Zhu2023yarn]. For all the materials presented, besides polyester, we have used the fiber shading parameters given in Khungurn et al. [khungurn2015matching] which were computed by fitting the parameters to match real-life photographs. The parameters of polyester are determined ad hoc to demonstrate the flexibility of our framework. We then wrap these fibers into yarns with given fiber geometry parameters , , and . A summary of the parameters can be found in Table 3 ###reference_###. All images were rendered with path tracing on Mitsuba 3 [Mitsuba3], including neural network inference, using an Intel Core i7-10750H 6 Core Processor 2.60GHz machine, while neural network training was done on an NVIDIA GeForce RTX4080 (Mobile). The computation time required to gather the RDM is around a minute on an RTX4080 (Mobile). The average time it takes to train a neural network per material for the direct transmission and multi-scattering components are 30 seconds and 30 minutes respectively."
100
+ },
101
+ {
102
+ "section_id": "7.1",
103
+ "parent_section_id": "7",
104
+ "section_name": "Reference Comparisons",
105
+ "text": "3-Ply Knitted Glove - In this section, we rendered a scene with a 3-ply knitted glove. The base yarn curves defining the glove were taken from [Yukselyarns, Yuksel2012, Wu2017realtime] and wrapped with 3-plies. The plies are then wrapped with fibers procedurally to generate the ground truth image [zhao2016fitting]. For Fig. Neural Appearance Model for Cloth Rendering we rendered the scene at a resolution of 1080x1080. Our model matches the ground truth very well and performs 23 times faster while using around 300 times less memory. The scene is lit with an environmental map along with two spherical lights on the top-right and bottom-left corners. We also rendered the scene with different fiber parameters in Fig. 6 ###reference_### to highlight the flexibility of our model.\n###figure_11### ###figure_12### ###figure_13### ###figure_14### Ours\n###figure_15### ###figure_16### ###figure_17### Ref\n###figure_18### ###figure_19### ###figure_20### Ours\n###figure_21### ###figure_22### ###figure_23### Ref\n###figure_24### ###figure_25### ###figure_26### Close-up Yarn - In Fig. 7 ###reference_###, we compare our model against the reference on a close-up view of a yarn with varying fiber parameters. The scene is rendered at a resolution of 512x512 and then cropped to an appropriate size. The reference is rendered at 1024 spp, while our model is rendered at 64spp and performs on average 30 times faster. Our model can match the overall yarn appearance despite not having explicit fiber geometry. Please note for a multi-ply yarn, our model aggregates the fiber bundle of a single ply and we rely on the renderer to take the ply-ply interactions into account.\nWoven and Knitted Fabric - In Fig. 8 ###reference_###, we rendered our images using the dataset of yarn curves by Leaf et al. [leaf2018stanfordyarn]. The curves were interpolated and tiled into an appropriate size. All the images were rendered at a resolution of 720x720. All the reference images were rendered at 1024spp except for silk and cotton which were rendered at 4096spp as they take longer to converge due to their very high albedo. From our comparisons, our model matches the reference images very well and can accurately recreate yarn-level details even in the absence of explicit fiber geometry. However, although still visually accurate, we do note that cotton has difficulty matching the reference which is discussed further in the limitations section in \u00a78 ###reference_###. Our model performs around 11-17 times faster while using around 200-600 times less memory. Please refer to Table 2 ###reference_### for the full statistics."
106
+ },
107
+ {
108
+ "section_id": "7.2",
109
+ "parent_section_id": "7",
110
+ "section_name": "Comparisons with Zhu et al. 2023",
111
+ "text": "As depicted in Fig. 9 ###reference_###, we demonstrate that our approach not only achieves faster rendering speeds, as detailed in Table 2 ###reference_###, but also more accurately replicates the reference fiber-based appearance model by Khungurn et al. [khungurn2015matching]. Our model\u2019s superiority is due to our neural data-driven methodology that adapts more flexibly, allowing for an exact fit to the reference. In contrast, [Zhu2023yarn] uses an approximated fiber appearance model, which does not model Fresnel effects, and often requiring manual adjustments to align with the reference model. Notably, we use the exact same set of parameters and values across the three models (reference, ours, and [Zhu2023yarn]) without any post-tweaking.\n###figure_27### ###figure_28### ###figure_29### ###figure_30### ###figure_31### ###figure_32### ###figure_33### ###figure_34### ###figure_35### ###figure_36### ###figure_37### ###figure_38### ###figure_39### ###figure_40### ###figure_41### ###figure_42### ###figure_43### ###figure_44### ###figure_45### ###figure_46### ###figure_47### ###figure_48### ###figure_49### ###figure_50### ###figure_51### ###figure_52### ###figure_53### ###figure_54### ###figure_55### ###figure_56### ###figure_57### ###figure_58### ###figure_59### ###figure_60### ###figure_61### ###figure_62### ###figure_63### ###figure_64### ###figure_65### ###figure_66### .\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nfleece\n300\n0.30\n0.24\n0.040, 0.087, 0.087\n0.452, 0.725, 0.948\n7.238\n10.000\n25.989\n\nsilk\n300\n0.20\n0.00\n0.745, 0.008, 0.070\n0.620, 0.553, 0.562\n1.000\n10.000\n19.823\n\npolyester\n200\n0.40\n0.20\n0.700, 0.700, 0.700\n0.600, 0.000, 0.800\n5.238\n20.000\n25.000\n\ncotton\n600\n0.35\n0.06\n0.989, 0.959, 0.874\n0.999, 0.999, 0.999\n1.000\n27.197\n38.269\n\ngabardine\n450\n0.25\n0.12\n0.185, 0.047, 0.069\n0.999, 0.330, 0.354\n2.141\n10.000\n23.548"
112
+ },
113
+ {
114
+ "section_id": "8",
115
+ "parent_section_id": null,
116
+ "section_name": "Conclusion and Discussion",
117
+ "text": "Limitations - Our final aggregated ply shading model assumes that the light scattering enters and exits from the same spot and does not exhibit subsurface scattering. Based on our experiments, this is true for most fabrics except for fibers with very high albedo (such as cotton with 0.999) as they exhibit significantly more bounces per sample and hence travel more throughout the yarn, causing the exit point to be far from the the enter point. While this assumption satisfies most of our cloth types, we left a more accurate distribution of the exit point as a future study. Furthermore, our model assumes the appearance of the yarn is not spatially varying, and is unable to handle spatially varying yarn colors such as dyed cloth. Lastly, our model requires re-training to alter the yarn parameters, which might limit its use in interactive design and modelling for artists.\nFuture Works - Besides addressing the limitations above, a straightforward extension can include the training and fitting of more complex fiber distributions and scattering as the neural network has the potential to learn any complex distributions. Additionally, we would like to develop and leverage an auto-encoder architecture similar to [sztrajman2021nbrdf] to instantly interpolate our fitted yarn models with different fiber parameters to provide additional flexibility to designers and artists. Although our model performs well with an analytically fitted importance sampling lobe, we are interested in seeing if neural importance sampling methods could be used to further improve convergence [xu2023neusample]. We also would like to extend our method to support efficient level-of-detail simplification. This involves simplifying our model into a 3-dimensional BCSDF using a smaller neural network for far-field views, specifically when the width of the yarn is less than a pixel.\nConclusions - In this paper, we presented a novel aggregated shading framework by leveraging the flexibility and generality of neural networks to model the light interactions with a bundle of fibers i.e. ply. Our model can replicate the appearance of many fabrics while running significantly faster and requiring less memory. Through observations of the RDM, we also derived an analytical approximation and importance sampling scheme to further improve the rate of convergence of our model. Finally, our fitted model can be applied to any yarn geometry instantly, providing greater flexibility in designing fabrics."
118
+ }
119
+ ],
120
+ "appendix": [],
121
+ "tables": {
122
+ "1": {
123
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>List of important symbols for our neural yarn shading</figcaption>\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.21\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T1.21.22.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S3.T1.21.22.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.21.22.1.1.1\">Notation</span></th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.21.22.1.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T1.21.22.1.2.1\">\n<span class=\"ltx_p\" id=\"S3.T1.21.22.1.2.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.21.22.1.2.1.1.1\">Definition</span></span>\n</span>\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_tt\" id=\"S3.T1.1.1.1\"></th>\n<td class=\"ltx_td ltx_align_justify ltx_border_tt\" id=\"S3.T1.1.1.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T1.1.1.2.1\">\n<span class=\"ltx_p\" id=\"S3.T1.1.1.2.1.1\">number of fibers in a yarn</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T1.2.2.1\"></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T1.2.2.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T1.2.2.2.1\">\n<span class=\"ltx_p\" id=\"S3.T1.2.2.2.1.1\">twist factor</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T1.3.3.1\" style=\"padding-bottom:2.15277pt;\"></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T1.3.3.2\" style=\"padding-bottom:2.15277pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T1.3.3.2.1\">\n<span class=\"ltx_p\" id=\"S3.T1.3.3.2.1.1\">fiber density in a yarn</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S3.T1.4.4.1\"></th>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S3.T1.4.4.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T1.4.4.2.1\">\n<span class=\"ltx_p\" id=\"S3.T1.4.4.2.1.1\">attenuation of fiber reflection</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T1.5.5.1\"></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T1.5.5.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T1.5.5.2.1\">\n<span class=\"ltx_p\" id=\"S3.T1.5.5.2.1.1\">attenuation of fiber transmission</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T1.6.6.1\"></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T1.6.6.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T1.6.6.2.1\">\n<span class=\"ltx_p\" id=\"S3.T1.6.6.2.1.1\">longitudinal roughness of fiber reflection</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.7.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T1.7.7.1\"></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T1.7.7.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T1.7.7.2.1\">\n<span class=\"ltx_p\" id=\"S3.T1.7.7.2.1.1\">longitudinal roughness of fiber 
transmission</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.8.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T1.8.8.1\"></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T1.8.8.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T1.8.8.2.1\">\n<span class=\"ltx_p\" id=\"S3.T1.8.8.2.1.1\">azimuthal roughness of fiber transmission</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.9.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S3.T1.9.9.1\"></th>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S3.T1.9.9.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T1.9.9.2.1\">\n<span class=\"ltx_p\" id=\"S3.T1.9.9.2.1.1\">incoming direction relative to yarn frame</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.10.10\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T1.10.10.1\"></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T1.10.10.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T1.10.10.2.1\">\n<span class=\"ltx_p\" id=\"S3.T1.10.10.2.1.1\">outgoing direction relative to yarn frame</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.11.11\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T1.11.11.1\"></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T1.11.11.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T1.11.11.2.1\">\n<span class=\"ltx_p\" id=\"S3.T1.11.11.2.1.1\">incoming direction relative to surface fiber frame</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.12.12\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T1.12.12.1\"></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T1.12.12.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T1.12.12.2.1\">\n<span class=\"ltx_p\" id=\"S3.T1.12.12.2.1.1\">outgoing direction relative to surface fiber frame</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.13.13\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T1.13.13.1\"></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T1.13.13.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T1.13.13.2.1\">\n<span class=\"ltx_p\" id=\"S3.T1.13.13.2.1.1\">incoming longitudinal angle</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.14.14\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T1.14.14.1\"></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T1.14.14.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T1.14.14.2.1\">\n<span class=\"ltx_p\" id=\"S3.T1.14.14.2.1.1\">outgoing longitudinal angle</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.15.15\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T1.15.15.1\"></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T1.15.15.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T1.15.15.2.1\">\n<span class=\"ltx_p\" id=\"S3.T1.15.15.2.1.1\">incoming azimuthal angle</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.16.16\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T1.16.16.1\" style=\"padding-bottom:2.15277pt;\"></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T1.16.16.2\" style=\"padding-bottom:2.15277pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T1.16.16.2.1\">\n<span class=\"ltx_p\" id=\"S3.T1.16.16.2.1.1\">outgoing azimuthal angle</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.17.17\">\n<th class=\"ltx_td 
ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S3.T1.17.17.1\"></th>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S3.T1.17.17.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T1.17.17.2.1\">\n<span class=\"ltx_p\" id=\"S3.T1.17.17.2.1.1\">our aggregated yarn scattering function</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.18.18\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T1.18.18.1\"></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T1.18.18.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T1.18.18.2.1\">\n<span class=\"ltx_p\" id=\"S3.T1.18.18.2.1.1\">our aggregated yarn direct reflection component</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.19.19\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T1.19.19.1\"></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T1.19.19.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T1.19.19.2.1\">\n<span class=\"ltx_p\" id=\"S3.T1.19.19.2.1.1\">our aggregated yarn direct transmission component</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.20.20\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T1.20.20.1\"></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T1.20.20.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T1.20.20.2.1\">\n<span class=\"ltx_p\" id=\"S3.T1.20.20.2.1.1\">our aggregated yarn multi-scattering component</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.21.21\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S3.T1.21.21.1\" style=\"padding-bottom:4.30554pt;\"></th>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb\" id=\"S3.T1.21.21.2\" style=\"padding-bottom:4.30554pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T1.21.21.2.1\">\n<span class=\"ltx_p\" id=\"S3.T1.21.21.2.1.1\">fiber scattering function</span>\n</span>\n</td>\n</tr>\n</tbody>\n</table>\n</figure>",
124
+ "capture": "Table 1: List of important symbols for our neural yarn shading"
125
+ },
126
+ "2": {
127
+ "table_html": "<figure class=\"ltx_table\" id=\"S7.T2\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Performance Statistics for Fig. <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.04061v2#S7.F8\" title=\"Figure 8 \u2023 7.2 Comparisons with Zhu et al. 2023 \u2023 7 Results \u2023 Neural Appearance Model for Cloth Rendering\"><span class=\"ltx_text ltx_ref_tag\">8</span></a>. All rendering times were counted at equal quality (EQ).</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S7.T2.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S7.T2.4.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S7.T2.4.1.1.1\"><span class=\"ltx_text\" id=\"S7.T2.4.1.1.1.1\" style=\"font-size:90%;\">Material</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"3\" id=\"S7.T2.4.1.1.2\"><span class=\"ltx_text\" id=\"S7.T2.4.1.1.2.1\" style=\"font-size:90%;\">Time (s)</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"3\" id=\"S7.T2.4.1.1.3\"><span class=\"ltx_text\" id=\"S7.T2.4.1.1.3.1\" style=\"font-size:90%;\">Memory (MB)</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T2.4.2.2\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S7.T2.4.2.2.1\"></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"S7.T2.4.2.2.2\"><span class=\"ltx_text\" id=\"S7.T2.4.2.2.2.1\" style=\"font-size:90%;\">Ref</span></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"S7.T2.4.2.2.3\"><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S7.T2.4.2.2.3.1.1\" style=\"font-size:90%;\">[</span><span class=\"ltx_ref ltx_missing_citation ltx_ref_self\">Zhu2023yarn</span><span class=\"ltx_text\" id=\"S7.T2.4.2.2.3.2.2\" style=\"font-size:90%;\">]</span></cite></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"S7.T2.4.2.2.4\"><span class=\"ltx_text\" id=\"S7.T2.4.2.2.4.1\" style=\"font-size:90%;\">Ours</span></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"S7.T2.4.2.2.5\"><span class=\"ltx_text\" id=\"S7.T2.4.2.2.5.1\" style=\"font-size:90%;\">Ref</span></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"S7.T2.4.2.2.6\"><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S7.T2.4.2.2.6.1.1\" style=\"font-size:90%;\">[</span><span class=\"ltx_ref ltx_missing_citation ltx_ref_self\">Zhu2023yarn</span><span class=\"ltx_text\" id=\"S7.T2.4.2.2.6.2.2\" style=\"font-size:90%;\">]</span></cite></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"S7.T2.4.2.2.7\"><span class=\"ltx_text\" id=\"S7.T2.4.2.2.7.1\" style=\"font-size:90%;\">Ours</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S7.T2.4.3.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S7.T2.4.3.1.1\"><span class=\"ltx_text\" id=\"S7.T2.4.3.1.1.1\" style=\"font-size:90%;\">fleece</span></th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S7.T2.4.3.1.2\"><span class=\"ltx_text\" id=\"S7.T2.4.3.1.2.1\" style=\"font-size:90%;\">12631</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S7.T2.4.3.1.3\"><span class=\"ltx_text\" id=\"S7.T2.4.3.1.3.1\" style=\"font-size:90%;\">3263</span></td>\n<td 
class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S7.T2.4.3.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T2.4.3.1.4.1\" style=\"font-size:90%;\">952</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S7.T2.4.3.1.5\"><span class=\"ltx_text\" id=\"S7.T2.4.3.1.5.1\" style=\"font-size:90%;\">6032</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S7.T2.4.3.1.6\"><span class=\"ltx_text\" id=\"S7.T2.4.3.1.6.1\" style=\"font-size:90%;\">20</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S7.T2.4.3.1.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T2.4.3.1.7.1\" style=\"font-size:90%;\">20</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T2.4.4.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S7.T2.4.4.2.1\"><span class=\"ltx_text\" id=\"S7.T2.4.4.2.1.1\" style=\"font-size:90%;\">silk</span></th>\n<td class=\"ltx_td ltx_align_right\" id=\"S7.T2.4.4.2.2\"><span class=\"ltx_text\" id=\"S7.T2.4.4.2.2.1\" style=\"font-size:90%;\">31223</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S7.T2.4.4.2.3\"><span class=\"ltx_text\" id=\"S7.T2.4.4.2.3.1\" style=\"font-size:90%;\">20831</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S7.T2.4.4.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T2.4.4.2.4.1\" style=\"font-size:90%;\">2908</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S7.T2.4.4.2.5\"><span class=\"ltx_text\" id=\"S7.T2.4.4.2.5.1\" style=\"font-size:90%;\">6216</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S7.T2.4.4.2.6\"><span class=\"ltx_text\" id=\"S7.T2.4.4.2.6.1\" style=\"font-size:90%;\">20</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S7.T2.4.4.2.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T2.4.4.2.7.1\" style=\"font-size:90%;\">20</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T2.4.5.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S7.T2.4.5.3.1\"><span class=\"ltx_text\" id=\"S7.T2.4.5.3.1.1\" style=\"font-size:90%;\">polyester</span></th>\n<td class=\"ltx_td ltx_align_right\" id=\"S7.T2.4.5.3.2\"><span class=\"ltx_text\" id=\"S7.T2.4.5.3.2.1\" style=\"font-size:90%;\">8229</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S7.T2.4.5.3.3\"><span class=\"ltx_text\" id=\"S7.T2.4.5.3.3.1\" style=\"font-size:90%;\">11932</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S7.T2.4.5.3.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T2.4.5.3.4.1\" style=\"font-size:90%;\">612</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S7.T2.4.5.3.5\"><span class=\"ltx_text\" id=\"S7.T2.4.5.3.5.1\" style=\"font-size:90%;\">5935</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S7.T2.4.5.3.6\"><span class=\"ltx_text\" id=\"S7.T2.4.5.3.6.1\" style=\"font-size:90%;\">29</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S7.T2.4.5.3.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T2.4.5.3.7.1\" style=\"font-size:90%;\">29</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T2.4.6.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S7.T2.4.6.4.1\"><span class=\"ltx_text\" id=\"S7.T2.4.6.4.1.1\" style=\"font-size:90%;\">cotton</span></th>\n<td class=\"ltx_td ltx_align_right\" id=\"S7.T2.4.6.4.2\"><span class=\"ltx_text\" id=\"S7.T2.4.6.4.2.1\" style=\"font-size:90%;\">44836</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S7.T2.4.6.4.3\"><span class=\"ltx_text\" id=\"S7.T2.4.6.4.3.1\" style=\"font-size:90%;\">24167</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S7.T2.4.6.4.4\"><span class=\"ltx_text ltx_font_bold\" 
id=\"S7.T2.4.6.4.4.1\" style=\"font-size:90%;\">2626</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S7.T2.4.6.4.5\"><span class=\"ltx_text\" id=\"S7.T2.4.6.4.5.1\" style=\"font-size:90%;\">4391</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S7.T2.4.6.4.6\"><span class=\"ltx_text\" id=\"S7.T2.4.6.4.6.1\" style=\"font-size:90%;\">7</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S7.T2.4.6.4.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T2.4.6.4.7.1\" style=\"font-size:90%;\">7</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T2.4.7.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S7.T2.4.7.5.1\"><span class=\"ltx_text\" id=\"S7.T2.4.7.5.1.1\" style=\"font-size:90%;\">gabardine</span></th>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S7.T2.4.7.5.2\"><span class=\"ltx_text\" id=\"S7.T2.4.7.5.2.1\" style=\"font-size:90%;\">11427</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S7.T2.4.7.5.3\"><span class=\"ltx_text\" id=\"S7.T2.4.7.5.3.1\" style=\"font-size:90%;\">13024</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S7.T2.4.7.5.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T2.4.7.5.4.1\" style=\"font-size:90%;\">785</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S7.T2.4.7.5.5\"><span class=\"ltx_text\" id=\"S7.T2.4.7.5.5.1\" style=\"font-size:90%;\">9340</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S7.T2.4.7.5.6\"><span class=\"ltx_text\" id=\"S7.T2.4.7.5.6.1\" style=\"font-size:90%;\">20</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S7.T2.4.7.5.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T2.4.7.5.7.1\" style=\"font-size:90%;\">20</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
128
+ "capture": "Table 2: Performance Statistics for Fig. 8. All rendering times were counted at equal quality (EQ)."
129
+ },
130
+ "3": {
131
+ "table_html": "<figure class=\"ltx_table\" id=\"S7.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Fiber parameters used throughout our paper. The shading parameters are based on matched fibers from <cite class=\"ltx_cite ltx_citemacro_cite\">[<span class=\"ltx_ref ltx_missing_citation ltx_ref_self\">khungurn2015matching</span>]</cite>, and the geometrical parameters are set on an ad hoc basis</figcaption>\n<p class=\"ltx_p ltx_align_center\" id=\"S7.T3.8\">.\n\n\n<span class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S7.T3.8.8\">\n<span class=\"ltx_thead\">\n<span class=\"ltx_tr\" id=\"S7.T3.8.8.8\">\n<span class=\"ltx_td ltx_th ltx_th_row ltx_border_tt\" id=\"S7.T3.8.8.8.9\"></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S7.T3.1.1.1.1\"></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S7.T3.2.2.2.2\"></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S7.T3.3.3.3.3\"></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S7.T3.4.4.4.4\"></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S7.T3.5.5.5.5\"></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S7.T3.6.6.6.6\"></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S7.T3.7.7.7.7\"></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S7.T3.8.8.8.8\"></span></span>\n</span>\n<span class=\"ltx_tbody\">\n<span class=\"ltx_tr\" id=\"S7.T3.8.8.9.1\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S7.T3.8.8.9.1.1\">fleece</span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S7.T3.8.8.9.1.2\">300</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T3.8.8.9.1.3\">0.30</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T3.8.8.9.1.4\">0.24</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T3.8.8.9.1.5\">0.040, 0.087, 0.087</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T3.8.8.9.1.6\">0.452, 0.725, 0.948</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T3.8.8.9.1.7\">7.238</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T3.8.8.9.1.8\">10.000</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T3.8.8.9.1.9\">25.989</span></span>\n<span class=\"ltx_tr\" id=\"S7.T3.8.8.10.2\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S7.T3.8.8.10.2.1\">silk</span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S7.T3.8.8.10.2.2\">300</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S7.T3.8.8.10.2.3\">0.20</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S7.T3.8.8.10.2.4\">0.00</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S7.T3.8.8.10.2.5\">0.745, 0.008, 0.070</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S7.T3.8.8.10.2.6\">0.620, 0.553, 0.562</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S7.T3.8.8.10.2.7\">1.000</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S7.T3.8.8.10.2.8\">10.000</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S7.T3.8.8.10.2.9\">19.823</span></span>\n<span class=\"ltx_tr\" id=\"S7.T3.8.8.11.3\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" 
id=\"S7.T3.8.8.11.3.1\">polyester</span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S7.T3.8.8.11.3.2\">200</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S7.T3.8.8.11.3.3\">0.40</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S7.T3.8.8.11.3.4\">0.20</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S7.T3.8.8.11.3.5\">0.700, 0.700, 0.700</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S7.T3.8.8.11.3.6\">0.600, 0.000, 0.800</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S7.T3.8.8.11.3.7\">5.238</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S7.T3.8.8.11.3.8\">20.000</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S7.T3.8.8.11.3.9\">25.000</span></span>\n<span class=\"ltx_tr\" id=\"S7.T3.8.8.12.4\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S7.T3.8.8.12.4.1\">cotton</span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S7.T3.8.8.12.4.2\">600</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S7.T3.8.8.12.4.3\">0.35</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S7.T3.8.8.12.4.4\">0.06</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S7.T3.8.8.12.4.5\">0.989, 0.959, 0.874</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S7.T3.8.8.12.4.6\">0.999, 0.999, 0.999</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S7.T3.8.8.12.4.7\">1.000</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S7.T3.8.8.12.4.8\">27.197</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S7.T3.8.8.12.4.9\">38.269</span></span>\n<span class=\"ltx_tr\" id=\"S7.T3.8.8.13.5\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S7.T3.8.8.13.5.1\">gabardine</span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb\" id=\"S7.T3.8.8.13.5.2\">450</span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S7.T3.8.8.13.5.3\">0.25</span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S7.T3.8.8.13.5.4\">0.12</span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S7.T3.8.8.13.5.5\">0.185, 0.047, 0.069</span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S7.T3.8.8.13.5.6\">0.999, 0.330, 0.354</span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S7.T3.8.8.13.5.7\">2.141</span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S7.T3.8.8.13.5.8\">10.000</span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S7.T3.8.8.13.5.9\">23.548</span></span>\n</span>\n</span></p>\n</figure>",
132
+ "capture": "Table 3: Fiber parameters used throughout our paper. The shading parameters are based on matched fibers from [khungurn2015matching], and the geometrical parameters are set on an ad hoc basis"
133
+ }
134
+ },
135
+ "image_paths": {
136
+ "1": {
137
+ "figure_path": "2311.04061v2_figure_1.png",
138
+ "caption": "Figure 1: Overview of our pipeline. The first step is to explicitly trace the rays and label them into three components to gather the data (direct transmission T, direct reflection R, multi-scattering M). Next, collect them into Radiance Distribution Maps (RDM). Here, we separate each component of the RDM (T, R, M) to demonstrate the vastly different scales and distributions of each component and visualize for when \u03b8i=45\u2218subscript\ud835\udf03\ud835\udc56superscript45\\theta_{i}=45^{\\circ}italic_\u03b8 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT = 45 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT and \u03d5i=0\u2218subscriptitalic-\u03d5\ud835\udc56superscript0\\phi_{i}=0^{\\circ}italic_\u03d5 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT = 0 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT. Lastly, the networks to learn the T and M components are visualized while R is being computed analytically.",
139
+ "url": "http://arxiv.org/html/2311.04061v2/x2.png"
140
+ },
141
+ "2(a)": {
142
+ "figure_path": "2311.04061v2_figure_2(a).png",
143
+ "caption": "Figure 2: The contribution of each component (direct transmission T, direct reflection R, multi-scattering M) to the final appearance of the yarn.",
144
+ "url": "http://arxiv.org/html/2311.04061v2/"
145
+ },
146
+ "2(b)": {
147
+ "figure_path": "2311.04061v2_figure_2(b).png",
148
+ "caption": "Figure 2: The contribution of each component (direct transmission T, direct reflection R, multi-scattering M) to the final appearance of the yarn.",
149
+ "url": "http://arxiv.org/html/2311.04061v2/"
150
+ },
151
+ "2(c)": {
152
+ "figure_path": "2311.04061v2_figure_2(c).png",
153
+ "caption": "Figure 2: The contribution of each component (direct transmission T, direct reflection R, multi-scattering M) to the final appearance of the yarn.",
154
+ "url": "http://arxiv.org/html/2311.04061v2/"
155
+ },
156
+ "2(d)": {
157
+ "figure_path": "2311.04061v2_figure_2(d).png",
158
+ "caption": "Figure 2: The contribution of each component (direct transmission T, direct reflection R, multi-scattering M) to the final appearance of the yarn.",
159
+ "url": "http://arxiv.org/html/2311.04061v2/"
160
+ },
161
+ "3(a)": {
162
+ "figure_path": "2311.04061v2_figure_3(a).png",
163
+ "caption": "Figure 3: Comparison of uniform sampling vs our proposed importance sampling scheme. The images are rendered at 64spp and demonstrate that our importance sampling scheme significantly reduces the variance with less noise.",
164
+ "url": "http://arxiv.org/html/2311.04061v2/"
165
+ },
166
+ "3(b)": {
167
+ "figure_path": "2311.04061v2_figure_3(b).png",
168
+ "caption": "Figure 3: Comparison of uniform sampling vs our proposed importance sampling scheme. The images are rendered at 64spp and demonstrate that our importance sampling scheme significantly reduces the variance with less noise.",
169
+ "url": "http://arxiv.org/html/2311.04061v2/"
170
+ },
171
+ "4(a)": {
172
+ "figure_path": "2311.04061v2_figure_4(a).png",
173
+ "caption": "Figure 4: We compare our neural network approach with the na\u00efve approach by contrasting them with the reference. Our method models each component of the RDM as described in \u00a74, while the na\u00efve approach models the RDM directly using the framework described in [sztrajman2021nbrdf]. Our approach successfully models the reference, including the subtle multi-scatterings, while the latter does not.",
174
+ "url": "http://arxiv.org/html/2311.04061v2/"
175
+ },
176
+ "4(b)": {
177
+ "figure_path": "2311.04061v2_figure_4(b).png",
178
+ "caption": "Figure 4: We compare our neural network approach with the na\u00efve approach by contrasting them with the reference. Our method models each component of the RDM as described in \u00a74, while the na\u00efve approach models the RDM directly using the framework described in [sztrajman2021nbrdf]. Our approach successfully models the reference, including the subtle multi-scatterings, while the latter does not.",
179
+ "url": "http://arxiv.org/html/2311.04061v2/x10.png"
180
+ },
181
+ "4(c)": {
182
+ "figure_path": "2311.04061v2_figure_4(c).png",
183
+ "caption": "Figure 4: We compare our neural network approach with the na\u00efve approach by contrasting them with the reference. Our method models each component of the RDM as described in \u00a74, while the na\u00efve approach models the RDM directly using the framework described in [sztrajman2021nbrdf]. Our approach successfully models the reference, including the subtle multi-scatterings, while the latter does not.",
184
+ "url": "http://arxiv.org/html/2311.04061v2/"
185
+ },
186
+ "6(a)": {
187
+ "figure_path": "2311.04061v2_figure_6(a).png",
188
+ "caption": "Figure 6: Rendering results of a 3-ply glove with fiber parameters; fleece material; fleece with half of the fiber density \u03c1\ud835\udf0c\\rhoitalic_\u03c1; fleece with double twist factor \u03b1\ud835\udefc\\alphaitalic_\u03b1; polyester material. Please refer to the supplementary video for a comprehensive overview.",
189
+ "url": "http://arxiv.org/html/2311.04061v2/"
190
+ },
191
+ "6(b)": {
192
+ "figure_path": "2311.04061v2_figure_6(b).png",
193
+ "caption": "Figure 6: Rendering results of a 3-ply glove with fiber parameters; fleece material; fleece with half of the fiber density \u03c1\ud835\udf0c\\rhoitalic_\u03c1; fleece with double twist factor \u03b1\ud835\udefc\\alphaitalic_\u03b1; polyester material. Please refer to the supplementary video for a comprehensive overview.",
194
+ "url": "http://arxiv.org/html/2311.04061v2/"
195
+ },
196
+ "6(c)": {
197
+ "figure_path": "2311.04061v2_figure_6(c).png",
198
+ "caption": "Figure 6: Rendering results of a 3-ply glove with fiber parameters; fleece material; fleece with half of the fiber density \u03c1\ud835\udf0c\\rhoitalic_\u03c1; fleece with double twist factor \u03b1\ud835\udefc\\alphaitalic_\u03b1; polyester material. Please refer to the supplementary video for a comprehensive overview.",
199
+ "url": "http://arxiv.org/html/2311.04061v2/"
200
+ },
201
+ "6(d)": {
202
+ "figure_path": "2311.04061v2_figure_6(d).png",
203
+ "caption": "Figure 6: Rendering results of a 3-ply glove with fiber parameters; fleece material; fleece with half of the fiber density \u03c1\ud835\udf0c\\rhoitalic_\u03c1; fleece with double twist factor \u03b1\ud835\udefc\\alphaitalic_\u03b1; polyester material. Please refer to the supplementary video for a comprehensive overview.",
204
+ "url": "http://arxiv.org/html/2311.04061v2/"
205
+ },
206
+ "7(a)": {
207
+ "figure_path": "2311.04061v2_figure_7(a).png",
208
+ "caption": "Figure 7: Single-yarn comparisons between our aggregated model and the fiber-based ground truth with varying fiber parameters. In this figure, we compare against both 1-ply and 3-ply yarn and demonstrate that our model can accurately recreate the ply level highlights.",
209
+ "url": "http://arxiv.org/html/2311.04061v2/"
210
+ },
211
+ "7(b)": {
212
+ "figure_path": "2311.04061v2_figure_7(b).png",
213
+ "caption": "Figure 7: Single-yarn comparisons between our aggregated model and the fiber-based ground truth with varying fiber parameters. In this figure, we compare against both 1-ply and 3-ply yarn and demonstrate that our model can accurately recreate the ply level highlights.",
214
+ "url": "http://arxiv.org/html/2311.04061v2/"
215
+ },
216
+ "7(c)": {
217
+ "figure_path": "2311.04061v2_figure_7(c).png",
218
+ "caption": "Figure 7: Single-yarn comparisons between our aggregated model and the fiber-based ground truth with varying fiber parameters. In this figure, we compare against both 1-ply and 3-ply yarn and demonstrate that our model can accurately recreate the ply level highlights.",
219
+ "url": "http://arxiv.org/html/2311.04061v2/"
220
+ },
221
+ "7(d)": {
222
+ "figure_path": "2311.04061v2_figure_7(d).png",
223
+ "caption": "Figure 7: Single-yarn comparisons between our aggregated model and the fiber-based ground truth with varying fiber parameters. In this figure, we compare against both 1-ply and 3-ply yarn and demonstrate that our model can accurately recreate the ply level highlights.",
224
+ "url": "http://arxiv.org/html/2311.04061v2/"
225
+ },
226
+ "7(e)": {
227
+ "figure_path": "2311.04061v2_figure_7(e).png",
228
+ "caption": "Figure 7: Single-yarn comparisons between our aggregated model and the fiber-based ground truth with varying fiber parameters. In this figure, we compare against both 1-ply and 3-ply yarn and demonstrate that our model can accurately recreate the ply level highlights.",
229
+ "url": "http://arxiv.org/html/2311.04061v2/"
230
+ },
231
+ "7(f)": {
232
+ "figure_path": "2311.04061v2_figure_7(f).png",
233
+ "caption": "Figure 7: Single-yarn comparisons between our aggregated model and the fiber-based ground truth with varying fiber parameters. In this figure, we compare against both 1-ply and 3-ply yarn and demonstrate that our model can accurately recreate the ply level highlights.",
234
+ "url": "http://arxiv.org/html/2311.04061v2/"
235
+ },
236
+ "7(g)": {
237
+ "figure_path": "2311.04061v2_figure_7(g).png",
238
+ "caption": "Figure 7: Single-yarn comparisons between our aggregated model and the fiber-based ground truth with varying fiber parameters. In this figure, we compare against both 1-ply and 3-ply yarn and demonstrate that our model can accurately recreate the ply level highlights.",
239
+ "url": "http://arxiv.org/html/2311.04061v2/"
240
+ },
241
+ "7(h)": {
242
+ "figure_path": "2311.04061v2_figure_7(h).png",
243
+ "caption": "Figure 7: Single-yarn comparisons between our aggregated model and the fiber-based ground truth with varying fiber parameters. In this figure, we compare against both 1-ply and 3-ply yarn and demonstrate that our model can accurately recreate the ply level highlights.",
244
+ "url": "http://arxiv.org/html/2311.04061v2/"
245
+ },
246
+ "7(i)": {
247
+ "figure_path": "2311.04061v2_figure_7(i).png",
248
+ "caption": "Figure 7: Single-yarn comparisons between our aggregated model and the fiber-based ground truth with varying fiber parameters. In this figure, we compare against both 1-ply and 3-ply yarn and demonstrate that our model can accurately recreate the ply level highlights.",
249
+ "url": "http://arxiv.org/html/2311.04061v2/"
250
+ },
251
+ "7(j)": {
252
+ "figure_path": "2311.04061v2_figure_7(j).png",
253
+ "caption": "Figure 7: Single-yarn comparisons between our aggregated model and the fiber-based ground truth with varying fiber parameters. In this figure, we compare against both 1-ply and 3-ply yarn and demonstrate that our model can accurately recreate the ply level highlights.",
254
+ "url": "http://arxiv.org/html/2311.04061v2/"
255
+ },
256
+ "7(k)": {
257
+ "figure_path": "2311.04061v2_figure_7(k).png",
258
+ "caption": "Figure 7: Single-yarn comparisons between our aggregated model and the fiber-based ground truth with varying fiber parameters. In this figure, we compare against both 1-ply and 3-ply yarn and demonstrate that our model can accurately recreate the ply level highlights.",
259
+ "url": "http://arxiv.org/html/2311.04061v2/"
260
+ },
261
+ "7(l)": {
262
+ "figure_path": "2311.04061v2_figure_7(l).png",
263
+ "caption": "Figure 7: Single-yarn comparisons between our aggregated model and the fiber-based ground truth with varying fiber parameters. In this figure, we compare against both 1-ply and 3-ply yarn and demonstrate that our model can accurately recreate the ply level highlights.",
264
+ "url": "http://arxiv.org/html/2311.04061v2/"
265
+ },
266
+ "8(a)": {
267
+ "figure_path": "2311.04061v2_figure_8(a).png",
268
+ "caption": "Figure 8: The SSIM comparison of our aggregated yarn shading model vs explicit fiber-based models [khungurn2015matching] on knitted and woven fabrics with equal quality. The EQ and ET references are highlighted to showcase their slow and noisy performance, respectively. Our model can accurately replicate the appearance of fiber-based models at a fraction of the time and memory costs.",
269
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/collage/fleece_et_collage.png"
270
+ },
271
+ "8(b)": {
272
+ "figure_path": "2311.04061v2_figure_8(b).png",
273
+ "caption": "Figure 8: The SSIM comparison of our aggregated yarn shading model vs explicit fiber-based models [khungurn2015matching] on knitted and woven fabrics with equal quality. The EQ and ET references are highlighted to showcase their slow and noisy performance, respectively. Our model can accurately replicate the appearance of fiber-based models at a fraction of the time and memory costs.",
274
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/collage/n-fleece_collage.png"
275
+ },
276
+ "8(c)": {
277
+ "figure_path": "2311.04061v2_figure_8(c).png",
278
+ "caption": "Figure 8: The SSIM comparison of our aggregated yarn shading model vs explicit fiber-based models [khungurn2015matching] on knitted and woven fabrics with equal quality. The EQ and ET references are highlighted to showcase their slow and noisy performance, respectively. Our model can accurately replicate the appearance of fiber-based models at a fraction of the time and memory costs.",
279
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/collage/fleece_eq_collage.png"
280
+ },
281
+ "8(d)": {
282
+ "figure_path": "2311.04061v2_figure_8(d).png",
283
+ "caption": "Figure 8: The SSIM comparison of our aggregated yarn shading model vs explicit fiber-based models [khungurn2015matching] on knitted and woven fabrics with equal quality. The EQ and ET references are highlighted to showcase their slow and noisy performance, respectively. Our model can accurately replicate the appearance of fiber-based models at a fraction of the time and memory costs.",
284
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/ssim/fleece_ssim.png"
285
+ },
286
+ "8(e)": {
287
+ "figure_path": "2311.04061v2_figure_8(e).png",
288
+ "caption": "Figure 8: The SSIM comparison of our aggregated yarn shading model vs explicit fiber-based models [khungurn2015matching] on knitted and woven fabrics with equal quality. The EQ and ET references are highlighted to showcase their slow and noisy performance, respectively. Our model can accurately replicate the appearance of fiber-based models at a fraction of the time and memory costs.",
289
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/ssim/colorbar.png"
290
+ },
291
+ "8(f)": {
292
+ "figure_path": "2311.04061v2_figure_8(f).png",
293
+ "caption": "Figure 8: The SSIM comparison of our aggregated yarn shading model vs explicit fiber-based models [khungurn2015matching] on knitted and woven fabrics with equal quality. The EQ and ET references are highlighted to showcase their slow and noisy performance, respectively. Our model can accurately replicate the appearance of fiber-based models at a fraction of the time and memory costs.",
294
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/collage/silk_et_collage.png"
295
+ },
296
+ "8(g)": {
297
+ "figure_path": "2311.04061v2_figure_8(g).png",
298
+ "caption": "Figure 8: The SSIM comparison of our aggregated yarn shading model vs explicit fiber-based models [khungurn2015matching] on knitted and woven fabrics with equal quality. The EQ and ET references are highlighted to showcase their slow and noisy performance, respectively. Our model can accurately replicate the appearance of fiber-based models at a fraction of the time and memory costs.",
299
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/collage/n-silk_collage.png"
300
+ },
301
+ "8(h)": {
302
+ "figure_path": "2311.04061v2_figure_8(h).png",
303
+ "caption": "Figure 8: The SSIM comparison of our aggregated yarn shading model vs explicit fiber-based models [khungurn2015matching] on knitted and woven fabrics with equal quality. The EQ and ET references are highlighted to showcase their slow and noisy performance, respectively. Our model can accurately replicate the appearance of fiber-based models at a fraction of the time and memory costs.",
304
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/collage/silk_eq_collage.png"
305
+ },
306
+ "8(i)": {
307
+ "figure_path": "2311.04061v2_figure_8(i).png",
308
+ "caption": "Figure 8: The SSIM comparison of our aggregated yarn shading model vs explicit fiber-based models [khungurn2015matching] on knitted and woven fabrics with equal quality. The EQ and ET references are highlighted to showcase their slow and noisy performance, respectively. Our model can accurately replicate the appearance of fiber-based models at a fraction of the time and memory costs.",
309
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/ssim/silk_ssim.png"
310
+ },
311
+ "8(j)": {
312
+ "figure_path": "2311.04061v2_figure_8(j).png",
313
+ "caption": "Figure 8: The SSIM comparison of our aggregated yarn shading model vs explicit fiber-based models [khungurn2015matching] on knitted and woven fabrics with equal quality. The EQ and ET references are highlighted to showcase their slow and noisy performance, respectively. Our model can accurately replicate the appearance of fiber-based models at a fraction of the time and memory costs.",
314
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/ssim/colorbar.png"
315
+ },
316
+ "8(k)": {
317
+ "figure_path": "2311.04061v2_figure_8(k).png",
318
+ "caption": "Figure 8: The SSIM comparison of our aggregated yarn shading model vs explicit fiber-based models [khungurn2015matching] on knitted and woven fabrics with equal quality. The EQ and ET references are highlighted to showcase their slow and noisy performance, respectively. Our model can accurately replicate the appearance of fiber-based models at a fraction of the time and memory costs.",
319
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/collage/polyester_et_collage.png"
320
+ },
321
+ "8(l)": {
322
+ "figure_path": "2311.04061v2_figure_8(l).png",
323
+ "caption": "Figure 8: The SSIM comparison of our aggregated yarn shading model vs explicit fiber-based models [khungurn2015matching] on knitted and woven fabrics with equal quality. The EQ and ET references are highlighted to showcase their slow and noisy performance, respectively. Our model can accurately replicate the appearance of fiber-based models at a fraction of the time and memory costs.",
324
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/collage/n-polyester_collage.png"
325
+ },
326
+ "8(m)": {
327
+ "figure_path": "2311.04061v2_figure_8(m).png",
328
+ "caption": "Figure 8: The SSIM comparison of our aggregated yarn shading model vs explicit fiber-based models [khungurn2015matching] on knitted and woven fabrics with equal quality. The EQ and ET references are highlighted to showcase their slow and noisy performance, respectively. Our model can accurately replicate the appearance of fiber-based models at a fraction of the time and memory costs.",
329
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/collage/polyester_eq_collage.png"
330
+ },
331
+ "8(n)": {
332
+ "figure_path": "2311.04061v2_figure_8(n).png",
333
+ "caption": "Figure 8: The SSIM comparison of our aggregated yarn shading model vs explicit fiber-based models [khungurn2015matching] on knitted and woven fabrics with equal quality. The EQ and ET references are highlighted to showcase their slow and noisy performance, respectively. Our model can accurately replicate the appearance of fiber-based models at a fraction of the time and memory costs.",
334
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/ssim/polyester_ssim.png"
335
+ },
336
+ "8(o)": {
337
+ "figure_path": "2311.04061v2_figure_8(o).png",
338
+ "caption": "Figure 8: The SSIM comparison of our aggregated yarn shading model vs explicit fiber-based models [khungurn2015matching] on knitted and woven fabrics with equal quality. The EQ and ET references are highlighted to showcase their slow and noisy performance, respectively. Our model can accurately replicate the appearance of fiber-based models at a fraction of the time and memory costs.",
339
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/ssim/colorbar.png"
340
+ },
341
+ "8(p)": {
342
+ "figure_path": "2311.04061v2_figure_8(p).png",
343
+ "caption": "Figure 8: The SSIM comparison of our aggregated yarn shading model vs explicit fiber-based models [khungurn2015matching] on knitted and woven fabrics with equal quality. The EQ and ET references are highlighted to showcase their slow and noisy performance, respectively. Our model can accurately replicate the appearance of fiber-based models at a fraction of the time and memory costs.",
344
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/collage/cotton_et_collage.png"
345
+ },
346
+ "8(q)": {
347
+ "figure_path": "2311.04061v2_figure_8(q).png",
348
+ "caption": "Figure 8: The SSIM comparison of our aggregated yarn shading model vs explicit fiber-based models [khungurn2015matching] on knitted and woven fabrics with equal quality. The EQ and ET references are highlighted to showcase their slow and noisy performance, respectively. Our model can accurately replicate the appearance of fiber-based models at a fraction of the time and memory costs.",
349
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/collage/n-cotton_collage.png"
350
+ },
351
+ "8(r)": {
352
+ "figure_path": "2311.04061v2_figure_8(r).png",
353
+ "caption": "Figure 8: The SSIM comparison of our aggregated yarn shading model vs explicit fiber-based models [khungurn2015matching] on knitted and woven fabrics with equal quality. The EQ and ET references are highlighted to showcase their slow and noisy performance, respectively. Our model can accurately replicate the appearance of fiber-based models at a fraction of the time and memory costs.",
354
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/collage/cotton_eq_collage.png"
355
+ },
356
+ "8(s)": {
357
+ "figure_path": "2311.04061v2_figure_8(s).png",
358
+ "caption": "Figure 8: The SSIM comparison of our aggregated yarn shading model vs explicit fiber-based models [khungurn2015matching] on knitted and woven fabrics with equal quality. The EQ and ET references are highlighted to showcase their slow and noisy performance, respectively. Our model can accurately replicate the appearance of fiber-based models at a fraction of the time and memory costs.",
359
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/ssim/cotton_ssim.png"
360
+ },
361
+ "8(t)": {
362
+ "figure_path": "2311.04061v2_figure_8(t).png",
363
+ "caption": "Figure 8: The SSIM comparison of our aggregated yarn shading model vs explicit fiber-based models [khungurn2015matching] on knitted and woven fabrics with equal quality. The EQ and ET references are highlighted to showcase their slow and noisy performance, respectively. Our model can accurately replicate the appearance of fiber-based models at a fraction of the time and memory costs.",
364
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/ssim/colorbar.png"
365
+ },
366
+ "8(u)": {
367
+ "figure_path": "2311.04061v2_figure_8(u).png",
368
+ "caption": "Figure 8: The SSIM comparison of our aggregated yarn shading model vs explicit fiber-based models [khungurn2015matching] on knitted and woven fabrics with equal quality. The EQ and ET references are highlighted to showcase their slow and noisy performance, respectively. Our model can accurately replicate the appearance of fiber-based models at a fraction of the time and memory costs.",
369
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/collage/gabardine_et_collage.png"
370
+ },
371
+ "8(v)": {
372
+ "figure_path": "2311.04061v2_figure_8(v).png",
373
+ "caption": "Figure 8: The SSIM comparison of our aggregated yarn shading model vs explicit fiber-based models [khungurn2015matching] on knitted and woven fabrics with equal quality. The EQ and ET references are highlighted to showcase their slow and noisy performance, respectively. Our model can accurately replicate the appearance of fiber-based models at a fraction of the time and memory costs.",
374
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/collage/n-gabardine_collage.png"
375
+ },
376
+ "8(w)": {
377
+ "figure_path": "2311.04061v2_figure_8(w).png",
378
+ "caption": "Figure 8: The SSIM comparison of our aggregated yarn shading model vs explicit fiber-based models [khungurn2015matching] on knitted and woven fabrics with equal quality. The EQ and ET references are highlighted to showcase their slow and noisy performance, respectively. Our model can accurately replicate the appearance of fiber-based models at a fraction of the time and memory costs.",
379
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/collage/gabardine_eq_collage.png"
380
+ },
381
+ "8(x)": {
382
+ "figure_path": "2311.04061v2_figure_8(x).png",
383
+ "caption": "Figure 8: The SSIM comparison of our aggregated yarn shading model vs explicit fiber-based models [khungurn2015matching] on knitted and woven fabrics with equal quality. The EQ and ET references are highlighted to showcase their slow and noisy performance, respectively. Our model can accurately replicate the appearance of fiber-based models at a fraction of the time and memory costs.",
384
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/ssim/gabardine_ssim.png"
385
+ },
386
+ "8(y)": {
387
+ "figure_path": "2311.04061v2_figure_8(y).png",
388
+ "caption": "Figure 8: The SSIM comparison of our aggregated yarn shading model vs explicit fiber-based models [khungurn2015matching] on knitted and woven fabrics with equal quality. The EQ and ET references are highlighted to showcase their slow and noisy performance, respectively. Our model can accurately replicate the appearance of fiber-based models at a fraction of the time and memory costs.",
389
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/ssim/colorbar.png"
390
+ },
391
+ "9(a)": {
392
+ "figure_path": "2311.04061v2_figure_9(a).png",
393
+ "caption": "Figure 9: A comparison of ours with state-of-the-art model [Zhu2023yarn] in equal quality, using reference\u2019s parameters for aggregation",
394
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/collage/fleece_eq_collage.png"
395
+ },
396
+ "9(b)": {
397
+ "figure_path": "2311.04061v2_figure_9(b).png",
398
+ "caption": "Figure 9: A comparison of ours with state-of-the-art model [Zhu2023yarn] in equal quality, using reference\u2019s parameters for aggregation",
399
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/collage/silk_eq_collage.png"
400
+ },
401
+ "9(c)": {
402
+ "figure_path": "2311.04061v2_figure_9(c).png",
403
+ "caption": "Figure 9: A comparison of ours with state-of-the-art model [Zhu2023yarn] in equal quality, using reference\u2019s parameters for aggregation",
404
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/collage/polyester_eq_collage.png"
405
+ },
406
+ "9(d)": {
407
+ "figure_path": "2311.04061v2_figure_9(d).png",
408
+ "caption": "Figure 9: A comparison of ours with state-of-the-art model [Zhu2023yarn] in equal quality, using reference\u2019s parameters for aggregation",
409
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/collage/cotton_eq_collage.png"
410
+ },
411
+ "9(e)": {
412
+ "figure_path": "2311.04061v2_figure_9(e).png",
413
+ "caption": "Figure 9: A comparison of ours with state-of-the-art model [Zhu2023yarn] in equal quality, using reference\u2019s parameters for aggregation",
414
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/collage/gabardine_eq_collage.png"
415
+ },
416
+ "9(f)": {
417
+ "figure_path": "2311.04061v2_figure_9(f).png",
418
+ "caption": "Figure 9: A comparison of ours with state-of-the-art model [Zhu2023yarn] in equal quality, using reference\u2019s parameters for aggregation",
419
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/collage/n-fleece_collage.png"
420
+ },
421
+ "9(g)": {
422
+ "figure_path": "2311.04061v2_figure_9(g).png",
423
+ "caption": "Figure 9: A comparison of ours with state-of-the-art model [Zhu2023yarn] in equal quality, using reference\u2019s parameters for aggregation",
424
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/collage/n-silk_collage.png"
425
+ },
426
+ "9(h)": {
427
+ "figure_path": "2311.04061v2_figure_9(h).png",
428
+ "caption": "Figure 9: A comparison of ours with state-of-the-art model [Zhu2023yarn] in equal quality, using reference\u2019s parameters for aggregation",
429
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/collage/n-polyester_collage.png"
430
+ },
431
+ "9(i)": {
432
+ "figure_path": "2311.04061v2_figure_9(i).png",
433
+ "caption": "Figure 9: A comparison of ours with state-of-the-art model [Zhu2023yarn] in equal quality, using reference\u2019s parameters for aggregation",
434
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/collage/n-cotton_collage.png"
435
+ },
436
+ "9(j)": {
437
+ "figure_path": "2311.04061v2_figure_9(j).png",
438
+ "caption": "Figure 9: A comparison of ours with state-of-the-art model [Zhu2023yarn] in equal quality, using reference\u2019s parameters for aggregation",
439
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/collage/n-gabardine_collage.png"
440
+ },
441
+ "9(k)": {
442
+ "figure_path": "2311.04061v2_figure_9(k).png",
443
+ "caption": "Figure 9: A comparison of ours with state-of-the-art model [Zhu2023yarn] in equal quality, using reference\u2019s parameters for aggregation",
444
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/collage/sota/fleece_collage.png"
445
+ },
446
+ "9(l)": {
447
+ "figure_path": "2311.04061v2_figure_9(l).png",
448
+ "caption": "Figure 9: A comparison of ours with state-of-the-art model [Zhu2023yarn] in equal quality, using reference\u2019s parameters for aggregation",
449
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/collage/sota/silk_collage.png"
450
+ },
451
+ "9(m)": {
452
+ "figure_path": "2311.04061v2_figure_9(m).png",
453
+ "caption": "Figure 9: A comparison of ours with state-of-the-art model [Zhu2023yarn] in equal quality, using reference\u2019s parameters for aggregation",
454
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/collage/sota/polyester_collage.png"
455
+ },
456
+ "9(n)": {
457
+ "figure_path": "2311.04061v2_figure_9(n).png",
458
+ "caption": "Figure 9: A comparison of ours with state-of-the-art model [Zhu2023yarn] in equal quality, using reference\u2019s parameters for aggregation",
459
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/collage/sota/cotton_collage.png"
460
+ },
461
+ "9(o)": {
462
+ "figure_path": "2311.04061v2_figure_9(o).png",
463
+ "caption": "Figure 9: A comparison of ours with state-of-the-art model [Zhu2023yarn] in equal quality, using reference\u2019s parameters for aggregation",
464
+ "url": "http://arxiv.org/html/2311.04061v2/extracted/5798945/images/cloth/collage/sota/gabardine_collage.png"
465
+ }
466
+ },
467
+ "validation": true,
468
+ "references": [],
469
+ "url": "http://arxiv.org/html/2311.04061v2"
470
+ }
20240819/2312.10680v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240819/2401.12508v2.json ADDED
@@ -0,0 +1,632 @@
1
+ {
2
+ "title": "On the Stochastic (Variance-Reduced) Proximal Gradient Method for Regularized Expected Reward Optimization",
3
+ "abstract": "We consider a regularized expected reward optimization problem in the non-oblivious setting that covers many existing problems in reinforcement learning (RL). In order to solve such an optimization problem, we apply and analyze the classical stochastic proximal gradient method. In particular, the method has shown to admit an sample complexity to an -stationary point, under standard conditions. Since the variance of the classical stochastic gradient estimator is typically large, which slows down the convergence, we also apply an efficient stochastic variance-reduce proximal gradient method with an importance sampling based ProbAbilistic Gradient Estimator (PAGE). Our analysis shows that the sample complexity can be improved from to under additional conditions. Our results on the stochastic (variance-reduced) proximal gradient method match the sample complexity of their most competitive counterparts for discounted Markov decision processes under similar settings. To the best of our knowledge, the proposed methods represent a novel approach in addressing the general regularized reward optimization problem.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Reinforcement learning (RL) Sutton & Barto (2018 ###reference_b53###) has recently become a highly active research area of machine learning that learns to make sequential decisions via interacting with the environment. In recent years, RL has achieved tremendous success so far in many applications such as control, job scheduling, online advertising, and game-playing Zhang & Dietterich (1995 ###reference_b72###); Pednault et al. (2002 ###reference_b40###); Mnih et al. (2013 ###reference_b37###), to mention a few. One of the central tasks of RL is to solve a certain (expected) reward optimization problem for decision-making. Following the research theme, we consider the following problem of maximizing the regularized expected reward:\nwhere is a closed proper convex (possibly nonsmooth) function, , is the reward function depending on the parameter , and denotes the probability distribution over a given subset parameterized by . By adapting the convention in RL, we call a policy parameterized by . Moreover, for the rest of this paper, we denote as the expected reward function in the non-oblivious setting. The learning objective is to learn a decision rule via finding the policy parameter that maximizes the regularized expected reward. To the best of our knowledge, the study on the general model (1 ###reference_###) has been limited in the literature. Hence, developing and analyzing algorithmic frameworks for solving the problem is of great interest.\nThere are large body of works in supervised learning focusing on the oblivious setting Zhang (2004 ###reference_b71###); Hastie et al. (2009 ###reference_b19###); Shapiro et al. (2021 ###reference_b50###), i.e., , where is sampled from an invariant distribution . Clearly, problem (1 ###reference_###) can be viewed as a generalization of those machine learning problems with oblivious objective functions. In the literature, an RL problem is often formulated as a discrete-time and discounted Markov decision processes (MDPs) Sutton & Barto (2018 ###reference_b53###) which aims to learn an optimal policy via optimizing the (discounted) cumulative sum of rewards. We can also see that the learning objective of an MDP can be covered by the problem (1 ###reference_###) with the property that the function does not depend on (see Example 3.3 ###reference_theorem3###). Recently, the application of RL for solving combinatorial optimization (CO) problems which are typically NP-hard has attracted much attention. These CO problems may include the traveling salesman problem and related problems Bello et al. (2016 ###reference_b8###); Mazyavkina et al. (2021 ###reference_b33###), the reward optimization problem arising from the finite expression method Liang & Yang (2022 ###reference_b30###); Song et al. (2023 ###reference_b52###), and the general binary optimization problem Chen et al. (2023 ###reference_b11###), to name just a few. The common key component of the aforementioned applications is the reward optimization, which could also be formulated as problem (1 ###reference_###). There also exist problems with general reward functions that are outside the scope of cumulative sum of rewards of trajectories that are used in MDPs. An interesting example is the MDP with general utilities; see, e.g., Zhang et al. (2020a ###reference_b67###); Kumar et al. (2022 ###reference_b25###); Barakat et al. 
(2023 ###reference_b5###) and references therein.\nAdding a regularizer to the objective function is a commonly used technique to impose desirable structures to the solution and/or to greatly enhance the expression power and applicability of RL Lan (2023 ###reference_b27###); Zhan et al. (2023 ###reference_b66###). When one considers the direct/simplex parameterization Agarwal et al. (2021 ###reference_b2###) of , a regularization function using the indicator function for the standard probability simplex is needed. Moreover, by using other indicator functions for general convex sets, one is able to impose some additional constraints on the parameter . For the softmax parameterization, one may also enforce a bounded constraint to to prevent it taking values that are too large. This can avoid potential numerical issues, including the overflow error on a floating point system. On the other hand, there are incomplete parametric policy classes, such as the log-linear and neural policy classes, that are often formulated as , where is a closed convex set Agarwal et al. (2021 ###reference_b2###). In this case, the indicator function is still necessary and useful. Some recent works (see, e.g., Ahmed et al. (2019 ###reference_b3###); Agarwal et al. (2020 ###reference_b1###); Mei et al. (2020 ###reference_b34###); Cen et al. (2022 ###reference_b10###)) have investigated the impact of the entropy regularization for MDPs. Systematic studies on general convex regularization for MDPs have been limited until the recent works Pham et al. (2020 ###reference_b42###); Lan (2023 ###reference_b27###); Zhan et al. (2023 ###reference_b66###). Finally, problem (1 ###reference_###) takes the same form as the stochastic optimization problem with decision-dependent distributions (see e.g., Drusvyatskiy & Xiao (2023 ###reference_b13###) and references therein), leading to numerous real-world applications such as performative prediction Mendler-D\u00fcnner et al. (2020 ###reference_b35###); Perdomo et al. (2020 ###reference_b41###), concept drift Gama et al. (2014 ###reference_b16###), strategic classification Tsirtsis et al. (2024 ###reference_b56###); Milli et al. (2019 ###reference_b36###), and casual inference Yao et al. (2021 ###reference_b63###). Consequently, we can see that problem (1 ###reference_###) is in fact quite general and has promising modeling power, as it covers many existing problems in the literature.\nThe purpose of this paper is to leverage existing tools and results in MDPs and nonconvex optimization for solving the general regularized expected reward optimization problem (1 ###reference_###) with general policy parameterization, which, to the best of our knowledge, has not been formally considered in the RL literature. It is well known that the policy gradient method Williams (1992 ###reference_b57###); Sutton et al. (1999 ###reference_b54###); Baxter & Bartlett (2001 ###reference_b6###), which lies in the heart of RL, is one of the most competitive and efficient algorithms due to its simplicity and versatility. Moreover, the policy gradient method is readily implemented and can be paired with other effective techniques. In this paper, we observe that the stochastic proximal gradient method, which shares the same spirit of the policy gradient method, can be applied directly for solving the targeted problem (1 ###reference_###) with convergence guarantees to a stationary point. 
Since the classical stochastic gradient estimator typically introduces a large variance, there is also a need to consider designing advanced stochastic gradient estimators with smaller variances. To this end, we shall also look into a certain stochastic variance-reduced proximal gradient method and analyze its convergence properties. In particular, the contributions of this paper are summarized as follows.\nWe consider a novel and general regularized reward optimization model (1 ###reference_###) that covers many existing important models in the machine learning and optimization literature. Thus, problem (1 ###reference_###) admits a promising modeling power which encourages potential applications.\nIn order to solve our targeted problem, we consider applying the classical stochastic proximal gradient method and analyze its convergence properties. We first demonstrate that the gradient of is Lipschitz continuous under standard conditions with respect to the reward function and the parameterized policy . Using the L-smoothness of , we then show that the classical stochastic proximal gradient method with a constant step-size (depending only on the Lipschitz constant for ) for solving problem (1 ###reference_###) outputs an -stationary point (see Definition 3.4 ###reference_theorem4###) within iterations, and the sample size for each iteration is , where is a given tolerance. Thus, the total sample complexity becomes , which matches the current state-of-the-art sample complexity of the classical stochastic policy gradient for MDPs; see e.g., Williams (1992 ###reference_b57###); Baxter & Bartlett (2001 ###reference_b6###); Zhang et al. (2020b ###reference_b70###); Xiong et al. (2021 ###reference_b59###); Yuan et al. (2022 ###reference_b65###).\nMoreover, in order to further reduce the variance of the stochastic gradient estimator, we utilize an importance sampling based probabilistic gradient estimator which leads to an efficient single-looped variance reduced method. The application of this probabilistic gradient estimator is motivated by the recent progress in developing efficient stochastic variance-reduced gradient methods for solving stochastic optimization Li et al. (2021b ###reference_b29###) and (unregularized) MDPs Gargiani et al. (2022 ###reference_b17###). We show that, under additional technical conditions, the total sample complexity is improved from to . This result again matches the results of some existing competitive variance-reduced methods for MDPs Papini et al. (2018 ###reference_b39###); Xu et al. (2019 ###reference_b60###); Pham et al. (2020 ###reference_b42###); Huang et al. (2021 ###reference_b20###); Yang et al. (2022 ###reference_b62###); Gargiani et al. (2022 ###reference_b17###). Moreover, to the best of our knowledge, the application of the above probabilistic gradient estimator is new for solving the regularized expected reward optimization (1 ###reference_###).\nThe rest of this paper is organized as follows. We first summarize some relative works in Section 2 ###reference_###. Next, in Section 3 ###reference_###, we present some background information that are needed for the exposition of this paper. Then, in Section 4 ###reference_###, we describe the classical stochastic proximal gradient method for solving (1 ###reference_###) and present the convergence properties of this method under standard technical conditions. 
Section 5 ###reference_### is dedicated to describing and analyzing the stochastic variance-reduced proximal gradient method with an importance sampling based probabilistic gradient estimator. Finally, we make some concluding remarks, and list certain limitations and future research directions of this paper in Section 6 ###reference_###."
10
+ },
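To make the non-oblivious gradient structure described above concrete, here is a minimal sketch (not code from the paper) of a REINFORCE-type Monte Carlo estimator of the gradient of the expected reward, assuming a hypothetical softmax policy over a finite sample space and a toy reward that depends on both the sample and the parameter; all names (softmax_policy, R, stochastic_grad_J, N) are illustrative choices, not the authors' notation.

```python
# Minimal sketch (illustrative, not from the paper): Monte Carlo score-function
# gradient for J(theta) = E_{x ~ p(.|theta)}[R(x; theta)] with a softmax policy
# over the finite set {0, ..., m-1}.
import numpy as np

rng = np.random.default_rng(0)

def softmax_policy(theta):
    z = np.exp(theta - theta.max())
    return z / z.sum()

def R(x, theta):
    # toy non-oblivious reward: depends on both the sample and the parameter
    return np.cos(x) + 0.1 * np.tanh(theta[x])

def grad_theta_R(x, theta):
    g = np.zeros_like(theta)
    g[x] = 0.1 * (1.0 - np.tanh(theta[x]) ** 2)
    return g

def grad_log_p(x, theta):
    # d/dtheta log softmax(theta)[x] = e_x - softmax(theta)
    e = np.zeros_like(theta)
    e[x] = 1.0
    return e - softmax_policy(theta)

def stochastic_grad_J(theta, N=64):
    """REINFORCE-type estimator of grad J = E[R * grad log p + grad_theta R]."""
    probs = softmax_policy(theta)
    xs = rng.choice(len(theta), size=N, p=probs)
    return np.mean([R(x, theta) * grad_log_p(x, theta) + grad_theta_R(x, theta)
                    for x in xs], axis=0)

theta = np.zeros(5)
print(stochastic_grad_J(theta))
```

The second term inside the expectation is the direct gradient of the reward with respect to the parameter, which is exactly the extra contribution that appears only in the non-oblivious setting; when the reward does not depend on the parameter, the estimator reduces to the classical REINFORCE form.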
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related Work",
15
+ "text": "The policy gradient method. One of the most influential algorithms for solving RL problems is the policy gradient method, built upon the foundations established in Williams (1992 ###reference_b57###); Sutton et al. (1999 ###reference_b54###); Baxter & Bartlett (2001 ###reference_b6###). Motivated by the empirical success of the policy gradient method and its variants, analyzing the convergence properties for these methods has long been one of the most active research topics in RL. Since the objective function is generally nonconcave, early works Sutton et al. (1999 ###reference_b54###); Pirotta et al. (2015 ###reference_b43###) focused on the asymptotic convergence properties to a stationary point. By utilizing the special structure in (entropy regularized) MDPs, recent works Liu et al. (2019 ###reference_b31###); Mei et al. (2020 ###reference_b34###); Agarwal et al. (2021 ###reference_b2###); Li et al. (2021a ###reference_b28###); Xiao (2022 ###reference_b58###); Cen et al. (2022 ###reference_b10###); Lan (2023 ###reference_b27###); Fatkhullin et al. (2023 ###reference_b15###) provided some exciting results on the global convergence. Meanwhile, since the exact gradient of the objective function can hardly be computed, sampling-based approximated/stochastic gradients have gained much attention. Therefore, many works investigated the convergence properties, including the iteration and sample complexities, for these algorithms with inexact gradients; see e.g., Zhang et al. (2020b ###reference_b70###); Liu et al. (2020 ###reference_b32###); Zhang et al. (2021b ###reference_b69###); Xiong et al. (2021 ###reference_b59###); Yuan et al. (2022 ###reference_b65###); Lan (2023 ###reference_b27###) and references therein.\nVariance reduction. While the classical stochastic gradient estimator is straightforward and simple to implement, one of its most critical issues is that the variance of the inexact gradient estimator can be large, which generally slows down the convergence of the algorithm. To alleviate this issue, an attractive approach is to pair the sample-based policy gradient methods with certain variance-reduced techniques. Variance-reduced methods were originally developed for solving (oblivious) stochastic optimization problems Johnson & Zhang (2013 ###reference_b22###); Nguyen et al. (2017 ###reference_b38###); Fang et al. (2018 ###reference_b14###); Li et al. (2021b ###reference_b29###) typically arising from supervised learning tasks. Motivated by the superior theoretical properties and practical performance of the stochastic variance-reduced gradient methods, similar algorithmic frameworks have recently been applied for solving MDPs Papini et al. (2018 ###reference_b39###); Xu et al. (2019 ###reference_b60###); Yuan et al. (2020 ###reference_b64###); Pham et al. (2020 ###reference_b42###); Huang et al. (2021 ###reference_b20###); Yang et al. (2022 ###reference_b62###); Gargiani et al. (2022 ###reference_b17###).\nStochastic optimization with decision-dependent distributions. Stochastic optimization is the core of modern machine learning applications, whose main objective is to learn a decision rule from a limited data sample that is assumed to generalize well to the entire population Drusvyatskiy & Xiao (2023 ###reference_b13###). In the classical supervised learning framework Zhang (2004 ###reference_b71###); Hastie et al. (2009 ###reference_b19###); Shapiro et al. 
(2021 ###reference_b50###), the underlying data distribution is assumed to be static, which turns out to be a crucial assumption when analyzing the convergence properties of the common stochastic optimization algorithms. On the other hand, there are problems where the distribution changes over the course of iterations of a specific algorithm, and these are closely related to the concept of performative prediction Perdomo et al. (2020 ###reference_b41###). In this case, understanding the convergence properties of the algorithm becomes more challenging. Toward this, some recent progress has been made on (strongly) convex stochastic optimization with decision-dependent distributions Mendler-D\u00fcnner et al. (2020 ###reference_b35###); Perdomo et al. (2020 ###reference_b41###); Drusvyatskiy & Xiao (2023 ###reference_b13###). Moreover, other works have also considered nonconvex problems and obtained some promising results; see Dong et al. (2023 ###reference_b12###); Jagadeesan et al. (2022 ###reference_b21###) and references therein. Developing theoretical foundation for these problems has become a very active field nowadays.\nRL with general utilities. It is known that the goal of an agent associated with an MDP is to seek an optimal policy via maximizing the cumulative discounted reward Sutton & Barto (2018 ###reference_b53###). However, there are decision problems of interest having more general forms. Beyond the scope of the expected cumulative reward in MDPs, some recent works also looked into RL problems with general utilities; see e.g., Zhang et al. (2020a ###reference_b67###); Kumar et al. (2022 ###reference_b25###); Barakat et al. (2023 ###reference_b5###) as mentioned previously. Global convergence results can also be derived via investigating the hidden convex structure Zhang et al. (2020a ###reference_b67###) inherited from the MDP."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Preliminary",
21
+ "text": "In this paper, we assume that the optimal objective value for problem (1 ###reference_###), denoted by , is finite and attained, and the reward function satisfies the following assumption.\nThe following three conditions with respect to the function hold:\nThere exists a constant such that\nis twice continuously differentiable with respect to , and there exist positive constants and such that\nThe first condition on the boundedness of the function , which is commonly assumed in the literature Sutton & Barto (2018 ###reference_b53###), ensures that is well-defined. And the second condition will be used to guarantee the well-definiteness and L-smoothness of the gradient . We remark here that when the reward function does not depend on (see e.g., Example 3.3 ###reference_theorem3###), then the second assumption holds automatically.\nTo determine the (theoretical) learning rate in our algorithmic frameworks, we also need to make some standard assumptions to establish the L-smoothness of .\nThe function is twice differential with respect to and there exist positive constants and such that\nThis assumption is a standard one and commonly employed in the literature when studying the convergence properties of the policy gradient method for MDPs; see e.g., Pirotta et al. (2015 ###reference_b43###); Papini et al. (2018 ###reference_b39###); Xu et al. (2020 ###reference_b61###); Pham et al. (2020 ###reference_b42###); Zhang et al. (2021a ###reference_b68###); Yang et al. (2022 ###reference_b62###) and references therein.\nUnder Assumption 3.1 ###reference_theorem1### and Assumption 3.2 ###reference_theorem2###, it is easy to verify that the gradient for the expected reward function can be written as:\nWe next present an example on the discrete-time discounted MDP, which can be covered by the general model (1 ###reference_###).\nWe denote a discrete-time discounted MDP as , where and denote the state space and the action space, respectively, is the state transition probability from to after selecting the action , is the reward function that is assumed to be uniformly bounded by a constant , is the discount factor, and is the initial state distribution.\nThe agent selects actions according to a stationary random policy parameterized by . Given an initial state , a trajectory can then be generated, where , , , , and is a finite horizon, and the accumulated discounted reward of the trajectory can be defined as . Then, the learning objective is to compute an optimal parameter that maximizes the expected reward function 111Here, the trajectory and the distribution correspond to and in (1 ###reference_###), respectively., i.e.,\nwhere denotes the probability distribution of a trajectory being sampled from that is parameterized by .\nIn the special case when (i.e., ) and , the MDP reduced to a multi-armed bandit problem Robbins (1952 ###reference_b45###) with a reward function simplified as . Particularly, a trajectory with the horizon is generated, where , and the accumulated discounted reward reduces to . As a consequence, problem (2 ###reference_###) can be simplified as\nBy adding a convex regularizer to problem (2 ###reference_###), we get the following regularized MDP:\nwhich was considered in Pham et al. (2020 ###reference_b42###). However, it is clear that does not depend on . Hence, the above regularized MDP is a special case of the proposed regularized reward optimization problem (1 ###reference_###).\nOne can check that the gradient has the following form Yuan et al. 
(2022 ###reference_b65###):\nBeing a composite optimization problem, problem (1 ###reference_###) admits the following first-order stationary condition\nHere, denotes the subdifferential of the proper closed and convex function which is defined as\nIt is well-known that is a nonempty closed convex subset of for any such that (see e.g., Rockafellar (1997 ###reference_b46###)). Note that any optimal solution of problem (1 ###reference_###) satisfies the condition (3 ###reference_###), while the reverse statement is generally not valid for nonconcave problems, including the problem (1 ###reference_###). The condition (3 ###reference_###) leads to the following concept of stationary points for problem (1 ###reference_###).\nA point is called a stationary point for problem (1 ###reference_###) if it satisfies the condition (3 ###reference_###). Given a tolerance , a stochastic optimization method attains an (expected) -stationary point, denoted as , if\nwhere the expectation is taken with respect to all the randomness caused by the algorithm, after running it iterations, and denotes the distance between a point and a closed convex set .\nNote that the optimality condition (3 ###reference_###) can be rewritten as\nfor some , where\ndenotes the proximal mapping of the function . The mapping is called the gradient mapping in the field of optimization Beck (2017 ###reference_b7###). It is easy to verify that if for a , it holds that\nthen there exists a vector satisfying such that\nwhich is equivalent to saying that\nMoreover, we can verify that (by using the firm nonexpansiveness of ; see e.g., Beck (2017 ###reference_b7###))\nTherefore, we can also characterize an (expected) -stationary point by using the following condition\nThe main objective of this paper is to study the convergence properties, including iteration and sample complexities, of the stochastic (variance-reduced) proximal gradient method to a -stationary point with a pre-specified . Note that all proofs of our results are presented in the appendix. Moreover, we acknowledge that our analysis is drawn upon classical results in the literature."
22
+ },
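As a small illustration of the stationarity measure introduced in the section above, the following sketch evaluates the gradient mapping for the maximization of a regularized objective, assuming an l1 regularizer (whose proximal mapping is soft-thresholding) and a toy concave surrogate for the expected reward; the names prox_l1 and gradient_mapping are hypothetical, not the paper's code.

```python
# Illustrative sketch (not the paper's code): the gradient mapping
# G_eta(theta) = (prox_{eta G}(theta + eta * grad_J(theta)) - theta) / eta
# for max_theta J(theta) - G(theta), with G = lam * ||.||_1.
import numpy as np

def prox_l1(v, t):
    # proximal mapping of t * ||.||_1, i.e., soft-thresholding
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def gradient_mapping(theta, grad_J, eta, lam):
    theta_plus = prox_l1(theta + eta * grad_J(theta), eta * lam)
    return (theta_plus - theta) / eta

grad_J = lambda th: -th          # toy concave surrogate: J(theta) = -||theta||^2 / 2
theta, eta, lam = np.array([0.8, -0.05]), 0.5, 0.1
g_map = gradient_mapping(theta, grad_J, eta, lam)
print(np.linalg.norm(g_map))    # a small norm indicates approximate stationarity
```

For this toy choice the regularized maximizer is the origin, so iterating the corresponding proximal ascent step drives the printed norm toward zero, matching the characterization of an epsilon-stationary point via the gradient mapping.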
23
+ {
24
+ "section_id": "4",
25
+ "parent_section_id": null,
26
+ "section_name": "The stochastic proximal gradient method",
27
+ "text": "In this section, we present and analyze the stochastic proximal gradient method for solving the problem (1 ###reference_###). The fundamental idea of the algorithm is to replace the true gradient , which are not available for most of the time, with a stochastic gradient estimator in the classical proximal gradient method Beck (2017 ###reference_b7###). The method can be viewed as extensions to the projected policy gradient method with direct parameterization Agarwal et al. (2021 ###reference_b2###) and the stochastic policy gradient method for unregularized MDPs Williams (1992 ###reference_b57###). The detailed description of the algorithm is presented in Algorithm 1 ###reference_###.\nFor notational simplicity, we denote\nFrom Algorithm 1 ###reference_###, we see that at each iteration, data points, namely , are sample according to the current probability distribution . Using these data points, we can construct a REINFORCE-type stochastic gradient estimator . Then, the algorithm just performs a proximal gradient ascent updating. Let be the maximal number of iterations, then a sequence can be generated, and the output solution is selected randomly from this sequence. Next, we shall proceed to answer the questions that how to choose the learning rate , how large the sample size should be, and how many iterations for the algorithm to output an -stationary point for a given , theoretically. The next lemma establishes the L-smoothness of whose proof is given at Appendix A.1 ###reference_###.\nUnder Assumptions 3.1 ###reference_theorem1### and 3.2 ###reference_theorem2###, the gradient of is -smooth, i.e.,\nwith .\nFor an MDP with finite action space and state space as in Example 3.3 ###reference_theorem3###, the Lipschitz constant of can be expressed in terms of , and . We refer the reader to Agarwal et al. (2021 ###reference_b2###); Xiao (2022 ###reference_b58###) for more details.\nAs a consequence of the L-smoothness of the function , we next show that the learning rate can be chosen as a positive constant upper bounded by a constant depends only on the Lipschitz constant of . For notational complicity, we denote for the rest of this paper.\nUnder Assumptions 3.1 ###reference_theorem1### and 3.2 ###reference_theorem2###, if we set , then Algorithm 1 ###reference_### outputs a point satisfying\nwhere is defined in Definition 3.4 ###reference_theorem4###.\nThe proof of the above theorem is provided in Appendix A.2 ###reference_###. From this theorem, if one sets , i.e., , then there is no randomness along the iterations and the convergence property is reduced to\nwhich is implied by classical results on proximal gradient method (see e.g., Beck (2017 ###reference_b7###)). However, since the exact full gradient is rarely computable, it is common to require the variance (i.e., the trace of the covariance matrix) of the stochastic estimator to be bounded. The latter condition plays an essential role in analyzing stochastic first-order methods for solving nonconvex optimization problems, including RL applications; see, e.g., Beck (2017 ###reference_b7###); Papini et al. (2018 ###reference_b39###); Shen et al. (2019 ###reference_b51###); Lan (2020 ###reference_b26###); Yang et al. (2022 ###reference_b62###).\nUnder Assumptions 3.1 ###reference_theorem1### and 3.2 ###reference_theorem2###, there exists a constant such that for any ,\nThe proof of Lemma 4.4 ###reference_theorem4### is given in Appendix A.3 ###reference_###. 
By choosing a suitable sample size , we can rely on Lemma 4.4 ###reference_theorem4### to make the term in Theorem 4.3 ###reference_theorem3### small, for every . Then, Theorem 4.3 ###reference_theorem3### implies that Algorithm 1 ###reference_### admits an expected convergence rate to a stationary point. These results are summarized in the following theorem; see Appendix A.4 ###reference_### for a proof.\nSuppose that Assumptions 3.1 ###reference_theorem1### and 3.2 ###reference_theorem2### hold. Let be a given accuracy. Running the Algorithm 1 ###reference_### for\niterations with the learning rate and the sample size\noutputs a point satisfying\nMoreover, the sample complexity is .\nAs already mentioned in the introduction, the total sample complexity of Algorithm 1 ###reference_### to an -stationary point is shown to be , which matches the most competitive sample complexity of the classical stochastic policy gradient for MDPs Williams (1992 ###reference_b57###); Baxter & Bartlett (2001 ###reference_b6###); Zhang et al. (2020b ###reference_b70###); Xiong et al. (2021 ###reference_b59###); Yuan et al. (2022 ###reference_b65###).\nNote that the current state-of-the-art iteration complexity for the (small-batch) stochastic gradient descent method is with ; see, e.g., Ghadimi & Lan (2013 ###reference_b18###). The reason for requiring larger batch-size in Theorem 4.5 ###reference_theorem5### is to allow a constant learning rate. To the best of our knowledge, to get the same convergence properties as Theorem 4.5 ###reference_theorem5### under the same conditions for problem (1 ###reference_###), the large batch-size is required.\nAs mentioned in introduction, some recent progress has been made for analyzing the global convergence properties of the policy gradient methods for MDPs, which greatly rely on the concepts of gradient domination and its extensions Agarwal et al. (2021 ###reference_b2###); Mei et al. (2020 ###reference_b34###); Xiao (2022 ###reference_b58###); Yuan et al. (2022 ###reference_b65###); Gargiani et al. (2022 ###reference_b17###). This concept is also highly related to the classical P\u0141-condition Polyak (1963 ###reference_b44###) and K\u0141-condition Bolte et al. (2007 ###reference_b9###) in the field of optimization. One of the key ideas is to assume or verify that the difference between the optimal objective function value, namely , and can be bounded by the quantity depending on the norm of the gradient mapping at an arbitrary point. In particular, suppose that there exists a positive constant such that\nwhere is defined in Remark 3.5 ###reference_theorem5### (see e.g., Xiao (2022 ###reference_b58###)). Then, after running Algorithm 2 ###reference_### for iterations, one can easily check that\nAs a conclusion, by assuming or verifying stronger conditions, one can typically show that any stationary point of the problem (1 ###reference_###) is also a globally optimal solution. This shares the same spirit of Zhang et al. (2020a ###reference_b67###) for MDPs with general utilities. We leave it as a future research to analyze the global convergence of the problem (1 ###reference_###)."
28
+ },
29
+ {
30
+ "section_id": "5",
31
+ "parent_section_id": null,
32
+ "section_name": "Variance reduction via PAGE",
33
+ "text": "Recall from Theorem 4.3 ###reference_theorem3### that, there is a trade-off between the sample complexity and the iteration complexity of Algorithm 1 ###reference_###. In particular, while there is little room for us to improve the term which corresponds to the iteration complexity, it is possible to construct in an advanced manner to improve the sample complexity. Therefore, our main goal in this section is to reduce the expected sample complexity while keeping the term small. We achieve this goal by considering the stochastic variance-reduced gradient methods that have recently attracted much attention. Among these variance-reduced methods, as argued in Gargiani et al. (2022 ###reference_b17###), the ProbAbilistic Gradient Estimator (PAGE) proposed in Li et al. (2021b ###reference_b29###) has a simple structure, and can lead to optimal convergence properties. These appealing features make it attractive in machine learning applications. Therefore, in this section, we also consider the stochastic variance-reduced proximal gradient method with PAGE for solving the problem (1 ###reference_###).\nPAGE is originally designed for the stochastic nonconvex minimization in the oblivious setting:\nwhere is a fixed probability distribution and is a certain differentiable (and possibly nonconvex) loss function. For stochastic gradient-type methods, a certain stochastic gradient estimator for is required for performing the optimization. At the -th iteration, given a probability and the current gradient estimator , PAGE proposed to replace the vanilla mini-batch gradient estimator with the following unbiased stochastic estimator:\nwhere are sampled from , denote the sample sizes. Some key advantages of applying PAGE are summarized as follows. First, the algorithm is single-looped, which admit simpler implementation compared with existing double-looped variance reduced methods. Second, the probability can be adjusted dynamically, leading to more flexibilities. Third, one can choose to be much smaller than to guarantee the same iteration complexity as the vanilla SGD. Thus, the overall sample complexity can be significantly reduced. However, the application of PAGE to our setting needs significant modifications and extensions, which we shall demonstrate below. To the best of our knowledge, the application of PAGE for solving the general regularized reward optimization problem in the non-oblivious setting considered in this paper is new.\nFor notational simplicity, for the rest of this section, we denote\nfor , where denotes the importance weight between and . Note also that\nThe description of the proposed PAGE variance-reduced stochastic proximal gradient method is given in Algorithm 2 ###reference_###.\nIt is clear that the only difference between Algorithm 1 ###reference_### and Algorithm 2 ###reference_### is the choice of the gradient estimator. At each iteration of the latter algorithm, we have two choices for the gradient estimator, where, with probability , one chooses the same estimator as in Algorithm 1 ###reference_### with a sample size , and with probability , one constructs the estimator in a clever way which combines the information of the current iterate and the previous one. Since the data set is sampled according to the current probability distribution , we need to rely on the importance weight between and and construct the gradient estimator , which is an unbiased estimator for , so that becomes an unbiased estimator of . 
Indeed, one can easily verify that for any , it holds that\ni.e., is an unbiased estimator for \nprovided that .\nNext, we shall analyze the convergence properties of Algorithm 2 ###reference_###. Our analysis relies on the following assumption on the importance weight, which essentially controls the change of the distributions.\nLet ; the importance weight between and is well-defined, and there exists a constant such that\nClearly, the significance of the constant (if it exists) may depend sensitively on and . To see this, let us assume that for any , is a discrete distribution over a finite set of points for which for all . Now, suppose that with . Then, a simple calculation shows that\nHowever, it is possible that a certain is zero or tiny. In this case, can be huge or even infinite. Fortunately, the regularization term can help to avoid such undesired situations by imposing lower-bound constraints for all . In this case, we see that .\nNote that Assumption 5.1 ###reference_theorem1### is also employed in many existing works Papini et al. (2018 ###reference_b39###); Xu et al. (2019 ###reference_b60###); Pham et al. (2020 ###reference_b42###); Yuan et al. (2020 ###reference_b64###); Gargiani et al. (2022 ###reference_b17###). However, this assumption could be too strong, and it is not checkable in general. Addressing the relaxation of this assumption through the development of a more sophisticated algorithmic framework is beyond the scope of this paper. Here, we would like to mention some recent progress on relaxing this stringent condition for MDPs. By constructing additional stochastic estimators for the Hessian matrix of the objective function, Shen et al. (2019 ###reference_b51###) proposed a Hessian-aided policy-gradient-type method that improves the sample complexity from to without assuming Assumption 5.1 ###reference_theorem1###. Later, by explicitly controlling changes in the parameter , Zhang et al. (2021a ###reference_b68###) developed a truncated stochastic incremental variance-reduced policy gradient method that prevents the variance of the importance weights from becoming excessively large, leading to the sample complexity. By utilizing general Bregman divergences, Yuan et al. (2022 ###reference_b65###) proposed a double-looped variance-reduced mirror policy optimization approach and established an sample complexity, without requiring Hessian information or Assumption 5.1 ###reference_theorem1###. Following the same research theme as Shen et al. (2019 ###reference_b51###), Salehkaleybar et al. (2022 ###reference_b47###) also incorporated second-order information into the stochastic gradient estimator. By using momentum, the variance-reduced algorithm proposed in Salehkaleybar et al. (2022 ###reference_b47###) has some appealing features, including a small batch size and a parameter-free implementation. More recently, by imposing additional conditions, including the Lipschitz continuity of the Hessian of the score function and the Fisher-non-degeneracy condition of the policy, Fatkhullin et al. (2023 ###reference_b15###) derived improved (global) convergence guarantees for solving MDPs. We think that the above ideas can also be explored for solving the general model (1 ###reference_###).\nThe bounded variance of the importance weight implies that the (expected) distance between and is controlled by the distance between and , for any given . 
In particular, we have the following lemma, whose proof is provided in Appendix A.5 ###reference_###.\nUnder Assumption 3.1 ###reference_theorem1###, Assumption 3.2 ###reference_theorem2### and Assumption 5.1 ###reference_theorem1###, it holds that\nwhere is a constant defined as\nUnder the considered assumptions, we are able to provide an estimate for the term\n, which plays an essential role in deriving an improved sample complexity of Algorithm 2 ###reference_###. The results are summarized in the following Lemma 5.4 ###reference_theorem4###; see Appendix A.6 ###reference_### for a proof which shares the same spirit as (Li et al., 2021b ###reference_b29###, Lemma 3 & 4).\nSuppose that Assumption 3.1 ###reference_theorem1###, Assumption 3.2 ###reference_theorem2###, and Assumption 5.1 ###reference_theorem1### hold, and let and be the sequences generated by Algorithm 2 ###reference_###. Then it holds that\nWe are now ready to present the main result on the convergence property of Algorithm 2 ###reference_### by showing how to select the sample sizes and , the probability , and the learning rate . Intuitively, is typically a large number and one does not want to perform sampling with samples frequently; thus, the probability and the sample size should both be small. Given , and , we can then determine the value of such that . Consequently, the key estimate in Theorem 4.3 ###reference_theorem3### can be applied directly. Our results are summarized in the following theorem. The reader is referred to Appendix A.7 ###reference_### for the proof of this result.\nSuppose that Assumption 3.1 ###reference_theorem1###, Assumption 3.2 ###reference_theorem2### and Assumption 5.1 ###reference_theorem1### hold. For a given , we set with and .\nChoose a learning rate satisfying . Then, running Algorithm 2 ###reference_### for iterations outputs a point satisfying\nMoreover, the total expected sample complexity is .\nBy using the stochastic variance-reduced gradient estimator with PAGE and the importance sampling technique, we have improved the total sample complexity from to , under the considered conditions. This result matches the current competitive results established in Xu et al. (2019 ###reference_b60###); Yuan et al. (2020 ###reference_b64###); Pham et al. (2020 ###reference_b42###); Gargiani et al. (2022 ###reference_b17###) for solving MDPs and is applicable to the general model (1 ###reference_###). Finally, as mentioned in Remark 4.7 ###reference_theorem7###, by assuming or verifying stronger conditions, such as gradient domination and its extensions, it is also possible to derive some global convergence results. Again, such a possibility is left as a future research direction."
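To make the estimator concrete, here is a minimal Python sketch of the PAGE-style update described above, assuming the same placeholder callables as before plus a hypothetical density(theta, x) for evaluating the sampling density; the importance weight keeps the small-batch correction unbiased. It is a sketch under these assumptions, not the paper's reference implementation.

import numpy as np

def page_estimator(theta_t, theta_prev, d_prev, p, N, b,
                   sample, reward, grad_log_density, density, rng=np.random):
    def g(theta, x):                      # single-sample score-function gradient
        return reward(x) * grad_log_density(theta, x)
    if rng.rand() < p:
        # With probability p: fresh large-batch estimator (size N), as in Algorithm 1.
        xs = [sample(theta_t) for _ in range(N)]
        return np.mean([g(theta_t, x) for x in xs], axis=0)
    # Otherwise: recursive small-batch update (size b) with data drawn at theta_t;
    # the importance weight density(theta_prev, x) / density(theta_t, x) corrects
    # the term evaluated at theta_prev so the overall estimator stays unbiased.
    xs = [sample(theta_t) for _ in range(b)]
    correction = np.mean([g(theta_t, x)
                          - (density(theta_prev, x) / density(theta_t, x)) * g(theta_prev, x)
                          for x in xs], axis=0)
    return d_prev + correction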
34
+ },
35
+ {
36
+ "section_id": "6",
37
+ "parent_section_id": null,
38
+ "section_name": "Conclusions",
39
+ "text": "We have studied the stochastic (variance-reduced) proximal gradient method addressing a general regularized expected reward optimization problem which covers many existing important problem in reinforcement learning. We have established the sample complexity of the classical stochastic proximal gradient method and the sample complexity of the stochastic variance-reduced proximal gradient method with an importance sampling based probabilistic gradient estimator. Our results match the sample complexity of their most competitive counterparts under similar settings for Markov decision processes.\nMeanwhile, we have also suspected some limitations in the current paper. First, due to the nonconcavity of the objective function, we found it challenging to derive global convergence properties of the stochastic proximal gradient method and its variants without imposing additional conditions. On the other hand, analyzing the sample complexity for achieving convergence to second-order stationary points\u2014thereby avoiding saddle points\u2014may be more realistic and feasible Arjevani et al. (2020 ###reference_b4###). Second, the bounded variance condition for the importance weight turns out to be quite strong and can not be verified in general. How to relax this condition for our general model deserves further investigation. Last but not least, since we focus more on the theoretical analysis in this paper and due to the space constraint, we did not conduct any numerical simulation to examine the practical efficiency of the proposed methods. We shall try to delve into these challenges and get better understandings of the proposed problem and algorithms in a future research.\nFinally, this paper has demonstrated the possibility of pairing the stochastic proximal gradient method with efficient variance reduction techniques Li et al. (2021b ###reference_b29###) for solving the reward optimization problem (1 ###reference_###). Beyond variance-reduced methods, there are other possibilities that allow one deriving more sophisticated algorithms. For instance, one can also pair the stochastic proximal gradient method with the ideas of the actor-critic method Konda & Tsitsiklis (1999 ###reference_b24###), the natural policy gradient method Kakade (2001 ###reference_b23###), policy mirror descent methods Tomar et al. (2020 ###reference_b55###); Lan (2023 ###reference_b27###), trust-region methods Schulman et al. (2015 ###reference_b48###); Shani et al. (2020 ###reference_b49###), and the variational policy gradient methods Zhang et al. (2020a ###reference_b67###). We think that these possible generalizations can lead to more exciting results and make further contributions to the literature."
40
+ }
41
+ ],
42
+ "appendix": [
43
+ {
44
+ "section_id": "Appendix 1",
45
+ "parent_section_id": null,
46
+ "section_name": "Appendix A Proofs",
47
+ "text": "One could establish the -smoothness of via bounding the spectral norm of the Hessian . To this end, we first calculate the Hessian of as follows:\nThen, by the triangular inequality, it holds that\nThus, is -smooth with , and the proof is completed.\n\u220e\nFrom Lemma 4.1 ###reference_theorem1###, we see that\nBy the updating rule of , we see that\nCombining (5 ###reference_###) and (6 ###reference_###), we see that\nRearranging terms, we can rewrite the above inequality as\nBy the Cauchy-Schwarz inequality, we see that\nwhich together with (8 ###reference_###) implies that\nSumming the above inequality across , we get\nHere, we recall that .\nOn the other hand, (8 ###reference_###) also implies that\nNotice that\nThen by substituting the above equality into (10 ###reference_###) and rearranging terms, we see that\nwhere the second inequality is due to the Cauchy-Schwarz inequality and fact that\nand the third inequality is implied by Lemma 4.1 ###reference_theorem1###.\nSumming the above inequality across , we get\nwhere the last inequality is obtained from the fact that as a consequence of the choice of the learning rate.\nConsequently, we have that\nwhere the first inequality is because of (7 ###reference_###), the second inequality is due to (11 ###reference_###) and the third inequality is derived from (9 ###reference_###). Thus, the proof is completed.\n\u220e\nWe first estimate as follows\nThen, by the fact that for all random variable , we have\nwhich completes the proof.\n\u220e\nFrom Theorem 4.3 ###reference_theorem3###, in order to ensure that is a -stationary point, we can require\nIt is easy to verify that is an unbiased estimator of . Then, Lemma 4.4 ###reference_theorem4### implies that\nAs a consequence, if one chooses , then (12 ###reference_###) holds.\nOn the other hand, (13 ###reference_###) holds if one sets . Moreover, we see that the sample complexity can be computed as . Therefore, the proof is completed.\n\u220e\nFirst, recall that\nThen, by the definitions of and , we can verify that\nWe next consider the function . Taking the derivative of with respect to , we get\nMoreover, since\nwe see that the Hessian of with respect to can be computed as\nNotice that and . Therefore, by the Mean Value Theorem, we get\nwhere is a point between and . Now, from the expression of the Hessian matrix, we see that for any ,\nAs a consequence, we have\nwhich completes the proof.\n\u220e\nBy the definition of the stochastic gradient estimator given in Algorithm 2 ###reference_###, we can see that for ,\nwhere in the first inequality, we use the facts that for all random variable and is unbiased estimator for for all , in the second inequality, we rely on the fact that is independent, and the last inequality is due to Lemma 5.3 ###reference_theorem3###. By summing the above relation across , we see that\nwhich implies that\nRecall from (9 ###reference_###) that\nwhich together with (14 ###reference_###) implies that\nThus, the proof is completed.\n\u220e\nSince and\nwe can readily check that\nThen, we can see that\nwhere is a constant, the first inequality is due to Theorem 4.3 ###reference_theorem3###, the second inequality is derived from Lemma 5.4 ###reference_theorem4###, and the third inequality is implied by (15 ###reference_###).\nThen, in order to have for a given tolerance , we can simply set ,\nand require that\nTherefore, it suffices to set , and . 
(We omit the concrete expressions of , and in terms of and other constants, and only give the big-O notation here for simplicity.)\nFinally, we can verify that the sample complexity can be bounded as\nTherefore, the proof is completed.\n\u220e"
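Since the displayed formulas were stripped in extraction, the step "From Lemma 4.1 we see that ..." in Appendix A.2 can be read as the standard descent-type inequality implied by L-smoothness; a hedged LaTeX reconstruction in the paper's notation (F the objective, L its smoothness constant) is:

% standard consequence of L-Lipschitz gradients, used in the proof of Theorem 4.3
\|\nabla F(\theta') - \nabla F(\theta)\| \le L\,\|\theta' - \theta\|
\quad \Longrightarrow \quad
F(\theta') \ge F(\theta) + \langle \nabla F(\theta),\, \theta' - \theta \rangle
              - \frac{L}{2}\,\|\theta' - \theta\|^{2}.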
48
+ }
49
+ ],
50
+ "tables": {},
51
+ "image_paths": {},
52
+ "validation": true,
53
+ "references": [
54
+ {
55
+ "1": {
56
+ "title": "Optimality and approximation with policy gradient methods in markov decision processes.",
57
+ "author": "Alekh Agarwal, Sham M Kakade, Jason D Lee, and Gaurav Mahajan.",
58
+ "venue": "In Conference on Learning Theory, pp. 64\u201366. PMLR, 2020.",
59
+ "url": null
60
+ }
61
+ },
62
+ {
63
+ "2": {
64
+ "title": "On the theory of policy gradient methods: Optimality, approximation, and distribution shift.",
65
+ "author": "Alekh Agarwal, Sham M Kakade, Jason D Lee, and Gaurav Mahajan.",
66
+ "venue": "The Journal of Machine Learning Research, 22(1):4431\u20134506, 2021.",
67
+ "url": null
68
+ }
69
+ },
70
+ {
71
+ "3": {
72
+ "title": "Understanding the impact of entropy on policy optimization.",
73
+ "author": "Zafarali Ahmed, Nicolas Le Roux, Mohammad Norouzi, and Dale Schuurmans.",
74
+ "venue": "In International conference on machine learning, pp. 151\u2013160. PMLR, 2019.",
75
+ "url": null
76
+ }
77
+ },
78
+ {
79
+ "4": {
80
+ "title": "Second-order information in non-convex stochastic optimization: Power and limitations.",
81
+ "author": "Yossi Arjevani, Yair Carmon, John C Duchi, Dylan J Foster, Ayush Sekhari, and Karthik Sridharan.",
82
+ "venue": "In Conference on Learning Theory, pp. 242\u2013299. PMLR, 2020.",
83
+ "url": null
84
+ }
85
+ },
86
+ {
87
+ "5": {
88
+ "title": "Reinforcement learning with general utilities: Simpler variance reduction and large state-action space.",
89
+ "author": "Anas Barakat, Ilyas Fatkhullin, and Niao He.",
90
+ "venue": "arXiv preprint arXiv:2306.01854, 2023.",
91
+ "url": null
92
+ }
93
+ },
94
+ {
95
+ "6": {
96
+ "title": "Infinite-horizon policy-gradient estimation.",
97
+ "author": "Jonathan Baxter and Peter L Bartlett.",
98
+ "venue": "journal of artificial intelligence research, 15:319\u2013350, 2001.",
99
+ "url": null
100
+ }
101
+ },
102
+ {
103
+ "7": {
104
+ "title": "First-order methods in optimization.",
105
+ "author": "Amir Beck.",
106
+ "venue": "SIAM, 2017.",
107
+ "url": null
108
+ }
109
+ },
110
+ {
111
+ "8": {
112
+ "title": "Neural combinatorial optimization with reinforcement learning.",
113
+ "author": "Irwan Bello, Hieu Pham, Quoc V Le, Mohammad Norouzi, and Samy Bengio.",
114
+ "venue": "arXiv preprint arXiv:1611.09940, 2016.",
115
+ "url": null
116
+ }
117
+ },
118
+ {
119
+ "9": {
120
+ "title": "The \u0142ojasiewicz inequality for nonsmooth subanalytic functions with applications to subgradient dynamical systems.",
121
+ "author": "J\u00e9r\u00f4me Bolte, Aris Daniilidis, and Adrian Lewis.",
122
+ "venue": "SIAM Journal on Optimization, 17(4):1205\u20131223, 2007.",
123
+ "url": null
124
+ }
125
+ },
126
+ {
127
+ "10": {
128
+ "title": "Fast global convergence of natural policy gradient methods with entropy regularization.",
129
+ "author": "Shicong Cen, Chen Cheng, Yuxin Chen, Yuting Wei, and Yuejie Chi.",
130
+ "venue": "Operations Research, 70(4):2563\u20132578, 2022.",
131
+ "url": null
132
+ }
133
+ },
134
+ {
135
+ "11": {
136
+ "title": "Monte carlo policy gradient method for binary optimization.",
137
+ "author": "Cheng Chen, Ruitao Chen, Tianyou Li, Ruichen Ao, and Zaiwen Wen.",
138
+ "venue": "arXiv preprint arXiv:2307.00783, 2023.",
139
+ "url": null
140
+ }
141
+ },
142
+ {
143
+ "12": {
144
+ "title": "Approximate regions of attraction in learning with decision-dependent distributions.",
145
+ "author": "Roy Dong, Heling Zhang, and Lillian Ratliff.",
146
+ "venue": "In International Conference on Artificial Intelligence and Statistics, pp. 11172\u201311184. PMLR, 2023.",
147
+ "url": null
148
+ }
149
+ },
150
+ {
151
+ "13": {
152
+ "title": "Stochastic optimization with decision-dependent distributions.",
153
+ "author": "Dmitriy Drusvyatskiy and Lin Xiao.",
154
+ "venue": "Mathematics of Operations Research, 48(2):954\u2013998, 2023.",
155
+ "url": null
156
+ }
157
+ },
158
+ {
159
+ "14": {
160
+ "title": "Spider: Near-optimal non-convex optimization via stochastic path-integrated differential estimator.",
161
+ "author": "Cong Fang, Chris Junchi Li, Zhouchen Lin, and Tong Zhang.",
162
+ "venue": "Advances in neural information processing systems, 31, 2018.",
163
+ "url": null
164
+ }
165
+ },
166
+ {
167
+ "15": {
168
+ "title": "Stochastic policy gradient methods: Improved sample complexity for fisher-non-degenerate policies.",
169
+ "author": "Ilyas Fatkhullin, Anas Barakat, Anastasia Kireeva, and Niao He.",
170
+ "venue": "In International Conference on Machine Learning, pp. 9827\u20139869. PMLR, 2023.",
171
+ "url": null
172
+ }
173
+ },
174
+ {
175
+ "16": {
176
+ "title": "A survey on concept drift adaptation.",
177
+ "author": "Jo\u00e3o Gama, Indr\u0117 \u017dliobait\u0117, Albert Bifet, Mykola Pechenizkiy, and Abdelhamid Bouchachia.",
178
+ "venue": "ACM computing surveys (CSUR), 46(4):1\u201337, 2014.",
179
+ "url": null
180
+ }
181
+ },
182
+ {
183
+ "17": {
184
+ "title": "Page-pg: A simple and loopless variance-reduced policy gradient method with probabilistic gradient estimation.",
185
+ "author": "Matilde Gargiani, Andrea Zanelli, Andrea Martinelli, Tyler Summers, and John Lygeros.",
186
+ "venue": "In International Conference on Machine Learning, pp. 7223\u20137240. PMLR, 2022.",
187
+ "url": null
188
+ }
189
+ },
190
+ {
191
+ "18": {
192
+ "title": "Stochastic first-and zeroth-order methods for nonconvex stochastic programming.",
193
+ "author": "Saeed Ghadimi and Guanghui Lan.",
194
+ "venue": "SIAM journal on optimization, 23(4):2341\u20132368, 2013.",
195
+ "url": null
196
+ }
197
+ },
198
+ {
199
+ "19": {
200
+ "title": "The elements of statistical learning: data mining, inference, and prediction, volume 2.",
201
+ "author": "Trevor Hastie, Robert Tibshirani, Jerome H Friedman, and Jerome H Friedman.",
202
+ "venue": "Springer, 2009.",
203
+ "url": null
204
+ }
205
+ },
206
+ {
207
+ "20": {
208
+ "title": "Bregman gradient policy optimization.",
209
+ "author": "Feihu Huang, Shangqian Gao, and Heng Huang.",
210
+ "venue": "arXiv preprint arXiv:2106.12112, 2021.",
211
+ "url": null
212
+ }
213
+ },
214
+ {
215
+ "21": {
216
+ "title": "Regret minimization with performative feedback.",
217
+ "author": "Meena Jagadeesan, Tijana Zrnic, and Celestine Mendler-D\u00fcnner.",
218
+ "venue": "In International Conference on Machine Learning, pp. 9760\u20139785. PMLR, 2022.",
219
+ "url": null
220
+ }
221
+ },
222
+ {
223
+ "22": {
224
+ "title": "Accelerating stochastic gradient descent using predictive variance reduction.",
225
+ "author": "Rie Johnson and Tong Zhang.",
226
+ "venue": "Advances in neural information processing systems, 26, 2013.",
227
+ "url": null
228
+ }
229
+ },
230
+ {
231
+ "23": {
232
+ "title": "A natural policy gradient.",
233
+ "author": "Sham M Kakade.",
234
+ "venue": "Advances in neural information processing systems, 14, 2001.",
235
+ "url": null
236
+ }
237
+ },
238
+ {
239
+ "24": {
240
+ "title": "Actor-critic algorithms.",
241
+ "author": "Vijay Konda and John Tsitsiklis.",
242
+ "venue": "Advances in neural information processing systems, 12, 1999.",
243
+ "url": null
244
+ }
245
+ },
246
+ {
247
+ "25": {
248
+ "title": "Policy gradient for reinforcement learning with general utilities.",
249
+ "author": "Navdeep Kumar, Kaixin Wang, Kfir Levy, and Shie Mannor.",
250
+ "venue": "arXiv preprint arXiv:2210.00991, 2022.",
251
+ "url": null
252
+ }
253
+ },
254
+ {
255
+ "26": {
256
+ "title": "First-order and stochastic optimization methods for machine learning, volume 1.",
257
+ "author": "Guanghui Lan.",
258
+ "venue": "Springer, 2020.",
259
+ "url": null
260
+ }
261
+ },
262
+ {
263
+ "27": {
264
+ "title": "Policy mirror descent for reinforcement learning: Linear convergence, new sampling complexity, and generalized problem classes.",
265
+ "author": "Guanghui Lan.",
266
+ "venue": "Mathematical programming, 198(1):1059\u20131106, 2023.",
267
+ "url": null
268
+ }
269
+ },
270
+ {
271
+ "28": {
272
+ "title": "Softmax policy gradient methods can take exponential time to converge.",
273
+ "author": "Gen Li, Yuting Wei, Yuejie Chi, Yuantao Gu, and Yuxin Chen.",
274
+ "venue": "In Conference on Learning Theory, pp. 3107\u20133110. PMLR, 2021a.",
275
+ "url": null
276
+ }
277
+ },
278
+ {
279
+ "29": {
280
+ "title": "Page: A simple and optimal probabilistic gradient estimator for nonconvex optimization.",
281
+ "author": "Zhize Li, Hongyan Bao, Xiangliang Zhang, and Peter Richt\u00e1rik.",
282
+ "venue": "In International conference on machine learning, pp. 6286\u20136295. PMLR, 2021b.",
283
+ "url": null
284
+ }
285
+ },
286
+ {
287
+ "30": {
288
+ "title": "Finite expression method for solving high-dimensional partial differential equations.",
289
+ "author": "Senwei Liang and Haizhao Yang.",
290
+ "venue": "arXiv preprint arXiv:2206.10121, 2022.",
291
+ "url": null
292
+ }
293
+ },
294
+ {
295
+ "31": {
296
+ "title": "Neural proximal/trust region policy optimization attains globally optimal policy.",
297
+ "author": "Boyi Liu, Qi Cai, Zhuoran Yang, and Zhaoran Wang.",
298
+ "venue": "arXiv preprint arXiv:1906.10306, 2019.",
299
+ "url": null
300
+ }
301
+ },
302
+ {
303
+ "32": {
304
+ "title": "An improved analysis of (variance-reduced) policy gradient and natural policy gradient methods.",
305
+ "author": "Yanli Liu, Kaiqing Zhang, Tamer Basar, and Wotao Yin.",
306
+ "venue": "Advances in Neural Information Processing Systems, 33:7624\u20137636, 2020.",
307
+ "url": null
308
+ }
309
+ },
310
+ {
311
+ "33": {
312
+ "title": "Reinforcement learning for combinatorial optimization: A survey.",
313
+ "author": "Nina Mazyavkina, Sergey Sviridov, Sergei Ivanov, and Evgeny Burnaev.",
314
+ "venue": "Computers & Operations Research, 134:105400, 2021.",
315
+ "url": null
316
+ }
317
+ },
318
+ {
319
+ "34": {
320
+ "title": "On the global convergence rates of softmax policy gradient methods.",
321
+ "author": "Jincheng Mei, Chenjun Xiao, Csaba Szepesvari, and Dale Schuurmans.",
322
+ "venue": "In International Conference on Machine Learning, pp. 6820\u20136829. PMLR, 2020.",
323
+ "url": null
324
+ }
325
+ },
326
+ {
327
+ "35": {
328
+ "title": "Stochastic optimization for performative prediction.",
329
+ "author": "Celestine Mendler-D\u00fcnner, Juan Perdomo, Tijana Zrnic, and Moritz Hardt.",
330
+ "venue": "Advances in Neural Information Processing Systems, 33:4929\u20134939, 2020.",
331
+ "url": null
332
+ }
333
+ },
334
+ {
335
+ "36": {
336
+ "title": "The social cost of strategic classification.",
337
+ "author": "Smitha Milli, John Miller, Anca D Dragan, and Moritz Hardt.",
338
+ "venue": "In Proceedings of the Conference on Fairness, Accountability, and Transparency, pp. 230\u2013239, 2019.",
339
+ "url": null
340
+ }
341
+ },
342
+ {
343
+ "37": {
344
+ "title": "Playing atari with deep reinforcement learning.",
345
+ "author": "Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller.",
346
+ "venue": "arXiv preprint arXiv:1312.5602, 2013.",
347
+ "url": null
348
+ }
349
+ },
350
+ {
351
+ "38": {
352
+ "title": "Sarah: A novel method for machine learning problems using stochastic recursive gradient.",
353
+ "author": "Lam M Nguyen, Jie Liu, Katya Scheinberg, and Martin Tak\u00e1\u010d.",
354
+ "venue": "In International conference on machine learning, pp. 2613\u20132621. PMLR, 2017.",
355
+ "url": null
356
+ }
357
+ },
358
+ {
359
+ "39": {
360
+ "title": "Stochastic variance-reduced policy gradient.",
361
+ "author": "Matteo Papini, Damiano Binaghi, Giuseppe Canonaco, Matteo Pirotta, and Marcello Restelli.",
362
+ "venue": "In International conference on machine learning, pp. 4026\u20134035. PMLR, 2018.",
363
+ "url": null
364
+ }
365
+ },
366
+ {
367
+ "40": {
368
+ "title": "Sequential cost-sensitive decision making with reinforcement learning.",
369
+ "author": "Edwin Pednault, Naoki Abe, and Bianca Zadrozny.",
370
+ "venue": "In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 259\u2013268, 2002.",
371
+ "url": null
372
+ }
373
+ },
374
+ {
375
+ "41": {
376
+ "title": "Performative prediction.",
377
+ "author": "Juan Perdomo, Tijana Zrnic, Celestine Mendler-D\u00fcnner, and Moritz Hardt.",
378
+ "venue": "In International Conference on Machine Learning, pp. 7599\u20137609. PMLR, 2020.",
379
+ "url": null
380
+ }
381
+ },
382
+ {
383
+ "42": {
384
+ "title": "A hybrid stochastic policy gradient algorithm for reinforcement learning.",
385
+ "author": "Nhan Pham, Lam Nguyen, Dzung Phan, Phuong Ha Nguyen, Marten Dijk, and Quoc Tran-Dinh.",
386
+ "venue": "In International Conference on Artificial Intelligence and Statistics, pp. 374\u2013385. PMLR, 2020.",
387
+ "url": null
388
+ }
389
+ },
390
+ {
391
+ "43": {
392
+ "title": "Policy gradient in lipschitz markov decision processes.",
393
+ "author": "Matteo Pirotta, Marcello Restelli, and Luca Bascetta.",
394
+ "venue": "Machine Learning, 100:255\u2013283, 2015.",
395
+ "url": null
396
+ }
397
+ },
398
+ {
399
+ "44": {
400
+ "title": "Gradient methods for the minimisation of functionals.",
401
+ "author": "Boris T Polyak.",
402
+ "venue": "USSR Computational Mathematics and Mathematical Physics, 3(4):864\u2013878, 1963.",
403
+ "url": null
404
+ }
405
+ },
406
+ {
407
+ "45": {
408
+ "title": "Some aspects of the sequential design of experiments.",
409
+ "author": "Herbert Robbins.",
410
+ "venue": "1952.",
411
+ "url": null
412
+ }
413
+ },
414
+ {
415
+ "46": {
416
+ "title": "Convex analysis, volume 11.",
417
+ "author": "R Tyrrell Rockafellar.",
418
+ "venue": "Princeton university press, 1997.",
419
+ "url": null
420
+ }
421
+ },
422
+ {
423
+ "47": {
424
+ "title": "Momentum-based policy gradient with second-order information.",
425
+ "author": "Saber Salehkaleybar, Sadegh Khorasani, Negar Kiyavash, Niao He, and Patrick Thiran.",
426
+ "venue": "arXiv preprint arXiv:2205.08253, 2022.",
427
+ "url": null
428
+ }
429
+ },
430
+ {
431
+ "48": {
432
+ "title": "Trust region policy optimization.",
433
+ "author": "John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz.",
434
+ "venue": "In International conference on machine learning, pp. 1889\u20131897. PMLR, 2015.",
435
+ "url": null
436
+ }
437
+ },
438
+ {
439
+ "49": {
440
+ "title": "Adaptive trust region policy optimization: Global convergence and faster rates for regularized mdps.",
441
+ "author": "Lior Shani, Yonathan Efroni, and Shie Mannor.",
442
+ "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pp. 5668\u20135675, 2020.",
443
+ "url": null
444
+ }
445
+ },
446
+ {
447
+ "50": {
448
+ "title": "Lectures on stochastic programming: modeling and theory.",
449
+ "author": "Alexander Shapiro, Darinka Dentcheva, and Andrzej Ruszczynski.",
450
+ "venue": "SIAM, 2021.",
451
+ "url": null
452
+ }
453
+ },
454
+ {
455
+ "51": {
456
+ "title": "Hessian aided policy gradient.",
457
+ "author": "Zebang Shen, Alejandro Ribeiro, Hamed Hassani, Hui Qian, and Chao Mi.",
458
+ "venue": "In International conference on machine learning, pp. 5729\u20135738. PMLR, 2019.",
459
+ "url": null
460
+ }
461
+ },
462
+ {
463
+ "52": {
464
+ "title": "A finite expression method for solving high-dimensional committor problems.",
465
+ "author": "Zezheng Song, Maria K Cameron, and Haizhao Yang.",
466
+ "venue": "arXiv preprint arXiv:2306.12268, 2023.",
467
+ "url": null
468
+ }
469
+ },
470
+ {
471
+ "53": {
472
+ "title": "Reinforcement learning: An introduction.",
473
+ "author": "Richard S Sutton and Andrew G Barto.",
474
+ "venue": "MIT press, 2018.",
475
+ "url": null
476
+ }
477
+ },
478
+ {
479
+ "54": {
480
+ "title": "Policy gradient methods for reinforcement learning with function approximation.",
481
+ "author": "Richard S Sutton, David McAllester, Satinder Singh, and Yishay Mansour.",
482
+ "venue": "Advances in neural information processing systems, 12, 1999.",
483
+ "url": null
484
+ }
485
+ },
486
+ {
487
+ "55": {
488
+ "title": "Mirror descent policy optimization.",
489
+ "author": "Manan Tomar, Lior Shani, Yonathan Efroni, and Mohammad Ghavamzadeh.",
490
+ "venue": "arXiv preprint arXiv:2005.09814, 2020.",
491
+ "url": null
492
+ }
493
+ },
494
+ {
495
+ "56": {
496
+ "title": "Optimal decision making under strategic behavior.",
497
+ "author": "Stratis Tsirtsis, Behzad Tabibian, Moein Khajehnejad, Adish Singla, Bernhard Sch\u00f6lkopf, and Manuel Gomez-Rodriguez.",
498
+ "venue": "Management Science, 2024.",
499
+ "url": null
500
+ }
501
+ },
502
+ {
503
+ "57": {
504
+ "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning.",
505
+ "author": "Ronald J Williams.",
506
+ "venue": "Machine learning, 8:229\u2013256, 1992.",
507
+ "url": null
508
+ }
509
+ },
510
+ {
511
+ "58": {
512
+ "title": "On the convergence rates of policy gradient methods.",
513
+ "author": "Lin Xiao.",
514
+ "venue": "The Journal of Machine Learning Research, 23(1):12887\u201312922, 2022.",
515
+ "url": null
516
+ }
517
+ },
518
+ {
519
+ "59": {
520
+ "title": "Non-asymptotic convergence of adam-type reinforcement learning algorithms under markovian sampling.",
521
+ "author": "Huaqing Xiong, Tengyu Xu, Yingbin Liang, and Wei Zhang.",
522
+ "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 10460\u201310468, 2021.",
523
+ "url": null
524
+ }
525
+ },
526
+ {
527
+ "60": {
528
+ "title": "Sample efficient policy gradient methods with recursive variance reduction.",
529
+ "author": "Pan Xu, Felicia Gao, and Quanquan Gu.",
530
+ "venue": "arXiv preprint arXiv:1909.08610, 2019.",
531
+ "url": null
532
+ }
533
+ },
534
+ {
535
+ "61": {
536
+ "title": "An improved convergence analysis of stochastic variance-reduced policy gradient.",
537
+ "author": "Pan Xu, Felicia Gao, and Quanquan Gu.",
538
+ "venue": "In Uncertainty in Artificial Intelligence, pp. 541\u2013551. PMLR, 2020.",
539
+ "url": null
540
+ }
541
+ },
542
+ {
543
+ "62": {
544
+ "title": "Policy optimization with stochastic mirror descent.",
545
+ "author": "Long Yang, Yu Zhang, Gang Zheng, Qian Zheng, Pengfei Li, Jianhang Huang, and Gang Pan.",
546
+ "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pp. 8823\u20138831, 2022.",
547
+ "url": null
548
+ }
549
+ },
550
+ {
551
+ "63": {
552
+ "title": "A survey on causal inference.",
553
+ "author": "Liuyi Yao, Zhixuan Chu, Sheng Li, Yaliang Li, Jing Gao, and Aidong Zhang.",
554
+ "venue": "ACM Transactions on Knowledge Discovery from Data (TKDD), 15(5):1\u201346, 2021.",
555
+ "url": null
556
+ }
557
+ },
558
+ {
559
+ "64": {
560
+ "title": "Stochastic recursive momentum for policy gradient methods.",
561
+ "author": "Huizhuo Yuan, Xiangru Lian, Ji Liu, and Yuren Zhou.",
562
+ "venue": "arXiv preprint arXiv:2003.04302, 2020.",
563
+ "url": null
564
+ }
565
+ },
566
+ {
567
+ "65": {
568
+ "title": "A general sample complexity analysis of vanilla policy gradient.",
569
+ "author": "Rui Yuan, Robert M Gower, and Alessandro Lazaric.",
570
+ "venue": "In International Conference on Artificial Intelligence and Statistics, pp. 3332\u20133380. PMLR, 2022.",
571
+ "url": null
572
+ }
573
+ },
574
+ {
575
+ "66": {
576
+ "title": "Policy mirror descent for regularized reinforcement learning: A generalized framework with linear convergence.",
577
+ "author": "Wenhao Zhan, Shicong Cen, Baihe Huang, Yuxin Chen, Jason D Lee, and Yuejie Chi.",
578
+ "venue": "SIAM Journal on Optimization, 33(2):1061\u20131091, 2023.",
579
+ "url": null
580
+ }
581
+ },
582
+ {
583
+ "67": {
584
+ "title": "Variational policy gradient method for reinforcement learning with general utilities.",
585
+ "author": "Junyu Zhang, Alec Koppel, Amrit Singh Bedi, Csaba Szepesvari, and Mengdi Wang.",
586
+ "venue": "Advances in Neural Information Processing Systems, 33:4572\u20134583, 2020a.",
587
+ "url": null
588
+ }
589
+ },
590
+ {
591
+ "68": {
592
+ "title": "On the convergence and sample efficiency of variance-reduced policy gradient method.",
593
+ "author": "Junyu Zhang, Chengzhuo Ni, Csaba Szepesvari, Mengdi Wang, et al.",
594
+ "venue": "Advances in Neural Information Processing Systems, 34:2228\u20132240, 2021a.",
595
+ "url": null
596
+ }
597
+ },
598
+ {
599
+ "69": {
600
+ "title": "Sample efficient reinforcement learning with reinforce.",
601
+ "author": "Junzi Zhang, Jongho Kim, Brendan O\u2019Donoghue, and Stephen Boyd.",
602
+ "venue": "In Proceedings of the AAAI conference on artificial intelligence, volume 35, pp. 10887\u201310895, 2021b.",
603
+ "url": null
604
+ }
605
+ },
606
+ {
607
+ "70": {
608
+ "title": "Global convergence of policy gradient methods to (almost) locally optimal policies.",
609
+ "author": "Kaiqing Zhang, Alec Koppel, Hao Zhu, and Tamer Basar.",
610
+ "venue": "SIAM Journal on Control and Optimization, 58(6):3586\u20133612, 2020b.",
611
+ "url": null
612
+ }
613
+ },
614
+ {
615
+ "71": {
616
+ "title": "Solving large scale linear prediction problems using stochastic gradient descent algorithms.",
617
+ "author": "Tong Zhang.",
618
+ "venue": "In Proceedings of the twenty-first international conference on Machine learning, pp. 116, 2004.",
619
+ "url": null
620
+ }
621
+ },
622
+ {
623
+ "72": {
624
+ "title": "A reinforcement learning approach to job-shop scheduling.",
625
+ "author": "Wei Zhang and Thomas G Dietterich.",
626
+ "venue": "In IJCAI, volume 95, pp. 1114\u20131120. Citeseer, 1995.",
627
+ "url": null
628
+ }
629
+ }
630
+ ],
631
+ "url": "http://arxiv.org/html/2401.12508v2"
632
+ }
20240819/2402.05642v3.json ADDED
@@ -0,0 +1,131 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "title": "An Optimization-based Baseline for Rigid 2D/3D Registration Applied to Spine Surgical Navigation Using CMA-ES",
3
+ "abstract": "A robust and efficient optimization-based 2D/3D registration framework is crucial for the navigation system of orthopedic surgical robots. It can provide precise position information of surgical instruments and implants during surgery.\nWhile artificial intelligence technology has advanced rapidly in recent years, traditional optimization-based registration methods remain indispensable in the field of 2D/3D registration.\nThe exceptional precision of this method enables it to be considered as a post-processing step of the learning-based methods, thereby offering a reliable assurance for registration.\nIn this paper, we present a coarse-to-fine registration framework based on the CMA-ES algorithm.\nWe conducted intensive testing of our method using data from different parts of the spine. The results shows the effectiveness of the proposed framework on real orthopedic spine surgery clinical data.\nThis work can be viewed as an additional extension that complements the optimization-based methods employed in our previous studies.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Automatic X-ray to CT registration is a process that aims to align intra-operative X-ray images with corresponding pre-operative CT scans.\nIt involves finding the spatial correspondence between these two modalities, enabling accurate integration and analysis of information from both imaging techniques. The challenges in automatic X-ray to CT registration arise due to differences in image acquisition protocols, patient positioning, and image artifacts. Additionally, anatomical deformations caused by patient movement or pathological changes present further complexities.\nAnd it has shown promising results in various clinical applications, including orthopedics, interventional radiology and minimally invasive surgical robot navigation. It allows clinicians to effectively fuse the information from X-ray and CT modalities, providing a comprehensive understanding of a patient\u2019s condition and facilitating more precise and targeted medical interventions [15 ###reference_b15###].\nRecent progress in machine learning has had a significant impact on 2D/3D registration, revolutionizing the field and improving the accuracy and efficiency of the registration process [3 ###reference_b3###].\nResearchers have started exploring the use of neural networks as a substitute for traditional similarity measures [13 ###reference_b13###], treating registration as a Markov decision process [12 ###reference_b12###], and employing differentiable projection operators to directly implement an end-to-end registration framework [4 ###reference_b4###, 7 ###reference_b7###].\nSome existing works [6 ###reference_b6###, 17 ###reference_b17###] get rid of the problem of lack of real data by adopting self-supervised training strategies.\nHowever, in the existing literature, Most learning-based registration methods still require the use of optimization-based methods as a post-processing step to fine-tune the results. For example, [1 ###reference_b1###, 4 ###reference_b4###] use neural networks to obtain an approximately convex mapping, which can increase the capture range of registration. But this network similarity function is overly smooth, thereby leading to premature convergence when the pose closely approximates the ground truth. In order to ensure the accuracy of registration, a benchmark based on covariance adaptive evolution strategy (CMA-ES) [9 ###reference_b9###] is adopted for refinement.\nGao et al. [5 ###reference_b5###], Gopalakrishnan et al. [7 ###reference_b7###] and Zhang et al. [17 ###reference_b17###] all proposed differentiable renderer and employed the gradient descent optimization method to refine the pose using this module.\nThis implies that an efficient and robust optimization-based registration method is still beneficial to the existing registration framework.\nIn this work, we proposed a coarse-to-fine benchmark for 2D/3D registration. The framework uses CMA-ES as the optimizer and is divided into two resolutions for pose estimation.\nWe validate our proposed framework on vertebral data, demonstrating its ability to achieve high registration accuracy.\nOur paper is organized as follows: Sect. 2 ###reference_### provides an overview of related work, Sect. 3 ###reference_### describes the proposed method. And in Sect. 4 ###reference_###, we present our experimental setup, datasets, quantitative and qualitative results, and analysis.\nThis work can be seen as a supplementary note on the optimization-based methods we used in [1 ###reference_b1###, 2 ###reference_b2###]."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related Work",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Intensity-Based 2D/3D Registration",
21
+ "text": "In intensity-based methods, a simulated X-ray image, referred to as Digitally Reconstructed Radiograph (DRR), is derived from the 3-D X-ray attenuation map by simulating the attenuation of virtual X-rays.\nAn optimizer is employed to maximize an intensity-based similarity measure, such as normalized cross-correlation (NCC) and mutual information, between the DRR and X-ray images. Common mathematical optimization methods for 2D/3D registration include Powell-Brent [14 ###reference_b14###], Nelder-Mead, nonlinear conjugate gradient, gradient descent, evolutionary strategy, etc [16 ###reference_b16###].\nIt is widely recognized that intensity-based methods [8 ###reference_b8###] can achieve high registration accuracy. However, these methods also have two significant drawbacks: long computation time and limited capture range. In recent years, many literatures have tried to use neural networks as pose initialization for intensity-based methods [4 ###reference_b4###, 6 ###reference_b6###, 17 ###reference_b17###]. Learning-based methods can often initialize poses near the ground truth, which makes up for the shortcomings of the smaller capture range of intensity-based methods."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Feature-Based 2D/3D Registration",
27
+ "text": "Feature-based methods calculate similarity measures efficiently from geometric features extracted from the images, e.g., corners, lines and segmentations, and therefore have a higher computational efficiency than intensity-based methods. One potential drawback of feature-based methods is that they heavily rely on accurate detection of geometric features, which in itself can be a challenging task.\nErrors from the feature detection step are inevitably propagated into the registration result, making feature-based methods in general less accurate. Errors from the feature detection step inevitably propagate into the registration result, generally compromising the accuracy of feature-based methods."
28
+ },
29
+ {
30
+ "section_id": "3",
31
+ "parent_section_id": null,
32
+ "section_name": "Methodology",
33
+ "text": "Our registration framework is divided into two stages: coarse registration and fine registration, both of which use CMA-ES as the optimizer. Coarse registration is performed on 4downsampled images (256256), and fine registration is performed on the original full-resolution (10241024). In the coarse registration stage, we use multi-scale normalized cross-correlation (mNCC) as the similarity function, while the fine registration method uses gradient correlation (GC).\nIn the following part of this section, we will first introduce the problem formulation of this task and make a brief introduction on the adopted optimizer, CMA-ES. We will also discuss the similarity functions we used for the proposed framework."
34
+ },
35
+ {
36
+ "section_id": "3.1",
37
+ "parent_section_id": "3",
38
+ "section_name": "Problem Formulation",
39
+ "text": "The problem of rigid 2D/3D registration can be formulated as follows: Given a fixed 2D X-ray image and a moving 3D volume as input. We aim to seek an unknown camera pose such that the image projected from is as similar as possible to the acquired image . It is important to note that in this study, the three-dimensional volume used is a segmentation of vertebra, as bone is a rigid object with higher attenuation than soft tissue, making it more suitable for feature extraction."
40
+ },
41
+ {
42
+ "section_id": "3.2",
43
+ "parent_section_id": "3",
44
+ "section_name": "Optimizer",
45
+ "text": "CMA-ES is an evolutionary strategy designed for continuous optimization problems. It is a probabilistic-based optimization method that simulates natural selection and genetic mechanisms of biological evolution to optimize parameters.\nThe core idea of CMA-ES is to search for the optimal solution by gradually adjusting the probability distribution of the parameters. In each generation, CMA-ES generates a set of candidate solutions based on the current probability distribution and updates the distribution according to their performance. By iteratively optimizing the probability distribution, CMA-ES can effectively explore the search space and find better solutions.\nCMA-ES performs well in handling high-dimensional, non-convex optimization problems and exhibits robustness and convergence properties compared to other optimization algorithms. A public\nimplementation of CMA-ES can be found here1\u2020\u20201https://github.com/CyberAgentAILab/cmaes\nIn our framework, if the current similarity function is below a predefined threshold, the registration is considered to have converged. We also set up an additional early stopping strategy. If the minimum value of the similarity loss hasn\u2019t been updated after 100 generations of sampling, the registration process will be terminated immediately."
46
+ },
47
+ {
48
+ "section_id": "3.3",
49
+ "parent_section_id": "3",
50
+ "section_name": "Similarity Functions",
51
+ "text": ""
52
+ },
53
+ {
54
+ "section_id": "3.3.1",
55
+ "parent_section_id": "3.3",
56
+ "section_name": "3.3.1 Multi-scale normalized cross-correlation.",
57
+ "text": "Normalized cross correlation (N-CC) is a widely-used metric for image similarity measurement.\nIt can be expressed as follows:\nwhere and are two images of size . , represents the standard deviations of and , , denote the mean of the image intensities.The NCC calculated directly on the entire image is commonly known as global NCC [16 ###reference_b16###].\nPatch-based NCC [8 ###reference_b8###] is also a common similarity function, which is also called local NCC. In this work, we only consider square shaped patches, defined by the patch center(,) and a radius, . And it can be formulated as:\n, represents the standard deviations of the corresponding patches in and .\nMulti-scale NCC is a hybrid metric that combines the two aforementioned metrics. Assuming that the image is divided into K patches, the multi-scale NCC can be mathematically expressed as:\nis a hyperparameter and in this work we set it to 1. As for patch radius , we set it to 6 during experiment .\nCompared with global NCC, multi-scale NCC is more sensitive to texture details. And it is more stable than local NCC and less likely to fall into local minima. A public implementation of mNCC can be found here2\u2020\u20202 https://github.com/eigenvivek/DiffDRR.\nIn addition, we also considered using the intensity variance weighting method to give weight to each patch like some previous works [8 ###reference_b8###, 10 ###reference_b10###]. However, we discovered that this approach led to an unstable registration effect, especially noticeable in images with high noise levels or complicated anatomical regions like the cervical vertebrae."
58
+ },
59
+ {
60
+ "section_id": "3.3.2",
61
+ "parent_section_id": "3.3",
62
+ "section_name": "3.3.2 Gradient correlation.",
63
+ "text": "Gradient-based measures initially transform and by differentiation. We utilize horizontal and vertical Sobel templates to generate gradient images, and , representing the derivative of fluoroscopy intensity along the two orthogonal axes of the image. Subsequently, normalized cross correlation is then calculated between and and between and . The final value of this measure is the average of these normalized cross correlations.\nGC exhibits a sharp peak at the ground truth camera pose, but its landscape contains numerous local minima. On the other hand, mNCC is substantially smoother but has less defined peaks. As a result, we adopt mNCC as the similarity function during the coarse registration stage and subsequently replace it with GC during the fine registration stage."
64
+ },
65
+ {
66
+ "section_id": "4",
67
+ "parent_section_id": null,
68
+ "section_name": "Experiments",
69
+ "text": ""
70
+ },
71
+ {
72
+ "section_id": "4.1",
73
+ "parent_section_id": "4",
74
+ "section_name": "Dataset and Experiment Environment",
75
+ "text": ""
76
+ },
77
+ {
78
+ "section_id": "4.1.1",
79
+ "parent_section_id": "4.1",
80
+ "section_name": "4.1.1 Dataset.",
81
+ "text": "We employed fifteen spine CT scans to evaluate the performance of the proposed method, comprising five cervical spine scans, five thoracic spine scans, and five lumbar spine scans.\nEach scan has a corresponding X-ray with Postero-Anterior (PA) views.\nFor coarse registration, the size of the x-ray used is 256 256 ( downsampled), and fine registration uses the original image resolution."
82
+ },
83
+ {
84
+ "section_id": "4.1.2",
85
+ "parent_section_id": "4.1",
86
+ "section_name": "4.1.2 Image pre-processing.",
87
+ "text": "For each X-ray, the ground truth extrinsic matrix is provided and a logarithmic transformation following Beer-Lambert\u2019s law is applied to invert the intensity of the image.\nThe spines were segmented using an automatic method in [18 ###reference_b18###].\nTo ensure consistent and standardized analysis, we employed a resampling technique on the CT scans, resulting in an isotropic spacing of 1.0 mm.\nAdditionally, we applied a cropping or padding process along each dimension, yielding volumes of size , with the spine ROI approximately positioned at the center of the volume."
88
+ },
89
+ {
90
+ "section_id": "4.1.3",
91
+ "parent_section_id": "4.1",
92
+ "section_name": "4.1.3 Experiment settings.",
93
+ "text": "The camera intrinsic parameters used in the experiments simulate a Perlove PLX118F mobile C-arm imaging device which generates the X-ray images in this work.\nThe device has an isotropic pixel spacing of 0.19959 mm/pixel, a source-to-detector distance of 1011.7 mm, and a detector dimension of 1024 1024 pixels.\nFor each subject, twenty registrations were performed using initial poses sampled from normal distributions of for rotations in degrees and for translations in millimeters."
94
+ },
95
+ {
96
+ "section_id": "4.2",
97
+ "parent_section_id": "4",
98
+ "section_name": "Evaluation Metrics",
99
+ "text": "Following the standardized evaluation methods in 2D/3D registration [11 ###reference_b11###], we report mean target registration error (mTRE) in 50th, 75th, and 95th percentiles (in millimeters).\nmTRE is defined as the average 3D distance between the projections obtained from the ground truth camera poses and the estimated camera poses. Suppose we have a three-dimensional point set consisting of anatomical landmarks, mTRE can be represented as:\nWe also evaluate the errors in rotation and translation between estimated and ground truth respectively."
100
+ },
101
+ {
102
+ "section_id": "4.3",
103
+ "parent_section_id": "4",
104
+ "section_name": "Results",
105
+ "text": "The numerical results of the registration pose error are shown in Table. 1 ###reference_###. Because our experiments were initiated with rather substantial offsets, the mean errors were significantly skewed by the presence of large-scale outliers that do not truly reflect the actual distribution. The initial mTRE is , , and at the 95th, 75th, and 50th percentiles respectively. Because the sizes of different anatomical parts of the spine are different, the mTRE obtained from the experimental results on cervical, thoracic and lumbar spine data varies greatly. It will be more intuitive to directly compare the errors of each component in rotation and translation. It is worth noting that although the mean values of errors in some directions of rotation and translation became larger after registration in our experiments, this was actually affected by outliers.\nTaking the total errors in the three directions of rotation (rx, ry, rz) as an example: their initial errors are , and , while the errors after registration are , and .\nAnd the medians of the initial errors are 3.10, 2.90, and 3.16, while the medians of the registration results are noticeably smaller, measuring 0.52, 1.32, and 0.24 respectively.\n###table_1### The performance of our framework on lumbar and thoracic spine data is very convincing, which hints at the feasibility of this framework in clinical application. But we also noticed its unsatisfactory performance in cervical spine data.\nWe believe that this is mainly due to two reasons: 1)The cervical spine area contains a greater number of joints and bone structures, and has a wider range of motion compared to other spinal regions. As a result, cervical spine images may exhibit a more intricate shape and structure, and the registration process must consider a broader range of variations and uncertainties.\nIn contrast, the lumbar and thoracic regions are comparatively larger and have relatively simple structures, so the registration process may be easier.\n2)The patient\u2019s head direction was not entirely consistent during preoperative and intraoperative imaging, leading to deformations in certain parts of the cervical spine that exceeded the 6 DoF rigid body transformation limit.\nWe can mitigate the impact of jaws with significant shape differences in the image by partitioning the region of interest.\nHowever, in such cases, adopting regularization of rigid bodies for cervical spine registration may result in a higher likelihood of falling into local minima."
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Conclusion",
+ "text": "Single-view 2D/3D registration inherently has ambiguities in translation (tz) in the depth direction and out-of-plane rotation (rx and ry).\nOur method cannot avoid this defect, but other research results [4 ###reference_b4###, 5 ###reference_b5###] show that combining optimization-based methods with learning-based methods can effectively alleviate this problem.\nOur experiments in previous work [1 ###reference_b1###] also substantiate this conclusion.\nIn addition, we aspire to develop a more elegant and rational solution to address this problem in future endeavors.\nIn this paper, we propose a multi-resolution 2D/3D registration algorithm using the CMA-ES algorithm. We verified the effectiveness of this framework using paired CT and X-ray images from three different anatomical sites (lumbar, thoracic, and cervical vertebrae) in the context of spinal surgical navigation.\nOur experimental results have yielded highly competitive outcomes. We aim for this method to serve as a benchmark, coupled with learning-based registration methods, and to potentially be implemented in clinical surgical settings in the future."
+ },
+ {
+ "section_id": "6",
+ "parent_section_id": null,
+ "section_name": "acknowledgement",
+ "text": "This work was supported in part by Bond-Star Medical Technology Co., Ltd..\nWe thank Sheng Zhang, Junxian Wu and Ziyue Zhang for their constructive suggestions at several stages of the project."
+ }
+ ],
+ "appendix": [],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>2D/3D registration performance on cervical, lumbar and thoracic spine data. This evaluation includes measurement of the errors in rotation and translation, the mean Target Registration Error (mTRE) at the 50th, 75th, and 95th percentiles.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T1.22.22\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.4.4.4\">\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_border_r ltx_border_tt\" id=\"S4.T1.4.4.4.5\" rowspan=\"2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\"></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_r ltx_border_tt\" id=\"S4.T1.4.4.4.6\" rowspan=\"2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.4.4.6.1\">Subject</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_l ltx_border_r ltx_border_tt\" colspan=\"3\" id=\"S4.T1.1.1.1.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">mTRE(mm)\n</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_tt\" colspan=\"4\" id=\"S4.T1.3.3.3.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">Rotation Error(<sup class=\"ltx_sup\" id=\"S4.T1.3.3.3.3.1\">\u2218</sup>)\n</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_left ltx_border_l ltx_border_tt\" colspan=\"4\" id=\"S4.T1.4.4.4.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">Translation Error(mm)\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.22.22.23.1\">\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.22.22.23.1.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">95th</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.22.22.23.1.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">75th</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.22.22.23.1.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">50th</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.22.22.23.1.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">rotate.</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S4.T1.22.22.23.1.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">rx</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S4.T1.22.22.23.1.6\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">ry</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.22.22.23.1.7\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">rz</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.22.22.23.1.8\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">trans.</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S4.T1.22.22.23.1.9\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">tx</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S4.T1.22.22.23.1.10\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">ty</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S4.T1.22.22.23.1.11\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">tz</td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"S4.T1.5.5.5\">\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.5.5.5.2\" rowspan=\"5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.5.5.5.2.1\">Cervical</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_r ltx_border_tt\" id=\"S4.T1.5.5.5.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">\n1</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.5.5.5.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">136.8</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.5.5.5.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">101.1</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.5.5.5.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">74.2</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.5.5.5.6\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">34.9</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_tt\" id=\"S4.T1.5.5.5.7\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">21.9</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_tt\" id=\"S4.T1.5.5.5.8\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">6.7</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.5.5.5.9\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">6.4</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.5.5.5.10\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">13.5</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_tt\" id=\"S4.T1.5.5.5.11\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">2.1</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_tt\" id=\"S4.T1.5.5.5.12\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">4.8</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_tt\" id=\"S4.T1.5.5.5.13\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">6.7</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.6.6.6\">\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_r\" id=\"S4.T1.6.6.6.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">\n2</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.6.6.6.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">319.3</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.6.6.6.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">266.4</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.6.6.6.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">213.6</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.6.6.6.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">27.7</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.6.6.6.6\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">6.3</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.6.6.6.7\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">13.8</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.6.6.6.8\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">7.5</td>\n<td 
class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.6.6.6.9\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">47.5</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.6.6.6.10\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">7.0</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.6.6.6.11\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">20.1</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.6.6.6.12\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">20.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.7.7.7\">\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_r\" id=\"S4.T1.7.7.7.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">\n3</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.7.7.7.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">369.0</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.7.7.7.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">332.0</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.7.7.7.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">292.0</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.7.7.7.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">34.1</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.7.7.7.6\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">9.7</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.7.7.7.7\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">12.3</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.7.7.7.8\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">12.1</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.7.7.7.9\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">31.4</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.7.7.7.10\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">5.0</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.7.7.7.11\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">8.8</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.7.7.7.12\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">17.7</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.8.8.8\">\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_r\" id=\"S4.T1.8.8.8.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">\n4</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.8.8.8.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">367.5</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.8.8.8.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">335.0</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.8.8.8.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">291.8</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.8.8.8.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">38.1</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.8.8.8.6\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">8.9</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.8.8.8.7\" 
style=\"padding-left:1.7pt;padding-right:1.7pt;\">19.6</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.8.8.8.8\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">9.6</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.8.8.8.9\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">58.3</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.8.8.8.10\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">16.8</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.8.8.8.11\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">10.6</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.8.8.8.12\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">30.9</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.9.9.9\">\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_r\" id=\"S4.T1.9.9.9.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">\n5</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.9.9.9.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">306.5</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.9.9.9.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">275.5</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.9.9.9.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">240.8</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.9.9.9.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">27.3</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.9.9.9.6\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">6.0</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.9.9.9.7\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">17.5</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.9.9.9.8\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">3.8</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.9.9.9.9\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">46.2</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.9.9.9.10\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">3.6</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.9.9.9.11\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">22.0</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.9.9.9.12\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">20.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.10.10.10\">\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_border_r\" id=\"S4.T1.10.10.10.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\"></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_r\" id=\"S4.T1.10.10.10.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">\n1-5</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.10.10.10.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">296.5</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.10.10.10.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">251.9</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.10.10.10.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">191.6</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r 
ltx_align_center ltx_border_r\" id=\"S4.T1.10.10.10.6\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">32.4</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.10.10.10.7\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">10.6</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.10.10.10.8\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">14.0</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.10.10.10.9\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">7.9</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.10.10.10.10\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">39.4</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.10.10.10.11\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">7.4</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.10.10.10.12\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">12.7</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.10.10.10.13\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">19.2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.11.11.11\">\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.11.11.11.2\" rowspan=\"5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.11.11.11.2.1\">Thoracic</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_r ltx_border_tt\" id=\"S4.T1.11.11.11.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">\n6</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.11.11.11.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">20.5</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.11.11.11.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">20.4</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.11.11.11.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">20.2</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.11.11.11.6\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">1.4</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_tt\" id=\"S4.T1.11.11.11.7\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.3</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_tt\" id=\"S4.T1.11.11.11.8\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.9</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.11.11.11.9\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.2</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.11.11.11.10\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">7.7</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_tt\" id=\"S4.T1.11.11.11.11\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.2</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_tt\" id=\"S4.T1.11.11.11.12\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">1.7</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_tt\" id=\"S4.T1.11.11.11.13\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">5.8</td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"S4.T1.12.12.12\">\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_r\" id=\"S4.T1.12.12.12.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">\n7</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.12.12.12.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">25.6</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.12.12.12.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">25.1</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.12.12.12.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">24.0</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.12.12.12.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">1.4</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.12.12.12.6\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.3</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.12.12.12.7\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">1.1</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.12.12.12.8\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.1</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.12.12.12.9\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">6.5</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.12.12.12.10\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.3</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.12.12.12.11\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">1.0</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.12.12.12.12\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">5.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.13.13.13\">\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_r\" id=\"S4.T1.13.13.13.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">\n8</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.13.13.13.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">28.0</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.13.13.13.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">20.4</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.13.13.13.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">20.4</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.13.13.13.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">3.2</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.13.13.13.6\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.2</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.13.13.13.7\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">2.5</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.13.13.13.8\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.6</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.13.13.13.9\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">9.4</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.13.13.13.10\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.2</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" 
id=\"S4.T1.13.13.13.11\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">3.1</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.13.13.13.12\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">6.1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.14.14.14\">\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_r\" id=\"S4.T1.14.14.14.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">\n9</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.14.14.14.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">46.8</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.14.14.14.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">31.3</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.14.14.14.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">30.6</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.14.14.14.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">6.3</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.14.14.14.6\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">2.3</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.14.14.14.7\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">3.1</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.14.14.14.8\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.9</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.14.14.14.9\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">13.4</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.14.14.14.10\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.9</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.14.14.14.11\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">6.2</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.14.14.14.12\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">6.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.15.15.15\">\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_r\" id=\"S4.T1.15.15.15.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">\n10</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.15.15.15.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">10.2</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.15.15.15.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">10.2</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.15.15.15.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">10.1</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.15.15.15.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">2.5</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.15.15.15.6\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">1.6</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.15.15.15.7\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.8</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.15.15.15.8\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.2</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.15.15.15.9\" 
style=\"padding-left:1.7pt;padding-right:1.7pt;\">4.9</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.15.15.15.10\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.4</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.15.15.15.11\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">1.6</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.15.15.15.12\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">2.9</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.16.16.16\">\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_border_r\" id=\"S4.T1.16.16.16.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\"></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_r\" id=\"S4.T1.16.16.16.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">\n6-10</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.16.16.16.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">23.4</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.16.16.16.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">19.1</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.16.16.16.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">16.4</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.16.16.16.6\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">3.0</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.16.16.16.7\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.9</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.16.16.16.8\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">1.7</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.16.16.16.9\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.4</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.16.16.16.10\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">8.4</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.16.16.16.11\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.4</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.16.16.16.12\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">2.7</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.16.16.16.13\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">5.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.17.17.17\">\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.17.17.17.2\" rowspan=\"5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.17.17.17.2.1\">Lumbar</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_r ltx_border_tt\" id=\"S4.T1.17.17.17.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">\n11</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.17.17.17.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">36.4</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.17.17.17.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">35.4</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.17.17.17.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">35.2</td>\n<td 
class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.17.17.17.6\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">1.5</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_tt\" id=\"S4.T1.17.17.17.7\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.2</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_tt\" id=\"S4.T1.17.17.17.8\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">1.3</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.17.17.17.9\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.1</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.17.17.17.10\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">14.6</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_tt\" id=\"S4.T1.17.17.17.11\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.2</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_tt\" id=\"S4.T1.17.17.17.12\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">2.1</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_tt\" id=\"S4.T1.17.17.17.13\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">12.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.18.18.18\">\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_r\" id=\"S4.T1.18.18.18.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">\n12</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.18.18.18.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">13.5</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.18.18.18.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">13.4</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.18.18.18.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">13.0</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.18.18.18.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">2.3</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.18.18.18.6\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.9</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.18.18.18.7\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">1.1</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.18.18.18.8\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.2</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.18.18.18.9\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">2.1</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.18.18.18.10\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.4</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.18.18.18.11\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">1.0</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.18.18.18.12\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.7</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.19.19.19\">\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_r\" id=\"S4.T1.19.19.19.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">\n13</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.19.19.19.2\" 
style=\"padding-left:1.7pt;padding-right:1.7pt;\">53.0</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.19.19.19.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">25.2</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.19.19.19.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">16.1</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.19.19.19.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">6.0</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.19.19.19.6\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">2.7</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.19.19.19.7\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">2.9</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.19.19.19.8\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.5</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.19.19.19.9\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">15.1</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.19.19.19.10\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">1.9</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.19.19.19.11\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">9.3</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.19.19.19.12\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">3.8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.20.20.20\">\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_r\" id=\"S4.T1.20.20.20.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">\n14</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.20.20.20.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">135.5</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.20.20.20.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">128.7</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.20.20.20.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">113.2</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.20.20.20.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">10.1</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.20.20.20.6\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">1.7</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.20.20.20.7\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">6.8</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.20.20.20.8\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">1.5</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.20.20.20.9\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">29.6</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.20.20.20.10\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">1.3</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.20.20.20.11\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">9.6</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.20.20.20.12\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">18.7</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.21.21.21\">\n<td 
class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_r\" id=\"S4.T1.21.21.21.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">\n15</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.21.21.21.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">12.9</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.21.21.21.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">12.9</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.21.21.21.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">12.9</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.21.21.21.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">1.2</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.21.21.21.6\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.3</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.21.21.21.7\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.7</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.21.21.21.8\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.2</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.21.21.21.9\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">3.9</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.21.21.21.10\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.5</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.21.21.21.11\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">1.1</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.21.21.21.12\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">2.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.22.22.22\">\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_border_r\" id=\"S4.T1.22.22.22.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\"></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_r\" id=\"S4.T1.22.22.22.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">\n11-15</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.22.22.22.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">47.0</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.22.22.22.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">22.3</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.22.22.22.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">14.3</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.22.22.22.6\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">4.2</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.22.22.22.7\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">1.2</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.22.22.22.8\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">2.6</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.22.22.22.9\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.5</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S4.T1.22.22.22.10\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">13.1</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.22.22.22.11\" 
style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.9</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.22.22.22.12\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">4.7</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S4.T1.22.22.22.13\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">7.6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.22.22.24.2\">\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_bb ltx_border_r ltx_border_tt\" id=\"S4.T1.22.22.24.2.1\" rowspan=\"2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.22.22.24.2.1.1\">Total</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_r ltx_border_tt\" id=\"S4.T1.22.22.24.2.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">Initial</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.22.22.24.2.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">118.7</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.22.22.24.2.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">98.6</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.22.22.24.2.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">75.4</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.22.22.24.2.6\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">11.6</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_tt\" id=\"S4.T1.22.22.24.2.7\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">4.0</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_tt\" id=\"S4.T1.22.22.24.2.8\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">3.8</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.22.22.24.2.9\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">3.8</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.22.22.24.2.10\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">23.8</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_tt\" id=\"S4.T1.22.22.24.2.11\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">8.1</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_tt\" id=\"S4.T1.22.22.24.2.12\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">7.9</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_tt\" id=\"S4.T1.22.22.24.2.13\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">7.8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.22.22.25.3\">\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_bb ltx_border_r\" id=\"S4.T1.22.22.25.3.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">Result</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_bb ltx_border_r\" id=\"S4.T1.22.22.25.3.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">114.6</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_bb ltx_border_r\" id=\"S4.T1.22.22.25.3.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">52.2</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_bb ltx_border_r\" id=\"S4.T1.22.22.25.3.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">19.8</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center 
ltx_border_bb ltx_border_r\" id=\"S4.T1.22.22.25.3.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">13.2</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_bb\" id=\"S4.T1.22.22.25.3.6\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">4.2</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_bb\" id=\"S4.T1.22.22.25.3.7\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">6.1</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_bb ltx_border_r\" id=\"S4.T1.22.22.25.3.8\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">2.9</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_bb ltx_border_r\" id=\"S4.T1.22.22.25.3.9\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">20.3</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_bb\" id=\"S4.T1.22.22.25.3.10\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">2.9</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_bb\" id=\"S4.T1.22.22.25.3.11\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">6.7</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_bb\" id=\"S4.T1.22.22.25.3.12\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">10.7</td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "Table 1: 2D/3D registration performance on cervical, lumbar and thoracic spine data. This evaluation includes measurement of the errors in rotation and translation, the mean Target Registration Error (mTRE) at the 50th, 75th, and 95th percentiles."
+ }
+ },
+ "image_paths": {},
+ "validation": true,
+ "references": [],
+ "url": "http://arxiv.org/html/2402.05642v3"
+ }
20240819/2403.01888v3.json ADDED
@@ -0,0 +1,437 @@
+ {
+ "title": "Fast Benchmarking of Asynchronous Multi-Fidelity Optimization on Zero-Cost Benchmarks",
+ "abstract": "While deep learning has celebrated many successes, its results often hinge on the meticulous selection of hyperparameters (HPs).\nHowever, the time-consuming nature of deep learning training makes HP optimization (HPO) a costly endeavor, slowing down the development of efficient HPO tools.\nWhile zero-cost benchmarks, which provide performance and runtime without actual training, offer a solution for non-parallel setups, they fall short in parallel setups as each worker must communicate its queried runtime to return its evaluation in the exact order.\nThis work addresses this challenge by introducing a user-friendly Python package that facilitates efficient parallel HPO with zero-cost benchmarks.\nOur approach calculates the exact return order based on the information stored in file system, eliminating the need for long waiting times and enabling much faster HPO evaluations.\nWe first verify the correctness of our approach through extensive testing and the experiments with 6 popular HPO libraries show its applicability to diverse libraries and its ability to achieve over 1000x speedup compared to a traditional approach.\nOur package can be installed via pip install mfhpo-simulator.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "1 Introduction",
+ "text": "Hyperparameter (HP) optimization of deep learning (DL) is crucial for strong performance (Zhang et al.,, 2021 ###reference_b33###; Sukthanker et al.,, 2022 ###reference_b27###; Wagner et al.,, 2022 ###reference_b28###) and it surged the research on HP optimization (HPO) of DL.\nHowever, due to the heavy computational nature of DL, HPO is often prohibitively expensive and both energy and time costs are not negligible.\nThis is the driving force behind the emergence of zero-cost benchmarks such as tabular and surrogate benchmarks, which enable yielding the (predictive) performance of a specific HP configuration in a small amount of time (Eggensperger et al.,, 2015 ###reference_b8###, 2021 ###reference_b9###; Arango et al.,, 2021 ###reference_b2###; Pfisterer et al.,, 2022 ###reference_b25###; Bansal et al.,, 2022 ###reference_b4###).\nAlthough these benchmarks effectively reduce the energy usage and the runtime of experiments in many cases, experiments considering runtimes between parallel workers may not be easily benefited as seen in Figure LABEL:main:methods:subfig:compress.\nFor example, multi-fidelity optimization (MFO) (Kandasamy et al.,, 2017 ###reference_b12###) has been actively studied recently due to its computational efficiency (Jamieson and Talwalkar,, 2016 ###reference_b11###; Li et al.,, 2017 ###reference_b16###; Falkner et al.,, 2018 ###reference_b10###; Awad et al.,, 2021 ###reference_b3###).\nTo further leverage efficiency, many of these MFO algorithms are designed to maintain their performance under multi-worker asynchronous runs (Li et al.,, 2020 ###reference_b17###; Falkner et al.,, 2018 ###reference_b10###; Awad et al.,, 2021 ###reference_b3###).\nHowever, to preserve the return order of each parallel run, a na\u00efve approach involves making each worker wait for the actual DL training to run (see Figure 1 ###reference_### (Left)).\nThis time is typically returned as cost of a query by zero-cost benchmarks, leading to significant time and energy waste, as each worker must wait for a potentially long duration.\n###figure_1### To address this problem, we introduce algorithms to not wait for large time durations and yet return the correct order of evaluations for each worker via file system synchronization.\nThis is provided as an open-sourced easy-to-use Python wrapper (see Figure 1 ###reference_### (Right) for the simplest codeblock) for existing benchmarking code.\nAlthough our wrapper should be applicable to an arbitrary HPO library and yield the correct results universally, it is impossible to perfectly realize it due to different overheads by different optimizers and different multi-core processing methods such as multiprocessing and server-based synchronization.\nFor this reason, we limit our application scope to HPO methods for zero-cost benchmarks with almost no benchmark query overheads.\nFurthermore, we provide an option to simulate asynchronous optimization over multiple cores only with a single core by making use of the ask-and-tell interface 111https://optuna.readthedocs.io/en/stable/tutorial/20_recipes/009_ask_and_tell.html ###reference_torial/20_recipes/009_ask_and_tell.html###.\nIn our experiments, we first empirically verify our implementation is correct using several edge cases.\nThen we use various open source software (OSS) HPO libraries such as SMAC3 (Lindauer et al.,, 2022 ###reference_b20###) and Optuna (Akiba et al.,, 2019 ###reference_b1###) on zero-cost benchmarks and we compare the changes in the performance based on the number of 
parallel workers.\nThe experiments demonstrated that our wrapper (see Figure 1 ###reference_### (Right)) finishes all the experiments times faster than the na\u00efve simulation (see Figure 1 ###reference_### (Left)).\nThe implementation for the experiments is also publicly available 222https://github.com/nabenabe0928/mfhpo-simulator-experiments ###reference_lator-experiments###."
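A minimal sketch of the contrast drawn in Figure 1 (our paraphrase of the two strategies, not the package's actual API): the naïve simulation sleeps for the queried runtime, whereas the wrapper only books it on a virtual clock and lets the file-system bookkeeping of Section 3 enforce the return order.

```python
import time

def naive_worker(benchmark, config):
    # Figure 1 (Left): block for the queried runtime so parallel workers
    # return in a realistic order -- wasteful on zero-cost benchmarks.
    loss, runtime = benchmark(config)
    time.sleep(runtime)
    return loss

def simulated_worker(benchmark, config, worker_clock, worker_id):
    # Figure 1 (Right), in spirit: advance a virtual per-worker clock
    # instead of sleeping; no wall-clock time is wasted.
    loss, runtime = benchmark(config)
    worker_clock[worker_id] += runtime
    return loss
```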
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "2 Background",
+ "text": "In this section, we define our problem setup.\nThroughout the paper, we assume minimization problems of an objective function 333As mentioned in Appendix B ###reference_###, we can also simulate with multi-objective optimization and constrained optimization. defined on the search space where is the domain of the -th HP.\nFurthermore, we define the (predictive) actual runtime function of the objective function given an HP configuration .\nAlthough and could involve randomness, we only describe the deterministic version for the notational simplicity.\nIn this paper, we use for the -th sample and for the -th observation and we would like to note that they are different notations.\nIn asynchronous optimization, the sampling order is not necessarily the observation order, as certain evaluations can take longer.\nFor example, if we have two workers and the runtime for the first two samples are and , will be observed first, yielding and ."
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "2.1 Asynchronous Optimization on Zero-Cost Benchmarks",
+ "text": "Assume we have a zero-cost benchmark that we can query and in a negligible amount of time, the -th HP configuration is sampled from a policy where is a set of observations, and we have a set of parallel workers where each worker is a wrapper of and .\nLet a mapping be an index specifier of which worker processed the -th sample and be a set of the indices of samples the -th worker processed.\nWhen we define the sampling overhead for the -th sample as , the (simulated) runtime of the -th worker is computed as follows:\nNote that includes the benchmark query overhead , but we consider it zero, i.e. .\nIn turn, the ()-th sample will be processed by the worker that will be free first, and thus the index of the worker for the ()-th sample is specified by .\nOn top of this, each worker needs to free its evaluation when satisfies where is the sampling elapsed time of the incoming sample .\nThe problems of this setting are that (1) the policy is conditioned on , which is why the order of the observations must be preserved, and (2) each worker must wait for the other workers to match the order to be realistic.\nWhile an obvious approach is to let each worker wait for the queried runtime as in Figure 1 ###reference_### (Left), it is a waste of energy and time.\nTo address this problem, we need a wrapper as in Figure 1 ###reference_### (Right)."
+ },
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "2.2 Related Work",
+ "text": "Although there have been many HPO benchmarks invented for MFO such as HPOBench (Eggensperger et al.,, 2021 ###reference_b9###), NASLib (Mehta et al.,, 2022 ###reference_b21###), and JAHS-Bench-201 (Bansal et al.,, 2022 ###reference_b4###), none of them provides a module to allow researchers to simulate runtime internally.\nWe defer the survey by Li and Li, (2024 ###reference_b15###) for the details of MFO.\nOther than HPO benchmarks, many HPO frameworks handling MFO have also been developed so far such as Optuna (Akiba et al.,, 2019 ###reference_b1###)), SMAC3 (Lindauer et al.,, 2022 ###reference_b20###), Dragonfly (Kandasamy et al.,, 2020 ###reference_b13###), and RayTune (Liaw et al.,, 2018 ###reference_b19###).\nHowever, no framework above considers the simulation of runtime.\nAlthough HyperTune (Li et al.,, 2022 ###reference_b18###) and SyneTune (Salinas et al.,, 2022 ###reference_b26###) are internally simulating the runtime, we cannot simulate optimizers of interest if the optimizers are not introduced in the packages.\nThis restricts researchers in simulating new methods, hindering experimentation and fair comparison.\nFurthermore, their simulation backend assumes that optimizers take the ask-and-tell interface and it requires the reimplementation of optimizers of interest in their codebase.\nSince reimplementation is time-consuming and does not guarantee its correctness without tests, it is helpful to have an easy-to-use Python wrapper around existing codes.\nNote that this work extends previous work (Watanabe, 2023a, ###reference_b29###), by adding the handling of optimizers with non-negligible overhead and the empirical verification of the simulation algorithm.\n###figure_2### ###figure_3###"
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "3 Automatic Waiting Time Scheduling Wrapper",
+ "text": "As an objective function may take a random seed and fidelity parameters in practice, we denote a set of the arguments for the -th query as .\nIn this section, a job means to allocate the -th queried HP configuration to a free worker and obtain its result .\nBesides that, we denote the -th chronologically ordered result as .\nOur wrapper outlined in Algorithm 1 is required to satisfy the following conditions:\nThe -th result comes earlier than the -th result for all ,\nThe wrapper recognizes each worker and allocates a job to the exact worker even when using multiprocessing (e.g. joblib and dask) and multithreading (e.g. concurrent.futures),\nThe evaluation of each job can be resumed in MFO, and\nEach worker needs to be aware of its own sampling overheads.\nNote that an example of the restart of evaluation could be when we evaluate DL model instantiated with HP for epochs and if we want to then evaluate the same HP configuration for epochs, we start the training of this model from the st epoch instead of from scratch using the intermediate state.\nLine 4 ###reference_4### checks this condition and Line 5 ###reference_5### ensures the intermediate state to restart exists before the evaluation.\nTo achieve these features, we chose to share the required information via the file system and create the following JSON files that map:\nfrom a thread or process ID of each worker to a worker index ,\nfrom a worker index to its timestamp immediately after the worker is freed,\nfrom a worker index to its (simulated) cumulative runtime , and\nfrom the -th configuration to a list of intermediate states .\nAs our wrapper relies on file system, we need to make sure that multiple workers will not edit the same file at the same time.\nFurthermore, usecases of our wrapper are not really limited to multiprocessing or multithreading that spawns child workers but could be file-based synchronization.\nHence, we use fcntl to safely acquire file locks.\nWe additionally provide an approach that also extends to the ask-and-tell interface by providing a Single-Core Simulator (SCS) for single-core scenarios (details omitted for brevity).\nWhile the Multi-Core Simulator (MCS) wraps optimizers running with cores or workers, SCS runs only on a single core and simulates a -worker run.\nUnlike previous work (Watanabe, 2023a, ###reference_b29###), Algorithm 1 ###reference_### handles expensive optimizers by checking individual workers\u2019 wait times during the latest sampling measured by in Line 12.\nHowever, this check complicates race conditions, making it hard to guarantee the correctness of implementation.\nFor this reason, empirical verification through edge cases is provided in the next section."
34
+ },
35
+ {
36
+ "section_id": "4",
37
+ "parent_section_id": null,
38
+ "section_name": "4 Empirical Algorithm Verification on Test Cases",
39
+ "text": "In this section, we verify our algorithm using some edge cases.\nThroughout this section, we use the number of workers .\nWe also note that our wrapper behavior depends only on returned runtime at each iteration in a non-continual setup and it is sufficient to consider only runtime and sampling time at each iteration.\nTherefore, we use a so-called fixed-configuration sampler, which defines a sequence of HP configurations and their corresponding runtimes at the beginning and samples from the fixed sequence iteratively.\nMore formally, assume we would like to evaluate HP configurations, then the sampler first generates and one of the free workers receives an HP configuration at the -th sampling that leads to the runtime of .\nFurthermore, we use two different optimizers to simulate the sampling cost:\nExpensive Optimizer: that sleeps for seconds as a sampling overhead before giving to a worker where is the size of a set of observations and is a proportionality constant, and\nCheap Optimizer: that gives to a worker immediately without a sampling overhead.\nIn principle, the results of each test case are uniquely determined by a pair of an optimizer and a sequence of runtimes.\nHence, we define such pairs at the beginning of each section."
40
+ },
41
+ {
42
+ "section_id": "4.1",
43
+ "parent_section_id": "4",
44
+ "section_name": "4.1 Quantitative Verification on Random Test Cases",
45
+ "text": "###figure_4### ###figure_5### We test our algorithm quantitatively using some test cases.\nThe test cases where for this verification were generated from the following distributions:\n1. Uniform , 2. Exponential , 3. Pareto , and 4. LogNormal ,\nwhere is the probability variable of the runtime and we used .\nEach distribution uses the default setups of numpy.random and the constant number calibrates the expectation of each distribution except for the Pareto distribution to be .\nFurthermore, we used the cheap optimizer and the expensive optimizer with .\nAs , the worst sampling duration for an expensive optimizer will be seconds.\nAs we can expect longer waiting times for the expensive optimizer, it is more challenging to yield the precise return order and the precise simulated runtime.\nHence, these test cases empirically verify our implementation if our wrapper passes every test case.\nWe performed the following procedures to check whether the obtained return orders are correct:\n(1) run optimizations with the na\u00efve simulation (NS), i.e. Figure 1 ###reference_### (Left) and without our wrapper, i.e. Figure 1 ###reference_### (Right), (2) define the trajectories for each optimization and , (3) sort so that holds, and (4) plot (see Figure 3 ###reference_###).\nIf the simulated return order is correct, the plot will look like , i.e. for all , and we expect to have such plots for all the experiments.\nFor comparison, we also collect without our wrapper, i.e. Figure 1 ###reference_### (Left) without time.sleep in Line 4.\nAs seen in Figure 3 ###reference_###, our wrapper successfully replicates the results obtained by the na\u00efve simulation.\nThe test cases by the Pareto distribution are edge cases because it has a heavy tail and it sometimes generates configurations with very long runtime, leading to blue dots located slightly above the red dots.\nAlthough this completely confuses the implementation without our wrapper, our wrapper appropriately handles the edge cases.\n###figure_6### ###figure_7### We check whether the simulated runtimes at each iteration were correctly calculated using the same setups.\nFigure 4 ###reference_### presents the simulated runtimes for each setup.\nAs can be seen in the figures, our wrapper got a relative error of .\nSince the expectation of runtime is seconds except for the Pareto distribution, the error was approximately milliseconds and this value comes from the query overhead in our wrapper before each sampling.\nAlthough the error is sufficiently small, the relative error becomes much smaller when we use more expensive benchmarks that will give a large runtime .\n###figure_8###"
46
+ },
47
+ {
48
+ "section_id": "4.2",
49
+ "parent_section_id": "4",
50
+ "section_name": "4.2 Performance Verification on Actual Runtime Reduction",
51
+ "text": "In the previous sections, we verified the correctness of our algorithms and empirically validated our algorithms.\nIn this section, we demonstrate the runtime reduction effect achieved by our wrapper.\nTo test the runtime reduction, we optimized the multi-fidelity 6D Hartmann function 444\nWe set the runtime function so that the maximum runtime for one evaluation becomes 1 hour.\nMore precisely, we used instead of in Appendix A.2 ###reference_###.\n (Kandasamy et al.,, 2017 ###reference_b12###) using random search with workers over 10 different random seeds.\nIn the noisy case, we added a random noise to the objective function.\nWe used both MCS and SCS in this experiment and the na\u00efve simulation.\nFigure 5 ###reference_### (Left) shows that both MCS and SCS perfectly reproduced the results by the na\u00efve simulation while they finished the experiments times and times faster, respectively.\nNote that it is hard to see, but the rightmost curve of Figure 5 ###reference_### (Left) has the three lines:\n(1) Simulated Runtime (MCS), (2) Simulated Runtime (SCS), and (3) Actual Runtime (Na\u00efve), and they completely overlap with each other.\nSCS is much quicker than MCS because it does not require communication between each worker via the file system.\nAlthough MCS could reproduce the results by the na\u00efve simulation even for the noisy case, SCS failed to reproduce the results because\nthe na\u00efve simulation relies on multi-core optimization, while SCS does not use multi-core optimization.\nThis difference affects the random seed effect on the optimizations.\nHowever, since SCS still reproduces the results for the deterministic case, it verifies our implementation of SCS.\nFrom the results, we can conclude that while SCS is generally quicker because it does not require communication via the file system, it may fail to reproduce the random seed effect.\nThis is because SCS wraps an optimizer by relying on the ask-and-tell interface instead of using the multi-core implementation provided by the optimizer."
52
+ },
53
+ {
54
+ "section_id": "5",
55
+ "parent_section_id": null,
56
+ "section_name": "5 Experiments on Zero-Cost Benchmarks Using Various Open-Sourced HPO Tools",
57
+ "text": "The aim of this section is to show that: (1) our wrapper is applicable to diverse HPO libraries and HPO benchmarks, and that (2) ranking of algorithms varies under benchmarking of parallel setups, making such evaluations necessary.\nWe use random search and TPE (Bergstra et al.,, 2011 ###reference_b5###; Watanabe, 2023b, ###reference_b30###) from Optuna (Akiba et al.,, 2019 ###reference_b1###), random forest-based Bayesian optimization (via the MFFacade) from SMAC3 (Lindauer et al.,, 2022 ###reference_b20###), DEHB (Awad et al.,, 2021 ###reference_b3###), HyperBand (Li et al.,, 2017 ###reference_b16###) and BOHB (Falkner et al.,, 2018 ###reference_b10###) from HpBandSter, NePS 555\nIt was under development when we used it and the package is available at https://github.com/automl/neps/ ###reference_github.com/automl/neps/###.\n, and HEBO (Cowen-Rivers et al.,, 2022 ###reference_b6###) as optimizers.\nFor more details, see Appendix B ###reference_###.\nOptuna uses multithreading, SMAC3 and DEHB use dask, HpBandSter uses file server-based synchronization, NePS uses file system-based synchronization, and HEBO uses the ask-and-tell interface.\nIn the experiments, we used these optimizers with our wrapper to optimize the MLP benchmark in HPOBench (Eggensperger et al.,, 2021 ###reference_b9###), HPOLib (Klein and Hutter,, 2019 ###reference_b14###), JAHS-Bench-201 (Bansal et al.,, 2022 ###reference_b4###), LCBench (Zimmer et al.,, 2021 ###reference_b34###) in YAHPOBench (Pfisterer et al.,, 2022 ###reference_b25###), and two multi-fidelity benchmark functions proposed by Kandasamy et al., (2017 ###reference_b12###).\nSee Appendix A ###reference_### for more details.\nWe used the number of parallel workers over 30 different random seeds for each and for HyperBand-based methods, i.e. the default value of a control parameter of HyperBand that determines the proportion of HP configurations discarded in each round of successive halving (Jamieson and Talwalkar,, 2016 ###reference_b11###).\nThe budget for each optimization was fixed to 200 full evaluations and this leads to 450 function calls for HyperBand-based methods with .\nNote that random search and HyperBand used 10 times more budget, i.e. 2000 full evaluations, compared to the others.\nAll the experiments were performed on bwForCluster NEMO, which has 10 cores of Intel(R) Xeon(R) CPU E5-2630 v4 on each computational node, and we used 15GB RAM per worker.\nAccording to Figure 6 ###reference_###, while some optimizer pairs such as BOHB and HEBO, and random search and NePS show the same performance statistically over the four different numbers of workers , DEHB exhibited different performance significance depending on the number of workers. For example, DEHB belongs to the top group with BOHB, TPE, and HEBO for , but it belongs to the bottom group with random search and NePS for . As shown by the red bars, we see statistically significant performance differences between the top groups and the bottom groups. Therefore, this directly indicates that we should study the effect caused by the number of workers in research.\nFurthermore, applying our wrapper to the listed optimizers demonstrably accelerated the entire experiment by a factor of times faster compared to the na\u00efve simulation.\nAct.\nSim.\n Fast\nAct.\nSim.\n Fast\nAct.\nSim.\n Fast\nAct.\nSim.\n Fast\n\n9.2e+06/\n3.0e+10/\n3.3e+03\n1.1e+07/\n1.5e+10/\n1.5e+03\n1.1e+07/\n7.7e+09/\n6.9e+02\n1.2e+07/\n3.9e+09/\n3.2e+02\n###figure_9###"
58
+ },
59
+ {
60
+ "section_id": "6",
61
+ "parent_section_id": null,
62
+ "section_name": "6 Broader Impact & Limitations",
63
+ "text": "The primary motivation for this paper is to reduce the runtime of simulations for MFO.\nAs shown in Table 1 ###reference_###, our experiments would have taken seconds CPU years with the na\u00efve simulation.\nAs the TDP of Intel(R) Xeon(R) CPU E5-2630 v4 used in our experiments consumes about 85W and about of is produced per 1kWh, the whole experiment would have produced about of if we estimate a core of the CPU needs 2W in its idole state.\nIt means that our wrapper saved of production at least.\nTherefore, researchers can also reduce the similar amount of for each experiment.\nThe main limitation of our current wrapper is the assumption that none of the workers will not die and any additional workers will not be added after the initialization.\nBesides that, our package cannot be used on Windows OS because fcntl is not supported on Windows."
64
+ },
65
+ {
66
+ "section_id": "7",
67
+ "parent_section_id": null,
68
+ "section_name": "7 Conclusions",
69
+ "text": "In this paper, we presented a simulator for parallel HPO benchmarking runs that maintains the exact order of the observations without waiting for actual runtimes.\nOur algorithm is available as a Python package that can be plugged into existing code and hardware setups.\nAlthough some existing packages internally support a similar mechanism, they are not applicable to multiprocessing or multithreading setups and they cannot be immediately used for newly developed methods.\nOur package supports such distributed computing setups and researchers can simply wrap their objective functions by our wrapper and directly use their own optimizers.\nWe demonstrated that our package significantly reduces the production that experiments using zero-cost benchmarks would have caused.\nOur package and its basic usage description are available at https://github.com/nabenabe0928/mfhpo-simulator ###reference_lator###."
70
+ }
71
+ ],
72
+ "appendix": [
73
+ {
74
+ "section_id": "Appendix x1",
75
+ "parent_section_id": null,
76
+ "section_name": "Submission Checklist",
77
+ "text": "For all authors\u2026\nDo the main claims made in the abstract and introduction accurately reflect the paper\u2019s contributions and scope? [Yes]\nDid you describe the limitations of your work?\n[Yes] Please check Section 6 ###reference_###.\nDid you discuss any potential negative societal impacts of your work?\n[N/A] This is out of scope for our paper.\nDid you read the ethics review guidelines and ensure that your paper\nconforms to them? https://2022.automl.cc/ethics-accessibility/ ###reference_y/###\n[Yes]\nIf you ran experiments\u2026\nDid you use the same evaluation protocol for all methods being compared (e.g.,\nsame benchmarks, data (sub)sets, available resources)?\n[Yes] Please check the source code available at https://github.com/nabenabe0928/mfhpo-simulator-experiments ###reference_lator-experiments###.\nDid you specify all the necessary details of your evaluation (e.g., data splits,\npre-processing, search spaces, hyperparameter tuning)?\n[Yes] Please check Section 5 ###reference_###.\nDid you repeat your experiments (e.g., across multiple random seeds or splits) to account for the impact of randomness in your methods or data?\n[Yes] We used 10 different random seeds for Section 4 ###reference_### and 30 different 30 random seeds for Section 5 ###reference_### as described in the corresponding sections.\nDid you report the uncertainty of your results (e.g., the variance across random seeds or splits)?\n[Yes] We reported for the necessary parts.\nDid you report the statistical significance of your results?\n[Yes] Please check Figure 6 ###reference_###.\nDid you use tabular or surrogate benchmarks for in-depth evaluations?\n[Yes] Please check Section 5 ###reference_###.\nDid you compare performance over time and describe how you selected the maximum duration?\n[N/A] This is out of scope for our paper.\nDid you include the total amount of compute and the type of resources used (e.g., type of gpus, internal cluster, or cloud provider)?\n[Yes] Please check Section 5 ###reference_###.\nDid you run ablation studies to assess the impact of different components of your approach?\n[N/A] This is out of scope for our paper.\nWith respect to the code used to obtain your results\u2026\nDid you include the code, data, and instructions needed to reproduce the\nmain experimental results, including all requirements (e.g.,\nrequirements.txt with explicit versions), random seeds, an instructive\nREADME with installation, and execution commands (either in the\nsupplemental material or as a url)?\n[Yes] Please check https://github.com/nabenabe0928/mfhpo-simulator-experiments ###reference_lator-experiments###.\nDid you include a minimal example to replicate results on a small subset\nof the experiments or on toy data?\n[Yes] Minimal examples are available at https://github.com/nabenabe0928/mfhpo-simulator/tree/main/examples/minimal ###reference_lator/tree/main/examples/minimal###.\nDid you ensure sufficient code quality and documentation so that someone else can execute and understand your code?\n[Yes]\nDid you include the raw results of running your experiments with the given code, data, and instructions?\n[No] As the raw results is 10+GB, it is not publicly available.\nDid you include the code, additional data, and instructions needed to generate the figures and tables in your paper based on the raw results?\n[Yes] Once you get all the data, the visualizations are possible using the scripts at https://github.com/nabenabe0928/mfhpo-simulator-experiments/tree/main/validation 
###reference_lator-experiments/tree/main/validation###.\nIf you used existing assets (e.g., code, data, models)\u2026\nDid you cite the creators of used assets?\n[Yes]\nDid you discuss whether and how consent was obtained from people whose data you\u2019re using/curating if the license requires it?\n[Yes]\nDid you discuss whether the data you are using/curating contains personally identifiable information or offensive content?\n[Yes]\nIf you created/released new assets (e.g., code, data, models)\u2026\nDid you mention the license of the new assets (e.g., as part of your code submission)?\n[Yes] The license of our package is Apache-2.0 license.\nDid you include the new assets either in the supplemental material or as\na url (to, e.g., GitHub or Hugging Face)?\n[Yes] We mention that our package can be installed via pip install mfhpo-simulator.\nIf you used crowdsourcing or conducted research with human subjects\u2026\nDid you include the full text of instructions given to participants and screenshots, if applicable?\n[N/A] This is out of scope for our paper.\nDid you describe any potential participant risks, with links to Institutional Review Board (irb) approvals, if applicable?\n[N/A] This is out of scope for our paper.\nDid you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?\n[N/A] This is out of scope for our paper.\nIf you included theoretical results\u2026\nDid you state the full set of assumptions of all theoretical results?\n[N/A] This is out of scope for our paper.\nDid you include complete proofs of all theoretical results?\n[N/A] This is out of scope for our paper."
78
+ },
79
+ {
80
+ "section_id": "Appendix 1",
81
+ "parent_section_id": null,
82
+ "section_name": "Appendix A Benchmarks",
83
+ "text": "We first note that since the Branin and the Hartmann functions must be minimized, our functions have different signs from the prior literature that aims to maximize objective functions and when , our examples take .\nHowever, if users wish, users can specify as from fidel_dim.\nThe Branin function is the following function that has global minimizers and no local minimizer:\nwhere , , , , , , and .\nThe multi-fidelity Branin function was invented by Kandasamy et al., (2020 ###reference_b13###) and it replaces with the following :\nwhere , , , and .\n controls the rank correlation between low- and high-fidelities and higher yields less correlation.\nThe runtime function for the multi-fidelity Branin function is computed as 666\nSee the implementation of Kandasamy et al., (2020 ###reference_b13###): branin_mf.py at https://github.com/dragonfly/dragonfly/ ###reference_###.\n:\nwhere defines the maximum runtime to evaluate .\nThe following Hartmann function has local minimizers for the case and local minimizers for the case:\nwhere , , for the case is\nfor the case is\nfor the case is\nand for the case is\nThe multi-fidelity Hartmann function was invented by Kandasamy et al., (2020 ###reference_b13###) and it replaces with the following :\nwhere and is the factor that controls the rank correlation between low- and high-fidelities.\nHigher yields less correlation.\nThe runtime function of the multi-fidelity Hartmann function is computed as 777\nSee the implementation of Kandasamy et al., (2020 ###reference_b13###): hartmann3_2_mf.py for the case and hartmann6_4_mf.py for the case at https://github.com/dragonfly/dragonfly/ ###reference_###.\n:\nfor the case and\nfor the case where defines the maximum runtime to evaluate .\nIn this paper, we used the MLP benchmark in Table 6 of HPOBench (Eggensperger et al.,, 2021 ###reference_b9###), HPOlib (Klein and Hutter,, 2019 ###reference_b14###), JAHS-Bench-201 (Bansal et al.,, 2022 ###reference_b4###), and LCBench (Zimmer et al.,, 2021 ###reference_b34###) in YAHPOBench (Pfisterer et al.,, 2022 ###reference_b25###).\nHPOBench is a collection of tabular, surrogate, and raw benchmarks.\nIn our example, we have the MLP (multi-layer perceptron) benchmark, which is a tabular benchmark, in Table 6 of the HPOBench paper (Eggensperger et al.,, 2021 ###reference_b9###).\nThis benchmark has classification tasks and provides the validation accuracy, runtime, F1 score, and precision for each configuration at epochs of .\nThe search space of MLP benchmark in HPOBench is provided in Table 2 ###reference_###.\nHPOlib is a tabular benchmark for neural networks on regression tasks (Slice Localization, Naval Propulsion, Protein Structure, and Parkinsons Telemonitoring).\nThis benchmark has regression tasks and provides the number of parameters, runtime, and training and validation mean squared error (MSE) for each configuration at each epoch.\nThe search space of HPOlib is provided in Table 3 ###reference_###.\nJAHS-Bench-201 is an XGBoost surrogate benchmark for neural networks on image classification tasks (CIFAR10, Fashion-MNIST, and Colorectal Histology).\nThis benchmark has image classification tasks and provides FLOPS, latency, runtime, architecture size in megabytes, test accuracy, training accuracy, and validation accuracy for each configuration with two fidelity parameters: image resolution and epoch.\nThe search space of JAHS-Bench-201 is provided in Table 4 ###reference_###.\nLCBench is a random-forest surrogate benchmark for neural networks on OpenML 
datasets.\nThis benchmark has tasks and provides training/test/validation accuracy, losses, balanced accuracy, and runtime at each epoch.\nThe search space of HPOlib is provided in Table 5 ###reference_###."
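For concreteness, a minimal implementation of the standard (highest-fidelity) 6D Hartmann function with the usual constants is sketched below; the fidelity-dependent modification of the alpha vector and the runtime function follow the Dragonfly implementations referenced above and are omitted here.

```python
import numpy as np

ALPHA = np.array([1.0, 1.2, 3.0, 3.2])
A6 = np.array([
    [10.0, 3.0, 17.0, 3.5, 1.7, 8.0],
    [0.05, 10.0, 17.0, 0.1, 8.0, 14.0],
    [3.0, 3.5, 1.7, 10.0, 17.0, 8.0],
    [17.0, 8.0, 0.05, 10.0, 0.1, 14.0],
])
P6 = 1e-4 * np.array([
    [1312, 1696, 5569, 124, 8283, 5886],
    [2329, 4135, 8307, 3736, 1004, 9991],
    [2348, 1451, 3522, 2883, 3047, 6650],
    [4047, 8828, 8732, 5743, 1091, 381],
])


def hartmann6(x: np.ndarray) -> float:
    """Standard 6D Hartmann function (to be minimized), defined on [0, 1]^6."""
    inner = np.sum(A6 * (x[np.newaxis, :] - P6) ** 2, axis=1)
    return float(-np.sum(ALPHA * np.exp(-inner)))


print(hartmann6(np.full(6, 0.5)))  # the global minimum value is about -3.32237
```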
84
+ },
85
+ {
86
+ "section_id": "Appendix 2",
87
+ "parent_section_id": null,
88
+ "section_name": "Appendix B Optimizers",
89
+ "text": "In our package, we show examples using BOHB (Falkner et al.,, 2018 ###reference_b10###), DEHB (Awad et al.,, 2021 ###reference_b3###), SMAC3 (Lindauer et al.,, 2022 ###reference_b20###), and NePS 888https://github.com/automl/neps/ ###reference_github.com/automl/neps/###.\nBOHB is a combination of HyperBand (Li et al.,, 2017 ###reference_b16###) and tree-structured Parzen estimator (Bergstra et al.,, 2011 ###reference_b5###; Watanabe, 2023b, ###reference_b30###).\nDEHB is a combination of HyperBand and differential evolution.\nWe note that DEHB does not natively support restarting of models, which we believe contributes to it subpar performance.\nSMAC3 is an HPO framework.\nSMAC3 supports various Bayesian optimization algorithms and uses different strategies for different scenarios.\nThe default strategies for MFO is the random forest-based Bayesian optimization and HyperBand.\nNePS is another HPO framework jointly with neural architecture search.\nWhen we used NePS, this package was still under developed and we used HyperBand, which was the default algorithm at the time.\nAlthough we focused on multi-fidelity optimization in this paper, our wrapper is applicable to multi-objective optimization and constrained optimization.\nWe give examples for these setups using MO-TPE (Ozaki et al.,, 2020 ###reference_b24###, 2022 ###reference_b23###) and c-TPE (Watanabe and Hutter,, 2022 ###reference_b31###, 2023 ###reference_b32###) at https://github.com/nabenabe0928/mfhpo-simulator/blob/main/examples/minimal/optuna_mo_ctpe.py ###reference_lator/blob/main/examples/minimal/optuna_mo_ctpe.py###."
90
+ }
91
+ ],
92
+ "tables": {
93
+ "1": {
94
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>\nThe total actual and simulated runtimes over all the experiments.\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.14.1\">Act.</span>: total actual runtime and <span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.15.2\">Sim.</span>: total simulated runtime.\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.16.3\">\u00a0Fast</span>: speedup factor of simulation.\n</figcaption>\n<p class=\"ltx_p ltx_align_center\" id=\"S5.T1.10.10\"><span class=\"ltx_text ltx_inline-block\" id=\"S5.T1.10.10.8\" style=\"width:424.9pt;\">\n<span class=\"ltx_inline-block ltx_transformed_outer\" id=\"S5.T1.10.10.8.8\" style=\"width:424.9pt;height:48.3pt;vertical-align:-0.9pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-29.5pt,3.3pt) scale(0.87798485121331,0.87798485121331) ;\">\n<span class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T1.10.10.8.8.8\">\n<span class=\"ltx_thead\">\n<span class=\"ltx_tr\" id=\"S5.T1.6.6.4.4.4.4\">\n<span class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt ltx_colspan ltx_colspan_3\" id=\"S5.T1.3.3.1.1.1.1.1\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></span>\n<span class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt ltx_colspan ltx_colspan_3\" id=\"S5.T1.4.4.2.2.2.2.2\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></span>\n<span class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt ltx_colspan ltx_colspan_3\" id=\"S5.T1.5.5.3.3.3.3.3\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></span>\n<span class=\"ltx_td ltx_nopad_l ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_tt ltx_colspan ltx_colspan_3\" id=\"S5.T1.6.6.4.4.4.4.4\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></span></span>\n</span>\n<span class=\"ltx_tbody\">\n<span class=\"ltx_tr\" id=\"S5.T1.10.10.8.8.8.8\">\n<span class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_th ltx_th_row\" id=\"S5.T1.10.10.8.8.8.8.5\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">Act.</span>\n<span class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T1.10.10.8.8.8.8.6\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">Sim.</span>\n<span class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S5.T1.7.7.5.5.5.5.1\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">\u00a0Fast</span>\n<span class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_th ltx_th_row\" id=\"S5.T1.10.10.8.8.8.8.7\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">Act.</span>\n<span class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T1.10.10.8.8.8.8.8\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">Sim.</span>\n<span class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S5.T1.8.8.6.6.6.6.2\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">\u00a0Fast</span>\n<span class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_th ltx_th_row\" id=\"S5.T1.10.10.8.8.8.8.9\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">Act.</span>\n<span class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T1.10.10.8.8.8.8.10\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">Sim.</span>\n<span class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" 
id=\"S5.T1.9.9.7.7.7.7.3\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">\u00a0Fast</span>\n<span class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_th ltx_th_row\" id=\"S5.T1.10.10.8.8.8.8.11\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">Act.</span>\n<span class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T1.10.10.8.8.8.8.12\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">Sim.</span>\n<span class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T1.10.10.8.8.8.8.4\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">\u00a0Fast</span></span>\n<span class=\"ltx_tr\" id=\"S5.T1.10.10.8.8.8.9.1\">\n<span class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_t\" id=\"S5.T1.10.10.8.8.8.9.1.1\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">9.2e+06/</span>\n<span class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T1.10.10.8.8.8.9.1.2\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">3.0e+10/</span>\n<span class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S5.T1.10.10.8.8.8.9.1.3\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.10.10.8.8.8.9.1.3.1\">3.3e+03</span></span>\n<span class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_t\" id=\"S5.T1.10.10.8.8.8.9.1.4\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">1.1e+07/</span>\n<span class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T1.10.10.8.8.8.9.1.5\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">1.5e+10/</span>\n<span class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S5.T1.10.10.8.8.8.9.1.6\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.10.10.8.8.8.9.1.6.1\">1.5e+03</span></span>\n<span class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_t\" id=\"S5.T1.10.10.8.8.8.9.1.7\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">1.1e+07/</span>\n<span class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T1.10.10.8.8.8.9.1.8\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">7.7e+09/</span>\n<span class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S5.T1.10.10.8.8.8.9.1.9\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.10.10.8.8.8.9.1.9.1\">6.9e+02</span></span>\n<span class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_t\" id=\"S5.T1.10.10.8.8.8.9.1.10\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">1.2e+07/</span>\n<span class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T1.10.10.8.8.8.9.1.11\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">3.9e+09/</span>\n<span class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T1.10.10.8.8.8.9.1.12\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.10.10.8.8.8.9.1.12.1\">3.2e+02</span></span></span>\n</span>\n</span>\n</span></span>\n</span></p>\n</figure>",
95
+ "capture": "Table 1: \nThe total actual and simulated runtimes over all the experiments.\nAct.: total actual runtime and Sim.: total simulated runtime.\n\u00a0Fast: speedup factor of simulation.\n"
96
+ },
97
+ "2": {
98
+ "table_html": "<figure class=\"ltx_table\" id=\"A1.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>\nThe search space of the MLP benchmark in HPOBench ( discrete + fidelity parameters).\nNote that we have fidelity parameters only for the raw benchmark.\nEach benchmark has performance metrics of possible configurations with random seeds.\n</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A1.T2.20.20\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A1.T2.20.20.11.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_tt\" id=\"A1.T2.20.20.11.1.1\">Hyperparameter</th>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"A1.T2.20.20.11.1.2\">Choices</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T2.12.12.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"A1.T2.12.12.2.3\">L2 regularization</th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A1.T2.12.12.2.2\">[] with evenly distributed grids</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T2.14.14.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T2.14.14.4.3\">Batch size</th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T2.14.14.4.2\">[] with evenly distributed grids</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T2.16.16.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T2.16.16.6.3\">Initial learning rate</th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T2.16.16.6.2\">[] with evenly distributed grids</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T2.18.18.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T2.18.18.8.3\">Width</th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T2.18.18.8.2\">[] with evenly distributed grids</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T2.19.19.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T2.19.19.9.2\">Depth</th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T2.19.19.9.1\">{}</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T2.20.20.10\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_t\" id=\"A1.T2.20.20.10.2\">Epoch\u00a0(<span class=\"ltx_text ltx_font_bold\" id=\"A1.T2.20.20.10.2.1\">Fidelity</span>)</th>\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_t\" id=\"A1.T2.20.20.10.1\">{}</td>\n</tr>\n</tbody>\n</table>\n</figure>",
99
+ "capture": "Table 2: \nThe search space of the MLP benchmark in HPOBench ( discrete + fidelity parameters).\nNote that we have fidelity parameters only for the raw benchmark.\nEach benchmark has performance metrics of possible configurations with random seeds.\n"
100
+ },
101
+ "3": {
102
+ "table_html": "<figure class=\"ltx_table\" id=\"A1.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>\nThe search space of HPOlib ( discrete + categorical + fidelity parameters).\nEach benchmark has performance metrics of \npossible configurations with random seeds.\n</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A1.T3.17.17\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A1.T3.17.17.8.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_tt\" id=\"A1.T3.17.17.8.1.1\">Hyperparameter</th>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"A1.T3.17.17.8.1.2\">Choices</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.11.11.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"A1.T3.11.11.1.2\">Batch size</th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A1.T3.11.11.1.1\">{}</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.12.12.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T3.12.12.2.2\">Initial learning rate</th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.12.12.2.1\">{}</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.13.13.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T3.13.13.3.2\">Number of units {1,2}</th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.13.13.3.1\">{}</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.14.14.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T3.14.14.4.2\">Dropout rate {1,2}</th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.14.14.4.1\">{}</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.15.15.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"A1.T3.15.15.5.2\">Learning rate scheduler</th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A1.T3.15.15.5.1\">{<span class=\"ltx_text ltx_font_typewriter\" id=\"A1.T3.15.15.5.1.1\">cosine<span class=\"ltx_text ltx_markedasmath\" id=\"A1.T3.15.15.5.1.1.1\">,</span></span> <span class=\"ltx_text ltx_font_typewriter\" id=\"A1.T3.15.15.5.1.2\">constant</span>}</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.16.16.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T3.16.16.6.2\">Activation function {1,2}</th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.16.16.6.1\">{<span class=\"ltx_text ltx_font_typewriter\" id=\"A1.T3.16.16.6.1.1\">relu<span class=\"ltx_text ltx_markedasmath\" id=\"A1.T3.16.16.6.1.1.1\">,</span></span> <span class=\"ltx_text ltx_font_typewriter\" id=\"A1.T3.16.16.6.1.2\">tanh</span>}</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.17.17.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_t\" id=\"A1.T3.17.17.7.2\">Epoch\u00a0(<span class=\"ltx_text ltx_font_bold\" id=\"A1.T3.17.17.7.2.1\">Fidelity</span>)</th>\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_t\" id=\"A1.T3.17.17.7.1\">[]</td>\n</tr>\n</tbody>\n</table>\n</figure>",
103
+ "capture": "Table 3: \nThe search space of HPOlib ( discrete + categorical + fidelity parameters).\nEach benchmark has performance metrics of \npossible configurations with random seeds.\n"
104
+ },
105
+ "4": {
106
+ "table_html": "<figure class=\"ltx_table\" id=\"A1.T4\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 4: </span>\nThe search space of JAHS-Bench-201 ( continuous + discrete + categorical + fidelity parameters).\nJAHS-Bench-201 is an XGBoost surrogate benchmark and the outputs are deterministic.\n</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A1.T4.14.14\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A1.T4.14.14.7.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_tt\" id=\"A1.T4.14.14.7.1.1\">Hyperparameter</th>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"A1.T4.14.14.7.1.2\">Range or choices</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T4.9.9.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"A1.T4.9.9.1.2\">Learning rate</th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A1.T4.9.9.1.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T4.10.10.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T4.10.10.2.2\">L2 regularization</th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T4.10.10.2.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T4.14.14.8.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T4.14.14.8.2.1\">Activation function</th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T4.14.14.8.2.2\">{<span class=\"ltx_text ltx_font_typewriter\" id=\"A1.T4.14.14.8.2.2.1\">ReLU</span>, <span class=\"ltx_text ltx_font_typewriter\" id=\"A1.T4.14.14.8.2.2.2\">Hardswish</span>, <span class=\"ltx_text ltx_font_typewriter\" id=\"A1.T4.14.14.8.2.2.3\">Mish</span>}</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T4.14.14.9.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T4.14.14.9.3.1\">Trivial augment\u00a0(<cite class=\"ltx_cite ltx_citemacro_cite\">M\u00fcller and Hutter, (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2403.01888v3#bib.bib22\" title=\"\">2021</a>)</cite>)</th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T4.14.14.9.3.2\">{<span class=\"ltx_text ltx_font_typewriter\" id=\"A1.T4.14.14.9.3.2.1\">True</span>, <span class=\"ltx_text ltx_font_typewriter\" id=\"A1.T4.14.14.9.3.2.2\">False</span>}</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T4.11.11.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"A1.T4.11.11.3.2\">Depth multiplier</th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A1.T4.11.11.3.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T4.12.12.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T4.12.12.4.2\">Width multiplier</th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T4.12.12.4.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T4.14.14.10.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"A1.T4.14.14.10.4.1\">Cell search space</th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A1.T4.14.14.10.4.2\">{<span class=\"ltx_text ltx_font_typewriter\" id=\"A1.T4.14.14.10.4.2.1\">none</span>, <span class=\"ltx_text ltx_font_typewriter\" id=\"A1.T4.14.14.10.4.2.2\">avg-pool-3x3</span>, <span class=\"ltx_text ltx_font_typewriter\" id=\"A1.T4.14.14.10.4.2.3\">bn-conv-1x1</span>,</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T4.14.14.11.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T4.14.14.11.5.1\">(NAS-Bench-201\u00a0(<cite class=\"ltx_cite ltx_citemacro_cite\">Dong and Yang, (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2403.01888v3#bib.bib7\" 
title=\"\">2020</a>)</cite>), Edge 1 \u2013 6)</th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T4.14.14.11.5.2\">\n<span class=\"ltx_text ltx_font_typewriter\" id=\"A1.T4.14.14.11.5.2.1\">bn-conv-3x3</span>, <span class=\"ltx_text ltx_font_typewriter\" id=\"A1.T4.14.14.11.5.2.2\">skip-connection</span>}</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T4.13.13.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"A1.T4.13.13.5.2\">Epoch\u00a0(<span class=\"ltx_text ltx_font_bold\" id=\"A1.T4.13.13.5.2.1\">Fidelity</span>)</th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A1.T4.13.13.5.1\">[]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T4.14.14.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"A1.T4.14.14.6.2\">Resolution\u00a0(<span class=\"ltx_text ltx_font_bold\" id=\"A1.T4.14.14.6.2.1\">Fidelity</span>)</th>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A1.T4.14.14.6.1\">[]</td>\n</tr>\n</tbody>\n</table>\n</figure>",
107
+ "capture": "Table 4: \nThe search space of JAHS-Bench-201 ( continuous + discrete + categorical + fidelity parameters).\nJAHS-Bench-201 is an XGBoost surrogate benchmark and the outputs are deterministic.\n"
108
+ },
109
+ "5": {
110
+ "table_html": "<figure class=\"ltx_table\" id=\"A1.T5\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 5: </span>\nThe search space of LCBench ( discrete + continuous + fidelity parameters).\nAlthough the original LCBench is a collection of random configurations, YAHPOBench created random-forest surrogates over the observations.\nUsers can choose deterministic or non-deterministic outputs.\n</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A1.T5.18.18\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A1.T5.18.18.9.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_tt\" id=\"A1.T5.18.18.9.1.1\">Hyperparameter</th>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"A1.T5.18.18.9.1.2\">Choices</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T5.11.11.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"A1.T5.11.11.1.2\">Batch size</th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A1.T5.11.11.1.1\">[]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T5.12.12.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T5.12.12.2.2\">Max number of units</th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T5.12.12.2.1\">[]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T5.13.13.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T5.13.13.3.2\">Number of layers</th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T5.13.13.3.1\">[]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T5.14.14.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"A1.T5.14.14.4.2\">Initial learning rate</th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A1.T5.14.14.4.1\">[]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T5.15.15.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T5.15.15.5.2\">L2 regularization</th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T5.15.15.5.1\">[]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T5.16.16.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T5.16.16.6.2\">Max dropout rate</th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T5.16.16.6.1\">[]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T5.17.17.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T5.17.17.7.2\">Momentum</th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T5.17.17.7.1\">[]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T5.18.18.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_t\" id=\"A1.T5.18.18.8.2\">Epoch\u00a0(<span class=\"ltx_text ltx_font_bold\" id=\"A1.T5.18.18.8.2.1\">Fidelity</span>)</th>\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_t\" id=\"A1.T5.18.18.8.1\">[]</td>\n</tr>\n</tbody>\n</table>\n</figure>",
111
+ "capture": "Table 5: \nThe search space of LCBench ( discrete + continuous + fidelity parameters).\nAlthough the original LCBench is a collection of random configurations, YAHPOBench created random-forest surrogates over the observations.\nUsers can choose deterministic or non-deterministic outputs.\n"
112
+ }
113
+ },
114
+ "image_paths": {
115
+ "1": {
116
+ "figure_path": "2403.01888v3_figure_1.png",
117
+ "caption": "Figure 1: \nThe simplest codeblock example of how our wrapper works.\nLeft: a codeblock example without our wrapper (na\u00efve simulation).\nWe let each worker call sleep for the time specified by the queried result.\nThis implementation is commonly used to guarantee correctness, as research often requires us to run optimizers from other researchers.\nRight: a codeblock example with our wrapper (multi-core simulation).\nUsers only need to wrap the objective function with our module and remove the line for sleeping.\nIn the end, both codeblocks yield identical results.",
118
+ "url": "http://arxiv.org/html/2403.01888v3/x1.png"
119
+ },
120
+ "2(a)": {
121
+ "figure_path": "2403.01888v3_figure_2(a).png",
122
+ "caption": "(a)\nFigure 2: \nThe conceptual visualizations of our wrapper.\n(a) The workflow of our wrapper.\nThe gray parts are provided by users and our package is responsible for the light blue part.\nThe blue circles with the white cross must be modified by users via inheritance to match the signature used in our wrapper.\nThe p\ud835\udc5dpitalic_p-th worker receives the n\ud835\udc5bnitalic_n-th queried configuration \ud835\udc99(n)superscript\ud835\udc99\ud835\udc5b\\bm{x}^{(n)}bold_italic_x start_POSTSUPERSCRIPT ( italic_n ) end_POSTSUPERSCRIPT and stores its result f(n),\u03c4(n)superscript\ud835\udc53\ud835\udc5bsuperscript\ud835\udf0f\ud835\udc5bf^{(n)},\\tau^{(n)}italic_f start_POSTSUPERSCRIPT ( italic_n ) end_POSTSUPERSCRIPT , italic_\u03c4 start_POSTSUPERSCRIPT ( italic_n ) end_POSTSUPERSCRIPT in the file system.\nOur wrapper sorts out the right timing to return the n\ud835\udc5bnitalic_n-th queried result f(n)superscript\ud835\udc53\ud835\udc5bf^{(n)}italic_f start_POSTSUPERSCRIPT ( italic_n ) end_POSTSUPERSCRIPT to the optimizer based on the simulated runtime Tpsubscript\ud835\udc47\ud835\udc5dT_{p}italic_T start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT.\n(b) The compression of simulated runtime.\nEach circle on each line represents the timing when each result was delivered from each worker.\nLeft: an example when we na\u00efvely wait for the (actual) runtime \u03c4\u2062(\ud835\udc99)\ud835\udf0f\ud835\udc99\\tau(\\bm{x})italic_\u03c4 ( bold_italic_x ) of each query as reported by the benchmark.\nRight: an example when we use our wrapper to shrink the experiment runtime without losing the exact return order.",
123
+ "url": "http://arxiv.org/html/2403.01888v3/x2.png"
124
+ },
125
+ "2(b)": {
126
+ "figure_path": "2403.01888v3_figure_2(b).png",
127
+ "caption": "(b)\nFigure 2: \nThe conceptual visualizations of our wrapper.\n(a) The workflow of our wrapper.\nThe gray parts are provided by users and our package is responsible for the light blue part.\nThe blue circles with the white cross must be modified by users via inheritance to match the signature used in our wrapper.\nThe p\ud835\udc5dpitalic_p-th worker receives the n\ud835\udc5bnitalic_n-th queried configuration \ud835\udc99(n)superscript\ud835\udc99\ud835\udc5b\\bm{x}^{(n)}bold_italic_x start_POSTSUPERSCRIPT ( italic_n ) end_POSTSUPERSCRIPT and stores its result f(n),\u03c4(n)superscript\ud835\udc53\ud835\udc5bsuperscript\ud835\udf0f\ud835\udc5bf^{(n)},\\tau^{(n)}italic_f start_POSTSUPERSCRIPT ( italic_n ) end_POSTSUPERSCRIPT , italic_\u03c4 start_POSTSUPERSCRIPT ( italic_n ) end_POSTSUPERSCRIPT in the file system.\nOur wrapper sorts out the right timing to return the n\ud835\udc5bnitalic_n-th queried result f(n)superscript\ud835\udc53\ud835\udc5bf^{(n)}italic_f start_POSTSUPERSCRIPT ( italic_n ) end_POSTSUPERSCRIPT to the optimizer based on the simulated runtime Tpsubscript\ud835\udc47\ud835\udc5dT_{p}italic_T start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT.\n(b) The compression of simulated runtime.\nEach circle on each line represents the timing when each result was delivered from each worker.\nLeft: an example when we na\u00efvely wait for the (actual) runtime \u03c4\u2062(\ud835\udc99)\ud835\udf0f\ud835\udc99\\tau(\\bm{x})italic_\u03c4 ( bold_italic_x ) of each query as reported by the benchmark.\nRight: an example when we use our wrapper to shrink the experiment runtime without losing the exact return order.",
128
+ "url": "http://arxiv.org/html/2403.01888v3/x3.png"
129
+ },
130
+ "3(a)": {
131
+ "figure_path": "2403.01888v3_figure_3(a).png",
132
+ "caption": "(c) Cheap optimizer\nFigure 3: \nThe return order verification results.\nWhen we use our wrapper, the red dots are obtained.\nIf all the dots are aligned on y=x\ud835\udc66\ud835\udc65y=xitalic_y = italic_x, it implies that the return order in a simulation with our wrapper and that in its na\u00efve simulation perfectly match.\nAs expected, the red dots completely overlap with y=x\ud835\udc66\ud835\udc65y=xitalic_y = italic_x.\nSee the text in \u201cChecking Return Orders\u201d for the plot details.",
133
+ "url": "http://arxiv.org/html/2403.01888v3/x4.png"
134
+ },
135
+ "3(b)": {
136
+ "figure_path": "2403.01888v3_figure_3(b).png",
137
+ "caption": "(d) Expensive optimizer with c=5\u00d710\u22122\ud835\udc505superscript102c=5\\times 10^{-2}italic_c = 5 \u00d7 10 start_POSTSUPERSCRIPT - 2 end_POSTSUPERSCRIPT\nFigure 3: \nThe return order verification results.\nWhen we use our wrapper, the red dots are obtained.\nIf all the dots are aligned on y=x\ud835\udc66\ud835\udc65y=xitalic_y = italic_x, it implies that the return order in a simulation with our wrapper and that in its na\u00efve simulation perfectly match.\nAs expected, the red dots completely overlap with y=x\ud835\udc66\ud835\udc65y=xitalic_y = italic_x.\nSee the text in \u201cChecking Return Orders\u201d for the plot details.",
138
+ "url": "http://arxiv.org/html/2403.01888v3/x5.png"
139
+ },
140
+ "4(a)": {
141
+ "figure_path": "2403.01888v3_figure_4(a).png",
142
+ "caption": "(a) Cheap optimizer\nFigure 4: \nThe verification of the simulated runtime.\nThe red dotted lines show the simulated runtime of our wrapper and the black solid lines show the actual runtime of the na\u00efve simulation.\nThe blue dotted lines show the absolute difference between the simulated runtime of our wrapper and the actual runtime of the na\u00efve simulation multiplied by 1000100010001000 to fit in the same scale as the other lines.\nThe red dotted lines and the black solid lines are expected to completely overlap and the blue lines should exhibit zero ideally.\nThat is, the closer the blue lines to the x\ud835\udc65xitalic_x-axis, the less relative error we have.",
143
+ "url": "http://arxiv.org/html/2403.01888v3/x6.png"
144
+ },
145
+ "4(b)": {
146
+ "figure_path": "2403.01888v3_figure_4(b).png",
147
+ "caption": "(b) Expensive optimizer with c=5\u00d710\u22123\ud835\udc505superscript103c=5\\times 10^{-3}italic_c = 5 \u00d7 10 start_POSTSUPERSCRIPT - 3 end_POSTSUPERSCRIPT\nFigure 4: \nThe verification of the simulated runtime.\nThe red dotted lines show the simulated runtime of our wrapper and the black solid lines show the actual runtime of the na\u00efve simulation.\nThe blue dotted lines show the absolute difference between the simulated runtime of our wrapper and the actual runtime of the na\u00efve simulation multiplied by 1000100010001000 to fit in the same scale as the other lines.\nThe red dotted lines and the black solid lines are expected to completely overlap and the blue lines should exhibit zero ideally.\nThat is, the closer the blue lines to the x\ud835\udc65xitalic_x-axis, the less relative error we have.",
148
+ "url": "http://arxiv.org/html/2403.01888v3/x7.png"
149
+ },
150
+ "5": {
151
+ "figure_path": "2403.01888v3_figure_5.png",
152
+ "caption": "Figure 5: \nThe verification of actual runtime reduction.\nThe x\ud835\udc65xitalic_x-axis shows the wall-clock time and the y\ud835\udc66yitalic_y-axis shows the cumulative minimum objective value during optimizations.\nNa\u00efve simulation (black dotted line) serves the correct result and the simulated results (red/blue dotted lines) for each algorithm should ideally match the result of the na\u00efve simulation.\nActual runtime (red/blue solid lines) shows the runtime reduction compared to the simulated results and it is better if we get the final result as quickly as possible.\nLeft: optimization of a deterministic multi-fidelity 6D Hartmann function.\nThe simulated results of our wrapper for both MCS and SCS coincide with the correct result while both of them showed significant speedups.\nRight: optimization of a noisy multi-fidelity 6D Hartmann function.\nWhile the simulated result for MCS coincides with the correct result, SCS did not yield the same result.\nMCS could reproduce the result because MCS still uses the same parallel processing procedure and the only change is to wrap the objective function.",
153
+ "url": "http://arxiv.org/html/2403.01888v3/x8.png"
154
+ },
155
+ "6": {
156
+ "figure_path": "2403.01888v3_figure_6.png",
157
+ "caption": "Figure 6: \nThe critical difference diagrams with 1/241superscript241/2^{4}1 / 2 start_POSTSUPERSCRIPT 4 end_POSTSUPERSCRIPT of the runtime budget for random search.\n\u201c[x.xx]\u201d shows the average rank of each optimizer after using 1/241superscript241/2^{4}1 / 2 start_POSTSUPERSCRIPT 4 end_POSTSUPERSCRIPT of the runtime budget for random search.\nFor example, \u201cBOHB [2.90]\u201d means that BOHB achieved the average rank of 2.90 among all the optimizers after running the specified amount of budget.\nP\ud835\udc43Pitalic_P indicates the number of workers used and the red bars connect all the optimizers that show no significant performance difference.\nNote that we used all the results except for JAHS-Bench-201 and LCBench due to the incompatibility between SMAC3, and JAHS-Bench-201 and LCBench.",
158
+ "url": "http://arxiv.org/html/2403.01888v3/x9.png"
159
+ }
160
+ },
+ "validation": true,
+ "references": [
+ {
+ "1": {
+ "title": "Optuna: A next-generation hyperparameter optimization framework.",
+ "author": "Akiba, T., Sano, S., Yanase, T., Ohta, T., and Koyama, M. (2019).",
+ "venue": "In International Conference on Knowledge Discovery & Data Mining.",
+ "url": null
+ }
+ },
+ {
+ "2": {
+ "title": "HPO-B: A large-scale reproducible benchmark for black-box HPO based on OpenML.",
+ "author": "Arango, S., Jomaa, H., Wistuba, M., and Grabocka, J. (2021).",
+ "venue": "arXiv:2106.06257.",
+ "url": null
+ }
+ },
+ {
+ "3": {
+ "title": "DEHB: Evolutionary HyperBand for scalable, robust and efficient hyperparameter optimization.",
+ "author": "Awad, N., Mallik, N., and Hutter, F. (2021).",
+ "venue": "arXiv:2105.09821.",
+ "url": null
+ }
+ },
+ {
+ "4": {
+ "title": "JAHS-Bench-201: A foundation for research on joint architecture and hyperparameter search.",
+ "author": "Bansal, A., Stoll, D., Janowski, M., Zela, A., and Hutter, F. (2022).",
+ "venue": "In Advances in Neural Information Processing Systems Datasets and Benchmarks Track.",
+ "url": null
+ }
+ },
+ {
+ "5": {
+ "title": "Algorithms for hyper-parameter optimization.",
+ "author": "Bergstra, J., Bardenet, R., Bengio, Y., and K\u00e9gl, B. (2011).",
+ "venue": "Advances in Neural Information Processing Systems.",
+ "url": null
+ }
+ },
+ {
+ "6": {
+ "title": "HEBO: Pushing the limits of sample-efficient hyper-parameter optimisation.",
+ "author": "Cowen-Rivers, A., Lyu, W., Tutunov, R., Wang, Z., Grosnit, A., Griffiths, R., Maraval, A., Jianye, H., Wang, J., Peters, J., et al. (2022).",
+ "venue": "Journal of Artificial Intelligence Research, 74.",
+ "url": null
+ }
+ },
+ {
+ "7": {
+ "title": "NAS-Bench-201: Extending the scope of reproducible neural architecture search.",
+ "author": "Dong, X. and Yang, Y. (2020).",
+ "venue": "arXiv:2001.00326.",
+ "url": null
+ }
+ },
+ {
+ "8": {
+ "title": "Efficient benchmarking of hyperparameter optimizers via surrogates.",
+ "author": "Eggensperger, K., Hutter, F., Hoos, H., and Leyton-Brown, K. (2015).",
+ "venue": "In AAAI Conference on Artificial Intelligence.",
+ "url": null
+ }
+ },
+ {
+ "9": {
+ "title": "HPOBench: A collection of reproducible multi-fidelity benchmark problems for HPO.",
+ "author": "Eggensperger, K., M\u00fcller, P., Mallik, N., Feurer, M., Sass, R., Klein, A., Awad, N., Lindauer, M., and Hutter, F. (2021).",
+ "venue": "arXiv:2109.06716.",
+ "url": null
+ }
+ },
+ {
+ "10": {
+ "title": "BOHB: Robust and efficient hyperparameter optimization at scale.",
+ "author": "Falkner, S., Klein, A., and Hutter, F. (2018).",
+ "venue": "In International Conference on Machine Learning.",
+ "url": null
+ }
+ },
+ {
+ "11": {
+ "title": "Non-stochastic best arm identification and hyperparameter optimization.",
+ "author": "Jamieson, K. and Talwalkar, A. (2016).",
+ "venue": "In International Conference on Artificial Intelligence and Statistics.",
+ "url": null
+ }
+ },
+ {
+ "12": {
+ "title": "Multi-fidelity Bayesian optimisation with continuous approximations.",
+ "author": "Kandasamy, K., Dasarathy, G., Schneider, J., and P\u00f3czos, B. (2017).",
+ "venue": "In International Conference on Machine Learning.",
+ "url": null
+ }
+ },
+ {
+ "13": {
+ "title": "Tuning hyperparameters without grad students: Scalable and robust Bayesian optimisation with Dragonfly.",
+ "author": "Kandasamy, K., Vysyaraju, K., Neiswanger, W., Paria, B., Collins, C., Schneider, J., Poczos, B., and Xing, E. (2020).",
+ "venue": "Journal of Machine Learning Research, 21.",
+ "url": null
+ }
+ },
+ {
+ "14": {
+ "title": "Tabular benchmarks for joint architecture and hyperparameter optimization.",
+ "author": "Klein, A. and Hutter, F. (2019).",
+ "venue": "arXiv:1905.04970.",
+ "url": null
+ }
+ },
+ {
+ "15": {
+ "title": "Multi-fidelity methods for optimization: A survey.",
+ "author": "Li, K. and Li, F. (2024).",
+ "venue": "arXiv:2402.09638.",
+ "url": null
+ }
+ },
+ {
+ "16": {
+ "title": "HyperBand: A novel bandit-based approach to hyperparameter optimization.",
+ "author": "Li, L., Jamieson, K., DeSalvo, G., Rostamizadeh, A., and Talwalkar, A. (2017).",
+ "venue": "Journal of Machine Learning Research, 18.",
+ "url": null
+ }
+ },
+ {
+ "17": {
+ "title": "A system for massively parallel hyperparameter tuning.",
+ "author": "Li, L., Jamieson, K., Rostamizadeh, A., Gonina, E., Ben-Tzur, J., Hardt, M., Recht, B., and Talwalkar, A. (2020).",
+ "venue": "Machine Learning and Systems, 2.",
+ "url": null
+ }
+ },
+ {
+ "18": {
+ "title": "Hyper-Tune: towards efficient hyper-parameter tuning at scale.",
+ "author": "Li, Y., Shen, Y., Jiang, H., Zhang, W., Li, J., Liu, J., Zhang, C., and Cui, B. (2022).",
+ "venue": "arXiv:2201.06834.",
+ "url": null
+ }
+ },
+ {
+ "19": {
+ "title": "Tune: A research platform for distributed model selection and training.",
+ "author": "Liaw, R., Liang, E., Nishihara, R., Moritz, P., Gonzalez, J., and Stoica, I. (2018).",
+ "venue": "arXiv:1807.05118.",
+ "url": null
+ }
+ },
+ {
+ "20": {
+ "title": "SMAC3: A versatile Bayesian optimization package for hyperparameter optimization.",
+ "author": "Lindauer, M., Eggensperger, K., Feurer, M., Biedenkapp, A., Deng, D., Benjamins, C., Ruhkopf, T., Sass, R., and Hutter, F. (2022).",
+ "venue": "Journal of Machine Learning Research, 23.",
+ "url": null
+ }
+ },
+ {
+ "21": {
+ "title": "NAS-Bench-Suite: NAS evaluation is (now) surprisingly easy.",
+ "author": "Mehta, Y., White, C., Zela, A., Krishnakumar, A., Zabergja, G., Moradian, S., Safari, M., Yu, K., and Hutter, F. (2022).",
+ "venue": "arXiv:2201.13396.",
+ "url": null
+ }
+ },
+ {
+ "22": {
+ "title": "TrivialAugment: Tuning-free yet state-of-the-art data augmentation.",
+ "author": "M\u00fcller, S. and Hutter, F. (2021).",
+ "venue": "In International Conference on Computer Vision.",
+ "url": null
+ }
+ },
+ {
+ "23": {
+ "title": "Multiobjective tree-structured Parzen estimator.",
+ "author": "Ozaki, Y., Tanigaki, Y., Watanabe, S., Nomura, M., and Onishi, M. (2022).",
+ "venue": "Journal of Artificial Intelligence Research, 73.",
+ "url": null
+ }
+ },
+ {
+ "24": {
+ "title": "Multiobjective tree-structured Parzen estimator for computationally expensive optimization problems.",
+ "author": "Ozaki, Y., Tanigaki, Y., Watanabe, S., and Onishi, M. (2020).",
+ "venue": "In Genetic and Evolutionary Computation Conference.",
+ "url": null
+ }
+ },
+ {
+ "25": {
+ "title": "YAHPO Gym \u2013 an efficient multi-objective multi-fidelity benchmark for hyperparameter optimization.",
+ "author": "Pfisterer, F., Schneider, L., Moosbauer, J., Binder, M., and Bischl, B. (2022).",
+ "venue": "In International Conference on Automated Machine Learning.",
+ "url": null
+ }
+ },
+ {
+ "26": {
+ "title": "Syne Tune: A library for large scale hyperparameter tuning and reproducible research.",
+ "author": "Salinas, D., Seeger, M., Klein, A., Perrone, V., Wistuba, M., and Archambeau, C. (2022).",
+ "venue": "In International Conference on Automated Machine Learning.",
+ "url": null
+ }
+ },
+ {
+ "27": {
+ "title": "On the importance of architectures and hyperparameters for fairness in face recognition.",
+ "author": "Sukthanker, R., Dooley, S., Dickerson, J., White, C., Hutter, F., and Goldblum, M. (2022).",
+ "venue": "arXiv:2210.09943.",
+ "url": null
+ }
+ },
+ {
+ "28": {
+ "title": "On the importance of hyperparameters and data augmentation for self-supervised learning.",
+ "author": "Wagner, D., Ferreira, F., Stoll, D., Schirrmeister, R., M\u00fcller, S., and Hutter, F. (2022).",
+ "venue": "arXiv:2207.07875.",
+ "url": null
+ }
+ },
+ {
+ "29": {
+ "title": "Python wrapper for simulating multi-fidelity optimization on HPO benchmarks without any wait.",
+ "author": "Watanabe, S. (2023a).",
+ "venue": "arXiv:2305.17595.",
+ "url": null
+ }
+ },
+ {
+ "30": {
+ "title": "Tree-structured Parzen estimator: Understanding its algorithm components and their roles for better empirical performance.",
+ "author": "Watanabe, S. (2023b).",
+ "venue": "arXiv:2304.11127.",
+ "url": null
+ }
+ },
+ {
+ "31": {
+ "title": "c-TPE: Generalizing tree-structured Parzen estimator with inequality constraints for continuous and categorical hyperparameter optimization.",
+ "author": "Watanabe, S. and Hutter, F. (2022).",
+ "venue": "arXiv:2211.14411.",
+ "url": null
+ }
+ },
+ {
+ "32": {
+ "title": "c-TPE: tree-structured Parzen estimator with inequality constraints for expensive hyperparameter optimization.",
+ "author": "Watanabe, S. and Hutter, F. (2023).",
+ "venue": "In International Joint Conference on Artificial Intelligence.",
+ "url": null
+ }
+ },
+ {
+ "33": {
+ "title": "On the importance of hyperparameter optimization for model-based reinforcement learning.",
+ "author": "Zhang, B., Rajan, R., Pineda, L., Lambert, N., Biedenkapp, A., Chua, K., Hutter, F., and Calandra, R. (2021).",
+ "venue": "In International Conference on Artificial Intelligence and Statistics.",
+ "url": null
+ }
+ },
+ {
+ "34": {
+ "title": "Auto-PyTorch: Multi-fidelity metalearning for efficient and robust AutoDL.",
+ "author": "Zimmer, L., Lindauer, M., and Hutter, F. (2021).",
+ "venue": "Transactions on Pattern Analysis and Machine Intelligence.",
+ "url": null
+ }
+ }
+ ],
+ "url": "http://arxiv.org/html/2403.01888v3"
+ }
20240819/2403.02889v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240819/2403.04484v2.json ADDED
@@ -0,0 +1,89 @@
+ {
+ "title": "Source Matters: Source Dataset Impact on Model Robustness in Medical Imaging",
+ "abstract": "Transfer learning has become an essential part of medical imaging classification algorithms, often leveraging ImageNet weights. The domain shift from natural to medical images has prompted alternatives such as RadImageNet, often showing comparable classification performance. However, it remains unclear whether the performance gains from transfer learning stem from improved generalization or shortcut learning. To address this, we conceptualize confounders by introducing the Medical Imaging Contextualized Confounder Taxonomy (MICCAT) and investigate a range of confounders across it \u2013 whether synthetic or sampled from the data \u2013 using two public chest X-ray and CT datasets. We show that ImageNet and RadImageNet achieve comparable classification performance, yet ImageNet is much more prone to overfitting to confounders. We recommend that researchers using ImageNet-pretrained models reexamine their model robustness by conducting similar experiments. Our code and experiments are available at https://github.com/DovileDo/source-matters.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "Machine learning models hold immense promise for revolutionizing healthcare. However, their deployment in real-world clinical settings is hindered by various challenges, with one of the most critical being their hidden reliance on spurious features [27 ###reference_b27###]. Recent research has highlighted the detrimental effects of this reliance, including bias against demographic subgroups [2 ###reference_b2###], limited generalization across hospitals [28 ###reference_b28###], and the risk of clinical errors that may harm patients [21 ###reference_b21###].\nDespite transfer learning becoming a cornerstone in medical imaging, its impact on model generalization remains largely unexplored. Pre-training on ImageNet has become a standard practice due to its success in 2D image classification. While some studies have explored alternative medical source datasets for pre-training [3 ###reference_b3###, 19 ###reference_b19###, 29 ###reference_b29###, 16 ###reference_b16###], ImageNet continues to serve as a strong baseline.\nRecent literature suggests that the size of the source dataset may matter more than its domain or composition [22 ###reference_b22###, 9 ###reference_b9###]. However, [15 ###reference_b15###] demonstrated performance improvements through source dataset pruning. In this context, we argue that cross-domain transfer can be problematic, especially when source dataset selection is solely based on classification performance, as it may inadvertently lead to shortcut learning rather than genuine improvements in generalization. Shortcut learning can be considered antithetical to generalization and robustness as it is not a failure to generalize per se, but rather a failure to generalize in the intended direction [10 ###reference_b10###].\nIn this paper, we investigate how the domain of the source dataset affects model generalization. First, we conceptualize confounding factors in medical images by introducing the Medical Imaging Contextualized Confounder Taxonomy (MICCAT) and generate synthetic or sample real-world confounders from MICCAT, commonly found in chest X-rays and CT scans, to systematically assess model robustness. Second, we compare models pre-trained on natural (ImageNet) and medical (RadImageNet) datasets across X-ray and CT tasks and show substantial differences in robustness to shortcut learning despite comparable predictive performance. While transfer learning has been observed to enhance model robustness [13 ###reference_b13###], our results suggest that it may not hold true when transferring across domains, cautioning against using ImageNet pre-trained models in medical contexts due to their susceptibility to shortcut learning. Furthermore, our findings highlight the limitations of conventional performance metrics based on i.i.d. datasets, which fail to discern between genuine improvements in generalization and shortcut learning. Thus, we advocate for a more nuanced evaluation of transfer learning effectiveness to ensure the reliability and safety of machine learning applications in clinical settings."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "Method",
+ "text": "###figure_1###"
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "MICCAT: towards a standardized taxonomy for medical imaging confounders",
+ "text": "To the best of our knowledge, there is no standardized taxonomy for classifying potential confounders in medical images. Thus, to better structure our robustness analysis, we propose a new taxonomy: Medical Imaging Contextualized Confounder Taxonomy (MICCAT).\nPrevious work has shown that standard demographic attributes such as sex, age, or ethnicity may act as confounders, leading to shortcut learning and potentially disadvantaging historically underserved subgroups [2 ###reference_b2###]. However, solely focusing on standard protected demographic attributes may overlook other specific factors related to clusters of patients for which the systems tend to fail [8 ###reference_b8###]. In MICCAT, we identify these as \u2018contextualized confounders\u2019, as they are often domain or context-specific, associated with particular image modalities, organs, hospitalization conditions, or diseases.\nFirst, MICCAT differentiates between patient level and environment level confounders. At the patient level, we make a distinction between standard demographic attributes (e.g., sex, age, race) and contextualized anatomical confounders, which arise from inherent anatomical properties of the organs and human body or disease variations in images. This distinction is crucial as standard demographic attributes often serve as proxies for underlying causes of learned shortcuts. For instance, ethnicity may proxy skin color in dermatoscopic images. Identifying the true shortcut cause allows for more targeted interventions to mitigate biases. We define the concept of environment level confounders, which stem from contextualized external or imaging confounders. The former include physical or virtual elements in images due to external factors like hospitalization devices or image tags, while the latter include characteristics related to the imaging modality itself, such as noise, motion blur, or differences in intensities due to equipment or acquisition parameters. Fig. 1 ###reference_### illustrates this taxonomy with examples for each category.\nConfounders studied in this paper. We explore the MICCAT by investigating four examples of confounders, highlighted by a black outline in Fig. 1 ###reference_###:\nAn external confounder (a tag) placed in the upper left corner of the image, representing confounding features introduced by various imaging devices across or within hospitals (Fig. 2(a) ###reference_sf1###).\nTwo typical imaging confounders: denoising (Fig. 2(c) ###reference_sf3###), widely used by various vendors to reduce noise for enhanced readability [11 ###reference_b11###], and Poisson noise (Fig. 2(d) ###reference_sf4###), originating from quantum statistics of photons, which cannot be mitigated through hardware engineering, unlike noise introduced by circuit-related artifacts [26 ###reference_b26###].\nA patient-level confounder where we use patient gender, which is easily accessible in metadata, as a proxy for a broader spectrum of anatomical confounders. We use the same term for this variable as in the original dataset."
+ },
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "Experimental Design",
+ "text": "We investigate the impact of source dataset domain on model generalization by comparing ImageNet [6 ###reference_b6###] and RadImageNet [19 ###reference_b19###] models, which are fine-tuned using binary prediction tasks for findings in open-access chest X-ray (NIH CXR14 [25 ###reference_b25###]) and CT (LIDC-IDRI [1 ###reference_b1###]) datasets curated to include systematically controlled confounders. NIH CXR14 is used to represent cross-domain transfer for both ImageNet and RadImageNet, as X-ray is not included in RadImageNet, while LIDC-IDRI serves as an in-domain example for RadImageNet and a cross-domain example for ImageNet.\n###figure_2### ###figure_3### ###figure_4### ###figure_5### Confounder generation.\nPatient gender is sampled to correlate \u2018Female\u2019 with the label.\nA tag is placed further away from the edges (starting at px in the original image of px), to ensure it remains intact during training despite augmentations applied (Fig. 2(a) ###reference_sf1###).\nThe simplest method for Denoising is applying low-pass filtering which entails converting the input image from the spatial to the frequency domain using Discrete Fourier Transform (DFT), followed by element-wise multiplication with the low-pass filter to generate the filtered image:\nwhere represents the distance from the origin in the frequency domain, and is the specified cutoff frequency. In our experiments, we set px. Subsequently, the high-frequency suppressed image is reconstructed in the spatial domain via the Inverse Discrete Fourier Transform (IDFT), resulting in a smoothing effect (see Fig. 2(c) ###reference_sf3###).\nPoisson noise originating from quantum statistics of photons is formulated as a Poisson random process:\nwhere represents Poisson noise, which notably affects image quality under low-dose conditions (e.g., low-dose CT and X-ray screenings), while the linear recording is obtained via the reversed conversion from attenuation given the prior information of the source intensity , where is the pixel values of projections, obtained from the image space as described in [17 ###reference_b17###].\nTo simulate low-dose screening, we add Poisson noise to the image (Fig. 2(d) ###reference_sf4###) by adjusting the parameter to control noise levels. We aim for minimal noise, setting after visually examining the noise to ensure it remains imperceptible.\nEvaluation.\nTo investigate shortcut learning systematically, we construct development datasets for fine-tuning, focusing on a binary classification task. We introduce previously mentioned confounders (e.g., \u2018Female\u2019) into the positive class with a controlled probability to deliberately influence the learning process, replicating scenarios where real-world data may contain confounders. To assess the presence of shortcut learning, we evaluate the fine-tuned models with independently and identically distributed (i.i.d.) as well as out-of-distribution (o.o.d.) test sets. In the o.o.d. set, we introduce the same artifact used during fine-tuning to the negative class with , such that the models are tested on instances where artifacts appear in the opposite class compared to what they encountered during training. 
We evaluate the fine-tuned models using the AUC (area under the receiver operating characteristic curve).\n# images in\n% split\n% class split\nImage\nBatch\n\nTask\nConfounder\ntest/dev(trainval)\ntrain/val\npos/neg\nsize\nsize\n\nLung mass (NIH CXR14 [25 ###reference_b25###])\nT, D, N\n83/248\n90/10\n30/70\n512 512\n32\n\nLung mass (LIDC-IDRI [1 ###reference_b1###])\nT, D, N\n1710/500\n80/20\n50/50\n362 362\n32\n\nAtelectasis (NIH CXR14 [25 ###reference_b25###])\nGender\n400/400\n85/15\n50/50\n256 256\n64\nMedical targets. We create separate binary classification tasks for lung mass detection using subsets of images sourced from two datasets: the chest X-ray NIH CXR14 [25 ###reference_b25###] subset annotated by clinicians [20 ###reference_b20###], and the chest CT dataset LIDC-IDRI [1 ###reference_b1###] annotated by four radiologists. From the latter, we sample paired positive and negative 2D slices from the original 3D scans using nodule ROI annotations, representing any kind of lesions and their nearby slices without remarkable findings. We include synthetic artifacts (a tag, denoising, and Poisson noise) in both tasks. For the case where patient gender serves as the confounding feature, we sample posterior to anterior (PA) images from NIH CXR14 to construct a binary classification task for atelectasis. We deliberately limit the size of our development datasets, encompassing both balanced and unbalanced class distributions to cover a spectrum of clinical scenarios. Data splits for training, validation, and testing preserve class distribution and are stratified by patient. Further details are available in Table 1 ###reference_###.\nFine-tuning details.\nWe use ResNet50 [12 ###reference_b12###], InceptionV3 [24 ###reference_b24###], InceptionResNetV2 [23 ###reference_b23###], and DenseNet121 [14 ###reference_b14###] as the backbones with average pooling and a dropout layer (0.5 probability). The models are trained using cross-entropy loss with Adam optimizer (learning rate: ) for a maximum of 200 epochs with early stopping after 30 epochs of no improvement in validation loss (AUC for the balanced tasks). This configuration, established during early tuning, proved flexible enough to accommodate different initializations and target datasets. During training, we apply image augmentations including random rotation (up to 10 degrees), width and height shifts, shear, and zoom, all set to 0.1, with a fill mode set to \u2018nearest\u2019. Models were implemented using Keras [4 ###reference_b4###] library and fine-tuned on an NVIDIA Tesla A100 GPU card."
+ },
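The two imaging confounders described in the Experimental Design text above follow directly from Eqs. 1 and 2. Below is a minimal Python sketch of both: the ideal low-pass "denoising" filter in the DFT domain and the Poisson noise injection on simulated photon counts. Function names and the exact intensity-to-attenuation conversion are simplifying assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def low_pass_denoise(img: np.ndarray, d0: float = 500.0) -> np.ndarray:
    """Zero out frequencies beyond the cutoff d0 (Eq. 1) and invert the DFT."""
    f = np.fft.fftshift(np.fft.fft2(img))
    rows, cols = img.shape
    y, x = np.ogrid[:rows, :cols]
    dist = np.hypot(y - rows / 2, x - cols / 2)   # distance from the origin
    f[dist > d0] = 0.0                            # ideal low-pass filter
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

def add_poisson_noise(img: np.ndarray, n0: float = 2e7) -> np.ndarray:
    """Quantum (Poisson) noise on the photon counts behind the image (Eq. 2).

    `img` is assumed to hold attenuation values in [0, 1]; the conversion
    below is a simplification of the linear recording described in the text.
    """
    counts = n0 * np.exp(-img)             # expected photon counts
    noisy = np.random.poisson(counts)      # Poisson-distributed recording
    noisy = np.maximum(noisy, 1)           # avoid log(0)
    return -np.log(noisy / n0)             # back to attenuation values

# Example: inject one confounder or the other into a (stand-in) X-ray.
xray = np.random.rand(512, 512)
smoothed = low_pass_denoise(xray, d0=500.0)   # "denoising" confounder
grainy = add_poisson_noise(xray, n0=2e7)      # "Poisson noise" confounder
```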
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Results and Discussion",
+ "text": "###figure_6### RadImageNet is robust to shortcut learning. Fig. 3 ###reference_### shows that ImageNet and RadImageNet achieve comparable AUC on i.i.d. test set, however, when subjected to o.o.d. test set, notable differences emerge. Specifically, ImageNet\u2019s o.o.d. performance on X-rays, confounded by tag, denoising, and patient gender, drops more compared to RadImageNet, indicating ImageNet\u2019s higher reliance on spurious correlations. This could be because certain features, for instance, a tag (letters), may serve as a discriminative feature in ImageNet, e.g., for the computer keyboard class. However, RadImageNet is invariant to such features as they are not consistently associated with specific labels across different classes, and this invariance transfers to the target task. We observed similar trends in the CT dataset, with the o.o.d. AUC decreasing from 0.84 to 0.02 for ImageNet, and to 0.22 for RadImageNet (for tag); and from 0.7 to 0.01 for ImageNet, and from 0.83 only to 0.6 for RadImageNet (for denoising). It is worth noting that RadImageNet models tend to train longer, averaging 141 epochs across all experiments, compared to 72 epochs for ImageNet models.\n###figure_7### Although tag and denoising are designed to replicate real-world artifacts, they lack the diversity found in real-world scenarios. Patient gender presents a more realistic confounder. Here, the performance gap between ImageNet and RadImageNet is smaller (by 0.12 on average for ) yet remains statistically significant (permutation test, , for ). This suggests that RadImageNet\u2019s resilience to shortcuts extends to more realistic confounder variations, further emphasizing its robustness in medical image classification. Here we only provide results for ResNet50,\nhowever, we observed similar results for InceptionV3, InceptionRes-NetV2, and DenseNet121.\nRandom initialization appears robust to shortcut learning, with consistent o.o.d. performance as increases. However, this is mainly due to the unbalanced class distribution in the lung mass prediction task within the NIH CXR14 dataset, where randomly initialized models tend to predict the overrepresented negative class (). Conversely, in the case of a balanced class distribution in the CT target dataset, the o.o.d. performance of randomly initialized models deteriorates to a similar degree as that of ImageNet-initialized models.\nShortcuts come in all shapes and sizes. ImageNet and RadImageNet both heavily rely on Poisson noise in X-rays (Fig. 4 ###reference_###, upper left) but RadImageNet shows greater robustness to noise in CT scans compared to ImageNet (Fig. 4 ###reference_###, lower left). It is important to note that Poisson noise manifests differently in X-rays and CT scans. In X-rays, Poisson noise introduces graininess characterized by random and pixel-wise independent variations, while in CT scans, it appears as streak artifacts structurally correlated to projections and thus is not pixel-wise independent in the image domain.\nTo understand the impact of this difference, we directly introduce Poisson noise in the image domain for CT scans, mimicking the pixel-wise independence seen in X-rays. However, since CT scans inherently contain noise, this introduces a confounding feature of high versus low levels of noise, as opposed to the original confounder of noise versus no noise.\nTo simulate a corresponding scenario in X-rays, we generate two levels of Poisson noise: for the positives and for the negatives (reversed for the o.o.d. 
test set). Both models show a smaller drop in o.o.d. AUC across modalities, indicating a reduced reliance on the noise shortcut (Fig. 4 ###reference_###, right). This suggests that discerning between high and low noise levels is a more challenging task than simply detecting the presence of noise.\nRadImageNet maintains its robustness in CT scans, while in X-rays, RadImageNet relies on noise to a similar extent as ImageNet. This may be explained by the absence of X-ray images in RadImageNet, leading to a lack of robust X-ray representations that would resist pixel-wise independent noise \u2013 a phenomenon less common in CT, MR, and ultrasound, modalities included in RadImageNet. This highlights that even transferring from a medical source of a different modality may lead to overfitting on confounders.\nWhile our findings generalize over the four tested CNNs, we did not investigate other architectures, such as transformers, due to CNNs competitive performance [7 ###reference_b7###]. Although we expect that our observations might hold true for transformers, given their tendency to reuse features to an even greater extent than CNNs [18 ###reference_b18###], we defer experimental verification to future research.\nIn our exploration of the MICCAT, we found that RadImageNet models are generally more robust to shortcuts. However, there is some variability within the category of imaging confounders, and the importance of the source domain in anatomical confounders seems to be lower. Expanding the scope to include other confounders would offer a more comprehensive understanding of the taxonomy landscape and provide insights into the nuances within each category, facilitating better-informed source dataset selection and evaluation strategies. MICCAT paves the way for a more systematic approach to addressing shortcut learning in medical imaging in general by providing a framework for thorough confounder curation and enabling a comprehensive analysis."
+ },
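The statistical claim in the Results text above rests on a permutation test over the o.o.d. performance gap. A minimal sketch of a paired sign-flip permutation test on fold-wise AUCs; the AUC values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical o.o.d. AUCs over five cross-validation folds.
auc_imagenet = np.array([0.55, 0.58, 0.52, 0.60, 0.56])
auc_radimagenet = np.array([0.70, 0.68, 0.73, 0.66, 0.71])

observed = auc_radimagenet.mean() - auc_imagenet.mean()

# Permutation test: randomly swap fold-wise pairs and recompute the gap.
n_perm = 10_000
count = 0
for _ in range(n_perm):
    swap = rng.random(len(auc_imagenet)) < 0.5
    a = np.where(swap, auc_imagenet, auc_radimagenet)
    b = np.where(swap, auc_radimagenet, auc_imagenet)
    if a.mean() - b.mean() >= observed:
        count += 1
p_value = count / n_perm
print(f"observed gap = {observed:.3f}, p = {p_value:.4f}")
```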
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "Conclusion",
+ "text": "Our study sheds light on the critical role of the source dataset domain in generalization in medical imaging tasks. By systematically investigating confounders typically found in X-rays and CT scans, we uncovered substantial differences in robustness to shortcuts between models pre-trained on natural and medical image datasets. Our findings caution against the blind application of transfer learning across domains. We advocate for a more nuanced evaluation to improve the reliability and safety of machine learning applications in clinical settings.\nProspect of application. Transfer learning plays a fundamental role in machine learning applications for medical imaging. Our study emphasizes the often underestimated importance of selecting pre-trained models, urging a necessary reevaluation and deeper investigation into their use in clinical practice."
+ }
+ ],
+ "appendix": [],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S2.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S2.T1.9.1.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"S2.T1.10.2\" style=\"font-size:90%;\">Target datasets used for fine-tuning. T: <span class=\"ltx_text ltx_font_italic\" id=\"S2.T1.10.2.1\">tag</span>, D: <span class=\"ltx_text ltx_font_italic\" id=\"S2.T1.10.2.2\">denoising</span>, N: <span class=\"ltx_text ltx_font_italic\" id=\"S2.T1.10.2.3\">noise</span>.</span></figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S2.T1.4\" style=\"width:541.0pt;height:91pt;vertical-align:-1.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(0.0pt,0.0pt) scale(1,1) ;\">\n<p class=\"ltx_p\" id=\"S2.T1.4.4\"><span class=\"ltx_text\" id=\"S2.T1.4.4.4\">\n<span class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S2.T1.4.4.4.4\">\n<span class=\"ltx_tbody\">\n<span class=\"ltx_tr\" id=\"S2.T1.4.4.4.4.5.1\">\n<span class=\"ltx_td ltx_border_t\" id=\"S2.T1.4.4.4.4.5.1.1\" style=\"padding-left:5.0pt;padding-right:5.0pt;\"></span>\n<span class=\"ltx_td ltx_th ltx_th_column ltx_border_t\" id=\"S2.T1.4.4.4.4.5.1.2\" style=\"padding-left:5.0pt;padding-right:5.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S2.T1.4.4.4.4.5.1.3\" style=\"padding-left:5.0pt;padding-right:5.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.4.4.4.4.5.1.3.1\"># images in</span></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S2.T1.4.4.4.4.5.1.4\" style=\"padding-left:5.0pt;padding-right:5.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.4.4.4.4.5.1.4.1\">% split</span></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S2.T1.4.4.4.4.5.1.5\" style=\"padding-left:5.0pt;padding-right:5.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.4.4.4.4.5.1.5.1\">% class split</span></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S2.T1.4.4.4.4.5.1.6\" style=\"padding-left:5.0pt;padding-right:5.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.4.4.4.4.5.1.6.1\">Image</span></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S2.T1.4.4.4.4.5.1.7\" style=\"padding-left:5.0pt;padding-right:5.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.4.4.4.4.5.1.7.1\">Batch</span></span></span>\n<span class=\"ltx_tr\" id=\"S2.T1.1.1.1.1.1\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_column\" id=\"S2.T1.1.1.1.1.1.2\" style=\"padding-left:5.0pt;padding-right:5.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.1.1.1.1.1.2.1\">Task</span></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S2.T1.1.1.1.1.1.3\" style=\"padding-left:5.0pt;padding-right:5.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.1.1.1.1.1.3.1\">Confounder</span></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S2.T1.1.1.1.1.1.1\" style=\"padding-left:5.0pt;padding-right:5.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.1.1.1.1.1.1.1\">test/dev(trainval)</span></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S2.T1.1.1.1.1.1.4\" style=\"padding-left:5.0pt;padding-right:5.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.1.1.1.1.1.4.1\">train/val</span></span>\n<span 
class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S2.T1.1.1.1.1.1.5\" style=\"padding-left:5.0pt;padding-right:5.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.1.1.1.1.1.5.1\">pos/neg</span></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S2.T1.1.1.1.1.1.6\" style=\"padding-left:5.0pt;padding-right:5.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.1.1.1.1.1.6.1\">size</span></span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S2.T1.1.1.1.1.1.7\" style=\"padding-left:5.0pt;padding-right:5.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.1.1.1.1.1.7.1\">size</span></span></span>\n<span class=\"ltx_tr\" id=\"S2.T1.2.2.2.2.2\">\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.2.2.2.2.2.2\" style=\"padding-left:5.0pt;padding-right:5.0pt;\">Lung mass (NIH CXR14\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2403.04484v2#bib.bib25\" title=\"\">25 ###reference_b25###</a>]</cite>)</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.2.2.2.2.2.3\" style=\"padding-left:5.0pt;padding-right:5.0pt;\">T, D, N</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.2.2.2.2.2.4\" style=\"padding-left:5.0pt;padding-right:5.0pt;\">83/248</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.2.2.2.2.2.5\" style=\"padding-left:5.0pt;padding-right:5.0pt;\">90/10</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.2.2.2.2.2.6\" style=\"padding-left:5.0pt;padding-right:5.0pt;\">30/70</span>\n<span class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S2.T1.2.2.2.2.2.1\" style=\"padding-left:5.0pt;padding-right:5.0pt;\">512 512</span>\n<span class=\"ltx_td ltx_nopad_r ltx_align_right ltx_border_t\" id=\"S2.T1.2.2.2.2.2.7\" style=\"padding-left:5.0pt;padding-right:5.0pt;\">32</span></span>\n<span class=\"ltx_tr\" id=\"S2.T1.3.3.3.3.3\">\n<span class=\"ltx_td ltx_align_left\" id=\"S2.T1.3.3.3.3.3.2\" style=\"padding-left:5.0pt;padding-right:5.0pt;\">Lung mass (LIDC-IDRI\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2403.04484v2#bib.bib1\" title=\"\">1 ###reference_b1###</a>]</cite>)</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S2.T1.3.3.3.3.3.3\" style=\"padding-left:5.0pt;padding-right:5.0pt;\">T, D, N</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S2.T1.3.3.3.3.3.4\" style=\"padding-left:5.0pt;padding-right:5.0pt;\">1710/500</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S2.T1.3.3.3.3.3.5\" style=\"padding-left:5.0pt;padding-right:5.0pt;\">80/20</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S2.T1.3.3.3.3.3.6\" style=\"padding-left:5.0pt;padding-right:5.0pt;\">50/50</span>\n<span class=\"ltx_td ltx_align_right\" id=\"S2.T1.3.3.3.3.3.1\" style=\"padding-left:5.0pt;padding-right:5.0pt;\">362 362</span>\n<span class=\"ltx_td ltx_nopad_r ltx_align_right\" id=\"S2.T1.3.3.3.3.3.7\" style=\"padding-left:5.0pt;padding-right:5.0pt;\">32</span></span>\n<span class=\"ltx_tr\" id=\"S2.T1.4.4.4.4.4\">\n<span class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S2.T1.4.4.4.4.4.2\" style=\"padding-left:5.0pt;padding-right:5.0pt;\">Atelectasis (NIH CXR14\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2403.04484v2#bib.bib25\" title=\"\">25 ###reference_b25###</a>]</cite>)</span>\n<span class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S2.T1.4.4.4.4.4.3\" 
style=\"padding-left:5.0pt;padding-right:5.0pt;\">Gender</span>\n<span class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S2.T1.4.4.4.4.4.4\" style=\"padding-left:5.0pt;padding-right:5.0pt;\">400/400</span>\n<span class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S2.T1.4.4.4.4.4.5\" style=\"padding-left:5.0pt;padding-right:5.0pt;\">85/15</span>\n<span class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S2.T1.4.4.4.4.4.6\" style=\"padding-left:5.0pt;padding-right:5.0pt;\">50/50</span>\n<span class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S2.T1.4.4.4.4.4.1\" style=\"padding-left:5.0pt;padding-right:5.0pt;\">256 256</span>\n<span class=\"ltx_td ltx_nopad_r ltx_align_right ltx_border_b\" id=\"S2.T1.4.4.4.4.4.7\" style=\"padding-left:5.0pt;padding-right:5.0pt;\">64</span></span>\n</span>\n</span></span></p>\n</span></div>\n</figure>",
+ "capture": "Table 1: Target datasets used for fine-tuning. T: tag, D: denoising, N: noise."
+ }
+ },
+ "image_paths": {
+ "1": {
+ "figure_path": "2403.04484v2_figure_1.png",
+ "caption": "Figure 1: MICCAT: Medical Imaging Contextualized Confounder Taxonomy. Instances of confounders investigated in this paper are highlighted in bold.",
+ "url": "http://arxiv.org/html/2403.04484v2/x1.png"
+ },
+ "2(a)": {
+ "figure_path": "2403.04484v2_figure_2(a).png",
+ "caption": "(a)\nFigure 2: Synthetic artifacts: (a) A tag with a red arrow for reference, (b) a zoomed-in view of the original image, (c) Denoising by low-pass filter with cutoff frequency (see Eq. 1) of D0=200subscript\ud835\udc370200D_{0}=200italic_D start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT = 200px, and (d) Poisson noise with N0=2\u00d7106subscript\ud835\udc4102superscript106N_{0}=2\\times 10^{6}italic_N start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT = 2 \u00d7 10 start_POSTSUPERSCRIPT 6 end_POSTSUPERSCRIPT (see Eq. 2). The parameters used here are to emphasize subtle local variations such as the smoothing effect of the low-pass filter and the graininess introduced by the Poisson noise. For our experiments, we use D0=500subscript\ud835\udc370500D_{0}=500italic_D start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT = 500px and N0=2\u00d7107subscript\ud835\udc4102superscript107N_{0}=2\\times 10^{7}italic_N start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT = 2 \u00d7 10 start_POSTSUPERSCRIPT 7 end_POSTSUPERSCRIPT which are imperceptible.",
+ "url": "http://arxiv.org/html/2403.04484v2/extracted/5800188/imgs/R.png"
+ },
+ "2(b)": {
+ "figure_path": "2403.04484v2_figure_2(b).png",
+ "caption": "(b)\nFigure 2: Synthetic artifacts: (a) A tag with a red arrow for reference, (b) a zoomed-in view of the original image, (c) Denoising by low-pass filter with cutoff frequency (see Eq. 1) of D0=200subscript\ud835\udc370200D_{0}=200italic_D start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT = 200px, and (d) Poisson noise with N0=2\u00d7106subscript\ud835\udc4102superscript106N_{0}=2\\times 10^{6}italic_N start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT = 2 \u00d7 10 start_POSTSUPERSCRIPT 6 end_POSTSUPERSCRIPT (see Eq. 2). The parameters used here are to emphasize subtle local variations such as the smoothing effect of the low-pass filter and the graininess introduced by the Poisson noise. For our experiments, we use D0=500subscript\ud835\udc370500D_{0}=500italic_D start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT = 500px and N0=2\u00d7107subscript\ud835\udc4102superscript107N_{0}=2\\times 10^{7}italic_N start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT = 2 \u00d7 10 start_POSTSUPERSCRIPT 7 end_POSTSUPERSCRIPT which are imperceptible.",
+ "url": "http://arxiv.org/html/2403.04484v2/extracted/5800188/imgs/original_crop.png"
+ },
+ "2(c)": {
+ "figure_path": "2403.04484v2_figure_2(c).png",
+ "caption": "(c)\nFigure 2: Synthetic artifacts: (a) A tag with a red arrow for reference, (b) a zoomed-in view of the original image, (c) Denoising by low-pass filter with cutoff frequency (see Eq. 1) of D0=200subscript\ud835\udc370200D_{0}=200italic_D start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT = 200px, and (d) Poisson noise with N0=2\u00d7106subscript\ud835\udc4102superscript106N_{0}=2\\times 10^{6}italic_N start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT = 2 \u00d7 10 start_POSTSUPERSCRIPT 6 end_POSTSUPERSCRIPT (see Eq. 2). The parameters used here are to emphasize subtle local variations such as the smoothing effect of the low-pass filter and the graininess introduced by the Poisson noise. For our experiments, we use D0=500subscript\ud835\udc370500D_{0}=500italic_D start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT = 500px and N0=2\u00d7107subscript\ud835\udc4102superscript107N_{0}=2\\times 10^{7}italic_N start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT = 2 \u00d7 10 start_POSTSUPERSCRIPT 7 end_POSTSUPERSCRIPT which are imperceptible.",
+ "url": "http://arxiv.org/html/2403.04484v2/extracted/5800188/imgs/low_crop.png"
+ },
+ "2(d)": {
+ "figure_path": "2403.04484v2_figure_2(d).png",
+ "caption": "(d)\nFigure 2: Synthetic artifacts: (a) A tag with a red arrow for reference, (b) a zoomed-in view of the original image, (c) Denoising by low-pass filter with cutoff frequency (see Eq. 1) of D0=200subscript\ud835\udc370200D_{0}=200italic_D start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT = 200px, and (d) Poisson noise with N0=2\u00d7106subscript\ud835\udc4102superscript106N_{0}=2\\times 10^{6}italic_N start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT = 2 \u00d7 10 start_POSTSUPERSCRIPT 6 end_POSTSUPERSCRIPT (see Eq. 2). The parameters used here are to emphasize subtle local variations such as the smoothing effect of the low-pass filter and the graininess introduced by the Poisson noise. For our experiments, we use D0=500subscript\ud835\udc370500D_{0}=500italic_D start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT = 500px and N0=2\u00d7107subscript\ud835\udc4102superscript107N_{0}=2\\times 10^{7}italic_N start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT = 2 \u00d7 10 start_POSTSUPERSCRIPT 7 end_POSTSUPERSCRIPT which are imperceptible.",
+ "url": "http://arxiv.org/html/2403.04484v2/extracted/5800188/imgs/noise_crop.png"
+ },
+ "3": {
+ "figure_path": "2403.04484v2_figure_3.png",
+ "caption": "Figure 3: Mean AUC across five-fold cross-validation with 95% CI for lung mass (left and middle) and atelectasis (right) prediction in chest X-rays. Increasing correlation between artifact (tag, denoising, gender) and the label leads to lower o.o.d. AUC (on o.o.d. test set as described in Sec. 2.2) (top row), while i.i.d. AUC increases (bottom row). RadImageNet pretraining shows less degradation in o.o.d. AUC compared to ImageNet pretraining, suggesting that ImageNet may over-rely on spurious correlations in the target dataset. The grey dotted line is the SOTA result for lung mass and atelectasis in NIH CXR14 reported by [5].",
+ "url": "http://arxiv.org/html/2403.04484v2/x2.png"
+ },
+ "4": {
+ "figure_path": "2403.04484v2_figure_4.png",
+ "caption": "Figure 4: O.o.d. AUC (mean and 95% CI across five-folds) for lung mass prediction in chest X-rays and CTs. In X-rays (top), both ImageNet and RadImageNet show similar reliance on Poisson noise. However, RadImageNet is more robust in CT scans (bottom). When the confounder is high vs low noise, both ImageNet and RadImageNet are less sensitive (right), compared to noise vs no noise (left).",
+ "url": "http://arxiv.org/html/2403.04484v2/x3.png"
+ }
+ },
+ "validation": true,
+ "references": [],
+ "url": "http://arxiv.org/html/2403.04484v2"
+ }
20240819/2403.06906v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240819/2403.07162v3.json ADDED
@@ -0,0 +1,397 @@
+ {
+ "title": "Digital Twin Evolution for Sustainable Smart Ecosystems",
+ "abstract": "Smart ecosystems are the drivers of modern society. They control infrastructures of socio-techno-economic importance, ensuring their stable and sustainable operation.\nSmart ecosystems are governed by digital twins\u2014real-time virtual representations of physical infrastructure. To support the open-ended and reactive traits of smart ecosystems, digital twins need to be able to evolve in reaction to changing conditions.\nHowever, digital twin evolution is challenged by the intertwined nature of physical and software components, and their individual evolution.\nAs a consequence, software practitioners find a substantial body of knowledge on software evolution hard to apply in digital twin evolution scenarios and a lack of knowledge on the digital twin evolution itself.\nThe aim of this paper, consequently, is to provide software practitioners with tangible leads toward understanding and managing the evolutionary concerns of digital twins.\nWe use four distinct digital twin evolution scenarios, contextualized in a citizen energy community case to illustrate the usage of the 7R taxonomy of digital twin evolution.\nBy that, we aim to bridge a significant gap in leveraging software engineering practices to develop robust smart ecosystems.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "1. Introduction",
+ "text": "Our modern world runs by smart ecosystems\u2014large-scale, decentralized systems, capable of self-organization and self-optimization (Jensen, 2020 ###reference_b14###).\nExamples of smart ecosystems include smart cities (Graciano Neto and Kassab, 2023 ###reference_b9###), smart energy communities (Gramelsberger et al., 2023 ###reference_b10###), and smart grids with renewable components (Hasan et al., 2023 ###reference_b11###).\nMuch like natural ecosystems, smart ecosystems are open-ended and need to allow for continuous changes in their structure and behavior. These evolutionary dynamics, in turn, challenge the technical sustainability (Penzenstadler et al., 2018 ###reference_b19###) of smart ecosystems, i.e., their ability to maintain the quality of service over a prolonged period of time (Hilty et al., 2006 ###reference_b13###).\nTo improve the sustainability of smart ecosystems, proper evolution mechanisms are required to be put in place. While evolution has a substantial body of knowledge in model-driven software engineering (Di Ruscio et al., 2011 ###reference_b7###; Hebig et al., 2017 ###reference_b12###), hybrid cyber-physical components of smart ecosystems, such as digital twins (Kritzinger et al., 2018 ###reference_b15###), give rise to challenges traditional software engineering techniques fall short of addressing.\nDigital twins are real-time, virtual representations of physical system components (Kritzinger et al., 2018 ###reference_b15###).\nThey govern smart ecosystems and provide essential mechanisms and services to assess, simulate, and control the physical infrastructure of smart ecosystems for optimal behavior (Michael et al., 2024 ###reference_b18###). Thus, to ensure the technical sustainability of smart ecosystems, first, the technical sustainability of digital twins must be managed.\nChanges in digital twins boil down to a heterogeneous set of components, including software, hardware, middleware, and IoT devices. The interdependency of concerns severely hinders the applicability of software engineering techniques and even challenges the very understanding of evolutionary needs.\nTo help software engineers apply their expertise in digital twin evolution scenarios, we provide a case-based demonstration of the 7R taxonomy in this paper. The 7R taxonomy of digital twin evolution (David and Bork, 2023 ###reference_b5###) defines seven elementary activities to support the technical sustainability of digital twins.\nThis paper is structured as follows.\nIn Sec. 2 ###reference_###, we elaborate on a case of an evolving smart ecosystem, driven by digital twin evolution.\nIn Sec. 3 ###reference_###, we recommend action points to apply the 7R taxonomy.\nIn Sec. 4 ###reference_###, we draw the conclusions.\nWe provide background information about key concepts in sidebars.\ninnertopmargin=4pt,\nlinewidth=0pt,\nframetitleaboveskip=-frametitlealignment=,\nbackgroundcolor=sidebarbgcolor\n\n{mdframed}\n\n\nThe 7R taxonomy of digital twin evolution\n\nTaxonomies are a form of classification, aiming to systematically organize knowledge of a specific research field or problem. Classification of objects helps to understand the specific field and systematically treat a particular problem. 
The 7R taxonomy of digital twin evolution (David and Bork, 2023 ###reference_b5###) identifies seven areas of action to react to the evolutionary needs of digital twins.\n\n\n\n\n\n\n\n\n\nRe-calibration of a model parameter is required when the model is not a faithful representation of the physical twin anymore and simulations become incorrect, leading to imprecise assessment, analysis, and control of the physical twin.\nRe-modeling the physical twin might be required in more elaborate cases, e.g., when the model does not reflect the real phenomenon properly. Specific software engineering tasks, such as re-architecting re-packaging a software component might be considered as refinements of this R-imperative.\nReconciliation of data, i.e., updating the data schema and migrating data might be needed when data discrepancies occur, and data might become inconsistent.\n\n\nRe-collecting data is needed when events are missed due to transient errors. It might necessitate reconciliation, re-modeling, and re-calibration.\nRe-deploying the evolved digital twin is needed after at least one of the previous steps has been taken.\nRe-configuration of the physical twin is required after the digital twin has evolved. Re-configuration entails a wide range of potential actions, from changing the settings of a physical component to the installation of new ones.\nReuse of the large amounts of data, knowledge, and know-how that have been amassed during the operation of the digital twin is paramount in ensuring cost-efficient digital twin projects.\n###figure_1###"
+ },
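To make the sidebar's taxonomy concrete for software practitioners, the seven R-imperatives can be captured as a simple data structure; this is an illustrative sketch, not an artifact of the paper:

```python
from enum import Enum

class RImperative(Enum):
    """The 7R taxonomy of digital twin evolution (David and Bork, 2023)."""
    RE_CALIBRATE = "Tune model parameters so simulations match the physical twin again"
    RE_MODEL = "Revise the models when they no longer reflect the real phenomenon"
    RECONCILE = "Update data schemas and migrate data after discrepancies"
    RE_COLLECT = "Re-acquire data missed due to transient errors"
    RE_DEPLOY = "Ship the evolved digital twin after any of the steps above"
    RE_CONFIGURE = "Adjust or extend the physical twin after the digital twin evolved"
    REUSE = "Leverage accumulated data, knowledge, and know-how across projects"

# Example: an evolution scenario is then just an ordered list of imperatives.
scenario_1 = [RImperative.RE_MODEL, RImperative.RE_CALIBRATE]
```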
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "2. Case: Citizen Energy Community",
+ "text": "To illustrate the usage of the 7R taxonomy (see sidebar), we rely on a practical case of an evolving smart ecosystem, called the citizen energy community.\nEnergy communities enable collective, citizen-driven energy actions to support a clean energy transition (Commission, [n.\u2009d.] ###reference_b3###). In citizen energy communities (Fig. 1 ###reference_###), citizens and small commercial entities are equipped with energy generation and storage capacity, promoting them to first-class generators of energy. As opposed to traditional regulatory models, a citizen energy community gives rise to a smart ecosystem, in which participation is voluntary and egalitarian; and cyber-physical components compose the infrastructure.\nA digital twin is developed to govern the smart ecosystem (Gramelsberger et al., 2023 ###reference_b10###) from the very beginning.\nThe digital twin provides stakeholders with tools to monitor and optimize energy trading processes, simulate energy provision and usage scenarios, analyze what-if scenarios, and predict maintenance requirements.\nThroughout the lifespan of the system, new features are developed, new components are added, and core elements\u2014often as critical as a power plant\u2014are retired. In the following, we discuss four evolutionary scenarios in an escalating order of impact. By discussing the scenarios through the 7R framework of digital twin evolution for technical sustainability, we demonstrate how to organize the chain of thought about digital twin evolution into a structured set of arguments to support engineering tasks.\n###figure_2### innertopmargin=4pt,\nlinewidth=0pt,\nframetitleaboveskip=-frametitlealignment=,\nbackgroundcolor=sidebarbgcolor\n\n{mdframed}\n\n\nCitizen energy communities\n\nA citizen energy community (Commission, [n.\u2009d.] ###reference_b3###) is a localized entity, established with the purpose of generating, distributing, supplying, and storing energy. It enables local energy trading and facilitates the purchasing and selling of energy and energy services to optimize local consumption (Gramelsberger et al., 2023 ###reference_b10###). Such a citizen energy community consists of citizens, their buildings, small commercial or public entities consuming energy, and different sources producing energy including the citizens and small commercial or public entities.\nEnergy communities are crucial in driving the clean energy transition.\n\n\n\n\n\nDigital twins of citizen energy communities\n\nA digital twin of a citizen energy community provides a faithful virtual replica of the overall socio-techno-economic system. By that, the digital twin enables the assessment of key indicators, e.g., of sustainability and overall system health, and supports the continuous improvement and evolution of the ecosystem. A digital twin also helps monitor and optimize energy trading processes (Tsado et al., 2022 ###reference_b23###), simulate energy provision and usage scenarios, detect incorrect sensor information, and predict maintenance tasks of power lines, energy storages, or other physical components."
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "2.1. Scenario 1: From a monitoring digital twin to a predictive digital twin",
+ "text": "The local government decided to provide financial incentives to residents, who provide the excess energy of their photovoltaic systems within the citizen energy community network. In the new setting, client end-points do not only consume but also produce electricity. However, this setup necessitates accurate forecasting of electricity fluctuations, especially excess electricity to prevent damage, e.g., due to overheating components."
+ },
+ {
+ "section_id": "2.1.x",
+ "parent_section_id": "2.1",
+ "section_name": "Re-model",
+ "text": "Forecasting excess electricity requires a suitable model of the electrical grid. Engineering models that leverage laws of physics are a typical choice. Thus, the grid operator decides to improve the models of the digital twin and re-model the grid by adding models of thermodynamics and external factors, such as atmospheric pressure and relative humidity."
+ },
+ {
+ "section_id": "2.1.x",
+ "parent_section_id": "2.1",
+ "section_name": "Re-calibrate",
+ "text": "With the new models added, the digital twin needs to be re-calibrated. Without calibration, the models would not match the real system, resulting in inaccurate forecasts. Re-calibration is achieved by manual tuning based on high-quality operational data collected by the digital twin."
+ },
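Re-calibration of the kind described above often reduces to fitting a model parameter against operational telemetry collected by the digital twin. A minimal sketch, assuming a hypothetical linear thermal coefficient and made-up measurements (not from the paper):

```python
import numpy as np

# Hypothetical telemetry: excess power fed in (kW) vs. observed
# transformer temperature rise (K); values invented for illustration.
excess_kw = np.array([1.0, 2.5, 4.0, 5.5, 7.0])
temp_rise_k = np.array([0.9, 2.6, 3.8, 5.7, 6.9])

# Model: temp_rise = k * excess_kw. Re-calibrate k by least squares so
# the digital twin's thermodynamics model matches the grid again.
k, *_ = np.linalg.lstsq(excess_kw[:, None], temp_rise_k, rcond=None)
print(f"re-calibrated coefficient k = {k[0]:.3f} K/kW")
```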
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "2.2. Scenario 2: AI-driven predictions",
+ "text": "After realizing the benefits of a predictive digital twin\u2014e.g., improved resource efficiency and safety\u2014the grid operator decides to further improve the predictive capabilities of the digital twin. One problem with the engineering model-based techniques in place is the computing power they require for detailed simulations. As an alternative, AI-based predictive methods are proposed and realized."
+ },
+ {
+ "section_id": "2.2.x",
+ "parent_section_id": "2.2",
+ "section_name": "Re-collect",
+ "text": "The development of the new AI model requires large volumes of data, including data that has not been considered before. Typically, data points that were excluded from the manually-built engineering models due to increased complexity are now becoming of particular interest, such as environmental data (e.g., cloud cover). Therefore, the data collection strategy needs to be revised, and the digital twin should start harvesting the required data points."
+ },
+ {
+ "section_id": "2.2.x",
+ "parent_section_id": "2.2",
+ "section_name": "Reconcile",
+ "text": "Collected data needs to pass through various data processing pipelines aiming to clean and consolidate data and eventually store it in a database. The data management infrastructure needs to be reconciled with the newly collected data. This includes technical aspects (e.g., updating data schemas and processing scripts); and in some cases, addressing the organizational or legal framework (e.g., when working with personal or sensitive data)."
+ },
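On the technical side, the reconciliation step described above typically means a schema migration for the newly collected fields. A minimal sketch using SQLite; the database, table, and column names are hypothetical:

```python
import sqlite3

con = sqlite3.connect("twin.db")   # hypothetical digital twin database
cur = con.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS telemetry (ts TEXT, kw REAL)")

# Reconcile the schema with the newly collected environmental data point
# (e.g., cloud cover); legacy rows simply read NULL for the new column.
cols = {row[1] for row in cur.execute("PRAGMA table_info(telemetry)")}
if "cloud_cover" not in cols:
    cur.execute("ALTER TABLE telemetry ADD COLUMN cloud_cover REAL")
con.commit()
```

Organizational and legal reconciliation (e.g., for personal or sensitive data) has no such one-liner and must be handled in the surrounding process.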
+ {
+ "section_id": "2.2.x",
+ "parent_section_id": "2.2",
+ "section_name": "Re-model",
+ "text": "After reconciliation, re-modeling is required to generate AI-based prediction models that are trained on data from the new data pipelines. The re-modeling, here, concerns the addition of new data quantities and qualities to establish an adequate model for predicting the behavior of the energy community using AI."
58
+ },
59
+ {
60
+ "section_id": "2.2.x",
61
+ "parent_section_id": "2.2",
62
+ "section_name": "Re-calibrate",
63
+ "text": "The evolution of the data and the model require a re-calibration of the model to adjust it to the evolved (i.e., extended) scope, end eventually, again faithfully reflect the physical twin."
64
+ },
65
+ {
66
+ "section_id": "2.3",
67
+ "parent_section_id": "2",
68
+ "section_name": "2.3. Scenario 3: Management of excess energy",
69
+ "text": "Too much energy can lead to voltage frequency disturbances in the system. As a result, transformers might trip off to protect themselves from being damaged. This can cause localized blackouts.\nTo further improve the safety of the grid and optimize its efficiency, the operator decides to equip the grid with the latest generation of safety components\u2014sensors that detect potentially hazardous patterns, and actuators that can act upon hazardous situations. As usual, the digital twin operates these components."
70
+ },
71
+ {
72
+ "section_id": "2.3.x",
73
+ "parent_section_id": "2.3",
74
+ "section_name": "Re-configure",
75
+ "text": "First, the physical infrastructure of the grid needs to be re-configured. This re-configuration concerns putting new sensors and actuators in place. The new equipment enables the grid operator to localize causes for inefficient use of the grid and, consequently, to also actuate on identified grid components (e.g., temporal removal of consumers/producers from the grid, or establishment and enforcement of bandwidth limits)."
76
+ },
77
+ {
78
+ "section_id": "2.3.x",
79
+ "parent_section_id": "2.3",
80
+ "section_name": "Re-collect",
81
+ "text": "As new sensors are in place that are producing data not considered before, the digital twin has to collect these new data points about hazardous situations such as voltage frequency disturbance or energy overload in specific areas of the grid."
82
+ },
83
+ {
84
+ "section_id": "2.3.x",
85
+ "parent_section_id": "2.3",
86
+ "section_name": "Re-model",
87
+ "text": "For the optimization of the smart grid efficiency, the operators decide to use the existing sensor and actuator components and integrate them to realize an agent who is in continuous interaction with the physical components by an actuation and sensing relationship. In this respect, a new model is created that supports a reinforcement learning approach (Tomin et al., 2020 ###reference_b22###)."
88
+ },
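The reinforcement learning agent named above can be pictured with a minimal sense-actuate loop. The states, actions, reward, and tabular Q-update below are illustrative assumptions, since the paper only names the approach without prescribing an implementation.

import random

# Illustrative actions a grid agent might take; all names are assumptions.
ACTIONS = ["no_op", "limit_bandwidth", "shed_producer"]
q_table = {}  # (state, action) -> estimated value

def q(state, action):
    return q_table.get((state, action), 0.0)

def choose_action(state, eps=0.1):
    # Epsilon-greedy policy over the discrete action set.
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q(state, a))

def update(state, action, reward, next_state, alpha=0.1, gamma=0.95):
    # One tabular Q-learning step after actuating and sensing the next grid state.
    best_next = max(q(next_state, a) for a in ACTIONS)
    q_table[(state, action)] = q(state, action) + alpha * (
        reward + gamma * best_next - q(state, action))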
89
+ {
90
+ "section_id": "2.3.x",
91
+ "parent_section_id": "2.3",
92
+ "section_name": "Re-calibrate",
93
+ "text": "The new model in support of reinforcement learning needs to be calibrated. This ensures that the model is a faithful representation of the grid. Calibration is achieved step-wise, by ingesting pieces of data as they arrive on the data stream."
94
+ },
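The step-wise calibration on streaming data can be sketched as an incremental correction applied as each piece of data is ingested; the scalar offset parameter and learning rate below are illustrative assumptions.

# Minimal sketch: nudge a calibration offset toward each new observation as it
# arrives on the stream. The scalar offset and rate are assumptions.
def recalibrate_stream(offset: float, stream, rate: float = 0.05) -> float:
    for observed, predicted in stream:
        error = observed - (predicted + offset)
        offset += rate * error  # small correction per ingested data point
    return offset

# Usage: (sensor reading, model prediction) pairs arriving on the stream.
stream = [(230.4, 229.9), (231.0, 230.2), (229.8, 230.1)]
print(recalibrate_stream(0.0, stream))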
95
+ {
96
+ "section_id": "2.3.x",
97
+ "parent_section_id": "2.3",
98
+ "section_name": "Re-deploy",
99
+ "text": "The data from the added sensors and actuators as well as the results of the developed reinforcement learning approach should be visualized to the users of the digital twin. This requires that the digital twin as a software system has to be re-deployed."
100
+ },
101
+ {
102
+ "section_id": "2.4",
103
+ "parent_section_id": "2",
104
+ "section_name": "2.4. Scenario 4: Retiring the coal power plant",
105
+ "text": "Eventually, the distributed citizen energy community reaches the level of self-sustainability, efficiency, and safety, where the central coal power plant component is not needed anymore; and political trends drive the obsolescence of coal-fired power generation. As a consequence, the coal power plant is retired. The digital twin, however, is a source of important information thanks to the data collected throughout the lifespan of the coal power plant.\nAdditionally, legal constraints require the grid operator to keep this data for several years for documentation purposes."
106
+ },
107
+ {
108
+ "section_id": "2.4.x",
109
+ "parent_section_id": "2.4",
110
+ "section_name": "Reuse",
111
+ "text": "The grid operator is now able to reuse important design documents, design rationale (engineering decisions), experimental simulation traces, and operative information collected by the digital twin during the lifespan of the original power plant.\nHowever, effective reuse might require further actions, e.g., re-calibrating models or re-collecting additional data.\nHere, we maintain a focus on software aspects. In a system-wide focus, resource value retention options would become additionally important (David et al., 2024 ###reference_b6###; Bork et al., 2024 ###reference_b2###), e.g., reusing particular components of a power plant, repairing or replacing parts in the smart grid, or re-purposing buildings leading to changed energy needs."
112
+ },
113
+ {
114
+ "section_id": "3",
115
+ "parent_section_id": null,
116
+ "section_name": "3. Action points for application",
117
+ "text": "We aim to ease the application of the 7R taxonomy for digital twin evolution. Generally, applying the taxonomy requires answering two questions related to the affected R-imperatives on the one hand and the existing evolutionary processes on the other."
118
+ },
119
+ {
120
+ "section_id": "3.1",
121
+ "parent_section_id": "3",
122
+ "section_name": "3.1. Which of the R-imperatives does an evolutionary scenario touch upon?",
123
+ "text": "Answering this question helps in understanding the primary roles of software engineering in support of digital twin evolution, and the extent to which software engineering is involved in these phases. Tab. 1 ###reference_### provides typical examples of such roles to every R-imperative."
124
+ },
125
+ {
126
+ "section_id": "3.1.x",
127
+ "parent_section_id": "3.1",
128
+ "section_name": "Re-calibration",
129
+ "text": "This imperative often does not require the involvement of model engineers and scientists; software engineers who are familiar with the model might take care of re-calibration in their own scope. Calibration and re-calibration of models is a moderately software-intensive R-imperative (\\harveyBallHalf\n\n[0.6ex])."
130
+ },
131
+ {
132
+ "section_id": "3.1.x",
133
+ "parent_section_id": "3.1",
134
+ "section_name": "Re-modeling",
135
+ "text": "This imperative, on the other hand, is primarily the concern of model engineers and scientists. The role of software engineers is to take such models and refactor them for scalability. This is typical, e.g., with machine learning models, in which algorithms are fine-tuned by scientists, enabling software engineers to integrate the model into the software architecture. Re-modeling is one of the least software-intensive R-imperatives (\\harveyBallQuarter\n\n)."
136
+ },
137
+ {
138
+ "section_id": "3.1.x",
139
+ "parent_section_id": "3.1",
140
+ "section_name": "Re-collecting",
141
+ "text": "Re-collecting data typically requires working with device APIs or interacting with a messaging middleware. It is a fairly software-intensive imperative (\\harveyBallThreeQuarter\n\n[0.6ex]) that touches upon distributed components and often runs into testing challenges."
142
+ },
143
+ {
144
+ "section_id": "3.1.x",
145
+ "parent_section_id": "3.1",
146
+ "section_name": "Reconciliation",
147
+ "text": "The software engineering effort focuses on maintaining data management pipelines as the underlying data collection infrastructure changes. This is a fairly critical and software-intensive imperative (\\harveyBallThreeQuarter\n\n[0.6ex]), as it touches upon data, a key value driver for companies (Laney, 2017 ###reference_b16###)."
148
+ },
149
+ {
150
+ "section_id": "3.1.x",
151
+ "parent_section_id": "3.1",
152
+ "section_name": "Re-deployment",
153
+ "text": "This imperative is typically the most software en-gineering\u2013intensive one (\\harveyBallFull\n\n[0.6ex]). As computing is typically located in the cloud nowadays, software engineers need to define the overall infrastructure-as-a-code (Staron et al., 2023 ###reference_b21###) for deployment, as well as enact the end-to-end DevOps or, in rare cases, CI/CD processes."
154
+ },
155
+ {
156
+ "section_id": "3.1.x",
157
+ "parent_section_id": "3.1",
158
+ "section_name": "Re-configuration",
159
+ "text": "Re-configuration of the physical infrastructure mostly requires interacting with middleware as physical components are mostly hidden behind messaging and procedural layers. Occasionally, developing and maintaining embedded software for physical devices might be required, which is typical in specialized cases, e.g., where custom measurement equipment is used. Still, this imperative is only moderately software-intensive (\\harveyBallHalf\n\n[0.6ex])."
160
+ },
161
+ {
162
+ "section_id": "3.1.x",
163
+ "parent_section_id": "3.1",
164
+ "section_name": "Reuse",
165
+ "text": "This imperative can be supported by software engineering (Michael et al., 2022 ###reference_b17###) by proper componentization of software, preparing it to be used in other digital twinning projects. AI-heavy companies might want to retain value from their previously trained AI components by transfer learning (Farahani et al., 2020 ###reference_b8###). As reuse in digital twin settings is a more pressing challenge on the physical side of things, this R-imperative is one of the least software-intensive tasks (\\harveyBallQuarter\n\n)."
166
+ },
167
+ {
168
+ "section_id": "3.2",
169
+ "parent_section_id": "3",
170
+ "section_name": "3.2. What are the processes in the organization?",
171
+ "text": "Answering this question helps organize the R-imperatives into a coherent flow. Taxonomies only define a classification of concepts and defer the operationalization to the specific context of the organization or company.\nThus, a process model or DevOps variant (David, 2023 ###reference_b4###) is required to operationalize the taxonomy.\nThese operationalizations might differ in their extent, intent, and vendor dependence."
172
+ },
173
+ {
174
+ "section_id": "3.2.x",
175
+ "parent_section_id": "3.2",
176
+ "section_name": "Extent: short versus long loops.",
177
+ "text": "In the demonstrative case, Scenario 1 is a relatively short loop. It requires implementing a new model and re-calibrating it. In contrast, Scenario 3 is a more elaborate one, touching upon all but one R-imperative. Clearly, the shorter the loop, the easier it is to oversee and manage. Evidence from the industry also shows that shorter loops, especially on the digital side of things (i.e., touching upon re-modeling, re-calibration, and re-deployment), are more frequently situated within the traditional realm of software engineering companies. Longer loops tend to extend into other domains and require more elaborate cooperation."
178
+ },
179
+ {
180
+ "section_id": "3.2.x",
181
+ "parent_section_id": "3.2",
182
+ "section_name": "Intent: data-first versus model-first.",
183
+ "text": "In the demonstrative case, we show one particular sequence of R-imperatives for each scenario. In practice, R-imperatives can be chained in a different order and with more cycles to achieve the evolutionary goals of digital twins. Often, the preferred order of R-imperatives depends on company best practices and employed paradigms.\n###figure_3### Fig. 2 ###reference_### shows two typical operationalizations of Scenario 3. In a data-first approach, the physical twin is re-configured, and subsequently, data collection and reconciliation start immediately to drive model creation in a deductive fashion. The discussion of Scenario 3 in the running example followed a data-first view. Alternatively, in a model-first approach, the re-configuration of the physical twin is followed by re-modeling, re-calibration, and re-deployment of the digital twin. The benefit of this approach is that models can be used to re-generate data schemas and processing scripts, and thus, data collection can commence smoothly, almost without manual intervention. Software companies adopting model-driven practices (Schmidt, 2006 ###reference_b20###) might venture into model-first evolutionary processes, but the data-first mindset is still prevalent in practice."
184
+ },
185
+ {
186
+ "section_id": "3.2.x",
187
+ "parent_section_id": "3.2",
188
+ "section_name": "Vendor dependence.",
189
+ "text": "Operating smart ecosystems is seldom a one-person show. Software companies work with various vendors. Increasingly more often, equipment vendors ship devices coupled with models pre-configured with reasonable defaults. In such cases, longer loops are to be expected, and re-modeling, re-calibration, and re-configuration tasks, in particular, need to be scheduled appropriately. In contrast, internal re-modeling and re-calibration speed up the process but pose challenges in technical aspects, such as maintenance, and non-functional aspects, such as certification."
190
+ },
191
+ {
192
+ "section_id": "4",
193
+ "parent_section_id": null,
194
+ "section_name": "4. Conclusion",
195
+ "text": "This paper provides a case-based introduction to the application of the 7R taxonomy of digital twin evolution. We focus on the role of software engineering in the key tasks outlined by the taxonomy (i.e., its R-imperatives).\nUltimately, the 7R taxonomy of digital twin evolution fosters better decisions in a convoluted problem space in which software engineers are key to success. There are many benefits software engineers can gain from using the taxonomy."
196
+ }
197
+ ],
198
+ "appendix": [],
199
+ "tables": {
200
+ "1": {
201
+ "table_html": "<figure class=\"ltx_table\" id=\"S2.T1\">\n<figcaption class=\"ltx_caption\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\">Table 1. </span>Primary roles of Software Engineering in Digital Twin Evolution</figcaption>\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S2.T1.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S2.T1.4.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S2.T1.4.1.1.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.4.1.1.1.1\" style=\"font-size:90%;\">R-imperative</span></th>\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S2.T1.4.1.1.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_align_top ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S2.T1.4.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.4.1.1.3.1\" style=\"font-size:90%;\">Involvement of software engineers</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.2.2\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row\" id=\"S2.T1.4.2.2.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_align_top ltx_th ltx_th_column\" id=\"S2.T1.4.2.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.4.2.2.2.1\" style=\"font-size:90%;\">Primary role</span></th>\n<th class=\"ltx_td ltx_align_center ltx_align_top ltx_th ltx_th_column\" id=\"S2.T1.4.2.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.4.2.2.3.1\" style=\"font-size:90%;\">Extent</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S2.T1.4.3.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S2.T1.4.3.1.1\"><span class=\"ltx_text\" id=\"S2.T1.4.3.1.1.1\" style=\"font-size:90%;\">Re-calibrate</span></th>\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S2.T1.4.3.1.2\"></th>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_t\" id=\"S2.T1.4.3.1.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.4.3.1.3.1\">\n<span class=\"ltx_p\" id=\"S2.T1.4.3.1.3.1.1\" style=\"width:369.9pt;\"><span class=\"ltx_text\" id=\"S2.T1.4.3.1.3.1.1.1\" style=\"font-size:90%;\">Update models. 
In major cases: support model engineers and scientists.</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_justify ltx_align_top ltx_border_t\" id=\"S2.T1.4.3.1.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.4.3.1.4.1\">\n<span class=\"ltx_p\" id=\"S2.T1.4.3.1.4.1.1\" style=\"width:28.5pt;\"><span class=\"ltx_ERROR undefined\" id=\"S2.T1.4.3.1.4.1.1.1\">\\harveyBallHalf</span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.4.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.4.4.2.1\"><span class=\"ltx_text\" id=\"S2.T1.4.4.2.1.1\" style=\"font-size:90%;\">Re-model</span></th>\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S2.T1.4.4.2.2\"></th>\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S2.T1.4.4.2.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.4.4.2.3.1\">\n<span class=\"ltx_p\" id=\"S2.T1.4.4.2.3.1.1\" style=\"width:369.9pt;\"><span class=\"ltx_text\" id=\"S2.T1.4.4.2.3.1.1.1\" style=\"font-size:90%;\">Support model engineers and scientists, and refactor models for scalability.</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_justify ltx_align_top\" id=\"S2.T1.4.4.2.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.4.4.2.4.1\">\n<span class=\"ltx_p\" id=\"S2.T1.4.4.2.4.1.1\" style=\"width:28.5pt;\"><span class=\"ltx_ERROR undefined\" id=\"S2.T1.4.4.2.4.1.1.1\">\\harveyBallQuarter</span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.5.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.4.5.3.1\"><span class=\"ltx_text\" id=\"S2.T1.4.5.3.1.1\" style=\"font-size:90%;\">Re-collect</span></th>\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S2.T1.4.5.3.2\"></th>\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S2.T1.4.5.3.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.4.5.3.3.1\">\n<span class=\"ltx_p\" id=\"S2.T1.4.5.3.3.1.1\" style=\"width:369.9pt;\"><span class=\"ltx_text\" id=\"S2.T1.4.5.3.3.1.1.1\" style=\"font-size:90%;\">Integration with sensor APIs and middleware (e.g., messaging).</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_justify ltx_align_top\" id=\"S2.T1.4.5.3.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.4.5.3.4.1\">\n<span class=\"ltx_p\" id=\"S2.T1.4.5.3.4.1.1\" style=\"width:28.5pt;\"><span class=\"ltx_ERROR undefined\" id=\"S2.T1.4.5.3.4.1.1.1\">\\harveyBallThreeQuarter</span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.6.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.4.6.4.1\"><span class=\"ltx_text\" id=\"S2.T1.4.6.4.1.1\" style=\"font-size:90%;\">Reconcile</span></th>\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S2.T1.4.6.4.2\"></th>\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S2.T1.4.6.4.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.4.6.4.3.1\">\n<span class=\"ltx_p\" id=\"S2.T1.4.6.4.3.1.1\" style=\"width:369.9pt;\"><span class=\"ltx_text\" id=\"S2.T1.4.6.4.3.1.1.1\" style=\"font-size:90%;\">Maintenance of data management pipelines, ETL processes, data schemas.</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_justify ltx_align_top\" id=\"S2.T1.4.6.4.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.4.6.4.4.1\">\n<span class=\"ltx_p\" id=\"S2.T1.4.6.4.4.1.1\" style=\"width:28.5pt;\"><span class=\"ltx_ERROR undefined\" id=\"S2.T1.4.6.4.4.1.1.1\">\\harveyBallThreeQuarter</span></span>\n</span>\n</td>\n</tr>\n<tr 
class=\"ltx_tr\" id=\"S2.T1.4.7.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.4.7.5.1\"><span class=\"ltx_text\" id=\"S2.T1.4.7.5.1.1\" style=\"font-size:90%;\">Re-deploy</span></th>\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S2.T1.4.7.5.2\"></th>\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S2.T1.4.7.5.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.4.7.5.3.1\">\n<span class=\"ltx_p\" id=\"S2.T1.4.7.5.3.1.1\" style=\"width:369.9pt;\"><span class=\"ltx_text\" id=\"S2.T1.4.7.5.3.1.1.1\" style=\"font-size:90%;\">Infrastructure-as-Code, DevOps, CI/CD.</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_justify ltx_align_top\" id=\"S2.T1.4.7.5.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.4.7.5.4.1\">\n<span class=\"ltx_p\" id=\"S2.T1.4.7.5.4.1.1\" style=\"width:28.5pt;\"><span class=\"ltx_ERROR undefined\" id=\"S2.T1.4.7.5.4.1.1.1\">\\harveyBallFull</span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.8.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.4.8.6.1\"><span class=\"ltx_text\" id=\"S2.T1.4.8.6.1.1\" style=\"font-size:90%;\">Re-configure</span></th>\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S2.T1.4.8.6.2\"></th>\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S2.T1.4.8.6.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.4.8.6.3.1\">\n<span class=\"ltx_p\" id=\"S2.T1.4.8.6.3.1.1\" style=\"width:369.9pt;\"><span class=\"ltx_text\" id=\"S2.T1.4.8.6.3.1.1.1\" style=\"font-size:90%;\">Middleware development, embedded software development.</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_justify ltx_align_top\" id=\"S2.T1.4.8.6.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.4.8.6.4.1\">\n<span class=\"ltx_p\" id=\"S2.T1.4.8.6.4.1.1\" style=\"width:28.5pt;\"><span class=\"ltx_ERROR undefined\" id=\"S2.T1.4.8.6.4.1.1.1\">\\harveyBallHalf</span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.9.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S2.T1.4.9.7.1\"><span class=\"ltx_text\" id=\"S2.T1.4.9.7.1.1\" style=\"font-size:90%;\">Reuse</span></th>\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_bb\" id=\"S2.T1.4.9.7.2\"></th>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_bb\" id=\"S2.T1.4.9.7.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.4.9.7.3.1\">\n<span class=\"ltx_p\" id=\"S2.T1.4.9.7.3.1.1\" style=\"width:369.9pt;\"><span class=\"ltx_text\" id=\"S2.T1.4.9.7.3.1.1.1\" style=\"font-size:90%;\">Software componentization for reuse. Transfer learning from AI components.</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_justify ltx_align_top ltx_border_bb\" id=\"S2.T1.4.9.7.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S2.T1.4.9.7.4.1\">\n<span class=\"ltx_p\" id=\"S2.T1.4.9.7.4.1.1\" style=\"width:28.5pt;\"><span class=\"ltx_ERROR undefined\" id=\"S2.T1.4.9.7.4.1.1.1\">\\harveyBallQuarter</span></span>\n</span>\n</td>\n</tr>\n</tbody>\n</table>\n</figure>",
202
+ "capture": "Table 1. Primary roles of Software Engineering in Digital Twin Evolution"
203
+ }
204
+ },
205
+ "image_paths": {
206
+ "1": {
207
+ "figure_path": "2403.07162v3_figure_1.png",
208
+ "caption": "Figure 1. Digital Twin of an Energy Citizen Community evolving over time",
209
+ "url": "http://arxiv.org/html/2403.07162v3/x1.png"
210
+ },
211
+ "2": {
212
+ "figure_path": "2403.07162v3_figure_2.png",
213
+ "caption": "Figure 2. Operationalizations of the taxonomy in Scenario 3",
214
+ "url": "http://arxiv.org/html/2403.07162v3/extracted/5801262/figures/operationalization-2.png"
215
+ }
216
+ },
217
+ "validation": true,
218
+ "references": [
219
+ {
220
+ "1": {
221
+ "title": "The Role of Modeling in the Analysis and the Design of Sustainable Systems.",
222
+ "author": "Dominik Bork, Istvan David, Iris Reinhartz-Berger, Sergio Espa\u00f1a, Giancarlo Guizzardi, and Henderik Proper. 2024.",
223
+ "venue": "Communications of the Association for Information Systems 54 (2024).",
224
+ "url": null
225
+ }
226
+ },
227
+ {
228
+ "2": {
229
+ "title": "Energy communities.",
230
+ "author": "European Commission. [n.\u2009d.].",
231
+ "venue": "https://energy.ec.europa.eu/topics/markets-and-consumers/energy-communities.",
232
+ "url": null
233
+ }
234
+ },
235
+ {
236
+ "3": {
237
+ "title": "SusDevOps: Promoting Sustainability to a First Principle in Software Delivery.",
238
+ "author": "Istvan David. 2023.",
239
+ "venue": "Technical Report.",
240
+ "url": null
241
+ }
242
+ },
243
+ {
244
+ "4": {
245
+ "title": "Towards a Taxonomy of Digital Twin Evolution for Technical Sustainability. In ACM/IEEE International Conference on Model Driven Engineering Languages and Systems Companion. IEEE.",
246
+ "author": "Istvan David and Dominik Bork. 2023.",
247
+ "venue": "",
248
+ "url": null
249
+ }
250
+ },
251
+ {
252
+ "5": {
253
+ "title": "Circular Systems Engineering.",
254
+ "author": "Istvan David, Dominik Bork, and Gerti Kappel. 2024.",
255
+ "venue": "Software and Systems Modeling (2024).",
256
+ "url": null
257
+ }
258
+ },
259
+ {
260
+ "6": {
261
+ "title": "What is needed for managing co-evolution in MDE?. In Proc. of the 2nd Intl. Workshop on Model Comparison in Practice (IWMCP \u201911). ACM, 30\u201338.",
262
+ "author": "Davide Di Ruscio, Ludovico Iovino, and Alfonso Pierantonio. 2011.",
263
+ "venue": "",
264
+ "url": null
265
+ }
266
+ },
267
+ {
268
+ "7": {
269
+ "title": "A Concise Review of Transfer Learning. In 2020 Intl. Conf. on Computational Science and Computational Intelligence (CSCI). IEEE, 344\u2013351.",
270
+ "author": "A. Farahani et al. 2020.",
271
+ "venue": "",
272
+ "url": null
273
+ }
274
+ },
275
+ {
276
+ "8": {
277
+ "title": "What every engineer should know about smart cities.",
278
+ "author": "Valdemar Vicente Graciano Neto and Mohamad Kassab. 2023.",
279
+ "venue": "CRC Press, London, England.",
280
+ "url": null
281
+ }
282
+ },
283
+ {
284
+ "9": {
285
+ "title": "Enabling Informed Sustainability Decisions: Sustainability Assessment in Iterative System Modeling. In ACM/IEEE Intl. Conference on Model Driven Engineering Languages and Systems Companion. IEEE, 964\u2013968.",
286
+ "author": "Gabriele Gramelsberger, Hendrik Kausch, Judith Michael, Frank Piller, Ferdinanda Ponci, Aaron Praktiknjo, Bernhard Rumpe, Rega Sota, and Sandra Venghaus. 2023.",
287
+ "venue": "",
288
+ "url": null
289
+ }
290
+ },
291
+ {
292
+ "10": {
293
+ "title": "Review on cyber-physical and cyber-security system in smart grid: Standards, protocols, constraints, and recommendations.",
294
+ "author": "Mohammad Kamrul Hasan, AKM Ahasan Habib, Zarina Shukur, Fazil Ibrahim, Shayla Islam, and Md Abdur Razzaque. 2023.",
295
+ "venue": "Journal of Network and Computer Applications 209 (2023), 103540.",
296
+ "url": null
297
+ }
298
+ },
299
+ {
300
+ "11": {
301
+ "title": "Approaches to Co-Evolution of Metamodels and Models: A Survey.",
302
+ "author": "Regina Hebig, Djamel Eddine Khelladi, and Reda Bendraou. 2017.",
303
+ "venue": "IEEE Transactions on Software Engineering 43, 5 (2017), 396\u2013414.",
304
+ "url": null
305
+ }
306
+ },
307
+ {
308
+ "12": {
309
+ "title": "The relevance of information and communication technologies for environmental sustainability \u2013 A prospective simulation study.",
310
+ "author": "Lorenz M. Hilty et al. 2006.",
311
+ "venue": "Env. Modelling & Software 21, 11 (2006), 1618\u20131629.",
312
+ "url": null
313
+ }
314
+ },
315
+ {
316
+ "13": {
317
+ "title": "Applying a \u201cSmart Ecosystem\u201d Mindset to Rethink Your Products.",
318
+ "author": "Jakob Jul Jensen. 2020.",
319
+ "venue": "Computer 53, 12 (2020), 98\u2013101.",
320
+ "url": null
321
+ }
322
+ },
323
+ {
324
+ "14": {
325
+ "title": "Digital Twin in manufacturing: A categorical literature review and classification.",
326
+ "author": "Werner Kritzinger, Matthias Karner, Georg Traar, Jan Henjes, and Wilfried Sihn. 2018.",
327
+ "venue": "IFAC-PapersOnLine 51, 11 (2018), 1016\u20131022.",
328
+ "url": null
329
+ }
330
+ },
331
+ {
332
+ "15": {
333
+ "title": "Infonomics: How to Monetize, Manage, and Measure Information as an Asset for Competitive Advantage.",
334
+ "author": "Douglas B. Laney. 2017.",
335
+ "venue": "Routledge. 322 pages.",
336
+ "url": null
337
+ }
338
+ },
339
+ {
340
+ "16": {
341
+ "title": "Integration Challenges for Digital Twin Systems-of-Systems. In 10th IEEE/ACM Int. WS on SE for Systems-of-Systems and Software Ecosystems. IEEE.",
342
+ "author": "Judith Michael, J\u00e9r\u00f4me Pfeiffer, Bernhard Rumpe, and Andreas Wortmann. 2022.",
343
+ "venue": "",
344
+ "url": null
345
+ }
346
+ },
347
+ {
348
+ "17": {
349
+ "title": "Explaining Cyberphysical System Behavior With Digital Twins.",
350
+ "author": "Judith Michael, Maike Schwammberger, and Andreas Wortmann. 2024.",
351
+ "venue": "IEEE Software 41, 01 (2024), 55\u201363.",
352
+ "url": null
353
+ }
354
+ },
355
+ {
356
+ "18": {
357
+ "title": "Software Engineering for Sustainability: Find the Leverage Points!",
358
+ "author": "B. Penzenstadler, L. Duboc, C. C. Venters, S. Betz, N. Seyff, K. Wnuk, R. Chitchyan, S. M. Easterbrook, and C. Becker. 2018.",
359
+ "venue": "IEEE Software 35, 04 (2018), 22\u201333.",
360
+ "url": null
361
+ }
362
+ },
363
+ {
364
+ "19": {
365
+ "title": "Model-driven engineering.",
366
+ "author": "Douglas C Schmidt. 2006.",
367
+ "venue": "Computer-IEEE Computer Society 39, 2 (2006), 25.",
368
+ "url": null
369
+ }
370
+ },
371
+ {
372
+ "20": {
373
+ "title": "Recent Research Into Infrastructure as Code.",
374
+ "author": "M. Staron, S. Abrahao, B. Penzenstadler, and L. Hochstein. 2023.",
375
+ "venue": "IEEE Software 40, 01 (2023), 86\u201388.",
376
+ "url": null
377
+ }
378
+ },
379
+ {
380
+ "21": {
381
+ "title": "Development of Digital Twin for Load Center on the Example of Distribution Network of an Urban District. In E3S Web Conf., Vol. 209. 02029.",
382
+ "author": "Nikita Tomin, Victor Kurbatsky, Vadim Borisov, and Sergey Musalev. 2020.",
383
+ "venue": "",
384
+ "url": null
385
+ }
386
+ },
387
+ {
388
+ "22": {
389
+ "title": "A Digital Twin Integrated Cyber-physical Systems for Community Energy Trading. In IEEE SmartGridComm. 134\u2013140.",
390
+ "author": "Yakubu Tsado, Olamide Jogunola, Femi. O. Olatunji, and Bamidele Adebisi. 2022.",
391
+ "venue": "",
392
+ "url": null
393
+ }
394
+ }
395
+ ],
396
+ "url": "http://arxiv.org/html/2403.07162v3"
397
+ }
20240819/2403.13780v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240819/2403.17111v2.json ADDED
@@ -0,0 +1,217 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "title": "Vision-Based Dexterous Motion Planning by Dynamic Movement Primitives with Human Hand Demonstration",
3
+ "abstract": "This paper proposes a vision-based framework for a 7-degree-of-freedom robotic manipulator, with the primary objective of facilitating its capacity to acquire information from human hand demonstrations for the execution of dexterous pick-and-place tasks. Most existing works only focus on the position demonstration without considering the orientations. In this paper, by employing a single depth camera, MediaPipe is applied to generate the three-dimensional coordinates of a human hand, thereby comprehensively recording the hand\u2019s motion, encompassing the trajectory of the wrist, orientation of the hand, and the grasp motion. A mean filter is applied during data pre-processing to smooth the raw data. The demonstration is designed to pick up an object at a specific angle, navigate around obstacles in its path and subsequently, deposit it within a sloped container. The robotic system demonstrates its learning capabilities, facilitated by the implementation of Dynamic Movement Primitives, enabling the assimilation of user actions into its trajectories with different start and end points. Experimental studies are carried out to demonstrate the effectiveness of the work.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "With the continued expansion of the robotics industry, the scope of interactions between robots and humans in everyday life is poised to increase, thereby placing higher demands on the intelligent evolution of robots. Conventional methodologies for robot learning detect the environment through sensors, coupled with extensive computational processes executed within simulated environments, all in the pursuit of developing logical motion planning strategies for robots during task execution [1 ###reference_b1###]. This approach, however, requires substantial time and has high requirements on hardware performance. In stark contrast, human execution of analogous tasks is simple and intuitive. Therefore, one promising way to enhance the robot intelligence involves learning from human demonstration, wherein humans assume the role of instructors. Within this framework, robots imitate and learn from demonstrations (LfD), thereby elevating their behavioral dexterity.\nOne work of LfD involves the acquisition of human-guided instructional data. Conventional approaches to data collection employ mechanical sensors to gather information from human actions. In [2 ###reference_b2###], Chen et al. utilized motion capture markers and an Inertial Measurement Unit (IMU) to capture the foot movement. In parallel with advancements in computer vision technologies, the utilization of cameras has emerged as an alternative mechanism for capturing the human demonstration data. A notable advantage of employing cameras lies in obviating the necessity for individuals to sensors, thereby offering a more expeditious and streamlined alternative to conventional data collection methods. In [3 ###reference_b3###], Cai et al. used a single camera to track the position of human-driven objects, facilitating the subsequent emulation of these trajectories by robotic systems.\nIn recent years, a proliferation of camera-based skeletal detection tools has emerged, among which OpenPose, introduced by Cao et al. in 2017 [4 ###reference_b4###]. It enables the real-time extraction of human skeletal structures from webcam feeds and is amenable to multi-person scenarios, although it demands relatively high hardware requirements. In [5 ###reference_b5###], Fang et al. localized whole-body keypoints accurately and tracked humans simultaneously with OpenPose. However, for fine tasks, it is insufficient to track the Cartesian coordinates of the human body; it also requires the orientation of parts of the human body, such as the hands. For example, in [6 ###reference_b6###], Li et al. extracted factors from hand heatmaps to estimate hand poses and teleoperate a dual-arm system. MediaPipe is another vision-based tool for human skeletal keypoint extraction [7 ###reference_b7###]. In comparison to OpenPose, MediaPipe holds the advantage of accurately and efficiently capturing two-dimensional (2D) key points of the human hand, thus facilitating precise hand gesture acquisition. In [8 ###reference_b8###], Chen et al. utilized two cameras to capture 2D points of the hand and generate the three-dimensional (3D) coordinates to obtain trajectories of the human hand.\n###figure_1### Subsequent to the reception of human-guided instructions, robots have to learn from human actions. Behavioral cloning is a method to duplicate the human behavior by collecting the demonstration data, and the input-output mapping is simply supervised by learning methods [9 ###reference_b9###]. 
In comparison to behavioral cloning, reinforcement learning (RL) offers a more flexible and adaptive approach to learning. In [10 ###reference_b10###], an inverse RL method infers the latent reward function of a task from expert demonstrations to better understand the task structure and generalize to novel situations. While inverse RL methods require a more substantial volume of data and computational resources, Dynamic Movement Primitives (DMP) emerges as a notable methodology for robotic motion generation and control with a single demonstration [11 ###reference_b11###]. DMP aims to simulate the dynamic properties and flexibility exhibited by humans when performing motions. This enables the DMP to generate suitable motions when encountering new situations, and allows for fine-tuning and adaptation while following prescribed trajectories. In [12 ###reference_b12###], the integration of DMP with adaptive admittance control enables path planning and force control on curved surfaces.\nSeveral works have integrated DMP with vision sensors to control the manipulator. In [13 ###reference_b13###], Chen et al. utilized You Only Look Once (YOLO) to train and detect hand shapes with two webcams, enabling the robot to pick up an object and place it in a human hand. DMP was applied to set trajectories of the end-effector, learned from dragging movements. Similarly, Cai et al. detected multiple hand demonstrations using a depth camera and OpenPose to obtain a comprehensive translational trajectory and predict the endpoint by DMP [14 ###reference_b14###]. However, these works did not explicitly consider the hand\u2019s quaternions in motion planning. To the best of the author\u2019s knowledge, there has been no application of DMP with both translational and rotational demonstrations captured by cameras. Incorporating quaternions in motion planning adds a layer of dexterity, and exploring this aspect could be a potential avenue for future research in enhancing manipulator control.\nThe proposed framework is shown in Fig. 1 ###reference_###. In this paper, a depth camera is employed, with MediaPipe applied to generate 2D hand keypoints, which are then combined with the depth data to capture the 3D coordinates of the whole human hand. This enables the recording of the trajectory, orientation, and grasping of its movements. To mitigate the impact of the inevitable minor tremors in human motion, the acquired human demonstration data undergoes a pre-processing phase with the proposed method to calculate the orientation of the hand and the finger motions, involving the application of a mean filter. Following pre-processing, a modified DMP is proposed to learn the coordinate trajectory of the wrist. The new trajectories with novel start and end points are applied to the execution of the pick-and-place task. This task entails the precise manipulation of objects; the experimental demonstration includes picking up an object at a specified angle, avoiding obstacles, and ultimately placing the object within an inclined receptacle. The proposed framework offers a novel and effective approach to many dexterous manipulation tasks."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II System Description",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "II-A Robotic Manipulator",
21
+ "text": "The equipment used in the experiment is the 7-degree-of-freedom (7-DOF) Franka Emika robot, which can perform complex tasks with dexterity. The joints provide a signal transmission frequency of to ensure smooth data process. The dynamics of the Franka Emika robot manipulator in joint space is presented as in Eq. (1 ###reference_###):\nwhere represents the inertial matrix, represents the Coriolis and centripetal matrix, is the gravity vector and is the torque input vector. , , are the joint angle, velocity, and acceleration vectors.\nTo accomplish trajectory tracking in the experiment, the dynamic equation can be transformed to that in Cartesian space. The end-effector pose is denoted as\n where is the position in Cartesian space and \nis the quaternion, where denotes the real part, and denote the imaginary part. The torque input vector is transformed to the force control input . The transformation from the joint space to the Cartesian space is shown in Eq. (2 ###reference_###).\nwhere is the Jacobian matrix, , where are the angular velocity and acceleration of the end-effector, respectively.\nWith Eq. (1 ###reference_###) and Eq. (2 ###reference_###), the dynamic equation of the manipulator in Cartesian space can be presented in Eq. (3 ###reference_###).\nwhere"
22
+ },
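Since the mathematical symbols of Eqs. (1)-(3) were lost in extraction, the following LaTeX block restates them in standard manipulator-dynamics notation; the symbol names are conventional choices, not necessarily the paper's own.

% Joint-space dynamics, Eq. (1), in standard notation:
M(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) = \tau
% Joint-to-Cartesian transformation, Eq. (2):
\dot{x} = J(q)\dot{q}, \qquad \ddot{x} = J(q)\ddot{q} + \dot{J}(q)\dot{q}, \qquad \tau = J(q)^{\top} F
% Cartesian-space dynamics, Eq. (3), with \Lambda(x) = (J M^{-1} J^{\top})^{-1}:
\Lambda(x)\ddot{x} + \mu(x,\dot{x})\dot{x} + F_{g} = F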
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "II-B Depth Camera",
27
+ "text": "The depth camera employed in this paper is the RealSense D435i, a product developed by Intel. This device is equipped with a dual camera system that captures visible light images and depth information. It relies on a laser projector and an infrared camera to measure the distance between objects and the camera, resulting in high quality depth images."
28
+ },
29
+ {
30
+ "section_id": "3",
31
+ "parent_section_id": null,
32
+ "section_name": "III Methodology",
33
+ "text": ""
34
+ },
35
+ {
36
+ "section_id": "3.1",
37
+ "parent_section_id": "3",
38
+ "section_name": "III-A Human Hand Demonstration",
39
+ "text": ""
40
+ },
41
+ {
42
+ "section_id": "3.1.1",
43
+ "parent_section_id": "3.1",
44
+ "section_name": "III-A1 3D Coordinate Generation of a Hand",
45
+ "text": "A depth camera can simultaneously capture Red-Green-Blue (RGB) images as well as the depth image while ensuring their alignment. In this paper, a consistent resolution of was uniformly established for both the RGB and depth image. This standardization facilitates an accurate correspondence between the data points within these two distinct graphical representations. As shown in Fig. 2 ###reference_###(a), the initial step entails the application of the MediaPipe\u2019s hand detection function to identify the hand\u2019s key points within the RGB image. Subsequently, the 2D pixel coordinates of these 21 key points are obtained. Corresponding the index of these pixels to the depth image, the depth of these pixels can be obtained in Fig.2 ###reference_###(b). The pixel coordinates do not represent real-world coordinates and therefore, a coordinate transformation from the pixel coordinates, , to real-world spatial coordinates, , is required.\nIn real-world spatial coordinates, an actual distance can be calculated by Eq. (4 ###reference_###).\nwhere is the depth, is the pixel distance, is the pixel value of width or height of the camera image, and is the view angle of the camera. Term is one pixel\u2019s angle in the figure, and is the real length of one pixel, so the actual distance with pixels can be concluded as in Eq. (4 ###reference_###).\nUsing Eq. (4 ###reference_###), we can get the 3D coordinates as in Eq. (5 ###reference_###).\nwhere is the height of the camera, and are the view angles of the camera, and are the resolution. are constant parameters related to the camera. In this paper, . The final 3D hand is shown in Fig. 2 ###reference_###(c).\n###figure_2### ###figure_3### ###figure_4###"
46
+ },
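The pixel-to-metric relation of Eqs. (4)-(5) can be sketched in a few lines of Python: one pixel subtends an angle of roughly (view angle / pixel count), so a pixel offset p at depth D spans about 2 D tan(alpha/2) p / P. The resolution and field-of-view values below are illustrative placeholders, since the paper's calibrated constants were stripped during extraction.

import math

def pixel_to_metric(p_offset: float, depth: float, fov_deg: float, pixels: int) -> float:
    # Half of the image span at this depth, then scale by the pixel fraction (Eq. (4)).
    half_span = depth * math.tan(math.radians(fov_deg) / 2.0)
    return 2.0 * half_span * p_offset / pixels

def pixel_to_world(u: int, v: int, depth: float,
                   width: int = 640, height: int = 480,
                   fov_h: float = 69.0, fov_v: float = 42.0):
    # Map an aligned (u, v, depth) triple to camera-frame (x, y, z), in the
    # spirit of Eq. (5); resolution and FOV here are assumed placeholders.
    x = pixel_to_metric(u - width / 2.0, depth, fov_h, width)
    y = pixel_to_metric(v - height / 2.0, depth, fov_v, height)
    return x, y, depth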
47
+ {
48
+ "section_id": "3.1.2",
49
+ "parent_section_id": "3.1",
50
+ "section_name": "III-A2 Orientation",
51
+ "text": "In addition to the precise control of the 3D coordinates of the end-effector, equal significance is attributed to managing the orientation of the end-effector and the grasping of the gripper. These key parameters can be calculated and corresponded through the 3D coordinates of the thumb, index finger, and wrist. To represent the orientation of the end-effector, the Euler angles of yaw, pitch and roll orientations are as in Eq. (6 ###reference_###).\nwhere the yaw angle is the rotation about the -axis, the pitch angle is the rotation about the -axis and the roll angle is the rotation about the -axis. denote the positions of index finger, and denote the positions of thumb and wrist, respectively.\nWhile Euler angles offer a straightforward and intuitive method for the orientation representation, the Franka Emika Panda robot uses quaternions as its chosen representation for orientation, so the conversion from Euler angles to quaternions is needed. Prior to executing this transformation, it is imperative to understand the quaternion multiplication operation. Assume and are quaternions, which can be represented by Eq. (7 ###reference_###).\nThen the multiplication of and can be obtained as\nThrough the multiplication of three axes, the transformation equation between quaternions and Euler angles is as:\nwhere and with .\n###figure_5### ###figure_6### ###figure_7### ###figure_8### ###figure_9### ###figure_10### Given that the default configuration of the robot\u2019s end-effector is perpendicular to the ground, while the default hand posture in the human demonstration aligns parallel to the ground, an essential adjustment is mandated. We rotate the end-effector around the -axis, so the desired quaternion should be calculated as the demonstration quaternion multiplying the quaternion rotated around the -axis:"
52
+ },
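The quaternion operations of this subsection translate directly to code: a Hamilton product (Eqs. (7)-(8)), a ZYX Euler-to-quaternion conversion, and a final composition with a fixed alignment rotation. The rotation axis and angle used for the hand-to-gripper alignment were stripped from the text, so they are left as parameters here; the specific values shown are illustrative.

import math

def quat_mul(q1, q2):
    # Hamilton product of quaternions given as (w, x, y, z).
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def euler_to_quat(yaw, pitch, roll):
    # ZYX (yaw-pitch-roll) Euler angles to a unit quaternion (w, x, y, z).
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    return (cr*cp*cy + sr*sp*sy,
            sr*cp*cy - cr*sp*sy,
            cr*sp*cy + sr*cp*sy,
            cr*cp*sy - sr*sp*cy)

def axis_angle_quat(axis, angle):
    # Unit quaternion for a rotation of `angle` about the unit vector `axis`.
    s = math.sin(angle / 2)
    return (math.cos(angle / 2), axis[0]*s, axis[1]*s, axis[2]*s)

# Desired gripper orientation: demonstration quaternion composed with the fixed
# hand-to-gripper alignment rotation (axis and angle here are assumptions).
q_demo = euler_to_quat(0.2, 0.1, 0.0)
q_align = axis_angle_quat((0.0, 1.0, 0.0), math.pi / 2)
q_des = quat_mul(q_demo, q_align)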
53
+ {
54
+ "section_id": "3.1.3",
55
+ "parent_section_id": "3.1",
56
+ "section_name": "III-A3 Grasping",
57
+ "text": "For grasping, the distance between the thumb and the index finger can be calculated as Eq. (8 ###reference_###). If the distance is smaller than the threshold, robot will consider it as a grasping motion learned and the gripper will then close and grasp. In this paper, the threshold is set to which can be changed depending on the tasks."
58
+ },
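The grasp trigger of Eq. (8) reduces to a thresholded Euclidean distance; a minimal sketch follows. The threshold value below is a placeholder, since the paper's number was stripped from the text.

import math

def should_grasp(thumb_tip, index_tip, threshold: float = 0.04) -> bool:
    # Close the gripper when the thumb-index distance drops below the
    # task-dependent threshold (0.04 m here is an assumed placeholder).
    return math.dist(thumb_tip, index_tip) < threshold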
59
+ {
60
+ "section_id": "3.1.4",
61
+ "parent_section_id": "3.1",
62
+ "section_name": "III-A4 Data Pre-processing",
63
+ "text": "After obtaining the motion trajectory and posture from the human demonstration, a mean filter is applied to smooth the raw data. The mean filter requires a one-dimensional vector of length , denoted as , and the output vector after applying an average smoothing filter with a window size of given by Eq. (9 ###reference_###).\nwhere represents the th element in the output vector, corresponds to the th element in the input vector and ranges from to ."
64
+ },
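Eq. (9) is a plain sliding-window average; a minimal sketch follows, using the window size of 10 reported in the experiments and an assumed shrinking-window treatment at the edges.

def mean_filter(x, w: int = 10):
    # Each output sample is the mean of a window centered on the input sample;
    # near the boundaries the window simply shrinks (an edge-handling assumption).
    y = []
    for i in range(len(x)):
        lo = max(0, i - w // 2)
        hi = min(len(x), i + w // 2 + 1)
        window = x[lo:hi]
        y.append(sum(window) / len(window))
    return y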
65
+ {
66
+ "section_id": "3.2",
67
+ "parent_section_id": "3",
68
+ "section_name": "III-B Motion Planning",
69
+ "text": ""
70
+ },
71
+ {
72
+ "section_id": "3.2.1",
73
+ "parent_section_id": "3.2",
74
+ "section_name": "III-B1 Original Dynamic Movement Primitives",
75
+ "text": "In [11 ###reference_b11###], it is proposed that complex actions are composed of a set of primitive actions that are executed sequentially or in parallel, and DMP is the mathematical formalization of these primitive actions. In fact, DMP serves as an approach to decompose complex motion into a set of basic motion primitives. Each motion primitive is characterized as a nonlinear system whose dynamic properties are influenced by a guided trajectory, such that the primitives can be reutilized and adapted across various settings.\nAt its core, the system model of DMP is characterized by the fusion of a Proportional-Derivative (PD) controller with the inclusion of the control term , which, notably, is a nonlinear function. In this way, the system can not only converge to the goal point, but also allows the motion process to emulate the original trajectory. The dynamic system can be presented as Eq. (10 ###reference_###).\nwhere and are the position and velocity of the system, and are the start and goal points of the trajectory, is the time duration, is the spring stiffness, and is the damper damping.\nIn order to generate , it is imperative to first acquire , which can be represented by the demonstration trajectory as Eq. (11 ###reference_###).\nwhere , , are the position, velocity and acceleration of the pre-processed demonstration trajectory. is the nonlinear function used to generate arbitrary complex movement, so the work in [11 ###reference_b11###] used Gaussian functions as the basis functions to represent . Assume that each element, , has its own set of parameters. The basis functions are:\nwhere\nand starts at one and gradually tends toward zero, thereby ensuring that approaches zero when converges to . is a constant value, is the Gaussian function, where is the center, is the width, and is the adjustable weight. Each Gaussian function is endowed with a respective weight, and our goal is to find such a set of weights that minimizes the error between and . Locally weighted regression is used to obtain as:\nwhere\nand is the number of sampling points."
76
+ },
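Because the symbols of Eqs. (10)-(11) were stripped, the sketch below writes a one-dimensional discrete DMP in the common formulation: a transformation system tau^2 * ydd = k*(g - y) - d*tau*yd + (g - x0)*f(s), an exponential canonical system, and a Gaussian-basis forcing term fitted by locally weighted regression. Gains, basis count, and numerical details are illustrative choices, not the paper's values.

import numpy as np

class DMP1D:
    def __init__(self, n_basis=50, alpha_s=4.0, k=100.0):
        self.alpha_s = alpha_s                    # canonical-system decay
        self.k, self.d = k, 2.0 * np.sqrt(k)      # spring gain, critical damping
        self.n = n_basis
        self.c = np.exp(-alpha_s * np.linspace(0.0, 1.0, n_basis))  # centers in s
        self.h = 1.0 / (np.gradient(self.c) ** 2 + 1e-10)           # widths
        self.w = np.zeros(n_basis)

    def fit(self, y, dt):
        # Locally weighted regression of the forcing term from one demonstration.
        y = np.asarray(y, dtype=float)
        yd = np.gradient(y, dt)
        ydd = np.gradient(yd, dt)
        tau, x0, g = (len(y) - 1) * dt, y[0], y[-1]
        s = np.exp(-self.alpha_s * np.linspace(0.0, 1.0, len(y)))
        # Target forcing term (Eq. (11)); note the (g - x0) scaling, which is
        # exactly the weakness the modified DMP of Eq. (12) removes.
        f_target = (tau**2 * ydd - self.k * (g - y) + self.d * tau * yd) / (g - x0)
        for i in range(self.n):
            psi = np.exp(-self.h[i] * (s - self.c[i]) ** 2)
            self.w[i] = np.sum(s * psi * f_target) / (np.sum(s * s * psi) + 1e-10)

    def rollout(self, x0, g, tau, dt):
        # Integrate the transformation and canonical systems for a new start/goal.
        y, v, s, traj = float(x0), 0.0, 1.0, []
        for _ in range(int(tau / dt)):
            psi = np.exp(-self.h * (s - self.c) ** 2)
            f = (psi @ self.w) / (psi.sum() + 1e-10) * s * (g - x0)
            ydd = (self.k * (g - y) - self.d * tau * v + f) / tau**2
            v += ydd * dt
            y += v * dt
            s += (-self.alpha_s * s / tau) * dt
            traj.append(y)
        return np.array(traj)

Fitted once on a demonstration, the primitive can be rolled out with new start and end points, which is how the experiments reuse a single demonstration:

demo = np.sin(np.linspace(0.0, np.pi / 2, 200))  # stand-in demonstration signal
dmp = DMP1D()
dmp.fit(demo, dt=0.01)
new_traj = dmp.rollout(x0=0.2, g=1.3, tau=2.0, dt=0.01)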
77
+ {
78
+ "section_id": "3.2.2",
79
+ "parent_section_id": "3.2",
80
+ "section_name": "III-B2 Modified Dynamic Movement Primitives",
81
+ "text": "In Eq. (10 ###reference_###), poses a potential issue when the starting point of the demonstration closely approximates the target position. In such cases, the term approaches to zero, consequently driving the term towards nullity. Additionally, the opposite signs of and engenders a mirroring effect in the trajectory shape. A modified DMP, as shown in Eq. (12 ###reference_###), wherein the system separates and so that remains unaffected by and [15 ###reference_b15###]."
82
+ },
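Eq. (12) itself was stripped from the text; one widely used modified transformation system consistent with this description (the forcing term is decoupled from the start-goal difference) is, in LaTeX:

% A common modified DMP transformation system (a reconstruction consistent with
% the description, not a verbatim copy of Eq. (12)):
\tau \dot{v} = K(g - y) - D v - K(g - y_{0})\,s + K f(s), \qquad \tau \dot{y} = v

Because f(s) is no longer multiplied by (g - y_0), a demonstration whose start nearly coincides with its goal does not nullify the learned shape, and a sign flip of (g - y_0) no longer mirrors it.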
83
+ {
84
+ "section_id": "3.3",
85
+ "parent_section_id": "3",
86
+ "section_name": "III-C Path Following Control",
87
+ "text": "In this paper, the manipulator is controlled by an impedance controller, which imparts a measure of flexibility through the modulation of stiffness in its movements. The principle of impedance control is to treat end-effector as a mass-spring-damper system. The torque is designed in Cartesian space as\nwhere the gains and are the design parameters."
88
+ },
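The torque law itself was stripped from the text; a standard Cartesian impedance law of the kind described, in conventional notation, reads:

% Conventional Cartesian impedance control (standard notation; the paper's
% exact symbols were lost in extraction):
F = -K_{p}\,(x - x_{d}) - K_{d}\,(\dot{x} - \dot{x}_{d}), \qquad \tau = J(q)^{\top} F

where x_d is the desired end-effector pose trajectory and K_p, K_d are the stiffness and damping design gains.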
89
+ {
90
+ "section_id": "4",
91
+ "parent_section_id": null,
92
+ "section_name": "IV Performance Evaluation",
93
+ "text": ""
94
+ },
95
+ {
96
+ "section_id": "4.1",
97
+ "parent_section_id": "4",
98
+ "section_name": "IV-A 3D Coordinate Accuracy",
99
+ "text": "This part of the experiment is dedicated to validating of the accuracy associated with the 3D coordinate generated by MediaPipe and Eq. (5 ###reference_###). The measured and calculated coordinates are shown in the Table I.\nAs shown in Table I, the maximum error observed along each axis remains confined within the threshold of . Some errors may be due to the measurement and the small shaking of the hand during demonstration. Noises are inevitable in the human demonstration data because human movements always have slight jitters. Hence a data smoothing approach is applied to the raw data."
100
+ },
101
+ {
102
+ "section_id": "4.2",
103
+ "parent_section_id": "4",
104
+ "section_name": "IV-B Data Pre-processing",
105
+ "text": "In light of the inherent noise present in the data collected from the human hand, a pre-processing step is employed. Specifically, we undertake data pre-processing through the application of a mean filter. Fig. 4 ###reference_### shows the comparison between raw and filtered Euler angles. The window size was tuned to 10.\n###figure_11###"
106
+ },
107
+ {
108
+ "section_id": "4.3",
109
+ "parent_section_id": "4",
110
+ "section_name": "IV-C Dynamic Movement Primitives",
111
+ "text": "The execution of the human demonstration involves picking up a sponge from the workbench with 40-degree yaw, moving it over a cup and putting the sponge in a box sloped with 50-degree pitch. The trajectory and value of the task can be seen in Fig. 5 ###reference_###. Fig. 5 ###reference_### (d) and (e) show the Euler angles and distance in the demonstration, which will be replicated by the end-effector. Then we employ modified DMP to learn the trajectory of , , and , respectively, with three new starting points: , , ,\nand three new end points: , , . Three new trajectories are shown in Fig. 5 ###reference_###(a)(b)(c)(f). New trajectories change the start and end points, but keep the shape, quaternion, and grasping motion. The video of the experiment can be seen in the ACM Lab YouTube channel: https://www.youtube.com/watch?v=XP22mKGLvUI. ###reference_I.###\n###figure_12### ###figure_13### ###figure_14### ###figure_15### ###figure_16### ###figure_17###"
112
+ },
113
+ {
114
+ "section_id": "5",
115
+ "parent_section_id": null,
116
+ "section_name": "Conclusion and Future Work",
117
+ "text": "This paper presented a comprehensive framework for manipulator to implement dexterous motion planning task by learning from human demonstration. Through the integration of MediaPipe and depth camera, the framework enables the precise calculation of the 3D coordinates of the human hand, with an error margin of less than . Utilizing these coordinates derived from human demonstrations, the framework facilitates the definition and acquisition of position and Euler angles through a modified DMP. This framework not only enhances the robot\u2019s capacity to perform various dexterous tasks but also augments its ability to imitate human motion, thereby more flexible and collaborative."
118
+ }
119
+ ],
120
+ "appendix": [],
121
+ "tables": {
122
+ "1": {
123
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>Measurement of Position</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S4.T1.1.1.1.1\">Point</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.1.1.1.2\">Measured (cm)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.1.1.1.3\">Calculated (cm)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.1.1.1.4\">Absolute Error (cm)</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.2.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S4.T1.1.2.1.1\">1</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.2.1.2\">(2.0, 8.0, 9.0)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.2.1.3\">(2.4, 8.0, 7.5)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.2.1.4\">(0.4, 0.0, 1.5)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.3.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T1.1.3.2.1\">2</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.3.2.2\">(-5.0, 0.0, 9.0)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.3.2.3\">(-6.3, 0.7, 8.1)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.3.2.4\">(1.3, 0.7, 0.9)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.4.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T1.1.4.3.1\">3</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.4.3.2\">(-6.0, -9.0, 34.5)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.4.3.3\">(-7.5, -8.1, 34.2)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.4.3.4\">(1.5, 0.9, 0.3)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.5.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T1.1.5.4.1\">4</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.5.4.2\">(-19.0, 7.0, 26.5)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.5.4.3\">(-20.5, 7.4, 26.8)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.5.4.4\">(1.5, 0.4, 0.3)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.6.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T1.1.6.5.1\">5</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.6.5.2\">(20.0, 10.0, 26.5)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.6.5.3\">(21.3, 10.1, 27.3)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.6.5.4\">(1.3, 0.1, 0.8)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.7.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T1.1.7.6.1\">6</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.7.6.2\">(11.0, -10.0, 12.5)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.7.6.3\">(10.6, -8.6, 13.9)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.7.6.4\">(0.4, 1.4, 1.4)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.8.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T1.1.8.7.1\">7</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.8.7.2\">(-14.0, -8.0, 12.5)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.8.7.3\">(-15.5, -6.8, 12.0)</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S4.T1.1.8.7.4\">(1.5, 1.2, 0.5)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.9.8\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T1.1.9.8.1\">8</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.9.8.2\">(27.0, -14.0, 34.5)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.9.8.3\">(28.6, -12.2, 36.1)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.9.8.4\">(1.6, 1.8, 1.6)</td>\n</tr>\n</tbody>\n</table>\n</figure>",
124
+ "capture": "TABLE I: Measurement of Position"
125
+ }
126
+ },
127
+ "image_paths": {
128
+ "1": {
129
+ "figure_path": "2403.17111v2_figure_1.png",
130
+ "caption": "Figure 1: The Schematic Diagram of the Proposed Work",
131
+ "url": "http://arxiv.org/html/2403.17111v2/x1.png"
132
+ },
133
+ "2(a)": {
134
+ "figure_path": "2403.17111v2_figure_2(a).png",
135
+ "caption": "(a) RGB Image with MediaPipe\nFigure 2: The Proposed 3D Hand Coordinate Generation",
136
+ "url": "http://arxiv.org/html/2403.17111v2/x2.png"
137
+ },
138
+ "2(b)": {
139
+ "figure_path": "2403.17111v2_figure_2(b).png",
140
+ "caption": "(b) Depth Image\nFigure 2: The Proposed 3D Hand Coordinate Generation",
141
+ "url": "http://arxiv.org/html/2403.17111v2/x3.png"
142
+ },
143
+ "2(c)": {
144
+ "figure_path": "2403.17111v2_figure_2(c).png",
145
+ "caption": "(c) 3D Hand\nFigure 2: The Proposed 3D Hand Coordinate Generation",
146
+ "url": "http://arxiv.org/html/2403.17111v2/x4.png"
147
+ },
148
+ "3(a)": {
149
+ "figure_path": "2403.17111v2_figure_3(a).png",
150
+ "caption": "(a)\nFigure 3: Euler Angles Generation. (a)-(c) are the yaw, pitch, and roll of hand and (d)-(f) are the yaw, pitch, and roll of gripper.",
151
+ "url": "http://arxiv.org/html/2403.17111v2/x5.png"
152
+ },
153
+ "3(b)": {
154
+ "figure_path": "2403.17111v2_figure_3(b).png",
155
+ "caption": "(b)\nFigure 3: Euler Angles Generation. (a)-(c) are the yaw, pitch, and roll of hand and (d)-(f) are the yaw, pitch, and roll of gripper.",
156
+ "url": "http://arxiv.org/html/2403.17111v2/x6.png"
157
+ },
158
+ "3(c)": {
159
+ "figure_path": "2403.17111v2_figure_3(c).png",
160
+ "caption": "(c)\nFigure 3: Euler Angles Generation. (a)-(c) are the yaw, pitch, and roll of hand and (d)-(f) are the yaw, pitch, and roll of gripper.",
161
+ "url": "http://arxiv.org/html/2403.17111v2/x7.png"
162
+ },
163
+ "3(d)": {
164
+ "figure_path": "2403.17111v2_figure_3(d).png",
165
+ "caption": "(d)\nFigure 3: Euler Angles Generation. (a)-(c) are the yaw, pitch, and roll of hand and (d)-(f) are the yaw, pitch, and roll of gripper.",
166
+ "url": "http://arxiv.org/html/2403.17111v2/x8.png"
167
+ },
168
+ "3(e)": {
169
+ "figure_path": "2403.17111v2_figure_3(e).png",
170
+ "caption": "(e)\nFigure 3: Euler Angles Generation. (a)-(c) are the yaw, pitch, and roll of hand and (d)-(f) are the yaw, pitch, and roll of gripper.",
171
+ "url": "http://arxiv.org/html/2403.17111v2/x9.png"
172
+ },
173
+ "3(f)": {
174
+ "figure_path": "2403.17111v2_figure_3(f).png",
175
+ "caption": "(f)\nFigure 3: Euler Angles Generation. (a)-(c) are the yaw, pitch, and roll of hand and (d)-(f) are the yaw, pitch, and roll of gripper.",
176
+ "url": "http://arxiv.org/html/2403.17111v2/x10.png"
177
+ },
178
+ "4": {
179
+ "figure_path": "2403.17111v2_figure_4.png",
180
+ "caption": "Figure 4: Smooth of Euler Angle",
181
+ "url": "http://arxiv.org/html/2403.17111v2/x11.png"
182
+ },
183
+ "5(a)": {
184
+ "figure_path": "2403.17111v2_figure_5(a).png",
185
+ "caption": "(a) X\nFigure 5: Human demonstration and new trajectories generated by modified DMP.",
186
+ "url": "http://arxiv.org/html/2403.17111v2/x12.png"
187
+ },
188
+ "5(b)": {
189
+ "figure_path": "2403.17111v2_figure_5(b).png",
190
+ "caption": "(b) Y\nFigure 5: Human demonstration and new trajectories generated by modified DMP.",
191
+ "url": "http://arxiv.org/html/2403.17111v2/x13.png"
192
+ },
193
+ "5(c)": {
194
+ "figure_path": "2403.17111v2_figure_5(c).png",
195
+ "caption": "(c) Z\nFigure 5: Human demonstration and new trajectories generated by modified DMP.",
196
+ "url": "http://arxiv.org/html/2403.17111v2/x14.png"
197
+ },
198
+ "5(d)": {
199
+ "figure_path": "2403.17111v2_figure_5(d).png",
200
+ "caption": "(d) Yaw, Pitch and Roll\nFigure 5: Human demonstration and new trajectories generated by modified DMP.",
201
+ "url": "http://arxiv.org/html/2403.17111v2/x15.png"
202
+ },
203
+ "5(e)": {
204
+ "figure_path": "2403.17111v2_figure_5(e).png",
205
+ "caption": "(e) Distance\nFigure 5: Human demonstration and new trajectories generated by modified DMP.",
206
+ "url": "http://arxiv.org/html/2403.17111v2/x16.png"
207
+ },
208
+ "5(f)": {
209
+ "figure_path": "2403.17111v2_figure_5(f).png",
210
+ "caption": "(f) 3D Trajectory\nFigure 5: Human demonstration and new trajectories generated by modified DMP.",
211
+ "url": "http://arxiv.org/html/2403.17111v2/x17.png"
212
+ }
213
+ },
214
+ "validation": true,
215
+ "references": [],
216
+ "url": "http://arxiv.org/html/2403.17111v2"
217
+ }
20240819/2404.06599v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240819/2404.06913v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240819/2405.10308v4.json ADDED
The diff for this file is too large to render. See raw diff
 
20240819/2405.11389v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240819/2405.14137v2.json ADDED
@@ -0,0 +1,111 @@
1
+ {
2
+ "title": "RET-CLIP: A Retinal Image Foundation Model Pre-trained with Clinical Diagnostic Reports",
3
+ "abstract": "The Vision-Language Foundation model is increasingly investigated in the fields of computer vision and natural language processing, yet its exploration in ophthalmology and broader medical applications remains limited. The challenge is the lack of labeled data for the training of foundation model. To handle this issue, a CLIP-style retinal image foundation model is developed in this paper. Our foundation model, RET-CLIP, is specifically trained on a dataset of 193,865 patients to extract general features of color fundus photographs (CFPs), employing a tripartite optimization strategy to focus on left eye, right eye, and patient level to reflect real-world clinical scenarios. Extensive experiments demonstrate that RET-CLIP outperforms existing benchmarks across eight diverse datasets spanning four critical diagnostic categories: diabetic retinopathy, glaucoma, multiple disease diagnosis, and multi-label classification of multiple diseases, which demonstrate the performance and generality of our foundation model. The sourse code and pre-trained model are available at https://github.com/sStonemason/RET-CLIP.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Foundation models trained on large-scale, multi-task datasets are now becoming increasingly popular and have achieved success in the fields of computer vision and natural language processing. Foundation models excel in generalization in feature extraction, offering significant potential for addressing the complex challenges of clinical applications. However, the development of medical foundation models is still in its nascent phase, primarily hindered by the lack of high-quality data and concerns around patient privacy. Although initial efforts have been made [23 ###reference_b23###, 19 ###reference_b19###, 24 ###reference_b24###, 9 ###reference_b9###, 12 ###reference_b12###, 6 ###reference_b6###, 11 ###reference_b11###], the effectiveness of these models, particularly in analyzing retina fundus images, has yet to meet expectations, underscoring the urgent need for focused advancements in this area.\nIn the clinical diagnosis and treatment of ocular diseases, medical imaging, such as color fundus photography (CFP), and the detailed image interpretations and diagnostic reports written by professional ophthalmologists are indispensable. This makes the clinics of ophthalmology inherently rich in image-text multi-modality data, which holds significant potential for enhancing clinical applications. RETFound [25 ###reference_b25###] is a foundation model for retinal images based on self-supervised learning. However, it solely utilizes image data and overlooks the equally vast amount of clinical diagnostic text. To address this limitation, CLIP [17 ###reference_b17###], a powerful vision-language self-supervised paradigm, is widely explored in foundation models. By aligning the information of image and text in a shared representation space using a large corpus of image-text pairs, CLIP-style models can understand and associate visual content with natural language information. This results in feature representations with stronger generalization capabilities. Many studies focus on training vision-text models in the medical field [23 ###reference_b23###, 19 ###reference_b19###, 9 ###reference_b9###, 22 ###reference_b22###, 18 ###reference_b18###, 7 ###reference_b7###, 20 ###reference_b20###, 2 ###reference_b2###]. PMC-CLIP [9 ###reference_b9###] collects image-description pairs from large amount of scientific documents and trains a CLIP-style model based on them. FLAIR [18 ###reference_b18###] is a pre-trained vision-language model designed to understand retinal fundus images. The textual data utilized in such research often comes from captions in medical papers or through the manual annotation of simple labels. However, clinical diagnostic reports, rich in valuable textual information, remain underutilized in this context.\nMoreover, the conventional approaches often involve treating CFPs of individual eyes as separate entities during model training. This necessitates the extraction of information corresponding to each eye from the original clinical diagnostic reports, which may not always clearly differentiate between left and right eyes. The manual processing involved in this procedure requires specialized knowledge and could introduce errors and increase costs significantly due to the potential for human-induced noise. 
Conversely, considering both eyes of a patient together provides a more holistic and clinically meaningful approach in clinical scenarios.\nTo alleviate the above issues, we have the following contributions in this paper: Firstly, we propose a vision-language foundation model for CFPs, named RET-CLIP, which we believe is the first attempt to leverage clinical diagnostic reports to build a retinal foundation model, enriching the model\u2019s visual encoding capabilities with practicality and authenticity. The diagnostic reports in Chinese are included, extending the linguistic versatility of the research domain beyond English. Secondly, a novel strategy is proposed to decouple the information of left and right eyes in diagnostic reports, which is a simple yet effective paradigm for building a retinal foundation model. In practical scenarios, diagnostic reports are usually patient-level, mixing information from both eyes, which brings a big challenge for directly using CLIP to build foundation models. The proposed monocular and patient-level contrastive learning approach can handle this challenge in the ophthalmology domain. Lastly, our model achieves state-of-the-art performance across diverse tasks and datasets, confirming the effectiveness of the proposed training strategy."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Method",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Data Collection and Preprocessing",
21
+ "text": "Dataset acquisition.\nWe collected a dataset of retina fundus binocular images-text triplets (RET-Clinical) at the patient level for RET-CLIP. The dataset includes a total of 193,865 samples from Beijing Tongren Hospital, Beijing, China. Each patient\u2019s triplet includes two CFPs for left and right eyes, alongside a clinical diagnostic report.\nData preprocessing and augmentation.\nFor the CFPs, all of them are resized to . The augmentation includes random crop followed by resizing to , random horizontal flipping, color jitter, and image normalization. For diagnostic reports, we focus on correcting typos and consecutive punctuation errors caused by human input, restoring abbreviations to their full expressions, unifying mixed Chinese and English expressions into Chinese to align with our text encoder\u2019s language capabilities, and ensuring the text is coherent and grammatically correct by manual scrutiny. It\u2019s important to highlight that the preprocessing of text data only involves basic text standardization mentioned above, avoiding the need for advanced clinical knowledge or modifications that may alter the original content or meaning."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Model Architecture",
27
+ "text": "As shown in Figure 1 ###reference_###, we trained a Visual-Language model called RET-CLIP under the CLIP paradigm using our constructed binocular images-text triplets. RET-CLIP consists of a visual encoder and a text encoder, which extract image features from CFPs and text features from clinical diagnostic reports, respectively. During pre-training, image-text contrastive learning is performed at the monocular and patient level jointly. Patient level examines data features from a holistic patient perspective, effectively leveraging the information in raw data while minimizing the interference of manual preprocessing in the pre-training phase. Concurrently, the binocular level guides the model towards acquiring finer-grained features than the patient level. Combined together, these methodologies can improve RET-CLIP\u2019s performance.\n###figure_1### Given a mini-batch containing binocular images-text triplets (i.e., patients), , where , and represents the CFP of left eye, the CFP of right eye and the diagnostic report of the th patient, respectively. The visual encoder takes and as input, while the text encoder is fed with .\nVisual encoder.\nThe left and right CFPs for a patient are encoded to the embedding dimension of using a ViT-based [5 ###reference_b5###] encoder respectively:\nwhere and represent the image features of the left and right eye, respectively. Next, concatenation and a simple Multilayer Perceptron (MLP) are employed to merge the image features of left and right eyes to derive comprehensive patient-level image features:\nwhere denotes concatenation.\nText encoder.\nFor a given patient\u2019s diagnostic report , a BERT-based [4 ###reference_b4###] encoder is implemented to encode the clinical descriptions with a text token of length :\nwhere denotes the sentence embedding, denotes the embedding for [CLS] token. We then implement three stacked two-layer nonlinear MLPs , , to decouple into textual features representing the left eye, right eye, and patient level, termed as , , and , respectively:"
28
+ },
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "Training Objective",
33
+ "text": "For the provided mini-batch, termed as , the extracted feature set , which is , is then divided into three subsets: , , and , corresponding to left eye, right eye, and patient level, respectively. The image and text features of the same patient in each subset are positive samples of each other, while the rest are negative samples. The cosine similarity matrix is calculated on each subset.\nFor the subset of left eye features, we obtain the image feature matrix and the text feature matrix . We measure the inter-sample similarity, termed as and , using the cosine distance :\nThen we calculate the contrastive loss of the left eye:\nwhere and represent the one-hot labels, refers to InfoNCE loss [13 ###reference_b13###].\nThen we calculate and for right eye and patient level based on and in the same way. The final loss is the sum of the above three:"
34
+ },
35
+ {
36
+ "section_id": "2.4",
37
+ "parent_section_id": "2",
38
+ "section_name": "Implementation",
39
+ "text": "The vision encoder utilizes the base-sized version of the vision transformer (ViT-base) [5 ###reference_b5###], while the text encoder employs the base-sized version of RoBERTa (RoBERTa-base) [10 ###reference_b10###], both are initialized with the Chinese-CLIP weights [21 ###reference_b21###].\nAdamW is used as the optimizer. The batch size is 256, and training is performed using NVIDIA GeForce RTX 4090. The training process consists of 10 epochs, with the first 50 steps dedicated to warming up the model (from 0 to a learning rate of )."
40
+ },
41
+ {
42
+ "section_id": "3",
43
+ "parent_section_id": null,
44
+ "section_name": "Experiments",
45
+ "text": ""
46
+ },
47
+ {
48
+ "section_id": "3.1",
49
+ "parent_section_id": "3",
50
+ "section_name": "Tasks and Datasets",
51
+ "text": "We focus on designing downstream evaluation experiments primarily for visual tasks. These tasks contain four main categories: diagnosis of diabetic retinopathy, glaucoma, multiple diseases, and multi-label classification of multiple diseases.\nFor diabetic retinopathy diagnosis, IDRID [16 ###reference_b16###] and APTOS-2019 (https://www.kaggle.com/competitions/aptos2019-blindness-detection/data) are used. The labels for diabetic retinopathy are no, mild, moderate, severe, and proliferative retinopathy. The IDRID dataset comprises 516 images, while the APTOS dataset contains 3662 images.\nFor glaucoma diagnosis, PAPILA [8 ###reference_b8###] (488 images in total) and Glaucoma Fundus [1 ###reference_b1###] (1544 images in total) are used. They both have three categorical labels, non-glaucoma, suspected glaucoma (early glaucoma), and glaucoma (advanced glaucoma).\nFor multiple disease diagnosis, JSIEC [3 ###reference_b3###] (1000 in total) and Retina (https://www.kaggle.com/datasets/jr2ngb/cataractdataset) (601 in total) are tested. JSIEC contains 39 categories of common referable fundus diseases and conditions. Retina includes labels for normal, glaucoma, cataract, and other retinal diseases.\nFor multi-label classification of multiple diseases, RFMID [15 ###reference_b15###] and ODIR (https://odir2019.grand-challenge.org/) are tested. RFMID includes 3200 images with 28 categories of common referable fundus diseases and conditions. ODIR includes 10000 images (5000 patients\u2019 paired left and right eyes) with labels of normal, diabetic retinopathy, glaucoma, cataract,age-related macular degeneration (AMD), hypertension, myopia, and other diseases.\nFor the IDRIR, the entire dataset is officially divided into a test set comprising 20% of the data, with the remaining 80% designated as the training set. In our experiments, we further split the training set into a training set and a validation set using a 4:1 ratio. Similarly, for the PAPLA, we follow the official partitioning method, which aligns with the approach described above. Regarding the RFMID, the official division includes distinct sets for training, validation, and testing; we adhere to this official partitioning. For all other datasets, we divide them into training, validation, and test sets using a 0.56:0.14:0.3 ratio, following RETFound\u2019s [25 ###reference_b25###] partitioning method. For all datasets, samples within each category are initially distributed based on the specified proportions before being combined to ensure consistent category distribution across the training, validation, and test sets.\nWhen adapting to downstream tasks, the input image is mapped to a high-level feature representation by the visual encoder. A simple linear prediction head is then applied, followed by a Sigmoid or Softmax layer to achieve classification.\nFor each task, two adaptation methods are implemented: linear probing, training the classifier only with the encoder frozen, and fine-tuning, where both the encoder and classifier are trained. Each evaluation process consists of 50 epochs with a batch size of 16. The model weights with the best performance on the validation set are saved for testing."
52
+ },
53
+ {
54
+ "section_id": "3.2",
55
+ "parent_section_id": "3",
56
+ "section_name": "Comparision Methods and Evaluation Metrics",
57
+ "text": "To demonstrate the superiority of our method, we compare two broad categories of models: foundation models trained on non-CFP datasets (Chinese-CLIP [21 ###reference_b21###], PMC-CLIP [9 ###reference_b9###], DINOv2 [14 ###reference_b14###]) and models designed for CFP vision tasks (RETFound [25 ###reference_b25###], FLAIR [18 ###reference_b18###]).\nWe use the area under the receiver operating curve (AUROC) and area under the precision-recall curve (AUPR) as the evaluation metrics. We evaluate five iterations with different random seeds for each model on each downstream dataset to calculate the mean values. We also conduct the t-test for each downstream task to determine the significance level at which the top-performing method surpasses the others (see Supplementary Materials)."
58
+ },
59
+ {
60
+ "section_id": "3.3",
61
+ "parent_section_id": "3",
62
+ "section_name": "Result",
63
+ "text": "RET-CLIP outperforms five comparison models across eight datasets (four categories) as introduced before, demonstrating strong generalization capabilities.\nFor linear probing, the results are shown in Table 1 ###reference_### and Table 2 ###reference_###. RET-CLIP demonstrates superior performance on almost all datasets, which indicates that RET-CLIP has learned a rich feature representation during the pre-training phase, demonstrating the capability to capture high-quality features.\nFor fine-tuning, as shown in Table 3 ###reference_### and Table 4 ###reference_###, RET-CLIP demonstrates superior performance across nearly all tasks. This outcome substantiates RET-CLIP\u2019s robust feature extraction and generalization capabilities. Furthermore,\nit suggests that RET-CLIP not only captures high-quality features but also exhibits strong adaptability, enabling effective customization for specific tasks.\nIt\u2019s noteworthy that the previous foundation models designed for CFPs do not exhibit an advantage over models trained on non-CFP datasets. RETFound\u2019s [25 ###reference_b25###] image reconstruction-focused paradigm may prioritize features related to the rebuilding of CFP, which lack the granularity and quality needed for specific downstream tasks, hindering its broader applicability. FLAIR [18 ###reference_b18###], while is a CLIP-style model, does not suit ophthalmic tasks as it uses the text provision method employed by the original CLIP [17 ###reference_b17###], which is designed for natural contexts, offering limited textual insights from single labels. Moreover, its dependence on public datasets for training constrains its performance due to their limited scale and quality. In contrast, RET-CLIP leverages rich textual information from clinical reports to extract detailed features for ophthalmic tasks better, showcasing the benefits of integrating diagnostic reports into the training of medical CLIP-style models."
64
+ },
65
+ {
66
+ "section_id": "3.4",
67
+ "parent_section_id": "3",
68
+ "section_name": "Ablation study",
69
+ "text": "The results, as shown in Table 5 ###reference_###, confirm the effectiveness of optimizing objectives at both monocular and patient levels. As previously discussed, the combination of the global information provided at the patient level with the finer-grained features contributed at the monocular level is essential to achieve optimal performance."
70
+ },
71
+ {
72
+ "section_id": "4",
73
+ "parent_section_id": null,
74
+ "section_name": "Conclusion",
75
+ "text": "In this study, we compile a binocular images-text dataset, RET-Clinical, derived from 193,865 clinical patients, with which, we jointly optimize and pre-train a CLIP-style model, RET-CLIP, cooperating with the information of left eye, right eye, and patient level. RET-CLIP achieves state-of-the-art results across eight downstream tasks spanning four critical diagnostic categories. Our research narrows the existing void in ophthalmic vision-language models by integrating textual data from clinical diagnostic reports, thereby offering insights into the applicability of raw clinical texts in the wider medical domain."
76
+ }
77
+ ],
78
+ "appendix": [],
79
+ "tables": {
80
+ "1": {
81
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Diabetic retinopathy and glaucoma diagnosis results for linear probing. The best results on each metric are highlighted in bold.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S3.T1.1\" style=\"width:433.6pt;height:117.5pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-48.9pt,13.3pt) scale(0.815883429901162,0.815883429901162) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T1.1.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S3.T1.1.1.1.1.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.1.1.1.1\">Models</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"S3.T1.1.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.1.1.2.1\">IDRID</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"S3.T1.1.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.1.1.3.1\">APTOS2019</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"S3.T1.1.1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.1.1.4.1\">PAPILA</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S3.T1.1.1.1.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.1.1.5.1\">Glaucoma Fundus</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.1.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T1.1.1.2.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.2.2.1.1\">AUROC</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S3.T1.1.1.2.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.2.2.2.1\">AUPR</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T1.1.1.2.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.2.2.3.1\">AUROC</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S3.T1.1.1.2.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.2.2.4.1\">AUPR</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T1.1.1.2.2.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.2.2.5.1\">AUROC</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S3.T1.1.1.2.2.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.2.2.6.1\">AUPR</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T1.1.1.2.2.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.2.2.7.1\">AUROC</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T1.1.1.2.2.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.2.2.8.1\">AUPR</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.1.1.3.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.3.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.3.1.1.1\">CN-CLIP <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" 
href=\"https://arxiv.org/html/2405.14137v2#bib.bib21\" title=\"\">21</a>]</cite></span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.1.3.1.2\">0.633</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.3.1.3\">0.336</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.1.3.1.4\">0.806</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.3.1.5\">0.429</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.1.3.1.6\">0.658</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.3.1.7\">0.473</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.1.3.1.8\">0.863</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.1.3.1.9\">0.716</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.1.4.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.1.1.4.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.4.2.1.1\">PMC-CLIP <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.14137v2#bib.bib9\" title=\"\">9</a>]</cite></span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.1.4.2.2\">0.585</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.1.1.4.2.3\">0.303</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.1.4.2.4\">0.756</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.1.1.4.2.5\">0.368</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.1.4.2.6\">0.773</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.1.1.4.2.7\">0.603</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.1.4.2.8\">0.899</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.1.4.2.9\">0.780</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.1.5.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.1.1.5.3.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.5.3.1.1\">DinoV2 <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.14137v2#bib.bib14\" title=\"\">14</a>]</cite></span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.1.5.3.2\">0.748</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.1.1.5.3.3\">0.463</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.1.5.3.4\">0.783</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.1.1.5.3.5\">0.432</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.1.5.3.6\">0.740</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.1.1.5.3.7\">0.556</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.1.5.3.8\">0.891</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.1.5.3.9\">0.746</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.1.6.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.1.1.6.4.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.6.4.1.1\">RETFound <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.14137v2#bib.bib25\" title=\"\">25</a>]</cite></span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.1.6.4.2\">0.665</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.1.1.6.4.3\">0.368</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.1.6.4.4\">0.745</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.1.1.6.4.5\">0.370</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S3.T1.1.1.6.4.6\">0.620</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.1.1.6.4.7\">0.511</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.1.6.4.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.6.4.8.1\">0.899</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.1.6.4.9\">0.773</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.1.7.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.1.1.7.5.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.7.5.1.1\">FLAIR <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.14137v2#bib.bib18\" title=\"\">18</a>]</cite></span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.1.7.5.2\">0.700</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.1.1.7.5.3\">0.475</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.1.7.5.4\">0.849</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.1.1.7.5.5\">0.515</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.1.7.5.6\">0.746</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.1.1.7.5.7\">0.595</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.1.7.5.8\">0.872</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.1.7.5.9\">0.672</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.1.8.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.8.6.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.8.6.1.1\">OURS</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T1.1.1.8.6.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.8.6.2.1\">0.856</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.8.6.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.8.6.3.1\">0.616</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T1.1.1.8.6.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.8.6.4.1\">0.923</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.8.6.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.8.6.5.1\">0.656</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T1.1.1.8.6.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.8.6.6.1\">0.775</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.8.6.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.8.6.7.1\">0.667</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T1.1.1.8.6.8\">0.893</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T1.1.1.8.6.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.8.6.9.1\">0.789</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
82
+ "capture": "Table 1: Diabetic retinopathy and glaucoma diagnosis results for linear probing. The best results on each metric are highlighted in bold."
83
+ },
84
+ "2": {
85
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Multiple disease diagnosis and multi-label classification of multiple diseases results for linear probing.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S3.T2.1\" style=\"width:433.6pt;height:117.5pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-48.9pt,13.3pt) scale(0.815883429901162,0.815883429901162) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S3.T2.1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S3.T2.1.1.1.1.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.1.1.1.1\">Models</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"S3.T2.1.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.1.1.2.1\">JSIEC</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"S3.T2.1.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.1.1.3.1\">Retina</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"S3.T2.1.1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.1.1.4.1\">RFMID</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S3.T2.1.1.1.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.1.1.5.1\">ODIR</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T2.1.1.2.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.2.2.1.1\">AUROC</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S3.T2.1.1.2.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.2.2.2.1\">AUPR</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T2.1.1.2.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.2.2.3.1\">AUROC</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S3.T2.1.1.2.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.2.2.4.1\">AUPR</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T2.1.1.2.2.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.2.2.5.1\">AUROC</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S3.T2.1.1.2.2.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.2.2.6.1\">AUPR</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T2.1.1.2.2.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.2.2.7.1\">AUROC</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T2.1.1.2.2.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.2.2.8.1\">AUPR</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.3.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.3.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.3.1.1.1\">CN-CLIP <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.14137v2#bib.bib21\" 
title=\"\">21</a>]</cite></span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.1.3.1.2\">0.783</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.3.1.3\">0.239</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.1.3.1.4\">0.738</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.3.1.5\">0.514</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.1.3.1.6\">0.819</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.3.1.7\">0.293</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.1.3.1.8\">0.801</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.1.3.1.9\">0.483</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.4.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T2.1.1.4.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.4.2.1.1\">PMC-CLIP <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.14137v2#bib.bib9\" title=\"\">9</a>]</cite></span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.4.2.2\">0.947</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.4.2.3\">0.654</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.4.2.4\">0.778</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.4.2.5\">0.597</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.4.2.6\">0.854</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.4.2.7\">0.372</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.4.2.8\">0.800</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.4.2.9\">0.506</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.5.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T2.1.1.5.3.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.5.3.1.1\">DinoV2 <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.14137v2#bib.bib14\" title=\"\">14</a>]</cite></span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.5.3.2\">0.873</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.5.3.3\">0.446</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.5.3.4\">0.813</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.5.3.5\">0.635</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.5.3.6\">0.860</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.5.3.7\">0.430</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.5.3.8\">0.825</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.5.3.9\">0.550</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.6.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T2.1.1.6.4.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.6.4.1.1\">RETFound <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.14137v2#bib.bib25\" title=\"\">25</a>]</cite></span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.6.4.2\">0.704</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.6.4.3\">0.167</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.6.4.4\">0.630</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.6.4.5\">0.434</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.6.4.6\">0.842</td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.6.4.7\">0.409</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.6.4.8\">0.738</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.6.4.9\">0.401</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.7.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T2.1.1.7.5.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.7.5.1.1\">FLAIR <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.14137v2#bib.bib18\" title=\"\">18</a>]</cite></span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.7.5.2\">0.843</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.7.5.3\">0.304</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.7.5.4\">0.773</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.7.5.5\">0.557</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.7.5.6\">0.773</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.7.5.7\">0.254</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.7.5.8\">0.858</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.7.5.9\">0.531</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.8.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.8.6.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.8.6.1.1\">OURS</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T2.1.1.8.6.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.8.6.2.1\">0.982</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.8.6.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.8.6.3.1\">0.855</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T2.1.1.8.6.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.8.6.4.1\">0.935</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.8.6.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.8.6.5.1\">0.864</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T2.1.1.8.6.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.8.6.6.1\">0.925</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.8.6.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.8.6.7.1\">0.552</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T2.1.1.8.6.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.8.6.8.1\">0.902</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T2.1.1.8.6.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.8.6.9.1\">0.682</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
86
+ "capture": "Table 2: Multiple disease diagnosis and multi-label classification of multiple diseases results for linear probing."
87
+ },
88
+ "3": {
89
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Diabetic retinopathy and glaucoma diagnosis results for fine-tuning.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S3.T3.1\" style=\"width:433.6pt;height:117.5pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-48.9pt,13.3pt) scale(0.815883429901162,0.815883429901162) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S3.T3.1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T3.1.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S3.T3.1.1.1.1.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.1.1.1.1.1.1\">Models</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"S3.T3.1.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.1.1.1.1.2.1\">IDRID</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"S3.T3.1.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.1.1.1.1.3.1\">APTOS2019</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"S3.T3.1.1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.1.1.1.1.4.1\">PAPILA</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S3.T3.1.1.1.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.1.1.1.1.5.1\">Glaucoma Fundus</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.1.1.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T3.1.1.2.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.1.1.2.2.1.1\">AUROC</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S3.T3.1.1.2.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.1.1.2.2.2.1\">AUPR</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T3.1.1.2.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.1.1.2.2.3.1\">AUROC</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S3.T3.1.1.2.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.1.1.2.2.4.1\">AUPR</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T3.1.1.2.2.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.1.1.2.2.5.1\">AUROC</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S3.T3.1.1.2.2.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.1.1.2.2.6.1\">AUPR</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T3.1.1.2.2.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.1.1.2.2.7.1\">AUROC</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T3.1.1.2.2.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.1.1.2.2.8.1\">AUPR</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T3.1.1.3.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T3.1.1.3.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.1.1.3.1.1.1\">CN-CLIP <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.14137v2#bib.bib21\" 
title=\"\">21</a>]</cite></span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.1.1.3.1.2\">0.778</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T3.1.1.3.1.3\">0.506</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.1.1.3.1.4\">0.881</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T3.1.1.3.1.5\">0.619</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.1.1.3.1.6\">0.804</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T3.1.1.3.1.7\">0.690</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.1.1.3.1.8\">0.951</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.1.1.3.1.9\">0.876</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.1.1.4.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T3.1.1.4.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.1.1.4.2.1.1\">PMC-CLIP <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.14137v2#bib.bib9\" title=\"\">9</a>]</cite></span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.1.1.4.2.2\">0.785</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T3.1.1.4.2.3\">0.511</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.1.1.4.2.4\">0.776</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T3.1.1.4.2.5\">0.386</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.1.1.4.2.6\">0.798</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T3.1.1.4.2.7\">0.659</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.1.1.4.2.8\">0.925</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.1.1.4.2.9\">0.827</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.1.1.5.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T3.1.1.5.3.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.1.1.5.3.1.1\">DinoV2 <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.14137v2#bib.bib14\" title=\"\">14</a>]</cite></span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.1.1.5.3.2\">0.791</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T3.1.1.5.3.3\">0.533</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.1.1.5.3.4\">0.920</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T3.1.1.5.3.5\">0.675</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.1.1.5.3.6\">0.797</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T3.1.1.5.3.7\">0.681</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.1.1.5.3.8\">0.955</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.1.1.5.3.9\">0.884</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.1.1.6.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T3.1.1.6.4.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.1.1.6.4.1.1\">RETFound <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.14137v2#bib.bib25\" title=\"\">25</a>]</cite></span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.1.1.6.4.2\">0.822</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T3.1.1.6.4.3\">0.496</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.1.1.6.4.4\">0.943</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T3.1.1.6.4.5\">0.726</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.1.1.6.4.6\"><span class=\"ltx_text ltx_font_bold\" 
id=\"S3.T3.1.1.6.4.6.1\">0.855</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T3.1.1.6.4.7\">0.748</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.1.1.6.4.8\">0.943</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.1.1.6.4.9\">0.863</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.1.1.7.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T3.1.1.7.5.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.1.1.7.5.1.1\">FLAIR <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.14137v2#bib.bib18\" title=\"\">18</a>]</cite></span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.1.1.7.5.2\">0.795</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T3.1.1.7.5.3\">0.529</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.1.1.7.5.4\">0.932</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T3.1.1.7.5.5\">0.686</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.1.1.7.5.6\">0.752</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T3.1.1.7.5.7\">0.610</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.1.1.7.5.8\">0.905</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.1.1.7.5.9\">0.792</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.1.1.8.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_r ltx_border_t\" id=\"S3.T3.1.1.8.6.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.1.1.8.6.1.1\">OURS</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T3.1.1.8.6.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.1.1.8.6.2.1\">0.863</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S3.T3.1.1.8.6.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.1.1.8.6.3.1\">0.630</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T3.1.1.8.6.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.1.1.8.6.4.1\">0.951</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S3.T3.1.1.8.6.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.1.1.8.6.5.1\">0.748</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T3.1.1.8.6.6\">0.853</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S3.T3.1.1.8.6.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.1.1.8.6.7.1\">0.754</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T3.1.1.8.6.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.1.1.8.6.8.1\">0.958</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T3.1.1.8.6.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.1.1.8.6.9.1\">0.889</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
90
+ "capture": "Table 3: Diabetic retinopathy and glaucoma diagnosis results for fine-tuning."
91
+ },
92
+ "4": {
93
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T4\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 4: </span>Multiple disease diagnosis and multi-label classification of multiple diseases results for fine-tuning.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S3.T4.1\" style=\"width:433.6pt;height:117.5pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-48.9pt,13.3pt) scale(0.815883429901162,0.815883429901162) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S3.T4.1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T4.1.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S3.T4.1.1.1.1.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.1.1.1.1.1.1\">Models</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"S3.T4.1.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.1.1.1.1.2.1\">JSIEC</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"S3.T4.1.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.1.1.1.1.3.1\">Retina</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"S3.T4.1.1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.1.1.1.1.4.1\">RFMID</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S3.T4.1.1.1.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.1.1.1.1.5.1\">ODIR</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.1.1.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T4.1.1.2.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.1.1.2.2.1.1\">AUROC</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S3.T4.1.1.2.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.1.1.2.2.2.1\">AUPR</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T4.1.1.2.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.1.1.2.2.3.1\">AUROC</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S3.T4.1.1.2.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.1.1.2.2.4.1\">AUPR</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T4.1.1.2.2.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.1.1.2.2.5.1\">AUROC</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S3.T4.1.1.2.2.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.1.1.2.2.6.1\">AUPR</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T4.1.1.2.2.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.1.1.2.2.7.1\">AUROC</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T4.1.1.2.2.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.1.1.2.2.8.1\">AUPR</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T4.1.1.3.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T4.1.1.3.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.1.1.3.1.1.1\">CN-CLIP <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.14137v2#bib.bib21\" 
title=\"\">21</a>]</cite></span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.1.1.3.1.2\">0.992</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T4.1.1.3.1.3\">0.882</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.1.1.3.1.4\">0.839</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T4.1.1.3.1.5\">0.691</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.1.1.3.1.6\">0.901</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T4.1.1.3.1.7\">0.480</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.1.1.3.1.8\">0.859</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.1.1.3.1.9\">0.598</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.1.1.4.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T4.1.1.4.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.1.1.4.2.1.1\">PMC-CLIP <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.14137v2#bib.bib9\" title=\"\">9</a>]</cite></span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.1.1.4.2.2\">0.964</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T4.1.1.4.2.3\">0.738</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.1.1.4.2.4\">0.875</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T4.1.1.4.2.5\">0.742</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.1.1.4.2.6\">0.894</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T4.1.1.4.2.7\">0.456</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.1.1.4.2.8\">0.819</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.1.1.4.2.9\">0.542</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.1.1.5.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T4.1.1.5.3.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.1.1.5.3.1.1\">DinoV2 <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.14137v2#bib.bib14\" title=\"\">14</a>]</cite></span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.1.1.5.3.2\">0.996</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T4.1.1.5.3.3\">0.918</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.1.1.5.3.4\">0.893</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T4.1.1.5.3.5\">0.771</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.1.1.5.3.6\">0.914</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T4.1.1.5.3.7\">0.547</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.1.1.5.3.8\">0.867</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.1.1.5.3.9\">0.621</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.1.1.6.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T4.1.1.6.4.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.1.1.6.4.1.1\">RETFound <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.14137v2#bib.bib25\" title=\"\">25</a>]</cite></span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.1.1.6.4.2\">0.990</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T4.1.1.6.4.3\">0.884</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.1.1.6.4.4\">0.847</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T4.1.1.6.4.5\">0.697</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.1.1.6.4.6\">0.889</td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_r\" id=\"S3.T4.1.1.6.4.7\">0.489</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.1.1.6.4.8\">0.850</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.1.1.6.4.9\">0.620</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.1.1.7.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T4.1.1.7.5.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.1.1.7.5.1.1\">FLAIR <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.14137v2#bib.bib18\" title=\"\">18</a>]</cite></span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.1.1.7.5.2\">0.917</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T4.1.1.7.5.3\">0.704</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.1.1.7.5.4\">0.863</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T4.1.1.7.5.5\">0.679</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.1.1.7.5.6\">0.870</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T4.1.1.7.5.7\">0.397</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.1.1.7.5.8\">0.860</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.1.1.7.5.9\">0.601</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.1.1.8.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_r ltx_border_t\" id=\"S3.T4.1.1.8.6.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.1.1.8.6.1.1\">OURS</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T4.1.1.8.6.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.1.1.8.6.2.1\">0.999</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S3.T4.1.1.8.6.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.1.1.8.6.3.1\">0.972</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T4.1.1.8.6.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.1.1.8.6.4.1\">0.942</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S3.T4.1.1.8.6.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.1.1.8.6.5.1\">0.871</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T4.1.1.8.6.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.1.1.8.6.6.1\">0.946</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S3.T4.1.1.8.6.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.1.1.8.6.7.1\">0.581</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T4.1.1.8.6.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.1.1.8.6.8.1\">0.917</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T4.1.1.8.6.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.1.1.8.6.9.1\">0.715</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
94
+ "capture": "Table 4: Multiple disease diagnosis and multi-label classification of multiple diseases results for fine-tuning."
95
+ },
96
+ "5": {
97
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T5\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 5: </span>Results of ablation studies. Monocular-level loss refers to plus .</figcaption>\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S3.T5.5\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T5.5.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r ltx_border_tt\" id=\"S3.T5.5.1.1.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" colspan=\"3\" id=\"S3.T5.5.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.5.1.1.2.1\">AUROC</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"3\" id=\"S3.T5.5.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.5.1.1.3.1\">AUPR</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T5.5.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T5.5.2.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.5.2.2.1.1\">Monocular-level Loss</span></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T5.5.2.2.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.2.2.2.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.2.2.2.1.1\">\u2713</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T5.5.2.2.3\"></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S3.T5.5.2.2.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.2.2.4.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.2.2.4.1.1\">\u2713</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T5.5.2.2.5\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.2.2.5.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.2.2.5.1.1\">\u2713</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T5.5.2.2.6\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T5.5.2.2.7\">\u2713</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T5.5.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T5.5.3.3.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.5.3.3.1.1\">Patient-level Loss</span></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T5.5.3.3.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.3.3.2.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.3.3.2.1.1\">\u2713</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T5.5.3.3.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.3.3.3.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.3.3.3.1.1\">\u2713</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S3.T5.5.3.3.4\"></td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T5.5.3.3.5\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.3.3.5.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.3.3.5.1.1\">\u2713</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T5.5.3.3.6\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.3.3.6.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.3.3.6.1.1\">\u2713</span>\n</span>\n</td>\n<td class=\"ltx_td\" id=\"S3.T5.5.3.3.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T5.5.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T5.5.4.4.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.5.4.4.1.1\">IDRID</span></th>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S3.T5.5.4.4.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.4.4.2.1\">\n<span class=\"ltx_p\" 
id=\"S3.T5.5.4.4.2.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.5.4.4.2.1.1.1\">0.863</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S3.T5.5.4.4.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.4.4.3.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.4.4.3.1.1\">0.860</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S3.T5.5.4.4.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.4.4.4.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.4.4.4.1.1\">0.847</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S3.T5.5.4.4.5\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.4.4.5.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.4.4.5.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.5.4.4.5.1.1.1\">0.63</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S3.T5.5.4.4.6\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.4.4.6.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.4.4.6.1.1\">0.623</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S3.T5.5.4.4.7\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.4.4.7.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.4.4.7.1.1\">0.619</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T5.5.5.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T5.5.5.5.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.5.5.5.1.1\">APTOS-2019</span></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T5.5.5.5.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.5.5.2.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.5.5.2.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.5.5.5.2.1.1.1\">0.951</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T5.5.5.5.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.5.5.3.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.5.5.3.1.1\">0.945</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S3.T5.5.5.5.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.5.5.4.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.5.5.4.1.1\">0.941</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T5.5.5.5.5\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.5.5.5.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.5.5.5.1.1\">0.748</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T5.5.5.5.6\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.5.5.6.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.5.5.6.1.1\">0.737</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T5.5.5.5.7\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.5.5.7.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.5.5.7.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.5.5.5.7.1.1.1\">0.759</span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T5.5.6.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T5.5.6.6.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.5.6.6.1.1\">PAPILA</span></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T5.5.6.6.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.6.6.2.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.6.6.2.1.1\">0.853</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T5.5.6.6.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.6.6.3.1\">\n<span 
class=\"ltx_p\" id=\"S3.T5.5.6.6.3.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.5.6.6.3.1.1.1\">0.864</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S3.T5.5.6.6.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.6.6.4.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.6.6.4.1.1\">0.846</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T5.5.6.6.5\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.6.6.5.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.6.6.5.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.5.6.6.5.1.1.1\">0.754</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T5.5.6.6.6\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.6.6.6.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.6.6.6.1.1\">0.745</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T5.5.6.6.7\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.6.6.7.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.6.6.7.1.1\">0.739</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T5.5.7.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T5.5.7.7.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.5.7.7.1.1\">Glaucoma Fundus</span></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T5.5.7.7.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.7.7.2.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.7.7.2.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.5.7.7.2.1.1.1\">0.958</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T5.5.7.7.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.7.7.3.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.7.7.3.1.1\">0.948</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S3.T5.5.7.7.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.7.7.4.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.7.7.4.1.1\">0.957</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T5.5.7.7.5\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.7.7.5.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.7.7.5.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.5.7.7.5.1.1.1\">0.889</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T5.5.7.7.6\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.7.7.6.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.7.7.6.1.1\">0.869</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T5.5.7.7.7\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.7.7.7.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.7.7.7.1.1\">0.888</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T5.5.8.8\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T5.5.8.8.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.5.8.8.1.1\">JSIEC</span></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T5.5.8.8.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.8.8.2.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.8.8.2.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.5.8.8.2.1.1.1\">0.999</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T5.5.8.8.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.8.8.3.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.8.8.3.1.1\">0.997</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S3.T5.5.8.8.4\">\n<span class=\"ltx_inline-block 
ltx_align_top\" id=\"S3.T5.5.8.8.4.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.8.8.4.1.1\">0.997</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T5.5.8.8.5\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.8.8.5.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.8.8.5.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.5.8.8.5.1.1.1\">0.972</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T5.5.8.8.6\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.8.8.6.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.8.8.6.1.1\">0.949</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T5.5.8.8.7\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.8.8.7.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.8.8.7.1.1\">0.962</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T5.5.9.9\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T5.5.9.9.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.5.9.9.1.1\">Retina</span></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T5.5.9.9.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.9.9.2.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.9.9.2.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.5.9.9.2.1.1.1\">0.942</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T5.5.9.9.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.9.9.3.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.9.9.3.1.1\">0.939</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S3.T5.5.9.9.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.9.9.4.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.9.9.4.1.1\">0.935</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T5.5.9.9.5\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.9.9.5.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.9.9.5.1.1\">0.871</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T5.5.9.9.6\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.9.9.6.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.9.9.6.1.1\">0.869</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T5.5.9.9.7\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.9.9.7.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.9.9.7.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.5.9.9.7.1.1.1\">0.876</span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T5.5.10.10\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T5.5.10.10.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.5.10.10.1.1\">RFMID</span></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T5.5.10.10.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.10.10.2.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.10.10.2.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.5.10.10.2.1.1.1\">0.946</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T5.5.10.10.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.10.10.3.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.10.10.3.1.1\">0.924</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S3.T5.5.10.10.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.10.10.4.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.10.10.4.1.1\">0.940</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T5.5.10.10.5\">\n<span class=\"ltx_inline-block ltx_align_top\" 
id=\"S3.T5.5.10.10.5.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.10.10.5.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.5.10.10.5.1.1.1\">0.581</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T5.5.10.10.6\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.10.10.6.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.10.10.6.1.1\">0.573</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T5.5.10.10.7\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.10.10.7.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.10.10.7.1.1\">0.578</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T5.5.11.11\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_r\" id=\"S3.T5.5.11.11.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.5.11.11.1.1\">ODIR</span></th>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb\" id=\"S3.T5.5.11.11.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.11.11.2.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.11.11.2.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.5.11.11.2.1.1.1\">0.917</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb\" id=\"S3.T5.5.11.11.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.11.11.3.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.11.11.3.1.1\">0.909</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb ltx_border_r\" id=\"S3.T5.5.11.11.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.11.11.4.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.11.11.4.1.1\">0.905</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb\" id=\"S3.T5.5.11.11.5\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.11.11.5.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.11.11.5.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.5.11.11.5.1.1.1\">0.715</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb\" id=\"S3.T5.5.11.11.6\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.11.11.6.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.11.11.6.1.1\">0.692</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb\" id=\"S3.T5.5.11.11.7\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S3.T5.5.11.11.7.1\">\n<span class=\"ltx_p\" id=\"S3.T5.5.11.11.7.1.1\">0.696</span>\n</span>\n</td>\n</tr>\n</tbody>\n</table>\n</figure>",
98
+ "capture": "Table 5: Results of ablation studies. Monocular-level loss refers to plus ."
99
+ }
100
+ },
101
+ "image_paths": {
102
+ "1": {
103
+ "figure_path": "2405.14137v2_figure_1.png",
104
+ "caption": "Figure 1: Overview of the RET-CLIP foundation model.",
105
+ "url": "http://arxiv.org/html/2405.14137v2/x1.png"
106
+ }
107
+ },
108
+ "validation": true,
109
+ "references": [],
110
+ "url": "http://arxiv.org/html/2405.14137v2"
111
+ }
20240819/2405.14893v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240819/2405.18523v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240819/2405.20602v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240819/2406.04920v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240819/2406.05913v2.json ADDED
@@ -0,0 +1,109 @@
1
+ {
2
+ "title": "Revisiting Multi-User Downlink in IEEE 802.11ax: A Designers Guide to MU-MIMO",
3
+ "abstract": "Downlink (DL) Multi-User (MU) Multiple Input Multiple Output (MU-MIMO) is a key technology that allows multiple concurrent data transmissions from an Access Point (AP) to a selected sub-set of clients for higher network efficiency in IEEE 802.11ax. However, the DL MU-MIMO feature is typically turned off as the default setting in AP vendors\u2019 products; that is, turning on DL MU-MIMO may not help increase network efficiency, which is counter-intuitive. In this article, we provide a sufficiently deep understanding of the interplay between the underlying factors, i.e., CSI overhead and spatial correlation, that can make turning on DL MU-MIMO counter-productive. Furthermore, we provide a fundamental guideline as a function of operational scenarios to address the fundamental question \u201cwhen should DL MU-MIMO be turned on/off\u201d.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "IEEE 802.11ax (Wi-Fi 6) marked a significant evolution milestone via the introduction of Multi-User (MU) communication modes (in contrast with legacy Single-User (SU) communication) for both Uplink (UL) and Downlink (DL) in tri-band (2.4/5/6 GHz) [1 ###reference_b1###]. For the Uplink, this implies the use of trigger-based OFDMA; in this article, we focus solely on DL Multi-User (MU) Multiple Input Multiple Output (MU-MIMO). Legacy Single-User MIMO (SU-MIMO) - the precursor to MU-MIMO - laid the groundwork by allowing transmission of multiple spatial streams from an access point (AP) equipped with multiple antennas to a single client device on downlink. With the proliferation of wireless client devices, a single Wi-Fi network access point (AP) can have multiple associated stations (STAs) [2 ###reference_b2###, 3 ###reference_b3###]. With multi-antenna clients 111However, the number of antennas at the AP always exceeds the number of antennas at a client., it is feasible via DL Transmit Beamforming (TxBF) at the AP to send multiple streams to multiple STAs simultaneously (DL MU-MIMO).\nA typical configuration [4 ###reference_b4###, 5 ###reference_b5###] such as that in Fig. 1 ###reference_### assumes an 8 x 8 AP (e.g., NetGear RAXE500) and 2 x 2 STAs (e.g., iPhone 15 and MacBook Air), implying that a single downlink transmission opportunity can potentially send a total of eight spatial streams 222Note that Wi-Fi 5 (IEEE 802.11ac) included support for MU-MIMO but limited to 4 streams on 5 GHz downlink operation only, whereas Wi-Fi 6 supports up to 8 streams on 2.4/5/6 GHz uplink/downlink operations. to a selected sub-set of clients, e.g., 2 streams to each of four selected STAs. While DL SU-MIMO results in scaling of per-user throughput as a result of multi-stream transmission, its benefits are limited by the fact that most clients support either 1 or 2 spatial streams (i.e., a total of 2-stream transmissions in DL SU-MIMO in Fig. 1 ###reference_###). By contrast, it is evident that in dense overlapped network scenarios - such as the enterprise or residential cluster - DL MU-MIMO provides a natural pathway to increasing network efficiency (aggregate network throughput) by enabling simultaneous transmissions of multiple streams to multiple clients (i.e., a total of 8-stream transmissions in DL MU-MIMO in Fig. 1 ###reference_###), with appropriate choice of the user sub-set and TxBF to minimize inter-user/inter-stream interference.\n###figure_1### Despite the promise of MU-MIMO for improved network capacity via simultaneous transmission to multiple users on downlink333There exists an analogous feature for the uplink: trigger-based OFDMA, whereby a channel may be shared synchronously by multiple users. However, consideration of UL OFDMA is beyond the scope of this article., real-world user testing has revealed significant challenges. A noticeable discrepancy exists between the theoretical speeds advertised by manufacturers who incorporate DL MU-MIMO and the actual throughput measured in specific conditions [6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###]. An industry test report [9 ###reference_b9###] showed that turning on MU-MIMO resulted in 58% aggregate throughput loss compared to SU-MIMO when pairing a 4 x 4 Broadcom-based router with 2 x 2 Qualcomm-based STAs.
An earlier research study [10 ###reference_b10###] demonstrated that DL SU-MIMO achieves 16.8% to 42% higher aggregated throughput than MU-MIMO, based on a test of a commercial 4 x 4 MU-MIMO-capable 802.11ac 5 GHz radio with 1 x 1 Xiaomi Mi 4i smartphones. Such variation in results is attributable to various factors at play, including the complex interplay of channel state information (CSI) overhead, device capabilities, and environmental (propagation) conditions as a function of user location. In this article, we chose the IEEE 802.11ax indoor channel model [11 ###reference_b11###], widely used by industry and academia, for a foundational exploration of DL SU/MU-MIMO throughput. Specifically, as the clients selected for MU-MIMO on downlink are closer to each other in dense networks, increased spatial correlation will lead to significant inter-user and inter-stream interference in DL MU-MIMO. Thus overall network throughput degrades unless counteracted by a combination of inter-user interference cancellation and user selection algorithms [12 ###reference_b12###, 13 ###reference_b13###]. Moreover, CSI overhead affects both SU and MU aggregate throughput; in particular, CSI overhead increases significantly with the dimensionality of MU-MIMO. In turn, this implies that any MU-MIMO design must carefully consider the issue of (optimal) channel sounding periodicity when confronted with channel time variations444Further consideration of this topic is beyond the scope of this article..\nThe lack of a sufficiently deep understanding of the interplay between the various underlying factors discussed has resulted in AP vendors turning off the DL MU-MIMO feature as the default setting in their products, reflecting the current ambivalence surrounding DL MU-MIMO. The primary purpose of this article is therefore to provide new insights underlying the fundamental question: \u201cwhen should DL MU-MIMO be turned on/off\u201d as a function of the operational scenario. By a combination of analysis and computation/simulation, we attempt to answer the above question by\nIdentifying the set of conditions where DL SU-MIMO outperforms MU-MIMO and vice-versa;\nProviding broad \u2018rules of thumb\u2019 regarding the use of DL MU-MIMO in current/future Wi-Fi systems.\nThe rest of this article is organized as follows. Section II ###reference_### introduces the impact of DL SU and MU CSI overhead differences on their effective channel capacity; in Section III ###reference_###, we explore the impact of spatial correlation on the MU channel capacity under the IEEE 802.11ax indoor channel model. In Section IV ###reference_###, a design guideline table for DL MU-MIMO is proposed by unifying the factors discussed in Sections II ###reference_### and III ###reference_###. Finally, Section V ###reference_### concludes this article."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II Factor 1: CSI Overhead",
15
+ "text": "In 802.11ax DL transmission, the AP is the transmitter, called the beamformer, while a STA is the receiver, called the beamformee. Beamforming depends on a channel calibration procedure, called channel sounding in the 802.11ax standard. Channel sounding allows the beamformer to gather the beamforming report(s) that characterize the beamformee location(s) and to steer the transmitted streams precisely toward the beamformee(s).\n###figure_2###"
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "II-A DL SU/MU-MIMO channel sounding",
21
+ "text": "DL SU-MIMO is indicated by codebook info 0 in the High-Efficiency (HE) MIMO control field. As Fig. 2 ###reference_### shows, its channel sounding process consists of four major steps:\nThe beamformer begins the process by broadcasting a Null Data Packet Announcement (NDPA) frame, which is used to gain control of the channel and identify the intended beamformee.\nThe beamformer next transmits a Null Data Packet (NDP) to the beamformee after a Short Interframe Space (SIFS). The NDP is an empty frame that only contains the Physical Layer Protocol Data Unit (PPDU) header. The received NDP is used for channel estimation by analyzing the OFDM training symbols, called HE-LTF, whose length varies with the number of spatial streams.\nFollowing receipt of the NDP, the beamformee responds with a BF feedback matrix in compressed form. The BF feedback matrix instructs how the beamformer should steer the data frame to the beamformee with higher energy. The codebook information in the HE MIMO Control field provides the resolution schemes for compressing the BF feedback matrix.\nThe beamformer receives and recovers the compressed feedback matrix, which is further used as the steering matrix to direct HE data transmissions toward the beamformee.\nBy contrast, DL MU-MIMO, indicated by codebook info 1 in the HE MIMO control field, follows a similar channel sounding protocol to SU-MIMO; however, several major differences exist:\nNDPA frame format: An HE NDPA frame in MU-MIMO includes multiple STA Info fields, one for each beamformee, while the NDPA frame in SU-MIMO only carries a single STA Info field.\nBF Report Poll (BFRP) trigger frame: The compressed BF feedback in SU-MIMO comes right after the NDP. However, the beamformer in DL MU-MIMO must use a control frame - the BFRP Trigger frame - that instructs the beamformees to transmit their BF feedback simultaneously. The AP may transmit further BFRP Trigger frames to gather more feedback if necessary.\nCompressed BF feedback frame format: The HE MU Exclusive BF report is an extra field at the end of the frame for MU-MIMO, which thereby introduces extra CSI overhead;\nBF feedback transmission: The BF feedback in SU-MIMO is transmitted over UL OFDM, while in MU-MIMO it is transmitted over UL OFDMA."
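To make the airtime cost of these exchanges concrete, here is a minimal editorial Python sketch (not from the article; all per-frame durations are assumed placeholders) that totals the sounding airtime for the SU and MU sequences described above:

```python
# Editorial sketch of the sounding exchange airtime described above.
# All per-frame durations are assumed placeholders, not values from the
# article or the 802.11ax standard.
SIFS_US = 16  # SIFS in microseconds (5/6 GHz OFDM PHY)

def sounding_airtime_us(frames_us):
    """Total airtime of a frame sequence separated by SIFS gaps."""
    return sum(frames_us) + SIFS_US * (len(frames_us) - 1)

# SU sounding: NDPA -> NDP -> compressed BF feedback (placeholder durations).
su_frames = [60, 80, 500]

# MU sounding: NDPA -> NDP -> BFRP Trigger -> one UL OFDMA feedback slot.
# The feedback slot is longer: each STA's report is larger (extra HE MU
# Exclusive BF report, finer angle quantization) and the per-STA rate is lower.
mu_frames = [70, 80, 50, 900]

print("SU sounding:", sounding_airtime_us(su_frames), "us")
print("MU sounding:", sounding_airtime_us(mu_frames), "us")
```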
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "II-B CSI Overhead Comparison",
27
+ "text": "CSI overhead in DL SU/MU-MIMO can be calculated based on the CSI frame format indicating each sub-field size, as shown in Fig. 2 ###reference_###. In particular, CSI overhead is dominated by the HE compressed BF feedback, which contains the HE compressed BF report (as well as the extra sub-field - the HE MU Exclusive BF report - in MU-MIMO). The compressed BF report contains the compressed CSI for each sub-carrier, i.e., the V-matrix or steering/precoding matrix555The null-steering step based on zero-forcing (ZF) and minimum mean square error (MMSE) approaches [12 ###reference_b12###, 14 ###reference_b14###], used for precoding in DL MU-MIMO, is not implemented in real AP products [10 ###reference_b10###, 7 ###reference_b7###] because it can be implemented only if the full CSI is obtained, whereas the feedback V-matrix provides only partial CSI. Besides, the null-steering step incurs additional computational complexity and thus chipset cost for the AP. used for digital beamforming. The V-matrix is obtained by a) applying the singular value decomposition (SVD) to the full CSI, and b) compressing it to specific Givens rotation angles to reduce the number of required bits. The compressed size of the V-matrix depends on the total number of Givens rotation angles as well as the number of bits used to quantize each angle, as defined in the IEEE 802.11ax specification. In general, the larger the V-matrix dimension, the larger the number of angles. Meanwhile, the number of bits to quantize each angle is indicated by the 1-bit codebook information sub-field in the HE MIMO Control field. Both SU- and MU-MIMO thus have two codebook information choices [1 ###reference_b1###]; however, for the same codebook information value, MU-MIMO uses more bits than SU-MIMO to quantize a single angle. For instance, if the codebook information bit is set to 0, the number of bits to quantize an angle in SU-MIMO is 4 or 2, while it is 7 or 5 in MU-MIMO [1 ###reference_b1###], implying that the compressed V-matrix in MU-MIMO has larger overhead compared to SU-MIMO. In addition, the HE compressed BF report size also scales with the number of spatial streams and the number of sub-carriers. The MU-Exclusive BF report in MU-MIMO contains the delta SNR per sub-carrier, which represents the difference from the average SNR. The MU-Exclusive BF report represents the spatial characteristics of each sub-carrier caused by the environment, and its size scales with the number of subcarriers. Since the 802.11ax specification does not detail how this information is exploited in the design of the beamformer, its implementation is chip vendor dependent.\nAs discussed, channel sounding procedures introduce a significant cost in airtime because the sounding exchange must be completed before a beamformed data transmission can occur. Therefore, if the MU-MIMO BF gain is not sufficient to offset the airtime consumed by the sounding exchange, MU-MIMO throughput can be lower than SU-MIMO in some operational scenarios.\n###figure_3### As Fig. 2 ###reference_### shows, a cycle of CSI overhead and HE data transmission is repeated in both DL SU- and MU-MIMO. In each cycle, the transmitted data for each STA fills one Transmit opportunity (TXOP) comprised of multiple back-to-back PPDUs (e.g., 1500 bytes) in SIFS burst mode.
Thus the data transmission duration is the maximum TXOP limit ( ms), compared to which the duration of access delay is negligible (typically less than a few hundred microseconds), as long as the number of STAs is not excessively large. If we assume a STA walking speed of 2 mph, the resulting channel coherence time 666Channel coherence time is defined as the time duration over which the channel is considered not to vary. (15 ms) will be greater than any one cycle duration. Hence, it is reasonable to assume a block fading channel for each cycle, i.e., the channel capacity is fixed within a cycle while varying across cycles. We will use the effective channel capacity to compare SU and MU performance as a function of CSI overhead - defined as the average channel capacity over both the CSI overhead (zero channel capacity) duration and the HE data transmission duration (non-zero channel capacity), given by\n$C_{\\mathrm{eff}} = \\frac{1}{N}\\sum_{i=1}^{N}\\left(1-\\alpha_i\\right)\\sum_{j\\in\\mathcal{S}} C_{i,j} \\quad (1)$\nwhere $N$ denotes the total number of cycles, $\\mathcal{S}$ is the set of selected STAs, $C_{i,j}$ denotes the Shannon channel capacity [14 ###reference_b14###] of the $j$-th STA in the $i$-th cycle, assumed to be constant due to the block fading channel, and $\\alpha_i$ is the ratio of CSI overhead airtime to the cycle duration for the $i$-th cycle. Eq. (1 ###reference_###) applies to DL SU-MIMO when the size of $\\mathcal{S}$ is 1. Note that $C_{i,j}$ varies across $i$ due to time-varying channels777For the pure analysis of CSI overhead in this section, the inter-user interference determined by spatial correlation is assumed to be zero. Thus $C_{i,j}$ changes only due to channel gain variations rather than variations in inter-user interference. Then, shown in Fig. 3 ###reference_### is the maximum effective channel capacity that DL MU-MIMO can reach., but $\\alpha_i$ is independent of $i$ in our model since we assume a specific setup (i.e., MIMO dimension, codebook information, number of selected STAs, and TXOP duration).\nAssuming an 8 x 8 AP, 1 x 1 STA(s), and 20 MHz channel bandwidth as in Fig. 3 ###reference_###, the effective channel capacity does not grow linearly with the number of STAs. In particular, the effective channel capacity under codebook info 1 is greatly reduced when the number of STAs reaches 8. This can be explained by the following reasons:\nThe CSI overhead proportion shown in Fig. 3 ###reference_### grows exponentially with the number of STAs. This is because, on the one hand, an extra field - the HE MU Exclusive BF report, whose size scales with the number of sub-carriers and spatial streams - is included in the HE compressed BF feedback, incurring extra CSI overhead; on the other hand, sharing the bandwidth using UL OFDMA leads to a lower UL data rate for HE compressed BF feedback transmission per STA. Thus, the DL MU-MIMO CSI overhead becomes significantly higher than SU-MIMO for a large number of STAs;\nThe AP transmit power is divided equally among the STAs in DL MU-MIMO. As a result, $C_{i,j}$ in Eq. (1 ###reference_###) will drop with an increasing number of STAs due to the lower transmit power per STA.\nThe same phenomenon repeats for the 8 x 8 AP, 2 x 2 STA, and 20 MHz cases in Fig. 3 ###reference_###; the effective channel capacity is reduced when the number of STAs reaches 4. However, this does not mean that an AP should never support more STAs: in spite of the lower effective channel capacity, AP vendors may choose to support a greater number of STAs on simultaneous DL, as that may be independently desirable [2 ###reference_b2###].
It is noteworthy that codebook info 1 (i.e., using more bits to quantize the V-matrix) always has lower effective channel capacity than codebook info 0 in Fig. 3 ###reference_###. This is because we assume perfect channel estimation, which produces no channel estimation error under either codebook info 0 or 1. Thus codebook info 1, with its larger CSI overhead, always suffers more than codebook info 0."
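As a quick cross-check of Eq. (1), here is a minimal editorial Python sketch (not the article's Matlab code; the per-cycle capacities and the 30% overhead ratio are made-up numbers) that computes the effective channel capacity from per-cycle Shannon capacities and a per-cycle CSI-overhead ratio:

```python
import numpy as np

def effective_capacity(C, alpha):
    """Eq. (1): per-cycle aggregate capacity, discounted by CSI overhead.

    C     -- shape (N, K): Shannon capacity of STA j in cycle i
    alpha -- shape (N,): ratio of CSI-overhead airtime to cycle duration
    """
    C = np.asarray(C, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    return np.mean((1.0 - alpha) * C.sum(axis=1))

rng = np.random.default_rng(0)
N, K = 1000, 4                    # cycles and served STAs (made up)
C = rng.uniform(50, 100, (N, K))  # per-STA capacity in Mbps (made up)
alpha = np.full(N, 0.3)           # 30% of each cycle spent on sounding
print(f"Effective capacity: {effective_capacity(C, alpha):.1f} Mbps")
```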
28
+ },
29
+ {
30
+ "section_id": "3",
31
+ "parent_section_id": null,
32
+ "section_name": "III Factor 2: Spatial correlation",
33
+ "text": "In this section, we investigate the impact of spatial correlation on SU and MU performance in practical environmental conditions. The spatial correlation among users is characterized by two key factors: the user separation and the distance between the AP and the STAs. We use the Shannon channel capacity (without CSI overhead) as the metric to investigate SU and MU throughput as a function of spatial correlation."
34
+ },
35
+ {
36
+ "section_id": "3.1",
37
+ "parent_section_id": "3",
38
+ "section_name": "III-A Clustered-based multi-path channel model",
39
+ "text": "We use the class of cluster-based multipath fading channels to model the practical environmental conditions for indoor Wi-Fi downlink operation. Such models were introduced by Saleh and Valenzuela, and extended/elaborated upon by many other researchers [14 ###reference_b14###]. In particular, the IEEE 802.11ax indoor channel model [11 ###reference_b11###] is a typical cluster-based channel model, which we have adapted by incorporating a parameter for user separation, as shown in Fig. 4 ###reference_###. The model represents the propagation environment as a collection of scatterers grouped into clusters, where each cluster represents objects in the vicinity that act as a forward scattering source of rays that reach the receiver. Such clusters are typically represented via spatial-temporal models that capture the spatial characteristics of the environment, such as the transmit/receive antenna correlation, the distribution of objects, etc.\n###figure_4### A particular impact on our results arises from the distinction between Line-of-sight (LoS) and Non-line-of-sight (NLoS) scenarios as defined by the 11ax channel model specification, depending on the relationship between the breakpoint distance 888The breakpoint distance is defined as the distance that separates LoS and NLoS scenarios by characterizing different path loss exponents. and the distance between the AP and STA(s) [11 ###reference_b11###]:\nThe LoS scenario (Fig. 4 ###reference_###) occurs if the distance between the AP and STAs is smaller than the breakpoint distance. The received signal at each STA includes a LoS component and multiple multipath-induced NLoS components within a tapped delay-line model. This results in Rician fading multipath models where the first tap (corresponding to the earliest arrival at each STA) is the LoS component. Therefore, the CSI obtained at each STA in such cases includes both the LoS component and NLoS components with spatial characteristics [11 ###reference_b11###]; the LoS CSI component depends on the transmit/receive steering vector parameterized by the LoS angle of departure (AoD)/angle of arrival (AoA). Each NLoS CSI component depends on the transmit/receive antenna correlation parameterized by the NLoS mean AoD/AoA along with the angular spread, and the spatial distribution of random scatterers within the cluster. The mathematical expressions for the LoS/NLoS CSI components can be found in [15 ###reference_b15###]. Since the first LoS tap signal is typically significantly stronger than the NLoS signals, the LoS CSI component dominates the CSI obtained at each STA.\nThe NLoS scenario occurs if the distance between the AP and STAs is greater than the breakpoint distance; the LoS tap signal at each STA in Fig. 4 ###reference_### is then blocked. Thus, the received signals at each STA are all NLoS (hence Rayleigh fading) and the first NLoS tap signal\u2019s power is close to that of the other NLoS taps.\n###figure_5###"
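To make the LoS/NLoS split concrete, the following editorial Python sketch (not the article's model; the 10 m breakpoint echoes the example value quoted in Section III-B1, and the Rician K-factor is an assumed placeholder) draws the first channel tap as Rician below the breakpoint distance and Rayleigh beyond it:

```python
import numpy as np

rng = np.random.default_rng(1)

def first_tap(distance_m, breakpoint_m=10.0, k_factor_db=10.0):
    """First-tap fading under the LoS/NLoS rule described above."""
    # Diffuse NLoS part: zero-mean complex Gaussian (Rayleigh envelope).
    scatter = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
    if distance_m < breakpoint_m:        # LoS scenario: Rician fading
        k = 10 ** (k_factor_db / 10)     # assumed placeholder K-factor
        los = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi))
        return np.sqrt(k / (k + 1)) * los + np.sqrt(1 / (k + 1)) * scatter
    return scatter                        # NLoS scenario: Rayleigh fading

print(abs(first_tap(8.0)))   # below breakpoint: LoS-dominated first tap
print(abs(first_tap(40.0)))  # beyond breakpoint: Rayleigh first tap
```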
40
+ },
41
+ {
42
+ "section_id": "3.2",
43
+ "parent_section_id": "3",
44
+ "section_name": "III-B Spatial Correlation",
45
+ "text": "Fig. 4 ###reference_### includes an 8 x 8 uniform linear array (ULA)-based AP, whose ULA antenna spacing is half a wavelength, as well as two 1 x 1 STAs. We use a 2-user example here to provide key insights for readers; extending to a larger number of users is discussed in the next section. Thus the AP transmits to STA 1 if MU-MIMO is turned off and to both STA 1 and STA 2 if MU-MIMO is turned on. The spatial geometry of the STAs is characterized by their angles of departure (AoD), i.e., the LoS AoD to STA 1 and the LoS AoD to STA 2. The user separation between STA 1 and 2 is defined as the difference between the LoS AoDs to STA 2 and STA 1, respectively. To investigate the impact of user separation, we fix the angular geometry of cluster 1999Cluster 1\u2019s NLoS mean AoD equals the LoS AoD to STA 1; Cluster 2\u2019s NLoS mean AoD equals the LoS AoD to STA 2. and STA 1, i.e., the LoS AoD to STA 1 is set to , the LoS AoD to STA 2 is varied between and , and thus the user separation between STA 1 and 2 ranges between and ."
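To see how user separation translates into spatial correlation for a half-wavelength ULA, consider this editorial Python sketch (not the article's Matlab code; the broadside AoD for STA 1 is an assumption, since the article's exact angle values were lost in extraction). The normalized correlation between the two STAs' LoS steering vectors is high when the separation approaches 0 or 180 degrees, matching the symmetric pattern reported in the next subsection:

```python
import numpy as np

def ula_steering(theta_deg, n_antennas=8):
    """Unit-norm steering vector of a half-wavelength-spaced ULA."""
    n = np.arange(n_antennas)
    return np.exp(1j * np.pi * n * np.cos(np.deg2rad(theta_deg))) / np.sqrt(n_antennas)

aod_sta1 = 90.0  # assumed broadside LoS AoD to STA 1 (value elided in the text)
for sep in (5, 45, 90, 175):
    corr = abs(np.vdot(ula_steering(aod_sta1), ula_steering(aod_sta1 + sep)))
    print(f"separation {sep:3d} deg -> |a1^H a2| = {corr:.3f}")
```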
46
+ },
47
+ {
48
+ "section_id": "3.2.1",
49
+ "parent_section_id": "3.2",
50
+ "section_name": "III-B1 Dominant feature in the LoS scenario - User separation",
51
+ "text": "Due to the small breakpoint distance, e.g., 10 meters, spatial correlation in the LoS scenario is not sensitive to distance variation. Thus user separation is the single dominant feature that we explore in the LoS scenario, as shown in Fig. 5 ###reference_###. Consider a set of LoS scenarios where the distance is 8 meters and the granularity of user separation is , resulting in a total of 90 user separation scenarios. The DL SU channel capacity dominates MU in 14% of the scenarios, of which 12% lie in the and user separation regions. Note that the DL MU-MIMO channel capacity over user separation exhibits a symmetric pattern, which can be attributed to the ULA characteristics: the LoS transmit/receive steering vectors of the two STAs are identical at or user separation. Their dominant LoS CSI components, determined by the LoS transmit/receive steering vectors, are therefore also close in user separation regions close to or . As a result, the corresponding V-matrices become highly correlated, incurring significantly higher inter-user interference than in other user separation regions.\n###figure_6###"
52
+ },
53
+ {
54
+ "section_id": "3.2.2",
55
+ "parent_section_id": "3.2",
56
+ "section_name": "III-B2 Dominant feature in the NLoS scenario - Distance between AP and STAs",
57
+ "text": "In the NLoS scenario, the obtained CSI includes only NLoS components, and each NLoS component (corresponding to an NLoS tap) is determined by the transmit/receive antenna correlation as well as the characteristics of the scatterers. Since the latter - their distributions, shapes, and material properties - are random, each NLoS tap consists of a superposition of multiple independent individual path components, leading to the complex Gaussian assumption [11 ###reference_b11###]. As a result, the inter-user/inter-stream interference can vary significantly as a function of STA distance in such cases (and is insensitive to user angular separation).\nThe spatial correlation as a function of distance in NLoS is shown in Fig. 5 ###reference_### for scenarios where the granularity of distance is 10 meters; the maximum distance for DL MU-MIMO operation with sufficiently high SNR at the STA is 60 meters. For each distance, 60 equally spaced user separations are used to calculate the proportion of scenarios in which the MU channel capacity dominates SU. As shown, the proportion increases as the distance increases, indicating that DL MU-MIMO benefits more than SU-MIMO at larger distances. In particular, MU becomes dominant in over 50% of scenarios for distances greater than 38 meters. The larger the distance, the more scattering, reflection, and diffraction paths there are to decorrelate the signals received by different users. Hence, the inter-user interference is effectively reduced with increasing distance."
58
+ },
59
+ {
60
+ "section_id": "4",
61
+ "parent_section_id": null,
62
+ "section_name": "IV Design Guideline for DL MU-MIMO",
63
+ "text": "This section provides practical design guidelines that unify the underlying factors discussed in Sections II ###reference_### and III ###reference_###. The same setup as in Fig. 5 ###reference_###, used there to obtain the channel capacity, is now modified to derive the effective channel capacity as in Fig. 3 ###reference_###. Meanwhile, we extend to 4-user MU-MIMO operation, i.e., the user sub-set selection size is 4, indicating that up to 4 of the STAs are selected if MU-MIMO is turned on. As the 4-user spatial correlation (where each user is characterized by LoS/NLoS, user separation, and distance) results in a large set of scenario combinations, we provide some typical scenarios here due to the page limit. It should also be noted that real indoor channels might differ from the channel model used; that is, the exact spatial correlation thresholds, such as the user separation and meter distance in Section III ###reference_###, used for turning MU-MIMO on/off might be different. However, real channels should exhibit the same guideline trend as the channel model used under each operational scenario (without specifying specific thresholds) defined in Fig. 6 ###reference_###. All results were implemented in Matlab using the indoor MIMO WLAN channel models created by Schumacher et al. [15 ###reference_b15###].\nAs the main features regarding CSI overhead are the codebook information for BF compression (i.e., codebook info 0 and 1) and the STA MIMO dimensions (i.e., 1 x 1 and 2 x 2 STA), there are a total of 4 operational scenarios regarding CSI overhead. Meanwhile, we provide 5 typical operational scenarios (i.e., 2 LoS and 3 NLoS scenarios 101010For the operational scenario of two STAs at small distances and two at large distances, the AP is assumed to serve one of the STAs at small distances if MU-MIMO is turned off.) regarding spatial correlation. As a result, we provide guidelines for 20 scenarios unifying both CSI overhead and spatial correlation, as shown in Fig. 6 ###reference_###. Our conclusion for the 2-user case is that among these 20 scenarios, DL MU-MIMO can be turned on in 9 (45%). According to the guideline table, DL MU-MIMO can be turned on in the following scenarios:\n1 x 1 STAs with sufficient user separation in LoS;\n2 x 2 STAs with codebook info 0 and sufficient user separation in LoS;\n1 x 1 STAs in NLoS;\n2 x 2 STAs with codebook info 0 and large distances in NLoS.\nOtherwise, DL MU-MIMO is suggested to be turned off, i.e., switch to DL SU-MIMO. Note that the condition for turning on DL MU-MIMO is more stringent for the 2 x 2 STA case compared to the 1 x 1 STA case. This is because each spatial stream in the 2 x 2 STA case suffers more from interfering streams (self-interference from another stream of the same STA and/or streams from another STA) than in the 1 x 1 STA case (only one interfering stream from another STA). Thus, compared to the 1 x 1 STA case, the MU-MIMO effective channel capacity is less likely to exceed that of SU-MIMO in the 2 x 2 STA case."
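The on/off rules above can be condensed into a simple decision function. The sketch below is an editorial paraphrase of the Fig. 6 guideline table; the "sufficient separation" and "large distance" checks are left as boolean inputs because the exact cut-offs depend on the channel model:

```python
def should_enable_dl_mu_mimo(los: bool, codebook_info: int, sta_streams: int,
                             sufficient_separation: bool,
                             large_distance: bool) -> bool:
    """Editorial paraphrase of the Fig. 6 on/off guideline."""
    if sta_streams == 1:  # 1 x 1 STAs
        # LoS requires sufficient user separation; NLoS is generally on.
        return sufficient_separation if los else True
    if sta_streams == 2:  # 2 x 2 STAs: only with codebook info 0
        if codebook_info != 0:
            return False
        return sufficient_separation if los else large_distance
    return False  # configurations not covered: fall back to SU-MIMO

# Example: 2 x 2 STAs, codebook info 0, NLoS at large distance -> turn on.
print(should_enable_dl_mu_mimo(los=False, codebook_info=0, sta_streams=2,
                               sufficient_separation=False,
                               large_distance=True))
```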
64
+ },
65
+ {
66
+ "section_id": "5",
67
+ "parent_section_id": null,
68
+ "section_name": "Conclusion",
69
+ "text": "This article provides new insights about the key underlying factors (i.e., CSI overhead and spatial correlation) that have resulted in AP vendors turning off the DL MU-MIMO feature as the default setting in their products. Based on our study and analysis, guidelines as a function of operational scenarios are provided to address the fundamental question \u201cwhen should DL MU-MIMO be turned on/off\u201d for current/next-generation Wi-Fi systems."
70
+ }
71
+ ],
72
+ "appendix": [],
73
+ "tables": {},
74
+ "image_paths": {
75
+ "1": {
76
+ "figure_path": "2406.05913v2_figure_1.png",
77
+ "caption": "Figure 1: SU-MIMO vs MU-MIMO on Downlink Operations.",
78
+ "url": "http://arxiv.org/html/2406.05913v2/x1.png"
79
+ },
80
+ "2": {
81
+ "figure_path": "2406.05913v2_figure_2.png",
82
+ "caption": "Figure 2: IEEE 802.11ax Channel Sounding followed by High-Efficiency (HE) Data Transmission.",
83
+ "url": "http://arxiv.org/html/2406.05913v2/x2.png"
84
+ },
85
+ "3": {
86
+ "figure_path": "2406.05913v2_figure_3.png",
87
+ "caption": "Figure 3: Effective Channel Capacity impacted by CSI Overhead. Average 25 dB SNR at the single STA in SU-MIMO.",
88
+ "url": "http://arxiv.org/html/2406.05913v2/x3.png"
89
+ },
90
+ "4": {
91
+ "figure_path": "2406.05913v2_figure_4.png",
92
+ "caption": "Figure 4: Modified IEEE 802.11ax Indoor Channel Model: DL SU (STA 1) and MU (STA 1 + 2) in Line-of-sight Scenario.",
93
+ "url": "http://arxiv.org/html/2406.05913v2/x4.png"
94
+ },
95
+ "5": {
96
+ "figure_path": "2406.05913v2_figure_5.png",
97
+ "caption": "Figure 5: Channel Capacity impacted by Spatial Correlation. 20 dBm Transmit Power, 20 MHz Bandwidth, -174 dBm/Hz Noise Power Spectrum Density.",
98
+ "url": "http://arxiv.org/html/2406.05913v2/x5.png"
99
+ },
100
+ "6": {
101
+ "figure_path": "2406.05913v2_figure_6.png",
102
+ "caption": "Figure 6: A 4-user Guideline Table for 8 x 8 AP under Modified IEEE 802.11ax Channel Model.",
103
+ "url": "http://arxiv.org/html/2406.05913v2/x6.png"
104
+ }
105
+ },
106
+ "validation": true,
107
+ "references": [],
108
+ "url": "http://arxiv.org/html/2406.05913v2"
109
+ }
20240819/2406.14176v3.json ADDED
@@ -0,0 +1,192 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "title": "A Multi-Stream Fusion Approach with One-Class Learning for Audio-Visual Deepfake Detection This work is supported in part by a New York State Center of Excellence in Data Science award, National Institute of Justice (NIJ) Graduate Research Fellowship Award 15PNIJ-23-GG-01933-RESS, and synergistic activities funded by National Science Foundation (NSF) grant DGE-1922591.",
3
+ "abstract": "This paper addresses the challenge of developing a robust audio-visual deepfake detection model. In practical use cases, new generation algorithms continually emerge, and these algorithms are not encountered during the development of detection methods; this calls for strong generalization ability in the method. Additionally, to ensure the credibility of detection methods, it is beneficial for the model to interpret which cues in the video indicate that it is fake. Motivated by these considerations, we propose a multi-stream fusion approach with one-class learning as a representation-level regularization technique. We study the generalization problem of audio-visual deepfake detection by creating a new benchmark that extends and re-splits the existing FakeAVCeleb dataset. The benchmark contains four categories of fake videos (Real Audio-Fake Visual, Fake Audio-Fake Visual, Fake Audio-Real Visual, and Unsynchronized videos).\nThe experimental results demonstrate that our approach surpasses previous models by a large margin.\nFurthermore, our proposed framework offers interpretability, indicating which modality the model identifies as more likely to be fake. The source code is released at https://github.com/bok-bok/MSOC.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Recent advancements in deep learning, including Stable Diffusion [1 ###reference_b1###] and Sora, have enabled the generation of highly realistic images and audio, collectively referred to as deepfakes. The availability of numerous easy-to-use tools for generating deepfake videos significantly increases the chance of misuse of those media, as even non-experts can now create convincing fake content with minimal effort. This emphasizes the urgency for developing robust detection mechanisms to mitigate the risks associated with deepfakes.\nVideos, particularly those featuring a person speaking, have become a significant medium for disseminating deepfake information. Detecting these deepfakes requires joint consideration of both audio and visual modalities. The speech could be generated from text-to-speech [2 ###reference_b2###] and voice conversion algorithms [3 ###reference_b3###, 4 ###reference_b4###], and the videos are either face-swap [5 ###reference_b5###] from an original video or further rendered from speech and a still image [6 ###reference_b6###].\nAdditionally, while synchronization might be disrupted by modifying audio or visual modality, the generated modality can still be seamlessly synchronized with its corresponding counter modalities using lip-sync technologies [7 ###reference_b7###, 8 ###reference_b8###]. This ensures the creation of highly realistic fake videos.\nThis underscores the need for researchers to develop audio-visual deepfake detection mechanisms that surpass the capabilities of unimodal deepfake detection approaches.\nRecent research focuses mainly on the fusion of features of both modalities to improve the detection performance on audio-visual deepfake datasets [9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###]. By leveraging the complementary nature of audio and visual data, these approaches effectively improve their accuracy in identifying manipulated content.\nHowever, two issues are not well explored: First, the existing deep learning models may overfit to the specific fake generation methods present in the training data, leading to poor generalization when confronted with unseen deepfake generation algorithms in real-world scenarios. This could be attributed to the existing dataset design [12 ###reference_b12###, 13 ###reference_b13###] that does not benchmark the generalization ability for the models. This overfitting issue would limit the practical applicability of these models, as they fail to adapt to the rapidly evolving landscape of deepfake techniques. Second, existing approaches lack the ability to identify the modality source of a detected deepfake. This limitation arises because these systems are trained and tested using only the final audio-visual labels, without incorporating meta-information about the individual modalities.\nA model able to tell which modality is fake would enhance the interpretability and credibility in practice.\nIn this work, we propose a novel framework Multi-Stream Fusion Approach with One-Class Learning (MSOC) to tackle audio-visual deepfake detection, enhancing the generalization ability and interoperability. We extend the one-class learning approach, previously proposed in uni-modal contexts, to the audio-visual setting. We validate the generalization ability by resplitting the FakeAVCeleb [12 ###reference_b12###] dataset and separating the unseen algorithms into the test set. 
We curated four test sets (RAFV, FAFV, FARV, Unsynced)\nthat cover all kinds of fake categories.\nWe will make the dataset splits and model implementation publicly available upon the publication of this paper.\nOur contributions are summarized as:\nExtending one class learning from uni-modal to audio-visual deepfake detection;\nA multi-stream framework with audio-visual (AV), audio (A), and visual (V) branches;\nA curated dataset for evaluating performance on unseen generation methods based on FakeAVCeleb."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II Related Work",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "II-A Audio-Visual Deepfake Detection",
21
+ "text": "At the early stage of deepfake detection research, many studies focused on uni-modal detection models that use only audio [14 ###reference_b14###, 15 ###reference_b15###] or visual [16 ###reference_b16###] as input. However, uni-modal models are inherently limited to a single modality and cannot completely detect emerging deepfake videos that both audio and visual can be generated. To address this problem, recent research has started to focus on developing audio-visual deepfake detection models.\nInitially, many studies have focused on explicit synchronization issues between audio and visual modalities in deepfakes.\nShahzad et al. [17 ###reference_b17###] argue that altering either audio or visual can desynchronize speech and lip movements in videos. In addition, researchers have investigated the representation-level inconsistency due to single-modality manipulations. The modality dissonance score was introduced in [18 ###reference_b18###] to quantify the dissimilarities between the modality features. However, these methods may struggle to detect deepfakes where both audio and video are both generated in a more consistent way, such as text-to-speech followed by lip synch [12 ###reference_b12###].\nSeveral studies also develop audio-visual representations by integrating features from uni-modal feature extractors and mapping them to audio-visual targets [11 ###reference_b11###, 19 ###reference_b19###]. However,\nrecent studies [10 ###reference_b10###, 9 ###reference_b9###] claim that using only multimodal labels can misinform the data from the unimodal feature extractor during joint training.\nEnsemble models have also been studied [20 ###reference_b20###, 21 ###reference_b21###, 22 ###reference_b22###]. They combine models for audio, visual, and audio-visual data and leverage the strengths of each modality-specific model to enhance overall detection accuracy."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "II-B Video Deepfake Detection Datasets",
27
+ "text": "Existing methods are typically benchmarked on datasets such as FakeAVCeleb [12 ###reference_b12###] and DFDC [13 ###reference_b13###]. However, these datasets are limited in their ability to benchmark generalization since the test sets often contain the same deepfake generation algorithms as the training sets. Additionally, there is a greater variety of visual deepfake generation methods compared to audio modalities. In terms of attribute labeling, the FakeAVCeleb dataset attempts to present different categories of fakes, but the FARV category includes not only fake audio but also unsynchronized audio. This makes it difficult for methods to learn fake audio cues, since they are confounded with synchronization cues. Our study proposes extended datasets and new partitions to address these issues."
28
+ },
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "II-C One-Class Learning For Deepfake Detection",
33
+ "text": "Binary classification models work well in deepfake detection when the test data share similar distributions with the training data. However, since deepfake generation techniques are rapidly developing, deepfake attacks in practice are often unseen during training of deepfake detection models, and these binary classification models show significantly degraded performance on unseen attacks [23 ###reference_b23###]. To address this issue, Zhang et al. [14 ###reference_b14###] proposed the idea of one-class learning for speech deepfake detection. The idea was to use a so-called One-Class Softmax (OC-Softmax) loss to guide the neural network to learn an embedding space where bonafide speech utterances are clustered together while fake speech utterances are pushed away from this cluster during training:\nwhere (center) is the normalized weight vector for the target class; is the normalized embedding vector of the -th sample; and are margins for the real and fake classes, respectively. is the number of samples in a mini-batch, and is a scale factor. For each utterance, the cosine similarity between the feature embedding and the weight vector, , is called the OC score, a value between -1 and 1.\nSince then, many works on speech anti-spoofing have adopted the idea of one-class learning [24 ###reference_b24###, 15 ###reference_b15###, 25 ###reference_b25###]. The results show that models trained with one-class learning can effectively identify fakes as deviations from the learned bonafide embedding cluster for speech.\nDespite these advantages, the generalizability of one-class learning for audio-visual deepfake detection has not been thoroughly studied due to dataset limitations. This study addresses this gap by re-splitting the FakeAVCeleb dataset [12 ###reference_b12###] and analyzing the effectiveness of one-class learning in audio-visual deepfake detection."
34
+ },
35
+ {
36
+ "section_id": "3",
37
+ "parent_section_id": null,
38
+ "section_name": "III Method",
39
+ "text": "###figure_1### We propose a Multi-Stream Fusion Approach with One-Class learning (MSOC) for audio-visual deepfake detection. This architecture consists of the audio, visual, and audio-visual branches, which are independently trained using labels specific to their respective modalities. The training of these branches also leverages the OC-Softmax loss to improve their generalization ability to unseen deepfake generation methods. During inference, score fusion is used to integrate the decisions made by the three branches to arrive at the final classification decision. In this section, we describe architecture, training, and inference in detail."
40
+ },
41
+ {
42
+ "section_id": "3.1",
43
+ "parent_section_id": "3",
44
+ "section_name": "III-A Audio Model",
45
+ "text": "As displayed in the green part of Fig. 1 ###reference_###, the audio branch includes an audio feature extractor trained using an OC-Softmax loss with ground truth labels specific only to the audio modality. The audio branch compacts real data representations of audio modality and spreads fake ones in the embedding space.\nWe utilize ResNet [26 ###reference_b26###] as our audio feature extractor. The model processes 13-d Mel-Frequency Cepstral Coefficients (MFCC) vectors at a frame rate of 100 frames per second, which is 4 times the visual frame rate. The audio feature extractor then produces the audio embeddings with a dimensionality of 128.\nThe audio model is trained with which is the OC-Softmax losses computed with audio features of the audio branches using audio labels."
46
+ },
47
+ {
48
+ "section_id": "3.2",
49
+ "parent_section_id": "3",
50
+ "section_name": "III-B Visual Model",
51
+ "text": "As shown in the blue part of Fig. 1 ###reference_###, the visual branch consists of a visual feature extractor trained with an OC-Softmax loss, taking ground-truth labels regarding the visual modality only.\nThe visual branch tries to learn a visual embedding space where real data features are clustered while fake data features are pushed away from the cluster.\nWe employed ResNet[26 ###reference_b26###] and SCNet[27 ###reference_b27###] with STIL (Spatiotemporal Inconsistency Learning) block[28 ###reference_b28###] as the visual feature extractor, which takes frames , where 100 denotes the height and width of each frame, and 3 represents the RGB color channels. Then, the model returns embeddings , where the dimensionality is 128 for ResNet and 512 for SCNet-STIL.\nResNet.\nThe ResNet-based visual feature extractor consists of a 3D convolutional layer, ResNet blocks, and a temporal convolutional block. It captures the features of frames.\nSCNet-STIL.\nSCNet[27 ###reference_b27###] is a 2D Convolutional Neural Network. It features a self-calibration mechanism that broadens the receptive fields of its convolutional layers through internal communication [27 ###reference_b27###].\nThe SCNet-STIL is SCNet with STIL blocks designed to capture Spatio-Temporal Inconsistency [28 ###reference_b28###]. The STIL block is flexible and can be implemented in any 2D-CNN architecture.\nThe visual model is trained with , which is the OC-Softmax losses computed with visual features from the visual branch using visual labels."
52
+ },
53
+ {
54
+ "section_id": "3.3",
55
+ "parent_section_id": "3",
56
+ "section_name": "III-C Audio-Visual Model",
57
+ "text": "As shown in the purple part of Fig. 1 ###reference_###, the audio-visual branch consists of OC-Softmax integrated with visual and audio extractors, followed by three layers of a feedforward neural network. It is trained with both OC loss and cross-entropy loss. This branch focuses on compacting real-data representations on each feature extractor and separating real- and fake-data representations across both modalities.\nThe audio-visual model is trained with :\nwhere\n and are the OC-Softmax losses computed using audio and visual features from the audio-visual model with their respective labels.\n is the cross-entropy loss applied to the combined audio-visual features after the classifier, using the corresponding\nlabels in the audio-visual branch."
58
+ },
59
+ {
60
+ "section_id": "3.4",
61
+ "parent_section_id": "3",
62
+ "section_name": "III-D Inference",
63
+ "text": "We utilized OC scores, the cosine similarity to the embeddings of bonafide samples of each modality, from both the visual and audio branches. Additionally, we included the AV score, which is the softmax probability of real data from the audio-visual branch. The OC scores were thresholded at 0.5. These thresholded scores were then averaged with the AV score, and a final threshold of 0.5 was applied to determine the prediction."
64
+ },
65
+ {
66
+ "section_id": "4",
67
+ "parent_section_id": null,
68
+ "section_name": "IV Experimental Setup",
69
+ "text": ""
70
+ },
71
+ {
72
+ "section_id": "4.1",
73
+ "parent_section_id": "4",
74
+ "section_name": "IV-A Dataset",
75
+ "text": "The FakeAVCeleb dataset [12 ###reference_b12###] includes 500 real videos from various subjects and over 19,500 fake videos. It comprises four categories: RARV (Real Audio, Real Visual, and Synchronized), FARV (Fake Audio, Real Visual), RAFV (Real Audio, Fake Visual), and FAFV (Fake Audio, Fake Visual).\nPrevious works typically split the FakeAVCeleb dataset by subject ID [9 ###reference_b9###] or randomly [21 ###reference_b21###, 10 ###reference_b10###]. However, these splits have limitations in assessing the model\u2019s generalizability to unseen deepfake generation methods. In this paper, we propose a new split mechanism: we split the dataset based on generation methods to evaluate the performance on unseen methods. During the creation of training, validation, and test sets, we ensured that the generation methods used in the test sets were excluded from the training and validation sets.\nOur Training set contains 350 videos each from categories RARV (Real), FARV, RAFV, and FAFV (excluding faceswap and faceswap-wav2lip).\nValidation set contains 50 videos each from categories Real, FARV, RAFV, and FAFV (excluding faceswap and faceswap-wav2lip).\nFor Test set, we sampled 100 face swap (RAFV) and face swap-wav2lip (FAFV) videos not included in the training and validation sets. We generated 100 audio-only fake videos using a voice conversion library, named category FARV, due to FakeAVCeleb\u2019s limited methods for audio fakes. It is important to note that our newly created FARV dataset is synchronized, whereas the FARV from the FakeAVCeleb[12 ###reference_b12###] dataset is unsynchronized. Therefore, our FARV dataset has only one cue to detect a fake, making the unseen generation method more distinct. We also created a Unsynced category with 100 unsynchronized videos by shifting audio. Each of these four test datasets \u2014 RAFV, FAFV, FARV, and Unsynced\u2014 consists of 100 real videos (RARV) and 100 unseen fake videos."
76
+ },
77
+ {
78
+ "section_id": "4.2",
79
+ "parent_section_id": "4",
80
+ "section_name": "IV-B Evaluation Measures",
81
+ "text": "We evaluate audio-visual deepfake detection as a binary classification task based on the final audio-visual label. Accuracy is used as our primary metric, measuring the proportion of correctly classified samples out of the total samples. Given that our four test sets are balanced in terms of real and fake samples, accuracy is an appropriate metric, with a random guess expected to yield close to 50% accuracy."
82
+ },
83
+ {
84
+ "section_id": "4.3",
85
+ "parent_section_id": "4",
86
+ "section_name": "IV-C Comparison Methods",
87
+ "text": "We adopt some existing methods from the literature for comparison.\nThe multimodal-dissonance model [18 ###reference_b18###] utilizes a modality dissonance score (a distance between audio and visual features) to detect dissimilarities between modalities. AVDF [29 ###reference_b29###] simply concatenates audio and visual features and maps them directly to audio-visual labels. The multilabel method [30 ###reference_b30###] is trained using both audio and visual labels to address the issue that audio-visual labels may confuse the uni-modal feature extractors. MRDF-CE and MRDF-Margin [9 ###reference_b9###] utilize the cross- and within-modality regularization to maintain the unique features and differences of each modality during multimodal representation learning.\nWe not only compare our proposed model with state-of-the-art models but also with the Audio-Visual Branch with One-Class learning (AVOC), the audio-visual branch of MSOC."
88
+ },
89
+ {
90
+ "section_id": "4.4",
91
+ "parent_section_id": "4",
92
+ "section_name": "IV-D Training Details",
93
+ "text": "Our models are trained for 30 epochs using Adam optimizer, with an initial learning rate of and a batch size of 64. We select the best model with the best Area Under the Curve on the validation set. For the hyperparameter of OC-Softmax, We followed the default parameters from [14 ###reference_b14###]: , and . We ran all the models 4 times with different seeds for statistically robust results.\nWhile the three models are trained independently, they share the same training process: training examples are fed to all three models on the same schedule.\nAlso, for the comparison models, we trained and tested the models in our set-up from scratch for a fair comparison. Specifically, for the multimodal-dissonance model [18 ###reference_b18###], we trained for 100 epochs."
94
+ },
95
+ {
96
+ "section_id": "5",
97
+ "parent_section_id": null,
98
+ "section_name": "Results",
99
+ "text": ""
100
+ },
101
+ {
102
+ "section_id": "5.1",
103
+ "parent_section_id": "5",
104
+ "section_name": "Comparison with State-of-the-Art Methods",
105
+ "text": "To demonstrate the effectiveness of the proposed MSOC model on unseen attacks, we compared it with other state-of-the-art audio-visual deepfake detection models. The comparison of models\u2019 performance on test datasets is presented in Table I ###reference_###.\nWe can observe that state-of-the-art models perform poorly on unseen generation methods, which shows their lack of generalization ability. Our proposed model MSOC outperforms other models on FAFV, RAFV, and FARV test sets.\nThis indicates that multi-stream architecture with OC-Softmax successfully separated bonafide and generated data by compacting embedding of bonafide data, which resulted in better generalizability than other models in all combinations of fake modality.\nAs shown in the last column of Table I ###reference_###, all models perform poorly (close to random guessing) when identifying unsynchronized videos, which should be clearly recognized as fake. This is the first time these models have been tested on this unsynchronization benchmark, and our model exhibits general characteristics similar to existing fusion-based methods. The results suggest that training the audio and visual encoders with real/fake labels alone is insufficient to capture synchronization. We believe that incorporating an explicit module to learn audio-visual synchronization [31 ###reference_b31###, 32 ###reference_b32###] could address this issue, but we leave this for future work.\nAdditionally, we compare the MSOC framework with AVOC models. Table II ###reference_### shows that MSOC models generally perform better than AVOC. This suggests the strength of an audio and visual branch that is only dedicated to separating real and fake in each modality.\n###figure_2### ###figure_3### ###figure_4### ###figure_5###"
106
+ },
107
+ {
108
+ "section_id": "5.2",
109
+ "parent_section_id": "5",
110
+ "section_name": "Performance Analysis of Different Branches of MSOC",
111
+ "text": "In this section, we delve into the audio and visual branches of the MSOC architecture with the SCNet-STIL visual feature extractor. The MSOC model has three branches, providing enhanced performance and interpretability.\nFig. 2 ###reference_### visualizes the distribution of scores for each branch on four categories of fake videos.\nThe audio and visual score, OC scores , are calculated based on the cosine similarity between the bonafide embedding and the feature embedding of the respective modality. The audio-visual score represents the softmax probability that a video is real , calculated with audio-visual characteristics and audio-visual labels.\nThe figure shows that the audio-visual branch performs well when both modalities are fake (FAFV), predicting a probability close to 0 for all fake samples. However, the audio-visual branch exhibits greater confusion when only one modality is fake. The audio branch excels at distinguishing audio fake(Fig. 2(b) ###reference_sf2###) and real samples(Fig. 2(a) ###reference_sf1###). Also, the visual branch exhibits great performance in identifying real samples (Fig. 2(b) ###reference_sf2###), although it fails to detect some fake samples (Fig. 2(a) ###reference_sf1###). This highlights the benefit of using both the audio and visual branches.\nAdditionally, the audio and visual branches offer better interpretability of the model\u2019s decisions. With AVOC model, it is impossible to determine which modality the model perceives as fake. However, with MSOC, by analyzing the individual scores from branches, one can identify which modality contributes to the final result, providing insights into whether the audio or visual aspect is being manipulated. Therefore, leveraging all branches improves performance and enhances the transparency and reliability of the model\u2019s predictions."
112
+ },
113
+ {
114
+ "section_id": "5.3",
115
+ "parent_section_id": "5",
116
+ "section_name": "Impact of One-Class Learning",
117
+ "text": "In this section, we examine the impact of One-Class Learning by comparing AVOC models trained with and without OC-Softmax. We explore both AVOC models, which are ResNet-based and SCNet-STIL-based. Table III ###reference_### shows that the AVOC models trained with the OC-Softmax generally outperform AVOC models trained without the guidance of OC-Softmax. This result exhibits that implementing one-class learning on audio-visual deepfake detection successfully enhances models\u2019 robustness to unseen attacks by compacting the bonafide representations.\nWe visualized the impact of OC-Softmax in Fig. 3 ###reference_###\nby comparing the audio-visual embeddings of the model trained with and without OC-Softmax. The model trained with OC-Softmax successfully separates fake categories RAFV, FAFV, and FARV from real samples (RARV), although Unsynchronized samples still exhibit some overlap with the real samples. This overlap is anticipated, as detecting the unsynchronization is beyond the scope of an uni-modal feature extractor.\n###figure_6###"
118
+ },
119
+ {
120
+ "section_id": "5.4",
121
+ "parent_section_id": "5",
122
+ "section_name": "Impact of Visual Feature Extractor",
123
+ "text": "Table II ###reference_### demonstrates that models with SCNet-STIL visual feature extractor perform better on the RAFV test set. Thus,\nthis section examines the impact of visual feature extractors in one-class learning. Although OC-Softmax effectively compacts genuine representations and distributes fake representations, its performance is limited if the visual feature extractor fails to capture the general features of fake visual artifacts. This limitation arises because OC-Softmax compacts real representations based on observed attacks and real data, potentially including unseen attack representations within the realm of genuine representations. Therefore, extracting more general features of fake videos, such as Spatial-Temporal Inconsistency, could be beneficial.\nFig. 4 ###reference_### compares the visual scores from both visual feature extractors. We can observe that the ResNet-based visual feature extractor lacks the ability to detect unseen fake methods effectively compared to the SCNet-STIL-based visual feature extractor. This explains why models with the STIL feature extractor significantly outperform models with a ResNet feature extractor on the RAFV test set.\n###figure_7### ###figure_8###"
124
+ },
125
+ {
126
+ "section_id": "6",
127
+ "parent_section_id": null,
128
+ "section_name": "VI Conclusion",
129
+ "text": "This paper presents a multi-stream fusion framework with one-class learning to enhance audio-visual deepfake detection. Our proposed framework improves detection performance against unseen deepfake generation methods compared to SOTA models.\nAdditionally, the MSOC framework provides interpretability, offering the ability to identify which modality is fake, which can be achieved through the score distribution of the models (Audio, Visual, Audio-Visual). Future work includes joint modeling of detecting audio-visual unsynchronization and deepfakes and a more robust framework for rooting the fake modality."
130
+ }
131
+ ],
132
+ "appendix": [],
133
+ "tables": {
134
+ "1": {
135
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T1.26.1.1\" style=\"font-size:90%;\">TABLE I</span>: </span><span class=\"ltx_text\" id=\"S4.T1.27.2\" style=\"font-size:90%;\">Results of comparison with state-of-the-art models on our test sets derived from the FakeAVCeleb dataset to ensure deepfake generation methods are not seen in training and validation. Average classification accuracy (%) and standard deviation of four runs of the models are shown.</span></figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T1.24\" style=\"width:216.8pt;height:54.5pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-142.3pt,35.8pt) scale(0.432466504745034,0.432466504745034) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.24.24\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.24.24.25.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T1.24.24.25.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.24.24.25.1.1.1\">Model</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.24.24.25.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.24.24.25.1.2.1\">RAFV</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.24.24.25.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.24.24.25.1.3.1\">FAFV</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.24.24.25.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.24.24.25.1.4.1\">FARV</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.24.24.25.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.24.24.25.1.5.1\">Unsynced</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T1.4.4.4.5\">Multilabel <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2406.14176v3#bib.bib30\" title=\"\">30</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.2.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.3.3.3.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.4.4.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.8.8.8\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.8.8.8.5\">Multimodal-dissonance <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2406.14176v3#bib.bib18\" title=\"\">18</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.5.5.5.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.6.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.7.7.7.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.8.8.8.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.12.12.12\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.12.12.12.5\">AVDF <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2406.14176v3#bib.bib29\" title=\"\">29</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.9.9.9.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.10.10.10.2\"></td>\n<td 
class=\"ltx_td ltx_align_center\" id=\"S4.T1.11.11.11.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.12.12.12.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.12.12.12.4.1\">49.88 2.30</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.16.16.16\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.16.16.16.5\">MRDF-CE <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2406.14176v3#bib.bib9\" title=\"\">9</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.13.13.13.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.14.14.14.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.15.15.15.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.16.16.16.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.20.20.20\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.20.20.20.5\">MRDF-Margin <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2406.14176v3#bib.bib9\" title=\"\">9</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.17.17.17.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.18.18.18.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.19.19.19.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.20.20.20.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.24.24.24\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_r\" id=\"S4.T1.24.24.24.5\">MSOC (Ours)</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T1.21.21.21.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.21.21.21.1.1\">60.25 2.19</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T1.22.22.22.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.22.22.22.2.1\">89.88 3.15</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T1.23.23.23.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.23.23.23.3.1\">74.38 5.41</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T1.24.24.24.4\"></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
136
+ "capture": "TABLE I: Results of comparison with state-of-the-art models on our test sets derived from the FakeAVCeleb dataset to ensure deepfake generation methods are not seen in training and validation. Average classification accuracy (%) and standard deviation of four runs of the models are shown."
137
+ },
138
+ "2": {
139
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T2.22.1.1\" style=\"font-size:90%;\">TABLE II</span>: </span><span class=\"ltx_text\" id=\"S4.T2.23.2\" style=\"font-size:90%;\">The table compares AVOC and MSOC models. Average accuracy (%) and standard deviation of four runs on each test set. The multilabel model <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2406.14176v3#bib.bib30\" title=\"\">30</a>]</cite> is used as a baseline.</span></figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T2.20\" style=\"width:216.8pt;height:44.2pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-156.6pt,31.9pt) scale(0.40907525509947,0.40907525509947) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.20.20\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.20.20.21.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T2.20.20.21.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.20.20.21.1.1.1\">Model</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.20.20.21.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.20.20.21.1.2.1\">Feature Extractor</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.20.20.21.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.20.20.21.1.3.1\">RAFV</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.20.20.21.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.20.20.21.1.4.1\">FAFV</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.20.20.21.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.20.20.21.1.5.1\">FARV</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.20.20.21.1.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.20.20.21.1.6.1\">Unsynced</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T2.4.4.4.5\">Multilabel <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2406.14176v3#bib.bib30\" title=\"\">30</a>]</cite>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.4.4.4.6\">-</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.2.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.3.3.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.4.4.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.8.8.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T2.8.8.8.5\">AVOC</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.8.8.8.6\">SCNet-STIL</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.5.5.5.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.5.5.5.1.1\">60.50 4.06</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.6.6.6.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.7.7.7.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.8.8.8.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.12.12.12\">\n<th 
class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.12.12.12.5\">MSOC (Ours)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.12.12.12.6\">SCNet-STIL</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.9.9.9.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.10.10.10.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.10.10.10.2.1\">89.88 3.15</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.11.11.11.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.11.11.11.3.1\">74.38 5.41</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.12.12.12.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.12.12.12.4.1\">45.25 1.64</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.16.16.16\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T2.16.16.16.5\">AVOC</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.16.16.16.6\">ResNet</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.13.13.13.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.14.14.14.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.15.15.15.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.16.16.16.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.16.16.16.4.1\">53.00 2.89</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.20.20.20\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b\" id=\"S4.T2.20.20.20.5\">MSOC</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_r\" id=\"S4.T2.20.20.20.6\">ResNet</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.17.17.17.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.17.17.17.1.1\">55.75 2.02</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.18.18.18.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.18.18.18.2.1\">90.88 2.43</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.19.19.19.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.19.19.19.3.1\">81.12 7.45</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.20.20.20.4\"></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
140
+ "capture": "TABLE II: The table compares AVOC and MSOC models. Average accuracy (%) and standard deviation of four runs on each test set. The multilabel model [30] is used as a baseline."
141
+ },
142
+ "3": {
143
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T3.22.1.1\" style=\"font-size:90%;\">TABLE III</span>: </span><span class=\"ltx_text\" id=\"S5.T3.23.2\" style=\"font-size:90%;\">Comparison of models trained with and without OC softmax using different feature extractors. Average accuracy (%) and standard deviation of four runs. The multilabel model <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2406.14176v3#bib.bib30\" title=\"\">30</a>]</cite> is used as a baseline.</span></figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S5.T3.20\" style=\"width:216.8pt;height:42pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-170.1pt,33.0pt) scale(0.389206943252937,0.389206943252937) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T3.20.20\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T3.20.20.21.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S5.T3.20.20.21.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.20.20.21.1.1.1\">Model</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T3.20.20.21.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.20.20.21.1.2.1\">Feature Extractor</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T3.20.20.21.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.20.20.21.1.3.1\">OC</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T3.20.20.21.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.20.20.21.1.4.1\">RAFV</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T3.20.20.21.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.20.20.21.1.5.1\">FAFV</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T3.20.20.21.1.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.20.20.21.1.6.1\">FARV</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T3.20.20.21.1.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.20.20.21.1.7.1\">Unsynced</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.4.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S5.T3.4.4.4.5\">Multilabel <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2406.14176v3#bib.bib30\" title=\"\">30</a>]</cite>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T3.4.4.4.6\">-</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T3.4.4.4.7\">-</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T3.1.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T3.2.2.2.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T3.3.3.3.3\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T3.4.4.4.4\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T3.8.8.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T3.8.8.8.5\">AVOC</td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_t\" id=\"S5.T3.8.8.8.6\">SCNet-STIL</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.8.8.8.7\">No</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.5.5.5.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.6.6.6.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.7.7.7.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.8.8.8.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.8.8.8.4.1\">48.88 1.98</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.12.12.12\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.12.12.12.5\">AVOC</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.12.12.12.6\">SCNet-STIL</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.12.12.12.7\">Yes</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.9.9.9.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.9.9.9.1.1\">60.50 4.06</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.10.10.10.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.10.10.10.2.1\">84.38 2.90</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.11.11.11.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.11.11.11.3.1\">70.62 1.63</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.12.12.12.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.16.16.16\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T3.16.16.16.5\">AVOC</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.16.16.16.6\">ResNet</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.16.16.16.7\">No</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.13.13.13.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.14.14.14.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.15.15.15.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.16.16.16.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.20.20.20\">\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S5.T3.20.20.20.5\">AVOC</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T3.20.20.20.6\">ResNet</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T3.20.20.20.7\">Yes</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T3.17.17.17.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.17.17.17.1.1\">52.75 2.30</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T3.18.18.18.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.18.18.18.2.1\">89.12 4.48</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T3.19.19.19.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.19.19.19.3.1\">79.62 2.90</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T3.20.20.20.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.20.20.20.4.1\">53.00 2.89</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
144
+ "capture": "TABLE III: Comparison of models trained with and without OC softmax using different feature extractors. Average accuracy (%) and standard deviation of four runs. The multilabel model [30] is used as a baseline."
145
+ }
146
+ },
147
+ "image_paths": {
148
+ "1": {
149
+ "figure_path": "2406.14176v3_figure_1.png",
150
+ "caption": "Figure 1: Overview of Multi-Stream Architecture. In the figure, black dashed lines represent the training process, and solid purple lines represent the inference process. \u2a01direct-sum\\bigoplus\u2a01 symbol represents feature concatenation. + symbol means addition.",
151
+ "url": "http://arxiv.org/html/2406.14176v3/x1.png"
152
+ },
153
+ "2(a)": {
154
+ "figure_path": "2406.14176v3_figure_2(a).png",
155
+ "caption": "(a) RAFV\nFigure 2: Score distribution visualization for four fake categories across each MSOC branch.\nScores close to 1 indicate the model perceives the modality as real, and the red vertical line denotes the decision threshold.",
156
+ "url": "http://arxiv.org/html/2406.14176v3/extracted/5800237/Figures/C_hist.png"
157
+ },
158
+ "2(b)": {
159
+ "figure_path": "2406.14176v3_figure_2(b).png",
160
+ "caption": "(b) FARV\nFigure 2: Score distribution visualization for four fake categories across each MSOC branch.\nScores close to 1 indicate the model perceives the modality as real, and the red vertical line denotes the decision threshold.",
161
+ "url": "http://arxiv.org/html/2406.14176v3/extracted/5800237/Figures/E_hist.png"
162
+ },
163
+ "2(c)": {
164
+ "figure_path": "2406.14176v3_figure_2(c).png",
165
+ "caption": "(c) FAFV\nFigure 2: Score distribution visualization for four fake categories across each MSOC branch.\nScores close to 1 indicate the model perceives the modality as real, and the red vertical line denotes the decision threshold.",
166
+ "url": "http://arxiv.org/html/2406.14176v3/extracted/5800237/Figures/D_hist.png"
167
+ },
168
+ "2(d)": {
169
+ "figure_path": "2406.14176v3_figure_2(d).png",
170
+ "caption": "(d) Unsynced\nFigure 2: Score distribution visualization for four fake categories across each MSOC branch.\nScores close to 1 indicate the model perceives the modality as real, and the red vertical line denotes the decision threshold.",
171
+ "url": "http://arxiv.org/html/2406.14176v3/extracted/5800237/Figures/F_hist.png"
172
+ },
173
+ "3": {
174
+ "figure_path": "2406.14176v3_figure_3.png",
175
+ "caption": "Figure 3: t-SNE visualization of concatenated audio-visual feature. The cross \u201cX\u201d in the figure represents the center of the data for each category. Better viewed in color.",
176
+ "url": "http://arxiv.org/html/2406.14176v3/extracted/5800237/Figures/tsne.png"
177
+ },
178
+ "4(a)": {
179
+ "figure_path": "2406.14176v3_figure_4(a).png",
180
+ "caption": "(a) Fake\nFigure 4: The figure compares visual scores computed from ResNet and SCNet-STIL visual feature extractors on both fake and real samples.",
181
+ "url": "http://arxiv.org/html/2406.14176v3/extracted/5800237/Figures/C_both_hist_fake.png"
182
+ },
183
+ "4(b)": {
184
+ "figure_path": "2406.14176v3_figure_4(b).png",
185
+ "caption": "(b) Real\nFigure 4: The figure compares visual scores computed from ResNet and SCNet-STIL visual feature extractors on both fake and real samples.",
186
+ "url": "http://arxiv.org/html/2406.14176v3/extracted/5800237/Figures/C_both_hist_real.png"
187
+ }
188
+ },
189
+ "validation": true,
190
+ "references": [],
191
+ "url": "http://arxiv.org/html/2406.14176v3"
192
+ }
20240819/2406.14192v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240819/2407.02337v2.json ADDED
@@ -0,0 +1,439 @@
1
+ {
2
+ "title": "Open foundation models for Azerbaijani language",
3
+ "abstract": "The emergence of multilingual large language models has enabled the development of language understanding and generation systems in Azerbaijani. However, most of the production-grade systems rely on cloud solutions, such as GPT-4. While there have been several attempts to develop open foundation models for Azerbaijani, these works have not found their way into common use due to a lack of systemic benchmarking. This paper encompasses several lines of work that promote open-source foundation models for Azerbaijani. We introduce (1) a large text corpus for Azerbaijani, (2) a family of encoder-only language models trained on this dataset, (3) labeled datasets for evaluating these models, and (4) extensive evaluation that covers all major open-source models with Azerbaijani support.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Large language models (LLMs) have seen a sudden rise in popularity in recent years. Both open-source and proprietary models have seen wide adoption across various industries. This boost has not been shared equally across different regions, however, mostly due to the slow osmosis of these technologies into low-resource languages. Azerbaijani language falls on the \"other\" side of this barrier, with its 24 million speakers worldwide.\nWhile some models have a limited understanding of the Azerbaijani language, only paid models offered by OpenAI have seen some level of adoption in the industry. Open-source models are being created with multilingual or Azerbaijani-only capabilities, but the community is not as keen to adopt them. This is possibly due to the limited exploration of these models\u2019 potential. This paper encompassed several lines of work that share a common goal - promoting open-source foundational models for Azerbaijani. Our contributions are as follows:\nDOLLMA: A new text corpus of 651.1 million words in Azerbaijani that can be used for pre-training LLMs.\naLLMA: A new family of BERT-class models trained on this dataset from scratch.\nThree labeled datasets that can be used for benchmarking foundation models in Azerbaijani:\nAZE-SCI: A text classification dataset.\nAZE-NSP: A next-sentence prediction dataset.\nCB-MCQ: A closed-book question-answering dataset.\nA benchmark for several natural language understanding (NLU) tasks in Azerbaijani. It contains our newly introduced models and other existing open-source alternatives."
10
+ },
11
+ {
12
+ "section_id": "1.1",
13
+ "parent_section_id": "1",
14
+ "section_name": "Foundation Models",
15
+ "text": "While language modeling has a long history, transformer-based large foundation models can be considered a recent phenomenon. These models have a disproportionately high number of trainable parameters, made possible due to the highly parallelizable nature of the transformer architecture. Their development takes place in two stages: Pre-training and fine-tuning. Pre-training is performed on Web-scale text corpora, while fine-tuning is performed on smaller and higher-quality data to adapt the model to a specific task. (Minaee et al., 2024 ###reference_b24###)\nFoundation models exist for various modalities, including language, vision, and speech. Language foundation models are usually classified as encoder, decoder, or encoder-decoder models. Encoder models are used for tasks that require language understanding, such as sentiment analysis and extractive question-answering. Encoder-decoder and decoder-only models are better suited for generative tasks, such as machine translation and text summarisation. Our work concentrates on encoder-only models. Our main inspiration is the BERT model family by (Devlin et al., 2019 ###reference_b10###) and its derivatives.\nIn the rest of the paper, a foundation model refers to a language model trained on a vast amount of unlabeled text data that can be fine-tuned for various downstream tasks. A large language model refers to a foundation language model with at least tens of millions of parameters."
16
+ },
17
+ {
18
+ "section_id": "1.2",
19
+ "parent_section_id": "1",
20
+ "section_name": "Modeling Azerbaijani",
21
+ "text": "The majority of LLMs are either monolingual English models or multilingual models that do not support Azerbaijani. Very few multilingual models support Azerbaijani, and only recently monolingual Azerbaijani models are beginning to emerge.\nThis slow progress can be explained by several factors. A smaller market and less investment is an obvious explanation, but the field faces more fundamental challenges that would not be immediately solved by more funding. One of these is the state of digitalization of the language. Most of the electronic books in Azerbaijani are scanned books. Only books published since the 1990s are written in the last version of the Azerbaijani Latin alphabet 111There was an older version of the Azerbaijani Latin alphabet introduced by the Soviets in 1922. This followed several variations until 1939 when the alphabet was replaced with a Cyrillic alternative. Azerbaijan started the transition to an updated Latin alphabet in 1991, which was completed in 2001., which creates another barrier. Yet another challenge is the small size of the community that\u2019s devoted to the development of open-source language models for Azerbaijani. The challenges regarding digitalization and script differences are further discussed in the third section.\nAn idea that is often heard regarding Azerbaijani LLMs is that we can simply go for the models developed for Turkish since languages are so similar. Azerbaijani and Turkish languages are not as similar as it is publicly perceived. According to (Salehi and Neysani, 2017 ###reference_b31###), Azerbaijanis scored 56% of receptive intelligibility in spoken Turkish. Differences in written language are not any smaller. Based on the methodology offered by (Gupta et al., 2019 ###reference_b13###), a 44% similarity score has been calculated between the vocabularies of the two languages 222https://www.ezglot.com/most-similar-languages?l=aze ###reference_ges?l=aze###. Due to these significant differences, Turkish LLMs are not useful in machine learning tasks for Azerbaijani.\nThe paper is structured as follows. The next section gives a brief overview of previous works on foundational language models, and language modeling on Azerbaijani. The third section introduces DOLLMA, a new text corpus, and outlines the methodology, challenges we faced, and future works. The fourth section introduces aLLMA, a new family of monolingual encoder-only language models. The fifth section introduces several benchmarks for evaluating encoder-only Azerbaijani language models. These benchmarks are used to evaluate newly introduced models, as well as existing alternatives. The sixth section presents these benchmarks\u2019 results."
22
+ },
23
+ {
24
+ "section_id": "2",
25
+ "parent_section_id": null,
26
+ "section_name": "Previous works",
27
+ "text": "The use of neural networks for language modeling can be traced back to the early 2000s. (Bengio et al., 2000 ###reference_b6###) and (Mikolov et al., 2010 ###reference_b23###) had created neural networks that outperformed traditional state-of-the-art model. (Schwenk et al., 2006 ###reference_b32###) uses neural networks for machine translation.\nThese models and their derivatives were task-specific. The idea of creating a foundational language model that could later be adapted (i.e., fine-tuned) to specific tasks was popularized only after the introduction of the transformer architecture by (Vaswani et al., 2017 ###reference_b36###). The earliest foundational language model that gained wide adoption was BERT by (Devlin et al., 2019 ###reference_b10###) and later variations like RoBERTa (Liu et al., 2019 ###reference_b21###).\nBERT was an encoder-only model, therefore more suitable for problems that could be formulated as a subset of the classification problem. Generative foundation models came out around the same time, in the example of\nGPT-1 (Radford and Narasimhan, 2018 ###reference_b27###), GPT-2 (Radford et al., 2019 ###reference_b28###), and T5 (Raffel et al., 2019 ###reference_b30###). While the GPT series continued with closed-source, enterprise models, other alternatives quickly emerged with superior performance. The most famous of these was the LLaMA series, which directly or indirectly resulted in the development of hundreds of open-source language models. (Touvron et al., 2023 ###reference_b35###).\nEarly foundation models were trained on English text, but multilingual models quickly emerged. Google had released multilingual BERT alternatives, and mGPT by (Shliazhko et al., 2023 ###reference_b33###) was an early variation of the GPT architecture for multiple languages. XLM-RoBERTa by (Conneau et al., 2020 ###reference_b9###) was a larger and more successful alternative to mGPT and was quickly adopted worldwide.\nXLM-RoBERTa was also one of the first (if not the first) foundation models that supported Azerbaijani. We are aware of only one academic work that has concentrated on the development of foundational language models for Azerbaijani. (Ziyaden et al., 2024 ###reference_b37###) have trained a RoBERTa model on the Azerbaijani split of the OSCAR dataset (Ortiz Su\u00e1rez et al., 2020 ###reference_b25###). This work is a first of its kind for Azerbaijani and a very valuable starting point. However, it does not concentrate on the development of a foundation model. Its main focus is improving model performance by text augmentation. Therefore, they do not perform a systematic evaluation of the model. They have released one RoBERTa model, without different sizes, which is yet another limiting factor in the adoption of the work. Unfortunately, this model has not been included in our evaluation benchmarks because they have not released a tokenizer that is compatible with their model.\nThere have also been some community attempts to create such open-source models. A series of RoBERTa models were developed by continuing the pre-training phase on a small Azerbaijani dataset (Hajili, 2024d ###reference_b17###). Alas Development Center has developed a series of decoder-only LLMs for Azerbaijani 333https://github.com/interneuron-ai/project-barbarossa ###reference_barbarossa###, but they offer no explanation regarding their approach, and the models failed to pass initial sanity checks."
28
+ },
29
+ {
30
+ "section_id": "3",
31
+ "parent_section_id": null,
32
+ "section_name": "Text corpus",
33
+ "text": "A large text corpus is a prerequisite for training a large language model. For reference, GPT-2 and RoBERTa both were trained on OpenWebText (Liu et al., 2019 ###reference_b21###), consisting of 13.5 billion tokens, which is roughly equivalent to 10 billion words. Original BERT models were trained on 3.3. billion words. While these numbers have exploded in recent years, the success of these models suggests that similarly effective models can be trained on similarly sized datasets.\nThe largest corpora that existed at the beginning of our work were OSCAR, which contained 316 million words in Azerbaijani, and Colossal Clean Crawled Corpus (C4) with 1.7 billion words. Introduced by (Raffel et al., 2020 ###reference_b29###), C4 is one of the most widely used datasets in the pretraining stage of LLMs. C4 is labeled by language and contains 1.83 million documents tagged as Azerbaijani. Upon further inspection, however, we discovered a significant portion of this text is not only in different languages, but also in different alphabets (Armenian, Georgian, and Cyrillic). In addition, the C4 dataset contains a significant amount of informal text. This can be a valuable resource, but it is outside the scope of our work. Considering all of these points, we decided against using it. OSCAR (Ortiz Su\u00e1rez et al., 2020 ###reference_b25###) dataset is also derived from CommonCrawl. It suffers from the same problems, so it was not included in our corpus either.\nDue to these limitations, we decided to curate a new dataset specifically for pre-training LLMs that understand Azerbaijani. This new corpus is called DOLLMA (Dataset for Open Large Language Models in Azerbaijani).444https://huggingface.co/datasets/allmalab/DOLLMA ###reference_OLLMA### The first and current version of this dataset contains Azerbaijani Wikipedia, Translated English Wikipedia (incomplete), news, blogs, books, and Azerbaijani laws. This dataset contains about 651.1 million words.555Words were counted with a simple whitespace tokenizer. New versions of DOLLMA will incorporate the Common Crawl data.\nBooks. We attempted to create a large book corpus but faced several challenges. Most of the available electronic books in Azerbaijani are scanned copies. Publishers rarely offer electronic books that are suitable for text extraction. As of 9 May 2024, Qanun Publishing, the largest publishing house in Azerbaijan, offers 52 PDFs or EPUBs on its website. The remaining books, which were sampled from the Azerbaijan National Library 666https://www.millikitabxana.az/ ###reference_www.millikitabxana.az/###, Children\u2019s Library 777https://www.clb.az/ ###reference_www.clb.az/###, and other sources, are all scanned copies that have occasionally passed through an OCR model. For OCR, Tesseract (Smith, 2007 ###reference_b34###) was chosen due to its multilingual support and open-source availability. We scanned thousands of books and manually sampled and analyzed them. Tesseract failed to capture guillemets, which is widespread in older Azerbaijani books. It also mixed up \"m\" with \"rn\" in scanned books. This happened often enough to decrease the quality of the text substantially. Due to these limitations, we decided against using OCR output altogether as training data. Instead, we opted for two datasets:\nBooks I contains a small number of handpicked books.\nBooks II contains a higher number of books with less detailed processing.\nWikipedia. 
We used dumps provided by the Wikimedia Foundation to create a new version of Azerbaijani Wikipedia. Both the data (aLLMA Lab, 2024d ###reference_b4###) and cleaning scripts 888https://github.com/ceferisbarov/azwiki ###reference_### are publicly available. BHOS AI team leads another initiative where they are using open-source translation models to translate English Wikipedia into Azerbaijani (BHOS AI R&D Center, 2024 ###reference_b8###). While this dataset offers little in terms of linguistic variety, it provides an invaluable knowledge base to train the models. Therefore, it was included in the final corpus.\nNews. There is an abundance of news datasets for Azerbaijani. However, we decided against using a very large news corpus, since it offers little variety in terms of language.\nIn our experience, models trained on news datasets do not learn the language comprehensively, possibly because the news contains little to no creative writing, first- and second-person narration, and dialogue. Due to these limitations, only two news datasets were included. One contains text scraped from several news platforms, and the other contains news and updates from Azerbaijan National Library. The BHOS AI team provided both datasets.\nBlogs. Another data source was blog posts collected from various websites. Instead of scraping a large number of websites for their blogs, several blogs were manually picked due to their high-quality text and informative content.\nLaws. The last part consisted of Azerbaijani laws, all of which are publicly available. We have also released this as an independent text corpus (aLLMA Lab, 2024e ###reference_b5###).\nYou can see a summary of these sources and their accompanying upscaling ratios in Table 1 ###reference_###.\nUpscaling ratios were decided rather arbitrarily. We decided against upscaling the news since they offer little linguistic variety. Azerbaijani Wikipedia was upscaled higher than the translated English Wikipedia to account for the lossy translation process. Azerbaijani laws offer higher-quality text than Azerbaijani Wikipedia but offer less variety both in terms of content and form. Considering this, we upscaled them at the same level. Blogs and Books II datasets were hand-picked and constituted the highest-quality text in our corpus. Therefore, their upscaling ratio was the highest. Books II had mediocre quality, mostly due to the challenges of extracting text from PDF files. We upscaled it at the same level as the English Wikipedia.\nA major shortcoming of DOLLMA is imbalanced domain distribution. While the dataset contains a substantial amount of text on Azerbaijani laws, it is lacking in terms of first-person narrative, and STEM fields. It is also heavily Azerbaijan-centric, which may or may not be an issue depending on the final goal.\nDeduplication has not been performed since none of the sources has the potential of overlapping with another (i.e., Wikipedia and News, or Books and Laws). However, the addition of a deduplication stage is important if this corpus is to be expanded further.\nLater versions of DOLLMA will include several major changes:\nAdd deduplication to the pipeline. This will allow us to incorporate potentially overlapping text sources.\nCreate a large-scale book corpus.\nImprove domain distribution.\nIncorporate web-scraping datasets such as OSCAR and C4.\nWe believe that these changes will open up new possibilities for modeling the Azerbaijani language. 
At the current state, however, taking into account time and hardware limitations, our dataset was sufficient to continue to the modeling stage."
34
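To make the weighting concrete, the following is a minimal sketch of how the per-source upscaling ratios from Table 1 could be applied when assembling the training corpus. The file names, the one-document-per-line layout, and the naive repeat-and-shuffle strategy are illustrative assumptions, not the released pipeline.

```python
import random

# (source file, upscale ratio) pairs mirroring Table 1; file names are made up
SOURCES = [
    ("english_wikipedia_translated.txt", 4),
    ("azerbaijani_wikipedia.txt", 6),
    ("news.txt", 1),
    ("books_1.txt", 20),
    ("books_2.txt", 4),
    ("blogs.txt", 20),
    ("laws.txt", 6),
]

def build_corpus(out_path, seed=42):
    """Repeat each source's documents `ratio` times, then shuffle globally."""
    docs = []
    for path, ratio in SOURCES:
        with open(path, encoding="utf-8") as f:
            source_docs = [ln.strip() for ln in f if ln.strip()]  # one doc per line
        docs.extend(source_docs * ratio)
    random.Random(seed).shuffle(docs)
    with open(out_path, "w", encoding="utf-8") as f:
        f.write("\n".join(docs))

build_corpus("dollma_upscaled.txt")
```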
+ },
35
+ {
36
+ "section_id": "4",
37
+ "parent_section_id": null,
38
+ "section_name": "Pre-training",
39
+ "text": "Using DOLLMA, we have developed a series of foundational language models called aLLMA (a Large Language Model for Azerbaijani). aLLMA has been trained in three sizes: small, base, and large. Base and large correspond to the original BERT models BERTBASE and BERTLARGE (Devlin et al., 2019 ###reference_b10###). Small architecture was borrowed from (Bhargava et al., 2021 ###reference_b7###). Architectural details of these models can be found in Table 2 ###reference_###. All three models999https://huggingface.co/allmalab/bert-small-aze ###reference_-aze###,101010https://huggingface.co/allmalab/bert-base-aze ###reference_aze###,111111https://huggingface.co/allmalab/bert-large-aze ###reference_-aze### have been released publicly and included in our benchmarks.\nWe recognize two alternative approaches to the problem of modeling a low-resource language:\nContinue the pertaining step of an existing multilingual foundation model.\nPre-train a foundation model from scratch.\naLLMA models were developed with the latter approach. While the benchmarks contain several models that have been trained with the former method, no detailed analysis of the performance difference is provided. This is left as a future research area.\nThe pre-training task was only masked language modeling. The next sentence prediction task constitutes one of our benchmarks but is not included in the pre-training stage. Training loss of aLLMA-Small and aLLMA-Base models can be found in Figure 1 ###reference_###.\nOne major limitation of the original BERT paper was static masking. If tokens are masked before the training process, then even with multiple epochs, the model will always have to predict the same token. We borrow the idea of dynamic masking from (Liu et al., 2019 ###reference_b21###). Instead of masking tokens before the training, tokens are masked on demand. This results in various masking patterns on the same text samples.\nSince our model is trained from scratch on an Azerbaijani-only dataset, using existing multilingual tokenizers offered no advantages. A WordPiece tokenizer121212https://huggingface.co/allmalab/bert-tokenizer-aze ###reference_izer-aze### was trained on a weighted version of DOLLMA, with a vocabulary size of 64k. We have not performed a systematic evaluation to find the optimal vocabulary size. (Kaya and Tantu\u011f, 2024 ###reference_b20###) have researched the impact of vocabulary size on the performance of Turkish language models. Since both Azerbaijani and Turkish are agglutinative languages and share similar morphological features, we used the results of this research as a guide. While (Kaya and Tantu\u011f, 2024 ###reference_b20###) recommends increasing this number further, anything above that would be too computationally expensive for us.\n###figure_1###"
40
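The two ingredients described above can be sketched with standard Hugging Face tooling: training a cased 64k WordPiece tokenizer from scratch, and dynamic masking via a collator that re-samples masked positions every time a batch is formed. The corpus file name and the 15% masking probability are assumptions for illustration.

```python
import os

from tokenizers import BertWordPieceTokenizer
from transformers import BertTokenizerFast, DataCollatorForLanguageModeling

os.makedirs("tokenizer", exist_ok=True)

# (1) Train a cased WordPiece tokenizer from scratch on the weighted corpus.
wp = BertWordPieceTokenizer(lowercase=False)
wp.train(files=["dollma_upscaled.txt"], vocab_size=64000,
         special_tokens=["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"])
wp.save_model("tokenizer")  # writes tokenizer/vocab.txt

# (2) Dynamic masking: the collator samples new masked positions every time a
# batch is assembled, so each epoch sees different masking patterns.
tok = BertTokenizerFast("tokenizer/vocab.txt", do_lower_case=False)
collator = DataCollatorForLanguageModeling(tokenizer=tok, mlm=True,
                                           mlm_probability=0.15)

enc = tok("Bakı Azərbaycanın paytaxtıdır.", truncation=True, max_length=128)
batch = collator([enc])
print(batch["input_ids"])  # some ids replaced by [MASK], differently each call
print(batch["labels"])     # -100 everywhere except the masked positions
```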
+ },
41
+ {
42
+ "section_id": "5",
43
+ "parent_section_id": null,
44
+ "section_name": "Benchmarks",
45
+ "text": "This section presents the tasks that were used to evaluate the natural language understanding capabilities of foundation models in Azerbaijani. All of these tasks are a form of classification since the models are encoder-only. We created three new datasets - text classification (AZE-SCI), closed-book multiple-choice questions (CB-MCQ), and next-sentence prediction (AZE-NSP) as a part of this project. Four more datasets (WikiANN, translated MRPC, translated SQuAD, and LDQuAd) were borrowed from the open-source community.\nFor each task, all models were trained with the same hyperparameters (learning rate, number of epochs, etc.). In almost all cases, models were undertrained - the project had hardware and time constraints and we were trying to get comparative results rather than functioning models. The source code for all experiments is being released, and the reader can generate better-performing models by simply training longer. Benchmarks have been summarized in Table 3 ###reference_###."
46
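A minimal sketch of this uniform evaluation protocol is shown below: every checkpoint is fine-tuned with identical hyperparameters and scored with weighted F1. The hyperparameter values, the "text"/"label" column names, and the train/test split layout are placeholders, not the paper's exact settings.

```python
import numpy as np
from sklearn.metrics import f1_score
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

def f1_metric(eval_pred):
    logits, labels = eval_pred
    return {"f1": f1_score(labels, np.argmax(logits, axis=-1), average="weighted")}

def finetune(model_name, dataset, num_labels):
    """Fine-tune one checkpoint; `dataset` is a DatasetDict with train/test
    splits and "text"/"label" columns."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(
        model_name, num_labels=num_labels)
    encoded = dataset.map(lambda b: tok(b["text"], truncation=True), batched=True)
    args = TrainingArguments(
        output_dir=f"runs/{model_name.split('/')[-1]}",
        learning_rate=2e-5,               # identical for every model
        num_train_epochs=3,               # identical for every model
        per_device_train_batch_size=16)
    trainer = Trainer(model=model, args=args, tokenizer=tok,
                      compute_metrics=f1_metric,
                      train_dataset=encoded["train"],
                      eval_dataset=encoded["test"])
    trainer.train()
    return trainer.evaluate()["eval_f1"]

# e.g. finetune("allmalab/bert-base-aze", load_dataset(...), num_labels=...)
```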
+ },
47
+ {
48
+ "section_id": "5.1",
49
+ "parent_section_id": "5",
50
+ "section_name": "AZE-SCI",
51
+ "text": "AZE-SCI dataset contains titles, topics, and subtopics of dissertations written at Azerbaijani universities and institutes. Subtopics were ignored and only topic labels were used for classification. Being the simplest out of all, this dataset offers a traditional text classification challenge. (Hajili, 2024a ###reference_b14###)"
52
+ },
53
+ {
54
+ "section_id": "5.2",
55
+ "parent_section_id": "5",
56
+ "section_name": "AZE-NSP",
57
+ "text": "The next-sentence prediction task allows us to assess the higher-level understanding capabilities of the models. We were unable to find such a dataset in Azerbaijani and decided to build one ourselves. Several books were compiled and split into paragraphs. A sentence pair was extracted from each paragraph and divided into two parts. The second sentence served as the true label, while randomly sampled sentences from other parts of the same book functioned as distractors. Special care was taken to ensure that there was no overlap between this dataset\u2019s source text and the pre-training data. (aLLMA Lab, 2024b ###reference_b2###)"
58
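The construction just described can be sketched as follows; the double-newline paragraph splitting and the regex sentence splitter are simplifications of whatever preprocessing was actually used.

```python
import random
import re

def build_nsp_samples(book_text, n_distractors=3, seed=0):
    """Build (first sentence, true next sentence, distractors) triples."""
    rng = random.Random(seed)
    paragraphs = [p for p in book_text.split("\n\n") if p.strip()]
    # naive sentence splitter; a proper tokenizer would be used in practice
    split = lambda p: [s.strip() for s in re.split(r"(?<=[.!?])\s+", p) if s.strip()]
    all_sentences = [s for p in paragraphs for s in split(p)]
    samples = []
    for p in paragraphs:
        sents = split(p)
        if len(sents) < 2:
            continue
        first, true_next = sents[0], sents[1]
        # distractors are sampled from elsewhere in the same book
        pool = [s for s in all_sentences if s not in (first, true_next)]
        samples.append({
            "first": first,
            "true_next": true_next,
            "distractors": rng.sample(pool, k=min(n_distractors, len(pool))),
        })
    return samples
```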
+ },
59
+ {
60
+ "section_id": "5.3",
61
+ "parent_section_id": "5",
62
+ "section_name": "CB-MCQ",
63
+ "text": "The most challenging task given to the models was a closed-book multiple-choice question-answering dataset, collected from various websites. Its content is mostly middle- and high-school topics, but also contains topics like a driver\u2019s exam and state service examination. (aLLMA Lab, 2024a ###reference_b1###)\nAll of the tested models failed to learn this model even at a basic level. Due to this, we have decided against testing all models and including them in the leaderboards. This benchmark remains an open challenge for Azerbaijani language modeling. It has been released publicly on the Hugging Face platform to promote further research."
64
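For context, this is how a closed-book MCQ item is typically scored with an encoder-only model: each (question, option) pair is encoded separately and a multiple-choice head picks the highest-scoring option. Note that the head is freshly initialized here, so the scores are meaningless until the model is fine-tuned on CB-MCQ; the example question is made up.

```python
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer

tok = AutoTokenizer.from_pretrained("allmalab/bert-base-aze")
# the multiple-choice head is newly initialized: fine-tuning on CB-MCQ is
# required before the predictions mean anything
model = AutoModelForMultipleChoice.from_pretrained("allmalab/bert-base-aze")

question = "Azərbaycanın paytaxtı hansı şəhərdir?"  # made-up example item
options = ["Gəncə", "Bakı", "Sumqayıt", "Şəki"]

enc = tok([question] * len(options), options,
          truncation=True, padding=True, return_tensors="pt")
# the model expects tensors of shape (batch, num_choices, seq_len)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_choices)
print(options[int(logits.argmax())])
```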
+ },
65
+ {
66
+ "section_id": "5.4",
67
+ "parent_section_id": "5",
68
+ "section_name": "Existing datasets",
69
+ "text": "Several open-source datasets were sampled as an evaluation criterion. Some of these datasets were discarded due to low quality or small size. In the end, we decided on WikiANN, translated SQuAD, LDQuAd, and translated MRPC."
70
+ },
71
+ {
72
+ "section_id": "5.4.1",
73
+ "parent_section_id": "5.4",
74
+ "section_name": "5.4.1 WikiANN",
75
+ "text": "WikiANN is a multilingual named entity recognition dataset sampled from Wikipedia articles (Pan et al., 2017 ###reference_b26###). The dataset contains 12 thousand samples in Azerbaijani. The text is tokenized and location, person, and organization entities are labeled. Since the tokenized version of the dataset does not match our tokenizer, each token was re-tokenized separately and a tag was assigned to each new token."
76
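The re-tokenization step can be sketched as follows: each word-level tag is propagated to every sub-token produced by our tokenizer, with special tokens mapped to the ignore index -100. This is a common alignment strategy; the exact procedure used in the paper may differ in details.

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("allmalab/bert-base-aze")

def realign_tags(words, word_tags):
    """Re-tokenize word-level NER data and propagate tags to sub-tokens."""
    enc = tok(words, is_split_into_words=True, truncation=True)
    labels = [-100 if wid is None else word_tags[wid]  # -100: special tokens
              for wid in enc.word_ids()]
    return enc, labels

words = ["Bakı", "Azərbaycanın", "paytaxtıdır"]
tags = [5, 0, 0]  # e.g. 5 = B-LOC in WikiANN's tag set
enc, labels = realign_tags(words, tags)
print(tok.convert_ids_to_tokens(enc["input_ids"]), labels)
```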
+ },
77
+ {
78
+ "section_id": "5.4.2",
79
+ "parent_section_id": "5.4",
80
+ "section_name": "5.4.2 SQuAD",
81
+ "text": "Question-answering problems usually demand more robust language understanding and therefore serve as a better criterion than simpler classification tasks. There is no original open-book question-answering dataset in Azerbaijani. The Stanford Question Answering Dataset (SQuAD) is one such dataset in English. We used a translated and reindexed version of the original (Hajili, 2024e ###reference_b18###)."
82
+ },
83
+ {
84
+ "section_id": "5.4.3",
85
+ "parent_section_id": "5.4",
86
+ "section_name": "5.4.3 LDQuAd",
87
+ "text": "LDQuAd is a native Azerbaijani alternative to the SQuAD dataset. It contains 154,000 thousand samples, about 30% of which have no answer. Upon further inspection, we realized that most samples with a \"no answer\" label actually had a correct answer. It is possible that indices were generated automatically with a string search, and some answers were not found, resulting in mislabeled samples. Due to this, we discarded all samples with no answer. (LocalDoc, 2024 ###reference_b22###)"
88
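A sketch of the validation and filtering described above is given below. The SQuAD-style field names ("context", "answers" with "text"/"answer_start") are an assumption about LDQuAd's schema.

```python
def clean_ldquad(samples):
    """Drop no-answer samples and fix answer indices by string search."""
    kept = []
    for s in samples:
        answers = s.get("answers", {"text": [], "answer_start": []})
        if not answers["text"]:  # "no answer" label: discard
            continue
        text, start = answers["text"][0], answers["answer_start"][0]
        if s["context"][start:start + len(text)] != text:
            start = s["context"].find(text)  # try to re-locate the span
            if start == -1:
                continue  # answer truly absent from the context: drop
            answers["answer_start"][0] = start
        kept.append(s)
    return kept
```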
+ },
89
+ {
90
+ "section_id": "5.4.4",
91
+ "parent_section_id": "5.4",
92
+ "section_name": "5.4.4 MRPC",
93
+ "text": "Microsoft Research Paraphrase Corpus (MRPC) (Dolan and Brockett, 2005 ###reference_b11###) is an English dataset that is used in NLU benchmarks like GLUE. Each sample contains two sentences and a label of whether or not two sentences are paraphrased versions of each other. We used a translated version of the corpus (Eljan Mahammadli, 2024 ###reference_b12###)."
94
+ },
95
+ {
96
+ "section_id": "6",
97
+ "parent_section_id": null,
98
+ "section_name": "Results",
99
+ "text": "###table_1### Initial tests were performed on dozens of foundation models and some were deliberately left out of the final analysis due to their inferior performance. The final benchmark includes four model categories:\nMultilingual foundation models.\nBERT-Base-MULTI is a multilingual version of the original BERT model. XLM-RoBERTa-Base and XLM-RoBERTa-Large are some of the best-performing multilingual models (Conneau et al., 2020 ###reference_b9###). mDeBERTa-v3 is a multilingual version of DeBERTa v3 model (He et al., 2023 ###reference_b19###)).\nMultilingual models further pre-trained for Azerbaijani. BERT-Base-AZE (Hajili, 2024b ###reference_b15###), RoBERTa-Base-AZE (Hajili, 2024d ###reference_b17###), and mDEBERTA-v3-AZE (Hajili, 2024c ###reference_b16###) have been further pre-trained on a small and high-quality Azerbaijani dataset. Their base models are RoBERTA-Base, BERT-Base-MULTI, and DeBERTa-Base, respectively.\nModels pre-trained from scratch.\naLLMA-Small, aLLMA-Base, and aLLMA-Large are the only monolingual Azerbaijani models.\nBaseline models. The original English-only BERT-Base was added as a baseline for the multilingual models. BERT-Scratch refers to the models trained on a specific task without pre-training weights. It functions as a baseline for all models in the benchmark.\nYou can find the results in Table 4 ###reference_###. mDeBERTa-v3 and aLLMA-Base have the best overall performance. Figure 2 ###reference_### compares the performance of Base models.131313The difference in number of parameters between these models is due to varying vocabulary sizes. Otherwise, their architectures are identical. aLLMA-Base outperforms all other models of similar size in 4 out of 6 benchmarks. Comparing BERT-Base-AZE with BERT-Base-MULTI shows that further pre-training of multilingual models can result in some performance improvement, but also model collapse (compare their performance in LDQuAd benchmark). However, a more comprehensive analysis is required before we can make generalizations about the effects of continued monolingual pre-training on multilingual models.\nBERT-Scratch performs particularly well on AZE-SCI, MRPC, and WikiANN tasks. We believe this has two explanations. The first is that these tasks can be solved partially with statistical information from the input text, while this is not possible with the other tasks. The second is that the random baseline in these tasks is relatively high, while SQuAD and LDQuAd have very low random baselines.\n###figure_2### These results demonstrate several points regarding foundation models for low-resource languages:\nPre-training from scratch on a monolingual dataset is a viable strategy for building a low-resource LLM. aLLMA-Base has competitive performance against larger models despite being trained only on the DOLLMA corpus.\nMultilingual models offer competitive performance even in languages that they were undertrained for. Azerbaijani has not been the focus in any of these multilingual models (XLM-RoBERTa, mDeBERTa-v3, or BERT-Base-MULTI). Despite this, they outperform most models in some tasks.\nEven monolingual English foundation models can be useful for fine-tuning on a downstream task and perform better than training a model from scratch. BERT-Base was included in our research as a baseline but exceeded our expectations. This suggests that the state-of-the-art English models can be utilized for certain NLU tasks in Azerbaijani. 
This remains a potential research area.\nIt is still possible that we have missed some high-quality models and we are open to feedback regarding this. Our work can be strengthened by finding or creating new benchmarks. We hope that this work will lay the foundations for such developments."
100
+ },
101
+ {
102
+ "section_id": "7",
103
+ "parent_section_id": null,
104
+ "section_name": "Conclusion",
105
+ "text": "Despite some academic and community attempts to create a foundation model for Azerbaijani, this problem has not received systemic treatment. We tackle this issue by introducing a new family of foundation models for the language and benchmarking these models and other existing alternatives. To compensate for the lack of datasets suitable for benchmarking LLMs in Azerbaijani, we introduce text classification, closed-book question-answering, and next-sentence prediction datasets.\nThis work can be extended in several ways. The simplest improvement would be training larger models on larger corpora. Our project does not achieve this due to time and hardware limitations. aLLMA models are not a final product, but an early prototype. A larger training corpus, more advanced hardware, and a better-optimized training process will certainly result in more robust foundation models for Azerbaijani.\nA more urgent work, however, is extending the benchmarks by creating more labeled task-specific datasets and adding other existing models to the leaderboards.\nIncluding the next-sentence prediction task in the pre-training phase can increase the performance of aLLMA models further.\nAnother ambitious direction would be using our corpus to develop a generative foundation model. This paper concentrated on encoder-only models because it is a simpler problem to solve and it has more immediate applications. Nevertheless, generative language models have wide-ranging industrial applications and demand a systemic treatment."
106
+ }
107
+ ],
108
+ "appendix": [],
109
+ "tables": {
110
+ "1": {
111
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T1.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.1.1.1.1\">Data source</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.1.1.1.2\">Word count</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.1.1.1.3\">Upscale</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.1.1.1.4\">Final count</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.1.1.1.5\">Source</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.1.2.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.1.2.1.1\">English Wikipedia</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.1.2.1.2\">194.0M</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.1.2.1.3\">4</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.1.2.1.4\">776.0M</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.1.2.1.5\"><cite class=\"ltx_cite ltx_citemacro_citep\">(BHOS AI R&amp;D Center, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.02337v2#bib.bib8\" title=\"\">2024</a>)</cite></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.3.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.1.3.2.1\">Azerbaijani Wikipedia</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.1.3.2.2\">40.0M</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.1.3.2.3\">6</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.1.3.2.4\">245.0M</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.1.3.2.5\"><cite class=\"ltx_cite ltx_citemacro_citep\">(aLLMA Lab, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.02337v2#bib.bib3\" title=\"\">2024c</a>)</cite></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.4.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.1.4.3.1\">News</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.1.4.3.2\">238.9M</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.1.4.3.3\">1</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.1.4.3.4\">238.9M</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.1.4.3.5\">BHOS AI R&amp;D Center</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.5.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.1.5.4.1\">Books I</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.1.5.4.2\">2.5M</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.1.5.4.3\">20</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.1.5.4.4\">50.0M</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.1.5.4.5\">aLLMA Lab</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.6.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.1.6.5.1\">Books II</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.1.6.5.2\">131.7M</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.1.6.5.3\">4</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.1.6.5.4\">526.8M</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.1.6.5.5\">LocalDoc</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.7.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.1.7.6.1\">Blogs</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.1.7.6.2\">0.9M</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.1.7.6.3\">20</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.1.7.6.4\">17.5M</td>\n<td class=\"ltx_td 
ltx_align_left\" id=\"S3.T1.1.7.6.5\">aLLMA Lab</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.8.7\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.1.8.7.1\">Azerbaijani laws</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.1.8.7.2\">44M</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.1.8.7.3\">6</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.1.8.7.4\">264M</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.1.8.7.5\"><cite class=\"ltx_cite ltx_citemacro_citep\">(aLLMA Lab, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.02337v2#bib.bib5\" title=\"\">2024e</a>)</cite></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.9.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_t\" id=\"S3.T1.1.9.8.1\">Total</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_t\" id=\"S3.T1.1.9.8.2\">651.1M</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_t\" id=\"S3.T1.1.9.8.3\">-</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_t\" id=\"S3.T1.1.9.8.4\">2118.2M</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_t\" id=\"S3.T1.1.9.8.5\">-</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Data sources used to generate the DOLLMA corpus. English Wikipedia has been translated with open-source models by the BHOS AI team.</figcaption>\n</figure>",
112
+ "capture": "Table 1: Data sources used to generate the DOLLMA corpus. English Wikipedia has been translated with open-source models by the BHOS AI team."
113
+ },
114
+ "2": {
115
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T2\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T2.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T2.1.1.1.1\">Model</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T2.1.1.1.2\">Hidden Size</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T2.1.1.1.3\">Num. Attention Heads</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T2.1.1.1.4\">Num. Hidden Layers</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T2.1.1.1.5\">Num. Parameters</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T2.1.2.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.1.2.1.1\">aLLMA-Small</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.1.2.1.2\">512</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.1.2.1.3\">8</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.1.2.1.4\">4</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.1.2.1.5\">45.9M</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.3.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T2.1.3.2.1\">aLLMA-Base</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T2.1.3.2.2\">768</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T2.1.3.2.3\">12</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T2.1.3.2.4\">12</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T2.1.3.2.5\">135.2M</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.4.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S3.T2.1.4.3.1\">aLLMA-Large</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S3.T2.1.4.3.2\">1024</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S3.T2.1.4.3.3\">16</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S3.T2.1.4.3.4\">24</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S3.T2.1.4.3.5\">369.5M</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span> Architectural differences among the aLLMA models.</figcaption>\n</figure>",
116
+ "capture": "Table 2: Architectural differences among the aLLMA models."
117
+ },
118
+ "3": {
119
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T3.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T3.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.1.1.1.1\">Dataset</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.1.1.1.2\">Num. of samples</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.1.1.1.3\">Task</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.1.1.1.4\">Source</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.1.2.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.1.2.1.1\">AZE-SCI</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.1.2.1.2\">5.76k</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.1.2.1.3\">Text classification</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.1.2.1.4\"><cite class=\"ltx_cite ltx_citemacro_citep\">(Hajili, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.02337v2#bib.bib14\" title=\"\">2024a</a>)</cite></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.3.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.1.3.2.1\">MRPC (translated)</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.1.3.2.2\">3.67k</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.1.3.2.3\">Paraphrase identification</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.1.3.2.4\"><cite class=\"ltx_cite ltx_citemacro_citep\">(Eljan Mahammadli, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.02337v2#bib.bib12\" title=\"\">2024</a>)</cite></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.4.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.1.4.3.1\">WikiANN</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.1.4.3.2\">12k</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.1.4.3.3\">Named entity recognition</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.1.4.3.4\"><cite class=\"ltx_cite ltx_citemacro_citep\">(Pan et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.02337v2#bib.bib26\" title=\"\">2017</a>)</cite></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.5.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.1.5.4.1\">SQuAD (Translated)</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.1.5.4.2\">54.1k</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.1.5.4.3\">Extractive QA</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.1.5.4.4\"><cite class=\"ltx_cite ltx_citemacro_citep\">(Hajili, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.02337v2#bib.bib18\" title=\"\">2024e</a>)</cite></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.6.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.1.6.5.1\">LDQuAd</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.1.6.5.2\">154k</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.1.6.5.3\">Extractive QA</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.1.6.5.4\"><cite class=\"ltx_cite ltx_citemacro_citep\">(LocalDoc, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.02337v2#bib.bib22\" title=\"\">2024</a>)</cite></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.7.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T3.1.7.6.1\">AZE-NSP</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T3.1.7.6.2\">9.15k</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T3.1.7.6.3\">Next sentence 
prediction</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T3.1.7.6.4\"><cite class=\"ltx_cite ltx_citemacro_citep\">(aLLMA Lab, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.02337v2#bib.bib2\" title=\"\">2024b</a>)</cite></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Benchmarks.</figcaption>\n</figure>",
120
+ "capture": "Table 3: Benchmarks."
121
+ },
122
+ "4": {
123
+ "table_html": "<figure class=\"ltx_table\" id=\"S6.T4\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S6.T4.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S6.T4.1.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S6.T4.1.1.1.1\">Model name</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S6.T4.1.1.1.2\">Size</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S6.T4.1.1.1.3\">AZE-SCI</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S6.T4.1.1.1.4\">MRPC</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S6.T4.1.1.1.5\">WikiANN</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S6.T4.1.1.1.6\">SQuAD</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S6.T4.1.1.1.7\">AZE-NSP</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S6.T4.1.1.1.8\">LDQuAd</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S6.T4.1.1.1.9\">Avg.</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T4.1.2.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S6.T4.1.2.2.1\"><span class=\"ltx_text\" id=\"S6.T4.1.2.2.1.1\" style=\"color:#3166FF;\">XLM-RoBERTa-Large</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S6.T4.1.2.2.2\">560M</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S6.T4.1.2.2.3\">89.76</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S6.T4.1.2.2.4\">82.41</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S6.T4.1.2.2.5\">92.35</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S6.T4.1.2.2.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T4.1.2.2.6.1\">75.70</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S6.T4.1.2.2.7\">33.46</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S6.T4.1.2.2.8\">83.48</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S6.T4.1.2.2.9\">76.19</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T4.1.3.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.3.3.1\"><span class=\"ltx_text\" id=\"S6.T4.1.3.3.1.1\" style=\"color:#3166FF;\">mDeBERTa-v3</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.3.3.2\">279M</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.3.3.3\">87.13</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.3.3.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T4.1.3.3.4.1\">83.71</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.3.3.5\">91.87</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.3.3.6\">72.27</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.3.3.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T4.1.3.3.7.1\">78.84</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.3.3.8\">85.29</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.3.3.9\">83.19</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T4.1.4.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.4.4.1\"><span class=\"ltx_text\" id=\"S6.T4.1.4.4.1.1\" style=\"color:#FF4F00;\">mDEBERTA-v3-AZE</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.4.4.2\">279M</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.4.4.3\">89.73</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.4.4.4\">80.18</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.4.4.5\">91.83</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.4.4.6\">70.31</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.4.4.7\">78.29</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.4.4.8\">85.07</td>\n<td class=\"ltx_td ltx_align_left\" 
id=\"S6.T4.1.4.4.9\">82.57</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T4.1.5.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.5.5.1\"><span class=\"ltx_text\" id=\"S6.T4.1.5.5.1.1\" style=\"color:#3166FF;\">XLM-RoBERTa-Base</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.5.5.2\">278M</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.5.5.3\">86.99</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.5.5.4\">70.90</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.5.5.5\">90.29</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.5.5.6\">70.97</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.5.5.7\">74.96</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.5.5.8\">85.17</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.5.5.9\">79.88</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T4.1.6.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.6.6.1\"><span class=\"ltx_text\" id=\"S6.T4.1.6.6.1.1\" style=\"color:#FF4F00;\">RoBERTa-Base-AZE</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.6.6.2\">278M</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.6.6.3\">89.17</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.6.6.4\">81.25</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.6.6.5\">91.62</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.6.6.6\">70.36</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.6.6.7\">76.98</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.6.6.8\">85.44</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.6.6.9\">82.47</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T4.1.7.7\">\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.7.7.1\"><span class=\"ltx_text\" id=\"S6.T4.1.7.7.1.1\" style=\"color:#FF4F00;\">BERT-Base-AZE</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.7.7.2\">178M</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.7.7.3\">88.80</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.7.7.4\">80.12</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.7.7.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T4.1.7.7.5.1\">92.35</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.7.7.6\">69.42</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.7.7.7\">74.12</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.7.7.8\">64.41</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.7.7.9\">78.20</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T4.1.8.8\">\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.8.8.1\"><span class=\"ltx_text\" id=\"S6.T4.1.8.8.1.1\" style=\"color:#3166FF;\">BERT-Base-Multi</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.8.8.2\">178M</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.8.8.3\">86.88</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.8.8.4\">79.92</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.8.8.5\">91.67</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.8.8.6\">68.92</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.8.8.7\">72.46</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.8.8.8\">83.48</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.8.8.9\">80.56</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T4.1.9.9\">\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.9.9.1\"><span class=\"ltx_text\" id=\"S6.T4.1.9.9.1.1\">BERT-Scratch</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.9.9.2\">135M</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.9.9.3\">73.31</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.9.9.4\">65.36</td>\n<td class=\"ltx_td 
ltx_align_left\" id=\"S6.T4.1.9.9.5\">72.95</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.9.9.6\">16.11</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.9.9.7\">50.73</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.9.9.8\">26.60</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.9.9.9\">50.84</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T4.1.10.10\">\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.10.10.1\"><span class=\"ltx_text\" id=\"S6.T4.1.10.10.1.1\">BERT-Base</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.10.10.2\">108M</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.10.10.3\">76.73</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.10.10.4\">75.00</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.10.10.5\">90.94</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.10.10.6\">55.51</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.10.10.7\">62.12</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.10.10.8\">74.88</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.10.10.9\">72.53</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T4.1.11.11\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S6.T4.1.11.11.1\"><span class=\"ltx_text\" id=\"S6.T4.1.11.11.1.1\" style=\"color:#008000;\">ALLMA-Large</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S6.T4.1.11.11.2\">370M</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S6.T4.1.11.11.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T4.1.11.11.3.1\">91.46</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S6.T4.1.11.11.4\">81.55</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S6.T4.1.11.11.5\">91.71</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S6.T4.1.11.11.6\">73.77</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S6.T4.1.11.11.7\">78.58</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S6.T4.1.11.11.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T4.1.11.11.8.1\">85.93</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S6.T4.1.11.11.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T4.1.11.11.9.1\">83.83</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T4.1.12.12\">\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.12.12.1\"><span class=\"ltx_text\" id=\"S6.T4.1.12.12.1.1\" style=\"color:#008000;\">ALLMA-Base</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.12.12.2\">135M</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.12.12.3\">90.84</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.12.12.4\">79.74</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.12.12.5\">91.26</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.12.12.6\">71.30</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.12.12.7\">75.95</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.12.12.8\">85.69</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T4.1.12.12.9\">82.46</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T4.1.13.13\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S6.T4.1.13.13.1\"><span class=\"ltx_text\" id=\"S6.T4.1.13.13.1.1\" style=\"color:#008000;\">ALLMA-Small</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S6.T4.1.13.13.2\">46M</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S6.T4.1.13.13.3\">88.06</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S6.T4.1.13.13.4\">71.77</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S6.T4.1.13.13.5\">90.07</td>\n<td class=\"ltx_td ltx_align_left 
ltx_border_bb\" id=\"S6.T4.1.13.13.6\">59.89</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S6.T4.1.13.13.7\">70.23</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S6.T4.1.13.13.8\">80.80</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S6.T4.1.13.13.9\">76.80</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 4: </span>Azerbaijani NLU benchmark. All metrics are F1 score. <span class=\"ltx_text\" id=\"S6.T4.6.1\" style=\"color:#3166FF;\"> Blue models</span> are multilingual. <span class=\"ltx_text\" id=\"S6.T4.7.2\" style=\"color:#FF4F00;\">Orange models</span> are multilingual models that have been further pre-trained for Azerbaijani. <span class=\"ltx_text\" id=\"S6.T4.8.3\" style=\"color:#008000;\">Green models</span> were trained from scratch only for Azerbaijani. <span class=\"ltx_text\" id=\"S6.T4.9.4\"> Black models<span class=\"ltx_text\" id=\"S6.T4.9.4.1\"> serve as baseline.</span></span></figcaption>\n</figure>",
124
+ "capture": "Table 4: Azerbaijani NLU benchmark. All metrics are F1 score. Blue models are multilingual. Orange models are multilingual models that have been further pre-trained for Azerbaijani. Green models were trained from scratch only for Azerbaijani. Black models serve as baseline."
125
+ }
126
+ },
127
+ "image_paths": {
128
+ "1": {
129
+ "figure_path": "2407.02337v2_figure_1.png",
130
+ "caption": "Figure 1: Training loss for aLLMA-Small, aLLMA-Base, and aLLMA-Large models.",
131
+ "url": "http://arxiv.org/html/2407.02337v2/extracted/5801265/tokens.jpg"
132
+ },
133
+ "2": {
134
+ "figure_path": "2407.02337v2_figure_2.png",
135
+ "caption": "Figure 2: Performance comparison among BERT models of the same configuration. aLLMA-Base outperforms the other models in 4 out of 6 benchmarks.",
136
+ "url": "http://arxiv.org/html/2407.02337v2/extracted/5801265/bert.png"
137
+ }
138
+ },
139
+ "validation": true,
140
+ "references": [
141
+ {
142
+ "1": {
143
+ "title": "az-multiple-choice-questions (revision eb9cd4f).",
144
+ "author": "aLLMA Lab. 2024a.",
145
+ "venue": null,
146
+ "url": "https://doi.org/10.57967/hf/2257"
147
+ }
148
+ },
149
+ {
150
+ "2": {
151
+ "title": "Aze-nsp (revision c59f4f8).",
152
+ "author": "aLLMA Lab. 2024b.",
153
+ "venue": null,
154
+ "url": "https://doi.org/10.57967/hf/2260"
155
+ }
156
+ },
157
+ {
158
+ "3": {
159
+ "title": "azwiki (revision 65d6610).",
160
+ "author": "aLLMA Lab. 2024c.",
161
+ "venue": null,
162
+ "url": "https://doi.org/10.57967/hf/2252"
163
+ }
164
+ },
165
+ {
166
+ "4": {
167
+ "title": "azwiki (revision 65d6610).",
168
+ "author": "aLLMA Lab. 2024d.",
169
+ "venue": null,
170
+ "url": "https://doi.org/10.57967/hf/2252"
171
+ }
172
+ },
173
+ {
174
+ "5": {
175
+ "title": "eqanun (revision 8f99a3a).",
176
+ "author": "aLLMA Lab. 2024e.",
177
+ "venue": null,
178
+ "url": "https://doi.org/10.57967/hf/2251"
179
+ }
180
+ },
181
+ {
182
+ "6": {
183
+ "title": "A neural probabilistic language model.",
184
+ "author": "Yoshua Bengio, R\u00e9jean Ducharme, and Pascal Vincent. 2000.",
185
+ "venue": "In Advances in Neural Information Processing Systems, volume 13. MIT Press.",
186
+ "url": "https://proceedings.neurips.cc/paper_files/paper/2000/file/728f206c2a01bf572b5940d7d9a8fa4c-Paper.pdf"
187
+ }
188
+ },
189
+ {
190
+ "7": {
191
+ "title": "Generalization in NLI: Ways (not) to go beyond simple heuristics.",
192
+ "author": "Prajjwal Bhargava, Aleksandr Drozd, and Anna Rogers. 2021.",
193
+ "venue": "In Proceedings of the Second Workshop on Insights from Negative Results in NLP, pages 125\u2013135, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.",
194
+ "url": "https://doi.org/10.18653/v1/2021.insights-1.18"
195
+ }
196
+ },
197
+ {
198
+ "8": {
199
+ "title": "Translated_english_wikipedia_on_azerbaijani (revision 077a718).",
200
+ "author": "BHOS AI R&D Center. 2024.",
201
+ "venue": null,
202
+ "url": "https://doi.org/10.57967/hf/2323"
203
+ }
204
+ },
205
+ {
206
+ "9": {
207
+ "title": "Unsupervised cross-lingual representation learning at scale.",
208
+ "author": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020.",
209
+ "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440\u20138451, Online. Association for Computational Linguistics.",
210
+ "url": "https://doi.org/10.18653/v1/2020.acl-main.747"
211
+ }
212
+ },
213
+ {
214
+ "10": {
215
+ "title": "BERT: Pre-training of deep bidirectional transformers for language understanding.",
216
+ "author": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019.",
217
+ "venue": "In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171\u20134186, Minneapolis, Minnesota. Association for Computational Linguistics.",
218
+ "url": "https://doi.org/10.18653/v1/N19-1423"
219
+ }
220
+ },
221
+ {
222
+ "11": {
223
+ "title": "Automatically constructing a corpus of sentential paraphrases.",
224
+ "author": "William B. Dolan and Chris Brockett. 2005.",
225
+ "venue": "In Proceedings of the Third International Workshop on Paraphrasing (IWP2005).",
226
+ "url": "https://aclanthology.org/I05-5002"
227
+ }
228
+ },
229
+ {
230
+ "12": {
231
+ "title": "glue-mrpc-azerbaijani (revision b60caf0).",
232
+ "author": "Eljan Mahammadli. 2024.",
233
+ "venue": null,
234
+ "url": "https://doi.org/10.57967/hf/2298"
235
+ }
236
+ },
237
+ {
238
+ "13": {
239
+ "title": "Unsupervised quality estimation without reference corpus for subtitle machine translation using word embeddings.",
240
+ "author": "Prabhakar Gupta, Shaktisingh Shekhawat, and Keshav Kumar. 2019.",
241
+ "venue": "In 2019 IEEE 13th International Conference on Semantic Computing (ICSC), pages 32\u201338.",
242
+ "url": "https://doi.org/10.1109/ICOSC.2019.8665529"
243
+ }
244
+ },
245
+ {
246
+ "14": {
247
+ "title": "azsci_topics (revision 26b9a83).",
248
+ "author": "Mammad Hajili. 2024a.",
249
+ "venue": null,
250
+ "url": "https://doi.org/10.57967/hf/2219"
251
+ }
252
+ },
253
+ {
254
+ "15": {
255
+ "title": "bert-base-cased-azerbaijani (revision 0cad0fa).",
256
+ "author": "Mammad Hajili. 2024b.",
257
+ "venue": null,
258
+ "url": "https://doi.org/10.57967/hf/2221"
259
+ }
260
+ },
261
+ {
262
+ "16": {
263
+ "title": "deberta-base-azerbaijani-v2 (revision dce9fc4).",
264
+ "author": "Mammad Hajili. 2024c.",
265
+ "venue": null,
266
+ "url": "https://doi.org/10.57967/hf/2846"
267
+ }
268
+ },
269
+ {
270
+ "17": {
271
+ "title": "roberta-base-azerbaijani (revision 40f7699).",
272
+ "author": "Mammad Hajili. 2024d.",
273
+ "venue": null,
274
+ "url": "https://doi.org/10.57967/hf/2220"
275
+ }
276
+ },
277
+ {
278
+ "18": {
279
+ "title": "squad-azerbaijani-reindex-translation (revision f48f8fe).",
280
+ "author": "Mammad Hajili. 2024e.",
281
+ "venue": null,
282
+ "url": "https://doi.org/10.57967/hf/2238"
283
+ }
284
+ },
285
+ {
286
+ "19": {
287
+ "title": "Debertav3: Improving deberta using electra-style pre-training with gradient-disentangled embedding sharing.",
288
+ "author": "Pengcheng He, Jianfeng Gao, and Weizhu Chen. 2023.",
289
+ "venue": null,
290
+ "url": "http://arxiv.org/abs/2111.09543"
291
+ }
292
+ },
293
+ {
294
+ "20": {
295
+ "title": "Effect of tokenization granularity for turkish large language models.",
296
+ "author": "Yi\u011fit Bekir Kaya and A. C\u00fcneyd Tantu\u011f. 2024.",
297
+ "venue": "Intelligent Systems with Applications, 21:200335.",
298
+ "url": "https://doi.org/https://doi.org/10.1016/j.iswa.2024.200335"
299
+ }
300
+ },
301
+ {
302
+ "21": {
303
+ "title": "Roberta: A robustly optimized bert pretraining approach.",
304
+ "author": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.",
305
+ "venue": null,
306
+ "url": "http://arxiv.org/abs/1907.11692"
307
+ }
308
+ },
309
+ {
310
+ "22": {
311
+ "title": "Ldquad (revision e082d87).",
312
+ "author": "LocalDoc. 2024.",
313
+ "venue": null,
314
+ "url": "https://doi.org/10.57967/hf/2269"
315
+ }
316
+ },
317
+ {
318
+ "23": {
319
+ "title": "Recurrent neural network based language model.",
320
+ "author": "Tom\u00e1\u0161 Mikolov, Martin Karafi\u00e1t, Luk\u00e1\u0161 Burget, Jan \u010cernock\u00fd, and Sanjeev Khudanpur. 2010.",
321
+ "venue": "In Proc. Interspeech 2010, pages 1045\u20131048.",
322
+ "url": "https://doi.org/10.21437/Interspeech.2010-343"
323
+ }
324
+ },
325
+ {
326
+ "24": {
327
+ "title": "Large language models: A survey.",
328
+ "author": "Shervin Minaee, Tomas Mikolov, Narjes Nikzad, Meysam Asgari Chenaghlu, Richard Socher, Xavier Amatriain, and Jianfeng Gao. 2024.",
329
+ "venue": "ArXiv, abs/2402.06196.",
330
+ "url": "https://api.semanticscholar.org/CorpusID:267617032"
331
+ }
332
+ },
333
+ {
334
+ "25": {
335
+ "title": "A monolingual approach to contextualized word embeddings for mid-resource languages.",
336
+ "author": "Pedro Javier Ortiz Su\u00e1rez, Laurent Romary, and Beno\u00eet Sagot. 2020.",
337
+ "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1703\u20131714, Online. Association for Computational Linguistics.",
338
+ "url": "https://doi.org/10.18653/v1/2020.acl-main.156"
339
+ }
340
+ },
341
+ {
342
+ "26": {
343
+ "title": "Cross-lingual name tagging and linking for 282 languages.",
344
+ "author": "Xiaoman Pan, Boliang Zhang, Jonathan May, Joel Nothman, Kevin Knight, and Heng Ji. 2017.",
345
+ "venue": "In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1946\u20131958, Vancouver, Canada. Association for Computational Linguistics.",
346
+ "url": "https://doi.org/10.18653/v1/P17-1178"
347
+ }
348
+ },
349
+ {
350
+ "27": {
351
+ "title": "Improving language understanding by generative pre-training.",
352
+ "author": "Alec Radford and Karthik Narasimhan. 2018.",
353
+ "venue": null,
354
+ "url": "https://api.semanticscholar.org/CorpusID:49313245"
355
+ }
356
+ },
357
+ {
358
+ "28": {
359
+ "title": "Language models are unsupervised multitask learners.",
360
+ "author": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019.",
361
+ "venue": null,
362
+ "url": "https://api.semanticscholar.org/CorpusID:160025533"
363
+ }
364
+ },
365
+ {
366
+ "29": {
367
+ "title": "Exploring the limits of transfer learning with a unified text-to-text transformer.",
368
+ "author": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020.",
369
+ "venue": "The Journal of Machine Learning Research, 21(1).",
370
+ "url": null
371
+ }
372
+ },
373
+ {
374
+ "30": {
375
+ "title": "Exploring the limits of transfer learning with a unified text-to-text transformer.",
376
+ "author": "Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019.",
377
+ "venue": "J. Mach. Learn. Res., 21:140:1\u2013140:67.",
378
+ "url": "https://api.semanticscholar.org/CorpusID:204838007"
379
+ }
380
+ },
381
+ {
382
+ "31": {
383
+ "title": "Receptive intelligibility of turkish to iranian-azerbaijani speakers.",
384
+ "author": "Mohammad Salehi and Aydin Neysani. 2017.",
385
+ "venue": "Cogent Education, 4(1):1326653.",
386
+ "url": "https://doi.org/10.1080/2331186X.2017.1326653"
387
+ }
388
+ },
389
+ {
390
+ "32": {
391
+ "title": "Continuous space language models for statistical machine translation.",
392
+ "author": "Holger Schwenk, Daniel Dechelotte, and Jean-Luc Gauvain. 2006.",
393
+ "venue": "In Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions, pages 723\u2013730, Sydney, Australia. Association for Computational Linguistics.",
394
+ "url": "https://aclanthology.org/P06-2093"
395
+ }
396
+ },
397
+ {
398
+ "33": {
399
+ "title": "mgpt: Few-shot learners go multilingual.",
400
+ "author": "Oleh Shliazhko, Alena Fenogenova, Maria Tikhonova, Vladislav Mikhailov, Anastasia Kozlova, and Tatiana Shavrina. 2023.",
401
+ "venue": null,
402
+ "url": "http://arxiv.org/abs/2204.07580"
403
+ }
404
+ },
405
+ {
406
+ "34": {
407
+ "title": "An overview of the tesseract ocr engine.",
408
+ "author": "R. Smith. 2007.",
409
+ "venue": "In Ninth International Conference on Document Analysis and Recognition (ICDAR 2007), volume 2, pages 629\u2013633.",
410
+ "url": "https://doi.org/10.1109/ICDAR.2007.4376991"
411
+ }
412
+ },
413
+ {
414
+ "35": {
415
+ "title": "Llama: Open and efficient foundation language models.",
416
+ "author": "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth\u00e9e Lacroix, Baptiste Rozi\u00e8re, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023.",
417
+ "venue": null,
418
+ "url": "http://arxiv.org/abs/2302.13971"
419
+ }
420
+ },
421
+ {
422
+ "36": {
423
+ "title": "Attention is all you need.",
424
+ "author": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017.",
425
+ "venue": "In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.",
426
+ "url": "https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf"
427
+ }
428
+ },
429
+ {
430
+ "37": {
431
+ "title": "Text data augmentation and pre-trained language model for enhancing text classification of low-resource languages.",
432
+ "author": "Atabay Ziyaden, Amir Yelenov, Fuad Hajiyev, Samir Rustamov, and Alexandr Pak. 2024.",
433
+ "venue": "PeerJ Computer Science, 10:e1974.",
434
+ "url": "https://doi.org/10.7717/peerj-cs.1974"
435
+ }
436
+ }
437
+ ],
438
+ "url": "http://arxiv.org/html/2407.02337v2"
439
+ }
20240819/2407.03219v2.json ADDED
@@ -0,0 +1,58 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "title": "Localization in Dynamic Planar Environments Using Few Distance Measurements",
3
+ "abstract": "We present a method for determining the unknown location of a sensor placed in a known 2D environment in the presence of unknown dynamic obstacles, using only few distance measurements.\nWe present guarantees on the quality of the localization, which are robust under mild assumptions on the density of the unknown/dynamic obstacles in the known environment.\nWe demonstrate the effectiveness of our method in simulated experiments for different environments and varying dynamic-obstacle density. Our open source software is available at https://github.com/TAU-CGL/vb-fdml2-public.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Robot localization is the task of determining the pose (or location) of a robot in some environment, and is an extensively researched problem in robotics [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###]. The localization can be carried out with various sensors and techniques, and in different environments. In this work we focus on the \u201ckidnapped robot\u201d variant [3 ###reference_b3###], which strives to find the robot\u2019s location in an environment, with no prior information on its location, as opposed to fine-tuning the localization assuming we know generally where the robot is.\nIn a previous work [4 ###reference_b4###], we presented a method for performing robot localization in a planar (known) environment with only a few distance-measurements.\nHowever, environments may also contain dynamic disturbances, which do not appear in the known map of the environment, such as moved furniture, people walking around, other robots, etc. In this work, we present a general method for few distance-measurement localization, which is robust to such dynamic changes, both in theory and experiments."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II Problem Statement",
15
+ "text": "The sensor is placed in the interior of a planar workspace .\nLet be closed planar regions which are the dynamic obstacles, with corresponding trajectories , such that at time , the free region becomes\nWe refer to as the static environment, and to as the current (at time ) dynamic environment.\nA distance measurement is a mapping\nsuch that is the length of the\nfirst intersection of a ray emanating from in direction with the boundary of the\nworkspace at time .\nFurthermore, denote by the distance measurement for the known workspace , without dynamic obstacles, defined in the same way as .\nWe are now ready to state the basic version of the problem that we study.\nThe problem:\nGiven a static workspace ,\ndynamic obstacles \nwith corresponding trajectories\n which are unknown,\na set of rigid-body transformations with a set of corresponding positive real values (which were taken at times , respectively),\nwe wish to find all the poses such that and for all .\nWe wish to find the configuration , which is the original pose of the robot.\nBefore each one of the measurements, the robot moves to another pose or stays put. The pose of the sensor when making the th distance measurement at time is .\nAs shown in [4 ###reference_b4###], localization in a completely known environment can be effectively approximated.\nIf, a-priori we know exactly how looks like, then that method [4 ###reference_b4###] would work as-is.\nObviously, in the presence of unknown obstacles, it could not be applied as-is.\nSee Figure 1 ###reference_### for an example.\n###figure_1### We focus here on workspaces that have a polygonal boundary, namely polygons or polygons with holes. Aiming for generality of the approach, we assume no prior knowledge on the topology, geometry, number or location of the dynamic obstacles. Of course we must make some assumptions on the dynamic obstacles in order for the problem to be solvable. Indeed, we make the following -dynamic sparsity assumption: We specify two natural numbers and assume that out of a batch of measurements, there would be at least which measure the distance from the boundary of the known workspace .\nRemark.\nA trajectory of a dynamic obstacle may be degenerate, in the sense that , i.e., for all the obstacle stays put.\nFor simplicity, we refer to such unknown obstacles as dynamic as well."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "III The method",
21
+ "text": "For any , denote . Assume that the robot has measured distances for , and assume that there is subset of size of measurements, , which sample the static environment.\nLet be the preimage of the distance measurement in ,\nand let be the corresponding voxel clouds approximations (of resolution , for a given resolution parameter ) of those preimages for .\nWe get from the method described in [4 ###reference_b4###].\nWe also define\nagreement of a measurement as follows:\nLet . For a measurement , with , we say that a pose -agrees (with the static environment) on the measurement if\nThen we compute (where is the collection of all subsets of ):\nThe voxel cloud approximation is conservative111Up to some small set of voxels , which we can treat specifically.. That is, if is the ground truth, then . Furthermore, the distance between and the nearest predicted localization in is .\nWe then extract a collection \nof poses which are the centers of mass of connected components of voxels in .\nHowever, this set might contain many irrelevant poses, which are far from representing the correct localization. Hence we only leave poses that fulfill the following conditions, for prescribed parameters :\nThe pose is unique: , .\nThe pose -agrees with for every ."
22
+ },
23
+ {
24
+ "section_id": "4",
25
+ "parent_section_id": null,
26
+ "section_name": "IV Experiments and Results",
27
+ "text": "Our code is written in C++ with Python Bindings. We utilize\nOpenMP222https://www.openmp.org/ ###reference_www.openmp.org/###\nfor parallel computation. The code is run on an Ubuntu machine with Intel Core i7-12700 CPU.\nWe demonstrate our performance on four different test scenes: A square room, a polygon based on a LiDAR scan of our lab, a floor-plan,\nand randomly generated polygons.\nOur experiments are carried out as follows: We randomly place the sensor in each of the aforementioned rooms, with randomly placed dynamic obstacles which stay in place (see remark at the end of Section II ###reference_###, and Figure 2 ###reference_### for an example) and perform distance measurements, with increments in rotation between every pair of consecutive measurements. We apply our method assuming (10,6)-dynamic sparsity, which does not always occur in our experiments. We repeat each experiment times for different grid resolutions, and for and dynamic obstacles. We also ran our base method on those scenarios. In Table I ###reference_### we indeed see that our method significantly improves the success rate.\n###table_1### ###figure_2###"
28
+ },
29
+ {
30
+ "section_id": "5",
31
+ "parent_section_id": null,
32
+ "section_name": "Conclusion and Future Work",
33
+ "text": "In this work we showed that the few distance-measurement localization technique can be adjusted for uncertain obstacles in a known environment, and have demonstrated a significant improvement in performance on such scenarios.\nMany details of the analysis and experiments have been omitted here and will be supplied in a forthcoming full version.\nHowever, we are yet to determine with high confidence which of the measurements are those that measure the static environment. Furthermore, we do not guarantee the dynamic sparsity of a given scenario (even if we have full information on the dynamic obstacles).\nThe next goals are: (i) devise analysis tools for determining the dynamic sparsity of a given setting, and (ii) estimate the actual dynamic sparsity value in the absence of knowledge about the dynamic setting."
34
+ }
35
+ ],
36
+ "appendix": [],
37
+ "tables": {
38
+ "1": {
39
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T1.1\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.2\">\n<td class=\"ltx_td ltx_border_l ltx_border_rr ltx_border_t\" id=\"S4.T1.1.2.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S4.T1.1.2.2\"><span class=\"ltx_text ltx_font_typewriter\" id=\"S4.T1.1.2.2.1\">lab-lidar</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S4.T1.1.2.3\"><span class=\"ltx_text ltx_font_typewriter\" id=\"S4.T1.1.2.3.1\">floor-plan</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S4.T1.1.2.4\"><span class=\"ltx_text ltx_font_typewriter\" id=\"S4.T1.1.2.4.1\">random</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_rr ltx_border_t\" id=\"S4.T1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.2\">10</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.3\">30</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.4\">10</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.5\">30</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.6\">10</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.7\">30</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_rr ltx_border_tt\" id=\"S4.T1.1.3.1\">FDML\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.03219v2#bib.bib4\" title=\"\">4</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.1.3.2\">32.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.1.3.3\">18.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.1.3.4\">14.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.1.3.5\">13.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.1.3.6\">34.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.1.3.7\">29.1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_rr ltx_border_t\" id=\"S4.T1.1.4.1\">\n<span class=\"ltx_text\" id=\"S4.T1.1.4.1.1\"></span> <span class=\"ltx_text\" id=\"S4.T1.1.4.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.1.4.1.2.1\">\n<span class=\"ltx_tr\" id=\"S4.T1.1.4.1.2.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T1.1.4.1.2.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.4.1.2.1.1.1.1\">Current</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T1.1.4.1.2.1.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T1.1.4.1.2.1.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.4.1.2.1.2.1.1\">method</span></span></span>\n</span></span><span class=\"ltx_text\" id=\"S4.T1.1.4.1.3\"></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.1.4.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.4.2.1\">94.3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" 
id=\"S4.T1.1.4.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.4.3.1\">92.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.1.4.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.4.4.1\">94.6</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.1.4.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.4.5.1\">88.6</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.1.4.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.4.6.1\">99.1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.1.4.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.4.7.1\">93.1</span></td>\n</tr>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>Average success rate (%) comparison for each scene.</figcaption>\n</figure>",
40
+ "capture": "TABLE I: Average success rate (%) comparison for each scene."
41
+ }
42
+ },
43
+ "image_paths": {
44
+ "1": {
45
+ "figure_path": "2407.03219v2_figure_1.png",
46
+ "caption": "Figure 1: \nExample for why sampling a dynamic obstacle might lose the ground truth location. Our workspace \ud835\udcb2\ud835\udcb2\\mathcal{W}caligraphic_W is in gray. We have one dynamic obstacle \ud835\udc9f1subscript\ud835\udc9f1\\mathcal{D}_{1}caligraphic_D start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT with \u03c61=1\u2208S\u2062E\u2062(2)subscript\ud835\udf1111\ud835\udc46\ud835\udc382\\varphi_{1}=1\\in SE(2)italic_\u03c6 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 1 \u2208 italic_S italic_E ( 2 ) (see remark in Section II).\nWe take three distance measurements disubscript\ud835\udc51\ud835\udc56d_{i}italic_d start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT with a rotation offset of \u03c0/2\ud835\udf0b2\\pi/2italic_\u03c0 / 2 radians clockwise. The robot is at q\u2217\u2208S\u2062E\u2062(2)subscript\ud835\udc5e\ud835\udc46\ud835\udc382q_{*}\\in SE(2)italic_q start_POSTSUBSCRIPT \u2217 end_POSTSUBSCRIPT \u2208 italic_S italic_E ( 2 ).\nLeft: The free region at time t1subscript\ud835\udc611t_{1}italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT, \ud835\udcb2t1subscript\ud835\udcb2subscript\ud835\udc611\\mathcal{W}_{t_{1}}caligraphic_W start_POSTSUBSCRIPT italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT end_POSTSUBSCRIPT. Ignoring the existence of dynamic obstacle yields the red pose q\u2032superscript\ud835\udc5e\u2032q^{\\prime}italic_q start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT which is false.\nRight: When we ignore the dynamic obstacles, and look for locations for which we would measure disubscript\ud835\udc51\ud835\udc56d_{i}italic_d start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT, we get only q\u2032superscript\ud835\udc5e\u2032q^{\\prime}italic_q start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT and lose q\u2217subscript\ud835\udc5eq_{*}italic_q start_POSTSUBSCRIPT \u2217 end_POSTSUBSCRIPT.",
47
+ "url": "http://arxiv.org/html/2407.03219v2/x1.png"
48
+ },
49
+ "2": {
50
+ "figure_path": "2407.03219v2_figure_2.png",
51
+ "caption": "Figure 2: Example of the simulated experiment on the lab-lidar polygon, with 20202020 dynamic obstacles. Ground truth location is in blue, and we cast 10101010 rays for distance measurements, with 4444 of them sampling the dynamic obstacles (in red), and the rest sampling the static workspace \ud835\udcb2\ud835\udcb2\\mathcal{W}caligraphic_W (in magenta).",
52
+ "url": "http://arxiv.org/html/2407.03219v2/extracted/5800183/figures/measurements_modified.png"
53
+ }
54
+ },
55
+ "validation": true,
56
+ "references": [],
57
+ "url": "http://arxiv.org/html/2407.03219v2"
58
+ }
20240819/2407.05976v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240819/2407.09271v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240819/2407.10907v2.json ADDED
@@ -0,0 +1,406 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "title": "Parareal algorithms for stochastic Maxwell equations with the damping term driven by additive noise",
3
+ "abstract": "In this paper, we investigate the strong convergence analysis of parareal algorithms for stochastic Maxwell equations with the damping term driven by additive noise. The proposed parareal algorithms proceed as two-level temporal parallelizable integrators with the stochastic exponential integrator as the coarse -propagator and both the exact solution integrator and the stochastic exponential integrator as the fine -propagator. It is proved that the convergence order of the proposed algorithms linearly depends on the iteration number . Numerical experiments are performed to illustrate the convergence of the parareal algorithms for different choices of the iteration number and the damping coefficient .",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "When the electric and magnetic fluxes are perturbed by noise, the uncertainty and stochasticity can have a subtle but profound influence on the evolution of complex dynamical systems [25 ###reference_b25###]. In order to model the thermal motion of electrically charged microparticles, we consider the stochastic Maxwell equations with damping term driven by additive noise as follows\nwhere is an open, bounded and Lipschitz domain with boundary , of which is the unit outward. Here is the electric permittivity and is the magnetic permeability. The damping terms and are usually added to simulate the attenuation of electromagnetic waves in the medium, which can be caused by absorption, scattering or other non-ideal factors in the medium. The function and describe electric currents (or and describe magnetic currents). In particular, and do not depend on the electromagnetic fields and . The authors in [22 ###reference_b22###] proved the mild, strong and classical well-posedness for the Cauchy problem of stochastic Maxwell equations. Meanwhile, the authors in [20 ###reference_b20###] studied the approximate controllability of the stochastic Maxwell equations via an abstract approach and a constructive approach using a generalization of the Hilbert uniqueness method. Subsequently the work [24 ###reference_b24###] combined the study of well-posedness, homogenization and controllability of Maxwell equations with the description of the constitutive relations of complex media and dealt with deterministic and stochastic issues in both the frequency and time domains.\nSince stochastic Maxwell equations are a kind of stochastic Hamiltonian PDEs, constructing stochastic multi-symplectic numerical methods for problem (5 ###reference_###) has been paid more and more attention. The stochastic multi-symplectic numerical method for stochastic Maxwell equations driven by additive noise was proposed in [17 ###reference_b17###] based on the stochastic variational principle. Subsequently the authors in [10 ###reference_b10###] used a straightforward approach to avoid the introduction of additional variables and obtained three effecitve stochastic multi-symplectic numerical methods. Then the authors in [18 ###reference_b18###] used the wavelet collocation method in space and the stochastic symplectic method in time to construct the stochastic multi-symplectic energy-conserving method for three-dimensional stochastic Maxwell equations driven by multiplicative noise. The work in [31 ###reference_b31###] made a review on these stochastic multi-symplectic methods and summarised numerical methods for various stochastic Maxwell equations driven by additive and multiplicative noise. The general case of stochastic Hamiltonian PDEs was considered in [32 ###reference_b32###], where the multi-symplecticity of stochastic RK methods was investigated. Recently, the authors in [26 ###reference_b26###] and [27 ###reference_b27###] constructed\nmulti-symplectic DG methods for stochastic Maxwell equations driven by additive noise and multiplicative noise.\nFurthermore, the work in [16 ###reference_b16###] employed the local radial basis function collocation method and the work in [21 ###reference_b21###] utilized the global radial basis function collocation method for stochastic Maxwell equations driven by multiplicative noise to preserve multi-symplectic structure. 
Additionally, [4 ###reference_b4###] developed a symplectic discontinuous Galerkin full discretisation method for stochastic Maxwell equations driven by additive noise. Other efficient numerical methods for stochastic Maxwell equations also are investigated, see [30 ###reference_b30###] for the finite element method, [1 ###reference_b1###] for the numerical method based on the Wiener chaos expansion, [9 ###reference_b9###] for ergodic numerical method, [7 ###reference_b7###] for operator splitting method and [34 ###reference_b34###] for CN-FDTD and Yee-FDTD methods.\nMeanwhile, there are a lot of pregnant works focused mainly on strong convergence analysis of the numerical methods for stochastic Maxwell equations. In the temporal discretization methods, the semi-implicit Euler method was proposed in [5 ###reference_b5###] to proved mean-square convergence order is for stochastic Maxwell equations driven by multiplicative noise. Subsequently the work in [6 ###reference_b6###] studied the stochastic Runge-Kutta method with mean-square convergence order 1 for stochastic Maxwell equations driven by additive noise. In addition, explicit exponential integrator was proposed in [11 ###reference_b11###] for stochastic Maxwell equations with mean-square convergence order for multiplicative noise and convergence order for additive noise. The work [4 ###reference_b4###] developed discontinuous Galerkin full discretization method for stochastic Maxwell equations driven by additive noise with mean-square convergence order in time and in space, where represents regularity. Another related work by authors of [9 ###reference_b9###] showed the ergodic discontinuous Galerkin full discretization for\nstochastic Maxwell equations with mean-square convergence order both in\nthe temporal and spatial directions. In recent works\n[26 ###reference_b26###] and [27 ###reference_b27###], high order discontinuous Galerkin methods were designed for the stochastic Maxwell equations driven by additive noise and multiplicative noise with mean-square convergence order both . Besides, the authors of [7 ###reference_b7###] presented the operator splitting method for stochastic Maxwell equations driven by additive noise with mean-square convergence order .\nIn order to increase the convergence order and improve the computational efficiency on stochastic differential equations, the parareal algorithm has received attentions. This algorithm we focus on is a two-stage time-parallel integrator originally proposed in [23 ###reference_b23###] and further works studied on theoretical analysis and applications for differential model problems, see, for instance, [2 ###reference_b2###, 28 ###reference_b28###, 13 ###reference_b13###, 15 ###reference_b15###, 14 ###reference_b14###, 12 ###reference_b12###]. In terms of stochastic model, the work in\n[33 ###reference_b33###] investigated the parareal algorithm combining the projection method to SDEs with conserved quantities. Then the parareal algorithm for the stochastic Schr\u00f6dinger equations with weak damping term driven by additive noise was studied in [19 ###reference_b19###] with fine propagator being the exact solver and coarse propagator being the exponential -scheme. And the proposed algorithm increases the convergence order to in the linear case for . The parareal algorithm for semilinear parabolic SPDEs behaved differently in [3 ###reference_b3###] depending on the choice of the coarse integrator. 
When the linear implicit Euler scheme was selected, the convergence order was limited by the regularity of the noise with the increase of iteration number, while for the stochastic exponential scheme, the convergence order always increased.\nTo the best of our knowledge, there has been no reference considering the convergence analysis of the parareal algorithm for stochastic Maxwell equations till now.\nInspired by the pioneering works, we establish strong convergence analysis of the parareal algorithms for stochastic Maxwell equations with damping term driven by additive noise. Combining the benefits of the stochastic exponential integrator, we use this integrator as the coarse -propagator and for the fine -propagator, two choices are considered: the exact solution integrator as well as the stochastic exponential integrator. Taking advantage of the contraction semigroup generated by the Maxwell operator and the damping term, we derive the uniform mean-square convergence analysis of the proposed parareal algorithms with convergence order . The key point of convergence analysis is that the\nerror between the solution computed by the parareal algorithm and the reference solution generated by the fine propagator for\nthe stochastic exponential integrator still maintains the consistent convergence results. Different from the exact solution integrator as the fine -propagator, we need to make use of the Lipschitz continuity of the residual operator rather than the integrability of the exact solution directly in this case, which requires us to make assumptions about the directional derivatives of the drift coefficient. We find that the selection of parameters have an impact on the convergence analysis results of the parareal algorithms. An appropriate damping coefficient ensures stability and accelerates the convergence results and the scale of noise induces a perturbation of the solution numerically.\nThe article is organized as follows. In the forthcoming section, we collect some preliminaries about stochastic Maxwell equations. In section 3, we devote to introducing the parareal algorithms based on the exponential scheme as the coarse -propagator and both the exact solution integrator and the stochastic exponential integrator as the fine -propagator. In section 4, two convergence results in the sense of mean-square are analyzed. In section 5, numerical experiments are dedicated to illustrate the convergence analysis with the influences on the iteration number and the damping coefficient and the effect of noise with different scale on the numerical solution.\nTo lighten notations, throughout this paper, C stands for a constant which might be dependent of but is independent of and may vary from line to line."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Preliminaries",
15
+ "text": "The basic Hilbert space is with inner product\nfor all and the norm\nIn addition, assume that and are bounded and uniformly positive definite\nfunctions: , .\nThe -Wiener process is defined on a given probability space and can be expanded in a Fourier series\nwhere is a sequence of independent standard real-valued Wiener processes and is a complete orthonormal system of consisting of eigenfunctions of a symmetric, nonnegative and finite trace operator , i.e., and\n with corresponding eigenvalue .\nThe Maxwell operator is defined by\nwith domain\nBased on the closedness of the operator , we have the following lemma.\n[8 ###reference_b8###]\nThe Maxwell operator defined in (7 ###reference_###) with domain is closed and skew-adjoint, and generates a -semigroup on for . Moreover, the frequently used property for Maxwell operator M is : .\nLet the drift term be a Nemytskij operator associated with defined by\nThe diffusion term is the Nemytskij operator defined by\nWe consider the abstract form of (5 ###reference_###) in the infinite-dimensional space\nwhere the solution is a stochastic process with values in .\nLet be the semigroup generated by operator . One can show that the damping stochastic Maxwell equations (10 ###reference_###) possess the following lemma.\nFor the semigroup on , we obtain\nfor all .\nProof. Based on the semigroup generated by the operator , we deduce\nConsider the deterministic system [8 ###reference_b8###]\nThus\nwhich leads to\nthat is,\nCombining the formula (11 ###reference_###), we can conclude that the proof.\nTo ensure the well-posedness of mild solution of the stochastic Maxwell equations (10 ###reference_###), we need the following assumptions.\n(Initial value).\nThe initial value satisfies\n(Drift nonlinearity).\nThe drift operator satisfies\nfor all . Moreover, the nonlinear operator has bounded derivatives, i.e.,\nfor .\n[29 ###reference_b29###](Covariance operator).\nTo guarantee the existence of a mild solution, we further assume the covariance operator of satisfies\nwhere denotes the Hilbert\u2013Schmidt norm for operators from to ,\u2009 is the -th fractional powers of and is a parameter\ncharacterizing the regularity of noise. In the article,\nwe are mostly interested in for trace class operator .\n[8 ###reference_b8###]\nLet 1 ###reference_umption1###, 2 ###reference_umption2### and 3 ###reference_umption3### hold, there exists a unique mild solution to (10 ###reference_###), which satisfies\nfor each , where is a -semigroup generated by .\nMoreover, there exists a constant such that\nThe following lemma is the stability of analytical solution, which will be used in the proof of the Theorem 1 ###reference_orem1###.\n[29 ###reference_b29###]\nIf and are two solutions of (10 ###reference_###) with different initial values and , there exists a constant such that"
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Parareal algorithm for stochastic Maxwell equations",
21
+ "text": ""
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "Parareal algorithm",
27
+ "text": "To perform the parareal algorithm, the considered interval is first divided into time intervals with a uniform coarse step-size for any . Each subinterval is further divided into small time intervals with a uniform fine step-size for all and . The\nparareal algorithm can be described as following\nInitialization. Use the coarse propagator with the coarse step-size to compute initial value by\nLet denote the number of parareal iterations: for all .\nTime-parallel computation. Use the fine propagator and time step-size to compute on each subinterval independently\nPrediction and correction. Note that we get two numerical solutions and \nat time through the initialization and parallelization, the sequential\nprediction and correction is defined as\nNoting that equation (12 ###reference_###) is of the following form , then parareal algorithm can be written as\nThe coarse integrator is required to be easy to calculate and enjoys a less computational cost, but need not to\nbe of high accuracy. On the other hand, the fine integrator defined on each subinterval is assumed to be more accurate but more costly than . Note that and can be the same numerical method or different numerical methods. In the article, the exponential integrator is chosen as the coarse integrator and both the exact integrator and the exponential integrator are chosen as the fine integrator ."
28
+ },
29
+ {
30
+ "section_id": "3.2",
31
+ "parent_section_id": "3",
32
+ "section_name": "Stochastic exponential scheme",
33
+ "text": "Consider the mild solution of the stochastic Maxwell equations (10 ###reference_###) on the time interval\nwhere -semigroup .\nBy approximating the integrals in above mild solution (15 ###reference_###) at the left endpoints, we can obtain the stochastic exponential scheme\nwhere ."
34
+ },
35
+ {
36
+ "section_id": "3.3",
37
+ "parent_section_id": "3",
38
+ "section_name": "Coarse and fine propagators",
39
+ "text": "Coarse propagator.\nThe stochastic exponential scheme is chosen as\nthe coarse propagator with\ntime step-size by (16 ###reference_###)\nwhere and .\nFine propagator.\nThe exact solution as the fine propagator with time step-size by (15 ###reference_###)\nwhere .\nBesides, the other choice is the stochastic exponential scheme is chosen as the fine propagator with time step-size by (16 ###reference_###)\nwhere and ."
40
+ },
41
+ {
42
+ "section_id": "4",
43
+ "parent_section_id": null,
44
+ "section_name": "Main results",
45
+ "text": "In this section, two convergence analysis results will be given, i.e., we investigate the parareal algorithms obtained by choosing the stochastic exponential integrator as the coarse integrator and both the exact integrator and the stochastic exponential integrator as the fine integrator."
46
+ },
47
+ {
48
+ "section_id": "4.1",
49
+ "parent_section_id": "4",
50
+ "section_name": "The exact integrator as the fine integrator",
51
+ "text": "Let 1 ###reference_umption1###, 2 ###reference_umption2### and 3 ###reference_umption3### hold, we apply the stochastic exponential integrator for coarse propagator and exact solution integrator for fine propagator . Then we have the following convergence estimate for the fixed iteration number\nwith a positive constant independent on , where the parareal solution is defined in (14 ###reference_###) and the exact solution is defined in (15 ###reference_###).\nTo simplify the exposition, let us introduce the following notation.\nThe residual operator\nfor all .\nBefore the error analysis, the following two useful lemmas are introduced.\n[15 ###reference_b15###]\nLet be a strict lower triangular Toeplitz matrix and its elements are defined as\nThe infinity norm of the power of is bounded as follows\n[15 ###reference_b15###]\nLet , a double indexed sequence {} satisties and\nfor and , then vector satisfies\nProof of Theorem 1.\nFor all and , denote the error . Since the exact solution is chosen as the fine propagator , it can be written as\nSubtracting the (4.1 ###reference_3###) from (14 ###reference_###) and using the notation of the residual operator (21 ###reference_###), we obtain\nFirstly, we estimate . Applying the stochastic exponential integrator (17 ###reference_###) for the coarse propagator , it holds that\nSubtracting the above two formulas leads to\nwhich by the contraction property of semigroup and the global Lipschitz property of .\nNow it remains to estimate . Applying exact solution integrator (18 ###reference_###) for fine progagator leads to\nwhere and denote the exact solution of system (10 ###reference_###) at time with the initial value and the initial time .\nSubstituting the above equations and equations (23 ###reference_###) and (24 ###reference_###) into the residual operator (21 ###reference_###), we obtain\nTo get the estimation of , by Lipschitz continuity property for and Lemma 4 ###reference_ma4###, we derive\nAs for , using the contraction property of semigroup and Lipschitz continuity property for yields\nFrom (4.1 ###reference_5###) and (29 ###reference_###), we know that\nCombining (4.1 ###reference_6###) and (4.1 ###reference_7###) enables us to derive\nLet . It follows from Lemma 6 ###reference_ma6### that\nTaking infinity norm and using Lemma 5 ###reference_ma5### imply\nThis completes the proof."
52
+ },
53
+ {
54
+ "section_id": "4.2",
55
+ "parent_section_id": "4",
56
+ "section_name": "The stochastic exponential integrator as the fine propagator",
57
+ "text": "In this section, the error we considered is the solution by the proposed algorithm and the reference solution generated by the fine propagator . To begin with, we define the reference solution as follows.\nFor all , the reference solution is defined by the fine propagator on each subinterval\nPrecisely,\nLet 1 ###reference_umption1###, 2 ###reference_umption2### and 3 ###reference_umption3### hold, we apply the stochastic exponential integrator for coarse propagator and the stochastic exponential integrator for fine propagator . Then we have the following convergence estimate for the fixed iteration number\nwith a positive constant independent on , where the parareal solution is defined in (14 ###reference_###) and the reference solution is defined in (31 ###reference_###).\nProof of Theorem 2.\nFor all and , let the error be defined by\n.\nObserve that the reference solution (31 ###reference_###) can be rewritten\nCombining the parareal algorithm form (14 ###reference_###) and the reference solution (34 ###reference_###) and using the notation of the residual operator (21 ###reference_###), the error can be written as\nNow we estimate . Applying the stochastic exponential integrator (17 ###reference_###) for the coarse propagator , we obtain\nSubtracting the above formula (35 ###reference_###) from (24 ###reference_###), we have\nArmed with contraction property of semigroup and Lipschitz continuity property of yield\nAs for , regarding the estimation of the residual operator, we need to resort to its directional derivatives. Due to formula (21 ###reference_###), the derivatives can be given by\nOne the one hand, since the stochastic exponential scheme is chosen as the fine propagator (19 ###reference_###) with time step-size , we obtain\nDenote for . Then taking the direction derivatives for above equation yields\nBased on the the form of semigroup , we have the following recursion formula\nApplying the discrete Gronwall lemma yields the following inequality\nMoreover, the derivative of can be writen by , where , that is, one gets\nOn the other hand, since the stochastic exponential scheme is chosen as the coarse propagator , taking the direction derivative for of formula (17 ###reference_###) leads to\nSubstituting formula (39 ###reference_###) and (40 ###reference_###) into formula (37 ###reference_###), we obtain\nUtilizing the bounded derivatives condition of , we get\nUsing the contraction property of semigroup, we have\nSubstituting the Gronwall inequality (38 ###reference_###) into the above inequality leads to\nIn conclusion, it holds that\nSubstituting and into above formula derives lipschitz continuity property of the residual operator\nCombining (4.2 ###reference_9###) and (42 ###reference_###), we have\nAccording to Lemma 5 ###reference_ma5### and Lemma 6 ###reference_ma6###, it yields to\nwhich leads to the final result\n\nWe can summarise Lipschitz continuity property of the residual operator : there exists such that for and , we have\nWhen we fix the iteration number , the convergence rate will be .\nThe error between the reference solution by the fine propagator defined in (31 ###reference_###) and the exact solution defined in (15 ###reference_###) do not affect the convergence rate of the parareal algorithm, due to\nTherefore, it is sufficient to study the convergence order of the error between and .\n[11 ###reference_b11###]\n(Uniform boundedness of reference solution ). There exists a constant such that\n(Uniform boundedness of parareal algorithm solution ). 
There exists a constant such that"
58
+ },
59
+ {
60
+ "section_id": "5",
61
+ "parent_section_id": null,
62
+ "section_name": "Numerical experiments",
63
+ "text": "This section is devoted to investigating the convergence result with several parameters and the effect of the scale of noise on numerical solutions. Since the parareal algorithm in principle is a temporal algorithm, and the spatial discretization is not our focus in this article, we perform finite difference method to discretize spatially.\nThe mean-square error is used as"
64
+ },
65
+ {
66
+ "section_id": "5.1",
67
+ "parent_section_id": "5",
68
+ "section_name": "Convergence",
69
+ "text": ""
70
+ },
71
+ {
72
+ "section_id": "5.1.1",
73
+ "parent_section_id": "5.1",
74
+ "section_name": "5.1.1 One-dimensional transverse magnetic wave",
75
+ "text": "We first consider the stochastic Maxwell equations with 1-D transverse magnetic wave driven by the standard Brownian motion\nby providing initial conditions\nfor , and .\n\n###figure_1### The parameters are normalized to , and . We apply the parareal algorithm to solve the numerical solution with the fine step-size and the coarse step-size . The spatial mesh grid-size . Figure 1 ###reference_### demonstrates the evolution of the mean-square error with the iteration number . From the Figure.1 ###reference_###, we observe that the damping term speeds up the convergence of the numerical solutions and the error approaches after at least nearly, which shows that the proposed algorithm converges.\nFrom a numerical analysis point of view, the inclusion of damping coefficients usually accelerates the convergence of numerical solutions by suppressing oscillations and instability, resulting in a faster steady state or desired precision. However, too small damping may not be enough to accelerate the convergence rate and may even introduce instability.\n\n###figure_2### Subsequently, we choose the damping coefficient to calculate the convergence order of the proposed parareal algorithm. We compute the numerical solution with the fine step-size and the coarse step-size . Figure 2 ###reference_### reports the convergence order of the parareal algorithm with the iteration number . It is clearly shown that the mean-square convergence order always increases as the iteration number increases."
76
+ },
77
+ {
78
+ "section_id": "5.1.2",
79
+ "parent_section_id": "5.1",
80
+ "section_name": "5.1.2 Two-dimensional transverse magnetic waves",
81
+ "text": "We consider the stochastic Maxwell equations with 2-D transverse magnetic polarization driven by trace class noise\nby providing initial conditions\nfor and . Following the formula (6 ###reference_###), we choose and , for some and . In this case, . We construct the Wiener process as follows [8 ###reference_b8###]\nwith and .\n\n###figure_3### Firstly, the parameters are normalized to , and .\nWe take the fine step-size , the coarse step-size and the spatial mesh grid-size . Figure 3 ###reference_### demonstrates the evolution of the mean-square error with iteration number . From the Figure 3 ###reference_###, we observe that the error approaches after nearly, which shows that the proposed algorithm converges.\nIn numerical simulation, the introduction of damping terms and the selection of parameters need to be careful to ensure the accuracy and physical authenticity of simulation results. Excessive damping may lead to excessive attenuation, thus affecting the accuracy of simulation results.\nSecondly, in order to investigate the relationship between the convergence order and the iteration number, we choose the damping coefficient to calculate the convergence order of the proposed algorithm as taking the different iteration number . We compute the numerical solution with the fine step-size and the coarse step-size . Figure 4 ###reference_### reports the convergence order of the proposed algoritnumerical errorhm with the iteration number . Indeed, the numerical experiments reveal that the convergence order of the proposed algorithm increases as the iteration number increases.\n\n###figure_4###"
82
+ },
83
+ {
84
+ "section_id": "5.2",
85
+ "parent_section_id": "5",
86
+ "section_name": "Impact of the scale of noise",
87
+ "text": "We consider the stochastic Maxwell equations with 2-D transverse magnetic polarization (46 ###reference_###). The parameters are normalized to , and we take the fine step-size , the coarse step-size and the spatial mesh grid-size . In order to show\nthe impact of the scale of noise on the numerical solution, we perform numerical simulations with four scales of noise and choose the damping coefficient .\n\n###figure_5### Figure 5 ###reference_### shows the 10 Contour plots of the numerical solution with different scales of noise and Figure 6 ###reference_### shows the electric field wave forms with different scales of noise. Comparing with deterministic case (a) of Figure 5 ###reference_### and Fig.6 ###reference_###, we can find that the oscillator of the wave forms (b-d) of Figure 5 ###reference_### and Figure 6 ###reference_### becomes more and more violent as the scale of the noise increases, i.e., from (a-d) of Figure 5 ###reference_### and Figure 6 ###reference_### it can be observed that the perturbation of the numerical solutions becomes more and more apparent as the scale of the noise increases.\n\n###figure_6###"
88
+ },
89
+ {
90
+ "section_id": "6",
91
+ "parent_section_id": null,
92
+ "section_name": "Conclusion",
93
+ "text": "In this paper, we study the strong convergence analysis of the parareal algorithms for stochastic Maxwell equations with damping term driven by additive noise. Firstly the stochastic exponential scheme is chosen as the coarse propagator and the exact solution scheme is chosen as the fine propagator. And we propose our numerical schemes and establish the mean-square convergence estimate. Secondly, both the coarse propagator and the fine propagator choose the stochastic exponential scheme. Meanwhile, the error we considered in this section is the distance between the solution computed by the parareal algorithm and the reference solution generated by the fine propagator. It is shown that the convergence order of the proposed algorithms is linearly related to the iteration number .\nAt last, One- and two-dimensional numerical examples are performed to demonstrate convergence analysis with respect to damping coefficient and noise scale. One key idea from the proofs of two convergence results is that the residual operator in Theorem 2 is related to Lipschitz continuity properties, whereas Theorem 1 concerns the integrability of the exact solution. The future works will include the study for the parareal algorithms for the stochastic Maxwell equations driven by multiplicative noise and other choices of integrators as the coarse and fine propagators."
94
+ }
95
+ ],
96
+ "appendix": [],
97
+ "tables": {},
98
+ "image_paths": {
99
+ "1": {
100
+ "figure_path": "2407.10907v2_figure_1.png",
101
+ "caption": "Figure 1: Convergence of 1D case vs. interation number k\ud835\udc58kitalic_k for different values of \u03c3=0,21,23,25\ud835\udf0e0superscript21superscript23superscript25\\sigma=0,2^{1},2^{3},2^{5}italic_\u03c3 = 0 , 2 start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT , 2 start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT , 2 start_POSTSUPERSCRIPT 5 end_POSTSUPERSCRIPT.",
102
+ "url": "http://arxiv.org/html/2407.10907v2/x1.png"
103
+ },
104
+ "2": {
105
+ "figure_path": "2407.10907v2_figure_2.png",
106
+ "caption": "Figure 2: Mean-square order of 1D case with respect to \u0394\u2062T=2\u2212i,i=5,6,7,8.formulae-sequence\u0394\ud835\udc47superscript2\ud835\udc56\ud835\udc565678\\Delta T=2^{-i},i=5,6,7,8.roman_\u0394 italic_T = 2 start_POSTSUPERSCRIPT - italic_i end_POSTSUPERSCRIPT , italic_i = 5 , 6 , 7 , 8 .",
107
+ "url": "http://arxiv.org/html/2407.10907v2/x2.png"
108
+ },
109
+ "3": {
110
+ "figure_path": "2407.10907v2_figure_3.png",
111
+ "caption": "Figure 3: Convergence of 2D case with interation number k\ud835\udc58kitalic_k for different values of \u03c3=0,21,23,25\ud835\udf0e0superscript21superscript23superscript25\\sigma=0,2^{1},2^{3},2^{5}italic_\u03c3 = 0 , 2 start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT , 2 start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT , 2 start_POSTSUPERSCRIPT 5 end_POSTSUPERSCRIPT.",
112
+ "url": "http://arxiv.org/html/2407.10907v2/x3.png"
113
+ },
114
+ "4": {
115
+ "figure_path": "2407.10907v2_figure_4.png",
116
+ "caption": "Figure 4: Mean-square order of 2D case with respect to \u0394\u2062T=2\u2212i,i=3,4,5,6.formulae-sequence\u0394\ud835\udc47superscript2\ud835\udc56\ud835\udc563456\\Delta T=2^{-i},i=3,4,5,6.roman_\u0394 italic_T = 2 start_POSTSUPERSCRIPT - italic_i end_POSTSUPERSCRIPT , italic_i = 3 , 4 , 5 , 6 .",
117
+ "url": "http://arxiv.org/html/2407.10907v2/x4.png"
118
+ },
119
+ "5": {
120
+ "figure_path": "2407.10907v2_figure_5.png",
121
+ "caption": "Figure 5: 10 Contour of Ez\u2062(x,y)subscript\ud835\udc38\ud835\udc67\ud835\udc65\ud835\udc66E_{z}(x,y)italic_E start_POSTSUBSCRIPT italic_z end_POSTSUBSCRIPT ( italic_x , italic_y ) with different sizes of noise \u03bb1=\u03bb2=0,21,23,25formulae-sequencesubscript\ud835\udf061subscript\ud835\udf0620superscript21superscript23superscript25\\lambda_{1}=\\lambda_{2}=0,2^{1},2^{3},2^{5}italic_\u03bb start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = italic_\u03bb start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT = 0 , 2 start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT , 2 start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT , 2 start_POSTSUPERSCRIPT 5 end_POSTSUPERSCRIPT in the time T=1\ud835\udc471T=1italic_T = 1.",
122
+ "url": "http://arxiv.org/html/2407.10907v2/x5.png"
123
+ },
124
+ "6": {
125
+ "figure_path": "2407.10907v2_figure_6.png",
126
+ "caption": "Figure 6: Ez\u2062(x,y)subscript\ud835\udc38\ud835\udc67\ud835\udc65\ud835\udc66E_{z}(x,y)italic_E start_POSTSUBSCRIPT italic_z end_POSTSUBSCRIPT ( italic_x , italic_y ) with different sizes of noise \u03bb1=\u03bb2=0,21,23,25formulae-sequencesubscript\ud835\udf061subscript\ud835\udf0620superscript21superscript23superscript25\\lambda_{1}=\\lambda_{2}=0,2^{1},2^{3},2^{5}italic_\u03bb start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = italic_\u03bb start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT = 0 , 2 start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT , 2 start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT , 2 start_POSTSUPERSCRIPT 5 end_POSTSUPERSCRIPT in the time T=1\ud835\udc471T=1italic_T = 1.",
127
+ "url": "http://arxiv.org/html/2407.10907v2/x6.png"
128
+ }
129
+ },
130
+ "validation": true,
131
+ "references": [
132
+ {
133
+ "1": {
134
+ "title": "Wiener chaos expansion and simulation of electromagnetic wave\npropagation excited by a spatially incoherent source.",
135
+ "author": "M. Badieirostami, A. Adibi, H. Zhou, and S. Chow.",
136
+ "venue": "Multiscale Model. Sim., 8:591\u2013604, 2010.",
137
+ "url": null
138
+ }
139
+ },
140
+ {
141
+ "2": {
142
+ "title": "A \"parareal\" time discretization for non-linear PDE\u2019s with\napplication to the pricing of an American put.",
143
+ "author": "G. Bal and Y. Maday.",
144
+ "venue": "Springer, Berlin, 2002.",
145
+ "url": null
146
+ }
147
+ },
148
+ {
149
+ "3": {
150
+ "title": "On parareal algorithms for semilinear parabolic stochastic PDEs.",
151
+ "author": "C. Br\u00e9hier and X. Wang.",
152
+ "venue": "SIAM J. Numer. Anal., 58:254\u2013278, 2020.",
153
+ "url": null
154
+ }
155
+ },
156
+ {
157
+ "4": {
158
+ "title": "A symplectic discontinuous Galerkin full discretization for\nstochastic Maxwell equations.",
159
+ "author": "C. Chen.",
160
+ "venue": "SIAM J. Numer. Anal., 59:2197\u20132217, 2021.",
161
+ "url": null
162
+ }
163
+ },
164
+ {
165
+ "5": {
166
+ "title": "Mean-square convergence of a semidiscrete scheme for stochastic\nMaxwell equations.",
167
+ "author": "C. Chen, J. Hong, and L. Ji.",
168
+ "venue": "SIAM J. Numer. Anal., 57:728\u2013750, 2019.",
169
+ "url": null
170
+ }
171
+ },
172
+ {
173
+ "6": {
174
+ "title": "Runge-Kutta semidiscretizations for stochastic Maxwell equations\nwith additive noise.",
175
+ "author": "C. Chen, J. Hong, and L. Ji.",
176
+ "venue": "SIAM J. Numer. Anal., 57:702\u2013727, 2019.",
177
+ "url": null
178
+ }
179
+ },
180
+ {
181
+ "7": {
182
+ "title": "A new efficient operator splitting method for stochastic Maxwell\nequations.",
183
+ "author": "C. Chen, J. Hong, and L. Ji.",
184
+ "venue": "arXiv preprint arXiv:2102.10547, 2021.",
185
+ "url": null
186
+ }
187
+ },
188
+ {
189
+ "8": {
190
+ "title": "Numerical approximations of stochastic Maxwell equations: via\nstructure-preserving algorithms.",
191
+ "author": "C. Chen, J. Hong, and L. Ji.",
192
+ "venue": "Springer, Heidelberg, 2023.",
193
+ "url": null
194
+ }
195
+ },
196
+ {
197
+ "9": {
198
+ "title": "Ergodic numerical approximations for stochastic Maxwell equations.",
199
+ "author": "C. Chen, J. Hong, L. Ji, and G. Liang.",
200
+ "venue": "arXiv preprint arXiv:2210.06092, 2022.",
201
+ "url": null
202
+ }
203
+ },
204
+ {
205
+ "10": {
206
+ "title": "Preservation of physical properties of stochastic Maxwell equations\nwith additive noise via stochastic multi-symplectic methods.",
207
+ "author": "C. Chen, J. Hong, and L. Zhang.",
208
+ "venue": "J. Comput. Phys., 306:500\u2013519, 2016.",
209
+ "url": null
210
+ }
211
+ },
212
+ {
213
+ "11": {
214
+ "title": "Exponential integrators for stochastic Maxwell\u2019s equations driven\nby it\u00f4 noise.",
215
+ "author": "D. Cohen, J. Cui, J. Hong, and L. Sun.",
216
+ "venue": "J. Comput. Phys., 410:109382, 2020.",
217
+ "url": null
218
+ }
219
+ },
220
+ {
221
+ "12": {
222
+ "title": "Symmetric parareal algorithms for Hamiltonian systems.",
223
+ "author": "X. Dai, L. Bris, F. Legoll, and Y. Maday.",
224
+ "venue": "ESAIM-Math. Model. Numer. Anal., 47:717\u2013742, 2013.",
225
+ "url": null
226
+ }
227
+ },
228
+ {
229
+ "13": {
230
+ "title": "Stable parareal in time method for first-and second-order hyperbolic\nsystems.",
231
+ "author": "X. Dai and Y. Maday.",
232
+ "venue": "SIAM J. Sci. Comput., 35:A52\u2013A78, 2013.",
233
+ "url": null
234
+ }
235
+ },
236
+ {
237
+ "14": {
238
+ "title": "Analysis for parareal algorithms applied to Hamiltonian\ndifferential equations.",
239
+ "author": "M. Gander and E. Hairer.",
240
+ "venue": "J. Comput. Appl. Math., 259:2\u201313, 2014.",
241
+ "url": null
242
+ }
243
+ },
244
+ {
245
+ "15": {
246
+ "title": "Analysis of the parareal time-parallel time-integration method.",
247
+ "author": "M. Gander and S. Vandewalle.",
248
+ "venue": "SIAM J. Sci. Comput., 29:556\u2013578, 2007.",
249
+ "url": null
250
+ }
251
+ },
252
+ {
253
+ "16": {
254
+ "title": "Three kinds of novel multi-symplectic methods for stochastic\nHamiltonian partial differential equations.",
255
+ "author": "J. Hong, B. Hou, Q. Li, and L. Sun.",
256
+ "venue": "J. Comput. Phys., 467:111453, 2022.",
257
+ "url": null
258
+ }
259
+ },
260
+ {
261
+ "17": {
262
+ "title": "A stochastic multi-symplectic scheme for stochastic Maxwell\nequations with additive noise.",
263
+ "author": "J. Hong, L. Ji, and L. Zhang.",
264
+ "venue": "J. Comput. Phys., 268:255\u2013268, 2014.",
265
+ "url": null
266
+ }
267
+ },
268
+ {
269
+ "18": {
270
+ "title": "An energy-conserving method for stochastic Maxwell equations with\nmultiplicative noise.",
271
+ "author": "J. Hong, L. Ji, and L. Zhang.",
272
+ "venue": "J. Comput. Phys., 351:216\u2013229, 2017.",
273
+ "url": null
274
+ }
275
+ },
276
+ {
277
+ "19": {
278
+ "title": "Parareal exponential -scheme for longtime simulation of\nstochastic Schr\u00f6dinger equations with weak damping.",
279
+ "author": "J. Hong, X. Wang, and L. Zhang.",
280
+ "venue": "SIAM J. Sci. Comput., 41:B1155\u2013B1177, 2019.",
281
+ "url": null
282
+ }
283
+ },
284
+ {
285
+ "20": {
286
+ "title": "On the approximate controllability of the stochastic Maxwell\nequations.",
287
+ "author": "T. Horsin, I. Stratis, and A. Yannacopoulos.",
288
+ "venue": "IMA J. Math. Control. I., 27:103\u2013118, 2010.",
289
+ "url": null
290
+ }
291
+ },
292
+ {
293
+ "21": {
294
+ "title": "Meshless structure-preserving GRBF collocation methods for\nstochastic Maxwell equations with multiplicative noise.",
295
+ "author": "B. Hou.",
296
+ "venue": "Appl. Numer. Math., 192:337\u2013355, 2023.",
297
+ "url": null
298
+ }
299
+ },
300
+ {
301
+ "22": {
302
+ "title": "Stochastic integrodiferential equations in Hilbert spaces with\napplications in electromagnetics.",
303
+ "author": "K. Liaskos, I. Stratis, and A. Yannacopoulos.",
304
+ "venue": "J. Integral Equations Appl., 22:559\u2013590, 2010.",
305
+ "url": null
306
+ }
307
+ },
308
+ {
309
+ "23": {
310
+ "title": "A \"parareal\" in time discretization of PDE\u2019s.",
311
+ "author": "J. Lions, Y. Maday, and G. Turinici.",
312
+ "venue": "C. R. Acad. Sci. Paris Ser. I Math., 332:661\u2013668, 2001.",
313
+ "url": null
314
+ }
315
+ },
316
+ {
317
+ "24": {
318
+ "title": "Mathematical analysis of deterministic and stochastic problems\nin complex media electromagnetics.",
319
+ "author": "G. Roach, I. Stratis, and A. Yannacopoulos.",
320
+ "venue": "Princeton University Press, 2012.",
321
+ "url": null
322
+ }
323
+ },
324
+ {
325
+ "25": {
326
+ "title": "Principles of statistical radiophysics:elements and random\nfields 3.",
327
+ "author": "S. Rytov, I. Kravov, and V. Tatarskii.",
328
+ "venue": "Springer, Berlin, 1989.",
329
+ "url": null
330
+ }
331
+ },
332
+ {
333
+ "26": {
334
+ "title": "Multi-symplectic discontinuous Galerkin methods for the stochastic\nMaxwell equations with additive noise.",
335
+ "author": "J. Sun, C. Shu, and Y. Xing.",
336
+ "venue": "J. Comput. Phys., 461:111199, 2022.",
337
+ "url": null
338
+ }
339
+ },
340
+ {
341
+ "27": {
342
+ "title": "Discontinuous Galerkin methods for stochastic Maxwell equations\nwith multiplicative noise.",
343
+ "author": "J. Sun, C. Shu, and Y. Xing.",
344
+ "venue": "ESAIM-Math. Model. Num., 57:841\u2013864, 2023.",
345
+ "url": null
346
+ }
347
+ },
348
+ {
349
+ "28": {
350
+ "title": "Convergence analysis for three parareal solvers.",
351
+ "author": "S. Wu and T. Zhou.",
352
+ "venue": "SIAM J. Sci. Comput., 37:A970\u2013A992, 2015.",
353
+ "url": null
354
+ }
355
+ },
356
+ {
357
+ "29": {
358
+ "title": "Galerkin finite element methods for stochastic parabolic partial\ndifferential equations.",
359
+ "author": "Y. Yan.",
360
+ "venue": "SIAM J. Numer. Anal., 43:1363\u20131384, 2005.",
361
+ "url": null
362
+ }
363
+ },
364
+ {
365
+ "30": {
366
+ "title": "Numerical studies of some stochastic partial differential\nequations.",
367
+ "author": "K. Zhang.",
368
+ "venue": "PhD thesis, The Chinese University of Hong Kong, 2008.",
369
+ "url": null
370
+ }
371
+ },
372
+ {
373
+ "31": {
374
+ "title": "A review on stochastic multi-symplectic methods for stochastic\nMaxwell equations.",
375
+ "author": "L. Zhang, C. Chen, and J. Hong.",
376
+ "venue": "Commun. Appl. Math. Comput., 1:467\u2013501, 2019.",
377
+ "url": null
378
+ }
379
+ },
380
+ {
381
+ "32": {
382
+ "title": "Stochastic multi-symplectic Runge\u2013Kutta methods for stochastic\nHamiltonian PDEs.",
383
+ "author": "L. Zhang and L. Ji.",
384
+ "venue": "Appl. Numer. Math., 135:396\u2013406, 2019.",
385
+ "url": null
386
+ }
387
+ },
388
+ {
389
+ "33": {
390
+ "title": "Parareal algorithms applied to stochastic differential equations with\nconserved quantities.",
391
+ "author": "L. Zhang, W. Zhou, and L. Ji.",
392
+ "venue": "J. Comput. Math., 37:48\u201360, 2019.",
393
+ "url": null
394
+ }
395
+ },
396
+ {
397
+ "34": {
398
+ "title": "Modeling and FDTD discretization of stochastic Maxwell\u2019s\nequations with Drude dispersion.",
399
+ "author": "Y. Zhou and D. Liang.",
400
+ "venue": "J. Comput. Phys., 509:113033, 2024.",
401
+ "url": null
402
+ }
403
+ }
404
+ ],
405
+ "url": "http://arxiv.org/html/2407.10907v2"
406
+ }
20240819/2407.15871v3.json ADDED
@@ -0,0 +1,511 @@
1
+ {
2
+ "title": "Semantic Prototypes: Enhancing Transparency without Black Boxes",
3
+ "abstract": "As machine learning (ML) models and datasets increase in complexity, the demand for methods that enhance explainability and interpretability becomes paramount. Prototypes, by encapsulating essential characteristics within data, offer insights that enable tactical decision-making and enhance transparency. Traditional prototype methods often rely on sub-symbolic raw data and opaque latent spaces, reducing explainability and increasing the risk of misinterpretations. This paper presents a novel framework that utilizes semantic descriptions to define prototypes and provide clear explanations, effectively addressing the shortcomings of conventional methods. Our approach leverages concept-based descriptions to cluster data on the semantic level, ensuring that prototypes not only represent underlying properties intuitively but are also straightforward to interpret. Our method simplifies the interpretative process and effectively bridges the gap between complex data structures and human cognitive processes, thereby enhancing transparency and fostering trust. Our approach outperforms existing widely-used prototype methods in facilitating human understanding and informativeness, as validated through a user survey.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "1. Introduction",
9
+ "text": "In the rapidly evolving landscape of data-driven decision-making and machine learning (ML) advancements, the pursuit of explainability and interpretability stands as a critical imperative. As ML models evolve in complexity and scope, understanding their decision-making processes becomes paramount for fostering trust, ensuring accountability, and promoting fairness. Equally crucial is the comprehension of the datasets upon which these models are trained and applied. Data, often vast and heterogeneous, serve as the foundational bedrock upon which ML models operate. However, the sheer volume and intricacy of data present formidable challenges in discerning meaningful insights, uncovering hidden biases, and ensuring the quality and fairness of AI-driven systems. Thus, the need for transparent and interpretable methodologies that not only shed light on ML model behavior but also facilitate a deeper understanding and management of data is unequivocal.\nPrototypes have emerged as pivotal constructs not only for explaining machine learning models but also for comprehending the underlying data (Kim et al., 2016 ###reference_b19###). Acting as archetypal representations, prototypes encapsulate the essential characteristics or features of specific clusters or classes within a dataset, providing intuitive insights into its inherent properties. Research on human cognition and reasoning has shown that the use of prototypical examples is fundamental to the development of effective strategies for tactical decision-making (Newell et al., 1972 ###reference_b31###; Cohen et al., 1996 ###reference_b8###), and recent user studies show that concept-based and prototype explanations are prefered by users over other existing explanations (Kim et al., 2023 ###reference_b20###).\nFor instance, in information retrieval, prototypes act as exemplars for enhancing search efficiency and relevance ranking by aiding in query expansion. Additionally, there is a growing interest within the AI community in case-based reasoning and prototype-based classifiers, highlighting the versatility and acceptance of prototypes in various applications. By leveraging prototypes, stakeholders can navigate the complexities of data-driven decision-making more effectively, fostering transparency and enabling nuanced decision-making processes.\n###figure_1### A sample image from class 2 of the CLEVR-Hans3 dataset, depicting a big blue rubber cylinder, a small purple rubber cylinder, a small cyan metal cylinder, a small red rubber sphere, and a small purple metal cube.\nHowever, the majority of existing prototype approaches exhibit a major structural limitation that undermines their effectiveness and trustworthiness: they rely solely on the raw, unstructured feature space. This can be problematic from many aspects. Firstly, the feature space is often not understandable, an issue that persists across many eXplainable AI (XAI) methods, and can result in lack of intuition and potential for misinformation (Rudin, 2019 ###reference_b34###; Mittelstadt et al., 2019 ###reference_b27###; Mastromichalakis et al., 2024 ###reference_b25###; Liartis et al., 2023 ###reference_b24###; Miller, 2019 ###reference_b26###). For example consider genomics, where the feature space consists of DNA sequences which can consist of millions or billions of base pairs for an organism, and in which there can be interdependencies that are thousands of base pairs apart. 
Parts of the genome might be irrelevant, others might regulate the expression of genes that are elsewhere in the sequence, others might be genes themselves, etc. This raw feature representation is not understandable, even to a domain expert, so a traditional prototype in this case would not be helpful.\nSecondly, in many models, especially those involving complex interactions or relationships between features, a single or even a few examples might not capture the full range of interactions or the subtleties involved in model decisions. This can make it difficult to convey the full complexity of the model\u2019s decision-making process through just prototypical examples, and can lead to oversimplification and misinterpretation. For example, consider the image shown in Figure 2 ###reference_### from the CLEVR-Hans3 dataset (Stammer et al., 2021 ###reference_b37###), for which it is known that the class semantics are \u201csmall metal cube and small sphere\u201d. By just looking at the pixel representation of the prototype, even if the prototypical parts of the image have been highlighted (see bounding boxes in Figure 2 ###reference_###), it is impossible to discern the characteristics that make them prototypical. It could be the color, size, shape or texture of each object, or even their location in the image. Therefore, without telling a user the specific semantics that make this image (and the highlighted parts) prototypical, it is easy for them to misinterpret the explanation and be misled.\nThirdly, prototypes cannot be expected to generalize to all cases, and even though they might be excellent representations of a particular class, it is not made clear to an end-user which aspects of the prototype make it representative, and to which cases it might generalize.\nAdditionally, several prototype methods do not act on the feature representation itself, opting instead to utilize black-box models that transform the features into a lower-dimensional latent space representation.\nThis exacerbates the aforementioned issues, as latent representations are non-intuitive and unintelligible to humans (Wan et al., 2024 ###reference_b41###), and it also creates a paradoxical situation where non-interpretable models are used to provide explanations or interpretability, which might also facilitate malicious manipulations (Hoffmann et al., 2021 ###reference_b18###). Instead, recent research emphasizes the importance of explaining the prototypes (Nauta et al., 2021a ###reference_b28###; Wan et al., 2024 ###reference_b41###), underscoring the necessity for a semantic level of information alongside prototypes.\nOur approach represents a novel solution to address the limitations in existing prototype methods. To tackle the challenge of using raw data to define prototypes, we propose a shift towards semantic prototypes. In our approach, prototypes are not selected based on raw input features but on the semantic descriptions associated with each data point. By leveraging concept-based semantic descriptions to create clusters of data described by semantic rules as shown in Figure 1 ###reference_###, we ensure that prototypes are representative of the underlying data distribution while maintaining transparency and interpretability. This process eliminates the need to map data to a non-interpretable latent space, as distances are measured on the more intuitive semantic level. 
Moreover, our method dynamically determines the number of prototypes needed to semantically cover the entire data distribution, enhancing its adaptability and effectiveness. Furthermore, our approach mitigates the issue of providing explanations solely in terms of raw sub-symbolic data by providing both prototypical examples and the corresponding prototypical semantic description. Each cluster\u2019s semantic description serves as the prototypical part of the data point on the semantic level, offering insights into why a particular example is deemed a prototype. This combination of prototypical examples and semantic descriptions bridges the gap between the semantic and data levels, enhancing the interpretability and trustworthiness of our method. By enabling users to question and scrutinize each step of the process on the semantic level, we foster warranted trust and confidence in the explanations provided. Thus, our approach offers a simple yet effective and intuitive solution to enhance the interpretability of prototype-based explanations, mitigating the drawbacks of existing approaches."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "2. Related Work",
15
+ "text": "Our work is positioned at the intersection of several key areas within artificial intelligence, notably explainable AI (XAI), prototype-based methods, and case-based reasoning (Aamodt and Plaza, 1994 ###reference_b2###). By leveraging semantic prototypes, our approach not only enhances model interpretability but also facilitates a clearer understanding of datasets, offering a comprehensive overview of their inherent structure and characteristics.\nClassical algorithms like k-medoids (Rousseeuw and Kaufman, 1987 ###reference_b33###) have traditionally been used to select representative subsets of data points, illustrating early methods of data summarization through clustering. More recently, seminal works such as (Kim et al., 2016 ###reference_b19###) and (Gurumoorthy et al., 2019 ###reference_b16###) have leveraged prototypes to critically evaluate models and enhance transparency in machine learning decisions, establishing prototypes as an interpretability method.\nA significant body of research has focused on using prototypes to create more interpretable classifiers. This approach, often referred to as case-based or example-based classification, aims to enhance the transparency of AI models by relying on representative examples. By integrating prototypes into the classification process, these methods strive to provide intuitive, example-driven explanations that make the model\u2019s decisions more understandable to humans. (Chen et al., 2019 ###reference_b6###) is a seminal work in this direction, introducing ProtoPNet, a deep network architecture that dissects images by finding prototypical parts and combines evidence from these prototypes to make final classifications. (Wang et al., 2023 ###reference_b42###) claims to improve the classification performance of ProtoPNet with a method to learn support prototypes that lie near the classification boundary in the feature space. In a similar vein, (Nauta et al., 2021b ###reference_b30###) introduces ProtoTree, a prototype-based decision tree that employs prototypical patches as proxies for class-defining semantics.\nSeveral other works follow this rationale of prototypical learning through various approaches (Angelov and Soares, 2020 ###reference_b3###; Arik and Pfister, 2020 ###reference_b4###; Xue et al., 2022 ###reference_b45###; Li et al., 2018 ###reference_b22###; Rymarczyk et al., 2021 ###reference_b36###, 2022 ###reference_b35###; Wang et al., 2023 ###reference_b42###). However, their reliance on raw data limits the interpretability of their methods and the intuitiveness of the prototypes, potentially leading to misleading explanations.\nRecent research aligns with our work by acknowledging the limitations of providing explanations in terms of raw data and highlights the necessity to \u201cexplain the prototypes\u201d. In (Nauta et al., 2021a ###reference_b28###), the authors introduce a method to provide further insights for prototypes based on existing methods like ProtoPNet by altering some characteristics of the image, such as hue and saturation, and providing explanations based on that information. Similarly, (Wan et al., 2024 ###reference_b41###) proposes the Semantic Prototype Analysis Network (SPANet), an interpretable object recognition approach that through additional semantic information enables models to explicate the decision process by \u201cpointing out where to focus\u201d and \u201cexplaining why\u201d on a semantic level. 
Our work also utilizes information on the semantic level to define prototypes and produce semantically enriched explanations, bearing strong similarities with recent rule-based (Mastromichalakis et al., 2024 ###reference_b25###; Liartis et al., 2023 ###reference_b24###, 2021 ###reference_b23###) and counterfactual methods (Dervakos et al., 2023 ###reference_b9###) that use semantic descriptions of data to provide intuitive, human-understandable explanations.\nOur work is further supported by research discussing the shortcomings of latent space prototype interpretability, as outlined in (Hoffmann et al., 2021 ###reference_b18###), where the non-interpretability of latent space is highlighted, showing that existing methods using non-interpretable embedding spaces limit the interpretability of prototypes and are vulnerable to malicious attacks. Efforts to bridge the \u201csemantic gap\u201d between latent space and pixel space through the correlation of prototypes with ground-truth object parts (Nauta et al., 2023 ###reference_b29###) still rely on opaque procedures to map raw data to the latent space. (Wang et al., 2021 ###reference_b43###) claims to address this opacity by introducing a plug-in transparent embedding space to bridge high-level input patches and output categories.\nIn contrast, our approach eliminates the reliance on non interpretable latent spaces by using semantic descriptions directly, making each step transparent and interpretable. This not only enhances trust in the explanations provided but also allows for a more robust understanding of both the model and the data. Our method stands out by addressing both the need for clear, semantic-level explanations and the requirement for prototypes that truly represent the data in an intuitive and human-understandable way."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "3. Semantic Prototypes",
21
+ "text": "In this section we define the proposed framework for semantic prototypes. At the core of the framework is the notion of an Attribute Set Description (ASD), which provides a simple way to represent data samples semantically, as a set of entities, where an entity is represented as a set of attributes.\nGiven a set , an Attribute Set Description is a set of the form where each is of the form .\nThe set is a vocabulary that lists all the possible attributes an entity can have, so an ASD lists the attributes of a collection of entities. For defining semantic prototypes, we assume that we have data where samples are described by an ASD.\nSpecifically, we assume that our data consist of triples where is a raw data point, (e.g. an image, audio signal, DNA sequence etc.) is a label, and an semantic description of that data point. We assume that is an ASD that reflects the contents of .\nIn the case of an image the entities could be objects depicted within the image, each characterised by shape, colour, size, etc., while in the case of a speech signal, the entities could be utterances that are characterized by loudness, pitch, intonation, rhythm etc.\nThe assumptions that i) there are available data with ASDs and ii) that the ASDs accurately describe the data samples are worth further discussion. In the ideal case, such semantic descriptions will have resulted by human expert annotation, especially in decision-critical domains. There already exist multiple datasets with manually-added semantic descriptions or metadata that can be used as ASDs, both for general-purpose and domain-specific tasks, such as Audio set (Gemmeke et al., 2017 ###reference_b15###) including audio events accompanied by an ontology describing their semantics, the Visual Genome (Krishna et al., 2017 ###reference_b21###), containing images accompanied by scene graphs, where entities are linked semantically to WordNet (Fellbaum, 2010 ###reference_b13###), and the cancer genome atlas (Weinstein et al., 2013 ###reference_b44###), that includes genomic sequences along with a rich set of clinical and other information, among others. Furthermore, one could also use traditional, transparent feature extraction techniques to generate the ASDs, or, even more complex models, such as large vision-language models, similar to recent works that relate to ours (Wan et al., 2024 ###reference_b41###). The point of the ASD is to provide a meaningful description of a data sample at a level of abstraction that is understandable.\nAn ASD can also be used to describe a set of data samples. Given an ASD , we will say that subsumes if . This can be thought of as being more general than . Given a data point , if subsumes we will also say that describes the data point . Essentially, describes , if the description of contains entities with attributes that match or exceed those described in , thus, there can be ASDs that describe multiple data points. For example a data sample with ASD is described by the ASD , and so is the data sample with ASD . We utilize this idea for the semantic prototypes, by first finding ASDs which describe only data points with a particular label. We call such ASDs class cluster descriptions (CCD) of that label.\nA class cluster description of class , is an ASD such that, if describes a datapoint (i.e. subsumes ), then .\nIntuitively, a CCD semantically describes a cluster of data points that belong to a specific class, and no other data points. 
It can be interpreted as an IF THEN rule in that IF a data point is described by a CCD, then it belongs to that particular class. The purpose of identifying and semantically describing clusters of data points is to subsequently find the most representative or informative samples for those clusters, which can then be given as prototypes, along with their semantic description (ASD), and the semantic description of why they belong to their class (CCD). In particular, given a CCD for a label, the corresponding semantic prototype is the data point whose ASD contains the least redundant information among points that fit that description.\nA semantic prototype for a class cluster description is a data point that is described by , and for which, given a distance metric , for every other data point that is described by , it holds that .\nIntuitively, this means that a semantic prototype is a data point that materialises all the semantic information of the class description, since it is described by it, and contains as little extra information as possible.\nThe choice to limit the extra information is made to ensure that end-users are not distracted by irrelevant characteristics of the data point, such as objects in an image that do not affect what class it belongs to. Regarding the distance metric , in our implementation we opt for a set edit distance, as it has been used in other semantic explainability methods (Dervakos et al., 2023 ###reference_b9###), but other distance metrics could potentially be used, such as the extension of Jaccard similarity to sets of sets."
22
+ },
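To make the definitions above concrete, here is a minimal Python sketch of ASDs, subsumption, and the CCD check. It is illustrative only: the names (`ASD`, `subsumes`, `is_ccd`) are ours, and it assumes the reading of subsumption suggested by the prose, namely that every entity description of the more general ASD is contained in some entity description of the more specific one.

```python
from typing import FrozenSet, Set

# An entity is a set of attributes; an ASD is a set of entities.
ASD = Set[FrozenSet[str]]

def subsumes(general: ASD, specific: ASD) -> bool:
    """True if every entity description in `general` is a subset of
    some entity description in `specific`."""
    return all(any(e <= f for f in specific) for e in general)

def is_ccd(candidate: ASD, dataset, label) -> bool:
    """A candidate ASD is a class cluster description (CCD) for `label`
    if every data point it describes carries that label.
    `dataset` is an iterable of (raw, label, asd) triples."""
    return all(y == label for _, y, asd in dataset if subsumes(candidate, asd))

# Example: a CCD demanding a small metal cube and a small sphere
# describes any sample whose entities extend those attribute sets.
sample: ASD = {frozenset({"small", "metal", "cube", "purple"}),
               frozenset({"small", "red", "rubber", "sphere"})}
ccd: ASD = {frozenset({"small", "metal", "cube"}),
            frozenset({"small", "sphere"})}
assert subsumes(ccd, sample)
```

Under this reading, checking whether a candidate ASD is a CCD is a single pass over the dataset, which is what the filtering step of Section 4.1 relies on.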
23
+ {
24
+ "section_id": "4",
25
+ "parent_section_id": null,
26
+ "section_name": "4. Computing Semantic Prototypes",
27
+ "text": "Within the proposed framework we can find prototypes using semantic criteria, and we can also answer the question \u201dWhy is this example a prototype?\u201d, by accompanying the prototypical example with a semantic class description when showing it to an end-user. To this end, there are are two main components that need to be computed. First, is the process of identifying and describing clusters within each class (computing CCDs), and second is the process of choosing the most informative data sample for each cluster."
28
+ },
29
+ {
30
+ "section_id": "4.1",
31
+ "parent_section_id": "4",
32
+ "section_name": "4.1. Finding class cluster descriptions",
33
+ "text": "As the space of all possible CCDs is exponentially large, our approach works by first heuristically generating a large (but polynomial) number of potential CCDs, filtering out those that do not satisfy the criteria (i.e. the clusters contain data samples that have a different label), and finally choosing a subset of the computed CCDs, depending on the number of prototypes that we want to produce and on the class coverage of the CCDs.\nGiven a dataset , and a class for which we want to produce prototypical examples, we would ideally like to produce the smallest number of CCDs that describe the entirety of class without describing any data points from other classes. It is worth mentioning that since finding CCDs is equivalent to finding rules of the form \u201cIF data sample contains entities with specific attributes THEN it is classified to class \u201d, existing rule-based methods could be adapted for finding CCDs (Zhou et al., 2003 ###reference_b46###; Augasta and Kathirvalavakumar, 2012 ###reference_b5###; Mastromichalakis et al., 2024 ###reference_b25###; Liartis et al., 2021 ###reference_b23###).\nIn our implementation, we utilise Algorithm 1 ###reference_### to compute the initial CCDs, using as the positive data points and as the set of negative data points. Alg. 1 ###reference_### is a greedy algorithm inspired by (Liartis et al., 2021 ###reference_b23###) that starts with an ASD, and using a similarity metric (eq. 1 ###reference_###) as a heuristic, greedily merges (Alg. 2 ###reference_###) positive descriptions into more general ones, and subsequently checks that the more general descriptions do not describe any negative data points. In contrast to (Liartis et al., 2021 ###reference_b23###), we repeat this process for each positive data point, because we want to ensure that each positive data point fits at least one CCD, and to also mitigate \u201dbad\u201d choices induced by the heuristic. This strikes a balance between only utilizing each data point once, and exploring the combinatorially large number of all possible choices.\nThe similarity metric we use, as described in equation 1 ###reference_###, utilizes the Jaccard similarity to compare the entities described in each ASD. For each entity in , it calculates the maximum number of attributes it shares with any entity in . This is averaged over the entities in , repeated symmetrically for the entities in , and then these two quantities are averaged. We average over entities so that if describes many more entities than , it does not dominate the total similarity and vice versa. The two quantities are averaged so that the final quantity is between 0 and 1, as is commonly required of a similarity metric.\nThe merge operation also follows the paradigm of (Liartis et al., 2023 ###reference_b24###), by finding all common attributes for pairs of entities from and , and then trims the resulting ASD. This way of combining and is essentially the direct product of finite structures, applied to ASDs. It is also the join operation on the lattice induced by ASD subsumption. The resulting ASD of this merge operation holds the property that it subsumes and , and is subsumed by any other ASD that subsumes both and . Therefore, it is their most specific generalization. This operation is widely adopted for separating structured positive and negative examples (Liartis et al., 2023 ###reference_b24###; Cima et al., 2022 ###reference_b7###; Ortiz, 2019 ###reference_b32###; Guti\u00e9rrez-Basulto et al., 2018 ###reference_b17###). 
The trimming operation used only removes redundant entity descriptions, without sacrificing the property of the most specific generalization.\nAlg.1 results in a large set of CCDs which are used as candidates for subsequently finding semantic prototypes. As each CCD results in a prototype, we want to limit their number so that they can all be shown to a user without overwhelming them. In our implementation we again do this greedily, by choosing at each step the CCD that describes the most positive samples that are not already described by any previously selected class descriptions. Selecting the fewest number of CCDs that in total describe all positive data points is an instance of the set cover problem, while selecting CCDs that describe as many positive data points as possible is an instance of the maximum coverage problem. Both problems are NP-complete and the greedy algorithm is the best polynomial-time approximation, up to lower-order terms, unless P=NP (Vazirani, 2001 ###reference_b39###; Feige, 1998 ###reference_b12###; Dinur and Steurer, 2014 ###reference_b11###)."
34
+ },
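The similarity heuristic of eq. (1), the merge operation, and the greedy coverage-based selection can be sketched as follows. This is a reconstruction from the prose rather than the paper's code: we assume eq. (1) is the symmetrized average of best-match Jaccard similarities between entities, that trimming keeps only the maximal entity descriptions (those not strictly contained in another), and `greedy_select` is our name for the selection step; `subsumes` is the subsumption check sketched earlier.

```python
from itertools import product

def jaccard(a, b):
    """Jaccard similarity of two attribute sets."""
    return len(a & b) / len(a | b) if a | b else 1.0

def asd_similarity(d1, d2):
    """Eq. (1) as described: each entity's best Jaccard match in the
    other ASD, averaged per ASD, then the two directions averaged so
    the result is symmetric and lies in [0, 1]."""
    s1 = sum(max(jaccard(e, f) for f in d2) for e in d1) / len(d1)
    s2 = sum(max(jaccard(e, f) for e in d1) for f in d2) / len(d2)
    return (s1 + s2) / 2

def merge(d1, d2):
    """Most specific generalization of two ASDs: all pairwise
    intersections of their entities, trimmed of redundant (strictly
    contained) entity descriptions."""
    entities = {e & f for e, f in product(d1, d2) if e & f}
    return {e for e in entities if not any(e < f for f in entities)}

def greedy_select(candidates, positives, subsumes):
    """Set-cover style selection: repeatedly pick the candidate CCD
    that describes the most not-yet-covered positive data points."""
    uncovered, chosen = set(range(len(positives))), []
    while uncovered:
        best = max(candidates,
                   key=lambda c: sum(subsumes(c, positives[i]) for i in uncovered))
        newly = {i for i in uncovered if subsumes(best, positives[i])}
        if not newly:
            break  # remaining positives are covered by no candidate
        chosen.append(best)
        uncovered -= newly
    return chosen
```

Note that `merge` drops empty intersections: an empty entity description is subsumed by everything and carries no information.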
35
+ {
36
+ "section_id": "4.2",
37
+ "parent_section_id": "4",
38
+ "section_name": "4.2. Finding semantic prototypes",
39
+ "text": "Having established the CCDs, we proceed to find prototypes for each class by identifying the closest data point to each CCD that is simultaneously described by it. In our implementation, the closeness is determined through the set edit distance, a metric quantifying the distance between the CCD and the ASD of each data point. In particular, as we know that the CCD describes the data sample , the only necessary edits to transform into are insertions of attributes into the sets contained in . To do this, for every pair of sets where , and we compute the number of insertions . Then the pairs are organized into a bipartite graph, where the weights of the edges are set to be the number of insertions computed previously. It is guaranteed that every will have at least one edge, since we know that describes , meaning that . Finally, we compute the minimum number of additions required to transform to , yielding the edit distance between the class description and the data point, by adapting the minimum weight full match algorithm, as used in (Filandrianos et al., 2022 ###reference_b14###). This is computed for all data points, and then the semantic prototype is chosen to be the one with the least edit distance."
40
+ },
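A sketch of this edit-distance computation, assuming the minimum weight full matching is solved with SciPy's `linear_sum_assignment` (the paper adapts the minimum weight full match algorithm of Filandrianos et al.; any such solver works) and that a CCD never has more entities than an ASD it describes. Pairs where the CCD entity is not a subset of the ASD entity receive a large blocking cost.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

BIG = 10**6  # blocks pairs where the CCD entity is not a subset

def edit_distance(ccd, asd):
    """Minimum number of attribute insertions that match every CCD
    entity to a distinct ASD entity containing it."""
    ccd, asd = list(ccd), list(asd)
    cost = np.full((len(ccd), len(asd)), BIG)
    for i, e in enumerate(ccd):
        for j, f in enumerate(asd):
            if e <= f:                   # e can be completed into f
                cost[i, j] = len(f - e)  # insertions needed
    rows, cols = linear_sum_assignment(cost)
    total = int(cost[rows, cols].sum())
    assert total < BIG, "CCD does not describe this data point"
    return total

def semantic_prototype(ccd, described):
    """described: (raw, asd) pairs already known to fit the CCD;
    returns the raw sample whose ASD needs the fewest insertions,
    i.e. the one carrying the least extra information."""
    return min(described, key=lambda p: edit_distance(ccd, p[1]))[0]
```

The blocking cost plays the role of the missing edges of the bipartite graph; the assertion fires only if the matching is forced through such a pair, i.e. if the CCD does not in fact describe the sample.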
41
+ {
42
+ "section_id": "5",
43
+ "parent_section_id": null,
44
+ "section_name": "5. Experiments",
45
+ "text": "In our experiments, we utilized two datasets to evaluate the effectiveness of our approach: the CLEVR-Hans dataset (Stammer et al., 2021 ###reference_b37###) and the CUB-200 dataset (Wah et al., 2011 ###reference_b40###). The CLEVR-Hans dataset comprises artificial images featuring a varying number of objects with different sizes, shapes, colors, and textures. This dataset is chosen because of its simplicity and clear semantics and characteristics that allow for a straightforward demonstration of how our method excels where other explanation techniques fall short. The second dataset we employed is the CUB-200 dataset, which consists of real images of birds divided into 200 classes according to species. This dataset allows us to evaluate our method in real-life images. It is widely used in prototype-based methods, making it ideal for comparison. Our code is available here: https://github.com/ails-lab/Semantic-Prototypes ###reference_types###."
46
+ },
47
+ {
48
+ "section_id": "5.1",
49
+ "parent_section_id": "5",
50
+ "section_name": "5.1. Qualitative Evaluation",
51
+ "text": ""
52
+ },
53
+ {
54
+ "section_id": "5.1.1",
55
+ "parent_section_id": "5.1",
56
+ "section_name": "5.1.1. CLEVR-Hans",
57
+ "text": "We conducted experiments on the CLEVR-Hans dataset to qualitatively analyze the informativeness and interpretability of our semantic prototype approach. We compared our method against existing prototype-based techniques, focusing on how well each method captures the clear and distinct semantics present in the dataset. Each class in the dataset has a clear semantic description that characterizes all the images within that class. For example, all images in Class 1 contain at least one large cube and one large cylinder; Class 2 images feature at least one small metal cube and one small sphere; Class 3 images include at least one large blue sphere and one small yellow sphere.\nThe following Class Characteristic Descriptions (CCDs) produced by our method correctly reflect the characteristics of each class\nFigure 3 ###reference_### shows the prototypical images for each class in the training set of CLEVR-Hans3, generated by Protodash (Gurumoorthy et al., 2019 ###reference_b16###) and our proposed approach. Our method selects prototypes with the least extraneous information, producing clearer, more focused prototypes that prevent cognitive overload and help users detect the distinguishing features between classes. By observing the prototypes produced by our method, users can more easily identify patterns due to the absence of distracting information. In contrast, prototypes generated by other methods often represent a \u201ccentral\u201d data point that may include irrelevant information.\nOur approach essentially disregards the distribution of images in the feature space, where images with many objects might be more common. Instead, we find the best CCDs that cover the class as comprehensively as possible and then identify the data point described by the CCD with the fewest redundancies.\nAdditionally, our study highlights the importance of providing explanations alongside prototypes. Although our method minimizes irrelevant information in prototypes, extracting the actual semantics of the class remains challenging. This challenge is even more pronounced with prototypes produced by methods like Protodash, where the amount of encapsulated information can be overwhelming. Even when methods can detect the exact parts of images containing characteristic features, it can still be difficult to extract the correct semantics due to feature entanglement at the data level. As shown in Figure 2 ###reference_### and discussed in Section 1 ###reference_###, simply indicating the prototypical patch sometimes fails to clarify the prototypical characteristics due to this entanglement. Our method, through the use of CCDs, clearly presents the semantics of each class in a simple, intuitive, and informative manner.\n###figure_2### ###figure_3### ###figure_4### ###figure_5### ###figure_6### ###figure_7###"
58
+ },
59
+ {
60
+ "section_id": "5.1.2",
61
+ "parent_section_id": "5.1",
62
+ "section_name": "5.1.2. CUB-200",
63
+ "text": "By analyzing the performance of our method on the CUB-200 dataset and comparing it to existing widely used methods, we assess how our approach handles real-world data, and showcase the merits of the semantic prototypes. We show that our method produces clear, semantically meaningful prototypes that align well with human understanding of bird species, highlighting the differentiating factors among similar species, whereas other methods fail to detect them. Through the inspection of prototypes of related species of birds that have great visual similarities, we are able to see that our method, accurately pinpoints the important features that differentiate the classes, while other methods do not. For example, in Figure 4 ###reference_### we can see two species of gulls, a ring-billed gull, and a glaucus-winged gull. The CCDs provided by our method indicate the characteristic black ring of the ring-billed as well as its black tail, and yellow eyes. When these features are juxtaposed with the CCDs provided for the glaucus-winged gull that include the characteristic pink legs and black eyes, the user is able to clearly distinguish these two species, while also understanding the characteristics of each gull. However, other widely-used methods like ProtoPNet (Chen et al., 2019 ###reference_b6###), fail to highlight these distinguishing characteristics, and indicate the wings or even the background as the prototypical patch of these classes as shown in Figure 5 ###reference_###. This can potentially be misleading and lower user trust because of the method\u2019s inability to detect the differentiating factors.\n###figure_8### ###figure_9### ###figure_10### ###figure_11###"
64
+ },
65
+ {
66
+ "section_id": "5.2",
67
+ "parent_section_id": "5",
68
+ "section_name": "5.2. User Survey",
69
+ "text": "For the human survey, we adopted a methodology similar to that described in (Vandenhende et al., 2022 ###reference_b38###; Dimitriou et al., 2024 ###reference_b10###). The purpose of this survey is to evaluate the effectiveness of prototype methods in teaching individuals about an unfamiliar task, through two primary stages: training and testing. During the training phase, participants are exposed to prototype instances from two analogous classes within a specified method, accompanied by relevant explanations where applicable. For example, Protodash employs solely images, ProtoPNet features images with a bounding box, and our method presents images alongside textual CCD. Each participant reviews four prototypes per class, totaling eight prototypes. To mitigate bias from recognizable class names, such as \u201cYellow-breasted Chat\u201d, these names are replaced with \u201cClass A\u201d and \u201cClass B\u201d.\nIn the testing phase, participants are required to classify ten images from the test set as either \u201cClass A\u201d or \u201cClass B.\u201d This approach is designed to assess their ability to learn the task by simply viewing random images from the training set, without any systematic selection algorithm or additional explanations. To evaluate the generalizability of the method, the experiment employs different pairings of labels as \u201cClass A\u201d and \u201cClass B\u201d. These pairs, such as Least Auklet versus Parakeet Auklet and Pelagic Cormorant versus Brandt Cormorant were selected because of their high visual similarity, which was confirmed by their high confusion rates as identified by the pretrained classifier cited in (Vandenhende et al., 2022 ###reference_b38###).\nParticipants underwent these two phases for the following six different methodologies: only CCDs, Random Images (baseline), semantic prototypes (our method), ProtoPNet (Chen et al., 2019 ###reference_b6###), Protodash (Gurumoorthy et al., 2019 ###reference_b16###), and ProtoPNet* (ProtoPNet prototypes along with explanations produced using the methodology introduced in (Nauta et al., 2021a ###reference_b28###)). The CCDs were presented before the random images to ensure that participants had no prior knowledge about the distribution of the images, starting with only textual descriptions provided by the CCDs. These were used as a baseline for our method to evaluate the usefulness of the prototypes compared to plain semantic descriptions. Different classes of birds were randomly permutated among the different methods so that each user couldn\u2019t use prior knowledge from a previous step of the survey to classify the images of a later method. Participants were ultimately asked to indicate which prototype method they preferred and found most helpful.\nThe study involved 20 PhD candidates in computer science who participated voluntarily after a call for participation. Altogether, they conducted 120 tests in total. The candidates possessed no prior knowledge about bird species, which ensured an unbiased approach to the tasks presented.\nTable 1 ###reference_### presents the results, showcasing the accuracy and participant preferences for each method.\nThe results of the user survey clearly demonstrate that our method of Semantic Prototypes (ProtoSem) outperformed all other methods in terms of performance in machine teaching and user satisfaction, exhibiting the highest accuracy and the lowest standard deviation along with the highest user preference. 
This indicates that our approach effectively helped users focus their attention in the right direction, achieving a consistent understanding across participants.\nWe see a significant discrepancy between the accuracy of participants who only read the CCDs of the two classes and those who viewed actual images from the dataset. While CCDs provide essential information for differentiating each class, they alone are insufficient for users to fully grasp the necessary distinctions. Familiarity with dataset instances plays a crucial role in properly understanding how to differentiate the classes. Additionally, the high standard deviation in performance suggests that the criteria for class selection vary significantly among users. Initially, this variation might seem counter-intuitive since users are provided with the fundamental characteristics of the classes, seemingly simplifying the classification task. However, participants struggled to intuitively grasp these explanations without examples from the dataset, as interpretations of a rule such as \u201cThe bird has a plain pattern on its head\u201d varied widely among users who had not seen how this characteristic manifests in actual birds.\nMoreover, although adding supplementary information to each prototype intuitively appears beneficial, this is not reflected in user performance. The accuracy of users who learned with the help of explanations from ProtoPNet and ProtoPNet*, which include images along with two different types of additional information, was comparable to that of learning from randomly selected images without any explanations. Notably, ProtoPNet\u2019s performance was slightly below this baseline, with a relatively higher standard deviation, indicating that the criteria for classifying images varied considerably. ProtoPNet* showed slightly improved performance and a relatively lower standard deviation, but still very close to the baseline.\nAdditionally, Protodash, which presents less information compared to other methods (except for the baseline), achieved higher performance than the preceding methods. This improvement primarily occurred because users could intuitively discern the differences between the two classes by comparing Protodash prototypes.\nAfter each participant completed the user survey, we conducted short interviews to gather feedback and insights on the methods. Here we present some notable observations highlighted by multiple participants.\nFirst, participants found it difficult to map the semantic information of the CCDs to the actual data when the prototypical images were not present. This underscores the importance of providing enhanced explanations in multiple formats, especially in areas where users lack expertise. Additionally, participants criticized the seemingly incorrect patches of ProtoPNet, noting that they often ignored these patches and instead identified their own patterns in the images. 
Many participants found the semantic explanations of ProtoPNet* unintuitive and uninterpretable because they could not relate them to the data, often choosing to disregard them.\nRegarding our method, some users mentioned that the presence of the semantic description helped them identify the distinguishing characteristics of the classes, though they had to pay more attention to process all the provided information compared to methods offering only plain images. They also suggested that smaller, more focused rules would be greatly beneficial.\nRegarding user preferences, it is important to note that half of the participants found our method\u2019s explanations more helpful than any of the alternatives. However, Table 1 ###reference_### also reveals a preference for methods that offer minimal information, specifically those consisting only of images without any textual content. This is highlighted by the fact that 45% of users identified the prototypes provided randomly, by ProtoPNet, and by Protodash as the most helpful. Interestingly, there was a stronger preference for a set of randomly selected images over the ProtoPNet* prototypes, which include images accompanied by textual explanations, even though the latter method resulted in higher user performance. Additionally, users seemed to prefer the ProtoPNet explanation, which features an image with a bounding box, despite its lower effectiveness for learning.\nThis highlights an important trade-off in explanation methods: informativeness versus simplicity. Some users prefer methods that contain the most useful information and can help them perform a task with careful attention, while others prefer explanations that are simple and do not require thorough investigation, even if they lead to poorer results. Therefore, it is important to keep our methods as concise as possible, avoiding unnecessary information to create explanations that are both simple and informative."
70
+ },
71
+ {
72
+ "section_id": "6",
73
+ "parent_section_id": null,
74
+ "section_name": "6. Conclusions",
75
+ "text": "In this work, we have introduced a framework for producing semantic prototypes, given a dataset that is enriched with human-understandable semantic information. We have developed a methodology for computing such semantic prototypes in practice, and we have demonstrated their merits qualitatively and quantitatively. The two main takeaways from our work are that i) It is important that prototypes are accompanied by some form of explanation of \u201cwhy is this a prototype?\u201d, which should be transparent and reliable, and ii) It is useful to compute prototypes in terms of their semantics, instead of their feature representation, and to make sure that there is as little as possible redundant information. This ensures that a user can more easily extrapolate the semantics of a class, or a cluster, compared to choosing the most representative sample in the dataset which might contain redundancies. Especially in large complex datasets, it could be challenging to find data which contains only information that causally links it to a class and as least additional information as possible, thus relying on the distribution of features could lead to less understandable prototypes, compared to relying on the semantic information.\nRegarding future extensions of our work, we have identified several key areas to be explored. Firstly, as Large Language Models (LLMs) are prevalent in both academia and industry, an interesting area to explore is the utilization of prototypes and their descriptions as a complement or enhancement to few-shot or in-context learning. Furthermore, for several natural language processing tasks, it might be useful to utilize LLMs for generating semantic descriptions, and then employing our proposed method for finding semantic prototypes in the data. A second interesting area to explore is the utilization of knowledge representation and knowledge graphs. In particular, the scale and interconnectedness of such structured data can be very useful for identifying clusters and prototypes semantically. In this regard, an extension of the algorithmic approach from sets of sets representations to labeled directed graph representations will provide much more expressive descriptions, which might in turn result in more understandable and informative prototypes.\nThirdly, there is an array of domain-specific applications for the proposed methodology. An example is the domain of music, where symbolic representations, such as musical scores and notation can serve as semantic descriptions of audio recordings. Furthermore, besides prototypes, there are numerous other forms of explanations that could potentially benefit from utilizing the human-understandable semantic level of abstraction and could be combined with prototypes, similar to how we combine the prototypes with CCDs which are closely related to rule-based explanations. An example would be accompanying the prototypes with their counterfactual data point, along with the associated semantic descriptions. Finally, the difficulty of objectively evaluating XAI methodologies and frameworks, and reproducing results is a known issue. In the future, we plan to extend the evaluation procedure to more participants of different backgrounds, and ideally guided by disciplines of human behavior and cognition, it would be worth exploring further what a good explanation should look like."
76
+ }
77
+ ],
78
+ "appendix": [],
79
+ "tables": {
80
+ "1": {
81
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T1.8.1.1\" style=\"font-size:90%;\">Table 1</span>. </span><span class=\"ltx_text\" id=\"S5.T1.9.2\" style=\"font-size:90%;\">Accuracy in machine teaching and human preferences for each method.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T1.6.6\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T1.6.6.7.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S5.T1.6.6.7.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.6.7.1.1.1\">Method</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T1.6.6.7.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.6.7.1.2.1\">Accuracy (%)</span></th>\n<th class=\"ltx_td ltx_nopad_r ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T1.6.6.7.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.6.7.1.3.1\">Preference (%)</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T1.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T1.1.1.1.2\">Random</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.1.1.1\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S5.T1.1.1.1.3\">10</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.2.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T1.2.2.2.2\">ProtoPNet <cite class=\"ltx_cite ltx_citemacro_citep\">(Chen et\u00a0al<span class=\"ltx_text\">.</span>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.15871v3#bib.bib6\" title=\"\">2019</a>)</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.2.2.2.1\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S5.T1.2.2.2.3\">15</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.3.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T1.3.3.3.2\">ProtoPNet* <cite class=\"ltx_cite ltx_citemacro_citep\">(Nauta et\u00a0al<span class=\"ltx_text\">.</span>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.15871v3#bib.bib28\" title=\"\">2021a</a>)</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.3.3.3.1\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S5.T1.3.3.3.3\">5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.4.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T1.4.4.4.2\">Protodash <cite class=\"ltx_cite ltx_citemacro_citep\">(Gurumoorthy et\u00a0al<span class=\"ltx_text\">.</span>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.15871v3#bib.bib16\" title=\"\">2019</a>)</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.4.4.4.1\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S5.T1.4.4.4.3\">20</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.5.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T1.5.5.5.2\">Only CCDs</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.5.5.5.1\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S5.T1.5.5.5.3\">0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.6.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S5.T1.6.6.6.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.6.6.2.1\">ProtoSem (ours)</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" 
id=\"S5.T1.6.6.6.1\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_bb\" id=\"S5.T1.6.6.6.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.6.6.3.1\">50</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
82
+ "capture": "Table 1. Accuracy in machine teaching and human preferences for each method."
83
+ }
84
+ },
85
+ "image_paths": {
86
+ "1": {
87
+ "figure_path": "2407.15871v3_figure_1.png",
88
+ "caption": "Figure 1. Overview of our Method.",
89
+ "url": "http://arxiv.org/html/2407.15871v3/extracted/5800635/images/Semantic_Prototypes_Overview.png"
90
+ },
91
+ "2": {
92
+ "figure_path": "2407.15871v3_figure_2.png",
93
+ "caption": "Figure 2. A sample image from class 2 of the CLEVR-Hans3 dataset.",
94
+ "url": "http://arxiv.org/html/2407.15871v3/extracted/5800635/images/CLEVR_Hans_classid_1_000414.png"
95
+ },
96
+ "3(a)": {
97
+ "figure_path": "2407.15871v3_figure_3(a).png",
98
+ "caption": "(a) Protodash Class 1\nFigure 3. CLEVR-Hans3 Prototypes",
99
+ "url": "http://arxiv.org/html/2407.15871v3/extracted/5800635/images/prototype_clevr_train_protosash_0.png"
100
+ },
101
+ "3(b)": {
102
+ "figure_path": "2407.15871v3_figure_3(b).png",
103
+ "caption": "(b) Ours Class 1\nFigure 3. CLEVR-Hans3 Prototypes",
104
+ "url": "http://arxiv.org/html/2407.15871v3/extracted/5800635/images/prototype_clevr_train_0.png"
105
+ },
106
+ "3(c)": {
107
+ "figure_path": "2407.15871v3_figure_3(c).png",
108
+ "caption": "(c) Protodash Class 2\nFigure 3. CLEVR-Hans3 Prototypes",
109
+ "url": "http://arxiv.org/html/2407.15871v3/extracted/5800635/images/prototype_clevr_train_protosash_1.png"
110
+ },
111
+ "3(d)": {
112
+ "figure_path": "2407.15871v3_figure_3(d).png",
113
+ "caption": "(d) Ours Class 2\nFigure 3. CLEVR-Hans3 Prototypes",
114
+ "url": "http://arxiv.org/html/2407.15871v3/extracted/5800635/images/prototype_clevr_train_1.png"
115
+ },
116
+ "3(e)": {
117
+ "figure_path": "2407.15871v3_figure_3(e).png",
118
+ "caption": "(e) Protodash Class 3\nFigure 3. CLEVR-Hans3 Prototypes",
119
+ "url": "http://arxiv.org/html/2407.15871v3/extracted/5800635/images/prototype_clevr_train_protosash_2.png"
120
+ },
121
+ "3(f)": {
122
+ "figure_path": "2407.15871v3_figure_3(f).png",
123
+ "caption": "(f) Ours Class 3\nFigure 3. CLEVR-Hans3 Prototypes",
124
+ "url": "http://arxiv.org/html/2407.15871v3/extracted/5800635/images/prototype_clevr_train_2.png"
125
+ },
126
+ "4(a)": {
127
+ "figure_path": "2407.15871v3_figure_4(a).png",
128
+ "caption": "(a) Ring Billed Gull\nFigure 4. Two visually similar classes of gulls",
129
+ "url": "http://arxiv.org/html/2407.15871v3/extracted/5800635/images/Ring_Billed_Gull_0029_52613.jpg"
130
+ },
131
+ "4(b)": {
132
+ "figure_path": "2407.15871v3_figure_4(b).png",
133
+ "caption": "(b) Glaucus Winged Gull\nFigure 4. Two visually similar classes of gulls",
134
+ "url": "http://arxiv.org/html/2407.15871v3/extracted/5800635/images/Glaucous_Winged_Gull_0130_45210.jpg"
135
+ },
136
+ "5(a)": {
137
+ "figure_path": "2407.15871v3_figure_5(a).png",
138
+ "caption": "(a) Prototype for Ring Billed Gull produced by ProtoPNet (Chen et al., 2019)\nFigure 5. Misleading Prototypes produced by ProtoPNet (Chen et al., 2019)",
139
+ "url": "http://arxiv.org/html/2407.15871v3/extracted/5800635/images/prototype_in_original_pimg_1.png"
140
+ },
141
+ "5(b)": {
142
+ "figure_path": "2407.15871v3_figure_5(b).png",
143
+ "caption": "(b) Prototype for Glaucus Winged Gull produced by ProtoPNet (Chen et al., 2019)\nFigure 5. Misleading Prototypes produced by ProtoPNet (Chen et al., 2019)",
144
+ "url": "http://arxiv.org/html/2407.15871v3/extracted/5800635/images/prototype_in_original_pimg.png"
145
+ }
146
+ },
147
+ "validation": true,
148
+ "references": [
149
+ {
150
+ "1": {
151
+ "title": "Case-based reasoning: Foundational issues, methodological variations, and system approaches.",
152
+ "author": "Agnar Aamodt and Enric Plaza. 1994.",
153
+ "venue": "AI communications 7, 1 (1994), 39\u201359.",
154
+ "url": null
155
+ }
156
+ },
157
+ {
158
+ "2": {
159
+ "title": "Towards deep machine reasoning: a prototype-based deep neural network with decision tree inference. In 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, 2092\u20132099.",
160
+ "author": "Plamen Angelov and Eduardo Soares. 2020.",
161
+ "venue": "",
162
+ "url": null
163
+ }
164
+ },
165
+ {
166
+ "3": {
167
+ "title": "Protoattend: Attention-based prototypical learning.",
168
+ "author": "Sercan O Arik and Tomas Pfister. 2020.",
169
+ "venue": "Journal of Machine Learning Research 21, 210 (2020), 1\u201335.",
170
+ "url": null
171
+ }
172
+ },
173
+ {
174
+ "4": {
175
+ "title": "Reverse engineering the neural networks for rule extraction in classification problems.",
176
+ "author": "M Gethsiyal Augasta and Thangairulappan Kathirvalavakumar. 2012.",
177
+ "venue": "Neural processing letters 35 (2012), 131\u2013150.",
178
+ "url": null
179
+ }
180
+ },
181
+ {
182
+ "5": {
183
+ "title": "This looks like that: deep learning for interpretable image recognition.",
184
+ "author": "Chaofan Chen, Oscar Li, Daniel Tao, Alina Barnett, Cynthia Rudin, and Jonathan K Su. 2019.",
185
+ "venue": "Advances in neural information processing systems 32 (2019).",
186
+ "url": null
187
+ }
188
+ },
189
+ {
190
+ "6": {
191
+ "title": "Separability and its Approximations in Ontology-based Data Management.",
192
+ "author": "Gianluca Cima, Federico Croce, and Maurizio Lenzerini. 2022.",
193
+ "venue": "Semantic Web Preprint (2022), 1\u201336.",
194
+ "url": null
195
+ }
196
+ },
197
+ {
198
+ "7": {
199
+ "title": "Metarecognition in time-stressed decision making: Recognizing, critiquing, and correcting.",
200
+ "author": "Marvin S Cohen, Jared T Freeman, and Steve Wolf. 1996.",
201
+ "venue": "Human factors 38, 2 (1996), 206\u2013219.",
202
+ "url": null
203
+ }
204
+ },
205
+ {
206
+ "8": {
207
+ "title": "Choose your data wisely: a framework for semantic counterfactuals. In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence. 382\u2013390.",
208
+ "author": "Edmund Dervakos, Konstantinos Thomas, Giorgos Filandrianos, and Giorgos Stamou. 2023.",
209
+ "venue": "",
210
+ "url": null
211
+ }
212
+ },
213
+ {
214
+ "9": {
215
+ "title": "Structure Your Data: Towards Semantic Graph Counterfactuals. In Proceedings of the 41st International Conference on Machine Learning (Proceedings of Machine Learning Research, Vol. 235). PMLR, 10897\u201310926.",
216
+ "author": "Angeliki Dimitriou, Maria Lymperaiou, Georgios Filandrianos, Konstantinos Thomas, and Giorgos Stamou. 2024.",
217
+ "venue": "",
218
+ "url": null
219
+ }
220
+ },
221
+ {
222
+ "10": {
223
+ "title": "Analytical approach to parallel repetition. In Proceedings of the forty-sixth annual ACM symposium on Theory of computing. 624\u2013633.",
224
+ "author": "Irit Dinur and David Steurer. 2014.",
225
+ "venue": "",
226
+ "url": null
227
+ }
228
+ },
229
+ {
230
+ "11": {
231
+ "title": "A threshold of ln n for approximating set cover.",
232
+ "author": "Uriel Feige. 1998.",
233
+ "venue": "J. ACM 45, 4 (jul 1998), 634\u2013652.",
234
+ "url": null
235
+ }
236
+ },
237
+ {
238
+ "12": {
239
+ "title": "WordNet.",
240
+ "author": "Christiane Fellbaum. 2010.",
241
+ "venue": "In Theory and applications of ontology: computer applications. Springer, 231\u2013243.",
242
+ "url": null
243
+ }
244
+ },
245
+ {
246
+ "13": {
247
+ "title": "Conceptual Edits as Counterfactual Explanations.. In AAAI Spring Symposium: MAKE.",
248
+ "author": "Giorgos Filandrianos, Konstantinos Thomas, Edmund Dervakos, and Giorgos Stamou. 2022.",
249
+ "venue": "",
250
+ "url": null
251
+ }
252
+ },
253
+ {
254
+ "14": {
255
+ "title": "Audio set: An ontology and human-labeled dataset for audio events. In 2017 IEEE international conference on acoustics, speech and signal processing (ICASSP). IEEE, 776\u2013780.",
256
+ "author": "Jort F Gemmeke, Daniel PW Ellis, Dylan Freedman, Aren Jansen, Wade Lawrence, R Channing Moore, Manoj Plakal, and Marvin Ritter. 2017.",
257
+ "venue": "",
258
+ "url": null
259
+ }
260
+ },
261
+ {
262
+ "15": {
263
+ "title": "Efficient data representation by selecting prototypes with importance weights. In 2019 IEEE International Conference on Data Mining (ICDM). IEEE, 260\u2013269.",
264
+ "author": "Karthik S Gurumoorthy, Amit Dhurandhar, Guillermo Cecchi, and Charu Aggarwal. 2019.",
265
+ "venue": "",
266
+ "url": null
267
+ }
268
+ },
269
+ {
270
+ "16": {
271
+ "title": "Reverse Engineering Queries in Ontology-Enriched Systems: The Case of Expressive Horn Description Logic Ontologies. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18. International Joint Conferences on Artificial Intelligence Organization, 1847\u20131853.",
272
+ "author": "V\u00edctor Guti\u00e9rrez-Basulto, Jean Christoph Jung, and Leif Sabellek. 2018.",
273
+ "venue": "https://doi.org/10.24963/ijcai.2018/255",
274
+ "url": null
275
+ }
276
+ },
277
+ {
278
+ "17": {
279
+ "title": "This looks like that\u2026 does it? shortcomings of latent space prototype interpretability in deep networks.",
280
+ "author": "Adrian Hoffmann, Claudio Fanconi, Rahul Rade, and Jonas Kohler. 2021.",
281
+ "venue": "arXiv preprint arXiv:2105.02968 (2021).",
282
+ "url": null
283
+ }
284
+ },
285
+ {
286
+ "18": {
287
+ "title": "Examples are not enough, learn to criticize! criticism for interpretability.",
288
+ "author": "Been Kim, Rajiv Khanna, and Oluwasanmi O Koyejo. 2016.",
289
+ "venue": "Advances in neural information processing systems 29 (2016).",
290
+ "url": null
291
+ }
292
+ },
293
+ {
294
+ "19": {
295
+ "title": "\u201d Help Me Help the AI\u201d: Understanding How Explainability Can Support Human-AI Interaction. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 1\u201317.",
296
+ "author": "Sunnie SY Kim, Elizabeth Anne Watkins, Olga Russakovsky, Ruth Fong, and Andr\u00e9s Monroy-Hern\u00e1ndez. 2023.",
297
+ "venue": "",
298
+ "url": null
299
+ }
300
+ },
301
+ {
302
+ "20": {
303
+ "title": "Visual genome: Connecting language and vision using crowdsourced dense image annotations.",
304
+ "author": "Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017.",
305
+ "venue": "International journal of computer vision 123 (2017), 32\u201373.",
306
+ "url": null
307
+ }
308
+ },
309
+ {
310
+ "21": {
311
+ "title": "Deep learning for case-based reasoning through prototypes: A neural network that explains its predictions. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 32.",
312
+ "author": "Oscar Li, Hao Liu, Chaofan Chen, and Cynthia Rudin. 2018.",
313
+ "venue": "",
314
+ "url": null
315
+ }
316
+ },
317
+ {
318
+ "22": {
319
+ "title": "Semantic Queries Explaining Opaque Machine Learning Classifiers.. In DAO-XAI.",
320
+ "author": "Jason Liartis, Edmund Dervakos, Orfeas Menis-Mastromichalakis, Alexandros Chortaras, and Giorgos Stamou. 2021.",
321
+ "venue": "",
322
+ "url": null
323
+ }
324
+ },
325
+ {
326
+ "23": {
327
+ "title": "Searching for explanations of black-box classifiers in the space of semantic queries.",
328
+ "author": "Jason Liartis, Edmund Dervakos, Orfeas Menis-Mastromichalakis, Alexandros Chortaras, and Giorgos Stamou. 2023.",
329
+ "venue": "Semantic Web Preprint (2023), 1\u201342.",
330
+ "url": null
331
+ }
332
+ },
333
+ {
334
+ "24": {
335
+ "title": "Rule-Based Explanations of Machine Learning Classifiers Using Knowledge Graphs. In AAAI Spring Symposium: MAKE.",
336
+ "author": "Orfeas Menis Mastromichalakis, Edmund Dervakos, Alexandros Chortaras, and Giorgos Stamou. 2024.",
337
+ "venue": "",
338
+ "url": null
339
+ }
340
+ },
341
+ {
342
+ "25": {
343
+ "title": "Explanation in artificial intelligence: Insights from the social sciences.",
344
+ "author": "Tim Miller. 2019.",
345
+ "venue": "Artificial intelligence 267 (2019), 1\u201338.",
346
+ "url": null
347
+ }
348
+ },
349
+ {
350
+ "26": {
351
+ "title": "Explaining explanations in AI. In Proceedings of the conference on fairness, accountability, and transparency. 279\u2013288.",
352
+ "author": "Brent Mittelstadt, Chris Russell, and Sandra Wachter. 2019.",
353
+ "venue": "",
354
+ "url": null
355
+ }
356
+ },
357
+ {
358
+ "27": {
359
+ "title": "This looks like that, because\u2026 explaining prototypes for interpretable image recognition. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, 441\u2013456.",
360
+ "author": "Meike Nauta, Annemarie Jutte, Jesper Provoost, and Christin Seifert. 2021a.",
361
+ "venue": "",
362
+ "url": null
363
+ }
364
+ },
365
+ {
366
+ "28": {
367
+ "title": "Pip-net: Patch-based intuitive prototypes for interpretable image classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2744\u20132753.",
368
+ "author": "Meike Nauta, J\u00f6rg Schl\u00f6tterer, Maurice van Keulen, and Christin Seifert. 2023.",
369
+ "venue": "",
370
+ "url": null
371
+ }
372
+ },
373
+ {
374
+ "29": {
375
+ "title": "Neural prototype trees for interpretable fine-grained image recognition. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 14933\u201314943.",
376
+ "author": "Meike Nauta, Ron Van Bree, and Christin Seifert. 2021b.",
377
+ "venue": "",
378
+ "url": null
379
+ }
380
+ },
381
+ {
382
+ "30": {
383
+ "title": "Human problem solving. Vol. 104.",
384
+ "author": "Allen Newell, Herbert Alexander Simon, et al. 1972.",
385
+ "venue": "Prentice-hall Englewood Cliffs, NJ.",
386
+ "url": null
387
+ }
388
+ },
389
+ {
390
+ "31": {
391
+ "title": "Ontology-Mediated Queries from Examples: a Glimpse at the DL-Lite Case. In GCAI 2019. Proceedings of the 5th Global Conference on Artificial Intelligence (EPiC Series in Computing, Vol. 65), Diego Calvanese and Luca Iocchi (Eds.). EasyChair, 1\u201314.",
392
+ "author": "Magdalena Ortiz. 2019.",
393
+ "venue": "https://doi.org/10.29007/jhtz",
394
+ "url": null
395
+ }
396
+ },
397
+ {
398
+ "32": {
399
+ "title": "Clustering by means of medoids. In Proceedings of the statistical data analysis based on the L1 norm conference, neuchatel, switzerland, Vol. 31.",
400
+ "author": "P Rousseeuw and P Kaufman. 1987.",
401
+ "venue": "",
402
+ "url": null
403
+ }
404
+ },
405
+ {
406
+ "33": {
407
+ "title": "Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead.",
408
+ "author": "Cynthia Rudin. 2019.",
409
+ "venue": "Nature machine intelligence 1, 5 (2019), 206\u2013215.",
410
+ "url": null
411
+ }
412
+ },
413
+ {
414
+ "34": {
415
+ "title": "Interpretable image classification with differentiable prototypes assignment. In European Conference on Computer Vision. Springer, 351\u2013368.",
416
+ "author": "Dawid Rymarczyk, \u0141ukasz Struski, Micha\u0142 G\u00f3rszczak, Koryna Lewandowska, Jacek Tabor, and Bartosz Zieli\u0144ski. 2022.",
417
+ "venue": "",
418
+ "url": null
419
+ }
420
+ },
421
+ {
422
+ "35": {
423
+ "title": "Protopshare: Prototypical parts sharing for similarity discovery in interpretable image classification. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining. 1420\u20131430.",
424
+ "author": "Dawid Rymarczyk, \u0141ukasz Struski, Jacek Tabor, and Bartosz Zieli\u0144ski. 2021.",
425
+ "venue": "",
426
+ "url": null
427
+ }
428
+ },
429
+ {
430
+ "36": {
431
+ "title": "Right for the right concept: Revising neuro-symbolic concepts by interacting with their explanations. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 3619\u20133629.",
432
+ "author": "Wolfgang Stammer, Patrick Schramowski, and Kristian Kersting. 2021.",
433
+ "venue": "",
434
+ "url": null
435
+ }
436
+ },
437
+ {
438
+ "37": {
439
+ "title": "Making heads or tails: Towards semantically consistent visual counterfactuals. In European Conference on Computer Vision. Springer, 261\u2013279.",
440
+ "author": "Simon Vandenhende, Dhruv Mahajan, Filip Radenovic, and Deepti Ghadiyaram. 2022.",
441
+ "venue": "",
442
+ "url": null
443
+ }
444
+ },
445
+ {
446
+ "38": {
447
+ "title": "Approximation algorithms. Vol. 1.",
448
+ "author": "Vijay V Vazirani. 2001.",
449
+ "venue": "Springer.",
450
+ "url": null
451
+ }
452
+ },
453
+ {
454
+ "39": {
455
+ "title": "The caltech-ucsd birds-200-2011 dataset.",
456
+ "author": "Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. 2011.",
457
+ "venue": "(2011).",
458
+ "url": null
459
+ }
460
+ },
461
+ {
462
+ "40": {
463
+ "title": "Interpretable Object Recognition by Semantic Prototype Analysis. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 800\u2013809.",
464
+ "author": "Qiyang Wan, Ruiping Wang, and Xilin Chen. 2024.",
465
+ "venue": "",
466
+ "url": null
467
+ }
468
+ },
469
+ {
470
+ "41": {
471
+ "title": "Learning support and trivial prototypes for interpretable image classification. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 2062\u20132072.",
472
+ "author": "Chong Wang, Yuyuan Liu, Yuanhong Chen, Fengbei Liu, Yu Tian, Davis McCarthy, Helen Frazer, and Gustavo Carneiro. 2023.",
473
+ "venue": "",
474
+ "url": null
475
+ }
476
+ },
477
+ {
478
+ "42": {
479
+ "title": "Interpretable image recognition by constructing transparent embedding space. In Proceedings of the IEEE/CVF international conference on computer vision. 895\u2013904.",
480
+ "author": "Jiaqi Wang, Huafeng Liu, Xinyue Wang, and Liping Jing. 2021.",
481
+ "venue": "",
482
+ "url": null
483
+ }
484
+ },
485
+ {
486
+ "43": {
487
+ "title": "The cancer genome atlas pan-cancer analysis project.",
488
+ "author": "John N Weinstein, Eric A Collisson, Gordon B Mills, Kenna R Shaw, Brad A Ozenberger, Kyle Ellrott, Ilya Shmulevich, Chris Sander, and Joshua M Stuart. 2013.",
489
+ "venue": "Nature genetics 45, 10 (2013), 1113\u20131120.",
490
+ "url": null
491
+ }
492
+ },
493
+ {
494
+ "44": {
495
+ "title": "Protopformer: Concentrating on prototypical parts in vision transformers for interpretable image recognition.",
496
+ "author": "Mengqi Xue, Qihan Huang, Haofei Zhang, Lechao Cheng, Jie Song, Minghui Wu, and Mingli Song. 2022.",
497
+ "venue": "arXiv preprint arXiv:2208.10431 (2022).",
498
+ "url": null
499
+ }
500
+ },
501
+ {
502
+ "45": {
503
+ "title": "Extracting symbolic rules from trained neural network ensembles.",
504
+ "author": "Zhi-Hua Zhou, Yuan Jiang, and Shi-Fu Chen. 2003.",
505
+ "venue": "Ai Communications 16, 1 (2003), 3\u201315.",
506
+ "url": null
507
+ }
508
+ }
509
+ ],
510
+ "url": "http://arxiv.org/html/2407.15871v3"
511
+ }
20240819/2407.19156v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240819/2408.03837v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240819/2408.08376v2.json ADDED
@@ -0,0 +1,658 @@
1
+ {
2
+ "title": "Decoding the human brain tissue response to radiofrequency excitation using a biophysical-model-free deep MRI on a chip framework",
3
+ "abstract": "Abstract\nMagnetic resonance imaging (MRI) relies on radiofrequency (RF) excitation of proton spin. Clinical diagnosis requires a comprehensive collation of biophysical data via multiple MRI contrasts,\nacquired using a series of RF sequences that lead to lengthy examinations. Here, we developed a vision transformer-based framework that captures the spatiotemporal magnetic signal evolution and decodes the brain tissue response to RF excitation, constituting an MRI on a chip. Following a per-subject rapid calibration scan (28.2 s), a wide variety of image contrasts including fully quantitative molecular, water relaxation, and magnetic field maps can be generated automatically. The method was validated across healthy subjects and a cancer patient in two different imaging sites, and proved to be 94% faster than alternative protocols. The deep MRI on a chip (DeepMonC) framework may reveal the molecular composition of the human brain tissue in a wide range of pathologies, while offering clinically attractive scan times.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Results",
9
+ "text": ""
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "DeepMonC Framework",
15
+ "text": "The DeepMonC core module (Fig. 1a) was designed to capture the spatiotemporal dynamics of MRI signal propagation as a response to RF excitation, and enable the generation of on-demand image contrast. The system includes a vision transformer[36 ###reference_b36###, 37 ###reference_b37###] with a dual-domain input, comprised of RF excitation information and real-world tissue response image counterparts. An extension module was also designed, which quantifies six biophysical tissue parameters across the entire 3D brain, without the need for any additional input.\nThe core module inputs are a sequence of m=6 non-steady-state MRI calibration images and an RF excitation parameter tensor (Fig. 1a). The tensor includes two concatenated parts: the acquisition parameters used for obtaining the calibration images and the desired on-demand parameters for the subsequent image output. Separate embeddings for the real-image-data and the physical RF properties are then learned using a vision transformer and a fully connected layer, respectively. The quantification module, involves a transfer learning strategy where the core module weights are plugged-in, the last layer is removed, and there is augmentation of two new convolutional layers. Ground truth reference data are then used to instigate quantification-oriented learning (Fig. 1b).\nThe DeepMonc framework was trained using 3,118,692 image and acquisition parameter pairs from 9 healthy human volunteers, scanned at a single imaging site (Tel Aviv University) on a 3T MRI (Prisma, Siemens Healthineers) equipped with a 64-channel coil. The framework was then tested using 30,324 image and acquisition parameter pairs obtained from 4 other subjects representing three challenging datasets: (i) Two healthy subjects not used for training (scanned at the same site). (ii) A brain cancer patient scanned at a different imaging site (Erlangen University Hospital). (iii) A healthy volunteer scanned using different hardware and MRI model at a different imaging site (Erlangen University Hospital, Trio MRI with a 32 channel coil)."
16
+ },
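To make the dual-domain design described in the section above concrete, the following is a minimal sketch, assuming hypothetical layer sizes, patch size, and RF-parameter dimensionality (the exact DeepMonC architecture is not published here): the m calibration images pass through a vision-transformer-style patch embedding, the 2m-row RF parameter tensor through a fully connected layer, and the two embeddings are fused before a transformer encoder.

```python
# Minimal sketch of a dual-domain embedding (illustrative assumptions only;
# not the authors' released code). images: m calibration contrasts stacked as
# channels; rf_params: 2*m rows of acquisition parameters (calibration rows
# plus on-demand rows), rf_dim values per row.
import torch
import torch.nn as nn

class DualDomainEmbedding(nn.Module):
    def __init__(self, img_size=64, patch=8, m=6, rf_dim=4, dim=256):
        super().__init__()
        self.patch_embed = nn.Conv2d(m, dim, kernel_size=patch, stride=patch)
        self.rf_embed = nn.Linear(2 * m * rf_dim, dim)   # physics branch
        n_tokens = (img_size // patch) ** 2 + 1          # patches + physics token
        self.pos = nn.Parameter(torch.zeros(1, n_tokens, dim))
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
            num_layers=6,
        )

    def forward(self, images, rf_params):
        # images: (B, m, H, W); rf_params: (B, 2*m, rf_dim)
        x = self.patch_embed(images).flatten(2).transpose(1, 2)  # (B, N, dim)
        rf = self.rf_embed(rf_params.flatten(1)).unsqueeze(1)    # (B, 1, dim)
        return self.encoder(torch.cat([rf, x], dim=1) + self.pos)
```

In this sketch, the on-demand acquisition parameters enter through the same physics branch as the calibration parameters, which is one way to condition the generated contrast on an excitation that was never physically executed.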
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Biophysical-model-free prediction of the tissue response to RF excitation",
21
+ "text": "The core module was validated for generating on-demand molecular (semisolid MT and amide proton CEST-weighted) images. The full reference imaging protocol consisted of 30 pseudo-random RF excitations (Supporting Information Fig. 1)[26 ###reference_b26###]. The first six images were used for per-subject calibration, followed by DeepMonC predictions of the multi-contrast images associated with the next six response images (Fig. 1a).\nA representative example of the DeepMonC output compared to the ground truth for each of the validation datasets is shown in Fig. 2 and whole-brain 3D reconstruction output is provided as Supporting Information Movies M1 (semisolid MT) and M2 (amide). An excellent visual, perceptive, and pixelwise similarity was obtained between DeepMonC output and ground truth. This is reflected by a structural similarity index measure (SSIM) 0.96, peak signal-to-noise ratio (PSNR) 36, and normalized mean-square error (NRMSE) 3% (Table 1).\nTo evaluate the ability to generate an up to 4-times longer output compared to the input, the process was continued recursively, until the entire 30-long sequence was predicted based on the first six calibration images (Supporting Information Movies M3 (semisolid MT) and M4 (amide)). Although there were some errors in the last six images, the overall performance remained high, with a structural similarity index measure (SSIM) 0.94, peak signal-to-noise ratio (PSNR) 32, and normalized mean-square error (NRMSE) 3.7% (Table 1). The inference times for reconstructing whole brain 6 or 24 unseen image contrasts were 7.674 s and 10.896 s, respectively, when using an Nvidia RTX 3060 GPU, and 9.495 s and 19.55 s, respectively, when using a desktop CPU (Intel I9-12900F)."
22
+ },
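The recursive extension described above (rolling each block of six predicted images back in as the next calibration input until the 30-image sequence is complete) reduces to a short loop. A sketch under an assumed model(images, rf_params) interface that maps m images to the next m:

```python
# Illustrative recursive rollout (the model interface is an assumption):
# predict images m+1..total from the first m calibration images.
import torch

@torch.no_grad()
def rollout(model, calib_images, acq_params, m=6, total=30):
    # calib_images: (B, m, H, W); acq_params: (B, total, rf_dim)
    images, predicted = calib_images, []
    for start in range(m, total, m):
        rf = torch.cat([acq_params[:, start - m:start],   # params of current input
                        acq_params[:, start:start + m]],  # on-demand target params
                       dim=1)
        images = model(images, rf)       # next m predicted contrasts
        predicted.append(images)
    return torch.cat(predicted, dim=1)   # (B, total - m, H, W)
```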
23
+ {
24
+ "section_id": "4",
25
+ "parent_section_id": null,
26
+ "section_name": "Rapid quantification of biophysical tissue parameters",
27
+ "text": "The quantification module was trained to receive the exact same input as the core module, and then produce six parameter maps: the semisolid MT proton volume fraction (fss) and exchange rate (kssw), water pool longitudinal (T1) and transverse (T2) relaxation times, and the static (B0) and transmit (B1) magnetic fields. The DeepMonC reconstructed paramater maps were visually, perceptually, and quantitatively similar to the ground truth reference (Fig. 3-5 panels a,b and Supporting Information Figure S2). The reconstruction performance was highest for the test subject scanned by the same scanner used for training (SSIM = 0.9190.024; PSNR = 30.1971.808; NRMSE = 0.0490.008), followed by the cancer patient (unseen pathology at an unseen imaging site: SSIM = 0.884; PSNR = 26.3491.246; NRMSE = 0.0590.007), and the unseen subject scanned using unseen hardware at an unseen imaging site (SSIM = 0.8110.044; PSNR = 24.1861.523; NRMSE = 0.0760.011).\nThe magnetic field maps reconstructed by DeepMonc exhibited improved homogeneity compared to their ground-truth counterparts (Fig. 3,4,5 panels a and b). This enabled successful artifact removal from the semisolid MT proton volume fraction and exchange rate maps, which are known to be sensitive to B0 and B1 inhomogeneity[38 ###reference_b38###, 39 ###reference_b39###, 40 ###reference_b40###] (white arrows in Fig. 3, and Fig. 5).\nTo analyze the contribution of the decoded tissue response information, captured by DeepMonc core module, to the quantification task performance, a comparison with standard supervised learning was performed. The same quantification architecture (Fig. 1b) was trained to receive the exact same inputs, and then output the same six quantitative biophysical parameter maps, but without employing the pre-trained DeepMonC weights (learnt by the core module, Fig. 1a). This standard supervised learning routine yielded parameter maps with a markedly lower resemblance to the ground truth (Fig. 3,4,5 panel c). The deterioration in output was accompanied by a statistically significant lower SSIM (0.8050.057, 0.7780.062, 0.7250.066, for the unseen subject, pathology, and hardware datasets, respectively, p0.0001, n=68 image pairs) and PSNR (25.7331.473, 23.5461.428, 22.6141.342, for the three datasets, respectively, p0.0001, n=68 image pairs), and a higher NRMSE (0.08420.0125, 0.08430.0128, 0.0920.012 for the three datasets, respectively, p0.0001, n=68 image pairs, Fig. 3,4,5 panel d). The inference time required for reconstructing whole brain quantitative images was 6.751 s or 9.822 s when using an Nvidia RTX 3060 GPU or a desktop CPU (Intel I9-12900F), respectively."
28
+ },
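The transfer-learning surgery described above (plug in the pre-trained core weights, remove the last layer, append two convolutional layers) can be sketched as follows; the head attribute name, the feature channel count, and the output shape are assumptions for illustration, not the published code.

```python
# Hedged sketch of the quantification module's transfer learning
# (attribute names and channel sizes are illustrative assumptions).
import torch
import torch.nn as nn

class QuantificationModule(nn.Module):
    def __init__(self, pretrained_core, feat_ch=256, n_maps=6):
        super().__init__()
        self.backbone = pretrained_core          # pre-trained DeepMonC core
        self.backbone.head = nn.Identity()       # drop its last (output) layer
        # Two new convolutional layers map the shared features to the six
        # quantitative maps: fss, kssw, T1, T2, B0, B1.
        self.conv1 = nn.Conv2d(feat_ch, feat_ch, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(feat_ch, n_maps, kernel_size=1)

    def forward(self, images, rf_params):
        feats = self.backbone(images, rf_params)          # (B, feat_ch, H, W) assumed
        return self.conv2(torch.relu(self.conv1(feats)))  # (B, 6, H, W)
```

Training then proceeds against the ground-truth parameter maps, as in standard quantification-oriented supervised learning.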
29
+ {
30
+ "section_id": "5",
31
+ "parent_section_id": null,
32
+ "section_name": "Discussion",
33
+ "text": "The past few decades have seen increased reliance on MRI for clinical diagnosis[41 ###reference_b41###]. In parallel, this has required the introduction of new contrast mechanisms and dedicated pulse sequences[42 ###reference_b42###, 43 ###reference_b43###, 44 ###reference_b44###, 10 ###reference_b10###, 45 ###reference_b45###, 46 ###reference_b46###, 47 ###reference_b47###]. While offering biological insights and improved diagnosis certainty, the integration of these sequences into routine MRI examinations exacerbates the already lengthy overall scan times. Here, we describe the development of a deep-learning-based framework that can rapidly decode the human brain tissue response to RF excitation. The system generates a variety of on-demand image contrasts in silico that faithfully recapitulate their physical in-vivo counterparts (hence, termed a deep MRI on a chip).\nThe target contrasts requested from DeepMonC were associated with RF parameters extrapolated beyond the range of the training parameters, thereby representing a highly challenging task (Supporting information Fig. 1). Nevertheless, an excellent agreement between the generated and ground-truth image-sets was obtained (Fig. 2 and Table 1). The dependence of DeepMonC on the particular set of calibration images used and the desired output contrast was assessed on 18 different input-output pairs (Supporting Information Figure S3). Despite some variability, a satisfactory reconstruction was obtained in all cases (SSIM 0.96, PSNR 36, NRMSE 2%). Importantly, DeepMonC was able to overcome unknown initial conditions, as all calibration image-set combinations but one (image indices 1-6, Supporting Information Figure S3) were acquired following an incomplete magnetization recovery.\nThe core module architecture was designed for image translation of m-to-m size (Fig. 1a, illustrated for m=6). Nevertheless, it can be recursively applied (by using the model\u2019s output as the next input for generating another set of m images), and maintains an attractive performance, for up to m-to-3m translations (Supporting Information Movies M3 and M4). Although some errors were visually observed when attempting m-to-4m translation (in the last m=6 images), additional training with longer acquisition protocols could further improve this performance.\nThe excellent on-demand contrast generation performance exhibited by DeepMonC (Table 1) can be attributed to two key factors: (1) The introduction of explicit (and varied) acquisition parameter descriptors into the training procedure; this information is traditionally overlooked and hidden from MR-related neural networks[48 ###reference_b48###, 49 ###reference_b49###]. (2) The incorporation of visual transformers as the learning strategy. These enable the system to address the double sequential nature of the image data obtained from both the 3D spatial domain and the temporal (spin-history) domain. Visual transformers, with their effective attention mechanism, are not only capable of capturing long-range data dependencies but can also understand global image context, alleviate noise, and adapt to various translational tasks[37 ###reference_b37###, 50 ###reference_b50###].\nContrast-weighted imaging is the prevalent acquisition mode in clinical MRI. However, it has become increasingly clear that quantitative extraction of biophysical tissue parameters may offer improved sensitivity, specificity, and reproducibility[51 ###reference_b51###, 52 ###reference_b52###, 53 ###reference_b53###]. 
By harnessing the decoded brain tissue response to RF excitation, the DeepMonC framework was further leveraged to simultaneously map six quantitative parameters (Fig. 3-5), spanning three different biophysical realms, namely water relaxation, semisolid macromolecule proton exchange, and magnetic field homogeneity. The results provide an excellent agreement with the ground truth (Fig. 3-5d, Supporting Information Fig. S2), as well as an inherent ability to mitigate artifacts (white arrows in Fig. 3 and Fig. 5). Specifically, the B0 and B1 maps generated by DeepMonC exhibit better homogeneity than the reference ground truth. This thereby represents a practical explanation for the successful reduction of hardware/in-homogeneity related noises around the sinuses/eyes and at the air-tissue interfaces.\nImportantly, the rich whole-brain information provided by DeepMonc was reconstructed in only 6.8 seconds, following a non-steady state rapid acquisition using a single pulse sequence of 28.2 s. This represents a 94% acceleration compared to the state of the art ground-truth reference (acquired in 8.5 min, Fig. 1b). Interestingly, the quantification task results were even less sensitive to the particular pulse sequence used for acquiring the calibration images (Supporting Information Figure S4) than the on-demand contrast generation task (Supporting Information Figure S3).\nThe success of the quantification module is directly associated with the reliance on DeepMonC\u2019s core pre-training, which generates a comprehensive understanding of the RF-to-tissue relations. This is supported by the statistically significant higher performance obtained by the quantification module compared to the vanilla use of DeepMonC (untrained) architecture (Fig. 3-5 panels c,d, n=68 image slices, p0.0001).\nThe generalization of DeepMonC predictions was assessed on three datasets, each representing a different challenge. Overall, there proved to be compelling evidence for generalization, with a faithful representation of the the RF-to-tissue interface, with a satisfactory image reconstruction obtained in all cases. It should however be noted that, as expected, the parameter quantification of the unseen subject scanned at the same site and scanner used for training, yielded the best results. The cancer patient scanned at a different image site yielded the next best performance (only healthy volunteers were used for training), followed by the healthy subject scanned using a different scanner model and hardware at a different imaging site (Fig. 3-5d, Supporting Information Fig. S4). When assessing the on-demand contrast generation task performance, the differences between the various test-sets were much less discernible, with mostly subtle variations in the reconstruction metrics (Table 1). In the future, additional training using subjects scanned on other scanner models and across various pathologies could further boost the framework performance.\nSaturation transfer (encompassing both CEST and semisolid MT) is the dominant biophysical mechanism involved in the on-demand contrast generation task. This was chosen as a representative emerging imaging approach that is the focus of much interest from across the medical community[10 ###reference_b10###, 54 ###reference_b54###, 55 ###reference_b55###, 56 ###reference_b56###, 57 ###reference_b57###, 58 ###reference_b58###, 59 ###reference_b59###]. 
Nevertheless, the same conceptual framework could potentially be applied for generating on-demand diffusion, perfusion, relaxation, susceptibility, and other contrast-weighted images, given that a per-subject rapidly acquired data from the same general mechanism of interest is provided, alongside the matching acquisition parameters. Notably, a single pulse sequence may represent several biophysical properties, similarly to the way that ST-contrast weighted images are affected by the T1, T2, B0, and B1. Furthermore, while this work was focused on brain imaging, we expect that the same framework could be similarly utilized in other organ/tissues (after proper training). Finally, the ground-truth reference used for the quantification task was obtained via standard water proton relaxometry, magnetic field-mapping, and semisolid MT MRF. However, the same quantification module could seamlessly be trained using alternative reference modalities, such as 31P-imaging (for reconstructing intracellular pH maps)[60 ###reference_b60###], or even non-MR images (such as Ki-67 proliferation index histological images), thereby creating new cross-modality insights and opportunities.\nIn summary, we have developed and validated a computational framework that can learn the intricate mapping between the magnetic resonance RF irradiation domain and the subject-specific image domain. The method is biophysical-model-free and thus, unbiased by pre-existing parameter restrictions or assumptions. Given its ultra-fast on-demand contrast generation ability, we expect this approach to play an important role in the efforts to accelerate clinical MRI."
34
+ },
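For reference, the three similarity measures used throughout the Results and Discussion (SSIM, PSNR, NRMSE) can be computed with scikit-image; normalizing the RMSE by the ground-truth data range is one common convention and is an assumption here, not a detail stated by the paper.

```python
# Sketch of the reported image-similarity metrics.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(pred: np.ndarray, gt: np.ndarray):
    """Return (SSIM, PSNR, NRMSE) for one predicted/ground-truth image pair."""
    data_range = float(gt.max() - gt.min())
    ssim = structural_similarity(gt, pred, data_range=data_range)
    psnr = peak_signal_noise_ratio(gt, pred, data_range=data_range)
    nrmse = float(np.sqrt(np.mean((pred - gt) ** 2)) / data_range)
    return ssim, psnr, nrmse
```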
35
+ {
36
+ "section_id": "6",
37
+ "parent_section_id": null,
38
+ "section_name": "Acknowledgments",
39
+ "text": "The authors thank Tony St\u00f6cker and R\u00fcdiger Stirnberg for their help with the 3D EPI readout. This project was funded by the European Union (ERC, BabyMagnet, project no. 101115639). Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them."
40
+ },
41
+ {
42
+ "section_id": "7",
43
+ "parent_section_id": null,
44
+ "section_name": "Author contributions",
45
+ "text": "Conceptualization: D.N., O.P., Deep learning methodology: D.N., O.P., MRI acquisition and reconstruction: M.Z., O.P, Writing, reviewing, and editing: D.N., M.Z., O.P., Supervision: O.P."
46
+ },
47
+ {
48
+ "section_id": "8",
49
+ "parent_section_id": null,
50
+ "section_name": "Competing interests",
51
+ "text": "D.N. and O.P applied for a patent related to the proposed framework."
52
+ },
53
+ {
54
+ "section_id": "9",
55
+ "parent_section_id": null,
56
+ "section_name": "Methods",
57
+ "text": ""
58
+ }
59
+ ],
60
+ "appendix": [],
61
+ "tables": {},
62
+ "image_paths": {
63
+ "1": {
64
+ "figure_path": "2408.08376v2_figure_1.png",
65
+ "caption": "Fig. 1: Schematic representation of the biophysical-model-free deep MRI on a chip (DeepMonC) framework. a. Automatic prediction of unseen molecular MRI contrast weighted images. A multi-domain input is used, including a sequence of m non-steady-state MRI calibration images and an RF excitation parameter tensor. It includes the acquisition parameters associated with the calibration images (solid lines) and the on-demand acquisition parameters (dashed lines) for the desired image output (m new images shown at the top). Separate embeddings for the real image data and the physical RF properties are learned using a vision transformer and a fully connected layer, respectively. b. A quantification module for the simultaneous mapping of six tissue and scanner parameter maps, including the semi-solid proton volume fraction (fss) and exchange rate (kssw), water proton longitudinal (T1) and transverse (T2) relaxation, and static (B0) and transmit (B1) magnetic fields. This module exploits the multi-domain embedding learned by the core module, utilizing a transfer learning strategy.",
66
+ "url": "http://arxiv.org/html/2408.08376v2/extracted/5799226/fig1.jpg"
67
+ },
68
+ "2": {
69
+ "figure_path": "2408.08376v2_figure_2.png",
70
+ "caption": "Fig. 2: Automatic prediction of unseen molecular MRI contrast weighted images. A comparison between representative ground truth (a, c, e) and DeepMonC-predicted (b, d, f) molecular MRI contrast-weighted images in the human brain. (a, b) Semiolid MT-weighted images from an unseen subject. (c, d) Amide proton transfer CEST-weighted images from a brain tumor patient scanned at an unseen imaging site. (e, f) Semisolid MT-weighted images from an unseen subject scanned at an unseen imaging site with hardware that was different from that used for training.",
71
+ "url": "http://arxiv.org/html/2408.08376v2/extracted/5799226/fig2.jpg"
72
+ },
73
+ "3": {
74
+ "figure_path": "2408.08376v2_figure_3.png",
75
+ "caption": "Fig. 3: Quantitative reconstruction of six molecular MRI, scanner field, and water-proton relaxation quantitative maps from a new healthy human volunteer scanned at the same imaging site used for training. (a) Ground truth reference images obtained using conventional T1 and T2-mapping, WASABI, and semisolid MT MR-Fingerprinting (MRF) in 8.5 min. (b) The same parameter maps obtained using DeepMonC in merely 28.2 s (94% scan time acceleration). Note the reduced field inhomogeneity (as seen in the B0 and B1 predicted images), which explains the successful noise reduction in the output maps (white arrows). (c) Quantitative reconstruction using conventional supervised learning (RF tissue response pretraining excluded), utilizing the same raw input data used in (b) for comparison. (d) Statistical analysis of the SSIM, PSNR, and NRMSE performance measures, comparing the DeepMonC reconstructed parameter maps to reference ground truth (n = 69 brain image slices per group ). ****p<<<0.0001.",
76
+ "url": "http://arxiv.org/html/2408.08376v2/extracted/5799226/fig3.jpg"
77
+ },
78
+ "4": {
79
+ "figure_path": "2408.08376v2_figure_4.png",
80
+ "caption": "Fig. 4: Quantitative reconstruction of six molecular MRI, scanner field, and water-proton relaxation quantitative maps from a brain cancer patient scanned at a different imaging site compared to training. (a) Ground truth reference images obtained using conventional T1 and T2-mapping, WASABI, and semisolid MT MR-Fingerprinting (MRF) in 8.5 min. (b) The same parameter maps obtained using DeepMonC in merely 28.2 s (94% scan time acceleration). (c) Quantitative reconstruction using conventional supervised learning (RF tissue response pretraining excluded), utilizing the same raw input data used in (b) for comparison. (d). Statistical analysis of the SSIM, PSNR, and NRMSE performance measures, comparing the DeepMonC reconstructed parameter maps to reference ground truth (n = 68 brain image slices per group ). ****p<<<0.0001.",
81
+ "url": "http://arxiv.org/html/2408.08376v2/extracted/5799226/fig4.jpg"
82
+ },
83
+ "5": {
84
+ "figure_path": "2408.08376v2_figure_5.png",
85
+ "caption": "Fig. 5: Quantitative reconstruction of six molecular MRI, scanner field, and water-proton relaxation quantitative maps from a new healthy volunteer scanned at a different imaging site and different hardware compared to training. (a) Ground truth reference images obtained using conventional T1 and T2-mapping, WASABI, and semisolid MT MR-Fingerprinting (MRF) in 8.5 min. (b) The same parameter maps obtained using DeepMonC in merely 28.2 s (94% scan time acceleration). Note the reduced field inhomogeneity (as seen in the B0 and B1 predicted images), which explains the successful noise reduction in the output maps (white arrows). (c) Quantitative reconstruction using conventional supervised learning (RF tissue response pretraining excluded), utilizing the same raw input data used in (b) for comparison. (d) Statistical analysis of the SSIM, PSNR, and NRMSE performance measures, comparing the DeepMonC reconstructed parameter map to reference ground truth (n = 68 brain image slices per group ). ****p<<<0.0001.",
86
+ "url": "http://arxiv.org/html/2408.08376v2/extracted/5799226/fig5.jpg"
87
+ },
88
+ "6": {
89
+ "figure_path": "2408.08376v2_figure_6.png",
90
+ "caption": "Table 1: Performance analysis for on-demand generation of molecular contrast-weighted images, comparing the DeepMonC reconstructed output to the reference ground truth.\nSSIM - Structural similarity index measure; PSNR - peak signal-to-noise ratio; NRMSE - normalized mean-square error.",
91
+ "url": "http://arxiv.org/html/2408.08376v2/extracted/5799226/table.jpg"
92
+ }
93
+ },
94
+ "validation": true,
95
+ "references": [
96
+ {
97
+ "1": {
98
+ "title": "Value of mri in medicine: More than just another\ntest?",
99
+ "author": "van Beek, E. J. et al.",
100
+ "venue": "Journal of Magnetic Resonance Imaging\n49, e14\u2013e25\n(2019).",
101
+ "url": null
102
+ }
103
+ },
104
+ {
105
+ "2": {
106
+ "title": "Advances in mri methodology.",
107
+ "author": "Yousaf, T., Dervenoulas, G. &\nPolitis, M.",
108
+ "venue": "International review of neurobiology\n141, 31\u201376\n(2018).",
109
+ "url": null
110
+ }
111
+ },
112
+ {
113
+ "3": {
114
+ "title": "Handbook of MRI pulse sequences\n(Elsevier, 2004).",
115
+ "author": "Bernstein, M. A., King, K. F. &\nZhou, X. J.",
116
+ "venue": null,
117
+ "url": null
118
+ }
119
+ },
120
+ {
121
+ "4": {
122
+ "title": "Consensus recommendations for a standardized brain\ntumor imaging protocol in clinical trials.",
123
+ "author": "Ellingson, B. M. et al.",
124
+ "venue": "Neuro-oncology\n17, 1188\u20131198\n(2015).",
125
+ "url": null
126
+ }
127
+ },
128
+ {
129
+ "5": {
130
+ "title": "Consensus recommendations for a standardized brain\ntumor imaging protocol for clinical trials in brain metastases.",
131
+ "author": "Kaufmann, T. J. et al.",
132
+ "venue": "Neuro-oncology\n22, 757\u2013772\n(2020).",
133
+ "url": null
134
+ }
135
+ },
136
+ {
137
+ "6": {
138
+ "title": "Mri: time is dose\u2014and money and versatility.",
139
+ "author": "Edelstein, W. A., Mahesh, M. &\nCarrino, J. A.",
140
+ "venue": "Journal of the American College of Radiology:\nJACR 7, 650\n(2010).",
141
+ "url": null
142
+ }
143
+ },
144
+ {
145
+ "7": {
146
+ "title": "A deep learning\u2013based approach to reduce rescan and\nrecall rates in clinical mri examinations.",
147
+ "author": "Sreekumari, A. et al.",
148
+ "venue": "American Journal of Neuroradiology\n40, 217\u2013223\n(2019).",
149
+ "url": null
150
+ }
151
+ },
152
+ {
153
+ "8": {
154
+ "title": "Magnetization transfer contrast and chemical exchange\nsaturation transfer mri. features and analysis of the field-dependent\nsaturation spectrum.",
155
+ "author": "Van Zijl, P. C., Lam, W. W.,\nXu, J., Knutsson, L. &\nStanisz, G. J.",
156
+ "venue": "Neuroimage 168,\n222\u2013241 (2018).",
157
+ "url": null
158
+ }
159
+ },
160
+ {
161
+ "9": {
162
+ "title": "Nuts and bolts of chemical exchange saturation\ntransfer mri.",
163
+ "author": "Liu, G., Song, X., Chan,\nK. W. & McMahon, M. T.",
164
+ "venue": "NMR in Biomedicine\n26, 810\u2013828\n(2013).",
165
+ "url": null
166
+ }
167
+ },
168
+ {
169
+ "10": {
170
+ "title": "Clinical applications of chemical exchange saturation\ntransfer (cest) mri.",
171
+ "author": "Jones, K. M., Pollard, A. C. &\nPagel, M. D.",
172
+ "venue": "Journal of Magnetic Resonance Imaging\n47, 11\u201327\n(2018).",
173
+ "url": null
174
+ }
175
+ },
176
+ {
177
+ "11": {
178
+ "title": "Differentiation between glioma and radiation necrosis\nusing molecular magnetic resonance imaging of endogenous proteins and\npeptides.",
179
+ "author": "Zhou, J. et al.",
180
+ "venue": "Nature medicine\n17, 130\u2013134\n(2011).",
181
+ "url": null
182
+ }
183
+ },
184
+ {
185
+ "12": {
186
+ "title": "Apt-weighted mri: techniques, current neuro\napplications, and challenging issues.",
187
+ "author": "Zhou, J., Heo, H.-Y.,\nKnutsson, L., van Zijl, P. C. &\nJiang, S.",
188
+ "venue": "Journal of Magnetic Resonance Imaging\n50, 347\u2013364\n(2019).",
189
+ "url": null
190
+ }
191
+ },
192
+ {
193
+ "13": {
194
+ "title": "Using the amide proton signals of intracellular\nproteins and peptides to detect ph effects in mri.",
195
+ "author": "Zhou, J., Payen, J.-F.,\nWilson, D. A., Traystman, R. J. &\nVan Zijl, P. C.",
196
+ "venue": "Nature medicine\n9, 1085\u20131090\n(2003).",
197
+ "url": null
198
+ }
199
+ },
200
+ {
201
+ "14": {
202
+ "title": "Detection of the ischemic penumbra using ph-weighted\nmri.",
203
+ "author": "Sun, P. Z., Zhou, J., Sun,\nW., Huang, J. & Van Zijl, P. C.",
204
+ "venue": "Journal of Cerebral Blood Flow &\nMetabolism 27, 1129\u20131136\n(2007).",
205
+ "url": null
206
+ }
207
+ },
208
+ {
209
+ "15": {
210
+ "title": "Magnetic resonance imaging of glutamate.",
211
+ "author": "Cai, K. et al.",
212
+ "venue": "Nature medicine\n18, 302\u2013306\n(2012).",
213
+ "url": null
214
+ }
215
+ },
216
+ {
217
+ "16": {
218
+ "title": "Glutamate-weighted cest (glucest) imaging for mapping\nneurometabolism: An update on the state of the art and emerging findings from\nin vivo applications.",
219
+ "author": "Cember, A. T., Nanga, R. P. R. &\nReddy, R.",
220
+ "venue": "NMR in Biomedicine\n36, e4780 (2023).",
221
+ "url": null
222
+ }
223
+ },
224
+ {
225
+ "17": {
226
+ "title": "Cest mri for monitoring kidney diseases.",
227
+ "author": "Stabinska, J., Keupp, J. &\nMcMahon, M. T.",
228
+ "venue": "In Advanced Clinical MRI of the Kidney:\nMethods and Protocols, 345\u2013360\n(Springer, 2023).",
229
+ "url": null
230
+ }
231
+ },
232
+ {
233
+ "18": {
234
+ "title": "Noninvasive evaluation of renal ph homeostasis after\nischemia reperfusion injury by cest-mri.",
235
+ "author": "Longo, D. L., Cutrin, J. C.,\nMichelotti, F., Irrera, P. &\nAime, S.",
236
+ "venue": "NMR in Biomedicine\n30, e3720 (2017).",
237
+ "url": null
238
+ }
239
+ },
240
+ {
241
+ "19": {
242
+ "title": "Quantitative magnetic resonance imaging\n(Academic Press, 2020).",
243
+ "author": "Seiberlich, N. et al.",
244
+ "venue": null,
245
+ "url": null
246
+ }
247
+ },
248
+ {
249
+ "20": {
250
+ "title": "Magnetic resonance fingerprinting.",
251
+ "author": "Ma, D. et al.",
252
+ "venue": "Nature 495,\n187\u2013192 (2013).",
253
+ "url": null
254
+ }
255
+ },
256
+ {
257
+ "21": {
258
+ "title": "Mr fingerprinting for contrast agent\u2013free and\nquantitative characterization of focal liver lesions.",
259
+ "author": "Fujita, S. et al.",
260
+ "venue": "Radiology: Imaging Cancer\n5, e230036\n(2023).",
261
+ "url": null
262
+ }
263
+ },
264
+ {
265
+ "22": {
266
+ "title": "Magnetic resonance fingerprinting: a review of\nclinical applications.",
267
+ "author": "Gaur, S. et al.",
268
+ "venue": "Investigative Radiology\n58, 561\u2013577\n(2023).",
269
+ "url": null
270
+ }
271
+ },
272
+ {
273
+ "23": {
274
+ "title": "Mr fingerprinting deep reconstruction network\n(drone).",
275
+ "author": "Cohen, O., Zhu, B. &\nRosen, M. S.",
276
+ "venue": "Magnetic resonance in medicine\n80, 885\u2013894\n(2018).",
277
+ "url": null
278
+ }
279
+ },
280
+ {
281
+ "24": {
282
+ "title": "Magnetic resonance fingerprinting: The role of\nartificial intelligence.",
283
+ "author": "Fyrdahl, A., Seiberlich, N. &\nHamilton, J. I.",
284
+ "venue": "In Artificial Intelligence in\nCardiothoracic Imaging, 201\u2013215\n(Springer, 2022).",
285
+ "url": null
286
+ }
287
+ },
288
+ {
289
+ "25": {
290
+ "title": "Mr fingerprinting for semisolid magnetization\ntransfer and chemical exchange saturation transfer quantification.",
291
+ "author": "Perlman, O., Farrar, C. T. &\nHeo, H.-Y.",
292
+ "venue": "NMR in Biomedicine\n36, e4710 (2023).",
293
+ "url": null
294
+ }
295
+ },
296
+ {
297
+ "26": {
298
+ "title": "Quantitative imaging of apoptosis following oncolytic\nvirotherapy by magnetic resonance fingerprinting aided by deep learning.",
299
+ "author": "Perlman, O. et al.",
300
+ "venue": "Nature biomedical engineering\n6, 648\u2013657\n(2022).",
301
+ "url": null
302
+ }
303
+ },
304
+ {
305
+ "27": {
306
+ "title": "An end-to-end ai-based framework for automated\ndiscovery of rapid cest/mt mri acquisition protocols and molecular parameter\nquantification (autocest).",
307
+ "author": "Perlman, O., Zhu, B.,\nZaiss, M., Rosen, M. S. &\nFarrar, C. T.",
308
+ "venue": "Magnetic Resonance in Medicine\n87, 2792\u20132810\n(2022).",
309
+ "url": null
310
+ }
311
+ },
312
+ {
313
+ "28": {
314
+ "title": "Cest mr fingerprinting (cest-mrf) for brain tumor\nquantification using epi readout and deep learning reconstruction.",
315
+ "author": "Cohen, O. et al.",
316
+ "venue": "Magnetic resonance in medicine\n89, 233\u2013249\n(2023).",
317
+ "url": null
318
+ }
319
+ },
320
+ {
321
+ "29": {
322
+ "title": "Dynamic and rapid deep synthesis of chemical exchange\nsaturation transfer and semisolid magnetization transfer mri signals.",
323
+ "author": "Nagar, D., Vladimirov, N.,\nFarrar, C. T. & Perlman, O.",
324
+ "venue": "Scientific Reports\n13, 18291 (2023).",
325
+ "url": null
326
+ }
327
+ },
328
+ {
329
+ "30": {
330
+ "title": "Learning-based optimization of acquisition schedule\nfor magnetization transfer contrast mr fingerprinting.",
331
+ "author": "Kang, B., Kim, B., Park,\nH. & Heo, H.-Y.",
332
+ "venue": "NMR in Biomedicine\n35, e4662 (2022).",
333
+ "url": null
334
+ }
335
+ },
336
+ {
337
+ "31": {
338
+ "title": "Quantitative molecular imaging using deep magnetic\nresonance fingerprinting.",
339
+ "author": "Vladimirov, N. et al.",
340
+ "venue": "Protocol Exchange Preprint\n(2024).",
341
+ "url": null
342
+ }
343
+ },
344
+ {
345
+ "32": {
346
+ "title": "Accelerated and quantitative three-dimensional\nmolecular mri using a generative adversarial network.",
347
+ "author": "Weigand-Whittier, J. et al.",
348
+ "venue": "Magnetic Resonance in Medicine\n89, 1901\u20131914\n(2023).",
349
+ "url": null
350
+ }
351
+ },
352
+ {
353
+ "33": {
354
+ "title": "Quantifying amide proton exchange rate and\nconcentration in chemical exchange saturation transfer imaging of the human\nbrain.",
355
+ "author": "Heo, H.-Y. et al.",
356
+ "venue": "Neuroimage 189,\n202\u2013213 (2019).",
357
+ "url": null
358
+ }
359
+ },
360
+ {
361
+ "34": {
362
+ "title": "Measuring chemical exchange saturation transfer\nexchange rates in the human brain using a particle swarm optimisation\nalgorithm.",
363
+ "author": "Carradus, A. J., Bradley, J. M.,\nGowland, P. A. & Mougin, O. E.",
364
+ "venue": "NMR in Biomedicine\n36, e5001 (2023).",
365
+ "url": null
366
+ }
367
+ },
368
+ {
369
+ "35": {
370
+ "title": "A deep learning approach for magnetization transfer\ncontrast mr fingerprinting and chemical exchange saturation transfer\nimaging.",
371
+ "author": "Kim, B., Sch\u00e4r, M.,\nPark, H. & Heo, H.-Y.",
372
+ "venue": "Neuroimage 221,\n117165 (2020).",
373
+ "url": null
374
+ }
375
+ },
376
+ {
377
+ "36": {
378
+ "title": "Unetr: Transformers for 3d medical image\nsegmentation.",
379
+ "author": "Hatamizadeh, A. et al.",
380
+ "venue": "In Proceedings of the IEEE/CVF winter\nconference on applications of computer vision, 574\u2013584\n(2022).",
381
+ "url": null
382
+ }
383
+ },
384
+ {
385
+ "37": {
386
+ "title": "An image is worth 16x16 words: Transformers for image\nrecognition at scale.",
387
+ "author": "Dosovitskiy, A. et al.",
388
+ "venue": "arXiv preprint arXiv:2010.11929\n(2020).",
389
+ "url": null
390
+ }
391
+ },
392
+ {
393
+ "38": {
394
+ "title": "Correction of b1-inhomogeneities for\nrelaxation-compensated cest imaging at 7 t.",
395
+ "author": "Windschuh, J. et al.",
396
+ "venue": "NMR in biomedicine\n28, 529\u2013537\n(2015).",
397
+ "url": null
398
+ }
399
+ },
400
+ {
401
+ "39": {
402
+ "title": "A simple correction for b1 field errors in\nmagnetization transfer ratio measurements.",
403
+ "author": "Samson, R. S., Wheeler-Kingshott, C. A.,\nSymms, M. R., Tozer, D. J. &\nTofts, P. S.",
404
+ "venue": "Magnetic resonance imaging\n24, 255\u2013263\n(2006).",
405
+ "url": null
406
+ }
407
+ },
408
+ {
409
+ "40": {
410
+ "title": "Simultaneous mapping of water shift and b1\n(wasabi)\u2014application to field-inhomogeneity correction of cest mri data.",
411
+ "author": "Schuenke, P. et al.",
412
+ "venue": "Magnetic resonance in medicine\n77, 571\u2013580\n(2017).",
413
+ "url": null
414
+ }
415
+ },
416
+ {
417
+ "41": {
418
+ "title": "Trends in use of medical imaging in us health care\nsystems and in ontario, canada, 2000-2016.",
419
+ "author": "Smith-Bindman, R. et al.",
420
+ "venue": "Jama 322,\n843\u2013856 (2019).",
421
+ "url": null
422
+ }
423
+ },
424
+ {
425
+ "42": {
426
+ "title": "A new class of contrast agents for mri based on\nproton chemical exchange dependent saturation transfer (cest).",
427
+ "author": "Ward, K., Aletras, A. &\nBalaban, R. S.",
428
+ "venue": "Journal of magnetic resonance\n143, 79\u201387\n(2000).",
429
+ "url": null
430
+ }
431
+ },
432
+ {
433
+ "43": {
434
+ "title": "Clinical quantitative susceptibility mapping (qsm):\nbiometal imaging and its emerging roles in patient care.",
435
+ "author": "Wang, Y. et al.",
436
+ "venue": "Journal of magnetic resonance imaging\n46, 951\u2013971\n(2017).",
437
+ "url": null
438
+ }
439
+ },
440
+ {
441
+ "44": {
442
+ "title": "An overview of cest mri for non-mr physicists.",
443
+ "author": "Wu, B. et al.",
444
+ "venue": "EJNMMI physics\n3, 1\u201321 (2016).",
445
+ "url": null
446
+ }
447
+ },
448
+ {
449
+ "45": {
450
+ "title": "Chemical exchange saturation transfer (cest): what is\nin a name and what isn\u2019t?",
451
+ "author": "Van Zijl, P. C. & Yadav, N. N.",
452
+ "venue": "Magnetic resonance in medicine\n65, 927\u2013948\n(2011).",
453
+ "url": null
454
+ }
455
+ },
456
+ {
457
+ "46": {
458
+ "title": "Physics, techniques and review of neuroradiological\napplications of diffusion kurtosis imaging (dki).",
459
+ "author": "Marrale, M. et al.",
460
+ "venue": "Clinical neuroradiology\n26, 391\u2013403\n(2016).",
461
+ "url": null
462
+ }
463
+ },
464
+ {
465
+ "47": {
466
+ "title": "Validating the sensitivity of inhomogeneous\nmagnetization transfer (ihmt) mri to myelin with fluorescence microscopy.",
467
+ "author": "Duhamel, G. et al.",
468
+ "venue": "Neuroimage 199,\n289\u2013303 (2019).",
469
+ "url": null
470
+ }
471
+ },
472
+ {
473
+ "48": {
474
+ "title": "Deep learning for accelerated and robust mri\nreconstruction.",
475
+ "author": "Heckel, R., Jacob, M.,\nChaudhari, A., Perlman, O. &\nShimron, E.",
476
+ "venue": "Magnetic Resonance Materials in Physics,\nBiology and Medicine 1\u201334 (2024).",
477
+ "url": null
478
+ }
479
+ },
480
+ {
481
+ "49": {
482
+ "title": "Ai-based reconstruction for fast mri\u2014a systematic\nreview and meta-analysis.",
483
+ "author": "Chen, Y. et al.",
484
+ "venue": "Proceedings of the IEEE\n110, 224\u2013245\n(2022).",
485
+ "url": null
486
+ }
487
+ },
488
+ {
489
+ "50": {
490
+ "title": "Transformers in vision: A survey.",
491
+ "author": "Khan, S. et al.",
492
+ "venue": "ACM computing surveys (CSUR)\n54, 1\u201341 (2022).",
493
+ "url": null
494
+ }
495
+ },
496
+ {
497
+ "51": {
498
+ "title": "Three dimensional mrf obtains highly repeatable and\nreproducible multi-parametric estimations in the healthy human brain at 1.5 t\nand 3t.",
499
+ "author": "Buonincontri, G. et al.",
500
+ "venue": "Neuroimage 226,\n117573 (2021).",
501
+ "url": null
502
+ }
503
+ },
504
+ {
505
+ "52": {
506
+ "title": "Repeatability and reproducibility of 3d mr\nfingerprinting relaxometry measurements in normal breast tissue.",
507
+ "author": "Panda, A. et al.",
508
+ "venue": "Journal of Magnetic Resonance Imaging\n50, 1133\u20131143\n(2019).",
509
+ "url": null
510
+ }
511
+ },
512
+ {
513
+ "53": {
514
+ "title": "Quantitative MRI in cancer\n(Taylor & Francis, 2011).",
515
+ "author": "Yankeelov, T. E., Pickens, D. R. &\nPrice, R. R.",
516
+ "venue": null,
517
+ "url": null
518
+ }
519
+ },
520
+ {
521
+ "54": {
522
+ "title": "Emerging techniques in brain tumor imaging: what\nradiologists need to know.",
523
+ "author": "Kim, M. & Kim, H. S.",
524
+ "venue": "Korean journal of radiology\n17, 598\u2013619\n(2016).",
525
+ "url": null
526
+ }
527
+ },
528
+ {
529
+ "55": {
530
+ "title": "Review and consensus recommendations on clinical\napt-weighted imaging approaches at 3t: application to brain tumors.",
531
+ "author": "Zhou, J. et al.",
532
+ "venue": "Magnetic resonance in medicine\n88, 546\u2013574\n(2022).",
533
+ "url": null
534
+ }
535
+ },
536
+ {
537
+ "56": {
538
+ "title": "Chemical exchange saturation transfer mri: what\nneuro-oncology clinicians need to know.",
539
+ "author": "Jabehdar Maralani, P. et al.",
540
+ "venue": "Technology in Cancer Research & Treatment\n22, 15330338231208613\n(2023).",
541
+ "url": null
542
+ }
543
+ },
544
+ {
545
+ "57": {
546
+ "title": "Apt-weighted mri can be an early marker for\ndemyelination (2021).",
547
+ "author": "Van Zijl, P. C.",
548
+ "venue": null,
549
+ "url": null
550
+ }
551
+ },
552
+ {
553
+ "58": {
554
+ "title": "Metabolic brain imaging with glucosamine cest mri: in\nvivo characterization and first insights.",
555
+ "author": "Rivlin, M., Perlman, O. &\nNavon, G.",
556
+ "venue": "Scientific Reports\n13, 22030 (2023).",
557
+ "url": null
558
+ }
559
+ },
560
+ {
561
+ "59": {
562
+ "title": "Personalized and muscle-specific oxphos measurement\nwith integrated crcest mri and proton mr spectroscopy.",
563
+ "author": "Armbruster, R. R. et al.",
564
+ "venue": "Nature Communications\n15, 5387 (2024).",
565
+ "url": null
566
+ }
567
+ },
568
+ {
569
+ "60": {
570
+ "title": "Whole-brain intracellular ph mapping of gliomas using\nhigh-resolution 31p mr spectroscopic imaging at 7.0 t.",
571
+ "author": "Paech, D. et al.",
572
+ "venue": "Radiology: Imaging Cancer\n6, e220127\n(2023).",
573
+ "url": null
574
+ }
575
+ },
576
+ {
577
+ "61": {
578
+ "title": "Pypulseq: A python package for mri pulse sequence\ndesign.",
579
+ "author": "Ravi, K. S., Geethanath, S. &\nVaughan, J. T.",
580
+ "venue": "Journal of Open Source Software\n4, 1725 (2019).",
581
+ "url": null
582
+ }
583
+ },
584
+ {
585
+ "62": {
586
+ "title": "Pulseq: a rapid and hardware-independent pulse\nsequence prototyping framework.",
587
+ "author": "Layton, K. J. et al.",
588
+ "venue": "Magnetic resonance in medicine\n77, 1544\u20131552\n(2017).",
589
+ "url": null
590
+ }
591
+ },
592
+ {
593
+ "63": {
594
+ "title": "Pulseq-cest: towards multi-site multi-vendor\ncompatibility and reproducibility of cest experiments using an open-source\nsequence standard.",
595
+ "author": "Herz, K. et al.",
596
+ "venue": "Magnetic resonance in medicine\n86, 1845\u20131858\n(2021).",
597
+ "url": null
598
+ }
599
+ },
600
+ {
601
+ "64": {
602
+ "title": "Cest mr-fingerprinting: practical considerations and\ninsights for acquisition schedule design and improved reconstruction.",
603
+ "author": "Perlman, O. et al.",
604
+ "venue": "Magnetic resonance in medicine\n83, 462\u2013478\n(2020).",
605
+ "url": null
606
+ }
607
+ },
608
+ {
609
+ "65": {
610
+ "title": "Rapid and quantitative chemical exchange saturation\ntransfer (cest) imaging with magnetic resonance fingerprinting (mrf).",
611
+ "author": "Cohen, O., Huang, S.,\nMcMahon, M. T., Rosen, M. S. &\nFarrar, C. T.",
612
+ "venue": "Magnetic resonance in medicine\n80, 2449\u20132463\n(2018).",
613
+ "url": null
614
+ }
615
+ },
616
+ {
617
+ "66": {
618
+ "title": "Whole-brain snapshot cest imaging at 7 t using\n3d-epi.",
619
+ "author": "Akbey, S., Ehses, P.,\nStirnberg, R., Zaiss, M. &\nSt\u00f6cker, T.",
620
+ "venue": "Magnetic resonance in medicine\n82, 1741\u20131752\n(2019).",
621
+ "url": null
622
+ }
623
+ },
624
+ {
625
+ "67": {
626
+ "title": "Whole brain snapshot cest at 3t using 3d-epi: aiming\nfor speed, volume, and homogeneity.",
627
+ "author": "Mueller, S. et al.",
628
+ "venue": "Magnetic resonance in medicine\n84, 2469\u20132483\n(2020).",
629
+ "url": null
630
+ }
631
+ },
632
+ {
633
+ "68": {
634
+ "title": "Elastix: a toolbox for intensity-based medical image\nregistration.",
635
+ "author": "Klein, S., Staring, M.,\nMurphy, K., Viergever, M. A. &\nPluim, J. P.",
636
+ "venue": "IEEE transactions on medical imaging\n29, 196\u2013205\n(2009).",
637
+ "url": null
638
+ }
639
+ },
640
+ {
641
+ "69": {
642
+ "title": "Unified segmentation.",
643
+ "author": "Ashburner, J. & Friston, K. J.",
644
+ "venue": "neuroimage 26,\n839\u2013851 (2005).",
645
+ "url": null
646
+ }
647
+ },
648
+ {
649
+ "70": {
650
+ "title": "Scipy 1.0: fundamental algorithms for scientific\ncomputing in python.",
651
+ "author": "Virtanen, P. et al.",
652
+ "venue": "Nature methods\n17, 261\u2013272\n(2020).",
653
+ "url": null
654
+ }
655
+ }
656
+ ],
657
+ "url": "http://arxiv.org/html/2408.08376v2"
658
+ }
20240819/2408.08869v2.json ADDED
@@ -0,0 +1,512 @@
1
+ {
2
+ "title": "PEDAL: Enhancing Greedy Decoding with Large Language Models using Diverse Exemplars",
3
+ "abstract": "Self-ensembling techniques with diverse reasoning paths such as Self-Consistency have demonstrated remarkable performance gains in text generation with Large Language Models (LLMs). However, such techniques depend on the availability of an accurate answer extraction process to aggregate across multiple outputs. Moreover, they acquire higher inference cost, in comparison to Greedy Decoding, due to generation of relatively higher number of output tokens. Research has shown that the free form text outputs from Self-Consistency can be aggregated reliably using LLMs to produce the final output. Additionally, recent advancements in LLM inference have demonstrated that usage of diverse exemplars in prompts have the ability to induce diversity in the LLM outputs. Such proven techniques can be easily extended to self-ensembling based approaches to achieve enhanced results in text generation. In this paper, we introduce PEDAL (Prompts based on Exemplar Diversity Aggregated using LLMs), a hybrid self-ensembling approach, that combines the strengths of diverse exemplar based prompts and LLM based aggregation to achieve improvement in overall performance. On the publicly available SVAMP and ARC datasets, our experiments reveal that PEDAL can achieve better accuracy than Greedy Decoding based strategies with lower inference cost compared to Self Consistency based approaches.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Large Language Models (LLMs) Brown et al. (2020 ###reference_b3###); Raffel et al. (2020 ###reference_b28###); Chowdhery et al. (2022 ###reference_b11###); Touvron et al. (2023 ###reference_b33###) have been proven to show remarkable performance in a wide range of Natural Language Understanding tasks Zhao et al. (2023 ###reference_b41###) as a result of their outstanding reasoning capabilities Wei et al. (2022 ###reference_b35###); Zhou et al. (2022 ###reference_b43###).\nHowever, they still rely on carefully designed prompts to achieve optimal performance Khattab et al. (2023 ###reference_b20###); Fernando et al. (2023 ###reference_b14###). To realize further improvement in LLM reasoning, Wang et al. (2022 ###reference_b34###) proposed a self-ensembling technique termed \u201cSelf-Consistency\u201d(SC) where diverse \u201cChain-of-Thought\u201d(CoT) Wei et al. (2022 ###reference_b35###) reasoning paths were generated and then aggregated to construct an accurate and reliable response. This approach has been successfully extended to various use-cases such as LLM hallucination detection Chen et al. (2024 ###reference_b6###), medicineZhou et al. (2024 ###reference_b44###) and code generation Huang et al. (2024 ###reference_b17###).\nWhile SC based approaches can significantly improve the robustness of LLM outputs, one of their common drawbacks is that they perform best on a fixed answer set Wang et al. (2022 ###reference_b34###) or rely on training custom aggregation methods to measure consistency across multiple text outputs. To address this, Chen et al. (2023b ###reference_b8###) proposed \u201cUniversal Self Consistency\u201d(USC), an extension of SC, that aggregated the text outputs by re-invoking the LLM. Essentially, USC prompted the LLM to select the most consistent response among the different candidate answers generated by SC and demonstrated that it can achieve improved performance. However, this still leaves us with another drawback of SC which is the cost involved in generating the outputs. Concretely, SC involves generating long and diverse reasoning paths which results in a higher number of output tokens compared to Greedy Decoding based approaches. The cost of output token generation with LLMs is typically more than input token processing due to the difference in the number of forward passes Shazeer (2019 ###reference_b31###); Chng (2024 ###reference_b10###) resulting in a higher inference cost with SC.\nLi et al. (2023b ###reference_b22###) experimented with usage of diverse exemplars in the LLM prompts and combined them with diverse reasoning paths in SC to achieve more accurate results in text generation. We observe that if we leverage diverse exemplars with Greedy Decoding for text generation and aggregate the responses as in USC, we achieve better performance than traditional Greedy Decoding in terms of accuracy while also achieving lower cost of inference in comparison to SC based approaches.\nIn this paper, we present a hybrid self-ensembling approach, PEDAL(Prompts based on Exemplar Diversity Aggregated using an LLM), that offers a trade-off between the Greedy Decoding and SC in terms of accuracy and cost efficiency. We leverage diverse exemplars in LLM prompts to generate multiple candidate responses using Greedy Decoding and then aggregate them using an LLM to generate the final response. 
On two publicly available datasets, we demonstrate that PEDAL achieves better accuracy than Greedy Decoding based strategies and offers lower cost in inference compared to SC based strategies.\nRest of the paper is organized as follows: In\nSection 2 ###reference_###, we describe previous work for solving similar problems. Section 3 ###reference_### explains our proposed strategy in detail followed by Section 4 ###reference_### where we describe the data and the experiment settings to validate PEDAL. We then present our results and analyses in Section 5 ###reference_###. Finally, in Section 6 ###reference_###, we summarize our findings and discuss potential future work."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related Work",
15
+ "text": "LLMs have been widely studied and applied in a variety of tasks including code generation Zheng et al. (2024 ###reference_b42###), finance Li et al. (2024 ###reference_b23###), law Yu et al. (2022 ###reference_b39###) and so on. However, none of the LLMs seem to consistently outperform the rest of the models across all tasks Jiang et al. (2023 ###reference_b19###). This led to exploring ensembling approaches with LLMs. Research focused on Prompt Chaining Chase (2022 ###reference_b5###), Fusion Li et al. (2023a ###reference_b21###), Mixture of Experts Cai et al. (2024 ###reference_b4###) and many more have shown promising results in combining LLMs to enhance the overall performance."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Self Ensembling Strategies",
21
+ "text": "Long (2023 ###reference_b24###); Yao et al. (2023 ###reference_b38###) generalized CoT to organize language model generated\n\u201cthoughts\u201d into a tree structure for solution\nsearch. However, similar to Wang et al. (2022 ###reference_b34###), they rely on custom aggregation methods to construct the final output. Chen et al. (2023b ###reference_b8###) addressed this issue by leveraging LLMs to perform majority consensus based aggregation without any specific model fine-tuning. In our work, we leverage a similar strategy to aggregate multiple candidates with a focus on the impact of using diverse LLM prompts as opposed to diverse reasoning paths."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Prompt Ensembling Strategies",
27
+ "text": "With the advent of LLMs, lot of research focused on developing effective prompting techniques Bach et al. (2022 ###reference_b2###); Lu et al. (2022 ###reference_b25###) that have been extended by multiple prompt ensembling techniques Zhang et al. (2023 ###reference_b40###); Pitis et al. (2023 ###reference_b27###) to achieve further improvement. Singh et al. (2023 ###reference_b32###) built a decision tree of prompts that links multiple LM calls to solve a task. Arora et al. (2022 ###reference_b1###) used multiple prompt templates to reformat few-shot example inputs into an open ended question-answering format and then leverage Weak Supervision Ratner et al. (2017 ###reference_b29###) to aggregate the LLM predictions. Hou et al. (2023 ###reference_b16###) applied AdaBoost Schapire (2013 ###reference_b30###) algorithm over a pre-defined prompt set for text classification by pairing prompts with the corresponding output distribution to construct a large pool of weak learners. Li et al. (2023b ###reference_b22###) enhanced SC with diverse prompts by randomly selecting different exemplars for prompt construction, followed by sampling reasoning paths for each such prompt and then scoring the quality of each reasoning path using a custom trained model. While our work also leverages a similar prompt construction strategy, we aggregate the predictions without relying on explicitly training a task-specific model. Additionally, we focus on leveraging such prompt based strategies to reduce LLM inference cost rather than enhancing SC based approaches.\n###figure_1###"
28
+ },
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "LLM Inference Cost",
33
+ "text": "To solve the problem of inference cost, researchers have commonly explored model compression techniques Zhu et al. (2024 ###reference_b45###) such as model quantization Jacob et al. (2018 ###reference_b18###), model pruning Cheng et al. (2024 ###reference_b9###) and model distillation Gou et al. (2021 ###reference_b15###) aimed at reducing the size of the model without hurting the performance significantly. Shazeer (2019 ###reference_b31###) proposed sharing keys and values across all of the different attention heads in the transformer architecture, thus, reducing the memory bandwidth requirements of incremental decoding. Wu et al. (2024 ###reference_b36###) explored decoding multiple successive tokens simultaneously in a single forward pass to reduce the inference time. FrugalGPT Chen et al. (2023a ###reference_b7###) proposed a cascade of LMs that stops when an intermediate output is considered reliable, resulting in better computational efficiency. In our work, we focus on reducing the number of output tokens during LLM inference in comparison to SC while achieving better accuracy than Greedy Decoding."
34
+ },
35
+ {
36
+ "section_id": "3",
37
+ "parent_section_id": null,
38
+ "section_name": "Methodology",
39
+ "text": "Figure 1 ###reference_### shows the high level overview of our proposed system. The LLM generates multiple candidate responses using Greedy Decoding with prompts based on diverse exemplars. The candidate responses are then aggregated using the same LLM to generate the final output."
40
+ },
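As a reading aid for the pipeline above, here is a minimal sketch of the PEDAL loop in Python. The `generate_greedy` and `aggregate` callables standing in for the two LLM calls are hypothetical placeholders, and the prompt template is an assumption; this illustrates Figure 1, not the authors' implementation.

```python
# Minimal sketch of the PEDAL pipeline (Section 3); helper callables are
# hypothetical stand-ins for the two LLM calls.
import random
from typing import Callable, List

def pedal(question: str,
          exemplar_pool: List[str],
          n_prompts: int,
          k_exemplars: int,
          generate_greedy: Callable[[str], str],
          aggregate: Callable[[str, List[str]], str]) -> str:
    """One greedy candidate per diverse prompt, then LLM-based aggregation."""
    candidates = []
    for seed in range(n_prompts):
        rng = random.Random(seed)  # a different seed per prompt
        exemplars = rng.sample(exemplar_pool, k_exemplars)
        prompt = "\n\n".join(exemplars) + "\n\nQ: " + question + "\nA:"
        candidates.append(generate_greedy(prompt))
    return aggregate(question, candidates)
```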
41
+ {
42
+ "section_id": "3.1",
43
+ "parent_section_id": "3",
44
+ "section_name": "Prompts with Diverse Exemplars",
45
+ "text": "Traditional CoT based approaches rely on a single prompt comprised of a fixed set of exemplars. Li et al. (2023b ###reference_b22###) showed that constructing multiple prompts, by modifying the exemplars chosen for the purpose of In-Context-Learning (ICL), further enhances the reasoning capability of language models. On similar lines, we construct multiple LLM prompts by randomly sampling the exemplars for ICL multiple times using different seed settings. For each such LLM prompt, we generate a candidate response using Greedy Decoding."
46
+ },
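A sketch of this diverse-prompt construction follows. The exemplar pool and prompt template below are illustrative assumptions (the paper's actual exemplars are not reproduced here); the seed-per-prompt sampling mirrors the description above.

```python
# Sketch of diverse exemplar prompt construction (Section 3.1); the pool and
# template are illustrative assumptions.
import random

EXEMPLAR_POOL = [
    "Q: Tom has 3 apples and buys 2 more. How many apples now?\nA: 5",
    "Q: A box holds 12 eggs. How many eggs in 4 boxes?\nA: 48",
    "Q: Sara read 15 pages, then 9 more. How many pages in total?\nA: 24",
    "Q: 7 birds sit on a wire and 3 fly away. How many remain?\nA: 4",
]

def build_prompts(question, n_prompts=3, k=3):
    """Return one prompt per seed, each with a different exemplar subset."""
    prompts = []
    for seed in range(n_prompts):
        exemplars = random.Random(seed).sample(EXEMPLAR_POOL, k)
        prompts.append("\n\n".join(exemplars) + "\n\nQ: " + question + "\nA:")
    return prompts

print(build_prompts("Ben has 10 marbles and loses 4. How many are left?")[0])
```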
47
+ {
48
+ "section_id": "3.2",
49
+ "parent_section_id": "3",
50
+ "section_name": "LLM-based Aggregation",
51
+ "text": "USC Chen et al. (2023b ###reference_b8###) that has been shown to accurately select the most consistent response among multiple SC responses using majority consensus. We follow USC and extract the final response from multiple candidate responses accordingly."
52
+ },
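For concreteness, a sketch of the USC-style aggregation prompt: the same LLM is re-prompted to pick the most consistent candidate. The wording below is an assumption, not the exact template of Chen et al. (2023b).

```python
# Sketch of LLM-based aggregation (Section 3.2); prompt wording is assumed.
def build_aggregation_prompt(question, candidates):
    numbered = "\n".join(
        "Response {}: {}".format(i + 1, c) for i, c in enumerate(candidates)
    )
    return (
        "Question: {}\n\n{}\n\n"
        "Select the most consistent response based on majority consensus "
        "among the responses above. Answer with the response number only."
    ).format(question, numbered)

print(build_aggregation_prompt("What is 2 + 3?", ["5", "5", "6"]))
```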
53
+ {
54
+ "section_id": "4",
55
+ "parent_section_id": null,
56
+ "section_name": "Experiments",
57
+ "text": ""
58
+ },
59
+ {
60
+ "section_id": "4.1",
61
+ "parent_section_id": "4",
62
+ "section_name": "Dataset",
63
+ "text": "We consider two publicly available datasets for the purpose of our experiments -\nSVAMP Patel et al. (2021 ###reference_b26###) Comprises of elementary-level Math Word Problems. Each problem consists of a short natural language narrative that describes a state of the world and poses a question about some unknown quantities.\nAI2 Reasoning Challenge (ARC) Clark et al. (2018 ###reference_b12###) is a multiple-choice question-answering dataset, containing questions from science exams from grade 3 to grade 9 and is further split in two partitions - \u2018ARC-Easy\u2019 and \u2018ARC-Challenge\u2019 where \u2018ARC-Challenge\u2019 partition contains relatively more difficult questions that require reasoning\nWe report results on the validation split of each dataset. We restrict the ARC dataset to \u2018ARC-Challenge\u2019 only and work with 30% of the data sampled at random. Table 1 ###reference_### captures the corresponding details of the validation datasets considered for the experiments in the paper.\n###table_1###"
64
+ },
65
+ {
66
+ "section_id": "4.2",
67
+ "parent_section_id": "4",
68
+ "section_name": "Baseline Strategies",
69
+ "text": "To benchmark our approach, PEDAL, we include the following baselines\nGreedy Decoding - We run the LLM to select the token with the highest probability at each step to generate the final output.\nUSC - We run SC with CoT prompting and select the most consistent answer among all candidate responses using the same LLM.\nUnified Diverse Exemplars - To understand the impact of multiple candidate responses generated in PEDAL using diverse prompts, we combine all such diverse exemplars directly into a single ICL prompt and run Greedy Decoding. We refer to this baseline as \u201cUnified Diverse Exemplars\u201d (UDE)."
70
+ },
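The Greedy Decoding baseline corresponds to sampling-free generation. A sketch using Hugging Face transformers, where `do_sample=False` selects the highest-probability token at each step; the prompt and `max_new_tokens` are illustrative choices, not necessarily the paper's exact settings.

```python
# Sketch of the Greedy Decoding baseline with Hugging Face transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2-7B-Instruct"  # one of the two models the paper uses
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

inputs = tokenizer("Q: What is 2 + 3?\nA:", return_tensors="pt").to(model.device)
output = model.generate(**inputs, do_sample=False, max_new_tokens=64)  # greedy
print(tokenizer.decode(output[0], skip_special_tokens=True))
```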
71
+ {
72
+ "section_id": "4.3",
73
+ "parent_section_id": "4",
74
+ "section_name": "Experiment Setting",
75
+ "text": "Each of the strategies were run using Qwen2-7B-Instruct Yang et al. (2024 ###reference_b37###) and Llama-3-8B-Instruct Touvron et al. (2023 ###reference_b33###). We measure the performance using accuracy and the number of output tokens. For purposes of reporting, we also share the number of input tokens consumed by the strategies. The LLMs were run using 4-bit quantization Dettmers et al. (2023 ###reference_b13###). Each experiment is run under three random seed settings for reproducibility. We pick three exemplars per experiment for the ICL prompt construction with each dataset. For each experiment, USC is run to generate three intermediate outputs and PEDAL is run with three diverse input prompts.\n###table_2### ###table_3###"
76
+ },
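The 4-bit quantization referenced in the experiment setting (Dettmers et al., 2023) is commonly done via transformers with bitsandbytes; a sketch follows. The compute dtype and model name are assumptions, since the paper does not state its exact quantization configuration.

```python
# Sketch of 4-bit model loading (Section 4.3); settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # store weights in 4 bits
    bnb_4bit_compute_dtype=torch.float16,  # assumed compute dtype
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    quantization_config=quant_config,
    device_map="auto",
)
```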
77
+ {
78
+ "section_id": "5",
79
+ "parent_section_id": null,
80
+ "section_name": "Results and Analysis",
81
+ "text": "Table 2 ###reference_### and Table 3 ###reference_### show the performance metrics for different strategies using SVAMP dataset. Similarly, Table 4 ###reference_### and Table 5 ###reference_### capture the performance metrics for the ARC dataset. We observe that our proposed approach consistently performs better than Greedy Decoding in terms of accuracy and outperforms USC in terms of the number of output tokens."
82
+ },
83
+ {
84
+ "section_id": "5.1",
85
+ "parent_section_id": "5",
86
+ "section_name": "Arithmetic Reasoning",
87
+ "text": "As shown in Table 2 ###reference_###, PEDAL displays improvement over Greedy Decoding on the SVAMP dataset. With Qwen2, PEDAL achieves an average accuracy of 77.89% while Greedy Decoding achieves an average accuracy of 76% implying a 1.89% improvement. PEDAL also outperforms UDE which achieves an accuracy of 75.67%. USC achieves the accuracy of 80.33%. Similarly, with Llama3, we observe that PEDAL achieves an average accuracy of 74.11% while Greedy Decoding achieves a score of 70.22% resulting in 3.89% improvement. However, with Llama3, we observe that USC achieves an accuracy of 72.99% which is lesser than PEDAL while UDE achieves an accuracy 70.67% marginally outperforming Greedy Decoding.\nAs shown in Table 3 ###reference_###, with Qwen2, USC processes approximately 903 input tokens and 503 output tokens while PEDAL processes 1,343 input tokens with 192 output tokens making our approach evidently more cost efficient. With Llama3, USC processes an average of 694 input tokens and 924 output tokens while PEDAL processes 1,262 input tokens and 198 output tokens. While USC relies on lesser input tokens than PEDAL, the cost of output tokens with USC is more than 4 times the output token cost with PEDAL making our approach more cost efficient."
88
+ },
89
+ {
90
+ "section_id": "5.2",
91
+ "parent_section_id": "5",
92
+ "section_name": "Multiple-Choice Question Answering",
93
+ "text": "As shown in Table 4 ###reference_###, the strategies show a similar relationship with experiments run on the ARC dataset. With Qwen2, PEDAL achieves a marginal improvement of 0.39% over Greedy Decoding with an average accuracy of 83.77% while Greedy Decoding has an average accuracy of 83.38%. UDE outperforms PEDAL with an accuracy of 84.06% while USC still achieves the best performance with an accuracy of 84.35%. With Llama-3, PEDAL shows a 2.03% improvement with a score of 78.55% and greedy decoding achieves 76.52%. UDE achieves an accuracy of 76.52% matching the performance of Greedy Decoding. Surprisingly, USC achieves an accuracy of 71.88% which is relatively the least among the strategies. With USC, the main goal of the paper is to benchmark the proposed approach in terms of token count. To prevent diverging from the primary focus area, we leave deeper analysis of this behaviour to future work.\nAs shown in Table 5 ###reference_###, with Qwen2, our approach outperforms USC where USC processes roughly 1,154 input tokens and 669 output tokens on an average while PEDAL processes 1,180 input tokens with 100 output tokens. With Llama3, USC processes 1,073 input tokens and 929 output tokens while PEDAL processes 1,186 input tokens and 197 output tokens. Our approach is the better choice in terms of the number of output tokens processed by the LLM."
94
+ },
95
+ {
96
+ "section_id": "5.3",
97
+ "parent_section_id": "5",
98
+ "section_name": "Comparison to CoT",
99
+ "text": "Similar to PEDAL, CoT has been shown to be more accurate than Greedy Decoding and less expensive in terms of inference compared to SC. Based on pre-liminary interpolation of the number of output tokens using Table 3 ###reference_### and Table 5 ###reference_###, we compare the number of output tokens consumed in a single intermediate output in SC (equivalent to CoT) with the number of output tokens in PEDAL. With Llama3, we observe that PEDAL would be more cost efficient for both datasets. With Qwen2, we observe that PEDAL would be more cost efficient for the ARC dataset but may prove to be more expensive for the SVAMP dataset in comparison to CoT. While PEDAL seems to be more reliably consistent, it would be interesting to further investigate and arrive at definitive conclusions. We intend to evaluate the merits and drawbacks of both approaches in a practical setting in future work."
100
+ },
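A worked version of that interpolation, using the averaged output-token counts from Table 3 (SVAMP) and Table 5 (ARC): one CoT path is approximated as USC's output tokens divided by its three sampled reasoning paths, then compared against PEDAL's output tokens.

```python
# Worked interpolation from Section 5.3 using Table 3 and Table 5 averages.
usc_output = {("Qwen2", "SVAMP"): 502.75, ("Qwen2", "ARC"): 668.71,
              ("Llama3", "SVAMP"): 923.56, ("Llama3", "ARC"): 928.10}
pedal_output = {("Qwen2", "SVAMP"): 191.99, ("Qwen2", "ARC"): 99.47,
                ("Llama3", "SVAMP"): 197.72, ("Llama3", "ARC"): 196.83}

for key in sorted(usc_output):
    cot_est = usc_output[key] / 3  # one of USC's three intermediate outputs
    cheaper = "PEDAL" if pedal_output[key] < cot_est else "CoT"
    print("%s/%s: CoT ~%.0f vs PEDAL %.0f -> %s cheaper"
          % (key[0], key[1], cot_est, pedal_output[key], cheaper))
```

Running this reproduces the text's conclusion: PEDAL is cheaper in three of the four settings, while CoT edges it out for Qwen2 on SVAMP (~168 vs 192 output tokens).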
101
+ {
102
+ "section_id": "5.4",
103
+ "parent_section_id": "5",
104
+ "section_name": "Impact of Number of Diverse Prompts",
105
+ "text": "We re-run the experiments for both datasets with our best performing model, Qwen2, by varying the number of prompts to study how it affects the performance. As shown in Table 6 ###reference_###, we additionally run the experiments for two and four diverse prompts under three seed settings. We observe slight improvements as we increase the number of prompts with the SVAMP dataset. However, we do not observe any such specific pattern with the ARC dataset."
106
+ },
107
+ {
108
+ "section_id": "6",
109
+ "parent_section_id": null,
110
+ "section_name": "Conclusion",
111
+ "text": "In this paper, we explored self-ensembling with LLMs using diverse exemplars with LLM based output aggregation. We observed that this combination can perform better than Greedy Decoding in terms of accuracy and achieve better cost efficiency than SC based methods. However, we restricted the experiments to small datasets that allowed benchmarking approaches using exact match without additional manual annotation efforts. In future work, we plan to explore possibilities on extending such ensembling strategies to a wider range of problem settings involving free-form text generation to further deep dive into strengths and weaknesses of our proposed system."
112
+ }
113
+ ],
114
+ "appendix": [],
115
+ "tables": {
116
+ "1": {
117
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T1.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.1\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.1.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T1.1.1.1.1.1\">\n<span class=\"ltx_p\" id=\"S4.T1.1.1.1.1.1.1\" style=\"width:71.1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.1.1.1.1\">Dataset Name</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.1.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T1.1.1.1.2.1\">\n<span class=\"ltx_p\" id=\"S4.T1.1.1.1.2.1.1\" style=\"width:113.8pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.2.1.1.1\">Number of Validation Samples</span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.2.2\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.2.2.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T1.1.2.2.1.1\">\n<span class=\"ltx_p\" id=\"S4.T1.1.2.2.1.1.1\" style=\"width:71.1pt;\">SVAMP</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T1.1.2.2.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T1.1.2.2.2.1\">\n<span class=\"ltx_p\" id=\"S4.T1.1.2.2.2.1.1\" style=\"width:113.8pt;\">300</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.3.3\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.3.3.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T1.1.3.3.1.1\">\n<span class=\"ltx_p\" id=\"S4.T1.1.3.3.1.1.1\" style=\"width:71.1pt;\">ARC</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.1.3.3.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T1.1.3.3.2.1\">\n<span class=\"ltx_p\" id=\"S4.T1.1.3.3.2.1.1\" style=\"width:113.8pt;\">345</span>\n</span>\n</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Validation dataset size for SVAMP and ARC datasets</figcaption>\n</figure>",
118
+ "capture": "Table 1: Validation dataset size for SVAMP and ARC datasets"
119
+ },
120
+ "2": {
121
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.8\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.8.9.1\">\n<th class=\"ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.8.9.1.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.8.9.1.1.1\">\n<span class=\"ltx_p\" id=\"S4.T2.8.9.1.1.1.1\" style=\"width:34.1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.8.9.1.1.1.1.1\">Model</span></span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T2.8.9.1.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.8.9.1.2.1\">\n<span class=\"ltx_p\" id=\"S4.T2.8.9.1.2.1.1\" style=\"width:71.1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.8.9.1.2.1.1.1\">Approach</span></span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T2.8.9.1.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.8.9.1.3.1\">\n<span class=\"ltx_p\" id=\"S4.T2.8.9.1.3.1.1\" style=\"width:71.1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.8.9.1.3.1.1.1\">Accuracy</span></span>\n</span>\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.2\" rowspan=\"4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.1.1.2.1\">\n<span class=\"ltx_p\" id=\"S4.T2.1.1.2.1.1\" style=\"width:34.1pt;\"><span class=\"ltx_text\" id=\"S4.T2.1.1.2.1.1.1\">Qwen2</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.1.1.3.1\">\n<span class=\"ltx_p\" id=\"S4.T2.1.1.3.1.1\" style=\"width:71.1pt;\">Greedy</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.1.1.1.1\">\n<span class=\"ltx_p\" id=\"S4.T2.1.1.1.1.1\" style=\"width:71.1pt;\">76.0 1.52</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.2\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T2.2.2.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.2.2.2.1\">\n<span class=\"ltx_p\" id=\"S4.T2.2.2.2.1.1\" style=\"width:71.1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.2.2.1.1.1\">USC</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T2.2.2.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.2.2.1.1\">\n<span class=\"ltx_p\" id=\"S4.T2.2.2.1.1.1\" style=\"width:71.1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.2.1.1.1.1\">80.33</span> <span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.2.1.1.1.2\">0.98</span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.3\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T2.3.3.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.3.3.2.1\">\n<span class=\"ltx_p\" id=\"S4.T2.3.3.2.1.1\" style=\"width:71.1pt;\">UDE</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" 
id=\"S4.T2.3.3.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.3.3.1.1\">\n<span class=\"ltx_p\" id=\"S4.T2.3.3.1.1.1\" style=\"width:71.1pt;\">75.67 0.0</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.4\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T2.4.4.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.4.4.2.1\">\n<span class=\"ltx_p\" id=\"S4.T2.4.4.2.1.1\" style=\"width:71.1pt;\">PEDAL</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T2.4.4.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.4.4.1.1\">\n<span class=\"ltx_p\" id=\"S4.T2.4.4.1.1.1\" style=\"width:71.1pt;\">77.89 1.28</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.5.5\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.5.5.2\" rowspan=\"4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.5.5.2.1\">\n<span class=\"ltx_p\" id=\"S4.T2.5.5.2.1.1\" style=\"width:34.1pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.5.2.1.1.1\">Llama3</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T2.5.5.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.5.5.3.1\">\n<span class=\"ltx_p\" id=\"S4.T2.5.5.3.1.1\" style=\"width:71.1pt;\">Greedy</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T2.5.5.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.5.5.1.1\">\n<span class=\"ltx_p\" id=\"S4.T2.5.5.1.1.1\" style=\"width:71.1pt;\">70.22 1.03</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.6.6\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T2.6.6.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.6.6.2.1\">\n<span class=\"ltx_p\" id=\"S4.T2.6.6.2.1.1\" style=\"width:71.1pt;\">USC</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T2.6.6.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.6.6.1.1\">\n<span class=\"ltx_p\" id=\"S4.T2.6.6.1.1.1\" style=\"width:71.1pt;\">72.99 0.47</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.7.7\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T2.7.7.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.7.7.2.1\">\n<span class=\"ltx_p\" id=\"S4.T2.7.7.2.1.1\" style=\"width:71.1pt;\">UDE</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T2.7.7.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.7.7.1.1\">\n<span class=\"ltx_p\" id=\"S4.T2.7.7.1.1.1\" style=\"width:71.1pt;\">70.67 0.0</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.8.8\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T2.8.8.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.8.8.2.1\">\n<span class=\"ltx_p\" id=\"S4.T2.8.8.2.1.1\" style=\"width:71.1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.8.8.2.1.1.1\">PEDAL</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T2.8.8.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.8.8.1.1\">\n<span class=\"ltx_p\" id=\"S4.T2.8.8.1.1.1\" 
style=\"width:71.1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.8.8.1.1.1.1\">74.11</span> <span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.8.8.1.1.1.2\">0.57</span></span>\n</span>\n</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Performance comparison of Greedy Decoding, USC, UDE and PEDAL for SVAMP dataset using Accuracy. Averaged scores across 3 seeds are reported along with the standard deviation. Best performing strategy per model has been highlighted in <span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.10.1\">bold</span></figcaption>\n</figure>",
122
+ "capture": "Table 2: Performance comparison of Greedy Decoding, USC, UDE and PEDAL for SVAMP dataset using Accuracy. Averaged scores across 3 seeds are reported along with the standard deviation. Best performing strategy per model has been highlighted in bold"
123
+ },
124
+ "3": {
125
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T3.8\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.8.9.1\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.8.9.1.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T3.8.9.1.1.1\">\n<span class=\"ltx_p\" id=\"S4.T3.8.9.1.1.1.1\" style=\"width:31.3pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.8.9.1.1.1.1.1\">Model</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T3.8.9.1.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T3.8.9.1.2.1\">\n<span class=\"ltx_p\" id=\"S4.T3.8.9.1.2.1.1\" style=\"width:56.9pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.8.9.1.2.1.1.1\">Approach</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S4.T3.8.9.1.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T3.8.9.1.3.1\">\n<span class=\"ltx_p\" id=\"S4.T3.8.9.1.3.1.1\" style=\"width:85.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.8.9.1.3.1.1.1\">Token Count</span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.8.10.2\">\n<td class=\"ltx_td ltx_align_top ltx_border_l ltx_border_r\" id=\"S4.T3.8.10.2.1\"></td>\n<td class=\"ltx_td ltx_align_top ltx_border_r\" id=\"S4.T3.8.10.2.2\"></td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T3.8.10.2.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T3.8.10.2.3.1\">\n<span class=\"ltx_p\" id=\"S4.T3.8.10.2.3.1.1\" style=\"width:42.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.8.10.2.3.1.1.1\">Input</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T3.8.10.2.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T3.8.10.2.4.1\">\n<span class=\"ltx_p\" id=\"S4.T3.8.10.2.4.1.1\" style=\"width:42.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.8.10.2.4.1.1.1\">Output</span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.2.2\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.2.2.3\" rowspan=\"2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T3.2.2.3.1\">\n<span class=\"ltx_p\" id=\"S4.T3.2.2.3.1.1\" style=\"width:31.3pt;\"><span class=\"ltx_text\" id=\"S4.T3.2.2.3.1.1.1\">Qwen2</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T3.2.2.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T3.2.2.4.1\">\n<span class=\"ltx_p\" id=\"S4.T3.2.2.4.1.1\" style=\"width:56.9pt;\">USC</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T3.1.1.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T3.1.1.1.1\">\n<span class=\"ltx_p\" id=\"S4.T3.1.1.1.1.1\" style=\"width:42.7pt;\">902.89 2.16</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T3.2.2.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T3.2.2.2.1\">\n<span class=\"ltx_p\" id=\"S4.T3.2.2.2.1.1\" style=\"width:42.7pt;\">502.75 1.43</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.4\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r 
ltx_border_t\" id=\"S4.T3.4.4.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T3.4.4.3.1\">\n<span class=\"ltx_p\" id=\"S4.T3.4.4.3.1.1\" style=\"width:56.9pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.4.4.3.1.1.1\">PEDAL</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T3.3.3.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T3.3.3.1.1\">\n<span class=\"ltx_p\" id=\"S4.T3.3.3.1.1.1\" style=\"width:42.7pt;\">1342.18 86.87</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T3.4.4.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T3.4.4.2.1\">\n<span class=\"ltx_p\" id=\"S4.T3.4.4.2.1.1\" style=\"width:42.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.4.4.2.1.1.1\">191.99</span> <span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.4.4.2.1.1.2\">0.22</span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.6.6\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.6.6.3\" rowspan=\"2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T3.6.6.3.1\">\n<span class=\"ltx_p\" id=\"S4.T3.6.6.3.1.1\" style=\"width:31.3pt;\"><span class=\"ltx_text\" id=\"S4.T3.6.6.3.1.1.1\">Llama3</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T3.6.6.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T3.6.6.4.1\">\n<span class=\"ltx_p\" id=\"S4.T3.6.6.4.1.1\" style=\"width:56.9pt;\">USC</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T3.5.5.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T3.5.5.1.1\">\n<span class=\"ltx_p\" id=\"S4.T3.5.5.1.1.1\" style=\"width:42.7pt;\">693.46 8.79</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T3.6.6.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T3.6.6.2.1\">\n<span class=\"ltx_p\" id=\"S4.T3.6.6.2.1.1\" style=\"width:42.7pt;\">923.56 1.51</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.8.8\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T3.8.8.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T3.8.8.3.1\">\n<span class=\"ltx_p\" id=\"S4.T3.8.8.3.1.1\" style=\"width:56.9pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.8.8.3.1.1.1\">PEDAL</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T3.7.7.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T3.7.7.1.1\">\n<span class=\"ltx_p\" id=\"S4.T3.7.7.1.1.1\" style=\"width:42.7pt;\">1261.51 64.95</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T3.8.8.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T3.8.8.2.1\">\n<span class=\"ltx_p\" id=\"S4.T3.8.8.2.1.1\" style=\"width:42.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.8.8.2.1.1.1\">197.72</span> <span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.8.8.2.1.1.2\">0.2</span></span>\n</span>\n</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Performance comparison of USC and PEDAL for SVAMP dataset using the number of output tokens. 
Averaged counts across 3 seeds are reported along with the standard deviation. Best performing strategy per model has been highlighted in <span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.10.1\">bold</span></figcaption>\n</figure>",
126
+ "capture": "Table 3: Performance comparison of USC and PEDAL for SVAMP dataset using the number of output tokens. Averaged counts across 3 seeds are reported along with the standard deviation. Best performing strategy per model has been highlighted in bold"
127
+ },
128
+ "4": {
129
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T4\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T4.8\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T4.8.9.1\">\n<th class=\"ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T4.8.9.1.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T4.8.9.1.1.1\">\n<span class=\"ltx_p\" id=\"S4.T4.8.9.1.1.1.1\" style=\"width:34.1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.8.9.1.1.1.1.1\">Model</span></span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T4.8.9.1.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T4.8.9.1.2.1\">\n<span class=\"ltx_p\" id=\"S4.T4.8.9.1.2.1.1\" style=\"width:71.1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.8.9.1.2.1.1.1\">Approach</span></span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T4.8.9.1.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T4.8.9.1.3.1\">\n<span class=\"ltx_p\" id=\"S4.T4.8.9.1.3.1.1\" style=\"width:71.1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.8.9.1.3.1.1.1\">Accuracy</span></span>\n</span>\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T4.1.1\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T4.1.1.2\" rowspan=\"4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T4.1.1.2.1\">\n<span class=\"ltx_p\" id=\"S4.T4.1.1.2.1.1\" style=\"width:34.1pt;\"><span class=\"ltx_text\" id=\"S4.T4.1.1.2.1.1.1\">Qwen2</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T4.1.1.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T4.1.1.3.1\">\n<span class=\"ltx_p\" id=\"S4.T4.1.1.3.1.1\" style=\"width:71.1pt;\">Greedy</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T4.1.1.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T4.1.1.1.1\">\n<span class=\"ltx_p\" id=\"S4.T4.1.1.1.1.1\" style=\"width:71.1pt;\">83.38 0.55</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.2.2\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T4.2.2.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T4.2.2.2.1\">\n<span class=\"ltx_p\" id=\"S4.T4.2.2.2.1.1\" style=\"width:71.1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.2.2.2.1.1.1\">USC</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T4.2.2.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T4.2.2.1.1\">\n<span class=\"ltx_p\" id=\"S4.T4.2.2.1.1.1\" style=\"width:71.1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.2.2.1.1.1.1\">84.35</span> <span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.2.2.1.1.1.2\">0.62</span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.3.3\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T4.3.3.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T4.3.3.2.1\">\n<span class=\"ltx_p\" id=\"S4.T4.3.3.2.1.1\" style=\"width:71.1pt;\">UDE</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" 
id=\"S4.T4.3.3.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T4.3.3.1.1\">\n<span class=\"ltx_p\" id=\"S4.T4.3.3.1.1.1\" style=\"width:71.1pt;\">84.06 0.0</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.4.4\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T4.4.4.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T4.4.4.2.1\">\n<span class=\"ltx_p\" id=\"S4.T4.4.4.2.1.1\" style=\"width:71.1pt;\">PEDAL</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T4.4.4.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T4.4.4.1.1\">\n<span class=\"ltx_p\" id=\"S4.T4.4.4.1.1.1\" style=\"width:71.1pt;\">83.77 0.47</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.5.5\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T4.5.5.2\" rowspan=\"4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T4.5.5.2.1\">\n<span class=\"ltx_p\" id=\"S4.T4.5.5.2.1.1\" style=\"width:34.1pt;\"><span class=\"ltx_text\" id=\"S4.T4.5.5.2.1.1.1\">Llama3</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T4.5.5.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T4.5.5.3.1\">\n<span class=\"ltx_p\" id=\"S4.T4.5.5.3.1.1\" style=\"width:71.1pt;\">Greedy</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T4.5.5.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T4.5.5.1.1\">\n<span class=\"ltx_p\" id=\"S4.T4.5.5.1.1.1\" style=\"width:71.1pt;\">76.52 1.44</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.6.6\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T4.6.6.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T4.6.6.2.1\">\n<span class=\"ltx_p\" id=\"S4.T4.6.6.2.1.1\" style=\"width:71.1pt;\">USC</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T4.6.6.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T4.6.6.1.1\">\n<span class=\"ltx_p\" id=\"S4.T4.6.6.1.1.1\" style=\"width:71.1pt;\">71.88 0.71</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.7.7\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T4.7.7.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T4.7.7.2.1\">\n<span class=\"ltx_p\" id=\"S4.T4.7.7.2.1.1\" style=\"width:71.1pt;\">UDE</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T4.7.7.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T4.7.7.1.1\">\n<span class=\"ltx_p\" id=\"S4.T4.7.7.1.1.1\" style=\"width:71.1pt;\">76.52 0.0</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.8.8\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T4.8.8.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T4.8.8.2.1\">\n<span class=\"ltx_p\" id=\"S4.T4.8.8.2.1.1\" style=\"width:71.1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.8.8.2.1.1.1\">PEDAL</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T4.8.8.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T4.8.8.1.1\">\n<span class=\"ltx_p\" id=\"S4.T4.8.8.1.1.1\" 
style=\"width:71.1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.8.8.1.1.1.1\">78.55</span> <span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.8.8.1.1.1.2\">0.47</span></span>\n</span>\n</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 4: </span>Performance comparison of greedy decoding, USC, UDE and PEDAL for ARC dataset using Accuracy. Averaged scores across 3 seeds are reported along with the standard deviation. Best performing strategy per model has been highlighted in <span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.10.1\">bold</span></figcaption>\n</figure>",
130
+ "capture": "Table 4: Performance comparison of greedy decoding, USC, UDE and PEDAL for ARC dataset using Accuracy. Averaged scores across 3 seeds are reported along with the standard deviation. Best performing strategy per model has been highlighted in bold"
131
+ },
132
+ "5": {
133
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T5\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T5.8\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T5.8.9.1\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T5.8.9.1.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T5.8.9.1.1.1\">\n<span class=\"ltx_p\" id=\"S4.T5.8.9.1.1.1.1\" style=\"width:31.3pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.8.9.1.1.1.1.1\">Model</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T5.8.9.1.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T5.8.9.1.2.1\">\n<span class=\"ltx_p\" id=\"S4.T5.8.9.1.2.1.1\" style=\"width:56.9pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.8.9.1.2.1.1.1\">Approach</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S4.T5.8.9.1.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T5.8.9.1.3.1\">\n<span class=\"ltx_p\" id=\"S4.T5.8.9.1.3.1.1\" style=\"width:85.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.8.9.1.3.1.1.1\">Token Count</span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.8.10.2\">\n<td class=\"ltx_td ltx_align_top ltx_border_l ltx_border_r\" id=\"S4.T5.8.10.2.1\"></td>\n<td class=\"ltx_td ltx_align_top ltx_border_r\" id=\"S4.T5.8.10.2.2\"></td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T5.8.10.2.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T5.8.10.2.3.1\">\n<span class=\"ltx_p\" id=\"S4.T5.8.10.2.3.1.1\" style=\"width:42.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.8.10.2.3.1.1.1\">Input</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T5.8.10.2.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T5.8.10.2.4.1\">\n<span class=\"ltx_p\" id=\"S4.T5.8.10.2.4.1.1\" style=\"width:42.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.8.10.2.4.1.1.1\">Output</span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.2.2\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T5.2.2.3\" rowspan=\"2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T5.2.2.3.1\">\n<span class=\"ltx_p\" id=\"S4.T5.2.2.3.1.1\" style=\"width:31.3pt;\"><span class=\"ltx_text\" id=\"S4.T5.2.2.3.1.1.1\">Qwen2</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T5.2.2.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T5.2.2.4.1\">\n<span class=\"ltx_p\" id=\"S4.T5.2.2.4.1.1\" style=\"width:56.9pt;\">USC</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T5.1.1.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T5.1.1.1.1\">\n<span class=\"ltx_p\" id=\"S4.T5.1.1.1.1.1\" style=\"width:42.7pt;\">1153.04 1.96</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T5.2.2.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T5.2.2.2.1\">\n<span class=\"ltx_p\" id=\"S4.T5.2.2.2.1.1\" style=\"width:42.7pt;\">668.71 7.19</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.4.4\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r 
ltx_border_t\" id=\"S4.T5.4.4.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T5.4.4.3.1\">\n<span class=\"ltx_p\" id=\"S4.T5.4.4.3.1.1\" style=\"width:56.9pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.4.4.3.1.1.1\">PEDAL</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T5.3.3.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T5.3.3.1.1\">\n<span class=\"ltx_p\" id=\"S4.T5.3.3.1.1.1\" style=\"width:42.7pt;\">1179.76 100.10</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T5.4.4.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T5.4.4.2.1\">\n<span class=\"ltx_p\" id=\"S4.T5.4.4.2.1.1\" style=\"width:42.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.4.4.2.1.1.1\">99.47</span> <span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.4.4.2.1.1.2\">10.05</span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.6.6\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T5.6.6.3\" rowspan=\"2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T5.6.6.3.1\">\n<span class=\"ltx_p\" id=\"S4.T5.6.6.3.1.1\" style=\"width:31.3pt;\"><span class=\"ltx_text\" id=\"S4.T5.6.6.3.1.1.1\">Llama3</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T5.6.6.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T5.6.6.4.1\">\n<span class=\"ltx_p\" id=\"S4.T5.6.6.4.1.1\" style=\"width:56.9pt;\">USC</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T5.5.5.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T5.5.5.1.1\">\n<span class=\"ltx_p\" id=\"S4.T5.5.5.1.1.1\" style=\"width:42.7pt;\">1072.96 5.67</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T5.6.6.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T5.6.6.2.1\">\n<span class=\"ltx_p\" id=\"S4.T5.6.6.2.1.1\" style=\"width:42.7pt;\">928.1 1.31</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.8.8\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T5.8.8.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T5.8.8.3.1\">\n<span class=\"ltx_p\" id=\"S4.T5.8.8.3.1.1\" style=\"width:56.9pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.8.8.3.1.1.1\">PEDAL</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T5.7.7.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T5.7.7.1.1\">\n<span class=\"ltx_p\" id=\"S4.T5.7.7.1.1.1\" style=\"width:42.7pt;\">1185.27 115.08</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T5.8.8.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T5.8.8.2.1\">\n<span class=\"ltx_p\" id=\"S4.T5.8.8.2.1.1\" style=\"width:42.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.8.8.2.1.1.1\">196.83</span> <span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.8.8.2.1.1.2\">0.11</span></span>\n</span>\n</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 5: </span>Performance comparison of USC and PEDAL for ARC dataset using the number of output tokens. 
Averaged counts across 3 seeds are reported along with the standard deviation. Best performing strategy per model has been highlighted in <span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.10.1\">bold</span></figcaption>\n</figure>",
134
+ "capture": "Table 5: Performance comparison of USC and PEDAL for ARC dataset using the number of output tokens. Averaged counts across 3 seeds are reported along with the standard deviation. Best performing strategy per model has been highlighted in bold"
135
+ },
136
+ "6": {
137
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T6\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T6.6\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T6.6.7.1\">\n<th class=\"ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T6.6.7.1.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T6.6.7.1.1.1\">\n<span class=\"ltx_p\" id=\"S5.T6.6.7.1.1.1.1\" style=\"width:42.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T6.6.7.1.1.1.1.1\">Number of Prompts</span></span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T6.6.7.1.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T6.6.7.1.2.1\">\n<span class=\"ltx_p\" id=\"S5.T6.6.7.1.2.1.1\" style=\"width:71.1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T6.6.7.1.2.1.1.1\">SVAMP</span></span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T6.6.7.1.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T6.6.7.1.3.1\">\n<span class=\"ltx_p\" id=\"S5.T6.6.7.1.3.1.1\" style=\"width:71.1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T6.6.7.1.3.1.1.1\">ARC</span></span>\n</span>\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T6.2.2\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T6.2.2.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T6.2.2.3.1\">\n<span class=\"ltx_p\" id=\"S5.T6.2.2.3.1.1\" style=\"width:42.7pt;\">2</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T6.1.1.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T6.1.1.1.1\">\n<span class=\"ltx_p\" id=\"S5.T6.1.1.1.1.1\" style=\"width:71.1pt;\">77.0 0.98</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T6.2.2.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T6.2.2.2.1\">\n<span class=\"ltx_p\" id=\"S5.T6.2.2.2.1.1\" style=\"width:71.1pt;\">83.96 0.36</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T6.4.4\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T6.4.4.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T6.4.4.3.1\">\n<span class=\"ltx_p\" id=\"S5.T6.4.4.3.1.1\" style=\"width:42.7pt;\">3</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T6.3.3.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T6.3.3.1.1\">\n<span class=\"ltx_p\" id=\"S5.T6.3.3.1.1.1\" style=\"width:71.1pt;\">77.89 1.28</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T6.4.4.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T6.4.4.2.1\">\n<span class=\"ltx_p\" id=\"S5.T6.4.4.2.1.1\" style=\"width:71.1pt;\">83.77 0.47</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T6.6.6\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_bb ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T6.6.6.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T6.6.6.3.1\">\n<span class=\"ltx_p\" id=\"S5.T6.6.6.3.1.1\" style=\"width:42.7pt;\">4</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_bb 
ltx_border_r ltx_border_t\" id=\"S5.T6.5.5.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T6.5.5.1.1\">\n<span class=\"ltx_p\" id=\"S5.T6.5.5.1.1.1\" style=\"width:71.1pt;\">78.22 1.34</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_bb ltx_border_r ltx_border_t\" id=\"S5.T6.6.6.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T6.6.6.2.1\">\n<span class=\"ltx_p\" id=\"S5.T6.6.6.2.1.1\" style=\"width:71.1pt;\">83.87 0.49</span>\n</span>\n</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 6: </span>Effect of number of prompts on performance using Qwen2 with SVAMP and ARC datasets. Averaged scores across 3 seeds are reported along with the standard deviation. </figcaption>\n</figure>",
138
+ "capture": "Table 6: Effect of number of prompts on performance using Qwen2 with SVAMP and ARC datasets. Averaged scores across 3 seeds are reported along with the standard deviation. "
139
+ }
140
+ },
141
+ "image_paths": {
142
+ "1": {
143
+ "figure_path": "2408.08869v2_figure_1.png",
144
+ "caption": "Figure 1: High level overview of PEDAL (Prompts based on Exemplar Diversity Aggregated using an LLM)",
145
+ "url": "http://arxiv.org/html/2408.08869v2/x1.png"
146
+ }
147
+ },
148
+ "validation": true,
149
+ "references": [
150
+ {
151
+ "1": {
152
+ "title": "Ask me anything: A simple strategy for prompting language models.",
153
+ "author": "Simran Arora, Avanika Narayan, Mayee F. Chen, Laurel Orr, Neel Guha, Kush Bhatia, Ines Chami, Frederic Sala, and Christopher R\u00e9. 2022.",
154
+ "venue": null,
155
+ "url": "http://arxiv.org/abs/2210.02441"
156
+ }
157
+ },
158
+ {
159
+ "2": {
160
+ "title": "PromptSource: An integrated development environment and repository for natural language prompts.",
161
+ "author": "Stephen Bach, Victor Sanh, Zheng Xin Yong, Albert Webson, Colin Raffel, Nihal V. Nayak, Abheesht Sharma, Taewoon Kim, M Saiful Bari, Thibault Fevry, Zaid Alyafeai, Manan Dey, Andrea Santilli, Zhiqing Sun, Srulik Ben-david, Canwen Xu, Gunjan Chhablani, Han Wang, Jason Fries, Maged Al-shaibani, Shanya Sharma, Urmish Thakker, Khalid Almubarak, Xiangru Tang, Dragomir Radev, Mike Tian-jian Jiang, and Alexander Rush. 2022.",
162
+ "venue": "In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 93\u2013104, Dublin, Ireland. Association for Computational Linguistics.",
163
+ "url": "https://doi.org/10.18653/v1/2022.acl-demo.9"
164
+ }
165
+ },
166
+ {
167
+ "3": {
168
+ "title": "Language models are few-shot learners.",
169
+ "author": "Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.",
170
+ "venue": "In Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS\u201920, Red Hook, NY, USA. Curran Associates Inc.",
171
+ "url": null
172
+ }
173
+ },
174
+ {
175
+ "4": {
176
+ "title": "A survey on mixture of experts.",
177
+ "author": "Weilin Cai, Juyong Jiang, Fan Wang, Jing Tang, Sunghun Kim, and Jiayi Huang. 2024.",
178
+ "venue": null,
179
+ "url": "http://arxiv.org/abs/2407.06204"
180
+ }
181
+ },
182
+ {
183
+ "5": {
184
+ "title": "LangChain.",
185
+ "author": "Harrison Chase. 2022.",
186
+ "venue": null,
187
+ "url": "https://github.com/langchain-ai/langchain"
188
+ }
189
+ },
190
+ {
191
+ "6": {
192
+ "title": "Inside: Llms\u2019 internal states retain the power of hallucination detection.",
193
+ "author": "Chao Chen, Kai Liu, Ze Chen, Yi Gu, Yue Wu, Mingyuan Tao, Zhihang Fu, and Jieping Ye. 2024.",
194
+ "venue": null,
195
+ "url": "http://arxiv.org/abs/2402.03744"
196
+ }
197
+ },
198
+ {
199
+ "7": {
200
+ "title": "Frugalgpt: How to use large language models while reducing cost and improving performance.",
201
+ "author": "Lingjiao Chen, Matei Zaharia, and James Zou. 2023a.",
202
+ "venue": null,
203
+ "url": "http://arxiv.org/abs/2305.05176"
204
+ }
205
+ },
206
+ {
207
+ "8": {
208
+ "title": "Universal self-consistency for large language model generation.",
209
+ "author": "Xinyun Chen, Renat Aksitov, Uri Alon, Jie Ren, Kefan Xiao, Pengcheng Yin, Sushant Prakash, Charles Sutton, Xuezhi Wang, and Denny Zhou. 2023b.",
210
+ "venue": "ArXiv, abs/2311.17311.",
211
+ "url": "https://api.semanticscholar.org/CorpusID:265498407"
212
+ }
213
+ },
214
+ {
215
+ "9": {
216
+ "title": "A survey on deep neural network pruning-taxonomy, comparison, analysis, and recommendations.",
217
+ "author": "Hongrong Cheng, Miao Zhang, and Javen Qinfeng Shi. 2024.",
218
+ "venue": null,
219
+ "url": "http://arxiv.org/abs/2308.06767"
220
+ }
221
+ },
222
+ {
223
+ "10": {
224
+ "title": "Why do llm input tokens cost less than output tokens?",
225
+ "author": "Peter Chng. 2024.",
226
+ "venue": null,
227
+ "url": "https://peterchng.com/blog/2024/05/01/why-do-llm-input-tokens-cost-less-than-output-tokens/"
228
+ }
229
+ },
230
+ {
231
+ "11": {
232
+ "title": "Palm: Scaling language modeling with pathways.",
233
+ "author": "Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam M. Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Benton C. Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garc\u00eda, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark D\u00edaz, Orhan Firat, Michele Catasta, Jason Wei, Kathleen S. Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov,\nand Noah Fiedel. 2022.",
234
+ "venue": "J. Mach. Learn. Res., 24:240:1\u2013240:113.",
235
+ "url": "https://api.semanticscholar.org/CorpusID:247951931"
236
+ }
237
+ },
238
+ {
239
+ "12": {
240
+ "title": "Think you have solved question answering? try arc, the ai2 reasoning challenge.",
241
+ "author": "Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018.",
242
+ "venue": null,
243
+ "url": "http://arxiv.org/abs/1803.05457"
244
+ }
245
+ },
246
+ {
247
+ "13": {
248
+ "title": "Qlora: Efficient finetuning of quantized llms.",
249
+ "author": "Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. 2023.",
250
+ "venue": null,
251
+ "url": "http://arxiv.org/abs/2305.14314"
252
+ }
253
+ },
254
+ {
255
+ "14": {
256
+ "title": "Promptbreeder: Self-referential self-improvement via prompt evolution.",
257
+ "author": "Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, and Tim Rockt\u00e4schel. 2023.",
258
+ "venue": null,
259
+ "url": "http://arxiv.org/abs/2309.16797"
260
+ }
261
+ },
262
+ {
263
+ "15": {
264
+ "title": "Knowledge distillation: A survey.",
265
+ "author": "Jianping Gou, Baosheng Yu, Stephen J. Maybank, and Dacheng Tao. 2021.",
266
+ "venue": "International Journal of Computer Vision, 129(6):1789\u20131819.",
267
+ "url": "https://doi.org/10.1007/s11263-021-01453-z"
268
+ }
269
+ },
270
+ {
271
+ "16": {
272
+ "title": "Promptboosting: black-box text classification with ten forward passes.",
273
+ "author": "Bairu Hou, Joe O\u2019Connor, Jacob Andreas, Shiyu Chang, and Yang Zhang. 2023.",
274
+ "venue": "In Proceedings of the 40th International Conference on Machine Learning, ICML\u201923. JMLR.org.",
275
+ "url": null
276
+ }
277
+ },
278
+ {
279
+ "17": {
280
+ "title": "Enhancing large language models in coding through multi-perspective self-consistency.",
281
+ "author": "Baizhou Huang, Shuai Lu, Weizhu Chen, Xiaojun Wan, and Nan Duan. 2024.",
282
+ "venue": null,
283
+ "url": "http://arxiv.org/abs/2309.17272"
284
+ }
285
+ },
286
+ {
287
+ "18": {
288
+ "title": "Quantization and training of neural networks for efficient integer-arithmetic-only inference.",
289
+ "author": "Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew Howard, Hartwig Adam, and Dmitry Kalenichenko. 2018.",
290
+ "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).",
291
+ "url": null
292
+ }
293
+ },
294
+ {
295
+ "19": {
296
+ "title": "Llm-blender: Ensembling large language models with pairwise ranking and generative fusion.",
297
+ "author": "Dongfu Jiang, Xiang Ren, and Bill Yuchen Lin. 2023.",
298
+ "venue": null,
299
+ "url": "http://arxiv.org/abs/2306.02561"
300
+ }
301
+ },
302
+ {
303
+ "20": {
304
+ "title": "Dspy: Compiling declarative language model calls into self-improving pipelines.",
305
+ "author": "Omar Khattab, Arnav Singhvi, Paridhi Maheshwari, Zhiyuan Zhang, Keshav Santhanam, Sri Vardhamanan, Saiful Haq, Ashutosh Sharma, Thomas T. Joshi, Hanna Moazam, Heather Miller, Matei Zaharia, and Christopher Potts. 2023.",
306
+ "venue": null,
307
+ "url": "http://arxiv.org/abs/2310.03714"
308
+ }
309
+ },
310
+ {
311
+ "21": {
312
+ "title": "Deep model fusion: A survey.",
313
+ "author": "Weishi Li, Yong Peng, Miao Zhang, Liang Ding, Han Hu, and Li Shen. 2023a.",
314
+ "venue": null,
315
+ "url": "http://arxiv.org/abs/2309.15698"
316
+ }
317
+ },
318
+ {
319
+ "22": {
320
+ "title": "Making language models better reasoners with step-aware verifier.",
321
+ "author": "Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. 2023b.",
322
+ "venue": "In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5315\u20135333, Toronto, Canada. Association for Computational Linguistics.",
323
+ "url": "https://doi.org/10.18653/v1/2023.acl-long.291"
324
+ }
325
+ },
326
+ {
327
+ "23": {
328
+ "title": "Large language models in finance: A survey.",
329
+ "author": "Yinheng Li, Shaofei Wang, Han Ding, and Hang Chen. 2024.",
330
+ "venue": null,
331
+ "url": "http://arxiv.org/abs/2311.10723"
332
+ }
333
+ },
334
+ {
335
+ "24": {
336
+ "title": "Large language model guided tree-of-thought.",
337
+ "author": "Jieyi Long. 2023.",
338
+ "venue": null,
339
+ "url": "http://arxiv.org/abs/2305.08291"
340
+ }
341
+ },
342
+ {
343
+ "25": {
344
+ "title": "Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity.",
345
+ "author": "Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. 2022.",
346
+ "venue": "In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8086\u20138098, Dublin, Ireland. Association for Computational Linguistics.",
347
+ "url": "https://doi.org/10.18653/v1/2022.acl-long.556"
348
+ }
349
+ },
350
+ {
351
+ "26": {
352
+ "title": "Are NLP models really able to solve simple math word problems?",
353
+ "author": "Arkil Patel, Satwik Bhattamishra, and Navin Goyal. 2021.",
354
+ "venue": "In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2080\u20132094, Online. Association for Computational Linguistics.",
355
+ "url": "https://doi.org/10.18653/v1/2021.naacl-main.168"
356
+ }
357
+ },
358
+ {
359
+ "27": {
360
+ "title": "Boosted prompt ensembles for large language models.",
361
+ "author": "Silviu Pitis, Michael R. Zhang, Andrew Wang, and Jimmy Ba. 2023.",
362
+ "venue": null,
363
+ "url": "http://arxiv.org/abs/2304.05970"
364
+ }
365
+ },
366
+ {
367
+ "28": {
368
+ "title": "Exploring the limits of transfer learning with a unified text-to-text transformer.",
369
+ "author": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020.",
370
+ "venue": "J. Mach. Learn. Res., 21(1).",
371
+ "url": null
372
+ }
373
+ },
374
+ {
375
+ "29": {
376
+ "title": "Snorkel: rapid training data creation with weak supervision.",
377
+ "author": "Alexander Ratner, Stephen H. Bach, Henry Ehrenberg, Jason Fries, Sen Wu, and Christopher R\u00e9. 2017.",
378
+ "venue": "Proc. VLDB Endow., 11(3):269\u2013282.",
379
+ "url": "https://doi.org/10.14778/3157794.3157797"
380
+ }
381
+ },
382
+ {
383
+ "30": {
384
+ "title": "Explaining adaboost.",
385
+ "author": "Robert E Schapire. 2013.",
386
+ "venue": "In Empirical inference, pages 37\u201352. Springer.",
387
+ "url": null
388
+ }
389
+ },
390
+ {
391
+ "31": {
392
+ "title": "Fast transformer decoding: One write-head is all you need.",
393
+ "author": "Noam Shazeer. 2019.",
394
+ "venue": null,
395
+ "url": "http://arxiv.org/abs/1911.02150"
396
+ }
397
+ },
398
+ {
399
+ "32": {
400
+ "title": "Tree prompting: Efficient task adaptation without fine-tuning.",
401
+ "author": "Chandan Singh, John Morris, Alexander Rush, Jianfeng Gao, and Yuntian Deng. 2023.",
402
+ "venue": "In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 6253\u20136267, Singapore. Association for Computational Linguistics.",
403
+ "url": "https://doi.org/10.18653/v1/2023.emnlp-main.384"
404
+ }
405
+ },
406
+ {
407
+ "33": {
408
+ "title": "Llama: Open and efficient foundation language models.",
409
+ "author": "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth\u00e9e Lacroix, Baptiste Rozi\u00e8re, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023.",
410
+ "venue": "ArXiv, abs/2302.13971.",
411
+ "url": "https://api.semanticscholar.org/CorpusID:257219404"
412
+ }
413
+ },
414
+ {
415
+ "34": {
416
+ "title": "Self-consistency improves chain of thought reasoning in language models.",
417
+ "author": "Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Huai hsin Chi, and Denny Zhou. 2022.",
418
+ "venue": "ArXiv, abs/2203.11171.",
419
+ "url": "https://api.semanticscholar.org/CorpusID:247595263"
420
+ }
421
+ },
422
+ {
423
+ "35": {
424
+ "title": "Chain of thought prompting elicits reasoning in large language models.",
425
+ "author": "Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Huai hsin Chi, F. Xia, Quoc Le, and Denny Zhou. 2022.",
426
+ "venue": "ArXiv, abs/2201.11903.",
427
+ "url": "https://api.semanticscholar.org/CorpusID:246411621"
428
+ }
429
+ },
430
+ {
431
+ "36": {
432
+ "title": "Parallel decoding via hidden transfer for lossless large language model acceleration.",
433
+ "author": "Pengfei Wu, Jiahao Liu, Zhuocheng Gong, Qifan Wang, Jinpeng Li, Jingang Wang, Xunliang Cai, and Dongyan Zhao. 2024.",
434
+ "venue": null,
435
+ "url": "http://arxiv.org/abs/2404.12022"
436
+ }
437
+ },
438
+ {
439
+ "37": {
440
+ "title": "Qwen2 technical report.",
441
+ "author": "An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, Guanting Dong, Haoran Wei, Huan Lin, Jialong Tang, Jialin Wang, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Ma, Jianxin Yang, Jin Xu, Jingren Zhou, Jinze Bai, Jinzheng He, Junyang Lin, Kai Dang, Keming Lu, Keqin Chen, Kexin Yang, Mei Li, Mingfeng Xue, Na Ni, Pei Zhang, Peng Wang, Ru Peng, Rui Men, Ruize Gao, Runji Lin, Shijie Wang, Shuai Bai, Sinan Tan, Tianhang Zhu, Tianhao Li, Tianyu Liu, Wenbin Ge, Xiaodong Deng, Xiaohuan Zhou, Xingzhang Ren, Xinyu Zhang, Xipin Wei, Xuancheng Ren, Xuejing Liu, Yang Fan, Yang Yao, Yichang Zhang, Yu Wan, Yunfei Chu, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, Zhifang Guo, and Zhihao Fan. 2024.",
442
+ "venue": null,
443
+ "url": "http://arxiv.org/abs/2407.10671"
444
+ }
445
+ },
446
+ {
447
+ "38": {
448
+ "title": "Tree of thoughts: Deliberate problem solving with large language models.",
449
+ "author": "Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik Narasimhan. 2023.",
450
+ "venue": null,
451
+ "url": "http://arxiv.org/abs/2305.10601"
452
+ }
453
+ },
454
+ {
455
+ "39": {
456
+ "title": "Legal prompting: Teaching a language model to think like a lawyer.",
457
+ "author": "Fangyi Yu, Lee Quartey, and Frank Schilder. 2022.",
458
+ "venue": null,
459
+ "url": "http://arxiv.org/abs/2212.01326"
460
+ }
461
+ },
462
+ {
463
+ "40": {
464
+ "title": "Prefer: Prompt ensemble learning via feedback-reflect-refine.",
465
+ "author": "Chenrui Zhang, Lin Liu, Jinpeng Wang, Chuyuan Wang, Xiao Sun, Hongyu Wang, and Mingchen Cai. 2023.",
466
+ "venue": null,
467
+ "url": "http://arxiv.org/abs/2308.12033"
468
+ }
469
+ },
470
+ {
471
+ "41": {
472
+ "title": "A survey of large language models.",
473
+ "author": "Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, and Ji-Rong Wen. 2023.",
474
+ "venue": null,
475
+ "url": "http://arxiv.org/abs/2303.18223"
476
+ }
477
+ },
478
+ {
479
+ "42": {
480
+ "title": "A survey of large language models for code: Evolution, benchmarking, and future trends.",
481
+ "author": "Zibin Zheng, Kaiwen Ning, Yanlin Wang, Jingwen Zhang, Dewu Zheng, Mingxi Ye, and Jiachi Chen. 2024.",
482
+ "venue": null,
483
+ "url": "http://arxiv.org/abs/2311.10372"
484
+ }
485
+ },
486
+ {
487
+ "43": {
488
+ "title": "Least-to-most prompting enables complex reasoning in large language models.",
489
+ "author": "Denny Zhou, Nathanael Scharli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Olivier Bousquet, Quoc Le, and Ed Huai hsin Chi. 2022.",
490
+ "venue": "ArXiv, abs/2205.10625.",
491
+ "url": "https://api.semanticscholar.org/CorpusID:248986239"
492
+ }
493
+ },
494
+ {
495
+ "44": {
496
+ "title": "A survey of large language models in medicine: Progress, application, and challenge.",
497
+ "author": "Hongjian Zhou, Fenglin Liu, Boyang Gu, Xinyu Zou, Jinfa Huang, Jinge Wu, Yiru Li, Sam S. Chen, Peilin Zhou, Junling Liu, Yining Hua, Chengfeng Mao, Chenyu You, Xian Wu, Yefeng Zheng, Lei Clifton, Zheng Li, Jiebo Luo, and David A. Clifton. 2024.",
498
+ "venue": null,
499
+ "url": "http://arxiv.org/abs/2311.05112"
500
+ }
501
+ },
502
+ {
503
+ "45": {
504
+ "title": "A survey on model compression for large language models.",
505
+ "author": "Xunyu Zhu, Jian Li, Yong Liu, Can Ma, and Weiping Wang. 2024.",
506
+ "venue": null,
507
+ "url": "http://arxiv.org/abs/2308.07633"
508
+ }
509
+ }
510
+ ],
511
+ "url": "http://arxiv.org/html/2408.08869v2"
512
+ }
20240819/2408.09642v1.json ADDED
@@ -0,0 +1,419 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "title": "Solving stochastic climate-economy models: A deep least-squares Monte Carlo approach",
3
+ "abstract": "Stochastic versions of recursive integrated climate-economy assessment models are essential for studying and quantifying policy decisions under uncertainty.\nHowever, as the number of stochastic shocks increases, solving these models as dynamic programming problems using deterministic grid methods becomes computationally infeasible, and simulation-based methods are needed.\nThe least-squares Monte Carlo (LSMC) method has become popular for solving optimal stochastic control problems in quantitative finance.\nIn this paper, we extend the application of the LSMC method to stochastic climate-economy models.\nWe exemplify this approach using a stochastic version of the DICE model with all five main uncertainties discussed in the literature.\nTo address the complexity and high dimensionality of these models, we incorporate deep neural network approximations in place of standard regression techniques within the LSMC framework.\nOur results demonstrate that the deep LSMC method can be used to efficiently derive optimal policies for climate-economy models in the presence of uncertainty.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "The analysis of climate-economy policies is typically performed using Integrated Assessment Models (IAMs) that describe the complex interplay between the climate and the economy via deterministic equations.\nIn order to account for stochastic shocks when finding optimal mitigation policies adapted to climate and economic variables that are evolving stochastically over time, a recursive dynamic programming implementation of integrated assessment models is required.\nThis is a significantly harder computational problem to solve compared to the deterministic case.\nSeminal contributions to solving IAMs as optimal decision making problems in the presence of uncertainty include Kelly and Kolstad (1999 ###reference_b18###), Kelly and Kolstad (2001 ###reference_b19###), Leach (2007 ###reference_b23###), Traeger (2014 ###reference_b35###), and Cai and Lontzek (2019 ###reference_b7###).\nAll these studies are based on variants of the so-called dynamic integrated climate-economy (DICE) model extended to include stochastic shocks to the economy and climate.\nThe DICE model is one of the three main IAMs (the other two being FUND and PAGE) used by the United States government to determine the social cost of carbon; see Interagency Working Group on Social Cost of Greenhouse\nGases (2016 ###reference_b16###).\nIt has been regularly revised over the last three decades, with the first version dating back to Nordhaus et al. (1992 ###reference_b28###).\nIt balances parsimony with realism and is well documented with all published model equations; in addition, its code is publicly available, which is an exception rather than the rule for IAMs.\nAt the same time, it is important to note that IAMs, and the DICE model in particular, have significant limitations (in the model structure and model parameters), which have been criticized and debated in the literature (see the discussions in Ackerman et al. (2009 ###reference_b1###); Pindyck (2017 ###reference_b29###); Grubb et al. 
(2021 ###reference_b13###); Weitzman (2011 ###reference_b37###)).\nDespite the criticism, the DICE model has become the iconic typical reference point for climate-economy modeling, and is used in our study.\nThe original deterministic DICE model is solved as a global optimization problem using the General Algebraic Modeling Language (GAMS)111https://www.gams.com/ ###reference_www.gams.com/###, a high-level programming language for mathematical modeling.\nIts stochastic extensions mentioned in the above-mentioned studies require implementations of recursive dynamic programming to find optimal climate policies under uncertainty222If required, the deterministic DICE model can be solved as a recursive dynamic programming problem, too..\nThis is subject to the curse of dimensionality, and these studies are limited to only one or two stochastic variables.\nEven in this case, computations take several million core hours on a modern supercomputer (see, for instance, Cai and Lontzek (2019 ###reference_b7###)).\nTherefore, simulation methods are needed to handle models with many state variables and multiple shocks to reduce the computational burden.\nThe least-squares Monte Carlo (LSMC) method for solving multi-dimensional stochastic control problems has gained popularity in recent years due to its effectiveness in dealing with high dimensional problems and because it imposes fewer restrictions on the constraints and allows for flexibility in the dynamics of the underlying stochastic processes.\nThe idea is based on simulating random paths of the underlying stochastic variables over time and replacing the conditional expectation of the value function in the Bellman backward recursive solution of the stochastic control problem with an empirical least-squares regression estimate.\nThe transition density of the underlying process is not even required to be known in closed form; one just needs to be able to simulate the underlying processes.\nThe LSMC method was originally developed in Longstaff and Schwartz (2001 ###reference_b24###) and Tsitsiklis and Van Roy (2001 ###reference_b36###).\nThe convergence properties of this method are examined in Belomestny et al. (2010 ###reference_b6###); Belomestny (2011 ###reference_b5###), and A\u00efd et al. (2014 ###reference_b2###).\nThe LSMC method was originally developed for pricing American options where the state variables are not affected by the control.\nLater, an extension of the LSMC method with control randomisation was developed in Kharroubi et al. (2014 ###reference_b20###) to handle endogenous state variables (i.e. 
state variables that are affected by controls).\nWhen applied to stochastic control problems that aim to optimize an expected utility, some further extensions are needed as proposed in Andr\u00e9asson and Shevchenko (2022 ###reference_b3###) and Andr\u00e9asson and Shevchenko (2024 ###reference_b4###) to achieve a stable and accurate solution.\nIn this paper, we demonstrate how the LSMC method can be adapted to solve the recursive dynamic programming problem of stochastic IAMs.\nWe exemplify this approach with an application to the DICE model with uncertainties in: (1) the equilibrium temperature sensitivity, (2) the damage function coefficient, (3) the growth rate of total factor productivity, (4) the growth rate of decarbonization, and (5) the equilibrium carbon concentration in the upper strata.\nThese five uncertainties were identified in Nordhaus (2018 ###reference_b26###) as being major sources of uncertainty for the evolution of climate-economic state variables.\nTypically, polynomial regression is used in LSMC to approximate the corresponding conditional expectations with respect to state variables and controls.\nHowever, for models such as the stochastic DICE model, this leads to the need of too many covariates and simulations, making the method not practical.\nTo overcome this problem, we use deep neural network approximations for the required regressions and provide detailed explanations.\nThe DICE model is a deterministic approach that combines a Ramsey\u2013Cass\u2013Koopmans neoclassical model of economic growth (also known as the Ramsey growth model) with a simple climate model.\nIt involves six state variables (economic capital; temperature in atmosphere and lower oceans; carbon concentration in atmosphere, upper and lower oceans) evolving deterministically in time, two control variables (savings and carbon emission reduction rates) to be determined for each time period of the model, and several exogenous processes (e.g. population size and technology level).\nThe uncertainty about the future of the climate and economy is then typically assessed by treating some model parameters as random variables (because we do not know the exact true value of the key parameters) using a Monte Carlo analysis (see Nordhaus (2018 ###reference_b26###); Gillingham et al. (2015 ###reference_b12###)).\nModeling uncertainty owing to the stochastic nature of the state variables (i.e. 
owing to the process uncertainty that is present even if we know the model parameters exactly) requires the development and solution of the DICE model as a dynamic model of decision-making under uncertainty, where we calculate the optimal policy response under the assumption of continuing uncertainty throughout the time frame of the model.\nFew attempts have been made to extend the DICE model to incorporate stochasticity in the underlying state variables and solve it as a recursive dynamic programming problem.\nFor example, Kelly and Kolstad (1999 ###reference_b18###) and Leach (2007 ###reference_b23###) formulated the DICE model with stochasticity in the temporal evolution of temperature, and solved this as a recursive dynamic programming problem.\nThese studies are seminal contributions to the incorporation of uncertainty in the DICE model (although their numerical solution approach is difficult to extend to a higher dimensional space and time-frequency).\nCai and Lontzek (2019 ###reference_b7###) formulate DICE as a dynamic programming problem with a stochastic shock on the economy and climate.\nIn addition, Traeger (2014 ###reference_b35###) developed a reduced DICE model with a smaller number of state variables, whereas Lontzek et al. (2015 ###reference_b25###) studied the impact of climate tipping points, and Shevchenko et al. (2022 ###reference_b32###) considered the DICE model with discrete stochastic shocks to the economy.\nTo our best knowledge, the only attempt to solve the stochastic DICE model using an LSMC-type approach is Ikefuji et al. (2020 ###reference_b15###).\nTheir study handles only one uncertainty at a time, and the setup of the regression type Monte Carlo algorithm omits the integration for the conditional expectation in the Bellman equation, assuming the randomness is known in the transition of state variables (in principle, in this case, the required integration can be performed by using deterministic quadrature methods, but this will be subject to the curse of dimensionality).\nThe primary contributions of our paper are as follows:\nWe introduce an efficient approach for modeling stochastic climate-economy models by combining the least-squares Monte Carlo method with deep learning techniques. It provides flexibility in handling various types of uncertainties, including both parametric and stochastic process uncertainties.\nWe formulate a stochastic version of the DICE model using the sources of uncertainty as identified by Nordhaus (2018 ###reference_b26###). Notably, it does not rely on discretizing the underlying probability distributions that is usually performed in Monte-Carlo type analyses for the sake of model tractability.\nWe perform comprehensive numerical experiments and discuss numerical techniques to significantly reduce the computational burden and address several peculiarities of the model. Moreover, we demonstrate how to perform uncertainty quantification (UQ) to understand how uncertainties in the model propagate and affect outputs (such as projections for the evolution of atmospheric temperature).\nThe paper is organized as follows.\nSection 2 ###reference_### gives a description of the considered model.\nSection 3 ###reference_### describes the numerical method used to solve the model.\nSection 4 ###reference_### provides a comprehensive numerical study.\nSection 5 ###reference_### concludes."
10
+ },
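To make the regression idea described in this introduction concrete, the following minimal toy example (not part of the paper; all quantities are invented, with a known closed-form answer) estimates a conditional expectation by least-squares regression on simulated samples. This substitution of a regression estimate for a conditional expectation is the core building block that LSMC performs inside the Bellman recursion:

```python
import numpy as np

# Toy illustration of the LSMC regression idea: estimate the conditional
# expectation E[Y | X] from simulated samples via least-squares regression.
# Here Y = (X + W)^2 with W ~ N(0, 1), so the exact answer is E[Y | X] = X^2 + 1.
rng = np.random.default_rng(0)
M = 100_000                          # number of Monte Carlo samples
X = rng.uniform(-2.0, 2.0, size=M)   # simulated "state" values
W = rng.standard_normal(M)           # disturbance terms
Y = (X + W) ** 2                     # realized "value function" samples

# Least-squares fit of Y on a polynomial basis in X (degree 2 suffices here).
coeffs = np.polyfit(X, Y, deg=2)
cond_exp_hat = np.polyval(coeffs, 1.0)

print(f"LSMC-style estimate of E[Y | X=1]: {cond_exp_hat:.3f} (exact: 2.000)")
```

Note that only samples of the transition are needed; the transition density itself never appears, which is the flexibility the text attributes to the LSMC approach.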
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Model description",
15
+ "text": "In this section, we present the DICE-2016R2 model as a classical example of a recursive climate-economy model.\nThis version of the DICE model was used in Nordhaus (2018 ###reference_b26###).\nIt includes parameter uncertainties in equilibrium temperature sensitivity, the damage function coefficient and the equilibrium carbon concentration in the upper strata, as well as process uncertainties in the growth rate of total factor productivity and the growth rate of decarbonization.\nThe original deterministic DICE model seeks to find policies that maximize a social welfare function, which models the discounted sum of population-weighted utility of per capita consumption:\nwhere is a discount factor, is the world population, denotes per capita consumption, and the time index corresponds to -year steps.\nThe policy consists of two control variables, per capita consumption and a carbon mitigation rate .\nThe utility function has constant elasticity with respect to per capita consumption, , with a risk-aversion parameter (the case corresponds to logarithmic utility).\nThe model features six state variables: economic capital , the concentration of carbon in the atmosphere, the upper oceans, and the lower oceans, , and the global mean temperature of the Earth\u2019s surface and the deep oceans, .\nThe evolution of the economic and geophysical sectors is governed by the dynamics described below.\nThe economic system: Gross output is modeled by a Cobb\u2013Douglas production function of capital, labor, and technology, , where and are the output elasticities of capital and labor, respectively.\nHere, denotes total factor productivity (see Subsection 2.1 ###reference_###), representing technological progress and efficiency improvements over time.\nThe DICE model incorporates economic damages from climate change, represented by a damage function that is quadratic in the global mean surface temperature, , where is the damage coefficient (see Subsection 2.1 ###reference_###).\nThese damages can be mitigated by emission reduction, controlled by the policy .\nReducing emissions incurs abatement costs (see Table 1 ###reference_### for their specification).\nNet output is then given by gross output reduced by damages and abatement costs, , and economic capital evolves according to the following dynamics:\nwhere is total consumption, and is the rate of depreciation of economic capital.\nThe carbon cycle: The carbon cycle is modeled by three reservoirs, which follow the dynamics:\nwhere is a coefficient matrix, is total emissions (in billions of tons per year), and is the conversion factor of mass into the equivalent mass of carbon.\nEmissions are equal to uncontrolled industrial emissions, given by a level of carbon intensity (see Subsection 2.1 ###reference_###) times gross output, reduced by the emission reduction rate , plus exogenous land-use emissions , i.e. 
.\nThe temperature module: The relationship between greenhouse gas accumulation and increased radiative forcing is described by the function:\nwhich models the change in total radiative forcings from anthropogenic sources such as .\nIt consists of exogenous forcings plus forcings due to atmospheric concentrations of .\nHere, is the preindustrial atmospheric carbon concentration.\nThe evolution of global mean temperatures follows the dynamics:\nwhere is a coefficient matrix, and is a model parameter.\nIt is important to note that is measured in terms of the absolute increase in temperature relative to the year 1900.\nIn DICE-2016R2, is assumed to be non-negative with an upper bound of 1, i.e. no negative industrial emissions are allowed.\nTable 1 ###reference_### summarizes the main coefficients of the model.\nNote that the number of time steps is chosen such that corresponds to the year 2015, while corresponds to the year 2500.\nThe social cost of carbon (SCC): The social cost of carbon (SCC) is a measure of the economic harm caused by emitting one additional ton of carbon dioxide () into the atmosphere.\nIt represents the present value of the damages associated with a marginal increase in emissions in a given year.\nThe SCC is typically expressed in monetary terms (e.g. dollars per ton of ) and is used to help policymakers evaluate the benefits of reducing emissions and compare the costs of different climate policies or regulatory actions aimed at mitigating climate change.\nThe SCC can be calculated in the DICE model by:\nwhere denotes the value function at time , and represents the to carbon mass transformation coefficient."
16
+ },
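As an illustration of the dynamics in this section, the sketch below advances the six state variables by one 5-year step. All numeric coefficients (PHI_M, PHI_T, GAMMA, DELTA, PSI, ETA, XI1), the initial state, and the exogenous inputs are rough placeholders standing in for the calibrated Table 1 values, which are not reproduced here; only the structure of the update mirrors the equations in the text, and the printed magnitudes are not meaningful:

```python
import numpy as np

# One 5-year DICE-style state transition. All numbers below are illustrative
# placeholders, not the calibrated DICE-2016R2 coefficients from Table 1.
GAMMA, DELTA, PSI = 0.3, 0.1, 0.00236    # capital elasticity, depreciation, damage coeff.
ETA, XI1, ZETA = 3.68, 0.1, 3.666        # forcing slope, temperature coeff., CO2-to-C mass factor
M_AT_1750 = 588.0                        # preindustrial atmospheric carbon (placeholder)
PHI_M = np.array([[0.88, 0.196, 0.0],    # placeholder carbon-cycle transition matrix
                  [0.12, 0.797, 0.001],
                  [0.0,  0.007, 0.999]])
PHI_T = np.array([[0.87, 0.009],         # placeholder temperature transition matrix
                  [0.025, 0.975]])

def dice_step(K, M, T, mu, C, A, L, sigma, Lam, E_land, F_ex):
    """Advance capital K, carbon reservoirs M (3,) and temperatures T (2,) by 5 years."""
    Y_gross = A * K**GAMMA * L**(1.0 - GAMMA)          # Cobb-Douglas gross output
    Omega = PSI * T[0]**2                              # fractional climate damages
    Y_net = (1.0 - Omega - Lam) * Y_gross              # net of damages and abatement costs
    K_next = (1.0 - DELTA)**5 * K + 5.0 * (Y_net - C)  # capital accumulation
    E = sigma * (1.0 - mu) * Y_gross + E_land          # total emissions
    M_next = PHI_M @ M + np.array([5.0 * E / ZETA, 0.0, 0.0])
    F = ETA * np.log2(M_next[0] / M_AT_1750) + F_ex    # radiative forcing
    T_next = PHI_T @ T + np.array([XI1 * F, 0.0])
    return K_next, M_next, T_next

K1, M1, T1 = dice_step(K=223.0, M=np.array([851.0, 460.0, 1740.0]),
                       T=np.array([0.85, 0.0068]), mu=0.03, C=55.0, A=5.1,
                       L=7403.0, sigma=0.35, Lam=0.001, E_land=2.6, F_ex=0.5)
print(K1, M1, T1)
```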
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Modeling uncertainty",
21
+ "text": "The dynamics presented in the DICE model so far are purely deterministic, assuming precise knowledge of the future evolution of all exogenous variables for centuries ahead.\nThis approach is an unrealistic simplification.\nA reasonable way to address this issue is to introduce probabilistic distributions into the model to account for uncertainties about future outcomes.\nIn this paper, we distinguish between two types of uncertainties: stochastic process uncertainty, and initial parameter uncertainty.\nStochastic process uncertainty refers to the uncertainty in the evolution of future trajectories of exogenous variables.\nA classical example from quantitative finance is Brownian motion, , modeled by and for and , where denotes the normal distribution with expected value and variance .\nIncorporating stochastic process uncertainties is challenging because the uncertainty propagates over time, increasing the volatility of the variable\u2019s distribution.\nThe LSMC method we present below is highly sensitive to introduced volatility, making this incorporation a significant challenge that few contributions in the climate-economy literature have successfully addressed.\nInitial parameter uncertainty refers to uncertainty about one or more parameters in the system that remain fixed over time.\nA common method to study this uncertainty is a perturbation analysis, where parameters are sampled, the model is solved, and the process is repeated.\nHowever, this approach does not accurately depict the model\u2019s evolution over time, as an agent in the model would consider overall outcome uncertainty, not individual instances of the uncertain parameter.\nAnother related concept is Bayesian learning (Kelly and Kolstad, 1999 ###reference_b18###), where the parameter distribution evolves over time as more information about the system is revealed.\nThis type of uncertainty can be treated by the LSMC approach presented in this paper, but we chose not to include this in the current study, leaving it for future work.\nIdentifying reasonable uncertainties to include in the model is challenging, as some uncertainties might be more significant than others.\nAdvanced statistical analyses are required to make educated assumptions about probability distributions for the climate and economc system.\nFor our paper, we incorporate five uncertainties into the DICE model, as identified by Nordhaus (2018 ###reference_b26###).\nThese include stochastic process uncertainties in the growth rates of total factor productivity and the rate of decarbonization , as well as initial parameter uncertainties in the temperature-sensitivity coefficient, the damage coefficient, and the carbon cycle coefficient.\nWe emphasize that our method is not limited to these specific uncertainties, and we now explain our choices in more detail.\nProductivity growth. 
Assuming a Cobb-Douglas production function, the growth in total factor productivity models the growth in output that is not explained by growth in inputs of labor and capital used in production.\nThe DICE model assumes evolves according to , where is the deterministic growth rate which is specified in Table 1 ###reference_###.\nNordhaus (2018 ###reference_b26###) assumes is normally distributed with mean and standard deviation .\nBut in this case, using the dynamics for the growth rate, we can model as normally distributed with mean and standard deviation .\nIn order to remove extreme cases, we truncate this distribution at the mean two standard deviations.\nThe evolution of is shown in Figure 1 ###reference_###.\n###figure_1### The rate of decarbonization. Uncontrolled industrial emissions are given by a level of carbon intensity, , times gross output.\nThe DICE model assumes evolves according to , with a deterministic growth rate which is specified in Table 1 ###reference_###.\nNordhaus (2018 ###reference_b26###) assumes is normally distributed with mean and standard deviation .\nWe therefore model as normally distributed with mean and standard deviation , truncating the distribution at the mean two standard deviations in order to remove extreme cases.\nThe evolution of is shown in Figure 2 ###reference_###.\n###figure_2### Equilibrium temperature sensitivity (ETS). The equilibrium temperature sensitivity measures how much the Earth\u2019s surface will warm in response to a doubling of atmospheric .\nThe DICE model assumes the ETS is equal to for an equilibrium doubling.\nIn Table 1 ###reference_###, the ETS corresponds to the denominator in the definition of .\nNordhaus (2018 ###reference_b26###) models the ETS as a log-normal distribution, with .\nWe do the same, truncating at the mean two standard deviations.\nThe damage function. The DICE model assumes climate-induced economic damages are a quadratic function of the increase in atmospheric temperature.\nIt is modeled as a fractional loss of global output from greenhouse warming, , where denotes a damage coefficient representing the severity of the economic impact of global warming.\nThe DICE model assumes to be equal to 0.00236.\nNordhaus (2018 ###reference_b26###) models the by a normal distribution with mean 0.00236 and standard deviation 0.00118.\nWe use the same distribution but truncate it at the mean minus one standard deviation, and at the mean plus two standard deviations.\nThe carbon cycle. 
The carbon cycle coefficient models the equilibrium concentration of carbon in the biosphere and upper level of the oceans.\nThe DICE model assumes it to be equal to 360 gigatonnes of carbon (GtC).\nIn Table 1 ###reference_###, it corresponds to the value 360 appearing in the definitions of and .\nNordhaus (2018 ###reference_b26###) models this coefficient as a log-normal distribution, with \nWe do the same, truncating at the mean two standard deviations.\n###figure_3### Another type of uncertainty is parametric uncertainty, where the value of a coefficient can change over time as it is re-drawn at each point in time.\nThis type of uncertainty lies between the stochastic process and the initial parameter uncertainty.\nAlthough we did not include it in our study, it is straightforward to incorporate and solve using our method.\nAssuming implies a roughly probability of being negative.\nThis is a non-negligible scenario.\nGiven that the DICE model aims to combine equations for the economy and climate, it is highly questionable to assume the damage coefficient could be below or just above zero.\nMoreover, the assumption of a log-normal distribution for the equilibrium temperature sensitivity and the carbon cycle coefficient also entails a non-negligible probability of those coefficients being close to zero.\nNordhaus (2018 ###reference_b26###) avoids this issue by discretizing the distributions, separating them into quintiles, and then calculating the expected values of the random variables within those quintiles.\nThese expected values are taken as realizations of discrete uncertain variables, yielding sufficiently positive lowest realizations for the coefficients.\nInspired by this approach, we also truncate the distributions of the random variables, however, without discretizing them.\nThis avoids issues with too low damage coefficients and temperature sensitivities, as well as extreme growth rates for total factor productivity and carbon intensity."
22
+ },
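The truncation scheme described in this subsection can be implemented directly; a sketch follows. The normal-distribution parameters for the damage coefficient are the ones quoted in the text, while the log-normal parameters for the equilibrium temperature sensitivity are illustrative placeholders (the calibrated values from Nordhaus (2018) are not reproduced here):

```python
import numpy as np
from scipy.stats import truncnorm

# Sketch of sampling the truncated uncertainty distributions described above.
rng = np.random.default_rng(42)

def truncated_normal(mean, sd, lower_sds, upper_sds, size, rng):
    """Normal(mean, sd) truncated at mean - lower_sds*sd and mean + upper_sds*sd."""
    return truncnorm.rvs(-lower_sds, upper_sds, loc=mean, scale=sd,
                         size=size, random_state=rng)

def truncated_lognormal(mu, sigma, size, rng):
    """Log-normal truncated at its mean +/- two standard deviations (rejection sampling)."""
    m = np.exp(mu + 0.5 * sigma**2)              # log-normal mean
    s = m * np.sqrt(np.exp(sigma**2) - 1.0)      # log-normal standard deviation
    out = np.empty(0)
    while out.size < size:
        draw = rng.lognormal(mu, sigma, size=2 * size)
        out = np.concatenate([out, draw[(draw > m - 2 * s) & (draw < m + 2 * s)]])
    return out[:size]

# Damage coefficient: truncated at mean - 1 sd (below) and mean + 2 sd (above).
psi = truncated_normal(0.00236, 0.00118, lower_sds=1, upper_sds=2, size=10_000, rng=rng)
# Equilibrium temperature sensitivity: truncated log-normal (placeholder parameters).
ets = truncated_lognormal(mu=np.log(3.1), sigma=0.25, size=10_000, rng=rng)
print(psi.min(), psi.max(), ets.mean())
```

The rejection step for the log-normal keeps only draws inside the two-standard-deviation band, mirroring the truncation used in the text without any discretization of the distribution.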
23
+ {
24
+ "section_id": "3",
25
+ "parent_section_id": null,
26
+ "section_name": "The deep least-squares Monte Carlo method",
27
+ "text": "The numerical solution of the model is achieved using the endogenous state least-squares Monte Carlo (LSMC) algorithm with control randomization, as introduced by Kharroubi et al. (2014 ###reference_b20###) and adapted for expected utility optimal stochastic control problems by Andr\u00e9asson and Shevchenko (2022 ###reference_b3###).\nThis method approximates the conditional expectation of the value function in the Bellman equation using regression with a quadratic loss function applied to the transformed value function.\nTypically, regression basis functions are ordinary polynomials of the state and control variables, usually up to the third order.\nIn our implementation, we use deep neural networks to approximate the regression predictor.\nTo mitigate transformation bias in the regression estimate of the conditional expectation, we employ the smearing estimate as proposed by Andr\u00e9asson and Shevchenko (2022 ###reference_b3###).\nBelow is a brief description of the LSMC algorithm.\nLet correspond to time points in the interval .\nConsider the standard discrete dynamic programming problem with the objective to maximize the expected value of the utility-based total reward function\nwhere is a control, is a controlled state variable, and are reward functions, is a time discount factor, and the expectation is conditional on the initial state and following the policy .\nThe evolution of the state variable is specified by a transition function such that\nwhere are independent disturbance terms, i.e. the state of the next period depends on the current state\u2019s value, the current period\u2019s control decision, and the realisation of the disturbance term.\nThis problem can be solved using the backward recursion of the Bellman equation, starting from and then solving recursively:\nwhere the expectation is conditional on the state and the policy at time .\nFor further details on dynamic programming, we refer the interested reader to the excellent monograph by Fleming and Soner (2006 ###reference_b10###) on the subject.\nUsing Equation (8 ###reference_###), the optimal control can be found by solving:\nHere, denotes a set of admissible values of , which may depend on .\nWhen the number of state variables is more than three, it usually becomes computationally infeasible to use quadrature-based methods to evaluate the conditional expectation in (8 ###reference_###), making simulation methods like LSMC preferable.\nThe LSMC method approximates the conditional expectation in equation (8 ###reference_###):\nusing a regression scheme with the states and randomized policies as independent variables, and as the response variable.\nThe approximation function is denoted .\nThe method is implemented in two stages:\nForward simulation: For , the random state, control, disturbance variables as well as the transitioned state are simulated as , , , and , , where is sampled independently from .\nBackward recursion: Starting from the boundary condition , the optimal stochastic control problem in Equation (6 ###reference_###) is solved using the recursion in Equation (8 ###reference_###), as detailed in Algorithm 1 ###reference_###."
28
+ },
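A schematic implementation of the two stages (forward simulation with randomized controls, then backward recursion with a regression estimate of the conditional expectation) is sketched below for a toy one-dimensional consumption problem. The rewards, transition, and polynomial basis are illustrative stand-ins; in the paper's setting the basis regression is replaced by a deep neural network and the grid search by a proper optimizer:

```python
import numpy as np

# Schematic two-stage LSMC with control randomization for a toy problem:
# consume a fraction `a` of wealth `x` each period, invest the remainder.
rng = np.random.default_rng(1)
N, M, beta = 10, 20_000, 0.97

def reward(x, a):                       # running reward R_t (log utility of consumption)
    return np.log(np.maximum(a * x, 1e-12))

def terminal(x):                        # terminal reward R_N
    return np.log(np.maximum(x, 1e-12))

def transition(x, a, w):                # stochastic state transition T_t(x, a, w)
    return (1.0 - a) * x * np.exp(0.05 + 0.1 * w)

def basis(x, a):                        # polynomial basis in state and control
    return np.column_stack([np.ones_like(x), x, a, x * a, x**2, a**2])

# Stage 1: forward simulation with randomized controls.
X = np.empty((N + 1, M))
A = np.empty((N, M))
X[0] = rng.uniform(0.5, 2.0, M)
for t in range(N):
    A[t] = rng.uniform(0.05, 0.95, M)   # controls drawn independently of the state
    X[t + 1] = transition(X[t], A[t], rng.standard_normal(M))

# Stage 2: backward recursion; regress V_{t+1}(x_{t+1}) on (x_t, pi_t).
V = terminal(X[N])
grid = np.linspace(0.05, 0.95, 19)      # candidate controls for the maximization
for t in range(N - 1, -1, -1):
    theta, *_ = np.linalg.lstsq(basis(X[t], A[t]), V, rcond=None)
    cand = np.stack([reward(X[t], a) + beta * basis(X[t], np.full(M, a)) @ theta
                     for a in grid])
    V = cand.max(axis=0)                # value estimates at the sampled states x_t
print("estimated value at t=0 (sample mean):", round(float(V.mean()), 4))
```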
29
+ {
30
+ "section_id": "3.1",
31
+ "parent_section_id": "3",
32
+ "section_name": "Transformation bias and heteroskedasticity",
33
+ "text": "To mitigate challenges in approximating the value function due to the extreme curvature of utility functions, one can introduce a transformation that mirrors the shape of the value function.\nIn our implementation, we use:\nAt each time , the transformed value function is approximated using the least-squares regression:\nwhere , are zero mean and independent error terms, is a parametrized family of predictor functions, and the inverse of the transformation function.\nThen,\nwhere is the distribution of the error term .\nIn the absence of a closed-form solution for the integral in Equation (13 ###reference_###), the empirical distribution of the residuals:\ncan be used to approximate this integral.\nConsequently, the estimate of becomes:\nFor the chosen transformation in (11 ###reference_###), Equation (15 ###reference_###) simplifies to:\nIn Equation (16 ###reference_###), the mean of the transformed residuals does not depend on , simplifying the function evaluation of , as the mean can be precomputed and reused.\nIf heteroskedasticity is present in the regression with respect to the state and control variables, a method that accounts for heteroskedasticity is required.\nIn this case, the conditional variance can be modelled as a function of covariates:\nwhere is another parametrized family of predictor functions.\nThere are various standard methods to estimate and the smearing estimate with controlled heteroskedasticity can then be used as discussed in Andr\u00e9asson and Shevchenko (2022 ###reference_b3###).\nThe method presented in Algorithm 1 ###reference_### is called the regression surface approach.\nA common alternative is the realized value approach, where the value function in Equation (8 ###reference_###) is not computed by using the approximation of the conditional expectation (which was needed to find the optimal policy according to Equation (9 ###reference_###)), but rather by computing the discounted sum of rewards along one trajectory starting from the state at time .\nWhile promising greater numerical stability than the regression surface approach, the realized value approach requires calculating optimal decisions along the individual trajectories, which comes at a significant computational cost.\nFor details on this approach, we refer to Andr\u00e9asson and Shevchenko (2022 ###reference_b3###) and references therein.\nOriginally, we also implemented the realized value approach, however, we found that the regression surface approach provided a sufficiently accurate solution for the number of sample points chosen in our numerical study in Section 4 ###reference_###.\nAnother approach worth mentioning is the regress later LSMC method.\nHere, the value function is approximated directly rather than the conditional expectation: .\nFinding the optimal policy in (9 ###reference_###) then requires the explicit calculation of the conditional expectation:\neither analytically or numerically with quadrature methods.\nHowever, as mentioned earlier, this approach becomes infeasible in the case of many simultaneous shocks due to the high dimensionality of the required integration."
34
+ },
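The smearing correction described here can be illustrated in a few lines. The sketch below uses H = log and a synthetic log-linear model, which are illustrative choices rather than the transformation actually used in the paper; it contrasts the naive back-transform with the residual-averaged smearing estimate:

```python
import numpy as np

# Minimal sketch of the smearing estimate for transformation bias: regress
# H(response) on the covariates, then invert by averaging H^{-1} over the
# empirical residuals instead of naively applying H^{-1} to the fitted value.
rng = np.random.default_rng(7)
M = 50_000
x = rng.uniform(1.0, 3.0, M)
v = np.exp(1.0 + 0.5 * x + 0.3 * rng.standard_normal(M))    # positive "values"

H, H_inv = np.log, np.exp
design = np.column_stack([np.ones_like(x), x])
theta, *_ = np.linalg.lstsq(design, H(v), rcond=None)       # regression on transformed scale
resid = H(v) - design @ theta                               # empirical residuals

x0 = np.array([1.0, 2.0])                                   # evaluate at x = 2
naive = H_inv(x0 @ theta)                                   # biased back-transform
smearing = np.mean(H_inv(x0 @ theta + resid))               # smearing estimate (Eq. (15)-style)
exact = np.exp(1.0 + 0.5 * 2.0 + 0.5 * 0.3**2)              # true conditional mean
print(f"naive: {naive:.3f}, smearing: {smearing:.3f}, exact: {exact:.3f}")
```

For this multiplicative choice of H the mean of the exponentiated residuals factors out of the sum, which is the precomputation shortcut mentioned in the text.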
35
+ {
36
+ "section_id": "3.2",
37
+ "parent_section_id": "3",
38
+ "section_name": "Neural networks",
39
+ "text": "In our paper, we choose for the parametrized family of functions the class of deep neural networks.\nThis algorithmically generated class of functions has found tremendous success in all fields of science.\nOver the years, it has been shown that neural networks can act as surrogate functions in many models, due to their far reaching approximation capabilities.\nTheorems that establish approximations are referred to as universal approximation theorems (UAT); notable contributions include Cybenko (1989 ###reference_b8###) and Hornik (1991 ###reference_b14###).\nThese theorems establish the topological density of sets of neural networks in various topological spaces.\nOne speaks of the universal approximation property (Kratsios, 2021 ###reference_b22###) of a class of neural networks.\nUnfortunately, these theorems are usually non-constructive.\nTo numerically find optimal neural networks, one typically combines backpropagation (see, for example, Rumelhart et al. (1986 ###reference_b31###)) with ideas from stochastic approximation (Robbins and Monro, 1951 ###reference_b30###; Kiefer and Wolfowitz, 1952 ###reference_b21###; Dvoretzky, 1956 ###reference_b9###).\nAssuming sufficient integrability, the conditional expectation in Equation (10 ###reference_###) is the orthogonal projection of onto the subspace spanned by in the space of square-integrable random variables.\nThe universal approximation property of neural networks in this space (see, for instance, Hornik (1991 ###reference_b14###, Theorem 1)) then justifies the approximation of by for a suitably chosen neural network ."
40
+ },
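A minimal sketch of the regression step with a deep neural network in place of polynomial basis functions is given below (PyTorch; the architecture, synthetic data, and hyperparameters are illustrative assumptions, not the paper's settings):

```python
import torch
from torch import nn

# Small fully connected network trained with a quadratic (least-squares) loss
# to approximate a conditional expectation E[Y | X]; X stands in for the
# (state, control) covariates of the LSMC regression step.
torch.manual_seed(0)
M = 20_000
X = torch.rand(M, 3) * 2 - 1                        # synthetic covariates in [-1, 1]^3
Y = (X[:, :1] ** 2 - X[:, 1:2] * X[:, 2:3]) + 0.1 * torch.randn(M, 1)

net = nn.Sequential(                                # MLP predictor phi_theta
    nn.Linear(3, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(50):                             # backpropagation + mini-batch SGD
    for idx in torch.randperm(M).split(1024):
        opt.zero_grad()
        loss = loss_fn(net(X[idx]), Y[idx])
        loss.backward()
        opt.step()

with torch.no_grad():
    test = torch.tensor([[0.5, 0.2, -0.3]])
    print("prediction:", net(test).item(), "target:", 0.5**2 - 0.2 * (-0.3))
```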
41
+ {
42
+ "section_id": "3.3",
43
+ "parent_section_id": "3",
44
+ "section_name": "Uncertainty quantification",
45
+ "text": "Uncertainty quantification (UQ) is a research field focused on understanding how uncertainties in model inputs, parameters, and other factors propagate through models to affect their outputs.\nThis understanding is crucial for making informed decisions based on model predictions, particularly in complex systems where such decisions can have significant consequences.\nA key tool in UQ are Sobol\u2019 indices (Sobol\u2019, 2001 ###reference_b34###), which are quantitative measures used in sensitivity analysis to apportion the variance of a model output to different input variables or combinations of input variables.\nBy identifying the most important input variables and their interactions, Sobol\u2019 indices guide efforts to sort out the main factors which should be studied with care in complex models.\nSobol\u2019 indices provide a comprehensive view of how input variables and their interactions influence model outputs.\nThey can be applied to any type of model, regardless of its complexity or the nature of its inputs and outputs.\nThey are particularly valuable because they capture the effects of nonlinear interactions among input variables, which is critical for understanding complex systems.\nHowever, calculating Sobol\u2019 indices requires a large number of model evaluations, which can be computationally expensive for complex models.\nThe accurate estimation of Sobol\u2019 indices also depends on efficient and adequate sampling of the input space.\nDenote our stochastic DICE model by , which maps model inputs (such as the temperature-sensitivity coefficient) to model outputs (such as the projection of the global mean surface temperature in the year 2100).\nThere are two main types of Sobol\u2019 indices.\nFirst-order Sobol\u2019 indices : These indices represent the contribution of a single input variable to the output variance , ignoring interaction effects with other variables:\nwhere denotes the conditional expectation of given with respect to all inputs except for , and denotes the variance with respect to .\nTotal-order Sobol\u2019 indices : These indices represent the contribution of an input variable to the output variance, including all interactions with other variables.\nThey are defined as:\nwhere denotes the conditional expectation of with respect to given all inputs except for , and denotes the variance with respect to all inputs except for .\nFirst- and total-order Sobol\u2019 indices help determine which input variables are the most influential.\nVariables with high first-order indices have a strong direct effect, while those with high total-order indices are significant due to their interactions with other variables.\nIn Section 4 ###reference_###, we will compute Sobol\u2019 indices for our five identified uncertainties and examine their effect on the most important model parameters.\nIt is important to note that computing Sobol\u2019 indices in conjunction with the LSMC method involves solving the model with the backwards recursion (8 ###reference_###) only once, and then generating a sufficiently large amount of forward trajectories to estimate the indices and ."
46
+ },
47
+ {
48
+ "section_id": "3.4",
49
+ "parent_section_id": "3",
50
+ "section_name": "Comparison with other methods",
51
+ "text": "Jensen and Traeger (2014 ###reference_b17###) analyze long-term economic growth uncertainty in a DICE based assessment model with an infinite-horizon.\nThey express uncertainty in terms of stochastic shocks to the growth rate of total factor productivity.\nThe value function is approximated by Chebyshev polynomials, and the system is solved by value function iteration.\nThe base model has only 3 physical state variables: capital , atmospheric carbon , and technology level .\nNordhaus (2018 ###reference_b26###) considers the same DICE model version as the one used in this paper.\nFive uncertainties are identified, the same as those explained in Subsection 2.1 ###reference_###.\nThese uncertainties are treated as initial parameter uncertainties.\nThe distributions are discretized to reduce the computational burden, thereby reducing the number of possible scenarios from an uncountably infinite amount to just a few thousands.\nA Monte-Carlo based parameter perturbation analysis is performed, where parameters are sampled, and then the corresponding deterministic version of the DICE model is solved.\nIn contrast to Nordhaus (2018 ###reference_b26###), we don\u2019t need to discretize the distributions, and we need to solve the model only once.\nCai and Lontzek (2019 ###reference_b7###) also study a stochastic version of the DICE model, extending the deterministic 6-dimensional model to a stochastic 9-dimensional model.\nTwo additional model dimensions are due to uncertainty in the evolution of total factor productivity, and one additional dimension is due to a stochastic tipping point process.\nThe stochastic processes are discretized, and the resulting model is solved by value function iteration, where the value function is approximated by Chebychev polynomials.\nThe model is solved with the Blue Waters supercomputer, using 110,688 cores in parallel, with computation times of up to 8 hours.\nWhile we do not include a tipping point process in this paper, our simulation based method drastically reduces the computational burden by solving our 11-dimensional (in contrast to the 9-dimensional version of Cai and Lontzek (2019 ###reference_b7###)) model formulation on a 64 core machine within around 18 hours of computation time, depending on the amount of numerical precision that is required for the solutions.\nExpressed in terms of pure core hours (i.e. number of cores multiplied by total computing time), this amounts to a reduction in computing time of more than .\nIkefuji et al. (2020 ###reference_b15###) formulate a stochastic version of the DICE model considering one uncertainty at a time: a) uncertainty in the damage-abatement fraction, b) uncertainty in the damage parameter, c) uncertainty in the emissions-to-output ratio, and d) uncertainty in total factor productivity .\nThese uncertainties are introduced by multiplying the corresponding deterministic DICE variables by stochastic disturbances.\nThus, the number of state variables is the same as in the deterministic DICE (6).\nTo the best of our knowledge, this is the only attempt to solve a stochastic version of the DICE model by using an LSMC type approach.\nThey use least-squares regression with polynomial basis functions to approximate the value function, i.e. 
in the spirit of regress later LSMC.\nHere, we note that their regression type Monte Carlo algorithm setup omits the integration for the conditional expectation in the Bellman equation, assuming the random disturbance is known in the transition of state variables.\nIn principle, the standard regress later LSMC can be implemented here to handle this type of uncertainty but it will be a subject of the curse of dimensionality in the case of more than one shock.\nFriedl et al. (2023 ###reference_b11###) present a method for solving integrated assessment models and performing uncertainty quantification.\nThey exemplify their approach on a version of the DICE model with uncertainties in equilibrium temperature sensitivity (that contains a Bayesian learning component), and the damage function (represented by a stochastic tipping process).\nFirst, a deep neural network is trained to output, in particular, the optimal policies and value function at a given point in time, and then a Gaussian process-based model is trained to approximate quantities of interest such as the social cost of carbon in order to speed up the evaluation when calculating UQ metrics.\nIn contrast to Friedl et al. (2023 ###reference_b11###), our method approximates the conditional expectation rather than the policy functions, and then finds those by running an optimizer to solve Equation (9 ###reference_###).\nApproximating by a regression scheme is a challenging task, since the presence of the bounds (i.e. ) require a very careful choice of an appropriate regression scheme that can effectively interpolate the optimal policy, especially in the presence of extended periods when the policy is on the boundary.\nOur approach avoids this issue by finding the optimal policy through an optimizer which, once the conditional expectation has been approximated, can be performed with a high degree of numerical precision and speed.\nMoreover, the deep LSMC method requires performing a least-squares regression, where the loss function is the squared distance between the object of interest and the neural network prediction.\nThis choice of loss function is significantly simpler, as it avoids the eleven individual components that enter the loss function based on an elaborate set of first-order conditions that are needed in the solution of Friedl et al. (2023 ###reference_b11###).\nFinally, in contrast to Friedl et al. (2023 ###reference_b11###), we find that there is no need to train an additional Gaussian process-based surrogate model to perform UQ for the quantities of interest (such as the social cost of carbon).\nOnce the backward recursion (Equation (8 ###reference_###)) has been performed, a large amount of optimal trajectories for different realizations of uncertainties can be computed easily in order to perform UQ for the quantities of interest."
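For contrast with the neural-network regression used in this paper, the following is a minimal sketch of the classical polynomial least-squares step that underlies regress-later-type LSMC; the one-dimensional state, the toy target, and the degree are all illustrative assumptions.

```python
import numpy as np

# Regress realized next-period values y on polynomial features of the
# current state x; the fitted values approximate E[y | x].
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 5000)
y = np.sin(2 * x) + 0.3 * rng.standard_normal(5000)  # toy value samples

deg = 5
Phi = np.vander(x, deg + 1)                 # polynomial basis matrix
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
cond_exp = Phi @ coef                       # fitted conditional expectation
```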
52
+ },
53
+ {
54
+ "section_id": "4",
55
+ "parent_section_id": null,
56
+ "section_name": "Numerical study",
57
+ "text": "In this section, we present the numerical results from applying the least-squares Monte Carlo method with transformation bias adjustment and neural network approximation of conditional expectations.\nFor clarity, we emphazise that our state vector consists of 11 variables: the six variables from the deterministic formulation of the DICE model (, , ), the two stochastic processes and , as well as the three parameters discussed in Subsection 2.1 ###reference_### (temperature-sensitivity coefficient, damage coefficient and carbon cycle coefficient).\nFor the backward recursion and least-squares approximation of the value function, we use sample points in the 11-dimensional state space.\nFigure 6 ###reference_### is based on forward trajectories, while the statistics reported in Table 2 ###reference_### are based on a sample of size .\nTo find the optimal policies in (9 ###reference_###), we use the limited-memory Broyden\u2013Fletcher\u2013Goldfarb\u2013Shanno algorithm with box constraints (L-BFGS-B).\nOn a 64 core machine, it took between 9 hours (for samples) and 18 hours (for samples) to perform the backward recursion.\nComputing optimal forward trajectories then typically took around 15 minutes for trajectories, and 1 hour for trajectories.\nThe initial year for the version of DICE model used in Nordhaus (2018 ###reference_b26###) is 2015, not 2020.\nFor illustration purposes, during calculation of the optimal forward trajectories, we made the first policy decision deterministic and equal to the optimal decision in the deterministic version of the model.\nThis amounts to starting the forward trajectories in the year 2020 with initial values that correspond to the optimal deterministic DICE states identified in Nordhaus (2018 ###reference_b26###).\nMoreover, the original DICE model is formulated as an infinite-horizon control problem, see Equation (1 ###reference_###).\nHowever, our formulation of the LSMC method as discussed in Section 3 ###reference_### assumes a finite time horizon with time steps ( in our case corresponding to being the year 2015, and being the year 2500).\nImposing a finite time horizon corresponds to a truncation of the problem, and one needs to choose an appropriate boundary reward function .\nSimilarly, as in Cai and Lontzek (2019 ###reference_b7###), our terminal reward function is computed by assuming that after 2500 the policies are fixed to , , and that the system evolves deterministically.\nThe reward is then equal to the discounted sum of population-weighted utility of per-capita consumption from following the fixed policies for another 100 time steps.\nDue to discounting and the large amount of time steps, it is assumed that a different choice of boundary reward that far ahead in the future should have a negligible impact on results for the twenty-first century.\nFor approximating conditional expectations, we use deep feedforward neural networks with two hidden layers, each containing 32 hidden nodes with hyperbolic tangent (tanh) as activation function, and a linear readout in the output layer.\nNeural network training is performed using minibatch stochastic gradient descent with the Adam optimizer.\nThe initial learning rate is set to and reduced to a minimum of during training.\nEarly stopping is implemented to avoid overfitting.\nDuring the backward recursion, the trained neural network from one step (e.g. 
step ) is used as the initial neural network for the next step\u2019s training (step ), which reduces computation time.\nFor this version of the stochastic DICE model, the transition equation (7 ###reference_###) can be separated into two transitions:\nwhere the deterministic transition to the post-decision variable precedes the transition .\nThis allows the conditional expectation in (8 ###reference_###) to be simplified to:\nThis method offers two main advantages: (1) dimension reduction in the covariates needed for the least-squares approximation of the conditional expectation, and (2) an increase in sampling efficiency by sampling only the post-decision states rather than both and .\nOur method benefits significantly from using post-decision variables, and we found a notable improvement in numerical precision.\nEconomic capital and total factor productivity can grow quite rapidly over time, especially in scenarios where large growth in meets a low consumption rate .\nThis poses an important numerical challenge, since an appropriate domain for sampling the state variables needs to be chosen with care.\nA popular solution to this issue, having been applied successfully in Jensen and Traeger (2014 ###reference_b17###), is to normalize economic capital as follows.\nFirst, we re-write output to express it in terms of labor-augmenting technology: , where .\nLet denote the deterministic trajectory of , where is fixed to be equal to the expected value.\nEconomic capital and output are then expressed in terms of units of effective labor: , and .\nThe state variable can also be substituted by and further normalized to .\nIn our simulations, we found that these normalization steps had a favorable impact on the precision of the numerical results.\nCalculating the social cost of carbon (5 ###reference_###) requires knowledge of partial derivatives of the value function with respect to atmospheric carbon concentration and economic capital.\nSince we do not have an analytic representation of the value function, we follow an approximation approach that was discussed in Traeger (2014 ###reference_b35###), where Chebychev polynomials were used to approximate the value function.\nAt each time , we approximate the value function by a neural network:\nfor a suitable parameter vector .\nThis approach strikes a balance between numerical precision and analytical tractability, applicable even in the presence of stochasticity.\nNote that the idea of approximating the value function by a neural network has already been carried out in Kelly and Kolstad (2001 ###reference_b19###) where, however, the neural network approximation was not used for computing the social cost of carbon.\nThe post-decision variables , representing the states after decision , have the same dimension as .\nThe sampling step in Algorithm 1 ###reference_### requires choosing an effective sampling distribution.\nOne standard approach would be to put a high-dimensional grid of uniformly drawn points around the deterministic DICE solution.\nHowever, in order to improve numerical precision, low-discrepancy grids are favourable in order to keep the number of sample points needed to a reasonable amount.\nLatin hypercube sampling offers a more favourable distribution of grid points compared to uniform sampling.\nWe chose to use Sobol\u2019 grid points (Sobol\u2019, 1967 ###reference_b33###), which offer even higher numerical precision compared to Latin hypercube samples.\nFigure 4 ###reference_### shows the point distribution of a uniform and of a 
Sobol\u2019 grid for comparison.\nWe found that using a low-discrepancy grid improved the numerical precision of the results.\n###figure_4### A major challenge in solving the model was to obtain stable estimates of the optimal emission mitigation rate .\nEstimating the optimal consumption rate was straightforward, but estimating required very precise estimates in the least-squares approximation of the conditional expectation.\nFigure 5 ###reference_### offers a partial explanation.\nIt illustrates a typical optimization surface when trying to find the optimal policies in Equation (9 ###reference_###), showing a steep curvature for and a much flatter surface for , indicating the need for precise numerical approximations and small tolerance values in the optimizer.\nWe see this issue as a consequence of the model setup.\nFor example, a low carbon intensity for times after leads to low emissions and mitigation costs, resulting in an almost negligible effect of the mitigation rate on the value .\nIn order to resolve this issue, very precise numerical approximations of conditional expectations based on a large number of well-spaced sample points as well as small tolerance values in the optimizer for were required.\n###figure_5### Each point in the state space can be optimized independently in Equation (9 ###reference_###).\nIn other words, when solving (9 ###reference_###) over a high-dimensional grid in state space, the individual optimization steps for each grid point can be executed in parallel.\nThis parallel optimization is implemented using Python\u2019s multiprocessing package over 64 cores, significantly reducing computation time and allowing for the usage of a reasonably large sample size without excessive computational costs.\nFigure 6 ###reference_### presents the evolution of the six most important variables over time if the optimal strategy is used, based on 500,000 independently simulated trajectories.\nThese six variables are the social cost of carbon , the global mean surface temperature , the carbon concentration in the atmosphere , the emission mitigation rate , total emissions , and damages .\nThe panels include the median trajectory (bold solid line), expected trajectory (dash-dotted line), the 25 and 75 quantiles (dashed lines), the 10 and 90 quantiles (solid lines) as well as the range of sampling paths between the 1 and 99 quantiles (shaded area).\nWe can observe a significant amount of uncertainty in all variables.\nMost notably, a significant fraction of scenarios sees full mitigation (i.e. 
) well before the year 2100 in the optimal case, though the median trajectory is a bit below the full mitigation in 2100.\nWe also observe that for temperature, the 1 quantile is approximately at 2.5\u2218C, while the 99 quantile is approximately at 4.5\u2218C.\nThe SCC is about US$200 in 2100 under the median trajectory, and between $150 and $300 for the 10% and 90% quantiles.\nFor all variables the median trajectory and deterministic DICE solution are virtually indistinguishable and very close to the expected trajectory.\n###figure_6### Figure 7 ###reference_### shows the first- and total-order Sobol\u2019 indices for various model outputs in relation to the 5 sources of uncertainty which we considered in the model.\nThe analyzed outputs are the social cost of carbon in 2020 (SCC), the mean surface temperature in the atmosphere in 2100 (TATM), the carbon concentration in the atmosphere in 2100 (MAT), output in 2100 (OUT), emissions in 2100 (EMI) as well as damages in 2100 (DAM).\nThe first-order Sobol\u2019 indices (left panel) illustrate the individual contribution of each input to the variance of the outputs, while the total-order Sobol\u2019 indices (right panel) capture the overall contribution, including interactions with other inputs.\nNote that first-order indices do not sum up to , as we have not taken into account higher order indices (second order, third order etc.).\nFrom Figure 7 ###reference_###, it is evident that output is predominantly impacted by total factor productivity, with both first-order and total-order indices close to 100, indicating a strong direct influence.\nIn contrast, the overall impact of the carbon intensity is negligible, with the indices being below 1 throughout.\nUncertainty in could potentially be excluded to simplify the model without sacrificing accuracy.\nThe temperature-sensitivity and damage coefficients exhibit high indices across all remaining outputs, implying their large influence on the model outputs.\nBoth of these coefficients moreover show a significant difference between their first-order and total-order indices for emissions, suggesting substantial interaction effects with other inputs.\nNotably, the almost negligible first- and total-order indices for the carbon cycle coefficient with respect to emissions is contrasted by significant indices for damages, as well as atmospheric temperatures and carbon concentrations.\nFinally, we observe that uncertainty in the social cost of carbon in 2020 is largely due to temperature-sensitivity and damage coefficients.\nThis does not come as a surprise, as the uncertainty in and propagates through time and is therefore not very pronounced in the year 2020 (compared to, for instance, the year 2100).\nOverall, Figure 7 ###reference_### highlights that:\nProductivity has a strong influence on output, but neither on damages nor on temperatures.\nThe carbon intensity has a completely negligible impact on the model.\nThe temperature-sensitivity and damage coefficients have very strong impacts on the model.\n###figure_7### Figure 8 ###reference_### shows the evolution of first-order Sobol\u2019 indices for our main variables over time, up to the year 2150.\nIt highlights the fact that the impact of the uncertain variables on the outputs changes over time.\nMost notably, the changes appear not to follow a linear pattern, especially when looking at emissions.\nThere, the impact of total factor productivity peaks around the year 2035, but declines rapidly afterwards.\nIn contrast, the impact of on the social 
cost of carbon gradually rises from 0 in the year 2020, to around 25 in the year 2150.\nThis does not come as a surprise, as it highlights the effect of the large initial uncertainty about parameters such as the temperature-sensitivity and damage coefficients, which combines with a negligible initial uncertainty in total factor productivity that grows over time.\nAnother interesting effect that can be observed is that the total sum of all first-order indices declines for emissions from above 95 in the year 2020 to slightly below 40 in the year 2150.\nThis motivates the insight that the impact due to interactions between the uncertain variables grows over time.\n###figure_8### Table 2 ###reference_### shows the key statistics for the major variables.\nIn terms of the coefficient of variation (CV), we can observe the highest degree of uncertainty in emissions, followed by the social cost of carbon, damages, and output.\nMost importantly, the interquartile range (IQR) of 0.64C for temperature and 1.4 for damages highlights the importance of considering the notable variations in projections due to the presence of uncertainty.\nMoreover, we can re-confirm the presence of noticeable differences between the mean, median and best guess values for some variables, which is in line with the observations of Nordhaus (2018 ###reference_b26###).\nDifferences between the mean and median values hint at the presence of skewness in the distribution of the variables, which can also be visually confirmed from Figure 6 ###reference_###.\nFinally, differences between the best guess estimates and the mean and median values show that in some cases, the best guess provides a reasonable approximation of the complex dynamics, whereas in other cases it does not, which again highlights the importance of explicitly including stochastic dynamics into climate-economy models.\nSD, IQR and CV refer to standard deviation, interquartile range and coefficient of variation, respectively.\nBG refers to best guess, which is the value calculated along the expected trajectory, assuming that uncertainties are set to their respective means."
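As an illustration of the low-discrepancy sampling described above, the following sketch draws 1024 points in 11 dimensions both uniformly and from a scrambled Sobol' sequence via scipy.stats.qmc, mirroring the comparison in Figure 4; the sampling-box bounds are placeholders, not the paper's actual state-space bounds.

```python
import numpy as np
from scipy.stats import qmc

d = 11
lower, upper = np.zeros(d), np.ones(d)  # placeholder state-space bounds

# Plain uniform grid of 1024 points.
uniform_pts = np.random.default_rng(0).uniform(lower, upper, size=(1024, d))

# Scrambled Sobol' sequence: 2**10 = 1024 low-discrepancy points,
# rescaled to the same box.
sobol = qmc.Sobol(d=d, scramble=True, seed=0)
sobol_pts = qmc.scale(sobol.random_base2(m=10), lower, upper)
```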
58
+ },
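The per-grid-point policy optimization with L-BFGS-B and Python multiprocessing described in this section could be organized along the following lines; the objective below is a smooth toy stand-in for the actual Bellman objective (utility plus the neural-network estimate of the discounted conditional expectation), and the bounds, tolerances, core count, and grid are illustrative assumptions.

```python
import numpy as np
from multiprocessing import Pool
from scipy.optimize import minimize

def neg_bellman_value(policy, x):
    """Toy stand-in for the negative Bellman objective at state x,
    as a function of the policy (consumption rate c, mitigation rate mu)."""
    c, mu = policy
    return -(np.log(1e-8 + c * x.sum()) - 0.5 * mu**2)

def solve_point(x):
    """Box-constrained L-BFGS-B solve for one state-space grid point."""
    res = minimize(neg_bellman_value, x0=np.array([0.7, 0.1]), args=(x,),
                   method="L-BFGS-B", bounds=[(1e-6, 1.0), (0.0, 1.0)],
                   options={"ftol": 1e-12, "gtol": 1e-10})
    return res.x

if __name__ == "__main__":
    grid = np.random.default_rng(1).random((4096, 11))  # sampled states
    with Pool(processes=64) as pool:                    # one worker per core
        policies = pool.map(solve_point, grid)
```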
59
+ {
60
+ "section_id": "5",
61
+ "parent_section_id": null,
62
+ "section_name": "Conclusions",
63
+ "text": "Climate-economy models are essential tools for informed decision-making, risk management, and strategic planning in the face of climate change.\nThese models provide a structured framework for analyzing the economic implications of climate policies and developing sustainable solutions to mitigate and adapt to climate change impacts.\nIncorporating stochastic models into climate-economy analyses is crucial for capturing the full spectrum of uncertainties, improving risk assessment, designing resilient policies, and enhancing the overall robustness and reliability of the models and their predictions.\nHowever, the complexity of capturing the intricate, multifaceted, and probabilistic nature of climate and economic systems, coupled with the computational challenges of handling large-scale, high-dimensional, and stochastic models, poses significant challenges in deriving efficient solutions in the presence of uncertainty.\nThis paper presents an advanced approach to modeling recursive stochastic climate-economy models using a deep least-squares Monte Carlo (LSMC) method.\nThe method\u2019s flexibility allows for the application to various types of uncertainties, including parametric and stochastic process uncertainties.\nThe integration of deep neural networks enables the handling of high-dimensional models in a tractable manner and within a reasonable computational budget, thus making stochastic climate-economy models more accessible to researchers and policymakers.\nThe methodology and findings presented here provide a solid foundation for future work in this vital area of research.\nFuture research should explore the incorporation of Bayesian learning mechanisms to update probabilities as more information becomes available over time.\nSince our approach can manage high-dimensional stochastic shocks, a natural next step is to study the impact of multi-dimensional probability distributions whose marginals are correlated.\nAdditionally, we aim to apply our method to the study of climate tipping points as well as the Regional Integrated model of Climate Change and the Economy (RICE) of Nordhaus and Yang (1996 ###reference_b27###).\nThese future steps could further refine the model\u2019s predictions and enhance its policy relevance.\nIt is important to note that IAMs, and the DICE model in particular, have limitations in the model structure and model parameters which are debated in the literature, see e.g. discussions in Pindyck (2017 ###reference_b29###).\nThe incorporation of uncertainties into these models is an important improvement.\nOur approach demonstrates significant advancements in modeling and solving complex stochastic climate-economy models.\nBy capturing a wide range of uncertainties and leveraging advanced computational techniques, we contribute to the development of more robust and reliable tools for climate policy analysis.\nThe continued evolution of these models will be critical in supporting effective and sustainable climate action in the years to come, and the deep least-squares Monte Carlo method provides a useful tool to solve stochastic climate-economy models."
64
+ }
65
+ ],
66
+ "appendix": [],
67
+ "tables": {
68
+ "1": {
69
+ "table_html": "<figure class=\"ltx_table\" id=\"S2.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Tabelle 1: </span>Parameters for the base model.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S2.T1.43\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S2.T1.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S2.T1.2.2.2\">\n time steps of years</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column\" id=\"S2.T1.4.4.2\">\n, (in billions)</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S2.T1.8.8\">\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.8.8.4\">\n, , , \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.12.12\">\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.12.12.4\">\n, , , \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.13.13\">\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.13.13.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.15.15\">\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.15.15.2\">\n, \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.21.21\">\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.21.21.6\">\n, , , , , \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.26.26\">\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.26.26.5\">\n, , , , \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.28.28\">\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.28.28.2\">\n,\n\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.32.32\">\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.32.32.4\">\n, , , \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.35.35\">\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.35.35.3\">\n, , \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.39.39\">\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.39.39.4\">\n, , , \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.43.43\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S2.T1.43.43.4\">\n, , , \n</td>\n</tr>\n</tbody>\n</table>\n</figure>",
70
+ "capture": "Tabelle 1: Parameters for the base model."
71
+ },
72
+ "2": {
73
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Tabelle 2: </span>Statistics for major variables</figcaption><div class=\"ltx_flex_figure\">\n<div class=\"ltx_flex_cell ltx_flex_size_1\">\n<table class=\"ltx_tabular ltx_centering ltx_figure_panel ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.2\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.2.3.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.2.3.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.3.1.1.1\">Variable</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.2.3.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.3.1.2.1\">Mean</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.2.3.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.3.1.3.1\">BG</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.2.3.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.3.1.4.1\">Median</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.2.3.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.3.1.5.1\">SD</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.2.3.1.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.3.1.6.1\">IQR</span></th>\n<th class=\"ltx_td ltx_nopad_r ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.2.3.1.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.3.1.7.1\">CV</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.2.4.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.2.4.1.1\">Social cost of carbon, 2020</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.2.4.1.2\">30.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.2.4.1.3\">28.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.2.4.1.4\">28.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.2.4.1.5\">12.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.2.4.1.6\">16.7</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S4.T2.2.4.1.7\">0.40</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.1.1.1\">Temperature, 2100 (C)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.2\">3.42</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.3\">3.49</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.4\">3.40</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.5\">0.46</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.6\">0.64</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T2.1.1.7\">0.13</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.5.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.2.5.2.1\">Carbon concentration, 2100 (ppm)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.5.2.2\">1,342</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.5.2.3\">1,344</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.5.2.4\">1,339</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.5.2.5\">156</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.5.2.6\">217</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T2.2.5.2.7\">0.12</td>\n</tr>\n<tr 
class=\"ltx_tr\" id=\"S4.T2.2.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.2.2.1\">World output, 2100 (trillions, 2015)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.2\">833.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.3\">795.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.4\">811.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.5\">203.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.6\">271.9</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T2.2.2.7\">0.24</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.6.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.2.6.3.1\">Emissions, 2100</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.6.3.2\">14.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.6.3.3\">13.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.6.3.4\">12.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.6.3.5\">13.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.6.3.6\">23.6</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T2.2.6.3.7\">0.95</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.7.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T2.2.7.4.1\">Damages, 2100 (percent output)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.2.7.4.2\">3.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.2.7.4.3\">2.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.2.7.4.4\">2.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.2.7.4.5\">1.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.2.7.4.6\">1.4</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_bb\" id=\"S4.T2.2.7.4.7\">0.34</td>\n</tr>\n</tbody>\n</table>\n</div>\n<div class=\"ltx_flex_break\"></div>\n<div class=\"ltx_flex_cell ltx_flex_size_1\">\n<ul class=\"ltx_itemize ltx_centering ltx_figure_panel\" id=\"S4.I2\">\n<li class=\"ltx_item\" id=\"S4.I2.1\" style=\"list-style-type:none;\">\n<span class=\"ltx_tag ltx_tag_item\"></span>\n<div class=\"ltx_para ltx_noindent\" id=\"S4.I2.1.p1\">\n<p class=\"ltx_p\" id=\"S4.I2.1.p1.1\"><span class=\"ltx_text\" id=\"S4.I2.1.p1.1.1\" style=\"font-size:80%;\">SD, IQR and CV refer to standard deviation, interquartile range and coefficient of variation, respectively.\nBG refers to best guess, which is the value calculated along the expected trajectory, assuming that uncertainties are set to their respective means.</span></p>\n</div>\n</li>\n</ul>\n</div>\n</div>\n</figure>",
74
+ "capture": "Tabelle 2: Statistics for major variables"
75
+ }
76
+ },
77
+ "image_paths": {
78
+ "1": {
79
+ "figure_path": "2408.09642v1_figure_1.png",
80
+ "caption": "Abbildung 1: Evolution of total factor productivity A\ud835\udc34Aitalic_A under the assumption that the growth rate gAsubscript\ud835\udc54\ud835\udc34g_{A}italic_g start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT is uncertain.",
81
+ "url": "http://arxiv.org/html/2408.09642v1/x1.png"
82
+ },
83
+ "2": {
84
+ "figure_path": "2408.09642v1_figure_2.png",
85
+ "caption": "Abbildung 2: Evolution of carbon intensity \u03c3\ud835\udf0e\\sigmaitalic_\u03c3 under the assumption that the growth rate g\u03c3subscript\ud835\udc54\ud835\udf0eg_{\\sigma}italic_g start_POSTSUBSCRIPT italic_\u03c3 end_POSTSUBSCRIPT is uncertain.",
86
+ "url": "http://arxiv.org/html/2408.09642v1/x2.png"
87
+ },
88
+ "3": {
89
+ "figure_path": "2408.09642v1_figure_3.png",
90
+ "caption": "Abbildung 3: Density plots of the parameter distributions of equilibrium temperature sensitivity (left panel), the damage coefficient (middle panel), and carbon cycle coefficient (right panel).",
91
+ "url": "http://arxiv.org/html/2408.09642v1/x3.png"
92
+ },
93
+ "4": {
94
+ "figure_path": "2408.09642v1_figure_4.png",
95
+ "caption": "Abbildung 4: Comparison of uniform grid (left panel) and low-discrepancy Sobol grid (right panel). In both cases, 1024 points were drawn in 11 dimensions. The plots depict the point distributions from the 11-dimensional grid projected on the first two components.",
96
+ "url": "http://arxiv.org/html/2408.09642v1/x4.png"
97
+ },
98
+ "5": {
99
+ "figure_path": "2408.09642v1_figure_5.png",
100
+ "caption": "Abbildung 5: Typical optimization surface over (ct,\u03bct)subscript\ud835\udc50\ud835\udc61subscript\ud835\udf07\ud835\udc61(c_{t},\\mu_{t})( italic_c start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT , italic_\u03bc start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT ) encountered during backward recursion.",
101
+ "url": "http://arxiv.org/html/2408.09642v1/x5.png"
102
+ },
103
+ "6": {
104
+ "figure_path": "2408.09642v1_figure_6.png",
105
+ "caption": "Abbildung 6: Evolution of the six most important variables over time.",
106
+ "url": "http://arxiv.org/html/2408.09642v1/x6.png"
107
+ },
108
+ "7": {
109
+ "figure_path": "2408.09642v1_figure_7.png",
110
+ "caption": "Abbildung 7: First-order (left) and total-order (right) Sobol\u2019 indices for various model outputs with respect to uncertainty in total factor productivity (TFP), carbon intensity (SIG), temperature-sensitivity coefficient (TSC), damage coefficient (DC) and carbon cycle coefficient (CC).",
111
+ "url": "http://arxiv.org/html/2408.09642v1/x7.png"
112
+ },
113
+ "8": {
114
+ "figure_path": "2408.09642v1_figure_8.png",
115
+ "caption": "Abbildung 8: First-order Sobol\u2019 indices for main variables over time.",
116
+ "url": "http://arxiv.org/html/2408.09642v1/x8.png"
117
+ }
118
+ },
119
+ "validation": true,
120
+ "references": [
121
+ {
122
+ "1": {
123
+ "title": "Limitations of integrated assessment models of climate change.",
124
+ "author": "Frank Ackerman, Stephen J. DeCanio, Richard B. Howarth, and Kristen Sheeran.",
125
+ "venue": "Climatic Change, 95(3):297\u2013315, 2009.",
126
+ "url": null
127
+ }
128
+ },
129
+ {
130
+ "2": {
131
+ "title": "A probabilistic numerical method for optimal multiple switching\nproblems in high dimension.",
132
+ "author": "Ren\u00e9 A\u00efd, Luciano Campi, Nicolas Langren\u00e9, and Huy\u00ean Pham.",
133
+ "venue": "SIAM Journal on Financial Mathematics, 5(1):191\u2013231, 2014.",
134
+ "url": null
135
+ }
136
+ },
137
+ {
138
+ "3": {
139
+ "title": "A bias-corrected least-squares Monte Carlo for solving\nmulti-period utility models.",
140
+ "author": "Johan G. Andr\u00e9asson and Pavel V. Shevchenko.",
141
+ "venue": "European Actuarial Journal, 12(1):349\u2013379, 2022.",
142
+ "url": null
143
+ }
144
+ },
145
+ {
146
+ "4": {
147
+ "title": "Optimal annuitisation, housing and reverse mortgage in retirement in\nthe presence of a means-tested public pension.",
148
+ "author": "Johan G. Andr\u00e9asson and Pavel V. Shevchenko.",
149
+ "venue": "European Actuarial Journal, 2024.",
150
+ "url": null
151
+ }
152
+ },
153
+ {
154
+ "5": {
155
+ "title": "Pricing Bermudan options by nonparametric regression: Optimal\nrates of convergence for lower estimates.",
156
+ "author": "Denis Belomestny.",
157
+ "venue": "Finance and Stochastics, 15:655\u2013683, 2011.",
158
+ "url": null
159
+ }
160
+ },
161
+ {
162
+ "6": {
163
+ "title": "Regression methods for stochastic control problems and their\nconvergence analysis.",
164
+ "author": "Denis Belomestny, Anastasia Kolodko, and John Schoenmakers.",
165
+ "venue": "SIAM Journal on Control and Optimization, 48(5):3562\u20133588, 2010.",
166
+ "url": null
167
+ }
168
+ },
169
+ {
170
+ "7": {
171
+ "title": "The social cost of carbon with economic and climate risks.",
172
+ "author": "Yongyang Cai and Thomas S. Lontzek.",
173
+ "venue": "Journal of Political Economy, 127(6):2684\u20132734, 2019.",
174
+ "url": null
175
+ }
176
+ },
177
+ {
178
+ "8": {
179
+ "title": "Approximation by superpositions of a sigmoidal function.",
180
+ "author": "George Cybenko.",
181
+ "venue": "Mathematics of Control, Signals and Systems, 2(4):303\u2013314, 1989.",
182
+ "url": null
183
+ }
184
+ },
185
+ {
186
+ "9": {
187
+ "title": "On stochastic approximation.",
188
+ "author": "Aryeh Dvoretzky.",
189
+ "venue": "In Proceedings of the Third Berkeley Symposium on\nMathematical Statistics and Probability, 1954\u20131955, vol. I, pages\n39\u201355. University of California Press, Berkeley and Los Angeles, Calif.,\n1956.",
190
+ "url": null
191
+ }
192
+ },
193
+ {
194
+ "10": {
195
+ "title": "Controlled Markov Processes and Viscosity Solutions,\nvolume 25 of Stochastic Modelling and Applied Probability.",
196
+ "author": "W.H. Fleming and H.M. Soner.",
197
+ "venue": "Springer, New York, second edition, 2006.",
198
+ "url": null
199
+ }
200
+ },
201
+ {
202
+ "11": {
203
+ "title": "Deep uncertainty quantification: With an application to integrated\nassessment models.",
204
+ "author": "Aleksandra Friedl, Felix K\u00fcbler, Simon Scheidegger, and Takafumi Usui.",
205
+ "venue": "Technical report, Working Paper University of Lausanne, 2023.",
206
+ "url": null
207
+ }
208
+ },
209
+ {
210
+ "12": {
211
+ "title": "Modeling uncertainty in climate change: A multi-model comparison.",
212
+ "author": "Kenneth Gillingham, William D. Nordhaus, David Anthoff, Geoffrey Blanford,\nValentina Bosetti, Peter Christensen, Haewon McJeon, John Reilly, and Paul\nSztorc.",
213
+ "venue": "Technical report, National Bureau of Economic Research, 2015.",
214
+ "url": null
215
+ }
216
+ },
217
+ {
218
+ "13": {
219
+ "title": "Modeling myths: On DICE and dynamic realism in integrated\nassessment models of climate change mitigation.",
220
+ "author": "Michael Grubb, Claudia Wieners, and Pu Yang.",
221
+ "venue": "Wiley Interdisciplinary Reviews: Climate Change, 12(3):e698, 2021.",
222
+ "url": null
223
+ }
224
+ },
225
+ {
226
+ "14": {
227
+ "title": "Approximation capabilities of multilayer feedforward networks.",
228
+ "author": "Kurt Hornik.",
229
+ "venue": "Neural Networks, 4(2):251\u2013257, 1991.",
230
+ "url": null
231
+ }
232
+ },
233
+ {
234
+ "15": {
235
+ "title": "Expected utility and catastrophic risk in a stochastic\neconomy-climate model.",
236
+ "author": "Masako Ikefuji, Roger J. A. Laeven, Jan R. Magnus, and Chris Muris.",
237
+ "venue": "Journal of Econometrics, 214(1):110\u2013129,\n2020.",
238
+ "url": null
239
+ }
240
+ },
241
+ {
242
+ "16": {
243
+ "title": "Technical support document: Social cost of carbon for regulatory\nimpact analysis under executive order 12866.",
244
+ "author": "Interagency Working Group on Social Cost of Greenhouse Gases.",
245
+ "venue": "Technical report, United States Government, 2016.",
246
+ "url": null
247
+ }
248
+ },
249
+ {
250
+ "17": {
251
+ "title": "Optimal climate change mitigation under long-term growth uncertainty:\nStochastic integrated assessment and analytic findings.",
252
+ "author": "Svenn Jensen and Christian P. Traeger.",
253
+ "venue": "European Economic Review, 69:104\u2013125, 2014.",
254
+ "url": null
255
+ }
256
+ },
257
+ {
258
+ "18": {
259
+ "title": "Bayesian learning, growth, and pollution.",
260
+ "author": "David L. Kelly and Charles D. Kolstad.",
261
+ "venue": "Journal of Economic Dynamics and Control, 23(4):491\u2013518, 1999.",
262
+ "url": null
263
+ }
264
+ },
265
+ {
266
+ "19": {
267
+ "title": "Solving infinite horizon growth models with an environmental sector.",
268
+ "author": "David L. Kelly and Charles D. Kolstad.",
269
+ "venue": "Computational Economics, 18:217\u2013231, 2001.",
270
+ "url": null
271
+ }
272
+ },
273
+ {
274
+ "20": {
275
+ "title": "A numerical algorithm for fully nonlinear HJB equations: An\napproach by control randomization.",
276
+ "author": "Idris Kharroubi, Nicolas Langren\u00e9, and Huy\u00ean Pham.",
277
+ "venue": "Monte Carlo Methods and Applications, 20(2):145\u2013165, 2014.",
278
+ "url": null
279
+ }
280
+ },
281
+ {
282
+ "21": {
283
+ "title": "Stochastic estimation of the maximum of a regression function.",
284
+ "author": "Jack Kiefer and Jacob Wolfowitz.",
285
+ "venue": "Annals of Mathematical Statistics, 23(3):462\u2013466, 1952.",
286
+ "url": null
287
+ }
288
+ },
289
+ {
290
+ "22": {
291
+ "title": "The universal approximation property: Characterization,\nconstruction, representation, and existence.",
292
+ "author": "Anastasis Kratsios.",
293
+ "venue": "Annals of Mathematics and Artificial Intelligence, 89(5):435\u2013469, 2021.",
294
+ "url": null
295
+ }
296
+ },
297
+ {
298
+ "23": {
299
+ "title": "The climate change learning curve.",
300
+ "author": "Andrew J. Leach.",
301
+ "venue": "Journal of Economic Dynamics and Control, 31(5):1728\u20131752, 2007.",
302
+ "url": null
303
+ }
304
+ },
305
+ {
306
+ "24": {
307
+ "title": "Valuing American options by simulation: A simple least-squares\napproach.",
308
+ "author": "Francis A. Longstaff and Eduardo S. Schwartz.",
309
+ "venue": "The Review of Financial Studies, 14(1):113\u2013147, 2001.",
310
+ "url": null
311
+ }
312
+ },
313
+ {
314
+ "25": {
315
+ "title": "Stochastic integrated assessment of climate tipping points indicates\nthe need for strict climate policy.",
316
+ "author": "Thomas S. Lontzek, Yongyang Cai, Kenneth L. Judd, and Timothy M. Lenton.",
317
+ "venue": "Nature Climate Change, 5(5):441\u2013444,\n2015.",
318
+ "url": null
319
+ }
320
+ },
321
+ {
322
+ "26": {
323
+ "title": "Projections and uncertainties about climate change in an era of\nminimal climate policies.",
324
+ "author": "William D. Nordhaus.",
325
+ "venue": "American Economic Journal: Economic Policy, 10(3):333\u2013360, 2018.",
326
+ "url": null
327
+ }
328
+ },
329
+ {
330
+ "27": {
331
+ "title": "A regional dynamic general-equilibrium model of alternative\nclimate-change strategies.",
332
+ "author": "William D. Nordhaus and Zili Yang.",
333
+ "venue": "The American Economic Review, 86(4):741\u2013765, 1996.",
334
+ "url": null
335
+ }
336
+ },
337
+ {
338
+ "28": {
339
+ "title": "The \u2018DICE\u2019 model: background and structure of a dynamic integrated\nclimate-economy model of the economics of global warming.",
340
+ "author": "William D. Nordhaus et al.",
341
+ "venue": "Technical report, Cowles Foundation for Research in Economics, Yale\nUniversity, 1992.",
342
+ "url": null
343
+ }
344
+ },
345
+ {
346
+ "29": {
347
+ "title": "The use and misuse of models for climate policy.",
348
+ "author": "Robert S. Pindyck.",
349
+ "venue": "Review of Environmental Economics and Policy, 11:100\u2013114, 2017.",
350
+ "url": null
351
+ }
352
+ },
353
+ {
354
+ "30": {
355
+ "title": "A stochastic approximation method.",
356
+ "author": "Herbert Robbins and Sutton Monro.",
357
+ "venue": "Annals of Mathematical Statistics, 22(3):400\u2013407, 1951.",
358
+ "url": null
359
+ }
360
+ },
361
+ {
362
+ "31": {
363
+ "title": "Learning representations by back-propagating errors.",
364
+ "author": "D. E. Rumelhart, G. E. Hinton, and R. J. Williams.",
365
+ "venue": "Nature, 323(6088):533\u2013536, 1986.",
366
+ "url": null
367
+ }
368
+ },
369
+ {
370
+ "32": {
371
+ "title": "Impact of COVID-19 type events on the economy and climate under the\nstochastic DICE model.",
372
+ "author": "Pavel V. Shevchenko, Daisuke Murakami, Tomoko Matsui, and Tor A. Myrvoll.",
373
+ "venue": "Environmental Economics and Policy Studies, 24:459\u2013476, 2022.",
374
+ "url": null
375
+ }
376
+ },
377
+ {
378
+ "33": {
379
+ "title": "On the distribution of points in a cube and the approximate\nevaluation of integrals.",
380
+ "author": "Ilya M. Sobol\u2019.",
381
+ "venue": "USSR Computational Mathematics and Mathematical Physics,\n7(4):86\u2013112, 1967.",
382
+ "url": null
383
+ }
384
+ },
385
+ {
386
+ "34": {
387
+ "title": "Global sensitivity indices for nonlinear mathematical models and\ntheir Monte Carlo estimates.",
388
+ "author": "Ilya M. Sobol\u2019.",
389
+ "venue": "Mathematics and Computers in Simulation, 55(1\u20133):271\u2013280, 2001.",
390
+ "url": null
391
+ }
392
+ },
393
+ {
394
+ "35": {
395
+ "title": "A 4-stated DICE: Quantitatively addressing uncertainty effects in\nclimate change.",
396
+ "author": "Christian P. Traeger.",
397
+ "venue": "Environmental and Resource Economics, 59(1):1\u201337, 2014.",
398
+ "url": null
399
+ }
400
+ },
401
+ {
402
+ "36": {
403
+ "title": "Regression methods for pricing complex American-style options.",
404
+ "author": "John N. Tsitsiklis and Benjamin Van Roy.",
405
+ "venue": "IEEE Transactions on Neural Networks, 12(4):694\u2013703, 2001.",
406
+ "url": null
407
+ }
408
+ },
409
+ {
410
+ "37": {
411
+ "title": "Fat-tailed uncertainty in the economics of catastrophic climate\nchange.",
412
+ "author": "Martin L. Weitzman.",
413
+ "venue": "Review of Environmental Economics and Policy, 5(2):275\u2013292, 2011.",
414
+ "url": null
415
+ }
416
+ }
417
+ ],
418
+ "url": "http://arxiv.org/html/2408.09642v1"
419
+ }
20240819/2408.09676v1.json ADDED
The diff for this file is too large to render. See raw diff
 
20240819/2408.09683v1.json ADDED
The diff for this file is too large to render. See raw diff
 
20240819/2408.09687v1.json ADDED
@@ -0,0 +1,451 @@
1
+ {
2
+ "title": "TESL-Net: A Transformer-Enhanced CNN for Accurate Skin Lesion Segmentation",
3
+ "abstract": "Early detection of skin cancer relies on precise segmentation of dermoscopic images of skin lesions. However, this task is challenging due to the irregular shape of the lesion, the lack of sharp borders, and the presence of artefacts such as marker colours and hair follicles. Recent methods for melanoma segmentation are U-Nets and fully connected networks (FCNs). As the depth of these neural network models increases, they can face issues like the vanishing gradient problem and parameter redundancy, potentially leading to a decrease in the Jaccard index of the segmentation model. In this study, we introduced a novel network named TESL-Net for the segmentation of skin lesions. The proposed TESL-Net involves a hybrid network that combines the local features of a CNN encoder-decoder architecture with long-range and temporal dependencies using bi-convolutional long-short-term memory (Bi-ConvLSTM) networks and a Swin transformer. This enables the model to account for the uncertainty of segmentation over time and capture contextual channel relationships in the data. We evaluated the efficacy of TESL-Net in three commonly used datasets (ISIC 2016, ISIC 2017, and ISIC 2018) for the segmentation of skin lesions. The proposed TESL-Net achieves state-of-the-art performance, as evidenced by a significantly elevated Jaccard index demonstrated by empirical results.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Melanoma is the leading cause of skin cancer-related mortality, presenting a substantial global health concern [1 ###reference_b1###]. The survival rate for melanoma patients drops below 15% if the disease is not detected early [2 ###reference_b2###]. Therefore, early detection is crucial to reducing mortality rates, with research indicating a 90% survival rate for patients diagnosed in the early stages. However, differentiating a melanoma lesion from the surrounding healthy skin is challenging. The appearance of the skin can be affected by various factors, including lesion size, hair, reflections, colors, marker colors, textures, shapes, and non-uniform illumination [3 ###reference_b3###].\nDermatoscopy is a non-invasive imaging technique widely used to identify skin lesions and their surrounding areas for the detection and diagnosis of skin cancer [4 ###reference_b4###]. Manual evaluation of dermoscopic images requires specialised knowledge in dermoscopy and is time-consuming. Even for highly experienced dermatologists, diagnosing skin cancer using only their unaided eye can be imprecise, unreliable, and time-consuming [5 ###reference_b5###]. Traditional image preprocessing techniques struggle with complex tasks due to their reliance on highly customised and precise features and methods [6 ###reference_b6###]. To improve the efficacy of lesion analysis and identification, dermatologists have implemented computer-aided diagnostic (CAD) technologies [7 ###reference_b7###, 8 ###reference_b8###]. Precise segmentation is a critical component of any CAD-based diagnostic platform for skin cancer. This process improves the precision and effectiveness of skin lesion segmentation by providing essential quantitative information, including location, size, shape, and other characteristics [9 ###reference_b9###].\n###figure_1### ###figure_2### ###figure_3### ###figure_4### ###figure_5### ###figure_6### ###figure_7### ###figure_8### ###figure_9### ###figure_10### Skin lesion segmentation presents several challenges [10 ###reference_b10###]. Precisely delineating skin lesions is often difficult due to their irregular and blurry boundaries [11 ###reference_b11###, 12 ###reference_b12###]. Differentiating between the healthy surrounding skin and a lesion is also frequently challenging. In addition, the varying shapes, sizes, and colours of skin lesions further complicate their characterisation. Interference elements, including blood vessels, ruler traces, hairs, and ink speckles, add to the complexity of segmentation [13 ###reference_b13###, 14 ###reference_b14###]. These challenges are illustrated in Figure 1 ###reference_###, where lesions with diverse shapes, sizes and colours, as well as irregular and hazy boundaries, introduce redundancy that reduces performance [7 ###reference_b7###, 15 ###reference_b15###]. Low-contrast skin lesions from surrounding healthy tissues and interference elements such as blood vessels, filaments, and ink speckles add noise to images. These factors impede the development of advanced segmentation techniques.\nHowever, skin lesion segmentation presents several challenges [16 ###reference_b16###]. Precisely delineating skin lesions is often difficult due to their irregular and blurry boundaries. Differentiating between the healthy skin surrounding is also often challenging. In addition, the varying shapes, sizes, and colours of skin lesions further complicate their characterisation. 
Interference elements, including blood vessels, ruler traces, hairs, and ink speckles, add to the complexity of segmentation. These challenges are illustrated in Figure 1 ###reference_###, where lesions with diverse shapes, sizes and colours, as well as irregular and hazy boundaries, introduce redundancy that reduces performance. Low-contrast skin lesions from surrounding healthy tissue, and interference features such as blood vessels, filaments, and ink speckles add noise to images. These factors impede the development of advanced segmentation techniques.\n###figure_11### For the task of segmenting skin lesions, a variety of convolutional neural network (CNN) techniques, as well as attention-based approaches, have been explored. Bi et al. designed a network that extracts contextual and hierarchical information by integrating the combination of pyramidal features, residual connections, and dilated convolution [17 ###reference_b17###]. Tang et al. introduced DeepLabV3+, a CNN architecture that incorporates an advanced spatial pyramid pooling module to extract multi-scale features [18 ###reference_b18###]. Another notable example is the U-Net architecture [19 ###reference_b19###], which has become the industry standard for medical image segmentation, including skin lesions. The advent of deep learning has significantly improved the analysis of biological data and image segmentation [20 ###reference_b20###]. By effectively utilizing relevant features, deep learning methods outperform traditional methods in skin lesion segmentation. Segmentation performance has been further enhanced by improvements to the encoder-decoder architecture, including the implementation of efficient feature map learning procedures [21 ###reference_b21###].\nSegmentation model training can be enhanced by data augmentation techniques, such as rotation, scaling, and flipping, which increase the scale and diversity of datasets [22 ###reference_b22###]. To achieve optimal results, it is essential to carefully regulate the selection and extent of augmentation. Deep neural network models with numerous layers may encounter issues such as parameter redundancy and vanishing gradients. To address these challenges and achieve precise skin lesion segmentation, we have developed a transformer-enhanced CNN, TESL-Net. Our proposed technique ensures accurate segmentation of skin lesions while maintaining a model architecture."
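The following is a minimal sketch of such joint augmentation for segmentation, where the same random flip or rotation must be applied to both the dermoscopic image and its mask so lesion pixels stay aligned; the specific transforms and probabilities are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def augment(image, mask, rng):
    """Apply one random flip/rotation jointly to an HxWx3 image and its
    HxW binary mask."""
    k = rng.integers(0, 4)                     # 0, 90, 180, or 270 degrees
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    if rng.random() < 0.5:                     # horizontal flip
        image, mask = np.fliplr(image), np.fliplr(mask)
    if rng.random() < 0.5:                     # vertical flip
        image, mask = np.flipud(image), np.flipud(mask)
    return image.copy(), mask.copy()

# Toy usage on random data standing in for a dermoscopic sample:
rng = np.random.default_rng(0)
img, msk = rng.random((256, 256, 3)), (rng.random((256, 256)) > 0.5)
img_aug, msk_aug = augment(img, msk, rng)
```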
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II Related Work",
15
+ "text": "Numerous segmentation techniques are emphasised in the literature for the segmentation of skin lesions, including morphological operations [23 ###reference_b23###], thresholding approaches [24 ###reference_b24###], gradient vector flow [25 ###reference_b25###], and growth of the region [26 ###reference_b26###]. These conventional methods typically involve threshold setting, feature selection, and image pre-processing. The emergence of deep learning has significantly advanced segmentation techniques, particularly with CNNS. Yuan and Lo developed an improved CNN for skin lesion segmentation [27 ###reference_b27###]. Furthermore, studies have also used multiscale connection blocks instead of traditional skip connections to capture features at both the low- and the high-level more effectively [28 ###reference_b28###].\nHasan et al. proposed a dermoscopic skin network that employs depthwise separable convolution to reduce the number of trainable parameters [29 ###reference_b29###]. Abhishek et al. devised a novel deep semantic segmentation method that takes advantage of data from multiple colour bands [30 ###reference_b30###]. To analyse the boundaries and contextual relationships of target objects, the DAGAN authors implemented a dual discriminator network [31 ###reference_b31###].\nAttention mechanisms have been extensively implemented in CNNs to facilitate various tasks, including semantic segmentation, identification, classification, and machine translation. This approach enables models to focus on the most relevant features, thus reducing computational demands by weighting the features to emphasise pertinent information and suppress irrelevant data [16 ###reference_b16###, 32 ###reference_b32###]. To optimise skin lesion segmentation, Chen et al. integrated self-attention within codec components [33 ###reference_b33###]. Zhang et al. developed an attention-directed filter within a U-shaped framework to convey spatial features for image segmentation [34 ###reference_b34###]. A channel attention strategy was implemented to enhance skin lesion segmentation in a generative adversarial network (GAN) [35 ###reference_b35###]. CS2-Net enhanced feature extraction by implementing dual attention strategies in both the spatial and channel domains [36 ###reference_b36###]. The AS-Net further improved segmentation performance by integrating spatial and channel attention techniques [37 ###reference_b37###]. Furthermore, networks like MFSNet [38 ###reference_b38###] increased segmentation efficiency by incorporating multiscale concepts with attention mechanisms."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "III Proposed Method",
21
+ "text": "The proposed network uses Bidirectional Convolutional Long-Short-Term Memory (Bi-ConvLSTM) layers and spin transformer blocks to segment skin lesions."
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "III-A TESL-Net",
27
+ "text": "The architecture of the proposed TESL-Net takes RGB images along with their corresponding masks as input. At the encoder stage, two consecutive blocks of depth-wise convolution followed by the activation function and batch normalisation layer are applied. After that, a max pooling layer is employed to reduce the spatial dimensions of the features. Once the size of the feature maps is reduced a Swin transformer block is used to extract and refine the feature information patch-wise. The same operations are again applied on the feature maps by increasing the depth twice. It is important to mention that the proposed TESL-Net uses two max-pooling layers at the encoder stage so that the spatial information especially at the boundary can be kept intact. At the decoder stage of the proposed TESL-Net transposed convolution is used to upsample the feature maps followed by ReLU and batch normalization operations. Once the spatial dimensions are increased, the Bi-ConvLSTM is used between the encoder-decoder blocks to capture short-term details and long-term dependencies of the feature information. Two consecutive blocks of depth-wise convolution followed by the activation function and batch normalization layer are then employed to reconstruct the extracted feature information. The same operations are again applied on the feature maps by reducing the channel depth twice. Finally, the sigmoid layer is employed to predict the binary masks. The mathematical description of the proposed TESL-Net is as follows:\nLet be the RGB image of size given to the input layer. The Depthwise Separable Convolution (DWS-Conv) is applied to the input image followed by batch normalization and the ReLU (Rectified Linear Unit) activation function that helps in dealing with overfitting and introduces non-linearity into the network. The output is defined as:\nThe resulting feature map is again fed into followed by ReLU and BN.\nThe feature map is passed through the max-pooling operation and then processed by the Swin Transformer Block, which captures long-range dependencies and context information.\nA similar process is applied to the resulting feature map in the second convolutional block. It is defined as:\nSubsequently, a transposed convolution operation is applied to the encoder generated feature map to up-sample the feature maps, followed by a ReLU activation layer and .\nThe feature map and from the encoder are reshaped and concatenated with the corresponding reshaped feature maps from the decoder. The outputs are then fed into the corresponding Bidirectional Convolutional LSTM (Bi-ConvLSTM) to capture temporal dependencies in both forward and backward directions.\nThe forward ConvLSTM is defined as:\nwhere , , and are the forget, input, and output gates, is the cell input activation, is the cell state, and is the hidden state.\nThe backward ConvLSTM operates similarly but in reverse temporal order. The Bi-ConvLSTM combines the forward and backward ConvLSTM outputs:\nThe respective outcomes of Bi-ConvLSTM are processed through DWS-Conv, and along with transposed convolution.\nThe final predicted mask is computed by applying a sigmoid to the last layer for binary segmentation.\n###figure_12### ###figure_13### ###figure_14### ###figure_15### ###figure_16### ###figure_17### ###figure_18### ###figure_19### ###figure_20### ###figure_21### ###figure_22### ###figure_23### ###figure_24### ###figure_25### ###figure_26### ###figure_27### ###figure_28### ###figure_29### ###figure_30### ###figure_31###"
28
+ },
29
+ {
30
+ "section_id": "3.2",
31
+ "parent_section_id": "3",
32
+ "section_name": "III-B Swin Transformer Block",
33
+ "text": "In contrast to the traditional multi-head self-attention (MSA) module, the Swin Transformer block is designed using shifted windows. (Fig. (2 ###reference_###)-b) illustrates two consecutive Swin Transformer blocks. Each Swin Transformer block includes a residual connection, a multi-head self-attention module, a LayerNorm (LN) layer, and a two-layer MLP with GELU activation. The two successive transformer blocks utilise the window-based multihead self-attention module (W-MSA) and the shifted window-based multihead self-attention module (SW-MSA), respectively. The following equations describe the formulation of continuous Swin Transformer blocks through this window partitioning mechanism:\nwhere represents the output of the module (S) W-MSA and the MLP module of block.\nSelf Attention is computed as follows:\nwhere denotes query, key and value Matrices. and represent several patches in the window and the dimension of the key or query, respectively. where is the value taken from the bias matrix"
34
+ },
35
+ {
36
+ "section_id": "4",
37
+ "parent_section_id": null,
38
+ "section_name": "IV Results and Discussion",
39
+ "text": "TESL-Net was evaluated against several SOTA segmentation networks. This section provides an overview of the datasets used, the evaluation criteria, the experimental setup, and the comparative experiments."
40
+ },
41
+ {
42
+ "section_id": "4.1",
43
+ "parent_section_id": "4",
44
+ "section_name": "IV-A Datasets",
45
+ "text": "The proposed TESL-Net model was evaluated in three challenging benchmark datasets (Table I ###reference_###), namely ISIC 2016 [39 ###reference_b39###], ISIC 2017 [40 ###reference_b40###] and ISIC 2018 [41 ###reference_b41###] for the segmentation of skin lesions in optical images. All datasets are publicly available and provide GT masks for the evaluation of image segmentation methods.\n###figure_32### ###figure_33### ###figure_34### ###figure_35### ###figure_36### ###figure_37### ###figure_38### ###figure_39### ###figure_40### ###figure_41### ###figure_42### ###figure_43### ###figure_44### ###figure_45### ###figure_46### ###figure_47### ###figure_48### ###figure_49### ###figure_50### ###figure_51### ###figure_52### ###figure_53### ###figure_54### ###figure_55### ###figure_56### ###figure_57### ###figure_58### ###figure_59### ###figure_60### ###figure_61### ###figure_62### ###figure_63### ###figure_64### ###figure_65### ###figure_66### ###figure_67### ###figure_68### ###figure_69### ###figure_70### ###figure_71###"
46
+ },
47
+ {
48
+ "section_id": "4.2",
49
+ "parent_section_id": "4",
50
+ "section_name": "IV-B Evaluation Criteria",
51
+ "text": "Performance evaluation of the proposed LSSF-Net is performed using five evaluation metrics recommended by the ISIC challenge leaderboard, including accuracy, Jaccard index (IOU), Dice coefficient, sensitivity, and specificity. These metrics are calculated using counts of true negatives (TN), true positives (TP), false negatives (FN), and false positives (FP) derived from predictions as given in equations (13 ###reference_###-17 ###reference_###)."
52
+ },
53
+ {
54
+ "section_id": "4.3",
55
+ "parent_section_id": "4",
56
+ "section_name": "IV-C Experimental Setup",
57
+ "text": "We assess the effectiveness of the proposed methodology using established benchmark datasets. All data sets were standardised to dimensions of pixels to ensure uniformity. A subset comprising 20% of the training data was segregated for validation purposes. Segmentation models were trained under various loss function configurations, using the optimiser (Adam) over 10 epochs. Initially, a learning rate of 0.001 was set, with a scheduled reduction by a factor of 0.25 every four epochs in the absence of observable improvements on the validation set. In addition, an early stop mechanism was employed to counteract overfitting. In particular, our approach achieved superior performance metrics, exceeding existing benchmarks even without employing data augmentation. The framework was implemented using Keras with TensorFlow backend, and all computations were performed on a NVIDIA K80 GPU."
58
+ },
59
+ {
60
+ "section_id": "4.4",
61
+ "parent_section_id": "4",
62
+ "section_name": "IV-D Comparisons with SOTA Methods",
63
+ "text": "We compared our proposed approach with ten cutting-edge methods including ARU-GD [43 ###reference_b43###], BCD-UNet [48 ###reference_b48###], CPFNet [49 ###reference_b49###], DAGAN [31 ###reference_b31###], FAT-Net [46 ###reference_b46###], RA-Net [45 ###reference_b45###], Separable-Unet [50 ###reference_b50###], Swin-Unet [42 ###reference_b42###], U-Net [19 ###reference_b19###], and UNet++ [44 ###reference_b44###].\nStatistical comparison findings with SOTA techniques in the ISIC 2016 dataset are presented in Table II ###reference_###. Our method consistently outperformed the most advanced techniques in the ISIC 2016 dataset in every metric. Specifically, TESL-Net achieved a Jaccard index (IOU) score that ranged from 2. 11% to 8. 13% higher compared to SOTA methods. Our technique demonstrates superior performance in all evaluation criteria. Comparisons of visual results showing various challenges in skin lesion segmentation, such as artefacts, hair, irregular morphologies, and multiple lesions, are presented in Figure 3 ###reference_###. The TESL-Net method achieved SOTA segmentation results in the test data, effectively handling skin lesions with irregular shapes and varying sizes.\nTen cutting-edge techniques were used to illustrate the statistical comparison findings in the ISIC 2017 dataset, as presented in Table III ###reference_###. TESL-Net achieved a Jaccard index (IOU) score of 2. 02% to 11. 22% higher than the SOTA methods. The visual results showing various challenges in skin lesion segmentation, such as irregular morphologies, hair, and artefacts, are shown in Figure 4 ###reference_###. It is evident that our TESL-Net consistently produces SOTA segmentation results in test data, effectively handling skin lesions with unusual shapes and variable sizes.\nEleven cutting-edge techniques are used to present the statistical comparison findings in Table IV ###reference_### on the ISIC 2018. In terms of the Jaccard index (IOU), TESL-Net achieved a score of 2.22%\u201310.47% higher than the SOTA methods described. In the same vein, we also obtained visual results for various skin lesion challenges, including the presence of artefacts, low contrast, irregular morphologies, and small lesions. The visual results of numerous skin lesion challenges are illustrated in Figure 5 ###reference_###. Even for skin lesions with unusual shapes and variable sizes, our method generates SOTA segmentation results on test data."
64
+ },
65
+ {
66
+ "section_id": "5",
67
+ "parent_section_id": null,
68
+ "section_name": "Conclusions",
69
+ "text": "We have developed TESL-Net, a novel and effective methodology for accurate skin lesion segmentation aimed at overcoming challenges in this field. Unlike traditional CNN-based encoder-decoders, TESL-Net utilises Swin transformer blocks in the encoder to efficiently extract contextual information from skin-lesion images globally. Further refinement of feature extraction is achieved by integrating a Bi-ConvLSTM module into skip connections. When evaluated on three publicly available benchmark datasets for skin lesion segmentation, TESL-Net outperformed a variety of SOTA methods. Despite its exceptional performance, we have identified areas for further improvement. We propose employing semi-supervised strategies to reduce data requirements for training by incorporating paired and unpaired data. TESL-Net is suitable not only for immediate medical imaging applications in skin segmentation but also holds promise for adaptation and expansion to other medical imaging and segmentation tasks."
70
+ }
71
+ ],
72
+ "appendix": [],
73
+ "tables": {
74
+ "1": {
75
+ "table_html": "<figure class=\"ltx_table\" id=\"S2.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>Details of the skin lesion image datasets used for TESL-Net evaluation.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S2.T1.1\" style=\"width:433.6pt;height:70.3pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-60.9pt,9.9pt) scale(0.780670798363668,0.780670798363668) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S2.T1.1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S2.T1.1.1.1.1.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.1.1.1.1.1.1\">Dataset</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"4\" id=\"S2.T1.1.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.1.1.1.1.2.1\">Image Count</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S2.T1.1.1.1.1.3\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.1.1.1.1.3.1\">Image Resolution Range</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S2.T1.1.1.1.1.4\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.1.1.1.1.4.1\">Format</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S2.T1.1.1.1.1.5\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.1.1.1.1.5.1\">Resized to</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S2.T1.1.1.2.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.1.1.2.2.1.1\">Training</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S2.T1.1.1.2.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.1.1.2.2.2.1\">Validation</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S2.T1.1.1.2.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.1.1.2.2.3.1\">Testing</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S2.T1.1.1.2.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.1.1.2.2.4.1\">Total</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.3.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.1.1.3.1.1\">ISIC 2016 <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09687v1#bib.bib39\" title=\"\">39</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.3.1.2\">900</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.3.1.3\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.3.1.4\">379</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.3.1.5\">1279</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.3.1.6\">679x453 - 6748x4499</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.3.1.7\">.jpeg</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S2.T1.1.1.3.1.8\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S2.T1.1.1.3.1.8.1\">256x256</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.4.2\">\n<td class=\"ltx_td 
ltx_align_left\" id=\"S2.T1.1.1.4.2.1\">ISIC 2017 <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09687v1#bib.bib40\" title=\"\">40</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.4.2.2\">2000</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.4.2.3\">150</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.4.2.4\">600</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.4.2.5\">2750</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.4.2.6\">679\u00d7453 - 6748\u00d74499</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.4.2.7\">.jpeg</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.5.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S2.T1.1.1.5.3.1\">ISIC 2018 <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09687v1#bib.bib41\" title=\"\">41</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S2.T1.1.1.5.3.2\">2594</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S2.T1.1.1.5.3.3\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S2.T1.1.1.5.3.4\">1000</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S2.T1.1.1.5.3.5\">3594</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S2.T1.1.1.5.3.6\">679\u00d7453 - 6748\u00d74499</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S2.T1.1.1.5.3.7\">.jpeg</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
76
+ "capture": "TABLE I: Details of the skin lesion image datasets used for TESL-Net evaluation."
77
+ },
78
+ "2": {
79
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>Quantitative performance comparison of TESL-Net with various SOTA methods on the ISIC2016 skin lesion dataset.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T2.1\" style=\"width:433.6pt;height:297.1pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(46.0pt,-31.5pt) scale(1.26959377825905,1.26959377825905) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S4.T2.1.1.1.1.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.1.1.1.1\">Method</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"5\" id=\"S4.T2.1.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.1.1.2.1\">Performance Measures (%)</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.1.1.2.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.2.2.1.1\">IoU</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.1.1.2.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.2.2.2.1\">Dice</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.1.1.2.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.2.2.3.1\">Acc</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.1.1.2.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.2.2.4.1\">Se</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.1.1.2.2.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.2.2.5.1\">Sp</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.3.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T2.1.1.3.1.1\">ARU-GD <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09687v1#bib.bib47\" title=\"\">47</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.3.1.2\">85.12</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.3.1.3\">90.83</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.3.1.4\">94.38</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.3.1.5\">89.86</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.3.1.6\">94.65</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.4.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.1.1.4.2.1\">BCDU-Net <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09687v1#bib.bib48\" title=\"\">48</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.4.2.2\">83.43</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.4.2.3\">80.95</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.4.2.4\">91.78</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.4.2.5\">78.11</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.4.2.6\">96.20</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.5.3\">\n<th 
class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.1.1.5.3.1\">CPFNet <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09687v1#bib.bib49\" title=\"\">49</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.5.3.2\">83.81</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.5.3.3\">90.23</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.5.3.4\">95.09</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.5.3.5\">92.11</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.5.3.6\">95.91</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.6.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.1.1.6.4.1\">DAGAN <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09687v1#bib.bib31\" title=\"\">31</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.6.4.2\">84.42</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.6.4.3\">90.85</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.6.4.4\">95.82</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.6.4.5\">92.28</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.6.4.6\">95.68</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.7.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.1.1.7.5.1\">FAT-Net <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09687v1#bib.bib46\" title=\"\">46</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.7.5.2\">85.30</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.7.5.3\">91.59</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.7.5.4\">96.04</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.7.5.5\">92.59</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.7.5.6\">96.02</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.8.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.1.1.8.6.1\">RA-Net <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09687v1#bib.bib45\" title=\"\">45</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.8.6.2\">87.40</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.8.6.3\">92.94</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.8.6.4\">96.70</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.8.6.5\">92.27</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.8.6.6\">96.79</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.9.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.1.1.9.7.1\">Separable-Unet <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09687v1#bib.bib50\" title=\"\">50</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.9.7.2\">84.27</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.9.7.3\">89.95</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.9.7.4\">95.67</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.9.7.5\">93.14</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.9.7.6\">94.68</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.10.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.1.1.10.8.1\">Swin-Unet <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09687v1#bib.bib42\" title=\"\">42</a>]</cite>\n</th>\n<td class=\"ltx_td 
ltx_align_center\" id=\"S4.T2.1.1.10.8.2\">87.60</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.10.8.3\">88.94</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.10.8.4\">96.00</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.10.8.5\">92.27</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.10.8.6\">95.79</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.11.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.1.1.11.9.1\">U-Net <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09687v1#bib.bib19\" title=\"\">19</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.11.9.2\">81.38</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.11.9.3\">88.24</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.11.9.4\">93.31</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.11.9.5\">87.28</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.11.9.6\">92.88</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.12.10\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.1.1.12.10.1\">UNet++ <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09687v1#bib.bib51\" title=\"\">51</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.12.10.2\">82.81</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.12.10.3\">89.19</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.12.10.4\">93.88</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.12.10.5\">88.78</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.12.10.6\">93.52</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.13.11\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_t\" id=\"S4.T2.1.1.13.11.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.13.11.1.1\">Proposed Method</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T2.1.1.13.11.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.13.11.2.1\">89.51</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T2.1.1.13.11.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.13.11.3.1\">93.43</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T2.1.1.13.11.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.13.11.4.1\">96.40</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T2.1.1.13.11.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.13.11.5.1\">94.55</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T2.1.1.13.11.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.13.11.6.1\">97.02</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
80
+ "capture": "TABLE II: Quantitative performance comparison of TESL-Net with various SOTA methods on the ISIC2016 skin lesion dataset."
81
+ },
82
+ "3": {
83
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE III: </span>Quantitative performance comparison of TESL-Net with various SOTA methods on the ISIC2017 skin lesion dataset.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T3.1\" style=\"width:433.6pt;height:310.9pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(53.6pt,-38.4pt) scale(1.32859178449242,1.32859178449242) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T3.1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T3.1.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S4.T3.1.1.1.1.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.1.1.1.1.1\">Method</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"5\" id=\"S4.T3.1.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.1.1.1.2.1\">Performance Measures (%)</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.1.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.1.1.2.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.1.2.2.1.1\">IoU</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.1.1.2.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.1.2.2.2.1\">Dice</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.1.1.2.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.1.2.2.3.1\">Acc</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.1.1.2.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.1.2.2.4.1\">Se</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.1.1.2.2.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.1.2.2.5.1\">Sp</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.1.1.3.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T3.1.1.3.1.1\">ARU-GD <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09687v1#bib.bib47\" title=\"\">47</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.1.3.1.2\">80.77</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.1.3.1.3\">87.89</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.1.3.1.4\">93.88</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.1.3.1.5\">88.31</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.1.3.1.6\">96.31</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.1.4.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.1.1.4.2.1\">AS-Net <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09687v1#bib.bib37\" title=\"\">37</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.4.2.2\">80.51</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.4.2.3\">88.07</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.4.2.4\">94.66</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.4.2.5\">89.92</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.4.2.6\">95.72</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.1.5.3\">\n<th 
class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.1.1.5.3.1\">BCDU-Net <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09687v1#bib.bib48\" title=\"\">48</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.5.3.2\">79.20</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.5.3.3\">78.11</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.5.3.4\">91.63</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.5.3.5\">76.46</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.5.3.6\">97.09</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.1.6.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.1.1.6.4.1\">DAGAN <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09687v1#bib.bib31\" title=\"\">31</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.6.4.2\">75.94</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.6.4.3\">84.25</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.6.4.4\">93.26</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.6.4.5\">83.63</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.6.4.6\">97.25</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.1.7.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.1.1.7.5.1\">FAT-Net <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09687v1#bib.bib46\" title=\"\">46</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.7.5.2\">76.53</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.7.5.3\">85.00</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.7.5.4\">93.26</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.7.5.5\">83.92</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.7.5.6\">97.25</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.1.8.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.1.1.8.6.1\">RA-Net <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09687v1#bib.bib45\" title=\"\">45</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.8.6.2\">84.89</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.8.6.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.1.8.6.3.1\">90.99</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.8.6.4\">95.76</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.8.6.5\">91.06</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.8.6.6\">96.05</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.1.9.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.1.1.9.7.1\">SLT-Net <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09687v1#bib.bib52\" title=\"\">52</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.9.7.2\">79.87</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.9.7.3\">67.90</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.9.7.4\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.9.7.5\">73.63</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.9.7.6\">97.27</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.1.10.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.1.1.10.8.1\">Swin-Unet <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09687v1#bib.bib42\" 
title=\"\">42</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.10.8.2\">80.89</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.10.8.3\">81.99</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.10.8.4\">94.76</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.10.8.5\">88.06</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.10.8.6\">96.05</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.1.11.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.1.1.11.9.1\">U-Net <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09687v1#bib.bib19\" title=\"\">19</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.11.9.2\">75.69</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.11.9.3\">84.12</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.11.9.4\">93.29</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.11.9.5\">84.30</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.11.9.6\">93.41</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.1.12.10\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.1.1.12.10.1\">UNet++ <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09687v1#bib.bib51\" title=\"\">51</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.12.10.2\">78.58</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.12.10.3\">86.35</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.12.10.4\">93.73</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.12.10.5\">87.13</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.12.10.6\">94.41</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.1.13.11\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_t\" id=\"S4.T3.1.1.13.11.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.1.13.11.1.1\">Proposed Method</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T3.1.1.13.11.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.1.13.11.2.1\">86.91</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T3.1.1.13.11.3\">90.09</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T3.1.1.13.11.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.1.13.11.4.1\">95.80</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T3.1.1.13.11.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.1.13.11.5.1\">91.10</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T3.1.1.13.11.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.1.13.11.6.1\">97.29</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
84
+ "capture": "TABLE III: Quantitative performance comparison of TESL-Net with various SOTA methods on the ISIC2017 skin lesion dataset."
85
+ },
86
+ "4": {
87
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T4\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE IV: </span>Quantitative performance comparison of TESL-Net with various SOTA methods on the ISIC2018 skin lesion dataset.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T4.1\" style=\"width:433.6pt;height:334.8pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(53.6pt,-41.4pt) scale(1.32859178449242,1.32859178449242) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T4.1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T4.1.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S4.T4.1.1.1.1.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.1.1.1.1.1\">Method</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"5\" id=\"S4.T4.1.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.1.1.1.2.1\">Performance Measures (%)</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.1.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.1.1.2.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.1.2.2.1.1\">IoU</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.1.1.2.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.1.2.2.2.1\">Dice</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.1.1.2.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.1.2.2.3.1\">Acc</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.1.1.2.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.1.2.2.4.1\">Se</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.1.1.2.2.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.1.2.2.5.1\">Sp</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T4.1.1.3.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T4.1.1.3.1.1\">ARU-GD <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09687v1#bib.bib47\" title=\"\">47</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.1.3.1.2\">84.55</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.1.3.1.3\">89.16</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.1.3.1.4\">94.23</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.1.3.1.5\">91.42</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.1.3.1.6\">96.81</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.1.4.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T4.1.1.4.2.1\">AS-Net <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09687v1#bib.bib37\" title=\"\">37</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.4.2.2\">83.09</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.4.2.3\">89.55</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.4.2.4\">95.68</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.4.2.5\">93.06</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.4.2.6\">94.69</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.1.5.3\">\n<th 
class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T4.1.1.5.3.1\">BCDU-Net <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09687v1#bib.bib48\" title=\"\">48</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.5.3.2\">81.10</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.5.3.3\">85.10</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.5.3.4\">93.70</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.5.3.5\">78.50</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.5.3.6\">98.20</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.1.6.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T4.1.1.6.4.1\">DAGAN <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09687v1#bib.bib31\" title=\"\">31</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.6.4.2\">81.13</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.6.4.3\">88.07</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.6.4.4\">93.24</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.6.4.5\">90.72</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.6.4.6\">95.88</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.1.7.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T4.1.1.7.5.1\">FAT-Net <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09687v1#bib.bib46\" title=\"\">46</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.7.5.2\">82.02</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.7.5.3\">89.03</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.7.5.4\">95.78</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.7.5.5\">91.00</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.7.5.6\">96.99</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.1.8.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T4.1.1.8.6.1\">ICL-Net <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09687v1#bib.bib9\" title=\"\">9</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.8.6.2\">83.76</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.8.6.3\">90.41</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.8.6.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.1.8.6.4.1\">97.24</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.8.6.5\">91.66</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.8.6.6\">98.63</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.1.9.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T4.1.1.9.7.1\">RA-Net <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09687v1#bib.bib45\" title=\"\">45</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.9.7.2\">88.34</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.9.7.3\">93.25</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.9.7.4\">95.84</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.9.7.5\">93.63</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.9.7.6\">94.16</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.1.10.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T4.1.1.10.8.1\">Swin-Unet <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09687v1#bib.bib42\" 
title=\"\">42</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.10.8.2\">82.79</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.10.8.3\">88.98</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.10.8.4\">96.83</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.10.8.5\">90.10</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.10.8.6\">97.16</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.1.11.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T4.1.1.11.9.1\">SLT-Net <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09687v1#bib.bib52\" title=\"\">52</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.11.9.2\">71.51</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.11.9.3\">82.85</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.11.9.4\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.11.9.5\">78.85</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.11.9.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.1.11.9.6.1\">99.35</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.1.12.10\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T4.1.1.12.10.1\">U-Net <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09687v1#bib.bib19\" title=\"\">19</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.12.10.2\">80.09</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.12.10.3\">86.64</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.12.10.4\">92.52</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.12.10.5\">85.22</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.12.10.6\">92.09</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.1.13.11\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T4.1.1.13.11.1\">UNet++ <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09687v1#bib.bib51\" title=\"\">51</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.13.11.2\">81.62</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.13.11.3\">87.32</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.13.11.4\">93.72</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.13.11.5\">88.70</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.13.11.6\">93.96</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.1.14.12\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_t\" id=\"S4.T4.1.1.14.12.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.1.14.12.1.1\">Proposed Method</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T4.1.1.14.12.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.1.14.12.2.1\">90.56</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T4.1.1.14.12.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.1.14.12.3.1\">94.22</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T4.1.1.14.12.4\">96.23</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T4.1.1.14.12.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.1.14.12.5.1\">95.02</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T4.1.1.14.12.6\">97.21</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
88
+ "capture": "TABLE IV: Quantitative performance comparison of TESL-Net with various SOTA methods on the ISIC2018 skin lesion dataset."
89
+ }
90
+ },
91
+ "image_paths": {
92
+ "1(a)": {
93
+ "figure_path": "2408.09687v1_figure_1(a).png",
94
+ "caption": "Figure 1: Challenges in skin lesion segmentation.",
95
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/Challenges/LC1.jpg"
96
+ },
97
+ "1(b)": {
98
+ "figure_path": "2408.09687v1_figure_1(b).png",
99
+ "caption": "Figure 1: Challenges in skin lesion segmentation.",
100
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/Challenges/ML1.jpg"
101
+ },
102
+ "1(c)": {
103
+ "figure_path": "2408.09687v1_figure_1(c).png",
104
+ "caption": "Figure 1: Challenges in skin lesion segmentation.",
105
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/Challenges/PH1.jpg"
106
+ },
107
+ "1(d)": {
108
+ "figure_path": "2408.09687v1_figure_1(d).png",
109
+ "caption": "Figure 1: Challenges in skin lesion segmentation.",
110
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/Challenges/VA1.jpg"
111
+ },
112
+ "1(e)": {
113
+ "figure_path": "2408.09687v1_figure_1(e).png",
114
+ "caption": "Figure 1: Challenges in skin lesion segmentation.",
115
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/Challenges/PA1.jpg"
116
+ },
117
+ "1(f)": {
118
+ "figure_path": "2408.09687v1_figure_1(f).png",
119
+ "caption": "Figure 1: Challenges in skin lesion segmentation.",
120
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/Challenges/LC2.jpg"
121
+ },
122
+ "1(g)": {
123
+ "figure_path": "2408.09687v1_figure_1(g).png",
124
+ "caption": "Figure 1: Challenges in skin lesion segmentation.",
125
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/Challenges/ML2.jpg"
126
+ },
127
+ "1(h)": {
128
+ "figure_path": "2408.09687v1_figure_1(h).png",
129
+ "caption": "Figure 1: Challenges in skin lesion segmentation.",
130
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/Challenges/PH2.jpg"
131
+ },
132
+ "1(i)": {
133
+ "figure_path": "2408.09687v1_figure_1(i).png",
134
+ "caption": "Figure 1: Challenges in skin lesion segmentation.",
135
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/Challenges/VA2.jpg"
136
+ },
137
+ "1(j)": {
138
+ "figure_path": "2408.09687v1_figure_1(j).png",
139
+ "caption": "Figure 1: Challenges in skin lesion segmentation.",
140
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/Challenges/PA2.jpg"
141
+ },
142
+ "2": {
143
+ "figure_path": "2408.09687v1_figure_2.png",
144
+ "caption": "Figure 2: Schematic of the proposed method. (a) Block diagram of the proposed TESL-Net, (b) Swin Transformer Block.",
145
+ "url": "http://arxiv.org/html/2408.09687v1/x1.png"
146
+ },
147
+ "3(a)": {
148
+ "figure_path": "2408.09687v1_figure_3(a).png",
149
+ "caption": "Figure 3: Visual performance comparison of the proposed TESL-Net on ISIC 2016 [39] dataset.",
150
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2016/im_95.jpg"
151
+ },
152
+ "3(b)": {
153
+ "figure_path": "2408.09687v1_figure_3(b).png",
154
+ "caption": "Figure 3: Visual performance comparison of the proposed TESL-Net on ISIC 2016 [39] dataset.",
155
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2016/GT_95.jpg"
156
+ },
157
+ "3(c)": {
158
+ "figure_path": "2408.09687v1_figure_3(c).png",
159
+ "caption": "Figure 3: Visual performance comparison of the proposed TESL-Net on ISIC 2016 [39] dataset.",
160
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2016/LSSF_95.jpg"
161
+ },
162
+ "3(d)": {
163
+ "figure_path": "2408.09687v1_figure_3(d).png",
164
+ "caption": "Figure 3: Visual performance comparison of the proposed TESL-Net on ISIC 2016 [39] dataset.",
165
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2016/Swin_95.jpg"
166
+ },
167
+ "3(e)": {
168
+ "figure_path": "2408.09687v1_figure_3(e).png",
169
+ "caption": "Figure 3: Visual performance comparison of the proposed TESL-Net on ISIC 2016 [39] dataset.",
170
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2016/UNet_95.jpg"
171
+ },
172
+ "3(f)": {
173
+ "figure_path": "2408.09687v1_figure_3(f).png",
174
+ "caption": "Figure 3: Visual performance comparison of the proposed TESL-Net on ISIC 2016 [39] dataset.",
175
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2016/ARU_GD_95.jpg"
176
+ },
177
+ "3(g)": {
178
+ "figure_path": "2408.09687v1_figure_3(g).png",
179
+ "caption": "Figure 3: Visual performance comparison of the proposed TESL-Net on ISIC 2016 [39] dataset.",
180
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2016/AttNet_95.jpg"
181
+ },
182
+ "3(h)": {
183
+ "figure_path": "2408.09687v1_figure_3(h).png",
184
+ "caption": "Figure 3: Visual performance comparison of the proposed TESL-Net on ISIC 2016 [39] dataset.",
185
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2016/UNet++_95.jpg"
186
+ },
187
+ "3(i)": {
188
+ "figure_path": "2408.09687v1_figure_3(i).png",
189
+ "caption": "Figure 3: Visual performance comparison of the proposed TESL-Net on ISIC 2016 [39] dataset.",
190
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2016/DUCKNet_95.jpg"
191
+ },
192
+ "3(j)": {
193
+ "figure_path": "2408.09687v1_figure_3(j).png",
194
+ "caption": "Figure 3: Visual performance comparison of the proposed TESL-Net on ISIC 2016 [39] dataset.",
195
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2016/Meta_Poly_95.jpg"
196
+ },
197
+ "3(k)": {
198
+ "figure_path": "2408.09687v1_figure_3(k).png",
199
+ "caption": "Figure 3: Visual performance comparison of the proposed TESL-Net on ISIC 2016 [39] dataset.",
200
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2016/im_377.jpg"
201
+ },
202
+ "3(l)": {
203
+ "figure_path": "2408.09687v1_figure_3(l).png",
204
+ "caption": "Figure 3: Visual performance comparison of the proposed TESL-Net on ISIC 2016 [39] dataset.",
205
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2016/GT_377.jpg"
206
+ },
207
+ "3(m)": {
208
+ "figure_path": "2408.09687v1_figure_3(m).png",
209
+ "caption": "Figure 3: Visual performance comparison of the proposed TESL-Net on ISIC 2016 [39] dataset.",
210
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2016/Proposed_377.jpg"
211
+ },
212
+ "3(n)": {
213
+ "figure_path": "2408.09687v1_figure_3(n).png",
214
+ "caption": "Figure 3: Visual performance comparison of the proposed TESL-Net on ISIC 2016 [39] dataset.",
215
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2016/Swin_377.jpg"
216
+ },
217
+ "3(o)": {
218
+ "figure_path": "2408.09687v1_figure_3(o).png",
219
+ "caption": "Figure 3: Visual performance comparison of the proposed TESL-Net on ISIC 2016 [39] dataset.",
220
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2016/UNet_377.jpg"
221
+ },
222
+ "3(p)": {
223
+ "figure_path": "2408.09687v1_figure_3(p).png",
224
+ "caption": "Figure 3: Visual performance comparison of the proposed TESL-Net on ISIC 2016 [39] dataset.",
225
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2016/ARU_GD_377.jpg"
226
+ },
227
+ "3(q)": {
228
+ "figure_path": "2408.09687v1_figure_3(q).png",
229
+ "caption": "Figure 3: Visual performance comparison of the proposed TESL-Net on ISIC 2016 [39] dataset.",
230
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2016/AttNet_377.jpg"
231
+ },
232
+ "3(r)": {
233
+ "figure_path": "2408.09687v1_figure_3(r).png",
234
+ "caption": "Figure 3: Visual performance comparison of the proposed TESL-Net on ISIC 2016 [39] dataset.",
235
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2016/UNet++_377.jpg"
236
+ },
237
+ "3(s)": {
238
+ "figure_path": "2408.09687v1_figure_3(s).png",
239
+ "caption": "Figure 3: Visual performance comparison of the proposed TESL-Net on ISIC 2016 [39] dataset.",
240
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2016/DUCKNet_377.jpg"
241
+ },
242
+ "3(t)": {
243
+ "figure_path": "2408.09687v1_figure_3(t).png",
244
+ "caption": "Figure 3: Visual performance comparison of the proposed TESL-Net on ISIC 2016 [39] dataset.",
245
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2016/Meta_Poly_377.jpg"
246
+ },
247
+ "4(a)": {
248
+ "figure_path": "2408.09687v1_figure_4(a).png",
249
+ "caption": "Figure 4: Visual performance comparison of the proposed TESL-Net on ISIC 2017 [40] dataset.",
250
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2017/im_425.jpg"
251
+ },
252
+ "4(b)": {
253
+ "figure_path": "2408.09687v1_figure_4(b).png",
254
+ "caption": "Figure 4: Visual performance comparison of the proposed TESL-Net on ISIC 2017 [40] dataset.",
255
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2017/GT_425.jpg"
256
+ },
257
+ "4(c)": {
258
+ "figure_path": "2408.09687v1_figure_4(c).png",
259
+ "caption": "Figure 4: Visual performance comparison of the proposed TESL-Net on ISIC 2017 [40] dataset.",
260
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2017/LSSF_425.jpg"
261
+ },
262
+ "4(d)": {
263
+ "figure_path": "2408.09687v1_figure_4(d).png",
264
+ "caption": "Figure 4: Visual performance comparison of the proposed TESL-Net on ISIC 2017 [40] dataset.",
265
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2017/Swin_425.jpg"
266
+ },
267
+ "4(e)": {
268
+ "figure_path": "2408.09687v1_figure_4(e).png",
269
+ "caption": "Figure 4: Visual performance comparison of the proposed TESL-Net on ISIC 2017 [40] dataset.",
270
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2017/UNet_425.jpg"
271
+ },
272
+ "4(f)": {
273
+ "figure_path": "2408.09687v1_figure_4(f).png",
274
+ "caption": "Figure 4: Visual performance comparison of the proposed TESL-Net on ISIC 2017 [40] dataset.",
275
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2017/ARU_GD_425.jpg"
276
+ },
277
+ "4(g)": {
278
+ "figure_path": "2408.09687v1_figure_4(g).png",
279
+ "caption": "Figure 4: Visual performance comparison of the proposed TESL-Net on ISIC 2017 [40] dataset.",
280
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2017/AttNet_425.jpg"
281
+ },
282
+ "4(h)": {
283
+ "figure_path": "2408.09687v1_figure_4(h).png",
284
+ "caption": "Figure 4: Visual performance comparison of the proposed TESL-Net on ISIC 2017 [40] dataset.",
285
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2017/UNet++_425.jpg"
286
+ },
287
+ "4(i)": {
288
+ "figure_path": "2408.09687v1_figure_4(i).png",
289
+ "caption": "Figure 4: Visual performance comparison of the proposed TESL-Net on ISIC 2017 [40] dataset.",
290
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2017/DUCKNet_425.jpg"
291
+ },
292
+ "4(j)": {
293
+ "figure_path": "2408.09687v1_figure_4(j).png",
294
+ "caption": "Figure 4: Visual performance comparison of the proposed TESL-Net on ISIC 2017 [40] dataset.",
295
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2017/Meta_Poly_425.jpg"
296
+ },
297
+ "4(k)": {
298
+ "figure_path": "2408.09687v1_figure_4(k).png",
299
+ "caption": "Figure 4: Visual performance comparison of the proposed TESL-Net on ISIC 2017 [40] dataset.",
300
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2017/im_521.jpg"
301
+ },
302
+ "4(l)": {
303
+ "figure_path": "2408.09687v1_figure_4(l).png",
304
+ "caption": "Figure 4: Visual performance comparison of the proposed TESL-Net on ISIC 2017 [40] dataset.",
305
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2017/GT_521.jpg"
306
+ },
307
+ "4(m)": {
308
+ "figure_path": "2408.09687v1_figure_4(m).png",
309
+ "caption": "Figure 4: Visual performance comparison of the proposed TESL-Net on ISIC 2017 [40] dataset.",
310
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2017/LSSF_521.jpg"
311
+ },
312
+ "4(n)": {
313
+ "figure_path": "2408.09687v1_figure_4(n).png",
314
+ "caption": "Figure 4: Visual performance comparison of the proposed TESL-Net on ISIC 2017 [40] dataset.",
315
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2017/Swin_521.jpg"
316
+ },
317
+ "4(o)": {
318
+ "figure_path": "2408.09687v1_figure_4(o).png",
319
+ "caption": "Figure 4: Visual performance comparison of the proposed TESL-Net on ISIC 2017 [40] dataset.",
320
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2017/UNet_521.jpg"
321
+ },
322
+ "4(p)": {
323
+ "figure_path": "2408.09687v1_figure_4(p).png",
324
+ "caption": "Figure 4: Visual performance comparison of the proposed TESL-Net on ISIC 2017 [40] dataset.",
325
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2017/ARU_GD_521.jpg"
326
+ },
327
+ "4(q)": {
328
+ "figure_path": "2408.09687v1_figure_4(q).png",
329
+ "caption": "Figure 4: Visual performance comparison of the proposed TESL-Net on ISIC 2017 [40] dataset.",
330
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2017/AttNet_521.jpg"
331
+ },
332
+ "4(r)": {
333
+ "figure_path": "2408.09687v1_figure_4(r).png",
334
+ "caption": "Figure 4: Visual performance comparison of the proposed TESL-Net on ISIC 2017 [40] dataset.",
335
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2017/UNet++_521.jpg"
336
+ },
337
+ "4(s)": {
338
+ "figure_path": "2408.09687v1_figure_4(s).png",
339
+ "caption": "Figure 4: Visual performance comparison of the proposed TESL-Net on ISIC 2017 [40] dataset.",
340
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2017/DUCKNet_521.jpg"
341
+ },
342
+ "4(t)": {
343
+ "figure_path": "2408.09687v1_figure_4(t).png",
344
+ "caption": "Figure 4: Visual performance comparison of the proposed TESL-Net on ISIC 2017 [40] dataset.",
345
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2017/Meta_Poly_521.jpg"
346
+ },
347
+ "5(a)": {
348
+ "figure_path": "2408.09687v1_figure_5(a).png",
349
+ "caption": "Figure 5: Visual performance comparison of the proposed TESL-Net on ISIC 2018 [41] dataset.",
350
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2018/im_34.jpg"
351
+ },
352
+ "5(b)": {
353
+ "figure_path": "2408.09687v1_figure_5(b).png",
354
+ "caption": "Figure 5: Visual performance comparison of the proposed TESL-Net on ISIC 2018 [41] dataset.",
355
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2018/GT_34.jpg"
356
+ },
357
+ "5(c)": {
358
+ "figure_path": "2408.09687v1_figure_5(c).png",
359
+ "caption": "Figure 5: Visual performance comparison of the proposed TESL-Net on ISIC 2018 [41] dataset.",
360
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2018/LSSF_34.jpg"
361
+ },
362
+ "5(d)": {
363
+ "figure_path": "2408.09687v1_figure_5(d).png",
364
+ "caption": "Figure 5: Visual performance comparison of the proposed TESL-Net on ISIC 2018 [41] dataset.",
365
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2018/Swin_34.jpg"
366
+ },
367
+ "5(e)": {
368
+ "figure_path": "2408.09687v1_figure_5(e).png",
369
+ "caption": "Figure 5: Visual performance comparison of the proposed TESL-Net on ISIC 2018 [41] dataset.",
370
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2018/UNet_34.jpg"
371
+ },
372
+ "5(f)": {
373
+ "figure_path": "2408.09687v1_figure_5(f).png",
374
+ "caption": "Figure 5: Visual performance comparison of the proposed TESL-Net on ISIC 2018 [41] dataset.",
375
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2018/ARU_GD_34.jpg"
376
+ },
377
+ "5(g)": {
378
+ "figure_path": "2408.09687v1_figure_5(g).png",
379
+ "caption": "Figure 5: Visual performance comparison of the proposed TESL-Net on ISIC 2018 [41] dataset.",
380
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2018/AttNet_34.jpg"
381
+ },
382
+ "5(h)": {
383
+ "figure_path": "2408.09687v1_figure_5(h).png",
384
+ "caption": "Figure 5: Visual performance comparison of the proposed TESL-Net on ISIC 2018 [41] dataset.",
385
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2018/UNet++_34.jpg"
386
+ },
387
+ "5(i)": {
388
+ "figure_path": "2408.09687v1_figure_5(i).png",
389
+ "caption": "Figure 5: Visual performance comparison of the proposed TESL-Net on ISIC 2018 [41] dataset.",
390
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2018/DUCKNet_34.jpg"
391
+ },
392
+ "5(j)": {
393
+ "figure_path": "2408.09687v1_figure_5(j).png",
394
+ "caption": "Figure 5: Visual performance comparison of the proposed TESL-Net on ISIC 2018 [41] dataset.",
395
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2018/Meta_Poly_34.jpg"
396
+ },
397
+ "5(k)": {
398
+ "figure_path": "2408.09687v1_figure_5(k).png",
399
+ "caption": "Figure 5: Visual performance comparison of the proposed TESL-Net on ISIC 2018 [41] dataset.",
400
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2018/im_519.jpg"
401
+ },
402
+ "5(l)": {
403
+ "figure_path": "2408.09687v1_figure_5(l).png",
404
+ "caption": "Figure 5: Visual performance comparison of the proposed TESL-Net on ISIC 2018 [41] dataset.",
405
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2018/GT_519.jpg"
406
+ },
407
+ "5(m)": {
408
+ "figure_path": "2408.09687v1_figure_5(m).png",
409
+ "caption": "Figure 5: Visual performance comparison of the proposed TESL-Net on ISIC 2018 [41] dataset.",
410
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2018/LSSF_519.jpg"
411
+ },
412
+ "5(n)": {
413
+ "figure_path": "2408.09687v1_figure_5(n).png",
414
+ "caption": "Figure 5: Visual performance comparison of the proposed TESL-Net on ISIC 2018 [41] dataset.",
415
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2018/Swin_519.jpg"
416
+ },
417
+ "5(o)": {
418
+ "figure_path": "2408.09687v1_figure_5(o).png",
419
+ "caption": "Figure 5: Visual performance comparison of the proposed TESL-Net on ISIC 2018 [41] dataset.",
420
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2018/UNet_519.jpg"
421
+ },
422
+ "5(p)": {
423
+ "figure_path": "2408.09687v1_figure_5(p).png",
424
+ "caption": "Figure 5: Visual performance comparison of the proposed TESL-Net on ISIC 2018 [41] dataset.",
425
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2018/ARU_GD_519.jpg"
426
+ },
427
+ "5(q)": {
428
+ "figure_path": "2408.09687v1_figure_5(q).png",
429
+ "caption": "Figure 5: Visual performance comparison of the proposed TESL-Net on ISIC 2018 [41] dataset.",
430
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2018/AttNet_519.jpg"
431
+ },
432
+ "5(r)": {
433
+ "figure_path": "2408.09687v1_figure_5(r).png",
434
+ "caption": "Figure 5: Visual performance comparison of the proposed TESL-Net on ISIC 2018 [41] dataset.",
435
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2018/UNet++_519.jpg"
436
+ },
437
+ "5(s)": {
438
+ "figure_path": "2408.09687v1_figure_5(s).png",
439
+ "caption": "Figure 5: Visual performance comparison of the proposed TESL-Net on ISIC 2018 [41] dataset.",
440
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2018/DUCKNet_519.jpg"
441
+ },
442
+ "5(t)": {
443
+ "figure_path": "2408.09687v1_figure_5(t).png",
444
+ "caption": "Figure 5: Visual performance comparison of the proposed TESL-Net on ISIC 2018 [41] dataset.",
445
+ "url": "http://arxiv.org/html/2408.09687v1/extracted/5799147/Visuals/ISIC2018/Meta_Poly_519.jpg"
446
+ }
447
+ },
448
+ "validation": true,
449
+ "references": [],
450
+ "url": "http://arxiv.org/html/2408.09687v1"
451
+ }
20240819/2408.09694v1.json ADDED
@@ -0,0 +1,149 @@
1
+ {
2
+ "title": "An Efficient Deep Reinforcement Learning Model for Online 3D Bin Packing Combining Object Rearrangement and Stable Placement",
3
+ "abstract": "This paper presents an efficient deep reinforcement learning (DRL) framework for online 3D bin packing (3D-BPP). The 3D-BPP is an NP-hard problem significant in logistics, warehousing, and transportation, involving the optimal arrangement of objects inside a bin. Traditional heuristic algorithms often fail to address dynamic and physical constraints in real-time scenarios. We introduce a novel DRL framework that integrates a reliable physics heuristic algorithm and object rearrangement and stable placement. Our experiment show that the proposed framework achieves higher space utilization rates effectively minimizing the amount of wasted space with fewer training epochs.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Robotic bin packing has many applications in the fields of logistics, warehousing, and transportation. The 3D Bin Packing Problem (3D-BPP), a well-known NP-hard problems [i3], is referred to as an optimization problem of packing multiple objects into a bin(s), while satisfying the bin capacity constraint [2]. The 3D-BPP can be tackled offline or online depending on whether all objects can be accessible or not. In terms of offline bin packing task, this setting assumes the prior knowledge of all objects, usually, finding the optimal packing sequence and optimal placement are involved in this setting. Typically, meta-heuristic algorithms have been employed to determine the optimal order sequence in previous studies [4], thereafter, heuristic algorithms, such as DBLF proposed by Korha and Mustaf [7] or HM proposed by Wang and Kris [8], are leveraged to determine where to place the object into the bin.\nCompared with offline bin packing, online bin packing is more challenging. Basically, the packing order is random, and the agent can only observe the upcoming objects (either single or multiple objects) as illustrated in Fig. 1 ###reference_###. In this context, relying exclusively on heuristics results in a considerable decline in bin utilization [4]. Under these constraints, Yang et al. [i4] employed unpacking-heuristics to improve the utilization. Nonetheless, this method raises the time cost, thereby diminishing the overall efficiency of the packing process.\nRecent progress in DRL has shown promising results in various domains by enabling models to learn optimal policies through trial and error [19]. Compared with heuristic algorithms, DRL excels in addressing optimization problems effectiveness in complex environments. However, real-world physical law damages the training efficiency as learning the physics in complex environment takes many trial-and-error iterations, and the stable placement cannot be guanranteed. Zhao et al. [14] and Yang et al. [i4] leveraged neural network to predict the physical feasibility map, enabling the agent to learn feasible packing strategies. Although these methods have achieved promising results in 3D-BPP, object stability is not guaranteed. To address these challenges, we propose an efficient and effective DRL framework using a highly reliable physics heuristic algorithm for online 3D-BPP. The main contributions of this paper are as follows.\nWe proposed a highly reliable physics heuristic algorithm that guarantees the stability of object placement in complex multi-stack environments, while retaining as many placement positions as possible.\nWe incorporated an object rearrangement process into the proposed framework which allows the robot manipulator to change the orientation of the upcoming object. It is also an efficient action that directly enhances space utilization without requiring additional time costs.\n###figure_1###"
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related Work",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Heuristics in Bin Packing Problem",
21
+ "text": "The bin packing problem is a key challenge in combinatorial optimization, aiming to arrange multiple objects efficiently within the larger container. However, 3D-BPP become unsolvable within a reasonable time frame using exact algorithms [1] when involving a large number of objects. Over the years, various heuristic and meta-heuristic methods have been developed to address this problem [5][6][7][9]. Heuristic algorithms critically depend on the sequence of object placement, and current research often employs meta-heuristic techniques such as simulated annealing [13] and genetic algorithms [7].\nConsequently, if complete information on all objects to be packed is unavailable, the effectiveness of heuristic algorithms drops significantly. Moreover, in real-world logistics warehouse, gathering detailed information about all objects can be challenging and time-consuming, reducing operational efficiency. Therefore, We propose using the object rearrangement method to change the orientation of objects in order to improve bin utilization, under the constraints of unchangeable order sequence."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "DRL in 3D-BPP",
27
+ "text": "DRL combines the decision-making capabilities of reinforcement learning with the powerful representation learning of deep neural networks. Furthermore, it can be adaptable to changing conditions and provide feasible solutions with highly efficient [19], where traditional methods may struggle to find efficient solutions. DRL has recently demonstrated strong performance across various robotics tasks [17][18], showcasing its ability to handle complex spatial and dynamic challenges effectively.\nThus applying DRL to the 3D-BPP could indeed be a highly efficient approach. For example, Zhao et al. [14] introduced a prediction-and-projection scheme where the agent first generates a physical stability mask for placement actions as an auxiliary task, then using this mask to adjust the action probabilities output by the actor during training. However, DRL models can suffer from instability and sensitivity to hyperparameters, making them difficult to tune and sometimes resulting in unpredictable performance. Moreover, most work focuses only on sample constraints, without considering real-world physical properties of objects, including the object CoM and its deviation in a complete stack. These factors can result in solutions that are impractical for real-world applications where physical stability and balance are essential.\nThus we propose the DRL framework integrated with a physics heuristics. This not only guarantees the stability of object placement but also enhances the training efficiency of the model, allowing for faster convergence."
28
+ },
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "Stability check in 3D-BPP",
33
+ "text": "Stable stacking is a critical factor when designing an online 3D bin packing pipeline. Learning the rules of real-world physics is a very difficult process for DRL. This not only lengthens the training time for the model but also causes fluctuations in model convergence.\nTherefore, for 3D-BPP, it is necessary to design a reliable and efficient physics heuristics for feasible action detection to quickly rule out incorrect actions in the current state. Zhao et al. [14] and Yang et al. [i4] use the similar scheme that combines the ratio of overlapping parts between the placed item and its contact items with a neural network model for prediction. But this is not a reliable method, since the model is a black box, there are always parts that are inexplicable and unpredictable. On the other hand, Wang and Kris [8] proposed a mathematical model that using a linear programming method solves for the torque balance and force balance of the object for all contact forces. Although this is a very reliable method,it is too complex for regular objects and usually takes a long time to evaluate all the candidates actions.\n###figure_2### Thus we propose a new physics heuristic algorithm for rectilinear objects, which can guarantee the stability of object placement in an efficient and effective way, under real-world physical constraint.\n###figure_3###"
34
+ },
35
+ {
36
+ "section_id": "3",
37
+ "parent_section_id": null,
38
+ "section_name": "Method",
39
+ "text": "We describe our method in two parts. First, we present our stability check method, which is a highlight of our work. Second, we introduce a DRL framework that integrates physical heuristics and object rearrangement."
40
+ },
41
+ {
42
+ "section_id": "3.1",
43
+ "parent_section_id": "3",
44
+ "section_name": "Stability Checking via Physics Heuristics",
45
+ "text": "In our research, we assume that the object for bin packing are rigid body and have a uniform mass distribution, so that the center of mass (CoM) is the geometric center of the object. But our method is not limited by the mass distribution, here just for simplify questions. For uneven objects, we can use Gao et al. [gao2][gao1] to estimate as close as possible the CoM.\nFor the current state bin , we generate a bottom-to-top depth heightmap with a resolution of , where each voxel represents a 5 vertical column of 3D space in the agent\u2019s workspace. The object to be placed is defined by its dimensions . We employ a sliding window to traverse the height map to check the stability of each placement.\nBased on physics\u2019 principles, we introduce the convexHull-1 method, as shown in Fig. 2 ###reference_###. The upward support force, denoted as is defined as the set of highest points in the window, obtained by object under currently placement. We utilize OpenCV [22] to calculate the largest convex hull formed by . Then, we evaluate the placement stability by verifying if the center of the sliding window is within the convex hull or not. During our experiments, we observed that relying only on a single layer of the convex hull cannot ensure the stability of object placement. Fig. 3 ###reference_### shows an example using convexHull-1 for stability check and fail.\nTo address the aforementioned issue, we introduce convexHull-, for managing multiple stacks of objects in complex environments. Throughout the object packing procedure, we maintain an empty map with the same size as the action map. The main concept of covexHull- is that the supporting force must be vertical and originate from the ground. Basically, for each position inside the sliding window, we check the number of wasted voxels along the axis. We consider that only no wasted voxels can be the reliable support force, which corresponds to the empty map value is zero, denoted as . After each placement, we update the empty map outlines in Algorithm 2 ###reference_###. Similarly, we use the new set of points to calculate the convex hull and determine whether the window\u2019s CoM is within it or not. Fig. 3 ###reference_### illustrates an example of stability check using convexHull-. Algorithm 1 ###reference_### outlines our algorithm in detail.\n###figure_4###"
46
+ },
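The convexHull-\u03b1 test and the empty-map bookkeeping described above lend themselves to a compact implementation. The following is a rough, illustrative Python/OpenCV sketch reconstructed from the prose alone; the function names, the window indexing, and the tie tolerance `eps` are our assumptions, not the authors' code.

```python
import numpy as np
import cv2  # the paper uses OpenCV for the convex-hull computation

def convex_hull_alpha_stable(heightmap, empty_map, x, y, w, l, eps=1e-3):
    """Is placing a w x l footprint at (x, y) stable under convexHull-alpha?"""
    win_h = heightmap[y:y+l, x:x+w]
    win_e = empty_map[y:y+l, x:x+w]
    top = win_h.max()
    # Reliable supports: columns at the top height with no wasted voxels below.
    ys, xs = np.where((win_h >= top - eps) & (win_e == 0))
    if len(xs) < 3:
        return False  # fewer than 3 support points cannot enclose the center
    hull = cv2.convexHull(np.stack([xs, ys], axis=1).astype(np.int32))
    center = (w / 2.0, l / 2.0)
    # pointPolygonTest >= 0 means the footprint center is inside or on the hull
    return cv2.pointPolygonTest(hull, center, False) >= 0

def place_and_update(heightmap, empty_map, x, y, w, l, obj_h):
    """Empty-map update sketch (our analogue of the paper's Algorithm 2):
    voxels between a column's old top and the object's base become wasted."""
    win = heightmap[y:y+l, x:x+w]
    base = win.max()                        # object rests on the highest columns
    empty_map[y:y+l, x:x+w] += base - win   # newly enclosed, unusable gap
    heightmap[y:y+l, x:x+w] = base + obj_h  # new top surface of the stack
```

Filtering supports with `empty_map == 0` is what distinguishes this sketch from plain convexHull-1, which would accept any highest point regardless of whether it is supported from the ground.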
47
+ {
48
+ "section_id": "3.2",
49
+ "parent_section_id": "3",
50
+ "section_name": "DRL for Bin Packing",
51
+ "text": ""
52
+ },
53
+ {
54
+ "section_id": "3.2.1",
55
+ "parent_section_id": "3.2",
56
+ "section_name": "3.2.1 Problem Formulation",
57
+ "text": "Formally, online 3D bin packing involves packing a number of object , each with arbitrary dimensions and cuboid shapes, into a bin of arbitrary dimensions . The process is constrained by the visibility of only the immediately upcoming object could be packed into the bin. Once the bin is filled or can not pack upcoming object the process will stop.\nTo solve this task, we formulate it as a Markov Decision Processes (MDPs), which can be denoted as a tuple . Specifically, we employ two agents with polices and to independently predict placement orientation and position.\nThe whole process is descried as follow: At the time step , the agent observes the environment and takes a state representation, denoted as . Then the agent predicts the action and pass to agent to predict action . Execute the action tuple, causing the environment to transition to , then immediately obtains a reward . The process aims to achieve the maximal cumulative rewards with discount , as shown in Eq. (1 ###reference_###) and (2 ###reference_###), by jointly optimizing two policies."
58
+ },
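Eq. (1) and (2) referenced above did not survive extraction. A plausible LaTeX reconstruction of the joint discounted-return objective, using standard MDP notation with assumed policy symbols \pi_o (orientation) and \pi_p (placement), is:

```latex
% Hypothetical reconstruction; all symbols are assumed, not recovered.
\max_{\pi_o,\,\pi_p}\; J(\pi_o,\pi_p)
  \;=\; \mathbb{E}\!\left[\sum_{t=0}^{T}\gamma^{t}\,r_t\right],
\qquad 0 < \gamma \le 1 .
```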
59
+ {
60
+ "section_id": "3.2.2",
61
+ "parent_section_id": "3.2",
62
+ "section_name": "3.2.2 State Definition",
63
+ "text": "We define state as the configuration of the bin along with the object that is about to be packed. Use the depth image of the bin to generate a bottom-to-top depth heightmap [14].\nFollowing the work conducted by Yang et al. [i4], given the object with dimensions , we create a three channel map with the dimension . Each channel corresponds to one of the object\u2019s dimensions and is fully populated with the respective dimension values. Then combine them as to represent the State.\n###figure_5###"
64
+ },
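A minimal sketch of the described state construction, assuming a NumPy heightmap and a channel-first layout (function and variable names are hypothetical):

```python
import numpy as np

def make_state(heightmap, dims):
    """Stack the bin heightmap with three constant channels holding the
    upcoming object's dimensions, yielding the 4-channel state tensor."""
    h, w = heightmap.shape
    dim_channels = [np.full((h, w), d, dtype=np.float32) for d in dims]
    return np.stack([heightmap.astype(np.float32), *dim_channels], axis=0)

state = make_state(np.zeros((10, 10)), (4.0, 3.0, 2.0))  # shape (4, 10, 10)
```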
65
+ {
66
+ "section_id": "3.2.3",
67
+ "parent_section_id": "3.2",
68
+ "section_name": "3.2.3 Action Definition",
69
+ "text": "In this work, we propose to arrange object orientation in order to achieve better placement. Therefore, the action is defined as the conjunction of object rearrangement and placement, which is represented by , where represents the target object orientation and represents a specific position on top layer of the bin. To simplify the packing procedure, both and are discretized.\nAs illustrated in Fig. 5 ###reference_###, there are six different orientations. The number of positions for possible placement is the same as the number of pixels inside the heightmap. Given , the agent firstly uses object rearrangement operation to achieve the object orientation , and then place the object to the position ."
70
+ },
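The six axis-aligned orientations of a cuboid are exactly the permutations of its dimension triple; a tiny illustrative helper (ours, not the paper's code):

```python
from itertools import permutations

def orientations(dims):
    # Six axis-aligned orientations = permutations of (l, w, h);
    # duplicates collapse automatically when two dimensions coincide.
    return sorted(set(permutations(dims)))

print(orientations((3, 2, 1)))  # 6 tuples, one per orientation in Fig. 5
```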
71
+ {
72
+ "section_id": "3.2.4",
73
+ "parent_section_id": "3.2",
74
+ "section_name": "3.2.4 Reward Function",
75
+ "text": "Following the idea mentioned in [14], at the time step , the immediate reward is the weighted subtraction of increased utilization space and the wasted space given by Eq. (3 ###reference_###) to (5 ###reference_###). Please note that the wasted space can be calculated efficiently by comparing the summation of the empty map before and after the placement. In addition, both and are set to be one in our experiment."
76
+ },
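Eq. (3) to (5) were lost in extraction; a hedged reconstruction of the reward described above, with stand-in weight symbols w_u and w_w (both set to one in the paper's experiments):

```latex
% Stand-in symbols: U_t = utilized volume, W_t = wasted volume after step t.
r_t \;=\; w_u\,\bigl(U_t - U_{t-1}\bigr) \;-\; w_w\,\bigl(W_t - W_{t-1}\bigr),
\qquad w_u = w_w = 1 .
```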
77
+ {
78
+ "section_id": "3.2.5",
79
+ "parent_section_id": "3.2",
80
+ "section_name": "3.2.5 Physics Heuristics DRL Framework",
81
+ "text": "Distinct from other works [i4], we proposed a two-agents DRL framework integrated with physics heuristics as shown in Fig. 4 ###reference_###. Based on Proximal Policy Optimization (PPO) [24], we develop two actor networks: dedicated to predicting the object\u2019s orientation and to determining the packing position. Both actor networks takes input as the 4-channels maps, the output of is a six-dimensional vector where each element dedicates one specific object orientation, the output of is the action map for placement with the same size as the heightmap.\nThe training pipeline is as follows: Given the object and configuration of bin , firstly, the Phy-Heu module generates stable action maps for all potential object orientations. Using these stable action maps, we construct an orientation mask to exclude orientations that do not allow for any feasible stable placement. Meanwhile, will predict the probability distribution of the object orientations. Using the orientation mask and the predicted distribution of orientations, the orientation is sampled. Next, based on the sampled orientation, the agent takes the and shuffled to predict the placement score map. Lastly, we sample the action from the intersected map of the corresponding stable action map and the predicted action score map to ensure the placement stability."
82
+ },
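A minimal sketch of the mask-constrained sampling used for both actors, as we read the pipeline description (shapes, names, and the terminate-on-empty behavior are assumptions):

```python
import numpy as np

def sample_masked(probs, mask, rng=None):
    """Zero out infeasible entries of an actor's output distribution,
    renormalize, and sample one action index (None if nothing is feasible)."""
    rng = rng or np.random.default_rng()
    p = (probs * mask).ravel()
    total = p.sum()
    if total == 0:
        return None                          # no stable action: episode ends
    idx = rng.choice(p.size, p=p / total)    # flat index into the action map
    return np.unravel_index(idx, probs.shape)
```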
83
+ {
84
+ "section_id": "4",
85
+ "parent_section_id": null,
86
+ "section_name": "Experiment and Result",
87
+ "text": "Our experiments were performed with CoppeliaSim using Newton physical engine. The experiments include: (1) the validation of the physical heuristic algorithms; (2) the training and testing of the DRL framework.\nIn this paper, we proposed an efficient DRL framework for tackling the online 3D-BPP, integrating a reliable physics heuristic algorithm and object rearrangement technique to enhance the stability and utilization of object placements within bins. Our experiments demonstrated that the proposed method achieves higher utilization rates with fewer training epochs compared to the baseline. The integration of the physics heuristic ensures the stability of object placements, significantly reducing the occurrence of object falls during both training and testing phases. The object rearrangement technique further improves the utilization of bin space without additional time costs, making our framework highly efficient for on-the-fly applications.\nIn the future, we aim to make our physics heuristic algorithm more accurate by precisely predicting each stable placement position, and to improve the training efficiency of the DRL model. Additionally, we will incorporate the method proposed by Gao et al [gao2][gao1] and Li et al [Li] to grasp and pack irregular objects in the real world and attempt to propose a strategy for packing unknown and uneven objects in complex real-world environments."
88
+ },
89
+ {
90
+ "section_id": "4.1",
91
+ "parent_section_id": "4",
92
+ "section_name": "Physics Heuristics Validation",
93
+ "text": "We compare the physics heuristic with algorithms convexHull-1 and convexHull- on CoppeliaSim. The bin dimensions . Objects are randomly generated with dimensions . In this experiment, based on the stable action map computed by convexHull-1 or convexHull-, a random position considered to be stable for placement is selected at each time step. The stability of the bin objects are checked after each placement. The runtime of the two algorithms and the number of un-stable placement are reported in Table 4.1 ###reference_###. Based on Table 4.1 ###reference_###, We find that convexHull- significantly surpasses convexHull-1 w.r.t. the accuracy of stability check. There was only one instance where convexHull- incorrectly assessed the stability. We suspect this is due to the stable issue of the physical engine. In addition, convexHull-1 and convexHull- have similar runtime which indicate the efficiency of convexHull-.\nto \u2014X[c]\u2014X[c]\u2014X[c]\u2014 convexHull-1 convexHull-\nObject number 3000 3000 \nFall number 153 1 \nTime cost(s) 4203.3 4452.6 \nPer cost(s) 1.40 - 1.00 1.48 - 1.00 \nFall rate 5.1% 0.03%\nRS [14] bin packing dataset is leveraged to train and test the proposed DRL framework. To evaluate the effectiveness of our proposed method, the result reported in Zhao et al. [14] is our baseline as we share the same setting. Consistent with previous studies, we employed space utilization (Uti.) as the metric to evaluate the bin packing policy, where a higher value indicates better performance. We test on the dataset RS, CUT-1, and CUT-2 [14], which is summarized in Table 4.2 ###reference_###.\nto \u2014X[c]\u2014X[c]\u2014X[c]\u2014X[c]\u2014X[c]\u2014 RS CUT-1 CUT-2 epoch\nOurs 61.2% 63.3% 62.5% 18.6k\n[14] 50.5% 60.8% 60.9% 100k\n###figure_6### The results show that our method achieves higher Uti. with fewer training epochs. Additionally, we analyzed the RS test results, comparing each test\u2019s Uti with the standard deviation of the object volumes in the object sequence. Specifically, larger standard deviation indicates greater volume difference among the objects. As shown in Fig. 6 ###reference_###, we found that the our model trained on the RS dataset is not affected by the differences in object volume within the sequence."
94
+ },
95
+ {
96
+ "section_id": "4.2",
97
+ "parent_section_id": "4",
98
+ "section_name": "DRL framework result",
99
+ "text": "RS [14] bin packing dataset is leveraged to train and test the proposed DRL framework. To evaluate the effectiveness of our proposed method, the result reported in Zhao et al. [14] is our baseline as we share the same setting. Consistent with previous studies, we employed space utilization (Uti.) as the metric to evaluate the bin packing policy, where a higher value indicates better performance. We test on the dataset RS, CUT-1, and CUT-2 [14], which is summarized in Table 4.2 ###reference_### ###reference_###.\nto \u2014X[c]\u2014X[c]\u2014X[c]\u2014X[c]\u2014X[c]\u2014 RS CUT-1 CUT-2 epoch\nOurs 61.2% 63.3% 62.5% 18.6k\n[14] 50.5% 60.8% 60.9% 100k\n###figure_7### The results show that our method achieves higher Uti. with fewer training epochs. Additionally, we analyzed the RS test results, comparing each test\u2019s Uti with the standard deviation of the object volumes in the object sequence. Specifically, larger standard deviation indicates greater volume difference among the objects. As shown in Fig. 6 ###reference_### ###reference_###, we found that the our model trained on the RS dataset is not affected by the differences in object volume within the sequence."
100
+ },
101
+ {
102
+ "section_id": "5",
103
+ "parent_section_id": null,
104
+ "section_name": "Conclusion",
105
+ "text": "In this paper, we proposed an efficient DRL framework for tackling the online 3D-BPP, integrating a reliable physics heuristic algorithm and object rearrangement technique to enhance the stability and utilization of object placements within bins. Our experiments demonstrated that the proposed method achieves higher utilization rates with fewer training epochs compared to the baseline. The integration of the physics heuristic ensures the stability of object placements, significantly reducing the occurrence of object falls during both training and testing phases. The object rearrangement technique further improves the utilization of bin space without additional time costs, making our framework highly efficient for on-the-fly applications.\nIn the future, we aim to make our physics heuristic algorithm more accurate by precisely predicting each stable placement position, and to improve the training efficiency of the DRL model. Additionally, we will incorporate the method proposed by Gao et al [gao2][gao1] and Li et al [Li] to grasp and pack irregular objects in the real world and attempt to propose a strategy for packing unknown and uneven objects in complex real-world environments."
106
+ }
107
+ ],
108
+ "appendix": [],
109
+ "tables": {
110
+ "1": {
111
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.SS1.1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Comparison of physics heuristics.</figcaption><div class=\"ltx_flex_figure\">\n<div class=\"ltx_flex_cell ltx_flex_size_1\"><span class=\"ltx_ERROR ltx_figure_panel undefined\" id=\"S4.SS1.1.2\">{tabu}</span></div>\n<div class=\"ltx_flex_break\"></div>\n<div class=\"ltx_flex_cell ltx_flex_size_1\">\n<p class=\"ltx_p ltx_figure_panel\" id=\"S4.SS1.1.1\">to \u2014X[c]\u2014X[c]\u2014X[c]\u2014 convexHull-1 convexHull-\n<br class=\"ltx_break\"/>Object number 3000 3000 \n<br class=\"ltx_break\"/>Fall number 153 1 \n<br class=\"ltx_break\"/>Time cost(s) 4203.3 4452.6 \n<br class=\"ltx_break\"/>Per cost(s) 1.40 - 1.00 1.48 - 1.00 \n<br class=\"ltx_break\"/>Fall rate 5.1% 0.03% \n<br class=\"ltx_break\"/></p>\n</div>\n<div class=\"ltx_flex_break\"></div>\n<div class=\"ltx_flex_cell ltx_flex_size_1\">\n<section class=\"ltx_subsection ltx_figure_panel\" id=\"S4.SS2\">\n<h3 class=\"ltx_title ltx_title_subsection\">\n<span class=\"ltx_tag ltx_tag_subsection\">4.2 </span>DRL framework result</h3>\n<div class=\"ltx_para\" id=\"S4.SS2.p1\">\n<p class=\"ltx_p\" id=\"S4.SS2.p1.1\">RS\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<span class=\"ltx_ref ltx_missing_citation ltx_ref_self\">14</span>]</cite> bin packing dataset is leveraged to train and test the proposed DRL framework. To evaluate the effectiveness of our proposed method, the result reported in Zhao <span class=\"ltx_text ltx_font_italic\" id=\"S4.SS2.p1.1.1\">et al.</span>\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<span class=\"ltx_ref ltx_missing_citation ltx_ref_self\">14</span>]</cite> is our baseline as we share the same setting. Consistent with previous studies, we employed space utilization (Uti.) as the metric to evaluate the bin packing policy, where a higher value indicates better performance. 
We test on the dataset RS, CUT-1, and CUT-2\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<span class=\"ltx_ref ltx_missing_citation ltx_ref_self\">14</span>]</cite>, which is summarized in Table\u00a0<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09694v1#S4.SS2\" title=\"4.2 DRL framework result \u2023 4.1 Physics Heuristics Validation \u2023 4 Experiment and Result \u2023 An Efficient Deep Reinforcement Learning Model for Online 3D Bin Packing Combining Object Rearrangement and Stable Placement\">4.2 ###reference_### ###reference_###</a>.</p>\n</div>\n<figure class=\"ltx_table\" id=\"S4.SS2.2\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Comparison of packing performance.</figcaption>\n<div class=\"ltx_sectional-block\" id=\"S4.SS2.2.2\">\n<div class=\"ltx_para\" id=\"S4.SS2.1.1.p1\">\n<span class=\"ltx_ERROR ltx_centering undefined\" id=\"S4.SS2.1.1.p1.1\">{tabu}</span>\n<p class=\"ltx_p ltx_align_center\" id=\"S4.SS2.1.1.p1.2\">to \u2014X[c]\u2014X[c]\u2014X[c]\u2014X[c]\u2014X[c]\u2014 RS CUT-1 CUT-2 epoch</p>\n<p class=\"ltx_p ltx_align_center\" id=\"S4.SS2.1.1.p1.3\">Ours 61.2% 63.3% 62.5% 18.6k</p>\n<p class=\"ltx_p ltx_align_center\" id=\"S4.SS2.1.1.p1.4\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<span class=\"ltx_ref ltx_missing_citation ltx_ref_self\">14</span>]</cite> 50.5% 60.8% 60.9% 100k</p>\n</div>\n<figure class=\"ltx_figure ltx_align_center\" id=\"S4.F6\"><img alt=\"Refer to caption\" class=\"ltx_graphics ltx_centering ltx_img_landscape\" height=\"189\" id=\"S4.F6.1.g1\" src=\"extracted/5794973/Uti_std.png\" width=\"236\"/>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_figure\">Figure 6: </span>Space utilization of our model independent of the standard deviation in object volume.</figcaption>\n</figure>\n<div class=\"ltx_para\" id=\"S4.SS2.2.2.p2\">\n<p class=\"ltx_p ltx_align_center\" id=\"S4.SS2.2.2.p2.1\">The results show that our method achieves higher Uti. with fewer training epochs. Additionally, we analyzed the RS test results, comparing each test\u2019s Uti with the standard deviation of the object volumes in the object sequence. Specifically, larger standard deviation indicates greater volume difference among the objects. As shown in Fig.\u00a0<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09694v1#S4.F6\" title=\"Figure 6 \u2023 4.2 DRL framework result \u2023 4.1 Physics Heuristics Validation \u2023 4 Experiment and Result \u2023 An Efficient Deep Reinforcement Learning Model for Online 3D Bin Packing Combining Object Rearrangement and Stable Placement\">6 ###reference_### ###reference_###</a>, we found that the our model trained on the RS dataset is not affected by the differences in object volume within the sequence.</p>\n</div>\n<section class=\"ltx_section ltx_centering\" id=\"S5\">\n<h2 class=\"ltx_title ltx_title_section\">\n<span class=\"ltx_tag ltx_tag_section\">5 </span>Conclusion</h2>\n<div class=\"ltx_para\" id=\"S5.p1\">\n<p class=\"ltx_p\" id=\"S5.p1.1\">In this paper, we proposed an efficient DRL framework for tackling the online 3D-BPP, integrating a reliable physics heuristic algorithm and object rearrangement technique to enhance the stability and utilization of object placements within bins. Our experiments demonstrated that the proposed method achieves higher utilization rates with fewer training epochs compared to the baseline. 
The integration of the physics heuristic ensures the stability of object placements, significantly reducing the occurrence of object falls during both training and testing phases. The object rearrangement technique further improves the utilization of bin space without additional time costs, making our framework highly efficient for on-the-fly applications.</p>\n</div>\n<div class=\"ltx_para\" id=\"S5.p2\">\n<p class=\"ltx_p\" id=\"S5.p2.1\">In the future, we aim to make our physics heuristic algorithm more accurate by precisely predicting each stable placement position, and to improve the training efficiency of the DRL model. Additionally, we will incorporate the method proposed by Gao <span class=\"ltx_text ltx_font_italic\" id=\"S5.p2.1.1\">et al</span>\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<span class=\"ltx_ref ltx_missing_citation ltx_ref_self\">gao2</span>]</cite><cite class=\"ltx_cite ltx_citemacro_cite\">[<span class=\"ltx_ref ltx_missing_citation ltx_ref_self\">gao1</span>]</cite> and Li <span class=\"ltx_text ltx_font_italic\" id=\"S5.p2.1.2\">et al</span>\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<span class=\"ltx_ref ltx_missing_citation ltx_ref_self\">Li</span>]</cite> to grasp and pack irregular objects in the real world and attempt to propose a strategy for packing unknown and uneven objects in complex real-world environments.</p>\n</div>\n<div class=\"ltx_para\" id=\"S5.p3\">\n<span class=\"ltx_ERROR undefined\" id=\"S5.p3.1\">\\printbibliography</span>\n</div>\n</section>\n</div>\n</figure>\n</section>\n</div>\n</div>\n</figure>",
112
+ "capture": "Table 1: Comparison of physics heuristics."
113
+ },
114
+ "2": {
115
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.SS2.2\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Comparison of packing performance.</figcaption>\n<div class=\"ltx_sectional-block\" id=\"S4.SS2.2.2\">\n<div class=\"ltx_para\" id=\"S4.SS2.1.1.p1\">\n<span class=\"ltx_ERROR ltx_centering undefined\" id=\"S4.SS2.1.1.p1.1\">{tabu}</span>\n<p class=\"ltx_p ltx_align_center\" id=\"S4.SS2.1.1.p1.2\">to \u2014X[c]\u2014X[c]\u2014X[c]\u2014X[c]\u2014X[c]\u2014 RS CUT-1 CUT-2 epoch</p>\n<p class=\"ltx_p ltx_align_center\" id=\"S4.SS2.1.1.p1.3\">Ours 61.2% 63.3% 62.5% 18.6k</p>\n<p class=\"ltx_p ltx_align_center\" id=\"S4.SS2.1.1.p1.4\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<span class=\"ltx_ref ltx_missing_citation ltx_ref_self\">14</span>]</cite> 50.5% 60.8% 60.9% 100k</p>\n</div>\n<figure class=\"ltx_figure ltx_align_center\" id=\"S4.F6\"><img alt=\"Refer to caption\" class=\"ltx_graphics ltx_centering ltx_img_landscape\" height=\"189\" id=\"S4.F6.1.g1\" src=\"extracted/5794973/Uti_std.png\" width=\"236\"/>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_figure\">Figure 6: </span>Space utilization of our model independent of the standard deviation in object volume.</figcaption>\n</figure>\n<div class=\"ltx_para\" id=\"S4.SS2.2.2.p2\">\n<p class=\"ltx_p ltx_align_center\" id=\"S4.SS2.2.2.p2.1\">The results show that our method achieves higher Uti. with fewer training epochs. Additionally, we analyzed the RS test results, comparing each test\u2019s Uti with the standard deviation of the object volumes in the object sequence. Specifically, larger standard deviation indicates greater volume difference among the objects. As shown in Fig.\u00a0<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09694v1#S4.F6\" title=\"Figure 6 \u2023 4.2 DRL framework result \u2023 4.1 Physics Heuristics Validation \u2023 4 Experiment and Result \u2023 An Efficient Deep Reinforcement Learning Model for Online 3D Bin Packing Combining Object Rearrangement and Stable Placement\">6 ###reference_### ###reference_###</a>, we found that the our model trained on the RS dataset is not affected by the differences in object volume within the sequence.</p>\n</div>\n<section class=\"ltx_section ltx_centering\" id=\"S5\">\n<h2 class=\"ltx_title ltx_title_section\">\n<span class=\"ltx_tag ltx_tag_section\">5 </span>Conclusion</h2>\n<div class=\"ltx_para\" id=\"S5.p1\">\n<p class=\"ltx_p\" id=\"S5.p1.1\">In this paper, we proposed an efficient DRL framework for tackling the online 3D-BPP, integrating a reliable physics heuristic algorithm and object rearrangement technique to enhance the stability and utilization of object placements within bins. Our experiments demonstrated that the proposed method achieves higher utilization rates with fewer training epochs compared to the baseline. The integration of the physics heuristic ensures the stability of object placements, significantly reducing the occurrence of object falls during both training and testing phases. The object rearrangement technique further improves the utilization of bin space without additional time costs, making our framework highly efficient for on-the-fly applications.</p>\n</div>\n<div class=\"ltx_para\" id=\"S5.p2\">\n<p class=\"ltx_p\" id=\"S5.p2.1\">In the future, we aim to make our physics heuristic algorithm more accurate by precisely predicting each stable placement position, and to improve the training efficiency of the DRL model. 
Additionally, we will incorporate the method proposed by Gao <span class=\"ltx_text ltx_font_italic\" id=\"S5.p2.1.1\">et al</span>\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<span class=\"ltx_ref ltx_missing_citation ltx_ref_self\">gao2</span>]</cite><cite class=\"ltx_cite ltx_citemacro_cite\">[<span class=\"ltx_ref ltx_missing_citation ltx_ref_self\">gao1</span>]</cite> and Li <span class=\"ltx_text ltx_font_italic\" id=\"S5.p2.1.2\">et al</span>\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<span class=\"ltx_ref ltx_missing_citation ltx_ref_self\">Li</span>]</cite> to grasp and pack irregular objects in the real world and attempt to propose a strategy for packing unknown and uneven objects in complex real-world environments.</p>\n</div>\n<div class=\"ltx_para\" id=\"S5.p3\">\n<span class=\"ltx_ERROR undefined\" id=\"S5.p3.1\">\\printbibliography</span>\n</div>\n</section>\n</div>\n</figure>",
116
+ "capture": "Table 2: Comparison of packing performance."
117
+ }
118
+ },
119
+ "image_paths": {
120
+ "1": {
121
+ "figure_path": "2408.09694v1_figure_1.png",
122
+ "caption": "Figure 1: Online 3D-BPP, where the agent can only observe an upcoming object and pack it on-the-fly.",
123
+ "url": "http://arxiv.org/html/2408.09694v1/extracted/5794973/robotsscene.png"
124
+ },
125
+ "2": {
126
+ "figure_path": "2408.09694v1_figure_2.png",
127
+ "caption": "Figure 2: The main idea of convexHull-1. The left image depicts a sliding window that matches the size of the incoming object, along with portions of the scene objects contained within the sliding window. The right figure shows the zoom-in version of the content inside the sliding window. To determine the stability of the object, we calculate the largest convex hull of the highest points within the window. Next, we verify whether the center of the window lies within the convex hull. The object is deemed stable when positioned at the center of the sliding window if the convex hull includes the window\u2019s center.",
128
+ "url": "http://arxiv.org/html/2408.09694v1/extracted/5794973/windowSliding.png"
129
+ },
130
+ "3": {
131
+ "figure_path": "2408.09694v1_figure_3.png",
132
+ "caption": "Figure 3: Multi-layer packing scenarios showcasing the difference between convexHull-1 and convexHull-\u03b1\ud835\udefc\\alphaitalic_\u03b1 algorithms for checking the stability of the placement. (1) Both convexHull-1 and convexHull-\u03b1\ud835\udefc\\alphaitalic_\u03b1 consider the arrangement to be stable. (2) Conversely, convexHull-1 might incorrectly assess the stability if the incoming object is significantly heavier than the object in the middle layer, as detailed in (3).",
133
+ "url": "http://arxiv.org/html/2408.09694v1/extracted/5794973/stableChecekExample.png"
134
+ },
135
+ "4": {
136
+ "figure_path": "2408.09694v1_figure_4.png",
137
+ "caption": "Figure 4: The pipeline of the DRL framework combined with object rearrangement and physics heuristics.",
138
+ "url": "http://arxiv.org/html/2408.09694v1/extracted/5794973/model.png"
139
+ },
140
+ "5": {
141
+ "figure_path": "2408.09694v1_figure_5.png",
142
+ "caption": "Figure 5: Six possible orientations of the packing object.",
143
+ "url": "http://arxiv.org/html/2408.09694v1/extracted/5794973/orientation.png"
144
+ }
145
+ },
146
+ "validation": true,
147
+ "references": [],
148
+ "url": "http://arxiv.org/html/2408.09694v1"
149
+ }
20240819/2408.09695v1.json ADDED
The diff for this file is too large to render. See raw diff
 
20240819/2408.09699v1.json ADDED
@@ -0,0 +1,560 @@
1
+ {
2
+ "title": "Double-Precision Floating-Point Data Visualizations Using Vulkan API",
3
+ "abstract": "Proper representation of data in graphical visualizations becomes challenging when high accuracy in data types is required, especially in those situations where the difference between double-precision floating-point and single-precision floating-point values makes a significant difference. Some of the limitations of using single-precision over double-precision include lesser accuracy, which accumulates errors over time, and poor modeling of large or small numbers. In such scenarios, emulated double precision is often used as a solution. The proposed methodology uses a modern GPU pipeline and graphics library API specifications to use native double precision. In this research, the approach is implemented using the Vulkan API, C++, and GLSL. Experimental evaluation with a series of experiments on 2D and 3D point datasets is proposed to indicate the effectiveness of the approach. This evaluates performance comparisons between native double-precision implementations against their emulated double-precision approaches with respect to rendering performance and accuracy. This study provides insight into the benefits of using native double-precision in graphical applications, denoting limitations and problems with emulated double-precision usages. These results improve the general understanding of the precision involved in graphical visualizations and assist developers in making decisions about which precision methods to use during their applications.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Nowadays, computer graphics tools play a significant role in high-precision data modeling and display for several themes within science and engineering. They make an impact on decision-making and consequent results in very diverse domains, from scientific research to gaming. High-precision molecular dynamic simulation [1 ###reference_b1###], an attempt at film-game integration [2 ###reference_b2###], next-generation AI-based games [3 ###reference_b3###], the open-source dynamic simulation framework YADE [4 ###reference_b4###], three-dimensional reconstruction using [5 ###reference_b5###], and ray tracing of CAD-based simulation [6 ###reference_b6###] are some of the studies which underpin the need.\nIn graphical models, precision and accuracy are vital and are mostly used in scientific computing, medical imaging, and engineering simulations. These models guarantee to convey complex and comprehensive information precisely and accurately. The ancillary problem here is obtaining correct data types, mainly distinguishing double-precision from single-precision floating-point types, which turn out to be very influential. This kind of technology is, however, vision-limited to some level of accuracy, hence less valuable if even higher levels of precision are called for. It is, hence, important to pick an alternative that offers precision and, importantly, the level of accuracy required by the application or the project.\nProcessing and visualizing double-precision data present several challenges, such as computational intensity [7 ###reference_b7###], hardware and software support [8 ###reference_b8###] [9 ###reference_b9###], and rounding errors [10 ###reference_b10###] [11 ###reference_b11###], mainly due to the complexity and precision requirements of this type of data.\nThe choice of API in high-accuracy applications is based on each API\u2019s appropriateness. With changing application requirements, the suitability of each API changes: OpenGL [12 ###reference_b12###], Vulkan [13 ###reference_b13###], DirectX [14 ###reference_b14###], OpenCL [15 ###reference_b15###], and CUDA [16 ###reference_b16###] . Vulkan is a much newer API compared to OpenGL and gives a developer much more acute control over GPU processes, significantly improving rendering performance and speed. Direct3D is the most frequently used graphics API for game and multimedia applications that run on Windows systems. OpenCL is a portable programming language that can be run on various devices, hence the reason why OpenCL applications are highly portable across a wide variation of hardware like a GPU and CPU.\nIn the case of high-precision visual representation using Vulkan API and GLSL [17 ###reference_b17###], complicated technical details shall be addressed in the analysis of double-precision and single-precision floating-point data. One major reason could be the accuracy differences that, in turn, compromise both performance and memory consumption. In old generations of GPU hardware, a representation of double-precision data often emulated single-precision floating-point numbers. Modern GPUs have, however, come with support for double-precision data in their design. This research is based on recent versions of GLSL supporting both single and double precision. The evaluations taken into consideration for this research are to frame the differences between single-precision and double-precision data representations. 
In this paper, the approaches are compared with respect to visualization performance, where a rendering operation differs in both. This comparison will provide important information about which graphics API is more suitable for applications requiring high precision and will provide guidance for current applications in the field of graphics programming. Additionally, current challenges in processing and visualizing double-precision floating-point data and methods to overcome these challenges will be discussed.\nIn view of the requirement of accuracy in graphics visualizations, primarily due to real-world applications in science and engineering, this research is done to find out how double-precision floating-point data visualizations realize performance gains using the Vulkan API. The specific research questions that guided this study were as follows:\nRQ 1. How does the performance of native double-precision floating-point implementations compare to emulated double-precision implementations in Vulkan for rendering 2D and 3D points datasets?\nRQ 2. How does the scalability of double-precision floating-point data visualization in Vulkan API hold up with increasing dataset sizes?"
10
+ },
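As a small, self-contained illustration of the single-precision error accumulation motivating RQ 1 (our demonstration in Python/NumPy, not an experiment from the paper):

```python
import numpy as np

# Summing 0.1 one million times: the float32 accumulator drifts visibly
# from the exact value 100000, while float64 stays within ~1e-6 of it.
acc32, acc64 = np.float32(0.0), np.float64(0.0)
for _ in range(1_000_000):          # slow but explicit scalar loop
    acc32 = np.float32(acc32 + np.float32(0.1))
    acc64 += np.float64(0.1)
print(acc32)   # noticeably above 100000 (roughly a 1% drift)
print(acc64)   # ~100000.000001
```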
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related Work",
15
+ "text": "This section reviews previous studies on the use of double- and single-precision floating-point values in graphics processing units (GPUs) and discusses the findings, similarities, and differences of these studies in the context of current research. Various floating-point approaches have been developed using graphics APIs and game engines. However, this section examines closely related approaches like emulated double-precision, hardware and software-based double-precision, and extended precision.\nDa Gra\u00e7a & Defour [18 ###reference_b18###] demonstrated a 44-bit solution emulating floating-point formats and corresponding operations, which increased the precision for applications that need more than the single precision, complemented with detailed performance and accuracy results. This implementation enabled straightforward and efficient operations for adding, multiplying, and storing floating-point numbers. It is shown that the research in compensated algorithms with float-float representation runs more efficiently for comparable accuracy, and adapting these algorithms to the GPU constitutes a significant part of future research.\nThall [19 ###reference_b19###] introduced \u2019doublefloats\u2019 for extended precision representation of floating point numbers for GPU computation. This representation, while improving accuracy without performance degradation on a GPU, comes at the cost of resources by exploiting the inherent parallelism in it. Doublefloats are constructed from unevaluated sums of 32-bit floats and deliver a precision of 48 significant bits. This approach\u2014crucial for very high precision applications\u2014necessarily restricted the use of GPU hardware. This shows, with the help of the Mandelbrot Set Explorer, both the utility of doublefloats and some potential applications in simulation, scientific computing, and image analysis.\nIn the OpenSpace [20 ###reference_b20###] study, the limitations of floating-point numbers in computers and the various methods developed to ensure accurate representation of large-scale astronomical data are discussed. With the method it offers, OpenSpace manages to solve precision problems by using Dynamic Scene Graph and rendering objects at the correct distances relative to the camera in cases where single-precision floating-point numbers are insufficient. This method enables accurate and efficient visualization of large-scale astronomical data and minimizes floating-point precision problems.\nDally et al. [21 ###reference_b21###] review the progress of GPUs from special-purpose hardware for 3D graphics to powerful programmable processors applied in HPC and deep learning. It reflects all significant steps of this development, including the creation of CUDA, the introduction of double-precision floating-point arithmetic, and other innovations like Tensor Cores that have increased the performance and flexibility of contemporary GPUs manyfold. These results depict that in the future, a GPU will keep evolving to provide high performance and support many applications.\nKaufmann et al. [22 ###reference_b22###] addresses the challenges and limitations inherent in real-time physics simulations in large-scale environments, primarily due to the imprecision of single-precision floating-point calculations. 
The authors solve a limitation where traditional physics engines still rely on single-precision floating-point numbers by proposing a system in which the subdivision of the simulation world into independent sectors takes place, and these sectors are allocated dynamically. It drastically reduces the occurrence of precision errors through cloning at sector boundaries, ensuring very consistent and accurate interaction across these sectors. It has hugely improved the precision and efficiency of real-time physics simulation in large-scale virtual environments by dividing the world into independently simulated, dynamically allocated sectors and using a cloning system to maintain accurate interactions at sector boundaries.\nThe progress report [23 ###reference_b23###] on the Godot Engine reviews challenges to render big worlds in games with single-precision floating-point numbers that lead to precision errors and jerky motions. The report explores solutions such as using double-precision in calculations but handling its impracticability again due to the limits of GPU. The final solution considered is emulating double precision by using two single precision floats for specific matrix calculations, where they preserve the accuracy and precision without much penalty in performance."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Background and Motivations",
21
+ "text": "In this section, a number of the fundamental technologies and methodologies of computer graphics will be introduced, including the pipelines of modern Graphics Processing Units (GPUs), the functionality of key graphics APIs, for instance, OpenGL and Vulkan, the use of the Graphics Shader Language (GLSL), the key differences between single and double precision operations, and finally, common issues with single precision floating point computations."
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "GPU Pipeline",
27
+ "text": "The Graphics Processing Unit pipeline has become a critical component in modern computing, having extensive applications in graphics rendering and general-purpose computing. This paper explores several aspects of GPU pipeline advancements and applications as described in recent literature.\nAll pipeline stages concerning rasterization, pixel processing, and abstract geometry processing are implemented on the GPU. Internally, these are divided into a range of hardware stages with differing levels of programmability or configurability. The API provides an access method for the programmer to the logical model of a GPU; the actual implementation of this conceptual process in hardware is left to the manufacturer. The pipeline of the GPU processes graphical input through many phases, from the original vertices to the final pixel rendering, with differing degrees of programmability. The fully programmable vertex shader stage is responsible for perspective projection, lighting, model space to view space transformations and vertex shading. Another programmable stage is Geometry Shader, which deals with entire primitives to generate or to modify them and construct complex effects, including particle systems and shadow volumes.\n###figure_1### Stream output reuses processed data and is thus useful for effects like hair rendering. Fixed-function stages are triangle preparation and traversal, where triangles are prepared and rasterized into pieces; screen mapping, which translates the vertices from clip space into screen space; and clipping, which cuts triangles beyond the viewing frustum. There may be removal of occluded pieces based on the early z-test step, which is varied across GPUs for increased efficiency. The programmable pixel shader processes every fragment to provide texture and color effects. The final step, raster ops or blending, is where colors are combined, and other pixel tests, like depth and alpha testing, are managed. This pipeline consists of both fixed and programmable steps; both are required to efficiently render complex images. [24 ###reference_b24###] [25 ###reference_b25###]."
28
+ },
29
+ {
30
+ "section_id": "3.2",
31
+ "parent_section_id": "3",
32
+ "section_name": "Vulkan API",
33
+ "text": "Vulkan is the next-generation, efficient, and cross-platform graphics and compute API for enabling access to modern GPUs in today\u2019s devices\u2014PCs, consoles, mobile phones, and even embedded platforms. The Vulkan API has been designed to give much more direct control over the GPU, thus allowing finer-grained optimizations and efficient usage of the GPU. Vulkan significantly reduces driver overhead compared to older graphics APIs. Such overhead may yield great performance, particularly in CPU-bound applications. It also designs the API to be more predictable and with fewer errors, clear performance benefits from keeping the GPU busier, producing fewer bottlenecks than those caused by the CPU [26 ###reference_b26###]. Vulkan is characterized by its verbosity and fragility, but it provides enhanced control, a streamlined threading architecture, and superior performance. It provides functionality for transport, computation, and graphics and may be chosen as an option [27 ###reference_b27###]. The Vulkan Specification mandates a host environment with runtime support for 8-16, 32, and 64-bit signed and unsigned twos-complement integers, 32- and 64-bit floating-point types satisfying range and precision constraints, and ensuring their representation and endianness match those on every supported physical device."
34
+ },
35
+ {
36
+ "section_id": "3.3",
37
+ "parent_section_id": "3",
38
+ "section_name": "GLSL",
39
+ "text": "GLSL, often known as the OpenGL Shading Language, has a crucial function in contemporary computer graphics by enabling programmatic control over the graphical processing pipeline. This programming language provides a whole set of tools with which developers can create very flexible shaders, improving the ability to develop complex, dynamic graphical effects vastly in any real-time application.The GLSL has become central to a modern graphics programmer due to its wide application in different domains, from game development and virtual reality to scientific visualization. Recent developments in this area include the integration of GLSL with all major graphics APIs and its application in parallel computing cases. Unlocking the doors of new frontiers in graphical rendering and visualization is possible with GLSL. Thereon, the further development of GLSL and the expansion of the practical spectrum of its implementation essentially volatilely characterized the area of computer graphics when new challenges and prospects succeed one another. [28 ###reference_b28###] [29 ###reference_b29###]"
40
+ },
41
+ {
42
+ "section_id": "3.4",
43
+ "parent_section_id": "3",
44
+ "section_name": "An Explanation of the Distinction Between Single and Double Precision",
45
+ "text": "IEEE 754 floating-point format represents all the standards; it includes 32-bit single precision, 64-bit double precision, and an extended precision format. Each format includes a sign bit, an exponent section, and the mantissa part (fraction) [30 ###reference_b30###].\nThe single-precision floating-point format defined in IEEE Std 754-2019 utilizes a 32-bit (4-byte) structure. This format consists of a 1-bit sign bit, telling whether a number is positive or negative, an 8-bit exponent defining the scale of the number adjusted by a predefined \"bias\" value, and a 23-bit fraction representing the significant or mantissa part of the number. The single-precision format provides an accuracy of about 7 decimal digits while it covers a very wide range of values. This would normally be used in cases where speed is very essential and very fine precision is not required, for example, in the processing of graphics or audio [31 ###reference_b31###] [32 ###reference_b32###].\n###figure_2### The format, otherwise known as double-precision floating point, is 64-bit (8-byte) in size. Much like the single precision, the format contains a 1-bit sign bit but reserves 11 bits for the exponent and 52 bits for the fraction. These give double precision a much greater range and much higher precision. This format provides an accuracy of about 15\u201316 decimal digits and is preferably used where high accuracy is required, like in scientific computations and precision engineering tasks.\nThe IEEE 754 standard standardizes the way of representation and processing of floating-point numbers within a computer system. This creates consistency and reliability for numerical computations. The standard provides for the accuracy of numerical operations across different systems and by different languages. This is very important in scenarios where different applications and cross-platform are required."
46
+ },
47
+ {
48
+ "section_id": "3.5",
49
+ "parent_section_id": "3",
50
+ "section_name": "Problems with Single-precision floating point",
51
+ "text": "The prevailing solution to obtain high precision in graphical visualizations is using double-precision floating-point values. Doubles give a maximum of about 15 to 16 decimal digits of precision and, at the same time, offer far greater range than single-precision, floating-point values. This rise in precision and range drastically reduces errors due to rounding, positioning, accumulation, overflow, underflow, and limitation [33 ###reference_b33###]. In order to understand more easily the problems caused by single-precision floating point, the Mandelbrot Set [34 ###reference_b34###] formula has been used and rendered. A Mandelbrot set is a mathematical set that repeats in a certain way in the complex plane.\n###table_1### ###figure_3### ###figure_4### ###figure_5###"
52
+ },
53
+ {
54
+ "section_id": "3.5.1",
55
+ "parent_section_id": "3.5",
56
+ "section_name": "3.5.1 Rounding Issue",
57
+ "text": "Inaccuracies can occur due to the rounding issues. The first rounding may be toward a midpoint which then gets rounded again, moving it further from the closest correct value [35 ###reference_b35###]. Consider the number 3.1415926 that is represented base-10. The higher precision will round to three decimal places while the lower precision will round to the nearest integer. The higher precision rounds 3.1415926 to 3.142. When this result is then rounded to a lower precision it becomes 3. When 3.1415926 is rounded directly to the nearest integer, omitting the intermediate step the answer is again 3 so in this case there is no inaccuracy. A slight modification of the situation can make double rounding significant. For instance, if the exact value was 3.6515926 then rounding first to the higher precision gives 3.652 and further rounding to the lower precision gives 4. Rounding directly from 3.6515926 to the nearest integer gives 4 also so differences need not appear in every case, yet are of vital significance in the vicinities of some number values [36 ###reference_b36###] [37 ###reference_b37###]."
58
+ },
59
+ {
60
+ "section_id": "3.5.2",
61
+ "parent_section_id": "3.5",
62
+ "section_name": "3.5.2 Limited Precision",
63
+ "text": "In general, limited precision refers to the extent of precision that can be attained in any computation or measurement. For computational and scientific purposes, it is extremely important to be valid in domains as diverse as numerical analysis and engineering since the accuracy and reliability of a result are determined by its precision. Single-precision floating-point values provide an approximate 7 decimal digits of precision [38 ###reference_b38###]. This causes insufficient precision for more complex visualizations, and it may introduce substantial rounding errors with very large or small-scale numbers and lead to loss of details."
64
+ },
65
+ {
66
+ "section_id": "3.5.3",
67
+ "parent_section_id": "3.5",
68
+ "section_name": "3.5.3 Range Limitations",
69
+ "text": "This refers to being restricted to some values within a range that the computer system or a model of computation is capable of representing. These may be the largest or smallest values a number takes, either positive or negative. Fundamentally, they are directly proportional to the size of the data type used and are bound by limitations in the handling of very large or very small numbers. One issue with floats is that they have a restricted range, which can result in overflows or underflows. This could be manifested in graphical contexts as visual anomalies or inaccuracies during rendering [30 ###reference_b30###] [39 ###reference_b39###]."
70
+ },
71
+ {
72
+ "section_id": "4",
73
+ "parent_section_id": null,
74
+ "section_name": "Comparative Analysis between Single-precision Floating Point and Double-precision Floating-point Implementations",
75
+ "text": "Most modern graphics and compute target applications rely on floating-point operation accuracy, which brings about dramatic performance and quality impacts. On the other side, single-precision versus double-precision floating-point implementations represent a compromise in both computational performance and accuracy for high-performance graphics rendering. This section describes a comparative study of both under the Vulkan API framework, pointing out each of their benefits and tradeoffs. In conjunction with the explanation, the diagram provided shows the Vulkan-based application \u2014 the full extent of which will initialize and manage GPU resources for rendering both 2D and 3D point datasets to cover in detail how the choice of precisions affects the resultant rendering and performance metrics.\n###figure_6### The first step is to create a Vulkan instance. A Vulkan instance is an instance that lets the application interact with the Vulkan API. Immediately following the creation of the Vulkan instance, a debug messenger is created for debugging during the development process. Following this, inter-functioning the windowing system with the surface creation is done. All core Vulkan API elements have been created at this stage, which would now allow an application to use the GPU.\n###figure_7### Another core stage in managing the rendering of graphics using Vulkan is logical device setup. This logical device provides an interface with the GPU from the application. It enables a number of commands to be executed. At this stage, Command Buffers are allocated for the storage of rendering commands. Synchronization objects, like Semaphores and Fences, are created to control the synchronicity in the execution of commands. A Vertex Buffer is also allocated for vertex data. These parts ensure that all rendering processes go through smooth and concurrently.\nThis can do high-performance graphics operations with the Vulkan API by creating a Graphics Pipeline. It describes the process of rendering\u2014a pipeline starting from processing vertices up to fragment shading. Shader Modules are loaded into the pipeline to handle certain parts of the rendering process. Rendering output is controlled by the arrangement of Render Passes and Framebuffers. Therefore with such structure, it is possible to execute efficiently complex graphics operations.\nIt involves all rendering commands, ance drawing 2D and 3D points, updating the Framebuffers, and, finally, memory allocation to manage one buffer\u2014the so-called Buffer Memory\u2014which contains vertex data and other related information regarding the rendering process. In this step, it will be finalized how an application is going to handle its rendering process to present the final output. Finally, memory is allocated to manage one buffer\u2014the so-called Buffer Memory\u2014which contains vertex data and other related information regarding the rendering process. Efficient memory management has been taken into consideration to ensure that.GUI resources are used effectively.\nControl and data flow between Host Machine and GPU Host machine components: Points Dataset, SPIR-V Generator, Application, Camera Control. A Points Dataset, prepared, typically, in CSV format, contains 2D and 3D points. A SPIR-V Generator generates a low-level representation from an input high-level source representation, such as high-level shaders, for execution on a GPU. The Application is in charge of the entire rendering process and communicates with the Vulkan API. 
Camera Control deals with how the visualization is to be viewed.\nThe device components include the GPU, Physical Device, Shader Cores, Compute Units, Rasterizer, and Framebuffer. Each of these is one of the key elements in a rendering pipeline, and together they allow high performance in graphics-oriented operations.\nOnce the shaders are written in GLSL, they need to be compiled into SPIR-V, which is the intermediate representation used by Vulkan. The compilation process ensures that the shaders are optimized and can be executed efficiently on the GPU. The compilation can be done using tools like \u2018glslangValidator\u2018. The compiled SPIR-V shaders are then integrated into the Vulkan pipeline [40 ###reference_b40###] [41 ###reference_b41###]. The steps include creating shader modules, setting up the pipeline, and binding the necessary resources. Below is an example of how the shader modules are created and integrated:\nThe created shader module is then used in the graphics pipeline to execute the vertex and fragment shaders. By providing a detailed explanation of the shader code, its compilation, and integration into the Vulkan pipeline, this section offers a comprehensive understanding of how native double-precision floating-point operations are utilized in Vulkan applications."
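As a sketch of the shader-module step described above (not the paper's exact source), the following C++ snippet assumes device is a valid VkDevice and code holds the SPIR-V words loaded from disk; function and variable names are illustrative:

```cpp
#include <vulkan/vulkan.h>
#include <cstdint>
#include <stdexcept>
#include <vector>

VkShaderModule createShaderModule(VkDevice device, const std::vector<uint32_t>& code) {
    VkShaderModuleCreateInfo createInfo{};
    createInfo.sType = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO;
    createInfo.codeSize = code.size() * sizeof(uint32_t); // size in bytes
    createInfo.pCode = code.data();                       // SPIR-V binary

    VkShaderModule shaderModule;
    if (vkCreateShaderModule(device, &createInfo, nullptr, &shaderModule) != VK_SUCCESS) {
        throw std::runtime_error("failed to create shader module");
    }
    return shaderModule; // later referenced from VkPipelineShaderStageCreateInfo
}
```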
76
+ },
77
+ {
78
+ "section_id": "4.1",
79
+ "parent_section_id": "4",
80
+ "section_name": "Emulated Double-precision Floating-point",
81
+ "text": "To compare the findings, case studies exemplifying emulated double-precision and native double-precision are presented. Development for emulated precision was based on David H. Bailey\u2019s approach in the DSFUN90 library [42 ###reference_b42###].\nThis algorithm aims to store a double precision floating point number by dividing it into two single precision floating point numbers. First, the variable value is converted to a single precision floating point number and assigned to the variable high. This step is to obtain a lower precision representation of value. The high value is then converted back to a double precision number and assigned to the highDouble variable. This conversion is necessary for comparison with the original value. Finally, the difference is calculated by subtracting highDouble from the value, and this difference is assigned to the low variable as a single precision number. Thus, the variables high and low are stored as a two-part representation of the original double-precision value. This method is especially useful when precision is essential and memory savings are required."
82
+ },
83
+ {
84
+ "section_id": "4.2",
85
+ "parent_section_id": "4",
86
+ "section_name": "Native Double-precision Floating-point",
87
+ "text": "Modern GPUs support double-precision floating-point natively, meaning that high-precision computation can be carried out without emulation. Vulkan API strongly supports native double-precision operations due to its shader language, GLSL (OpenGL Shading Language). In this work, double precision data types have been used, such as double and dvec2, performing vertex and fragment shaders in order to compute the exact values for the requested operations. Later, the shaders were compiled into SPIR-V, which represents the Vulkan Intermediate Representation.\nThis is a vertex shader written in GLSL, version 4.5.0, with the GL_ARB_gpu_shader_fp64 extension. The program uses a push constant block called PushConstants, including a member of type dmat4 for an MVP matrix. It also has an input of type dvec3 for position and color, and finally defines a variable named fragColor of data type flat dvec3 to create its output. In the main function, declare a dvec4 vector with a pos data and 1.0. Multiply the vector by the MVP matrix and assign it to gl_Position. Finally, assign the input color to the fragColor variable."
88
+ },
89
+ {
90
+ "section_id": "5",
91
+ "parent_section_id": null,
92
+ "section_name": "Experimental Results",
93
+ "text": "###figure_8### The proposed method has been implemented using Vulkan API, C++ 20, and GLSL, using only vertex shaders and fragment shaders compiled into SPIR-V code. To be used for testing, 2D point data consisting of 10,000, 100,000, 1,000,000, and 10,000,000 (x, y) coordinates were created randomly and uniquely in the range (-1.0 and 1.0) and then saved in .csv files. 3D point data consisting of 200,000 to 16,700,000 (x,y,z) coordinates were acquired 3D fractal models that created with the open sources libraries. The algorithms and open-source libraries used for dataset production are explained in Appendix A. The same datasets were used for both emulated double-precision experiments and native double-precision experiments. The experiments were conducted using NVIDIA RTX 3090 GPU, Intel(R) Core(TM) i7-6850K @3.60GHz CPU, and 44 GB DDR4 RAM hardware components. The framerate was recorded using the RenderDoc [43 ###reference_b43###] v1.33.\n###figure_9### ###figure_10### ###figure_11### Experiments demonstrate the advantages of using native double-precision arithmetic within Vulkan. Performance measurements indicate that rendering time for a large dataset improves significantly. The results for native double precision are summarized in the tables below.\nThe data in Table 2 indicates that, in general, the emulated double-precision floating point calculations are worse when compared to native calculations. Generally, the render times are longer and frame rates are lower. For example, the render time for a 3D Menger Sponge with 11.9 million vertices was up to 499.12 milliseconds. The highest frame rate observed was 729 fps for 3D Mandelbulb with 200K vertex.\nNative Double Precision Floating Point Calculations\nData in Table 3 clearly indicates that natively provided double precision excels in performance compared to emulated calculations. In most cases, render times are shorter, along with frame rates that become higher. For instance, 3D Menger Sponge containing 11.9 million vertices dropped the render time to as short as 471.29 milliseconds, beating emulated calculations. The highest frame rate among the datasets was achieved with 3D Mandelbulb 200K vertex at 854 fps.\nGenerally, the performance of natively conducted double-precision computations is better on a large dataset. Though there is droppings performance in emulated computations, the native ones are much more stable. This supports that the local computations are more appropriate when working on large data sets."
94
+ },
95
+ {
96
+ "section_id": "6",
97
+ "parent_section_id": null,
98
+ "section_name": "Limitations",
99
+ "text": "The research demonstrates significant improvements in graphic visualization under the Vulkan API using double-precision floating-point data. However, several limitations should be realized. The native double-precision implementation is highly dependent upon modern GPU availability and capability, thereby limiting such an approach to older/less powerful hardware. Although the proposed method has much better scalability with large data sets than emulated double-precision, challenges to efficiently processing extremely large datasets may exist that decrease performance gains with dataset size. Also, the experimental results were obtained on specific hardware and software configurations and may not generalize to other systems; additional benchmarking on a diversity of configurations is thus required. Still, however illuminating the controlled setting with 2D and 3D point datasets was for this method, further testing on real-world data is required to confirm its applicability and performance in practical scenarios since different kinds of data will raise unique issues and possibly differing performance characteristics."
100
+ },
101
+ {
102
+ "section_id": "7",
103
+ "parent_section_id": null,
104
+ "section_name": "Conclusion and Future Works",
105
+ "text": "This study was conducted to compare the performance and accuracy of Vulkan API, explicitly focusing on mutable and double-precision data types. The results show enormous implications for data type choices, more so concerning calculation speed and processing time. Double-precision solutions have been executed on the GPU at the moment using double-float and double-double techniques. Test applications consist of 2D points and 3D points; each of them contains double-precision vertex coordinates.\nNecessary information is given in the paper hand, and experiments on using double-precision directly have been performed in the case of supporting GPU hardware. The work straight followed a method of double precision ordered by the Khronos Group in OpenGL Shading Language specification 4.5 and Vulkan specification 1.3. This is a different approach compared to traditional emulated precision methods, with no extra processing required. Although this method requires both advanced hardware and software, this is an example of the studies in scientific visualization where precision and performance are desirable together. In this work, the fundamental layer of the visualization of 2D points by uniquely generating random x and y values for these points has been addressed, and 3D points by generating from 3D mesh models with double precision x,y, and z values.\nIn the future, several experiments can be performed\u2014the ones using actual data to make the application more applicable and accurate. These datasets can be used in tests that verify the visualizing methods. In the case of successful implementation of the most straightforward building block of graphical visualization, that is, point rendering, additional visualization components can be integrated to compile a full-featured visualization library. This will provide the ability to create more complex and informative visualizations, therefore increasing the application\u2019s functionality beyond simple point rendering."
106
+ }
107
+ ],
108
+ "appendix": [
109
+ {
110
+ "section_id": "Appendix 1",
111
+ "parent_section_id": null,
112
+ "section_name": "Appendix A Datasets",
113
+ "text": "To provide a more challenging and realistic evaluation, 3D point data were prepared using fractal algorithms, including Mandelbulb, Menger Sponge, Sierpinski Gasket and Julia fractals. The use of fractal models provides a complex and intricate dataset to test the rendering capabilities and performance of the proposed method in three-dimensional space. The diverse point counts ensure that the method\u2019s efficiency and effectiveness can be evaluated under different levels of complexity.\nAdvanced libraries and algorithms were employed in this research:\nNumPy (https://numpy.org/ ###reference_numpy.org/###) and Pandas (https://pandas.pydata.org/ ###reference_pandas.pydata.org/###): Used for data handling and manipulation.\nSkimage (https://scikit-image.org/ ###reference_scikit-image.org/###): Applied for surface extraction and mesh generation.\nPygltflib (https://pypi.org/project/pygltflib/ ###reference_###): Utilized to store outputs in GLTF2 format.\nNumba (https://numba.pydata.org/ ###reference_numba.pydata.org/###): Provided computation acceleration with jit and prange functions.\nOpen3D (https://www.open3d.org/ ###reference_www.open3d.org/###): Used for 3D data visualization.\nNoise (https://pypi.org/project/noise/ ###reference_pypi.org/project/noise/###) library\u2019s pnoise3 function: Generated noise data for fractal colors.\nAll this has been applied to the fine development of such important complex fractal structures as Mandelbulb, Menger Sponge, Julia, and Sierpinski Gasket, with double precision x, y, z local coordinates, and RGB color values. The meshes that are generated manifest high visual quality and generally accurate details for scientific analyses. All datasets and their source codes can be accessible on this repository: https://github.com/NeziheSozen/3d-fractal-generators ###reference_generators###\nThe Mandelbulb is a three-dimensional, mathematical object of fractal nature [44 ###reference_b44###] [45 ###reference_b45###] [46 ###reference_b46###]. This paper uses the Daniel White and Paul Nylander approach using spherical coordinates. The following algorithm was used to generate the 3D Mandelbulb datasets:\nThis algorithm creates points in a 3-dimensional space by calculating the Julia set with a given quaternion and parameters [47 ###reference_b47###] [48 ###reference_b48###]:\nThe Menger sponge is a three-dimensional fractal geometric shape defined by Karl Menger. A structure such as this one is developed by removing smaller cubes from the center and each face of an initial cube, therefore making a structure of almost no volume and infinite surface area through its infinite iterations. [49 ###reference_b49###] [50 ###reference_b50###]\nThe Sierpinski Gasket also referred to as the Sierpinski Triangle, is named after by the name of Wac\u0142aw Sierpi\u0144ski, who described this fractal [51 ###reference_b51###]. The creation of this fractal involves recursively cutting an equilateral triangle into three smaller equilateral triangles and leaving the central triangle of each such division empty. This is done as many times as possible. Thus, it goes on to yield an extremely elaborate and self-replicating pattern [52 ###reference_b52###]. The below algorithm shows the procedures to create The Sierpinski Tetrahedron:"
114
+ }
115
+ ],
116
+ "tables": {
117
+ "1": {
118
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S3.T1.3\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.1.1.1\"><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_img_landscape\" height=\"148\" id=\"S3.T1.1.1.1.g1\" src=\"extracted/5799186/images/1e-1_zoom.png\" width=\"198\"/></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.2.2.2\"><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_img_landscape\" height=\"149\" id=\"S3.T1.2.2.2.g1\" src=\"extracted/5799186/images/1e-4_zoom.png\" width=\"198\"/></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.3.3\"><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_img_landscape\" height=\"149\" id=\"S3.T1.3.3.3.g1\" src=\"extracted/5799186/images/1e-6_zoom.png\" width=\"198\"/></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.4.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T1.3.4.1.1\">Zoom Factor: 1e-1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T1.3.4.1.2\">Zoom Factor: 1e-4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T1.3.4.1.3\">Zoom Factor: 1e-6</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Visualization of the Mandelbrot Set at the Point Re=-0.7436450, Im=0.13182590 with Different Zoom Factors. The images display the fractal structure at zoom levels of 1e-1, 1e-4, and 1e-6, showcasing the intricate details at progressively finer scales. Precision concerns and pixelization problems can be seen when the zoom is 1e-6</figcaption>\n</figure>",
119
+ "capture": "Table 1: Visualization of the Mandelbrot Set at the Point Re=-0.7436450, Im=0.13182590 with Different Zoom Factors. The images display the fractal structure at zoom levels of 1e-1, 1e-4, and 1e-6, showcasing the intricate details at progressively finer scales. Precision concerns and pixelization problems can be seen when the zoom is 1e-6"
120
+ },
121
+ "2": {
122
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T2.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T2.1.1.1\">\n<th class=\"ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T2.1.1.1.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T2.1.1.1.1.1\">\n<span class=\"ltx_p\" id=\"S5.T2.1.1.1.1.1.1\" style=\"width:156.5pt;\">Dataset Information</span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.1.1.1.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T2.1.1.1.2.1\">\n<span class=\"ltx_p\" id=\"S5.T2.1.1.1.2.1.1\" style=\"width:56.9pt;\">Rendering Time (milliseconds)</span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.1.1.1.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T2.1.1.1.3.1\">\n<span class=\"ltx_p\" id=\"S5.T2.1.1.1.3.1.1\" style=\"width:56.9pt;\">Framerate (fps)</span>\n</span>\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T2.1.2.1\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T2.1.2.1.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T2.1.2.1.1.1\">\n<span class=\"ltx_p\" id=\"S5.T2.1.2.1.1.1.1\" style=\"width:156.5pt;\">200K vertices of 3D Mandelbulb</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T2.1.2.1.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T2.1.2.1.2.1\">\n<span class=\"ltx_p\" id=\"S5.T2.1.2.1.2.1.1\" style=\"width:56.9pt;\">9.71</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T2.1.2.1.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T2.1.2.1.3.1\">\n<span class=\"ltx_p\" id=\"S5.T2.1.2.1.3.1.1\" style=\"width:56.9pt;\">729</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.3.2\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T2.1.3.2.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T2.1.3.2.1.1\">\n<span class=\"ltx_p\" id=\"S5.T2.1.3.2.1.1.1\" style=\"width:156.5pt;\">200K vertices of 3D Menger Sponge</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T2.1.3.2.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T2.1.3.2.2.1\">\n<span class=\"ltx_p\" id=\"S5.T2.1.3.2.2.1.1\" style=\"width:56.9pt;\">12.9</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T2.1.3.2.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T2.1.3.2.3.1\">\n<span class=\"ltx_p\" id=\"S5.T2.1.3.2.3.1.1\" style=\"width:56.9pt;\">703</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.4.3\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T2.1.4.3.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T2.1.4.3.1.1\">\n<span class=\"ltx_p\" id=\"S5.T2.1.4.3.1.1.1\" style=\"width:156.5pt;\">1M vertices of 3D Sierpinski Gasket</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T2.1.4.3.2\">\n<span class=\"ltx_inline-block 
ltx_align_top\" id=\"S5.T2.1.4.3.2.1\">\n<span class=\"ltx_p\" id=\"S5.T2.1.4.3.2.1.1\" style=\"width:56.9pt;\">32.19</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T2.1.4.3.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T2.1.4.3.3.1\">\n<span class=\"ltx_p\" id=\"S5.T2.1.4.3.3.1.1\" style=\"width:56.9pt;\">745</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.5.4\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T2.1.5.4.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T2.1.5.4.1.1\">\n<span class=\"ltx_p\" id=\"S5.T2.1.5.4.1.1.1\" style=\"width:156.5pt;\">1.4M vertices of 3D Julia Set</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T2.1.5.4.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T2.1.5.4.2.1\">\n<span class=\"ltx_p\" id=\"S5.T2.1.5.4.2.1.1\" style=\"width:56.9pt;\">35.26</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T2.1.5.4.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T2.1.5.4.3.1\">\n<span class=\"ltx_p\" id=\"S5.T2.1.5.4.3.1.1\" style=\"width:56.9pt;\">717</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.6.5\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T2.1.6.5.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T2.1.6.5.1.1\">\n<span class=\"ltx_p\" id=\"S5.T2.1.6.5.1.1.1\" style=\"width:156.5pt;\">1.85M vertices of 3D Menger Sponge</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T2.1.6.5.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T2.1.6.5.2.1\">\n<span class=\"ltx_p\" id=\"S5.T2.1.6.5.2.1.1\" style=\"width:56.9pt;\">36.69</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T2.1.6.5.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T2.1.6.5.3.1\">\n<span class=\"ltx_p\" id=\"S5.T2.1.6.5.3.1.1\" style=\"width:56.9pt;\">612</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.7.6\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T2.1.7.6.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T2.1.7.6.1.1\">\n<span class=\"ltx_p\" id=\"S5.T2.1.7.6.1.1.1\" style=\"width:156.5pt;\">2M vertices of 3D Mandelbulb</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T2.1.7.6.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T2.1.7.6.2.1\">\n<span class=\"ltx_p\" id=\"S5.T2.1.7.6.2.1.1\" style=\"width:56.9pt;\">53.19</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T2.1.7.6.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T2.1.7.6.3.1\">\n<span class=\"ltx_p\" id=\"S5.T2.1.7.6.3.1.1\" style=\"width:56.9pt;\">687</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.8.7\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T2.1.8.7.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T2.1.8.7.1.1\">\n<span class=\"ltx_p\" id=\"S5.T2.1.8.7.1.1.1\" style=\"width:156.5pt;\">5M vertices of 3D Menger Sponge</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify 
ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T2.1.8.7.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T2.1.8.7.2.1\">\n<span class=\"ltx_p\" id=\"S5.T2.1.8.7.2.1.1\" style=\"width:56.9pt;\">239.73</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T2.1.8.7.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T2.1.8.7.3.1\">\n<span class=\"ltx_p\" id=\"S5.T2.1.8.7.3.1.1\" style=\"width:56.9pt;\">698</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.9.8\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T2.1.9.8.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T2.1.9.8.1.1\">\n<span class=\"ltx_p\" id=\"S5.T2.1.9.8.1.1.1\" style=\"width:156.5pt;\">11.9M vertices of 3D Menger Sponge</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T2.1.9.8.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T2.1.9.8.2.1\">\n<span class=\"ltx_p\" id=\"S5.T2.1.9.8.2.1.1\" style=\"width:56.9pt;\">499.12</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T2.1.9.8.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T2.1.9.8.3.1\">\n<span class=\"ltx_p\" id=\"S5.T2.1.9.8.3.1.1\" style=\"width:56.9pt;\">468</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.10.9\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T2.1.10.9.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T2.1.10.9.1.1\">\n<span class=\"ltx_p\" id=\"S5.T2.1.10.9.1.1.1\" style=\"width:156.5pt;\">16.7M vertices of 3D Sierpinski Gasket</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T2.1.10.9.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T2.1.10.9.2.1\">\n<span class=\"ltx_p\" id=\"S5.T2.1.10.9.2.1.1\" style=\"width:56.9pt;\">722.89</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T2.1.10.9.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T2.1.10.9.3.1\">\n<span class=\"ltx_p\" id=\"S5.T2.1.10.9.3.1.1\" style=\"width:56.9pt;\">115</span>\n</span>\n</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Performance Results for Emulated Double-precision Floating-point Implementations.</figcaption>\n</figure>",
123
+ "capture": "Table 2: Performance Results for Emulated Double-precision Floating-point Implementations."
124
+ },
125
+ "3": {
126
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T3\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T3.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T3.1.1.1\">\n<th class=\"ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T3.1.1.1.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.1.1.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.1.1.1.1.1\" style=\"width:156.5pt;\">Dataset Information</span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T3.1.1.1.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.1.1.2.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.1.1.2.1.1\" style=\"width:56.9pt;\">Rendering Time (milliseconds)</span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T3.1.1.1.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.1.1.3.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.1.1.3.1.1\" style=\"width:56.9pt;\">Framerate (fps)</span>\n</span>\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T3.1.2.1\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T3.1.2.1.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.2.1.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.2.1.1.1.1\" style=\"width:156.5pt;\">200K vertices of 3D Mandelbulb</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T3.1.2.1.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.2.1.2.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.2.1.2.1.1\" style=\"width:56.9pt;\">11.87</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T3.1.2.1.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.2.1.3.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.2.1.3.1.1\" style=\"width:56.9pt;\">842</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.3.2\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T3.1.3.2.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.3.2.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.3.2.1.1.1\" style=\"width:156.5pt;\">200K vertices of 3D Menger Sponge</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T3.1.3.2.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.3.2.2.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.3.2.2.1.1\" style=\"width:56.9pt;\">14.88</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T3.1.3.2.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.3.2.3.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.3.2.3.1.1\" style=\"width:56.9pt;\">804</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.4.3\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T3.1.4.3.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.4.3.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.4.3.1.1.1\" style=\"width:156.5pt;\">1M vertices of 3D Sierpinski Gasket</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T3.1.4.3.2\">\n<span class=\"ltx_inline-block 
ltx_align_top\" id=\"S5.T3.1.4.3.2.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.4.3.2.1.1\" style=\"width:56.9pt;\">35.26</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T3.1.4.3.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.4.3.3.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.4.3.3.1.1\" style=\"width:56.9pt;\">789</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.5.4\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T3.1.5.4.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.5.4.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.5.4.1.1.1\" style=\"width:156.5pt;\">1.4M vertices of 3D Julia Set</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T3.1.5.4.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.5.4.2.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.5.4.2.1.1\" style=\"width:56.9pt;\">39.26</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T3.1.5.4.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.5.4.3.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.5.4.3.1.1\" style=\"width:56.9pt;\">802</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.6.5\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T3.1.6.5.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.6.5.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.6.5.1.1.1\" style=\"width:156.5pt;\">1.85M vertices of 3D Menger Sponge</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T3.1.6.5.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.6.5.2.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.6.5.2.1.1\" style=\"width:56.9pt;\">31.12.16</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T3.1.6.5.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.6.5.3.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.6.5.3.1.1\" style=\"width:56.9pt;\">763</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.7.6\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T3.1.7.6.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.7.6.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.7.6.1.1.1\" style=\"width:156.5pt;\">2M vertices of 3D Mandelbulb</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T3.1.7.6.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.7.6.2.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.7.6.2.1.1\" style=\"width:56.9pt;\">42.68</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T3.1.7.6.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.7.6.3.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.7.6.3.1.1\" style=\"width:56.9pt;\">854</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.8.7\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T3.1.8.7.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.8.7.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.8.7.1.1.1\" style=\"width:156.5pt;\">5M vertices of 3D Menger Sponge</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify 
ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T3.1.8.7.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.8.7.2.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.8.7.2.1.1\" style=\"width:56.9pt;\">189.25</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T3.1.8.7.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.8.7.3.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.8.7.3.1.1\" style=\"width:56.9pt;\">802</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.9.8\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T3.1.9.8.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.9.8.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.9.8.1.1.1\" style=\"width:156.5pt;\">11.9M vertices of 3D Menger Sponge</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T3.1.9.8.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.9.8.2.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.9.8.2.1.1\" style=\"width:56.9pt;\">471.29</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T3.1.9.8.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.9.8.3.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.9.8.3.1.1\" style=\"width:56.9pt;\">593</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.10.9\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T3.1.10.9.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.10.9.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.10.9.1.1.1\" style=\"width:156.5pt;\">16.7M vertices of 3D Sierpinski Gasket</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T3.1.10.9.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.10.9.2.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.10.9.2.1.1\" style=\"width:56.9pt;\">695.76</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T3.1.10.9.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.10.9.3.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.10.9.3.1.1\" style=\"width:56.9pt;\">330</span>\n</span>\n</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Performance Results for Native Double-precision Floating-point Implementations.</figcaption>\n</figure>",
127
+ "capture": "Table 3: Performance Results for Native Double-precision Floating-point Implementations."
128
+ }
129
+ },
130
+ "image_paths": {
131
+ "1": {
132
+ "figure_path": "2408.09699v1_figure_1.png",
133
+ "caption": "Figure 1: Beginning with the input assembler, which takes the vertex data to assemble vertices into primitives, the pipeline is followed by a vertex shader for geometric transformations. Then, there is a tessellation control shader that performs the division of the surface, followed by the tessellation evaluation shader, refining the vertices. Subsequently, there is a geometry shader for generating or modifying geometry. Thereafter, it proceeds to the rasterizer, which projects 3D primitives onto the 2D screen. Next up is a fragment shader to compute pixel attributes, an early depth test optimization by discarding occluded fragments, the blending stage, which combines fragment colors, among other things, for transparent effects, and an output merger that finally writes the image in the frame buffer for display.",
134
+ "url": "http://arxiv.org/html/2408.09699v1/extracted/5799186/images/pipeline.png"
135
+ },
136
+ "2": {
137
+ "figure_path": "2408.09699v1_figure_2.png",
138
+ "caption": "Figure 2: This image is a bit-layout of single and double-precision floating-point numbers, as represented in accordance with the IEEE 754 standard. Single precision number would be 32 bits long. Bits needed for this: 1 bit for the sign, 8 bits for exponent, and 23 bits for mantissa. Double precision number: it is 64 bits; 1 bit for the sign, 11 bits for exponent, and 52 bits for mantissa. It has a wider range and is more accurate in representing floating-point numbers.",
139
+ "url": "http://arxiv.org/html/2408.09699v1/extracted/5799186/images/ieee754.png"
140
+ },
141
+ "3": {
142
+ "figure_path": "2408.09699v1_figure_3.png",
143
+ "caption": "Figure 3: 2D and 3D Point Data Visualization Process: a) 2D Point Data Visualization: The Vulkan-based visualization application reads a .csv file consisting of randomly generated 2D points: in this data set, every point is represented by the x and y coordinates. The CSV file is then read by the Vulkan-based visualization application, which visualizes it on the screen. (b) 3D Point Data Visualization: Downloaded model data in glTF/GLB format and further converted it into a .csv file, including the x, y, and z coordinates for each point. Feeding this .csv file into the Vulkan-based visualization application would draw the 3D points onto the screen. As soon as three-dimensional points can be visualized, more complex structures and models represented with data will easily be analyzed and understood.",
144
+ "url": "http://arxiv.org/html/2408.09699v1/extracted/5799186/images/diyagram_2d_3d_flow.png"
145
+ },
146
+ "4": {
147
+ "figure_path": "2408.09699v1_figure_4.png",
148
+ "caption": "Figure 4: This diagram shows initializing and managing GPU resources using the Vulkan API to visualize 2D and 3D point datasets. Vulkan exposes a low-level, general-purpose graphics API that is conceived to offer direct control of the GPU resources to ensure both high performance and flexibility. It details all the processes, from the initialization of the GPU resources to the creation of the graphics pipeline. The graphics pipeline and the initialization of GPU resources are showcased. This diagram provides a comprehensive overview of the stages and interactions involved in setting up and utilizing Vulkan for rendering within the application.",
149
+ "url": "http://arxiv.org/html/2408.09699v1/extracted/5799186/images/general_architecture.png"
150
+ },
151
+ "5": {
152
+ "figure_path": "2408.09699v1_figure_5.png",
153
+ "caption": "Figure 5: A few of the visualized objects from 3D datasets with double-precision floating-points are included, and the column chart on the right-hand side shows a comparison of rendering times in milliseconds for emulated double-precision and native double-precision.",
154
+ "url": "http://arxiv.org/html/2408.09699v1/extracted/5799186/images/cover_image.png"
155
+ },
156
+ "6": {
157
+ "figure_path": "2408.09699v1_figure_6.png",
158
+ "caption": "Figure 6: This figure is a screenshot of the application of an example 2D dataset. This dataset has 10 million 2D point data double-precision and the framerate value is calculated as 218 fps.",
159
+ "url": "http://arxiv.org/html/2408.09699v1/extracted/5799186/images/vulkan-app1.png"
+ },
+ "7": {
+ "figure_path": "2408.09699v1_figure_7.png",
+ "caption": "Figure 7: The graph shows the running time in seconds of both approaches for different vertex counts (10K, 100K, 1M, and 10M). The results reveal that native double-precision calculations are faster overall. Emulated double precision shows significant performance degradation, especially at high vertex counts (10M). This finding indicates that native double-precision calculations are the more efficient option for computations requiring high precision. While the emulation performs reasonably well at low vertex counts, native calculations are significantly faster on larger-scale datasets. This is critical for developers who want to perform high-precision, high-performance graphics operations using the Vulkan API.",
+ "url": "http://arxiv.org/html/2408.09699v1/extracted/5799186/images/2d_points_dataset_benchmark.png"
+ },
+ "8": {
+ "figure_path": "2408.09699v1_figure_8.png",
+ "caption": "Figure 8: Three renderings produced from the same dataset. The left image shows a triangulated Mandelbulb mesh in GLB/glTF format; the middle image renders its vertices with native double precision; and the right image is the output of the emulated double-precision implementation on the same vertices. Because the input dataset is identical, the generated renderings match, allowing a direct comparison of rendering performance and accuracy between the native and emulated double-precision methods.",
+ "url": "http://arxiv.org/html/2408.09699v1/extracted/5799186/images/all_mandelbulb_meshes.png"
+ }
+ },
+ "validation": true,
+ "references": [
+ {
+ "1": {
+ "title": "Pushing the limit of molecular dynamics with ab initio accuracy to 100 million atoms with machine learning.",
+ "author": "Weile Jia, Han Wang, Mohan Chen, Denghui Lu, Lin Lin, Roberto Car, E Weinan, and Linfeng Zhang.",
+ "venue": "In SC20: International conference for high performance computing, networking, storage and analysis, pages 1\u201314. IEEE, 2020.",
+ "url": null
+ }
+ },
+ {
+ "2": {
+ "title": "The lost: An attempt to combine film and game.",
+ "author": "Qingpu Lou.",
+ "venue": "Highlights in Science, Engineering and Technology, 72:446\u2013452, 2023.",
+ "url": null
+ }
+ },
+ {
+ "3": {
+ "title": "Deep learning of 3D high-precision model digital engraving of next-generation games based on artificial intelligence.",
+ "author": "Yue Zhao et al.",
+ "venue": "Advances in Multimedia, 2022, 2022.",
+ "url": null
+ }
+ },
+ {
+ "4": {
+ "title": "Implementation of high-precision computation capabilities into the open-source dynamic simulation framework YADE.",
+ "author": "Janek Kozicki, Anton Gladky, and Klaus Thoeni.",
+ "venue": "Computer Physics Communications, 270:108167, 2022.",
+ "url": null
+ }
+ },
+ {
+ "5": {
+ "title": "High-precision 3D reconstruction for small-to-medium-sized objects utilizing line-structured light scanning: A review.",
+ "author": "Bin Cui, Wei Tao, and Hui Zhao.",
+ "venue": "Remote Sensing, 13(21):4457, 2021.",
+ "url": null
+ }
+ },
+ {
+ "6": {
+ "title": "Hardware-accelerated ray tracing of CAD-based geometry for Monte Carlo radiation transport.",
+ "author": "Patrick C Shriwise, Paul PH Wilson, Andrew Davis, and Paul K Romano.",
+ "venue": "Computing in Science & Engineering, 24(2):52\u201361, 2022.",
+ "url": null
+ }
+ },
+ {
+ "7": {
+ "title": "Leveraging the bfloat16 artificial intelligence datatype for higher-precision computations.",
+ "author": "Greg Henry, Ping Tak Peter Tang, and Alexander Heinecke.",
+ "venue": "In 2019 IEEE 26th Symposium on Computer Arithmetic (ARITH), pages 69\u201376. IEEE, 2019.",
+ "url": null
+ }
+ },
+ {
+ "8": {
+ "title": "Hardware support for a novel variable precision floating point format in a scientific computing environment.",
+ "author": "Riccardo Alidori.",
+ "venue": "PhD thesis, Politecnico di Torino, 2020.",
+ "url": null
+ }
+ },
+ {
+ "9": {
+ "title": "FPU reduced variable precision in time: Application to the Jacobi iterative method.",
+ "author": "Noureddine Ait Said, Mounir Benabdenbi, and Katell Morin-Allory.",
+ "venue": "In 2021 IEEE Computer Society Annual Symposium on VLSI (ISVLSI), pages 170\u2013175. IEEE, 2021.",
+ "url": null
+ }
+ },
+ {
+ "10": {
+ "title": "Can we avoid rounding-error estimation in HPC codes and still get trustworthy results?",
+ "author": "Fabienne J\u00e9z\u00e9quel, Stef Graillat, Daichi Mukunoki, Toshiyuki Imamura, and Roman Iakymchuk.",
+ "venue": "In Software Verification: 12th International Conference, VSTTE 2020, and 13th International Workshop, NSV 2020, Los Angeles, CA, USA, July 20\u201321, 2020, Revised Selected Papers 13, pages 163\u2013177. Springer, 2020.",
+ "url": null
+ }
+ },
+ {
+ "11": {
+ "title": "Stochastic rounding: implementation, error analysis and applications.",
+ "author": "Matteo Croci, Massimiliano Fasi, Nicholas J Higham, Theo Mary, and Mantas Mikaitis.",
+ "venue": "Royal Society Open Science, 9(3):211631, 2022.",
+ "url": null
+ }
+ },
+ {
+ "12": {
+ "title": "OpenGL API, 2024.",
+ "author": "OpenGL.",
+ "venue": "Available at: https://www.opengl.org.",
+ "url": null
+ }
+ },
+ {
+ "13": {
+ "title": "Vulkan API, 2024.",
+ "author": "Vulkan.",
+ "venue": "Available at: https://www.vulkan.org.",
+ "url": null
+ }
+ },
+ {
+ "14": {
+ "title": "DirectX API, 2024.",
+ "author": "DirectX.",
+ "venue": "Available at: https://www.microsoft.com/directx.",
+ "url": null
+ }
+ },
+ {
+ "15": {
+ "title": "OpenCL API, 2024.",
+ "author": "OpenCL.",
+ "venue": "Available at: https://www.khronos.org/opencl.",
+ "url": null
+ }
+ },
+ {
+ "16": {
+ "title": "CUDA API, 2024.",
+ "author": "CUDA.",
+ "venue": "Available at: https://developer.nvidia.com/cuda-zone.",
+ "url": null
+ }
+ },
+ {
+ "17": {
+ "title": "Core Language (GLSL), 2024.",
+ "author": "OpenGL Wiki.",
+ "venue": null,
+ "url": null
+ }
+ },
+ {
+ "18": {
+ "title": "Implementation of float-float operators on graphics hardware.",
+ "author": "Guillaume Da Gra\u00e7a and David Defour.",
+ "venue": "CoRR, abs/cs/0603115, 2006.",
+ "url": null
+ }
+ },
+ {
+ "19": {
+ "title": "Extended-precision floating-point numbers for GPU computation.",
+ "author": "Andrew Thall.",
+ "venue": "In ACM SIGGRAPH 2006 Research Posters, SIGGRAPH \u201906, page 52\u2013es, New York, NY, USA, 2006. Association for Computing Machinery.",
+ "url": null
+ }
+ },
+ {
+ "20": {
+ "title": "OpenSpace: Changing the narrative of public dissemination in astronomical visualization from what to how.",
+ "author": "Alexander Bock, Emil Axelsson, Carter Emmart, Masha Kuznetsova, Charles Hansen, and Anders Ynnerman.",
+ "venue": "IEEE Computer Graphics and Applications, 38(3):44\u201357, 2018.",
+ "url": null
+ }
+ },
+ {
+ "21": {
+ "title": "Evolution of the graphics processing unit (GPU).",
+ "author": "William J Dally, Stephen W Keckler, and David B Kirk.",
+ "venue": "IEEE Micro, 41(6):42\u201351, 2021.",
+ "url": null
+ }
+ },
+ {
+ "22": {
+ "title": "Accurate real-time physics simulation for large worlds.",
+ "author": "Lorenzo Schwertner Kaufmann, Flavio Paulus Franzin, Roberto Menegais, and Cesar Tadeu Pozzer.",
+ "venue": "In VISIGRAPP (1: GRAPP), pages 135\u2013142, 2021.",
+ "url": null
+ }
+ },
+ {
+ "23": {
+ "title": "Emulating double precision on the GPU to render large worlds, 2022.",
+ "author": "Clay John.",
+ "venue": null,
+ "url": null
+ }
+ },
+ {
+ "24": {
+ "title": "Game engine architecture.",
+ "author": "Jason Gregory.",
+ "venue": "AK Peters/CRC Press, 2018.",
+ "url": null
+ }
+ },
+ {
+ "25": {
+ "title": "Real-time rendering.",
+ "author": "Tomas Akenine-Moller, Eric Haines, and Naty Hoffman.",
+ "venue": "AK Peters/CRC Press, 2019.",
+ "url": null
+ }
+ },
+ {
+ "26": {
+ "title": "The Vulkan computer graphics API.",
+ "author": "Mike Bailey.",
+ "venue": "In ACM SIGGRAPH 2023 Courses, pages 1\u2013158, 2023.",
+ "url": null
+ }
+ },
+ {
+ "27": {
+ "title": "Vulkan programming guide: The official guide to learning Vulkan.",
+ "author": "Graham Sellers and John Kessenich.",
+ "venue": "Addison-Wesley Professional, 2016.",
+ "url": null
+ }
+ },
+ {
+ "28": {
+ "title": "Modern OpenGL programming.",
+ "author": "Ed Angel and Dave Shreiner.",
+ "venue": "In SIGGRAPH Asia 2011 Courses, SA \u201911, New York, NY, USA, 2011. Association for Computing Machinery.",
+ "url": null
+ }
+ },
+ {
+ "29": {
+ "title": "Graphics shaders: theory and practice.",
+ "author": "Mike Bailey and Steve Cunningham.",
+ "venue": "AK Peters/CRC Press, 2009.",
+ "url": null
+ }
+ },
+ {
+ "30": {
+ "title": "IEEE approved draft standard for floating-point arithmetic.",
+ "author": "IEEE-754.",
+ "venue": "IEEE P754/D2.50, April 2019, pages 1\u201383, 2019.",
+ "url": null
+ }
+ },
+ {
+ "31": {
+ "title": "Float vs double data types: What is the difference [updated], 2023.",
+ "author": "Robert Johns.",
+ "venue": null,
+ "url": null
+ }
+ },
+ {
+ "32": {
+ "title": "Fractals and chaos: the Mandelbrot set and beyond, volume 3.",
+ "author": "Benoit B Mandelbrot, Carl JG Evertsz, and Martin C Gutzwiller.",
+ "venue": "Springer, 2004.",
+ "url": null
+ }
+ },
+ {
+ "33": {
+ "title": "When double rounding is odd.",
+ "author": "Sylvie Boldo and Guillaume Melquiond.",
+ "venue": "July 2005.",
+ "url": null
+ }
+ },
+ {
+ "34": {
+ "title": "Revisiting \"what every computer scientist should know about floating-point arithmetic\", 2020.",
+ "author": "Vincent Lafage.",
+ "venue": null,
+ "url": null
+ }
+ },
+ {
+ "35": {
+ "title": "Exploring rounding errors in MATLAB using extended precision.",
+ "author": "Dina Tsarapkina and David J Jeffrey.",
+ "venue": "Procedia Computer Science, 29:1423\u20131432, 2014.",
+ "url": null
+ }
+ },
+ {
+ "36": {
+ "title": "Computer Organization and Design, Revised Fourth Edition: The Hardware/Software Interface.",
+ "author": "David A. Patterson and John L. Hennessy.",
+ "venue": "Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 4th edition, 2011.",
+ "url": null
+ }
+ },
+ {
+ "37": {
+ "title": "Possibilities and drawbacks using arbitrary precision numbers for structural analysis.",
+ "author": "Simon Klarmann and Jens Wackerfu\u00df.",
+ "venue": "PAMM, 20(1):e202000079, 2021.",
+ "url": null
+ }
+ },
+ {
+ "38": {
+ "title": "DSFUN90 (Fortran-90 double-single package), 2005.",
+ "author": "David H. Bailey.",
+ "venue": null,
+ "url": null
+ }
+ },
+ {
+ "39": {
+ "title": "RenderDoc.",
+ "author": "Baldur Karlsson.",
+ "venue": "URL https://renderdoc.org, 2019.",
+ "url": null
+ }
+ },
+ {
+ "40": {
+ "title": "The unravelling of the real 3D Mandelbulb.",
+ "author": "Daniel White.",
+ "venue": "Online, 2009.",
+ "url": null
+ }
+ },
+ {
+ "41": {
+ "title": "Expanding the Mandelbrot set into higher dimensions.",
+ "author": "Javier Barrallo.",
+ "venue": "In Proceedings of Bridges 2010: Mathematics, Music, Art, Architecture, Culture, pages 247\u2013254, 2010.",
+ "url": null
+ }
+ },
+ {
+ "42": {
+ "title": "Mandelbulb, Mandelbrot, Mandelring and Hopfbrot.",
+ "author": "Oliver Knill.",
+ "venue": "arXiv preprint arXiv:2305.17848, 2023.",
+ "url": null
+ }
+ },
+ {
+ "43": {
+ "title": "Julia sets in the quaternions.",
+ "author": "Alan Norton.",
+ "venue": "Computers & Graphics, 13(2):267\u2013278, 1989.",
+ "url": null
+ }
+ },
+ {
+ "44": {
+ "title": "Interactive visualization of quaternion Julia sets.",
+ "author": "John C Hart, Louis H Kauffman, and Daniel J Sandin.",
+ "venue": "In Proceedings of the First IEEE Conference on Visualization: Visualization '90, pages 209\u2013218. IEEE, 1990.",
+ "url": null
+ }
+ },
+ {
+ "45": {
+ "title": "Classics on fractals.",
+ "author": "Gerald A Edgar.",
+ "venue": "CRC Press, 2019.",
+ "url": null
+ }
+ },
+ {
+ "46": {
+ "title": "Chaos and fractals: new frontiers of science, volume 106.",
+ "author": "Heinz-Otto Peitgen, Hartmut J\u00fcrgens, Dietmar Saupe, and Mitchell J Feigenbaum.",
+ "venue": "Springer, 2004.",
+ "url": null
+ }
+ },
+ {
+ "47": {
+ "title": "General topology.",
+ "author": "Waclaw Sierpinski.",
+ "venue": "Courier Dover Publications, 2020.",
+ "url": null
+ }
+ },
+ {
+ "48": {
+ "title": "Curves, Surfaces and Patterns, pages 249\u2013311.",
+ "author": "Robert Whitrow.",
+ "venue": "Springer London, London, 2008.",
+ "url": null
+ }
+ }
+ ],
+ "url": "http://arxiv.org/html/2408.09699v1"
+ }
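
Several of the figure captions in this metadata (Figures 5, 7, and 8) compare "emulated double precision" against native doubles. For context, the sketch below shows the float-float ("double-single") technique described in the cited Da Graça–Defour and Thall references, where a high-precision value is stored as an unevaluated sum of two single-precision floats. This is illustrative C under our own naming (`ff`, `two_sum`, `two_prod`, `ff_add`, `ff_mul` are all assumptions), not the paper's actual Vulkan/GLSL shader code.

```c
/* Minimal sketch of float-float ("double-single") arithmetic.
 * Build with contraction disabled so error terms are not optimized away:
 *   cc -O2 -ffp-contract=off ff_sketch.c -lm && ./a.out
 */
#include <math.h>
#include <stdio.h>

typedef struct { float hi, lo; } ff;  /* value = hi + lo, |lo| << |hi| */

/* Knuth's TwoSum: s + e is exactly a + b */
static ff two_sum(float a, float b) {
    float s = a + b;
    float v = s - a;
    float e = (a - (s - v)) + (b - v);
    return (ff){ s, e };
}

/* Exact product via fused multiply-add: p + e is exactly a * b */
static ff two_prod(float a, float b) {
    float p = a * b;
    float e = fmaf(a, b, -p);
    return (ff){ p, e };
}

/* Float-float addition: roughly twice the significand bits of a float */
static ff ff_add(ff a, ff b) {
    ff s = two_sum(a.hi, b.hi);
    return two_sum(s.hi, s.lo + a.lo + b.lo);
}

/* Float-float multiplication */
static ff ff_mul(ff a, ff b) {
    ff p = two_prod(a.hi, b.hi);
    return two_sum(p.hi, p.lo + a.hi * b.lo + a.lo * b.hi);
}

/* Split a native double into a hi/lo float pair */
static ff ff_from_double(double d) {
    float hi = (float)d;
    return (ff){ hi, (float)(d - (double)hi) };
}

int main(void) {
    ff x = ff_from_double(1.0 / 3.0);
    ff sum = { 0.0f, 0.0f };
    double ref = 0.0;
    for (int i = 0; i < 1000; ++i) {  /* accumulate to expose rounding */
        sum = ff_add(sum, ff_mul(x, x));
        ref += (1.0 / 3.0) * (1.0 / 3.0);
    }
    printf("float-float: %.15g\n", (double)sum.hi + (double)sum.lo);
    printf("double     : %.15g\n", ref);
    return 0;
}
```

Each float-float operation costs several single-precision instructions (TwoSum alone is six), which is consistent with the slowdown the Figure 7 caption reports for emulated double precision at high vertex counts relative to native doubles.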