Add files using upload-large-folder tool
This view is limited to 50 files because it contains too many changes.
- 20240522/1212.5289v2.json +69 -0
- 20240522/2009.09435v4.json +390 -0
- 20240522/2204.01349v4.json +0 -0
- 20240522/2206.02603v3.json +206 -0
- 20240522/2206.09677v5.json +0 -0
- 20240522/2206.14273v3.json +242 -0
- 20240522/2210.03123v3.json +0 -0
- 20240522/2211.07482v3.json +591 -0
- 20240522/2211.10054v2.json +230 -0
- 20240522/2301.10960v3.json +108 -0
- 20240522/2301.11761v3.json +0 -0
- 20240522/2302.04749v2.json +0 -0
- 20240522/2303.12002v3.json +0 -0
- 20240522/2304.01772v2.json +0 -0
- 20240522/2304.14606v2.json +0 -0
- 20240522/2305.05451v3.json +261 -0
- 20240522/2305.09972v2.json +147 -0
- 20240522/2306.00096v2.json +410 -0
- 20240522/2306.00420v2.json +77 -0
- 20240522/2306.09683v3.json +0 -0
- 20240522/2306.16564v4.json +584 -0
- 20240522/2307.07099v3.json +0 -0
- 20240522/2308.01123v3.json +0 -0
- 20240522/2308.01804v3.json +151 -0
- 20240522/2308.08670v3.json +0 -0
- 20240522/2310.00263v3.json +0 -0
- 20240522/2310.08559v4.json +0 -0
- 20240522/2310.10064v2.json +0 -0
- 20240522/2310.10274v2.json +0 -0
- 20240522/2310.11287v3.json +526 -0
- 20240522/2311.02142v2.json +244 -0
- 20240522/2311.02805v2.json +0 -0
- 20240522/2311.05956v2.json +0 -0
- 20240522/2311.07750v3.json +329 -0
- 20240522/2311.08053v4.json +147 -0
- 20240522/2311.11176v2.json +198 -0
- 20240522/2312.14474v2.json +0 -0
- 20240522/2312.16465v4.json +226 -0
- 20240522/2401.03083v2.json +424 -0
- 20240522/2401.08361v2.json +0 -0
- 20240522/2401.08539v2.json +243 -0
- 20240522/2401.09962v2.json +0 -0
- 20240522/2401.15330v3.json +0 -0
- 20240522/2402.00853v2.json +0 -0
- 20240522/2402.01965v3.json +394 -0
- 20240522/2402.02592v2.json +0 -0
- 20240522/2402.02675v2.json +0 -0
- 20240522/2402.09346v3.json +0 -0
- 20240522/2402.11489v2.json +0 -0
- 20240522/2402.17205v3.json +0 -0
20240522/1212.5289v2.json
ADDED
@@ -0,0 +1,69 @@
| 1 |
+
{
|
| 2 |
+
"title": "Modeling and Performance Evaluation of Computer Systems Security OperationProc. 4th St. Petersburg Workshop on Simulation / Ed. by S. M. Ermakov, Yu. N. Kashtanov, V. B. Melas, NII Chemistry St. Petersburg University Publishers, St. Petersburg, 2001, pp. 233\u2013238.",
|
| 3 |
+
"abstract": "A model of computer system security operation is developed based on the\nfork-join queueing network formalism. We introduce a security operation\nperformance measure, and show how it may be used to performance evaluation of\nactual systems.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "The explosive growth in computer systems and networks has increased the role\nof computer security within organizations [4 ###reference_b4###]. In many cases,\nineffective protection against computer security treats leads to considerable\ndamage, and even can cause an organization to be paralized. Therefore, the\ndevelopment of new models and methods of performance analysis of security\nsystems seems to be very important.\nIn this paper, we propose a model of computer security operation, and\nintroduce its related performance measure. It is shown how the model can be\napplied to performance evaluation of actual systems. Finally, a technique of\nsecurity system performance analysis is described and its practical\nimplementation is discussed.\nWe conclude with an appendix which contains technical details concerning\nfork-join network representation of the model, and related results."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "A Security Operation Model",
|
| 15 |
+
"text": "In this paper, we deal with the current security activities\n(see Fig. 1 ###reference_###) that mainly relate to the actual security threats\nrather than to strategic or long-term issues of security management.\nConsider the model of security operation in an organization, presented in\nFig. 2 ###reference_###. Each operational cycle starts with security attack\ndetection based on audit records and system/errors log analysis, traffic\nanalysis, or user reports. In order to detect an intrusion, automated tools of\nsecurity monitoring are normally used including procedures of statistical\nanomaly detection, rule-based detection, and data integrity control\n[4 ###reference_b4###].\nAfter security attack detection and identification, the integrity of\nsystem/application software and data in storage devices has to be examined\nto search for possible unauthorized modifications or damages made by the\nintruder. The investigation procedure can exploit file lists and checksum\nanalysis, hash functions, and other automated techniques.\nIn parallel, the system vulnerabilities, which allow the intruder to attack,\nshould be identified and investigated. The vulnerability analysis normally\npresents an informal procedure, and therefore, it can hardly be performed\nautomatically.\nBased on the results of integrity analysis, a software and data recovery\nprocedure can be initiated using back-up servers and reserving storage\ndevices. It has to take into account the security vulnerabilities identified\nat the previous step, so as to provide for further improvements in the entire\nsecurity system.\nAlong with the recovery procedure, the development of a complete set of\ncountermeasures against similar attacks should be performed. Finally, the\noperational cycle is concluded with appropriate modifications of software,\ndata bases, and system security policies and procedures.\nWe assume that the organization has appropriate personnel integrated in a\nComputer Emergency Response Team, available to handle the attack. The team\nwould include at least two subteams working in parallel, one to perform\nintegrity analysis and recovery procedures, and another to do vulnerability\nanalysis and development of countermeasures. At any time instant, each\nsubteam can deal with only one security incident. Any procedure may be started\nas soon as all prior procedures according to the model in Fig. 2 ###reference_###,\nhave been completed. If a request to handle a new incident occurs when a\nsubteam is still working on a procedure, the request has to wait until\nthe processing of that procedure is completed.\nWe denote by a random variable (r.v.) that represents the\ntime interval between detections of the th attack and its predecessor.\nFurthermore, we introduce r.v.\u2019s , , to\ndescribe the time of the th instant of procedure in the model.\nWe assume , to be independent and identically\ndistributed (i.i.d.) r.v.\u2019s with finite mean and variance for each ,\n. At the same time, we do not require of independence of\n for each , ."
|
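To make the operational model concrete, the following minimal Python sketch simulates the fork-join cycle described above. The precedence constraints are our reading of Fig. 2, and the mean durations and exponential service times are illustrative assumptions; the paper does not fix the distributions.

```python
import random

random.seed(0)

# Hypothetical mean service times for each procedure.
MEANS = {
    "detect": 1.0, "integrity": 3.0, "vulnerability": 4.0,
    "recovery": 2.5, "countermeasures": 5.0, "modification": 1.5,
}
# Assumed precedence constraints (our reading of Fig. 2).
PREDS = {
    "detect": [], "integrity": ["detect"], "vulnerability": ["detect"],
    "recovery": ["integrity", "vulnerability"],
    "countermeasures": ["vulnerability"],
    "modification": ["recovery", "countermeasures"],
}
ORDER = ["detect", "integrity", "vulnerability",
         "recovery", "countermeasures", "modification"]

def simulate(n_attacks, mean_interarrival):
    done = {p: 0.0 for p in ORDER}   # completion times for the previous incident
    arrival, completions = 0.0, []
    for _ in range(n_attacks):
        arrival += random.expovariate(1.0 / mean_interarrival)
        for p in ORDER:
            # ready once the incident is detected and all predecessors finish
            ready = max([arrival] + [done[q] for q in PREDS[p]])
            # a subteam must also finish this procedure for the previous
            # incident before starting it for the new one
            done[p] = max(ready, done[p]) + random.expovariate(1.0 / MEANS[p])
        completions.append(done["modification"])
    return completions

c = simulate(20000, mean_interarrival=10.0)
print(f"system cycle time ~ {(c[-1] - c[0]) / (len(c) - 1):.2f}")
```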
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "Security Operation Performance Evaluation",
|
| 21 |
+
"text": "In order to describe system performance, we introduce the following notations.\nLet be the mean time between consecutive security\nattacks (the attack cycle time), and be the mean time\nrequired to completely handle an attack (the recovery cycle time), as the\nnumber of attacks tends to .\nIn devising the security operation performance measure, one can take the ratio\nWith the natural condition , one can\nconsider as the time portion the system is under recovery, assuming\n.\nFirst note that the attack cycle time can immediately be evaluated as the mean\nvalue: .\nNow consider the cycle time of the entire system, which can be defined as\nthe mean time interval between successive completions of security system\nmodification procedures as the number of attacks . As one\ncan prove (see Appendix for further details), the system cycle time\n can be calculated as\nIn order to evaluate the recovery cycle time, we assume the system will\noperate under the maximum traffic level, which can be achieved when all the\ntime intervals between attacks are set to . Clearly, under that\ncondition, the system cycle time can be taken as a reasonable estimate of the\nrecovery cycle time.\nConsidering that now , we get the recovery cycle\ntime in the form"
|
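As a worked example of the performance measure (all numbers hypothetical), using the observation from the next section that, asymptotically, the recovery cycle time is governed by the longest procedure's mean duration:

```python
# Hypothetical mean procedure times (same units as the attack cycle time).
mean_procedure_times = [1.0, 3.0, 4.0, 2.5, 5.0, 1.5]
mean_time_between_attacks = 10.0     # attack cycle time, the mean of tau

# Asymptotically the recovery cycle time equals the longest mean
# procedure time (Section 4), so the performance ratio is:
recovery_cycle_time = max(mean_procedure_times)
rho = recovery_cycle_time / mean_time_between_attacks
print(f"rho = {rho:.2f}")  # 0.50: the system is under recovery half the time
```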
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "4",
|
| 25 |
+
"parent_section_id": null,
|
| 26 |
+
"section_name": "Performance Analysis and Discussion",
|
| 27 |
+
"text": "In fact, the above model presents a quite simple but useful tool for security\nsystem operation management. It may be used to make decision on the basis of\na few natural parameters of the security operation process.\nLet us represent the ratio in the form\nand assume the attack rate determined by , to be\nfixed.\nTaking into account that the above result has been obtained based on the\nassumption of an infinite number of attacks, we arrive at the following\nconclusion. As the number of attacks becomes sufficiently large, the\nperformance of the system is determined by the time of the longest procedure\ninvolved in the system operation, whereas the impact of the order of\nperforming the procedures disappears.\nIt is clear that in order to improve system performance, the system security\nmanager (administrator) should first concentrate on decreasing the mean time\nrequired to perform the longest procedure within the security operation model,\nthen consider the second longest procedure, and so on. The goal of decreasing\nthe time can be achieved through partition of a whole procedure into\nsubprocedures, which can be performed in parallel, or through rescheduling of\nthe entire process with redistribution of particular activities between\nprocedures.\nIn practice, the above model and its related ratio can serve as the\nbasis for efficient monitorization of organizational security systems. Because\nthe introduction of new countermeasures may change the attack cycle time, the\nmonitoring requires updating this parameter after each modification of the\nsystem.\nFinally note, the above model can be easily extended to cover security\noperational processes, which consist of different procedures and precedence\nconstraints."
|
| 28 |
+
}
|
| 29 |
+
],
|
| 30 |
+
"appendix": [],
|
| 31 |
+
"tables": {},
|
| 32 |
+
"image_paths": {},
|
| 33 |
+
"validation": true,
|
| 34 |
+
"references": [
|
| 35 |
+
{
|
| 36 |
+
"1": {
|
| 37 |
+
"title": "Queueing models for systems with synchronization constraints.",
|
| 38 |
+
"author": "F. Baccelli and A. M. Makowski.",
|
| 39 |
+
"venue": "Proc. IEEE, 77(1):138\u2013160, January 1989.",
|
| 40 |
+
"url": null
|
| 41 |
+
}
|
| 42 |
+
},
|
| 43 |
+
{
|
| 44 |
+
"2": {
|
| 45 |
+
"title": "Algebraic modeling and performance evaluation of acyclic fork-join\nqueueing networks.",
|
| 46 |
+
"author": "N. K. Krivulin.",
|
| 47 |
+
"venue": "In N. Balakrishnan, V. B. Melas, and S. Ermakov, editors, Advances in Stochastic Simulation Methods, Statistics for Industry and\nTechnology, pages 63\u201381. Birkh\u00e4user, Boston, 2000.",
|
| 48 |
+
"url": null
|
| 49 |
+
}
|
| 50 |
+
},
|
| 51 |
+
{
|
| 52 |
+
"3": {
|
| 53 |
+
"title": "Evaluation of the mean interdeparture time in tandem queueing\nsystems.",
|
| 54 |
+
"author": "N. K. Krivulin and V. B. Nevzorov.",
|
| 55 |
+
"venue": "In S. M. Ermakov, Y. N. Kashtanov, and V. B. Melas, editors, Proc. 4th St. Petersburg Workshop on Simulation, pages 310\u2013315, St.\nPetersburg, 2001. NII Chemistry St. Petersburg University Publishers.",
|
| 56 |
+
"url": null
|
| 57 |
+
}
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"4": {
|
| 61 |
+
"title": "Network and Internetwork Security: Principles and Practice.",
|
| 62 |
+
"author": "W. Stallings.",
|
| 63 |
+
"venue": "Prentice Hall, Englewood Cliffs, 1995.",
|
| 64 |
+
"url": null
|
| 65 |
+
}
|
| 66 |
+
}
|
| 67 |
+
],
|
| 68 |
+
"url": "http://arxiv.org/html/1212.5289v2"
|
| 69 |
+
}
|
20240522/2009.09435v4.json
ADDED
@@ -0,0 +1,390 @@
| 1 |
+
{
|
| 2 |
+
"title": "Exploring the Linear Subspace Hypothesis in Gender Bias Mitigation",
|
| 3 |
+
"abstract": "Bolukbasi et al. (2016) presents one of the first gender bias mitigation techniques for word representations. Their method takes\npre-trained word representations as input and attempts to isolate a linear subspace that captures\nmost of the gender bias in the representations. As judged by an analogical evaluation task, their method virtually eliminates gender bias in the representations. However, an implicit and untested assumption of their method is that the bias subspace is actually linear.\nIn this work, we generalize their method to a kernelized, non-linear version. We take inspiration from kernel principal component analysis and derive a non-linear bias isolation technique.\nWe discuss and overcome some of the practical drawbacks of our method for non-linear gender bias mitigation in word representations and analyze empirically whether the bias subspace is actually linear. Our analysis shows that gender bias is in fact well captured by a linear subspace, justifying\nthe assumption of Bolukbasi et al. (2016).",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Pre-trained word representations are a necessity for strong performance on modern NLP tasks.\nSuch representations now serve as input to neural methods Goldberg (2017 ###reference_b6###), which recently have become the standard models in the field.\nHowever, because pre-trained representations are constructed from large, human-created corpora, they naturally contain societal biases encoded in that data; gender bias\nis among the most well-studied of these biases Caliskan et al. (2017 ###reference_b3###).\nBoth a-contextual word representations Mikolov et al. (2013 ###reference_b16###); Pennington et al. (2014 ###reference_b18###) and contextual word representations Peters et al. (2018 ###reference_b19###); Devlin et al. (2019 ###reference_b5###) have been shown to encode gender bias Bolukbasi et al. (2016 ###reference_b2###); Caliskan et al. (2017 ###reference_b3###); Zhao et al. (2019 ###reference_b23###); May et al. (2019 ###reference_b14###); Karve et al. (2019 ###reference_b12###).\nMore importantly, bias in pre-trained representations has been shown to influence models for downstream tasks where they are used as input, e.g., coreference resolution (Rudinger et al., 2018 ###reference_b20###; Zhao et al., 2018 ###reference_b24###).\nBolukbasi et al. (2016 ###reference_b2###) present one of the first methods\nfor detecting and mitigating gender bias in word representations.\nThey provide a novel linear-algebraic approach that post-processes\nword representations in order to partially remove gender bias. Under their evaluation, they\nfind they can nearly perfectly remove bias in an analogical reasoning task.\nHowever, subsequent work Gonen and Goldberg (2019 ###reference_b7###); Hall Maudslay et al. (2019 ###reference_b9###) has indicated that gender bias still lingers in the representations, despite Bolukbasi et al. ###reference_b2###\u2019s (2016 ###reference_b2###) strong experimental results.\nIn the development of their method, Bolukbasi et al. (2016 ###reference_b2###) make a critical and unstated assumption: Gender bias forms a linear subspace of word representation space.\nIn mathematics, linearity is a strong assumption and there is no reason a-priori why one should expect complex and nuanced societal phenomena, such as gender bias, should be represented by a linear subspace.\nIn this work, we present the first non-linear gender bias mitigation technique for a-contextual word representations. In doing so, we directly test the linearity assumption made\nby Bolukbasi et al. (2016 ###reference_b2###).\nOur method is based on the insight that Bolukbasi et al. ###reference_b2###\u2019s (2016 ###reference_b2###) method bears a close resemblance to principal component analysis (PCA). Just as one can kernelize PCA Sch\u00f6lkopf et al. (1997 ###reference_b21###),\nwe show that one can kernelize the method of Bolukbasi et al. (2016 ###reference_b2###). Due to the kernelization,\nthe bias is removed in the feature space, rather in the word representation space. Thus, we also explore pre-image techniques Mika et al. (1999 ###reference_b15###) to project the bias-mitigated vectors\nback into .\nAs previously noted, there are now multiple bias removal methodologies (Zhao et al., 2018 ###reference_b24###, 2019 ###reference_b23###; May et al., 2019 ###reference_b14###) that have succeed the method by Bolukbasi et al. (2016 ###reference_b2###). Furthermore Gonen and Goldberg (2019 ###reference_b7###) point out multiple flaws in Bolukbasi et al. 
###reference_b2###\u2019s (2016 ###reference_b2###) bias mitigation technique and the aforementioned methods.\nNonetheless, we believe that this method has received sufficient attention from the community such that research into its properties is both interesting and useful.\nWe test our non-linear gender bias technique in several\nexperiments. First, we consider the Word representation Association Test (WEAT; Caliskan et al., 2017 ###reference_b3###); we notice that across five non-linear kernels and convex combinations thereof, there is seemingly no significant difference between\nthe non-linear bias mitigation technique and the linear one. Secondly, we\nconsider the professions task Bolukbasi et al. (2016 ###reference_b2###); Gonen and Goldberg (2019 ###reference_b7###) that measures\nhow word representations representing different professions are potentially gender-stereotyped. Again, as with the WEAT evaluation, we find that our non-linear bias mitigation technique performs on par with the linear method. We also consider\nwhether the non-linear gender mitigation technique removes indirect bias\nfrom the vectors Gonen and Goldberg (2019 ###reference_b7###); yet again, we find\nthe non-linear method performs on par with the linear methods.\nAs a final evaluation, we evaluate whether non-linear bias mitigation hurts semantic\nperformance.\nOn SimLex-999 Hill et al. (2015 ###reference_b10###), we show that similarity\nestimates between the vectors remain on par with the linear methods.\nWe conclude that much of the gender bias in word representations is indeed captured by a linear subspace, answering\nthis paper\u2019s titular question."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Bias as a Linear Subspace",
|
| 15 |
+
"text": "The first step of Bolukbasi et al. ###reference_b2###\u2019s (2016 ###reference_b2###)\ntechnique is the discovery of a subspace that\ncaptures most of the gender bias. Specifically,\nthey stipulate that this space is linear.\nGiven word representations that live in ,\nthey provide a spectral method for isolating\nthe bias subspace.\nIn this section, we review their approach and show how it is equivalent to principal component analysis (PCA) on a specific design (input) matrix.\nThen, we introduce and discuss the implicit assumption made\nby their work; we term this assumption the linear subspace hypothesis and test it in \u00a7 4 ###reference_###.\nGender bias in word representations may be represented\nas a linear subspace."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "Construction of a Bias Subspace",
|
| 21 |
+
"text": "We will assume the existence of a fixed and finite vocabulary , each element of which is a word .\nThe hard-debiasing approach takes a set of sets as input. Each set \ncontains words that are considered roughly semantically equivalent modulo their gender; Bolukbasi et al. (2016 ###reference_b2###) call the defining sets. For example, \nand form two such defining sets.\nWe identify each word with a unique integer for the sake\nof our indexing notation; indeed, we exclusively reserve the index for words.\nWe additionally introduce the function that maps an individual word to its\ndefining set.\nIn general,\nthe defining sets are not limited to a cardinality of two, but in practice Bolukbasi et al. (2016 ###reference_b2###) exclusively employ defining sets\nwith a cardinality of two in their experiments.\nUsing the sets , Bolukbasi et al. (2016 ###reference_b2###) define the matrix\nwhere we write for the word\u2019s representation and the empirical mean vector is defined as\nBolukbasi et al. (2016 ###reference_b2###) then extract a bias subspace using the singular value decomposition (SVD).\nSpecifically, they define the bias subspace to be the space\nspanned by the first columns of where\nAs is symmetric and positive semi-definite,\nthe SVD is equivalent to an eigendecomposition as our notation in Eq. 3 ###reference_### shows.\nWe assume the columns of , the eigenvectors of , are ordered by the magnitude of their corresponding eigenvalues."
|
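A minimal sketch of this construction, with toy random vectors standing in for pre-trained representations and a single retained component (both illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 50-d vectors standing in for pre-trained representations.
vecs = {w: rng.normal(size=50)
        for w in ["he", "she", "man", "woman", "king", "queen"]}
defining_sets = [("he", "she"), ("man", "woman"), ("king", "queen")]

# Stack each word vector minus its defining-set mean.
rows = []
for ds in defining_sets:
    mu = np.mean([vecs[w] for w in ds], axis=0)
    rows.extend(vecs[w] - mu for w in ds)
W = np.stack(rows)

# The top-k right singular vectors of W span the bias subspace.
_, _, Vt = np.linalg.svd(W, full_matrices=False)
B = Vt[:1].T             # here k = 1: a (50, 1) orthonormal basis
print(B.shape)
```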
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "Bias Subspace Construction as PCA",
|
| 27 |
+
"text": "As briefly noted by Bolukbasi et al. (2016 ###reference_b2###), this can thus be cast as performing principal component analysis (PCA) on a recentered input matrix. We prove this assertion\nmore formally. We first prove\nthat the matrix may be written\nas an empirical covariance matrix.\nSuppose for all . Then we have\nwhere we define the design matrix as:\nwhere is defined as above.\n\u220e\nNext, we show that the matrix is centered, which is a\nrequirement for PCA.\nThe matrix is row-wise centered.\n\u220e\nThe method of Bolukbasi et al. (2016 ###reference_b2###) may be considered principal component analysis performed on the matrix .\nAs the algebra in Proposition 1 ###reference_orem1### and Proposition 2 ###reference_orem2###\nshow we may formulate the problem as an SVD on\na mean-centered covariance matrix. One view\nof PCA is performing matrix factorization on such\na matrix.\n\u220e"
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3",
|
| 31 |
+
"parent_section_id": null,
|
| 32 |
+
"section_name": "Bolukbasi et al. (2016)",
|
| 33 |
+
"text": "In this section, we review the bias mitigation technique introduced by Bolukbasi et al. (2016 ###reference_b2###).\nWhen possible, we take care to reformulate their method in terms of\nmatrix notation.\nThey introduce a two-step process that neutralizes and equalizes\nthe vectors to mitigate gender bias in the representations.\nThe underlying assumption of their method is that there exists\na linear subspace that captures most of the\ngender bias."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3.1",
|
| 37 |
+
"parent_section_id": "3",
|
| 38 |
+
"section_name": "Neutralize",
|
| 39 |
+
"text": "After finding the linear bias subspace ,\nthe gist behind Bolukbasi et al. ###reference_b2###\u2019s (2016 ###reference_b2###) approach is\nbased on elementary linear algebra. We may decompose\nany word vector as the sum of its orthogonal projection onto the bias subspace (range of the projection) and its orthogonal projection onto the complement of the bias subspace (null space of the projection), i.e.,\nWe may then re-embed every vector as\nWe may re-write this in terms of matrix notation in\nthe following manner. Let be an orthogonal basis for the linear bias subspace .\nThis may be found by taking the eigenvectors that correspond to the top- eigenvalues with largest magnitude.\nThen, we define the projection matrix onto the bias subspace\nas it follows that\nthe matrix is a projection matrix on the complement of .\nWe can then write the neutralize step using matrices\nThe matrix formulation of the neutralize step\noffers a cleaner presentation of what the neutralize\nstep does: it projects the vectors onto the orthogonal\ncomplement of the bias subspace."
|
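A minimal sketch of the neutralize step in this matrix form (the toy one-dimensional bias subspace is illustrative, and any unit-length renormalization applied afterwards is omitted):

```python
import numpy as np

def neutralize(w, B):
    """Project w onto the orthogonal complement of the bias subspace;
    B is a (d, k) orthonormal basis of the bias subspace."""
    P = B @ B.T            # projection matrix onto the bias subspace
    return w - P @ w       # equivalently (I - P) @ w

rng = np.random.default_rng(1)
B, _ = np.linalg.qr(rng.normal(size=(50, 1)))   # toy 1-d bias subspace
w = rng.normal(size=50)
print(np.allclose(B.T @ neutralize(w, B), 0.0))  # True: no bias component
```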
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3.2",
|
| 43 |
+
"parent_section_id": "3",
|
| 44 |
+
"section_name": "Equalize",
|
| 45 |
+
"text": "Bolukbasi et al. (2016 ###reference_b2###) decompose words into two classes.\nThe neutral words which undergo neutralization as explained above, and the gendered words, some of which receive the equalizing treatment. Given a set of equality sets which we can see as a greater extension of the defining sets , i.e., , Bolukbasi et al. (2016 ###reference_b2###) then proceed to decompose each of the words into their gendered and neutral counterparts, setting their neutral component to a constant (the mean of the equality set) and the gendered component to its mean-centered projection into the gendered subspace:\nwhere we define the following quantities:\nthe \u201cnormalizer\u201d ensures the vector is of unit length.\nThis fact is left unexplained in the original work,\nbut Hall Maudslay et al. (2019 ###reference_b9###) provide\na proof in their appendix."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "4",
|
| 49 |
+
"parent_section_id": null,
|
| 50 |
+
"section_name": "Bias as a Non-Linear Subspace",
|
| 51 |
+
"text": "We generalize the framework presented in Bolukbasi et al. (2016 ###reference_b2###) and cast it to a non-linear setting by exploiting its relationship to PCA. Thus, the natural extension of Bolukbasi et al. (2016 ###reference_b2###) is to kernelize it analogously to Sch\u00f6lkopf et al. (1997 ###reference_b21###), which is the kernelized generalization of PCA.\nOur approach preserves all the desirable formal properties presented in the linear method of Bolukbasi et al. (2016 ###reference_b2###)."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "4.1",
|
| 55 |
+
"parent_section_id": "4",
|
| 56 |
+
"section_name": "Adapting the Design Matrix",
|
| 57 |
+
"text": "The idea behind our non-linear bias mitigation technique is based on kernel PCA Sch\u00f6lkopf et al. (1998 ###reference_b22###). In short, the idea is to map the original word representations to a higher-dimensional space via a function . We will consider cases\nwhere is a reproducing kernel Hilbert space (RKHS) with reproducing kernel where the notation refers to an inner product in the RKHS. Traditionally,\none calls the feature space and will use this terminology\nthroughout this work. Exploiting the reproducing kernel property, we may carry out\nBolukbasi et al. ###reference_b2###\u2019s (2016 ###reference_b2###) bias isolation technique and construct a non-linear analogue.\nWe start the development of bias mitigation technique in feature space with a modification of the design matrix presented in Eq. 5 ###reference_###.\nIn the RKHS setting the non-linear analogue is\nwhere we define\nAs one can see, this is a relatively straightforward mapping\nfrom the set of linear operations to non-linear ones."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "4.2",
|
| 61 |
+
"parent_section_id": "4",
|
| 62 |
+
"section_name": "Kernel PCA",
|
| 63 |
+
"text": "Using our modified design matrix, we can\ncast our non-linear bias mitigation technique\nas a form of kernel PCA. Specifically,\nwe form the matrix\nOur goal is to find the eigenvalues and their corresponding eigenfunctions by solving the eigenvalue problem\nComputing these directly from Eq. 16 ###reference_###\nis impossible since \u2019s dimension may\nbe prohibitively large or even infinite.\nHowever, Sch\u00f6lkopf et al. ###reference_b21### note that is spanned by .\nThis allows us to rewrite Eq. 16 ###reference_### as\nwhere there exist coefficients .\nNow, by substituting Eq. 17 ###reference_### and Eq. 16 ###reference_### into the respective terms in , Sch\u00f6lkopf et al. (1997 ###reference_b21###) derive a computationally feasible eigendecomposition problem. Specifically, they consider\nwhere .\nOnce all the vectors have been estimated the inner product between an eigenfunction and a point can be computed as\nA projection into the basis can then be carried out by applying the projection operator as follows:\nwhere is the number of principal components. Projection operator is analogous to the linear projection introduced in \u00a7 3.1 ###reference_###."
|
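A sketch of this eigendecomposition, assuming the Gram matrix has already been centered (see the next subsection); the RBF kernel and random data are illustrative:

```python
import numpy as np

def rbf(x, y, gamma=0.1):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def kpca_fit(X, kernel, k):
    """Eigendecompose the (assumed centered) Gram matrix and return the
    coefficient vectors, scaled so each eigenfunction has unit RKHS norm
    (||v||^2 = lambda * ||alpha||^2 when K alpha = lambda alpha)."""
    K = np.array([[kernel(a, b) for b in X] for a in X])
    lam, A = np.linalg.eigh(K)                    # ascending order
    lam, A = lam[::-1][:k], A[:, ::-1][:, :k]     # keep the top k
    return A / np.sqrt(np.maximum(lam, 1e-12))

def kpca_project(x, X, kernel, alphas):
    """Inner products of the eigenfunctions with phi(x), via the
    kernel expansion over the training points."""
    kx = np.array([kernel(xi, x) for xi in X])
    return alphas.T @ kx

X = np.random.default_rng(2).normal(size=(30, 5))
alphas = kpca_fit(X, rbf, k=2)
print(kpca_project(X[0], X, rbf, alphas))
```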
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "4.3",
|
| 67 |
+
"parent_section_id": "4",
|
| 68 |
+
"section_name": "Centering Kernel Matrix",
|
| 69 |
+
"text": "We can perform the required mean-centering operations on the design matrix by centering the kernel matrix in a similar fashion to Sch\u00f6lkopf et al. (1998 ###reference_b22###). For the case of equality sets of size 2, which is what Bolukbasi et al. ###reference_b2### use in practice, we realize that the centered design matrix reduces to pairwise differences:\nwhich leads to a very simple re-centering in terms of the Gram matrices:\nwhere\nwhere maps a defining set index to a tuple containing the word indices in the corresponding defining set and are projection operators which return the first or second elements of a tuple respectively. In simpler terms, Eq. 23b ###reference_.2### is creating two matrices: matrix which is constructed by looping over the definition sets and placing pairs within the same definition set as adjacent rows, then is constructed in the same way but the order of the adjacent pairs is swapped relative to .\nOnce we have this pairwise centered Gram matrix we can apply the eigendecomposition procedure described in Eq. 18 ###reference_### directly on . We note that carrying out this procedure using a linear kernel recovers the linear bias subspace from Bolukbasi et al. (2016 ###reference_b2###)."
|
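A sketch of the pairwise-difference Gram computation for defining sets of size two; the kernel and data are illustrative, and the constant scaling from centering is dropped since it only rescales eigenvalues:

```python
import numpy as np

def pairwise_centered_gram(pairs, kernel):
    """Gram matrix of the pairwise feature differences: entry (i, j)
    expands the inner product of (phi(x_i) - phi(y_i)) and
    (phi(x_j) - phi(y_j)) by bilinearity."""
    n = len(pairs)
    G = np.zeros((n, n))
    for i, (xi, yi) in enumerate(pairs):
        for j, (xj, yj) in enumerate(pairs):
            G[i, j] = (kernel(xi, xj) - kernel(xi, yj)
                       - kernel(yi, xj) + kernel(yi, yj))
    return G

linear = lambda a, b: float(a @ b)   # recovers the linear bias subspace
rng = np.random.default_rng(3)
pairs = [(rng.normal(size=5), rng.normal(size=5)) for _ in range(4)]
print(pairwise_centered_gram(pairs, linear))
```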
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "4.4",
|
| 73 |
+
"parent_section_id": "4",
|
| 74 |
+
"section_name": "Inner Product Correction (Neutralize)",
|
| 75 |
+
"text": "We now focus on neutralizing and equalizing the inner products in the RKHS rather than correcting the word representations directly.\nJust as in the linear case, we can decompose the representation of a word in the RKHS into biased and neutral components\nwhich provides a nonlinear equivalent for Eq. 10 ###reference_###:\nThe corrected inner product in the feature space for two neutralized words is given by\nApplying Eq. 19b ###reference_.2### and Eq. 20 ###reference_###\nwhere\nas derived by Sch\u00f6lkopf et al. (1998 ###reference_b22###).\n\u220e\nAn advantage of this approach is that it will not rely on errors due to the approximation of the pre-image.\nHowever, it will not give us back a set of bias-mitigated representations. Instead, it returns a bias mitigation metric, thus limiting the classifiers and regressors we could use.\nEq. 26 ###reference_### provides us with an approach to compute the inner product between words in feature space."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "4.5",
|
| 79 |
+
"parent_section_id": "4",
|
| 80 |
+
"section_name": "Inner Product Correction (Equalize)",
|
| 81 |
+
"text": "To equalize, we may naturally convert Eq. 11 ###reference_### to its feature-space equivalent.\nWe define an equalizing function\nwhere we define\nwhere maps an individual word index to its corresponding equality set index. Given vector dot products in the linear case follow the same geometric properties as inner products in the RKHS we can show that is unit norm follows directly from the proof for Proposition 1 in Hall Maudslay et al. (2019 ###reference_b9###) which can be found in Appendix A of Hall Maudslay et al. (2019 ###reference_b9###).\nFor any two vectors in the observed space and their corresponding representations in feature space the inner product is .\n\u220e\nFor a given neutral word and a word in an equality set the inner product is invariant across members in the equality set .\nwhere (i) follows from Proposition 5 ###reference_orem5### and (ii) follows from Proposition 4 ###reference_orem4###.\n\u220e\nAt this point, we have completely kernelized the approach in Bolukbasi et al. (2016 ###reference_b2###). Note that a linear kernel reduces to the method described in Bolukbasi et al. (2016 ###reference_b2###) as we would expect. We can see an initial disadvantage that equalizing via inner product correction has in comparison to Bolukbasi et al. (2016 ###reference_b2###) and that is that we now require switching in between three different inner products at test time depending on whether the words are neutral or not. To overcome this in practice, we neutralize all words and do not use the equalize correction, however, we present it for completeness.\n###figure_1###"
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "5",
|
| 85 |
+
"parent_section_id": null,
|
| 86 |
+
"section_name": "Computing the Pre-Image",
|
| 87 |
+
"text": "As mentioned in the previous section, a downfall of the metric correction approach is that it does not provide us representations that we can use in downstream tasks: the bias-mitigated representations only exist in feature space.\nThus, when it comes to transfer tasks such as classification we are limited to kernel methods, i.e., support vector machines).\nOne way to resolve this problem is by obtaining the pre-image of the corrected representations in the feature space.\nFinding the pre-image is a well-studied problem for kernel PCA Kwok and Tsang (2004 ###reference_b13###). The goal is to fine the pre-image mappings , and that compute (or approximate) the pre-images for and , respectively. In our case,\nwith the pre-image mapping, the neutralize step from Bolukbasi et al. (2016 ###reference_b2###) becomes\nIn general, we will not have access to \nso we fall back on the following approximation scheme."
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "6",
|
| 91 |
+
"parent_section_id": null,
|
| 92 |
+
"section_name": "Experiments and Results",
|
| 93 |
+
"text": "We carry out experiments across a range of benchmarks and statistical tests designed to quantify the underlying bias in word representations Gonen and Goldberg (2019 ###reference_b7###). Our experiments focus on quantifying both direct and indirect bias as defined in Gonen and Goldberg (2019 ###reference_b7###); Hall Maudslay et al. (2019 ###reference_b9###).\nFurthermore, we also carry out word similarity experiments using the Hill et al. (2015 ###reference_b10###) benchmark in order to assess that our new bias-mitigated spaces still preserve the original properties of word representations Mikolov et al. (2013 ###reference_b16###)."
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "6.1",
|
| 97 |
+
"parent_section_id": "6",
|
| 98 |
+
"section_name": "Experimental Setup",
|
| 99 |
+
"text": "Across all experiments we apply the neutralize metric correction step to all word representations, in contrast to Bolukbasi et al. (2016 ###reference_b2###) where the equalize step is applied to the equality sets and the neutralize step to a set of neutral words as determined in Bolukbasi et al. (2016 ###reference_b2###).\nWe show in Tab. 3 ###reference_### that applying the equalize step does not bring an enhancement over neutralizing all words. We varied kernel hyper-parameters using a grid search and found that they had little effect on performance, as a result we used default initialization strategies as suggested in Sch\u00f6lkopf et al. (1998 ###reference_b22###). Unless mentioned otherwise, all experiments use the inner product correction approach introduced in \u00a7 4.4 ###reference_###.\n###figure_2###"
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "6.2",
|
| 103 |
+
"parent_section_id": "6",
|
| 104 |
+
"section_name": "Kernel Variations",
|
| 105 |
+
"text": "The main kernels used throughout experiments are specified in Tab. 1 ###reference_###. We also explored the following compound kernels:\n\n\n(i) convex combinations of the Laplace, radial basis function (RBF), cosine and sigmoid kernels;\n\n(ii) convex combinations of cosine similarity, RBF, and sigmoid kernels;\n\n(iii) convex combinations of RBF and sigmoid kernels;\n\n(iv) polynomial kernels up to 4 degree.\n\n\nWe only report the results on the most fundamental kernels out of the explored kernels."
|
| 106 |
+
},
|
| 107 |
+
{
|
| 108 |
+
"section_id": "6.3",
|
| 109 |
+
"parent_section_id": "6",
|
| 110 |
+
"section_name": "Direct Bias: WEAT",
|
| 111 |
+
"text": "The Word Embedding Association Test Caliskan et al. (WEAT; 2017 ###reference_b3###) is a statistical test analogous to the implicit association test (IAT) for quantifying human biases in textual data (Greenwald and Banaji, 1995 ###reference_b8###).\nWEAT computes the difference in relative cosine similarity between two sets of target words and (e.g., careers and family) and two sets of attribute words and (e.g., male names and female names). Formally, this quantity is Cohen\u2019s -measure Cohen (1992 ###reference_b4###) also known as the effect size: The higher the measure, the more biased the representations. To quantify the significance of the estimated , Caliskan et al. (2017 ###reference_b3###) define the null hypothesis that there is no difference between the two sets of target words and the sets of attribute words in terms of their relative similarities (i.e., ). Using this null hypothesis, Caliskan et al. (2017 ###reference_b3###) then carry out a one-sided hypothesis test where failure to reject the null-hypothesis means that the degree of bias measured by is not significant.\nWe obtain WEAT scores across different kernels (Tab. 2 ###reference_###). We observe that the differences between the linear and the non-linear kernels is small and, in most cases, the linear kernel has a smaller value for the effect size indicating a lesser degree of bias in the corrected space. Overall, we conclude that the non-linear kernels do not reduce the linear bias as measured by WEAT further than the linear kernels. We also experiment with polynomial kernels and obtain similar results, which can be found in Tab. 7 ###reference_### of App. A ###reference_###."
|
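A sketch of the effect-size computation described above, on random stand-in vectors; the permutation test used for the p-value is omitted:

```python
import numpy as np

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def weat_effect_size(X, Y, A, B):
    """Effect size d over association scores
    s(w) = mean_a cos(w, a) - mean_b cos(w, b)."""
    s = lambda w: np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])
    sX, sY = [s(x) for x in X], [s(y) for y in Y]
    return (np.mean(sX) - np.mean(sY)) / np.std(sX + sY, ddof=1)

rng = np.random.default_rng(4)
group = lambda n: [rng.normal(size=50) for _ in range(n)]
print(weat_effect_size(group(8), group(8), group(8), group(8)))  # ~0
```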
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"section_id": "6.4",
|
| 115 |
+
"parent_section_id": "6",
|
| 116 |
+
"section_name": "Professions Gonen and Goldberg (2019)",
|
| 117 |
+
"text": "We consider the professions dataset introduced by Bolukbasi et al. (2016 ###reference_b2###) and apply the benchmark defined in Gonen and Goldberg (2019 ###reference_b7###). We find the neighbors (100 nearest neighbors) of each word using the corrected cosine similarity and count the number of male neighbors. We then report the Pearson correlation coefficient between the number of male neighbors for each word and the original bias of that word. The original bias of a word vector is given by the cosine similarity in the original word representation space.\nWe can observe from the results in Tab. 4 ###reference_### that the non-linear kernels yield only marginally different results, which in most cases seem to be slightly worse, i.e., their induced space exhibits marginally higher correlations with the original biased vector space.\nrepresentations\nOriginal\nPCA\nKPCA(rbf)\nKPCA(sig)\nKPCA(lap)\n\n\n\nWord2Vec\n0.740\n0.675\n0.678\n0.675\n0.708\n\nGlove\n0.758\n0.675\n0.681\n0.680\n0.715\nrepresentations\nOriginal\nPCA\nKPCA(rbf)\nKPCA(sig)\nKPCA(lap)\n\n\n\nWord2Vec\n0.974\n0.702\n0.716\n0.715\n0.720\n\nGlove\n0.978\n0.757\n0.754\n0.753\n0.914"
|
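A sketch of this benchmark; the vectors and the `is_male` labels are hypothetical stand-ins, and plain cosine similarity is used where the corrected similarity would be in practice:

```python
import numpy as np
from scipy.stats import pearsonr

def profession_bias_correlation(prof_vecs, vocab_vecs, is_male, orig_bias, k=100):
    """Count male words among each profession's k nearest neighbours
    (cosine) and correlate the counts with the original bias scores."""
    V = vocab_vecs / np.linalg.norm(vocab_vecs, axis=1, keepdims=True)
    counts = []
    for p in prof_vecs:
        sims = V @ (p / np.linalg.norm(p))
        neighbours = np.argsort(-sims)[:k]
        counts.append(int(np.sum(is_male[neighbours])))
    r, _ = pearsonr(counts, orig_bias)
    return r

rng = np.random.default_rng(5)
vocab_vecs = rng.normal(size=(2000, 50))
is_male = rng.random(2000) < 0.5          # hypothetical gender labels
print(profession_bias_correlation(rng.normal(size=(30, 50)),
                                  vocab_vecs, is_male, rng.normal(size=30)))
```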
| 118 |
+
},
|
| 119 |
+
{
|
| 120 |
+
"section_id": "6.5",
|
| 121 |
+
"parent_section_id": "6",
|
| 122 |
+
"section_name": "Indirect Bias",
|
| 123 |
+
"text": "Following Gonen and Goldberg (2019 ###reference_b7###), we build a balanced training set of male and female words using the 5000 most biased words according to the bias in the original representations as described in \u00a7 6.4 ###reference_###, and then train an RBF-kernel support vector machine (SVM) classifier Pedregosa et al. (2011 ###reference_b17###) on a random sample of 1000 (training set) of them to predict the gender, and evaluate its generalization on the remaining 4000 (test set). We can perform classification in our corrected RKHS with any SVM kernel\n that can be written in the forms 111Stationary kernels are sometimes written in the form or , i.e., or since we can use the kernel trick in our corrected RKHS to compute the inputs to our SVM kernel, resulting in\nIt is clear that the RBF kernel is an example of a kernel that follows Eq. 35 ###reference_###.\nWe can see that the bias removal induced by non-linear kernels results in a slightly higher classification accuracy (shown in Tab. 5 ###reference_###) of gendered words for GoogleNews Word2Vec representations Mikolov et al. (2013 ###reference_b16###) and a slightly lower classification accuracy for GloVe representations Pennington et al. (2014 ###reference_b18###) (with the exception of the Laplace kernel which has a very high classification accuracy).\nOverall for the RBF and the sigmoid kernels there is no improvement in comparison to the linear kernel (PCA), the Laplace kernel seems to have notably worse results than the others, still being able to classify gendered words at a high accuracy of 91.4% for GloVe representations."
|
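A sketch of classification with a precomputed RBF kernel built from corrected inner products in the Eq. 35 style; here a plain linear kernel stands in for the corrected inner product, and the data are random:

```python
import numpy as np
from sklearn.svm import SVC

def corrected_rbf_gram(K_cross, diag_rows, diag_cols, gamma=0.1):
    """RBF kernel on corrected feature-space distances via
    ||x - z||^2 = <x, x> - 2 <x, z> + <z, z>.
    K_cross[i, j] is the corrected inner product of row point i and
    column point j; diag_* hold the corrected self inner products."""
    d2 = diag_rows[:, None] - 2.0 * K_cross + diag_cols[None, :]
    return np.exp(-gamma * d2)

rng = np.random.default_rng(6)
Xtr, Xte = rng.normal(size=(100, 20)), rng.normal(size=(40, 20))
ytr, yte = (Xtr[:, 0] > 0).astype(int), (Xte[:, 0] > 0).astype(int)

# A plain linear kernel stands in for the corrected inner product here.
Ktr, Kte = Xtr @ Xtr.T, Xte @ Xtr.T
diag_tr = np.diag(Ktr)
diag_te = np.einsum("ij,ij->i", Xte, Xte)

clf = SVC(kernel="precomputed")
clf.fit(corrected_rbf_gram(Ktr, diag_tr, diag_tr), ytr)
print(clf.score(corrected_rbf_gram(Kte, diag_te, diag_tr), yte))
```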
| 124 |
+
},
|
| 125 |
+
{
|
| 126 |
+
"section_id": "6.6",
|
| 127 |
+
"parent_section_id": "6",
|
| 128 |
+
"section_name": "Word Similarity: SimLex-999",
|
| 129 |
+
"text": "The quality of a word vector space is traditionally measured by how well it replicates human judgments of word similarity. We use the SimLex-999 benchmark by Hill et al. (2015 ###reference_b10###) which provides a ground-truth measure of similarity produced by 500 native English speakers. Similarity scores by our method are computed using Spearman correlation between representation and human judgments are reported. We can observe that the metric corrections only slightly change the Spearman correlation results on SimLex-999 (Tab. 6 ###reference_###) from the original representation space. We can thus conclude that the representation quality is mostly preserved."
|
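A sketch of the evaluation protocol; random vectors and scores stand in for SimLex-999 pairs, and `sim` would be the corrected cosine from Section 4.4 in practice:

```python
import numpy as np
from scipy.stats import spearmanr

def simlex_eval(pairs, human_scores, sim):
    """Spearman correlation between model similarity scores and
    human judgments over word-vector pairs."""
    model_scores = [sim(a, b) for a, b in pairs]
    rho, _ = spearmanr(model_scores, human_scores)
    return rho

cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
rng = np.random.default_rng(7)
pairs = [(rng.normal(size=50), rng.normal(size=50)) for _ in range(999)]
print(simlex_eval(pairs, rng.random(999), cos))  # ~0 for random data
```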
| 130 |
+
},
|
| 131 |
+
{
|
| 132 |
+
"section_id": "7",
|
| 133 |
+
"parent_section_id": null,
|
| 134 |
+
"section_name": "Conclusion",
|
| 135 |
+
"text": "We offer a non-linear extension to the method presented in Bolukbasi et al. (2016 ###reference_b2###) by connecting its bias space construction to PCA and subsequently applying kernel PCA. We contend our extension is natural in the sense that it reduces to\nthe method of Bolukbasi et al. (2016 ###reference_b2###) in the special\ncase when we employ a linear kernel and in the non-linear case it preserves all the desired linear properties in the feature space.\nThis allows us to provide equivalent constructions of the neutralize and equalize steps presented.\nrepresentations\nOriginal\nPCA\nKPCA(rbf)\nKPCA(sig)\nKPCA(lap)\n\n\n\nWord2Vec\n0.121\n0.119\n0.118\n0.118\n0.118\n\nGlove\n0.302\n0.298\n0.298\n0.298\n0.305\nWe compare the linear bias mitigation technique to our new kernelized non-linear version across a suite of tasks and datasets.\nWe observe that our non-linear extensions of Bolukbasi et al. (2016 ###reference_b2###) show no notable performance differences across a set of benchmarks designed to quantify gender bias in word representations. Furthermore, the results in Tab. 7 ###reference_###(App. A ###reference_###) show that gradually increasing the degree of non-linearity has again no significant change in performance for the WEAT Caliskan et al. (2017 ###reference_b3###) benchmark. Thus, we provide empirical evidence for the linear subspace hypothesis; our results suggest representing gender bias as a linear subspace is a suitable assumption. We would like to highlight that our results are specific to our proposed kernelized extensions and does not imply that all non-linear variants of Bolukbasi et al. (2016 ###reference_b2###) will yield similar results. There may very well exist a non-linear technique that works better, but we were\nunable to find one in this work."
|
| 136 |
+
},
|
| 137 |
+
{
|
| 138 |
+
"section_id": "8",
|
| 139 |
+
"parent_section_id": null,
|
| 140 |
+
"section_name": "Acknowledgements",
|
| 141 |
+
"text": "We would like to thank Jennifer C. White for amending several typographical errors in final version of this manuscript."
|
| 142 |
+
}
|
| 143 |
+
],
|
| 144 |
+
"appendix": [
|
| 145 |
+
{
|
| 146 |
+
"section_id": "Appendix 1",
|
| 147 |
+
"parent_section_id": null,
|
| 148 |
+
"section_name": "Appendix A Polynomial Kernel Results",
|
| 149 |
+
"text": "For experimental completeness, we provide direct bias experiments on WEAT using a range of polynomial kernels.\nThe results are displayed in Tab. 7 ###reference_###.\nThe results for the polynomial kernels suggest the same conclusions we arrived at in the main text, i.e., a linear kernel is generally enough."
|
| 150 |
+
}
|
| 151 |
+
],
|
| 152 |
+
"tables": {
|
| 153 |
+
"1": {
|
| 154 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T1.7\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S5.T1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.1.2.1\">kernel</span></th>\n<th class=\"ltx_td ltx_nopad_r ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T1.1.1.1\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T1.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T1.2.2.2\">Cosine</th>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_t\" id=\"S5.T1.2.2.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T1.3.3.2\">RBF Kernel</th>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T1.3.3.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T1.4.4.2\">Sigmoid Kernel</th>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T1.4.4.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T1.5.5.2\">Polynomial Kernel</th>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T1.5.5.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T1.6.6.2\">Laplace Kernel</th>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T1.6.6.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.7.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S5.T1.7.7.2\">Convex Combination</th>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_bb\" id=\"S5.T1.7.7.1\"></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Different kernels used throughout experiments.</figcaption>\n</figure>",
|
| 155 |
+
"capture": "Table 1: Different kernels used throughout experiments."
|
| 156 |
+
},
|
| 157 |
+
"2": {
|
| 158 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S5.T2.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T2.1.1.1\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_tt\" id=\"S5.T2.1.1.1.1\" rowspan=\"2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T2.1.1.1.1.1\">\n<span class=\"ltx_p\" id=\"S5.T2.1.1.1.1.1.1\" style=\"width:71.1pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.1.1.1.1.1.1\">Targets</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" colspan=\"2\" id=\"S5.T2.1.1.1.2\">Original</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" colspan=\"2\" id=\"S5.T2.1.1.1.3\">PCA</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" colspan=\"2\" id=\"S5.T2.1.1.1.4\">KPCA (rbf)</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" colspan=\"2\" id=\"S5.T2.1.1.1.5\">KPCA (sig)</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" colspan=\"2\" id=\"S5.T2.1.1.1.6\">KPCPA (lap)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.2.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.2.2.1\">d</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.2.2.2\">p</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.2.2.3\">d</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.2.2.4\">p</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.2.2.5\">d</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.2.2.6\">p</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.2.2.7\">d</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.2.2.8\">p</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.2.2.9\">d</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_t\" id=\"S5.T2.1.2.2.10\">p</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.3.3\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_t\" colspan=\"11\" id=\"S5.T2.1.3.3.1\">\u00a0\u00a0 \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Google News Word2Vec <cite class=\"ltx_cite ltx_citemacro_cite\">Mikolov et\u00a0al. 
(<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2009.09435v4#bib.bib16\" title=\"\">2013</a>)</cite>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.4.4\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_t\" id=\"S5.T2.1.4.4.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T2.1.4.4.1.1\">\n<span class=\"ltx_p\" id=\"S5.T2.1.4.4.1.1.1\" style=\"width:71.1pt;\">Career , Family</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.4.4.2\">1.622</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.4.4.3\">0.000</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.4.4.4\">1.327</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.4.4.5\">0.001</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.4.4.6\">1.321</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.4.4.7\">0.005</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.4.4.8\">1.319</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.4.4.9\">0.006</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.4.4.10\">1.311</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_t\" id=\"S5.T2.1.4.4.11\">0.002</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.5.5\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S5.T2.1.5.5.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T2.1.5.5.1.1\">\n<span class=\"ltx_p\" id=\"S5.T2.1.5.5.1.1.1\" style=\"width:71.1pt;\">Math, Arts</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.5.5.2\">0.998</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.5.5.3\">0.017</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.5.5.4\">-0.540</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.5.5.5\">0.859</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.5.5.6\">-0.755</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.5.5.7\">0.922</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.5.5.8\">-0.754</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.5.5.9\">0.933</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.5.5.10\">-0.024</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T2.1.5.5.11\">0.507</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.6.6\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S5.T2.1.6.6.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T2.1.6.6.1.1\">\n<span class=\"ltx_p\" id=\"S5.T2.1.6.6.1.1.1\" style=\"width:71.1pt;\">Science , Arts</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.6.6.2\">1.159</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.6.6.3\">0.005</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.6.6.4\">0.288</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.6.6.5\">0.281</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.6.6.6\">0.271</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.6.6.7\">0.307</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.6.6.8\">0.269</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.6.6.9\">0.283</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.6.6.10\">1.110</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T2.1.6.6.11\">0.009</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.7.7\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_t\" colspan=\"11\" id=\"S5.T2.1.7.7.1\">\u00a0\u00a0 
\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 GloVe <cite class=\"ltx_cite ltx_citemacro_cite\">Pennington et\u00a0al. (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2009.09435v4#bib.bib18\" title=\"\">2014</a>)</cite>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.8.8\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_t\" id=\"S5.T2.1.8.8.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T2.1.8.8.1.1\">\n<span class=\"ltx_p\" id=\"S5.T2.1.8.8.1.1.1\" style=\"width:71.1pt;\">Career , Family</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.8.8.2\">1.749</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.8.8.3\">0.000</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.8.8.4\">1.160</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.8.8.5\">0.007</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.8.8.6\">1.166</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.8.8.7\">0.006</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.8.8.8\">1.165</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.8.8.9\">0.01</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.8.8.10\">1.443</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_t\" id=\"S5.T2.1.8.8.11\">0.000</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.9.9\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S5.T2.1.9.9.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T2.1.9.9.1.1\">\n<span class=\"ltx_p\" id=\"S5.T2.1.9.9.1.1.1\" style=\"width:71.1pt;\">Math, Arts</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.9.9.2\">1.162</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.9.9.3\">0.007</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.9.9.4\">0.144</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.9.9.5\">0.389</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.9.9.6\">0.096</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.9.9.7\">0.437</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.9.9.8\">0.095</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.9.9.9\">0.411</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.9.9.10\">0.999</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T2.1.9.9.11\">0.015</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.10.10\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_bb\" id=\"S5.T2.1.10.10.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T2.1.10.10.1.1\">\n<span class=\"ltx_p\" id=\"S5.T2.1.10.10.1.1.1\" style=\"width:71.1pt;\">Science , Arts</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T2.1.10.10.2\">1.281</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T2.1.10.10.3\">0.008</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.1.10.10.4\">-1.074</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T2.1.10.10.5\">0.985</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T2.1.10.10.6\">-1.114</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T2.1.10.10.7\">0.995</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T2.1.10.10.8\">-1.112</td>\n<td 
class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T2.1.10.10.9\">0.993</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T2.1.10.10.10\">-0.522</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_bb\" id=\"S5.T2.1.10.10.11\">0.839</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span> WEAT results using GloVe and Google News word representations.</figcaption>\n</figure>",
|
| 159 |
+
"capture": "Table 2: WEAT results using GloVe and Google News word representations."
|
| 160 |
+
},
|
| 161 |
+
"3": {
|
| 162 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.T3\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S5.T3.1\" style=\"width:226.7pt;height:180pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(0.0pt,0.0pt) scale(1,1) ;\">\n<p class=\"ltx_p\" id=\"S5.T3.1.1\"><span class=\"ltx_text\" id=\"S5.T3.1.1.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S5.T3.1.1.1.1\">\n<span class=\"ltx_tbody\">\n<span class=\"ltx_tr\" id=\"S5.T3.1.1.1.1.1.1\">\n<span class=\"ltx_td ltx_align_left ltx_border_tt ltx_rowspan ltx_rowspan_2\" id=\"S5.T3.1.1.1.1.1.1.1\"><span class=\"ltx_text\" id=\"S5.T3.1.1.1.1.1.1.1.1\">Dataset</span></span>\n<span class=\"ltx_td ltx_align_left ltx_border_tt ltx_colspan ltx_colspan_2\" id=\"S5.T3.1.1.1.1.1.1.2\"><cite class=\"ltx_cite ltx_citemacro_citet\">Bolukbasi et\u00a0al. (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2009.09435v4#bib.bib2\" title=\"\">2016</a>)</cite></span>\n<span class=\"ltx_td ltx_align_left ltx_border_tt ltx_colspan ltx_colspan_2\" id=\"S5.T3.1.1.1.1.1.1.3\">PCA</span></span>\n<span class=\"ltx_tr\" id=\"S5.T3.1.1.1.1.2.2\">\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T3.1.1.1.1.2.2.1\">d</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T3.1.1.1.1.2.2.2\">p</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.1.1.2.2.3\">d</span>\n<span class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_t\" id=\"S5.T3.1.1.1.1.2.2.4\">p</span></span>\n<span class=\"ltx_tr\" id=\"S5.T3.1.1.1.1.3.3\">\n<span class=\"ltx_td ltx_align_left ltx_border_t ltx_colspan ltx_colspan_5\" id=\"S5.T3.1.1.1.1.3.3.1\">Google News Word2Vec <cite class=\"ltx_cite ltx_citemacro_cite\">Mikolov et\u00a0al. (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2009.09435v4#bib.bib16\" title=\"\">2013</a>)</cite></span></span>\n<span class=\"ltx_tr\" id=\"S5.T3.1.1.1.1.4.4\">\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T3.1.1.1.1.4.4.1\">Career , Family</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T3.1.1.1.1.4.4.2\">1.299</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T3.1.1.1.1.4.4.3\">0.003</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T3.1.1.1.1.4.4.4\">1.327</span>\n<span class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_t\" id=\"S5.T3.1.1.1.1.4.4.5\">0.001</span></span>\n<span class=\"ltx_tr\" id=\"S5.T3.1.1.1.1.5.5\">\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T3.1.1.1.1.5.5.1\">Math, Arts</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T3.1.1.1.1.5.5.2\">-1.173</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T3.1.1.1.1.5.5.3\">0.995</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T3.1.1.1.1.5.5.4\">-0.540</span>\n<span class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T3.1.1.1.1.5.5.5\">0.859</span></span>\n<span class=\"ltx_tr\" id=\"S5.T3.1.1.1.1.6.6\">\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T3.1.1.1.1.6.6.1\">Science , Arts</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T3.1.1.1.1.6.6.2\">-0.509</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T3.1.1.1.1.6.6.3\">0.832</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T3.1.1.1.1.6.6.4\">0.288</span>\n<span class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T3.1.1.1.1.6.6.5\">0.281</span></span>\n<span class=\"ltx_tr\" id=\"S5.T3.1.1.1.1.7.7\">\n<span class=\"ltx_td ltx_align_left ltx_border_t ltx_colspan ltx_colspan_5\" id=\"S5.T3.1.1.1.1.7.7.1\">\u00a0\u00a0 
\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 GloVe <cite class=\"ltx_cite ltx_citemacro_cite\">Pennington et\u00a0al. (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2009.09435v4#bib.bib18\" title=\"\">2014</a>)</cite></span></span>\n<span class=\"ltx_tr\" id=\"S5.T3.1.1.1.1.8.8\">\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T3.1.1.1.1.8.8.1\">Career , Family</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T3.1.1.1.1.8.8.2\">1.160</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T3.1.1.1.1.8.8.3\">0.000</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.1.1.8.8.4\">1.160</span>\n<span class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_t\" id=\"S5.T3.1.1.1.1.8.8.5\">0.007</span></span>\n<span class=\"ltx_tr\" id=\"S5.T3.1.1.1.1.9.9\">\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T3.1.1.1.1.9.9.1\">Math, Arts</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T3.1.1.1.1.9.9.2\">-0.632</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T3.1.1.1.1.9.9.3\">0.887</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.1.1.1.9.9.4\">0.144</span>\n<span class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T3.1.1.1.1.9.9.5\">0.389</span></span>\n<span class=\"ltx_tr\" id=\"S5.T3.1.1.1.1.10.10\">\n<span class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T3.1.1.1.1.10.10.1\">Science , Arts</span>\n<span class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T3.1.1.1.1.10.10.2\">0.937</span>\n<span class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T3.1.1.1.1.10.10.3\">0.937</span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T3.1.1.1.1.10.10.4\">-1.074</span>\n<span class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_bb\" id=\"S5.T3.1.1.1.1.10.10.5\">0.985</span></span>\n</span>\n</span></span></p>\n</span></div>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Effect of the equalize step</figcaption>\n</figure>",
|
| 163 |
+
"capture": "Table 3: Effect of the equalize step"
|
| 164 |
+
},
|
| 165 |
+
"4": {
|
| 166 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S6.T4\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S6.T4.7\" style=\"width:343.3pt;height:54pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(0.0pt,0.0pt) scale(1,1) ;\">\n<p class=\"ltx_p\" id=\"S6.T4.7.1\"><span class=\"ltx_text\" id=\"S6.T4.7.1.1\">\n<span class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S6.T4.7.1.1.1\">\n<span class=\"ltx_thead\">\n<span class=\"ltx_tr\" id=\"S6.T4.7.1.1.1.1.1\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S6.T4.7.1.1.1.1.1.1\">representations</span>\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S6.T4.7.1.1.1.1.1.2\">Original</span>\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S6.T4.7.1.1.1.1.1.3\">PCA</span>\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S6.T4.7.1.1.1.1.1.4\">KPCA(rbf)</span>\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S6.T4.7.1.1.1.1.1.5\">KPCA(sig)</span>\n<span class=\"ltx_td ltx_nopad_r ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S6.T4.7.1.1.1.1.1.6\">KPCA(lap)</span></span>\n</span>\n<span class=\"ltx_tbody\">\n<span class=\"ltx_tr\" id=\"S6.T4.7.1.1.1.2.1\">\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S6.T4.7.1.1.1.2.1.1\">Word2Vec</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S6.T4.7.1.1.1.2.1.2\">0.740</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S6.T4.7.1.1.1.2.1.3\">0.675</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S6.T4.7.1.1.1.2.1.4\">0.678</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S6.T4.7.1.1.1.2.1.5\">0.675</span>\n<span class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_t\" id=\"S6.T4.7.1.1.1.2.1.6\">0.708</span></span>\n<span class=\"ltx_tr\" id=\"S6.T4.7.1.1.1.3.2\">\n<span class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S6.T4.7.1.1.1.3.2.1\">Glove</span>\n<span class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S6.T4.7.1.1.1.3.2.2\">0.758</span>\n<span class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S6.T4.7.1.1.1.3.2.3\">0.675</span>\n<span class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S6.T4.7.1.1.1.3.2.4\">0.681</span>\n<span class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S6.T4.7.1.1.1.3.2.5\">0.680</span>\n<span class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_bb\" id=\"S6.T4.7.1.1.1.3.2.6\">0.715</span></span>\n</span>\n</span></span></p>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 4: </span>Pearson correlation coefficients of professions analogy task. All observed at significant at . Indeed,\nall have -values .</figcaption>\n</figure>",
|
| 167 |
+
"capture": "Table 4: Pearson correlation coefficients of professions analogy task. All observed at significant at . Indeed,\nall have -values ."
|
| 168 |
+
},
|
| 169 |
+
"5": {
|
| 170 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S6.T5\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S6.T5.1\" style=\"width:343.3pt;height:54pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(0.0pt,0.0pt) scale(1,1) ;\">\n<p class=\"ltx_p\" id=\"S6.T5.1.1\"><span class=\"ltx_text\" id=\"S6.T5.1.1.1\">\n<span class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S6.T5.1.1.1.1\">\n<span class=\"ltx_thead\">\n<span class=\"ltx_tr\" id=\"S6.T5.1.1.1.1.1.1\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S6.T5.1.1.1.1.1.1.1\">representations</span>\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S6.T5.1.1.1.1.1.1.2\">Original</span>\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S6.T5.1.1.1.1.1.1.3\">PCA</span>\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S6.T5.1.1.1.1.1.1.4\">KPCA(rbf)</span>\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S6.T5.1.1.1.1.1.1.5\">KPCA(sig)</span>\n<span class=\"ltx_td ltx_nopad_r ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S6.T5.1.1.1.1.1.1.6\">KPCA(lap)</span></span>\n</span>\n<span class=\"ltx_tbody\">\n<span class=\"ltx_tr\" id=\"S6.T5.1.1.1.1.2.1\">\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S6.T5.1.1.1.1.2.1.1\">Word2Vec</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S6.T5.1.1.1.1.2.1.2\">0.974</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S6.T5.1.1.1.1.2.1.3\">0.702</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S6.T5.1.1.1.1.2.1.4\">0.716</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S6.T5.1.1.1.1.2.1.5\">0.715</span>\n<span class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_t\" id=\"S6.T5.1.1.1.1.2.1.6\">0.720</span></span>\n<span class=\"ltx_tr\" id=\"S6.T5.1.1.1.1.3.2\">\n<span class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S6.T5.1.1.1.1.3.2.1\">Glove</span>\n<span class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S6.T5.1.1.1.1.3.2.2\">0.978</span>\n<span class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S6.T5.1.1.1.1.3.2.3\">0.757</span>\n<span class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S6.T5.1.1.1.1.3.2.4\">0.754</span>\n<span class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S6.T5.1.1.1.1.3.2.5\">0.753</span>\n<span class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_bb\" id=\"S6.T5.1.1.1.1.3.2.6\">0.914</span></span>\n</span>\n</span></span></p>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 5: </span>Classification accuracy results on male versus female terms.</figcaption>\n</figure>",
|
| 171 |
+
"capture": "Table 5: Classification accuracy results on male versus female terms."
|
| 172 |
+
},
|
| 173 |
+
"6": {
|
| 174 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S7.T6\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S7.T6.5\" style=\"width:343.3pt;height:54pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(0.0pt,0.0pt) scale(1,1) ;\">\n<p class=\"ltx_p\" id=\"S7.T6.5.1\"><span class=\"ltx_text\" id=\"S7.T6.5.1.1\">\n<span class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S7.T6.5.1.1.1\">\n<span class=\"ltx_thead\">\n<span class=\"ltx_tr\" id=\"S7.T6.5.1.1.1.1.1\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S7.T6.5.1.1.1.1.1.1\">representations</span>\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S7.T6.5.1.1.1.1.1.2\">Original</span>\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S7.T6.5.1.1.1.1.1.3\">PCA</span>\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S7.T6.5.1.1.1.1.1.4\">KPCA(rbf)</span>\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S7.T6.5.1.1.1.1.1.5\">KPCA(sig)</span>\n<span class=\"ltx_td ltx_nopad_r ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S7.T6.5.1.1.1.1.1.6\">KPCA(lap)</span></span>\n</span>\n<span class=\"ltx_tbody\">\n<span class=\"ltx_tr\" id=\"S7.T6.5.1.1.1.2.1\">\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S7.T6.5.1.1.1.2.1.1\">Word2Vec</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S7.T6.5.1.1.1.2.1.2\">0.121</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S7.T6.5.1.1.1.2.1.3\">0.119</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S7.T6.5.1.1.1.2.1.4\">0.118</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S7.T6.5.1.1.1.2.1.5\">0.118</span>\n<span class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_t\" id=\"S7.T6.5.1.1.1.2.1.6\">0.118</span></span>\n<span class=\"ltx_tr\" id=\"S7.T6.5.1.1.1.3.2\">\n<span class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S7.T6.5.1.1.1.3.2.1\">Glove</span>\n<span class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S7.T6.5.1.1.1.3.2.2\">0.302</span>\n<span class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S7.T6.5.1.1.1.3.2.3\">0.298</span>\n<span class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S7.T6.5.1.1.1.3.2.4\">0.298</span>\n<span class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S7.T6.5.1.1.1.3.2.5\">0.298</span>\n<span class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_bb\" id=\"S7.T6.5.1.1.1.3.2.6\">0.305</span></span>\n</span>\n</span></span></p>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 6: </span>Correlation on SimLex-999 using GoogleNews Word2Vec and GloVe representations. The significance level is with .</figcaption>\n</figure>",
|
| 175 |
+
"capture": "Table 6: Correlation on SimLex-999 using GoogleNews Word2Vec and GloVe representations. The significance level is with ."
|
| 176 |
+
},
|
| 177 |
+
"7": {
|
| 178 |
+
"table_html": "<figure class=\"ltx_table\" id=\"A0.T7\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"A0.T7.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A0.T7.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"A0.T7.1.1.1.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"A0.T7.1.1.1.1.1\">Targets</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"A0.T7.1.1.1.2\">Original</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"A0.T7.1.1.1.3\">PCA</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"A0.T7.1.1.1.4\">KPCA (poly-2)</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"A0.T7.1.1.1.5\">KPCA (poly-3)</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"A0.T7.1.1.1.6\">KPCPA (poly-4)</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"A0.T7.1.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"A0.T7.1.2.2.1\">d</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"A0.T7.1.2.2.2\">p</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"A0.T7.1.2.2.3\">d</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"A0.T7.1.2.2.4\">p</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"A0.T7.1.2.2.5\">d</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"A0.T7.1.2.2.6\">p</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"A0.T7.1.2.2.7\">d</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"A0.T7.1.2.2.8\">p</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"A0.T7.1.2.2.9\">d</th>\n<th class=\"ltx_td ltx_nopad_r ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"A0.T7.1.2.2.10\">p</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"A0.T7.1.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" colspan=\"11\" id=\"A0.T7.1.3.3.1\">\u00a0\u00a0 \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Google News Word2Vec <cite class=\"ltx_cite ltx_citemacro_cite\">Mikolov et\u00a0al. 
(<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2009.09435v4#bib.bib16\" title=\"\">2013</a>)</cite>\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A0.T7.1.4.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A0.T7.1.4.1.1\">Career , Family</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A0.T7.1.4.1.2\">1.622</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A0.T7.1.4.1.3\">0.000</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A0.T7.1.4.1.4\">1.327</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A0.T7.1.4.1.5\">0.001</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A0.T7.1.4.1.6\">1.320</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A0.T7.1.4.1.7\">0.004</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A0.T7.1.4.1.8\">1.321</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A0.T7.1.4.1.9\">0.001</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A0.T7.1.4.1.10\">1.312</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_t\" id=\"A0.T7.1.4.1.11\">0.002</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A0.T7.1.5.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"A0.T7.1.5.2.1\">Math, Arts</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A0.T7.1.5.2.2\">0.998</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A0.T7.1.5.2.3\">0.017</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A0.T7.1.5.2.4\">-0.540</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A0.T7.1.5.2.5\">0.859</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A0.T7.1.5.2.6\">-0.755</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A0.T7.1.5.2.7\">0.927</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A0.T7.1.5.2.8\">-0.755</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A0.T7.1.5.2.9\">0.933</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A0.T7.1.5.2.10\">-0.754</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A0.T7.1.5.2.11\">0.932</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A0.T7.1.6.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"A0.T7.1.6.3.1\">Science , Arts</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A0.T7.1.6.3.2\">1.159</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A0.T7.1.6.3.3\">0.005</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A0.T7.1.6.3.4\">0.288</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A0.T7.1.6.3.5\">0.281</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A0.T7.1.6.3.6\">0.271</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A0.T7.1.6.3.7\">0.312</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A0.T7.1.6.3.8\">0.272</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A0.T7.1.6.3.9\">0.305</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A0.T7.1.6.3.10\">0.272</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A0.T7.1.6.3.11\">305</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A0.T7.1.7.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" colspan=\"11\" id=\"A0.T7.1.7.4.1\">\u00a0\u00a0 \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 GloVe <cite class=\"ltx_cite ltx_citemacro_cite\">Pennington et\u00a0al. 
(<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2009.09435v4#bib.bib18\" title=\"\">2014</a>)</cite>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A0.T7.1.8.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A0.T7.1.8.5.1\">Career , Family</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A0.T7.1.8.5.2\">1.749</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A0.T7.1.8.5.3\">0.000</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A0.T7.1.8.5.4\">1.160</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A0.T7.1.8.5.5\">0.007</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A0.T7.1.8.5.6\">1.166</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A0.T7.1.8.5.7\">0.000</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A0.T7.1.8.5.8\">1.166</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A0.T7.1.8.5.9\">0.009</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A0.T7.1.8.5.10\">1.667</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_t\" id=\"A0.T7.1.8.5.11\">0.005</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A0.T7.1.9.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"A0.T7.1.9.6.1\">Math, Arts</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A0.T7.1.9.6.2\">1.162</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A0.T7.1.9.6.3\">0.007</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A0.T7.1.9.6.4\">0.144</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A0.T7.1.9.6.5\">0.389</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A0.T7.1.9.6.6\">0.096</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A0.T7.1.9.6.7\">0.429</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A0.T7.1.9.6.8\">0.097</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A0.T7.1.9.6.9\">0.421</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A0.T7.1.9.6.10\">0.097</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A0.T7.1.9.6.11\">0.432</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A0.T7.1.10.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A0.T7.1.10.7.1\">Science , Arts</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A0.T7.1.10.7.2\">1.281</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A0.T7.1.10.7.3\">0.008</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A0.T7.1.10.7.4\">-1.074</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A0.T7.1.10.7.5\">0.985</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A0.T7.1.10.7.6\">-1.113</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A0.T7.1.10.7.7\">0.995</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A0.T7.1.10.7.8\">-1.114</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A0.T7.1.10.7.9\">0.994</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A0.T7.1.10.7.10\">-1.114</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_bb\" id=\"A0.T7.1.10.7.11\">0.992</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 7: </span>Results for polynomial Kernel Experiments on Glove and Google News representations.</figcaption>\n</figure>",
|
| 179 |
+
"capture": "Table 7: Results for polynomial Kernel Experiments on Glove and Google News representations."
|
| 180 |
+
}
|
| 181 |
+
},
|
| 182 |
+
"image_paths": {
|
| 183 |
+
"1": {
|
| 184 |
+
"figure_path": "2009.09435v4_figure_1.png",
|
| 185 |
+
"caption": "Figure 1: Pre-image problem illustration for the neutralized representations (null-space).\nThe plane represents the bias subspace in the RKHS.",
|
| 186 |
+
"url": "http://arxiv.org/html/2009.09435v4/extracted/2009.09435v4/images/lintorkhs.png"
|
| 187 |
+
},
|
| 188 |
+
"2": {
|
| 189 |
+
"figure_path": "2009.09435v4_figure_2.png",
|
| 190 |
+
"caption": "Figure 2: 2D toy example of non-linear component removal using Kernel PCA and the pre-image (neutralize step) described in \u00a7 5.",
|
| 191 |
+
"url": "http://arxiv.org/html/2009.09435v4/extracted/2009.09435v4/images/toysim.png"
|
| 192 |
+
}
|
| 193 |
+
},
|
| 194 |
+
"validation": true,
|
| 195 |
+
"references": [
|
| 196 |
+
{
|
| 197 |
+
"1": {
|
| 198 |
+
"title": "Learning to find pre-images.",
|
| 199 |
+
"author": "G\u00f6khan H. Bak\u0131r, Jason Weston, and Bernhard Sch\u00f6lkopf. 2004.",
|
| 200 |
+
"venue": "Advances in Neural Information Processing Systems,\n16:449\u2013456.",
|
| 201 |
+
"url": "https://papers.nips.cc/paper_files/paper/2003/hash/ac1ad983e08ad3304a97e147f522747e-Abstract.html"
|
| 202 |
+
}
|
| 203 |
+
},
|
| 204 |
+
{
|
| 205 |
+
"2": {
|
| 206 |
+
"title": "Man is to computer\nprogrammer as woman is to homemaker? Debiasing word embeddings.",
|
| 207 |
+
"author": "Tolga Bolukbasi, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam T.\nKalai. 2016.",
|
| 208 |
+
"venue": "In Advances in Neural Information Processing Systems, pages\n4349\u20134357.",
|
| 209 |
+
"url": "https://arxiv.org/abs/1607.06520"
|
| 210 |
+
}
|
| 211 |
+
},
|
| 212 |
+
{
|
| 213 |
+
"3": {
|
| 214 |
+
"title": "Semantics derived\nautomatically from language corpora contain human-like biases.",
|
| 215 |
+
"author": "Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017.",
|
| 216 |
+
"venue": "Science, 356(6334):183\u2013186.",
|
| 217 |
+
"url": "https://arxiv.org/abs/1608.07187"
|
| 218 |
+
}
|
| 219 |
+
},
|
| 220 |
+
{
|
| 221 |
+
"4": {
|
| 222 |
+
"title": "Statistical power analysis.",
|
| 223 |
+
"author": "Jacob Cohen. 1992.",
|
| 224 |
+
"venue": "Current Directions in Psychological Science, 1(3):98\u2013101.",
|
| 225 |
+
"url": "https://journals.sagepub.com/doi/10.1111/1467-8721.ep10768783"
|
| 226 |
+
}
|
| 227 |
+
},
|
| 228 |
+
{
|
| 229 |
+
"5": {
|
| 230 |
+
"title": "BERT: Pre-training\nof deep bidirectional transformers for language understanding.",
|
| 231 |
+
"author": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019.",
|
| 232 |
+
"venue": "In Proceedings of the 2019 Conference of the North American\nChapter of the Association for Computational Linguistics: Human Language\nTechnologies, Volume 1 (Long and Short Papers), pages 4171\u20134186,\nMinneapolis, Minnesota. Association for Computational Linguistics.",
|
| 233 |
+
"url": "https://doi.org/10.18653/v1/N19-1423"
|
| 234 |
+
}
|
| 235 |
+
},
|
| 236 |
+
{
|
| 237 |
+
"6": {
|
| 238 |
+
"title": "Neural Network Methods in Natural Language Processing.",
|
| 239 |
+
"author": "Yoav Goldberg. 2017.",
|
| 240 |
+
"venue": "Morgan & Claypool Publishers.",
|
| 241 |
+
"url": "https://link.springer.com/book/10.1007/978-3-031-02165-7"
|
| 242 |
+
}
|
| 243 |
+
},
|
| 244 |
+
{
|
| 245 |
+
"7": {
|
| 246 |
+
"title": "Lipstick on a pig:\nDebiasing methods cover up systematic gender biases in word embeddings but\ndo not remove them.",
|
| 247 |
+
"author": "Hila Gonen and Yoav Goldberg. 2019.",
|
| 248 |
+
"venue": "In Proceedings of the 2019 Conference of the North American\nChapter of the Association for Computational Linguistics: Human Language\nTechnologies, Volume 1 (Long and Short Papers), pages 609\u2013614, Minneapolis,\nMinnesota. Association for Computational Linguistics.",
|
| 249 |
+
"url": "https://doi.org/10.18653/v1/N19-1061"
|
| 250 |
+
}
|
| 251 |
+
},
|
| 252 |
+
{
|
| 253 |
+
"8": {
|
| 254 |
+
"title": "Implicit social\ncognition: Attitudes, self-esteem, and stereotypes.",
|
| 255 |
+
"author": "Anthony G. Greenwald and Mahzarin R. Banaji. 1995.",
|
| 256 |
+
"venue": "Psychological Review, 102(1):4.",
|
| 257 |
+
"url": "https://pubmed.ncbi.nlm.nih.gov/7878162/"
|
| 258 |
+
}
|
| 259 |
+
},
|
| 260 |
+
{
|
| 261 |
+
"9": {
|
| 262 |
+
"title": "It\u2019s all in the name:\nMitigating gender bias with name-based counterfactual data substitution.",
|
| 263 |
+
"author": "Rowan Hall Maudslay, Hila Gonen, Ryan Cotterell, and Simone Teufel. 2019.",
|
| 264 |
+
"venue": "In Proceedings of the 2019 Conference on Empirical Methods in\nNatural Language Processing and the 9th International Joint Conference on\nNatural Language Processing, pages 5266\u20135274, Hong Kong, China. Association\nfor Computational Linguistics.",
|
| 265 |
+
"url": "https://doi.org/10.18653/v1/D19-1530"
|
| 266 |
+
}
|
| 267 |
+
},
|
| 268 |
+
{
|
| 269 |
+
"10": {
|
| 270 |
+
"title": "SimLex-999:\nEvaluating semantic models with (genuine) similarity estimation.",
|
| 271 |
+
"author": "Felix Hill, Roi Reichart, and Anna Korhonen. 2015.",
|
| 272 |
+
"venue": "Computational Linguistics, 41(4):665\u2013695.",
|
| 273 |
+
"url": "https://doi.org/10.1162/COLI_a_00237"
|
| 274 |
+
}
|
| 275 |
+
},
|
| 276 |
+
{
|
| 277 |
+
"11": {
|
| 278 |
+
"title": "Additive approximations in\nhigh dimensional nonparametric regression via the SALSA.",
|
| 279 |
+
"author": "Kirthevasan Kandasamy and Yaoliang Yu. 2016.",
|
| 280 |
+
"venue": "In International Conference on Machine Learning, pages 69\u201378.",
|
| 281 |
+
"url": "https://arxiv.org/abs/1602.00287"
|
| 282 |
+
}
|
| 283 |
+
},
|
| 284 |
+
{
|
| 285 |
+
"12": {
|
| 286 |
+
"title": "Conceptor debiasing of\nword representations evaluated on WEAT.",
|
| 287 |
+
"author": "Saket Karve, Lyle Ungar, and Jo\u00e3o Sedoc. 2019.",
|
| 288 |
+
"venue": "In Proceedings of the First Workshop on Gender Bias in Natural\nLanguage Processing, pages 40\u201348, Florence, Italy. Association for\nComputational Linguistics.",
|
| 289 |
+
"url": "https://doi.org/10.18653/v1/W19-3806"
|
| 290 |
+
}
|
| 291 |
+
},
|
| 292 |
+
{
|
| 293 |
+
"13": {
|
| 294 |
+
"title": "The pre-image\nproblem in kernel methods.",
|
| 295 |
+
"author": "James T. Kwok and Ivor W. Tsang. 2004.",
|
| 296 |
+
"venue": "IEEE Transactions on Neural Networks, 15(6):1517\u20131525.",
|
| 297 |
+
"url": "https://ieeexplore.ieee.org/document/1353287/"
|
| 298 |
+
}
|
| 299 |
+
},
|
| 300 |
+
{
|
| 301 |
+
"14": {
|
| 302 |
+
"title": "On measuring social\nbiases in sentence encoders.",
|
| 303 |
+
"author": "Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger.\n2019.",
|
| 304 |
+
"venue": "In Proceedings of the 2019 Conference of the North American\nChapter of the Association for Computational Linguistics: Human Language\nTechnologies, Volume 1 (Long and Short Papers), pages 622\u2013628, Minneapolis,\nMinnesota. Association for Computational Linguistics.",
|
| 305 |
+
"url": "https://doi.org/10.18653/v1/N19-1063"
|
| 306 |
+
}
|
| 307 |
+
},
|
| 308 |
+
{
|
| 309 |
+
"15": {
|
| 310 |
+
"title": "Kernel PCA and de-noising in feature spaces.",
|
| 311 |
+
"author": "Sebastian Mika, Bernhard Sch\u00f6lkopf, Alex J. Smola, Klaus-Robert M\u00fcller,\nMatthias Scholz, and Gunnar R\u00e4tsch. 1999.",
|
| 312 |
+
"venue": "In Advances in Neural Information Processing Systems, pages\n536\u2013542.",
|
| 313 |
+
"url": "https://papers.nips.cc/paper_files/paper/1998/hash/226d1f15ecd35f784d2a20c3ecf56d7f-Abstract.html"
|
| 314 |
+
}
|
| 315 |
+
},
|
| 316 |
+
{
|
| 317 |
+
"16": {
|
| 318 |
+
"title": "Efficient estimation of word\nrepresentations in vector space.",
|
| 319 |
+
"author": "Tom\u00e1s Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013.",
|
| 320 |
+
"venue": "In 1st International Conference on Learning Representations.",
|
| 321 |
+
"url": "http://arxiv.org/abs/1301.3781"
|
| 322 |
+
}
|
| 323 |
+
},
|
| 324 |
+
{
|
| 325 |
+
"17": {
|
| 326 |
+
"title": "Scikit-learn:\nMachine learning in Python.",
|
| 327 |
+
"author": "Fabian Pedregosa, Ga\u00ebl Varoquaux, Alexandre Gramfort, Vincent Michel,\nBertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron\nWeiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau,\nMatthieu Brucher, Matthieu Perrot, and \u00c9douard Duchesnay. 2011.",
|
| 328 |
+
"venue": "Journal of Machine Learning Research, 12(85):2825\u20132830.",
|
| 329 |
+
"url": "http://jmlr.org/papers/v12/pedregosa11a.html"
|
| 330 |
+
}
|
| 331 |
+
},
|
| 332 |
+
{
|
| 333 |
+
"18": {
|
| 334 |
+
"title": "GloVe: Global\nvectors for word representation.",
|
| 335 |
+
"author": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014.",
|
| 336 |
+
"venue": "In Proceedings of the 2014 Conference on Empirical Methods in\nNatural Language Processing, pages 1532\u20131543, Doha, Qatar. Association for\nComputational Linguistics.",
|
| 337 |
+
"url": "https://doi.org/10.3115/v1/D14-1162"
|
| 338 |
+
}
|
| 339 |
+
},
|
| 340 |
+
{
|
| 341 |
+
"19": {
|
| 342 |
+
"title": "Deep contextualized\nword representations.",
|
| 343 |
+
"author": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark,\nKenton Lee, and Luke Zettlemoyer. 2018.",
|
| 344 |
+
"venue": "In Proceedings of the 2018 Conference of the North American\nChapter of the Association for Computational Linguistics: Human Language\nTechnologies, Volume 1 (Long Papers), pages 2227\u20132237, New Orleans,\nLouisiana. Association for Computational Linguistics.",
|
| 345 |
+
"url": "https://doi.org/10.18653/v1/N18-1202"
|
| 346 |
+
}
|
| 347 |
+
},
|
| 348 |
+
{
|
| 349 |
+
"20": {
|
| 350 |
+
"title": "Gender bias in\ncoreference resolution.",
|
| 351 |
+
"author": "Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018.",
|
| 352 |
+
"venue": "In Proceedings of the 2018 Conference of the North American\nChapter of the Association for Computational Linguistics: Human Language\nTechnologies, Volume 2 (Short Papers), pages 8\u201314, New Orleans, Louisiana.\nAssociation for Computational Linguistics.",
|
| 353 |
+
"url": "https://www.aclweb.org/anthology/N18-2002"
|
| 354 |
+
}
|
| 355 |
+
},
|
| 356 |
+
{
|
| 357 |
+
"21": {
|
| 358 |
+
"title": "Kernel principal component analysis.",
|
| 359 |
+
"author": "Bernhard Sch\u00f6lkopf, Alexander Smola, and Klaus-Robert M\u00fcller. 1997.",
|
| 360 |
+
"venue": "In International Conference on Artificial Neural Networks,\npages 583\u2013588. Springer.",
|
| 361 |
+
"url": "https://people.eecs.berkeley.edu/~wainwrig/stat241b/scholkopf_kernel.pdf"
|
| 362 |
+
}
|
| 363 |
+
},
|
| 364 |
+
{
|
| 365 |
+
"22": {
|
| 366 |
+
"title": "Nonlinear component\nanalysis as a kernel eigenvalue problem.",
|
| 367 |
+
"author": "Bernhard Sch\u00f6lkopf, Alexander Smola, and Klaus-Robert M\u00fcller. 1998.",
|
| 368 |
+
"venue": "Neural Computation, 10(5):1299\u20131319.",
|
| 369 |
+
"url": "https://www.mlpack.org/papers/kpca.pdf"
|
| 370 |
+
}
|
| 371 |
+
},
|
| 372 |
+
{
|
| 373 |
+
"23": {
|
| 374 |
+
"title": "Gender bias in\ncontextualized word embeddings.",
|
| 375 |
+
"author": "Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, and\nKai-Wei Chang. 2019.",
|
| 376 |
+
"venue": "In Proceedings of the 2019 Conference of the North American\nChapter of the Association for Computational Linguistics: Human Language\nTechnologies, Volume 1 (Long and Short Papers), pages 629\u2013634, Minneapolis,\nMinnesota. Association for Computational Linguistics.",
|
| 377 |
+
"url": "https://doi.org/10.18653/v1/N19-1064"
|
| 378 |
+
}
|
| 379 |
+
},
|
| 380 |
+
{
|
| 381 |
+
"24": {
|
| 382 |
+
"title": "Gender bias in\ncoreference resolution: Evaluation and debiasing methods.",
|
| 383 |
+
"author": "Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang.\n2018.",
|
| 384 |
+
"venue": "In Proceedings of the 2018 Conference of the North American\nChapter of the Association for Computational Linguistics: Human Language\nTechnologies, Volume 2 (Short Papers), pages 15\u201320, New Orleans, Louisiana.\nAssociation for Computational Linguistics.",
|
| 385 |
+
"url": "https://www.aclweb.org/anthology/N18-2003"
|
| 386 |
+
}
|
| 387 |
+
}
|
| 388 |
+
],
|
| 389 |
+
"url": "http://arxiv.org/html/2009.09435v4"
|
| 390 |
+
}
|
20240522/2204.01349v4.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240522/2206.02603v3.json
ADDED
|
@@ -0,0 +1,206 @@
| 1 |
+
{
|
| 2 |
+
"title": "CAN-MM: Multiplexed Message Authentication Code for Controller Area Network message authentication in road vehicles",
|
| 3 |
+
"abstract": "As the automotive industry adopts more technology, the threat of cyberattacks on vehicles grows. Electronic Control Units operate in a hostile environment, raising safety concerns for drivers and passengers. Initiatives from both industry and government bodies aim to address these risks. The primary communication protocol used in the automotive industry, the standard Controller Area Networks protocol, is a target for cybercriminals due to its limitations in ensuring communication integrity. This paper proposes CAN Multiplexed MAC (CAN-MM), using frequency modulation to multiplex Message Authentication Code (MAC) data with standard CAN communication. CAN-MM enables the transmission of MAC payloads at reduced time cost while maintaining backward compatibility with old CAN protocol versions. The solution is also compatible with modern evolutions of the CAN protocol and advanced algorithms resorting to MAC as part of the security infrastructure.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Modern road vehicles, striving for improved comfort, sustainability, environmental friendliness, and safety [1 ###reference_b1###], feature intricate onboard control systems, especially in real-time safety-critical domains [1 ###reference_b1###, 2 ###reference_b2###]. The increased interconnectivity of electronic components exacerbates this complexity. However, this sophistication also makes the automotive industry an attractive target for attackers [3 ###reference_b3###], with ECUs vulnerable to cyberattacks in hostile environments [4 ###reference_b4###].\nTo mitigate these risks, carmakers and governments are endorsing initiatives to bolster cybersecurity in the automotive sector (i.e., the ISO/SAE 21434:2021 standard for road vehicles cybersecurity engineering [5 ###reference_b5###], and the ISO/PAS 5112:2022 guidelines for auditing cybersecurity engineering [6 ###reference_b6###]). Additionally, the UN Economic Commission for Europe (UNECE) has introduced new regulations for vehicle cybersecurity and software updates, delivered through the WP.29 package [7 ###reference_b7###, 8 ###reference_b8###]. The automotive industry is working harder to make their products more secure and to research ways to address serious security threats that take advantage of communication between modules [9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###].\nThe CAN protocol is central to automotive communication. Therefore, ensuring robust security measures within CAN communication is crucial to uphold the integrity and safety of modern vehicles [12 ###reference_b12###]. Detailed insights into potential CAN threats and related countermeasures are provided in [13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###]. Country-specific regulations mandate specific CAN messages accessible through an On-Board Diagnostics (OBD) port in every vehicle [16 ###reference_b16###, 17 ###reference_b17###]. Ensuring the integrity (i.e., immunity to tampering) and authenticity (i.e., originating from an authorized source) of CAN messages is, therefore, critical to prevent unauthorized access and ensure the safety and operational efficiency of essential functionalities of the vehicle [18 ###reference_b18###, 19 ###reference_b19###, 20 ###reference_b20###]. To achieve this, the Secure Onboard Communication (SecOC) and Crypto Stack defined in Automotive Open System Architecture (AUTOSAR) require the incorporation of a MAC digest within the payload of each data frame [21 ###reference_b21###]. However, integrating a MAC digest in a CAN frame presents compatibility issues, feasible only for specific CAN protocol versions and resulting in back-compatibility challenges [22 ###reference_b22###].\nThis paper proposes a technique named CAN-MM, offering a novel approach to MAC transmission. This technique enables the multiplexing of the MAC alongside data transmission without altering the original frame format, ensuring full compatibility with all versions of the standard CAN protocol. The main objective of CAN-MM technology is to integrate a System-on-Chip (SoC) compatible MAC in the CAN version 2.0 to enable achieving a security level that matches the most recent advancements, such as SecOC utilizing MAC with Controller Area Network Flexible Data-Rate (CAN FD). Moreover, this approach addresses the authentication timing challenges identified by Ikumapayi et al.[22 ###reference_b22###]. 
Finally, by freeing data bytes from the CAN frame, it offers a novel way to incorporate the MAC into signature schemes, authentication protocols, or key exchange mechanisms, such as [23 ###reference_b23###].\nThe article is organized as follows: Section II ###reference_### gives some background on the CAN network, including vulnerabilities and common attacks, while Section III ###reference_### reports the state-of-the-art literature on CAN security. Section IV ###reference_### describes the CAN-MM architecture. Section VI ###reference_### provides experimental results, and Section IX ###reference_### summarizes the main contributions and concludes the paper."
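The time-cost claim can be made concrete with simple arithmetic: in CAN 2.0, an appended MAC competes with data for the 8-byte payload and may force extra frames, whereas a multiplexed MAC costs no payload bits. A rough sketch, where the 47-bit per-frame overhead is a nominal figure that ignores bit stuffing:

```python
# Back-of-envelope cost of carrying a MAC in classical CAN 2.0 frames.
# Fixed per-frame overhead of a standard (11-bit ID) data frame is taken
# as 47 bits (incl. interframe space, ignoring bit stuffing) -- nominal.
OVERHEAD_BITS = 47
MAX_PAYLOAD = 8          # bytes per CAN 2.0 frame

def bus_bits(data_bytes, mac_bytes, multiplexed):
    payload = data_bytes if multiplexed else data_bytes + mac_bytes
    frames = -(-payload // MAX_PAYLOAD)          # ceiling division
    return frames * OVERHEAD_BITS + payload * 8

print("appended 8B MAC :", bus_bits(8, 8, multiplexed=False), "bits")  # 222
print("multiplexed MAC :", bus_bits(8, 8, multiplexed=True), "bits")   # 111
```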
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "II Background",
|
| 15 |
+
"text": "On-board ECUs play a crucial role in automotive applications by managing subsystems and facilitating real-time communication with sensors and actuators [24 ###reference_b24###]. The CAN bus, a primary vehicle network, adheres to safety guidelines, ensuring reliable communication in noisy environments. The CAN electrical signal, transmitted differentially through CAN high line (CANH) and CAN low line (CANL), minimizes noise impact from motors, ignition systems, and switching contacts. High-speed (HS) (ISO 11898-2 [25 ###reference_b25###]) and Low-Speed (LS) (ISO 11898-3 [26 ###reference_b26###]) interfaces provide varying throughput capabilities based on different voltage levels. In HS CAN, dominant bit transmission (logic 0) raises CANH to 3.5V and lowers CANL to 1.5V, creating a 2V voltage difference. Recessive bit transmission (logic 1) maintains both CANH and CANL at 2.5V with minimal voltage difference. A differential voltage above 0.9V indicates a dominant level (logic 0), while below 0.5V denotes a recessive level (logic 1), ensuring reliable communication in noisy environments. Twisted-pair conductors are commonly used for physical transmission lines to mitigate magnetic interference.\nMultiple CAN protocol variants exist, each supporting different transmission speeds and frame payload sizes. CAN FD and CAN 2.0 protocols differ in maximum transmission speed and payload size, with CAN 2.0 limited to 8 bytes and CAN FD extending to 64 bytes. Despite CAN FD supporting larger payloads, many applications still use 8-byte payloads to ensure compatibility with existing vehicle CAN database [27 ###reference_b27###, 28 ###reference_b28###, 29 ###reference_b29###]. Controller Area Network Extra Long (CAN XL), a newer version meeting ISO/TC 22/SC 31 Data communication standards [30 ###reference_b30###], offers features like extended data payload capacity (up to 2,048 bytes) and higher communication speeds ranging from 500 kbit/s to 5 Mbit/s, with potential speeds reaching 12 Mbit/s in the CAN SIC XL FAST configuration. The CAN SIC XL FAST baud rate is comparable to the 10BASE-T1S technology, also known as Vehicle Ethernet, providing 10 Mbit/s bandwidth over a single-pair physical layer.\nThe original CAN protocol includes no built-in security features. Additionally, country-based regulations require the provision of an OBD port [16 ###reference_b16###, 17 ###reference_b17###], commonly located within vehicles, enabling access to legislative diagnostic messages. These messages, transmitted in plaintext to comply with legislative mandates, introduce considerable security vulnerabilities.\nIn an endeavor to mitigate these risks, the SecOC framework, explicitly designed for CAN FD, along with CAN secure (CANsec) for CAN XL, has been promulgated. These methodologies expressly elevate the principles of data integrity and authenticity over confidentiality [31 ###reference_b31###]. The critical role played by the CAN bus in the domain of automotive communications mandates a comprehensive investigation into its security weaknesses, potential avenues for attack, and the methods by which such attacks may be carried out [32 ###reference_b32###, 33 ###reference_b33###, 34 ###reference_b34###].\nThe attack surface of a CAN presents numerous potential vulnerabilities attackers could exploit. This encompasses strategies for unauthorized access, undermining data integrity, data breaches, executing hijacking maneuvers, or hindering the system. 
Despite the variety of attack vectors against CAN networks, two main types of attacks have been reported in the literature: (i) Man in the Middle (MitM) [35 ###reference_b35###] and (ii) Replay Attacks [36 ###reference_b36###].\nFigure 1 ###reference_### illustrates three prevalent automotive attack settings that target the CAN protocol. Each setting is effectively utilized in MitM and Replay Attacks. Figure 1 ###reference_###-A demonstrates an attack through a compromised CAN node, where unauthorized software takes control. This can occur via the corruption of the CAN controller\u2019s firmware or by exploiting software module vulnerabilities, such as a buffer overflow. In Figure 1 ###reference_###-B, an attack is facilitated by a hardware module that isolates the victim node from the rest of the vehicle network, enabling the interception and manipulation of CAN traffic. The final scheme, depicted in Figure 1 ###reference_###-C, involves connecting an external module to the vehicle\u2019s OBD port, granting direct access to the CAN bus. Various commercially available, low-cost CAN modules that feature Bluetooth connectivity support this approach, allowing for programmability via mobile applications. These settings are crucial in laying the groundwork for advanced CAN attacks, exemplified by the Janus Attack [37 ###reference_b37###] and the Cloak Attack [38 ###reference_b38###].\n###figure_1### The Janus Attack, a new and sophisticated threat in CAN protocol [37 ###reference_b37###], leverages the CAN protocol synchronization rules and targets devices with different sample points. It involves transmitting a single CAN frame with dual payloads, causing targeted devices to interpret divergent data compared to others in the network. This undermines the atomic multicast principle of CAN, critical for system integrity. It operates by coercing all CAN controllers to synchronize simultaneously, then manipulating the CAN bus level after the first one has sampled the bus but before another does, resulting in valid frames with differing payloads as it exploits the characteristics of the two different payloads to have the same size.\nA cloak attack in cybersecurity involves manipulating bit signals to deceive networked ECUs [38 ###reference_b38###]. The main idea is that the attacker leverages the different sampling times of two receivers to craft two different frames (FrameA and FrameB). The difference is represented by a selection of bits the attacker alters after the first receiver samples the frame (FrameA). Appropriately crafted, the bit-changes in the second frame (FrameB) can avoid triggering re-synchronization mechanisms, aiming for an optimized bit-string with minimal detection and errors in the Cyclic Redundancy Check (CRC) field (as the CRC code will be based on the original content of FrameA). If the attacker achieves such duplication, it can generate out-of-sync data in ECUs.\nThe Replay Attack shares similarities with MitM attack. To execute this attack, the attacker must perform a learning phase by monitoring the network and collecting a certain amount of CAN frames. Later, the attacker replays these previously collected frames on the network to achieve a target behavior. Unfortunately, this attack does not require the attacker to possess specific skills, expertise, or advanced knowledge about vehicle CAN networks.\nThese clusters of attacks can be successfully mitigated by linking a CAN Frame payload to a unique MAC that is directly derived from the frame data. 
Yet, the MAC alone is insufficient against replay attacks, because a CAN payload carrying identical data produces the same digest. Hence, adopting a rolling counter tied to the data is advised, so that different digests are obtained even when the data are identical.\nThe MAC effectively mitigates threats but may also introduce weaknesses into the system. This is especially significant in safety-critical, hard real-time systems like ECM, TCM, etc. Ikumapayi et al. [22 ###reference_b22###] formalize the impact that authentication schemes have on the real-time performance of messages over CAN, CAN FD, and CAN XL based on response-time analysis. A CAN frame is schedulable if its Worst-Case Response Time (WCRT) is less than or equal to its deadline. Message deadlines may be implicit, i.e., equal to their period, or explicit (constrained). In particular, the authors demonstrated that adding a MAC to the payload of CAN, CAN FD, and CAN XL messages can impact schedulability and the meeting of deadlines depending on the bus utilization. On classical CAN, above 70% utilization, almost all messages fail to meet their deadlines. On the other hand, CAN FD and CAN XL exhibit higher scheduling resilience (it drops only when bus utilization rises to 80-90%) thanks to their faster bit rates. Nevertheless, pushing bus utilization that high can itself be harmful.\nWhen CAN frames carry the MAC in their payload, the MAC must be verified successfully before the data can be used. Modern ECUs are generally equipped with a Hardware Secure Module (HSM), a dedicated SoC module that manages all cryptographic and security functions, including verifying MACs. The host system is momentarily suspended while the HSM performs the verification. In the context of real-time systems, an attacker might take advantage of this by injecting or flooding the CAN vehicle network with secure CAN frames that possess a legitimate ID but include counterfeit data and MAC. This situation leads to the HSM being overwhelmed with MAC verification requests that fail, while the host system is forced into repeated waiting periods, causing abnormal delays [39 ###reference_b39###]. These delays can significantly disrupt the system\u2019s capacity to adhere to its real-time deadlines, necessitating the initiation of safety system recoveries to address the failure to meet these critical timing constraints."
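The rolling-counter mitigation described above can be illustrated as follows; HMAC-SHA256 from the Python standard library stands in for whatever MAC primitive the ECU actually uses (e.g., AES-CMAC), and the 4-byte truncation is purely illustrative:

```python
# Sketch: MAC over (CAN ID || payload || rolling counter). Identical
# payloads then yield different digests, defeating naive replays.
# HMAC-SHA256 stands in for the ECU's real MAC primitive (e.g. AES-CMAC).
import hmac, hashlib, struct

KEY = b"\x00" * 16            # placeholder shared key

def make_mac(can_id, payload, counter, trunc=4):
    msg = struct.pack(">I", can_id) + payload + struct.pack(">Q", counter)
    return hmac.new(KEY, msg, hashlib.sha256).digest()[:trunc]

payload = bytes.fromhex("11223344AABBCCDD")
mac_t0 = make_mac(0x123, payload, counter=41)
mac_t1 = make_mac(0x123, payload, counter=42)
print(mac_t0.hex(), mac_t1.hex(), mac_t0 != mac_t1)

# Receiver side: recompute and compare in constant time, and accept only
# counters strictly greater than the last verified one.
assert hmac.compare_digest(mac_t1, make_mac(0x123, payload, 42))
```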
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "III Related Works",
|
| 21 |
+
"text": "As the original version of CAN protocol did not include any security support, researchers have come a long way to support it on top of the existing infrastructure or by proposing enhanced versions.\nFirst attempts to improve the security of the CAN protocol and improve resistance to attacks involved including a MAC digest for integrity and authenticity assurance [40 ###reference_b40###], often employing Cipher-based Message Authentication Code (CMAC) or keyed-Hash Message Authentication Code (HMAC) signatures, depending on hardware support. The CAN+ protocol, introduced by Ziermann et al. in [41 ###reference_b41###], aimed to enhance CAN data rates by relaxing constraints during specific transmission time slots. While the CAN application can benefit from the increased speed, its assessment lacked consideration for Electromagnetic Compatibility (EMC) and disturbance handling, which is crucial in the automotive domain. Furthermore, CAN+ relies on media access characteristics not present in the latest CAN FD and CAN XL protocols, which offer higher payload sizes and data rates. Despite advancements, minimizing latency in MAC signature reception and checking remains essential in CAN FD and CAN XL, which offer increased payload size and data rates.\nSignificant advancements have been made to enhance broadcast authentication mechanisms, capitalizing on the increased data rate of CAN+. Van Herreveg et al. introduced CanAuth [42 ###reference_b42###], a backward-compatible broadcast message authentication protocol for the CAN bus. This protocol meticulously follows CAN specifications, prioritizing ID-oriented authentication while addressing authentication delays and time synchronization concerns. However, Groza et al. [23 ###reference_b23###] point out that CanAuth\u2019s drawback lies in managing many keys associated with message IDs, raising security concerns. In response, they propose the LIBrA-CAN protocol as an alternative. Both LiBrA-CAN and CanAuth share the goal of enhancing CAN communication security but adopt distinct approaches and mechanisms. LiBrA-CAN emphasizes decentralized broadcast-based arbitration and lightweight implementation, ensuring resilience against replay attacks and flexibility in configuration. On the other hand, CanAuth focuses on message authentication and verification, providing robust protection against unauthorized access and tampering. To preserve the integrity of the physical layer, Hazem et al. [43 ###reference_b43###] put forth LCAP, a Lightweight CAN Authentication Protocol for Securing In-Vehicle Networks.\nAll previous works point out that the MAC size can significantly impact the resistance to attacks, i.e., the MAC size and the time required to elaborate it. To tackle the time constraints, authors in [44 ###reference_b44###] proposed a truncated MAC, justified by the average data size of 15,768 messages from a 2010 Toyota Prius during a 12.27-minute use case. They noted that only a part of the 8 bytes available in the CAN frames were used, making room for a short MAC. Following a similar direction, to further reduce the schema complexity and support all possible CAN protocols, very recently, Luo et al. [45 ###reference_b45###] proposed a lightweight schema based on the introduction of the MAC in place of the CRC field in the 2.0 version of the protocol. 
While the authors demonstrated the capability of their approach, backward compatibility with standard hardware is not guaranteed, since unmodified controllers would compute a CRC that no longer matches the field contents.\nIn general, both approaches go against National Institute of Standards and Technology (NIST) guidelines, which state that a truncated MAC digest below 4 bytes compromises cyber resilience [46 ###reference_b46###]. Ikumapayi et al. [22 ###reference_b22###] have explored the impact of adding authentication codes as separate messages, noting potential strain on timely delivery, especially given size constraints. As the authors noted, reserving more than four bytes in CAN 2.0 (i.e., 24Bit-CMAC-8Bit-FV) limits data interchangeability, as it requires adding an extra frame to carry the bytes that do not fit into the original frame. However, secure CAN FD and CAN XL protocols support MAC digest sizes from 4 to 16 bytes, accommodating complex protocols such as authentication, as demonstrated by [23 ###reference_b23###].\nYet, upgrading an entire vehicle network to these protocols involves both benefits and extra costs [47 ###reference_b47###], which are left to the manufacturer to evaluate.\nFinally, it is worth mentioning that some recent works support authentication and confidentiality without resorting to MACs [48 ###reference_b48###]. They employ cryptographic techniques only in the handshake phase, leading to a tiny latency increase, limited to hundreds of \u00b5s, at the price of reduced security compared with schemes that rely on MACs [49 ###reference_b49###, 50 ###reference_b50###, 23 ###reference_b23###].\nModulation techniques are not new to CAN security: Michaels et al. [51 ###reference_b51###] recently introduced modulation techniques to enhance the protocol. Their proposal incorporates a rolling secret (watermark) aligned with primary bus messages through multiplexing based on Binary Phase-Shift Keying (BPSK) modulation. While this multiplexed watermark significantly improves security by ensuring transmitted-message authentication, it addresses only this aspect, leaving attacks such as MitM incompletely covered, since the watermark can be forged."
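The NIST sizing argument above reduces to simple arithmetic: against an n-byte tag, a single random forgery attempt succeeds with probability 2^(-8n), which is why tags shorter than 4 bytes are considered too weak. A tiny illustration:

```python
# Why short truncated MACs are discouraged: the chance that a single
# random guess passes verification is 2**(-8*n) for an n-byte tag.
for n_bytes in (1, 2, 3, 4, 8, 16):
    p_forge = 2.0 ** (-8 * n_bytes)
    flag = "  <-- below the 4-byte floor" if n_bytes < 4 else ""
    print(f"{n_bytes:2d}-byte tag: P(single-guess forgery) = {p_forge:.3e}{flag}")
```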
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "IV CAN-MM Technology",
"text": "CAN-MM technology offers a non-intrusive solution for implementing MAC-based message authentication and integrity checks without compromising payload capacity or backward compatibility with all CAN standard versions. This approach is especially relevant for CAN 2.0 applications, enabling the development of a secure CAN network with an large MAC digest size. Additionally, CAN-MM enhances response time and performance of MAC digest computation across all CAN versions.\nEssentially, the underlying concept of CAN-MM involves utilizing digital modulation techniques (i.e., On-Off Keying (OOK)) to multiplex the transmission of the MAC digest with the original CAN frame payload. The OOK is a simple digital modulation scheme based on Amplitude-Shift Keying (ASK) commonly used in telecommunication [52 ###reference_b52###, 53 ###reference_b53###]. OOK transmits a logical one by sending a carrier wave signal, while the absence of the carrier wave represents a logical zero.\nThe MAC information is encoded by switching the carrier wave on and off. A logical zero is transmitted on the bus by generating the original CAN signals, while in the case of a logical one, a wave is added to the standard CAN electric signals (in both CANH and CANL). This wave acts as a carrier. Its amplitude is a configured parameter, with a value of in this study, to ensure sufficient margins when reconstructing the original signal at the receiver\u2019s side.\nTo combine the signals from the CAN frame and MAC digest, the CAN-MM system necessitates appropriate synchronization, as depicted in Figure 2 ###reference_###. The Identifier Extension (IDE) bit of the CAN Control field initiates the synchronization procedure. During this procedure, a synchronization sequence of logic \u201d1\u201d and \u201d0\u201d is introduced on the MAC CODE RX line for the entire duration of the Control field. These values are modulated with the content of the Control field. Subsequently, the MAC digest is modulated onto the data payload. Finally, to enhance the reliability of the system, the CRC of the MAC digest is modulated onto the CRC slot of the payload. The CRC is a specialized checker to detect transmission errors. Multiplexing the MAC digest directly with the message ensures a strong link between the MAC code and the corresponding message, bolstering security by minimizing vulnerabilities such as message and code separation.\n###figure_2### Figure 3 ###reference_### depicts the effect of using the CAN-MM approach on the CAN 2.0 and CAN FD frames. In both cases, modulating the MAC helps maintain the full payload capacity of the frame; when the frame is long enough, e.g., the CAN FD, it reduces the necessary size of the frame while retaining the same amount of information. This reduction limits the need for the extra transmission time caused by appending the MAC to the data payload or as extra frames [22 ###reference_b22###] when the selected MAC length is above 64 bits. It also might help optimize the system\u2019s real-time performance and the CAN bus load of the entire vehicle network.\n###figure_3### The CAN-MM architecture consists of two main blocks: a transmitter and a receiver module.\nIn the left part of Figure 4 ###reference_###, the original transmitter (CAN controller and CAN transceiver blocks) is coupled with the additional functional components required to implement the CAN-MM schema in the bottom left. A multiplexer block is employed to multiplex the MAC-related information. 
This block includes a diverter switch [54 ###reference_b54###] with two inputs, namely a carrier supplied by an internal generator and ground. The modulated CAN signal is applied to both CANH and CANL. The multiplexer is controlled by the MAC bitstream to provide a carrier as output when the corresponding MAC bit is one and no contribution when the corresponding bit of the MAC is zero. The multiplexer control line is synchronized with the CAN controller to multiplex the MAC information with the CAN payload.\nIn the right part of Figure 4 ###reference_###, the receiver includes a decoder block that is responsible for extracting the multiplexed MAC bitstream sequence from the payload of the CAN frame. The MAC decoder block utilizes a hybrid analog-digital electronic network to extract the correct CAN-MM contributions encapsulated in the CAN physical signal. The standard CAN receiver and the MAC decoder operate in parallel, eliminating the necessity of extra computation time for MAC extraction. While the transceiver processes the CAN frame, the decoder reconstructs the MAC bitstream. This allows the ECU to receive the CAN data payload and its MAC code in a shorter time window compared to existing solutions [12 ###reference_b12###].\nThe custom CAN-MM components are situated downstream of the standard CAN interface to ensure full electrical compatibility with existing CAN interfaces.\n###figure_4### To further explain the CAN-MM decoder, Figure 5 ###reference_### shows the complexity of the analog-digital electronic required. The decoder is composed of four stages, which are as follows:\nFiltering: This stage is replicated for both CANH and CANL. It includes a band-pass filter with a center frequency fc at the frequency of the carrier signal.\nComparing: This stage is a threshold comparator that operates on both CANH and CANL to identify the specific area where the carrier signal is present.\nConjunction: This stage combines the analog data from both CAN lines into a single digital signal stream.\nCounter: The final stage is a logical network that identifies the area of the carrier signal in the digital domain.\nThese stages work together in a highly coordinated fashion to accurately extract the modulated information from the CAN channel.\n###figure_5###"
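To make the OOK multiplexing concrete, the following Python sketch models it in discrete time. It is an illustration, not the authors' implementation: the 5 MHz carrier frequency is an assumption (the paper does not state it here), the bit time corresponds to the 500 kbps setup used later for validation, and the 300 mV peak-to-peak carrier amplitude is taken from the SNR experiment.

    import numpy as np

    FS = 100e6          # simulation sample rate [Hz] (assumed)
    BIT_TIME = 2e-6     # one CAN bit at 500 kbit/s [s]
    F_CARRIER = 5e6     # assumed carrier frequency [Hz]
    A_CARRIER = 0.15    # 150 mV amplitude -> 300 mV peak-to-peak

    def nominal_can_levels(bits):
        """Idealized CANH/CANL levels; logical 0 is the dominant state."""
        spb = int(BIT_TIME * FS)
        canh = np.repeat([2.5 if b else 3.5 for b in bits], spb)
        canl = np.repeat([2.5 if b else 1.5 for b in bits], spb)
        return canh, canl

    def ook_multiplex(canh, canl, mac_bits):
        """Add the carrier to both CAN lines wherever the MAC bit is 1."""
        t = np.arange(canh.size) / FS
        gate = np.repeat(mac_bits, canh.size // len(mac_bits))  # on-off keying
        carrier = A_CARRIER * np.sin(2 * np.pi * F_CARRIER * t)
        return canh + gate * carrier, canl + gate * carrier

    payload = [0, 1, 0, 0, 1, 1, 0, 1]   # CAN frame bits (0 = dominant)
    mac     = [0, 1, 1, 0, 1, 0, 0, 1]   # MAC digest bits riding on top
    canh_mm, canl_mm = ook_multiplex(*nominal_can_levels(payload), mac)

Because the same carrier is added to both lines, the differential voltage seen by a standard transceiver is unchanged, which is what preserves backward compatibility.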
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "Validation Model",
"text": ""
},
{
"section_id": "5.1",
"parent_section_id": "5",
"section_name": "Experimental setup",
"text": "The validation of the CAN-MM architecture considers a typical application scenario, specifically, a standard automotive CAN 2.0 network operating at a speed of 500kbps. A hybrid automotive CAN network comprising three CAN nodes was designed and simulated using the LTSpice [55 ###reference_b55###] simulation environment to validate the architecture. Two nodes were CAN-MM transceivers, one serving as a transmitter and the other as a receiver. The third node was a standard CAN version 2.0 receiver. This setup enabled the validation and verification of the CAN-MM functionality and its backward compatibility with standard CAN transceivers. The complete block diagram for this configuration is presented in Figure 6 ###reference_###.\n###figure_6###"
},
{
"section_id": "5.2",
"parent_section_id": "5",
"section_name": "Noise and interference analysis",
"text": "CAN systems boast a robust immunity to ground noise and electromagnetic interference, thanks to differentially transmitted information, independent ground reference, usage of twisted-pair cabling, and balanced differential transceivers.\nSince the CAN-MM technology is modifying the original profile of the CAN signals, evaluating it under realistic noisy environments is crucial. A validation environment simulated standard vehicle noise to assess noise and interference effects on CAN-MM technology. The noise profile is acquired using a multi-protocol vehicle interface device connected to an actual vehicle\u2019s OBD port. The device, programmed to transmit a specific CAN frame to the ECM, captures the physical CAN signal via an oscilloscope. Direct access to the CAN bus input of the ECU is facilitated through a break-out box. The noise profile is obtained during engine idle, aligned with specifications from various research papers [56 ###reference_b56###, 57 ###reference_b57###, 58 ###reference_b58###]. Noise signals, recorded from both CAN lines with the same phase, cover frequencies from 10kHz to 10MHz, with amplitudes between -100mV and 100mV. Signal-to-Noise Ratio (SNR) calculations involve computations on two identical carrier signals with a peak-to-peak amplitude of 300mV. The SNR for this scenario was calculated to be approximately 14.31 dB (Figure 7 ###reference_###). This value provides insight into the signal\u2019s quality relative to the background noise with the current parameters.\n###figure_7###"
},
{
"section_id": "5.3",
"parent_section_id": "5",
"section_name": "SPICE Model",
"text": "The SPICE simulation incorporates input signals, such as the CAN bitstream and its associated MAC, generated from Piecewise Linear (PWL) files. Supplementary signals, including noise profiles, follow the same method with their respective PWL files. Standard library parts provided by the tool are utilized for the remaining design components.\nIn this setup, the LTC2875 standard CAN transceiver (refer to Figure 8 ###reference_###) is employed, as depicted in Figure 6 ###reference_###. The CAN-MM added part features a High Voltage Latch-Up Proof and a Single pole double throw (SPDT) Switch. Depending on the control value, this block outputs either the carrier wave or zero, subsequently added to the CANH and CANL signals provided by LTC2875, along with the noise contributions (see Figure 9 ###reference_###).\n###figure_8### Unlike the transmitter, the custom part of the CAN-MM receiver processes data in parallel to the standard transceiver (refer to Figure 10 ###reference_###). The receiver includes a pass-band analog filter with a cutoff frequency set to the carrier frequency, followed by a voltage comparator with a voltage reference set to the absolute value of the noise (in this case, 100 mV). These stages form the first decode chain for CAN-MM and are identical for both CAN lines (see Figure 10 ###reference_###).\n###figure_9### In the second stage of the CAN-MM receiver, the contribution on the two CAN lines is collapsed together through a NOR port. Downstream of the NOR port is a custom logical network based on flip-flop counters, which is used to extract the MAC contribution (refer to Figure 11 ###reference_###).\n###figure_10### ###figure_11###"
},
{
"section_id": "5.4",
"parent_section_id": "5",
"section_name": "Preliminary Hardware Implementation",
"text": "A hardware prototype was created to enhance the validation of the CAN-MM technology. The prototype is specifically designed to assess the functionality of the CAN-MM transmitter. It is implemented within a compact In-Loop CAN network, as illustrated in Figure 12 ###reference_###. The primary goal of this validation is to confirm the capability of a standard receiver to receive the CAN-MM conditioned signal accurately.\n###figure_12### The experimental setup involves a laptop connected to a Neo VI Multi-Protocol Vehicle Interface, which oversees a custom hardware board designed for CAN-MM operation. This board is crucial for converting the incoming CAN signal, received through the Neo VI interface, into a CAN-MM frame. The conversion process is directed by control signals continuously managed by the Neo VI device. Additionally, the hardware board is linked to another Neo VI device via the CAN-MM bus, set up to function under the standard CAN protocol. This configuration creates a closed loop with the laptop, facilitating seamless communication.\nNotably, the CAN-MM bus is deliberately designed to be open-access, enabling the intentional introduction of noise and permitting data acquisition with an oscilloscope. In the second stage of the loop-back scheme, a programmable noise source was also added to simulate the noise profile acquired during the idle operation of the engine, as previously used in the LT-Spice simulations."
},
{
"section_id": "6",
"parent_section_id": null,
"section_name": "VI Experimental results",
"text": "The collected signal diagrams, illustrated in the following figures, show the electrical signals generated by each module depicted in Figure 6 ###reference_###. The output signals generated by CAN-MM node #1 are illustrated in Figure 13 ###reference_###, which depicts four subplots. The blue line in the first subplot illustrates a section of a transmitted CAN bitstream, while the second one displays the differential electrical signals. The third subplot shows the CAN-MM electrical signals that are transmitted on CANH and CANL, where MAC signal in the fourth subplot is multiplexed.\n###figure_13### Figure 14 ###reference_### depicts the functionality of the CAN-MM receiver in node #2. It shows how the receiver manages the physical signal generated by the CAN-MM transmitter and transmitted on the bus. The bottom subplot displays the received CAN-MM physical signal through the CAN-MM transceiver, which is identical to the signal transmitted by node #1 in Figure 13 ###reference_###. The subplot in blue color is the MAC bitstream extrapolated by the CAN-MM decoder in node #2, and it is the corresponding MAC of the subplot in red color.\n###figure_14### To demonstrate the complete compatibility of CAN-MM with the standard CAN 2.0 protocol, node #3 simulates a standard CAN 2.0 transceiver. As shown in Figure 15 ###reference_###, the backward compatibility is guaranteed, as the transceiver can reconstruct the correct CAN bitstream when it receives a CAN frame modulated under CAN-MM specifications. However, a standard CAN transceiver lacks the extended hardware required to demodulate the MAC bitstream, making it impossible to extract it.\n###figure_15### To support a timewise analysis of the CAN-MM to understand the potential benefits of the parallel transmission of the MAC alongside the data payload, we computed the MAC transmission Extra Time (), introduced by the transmission of the MAC digest.\nIt depends on the MAC\u2019s length in bits () and the selected CAN protocol transmission time of a data bit ( [22 ###reference_b22###]), as shown in equation Equation 1 ###reference_###.\nAligning with the experimental setup in [22 ###reference_b22###], we computed using =0.00025 () for the CAN FD and equal to 0.0001 () for the CAN XL.\nIn a CAN FD the required to transmit the 64-bit MAC digest is 16 \u00b5s, as per equation Equation 2 ###reference_###\nAdopting a more traditional baud rate on CAN FD, 500kbps, we calculate a = 0.002(). In this condition, the extra transmission time required by MAC appended to the payload is 128 \u00b5s (see Equation 3 ###reference_###).\nKeeping the MAC\u2019s size constant, adopting the CAN XL protocol with a speed rate of 10Mbps, the would be reduced to 6.4 \u00b5s, which represents the best possible transmission performance by SecOC and CANsec, as per equation Equation 4 ###reference_###, demonstrating that a broad adoption fo CAN XL would introduce faster performance.\nOpting for CAN-MM instead highlights a key benefit: the negligible impact on transmission times due to MAC. 
This capability to maintain consistent transmission times, with or without MAC, offers a solution to the schedulability challenges discussed by Ikumapayi et al.[22 ###reference_b22###].\nMoreover, CAN-MM supports countermeasures on the schedulability noted by the authors of [39 ###reference_b39###].\nThe systems described in the paper adopt Rate-monotonic scheduling (RMS), a deterministic scheduling algorithm for real-time operating systems that assign priorities to tasks based on their period; the shorter the period, the higher the priority. A pivotal aspect of RMS is its CPU utilization bound for periodic tasks, which can be calculated using the Liu & Layland formula, Equation 5 ###reference_###, where is the computation time of task , is the period of task , and is the total CPU utilization. This formula ensures that if the total CPU utilization is below a certain threshold, all tasks can be scheduled to meet their deadlines, making RMS particularly efficient for systems with hard real-time constraints.\nThe transmission time of the CAN and the MAC might significantly contribute to , the computational load. By reducing the transmission time, CAN-MM directly decreases and, consequently, the total CPU utilization. This reduction is crucial for enhancing resilience against certain types of attacks.\nTo provide a general understanding, the HSM performance metrics published by Pott [59 ###reference_b59###] indicate that more than 300 clock cycles are required for MAC verification. When considering latency, the total time is approximately 5-6 \u00b5s, which parallels the time savings achieved by CAN-MM compared to CAN XL. Consequently, this denotes that CAN-MM might theoretically offer a twofold increase in the system\u2019s ability to withstand such attacks, in contrast to the conventional CAN XL framework where the MAC is appended to the payload.\nThe robustness of CAN-MM was further validated through measures performed on the hardware implementation introduced in Subsection V-D ###reference_###. These results complement the ones produced by the LT-SPICE simulations. The captured data in Figure 16 ###reference_### portrays the real-time CAN-MM-H bus traffic. The applied noise profile follows what has been captured from a vehicle as described in Subsection V-B ###reference_###. Within this experimental framework, the CAN-MM transmitter effectively performs the multiplexing of the MAC Bitstream, precisely the bit sequence 000011101011110111, over the underlying physical CAN-H signal. This multiplexing process is executed through the OOK modulation technique, closely replicating the observations obtained in the simulated environment, thus confirming the robustness of the CAN-MM system.\n###figure_16### Moreover, the BUSMASTER [60 ###reference_b60###] tool reported error-free reception of the transmitted CAN message. This confirms the backward compatibility of the CAN-MM approach with conventional hardware. The multiplexed carrier of the standard transceiver is intelligently filtered out, effectively treating it as noise in the system."
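To make the figures above easy to re-check, here is a small Python sketch of the two calculations; the three-task set in the last line is an illustrative example, not data from the paper.

    # Extra frame time when an L_MAC-bit digest is appended to the payload,
    # T_extra = L_MAC * t_bit; with CAN-MM the digest rides on top of the
    # payload, so its contribution to frame time is essentially zero.
    def t_extra_us(l_mac_bits: int, t_bit_ms: float) -> float:
        return l_mac_bits * t_bit_ms * 1000.0     # result in microseconds

    print(t_extra_us(64, 0.00025))   # CAN FD fast phase   -> 16.0 us
    print(t_extra_us(64, 0.002))     # CAN FD at 500 kbps  -> 128.0 us
    print(t_extra_us(64, 0.0001))    # CAN XL at 10 Mbps   -> 6.4 us

    # Liu & Layland schedulability bound for rate-monotonic scheduling:
    # U = sum(C_i / T_i) <= n * (2**(1/n) - 1).  Shrinking the C_i spent on
    # MAC transmission lowers U and leaves headroom under the bound.
    def rms_schedulable(tasks):                    # tasks = [(C_i, T_i), ...]
        n = len(tasks)
        u = sum(c / t for c, t in tasks)
        return u, u <= n * (2 ** (1 / n) - 1)

    print(rms_schedulable([(1, 4), (1, 5), (2, 10)]))  # (0.65, True)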
},
{
"section_id": "7",
"parent_section_id": null,
"section_name": "VII CAN-MM Type-B",
"text": "Section VI ###reference_### highlights a potential limitation in the CAN-MM architecture when the carrier and noise frequencies align, manifesting sporadic failures in demodulating the MAC bit-stream. While this scenario is unlikely to occur in actual situations, considering that noise amplitudes exceeding 100mV are seldom encountered, this paper introduces an advanced CAN-MM architecture called Type-B, able to withstand scenarios where the carrier signal frequency matches the noise. CAN-MM Type-B ensures additional robustness to noise across all frequency bands without risking data corruption.\nThe CAN-MM Type-B physical signals scheme incorporates Carrier Phase Shift Modulation (CPSM) [61 ###reference_b61###] as depicted in Figure 17 ###reference_###. The CPSM carrier varies between CANH and CANL, causing a phase shift ranging from 90\u00b0 to 270\u00b0. The proposed design sets the phase modulation to 90\u00b0 for CANL as depicted in Figure 20 ###reference_###.\n###figure_17### ###figure_18### The additional phase-shifting can result in incorrect codification, particularly if the differential voltage in the red area depicted in Figure 19 ###reference_### exceeds the 0.5V threshold. To overcome this limitation, an additional re-phaser stage represented by the orange area in the receiver reported in Figure 18 ###reference_### reverses the CPSM applied by the CAN-MM Type-B transmitter. This block is placed at the very beginning of the reception process. Once the re-phasing is completed, the standard CAN-MM receiver, which includes the standard CAN transceiver and the CAN-MM decoder, work in parallel to extract their respective data from the re-phased frame.\nThe additional protection to noise of CAN-MM Type-B across all frequency ranges comes with the cost of adding an upstream hardware re-phaser block to the CAN transceiver when it functions as a receiver.\n###figure_19### An LT-Spice model was developed to validate the robustness of the CAN-MM Type-B architecture (Figure 20 ###reference_###).\nMAC code 1 is encoded by adding a carrier with a shifting phase on CANL, allowing for greater robustness during decoding activities. However, in certain regions, the phase shifting can cause the differential voltage between these signals to exceed the 0.5V limit. Thus, as shown in Figure 21 ###reference_###, the signal is shifted back before decoding, obtaining full synchronization.\n###figure_20### ###figure_21### Figure 22 ###reference_### presents a comparative comparison between CAN-MM and CAN-MM Type-B to validate the design robustness. This experiment applies a noise signal with a 140mV amplitude to the original CAN-MM architecture model. The investigation is completed only for completeness since the resulting signal is clearly out of specification. As a result of the high noise level, the receiver could not extract the correct MAC bit-stream, and the output was a MAC bit-stream stuck to 1. However, in the case of CAN-MM Type-B, despite a noise signal with an amplitude of 200mV, the receiver correctly decoded the MAC stream.\n###figure_22### By referring to Figure 23 ###reference_###, we have calculated the signal-to-noise ratio (SNR) for this scenario to be approximately 17.32 dB. This high SNR value underscores the signal\u2019s robustness, affirming its clear distinction from the surrounding background noise.\n###figure_23###"
},
{
"section_id": "8",
"parent_section_id": null,
"section_name": "VIII Security Analysis",
"text": "This section delves into the security aspects of the CAN-MM architecture, particularly addressing attack models outlined in Section II ###reference_###.\nThe main objective of CAN-MM is to support a full CAN 2.0 vehicle network security by embedding a SecOC compatible MAC code within each payload frame, matching the same level of protection of CAN FD.\nMoreover, it supports security against threats such as MitM and replay attacks due to the presence of the MAC mechanism that neutralizes those types of attacks. This capability also includes the more recent Janus attack, as described by the author [37 ###reference_b37###].\nCAN-MM may also neutralize Cloak attacks by maintaining payload integrity, even amidst bit modifications. Leveraging the sample rate of two receivers will be more complex if the attacker also must coherently switch the modulated MAC. Such complexity will narrow the timing window where the attack is effective, as discussed in the original paper [38 ###reference_b38###].\nWhen a significant challenge arises when the system is overwhelmed by an excessive number of MACs that need to be validated [39 ###reference_b39###], the validation process demands intensive cryptographic computations, potentially compromising the system\u2019s ability to adhere to real-time deadlines. This issue becomes particularly acute with the influx of numerous fraudulent MACs. The CAN-MM system introduces enhanced security measures against those kinds of attacks."
},
{
"section_id": "9",
"parent_section_id": null,
"section_name": "IX Conclusion",
"text": "This paper presented an efficient solution to mitigate security concerns within the automotive domain\u2019s fundamental communication protocol, the CAN. The proposed solution, CAN-MM, facilitates the transmission of MAC payloads in standard CAN to complement any security schemas based on it efficiently. The support of the MAC transmission also safeguards the automotive communication system against MitM and replay attacks.\nThe CAN-MM architecture, developed to upgrade communication hardware for upcoming security regulations, maintains compatibility with existing CAN devices, avoiding the necessity for a complete system or vehicle architecture overhaul. This hybrid networking capability offers flexibility to designers, minimizing the requirement for updating electronic components to the new generation and thereby reducing the cost of transitioning a vehicle fleet into the cyber-secure domain.\nAdditionally, an improved Type-B version of CAN-MM addresses potential demodulation issues without sacrificing backward compatibility. While this modified version may compromise some degree of backward compatibility, the applied modulation technology to the CAN protocol can be extended not only to version 2.0 but also to other existing versions that already incorporate the MAC."
}
],
"appendix": [],
"tables": {},
"image_paths": {
"1": {
"figure_path": "2206.02603v3_figure_1.png",
"caption": "Figure 1: Vehicle CAN network surface attack scheme. A small CAN vehicle network scheme composed of 4 modules: ECM, TCM, DEFC, and VGT. These ECUs communicate with sensors and actuators in real-time, making integration essential for their operation. (A) Corrupted vehicle CAN node runs unauthorized code. (B) Attack vector through external CAN module plugged upstream to CAN victim node. (C) The external CAN module directly accesses the OBD port inside the vehicle cabin.",
"url": "http://arxiv.org/html/2206.02603v3/"
},
"2": {
"figure_path": "2206.02603v3_figure_2.png",
"caption": "Figure 2: Physical Electrical CAN-MM Signal Scheme",
"url": "http://arxiv.org/html/2206.02603v3/"
},
"3": {
"figure_path": "2206.02603v3_figure_3.png",
"caption": "Figure 3: Application of the CAN-MM technology to both CAN 2.0 and CAN FD frames",
"url": "http://arxiv.org/html/2206.02603v3/"
},
"4": {
"figure_path": "2206.02603v3_figure_4.png",
"caption": "Figure 4: CAN-MM Transmitter & Receiver block scheme",
"url": "http://arxiv.org/html/2206.02603v3/"
},
"5": {
"figure_path": "2206.02603v3_figure_5.png",
"caption": "Figure 5: CAN-MM MAC decoder Type-A block scheme",
"url": "http://arxiv.org/html/2206.02603v3/"
},
"6": {
"figure_path": "2206.02603v3_figure_6.png",
"caption": "Figure 6: Block scheme of the CAN-MM validation setup",
"url": "http://arxiv.org/html/2206.02603v3/"
},
"7": {
"figure_path": "2206.02603v3_figure_7.png",
"caption": "Figure 7: SNR graph for real CAN recorded signals",
"url": "http://arxiv.org/html/2206.02603v3/"
},
"8": {
"figure_path": "2206.02603v3_figure_8.png",
"caption": "Figure 8: CAN-MM Transceiver - Stage 1 - SPICE Block",
"url": "http://arxiv.org/html/2206.02603v3/"
},
"9": {
"figure_path": "2206.02603v3_figure_9.png",
"caption": "Figure 9: CAN-MM Transceiver - Stage 2 - SPICE Block",
"url": "http://arxiv.org/html/2206.02603v3/"
},
"10": {
"figure_path": "2206.02603v3_figure_10.png",
"caption": "Figure 10: CAN-MM Receiver - Stage 1 - SPICE Block",
"url": "http://arxiv.org/html/2206.02603v3/"
},
"11": {
"figure_path": "2206.02603v3_figure_11.png",
"caption": "Figure 11: CAN-MM Receiver - Stage 2 - SPICE Block",
"url": "http://arxiv.org/html/2206.02603v3/"
},
"12": {
"figure_path": "2206.02603v3_figure_12.png",
"caption": "Figure 12: CAN-MM Hardware Concept Scheme",
"url": "http://arxiv.org/html/2206.02603v3/"
},
"13": {
"figure_path": "2206.02603v3_figure_13.png",
"caption": "Figure 13: CAN-MM transmitter output",
"url": "http://arxiv.org/html/2206.02603v3/"
},
"14": {
"figure_path": "2206.02603v3_figure_14.png",
"caption": "Figure 14: CAN-MM receiver signals",
"url": "http://arxiv.org/html/2206.02603v3/"
},
"15": {
"figure_path": "2206.02603v3_figure_15.png",
"caption": "Figure 15: CAN 2.0 transceiver",
"url": "http://arxiv.org/html/2206.02603v3/"
},
"16": {
"figure_path": "2206.02603v3_figure_16.png",
"caption": "Figure 16: CAN-MM-H acquired by Oscilloscope",
"url": "http://arxiv.org/html/2206.02603v3/extracted/2206.02603v3/031_Figure+figure16.png"
},
"17": {
"figure_path": "2206.02603v3_figure_17.png",
"caption": "Figure 17: CAN-MM Type-B physical signals scheme",
"url": "http://arxiv.org/html/2206.02603v3/"
},
"18": {
"figure_path": "2206.02603v3_figure_18.png",
"caption": "Figure 18: CAN-MM Type-B Transmitter& Receiver Block scheme",
"url": "http://arxiv.org/html/2206.02603v3/"
},
"19": {
"figure_path": "2206.02603v3_figure_19.png",
"caption": "Figure 19: Critical Area due to shifting phase for codification correctness",
"url": "http://arxiv.org/html/2206.02603v3/"
},
"20": {
"figure_path": "2206.02603v3_figure_20.png",
"caption": "Figure 20: CAN-MM Type-B Physical Signal with the shifted carrier on CAN-L",
"url": "http://arxiv.org/html/2206.02603v3/"
},
"21": {
"figure_path": "2206.02603v3_figure_21.png",
"caption": "Figure 21: CAN-MM Type-B filter scheme",
"url": "http://arxiv.org/html/2206.02603v3/"
},
"22": {
"figure_path": "2206.02603v3_figure_22.png",
"caption": "Figure 22: CAN-MM Type-A vs. CAN-MM Type-B Noise capability performances",
"url": "http://arxiv.org/html/2206.02603v3/"
},
"23": {
"figure_path": "2206.02603v3_figure_23.png",
"caption": "Figure 23: SNR CAN-MM TypeB Graph",
"url": "http://arxiv.org/html/2206.02603v3/"
}
},
"validation": true,
"references": [],
"url": "http://arxiv.org/html/2206.02603v3"
}
20240522/2206.09677v5.json
ADDED
The diff for this file is too large to render.
See raw diff

20240522/2206.14273v3.json
ADDED
@@ -0,0 +1,242 @@
{
"title": "Asymptotic bounds for the number of closed and privileged words",
"abstract": "A word has a border if is a non-empty proper prefix and suffix of . A word is said to be closed if is of length at most or if has a border that occurs exactly twice in . A word is said to be privileged if is of length at most or if has a privileged border that occurs exactly twice in . Let (resp. ) be the number of length- closed (resp. privileged) words over a -letter alphabet. In this paper, we improve existing upper and lower bounds on and . We completely resolve the asymptotic behaviour of . We also nearly completely resolve the asymptotic behaviour of by giving a family of upper and lower bounds that are separated by a factor that grows arbitrarily slowly.",
"sections": [
{
"section_id": "1",
"parent_section_id": null,
"section_name": "Introduction",
"text": "Let denote the -letter alphabet . Throughout this paper, we denote the length of a word as . A word is said to be a factor of a word if for some words , . A word has a border if is a non-empty proper prefix and suffix of . A word that has a border is said to be bordered; otherwise, it is said to be unbordered. A word is said to be closed if or if has a border that occurs exactly twice in . If is a border and occurs in exactly twice, then we say is closed by . It is easy to see that if a word is closed by a word , then must be the largest border in ; otherwise would occur more than two times in . A word is said to be privileged if or if is closed by a privileged word.\nThe English word entanglement has the border ent and only contains two occurrences of ent. Thus, entanglement is a closed word, closed by ent. Since and ent is unbordered and therefore not privileged, we have that entanglement is not privileged.\nThe English word alfalfa is closed by alfa. Furthermore, alfa is closed by a. But , so alfa is privileged and therefore so is alfalfa.\nThe only border of the English word eerie is e and e appears times in the word. Thus, eerie is neither closed nor privileged.\nClosed words were introduced relatively recently by Fici [5 ###reference_b5###] as a way to classify Trapezoidal and Sturmian words. However, there are multiple equivalent formulations of closed words that have been defined at different times. Closed words are equivalent to codewords in prefix-synchronized codes [8 ###reference_b8###, 9 ###reference_b9###]. Closed words are also equivalent to periodic-like words [3 ###reference_b3###]. A period of a word is an integer such that for all . A length- word is said to be periodic if it has a period of length . In applications that require the analysis of long words, like DNA sequence analysis, the smallest period is typically much larger than half the length of the word. Periodic-like words were introduced as a generalization of periodic words that preserve some desirable properties of periodic words.\nPrivileged words [13 ###reference_b13###] were introduced as a technical tool related to a problem in dynamical systems and discrete geometry. They were originally defined as a generalization of rich words by tweaking the definition of a complete first return. A complete first return to a word is a word that starts and ends with , and contains only two occurrences of . A palindrome is a word that reads the same forwards and backwards. A word is said to be rich if and only if every palindromic factor of is a complete first return to a shorter palindrome. Interestingly, rich words contain the maximum possible number of distinct palindromic factors. Privileged words were then defined as an iterated complete first return. A word is privileged if and only if it is a complete first return to a shorter privileged word. Single letters and the empty word are defined to be privileged in order to make this definition meaningful.\nSince their introduction, there has been much research into the properties of closed and privileged words [1 ###reference_b1###, 2 ###reference_b2###, 4 ###reference_b4###, 6 ###reference_b6###, 12 ###reference_b12###, 16 ###reference_b16###, 17 ###reference_b17###, 20 ###reference_b20###]. 
One problem that has received some interest lately [7 ###reference_b7###, 14 ###reference_b14###, 18 ###reference_b18###, 19 ###reference_b19###] is to find good upper and lower bounds for the number of closed and privileged words.\nLet C_k(n) denote the number of length-n closed words over Sigma_k. Let C_k(n, j) denote the number of length-n closed words over Sigma_k that are closed by a length-j word. Let P_k(n) denote the number of length-n privileged words over Sigma_k. Let P_k(n, j) denote the number of length-n privileged words over Sigma_k that are closed by a length-j privileged word. See Tables 1 ###reference_### and 2 ###reference_### for sample values of C_k(n, j) and P_k(n, j) for small n, j. See sequences A226452 ###reference_oeis.org/A226452### and A231208 ###reference_oeis.org/A231208### in the On-Line Encyclopedia of Integer Sequences [15 ###reference_b15###] for sample values of C_2(n) and P_2(n).\nEvery privileged word is a closed word, so any upper bound on C_k(n) is also an upper bound on P_k(n). Furthermore, any lower bound on P_k(n) is also a lower bound on C_k(n).\nForsyth et al. [7 ###reference_b7###] showed that P_2(n) >= c * 2^n / n^2 for all n >= N, for some constants c and N.\nNicholson and Rampersad [14 ###reference_b14###] improved and generalized this bound by showing that there are constants c and N such that P_k(n) >= c * k^n / (n (ln n)^2) for all n >= N.\nRukavicka [18 ###reference_b18###] showed that there is a constant c such that P_2(n) <= c * 2^n (ln n) / sqrt(n) for all n >= 2.\nRukavicka [19 ###reference_b19###] also gave a related family of lower bounds on the number of length-n privileged words.\nThe best upper and lower bounds for both C_k(n) and P_k(n) are widely separated, and can be much improved. In this paper, we improve the existing upper and lower bounds on C_k(n) and P_k(n). Let ln^(1)(n) = ln n and ln^(i)(n) = ln(ln^(i-1)(n)) for i >= 2. We prove the following two theorems.\nTheorem 2. Let k >= 2 be an integer.\n(a) There exist constants c_1 and N_1 such that C_k(n) >= c_1 * k^n / n for all n >= N_1.\n(b) There exist constants c_2 and N_2 such that C_k(n) <= c_2 * k^n / n for all n >= N_2.\nTheorem 3. Let k >= 2 be an integer.\n(a) For all i >= 1 there exist constants c and N such that\nP_k(n) >= c * k^n / (n * ln^(1)(n) * ln^(2)(n) ... ln^(i-1)(n) * (ln^(i)(n))^2)\nfor all n >= N.\n(b) For all i >= 1 there exist constants c and N such that\nP_k(n) <= c * k^n / (n * ln^(1)(n) * ln^(2)(n) ... ln^(i-1)(n))\nfor all n >= N.\nBefore we proceed, we give a heuristic argument as to why C_k(n) is on the order of k^n/n. Consider a \u201crandom\u201d length-n word w. Let j = log_k(n) + c where c is a constant such that j is a positive integer. There is a k^(-j) chance that w has a length-j border. Suppose w has a length-j border u. Now suppose we drop the first and last character of w to get w'. If w' were randomly chosen (which it is not), then we could use the linearity of expectation to get that the expected number of occurrences of u in w' is approximately n * k^(-j) = k^(-c). Thus, for large enough c we have that u does not occur in w' with high probability, and so w is closed. Therefore, there are approximately k^(-c) * k^n / n length-n closed words."
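The definitions above translate directly into code. The following Python sketch (the function names are mine) checks the worked examples:

    # A word is closed if it has length <= 1 or some border occurs exactly
    # twice in it; it is privileged if it is closed by a privileged word
    # (single letters and the empty word count as privileged).
    def occurrences(u: str, w: str) -> int:
        return sum(1 for i in range(len(w) - len(u) + 1) if w[i:i + len(u)] == u)

    def closing_border(w: str) -> str | None:
        # Only the largest border can close w, but scanning all borders is fine.
        for j in range(len(w) - 1, 0, -1):
            u = w[:j]
            if w.endswith(u) and occurrences(u, w) == 2:
                return u
        return None

    def is_closed(w: str) -> bool:
        return len(w) <= 1 or closing_border(w) is not None

    def is_privileged(w: str) -> bool:
        if len(w) <= 1:
            return True
        u = closing_border(w)
        return u is not None and is_privileged(u)

    print(is_closed("entanglement"), is_privileged("entanglement"))  # True False
    print(is_closed("alfalfa"), is_privileged("alfalfa"))            # True True
    print(is_closed("eerie"), is_privileged("eerie"))                # False False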
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "Preliminary results",
"text": "In this section we give some necessary results and definitions in order to prove our main results. Also throughout this paper, we use \u2019s, \u2019s, and \u2019s to denote positive real constants (dependent on ).\nLet be a length- word. Suppose is closed by a length- word . Since is also the largest border of , it follows that cannot be closed by another word. This implies that\nfor .\nLet denote the number of length- words over that are closed by the word . Let denote the number of length- words over that do not contain the word as a factor.\nThe auto-correlation [9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###] of a length- word is a length- binary word where if and only if has a border of length . The auto-correlation polynomial of is defined as\nFor example, the word has auto-correlation and auto-correlation polynomial .\nWe now prove two technical lemmas that will be used in the proofs of Theorem 2 ###reference_orem2### (b) ###reference_i2### and Theorem 3 ###reference_orem3### (b) ###reference_###.\nLet be integers, and let be a real number such that . Then\nThe case when was proved in a paper by Forsyth et al. [7 ###reference_b7###, Lemma 9]. We generalize their proof to .\nWhen , we have . So suppose . By the binomial theorem, we have\nSo to show that , it is sufficient to show that\nfor .\nBy assumption we have that , so and thus . Adding to both sides we get , and so . If , then . This implies that , and\nTherefore letting , we have that . Multiplying both sides by we get , which proves\n(1 ###reference_###).\nNow we prove that . Going back to the binomial expansion of , we have\nSo to show that , it is sufficient to show that\nfor . But we have already proved that . Letting , we have that . Multiplying both sides by we get .\n\u220e\nLet and be integers. Then for any constant , we have\nWhen we have .\nThe proof is by induction on . Since we will use L\u2019H\u00f4pital\u2019s rule to evaluate the limit, we first compute the derivative of with respect to for any constant . We have\nIn the base case, when , we have\nSuppose . Then we have\n\u220e"
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "Closed words",
"text": ""
},
{
"section_id": "3.1",
"parent_section_id": "3",
"section_name": "Lower bound",
"text": "We first state a useful lemma from a paper of Nicholson and Rampersad [14 ###reference_b14###].\nLet be an integer. For every , there is a unique integer such that\nLet be a length- word. There exist constants and such that for we have\nWe now use the previous lemma to prove Theorem 2 ###reference_orem2### (a) ###reference_i1###.\nThe number of length- words closed by a length- word is clearly equal to the sum, over all length- words , of the number of length- words closed by . Thus we have that\nLet be such that . By Lemma 6 ###reference_orem6### there exist constants and such that for we have . Clearly for all . Since is asymptotically much smaller than , there exists a constant such that for all . Thus for we have\nfor some constant .\n\u220e"
},
{
"section_id": "3.2",
"parent_section_id": "3",
"section_name": "Upper bound",
"text": "Before we proceed with upper bounding , we briefly outline the direction of the proof. First, we begin by bounding for and . We show that for , the number of length- words closed by a particular length- word is bounded by the number of words of length that do not have as a factor. For we prove that is negligibly small. Next, we prove upper bounds on the number of words that do not have as a factor, allowing us to finally bound .\nLet , , and be integers such that and . Let be a length- word. Then\nRecall that is the number of length- words that are closed by the word . Also recall that is the number of length- words that do not contain the word as a factor.\nLet be a length- word closed by where . Then we can write where does not contain as a factor. This immediately implies that . But from a result of Guibas and Odlyzko [11 ###reference_b11###, Section 7], we have that if for words , , then for all . The auto-correlation polynomial only has or as coefficients, depending on the \u2019s and \u2019s in the auto-correlation. Thus, the auto-correlation that maximizes is clearly . The words that achieve this auto-correlation are words of the form where . Therefore we have\n\u220e\nLet , , and be integers such that and . Then\nThe number of length- words closed by a length- word is equal to the sum, over all length- words , of the number of length- words closed by . Thus we have that\nBy Lemma 7 ###reference_orem7### we have that for all length- words . Therefore\n\u220e\nLet and integers. Then\nIt follows from Lemma 8 ###reference_orem8### that\nNow we show that\nLet be a word of length that is closed by a word of length . Then for some words , . So for all , . This implies that where is the length- prefix of , , and is the length- prefix of . Since , we have that . We see that is fully determined by the word . So since , we have . Thus\n\u220e\nLet , , and be integers. Then\nIf , then any length- word is shorter than , and thus cannot contain as a factor. So .\nSuppose . Let be a length- word that does not contain as a factor. Let us look at the symbols that ends in. Since does not contain , we have that ends in anywhere from to zeroes. So is of the form where is an integer with , , and is a length- word that does not contain as a factor. There are choices for , and choices for . So there are words of the form . Summing over all possible gives\n\u220e\nLet , , and be integers. Then\nCompute with the recurrence from Lemma 10 ###reference_orem10### and the result follows.\n\u220e\nLet , , and be integers. Then\nWe prove by induction on . In the base case, when , we have .\nSuppose . Then\n\u220e\nSince satisfies a linear recurrence, we know that the asymptotic behaviour of is determined by the root of maximum modulus of the polynomial . We use this fact to find an upper bound for .\nLet and be integers. Let\nThen .\nSince , we have that . This implies that\n\u220e\nLet be integers. Let be an integer such that . Then .\nThe proof is by induction on . By Corollary 12 ###reference_orem12### we have that\nfor .\nSuppose, for the base case, that . Let . Then\nClearly for all , so .\nSuppose that . Furthermore let where is an integer such that . Notice that . Then\nTo prove the desired bound, namely that , it is sufficient to show that . We begin by upper bounding with Lemma 4 ###reference_orem4###. We have\nIt is easy to verify that and for all . Thus, continuing from (2 ###reference_###), we have\n\u220e\nLet , , and be integers such that and . Then .\nThe proof is by induction on . 
The base case, when , is taken care of in Lemma 14 ###reference_orem14###.\nSuppose . Then\nBy Theorem 13 ###reference_orem13###, we have that . Therefore\n\u220e\nFirst notice that , since is just the number of length- words that do not contain .\nLet be a positive integer such that the following inequalities hold for all .\nNow we bound the sum in (3 ###reference_###).\nLet . Notice that is monotonically decreasing on the interval . So for we have that . Thus\nGoing back to (3 ###reference_###) we have\nEvaluating and bounding the definite integral, we have\nPutting everything together, we have that\nfor some constant .\n\u220e"
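A quick numeric check of the counting recurrence in Lemma 10, writing D(n) for the number of length-n words over a k-letter alphabet that avoid a run of l zeros (the paper's own symbol for this count was lost in extraction). Conditioning on the i trailing zeros (0 <= i <= l-1), such a word is x.a.0^i with a one of the k-1 nonzero letters, giving D(n) = sum_{i=0}^{l-1} (k-1) * D(n-1-i) for n >= l, with D(n) = k^n for n < l.

    def count_avoiding_zero_run(k: int, l: int, n_max: int) -> list[int]:
        # Base cases: every word shorter than l avoids 0^l.
        d = [k**n for n in range(min(l, n_max + 1))]
        # Recurrence from Lemma 10: condition on the number of trailing zeros.
        for n in range(l, n_max + 1):
            d.append(sum((k - 1) * d[n - 1 - i] for i in range(l)))
        return d

    print(count_avoiding_zero_run(2, 2, 8))
    # [1, 2, 3, 5, 8, 13, 21, 34, 55] -- avoiding 00 gives Fibonacci numbers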
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "Privileged words",
"text": ""
},
{
"section_id": "4.1",
"parent_section_id": "4",
"section_name": "Lower bound",
"text": "In this section we provide a family of lower bounds for the number of length- privileged words. We use induction to prove these bounds. The basic idea is that we start with the lower bound by Nicholson and Rampersad, and then use it to bootstrap ourselves to better and better lower bounds.\nThe proof is by induction on . Let be such that . We clearly have for all . Let be a length- privileged word. By Lemma 6 ###reference_orem6### we have that there exist constants and such that for all . So the base case, when , is taken care of.\nSuppose . By induction we have that there exist constants and such that\nfor all . By Lemma 6 ###reference_orem6### we have\nfor . Since , we have that for all . Thus continuing from above we have\nfor all where .\n\u220e"
},
{
"section_id": "4.2",
"parent_section_id": "4",
"section_name": "Upper bound",
"text": "In Theorem 2 ###reference_orem2### (b) ###reference_i2### we proved that . Since every privileged word is also a closed word, this is also shows that . This bound improves on the existing bound on privileged words but it does not show that and behave differently asymptotically. We show that is much smaller than asymptotically by proving upper bounds on that show .\nLet , , and be integers such that and . Then\nThe number of length- privileged words closed by a length- privileged word is equal to the sum, over all length- privileged words , of the number of length- words closed by . Thus we have that\nBy Lemma 7 ###reference_orem7### we have that for all length- words . Therefore\n\u220e\nFor we can use Lemma 16 ###reference_orem16### to bound . But for , we can use Corollary 9 ###reference_orem9### and the fact that . We get\nThe proof is by induction on . The base case, when , is taken care of by Theorem 2 ###reference_orem2### (b) ###reference_i2###.\nSuppose . Then there exist constants and such that\nfor all .\nWe now bound . First, we let be a constant such that the following inequalities hold for all . We have\nThe sum on line (4 ###reference_###) is clearly convergent. We have\nNow we upper bound the sum\nIt is well-known that for . Thus, letting , we have\nWe reverse the order of the series, by letting be such that . We also shift the index of the series down by . We have\nThe first and second sum are both clearly convergent. It is also easy to show that both of them can be bounded by a constant multiplied by the first term. Thus, we have that\nPutting everything together and continuing from line (4 ###reference_###), we get\nfor some constant .\n\u220e"
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "Open problems",
"text": "We conclude by posing some open problems.\nIn this paper we showed that . In other words, we showed that can be bounded above and below by a constant times for sufficiently large. Can we do better than this?\nDoes the limit\nexist? If it does exist, what does the limit evaluate to? If it does not exist, evaluate\nIn this paper, we also gave a family of upper and lower bounds for . But for every , the upper and lower bounds on are asymptotically separated by a factor of . Let denote the smallest positive integer such that . Let denote the product\nIs ?\nThis problem can probably be solved by a careful analysis of the constants introduced on every step in Section 4 ###reference_###.\nDoes the limit\nexist? If it does, what does the limit evaluate to? If it does not exist, evaluate\nWe suspect that the first limit in problem 17 ###reference_8### and the first limit in problem 19 ###reference_01### do not exist due to a result of Guibas and Odlyzko [9 ###reference_b9###] on prefix-synchronized codes. Every codeword in a prefix-synchronized code of length begins with the same prefix of length . Each codeword is a prefix of a closed word of length that is closed by . They proved that, for , the size of a maximal prefix-synchronized code of length oscillates such that the limit does not exist. They mention that their approach can be generalized for , but that the proof is much more complicated."
},
{
"section_id": "6",
"parent_section_id": null,
"section_name": "Acknowledgements",
"text": "Thanks to Jeffrey Shallit for introducing me to this problem and for helpful discussions and suggestions."
}
],
"appendix": [],
"tables": {
"1": {
"table_html": "<figure class=\"ltx_table\" id=\"S1.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S1.T1.11\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S1.T1.11.11\">\n<th class=\"ltx_td ltx_nopad ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.1.1.1\"><svg height=\"14.47\" overflow=\"visible\" version=\"1.1\" width=\"16.62\"><g transform=\"translate(0,14.47) scale(1,-1)\"><path d=\"M 0,14.47 16.62,0\" stroke=\"#000000\" stroke-width=\"0.4\"></path><g class=\"ltx_svg_fog\" transform=\"translate(0,0)\"><g transform=\"translate(0,5.96) scale(1, -1)\"><foreignobject height=\"5.96\" overflow=\"visible\" width=\"8.31\">\n<span class=\"ltx_inline-block\" id=\"S1.T1.1.1.1.pic1.1.1\">\n<span class=\"ltx_inline-block ltx_align_left\" id=\"S1.T1.1.1.1.pic1.1.1.1\">\n<span class=\"ltx_p\" id=\"S1.T1.1.1.1.pic1.1.1.1.1\"></span>\n</span>\n</span></foreignobject></g></g><g class=\"ltx_svg_fog\" transform=\"translate(11.62,5.96)\"><g transform=\"translate(0,8.51) scale(1, -1)\"><foreignobject height=\"8.51\" overflow=\"visible\" width=\"5\">\n<span class=\"ltx_inline-block\" id=\"S1.T1.1.1.1.pic1.2.1\">\n<span class=\"ltx_inline-block ltx_align_right\" id=\"S1.T1.1.1.1.pic1.2.1.1\">\n<span class=\"ltx_p\" id=\"S1.T1.1.1.1.pic1.2.1.1.1\"></span>\n</span>\n</span></foreignobject></g></g></g></svg></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S1.T1.2.2.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S1.T1.3.3.3\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S1.T1.4.4.4\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S1.T1.5.5.5\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S1.T1.6.6.6\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S1.T1.7.7.7\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S1.T1.8.8.8\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S1.T1.9.9.9\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S1.T1.10.10.10\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S1.T1.11.11.11\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S1.T1.11.12.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.11.12.1.1\">10</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.11.12.1.2\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.11.12.1.3\">30</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.11.12.1.4\">70</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.11.12.1.5\">50</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.11.12.1.6\">30</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.11.12.1.7\">12</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.11.12.1.8\">6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.11.12.1.9\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.11.12.1.10\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S1.T1.11.12.1.11\">0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.11.13.2\">\n<th class=\"ltx_td 
ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S1.T1.11.13.2.1\">11</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.13.2.2\">2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.13.2.3\">42</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.13.2.4\">118</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.13.2.5\">96</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.13.2.6\">54</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.13.2.7\">30</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.13.2.8\">13</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.13.2.9\">6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.13.2.10\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S1.T1.11.13.2.11\">2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.11.14.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S1.T1.11.14.3.1\">12</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.14.3.2\">2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.14.3.3\">60</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.14.3.4\">200</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.14.3.5\">182</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.14.3.6\">114</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.14.3.7\">54</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.14.3.8\">30</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.14.3.9\">12</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.14.3.10\">6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S1.T1.11.14.3.11\">2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.11.15.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S1.T1.11.15.4.1\">13</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.15.4.2\">2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.15.4.3\">88</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.15.4.4\">338</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.15.4.5\">346</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.15.4.6\">214</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.15.4.7\">126</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.15.4.8\">54</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.15.4.9\">30</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.15.4.10\">12</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S1.T1.11.15.4.11\">6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.11.16.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S1.T1.11.16.5.1\">14</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.16.5.2\">2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.16.5.3\">132</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.16.5.4\">570</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.16.5.5\">640</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.16.5.6\">432</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.16.5.7\">232</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.16.5.8\">126</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.16.5.9\">54</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.16.5.10\">30</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S1.T1.11.16.5.11\">12</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.11.17.6\">\n<th class=\"ltx_td ltx_align_center ltx_th 
ltx_th_row ltx_border_l ltx_border_r\" id=\"S1.T1.11.17.6.1\">15</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.17.6.2\">2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.17.6.3\">202</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.17.6.4\">962</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.17.6.5\">1192</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.17.6.6\">828</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.17.6.7\">474</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.17.6.8\">240</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.17.6.9\">126</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.17.6.10\">54</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S1.T1.11.17.6.11\">30</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.11.18.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S1.T1.11.18.7.1\">16</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.18.7.2\">2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.18.7.3\">314</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.18.7.4\">1626</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.18.7.5\">2220</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.18.7.6\">1612</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.18.7.7\">908</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.18.7.8\">492</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.18.7.9\">240</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.18.7.10\">126</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S1.T1.11.18.7.11\">54</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.11.19.8\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S1.T1.11.19.8.1\">17</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.19.8.2\">2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.19.8.3\">494</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.19.8.4\">2754</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.19.8.5\">4128</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.19.8.6\">3112</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.19.8.7\">1822</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.19.8.8\">956</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.19.8.9\">504</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.19.8.10\">240</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S1.T1.11.19.8.11\">126</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.11.20.9\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S1.T1.11.20.9.1\">18</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.20.9.2\">2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.20.9.3\">784</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.20.9.4\">4676</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.20.9.5\">7670</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.20.9.6\">6024</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.20.9.7\">3596</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.20.9.8\">1934</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.20.9.9\">982</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.20.9.10\">504</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S1.T1.11.20.9.11\">240</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.11.21.10\">\n<th class=\"ltx_td 
ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S1.T1.11.21.10.1\">19</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.21.10.2\">2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.21.10.3\">1252</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.21.10.4\">7960</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.21.10.5\">14264</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.21.10.6\">11636</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.21.10.7\">7084</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.21.10.8\">3828</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.21.10.9\">1992</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.21.10.10\">990</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S1.T1.11.21.10.11\">504</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.11.22.11\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_r\" id=\"S1.T1.11.22.11.1\">20</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S1.T1.11.22.11.2\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S1.T1.11.22.11.3\">2008</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S1.T1.11.22.11.4\">13588</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S1.T1.11.22.11.5\">26524</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S1.T1.11.22.11.6\">22512</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S1.T1.11.22.11.7\">13928</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S1.T1.11.22.11.8\">7632</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S1.T1.11.22.11.9\">3946</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S1.T1.11.22.11.10\">2026</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S1.T1.11.22.11.11\">990</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Some values of for , where and .</figcaption>\n</figure>",
|
| 70 |
+
"capture": "Table 1: Some values of for , where and ."
|
| 71 |
+
},
|
| 72 |
+
"2": {
|
| 73 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S1.T2\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S1.T2.11\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S1.T2.11.11\">\n<th class=\"ltx_td ltx_nopad ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T2.1.1.1\"><svg height=\"14.47\" overflow=\"visible\" version=\"1.1\" width=\"16.62\"><g transform=\"translate(0,14.47) scale(1,-1)\"><path d=\"M 0,14.47 16.62,0\" stroke=\"#000000\" stroke-width=\"0.4\"></path><g class=\"ltx_svg_fog\" transform=\"translate(0,0)\"><g transform=\"translate(0,5.96) scale(1, -1)\"><foreignobject height=\"5.96\" overflow=\"visible\" width=\"8.31\">\n<span class=\"ltx_inline-block\" id=\"S1.T2.1.1.1.pic1.1.1\">\n<span class=\"ltx_inline-block ltx_align_left\" id=\"S1.T2.1.1.1.pic1.1.1.1\">\n<span class=\"ltx_p\" id=\"S1.T2.1.1.1.pic1.1.1.1.1\"></span>\n</span>\n</span></foreignobject></g></g><g class=\"ltx_svg_fog\" transform=\"translate(11.62,5.96)\"><g transform=\"translate(0,8.51) scale(1, -1)\"><foreignobject height=\"8.51\" overflow=\"visible\" width=\"5\">\n<span class=\"ltx_inline-block\" id=\"S1.T2.1.1.1.pic1.2.1\">\n<span class=\"ltx_inline-block ltx_align_right\" id=\"S1.T2.1.1.1.pic1.2.1.1\">\n<span class=\"ltx_p\" id=\"S1.T2.1.1.1.pic1.2.1.1.1\"></span>\n</span>\n</span></foreignobject></g></g></g></svg></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S1.T2.2.2.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S1.T2.3.3.3\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S1.T2.4.4.4\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S1.T2.5.5.5\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S1.T2.6.6.6\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S1.T2.7.7.7\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S1.T2.8.8.8\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S1.T2.9.9.9\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S1.T2.10.10.10\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S1.T2.11.11.11\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S1.T2.11.12.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T2.11.12.1.1\">10</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T2.11.12.1.2\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T2.11.12.1.3\">16</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T2.11.12.1.4\">22</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T2.11.12.1.5\">8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T2.11.12.1.6\">6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T2.11.12.1.7\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T2.11.12.1.8\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T2.11.12.1.9\">0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T2.11.12.1.10\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S1.T2.11.12.1.11\">0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T2.11.13.2\">\n<th class=\"ltx_td 
ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S1.T2.11.13.2.1\">11</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.13.2.2\">2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.13.2.3\">26</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.13.2.4\">38</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.13.2.5\">16</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.13.2.6\">10</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.13.2.7\">6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.13.2.8\">4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.13.2.9\">2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.13.2.10\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S1.T2.11.13.2.11\">2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T2.11.14.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S1.T2.11.14.3.1\">12</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.14.3.2\">2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.14.3.3\">42</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.14.3.4\">68</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.14.3.5\">30</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.14.3.6\">18</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.14.3.7\">4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.14.3.8\">6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.14.3.9\">2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.14.3.10\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S1.T2.11.14.3.11\">0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T2.11.15.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S1.T2.11.15.4.1\">13</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.15.4.2\">2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.15.4.3\">68</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.15.4.4\">122</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.15.4.5\">58</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.15.4.6\">38</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.15.4.7\">14</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.15.4.8\">10</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.15.4.9\">6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.15.4.10\">4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S1.T2.11.15.4.11\">2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T2.11.16.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S1.T2.11.16.5.1\">14</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.16.5.2\">2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.16.5.3\">110</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.16.5.4\">218</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.16.5.5\">108</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.16.5.6\">76</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.16.5.7\">20</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.16.5.8\">14</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.16.5.9\">8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.16.5.10\">6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S1.T2.11.16.5.11\">2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T2.11.17.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l 
ltx_border_r\" id=\"S1.T2.11.17.6.1\">15</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.17.6.2\">2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.17.6.3\">178</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.17.6.4\">390</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.17.6.5\">204</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.17.6.6\">148</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.17.6.7\">46</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.17.6.8\">24</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.17.6.9\">18</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.17.6.10\">14</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S1.T2.11.17.6.11\">6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T2.11.18.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S1.T2.11.18.7.1\">16</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.18.7.2\">2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.18.7.3\">288</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.18.7.4\">698</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.18.7.5\">384</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.18.7.6\">288</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.18.7.7\">86</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.18.7.8\">48</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.18.7.9\">16</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.18.7.10\">18</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S1.T2.11.18.7.11\">8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T2.11.19.8\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S1.T2.11.19.8.1\">17</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.19.8.2\">2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.19.8.3\">466</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.19.8.4\">1250</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.19.8.5\">724</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.19.8.6\">556</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.19.8.7\">178</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.19.8.8\">92</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.19.8.9\">36</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.19.8.10\">32</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S1.T2.11.19.8.11\">26</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T2.11.20.9\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S1.T2.11.20.9.1\">18</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.20.9.2\">2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.20.9.3\">754</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.20.9.4\">2240</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.20.9.5\">1364</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.20.9.6\">1076</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.20.9.7\">344</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.20.9.8\">190</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.20.9.9\">64</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.20.9.10\">36</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S1.T2.11.20.9.11\">28</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T2.11.21.10\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r\" 
id=\"S1.T2.11.21.10.1\">19</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.21.10.2\">2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.21.10.3\">1220</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.21.10.4\">4016</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.21.10.5\">2572</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.21.10.6\">2092</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.21.10.7\">688</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.21.10.8\">388</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.21.10.9\">136</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.21.10.10\">70</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S1.T2.11.21.10.11\">56</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T2.11.22.11\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_r\" id=\"S1.T2.11.22.11.1\">20</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S1.T2.11.22.11.2\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S1.T2.11.22.11.3\">1974</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S1.T2.11.22.11.4\">7204</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S1.T2.11.22.11.5\">4850</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S1.T2.11.22.11.6\">4068</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S1.T2.11.22.11.7\">1342</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S1.T2.11.22.11.8\">772</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S1.T2.11.22.11.9\">268</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S1.T2.11.22.11.10\">138</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S1.T2.11.22.11.11\">52</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Some values of for , where and .</figcaption>\n</figure>",
|
| 74 |
+
"capture": "Table 2: Some values of for , where and ."
|
| 75 |
+
}
|
| 76 |
+
},
|
| 77 |
+
"image_paths": {},
|
| 78 |
+
"validation": true,
|
| 79 |
+
"references": [
|
| 80 |
+
{
|
| 81 |
+
"1": {
|
| 82 |
+
"title": "On the number of closed factors in a word.",
|
| 83 |
+
"author": "G. Badkobeh, G. Fici, and Z. Lipt\u00e1k.",
|
| 84 |
+
"venue": "In Adrian-Horia Dediu, Enrico Formenti, Carlos Mart\u00edn-Vide, and Bianca Truthe, editors, Language and Automata Theory and Applications, pp. 381\u2013390, Cham, 2015. Springer International Publishing.",
|
| 85 |
+
"url": null
|
| 86 |
+
}
|
| 87 |
+
},
|
| 88 |
+
{
|
| 89 |
+
"2": {
|
| 90 |
+
"title": "Enumeration and structure of trapezoidal words.",
|
| 91 |
+
"author": "M. Bucci, A. De Luca, and G. Fici.",
|
| 92 |
+
"venue": "Theoret. Comput. Sci. 468 (2013), 12\u201322.",
|
| 93 |
+
"url": null
|
| 94 |
+
}
|
| 95 |
+
},
|
| 96 |
+
{
|
| 97 |
+
"3": {
|
| 98 |
+
"title": "Periodic-like words, periodicity, and boxes.",
|
| 99 |
+
"author": "A. Carpi and A. de Luca.",
|
| 100 |
+
"venue": "Acta Informatica 37(8) (2001), 597\u2013618.",
|
| 101 |
+
"url": null
|
| 102 |
+
}
|
| 103 |
+
},
|
| 104 |
+
{
|
| 105 |
+
"4": {
|
| 106 |
+
"title": "Open and closed prefixes of Sturmian words.",
|
| 107 |
+
"author": "A. De Luca and G. Fici.",
|
| 108 |
+
"venue": "In J. Karhum\u00e4ki, A. Lepist\u00f6, and L. Zamboni, editors, Combinatorics on Words, pp. 132\u2013142, Berlin, Heidelberg, 2013. Springer Berlin Heidelberg.",
|
| 109 |
+
"url": null
|
| 110 |
+
}
|
| 111 |
+
},
|
| 112 |
+
{
|
| 113 |
+
"5": {
|
| 114 |
+
"title": "A classification of trapezoidal words.",
|
| 115 |
+
"author": "G. Fici.",
|
| 116 |
+
"venue": "In P. Ambro\u017e, \u0160. Holub, and Z. Mas\u00e1kov\u00e1, editors, 8th International Conference on Words, WORDS 2011, Vol. 63 of Electronic Proceedings in Theoretical Computer Science, pp. 129\u2013137. 2011.",
|
| 117 |
+
"url": null
|
| 118 |
+
}
|
| 119 |
+
},
|
| 120 |
+
{
|
| 121 |
+
"6": {
|
| 122 |
+
"title": "Open and closed words.",
|
| 123 |
+
"author": "G. Fici.",
|
| 124 |
+
"venue": "Bull. European Assoc. Theor. Comput. Sci. , No. 123, (2017), 138\u2013147.",
|
| 125 |
+
"url": null
|
| 126 |
+
}
|
| 127 |
+
},
|
| 128 |
+
{
|
| 129 |
+
"7": {
|
| 130 |
+
"title": "Remarks on privileged words.",
|
| 131 |
+
"author": "M. Forsyth, A. Jayakumar, J. Peltom\u00e4ki, and J. Shallit.",
|
| 132 |
+
"venue": "Internat. J. Found. Comp. Sci. 27(4) (2016), 431\u2013442.",
|
| 133 |
+
"url": null
|
| 134 |
+
}
|
| 135 |
+
},
|
| 136 |
+
{
|
| 137 |
+
"8": {
|
| 138 |
+
"title": "Synchronization of binary messages.",
|
| 139 |
+
"author": "E. Gilbert.",
|
| 140 |
+
"venue": "IRE Trans. Info. Theory 6(4) (1960), 470\u2013477.",
|
| 141 |
+
"url": null
|
| 142 |
+
}
|
| 143 |
+
},
|
| 144 |
+
{
|
| 145 |
+
"9": {
|
| 146 |
+
"title": "Maximal prefix-synchronized codes.",
|
| 147 |
+
"author": "L. J. Guibas and A. M. Odlyzko.",
|
| 148 |
+
"venue": "SIAM J. Appl. Math. 35(2) (1978), 401\u2013418.",
|
| 149 |
+
"url": null
|
| 150 |
+
}
|
| 151 |
+
},
|
| 152 |
+
{
|
| 153 |
+
"10": {
|
| 154 |
+
"title": "Periods in strings.",
|
| 155 |
+
"author": "L. J. Guibas and A. M. Odlyzko.",
|
| 156 |
+
"venue": "J. Combin. Theory Ser. A 30(1) (1981), 19\u201342.",
|
| 157 |
+
"url": null
|
| 158 |
+
}
|
| 159 |
+
},
|
| 160 |
+
{
|
| 161 |
+
"11": {
|
| 162 |
+
"title": "String overlaps, pattern matching, and nontransitive games.",
|
| 163 |
+
"author": "L. J. Guibas and A. M. Odlyzko.",
|
| 164 |
+
"venue": "J. Combin. Theory Ser. A 30(2) (1981), 183\u2013208.",
|
| 165 |
+
"url": null
|
| 166 |
+
}
|
| 167 |
+
},
|
| 168 |
+
{
|
| 169 |
+
"12": {
|
| 170 |
+
"title": "Closed Ziv\u2013Lempel factorization of the -bonacci words.",
|
| 171 |
+
"author": "M. Jahannia, M. Mohammad-Noori, N. Rampersad, and M. Stipulanti.",
|
| 172 |
+
"venue": "Theoret. Comput. Sci. 918 (2022), 32\u201347.",
|
| 173 |
+
"url": null
|
| 174 |
+
}
|
| 175 |
+
},
|
| 176 |
+
{
|
| 177 |
+
"13": {
|
| 178 |
+
"title": "A characterization of subshifts with bounded powers.",
|
| 179 |
+
"author": "J. Kellendonk, D. Lenz, and J. Savinien.",
|
| 180 |
+
"venue": "Discrete Math. 313(24) (2013), 2881\u20132894.",
|
| 181 |
+
"url": null
|
| 182 |
+
}
|
| 183 |
+
},
|
| 184 |
+
{
|
| 185 |
+
"14": {
|
| 186 |
+
"title": "Improved estimates for the number of privileged words.",
|
| 187 |
+
"author": "J. Nicholson and N. Rampersad.",
|
| 188 |
+
"venue": "J. Integer Sequences 21 (2018), Article 18.3.8 (electronic).",
|
| 189 |
+
"url": null
|
| 190 |
+
}
|
| 191 |
+
},
|
| 192 |
+
{
|
| 193 |
+
"15": {
|
| 194 |
+
"title": "OEIS Foundation Inc. (2022), The On-Line Encyclopedia of Integer Sequences, https://oeis.org.",
|
| 195 |
+
"author": "N. J. A. Sloane et al.",
|
| 196 |
+
"venue": null,
|
| 197 |
+
"url": null
|
| 198 |
+
}
|
| 199 |
+
},
|
| 200 |
+
{
|
| 201 |
+
"16": {
|
| 202 |
+
"title": "Introducing privileged words: Privileged complexity of Sturmian words.",
|
| 203 |
+
"author": "J. Peltom\u00e4ki.",
|
| 204 |
+
"venue": "Theoret. Comput. Sci. 500 (2013), 57\u201367.",
|
| 205 |
+
"url": null
|
| 206 |
+
}
|
| 207 |
+
},
|
| 208 |
+
{
|
| 209 |
+
"17": {
|
| 210 |
+
"title": "Privileged factors in the Thue\u2013Morse word\u2014a comparison of privileged words and palindromes.",
|
| 211 |
+
"author": "J. Peltom\u00e4ki.",
|
| 212 |
+
"venue": "Disc. Appl. Math. 193 (2015), 187\u2013199.",
|
| 213 |
+
"url": null
|
| 214 |
+
}
|
| 215 |
+
},
|
| 216 |
+
{
|
| 217 |
+
"18": {
|
| 218 |
+
"title": "Upper bound for the number of closed and privileged words.",
|
| 219 |
+
"author": "J. Rukavicka.",
|
| 220 |
+
"venue": "Inform. Process. Lett. 156 (2020), 105917.",
|
| 221 |
+
"url": null
|
| 222 |
+
}
|
| 223 |
+
},
|
| 224 |
+
{
|
| 225 |
+
"19": {
|
| 226 |
+
"title": "Upper bound for the number of privileged words.",
|
| 227 |
+
"author": "J. Rukavicka.",
|
| 228 |
+
"venue": "Discrete Math. 346(1) (2023), 113164.",
|
| 229 |
+
"url": null
|
| 230 |
+
}
|
| 231 |
+
},
|
| 232 |
+
{
|
| 233 |
+
"20": {
|
| 234 |
+
"title": "Closed, palindromic, rich, privileged, trapezoidal, and balanced words in automatic sequences.",
|
| 235 |
+
"author": "L. Schaeffer and J. Shallit.",
|
| 236 |
+
"venue": "Electronic J. Combinatorics 23(1) (2016), P1.25 (electronic).",
|
| 237 |
+
"url": null
|
| 238 |
+
}
|
| 239 |
+
}
|
| 240 |
+
],
|
| 241 |
+
"url": "http://arxiv.org/html/2206.14273v3"
|
| 242 |
+
}
|
20240522/2210.03123v3.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240522/2211.07482v3.json
ADDED
|
@@ -0,0 +1,591 @@
|
| 1 |
+
{
|
| 2 |
+
"title": "Unifying O(3) Equivariant Neural Networks Design with Tensor-Network Formalism",
|
| 3 |
+
"abstract": "Many learning tasks, including learning potential energy surfaces from ab initio calculations, involve global spatial symmetries and permutational symmetry between atoms or general particles. Equivariant graph neural networks are a standard approach to such problems, with one of the most successful methods employing tensor products between various tensors that transform under the spatial group. However, as the number of different tensors and the complexity of relationships between them increase, maintaining parsimony and equivariance becomes increasingly challenging. In this paper, we propose using fusion diagrams, a technique widely employed in simulating SU()-symmetric quantum many-body problems, to design new spatial equivariant components for neural networks. This results in a diagrammatic approach to constructing novel neural network architectures. When applied to particles within a given local neighborhood, the resulting components, which we term \"fusion blocks,\" serve as universal approximators of any continuous equivariant function defined on the neighborhood. We incorporate a fusion block into pre-existing equivariant architectures (Cormorant and MACE), leading to improved performance with fewer parameters on a range of challenging chemical problems. Furthermore, we apply group-equivariant neural networks to study non-adiabatic molecular dynamics of stilbene cis-trans isomerization. Our approach, which combines tensor networks with equivariant neural networks, suggests a potentially fruitful direction for designing more expressive equivariant neural networks.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Graph neural networks (GNNs) have recently gained prominence in the field of chemistry, owing to their ability to learn from the structural properties of molecules and materials. Nevertheless, devising an efficient and accurate GNN architecture for investigating dynamic properties of chemical systems remains a formidable challenge. GNNs are adept at learning the structure of chemical systems and predicting their properties, including potential energy, dipole moment, and atomic forces. Recently, there has been a surge of interest in employing deep learning to forecast chemical properties and expedite first-principles dynamics simulations [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###]. Specifically, GNNs have been utilized to estimate the potential energy with distinct atomic coordinates, where the negative gradient concerning the input coordinates naturally corresponds to the atomic force. Accurate prediction of potential energy and atomic force[6 ###reference_b6###] necessitates adherence to spatial symmetries, such as translational and rotational covariance, since these properties are continuous functions defined on three-dimensional Euclidean space.\nMachine learning algorithms employed to predict properties such as potential energy and atomic forces must yield consistent results, regardless of the molecule\u2019s rotational pose or ordering. To address this challenge, researchers have developed group-equivariant neural networks that preserve these symmetries[7 ###reference_b7###, 8 ###reference_b8###, 5 ###reference_b5###, 9 ###reference_b9###, 10 ###reference_b10###]. In a group-equivariant network, symmetry operations on the data, including rotations of pictures and molecules, and permutations of the labels of each particle, commute with the network\u2019s layers, ensuring that the same physical property is predicted irrespective of the input\u2019s orientation. Many state-of-the-art spatially equivariant neural networks[11 ###reference_b11###, 12 ###reference_b12###] leverage the representation theory of the spatial rotation group in the so-called Fourier space [13 ###reference_b13###]. These Fourier space methods employ the Clebsch-Gordan nonlinearities[14 ###reference_b14###, 13 ###reference_b13###]. In fact, as elucidated in the supplementary material (SM), Clebsch-Gordan nonlinearities are the sole source of nonlinearity in Fourier space determined by invariant theory in mathematics [15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###]. The Clebsch-Gordan nonlinearities further constrain the use of linear weights, which can only act on the multiplicity space corresponding to each irreducible representation[14 ###reference_b14###].\nIndependently of work in machine learning, physicists have been using network models, called tensor networks, to represent complicated quantum many-body systems. Tensor networks are a family of methods for approximating larger tensors by contracting together a large collection of smaller tensors. Tensor networks have been used to successfully approximate large quantum states with low entanglement accurately by making use of the density matrix renormalization group (DMRG) in one dimension [18 ###reference_b18###, 19 ###reference_b19###] and introducing low-rank tensors to represent the quantum states [20 ###reference_b20###]. 
Applications of tensor networks include quantum simulation of quantum physics problems [21 ###reference_b21###, 22 ###reference_b22###, 23 ###reference_b23###], quantum computing and quantum supremacy experiments [24 ###reference_b24###, 25 ###reference_b25###], machine learning and data science [26 ###reference_b26###, 27 ###reference_b27###, 28 ###reference_b28###], and quantum gravity [29 ###reference_b29###, 30 ###reference_b30###]. A special type of tensor networks concerns with global on-site SU() symmetry, called the spin networks, where multiple sites are fused by a prescribed fusion diagrams [31 ###reference_b31###, 32 ###reference_b32###] so that the global wavefunctions are SU() symmetric. The fusion diagrams, as we will show later and further in SM, are natural and sparse generalization of Clebsch-Gordan products among multiple sites. Fusion diagrams have found great success in simulate SU()-symmetric quantum systems [20 ###reference_b20###, 32 ###reference_b32###, 33 ###reference_b33###, 34 ###reference_b34###], and we will show their potential for constructing universal equivariant neural networks.\nFusion diagrams facilitate the classification of existing neural network architectures and inspire the development of novel equivariant blocks. We showcase the computational power of these blocks using classical results from invariant theory, which establish that under certain conditions, they can achieve universality. For instance, we employ fusion diagrams to construct a new SO(3)-equivariant block, which we incorporate into two state-of-the-art neural network architectures: Cormorant [35 ###reference_b35###] and MACE [12 ###reference_b12###]. We demonstrate that integrating the new equivariant layer significantly enhances the performance of both architectures, with a comparable or substantially fewer number of parameters.\nTo assess the validity of the fusion block, we carried out extensive experiments on various chemical systems, including standard benchmark datasets such as QM-9[36 ###reference_b36###] and MD-17[6 ###reference_b6###], which aims to predict the quantum properties of molecules and potential energy surfaces, as well as more challenging systems like the non-adiabatic cis-trans isomerization of stilbene. Non-adiabatic isomerization of stilbene poses a considerable challenge learning the multiple boarder and reactive potential energy surfaces (PESs), necessitating accurate interpolation and extrapolation.\nIn summary, this paper presents a novel method for constructing group-equivariant neural network blocks using fusion diagrams, a concept borrowed from theoretical physics. Our approach alleviates the combinatorial complexity associated with preserving symmetry constraints in neural networks, enabling the construction of expressive and universal equivariant layers. We demonstrate the effectiveness of the fusion block by incorporating our new SO(3)-equivariant layer into two state-of-the-art molecular neural network architectures, Cormorant and MACE, and evaluating them on a variety of common benchmarks in companion with more complicated molecular isomerization and adsorption process. Our results indicate that the fusion block leads to improved performance with comparable or fewer parameters. Overall, our approach contributes to the developing a new routine that can be used to construct more expressive group equivariant neural networks."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Background",
|
| 15 |
+
"text": "Before delving into the specifics of our approach, it is crucial to lay the groundwork with some foundational concepts. In this section, we offer an overview of relevant ideas from both machine learning and physics. We begin with a concise review of molecular dynamics and the significance of symmetry and equivariance in machine learning. Subsequently, we introduce the concept of tensor products and their role in theoretical physics, including a description of the fusion diagram notation."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "Molecular dynamics",
|
| 21 |
+
"text": "Molecular dynamics simulations are essential tools for studying molecular properties at the atomic level within specific timescales. To simulate atomic motion, we need to calculate the potential energy and atomic forces acting on molecules with particular geometric configurations in space. Generally, potential energy and its gradients can be accurately determined by electronic structure calculations from first principles or approximated classically as simple analytical potential functions within specific chemical environments, such as atomic type, bond length, and bond angle. However, the electronic structure calculations under the ab initio molecular dynamics (AIMD) calculations are expensive.\nOne popular approach to overcoming this limitation is to use neural networks as interatomic potentials [2 ###reference_b2###, 1 ###reference_b1###], which are trained with reference AIMD data. Training neural networks as interatomic potentials involves regressing on potential energy and atomic forces simultaneously, where predictive forces can be naturally achieved as the negative gradient of energy via back-propagation. The\npotential energy is invariant to 3D rigid rotations, while atomic forces are covariant to rotations, as they are vector values. The equivariant neural networks that we introduce subsequently are a powerful data-driven approach for an accurate representation of the chemical environment."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "Representation theory of SU(2) and SO(3)",
|
| 27 |
+
"text": "Rotationally equivariant nets are arguably one of the most successful types of equivariant neural networks. Let and be the input and output spaces of a layer , and let and be linear actions of a group encoding the symmetry on resp. . The layer is said to be equivariant to if\nIf the group action on the output space is the identity transformation, i.e. for all elements of , the above reduces to\nand we have an invariant layer. Constructing an equivariant neural network requires that both the learned affine function and the fixed nonlinear function obey equivariance. Kondor et al.[14 ###reference_b14###] showed how to construct learned affine functions that are equivariant to compact groups (such as the group of rotations or the group of permutations) using the theory of linear representations. A linear representation 111Linear representations should not be confused with the different use of the word \u201crepresentation\u201d in representation learning. of a compact group is pair such that for each , is assigned a linear transformation of for which . If the representation is finite-dimensional, the range of is a subset of the space of complex matrices for some . An irreducible linear representation (irrep) is a representation where has no proper subspaces preserved under .\nUsing well-known results in representation theory, we can apply a linear transformation that decomposes the inputs, outputs, and activations of a neural network into components that transform according to the group\u2019s irreps.\nThen, one can show that the most general possible equivariant linear transformation can be written as matrix multiplication against each component [37 ###reference_b37###, 14 ###reference_b14###, 38 ###reference_b38###, 39 ###reference_b39###].\nThe construction of linear equivariant layers where inputs and outputs transform according to linear group representations has been widely studied and used in today\u2019s neural networks [40 ###reference_b40###, 37 ###reference_b37###, 14 ###reference_b14###, 38 ###reference_b38###, 41 ###reference_b41###, 42 ###reference_b42###, 9 ###reference_b9###, 10 ###reference_b10###].\nFor the rest of this work, we will focus on equivariance in the presence of SU(2) and SO(3) symmetries. These groups have fundamental importance in modern quantum physics and machine learning applications on geometric data. The irreps of SU(2) can be indexed by a single non-negative integer or half-integer, called the spin label. For any and spin label , we denote the corresponding matrix that arises from evaluating as .\nIt is well known in group theory that SO(3) irreps are isomorphic to the irreps of SU(2) with integer spin labels [43 ###reference_b43###, 44 ###reference_b44###, 17 ###reference_b17###]. This relationship allows us to study both SO(3) and SU(2) at the same time. Depending on the mathematical context, vectors in might transform either by (contravariant transformation) or by its complex conjugate (covariant transformation).\nIn what follows we focus only on irreps and omit when we denote irreps.\nTo distinguish these two cases, we denote components of any vector transforming contravariantly by raised index with being defined accordingly for the covariant case. 
With the notion of raised and lowered indices, one can contract vectors like , where the Einstein summation convention will be used consistently in this paper.\nWith the above basic notions clarified, let us formally define the Clebsch-Gordan product.\nTake two SU(2) irreps and \nof spin and respectively.\nWe can then define the tensor product representation . As this is still an SU(2) representation, it can be decomposed into irreps labeled by spins. A Clebsch-Gordan decomposition is a matrix which transforms the tensor product into for a prescribed spin and any SU(2).\nFormally,\nBy definition, the Clebsch-Gordan decomposition is equivariant with respect to the action of SU(2) as well as SO(3).\nFormally, it can be understood an element from the space of SU(2) equivariant maps , where are the corresponding irreps with the angular momenta labels . In this case, we can write the Clebsch-Gordan product as a third-order tensor:\nwhere are the corresponding magnetic quantum numbers. There are well-established methods to compute both theoretically and algorithmically, e.g., [45 ###reference_b45###, 46 ###reference_b46###, 43 ###reference_b43###]. Summing in Einstein notation with lower and upper indices, we call it a Clebsch-Gordan product for input vectors :\nWe will also leave as entry indices and write Clebsch-Gordan product among input vectors as inner products:"
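As an illustration (ours, not the paper's code), the following minimal sketch builds the dense Clebsch-Gordan tensor with SymPy's CG coefficients and uses it to fuse two spin-1 vectors into the invariant spin-0 channel; the helper name and the spherical-basis convention are our assumptions:

import numpy as np
from sympy.physics.quantum.cg import CG

def cg_matrix(l1, l2, l):
    # Dense tensor C[m, m1, m2] decomposing spin-l1 (x) spin-l2 into spin-l.
    C = np.zeros((2 * l + 1, 2 * l1 + 1, 2 * l2 + 1))
    for i1, m1 in enumerate(range(-l1, l1 + 1)):
        for i2, m2 in enumerate(range(-l2, l2 + 1)):
            m = m1 + m2
            if -l <= m <= l:
                C[m + l, i1, i2] = float(CG(l1, m1, l2, m2, l, m).doit())
    return C

# Fuse two spin-1 vectors (components in the spherical basis) into the spin-0
# channel; the single output component is a rotation-invariant scalar.
u, v = np.random.randn(3), np.random.randn(3)
out = np.einsum('mij,i,j->m', cg_matrix(1, 1, 0), u, v)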
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3",
|
| 31 |
+
"parent_section_id": null,
|
| 32 |
+
"section_name": "Methods",
|
| 33 |
+
"text": "Here we demonstrate how fusion diagrams can be used to design equivariant components that we call \u201cneural fusion blocks.\u201d We present an explicit construction for transformations under SU(2) and SO(3).In each block, we apply a collection of fusion diagrams to our input tensors. Each incoming edge of the diagram is associated with an input to the block and each outgoing edge is associated with an output of the block.\nWe denote the collection of input tensors associated with incoming edges as and components\ncorresponding to the spin label are denoted as , where\n are the channel dimension indices.\nWe omit batch indices from our treatment for brevity.\nThe fusion block then acts according to Algorithm 1 ###reference_###. More illustrations on the updating rule with diagrammatic examples can be found in Section I in the SM.\nDue to the use of fusion diagrams, the resulting algorithm is guaranteed to be equivariant to rotations.It also serves an rotationally-equivariant universal approximator where we put the proof details in Section II in the SM.\nAs our focus is on the construction of specific components in an SO(3)-equivariant architecture rather than on proposing an entirely new architecture,\nwe demonstrate the potential of our formalism by incorporating it into existing neural networks.\nSpecifically, we choose to augment the Cormorant architecture proposed in [35 ###reference_b35###] and the recent state-of-art model [47 ###reference_b47###] with one additional three-body fusion block that replaces the conventional node-edge two-body interaction, with the aim of capturing inter-atomic interactions in a more faithful manner. Capturing three-body interactions in a SO(3) equivariant way with edge features could lead to a large overhead on computational resources.\nApplying fusion blocks to point clouds also requires ensuring that the resulting neural network obeys permutation symmetry. Since each fusion diagram has a single output, we can reinforce the permutation equivariance by passing these outputs through an aggregation function and incorporate them into existing message-passing-like mechanisms.\n It is worth mentioning that except employing Clebsch-Gordan products, there are other efficient architectures like using spherical coordinates of neighboring atoms and leveraging spherical harmonics to encode angular momentum information into higher dimensional representation of SO and filtering through the spherical representations [5 ###reference_b5###]."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3.1",
|
| 37 |
+
"parent_section_id": "3",
|
| 38 |
+
"section_name": "Cormorant with Fusion Diagrams (CoFD)",
|
| 39 |
+
"text": "Cormorant is one of the first equivariant neural networks that utilize the group equivariance and designed to learn molecular dynamics simulations and ground-state molecular properties. A neuron in Cormorant layer operates as follows:\nHere sums over atom \u2019s local neighborhood, and\n is a learned rotationally invariant function\nthat takes and as input in addition to other rotationally invariant features. denotes the channel-wise Clebsch-Gordan products and the spherical harmonics with input the relative displacement vectors between th and th atoms\n(we refer the reader to [35 ###reference_b35###] for the precise functional form of ).\nEach of these terms corresponds to the two-body diagram on the left below,\nwhile the product with is in a three-way product,\nit never has a covariant or contravariant component.\nIn particular, we observe that this layer has no equivariant interaction between equivariant parts of the activation atom and the activation for atom .\nInstead, their activations only interact through the rotationally invariant function . Instead, in the present paper we employ our fusion diagrams to add an additional term to (10 ###reference_0###)\nthat fully integrates all of the information between atom and atom .\nThis corresponds to the fusion diagram on the right.\nThe resulting fusion block has three inputs: the input activation for atom , the input activation for atom ,\nand the collection of spherical harmonics evaluated on their relative displacements,\nand one output: a new feature for atom .\nConsequently, we require a fusion diagram with three ingoing edges and one outgoing edge.\nGoing from left to right, we input the representation of atom , the representation of atom , and the spherical harmonic representation of the edge connecting the two.\nWe then incorporate this as a new term in (10 ###reference_0###),\ngiving the following functional form for our modified Cormorant layer.\nwhere is the output of the fusion block where inputs came from atom and atom within the coming legs chosen to be th atom, th atom, and their connecting edge. In other words, we use fusion diagrams to efficiently combine the atom-level messaging passing and edge-level message passing."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3.2",
|
| 43 |
+
"parent_section_id": "3",
|
| 44 |
+
"section_name": "MACE with Fusion Diagrams (MoFD)",
|
| 45 |
+
"text": "###figure_1###"
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "3.2.1",
|
| 49 |
+
"parent_section_id": "3.2",
|
| 50 |
+
"section_name": "Implementation of 3-body fusion blocks",
|
| 51 |
+
"text": "In our modified MACE architecture we use fusion diagrams (Figure 1 ###reference_###), a local neighborhood is defined by a cut-off radius , the information on the central particle is , adjacent particle , and the incident edges . In particular, this information is passed to a -element sequence of linearly independent 3-body fusion interactions given by a sequence of different internal spin configurations , with the -element sequence of outgoing activation on the center particle:\nThe final permutation invariant update to the center node information is obtained by concatenating , followed by a linear mixing layer along the new concatenated axis. Note that in the sparse implementation, the feature dimension for each incoming activation never gets updated. Each time the internal spin configuration is only specified by a single internal spin label , thus sparsifying the three-body information flow. For each internal spin value , the 3-body interaction fuses into a single SO()-equivariant tensor (this fusion corresponds to the Fig.1 in the SM), while the final messaging passing aggregates neighboring edges and nodes information to the center node. In this implementation based on MACE architectures, we found our fusion block would only marginally increase the number of trainable parameters given the same channel width.\nFusion Block can also be initialized with significantly more trainable parameters than the original Mace does, which we denote as the dense implementation. The key difference to the sparse implementation is the inclusion of multi-partite internal spins to create a nested module. More specifically, given a -sequence of internal spins , we can choose a tuple , a triple , and beyond instead of specifying a single choice of the internal spin in the sparse implementation. Hence, a total of selections can be made resulting in a significant boost to the model size and number of trainable parameters. In our explicit implementation, we feed all choices of internal spins at once, resulting in a typical 10X boost of the trainable parameter size. As a result, we do not need to additionally pass a linear layer to reshape the channel width. The dense model could often outperform or be on par with the sparse implementation only with half the channel width. As an overall observation, the choices of internal spins are vital to our numerical performance. In our practice, the internal spins are chosen to range from , and sometimes with both parties."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "4",
|
| 55 |
+
"parent_section_id": null,
|
| 56 |
+
"section_name": "Results",
|
| 57 |
+
"text": "We describe three well-rounded benchmarks to test CoFD and MoFD, including QM-9[36 ###reference_b36###] molecular property prediction, MD-17[6 ###reference_b6###] small molecular dynamics, non-adiabatic molecular dynamics of stilbene. Our results are summarized in Figure 2 ###reference_###, Table 1 ###reference_### and Table 2 ###reference_###."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "4.1",
|
| 61 |
+
"parent_section_id": "4",
|
| 62 |
+
"section_name": "QM-9 Molecular properties and MD-17 molecular dynamics datasets",
|
| 63 |
+
"text": "We first implement the fusion diagram on Cormorant architecture [35 ###reference_b35###].\nThe standard QM-9 benchmark dataset[36 ###reference_b36###] is used to test the performance of the CoFD model to predict molecular quantum properties of roughly 130,000 molecules in equilibrium, which contains multiple tasks of scalar value regression including atomization enthalpy, free energy, etc. In contrast, the MD-17 dataset [6 ###reference_b6###] involves learning the ground-state PES and its gradient, for eight small organic molecules at room temperature from reference DFT calculations.\nWe compare the CoFD model and the original Cormorant model. The fusion diagram reduces the number of parameters in our networks, ensuring that we are not simply improving performance by adding additional parameters:\nfor MD17, the networks with fusion diagrams have 135393 parameters compared to 154241 in the original Cormorant [35 ###reference_b35###], and our QM9 neural network has 121872 parameters compared to 299808 in the original [35 ###reference_b35###]. We report that the total time of training QM9 (resp. MD17) use 20 (resp. 12) hours with 256 Epoches, each with a mini-batch size of 64. Hence each epoch costs 281 (resp. 169) seconds. Code for our modified network can be found at https://github.com/ehthiede/diagram_corm ###reference_###. To be noted,\nthe fusion block used in the CoFD to predict QM-9 and MD-17 is a sparse implementation. We did not use the dense implementation in predicting the QM-9 and MD-17 properties due to the large computational expense. However, it would be an interesting future direction to reduce the recourse overhead in the dense implementation, which would enable more subsequent experiments."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "4.2",
|
| 67 |
+
"parent_section_id": "4",
|
| 68 |
+
"section_name": "Stilbene Non-adiabatic molecular dynamics",
|
| 69 |
+
"text": "Non-adiabatic MD (NAMD)[48 ###reference_b48###] is a powerful approach for predicting photo-induced chemical processes, including photo-catalytic reactivity[49 ###reference_b49###], photo-induced DNA damage[50 ###reference_b50###], and the performance of sun-screening products[51 ###reference_b51###]. Unlike ground-state dynamics, NAMD involves evaluating multiple PESs and their gradients simutaneously. However, studying excited-state dynamics requires higher accuracy electronic structure methods than DFT[52 ###reference_b52###], resulting in significantly higher computational costs. Thus, there is motivation to test our model\u2019s ability to study multiple PESs that are not generated by DFT.\n###figure_2### In this study, we explore the photo-induced cis-trans isomerization process of stilbene, a phenomenon first reported by Syage [54 ###reference_b54###]. Our approach utilizes the Complete Active Space Self-Consistent Field (CASSCF) theory [55 ###reference_b55###], specifically targeting the conjugated orbital localized on the carbon-carbon double bond and its anti-bonding counterpart. This selection forms our active space, characterized as two electrons in two orbitals (2e,2o), and all calculations are conducted using the 6-31G* basis set. To accurately capture the quantum effects inherent in photoisomerization, we adopt a quantum-classical approximation through trajectory surface hopping (TSH), as implemented in the SHARC package [56 ###reference_b56###]. This method integrates both quantum and classical dynamics, crucial for studying processes like isomerization. Wigner sampling [45 ###reference_b45###] is employed to generate a variety of initial configurations, initiating the molecular trajectories under study.\nA stringent criterion is applied to ensure the quality of the data: only trajectories maintaining total energy conservation within 0.2 eV were considered valid and included in the dataset. This threshold ensures the physical relevance of the trajectories by excluding those that do not adhere to energy conservation principles. The resultant dataset, therefore, comprises multiple molecular trajectories of stilbene, predominantly initiated in an excited state. These trajectories provide a comprehensive view of the isomerization process, offering valuable insights into the dynamics of this photochemical reaction. Detailed computational specifications and a more thorough introduction to the methods employed are available in the SM.\nThe widely-adopted MD17 dataset [57 ###reference_b57###] comprises adiabatic dynamic trajectories using the PBE functional, though the spin polarization, basis set, and computational grid information are absent from the literature, near equilibrium, where molecular movements are trivial. As a result, MD17 is heavily biased towards sampling the reactant region of the PES without considering the driven non-equilibrium forces.[58 ###reference_b58###] However, a meaningful chemical reaction typically involves three parts on the PES: reactant, product, and transition state. It is important to note that the accuracy of common density functionals is usually a few kcal/mol when compared to higher levels of theory. For example, the PBE functional used in the MD17 dataset has an average error of more than 9 kcal/mol (roughly 0.4 eV) when predicting reaction barriers [59 ###reference_b59###]. 
In contrast, the trajectories we sampled visited the reactant, product, and transition-state regions of multiple PESs, as illustrated in Figure 2 ###reference_###.\nTo compare the performance of MACE and MoFD with the sparse implementation, we selected one reactive trajectory and employed the MACE model with a feature channel dimension of 64 and high-order equivariant features. For MoFD, we maintain the same feature angular momentum and set the feature channel dimension to 16, resulting in a model with only 66,784 parameters, about a fifth the size of the corresponding MACE model (330,320 parameters; see Table 2 ###reference_###). Given the increased difficulty of predicting atomic forces, we weight the training loss on energy and forces with a ratio of 1:1000, as recommended in previous literature [11 ###reference_b11###, 12 ###reference_b12###]. Because the loss is disproportionately weighted towards the forces, we concentrate on the force regression performance. The models are trained on 285 samples and tested on a separate hold-out test set of 428 samples. The models are trained in a state-specific fashion, meaning that, for comparison purposes, each model regresses a single state\u2019s PES and forces. Our findings indicate that MoFD with the sparse implementation performs well in force prediction for the first two states, while MACE fits better when predicting the energy.\nWe further assess the generalization ability of our models across different trajectories by incorporating two additional independent trajectories into the dataset, resulting in a total of 950 training samples and 1,395 hold-out testing samples. We increase the complexity of MACE by expanding its feature channel width to 128, leading to a total of 979,088 parameters. Concurrently, we double the feature dimension of MoFD to 32, which keeps it far smaller than MACE (141,168 vs. 979,088 parameters; see Table 2 ###reference_###). Additionally, we implement MoFD with a dense feature dimension of 16 and equivariant features, resulting in a total of 690,976 parameters (29.4% fewer than the original MACE). In terms of runtime, each epoch requires 52 seconds for the MACE model compared to 31 seconds for the MoFD model, attributed to the use of lower-dimensional angular momentum features as inputs. The MoFD model with the dense implementation surpasses MACE in the force prediction tasks, while the MoFD model with the sparse implementation remains comparable to MACE\u2019s accuracy, as indicated in Table 2 ###reference_###. Nonetheless, it is crucial to note that the performance of all models decreases when learning excited states due to the less well-defined topologies of excited-state PESs [52 ###reference_b52###]."
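The two data-handling choices above are straightforward to express in code. First, the 0.2 eV total-energy-conservation filter; this is a minimal sketch assuming each trajectory exposes an array of per-frame total energies (the function and attribute names are illustrative, not taken from the authors' code):

```python
import numpy as np

def is_energy_conserving(total_energies, threshold_ev=0.2):
    """Keep a trajectory only if its total energy drifts by less than
    `threshold_ev` (0.2 eV in the text) over the whole run."""
    drift = float(np.max(total_energies) - np.min(total_energies))
    return drift < threshold_ev

# Hypothetical usage:
# valid = [t for t in trajectories if is_energy_conserving(t.total_energies)]
```

Second, the 1:1000 energy-to-force loss weighting used during training; again a hedged sketch (PyTorch, with assumed tensor shapes), not the exact training loop:

```python
import torch

def energy_force_loss(pred_e, true_e, pred_f, true_f,
                      w_energy=1.0, w_force=1000.0):
    """Combined MSE loss with the 1:1000 energy:force ratio from the text.
    `pred_f`/`true_f` hold per-atom force components of shape (n_atoms, 3)."""
    loss_e = torch.mean((pred_e - true_e) ** 2)
    loss_f = torch.mean((pred_f - true_f) ** 2)
    return w_energy * loss_e + w_force * loss_f
```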
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "5",
|
| 73 |
+
"parent_section_id": null,
|
| 74 |
+
"section_name": "Discussion",
|
| 75 |
+
"text": "In this work, we have introduced a new method for constructing equivariant blocks for rotation-equivariant layers based on fusion diagrams.\nPrevious work has shown that tensor products can be used to construct neurons for rotation-equivariant neural networks. Moreover, prior research has observed that neural network ansatzes for the quantum system can be unified with spin network ansatzes. Our work is the first to employ these connections in the opposite direction: by employing diagrammatic methods used in physics, we construct new components that can be incorporated into equivariant neural networks.\nUsing classic results from invariant theory, we show that neural networks built from using fusion blocks are capable of approximating any continuous SU(2)-equivariant functions. To demonstrate the practical utility of fusion blocks, we perturb existing SO(3) equivariant neural network architectures, such as Cormorant[35 ###reference_b35###] and MACE[12 ###reference_b12###], by incorporating a fusion block in each layer. The modified architectures generally achieves better performance for a smaller number of parameters. Indeed, the idea of using equivariance and symmetry to prune neural networks has been applied [60 ###reference_b60###] in the quantum setting. We believe this indicates that fusion blocks can be a useful addition to group-equivariant neural networks.\nTo test the performance of the fusion block approach, we apply the revised CoFD and MoFD models not only to the standard benchmark datasets QM-9[36 ###reference_b36###] and MD-17[6 ###reference_b6###], but also novel applications such as non-adiabatic molecular dynamics. We find that the addition of the fusion blocks improved the performance of the models.\nIn future work, we hope to use fusion blocks to improve the interpretability of equivariant neural networks. In theoretical physics, fusion diagrams represent physical processes that correspond to many-body interactions. Furthermore, physicists often manipulate fusion diagrams through internal permutations through a process known as recoupling. Recouplings relate to the physical properties of different fusion diagrams and can show symmetries present in the products that may not be immediately apparent by inspection.\nEmploying the formalism of recoupling may highlight hidden symmetries in the network architecture, indicating new ways to save computational effort. Employing the language of fusion diagrams in these settings could help unify our physical picture of fusion diagrams with computational realities. Finally, fusion diagrams are graphical representations of ways in which local atoms are being fused. It is of interest to consider the effect of the local subgraph topology on the corresponding fusion blocks; in particular, whether fusion diagrams serve as a general principle towards building more expressive graph neural nets with 3D equivariance specific to chemical applications. We leave addressing these questions as future research opportunities."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "6",
|
| 79 |
+
"parent_section_id": null,
|
| 80 |
+
"section_name": "Acknowledgements",
|
| 81 |
+
"text": "J.L. is supported in part by International Business Machines (IBM) Quantum through the Chicago Quantum Exchange, and the Pritzker School of Molecular Engineering at the University of Chicago through AFOSR MURI (FA9550-21-1-0209).\nSee pages - of appendix.pdf ###reference_.pdf###"
|
| 82 |
+
}
|
| 83 |
+
],
|
| 84 |
+
"appendix": [],
|
| 85 |
+
"tables": {
|
| 86 |
+
"1": {
|
| 87 |
+
"table_html": "<figure class=\"ltx_table\" id=\"Sx4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span> Mean absolute error of various prediction targets on QM-9 (left) and conformational energies (in units of kcal/mol) on MD-17 (right), for both the original Cormorant architecture and our modified version that incorporates a fusion block. It should be noted that the CoFD models have significantly fewer parameters than the original Cormorant. We report the mean and standard deviation from multiple runs. In comparison, the model with lower predictive error has been bolded.\n</figcaption><div class=\"ltx_flex_figure\">\n<div class=\"ltx_flex_cell ltx_flex_size_2\">\n<table class=\"ltx_tabular ltx_centering ltx_figure_panel ltx_minipage ltx_align_middle\" id=\"Sx4.T1.13.13\" style=\"width:212.5pt;\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"Sx4.T1.13.13.14.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_tt\" id=\"Sx4.T1.13.13.14.1.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_tt\" id=\"Sx4.T1.13.13.14.1.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.13.13.14.1.2.1\" style=\"font-size:90%;\">Cormorant</span></th>\n<th class=\"ltx_td ltx_th ltx_th_column ltx_border_tt\" id=\"Sx4.T1.13.13.14.1.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_tt\" id=\"Sx4.T1.13.13.14.1.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.13.13.14.1.4.1\" style=\"font-size:90%;\">CoFD</span></th>\n<th class=\"ltx_td ltx_th ltx_th_column ltx_border_tt\" id=\"Sx4.T1.13.13.14.1.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></th>\n<td class=\"ltx_td ltx_border_tt\" id=\"Sx4.T1.13.13.14.1.6\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.2.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"Sx4.T1.2.2.2.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"Sx4.T1.2.2.2.2.1\" style=\"font-size:90%;\"> (</span><span class=\"ltx_text\" id=\"Sx4.T1.2.2.2.2.2\" style=\"font-size:90%;\">)</span>\n</th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"Sx4.T1.2.2.2.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.2.2.2.3.1\" style=\"font-size:90%;\">0.085</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"Sx4.T1.2.2.2.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.2.2.2.4.1\" style=\"font-size:90%;\">(0.001)</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"Sx4.T1.2.2.2.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.2.2.2.5.1\" style=\"font-size:90%;\">0.088</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"Sx4.T1.2.2.2.6\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.2.2.2.6.1\" style=\"font-size:90%;\">(0.003)</span></td>\n<td class=\"ltx_td ltx_border_t\" id=\"Sx4.T1.2.2.2.7\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.3.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx4.T1.3.3.3.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"Sx4.T1.3.3.3.1.1\" 
style=\"font-size:90%;\"> (eV)</span>\n</th>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.3.3.3.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.3.3.3.2.1\" style=\"font-size:90%;\">0.061</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.3.3.3.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.3.3.3.3.1\" style=\"font-size:90%;\">(0.005)</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.3.3.3.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.3.3.3.4.1\" style=\"font-size:90%;\">0.062</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.3.3.3.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.3.3.3.5.1\" style=\"font-size:90%;\">(0.001)</span></td>\n<td class=\"ltx_td\" id=\"Sx4.T1.3.3.3.6\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.4.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx4.T1.4.4.4.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"Sx4.T1.4.4.4.1.1\" style=\"font-size:90%;\"> (eV)</span>\n</th>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.4.4.4.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T1.4.4.4.2.1\" style=\"font-size:90%;\">0.034</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.4.4.4.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T1.4.4.4.3.1\" style=\"font-size:90%;\">(0.002)</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.4.4.4.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.4.4.4.4.1\" style=\"font-size:90%;\">0.0391</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.4.4.4.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.4.4.4.5.1\" style=\"font-size:90%;\">(0.0008)</span></td>\n<td class=\"ltx_td\" id=\"Sx4.T1.4.4.4.6\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.5.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx4.T1.5.5.5.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"Sx4.T1.5.5.5.1.1\" style=\"font-size:90%;\"> (eV)</span>\n</th>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.5.5.5.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.5.5.5.2.1\" style=\"font-size:90%;\">0.038</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.5.5.5.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.5.5.5.3.1\" style=\"font-size:90%;\">(0.008)</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.5.5.5.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.5.5.5.4.1\" style=\"font-size:90%;\">0.0347</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.5.5.5.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.5.5.5.5.1\" style=\"font-size:90%;\">(0.0006)</span></td>\n<td class=\"ltx_td\" id=\"Sx4.T1.5.5.5.6\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.6.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx4.T1.6.6.6.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span 
class=\"ltx_text\" id=\"Sx4.T1.6.6.6.1.1\" style=\"font-size:90%;\"> (D)</span>\n</th>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.6.6.6.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.6.6.6.2.1\" style=\"font-size:90%;\">0.038</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.6.6.6.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.6.6.6.3.1\" style=\"font-size:90%;\">(0.009)</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.6.6.6.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.6.6.6.4.1\" style=\"font-size:90%;\">0.035</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.6.6.6.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.6.6.6.5.1\" style=\"font-size:90%;\">(0.001)</span></td>\n<td class=\"ltx_td\" id=\"Sx4.T1.6.6.6.6\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.7.7.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx4.T1.7.7.7.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"Sx4.T1.7.7.7.1.1\" style=\"font-size:90%;\"> (cal/mol K)</span>\n</th>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.7.7.7.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T1.7.7.7.2.1\" style=\"font-size:90%;\">0.026</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.7.7.7.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T1.7.7.7.3.1\" style=\"font-size:90%;\">(0.000)</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.7.7.7.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.7.7.7.4.1\" style=\"font-size:90%;\">0.0272</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.7.7.7.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.7.7.7.5.1\" style=\"font-size:90%;\">(0.0002)</span></td>\n<td class=\"ltx_td\" id=\"Sx4.T1.7.7.7.6\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.8.8.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx4.T1.8.8.8.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"Sx4.T1.8.8.8.1.1\" style=\"font-size:90%;\"> (eV)</span>\n</th>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.8.8.8.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.8.8.8.2.1\" style=\"font-size:90%;\">0.020</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.8.8.8.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.8.8.8.3.1\" style=\"font-size:90%;\">(0.000)</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.8.8.8.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T1.8.8.8.4.1\" style=\"font-size:90%;\">0.0135</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.8.8.8.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.8.8.8.5.1\" style=\"font-size:90%;\">(0.0002)</span></td>\n<td class=\"ltx_td\" id=\"Sx4.T1.8.8.8.6\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.9.9.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx4.T1.9.9.9.1\" 
style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"Sx4.T1.9.9.9.1.1\" style=\"font-size:90%;\"> (eV)</span>\n</th>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.9.9.9.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.9.9.9.2.1\" style=\"font-size:90%;\">0.021</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.9.9.9.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.9.9.9.3.1\" style=\"font-size:90%;\">(0.001)</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.9.9.9.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T1.9.9.9.4.1\" style=\"font-size:90%;\">0.0132</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.9.9.9.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.9.9.9.5.1\" style=\"font-size:90%;\">(0.0004)</span></td>\n<td class=\"ltx_td\" id=\"Sx4.T1.9.9.9.6\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.11.11.11\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx4.T1.11.11.11.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"Sx4.T1.11.11.11.2.1\" style=\"font-size:90%;\"> (</span><span class=\"ltx_text\" id=\"Sx4.T1.11.11.11.2.2\" style=\"font-size:90%;\">)</span>\n</th>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.11.11.11.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.11.11.11.3.1\" style=\"font-size:90%;\">0.961</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.11.11.11.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.11.11.11.4.1\" style=\"font-size:90%;\">(0.019)</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.11.11.11.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T1.11.11.11.5.1\" style=\"font-size:90%;\">0.50</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.11.11.11.6\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.11.11.11.6.1\" style=\"font-size:90%;\">(0.02)</span></td>\n<td class=\"ltx_td\" id=\"Sx4.T1.11.11.11.7\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.12.12.12\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx4.T1.12.12.12.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"Sx4.T1.12.12.12.1.1\" style=\"font-size:90%;\"> (eV)</span>\n</th>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.12.12.12.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.12.12.12.2.1\" style=\"font-size:90%;\">0.021</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.12.12.12.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.12.12.12.3.1\" style=\"font-size:90%;\">(0.000)</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.12.12.12.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T1.12.12.12.4.1\" style=\"font-size:90%;\">0.0130</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.12.12.12.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.12.12.12.5.1\" style=\"font-size:90%;\">(0.0004)</span></td>\n<td class=\"ltx_td\" 
id=\"Sx4.T1.12.12.12.6\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.13.13.13\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx4.T1.13.13.13.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"Sx4.T1.13.13.13.1.1\" style=\"font-size:90%;\"> (eV)</span>\n</th>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.13.13.13.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.13.13.13.2.1\" style=\"font-size:90%;\">0.022</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.13.13.13.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.13.13.13.3.1\" style=\"font-size:90%;\">(0.003)</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.13.13.13.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T1.13.13.13.4.1\" style=\"font-size:90%;\">0.0133</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.13.13.13.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.13.13.13.5.1\" style=\"font-size:90%;\">(0.0003)</span></td>\n<td class=\"ltx_td\" id=\"Sx4.T1.13.13.13.6\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.13.13.15.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"Sx4.T1.13.13.15.2.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.13.13.15.2.1.1\" style=\"font-size:90%;\">ZPVE (meV)</span></th>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"Sx4.T1.13.13.15.2.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.13.13.15.2.2.1\" style=\"font-size:90%;\">2.027</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"Sx4.T1.13.13.15.2.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.13.13.15.2.3.1\" style=\"font-size:90%;\">(0.042)</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"Sx4.T1.13.13.15.2.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T1.13.13.15.2.4.1\" style=\"font-size:90%;\">1.43</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"Sx4.T1.13.13.15.2.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.13.13.15.2.5.1\" style=\"font-size:90%;\">(0.04)</span></td>\n<td class=\"ltx_td ltx_border_bb\" id=\"Sx4.T1.13.13.15.2.6\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n</tr>\n</tbody>\n</table>\n</div>\n<div class=\"ltx_flex_cell ltx_flex_size_2\">\n<table class=\"ltx_tabular ltx_centering ltx_figure_panel ltx_minipage ltx_align_middle\" id=\"Sx4.T1.14\" style=\"width:212.5pt;\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"Sx4.T1.14.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_tt\" id=\"Sx4.T1.14.1.1.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_tt\" id=\"Sx4.T1.14.1.1.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.14.1.1.2.1\" style=\"font-size:90%;\">Cormorant</span></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_tt\" id=\"Sx4.T1.14.1.1.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.14.1.1.3.1\" 
style=\"font-size:90%;\">CoFD</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"Sx4.T1.14.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"Sx4.T1.14.2.1.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.14.2.1.1.1\" style=\"font-size:90%;\">Aspirin</span></th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"Sx4.T1.14.2.1.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.14.2.1.2.1\" style=\"font-size:90%;\">\u00a0 0.098</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"Sx4.T1.14.2.1.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T1.14.2.1.3.1\" style=\"font-size:90%;\">0.0951</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.14.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx4.T1.14.3.2.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.14.3.2.1.1\" style=\"font-size:90%;\">Ethanol</span></th>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.14.3.2.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.14.3.2.2.1\" style=\"font-size:90%;\">0.027</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.14.3.2.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T1.14.3.2.3.1\" style=\"font-size:90%;\">0.0241</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.14.4.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx4.T1.14.4.3.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.14.4.3.1.1\" style=\"font-size:90%;\">Malonaldehyde</span></th>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.14.4.3.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.14.4.3.2.1\" style=\"font-size:90%;\">0.041</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.14.4.3.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T1.14.4.3.3.1\" style=\"font-size:90%;\">0.0380</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.14.5.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx4.T1.14.5.4.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.14.5.4.1.1\" style=\"font-size:90%;\">Naphthalene</span></th>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.14.5.4.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T1.14.5.4.2.1\" style=\"font-size:90%;\">0.029</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.14.5.4.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.14.5.4.3.1\" style=\"font-size:90%;\">0.0321</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.14.6.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx4.T1.14.6.5.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.14.6.5.1.1\" style=\"font-size:90%;\">Salicylic Acid</span></th>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.14.6.5.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.14.6.5.2.1\" style=\"font-size:90%;\">0.066</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.14.6.5.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text 
ltx_font_bold\" id=\"Sx4.T1.14.6.5.3.1\" style=\"font-size:90%;\">0.0608</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.14.7.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx4.T1.14.7.6.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.14.7.6.1.1\" style=\"font-size:90%;\">Toluene</span></th>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.14.7.6.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.14.7.6.2.1\" style=\"font-size:90%;\">0.034</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"Sx4.T1.14.7.6.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T1.14.7.6.3.1\" style=\"font-size:90%;\">0.0316</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.14.8.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"Sx4.T1.14.8.7.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.14.8.7.1.1\" style=\"font-size:90%;\">Uracil</span></th>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"Sx4.T1.14.8.7.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T1.14.8.7.2.1\" style=\"font-size:90%;\">0.023</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"Sx4.T1.14.8.7.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"Sx4.T1.14.8.7.3.1\" style=\"font-size:90%;\">0.0297</span></td>\n</tr>\n</tbody>\n</table>\n</div>\n</div>\n</figure>",
|
| 88 |
+
"capture": "Table 1: Mean absolute error of various prediction targets on QM-9 (left) and conformational energies (in units of kcal/mol) on MD-17 (right), for both the original Cormorant architecture and our modified version that incorporates a fusion block. It should be noted that the CoFD models have significantly fewer parameters than the original Cormorant. We report the mean and standard deviation from multiple runs. In comparison, the model with lower predictive error has been bolded.\n"
|
| 89 |
+
},
|
| 90 |
+
"2": {
|
| 91 |
+
"table_html": "<figure class=\"ltx_table\" id=\"Sx4.T2\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"Sx4.T2.1\" style=\"width:433.6pt;height:59.2pt;vertical-align:-0.5pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-182.0pt,24.6pt) scale(0.543600143244965,0.543600143244965) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"Sx4.T2.1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"Sx4.T2.1.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"Sx4.T2.1.1.1.1.1\">Model</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"Sx4.T2.1.1.1.1.2\">Feature Dimension</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"Sx4.T2.1.1.1.1.3\">Num. of Param.</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"Sx4.T2.1.1.1.1.4\">Train Size</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"Sx4.T2.1.1.1.1.5\">Ground State (Energy, Forces)</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"Sx4.T2.1.1.1.1.6\">First Excited State (Energy, Forces)</th>\n<th class=\"ltx_td ltx_nopad_r ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"Sx4.T2.1.1.1.1.7\">Second Excited State (Energy, Forces)</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"Sx4.T2.1.1.2.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx4.T2.1.1.2.1.1\">MACE</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx4.T2.1.1.2.1.2\">64</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx4.T2.1.1.2.1.3\">330320</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx4.T2.1.1.2.1.4\">285</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx4.T2.1.1.2.1.5\">(<span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.1.1.2.1.5.1\">19.15</span>, 0.70)</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx4.T2.1.1.2.1.6\">(<span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.1.1.2.1.6.1\">9.88</span>, 1.25)</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_t\" id=\"Sx4.T2.1.1.2.1.7\">(<span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.1.1.2.1.7.1\">21.80</span>, <span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.1.1.2.1.7.2\">1.03</span>)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.1.1.3.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T2.1.1.3.2.1\">MoFD-sparse</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T2.1.1.3.2.2\">16</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T2.1.1.3.2.3\">66784</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T2.1.1.3.2.4\">285</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T2.1.1.3.2.5\">(19.74, <span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.1.1.3.2.5.1\">0.62</span>)</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T2.1.1.3.2.6\">(13.43, <span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.1.1.3.2.6.1\">1.12</span>)</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"Sx4.T2.1.1.3.2.7\">(23.42, 1.07)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.1.1.4.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T2.1.1.4.3.1\">MACE</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T2.1.1.4.3.2\">128</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T2.1.1.4.3.3\">979088</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T2.1.1.4.3.4\">950</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T2.1.1.4.3.5\">(<span class=\"ltx_text ltx_font_bold\" 
id=\"Sx4.T2.1.1.4.3.5.1\">26.44</span>, 1.30)</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T2.1.1.4.3.6\">(<span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.1.1.4.3.6.1\">29.07</span>, 3.56)</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"Sx4.T2.1.1.4.3.7\">(<span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.1.1.4.3.7.1\">48.77</span>, 3.05)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.1.1.5.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T2.1.1.5.4.1\">MoFD-sparse</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T2.1.1.5.4.2\">32</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T2.1.1.5.4.3\">141168</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T2.1.1.5.4.4\">950</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T2.1.1.5.4.5\">(28.84, 1.40)</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T2.1.1.5.4.6\">(36.69, 3.55)</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"Sx4.T2.1.1.5.4.7\">(55.08, 3.11)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.1.1.6.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"Sx4.T2.1.1.6.5.1\">MoFD-dense</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"Sx4.T2.1.1.6.5.2\">16</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"Sx4.T2.1.1.6.5.3\">690976</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"Sx4.T2.1.1.6.5.4\">950</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"Sx4.T2.1.1.6.5.5\">(27.59, <span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.1.1.6.5.5.1\">1.14</span>)</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"Sx4.T2.1.1.6.5.6\">(32.64, <span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.1.1.6.5.6.1\">3.31</span>)</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_bb\" id=\"Sx4.T2.1.1.6.5.7\">(54.76, <span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.1.1.6.5.7.1\">2.65</span>)</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Comparative analysis of MACE and MoFD models in dense and sparse implementations, evaluated on single and multiple independent non-adiabatic trajectories of cis-stilbene. The table presents the feature dimension, number of parameters (Num. of Param.), training set size (Train Size), and results for the ground state, first excited state, and second excited state. Results include energy values in milli-Hartree (mHartree) and forces in milli-Hartree per Angstrom (mHartree/A). The bold figures represent the best performance in each category.</figcaption>\n</figure>",
|
| 92 |
+
"capture": "Table 2: Comparative analysis of MACE and MoFD models in dense and sparse implementations, evaluated on single and multiple independent non-adiabatic trajectories of cis-stilbene. The table presents the feature dimension, number of parameters (Num. of Param.), training set size (Train Size), and results for the ground state, first excited state, and second excited state. Results include energy values in milli-Hartree (mHartree) and forces in milli-Hartree per Angstrom (mHartree/A). The bold figures represent the best performance in each category."
|
| 93 |
+
}
|
| 94 |
+
},
|
| 95 |
+
"image_paths": {
|
| 96 |
+
"1": {
|
| 97 |
+
"figure_path": "2211.07482v3_figure_1.png",
|
| 98 |
+
"caption": "Figure 1: Schematic illustration of the implementation of fusion blocks in the MACE architecture. For each atom the fusion block first fuses all the neighboring atoms for a given radius cut-off by pre-selected fusion diagram templates. Specifically, for each neighboring atom, we fuse the information from the root, neighbor atom, and their connecting edge. Then the fusion block applies an aggregation method: in the present work, we simply sum all the neighbors.",
|
| 99 |
+
"url": "http://arxiv.org/html/2211.07482v3/extracted/2211.07482v3/fb_model.png"
|
| 100 |
+
},
|
| 101 |
+
"2": {
|
| 102 |
+
"figure_path": "2211.07482v3_figure_2.png",
|
| 103 |
+
"caption": "Figure 2: (a) Illustration of photo-induced cis-trans isomerization of stilbene (b) Initial and end configurations of three representative trajectories, which are Wigner-sampled.[53] (c) the one-dimensional cut of stilbene ground/ excited-state PESs by rotating the carbon-carbon bond as illustrated in (a), which illustrates the energetic diagram of stilbene isomerization process.",
|
| 104 |
+
"url": "http://arxiv.org/html/2211.07482v3/extracted/2211.07482v3/sti3.jpg"
|
| 105 |
+
}
|
| 106 |
+
},
|
| 107 |
+
"validation": true,
|
| 108 |
+
"references": [
|
| 109 |
+
{
|
| 110 |
+
"1": {
|
| 111 |
+
"title": "Deepmd-kit: A deep learning package\nfor many-body potential energy representation and molecular dynamics.",
|
| 112 |
+
"author": "Wang, H., Zhang, L., Han,\nJ. & Weinan, E.",
|
| 113 |
+
"venue": "\\JournalTitleComputer Physics Communications\n228, 178\u2013184\n(2018).",
|
| 114 |
+
"url": null
|
| 115 |
+
}
|
| 116 |
+
},
|
| 117 |
+
{
|
| 118 |
+
"2": {
|
| 119 |
+
"title": "Jax md: a framework for\ndifferentiable physics.",
|
| 120 |
+
"author": "Schoenholz, S. & Cubuk, E. D.",
|
| 121 |
+
"venue": "\\JournalTitleAdvances in Neural Information Processing\nSystems 33, 11428\u201311441\n(2020).",
|
| 122 |
+
"url": null
|
| 123 |
+
}
|
| 124 |
+
},
|
| 125 |
+
{
|
| 126 |
+
"3": {
|
| 127 |
+
"title": "Geometric deep learning on\nmolecular representations.",
|
| 128 |
+
"author": "Atz, K., Grisoni, F. &\nSchneider, G.",
|
| 129 |
+
"venue": "\\JournalTitleNature Machine Intelligence\n3, 1023\u20131032\n(2021).",
|
| 130 |
+
"url": null
|
| 131 |
+
}
|
| 132 |
+
},
|
| 133 |
+
{
|
| 134 |
+
"4": {
|
| 135 |
+
"title": "Applications of deep learning in\nmolecule generation and molecular property prediction.",
|
| 136 |
+
"author": "Walters, W. P. & Barzilay, R.",
|
| 137 |
+
"venue": "\\JournalTitleAccounts of chemical research\n54, 263\u2013270\n(2020).",
|
| 138 |
+
"url": null
|
| 139 |
+
}
|
| 140 |
+
},
|
| 141 |
+
{
|
| 142 |
+
"5": {
|
| 143 |
+
"title": "Rotation invariant graph neural\nnetworks using spin convolutions.",
|
| 144 |
+
"author": "Shuaibi, M. et al.",
|
| 145 |
+
"venue": "\\JournalTitlearXiv preprint arXiv:2106.09575\n(2021).",
|
| 146 |
+
"url": null
|
| 147 |
+
}
|
| 148 |
+
},
|
| 149 |
+
{
|
| 150 |
+
"6": {
|
| 151 |
+
"title": "Machine learning of accurate\nenergy-conserving molecular force fields.",
|
| 152 |
+
"author": "Chmiela, S. et al.",
|
| 153 |
+
"venue": "\\JournalTitleScience Advances\n3, DOI: 10.1126/sciadv.1603015\n(2017).",
|
| 154 |
+
"url": null
|
| 155 |
+
}
|
| 156 |
+
},
|
| 157 |
+
{
|
| 158 |
+
"7": {
|
| 159 |
+
"title": "Theoretical aspects of group\nequivariant neural networks.",
|
| 160 |
+
"author": "Esteves, C.",
|
| 161 |
+
"venue": "\\JournalTitlearXiv preprint arXiv:2004.05154\n(2020).",
|
| 162 |
+
"url": null
|
| 163 |
+
}
|
| 164 |
+
},
|
| 165 |
+
{
|
| 166 |
+
"8": {
|
| 167 |
+
"title": "Equivariant convolutional networks\n(2021).",
|
| 168 |
+
"author": "Cohen, T. S. et al.",
|
| 169 |
+
"venue": null,
|
| 170 |
+
"url": null
|
| 171 |
+
}
|
| 172 |
+
},
|
| 173 |
+
{
|
| 174 |
+
"9": {
|
| 175 |
+
"title": "E(n) Equivariant Graph Neural\nNetworks.",
|
| 176 |
+
"author": "Garcia Satorras, V., Hoogeboom, E. &\nWelling, M.",
|
| 177 |
+
"venue": "\\JournalTitlearXiv e-prints\narXiv:2102.09844, DOI: 10.48550/arXiv.2102.09844\n(2021).",
|
| 178 |
+
"url": null
|
| 179 |
+
}
|
| 180 |
+
},
|
| 181 |
+
{
|
| 182 |
+
"10": {
|
| 183 |
+
"title": "SE(3) Equivariant Graph Neural\nNetworks with Complete Local Frames.",
|
| 184 |
+
"author": "Du, W. et al.",
|
| 185 |
+
"venue": "\\JournalTitlearXiv e-prints\narXiv:2110.14811, DOI: 10.48550/arXiv.2110.14811\n(2021).",
|
| 186 |
+
"url": null
|
| 187 |
+
}
|
| 188 |
+
},
|
| 189 |
+
{
|
| 190 |
+
"11": {
|
| 191 |
+
"title": "E(3)-equivariant graph neural networks for\ndata-efficient and accurate interatomic potentials (2021).",
|
| 192 |
+
"author": "Batzner, S. et al.",
|
| 193 |
+
"venue": "2101.03164.",
|
| 194 |
+
"url": null
|
| 195 |
+
}
|
| 196 |
+
},
|
| 197 |
+
{
|
| 198 |
+
"12": {
|
| 199 |
+
"title": "Mace: Higher order equivariant\nmessage passing neural networks for fast and accurate force fields.",
|
| 200 |
+
"author": "Batatia, I., Kov\u00e1cs, D. P.,\nSimm, G. N., Ortner, C. &\nCs\u00e1nyi, G.",
|
| 201 |
+
"venue": "\\JournalTitlearXiv preprint arXiv:2206.07697\n(2022).",
|
| 202 |
+
"url": null
|
| 203 |
+
}
|
| 204 |
+
},
|
| 205 |
+
{
|
| 206 |
+
"13": {
|
| 207 |
+
"title": "Clebsch-Gordan Nets: a fully Fourier space\nspherical convolutional neural network.",
|
| 208 |
+
"author": "Kondor, R., Lin, Z. &\nTrivedi, S.",
|
| 209 |
+
"venue": "In Advances in Neural Information\nProcessing Systems (NeurIPS) (2018).",
|
| 210 |
+
"url": null
|
| 211 |
+
}
|
| 212 |
+
},
|
| 213 |
+
{
|
| 214 |
+
"14": {
|
| 215 |
+
"title": "On the generalization of equivariance and convolution\nin neural networks to the action of compact groups (2018).",
|
| 216 |
+
"author": "Kondor, R. & Trivedi, S.",
|
| 217 |
+
"venue": "1802.03690.",
|
| 218 |
+
"url": null
|
| 219 |
+
}
|
| 220 |
+
},
|
| 221 |
+
{
|
| 222 |
+
"15": {
|
| 223 |
+
"title": "Classical Invariant Theory.",
|
| 224 |
+
"author": "Olver, P. J.",
|
| 225 |
+
"venue": "London Mathematical Society Student Texts\n(Cambridge University Press, 1999).",
|
| 226 |
+
"url": null
|
| 227 |
+
}
|
| 228 |
+
},
|
| 229 |
+
{
|
| 230 |
+
"16": {
|
| 231 |
+
"title": "Lie groups: an approach through invariants and\nrepresentations.",
|
| 232 |
+
"author": "Procesi, C.",
|
| 233 |
+
"venue": "Universitext (Springer, New\nYork, 2007).",
|
| 234 |
+
"url": null
|
| 235 |
+
}
|
| 236 |
+
},
|
| 237 |
+
{
|
| 238 |
+
"17": {
|
| 239 |
+
"title": "Symmetry, Representations, and Invariants\n(Springer New York, 2009).",
|
| 240 |
+
"author": "Goodman, R. & Wallach, N. R.",
|
| 241 |
+
"venue": null,
|
| 242 |
+
"url": null
|
| 243 |
+
}
|
| 244 |
+
},
|
| 245 |
+
{
|
| 246 |
+
"18": {
|
| 247 |
+
"title": "Density matrix formulation for\nquantum renormalization groups.",
|
| 248 |
+
"author": "White, S. R.",
|
| 249 |
+
"venue": "\\JournalTitlePhysical Review Letters\n69, 2863 (1992).",
|
| 250 |
+
"url": null
|
| 251 |
+
}
|
| 252 |
+
},
|
| 253 |
+
{
|
| 254 |
+
"19": {
|
| 255 |
+
"title": "Class of ansatz wave functions for\none-dimensional spin systems and their relation to the density matrix\nrenormalization group.",
|
| 256 |
+
"author": "Rommer, S. & \u00d6stlund, S.",
|
| 257 |
+
"venue": "\\JournalTitlePhysical Review B\n55, 2164 (1997).",
|
| 258 |
+
"url": null
|
| 259 |
+
}
|
| 260 |
+
},
|
| 261 |
+
{
|
| 262 |
+
"20": {
|
| 263 |
+
"title": "Tensor networks for complex quantum\nsystems.",
|
| 264 |
+
"author": "Or\u00fas, R.",
|
| 265 |
+
"venue": "\\JournalTitleNature Reviews Physics\n1, 538\u2013550,\nDOI: 10.1038/s42254-019-0086-7 (2019).",
|
| 266 |
+
"url": null
|
| 267 |
+
}
|
| 268 |
+
},
|
| 269 |
+
{
|
| 270 |
+
"21": {
|
| 271 |
+
"title": "Finitely correlated states on\nquantum spin chains.",
|
| 272 |
+
"author": "Fannes, M., Nachtergaele, B. &\nWerner, R. F.",
|
| 273 |
+
"venue": "\\JournalTitleCommunications in Mathematical Physics\n144, 443\u2013490\n(1992).",
|
| 274 |
+
"url": null
|
| 275 |
+
}
|
| 276 |
+
},
|
| 277 |
+
{
|
| 278 |
+
"22": {
|
| 279 |
+
"title": "Renormalization algorithms for quantum-many body\nsystems in two and higher dimensions (2004).",
|
| 280 |
+
"author": "Verstraete, F. & Cirac, J. I.",
|
| 281 |
+
"venue": null,
|
| 282 |
+
"url": null
|
| 283 |
+
}
|
| 284 |
+
},
|
| 285 |
+
{
|
| 286 |
+
"23": {
|
| 287 |
+
"title": "Entanglement renormalization.",
|
| 288 |
+
"author": "Vidal, G.",
|
| 289 |
+
"venue": "\\JournalTitlePhysical Review Letters\n99, 220405\n(2007).",
|
| 290 |
+
"url": null
|
| 291 |
+
}
|
| 292 |
+
},
|
| 293 |
+
{
|
| 294 |
+
"24": {
|
| 295 |
+
"title": "Efficient parallelization of tensor\nnetwork contraction for simulating quantum computation.",
|
| 296 |
+
"author": "Huang, C. et al.",
|
| 297 |
+
"venue": "\\JournalTitleNature Computational Science\n1, 578\u2013587\n(2021).",
|
| 298 |
+
"url": null
|
| 299 |
+
}
|
| 300 |
+
},
|
| 301 |
+
{
|
| 302 |
+
"25": {
|
| 303 |
+
"title": "Simulating the sycamore quantum supremacy circuits\n(2021).",
|
| 304 |
+
"author": "Pan, F. & Zhang, P.",
|
| 305 |
+
"venue": null,
|
| 306 |
+
"url": null
|
| 307 |
+
}
|
| 308 |
+
},
|
| 309 |
+
{
|
| 310 |
+
"26": {
|
| 311 |
+
"title": "Supervised learning with tensor\nnetworks.",
|
| 312 |
+
"author": "Stoudenmire, E. & Schwab, D. J.",
|
| 313 |
+
"venue": "\\JournalTitleAdvances in Neural Information Processing\nSystems 29 (2016).",
|
| 314 |
+
"url": null
|
| 315 |
+
}
|
| 316 |
+
},
|
| 317 |
+
{
|
| 318 |
+
"27": {
|
| 319 |
+
"title": "Tensor network for machine learning\n(2019).",
|
| 320 |
+
"author": "Efthymiou, S., Hidary, J. &\nLeichenauer, S.",
|
| 321 |
+
"venue": null,
|
| 322 |
+
"url": null
|
| 323 |
+
}
|
| 324 |
+
},
|
| 325 |
+
{
|
| 326 |
+
"28": {
|
| 327 |
+
"title": "Tensornetwork: A library for\nphysics and machine learning.",
|
| 328 |
+
"author": "Roberts, C. et al.",
|
| 329 |
+
"venue": "\\JournalTitlearXiv preprint arXiv:1905.01330\n(2019).",
|
| 330 |
+
"url": null
|
| 331 |
+
}
|
| 332 |
+
},
|
| 333 |
+
{
|
| 334 |
+
"29": {
|
| 335 |
+
"title": "Entanglement renormalization and\nholography.",
|
| 336 |
+
"author": "Swingle, B.",
|
| 337 |
+
"venue": "\\JournalTitlePhysical Review D\n86, 065007\n(2012).",
|
| 338 |
+
"url": null
|
| 339 |
+
}
|
| 340 |
+
},
|
| 341 |
+
{
|
| 342 |
+
"30": {
|
| 343 |
+
"title": "Holographic quantum\nerror-correcting codes: Toy models for the bulk/boundary correspondence.",
|
| 344 |
+
"author": "Pastawski, F., Yoshida, B.,\nHarlow, D. & Preskill, J.",
|
| 345 |
+
"venue": "\\JournalTitleJournal of High Energy Physics\n2015, 1\u201355\n(2015).",
|
| 346 |
+
"url": null
|
| 347 |
+
}
|
| 348 |
+
},
|
| 349 |
+
{
|
| 350 |
+
"31": {
|
| 351 |
+
"title": "Introduction to SU(2) recoupling theory and\ngraphical methods for loop quantum gravity (2019).",
|
| 352 |
+
"author": "M\u00e4kinen, I.",
|
| 353 |
+
"venue": "1910.06821.",
|
| 354 |
+
"url": null
|
| 355 |
+
}
|
| 356 |
+
},
|
| 357 |
+
{
|
| 358 |
+
"32": {
|
| 359 |
+
"title": "Tensor network states and\nalgorithms in the presence of a global su(2) symmetry.",
|
| 360 |
+
"author": "Singh, S. & Vidal, G.",
|
| 361 |
+
"venue": "\\JournalTitlePhysical Review B\n86, DOI: 10.1103/physrevb.86.195114\n(2012).",
|
| 362 |
+
"url": null
|
| 363 |
+
}
|
| 364 |
+
},
|
| 365 |
+
{
|
| 366 |
+
"33": {
|
| 367 |
+
"title": "A programming guide for tensor\nnetworks with global su(2) symmetry.",
|
| 368 |
+
"author": "Schmoll, P., Singh, S.,\nRizzi, M. & Or\u00fas, R.",
|
| 369 |
+
"venue": "\\JournalTitleAnnals of Physics\n419, 168232,\nDOI: 10.1016/j.aop.2020.168232 (2020).",
|
| 370 |
+
"url": null
|
| 371 |
+
}
|
| 372 |
+
},
|
| 373 |
+
{
|
| 374 |
+
"34": {
|
| 375 |
+
"title": "Benchmarking global \nsymmetry in two-dimensional tensor network algorithms.",
|
| 376 |
+
"author": "Schmoll, P. & Or\u00fas, R.",
|
| 377 |
+
"venue": "\\JournalTitlePhys. Rev. B 102,\n241101, DOI: 10.1103/PhysRevB.102.241101\n(2020).",
|
| 378 |
+
"url": null
|
| 379 |
+
}
|
| 380 |
+
},
|
| 381 |
+
{
|
| 382 |
+
"35": {
|
| 383 |
+
"title": "Cormorant: Covariant molecular neural networks\n(2019).",
|
| 384 |
+
"author": "Anderson, B., Hy, T.-S. &\nKondor, R.",
|
| 385 |
+
"venue": "1906.04015.",
|
| 386 |
+
"url": null
|
| 387 |
+
}
|
| 388 |
+
},
|
| 389 |
+
{
|
| 390 |
+
"36": {
|
| 391 |
+
"title": "Quantum chemistry structures and\nproperties of 134 kilo molecules.",
|
| 392 |
+
"author": "Ramakrishnan, R., Dral, P. O.,\nRupp, M. & von Lilienfeld, O. A.",
|
| 393 |
+
"venue": "\\JournalTitleScientific Data 1\n(2014).",
|
| 394 |
+
"url": null
|
| 395 |
+
}
|
| 396 |
+
},
|
| 397 |
+
{
|
| 398 |
+
"37": {
|
| 399 |
+
"title": "Deep Sets.",
|
| 400 |
+
"author": "Zaheer, M. et al.",
|
| 401 |
+
"venue": "\\JournalTitlearXiv e-prints\narXiv:1703.06114 (2017).",
|
| 402 |
+
"url": null
|
| 403 |
+
}
|
| 404 |
+
},
|
| 405 |
+
{
|
| 406 |
+
"38": {
|
| 407 |
+
"title": "On Universal Equivariant Set Networks\n(2019).",
|
| 408 |
+
"author": "Segol, N. & Lipman, Y.",
|
| 409 |
+
"venue": "1910.02421.",
|
| 410 |
+
"url": null
|
| 411 |
+
}
|
| 412 |
+
},
|
| 413 |
+
{
|
| 414 |
+
"39": {
|
| 415 |
+
"title": "On learning sets of symmetric elements.",
|
| 416 |
+
"author": "Maron, H., Litany, O.,\nChechik, G. & Fetaya, E.",
|
| 417 |
+
"venue": "In International Conference on Machine\nLearning, 6734\u20136744 (PMLR,\n2020).",
|
| 418 |
+
"url": null
|
| 419 |
+
}
|
| 420 |
+
},
|
| 421 |
+
{
|
| 422 |
+
"40": {
|
| 423 |
+
"title": "Group equivariant convolutional networks\n(2016).",
|
| 424 |
+
"author": "Cohen, T. S. & Welling, M.",
|
| 425 |
+
"venue": "1602.07576.",
|
| 426 |
+
"url": null
|
| 427 |
+
}
|
| 428 |
+
},
|
| 429 |
+
{
|
| 430 |
+
"41": {
|
| 431 |
+
"title": "Invariant and equivariant graph networks.",
|
| 432 |
+
"author": "Maron, H., Ben-Hamu, H.,\nShamir, N. & Lipman, Y.",
|
| 433 |
+
"venue": "In International Conference on Learning\nRepresentations (2019).",
|
| 434 |
+
"url": null
|
| 435 |
+
}
|
| 436 |
+
},
|
| 437 |
+
{
|
| 438 |
+
"42": {
|
| 439 |
+
"title": "The general theory of permutation equivarant neural\nnetworks and higher order graph variational encoders (2020).",
|
| 440 |
+
"author": "Thiede, E. H., Hy, T. S. &\nKondor, R.",
|
| 441 |
+
"venue": "2004.03990.",
|
| 442 |
+
"url": null
|
| 443 |
+
}
|
| 444 |
+
},
|
| 445 |
+
{
|
| 446 |
+
"43": {
|
| 447 |
+
"title": "Quantum Theory of Angular Momentum\n(WORLD SCIENTIFIC, 1988).",
|
| 448 |
+
"author": "Varshalovich, D. A., Moskalev, A. N. &\nKhersonskii, V. K.",
|
| 449 |
+
"venue": null,
|
| 450 |
+
"url": null
|
| 451 |
+
}
|
| 452 |
+
},
|
| 453 |
+
{
|
| 454 |
+
"44": {
|
| 455 |
+
"title": "Atomic Many-Body Theory\n(Springer Berlin Heidelberg, Berlin,\nHeidelberg, 1986).",
|
| 456 |
+
"author": "Lindgren, I. & Morrison, J.",
|
| 457 |
+
"venue": null,
|
| 458 |
+
"url": null
|
| 459 |
+
}
|
| 460 |
+
},
|
| 461 |
+
{
|
| 462 |
+
"45": {
|
| 463 |
+
"title": "Canonical definition of wigner\ncoefficients in un.",
|
| 464 |
+
"author": "Biedenharn, L. C., Giovannini, A. &\nLouck, J. D.",
|
| 465 |
+
"venue": "\\JournalTitleJournal of Mathematical Physics\n8, 691\u2013700,\nDOI: 10.1063/1.1705266 (1967).",
|
| 466 |
+
"url": null
|
| 467 |
+
}
|
| 468 |
+
},
|
| 469 |
+
{
|
| 470 |
+
"46": {
|
| 471 |
+
"title": "THE CLEBSCH-GORDAN COEFFICIENTS OF\nPERMUTATION GROUPS S(2) - S(6).",
|
| 472 |
+
"author": "Gao, M.-J. & Chen, J.-Q.",
|
| 473 |
+
"venue": "\\JournalTitleJ. Phys. A 18,\n189\u2013213, DOI: 10.1088/0305-4470/18/2/009\n(1985).",
|
| 474 |
+
"url": null
|
| 475 |
+
}
|
| 476 |
+
},
|
| 477 |
+
{
|
| 478 |
+
"47": {
|
| 479 |
+
"title": "The design space of E(3)-equivariant atom-centered\ninteratomic potentials, DOI: 10.48550/ARXIV.2205.06643\n(2022).",
|
| 480 |
+
"author": "Batatia, I. et al.",
|
| 481 |
+
"venue": null,
|
| 482 |
+
"url": null
|
| 483 |
+
}
|
| 484 |
+
},
|
| 485 |
+
{
|
| 486 |
+
"48": {
|
| 487 |
+
"title": "Perspective: Nonadiabatic dynamics\ntheory.",
|
| 488 |
+
"author": "Tully, J. C.",
|
| 489 |
+
"venue": "\\JournalTitleThe Journal of chemical physics\n137, 22A301\n(2012).",
|
| 490 |
+
"url": null
|
| 491 |
+
}
|
| 492 |
+
},
|
| 493 |
+
{
|
| 494 |
+
"49": {
|
| 495 |
+
"title": "Nano-photocatalytic materials:\npossibilities and challenges.",
|
| 496 |
+
"author": "Tong, H. et al.",
|
| 497 |
+
"venue": "\\JournalTitleAdvanced materials\n24, 229\u2013251\n(2012).",
|
| 498 |
+
"url": null
|
| 499 |
+
}
|
| 500 |
+
},
|
| 501 |
+
{
|
| 502 |
+
"50": {
|
| 503 |
+
"title": "Photodynamic therapy.",
|
| 504 |
+
"author": "Dougherty, T. J. et al.",
|
| 505 |
+
"venue": "\\JournalTitleJNCI: Journal of the national cancer\ninstitute 90, 889\u2013905\n(1998).",
|
| 506 |
+
"url": null
|
| 507 |
+
}
|
| 508 |
+
},
|
| 509 |
+
{
|
| 510 |
+
"51": {
|
| 511 |
+
"title": "Photoprotection: extending lessons\nlearned from studying natural sunscreens to the design of artificial\nsunscreen constituents.",
|
| 512 |
+
"author": "Baker, L. A., Marchetti, B.,\nKarsili, T. N., Stavros, V. G. &\nAshfold, M. N.",
|
| 513 |
+
"venue": "\\JournalTitleChemical Society Reviews\n46, 3770\u20133791\n(2017).",
|
| 514 |
+
"url": null
|
| 515 |
+
}
|
| 516 |
+
},
|
| 517 |
+
{
|
| 518 |
+
"52": {
|
| 519 |
+
"title": "The influence of the electronic\nstructure method on intersystem crossing dynamics. the case of\nthioformaldehyde.",
|
| 520 |
+
"author": "Mai, S., Atkins, A. J.,\nPlasser, F. & Gonz\u00e1lez, L.",
|
| 521 |
+
"venue": "\\JournalTitleJournal of Chemical Theory and Computation\n15, 3470\u20133480\n(2019).",
|
| 522 |
+
"url": null
|
| 523 |
+
}
|
| 524 |
+
},
|
| 525 |
+
{
|
| 526 |
+
"53": {
|
| 527 |
+
"title": "The morse oscillator in position\nspace, momentum space, and phase space.",
|
| 528 |
+
"author": "Dahl, J. P. & Springborg, M.",
|
| 529 |
+
"venue": "\\JournalTitleThe Journal of chemical physics\n88, 4535\u20134547\n(1988).",
|
| 530 |
+
"url": null
|
| 531 |
+
}
|
| 532 |
+
},
|
| 533 |
+
{
|
| 534 |
+
"54": {
|
| 535 |
+
"title": "Picosecond excitation and trans-cis\nisomerization of stilbene in a supersonic jet: Dynamics and spectra.",
|
| 536 |
+
"author": "Syage, J., Lambert, W. R.,\nFelker, P., Zewail, A. &\nHochstrasser, R.",
|
| 537 |
+
"venue": "\\JournalTitleChemical Physics Letters\n88, 266\u2013270\n(1982).",
|
| 538 |
+
"url": null
|
| 539 |
+
}
|
| 540 |
+
},
|
| 541 |
+
{
|
| 542 |
+
"55": {
|
| 543 |
+
"title": "A complete active space scf method\n(casscf) using a density matrix formulated super-ci approach.",
|
| 544 |
+
"author": "Roos, B. O., Taylor, P. R. &\nSigbahn, P. E.",
|
| 545 |
+
"venue": "\\JournalTitleChemical Physics\n48, 157\u2013173\n(1980).",
|
| 546 |
+
"url": null
|
| 547 |
+
}
|
| 548 |
+
},
|
| 549 |
+
{
|
| 550 |
+
"56": {
|
| 551 |
+
"title": "Nonadiabatic dynamics: The sharc\napproach.",
|
| 552 |
+
"author": "Mai, S., Marquetand, P. &\nGonz\u00e1lez, L.",
|
| 553 |
+
"venue": "\\JournalTitleWiley Interdisciplinary Reviews: Computational\nMolecular Science 8, e1370\n(2018).",
|
| 554 |
+
"url": null
|
| 555 |
+
}
|
| 556 |
+
},
|
| 557 |
+
{
|
| 558 |
+
"57": {
|
| 559 |
+
"title": "Machine learning of accurate\nenergy-conserving molecular force fields.",
|
| 560 |
+
"author": "Chmiela, S. et al.",
|
| 561 |
+
"venue": "\\JournalTitleScience advances\n3, e1603015\n(2017).",
|
| 562 |
+
"url": null
|
| 563 |
+
}
|
| 564 |
+
},
|
| 565 |
+
{
|
| 566 |
+
"58": {
|
| 567 |
+
"title": "xxmd: Benchmarking neural force fields using extended\ndynamics beyond equilibrium (2023).",
|
| 568 |
+
"author": "Pengmei, Z., Liu, J. &\nShu, Y.",
|
| 569 |
+
"venue": "2308.11155.",
|
| 570 |
+
"url": null
|
| 571 |
+
}
|
| 572 |
+
},
|
| 573 |
+
{
|
| 574 |
+
"59": {
|
| 575 |
+
"title": "Thirty years of density functional\ntheory in computational chemistry: an overview and extensive assessment of\n200 density functionals.",
|
| 576 |
+
"author": "Mardirossian, N. & Head-Gordon, M.",
|
| 577 |
+
"venue": "\\JournalTitleMolecular physics\n115, 2315\u20132372\n(2017).",
|
| 578 |
+
"url": null
|
| 579 |
+
}
|
| 580 |
+
},
|
| 581 |
+
{
|
| 582 |
+
"60": {
|
| 583 |
+
"title": "Symmetric pruning in quantum neural\nnetworks.",
|
| 584 |
+
"author": "Wang, X. et al.",
|
| 585 |
+
"venue": "\\JournalTitlearXiv preprint arXiv:2208.14057\n(2022).",
|
| 586 |
+
"url": null
|
| 587 |
+
}
|
| 588 |
+
}
|
| 589 |
+
],
|
| 590 |
+
"url": "http://arxiv.org/html/2211.07482v3"
|
| 591 |
+
}
|
20240522/2211.10054v2.json
ADDED
|
@@ -0,0 +1,230 @@
|
| 1 |
+
{
|
| 2 |
+
"title": "Decorr: Environment Partitioning for Invariant Learning and OOD Generalization",
|
| 3 |
+
"abstract": "Invariant learning methods, aimed at identifying a consistent predictor across multiple environments, are gaining prominence in out-of-distribution (OOD) generalization. Yet, when environments aren\u2019t inherent in the data, practitioners must define them manually. This environment partitioning\u2014algorithmically segmenting the training dataset into environments\u2014crucially affects invariant learning\u2019s efficacy but remains underdiscussed. Proper environment partitioning could broaden the applicability of invariant learning and enhance its performance. In this paper, we suggest partitioning the dataset into several environments by isolating low-correlation data subsets. Through experiments with synthetic and real data, our Decorr method demonstrates superior performance in combination with invariant learning. Decorr mitigates the issue of spurious correlations, aids in identifying stable predictors, and broadens the applicability of invariant learning methods.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Machine learning methods have made significant strides in image classification, speech recognition, machine translation, and other domains. However, these methods typically assume that the training and testing data are independently and identically distributed (i.i.d.), an assumption that may not hold in real-world applications like autopilot, healthcare, and financial prediction. The reliance on this assumption makes using these models risky in such critical applications, as performance can drastically decline at test time, and the cost of failure is substantial [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###]. Invariant Risk Minimization (IRM), introduced in [4 ###reference_b4###], addresses this Out-of-Distribution (OOD) generalization issue and has garnered considerable interest. IRM\u2019s objective is to discover a data representation that ensures the optimal classifier remains consistent across all environments, thereby enhancing generalizability to new testing environments or distributions. This approach has proven successful in various scenarios and datasets. Building on IRM\u2019s principles, several other invariant learning methods [5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###] have emerged, achieving promising OOD performance.\nTo deploy these invariant learning methods, establishing an environment partition of the training set is necessary. Existing approaches often rely on data sources or metadata to determine this partition. However, a natural partition may not always exist or can be difficult to identify, rendering these methods unsuitable for many datasets [10 ###reference_b10###]. For instance, in the Colored MNIST (CMNIST) synthetic dataset, the environments are defined with the correlations and respectively, and the belonging of each image to a specific environment is known during training. A more realistic scenario is one where such environment information is unknown. Even when a natural environment partition exists, it\u2019s worth questioning whether it is optimal for developing a model that generalizes well, considering the myriad ways the data could be segmented into different environments.\nSeveral studies have addressed the challenge of environment partitioning. Creager et al. [11 ###reference_b11###] introduced Environment Inference for Invariant Learning (EIIL), which employs a reference classifier trained using ERM to identify partitions that maximally violate the invariance principle, thereby maximizing the IRMv1 penalty on . Similarly, Just Train Twice (JTT) [12 ###reference_b12###] involves training a reference model first, followed by a second model that upweights training examples misclassified by the initial model. However, the effectiveness of these two-step, mistake-exploiting methods depends significantly on the performance of the reference model [13 ###reference_b13###, 14 ###reference_b14###]. Another straightforward approach to partitioning is clustering. Works by Matsuura et al. [15 ###reference_b15###], Sohoni et al. [10 ###reference_b10###], and Thopalli et al. [16 ###reference_b16###] have utilized conventional clustering techniques like -means to divide the dataset based on feature space, while Liu et al. [17 ###reference_b17###] sought to maximize the diversity of the output distribution through clustering.\nAlthough training and testing sets are not i.i.d. in the OOD setting, they should share certain common properties that aid in generalization. 
A widely recognized OOD assumption addressing covariate shift posits that [1 ###reference_b1###], suggesting that the outcome distribution remains consistent across training and testing sets given . Building on this premise, it follows that the environment label , whether based on or features extracted as , should be independent of the outcome . This foundational assumption enables us to address discrepancies between training and testing environments effectively.\nHowever, most environment partitioning methods utilize the outcome for segmentation. This approach is less harmful in scenarios with a high signal-to-noise ratio, such as image data, where inherently encompasses most information about . However, in the presence of noisy data, such as tabular data, these methods may mistakenly assign similar or identical features into different environments due to variations in . For instance, two data points with the same might end up in separate environments solely because the error components (irreducible and purely stochastic) in their values differ. This leads to disparate conditional distributions of the outcome across environments, violating the covariate shift assumption. The efficacy of these methods in high-noise contexts remains largely unexplored. While -means clustering does not depend on , relying solely on the feature space, it is not supported by a clear interpretation or theoretical justification for its application in this context.\nThis paper explores an environment partitioning method tailored for high-noise data to enhance the OOD generalization performance of IRM. We observe that models trained on datasets with uncorrelated features generally perform well against correlation shifts. IRM specifically seeks to develop an invariant predictor that excels across various environments. Inspired by these observations, we introduce Decorr, a method designed to identify subsets of features with low correlation for environment partitioning. Decorr is computationally efficient and independent of the outcome . Through experiments with both synthetic and real data, we demonstrate that Decorr, in conjunction with IRM, consistently outperforms in OOD scenarios we established, whereas some existing partitioning methods paired with IRM yield poor, sometimes worse-than-ERM, results.\nTo summarize, in this paper, our contributions are:\nWe introduce Decorr, a method that manually partitions the dataset into environments to enhance OOD generalization in combination with invariant learning. Decorr identifies subsets characterized by features with low correlations, which mitigates the issue of spurious correlations and aids in identifying stable predictors.\nWe demonstrate through simulation studies that Decorr-based invariant learning can achieve good OOD generalization even under model misspecification.\nThe proposed Decorr method demonstrates superior performance across a diverse range of datasets, including both tabular and image types.\nWe broaden the applicability of invariant learning methods (IRM, REx, etc.) when natural environment partitions are unknown. We improve the performance of invariant learning when the natural environment partitions are suboptimal."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "II Background",
|
| 15 |
+
"text": ""
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "II-A Invariant Risk Minimization",
|
| 21 |
+
"text": "IRM [4 ###reference_b4###] works with datasets sourced from multiple training environments , aiming to develop a model that excels across an extensive range of environments , where . The objective is to minimize the worst-case risk , with representing the risk within environment . Specifically, IRM seeks to identify a data representation and a classifier that remains optimal across all training environments when using the representation . This challenge is framed as a constrained optimization problem:\nTo make the problem solvable, the practical version IRMv1 is expressed as\nwhere indicates the entire invariant predictor, and is a fixed dummy scalar. The gradient norm penalty can be interpreted as the invariance of the predictor .\nAnother approach to invariant learning is Risk Extrapolation (REx) [18 ###reference_b18###], which aims to reduce training risks while increasing the similarity of training risks across environments. Variance-REx (V-REx) adds a penalty term\u2014the variance of training losses across all training environments\u2014to the traditional empirical risk minimization. It has been shown that this method can perform robustly in the presence of covariate shift."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "II-B Environment Partitioning Methods",
|
| 27 |
+
"text": "To our best knowledge, literature predominantly features two types of partitioning methods. Clustering methods for environment partitioning are discussed in [15 ###reference_b15###, 10 ###reference_b10###, 16 ###reference_b16###]. The general approach involves extracting features from the data and then clustering the samples based on these features into multiple groups, with all proposed methods employing -means for clustering. Conversely, EIIL [11 ###reference_b11###] introduces an adversarial approach that partitions data into two environments designed to maximize the IRM penalty using an ERM model. Specifically, the environment inference step seeks to optimize a probability distribution to enhance the IRMv1 regularizer , where represents the -weighted risk. After identifying the optimal , environments are assigned by placing data into one environment based on , with the rest allocated to another environment."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "2.3",
|
| 31 |
+
"parent_section_id": "2",
|
| 32 |
+
"section_name": "II-C Some Other Related Works",
|
| 33 |
+
"text": ""
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "2.3.1",
|
| 37 |
+
"parent_section_id": "2.3",
|
| 38 |
+
"section_name": "II-C1 Feature Decorrelation",
|
| 39 |
+
"text": "The use of correlation for feature selection is well-established in machine learning [19 ###reference_b19###, 20 ###reference_b20###, 21 ###reference_b21###, 22 ###reference_b22###]. More recently, the decorrelation method has been adapted for stable learning and OOD generalization. [23 ###reference_b23###] suggested decorrelating features by learning weights for training samples. [24 ###reference_b24###] initially clustered variables based on the stability of their correlations, then proceeded to decorrelate pairs of variables from different clusters. [25 ###reference_b25###] focused on simultaneously optimizing regression coefficients and sample weights to manage correlation. However, to our knowledge, discussions on using decorrelation for dataset partitioning to enhance invariant learning are lacking, which is the main focus of the subsequent sections in our study."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "2.3.2",
|
| 43 |
+
"parent_section_id": "2.3",
|
| 44 |
+
"section_name": "II-C2 Covariate Shift",
|
| 45 |
+
"text": "Covariate shift has been a significant challenge in machine learning long before the advent of OOD generalization and invariant learning. Earlier research on covariate shift focused on adaptively training a predictor using the training dataset, sometimes incorporating an unlabeled testing dataset or known test-train density ratios, but without employing multiple training sets or environment partitions. Under such conditions, Importance Weighting (IW) has proven to be an effective strategy for addressing covariate shift [26 ###reference_b26###, 27 ###reference_b27###, 28 ###reference_b28###, 29 ###reference_b29###, 30 ###reference_b30###, 31 ###reference_b31###]. In contrast, the contexts of OOD generalization and invariant learning typically require environment partitions and do not involve any pre-knowledge of the testing set. For further insights, theories, and methodologies related to covariate shift in machine learning, please refer to [32 ###reference_b32###, 33 ###reference_b33###]."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "3",
|
| 49 |
+
"parent_section_id": null,
|
| 50 |
+
"section_name": "III The Proposed Method",
|
| 51 |
+
"text": "The IRM objective outlined in Eqn. (1 ###reference_###) seeks to minimize risk across a set of environments , while imposing an invariant constraint on the weights . IRM operates under the assumption that the environment partition is pre-established. However, in real-world scenarios, the data can be divided into environments in numerous ways. For instance, such partitioning could be based on personal characteristics like gender, age, or education level when predicting income from personal data, or it might depend on the timing of data collection. These partitioning strategies may be subject to scrutiny. Often in practice, we are either presented with a single training dataset without a clear environment partition or handed a potentially inadequate predetermined partition. Consequently, it becomes essential to manually select an environment partition that is effective for invariant learning methods to maximize the OOD performance of the model.\nIn this section, we explore how to identify a subset of data characterized by low correlation, making it ideal for IRM learning. Given a data matrix , we denote its correlation matrix by . The deviation of from the identity matrix is assessed using the squared Frobenius distance , which serves as a measurement of how uncorrelated is. The formulation of our goal is as follows:\nsubject to some constraints on the size of . Given the large feasible set and the complexity of the optimization, we approach it by minimizing a softer alternative that transforms the subset selection into an optimization of sample weights. Using a weight vector for the observations in , the weighted correlation matrix is computed as described in [34 ###reference_b34###]. The correlation minimization problem can thus be reformulated as\nwhich is amenable to optimization. Here, the -th element of represents the probability that the -th data point is included in the new environment from the entire set . Restricting to causes (4 ###reference_###) to revert to the original hard problem (3 ###reference_###).\nThe specified target poses convergence challenges without constraints on . To address this, we propose two restrictions. First, considering as the number of desired environments, we limit the mean of the weights with . This constraint prevents the predominance of excessively small values in , which could lead to a small sample size and high variance within the partitioned environments. We enforce this by incorporating a penalty term into the objective. Additionally, we constrain \u2014where is a minimal value near zero\u2014rather than . This adjustment not only facilitates the optimization\u2019s convergence but also guarantees that all data points in the training set are eligible for inclusion in the partitioned set, allowing the model trained on this set to adapt to the entire distribution of observations rather than a confined segment. This approach also balances the trade-off between diversity shift and correlation shift [35 ###reference_b35###].\nTo partition the training dataset into environments such that , we iteratively optimize the objective specified in Eqn. (4 ###reference_###) with respect to the residual sample set, selecting samples to establish each new environment sequentially. We detail this procedure in Algorithm 1 ###reference_###. 
Once the environments are delineated, we employ IRM as the learning strategy to enhance OOD generalization.\nInput: training set , number of desired environments , restriction parameter , learning rate , number of epochs , and (suggested to be ) \nOutput: the partitioned environments \nInitialization: the residual set\nTo explore the impact of various environment partitioning strategies, we applied EIIL, -means, and Decorr to a two-dimensional toy dataset where and exhibit positive correlation, and is the sum of and an error term. The resulting partitions are illustrated in Fig. 1 ###reference_###. The EIIL partition shows no clear patterns, suggesting a strong dependence on the label and deviating from expected environmental separations. Both -means and Decorr reveal spatial characteristics. Decorr divides the dataset into environments characterized by distinct covariate relationships: one positively correlated (triangle) and one almost uncorrelated (circle). While -means also bifurcates the data spatially, the divisions it creates feature similar covariate properties with only a mean shift, potentially diminishing its utility for IRM applications.\n###figure_1### ###figure_2### ###figure_3###"
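To make the optimization concrete, here is a minimal PyTorch sketch of the soft objective in Eqn. (4) for selecting one environment's weights; the sigmoid reparameterization, the penalty weight, and all names are our assumptions rather than a reference implementation:

import torch

def weighted_corr(X, w):
    # Weighted correlation matrix of X (n x p) under sample weights w (n,).
    w = w / w.sum()
    mu = (w[:, None] * X).sum(0)
    Xc = X - mu
    cov = Xc.T @ (w[:, None] * Xc)
    std = torch.diag(cov).clamp_min(1e-12).sqrt()
    return cov / (std[:, None] * std[None, :])

def decorr_weights(X, k, lam=1.0, eps=1e-3, lr=0.01, epochs=500):
    # Optimize soft inclusion weights w in (eps, 1) so the weighted
    # correlation matrix stays close to the identity, while a penalty
    # keeps the mean weight near 1/k so the environment is not too small.
    n, p = X.shape
    theta = torch.zeros(n, requires_grad=True)
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(epochs):
        w = eps + (1 - eps) * torch.sigmoid(theta)
        R = weighted_corr(X, w)
        loss = ((R - torch.eye(p)) ** 2).sum() + lam * (w.mean() - 1.0 / k) ** 2
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (eps + (1 - eps) * torch.sigmoid(theta)).detach()

Samples can then be drawn into the new environment with probabilities given by the returned weights, and the optimization repeats on the residual set as in Algorithm 1.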
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "3.1",
|
| 55 |
+
"parent_section_id": "3",
|
| 56 |
+
"section_name": "III-A OOD Generalization under Model Misspecification",
|
| 57 |
+
"text": "In addition to our earlier discussions, we design an experiment to demonstrate that our method can achieve OOD generalization even when the models are misspecified. We adhere to most data-generating configurations and settings as described in [25 ###reference_b25###]. We denote as the invariant features and as the spurious features, and is the sample size. The data are generated according to the model outlined below:\nwhere and is a nonlinear term.\nWe define or , even though may exceed 3. The coefficient vector cycles through these six values, truncated to fit dimensions. Here, a straightforward is set by us.\nTo create training and testing sets with selection bias, we first produce original samples using the aforementioned method, then allocate each sample to the set based on the probability . A results in a positive correlation between and , while leads to a negative correlation. Nonetheless, the generation of is independent of , thereby introducing a spurious correlation through selection bias. In our experiment, with 20,000 samples in the training set, and with 10,000 samples in the testing set.\nWe employ linear regression of against as the base model to fit the synthetic data.\nWe compare Decorr against other environment partitioning strategies (pure random and -means), combined with two different invariant learning algorithms (IRM and V-REx [18 ###reference_b18###]). Additionally, we include ERM (Empirical Risk Minimization) and Decorr+ERM, the latter of which applies ERM to the first environment divided by Decorr, as baselines. To assess each model\u2019s generalization capability, we use , the norm of the coefficient on , which should ideally be zero for an optimal model, and the Mean Squared Error (MSE) in the testing set. The experimental results for two types of are presented in Table I ###reference_### and II ###reference_###, respectively. The results indicate that Decorr achieves superior performance on environment partitioning relative to other methods, meanwhile underscoring the importance of using invariant learning approaches, as evidenced by the baseline performance of ERM and Decorr+ERM. The norm of the coefficient on and the MSE for the testing set are significantly reduced when using Decorr+IRM or Decorr+REx.\n###table_1### ###table_2###"
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "4",
|
| 61 |
+
"parent_section_id": null,
|
| 62 |
+
"section_name": "IV Experiments",
|
| 63 |
+
"text": "We assess four distinct environment partitioning methods: pure random, EIIL [11 ###reference_b11###], -means, and Decorr, along with the complete procedure of Heterogeneous Risk Minimization (HRM, [17 ###reference_b17###]) in some experiments. Additionally, we deploy the original IRM (and V-REx [18 ###reference_b18###] exclusively in real data experiments) where environments are pre-defined, and also employ ERM for comparative analysis. We utilize either the original or widely used implementations of IRM111https://github.com/facebookresearch/InvarianceUnitTests, EIIL222https://github.com/ecreager/eiil, and HRM333https://github.com/LJSthu/HRM.\nAcross all experiments, we set parameters , , , and for Decorr, adjusting according to the specific requirements of each scenario. For instance, in synthetic data experiments, corresponds to the actual number of environments in the training data, ranging from 2 to 8, though in most cases, is either 2 or 3. Similarly, in related studies employing methods like EIIL, HRM, or -means, the number of training environments typically ranges between 2 and 3.\nAlthough this paper primarily does not target high signal-to-noise ratio data, we showcase the adaptability of Decorr by applying it to two image datasets, CMNIST and Waterbirds, demonstrating its effectiveness across various dataset types. Feature extraction is conducted using a neural network before proceeding with the decorrelation process. Decorr is applied to these extracted low-dimensional features (-means is applied similarly). After partitioning the dataset, we continue to employ the original raw image data for IRM training."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "4.1",
|
| 67 |
+
"parent_section_id": "4",
|
| 68 |
+
"section_name": "IV-A Synthetic Data Experiments",
|
| 69 |
+
"text": ""
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "4.1.1",
|
| 73 |
+
"parent_section_id": "4.1",
|
| 74 |
+
"section_name": "IV-A1 A Toy Example",
|
| 75 |
+
"text": "Following [36 ###reference_b36###], we consider a toy -dimensional example generated by\nwhere . The task is to predict given . We can write if we ignore the causal mechanics, hence using to predict has a fixed noise. Therefore, if is small, the model will rely more on due to the lower noise in predicting using . Conversely, if is large, the model will rely more on . For simplicity, we set and for three different environments. In each environment , 1,000 samples are generated. The goal is for the model to learn the true causal relationship, , rather than the spurious correlation between and . Therefore, we assess the effectiveness of different environment partitioning strategies by calculating the Mean Squared Error (MSE) between the linear regression coefficients and the ground-truth coefficients .\nData from original environments are combined, followed by the application of environment partitioning strategies to achieve the same number of environments, and then the application of IRM. We consider scenarios featuring varying numbers of original environments (2 or 3, corresponding to the first two or all three) and different data dimensions . The penalty weight in IRM is set as . Results, derived from 10 trials to calculate the mean and the standard deviation, are displayed in Table III ###reference_###. In most instances, Decorr yields coefficients that are closest to . Original IRM, when true environments are known (as indicated in the last row), reliably generates the true coefficients.\n###table_3###"
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "4.1.2",
|
| 79 |
+
"parent_section_id": "4.1",
|
| 80 |
+
"section_name": "IV-A2 Risks of IRM",
|
| 81 |
+
"text": "Next, we consider another example in [5 ###reference_b5###], which is generated by\nand\nAssume data are drawn from training environments . For a given environment , a data point is obtained by first randomly sampling a label , then sampling invariant features and environmental features . Finally, the observation is generated.\nWe let vary from 2 to 8, and for each case, we do training and testing 10 times to average the results. In each time, we set , , , , , where , , , are all sampled from standard normal. and are shared across environments and is not. In each environment , we sample 1,000 points. Again, data from original environments are combined, followed by the application of environment partitioning strategies (to obtain the same number of environments) and then IRM.\nIn this experiment, we set as the identity function. After training a logistic classifier, we generate 5,000 different testing environments (each with a new drawn from standard normal) to compute the worst-case prediction error rate. The penalty weight for IRM is . Most hyper-parameters align with those in the study by [5 ###reference_b5###]. The results, presented in Fig. 4 ###reference_###, indicate that Decorr consistently yields a stable low error rate. This clearly demonstrates the superiority of Decorr."
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "4.2",
|
| 85 |
+
"parent_section_id": "4",
|
| 86 |
+
"section_name": "IV-B Real Data Experiments",
|
| 87 |
+
"text": ""
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "4.2.1",
|
| 91 |
+
"parent_section_id": "4.2",
|
| 92 |
+
"section_name": "IV-B1 Implementation Details",
|
| 93 |
+
"text": "For each task, we utilize MLPs with two hidden layers featuring tanh activations and a dropout rate of following each hidden layer. The size of each hidden layer is , where is the input dimension. The output layer is either linear or logistic. We employ Adam optimization [37 ###reference_b37###] with default parameters (learning rate , , , ) to minimize binary cross-entropy loss for classification and MSE for regression. The training involves a maximum of 20,000 iterations and includes an penalty term weighted by . For IRM, the penalty weight is . Except for occupancy estimation, all other experiments utilize for Decorr and comparable methods, i.e., partitioning to create 2 environments. Unless otherwise specified, the original environment partitions for IRM and REx are determined based on the timestamps of the observations."
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "4.2.2",
|
| 97 |
+
"parent_section_id": "4.2",
|
| 98 |
+
"section_name": "IV-B2 Financial Indicators",
|
| 99 |
+
"text": "The task for the financial indicators dataset444https://www.kaggle.com/datasets/cnic92/200-financial-indicators-of-us-stocks-20142018 is to predict whether a stock\u2019s price will increase over the following year based on the stock\u2019s financial indicators. The dataset is divided into five annual segments from 2014 to 2018. Following the implementation details in [18 ###reference_b18###], we treat each year as a baseline environment, utilizing three environments for training, one for validation through early stopping, and one for testing. This setup results in a total of 20 different tasks. Additionally, we compile another set of 20 tasks, each using a single environment for training, one for validation, and three for testing. In this configuration, the original IRM and REx are ineffective due to the presence of only one explicit training environment. Following [1 ###reference_b1###], we assess all methods based on average error rate, worst-case error rate, and standard deviation across tasks in each task set. The results, displayed in Table IV ###reference_###, indicate that Decorr achieves the best performance. The superiority of Decorr+IRM over pure IRM suggests that the original environment partitions, determined by the timestamps of observations, are suboptimal.\n###table_4###"
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "4.2.3",
|
| 103 |
+
"parent_section_id": "4.2",
|
| 104 |
+
"section_name": "IV-B3 Adult",
|
| 105 |
+
"text": "The Adult dataset555https://archive.ics.uci.edu/ml/datasets/adult is a tabular collection derived from a U.S. census, aimed at classifying whether an individual\u2019s annual income exceeds or falls below 50,000 USD based on specific characteristics. We retain only race and sex as categorical variables, converting them into binary values . To introduce a distributional shift between training and testing data, we first identify either race or sex as the biased feature, denoted as , and categorize the dataset into four groups: , , , and . For the training set, we use 90% of the data from the first two groups and only an proportion from the last two groups, with the remainder allocated to the testing set. A lower value increases the distributional shift, whereas an of 0.9 results in no shift. We anticipate that IRM-based methods will learn the spurious correlation ( leads to ) under small values. The error rate results on the testing set are depicted in Fig. 4 ###reference_### and 4 ###reference_###. For pure IRM and REx, original environments are defined by the value of the biased feature.\n###figure_4### ###figure_5### ###figure_6###"
|
| 106 |
+
},
|
| 107 |
+
{
|
| 108 |
+
"section_id": "4.2.4",
|
| 109 |
+
"parent_section_id": "4.2",
|
| 110 |
+
"section_name": "IV-B4 Occupancy Estimation",
|
| 111 |
+
"text": "The Occupancy Estimation dataset666https://archive.ics.uci.edu/ml/datasets/Room+Occupancy+Estimation comprises data from various sensors (temperature, light, sound, CO2, etc.) recorded every minute in a room. The task is to estimate the room occupancy, i.e., the count of individuals in the room. We treat this as a regression task, converting the time into a real number within the range , standardizing the features, and training the models using the designated training and testing sets. The training errors are high for all the environment partitioning methods when , hence we divide to create environments. The MSE on the testing set is displayed in Table V ###reference_###, where Decorr once again yields the lowest testing error. For pure IRM and REx, original environments are determined by the collection times of the samples.\n###table_5### ###table_6###"
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"section_id": "4.2.5",
|
| 115 |
+
"parent_section_id": "4.2",
|
| 116 |
+
"section_name": "IV-B5 Stock",
|
| 117 |
+
"text": "The Stock dataset777https://www.kaggle.com/datasets/nikhilkohli/us-stock-market-data-60-extracted-features comprises market data and technical indicators for 10 U.S. stocks from 2005 to 2020. We attempt to predict whether the closing price of a stock will be higher tomorrow than it is today using today\u2019s technical indicators. For each stock, we allocate the first 70% of the data as the training set, 10% as the validation set for early stopping, and the final 20% as the testing set. The modeling is conducted on a per-stock basis. We assess methods based on average error rate, worst-case error rate, and standard deviation across different stocks. Results are presented in Table VI ###reference_###, where Decorr is shown to perform the best in terms of average error. Since there are no pre-defined environments, the original IRM and REx are not utilized."
|
| 118 |
+
},
|
| 119 |
+
{
|
| 120 |
+
"section_id": "4.3",
|
| 121 |
+
"parent_section_id": "4",
|
| 122 |
+
"section_name": "IV-C Image Data Experiments",
|
| 123 |
+
"text": "To demonstrate the applicability of our method across diverse data types, we implement Decorr and other baseline methods on two image datasets. With minor adjustments to Decorr tailored to the specifics of image data, our method consistently outperforms the others. Unless specified otherwise, the implementation details follow those outlined in Section IV-B ###reference_###.\nImage data have much higher dimensions than tabular data, making direct decorrelation of raw image data impractical. We first need to extract features using a trained convolutional neural network, using the output from the last fully-connected layer as the data\u2019s features. With these extracted low-dimensional features, we can proceed with decorrelation as before (we apply -means in this manner as well). After partitioning the dataset, we continue to use the raw image data for IRM training. For Decorr, we do not enforce uniform sample sizes across partitioned environments (i.e., in Algorithm 1 ###reference_###), allowing for more diversified environments in image datasets."
|
| 124 |
+
},
|
| 125 |
+
{
|
| 126 |
+
"section_id": "4.3.1",
|
| 127 |
+
"parent_section_id": "4.3",
|
| 128 |
+
"section_name": "IV-C1 Colored MNIST",
|
| 129 |
+
"text": "Colored MNIST (CMNIST), introduced by [4 ###reference_b4###] to assess IRM\u2019s capability to learn nonlinear invariant predictors, is an image dataset derived from MNIST. In CMNIST, each MNIST image is colored either red or green, creating a strong but spurious correlation between the image\u2019s color and its class label. This correlation poses a challenge for regular deep learning models, which tend to classify images based on color rather than shape.\nCMNIST is framed as a binary classification task, following the construction guidelines in [4 ###reference_b4###]. Initially, images are assigned a preliminary label for digits 0-4 and for digits 5-9. The final label is then derived by flipping with a 0.1 probability, which introduces data noise. Subsequently, the color ID is sampled by flipping with a probability , set at 0.2 for the first training environment and 0.1 for the second. In contrast, the test environment has a of 0.9, indicating a significant reversal in the correlation between color and label. The images are then colored according to their color ID . Figures 6 ###reference_### and 6 ###reference_### provide a visual representation of this setup.\n###figure_7### ###figure_8### Following [4 ###reference_b4###], we employ an MLP with two hidden layers as our base model. The network architecture and all hyperparameters remain consistent. We assess the methods by computing the error rate on the testing set. We also compute the percentage of data points exhibiting rare patterns (i.e., red images labeled 0-4 or green images labeled 5-9) within each partitioned environment. A significant discrepancy in percentages between two partitioned environments suggests greater diversity, which is potentially advantageous for invariant learning. Each model undergoes training for 5,000 epochs, with the initial 100 epochs proceeding without an IRM penalty. The outcomes, detailed in Table VII ###reference_###, demonstrate Decorr\u2019s great superiority.\n###table_7###"
|
| 130 |
+
},
|
| 131 |
+
{
|
| 132 |
+
"section_id": "4.3.2",
|
| 133 |
+
"parent_section_id": "4.3",
|
| 134 |
+
"section_name": "IV-C2 Waterbirds",
|
| 135 |
+
"text": "The Waterbirds dataset, introduced by [38 ###reference_b38###], merges elements from the CUB dataset [39 ###reference_b39###] and the Places dataset [40 ###reference_b40###]. The primary task is to identify the type of bird (waterbird or landbird) from the CUB dataset. However, combining CUB with Places introduces a spurious correlation: in the training set, most landbirds appear against land backgrounds and most waterbirds against water backgrounds, with 95% of the data following this pattern. Conversely, in the testing set, only half of the data maintain this regular pattern, while the other half do not, thus the training set correlation does not hold. Further details about the dataset are available in [38 ###reference_b38###].\nFollowing [38 ###reference_b38###], we utilize a pretrained ResNet50 as our base model, setting the penalty weight at . Each model undergoes training for 50 epochs. We also apply IRM and V-REx using an optimal environment partition: one environment contains all regular pattern data (waterbirds in water and landbirds on land), and the other comprises all rare pattern data. We implement each method and record the error rate. The results, presented in Table VIII ###reference_###, show that Decorr achieves the best performance among all environment partitioning methods.\n###table_8###"
|
| 136 |
+
},
|
| 137 |
+
{
|
| 138 |
+
"section_id": "4.4",
|
| 139 |
+
"parent_section_id": "4",
|
| 140 |
+
"section_name": "IV-D Summary",
|
| 141 |
+
"text": "In this section, we briefly summarize the characteristics of the tested methods. IRM performs exceptionally well when the environment partition aligns with the true data-generating mechanisms. However, this alignment is often not present in real datasets. We observe that the simple and natural split-by-time partition offers no significant advantage. -means provides a straightforward approach to environment partitioning and consistently demonstrates good performance across multiple experiments. However, as illustrated in Fig. 1 ###reference_###, -means may not always be the optimal method for partitioning. Our Decorr algorithm consistently achieves the best performance in the experiments mentioned above. Notably, Decorr does not aim to reconstruct the original data sources; rather, it seeks to identify partitions that exceed the performance of the original or natural partitions."
|
| 142 |
+
},
|
| 143 |
+
{
|
| 144 |
+
"section_id": "5",
|
| 145 |
+
"parent_section_id": null,
|
| 146 |
+
"section_name": "Conclusion",
|
| 147 |
+
"text": "Invariant learning is a robust framework for Out-Of-Distribution (OOD) generalization, with environment partitioning playing a critical role in the effectiveness of IRM. Although existing partitioning methods perform well in some cases, their efficacy varies and they are not supported by a clear interpretation or justification. Inspired by the advantages of a low-correlated training set, we developed the Decorr algorithm, which partitions data into multiple environments with minimal internal correlation. We further provide explanations that uncorrelated environments enhance OOD generalization. Our partitioning approach offers distinct benefits over existing methods. Across various types of tasks, including image-based ones, we demonstrate that our method consistently and significantly enhances the performance of IRM, greatly broadening its applicability."
|
| 148 |
+
}
|
| 149 |
+
],
|
| 150 |
+
"appendix": [],
|
| 151 |
+
"tables": {
|
| 152 |
+
"1": {
|
| 153 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>Selection bias experiment with . We compare Decorr against random and -means, combined with IRM and V-REx. ERM and Decorr+ERM are included too. The two evaluation metrics are significantly reduced when using Decorr+IRM or Decorr+REx, underscoring both the superiority of Decorr and the importance of invariant learning.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S3.T1.7\">\n<tr class=\"ltx_tr\" id=\"S3.T1.5.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S3.T1.5.1.2\">Method</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S3.T1.5.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.5.1.3\">Testing Set MSE</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.7.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.7.4.1\">ERM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.7.4.2\">0.68 (0.05)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.7.4.3\">2.24 (0.17)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.7.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.7.5.1\">Decorr + ERM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.7.5.2\">0.64 (0.06)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.7.5.3\">2.11 (0.19)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.6.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.6.2.1\">\n-means + IRM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.6.2.2\">0.52 (0.09)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.6.2.3\">2.34 (0.18)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.7.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.7.6.1\">Random + IRM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.7.6.2\">0.73 (0.20)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.7.6.3\">2.36 (0.24)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.7.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.7.7.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.7.7.1.1\">Decorr + IRM</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.7.7.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.7.7.2.1\">0.30 (0.22)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.7.7.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.7.7.3.1\">1.85 (0.35)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.7.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.7.3.1\">\n-means + REx</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.7.3.2\">0.67 (0.15)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.7.3.3\">2.31 (0.33)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.7.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.7.8.1\">Random + REx</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.7.8.2\">0.80 (0.10)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.7.8.3\">2.34 (0.28)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.7.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S3.T1.7.9.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.7.9.1.1\">Decorr + REx</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S3.T1.7.9.2\">0.65 (0.20)</td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.7.9.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.7.9.3.1\">1.83 (0.12)</span></td>\n</tr>\n</table>\n</figure>",
|
| 154 |
+
"capture": "TABLE I: Selection bias experiment with . We compare Decorr against random and -means, combined with IRM and V-REx. ERM and Decorr+ERM are included too. The two evaluation metrics are significantly reduced when using Decorr+IRM or Decorr+REx, underscoring both the superiority of Decorr and the importance of invariant learning."
|
| 155 |
+
},
|
| 156 |
+
"2": {
|
| 157 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>Selection bias experiment with . We compare Decorr against random and -means, combined with IRM and V-REx. ERM and Decorr+ERM are included too. The two evaluation metrics are significantly reduced when using Decorr+IRM or Decorr+REx, underscoring both the superiority of Decorr and the importance of invariant learning.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S3.T2.7\">\n<tr class=\"ltx_tr\" id=\"S3.T2.5.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S3.T2.5.1.2\">Method</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S3.T2.5.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.5.1.3\">Testing Set MSE</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.7.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.7.4.1\">ERM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.7.4.2\">0.77 (0.03)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.7.4.3\">2.64 (0.16)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.7.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.7.5.1\">Decorr + ERM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.7.5.2\">0.78 (0.08)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.7.5.3\">2.60 (0.24)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.6.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.6.2.1\">\n-means + IRM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.6.2.2\">0.58 (0.16)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.6.2.3\">2.00 (0.15)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.7.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.7.6.1\">Random + IRM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.7.6.2\">0.70 (0.25)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.7.6.3\">2.12 (0.50)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.7.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.7.7.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.7.7.1.1\">Decorr + IRM</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.7.7.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.7.7.2.1\">0.39 (0.25)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.7.7.3\">2.12 (0.36)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.7.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.7.3.1\">\n-means + REx</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.7.3.2\">0.72 (0.10)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.7.3.3\">2.41 (0.20)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.7.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.7.8.1\">Random + REx</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.7.8.2\">0.83 (0.10)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.7.8.3\">2.58 (0.19)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.7.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S3.T2.7.9.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.7.9.1.1\">Decorr + REx</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S3.T2.7.9.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.7.9.2.1\">0.38 (0.18)</span></td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.7.9.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.7.9.3.1\">1.64 (0.08)</span></td>\n</tr>\n</table>\n</figure>",
|
| 158 |
+
"capture": "TABLE II: Selection bias experiment with . We compare Decorr against random and -means, combined with IRM and V-REx. ERM and Decorr+ERM are included too. The two evaluation metrics are significantly reduced when using Decorr+IRM or Decorr+REx, underscoring both the superiority of Decorr and the importance of invariant learning."
|
| 159 |
+
},
|
| 160 |
+
"3": {
|
| 161 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE III: </span>A toy example generated by Eqn. (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2211.10054v2#S4.E6\" title=\"In IV-A1 A Toy Example \u2023 IV-A Synthetic Data Experiments \u2023 IV Experiments \u2023 Decorr: Environment Partitioning for Invariant Learning and OOD Generalization\"><span class=\"ltx_text ltx_ref_tag\">6</span></a>). Our Decorr achieves the lowest MSE between linear regression coefficients and the ground-truth coefficients , in various scenarios featuring varying numbers of original/partitioned environments and different data dimensions. Standard deviations over 10 trials are included.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T3.13.11\">\n<tr class=\"ltx_tr\" id=\"S4.T3.4.2.2\">\n<td class=\"ltx_td ltx_border_tt\" id=\"S4.T3.4.2.2.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"4\" id=\"S4.T3.3.1.1.1\">Number of Environments \n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"4\" id=\"S4.T3.4.2.2.2\">Number of Environments \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.12.10.10\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.12.10.10.9\">Method</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.3.3.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.6.4.4.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.7.5.5.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.8.6.6.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.9.7.7.5\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.10.8.8.6\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.11.9.9.7\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.12.10.10.8\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.13.11.12\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.13.11.12.1\">ERM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.13.11.12.2\">0.28 (0.01)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.13.11.12.3\">0.28 (0.02)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.13.11.12.4\">0.29 (0.02)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.13.11.12.5\">0.28 (0.02)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.13.11.12.6\">0.46 (0.01)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.13.11.12.7\">0.46 (0.01)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.13.11.12.8\">0.46 (0.02)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.13.11.12.9\">0.46 (0.02)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.13.11.13\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.13.1\">Random + IRM</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.13.2\">0.28 (0.06)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.13.3\">0.27 (0.04)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.13.4\">0.27 (0.02)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.13.5\">0.29 (0.02)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.13.6\">0.45 (0.06)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.13.7\">0.49 (0.10)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.13.8\">0.49 (0.04)</td>\n<td class=\"ltx_td 
ltx_align_center\" id=\"S4.T3.13.11.13.9\">0.46 (0.04)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.13.11.14\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.14.1\">EIIL</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.14.2\">0.29 (0.05)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.14.3\">0.35 (0.06)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.14.4\">0.36 (0.03)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.14.5\">0.38 (0.04)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.14.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.13.11.14.6.1\">0.17 (0.08)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.14.7\">0.31 (0.08)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.14.8\">0.34 (0.03)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.14.9\">0.37 (0.04)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.13.11.11\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.11.1\">\n-means + IRM</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.11.2\">0.21 (0.05)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.11.3\">0.27 (0.04)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.11.4\">0.27 (0.03)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.11.5\">0.28 (0.02)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.11.6\">0.33 (0.08)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.11.7\">0.33 (0.10)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.11.8\">0.38 (0.07)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.11.9\">0.35 (0.04)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.13.11.15\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.15.1\">HRM</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.15.2\">0.38 (0.17)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.15.3\">0.43 (0.06)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.15.4\">0.44 (0.04)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.15.5\">0.47 (0.02)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.15.6\">0.50 (0.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.15.7\">0.50 (0.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.15.8\">0.50 (0.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.15.9\">0.49 (0.01)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.13.11.16\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.16.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.13.11.16.1.1\">Decorr + IRM</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.16.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.13.11.16.2.1\">0.13 (0.03)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.16.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.13.11.16.3.1\">0.16 (0.09)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.16.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.13.11.16.4.1\">0.16 (0.06)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.16.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.13.11.16.5.1\">0.09 (0.02)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.16.6\">0.25 (0.02)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.16.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.13.11.16.7.1\">0.29 (0.04)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.16.8\"><span 
class=\"ltx_text ltx_font_bold\" id=\"S4.T3.13.11.16.8.1\">0.26 (0.05)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.11.16.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.13.11.16.9.1\">0.25 (0.03)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.13.11.17\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T3.13.11.17.1\">IRM (Oracle)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T3.13.11.17.2\">0.02 (0.00)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T3.13.11.17.3\">0.02 (0.01)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T3.13.11.17.4\">0.02 (0.00)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T3.13.11.17.5\">0.02 (0.00)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T3.13.11.17.6\">0.07 (0.03)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T3.13.11.17.7\">0.04 (0.01)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T3.13.11.17.8\">0.04 (0.02)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T3.13.11.17.9\">0.02 (0.00)</td>\n</tr>\n</table>\n</figure>",
|
| 162 |
+
"capture": "TABLE III: A toy example generated by Eqn. (6). Our Decorr achieves the lowest MSE between linear regression coefficients and the ground-truth coefficients , in various scenarios featuring varying numbers of original/partitioned environments and different data dimensions. Standard deviations over 10 trials are included."
|
| 163 |
+
},
|
| 164 |
+
"4": {
|
| 165 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T4\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE IV: </span>The experiment on the Financial Indicators dataset. There are five pre-determined original environments. Task set 1 utilizes three environments for training, while task set 2 uses one environment for training. Each task set comprises 20 tasks, with the average error rate, worst-case error rate, and standard deviation across these tasks reported.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T4.1.1\">\n<tr class=\"ltx_tr\" id=\"S4.T4.1.1.2\">\n<td class=\"ltx_td ltx_border_tt\" id=\"S4.T4.1.1.2.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"3\" id=\"S4.T4.1.1.2.2\">Task Set 1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"3\" id=\"S4.T4.1.1.2.3\">Task Set 2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.1.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.3.1\">Method</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.1.3.2\">Average Error</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.1.3.3\">Worst Error</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.1.3.4\">STD</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.1.3.5\">Average Error</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.1.3.6\">Worst Error</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.1.3.7\">STD</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.1.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.1.4.1\">ERM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.1.4.2\">45.02 (0.00)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.1.4.3\">52.92 (0.12)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.1.4.4\">5.20</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.1.4.5\">46.79 (0.01)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.1.4.6\">51.05 (0.05)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.1.4.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.1.4.7.1\">2.44</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.1.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.5.1\">Random + IRM</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.5.2\">45.95 (0.28)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.5.3\">53.77 (0.58)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.5.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.1.5.4.1\">4.27</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.5.5\">46.57 (0.26)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.5.6\">52.03 (0.79)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.5.7\">2.57</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.1.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.6.1\">EIIL</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.6.2\">48.95 (0.16)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.6.3\">57.37 (0.65)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.6.4\">4.52</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.6.5\">48.97 (0.38)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.6.6\">54.62 (0.70)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.6.7\">2.71</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.1.1\">\n<td 
class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.1.1\">\n-means + IRM</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.1.2\">45.20 (0.34)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.1.3\">53.64 (0.44)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.1.4\">6.19</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.1.5\">46.79 (0.28)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.1.6\">54.72 (1.22)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.1.7\">3.43</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.1.7\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.7.1\">HRM</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.7.2\">44.64 (0.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.7.3\">53.52 (0.03)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.7.4\">5.71</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.7.5\">46.71 (0.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.7.6\">51.17 (0.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.7.7\">2.48</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.1.8\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.8.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.1.8.1.1\">Decorr + IRM</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.8.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.1.8.2.1\">43.99 (0.10)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.8.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.1.8.3.1\">51.61 (0.04)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.8.4\">5.33</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.8.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.1.8.5.1\">43.96 (0.25)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.8.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.1.8.6.1\">50.23 (1.17)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.8.7\">2.81</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.1.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.1.9.1\">IRM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.1.9.2\">45.86 (0.14)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.1.9.3\">56.38 (0.21)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.1.9.4\">6.27</td>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T4.1.1.9.5\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T4.1.1.9.6\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T4.1.1.9.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.1.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.1.1.10.1\">V-REx</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.1.1.10.2\">44.08 (0.02)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.1.1.10.3\">55.41 (0.12)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.1.1.10.4\">6.30</td>\n<td class=\"ltx_td ltx_border_bb\" id=\"S4.T4.1.1.10.5\"></td>\n<td class=\"ltx_td ltx_border_bb\" id=\"S4.T4.1.1.10.6\"></td>\n<td class=\"ltx_td ltx_border_bb\" id=\"S4.T4.1.1.10.7\"></td>\n</tr>\n</table>\n</figure>",
|
| 166 |
+
"capture": "TABLE IV: The experiment on the Financial Indicators dataset. There are five pre-determined original environments. Task set 1 utilizes three environments for training, while task set 2 uses one environment for training. Each task set comprises 20 tasks, with the average error rate, worst-case error rate, and standard deviation across these tasks reported."
|
| 167 |
+
},
|
| 168 |
+
"5": {
|
| 169 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T5\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE V: </span>The Occupancy Estimation dataset.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T5.1.1\">\n<tr class=\"ltx_tr\" id=\"S4.T5.1.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T5.1.1.2.1\">Method</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T5.1.1.2.2\">Training Error</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T5.1.1.2.3\">Testing Error</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.1.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.3.1\">ERM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.3.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.1.1.3.2.1\">0.026 (0.000)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.3.3\">0.773 (0.000)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.1.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.4.1\">Random + IRM</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.4.2\">0.587 (0.053)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.4.3\">0.824 (0.093)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.1.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.5.1\">EIIL</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.5.2\">0.191 (0.203)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.5.3\">0.690 (0.048)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.1.1\">\n-means + IRM</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.1.2\">0.044 (0.001)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.1.3\">0.352 (0.009)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.1.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.6.1\">HRM</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.6.2\">0.166 (0.000)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.6.3\">1.154 (0.000)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.1.7\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.7.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.1.1.7.1.1\">Decorr + IRM</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.7.2\">0.143 (0.009)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.7.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.1.1.7.3.1\">0.268 (0.011)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.1.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.8.1\">IRM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.8.2\">0.420 (0.006)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.8.3\">0.840 (0.021)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.1.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T5.1.1.9.1\">V-REx</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T5.1.1.9.2\">0.060 (0.001)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T5.1.1.9.3\">0.686 (0.030)</td>\n</tr>\n</table>\n</figure>",
|
| 170 |
+
"capture": "TABLE V: The Occupancy Estimation dataset."
|
| 171 |
+
},
|
| 172 |
+
"6": {
|
| 173 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T6\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE VI: </span>The Stock dataset.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T6.1.1\">\n<tr class=\"ltx_tr\" id=\"S4.T6.1.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T6.1.1.2.1\">Method</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T6.1.1.2.2\">\n<span class=\"ltx_text\" id=\"S4.T6.1.1.2.2.1\"></span> <span class=\"ltx_text\" id=\"S4.T6.1.1.2.2.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T6.1.1.2.2.2.1\">\n<span class=\"ltx_tr\" id=\"S4.T6.1.1.2.2.2.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T6.1.1.2.2.2.1.1.1\">Average</span></span>\n<span class=\"ltx_tr\" id=\"S4.T6.1.1.2.2.2.1.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T6.1.1.2.2.2.1.2.1\">Error</span></span>\n</span></span><span class=\"ltx_text\" id=\"S4.T6.1.1.2.2.3\"></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T6.1.1.2.3\">\n<span class=\"ltx_text\" id=\"S4.T6.1.1.2.3.1\"></span> <span class=\"ltx_text\" id=\"S4.T6.1.1.2.3.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T6.1.1.2.3.2.1\">\n<span class=\"ltx_tr\" id=\"S4.T6.1.1.2.3.2.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T6.1.1.2.3.2.1.1.1\">Worst-Case</span></span>\n<span class=\"ltx_tr\" id=\"S4.T6.1.1.2.3.2.1.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T6.1.1.2.3.2.1.2.1\">Error</span></span>\n</span></span><span class=\"ltx_text\" id=\"S4.T6.1.1.2.3.3\"></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T6.1.1.2.4\">STD</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.1.1.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.1.1.3.1\">ERM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.1.1.3.2\">50.33 (0.10)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.1.1.3.3\">56.65 (0.26)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.1.1.3.4\">3.86</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.1.1.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.1.1.4.1\">Random + IRM</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.1.1.4.2\">50.11 (0.35)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.1.1.4.3\">54.54 (0.81)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.1.1.4.4\">2.62</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.1.1.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.1.1.5.1\">EIIL</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.1.1.5.2\">49.70 (0.33)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.1.1.5.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.1.1.5.3.1\">52.29 (0.44)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.1.1.5.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.1.1.5.4.1\">1.65</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.1.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.1.1.1.1\">\n-means + IRM</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.1.1.1.2\">50.02 (0.30)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.1.1.1.3\">54.92 (0.23)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.1.1.1.4\">2.68</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.1.1.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.1.1.6.1\">HRM</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.1.1.6.2\">50.22 (0.00)</td>\n<td class=\"ltx_td 
ltx_align_center\" id=\"S4.T6.1.1.6.3\">55.97 (0.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.1.1.6.4\">4.78</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.1.1.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T6.1.1.7.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.1.1.7.1.1\">Decorr + IRM</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T6.1.1.7.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.1.1.7.2.1\">49.13 (0.34)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T6.1.1.7.3\">53.50 (0.75)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T6.1.1.7.4\">3.66</td>\n</tr>\n</table>\n</figure>",
|
| 174 |
+
"capture": "TABLE VI: The Stock dataset."
|
| 175 |
+
},
|
| 176 |
+
"7": {
|
| 177 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T7\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE VII: </span>The CMNIST experiment. Decorr achieves the lowest error rate on the testing set, compared to other baseline environment partitioning methods.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T7.1.1\">\n<tr class=\"ltx_tr\" id=\"S4.T7.1.1.2\">\n<td class=\"ltx_td ltx_border_tt\" id=\"S4.T7.1.1.2.1\"></td>\n<td class=\"ltx_td ltx_border_tt\" id=\"S4.T7.1.1.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S4.T7.1.1.2.3\">% of Rare Patterns</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T7.1.1.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.1.1.3.1\">Method</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.1.1.3.2\">Average Error</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.1.1.3.3\">Environment 1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.1.1.3.4\">Environment 2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T7.1.1.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.1.1.4.1\">ERM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.1.1.4.2\">43.33 (0.14)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.1.1.4.3\">\u2013</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.1.1.4.4\">\u2013</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T7.1.1.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.1.1.5.1\">Random + IRM</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.1.1.5.2\">40.16 (0.16)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.1.1.5.3\">15.15 (0.08)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.1.1.5.4\">14.83 (0.10)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T7.1.1.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.1.1.6.1\">EIIL</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.1.1.6.2\">76.95 (3.46)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.1.1.6.3\">29.59 (2.14)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.1.1.6.4\">14.07 (0.09)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T7.1.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.1.1.1.1\">\n-means + IRM</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.1.1.1.2\">41.93 (0.52)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.1.1.1.3\">15.32 (0.21)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.1.1.1.4\">14.68 (0.19)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T7.1.1.7\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.1.1.7.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T7.1.1.7.1.1\">Decorr + IRM</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.1.1.7.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T7.1.1.7.2.1\">34.89 (6.24)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.1.1.7.3\">40.24 (0.49)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.1.1.7.4\">4.72 (0.26)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T7.1.1.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.1.1.8.1\">IRM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.1.1.8.2\">30.18 (0.94)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.1.1.8.3\">19.89 (0.13)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.1.1.8.4\">10.09 (0.12)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T7.1.1.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" 
id=\"S4.T7.1.1.9.1\">V-REx</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T7.1.1.9.2\">34.22 (0.04)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T7.1.1.9.3\">19.89 (0.13)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T7.1.1.9.4\">10.09 (0.12)</td>\n</tr>\n</table>\n</figure>",
|
| 178 |
+
"capture": "TABLE VII: The CMNIST experiment. Decorr achieves the lowest error rate on the testing set, compared to other baseline environment partitioning methods."
|
| 179 |
+
},
|
| 180 |
+
"8": {
|
| 181 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T8\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE VIII: </span>The Waterbirds experiment. Decorr achieves the lowest error rate on the testing set, compared to other baseline environment partitioning methods.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T8.1.1\">\n<tr class=\"ltx_tr\" id=\"S4.T8.1.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T8.1.1.2.1\">Method</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T8.1.1.2.2\">Average Error</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T8.1.1.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T8.1.1.3.1\">ERM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T8.1.1.3.2\">25.76 (0.23)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T8.1.1.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.1.1.4.1\">Random + IRM</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.1.1.4.2\">22.96 (0.43)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T8.1.1.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.1.1.5.1\">EIIL</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.1.1.5.2\">27.43 (7.25)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T8.1.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.1.1.1.1\">\n-means + IRM</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.1.1.1.2\">24.32 (0.65)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T8.1.1.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.1.1.6.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T8.1.1.6.1.1\">Decorr + IRM</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.1.1.6.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T8.1.1.6.2.1\">22.70 (0.44)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T8.1.1.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T8.1.1.7.1\">IRM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T8.1.1.7.2\">22.36 (0.08)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T8.1.1.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T8.1.1.8.1\">V-REx</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T8.1.1.8.2\">33.53 (7.84)</td>\n</tr>\n</table>\n</figure>",
|
| 182 |
+
"capture": "TABLE VIII: The Waterbirds experiment. Decorr achieves the lowest error rate on the testing set, compared to other baseline environment partitioning methods."
|
| 183 |
+
}
|
| 184 |
+
},
|
| 185 |
+
"image_paths": {
|
| 186 |
+
"1(a)": {
|
| 187 |
+
"figure_path": "2211.10054v2_figure_1(a).png",
|
| 188 |
+
"caption": "EIIL\nFigure 1: The resulting partitions on a toy dataset. The EIIL partition shows no clear patterns, suggesting a strong dependence on the label y\ud835\udc66yitalic_y and deviating from expected environmental separations. Decorr divides the dataset into environments characterized by distinct covariate relationships: one positively correlated (triangle) and one almost uncorrelated (circle). While k\ud835\udc58kitalic_k-means also bifurcates the data spatially, the divisions it creates feature similar covariate properties with only a mean shift.",
|
| 189 |
+
"url": "http://arxiv.org/html/2211.10054v2/extracted/2211.10054v2/figures/EIIL.png"
|
| 190 |
+
},
|
| 191 |
+
"1(b)": {
|
| 192 |
+
"figure_path": "2211.10054v2_figure_1(b).png",
|
| 193 |
+
"caption": "k\ud835\udc58kitalic_k-means\nFigure 1: The resulting partitions on a toy dataset. The EIIL partition shows no clear patterns, suggesting a strong dependence on the label y\ud835\udc66yitalic_y and deviating from expected environmental separations. Decorr divides the dataset into environments characterized by distinct covariate relationships: one positively correlated (triangle) and one almost uncorrelated (circle). While k\ud835\udc58kitalic_k-means also bifurcates the data spatially, the divisions it creates feature similar covariate properties with only a mean shift.",
|
| 194 |
+
"url": "http://arxiv.org/html/2211.10054v2/extracted/2211.10054v2/figures/Kmeans.png"
|
| 195 |
+
},
|
| 196 |
+
"1(c)": {
|
| 197 |
+
"figure_path": "2211.10054v2_figure_1(c).png",
|
| 198 |
+
"caption": "Decorr\nFigure 1: The resulting partitions on a toy dataset. The EIIL partition shows no clear patterns, suggesting a strong dependence on the label y\ud835\udc66yitalic_y and deviating from expected environmental separations. Decorr divides the dataset into environments characterized by distinct covariate relationships: one positively correlated (triangle) and one almost uncorrelated (circle). While k\ud835\udc58kitalic_k-means also bifurcates the data spatially, the divisions it creates feature similar covariate properties with only a mean shift.",
|
| 199 |
+
"url": "http://arxiv.org/html/2211.10054v2/extracted/2211.10054v2/figures/decorr.png"
|
| 200 |
+
},
|
| 201 |
+
"2": {
|
| 202 |
+
"figure_path": "2211.10054v2_figure_2.png",
|
| 203 |
+
"caption": "Figure 2: The example of Risks of IRM.\n",
|
| 204 |
+
"url": "http://arxiv.org/html/2211.10054v2/extracted/2211.10054v2/figures/ICLR-Worst.png"
|
| 205 |
+
},
|
| 206 |
+
"3": {
|
| 207 |
+
"figure_path": "2211.10054v2_figure_3.png",
|
| 208 |
+
"caption": "Figure 3: The Adult dataset: using race as the bias feature.\n",
|
| 209 |
+
"url": "http://arxiv.org/html/2211.10054v2/extracted/2211.10054v2/figures/ICLR-Race.png"
|
| 210 |
+
},
|
| 211 |
+
"4": {
|
| 212 |
+
"figure_path": "2211.10054v2_figure_4.png",
|
| 213 |
+
"caption": "Figure 4: The Adult dataset: using sex as the bias feature.\n",
|
| 214 |
+
"url": "http://arxiv.org/html/2211.10054v2/extracted/2211.10054v2/figures/ICLR-Sex.png"
|
| 215 |
+
},
|
| 216 |
+
"5": {
|
| 217 |
+
"figure_path": "2211.10054v2_figure_5.png",
|
| 218 |
+
"caption": "Figure 5: CMNIST training set examples: most of 0-4 are green, and most of 5-9 are red.\n",
|
| 219 |
+
"url": "http://arxiv.org/html/2211.10054v2/extracted/2211.10054v2/figures/CMNIST_train.jpg"
|
| 220 |
+
},
|
| 221 |
+
"6": {
|
| 222 |
+
"figure_path": "2211.10054v2_figure_6.png",
|
| 223 |
+
"caption": "Figure 6: CMNIST testing set examples: most of 0-4 are red, and most of 5-9 are green.\n",
|
| 224 |
+
"url": "http://arxiv.org/html/2211.10054v2/extracted/2211.10054v2/figures/CMNIST_test.jpg"
|
| 225 |
+
}
|
| 226 |
+
},
|
| 227 |
+
"validation": true,
|
| 228 |
+
"references": [],
|
| 229 |
+
"url": "http://arxiv.org/html/2211.10054v2"
|
| 230 |
+
}
|
20240522/2301.10960v3.json
ADDED
|
@@ -0,0 +1,108 @@
|
| 1 |
+
{
|
| 2 |
+
"title": "Visiting Distant Neighbors in Graph Convolutional Networks",
|
| 3 |
+
"abstract": "In this study, we expand the graph convolutional network layers for deep learning on graphs to higher order in terms of neighboring nodes. As a result, when constructing representations for a node in a graph for downstream tasks, in addition to the features of the node and its immediate neighboring nodes, we also include more distant nodes in the aggregations with tunable importance parameters. In experimenting with a number of standard benchmark graph datasets, we demonstrate how this higher order neighbor visiting pays off by outperforming the original model especially when we have a limited number of available labeled data points for the training of the model.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "The problem of representation learning in a graph has been an important topic in the field of deep learning research for quite a while. The goal is to learn informative representations of nodes (or edges) in a graph for downstream tasks. Several algorithms have been developed trying to represent the information from node features, edge features, and adjacency matrix of a graph with a low dimensional form in order to enable classification, clustering, and other tasks on graph-shaped data [4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###, 9 ###reference_b9###].\nHere, a semi-supervised learning method on a graph problem is used when we are looking to find the missing class labels for nodes (e.g. published papers) in a graph of relations (e.g. citation network of those published papers) based on a small subset of known labels for nodes. These types of problems can also be found when one is trying to cluster nodes of a graph into similar groups. Roughly speaking, by using these learning techniques, we can obtain lower dimensional embedding vectors for nodes in a graph where we can apply simple geometry distance functions to quantify the similarity or dissimilarity between two nodes.\nGraph convolutional networks were introduced [8 ###reference_b8###] as a method for combining the given features of a node and its neighbors in a convolving neural network layer in order to obtain embedding vectors. The main equation behind a first-neighbor graph convolutional network is encapsulated in the following formula:\nWhere is the adjacency matrix of the original undirected graph (for which we are trying to learn the node embeddings) with the self-loops added in order to capture the feature vector of the node. is the degree matrix of the graph, where the diagonal element is the degree of the node , plus one to account for the self-loop. is the feature matrix in the th layer and is simply the feature vectors of each node. is the trainable weight matrix in the th layer where its dimensions denote the feature vector and the output dimensions of the convolutional layer and finally stands for a non-linear activation function such as the rectified linear unit (ReLU) to introduce non-linearity into the model. This model, alongside with many other representation learning methods for graphs, assumes that the connections in a graph are signs of node similarity, which is not always the case in graph-structured datasets.\nIn previous studies done on real-world graphs, it is demonstrated how important it is to take into account the global role of a node in a graph when trying to make predictions about it. As our contribution, in this work we will expand the notion of graph convolutional networks to higher order neighbors of nodes in the graph in order to capture long-range correlations between nodes and let the optimization method decide for the best coefficients for different sets of neighbors and node\u2019s own feature vectors. Then we report the results of this new method on a number of datasets to demonstrate how this higher order approximation affects the performance of the model in terms of accuracy and efficiency, particularly in generalizing the model from a lower number of labeled data. We will also experiment on random graphs in order to measure wall-clock time performance against the original model."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Higher Order Neighbor Convolutions on Graphs",
|
| 15 |
+
"text": "In this section we introduce our graph neural network model which is an expansion of the previously known GCN to the th neighborhood:\nWhere is a trainable coefficients for the th neighborhood of the node, is the trainable weight matrix, are the node features and is the resulting node representations from this layer. is the normalized adjacency matrix of th neighborhood where the sum of each row is equal to 1 to avoid the dominance of high-degree nodes in the graph and if the shortest path between two nodes is equal to , excluding the self-loops and otherwise. In this definition, would be the identity matrix, would be the normalized adjacency matrix and would have elements equal to 1 where the shortest path between two nodes is equal to 2 and so on.\nNote that this can be considered similar to expanding the kernel size in a image processing convolutional neural network from 3 to 4 and larger. Computing s would be computationally cheaper than finding the matrix of shortest paths, since we will be stopping the approximation on a certain distance (e.g. 2 for second order neighborhood approximation).\nThe expanded propagation rule up to the second neighborhood approximation would look like the following:"
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "An Experiment in Semi-Supervised Node Classification",
|
| 21 |
+
"text": "Now that we have a propagation rule for this expanded model, in order to compare the results with the first-neighborhood model, we turn back to the problem of classifying the nodes in a graph, where we only have known labels for a tiny portion of the nodes and we want to classify other nodes. We will be using graph-structured datasets from citation networks of different sizes. The objective is to classify graph nodes into different classes by learning from only a few labeled nodes. The claim is that the information from node features, combined with the information from the structure of the graph can result in a better representation learning to be used for this task."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "3.1",
|
| 25 |
+
"parent_section_id": "3",
|
| 26 |
+
"section_name": "Model Architectures",
|
| 27 |
+
"text": "Here we define two graph neural networks with two layers of our expanded GCN layers on an undirected graph up to the second and third neighbor approximation. The forward model for these neural networks would take the following forms:\nZ_2 = softmax((\u03b3_0^1 I + \u03b3_1^1 \u00afA_1 + \u03b3_2^1 \u00afA_2) \u2009 ReLU(\u03b3_0^0 I + \u03b3_1^0 \u00afA_1 + \u03b3_2^0 \u00afA_2) \u2009 X \u2009 W^0) \u2009 W^1)\nZ_3 = softmax((\u03b3_0^1 I + \u03b3_1^1 \u00afA_1 + \u03b3_2^1 \u00afA_2 + \u03b3_3^1 \u00afA_3) \u2009 ReLU(\u03b3_0^0 I + \u03b3_1^0 \u00afA_1 + \u03b3_2^0 \u00afA_2 + \u03b3_3^0 \u00afA_3) \u2009 X \u2009 W^0) \u2009 W^1)\nWhere and with being the number of features for each node, being the dimensionality of the hidden layer, and the dimensionality of the embedding vectors. The weights in the model are then optimized by gradient descent using Adam stochastic optimization and we include a dropout [15 ###reference_b15###] in the network to improve generalization. We will use the negative log loss likelihood as the loss function here. The pre-processing step would include calculating the , , and matrices and normalizing the feature vectors."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3.2",
|
| 31 |
+
"parent_section_id": "3",
|
| 32 |
+
"section_name": "Related Works and Baseline Model",
|
| 33 |
+
"text": "Several previous approaches for the problem of learning graph representations have been studied in the past. Some of the classical methods such as the label propagation and manifold regularization on graphs, take advantage of graph Laplacian regularization and have been derived by rigorous graph theory. But more recent methods, due to the success of deep learning models in different fields, can learn node representations by sampling from different types of random-walks across the graph such as DeepWalk, node2vec, role2vec, and Planetoid.\nAlthough the scheme of neural networks on graph was studied before [17 ###reference_b17###, 18 ###reference_b18###], graph convolutional neural networks in their current form were introduced by [19 ###reference_b19###] and further studied in works by [20 ###reference_b20###, 21 ###reference_b21###], introducing a spectral graph convolutional neural network and introduction of a fast local convolution by [22 ###reference_b22###].\nIn a closely related work by [11 ###reference_b11###], the authors use diffusion across graph nodes in order to calculate the convolutional representations for nodes. Our work is different in the sense that we also allow the model to optimize the coefficients of the contributions of different neighborhood layers and the features of the node itself based on the problem. This new approach will be more important in problems where the features of the further neighbors are also important, possibly even more than the immediate neighbors. As a basic example, triangles have a key effect in shaping the community structures of some experimental networks [12 ###reference_b12###, 13 ###reference_b13###]. Some instances of these problems might be knowledge graphs such as financial transaction networks where we are looking for an specific type of fraud such as money-laundering. In these networks the information and the structure of farther neighbors are important to identify fraudulent transactions [14 ###reference_b14###]. In a closely similar study [10 ###reference_b10###], the authors assign different weight matrices for different neighborhood distances to improve the learning performance, whereas in this work we will be using the same weight matrix for different neighborhoods but as a linear combination, which reduces the number of parameters, and therefore the complexity of the model. Also our proposed expansion is different from just stacking up more vanilla GCN layers in order to get distant neighbor aggregation (like the study by [26 ###reference_b26###]) by reducing the number of parameters and model complexity since we are using the same weight matrix for different neighborhood distances and just add a linear combination coefficient.\nDistant neighborhood aggregation is not just limited to graph convolutional networks, but also studied in random walks and other methods as in [27 ###reference_b27###]. In this work, the authors develop a local-structure-aware framework for involving the information from farther nodes in different graph representation learning models. Also the authors in [28 ###reference_b28###] modify the graph neural networks to remove the noise from irrelevant propagations by taking into account the structural role of a node using eigenvalue decomposition of the graph. 
This is similar to our work in terms of extending the graph embeddings to higher order relations.\nSince, based on previous experiments in their paper [8 ###reference_b8###], the original graph convolutional network (which considers only the first neighbors in convolution) outperforms the classical models and early deep models (DeepWalk, node2vec, Planetoid) on the same task as here, we will only compare our results to the original GCN in order to demonstrate the performance gain acquired from considering farther neighbors. We will be using a neural network structure similar to the one used in the original paper, which is a two-layer GCN with a ReLU activation function."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3.3",
|
| 37 |
+
"parent_section_id": "3",
|
| 38 |
+
"section_name": "Experimental Setup",
|
| 39 |
+
"text": "We will be testing the performance of different models on publicly available citation network datasets which are commonly used in benchmarking GNNs, also we will be experimenting with artificial random datasets for wall-clock time performance analysis."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3.3.1",
|
| 43 |
+
"parent_section_id": "3.3",
|
| 44 |
+
"section_name": "3.3.1 Datasets",
|
| 45 |
+
"text": "Datasets used for model performance evaluation are presented in Table 1 ###reference_###. Citeseer, Cora, and Pubmed are citation network datasets where each node represents published papers and a directed link from one node to another translates to a citation of the second paper in the first one. In these datasets, node features are bag-of-words vectors for each paper.\nNote that in all the datasets we are neglecting the direction of edges and consider an undirected graph for the learning task."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "3.3.2",
|
| 49 |
+
"parent_section_id": "3.3",
|
| 50 |
+
"section_name": "3.3.2 Training and Testing Procedure",
|
| 51 |
+
"text": "In order to compare the ability of each model in generalizing from the training data, we will treat the number of available labelled nodes as a parameter. So we will be training each model on different number of available nodes per class and then measure their performance on the a balanced set of the remaining nodes. We will be using 1, 2, 5, 10, 15, and 20 nodes per class for training, 30 nodes for validation and stopping the training, and the rest of the nodes (in a balanced way between classes) for measuring the accuracy. Note that in each repetition of the experiment, all of these nodes are shuffled randomly. We continue the training for a maximum number 200 epochs and use early stopping on 20 epochs. Meaning that if the validation accuracy does not improve after any 20 consecutive epochs, the training will be halted. Other training hyperparameters are presented in Table 2 ###reference_###. We repeat training and testing for each model for a total number of 500 different random initializations from [24 ###reference_b24###] and report the results."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "3.3.3",
|
| 55 |
+
"parent_section_id": "3.3",
|
| 56 |
+
"section_name": "3.3.3 Implementation",
|
| 57 |
+
"text": "We will be using PyTorch [16 ###reference_b16###] for implementing the models to work on a GPU accelerated fashion and we will make use of sparse matrix multiplications in PyTorch, which results in a complexity of i.e. linear in number of non-zero matrix elements."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "4",
|
| 61 |
+
"parent_section_id": null,
|
| 62 |
+
"section_name": "Results",
|
| 63 |
+
"text": "Results for different datasets and number of available labeled nodes are presented in Table 3 ###reference_###. GCN is the original graph convolutional network, GCN-2 represents the network with layers expanding up to the second neighborhood in Equation 3.1 ###reference_###, and GCN-3 is expanding up to the third neighborhood in Equation 3.1 ###reference_###. The results are in agreement with the validity of the expansion along neighborhood size, meaning that the accuracy of the model remains the same or increases with the graph convolution neighborhood size. When a lower number of training datapoints are available, higher order models outperform the original GCN by a larger margin but in abundance of training datapoints, we observe a saturated similar accuracy amongst the different models.\nThe average training time per epoch on random graphs of size with edges for different models are presented in Figure 1 ###reference_###. Since the preprocessing step of calculating matrices is only done once on each dataset, we are omitting this preprocessing step from the training time analysis.\n###figure_1### Looking at the performance gain per epoch, we can see that some of this higher computation cost is compensated by a faster learning in the expanded GCN models. The data in Figure 2 ###reference_### is acquired by averaging the accuracy per epoch for 100 different random initializations of the models on the Cora dataset. This shows that the expanded models would reach the same performance in a significantly lower number of epochs.\n###figure_2###"
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "5",
|
| 67 |
+
"parent_section_id": null,
|
| 68 |
+
"section_name": "Future Work and Conclusion",
|
| 69 |
+
"text": "The introduced expanded model has several limitations which may be improved in the future works. The current model does not include edge features and edge weights natively. There are several possible workarounds for this issue, such as converting the edges into nodes with features but including the edge features in the convolution, or similar solutions to the work done by [23 ###reference_b23###]. Expansion of the current gradient descent optimization to a mini-batch stochastic one, such as [25 ###reference_b25###] would also be helpful for larger datasets where memory limitations would not allow full-batch calculations.\nIn this work we have expanded the current notion of graph convolutional neural networks to a model which considers different coefficients for node\u2019s self-features and each layer of neighborhood. This would help remove original assumptions in this model where only the features of the node itself and the first neighbors (with similar coefficients) where used to learn node representations to use in downstream tasks such as classification and clustering tasks. Our model\u2019s propagation rule is expandable in terms of neighborhood distance and experiments on several datasets show a better generalization capability of this model compared to the original GCN, without adding to the trainable parameters, and particularly with a low number of available training nodes. This model can be useful in cases where the model size and efficiency is a concern and where the graph has an underlying higher-order structure, such as biological gene-regulatory networks or socioeconomic knowledge graphs."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "6",
|
| 73 |
+
"parent_section_id": null,
|
| 74 |
+
"section_name": "Aknnowledgements",
|
| 75 |
+
"text": "Funding was provided by NIBIB and NIMH through the NIH BRAIN Initiative Grant R01 EB028157 and NSF 2214217."
|
| 76 |
+
}
|
| 77 |
+
],
|
| 78 |
+
"appendix": [],
|
| 79 |
+
"tables": {
|
| 80 |
+
"1": {
|
| 81 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Dataset statistics</figcaption>\n<br class=\"ltx_break\"/>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T1.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S3.T1.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.1.1.1\" style=\"font-size:90%;\">Dataset</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.1.2.1\" style=\"font-size:90%;\">Nodes</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.1.3.1\" style=\"font-size:90%;\">Edges</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.1.4.1\" style=\"font-size:90%;\">Classes</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.1.1.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.1.5.1\" style=\"font-size:90%;\">Features</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S3.T1.1.2.1.1\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.1.2.1.1.1\" style=\"font-size:90%;\">Citeseer</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.2.1.2\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.1.2.1.2.1\" style=\"font-size:90%;\">3327</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.2.1.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.1.2.1.3.1\" style=\"font-size:90%;\">4732</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.2.1.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.1.2.1.4.1\" style=\"font-size:90%;\">6</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.2.1.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.1.2.1.5.1\" style=\"font-size:90%;\">3703</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T1.1.3.2.1\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.1.3.2.1.1\" style=\"font-size:90%;\">Cora</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.3.2.2\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.1.3.2.2.1\" style=\"font-size:90%;\">2708</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.3.2.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.1.3.2.3.1\" style=\"font-size:90%;\">5409</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.3.2.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.1.3.2.4.1\" style=\"font-size:90%;\">7</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.3.2.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.1.3.2.5.1\" style=\"font-size:90%;\">1433</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.4.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S3.T1.1.4.3.1\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.1.4.3.1.1\" 
style=\"font-size:90%;\">Pubmed</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.1.4.3.2\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.1.4.3.2.1\" style=\"font-size:90%;\">19717</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.1.4.3.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.1.4.3.3.1\" style=\"font-size:90%;\">44338</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.1.4.3.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.1.4.3.4.1\" style=\"font-size:90%;\">3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.1.4.3.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.1.4.3.5.1\" style=\"font-size:90%;\">500</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 82 |
+
"capture": "Table 1: Dataset statistics"
|
| 83 |
+
},
|
| 84 |
+
"2": {
|
| 85 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T2\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Training hyperparameters</figcaption>\n<br class=\"ltx_break\"/>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T2.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S3.T2.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.1.1.1\" style=\"font-size:90%;\">Dropout</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T2.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.1.2.1\" style=\"font-size:90%;\">L2</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T2.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.1.3.1\" style=\"font-size:90%;\">Output Dimension</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T2.1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.1.4.1\" style=\"font-size:90%;\">Learning Rate</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T2.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_t\" id=\"S3.T2.1.2.1.1\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T2.1.2.1.1.1\" style=\"font-size:90%;\">0.5</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T2.1.2.1.2\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T2.1.2.1.2.1\" style=\"font-size:90%;\">5e-4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T2.1.2.1.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T2.1.2.1.3.1\" style=\"font-size:90%;\">16</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T2.1.2.1.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T2.1.2.1.4.1\" style=\"font-size:90%;\">0.01</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 86 |
+
"capture": "Table 2: Training hyperparameters"
|
| 87 |
+
},
|
| 88 |
+
"3": {
|
| 89 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Accuracy and the standard error of the models on different datasets with various number of available nodes for training. A higher order model always outperforms or matches the accuracy of a lower order model.</figcaption>\n<br class=\"ltx_break\"/>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T3.60.60\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T3.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S4.T3.1.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.1.1.2.1\" style=\"font-size:90%;\">Cora</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.1.1.3.1\" style=\"font-size:90%;\">Citeseer</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.1.1.4.1\" style=\"font-size:90%;\">Pubmed</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.4.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T3.4.4.4.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S4.T3.4.4.4.4.1\" style=\"font-size:90%;\">GCN</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.2.2.2.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.3.3.3.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.4.4.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.7.7.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.7.7.7.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S4.T3.7.7.7.4.1\" style=\"font-size:90%;\">GCN-2</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.5.5.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.6.6.6.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.7.7.7.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.10.10.10\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.10.10.10.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S4.T3.10.10.10.4.1\" style=\"font-size:90%;\">GCN-3</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.8.8.8.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.9.9.9.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.10.10.10.3\">\n<span class=\"ltx_text ltx_font_smallcaps\" id=\"S4.T3.10.10.10.3.1\" style=\"font-size:90%;\">54.7 </span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.11.11.11\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T3.11.11.11.1\"></th>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T3.11.11.11.2\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T3.11.11.11.3\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T3.11.11.11.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.14.14.14\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T3.14.14.14.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S4.T3.14.14.14.4.1\" style=\"font-size:90%;\">GCN</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.12.12.12.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" 
id=\"S4.T3.13.13.13.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.14.14.14.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.17.17.17\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.17.17.17.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S4.T3.17.17.17.4.1\" style=\"font-size:90%;\">GCN-2</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.15.15.15.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.16.16.16.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.17.17.17.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.20.20.20\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.20.20.20.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S4.T3.20.20.20.4.1\" style=\"font-size:90%;\">GCN-3</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.18.18.18.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.19.19.19.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.20.20.20.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.21.21.21\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T3.21.21.21.1\"></th>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T3.21.21.21.2\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T3.21.21.21.3\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T3.21.21.21.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.24.24.24\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T3.24.24.24.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S4.T3.24.24.24.4.1\" style=\"font-size:90%;\">GCN</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.22.22.22.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.23.23.23.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.24.24.24.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.27.27.27\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.27.27.27.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S4.T3.27.27.27.4.1\" style=\"font-size:90%;\">GCN-2</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.25.25.25.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.26.26.26.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.27.27.27.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.30.30.30\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.30.30.30.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S4.T3.30.30.30.4.1\" style=\"font-size:90%;\">GCN-3</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.28.28.28.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.29.29.29.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.30.30.30.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.31.31.31\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T3.31.31.31.1\"></th>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T3.31.31.31.2\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T3.31.31.31.3\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T3.31.31.31.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.34.34.34\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T3.34.34.34.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S4.T3.34.34.34.4.1\" style=\"font-size:90%;\">GCN</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.32.32.32.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.33.33.33.2\"></td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_t\" id=\"S4.T3.34.34.34.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.37.37.37\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.37.37.37.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S4.T3.37.37.37.4.1\" style=\"font-size:90%;\">GCN-2</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.35.35.35.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.36.36.36.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.37.37.37.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.40.40.40\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.40.40.40.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S4.T3.40.40.40.4.1\" style=\"font-size:90%;\">GCN-3</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.38.38.38.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.39.39.39.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.40.40.40.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.41.41.41\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T3.41.41.41.1\"></th>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T3.41.41.41.2\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T3.41.41.41.3\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T3.41.41.41.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.44.44.44\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T3.44.44.44.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S4.T3.44.44.44.4.1\" style=\"font-size:90%;\">GCN</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.42.42.42.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.43.43.43.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.44.44.44.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.47.47.47\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.47.47.47.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S4.T3.47.47.47.4.1\" style=\"font-size:90%;\">GCN-2</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.45.45.45.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.46.46.46.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.47.47.47.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.50.50.50\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.50.50.50.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S4.T3.50.50.50.4.1\" style=\"font-size:90%;\">GCN-3</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.48.48.48.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.49.49.49.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.50.50.50.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.51.51.51\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T3.51.51.51.1\"></th>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T3.51.51.51.2\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T3.51.51.51.3\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T3.51.51.51.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.54.54.54\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T3.54.54.54.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S4.T3.54.54.54.4.1\" style=\"font-size:90%;\">GCN</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.52.52.52.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.53.53.53.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" 
id=\"S4.T3.54.54.54.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.57.57.57\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.57.57.57.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S4.T3.57.57.57.4.1\" style=\"font-size:90%;\">GCN-2</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.55.55.55.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.56.56.56.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.57.57.57.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.60.60.60\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S4.T3.60.60.60.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S4.T3.60.60.60.4.1\" style=\"font-size:90%;\">GCN-3</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.58.58.58.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.59.59.59.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.60.60.60.3\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "Table 3: Accuracy and the standard error of the models on different datasets with various number of available nodes for training. A higher order model always outperforms or matches the accuracy of a lower order model."
}
},
"image_paths": {
"1": {
"figure_path": "2301.10960v3_figure_1.png",
"caption": "Figure 1: Wall-clock training time per epoch for different order of neighborhood and different random graphs of N nodes and 2N edges.",
"url": "http://arxiv.org/html/2301.10960v3/"
},
"2": {
"figure_path": "2301.10960v3_figure_2.png",
"caption": "Figure 2: Mean accuracy of each model per epoch when training on the Cora dataset. The curves show average and standard deviation of accuracy on each training epoch on validation and training data. We can see how the expanded models tend to learn faster from the data.",
"url": "http://arxiv.org/html/2301.10960v3/"
}
},
"validation": true,
"references": [],
"url": "http://arxiv.org/html/2301.10960v3"
}
20240522/2301.11761v3.json
ADDED
The diff for this file is too large to render.

20240522/2302.04749v2.json
ADDED
The diff for this file is too large to render.

20240522/2303.12002v3.json
ADDED
The diff for this file is too large to render.

20240522/2304.01772v2.json
ADDED
The diff for this file is too large to render.

20240522/2304.14606v2.json
ADDED
The diff for this file is too large to render.
20240522/2305.05451v3.json
ADDED
@@ -0,0 +1,261 @@
{
"title": "Multiscale Augmented Normalizing Flows for Image Compression",
"abstract": "Most learning-based image compression methods lack efficiency for high image quality due to their non-invertible design.\nThe decoding function of the frequently applied compressive autoencoder architecture is only an approximated inverse of the encoding transform.\nThis issue can be resolved by using invertible latent variable models, which allow a perfect reconstruction if no quantization is performed.\nFurthermore, many traditional image and video coders apply dynamic block partitioning to vary the compression of certain image regions depending on their content.\nInspired by this approach, hierarchical latent spaces have been applied to learning-based compression networks.\nIn this paper, we present a novel concept, which adapts the hierarchical latent space for augmented normalizing flows, an invertible latent variable model.\nOur best performing model achieves significant rate savings of more than 7% over comparable single-scale models.",
"sections": [
{
"section_id": "1",
"parent_section_id": null,
"section_name": "Introduction",
"text": "In recent years, learning-based image compression has gained much attention, as it surpassed state-of-the-art image and video coding standards, e.g. BPG or VVC [1 ###reference_b1###].\nThese standards rely on hand-crafted features to achieve a suitable trade-off between compression efficiency and complexity.\nIn contrast, learning-based compression methods are optimized in an end-to-end fashion to jointly learn the parameters of the encoding and decoding function.\nCompressive autoencoders (CAE) [2 ###reference_b2###] have become a common approach for end-to-end learned image coding.\nThe non-linear encoding function computes a latent representation depending on the original image, which is then quantized and encoded, typically by arithmetic coding.\nThe quantized latent space is fed into the decoding function to generate a reconstructed version of the original image.\nCurrent research mainly focuses on optimizing the encoding/decoding transformations and the entropy models for the accurate prediction of the likelihood of the latent space symbols [3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###].\nA downside of most learning-based image compression methods, including CAE-based architectures, is their performance for high bit rates due to the lack of invertibility.\nEven if quantization is omitted, the reconstructed image is not completely identical to the original image, since the decoding function is not a perfect inverse of the encoding transform.\nThe use of invertible latent variable models, like Augmented Normalizing Flows (ANF) [6 ###reference_b6###], can improve the performance of learning-based image compression, especially for high quality image coding.\nIn terms of lossy compression, this invertible design can reduce the saturation effects for high bitrates, which typically occur with CAE-based architectures.\nFurthermore, learning-based image compression models are usually fully determined after the training and cannot adapt to the image content.\nIn contrast, state-of-the-art traditional image and video coding standards apply adaptive block partitioning to modify the coding structure depending on the image content.\nThis drawback has been addressed by RDONet [7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###] with the application of a hierarchical latent space.\nThis hierarchical design allows the network to transmit the latent representation for certain image areas at variable scales.\nWhile highly structured regions are typically transmitted using small blocks to preserve high quality, larger blocks are used for less important areas to reduce the bit rate necessary for coding the corresponding image region.\nThe decision in which hierarchy level a certain image area will be transmitted can be set during inference and thus, the network can adjust its behaviour even after the learned parameters have been fixed.\nSince the larger block sizes are especially beneficial in the low-rate regime, the compression performance can be significantly improved.\nIn this work, the ANF-based architecture ANFIC [10 ###reference_b10###] is extended with a multiscale latent space.\nThe invertible design allows image compression at very high quality, while the adaptive latent space can reduce the necessary bit rate."
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "Related Work",
"text": "[scale=0.75]imgs/m_anfic_architecture\nThe idea of a compressive autoencoder for image coding has been proposed in [2 ###reference_b2###].\nUp to now, multiple enhancements have been developed, extending the initial architecture.\nA major improvement is the scale hyperprior [12 ###reference_b12###], which derives the parameters of a density model for the latent space symbols from additional side-information.\nBy assigning shorter code words to more likely symbols, the benefit of saved bits outweighs the necessary rate for transmitting the additional side-information.\nThis idea has been further improved by predicting also the mean in combination with an autoregressive context model [13 ###reference_b13###] or using Gaussian mixture models (GMM) as density model [14 ###reference_b14###].\nThe work within this paper is based on two distinct architectures for image compression.\nNamely, ANFIC [10 ###reference_b10###] and RDONet [9 ###reference_b9###].\nThe ANFIC architecture applies augmented normalizing flows [6 ###reference_b6###] with two autoencoding layers, where the latent space is hierarchically augmented by a combination of a hyperprior and an autoregressive context model.\nSimilar to [14 ###reference_b14###], ANFIC uses a GMM with components to model the likelihoods of the latent space symbols.\nSince quantization is applied and the residual after encoding is replaced by zeros for the decoding step, the ANFIC architecture is still a lossy compression method.\nDue to the invertible network design, the introduced errors cannot be concealed during reconstruction.\nTherefore, the output after decoding is fed into a quality enhancement network [15 ###reference_b15###], which reduces the visible effects of the performed quantization.\nRDONet [7 ###reference_b7###] is a traditional compressive autoencoder based on [13 ###reference_b13###].\nThe main novelty of RDONet is the hierarchical latent space, which allows to transmit the latent representation in either of the multiple latent space units (LSUnit).\nThe decision in which LSUnit a certain region of the image will be transmitted can be set during inference.\nThus, the model preserves a certain degree of freedom to optimize the partitioning of the block size using Rate Distortion Optimization (RDO).\nThe overall image is then reconstructed based on the transmitted latent in all LSUnits.\nInitially, RDONet used solely random masks during training to decide where a certain image region is transmitted.\nIn later publications [8 ###reference_b8###, 9 ###reference_b9###] however, the training uses masks based on a variance criterion after a certain amount of training epochs.\nFurthermore, these variance-based masks have proven to be a good estimation for a suitable block partitioning during inference and can be an alternative to the time-consuming RDO [8 ###reference_b8###]."
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "Proposed Method",
"text": "In this section, we propose two novel architectures, which extend ANFIC by a multiscale approach inspired by the work on RDONet.\nIdentical to the ANFIC design, our M-ANFIC and MS-ANFIC models apply augmented normalizing flows with two autoencoding layers."
},
{
"section_id": "3.1",
"parent_section_id": "3",
"section_name": "M-ANFIC",
"text": "[scale=0.75]imgs/lsunit_A\n[scale=0.75]imgs/lsunit_B\n[scale=0.75]imgs/ms_anfic_architecture\nThe first multiscale model, M-ANFIC, applies the multiscale latent space to all layers of the 2-step ANF.\nAs shown in Fig. 1 ###reference_###, two LSUnits per ANF layer are connected to each of the two autoencoding transforms, which allows to transmit the latent space on two different scales.\nThe LSUnits of the first autoencoding transform are of type A, whereas the LSUnits of the last ANF layer are of type B.\nComparing the structure of the two types in Figs. 2 ###reference_### and 3 ###reference_### shows, that type A contains solely a downsampling of factor two in the encoding and a upsampling of the same factor in the decoding step.\nThe latent space is computed by a convolutional layer without downsampling followed by the masking according to the externally applied mask.\nBesides all previously mentioned components, the LSUnit (Type B) further contains the quantization and the conditional hyperprior network with autoregressive context model to calculate the parameters of the GMM."
},
{
"section_id": "3.2",
"parent_section_id": "3",
"section_name": "MS-ANFIC",
"text": "As second model, we propose MS-ANFIC a multiscale ANFIC with latent split network.\nIn contrast to the M-ANFIC architecture, the multiscale latent space is only adapted for the final ANF layer, where the latent representation is transmitted.\nThis choice is motivated by the intention to use the different scales mainly in the course of transmission and not for the feature generation.\nAs shown in Fig. 4 ###reference_###, the first ANF layer is identical to the original ANFIC architecture, whereas the final ANF layer uses the LSUnits (Type B), like M-ANFIC, to adopt the multiscale latent.\nTo transform the single-scale latent derived from the first ANF layer into a multiscale representation an ANF-based latent split network, as shown in Fig. 5 ###reference_###, is used.\nThe two-scale latent can be computed as:\nwhere is the single-scale latent generated by the first ANF layer, and are the computed multiscale representations.\nThe functions and are learnable functions with the parameter set .\nThe design of the latent split network could also be extended to more than two hierarchy levels, without limiting the invertibility of the transformation.\n[scale=0.75]imgs/lssplit"
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "Experiments and Results",
"text": ""
},
{
"section_id": "4.1",
"parent_section_id": "4",
"section_name": "Parametrization",
"text": "The original ANFIC architecture used with a smaller number of channels in the transform layers, compared to the channels in the latent space.\nDirectly adapting these parameters for our multiscale models might favor the use of the first LSUnit over deeper units.\nTherefore, we used channels, both for the transform layers and the latent space.\nWe performed an ablation study, to ensure that differences in performance do not come from this change in the channel dimensions.\nThus, we also retrained the original ANFIC with this parametrization.\nAs shown in Sec. 4.3 ###reference_###, ANFIC achieves comparable results for both configurations."
},
{
"section_id": "4.2",
"parent_section_id": "4",
"section_name": "Training",
"text": "Similar to the original ANFIC, we used the vimeo-90k [16 ###reference_b16###] data set to train our models, by taking a random frame of each sequence per epoch cropped to a size of 256\u00d7256.\nThe Adam optimizer with standard parameters [17 ###reference_b17###] has been used with the training schedule summarized in Tab. 1 ###reference_###.\nSimilar to RDONet [9 ###reference_b9###], we start with random masks first and switch to variance-based masks after 30 epochs.\nWe used MSE as our distortion metric and trained the network with the following loss function:\nBesides the rate and the MSE distortion metric, an additional loss term forces the residual to approximate zero.\nIdentical to the original ANFIC, the weighting factor for this term is calculated as ."
},
{
"section_id": "4.3",
"parent_section_id": "4",
"section_name": "Rate Distortion Performance",
"text": "We performed our test on the TECNICK [18 ###reference_b18###] data set, which contains 100 high quality images of size 1200\u00d71200.\nThe averaged rates are reported in terms of bits per pixel (bpp) and the quality in PSNR-RGB and MS-SSIM [19 ###reference_b19###].\nAdditionally, we compute the Bj\u00f8ntegaard delta [20 ###reference_b20###] (BD) rates for both quality metrics.\nBesides our two models M-ANFIC and MS-ANFIC, we give the results of the ANFIC models with and , the retrained ANFIC-192 with the parametrization and the intra coding mode of the VVC reference implementation VTM in version 18.2.\nFor the latter, the PNG images have been converted into 10-bit YUV files with 444 color format. The decoded output is transformed back into PNG files and compared using PSNR-RGB and MS-SSIM metrics.\nThe BD-rates, in Tab. 2 ###reference_###, indicate that our MS-ANFIC model performs best, both in terms of PSNR-RGB and MS-SSIM, followed by our M-ANFIC model.\n###figure_1### The results for PSNR-RGB in Fig. 6 ###reference_### show that the ANFIC and ANFIC-192 are on par with the traditional VVC codec.\nCompared to the ANFIC model, our multiscale models M-ANFIC and MS-ANFIC can improve the compression performance by savings of 6.22% and 7.47% BD-rate, respectively.\nThe MS-ANFIC model with latent split network achieves even better results than M-ANFIC, although the number of parameters are reduced by more than 7% from 43.0 M to 39.7 M.\nMoreover, the results for ANFIC-192 prove, that the gains of our multiscale models is only partially related to the reparametrization of .\nCompared to ANFIC, ANFIC-192 achieves better compression results for lower rates and negligibly worse for high quality, which is caused by the lower number of channels in the latent representation and results in averaged bit rate savings of 2.47%.\nSimilar to MS-ANFIC, this is not related to the number of parameters as they have been slightly reduced from 22.7 M for ANFIC to 21.4 M.\nEven though the models are optimized for MSE, we also give the results for MS-SSIM in Fig. 7 ###reference_###.\nWe report the MS-SSIM in dB using the formula:\nOn a par with the results for the PSNR-RGB metric, the original ANFIC achieves comparable performance to the VTM implementation, whereas they are surpassed both by the reparametrized ANFIC model and our M-ANFIC and MS-ANFIC models.\nHere, we are able to achieve BD-rate savings up to 9.06% for MS-ANFIC over the original ANFIC implementation.\nAs our masks for the hierarchical latent space are derived using a variance-based decision, they inherently contain information on the amount of structure present in the different image regions.\nEvidently, the network can exploit this information.\nBased on the knowledge in which level of the latent space an image region was transferred, the hierarchical models can adapt the amount of details generated in the reconstructed image, which leads to a noteworthy improvement in terms of MS-SSIM.\n###figure_2###"
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "Conclusion",
"text": "In this paper, we proposed two novel architectures, which extend the concept of an ANF-based image compression network with a hierarchical latent space.\nThis multi-scale latent space adds an additional degree of freedom during inference, as the hierarchy level for the different image areas can be set by a mask.\nThus, the bit rate can be adaptively allocated to more important image areas, e.g. in scenarios like image coding for machines [21 ###reference_b21###], where high quality of certain regions of interest is advantageous.\nOur two models are modified versions of the ANFIC architecture.\nWe redesigned the latent space of ANFIC by adding hierarchical LSUnits, adapted based on RDONet, and developed an invertible latent split network for our MS-ANFIC model, which can derive a multiscale representation from a single-scale latent.\nThe adoption of a multiscale latent space can noticeably improve the compression performance compared to the single-scale ANFIC.\nWith the usage of a multiscale latent space for the final ANF layer, see the MS-ANFIC model, we are able to save on average more than 7% bit rate at same PSNR and 9% at same MS-SSIM.\nOur results prove, that the additional flexibility due to the multiscale principle can enhance the performance not only for compressive autoencoders but also in ANF-based architectures.\nThese outcomes show, that improvement of learning-based image compression is not limited to developing better transformations and entropy models.\nAlso the overall architecture has a major impact in the compression performance."
}
],
"appendix": [],
"tables": {
"1": {
"table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.9.1.1\">Table 1</span>: </span>Training schedule giving the epoch until which a set of learning rate (lr), and mask is used.\n<br class=\"ltx_break\"/></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.3.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S4.T1.3.1.2\"><span class=\"ltx_text\" id=\"S4.T1.3.1.2.1\" style=\"font-size:90%;\">Epoch</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S4.T1.3.1.3\"><span class=\"ltx_text\" id=\"S4.T1.3.1.3.1\" style=\"font-size:90%;\">lr</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S4.T1.3.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T1.3.1.4\"><span class=\"ltx_text\" id=\"S4.T1.3.1.4.1\" style=\"font-size:90%;\">Mask</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.4.3.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.4.3.1.1\"><span class=\"ltx_text\" id=\"S4.T1.4.3.1.1.1\" style=\"font-size:90%;\">30</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.4.3.1.2\"><span class=\"ltx_text\" id=\"S4.T1.4.3.1.2.1\" style=\"font-size:90%;\">1e-4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.4.3.1.3\"><span class=\"ltx_text\" id=\"S4.T1.4.3.1.3.1\" style=\"font-size:90%;\">0.1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.3.1.4\"><span class=\"ltx_text\" id=\"S4.T1.4.3.1.4.1\" style=\"font-size:90%;\">Rand</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.4.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.4.4.2.1\"><span class=\"ltx_text\" id=\"S4.T1.4.4.2.1.1\" style=\"font-size:90%;\">100</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.4.4.2.2\"><span class=\"ltx_text\" id=\"S4.T1.4.4.2.2.1\" style=\"font-size:90%;\">1e-4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.4.4.2.3\"><span class=\"ltx_text\" id=\"S4.T1.4.4.2.3.1\" style=\"font-size:90%;\">0.1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.4.2.4\"><span class=\"ltx_text\" id=\"S4.T1.4.4.2.4.1\" style=\"font-size:90%;\">Var</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.4.2.2\"><span class=\"ltx_text\" id=\"S4.T1.4.2.2.1\" style=\"font-size:90%;\">130</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.4.2.3\"><span class=\"ltx_text\" id=\"S4.T1.4.2.3.1\" style=\"font-size:90%;\">1e-4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.4.2.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.2.4\"><span class=\"ltx_text\" id=\"S4.T1.4.2.4.1\" style=\"font-size:90%;\">Var</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "Table 1: Training schedule giving the epoch until which a set of learning rate (lr), and mask is used.\n"
},
"2": {
"table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.14.1.1\">Table 2</span>: </span>Bj\u00f8ntegaard delta rates on the TECNICK data set for PSNR-RGB and MS-SSIM with ANFIC as anchor. Best model is highlighted bold.\n<br class=\"ltx_break\"/></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.10\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.10.11.1\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_r\" id=\"S4.T2.10.11.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T2.10.11.1.2\"><span class=\"ltx_text\" id=\"S4.T2.10.11.1.2.1\" style=\"font-size:90%;\">PSNR-RGB</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T2.10.11.1.3\"><span class=\"ltx_text\" id=\"S4.T2.10.11.1.3.1\" style=\"font-size:90%;\">MS-SSIM</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.2.2.3\"><span class=\"ltx_text\" id=\"S4.T2.2.2.3.1\" style=\"font-size:90%;\">ANFIC (anchor)</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.2.2.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.4.4.3\"><span class=\"ltx_text\" id=\"S4.T2.4.4.3.1\" style=\"font-size:90%;\">VTM-444</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.3.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.6.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.6.6.3\"><span class=\"ltx_text\" id=\"S4.T2.6.6.3.1\" style=\"font-size:90%;\">ANFIC-192</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.5.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.6.6.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.8.8\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.8.8.3\"><span class=\"ltx_text\" id=\"S4.T2.8.8.3.1\" style=\"font-size:90%;\">M-ANFIC (ours)</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.7.7.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.8.8.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.10.10\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.10.10.3\"><span class=\"ltx_text\" id=\"S4.T2.10.10.3.1\" style=\"font-size:90%;\">MS-ANFIC (ours)</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.9.9.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.10.10.2\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "Table 2: Bj\u00f8ntegaard delta rates on the TECNICK data set for PSNR-RGB and MS-SSIM with ANFIC as anchor. Best model is highlighted bold.\n"
}
},
"image_paths": {
"6": {
"figure_path": "2305.05451v3_figure_6.png",
"caption": "Fig. 6: Objective quality evaluation on the TECNICK data set using the PSNR-RGB metric.",
"url": "http://arxiv.org/html/2305.05451v3/"
},
"7": {
"figure_path": "2305.05451v3_figure_7.png",
"caption": "Fig. 7: Objective quality evaluation on the TECNICK data set using the MS-SSIM metric. Note that all models were still trained exclusively on MSE.",
"url": "http://arxiv.org/html/2305.05451v3/"
}
},
"validation": true,
"references": [
{
"1": {
"title": "\u201cOverview of the versatile video coding (VVC) standard and its applications,\u201d",
"author": "Benjamin Bross, Ye-Kui Wang, Yan Ye, Shan Liu, Jianle Chen, Gary J. Sullivan, and Jens-Rainer Ohm,",
"venue": "IEEE Transactions on Circuits and Systems for Video Technology, vol. 31, no. 10, pp. 3736\u20133764, 2021.",
"url": null
}
},
{
"2": {
"title": "\u201cEnd-to-end optimized image compression,\u201d",
"author": "Johannes Ball\u00e9, Valero Laparra, and Eero P. Simoncelli,",
"venue": "in Proc. International Conference on Learning Representations ICLR, 2017.",
"url": null
}
},
{
"3": {
"title": "\u201cCheckerboard context model for efficient learned image compression,\u201d",
"author": "Dailan He, Yaoyan Zheng, Baocheng Sun, Yan Wang, and Hongwei Qin,",
"venue": "in Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021.",
"url": null
}
},
{
"4": {
"title": "\u201cLearned image compression with gaussian-laplacian-logistic mixture model and concatenated residual modules,\u201d",
"author": "Haisheng Fu, Feng Liang, Jianping Lin, Bing Li, Mohammad Akbari, Jie Liang, Guohe Zhang, Dong Liu, Chengjie Tu, and Jingning Han,",
"venue": "IEEE Transactions on Image Processing, vol. 32, pp. 2063\u20132076, 2023.",
"url": null
}
},
{
"5": {
"title": "\u201cLearned image compression with mixed transformer-cnn architectures,\u201d",
"author": "Jinming Liu, Heming Sun, and Jiro Katto,",
"venue": "in Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023.",
"url": null
}
},
{
"6": {
"title": "\u201cAugmented normalizing flows: Bridging the gap between generative flows and latent variable models,\u201d",
"author": "Chin-Wei Huang, Laurent Dinh, and Aaron C. Courville,",
"venue": "ArXiv, vol. abs/2002.07101, 2020.",
"url": null
}
},
{
"7": {
"title": "\u201cRate-distortion optimized learning-based image compression using an adaptive hierachical autoencoder with conditional hyperprior,\u201d",
"author": "Fabian Brand, Kristian Fischer, and Andr\u00e9 Kaup,",
"venue": "in Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2021, pp. 1885\u20131889.",
"url": null
}
},
{
"8": {
"title": "\u201cLearning true rate-distortion-optimization for end-to-end image compression,\u201d",
"author": "Fabian Brand, Kristian Fischer, Alexander Kopte, and Andr\u00e9 Kaup,",
"venue": "in Proc. Data Compression Conference (DCC), 2022, pp. 443\u2013443.",
"url": null
}
},
{
"9": {
"title": "\u201cRDONet: Rate-distortion optimized learned image compression with variable depth,\u201d",
"author": "Fabian Brand, Kristian Fischer, Alexander Kopte, Marc Windsheimer, and Andr\u00e9 Kaup,",
"venue": "in Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2022, pp. 1758\u20131762.",
"url": null
}
},
{
"10": {
"title": "\u201cANFIC: Image compression using augmented normalizing flows,\u201d",
"author": "Yung-Han Ho, Chih-Chun Chan, Wen-Hsiao Peng, Hsueh-Ming Hang, and Marek Doma\u0144ski,",
"venue": "IEEE Open Journal of Circuits and Systems, vol. 2, pp. 613\u2013626, 2021.",
"url": null
}
},
{
"11": {
"title": "\u201cDensity modeling of images using a generalized normalization transformation,\u201d",
"author": "Johannes Ball\u00e9, Valero Laparra, and Eero P. Simoncelli,",
"venue": "in Proc. International Conference on Learning Representations ICLR, 2016.",
"url": null
}
},
{
"12": {
"title": "\u201cVariational image compression with a scale hyperprior,\u201d",
"author": "Johannes Ball\u00e9, David Minnen, Saurabh Singh, Sung Jin Hwang, and Nick Johnston,",
"venue": "in Proc. International Conference on Learning Representations (ICLR), 2018, pp. 1\u201347.",
"url": null
}
},
{
"13": {
"title": "\u201cJoint autoregressive and hierarchical priors for learned image compression,\u201d",
"author": "David Minnen, Johannes Ball\u00e9, and George D Toderici,",
"venue": "in Advances in Neural Information Processing Systems, 2018, vol. 31, pp. 1\u201310.",
"url": null
}
},
{
"14": {
"title": "\u201cLearned image compression with discretized Gaussian mixture likelihoods and attention modules,\u201d",
"author": "Zhengxue Cheng, Heming Sun, Masaru Takeuchi, and Jiro Katto,",
"venue": "in Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 7936\u20137945.",
"url": null
}
},
{
"15": {
"title": "\u201cEnd-to-end optimized versatile image compression with wavelet-like transform,\u201d",
"author": "Haichuan Ma, Dong Liu, Ning Yan, Houqiang Li, and Feng Wu,",
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 3, pp. 1247\u20131263, 2022.",
"url": null
}
},
{
"16": {
"title": "\u201cVideo enhancement with task-oriented flow,\u201d",
"author": "Tianfan Xue, Baian Chen, Jiajun Wu, Donglai Wei, and William T. Freeman,",
"venue": "International Journal of Computer Vision, vol. 127, no. 8, pp. 1106\u20131125, 2019.",
"url": null
}
},
{
"17": {
"title": "\u201cAdam: A method for stochastic optimization,\u201d",
"author": "Diederik Kingma and Jimmy Ba,",
"venue": "in Proc. International Conference on Learning Representations (ICLR), 2014.",
"url": null
}
},
{
"18": {
"title": "\u201cTESTIMAGES: a Large-scale Archive for Testing Visual Devices and Basic Image Processing Algorithms,\u201d",
"author": "Nicola Asuni and Andrea Giachetti,",
"venue": "in Proc. Smart Tools and Apps for Graphics - Eurographics Italian Chapter Conference, 2014.",
"url": null
}
},
{
"19": {
"title": "\u201cMultiscale structural similarity for image quality assessment,\u201d",
"author": "Zhou Wang, Eero P. Simoncelli, and Alan C. Bovik,",
"venue": "in Proc. Asilomar Conference on Signals, Systems & Computers, 2003, vol. 2, pp. 1398\u20131402.",
"url": null
}
},
{
"20": {
"title": "\u201cCalculation of average PSNR differences between RD-curves, VCEG-M33,\u201d",
"author": "Gisle Bj\u00f8ntegaard,",
"venue": "13th Meeting of the Video Coding Experts Group (VCEG), pp. 1\u20135, Jan 2001.",
"url": null
}
},
{
"21": {
"title": "\u201cSaliency-driven hierarchical learned image coding for machines,\u201d",
"author": "Kristian Fischer, Fabian Brand, Christian Blum, and Andr\u00e9 Kaup,",
"venue": "in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2023.",
"url": null
}
}
],
"url": "http://arxiv.org/html/2305.05451v3"
}
20240522/2305.09972v2.json
ADDED
@@ -0,0 +1,147 @@
{
"title": "Real-Time Flying Object Detection with YOLOv8",
"abstract": "This paper presents a generalized model for real-time detection of flying objects that can be used for transfer learning and further research, as well as a refined model that achieves state-of-the-art results. We achieve this by training our first (generalized) model on a data set containing 40 different classes of flying objects, forcing the model to extract abstract feature representations. We then perform transfer learning with these learned parameters on a data set more representative of \u201creal world\u201d environments (i.e. higher frequency of occlusion, small spatial sizes, rotations, etc.) to generate our refined model. Object detection of flying objects remains challenging due to large variances of object spatial sizes/aspect ratios, rate of speed, occlusion, and clustered backgrounds. To address some of the presented challenges while simultaneously maximizing performance, we utilize the current state-of-the-art single-shot detector, YOLOv8, in an attempt to find the best trade-off between inference speed and mean average precision (mAP). While YOLOv8 is being regarded as the new state-of-the-art [YOLOv8Website], an official paper has not been released as of yet. Thus, we provide an in-depth explanation of the new architecture and functionality that YOLOv8 has adapted. Our final generalized model achieves a mAP50 of 79.2%, mAP50-95 of 68.5%, and an average inference speed of 50 frames per second (fps) on 1080p videos. Our final refined model maintains this inference speed and achieves an improved mAP50 of 99.1% and mAP50-95 of 83.5%.",
"sections": [
{
"section_id": "1",
"parent_section_id": null,
"section_name": "Introduction",
"text": "Numerous recent events have demonstrated the malicious use of drones. Over the past few months, there have been reports of assassination attempts via drones with small explosive payloads [SuicideDrone], drug deliveries to state prisons [PrisonDrugs], and surveillance of the United States (U.S.) Border Patrol by smugglers [BorderPatrol] to exploit weaknesses. While research indicates that drone usage is expected to increase exponentially [DroneMarket], detection technology has yet to provide reliable and accurate results. Drones and mini unmanned aerial vehicles (UAVs) present a stealth capability and can avoid detection by most modern radar systems due to their small electromagnetic signature. They are also small, highly maneuverable, and omit low levels of noise. This, along with the ease of access, provides a natural incentive for drones to remain an integral part of modern warfare and illegal activities. While methods such as radio and acoustic detection have been proposed as solutions, they are currently known to be inaccurate [Drone-Detection-Using-YOLOv5]. This motivates the integration of a visual detector in any such detection system. The U.S. Border Patrol implements real-time object detection from digital towers to monitor people and motor vehicles [BorderDetection], but is not currently known to implement drone detection, which may explain the recent undetected illegal patrolling. Drone detection in this environment is challenging due to the cluttered desert background and the distance that drones survey from [BorderDigitalTowers]. The farther the drone is from cameras, the more difficult it will be to detect and classify it, as the object will convey less signal in the input space to the model.\nOur primary objective is to provide a generalized real-time flying object detection model that can be used by others for transfer learning or further research, as well as a refined model that is ready to use \u201cout of the box\u201d for implementation [OURCODE]. We define a generalized model as one that has good detection and classification performance on a large number of classes at higher resolutions while maintaining a reasonable frame rate (1080p : 30-60 frames per second). Instead of just training our model on drones, we train on a data set containing 40 different flying object categories to force the model to learn more abstract feature representations of flying objects. Then, we transfer learn these weights on a final data set containing more instances of \u201creal world\u201c environments (i.e. higher frequency of occlusion, small spatial sizes, rotations, etc.). This in turn will lead to a more refined, ready-to-implement real-time flying object detection model. To maximize our model\u2019s performance, we use the latest state-of-the-art single-shot detector, YOLOv8. Currently, single-stage detectors are the de-facto architecture choice for fast inference speeds. This choice comes at the expense of exchanging the higher accuracy you would typically expect from a two-state detector. While YOLOv8 is being regarded as the new state-of-the-art [YOLOv8Website], an official paper has yet to be released. This motivates our secondary objective, which is to explain the new architecture and functionality that YOLOv8 has adapted."
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "Materials and Methods",
"text": "Real-time object detection remains challenging due to variances in object spatial sizes and aspect ratios, inference speed, and noise. This is especially true for our use case, as flying objects can change location, scale, rotation, and trajectory very quickly. This conveys the necessity for fast inference speed and thorough model evaluation between low-variance classes, object sizes, rotations, backgrounds, and aspect ratios.\nOur initial model is trained on a data set [InitialDataset] comprised of 15,064 images of various flying objects with an 80% train and 20% validation split. Each image is labeled with the class number of the object and the coordinates of the edges of the associated bounding box. An image may have more than one object and class, sitting at an average of 1.6 annotated objects per image and a total of 24,769 annotations across all images. The median image ratio is 416x416. The images were pre-processed with auto-orientation, but there were no augmentations applied. The data set represents a long-tailed distribution with the drone (25.2% of objects), bird (25%), p-airplane (7.9%), and c-helicopter (6.3%) classes taking up the majority of the data set (64.4%), suffering from a class imbalance. Published on Roboflow with an unnamed author, this data set was generated in 2022, having been downloaded only 15 times.\nIn addition, we utilized a second data set [TransferDataset] to apply transfer learning for the refined model. With a focus on the challenges we laid out, this second data set consists of flying objects at a noticeably farther distance than our initial data set. It consists of 11,998 images, where the average image size is 0.33 mp with a median image ratio of 640x512. The images are separated into a 90% train and 10% validation split. An image may contain more than one object and class, however, it has an average of one object per image, reaching a total count of 12,410 annotated objects. With only four different objects, each class is well represented: drones take up 38.8% of the annotated objects, 21.2% helicopters, 20.4% airplanes, and 19.6% birds. Although Roboflow reports a bird class, the images that contain birds are not labeled and are not included as a class in the transfer model. This dataset was published on Roboflow in 2022 by Ahmed Mohsen [TransferDataset], having only 5 downloads by the time of this paper.\nWe chose the YOLOv8 architecture under the assumption that it would provide us with the highest probability of success given the task. YOLOv8 is assumed to be the new state-of-the-art due to its higher mean average precisions (mAPs) and lower inference speed on the COCO dataset. However, an official paper has\nyet to be released. It also specifically performs better at detecting aerial objects [Figure 9 ###reference_###]. We implement the code for YOLOv8 from the Ultralytics repository. We decide to implement transfer learning and initialize our models with pre-trained weights to then begin training on the custom data set. These weights are from a model trained on the COCO dataset. Due to only having access to a single NVIDIA RTX 3080 and 3070,\na greedy model selection/hyper-parameter tuning approach was chosen. We first train a version of the small, medium, and large versions of the model with default hyper-parameters for 100 epochs. Then, we decide which model is optimal for our use case given the trade-off between inference\nspeed and mAP-50-95 on the validation set. 
After the model size is selected, a greedy hyper-parameter search is conducted with 10 epochs per each\nset of hyper-parameters. The model with the optimal hyper-parameters trains for 163 epochs to generate the generalized model. After this model learns abstract feature representations for a wide array of flying objects, we then transfer learn these weights to a data set that is more representative of the real world [TransferDataset] to generate the refined model. This data set contains 3 classes: helicopter, plane, and drone, with very high variance in object spatial sizes. For evaluation, we are particularly interested in evaluating mAP50-95 and inference speed, as these are the most common measures of success across most object detection algorithms. Due to the large class imbalance, poor performance on the validation set was anticipated in the minority classes. However, this was not observed [Figure 2 ###reference_###].\nMean average precision (mAP) is one of the most used evaluation metrics for object detection. mAP takes the average precision (AP) over all classes and computes them at a pre-specified Intersection over Union (IoU) threshold. To define precision, we need to define true positives and false positives for object detection. A true positive will be determined when the IoU between the predicted box and ground truth is greater than the set IoU threshold, while a false positive will have the IoU below that threshold. Then, precision can be defined as . We take the mean over a class by iterating over a set of thresholds and averaging them. For mAP50-95, we take steps of 0.05 starting from an IoU threshold of 0.5 and stopping at 0.95. The average precision over this interval is the class AP. Do this for all classes and take the average over them and we generate the mAP50-95."
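A sketch of the threshold-averaging just described; `ap_at` is a hypothetical per-threshold AP routine, since the matching and ranking logic is dataset-specific:

```python
import numpy as np

def iou(box_a, box_b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xb, yb = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, xb - xa) * max(0.0, yb - ya)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union > 0 else 0.0

def map50_95(ap_at):
    """Average AP over IoU thresholds 0.50, 0.55, ..., 0.95.

    `ap_at(threshold)` is assumed to return the mean AP over all classes
    at the given IoU threshold."""
    thresholds = np.arange(0.5, 0.96, 0.05)
    return float(np.mean([ap_at(t) for t in thresholds]))
```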
},
{
"section_id": "2.1",
"parent_section_id": "2",
"section_name": "Generalized Model Choice and Performance",
"text": "We evaluate small, medium, and large versions of the models to determine an optimal trade-off between inference speed and mAP50-95 to then optimize the hyper-parameters. The small, medium, and large models have (11151080, 25879480, & 43660680) parameters and (225, 295, & 365) layers respectively. After training the models, we see there is a noticeable increase in mAP50-95 between small and medium models (0.05), but not much delta between medium and large (0.002). We also see that small, medium, and large infer at 4.1, 5.7, and 9.3 milliseconds respectively on the validation set. However, our original goal is to reach an average inference speed between 30 to 60 frames for 1080p. When testing the medium-size model on multiple 1080p HD videos, we observe an average total speed (pre-process speed (0.5ms) + inference speed (17.25ms) + post-process speed (2ms)) of 19.75 ms (50 frames per second), which aligns with our primary objective. This leads to our selection of the medium-size model to begin tuning hyper-parameters.\nDue to a lack of computational resources, we evaluate 10 epochs for each set of hyper-parameters as an indicator for the potential performance of additional epochs. We observe that this assumption is correct, as training with the optimal set of hyper-parameters achieves better performance at epoch 100 compared to default hyper-parameters (0.027) [Figure 2 ###reference_###]. We choose the best hyper-parameters based on validation mAP50-95 as batch size of 16, stochastic gradient descent (SGD) as the optimizer, momentum of 0.937, weight decay of 0.01, classification loss weight = 1, box loss weight = 5.5, and distribution focal loss weight = 2.5. After training for 163 epochs, we achieve a mAP50-95 of 0.685 and an average inference speed on 1080p videos of 50 fps."
},
{
"section_id": "2.2",
"parent_section_id": "2",
"section_name": "Loss Function and Update Rule",
"text": "The generalized loss function and weight update procedure can be defined as follows:\n(1 ###reference_###) is the generalized loss function incorporating the individual loss weights and a regularization term with weight decay , (2 ###reference_###) is the velocity term with momentum , and (3 ###reference_###) is the weight update rule with as the learning rate. The specific YOLOv8 loss function can be defined as:\nwhere:\nand:\nis the total number of cells containing an object.\nis an indicator function for the cells containing an object.\nis a tuple that represents the ground truth bounding box consisting of (,, width, height).\nis the respective cell\u2019s predicted box.\nis a tuple that represents the central point of the ground truth bounding box.\nis the ground truth label for class c (not grid cell c) for each individual grid cell (x,y) in the input, regardless if an object is present.\nare the nearest predicted boxes IoUs (left and right) .\nand are the respective boxes width and height.\nis the diagonal length of the smallest enclosing box covering the predicted and ground truth boxes.\nEach cell then determines its best candidate for predicting the bounding box of the object. This loss function includes the complete IoU (CIoU) loss proposed by Zheng et al. [CIoU] as the box loss, the standard binary cross entropy for multi-label classification as the classification loss (allowing each cell to predict more than 1 class), and the distribution focal loss proposed by Li et al. [GFL] as the 3rd term.\n###figure_1### ###figure_2###"
},
{
"section_id": "2.3",
"parent_section_id": "2",
"section_name": "Model Confusion and Diagnosis",
"text": "###figure_3### ###figure_4### One of the primary challenges in object detection is dealing with data sets with low inter-class variance, i.e. multiple classes that look similar to each other compared to the rest of the labels. Take, for example, the F-14 and F-18, which are displayed in Figure 3 ###reference_###. Both have similar-looking wing shapes, two rudders, an engine, a cockpit, and a respective payload. In this confusion matrix [Figure 2 ###reference_###], the model is most likely to misclassify an F-14 as an F-18. This type of misclassification typically affects classes in categories with low inter-class variance amongst themselves. Visualizing activation maps [MMYOLOViz] is a technique that helps us understand what pixels in the input image are important for determining its class.\nGenerally, deeper layers in CNNs extract more granular/complex/low-level feature representations. YOLOv8 incorporates this idea into its architecture by having repeating modules and multiple detection heads when making its prediction. For our experimentation, we use MMYolo [MMYOLOViz] to create activation maps at different stages of our backbone. We expect some sense of differentiation in the different feature maps. If our model shows similar feature activations for F-14s and F-18s, we can say that may be the reason for class confusion.\nMMYolo [MMYOLOViz] by Yamaguchi et al. is an open-source toolbox for YOLO series algorithms based on PYTorch. MMYolo can decompose the most popular YOLO algorithms, making them easily customizable and ready for analysis. For our analysis, we employed MMYolo to first convert the weights from .pt (Pytorch model) to .pth (State dictionary file, i.e., weights, bias, etc.), and to second visualize the different activation maps of YOLOv8 during inference. MMYolo allows you to specify the model type, weight file, target layer, and channel reduction.\nYOLOv8 uses CSPDarknet53 [darkNet] as its backbone [Figure 7 ###reference_###], a deep neural network that extracts features at multiple resolutions (scales) by progressively down-sampling the input image. The feature maps produced at different resolutions contain information about objects at different scales in the image and different levels of detail and abstraction. YOLOv8 can incorporate different feature maps at different scales to learn about object shapes and textures, which helps it achieve high accuracy in most object detection tasks. YOLOv8\u2019s backbone consists of four sections, each with a single convolution followed by a c2f module [YOLOv8Website]. The c2f module is a new introduction to CSPDarknet53. The module comprises splits where one end goes through a bottleneck module (two 3x3 convolutions with residual connections). The bottleneck module output is further split N times, where N corresponds to the YOLOv8 model size. These splits are all finally concatenated and passed through one final convolution layer. This final layer is where we will get the activations.\n###figure_5### ###figure_6### Figure 3 ###reference_### shows the original F-14 and F-18 images and the activations of the four c2f stages in the network, with each stage being more profound in the network from the second image right. The activation map corresponding to the shallowest c2f module shows the broadest activation. This module detects the two wings of the aircraft and determines that this object is a plane. The second activation map corresponds to the second c2f module in our backbone. 
It shows strong activations at different components of the aircraft, such as locating the wings, body, cockpit, and payload. It appears that this layer is attempting to infer what kind of aircraft is being presented in the image by highlighting these features. The third activation map is starting to dive into the individual textures of the components of the aircraft, presumably checking for minute differences in the jet\u2019s structure. Finally, the model\u2019s final c2f module activates extremely fine-grained details and outlines in the respective images. These similar feature activation maps could be the reason that the model confuses the two."
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "Results",
"text": ""
},
{
"section_id": "3.1",
"parent_section_id": "3",
"section_name": "Generalized Model",
"text": "To highlight our results, we address three challenging conditions: (1) detecting and classifying extremely small objects, (2) identifying flying objects that blend into their background, and (3) classifying different types of flying objects. We examined the performance of our generalized model, [InitialDataset], against these challenges. This is demonstrated in Figure 5 ###reference_###, which features four images that represent the bird, drone, passenger airplane, and V22 classes.\nThe first of the four images showcases the model\u2019s ability to identify distant birds. In the second image, the model was put to the test against a very small drone that occupied only 0.026% of the image size while also blending in with its background. The model still resulted in the correct detection and classification of the drone. The third image shows the model\u2019s ability to identify a minute passenger airplane of size 0.063% of the image, which is also blended into its surroundings. Finally, the fourth image features a V22 aircraft, which is an underrepresented class and accounts for only 3.57% of the entire dataset. A V22 can easily be mistaken as a drone due to its vertical propeller positioning. Despite these two hindering characteristics and only taking up 0.14% of the entire image, the image exhibits the model\u2019s ability to still identify the V22 with impressive accuracy, achieving a confidence score of 0.83.\nDespite the visual similarities between the birds, drones, and passenger airplanes in these images, our model successfully classified them with adequate confidence. These results illustrate our model\u2019s ability to overcome our identified challenges associated with object detection in real-world conditions, and also demonstrate our success in creating a solution that effectively tackles these challenges. Overall, it does very well at distinguishing various types of flying objects despite the need to account for multiple different classes of aircraft."
},
{
"section_id": "3.2",
"parent_section_id": "3",
"section_name": "Refined Model",
"text": "To generate the refined model, we initialized the model with the weights learned from the generalized model and default hyperparameters. We then trained the model on the \u201creal world\u201c data set for 199 epochs [TransferDataset]. This data set was selected to focus on our challenge of detecting and classifying extremely small objects in appearance. Figure 5 ###reference_### displays our results, featuring four distinct images that represent the bird, drone, airplane, and helicopter objects.\nThe first image contains an extremely small bird that only takes up 0.02% of the image. Even with the lack of the bird class in our training process, our model correctly identified that the object was not any of the other available classes, even while allowing a very low confidence threshold of 0.20. The second image contains a drone, which also only took up 0.02% of its image. This drone is nearly indistinguishable from the background clouds to the human eye, yet our model was still able to classify it with a confidence score of 0.81. The third image includes a small airplane that takes up 0.034% of pixels, which our model was still able to correctly identify and classify with a high confidence score of 0.85. In the final image, a barely visible helicopter (0.01% of the image) was correctly classified with a confidence score of 0.73.\nIn Figure 4 ###reference_###, we can see that the feature map activation correctly segments the object in the first layer. The second layer starts picking out all of the tree tops, which can be explained by the higher relative variance of the tree tops. In the third layer, we see more importance being placed on the background and more granular features being detected. In the fourth layer, we see the outline of the drone itself. In the second row, the true strength of the localization accuracy with an over-emphasized detection is displayed. In the second layer, we see a de-emphasis on the background. In the third and fourth layers, we see the same behavior as before.\nOur model achieves state-of-the-art results, achieving a mAP50 of 0.991 and mAP50-95 of 0.835 across the plane, helicopter, and drone classes. These results demonstrate that our generalized model serves as an excellent base for transfer learning, particularly when dealing with extremely small objects, blended backgrounds, and distinguishing between drones and other flying objects."
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "Discussion",
"text": "To the problem of flying object detection, we apply transfer learning with weights learned from our generalized model to our refined model in order to achieve state-of-the-art results in this domain. We argue that our algorithm extracts better feature representations of flying objects than those seen in previous research, furthering the current state of research in this domain. Our refined model achieves a 99.1% mAP50, 98.7% Precision, and 98.8% Recall with 50 fps inference speed on the 3-class data set (drone, plane, and helicopter), surpassing models generated from previous research to a significant extent. Aydin et al. proposed a YOLOv5 instance that achieved 90.40% mAP50, 91.8% Precision, and 87.5% Recall with 31 fps inference speed trained on a data set only containing drones and birds [Yolov5Drone]. Rozantsev et al. trained their proposed model on a data set reflective of ours, containing flying objects that occupy small portions of the input image with clustered backgrounds. They achieve an 84.9% AP on a data set containing only UAVs and 86.5% AP on a data set containing only aircraft [SingleMovingDetector]. Al-Qubaydhi et al. proposed a model utilizing the YOLOv5 framework and achieves an impressive 94.1% mAP50, 94.7% Precision, and 92.5% Recall on a dataset containing only one class of drones. [UAVYOLOv5Transfer]. Even with our exceptional results, a potential limitation of our refined model is that it was trained on a data set with a low amount of distinct environments. To address this potential generalization issue, we suggest utilizing our generalized model weights to transfer learn on a data set with higher frequency of distinct backgrounds."
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "Model Architecture",
"text": "With the publication of \u201cYou Only Look Once: Unified, Real-Time Object Detection\u201d first proposed by Redmon et al. [YOLO_OG] in 2015, one of the most popular object detection algorithms, YOLOv1, was first described as having a \u201crefreshingly simple\u201d approach [CompReview]. At its inception, YOLOv1 could process images at 45 fps, while a variant, fast YOLO, could reach upwards of 155 fps. It also achieved high mAP compared to other object detection algorithms at the time.\nThe main proposal from YOLO is to frame object detection as a one-pass regression problem. YOLOv1 comprises a single neural network, predicting bounding boxes and associated class probability in a single evaluation. The base model of YOLO works by first dividing the input image into an S x S grid where each grid cell (i,j) predicts B bounding boxes, a confidence score for each box, and C class probabilities. The final output will be a tensor of shape S x S x (B x 5 + C)."
},
{
"section_id": "5.1",
"parent_section_id": "5",
"section_name": "YOLOv1 Overview",
"text": "YOLOv1 architecture [Figure 6 ###reference_###] consists of 24 convolutional layers followed by two fully connected layers. In the paper [YOLO_OG], the authors took the first 20 convolutional layers from the backbone of the network and, with the addition of an average pooling layer and a single fully connected layer, it was pre-trained and validated on the ImageNet 2012 dataset. During inference, the final four layers and 2 FC layers are added to the network; all initialized randomly.\n###figure_7### YOLOv1 uses stochastic gradient descent as its optimizer. The loss function, shown by Equation 5 ###reference_###, comprises two parts: localization loss and classification loss. The localization loss measures the error between the predicted bounding box coordinates and the ground-truth bounding box. The classification loss measures the error between the predicted class probabilities and the ground truth. The and are regularization coefficients that regulate the magnitude of the different components, emphasizing object localization and de-emphasizing grid cells without objects.\n###figure_8###"
},
{
"section_id": "5.2",
"parent_section_id": "5",
"section_name": "YOLOv5 Overview",
"text": "YOLOv5 [Drone-Detection-Using-YOLOv5] is an object detection model introduced in 2020 by Ultralytics, the originators of the original YOLOv1 and YOLOv3. YOLOv5 achieves state-of-the-art performance on the COCO benchmark dataset [YOLOv5Doc] while also being fast and efficient to train and deploy. YOLOv5 made several architectural changes, most notably the standardized practice of structuring the model into three components: the backbone, neck, and head.\nThe backbone of YOLOv5 is Darknet53, a new network architecture that focuses on feature extraction characterized by small filter windows and residual connections. Cross-stage partial connections (CSP) enable the architecture to achieve a richer gradient flow while reducing computation as proposed by Wang et al. [cspNET].\nThe neck [CompReview], as described by Teven et al., of YOLOv5 connects the backbone to the head, whose purpose is to aggregate and refine the features extracted by the backbone, focusing on enhancing the spatial and semantic information across different scales. A Spatial Pyramid Pooling (SPP) [SPP] module removes the fixed-size constraint of the network, which removes the need to warp, augment, or crop images. This is followed by a CSP-Path Aggregation Network [cspNET] module, which incorporates the features learned in the backbone and shortens the information path between lower and higher layers.\nYOLOv5\u2019s head consists of three branches, each predicting a different feature scale. In the original publication of the model [YOLOv5Doc], the creators used three grid cell sizes of 13 x 13, 26 x 26, and 52 x 52, with each grid cell predicting B = 3 bounding boxes. Each head produces bounding boxes, class probabilities, and confidence scores. Finally, the network uses Non-maximum Suppression (NMS) [NMS] to filter out overlapping bounding boxes.\nYOLOv5 incorporates anchor boxes, which are fixed-sized bounding boxes used to predict the location and size of objects within an image. Instead of predicting arbitrary bounding boxes for each object instance, the model predicts the coordinates of the anchor boxes with predefined aspect ratios and scales and adjusts them to fit the object instance."
},
{
"section_id": "5.3",
"parent_section_id": "5",
"section_name": "YOLOv8 Overview",
"text": "YOLOv8 is the latest version of the YOLO object detection model. This latest version has the same architecture as its predecessors [Figure 7 ###reference_###], but it introduces numerous improvements compared to the earlier versions of YOLO, such as a new neural network architecture that utilizes both Feature Pyramid Network (FPN) and Path Aggregation Network (PAN) and a new labeling tool that simplifies the annotation process. This labeling tool contains several useful features like auto labeling, labeling shortcuts, and customizable hotkeys. The combination of these features makes it easier to annotate images for training the model.\nThe FPN works by gradually reducing the spatial resolution of the input image while increasing the number of feature channels. This results in the creation of feature maps that are capable of detecting objects at different scales and resolutions. The PAN architecture, on the other hand, aggregates features from different levels of the network through skip connections. By doing so, the network can better capture features at multiple scales and resolutions, which is crucial for accurately detecting objects of different sizes and shapes [CompReview]."
},
{
"section_id": "5.4",
"parent_section_id": "5",
"section_name": "YOLOv8 vs YOLOv5",
"text": "The reason why YOLOv8 is being compared to YOLOv5 rather than any other version of YOLO is that YOLOv5\u2019s performance and metrics are closer to YOLOv8\u2019s. However, YOLOv8 surpasses YOLOv5 in aspects including a better mAP as seen in Figure 9 ###reference_###. Along with a better mAP, this shows that YOLOv8 has fewer outliers when measured against the RF100. RF100 is a 100-sample dataset from the Roboflow universe, which is a repository of 100,000 data sets. We also witness YOLOv8 outperforming YOLOv5 for each RF100 category. From Figure 9 ###reference_###, we can see that YOLOv8 produces similar or even better results compared to YOLOv5 [YOLOv8Website].\nAs mentioned previously, YOLOv8 uses a new architecture that combines both FAN and PAN modules. FPN is used to generate feature maps at multiple scales and resolutions, while PAN is used to aggregate features from different levels of the network to improve accuracy. The results of the combined FAN and PAN modules are better than YOLOv5, which uses a modified version of the CSPDarknet architecture. This modified version of CSPDarknet is based on cross-stage partial connections, which improves the flow of information between different parts of the network.\n###figure_9### ###figure_10### Another difference the two models have is based on their training data. YOLOv8 was trained on a larger and more diverse dataset compared to YOLOv5. YOLOv8 was trained on a blend of the COCO dataset and several other datasets, while YOLOv5 was trained primarily on the COCO dataset. Because of this, YOLOv8 has a better performance on a wider range of images.\nYOLOv8 includes a new labeling tool called RoboFlow Annotate, which is used for image annotation and object detection tasks in computer vision. RoboFlow Annotate makes it easier to annotate images for training the model and includes several features such as auto labeling, labeling shortcuts, and customizable hotkeys. In contrast, YOLOv5 uses a different labeling tool called LabelImg. LabelImg is an open-source graphical image annotation tool that allows its users to draw bounding boxes around objects of interest in an image, and then export the annotations in the YOLO format for training the model.\nYOLOv8 includes more advanced post-processing techniques than YOLOv5, which is a set of algorithms applied to the predicted bounding boxes and objectiveness scores generated by the neural network. These techniques help to refine the detection results, remove redundant detections, and improve the overall accuracy of the predictions. YOLOv8 uses Soft-NMS, a variant of the NMS technique used in YOLOv5. Soft-NMS applies a soft threshold to the overlapping bounding boxes instead of discarding them outright, whereas NMS removes the overlapping bounding boxes and keeps only the ones with the highest objectiveness score.\nOutput heads refer to the final layers of a neural network that predict the locations and classes of objects in an image. In YOLO architecture, there are normally several output heads that are responsible for predicting different aspects of the detected objects, such as the bounding box coordinates, class probabilities, and objectiveness scores. These output heads are typically connected to the last few layers of the neural network and are trained to output a set of values that can be used to localize and classify objects in an image. The number and type of output heads used can vary depending on the specific object detection algorithm and the requirements of the task at hand. 
YOLOv5 has three output heads while YOLOv8 has one output head. YOLOv8 does not have small, medium, and large anchor boxes. It uses an anchor-free detection mechanism that directly predicts the center of an object instead of the offset from a known anchor box. This reduces the number of box predictions and speeds up the post-processing process in return.\nIt is fair to note that YOLOv8 is slightly slower than YOLOv5 in regard to object detection speed. However, YOLOv8 is still able to process images in real-time on modern GPUs.\nBoth YOLOv5 and YOLOv8 use mosaic augmentation on the training set. Mosaic augmentation is a data augmentation technique that takes four random images from the training set and combines them into a single mosaic image. This image, where each quadrant contains a random crop from one of the four input images, is then used as input for the model [MosaicAug]."
}
],
"appendix": [],
"tables": {},
"image_paths": {
"1": {
|
| 94 |
+
"figure_path": "2305.09972v2_figure_1.png",
|
| 95 |
+
"caption": "Figure 1: Confusion matrix for all classes.\n",
|
| 96 |
+
"url": "http://arxiv.org/html/2305.09972v2/extracted/5592669/figures/confusion_matrix.png"
|
| 97 |
+
},
|
| 98 |
+
"2": {
|
| 99 |
+
"figure_path": "2305.09972v2_figure_2.png",
|
| 100 |
+
"caption": "Figure 2: YOLOv8 validation mAP50-95.\n",
|
| 101 |
+
"url": "http://arxiv.org/html/2305.09972v2/extracted/5592669/figures/Pre-Trained_YOLOv8_Val_mAP50-95.png"
|
| 102 |
+
},
|
| 103 |
+
"3": {
|
| 104 |
+
"figure_path": "2305.09972v2_figure_3.png",
|
| 105 |
+
"caption": "Figure 3: Feature activation maps for the F-14 and F-18 fighter jets. From left to right, we have the four stages of the model\u2019s CSPDarkNet53 backbone.",
|
| 106 |
+
"url": "http://arxiv.org/html/2305.09972v2/extracted/5592669/figures/F14vsF18.png"
|
| 107 |
+
},
|
| 108 |
+
"4": {
|
| 109 |
+
"figure_path": "2305.09972v2_figure_4.png",
|
| 110 |
+
"caption": "Figure 4: From left to right, (1) picks up the drone, (2) picks up tree top granularity - tree tops are more granular than stumps, (3) granular version of layer (2), (4) an outlier, texturized analysis of what the object is.",
|
| 111 |
+
"url": "http://arxiv.org/html/2305.09972v2/extracted/5592669/figures/drone_gradcam.png"
|
| 112 |
+
},
|
| 113 |
+
"5(a)": {
|
| 114 |
+
"figure_path": "2305.09972v2_figure_5(a).png",
|
| 115 |
+
"caption": "(a)\nFigure 5: Prediction Images",
|
| 116 |
+
"url": "http://arxiv.org/html/2305.09972v2/extracted/5592669/figures/Generalized_Images.png"
|
| 117 |
+
},
|
| 118 |
+
"5(b)": {
|
| 119 |
+
"figure_path": "2305.09972v2_figure_5(b).png",
|
| 120 |
+
"caption": "(b)\nFigure 5: Prediction Images",
|
| 121 |
+
"url": "http://arxiv.org/html/2305.09972v2/extracted/5592669/figures/Refined_Model_Images.png"
|
| 122 |
+
},
|
| 123 |
+
"6": {
|
| 124 |
+
"figure_path": "2305.09972v2_figure_6.png",
|
| 125 |
+
"caption": "Figure 6: YOLO Architecture [YOLO_OG]",
|
| 126 |
+
"url": "http://arxiv.org/html/2305.09972v2/extracted/5592669/figures/YOLOv1_Architecture.png"
|
| 127 |
+
},
|
| 128 |
+
"7": {
|
| 129 |
+
"figure_path": "2305.09972v2_figure_7.png",
|
| 130 |
+
"caption": "Figure 7: YOLOv8 Architecture [YOLOv8Website]",
|
| 131 |
+
"url": "http://arxiv.org/html/2305.09972v2/extracted/5592669/figures/YOLOv8_arch.png"
|
| 132 |
+
},
|
| 133 |
+
"8": {
|
| 134 |
+
"figure_path": "2305.09972v2_figure_8.png",
|
| 135 |
+
"caption": "Figure 8: YOLOs mAP@.50 against RF100.\n",
|
| 136 |
+
"url": "http://arxiv.org/html/2305.09972v2/extracted/5592669/figures/YOLOv5_vs_YOLOv8_COMP1.png"
|
| 137 |
+
},
|
| 138 |
+
"9": {
|
| 139 |
+
"figure_path": "2305.09972v2_figure_9.png",
|
| 140 |
+
"caption": "Figure 9: YOLOs average mAP@.50 against RF100 categories\n",
|
| 141 |
+
"url": "http://arxiv.org/html/2305.09972v2/extracted/5592669/figures/YOLOv5_vs_YOLOv8_COMP2.png"
|
| 142 |
+
}
|
| 143 |
+
},
|
| 144 |
+
"validation": true,
|
| 145 |
+
"references": [],
|
| 146 |
+
"url": "http://arxiv.org/html/2305.09972v2"
|
| 147 |
+
}
|
20240522/2306.00096v2.json
ADDED
{
"title": "Learning the Pareto Front Using Bootstrapped Observation Samples",
"abstract": "We consider Pareto front identification (PFI) for linear bandits (PFILin), i.e., the goal is to identify a set of arms with undominated\nmean reward vectors when the mean reward vector is a linear function of the context.\nPFILin includes the best arm identification problem and multi-objective active learning as special cases.\nThe sample complexity of our proposed algorithm is optimal up to a logarithmic factor.\nIn addition, the regret incurred by our algorithm during the estimation is within a logarithmic factor of the optimal regret among all algorithms that identify the Pareto front.\nOur key contribution is a new estimator that in every round updates the estimate for the unknown parameter along multiple context directions \u2013 in contrast to the conventional estimator that only updates\nthe parameter estimate along the chosen context.\nThis allows us to use low-regret arms to collect information about Pareto optimal arms.\nOur key innovation is to reuse the exploration\nsamples multiple times; in contrast to conventional estimators that use each sample only once.\nNumerical experiments demonstrate that the proposed algorithm successfully identifies the Pareto front while controlling the regret.",
"sections": [
{
"section_id": "1",
"parent_section_id": null,
"section_name": "Introduction",
"text": "Consider a setting where one has to select among a finite set of actions that have multiple different characteristics, see, e.g., (Lizotte et al., 2010 ###reference_b17###; Van Moffaert and Now\u00e9, 2014 ###reference_b27###; Lin et al., 2019 ###reference_b16###).\nA classical example is prescribing a drug to a patient, where one needs to consider its efficacy, toxicity, and potentially all its side effects.\nThe efficacy and various side effects typically also depend on patient characteristics.\nSuch examples can be found also in online platforms, e-commerce sites, and are pertinent to the design of most recommender systems.\nThe problem of selecting an action that has multiple attributes is typically modeled using the concept of Pareto optimality, and the learning problem reduces to identifying the Pareto front (Goel et al., 2007 ###reference_b9###), i.e. the set of actions that are not dominated, and therefore, potentially optimal for some user.\nWe consider Pareto front identification (PFI) for linear bandits (PFILin), where the attributes of each action are a linear function of an associated context.\nPFILin generalizes both the best arm identification (BAI) problems\nand PFI for MABs.\nWe propose an algorithm PFIwR whose sample complexity is optimal to within logarithmic factors.\nA \u201cgood\u201d PFI algorithm should ideally have a both low sample complexity as well as low regret during the identification period.\nDegenne et al. (2019 ###reference_b7###); Zhong et al. (2023 ###reference_b30###) discuss the trade-off between regret and sample complexity in the context of BAI for drug testing.\nSuch considerations are also important for e-commerce platforms where high regret could lead to low customer satisfaction and underexposure of products.\nWe show PFIwR has close to optimal (within logarithmic factors) regret among all PFI\nalgorithms.\nIn particular, the Pareto front in the multi-objective setting typically has multiple arms, and hence, an algorithm may be forced to collect samples from high regret arms in order to decide whether it is on the Pareto front and minimizing regret is more challenging.\nIn the linear bandit setting, an algorithm must carefully choose actions so that corresponding contexts support efficient parameter estimation (Soare et al., 2014 ###reference_b22###; Tao et al., 2018 ###reference_b23###).\nConsequently, the challenging part of designing an algorithm is to allow suitable exploration and identification of the Pareto front, while controlling for the regret associated with these\narm choices.\nTo resolve this challenge, we propose the exploration-mixed estimator which \u201cmixes\u201d the observations during an exploitation round with bootstrapped samples from a previous exploration round, i.e., the estimator \u201crecycles\" the samples in the exploration phase.\nThe recycling is key in enabling the estimator to update along several context directions in every round.\nThis allows us to explore high-regret actions only for logarithimically increasing exploration rounds, and exploiting low-regret actions after that.\nHowever, recycling samples may cause dependency, and higher estimation error as compared to that of the conventional estimators.\nWe offset the higher error of the exploration-mixed estimator, by using a doubly-robust (DR) estimator (Bang and Robins, 2005 ###reference_b5###), that is robust to the error of the estimator used to impute the rewards for actions that are not selected.\nThese methods ensure we can simultaneously 
learn rewards for PFI and select arms to minimize regret.\nThe main contributions of this paper are as follows:\nWe introduce a novel estimation procedure for linear bandit feedback\nthat ensures convergence rate for the reward\nvectors of all arms while largely exploiting low regret\narms (Theorem 4.2 ###reference_thm2###).\nThis uniform convergence is possible due to two innovations: (i) the novel\nexploration-mixed estimator that reuses the observations in the\npast exploration rounds (Section 4.2 ###reference_###); and (ii) construction of\na DR estimate for unobserved rewards which is robust to the error of the\nexploration-mixed estimator (Section 4.3 ###reference_###).\nWe apply the novel estimation paradigm to PFILIn and propose a new\nalgorithm PFIwR with sample complexity that is optimal up to\nlogarithmic factors, and has Pareto regret in round\n with context dimension , after initial\nexploration rounds independent of the problem complexity\n(Theorem 5.2 ###reference_thm2###).\nFurther, the algorithm is shown to achieves optimal order regret among all\nPFI algorithms (Theorem 5.3 ###reference_thm3###).\nExperimental results clearly show the estimator converges on the\nrewards of all contexts while exploiting low-regret arms, and\nPFIwR has significantly superior performance to previously\nknown algorithms for both PFI and regret minimization."
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "Related Work",
"text": "The typical approach in multi-objective rewards is to scalarize the problem by either setting the objective to be a weighted combination of all the objective (Roijers et al., 2017 ###reference_b19###, 2018 ###reference_b20###; Wanigasekara et al., 2019 ###reference_b28###), or optimizing one while imposing constraints on the rest (Agrawal and Devanur, 2016 ###reference_b2###; Kim et al., 2023a ###reference_b13###).\nWhile these approaches identify only one action on the Pareto front,\nwe identify all actions on the Pareto front, i.e. identify the set of actions that are potentially optimal for any scalarization approach.\nTable 1 ###reference_### compares our contribution with the existing bandit literature.\nThe PFILin problem is a generalization of the BAI\n(Even-Dar et al., 2002 ###reference_b8###; Soare et al., 2014 ###reference_b22###) and single-objective regret minimization (Auer, 2002a ###reference_b4###; Valko et al., 2013 ###reference_b26###) to the multi-objective vector rewards.\nExisting algorithms for multi-objective PFI problems have focused on the Gaussian reward setting (Zuluaga et al., 2016 ###reference_b31###) and non-contextual MAB setting (Auer et al., 2016 ###reference_b3###), and the optimal regret guarantees remain open.\nLu et al. (2019 ###reference_b18###) proposed an algorithm that achieves a bound on regret for multi-objective contextual bandits; however the identification of all arms in the Pareto front is not established.\nWhile Degenne et al. (2019 ###reference_b7###) and Zhong et al. (2023 ###reference_b30###) obtained theoretical guarantees for both regret and sample complexity for non-contextual single-objective rewards, extension to linear and multi-objective rewards remains open."
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "Problem Formulation: Pareto Front Identification for Linear Bandits",
"text": "For a positive integer , let .\nIn PFILin, an action is associated with a known\n-dimensional context vector . Let\n.\nWithout loss of generality, we assume and, as is standard in this literature (e.g., Tao et al. (2018 ###reference_b23###)), we assume that\n spans .\nIn period , the decision-maker chooses an , and observes a sample of the random reward vector , where is the unknown (but fixed) parameters with , for all , and is a mean-zero, -sub-Gaussian random error vector that is independent of actions , and other error vectors ; however, we allow for the components of to be correlated.\nLet denote the true mean reward vector for arm .\nWe want to identify the Pareto front of the defined as follows.\nFor vectors , , the vector dominates (denoted by ) if , for all and there exists such that .\nThe Pareto front is a set of arms whose mean reward vector is not dominated by the reward of any other arm.\nTo identify the Pareto front , one must compute a reasonable estimate for the entire set of reward vectors .\nFollowing Auer et al. (2016 ###reference_b3###), let\ndenote the amount by which\narm dominates arm .\nWe have if and only if , for all .\nTherefore, the distance\n denotes the\nminimum amount by which each component of the reward vector must\nbe increased to ensure that action is not dominated by any Pareto optimal\naction .\nBy definition, the distance for all Pareto\noptimal actions .\nNext, we define the PFI\nsuccess condition.\nFor precision and confidence , a\nPFI algorithm must output a set of arms such that, with\nprobability at least ,\nThe first condition in (1 ###reference_###) ensures that \ncontains the Pareto optimal set , and the second condition guarantees that the\nset only includes arms sufficiently close to the Pareto front.\nLet denote the number of samples required for an\nalgorithm to meet the success condition (1 ###reference_###).\nThen the cumulative regret of an algorithm\nuntil round is defined as\nwhere denotes the action selected by the algorithm.\nOur goal is to simultaneously establish an upper bound of the sample complexity and the Pareto regret\n.\nFor a Pareto sub-optimal arm , if the estimate of the reward vector of arm has error , it can erroneously appear Pareto optimal.\nTherefore, the required accuracy for a suboptimal arm is .\nSince the number of arms on the Pareto front is unknown, the algorithm must decide whether the remaining arms are all Pareto optimal or not to terminate.\nThus, we need another complexity measure,\nwhich is the amount by which each component of the mean reward of arm must be increased so that is weakly dominated by .\nNote that if and only if .\nFix a Pareto optimal arm .\nIf the reward for arm is underestimated by with respect to a Pareto optimal arm , it may appear weakly dominated by .\nThus, in order to prevent misidentifying the Pareto optimal arm as a suboptimal arm, the error of the estimator has to be at most .\nNext, consider a suboptimal arm .\nIf the error of the estimator is greater than ,\nthe Pareto optimal arm may appear dominated by suboptimal arm .\nIn order to distinguish the Pareto optimal arm from the Pareto suboptimal arms, the error of the estimator has be to at most .\nIn summary, to identify whether arm is in Pareto front, the estimation error has to be at most\nWe index the arms in increasing order of required accuracy, i.e. .\nFix , and let\n.\nSuppose the set of context vectors spans and , for all .\nThen, for any and , there exist a -Gaussian distribution for the i.i.d. 
noise sequence such that any algorithm requires at least\n\nrounds to meet the success condition (1 ###reference_###).\nTheorem 3.3 ###reference_thm3### generalizes the lower bound in Auer et al. (2016 ###reference_b3###) to the linear bandit setting.\nSince , the number of rounds required for PFI depends only on the smallest gaps instead of all gaps."
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "Estimating Rewards with Low-Regret Actions",
"text": "Overview. Our main contribution is a novel estimation strategy\nthat simultaneously learns rewards of all actions while largely\nexploiting low-regret arms. We address the following two main challenges:\n(i) the number of arms can be exponentially large;\n(ii) exploiting the low-regret actions may not yield the\ninformation required to learn the rewards of unexploited arms.\nWe resolve (i) by reducing context vectors into basis vectors\n(Section 4.1 ###reference_###); and (ii) by reusing the reward\nsamples in the exploration phase (Section 4.2 ###reference_###) along with\ndoubly-robust estimation (Section 4.3 ###reference_###) to compensate for\ndependencies that arise from the data \u201creuse\" (imputation) scheme.\nOur strategy is applicable to more broadly to online learning problems\nunder linear bandit feedback, e.g., BAI (Tao et al., 2018 ###reference_b23###), policy\noptimization in reinforcement learning (He et al., 2021 ###reference_b10###). We now\ndescribe in more detail each ingredient in our approach and its\ntheoretical properties."
},
{
"section_id": "4.1",
"parent_section_id": "4",
"section_name": "Exploration Strategy with Context Basis",
"text": "Let denote the matrix of contexts vectors.\nUsing the (reduced) singular value decomposition (SVD), one can compute\northonormal vectors and scalars such that .\nThus, it follows that , for and .\nFor each , let denote the probability mass function\n over actions .\nThen, for a randomized action , we have\nThus, can be viewed as the random reward\ncorresponding to the \u201ccontext basis\u201d . Note that\nthere is no \u201cpure\u201d action that corresponds to the\n\u201ccontext basis\u201d \u2013 it corresponds to a randomized mixture of actions. We\nwill combine these randomized actions with pure actions to efficiently\nlearn the parameter .\nBecause , sampling for \nyields the design matrix that\nsatisfies (see\nSection B.3 ###reference_### for details).\nFor each , let denote the set of rounds reserved for exploration.\nFix a confidence level , and let where is an absolute constant specified in (29 ###reference_###).\nDefine , and in each round , sample a basis index\n and sample the action according to corresponding probability mass function.\nDefine\nThis definition is to ensure that the number of actions in increases logarithmically in , and ensures that , .\nBy construction, for\n.\nWhen , i.e, in an exploration round, the algorithm selects\nthe sampled action , and when , the\nalgorithm choose an arm from the set of unidentified arms that has low\nestimated regret."
},
{
"section_id": "4.2",
"parent_section_id": "4",
"section_name": "Recycling Reward Samples in the Exploration Phase",
"text": "For each and , i.e., when is in\nexploitation phase, let denote the arm chosen by the\nalgorithm.\nIn order to learn rewards of multiple arms, we \u201crecycle\" the\nreward sample observed in a previous exploration round by bootstrapping as\nfollows.\nRecall that at the beginning of each round , the context basis index , and . Let denote the set of previous exploration\nrounds where the action was chosen.\nNote that, by definition of in (5 ###reference_###), we are\nguaranteed that\n.\nLet denote time index of the exploration sample \u201cmixed\u201d with\nthe action chosen in the exploitation round\n.\nWe \u201cmix\u201d the action with the exploration sample in round\ni.e. we want to balance the\nreuse choice over the set .\nWe define the exploration-mixed contexts and rewards as follows:\nwhere are sampled independently.\nThe following properties follow from the linear structure and the\ndistribution of weights that has mean zero and unit\nvariance, and the definition of the reduced SVD , .\n(Exploration-mixed contexts and rewards.)\nLet denote the sigma-algebra generated by and .\nFor any and such that ,\n, and\n.\nWe can view as a stochastic\nfeedback from a new linear bandit problem with the same parameters\n.\nSince the random contexts contains the (randomized)\ncontext basis, the (expected) design matrix includes information on all\n arms for any selected action .\n\u201cRecycling\u201d the reward sample\n allows us to get information\non the rewards of the unselected (and hence unobserved) contexts while\nexploiting low regret action.\nNext, we define the exploration-mixed estimator,\nWhile the exploration-mixed estimator gains information on the unknown\nparameter on multiple contexts, reusing samples from previous\nrounds causes dependency that complicates the analysis of the convergence\nrate of the estimator (see Section B.5 ###reference_### for details).\nTo address this, we apply the doubly-robust (DR) technique from the\nmissing data literature instead of directly using the exploration-mixed\nestimator, as we explain next."
},
{
"section_id": "4.3",
"parent_section_id": "4",
"section_name": "Doubly-Robust Estimation",
"text": "Doubly-robust estimation uses an estimate to impute the\nmissing value, and is robust to the estimation error for the missing value.\nIn each round , the unselected rewards\n are missing.\nOne possible approach is computing a ridge estimator and imputing \nfor to apply doubly-robust estimation, as proposed\nin Kim et al. (2021 ###reference_b11###, 2022 ###reference_b12###, 2024 ###reference_b15###).\nHowever, their approach assumes stochastic contexts that are IID over\nrounds with finite , and therefore, not applicable to PFILin where the\ncontexts are fixed and can be exponentially large.\nFurther, since the ridge estimator only gains information on the selected\nactions, their DR estimator does not converge while exploiting low regret\narms (See Appendix A.2 ###reference_### for detailed comparisons.)\nWe first reduce rewards into rewards\nusing (4 ###reference_###): , corresponding to the context\nbasis , , and\n.\nNote that for , and we learn using \ncontext basis vectors , , instead of contexts.\nWe view as missing data, and only\n is observed.\nTo induce a specified probability for missing data (needed to ensure\nrobustness of the DR estimation), we define a probability mass\nfunction defined as follows:\nand let denote the pseudo-action on \narms.\nTo couple the observed reward and the randomly\nselected reward , we resample both action\n and pseudo-action until the matching event\n happens.\nFor given , let denote the event\nof obtaining the matching within\n number of resampling so\nthat the event happens with probability at least\n.\nIf the event does not happen, we do not update the estimator (and use the estimator value obtained in the previous round).\nDefine new contexts for and .\nWith the coupled pseudo-action and its distribution , we construct the DR estimate for the reduced missing rewards as:\nFor , we impute a reward\n for the new \u201ccontext\u201d basis. For , the second term corrects\nthe imputed\nreward to ensure\nunbiasedness of the pseudo-rewards for all arms.\nTaking the expectation over on both sides\nof (10 ###reference_###) gives for all .\nThen, our proposed DR-mix estimator is\nThe estimator (11 ###reference_###) is recursively computable with a rank-1 update of the Gram matrix and summation of the weighted context vectors.\nLet denote the DR-mix estimator (11 ###reference_###) with the exploration-mixed estimator (8 ###reference_###) as the imputation\nestimator and pseudo-rewards (10 ###reference_###).\nLet ,\nThen, for all , , and ,\nwith probability at least .\nFor each , with probability at least ,\nIn early rounds, we have number of undetermined arms to estimate and we use the union bound (27 ###reference_###) to avoid the dependency on .\nAfter eliminating the suboptimal arms and when the number of undetermined arms is , we can use the tighter bound (28 ###reference_###).\nBecause the contexts are normalized by the Gram matrix , we obtain\n (derived in\nSection B.3 ###reference_###).\nWith only exploration rounds, we obtain\n convergence rate for the reward estimates of\nall arms.\nThis is possible because the DR-mixed estimator gains information on all\narms through the exploration-mixed estimate and is robust to the error of\nthe exploration-mixed estimator caused by the dependency of reusing the\nsamples (for details see Section B.6 ###reference_###).\nTherefore, our estimator enjoys the freedom to choose low-regret arms\nwhile simultaneously learning the rewards on all arms."
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "Algorithm for Pareto Front Identification with Regret Minimization",
"text": "In this section, we apply our novel estimation strategy to PFILin and\nestablish novel algorithm with nearly optimal sample complexity and\nregret."
},
{
"section_id": "5.1",
"parent_section_id": "5",
"section_name": "PFIwR Algorithm for Linear Contextual Bandits",
"text": "Our proposed algorithm, PFI with regret minimization (PFIwR), is displayed in Algorithm 1 ###reference_###.\nWhile any undetermined arms remains, the algorithm employs our novel\nestimation strategy to compute the reward estimates for and gap estimates\nrequired for PFI.\nThe algorithm uses two different confidence bound\nin (15 ###reference_###) based on the two convergence rates in\nTheorem 4.2 ###reference_thm2###.\nThe first bound uses the union bound (27 ###reference_###) because\nthe estimator must converge on all arms.\nHowever, when the number of undetermined arms are less than , we need\nat most reward estimate for and at most estimates for\nthe arm in that are \u201cnearest\u201d to and\ncritically affect the PFI.\nSpecifically, we only need the reward estimate for arm such\nthat for suboptimal arm and for arm either \nsuch that or\n such that\n for each optimal arm\n.\nPFIwR computes the set by eliminating the suboptimal arms that are dominated by other arms by the amount more than the confidence bound .\nThe set is the current estimate for -Pareto optimal arms that are not dominated by any other arms.\nThis arm elimination step is simplified compared to that in Auer et al. (2016 ###reference_b3###).\nThe algorithm in Auer et al. (2016 ###reference_b3###) leaves the identified Pareto optimal arm undetermined until all suboptimal arms dominated by the identified arm are eliminated, in order to ensure that a dominated arm is not spuriously declared as Pareto optimal.\nIn contrast, PFIwR does not keep the (identified) dominating arms in because the DR estimate converges on all arms in , including the identified arms, in contrast to the conventional estimator that does not converge on identified arms unless they are selected.\nConsequently, the cardinality of the set undetermined arms decreases faster in PFIwR (derived in Section B.8 ###reference_###), and this allows it to invoke the tighter confidence bound in (15 ###reference_###), that are only available when , earlier.\nIn addition to efficient estimation for PFI, the proposed PFIwR is able to choose low estimated regret actions after exploration rounds.\nThe novel estimator and its convergence (Theorem 4.2 ###reference_thm2###) ensure that sampling arms with low estimated regret does not harm the convergence rate of the reward estimates of other arms.\nThus, PFIwR is efficient in both for PFI and minimizing regret."
},
{
"section_id": "5.2",
"parent_section_id": "5",
"section_name": "Sample Complexity and Regret Analysis",
"text": "Fix and .\nLet , where\n is the ordered gap defined in (3 ###reference_###) in increasing order.\nThen the stopping time of PFIwR is bounded above by\nwhere denotes an upper bound on the initial exploration rounds.\nThe proof and explicit finite-sample bound for is in Appendix B.9 ###reference_###.\nWhen the contexts are Euclidean basis, our result directly applies to the PFI in the MAB setting studied by Auer et al. (2016 ###reference_b3###).\nThe sample complexity is optimal within a logarithm factor of the lower bound in Theorem 3.3 ###reference_thm3###.\nFix and . Let\n\ndenote the minimum Pareto regret over suboptimal arms.\nThen, with probability at least , the instantaneous Pareto regret,\n\nfor all and , where is the\nerror bound defined in (15 ###reference_###).\nThe cumulative Pareto regret of PFIwR,\nwith probability at least , where ignores terms.\nThe explicit expression for the finite-sample bound is in Appendix B.10 ###reference_0###.\nThe first term is the regret from the exploration rounds , whose cardinality for all .\nSince the algorithm need to increase until it identifies all arms on the Pareto front, the bound involves , which is the cost for identifying all arms in the Pareto front.\nWhen and the contexts are Euclidean basis, Theorem 5.1 ###reference_thm1### and Theorem 5.2 ###reference_thm2### recovers the sample complexity bound and regret bound for the best arm identification in MAB setting established by Degenne et al. (2019 ###reference_b7###) and Zhong et al. (2023 ###reference_b30###).\nFor , let denote the\nminimum Pareto regret over suboptimal arms.\nSuppose the set of context vectors span and .\nThen, for any and , there exists a\n-sub-Gaussian distribution for the i.i.d. noise sequence\n such that for any PFI algorithms that satisfies PFI\nsuccess condition (1) with failure probability ,\nTheorem 5.3 ###reference_thm3### shows that PFIwR establishes nearly optimal regret among algorithms that achieve PFI and it is the first result on the trade-off between PFI and Pareto regret minimization.\nFor and the contexts are Euclidean basis, Theorem 5.3 ###reference_thm3### recovers the lower bound for regret of BAI algorithms developed by Zhong et al. (2023 ###reference_b30###).\nNote that the lower bound applies only to the algorithms that guarantee PFI; it is possible for an algorithm that does not guarantee PFI to have a regret lower bound that is lower than the one in Theorem 5.3 ###reference_thm3###."
},
{
"section_id": "6",
"parent_section_id": null,
"section_name": "Experiments",
"text": ""
},
{
"section_id": "6.1",
"parent_section_id": "6",
"section_name": "Consistency of the Proposed Estimator on All Actions",
"text": "We conduct the following experiment to empirically verify that our proposed DR-mix estimator\nconverges on all arms while exploiting low-regret arms.\nWe consider a -arm bandit, i.e. the context vectors are the Euclidean basis in .\nThe parameter , and the random error is sampled from centered Gaussian distribution with variance .\nIn rounds , each of three arms is pulled with equal probability; in rounds , only the optimal arm (arm 1) is pulled.\nThe plots in Figures 1(a) ###reference_sf1### and 1(b) ###reference_sf2### illustrate the reward error of the proposed DR-mix estimator, the conventional ridge estimator, and an exploration-mixed estimator defined in (8 ###reference_###) as a function of the number of rounds .\nThe conventional ridge estimator converges only on the arm that is pulled (arm ), while the exploration-mixed estimator and the proposed DR-mix estimator converge for all arms, including arms and that are not observable in round .\nWhile the exploration-mixed estimator converges as fast as the DR-mix estimator on arm and , it converges slower on arm ; since the DR-mix estimator minimizes on all context basis while exploration-mixed estimator minimized only one context basis.\nFor further analysis on the estimators, see Section A.3 ###reference_###.\n###figure_1### ###figure_2### ###figure_3### ###figure_4### ###figure_5###"
},
{
"section_id": "6.2",
"parent_section_id": "6",
"section_name": "Comparison of MultiPFI and PFIwR",
"text": "Next, we compare PFIwR with MultiPFI (Auer et al., 2016 ###reference_b3###) on the SW-LLVM dataset (Zuluaga et al., 2016 ###reference_b31###) (see Section A.1 ###reference_### for details).\nFigure 2 ###reference_### reports the performance of PFIwR and MultiPFI (Auer et al., 2016 ###reference_b3###) on various .\nBoth algorithms use a fixed of .\nIn Figure 2(a) ###reference_sf1###, in most cases, PFIwR uses fewer samples than MultiPFI to satisfy the success condition (1 ###reference_###).\nEven though the number of samples used by PFIwR has a larger variance, in most cases, it uses fewer samples for PFI than MultiPFI.\nFigure 2(b) ###reference_sf2### is a box plot of the cumulative Pareto regret of PFIwR and MultiPFI at the termination of the algorithm \u2013 PFIwR has significantly lower regret than MultiPFI.\nFigure 2(c) ###reference_sf3### display the cumulative Pareto regret of PFIwR and MultiPFI as a function of rounds when .\nSince the number of rounds required for PFI and the horizon is random, to compute the average and standard deviation of the cumulative regret, we set the instantaneous regret to zero after the algorithm terminates in each experiment.\nThe regret of PFIwR increases slower than MultiPFI because it chooses actions that minimize regret in the exploitation phase while learning the rewards.\nThe experiment demonstrates that our proposed PFIwR achieves the dual goal of PFI and regret minimization."
}
],
"appendix": [
{
"section_id": "Appendix 1",
"parent_section_id": null,
"section_name": "Appendix A Supplementary Materials for Experiments",
"text": "The SW-LLVM dataset [Zuluaga et al., 2016 ###reference_b31###] consists of -dimensional reward vectors.\nWe normalized the reward vectors by subtracting the average and dividing by the standard deviation for each component.\nWe created a -arm PFI problem using the methodology in Auer et al. [2016 ###reference_b3###]: we clustered the reward vectors into \ngroups, with reward vectors in each group.\nWe computed the mean reward for the -th cluster by taking the average over the -th cluster, and when the algorithm selects an arm in any round, we randomly sample a reward vector from the -th cluster.\n###figure_6### The convergence properties of the DR\nestimator critically depend on\nthe imputation estimator used in the pseudo-reward (10 ###reference_###).\nIn Figure 3 ###reference_### we plot the error of the DR estimators with two different imputation estimators: ridge estimator and exploration-mixed estimator (8 ###reference_###) as a function of the number of rounds for a -armed bandit problem, or equivalently, a linear bandit problem with\nthe set of context vectors given by the Euclidean basis.\nUsing the DR method with the ridge estimator does not guarantee convergence on all arms \u2013 it only gets information from the exploited arm that has no information about the rewards of the other arms.\nIn contrast, the DR estimator with (8 ###reference_###) as an imputation\nestimator, converges on all arms.\nThis is possible because \u201cmixing\u201d contexts and rewards as in (7 ###reference_###) transforms the -armed bandit data into a linear bandit with stochastic contexts that span with high probability.\nThe plots in Figure 4 ###reference_### display the evolution of the density of estimates\nof the three methods for arm 1 and arm 2 for .\nFor arm 1 (Figure 4(a) ###reference_sf1###), the ridge estimator and the\nproposed DR-mix estimator converge faster, i.e., have zero-mean with a lower\nvariance, compared to the exploration-mixed estimator.\nSince the exploration-mixed estimator (8 ###reference_###) creates the context\nand the associated reward by assigning random weights to the current\nobservation and one from a past exploration round, the reward estimate for\nthe selected arm becomes unstable.\nIn contrast, the DR-mix estimator returns the focus to estimating the reward\nof the selected arm and converges faster than the exploration-mixed\nestimator.\nFor arm 2 (Figure 4(b) ###reference_sf2###), while the ridge estimator\ndiverges with increasing variance, the exploration-mixed estimator and the\nDR-mix estimator converge.\nSince there are no new samples from arm 2, the term increases\nthe mean and the variance of the density of the ridge estimator.\nIn contrast, the mean of the density of the exploration-mixed estimator\nand the proposed DR-mix estimator converges to 0, and the variance increases slower than that of the ridge estimator.\nThe fast convergence of the DR-mix estimator is a consequence of combining the\nexploration-mixed data and DR technique.\nThe exploration-mixed estimator (8 ###reference_###) leverages the linear structure of the mean reward vector to create a pair of \u201cmixed\u201d contexts and rewards by combining the context of the selected arm (arm 1) with randomly selected arms (arms 2 and 3) from the exploration phase.\nWhile the \u201cmixing\u201d allows the exploration-mixed estimator to learn all entries of the parameter vector, it minimizes instead of the basis vectors for target contexts of interest.\nAlthough the exploration-mixed estimator 
eventually converges to the true parameter, , the target of interest converges more slowly than .\nTherefore, we apply the DR method and use pseudo-rewards (10 ###reference_###) to move the target to the one of interest by modifying the context from to (equivalently, changing the Gram matrix to ).\nThus, our proposed estimator minimizes the target directly and estimates the mean rewards of the arms significantly faster.\n###figure_7### ###figure_8### ###figure_9### ###figure_10### ###figure_11### ###figure_12###"
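To make the experiment construction above concrete, the following is a minimal Python sketch of the clustering methodology, assuming the reward vectors are given as a NumPy array; the cluster count `k=20` and all helper names are illustrative assumptions, since the exact values are stripped from the extracted text.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_pfi_instance(raw, k=20, seed=0):
    """Turn a matrix of reward vectors (N, m) into a k-arm PFI problem."""
    z = (raw - raw.mean(axis=0)) / raw.std(axis=0)   # per-component z-scoring
    labels = KMeans(n_clusters=k, random_state=seed, n_init=10).fit_predict(z)
    clusters = [z[labels == i] for i in range(k)]
    means = np.stack([c.mean(axis=0) for c in clusters])  # mean reward per arm
    return clusters, means

def pull(clusters, arm, rng):
    """Stochastic feedback: a reward vector drawn uniformly from the arm's cluster."""
    c = clusters[arm]
    return c[rng.integers(len(c))]
```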
|
| 90 |
+
},
|
| 91 |
+
{
|
| 92 |
+
"section_id": "Appendix 2",
|
| 93 |
+
"parent_section_id": null,
|
| 94 |
+
"section_name": "Appendix B Missing Proofs",
|
| 95 |
+
"text": "Before we prove Theorem 3.3 ###reference_thm3###, we present a lower bound for the error of linearly parameterized rewards.\nSuppose \nspans . For and , let ,\nwhere is an identically and independently distributed\nnoise. Then, for any and . Then\nthere exist mean-zero -sub-Gaussian random noises such that\nthe error\nwith probability at least , for any estimator \nthat uses at most \nnumber of independent samples .\nStep 1. Constructing a noise distribution: For \nthe noise\nwhere\n. Let \ndenote the probability measure for the noise . Since\nfor all , we have .\nIt is easy to show that the difference\nfor all .\nStep 2. Reduction to parameter estimation: For \nlet\n\ndenote joint distribution for .\nLet denote a set of linear independent vectors,\nand let , .\nLet denote a sample for arm , independent\nof initial samples, with . Let\n.\nLet denote any estimator that estimates using the data points.\nClearly,\nNext,\nwhere the last inequality holds because \nare independent. For each ,\nwhere the last inequality holds because for ,\nDefine the function as follows:\nThen, the estimator attains the minimum in (19 ###reference_###) since\nThus, for and ,\nPlugging this upper bound in (18 ###reference_###), we get\nLet denote the minimum number of samples over arms in .\nLet\n denote\nthe joint probability of the noise associated with samples from\neach arm .\nFor and any estimator \nfor that uses samples from arm ,\nwhere the last equality follows from the fact that \nonly uses samples from arm .\nTherefore,\nStep 3. Lower bound for the error probability. Taking maximum\nover gives,\nFor two vectors and let denote that and\n only differ in one coordinate. By Assouad\u2019s method (Lemma C.1 ###reference_thm1###),\nthere exists at least one such that\nThus,\nwhere \nis the total variation distance between two probability measures \nand . By Bretagnolle-Huber inequality,\nwhere the last equality uses the chain rule of entropy. Because there\nexists only one such that ,\nwhere the third inequality holds by .\nThus,\nSetting \ngives\nStep 4. Computing the required number of samples: Recall\nthat is the minimum number of samples\nover any linearly independent contexts. Thus, if \nthen \nand the lower bound of the probability holds.\n\u220e\nNow we are ready to prove the lower sample complexity bound for PFILIn.\nStep 1. Characterize the failure event:\nIn order to meet the success condition (1 ###reference_###) with ,\nthe algorithm must produce an estimate \nfor such that\nNote that\nFor and , let\nThus, if , then\n, i.e. the algorithm cannot\nsatisfy the success condition (1 ###reference_###).\nStep 2. Compute the required number of samples:\nFor each arm , suppose the number of observations for the parameters satisfies the upper bound .\nThen by Lemma B.1 ###reference_thm1###, for any estimator\nholds with probability at least , and .\nSince, for each , the estimators are\nindependent of each other, the events are independent, and\ntherefore,\nTherefore, if the total number of observations\n,\nthere exists an arm such that the number of independent\nsamples is less than and,\nBecause , setting \ngives .\nThus, any algorithm requires at least\nnumber of rounds to meet the success condition (1 ###reference_###).\n\u220e\nBy definition (7 ###reference_###), for all ,\nTaking conditional expectations on both sides,\nwhich proves the first identity. 
For the expected Gram matrix, by\ndefinition (8 ###reference_###),\nTaking conditional expectations on both sides,\nBy the definition of , we obtain .\nBecause ,\nwhich completes the proof.\nWe provide a theoretical result on the design matrix constructed by the exploration strategy in Section 4.1 ###reference_###.\nFor any and , the normalized\nnorm .\nFor each , by the Sherman-Morrison formula, for any\nLetting completes the proof.\n\u220e\nLemma B.2 ###reference_thm2### implies,\nwhich matches the bound of the G-optimal design [Smith, 1918 ###reference_b21###].\nAlthough we reduce the contexts to context basis vectors, our estimation strategy enjoys the property of the optimal design for all context vectors.\nFor the Gram matrix of the DR-mix estimator , Lemma B.2 ###reference_thm2### implies,\nfor all .\nThe DR-mix estimator imputes the reward on the basis contexts and minimizes the -norm error, which efficiently estimates the rewards on all arms.\nIn contrast, the exploration-mixed estimator minimizes the -norm error.\nAlthough the expected Gram matrix of the exploration-mixed estimator has , it is discounted by a factor of because it employs only one of the context basis vectors, not all of them, in each round.\nTherefore, the DR-mix estimator converges faster than the exploration-mixed estimator on the rewards of all arms.\nWe provide the details on the coupling with resampling in the following lemma.\nThe key idea is to couple the event of interest with IID samples and to bound the probability with another IID sample.\nLet and denote\nthe distributions of the action on and the pseudo-action on ,\nrespectively. Let and \ndenote IID samples from the distributions and ,\nand for the number of resamplings , define the new\ncontexts\nand the stopping time,\nThen, for any function and a real number ,\nfor all .\nFor , let\nBy the definition of ,\nOn the event ,\nBecause the event \nis IID over given .\nThe second inequality in the lemma can be derived in a similar way.\n\u220e\nFix . Then,\nfor all that satisfy ,\nthe exploration-mixed estimator defined in (8 ###reference_###) satisfies\nwith probability at least ,\nLet us fix throughout the proof. For \nand , let .\nBy the definition of ,\nDefine the new contexts,\nSetting the number of resamplings ,\nthe probability of obtaining matching samples \nfor is at least for all ,\nwhere is the number of trials until the matching. With the\nmatching pseudo-action ,\nBy the coupling lemma (Lemma B.3 ###reference_thm3###) and , with probability\nat least ,\nThen the self-normalized norm,\nTo find the expectation of , recall that for ,\nthe pseudo-action is sampled from \ndefined in (9 ###reference_###). Let \ndenote the conditional expectation at round . Then .\nDefine\nNote that is a martingale difference because for ,\nwhere the second last equality holds by \nand the last equality holds by . For ,\nwhere the last inequality follows from the fact \nand the proof of Lemma B.2 ###reference_thm2###. For ,\nwhere the last inequality holds by unif.\nThus, the eigenvalues of the martingale difference matrix lie in .\nThen, by the Hoeffding bound for matrices (Lemma C.5 ###reference_thm5###),\nNote that\nwhere the last term appears in the martingale difference (21 ###reference_###).\nThus,\nSet . For such that ,\nwith probability at least ,\nwhich implies\nBecause the matrix is symmetric\nand positive definite, ,\nand thus,\nThen the self-normalized norm,\nIn the first term,\nNote that for every exploitation round , the\nsampled round for reuse . 
Thus, we can decompose,\nand thus,\nTo bound the first term, for , define\nLet denote a conditional expectation given the errors\n, actions ,\nrandom indexes \nand weights . For\n, because is -sub-Gaussian and\n\nfor all ,\nFor ,\nwhere the second inequality holds by \nand the third inequality holds by \nalmost surely. Thus, for all ,\nFollowing the proof of Theorem 1 in Abbasi-Yadkori et al. [2011 ###reference_b1###], with\nprobability at least ,\nTo bound the second term in (23 ###reference_###), define\nfor . Let denote a conditional\nexpectation given the errors , pseudo-actions\n, and random indexes .\nFor , for each , by Hoeffding\u2019s Lemma,\nsince , for any ,\nThus,\nBecause for , we obtain\nFor , let .\nBecause is an even function, \nfor any and\nBecause is -sub-Gaussian and \nfor ,\nBy the definition of the reusing round ,\nwhere the last inequality holds by the construction of the exploration set (5 ###reference_###).\nThus,\nBy the fact that\nFollowing the proof of Theorem 1 in Abbasi-Yadkori et al. [2011 ###reference_b1###], with\nprobability at least ,\nIn summary, with probability at least ,\nfor all .\n\u220e\nWe prove a lemma on the robustness of the general DR estimator.\nFor , let .\nFor any and , the DR estimator \nemploying as an\nimputation estimator satisfies\nwith probability at least for all and .\nFor each ,\nwith probability at least for all and .\nThe first and second terms correspond to the convergence rate\nobtained from the conventional self-normalized bound. The third term\nis the -error of the estimator .\nThe -error of is multiplied\nby the term, which comes from the fact that in the\npseudo-rewards,\nthe reward estimate \nis multiplied by the mean-zero random variable, .\nThe error of the imputation estimator is normalized by the Gram matrix\n which consists of all contexts. Thus, the \nerror of the conventional ridge estimator, which uses only the selected\ncontexts and rewards in every round, is , which\nyields a slow convergence rate.\nLet us fix throughout the proof. For and ,\nlet ,\nwhere\nLet .\nBy the definition of the estimator and the pseudo-reward ,\nfor\nOn the coupling event , we have ,\n and .\nThus, the first term,\nBy the definition of , on the coupling event\nThus,\nFor the second term, define .\nThen,\nBecause ,\nIn the last term of (24 ###reference_###),\nFor each , the matrix\nis a symmetric martingale difference matrix. Moreover,\nBecause \nfor ,\nalmost surely. By the Hoeffding bound for matrices (Lemma C.5 ###reference_thm5###),\nwith probability at least ,\nPlugging in (24 ###reference_###),\nNote that is not random and is -sub-Gaussian.\nThus, by Lemma 9 in Abbasi-Yadkori et al. [2011 ###reference_b1###], with probability at least ,\nfor . Because for all ,\nwhich implies,\nand proves the first bound.\nFor the second bound, by Lemma C.2 ###reference_thm2###, with probability\nat least ,\nBy Lemma B.2 ###reference_thm2###,\nwhich implies\nwhich proves the second bound.\n\u220e\nLet denote the DR-mix estimator (11 ###reference_###) with the exploration-mixed estimator (8 ###reference_###) as the imputation estimator and pseudo-rewards (10 ###reference_###).\nLet ,\nThen, for all , , and ,\nwith probability at least .\nFor each , with probability at least ,\nThe proofs of the two bounds are derived by simple computations using\nLemma B.5 ###reference_thm5###, and we only prove the second bound. By\nLemma B.5 ###reference_thm5###, with probability at least ,\nfor all , and . 
By Lemma B.4 ###reference_thm4###,\nwith probability at least ,\nwhere the last inequality holds for sufficiently large .\nThen,\nfor all . By construction,\nand\n\u220e\nFor such that , the arm is correctly identified as suboptimal or Pareto optimal, and .\nCase 1. and :\nSuppose and .\nIf , then there exists \nsuch that and\nThus, , and .\nCase 2. and :\nIf then . Consider the case of\n. Because ,\nwe obtain Then\nfor all ,\nBecause ,\nand . Thus, and .\nCase 3. and :\nSuppose . Then for all ,\nand . Suppose . For\na Pareto optimal arm ,\nFor a suboptimal arm ,\nthere exists such that \nis weakly dominated by , and\nConsider the case , then\nand and . For the case of\n,\nCase 4. and :\nIf then .\nThus, for all ,\nbecause , we obtain \nand\nand . Thus, .\n\u220e\nBefore we prove the sample complexity, we provide an important property of our proposed PFIwR algorithm.\nFor , the Pareto optimal arms\nare either in or in PFIwR, i.e., .\nWhen , the result holds by the definition of . For\n, suppose holds.\nWhile updating and , only arms in \nare eliminated. Thus, we prove the result by showing that .\nFor each round , suppose an arm .\nThen there exists such that\nwhich implies\nfor all and . Thus, \nis proved.\n\u220e\nNow we are ready to prove the sample complexity of PFIwR.\nFix and . Define ,\nwhere is the required accuracy defined in (3 ###reference_###)\nwith ascending order . Then the\nstopping time of PFIwR is bounded\nby:\nStep 1. Sample complexity for the accuracy of the estimator:\nFor , let denote the confidence bound defined\nin (15 ###reference_###). Because ,\nfor\nBy Theorem 4.2 ###reference_thm2###, with probability at least ,\nholds for all , and such\nthat . Thus, for any , if\nthen \nfor all when . By Lemma C.6 ###reference_thm6###,\nthe condition (30 ###reference_###) is implied by\nFor , we only need confidence intervals for at most\n arms that affect PFI. Let denote the arms\nthat are nearest to . Then for ,\nholds for . Similarly,\nimplies \nfor .\nStep 2. Finding the sample complexity: From (31 ###reference_###),\nfor\nimplies for .\nThen by Lemma B.7 ###reference_thm7###, . If\nthen for all and\n by Lemma B.7 ###reference_thm7###. Since Theorem 4.2 ###reference_thm2###\nrequires , the sample complexity is bounded as\nThe proof is completed by the fact that .\n\u220e\nBy Lemma B.8 ###reference_thm8###, the Pareto front .\nBy the definition of and in the algorithm,\nNote that for .\nThus, , and the Pareto regret\nFor , suppose . Then\nthere exists such that .\nBy Theorem 4.2 ###reference_thm2###, with probability\nat least and is dominated by ,\nwhich is dominated by the arms in by the definition of .\nThus,\nBy the definition of ,\nwhich proves the instantaneous regret bound.\nTo prove the cumulative regret bound, summing up the regret over ,\nwith probability at least\nwhere ignores terms and the last equality\nholds by the sample complexity bound (Theorem 5.1 ###reference_thm1###).\nBecause the instantaneous regret is bounded by ,\nthe regret is zero when ,\nwhich is implied by .\nIn addition, by Lemma B.7 ###reference_thm7###, the algorithm terminates\nwhen .\nThus,\nBy (31 ###reference_###), let\nThen implies \nand\nBecause ,\nPlugging in (32 ###reference_###) and ignoring \nterms,\n\u220e\nFor , let denote the\nminimum Pareto regret over the suboptimal arms.\nSuppose the set of context vectors spans and .\nThen, for any and , there exists a\n-sub-Gaussian distribution for the i.i.d. 
noise sequence\n such that for any PFI algorithm that satisfies the PFI\nsuccess condition (1) with failure probability ,\nTheorem 5.3 ###reference_thm3### shows that PFIwR establishes nearly optimal regret among algorithms that achieve PFI, and it is the first result on the trade-off between PFI and Pareto regret minimization.\nWhen and the contexts are the Euclidean basis, Theorem 5.3 ###reference_thm3### recovers the regret lower bound for BAI algorithms developed by Zhong et al. [2023 ###reference_b30###].\nNote that the lower bound applies only to the algorithms that guarantee PFI; it is possible for an algorithm that does not guarantee PFI to have a regret lower bound that is lower than the one in Theorem 5.3 ###reference_thm3###.\nBy Lemma B.1 ###reference_thm1###, for and any\nestimator with round \nfor ,\nand any estimator cannot find an arm with zero Pareto regret with probability at least .\nThus, we need at least rounds to ensure that the estimation error is less than the minimum Pareto regret .\nBy Theorem 5.6 in Kim et al. [2023b ###reference_b14###], for any horizon , the expected regret is in the single-objective linear bandit setting where the number of arms is finite and the contexts span .\nSince the same lower bound applies to the multi-dimensional rewards as well, setting \ngives the lower bound.\n\u220e"
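Since the mathematical symbols in the proofs above are lost to extraction, the following schematic Python sketch may help fix ideas about the doubly-robust pseudo-reward construction that these bounds analyze; the names (`theta_imp` for the imputation estimate, `pi_t` for the arm-selection probabilities) are stand-ins, not the paper's notation.

```python
import numpy as np

def dr_pseudo_rewards(X, a_t, y_t, theta_imp, pi_t):
    """X: (K, d) contexts of all arms; a_t: index of the chosen arm;
    y_t: observed reward; theta_imp: imputation estimate; pi_t: (K,) probs."""
    y_hat = X @ theta_imp                 # imputed rewards for every arm
    pseudo = y_hat.copy()
    # the inverse-probability-weighted residual corrects the chosen arm,
    # making each pseudo-reward conditionally unbiased for the mean reward
    pseudo[a_t] += (y_t - y_hat[a_t]) / pi_t[a_t]
    return pseudo

def dr_estimate(X, pseudo_history, lam=1.0):
    """Ridge regression of the pseudo-rewards of *all* arms on their contexts;
    the Gram matrix gains sum_i x_i x_i^T every round, which is what lets the
    estimator keep converging on unexploited arms as well."""
    d = X.shape[1]
    A, b = lam * np.eye(d), np.zeros(d)
    for pseudo in pseudo_history:
        A += X.T @ X
        b += X.T @ pseudo
    return np.linalg.solve(A, b)
```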
|
| 96 |
+
},
|
| 97 |
+
{
|
| 98 |
+
"section_id": "Appendix 3",
|
| 99 |
+
"parent_section_id": null,
|
| 100 |
+
"section_name": "Appendix C Technical Lemmas",
|
| 101 |
+
"text": "In this section, we provide technical lemmas cited from the literature and novel lemmas (Lemma C.5 ###reference_thm5### and Lemma C.6 ###reference_thm6###).\nFor , let \ndenote the probability measure on the data space whose\nparameter is . For any collection of estimators \nthere exists at least one such that\nwhere indicates that and only\ndiffer in one coordinate.\nLet be a\nfiltration and be a real-valued stochastic\nprocess such that is -measurable.\nLet be an -valued\nstochastic process where is -measurable.\nAssume that are -sub-Gaussian given\n. Then with probability at least ,\nWhile the constant in Kim et al. [2023a ###reference_b13###] is , we prove that the bound also holds with using the following lemma.\nSuppose a random variable satisfies\n, and let be an -sub-Gaussian random variable.\nIf almost surely, then is -sub-Gaussian.\nLet \nbe a -valued stochastic process adapted to\nthe filtration , i.e., \nis -measurable for . Suppose the\nmatrix is symmetric and the eigenvalues of the difference\n lies in \nfor some . Then for ,\nThe proof is an adapted version of Hoeffding\u2019s inequality for matrix\nstochastic process with the argument of Tropp [2012 ###reference_b24###]. Let\n. Then,\nfor ,\nWe bound the first term and the second term is bounded with similar argument.\nFor any ,\nBecause is a real symmetric matrix,\nwhere the last inequality holds since \nhas nonnegative eigenvalues. Taking expectation on both side gives,\nBy Lieb\u2019s theorem Tropp [2015 ###reference_b25###] the mapping \nis concave on positive symmetric matrices for any symmetric positive\ndefinite .\nBy Jensen\u2019s inequality,\nBy Hoeffding\u2019s lemma,\nfor all . Because the eigenvalue of lies\nin , we have\nRecursively,\nThus we have\nMinimizing over gives and\nwhich proves the lemma.\n\u220e\nFor and , \nimplies .\nIf the function has negative\nderivatives and is decreasing on . Then, there exists a unique\n such that .\nNow, it is sufficient to show that\nLet denote the Lambert function which satisfies\n for . By definition of ,\nBy definition of ,\nBy Theorem 1 in Chatzigeorgiou [2013 ###reference_b6###], \nfor . Setting proves (35 ###reference_###).\n\u220e"
|
| 102 |
+
},
|
| 103 |
+
{
|
| 104 |
+
"section_id": "Appendix 4",
|
| 105 |
+
"parent_section_id": null,
|
| 106 |
+
"section_name": "Appendix D Limitation",
|
| 107 |
+
"text": "Although our main contribution is novel and improves current linear bandit algorithms, we found the following limitations are in need to be handled in the future work:\nThe number of exploration can dominate the sample complexity in Theorem 5.1 ###reference_thm1### when the problem complexity gaps are large.\nThe term does not have problem complexity gap and is not considered as the main term in theoretical analysis; while our proposed estimator may not efficient for the large gaps in practice.\nOur PFI comparison experiments are limited to MAB setting, although we design our algorithm for general contexts with possibly exponentially large number of arms.\nWe choose the MAB setting for the sake of comparison with the previous algorithm [Auer et al., 2016 ###reference_b3###]; we believe the superior performance of our algorithm may be drastically visible on general contexts with large number of arms."
|
| 108 |
+
}
|
| 109 |
+
],
|
| 110 |
+
"tables": {
|
| 111 |
+
"1": {
|
| 112 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S1.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S1.T1.26.1.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"S1.T1.27.2\" style=\"font-size:90%;\">A comparison of the related works in terms of settings and theoretical guarantees.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S1.T1.24.24\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S1.T1.24.24.25.1\">\n<td class=\"ltx_td ltx_border_tt\" id=\"S1.T1.24.24.25.1.1\"></td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S1.T1.24.24.25.1.2\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S1.T1.24.24.25.1.2.1\" style=\"font-size:70%;\">Bandit Setting</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S1.T1.24.24.25.1.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S1.T1.24.24.25.1.3.1\" style=\"font-size:70%;\">Multi-objective?</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S1.T1.24.24.25.1.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S1.T1.24.24.25.1.4.1\" style=\"font-size:70%;\">Regret Bound?</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S1.T1.24.24.25.1.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S1.T1.24.24.25.1.5.1\" style=\"font-size:70%;\">PAC bound?</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.3.3.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S1.T1.3.3.3.4\"><cite class=\"ltx_cite ltx_citemacro_citet\">Valko et\u00a0al. <span class=\"ltx_text ltx_font_smallcaps\" id=\"S1.T1.3.3.3.4.1.1.1.1\" style=\"font-size:70%;\">(</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.00096v2#bib.bib26\" title=\"\">2013</a><span class=\"ltx_text ltx_font_smallcaps\" id=\"S1.T1.3.3.3.4.2.2.2.1\" style=\"font-size:70%;\">)</span></cite></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.3.3.3.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S1.T1.3.3.3.5.1\" style=\"font-size:70%;\">Kernel</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.2.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.3.3.3.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.6.6.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"S1.T1.6.6.6.4\"><cite class=\"ltx_cite ltx_citemacro_citet\">Soare et\u00a0al. <span class=\"ltx_text ltx_font_smallcaps\" id=\"S1.T1.6.6.6.4.1.1.1.1\" style=\"font-size:70%;\">(</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.00096v2#bib.bib22\" title=\"\">2014</a><span class=\"ltx_text ltx_font_smallcaps\" id=\"S1.T1.6.6.6.4.2.2.2.1\" style=\"font-size:70%;\">)</span></cite></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.6.6.6.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S1.T1.6.6.6.5.1\" style=\"font-size:70%;\">Linear</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.4.4.4.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.5.5.5.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.6.6.6.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.9.9.9\">\n<td class=\"ltx_td ltx_align_left\" id=\"S1.T1.9.9.9.4\"><cite class=\"ltx_cite ltx_citemacro_citet\">Zuluaga et\u00a0al. 
<span class=\"ltx_text ltx_font_smallcaps\" id=\"S1.T1.9.9.9.4.1.1.1.1\" style=\"font-size:70%;\">(</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.00096v2#bib.bib31\" title=\"\">2016</a><span class=\"ltx_text ltx_font_smallcaps\" id=\"S1.T1.9.9.9.4.2.2.2.1\" style=\"font-size:70%;\">)</span></cite></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.9.9.9.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S1.T1.9.9.9.5.1\" style=\"font-size:70%;\">Gaussian Process</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.7.7.7.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.8.8.8.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.9.9.9.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.12.12.12\">\n<td class=\"ltx_td ltx_align_left\" id=\"S1.T1.12.12.12.4\"><cite class=\"ltx_cite ltx_citemacro_citet\">Auer et\u00a0al. <span class=\"ltx_text ltx_font_smallcaps\" id=\"S1.T1.12.12.12.4.1.1.1.1\" style=\"font-size:70%;\">(</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.00096v2#bib.bib3\" title=\"\">2016</a><span class=\"ltx_text ltx_font_smallcaps\" id=\"S1.T1.12.12.12.4.2.2.2.1\" style=\"font-size:70%;\">)</span></cite></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.12.12.12.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S1.T1.12.12.12.5.1\" style=\"font-size:70%;\">Multi-armed</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.10.10.10.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.11.11.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.12.12.12.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.15.15.15\">\n<td class=\"ltx_td ltx_align_left\" id=\"S1.T1.15.15.15.4\"><cite class=\"ltx_cite ltx_citemacro_citet\">Lu et\u00a0al. <span class=\"ltx_text ltx_font_smallcaps\" id=\"S1.T1.15.15.15.4.1.1.1.1\" style=\"font-size:70%;\">(</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.00096v2#bib.bib18\" title=\"\">2019</a><span class=\"ltx_text ltx_font_smallcaps\" id=\"S1.T1.15.15.15.4.2.2.2.1\" style=\"font-size:70%;\">)</span></cite></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.15.15.15.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S1.T1.15.15.15.5.1\" style=\"font-size:70%;\">Generalized linear</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.13.13.13.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.14.14.14.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.15.15.15.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.18.18.18\">\n<td class=\"ltx_td ltx_align_left\" id=\"S1.T1.18.18.18.4\"><cite class=\"ltx_cite ltx_citemacro_citet\">Degenne et\u00a0al. <span class=\"ltx_text ltx_font_smallcaps\" id=\"S1.T1.18.18.18.4.1.1.1.1\" style=\"font-size:70%;\">(</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.00096v2#bib.bib7\" title=\"\">2019</a><span class=\"ltx_text ltx_font_smallcaps\" id=\"S1.T1.18.18.18.4.2.2.2.1\" style=\"font-size:70%;\">)</span></cite></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.18.18.18.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S1.T1.18.18.18.5.1\" style=\"font-size:70%;\">Multi-armed</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.16.16.16.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.17.17.17.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.18.18.18.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.21.21.21\">\n<td class=\"ltx_td ltx_align_left\" id=\"S1.T1.21.21.21.4\"><cite class=\"ltx_cite ltx_citemacro_citet\">Zhong et\u00a0al. 
<span class=\"ltx_text ltx_font_smallcaps\" id=\"S1.T1.21.21.21.4.1.1.1.1\" style=\"font-size:70%;\">(</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.00096v2#bib.bib30\" title=\"\">2023</a><span class=\"ltx_text ltx_font_smallcaps\" id=\"S1.T1.21.21.21.4.2.2.2.1\" style=\"font-size:70%;\">)</span></cite></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.21.21.21.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S1.T1.21.21.21.5.1\" style=\"font-size:70%;\">Multi-armed</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.19.19.19.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.20.20.20.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.21.21.21.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.24.24.24\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S1.T1.24.24.24.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S1.T1.24.24.24.4.1\" style=\"font-size:70%;\">Our work</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S1.T1.24.24.24.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S1.T1.24.24.24.5.1\" style=\"font-size:70%;\">Linear</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S1.T1.22.22.22.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S1.T1.23.23.23.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S1.T1.24.24.24.3\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 113 |
+
"capture": "Table 1: A comparison of the related works in terms of settings and theoretical guarantees."
|
| 114 |
+
}
|
| 115 |
+
},
|
| 116 |
+
"image_paths": {
|
| 117 |
+
"1(a)": {
|
| 118 |
+
"figure_path": "2306.00096v2_figure_1(a).png",
|
| 119 |
+
"caption": "(a) Error |\u03b8^1\u2212\u03b8\u22c6(1)|subscript^\ud835\udf031superscriptsubscript\ud835\udf03\u22c61|\\widehat{\\theta}_{1}-\\theta_{\\star}^{(1)}|| over^ start_ARG italic_\u03b8 end_ARG start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT - italic_\u03b8 start_POSTSUBSCRIPT \u22c6 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT ( 1 ) end_POSTSUPERSCRIPT | for the exploited arm (arm 1)\nFigure 1: Estimation errors of the proposed DR-mix estimator (11) with the conventional ridge estimator, and the exploration-mixed estimator (8) for a 3-armed bandit problem.\nThe line and shade represent the average and standard deviation over 1000 independent experiments.\nThe estimators use samples from all arms for n\u2208[50]\ud835\udc5bdelimited-[]50n\\in[50]italic_n \u2208 [ 50 ], and after that, only observe rewards from arm 1111.",
|
| 120 |
+
"url": "http://arxiv.org/html/2306.00096v2/extracted/2306.00096v2/figures/estimators_opt.png"
|
| 121 |
+
},
|
| 122 |
+
"1(b)": {
|
| 123 |
+
"figure_path": "2306.00096v2_figure_1(b).png",
|
| 124 |
+
"caption": "(b) Error \u2016\u03b8^{2,3}\u2212\u03b8\u22c6({2,3})\u20162subscriptnormsubscript^\ud835\udf0323superscriptsubscript\ud835\udf03\u22c6232\\|\\widehat{\\theta}_{\\{2,3\\}}-\\theta_{\\star}^{(\\{2,3\\})}\\|_{2}\u2225 over^ start_ARG italic_\u03b8 end_ARG start_POSTSUBSCRIPT { 2 , 3 } end_POSTSUBSCRIPT - italic_\u03b8 start_POSTSUBSCRIPT \u22c6 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT ( { 2 , 3 } ) end_POSTSUPERSCRIPT \u2225 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT for arm 2 and 3\nFigure 1: Estimation errors of the proposed DR-mix estimator (11) with the conventional ridge estimator, and the exploration-mixed estimator (8) for a 3-armed bandit problem.\nThe line and shade represent the average and standard deviation over 1000 independent experiments.\nThe estimators use samples from all arms for n\u2208[50]\ud835\udc5bdelimited-[]50n\\in[50]italic_n \u2208 [ 50 ], and after that, only observe rewards from arm 1111.",
|
| 125 |
+
"url": "http://arxiv.org/html/2306.00096v2/extracted/2306.00096v2/figures/estimators_suboptimal.png"
|
| 126 |
+
},
|
| 127 |
+
"2(a)": {
|
| 128 |
+
"figure_path": "2306.00096v2_figure_2(a).png",
|
| 129 |
+
"caption": "(a) Number of samples for PFI\nFigure 2: Comparison of PFIwR and MultiPFI on the SW-LLVM dataset.\nBoth algorithms correctly identify the \u03f5italic-\u03f5\\epsilonitalic_\u03f5-near Pareto optimal arms on all 500 independent experiments.",
|
| 130 |
+
"url": "http://arxiv.org/html/2306.00096v2/extracted/2306.00096v2/figures/llvm_box.png"
|
| 131 |
+
},
|
| 132 |
+
"2(b)": {
|
| 133 |
+
"figure_path": "2306.00096v2_figure_2(b).png",
|
| 134 |
+
"caption": "(b) Total sum of regret at termination\nFigure 2: Comparison of PFIwR and MultiPFI on the SW-LLVM dataset.\nBoth algorithms correctly identify the \u03f5italic-\u03f5\\epsilonitalic_\u03f5-near Pareto optimal arms on all 500 independent experiments.",
|
| 135 |
+
"url": "http://arxiv.org/html/2306.00096v2/extracted/2306.00096v2/figures/llvm_regrets.png"
|
| 136 |
+
},
|
| 137 |
+
"2(c)": {
|
| 138 |
+
"figure_path": "2306.00096v2_figure_2(c).png",
|
| 139 |
+
"caption": "(c) Cumulative regret (\u03f5=0.06italic-\u03f50.06\\epsilon=0.06italic_\u03f5 = 0.06)\nFigure 2: Comparison of PFIwR and MultiPFI on the SW-LLVM dataset.\nBoth algorithms correctly identify the \u03f5italic-\u03f5\\epsilonitalic_\u03f5-near Pareto optimal arms on all 500 independent experiments.",
|
| 140 |
+
"url": "http://arxiv.org/html/2306.00096v2/extracted/2306.00096v2/figures/llvm_regret_0.06.png"
|
| 141 |
+
},
|
| 142 |
+
"3": {
|
| 143 |
+
"figure_path": "2306.00096v2_figure_3.png",
|
| 144 |
+
"caption": "Figure 3: The \u21132subscript\u21132\\ell_{2}roman_\u2113 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT-error of the reward on the unexploited arms (arms 2 and 3) of the proposed estimator and the DR estimator whose imputation estimator is the conventional ridge estimator in the 3-armed bandit problem (for detailed setting, see Section 6.1.)\nThe estimators use samples from all three arms when t\u226450\ud835\udc6150t\\leq 50italic_t \u2264 50 and only arm 1 when t>50\ud835\udc6150t>50italic_t > 50. When constructing a DR estimator, choosing the imputation estimator that learns rewards on all arms is crucial for convergence on all arms.",
|
| 145 |
+
"url": "http://arxiv.org/html/2306.00096v2/extracted/2306.00096v2/figures/DRs_suboptimal.png"
|
| 146 |
+
},
|
| 147 |
+
"4(a)": {
|
| 148 |
+
"figure_path": "2306.00096v2_figure_4(a).png",
|
| 149 |
+
"caption": "(a) On the exploited arm (arm 1)\nFigure 4: Changes in densities of\nn\u2062(\u03b8^\u2212\u03b8\u22c6)\ud835\udc5b^\ud835\udf03subscript\ud835\udf03\u22c6\\sqrt{n}(\\widehat{\\theta}-\\theta_{\\star})square-root start_ARG italic_n end_ARG ( over^ start_ARG italic_\u03b8 end_ARG - italic_\u03b8 start_POSTSUBSCRIPT \u22c6 end_POSTSUBSCRIPT ) over the number of samples\nn=50,500,2000\ud835\udc5b505002000n=50,500,2000italic_n = 50 , 500 , 2000 on the exploited arm (arm 1) and the\nunexploited arm (arm 2).\nThe vertical line represents the average computed from 1000 independent experiments.\nThe proposed DR-mix estimator converges faster with lower variance than the ridge\nand exploration-mixed estimator on all arms.",
|
| 150 |
+
"url": "http://arxiv.org/html/2306.00096v2/extracted/2306.00096v2/figures/density_DR_Arm1.png"
|
| 151 |
+
},
|
| 152 |
+
"4(b)": {
|
| 153 |
+
"figure_path": "2306.00096v2_figure_4(b).png",
|
| 154 |
+
"caption": "(b) On the unexploited arm (arm 2)\nFigure 4: Changes in densities of\nn\u2062(\u03b8^\u2212\u03b8\u22c6)\ud835\udc5b^\ud835\udf03subscript\ud835\udf03\u22c6\\sqrt{n}(\\widehat{\\theta}-\\theta_{\\star})square-root start_ARG italic_n end_ARG ( over^ start_ARG italic_\u03b8 end_ARG - italic_\u03b8 start_POSTSUBSCRIPT \u22c6 end_POSTSUBSCRIPT ) over the number of samples\nn=50,500,2000\ud835\udc5b505002000n=50,500,2000italic_n = 50 , 500 , 2000 on the exploited arm (arm 1) and the\nunexploited arm (arm 2).\nThe vertical line represents the average computed from 1000 independent experiments.\nThe proposed DR-mix estimator converges faster with lower variance than the ridge\nand exploration-mixed estimator on all arms.",
|
| 155 |
+
"url": "http://arxiv.org/html/2306.00096v2/extracted/2306.00096v2/figures/density_DR_Arm2.png"
|
| 156 |
+
}
|
| 157 |
+
},
|
| 158 |
+
"validation": true,
|
| 159 |
+
"references": [
|
| 160 |
+
{
|
| 161 |
+
"1": {
|
| 162 |
+
"title": "Improved algorithms for linear stochastic bandits.",
|
| 163 |
+
"author": "Yasin Abbasi-Yadkori, D\u00e1vid P\u00e1l, and Csaba Szepesv\u00e1ri.",
|
| 164 |
+
"venue": "In Advances in Neural Information Processing Systems, pages 2312\u20132320, 2011.",
|
| 165 |
+
"url": null
|
| 166 |
+
}
|
| 167 |
+
},
|
| 168 |
+
{
|
| 169 |
+
"2": {
|
| 170 |
+
"title": "Linear contextual bandits with knapsacks.",
|
| 171 |
+
"author": "Shipra Agrawal and Nikhil Devanur.",
|
| 172 |
+
"venue": "Advances in Neural Information Processing Systems, 29, 2016.",
|
| 173 |
+
"url": null
|
| 174 |
+
}
|
| 175 |
+
},
|
| 176 |
+
{
|
| 177 |
+
"3": {
|
| 178 |
+
"title": "Pareto front identification from stochastic bandit feedback.",
|
| 179 |
+
"author": "P. Auer, C-K. Chiang, R. Ortner, and M. Drugan.",
|
| 180 |
+
"venue": "In Artificial intelligence and statistics, pages 939\u2013947. PMLR, 2016.",
|
| 181 |
+
"url": null
|
| 182 |
+
}
|
| 183 |
+
},
|
| 184 |
+
{
|
| 185 |
+
"4": {
|
| 186 |
+
"title": "Using confidence bounds for exploitation-exploration trade-offs.",
|
| 187 |
+
"author": "Peter Auer.",
|
| 188 |
+
"venue": "Journal of Machine Learning Research, 3(Nov):397\u2013422, 2002a.",
|
| 189 |
+
"url": null
|
| 190 |
+
}
|
| 191 |
+
},
|
| 192 |
+
{
|
| 193 |
+
"5": {
|
| 194 |
+
"title": "Doubly robust estimation in missing data and causal inference models.",
|
| 195 |
+
"author": "Heejung Bang and James M Robins.",
|
| 196 |
+
"venue": "Biometrics, 61(4):962\u2013973, 2005.",
|
| 197 |
+
"url": null
|
| 198 |
+
}
|
| 199 |
+
},
|
| 200 |
+
{
|
| 201 |
+
"6": {
|
| 202 |
+
"title": "Bounds on the lambert function and their application to the outage analysis of user cooperation.",
|
| 203 |
+
"author": "Ioannis Chatzigeorgiou.",
|
| 204 |
+
"venue": "IEEE Communications Letters, 17(8):1505\u20131508, 2013.",
|
| 205 |
+
"url": null
|
| 206 |
+
}
|
| 207 |
+
},
|
| 208 |
+
{
|
| 209 |
+
"7": {
|
| 210 |
+
"title": "Bridging the gap between regret minimization and best arm identification, with application to a/b tests.",
|
| 211 |
+
"author": "R\u00e9my Degenne, Thomas Nedelec, Cl\u00e9ment Calauz\u00e8nes, and Vianney Perchet.",
|
| 212 |
+
"venue": "In The 22nd International Conference on Artificial Intelligence and Statistics, pages 1988\u20131996. PMLR, 2019.",
|
| 213 |
+
"url": null
|
| 214 |
+
}
|
| 215 |
+
},
|
| 216 |
+
{
|
| 217 |
+
"8": {
|
| 218 |
+
"title": "Pac bounds for multi-armed bandit and markov decision processes.",
|
| 219 |
+
"author": "Eyal Even-Dar, Shie Mannor, and Yishay Mansour.",
|
| 220 |
+
"venue": "In COLT, volume 2, pages 255\u2013270. Springer, 2002.",
|
| 221 |
+
"url": null
|
| 222 |
+
}
|
| 223 |
+
},
|
| 224 |
+
{
|
| 225 |
+
"9": {
|
| 226 |
+
"title": "Response surface approximation of pareto optimal front in multi-objective optimization.",
|
| 227 |
+
"author": "Tushar Goel, Rajkumar Vaidyanathan, Raphael T Haftka, Wei Shyy, Nestor V Queipo, and Kevin Tucker.",
|
| 228 |
+
"venue": "Computer methods in applied mechanics and engineering, 196(4-6):879\u2013893, 2007.",
|
| 229 |
+
"url": null
|
| 230 |
+
}
|
| 231 |
+
},
|
| 232 |
+
{
|
| 233 |
+
"10": {
|
| 234 |
+
"title": "Uniform-pac bounds for reinforcement learning with linear function approximation.",
|
| 235 |
+
"author": "Jiafan He, Dongruo Zhou, and Quanquan Gu.",
|
| 236 |
+
"venue": "Advances in Neural Information Processing Systems, 34:14188\u201314199, 2021.",
|
| 237 |
+
"url": null
|
| 238 |
+
}
|
| 239 |
+
},
|
| 240 |
+
{
|
| 241 |
+
"11": {
|
| 242 |
+
"title": "Doubly robust thompson sampling with linear payoffs.",
|
| 243 |
+
"author": "Wonyoung Kim, Gi-Soo Kim, and Myunghee Cho Paik.",
|
| 244 |
+
"venue": "In Advances in Neural Information Processing Systems, 2021.",
|
| 245 |
+
"url": null
|
| 246 |
+
}
|
| 247 |
+
},
|
| 248 |
+
{
|
| 249 |
+
"12": {
|
| 250 |
+
"title": "Double doubly robust thompson sampling for generalized linear contextual bandits.",
|
| 251 |
+
"author": "Wonyoung Kim, Kyungbok Lee, and Myunghee Cho Paik.",
|
| 252 |
+
"venue": "arXiv preprint arXiv:2209.06983, 2022.",
|
| 253 |
+
"url": null
|
| 254 |
+
}
|
| 255 |
+
},
|
| 256 |
+
{
|
| 257 |
+
"13": {
|
| 258 |
+
"title": "Improved algorithms for multi-period multi-class packing problems with bandit feedback.",
|
| 259 |
+
"author": "Wonyoung Kim, Garud Iyengar, and Assaf Zeevi.",
|
| 260 |
+
"venue": "In Proceedings of the 40th International Conference on Machine Learning, volume 202, pages 16458\u201316501. PMLR, 23\u201329 Jul 2023a.",
|
| 261 |
+
"url": null
|
| 262 |
+
}
|
| 263 |
+
},
|
| 264 |
+
{
|
| 265 |
+
"14": {
|
| 266 |
+
"title": "Squeeze all: Novel estimator and self-normalized bound for linear contextual bandits.",
|
| 267 |
+
"author": "Wonyoung Kim, Myunghee Cho Paik, and Min-Hwan Oh.",
|
| 268 |
+
"venue": "In Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, volume 206, pages 3098\u20133124. PMLR, 25\u201327 Apr 2023b.",
|
| 269 |
+
"url": null
|
| 270 |
+
}
|
| 271 |
+
},
|
| 272 |
+
{
|
| 273 |
+
"15": {
|
| 274 |
+
"title": "A doubly robust approach to sparse reinforcement learning.",
|
| 275 |
+
"author": "Wonyoung Kim, Garud Iyengar, and Assaf Zeevi.",
|
| 276 |
+
"venue": "In International Conference on Artificial Intelligence and Statistics, pages 2305\u20132313. PMLR, 2024.",
|
| 277 |
+
"url": null
|
| 278 |
+
}
|
| 279 |
+
},
|
| 280 |
+
{
|
| 281 |
+
"16": {
|
| 282 |
+
"title": "A pareto-efficient algorithm for multiple objective optimization in e-commerce recommendation.",
|
| 283 |
+
"author": "Xiao Lin, Hongjie Chen, Changhua Pei, Fei Sun, Xuanji Xiao, Hanxiao Sun, Yongfeng Zhang, Wenwu Ou, and Peng Jiang.",
|
| 284 |
+
"venue": "In Proceedings of the 13th ACM Conference on recommender systems, pages 20\u201328, 2019.",
|
| 285 |
+
"url": null
|
| 286 |
+
}
|
| 287 |
+
},
|
| 288 |
+
{
|
| 289 |
+
"17": {
|
| 290 |
+
"title": "Efficient reinforcement learning with multiple reward functions for randomized controlled trial analysis.",
|
| 291 |
+
"author": "Daniel J Lizotte, Michael H Bowling, and Susan A Murphy.",
|
| 292 |
+
"venue": "In ICML, volume 10, pages 695\u2013702, 2010.",
|
| 293 |
+
"url": null
|
| 294 |
+
}
|
| 295 |
+
},
|
| 296 |
+
{
|
| 297 |
+
"18": {
|
| 298 |
+
"title": "Multi-objective generalized linear bandits.",
|
| 299 |
+
"author": "Shiyin Lu, Guanghui Wang, Yao Hu, and Lijun Zhang.",
|
| 300 |
+
"venue": "In Proceedings of the 28th International Joint Conference on Artificial Intelligence, pages 3080\u20133086, 2019.",
|
| 301 |
+
"url": null
|
| 302 |
+
}
|
| 303 |
+
},
|
| 304 |
+
{
|
| 305 |
+
"19": {
|
| 306 |
+
"title": "Interactive thompson sampling for multi-objective multi-armed bandits.",
|
| 307 |
+
"author": "Diederik M Roijers, Luisa M Zintgraf, and Ann Now\u00e9.",
|
| 308 |
+
"venue": "In Algorithmic Decision Theory: 5th International Conference, ADT 2017, Luxembourg, Luxembourg, October 25\u201327, 2017, Proceedings 5, pages 18\u201334. Springer, 2017.",
|
| 309 |
+
"url": null
|
| 310 |
+
}
|
| 311 |
+
},
|
| 312 |
+
{
|
| 313 |
+
"20": {
|
| 314 |
+
"title": "Interactive multi-objective reinforcement learning in multi-armed bandits for any utility function.",
|
| 315 |
+
"author": "Diederik M Roijers, Luisa M Zintgraf, Pieter Libin, and Ann Now\u00e9.",
|
| 316 |
+
"venue": "In ALA workshop at FAIM, volume 8, 2018.",
|
| 317 |
+
"url": null
|
| 318 |
+
}
|
| 319 |
+
},
|
| 320 |
+
{
|
| 321 |
+
"21": {
|
| 322 |
+
"title": "On the standard deviations of adjusted and interpolated values of an observed polynomial function and its constants and the guidance they give towards a proper choice of the distribution of observations.",
|
| 323 |
+
"author": "Kirstine Smith.",
|
| 324 |
+
"venue": "Biometrika, 12(1/2):1\u201385, 1918.",
|
| 325 |
+
"url": null
|
| 326 |
+
}
|
| 327 |
+
},
|
| 328 |
+
{
|
| 329 |
+
"22": {
|
| 330 |
+
"title": "Best-arm identification in linear bandits.",
|
| 331 |
+
"author": "Marta Soare, Alessandro Lazaric, and R\u00e9mi Munos.",
|
| 332 |
+
"venue": "Advances in Neural Information Processing Systems, 27, 2014.",
|
| 333 |
+
"url": null
|
| 334 |
+
}
|
| 335 |
+
},
|
| 336 |
+
{
|
| 337 |
+
"23": {
|
| 338 |
+
"title": "Best arm identification in linear bandits with linear dimension dependency.",
|
| 339 |
+
"author": "Chao Tao, Sa\u00fal Blanco, and Yuan Zhou.",
|
| 340 |
+
"venue": "In International Conference on Machine Learning, pages 4877\u20134886. PMLR, 2018.",
|
| 341 |
+
"url": null
|
| 342 |
+
}
|
| 343 |
+
},
|
| 344 |
+
{
|
| 345 |
+
"24": {
|
| 346 |
+
"title": "User-friendly tail bounds for sums of random matrices.",
|
| 347 |
+
"author": "Joel A Tropp.",
|
| 348 |
+
"venue": "Foundations of computational mathematics, 12(4):389\u2013434, 2012.",
|
| 349 |
+
"url": null
|
| 350 |
+
}
|
| 351 |
+
},
|
| 352 |
+
{
|
| 353 |
+
"25": {
|
| 354 |
+
"title": "An introduction to matrix concentration inequalities.",
|
| 355 |
+
"author": "Joel A Tropp.",
|
| 356 |
+
"venue": "Foundations and Trends\u00ae in Machine Learning, 8(1-2):1\u2013230, 2015.",
|
| 357 |
+
"url": null
|
| 358 |
+
}
|
| 359 |
+
},
|
| 360 |
+
{
|
| 361 |
+
"26": {
|
| 362 |
+
"title": "Finite-time analysis of kernelised contextual bandits.",
|
| 363 |
+
"author": "Michal Valko, Nathan Korda, R\u00e9mi Munos, Ilias Flaounas, and Nello Cristianini.",
|
| 364 |
+
"venue": "In Uncertainty in Artificial Intelligence, 2013.",
|
| 365 |
+
"url": null
|
| 366 |
+
}
|
| 367 |
+
},
|
| 368 |
+
{
|
| 369 |
+
"27": {
|
| 370 |
+
"title": "Multi-objective reinforcement learning using sets of pareto dominating policies.",
|
| 371 |
+
"author": "Kristof Van Moffaert and Ann Now\u00e9.",
|
| 372 |
+
"venue": "The Journal of Machine Learning Research, 15(1):3483\u20133512, 2014.",
|
| 373 |
+
"url": null
|
| 374 |
+
}
|
| 375 |
+
},
|
| 376 |
+
{
|
| 377 |
+
"28": {
|
| 378 |
+
"title": "Learning multi-objective rewards and user utility function in contextual bandits for personalized ranking.",
|
| 379 |
+
"author": "Nirandika Wanigasekara, Yuxuan Liang, Siong Thye Goh, Ye Liu, Joseph Jay Williams, and David S Rosenblum.",
|
| 380 |
+
"venue": "In IJCAI, pages 3835\u20133841, 2019.",
|
| 381 |
+
"url": null
|
| 382 |
+
}
|
| 383 |
+
},
|
| 384 |
+
{
|
| 385 |
+
"29": {
|
| 386 |
+
"title": "Assouad, fano, and le cam.",
|
| 387 |
+
"author": "Bin Yu.",
|
| 388 |
+
"venue": "In Festschrift for Lucien Le Cam: research papers in probability and statistics, pages 423\u2013435. Springer, 1997.",
|
| 389 |
+
"url": null
|
| 390 |
+
}
|
| 391 |
+
},
|
| 392 |
+
{
|
| 393 |
+
"30": {
|
| 394 |
+
"title": "Achieving the pareto frontier of regret minimization and best arm identification in multi-armed bandits.",
|
| 395 |
+
"author": "Zixin Zhong, Wang Chi Cheung, and Vincent Tan.",
|
| 396 |
+
"venue": "Transactions on Machine Learning Research, 2023.",
|
| 397 |
+
"url": null
|
| 398 |
+
}
|
| 399 |
+
},
|
| 400 |
+
{
|
| 401 |
+
"31": {
|
| 402 |
+
"title": "-pal: an active learning approach to the multi-objective optimization problem.",
|
| 403 |
+
"author": "Marcela Zuluaga, Andreas Krause, and Markus P\u00fcschel.",
|
| 404 |
+
"venue": "The Journal of Machine Learning Research, 17(1):3619\u20133650, 2016.",
|
| 405 |
+
"url": null
|
| 406 |
+
}
|
| 407 |
+
}
|
| 408 |
+
],
|
| 409 |
+
"url": "http://arxiv.org/html/2306.00096v2"
|
| 410 |
+
}
|
20240522/2306.00420v2.json
ADDED
|
@@ -0,0 +1,77 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
{
|
| 2 |
+
"title": "Logics with probabilistic team semantics and the Boolean negation",
|
| 3 |
+
"abstract": "We study the expressivity and the complexity of various logics in probabilistic team semantics with the Boolean negation.\nIn particular, we study the extension of probabilistic independence logic with the Boolean negation, and a recently introduced logic FOPT.\nWe give a comprehensive picture of the relative expressivity of these logics together with the most studied logics in probabilistic team semantics setting, as well as relating their expressivity to a numerical variant of second-order logic.\nIn addition, we introduce novel entropy atoms and show that the extension of first-order logic by entropy atoms subsumes probabilistic independence logic.\nFinally, we obtain some results on the complexity of model checking, validity, and satisfiability of our logics.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Probabilistic team semantics is a novel framework for the logical analysis of probabilistic and quantitative dependencies.\nTeam semantics, as a semantic framework for logics involving qualitative dependencies and independencies, was introduced by Hodges [18 ###reference_b18###] and popularised by V\u00e4\u00e4n\u00e4nen [26 ###reference_b26###] via his dependence logic.\nTeam semantics defines truth in reference to collections of assignments, called teams, and is particularly suitable for the formal analysis of properties, such as the functional dependence between variables, that arise only in the presence of multiple assignments.\nThe idea of generalising team semantics to the probabilistic setting can be traced back to the works of Galliani [6 ###reference_b6###] and Hyttinen et al. [19 ###reference_b19###], however the beginning of a more systematic study of the topic dates back to works of Durand et al. [4 ###reference_b4###].\nIn probabilistic team semantics the basic semantic units are probability distributions (i.e., probabilistic teams).\nThis shift from set-based to distribution-based semantics allows probabilistic notions of dependency, such as conditional probabilistic independence, to be embedded in the framework111In [22 ###reference_b22###] Li recently introduced first-order theory of random variables with probabilistic independence (FOTPI) whose variables are interpreted by discrete distributions over the unit interval. The paper shows that true arithmetic is interpretable in FOTPI whereas probabilistic independence logic is by our results far less complex..\nThe expressivity and complexity of non-probabilistic team-based logics can be related to fragments of (existential) second-order logic and have been studied extensively (see, e.g., [7 ###reference_b7###, 5 ###reference_b5###, 10 ###reference_b10###]).\nTeam-based logics, by definition, are usually not closed under Boolean negation, so adding it can greatly increase the complexity and expressivity of these logics [20 ###reference_b20###, 16 ###reference_b16###].\nSome expressivity and complexity results have also been obtained for logics in probabilistic team semantics (see below).\nHowever, richer semantic and computational frameworks are sometimes needed to characterise these logics.\nMetafinite Model Theory, introduced by Gr\u00e4del and Gurevich [9 ###reference_b9###], generalises the approach of Finite Model Theory by shifting to two-sorted structures, which extend finite structures by another (often infinite) numerical domain and weight functions bridging the two sorts.\nA particularly important subclass of metafinite structures are the so-called -structures, which extend finite structures with the real arithmetic on the second sort.\nBlum-Shub-Smale machines (BSS machines for short) [1 ###reference_b1###] are essentially register machines with registers that can store arbitrary real numbers and compute rational functions over reals in a single time step.\nInterestingly, Boolean languages which are decidable by a non-deterministic polynomial-time BSS machine coincide with those languages which are PTIME-reducible to the true existential sentences of real arithmetic (i.e., the complexity class ) [2 ###reference_b2###, 25 ###reference_b25###].\nRecent works have established fascinating connections between second-order logics over -structures, complexity classes using the BSS-model of computation, and logics using probabilistic team semantics.\nIn [14 ###reference_b14###], Hannula et al. 
establish that the expressivity and complexity of probabilistic independence logic coincide with a particular fragment of existential second-order logic over -structures and NP on BSS-machines.\nIn [17 ###reference_b17###], Hannula and Virtema focus on probabilistic inclusion logic, which is shown to be tractable (when restricted to Boolean inputs), and relate it to linear programming.\nIn this paper, we focus on the expressivity and model checking complexity of probabilistic team-based logics that have access to Boolean negation.\nWe also study the connections between probabilistic independence logic and a logic called , which is defined via a computationally simpler probabilistic semantics [12 ###reference_b12###].\nThe logic is the probabilistic variant of a certain team-based logic that can define exactly those dependencies that are first-order definable [21 ###reference_b21###].\nWe also introduce novel entropy atoms and relate the extension of first-order logic with these atoms to probabilistic independence logic.\nThis version of the paper includes the proofs omitted from the conference version [13 ###reference_b13###].\nSee Figure 1 ###reference_### for our expressivity results and Table 1 ###reference_### for our complexity results."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Preliminaries",
|
| 15 |
+
"text": "We assume the reader is familiar with the basics in complexity theory [24 ###reference_b24###].\nIn this work, we will encounter complexity classes , , , and the class together with the notion of completeness under the usual polynomial time many to one reductions.\nA bit more formally for the latter complexity class which is more uncommon than the others, consists of all languages that can be decided by alternating Turing machines within an exponential runtime of and polynomially many alternations between universal and existential states.\nThere exist problems in propositional team logic with generalized dependence atoms that are complete for this class [15 ###reference_b15###].\nIt is also known that truth evaluation of alternating dependency quantified boolean formulae (ADQBF) is complete for this class [15 ###reference_b15###]."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "Probabilistic team semantics",
|
| 21 |
+
"text": "We denote first-order variables by and tuples of first-order variables by . For the length of the tuple , we write . The set of variables that appear in the tuple is denoted by . A vocabulary is a finite set of relation, function, and constant symbols, denoted by , , and , respectively. Each relation symbol and function symbol has a prescribed arity, denoted by and .\nLet be a finite relational vocabulary such that . For a finite -structure and a finite set of variables , an assignment of for is a function . A team of over is a finite set of assignments .\nA probabilistic team is a function , where is the set of non-negative real numbers. The value is called the weight of assignment . Since zero-weights are allowed, we may, when useful, assume that is maximal, i.e., it contains all assignments . The support of is defined as . A team is nonempty if .\nThese teams are called probabilistic because we usually consider teams that are probability distributions, i.e., functions for which .222In some sources, the term probabilistic team only refers to teams that are distributions, and the functions that are not distributions are called weighted teams.\nIn this setting, the weight of an assignment can be thought of as the probability that the values of the variables are as in the assignment.\nIf is a probability distribution, we also write .\nFor a set of variables , the restriction of the assignment to is denoted by .\nThe restriction of a team to is , and the restriction of a probabilistic team to is where\nIf is a first-order formula, then is the restriction of the team to those assignments in that satisfy the formula . The weight is defined analogously as the sum of the weights of the assignments in that satisfy , e.g.,\nFor a variable and , we denote by , the modified assignment such that if , and otherwise. For a set , the modified team is defined as the set .\nLet be any probabilistic team. Then the probabilistic team is a function defined as\nIf is a fresh variable, the summation can be dropped and the right-hand side of the equation becomes . For singletons , we write and instead of and .\nLet then be a distribution. Denote by the set of all probability distributions , and let be a function .\nThen the probabilistic team is a function defined as\nfor all and . If is a fresh variable, the summation can again be dropped and the right-hand side of the equation becomes .\nLet and be probabilistic teams with common variable and value domains, and let . The -scaled union of and , denoted by , is the probabilistic team defined as"
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "3",
|
| 25 |
+
"parent_section_id": null,
|
| 26 |
+
"section_name": "Probabilistic independence logic with Boolean negation",
|
| 27 |
+
"text": "In this section, we define probabilistic independence logic with Boolean negation, denoted by . The logic extends first-order logic with probabilistic independence atom which states that the tuples and are independent given the tuple .\nThe syntax for the logic over a vocabulary is as follows:\nwhere is a first-order variable, , , and are tuples of first-order variables, and .\nLet be a first-order formula. We denote by the formula which is obtained from by pushing the negation in front of atomic formulas. We also use the shorthand notations and .\nLet be a probability distribution. The semantics for the logic is defined as follows:\niff for all .\niff for all .\niff for all .\niff .\niff and .\niff and for some such that .\niff for some .\niff .\nThe satisfaction relation above refers to the Tarski semantics of first-order logic. For a sentence , we write if , where is the distribution that maps the empty assignment to 1.\nThe logic also has the following useful property called locality. Denote by the set of the free variables of a formula .\nLet be any -formula. Then for any set of variables , any -structure , and any probabilistic team such that ,\nIn addition to probabilistic conditional independence atoms, we may also consider other atoms. If and are tuples of variables, then is a dependence atom. If and are also of the same length, is a marginal identity atom. The semantics for these atoms are defined as follows:\niff for all , implies ,\niff for all .\nWe write and for first-order logic with dependence atoms or marginal identity atoms, respectively. Analogously, for , we write for the logic with access to the atoms (or the Boolean negation) from .\nFor two logics and over probabilistic team semantics, we write if for any formula , there is a formula such that for all and . The equality and strict inequality are defined from the above relation in the usual way. The next two propositions follow from the fact that dependence atoms and marginal identity atoms can be expressed with probabilistic independence atoms.\n.\n.\nOn the other hand, omitting the Boolean negation strictly decreases the expressivity:\n.\nBy Theorems 4.1 and 6.5 of [14 ###reference_b14###], over a fixed universe size, any open formula of defines a closed subset of for a suitable depending on the size of the universe and the number of free variables.\nNow, clearly, this cannot be true for all of the formulas of as it contains the Boolean negation, e.g., the formula .\n\u220e"
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "4",
|
| 31 |
+
"parent_section_id": null,
|
| 32 |
+
"section_name": "Metafinite logics",
|
| 33 |
+
"text": "In this section, we consider logics over -structures. These structures extend finite relational structures with real numbers as a second domain and add functions that map tuples from the finite domain to .\nLet and be finite vocabularies such that is relational and is functional.\nAn -structure of vocabulary is a tuple where the reduct of to is a finite relational structure, and is a set that contains functions for each function symbol . Additionally,\nfor any , if each is a function from to , is called an -structure,\nif each is a distribution, is called a -structure.\nNext, we will define certain metafinite logics which are variants of functional second-order logic with numerical terms. The numerical -terms are defined as follows:\nwhere and and are first-order variables such that . The interpretation of a numerical term in the structure under an assignment is denoted by . We define\nThe interpretations of the rest of the numerical terms are defined in the obvious way.\nSuppose that , and let . The syntax for the logic is defined as follows:\nwhere and are numerical -terms constructed using operations from , , , , and are first-order variables, is a function variable, and is a -formula of .\nThe semantics of is defined via -structures and assignments analogous to first-order logic, except for the interpretations of function variables , which range over functions . For any , we define as the variant of , where the quantification of function variables ranges over . If the quantification of function variables is restricted to distributions, the resulting logic is denoted by . The existential fragment, in which universal quantification over function variables is not allowed, is denoted by .\nFor metafinite logics and , we define expressivity comparison relations , , and in the usual way, see e.g. [14 ###reference_b14###].\n.\nFirst, note that since the constants 0 and 1 are definable in both logics, we may use them when needed. To show that , it suffices to show that any numerical identity can also be expressed in . Suppose that . Since the domain of is finite, we may assume that it is linearly ordered: a linear order can be defined with an existentially quantified binary function variable such that the formulas and correspond to and , respectively.\nThen, without loss of generality, we may assume that we have an -ary successor function defined by the lexicographic order induced by the linear order. Thus, we can existentially quantify a function variable such that\nThen is as wanted.\nTo show that , we show that any numerical identity can be expressed in . We can existentially quantify a function variable such that\nThen is as wanted. Note that since no universal quantification over function variables was used, the proposition also holds for existential fragments, i.e., .\n\u220e\n.\nSince is definable in and the formula states that is a probability distribution, we have that .\nNext, we show that\nTo show that , let . Note that any function can be expressed as , where and are functions such that and , where is the characteristic function of . Since numerical terms can clearly be expressed in , it suffices to modify as follows: for all quantified function variables , replace each appearance of term with and instead of , quantify two function variables and .\nTo show that , let . Note that any positive real number can be written as a ratio , where . 
Since numerical terms of the form can clearly be expressed in , it suffices to modify as follows: for all quantified function variables , replace each appearance of term with and instead of , quantify a function variable such that for all .\nLastly, to show that , it suffices to see that for any , we can compress each function term into a fraction of size , where is the size of the finite domain and the maximal arity of any function variable appearing in .\nWe omit the proof, since it is essentially the same as the one for Lemma 6.4 in [14 ###reference_b14###].\n\u220e"
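As a concrete reading of the numerical terms, the following small Python sketch (our own encoding, not the paper's) represents a function of an R-structure as a dictionary over tuples from the finite domain and evaluates SUM- and PROD-style terms:

```python
from itertools import product

# Toy R-structure: finite first sort A and a function f: A^2 -> R,
# encoded as a Python dictionary (illustration only).
A = [0, 1, 2]
f = {(a, b): 0.1 * (a + b) for a, b in product(A, repeat=2)}

def eval_sum(func, prefix, arity, domain):
    """Term SUM_y func(prefix, y): sum over all tuples y in A^arity."""
    return sum(func[prefix + ys] for ys in product(domain, repeat=arity))

def eval_prod(func, prefix, arity, domain):
    """Term PROD_y func(prefix, y): product over all tuples y in A^arity."""
    result = 1.0
    for ys in product(domain, repeat=arity):
        result *= func[prefix + ys]
    return result

print(eval_sum(f, (1,), 1, A))   # f(1,0) + f(1,1) + f(1,2) = 0.6
print(eval_prod(f, (1,), 1, A))  # f(1,0) * f(1,1) * f(1,2)
```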
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "5",
|
| 37 |
+
"parent_section_id": null,
|
| 38 |
+
"section_name": "Equi-expressivity of and",
|
| 39 |
+
"text": "In this section, we show that the expressivity of probabilistic independence logic with the Boolean negation coincides with full second-order logic over -structures.\n.\nWe first show that . Note that by Proposition 6 ###reference_position6###, we have , so it suffices to show that . We may assume that every independence atom is in the form or where and are pairwise disjoint tuples. [4 ###reference_b4###, Lemma 25]\nLet formula be such that its free-variables are from . Then there is a formula with exactly one free function variable such that for all structures and all probabilistic teams , if and only if , where is a probability distribution such that for all .\nDefine the formula as follows:\nIf , where , then .\nIf , where , then .\nIf , where are disjoint, then\nIf , where are disjoint, then\nIf , then , where is obtained from by pushing the negation in front of atomic formulas.\nIf , then .\nIf , then\nIf , then .\nIf , then\nSince the the above is essentially same as the translation in [4 ###reference_b4###, Theorem 14], but extended with the Boolean negation (for which the claim follows directly from the semantical clauses), it is easy to show that satisfies the claim.\n\u220e\nWe now show that . By Propositions 3 ###reference_position3### and 6 ###reference_position6###, and , so it suffices to show that .\nNote that even though we consider , where only distributions can be quantified, it may still happen that the interpretation of a numerical term does not belong to the unit interval.\nThis may happen if we have a term of the form where contains a variable that does not appear in .\nFortunately, for any formula containing such terms, there is an equivalent formula without them [17 ###reference_b17###, Lemma 19]. Thus, it suffices to consider formulas without such terms.\nTo prove that , we construct a useful normal form for -sentences. The following lemma is based on similar lemmas from [4 ###reference_b4###, Lemma, 16] and [17 ###reference_b17###, Lemma, 20].\nEvery formula can be written in the form , where , is quantifier-free and such that all the numerical identity atoms are in the form or for distinct ,, such that at most one of them is not quantified.\nWe begin by defining a formula for each numerical term using fresh function symbols .\nIf where is a function symbol, then is defined as .\nIf , then is defined as .\nIf , then is defined as .\nThen the formula is defined as follows:\nIf , then where consists of the function symbols for each subterm of or . The negated case is analogous; just add negation in front of .\nIf is an atom or a negated atom (of the first sort), then .\nIf , where and for , then ).\nIf , where , then\nLet , where . Let list all of the free function variables in . Then define\nwhere each , for , is such that , introduces a new function symbol for each multiplication in ,\nand the formula is obtained from by replacing all second sort identities of the form with\nand with .\nIf , where and , then .\nIt is straightforward to check that is as wanted. In (5), instead of quantifying for each a distribution , we quantify a single distribution such that , where is the domain of our structure.\n\u220e\nWe use the abbreviations and for the -formulas and , respectively. Let and be -formulas with free variables form . Then for any structure and probabilistic team over ,\niff for some distribution ,\niff for all distributions .\nLet for some sequence of functions such that . 
Now\nSince the variables are fresh, the right-hand side becomes for all , i.e., for some distribution . It is now straightforward to check that the two claims hold.\n\u220e\nLet be a formula in the form , where , is quantifier-free and such that all the numerical identity atoms are in the form or for distinct ,, from .\nThen there is a formula such that for all structures and probabilistic teams ,\nDefine\nwhere\n and , whenever and and , whenever .\nBy Lemma 2 ###reference_ma2###, it suffices to show that for all distributions , subsets , and probabilistic teams , we have\nThe claim is shown by induction on the structure of the formula .\nIf is an atom or a negated atom (of the first sort), then clearly we may let .\nLet . Then define\nAssume first that for a given . Then . Define functions such that iff , and iff . Let . It suffices to show that . Now, by the definition of , we have and . Since , we obtain and . Hence, .\nAssume then that , and define as the extension of such that and . Then for all . Hence, for all .\nThe negated case is analogous; just add in front of the existential quantification.\nLet . Then define\nThe negated case is analogous; just add in front of the existential quantification. The proof is similar to the previous one, so it is omitted.\nIf , then . The claim directly follows from semantics of conjunction.\nLet . Then define\nAssume first that for all . Then there are such that , , and for all . Define such that when , where is the distribution defined as\nLet and . Now , and we have , , and . By locality, this implies that .\nAssume then that .\nLet be such that for .\nLet then and for .\nNow, we also have for .\nSince , we have either or for all .\nWe get that for some .\nThus, .\nHence, for all .\nWe obtain for all by an analogous argument.\nAs a result, we get that for all .\n\u220e"
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "6",
|
| 43 |
+
"parent_section_id": null,
|
| 44 |
+
"section_name": "Probabilistic logics and entropy atoms",
|
| 45 |
+
"text": "In this section we consider extending probabilistic team semantics with novel entropy atoms.\nFor a discrete random variable , with possible outcomes occuring with probabilities , the Shannon entropy of is given as:\nThe base of the logarithm does not play a role in this definition (usually it is assumed to be ).\nFor a set of discrete random variables, the entropy is defined in terms of the vector-valued random variable it defines. Given three sets of discrete random variables , it is known that is conditionally independent of given (written ) if and only if the conditional mutual information vanishes.\nSimilarly, functional dependence of from holds if and only if the conditional entropy of given vanishes. Writing for the union of two sets and , we note that and can respectively be expressed as and .\nThus many familiar dependency concepts over random variables translate into linear equations over Shannon entropies.\nIn what follows, we shortly consider similar information-theoretic approach to dependence and independence in probabilistic team semantics.\nLet be a probabilistic team over a finite structure with universe . Let be a -ary sequence of variables from the domain of .\nLet be the vector-valued random variable, where is the probability that takes value in the probabilistic team .\nThe Shannon entropy of in is defined as follows:\nUsing this definition we now define the concept of an entropy atom.\nLet and be two sequences of variables from the domain of . These sequences may be of different lengths. The entropy atom is an expression of the form , and it is given the following semantics:\nWe then define entropy logic as the logic obtained by extending first-order logic with entropy atoms. The entropy atom is relatively powerful compared to our earlier atoms, since, as we will show next, it encapsulates many familiar dependency notions such as dependence and conditional independence.\nThe following equivalences hold over probabilistic teams of finite structures with two distinct constants and :\n.\n, where is defined as\nwhere and .\nThe translation of the dependence atom simply expresses that the conditional entropy of given vanishes, which expresses that depends functionally on .\nConsider the translation of the independence atom. Observe that essentially restricts attention to that subteam in which the universally quantified variable is either or .\nThere, the weight distribution of is obtained by vertically stacking together halved weight distributions of and . Similarly, corresponds to halving and vertical stacking of and a dummy constant distribution . Consider now the effect of halving the weights of the entropy function given in (1 ###reference_###):\nLet us turn back to our subteam , obtained by quantification and split disjunction from some initial team . This subteam has to satisfy . What this amounts to, is the following\nThus, the translation captures the entropy condition of the independence atom.\n\u220e\nSince conditional independence can be expressed with marginal independence, i.e., [11 ###reference_b11###, Theorem 11], we obtain the following corollary:\n.\nIt is easy to see at this point that entropy logic and its extension with negation are subsumed by second-order logic over the reals with exponentiation.\nand .\nThe translation is similar to the one in Theorem 5.2 ###reference_theorem2###, so it suffices to notice that the entropy atom can be expressed as\nSince can be expressed in and , we are done.\n\u220e"
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "7",
|
| 49 |
+
"parent_section_id": null,
|
| 50 |
+
"section_name": "Logic for first-order probabilistic dependecies",
|
| 51 |
+
"text": "Here, we define the logic , which was introduced in [12 ###reference_b12###].333In [12 ###reference_b12###], two sublogics of , called and , were also considered. Note that the results of this section also hold for these sublogics. Let be a quantifier- and disjunction-free first-order formula, i.e., for a first-order atomic formula of the vocabulary . Let be a first-order variable. The syntax for the logic over a vocabulary is defined as follows:\nLet be any probabilistic team, not necessarily a probability distribution. The semantics for the logic is defined as follows:\niff for all .\niff .\niff or is empty.\niff and .\niff or .\niff for some .\niff for all .\nNext, we present some useful properties of .\nLet be any -formula. Then for any set of variables , any -structure , and any probabilistic team such that ,\nOver singleton traces the expressivity of coincides with that of . For , let denote the -formula obtained by replacing the symbols , and by , and , respectively, and expressions of the form by the formula .\nLet be a -formula, a structure, and a probabilistic team of with support . Then iff .\nThe proof proceeds by induction on the structure of formulas. The cases for literals and Boolean connectives are trivial. The cases for quantifiers are immediate once one notices that interpreting the quantifiers and maintain singleton supportness. We show the case for . Let if , and otherwise. Then\nThe first equivalence follows from the semantics of and the second follows from the induction hypotheses after observing that the support of is . The last equivalence follows via a simple arithmetic observation.\n\u220e\nThe following theorem follows directly from Propositions 7 ###reference_position7### and 8 ###reference_position8###.\nFor sentences we have that .\nFor a logic , we write for the following variant of the model checking problem: given a sentence and a structure , decide whether .\nThe above result immediately yields the following corollary.\nis -complete.\nThis follows directly from the linear translation of -sentences into equivalent -sentences of Theorem 7.1 ###reference_theorem1### and the well-known fact that the model-checking problem of is -complete.\n\u220e\nand is non-comparable to for open formulas.\nWe begin the proof of the first claim by showing that . Note that we may use numerical terms of the form in , because they can be expressed by the formula .\nLet formula be such that its free-variables are from . Then there is a formula with exactly one free function variable such that for all structures and all probabilistic teams , if and only if , where is a function such that for all .\nWe may assume that\nthe formula is in the form , where and is quantifier-free. We begin by defining inductively a formula for the subformula . Note that in the following refers to the characteristic function of , i.e., such that if and only if . For simplicity, we only write despite the fact that may contain free function variables in addition to the variables .\nIf , then .\nIf , then\nIf , then , where is obtained from by pushing the negation in front of atomic formulas.\nIf , where , then , where , respectively.\nFor each , we define a formula , which says that is the characteristic function of . Let and define as follows:\nIf , where , then .\nIf , then .\nIf , then\nLet be a list such that each , , is a subformula of some formula that appears in a function symbol of the formula . Now, we can define\nThis shows that . 
The first claim now follows, since .\nWe will prove the second claim now. In the proof of Proposition 4 ###reference_position4###, it was noted that the formula cannot be expressed in . This is not the case for as it contains the Boolean negation, and thus the formula can be expressed in by the results of Section 4.2 in [12 ###reference_b12###].\nOn the other hand, we have (Prop. 2 ###reference_position2###). Since on the level of sentences, is equivalent to existential second-order logic [26 ###reference_b26###], there is a sentence such that for all , iff a undirected graph is 2-colourable. Since over singleton traces the expressivity of coincides with , the sentence cannot be expressed in , as 2-colourability cannot be expressed in .\n\u220e"
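The semantic clauses above lost their displayed conditions. As one concrete illustration, the sketch below evaluates a representative FOPT-style probability-comparison atom against a probabilistic team, reading the atom as comparing the total weights of the assignments satisfying two quantifier-free formulas; this reading is our assumption based on the logic's description:

```python
def prob(team, alpha):
    """Total weight of the assignments in the team satisfying alpha."""
    return sum(w for s, w in team.items() if alpha(dict(s)))

def leq_atom(team, alpha, beta):
    """Probability comparison atom: weight of alpha <= weight of beta."""
    return prob(team, alpha) <= prob(team, beta)

X = {frozenset({("x", 0)}): 0.7, frozenset({("x", 1)}): 0.3}
print(leq_atom(X, lambda s: s["x"] == 1, lambda s: s["x"] == 0))  # True
```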
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "8",
|
| 55 |
+
"parent_section_id": null,
|
| 56 |
+
"section_name": "Complexity of satisfiability, validity and model checking",
|
| 57 |
+
"text": "We now define satisfiability and validity in the context of probabilistic team semantics. Let . The formula is satisfiable in a structure if for some probabilistic team , and is valid in a structure if for all probabilistic teams over . The formula is satisfiable if there is a structure such that is satisfiable in , and is valid if is valid in for all structures .\nFor a logic , the satisfiability problem and the validity problem are defined as follows: given a formula , decide whether is satisfiable (or valid, respectively). For the model checking problem , we consider the following variant: given a sentence and a structure , decide whether .\nis in and -hard.\nFirst note that is clearly a conservative extension of , as it is easy to check that probabilistic semantics and Tarski semantics agree on first-order formulas over singleton traces. The hardness now follows from this and the fact that model checking problem for is -complete.\nFor upper bound, notice first that any -formula can be reduced to an almost conjunctive formula of [17 ###reference_b17###, Lem, 17].\nThen the desired bounds follow due to the reduction from Proposition 3 in [17 ###reference_b17###].\nThe mentioned reduction yields families of systems of linear inequalities from a structure and assignment such that a system has a solution if and only if .\nFor a -formula , this transition requires exponential time and this yields membership in .\n\u220e\nWe now prove the following lemma, which will be used to prove the upper-bounds in the next three theorems.\nLet be a finite structure and . Then there is a first-order sentence over vocabulary such that is satisfiable in if and only if .\nLet be such that its free variables are from . By locality (Prop. 1 ###reference_position1###), we may restrict to the teams over the variables . Define a fresh first-order variable for each . The idea is that the variable represents the weight of the assignment for which . For notational simplicity, assume that . Thus, we can write for the tuple that contains the variables for all the possible assignments over . Define then\nwhere is constructed as follows:\nIf or where , then .\nIf for some such that , then\nIf or , then or , respectively.\nIf , then\nIf , then\nIf , then\n\u220e\nis in and -hard.\nFor the lower bound, we use the fact that dependence atoms can be expressed by using probabilistic independence atoms.\nLet be a structure and be a probabilistic team over .\nThen [11 ###reference_b11###, Prop. 3].\nThe -hardness follows since the model checking problem for is -complete [8 ###reference_b8###, Thm. 
5.2].\nThe upper-bound follows from the fact that when restricted to , the exponential translation in Lemma 3 ###reference_ma3### is an existential sentence, and the existential theory of the reals is in .\n\u220e\nis in and -hard.\nWe first prove the lower bound through a reduction from the satisfiability problem for propositional team-based logic, that is, .\nGiven a -formula , the problems asks whether there is a team such that ?\nLet be a -formula over propositional variables .\nFor , let denote a variable corresponding to the proposition .\nLet be the structure over empty vocabulary.\nThen, is satisfiable iff is satisfiable iff , where is a -formula obtained from by simply replacing each proposition by the variable .\nThis gives -hardness of (and consequently, of ) since the satisfiability for is -complete [16 ###reference_b16###].\nThe upper-bound follows from the exponential translation from to real arithmetic in Lemma 3 ###reference_ma3### and the fact that the full theory of the reals is in .\n\u220e\nis - and is -complete.\nIt suffices to prove the claim for , since the claim for follows from the fact that has the Boolean negation.\nFor the lower bound, note that is a conservative extension of , and hence the claim follows from the r.e.-hardness of over the finite.\nFor the upper-bound, we use Lemma 3 ###reference_ma3###. Let be a satisfiable formula of . We can verify that by going through all finite structures until we come across a structure in which is satisfiable. Hence, it suffices to show that for any finite structure , it is decidable to check whether is satisfiable in . For this, construct a sentence as in Lemma 3 ###reference_ma3###. Then is such that is satisfiable in iff . Since real arithmetic is decidable, we now have that is -complete.\n\u220e\nand are - and and are -complete.\nThe lower bound follows from the fact that and are both conservative extensions of . We obtain the upper bound from the previous theorem, since includes both and .\n\u220e"
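To illustrate the reduction style behind Lemma 3 and the NP upper bound above, here is a toy feasibility check in Python: one real variable per assignment weight, with linear constraints expressing that the weights form a distribution satisfying a marginal condition. The concrete constraints are an invented example, and SciPy is assumed to be available:

```python
import numpy as np
from scipy.optimize import linprog

# Variables: w0 = weight of the assignment x=0, w1 = weight of x=1.
# Constraints: w0 + w1 = 1 (distribution), w0 >= 2 * w1 (toy marginal condition).
A_eq = np.array([[1.0, 1.0]])
b_eq = np.array([1.0])
A_ub = np.array([[-1.0, 2.0]])   # 2*w1 - w0 <= 0
b_ub = np.array([0.0])

res = linprog(c=np.zeros(2), A_ub=A_ub, b_ub=b_ub,
              A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 2)
print(res.status == 0)  # True: a satisfying probabilistic team exists
print(res.x)            # one witness team (non-negative weights summing to 1)
```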
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "9",
|
| 61 |
+
"parent_section_id": null,
|
| 62 |
+
"section_name": "Conclusion",
|
| 63 |
+
"text": "We have studied the expressivity and complexity of various logics in probabilistic team semantics with the Boolean negation.\nOur results give a quite comprehensive picture of the relative expressivity of these logics and their relations to numerical variants of (existential) second-order logic.\nAn interesting question for further study is to determine the exact complexities of the decision problems studied in Section 8.\nFurthermore, dependence atoms based on various notions of entropy deserve further study, as do the connections of probabilistic team semantics to the field of information theory."
|
| 64 |
+
}
|
| 65 |
+
],
|
| 66 |
+
"appendix": [],
|
| 67 |
+
"tables": {
|
| 68 |
+
"1": {
|
| 69 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S1.T1\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S1.T1.26\" style=\"width:433.6pt;height:58.7pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-182.3pt,24.7pt) scale(0.543290341191722,0.543290341191722) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S1.T1.26.26\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S1.T1.3.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S1.T1.3.3.3.4\">Logic</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S1.T1.1.1.1.1\">\n for sentences</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S1.T1.2.2.2.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S1.T1.3.3.3.3\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S1.T1.7.7.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S1.T1.4.4.4.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S1.T1.5.5.5.2\">\n <span class=\"ltx_text\" id=\"S1.T1.5.5.5.2.1\" style=\"font-size:90%;\">(Cor.\u00a0<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.00420v2#Thmcorollary2\" title=\"Corollary 2. \u2023 7 Logic for first-order probabilistic dependecies \u2023 Logics with probabilistic team semantics and the Boolean negation\"><span class=\"ltx_text ltx_ref_tag\">2</span></a>)</span>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.6.6.6.3\">\n <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.00420v2#bib.bib12\" title=\"\">12</a>, Thm.\u00a05.2]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.7.7.7.4\">\n <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.00420v2#bib.bib12\" title=\"\">12</a>, Thm.\u00a05.2]</cite>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.12.12.12\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S1.T1.8.8.8.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S1.T1.10.10.10.3\">\n and -hard <span class=\"ltx_text\" id=\"S1.T1.10.10.10.3.1\" style=\"font-size:90%;\">(Thm.\u00a0<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.00420v2#S8.Thmtheorem2\" title=\"Theorem 8.2. \u2023 8 Complexity of satisfiability, validity and model checking \u2023 Logics with probabilistic team semantics and the Boolean negation\"><span class=\"ltx_text ltx_ref_tag\">8.2</span></a>)</span>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.11.11.4\">\n <span class=\"ltx_text\" id=\"S1.T1.11.11.11.4.1\" style=\"font-size:90%;\">(Thm.\u00a0<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.00420v2#S8.Thmtheorem4\" title=\"Theorem 8.4. \u2023 8 Complexity of satisfiability, validity and model checking \u2023 Logics with probabilistic team semantics and the Boolean negation\"><span class=\"ltx_text ltx_ref_tag\">8.4</span></a>)</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.12.12.12.5\">\n <span class=\"ltx_text\" id=\"S1.T1.12.12.12.5.1\" style=\"font-size:90%;\">(Thm.\u00a0<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.00420v2#S8.Thmtheorem4\" title=\"Theorem 8.4. 
\u2023 8 Complexity of satisfiability, validity and model checking \u2023 Logics with probabilistic team semantics and the Boolean negation\"><span class=\"ltx_text ltx_ref_tag\">8.4</span></a>)</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.16.16.16\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S1.T1.13.13.13.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S1.T1.14.14.14.2\">\n <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.00420v2#bib.bib23\" title=\"\">23</a>, Prop.\u00a05.16, Lem.\u00a05.21]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.15.15.15.3\">\n\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.00420v2#bib.bib23\" title=\"\">23</a>, Thm.\u00a05.6]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.16.16.16.4\">\n\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.00420v2#bib.bib23\" title=\"\">23</a>, Thm.\u00a05.6]</cite>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.21.21.21\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S1.T1.17.17.17.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S1.T1.19.19.19.3\">\n, -hard <span class=\"ltx_text\" id=\"S1.T1.19.19.19.3.1\" style=\"font-size:90%;\">(Thm.\u00a0<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.00420v2#S8.Thmtheorem1\" title=\"Theorem 8.1. \u2023 8 Complexity of satisfiability, validity and model checking \u2023 Logics with probabilistic team semantics and the Boolean negation\"><span class=\"ltx_text ltx_ref_tag\">8.1</span></a>)</span>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.20.20.20.4\">\n <span class=\"ltx_text\" id=\"S1.T1.20.20.20.4.1\" style=\"font-size:90%;\">(Thm.\u00a0<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.00420v2#S8.Thmtheorem4\" title=\"Theorem 8.4. \u2023 8 Complexity of satisfiability, validity and model checking \u2023 Logics with probabilistic team semantics and the Boolean negation\"><span class=\"ltx_text ltx_ref_tag\">8.4</span></a>)</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.21.21.21.5\">\n <span class=\"ltx_text\" id=\"S1.T1.21.21.21.5.1\" style=\"font-size:90%;\">(Thm.\u00a0<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.00420v2#S8.Thmtheorem4\" title=\"Theorem 8.4. \u2023 8 Complexity of satisfiability, validity and model checking \u2023 Logics with probabilistic team semantics and the Boolean negation\"><span class=\"ltx_text ltx_ref_tag\">8.4</span></a>)</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.26.26.26\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb\" id=\"S1.T1.22.22.22.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb\" id=\"S1.T1.24.24.24.3\">\n, -hard <span class=\"ltx_text\" id=\"S1.T1.24.24.24.3.1\" style=\"font-size:90%;\">(Thm.\u00a0<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.00420v2#S8.Thmtheorem3\" title=\"Theorem 8.3. 
\u2023 8 Complexity of satisfiability, validity and model checking \u2023 Logics with probabilistic team semantics and the Boolean negation\"><span class=\"ltx_text ltx_ref_tag\">8.3</span></a>)</span>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S1.T1.25.25.25.4\">\n <span class=\"ltx_text\" id=\"S1.T1.25.25.25.4.1\" style=\"font-size:90%;\">(Thm.\u00a0<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.00420v2#S8.Thmtheorem4\" title=\"Theorem 8.4. \u2023 8 Complexity of satisfiability, validity and model checking \u2023 Logics with probabilistic team semantics and the Boolean negation\"><span class=\"ltx_text ltx_ref_tag\">8.4</span></a>)</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S1.T1.26.26.26.5\">\n <span class=\"ltx_text\" id=\"S1.T1.26.26.26.5.1\" style=\"font-size:90%;\">(Thm.\u00a0<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.00420v2#S8.Thmtheorem4\" title=\"Theorem 8.4. \u2023 8 Complexity of satisfiability, validity and model checking \u2023 Logics with probabilistic team semantics and the Boolean negation\"><span class=\"ltx_text ltx_ref_tag\">8.4</span></a>)</span>\n</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Overview of our results. Unless otherwise noted, the results are completeness results. Satisfiability and Validity are considered for finite structures.</figcaption>\n</figure>",
|
| 70 |
+
"capture": "Table 1: Overview of our results. Unless otherwise noted, the results are completeness results. Satisfiability and Validity are considered for finite structures."
|
| 71 |
+
}
|
| 72 |
+
},
|
| 73 |
+
"image_paths": {},
|
| 74 |
+
"validation": true,
|
| 75 |
+
"references": [],
|
| 76 |
+
"url": "http://arxiv.org/html/2306.00420v2"
|
| 77 |
+
}
|
20240522/2306.09683v3.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240522/2306.16564v4.json
ADDED
|
@@ -0,0 +1,584 @@
| 1 |
+
{
|
| 2 |
+
"title": "Pareto Optimal Learning for Estimating Large Language Model Errors",
|
| 3 |
+
"abstract": "Large Language Models (LLMs) have shown impressive abilities in many applications. When a concrete and precise answer is desired, it is important to have a quantitative estimation of the potential error rate. However, this can be challenging due to the text-in-text-out nature of generative models. We present a method based on Pareto optimization that generates a risk score to estimate the probability of error in an LLM response by integrating multiple sources of information. We prove theoretically that the error estimator optimized in our framework aligns with the LLM and the information sources in an Pareto optimal manner. Experimental results show that the risk scores estimated by our method are well correlated with the true LLM error rate, thus facilitating error correction. By dynamically combining with prompting strategies such as self-verification and information retrieval, we demonstrate the proposed method can be utilized to increase the performance of an LLM, surpassing state-of-the-art task specific models.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Large Language Models (LLMs) have evolved to become impressively powerful in recent developments (Zhao et al., 2023 ###reference_b51###), with Generative Pretrained Transformer (GPT) models showing increasingly effective capabilities. The evolution from GPT-3 (Brown et al., 2020 ###reference_b4###) to GPT-4 (OpenAI, 2023 ###reference_b27###), along with the emergence of other LLMs such as PaLM (Chowdhery et al., 2022 ###reference_b6###) and LLaMA (Touvron et al., 2023 ###reference_b37###), has marked a significant leap in natural language understanding and problem-solving abilities. The generative nature of these models has led to their widespread adoption in numerous application fields.\nDespite their advanced capabilities, LLMs are capable of generating incorrect results (Ji et al., 2023 ###reference_b16###), an issue that is particularly problematic in applications where precision and dependability are critical, like the biomedical and healthcare fields (Azamfirei et al., 2023 ###reference_b3###; Nori et al., 2023 ###reference_b26###).\nExisting approaches for improving the correctness of LLMs include prompt engineering White et al. (2023 ###reference_b44###), retrieval methods Chen et al. (2023 ###reference_b5###), domain-specific tuning Wu et al. (2023 ###reference_b45###); Nguyen (2023 ###reference_b25###) among many others (Wang et al., 2023 ###reference_b39###). While existing methods show varying degrees of improvement on different tasks, in general there lacks a systematic way to efficiently quantify the likelihood of errors in LLM outputs. One strategy involves querying the LLM in various ways to estimate the answer\u2019s correctness (e.g., Manakul et al., 2023 ###reference_b22###). However, these approaches are computationally expensive and biased by the LLM itself.\nIn many LLM applications, including disease prediction Han et al. (2023 ###reference_b12###), medical diagnosis Shea et al. (2023 ###reference_b36###), and question answering (QA) Moore et al. (2023 ###reference_b23###), a concrete and precise answer is desired. For these mission critical tasks, a quantitative error estimation or confidence level for the response is equally important as giving a correct answer itself. However, the text-in text-out nature of the generative language models makes it challenging to estimate the error probability of the answer quantitatively. Although some LLMs provide internal probability scores for the generated tokens, they are poorly calibrated to the true error rate, particularly after applying reinforcement learning with human feedback (RLHF) (Ouyang et al., 2022 ###reference_b28###; OpenAI, 2023 ###reference_b27###).\nOur goal in this paper is to address this issue by establishing a systematic way to quantitatively estimate the likelihood of error in LLM answer. We approach this through training an estimator model via multi-objective optimization, leveraging extensive research in Pareto optimization (Pareto, 1964 ###reference_b29###).\nGiven the optimized model and any LLM response, we can then directly estimate the LLM response error rate which we refer to as the Pareto optimal learning assessed risk (POLAR) score (Section 3.2 ###reference_###). 
This framework leverages structured information retrieval from other information sources such as knowledge bases.\nWe introduce a novel approach that trains a Pareto-optimal probabilistic model to simultaneously optimize on LLM and align with the external information sources.\nOur key contributions are as follows:\ni) We propose a novel framework using Pareto optimization aligning to the LLM and multiple external information sources.\nii) The POLAR score from our framework is shown experimentally to be effective in estimating LLM error rate.\niii) We demonstrate that POLAR scores can be leveraged to boost an LLM\u2019s performance by easily combining with other popular strategies such as self-verification Weng et al. (2023 ###reference_b43###) and retrieval augmented generation (RAG) Chen et al. (2023 ###reference_b5###)."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Related Work",
|
| 15 |
+
"text": "Several heuristics have been proposed to reduce error and estimate the confidence level of an LLM response. Wang et al. (2022 ###reference_b41###) used self-consistency to infer the reliability of the answer. Manakul et al. (2023 ###reference_b22###) proposed SelfCheckGPT as a black-box method to detect hallucination. The chain-of-thought method Wei et al. (2022 ###reference_b42###) has also been used to indicate potential errors. These approaches are not able to provide a quantitative error estimation that is calibrated to the true error rate, and susceptible to the issue that the model\u2019s self-assessment of confidence is inherently biased. The quality of the results is also highly dependent on the prompting strategy.\nProducing confidence scores that are well correlated with a model\u2019s error rate has been well established in traditional machine learning. The study of model calibration dates back to the seminal work of Platt scaling (Platt et al., 1999 ###reference_b30###), where a Logistic calibration model is fitted on top of the original model output. Various techniques have been developed afterwards for model calibration, including isotonic regression (Zadrozny and Elkan, 2002 ###reference_b48###), temperature scaling (Guo et al., 2017 ###reference_b11###), and Bayesian binning (Naeini et al., 2015 ###reference_b24###). For LLM, a contextual calibration method for LLMs was proposed by (Zhao et al., 2021 ###reference_b52###), which adjusts the class balance by taking an ensemble of LLM queries with content-free input. These methods rely on annotated calibration data and access to model\u2019s output probabilities that are not always available.\nThe problem of aggregating multiple sources of information or supervision sources is studied extensively in programmatic weak supervision (Zhang et al., 2022 ###reference_b49###; Fu et al., 2020 ###reference_b9###; Varma et al., 2019 ###reference_b38###; Wang and Poon, 2018 ###reference_b40###; Lang and Poon, 2021 ###reference_b19###). Notable works include distant supervision (Hoffmann et al., 2011 ###reference_b14###), crowd-sourcing (Raykar et al., 2010 ###reference_b34###), data programming Ratner et al. (2016 ###reference_b33###, 2017 ###reference_b31###) and MeTaL, also known as Snorkel, (Ratner et al., 2019 ###reference_b32###).\nMost of these works weigh the supervision sources across all examples and combine the multiple sources into a single label per example.\nThis approach have shown success but also exhibits significant limitations when applied to identifying LLM errors, primarily due to the weighting dilemma, where if the weight assigned to the LLM is too low, the aggregated result can be noisy, and if the LLM weight is too high, the output is dominated by the LLM, making detecting LLM error difficult.\nR\u00fchling Cachay et al. (2021 ###reference_b35###) mitigates the weighting problem with instance-dependent weighting, but the expectation maximization procedure demonstrates significant learning variance.\nIn this work we present a framework to systematically estimate error of an LLM output by simultaneously aligning to multiple information sources while circumventing the weighting dilemma through Pareto optimization."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "Methodology",
|
| 21 |
+
"text": "Our error estimation framework for LLM responses is a two-step process.\nIn the first step, we iterate through a corpus of input instances to collect the corresponding LLM responses, while dynamically retrieving heuristic answers from other information sources. In this process, a probabilistic function is learned that fits the multiple sources in a Pareto optimal manner. In the second step, the optimized model is used to estimate the error rate of the LLM response on any new input instance, which we refer to as the POLAR score.\nAfter the error estimation step, we also provide an optional third step that strategically re-prompt the LLM based on the POLAR score, and leverage the information retrieved from the information sources in an RAG manner Chen et al. (2023 ###reference_b5###). An overview of the framework is shown in Figure 1 ###reference_###.\n###figure_1###"
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "3.1",
|
| 25 |
+
"parent_section_id": "3",
|
| 26 |
+
"section_name": "Problem setup",
|
| 27 |
+
"text": "Denote the LLM as a function where is the input text, and is the user-defined prompt specific to some task. In order to quantify LLM error, we define as the quantized output space (e.g. answer choices in QA, disease type in diagnosis tasks, etc.). Any free-text LLM output is mapped to the quantized space through mapping . Now we can define the LLM answer as\nwhere the output space is of cardinality . Note that the LLM is allowed to state \u201cunsure\u201d, but in error estimation we only consider the scenario when the LLM explicitly states the answer. Suppose the true answer for input is , estimating the LLM error rate is to estimate the probability .\nTo account for other information sources, such as knowledge bases and expert rules, we introduce a pool of sources. For , define the triggering function indicating if the input triggers a retrieval from information source . Example triggering includes recognized entities, text patterns, and etc.. Multiple sources can be triggered at the same time.\nOnce triggers retrieval from source , i.e. , the retrieval function represents the answer suggested by information source for input . Note that the LLM function defined in (1 ###reference_###) is also an information source that maps . For simplicity, we denote and . The answers retrieved from the multiple information sources can conflict with the LLM and among themselves, thus require a proper aggregation, which is the main challenge to solve."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3.2",
|
| 31 |
+
"parent_section_id": "3",
|
| 32 |
+
"section_name": "Pareto Optimal Learning Assessed Risk",
|
| 33 |
+
"text": "We propose to learn an optimal function by simultaneously fitting it to the output of the LLM and other information sources. Incorporating multiple sources helps reduce error dependency on a single source, and corrects the bias from the LLM itself by aligning to external knowledge. The primary challenge here is to design a framework to resolve conflicts among different sources and with the LLM. We do not assume the sources to be independent, as it is often assumed in the weak supervision literature Zhang et al. (2022 ###reference_b49###). Instead we adopt the looser assumption that the sources are individually positively correlated with the correct output , better than a random guess. Ideally, a reasonable should align with each source if it is triggered for retrieval, which is measured by\nwhere is the cross-entropy loss of against the information source. Note that , which essentially estimates the probability distribution on the output space with cardinality .\nIn order for to align with the multiple information sources, mathematically, we want to solve the following multi-objective problem for :\nAs the objectives may conflict, we seek an that is Pareto optimal, following multi-objective learning theory (Hwang and Masud, 2012 ###reference_b15###) and Pareto optimization (Pareto, 1964 ###reference_b29###).\nis Pareto optimal to , if no exists that Pareto dominates in (3 ###reference_###). must satisfy\nThe Pareto optimization framework effectively manages dependencies between information sources. For example, a Pareto optimal remains unaffected by the arbitrary duplication of a source. However, finding Pareto optimal solutions remains challenging in the multi-objective optimization literature. In this paper, we adopt one of the standard approaches to scalarize the multiple objectives Hwang and Masud (2012 ###reference_b15###), and solve\nwhere and is a Pareto aggregator as defined below.\nis a Pareto aggregator if it satisfies:\nis convex, and\nif , where .\nIn this study, we explore four different types of aggregators:\nLinear: ,\nQuadratic: ,\nEuclidean norm: ,\nChebyshev: .\nThe nonlinear aggregator shapes the optimization in Equation (4 ###reference_###) differently through Jensen\u2019s inequality.\nWhile the first three aggregators qualify as Pareto aggregators, the Chebyshev aggregator does not meet the definition criteria, serving as a comparative element in our experiment.\nWith Definitions 1 ###reference_inition1### and 2 ###reference_inition2###, we propose finding an optimal solution by solving Equation (4 ###reference_###), which can be done via standard stochastic gradient descent algorithms such as Adam (Kingma and Ba, 2014 ###reference_b17###). The solution is guaranteed by the following Theorem, with a detailed proof in Appendix A ###reference_###.\nSuppose is a Pareto aggregator as in Definition 2 ###reference_inition2###, solving the problem in Equation 4 ###reference_### approximates a Pareto optimum by minimizing the upperbound.\nOnce a optimal solution is found, we can estimate the error rate of for any new input , by selecting the probability in that corresponds to , denoted , and compute a risk score\nwhere represents the probability distribution of as estimated by . We refer to as the Pareto optimal learning assessed risk (POLAR) score.\nAlgorithm 1 ###reference_### summarizes the entire process.\nStep 1: Training estimator\nStep 2: Estimation with ."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3.3",
|
| 37 |
+
"parent_section_id": "3",
|
| 38 |
+
"section_name": "Step 3*: Error Correction with POLAR",
|
| 39 |
+
"text": "Identifying LLM responses with a higher risk of error presents an opportunity to efficiently correct the errors and improve the final accuracy. In most of the applications, the POLAR score itself is sufficient to facilitate human-in-the-loop intervention. Here we provide an optional Step 3, which easily connects the POLAR score to other prompting strategies to correct the error automatically. In this setting, the information sources also serve as additional input to the LLM to enhance its answer. In this paper, we propose two dynamic prompting strategies to illustrate correcting LLM errors using the POLAR score .\nPick risk threshold . For any input and LLM answer , if , simply ask the LLM to self-verify its previous answer.\nPick risk threshold . For any input and LLM answer , if , retrieve information from all sources that are triggered, i.e. . Provide the answer suggested by the information sources and the description of the sources to the LLM, and generate the revised answer. Algorithm 2 outlines the POLAR-assisted RAG. We provide detailed prompting design description in the Appendix D ###reference_###."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "4",
|
| 43 |
+
"parent_section_id": null,
|
| 44 |
+
"section_name": "Experiments",
|
| 45 |
+
"text": ""
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "4.1",
|
| 49 |
+
"parent_section_id": "4",
|
| 50 |
+
"section_name": "LLM Error Estimation",
|
| 51 |
+
"text": "We present the LLM error estimation results using the POLAR score and compare them with existing methods as baselines.\nWe estimate the error rate through LLM ensemble:\nQuery the LLM multiple times with the same input to sample responses.\nEstimate the probability where the answer is different from the most frequent one, and use it as the estimation for error rate on input .\nAs this approach is extremely expensive, we only evaluated this for GPT-3.5-turbo on the CDR dataset. The estimated ECE in this experiment is 0.4466, which is far worse than other approaches in Table 1 ###reference_###. Therefore, we focus on comparison with other baseline methods in the rest of this section.\nThe following methods utilize the exact same information sources as in our framework. They differ in how the answers from the multiple sources and the LLM were combined together.\nSnorkel (Ratner et al., 2019 ###reference_b32###): a weak supervision method combining multiple supervision sources via matrix completion. Snorkel model fitted on training set is use to give class probabilities for LLM error rate. We use the class probability given by the fitted model to estimate LLM error rate.\nWeaSEL (R\u00fchling Cachay et al., 2021 ###reference_b35###): A state-of-the-art weak supervision framework that fits a neural network using the LLM and weak labels in an end-to-end manner. We use the class probability given by the fitted model to estimate LLM error rate.\nMajority vote: A common method that estimates class probability according to the voted ratios among all information sources.\nTable 1 ###reference_### compares POLAR score performance in LLM error calibration against baseline methods. We report the results spanning the four datasets and three LLMs (GPT-4, GPT-3.5-turbo, and text-davinci-003). The proposed POLAR score consistently outperforms other methods. Among the baseline methods, Snorkel, WeaSEL, and LLM distilled model can achieve top or close-to-top performance in some cases under specific metric, but lack the consistency to deliver stable calibration for different LLMs on different tasks. In comparison, the proposed POLAR score is consistently well-calibrated to the true error rate.\n###figure_2### ###figure_3###"
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "4.1.1",
|
| 55 |
+
"parent_section_id": "4.1",
|
| 56 |
+
"section_name": "4.1.1 Evaluation metrics",
|
| 57 |
+
"text": "As the primary interest is in how well the LLM error rate is estimated, we report evaluation results measured in expected calibration error (ECE) Naeini et al. (2015 ###reference_b24###) that is widely used in assessing model confidence calibration as well as the coefficient of correlation . An ECE of indicates perfectly estimated error rates, or equivalently a perfectly calibrated model in our framework.\nFor each dataset, the training split is used as the input examples along with the predefined information sources to learn . The test split and its ground truth labels are used obtain to , compute the true LLM error rate, and evaluate measured by ECE and .\nFigure 2 ###reference_### displays POLAR score calibration for GPT-4 on the CDR chemical-disease relation extraction task. The calibration curve (Figure 2(a) ###reference_sf1###) shows that the POLAR score reliably estimates the true probability of LLM error rates. Figure 2(b) ###reference_sf2### demonstrates a high correlation between the POLAR score and the true error rate. Figure 2(c) ###reference_sf3### reveals that responses with the highest POLAR scores are most prone to errors, with the top scores indicating nearly a 100% error rate."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "4.1.2",
|
| 61 |
+
"parent_section_id": "4.1",
|
| 62 |
+
"section_name": "4.1.2 Baseline methods",
|
| 63 |
+
"text": "We compare the proposed POLAR score with the following baseline error estimation approaches. We divide them based on if they utilized the information sources that were used in our framework.\nWe estimate the error rate through LLM ensemble:\nQuery the LLM multiple times with the same input to sample responses.\nEstimate the probability where the answer is different from the most frequent one, and use it as the estimation for error rate on input .\nAs this approach is extremely expensive, we only evaluated this for GPT-3.5-turbo on the CDR dataset. The estimated ECE in this experiment is 0.4466, which is far worse than other approaches in Table 1 ###reference_### ###reference_###. Therefore, we focus on comparison with other baseline methods in the rest of this section.\nThe following methods utilize the exact same information sources as in our framework. They differ in how the answers from the multiple sources and the LLM were combined together.\nSnorkel (Ratner et al., 2019 ###reference_b32### ###reference_b32###): a weak supervision method combining multiple supervision sources via matrix completion. Snorkel model fitted on training set is use to give class probabilities for LLM error rate. We use the class probability given by the fitted model to estimate LLM error rate.\nWeaSEL (R\u00fchling Cachay et al., 2021 ###reference_b35### ###reference_b35###): A state-of-the-art weak supervision framework that fits a neural network using the LLM and weak labels in an end-to-end manner. We use the class probability given by the fitted model to estimate LLM error rate.\nMajority vote: A common method that estimates class probability according to the voted ratios among all information sources.\nTable 1 ###reference_### ###reference_### compares POLAR score performance in LLM error calibration against baseline methods. We report the results spanning the four datasets and three LLMs (GPT-4, GPT-3.5-turbo, and text-davinci-003). The proposed POLAR score consistently outperforms other methods. Among the baseline methods, Snorkel, WeaSEL, and LLM distilled model can achieve top or close-to-top performance in some cases under specific metric, but lack the consistency to deliver stable calibration for different LLMs on different tasks. In comparison, the proposed POLAR score is consistently well-calibrated to the true error rate.\n###figure_4### ###figure_5###"
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "4.2",
|
| 67 |
+
"parent_section_id": "4",
|
| 68 |
+
"section_name": "Improved LLM performance with POLAR-assisted dynamic prompting",
|
| 69 |
+
"text": "We investigate the utility of the POLAR score through dynamic prompting (Section 3.3 ###reference_###) to rectify LLM errors. In this experiment, we focus only on the CDR dataset and GPT-4 and GPT-3.5-turbo models.\nTo understand the advantage of dynamic prompting, we first examine the effect of LLM error rate using static prompting. That is, ignoring the POLAR score, we persistently follow up the initial LLM response with another prompt, either using the self-verification or providing suggested answers from the information sources. The results are shown in Figure 3(a) ###reference_sf1### where the LLM error rate is plotted with respect to the POLAR score. We observe that the LLM error rate decreases with static follow-up prompts when the POLAR score is high, around . However, the LLM error rate increases slightly with follow-up prompting when the POLAR score is low, around . This suggests the utility of dynamic prompting \u2013 follow up the initial LLM response with a prompt only when the LLM is more likely to make a mistake, i.e. when the POLAR score is high.\nWe apply dynamic prompting to the same problem and set . The results are shown in Figure 3(b) ###reference_sf2###. We see that the POLAR-assisted dynamic prompting increases the GPT-4 performance. The POLAR-assisted RAG strategy has a larger increase due to incorporating additional information sources in the follow-up prompt. We note that GPT-4 with POLAR-assisted RAG outperforms state-of-the-art supervised task-specific model Xiao et al. (2021 ###reference_b46###)."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "5",
|
| 73 |
+
"parent_section_id": null,
|
| 74 |
+
"section_name": "Ablations",
|
| 75 |
+
"text": ""
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "6",
|
| 79 |
+
"parent_section_id": null,
|
| 80 |
+
"section_name": "Conclusion",
|
| 81 |
+
"text": "We presented a novel framework for LLM error estimation using Pareto optimal learning. The error estimator learned in our framework aligns with the LLM and other information sources Pareto optimally. We showed experimentally that the proposed POLAR score is well calibrated with the LLM error rate evaluated on ground truth, ensuring reliable error estimation. We proposed two POLAR-assisted dynamic prompting strategies, and showed that POLAR-assisted RAG enhances GPT-4\u2019s performance, surpassing state-of-the-art task-specific model. This development marks a substantial advancement in the application of LLM, providing an effective method to both estimate and reduce LLM errors."
|
| 82 |
+
}
|
| 83 |
+
],
|
| 84 |
+
"appendix": [
|
| 85 |
+
{
|
| 86 |
+
"section_id": "Appendix 1",
|
| 87 |
+
"parent_section_id": null,
|
| 88 |
+
"section_name": "Appendix A Proof of Theorems",
|
| 89 |
+
"text": "For convenience, let\u2019s denote\nWe first show that any minimizing is Pareto optimal.\nProof by contradiction. Suppose is not Pareto optimal. Then there must exist some Pareto dominating . Without loss of generality, let\u2019s assume , and , . Then according to Definition 2 ###reference_inition2### of Pareto aggregator,\nwhich contradicts the assumption that is the minimizer for\nTherefore, the original statement is true, and minimizing the objective\ngives a Pareto optimum.\nNext, we use Jensen\u2019s inequality to upperbound this objective with the objective in problem 4 ###reference_###. Using the fact that is convex, we apply Jensen\u2019s inequality and get\nTherefore, solving the problem in Equation 4 ###reference_### approximates Pareto optimal harmonizer by upperbounding Equation 9 ###reference_###.\n\u220e"
|
| 90 |
+
},
|
| 91 |
+
{
|
| 92 |
+
"section_id": "Appendix 2",
|
| 93 |
+
"parent_section_id": null,
|
| 94 |
+
"section_name": "Appendix B Weights for Rebalancing the Sources",
|
| 95 |
+
"text": "In our experiments, we explored four different types of scalarization functions, namely:\nLinear aggregator: .\nQuadratic aggregator: .\nEuclidean norm aggregator: .\nChebyshev aggregator: .\nThe weights are parameters of . In the main text of the paper, we fixed to equal weights . Here we introduce three approaches to determine the weighting if necessary."
|
| 96 |
+
},
|
| 97 |
+
{
|
| 98 |
+
"section_id": "Appendix 3",
|
| 99 |
+
"parent_section_id": null,
|
| 100 |
+
"section_name": "Appendix C Training details",
|
| 101 |
+
"text": "We explored different configurations of Pareto optimal learning below:\nHarmonizer model: we experiment 1. BERT Devlin et al. (2018 ###reference_b8###) (PubMedBERT Gu et al. (2020 ###reference_b10###) for biomedical datasets CDR and ChemProt), 2. multi-layer perceptron (MLP), 3. Logistic regression (LR). The last two are built on top of the last layer embedding of the corresponding BERT model.\nPareto loss scalarizer: we experiment all four loss scalarization functions as defined in Section 3.2 ###reference_###, namely linear, quadratic, Euclidean norm, and Chebyshevy scalarization.\nOptimizer: We use AdamW Loshchilov and Hutter (2017 ###reference_b21###) optimizer with learning rate , weight decay , batch size 16. All hyperparameters are optimized on held out dev set.\nComputation: We trained on Azure Standard NC12s v3 with 1 Nvidia V100 GPU."
|
| 102 |
+
},
|
| 103 |
+
{
|
| 104 |
+
"section_id": "Appendix 4",
|
| 105 |
+
"parent_section_id": null,
|
| 106 |
+
"section_name": "Appendix D LLM Prompting Details",
|
| 107 |
+
"text": "In this section we will describe the details of the prompts used to query the LLMs.\nSetting: describe the role of the LLM in the task, and the overall problem setting.\nBackground: necessary background knowledge for domain specific tasks, including information from annotation guidelines for human annotators.\nData structure: for relation extraction tasks, explain the definition of each entity.\nDesired output: describe the list of the desired output. For each of the categories, provide explanation and optionally some examples.\nChain of thought (CoT): instruction to encourage the LLM to think step-by-step, articulate point-by-point, and give the response in the desired structure.\nConfidence: ask the model to state \u201cunsure\u201d if it is not confident about the answer.\nExample: state the example and ask the model to perform the task.\nEach prompt for out-of-the-box (zero-shot) prediction contains:\nA problem setting part that depends on the specific dataset.\nA response regularization part that encourages chain-of-thought (CoT) and confidence check, and specifies proper response format.\nA task instance part that contains the input instance and restates the task to perform.\nIn dynamic prompting, we query another follow-up prompt after the LLM gives the initial out-of-the-box response. As this is an extension to our main experiments, we only implemented for the CDR relation extraction task. The follow-up prompts for the two dynamic prompting strategies are:"
|
| 108 |
+
}
|
| 109 |
+
],
|
| 110 |
+
"tables": {
|
| 111 |
+
"1": {
|
| 112 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>LLM error rate estimation performance, using the POLAR score and other methods, measured in ECE and . The best entries (low ECE, high ) from each row are highlighted in bold.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.8\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.8.5.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.8.5.1.1\">Task</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.8.5.1.2\">LLM</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S4.T1.8.5.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.8.5.1.3.1\">POLAR</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S4.T1.8.5.1.4\">Snorkel</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S4.T1.8.5.1.5\">WeaSEL</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S4.T1.8.5.1.6\">Majority vote</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.8.4\">\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T1.8.4.5\"></td>\n<th class=\"ltx_td ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.8.4.6\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.8.4.7\">ECE</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.5.1.1\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.8.4.8\">ECE</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.6.2.2\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.8.4.9\">ECE</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.7.3.3\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.8.4.10\">ECE</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.8.4.4\"></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.8.6.1\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.6.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.8.6.1.1.1\">CDR</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.6.1.2\">GPT-4</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.6.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.8.6.1.3.1\">0.043</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.6.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.8.6.1.4.1\">0.890</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.6.1.5\">0.167</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.6.1.6\">0.299</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.6.1.7\">0.146</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.6.1.8\">0.387</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.6.1.9\">0.145</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.6.1.10\">0.348</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.8.7.2\">\n<td class=\"ltx_td\" id=\"S4.T1.8.7.2.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.7.2.2\">GPT-3.5-turbo</td>\n<td class=\"ltx_td ltx_align_left\" 
id=\"S4.T1.8.7.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.8.7.2.3.1\">0.046</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.7.2.4\">0.934</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.7.2.5\">0.164</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.7.2.6\">0.320</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.7.2.7\">0.081</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.7.2.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.8.7.2.8.1\">0.942</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.7.2.9\">0.182</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.7.2.10\">0.540</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.8.8.3\">\n<td class=\"ltx_td\" id=\"S4.T1.8.8.3.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.8.3.2\">Text-davinci-3</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.8.3.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.8.8.3.3.1\">0.055</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.8.3.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.8.8.3.4.1\">0.907</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.8.3.5\">0.154</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.8.3.6\">0.371</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.8.3.7\">0.135</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.8.3.8\">0.877</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.8.3.9\">0.149</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.8.3.10\">0.450</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.8.9.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.9.4.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.8.9.4.1.1\">ChemProt</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.9.4.2\">GPT-4</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.9.4.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.8.9.4.3.1\">0.035</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.9.4.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.8.9.4.4.1\">0.934</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.9.4.5\">0.182</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.9.4.6\">0.510</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.9.4.7\">0.278</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.9.4.8\">0.885</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.9.4.9\">0.233</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.9.4.10\">0.244</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.8.10.5\">\n<td class=\"ltx_td\" id=\"S4.T1.8.10.5.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.10.5.2\">GPT-3.5-turbo</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.10.5.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.8.10.5.3.1\">0.048</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.10.5.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.8.10.5.4.1\">0.944</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.10.5.5\">0.228</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.10.5.6\">0.625</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.10.5.7\">0.219</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.10.5.8\">0.922</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.10.5.9\">0.282</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.10.5.10\">0.031</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.8.11.6\">\n<td class=\"ltx_td\" 
id=\"S4.T1.8.11.6.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.11.6.2\">Text-davinci-3</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.11.6.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.8.11.6.3.1\">0.051</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.11.6.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.8.11.6.4.1\">0.917</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.11.6.5\">0.218</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.11.6.6\">0.700</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.11.6.7\">0.213</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.11.6.8\">0.846</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.11.6.9\">0.279</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.11.6.10\">0.307</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.8.12.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.12.7.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.8.12.7.1.1\">SemEval</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.12.7.2\">GPT-4</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.12.7.3\">0.079</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.12.7.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.8.12.7.4.1\">0.916</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.12.7.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.8.12.7.5.1\">0.068</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.12.7.6\">0.714</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.12.7.7\">0.612</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.12.7.8\">0.626</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.12.7.9\">0.115</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.12.7.10\">0.379</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.8.13.8\">\n<td class=\"ltx_td\" id=\"S4.T1.8.13.8.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.13.8.2\">GPT-3.5-turbo</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.13.8.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.8.13.8.3.1\">0.047</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.13.8.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.8.13.8.4.1\">0.963</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.13.8.5\">0.150</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.13.8.6\">0.821</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.13.8.7\">0.345</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.13.8.8\">0.890</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.13.8.9\">0.277</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.13.8.10\">0.208</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.8.14.9\">\n<td class=\"ltx_td\" id=\"S4.T1.8.14.9.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.14.9.2\">Text-davinci-3</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.14.9.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.8.14.9.3.1\">0.067</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.14.9.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.8.14.9.4.1\">0.950</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.14.9.5\">0.119</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.14.9.6\">0.796</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.14.9.7\">0.455</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.14.9.8\">0.784</td>\n<td class=\"ltx_td 
ltx_align_left\" id=\"S4.T1.8.14.9.9\">0.242</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.14.9.10\">0.396</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.8.15.10\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.15.10.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.8.15.10.1.1\">SMS</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.15.10.2\">GPT-4</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.15.10.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.8.15.10.3.1\">0.014</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.15.10.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.8.15.10.4.1\">0.980</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.15.10.5\">0.244</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.15.10.6\">0.089</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.15.10.7\">0.409</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.15.10.8\">0.345</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.15.10.9\">0.588</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.15.10.10\">0.091</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.8.16.11\">\n<td class=\"ltx_td\" id=\"S4.T1.8.16.11.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.16.11.2\">GPT-3.5-turbo</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.16.11.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.8.16.11.3.1\">0.041</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.16.11.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.8.16.11.4.1\">0.963</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.16.11.5\">0.075</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.16.11.6\">0.202</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.16.11.7\">0.286</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.16.11.8\">0.731</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.16.11.9\">0.148</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.16.11.10\">0.006</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.8.17.12\">\n<td class=\"ltx_td ltx_border_bb\" id=\"S4.T1.8.17.12.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T1.8.17.12.2\">Text-davinci-3</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T1.8.17.12.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.8.17.12.3.1\">0.023</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T1.8.17.12.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.8.17.12.4.1\">0.943</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T1.8.17.12.5\">0.201</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T1.8.17.12.6\">0.053</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T1.8.17.12.7\">0.420</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T1.8.17.12.8\">0.238</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T1.8.17.12.9\">0.325</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T1.8.17.12.10\">0.091</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 113 |
+
"capture": "Table 1: LLM error rate estimation performance, using the POLAR score and other methods, measured in ECE and . The best entries (low ECE, high ) from each row are highlighted in bold."
|
| 114 |
+
},
|
| 115 |
+
"2": {
|
| 116 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>LLM error estimation performance with and without external information sources.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.4.5.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.4.5.1.1\">Dataset</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S4.T2.4.5.1.2\">CDR</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S4.T2.4.5.1.3\">ChemProt</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S4.T2.4.5.1.4\">SemEval</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S4.T2.4.5.1.5\">SMS</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.4.4\">\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T2.4.4.5\"></td>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.4.4.6\">ECE</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.4.4.7\">ECE</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.2.2.2\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.4.4.8\">ECE</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.3.3.3\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.4.4.9\">ECE</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.4.4.4\"></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.6.1\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.4.6.1.1\">POLAR (with source)</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.4.6.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.6.1.2.1\">0.043</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.4.6.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.6.1.3.1\">0.890</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.4.6.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.6.1.4.1\">0.035</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.4.6.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.6.1.5.1\">0.934</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.4.6.1.6\">0.079</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.4.6.1.7\">0.916</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.4.6.1.8\">0.014</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.4.6.1.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.6.1.9.1\">0.980</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.7.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T2.4.7.2.1\">Without Sources</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T2.4.7.2.2\">0.164</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T2.4.7.2.3\">0.592</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T2.4.7.2.4\">0.216</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T2.4.7.2.5\">0.766</td>\n<td class=\"ltx_td 
ltx_align_left ltx_border_bb\" id=\"S4.T2.4.7.2.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.7.2.6.1\">0.063</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T2.4.7.2.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.7.2.7.1\">0.947</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T2.4.7.2.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.7.2.8.1\">0.013</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T2.4.7.2.9\">0.977</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 117 |
+
"capture": "Table 2: LLM error estimation performance with and without external information sources."
|
| 118 |
+
},
|
| 119 |
+
"3": {
|
| 120 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Average LLM error estimation performance for different loss aggregator and modeling choices.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T3.6\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T3.6.5.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.6.5.1.1\">Aggregator</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S4.T3.6.5.1.2\">Linear</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S4.T3.6.5.1.3\">Quadratic</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S4.T3.6.5.1.4\">Euclidean norm</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S4.T3.6.5.1.5\">Chebyshev</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.6.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.6.4.5\">Architecture</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.6.4.6\">ECE</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.3.1.1\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.6.4.7\">ECE</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.4.2.2\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.6.4.8\">ECE</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.5.3.3\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.6.4.9\">ECE</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.6.4.4\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.6.6.1\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.6.6.1.1\">BERT</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.6.6.1.2\">0.0625</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.6.6.1.3\">0.9273</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.6.6.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.6.6.1.4.1\">0.0458</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.6.6.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.6.6.1.5.1\">0.9366</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.6.6.1.6\">0.0549</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.6.6.1.7\">0.9003</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.6.6.1.8\">0.0711</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.6.6.1.9\">0.8260</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.6.7.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.6.7.2.1\">MLP</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.6.7.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.6.7.2.2.1\">0.0555</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.6.7.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.6.7.2.3.1\">0.9392</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.6.7.2.4\">0.0974</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.6.7.2.5\">0.9188</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.6.7.2.6\">0.0691</td>\n<td class=\"ltx_td 
ltx_align_left\" id=\"S4.T3.6.7.2.7\">0.9302</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.6.7.2.8\">0.0775</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.6.7.2.9\">0.8934</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.6.8.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T3.6.8.3.1\">LR</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T3.6.8.3.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.6.8.3.2.1\">0.0641</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T3.6.8.3.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.6.8.3.3.1\">0.9360</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T3.6.8.3.4\">0.1072</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T3.6.8.3.5\">0.9020</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T3.6.8.3.6\">0.0766</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T3.6.8.3.7\">0.9288</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T3.6.8.3.8\">0.0948</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T3.6.8.3.9\">0.8813</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 121 |
+
"capture": "Table 3: Average LLM error estimation performance for different loss aggregator and modeling choices."
|
| 122 |
+
}
|
| 123 |
+
},
|
| 124 |
+
"image_paths": {
|
| 125 |
+
"1": {
|
| 126 |
+
"figure_path": "2306.16564v4_figure_1.png",
|
| 127 |
+
"caption": "Figure 1: Pareto optimal learning framework for LLM error estimation and correction.",
|
| 128 |
+
"url": "http://arxiv.org/html/2306.16564v4/extracted/2306.16564v4/Figures/Framework_new.png"
|
| 129 |
+
},
|
| 130 |
+
"2(a)": {
|
| 131 |
+
"figure_path": "2306.16564v4_figure_2(a).png",
|
| 132 |
+
"caption": "(a) Error calibration curve\nFigure 2: LLM error estimation using the POLAR score. (a) The LLM response error rate vs ten equal-interval POLAR score bins. (b) The POLAR scores are sorted and then binned where each bin contains 100 examples. The average of the LLM errors and POLAR scores are plotted for each bin. The last bin with the top POLAR scores may have less than 100 examples. (c) shows the average LLM error rate vs top percentile POLAR score examples.",
|
| 133 |
+
"url": "http://arxiv.org/html/2306.16564v4/"
|
| 134 |
+
},
|
| 135 |
+
"2(b)": {
|
| 136 |
+
"figure_path": "2306.16564v4_figure_2(b).png",
|
| 137 |
+
"caption": "(b) Correlation with error rate\nFigure 2: LLM error estimation using the POLAR score. (a) The LLM response error rate vs ten equal-interval POLAR score bins. (b) The POLAR scores are sorted and then binned where each bin contains 100 examples. The average of the LLM errors and POLAR scores are plotted for each bin. The last bin with the top POLAR scores may have less than 100 examples. (c) shows the average LLM error rate vs top percentile POLAR score examples.",
|
| 138 |
+
"url": "http://arxiv.org/html/2306.16564v4/"
|
| 139 |
+
},
|
| 140 |
+
"2(c)": {
|
| 141 |
+
"figure_path": "2306.16564v4_figure_2(c).png",
|
| 142 |
+
"caption": "(c) LLM error detection\nFigure 2: LLM error estimation using the POLAR score. (a) The LLM response error rate vs ten equal-interval POLAR score bins. (b) The POLAR scores are sorted and then binned where each bin contains 100 examples. The average of the LLM errors and POLAR scores are plotted for each bin. The last bin with the top POLAR scores may have less than 100 examples. (c) shows the average LLM error rate vs top percentile POLAR score examples.",
|
| 143 |
+
"url": "http://arxiv.org/html/2306.16564v4/"
|
| 144 |
+
},
|
| 145 |
+
"3(a)": {
|
| 146 |
+
"figure_path": "2306.16564v4_figure_3(a).png",
|
| 147 |
+
"caption": "(a) Error rate reduction conditioning on POLAR score.\nFigure 3: (a) shows the GPT-4 error rate before and after re-prompting, as plotted against the POLAR score. (b) shows the performance improvement using the two dynamic prompting strategies in Section 3.3.",
|
| 148 |
+
"url": "http://arxiv.org/html/2306.16564v4/"
|
| 149 |
+
},
|
| 150 |
+
"3(b)": {
|
| 151 |
+
"figure_path": "2306.16564v4_figure_3(b).png",
|
| 152 |
+
"caption": "(b) Dynamic prompting performance\nFigure 3: (a) shows the GPT-4 error rate before and after re-prompting, as plotted against the POLAR score. (b) shows the performance improvement using the two dynamic prompting strategies in Section 3.3.",
|
| 153 |
+
"url": "http://arxiv.org/html/2306.16564v4/"
|
| 154 |
+
}
|
| 155 |
+
},
|
| 156 |
+
"validation": true,
|
| 157 |
+
"references": [
|
| 158 |
+
{
|
| 159 |
+
"1": {
|
| 160 |
+
"title": "Contributions to the study of sms spam filtering: new collection and\nresults.",
|
| 161 |
+
"author": "Tiago A Almeida, Jos\u00e9 Mar\u00eda G Hidalgo, and Akebo Yamakami. 2011.",
|
| 162 |
+
"venue": "In Proceedings of the 11th ACM symposium on Document\nengineering, pages 259\u2013262.",
|
| 163 |
+
"url": null
|
| 164 |
+
}
|
| 165 |
+
},
|
| 166 |
+
{
|
| 167 |
+
"2": {
|
| 168 |
+
"title": "Learning from rules generalizing labeled exemplars.",
|
| 169 |
+
"author": "Abhijeet Awasthi, Sabyasachi Ghosh, Rasna Goyal, and Sunita Sarawagi. 2020.",
|
| 170 |
+
"venue": "arXiv preprint arXiv:2004.06025.",
|
| 171 |
+
"url": null
|
| 172 |
+
}
|
| 173 |
+
},
|
| 174 |
+
{
|
| 175 |
+
"3": {
|
| 176 |
+
"title": "Large language models and the perils of their hallucinations.",
|
| 177 |
+
"author": "Razvan Azamfirei, Sapna R Kudchadkar, and James Fackler. 2023.",
|
| 178 |
+
"venue": "Critical Care, 27(1):1\u20132.",
|
| 179 |
+
"url": null
|
| 180 |
+
}
|
| 181 |
+
},
|
| 182 |
+
{
|
| 183 |
+
"4": {
|
| 184 |
+
"title": "Language models are few-shot learners.",
|
| 185 |
+
"author": "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla\nDhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell,\net al. 2020.",
|
| 186 |
+
"venue": "Advances in neural information processing systems,\n33:1877\u20131901.",
|
| 187 |
+
"url": null
|
| 188 |
+
}
|
| 189 |
+
},
|
| 190 |
+
{
|
| 191 |
+
"5": {
|
| 192 |
+
"title": "Benchmarking large language models in retrieval-augmented generation.",
|
| 193 |
+
"author": "Jiawei Chen, Hongyu Lin, Xianpei Han, and Le Sun. 2023.",
|
| 194 |
+
"venue": "arXiv preprint arXiv:2309.01431.",
|
| 195 |
+
"url": null
|
| 196 |
+
}
|
| 197 |
+
},
|
| 198 |
+
{
|
| 199 |
+
"6": {
|
| 200 |
+
"title": "Palm: Scaling language\nmodeling with pathways.",
|
| 201 |
+
"author": "Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra,\nAdam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian\nGehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez,\nAbhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran,\nEmily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob\nAustin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm\nLevskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia,\nVedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David\nLuan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David\nDohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai,\nThanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica\nMoreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi\nWang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei,\nKathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel.\n2022.",
|
| 202 |
+
"venue": null,
|
| 203 |
+
"url": "http://arxiv.org/abs/2204.02311"
|
| 204 |
+
}
|
| 205 |
+
},
|
| 206 |
+
{
|
| 207 |
+
"7": {
|
| 208 |
+
"title": "Comparative toxicogenomics database (ctd): update 2021.",
|
| 209 |
+
"author": "Allan Peter Davis, Cynthia J Grondin, Robin J Johnson, Daniela Sciaky, Jolene\nWiegers, Thomas C Wiegers, and Carolyn J Mattingly. 2021.",
|
| 210 |
+
"venue": "Nucleic acids research, 49(D1):D1138\u2013D1143.",
|
| 211 |
+
"url": null
|
| 212 |
+
}
|
| 213 |
+
},
|
| 214 |
+
{
|
| 215 |
+
"8": {
|
| 216 |
+
"title": "BERT: pre-training of deep\nbidirectional transformers for language understanding.",
|
| 217 |
+
"author": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018.",
|
| 218 |
+
"venue": "CoRR, abs/1810.04805.",
|
| 219 |
+
"url": "http://arxiv.org/abs/1810.04805"
|
| 220 |
+
}
|
| 221 |
+
},
|
| 222 |
+
{
|
| 223 |
+
"9": {
|
| 224 |
+
"title": "Fast and three-rious: Speeding up weak supervision with triplet\nmethods.",
|
| 225 |
+
"author": "Daniel Fu, Mayee Chen, Frederic Sala, Sarah Hooper, Kayvon Fatahalian, and\nChristopher R\u00e9. 2020.",
|
| 226 |
+
"venue": "In International Conference on Machine Learning, pages\n3280\u20133291. PMLR.",
|
| 227 |
+
"url": null
|
| 228 |
+
}
|
| 229 |
+
},
|
| 230 |
+
{
|
| 231 |
+
"10": {
|
| 232 |
+
"title": "Domain-specific\nlanguage model pretraining for biomedical natural language processing.",
|
| 233 |
+
"author": "Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu,\nTristan Naumann, Jianfeng Gao, and Hoifung Poon. 2020.",
|
| 234 |
+
"venue": null,
|
| 235 |
+
"url": "http://arxiv.org/abs/arXiv:2007.15779"
|
| 236 |
+
}
|
| 237 |
+
},
|
| 238 |
+
{
|
| 239 |
+
"11": {
|
| 240 |
+
"title": "On calibration of modern neural networks.",
|
| 241 |
+
"author": "Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. 2017.",
|
| 242 |
+
"venue": "In International conference on machine learning, pages\n1321\u20131330. PMLR.",
|
| 243 |
+
"url": null
|
| 244 |
+
}
|
| 245 |
+
},
|
| 246 |
+
{
|
| 247 |
+
"12": {
|
| 248 |
+
"title": "Evaluation of gpt-4 for 10-year cardiovascular risk prediction:\ninsights from the uk biobank and koges data.",
|
| 249 |
+
"author": "Changho Han, Dong Won Kim, Songsoo Kim, Seng Chan You, Jin Young Park, SungA\nBae, and Dukyong Yoon. 2023.",
|
| 250 |
+
"venue": "iScience.",
|
| 251 |
+
"url": null
|
| 252 |
+
}
|
| 253 |
+
},
|
| 254 |
+
{
|
| 255 |
+
"13": {
|
| 256 |
+
"title": "Semeval-2010 task 8: Multi-way classification of semantic relations\nbetween pairs of nominals.",
|
| 257 |
+
"author": "Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid O\nS\u00e9aghdha, Sebastian Pad\u00f3, Marco Pennacchiotti, Lorenza Romano, and\nStan Szpakowicz. 2019.",
|
| 258 |
+
"venue": "arXiv preprint arXiv:1911.10422.",
|
| 259 |
+
"url": null
|
| 260 |
+
}
|
| 261 |
+
},
|
| 262 |
+
{
|
| 263 |
+
"14": {
|
| 264 |
+
"title": "Knowledge-based weak supervision for information extraction of\noverlapping relations.",
|
| 265 |
+
"author": "Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S Weld.\n2011.",
|
| 266 |
+
"venue": "In Proceedings of the 49th annual meeting of the association\nfor computational linguistics: human language technologies, pages 541\u2013550.",
|
| 267 |
+
"url": null
|
| 268 |
+
}
|
| 269 |
+
},
|
| 270 |
+
{
|
| 271 |
+
"15": {
|
| 272 |
+
"title": "Multiple objective decision making\u2014methods and applications:\na state-of-the-art survey, volume 164.",
|
| 273 |
+
"author": "C-L Hwang and Abu Syed Md Masud. 2012.",
|
| 274 |
+
"venue": "Springer Science & Business Media.",
|
| 275 |
+
"url": null
|
| 276 |
+
}
|
| 277 |
+
},
|
| 278 |
+
{
|
| 279 |
+
"16": {
|
| 280 |
+
"title": "Survey of hallucination in natural language generation.",
|
| 281 |
+
"author": "Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii,\nYe Jin Bang, Andrea Madotto, and Pascale Fung. 2023.",
|
| 282 |
+
"venue": "ACM Computing Surveys, 55(12):1\u201338.",
|
| 283 |
+
"url": null
|
| 284 |
+
}
|
| 285 |
+
},
|
| 286 |
+
{
|
| 287 |
+
"17": {
|
| 288 |
+
"title": "Adam: A method for stochastic optimization.",
|
| 289 |
+
"author": "Diederik P Kingma and Jimmy Ba. 2014.",
|
| 290 |
+
"venue": "arXiv preprint arXiv:1412.6980.",
|
| 291 |
+
"url": null
|
| 292 |
+
}
|
| 293 |
+
},
|
| 294 |
+
{
|
| 295 |
+
"18": {
|
| 296 |
+
"title": "Overview of the biocreative vi chemical-protein interaction track.",
|
| 297 |
+
"author": "Martin Krallinger, Obdulia Rabal, Saber A Akhondi, Mart\u0131n P\u00e9rez\nP\u00e9rez, Jes\u00fas Santamar\u00eda, Gael P\u00e9rez Rodr\u00edguez, Georgios\nTsatsaronis, Ander Intxaurrondo, Jos\u00e9 Antonio L\u00f3pez, Umesh Nandal,\net al. 2017.",
|
| 298 |
+
"venue": "In Proceedings of the sixth BioCreative challenge evaluation\nworkshop, volume 1, pages 141\u2013146.",
|
| 299 |
+
"url": null
|
| 300 |
+
}
|
| 301 |
+
},
|
| 302 |
+
{
|
| 303 |
+
"19": {
|
| 304 |
+
"title": "Self-supervised self-supervision by combining deep learning and\nprobabilistic logic.",
|
| 305 |
+
"author": "Hunter Lang and Hoifung Poon. 2021.",
|
| 306 |
+
"venue": "In Proceedings of the AAAI Conference on Artificial\nIntelligence, volume 35, pages 4978\u20134986.",
|
| 307 |
+
"url": null
|
| 308 |
+
}
|
| 309 |
+
},
|
| 310 |
+
{
|
| 311 |
+
"20": {
|
| 312 |
+
"title": "Biocreative v cdr task corpus: a resource for chemical disease\nrelation extraction.",
|
| 313 |
+
"author": "Jiao Li, Yueping Sun, Robin J Johnson, Daniela Sciaky, Chih-Hsuan Wei, Robert\nLeaman, Allan Peter Davis, Carolyn J Mattingly, Thomas C Wiegers, and Zhiyong\nLu. 2016.",
|
| 314 |
+
"venue": "Database, 2016.",
|
| 315 |
+
"url": null
|
| 316 |
+
}
|
| 317 |
+
},
|
| 318 |
+
{
|
| 319 |
+
"21": {
|
| 320 |
+
"title": "Decoupled weight decay regularization.",
|
| 321 |
+
"author": "Ilya Loshchilov and Frank Hutter. 2017.",
|
| 322 |
+
"venue": "arXiv preprint arXiv:1711.05101.",
|
| 323 |
+
"url": null
|
| 324 |
+
}
|
| 325 |
+
},
|
| 326 |
+
{
|
| 327 |
+
"22": {
|
| 328 |
+
"title": "Selfcheckgpt: Zero-resource black-box hallucination detection for\ngenerative large language models.",
|
| 329 |
+
"author": "Potsawee Manakul, Adian Liusie, and Mark JF Gales. 2023.",
|
| 330 |
+
"venue": "arXiv preprint arXiv:2303.08896.",
|
| 331 |
+
"url": null
|
| 332 |
+
}
|
| 333 |
+
},
|
| 334 |
+
{
|
| 335 |
+
"23": {
|
| 336 |
+
"title": "Assessing the quality of multiple-choice questions using gpt-4 and\nrule-based methods.",
|
| 337 |
+
"author": "Steven Moore, Huy A Nguyen, Tianying Chen, and John Stamper. 2023.",
|
| 338 |
+
"venue": "In European Conference on Technology Enhanced Learning, pages\n229\u2013245. Springer.",
|
| 339 |
+
"url": null
|
| 340 |
+
}
|
| 341 |
+
},
|
| 342 |
+
{
|
| 343 |
+
"24": {
|
| 344 |
+
"title": "Obtaining well calibrated probabilities using bayesian binning.",
|
| 345 |
+
"author": "Mahdi Pakdaman Naeini, Gregory Cooper, and Milos Hauskrecht. 2015.",
|
| 346 |
+
"venue": "In Proceedings of the AAAI conference on artificial\nintelligence, volume 29.",
|
| 347 |
+
"url": null
|
| 348 |
+
}
|
| 349 |
+
},
|
| 350 |
+
{
|
| 351 |
+
"25": {
|
| 352 |
+
"title": "A brief report on lawgpt 1.0: A virtual legal assistant based on\ngpt-3.",
|
| 353 |
+
"author": "Ha-Thanh Nguyen. 2023.",
|
| 354 |
+
"venue": "arXiv preprint arXiv:2302.05729.",
|
| 355 |
+
"url": null
|
| 356 |
+
}
|
| 357 |
+
},
|
| 358 |
+
{
|
| 359 |
+
"26": {
|
| 360 |
+
"title": "Capabilities of gpt-4 on medical challenge problems.",
|
| 361 |
+
"author": "Harsha Nori, Nicholas King, Scott Mayer McKinney, Dean Carignan, and Eric\nHorvitz. 2023.",
|
| 362 |
+
"venue": "arXiv preprint arXiv:2303.13375.",
|
| 363 |
+
"url": null
|
| 364 |
+
}
|
| 365 |
+
},
|
| 366 |
+
{
|
| 367 |
+
"27": {
|
| 368 |
+
"title": "Gpt-4 technical report.",
|
| 369 |
+
"author": "OpenAI. 2023.",
|
| 370 |
+
"venue": null,
|
| 371 |
+
"url": "http://arxiv.org/abs/2303.08774"
|
| 372 |
+
}
|
| 373 |
+
},
|
| 374 |
+
{
|
| 375 |
+
"28": {
|
| 376 |
+
"title": "Training language models to\nfollow instructions with human feedback.",
|
| 377 |
+
"author": "Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela\nMishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John\nSchulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda\nAskell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022.",
|
| 378 |
+
"venue": null,
|
| 379 |
+
"url": "http://arxiv.org/abs/2203.02155"
|
| 380 |
+
}
|
| 381 |
+
},
|
| 382 |
+
{
|
| 383 |
+
"29": {
|
| 384 |
+
"title": "Cours d\u2019\u00e9conomie politique, volume 1.",
|
| 385 |
+
"author": "Vilfredo Pareto. 1964.",
|
| 386 |
+
"venue": "Librairie Droz.",
|
| 387 |
+
"url": null
|
| 388 |
+
}
|
| 389 |
+
},
|
| 390 |
+
{
|
| 391 |
+
"30": {
|
| 392 |
+
"title": "Probabilistic outputs for support vector machines and comparisons to\nregularized likelihood methods.",
|
| 393 |
+
"author": "John Platt et al. 1999.",
|
| 394 |
+
"venue": "Advances in large margin classifiers, 10(3):61\u201374.",
|
| 395 |
+
"url": null
|
| 396 |
+
}
|
| 397 |
+
},
|
| 398 |
+
{
|
| 399 |
+
"31": {
|
| 400 |
+
"title": "Snorkel: Rapid training data creation with weak supervision.",
|
| 401 |
+
"author": "Alexander Ratner, Stephen H Bach, Henry Ehrenberg, Jason Fries, Sen Wu, and\nChristopher R\u00e9. 2017.",
|
| 402 |
+
"venue": "In Proceedings of the VLDB Endowment. International Conference\non Very Large Data Bases, volume 11, page 269. NIH Public Access.",
|
| 403 |
+
"url": null
|
| 404 |
+
}
|
| 405 |
+
},
|
| 406 |
+
{
|
| 407 |
+
"32": {
|
| 408 |
+
"title": "Training complex models with multi-task weak supervision.",
|
| 409 |
+
"author": "Alexander Ratner, Braden Hancock, Jared Dunnmon, Frederic Sala, Shreyash\nPandey, and Christopher R\u00e9. 2019.",
|
| 410 |
+
"venue": "In Proceedings of the AAAI Conference on Artificial\nIntelligence, volume 33, pages 4763\u20134771.",
|
| 411 |
+
"url": null
|
| 412 |
+
}
|
| 413 |
+
},
|
| 414 |
+
{
|
| 415 |
+
"33": {
|
| 416 |
+
"title": "Data programming: Creating large training sets, quickly.",
|
| 417 |
+
"author": "Alexander J Ratner, Christopher M De Sa, Sen Wu, Daniel Selsam, and Christopher\nR\u00e9. 2016.",
|
| 418 |
+
"venue": "Advances in neural information processing systems, 29.",
|
| 419 |
+
"url": null
|
| 420 |
+
}
|
| 421 |
+
},
|
| 422 |
+
{
|
| 423 |
+
"34": {
|
| 424 |
+
"title": "Learning from crowds.",
|
| 425 |
+
"author": "Vikas C Raykar, Shipeng Yu, Linda H Zhao, Gerardo Hermosillo Valadez, Charles\nFlorin, Luca Bogoni, and Linda Moy. 2010.",
|
| 426 |
+
"venue": "Journal of machine learning research, 11(4).",
|
| 427 |
+
"url": null
|
| 428 |
+
}
|
| 429 |
+
},
|
| 430 |
+
{
|
| 431 |
+
"35": {
|
| 432 |
+
"title": "End-to-end weak supervision.",
|
| 433 |
+
"author": "Salva R\u00fchling Cachay, Benedikt Boecking, and Artur Dubrawski. 2021.",
|
| 434 |
+
"venue": "Advances in Neural Information Processing Systems,\n34:1845\u20131857.",
|
| 435 |
+
"url": null
|
| 436 |
+
}
|
| 437 |
+
},
|
| 438 |
+
{
|
| 439 |
+
"36": {
|
| 440 |
+
"title": "Use of gpt-4 to analyze medical records of patients with extensive\ninvestigations and delayed diagnosis.",
|
| 441 |
+
"author": "Yat-Fung Shea, Cynthia Min Yao Lee, Whitney Chin Tung Ip, Dik Wai Anderson Luk,\nand Stephanie Sze Wing Wong. 2023.",
|
| 442 |
+
"venue": "JAMA Network Open, 6(8):e2325000\u2013e2325000.",
|
| 443 |
+
"url": null
|
| 444 |
+
}
|
| 445 |
+
},
|
| 446 |
+
{
|
| 447 |
+
"37": {
|
| 448 |
+
"title": "Llama: Open and efficient foundation language models.",
|
| 449 |
+
"author": "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne\nLachaux, Timoth\u00e9e Lacroix, Baptiste Rozi\u00e8re, Naman Goyal, Eric\nHambro, Faisal Azhar, et al. 2023.",
|
| 450 |
+
"venue": "arXiv preprint arXiv:2302.13971.",
|
| 451 |
+
"url": null
|
| 452 |
+
}
|
| 453 |
+
},
|
| 454 |
+
{
|
| 455 |
+
"38": {
|
| 456 |
+
"title": "Multi-resolution weak supervision for sequential data.",
|
| 457 |
+
"author": "Paroma Varma, Frederic Sala, Shiori Sagawa, Jason Fries, Daniel Fu, Saelig\nKhattar, Ashwini Ramamoorthy, Ke Xiao, Kayvon Fatahalian, James Priest,\net al. 2019.",
|
| 458 |
+
"venue": "Advances in Neural Information Processing Systems, 32.",
|
| 459 |
+
"url": null
|
| 460 |
+
}
|
| 461 |
+
},
|
| 462 |
+
{
|
| 463 |
+
"39": {
|
| 464 |
+
"title": "Survey on factuality in\nlarge language models: Knowledge, retrieval and domain-specificity.",
|
| 465 |
+
"author": "Cunxiang Wang, Xiaoze Liu, Yuanhao Yue, Xiangru Tang, Tianhang Zhang, Cheng\nJiayang, Yunzhi Yao, Wenyang Gao, Xuming Hu, Zehan Qi, Yidong Wang, Linyi\nYang, Jindong Wang, Xing Xie, Zheng Zhang, and Yue Zhang. 2023.",
|
| 466 |
+
"venue": null,
|
| 467 |
+
"url": "http://arxiv.org/abs/2310.07521"
|
| 468 |
+
}
|
| 469 |
+
},
|
| 470 |
+
{
|
| 471 |
+
"40": {
|
| 472 |
+
"title": "Deep probabilistic logic: A unifying framework for indirect\nsupervision.",
|
| 473 |
+
"author": "Hai Wang and Hoifung Poon. 2018.",
|
| 474 |
+
"venue": "arXiv preprint arXiv:1808.08485.",
|
| 475 |
+
"url": null
|
| 476 |
+
}
|
| 477 |
+
},
|
| 478 |
+
{
|
| 479 |
+
"41": {
|
| 480 |
+
"title": "Self-consistency improves chain of thought reasoning in language\nmodels.",
|
| 481 |
+
"author": "Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. 2022.",
|
| 482 |
+
"venue": "arXiv preprint arXiv:2203.11171.",
|
| 483 |
+
"url": null
|
| 484 |
+
}
|
| 485 |
+
},
|
| 486 |
+
{
|
| 487 |
+
"42": {
|
| 488 |
+
"title": "Chain of thought prompting elicits reasoning in large language\nmodels.",
|
| 489 |
+
"author": "Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and\nDenny Zhou. 2022.",
|
| 490 |
+
"venue": "arXiv preprint arXiv:2201.11903.",
|
| 491 |
+
"url": null
|
| 492 |
+
}
|
| 493 |
+
},
|
| 494 |
+
{
|
| 495 |
+
"43": {
|
| 496 |
+
"title": "Large language models are better reasoners with self-verification.",
|
| 497 |
+
"author": "Yixuan Weng, Minjun Zhu, Fei Xia, Bin Li, Shizhu He, Shengping Liu, Bin Sun,\nKang Liu, and Jun Zhao. 2023.",
|
| 498 |
+
"venue": "In Findings of the Association for Computational Linguistics:\nEMNLP 2023, pages 2550\u20132575.",
|
| 499 |
+
"url": null
|
| 500 |
+
}
|
| 501 |
+
},
|
| 502 |
+
{
|
| 503 |
+
"44": {
|
| 504 |
+
"title": "A prompt pattern catalog to enhance prompt engineering with chatgpt.",
|
| 505 |
+
"author": "Jules White, Quchen Fu, Sam Hays, Michael Sandborn, Carlos Olea, Henry Gilbert,\nAshraf Elnashar, Jesse Spencer-Smith, and Douglas C Schmidt. 2023.",
|
| 506 |
+
"venue": "arXiv preprint arXiv:2302.11382.",
|
| 507 |
+
"url": null
|
| 508 |
+
}
|
| 509 |
+
},
|
| 510 |
+
{
|
| 511 |
+
"45": {
|
| 512 |
+
"title": "Bloomberggpt: A large\nlanguage model for finance.",
|
| 513 |
+
"author": "Shijie Wu, Ozan Irsoy, Steven Lu, Vadim Dabravolski, Mark Dredze, Sebastian\nGehrmann, Prabhanjan Kambadur, David Rosenberg, and Gideon Mann. 2023.",
|
| 514 |
+
"venue": null,
|
| 515 |
+
"url": "http://arxiv.org/abs/2303.17564"
|
| 516 |
+
}
|
| 517 |
+
},
|
| 518 |
+
{
|
| 519 |
+
"46": {
|
| 520 |
+
"title": "Sais: supervising and augmenting intermediate steps for\ndocument-level relation extraction.",
|
| 521 |
+
"author": "Yuxin Xiao, Zecheng Zhang, Yuning Mao, Carl Yang, and Jiawei Han. 2021.",
|
| 522 |
+
"venue": "arXiv preprint arXiv:2109.12093.",
|
| 523 |
+
"url": null
|
| 524 |
+
}
|
| 525 |
+
},
|
| 526 |
+
{
|
| 527 |
+
"47": {
|
| 528 |
+
"title": "Fine-tuning pre-trained language model with weak supervision: A\ncontrastive-regularized self-training approach.",
|
| 529 |
+
"author": "Yue Yu, Simiao Zuo, Haoming Jiang, Wendi Ren, Tuo Zhao, and Chao Zhang. 2021.",
|
| 530 |
+
"venue": "In Proceedings of the 2021 Conference of the North American\nChapter of the Association for Computational Linguistics: Human Language\nTechnologies, pages 1063\u20131077.",
|
| 531 |
+
"url": null
|
| 532 |
+
}
|
| 533 |
+
},
|
| 534 |
+
{
|
| 535 |
+
"48": {
|
| 536 |
+
"title": "Transforming classifier scores into accurate multiclass probability\nestimates.",
|
| 537 |
+
"author": "Bianca Zadrozny and Charles Elkan. 2002.",
|
| 538 |
+
"venue": "In Proceedings of the eighth ACM SIGKDD international\nconference on Knowledge discovery and data mining, pages 694\u2013699.",
|
| 539 |
+
"url": null
|
| 540 |
+
}
|
| 541 |
+
},
|
| 542 |
+
{
|
| 543 |
+
"49": {
|
| 544 |
+
"title": "A survey on programmatic weak supervision.",
|
| 545 |
+
"author": "Jieyu Zhang, Cheng-Yu Hsieh, Yue Yu, Chao Zhang, and Alexander Ratner. 2022.",
|
| 546 |
+
"venue": "arXiv preprint arXiv:2202.05433.",
|
| 547 |
+
"url": null
|
| 548 |
+
}
|
| 549 |
+
},
|
| 550 |
+
{
|
| 551 |
+
"50": {
|
| 552 |
+
"title": "Wrench: A comprehensive benchmark for weak supervision.",
|
| 553 |
+
"author": "Jieyu Zhang, Yue Yu, Yinghao Li, Yujing Wang, Yaming Yang, Mao Yang, and\nAlexander Ratner. 2021.",
|
| 554 |
+
"venue": "arXiv preprint arXiv:2109.11377.",
|
| 555 |
+
"url": null
|
| 556 |
+
}
|
| 557 |
+
},
|
| 558 |
+
{
|
| 559 |
+
"51": {
|
| 560 |
+
"title": "A survey of large language models.",
|
| 561 |
+
"author": "Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou,\nYingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. 2023.",
|
| 562 |
+
"venue": "arXiv preprint arXiv:2303.18223.",
|
| 563 |
+
"url": null
|
| 564 |
+
}
|
| 565 |
+
},
|
| 566 |
+
{
|
| 567 |
+
"52": {
|
| 568 |
+
"title": "Calibrate before use: Improving few-shot performance of language\nmodels.",
|
| 569 |
+
"author": "Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021.",
|
| 570 |
+
"venue": "In International Conference on Machine Learning, pages\n12697\u201312706. PMLR.",
|
| 571 |
+
"url": null
|
| 572 |
+
}
|
| 573 |
+
},
|
| 574 |
+
{
|
| 575 |
+
"53": {
|
| 576 |
+
"title": "Nero: A neural rule grounding framework for label-efficient relation\nextraction.",
|
| 577 |
+
"author": "Wenxuan Zhou, Hongtao Lin, Bill Yuchen Lin, Ziqi Wang, Junyi Du, Leonardo\nNeves, and Xiang Ren. 2020.",
|
| 578 |
+
"venue": "In Proceedings of The Web Conference 2020, pages 2166\u20132176.",
|
| 579 |
+
"url": null
|
| 580 |
+
}
|
| 581 |
+
}
|
| 582 |
+
],
|
| 583 |
+
"url": "http://arxiv.org/html/2306.16564v4"
|
| 584 |
+
}
|
20240522/2307.07099v3.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240522/2308.01123v3.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240522/2308.01804v3.json
ADDED
|
@@ -0,0 +1,151 @@
|
| 1 |
+
{
|
| 2 |
+
"title": "QUEST: Query Stream for Practical Cooperative Perception",
|
| 3 |
+
"abstract": "Cooperative perception can effectively enhance individual perception performance by providing additional viewpoint and expanding the sensing field. Existing cooperation paradigms are either interpretable (result cooperation) or flexible (feature cooperation). In this paper, we propose the concept of query cooperation to enable interpretable instance-level flexible feature interaction. To specifically explain the concept, we propose a cooperative perception framework, termed QUEST, which let query stream flow among agents. The cross-agent queries are interacted via fusion for co-aware instances and complementation for individual unaware instances. Taking camera-based vehicle-infrastructure perception as a typical practical application scene, the experimental results on the real-world dataset, DAIR-V2X-Seq, demonstrate the effectiveness of QUEST and further reveal the advantage of the query cooperation paradigm on transmission flexibility and robustness to packet dropout. We hope our work can further facilitate the cross-agent representation interaction for better cooperative perception in practice.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Despite the significant progress have been made in individual perception, intelligent vehicles still have to face challenges of unobservable dangers caused by occlusion and limited perception range. Different from the individual perception which senses the surrounding with its own onboard sensor system, cooperative perception, especially vehicle-infrastructure cooperative perception (VICP), shed light on reliable autonomous driving in a complex traffic environment and have achieved increasing attention recently [1 ###reference_b1###, 2 ###reference_b2###]. Leveraging the roadside sensor system with more flexible mounting height and posture, the cooperative perception field is effectively extended, and some challenging individual perception cases (e.g., long-range small object detection) can be readily tackled in VICP setting [3 ###reference_b3###, 4 ###reference_b4###].\n###figure_1### Advantages are usually followed by new challenges. Naturally, the first and foremost question is how to cooperate between multiple agents. According to what is shared among agents, there are three typical cooperation paradigms [1 ###reference_b1###, 5 ###reference_b5###, 2 ###reference_b2###], including data cooperation (early fusion), feature cooperation (intermediate fusion), and result cooperation (late fusion). Data cooperation [6 ###reference_b6###, 7 ###reference_b7###] is regarded as the upper bound of performance since the comprehensive information is interchanged along with raw data across agents. However, the high transmission cost of massive data is unbearable in practical applications. Result cooperation is widely deployed in practice due to the advantages of bandwidth-economic, where agents only share predictions [6 ###reference_b6###, 3 ###reference_b3###]. Nevertheless, the significant information loss in result cooperation makes it highly reliant on accurate individual predictions. Compared with those two paradigms, feature cooperation [8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###, 13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###] is more flexible and performance-bandwidth balanced, as the information loss is controllable via feature selection and compression. Even though some of them have achieved region-level feature selection [16 ###reference_b16###], the interpretability of feature selection and fusion are still limited, since the scene-level features abstractly represent the whole observable region. It is worth noting that the interaction between predictions in result cooperation is instance-level, resulting in physically interpretable cooperation targets.\nAddressing that, we naturally come up with a question: is there an eclectic approach for cooperative perception, which is both interpretable and flexible?\nInspired by the success of transformer-based methods in individual perception tasks[17 ###reference_b17###, 18 ###reference_b18###, 19 ###reference_b19###], we propose the concept of query cooperation, which is an instance-level feature interaction paradigm based on the query stream across agents, standing on the midpoint between scene-level feature cooperation and instance-level result cooperation (Figure 1 ###reference_###). The instance-level cooperation makes it more physically interpretable, and feature interaction introduces more information elasticity. 
Specifically, we propose a framework, named QUEST, as a representative approach to describe the concept, where queries flow in a stream among agents. Firstly, each agent performs individual transformer-based perception. Every query output from the decoder corresponds to a possible detected object, and the query will be shared if its confidence score meets the requirement of the requesting agent. As the cross-agent queries arrive, they are utilized for both query fusion and complementation. Theoretically, query fusion can enhance the feature of a sensed instance with the feature from another viewpoint, while query complementation can directly complement instances the local perception system is unaware of. Then, the queries are used for cooperative perception, resulting in the final perception results. To evaluate the performance of QUEST, we generate the camera-centric cooperation labels on DAIR-V2X-Seq based on the single-side groundtruth labeled at the image-captured timestamps *.\n11footnotetext: The original cooperation groundtruth is labeled at the LiDAR\u2019s timestamp [3 ###reference_b3###], which is not suitable for camera-based research.\nOur contributions are summarized as follows:\nWe propose the concept of the query cooperation paradigm for the cooperative perception task, which is more interpretable than scene-level feature cooperation and more flexible than result cooperation.\nA query cooperation framework, termed QUEST, is proposed as a representative approach. The cross-agent queries interact at the instance level via fusion and complementation.\nWe take camera-based vehicle-infrastructure cooperative object detection as a typical application scene. The experimental results on the real-world dataset, DAIR-V2X-Seq, demonstrate the effectiveness of QUEST and further show the advantage of the query cooperation paradigm in flexibility and robustness. Besides, the camera-centric cooperation labels are generated to facilitate the further development of related research.\n###figure_2###"
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "II Related Works",
|
| 15 |
+
"text": "In this section, we briefly review two related topics, cooperative perception and query-based perception."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "II-A Cooperative perception",
|
| 21 |
+
"text": "To break the sensing range limitation of onboard sensor systems and eliminate the influences of unobservable dangers, cooperative perception\nhas attracted increasing attention in recent years. The most intuitive approach is data cooperation, which transmits raw sensor data and fundamentally overcomes the occlusion and long-range perception problem. Since 3D data can be directly aggregated, most data cooperation approaches are LiDAR-based [6 ###reference_b6###, 7 ###reference_b7###]. Although raw data reserves comprehensive information, the high transmission cost makes it challenging to deploy in practice. For the convenience of communication, result cooperation only transmits perception predictions, which is the most bandwidth-economic [6 ###reference_b6###, 3 ###reference_b3###]. In addition, the instance-level bounding box aggregation makes the cooperation more physically interpretable. However, the performance of result cooperation highly relies on the accurate individual perception and precise parameters for coordinate system transformation. Therefore, recent methods pay more attention to feature cooperation, which can achieve better performance-bandwidth balance [8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###, 16 ###reference_b16###, 13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###]. Compared with the simple bounding box, the feature map is more flexible for both fusion and compression, but the scene-level feature cooperation is redundant for object perception and less explainable. Aiming on interpretable flexible cooperation, we propose the concept of query cooperation, which transmits instance-level features across agents."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "II-B Query-based perception",
|
| 27 |
+
"text": "Since the pioneering work DETR [17 ###reference_b17###] is proposed for 2D object detection, the object query has been adopted for more and more perception tasks, including 3D detection and tracking. Query-based methods typically utilize sparse learnable queries for attentive feature aggregation. DETR3D [18 ###reference_b18###] predicts 3D locations of queries and obtains the corresponding image features via projection. PETR [20 ###reference_b20###] turns to embed image features with 3D position and directly learns the mapping relations using the attention mechanism. BEVFormer [21 ###reference_b21###, 22 ###reference_b22###] tackles the perception from a bird-eye view with grid-shaped queries and manages to realize spatial-temporal feature interaction through the deformable transformer. Leveraging temporal information, query-based methods are also beneficial to object tracking. To model cross-frame object association, MOTR [19 ###reference_b19###] and TrackFormer [23 ###reference_b23###] propose track query based on single frame object query. MUTR [24 ###reference_b24###] and PF-Track [25 ###reference_b25###] utilizes track query and achieve promising tracking performance for multi-view tasks. All of the existing query-based methods are developed for individual perception, we further extend it to cooperative perception in this paper. Specifically, we propose the QUEST framework to achieve a query stream across agents and design the cross-agent query interaction module for query fusion and complementation."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3",
|
| 31 |
+
"parent_section_id": null,
|
| 32 |
+
"section_name": "III Query Cooperation Paradigm",
|
| 33 |
+
"text": "What to share and how to cooperate are the two main concerns for practical cooperative perception, especially considering the limited bandwidth of the wireless communication. To design a better cooperation strategy, it is expected to be both interpretable and flexible, since interpretability leads to controllable cooperation and flexibility provides more operation space and possibilities. Considering that, we propose the query cooperation paradigm, which shares features across agents and performs cooperation via instance-level feature interaction.\nFor clarity, we take vehicle-infrastructure cooperative perception as an example.\nQuery Generation. Both vehicle and infrastructure perform individual perception all the time, and each perception prediction is corresponded to an object query , according to the theory of transformer-based perception,\n,\nwhere is the feature extraction function for queries, is the query-based prediction function, and denotes the input sensor data.\nQuery Transmission. The query cooperation is triggered when the vehicle requests additional information from infrastructure side. Noting that the query request can be along with a specific instance-level requirement, like confidence threshold and region mask. Then, the queries met the requirement are posted to the vehicle side.\nQuery Interaction. Both the received queries and local queries are leveraged for further cooperative perception, and the query interaction strategy is to determine how to enhance and complement the with .\n,\nwhere denotes the query interaction function and is the generated cooperative query set.\nQuery-based Prediction. is further fed into query-based prediction heads for perception tasks, resulting in the final cooperative perception predictions ."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "4",
|
| 37 |
+
"parent_section_id": null,
|
| 38 |
+
"section_name": "IV QUEST Framework",
|
| 39 |
+
"text": "To elaborate on the concept of query cooperation, we describe the proposed representative framework in this section. Benefiting from the deployment convenience, camera-based sensor systems are widely adopted in practical applications. Thus, we take the camera-based vehicle-infrastructure cooperative perception as a typical scenario to describe the framework."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "4.1",
|
| 43 |
+
"parent_section_id": "4",
|
| 44 |
+
"section_name": "IV-A Overall Architecture",
|
| 45 |
+
"text": "As illustrated in Figure 2 ###reference_###, QUEST achieves cooperative perception via a cross-agent QUEry STream. The object queries flow from the infrastructure side to the vehicle side when query cooperation is triggered by the vehicle. The framework mainly consists of two functional modules, including single-agent query-based perception modules and a cross-agent query interaction module.\nFor every single agent, like the vehicle, the query-based perception module is continuously running to ensure the basic individual perception capability, leveraging its own sensor data obtained from the onboard system. It will always output perception predictions whether the query cooperation is triggered or not. Theoretically, every query-based perception method can be directly plugged in, and we adopt PETR [20 ###reference_b20###] as an example in this paper. The captured image is fed into the backbone for feature extraction, and both the feature and calibration parameters are input to a transformer-based decoder to perform object detection. Each prediction is matched with a corresponding object query, and it is the source of the query stream. Considering the limited bandwidth of wireless communication, the infrastructure-side query stream is shunted according to a confidence score threshold required by the vehicle side, resulting in a high-quality sparse feature transmission.\nWhen the infrastructure-side query stream flows to the vehicle side, it joins the local query stream to form a cooperative query stream. The cross-agent query interaction module is designed to integrate the object queries from different sources, which is elaborated in the following subsection. The joint query stream finally flocks to the transformer-based decoder, and the cooperative predictions are output."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "4.2",
|
| 49 |
+
"parent_section_id": "4",
|
| 50 |
+
"section_name": "IV-B Cross-agent Query Interaction",
|
| 51 |
+
"text": "Similar to all the other cooperation paradigms, how to aggregate the cross-agent information is always the most important part of the framework. Benefiting from the interpretable instance-level cooperation, the query interaction mechanism is natural, including query fusion for co-aware objects and query complementation for unaware objects.\nIn the first place, the corresponding location of the cross-agent queries should be transformed into a unified coordinate system, which is generally the vehicle-side LiDAR coordinate system. Since each query is along with a 3D reference point, the transformation is readily performed using the calibration parameters (rotation and translation matrix).\nThe instance-level predictions are matched according to their locations in result cooperation. Although the strategy can be directly adopted in QUEST, it relies on both the accurate location prediction and precise coordinate transformation. To realize more robust query matching, we propose the dual-space query embedding.\n###figure_3### Dual-space Query Embedding takes both location information and semantic information into consideration, which is embedded in physical and feature space. For location embedding, we expand the exact center to a grid to give a high tolerance of location noise, as shown in Figure 3 ###reference_###. The 3D coordinates in the grid are concatenated to form grid embedding after normalization. However, the loose constraint of location will inevitably introduce false-matched pairs. We further take semantic information into account to pay additional attention to appearance. Technically, the query\u2019s feature is concatenated with the grid embedding , and the dual-space query embedding is generated using a multi-layer perceptron (MLP) encoder.\n, where is the concatenation operation, denotes the multi-layer perceptron encoder, and is the semantic embedding. We directly regard the query\u2019s feature as semantic embedding in this work.\nCross-agent Query Alignment is a specific and necessary operation for query cooperation, which is mainly due to the implicit encoding of the instance-level orientation. The prediction\u2019s orientation is explicitly represented in result cooperation, and the orientation of the dense feature map is directly related to the corresponding coordinate system. Therefore, both of them can achieve orientation transformation via explicit coordinate system transformation. However, the implicit encoded feature in instance-level query can not be manually operated, even if the orientation-related feature is decoupled from others. We adopt MLP for feature space alignment, which enables implicit orientation transformation and cross-agent feature alignment.\n, where is the infrastructure-side query, and is the rotation matrix from infrastructure side to vehicle side.\nAttentive Query Fusion is to enhance the vehicle-side aware queries with the queries from the infrastructure-side view. The fusion is attentively guided by the dual-space query embedding. Specifically, we calculate the embedding distance between each two query pairs and generate the attentive fusion weights on the basis of that via MLP. Take the vehicle-side query and the infrastructure-side query as an example,\n, where and denote the generated dual-space query embedding, and is the distance function. 
Then, the vehicle-side query stream is updated and formed into the cooperative query stream via weighted summation.\n###figure_4### Query Complementation is to complement the vehicle-side unaware object queries with the received infrastructure-side queries. Instead of simply inserting the cross-agent queries into the local query stream, we turn to a replacement strategy to reduce the extra computational cost. Firstly, the vehicle-side queries are sorted according to their confidence scores. The received queries are then used to replace the queries with low confidence scores, as shown in Figure 4 ###reference_###."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "5",
|
| 55 |
+
"parent_section_id": null,
|
| 56 |
+
"section_name": "Experiments",
|
| 57 |
+
"text": "###figure_5### This section describes experiments on the real-world vehicle-infrastructure dataset. We provide detailed studies and qualitative analysis on effectiveness, flexibility of query transmission, and robustness to packet dropout."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "5.1",
|
| 61 |
+
"parent_section_id": "5",
|
| 62 |
+
"section_name": "Experimental Setting",
|
| 63 |
+
"text": "Datasets.\nWe evaluate the proposed QUEST framework on the large-scale real-world cooperative dataset DAIR-V2X-Seq [26 ###reference_b26###], which consists of more than 15,000 frames captured from 95 representative scenes. It comprises 7445 image pairs for training and 3316 pairs for validation. We follow the official split scheme and report experimental results on the validation set. The perception range for evaluation is set as following the official setting. The input images are resized to a fixed size of .\n###figure_6### Camera-centric cooperation labels. Since the asynchronous capture frequency between camera and LiDAR, there is always a misalignment between the image and the original cooperation groundtruth (labeled at the LiDAR\u2019s timestamp) [3 ###reference_b3###]. For the camera-based researches, we generate the cooperation annotations based on the single-side groundtruth labeled at the image-captured timestamps. The generated camera-centric cooperation labels are more accurate, as shown in Figure 6 ###reference_###.\nImplementation Details.\nWe employ VoVNetV2 [27 ###reference_b27###] as backbone, and the output of the 5th stage is upsampled and fused with that of the 4th stage following PETR [20 ###reference_b20###].\nAdamW optimizer [28 ###reference_b28###, 29 ###reference_b29###] is adopted with a weight decay of 0.01. The initial learning rate is set to and is scheduled according to cosine annealing [30 ###reference_b30###]. The model is trained for 100 epochs until convergence. The same as [20 ###reference_b20###, 18 ###reference_b18###, 24 ###reference_b24###], the model output at most 300 objects during the inference time. Experiments are implemented in PyTorch on a server with NVIDIA A100."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "5.2",
|
| 67 |
+
"parent_section_id": "5",
|
| 68 |
+
"section_name": "Effectiveness Study",
|
| 69 |
+
"text": "First of all, we compare our QUEST (two versions) with vehicle-only and result cooperation approaches in Table I ###reference_###. All reported methods use PETR (adopting VoVNetV2 as backbone) as an individual perception module. The full version QUEST achieves 20.3% on and 14.1% on , which outperforms result cooperation with a large margin, not to mention the vehicle-only approach. Benefiting from the cooperative perception, both distant and occluded objects can be detected, as shown in Figure 5 ###reference_###.\nTheoretically, there are two ways that query cooperation can boost the perception performance. One is the query enhancement for co-aware objects, the other is the query complementation for unaware objects caused by occlusion or the long-range problem. Therefore, we also report the results of an ablated version (QUEST-f), which only adopts query fusion as cross-agent query interaction, and the query complementation is switched off.\nNoting that QUEST-f performs better than the vehicle-only approach, but is slightly worse than result cooperation. It demonstrates that: (1) If an object can be observed by both vehicle and infrastructure, query fusion can effectively enhance the instance-level feature leveraging the information from another viewpoint; (2) Query complementation is more dominant compared with query fusion, since the unobservable object lies in the blind area of the vehicle can be replenished, which is in line with the motivation of cooperative perception. The instance-level complementation lets result cooperation outperform QUEST-f, but there is a further performance lift when adopting query complementation. Although both of them are at the instance-level, the advantage of query cooperation is more obvious."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "5.3",
|
| 73 |
+
"parent_section_id": "5",
|
| 74 |
+
"section_name": "Flexibility of Query Transmission",
|
| 75 |
+
"text": "Benefiting from the interpretable instance-level cooperation, the cross-agent information transmission is more flexible via query selection. It can be regarded as an instance-level spatial-wise information compression considering wireless bandwidth. QUEST employs confidence-based query selection by filtering the queries under the required score threshold. We report the performance at different thresholds (from 0.1 to 0.8) in Table II ###reference_###.\nIt can be seen that the requirement of transmission bandwidth is significantly reduced as the selection threshold increases (Figure 7 ###reference_###). The transmission Bytes are only half of the full package when we set a higher confidence threshold, such as 0.5. Theoretically, a higher threshold leads to better precision and worse recall. Although both and inevitably decline due to the selection, the descending range is acceptable.\n###figure_7### Compared with region-level spatial-wise compression in the existing feature cooperation approaches, instance-level query selection is more fine-grained and interpretable. The channel-wise query compression can further reduce bandwidth requirements and make it more suitable for practical applications."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "5.4",
|
| 79 |
+
"parent_section_id": "5",
|
| 80 |
+
"section_name": "Robustness to Packet Dropout",
|
| 81 |
+
"text": "Packet dropout is inevitable for wireless communication, and will severely affect the performance of cooperative perception.\nThe scene-level cooperation may degrade into vehicle-side individual perception when the received data/feature is fragmentary due to the packet dropout. Different from that, the minimum transmission unit is reduced to instance level in query cooperation, so the dropout will result in at most partial query loss.\nTo simulate the packet dropout, we manually set a dropout ratio of query transmission during evaluation, and the results are reported in Table III ###reference_###.\nAlthough performance decline is avoidless, QUEST can still generate valid predictions when packet dropout occurs. It maintains about performance when the dropout ratio reaches 0.7. The results suggest that QUEST is relatively robust when facing query loss, and show the advantage of query cooperation on robustness to packet dropout."
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "6",
|
| 85 |
+
"parent_section_id": null,
|
| 86 |
+
"section_name": "VI Discussion on Query Cooperation",
|
| 87 |
+
"text": "Experimental results of QUEST have reflected the characteristics of query cooperation. In this section, we further discuss the pros and cons of query cooperation paradigm.\nPossible extensions. Standing on the midpoint of instance-level result cooperation and scene-level feature cooperation, query cooperation takes both advantages of them, resulting in more possibilities to explore. Since the query stream is instance-level, it is more convenient to introduce temporal information and give the chance to model the individual motion of every single object. Leveraging temporal features, the object detection performance will be further boosted via spatial-temporal cooperation. Similar to single-vehicle scenario, query cooperation paradigm opens the gate to end-to-end (E2E) cooperative tracking via a spatial-temporal query stream. Furthermore, there is a wider ocean to explore, when the query stream goes beyond perception and flows throughout the whole pipeline, including perception, prediction, and planning. E2E cooperative driving can expand the E2E autonomous driving [31 ###reference_b31###] to a system-wide improvement for intelligent transportation system.\nForeseeable limitation. Behind all the advances, the limitation is also foreseeable. Since query cooperation is on the basis of the query stream, it naturally requests all agents participating in the symbioses to employ a query-based onboard system. Therefore, the query cooperation adaption for the hybrid intelligent transportation system deserves further exploration. In addition, the query alignment among different transformer-based architectures also needs to be tackled for widespread use."
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "7",
|
| 91 |
+
"parent_section_id": null,
|
| 92 |
+
"section_name": "VII Conclusion",
|
| 93 |
+
"text": "Aiming at interpretable and flexible cooperative perception, we propose the concept of query cooperation in this paper, which enables instance-level feature interaction among agents via the query stream. To specifically describe the query cooperation, a representative cooperative perception framework (QUEST) is proposed. It performs cross-agent query interaction by fusion and complementation, which are designed for co-aware objects and unaware objects respectively. Taking camera-based vehicle-infrastructure cooperative perception as a typical scenario, we generate the camera-centric cooperation labels of DAIR-V2X-Seq and evaluate the proposed framework on it. The experimental results not only demonstrate the effectiveness but also show the advantages of transmission flexibility and robustness to packet dropout. In addition, we discuss the pros and cons of query cooperation paradigm from the possible extensions and foreseeable limitations.\nFrom our perspective of view, the query cooperation has great potential and deserves further exploration. We hope our work can facilitate the cooperative perception research for practical applications. Planned future efforts will include 1) adaption for other cooperative tasks, e.g., prediction and planning, 2) query alignment across agents and time, and 3) query selection and compression for practical convenience."
|
| 94 |
+
}
|
| 95 |
+
],
|
| 96 |
+
"appendix": [],
|
| 97 |
+
"tables": {
|
| 98 |
+
"1": {
|
| 99 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>Effectiveness study on QUEST framework. QUEST-f is an ablated version that only adopts query fusion (without query complementation) for cross-agent query interaction.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T1.6\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T1.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_t\" id=\"S5.T1.2.2.3\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.2.2.3.1\" style=\"font-size:90%;\">Approach</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S5.T1.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" colspan=\"2\" id=\"S5.T1.2.2.2\"></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.6.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.3.3.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S5.T1.4.4.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.5.5.3\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.6.6.4\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T1.6.7.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S5.T1.6.7.1.1\"><span class=\"ltx_text\" id=\"S5.T1.6.7.1.1.1\" style=\"font-size:90%;\">vehicle-only</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.7.1.2\"><span class=\"ltx_text\" id=\"S5.T1.6.7.1.2.1\" style=\"font-size:90%;\">17.8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.6.7.1.3\"><span class=\"ltx_text\" id=\"S5.T1.6.7.1.3.1\" style=\"font-size:90%;\">10.9</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.7.1.4\"><span class=\"ltx_text\" id=\"S5.T1.6.7.1.4.1\" style=\"font-size:90%;\">15.6</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.7.1.5\"><span class=\"ltx_text\" id=\"S5.T1.6.7.1.5.1\" style=\"font-size:90%;\">9.4</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.6.8.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T1.6.8.2.1\"><span class=\"ltx_text\" id=\"S5.T1.6.8.2.1.1\" style=\"font-size:90%;\">result coop.</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.8.2.2\"><span class=\"ltx_text\" id=\"S5.T1.6.8.2.2.1\" style=\"font-size:90%;\">29.9</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.6.8.2.3\"><span class=\"ltx_text\" id=\"S5.T1.6.8.2.3.1\" style=\"font-size:90%;\">14.7</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.8.2.4\"><span class=\"ltx_text\" id=\"S5.T1.6.8.2.4.1\" style=\"font-size:90%;\">20.7</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.8.2.5\"><span class=\"ltx_text\" id=\"S5.T1.6.8.2.5.1\" style=\"font-size:90%;\">10.7</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.6.9.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S5.T1.6.9.3.1\"><span class=\"ltx_text\" id=\"S5.T1.6.9.3.1.1\" style=\"font-size:90%;\">QUEST-f</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.9.3.2\"><span class=\"ltx_text\" 
id=\"S5.T1.6.9.3.2.1\" style=\"font-size:90%;\">21.7</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.6.9.3.3\"><span class=\"ltx_text\" id=\"S5.T1.6.9.3.3.1\" style=\"font-size:90%;\">12.8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.9.3.4\"><span class=\"ltx_text\" id=\"S5.T1.6.9.3.4.1\" style=\"font-size:90%;\">19.3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.9.3.5\"><span class=\"ltx_text\" id=\"S5.T1.6.9.3.5.1\" style=\"font-size:90%;\">10.7</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.6.10.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_r\" id=\"S5.T1.6.10.4.1\"><span class=\"ltx_text\" id=\"S5.T1.6.10.4.1.1\" style=\"font-size:90%;\">QUEST</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.6.10.4.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.10.4.2.1\" style=\"font-size:90%;\">39.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T1.6.10.4.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.10.4.3.1\" style=\"font-size:90%;\">20.3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.6.10.4.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.10.4.4.1\" style=\"font-size:90%;\">33.3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.6.10.4.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.10.4.5.1\" style=\"font-size:90%;\">14.1</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 100 |
+
"capture": "TABLE I: Effectiveness study on QUEST framework. QUEST-f is an ablated version that only adopts query fusion (without query complementation) for cross-agent query interaction."
|
| 101 |
+
},
|
| 102 |
+
"2": {
|
| 103 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>Performance under different transmission threshold.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T2.6\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T2.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.2.2.3\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.2.2.3.1\" style=\"font-size:90%;\">Threshold</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S5.T2.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S5.T2.2.2.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.2.2.4\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.2.2.4.1\" style=\"font-size:90%;\">Bytes</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.6.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T2.3.3.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S5.T2.4.4.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T2.5.5.3\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S5.T2.6.6.4\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T2.6.7.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.6.7.1.1\"><span class=\"ltx_text\" id=\"S5.T2.6.7.1.1.1\" style=\"font-size:90%;\">0.1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.6.7.1.2\"><span class=\"ltx_text\" id=\"S5.T2.6.7.1.2.1\" style=\"font-size:90%;\">40.1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.6.7.1.3\"><span class=\"ltx_text\" id=\"S5.T2.6.7.1.3.1\" style=\"font-size:90%;\">20.3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.6.7.1.4\"><span class=\"ltx_text\" id=\"S5.T2.6.7.1.4.1\" style=\"font-size:90%;\">33.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.6.7.1.5\"><span class=\"ltx_text\" id=\"S5.T2.6.7.1.5.1\" style=\"font-size:90%;\">14.1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.6.7.1.6\"><span class=\"ltx_text\" id=\"S5.T2.6.7.1.6.1\" style=\"font-size:90%;\">74.4K</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.6.8.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.6.8.2.1\"><span class=\"ltx_text\" id=\"S5.T2.6.8.2.1.1\" style=\"font-size:90%;\">0.2</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.6.8.2.2\"><span class=\"ltx_text\" id=\"S5.T2.6.8.2.2.1\" style=\"font-size:90%;\">39.5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.6.8.2.3\"><span class=\"ltx_text\" id=\"S5.T2.6.8.2.3.1\" style=\"font-size:90%;\">20.3</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.6.8.2.4\"><span class=\"ltx_text\" id=\"S5.T2.6.8.2.4.1\" style=\"font-size:90%;\">33.3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.6.8.2.5\"><span class=\"ltx_text\" id=\"S5.T2.6.8.2.5.1\" style=\"font-size:90%;\">14.1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.6.8.2.6\"><span class=\"ltx_text\" 
id=\"S5.T2.6.8.2.6.1\" style=\"font-size:90%;\">60.0K</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.6.9.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.6.9.3.1\"><span class=\"ltx_text\" id=\"S5.T2.6.9.3.1.1\" style=\"font-size:90%;\">0.3</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.6.9.3.2\"><span class=\"ltx_text\" id=\"S5.T2.6.9.3.2.1\" style=\"font-size:90%;\">39.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.6.9.3.3\"><span class=\"ltx_text\" id=\"S5.T2.6.9.3.3.1\" style=\"font-size:90%;\">20.3</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.6.9.3.4\"><span class=\"ltx_text\" id=\"S5.T2.6.9.3.4.1\" style=\"font-size:90%;\">33.3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.6.9.3.5\"><span class=\"ltx_text\" id=\"S5.T2.6.9.3.5.1\" style=\"font-size:90%;\">14.1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.6.9.3.6\"><span class=\"ltx_text\" id=\"S5.T2.6.9.3.6.1\" style=\"font-size:90%;\">52.2K</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.6.10.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.6.10.4.1\"><span class=\"ltx_text\" id=\"S5.T2.6.10.4.1.1\" style=\"font-size:90%;\">0.4</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.6.10.4.2\"><span class=\"ltx_text\" id=\"S5.T2.6.10.4.2.1\" style=\"font-size:90%;\">39.0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.6.10.4.3\"><span class=\"ltx_text\" id=\"S5.T2.6.10.4.3.1\" style=\"font-size:90%;\">20.1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.6.10.4.4\"><span class=\"ltx_text\" id=\"S5.T2.6.10.4.4.1\" style=\"font-size:90%;\">33.2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.6.10.4.5\"><span class=\"ltx_text\" id=\"S5.T2.6.10.4.5.1\" style=\"font-size:90%;\">14.1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.6.10.4.6\"><span class=\"ltx_text\" id=\"S5.T2.6.10.4.6.1\" style=\"font-size:90%;\">43.8K</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.6.11.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.6.11.5.1\"><span class=\"ltx_text\" id=\"S5.T2.6.11.5.1.1\" style=\"font-size:90%;\">0.5</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.6.11.5.2\"><span class=\"ltx_text\" id=\"S5.T2.6.11.5.2.1\" style=\"font-size:90%;\">38.7</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.6.11.5.3\"><span class=\"ltx_text\" id=\"S5.T2.6.11.5.3.1\" style=\"font-size:90%;\">20.0</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.6.11.5.4\"><span class=\"ltx_text\" id=\"S5.T2.6.11.5.4.1\" style=\"font-size:90%;\">33.1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.6.11.5.5\"><span class=\"ltx_text\" id=\"S5.T2.6.11.5.5.1\" style=\"font-size:90%;\">14.0</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.6.11.5.6\"><span class=\"ltx_text\" id=\"S5.T2.6.11.5.6.1\" style=\"font-size:90%;\">40.8K</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.6.12.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.6.12.6.1\"><span class=\"ltx_text\" id=\"S5.T2.6.12.6.1.1\" style=\"font-size:90%;\">0.6</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.6.12.6.2\"><span class=\"ltx_text\" id=\"S5.T2.6.12.6.2.1\" style=\"font-size:90%;\">38.3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.6.12.6.3\"><span class=\"ltx_text\" id=\"S5.T2.6.12.6.3.1\" 
style=\"font-size:90%;\">19.7</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.6.12.6.4\"><span class=\"ltx_text\" id=\"S5.T2.6.12.6.4.1\" style=\"font-size:90%;\">32.5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.6.12.6.5\"><span class=\"ltx_text\" id=\"S5.T2.6.12.6.5.1\" style=\"font-size:90%;\">13.8</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.6.12.6.6\"><span class=\"ltx_text\" id=\"S5.T2.6.12.6.6.1\" style=\"font-size:90%;\">38.2K</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.6.13.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.6.13.7.1\"><span class=\"ltx_text\" id=\"S5.T2.6.13.7.1.1\" style=\"font-size:90%;\">0.7</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.6.13.7.2\"><span class=\"ltx_text\" id=\"S5.T2.6.13.7.2.1\" style=\"font-size:90%;\">37.7</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.6.13.7.3\"><span class=\"ltx_text\" id=\"S5.T2.6.13.7.3.1\" style=\"font-size:90%;\">19.1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.6.13.7.4\"><span class=\"ltx_text\" id=\"S5.T2.6.13.7.4.1\" style=\"font-size:90%;\">32.1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.6.13.7.5\"><span class=\"ltx_text\" id=\"S5.T2.6.13.7.5.1\" style=\"font-size:90%;\">13.6</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.6.13.7.6\"><span class=\"ltx_text\" id=\"S5.T2.6.13.7.6.1\" style=\"font-size:90%;\">35.5K</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.6.14.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T2.6.14.8.1\"><span class=\"ltx_text\" id=\"S5.T2.6.14.8.1.1\" style=\"font-size:90%;\">0.8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T2.6.14.8.2\"><span class=\"ltx_text\" id=\"S5.T2.6.14.8.2.1\" style=\"font-size:90%;\">36.7</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T2.6.14.8.3\"><span class=\"ltx_text\" id=\"S5.T2.6.14.8.3.1\" style=\"font-size:90%;\">18.5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T2.6.14.8.4\"><span class=\"ltx_text\" id=\"S5.T2.6.14.8.4.1\" style=\"font-size:90%;\">30.9</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T2.6.14.8.5\"><span class=\"ltx_text\" id=\"S5.T2.6.14.8.5.1\" style=\"font-size:90%;\">13.1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T2.6.14.8.6\"><span class=\"ltx_text\" id=\"S5.T2.6.14.8.6.1\" style=\"font-size:90%;\">31.9K</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 104 |
+
"capture": "TABLE II: Performance under different transmission threshold."
|
| 105 |
+
},
|
| 106 |
+
"3": {
|
| 107 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.T3\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\">TABLE III: </span>Performance under different transmission packet dropout ratios. The transmission threshold is set to 0.3.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T3.6\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T3.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T3.2.2.3\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.2.2.3.1\" style=\"font-size:90%;\">Ratio</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S5.T3.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S5.T3.2.2.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T3.2.2.4\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.2.2.4.1\" style=\"font-size:90%;\">Bytes</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.6.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T3.3.3.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S5.T3.4.4.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T3.5.5.3\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S5.T3.6.6.4\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T3.6.7.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.6.7.1.1\"><span class=\"ltx_text\" id=\"S5.T3.6.7.1.1.1\" style=\"font-size:90%;\">0.0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.6.7.1.2\"><span class=\"ltx_text\" id=\"S5.T3.6.7.1.2.1\" style=\"font-size:90%;\">39.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.6.7.1.3\"><span class=\"ltx_text\" id=\"S5.T3.6.7.1.3.1\" style=\"font-size:90%;\">20.3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.6.7.1.4\"><span class=\"ltx_text\" id=\"S5.T3.6.7.1.4.1\" style=\"font-size:90%;\">33.3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.6.7.1.5\"><span class=\"ltx_text\" id=\"S5.T3.6.7.1.5.1\" style=\"font-size:90%;\">14.1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.6.7.1.6\"><span class=\"ltx_text\" id=\"S5.T3.6.7.1.6.1\" style=\"font-size:90%;\">52.2K</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.6.8.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.6.8.2.1\"><span class=\"ltx_text\" id=\"S5.T3.6.8.2.1.1\" style=\"font-size:90%;\">0.3</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.6.8.2.2\"><span class=\"ltx_text\" id=\"S5.T3.6.8.2.2.1\" style=\"font-size:90%;\">33.7</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.6.8.2.3\"><span class=\"ltx_text\" id=\"S5.T3.6.8.2.3.1\" style=\"font-size:90%;\">17.3</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.6.8.2.4\"><span class=\"ltx_text\" id=\"S5.T3.6.8.2.4.1\" style=\"font-size:90%;\">28.5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.6.8.2.5\"><span class=\"ltx_text\" id=\"S5.T3.6.8.2.5.1\" style=\"font-size:90%;\">12.5</span></td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S5.T3.6.8.2.6\"><span class=\"ltx_text\" id=\"S5.T3.6.8.2.6.1\" style=\"font-size:90%;\">36.5K</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.6.9.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.6.9.3.1\"><span class=\"ltx_text\" id=\"S5.T3.6.9.3.1.1\" style=\"font-size:90%;\">0.5</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.6.9.3.2\"><span class=\"ltx_text\" id=\"S5.T3.6.9.3.2.1\" style=\"font-size:90%;\">29.7</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.6.9.3.3\"><span class=\"ltx_text\" id=\"S5.T3.6.9.3.3.1\" style=\"font-size:90%;\">15.7</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.6.9.3.4\"><span class=\"ltx_text\" id=\"S5.T3.6.9.3.4.1\" style=\"font-size:90%;\">25.7</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.6.9.3.5\"><span class=\"ltx_text\" id=\"S5.T3.6.9.3.5.1\" style=\"font-size:90%;\">12.1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.6.9.3.6\"><span class=\"ltx_text\" id=\"S5.T3.6.9.3.6.1\" style=\"font-size:90%;\">26.1K</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.6.10.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.6.10.4.1\"><span class=\"ltx_text\" id=\"S5.T3.6.10.4.1.1\" style=\"font-size:90%;\">0.7</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.6.10.4.2\"><span class=\"ltx_text\" id=\"S5.T3.6.10.4.2.1\" style=\"font-size:90%;\">25.9</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.6.10.4.3\"><span class=\"ltx_text\" id=\"S5.T3.6.10.4.3.1\" style=\"font-size:90%;\">13.7</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.6.10.4.4\"><span class=\"ltx_text\" id=\"S5.T3.6.10.4.4.1\" style=\"font-size:90%;\">22.2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.6.10.4.5\"><span class=\"ltx_text\" id=\"S5.T3.6.10.4.5.1\" style=\"font-size:90%;\">11.6</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.6.10.4.6\"><span class=\"ltx_text\" id=\"S5.T3.6.10.4.6.1\" style=\"font-size:90%;\">15.6K</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.6.11.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T3.6.11.5.1\"><span class=\"ltx_text\" id=\"S5.T3.6.11.5.1.1\" style=\"font-size:90%;\">veh. only</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T3.6.11.5.2\"><span class=\"ltx_text\" id=\"S5.T3.6.11.5.2.1\" style=\"font-size:90%;\">17.8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T3.6.11.5.3\"><span class=\"ltx_text\" id=\"S5.T3.6.11.5.3.1\" style=\"font-size:90%;\">10.9</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T3.6.11.5.4\"><span class=\"ltx_text\" id=\"S5.T3.6.11.5.4.1\" style=\"font-size:90%;\">15.6</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T3.6.11.5.5\"><span class=\"ltx_text\" id=\"S5.T3.6.11.5.5.1\" style=\"font-size:90%;\">9.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T3.6.11.5.6\"><span class=\"ltx_text\" id=\"S5.T3.6.11.5.6.1\" style=\"font-size:90%;\">-</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 108 |
+
"capture": "TABLE III: Performance under different transmission packet dropout ratios. The transmission threshold is set to 0.3."
|
| 109 |
+
}
|
| 110 |
+
},
|
| 111 |
+
"image_paths": {
|
| 112 |
+
"1": {
|
| 113 |
+
"figure_path": "2308.01804v3_figure_1.png",
|
| 114 |
+
"caption": "Figure 1: Query cooperation enables instance-level feature cooperation, which is more interpretable than scene-level feature cooperation and more flexible than instance-level result cooperation.",
|
| 115 |
+
"url": "http://arxiv.org/html/2308.01804v3/extracted/2308.01804v3/figure/query-coop.jpg"
|
| 116 |
+
},
|
| 117 |
+
"2": {
|
| 118 |
+
"figure_path": "2308.01804v3_figure_2.png",
|
| 119 |
+
"caption": "Figure 2: Architecture of QUEST framework.",
|
| 120 |
+
"url": "http://arxiv.org/html/2308.01804v3/extracted/2308.01804v3/figure/QUEST.png"
|
| 121 |
+
},
|
| 122 |
+
"3": {
|
| 123 |
+
"figure_path": "2308.01804v3_figure_3.png",
|
| 124 |
+
"caption": "Figure 3: Illustration of the location grid for dual-space query embedding. Compared with the exact center-based matching, grid-based matching is more robust with location noise.",
|
| 125 |
+
"url": "http://arxiv.org/html/2308.01804v3/extracted/2308.01804v3/figure/dse.jpg"
|
| 126 |
+
},
|
| 127 |
+
"4": {
|
| 128 |
+
"figure_path": "2308.01804v3_figure_4.png",
|
| 129 |
+
"caption": "Figure 4: Illustration of the cross-agent query complementation. The local queries with low confidence scores are replaced with the received queries to reduce additional computational costs.",
|
| 130 |
+
"url": "http://arxiv.org/html/2308.01804v3/extracted/2308.01804v3/figure/query-compensation.png"
|
| 131 |
+
},
|
| 132 |
+
"5": {
|
| 133 |
+
"figure_path": "2308.01804v3_figure_5.png",
|
| 134 |
+
"caption": "Figure 5: Visualization examples at different scenes. Red: groundtruth. Blue: predictions of QUEST.",
|
| 135 |
+
"url": "http://arxiv.org/html/2308.01804v3/extracted/2308.01804v3/figure/demo.jpg"
|
| 136 |
+
},
|
| 137 |
+
"6": {
|
| 138 |
+
"figure_path": "2308.01804v3_figure_6.png",
|
| 139 |
+
"caption": "Figure 6: Examples of the generated camera-centric cooperation labels and the corresponding LiDAR-centric labels from [26]. Left: LiDAR-centric labels. Right: camera-centric labels. The generated labels will be made publicly available at GitHub upon publication.",
|
| 140 |
+
"url": "http://arxiv.org/html/2308.01804v3/extracted/2308.01804v3/figure/label.jpg"
|
| 141 |
+
},
|
| 142 |
+
"7": {
|
| 143 |
+
"figure_path": "2308.01804v3_figure_7.png",
|
| 144 |
+
"caption": "Figure 7: The change of performance and transmission cost under different transmission thresholds.",
|
| 145 |
+
"url": "http://arxiv.org/html/2308.01804v3/extracted/2308.01804v3/figure/flexible.png"
|
| 146 |
+
}
|
| 147 |
+
},
|
| 148 |
+
"validation": true,
|
| 149 |
+
"references": [],
|
| 150 |
+
"url": "http://arxiv.org/html/2308.01804v3"
|
| 151 |
+
}
|
20240522/2308.08670v3.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240522/2310.00263v3.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240522/2310.08559v4.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240522/2310.10064v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240522/2310.10274v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240522/2310.11287v3.json
ADDED
|
@@ -0,0 +1,526 @@
{
"title": "Assessing the Causal Impact of Humanitarian Aid on Food Security",
"abstract": "In the face of climate change-induced droughts, vulnerable regions encounter severe threats to food security, demanding urgent humanitarian assistance. This paper introduces a causal inference framework for the Horn of Africa, aiming to assess the impact of cash-based interventions on food crises. Our contributions include identifying causal relationships within the malnutrition system, harmonizing a comprehensive database including socio-economic, weather and remote sensing data, and estimating the causal effect of cash-based interventions on malnutrition. On a country level, our results revealed no significant effects, likely due to limited sample size, suboptimal data quality, and an imperfect causal graph resulting from our limited understanding of multidisciplinary systems like malnutrition. Instead, on a district level, results revealed significant effects, further implying the context-specific nature of the system. This underscores the need to enhance data collection and refine causal models with domain experts for more effective future interventions and policies, improving transparency and accountability in humanitarian aid.",
"sections": [
{
"section_id": "1",
"parent_section_id": null,
"section_name": "Introduction",
"text": "In a world where climate change is rapidly accelerating, droughts are becoming more frequent and severe, posing a serious challenge to food security in the most vulnerable regions of our planet. In this context, communities that rely solely on rainfall for their livelihoods are especially at risk, often requiring immediate humanitarian assistance to survive [1 ###reference_b1###, 2 ###reference_b2###]. Failure to act or provide adequate aid can have immense consequences, including devastating economic losses, mass displacement of people, malnutrition in infants, and elevated mortality rates due to hunger and famine [3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###]. Humanitarian organizations are facing a significant challenge due to the widening gap between funding and the needs of the people affected by food crises [6 ###reference_b6###, 7 ###reference_b7###]. As a result, designing effective humanitarian interventions in resource-constrained situations has become a critical issue. Despite numerous comprehensive reviews, there is still a lack of solid evidence to identify the best strategies to help populations affected by crises [8 ###reference_b8###]. Cash-based and voucher aid programs are considered effective in emergencies, but their cost-effectiveness varies by context [9 ###reference_b9###]. Standardized methods for evaluating cash-based interventions in food emergencies are lacking [8 ###reference_b8###]. Our aim is to determine the impact of interventions, using observational causal inference to enhance intervention design, and transparency in charity, and improve cash-based aid outcomes during extreme droughts.\nThe Horn of Africa has witnessed a concerning rise in acute malnutrition, affecting 6.5 million people in 2022 [6 ###reference_b6###]. Prolonged dry spells significantly contribute to this crisis [10 ###reference_b10###], yet it is crucial to recognize that droughts are not the sole driver. Various factors, including hydrological conditions, food production capabilities, market access, insufficient aid, conflicts, and displacement, play a significant role [11 ###reference_b11###, 12 ###reference_b12###, 13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###]. Studying malnutrition in this context is intricate, involving multiple variables, scales, and non-linear relationships. Predictive Machine Learning (ML) techniques are not suited to understanding the causes and estimating the causal effect by default [16 ###reference_b16###, 17 ###reference_b17###], instead, this paper focuses on causal inference, specifically assessing the impact of cash-based interventions during the 2016, 2018, and 2022 Horn of Africa droughts. Our aim is to demonstrate the application of causal inference for evaluating the effectiveness of cash-based interventions in malnutrition scenarios."
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "Related Work",
"text": "In recent years, the surge in available data has enabled us to assess the impact of climate change on food insecurity. This data originates from diverse sources, encompassing Earth observation products [18 ###reference_b18###] and systematic socioeconomic data collection programs [19 ###reference_b19###, 20 ###reference_b20###]. Leveraging this wealth of data, we can estimate causal effects from observations [16 ###reference_b16###, 17 ###reference_b17###]. This approach is particularly vital in domains where conducting controlled experiments is impractical, costly, or unethical, with food insecurity research being a prominent example. Observational data for causal inference has gained prominence across various disciplines, including ecology [21 ###reference_b21###], agriculture [22 ###reference_b22###, 23 ###reference_b23###, 24 ###reference_b24###], public policy [25 ###reference_b25###, 26 ###reference_b26###], and Earth sciences [27 ###reference_b27###, 28 ###reference_b28###]. While there have been subjective and technical assessments of cash-based interventions in emergency contexts [8 ###reference_b8###, 29 ###reference_b29###], to the best of our knowledge, this is the first effort to apply modern observational causal inference methods to evaluate the effectiveness of cash-based aid in a food emergency context. It is also the first time such a broad database of driving factors has been used for this purpose. The contributions of our work are summarized as follows: i) identifying the overarching causal graph and the drivers of malnutrition in the Horn of Africa, ii) building a harmonized database with the best available data suitable to evaluate cash-based interventions, iii) the estimation of the causal effect of cash-based interventions on malnutrition."
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "Causal Inference",
"text": "Deep Learning (DL) techniques have achieved considerable success in various domains, including computer vision, natural language processing or graph representation learning. DL has shown increased evidence of the potential to address problems in Earth and climate sciences as well [30 ###reference_b30###]. Since 2014, DL applied to Earth Observation has grown exponentially, triggered by the extensive and highly available data sources, and the methodological advancements in DL [31 ###reference_b31###]. However, deploying DL models in real-world scenarios presents challenges such as reduced generalization performance with shifts in data distribution [32 ###reference_b32###], biased predictions perpetuating unfair discrimination [33 ###reference_b33###, 34 ###reference_b34###], or abstract interpretability notions [35 ###reference_b35###]. These issues are partially attributed to the absence of causal formalism in modern Machine Learning (ML) systems, leading to a growing interest in causal machine learning (CausalML), which incorporates causal knowledge into ML methods [36 ###reference_b36###].\nTo reason about the causal effects of certain random variables on others, first, we need to codify causal relations. Causal inference provides a language for formalizing structural knowledge about the data-generating process [37 ###reference_b37###] with which we can estimate what will happen to data after changes, called interventions. The canonical representation of causal relations is a causal Direct Acyclic Graph (DAG), which can encode a priori assumptions about the causal structure of interest. In causal modeling, assumptions are crucial, as establishing relations solely on observational data can prove challenging. Background knowledge and domain expertise are a common source of assumptions in causal inference. Randomized controlled trials (RCTs) are also a gold standard for establishing causality because, under certain conditions, random assignment helps control for confounding variables. However, RCTs also bring to attention when this sort of experiment cannot be performed. When studying complex dynamic systems such as the malnutrition system, replicating interventional experiments could prove infeasible or unethical. Therefore, when modeling the causal relations of the malnutrition system involving climate and socio-economic dynamics, multi-scale and non-linear drivers, we rely solely on the information provided by background knowledge and associated literature."
},
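To make the role of the DAG concrete, here is a minimal sketch of how such a causal graph can be encoded and sanity-checked in Python with networkx; the nodes and edges are illustrative, loosely based on the drivers named in this paper, not the paper's actual Figure 1.

import networkx as nx

# Hypothetical malnutrition DAG; edges are illustrative guesses,
# not the structure from the paper's Figure 1.
dag = nx.DiGraph([
    ("ENSO", "SPI"),                          # climate variability drives dry spells
    ("SPI", "Sorghum production"),
    ("Sorghum production", "Market prices"),
    ("Market prices", "Cash interventions"),  # need influences who receives aid
    ("Market prices", "GAM"),
    ("Fatalities", "Displacement"),
    ("Displacement", "GAM"),
    ("Cash interventions", "GAM"),            # treatment -> outcome
])
assert nx.is_directed_acyclic_graph(dag)      # a causal DAG must be acyclic
print(sorted(dag.predecessors("GAM")))        # direct causes of the outcome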
{
"section_id": "4",
"parent_section_id": null,
"section_name": "Data and Methods",
"text": ""
},
{
"section_id": "4.1",
"parent_section_id": "4",
"section_name": "Notation and terminology.",
"text": "In this paper, we assess the impact of cash interventions (treatment) on malnutrition (outcome) using a DAG denoted as (see Figure 1 ###reference_###). The set of vertices, labeled by , represents relevant variables, and directed edges in set indicate causation from one variable to another [37 ###reference_b37###]. We employ the -operator to describe interventions. denotes the probability that when we intervene by setting the value of to . Here, is the treatment variable (cash interventions), and is the outcome variable (Global Acute Malnutrition, GAM).\n###figure_1###"
},
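Written out explicitly, the interventional quantities behind this notation are the standard ones (reconstructed here in LaTeX, since the extracted text dropped the inline math):

% Outcome distribution under an intervention that sets the treatment T to t
P(y \mid do(T = t))

% Average Treatment Effect for a binary treatment
\mathrm{ATE} = \mathbb{E}[Y \mid do(T = 1)] - \mathbb{E}[Y \mid do(T = 0)]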
{
"section_id": "4.2",
"parent_section_id": "4",
"section_name": "Data",
"text": "Malnutrition is influenced by various climatic, economic, and social factors, as represented in our DAG, which reflects the dynamics of agropastoralist households in drought displacement situations in Somalia [42 ###reference_b42###]. We collect and harmonize data for the variables in the DAG from multiple sources (Table 1 ###reference_###). Our outcome variable is the malnutrition index GAM [19 ###reference_b19###], and the treatment variable is a proxy for cash interventions, reflecting the number of individuals who received money in the form of credit or remittances [19 ###reference_b19###]. We also collect data on El Ni\u00f1o Southern Oscillation (ENSO) to account for climate variability [43 ###reference_b43###] and use the Standardized Precipitation Index (SPI) to characterize dry spells [44 ###reference_b44###, 18 ###reference_b18###]. Socio-economic data include monthly market prices of livestock, staple food, water, and sorghum production [45 ###reference_b45###]. We measure conflict levels using a proxy based on recorded fatalities [46 ###reference_b46###] and incorporate data on drought-induced internal displacement [20 ###reference_b20###]. All data are aggregated annually and by district."
},
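Since the text relies on the SPI to characterize dry spells without spelling out its computation, a minimal sketch may help; this version assumes the common gamma-fit formulation with a point mass at zero, and the accumulation window and fitting details are assumptions, not taken from the paper.

import numpy as np
from scipy import stats

def spi(precip: np.ndarray) -> np.ndarray:
    """Rough Standardized Precipitation Index for an accumulation series.

    Fits a gamma distribution to the non-zero totals, treats zeros as a
    point mass, and maps the resulting CDF through the standard-normal
    quantile function.
    """
    x = np.asarray(precip, dtype=float)
    q = np.mean(x == 0.0)                        # probability mass at zero
    a, loc, scale = stats.gamma.fit(x[x > 0], floc=0)
    cdf = q + (1.0 - q) * stats.gamma.cdf(x, a, loc=loc, scale=scale)
    cdf = np.clip(cdf, 1e-6, 1.0 - 1e-6)         # keep the quantile finite
    return stats.norm.ppf(cdf)                   # negative values = drier than usual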
{
"section_id": "4.3",
"parent_section_id": "4",
"section_name": "Problem Formulation",
"text": "To estimate the Average Treatment Effect (ATE), , we identify an adjustment set . We apply the back-door criterion, which relies on a graphical test to determine whether adjusting for a set of graph nodes is sufficient for estimating . We find the parent adjustment set that is sufficient for estimating the ATE, which is {Market Prices, Sorghum Production, Fatalities, Drought-induced internal displacements, Population}. Utilizing the Potential Outcomes framework, our ATE estimation aims to capture the difference between the average GAM values under cash-based aid exceeding a chosen threshold and the average value of the outcome when cash-based aid falls below that threshold. To estimate the effect, we use several methods of varying complexity. Linear regression (LR) and distance matching (M) are selected as baseline estimation methods. The popular Inverse Propensity Score weighting (IPS W) is also used [47 ###reference_b47###], as well as modern machine learning methods, the T-learner (T-L) and X-learner (X-L) [48 ###reference_b48###]. Given the unavailability of observed ground truth estimates, we resort to performing refutation tests, in line with recent research [49 ###reference_b49###, 50 ###reference_b50###], to assess the robustness of our models. We perform the following tests: i) Placebo treatment, where the treatment is randomly permuted, and the estimated effect is expected to drop to 0; ii) Random Common Cause (RCC), where a random confounder is added to the dataset and the estimate is expected to remain unchanged; iii) Random Subset Removal (RSR), where a subset of data is randomly removed and the effect is expected to remain the same."
},
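The identification-estimation-refutation pipeline described above maps directly onto the doWhy library used in the next section. A hedged sketch with synthetic stand-in data (all column names are invented for illustration; in the paper, each row would be a district-year):

import numpy as np
import pandas as pd
from dowhy import CausalModel

# Synthetic stand-in data so the sketch runs end to end.
rng = np.random.default_rng(0)
n = 378  # sample size reported for the country-level dataset
df = pd.DataFrame({
    "market_prices": rng.normal(size=n),
    "sorghum_production": rng.normal(size=n),
    "fatalities": rng.normal(size=n),
    "drought_displacement": rng.normal(size=n),
    "population": rng.normal(size=n),
})
df["cash_above_threshold"] = rng.integers(0, 2, size=n).astype(bool)
df["gam"] = (0.1 * df["market_prices"]
             - 0.2 * df["cash_above_threshold"]
             + rng.normal(size=n))

# Treatment, outcome, and the parent adjustment set listed in the text.
model = CausalModel(
    data=df,
    treatment="cash_above_threshold",
    outcome="gam",
    common_causes=["market_prices", "sorghum_production", "fatalities",
                   "drought_displacement", "population"],
)
estimand = model.identify_effect(proceed_when_unidentifiable=True)  # back-door
estimate = model.estimate_effect(
    estimand, method_name="backdoor.propensity_score_weighting")
print(estimate.value)

# The three refutation tests named above: placebo treatment,
# random common cause, and random subset removal.
for refuter in ("placebo_treatment_refuter", "random_common_cause",
                "data_subset_refuter"):
    print(model.refute_estimate(estimand, estimate, method_name=refuter))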
{
"section_id": "5",
"parent_section_id": null,
"section_name": "Implementation and Results",
"text": "For the experiments, we are using the doWhy [49 ###reference_b49###] and Causal ML [51 ###reference_b51###] Python libraries. As the treatment is a continuous variable, we binarize it assuming different thresholds. This can be interpreted as considering the treated group as those samples where the number of individuals who receive money surpass a certain threshold and the control group as the rest. We also remove samples where the treatment is close to the threshold in order not to violate the stable unit treatment value assumption (SUTVA) [37 ###reference_b37###]. We take the 35, 50, and 75 percentile values of the number of people who receive any form of cash as different threshold levels.\nFrom 2016 to 2022, we collected data spanning 57 districts in Somalia, resulting in a dataset of 378 samples. To address population differences between urban and agro-pastoral areas, we normalized the data per district population. We framed the problem as an ATE estimation task by converting the number of people receiving money into a binary variable using various thresholds. The estimation represents the percentage of malnourished people who would have been affected if specific thresholds of people receiving money had been reached (Table 2 ###reference_###). While all estimations show a reduction in the percentage of people with GAM as more individuals receive cash interventions, none are statistically significant at the 95% confidence level. This outcome is expected due to data scarcity and the complexity of the real problem. It is impossible to account for all system drivers, but ongoing efforts aim to improve our understanding and reduce bias by addressing unaccounted major drivers and acquiring more observational data. The humanitarian community has established data repositories, but there\u2019s a need for enhanced and broader data collection following FAIR principles (Findability, Accessibility, Interoperability, and Reusable). Additionally, our country-level DAG may not fully capture context-specific relationships and localized impacts on the ground, including factors like past drought events, the political situation, poverty levels, and livelihood options, which significantly influence intervention effectiveness [42 ###reference_b42###].\nWe perform a last experiment, where we only consider a single district, Baidoa, as we know it contains above-average data quality. We resample the time resolution to monthly to increase the sample size, even though it doesn\u2019t provide additional information for seasonal variables, and run the same experiments as in the country-level case. We find statistically significant results in most of the experiments, reaffirming the conclusion that both data quality and localization of the problem are key features of these experiments, and causal assumptions are not fulfilled if these aspects are not considered."
},
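The thresholding-and-exclusion step described above can be sketched as follows; the column name and the width of the exclusion band are assumptions, since the paper does not report them.

import pandas as pd

def binarize_treatment(df: pd.DataFrame, col: str, pct: float,
                       band: float = 5.0) -> pd.DataFrame:
    """Binarize a continuous treatment at the pct-th percentile and drop
    samples inside a +/- band (in percentile points) around the cut, so
    near-threshold units do not blur the treated/control contrast."""
    lo = df[col].quantile((pct - band) / 100.0)
    hi = df[col].quantile((pct + band) / 100.0)
    out = df[(df[col] <= lo) | (df[col] >= hi)].copy()
    out["treated"] = (out[col] >= hi).astype(int)
    return out

# One dataset per threshold used in the paper (35th, 50th, 75th percentile):
# datasets = {p: binarize_treatment(df, "cash_recipients_pc", p) for p in (35, 50, 75)}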
{
"section_id": "6",
"parent_section_id": null,
"section_name": "CauseMe platform",
"text": "Given the data quality challenges and the context-driven nature of malnutrition systems, our focus turns to data-driven causal discovery as a crucial methodology. The complexity of multi-domain issues, such as malnutrition, renders traditional expertise insufficient for constructing robust DAGs. Enter the innovative CauseMe platform [52 ###reference_b52###], functioning as the link between causal inference experts and domain specialists. This platform empowers non-causality experts to employ data-driven causal discovery methods, facilitating data exploration and initial DAG construction. Developed as an accessible tool for causal discovery methods, the platform [53 ###reference_b53###] aims to democratize access to these techniques across diverse scientific fields. Catering to experts less versed in causality but eager to conduct further data experiments, CauseMe allows the execution of various causal discovery methods on time series data through an interactive interface. Users can tweak method parameters, select variables for causal analysis, and obtain graphical representations showcasing the returned causal relationships. Moreover, CauseMe facilitates the interpretation of these graphs through a Large Language Model (LLM), allowing users to input contextual information about the data, thereby receiving explanations of the obtained graph."
},
{
"section_id": "7",
"parent_section_id": null,
"section_name": "Conclusions",
"text": "Optimally distributing available resources and evaluating how, who, where, and when to intervene is crucial to mitigating climate change impacts. In this paper, we presented a novel data-driven approach for assessing the effectiveness of cash-based interventions in food emergencies through the lens of causal inference. We constructed a DAG to capture the dynamics of malnutrition under drought conditions and collected data characterizing the system. Our goal was to estimate the causal effects of cash-based interventions on reducing district-level global acute malnutrition across Somalia. Preliminary country-wise results did not reach statistical significance, although a singular district analysis did, prompting further steps: i) identifying more suitable treatment variables, ii) refining the causal graph with domain experts, iii) gaining insights on the spatio-temporal heterogeneity of impact of interventions through Conditional Average Treatment Effects (CATE) [54 ###reference_b54###]. If data allows it, causal inference can be used to assess the efficacy of interventions in specific locations, supporting targeted aid where on-ground surveys are not feasible. The proposed approach could promote greater accountability and transparency amongst humanitarian actors, encouraging individuals to contribute to impactful and traceable aid."
},
{
"section_id": "8",
"parent_section_id": null,
"section_name": "Acknowledgments",
"text": "This work was supported by the Fundaci\u00f3n BBVA with the project \u2018Causal inference in the human biosphere coupled system (SCALE)\u2019 ###reference_-ayudas-a-equipos-de-investigacion-cientifica-en-big-data/###, the Microsoft Climate Research Initiative through the Causal4Africa ###reference_ollaboration/microsoft-climate-research-initiative/projects/### project, the European Union\u2019s Horizon Europe Research and Innovation Program through the ThinkingEarth ###reference_copernicus-foundation-models-thinking-earth### project (under Grant Agreement number 101130544) and the GVA PROMETEO ###reference_isp.uv.es/ai4cs### AI4CS\nproject on \u2018AI for complex systems\u2019 (2022-2026) with\nCIPROM/2021/056."
}
],
"appendix": [],
"tables": {
"1": {
"table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.4.1.1\">Table 1</span>: </span><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.5.2\">Variables and sources used in the study, temporal and spatial resolution.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S4.T1.1.2.1.1\">Variable</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S4.T1.1.2.1.2\">Source</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.1.2.1.3\">Spatial Resolution</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.1.2.1.4\">Temporal Resolution</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.3.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T1.1.3.1.1\">ENSO</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T1.1.3.1.2\">\n<a class=\"ltx_ref ltx_href\" href=\"https://climexp.knmi.nl/selectdailyindex.cgi?id=someone@somewhere\" title=\"\">WMO</a>\u00a0<cite class=\"ltx_cite ltx_citemacro_citep\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.11287v3#bib.bib38\" title=\"\">38</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.1.3.1.3\">Country</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.1.3.1.4\">Daily</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.2\">Standarized precipitation Index</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.3\">\n<a class=\"ltx_ref ltx_href\" href=\"https://developers.google.com/earth-engine/datasets/catalog/UCSB-CHG_CHIRPS_DAILY#bands\" title=\"\">CHIRPS</a>\u00a0<cite class=\"ltx_cite ltx_citemacro_citep\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.11287v3#bib.bib18\" title=\"\">18</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.1.1\">\n\u00ba</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.1.4\">Daily</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.4.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.4.2.1\">Violent Conflict</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.4.2.2\">\n<a class=\"ltx_ref ltx_href\" href=\"https://acleddata.com/#/dashboard\" title=\"\">ACLED</a>\u00a0<cite class=\"ltx_cite ltx_citemacro_citep\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.11287v3#bib.bib39\" title=\"\">39</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.4.2.3\">Geolocated Event</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.4.2.4\">Hourly</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.5.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.5.3.1\">Local Market Prices</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.5.3.2\">\n<a class=\"ltx_ref ltx_href\" href=\"https://fsnau.org/ids/dashboard.php\" title=\"\">FSNAU</a>\u00a0<cite class=\"ltx_cite ltx_citemacro_citep\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.11287v3#bib.bib19\" title=\"\">19</a>]</cite>\n</th>\n<td class=\"ltx_td 
ltx_align_left\" id=\"S4.T1.1.5.3.3\">District</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.5.3.4\">Monthly</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.6.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.6.4.1\">Sorghum Yield Production</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.6.4.2\">\n<a class=\"ltx_ref ltx_href\" href=\"https://fsnau.org/ids/dashboard.php\" title=\"\">FSNAU</a>\u00a0<cite class=\"ltx_cite ltx_citemacro_citep\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.11287v3#bib.bib19\" title=\"\">19</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.6.4.3\">District</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.6.4.4\">Seasonal</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.7.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.7.5.1\">Drought Displacement</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.7.5.2\">\n<a class=\"ltx_ref ltx_href\" href=\"https://unhcr.github.io/dataviz-somalia-prmn/index.html\" title=\"\">UNHCR PRMN</a>\u00a0<cite class=\"ltx_cite ltx_citemacro_citep\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.11287v3#bib.bib20\" title=\"\">20</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.7.5.3\">District</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.7.5.4\">Weekly</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.8.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.8.6.1\">Somalia Districts</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.8.6.2\">\n<a class=\"ltx_ref ltx_href\" href=\"https://data.humdata.org/dataset/cod-ab-som\" title=\"\">UNDP</a>\u00a0<cite class=\"ltx_cite ltx_citemacro_citep\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.11287v3#bib.bib40\" title=\"\">40</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.8.6.3\">District</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.8.6.4\">Static</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.9.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.9.7.1\">Population</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.9.7.2\">\n<a class=\"ltx_ref ltx_href\" href=\"https://data.humdata.org/dataset/cod-ps-som?\" title=\"\">UNFPA</a>\u00a0<cite class=\"ltx_cite ltx_citemacro_citep\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.11287v3#bib.bib41\" title=\"\">41</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.9.7.3\">District</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.9.7.4\">Static, 2021</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.10.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.10.8.1\">Number of individuals that received cash</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.10.8.2\">\n<a class=\"ltx_ref ltx_href\" href=\"https://fsnau.org/ids/dashboard.php\" title=\"\">FSNAU</a>\u00a0<cite class=\"ltx_cite ltx_citemacro_citep\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.11287v3#bib.bib19\" title=\"\">19</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.10.8.3\">District</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.10.8.4\">Monthly</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.11.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b\" id=\"S4.T1.1.11.9.1\">Global Acute Malnutrition</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row 
ltx_border_b\" id=\"S4.T1.1.11.9.2\">\n<a class=\"ltx_ref ltx_href\" href=\"https://fsnau.org/ids/dashboard.php\" title=\"\">FSNAU</a>\u00a0<cite class=\"ltx_cite ltx_citemacro_citep\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.11287v3#bib.bib19\" title=\"\">19</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T1.1.11.9.3\">District</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T1.1.11.9.4\">Monthly</td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "Table 1: Variables and sources used in the study, temporal and spatial resolution."
},
"2": {
"table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.7.1.1\">Table 2</span>: </span><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.8.2\">Area, threshold, method, ATE, 95% confidence intervals and p-values. Refutation tests fail if their p-value is less than 0.05. Numbers are in the percentage of people in GAM per capita. Results for Somalia and the Baidoa district.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.4\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.4.5.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" colspan=\"6\" id=\"S4.T2.4.5.1.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T2.4.5.1.1.1\" style=\"font-size:90%;\">Cause Effect Estimation</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"6\" id=\"S4.T2.4.5.1.2\"><span class=\"ltx_text\" id=\"S4.T2.4.5.1.2.1\" style=\"font-size:90%;\">Refutation Tests</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.6.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"2\" id=\"S4.T2.4.6.2.1\"><span class=\"ltx_text\" id=\"S4.T2.4.6.2.1.1\" style=\"font-size:90%;\">Placebo</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"2\" id=\"S4.T2.4.6.2.2\"><span class=\"ltx_text\" id=\"S4.T2.4.6.2.2.1\" style=\"font-size:90%;\">RCC</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"2\" id=\"S4.T2.4.6.2.3\"><span class=\"ltx_text\" id=\"S4.T2.4.6.2.3.1\" style=\"font-size:90%;\">RSR</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.7.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S4.T2.4.7.3.1\"><span class=\"ltx_text\" id=\"S4.T2.4.7.3.1.1\" style=\"font-size:90%;\">Area</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S4.T2.4.7.3.2\"><span class=\"ltx_text\" id=\"S4.T2.4.7.3.2.1\" style=\"font-size:90%;\">Th</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.7.3.3\"><span class=\"ltx_text\" id=\"S4.T2.4.7.3.3.1\" style=\"font-size:90%;\">Method</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.7.3.4\"><span class=\"ltx_text\" id=\"S4.T2.4.7.3.4.1\" style=\"font-size:90%;\">ATE</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.7.3.5\"><span class=\"ltx_text\" id=\"S4.T2.4.7.3.5.1\" style=\"font-size:90%;\">CI</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.7.3.6\"><span class=\"ltx_text\" id=\"S4.T2.4.7.3.6.1\" style=\"font-size:90%;\">p-value</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.7.3.7\"><span class=\"ltx_text\" id=\"S4.T2.4.7.3.7.1\" style=\"font-size:90%;\">Effect*</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.7.3.8\"><span class=\"ltx_text\" id=\"S4.T2.4.7.3.8.1\" style=\"font-size:90%;\">p-value</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.7.3.9\"><span class=\"ltx_text\" id=\"S4.T2.4.7.3.9.1\" style=\"font-size:90%;\">Effect*</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.7.3.10\"><span class=\"ltx_text\" id=\"S4.T2.4.7.3.10.1\" style=\"font-size:90%;\">p-value</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.7.3.11\"><span class=\"ltx_text\" id=\"S4.T2.4.7.3.11.1\" 
style=\"font-size:90%;\">Effect*</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.7.3.12\"><span class=\"ltx_text\" id=\"S4.T2.4.7.3.12.1\" style=\"font-size:90%;\">p-value</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.4\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T2.4.4.5\"></th>\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T2.4.4.6\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.1\">\n<span class=\"ltx_text\" id=\"S4.T2.1.1.1.1\" style=\"font-size:90%;\">(</span><span class=\"ltx_text\" id=\"S4.T2.1.1.1.2\" style=\"font-size:90%;\">)</span>\n</td>\n<td class=\"ltx_td\" id=\"S4.T2.4.4.7\"></td>\n<td class=\"ltx_td\" id=\"S4.T2.4.4.8\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.2\">\n<span class=\"ltx_text\" id=\"S4.T2.2.2.2.1\" style=\"font-size:90%;\">(</span><span class=\"ltx_text\" id=\"S4.T2.2.2.2.2\" style=\"font-size:90%;\">)</span>\n</td>\n<td class=\"ltx_td\" id=\"S4.T2.4.4.9\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.3.3\">\n<span class=\"ltx_text\" id=\"S4.T2.3.3.3.1\" style=\"font-size:90%;\">(</span><span class=\"ltx_text\" id=\"S4.T2.3.3.3.2\" style=\"font-size:90%;\">)</span>\n</td>\n<td class=\"ltx_td\" id=\"S4.T2.4.4.10\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4\">\n<span class=\"ltx_text\" id=\"S4.T2.4.4.4.1\" style=\"font-size:90%;\">(</span><span class=\"ltx_text\" id=\"S4.T2.4.4.4.2\" style=\"font-size:90%;\">)</span>\n</td>\n<td class=\"ltx_td\" id=\"S4.T2.4.4.11\"></td>\n<td class=\"ltx_td\" id=\"S4.T2.4.4.12\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.8.4\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_t\" id=\"S4.T2.4.8.4.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S4.T2.4.8.4.2\"><span class=\"ltx_text\" id=\"S4.T2.4.8.4.2.1\" style=\"font-size:90%;\">35</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.8.4.3\"><span class=\"ltx_text\" id=\"S4.T2.4.8.4.3.1\" style=\"font-size:90%;\">LR</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.8.4.4\"><span class=\"ltx_text\" id=\"S4.T2.4.8.4.4.1\" style=\"font-size:90%;\">-0.743</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.8.4.5\"><span class=\"ltx_text\" id=\"S4.T2.4.8.4.5.1\" style=\"font-size:90%;\">(-0.001, 0.001)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.8.4.6\"><span class=\"ltx_text\" id=\"S4.T2.4.8.4.6.1\" style=\"font-size:90%;\">0.842</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.8.4.7\"><span class=\"ltx_text\" id=\"S4.T2.4.8.4.7.1\" style=\"font-size:90%;\">-0.000</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.8.4.8\"><span class=\"ltx_text\" id=\"S4.T2.4.8.4.8.1\" style=\"font-size:90%;\">0.860</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.8.4.9\"><span class=\"ltx_text\" id=\"S4.T2.4.8.4.9.1\" style=\"font-size:90%;\">-0.000</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.8.4.10\"><span class=\"ltx_text\" id=\"S4.T2.4.8.4.10.1\" style=\"font-size:90%;\">0.880</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.8.4.11\"><span class=\"ltx_text\" id=\"S4.T2.4.8.4.11.1\" style=\"font-size:90%;\">-0.000</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.8.4.12\"><span class=\"ltx_text\" id=\"S4.T2.4.8.4.12.1\" 
style=\"font-size:90%;\">0.940</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.9.5\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T2.4.9.5.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.4.9.5.2\"><span class=\"ltx_text\" id=\"S4.T2.4.9.5.2.1\" style=\"font-size:90%;\">35</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.9.5.3\"><span class=\"ltx_text\" id=\"S4.T2.4.9.5.3.1\" style=\"font-size:90%;\">M</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.9.5.4\"><span class=\"ltx_text\" id=\"S4.T2.4.9.5.4.1\" style=\"font-size:90%;\">-3.917</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.9.5.5\"><span class=\"ltx_text\" id=\"S4.T2.4.9.5.5.1\" style=\"font-size:90%;\">(-0.001, 0.000)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.9.5.6\"><span class=\"ltx_text\" id=\"S4.T2.4.9.5.6.1\" style=\"font-size:90%;\">0.306</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.9.5.7\"><span class=\"ltx_text\" id=\"S4.T2.4.9.5.7.1\" style=\"font-size:90%;\">0.000</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.9.5.8\"><span class=\"ltx_text\" id=\"S4.T2.4.9.5.8.1\" style=\"font-size:90%;\">0.940</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.9.5.9\"><span class=\"ltx_text\" id=\"S4.T2.4.9.5.9.1\" style=\"font-size:90%;\">-0.000</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.9.5.10\"><span class=\"ltx_text\" id=\"S4.T2.4.9.5.10.1\" style=\"font-size:90%;\">1.000</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.9.5.11\"><span class=\"ltx_text\" id=\"S4.T2.4.9.5.11.1\" style=\"font-size:90%;\">-0.000</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.9.5.12\"><span class=\"ltx_text\" id=\"S4.T2.4.9.5.12.1\" style=\"font-size:90%;\">0.900</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.10.6\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T2.4.10.6.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.4.10.6.2\"><span class=\"ltx_text\" id=\"S4.T2.4.10.6.2.1\" style=\"font-size:90%;\">35</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.10.6.3\"><span class=\"ltx_text\" id=\"S4.T2.4.10.6.3.1\" style=\"font-size:90%;\">IPS W</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.10.6.4\"><span class=\"ltx_text\" id=\"S4.T2.4.10.6.4.1\" style=\"font-size:90%;\">-1.170</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.10.6.5\"><span class=\"ltx_text\" id=\"S4.T2.4.10.6.5.1\" style=\"font-size:90%;\">(-0.001, 0.001)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.10.6.6\"><span class=\"ltx_text\" id=\"S4.T2.4.10.6.6.1\" style=\"font-size:90%;\">0.787</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.10.6.7\"><span class=\"ltx_text\" id=\"S4.T2.4.10.6.7.1\" style=\"font-size:90%;\">1.056</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.10.6.8\"><span class=\"ltx_text\" id=\"S4.T2.4.10.6.8.1\" style=\"font-size:90%;\">0.680</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.10.6.9\"><span class=\"ltx_text\" id=\"S4.T2.4.10.6.9.1\" style=\"font-size:90%;\">-1.17</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.10.6.10\"><span class=\"ltx_text\" id=\"S4.T2.4.10.6.10.1\" style=\"font-size:90%;\">1.000</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.10.6.11\"><span class=\"ltx_text\" id=\"S4.T2.4.10.6.11.1\" style=\"font-size:90%;\">-1.310</span></td>\n<td class=\"ltx_td 
ltx_align_center\" id=\"S4.T2.4.10.6.12\"><span class=\"ltx_text\" id=\"S4.T2.4.10.6.12.1\" style=\"font-size:90%;\">1.000</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.11.7\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T2.4.11.7.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.4.11.7.2\"><span class=\"ltx_text\" id=\"S4.T2.4.11.7.2.1\" style=\"font-size:90%;\">35</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.11.7.3\"><span class=\"ltx_text\" id=\"S4.T2.4.11.7.3.1\" style=\"font-size:90%;\">T-L (RF)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.11.7.4\"><span class=\"ltx_text\" id=\"S4.T2.4.11.7.4.1\" style=\"font-size:90%;\">-1.348</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.11.7.5\"><span class=\"ltx_text\" id=\"S4.T2.4.11.7.5.1\" style=\"font-size:90%;\">(-0.001, 0.000)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.11.7.6\"><span class=\"ltx_text\" id=\"S4.T2.4.11.7.6.1\" style=\"font-size:90%;\">0.342</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.11.7.7\"><span class=\"ltx_text\" id=\"S4.T2.4.11.7.7.1\" style=\"font-size:90%;\">-1.348</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.11.7.8\"><span class=\"ltx_text\" id=\"S4.T2.4.11.7.8.1\" style=\"font-size:90%;\">0.000</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.11.7.9\"><span class=\"ltx_text\" id=\"S4.T2.4.11.7.9.1\" style=\"font-size:90%;\">-1.490</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.11.7.10\"><span class=\"ltx_text\" id=\"S4.T2.4.11.7.10.1\" style=\"font-size:90%;\">0.300</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.11.7.11\"><span class=\"ltx_text\" id=\"S4.T2.4.11.7.11.1\" style=\"font-size:90%;\">-1.875</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.11.7.12\"><span class=\"ltx_text\" id=\"S4.T2.4.11.7.12.1\" style=\"font-size:90%;\">0.380</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.12.8\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T2.4.12.8.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.4.12.8.2\"><span class=\"ltx_text\" id=\"S4.T2.4.12.8.2.1\" style=\"font-size:90%;\">35</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.12.8.3\"><span class=\"ltx_text\" id=\"S4.T2.4.12.8.3.1\" style=\"font-size:90%;\">X-L (RF)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.12.8.4\"><span class=\"ltx_text\" id=\"S4.T2.4.12.8.4.1\" style=\"font-size:90%;\">-2.335</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.12.8.5\"><span class=\"ltx_text\" id=\"S4.T2.4.12.8.5.1\" style=\"font-size:90%;\">(-0.001, 0.000)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.12.8.6\"><span class=\"ltx_text\" id=\"S4.T2.4.12.8.6.1\" style=\"font-size:90%;\">0.342</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.12.8.7\"><span class=\"ltx_text\" id=\"S4.T2.4.12.8.7.1\" style=\"font-size:90%;\">-1.348</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.12.8.8\"><span class=\"ltx_text\" id=\"S4.T2.4.12.8.8.1\" style=\"font-size:90%;\">0.000</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.12.8.9\"><span class=\"ltx_text\" id=\"S4.T2.4.12.8.9.1\" style=\"font-size:90%;\">-2.346</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.12.8.10\"><span class=\"ltx_text\" id=\"S4.T2.4.12.8.10.1\" style=\"font-size:90%;\">0.450</span></td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S4.T2.4.12.8.11\"><span class=\"ltx_text\" id=\"S4.T2.4.12.8.11.1\" style=\"font-size:90%;\">-2.726</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.12.8.12\"><span class=\"ltx_text\" id=\"S4.T2.4.12.8.12.1\" style=\"font-size:90%;\">0.410</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.13.9\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T2.4.13.9.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.4.13.9.2\"><span class=\"ltx_text\" id=\"S4.T2.4.13.9.2.1\" style=\"font-size:90%;\">50</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.13.9.3\"><span class=\"ltx_text\" id=\"S4.T2.4.13.9.3.1\" style=\"font-size:90%;\">LR</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.13.9.4\"><span class=\"ltx_text\" id=\"S4.T2.4.13.9.4.1\" style=\"font-size:90%;\">-0.168</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.13.9.5\"><span class=\"ltx_text\" id=\"S4.T2.4.13.9.5.1\" style=\"font-size:90%;\">(-0.001, 0.001)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.13.9.6\"><span class=\"ltx_text\" id=\"S4.T2.4.13.9.6.1\" style=\"font-size:90%;\">0.971</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.13.9.7\"><span class=\"ltx_text\" id=\"S4.T2.4.13.9.7.1\" style=\"font-size:90%;\">0.000</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.13.9.8\"><span class=\"ltx_text\" id=\"S4.T2.4.13.9.8.1\" style=\"font-size:90%;\">0.960</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.13.9.9\"><span class=\"ltx_text\" id=\"S4.T2.4.13.9.9.1\" style=\"font-size:90%;\">-0.000</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.13.9.10\"><span class=\"ltx_text\" id=\"S4.T2.4.13.9.10.1\" style=\"font-size:90%;\">0.980</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.13.9.11\"><span class=\"ltx_text\" id=\"S4.T2.4.13.9.11.1\" style=\"font-size:90%;\">-0.000</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.13.9.12\"><span class=\"ltx_text\" id=\"S4.T2.4.13.9.12.1\" style=\"font-size:90%;\">0.960</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.14.10\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T2.4.14.10.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.4.14.10.2\"><span class=\"ltx_text\" id=\"S4.T2.4.14.10.2.1\" style=\"font-size:90%;\">50</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.14.10.3\"><span class=\"ltx_text\" id=\"S4.T2.4.14.10.3.1\" style=\"font-size:90%;\">M</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.14.10.4\"><span class=\"ltx_text\" id=\"S4.T2.4.14.10.4.1\" style=\"font-size:90%;\">-3.725</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.14.10.5\"><span class=\"ltx_text\" id=\"S4.T2.4.14.10.5.1\" style=\"font-size:90%;\">(-0.001, 0.000)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.14.10.6\"><span class=\"ltx_text\" id=\"S4.T2.4.14.10.6.1\" style=\"font-size:90%;\">0.380</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.14.10.7\"><span class=\"ltx_text\" id=\"S4.T2.4.14.10.7.1\" style=\"font-size:90%;\">-0.000</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.14.10.8\"><span class=\"ltx_text\" id=\"S4.T2.4.14.10.8.1\" style=\"font-size:90%;\">0.880</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.14.10.9\"><span class=\"ltx_text\" id=\"S4.T2.4.14.10.9.1\" style=\"font-size:90%;\">-0.000</span></td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S4.T2.4.14.10.10\"><span class=\"ltx_text\" id=\"S4.T2.4.14.10.10.1\" style=\"font-size:90%;\">1.000</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.14.10.11\"><span class=\"ltx_text\" id=\"S4.T2.4.14.10.11.1\" style=\"font-size:90%;\">-0.000</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.14.10.12\"><span class=\"ltx_text\" id=\"S4.T2.4.14.10.12.1\" style=\"font-size:90%;\">0.740</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.15.11\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.4.15.11.1\"><span class=\"ltx_text\" id=\"S4.T2.4.15.11.1.1\" style=\"font-size:90%;\">Somalia</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.4.15.11.2\"><span class=\"ltx_text\" id=\"S4.T2.4.15.11.2.1\" style=\"font-size:90%;\">50</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.15.11.3\"><span class=\"ltx_text\" id=\"S4.T2.4.15.11.3.1\" style=\"font-size:90%;\">IPS W</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.15.11.4\"><span class=\"ltx_text\" id=\"S4.T2.4.15.11.4.1\" style=\"font-size:90%;\">-2.844</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.15.11.5\"><span class=\"ltx_text\" id=\"S4.T2.4.15.11.5.1\" style=\"font-size:90%;\">(-0.001, 0.001)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.15.11.6\"><span class=\"ltx_text\" id=\"S4.T2.4.15.11.6.1\" style=\"font-size:90%;\">0.554</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.15.11.7\"><span class=\"ltx_text\" id=\"S4.T2.4.15.11.7.1\" style=\"font-size:90%;\">-0.615</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.15.11.8\"><span class=\"ltx_text\" id=\"S4.T2.4.15.11.8.1\" style=\"font-size:90%;\">0.860</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.15.11.9\"><span class=\"ltx_text\" id=\"S4.T2.4.15.11.9.1\" style=\"font-size:90%;\">-2.844</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.15.11.10\"><span class=\"ltx_text\" id=\"S4.T2.4.15.11.10.1\" style=\"font-size:90%;\">1.000</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.15.11.11\"><span class=\"ltx_text\" id=\"S4.T2.4.15.11.11.1\" style=\"font-size:90%;\">-2.769</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.15.11.12\"><span class=\"ltx_text\" id=\"S4.T2.4.15.11.12.1\" style=\"font-size:90%;\">0.940</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.16.12\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.4.16.12.1\"><span class=\"ltx_text\" id=\"S4.T2.4.16.12.1.1\" style=\"font-size:90%;\">(Country)</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.4.16.12.2\"><span class=\"ltx_text\" id=\"S4.T2.4.16.12.2.1\" style=\"font-size:90%;\">50</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.16.12.3\"><span class=\"ltx_text\" id=\"S4.T2.4.16.12.3.1\" style=\"font-size:90%;\">T-L (RF)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.16.12.4\"><span class=\"ltx_text\" id=\"S4.T2.4.16.12.4.1\" style=\"font-size:90%;\">-0.875</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.16.12.5\"><span class=\"ltx_text\" id=\"S4.T2.4.16.12.5.1\" style=\"font-size:90%;\">(-0.001, 0.000)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.16.12.6\"><span class=\"ltx_text\" id=\"S4.T2.4.16.12.6.1\" style=\"font-size:90%;\">0.254</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.16.12.7\"><span class=\"ltx_text\" id=\"S4.T2.4.16.12.7.1\" 
style=\"font-size:90%;\">-0.875</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.16.12.8\"><span class=\"ltx_text\" id=\"S4.T2.4.16.12.8.1\" style=\"font-size:90%;\">0.000</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.16.12.9\"><span class=\"ltx_text\" id=\"S4.T2.4.16.12.9.1\" style=\"font-size:90%;\">-1.351</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.16.12.10\"><span class=\"ltx_text\" id=\"S4.T2.4.16.12.10.1\" style=\"font-size:90%;\">0.110</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.16.12.11\"><span class=\"ltx_text\" id=\"S4.T2.4.16.12.11.1\" style=\"font-size:90%;\">-2.177</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.16.12.12\"><span class=\"ltx_text\" id=\"S4.T2.4.16.12.12.1\" style=\"font-size:90%;\">0.320</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.17.13\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T2.4.17.13.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.4.17.13.2\"><span class=\"ltx_text\" id=\"S4.T2.4.17.13.2.1\" style=\"font-size:90%;\">50</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.17.13.3\"><span class=\"ltx_text\" id=\"S4.T2.4.17.13.3.1\" style=\"font-size:90%;\">X-L (RF)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.17.13.4\"><span class=\"ltx_text\" id=\"S4.T2.4.17.13.4.1\" style=\"font-size:90%;\">-2.318</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.17.13.5\"><span class=\"ltx_text\" id=\"S4.T2.4.17.13.5.1\" style=\"font-size:90%;\">(-0.001, 0.000)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.17.13.6\"><span class=\"ltx_text\" id=\"S4.T2.4.17.13.6.1\" style=\"font-size:90%;\">0.254</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.17.13.7\"><span class=\"ltx_text\" id=\"S4.T2.4.17.13.7.1\" style=\"font-size:90%;\">-0.875</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.17.13.8\"><span class=\"ltx_text\" id=\"S4.T2.4.17.13.8.1\" style=\"font-size:90%;\">0.000</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.17.13.9\"><span class=\"ltx_text\" id=\"S4.T2.4.17.13.9.1\" style=\"font-size:90%;\">-2.335</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.17.13.10\"><span class=\"ltx_text\" id=\"S4.T2.4.17.13.10.1\" style=\"font-size:90%;\">0.480</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.17.13.11\"><span class=\"ltx_text\" id=\"S4.T2.4.17.13.11.1\" style=\"font-size:90%;\">-2.880</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.17.13.12\"><span class=\"ltx_text\" id=\"S4.T2.4.17.13.12.1\" style=\"font-size:90%;\">0.410</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.18.14\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T2.4.18.14.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.4.18.14.2\"><span class=\"ltx_text\" id=\"S4.T2.4.18.14.2.1\" style=\"font-size:90%;\">75</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.18.14.3\"><span class=\"ltx_text\" id=\"S4.T2.4.18.14.3.1\" style=\"font-size:90%;\">LR</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.18.14.4\"><span class=\"ltx_text\" id=\"S4.T2.4.18.14.4.1\" style=\"font-size:90%;\">-2.139</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.18.14.5\"><span class=\"ltx_text\" id=\"S4.T2.4.18.14.5.1\" style=\"font-size:90%;\">(-0.001, 0.001)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.18.14.6\"><span class=\"ltx_text\" 
id=\"S4.T2.4.18.14.6.1\" style=\"font-size:90%;\">0.690</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.18.14.7\"><span class=\"ltx_text\" id=\"S4.T2.4.18.14.7.1\" style=\"font-size:90%;\">-0.000</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.18.14.8\"><span class=\"ltx_text\" id=\"S4.T2.4.18.14.8.1\" style=\"font-size:90%;\">0.980</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.18.14.9\"><span class=\"ltx_text\" id=\"S4.T2.4.18.14.9.1\" style=\"font-size:90%;\">-0.000</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.18.14.10\"><span class=\"ltx_text\" id=\"S4.T2.4.18.14.10.1\" style=\"font-size:90%;\">0.900</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.18.14.11\"><span class=\"ltx_text\" id=\"S4.T2.4.18.14.11.1\" style=\"font-size:90%;\">-0.000</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.18.14.12\"><span class=\"ltx_text\" id=\"S4.T2.4.18.14.12.1\" style=\"font-size:90%;\">0.980</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.19.15\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T2.4.19.15.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.4.19.15.2\"><span class=\"ltx_text\" id=\"S4.T2.4.19.15.2.1\" style=\"font-size:90%;\">75</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.19.15.3\"><span class=\"ltx_text\" id=\"S4.T2.4.19.15.3.1\" style=\"font-size:90%;\">M</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.19.15.4\"><span class=\"ltx_text\" id=\"S4.T2.4.19.15.4.1\" style=\"font-size:90%;\">1.970</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.19.15.5\"><span class=\"ltx_text\" id=\"S4.T2.4.19.15.5.1\" style=\"font-size:90%;\">(-0.001, 0.001)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.19.15.6\"><span class=\"ltx_text\" id=\"S4.T2.4.19.15.6.1\" style=\"font-size:90%;\">0.750</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.19.15.7\"><span class=\"ltx_text\" id=\"S4.T2.4.19.15.7.1\" style=\"font-size:90%;\">-0.000</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.19.15.8\"><span class=\"ltx_text\" id=\"S4.T2.4.19.15.8.1\" style=\"font-size:90%;\">0.840</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.19.15.9\"><span class=\"ltx_text\" id=\"S4.T2.4.19.15.9.1\" style=\"font-size:90%;\">0.000</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.19.15.10\"><span class=\"ltx_text\" id=\"S4.T2.4.19.15.10.1\" style=\"font-size:90%;\">1.000</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.19.15.11\"><span class=\"ltx_text\" id=\"S4.T2.4.19.15.11.1\" style=\"font-size:90%;\">0.000</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.19.15.12\"><span class=\"ltx_text\" id=\"S4.T2.4.19.15.12.1\" style=\"font-size:90%;\">0.720</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.20.16\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T2.4.20.16.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.4.20.16.2\"><span class=\"ltx_text\" id=\"S4.T2.4.20.16.2.1\" style=\"font-size:90%;\">75</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.20.16.3\"><span class=\"ltx_text\" id=\"S4.T2.4.20.16.3.1\" style=\"font-size:90%;\">IPS W</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.20.16.4\"><span class=\"ltx_text\" id=\"S4.T2.4.20.16.4.1\" style=\"font-size:90%;\">-3.508</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.20.16.5\"><span class=\"ltx_text\" 
id=\"S4.T2.4.20.16.5.1\" style=\"font-size:90%;\">(-0.001, 0.000)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.20.16.6\"><span class=\"ltx_text\" id=\"S4.T2.4.20.16.6.1\" style=\"font-size:90%;\">0.509</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.20.16.7\"><span class=\"ltx_text\" id=\"S4.T2.4.20.16.7.1\" style=\"font-size:90%;\">-3.964</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.20.16.8\"><span class=\"ltx_text\" id=\"S4.T2.4.20.16.8.1\" style=\"font-size:90%;\">0.440</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.20.16.9\"><span class=\"ltx_text\" id=\"S4.T2.4.20.16.9.1\" style=\"font-size:90%;\">-3.508</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.20.16.10\"><span class=\"ltx_text\" id=\"S4.T2.4.20.16.10.1\" style=\"font-size:90%;\">1.000</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.20.16.11\"><span class=\"ltx_text\" id=\"S4.T2.4.20.16.11.1\" style=\"font-size:90%;\">-3.868</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.20.16.12\"><span class=\"ltx_text\" id=\"S4.T2.4.20.16.12.1\" style=\"font-size:90%;\">0.920</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.21.17\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T2.4.21.17.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.4.21.17.2\"><span class=\"ltx_text\" id=\"S4.T2.4.21.17.2.1\" style=\"font-size:90%;\">75</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.21.17.3\"><span class=\"ltx_text\" id=\"S4.T2.4.21.17.3.1\" style=\"font-size:90%;\">T-L (RF)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.21.17.4\"><span class=\"ltx_text\" id=\"S4.T2.4.21.17.4.1\" style=\"font-size:90%;\">-2.631</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.21.17.5\"><span class=\"ltx_text\" id=\"S4.T2.4.21.17.5.1\" style=\"font-size:90%;\">(-0.001, 0.000)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.21.17.6\"><span class=\"ltx_text\" id=\"S4.T2.4.21.17.6.1\" style=\"font-size:90%;\">0.095</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.21.17.7\"><span class=\"ltx_text\" id=\"S4.T2.4.21.17.7.1\" style=\"font-size:90%;\">-2.631</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.21.17.8\"><span class=\"ltx_text\" id=\"S4.T2.4.21.17.8.1\" style=\"font-size:90%;\">0.000</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.21.17.9\"><span class=\"ltx_text\" id=\"S4.T2.4.21.17.9.1\" style=\"font-size:90%;\">-2.584</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.21.17.10\"><span class=\"ltx_text\" id=\"S4.T2.4.21.17.10.1\" style=\"font-size:90%;\">0.480</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.21.17.11\"><span class=\"ltx_text\" id=\"S4.T2.4.21.17.11.1\" style=\"font-size:90%;\">-3.211</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.21.17.12\"><span class=\"ltx_text\" id=\"S4.T2.4.21.17.12.1\" style=\"font-size:90%;\">0.420</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.22.18\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T2.4.22.18.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.4.22.18.2\"><span class=\"ltx_text\" id=\"S4.T2.4.22.18.2.1\" style=\"font-size:90%;\">75</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.22.18.3\"><span class=\"ltx_text\" id=\"S4.T2.4.22.18.3.1\" style=\"font-size:90%;\">X-L (RF)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.22.18.4\"><span 
class=\"ltx_text\" id=\"S4.T2.4.22.18.4.1\" style=\"font-size:90%;\">-3.630</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.22.18.5\"><span class=\"ltx_text\" id=\"S4.T2.4.22.18.5.1\" style=\"font-size:90%;\">(-0.001, 0.000)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.22.18.6\"><span class=\"ltx_text\" id=\"S4.T2.4.22.18.6.1\" style=\"font-size:90%;\">0.095</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.22.18.7\"><span class=\"ltx_text\" id=\"S4.T2.4.22.18.7.1\" style=\"font-size:90%;\">-2.631</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.22.18.8\"><span class=\"ltx_text\" id=\"S4.T2.4.22.18.8.1\" style=\"font-size:90%;\">0.000</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.22.18.9\"><span class=\"ltx_text\" id=\"S4.T2.4.22.18.9.1\" style=\"font-size:90%;\">-3.577</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.22.18.10\"><span class=\"ltx_text\" id=\"S4.T2.4.22.18.10.1\" style=\"font-size:90%;\">0.460</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.22.18.11\"><span class=\"ltx_text\" id=\"S4.T2.4.22.18.11.1\" style=\"font-size:90%;\">-4.415</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.22.18.12\"><span class=\"ltx_text\" id=\"S4.T2.4.22.18.12.1\" style=\"font-size:90%;\">0.350</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.23.19\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_t\" id=\"S4.T2.4.23.19.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S4.T2.4.23.19.2\"><span class=\"ltx_text\" id=\"S4.T2.4.23.19.2.1\" style=\"font-size:90%;\">35</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.23.19.3\"><span class=\"ltx_text\" id=\"S4.T2.4.23.19.3.1\" style=\"font-size:90%;\">T-L (RF)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.23.19.4\"><span class=\"ltx_text\" id=\"S4.T2.4.23.19.4.1\" style=\"font-size:90%;\">-7.545</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.23.19.5\"><span class=\"ltx_text\" id=\"S4.T2.4.23.19.5.1\" style=\"font-size:90%;\">(-0.001, -0.000)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.23.19.6\"><span class=\"ltx_text\" id=\"S4.T2.4.23.19.6.1\" style=\"font-size:90%;\">0.001</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.23.19.7\"><span class=\"ltx_text\" id=\"S4.T2.4.23.19.7.1\" style=\"font-size:90%;\">-7.545</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.23.19.8\"><span class=\"ltx_text\" id=\"S4.T2.4.23.19.8.1\" style=\"font-size:90%;\">0.000</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.23.19.9\"><span class=\"ltx_text\" id=\"S4.T2.4.23.19.9.1\" style=\"font-size:90%;\">-8.153</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.23.19.10\"><span class=\"ltx_text\" id=\"S4.T2.4.23.19.10.1\" style=\"font-size:90%;\">0.141</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.23.19.11\"><span class=\"ltx_text\" id=\"S4.T2.4.23.19.11.1\" style=\"font-size:90%;\">-9.282</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.23.19.12\"><span class=\"ltx_text\" id=\"S4.T2.4.23.19.12.1\" style=\"font-size:90%;\">0.296</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.24.20\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T2.4.24.20.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" 
id=\"S4.T2.4.24.20.2\"><span class=\"ltx_text\" id=\"S4.T2.4.24.20.2.1\" style=\"font-size:90%;\">35</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.24.20.3\"><span class=\"ltx_text\" id=\"S4.T2.4.24.20.3.1\" style=\"font-size:90%;\">X-L (RF)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.24.20.4\"><span class=\"ltx_text\" id=\"S4.T2.4.24.20.4.1\" style=\"font-size:90%;\">-1.583</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.24.20.5\"><span class=\"ltx_text\" id=\"S4.T2.4.24.20.5.1\" style=\"font-size:90%;\">(-0.001, 0.000)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.24.20.6\"><span class=\"ltx_text\" id=\"S4.T2.4.24.20.6.1\" style=\"font-size:90%;\">0.001</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.24.20.7\"><span class=\"ltx_text\" id=\"S4.T2.4.24.20.7.1\" style=\"font-size:90%;\">-7.545</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.24.20.8\"><span class=\"ltx_text\" id=\"S4.T2.4.24.20.8.1\" style=\"font-size:90%;\">0.000</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.24.20.9\"><span class=\"ltx_text\" id=\"S4.T2.4.24.20.9.1\" style=\"font-size:90%;\">-2.750</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.24.20.10\"><span class=\"ltx_text\" id=\"S4.T2.4.24.20.10.1\" style=\"font-size:90%;\">0.104</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.24.20.11\"><span class=\"ltx_text\" id=\"S4.T2.4.24.20.11.1\" style=\"font-size:90%;\">-5.213</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.24.20.12\"><span class=\"ltx_text\" id=\"S4.T2.4.24.20.12.1\" style=\"font-size:90%;\">0.105</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.25.21\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T2.4.25.21.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.4.25.21.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.25.21.2.1\" style=\"font-size:90%;\">50</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.25.21.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.25.21.3.1\" style=\"font-size:90%;\">M</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.25.21.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.25.21.4.1\" style=\"font-size:90%;\">-15.197</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.25.21.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.25.21.5.1\" style=\"font-size:90%;\">(-0.003, -0.000)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.25.21.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.25.21.6.1\" style=\"font-size:90%;\">0.025</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.25.21.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.25.21.7.1\" style=\"font-size:90%;\">-0.000</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.25.21.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.25.21.8.1\" style=\"font-size:90%;\">0.980</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.25.21.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.25.21.9.1\" style=\"font-size:90%;\">-0.002</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.25.21.10\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.25.21.10.1\" style=\"font-size:90%;\">1.000</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.25.21.11\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.25.21.11.1\" style=\"font-size:90%;\">-0.002</span></td>\n<td class=\"ltx_td 
ltx_align_center\" id=\"S4.T2.4.25.21.12\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.25.21.12.1\" style=\"font-size:90%;\">0.960</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.26.22\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T2.4.26.22.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.4.26.22.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.26.22.2.1\" style=\"font-size:90%;\">50</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.26.22.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.26.22.3.1\" style=\"font-size:90%;\">IPS W</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.26.22.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.26.22.4.1\" style=\"font-size:90%;\">-16.968</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.26.22.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.26.22.5.1\" style=\"font-size:90%;\">(-0.003, -0.001)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.26.22.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.26.22.6.1\" style=\"font-size:90%;\">0.003</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.26.22.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.26.22.7.1\" style=\"font-size:90%;\">-4.833</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.26.22.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.26.22.8.1\" style=\"font-size:90%;\">0.520</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.26.22.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.26.22.9.1\" style=\"font-size:90%;\">-16.968</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.26.22.10\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.26.22.10.1\" style=\"font-size:90%;\">1.000</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.26.22.11\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.26.22.11.1\" style=\"font-size:90%;\">-16.886</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.26.22.12\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.26.22.12.1\" style=\"font-size:90%;\">0.980</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.27.23\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.4.27.23.1\"><span class=\"ltx_text\" id=\"S4.T2.4.27.23.1.1\" style=\"font-size:90%;\">Baidoa</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.4.27.23.2\"><span class=\"ltx_text\" id=\"S4.T2.4.27.23.2.1\" style=\"font-size:90%;\">50</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.27.23.3\"><span class=\"ltx_text\" id=\"S4.T2.4.27.23.3.1\" style=\"font-size:90%;\">T-L (RF)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.27.23.4\"><span class=\"ltx_text\" id=\"S4.T2.4.27.23.4.1\" style=\"font-size:90%;\">-9.377</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.27.23.5\"><span class=\"ltx_text\" id=\"S4.T2.4.27.23.5.1\" style=\"font-size:90%;\">(-0.002, -0.000)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.27.23.6\"><span class=\"ltx_text\" id=\"S4.T2.4.27.23.6.1\" style=\"font-size:90%;\">0.022</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.27.23.7\"><span class=\"ltx_text\" id=\"S4.T2.4.27.23.7.1\" style=\"font-size:90%;\">-9.377</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.27.23.8\"><span class=\"ltx_text\" id=\"S4.T2.4.27.23.8.1\" style=\"font-size:90%;\">0.000</span></td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S4.T2.4.27.23.9\"><span class=\"ltx_text\" id=\"S4.T2.4.27.23.9.1\" style=\"font-size:90%;\">-10.260</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.27.23.10\"><span class=\"ltx_text\" id=\"S4.T2.4.27.23.10.1\" style=\"font-size:90%;\">0.075</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.27.23.11\"><span class=\"ltx_text\" id=\"S4.T2.4.27.23.11.1\" style=\"font-size:90%;\">-11.317</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.27.23.12\"><span class=\"ltx_text\" id=\"S4.T2.4.27.23.12.1\" style=\"font-size:90%;\">0.261</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.28.24\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.4.28.24.1\"><span class=\"ltx_text\" id=\"S4.T2.4.28.24.1.1\" style=\"font-size:90%;\">(District)</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.4.28.24.2\"><span class=\"ltx_text\" id=\"S4.T2.4.28.24.2.1\" style=\"font-size:90%;\">50</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.28.24.3\"><span class=\"ltx_text\" id=\"S4.T2.4.28.24.3.1\" style=\"font-size:90%;\">X-L (RF)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.28.24.4\"><span class=\"ltx_text\" id=\"S4.T2.4.28.24.4.1\" style=\"font-size:90%;\">-4.816</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.28.24.5\"><span class=\"ltx_text\" id=\"S4.T2.4.28.24.5.1\" style=\"font-size:90%;\">(-0.001, 0.000)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.28.24.6\"><span class=\"ltx_text\" id=\"S4.T2.4.28.24.6.1\" style=\"font-size:90%;\">0.022</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.28.24.7\"><span class=\"ltx_text\" id=\"S4.T2.4.28.24.7.1\" style=\"font-size:90%;\">-9.377</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.28.24.8\"><span class=\"ltx_text\" id=\"S4.T2.4.28.24.8.1\" style=\"font-size:90%;\">0.000</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.28.24.9\"><span class=\"ltx_text\" id=\"S4.T2.4.28.24.9.1\" style=\"font-size:90%;\">-5.750</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.28.24.10\"><span class=\"ltx_text\" id=\"S4.T2.4.28.24.10.1\" style=\"font-size:90%;\">0.100</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.28.24.11\"><span class=\"ltx_text\" id=\"S4.T2.4.28.24.11.1\" style=\"font-size:90%;\">-7.610</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.28.24.12\"><span class=\"ltx_text\" id=\"S4.T2.4.28.24.12.1\" style=\"font-size:90%;\">0.213</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.29.25\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T2.4.29.25.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.4.29.25.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.29.25.2.1\" style=\"font-size:90%;\">75</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.29.25.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.29.25.3.1\" style=\"font-size:90%;\">IPS W</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.29.25.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.29.25.4.1\" style=\"font-size:90%;\">-15.898</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.29.25.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.29.25.5.1\" style=\"font-size:90%;\">(-0.003, -0.000)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.29.25.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.29.25.6.1\" 
style=\"font-size:90%;\">0.040</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.29.25.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.29.25.7.1\" style=\"font-size:90%;\">-0.632</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.29.25.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.29.25.8.1\" style=\"font-size:90%;\">0.960</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.29.25.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.29.25.9.1\" style=\"font-size:90%;\">-15.898</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.29.25.10\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.29.25.10.1\" style=\"font-size:90%;\">1.000</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.29.25.11\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.29.25.11.1\" style=\"font-size:90%;\">-15.801</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.29.25.12\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.29.25.12.1\" style=\"font-size:90%;\">0.980</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.30.26\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T2.4.30.26.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.4.30.26.2\"><span class=\"ltx_text\" id=\"S4.T2.4.30.26.2.1\" style=\"font-size:90%;\">75</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.30.26.3\"><span class=\"ltx_text\" id=\"S4.T2.4.30.26.3.1\" style=\"font-size:90%;\">T-L (RF)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.30.26.4\"><span class=\"ltx_text\" id=\"S4.T2.4.30.26.4.1\" style=\"font-size:90%;\">-9.802</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.30.26.5\"><span class=\"ltx_text\" id=\"S4.T2.4.30.26.5.1\" style=\"font-size:90%;\">(-0.002, -0.000)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.30.26.6\"><span class=\"ltx_text\" id=\"S4.T2.4.30.26.6.1\" style=\"font-size:90%;\">0.019</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.30.26.7\"><span class=\"ltx_text\" id=\"S4.T2.4.30.26.7.1\" style=\"font-size:90%;\">-9.802</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.30.26.8\"><span class=\"ltx_text\" id=\"S4.T2.4.30.26.8.1\" style=\"font-size:90%;\">0.000</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.30.26.9\"><span class=\"ltx_text\" id=\"S4.T2.4.30.26.9.1\" style=\"font-size:90%;\">-10.104</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.30.26.10\"><span class=\"ltx_text\" id=\"S4.T2.4.30.26.10.1\" style=\"font-size:90%;\">0.391</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.30.26.11\"><span class=\"ltx_text\" id=\"S4.T2.4.30.26.11.1\" style=\"font-size:90%;\">-11.583</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.30.26.12\"><span class=\"ltx_text\" id=\"S4.T2.4.30.26.12.1\" style=\"font-size:90%;\">0.314</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.31.27\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_b\" id=\"S4.T2.4.31.27.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b\" id=\"S4.T2.4.31.27.2\"><span class=\"ltx_text\" id=\"S4.T2.4.31.27.2.1\" style=\"font-size:90%;\">75</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.4.31.27.3\"><span class=\"ltx_text\" id=\"S4.T2.4.31.27.3.1\" style=\"font-size:90%;\">X-L (RF)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.4.31.27.4\"><span class=\"ltx_text\" id=\"S4.T2.4.31.27.4.1\" 
style=\"font-size:90%;\">-3.374</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.4.31.27.5\"><span class=\"ltx_text\" id=\"S4.T2.4.31.27.5.1\" style=\"font-size:90%;\">(-0.001, 0.000)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.4.31.27.6\"><span class=\"ltx_text\" id=\"S4.T2.4.31.27.6.1\" style=\"font-size:90%;\">0.019</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.4.31.27.7\"><span class=\"ltx_text\" id=\"S4.T2.4.31.27.7.1\" style=\"font-size:90%;\">-9.802</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.4.31.27.8\"><span class=\"ltx_text\" id=\"S4.T2.4.31.27.8.1\" style=\"font-size:90%;\">0.000</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.4.31.27.9\"><span class=\"ltx_text\" id=\"S4.T2.4.31.27.9.1\" style=\"font-size:90%;\">-4.200</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.4.31.27.10\"><span class=\"ltx_text\" id=\"S4.T2.4.31.27.10.1\" style=\"font-size:90%;\">0.250</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.4.31.27.11\"><span class=\"ltx_text\" id=\"S4.T2.4.31.27.11.1\" style=\"font-size:90%;\">-7.025</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.4.31.27.12\"><span class=\"ltx_text\" id=\"S4.T2.4.31.27.12.1\" style=\"font-size:90%;\">0.241</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 80 |
+
"capture": "Table 2: Area, threshold, method, ATE, 95% confidence intervals and p-values. Refutation tests fail if their p-value is less than 0.05. Numbers are in the percentage of people in GAM per capita. Results for Somalia and the Baidoa district."
|
| 81 |
+
}
|
| 82 |
+
},
|
| 83 |
+
"image_paths": {
|
| 84 |
+
"1": {
|
| 85 |
+
"figure_path": "2310.11287v3_figure_1.png",
|
| 86 |
+
"caption": "Fig. 1: DAG representing the malnutrition system in Somalia.",
|
| 87 |
+
"url": "http://arxiv.org/html/2310.11287v3/"
|
| 88 |
+
}
|
| 89 |
+
},
|
| 90 |
+
"validation": true,
|
| 91 |
+
"references": [
|
| 92 |
+
{
|
| 93 |
+
"1": {
|
| 94 |
+
"title": "\u201cExamining the role of unusually warm indo-pacific sea surface temperatures in recent african droughts,\u201d",
|
| 95 |
+
"author": "Funk C., Harrison L., Shukla S., Pomposi C., and Galu G.and Korecha D. et al.,",
|
| 96 |
+
"venue": "Quarterly Journal of the Royal Meteorological Society, vol. 144, 2018.",
|
| 97 |
+
"url": null
|
| 98 |
+
}
|
| 99 |
+
},
|
| 100 |
+
{
|
| 101 |
+
"2": {
|
| 102 |
+
"title": "\u201cImpact of Drought on Poverty in Somalia,\u201d",
|
| 103 |
+
"author": "Pape U. J. and Wollburg P. R.,",
|
| 104 |
+
"venue": "Social Science Research Network, 2019.",
|
| 105 |
+
"url": null
|
| 106 |
+
}
|
| 107 |
+
},
|
| 108 |
+
{
|
| 109 |
+
"3": {
|
| 110 |
+
"title": "\u201cAddressing the human cost in a changing climate,\u201d",
|
| 111 |
+
"author": "Desai, B., Bresch, D., Cazabat, C., Hochrainer-Stigler, S., Mechler, R., Ponserre, S. and Schewe, H.,",
|
| 112 |
+
"venue": "Science, vol. 372, 2021.",
|
| 113 |
+
"url": null
|
| 114 |
+
}
|
| 115 |
+
},
|
| 116 |
+
{
|
| 117 |
+
"4": {
|
| 118 |
+
"title": "\u201cSomalia: Drought and Famine Displacement Monitoring Dashboard (September 2022),\u201d 2022.",
|
| 119 |
+
"author": "UN Office for the Coordination of Humanitarian Affairs (OCHA),",
|
| 120 |
+
"venue": null,
|
| 121 |
+
"url": null
|
| 122 |
+
}
|
| 123 |
+
},
|
| 124 |
+
{
|
| 125 |
+
"5": {
|
| 126 |
+
"title": "\u201cFacing famine: Somali experiences in the famine of 2011,\u201d",
|
| 127 |
+
"author": "Maxwell D., Majid N., Adan G., Abdirahman K., and Kim J. J,",
|
| 128 |
+
"venue": "Food Policy, vol. 65, pp. 73, 2016.",
|
| 129 |
+
"url": null
|
| 130 |
+
}
|
| 131 |
+
},
|
| 132 |
+
{
|
| 133 |
+
"6": {
|
| 134 |
+
"title": "\u201cImpacts of the Cost of Inaction on WFP Food Assistance in Eastern Africa (2021 & 2022),\u201d https://docs.wfp.org/api/documents/WFP-0000148305/download/, 2023.",
|
| 135 |
+
"author": "WFP,",
|
| 136 |
+
"venue": null,
|
| 137 |
+
"url": null
|
| 138 |
+
}
|
| 139 |
+
},
|
| 140 |
+
{
|
| 141 |
+
"7": {
|
| 142 |
+
"title": "\u201cRising Global Food Insecurity: Assessing Policy Responses,\u201d 2023.",
|
| 143 |
+
"author": "Food and Agriculture Organization of the United Nations (FAO),",
|
| 144 |
+
"venue": null,
|
| 145 |
+
"url": null
|
| 146 |
+
}
|
| 147 |
+
},
|
| 148 |
+
{
|
| 149 |
+
"8": {
|
| 150 |
+
"title": "\u201cCash-based approaches in humanitarian emergencies: a systematic review,\u201d",
|
| 151 |
+
"author": "Doocy S. and Tappis H.,",
|
| 152 |
+
"venue": "Campbell Systematic Reviews,, vol. 13, no. 1, pp. 1\u2013200, 2017.",
|
| 153 |
+
"url": null
|
| 154 |
+
}
|
| 155 |
+
},
|
| 156 |
+
{
|
| 157 |
+
"9": {
|
| 158 |
+
"title": "\u201cThe effectiveness and value for money of cash-based humanitarian assistance: a systematic review,\u201d",
|
| 159 |
+
"author": "Doocy S. and Tappis H.,",
|
| 160 |
+
"venue": "Journal of Development Effectiveness, vol. 10, no. 1, pp. 121\u2013144, 2018.",
|
| 161 |
+
"url": null
|
| 162 |
+
}
|
| 163 |
+
},
|
| 164 |
+
{
|
| 165 |
+
"10": {
|
| 166 |
+
"title": "\u201cFrom rain to famine: assessing the utility of rainfall observations and seasonal forecasts to anticipate food insecurity in east africa,\u201d",
|
| 167 |
+
"author": "Coughlan de Perez E., Aalst M., Choularton R., Hunk B., Mason S., Nissan H., and Schwager S.,",
|
| 168 |
+
"venue": "Food Secur., vol. 11, no. 1, pp. 57\u201368, 2019.",
|
| 169 |
+
"url": null
|
| 170 |
+
}
|
| 171 |
+
},
|
| 172 |
+
{
|
| 173 |
+
"11": {
|
| 174 |
+
"title": "\u201cViewpoint: Determining famine: Multi-dimensional analysis for the twenty-first century,\u201d",
|
| 175 |
+
"author": "Maxwell D., Khalif A., Hailey P., and Checchi F.,",
|
| 176 |
+
"venue": "Food Policy, vol. 92, 2020.",
|
| 177 |
+
"url": null
|
| 178 |
+
}
|
| 179 |
+
},
|
| 180 |
+
{
|
| 181 |
+
"12": {
|
| 182 |
+
"title": "\u201cFood, Drought and Conflict Evidence from a Case-Study on Somalia,\u201d 2017.",
|
| 183 |
+
"author": "Sneyers A.,",
|
| 184 |
+
"venue": null,
|
| 185 |
+
"url": null
|
| 186 |
+
}
|
| 187 |
+
},
|
| 188 |
+
{
|
| 189 |
+
"13": {
|
| 190 |
+
"title": "\u201cSomalia: Drought Impact and Needs Assessment (Volume I),\u201d https://www.gfdrr.org/en/publication/somalia-drought-impact-and-needs-assessment-volume-i, 2017.",
|
| 191 |
+
"author": "GFDRR,",
|
| 192 |
+
"venue": null,
|
| 193 |
+
"url": null
|
| 194 |
+
}
|
| 195 |
+
},
|
| 196 |
+
{
|
| 197 |
+
"14": {
|
| 198 |
+
"title": "\u201cDrought, armed conflict and population mortality in somalia, 2014\u20132018: A statistical analysis,\u201d",
|
| 199 |
+
"author": "Warsame A., Frison S., and Checci F.,",
|
| 200 |
+
"venue": "PLOS Glob. Public Health, vol. 3, no. 4, 2023.",
|
| 201 |
+
"url": null
|
| 202 |
+
}
|
| 203 |
+
},
|
| 204 |
+
{
|
| 205 |
+
"15": {
|
| 206 |
+
"title": "\u201cClimate, conflict and forced migration,,\u201d",
|
| 207 |
+
"author": "Guy Abel J., Brottrager M., Cuaresma J. C., and Muttarak R.,",
|
| 208 |
+
"venue": "Global Environmental Change, vol. 54, no. 4, 2019.",
|
| 209 |
+
"url": null
|
| 210 |
+
}
|
| 211 |
+
},
|
| 212 |
+
{
|
| 213 |
+
"16": {
|
| 214 |
+
"title": "\u201cCausality: Models, reasoning, and inference,\u201d",
|
| 215 |
+
"author": "Pearl J.,",
|
| 216 |
+
"venue": "Cambridge University Press, vol. 19, 2000.",
|
| 217 |
+
"url": null
|
| 218 |
+
}
|
| 219 |
+
},
|
| 220 |
+
{
|
| 221 |
+
"17": {
|
| 222 |
+
"title": "Elements of Causal Inference: Foundations and Learning Algorithms,",
|
| 223 |
+
"author": "Peters J., Janzing D., and Schlkopf B.,",
|
| 224 |
+
"venue": "The MIT Press, 2017.",
|
| 225 |
+
"url": null
|
| 226 |
+
}
|
| 227 |
+
},
|
| 228 |
+
{
|
| 229 |
+
"18": {
|
| 230 |
+
"title": "\u201cThe climate hazards infrared precipitation with stations\u2014a new environmental record for monitoring extremes,\u201d 2015.",
|
| 231 |
+
"author": "Chris, F., Peterson, P., Landsfeld, M., Pedreros, D., Verdin, J., Shukla, S., Husak, G., Rowland, J., Harrison, L., Hoell, A. and Michaelsen, J.,",
|
| 232 |
+
"venue": null,
|
| 233 |
+
"url": null
|
| 234 |
+
}
|
| 235 |
+
},
|
| 236 |
+
{
|
| 237 |
+
"19": {
|
| 238 |
+
"title": "\u201cFood Security and Nutrition Analysis Unit (FSNAU),\u201d https://dashboard.fsnau.org/, 2021.",
|
| 239 |
+
"author": "FSNAU,",
|
| 240 |
+
"venue": null,
|
| 241 |
+
"url": null
|
| 242 |
+
}
|
| 243 |
+
},
|
| 244 |
+
{
|
| 245 |
+
"20": {
|
| 246 |
+
"title": "\u201cUNHCR Somalia - Interactive Internal Displacements Visualisation,\u201d https://unhcr.github.io/dataviz-somalia-prmn/index.html.",
|
| 247 |
+
"author": "UNHCR Somalia ID,",
|
| 248 |
+
"venue": null,
|
| 249 |
+
"url": null
|
| 250 |
+
}
|
| 251 |
+
},
|
| 252 |
+
{
|
| 253 |
+
"21": {
|
| 254 |
+
"title": "\u201cUtilizing causal diagrams across quasi-experimental approaches,\u201d",
|
| 255 |
+
"author": "Arif S. and MacNeil M. A.,",
|
| 256 |
+
"venue": "Ecosphere, vol. 13, no. 4, 2022.",
|
| 257 |
+
"url": null
|
| 258 |
+
}
|
| 259 |
+
},
|
| 260 |
+
{
|
| 261 |
+
"22": {
|
| 262 |
+
"title": "\u201cTowards assessing agricultural land suitability with causal machine learning,\u201d",
|
| 263 |
+
"author": "Giannarakis, G., Sitokonstantinou, V., Lorilla, R. S. and Kontoes, C.,",
|
| 264 |
+
"venue": "in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 1442\u20131452.",
|
| 265 |
+
"url": null
|
| 266 |
+
}
|
| 267 |
+
},
|
| 268 |
+
{
|
| 269 |
+
"23": {
|
| 270 |
+
"title": "\u201cEvaluating digital agriculture recommendations with causal inference,\u201d",
|
| 271 |
+
"author": "Tsoumas, I., Giannarakis, G., Sitokonstantinou, V., Koukos, A., Loka, D., Bartsotas, N., Kontoes, C. and Athanasiadis, Ioannis,",
|
| 272 |
+
"venue": "in Proceedings of the AAAI Conference on Artificial Intelligence, 2023, vol. 37, pp. 14514\u201314522.",
|
| 273 |
+
"url": null
|
| 274 |
+
}
|
| 275 |
+
},
|
| 276 |
+
{
|
| 277 |
+
"24": {
|
| 278 |
+
"title": "\u201cSatellites reveal a small positive yield effect from conservation tillage across the us corn belt,\u201d",
|
| 279 |
+
"author": "Deines, J. M., Wang, S. and Lobell, D. B.,",
|
| 280 |
+
"venue": "Environmental Research Letters, vol. 14, no. 12, pp. 124038, 2019.",
|
| 281 |
+
"url": null
|
| 282 |
+
}
|
| 283 |
+
},
|
| 284 |
+
{
|
| 285 |
+
"25": {
|
| 286 |
+
"title": "\u201cCausal inference and impact evaluation,\u201d",
|
| 287 |
+
"author": "Foug\u00e8re D. and Jacquemet N.,",
|
| 288 |
+
"venue": "Economie et Statistique/Economics and Statistics, 2019.",
|
| 289 |
+
"url": null
|
| 290 |
+
}
|
| 291 |
+
},
|
| 292 |
+
{
|
| 293 |
+
"26": {
|
| 294 |
+
"title": "\u201cPolicy evaluation using causal inference methods,\u201d",
|
| 295 |
+
"author": "Foug\u00e8re D. and Jacquemet N.,",
|
| 296 |
+
"venue": "In Handbook of Research Methods and Applications in Empirical Microeconomics, 2021.",
|
| 297 |
+
"url": null
|
| 298 |
+
}
|
| 299 |
+
},
|
| 300 |
+
{
|
| 301 |
+
"27": {
|
| 302 |
+
"title": "\u201cInferring causation from time series in earth system sciences,\u201d",
|
| 303 |
+
"author": "Runge J., Bathiany S., and Bollt et al,",
|
| 304 |
+
"venue": "Nature Communications, vol. 10, 2021.",
|
| 305 |
+
"url": null
|
| 306 |
+
}
|
| 307 |
+
},
|
| 308 |
+
{
|
| 309 |
+
"28": {
|
| 310 |
+
"title": "\u201cCausal inference in geoscience and remote sensing from observational data.,\u201d",
|
| 311 |
+
"author": "P\u00e9rez-Suay A. and Camps-Valls G.,",
|
| 312 |
+
"venue": "IEEE Transactions on Geoscience and Remote Sensing, vol. 57, 2018.",
|
| 313 |
+
"url": null
|
| 314 |
+
}
|
| 315 |
+
},
|
| 316 |
+
{
|
| 317 |
+
"29": {
|
| 318 |
+
"title": "\u201cReview of the humanitarian response to the 2016/17 drought in the Horn of Africa for the European Commission,\u201d 2019.",
|
| 319 |
+
"author": "Groupe URD,",
|
| 320 |
+
"venue": null,
|
| 321 |
+
"url": null
|
| 322 |
+
}
|
| 323 |
+
},
|
| 324 |
+
{
|
| 325 |
+
"30": {
|
| 326 |
+
"title": "\u201cDeep learning and process understanding for data-driven earth system science,\u201d",
|
| 327 |
+
"author": "Reichstein, M. and Camps-Valls, G. and Stevens, B. and Jung, M. and Denzler, J. and Carvalhais, N. and Prabhat, fnm,",
|
| 328 |
+
"venue": "Nature, vol. 566, no. 7743, pp. 195\u2013204, 2019.",
|
| 329 |
+
"url": null
|
| 330 |
+
}
|
| 331 |
+
},
|
| 332 |
+
{
|
| 333 |
+
"31": {
|
| 334 |
+
"title": "Deep Learning for the Earth Sciences: A Comprehensive Approach to Remote Sensing, Climate Science and Geosciences,",
|
| 335 |
+
"author": "Camps-Valls, G. and Tuia, D. and Zhu, X.X. and Reichstein, M.,",
|
| 336 |
+
"venue": "Wiley, 2021.",
|
| 337 |
+
"url": null
|
| 338 |
+
}
|
| 339 |
+
},
|
| 340 |
+
{
|
| 341 |
+
"32": {
|
| 342 |
+
"title": "\u201cUnderspecification presents challenges for credibility in modern machine learning,\u201d 2020.",
|
| 343 |
+
"author": "Alexander D\u2019Amour et al,",
|
| 344 |
+
"venue": null,
|
| 345 |
+
"url": null
|
| 346 |
+
}
|
| 347 |
+
},
|
| 348 |
+
{
|
| 349 |
+
"33": {
|
| 350 |
+
"title": "\u201cA survey on bias and fairness in machine learning,\u201d",
|
| 351 |
+
"author": "Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K. and Galstyan, A.,",
|
| 352 |
+
"venue": "ACM Comput. Surv., vol. 54, no. 6, jul 2021.",
|
| 353 |
+
"url": null
|
| 354 |
+
}
|
| 355 |
+
},
|
| 356 |
+
{
|
| 357 |
+
"34": {
|
| 358 |
+
"title": "\u201cOn the dangers of stochastic parrots: Can language models be too big?,\u201d",
|
| 359 |
+
"author": "Bender, E. M. and Gebru, T. and McMillan-Major, A. and Shmitchell, S.,",
|
| 360 |
+
"venue": "in Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, New York, NY, USA, 2021, FAccT \u201921, p. 610\u2013623, Association for Computing Machinery.",
|
| 361 |
+
"url": null
|
| 362 |
+
}
|
| 363 |
+
},
|
| 364 |
+
{
|
| 365 |
+
"35": {
|
| 366 |
+
"title": "\u201cThe mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery.,\u201d",
|
| 367 |
+
"author": "Lipton, Z. C.,",
|
| 368 |
+
"venue": "Queue, vol. 16, no. 3, pp. 31\u201357, jun 2018.",
|
| 369 |
+
"url": null
|
| 370 |
+
}
|
| 371 |
+
},
|
| 372 |
+
{
|
| 373 |
+
"36": {
|
| 374 |
+
"title": "\u201cCausal machine learning: A survey and open problems,\u201d 2022.",
|
| 375 |
+
"author": "Kaddour, J., Lynch, A., Liu Q., Kusner M. J. and Silva R.,",
|
| 376 |
+
"venue": null,
|
| 377 |
+
"url": null
|
| 378 |
+
}
|
| 379 |
+
},
|
| 380 |
+
{
|
| 381 |
+
"37": {
|
| 382 |
+
"title": "Causality,",
|
| 383 |
+
"author": "Pearl, J.,",
|
| 384 |
+
"venue": "Cambridge University Press, 2 edition, 2009.",
|
| 385 |
+
"url": null
|
| 386 |
+
}
|
| 387 |
+
},
|
| 388 |
+
{
|
| 389 |
+
"38": {
|
| 390 |
+
"title": "\u201cWorld Meteorological Organization (WMO),\u201d https://climexp.knmi.nl/selectdailyindex.cgi.",
|
| 391 |
+
"author": "WMO,",
|
| 392 |
+
"venue": null,
|
| 393 |
+
"url": null
|
| 394 |
+
}
|
| 395 |
+
},
|
| 396 |
+
{
|
| 397 |
+
"39": {
|
| 398 |
+
"title": "\u201cIntroducing ACLED-Armed Conflict Location and Event Data,\u201d 2010.",
|
| 399 |
+
"author": "Clionadh, R., Linke, A., Hegre, H. and Karlsen, J.,",
|
| 400 |
+
"venue": null,
|
| 401 |
+
"url": null
|
| 402 |
+
}
|
| 403 |
+
},
|
| 404 |
+
{
|
| 405 |
+
"40": {
|
| 406 |
+
"title": "\u201cSomalia - Subnational Administrative Boundaries,\u201d https://data.humdata.org/dataset/cod-ab-som.",
|
| 407 |
+
"author": "UNDP,",
|
| 408 |
+
"venue": null,
|
| 409 |
+
"url": null
|
| 410 |
+
}
|
| 411 |
+
},
|
| 412 |
+
{
|
| 413 |
+
"41": {
|
| 414 |
+
"title": "\u201cUnited Nations Population Fund (UNFPA),\u201d https://data.humdata.org/dataset/cod-ps-som.",
|
| 415 |
+
"author": "UNFPA,",
|
| 416 |
+
"venue": null,
|
| 417 |
+
"url": null
|
| 418 |
+
}
|
| 419 |
+
},
|
| 420 |
+
{
|
| 421 |
+
"42": {
|
| 422 |
+
"title": "\u201cMonitoring Methodology for Displacement associated with Drought,\u201d",
|
| 423 |
+
"author": "Internal Displacement Monitoring Centre (iDMC),",
|
| 424 |
+
"venue": "2020.",
|
| 425 |
+
"url": null
|
| 426 |
+
}
|
| 427 |
+
},
|
| 428 |
+
{
|
| 429 |
+
"43": {
|
| 430 |
+
"title": "\u201cAssessing the impact of enso on agriculture over africa using earth observation data,\u201d",
|
| 431 |
+
"author": "Sazib, N., Mladenova, I. E. and Bolten, J. D.,",
|
| 432 |
+
"venue": "Frontiers in Sustainable Food Systems, vol. 4, 2020.",
|
| 433 |
+
"url": null
|
| 434 |
+
}
|
| 435 |
+
},
|
| 436 |
+
{
|
| 437 |
+
"44": {
|
| 438 |
+
"title": "\u201cStandardized Precipitation Index User Guide,\u201d",
|
| 439 |
+
"author": "M. Svoboda, M. Hayes, and D. Wood,",
|
| 440 |
+
"venue": "World Meteorological Organization, 2012.",
|
| 441 |
+
"url": null
|
| 442 |
+
}
|
| 443 |
+
},
|
| 444 |
+
{
|
| 445 |
+
"45": {
|
| 446 |
+
"title": "\u201cHousehold Economy Analysis (Hea) So11: Southern Inland Pastoral Livelihood Zonelower Juba Region,\u201d https://www.acted.org/wp-content/uploads/2017/03/household-economy-analysis-hea.pdf, 2017.",
|
| 447 |
+
"author": "Acted,",
|
| 448 |
+
"venue": null,
|
| 449 |
+
"url": null
|
| 450 |
+
}
|
| 451 |
+
},
|
| 452 |
+
{
|
| 453 |
+
"46": {
|
| 454 |
+
"title": "\u201cLarge weather and conflict effects on internal displacement in somalia with little evidence of feedback onto conflict,\u201d",
|
| 455 |
+
"author": "Thalheimer, L. and Schwarz, M. P. and Pretis, F.,",
|
| 456 |
+
"venue": "Global Environmental Change, vol. 79, pp. 102641, 2023.",
|
| 457 |
+
"url": null
|
| 458 |
+
}
|
| 459 |
+
},
|
| 460 |
+
{
|
| 461 |
+
"47": {
|
| 462 |
+
"title": "\u201cMatching methods for causal inference: A review and a look forward,\u201d",
|
| 463 |
+
"author": "Stuart, E. A.,",
|
| 464 |
+
"venue": "Statistical Science, vol. 25, no. 1, pp. 1\u201321, 2010.",
|
| 465 |
+
"url": null
|
| 466 |
+
}
|
| 467 |
+
},
|
| 468 |
+
{
|
| 469 |
+
"48": {
|
| 470 |
+
"title": "\u201cMetalearners for estimating heterogeneous treatment effects using machine learning,\u201d",
|
| 471 |
+
"author": "K\u00fcnzel, S. R., Sekhon, J. S., Bickel, P. J. and Yu, B.,",
|
| 472 |
+
"venue": "Proceedings of the National Academy of Sciences, vol. 116, no. 10, pp. 4156\u20134165, 2019.",
|
| 473 |
+
"url": null
|
| 474 |
+
}
|
| 475 |
+
},
|
| 476 |
+
{
|
| 477 |
+
"49": {
|
| 478 |
+
"title": "\u201cDowhy: An end-to-end library for causal inference,\u201d 2020.",
|
| 479 |
+
"author": "Sharma, A. and Kiciman, E.,",
|
| 480 |
+
"venue": null,
|
| 481 |
+
"url": null
|
| 482 |
+
}
|
| 483 |
+
},
|
| 484 |
+
{
|
| 485 |
+
"50": {
|
| 486 |
+
"title": "\u201cMaking Sense of Sensitivity: Extending Omitted Variable Bias,\u201d",
|
| 487 |
+
"author": "Cinelli, C. and Hazlett, C.,",
|
| 488 |
+
"venue": "Journal of the Royal Statistical Society Series B: Statistical Methodology, vol. 82, no. 1, pp. 39\u201367, 12 2019.",
|
| 489 |
+
"url": null
|
| 490 |
+
}
|
| 491 |
+
},
|
| 492 |
+
{
|
| 493 |
+
"51": {
|
| 494 |
+
"title": "\u201cCausalml: Python package for causal machine learning,\u201d",
|
| 495 |
+
"author": "Chen, H., Harinen, T., Lee, J.-Y., Yung, M. and Zhao, Z.,",
|
| 496 |
+
"venue": "CoRR, vol. abs/2002.11631, 2020.",
|
| 497 |
+
"url": null
|
| 498 |
+
}
|
| 499 |
+
},
|
| 500 |
+
{
|
| 501 |
+
"52": {
|
| 502 |
+
"title": "\u201cCauseMe: A platform to benchmark causal discovery methods,\u201d https://causeme.uv.es/.",
|
| 503 |
+
"author": "Image and Signal Processing Group, Universitat de Val\u00e8ncia and Runge J.,",
|
| 504 |
+
"venue": null,
|
| 505 |
+
"url": null
|
| 506 |
+
}
|
| 507 |
+
},
|
| 508 |
+
{
|
| 509 |
+
"53": {
|
| 510 |
+
"title": "\u201cInferring causation from time series with perspectives in earth system sciences,\u201d",
|
| 511 |
+
"author": "Runge, J., et al,",
|
| 512 |
+
"venue": "Nature Communications, vol. 10, no. 1, pp. article no. 2553, 2019.",
|
| 513 |
+
"url": null
|
| 514 |
+
}
|
| 515 |
+
},
|
| 516 |
+
{
|
| 517 |
+
"54": {
|
| 518 |
+
"title": "\u201cPersonalizing sustainable agriculture with causal machine learning,\u201d",
|
| 519 |
+
"author": "Giannarakis, G. and Sitokonstantinou, V. and Lorilla, R. S. and Kontoes, C.,",
|
| 520 |
+
"venue": "arXiv preprint arXiv:2211.03179, 2022.",
|
| 521 |
+
"url": null
|
| 522 |
+
}
|
| 523 |
+
}
|
| 524 |
+
],
|
| 525 |
+
"url": "http://arxiv.org/html/2310.11287v3"
|
| 526 |
+
}
|
20240522/2311.02142v2.json
ADDED
|
@@ -0,0 +1,244 @@
|
| 1 |
+
{
|
| 2 |
+
"title": "Sparse Training of Discrete Diffusion Models for Graph Generation",
|
| 3 |
+
"abstract": "Generative models for graphs often encounter scalability challenges due to the inherent need to predict interactions for every node pair.\nDespite the sparsity often exhibited by real-world graphs, the unpredictable sparsity patterns of their adjacency matrices, stemming from their unordered nature, leads to quadratic computational complexity.\nIn this work, we introduce SparseDiff, a denoising diffusion model for graph generation that is able to exploit sparsity during its training phase.\nAt the core of SparseDiff is a message-passing neural network tailored to predict only a subset of edges during each forward pass. When combined with a sparsity-preserving noise model, this model can efficiently work with edge lists representations of graphs, paving the way for scalability to much larger structures.\nDuring the sampling phase, SparseDiff iteratively populates the adjacency matrix from its prior state, ensuring prediction of the full graph while controlling memory utilization.\nExperimental results show that SparseDiff simultaneously matches state-of-the-art in generation performance on both small and large graphs, highlighting the versatility of our method.\u2020\u2020Contact: yiming.qin@epfl.ch.111Our code is available at https://github.com/qym7/SparseDiff.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Random graph models have been foundational in graph generation, with a rich legacy spanning several decades (erdHos1960evolution; aiello2000random; barabasi2013network). However, recent interest has gravitated towards learned graph models, primarily due to their enhanced ability to represent intricate data distributions.\nTraditional frameworks like generative adversarial networks (de2018molgan) and variational autoencoders (simonovsky2018graphvae) predominantly addressed graphs with a maximum of 9 nodes.\nThis limitation was somewhat alleviated with the advent of denoising diffusion models (niu2020permutation; jo2022score; vignac2022digress), elevating capacity to roughly 100 nodes. However, these models are still not scaled for broader applications like transportation (rong2023city) or financial system anomaly detection (li2023diga).\nThe primary bottleneck of many generative graph models is their computational complexity.\nWhile many natural graphs are sparse, the unordered nature graphs makes it challenging to exploit this trait. Without a predetermined sparsity pattern, models frequently make exhaustive predictions for every node pair, constraining them to a ceiling of 200 nodes (vignac2022digress). Proposed methods to circumvent this issue include imposing a node ordering (dai2020scalable), assembling sub-graphs (limnios2023sagess), generating hierarchically (karami2023higen; jang2023hggt), and conditioning the generation on a sampled degree distribution (chen2023efficient).\nThese methods, designed for large graphs, implicitly make assumptions on the data distribution\nwhich sometimes reflect in a poor ability to model very constrained graphs such as molecules (chen2023efficient; kong2023autoregressive).\nTo address these limitations, we propose SparseDiff, a generative model for graphs that exploits sparsity in its training phase by adopting edge list representations.\nSparseDiff defines a discrete denoising diffusion model that comprises three primary components:\nA noise model designed to retain sparsity throughout the diffusion process;\nA loss function computed on a set of random node pairs;\nA sparse graph transformer rooted in the message-passing framework.\nDuring the sampling process, our model iterates over pairs of nodes and progressively builds the predicted graph.\nSetting it apart from other scalable models, SparseDiff harnesses sparsity inherently without imposing additional assumptions on the data distribution.\nAs a result, it also encompasses dense denoising diffusion models as a limit case.\nWe show across a wide range of benchmarks that despite its simplicity,\nSparseDiff matches the generation performance of scalable models on large graphs. It also achieves comparable results to state-of-the-art dense models on small molecular datasets, making our model fit for all graph sizes.\nOverall, SparseDiff provides high controllability over GPU usage and thus extends the capabilities of current discrete graph models, making them suitable for significanlty larger graphs.\n###figure_1### ###figure_2### ###figure_3### ###figure_4###"
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Related Work",
|
| 15 |
+
"text": ""
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "Denoising diffusion models for graphs",
|
| 21 |
+
"text": "Diffusion models (sohldickstein2015diffusion; ho2020denoising) have gained increasing popularity as generative models due to their impressive performance across generative tasks in computer vision (dhariwal2021diffusion; ho2022imagen; poole2022dreamfusion), protein generation (baek2021accurate; ingraham2022illuminating) or audio synthesis (kong2020diffwave).\nThey can be trained by likelihood maximization (song2020score; kingma2021variational), which provides stability gains over generative adversarial networks, and admit a stochastic differential equation formulation (song2020score).\nTwo core components define these models. The first is a Markovian noise model, which iteratively corrupts a data point to a noisy sample until it conforms to a predefined prior distribution. The second component, a denoising network, is trained to revert the corrupted data to a less noisy state. This denoising network typically predicts the original data or, equivalently, the added noise .\nAfter the denoising network has been trained, it can be used to sample new objects. First, some noise is sampled from a prior distribution. The denoising network is then iteratively applied to this object. At each time step, a distribution is computed by marginalizing over the network prediction :\nand a new object is sampled from this distribution. While this integral is in general difficult to evaluate, two prominent frameworks allow for its efficient computation: Gaussian diffusion (ho2020denoising) and discrete diffusion (austin2021structured).\nWhen tailored to graph generation, initial diffusion models employed Gaussian noise on the adjacency matrices (niu2020permutation; jo2022score). They utilized a graph attention network to regress the added noise . Given that , regressing the noise is, up to an affine transformation, the same as regressing the clean graph, which is a discrete object. Recognizing the inherent discreteness of graphs, subsequent models (vignac2022digress; haefeli2022diffusion) leveraged discrete diffusion.\nThey recast graph generation as a series of classification tasks, preserving graph discreteness and achieving top-tier results. However, they made predictions for all pairs of nodes, which restricted their scalability."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "Scalable Graph Generation",
|
| 27 |
+
"text": "Efforts to enhance the scalability of diffusion models for graph generation have mainly followed two paradigms: subgraph aggregation and hierarchical refinement.\nThis approach divides larger graphs into smaller subgraphs, which are subsequently combined. Notably, SnapButton (yang2020scaleauto) enhances autoregressive models (liu2018constrained; liao2019efficient; mercado2021graph) by merging subgraphs. Meanwhile, BiGG (dai2020scalable) deconstructs adjacency matrices using a binary tree data structure, gradually generating edges with an autoregressive model. One notable limitation of autoregressive models is the breaking of permutation equivariance due to node ordering dependency. To counter this, (kong2023autoregressive) proposed learning the node ordering \u2013 a task theoretically at least as hard as isomorphism testing. Separately, SaGess (limnios2023sagess) trains a dense DiGress model to generate subgraphs sampled from a large graph, and then merges these subgraphs.\nThis class of methods initially generates a rudimentary graph, which undergoes successive refinements for enhanced detail (yang2020scaleauto; karami2023higen). Illustrative of this approach, the HGGT model (jang2023hggt) employs a -tree representation. Specifically for molecular generation, fragment-based models, such as (jin2018junction; jin2020hierarchical; maziarz2021learning), adeptly assemble compounds using pre-defined molecular fragments.\nA unique approach outside these paradigms was presented by chen2023efficient, who initially generated a node degree distribution for the nodes, and subsequently crafted an adjacency matrix conditioned on this distribution, preserving sparsity. Despite the universal feasibility of this factorization, the ease of learning the conditional distribution remains incertain, as there does not even always exist undirected graphs that satisfy a given degree distribution.\nOverall, scalable generation models typically either introduce a dependence on node orderings, or rely heavily on the existence of a community structure in the graphs. In contrast, the SparseDiff model described in next section aims at making no assumption besides sparsity, which results in very good performance across a wide range of graphs."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3",
|
| 31 |
+
"parent_section_id": null,
|
| 32 |
+
"section_name": "SparseDiff: Sparse Denoising Diffusion for Large Graph Generation",
|
| 33 |
+
"text": "We introduce the Sparse Denoising Diffusion model (SparseDiff), designed to bolster the scalability of discrete diffusion models by adopting edge list representations of graphs.\nWhile our primary focus is on graphs with discrete node and edge attributes, our model can be readily extended to accommodate continuous node attributes as well.\nA graph , composed of nodes and edges, is denoted as a triplet . Here, represents the edge list detailing indices of endpoints, while the node and edge attributes are encapsulated using a one-hot format in and , respectively.\nThe method\u2019s schematic is depicted in Fig. 2 ###reference_###. SparseDiff integrates three key components for training using sparse representations: a noise model that preserves sparsity in the graphs, a graph transformer that operates on sparse representations, and a loss function computed on random pairs of nodes. This integration facilitates efficient model training. However, it is crucial to note that during sampling, our model\u2019s complexity remains quadratic in .\n###figure_5###"
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3.1",
|
| 37 |
+
"parent_section_id": "3",
|
| 38 |
+
"section_name": "Sparsity-preserving noise model",
|
| 39 |
+
"text": "Our framework requires a noise model that preserves the sparsity of edges during diffusion. This rules out Gaussian-based models, and we therefore build our model on the discrete diffusion framework of austin2021structured.\nIn discrete diffusion, adding noise means jumping from state to state, i.e., sampling a state from a categorical distribution. The transition probabilities are given by a Markov transition matrix for each time step , where is the probability of transitioning from state to state . In the context of graph generation, the states corresponds to the possible node types or edge types, one particular state for the edges being \u201dno edge\u201d. The noise model is a product over nodes and edges, which means that nodes and edges are corrupted independently. In batch form, the noise model can therefore be written , where and refers to the transition matrix at step for nodes and edges respectively, while contains all edge features in dense format.\nSince the noise model is markovian, there noise does not need to be added recursively, and can be obtained by multiplying the Markov transition matrices: , with for and respectively.\nAs the Markov transition matrices are user-specified, several choices are possible.\nUniform transitions are the most commonly used model (hoogeboom2021argmax; austin2021structured; yang2023diffsound), but they do not preserve sparsity in the diffusion process. The two noise models that do not result in dense noisy graphs are absorbing transitions, used in (kong2023autoregressive; chen2023efficient), and marginal transitions. In this work, we choose to use the marginal transitions as they are supported by theoretical analysis (ingraham2022illuminating; vignac2022digress). In the marginal transition model, the probability of transitioning to a state is proportional to the marginal probability of that state in the data. In the context of graphs, this means that jumping to the state \u201dno edge\u201d will be very likely, as it is the dominant label in the data. Formally, if and are the marginal distribution of node and edge types and is the transpose of , the marginal transition matrices for nodes and edges are defined by:\nWhile standard discrete diffusion models simply compute transition probabilities using a product , this multiplication is not directly compatible with sparse representations of graphs. As a result, we adopt a three-step approach to sample noisy graphs without using dense tensors. First, we compute for edges of the clean graph and sample from this categorical distribution. Next, we determine the number of new edges to add to this list. This number follows a binomial distribution with draws and a success rate of , where is the number of non-existing edges and is the probability of staying in the state \u201dno edge\u201d. Finally, we sample positions for these new edges uniformly from the set of non-occupied entries in the adjacency matrix edges, with an efficient algorithm detailed in Appendix A.2 ###reference_###.\nWe note that our choice of noise model does not guarantee that the noisy graph is always sparse. However, it is the case with high probability, as stated by the following lemma, which is an application of desolneux2008estimating (cf. Appendix B ###reference_###).\n(High-probability bound on the sparsity of the noisy graph) \nConsider a graph with nodes and edges. We denote by the edge ratio . 
Let denote the number of edges in the noisy graph sampled from the marginal transition model.\nThen, for sufficiently large and , for any , we have:\nThis lemma shows that, in large and sparse graphs, the probability that the fraction of edges in the noisy graph is higher than decreases exponentially with the graph size. For instance, for small and , this probability can be written with for two constants and ."
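The three-step sampling procedure above can be sketched as follows. This is a simplified illustration, not the authors' implementation (their efficient placement algorithm is in Appendix A.2 of the paper): all names are invented, state 0 is assumed to encode "no edge", edges are assumed undirected with i < j, and rejection sampling stands in for the placement step.

```python
import torch

def sample_noisy_graph(edge_index, edge_attr, Qbar_e, n):
    """Sample G^t from q(G^t | G) without materializing a dense adjacency.

    edge_index: (2, m) endpoints (i < j) of the m existing edges
    edge_attr:  (m,) integer edge types (state 0 is reserved for "no edge")
    Qbar_e:     (K, K) cumulative edge transition matrix at step t
    """
    # Step 1: transition existing edges; keep those that remain actual edges.
    new_types = torch.multinomial(Qbar_e[edge_attr], 1).squeeze(-1)
    keep = new_types != 0
    noisy_index, noisy_types = edge_index[:, keep], new_types[keep]

    # Step 2: number of absent pairs that become edges, drawn from a binomial
    # with one trial per non-existing pair and success rate 1 - q(stay "no edge").
    num_absent = n * (n - 1) // 2 - edge_attr.numel()
    p_appear = 1.0 - float(Qbar_e[0, 0])
    k = int(torch.distributions.Binomial(num_absent, torch.tensor(p_appear)).sample())
    if k == 0:
        return noisy_index, noisy_types

    # Step 3: place the k new edges uniformly among absent pairs (rejection
    # sampling, cheap while the graph is sparse) and draw their types from
    # row 0 of Qbar_e, conditioned on the pair becoming an actual edge.
    occupied = {(int(i), int(j)) for i, j in edge_index.t()}
    new_pairs = set()
    while len(new_pairs) < k:
        i, j = sorted(torch.randint(0, n, (2,)).tolist())
        if i != j and (i, j) not in occupied:
            new_pairs.add((i, j))
    added_index = torch.tensor(sorted(new_pairs), dtype=torch.long).t()
    added_types = torch.multinomial(Qbar_e[0, 1:] / p_appear, k, replacement=True) + 1
    return (torch.cat([noisy_index, added_index], dim=1),
            torch.cat([noisy_types, added_types]))
```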
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3.2",
|
| 43 |
+
"parent_section_id": "3",
|
| 44 |
+
"section_name": "Prediction on a subset of pairs",
|
| 45 |
+
"text": "In discrete denoising diffusion for graphs such as (vignac2022digress; haefeli2022diffusion), a neural network is trained to predict the clean graph, i.e., the class of each node and each pair. This results in a quadratic space and time complexity.\nIn order to avoid this limitation, SparseDiff only makes prediction for a random subset of the edges that we call \u201dquery edges\u201d.\nFor this purpose, we introduce a parameter which corresponds to a fraction of pairs that are sampled uniformly in each forward pass. In our implementation, was treated as a constant and chosen to balance GPU usage, but it could be chosen as a decreasing function of the number of nodes as well.\nEquipped with a well-defined diffusion model, the denoising network is trained to predict the clean data distribution represented by . This network is trained by minimizing the cross-entropy loss between the predicted distribution and the clean graph, which is simply a sum over nodes and edges thanks to the cartesian product structure of the noise:\nwhere the constant (set to 5 in our experiments) balances the importance of nodes and edges."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "3.3",
|
| 49 |
+
"parent_section_id": "3",
|
| 50 |
+
"section_name": "Sparse Message-Passing Transformer",
|
| 51 |
+
"text": "###figure_6### The final component of the Sparse Diffusion Model is a memory-efficient graph neural network.\nIn previous diffusion models for graphs, the main complexity bottleneck lay in the need to encode features for all pairs of nodes, leading to a computation complexity that scaled as , where is the number of layers and the dimensionality of edge activations. To address this issue, it is necessary to avoid learning embeddings for all pairs of nodes. Fortunately, as our noisy graphs are sparse, edge lists representations can be leveraged. These representations can be efficiently used within message-passing neural networks (MPNNs) architectures (scarselli2008graph; gilmer2017neural) through\nthe use of specialized librairies such as Pytorch Geometric (fey2019fast) or the Deep Graph Library (wang2019deep).\nThe denoising network of SparseDiff has to deal with two simulatenous constraints. First, it needs to make predictions for the query edges . Second, in contrast to previous diffusion models, it cannot compute activations for all pairs of nodes.\nEdge predictions, although not possible within most message-passing architectures, are however common in the context of link prediction for knowledge graphs (zhang2018link; chamberlain2022graph; boschin2023machine). We therefore first consider a link prediction approach to our problem, detailed below.\nInstead of storing activations for pairs of edges, link predictions models typically only store representations for the nodes. In this framework, a graph neural network that learns embeddings for each node is coupled with a auxiliary module that predicts edges. In the simplest setting, this module can simply compute the cosine similarity between node representations. However, our model needs to predict edge features as well, which implies that we need to learn the edge prediction model. In practice, we parametrize this module by a symmetrized multi-layer perceptron that takes the representations and of both endpoints as input:\nWhile this approach is very memory efficient, we find that it has a slow convergence and poor overall performance in practice (cf. ablations in Appendix D.6 ###reference_###). In particular, we could not replicate the performance of dense denoising diffusion models, even on datasets of small graphs. This suggests that reconstructing the graph from node representations only, which is theoretically proved to be possible (maehara1990dimension), might be hard to achieve in practice.\nBased on the previous findings, we consider an approach that stores activations for pairs of nodes. The list of pairs for which we store activations will define our computational graph , i.e., the graph that is used by the message-passing architecture. This graph contains all nodes with their noisy features , as well as a list of edges denoted .\nIn order to bypass the need for an edge prediction module, the computational graph should contain the list of query edges sampled previously, i.e., .\nFurthermore, this graph should ideally contain all information about the noisy graph, which imposes . As a result of these two constraints, we define the computational graph as the union of the noisy and query edge lists. Since these two graphs are sparse, the computational graph used in our message-passing architecture is guaranteed to be sparse as well.\nOne extra benefit of using a computational graph that with more edges than the noisy graph only is that it acts as a graph rewiring mechanism. 
Introducing edges in the computational graph that do not exist in the input graph\nprovides the message-passing network with shortcuts, which is known to help the propagation of information and alleviate over-squashing issues (alon2020bottleneck; topping2021understanding; pmlr-v202-di-giovanni23a).\nOur denoising network architecture builds upon the message-passing transformer architecture developed in (shi2020masked). These layers integrate the graph attention mechanism (velivckovic2017graph) within a Transformer architecture by adding normalization and feed-forward layers. In contrast to previous architectures used in denoising networks for graphs such as (jo2022score) or (haefeli2022diffusion), the graph attention mechanism is based on edge list representations and is able to leverage the sparsity of graphs.\nWe however incorporate several elements of (vignac2022digress) to improve performance.\nSimilarly to their model, we internally manipulate graph-level features (such as the time information), as they are able to store information more compactly. Features for the nodes, edges, and graphs all depend on each other thanks to the use of PNA pooling layers (corso2020principal) and FiLM layers (perez2018film).\nFinally, similarly to standard graph transformer networks, we use a set of features as structural and positional encodings. These features, that include information about the graph Laplacian and cycle counts, are detailed in Appendix C ###reference_###. As highlighted in (vignac2022digress), these features can only be computed when the noisy graphs are sparse, which is an important benefit of discrete diffusion models.\nWe note that not all these encodings can be computed in sub-quadratic time. However, we find that this is not an issue in practice for the graphs that we consider as these features are not back-propagated through. On graphs with 500 nodes, computing these features is for example 5 times faster than the forward pass itself. On larger graphs, these encoding might however be removed for more efficient computations."
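To make the two constructions above concrete, here is a minimal sketch of the symmetrized MLP edge predictor and of the computational graph built as the union of the noisy and query edge lists. This is an illustration under our own naming assumptions (`mlp`, `noisy_index`, and `query_index` are hypothetical names), not the released SparseDiff code.

```python
# Illustrative sketch only (not the released SparseDiff code). Assumes PyTorch;
# `mlp`, `noisy_index`, and `query_index` are hypothetical names.
import torch

def predict_edge_logits(h, query_index, mlp):
    """Symmetrized edge prediction from node embeddings.

    h:           [n, d] node embeddings produced by the GNN
    query_index: [2, q] endpoints (i, j) of the query edges
    mlp:         module mapping [*, 2d] -> [*, num_edge_classes]
    Symmetrization: logits(i, j) = f(h_i || h_j) + f(h_j || h_i).
    """
    hi, hj = h[query_index[0]], h[query_index[1]]
    return mlp(torch.cat([hi, hj], dim=-1)) + mlp(torch.cat([hj, hi], dim=-1))

def computational_edge_index(noisy_index, query_index):
    """Computational graph = union of the noisy and query edge lists.

    Both inputs are [2, *] edge-index tensors. The union stays sparse since
    both graphs are sparse, and the extra edges act as rewiring shortcuts.
    """
    both = torch.cat([noisy_index, query_index], dim=1)
    return torch.unique(both, dim=1)  # deduplicated [2, e_union] edge index
```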
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "3.4",
|
| 55 |
+
"parent_section_id": "3",
|
| 56 |
+
"section_name": "Sampling",
|
| 57 |
+
"text": "Once the denoising network has been trained, it can be used to sample new graphs.\nSimilarly to other diffusion models for graphs, we first sample a number of nodes and keep this number constant during diffusion. We then sample a random graph from the prior distribution , where and are the marginal probabilities of each class in the data and denotes the categorical distribution with probabilities . Note that in the particular case of unattributed graphs, sampling from this prior distribution amounts to sampling an Erdos-Renyi graph. As previously, this graph can be sampled without using dense representations by i) sampling a number of edges to add from a categorical distribution, ii) sampling uniformly locations for these edges, and iii) sampling their edge type.\nAfter the graph has been sampled, the denoising network can be applied recursively in order to sample previous time step. Unfortunately, the full graph cannot be predicted at once, as this would require quadratic memory. Furthermore, it would also create a distribution shift: as the message passing network has been trained on computational graphs that are sparse, it should not be used at sampling time on dense query graphs.\nWe therefore use an iterative procedure, illustrated in Fig. 4 ###reference_###, to populate the matrix of predictions. At each iteration, we sample edges among the unfilled entries of the adjacency matrix, and use the message-passing network to predict these edges. The node features are only predicted at the last of these iterations. This procedure results in calls to the denoising diffusion model at each diffusion step. Our procedure therefore results in quadratic time complexity at sampling time, which is, as noted earlier, difficult to avoid without making assumptions on the data distribution.\n###figure_7###"
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "4",
|
| 61 |
+
"parent_section_id": null,
|
| 62 |
+
"section_name": "Experiments",
|
| 63 |
+
"text": "We conduct experiments to present the capability of SparseDiff across a wide range of graphs.\nSparseDiff matches state-of-the-art performance on datasets of small molecules (QM9, Moses), while being simulatenously very competitive on datasets of larger graphs (Planar, SBM, Protein, Ego).\nWe compare the performance of SparseDiff to GraphNVP (madhawa2019graphnvp), DiGress (vignac2022digress), Spectre (martinkus2022spectre), GraphRNN (you2018graphrnn), GG-GAN (krawczuk2020gg), JDSS (jo2022score), as well as several scalable models: HiGen (karami2023higen), EDGE (chen2023efficient), BiGG (dai2020scalable) and HGGT (jang2023hggt), and GraphARM (kong2023autoregressive)."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "4.1",
|
| 67 |
+
"parent_section_id": "4",
|
| 68 |
+
"section_name": "Molecule generation",
|
| 69 |
+
"text": "Since our method admits dense models as a limit when , it should match their performance on datasets of small graphs. We verify this capability on the QM9, and Moses molecular datasets used in DiGress (vignac2022digress). The QM9 dataset (wu2018moleculenet) that contains molecules with up to 9 heavy atoms can either be treated with implicit or explicit hydrogens. The Moses benchmark (polykovskiy2020molecular), based on ZINC Clean Leads, contains drug-sized molecules and features many tools to assess the model performance. Since QM9 contains charged atoms, we incorporate formal charges as an additional discrete node feature that is learned during diffusion, similarly to (Vignac2023MiDiMG). For fair comparison, we also apply this improvement to DiGress.\nFor QM9 dataset, we assess performance by checking the proportion of connected graphs, the molecular validity of the largest connected component (measured by the success of RDKit sanitization), uniqueness over 10.000 molecules. Additionally, we use the Frechet ChemNet Distance (FCD) (preuer2018frechet) which measures the similarity between sets of molecules using a pretrained neural network.\nIn Table 1 ###reference_###, we observe that SparseDiff overall achieves the best performance on QM9 with implicit hydrogens. In particular, it clearly outperforms other scalable methods on the FCD metric, showing that such methods are not well suited to small and very structured graphs.\nResults for QM9 with explicit hydrogens and the MOSES dataset are presented in Tables 5 ###reference_### (Appendix D.3 ###reference_###), and Table 6 ###reference_### (Appendix D.4 ###reference_###). We find that SparseDiff compares similarly to the DiGress model, which is expected as small graphs are not very sparse."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "4.2",
|
| 73 |
+
"parent_section_id": "4",
|
| 74 |
+
"section_name": "Large graph generation",
|
| 75 |
+
"text": "We also evaluate our model on datasets of graphs with increasing size: first, a dataset of Planar graphs (with 64 nodes per graph) which tests the ability of a model to generate graphs without edge crossings. Then, a dataset drawn from the Stochastic Block Model (SBM) (martinkus2022spectre) with 2 to 5 communities. SBM graphs contains up to nodes, which is the largest size used in dense diffusion models such as DiGress (vignac2022digress). Finally, we use the Ego (Sen2008CollectiveCI) and Protein (Dobson2003DistinguishingES) datasets that feature graphs with up to nodes. Ego is sourced from the CiteSeer (citeseer) dataset, captures citation relationships, while Protein represents amino acids connected when they are within 6 Angstroms of each other. The detailed statistics of them can be referred in Appendix D.2 ###reference_###.\nFor evaluation, baseline methods often sample the same number of graphs as the test set, introducing significant variance when this number is limited. To enhance reliability and fair comparisons, we recommend using at least five samplings and reporting their mean and variance. Additionally, MMD metrics, commonly used in graph generation tasks, produce small values that are challenging to directly compare. To address this, we report the metrics divided by MMD(training, test), and provide raw numbers in Appendix.\nIn addition to the MMD metrics, we report the validity of generated graphs for the SBM dataset, which is the fraction of graphs that pass a statistical test for the stochastic block model. For the Planar dataset, validity corresponds to the fraction of graphs that are planar and connected. We also use the FID and RBF MMD metrics defined in thompson2022evaluation. These metrics measure the diversity and fidelity of generated graphs using a randomly parametrized GNN.\nResults are presented in Tables 2 ###reference_### and 3 ###reference_###. We observe that dense models can achieve very good results on the SBM and planar graphs, which are not too large, but fail on larger graphs. The reason is that a tiny batch size (of 2 on a 32Gb GPU) needs to be used for such graphs, which makes training very slow. SparseDiff is competitive with both dense and scalable models on most metrics of all datasets, despite not being tailored to large graphs only. We however note that evaluation on these datasets is typically done by sampling only a small number of graphs, which makes results very brittle.\nWe observe that SparseDiff, despite a training time which is half of DiGress, is competitive with the state of the art. We however note that as the dataset is very small, which makes results very brittle across different samplings from the same model. We therefore strongly advocate for reporting results over several samplings for fair comparison.\nAs shown in Table , SparseDiff comparable results to other scalable methods for both datasets on most of the metrics.\nAgain, we highlight that some metrics exhibit high variance and recommend evaluating on several samplings, which should for instance prevent results inferior to from appearing."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "5",
|
| 79 |
+
"parent_section_id": null,
|
| 80 |
+
"section_name": "Conclusion",
|
| 81 |
+
"text": "In this study, we introduce SparseDiff, a scalable discrete denoising diffusion model for graph generation. SparseDiff permits the use of edge list representations by predicting only a subset of edges at once. Experimental results demonstrate that SparseDiff exhibits very good performance across all graph sizes, whereas other scalable methods tend to perform poorly on small, structured graphs.\nSparseDiff enhances the capabilities of discrete diffusion models to process larger datasets, thereby broadening its applicability, including tasks such as generating large biological molecules and community graphs, among others."
|
| 82 |
+
}
|
| 83 |
+
],
|
| 84 |
+
"appendix": [
|
| 85 |
+
{
|
| 86 |
+
"section_id": "Appendix 1",
|
| 87 |
+
"parent_section_id": null,
|
| 88 |
+
"section_name": "Appendix A Algorithm",
|
| 89 |
+
"text": "In the process of graph generation, we employ an iterative approach to construct the adjacency matrix. During each iteration, a random set of query edges, counting a proportion denoted by , is drawn without repetition from all edges until the entire matrix is populated. It is worth noting that when does not evenly divide 1, the last iteration may result in a different number of query edges. To maintain consistency in the number of query edges, we adopt a strategy in such cases: we utilize the last percent of edges from the dataset. This suggests that a small portion of edges will be repeatedly sampled during generation, but given that their predicted distribution should remain the same, this strategy will not change any mathematical formulation behind.\nBesides, the nodes are only sampled once across all iterations.\nTo enhance clarity and facilitate understanding, we provide Algorithm 1 ###reference_thm1### as the following.\nWe design a special algorithm to apply noise to graph data in a sparse manner.\nThe fundamental idea behind this algorithm is to treat separately existing and non-existing edges. In sparse graphs, the number of edges typically scales sub-quadratically with the number of nodes, denoted as , while the quadratic space complexity mainly stems from the non-existing edges. Even after sampling, newly emerging edges (those transitioning from non-existing to existing) also exhibit a linear scale concerning . Thus, we initially employ binomial sampling to estimate the count of these emerging edges denoted as . Subsequently, we randomly select edges from the pool of all non-existing edges and assign them random labels following the noised edge distribution, while the remaining non-existing edges maintain their non-existent state. This algorithm enables the replication of the traditional diffusion process without necessitating the adjacency matrix of size , but only with the edge list composed by edges.\nThe most challenging aspect of this algorithm is the random selection of a specific number (i.e. ) of emerging edges from the entire set of non-existing edges without introducing the adjacency matrix.\nDue to the algorithm\u2019s complexity and the technical intricacies involved, a more detailed discussion is discussed later in Appendix A.4 ###reference_###.\nIn the context of sparse training, we assume that maintaining the same distribution of edge types as in dense graphs within each graph is beneficial for training. To illustrate, consider a graph where non-existent edges account for of the total edges. In sparse training, on average, the non-existent edges should also constitute of the query edges . This assumption implies that we need to perform query edge sampling separately for each graph in order to maintain the distribution within each graph. When a batch contains graphs of varying sizes, simultaneously selecting a proportion of query edges in each graph is not straightforward.\nFor this purpose, an efficient sampling algorithm has been devised and can be viewed in our codes later.\nWe also apply a permutation to the node ordering in each graph. The absence of this step can negatively impact training performance due to symmetry.\nWhen sampling non-existing edges, a common approach is to use the adjacency matrix, which can be problematic for large graphs due to its quadratic size. The same challenge arises in the final step of sampling sparse noise.\nConsider a graph with nodes, featuring existing edges and pairs of nodes that are not connected. 
The condensed indices of the existing edges are , , , and . If the objective is to sample non-existing edges, you can start by randomly selecting two indices from the range , which corresponds to the 6 non-existing edges. For example, if indices are randomly chosen, where denotes the position of the third non-existing edge, and represents the fourth non-existing edge. These condensed indices are then inserted in the list of non-existing edges. Upon amalgamation with the existing edges, the final set of edges will become .\nThis approach allows us to efficiently sample non-existing edges while ensuring the proper placement of existing edges within the sampled set. Given the high complexity of the coding, please refer to our codes for more details."
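The condensed-index trick described above can be sketched as follows (an illustration under our own assumptions, not the released code): sample ranks among the missing pairs, then shift each rank past the existing edges that precede it by a monotone fixed-point iteration.

```python
# Illustrative sketch (not the released code): sample k non-existing edges
# uniformly via condensed indices, without building the adjacency matrix.
import torch

def sample_emerging_edges(existing, num_pairs, k):
    """existing: 1-D tensor of condensed indices of existing edges;
    num_pairs: total number of node pairs; k: number of edges to draw."""
    existing, _ = torch.sort(existing.long())
    num_missing = num_pairs - existing.numel()
    # Distinct, uniform ranks of the sampled edges among the missing pairs.
    ranks = torch.randperm(num_missing)[:k]
    # Map each rank r to its condensed index x, the least fixed point of
    # x = r + #(existing <= x); the monotone iteration below converges in
    # at most len(existing) + 1 steps.
    out = ranks.clone()
    for _ in range(existing.numel() + 1):
        shifted = ranks + torch.searchsorted(existing, out, right=True)
        if torch.equal(shifted, out):
            break
        out = shifted
    return out  # condensed indices of the sampled non-existing edges

# Hypothetical example (numbers ours, not those elided in the text): with
# existing condensed indices {0, 2, 4, 5} among 10 pairs, the missing pairs
# are {1, 3, 6, 7, 8, 9}; a sampled rank of 2 maps to condensed index 6,
# and a rank of 3 maps to 7.
```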
|
| 90 |
+
},
|
| 91 |
+
{
|
| 92 |
+
"section_id": "Appendix 2",
|
| 93 |
+
"parent_section_id": null,
|
| 94 |
+
"section_name": "Appendix B Proof of Lemma 3.1",
|
| 95 |
+
"text": "The lemma for noisy graph with guaranteed sparsity comes from directly from the proposition regarding the tail behavior of a binomial distribution (desolneux2008estimating) as follows:\n(Tail behavior of a binomial distribution)\nLet be independent Bernoulli random variables with parameter and let . Consider a constant or a real function . Then according to the Hoeffding inequality, satisfies:\nFor sparse graphs, the edge ratio is clearly smaller than . Consider then Bernoulli random variables with parameter and a constant with (i.e. number of all node pairs in an undirected graph) draws, and note sampled existing edge number as , we have:"
|
| 96 |
+
},
|
| 97 |
+
{
|
| 98 |
+
"section_id": "Appendix 3",
|
| 99 |
+
"parent_section_id": null,
|
| 100 |
+
"section_name": "Appendix C Structural and Positional Encodings",
|
| 101 |
+
"text": "During training, we augment model expressiveness with additional encodings. To make thing clear, we divide them into encodings for edges, for nodes, and for graphs."
|
| 102 |
+
},
|
| 103 |
+
{
|
| 104 |
+
"section_id": "Appendix 4",
|
| 105 |
+
"parent_section_id": null,
|
| 106 |
+
"section_name": "Appendix D Additional Experiments",
|
| 107 |
+
"text": "In our experiments, we select specific metrics tailored to each dataset, with a focus on four widely reported Maximum Mean Discrepancy (MMD) metrics. These metrics include node degree (Deg), clustering coefficient (Clus), orbit count (Orb), and eigenvalues of the normalized graph Laplacian (Spec).\nTo provide a more comprehensive overview of the various scales found in existing graph datasets, we present here key statistics for them. These statistics encompass the number of graphs, the range of node numbers, the range of edge numbers, the edge fraction for existing edges, and the query edge proportion used for training, i.e. the proportion of existing edges among all node pairs. In our consideration, we focus on undirected graphs. Therefore, when counting edges between nodes and , we include the edge in both directions.\n###table_1### We additionally report the results for QM9 with explicit hydrogens in Table 5 ###reference_###. Having explicit hydrogens makes the problem more complex because the resulting graphs are larger. We observe that SparseDiff achieves better validity than DiGress and has comparable results on other metrics when both are utilizing charges.\nMoses is an extensive molecular dataset with larger molecular graphs than QM9, offering a much more comprehensive set of metrics. While autoregressive models such as GraphINVENT are recognized for achieving higher validity on this dataset, both SparseDiff and DiGress exhibit advantages across most other metrics. Notably, SparseDiff closely aligns with the results achieved by DiGress, affirming the robustness of our method on complex datasets.\nTo ease comparison with other methods, Table 7 ###reference_### provides the raw numbers (not presented as ratios) for the SBM, Planar, Ego, and Protein datasets.\nThis part presents 2 ablation experiments that motivate our approach.\nSparseDiff builds upon an experimental observation and a hypothesis.\nFirstly, our experiments demonstrate that relying solely on node features for link prediction yields unsatisfactory results. This observation encouraged us to design the computational graph that contains all edges to be predicted (i.e. query edges) as the input graph.\nSecondly, we hypothesized that preserving the same distribution of edge types as observed in dense graphs for loss calculation is advantageous for training. This hypothesis requires to only calculate losses on uniformly sampled query edges."
|
| 108 |
+
},
|
| 109 |
+
{
|
| 110 |
+
"section_id": "Appendix 5",
|
| 111 |
+
"parent_section_id": null,
|
| 112 |
+
"section_name": "Appendix E Visualization",
|
| 113 |
+
"text": "###figure_8### ###figure_9### ###figure_10### ###figure_11### ###figure_12### ###figure_13### ###figure_14### ###figure_15### ###figure_16### ###figure_17###"
|
| 114 |
+
}
|
| 115 |
+
],
|
| 116 |
+
"tables": {
|
| 117 |
+
"1": {
|
| 118 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Molecule generation on QM9 with implicit hydrogens (mean and std over 5 samplings). For fair comparison, DiGress was modified to handle formal charges and benchmarked. While there is no major benefit to using sparsity on small graph, SparseDiff is very competitive, while other scalable models have a poor FCD metric, indicating that they does not correctly model the data.\n</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.28\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_tt\" id=\"S4.T1.4.4.5\">Class</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_tt\" id=\"S4.T1.4.4.6\">Method</th>\n<td class=\"ltx_td ltx_align_right ltx_border_tt\" id=\"S4.T1.1.1.1\">Valid \n</td>\n<td class=\"ltx_td ltx_align_right ltx_border_tt\" id=\"S4.T1.2.2.2\">Unique \n</td>\n<td class=\"ltx_td ltx_align_right ltx_border_tt\" id=\"S4.T1.3.3.3\">Connected \n</td>\n<td class=\"ltx_td ltx_align_right ltx_border_tt\" id=\"S4.T1.4.4.4\">FCD \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T1.6.6.3\">Dense</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T1.6.6.4\">SPECTRE</th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T1.5.5.1\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T1.6.6.2\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T1.6.6.5\">-</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T1.6.6.6\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.8.8\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T1.8.8.3\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.8.8.4\">GraphNVP</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.7.7.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.8.8.2\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.8.8.5\">-</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.8.8.6\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.10.10\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T1.10.10.3\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.10.10.4\">GDSS</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.9.9.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.10.10.2\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.10.10.5\">-</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.10.10.6\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.14.14\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T1.14.14.5\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.14.14.6\">DiGress</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.11.11.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.12.12.2\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.13.13.3\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.14.14.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.18.18\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T1.18.18.5\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.18.18.6\">DiGress + charges</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.15.15.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.16.16.2\"></td>\n<td class=\"ltx_td 
ltx_align_right\" id=\"S4.T1.17.17.3\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.18.18.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.19.19\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.19.19.2\">Sparse</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.19.19.3\">GraphARM</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.19.19.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.19.19.4\">-</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.19.19.5\">-</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.19.19.6\">1.22</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.22.22\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T1.22.22.4\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.22.22.5\">EDGE</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.20.20.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.21.21.2\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.22.22.6\">-</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.22.22.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.24.24\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T1.24.24.3\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.24.24.4\">HGGT</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.23.23.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.24.24.2\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.24.24.5\">-</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.24.24.6\">0.40</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.28.28\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_bb\" id=\"S4.T1.28.28.5\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S4.T1.28.28.6\">SparseDiff(ours)</th>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T1.25.25.1\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T1.26.26.2\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T1.27.27.3\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T1.28.28.4\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 119 |
+
"capture": "Table 1: Molecule generation on QM9 with implicit hydrogens (mean and std over 5 samplings). For fair comparison, DiGress was modified to handle formal charges and benchmarked. While there is no major benefit to using sparsity on small graph, SparseDiff is very competitive, while other scalable models have a poor FCD metric, indicating that they does not correctly model the data.\n"
|
| 120 |
+
},
|
| 121 |
+
"2": {
|
| 122 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>\nUnconditional generation on the Stochastic Block Model (SBM) and Planar datasets. A SBM graph is valid if it passes a statistical test for the stochastic block model, while a planar graph is valid if it is planar and connected. Results are presented in the form of ratios: . VUN: valid, unique & novel graphs.\n</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.27\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.27.26.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_tt\" id=\"S4.T2.27.26.1.1\">Dataset</th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"4\" id=\"S4.T2.27.26.1.2\">Stochastic block model</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"4\" id=\"S4.T2.27.26.1.3\">Planar</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.10.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T2.10.8.9\">Model</th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.3.1.1\">Deg.\n</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.4.2.2\">Clust.\n</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.5.3.3\">Orbit\n</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.6.4.4\">V.U.N.\n</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.7.5.5\">Deg. \n</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.8.6.6\">Clust. \n</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.9.7.7\">Orbit\n</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.10.8.8\">V.U.N.\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.27.27.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T2.27.27.2.1\">GraphRNN</th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.27.27.2.2\">6.9</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.27.27.2.3\">1.7</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.27.27.2.4\">3.1</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.27.27.2.5\">5%</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.27.27.2.6\">24.5</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.27.27.2.7\">9.0</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.27.27.2.8\">2508</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.27.27.2.9\">0%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.27.28.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.27.28.3.1\">GRAN</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.27.28.3.2\">14.1</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.27.28.3.3\">1.7</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.27.28.3.4\">2.1</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.27.28.3.5\">25%</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.27.28.3.6\">3.5</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.27.28.3.7\">1.4</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.27.28.3.8\">1.8</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.27.28.3.9\">0%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.27.29.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.27.29.4.1\">GG-GAN</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.27.29.4.2\">4.4</td>\n<td class=\"ltx_td 
ltx_align_right\" id=\"S4.T2.27.29.4.3\">2.1</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.27.29.4.4\">2.3</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.27.29.4.5\">25%</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.27.29.4.6\">\u2013</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.27.29.4.7\">\u2013</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.27.29.4.8\">\u2013</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.27.29.4.9\">\u2013</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.27.30.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.27.30.5.1\">SPECTRE</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.27.30.5.2\">1.9</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.27.30.5.3\">1.6</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.27.30.5.4\">1.6</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.27.30.5.5\">53%</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.27.30.5.6\">2.5</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.27.30.5.7\">2.5</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.27.30.5.8\">2.4</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.27.30.5.9\">25%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.17.15\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.17.15.8\">DiGress</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.11.9.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.12.10.2\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.17.15.9\">1.7</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.13.11.3\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.14.12.4\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.15.13.5\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.16.14.6\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.17.15.7\">\n%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.19.17\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.19.17.3\">HiGen</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.19.17.4\">2.4</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.18.16.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.19.17.2\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.19.17.5\">\u2013</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.19.17.6\">\u2013</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.19.17.7\">\u2013</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.19.17.8\">\u2013</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.19.17.9\">\u2013</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.27.25\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S4.T2.27.25.9\">SparseDiff</th>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T2.20.18.1\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T2.21.19.2\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T2.22.20.3\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T2.23.21.4\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T2.24.22.5\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T2.25.23.6\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T2.26.24.7\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T2.27.25.8\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 123 |
+
"capture": "Table 2: \nUnconditional generation on the Stochastic Block Model (SBM) and Planar datasets. A SBM graph is valid if it passes a statistical test for the stochastic block model, while a planar graph is valid if it is planar and connected. Results are presented in the form of ratios: . VUN: valid, unique & novel graphs.\n"
|
| 124 |
+
},
|
| 125 |
+
"3": {
|
| 126 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Unconditional generation on graphs with up to 500 nodes. On such graphs, dense models such as DiGress clearly fail, whereas SparseDiff presents competitive performance on most metrics. Results are presented in the form of ratios: .</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T3.31\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.8.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_tt\" id=\"S4.T3.8.6.7\">Dataset</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_tt\" id=\"S4.T3.8.6.8\">Class</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_tt\" id=\"S4.T3.8.6.9\">Model</th>\n<td class=\"ltx_td ltx_align_right ltx_border_tt\" id=\"S4.T3.3.1.1\">Degree\n</td>\n<td class=\"ltx_td ltx_align_right ltx_border_tt\" id=\"S4.T3.4.2.2\">Clust.\n</td>\n<td class=\"ltx_td ltx_align_right ltx_border_tt\" id=\"S4.T3.5.3.3\">Orbit\n</td>\n<td class=\"ltx_td ltx_align_right ltx_border_tt\" id=\"S4.T3.6.4.4\">Spectre\n</td>\n<td class=\"ltx_td ltx_align_right ltx_border_tt\" id=\"S4.T3.7.5.5\">FID \n</td>\n<td class=\"ltx_td ltx_align_right ltx_border_tt\" id=\"S4.T3.8.6.6\">RBF \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.31.30.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T3.31.30.1.1\"><em class=\"ltx_emph ltx_font_italic\" id=\"S4.T3.31.30.1.1.1\">Protein</em></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T3.31.30.1.2\">Dense</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T3.31.30.1.3\">GRAN</th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T3.31.30.1.4\">6.7</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T3.31.30.1.5\">7.1</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T3.31.30.1.6\">40.6</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T3.31.30.1.7\">5.7</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T3.31.30.1.8\">\u2013</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T3.31.30.1.9\">\u2013</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.12.10\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T3.12.10.5\"></th>\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T3.12.10.6\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.12.10.7\">DiGress</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.9.7.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.10.8.2\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.11.9.3\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.12.10.4\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.12.10.8\">7.9</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.12.10.9\">5.2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.31.31.2\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T3.31.31.2.1\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.31.31.2.2\">Sparse</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.31.31.2.3\">DRuM</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.31.31.2.4\">6.3</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.31.31.2.5\">9.7</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.31.31.2.6\">10.8</td>\n<td class=\"ltx_td ltx_align_right\" 
id=\"S4.T3.31.31.2.7\">3.3</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.31.31.2.8\">\u2013</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.31.31.2.9\">\u2013</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.14.12\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T3.14.12.3\"></th>\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T3.14.12.4\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.14.12.5\">BiGG</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.13.11.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.14.12.6\">3.7</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.14.12.2\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.14.12.7\">5.0</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.14.12.8\">\u2013</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.14.12.9\">\u2013</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.31.32.3\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T3.31.32.3.1\"></th>\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T3.31.32.3.2\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.31.32.3.3\">HiGen</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.31.32.3.4\">4.0</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.31.32.3.5\">6.4</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.31.32.3.6\">7.3</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.31.32.3.7\">2.8</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.31.32.3.8\">\u2013</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.31.32.3.9\">\u2013</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.20.18\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T3.20.18.7\"></th>\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T3.20.18.8\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.20.18.9\">SparseDiff</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.15.13.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.16.14.2\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.17.15.3\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.18.16.4\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.19.17.5\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.20.18.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.21.19\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.21.19.2\"><em class=\"ltx_emph ltx_font_italic\" id=\"S4.T3.21.19.2.1\">Ego</em></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.21.19.3\">Dense</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.21.19.4\">DiGress</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.21.19.5\">354</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.21.19.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.21.19.6\">100</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.21.19.7\">\u2013</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.21.19.8\">18.7</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.21.19.9\">5.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.22.20\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T3.22.20.2\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.22.20.3\">Sparse</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.22.20.4\">EDGE</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.22.20.5\">290</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.22.20.6\">17.3</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.22.20.7\">43.3</td>\n<td class=\"ltx_td ltx_align_right\" 
id=\"S4.T3.22.20.8\">\u2013</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.22.20.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.22.20.9\">4.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.25.23\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T3.25.23.4\"></th>\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T3.25.23.5\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.25.23.6\">HiGen</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.23.21.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.25.23.7\">4.9</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.24.22.2\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.25.23.3\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.25.23.8\">\u2013</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.25.23.9\">\u2013</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.31.29\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_bb\" id=\"S4.T3.31.29.7\"></th>\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_bb\" id=\"S4.T3.31.29.8\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S4.T3.31.29.9\">SparseDiff</th>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T3.26.24.1\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T3.27.25.2\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T3.28.26.3\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T3.29.27.4\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T3.30.28.5\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T3.31.29.6\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 127 |
+
"capture": "Table 3: Unconditional generation on graphs with up to 500 nodes. On such graphs, dense models such as DiGress clearly fail, whereas SparseDiff presents competitive performance on most metrics. Results are presented in the form of ratios: ."
|
| 128 |
+
},
|
| 129 |
+
"4": {
|
| 130 |
+
"table_html": "<figure class=\"ltx_table\" id=\"A4.T4\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 4: </span>Statistics for the datasets employed in our experiments.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"A4.T4.3\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A4.T4.3.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"A4.T4.3.3.4\">Name</td>\n<td class=\"ltx_td ltx_align_right ltx_border_tt\" id=\"A4.T4.3.3.5\">Graph number</td>\n<td class=\"ltx_td ltx_align_right ltx_border_tt\" id=\"A4.T4.3.3.6\">Node range</td>\n<td class=\"ltx_td ltx_align_right ltx_border_tt\" id=\"A4.T4.3.3.7\">Edge range</td>\n<td class=\"ltx_td ltx_align_right ltx_border_tt\" id=\"A4.T4.1.1.1\">Edge Fraction ()</td>\n<td class=\"ltx_td ltx_align_right ltx_border_tt\" id=\"A4.T4.3.3.3\">\n ()</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T4.3.4.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A4.T4.3.4.1.1\">QM9</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T4.3.4.1.2\">133,885</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T4.3.4.1.3\">[2,9]</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T4.3.4.1.4\">[2, 28]</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T4.3.4.1.5\">[20, 56]</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T4.3.4.1.6\">50</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T4.3.5.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T4.3.5.2.1\">QM9(H)</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T4.3.5.2.2\">133,885</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T4.3.5.2.3\">[3, 29]</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T4.3.5.2.4\">[4, 56]</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T4.3.5.2.5\">[7.7, 44]</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T4.3.5.2.6\">50</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T4.3.6.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T4.3.6.3.1\">Moses</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T4.3.6.3.2\">1,936,962</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T4.3.6.3.3\">[8, 27]</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T4.3.6.3.4\">[14, 62]</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T4.3.6.3.5\">[8.0, 22]</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T4.3.6.3.6\">50</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T4.3.7.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T4.3.7.4.1\">Planar</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T4.3.7.4.2\">200</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T4.3.7.4.3\">[64, 64]</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T4.3.7.4.4\">[346, 362]</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T4.3.7.4.5\">[8.4, 8.8]</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T4.3.7.4.6\">50</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T4.3.8.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A4.T4.3.8.5.1\">SBM</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T4.3.8.5.2\">200</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T4.3.8.5.3\">[44, 187]</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T4.3.8.5.4\">[258, 2258]</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T4.3.8.5.5\">[6.0, 17]</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T4.3.8.5.6\">25</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T4.3.9.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T4.3.9.6.1\">Ego</td>\n<td 
class=\"ltx_td ltx_align_right\" id=\"A4.T4.3.9.6.2\">757</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T4.3.9.6.3\">[50, 399]</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T4.3.9.6.4\">[112, 2124]</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T4.3.9.6.5\">[1.2, 11]</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T4.3.9.6.6\">10</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T4.3.10.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A4.T4.3.10.7.1\">Protein</td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"A4.T4.3.10.7.2\">918</td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"A4.T4.3.10.7.3\">[100, 500]</td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"A4.T4.3.10.7.4\">[372, 3150]</td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"A4.T4.3.10.7.5\">[8.9, 6.7]</td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"A4.T4.3.10.7.6\">10</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 131 |
+
"capture": "Table 4: Statistics for the datasets employed in our experiments."
|
| 132 |
+
},
|
| 133 |
+
"5": {
|
| 134 |
+
"table_html": "<figure class=\"ltx_table\" id=\"A4.T5\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 5: </span>Unconditional generation on QM9 with explicit hydrogens. On small graphs such as QM9, sparse models are not beneficial, but SparseDiff still achieves very good performance.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A4.T5.11\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A4.T5.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"A4.T5.4.4.5\">Model</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_tt\" id=\"A4.T5.4.4.6\">Connected</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_tt\" id=\"A4.T5.1.1.1\">Valid\n</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_tt\" id=\"A4.T5.2.2.2\">Unique\n</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_tt\" id=\"A4.T5.3.3.3\">Atom stable\n</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_tt\" id=\"A4.T5.4.4.4\">Mol stable\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A4.T5.8.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A4.T5.8.8.5\">DiGress</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T5.8.8.6\">\u2013</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T5.5.5.1\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T5.6.6.2\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T5.7.7.3\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T5.8.8.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T5.9.9\">\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T5.9.9.2\">DiGress + charges</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T5.9.9.3\">98.6</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T5.9.9.4\">97.7</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T5.9.9.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T5.9.9.5\">99.8</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T5.9.9.6\">97.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T5.11.11\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A4.T5.11.11.3\">SparseDiff</td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"A4.T5.11.11.4\">98.1</td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"A4.T5.10.10.1\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"A4.T5.11.11.2\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"A4.T5.11.11.5\">99.7</td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"A4.T5.11.11.6\">95.7</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 135 |
+
"capture": "Table 5: Unconditional generation on QM9 with explicit hydrogens. On small graphs such as QM9, sparse models are not beneficial, but SparseDiff still achieves very good performance."
|
| 136 |
+
},
|
| 137 |
+
"6": {
|
| 138 |
+
"table_html": "<figure class=\"ltx_table\" id=\"A4.T6\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 6: </span>Mean and standard deviation across 5 samplings on the MOSES benchmark. SparseDiff has similar performance to DiGress, despite a shorter training time.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"A4.T6.60\" style=\"width:433.6pt;height:224.3pt;vertical-align:-1.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(7.1pt,-3.7pt) scale(1.03389772112042,1.03389772112042) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"A4.T6.60.60\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A4.T6.5.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"A4.T6.5.5.5.6\">Model</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_tt\" id=\"A4.T6.1.1.1.1\">Connected \n</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_tt\" id=\"A4.T6.2.2.2.2\">Valid \n</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_tt\" id=\"A4.T6.3.3.3.3\">Unique \n</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_tt\" id=\"A4.T6.4.4.4.4\">Novel \n</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_tt\" id=\"A4.T6.5.5.5.5\">Filters \n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A4.T6.10.10.10\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"A4.T6.10.10.10.6\">GraphINVENT</th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T6.6.6.6.1\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T6.7.7.7.2\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T6.8.8.8.3\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T6.9.9.9.4\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T6.10.10.10.5\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T6.15.15.15\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A4.T6.15.15.15.6\">DiGress</th>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T6.11.11.11.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T6.12.12.12.2\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T6.13.13.13.3\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T6.14.14.14.4\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T6.15.15.15.5\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T6.20.20.20\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A4.T6.20.20.20.6\">SparseDiff</th>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T6.16.16.16.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T6.17.17.17.2\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T6.18.18.18.3\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T6.19.19.19.4\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T6.20.20.20.5\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T6.25.25.25\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"A4.T6.25.25.25.6\">Model</th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T6.21.21.21.1\">FCD \n</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T6.22.22.22.2\">SNN \n</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T6.23.23.23.3\">Scaf \n</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T6.24.24.24.4\">Frag \n</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" 
id=\"A4.T6.25.25.25.5\">IntDiv \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T6.30.30.30\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"A4.T6.30.30.30.6\">GraphINVENT</th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T6.26.26.26.1\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T6.27.27.27.2\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T6.28.28.28.3\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T6.29.29.29.4\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T6.30.30.30.5\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T6.35.35.35\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A4.T6.35.35.35.6\">DiGress</th>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T6.31.31.31.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T6.32.32.32.2\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T6.33.33.33.3\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T6.34.34.34.4\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T6.35.35.35.5\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T6.40.40.40\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A4.T6.40.40.40.6\">SparseDiff</th>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T6.36.36.36.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T6.37.37.37.2\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T6.38.38.38.3\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T6.39.39.39.4\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T6.40.40.40.5\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T6.45.45.45\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"A4.T6.45.45.45.6\">Model</th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T6.41.41.41.1\">Filters \n</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T6.42.42.42.2\">logP \n</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T6.43.43.43.3\">SA \n</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T6.44.44.44.4\">QED \n</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T6.45.45.45.5\">Weight \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T6.50.50.50\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"A4.T6.50.50.50.6\">GraphINVENT</th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T6.46.46.46.1\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T6.47.47.47.2\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T6.48.48.48.3\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T6.49.49.49.4\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A4.T6.50.50.50.5\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T6.55.55.55\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A4.T6.55.55.55.6\">DiGress</th>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T6.51.51.51.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T6.52.52.52.2\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T6.53.53.53.3\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T6.54.54.54.4\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A4.T6.55.55.55.5\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T6.60.60.60\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"A4.T6.60.60.60.6\">SparseDiff</th>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"A4.T6.56.56.56.1\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"A4.T6.57.57.57.2\"></td>\n<td 
class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"A4.T6.58.58.58.3\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"A4.T6.59.59.59.4\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"A4.T6.60.60.60.5\"></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
|
| 139 |
+
"capture": "Table 6: Mean and standard deviation across 5 samplings on the MOSES benchmark. SparseDiff has similar performance to DiGress, despite a shorter training time."
|
| 140 |
+
},
|
| 141 |
+
"7": {
|
| 142 |
+
"table_html": "<figure class=\"ltx_table\" id=\"A4.T7\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 7: </span>Raw results on the SBM, Planar, Protein and Ego datasets. </figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"A4.T7.30\" style=\"width:433.6pt;height:224.3pt;vertical-align:-1.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-10.3pt,5.3pt) scale(0.954466546420655,0.954466546420655) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"A4.T7.30.30\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A4.T7.6.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_tt\" id=\"A4.T7.6.6.6.7\">Model</th>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"A4.T7.1.1.1.1\">Deg (e-3)\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"A4.T7.2.2.2.2\">Clus (e-2)\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"A4.T7.3.3.3.3\">Orb (e-2)\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"A4.T7.4.4.4.4\">Spec (e-3)\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"A4.T7.5.5.5.5\">FID\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"A4.T7.6.6.6.6\">RBF MMD (e-2)\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T7.30.30.31.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" colspan=\"2\" id=\"A4.T7.30.30.31.1.1\"><em class=\"ltx_emph ltx_font_italic\" id=\"A4.T7.30.30.31.1.1.1\">SBM</em></th>\n<td class=\"ltx_td ltx_border_t\" id=\"A4.T7.30.30.31.1.2\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"A4.T7.30.30.31.1.3\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"A4.T7.30.30.31.1.4\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"A4.T7.30.30.31.1.5\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"A4.T7.30.30.31.1.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T7.30.30.32.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A4.T7.30.30.32.2.1\">Training set</th>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.30.30.32.2.2\">0.8</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.30.30.32.2.3\">3.32</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.30.30.32.2.4\">2.55</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.30.30.32.2.5\">5.2</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.30.30.32.2.6\">16.83</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.30.30.32.2.7\">3.13</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T7.12.12.12\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A4.T7.12.12.12.7\">SparseDiff</th>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.7.7.7.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.8.8.8.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.9.9.9.3\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.10.10.10.4\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.11.11.11.5\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.12.12.12.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T7.30.30.33.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" colspan=\"2\" id=\"A4.T7.30.30.33.3.1\"><em class=\"ltx_emph ltx_font_italic\" id=\"A4.T7.30.30.33.3.1.1\">Planar</em></th>\n<td class=\"ltx_td\" id=\"A4.T7.30.30.33.3.2\"></td>\n<td class=\"ltx_td\" id=\"A4.T7.30.30.33.3.3\"></td>\n<td class=\"ltx_td\" id=\"A4.T7.30.30.33.3.4\"></td>\n<td class=\"ltx_td\" id=\"A4.T7.30.30.33.3.5\"></td>\n<td class=\"ltx_td\" id=\"A4.T7.30.30.33.3.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"A4.T7.30.30.34.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A4.T7.30.30.34.4.1\">Training set</th>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.30.30.34.4.2\">0.2</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.30.30.34.4.3\">3.10</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.30.30.34.4.4\">0.05</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.30.30.34.4.5\">6.3</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.30.30.34.4.6\">0.19</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.30.30.34.4.7\">3.20</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T7.18.18.18\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A4.T7.18.18.18.7\">SparseDiff</th>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.13.13.13.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.14.14.14.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.15.15.15.3\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.16.16.16.4\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.17.17.17.5\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.18.18.18.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T7.30.30.35.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" colspan=\"2\" id=\"A4.T7.30.30.35.5.1\"><em class=\"ltx_emph ltx_font_italic\" id=\"A4.T7.30.30.35.5.1.1\">Protein</em></th>\n<td class=\"ltx_td\" id=\"A4.T7.30.30.35.5.2\"></td>\n<td class=\"ltx_td\" id=\"A4.T7.30.30.35.5.3\"></td>\n<td class=\"ltx_td\" id=\"A4.T7.30.30.35.5.4\"></td>\n<td class=\"ltx_td\" id=\"A4.T7.30.30.35.5.5\"></td>\n<td class=\"ltx_td\" id=\"A4.T7.30.30.35.5.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T7.30.30.36.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A4.T7.30.30.36.6.1\">Training set</th>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.30.30.36.6.2\">0.3</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.30.30.36.6.3\">0.68</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.30.30.36.6.4\">0.32</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.30.30.36.6.5\">0.9</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.30.30.36.6.6\">5.74</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.30.30.36.6.7\">0.68</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T7.24.24.24\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A4.T7.24.24.24.7\">SparseDiff</th>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.19.19.19.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.20.20.20.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.21.21.21.3\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.22.22.22.4\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.23.23.23.5\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.24.24.24.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T7.30.30.37.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" colspan=\"2\" id=\"A4.T7.30.30.37.7.1\"><em class=\"ltx_emph ltx_font_italic\" id=\"A4.T7.30.30.37.7.1.1\">Ego</em></th>\n<td class=\"ltx_td\" id=\"A4.T7.30.30.37.7.2\"></td>\n<td class=\"ltx_td\" id=\"A4.T7.30.30.37.7.3\"></td>\n<td class=\"ltx_td\" id=\"A4.T7.30.30.37.7.4\"></td>\n<td class=\"ltx_td\" id=\"A4.T7.30.30.37.7.5\"></td>\n<td class=\"ltx_td\" id=\"A4.T7.30.30.37.7.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T7.30.30.38.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A4.T7.30.30.38.8.1\">Training set</th>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.30.30.38.8.2\">0.2</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.30.30.38.8.3\">1.0</td>\n<td 
class=\"ltx_td ltx_align_left\" id=\"A4.T7.30.30.38.8.4\">1.20</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.30.30.38.8.5\">1.4</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.30.30.38.8.6\">1.21</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A4.T7.30.30.38.8.7\">1.23</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T7.30.30.30\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"A4.T7.30.30.30.7\">SparseDiff</th>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A4.T7.25.25.25.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A4.T7.26.26.26.2\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A4.T7.27.27.27.3\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A4.T7.28.28.28.4\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A4.T7.29.29.29.5\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A4.T7.30.30.30.6\"></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
|
| 143 |
+
"capture": "Table 7: Raw results on the SBM, Planar, Protein and Ego datasets. "
|
| 144 |
+
},
|
| 145 |
+
"8": {
|
| 146 |
+
"table_html": "<figure class=\"ltx_table\" id=\"A4.T8\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 8: </span>Influence of including edges features for edge prediction.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"A4.T8.12\" style=\"width:433.6pt;height:59.9pt;vertical-align:-1.1pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(17.5pt,-2.4pt) scale(1.08797742093441,1.08797742093441) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"A4.T8.12.12\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A4.T8.6.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_tt\" id=\"A4.T8.6.6.6.7\">Model</th>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"A4.T8.1.1.1.1\">Deg \n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"A4.T8.2.2.2.2\">Clus \n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"A4.T8.3.3.3.3\">Orb\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"A4.T8.4.4.4.4\">Spec\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"A4.T8.5.5.5.5\">FID\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"A4.T8.6.6.6.6\">RBF MMD\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T8.12.12.13.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"A4.T8.12.12.13.1.1\">Link Pred</th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A4.T8.12.12.13.1.2\">0.0043</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A4.T8.12.12.13.1.3\">0.0721</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A4.T8.12.12.13.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"A4.T8.12.12.13.1.4.1\">0.0275</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A4.T8.12.12.13.1.5\">0.0344</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A4.T8.12.12.13.1.6\">1.51e6</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A4.T8.12.12.13.1.7\"><span class=\"ltx_text ltx_font_bold\" id=\"A4.T8.12.12.13.1.7.1\">0.0315</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T8.12.12.12\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_r\" id=\"A4.T8.12.12.12.7\">SparseDiff</th>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A4.T8.7.7.7.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A4.T8.8.8.8.2\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A4.T8.9.9.9.3\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A4.T8.10.10.10.4\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A4.T8.11.11.11.5\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A4.T8.12.12.12.6\"></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
|
| 147 |
+
"capture": "Table 8: Influence of including edges features for edge prediction."
|
| 148 |
+
},
|
| 149 |
+
"9": {
|
| 150 |
+
"table_html": "<figure class=\"ltx_table\" id=\"A4.T9\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 9: </span>Influence of edge loss distribution on EGO dataset.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"A4.T9.14\" style=\"width:433.6pt;height:57.5pt;vertical-align:-1.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(9.6pt,-1.2pt) scale(1.04619671460512,1.04619671460512) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"A4.T9.14.14\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A4.T9.6.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"A4.T9.6.6.6.7\">Loss based on</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"A4.T9.1.1.1.1\">Deg \n</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"A4.T9.2.2.2.2\">Clus \n</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"A4.T9.3.3.3.3\">Orb\n</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"A4.T9.4.4.4.4\">Spec\n</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"A4.T9.5.5.5.5\">FID\n</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"A4.T9.6.6.6.6\">RBF MMD\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A4.T9.8.8.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"A4.T9.8.8.8.3\">Comp graph</th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A4.T9.8.8.8.4\">0.0021</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A4.T9.8.8.8.5\">0.0566</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A4.T9.7.7.7.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A4.T9.8.8.8.6\">0.0100</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A4.T9.8.8.8.7\">28.2</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A4.T9.8.8.8.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T9.14.14.14\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_r\" id=\"A4.T9.14.14.14.7\">Query graph</th>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A4.T9.9.9.9.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A4.T9.10.10.10.2\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A4.T9.11.11.11.3\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A4.T9.12.12.12.4\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A4.T9.13.13.13.5\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A4.T9.14.14.14.6\"></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
|
| 151 |
+
"capture": "Table 9: Influence of edge loss distribution on EGO dataset."
|
| 152 |
+
}
|
| 153 |
+
},
|
| 154 |
+
"image_paths": {
|
| 155 |
+
"1(a)": {
|
| 156 |
+
"figure_path": "2311.02142v2_figure_1(a).png",
|
| 157 |
+
"caption": "(a) Ego training set (50505050 to 399399399399 nodes).\nFigure 1: Samples from SparseDiff trained on large graphs.",
|
| 158 |
+
"url": "http://arxiv.org/html/2311.02142v2/iclr2023/figures/ego_training_3.png"
|
| 159 |
+
},
|
| 160 |
+
"1(b)": {
|
| 161 |
+
"figure_path": "2311.02142v2_figure_1(b).png",
|
| 162 |
+
"caption": "(b) Generated Ego graphs.\nFigure 1: Samples from SparseDiff trained on large graphs.",
|
| 163 |
+
"url": "http://arxiv.org/html/2311.02142v2/iclr2023/figures/ego.png"
|
| 164 |
+
},
|
| 165 |
+
"1(c)": {
|
| 166 |
+
"figure_path": "2311.02142v2_figure_1(c).png",
|
| 167 |
+
"caption": "(c) Protein training set (100100100100 to 500500500500 nodes).\nFigure 1: Samples from SparseDiff trained on large graphs.",
|
| 168 |
+
"url": "http://arxiv.org/html/2311.02142v2/iclr2023/figures/protein_training_3.png"
|
| 169 |
+
},
|
| 170 |
+
"1(d)": {
|
| 171 |
+
"figure_path": "2311.02142v2_figure_1(d).png",
|
| 172 |
+
"caption": "(d) Generated Protein graphs.\nFigure 1: Samples from SparseDiff trained on large graphs.",
|
| 173 |
+
"url": "http://arxiv.org/html/2311.02142v2/iclr2023/figures/protein.png"
|
| 174 |
+
},
|
| 175 |
+
"2": {
|
| 176 |
+
"figure_path": "2311.02142v2_figure_2.png",
|
| 177 |
+
"caption": "Figure 2: Overview of SparseDiff. In order to train a denoising neural network without considering all pairs of nodes, SparseDiff combines i) a noise model that preserves sparsity during diffusion; ii) a graph transformer \u03d5\u03b8subscriptitalic-\u03d5\ud835\udf03\\phi_{\\theta}italic_\u03d5 start_POSTSUBSCRIPT italic_\u03b8 end_POSTSUBSCRIPT implemented within the message-passing framework; iii) a loss function computed on a subset \ud835\udc6cqsubscript\ud835\udc6c\ud835\udc5e{\\bm{E}}_{q}bold_italic_E start_POSTSUBSCRIPT italic_q end_POSTSUBSCRIPT of all pairs of nodes. Together, these components allow for using edge lists and training diffusion models on significantly larger graphs than dense methods.",
|
| 178 |
+
"url": "http://arxiv.org/html/2311.02142v2/iclr2023/figures/overview_sparsediff.png"
|
| 179 |
+
},
|
| 180 |
+
"3": {
|
| 181 |
+
"figure_path": "2311.02142v2_figure_3.png",
|
| 182 |
+
"caption": "Figure 3: Definition of the noisy graph Gtsuperscript\ud835\udc3a\ud835\udc61G^{t}italic_G start_POSTSUPERSCRIPT italic_t end_POSTSUPERSCRIPT, the query graph Gqsubscript\ud835\udc3a\ud835\udc5eG_{q}italic_G start_POSTSUBSCRIPT italic_q end_POSTSUBSCRIPT, and the computational graph Gcsubscript\ud835\udc3a\ud835\udc50G_{c}italic_G start_POSTSUBSCRIPT italic_c end_POSTSUBSCRIPT, with an edge proportion \u03bb=0.16\ud835\udf060.16\\lambda=0.16italic_\u03bb = 0.16. The noisy graph Gtsuperscript\ud835\udc3a\ud835\udc61G^{t}italic_G start_POSTSUPERSCRIPT italic_t end_POSTSUPERSCRIPT is the result of our sparsity-preserving noising process, the query graph Gqsubscript\ud835\udc3a\ud835\udc5eG_{q}italic_G start_POSTSUBSCRIPT italic_q end_POSTSUBSCRIPT consists of a fraction \u03bb\ud835\udf06\\lambdaitalic_\u03bb of randomly chosen edges, and the computational graph Gcsubscript\ud835\udc3a\ud835\udc50G_{c}italic_G start_POSTSUBSCRIPT italic_c end_POSTSUBSCRIPT is the union of the noisy and query graphs. Self-loops are not included in the calculation.",
|
| 183 |
+
"url": "http://arxiv.org/html/2311.02142v2/iclr2023/figures/query_edges.pdf"
|
| 184 |
+
},
|
| 185 |
+
"4": {
|
| 186 |
+
"figure_path": "2311.02142v2_figure_4.png",
|
| 187 |
+
"caption": "Figure 4: Visualization of the iterative sampling process, with a query edge proportion",
|
| 188 |
+
"url": "http://arxiv.org/html/2311.02142v2/iclr2023/figures/sampling.pdf"
|
| 189 |
+
},
|
| 190 |
+
"5(a)": {
|
| 191 |
+
"figure_path": "2311.02142v2_figure_5(a).png",
|
| 192 |
+
"caption": "(a) Training graphs.\nFigure 5: Visualization for Moses dataset.",
|
| 193 |
+
"url": "http://arxiv.org/html/2311.02142v2/iclr2023/figures/moses_training.png"
|
| 194 |
+
},
|
| 195 |
+
"5(b)": {
|
| 196 |
+
"figure_path": "2311.02142v2_figure_5(b).png",
|
| 197 |
+
"caption": "(b) Generated graphs.\nFigure 5: Visualization for Moses dataset.",
|
| 198 |
+
"url": "http://arxiv.org/html/2311.02142v2/iclr2023/figures/moses_vis.png"
|
| 199 |
+
},
|
| 200 |
+
"6(a)": {
|
| 201 |
+
"figure_path": "2311.02142v2_figure_6(a).png",
|
| 202 |
+
"caption": "(a) Training graphs.\nFigure 6: Visualization for Planar dataset.",
|
| 203 |
+
"url": "http://arxiv.org/html/2311.02142v2/iclr2023/figures/planar_training.png"
|
| 204 |
+
},
|
| 205 |
+
"6(b)": {
|
| 206 |
+
"figure_path": "2311.02142v2_figure_6(b).png",
|
| 207 |
+
"caption": "(b) Generated graphs.\nFigure 6: Visualization for Planar dataset.",
|
| 208 |
+
"url": "http://arxiv.org/html/2311.02142v2/iclr2023/figures/planar_vis.png"
|
| 209 |
+
},
|
| 210 |
+
"7(a)": {
|
| 211 |
+
"figure_path": "2311.02142v2_figure_7(a).png",
|
| 212 |
+
"caption": "(a) Training graphs.\nFigure 7: Visualization for SBM dataset.",
|
| 213 |
+
"url": "http://arxiv.org/html/2311.02142v2/iclr2023/figures/sbm_training.png"
|
| 214 |
+
},
|
| 215 |
+
"7(b)": {
|
| 216 |
+
"figure_path": "2311.02142v2_figure_7(b).png",
|
| 217 |
+
"caption": "(b) Generated graphs.\nFigure 7: Visualization for SBM dataset.",
|
| 218 |
+
"url": "http://arxiv.org/html/2311.02142v2/iclr2023/figures/sbm_vis.png"
|
| 219 |
+
},
|
| 220 |
+
"8(a)": {
|
| 221 |
+
"figure_path": "2311.02142v2_figure_8(a).png",
|
| 222 |
+
"caption": "(a) Training graphs.\nFigure 8: Visualization for Ego dataset.",
|
| 223 |
+
"url": "http://arxiv.org/html/2311.02142v2/iclr2023/figures/ego_training.png"
|
| 224 |
+
},
|
| 225 |
+
"8(b)": {
|
| 226 |
+
"figure_path": "2311.02142v2_figure_8(b).png",
|
| 227 |
+
"caption": "(b) Generated graphs.\nFigure 8: Visualization for Ego dataset.",
|
| 228 |
+
"url": "http://arxiv.org/html/2311.02142v2/iclr2023/figures/ego_vis.png"
|
| 229 |
+
},
|
| 230 |
+
"9(a)": {
|
| 231 |
+
"figure_path": "2311.02142v2_figure_9(a).png",
|
| 232 |
+
"caption": "(a) Training graphs.\nFigure 9: Visualization for Protein dataset.",
|
| 233 |
+
"url": "http://arxiv.org/html/2311.02142v2/iclr2023/figures/protein_training.png"
|
| 234 |
+
},
|
| 235 |
+
"9(b)": {
|
| 236 |
+
"figure_path": "2311.02142v2_figure_9(b).png",
|
| 237 |
+
"caption": "(b) Generated graphs.\nFigure 9: Visualization for Protein dataset.",
|
| 238 |
+
"url": "http://arxiv.org/html/2311.02142v2/iclr2023/figures/protein_vis.png"
|
| 239 |
+
}
|
| 240 |
+
},
|
| 241 |
+
"validation": true,
|
| 242 |
+
"references": [],
|
| 243 |
+
"url": "http://arxiv.org/html/2311.02142v2"
|
| 244 |
+
}
|
20240522/2311.02805v2.json
ADDED
The diff for this file is too large to render.
See raw diff

20240522/2311.05956v2.json
ADDED
The diff for this file is too large to render.
See raw diff
20240522/2311.07750v3.json
ADDED
|
@@ -0,0 +1,329 @@
| 1 |
+
{
|
| 2 |
+
"title": "SynthEnsemble: A Fusion of CNN, Vision Transformer, and Hybrid Models for Multi-Label Chest X-Ray Classification",
|
| 3 |
+
"abstract": "Chest X-rays are widely used to diagnose thoracic diseases, but the lack of detailed information about these abnormalities makes it challenging to develop accurate automated diagnosis systems, which is crucial for early detection and effective treatment. To address this challenge, we employed deep learning techniques to identify patterns in chest X-rays that correspond to different diseases. We conducted experiments on the \u201dChestX-ray14\u201d dataset using various pre-trained CNNs, transformers, hybrid(CNN+Transformer) models, and classical models. The best individual model was the CoAtNet, which achieved an area under the receiver operating characteristic curve (AUROC) of 84.2%. By combining the predictions of all trained models using a weighted average ensemble where the weight of each model was determined using differential evolution, we further improved the AUROC to 85.4%, outperforming other state-of-the-art methods in this field. Our findings demonstrate the potential of deep learning techniques, particularly ensemble deep learning, for improving the accuracy of automatic diagnosis of thoracic diseases from chest X-rays. Code available at: https://github.com/syednabilashraf/SynthEnsemble",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "The field of medical diagnostics has witnessed a growing interest and recognition because of deep learning as a promising and feasible approach. Specifically, using chest X-ray imaging as a screening and diagnostic modality in Artificial Intelligence tools holds significant importance for various thoracic diseases [1 ###reference_b1###]. However, the lack of properly labeled hospital-scale datasets, as well as fine-grained features, are hindering the development of computer-aided diagnosis systems[2 ###reference_b2###]. Despite this, the utilization of Convolutional Neural Networks (CNNs), pre-trained transformer models, and their subsequent fine-tuning for downstream tasks has demonstrated efficacy in situations where there is a scarcity of training data and quality features [3 ###reference_b3###] [4 ###reference_b4###]. Additionally, it is imperative to mitigate unexpected biases, as they are deemed undesirable within a medical scenario. In the context of low-resolution images and limited image data, it was seen that Swin Transformer V2 outperforms alternative vision transformer models[5 ###reference_b5###].\nThis study investigates various deep-learning approaches for the purpose of identifying features in chest radiography (CXR) pictures that are indicative of chest illnesses. In every instance of chest disease, we employ pre-trained convolutional neural network (CNN) models, vision transformer models, and a fusion of CNN and transformers, utilizing chest X-ray pictures as input. Furthermore, we optimize the hyper-parameters of the multi-label system, which encompasses 14 diagnostic labels concurrently.\nIn our study, we present three noteworthy contributions:\nWe outperformed previous efforts in thoracic disease identification by achieving a superior ROC AUC. To achieve this, we conducted a thorough comparison of diverse neural network models, including transformers, Convolutional Neural Networks (CNNs), hybrids (CNN + ViT), and even classical models.\nWe developed an innovative approach that involves the seamless integration of diverse Deep Neural Networks and Classical Machine Learning Models. By incorporating Ensemble learning techniques, we effectively elevated the performance of the model.\nWe utilized a unique approach to make the training more cost-effective by using a cyclic learning rate.\nMoreover, we implemented a two-step fine-tuning and training process, accompanied by the identification of optimal learning rates for each step. This approach was employed to achieve improved and expedited convergence. Collectively, these contributions shed light on a pathway for advancing within the realm of thoracic disease identification."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "II Literature Review",
|
| 15 |
+
"text": "In recent years, significant progress has been made in the field of deep learning and the availability of extensive datasets in the task of medical imaging. These advancements have facilitated the development of methods that have demonstrated comparable performances against healthcare experts in various medical imaging tasks[6 ###reference_b6###] [7 ###reference_b7###] [8 ###reference_b8###] [9 ###reference_b9###] [10 ###reference_b10###]. In particular, detection of diabetic retinopathy[6 ###reference_b6###], classification of skin cancer[7 ###reference_b7###], identifying arrhythmia[8 ###reference_b8###], recognition of haemorrhage[9 ###reference_b9###], and pulmonary tuberculosis detection in x-rays[10 ###reference_b10###]\nIn continuation of expanding the realm of this medical imaging field, Wang et al.[2 ###reference_b2###] introduced the ChestX-ray-14 dataset, which contains significantly larger data compared to prior datasets in the same domain. Additionally, Wang et al. conducted a comparative evaluation of various convolutional neural network architectures that had been pre-trained on ImageNet[11 ###reference_b11###]. After that, researchers have come forward to improve the detection of different chest diseases by proposing and leveraging different methodologies, like Yao et al.[12 ###reference_b12###], who tested the statistical dependencies among labels."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "II-A Convolutional Neural Networks in medical image domain",
|
| 21 |
+
"text": "In the field of medical image learning, it is not unexpected to leverage the Convolutional Neural Network (CNN)[13 ###reference_b13###] as the foundation for the most successful models in the field of medical image learning. Among researchers, the CNN structure has been the prevailing choice for image recognition tasks. As a result, many have proposed efficient methodologies on top of the existing ones. Gao et al. [14 ###reference_b14###] illustrated a solution to the vanishing-gradient problem in Convolutional Neural Networks (CNNs) by interconnecting each layer of the CNN with every subsequent layer. However, the complex architectures of CNNs give rise to concerns regarding interpretability and computational efficiency. Nevertheless, Pranav et al. successfully employed 121-layer CNNs to address medical imaging challenges in the realm of Cardiovascular diseases, demonstrating consistent performances [15 ###reference_b15###]."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "II-B Transformer in the medical computer vision",
|
| 27 |
+
"text": "In recent times, in the context of the classification and detection task, self-supervised learning techniques such as masked autoencoders[16 ###reference_b16###] are being used to improve the performance of pure CNNs. Post-2020, the adoption of the Transformer model in computer vision has become evident, attributed to its significant capability, as outlined in Vaswani et al.\u2019s work [17 ###reference_b17###]. Nevertheless, for effective utilization of transformers, self-attention, and self-supervision techniques in image processing, various researchers have suggested diverse enhancements. Zihang et al. [18 ###reference_b18###] combined the strengths of transformers and convolutional networks, emphasizing the synergies between these two architectures. They also brought attention to the issue of limited scalability in the self-attention mechanism of transformers when dealing with larger image sizes. Addressing this concern, Li et al. [19 ###reference_b19###], in their paper on Vision Outlooker (VOLO), pointed out its effectiveness in encoding fine-grain features and contextual information into tokens\u2014an achievement not previously attainable through self-attention mechanisms. Additionally, this study led to the development of MaxVIT [20 ###reference_b20###] to better accommodate larger image sizes.\nWhile both CNNs and Transformers contribute significant roles in medical computer vision, they exhibit distinct strengths and weaknesses. CNNs excel in scalability and have demonstrated high performance on large datasets. Conversely, transformers, such as ViTs, introduce innovative approaches with attention mechanisms. However, addressing the scalability limitations of transformers remains a considerable challenge, which this study tries to overcome. We have attempted to harness the capabilities of recent ViT models like Swin Transformer v2 [5 ###reference_b5###], initially trained on low-quality images, to effectively handle downstream tasks involving higher-resolution images and mitigate the scalability issues."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3",
|
| 31 |
+
"parent_section_id": null,
|
| 32 |
+
"section_name": "III Methodology",
|
| 33 |
+
"text": ""
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3.1",
|
| 37 |
+
"parent_section_id": "3",
|
| 38 |
+
"section_name": "III-A Dataset Description",
|
| 39 |
+
"text": "Our research uses the ChestX-ray14 dataset[2 ###reference_b2###], a robust compilation comprising 112,120 frontal-view X-ray images from 30,805 unique patients from 1992 to 2015. Expanding on the ChestX-ray8 dataset, this comprehensive collection incorporates six additional extrathoracic conditions, including Edema, Emphysema, Fibrosis, Pleural Thickening, and Hernia.\n###figure_1### Figure 1 ###reference_### illustrates the inherent class imbalance within our dataset, revealing that certain medical conditions are disproportionately represented. This imbalance could potentially lead to biased model performance. Moreover, a considerable portion of the dataset, amounting to more than 60,000 (out of 112,120 images), was categorized as \u201dNo Findings,\u201d indicating the absence of any of the 14 detectable diseases."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3.2",
|
| 43 |
+
"parent_section_id": "3",
|
| 44 |
+
"section_name": "III-B Model Exploration",
|
| 45 |
+
"text": "In this section, we briefly introduce different types of cutting-edge image classification models that we have selected for our experiment, including CNN, Vision Transformer (ViT), and Hybrid (CNN + ViT) models."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "3.2.1",
|
| 49 |
+
"parent_section_id": "3.2",
|
| 50 |
+
"section_name": "III-B1 Hybrid architectures",
|
| 51 |
+
"text": "CoAtNet: CoAtNets [18 ###reference_b18###] comprise a novel class of hybrid models that seamlessly merge depthwise convolution and self-attention through simple relative attention mechanisms. This approach leads to a coherent fusion of convolution and attention layers, effectively enhancing generalization, capacity, and efficiency. Furthermore, CoAtNets demonstrate improved performance by systematically stacking convolutional and attention layers in a meticulously designed manner.\nMaxViT: MaxViT[20 ###reference_b20###] is a pioneering model that leverages multi-axis attention to achieve powerful global-local spatial interactions on diverse input resolutions, all while maintaining linear complexity. By ingeniously incorporating blocked local and dilated global attention mechanisms, MaxViT empowers the integration of attention with convolutions. This ingenious synergy culminates in a hierarchical vision backbone, where the fundamental building block is seamlessly replicated across multiple stages."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "3.2.2",
|
| 55 |
+
"parent_section_id": "3.2",
|
| 56 |
+
"section_name": "III-B2 Vision Transformer (ViT)",
|
| 57 |
+
"text": "Swin V2: Swin Transformer V2[5 ###reference_b5###] introduces an array of strategies, including residual-post-norm with cosine attention, log-spaced position bias, and SimMIM self-supervised pre-training. These techniques collectively foster training stability, resolution transfer, and reduction in labeled data dependency. The outcome is a model that achieves remarkable performance across diverse vision tasks, surpassing state-of-the-art benchmarks.\nVOLO: Addressing the limitations of self-attention, Vision Outlooker (VOLO)[19 ###reference_b19###] introduces outlook attention, an innovative technique that efficiently captures fine-grained features and contexts at a more granular level. This novel approach, embodied in the VOLO architecture, stands in contrast to conventional self-attention, which predominantly focuses on coarser global dependency modeling."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "3.2.3",
|
| 61 |
+
"parent_section_id": "3.2",
|
| 62 |
+
"section_name": "III-B3 Convolutional Neural Network (CNN)",
|
| 63 |
+
"text": "DenseNet: Dense Convolutional Networks (DenseNets)[14 ###reference_b14###] present a departure from conventional architectures by establishing dense connections that link each layer to all other layers in a feed-forward manner. This unique connectivity pattern results in an exponential increase in direct connections, mitigating challenges associated with vanishing gradients and facilitating robust feature propagation. DenseNets also engender feature reuse and parameter reduction, showcasing their efficacy in optimizing image classification tasks.\nConvNeXt V2: Building upon the ConvNeXt framework, ConvNeXt V2[4 ###reference_b4###] introduces an upgraded model with a fully convolutional masked autoencoder structure and a Global Response Normalization (GRN) layer. This integration of self-supervised learning techniques and architectural refinements contributes to substantial performance enhancements across various recognition benchmarks, underscoring the potency of combined approaches in image classification."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "3.3",
|
| 67 |
+
"parent_section_id": "3",
|
| 68 |
+
"section_name": "III-C Data pre-processing and splitting",
|
| 69 |
+
"text": "This section outlines the fundamental steps taken to pre-process the data, ensuring its suitability for subsequent analysis and model training."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "3.3.1",
|
| 73 |
+
"parent_section_id": "3.3",
|
| 74 |
+
"section_name": "III-C1 Image Resizing and Normalization",
|
| 75 |
+
"text": "Initially, the images were sized 1024x1024 pixels, and we resized them to more manageable dimensions of 224x224 pixels to enhance computational efficiency within resource constraints. We normalized the images using a mean and standard deviation of images from the Imagenet dataset."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "3.3.2",
|
| 79 |
+
"parent_section_id": "3.3",
|
| 80 |
+
"section_name": "III-C2 Horizontal Flips and Rotation",
|
| 81 |
+
"text": "We incorporated random horizontal flips and rotations to enhance orientation robustness and promote feature learning. These augmentations were applied with a 50% probability each, and rotations were confined to a maximum of 10 degrees."
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "3.3.3",
|
| 85 |
+
"parent_section_id": "3.3",
|
| 86 |
+
"section_name": "III-C3 Splitting",
|
| 87 |
+
"text": "The dataset was divided into distinct groups, with 70% allocated for training, 20% for testing, and 10% for validation. Notably, patient overlaps were meticulously avoided across these divisions, as evident in Table I ###reference_###. As indicated by Yao et al.[12 ###reference_b12###], variations in random splitting negligibly impact performance, thus guaranteeing an equitable basis for comparison."
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "3.4",
|
| 91 |
+
"parent_section_id": "3",
|
| 92 |
+
"section_name": "III-D Training and Optimization",
|
| 93 |
+
"text": "Finding initial learning rate: To identify the initial learning rate (LR), we leverage the Learning Rate Range Test, a technique discussed by Smith[21 ###reference_b21###]. The crux of this approach is centered around the concept of cyclical learning rate (CLR), which alternately increases and decreases during the training process. Our choice of the optimizer is AdamW[22 ###reference_b22###], which we adopt with CLR during training.\nWe ran a small training session for 100 iterations in which the learning rate was increased between two boundaries, min_lr and max_lr, which were 1e-7 and 1e-1, respectively. We then plot the LR vs. Loss curve and pick the midpoint of the steepest descending portion of the curve as the maximum bound for training with CLR.\n###figure_2### ###figure_3### Training DNN: Figure 2 ###reference_###a illustrates the architecture used for training every DNN. We employed pre-trained weights sourced from ImageNet for initializing each neural network. The model\u2019s head was trained for three epochs, as it was randomly initialized. Then, the full network was fine-tuned for ten epochs with discriminative LR, where the model\u2019s initial layers were trained with a lower LR compared to the final layers. Training was halted when the model failed to improve for three consecutive epochs. The best model with the lowest validation loss during training was saved. To optimize the initial learning rate for the fine-tuned model, the weights of the saved model were loaded, and the LR Range Test was re-run. The entire model was then trained for a second time using the optimal initial learning rate for five epochs, saving only the model with the best validation loss. While training in the second phase only slightly improved the validation loss for some models like CoAtNet, all six models were trained using the same approach with two phases.\nTraining Classical Models as Meta-Learner: To enhance outcomes, we explored utilizing feature vectors generated by the top-performing DNN as input for classical models like XGBoost and Random Forest. We pursued two strategies: firstly, training classical models with the output vectors from the second-to-last layer of our DNN models (varied per model) as depicted in Figure 2 ###reference_###c; secondly, training them with the last layer of our custom head, producing 512 features for each DNN as shown in Figure 2 ###reference_###b. Integrating the top-performing DNN with classical models aimed to synergize model strengths and enhance overall performance.\nEnsembling DNN: To make our predictions more robust with reduced variance, we used two different ensemble techniques on the validation split before evaluating on test data. (1) Stacking: we concatenated the six probability vectors outputted by all 6 DNNs to form 84 features (6 DNN models * 14 probabilities each = 84 features) for each image and trained XGBoost as a meta classifier to make the final predictions. (2) Weighted average: we averaged the six probability vectors using different weights for each DNN to produce one probability vector for each image. The optimal weight for each model, which determined its contribution to the weighted final prediction, was found using a stochastic global search algorithm known as differential evolution[23 ###reference_b23###]. The weights were bounded 0 and 1 (inclusive) and summed to 1. This was the superior ensemble technique and has been shown in Figure 3 ###reference_###."
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "4",
|
| 97 |
+
"parent_section_id": null,
|
| 98 |
+
"section_name": "IV Experiments",
|
| 99 |
+
"text": ""
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "4.1",
|
| 103 |
+
"parent_section_id": "4",
|
| 104 |
+
"section_name": "IV-A Experimental Setup",
|
| 105 |
+
"text": "We chose PyTorch as our implementation platform. Our experiments were run on two Nvidia T4 GPUs with 16GB of memory each. We selected Binary cross-entropy loss as our loss function for multi-label classification. We employed AdamW as our optimizer with a true weight decay of 1e-2 for every DNN and a momentum of 0.9. We set the batch size to 32, which used the full capacity of our GPUs. Our training process incorporates an early stopping mechanism, stopping training if the validation loss does not improve by a margin of 1e-3 within five epochs. Additionally, to safeguard model progress, we implement a checkpoint system, preserving the model\u2019s state each time the validation loss experiences a reduction of at least 1e-4."
|
| 106 |
+
},
|
| 107 |
+
{
|
| 108 |
+
"section_id": "4.2",
|
| 109 |
+
"parent_section_id": "4",
|
| 110 |
+
"section_name": "IV-B Models Performance",
|
| 111 |
+
"text": "Among all of our DNN models, CoAtNet performed the best among all models with a mean AUROC of 84.2%, followed by ConvNeXtV2 with 84.1% (shown in Table II ###reference_###).\nNext, we used the feature vectors from these models to train classical models (XGBoost and Random Forest), as detailed in Table III ###reference_###. Notably, CoAtNet features surpassed ConvNeXtV2 in both cases (last and second-to-last layers), with clear performance improvement using 512 features compared to the larger alternative.\nLastly, we ensembled all 6 DNNs with two distinct techniques. (1) Stacking: the probability vectors outputted by each DNN to train XGBoost as a meta-classifier. (2) Unweighted and weighted average ensemble where the weights were determined using differential evolution. We assessed each technique on validation split before evaluating on test data, as shown in Table IV ###reference_###. The weighted average ensemble demonstrated superior performance overall, outperforming individual DNN models as well as other ensemble methods."
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"section_id": "4.3",
|
| 115 |
+
"parent_section_id": "4",
|
| 116 |
+
"section_name": "IV-C Comparison of DNN, ML and Ensemble model",
|
| 117 |
+
"text": "Our experimental investigation was structured around three distinct categories: Deep Learning Models, Classical Models with deep learning model features, and an ensemble approach that combines all models. Within the Deep Learning Models category, we further categorized models into CNN, Transformer, and Hybrid (CNN + Transformer) architectures, as shown in Table V ###reference_###. Our selection process involved identifying the most promising model from each category within Deep Neural Network (DNN), classical Machine Learning (ML), and Ensemble models.\nAmong six evaluated DNN models (CoAtNet, MaxViT, DenseNet121, ConvNeXtV2, VOLO, and SwinV2), CoAtNet performed the best. We used CoAtNet and ConvNeXtV2\u2019s feature vectors for training classical ML models (XGBoost, Random Forest), but they fell short of deep learning models\u2019 performance. Instead, an ensemble model combining all models consistently outperformed others, emphasizing its predictive accuracy and robustness. In summary, CoAtNet excelled in DNNs, classical models couldn\u2019t match deep learning models\u2019 performance, and the ensemble model dominated predictive accuracy across all labels.\n###figure_4###"
|
| 118 |
+
},
|
| 119 |
+
{
|
| 120 |
+
"section_id": "4.4",
|
| 121 |
+
"parent_section_id": "4",
|
| 122 |
+
"section_name": "IV-D Comparison with existing approaches",
|
| 123 |
+
"text": "A comparative analysis was conducted between our proposed model and previous competing models, as shown in Table VI ###reference_###. The results demonstrated that our novel ensemble model produced superior outcomes. Notably, two exceptions occurred: ImageGCN performed slightly better in the context of Hernia while Net[24 ###reference_b24###] took the lead in relation to Emphysema and Fibrosis detection. As depicted by the ROC curve in Figure 4 ###reference_###, the ensemble model does well for all 14 diseases, excelling particularly in the case of Emphysema, yet showing comparatively less effectiveness for Infiltration."
|
| 124 |
+
},
|
| 125 |
+
{
|
| 126 |
+
"section_id": "5",
|
| 127 |
+
"parent_section_id": null,
|
| 128 |
+
"section_name": "Conclusion",
|
| 129 |
+
"text": "In this paper, we proposed an ensemble model for multi-label classification of chest X-rays (CXRs) using deep learning techniques. Firstly, we trained and evaluated several pure transformers, CNN, and hybrid models on the ChestX-ray14 dataset and found that the hybrid model CoAtNet performed the best individually, achieving an AUROC of 84.2%. We also explored the performance of classical models like XGBoost and Random Forest when trained with feature vectors from our best-performing individual DNNs. Finally, we implemented a weighted average ensemble on the predictions of our DNNs using differential evolution to determine the optimal weight for each model. Our experiments show that our proposed ensemble model achieves better results than other state-of-the-art methods in this field, with a mean AUROC of 85.4%. This demonstrates the potential of ensemble deep learning for improving the accuracy of automatic diagnosis of thoracic diseases from CXRs."
|
| 130 |
+
}
|
| 131 |
+
],
|
| 132 |
+
"appendix": [],
|
| 133 |
+
"tables": {
|
| 134 |
+
"1": {
|
| 135 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>SPLITTING OF THE DATASET</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T1.1.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.1.2\">Total</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.1.3\">Train</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.1.4\">Validation</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.1.5\">Test</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.1.2.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.1.2.1.1\">Images</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.2.1.2\">112120</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.2.1.3\">78544</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.2.1.4\">11220</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.2.1.5\">22356</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.3.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.1.3.2.1\">Unique Patients</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T1.1.3.2.2\">30805</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T1.1.3.2.3\">21563</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T1.1.3.2.4\">3081</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T1.1.3.2.5\">6161</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 136 |
+
"capture": "TABLE I: SPLITTING OF THE DATASET"
|
| 137 |
+
},
|
| 138 |
+
"2": {
|
| 139 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>AUROC of DNN models</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.1.1\">Model</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.1.2\">Type</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.1.3\">Model Param (M)</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.1.4\">AUROC</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.1.2.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.1.2.1.1\">CoAtNet</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T2.1.2.1.2\">Hybrid</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T2.1.2.1.3\">73.9</td>\n<td class=\"ltx_td ltx_align_right ltx_border_r ltx_border_t\" id=\"S4.T2.1.2.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.2.1.4.1\">0.84239</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.3.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.1.3.2.1\">MaxViT</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T2.1.3.2.2\">Hybrid</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T2.1.3.2.3\">116</td>\n<td class=\"ltx_td ltx_align_right ltx_border_r ltx_border_t\" id=\"S4.T2.1.3.2.4\">0.84013</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.4.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.1.4.3.1\">DenseNet121</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T2.1.4.3.2\">CNN</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T2.1.4.3.3\">8</td>\n<td class=\"ltx_td ltx_align_right ltx_border_r ltx_border_t\" id=\"S4.T2.1.4.3.4\">0.82440</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.5.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.1.5.4.1\">ConvNeXtV2</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T2.1.5.4.2\">CNN</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T2.1.5.4.3\">198</td>\n<td class=\"ltx_td ltx_align_right ltx_border_r ltx_border_t\" id=\"S4.T2.1.5.4.4\">0.84091</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.6.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.1.6.5.1\">VOLO</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T2.1.6.5.2\">Transformer</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T2.1.6.5.3\">58.7</td>\n<td class=\"ltx_td ltx_align_right ltx_border_r ltx_border_t\" id=\"S4.T2.1.6.5.4\">0.83205</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.7.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.1.7.6.1\">SwinV2</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T2.1.7.6.2\">Transformer</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" 
id=\"S4.T2.1.7.6.3\">49.7</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T2.1.7.6.4\">0.83573</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 140 |
+
"capture": "TABLE II: AUROC of DNN models"
|
| 141 |
+
},
|
| 142 |
+
"3": {
|
| 143 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE III: </span>AUROC OF CLASSICAL MODELS</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T3.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T3.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.1.1.1.1\">Feature Extraction Model</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T3.1.1.1.2\">Classifier Model</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T3.1.1.1.3\">Features</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T3.1.1.1.4\">AUROC</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.1.2.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.1.2.1.1\">CoAtNet</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.1.2.1.2\">XGB</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.1.2.1.3\">512</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.1.2.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.2.1.4.1\">0.83814</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.3.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.1.3.2.1\">CoAtNet</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.1.3.2.2\">XGB</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.1.3.2.3\">2048</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.1.3.2.4\">0.82354</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.4.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.1.4.3.1\">ConvNeXtV2</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.1.4.3.2\">XGB</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.1.4.3.3\">512</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.1.4.3.4\">0.82536</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.5.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.1.5.4.1\">ConvNeXtV2</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.1.5.4.2\">XGB</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.1.5.4.3\">3072</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.1.5.4.4\">0.81160</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.6.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.1.6.5.1\">CoAtNet</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.1.6.5.2\">RF</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.1.6.5.3\">512</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.1.6.5.4\">0.81441</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.7.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.1.7.6.1\">CoAtNet</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.1.7.6.2\">RF</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.1.7.6.3\">2048</td>\n<td 
class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.1.7.6.4\">0.80607</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.8.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.1.8.7.1\">ConvNeXtV2</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.1.8.7.2\">RF</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.1.8.7.3\">512</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.1.8.7.4\">0.80150</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.9.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.1.9.8.1\">ConvNeXtV2</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T3.1.9.8.2\">RF</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T3.1.9.8.3\">3072</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T3.1.9.8.4\">0.79458</td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "TABLE III: AUROC OF CLASSICAL MODELS"
},
"4": {
"table_html": "<figure class=\"ltx_table\" id=\"S4.T4\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE IV: </span>AUROC WITH DIFFERENT ENSEMBLE TECHNIQUES</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T4.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T4.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T4.1.1.1.1\">Technique</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T4.1.1.1.2\">AUROC</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T4.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T4.1.2.1.1\">Stacking with Meta Classifier (XGB)</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.1.2.1.2\">0.84518</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T4.1.3.2.1\">Unweighted Average Ensemble</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.1.3.2.2\">0.85327</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.4.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T4.1.4.3.1\">Weighted Average Ensemble</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T4.1.4.3.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.4.3.2.1\">0.85433</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "TABLE IV: AUROC WITH DIFFERENT ENSEMBLE TECHNIQUES"
},
"5": {
"table_html": "<figure class=\"ltx_table\" id=\"S4.T5\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE V: </span>AUROC OF BEST DNN, CLASSICAL AND ENSEMBLE MODEL FOR ALL DISEASES</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T5.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T5.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T5.1.1.1.1\">Disease</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T5.1.1.1.2\">CoAtNet</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T5.1.1.1.3\">CoAtNet+XGB</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T5.1.1.1.4\">Weighted Ensemble</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T5.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T5.1.2.1.1\">Atelectasis</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.2.1.2\">0.82313</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.2.1.3\">0.82364</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.2.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.1.2.1.4.1\">0.83390</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T5.1.3.2.1\">Consolidation</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.3.2.2\">0.80980</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.3.2.3\">0.81151</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.3.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.1.3.2.4.1\">0.81575</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.4.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T5.1.4.3.1\">Infiltration</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.4.3.2\">0.73105</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.4.3.3\">0.73199</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.4.3.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.1.4.3.4.1\">0.74102</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.5.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T5.1.5.4.1\">Pneumothorax</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.5.4.2\">0.89660</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.5.4.3\">0.89068</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.5.4.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.1.5.4.4.1\">0.90164</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.6.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T5.1.6.5.1\">Edema</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.6.5.2\">0.90185</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.6.5.3\">0.90214</td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.6.5.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.1.6.5.4.1\">0.91034</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.7.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T5.1.7.6.1\">Emphysema</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.7.6.2\">0.92067</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.7.6.3\">0.91891</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.7.6.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.1.7.6.4.1\">0.92946</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.8.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T5.1.8.7.1\">Fibrosis</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.8.7.2\">0.81574</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.8.7.3\">0.80103</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.8.7.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.1.8.7.4.1\">0.83347</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.9.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T5.1.9.8.1\">Effusion</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.9.8.2\">0.88203</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.9.8.3\">0.88147</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.9.8.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.1.9.8.4.1\">0.88977</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.10.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T5.1.10.9.1\">Pneumonia</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.10.9.2\">0.76093</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.10.9.3\">0.75798</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.10.9.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.1.10.9.4.1\">0.77648</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.11.10\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T5.1.11.10.1\">Pleural_Thickening</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.11.10.2\">0.80053</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.11.10.3\">0.80448</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.11.10.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.1.11.10.4.1\">0.81270</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.12.11\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T5.1.12.11.1\">Cardiomegaly</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.12.11.2\">0.90788</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.12.11.3\">0.90909</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.12.11.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.1.12.11.4.1\">0.91954</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.13.12\">\n<th class=\"ltx_td ltx_align_left ltx_th 
ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T5.1.13.12.1\">Nodule</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.13.12.2\">0.79828</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.13.12.3\">0.79695</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.13.12.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.1.13.12.4.1\">0.80611</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.14.13\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T5.1.14.13.1\">Mass</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.14.13.2\">0.86191</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.14.13.3\">0.86310</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.14.13.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.1.14.13.4.1\">0.87315</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.15.14\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T5.1.15.14.1\">Hernia</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.15.14.2\">0.88305</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.15.14.3\">0.84093</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.1.15.14.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.1.15.14.4.1\">0.91723</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.16.15\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T5.1.16.15.1\">Average</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T5.1.16.15.2\">0.84239</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T5.1.16.15.3\">0.83814</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T5.1.16.15.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.1.16.15.4.1\">0.85433</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "TABLE V: AUROC OF BEST DNN, CLASSICAL AND ENSEMBLE MODEL FOR ALL DISEASES"
},
"6": {
"table_html": "<figure class=\"ltx_table\" id=\"S4.T6\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE VI: </span>COMPARISON OF AUROC WITH PREVIOUS WORK ON CHESTX-RAY14 DATASET</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T6.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T6.1.1\">\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T6.1.1.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.1.2.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.1.2.1.1\">Disease</span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T6.1.1.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.1.3.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.1.3.1.1\">Ensemble (Ours)</span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T6.1.1.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.1.4.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.1.4.1.1\">CoAtNet (Ours)</span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T6.1.1.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.1.1.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.1.1.1.1\"> Net<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.07750v3#bib.bib24\" title=\"\">24</a>]</cite></span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T6.1.1.5\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.1.5.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.1.5.1.1\">ImageGCN<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.07750v3#bib.bib25\" title=\"\">25</a>]</cite></span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T6.1.1.6\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.1.6.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.1.6.1.1\">Wang et al.<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.07750v3#bib.bib2\" title=\"\">2</a>]</cite></span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T6.1.1.7\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.1.7.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.1.7.1.1\">Li et al.<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.07750v3#bib.bib26\" title=\"\">26</a>]</cite></span>\n</span>\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T6.1.2.1\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T6.1.2.1.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.2.1.1.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.2.1.1.1.1\">Atelectasis</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S4.T6.1.2.1.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.2.1.2.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.2.1.2.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.1.2.1.2.1.1.1\">0.83390</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S4.T6.1.2.1.3\">\n<span 
class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.2.1.3.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.2.1.3.1.1\">0.82313</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S4.T6.1.2.1.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.2.1.4.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.2.1.4.1.1\">0.779</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S4.T6.1.2.1.5\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.2.1.5.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.2.1.5.1.1\">0.802</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S4.T6.1.2.1.6\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.2.1.6.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.2.1.6.1.1\">0.716</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S4.T6.1.2.1.7\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.2.1.7.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.2.1.7.1.1\">0.800</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.1.3.2\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r\" id=\"S4.T6.1.3.2.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.3.2.1.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.3.2.1.1.1\">Consolidation</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.3.2.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.3.2.2.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.3.2.2.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.1.3.2.2.1.1.1\">0.81575</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.3.2.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.3.2.3.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.3.2.3.1.1\">0.80980</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.3.2.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.3.2.4.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.3.2.4.1.1\">0.759</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.3.2.5\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.3.2.5.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.3.2.5.1.1\">0.796</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.3.2.6\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.3.2.6.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.3.2.6.1.1\">0.708</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.3.2.7\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.3.2.7.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.3.2.7.1.1\">0.800</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.1.4.3\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r\" id=\"S4.T6.1.4.3.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.4.3.1.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.4.3.1.1.1\">Infiltration</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.4.3.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.4.3.2.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.4.3.2.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.1.4.3.2.1.1.1\">0.74102</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.4.3.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.4.3.3.1\">\n<span 
class=\"ltx_p\" id=\"S4.T6.1.4.3.3.1.1\">0.73105</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.4.3.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.4.3.4.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.4.3.4.1.1\">0.710</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.4.3.5\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.4.3.5.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.4.3.5.1.1\">0.702</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.4.3.6\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.4.3.6.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.4.3.6.1.1\">0.609</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.4.3.7\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.4.3.7.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.4.3.7.1.1\">0.700</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.1.5.4\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r\" id=\"S4.T6.1.5.4.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.5.4.1.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.5.4.1.1.1\">Pneumothorax</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.5.4.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.5.4.2.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.5.4.2.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.1.5.4.2.1.1.1\">0.90164</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.5.4.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.5.4.3.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.5.4.3.1.1\">0.89660</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.5.4.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.5.4.4.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.5.4.4.1.1\">0.878</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.5.4.5\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.5.4.5.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.5.4.5.1.1\">0.900</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.5.4.6\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.5.4.6.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.5.4.6.1.1\">0.806</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.5.4.7\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.5.4.7.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.5.4.7.1.1\">0.870</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.1.6.5\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r\" id=\"S4.T6.1.6.5.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.6.5.1.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.6.5.1.1.1\">Edema</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.6.5.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.6.5.2.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.6.5.2.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.1.6.5.2.1.1.1\">0.91034</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.6.5.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.6.5.3.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.6.5.3.1.1\">0.90185</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" 
id=\"S4.T6.1.6.5.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.6.5.4.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.6.5.4.1.1\">0.855</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.6.5.5\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.6.5.5.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.6.5.5.1.1\">0.883</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.6.5.6\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.6.5.6.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.6.5.6.1.1\">0.835</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.6.5.7\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.6.5.7.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.6.5.7.1.1\">0.880</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.1.7.6\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r\" id=\"S4.T6.1.7.6.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.7.6.1.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.7.6.1.1.1\">Emphysema</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.7.6.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.7.6.2.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.7.6.2.1.1\">0.92946</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.7.6.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.7.6.3.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.7.6.3.1.1\">0.92067</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.7.6.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.7.6.4.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.7.6.4.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.1.7.6.4.1.1.1\">0.933</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.7.6.5\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.7.6.5.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.7.6.5.1.1\">0.915</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.7.6.6\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.7.6.6.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.7.6.6.1.1\">0.815</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.7.6.7\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.7.6.7.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.7.6.7.1.1\">0.910</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.1.8.7\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r\" id=\"S4.T6.1.8.7.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.8.7.1.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.8.7.1.1.1\">Fibrosis</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.8.7.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.8.7.2.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.8.7.2.1.1\">0.83347</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.8.7.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.8.7.3.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.8.7.3.1.1\">0.81574</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.8.7.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.8.7.4.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.8.7.4.1.1\"><span class=\"ltx_text ltx_font_bold\" 
id=\"S4.T6.1.8.7.4.1.1.1\">0.838</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.8.7.5\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.8.7.5.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.8.7.5.1.1\">0.825</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.8.7.6\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.8.7.6.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.8.7.6.1.1\">0.769</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.8.7.7\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.8.7.7.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.8.7.7.1.1\">0.780</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.1.9.8\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r\" id=\"S4.T6.1.9.8.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.9.8.1.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.9.8.1.1.1\">Effusion</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.9.8.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.9.8.2.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.9.8.2.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.1.9.8.2.1.1.1\">0.88977</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.9.8.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.9.8.3.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.9.8.3.1.1\">0.88203</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.9.8.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.9.8.4.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.9.8.4.1.1\">0.836</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.9.8.5\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.9.8.5.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.9.8.5.1.1\">0.874</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.9.8.6\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.9.8.6.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.9.8.6.1.1\">0.784</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.9.8.7\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.9.8.7.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.9.8.7.1.1\">0.870</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.1.10.9\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r\" id=\"S4.T6.1.10.9.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.10.9.1.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.10.9.1.1.1\">Pneumonia</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.10.9.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.10.9.2.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.10.9.2.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.1.10.9.2.1.1.1\">0.77648</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.10.9.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.10.9.3.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.10.9.3.1.1\">0.76093</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.10.9.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.10.9.4.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.10.9.4.1.1\">0.737</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" 
id=\"S4.T6.1.10.9.5\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.10.9.5.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.10.9.5.1.1\">0.715</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.10.9.6\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.10.9.6.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.10.9.6.1.1\">0.633</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.10.9.7\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.10.9.7.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.10.9.7.1.1\">0.670</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.1.11.10\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r\" id=\"S4.T6.1.11.10.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.11.10.1.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.11.10.1.1.1\">Pleural_Thickening</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.11.10.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.11.10.2.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.11.10.2.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.1.11.10.2.1.1.1\">0.81270</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.11.10.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.11.10.3.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.11.10.3.1.1\">0.80053</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.11.10.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.11.10.4.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.11.10.4.1.1\">0.791</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.11.10.5\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.11.10.5.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.11.10.5.1.1\">0.791</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.11.10.6\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.11.10.6.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.11.10.6.1.1\">0.708</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.11.10.7\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.11.10.7.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.11.10.7.1.1\">0.760</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.1.12.11\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r\" id=\"S4.T6.1.12.11.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.12.11.1.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.12.11.1.1.1\">Cardiomegaly</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.12.11.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.12.11.2.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.12.11.2.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.1.12.11.2.1.1.1\">0.91954</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.12.11.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.12.11.3.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.12.11.3.1.1\">0.90788</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.12.11.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.12.11.4.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.12.11.4.1.1\">0.895</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.12.11.5\">\n<span 
class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.12.11.5.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.12.11.5.1.1\">0.894</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.12.11.6\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.12.11.6.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.12.11.6.1.1\">0.807</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.12.11.7\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.12.11.7.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.12.11.7.1.1\">0.870</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.1.13.12\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r\" id=\"S4.T6.1.13.12.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.13.12.1.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.13.12.1.1.1\">Nodule</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.13.12.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.13.12.2.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.13.12.2.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.1.13.12.2.1.1.1\">0.80611</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.13.12.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.13.12.3.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.13.12.3.1.1\">0.79828</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.13.12.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.13.12.4.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.13.12.4.1.1\">0.777</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.13.12.5\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.13.12.5.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.13.12.5.1.1\">0.768</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.13.12.6\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.13.12.6.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.13.12.6.1.1\">0.671</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.13.12.7\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.13.12.7.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.13.12.7.1.1\">0.750</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.1.14.13\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r\" id=\"S4.T6.1.14.13.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.14.13.1.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.14.13.1.1.1\">Mass</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.14.13.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.14.13.2.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.14.13.2.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.1.14.13.2.1.1.1\">0.87315</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.14.13.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.14.13.3.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.14.13.3.1.1\">0.86191</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.14.13.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.14.13.4.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.14.13.4.1.1\">0.834</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.14.13.5\">\n<span class=\"ltx_inline-block ltx_align_top\" 
id=\"S4.T6.1.14.13.5.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.14.13.5.1.1\">0.843</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.14.13.6\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.14.13.6.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.14.13.6.1.1\">0.706</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.14.13.7\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.14.13.7.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.14.13.7.1.1\">0.830</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.1.15.14\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r\" id=\"S4.T6.1.15.14.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.15.14.1.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.15.14.1.1.1\">Hernia</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.15.14.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.15.14.2.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.15.14.2.1.1\">0.91723</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.15.14.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.15.14.3.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.15.14.3.1.1\">0.88305</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.15.14.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.15.14.4.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.15.14.4.1.1\">0.938</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.15.14.5\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.15.14.5.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.15.14.5.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.1.15.14.5.1.1.1\">0.943</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.15.14.6\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.15.14.6.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.15.14.6.1.1\">0.767</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r\" id=\"S4.T6.1.15.14.7\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.15.14.7.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.15.14.7.1.1\">0.770</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.1.16.15\">\n<td class=\"ltx_td ltx_align_justify ltx_border_b ltx_border_l ltx_border_r\" id=\"S4.T6.1.16.15.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.16.15.1.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.16.15.1.1.1\">Average</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_b ltx_border_r\" id=\"S4.T6.1.16.15.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.16.15.2.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.16.15.2.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.1.16.15.2.1.1.1\">0.85433</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_b ltx_border_r\" id=\"S4.T6.1.16.15.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.16.15.3.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.16.15.3.1.1\">0.84239</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_b ltx_border_r\" id=\"S4.T6.1.16.15.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.16.15.4.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.16.15.4.1.1\">0.826</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_b ltx_border_r\" id=\"S4.T6.1.16.15.5\">\n<span 
class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.16.15.5.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.16.15.5.1.1\">0.832</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_b ltx_border_r\" id=\"S4.T6.1.16.15.6\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.16.15.6.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.16.15.6.1.1\">0.738</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_b ltx_border_r\" id=\"S4.T6.1.16.15.7\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T6.1.16.15.7.1\">\n<span class=\"ltx_p\" id=\"S4.T6.1.16.15.7.1.1\">0.804</span>\n</span>\n</td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "TABLE VI: COMPARISON OF AUROC WITH PREVIOUS WORK ON CHESTX-RAY14 DATASET"
}
},
"image_paths": {
"1": {
"figure_path": "2311.07750v3_figure_1.png",
"caption": "Figure 1: Disease distribution in the dataset.",
"url": "http://arxiv.org/html/2311.07750v3/extracted/2311.07750v3/images/data_distribution2.png"
},
"2": {
"figure_path": "2311.07750v3_figure_2.png",
"caption": "Figure 2: Architecture for DNN and classical model. CH: Custom Head. XGB: XGBoost. RF: Random Forest",
"url": "http://arxiv.org/html/2311.07750v3/extracted/2311.07750v3/images/dnn_classical_architecture.png"
},
"3": {
"figure_path": "2311.07750v3_figure_3.png",
"caption": "Figure 3: Average weighted ensemble with differential evolution.",
"url": "http://arxiv.org/html/2311.07750v3/extracted/2311.07750v3/images/ensemble.png"
},
"4": {
"figure_path": "2311.07750v3_figure_4.png",
"caption": "Figure 4: ROC curve for all 14 diseases displaying the True Positive against\nFalse Positive rate, which illustrates the model\u2019s capacity to differentiate effectively.",
"url": "http://arxiv.org/html/2311.07750v3/extracted/2311.07750v3/images/auc.png"
}
},
"validation": true,
"references": [
{
"1": {
"title": "arXiv:1705.02315 [cs].",
"author": "X. Wang, Y. Peng, L. Lu, Z. Lu, M. Bagheri, and R. M. Summers, \u201cChestX-ray8:\nHospital-scale Chest X-ray Database and Benchmarks on\nWeakly-Supervised Classification and Localization of Common\nThorax Diseases,\u201d in 2017 IEEE Conference on Computer\nVision and Pattern Recognition (CVPR), pp. 3462\u20133471, July 2017.",
"venue": null,
"url": null
}
},
{
"2": {
"title": "Publisher: arXiv Version Number: 2.",
"author": "L. Seyyed-Kalantari, G. Liu, M. McDermott, I. Y. Chen, and M. Ghassemi,\n\u201cCheXclusion: Fairness gaps in deep chest X-ray classifiers,\u201d 2020.",
"venue": null,
"url": null
}
},
{
"3": {
"title": "Publisher: arXiv Version Number: 1.",
"author": "S. Woo, S. Debnath, R. Hu, X. Chen, Z. Liu, I. S. Kweon, and S. Xie,\n\u201cConvNeXt V2: Co-designing and Scaling ConvNets with Masked\nAutoencoders,\u201d 2023.",
"venue": null,
"url": null
}
},
{
"4": {
"title": "Publisher: arXiv Version Number: 2.",
"author": "Z. Liu, H. Hu, Y. Lin, Z. Yao, Z. Xie, Y. Wei, J. Ning, Y. Cao, Z. Zhang,\nL. Dong, F. Wei, and B. Guo, \u201cSwin Transformer V2: Scaling Up\nCapacity and Resolution,\u201d 2021.",
"venue": null,
"url": null
}
},
{
"5": {
"title": "arXiv:1707.01836 [cs].",
"author": "P. Rajpurkar, A. Y. Hannun, M. Haghpanahi, C. Bourn, and A. Y. Ng,\n\u201cCardiologist-Level Arrhythmia Detection with Convolutional Neural\nNetworks,\u201d July 2017.",
"venue": null,
"url": null
}
},
{
"6": {
"title": "arXiv:1710.04934 [cs, stat].",
"author": "M. Grewal, M. M. Srivastava, P. Kumar, and S. Varadarajan, \u201cRADNET:\nRadiologist Level Accuracy using Deep Learning for HEMORRHAGE\ndetection in CT Scans,\u201d Jan. 2018.",
"venue": null,
"url": null
}
},
{
"7": {
"title": "arXiv:1409.0575 [cs].",
"author": "O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang,\nA. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei,\n\u201cImageNet Large Scale Visual Recognition Challenge,\u201d Jan. 2015.",
"venue": null,
"url": null
}
},
{
"8": {
"title": "arXiv:1710.10501 [cs].",
"author": "L. Yao, E. Poblenz, D. Dagunts, B. Covington, D. Bernard, and K. Lyman,\n\u201cLearning to diagnose from scratch by exploiting dependencies among\nlabels,\u201d Feb. 2018.",
"venue": null,
"url": null
}
},
{
"9": {
"title": "Publisher: arXiv Version Number: 5.",
"author": "G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger, \u201cDensely\nConnected Convolutional Networks,\u201d 2016.",
"venue": null,
"url": null
}
},
{
"10": {
"title": "arXiv:1711.05225 [cs, stat].",
"author": "P. Rajpurkar, J. Irvin, K. Zhu, B. Yang, H. Mehta, T. Duan, D. Ding, A. Bagul,\nC. Langlotz, K. Shpanskaya, M. P. Lungren, and A. Y. Ng, \u201cCheXNet:\nRadiologist-Level Pneumonia Detection on Chest X-Rays with\nDeep Learning,\u201d Dec. 2017.",
"venue": null,
"url": null
}
},
{
"11": {
"title": "arXiv:2111.06377 [cs].",
"author": "K. He, X. Chen, S. Xie, Y. Li, P. Doll\u00e1r, and R. Girshick, \u201cMasked\nAutoencoders Are Scalable Vision Learners,\u201d Dec. 2021.",
"venue": null,
"url": null
}
},
{
"12": {
"title": "arXiv:1706.03762 [cs].",
"author": "A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez,\nL. Kaiser, and I. Polosukhin, \u201cAttention Is All You Need,\u201d Aug.\n2023.",
"venue": null,
"url": null
}
},
{
"13": {
"title": "Publisher: arXiv Version Number: 2.",
"author": "Z. Dai, H. Liu, Q. V. Le, and M. Tan, \u201cCoAtNet: Marrying Convolution and\nAttention for All Data Sizes,\u201d 2021.",
"venue": null,
"url": null
}
},
{
"14": {
"title": "Publisher: arXiv Version Number: 2.",
"author": "L. Yuan, Q. Hou, Z. Jiang, J. Feng, and S. Yan, \u201cVOLO: Vision Outlooker\nfor Visual Recognition,\u201d 2021.",
"venue": null,
"url": null
}
},
{
"15": {
"title": "Publisher: arXiv Version Number: 4.",
"author": "Z. Tu, H. Talebi, H. Zhang, F. Yang, P. Milanfar, A. Bovik, and Y. Li,\n\u201cMaxViT: Multi-Axis Vision Transformer,\u201d 2022.",
"venue": null,
"url": null
}
},
{
"16": {
"title": "Publisher: arXiv Version Number: 6.",
"author": "L. N. Smith, \u201cCyclical Learning Rates for Training Neural\nNetworks,\u201d 2015.",
"venue": null,
"url": null
}
},
{
"17": {
"title": "Publisher: arXiv Version Number: 3.",
"author": "I. Loshchilov and F. Hutter, \u201cDecoupled Weight Decay Regularization,\u201d\n2017.",
"venue": null,
"url": null
}
},
{
"18": {
"title": "Publisher: arXiv Version Number: 6.",
"author": "Z. Li, C. Wang, M. Han, Y. Xue, W. Wei, L.-J. Li, and L. Fei-Fei, \u201cThoracic\nDisease Identification and Localization with Limited Supervision,\u201d\n2017.",
"venue": null,
"url": null
}
}
],
"url": "http://arxiv.org/html/2311.07750v3"
}
20240522/2311.08053v4.json ADDED
@@ -0,0 +1,147 @@
{
"title": "Batch Selection and Communication for Active Learning with Edge Labeling",
"abstract": "Conventional retransmission (ARQ) protocols are designed with the goal of ensuring the correct reception of all the individual transmitter\u2019s packets at the receiver. When the transmitter is a learner communicating with a teacher, this goal is at odds with the actual aim of the learner, which is that of eliciting the most relevant label information from the teacher. Taking an active learning perspective, this paper addresses the following key protocol design questions: (i) Active batch selection: Which batch of inputs should be sent to the teacher to acquire the most useful information and thus reduce the number of required communication rounds? (ii) Batch encoding: Can batches of data points be combined to reduce the communication resources required at each communication round? Specifically, this work introduces Communication-Constrained Bayesian Active Knowledge Distillation (CC-BAKD), a novel protocol that integrates Bayesian active learning with compression via a linear mix-up mechanism. Comparisons with existing active learning protocols demonstrate the advantages of the proposed approach.",
"sections": [
{
"section_id": "1",
"parent_section_id": null,
"section_name": "Introduction",
"text": "In a classic communication-theoretic model [1 ###reference_b1###], the sender organizes the data into a set of packets that are then passed on to a lower layer of the protocol stack. The responsibility of the lower layer is to ensure reliable transmission, such that all data packets from the sender\u2019s set are reliably replicated at the destination. In doing so, the sender runs an automatic repeat request (ARQ) protocol, and the destination sends ACK/NACK feedback messages to indicate the status of the received packets. The packets are transmitted without replacement; that is, upon reception of an ACK, the packet is never sent again.\nNow, assume that, as depicted in Fig. 1 ###reference_###, the sender is a learner that uses a channel to communicate with a teacher [2 ###reference_b2###, 3 ###reference_b3###]. The packets at the learner encode unlabeled data samples that the learner can send over the channel to the teacher to obtain, possibly noisy, labels. This transmission could follow the traditional ARQ-based protocol, ensuring all data samples are replicated on the teacher side. This paper starts with the observation that the communication objective in this problem should not be to replicate data at the teacher, but rather to elicit the most informative label information from the teacher.\nThis novel objective introduces two novel aspects in the design of the communication protocol. First, the learner can adaptively select the data points to communicate based on its current uncertainty about inference decisions at test time. Second, the learner may not need to encode data samples individually, requesting separate label information. Rather, a compressed mix-up over batches of selected inputs may suffice to obtain useful information from the teacher, saving bandwidth over the communication channel.\n###figure_1### In this paper, we specifically address the following two design aspects for the setting in Fig. 1:\nActive batch selection: Which inputs should be sent to the teacher to acquire the most useful information and thus reduce the number of required communication rounds?\n Batch encoding: How do we encode the input information for transmission to the teacher to reduce the communication resources required at each round?"
|
| 10 |
+
},
|
| 11 |
+
{
"section_id": "1.1",
"parent_section_id": "1",
"section_name": "Active Batch Selection and Batch Encoding",
"text": "The problem of active batch selection can be viewed as a form of Active Knowledge Distillation (AKD) [4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###]. In AKD, the goal is to select inputs for a teacher that responds with its model\u2019s predictive probabilities over the set of labels, i.e., with soft labels, for the chosen inputs. Most existing AKD schemes select inputs with maximal predictive uncertainty for the learner\u2019s model [5 ###reference_b5###, 6 ###reference_b6###]. This approach suffers from confirmation bias, as inputs with maximal predictive uncertainty for the learner may correspond to inherently hard examples to predict and thus to highly uncertain inputs for the teacher. The authors in [6 ###reference_b6###] proposed the Robust Active Knowledge Distillation (RAKD) protocol that strives to address the confirmation bias problem by making worst-case assumptions on the labeling errors made by the teacher, and it penalizes accordingly the potential information gains associated with each input. However, RAKD does not account for communication constraints between learner and teacher.\nFew papers have addressed the problem of batch encoding. In [5 ###reference_b5###, 7 ###reference_b7###], the authors proposed to mix up inputs within a batch, as in [8 ###reference_b8###], to augment the size of the unlabeled data set at the learner. The mix-up approach has not been explored in terms of its potential benefits in reducing communication requirements."
|
| 16 |
+
},
|
| 17 |
+
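For readers unfamiliar with the mix-up operation cited above ([8]), it is, at its core, a random convex combination of inputs. The following minimal Python sketch illustrates it under the standard Beta-distributed mixing weight; the function name and the alpha value are illustrative assumptions, not notation from the paper.

```python
# Hedged sketch of mix-up (as in [8]): a random convex combination of two
# inputs; the function name and `alpha` are illustrative assumptions.
import numpy as np

def mixup_pair(x1, x2, alpha=0.4, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing coefficient in [0, 1]
    return lam * x1 + (1.0 - lam) * x2, lam

x1, x2 = np.ones(4), np.zeros(4)
x_mix, lam = mixup_pair(x1, x2)
print(lam, x_mix)                          # x_mix equals lam * x1 elementwise
```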
{
"section_id": "1.2",
"parent_section_id": "1",
"section_name": "Main Contributions",
"text": "In this paper, we contribute to both problems of active batch selection and batch encoding by proposing a communication protocol that integrates Bayesian active learning with compression via a linear mix-up mechanism. The approach, referred to as Communication-Constrained Bayesian Active Knowledge Distillation (CC-BAKD), aims at maximizing the amount of reliable labeling information provided by the teacher per communication round.\n\n Bayesian AKD (BAKD): To reduce the number of communication rounds between the learner and the teacher, we adopt a Bayesian active learning strategy [2 ###reference_b2###, 3 ###reference_b3###] that selects batches based on their epistemic uncertainty at the learner\u2019s side. Epistemic uncertainty refers to the portion of the overall predictive uncertainty that additional data may decrease. As such, it contrasts with the inherent, aleatoric, predictive uncertainty that characterizes hard examples. By focusing solely on epistemic uncertainty, unlike RAKD [6 ###reference_b6###], the learner automatically addresses the confirmation bias problem of avoiding hard inputs characterized by significant inherent uncertainty.\n\n Batch encoding via linear mix-up compression: We propose a novel batch compression strategy that jointly compresses a batch of inputs over the feature dimension while mixing up the feature-compressed inputs. This compression strategy is integrated with the epistemic uncertainty-based active batch selection process to reduce the communication overhead per communication round.\nOrganization: The rest of the paper is organized as follows. In Section II ###reference_###, we present the setting of interest. Section III ###reference_### presents a BAKD protocol imposing only a maximum batch size constraint at each communication round. We then present the CC-BAKD protocol in Section IV ###reference_###. Experiments are given in Section V ###reference_###, followed by conclusions in Section VI ###reference_###."
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "II Setting",
"text": "We study the setup illustrated in Fig. 1 ###reference_###, in which a learner aims to train a -class classification model through communication with a teacher."
},
{
"section_id": "2.1",
"parent_section_id": "2",
"section_name": "II-A Learning Model",
"text": "The learner has access to a pool set with unlabeled inputs, as well as an initial training set with labeled examples, where is the -th input sample with features and is the corresponding hard target.\nThe goal of the learner is to train a classifier by leveraging a discriminative model , e.g., a neural network, with parameters . To this end, the learner wishes to distill knowledge available at the teacher in the form of a pre-trained model . For this purpose, at each communication round , the learner selects a batch of unlabeled inputs, denoted as , from the pool set . The selected subset is removed from set and sent to the teacher. The teacher responds to the learner with the soft targets for each input from the batch , given by the probability vector:\nwhich assigns a probability to each of the possible classes.\nAfter receiving feedback from the teacher, the learner updates the training set with the corresponding soft targets. Thus, the updated training set consists of two disjoint sets\nwhere the set is the initial training set with hard targets and the set denotes the part of the training set with the received soft targets as in (1 ###reference_###). Using the updated training set, the learner retrains its local model."
},
{
"section_id": "2.2",
"parent_section_id": "2",
"section_name": "II-B Communication Model",
"text": "At each communication round, the learner communicates with the teacher over a frame of fixed length given by real numbers [1 ###reference_b1###]. Letting be the total number of real numbers, also referred to as symbols, available for communication from learner to teacher. Thus, the total number of communication rounds of the protocol is constrained and given by\nEach transmitted symbol is affected by additive i.i.d. zero-mean Gaussian noise with noise power .111Such noise models potential distortions caused by quantization [9 ###reference_b9###] or by channel noise stemming from analog transmission (see, e.g., [10 ###reference_b10###]). We consider the portion of the frame allocated for communication between teacher and learner to be negligible compared to that between learner and teacher. This assumption is reasonable because sending soft targets consumes significantly fewer resources than sending batches of input feature vectors; we do not model the communication resources from teacher to learner."
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "III Bayesian Active Knowledge Distillation",
"text": "In this section, we present a BAKD protocol that reduces the number of communication rounds while imposing only a maximum batch size constraint at each round. The following section will integrate this protocol with a batch compression strategy to yield the proposed CC-BAKD algorithm."
},
{
"section_id": "3.1",
"parent_section_id": "3",
"section_name": "III-A Bayesian Active Knowledge Distillation",
"text": "The goal of AKD is for the learner to reduce the number of communication rounds with the teacher [6 ###reference_b6###]. In this section, we propose to apply tools from Bayesian active learning to introduce a novel AKD protocol, referred to as BAKD, that selects inputs with maximum epistemic uncertainty at the learner [2 ###reference_b2###, 11 ###reference_b11###, 12 ###reference_b12###, 3 ###reference_b3###, 13 ###reference_b13###]. As discussed in Section I ###reference_###, the underlying principle is that the learner should select a batch of inputs on which its current model has maximal epistemic uncertainty, thus avoiding inputs that are inherently hard to predict.\nAccordingly, the learner decides a batch of inputs on which to query the teacher by using the BatchBALD acquisition function in [3 ###reference_b3###]. Specifically, the selected batch is obtained as\nwhere the BatchBALD acquisition function\nequals the mutual information between the labels corresponding to the selected inputs in the candidate set . The mutual information is evaluated by the learner concerning the joint distribution\nwhere the variational posterior distribution is maintained by the learner based on the current training set as explained in the next subsection.\nThe mutual information criterion in (5 ###reference_###) captures the average disagreement between the learner\u2019s models over the batch of inputs when averaged with respect to distribution . This average captures the level of epistemic uncertainty as predicted by the learner [3 ###reference_b3###, 13 ###reference_b13###]."
},
{
"section_id": "3.2",
"parent_section_id": "3",
"section_name": "III-B Bayesian Learning with Soft Labels",
"text": "We now address the problem of optimizing the variational distribution based on the updated training set (2 ###reference_###). To this end, we adopt the standard variational inference (VI) framework (see, e.g., [13 ###reference_b13###]), which we extend to account for the availability of soft labels in .\nTo start, let us define the standard free energy criterion, also known as negative ELBO, for the labeled data set as [13 ###reference_b13###, p.456]\nwhere (8 ###reference_###) represents the cross-entropy (CE) loss, is a fixed prior distribution, and the empirical average over the training set is denoted as . Furthermore, the constant is a hyperparameter that captures relative contributions of prior and data-dependent loss.\nMinimizing the free energy over the distribution within some set of distributions yields the conventional VI algorithm. Instantiations of VI include variational dropout (VD) [14 ###reference_b14###, 11 ###reference_b11###, 15 ###reference_b15###], where the variational distribution is modeled as a Bernoulli-Gaussian vector.\nWe extend VI to allow training over data set (2 ###reference_###) consisting of both hard-labeled and soft-labeled examples. To this end, we propose to optimize the weighted free energy criterion\nwhere the conventional free energy is defined as in (7 ###reference_###) as a function of the labeled data set , while the \u201csoft\u201dfree energy criterion follows (7 ###reference_###) but with a modified CE loss\nThe CE (10 ###reference_###) gauges the cross-entropy between the soft targets in (1 ###reference_###) and the learner\u2019s predictive distribution. In the weighted free energy (9 ###reference_###), we introduce the hyperparameter to dictate the relative weight given by the learner to the feedback provided by the teacher as compared to the original labels in set . The BAKD protocol is summarized in Algorithm 1 ###reference_###."
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "IV Communication-Constrained Bayesian Active Distillation",
"text": "In this section, we introduce CC-BAKD, a generalization of the BAKD protocol introduced in the previous section that aims at reducing the required communication resources per communication round by compressing the selected batches at Step 3 of Algorithm 1 ###reference_###. However, compressing batches to reduce communication costs comes at the price of introducing reconstruction noise, which other noise sources, such as quantization noise, can further augment. Consequently, the batch received to be labeled by the teacher is distorted, which can impair the quality of the output soft targets that the teacher can give as feedback to the learner. To combat this uncertainty, CC-BAKD introduces a new acquisition function in contrast to the one in Step 3 of Algorithm 1 ###reference_###, as well as a novel model update in lieu of Step 7 of Algorithm 1 ###reference_###. An overview of CC-BAKD is given in Algorithm 2 ###reference_###. Details are given below."
},
{
"section_id": "4.1",
"parent_section_id": "4",
"section_name": "IV-A Batch Encoding",
"text": "After selecting a batch at a given step, as detailed in Section IV-D ###reference_###, the learner compresses it for transmission. As we detail next, this is done using a linear compressor that implements a novel form of mix-up encoding.\nLet and be the compressed batch and feature dimensions, respectively. Let us also denote as the data matrix for the selected batch and as its vector form, where is the operation that stacks columns of the input matrix. We adopt a joint linear compression scheme of the form\nwhere is the compression matrix. Note that the compressed vector in (11 ###reference_###) is of dimension . We use existing designs for matrix . For example, matrix can be based on principal component analysis (PCA) [16 ###reference_b16###]. We further assume that the learner and the teacher agree on the compression matrix at the start of the protocol.\nDefine the compression ratio as\nWith a compression rate , the number of communication rounds in (3 ###reference_###) when evaluates to\nWe use to avoid the case in which communication is unconstrained: when . The case where compression is not applied can be obtained by letting , as is the case of BAKD."
},
{
"section_id": "4.2",
"parent_section_id": "4",
"section_name": "IV-B Batch Decoding and Teacher\u2019s Feedback",
"text": "The teacher receives the vector\nwhere is the mentioned additive Gaussian noise accounting for quantization noise or analog communications, whose entries are distributed as with mean zero and noise power , as defined in Section II-B ###reference_###. To estimate the selected batch, the teacher obtains the vector\nwhere in (a) we used (11 ###reference_###).\nWe measure the overall distortion due to transmission over the constrained communication channel as\nwhere, on the right-hand side, the first term within the norm represents the reconstruction noise due to compression, and the second is the quantization or communication noise.\nThe decoded signal is then resized in matrix form as . The decoded batch of inputs is given by , where is the -th column of . Using (15 ###reference_###), the teacher labels the distorted outputs to produce the estimated soft targets\nNote that the estimated soft targets in (17 ###reference_###) differ from the true soft targets in (1 ###reference_###) for due to the noise in (16 ###reference_###)."
},
{
"section_id": "4.3",
"parent_section_id": "4",
"section_name": "IV-C Learner\u2019s Model Update",
"text": "After receiving the teacher\u2019s feedback in (17 ###reference_###) for each input in the batch, the learner updates its training and pool sets, and it also updates its model to obtain a new distribution , as per Steps 10-12 of Algorithm 2 ###reference_###. As in (2 ###reference_###), the updated training set is composed of two disjoint sets . However, unlike the setting studied in the previous section, set is now impaired by the noise from (16 ###reference_###) present in the estimated soft targets (17 ###reference_###) provided by the teacher. To tackle this uncertainty, we propose two methods to the learner to obtain , both of which generalize the weighted free energy criterion (9 ###reference_###).\nUncompressed covariates-based CE loss: The CE loss is\nwhich associates the noiseless, clean inputs to the estimated soft targets .\nCompressed covariates-based CE loss: The CE loss is\nwhich associates the decoded, distorted inputs to the estimated soft targets . Decoded inputs can be obtained at the learner\u2019s side by applying the encoder/decoder locally, but this method does not account for quantization noise."
},
{
"section_id": "4.4",
"parent_section_id": "4",
"section_name": "IV-D Compression-Aware Active Batch Selection",
"text": "In choosing the next batch from the pool , the learner should not only attempt to maximize the epistemic uncertainty at the current model, as in (4 ###reference_###) for BALD or BatchBALD [2 ###reference_b2###, 3 ###reference_b3###], but it should also account for the noise caused by the compression loss. For fixed compression matrix , this can be done by letting the learner choose the batch on the decoded batch space using the encoder/decoder steps locally. Hence, we propose to generalize the acquisition function (4 ###reference_###) as\nwhere the decoded batch takes the place of the lossless covariates . We refer to the above generalization as the compression-aware acquisition function. Note that this acquisition function does not deal with quantization noise."
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "Experiments",
"text": "In this section, we empirically demonstrate the effectiveness of CC-BAKD in Algorithm 2 ###reference_### by comparing its performance with BAKD in Algorithm 1 ###reference_###, which does not perform compression, and with RAKD from [6 ###reference_b6###], which is based on an acquisition function that uses the aleatoric uncertainty at the learner. For clarity, when considering CC-BAKD, we show the best final average performance obtained with (18 ###reference_###) or (19 ###reference_###). In this regard, we generally found that (19 ###reference_###) outperforms (18 ###reference_###) in situations of high compression ratio . For BAKD and CC-BAKD, we display the best final average performance results regarding by performing a grid search over ."
},
{
"section_id": "5.1",
"parent_section_id": "5",
"section_name": "Simulation Setup",
"text": "Following [17 ###reference_b17###, 11 ###reference_b11###, 12 ###reference_b12###], we consider the MNIST dataset, consisting of handwritten digit images with pixels divided into digit classes. The standard MNIST training set of 60K examples is partitioned to create class-balanced sets and with and K inputs, respectively. We use the remaining examples to create class-balanced training and validation sets for the teacher, with the validation set having a size of 100 examples and the training set having a size of 50K examples. This process is repeated ten times for statistical evaluation. The standard test set of 10K examples is used to evaluate the models of the learner and the teacher.\nModel architectures: The learner has a neural network with two hidden layers with 800 ReLU units per layer. As in [12 ###reference_b12###], VD is applied only in the last layer with a dropout probability of 0.5. The teacher has a neural network with two hidden layers with 1200 ReLU units per layer and three dropout layers with hidden dropout probabilities of 0.5 and an input dropout probability of 0.8. For VD, we used 10-100 stochastic realizations and Bernoulli dropout layers [14 ###reference_b14###, 11 ###reference_b11###, 15 ###reference_b15###].\nTraining: We use stochastic gradient descent with a training batch size of 32, a learning rate of 0.01, and a momentum of 0.9 over ten epochs for the learner\u2019s training. We further assume that the learner\u2019s training data keeps the ordering given by the acquisition step. For the teacher\u2019s training, we use a learning rate of 0.01, a momentum of 0.9, and a weight decay of 5e-4. We employ early stopping during teacher training with a patience of 5 epochs. The baseline test accuracy performance of the learner prior to communications is [%], while the accuracy of the teacher\u2019s model is [%].\nCommunication model: Unless otherwise stated, we consider a worst-case scenario where the number of available symbols is equal to the feature dimension symbols, meaning that if there is no compression, the number of communication rounds in (3 ###reference_###) would evaluate to one; that is, the learner would be able to query the label of a single input.\nRAKD Benchmark: To benchmark the performance of BAKD and CC-BAKD, we consider the Oracle RAKD method, which provides an upper bound on the accuracy achievable by RAKD [6 ###reference_b6###].\nLinear compression: We adopt PCA to obtain the overall compression matrix in (11 ###reference_###) by using a balanced 10K labeled examples dataset. A sub-sampling approach is used to generate data when . This method creates random batches by selecting inputs from the 10K balanced without replacement. We repeat this process ten times, as before. Due to the limited variability of the MNIST dataset, we found that the region of interest for the compression ratio lies in . This is because most of the variability of the MNIST digits can be explained with few feature dimensions, and reconstruction noise is usually extremely low outside this range.\n###figure_2###"
},
{
"section_id": "5.2",
"parent_section_id": "5",
"section_name": "Results without Quantization Noise",
"text": "We analyze the reconstruction noise in isolation by setting the additive noise power in (14 ###reference_###) to zero. We start by evaluating the impact of the compression ratio, , for a fixed total number of symbols and a fixed batch size of . Fig. 3 ###reference_### shows the learner\u2019s final test accuracy after communicating with the teacher. As increases, the performance of the benchmark methods BAKD and Oracle RAKD does not change since they do not apply compression. In contrast, for CC-BAKD, as increases, more communication rounds are possible, improving the accuracy to some level, after which additional compression deteriorates the learner\u2019s performance.\n###figure_3### Figure 2 ###reference_### shows the evolution of the accuracy of different schemes as a function of the number of communicated symbols, which reflects the transmission latency. Take the point as an example. For this value of , CC-BAKD, with and , attains an accuracy of approximately % while, with and , the accuracy at is approximately %. In contrast, BAKD and RAKD cannot transmit any input for values of smaller than . In this regard, BAKD attains an accuracy of % only after approximately symbols, implying that, to achieve the same accuracy of CC-BAKD, BAKD requires a latency that is approximately 85 times larger than CC-BAKD."
},
{
"section_id": "5.3",
"parent_section_id": "5",
"section_name": "Results with Quantization Noise",
"text": "We now study the impact of additive noise, which further degrades the estimated inputs at the teacher and, as a result, also the soft targets provided by the teacher to the learner. Figure 4 ###reference_### shows the learner\u2019s final test accuracy as a function of the noise power in (14 ###reference_###) for a fixed batch size . In this figure, it is noteworthy that BAKD and RAKD exhibit superior performance compared to CC-BAKD because they utilize more symbols (). While noise generally degrades the learner\u2019s accuracy, CC-BAKD is seen to be significantly more robust to the presence of noise than BAKD and RAKD. This is because BAKD and RAKD curves decay faster as the noise power increases than the CC-BAKD one.\n###figure_4###"
},
{
"section_id": "6",
"parent_section_id": null,
"section_name": "VI Conclusions",
"text": "This paper has introduced Communication-Constrained Bayesian Active Distillation (CC-BAKD), a new protocol that moves away from the classical ARQ-based approach of ensuring the correct reception of all the information packets at the transmitter, aiming instead to collect the most relevant information from the remote teacher. To do so, CC-BAKD builds on Bayesian active learning to address the problem of confirmation bias and compression based on a linear mix-up mechanism. Numerical results have demonstrated clear advantages over the state-of-the-art. Future work may consider extensions involving multiple learners and/or teachers."
}
],
"appendix": [],
"tables": {},
"image_paths": {
"1": {
"figure_path": "2311.08053v4_figure_1.png",
"caption": "Figure 1: \nA learner communicates with a teacher over a constrained communication channel to obtain soft labels for batches of unlabeled inputs. This work aims to devise active batch selection strategies that use the available communication resources as efficiently as possible while reducing the communication cost of a batch through a batch encoding method.",
"url": "http://arxiv.org/html/2311.08053v4/"
},
"2": {
"figure_path": "2311.08053v4_figure_2.png",
"caption": "Figure 2: Evolution of the learner\u2019s performance as a function of the number of communicated symbols N\ud835\udc41Nitalic_N. The batch size is B=4\ud835\udc354B=4italic_B = 4 for all schemes. The red lines indicate values of the number of symbols N\ud835\udc41Nitalic_N\nrequired to transmit a single uncompressed input and a batch of B=4\ud835\udc354B=4italic_B = 4 uncompressed inputs, respectively.",
"url": "http://arxiv.org/html/2311.08053v4/"
},
"3": {
"figure_path": "2311.08053v4_figure_3.png",
"caption": "Figure 3: \nLearner\u2019s final test accuracy as a function of the compression ratio, R\ud835\udc45Ritalic_R, for a total number of symbols of N=784\ud835\udc41784N=784italic_N = 784 and a batch size of B=4\ud835\udc354B=4italic_B = 4. For CC-BAKD, we show the corresponding number of communication rounds, C\ud835\udc36Citalic_C in (13) in the top horizontal axis. For BAKD and RAKD, the number of communication rounds is fixed to one for a batch size of one since these protocols do not apply compression.",
"url": "http://arxiv.org/html/2311.08053v4/"
},
"4": {
"figure_path": "2311.08053v4_figure_4.png",
"caption": "Figure 4: \nCC-BAKD final learner\u2019s performance as a function of the noise power for a batch size of B=4\ud835\udc354B=4italic_B = 4. For CC-BAKD with R=0.99\ud835\udc450.99R=0.99italic_R = 0.99, the number of transmitted symbols is N=7840\ud835\udc417840N=7840italic_N = 7840, while for RAKD and BAKD, the number of transmitted symbols is N=78400\ud835\udc4178400N=78400italic_N = 78400.",
"url": "http://arxiv.org/html/2311.08053v4/"
}
},
"validation": true,
"references": [],
"url": "http://arxiv.org/html/2311.08053v4"
}
20240522/2311.11176v2.json
ADDED
@@ -0,0 +1,198 @@
{
"title": "Morphology-Enhanced CAM-Guided SAM for Weakly Supervised Breast Lesion Segmentation",
"abstract": "Ultrasound imaging plays a critical role in the early detection of breast cancer. Accurate identification and segmentation of lesions are essential steps in clinical practice, requiring methods to assist physicians in lesion segmentation. However, ultrasound lesion segmentation models based on supervised learning require extensive manual labeling, which is both time-consuming and labor-intensive. In this study, we present a novel framework for weakly supervised lesion segmentation in early breast ultrasound images. Our method uses morphological enhancement and class activation map (CAM)-guided localization. Finally, we employ the Segment Anything Model (SAM), a computer vision foundation model, for detailed segmentation. This approach does not require pixel-level annotation, thereby reducing the cost of data annotation. The performance of our method is comparable to supervised learning methods that require manual annotations, achieving a Dice score of 74.39% and outperforming comparative supervised models in terms of Hausdorff distance in the BUSI dataset. These results demonstrate that our framework effectively integrates weakly supervised learning with SAM, providing a promising solution for breast cancer image analysis. The code for this study is available at: https://github.com/YueXin18/MorSeg-CAM-SAM.",
"sections": [
{
"section_id": "1",
"parent_section_id": null,
"section_name": "Introduction",
"text": "Breast cancer is one of the most common malignant tumors affecting women, with its incidence rising annually [1 ###reference_b1###]. Globally, in 2020, there were approximately 2.26 million cases of breast cancer, leading to around 685 thousand deaths [2 ###reference_b2###]. This makes it the leading cause of cancer-related mortality among women worldwide [3 ###reference_b3###]. The World Health Organization launched the Global Breast Cancer Initiative in 2021, aiming to tackle this significant health challenge [3 ###reference_b3###].\nGlobally, three main inequities affect breast cancer care [4 ###reference_b4###]: late diagnosis, often at advanced stages; inadequate services, including limited diagnostic and treatment facilities; and low coverage, particularly in the inclusion of breast cancer in Universal Health Coverage (UHC).\nThe risk factors for breast cancer are multifaceted. Age is a primary risk factor, with older women exhibiting the highest incidence rates [5 ###reference_b5###]. Furthermore, genetic factors play a role in 5-10% of cases, notably mutations in the BRCA1 or BRCA2 genes [3 ###reference_b3###]. Early detection and prompt treatment are vital, as the five-year survival rate can exceed 90% with early diagnosis [6 ###reference_b6###]. In terms of diagnostics, while manual examinations are common, three key imaging techniques are employed: mammography, magnetic resonance imaging (MRI), and ultrasound. Ultrasound is particularly effective for examining dense breasts, as it provides detailed insights into the morphology, orientation, internal structure, and margins of lesions [7 ###reference_b7###, 8 ###reference_b8###]. These assessments are crucial for distinguishing between benign and malignant breast lesions [9 ###reference_b9###]. Ultrasound stands out as a highly sensitive, non-invasive, radiation-free, and cost-effective method for early breast cancer detection and diagnosis, especially in dense breast tissue [10 ###reference_b10###].\nComputer-aided diagnosis (CAD) has emerged as a key research priority for radiologists, particularly for enhancing the efficiency of interpreting ultrasound images [11 ###reference_b11###]. CAD systems can autonomously analyze lesion characteristics and differentiate them from normal tissues. However, the automatic detection of breast tumors presents challenges, notably due to their irregular shapes and blurred boundaries. Current research in breast ultrasound (BUS) image lesion segmentation falls into two main categories: traditional methods that rely on predefined features [12 ###reference_b12###] and those based on deep learning [13 ###reference_b13###, 14 ###reference_b14###]. Traditional segmentation approaches, such as region growing-based [15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###, 18 ###reference_b18###], threshold-based [19 ###reference_b19###, 20 ###reference_b20###, 21 ###reference_b21###, 22 ###reference_b22###], and clustering-based methods [23 ###reference_b23###, 24 ###reference_b24###], effectively capture contour information of lesions. However, they often struggle with generalization, especially in cases of lesions with fuzzy and irregular boundaries. In contrast, deep learning-based methods have shown significant progress in detecting breast lesions. A notable development in this area is the introduction of an end-to-end convolutional neural network (CNN), UNet, specifically designed for medical image segmentation [25 ###reference_b25###]. 
Following its introduction, several neural network architectures similar to UNet, such as [26 ###reference_b26###, 27 ###reference_b27###, 28 ###reference_b28###, 29 ###reference_b29###, 30 ###reference_b30###], have been developed, demonstrating enhanced abilities in segmenting breast lesions.\nMost deep learning methods in medical imaging operate under fully supervised scenarios, where their performance heavily depends on the quantity of pixel-level labels [31 ###reference_b31###]. However, annotating medical images requires specialized medical knowledge, including disease diagnosis and understanding of anatomical structures, making accurate and comprehensive annotation challenging. Consequently, achieving better segmentation results with less costly image annotation has become a focal point in medical image segmentation research. Researchers are increasingly exploring methods that can efficiently utilize imperfect data or labels, such as weakly supervised segmentation algorithms [32 ###reference_b32###, 31 ###reference_b31###]. Some approaches use classification networks and class activation maps (CAM) [33 ###reference_b33###], which identify semantic features and help initially localize lesions [34 ###reference_b34###, 35 ###reference_b35###]. These methods not only facilitate lesion detection but also aid in understanding the classification model\u2019s predictions [36 ###reference_b36###], allowing researchers and users to examine the model\u2019s decision-making basis [37 ###reference_b37###, 38 ###reference_b38###]. However, CAMs typically provide only a rough estimate of the predicted areas, and their ability to precisely detect lesion boundaries, especially edges, is limited [39 ###reference_b39###, 40 ###reference_b40###].\nFurthermore, the effectiveness of weakly supervised methods largely depends on the accuracy of pseudo-labels. These output generated by CAM, can be influenced by surrounding background noise [41 ###reference_b41###].\nThe recently introduced Segment Anything Model (SAM) [42 ###reference_b42###] represents an advancement in addressing these challenges. Trained on over 11 million images with 1 billion masks (SA-1B), SAM is capable of zero-shot segmentation on unseen images using various prompts such as bounding boxes, points, and text [43 ###reference_b43###]. However, its application to medical images, which often feature complex biological tissues with diverse shapes and characteristics, differs from natural images. Therefore, direct segmentation of lesions using SAM in medical contexts has not yet yielded optimal results [44 ###reference_b44###, 45 ###reference_b45###].\nMore critically, the SAM requires prompt-based input to perform medical image segmentation tasks, making it difficult to automate and challenging for scalable applications in clinical settings.\nTo overcome the limitations of current methods, we propose a novel weakly supervised lesion segmentation framework comprising four main modules in breast ultrasound imaging: a traditional segmentation module based on morphology, a semantic information extraction and lesion localization module, an information fusion module, and a SAM fine-grained segmentation module. The traditional segmentation module utilizes morphology to perform initial segmentation and extract contour information from medical images, focusing on the shape, edge, and direction of lesions. 
The semantic information extraction and lesion localization module, leveraging image-level category labels, trains a classification network and achieves a fuzzy localization of lesions through the heat map provided by CAM. The information fusion module then adeptly combines the outputs from these two modules, generating a more comprehensive lesion area. Finally, SAM utilizes this area as a prompt for segmenting lesions, refining the segmentation process and enhancing the results through post-processing. This integrated approach aims to address the challenges in weakly supervised breast lesion segmentation by combining traditional and advanced techniques for more accurate and efficient results. The key contributions of this paper are outlined as follows:\nThis paper introduces a novel integration of SAM with weakly supervised methods for breast lesion segmentation. It can refine segmentation regions when fed with regions derived from CAM or similar techniques. This ability is particularly beneficial in scenarios with limited training data, ensuring improved segmentation outcomes.\nThe proposed segmentation framework integrates prior knowledge of lesion morphology, semantic features from medical images, and the precise segmentation capabilities of SAM. By merging these diverse methodologies, the framework is able to learn more comprehensive lesion features, leveraging the combined strengths of each approach.\nThe framework has been evaluated on two publicly available datasets, showcasing performance comparable to fully supervised learning approaches, as indicated by the overlap of the confidence intervals. It also outperformed other weakly supervised segmentation methods."
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "Related work",
"text": ""
},
{
"section_id": "2.1",
"parent_section_id": "2",
"section_name": "Weakly supervised segmentation methods",
"text": "Deep learning, particularly through architectures like UNet [25 ###reference_b25###] and DeepLabv3+, has revolutionized lesion segmentation in medical imaging. UNet, known for its encoder-decoder structure and skip connections, simplifies the segmentation process by eliminating complex manual feature extraction. DeepLabv3+ extends these capabilities with dilated convolution, effectively handling variations in lesion size and shape [46 ###reference_b46###]. This adaptability makes it especially suitable for complex medical image segmentation tasks.\nHowever, supervised learning based lesion segmentation relies on high-quality, pixel-level annotated datasets, which makes this field challenging. To achieve high-quality segmentation with easier and cost-effective annotations, researchers are exploring weakly supervised strategies, such as interaction-based methods. These methods involve user participation in selecting regions, marking boundaries, and refining labels, guiding the algorithm for better segmentation. For instance, Roth et al. [47 ###reference_b47###] used a random walk algorithm with user clicks to train a convolutional network, enhancing segmentation with custom loss functions and attention mechanisms. Pinheiro and Collobert [48 ###reference_b48###] proposed a method using pixel-level labels and back-propagation of errors for more accurate weakly supervised segmentation. These approaches offer promising alternatives to resource-heavy fully supervised methods, especially when high-quality annotated datasets are scarce.\nWeakly supervised semantic segmentation has gained significant attention in recent times, especially with the advancement of class activation mapping (CAM) techniques [33 ###reference_b33###]. Various researchers have proposed innovative methods to enhance the accuracy and overcome the inherent limitations of CAM-based approaches. For instance, Chen et al. [49 ###reference_b49###] introduced the causal CAM (C-CAM) method, addressing the challenge of unclear object boundaries between foreground and background. C-CAM operates on two causal chains: the category-causal chain, which relates to how image content affects categories, and the anatomical-causal chain, focusing on anatomical structures influencing organ segmentation. This method has been thoroughly tested across three public medical image datasets. Zhong et al. [50 ###reference_b50###] proposed a combination of CAM and weakly supervised detection-aware pre-training (DAP). This approach leverages weakly labeled categorical data for pre-training and transforms the categorical dataset into a detection dataset via a weakly supervised target localization method based on class-activation mapping. This enables the pre-trained models to be location-aware and capable of predicting bounding boxes. Ahn and Kwak [51 ###reference_b51###] proposed AffinityNet which generates accurate segmentation labels for training images based solely on image-level class labels. It combines semantic features with random walking to modify CAM and produce segmentation labels. These methods exemplify the innovative approaches in weakly supervised learning for medical image lesion detection and segmentation, significantly reducing the cost and subjectivity associated with manual labeling. However, weakly supervised methods primarily rely on image-level labeling, which can lead to the model learning imprecise features, impacting the final segmentation\u2019s accuracy. 
Additionally, these methods might not fully exploit all available information in complex images, particularly with intricate lesion structures or boundary cases. To address these challenges, the integration of boundary-refinement tools is crucial, offering a balanced and effective strategy for dealing with the complexities and nuances of medical image segmentation."
},
{
"section_id": "2.2",
"parent_section_id": "2",
"section_name": "SAM-based segmentation methods",
"text": "The Segment Anything Model (SAM) [42 ###reference_b42###] has recently gained considerable attention in the computer vision community for its remarkable zero-shot image segmentation capabilities. SAM, a model capable of generalizing to unfamiliar objects and images without additional training, incorporates the prompt paradigm from natural language processing into computer vision. This approach enables accurate image segmentation based on input prompts, such as points or boxes, and can generate masks for all objects in an image. However, SAM is primarily optimized for natural images, and its direct application in medical image segmentation has proven less accurate, posing significant challenges in this domain [44 ###reference_b44###, 52 ###reference_b52###]. Addressing these issues, researchers are focusing on adapting SAM for medical imaging. Ma and Wang et al. [52 ###reference_b52###] developed MedSAM, trained on a large-scale dataset containing over 1 million medical image-mask pairs. MedSAM is adept at segmenting various anatomical structures and lesions across different medical imaging modalities, offering a balanced mix of automation and customization. Similarly, Wu et al. [53 ###reference_b53###] introduced the Medical SAM Adapter, which integrates medical-specific knowledge into SAM using parameter-efficient adaptive fine-tuning techniques, significantly enhancing the original model. Fine-tuning SAM to suit specific datasets enhances its adaptability and ability to capture relevant features, improving performance on both familiar and unseen data [45 ###reference_b45###]. This process allows SAM to learn more generalizable features, aiding its performance across diverse samples and scenarios. However, fine-tuning may demand substantial computational resources, such as increased training time and storage, which could limit its applicability. Moreover, a fine-tuned SAM might not generalize well across various types of medical images or segmentation tasks due to domain-specific adaptations. Recent studies have shown that using prompt methods with SAM can markedly improve its performance. Chen et al. [54 ###reference_b54###] introduced RSPrompter, a novel prompt learning technique, to guide SAM in generating semantic instance-level masks, particularly enhancing its capabilities in remote sensing image instance segmentation. Deng et al. [55 ###reference_b55###] proposed the SAM-U framework, incorporating multi-frame prompts to achieve more accurate medical image segmentation. Compared to fine-tuning, prompt methods can reduce the dependency on large quantities of accurate labels. By integrating prompts into training, the model can achieve commendable performance with a relatively limited number of labels. Additionally, prompt methods enhance the model\u2019s interpretability, offering a more comprehensible and user-friendly approach to image segmentation.\nThese research demonstrate the potential and effectiveness of SAM, a generalized segmentation model that can be extended to the medical image field."
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "Methods",
"text": "In this study, we proposed a weakly supervised lesion segmentation method for breast ultrasound images, inspired by the framework outlined in Liu et al. [56 ###reference_b56###]. The proposed method shown in Figure 1 ###reference_###, commences with an initial segmentation of medical images, utilizing morphological knowledge like lesion shape and edges. Then followed by lesion localization obtained from an image classification model and CAM. Subsequently, we flexibly integrate the outcomes of traditional morphological segmentation with those of lesion localization. The process concludes with the application of the SAM and various post-processing techniques to refine the segmentation results to remove the topological error.\n###figure_1###"
},
{
"section_id": "3.1",
"parent_section_id": "3",
"section_name": "Traditional segmentation based on morphological feature",
"text": "Initially, our process begins with segmenting the medical image based on key characteristics of the lesions, such as shape, edges, and orientation. To achieve this, we employ the K-means algorithm to cluster the pre-processed image. Then, we apply thresholding to isolate all suspected lesions. These suspected lesions are then meticulously filtered using morphological knowledge to ensure precision in the segmentation."
},
{
"section_id": "3.1.1",
"parent_section_id": "3.1",
"section_name": "3.1.1 Suspicious lesion extraction",
"text": "The breast ultrasound images frequently exhibit low overall brightness, with grayscale values primarily confined to a lower range. To counteract this, we implement an automatic color enhancement (ACE) technique, as described by Getreuer [57 ###reference_b57###]. This ACE algorithm enhances image contrast by assessing the brightness and interrelationships between a target pixel and its adjacent pixels. Within this context, let represent a specific color channel of an RGB breast ultrasound (BUS) image, defined over the domain . The intensities in this channel are normalized within the range . The ACE process is applied independently to each of the three color channels, facilitating chromatic aberration correction in the BUS image as shown in Equation 1 ###reference_###.\nwhere represents , represents the Euclidean distance between and . The slope function, , plays a pivotal role in adapting to local image contrasts. It is designed to enhance minor variations and enrich significant ones, effectively compressing or expanding the dynamic range based on local image content. In the subsequent stage, we compute the enhancement channel by normalizing to the range as shown in Equation 2 ###reference_###. This normalization is crucial for maintaining consistency in contrast enhancement across the image.\nFollowing ACE, we employ the K-means clustering algorithm, an iterative method that groups the image data into distinct clusters based on a distance formula. The objective function depicted in Equation 3 ###reference_###, is used to achieve optimal clustering. Here, signifies the number of clusters, the number of points in each cluster, and the centroid of each cluster. The term quantifies the distance between a data point and the centroid of its cluster.\nAfter clustering, we perform global threshold segmentation, dividing the image into target and background regions using global information. This step is essential in completing the extraction of all suspected lesions, setting the stage for subsequent detailed analysis and segmentation."
},
{
"section_id": "3.1.2",
"parent_section_id": "3.1",
"section_name": "3.1.2 Suspected lesion filtering",
"text": "Our filtering approach for suspected lesions in breast ultrasound (BUS) images involves a layered anatomical model, as advised by experienced radiologists. We classify BUS image structures into three layers: subcutaneous fat, breast parenchyma, and chest wall muscle. Early-stage breast cancer lesions, typically benign, are almost exclusively located in the parenchymal layer. Based on this a priori knowledge, we removed the suspected lesions located in the bottom 1/3 and top 1/10 of the image.\nFor filtering suspected lesions within the parenchymal layer, we consider both the shape and aspect ratio of lesions, taking into account the textural characteristics of breast ultrasound images. Benign lesions like cystic nodules typically exhibit a hypoechoic, round or oval shape. We use morphological knowledge to filter out erroneously extracted tissues based on aspect ratio; benign lesions generally have a ratio between and , while ducts and lobules have a lower ratio. We calculate the minimum enclosing rectangle for each lesion from the binary image obtained by K-means clustering and thresholding, determining height and width. Non-target areas are then filtered out using this morphological knowledge. The criterion for screening non-lesion areas, based on their aspect ratio, is defined as follows in Equation 4 ###reference_###:\nHere, denotes the aspect ratio of the -th suspected lesion in the binary image."
},
{
"section_id": "3.2",
"parent_section_id": "3",
"section_name": "CAM-Guided classification model for lesion localization",
"text": ""
},
{
"section_id": "3.2.1",
"parent_section_id": "3.2",
"section_name": "3.2.1 Semantic information extraction",
"text": "To train a supervised learning segmentation model, the lesions in medical images require pixel-level annotation by professional medical experts. This process is time-consuming and costly, and the interpretation and annotation of medical images may be affected by the subjective judgment of experts. Different experts may give different annotations, which can lead to inconsistencies and uncertainty. Considering these existing problems, we use image-level labels to achieve semantic information extraction.\nIn our study, we employ DenseNet for classifying breast ultrasound (BUS) images due to its ability to effectively utilize features from shallow layers with low complexity, aiding in achieving a smooth decision function with enhanced generalization performance. Specifically, we utilize the DenseNet-121 variant, which comprises 121 layers. The network\u2019s architecture begins with an initial convolutional layer designed for three input channels. This layer uses a convolution kernel with a stride of 2 for extracting preliminary features. Subsequent processes include batch normalization and the application of ReLU activation functions. Spatial resolution is then reduced through a maximum pooling operation. The neural network consists of four dense blocks and three transition layers. The dense blocks contain 6, 12, 24, and 16 convolutional blocks, respectively. Each block features tightly connected convolutional layers, incorporating both and convolutions. These layers process outputs from preceding layers, continuously integrating new features. Transition layers, positioned between the dense blocks, comprise a convolution followed by a average pooling layer to decrease the spatial dimension of the output. The culmination of dense blocks and transition layers leads to a batch normalization layer that normalizes the final feature set. This is followed by mapping these features to the number of output categories in the classification output layer. In this network, the initial convolutional layer extracts fundamental features like edges and textures. The subsequent convolutional layers within the dense blocks build upon these features, enhancing semantic information layer by layer. The dense connectivity ensures efficient feature reuse and information transmission, while the transition layers help maintain semantic information as they reduce the feature map size. Ultimately, global average pooling aggregates the feature maps into a comprehensive representation, capturing the overarching semantic information of the image.\nLet presents the BUS dataset, where represents the -th image and is its corresponding image-level label, indicating the presence (or absence) of a lesion in image . The data with lesions is represented as , and the data without lesions is represented as .\nThe training process involves minimizing the binary cross entropy (BCE) loss, which is mathematically formulated as:\nwhere is the number of samples, represents the classification network and is the sigmoid function."
},
{
"section_id": "3.2.2",
"parent_section_id": "3.2",
"section_name": "3.2.2 Lesion localization",
"text": "In weakly supervised breast lesion localization stage, we employed an optimized Class Activation Mapping (CAM) method known as LayerCAM [39 ###reference_b39###] to generate a heatmap for benign lesion images. LayerCAM is integrated following the final convolutional layer of DenseNet[58 ###reference_b58###], highlighting key features relevant to classification. This approach provides a rough approximation of the lesion area without the need for pixel-level labeling. The calculation of the prediction score for a target category () is described by Equation 6 ###reference_###, where is the score, represents the classifier function with parameters , and is the input image. Let denote the output feature map from the final convolutional layer, with being the -th feature map within . Each activation in at spatial position is represented by .\nTo compute the gradient of the target category\u2019s prediction score with respect to a specific spatial location in , we use Equation 7 ###reference_###:\nLayerCAM uniquely assigns weights to each spatial location in the feature map based on their importance. These weights are determined by the gradients, using positive gradients as weights and assigning zero to negative gradients (Equation 8 ###reference_###):\nThe weighted activation for each position in the feature map is calculated using Equation 9 ###reference_###:\nFinally, to obtain the class activation map, the adjusted activations are linearly combined across the channel dimension and passed through a ReLU function (Equation 10 ###reference_###):"
},
{
"section_id": "3.3",
"parent_section_id": "3",
"section_name": "Feature fusion and region synthesis",
"text": "The morphology based algorithm is effective in contour extraction but faces challenges in medical image segmentation due to low contrast between lesions and surrounding tissues, complex lesion shapes and boundaries, and sensitivity to image noise. This sensitivity lead to incorrect clustering and erroneous segmentation.\nConversely, deep classification networks, through LayerCAM, can identify salient object regions but often suffer from imprecise activations. To address these limitations, we propose a fusion of traditional segmentation and deep learning methods, leveraging the strengths of both to construct a more accurate and complete lesion region, thereby enhancing segmentation accuracy.\nLet represents the set of suspected lesions extracted via traditional morphology-based segmentation, and denote the set identified through CAM-guided lesion localization module. We extract the outline of each lesion in the and fuse them with lesions in the to identify the one with the maximum intersection area. This lesion is then selected as the final synthetic result for next step.\nIf there\u2019s no intersection between and , we consider as our segmentation outcome. This operation is mathematically expressed in Equation 11 ###reference_###:\nwhere is the lesion with the largest overlap between and . Figure 2 ###reference_### illustrates this fusion process. In this way, a collection of lesion regions that encompass both morphological and semantic information.\n###figure_2###"
},
{
"section_id": "3.4",
"parent_section_id": "3",
"section_name": "SAM-Optimized lesion segmentation",
"text": "In this section, we detail the utilization of the SAM to enhance segmentation results obtained from synthesis regions. The initial segmentation, while incorporating morphological and CAM based semantic information, often lack precise lesion boundaries and areas. SAM, with its capability for high-precision segmentation, is proposed as a powerful tool to refine the segmentation result. SAM\u2019s architecture comprises three key components: an image encoder, a prompt encoder, and a mask decoder. The image encoder uses a scalable Vision Transformer (ViT) pre-trained by MAE [59 ###reference_b59###], adept at processing high-resolution inputs. Its primary function is to transform the target image into a feature space representation. For BUS image segmentation, we have devised two strategies to generate box and point prompts to integrate with SAM. These prompts are then fed into the encoder.\nThe mask decoder plays a crucial role in integrating the embeddings produced by both the image and prompt encoders. It decodes the final segmentation mask from the combined feature map of these embeddings. This process effectively aligns the image embedding, prompt embedding, and output token to generate a detailed and accurate mask, thereby enhancing the overall quality and precision of the segmentation.\nIn our research, the BUS images often contain complex biological structures and are susceptible to various noise sources. Direct application of the SAM proved insufficient for medical image segmentation due to these complexities. However, we discovered that using intermediate results as seed signals in SAM significantly improves its efficacy.\nWe used the original BUS image as input to SAM and experimented with two ways to interact with SAM: box prompt and point prompt. In box prompt segmentation, for each breast ultrasound (BUS) image, we generate the smallest enclosing rectangle from the fused pseudo-label information. This rectangular data is then fed as a seed region signal into SAM\u2019s prompt encoder, where it is transformed into embedding vectors. A mask decoder, leveraging these embeddings, segments the lesion mask from the BUS image.\nThe second method, point prompt segmentation, involves generating random points on the fused pseudo-labels. The coordinates of these points are input into SAM\u2019s prompt encoder, leading to enhanced pseudo-labels post-SAM segmentation.Moreover, we observed that the lesion areas segmented by SAM often contain holes. To rectify this, we apply a morphological reconstruction post-processing operation to refine the topological error. This method involves iterative expansion and erosion operations. In the post-processing stage, we performed an iterative expansion operation on the SAM-optimized segmented lesion regions to identify and fill these gaps, resulting in more accurate lesion segmentation outcomes."
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "4",
|
| 85 |
+
"parent_section_id": null,
|
| 86 |
+
"section_name": "Experiments and results",
|
| 87 |
+
"text": ""
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "4.1",
|
| 91 |
+
"parent_section_id": "4",
|
| 92 |
+
"section_name": "Dataset",
|
| 93 |
+
"text": "The Breast Ultrasound Image (BUSI) dataset [60 ###reference_b60###] is a classification and segmentation resource comprising ultrasound images from 600 female patients aged 25 to 75, collected in 2018. It contains 780 PNG images, categorized into normal, benign, and malignant classes, with 133 normal, 437 benign, and 210 malignant images.\nOur research is centered on understanding the early mechanisms of breast health and disease, primarily aimed at early detection and prevention of breast cancer. The segmentation of benign and normal images is crucial in identifying potential early abnormalities, thereby enhancing both prevention and early diagnosis for patients. Due to incomplete lesion labeling in the dataset, such as instances of multiple lesions with only one marked, we undertook a secondary selection process. This refined the dataset to 123 normal and 365 benign cases. To counter the imbalanced data distribution, we augmented the normal images using techniques like flipping and rotation. The dataset was then randomly divided into a training set with 390 images (around 80%) and a testing set with 98 images (around 20%).\nTo further validate our generalization, we used a second breast ultrasound dataset, referred to as Dataset B, proposed by Yap et al. [61 ###reference_b61###]. This dataset contains 163 images with an average size of 760\u00d7570 pixels, collected using the Siemens ACUSON Sequoia C512 system. It includes 109 benign cases and 54 malignant cases."
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "4.2",
|
| 97 |
+
"parent_section_id": "4",
|
| 98 |
+
"section_name": "Implementation details",
|
| 99 |
+
"text": "For these two experimental datasets, all the images were resized to 256\u00d7256 pixels. Note that all ablation experiments and parameter selections were performed using the BUSI dataset [60 ###reference_b60###], while Dataset B was used solely for additional evaluation.\nOur implementation utilizes the PyTorch framework [62 ###reference_b62###] and is trained on a single NVIDIA GeForce RTX 3090 (24GB). The morphology-based segmentation module employs K-means clustering with for lesion extraction, using a binarization threshold of 90. The classification model we used is DenseNet121 [58 ###reference_b58###], pre-trained on ImageNet [63 ###reference_b63###]. We utilize Stochastic Gradient Descent (SGD) as the optimizer, with a weight decay of 0.0004 and momentum of 0.9. The learning rate starts at , and the model is trained over 100 epochs with a minimum batch size of 16. We employ several evaluation metrics: Dice score, the 95th percentile of Hausdorff Distance (HD95), and Intersection over Union (IoU). For each of these metrics, we calculate and report both the mean and the 95% confidence interval. The confidence intervals are determined using bootstrap analysis, involving 5000 resampling iterations to ensure statistical robustness and accuracy in our results.\nOur ablation experiments, detailed in Section 4.3 ###reference_###, were meticulously designed to assess the performance of each module. We summarize the experimental setup and final setting as follows. We employ LayerCAM to determine the approximate location of the lesion and establish a binarization threshold at 200. In the final experiments, we incorporate the ViT-H model version of SAM.\nOur complete codes are public available for download at: https://github.com/YueXin18/MorSeg-CAM-SAM ###reference_###."
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "4.3",
|
| 103 |
+
"parent_section_id": "4",
|
| 104 |
+
"section_name": "Experimental results",
|
| 105 |
+
"text": ""
|
| 106 |
+
},
|
| 107 |
+
{
|
| 108 |
+
"section_id": "4.3.1",
|
| 109 |
+
"parent_section_id": "4.3",
|
| 110 |
+
"section_name": "4.3.1 CAM based lesion location",
|
| 111 |
+
"text": "To refine lesion localization and segmentation, we evaluated five class activation mapping (CAM) methods: Grad-CAM [64 ###reference_b64###], AblationCAM [65 ###reference_b65###], LayerCAM [39 ###reference_b39###], EigenGradCAM [66 ###reference_b66###], and Grad-CAM++ [67 ###reference_b67###].\nWe established a threshold by calculating the lesion intersection ratio from the results of the CAM methods compared to those from morphology-based segmentation. Using this threshold, the grayscale maps obtained from the CAM are converted into binary segmentation maps. In our experiments, the threshold was computationally determined to be set at 200.\nWe also validated the threshold selection method and results using ground truth. We conducted comparisons across five CAM related methods at different thresholds from 180 to 210. However, due to Grad-CAM\u2019s inability to segment lesions at a threshold of 210, we limited the comparison at this threshold to the other four methods. The experimental results validating the threshold selection on the BUSI dataset are detailed in Table 1 ###reference_###, with the best outcomes highlighted in bold.\nIt can be seen that among the five CAM methods, the initial localization of lesions using LayerCAM has the best performance. When the threshold is 200, the results of LayerCAM on HD95 are not as good as when the threshold is equal to 210, but the results of Dice score and IoU metrics are better. This is because the threshold has an effect on the selection of significant activations, and more activations may be included when the threshold is set to 200. Whereas HD95 focuses on the difference between the boundary predicted by the model and the true boundary. Therefore, more activations may lead to larger boundary errors, which may affect HD95. Dice score and IoU metrics focus more on the overall picture of the segmentation results than HD95, and these additional activations may also provide more information that may be beneficial for metrics such as Dice score and IoU.\nLayerCAM\u2019s superiority lies in its ability to assign distinct weights to each spatial location, thereby acknowledging the varying significance of the class of interest. This feature enables LayerCAM to eliminate background noise and retain reliable object localization information. The visualization of the results from various CAM models on the BUSI dataset is presented in Figure 3 ###reference_###.\n###figure_3### ###table_1###"
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"section_id": "4.3.2",
|
| 115 |
+
"parent_section_id": "4.3",
|
| 116 |
+
"section_name": "4.3.2 Ablation experiments",
|
| 117 |
+
"text": "In our ablation study, we systematically evaluated the performance impact of various components and configurations within our proposed framework, and presented the results on the BUSI dataset. This approach helps in understanding the contribution of each module to the overall segmentation task.\nWe experiment with two prompting methods for SAM: the point prompt and the box prompt. In the point prompt approach, we randomly generate 10 points within the synthesized area described in Section 3.3 ###reference_### and input these into the SAM model. For the box prompt method, we construct the smallest enclosing rectangle around the synthesized lesion as the box prompt. And then subsequently fed into SAM for segmentation.\nThe results of the ablation experiments are shown in Table 2 ###reference_###, with bold indicating the best results. Each configuration was meticulously assessed to determine its contribution to the effectiveness of the segmentation task. This structured approach not only underscores the individual significance of the modules but also illustrates their combined impact in our comprehensive framework.\nThe results presented in the table indicate that using a box prompt as input for SAM yields better performance compared to a point-based prompt. Furthermore, the most effective segmentation results for this task are attained by integrating all these modules.\nThe visualization of the ablation experiment\u2019s results is presented in Figure 4 ###reference_###. It is evident from these visualizations that each of our proposed modules contributes to enhancing the performance of lesion segmentation.\nA notable observation from Figure 4 ###reference_### is the tendency of SAM-segmented results to exhibit holes in certain areas. This issue may stem from the manner in which prompts are provided to the model, underscoring the necessity for final post-processing steps to address these gaps. The performance of our model shown in Table 4 ###reference_### is after post-processing to fill the holes inside the segment area.\n###figure_4### ###table_2###"
|
| 118 |
+
},
|
| 119 |
+
{
|
| 120 |
+
"section_id": "4.3.3",
|
| 121 |
+
"parent_section_id": "4.3",
|
| 122 |
+
"section_name": "4.3.3 Comparison of SAM in different versions with different prompts",
|
| 123 |
+
"text": "In our study, we evaluated three models from the SAM series, each with different parameter counts: ViT-B (91M parameters), ViT-L (308M), and ViT-H (636M).\nIn this experiment, we utilize the box prompt for the SAM model, as the results presented in Table 2 ###reference_### indicate superior performance of the box prompt compared to the point prompt. Comparative experiments to evaluate their performance were conducted, with the results detailed in Table 3 ###reference_###, where the best results are highlighted in bold.\nThe performance of these results is also illustrated in Figure 5 ###reference_###.\nThe experimental findings suggest a trade-off between resource consumption and performance efficiency, indicating that the choice of model should align with the capabilities of the available experimental equipment. Based on these considerations, we selected ViT-H SAM for our final segmentation tasks.\nThe observations from Figure 5 ###reference_### highlight that holes are present in all versions of SAM predictions, underscoring the importance of post-processing when utilizing SAM for accurate results.\n###figure_5### ###table_3###"
|
| 124 |
+
},
|
| 125 |
+
{
|
| 126 |
+
"section_id": "4.3.4",
|
| 127 |
+
"parent_section_id": "4.3",
|
| 128 |
+
"section_name": "4.3.4 Compare with other deep learning models",
|
| 129 |
+
"text": "In our study, we evaluated the performance of our proposed framework by comparing it with three methods on two publicly available datasets. The methods used for comparison are UNet [25 ###reference_b25###], Deeplabv3+ [46 ###reference_b46###], and AffinityNet [51 ###reference_b51###]. UNet and Deeplabv3+ are fully supervised networks, whereas AffinityNet operates under a weakly supervised paradigm with only image-level labeling. For an equitable comparison, all networks were retrained on the BUSI dataset and Dataset B. Quantitative comparison results are presented in Table 4 ###reference_###.\nThe experimental results reveal that the two supervised learning methods (UNet and Deeplabv3+) exhibit comparable performance in both Dice score across the BUSI and Dataset B datasets, as indicated by the overlap of their 95% confidence intervals.\nCompared to these supervised learning methods, our model demonstrates slightly lower performance, with a Dice score that is 4.01% points lower on the BUSI dataset and 8.67% points lower on Dataset B. Despite these differences, the overlapping confidence intervals suggest that these variations are not statistically significant [68 ###reference_b68###, 69 ###reference_b69###].\nIn terms of HD95, our model surpasses UNet by 2.61 on the BUSI dataset and shows a reduction of 7.95 compared to Deeplabv3+, indicating superior precision in delineating lesion boundaries. On Dataset B, the HD95 of our model is similar to those of the supervised learning models, being only 3.75 higher than Deeplabv3+.\nCompared to the weakly supervised method, AffinityNet, our method demonstrates significant improvements across all evaluation metrics. The inferior performance of AffinityNet underscores both the complexity of this task and the advantage of our method\u2019s comprehensive strategy for utilizing information, which includes both lesion contours and semantic details from breast images. These outcomes collectively underscore the effectiveness and precision of our proposed framework in medical image segmentation tasks.\n###table_4### Figure 6 ###reference_### offers a visual comparison of segmentation results on the BUSI dataset from our method against UNet, Deeplabv3+, and AffinityNet, using representative cases from the test set. This comparison distinctly highlights the sensitivity of our method to lesion contours. As observed in Figure 6 ###reference_###, our proposed framework excels in segmenting contours and smaller lesions, outperforming both UNet and Deeplabv3+ in these aspects. The figure further reveals that the fully supervised methods, UNet and Deeplabv3+, tend to be influenced by background noise during segmentation, leading to the extraction of some non-lesion tissues. Our method effectively addresses this issue by removing incorrect segmentations through our refined process of filtering suspected lesions. In comparison with AffinityNet, which operates under a similar weakly supervised framework, our method demonstrates superior accuracy in producing segmentation masks. This indicates the effective extraction and utilization of semantic features from breast images in our approach. Overall, our proposed method achieves competitive results, particularly notable for its lower reliance on extensive annotation, compared to traditional fully supervised segmentation techniques.\n###figure_6###"
|
| 130 |
+
},
|
| 131 |
+
{
|
| 132 |
+
"section_id": "5",
|
| 133 |
+
"parent_section_id": null,
|
| 134 |
+
"section_name": "Discussion",
|
| 135 |
+
"text": "Our model comprises four different modules: morphological feature segmentation, CAM-guided localization, feature fusion, and final SAM-optimized segmentation. Optimal results necessitate coordinated functioning between these modules, which presented a challenge in our study. Nonetheless, our objective is to maximize the utilization of medical prior knowledge to minimize the reliance on detailed pixel-level annotation. The ablation experiment validates the essential role of each module and provides insights into their individual performances, such as using only morphology and CAM, or the efficiency of a lightweight SAM. This flexibility enables researchers to tailor module combinations to suit specific needs and contexts.\nIn the CAM-guided localization module, LayerCAM effectively utilizes deep learning to correlate key image features with potential disease areas, yielding valuable semantic information about lesions. Additionally, it can transform the class activation map into a binary image using a specific threshold. This thresholding process facilitates the initial identification of lesion location and size without requiring any extra annotations.\nIn terms of operational efficiency, our proposed segmentation framework is typically faster to train due to the simpler and lighter-weight model structure design using only category-level labels without the need to deal with detailed pixel-level annotations as in end-to-end supervised learning.\nThis study aims to develop an effective primary screening method for breast cancer identification, potentially reducing misdiagnosis and overtreatment, particularly in resource-constrained environments. Our focus was solely on benign tumors in mammography for experimental data. Although the performance of our proposed method on the BUSI dataset differs by only 4% in Dice score compared to supervised learning, the considerable overlap in the performance confidence intervals of both models suggests that the difference might not be statistically significant [68 ###reference_b68###, 69 ###reference_b69###]. The promising performance of our approach paves the way for incorporating a broader spectrum of data types and diseases in future research."
|
| 136 |
+
},
|
| 137 |
+
{
|
| 138 |
+
"section_id": "6",
|
| 139 |
+
"parent_section_id": null,
|
| 140 |
+
"section_name": "Conclusion",
|
| 141 |
+
"text": "In this study, we introduce a novel, morphology-enhanced CAM-guided SAM framework for weakly supervised segmentation of breast lesions from ultrasound images. Our methodology, evaluated using the BUSI and Dataset B public dataset, effectively segments lesions with image-level labeling. The framework capitalizes on a priori knowledge of breast lesion morphology for contour extraction and incorporates semantic feature extraction and lesion localization using a CAM-based approach. We explored various class activation mapping techniques, ultimately integrating LayerCAM for highlighting lesion regions.\nLeveraging the strengths of both segmentation methods, we fuse the extracted information for more accurate and smoother segmentation. The SAM model serves as a powerful segmentation enhancement tool, refining these synthesis results. A final post-processing step is applied for further enhancement.\nOur approach demonstrates notable effectiveness, achieving a Dice score of 74.39%, and a 95th percentile Hausdorff Distance (HD95) of 24.27 on the BUSI dataset. These results not only affirm the validity and superiority of our method but also show its competitive edge over fully supervised network like Deeplabv3+ in boundary segmentation accuracy, while significantly outperforming weakly supervised networks that rely solely on image-level labels.\nIn the future research, we aim to expand this framework\u2019s application to lesion segmentation in other medical imaging datasets, further advancing the field of medical image analysis."
|
| 142 |
+
}
|
| 143 |
+
],
|
| 144 |
+
"appendix": [],
|
| 145 |
+
"tables": {
|
| 146 |
+
"1": {
|
| 147 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Performance comparison of CAM methods under different thresholds on the BUSI dataset. The models that achieved the best performance are highlighted in bold.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T1.2.2\">\n<tr class=\"ltx_tr\" id=\"S4.T1.2.2.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.2.2.2.3\">Threshold</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T1.2.2.2.4\">Methods</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.1.1\">Dice()</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T1.2.2.2.5\">HD95</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T1.2.2.2.2\">IoU()</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.2.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.2.2.3.1\" rowspan=\"5\"><span class=\"ltx_text\" id=\"S4.T1.2.2.3.1.1\">180</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T1.2.2.3.2\">Grad-CAM</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T1.2.2.3.3\">35.60 [30.58,40.58]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T1.2.2.3.4\">50.65 [42.78,59.96]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T1.2.2.3.5\">23.86 [20.10,27.71]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.2.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.4.1\">AblationCAM</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.4.2\">36.69 [31.61,41.70]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.4.3\">49.37 [41.88,58.37]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.4.4\">24.89 [20.95,28.92]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.2.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.5.1\">LayerCAM</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.5.2\">36.32 [31.15,41.78]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.5.3\">56.78 [48.01,66.74]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.5.4\">24.89 [20.67,29.39]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.2.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.6.1\">EigenGradCAM</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.6.2\">36.91 [31.16,42.71]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.6.3\">62.86 [48.88,78.41]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.6.4\">25.73 [21.33,30.31]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.2.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.7.1\">Grad-CAM++</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.7.2\">35.70 [30.69,40.98]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.7.3\">54.13 [46.50,62.49]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.7.4\">24.24 [20.19,28.60]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.2.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.2.2.8.1\" rowspan=\"5\"><span class=\"ltx_text\" id=\"S4.T1.2.2.8.1.1\">190</span></td>\n<td class=\"ltx_td ltx_align_left 
ltx_border_r ltx_border_t\" id=\"S4.T1.2.2.8.2\">Grad-CAM</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T1.2.2.8.3\">35.98 [31.09,40.73]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T1.2.2.8.4\">49.20 [41.37,58.41]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T1.2.2.8.5\">23.99 [20.33,27.70]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.2.9\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.9.1\">AblationCAM</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.9.2\">37.39 [32.46,42.28]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.9.3\">47.67 [40.33,56.32]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.9.4\">25.27 [21.50,29.14]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.2.10\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.10.1\">LayerCAM</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.10.2\">37.86 [32.72,43.17]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.10.3\">52.43 [44.01,61.88]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.10.4\">25.93 [21.79,30.27]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.2.11\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.11.1\">EigenGradCAM</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.11.2\">37.81 [32.00,43.56]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.11.3\">61.87 [47.77,77.32]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.11.4\">26.41 [22.00,30.98]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.2.12\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.12.1\">Grad-CAM++</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.12.2\">37.24 [32.20,42.46]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.12.3\">50.77 [43.24,59.22]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.12.4\">25.35 [21.32,29.59]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.2.13\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.2.2.13.1\" rowspan=\"5\"><span class=\"ltx_text\" id=\"S4.T1.2.2.13.1.1\">200</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T1.2.2.13.2\">Grad-CAM</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T1.2.2.13.3\">35.78 [30.93,40.53]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T1.2.2.13.4\">47.05 [39.20,56.19]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T1.2.2.13.5\">23.81 [20.21,27.45]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.2.14\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.14.1\">AblationCAM</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.14.2\">37.72 [32.85,42.57]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.14.3\">46.32 [38.87,54.92]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.14.4\">25.41 [21.72,29.22]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.2.15\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.15.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.2.15.1.1\">LayerCAM</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.15.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.2.15.2.1\">40.40 
[35.28,45.57]</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.15.3\">45.60 [38.17,53.88]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.15.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.2.15.4.1\">27.78 [23.66,32.01]</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.2.16\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.16.1\">EigenGradCAM</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.16.2\">38.35 [32.48,44.22]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.16.3\">61.18 [46.94,76.70]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.16.4\">26.86 [22.33,31.48]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.2.17\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.17.1\">Grad-CAM++</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.17.2\">38.78 [33.69,43.82]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.17.3\">47.36 [39.97,55.59]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.17.4\">26.46 [22.48,30.56]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.2.18\">\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.2.2.18.1\" rowspan=\"4\"><span class=\"ltx_text\" id=\"S4.T1.2.2.18.1.1\">210</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T1.2.2.18.2\">AblationCAM</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T1.2.2.18.3\">36.86 [31.85,41.70]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T1.2.2.18.4\">45.15 [37.52,53.91]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T1.2.2.18.5\">24.75 [20.97,28.54]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.2.19\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.19.1\">LayerCAM</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.19.2\">40.34 [35.29,45.28]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.19.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.2.19.3.1\">45.09 [37.50,53.77]</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.19.4\">27.61 [23.73,31.48]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.2.20\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.20.1\">EigenGradCAM</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.20.2\">37.65 [31.86,43.53]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.20.3\">60.78 [46.35,76.39]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.2.2.20.4\">26.25 [21.74,30.81]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.2.21\">\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r\" id=\"S4.T1.2.2.21.1\">Grad-CAM++</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r\" id=\"S4.T1.2.2.21.2\">39.50 [34.44,44.53]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r\" id=\"S4.T1.2.2.21.3\">46.16 [38.25,55.19]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r\" id=\"S4.T1.2.2.21.4\">26.99 [23.05,31.01]</td>\n</tr>\n</table>\n</figure>",
|
| 148 |
+
"capture": "Table 1: Performance comparison of CAM methods under different thresholds on the BUSI dataset. The models that achieved the best performance are highlighted in bold."
|
| 149 |
+
},
|
| 150 |
+
"2": {
|
| 151 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Ablation results of segmented breast lesions on the BUSI dataset. MorSeg denotes morphological-based segmentation, while LCAM denotes the layerCAM module. SAM-p and SAM-b refer to the SAM module based on point and box prompts, respectively.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T2.1\">\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" colspan=\"4\" id=\"S4.T2.1.1.1\">Modules</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T2.1.1.2.1\">Dice(%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.3\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T2.1.1.3.1\">HD95</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.4\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T2.1.1.4.1\">IoU(%)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.1.2.1\">MorSeg</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T2.1.2.2\">LCAM</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T2.1.2.3\">SAM-p</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T2.1.2.4\">SAM-b</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.1.3.1\">\u2713</td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S4.T2.1.3.2\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S4.T2.1.3.3\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S4.T2.1.3.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.3.5\">45.15 [39.23,51.02]</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.3.6\">104.70 [93.72,115.36]</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.3.7\">32.95 [27.91,38.28]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S4.T2.1.4.1\">\u2713</td>\n<td class=\"ltx_td ltx_border_r\" id=\"S4.T2.1.4.2\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T2.1.4.3\">\u2713</td>\n<td class=\"ltx_td ltx_border_r\" id=\"S4.T2.1.4.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.4.5\">45.66 [36.65,55.05]</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.4.6\">82.98 [67.38,98.48]</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.4.7\">39.29 [30.80,48.05]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S4.T2.1.5.1\">\u2713</td>\n<td class=\"ltx_td ltx_border_r\" id=\"S4.T2.1.5.2\"></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S4.T2.1.5.3\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T2.1.5.4\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.5.5\">45.49 [35.74,55.24]</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.5.6\">64.45 [51.17,78.14]</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.5.7\">40.25 [31.27,49.27]</td>\n</tr>\n<tr 
class=\"ltx_tr\" id=\"S4.T2.1.6\">\n<td class=\"ltx_td ltx_border_l ltx_border_r\" id=\"S4.T2.1.6.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T2.1.6.2\">\u2713</td>\n<td class=\"ltx_td ltx_border_r\" id=\"S4.T2.1.6.3\"></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S4.T2.1.6.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.6.5\">40.40 [35.28,45.57]</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.6.6\">45.60 [38.17,53.88]</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.6.7\">27.78 [23.66,32.01]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.7\">\n<td class=\"ltx_td ltx_border_l ltx_border_r\" id=\"S4.T2.1.7.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T2.1.7.2\">\u2713</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T2.1.7.3\">\u2713</td>\n<td class=\"ltx_td ltx_border_r\" id=\"S4.T2.1.7.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.7.5\">22.21 [16.12,28.97]</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.7.6\">124.76 [112.39,136.40]</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.7.7\">16.55 [11.10,22.69]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.8\">\n<td class=\"ltx_td ltx_border_l ltx_border_r\" id=\"S4.T2.1.8.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T2.1.8.2\">\u2713</td>\n<td class=\"ltx_td ltx_border_r\" id=\"S4.T2.1.8.3\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T2.1.8.4\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.8.5\">58.22 [50.54,65.80]</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.8.6\">35.74 [27.22,45.12]</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.8.7\">48.57 [41.19,55.91]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.9\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S4.T2.1.9.1\">\u2713</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T2.1.9.2\">\u2713</td>\n<td class=\"ltx_td ltx_border_r\" id=\"S4.T2.1.9.3\"></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S4.T2.1.9.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.9.5\">67.04 [60.31,73.20]</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.9.6\">29.33 [21.79,37.75]</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.9.7\">56.14 [49.77,62.21]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.10\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S4.T2.1.10.1\">\u2713</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T2.1.10.2\">\u2713</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T2.1.10.3\">\u2713</td>\n<td class=\"ltx_td ltx_border_r\" id=\"S4.T2.1.10.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.10.5\">70.45 [63.04,77.24]</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.10.6\">36.30 [25.58,48.14]</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.10.7\">61.73 [54.79,68.69]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.11\">\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_l ltx_border_r\" id=\"S4.T2.1.11.1\">\u2713</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r\" id=\"S4.T2.1.11.2\">\u2713</td>\n<td class=\"ltx_td ltx_border_b ltx_border_r\" id=\"S4.T2.1.11.3\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r\" 
id=\"S4.T2.1.11.4\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T2.1.11.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.11.5.1\">74.10 [66.84,80.68]</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T2.1.11.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.11.6.1\">25.63 [18.13,34.25]</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T2.1.11.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.11.7.1\">65.82 [58.71,72.27]</span></td>\n</tr>\n</table>\n</figure>",
|
| 152 |
+
"capture": "Table 2: Ablation results of segmented breast lesions on the BUSI dataset. MorSeg denotes morphological-based segmentation, while LCAM denotes the layerCAM module. SAM-p and SAM-b refer to the SAM module based on point and box prompts, respectively."
|
| 153 |
+
},
|
| 154 |
+
"3": {
|
| 155 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Comparison of segmentation performance of different versions of the SAMs based on box prompt on the BUSI dataset. AVG times indicates the average time to process a BUS image.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T3.2.2\">\n<tr class=\"ltx_tr\" id=\"S4.T3.2.2.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.2.2.2.3\">Methods</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.1.1.1.1\">Dice()</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.2.2.2.4\">HD95</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.2.2.2.2\">IoU()</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.2.2.2.5\">AVG Time(s)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.2.2.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.2.2.3.1\">ViT-B SAM</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.2.2.3.2\">71.04 [63.76,77.63]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.2.2.3.3\">26.66 [19.03,35.36]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.2.2.3.4\">62.10 [54.93,68.65]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.2.2.3.5\">0.1470</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.2.2.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S4.T3.2.2.4.1\">ViT-L SAM</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T3.2.2.4.2\">73.36 [66.24,79.84]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T3.2.2.4.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.2.4.3.1\">24.70 [17.45,33.10]</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T3.2.2.4.4\">64.52 [57.76,70.75]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T3.2.2.4.5\">0.2934</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.2.2.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_l ltx_border_r\" id=\"S4.T3.2.2.5.1\">ViT-H SAM</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r\" id=\"S4.T3.2.2.5.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.2.5.2.1\">74.10 [66.84,80.68]</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r\" id=\"S4.T3.2.2.5.3\">25.63 [18.13,34.25]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r\" id=\"S4.T3.2.2.5.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.2.5.4.1\">65.82 [58.71,72.27]</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r\" id=\"S4.T3.2.2.5.5\">0.4817</td>\n</tr>\n</table>\n</figure>",
|
| 156 |
+
"capture": "Table 3: Comparison of segmentation performance of different versions of the SAMs based on box prompt on the BUSI dataset. AVG times indicates the average time to process a BUS image."
|
| 157 |
+
},
|
| 158 |
+
"4": {
|
| 159 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T4\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 4: </span>Performance comparison with some breast lesion segmentation methods on two public datasets. SL means supervised learning and WSL means weakly supervised learning. </figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T4.2.2\">\n<tr class=\"ltx_tr\" id=\"S4.T4.2.2.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T4.2.2.2.3\">Datasets</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T4.2.2.2.4\">Models</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T4.2.2.2.5\">Training</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T4.1.1.1.1\">Dice()</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T4.2.2.2.6\">HD95</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T4.2.2.2.2\">IoU()</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.2.2.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T4.2.2.3.1\" rowspan=\"4\"><span class=\"ltx_text\" id=\"S4.T4.2.2.3.1.1\">BUSI</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T4.2.2.3.2\">UNet</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T4.2.2.3.3\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T4.2.2.3.3.1\">SL</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T4.2.2.3.4\">78.31 [71.77,84.28]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T4.2.2.3.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.2.2.3.5.1\">21.66 [11.05,35.54]</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T4.2.2.3.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.2.2.3.6.1\">70.51 [63.61,76.77]</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.2.2.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T4.2.2.4.1\">Deeplabv3+</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T4.2.2.4.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.2.2.4.2.1\">78.40 [72.78,83.21]</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T4.2.2.4.3\">32.22 [17.79,49.91]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T4.2.2.4.4\">68.49 [63.10,73.23]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.2.2.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T4.2.2.5.1\">AffinityNet</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T4.2.2.5.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T4.2.2.5.2.1\">WSL</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T4.2.2.5.3\">16.14 [10.98,21.73]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T4.2.2.5.4\">78.15 [68.08,89.41]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T4.2.2.5.5\">10.98 [7.03,15.35]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.2.2.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T4.2.2.6.1\">Proposed model</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T4.2.2.6.2\">74.39 [67.09,81.02]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T4.2.2.6.3\">24.27 [16.67,32.85]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T4.2.2.6.4\">66.27 
[59.09,72.91]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.2.2.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T4.2.2.7.1\" rowspan=\"4\"><span class=\"ltx_text\" id=\"S4.T4.2.2.7.1.1\">Dataset B</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T4.2.2.7.2\">UNet</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T4.2.2.7.3\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T4.2.2.7.3.1\">SL</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T4.2.2.7.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.2.2.7.4.1\">82.63[76.86,87.86]</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T4.2.2.7.5\">21.73[10.65,35.36]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T4.2.2.7.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.2.2.7.6.1\">72.38[65.01,79.41]</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.2.2.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T4.2.2.8.1\">Deeplabv3+</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T4.2.2.8.2\">78.91[70.26,86.39]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T4.2.2.8.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.2.2.8.3.1\">18.72[6.69,34.28]</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T4.2.2.8.4\">68.90[59.06,77.82]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.2.2.9\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T4.2.2.9.1\">AffinityNet</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T4.2.2.9.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T4.2.2.9.2.1\">WSL</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T4.2.2.9.3\">32.92[20.44,45.47]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T4.2.2.9.4\">112.73[87.55,137.34]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T4.2.2.9.5\">23.88[14.57,33.32]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.2.2.10\">\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r\" id=\"S4.T4.2.2.10.1\">Proposed model</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r\" id=\"S4.T4.2.2.10.2\">73.96[60.65,85.06]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r\" id=\"S4.T4.2.2.10.3\">22.47[9.70,39.05]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r\" id=\"S4.T4.2.2.10.4\">65.76[53.14,76.87]</td>\n</tr>\n</table>\n</figure>",
|
| 160 |
+
"capture": "Table 4: Performance comparison with some breast lesion segmentation methods on two public datasets. SL means supervised learning and WSL means weakly supervised learning. "
|
| 161 |
+
}
|
| 162 |
+
},
|
| 163 |
+
"image_paths": {
|
| 164 |
+
"1": {
|
| 165 |
+
"figure_path": "2311.11176v2_figure_1.png",
|
| 166 |
+
"caption": "Figure 1: The proposed framework of our model consists of four key stages: in stage 1, we perform a preliminary image segmentation focusing on morphological features. This is followed by the generation of a heat map for lesion localization using a CAM-based classification network, as shown in stage 2. Subsequently, the features from the previous stages were synthesized as shown in stage 3 and generated as a bounding box prompt in stage 4 for detailed segmentation using the SAM.",
|
| 167 |
+
"url": "http://arxiv.org/html/2311.11176v2/"
|
| 168 |
+
},
|
| 169 |
+
"2": {
|
| 170 |
+
"figure_path": "2311.11176v2_figure_2.png",
|
| 171 |
+
"caption": "Figure 2: The feature fusion and lesion synthesis process. This step extracts the contours of each lesion in the traditional morphology-based segmentation results and determines the lesion with the largest area of intersection with the results of the CAM-guided lesion localization module as the synthesized result.",
|
| 172 |
+
"url": "http://arxiv.org/html/2311.11176v2/"
|
| 173 |
+
},
|
| 174 |
+
"3": {
|
| 175 |
+
"figure_path": "2311.11176v2_figure_3.png",
|
| 176 |
+
"caption": "Figure 3: Visualization of different Class Activation Mapping (CAM) Methods at a threshold value of 200 on the BUSI dataset.",
|
| 177 |
+
"url": "http://arxiv.org/html/2311.11176v2/"
|
| 178 |
+
},
|
| 179 |
+
"4": {
|
| 180 |
+
"figure_path": "2311.11176v2_figure_4.png",
|
| 181 |
+
"caption": "Figure 4: Visualization results from ablation study on the BUSI dataset.",
|
| 182 |
+
"url": "http://arxiv.org/html/2311.11176v2/"
|
| 183 |
+
},
|
| 184 |
+
"5": {
|
| 185 |
+
"figure_path": "2311.11176v2_figure_5.png",
|
| 186 |
+
"caption": "Figure 5: Visualization results of different versions of SAM segmentation on the BUSI dataset.",
|
| 187 |
+
"url": "http://arxiv.org/html/2311.11176v2/"
|
| 188 |
+
},
|
| 189 |
+
"6": {
|
| 190 |
+
"figure_path": "2311.11176v2_figure_6.png",
|
| 191 |
+
"caption": "Figure 6: Comparative experiment results visualization on the BUSI dataset. Red contour lines depict the lesion edges as delineated in the ground truth labels.",
|
| 192 |
+
"url": "http://arxiv.org/html/2311.11176v2/"
|
| 193 |
+
}
|
| 194 |
+
},
|
| 195 |
+
"validation": true,
|
| 196 |
+
"references": [],
|
| 197 |
+
"url": "http://arxiv.org/html/2311.11176v2"
|
| 198 |
+
}
|
20240522/2312.14474v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240522/2312.16465v4.json
ADDED
|
@@ -0,0 +1,226 @@
| 1 |
+
{
|
| 2 |
+
"title": "Multi-Contact Whole-Body Force Control for Position-Controlled Robots",
|
| 3 |
+
"abstract": "Many humanoid and multi-legged robots are controlled in positions rather than in torques, which prevents direct control of contact forces, and hampers their ability to create multiple contacts to enhance their balance, such as placing a hand on a wall or a handrail. This paper introduces the SEIKO (Sequential Equilibrium Inverse Kinematic Optimization) pipeline, and proposes a unified formulation that exploits an explicit model of flexibility to indirectly control contact forces on traditional position-controlled robots. SEIKO formulates whole-body retargeting from Cartesian commands and admittance control using two quadratic programs solved in real time. Our pipeline is validated with experiments on the real, full-scale humanoid robot Talos in various multi-contact scenarios, including pushing tasks, far-reaching tasks, stair climbing, and stepping on sloped surfaces. Code and videos are available at: https://hucebot.github.io/seiko_controller_website/",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Humans often use additional contact points to enhance their stability, for instance, by using a handrail or a wall when walking, or to extend their reach, for instance, when grasping a distant object. While humanoid robots would benefit from a similar strategy, current robots minimize the number of contacts and use them only for feet and required interactions with the environment, such as pushing a button [1 ###reference_b1###].\nThe primary challenge in controlling multi-contact lies in the redundancy of force distribution resulting from closed kinematic chains [2 ###reference_b2###]. For a given posture with several contacts, there are infinite ways to distribute force among them. For instance, a humanoid with both hands on a table can apply more or less force to the hands without any visible change in joint position.\nTo regulate forces, most prior studies on multi-contact whole-body control rely on torque-controlled robots with inverse dynamics controllers [3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###]. Unfortunately, inverse dynamics is highly sensitive to model and calibration errors, and identifying models for humanoids is particularly challenging [6 ###reference_b6###]. Perfect identification of environment\u2019s properties is generally not possible. This is why most deployed robots use position control, which is simpler and more reliable [7 ###reference_b7###], but it lacks direct control authority over contact forces.\nPrior work on position-controlled robots [8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###] has often regulated contact forces indirectly using various forms of admittance schemes applied independently to each effector. While effective in many scenarios, this strategy may lack robustness in challenging situations near physical limits or with significant model errors. This is due to its heuristic nature, which lacks theoretical grounding and fails to consider the whole-body effect of postural changes on contact forces.\nOur main idea is to exploit the robot\u2019s non-rigidity to explicitly model the relationship between joint position commands and contact forces. Flexibility arises from either non-observable mechanical structural bending or internal impedance of non-ideal joint position control. We present a control pipeline (Fig. 1 ###reference_###) designed to regulate contact forces on position-controlled robots. Our approach offers a novel unified whole-body formulation using optimization-based Quadratic Programming (QP) to leverage fast QP solvers.\n###figure_1### ###figure_2### ###figure_3### ###figure_4### We conducted experiments on the Talos humanoid robot [12 ###reference_b12###], equipped with powerful arms but known for significant hip mechanical flexibility [13 ###reference_b13###]. Our control pipeline is compatible with commands from autonomous planners and teleoperation, with a focus on the latter in this study. Well-suited for teleoperation, our method is robust against operator errors related to awareness and embodiment challenges. 
Unlike most existing methods, our approach enables motions close to feasibility boundaries (in terms of kinematic, balance, and torque limits), allowing full exploitation of the capabilities of the hardware.\nOur work, named SEIKO for Sequential Equilibrium Inverse Kinematic Optimization, provides the following contributions:\nA Sequential QP (SQP) formulation that computes posture deflection and joint command correction, accounting for joint flexibility in multi-contact quasi-static conditions.\nA multi-contact retargeting and control architecture for position-controlled robots with contact-switch and pushing capabilities, designed to be robust against model errors.\nValidation on the Talos humanoid robot hardware with several multi-contact tasks, including the validation of our prior retargeting work, which was previously tested only in simulation for humanoid robots."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "II Related Work",
|
| 15 |
+
"text": "Multi-contact tasks have been studied in-depth on humanoid robots with torque control [3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###], where contact forces are directly regulated using whole-body inverse dynamic controllers. But torque control relies on an accurate model of the robot\u2019s dynamic, which is challenging to identify [6 ###reference_b6###, 13 ###reference_b13###] and lacks robustness. Joint impedance control [14 ###reference_b14###, 15 ###reference_b15###] offers a more robust alternative to torque control, but still requires modeling the actuators to be able to specify torque feedforward references.\nAlthough [16 ###reference_b16###] demonstrated ladder climbing on a position-controlled robot without regulating contact forces, it is essential to regulate these forces. Doing so enables pushing tasks, smooth contact transitions, and enhances robustness for motions near system limits, where stability margins are reduced. Most studies using position-controlled robots [9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###, 17 ###reference_b17###] regulate contact forces indirectly through methods named \u201ceffector admittance\u201d, \u201cfoot force difference control\u201d, or \u201cdamping control\u201d all based on the same principle introduced in [8 ###reference_b8###]: an admittance feedback law is applied to each effector to adjust its Cartesian pose reference. For example, to reduce the force measured on the hand, these approaches will retract its desired position away from the contact surface. However, because it is expected that the hand remains in contact, these approaches implicitly rely on flexibility without explicitly considering it. In contrast, SEIKO Controller models the whole-body flexibility, providing a more grounded formulation. This also enables the accounting of postural changes on the contact forces, which the effector admittance scheme cannot do.\n[18 ###reference_b18###] proposed the idea of modeling torques produced by position-controlled actuators, which was further studied in [19 ###reference_b19###] and applied to multi-contact in [20 ###reference_b20###, 21 ###reference_b21###]. Similar to our approach, they differentiate the quasi-static equilibrium but their method uses pseudo-inverses which fails at considering constraints. Furthermore, [21 ###reference_b21###] also uses elastic joint models, but their method solves a cascade of several QP problems. Their purely reactive control architecture lacks feedforward terms and retargeted references, making it more sensitive to noise and violations of the quasi-static assumption. In contrast, our method is unified, allows faster motions, and does not require actual joint positions to be measured, which accommodates robots with mechanical flexibility like the Talos robot.\nThis work builds upon our prior work [22 ###reference_b22###, 23 ###reference_b23###, 24 ###reference_b24###] which devised a retargeting framework for multi-contact tasks on simulated humanoids and hardware bimanual manipulators with an highlight on enforcing feasibility. Both this work and [22 ###reference_b22###, 23 ###reference_b23###, 24 ###reference_b24###] also target teleoperation applications. While many studies have investigated the teleoperation of complex robots with floating bases [25 ###reference_b25###], fewer have explicitly addressed multi-contact scenarios [26 ###reference_b26###, 27 ###reference_b27###]. 
In contrast to these, our work addresses the regulation of contact forces and demonstrates both contact switch and pushing tasks."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "III Problem Definition",
|
| 21 |
+
"text": "Quasi-static robot configurations are defined by postural positions, joint torques, and contact wrenches . For position-controlled robots, control inputs only consist of joint position commands . The whole-body retargeting stage (illustrated in Fig. 1 ###reference_### and proposed in previous work [22 ###reference_b22###]) provides a stream of desired quasi-static configurations expected to be feasible.\nAchieving desired contact wrenches is essential for multi-contact tasks, but contact wrenches can not be directly commanded on position-controlled robots. Our approach aims to indirectly control contact wrenches through joint position commands optimized to take into account the flexibility of the robot. Table I ###reference_### lists the notations and quantities used throughout this letter.\nAddressing the problem involves overcoming the following challenges:\nMulti-contact tasks exhibit redundancy in both kinematics and contact wrench distribution, akin to the Grasp matrix\u2019s nullspace in manipulation [28 ###reference_b28###].\nWhile adding contacts is generally feasible, removing contacts challenge the robot\u2019s balance and can be infeasible.\nTransitioning between contact states (enabled or disabled) involves discrete changes in problem formulation. Ensuring continuity in contact wrenches (from non-zero to zero and vice versa) and posture is essential for smooth transitions.\nTo ensure safety, physical limits must be enforced such as balance, joint kinematics, actuator torque limits, and contact stability conditions prohibiting pulling, sliding, tilting.\nFor application to hardware, the controller must be robust to model errors and violations of simplifying assumptions."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "4",
|
| 25 |
+
"parent_section_id": null,
|
| 26 |
+
"section_name": "IV Method",
|
| 27 |
+
"text": "###figure_5###"
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "4.1",
|
| 31 |
+
"parent_section_id": "4",
|
| 32 |
+
"section_name": "IV-A Main Idea",
|
| 33 |
+
"text": "According to rigid body theory in multi-contact [2 ###reference_b2###, 28 ###reference_b28###], the contact wrenches of an ideal infinitely stiff mechanical system are non-unique and lie in a redundant nullspace. Real systems, however, always exhibit inherent flexibility: the structure slightly bends, and both the deflected posture and contact wrenches uniquely evolve towards the configuration minimizing overall elastic energy. Therefore, given constant joint position commands, the mapping that takes into account flexibility is unique and well-defined. Our approach models and predicts this whole-body non-linear deflection effect, utilizing it for the control of contact wrenches.\nSpecifically, we linearize and compute derivatives of the deflection effect to consider how contact wrenches change with variations in joint position commands through the Jacobian matrix . Instead of directly inverting this Jacobian matrix, we formulate the control problem as a Quadratic Programming (QP) which solves for position command changes and optimizes multiple objectives, similar to task space inverse dynamic approaches. We explicitly model the system\u2019s flexibility by treating each robot joint as a spring, encompassing both internal actuator impedance and mechanical flexibilities."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "4.2",
|
| 37 |
+
"parent_section_id": "4",
|
| 38 |
+
"section_name": "IV-B Overall Architecture",
|
| 39 |
+
"text": "Our proposed control architecture depicted in Fig. 2 ###reference_### consists of a two-stage pipeline. Firstly, SEIKO Retargeting, previously introduced in [22 ###reference_b22###], optimizes a desired whole-body configuration within feasibility limits. Subsequently, our novel SEIKO Controller computes corrected joint position commands for tracking . These joint commands are then sent to the robot\u2019s low-level servomotors and tracked by stiff internal position controllers.\nThe controller has three goals: (i) achieve the desired contact wrenches , (ii) avoid violations of joint torque limits , and (iii) enhance robustness against model inaccuracies. The Retargeting step is crucial as it enforces feasibility limits a priori, and generates a desired configuration to be tracked. The controller indeed exhibits reduced stability when tracking a highly infeasible non-retargeted reference.\nThe set of effectors that may come into contact with the environment is pre-defined. Each effector\u2019s state is either: \u201cenabled\u201d, standing for fixed and in contact transmitting forces and torques to the environment, or \u201cdisabled\u201d, indicating that it is free to move and is commanded by the operator. Our formulation handles both plane contacts (6 DoFs, e.g., feet) and point contacts (3 DoFs, e.g., hands with ball shape). The full details of contact formulation are available in the supplementary material of [22 ###reference_b22###]. Other types of contacts can also be easily implemented, such as full grasp contact for hand grippers or even line contact on the edge of feet.\nAn external planner or human operator provides commands as input to the Retargeting stage: (i) Cartesian pose or velocity commands for each free (disabled) effector, (ii) a Boolean signal that manually triggers the transition between contact states, and (iii) an optional \u201cpushing mode\u201d enabling explicit control of the normal force of a specific enabled contact. Our method does not plan contact sequencing, and relies on external decisions for contact stances and sequence.\nThe proposed method operates instantaneously without considering the future of unknown intention, and relies on the quasi-static assumption. The nonlinear whole-body optimizations are solved using SQP schemes with only one QP iteration per time step. This allows for quick convergence at high frequency () and responsiveness to input changes."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "4.3",
|
| 43 |
+
"parent_section_id": "4",
|
| 44 |
+
"section_name": "IV-C Equilibrium Equation and Flexibility Model",
|
| 45 |
+
"text": "Motions of mobile robots with a floating base are governed by the equation of motion in joint space [2 ###reference_b2###]. Under the quasi-static assumption, where , this equation simplifies to represent the equilibrium, i.e. system\u2019s balance, between contact wrenches, gravity effects, and applied torques:\nwhich is non-linear in . We approximate the linearization of the equilibrium equation by considering small variations of the configuration and partial derivatives:\nwhile neglecting second order terms (see Section -B ###reference_### in supplementary material for details).\nStiff position-controlled robots deviate from the rigid assumption due to inherent hardware flexibility arising from factors like Series Elastic Actuators [29 ###reference_b29###], deformations in links or transmissions [13 ###reference_b13###], impedance of non-ideal position control [18 ###reference_b18###], or the inclusion of soft damper elements within the structure [30 ###reference_b30###]. In this work, we model this flexibility as joint elastic flexibility, where the relation between joint position and generated torque is expressed as follows:\nNote that link flexibility can also be modeled in a similar manner by introducing passive joints without actuation. Its derivative is written:\nwhere is the deflected posture under joint flexibility and is the joint position command of actuators.\nThe derivative-based linear approximation of the equilibrium equation (2 ###reference_###) combined with flexibility model (4 ###reference_###) is linear w.r.t. configuration changes:\nTherefore can also be linearly expressed from and using the following row decomposition:\nwhere refer to the first 6 rows representing the floating base and the remaining joint rows."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "4.4",
|
| 49 |
+
"parent_section_id": "4",
|
| 50 |
+
"section_name": "IV-D SEIKO Retargeting",
|
| 51 |
+
"text": "This section summarizes the SEIKO Retargeting method developed in [22 ###reference_b22###, 23 ###reference_b23###]. From this previous work, Section -C ###reference_### in supplementary material provides further explanation on how balance is enforced.\nThe Retargeting preprocesses inputs for each disabled effector, which includes the commanded motion from the operator (comprising both pose and velocity ) and the admittance velocity command (see Section IV-F ###reference_###). Processing includes filtering and merging these commands:\nwhere is a reference pose that integrates velocity commands at each time step (see [23 ###reference_b23###]). allows the Cartesian pose command to be expressed relative to this reference, and not in arbitrary world frame. The filtering process incorporates a smoothing low-pass filter and bounds signal\u2019s velocity and acceleration through time-optimal bang-bang trajectory replanning [23 ###reference_b23###]. With the clamping , we also constrain separately the position and orientation of within a radius centered on to prevent the reference pose from windup when the retargeted motion is saturated by the feasibility constraints.\nAt each time step, SEIKO Retargeting solve the QP:\nThe QP solves for the configuration change (9a ###reference_1###), integrating it to update the desired configuration, e.g., . The optimization minimizes tasks weighted by manually tuned parameters for stability and desired trade-off. The cost function includes disabled effector pose targets (9b ###reference_2###), default joint position targets (9c ###reference_3###) for regularization and mitigating kinematic local minima, joint torque minimization (9d ###reference_4###) for human-like postures, contact wrench penalization (9e ###reference_5###), and decision variable regularization (9f ###reference_6###).\nEquality constraints enforce the linearized equilibrium equation (9g ###reference_7###) and ensure enabled contacts are fixed (9h ###reference_8###). Inequality constraints include joint position limits (9i ###reference_9###), joint torque limits (9j ###reference_10###), and contact stability conditions (9k ###reference_11###) considering unilaterality, friction pyramid, and center of pressure (see Section -D ###reference_### in supplementary material). Additional constraints involve limits on joint changes (9l ###reference_12###) and contact wrench changes (9m ###reference_13###).\nCompared to our prior SEIKO Retargeting work [22 ###reference_b22###], we enhanced the contact switching procedure with fewer arbitrary choices and clearer physical semantics. Details can be found in Section -E ###reference_### in supplementary material."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "4.5",
|
| 55 |
+
"parent_section_id": "4",
|
| 56 |
+
"section_name": "IV-E SEIKO Controller",
|
| 57 |
+
"text": "We assume that actual joint positions under flexibility cannot be directly measured but can be estimated from the model. Despite model errors, our approach relies on the model\u2019s derivatives direction to provide sufficient information about system evolution. The controller uses the derivative-based linear approximation of the equilibrium equation with flexibility (5 ###reference_###) to model how contact wrench distribution changes with joint command changes . This approach generalizes previously used admittance control laws such as \u201cfoot difference control\u201d [8 ###reference_b8###] which implicitly depends on flexibility without considering it.\nThe following feedback law regulates contact wrenches. It is the only feedback effect in our unified formulation that uses measured quantities and is tuned with only two parameters:\nwhere is the desired effort in the controller optimization, and acts as a feedforward term. SEIKO Controller solves the following QP at each time step::\nThe QP solves for flexible configuration changes (11a ###reference_.1###). Joint command changes are obtained from the decision variables using (7 ###reference_###) and are then obtained by integration.\nThe cost function primarily computes joint position correction and resulting posture deflection to achieve the control effort on contact wrench changes (11b ###reference_.2###). It also adjusts disabled effector poses influenced by flexibility toward Retargeting\u2019s desired poses (11c ###reference_.3###). As secondary objectives, the optimization penalizes the discrepancy between corrected and desired joint positions (11d ###reference_.4###) and regularizes changes in joint commands (11e ###reference_.5###).\nEquality constraints enforce the linearized equilibrium equation with flexibility (11f ###reference_.6###) through the first upper 6 floating base rows of decomposition (6 ###reference_###) and ensure no Cartesian motion for enabled contacts (11g ###reference_.7###). Inequality constraints ensure kinematic limits of joint position commands (11h ###reference_.8###) and restrict maximum joint torques (11i ###reference_.9###).\nJoint torque limits used as constraints are dynamically updated to prevent the integrated state from continuously increasing when the measured joint torque reaches the defined torque limit . For each joint at each time step:\nwhere are small positive margin parameters implementing a hysteresis effect to improve stability."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "4.6",
|
| 61 |
+
"parent_section_id": "4",
|
| 62 |
+
"section_name": "IV-F State Estimation and Effectors Admittance",
|
| 63 |
+
"text": "The estimated measured wrench in feedback law (10 ###reference_###) is computed using a complementary filter:\nThis filter enhances closed-loop stability by mitigating dynamical effects affecting neglected by the quasi-static assumption. It introduces a trade-off between the reactive measurement and the term estimated through the integration of the predicted change . The measured contact wrench velocity is computed using finite differences from , and then it is low-pass filtered at Hz using an exponential first-order scheme.\nWe utilize an admittance scheme to compute an additional Cartesian velocity command for disabled effectors :\nwhere the filtering applies a deadband and output clamping to both linear and angular vector norms, thereby rejecting peak forces and inertial effects during motion.\nThis effect minimizes interaction wrenches for disabled effectors, reducing collision forces during contact establishment and after contact removal, while also aiding in aligning feet with surface orientation. Implemented at input of the Retargeting level, this approach seamlessly integrates with operator command processing (8 ###reference_###)."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "5",
|
| 67 |
+
"parent_section_id": null,
|
| 68 |
+
"section_name": "Experimental Evaluation",
|
| 69 |
+
"text": ""
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "5.1",
|
| 73 |
+
"parent_section_id": "5",
|
| 74 |
+
"section_name": "Implementation Details",
|
| 75 |
+
"text": "We implemented SEIKO in C++ using RBDL (due to historical reasons) and Pinocchio [31 ###reference_b31###] rigid body libraries. More specifically, Pinocchio efficiently computes the analytical derivatives of the terms appearing in the equation (2 ###reference_###). We solve the QP problems using the QuadProg [32 ###reference_b32###] solver.\nThe entire control pipeline operates at a frequency of Hz, with joint position commands interpolated at kHz before being transmitted to the robot\u2019s actuators. The median computing times observed on the internal computer of the Talos robot are ms and ms for SEIKO Retargeting and SEIKO Controller, respectively. The maximum measured times for each were ms and ms, respectively.\nThe Talos robot, manufactured by PAL Robotics, is a humanoid robot of m height with 32 DoFs. We measured with an independent weighing scale its actual total mass to be kg, while the URDF model provided by PAL assumes a mass of kg. This discrepancy of kg can be seen by the Force-Torque sensors in the feet, which enable our controller to adapt to this model error. We changed the robot\u2019s right hand and forearm with a 3D printed part that replaced the gripper and wrist joints beyond the elbow joint. The ball-shaped hand (point contact) allows us to apply high contact forces (up to kg) on the arm during multi-contact tests. After removing the right forearm joints and excluding the head joints, our QP solver works with joints. All joints are used in position-controlled mode.\nThroughout all our evaluations, we used as flexibility model the position-control P gains imported from PAL\u2019s Gazebo simulation of the Talos robot. Unlike other work [13 ###reference_b13###] that estimate precise flexibility model, our approach does not heavily depend on model accuracy. This is because our formulation with derivatives utilizes only the approximate \u201cgradient\u201d direction for whole-body control.\nIn all subsequent experiments, an expert operator issued velocity commands for each robot\u2019s effectors using dedicated 6-DoF input devices1113Dconnexion SpaceMouse: https://3dconnexion.com/uk/spacemouse/ ###reference_###, with one device assigned to each effector. Teleoperation was conducted with a clear, direct line of sight to the robot and its surrounding environment.\n\n###figure_6### ###figure_7###"
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "5.2",
|
| 79 |
+
"parent_section_id": "5",
|
| 80 |
+
"section_name": "Wrench Distribution Tracking",
|
| 81 |
+
"text": "In Fig. 3 ###reference_###, we illustrate the role of SEIKO Controller in realizing multi-contact wrench distribution during a hand pushing task. The robot initiates a point contact with a vertical wall using its left hand. The \u201cpushing mode\u201d of SEIKO is employed to command a target trajectory for the normal force applied on the wall. Retargeting adjusts the robot\u2019s posture slightly forward to apply a large force ( N), and generates the desired contact wrenches, including opposing tangential forces on the feet in the sagittal plane.\nWe did not perform any identification or tuning of the robot flexibility model on the actual hardware, which may have significant errors. Estimating this flexibility [13 ###reference_b13###] could enhance tracking accuracy, given that we observed near-perfect tracking performance in the Gazebo simulator which uses an ideal model.\nThe attached video222Additional videos: https://hucebot.github.io/seiko_controller_website/ ###reference__website/### demonstrates additional multi-contact scenarios, such as stair climbing and stepping on sloped surfaces (Fig. 1 ###reference_###). The observed motions of the robot are deliberately slow due to the focus on quasi-static movements. We also conducted additional comparisons with the prior method effector admittance control [17 ###reference_b17###] in Section -A ###reference_### in supplementary material.\n\n###figure_8### ###figure_9###"
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "5.3",
|
| 85 |
+
"parent_section_id": "5",
|
| 86 |
+
"section_name": "Contact Switch",
|
| 87 |
+
"text": "Fig. 4 ###reference_### illustrates the foot contact switch capabilities, showcasing the Talos robot being teleoperated to lift and then re-establish contact with the right foot. Without the Controller, weight transfer from the right to the left foot and hand occurs abruptly during the foot lift. The robot did not fall as it was operating far from its feasibility boundaries. Conversely, when the controller and admittance scheme (equation (14 ###reference_###)) were enabled, the redistribution of contact wrenches became smooth and controlled. Additionally, at s, when the foot collided with the ground, the admittance control sightly lifted the foot to prevent unwanted ground forces before contact was re-established.\n###figure_10### ###figure_11###"
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "5.4",
|
| 91 |
+
"parent_section_id": "5",
|
| 92 |
+
"section_name": "Whole Body Damping",
|
| 93 |
+
"text": "Imperfect stiff position control and flexibilities lead to small oscillations when disturbed, particularly noticeable on Talos in the sagittal plane, causing forward-backward oscillations. In equation (10 ###reference_###), the controller\u2019s feedback law employs a damping term with the gain parameter . In Fig. 5 ###reference_###, we show that this feedback law applied to contact wrenches, serving as the only feedback mechanism in our formulation using sensor measurements, effectively attenuates the whole-body oscillations.\nIn double support, we applied short pushes (10-12 pushes, Fig.5 ###reference_### left) to the robot\u2019s torso and observed oscillations until energy dissipation. Using the controller, we tested various damping gain (). We recorded unfiltered angular velocity in sagittal plane with pelvis IMU\u2019s gyroscope since it does not rely on model nor unobserved joint positions. Fig. 5 ###reference_### (center) shows median and deciles confidence interval of sagittal motion velocity. To quantify damping (Fig. 5 ###reference_### right), we estimated the averaged logarithmic decrement from oscillation peaks (), reflecting damping of oscillation amplitudes and linked to the damping ratio for under-damped systems.\nIn following experiments, the damping gain is set to , as higher values tended to be unstable near feasibility boundaries where model errors had a more pronounced effect."
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "5.5",
|
| 97 |
+
"parent_section_id": "5",
|
| 98 |
+
"section_name": "Far Reaching with Model Errors",
|
| 99 |
+
"text": "Fig. 6 ###reference_### illustrates the capability of our approach to perform challenging far-reaching tasks near feasibility limits, even in the presence of large model errors. We teleoperated the right hand of the Talos robot for a forward-reaching motion as far as allowed by the controller, and added a kg load during operation on the hand to induce mass model errors. The robot remained stable thanks to the tracking of foot contact wrenches and adaptation of the whole-body posture. Additionally, the Controller through equation (12 ###reference_###) prevents excessive violation of joint torques, with a limit ratio set to .\n###figure_12### ###figure_13### ###figure_14### ###figure_15### ###figure_16###"
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "5.6",
|
| 103 |
+
"parent_section_id": "5",
|
| 104 |
+
"section_name": "Robustness Evaluation",
|
| 105 |
+
"text": "We performed a comprehensive analysis of our approach\u2019s robustness using the MuJoCo simulator, as summarized in Fig. 7 ###reference_###. The focus was on evaluating the impact of model errors and motion speed on system\u2019s balance. We simulated the Talos robot in double support, executing motion sequences reaching a distant target with the left hand and returning to the initial posture. The number of successful trials without fall for three conditions are reported: (i) without SEIKO Controller, (ii) with SEIKO Controller but without considering joint torque limits (11i ###reference_.9###), (12 ###reference_###), and (iii) using the full control method. Variations included hand Cartesian motion velocity (slow cm/s to fast cm/s) and additional mass on the left hand (none to kg).\nWe observed that MuJoCo\u2019s soft contact model produces a more pronounced flexibility behavior than Gazebo or even the actual robot. The presented results implicitly incorporate flexibility model errors, although they are not quantified.\nSEIKO Retargeting without whole-body control (left) operates in open-loop and is partially robust to motion speed but struggles with model errors. Using SEIKO Controller (middle) significantly improves success rates, adapting joint position commands to handle additional hand mass for balance. However, unplanned posture adaptations and model errors near full extension reach actuator torque limits, leading to loss of control authority. Considering actuator torque limits in the controller (right) enhances robustness by optimizing posture and avoiding infeasible hand pose commands. Challenges persist at high speeds and heavy masses, where inertial effects violate the quasi-static assumption."
|
| 106 |
+
},
|
| 107 |
+
{
|
| 108 |
+
"section_id": "6",
|
| 109 |
+
"parent_section_id": null,
|
| 110 |
+
"section_name": "VI Discussion and Conclusion",
|
| 111 |
+
"text": "Our control architecture\u2019s robustness is showcased at moderate motion speeds (Fig. 7 ###reference_###), but it inherently relies on the quasi-static assumption and is unsuitable for highly dynamic motions. Exploring more dynamic and agile motions is an avenue for future research. Establishing contact with stiff position-controlled robots requires precise and slow operator commands, even if effectors admittance (14 ###reference_###) helps mitigating this problem. Future work could explore applying the proposed approach to robots using joint impedance control. As analyzed in [13 ###reference_b13###], we noted greater leg flexibility in the Talos robot than in our basic model. Although our controller enables successful contact transitions in teleoperated tasks, this significant difference hampers the quick contact switches needed for walking. Refining the flexibility model may allow walking capabilities.\nThe robot fell when attempting to climb large cm stairs for exceeding arm joint torque limits during the challenging contact switch. Despite being theoretically feasible according to the retargeting model, the adaptation of joint torque limits (12 ###reference_###) is insufficient to ensure robustness if an infeasible contact transition is attempted due to model errors (e.g., underestimating the robot\u2019s weight).\nSEIKO Controller overcomes the inherent lack of direct control authority over contact forces of position-controlled by explicitly considering flexibilities. The whole-body multi-contact formulation is grounded in model and enhances robustness to moderate motion speeds and model errors, safely carrying substantial unmodelled loads at arm\u2019s length. The unified whole-body formulation employs a single feedback law on contact forces, effectively leveraging both postural change (i.e., CoM displacement) and contact force redistribution to regulate balance. Given that the primary advantage of humanoids and other multi-limbed robots lies in their strong versatility, this research paves the way for broadening the application and deployment of real-world scenarios, utilizing more capable and adaptable multi-contact systems in uncertain contexts and environments."
|
| 112 |
+
}
|
| 113 |
+
],
|
| 114 |
+
"appendix": [],
|
| 115 |
+
"tables": {
|
| 116 |
+
"1": {
|
| 117 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table I: </span>Mathematical notations</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.36\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T1.36.37.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_t\" id=\"S3.T1.36.37.1.1\">Notation</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T1.36.37.1.2\">Description</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_t\" id=\"S3.T1.1.1.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.2\">Number of joints</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l\" id=\"S3.T1.2.2.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T1.2.2.2\">Number of enabled plane contacts</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l\" id=\"S3.T1.3.3.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T1.3.3.2\">Number of enabled point contacts</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l\" id=\"S3.T1.4.4.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T1.4.4.2\">Dimension of stacked wrench</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l\" id=\"S3.T1.5.5.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T1.5.5.2\">Estimated measured quantities</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.6.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l\" id=\"S3.T1.6.6.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T1.6.6.2\">Operator\u2019s raw Cartesian commands</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.7.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l\" id=\"S3.T1.7.7.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T1.7.7.2\">Effectors admittance scheme quantities</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.8.8\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l\" id=\"S3.T1.8.8.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T1.8.8.2\">Processed commands for retargeting input</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.9.9\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l\" id=\"S3.T1.9.9.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T1.9.9.2\">Desired state computed by retargeting</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.10.10\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l\" id=\"S3.T1.10.10.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T1.10.10.2\">Flexible state computed by controller</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.11.11\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l\" id=\"S3.T1.11.11.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T1.11.11.2\">Enabled contact quantities</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.12.12\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row 
ltx_border_l\" id=\"S3.T1.12.12.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T1.12.12.2\">Disabled contact (free effector) quantities</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.13.13\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l\" id=\"S3.T1.13.13.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T1.13.13.2\">Cartesian pose</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.14.14\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l\" id=\"S3.T1.14.14.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T1.14.14.2\">Cartesian spatial velocity</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.15.15\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l\" id=\"S3.T1.15.15.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T1.15.15.2\">Posture position (floating base and joints)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.16.16\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l\" id=\"S3.T1.16.16.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T1.16.16.2\">Posture velocity</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.17.17\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l\" id=\"S3.T1.17.17.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T1.17.17.2\">Joint position and velocity</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.18.18\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l\" id=\"S3.T1.18.18.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T1.18.18.2\">Joint position command sent to robot</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.19.19\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l\" id=\"S3.T1.19.19.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T1.19.19.2\">Joint position min/max bounds</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.20.20\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l\" id=\"S3.T1.20.20.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T1.20.20.2\">Wrench effort (input to controller)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.21.21\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l\" id=\"S3.T1.21.21.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T1.21.21.2\">Joint torque</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.22.22\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l\" id=\"S3.T1.22.22.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T1.22.22.2\">Absolute maximum joint torque</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.23.23\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l\" id=\"S3.T1.23.23.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T1.23.23.2\">Joint torque limits used in Controller</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.24.24\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l\" id=\"S3.T1.24.24.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T1.24.24.2\">Stacked contact wrench</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.25.25\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l\" id=\"S3.T1.25.25.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T1.25.25.2\">Joint stiffness vector</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.26.26\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l\" 
id=\"S3.T1.26.26.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T1.26.26.2\">Joint stiffness matrix</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.27.27\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l\" id=\"S3.T1.27.27.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T1.27.27.2\">Selection matrix joint to full dimension</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.28.28\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l\" id=\"S3.T1.28.28.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T1.28.28.2\">Selection matrix full to joint dimension</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.29.29\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l\" id=\"S3.T1.29.29.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T1.29.29.2\">Gravity vector</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.30.30\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l\" id=\"S3.T1.30.30.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T1.30.30.2\">Stacked effectors Jacobian matrix</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.31.31\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l\" id=\"S3.T1.31.31.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T1.31.31.2\">Proportional and derivative control gains</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.32.32\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l\" id=\"S3.T1.32.32.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T1.32.32.2\">Effectors admittance gain</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.33.33\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l\" id=\"S3.T1.33.33.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T1.33.33.2\">Time step</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.34.34\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l\" id=\"S3.T1.34.34.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T1.34.34.2\">Effector poses (forward kinematic)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.36.36\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_l\" id=\"S3.T1.35.35.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r\" id=\"S3.T1.36.36.2\">Operations on Lie algebra</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 118 |
+
"capture": "Table I: Mathematical notations"
|
| 119 |
+
}
|
| 120 |
+
},
|
| 121 |
+
"image_paths": {
|
| 122 |
+
"1(a)": {
|
| 123 |
+
"figure_path": "2312.16465v4_figure_1(a).png",
|
| 124 |
+
"caption": "Figure 1: Overview of our control pipeline (top), and illustrations of teleoperated multi-contact experiments on Talos humanoid robot (bottom).",
|
| 125 |
+
"url": "http://arxiv.org/html/2312.16465v4/"
|
| 126 |
+
},
|
| 127 |
+
"1(b)": {
|
| 128 |
+
"figure_path": "2312.16465v4_figure_1(b).png",
|
| 129 |
+
"caption": "Figure 1: Overview of our control pipeline (top), and illustrations of teleoperated multi-contact experiments on Talos humanoid robot (bottom).",
|
| 130 |
+
"url": "http://arxiv.org/html/2312.16465v4/extracted/2312.16465v4/media/screenshot_slope_1.png"
|
| 131 |
+
},
|
| 132 |
+
"1(c)": {
|
| 133 |
+
"figure_path": "2312.16465v4_figure_1(c).png",
|
| 134 |
+
"caption": "Figure 1: Overview of our control pipeline (top), and illustrations of teleoperated multi-contact experiments on Talos humanoid robot (bottom).",
|
| 135 |
+
"url": "http://arxiv.org/html/2312.16465v4/extracted/2312.16465v4/media/screenshot_stair_2.png"
|
| 136 |
+
},
|
| 137 |
+
"1(d)": {
|
| 138 |
+
"figure_path": "2312.16465v4_figure_1(d).png",
|
| 139 |
+
"caption": "Figure 1: Overview of our control pipeline (top), and illustrations of teleoperated multi-contact experiments on Talos humanoid robot (bottom).",
|
| 140 |
+
"url": "http://arxiv.org/html/2312.16465v4/extracted/2312.16465v4/media/screenshot_reach_1.png"
|
| 141 |
+
},
|
| 142 |
+
"2": {
|
| 143 |
+
"figure_path": "2312.16465v4_figure_2.png",
|
| 144 |
+
"caption": "Figure 2: Control architecture for position-controlled robots: Operator\u2019s Cartesian commands are retargeted into a feasible whole-body configuration. The controller uses a joint flexibility model to adjust actuator position commands for contact wrench control and prevent exceeding joint torque limits.",
|
| 145 |
+
"url": "http://arxiv.org/html/2312.16465v4/"
|
| 146 |
+
},
|
| 147 |
+
"3(a)": {
|
| 148 |
+
"figure_path": "2312.16465v4_figure_3(a).png",
|
| 149 |
+
"caption": "Figure 3: Force distribution tracking during pushing tasks. The Talos robot (left) pushes a vertical wall using its left hand, following a predefined hand force target trajectory. Plots display the desired and measured normal force for the left hand (top) and the sagittal tangential force for the left foot (bottom); comparing with control enabled (5 trials) and without (5 trials).",
|
| 150 |
+
"url": "http://arxiv.org/html/2312.16465v4/extracted/2312.16465v4/media/talos_push_wall.png"
|
| 151 |
+
},
|
| 152 |
+
"3(b)": {
|
| 153 |
+
"figure_path": "2312.16465v4_figure_3(b).png",
|
| 154 |
+
"caption": "Figure 3: Force distribution tracking during pushing tasks. The Talos robot (left) pushes a vertical wall using its left hand, following a predefined hand force target trajectory. Plots display the desired and measured normal force for the left hand (top) and the sagittal tangential force for the left foot (bottom); comparing with control enabled (5 trials) and without (5 trials).",
|
| 155 |
+
"url": "http://arxiv.org/html/2312.16465v4/"
|
| 156 |
+
},
|
| 157 |
+
"4(a)": {
|
| 158 |
+
"figure_path": "2312.16465v4_figure_4(a).png",
|
| 159 |
+
"caption": "Figure 4: Comparison of contact switch trials with and without SEIKO Controller. Initially, both feet and right hand are in contact. The operator teleoperated the robot to disable the right foot contact, lift the foot, and re-establish contact. Vertical contact forces \ud835\udf40d,\ud835\udf40readsuperscript\ud835\udf40dsuperscript\ud835\udf40read\\bm{\\lambda}^{\\text{d}},\\bm{\\lambda}^{\\text{read}}bold_italic_\u03bb start_POSTSUPERSCRIPT d end_POSTSUPERSCRIPT , bold_italic_\u03bb start_POSTSUPERSCRIPT read end_POSTSUPERSCRIPT (top row) and the desired vertical position of the right foot \ud835\udc7fright footdsubscriptsuperscript\ud835\udc7fdright foot\\bm{X}^{\\text{d}}_{\\text{right foot}}bold_italic_X start_POSTSUPERSCRIPT d end_POSTSUPERSCRIPT start_POSTSUBSCRIPT right foot end_POSTSUBSCRIPT (bottom row) are displayed.",
|
| 160 |
+
"url": "http://arxiv.org/html/2312.16465v4/extracted/2312.16465v4/media/foot_switch.png"
|
| 161 |
+
},
|
| 162 |
+
"4(b)": {
|
| 163 |
+
"figure_path": "2312.16465v4_figure_4(b).png",
|
| 164 |
+
"caption": "Figure 4: Comparison of contact switch trials with and without SEIKO Controller. Initially, both feet and right hand are in contact. The operator teleoperated the robot to disable the right foot contact, lift the foot, and re-establish contact. Vertical contact forces \ud835\udf40d,\ud835\udf40readsuperscript\ud835\udf40dsuperscript\ud835\udf40read\\bm{\\lambda}^{\\text{d}},\\bm{\\lambda}^{\\text{read}}bold_italic_\u03bb start_POSTSUPERSCRIPT d end_POSTSUPERSCRIPT , bold_italic_\u03bb start_POSTSUPERSCRIPT read end_POSTSUPERSCRIPT (top row) and the desired vertical position of the right foot \ud835\udc7fright footdsubscriptsuperscript\ud835\udc7fdright foot\\bm{X}^{\\text{d}}_{\\text{right foot}}bold_italic_X start_POSTSUPERSCRIPT d end_POSTSUPERSCRIPT start_POSTSUBSCRIPT right foot end_POSTSUBSCRIPT (bottom row) are displayed.",
|
| 165 |
+
"url": "http://arxiv.org/html/2312.16465v4/"
|
| 166 |
+
},
|
| 167 |
+
"5(a)": {
|
| 168 |
+
"figure_path": "2312.16465v4_figure_5(a).png",
|
| 169 |
+
"caption": "Figure 5: Impact of damping gain Kdsubscript\ud835\udc3e\ud835\udc51K_{d}italic_K start_POSTSUBSCRIPT italic_d end_POSTSUBSCRIPT on Talos\u2019s torso oscillations. Short pushes are applied (left), and IMU\u2019s gyroscope measures sagittal plane oscillation for varying Kdsubscript\ud835\udc3e\ud835\udc51K_{d}italic_K start_POSTSUBSCRIPT italic_d end_POSTSUBSCRIPT values (middle). Damping effect is quantified using logarithmic decrement metric from oscillation peaks (right).",
|
| 170 |
+
"url": "http://arxiv.org/html/2312.16465v4/x5.jpeg"
|
| 171 |
+
},
|
| 172 |
+
"5(b)": {
|
| 173 |
+
"figure_path": "2312.16465v4_figure_5(b).png",
|
| 174 |
+
"caption": "Figure 5: Impact of damping gain Kdsubscript\ud835\udc3e\ud835\udc51K_{d}italic_K start_POSTSUBSCRIPT italic_d end_POSTSUBSCRIPT on Talos\u2019s torso oscillations. Short pushes are applied (left), and IMU\u2019s gyroscope measures sagittal plane oscillation for varying Kdsubscript\ud835\udc3e\ud835\udc51K_{d}italic_K start_POSTSUBSCRIPT italic_d end_POSTSUBSCRIPT values (middle). Damping effect is quantified using logarithmic decrement metric from oscillation peaks (right).",
|
| 175 |
+
"url": "http://arxiv.org/html/2312.16465v4/"
|
| 176 |
+
},
|
| 177 |
+
"6(a)": {
|
| 178 |
+
"figure_path": "2312.16465v4_figure_6(a).png",
|
| 179 |
+
"caption": "Figure 6: Far reaching task with and without adding a large unmodeled mass (9999 kg) on the hand. The controller enforces joint torque ratio limits (top row, set to 0.60.60.60.6) and tracks the foot contact wrenches (middle row) to ensure balance.",
|
| 180 |
+
"url": "http://arxiv.org/html/2312.16465v4/extracted/2312.16465v4/media/talos_reach_no_load.png"
|
| 181 |
+
},
|
| 182 |
+
"6(b)": {
|
| 183 |
+
"figure_path": "2312.16465v4_figure_6(b).png",
|
| 184 |
+
"caption": "Figure 6: Far reaching task with and without adding a large unmodeled mass (9999 kg) on the hand. The controller enforces joint torque ratio limits (top row, set to 0.60.60.60.6) and tracks the foot contact wrenches (middle row) to ensure balance.",
|
| 185 |
+
"url": "http://arxiv.org/html/2312.16465v4/extracted/2312.16465v4/media/talos_reach_with_load.png"
|
| 186 |
+
},
|
| 187 |
+
"6(c)": {
|
| 188 |
+
"figure_path": "2312.16465v4_figure_6(c).png",
|
| 189 |
+
"caption": "Figure 6: Far reaching task with and without adding a large unmodeled mass (9999 kg) on the hand. The controller enforces joint torque ratio limits (top row, set to 0.60.60.60.6) and tracks the foot contact wrenches (middle row) to ensure balance.",
|
| 190 |
+
"url": "http://arxiv.org/html/2312.16465v4/"
|
| 191 |
+
},
|
| 192 |
+
"7(a)": {
|
| 193 |
+
"figure_path": "2312.16465v4_figure_7(a).png",
|
| 194 |
+
"caption": "Figure 7: Comparison of our controller\u2019s robustness against model errors and motion velocity. The Talos robot performs in double support 10101010 far-reaching tasks at the edge of the feasibility boundary in the MuJoCo simulator (left). The number of successful trials without falling is indicated (out of 10101010). Different combinations of hand motion velocity and added mass on the robot\u2019s hand are compared (middle). The comparison includes scenarios with the SEIKO controller disabled (only open-loop SEIKO retargeting), the SEIKO controller with only foot wrenches control, and the full controller also considering joint torque limits. Overall success ratio comparing the three controllers is given on right panel.",
|
| 195 |
+
"url": "http://arxiv.org/html/2312.16465v4/extracted/2312.16465v4/media/merged_reaching_crop.png"
|
| 196 |
+
},
|
| 197 |
+
"7(b)": {
|
| 198 |
+
"figure_path": "2312.16465v4_figure_7(b).png",
|
| 199 |
+
"caption": "Figure 7: Comparison of our controller\u2019s robustness against model errors and motion velocity. The Talos robot performs in double support 10101010 far-reaching tasks at the edge of the feasibility boundary in the MuJoCo simulator (left). The number of successful trials without falling is indicated (out of 10101010). Different combinations of hand motion velocity and added mass on the robot\u2019s hand are compared (middle). The comparison includes scenarios with the SEIKO controller disabled (only open-loop SEIKO retargeting), the SEIKO controller with only foot wrenches control, and the full controller also considering joint torque limits. Overall success ratio comparing the three controllers is given on right panel.",
|
| 200 |
+
"url": "http://arxiv.org/html/2312.16465v4/"
|
| 201 |
+
},
|
| 202 |
+
"8(a)": {
|
| 203 |
+
"figure_path": "2312.16465v4_figure_8(a).png",
|
| 204 |
+
"caption": "Figure S1: Comparison of our SEIKO Controller and effector admittance [17] for tracking inconsistent references on Talos humanoid robot simulated in double support using Gazebo simulator (left). SEIKO Retargeting is used to generate a configuration where most of the robot\u2019s weight is positioned above the right foot. At t=1\u2062s\ud835\udc611\ud835\udc60t=1sitalic_t = 1 italic_s, the reference sent to the controller is overridden, requesting an equal weight distribution between the two feet, inconsistent with the desired posture and CoM position. Both controllers are initiated at t=6\u2062s\ud835\udc616\ud835\udc60t=6sitalic_t = 6 italic_s and foot normal force tracking is displayed for each controller (right). SEIKO Controller successfully tracks the overridden reference by shifting the CoM of the robot, while the effector admittance controller results in the robot falling.",
|
| 205 |
+
"url": "http://arxiv.org/html/2312.16465v4/x9.png"
|
| 206 |
+
},
|
| 207 |
+
"8(b)": {
|
| 208 |
+
"figure_path": "2312.16465v4_figure_8(b).png",
|
| 209 |
+
"caption": "Figure S1: Comparison of our SEIKO Controller and effector admittance [17] for tracking inconsistent references on Talos humanoid robot simulated in double support using Gazebo simulator (left). SEIKO Retargeting is used to generate a configuration where most of the robot\u2019s weight is positioned above the right foot. At t=1\u2062s\ud835\udc611\ud835\udc60t=1sitalic_t = 1 italic_s, the reference sent to the controller is overridden, requesting an equal weight distribution between the two feet, inconsistent with the desired posture and CoM position. Both controllers are initiated at t=6\u2062s\ud835\udc616\ud835\udc60t=6sitalic_t = 6 italic_s and foot normal force tracking is displayed for each controller (right). SEIKO Controller successfully tracks the overridden reference by shifting the CoM of the robot, while the effector admittance controller results in the robot falling.",
|
| 210 |
+
"url": "http://arxiv.org/html/2312.16465v4/"
|
| 211 |
+
},
|
| 212 |
+
"9(a)": {
|
| 213 |
+
"figure_path": "2312.16465v4_figure_9(a).png",
|
| 214 |
+
"caption": "Figure S2: Comparison between our SEIKO Controller and effector admittance [17] for hand multi-contact and large model errors. Talos humanoid robot is simulated in Gazebo, with a posture featuring both feet and the right hand in contact (left). Tracking of the right hand force is compared across several initial contact forces (right). In the second row, a large external vertical force (200\u2062N200\ud835\udc41200N200 italic_N) is applied on the robot\u2019s torso. The effector admittance scheme fails to track the reference when faced with large external disturbances.",
|
| 215 |
+
"url": "http://arxiv.org/html/2312.16465v4/x11.png"
|
| 216 |
+
},
|
| 217 |
+
"9(b)": {
|
| 218 |
+
"figure_path": "2312.16465v4_figure_9(b).png",
|
| 219 |
+
"caption": "Figure S2: Comparison between our SEIKO Controller and effector admittance [17] for hand multi-contact and large model errors. Talos humanoid robot is simulated in Gazebo, with a posture featuring both feet and the right hand in contact (left). Tracking of the right hand force is compared across several initial contact forces (right). In the second row, a large external vertical force (200\u2062N200\ud835\udc41200N200 italic_N) is applied on the robot\u2019s torso. The effector admittance scheme fails to track the reference when faced with large external disturbances.",
|
| 220 |
+
"url": "http://arxiv.org/html/2312.16465v4/"
|
| 221 |
+
}
|
| 222 |
+
},
|
| 223 |
+
"validation": true,
|
| 224 |
+
"references": [],
|
| 225 |
+
"url": "http://arxiv.org/html/2312.16465v4"
|
| 226 |
+
}
|
20240522/2401.03083v2.json
ADDED
|
@@ -0,0 +1,424 @@
| 1 |
+
{
|
| 2 |
+
"title": "Energy-efficient Decentralized Learning via Graph Sparsification",
|
| 3 |
+
"abstract": "This work aims at improving the energy efficiency of decentralized learning by optimizing the mixing matrix, which controls the communication demands during the learning process. Through rigorous analysis based on a state-of-the-art decentralized learning algorithm, the problem is formulated as a bi-level optimization, with the lower level solved by graph sparsification. A solution with guaranteed performance is proposed for the special case of fully-connected base topology and a greedy heuristic is proposed for the general case. Simulations based on real topology and dataset show that the proposed solution can lower the energy consumption at the busiest node by \u2013 while maintaining the quality of the trained model.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Learning from decentralized data [1 ###reference_b1###] is an emerging machine learning paradigm that has found many applications [2 ###reference_b2###].\nCommunication efficiency has been a major consideration in designing learning algorithms, as the cost in communicating model updates, e.g., communication time, bandwidth consumption, and energy consumption, dominates the total operation cost in many application scenarios [1 ###reference_b1###]. Existing works on reducing this cost can be broadly classified into (i) model compression for reducing the cost per communication [3 ###reference_b3###, 4 ###reference_b4###] and (ii) hyperparameter optimization for reducing the number of communications until convergence [5 ###reference_b5###]. The two approaches are orthogonal and can be applied jointly.\nIn this work, we focus on hyperparameter optimization in the decentralized learning setting, where nodes communicate with neighbors according to a given base topology [6 ###reference_b6###].\nTo this end, we adopt a recently proposed optimization framework from [7 ###reference_b7###] that allows for systematic design of a critical hyperparameter in decentralized learning, the mixing matrix, to minimize a generally-defined cost measure. The choice of mixing matrix as the design parameter utilizes the observation from [8 ###reference_b8###] that not all the links are equally important for convergence. Hence, instead of communicating over all the links at the same frequency as in most of the existing works [1 ###reference_b1###, 5 ###reference_b5###], communicating on different links with different frequencies can further improve the communication efficiency. However, the existing mixing matrix designs [8 ###reference_b8###, 7 ###reference_b7###] fall short at addressing a critical cost measure in wireless networks: energy consumption at the busiest node. Although energy consumption is considered in [7 ###reference_b7###], its cost model only captures the total energy consumption over all the nodes.\nIn this work, we address this gap based on a rigorous theoretical foundation."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "1.1",
|
| 13 |
+
"parent_section_id": "1",
|
| 14 |
+
"section_name": "Related Work",
|
| 15 |
+
"text": "Decentralized learning algorithms. The standard algorithm for learning under a fully decentralized architecture was an algorithm called Decentralized Parallel Stochastic Gradient Descent (D-PSGD) [6 ###reference_b6###], which was shown to achieve the same computational complexity but a lower communication complexity than training via a central server.\nSince then a number of improvements have been developed, e.g., [9 ###reference_b9###],\nbut these works only focused on the number of iterations.\nCommunication cost reduction. One line of works tried to reduce the amount of data per communication through model compression, e.g., [3 ###reference_b3###, 10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###].\nAnother line of works reduced the frequency of communications, e.g., [13 ###reference_b13###, 14 ###reference_b14###, 5 ###reference_b5###].\nby designing an optimized frequency [13 ###reference_b13###, 14 ###reference_b14###] or an adaptive frequency [5 ###reference_b5###]. A unified analysis of the cost-convergence tradeoff of such solutions was provided in [15 ###reference_b15###].\nLater works [16 ###reference_b16###, 17 ###reference_b17###] started to combine model compression and infrequent communications. Recently, it was recognized that better tradeoffs can be achieved by activating subsets of links, e.g., via event-based triggers [16 ###reference_b16###, 17 ###reference_b17###] or predetermined mixing matrices [8 ###reference_b8###, 7 ###reference_b7###].\nOur work is closest to [8 ###reference_b8###, 7 ###reference_b7###] by also designing the mixing matrix, but we address a different objective of maximum per-node energy consumption.\nMixing matrix design. Mixing matrix design has been considered in the classical problem of distributed averaging, e.g., [18 ###reference_b18###, 19 ###reference_b19###]\ndesigned a mixing matrix with the fastest convergence to -average and [20 ###reference_b20###] designed a sequence of mixing matrices to achieve exact average in finite time.\nIn contrast, fewer works have addressed the design of mixing matrices in decentralized learning [21 ###reference_b21###, 8 ###reference_b8###, 7 ###reference_b7###, 22 ###reference_b22###, 23 ###reference_b23###]. Out of these, most focused on optimizing the training time, either by minimizing the time per iteration on computation [21 ###reference_b21###] or communication [8 ###reference_b8###, 22 ###reference_b22###], or by minimizing the number of iterations [23 ###reference_b23###]. To our knowledge, [7 ###reference_b7###] is the only prior work that explicitly designed mixing matrices for minimizing energy consumption. However, [7 ###reference_b7###] only considered the total energy consumption, but this work considers the energy consumption at the busiest node.\nOur design is based on an objective function that generalizes the spectral gap objective [24 ###reference_b24###] to random mixing matrices. Spectral gap is an important parameter for capturing the impact of topology on the convergence rate of decentralized learning [6 ###reference_b6###, 21 ###reference_b21###]. Even if recent works identified some other parameters through which the topology can impact the convergence rate, such as the effective number of neighbors [24 ###reference_b24###] and the neighborhood heterogeneity [23 ###reference_b23###], their results did not invalidate the impact of spectral gap and just pointed out additional factors."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "1.2",
|
| 19 |
+
"parent_section_id": "1",
|
| 20 |
+
"section_name": "Summary of Contributions",
|
| 21 |
+
"text": "We study the design of mixing matrix in decentralized learning with the following contributions:\n1) Instead of considering the total energy consumption as in [7 ###reference_b7###], our design aims at minimizing the energy consumption at the busiest node, leading to a more balanced load.\n2) Instead of using a heuristic objective as in [8 ###reference_b8###] or a partially justified objective as in [7 ###reference_b7###], we use a fully theoretically-justified design objective, which enables a new approach for mixing matrix design based on graph sparsification.\n3) Based on the new approach, we propose an algorithm with guaranteed performance for a special case and a greedy heuristic for the general case. Our solution achieves \u2013 lower energy consumption at the busiest node while producing a model of the same quality as the best-performing benchmark in simulations based on real topology and dataset.\nRoadmap. Section 2 ###reference_### formulates our problem, Section 3 ###reference_### presents the proposed solution, Section 4 ###reference_### evaluates it against benchmarks, and Section 5 ###reference_### concludes the paper.\nProofs and additional evaluation results are provided in the appendix."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2",
|
| 25 |
+
"parent_section_id": null,
|
| 26 |
+
"section_name": "Background and Problem Formulation",
|
| 27 |
+
"text": ""
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "2.1",
|
| 31 |
+
"parent_section_id": "2",
|
| 32 |
+
"section_name": "Decentralized Learning Algorithm",
|
| 33 |
+
"text": "Consider a network of nodes connected through a base topology (), where defines the pairs of nodes that can directly communicate.\nEach node has a local objective function that depends on the parameter vector and its local dataset. The goal is to minimize the global objective function\nWe consider a state-of-the-art decentralized learning algorithm called D-PSGD [6 ###reference_b6###]. Let () denote the parameter vector at node after iterations and the stochastic gradient computed in iteration . D-PSGD runs the following update in parallel at each node :\nwhere is the mixing matrix in iteration , and is the learning rate.\nTo be consistent with the base topology, only if .\nFinally we let .\nThe convergence of this algorithm is guaranteed under the following assumptions:\nEach local objective function is -Lipschitz smooth, i.e.,111For a vector , denotes the -2 norm. For a matrix , denotes the spectral norm, and denotes the Frobenius norm. .\nThere exist constants such that\n\n, .\nThere exist constants such that \n.\nLet and let be a random symmetric matrix such that each row/column in sums to one.\nLet .\nUnder assumptions (1)\u2013(3), if each mixing matrix is an i.i.d. copy of \nand , then D-PSGD can achieve for any given () when the number of iterations reaches\nRemark: The required number of iterations depends on the mixing matrix only through the parameter : the smaller , the fewer iterations are needed.\nThe proof of Theorem 2.1 ###reference_theorem1### is based on [25 ###reference_b25###, Theorem 2] and included in Appendix A ###reference_###."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "2.2",
|
| 37 |
+
"parent_section_id": "2",
|
| 38 |
+
"section_name": "Mixing Matrix",
|
| 39 |
+
"text": "As node needs to send its parameter vector to node in iteration only if , we can control the communications by designing the mixing matrix . To this end, we use \nwhere is the weighted Laplacian matrix [26 ###reference_b26###] of the topology activated in iteration .\nGiven the incidence matrix222Matrix is a matrix, defined as if link starts at node (under arbitrary link orientation), if ends at , and otherwise. of the base topology and a vector of link weights , the Laplacian matrix is given by\n\nThe above reduces the mixing matrix design problem to a problem of designing the link weights , where a link will be activated in iteration if and only if .\nThis construction guarantees that is symmetric with each row/column summing up to one."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "2.3",
|
| 43 |
+
"parent_section_id": "2",
|
| 44 |
+
"section_name": "Cost Model",
|
| 45 |
+
"text": "We use to denote the cost vector in an iteration when the link weight vector is . We focus on the energy consumption at each node , which contains two parts: (i) computation energy for computing the local stochastic gradient and the local aggregation, and (ii) communication energy for sending the updated local parameter vector from node to node . Then the energy consumption at node in iteration is modeled as\nwhere denotes the indicator function. This cost function models the basic scenario where all communications are point-to-point and independent. Other scenarios are left to future work."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "2.4",
|
| 49 |
+
"parent_section_id": "2",
|
| 50 |
+
"section_name": "Optimization Framework",
|
| 51 |
+
"text": "To trade off between the cost per iteration and the convergence rate, we adopt a bi-level optimization framework:\nLower-level optimization: design link weights to maximize the convergence rate (by minimizing ) under a given budget on the maximum cost per node in each iteration, which results in a required number of iterations of .\nUpper-level optimization: design to minimize the total maximum cost per node ."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "3",
|
| 55 |
+
"parent_section_id": null,
|
| 56 |
+
"section_name": "Mixing Matrix Design via Graph Sparsification",
|
| 57 |
+
"text": "As the upper-level optimization only involves one scalar decision variable, we will focus on the lower-level optimization."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "3.1",
|
| 61 |
+
"parent_section_id": "3",
|
| 62 |
+
"section_name": "Simplified Objective",
|
| 63 |
+
"text": "Theorem 2.1 ###reference_theorem1### implies that the lower-level optimization should minimize . While it is possible to formulate this minimization in terms of the link weights , the resulting optimization problem, with a form similar to [7 ###reference_b7###, (18)], will be intractable due to the presence of non-linear matrix inequality constraint. We thus further simplify the objective as follows.\nFor any mixing matrix , where is a randomized Laplacian matrix,\nwhere denotes the -th smallest eigenvalue of .\nBy Lemma 3.1 ###reference_theorem1###, we relax the objective of the lower-level optimization to designing a randomized by solving"
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "3.2",
|
| 67 |
+
"parent_section_id": "3",
|
| 68 |
+
"section_name": "Idea on Leveraging Graph Sparsification",
|
| 69 |
+
"text": "We propose to solve the relaxed lower-level optimization (5 ###reference_###) based on graph (spectral) sparsification.\nFirst, we compute the optimal link weight vector without the budget constraint (5b ###reference_2###) by solving the following optimization:\nConstraint (6b ###reference_2###) ensures at the optimum, i.e., minimizes .\nOptimization (6 ###reference_###) is a semi-definite programming (SDP) problem that can be solved in polynomial time by existing algorithms [27 ###reference_b27###].\nThe vector establishes a lower bound on the relaxed objective:\nif\n is the optimal randomized solution for (5 ###reference_###),\nthen .\nThen, we use a graph sparsification algorithm to sparsify the weighted graph with link weights to satisfy the budget constraint.\nAs , and graph sparsification aims at preserving the original eigenvalues [28 ###reference_b28###], the sparsified link weight vector is expected to achieve an objective value that approximates the optimal for (5 ###reference_###)."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "3.3",
|
| 73 |
+
"parent_section_id": "3",
|
| 74 |
+
"section_name": "Algorithm Design",
|
| 75 |
+
"text": "We now apply the above idea to develop algorithms for mixing matrix design."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "3.3.1",
|
| 79 |
+
"parent_section_id": "3.3",
|
| 80 |
+
"section_name": "3.3.1 Ramanujan-Graph-based Design for a Special Case",
|
| 81 |
+
"text": "Consider the special case when the base topology is a complete graph and all transmissions by a node have the same cost, i.e., for all such that . Let . Then any graph with degrees bounded by satisfies the budget constraint.\nThe complete graph has an ideal sparsifier known as Ramanujan graph. A -regular graph is a Ramanujan graph if all the non-zero eigenvalues of its Laplacian matrix lie between and [29 ###reference_b29###].\nBy assigning weight to every link of a Ramanujan graph , we obtain a weighted graph , whose Laplacian satisfies and\nBy Lemma 3.1 ###reference_theorem1###, the deterministic mixing matrix achieves a -value that satisfies\nRamanujan graphs can be easily constructed by drawing random -regular graphs until satisfying the Ramanujan definition [30 ###reference_b30###]. By the result of [31 ###reference_b31###, 32 ###reference_b32###], for , we can generate random -regular graphs in polynomial time. Thus, the above method can efficiently construct a deterministic mixing matrix with guaranteed performance in solving the lower-level optimization for a given budget such that ."
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "3.3.2",
|
| 85 |
+
"parent_section_id": "3.3",
|
| 86 |
+
"section_name": "3.3.2 Intractability for General Case",
|
| 87 |
+
"text": "We will see that finding a feasible graph sparsifier is computationally hard in the general case.\nTo facilitate the discussions, assume for all and , and , so the budget constraint (5b ###reference_2###) translates to a maximum degree constraint.\nFor a general base topology and any fixed ,\nit is not clear whether there exists a feasible Laplacian satisfying the maximum degree constraint (recall that for to be feasible, its convergence parameter for must be strictly less than ). For example, a subgraph of a star graph needs edges incident to the center to remain connected, and if , then any subgraph with maximum degree at most is disconnected, which implies .\nHence, through similar computations as in (3 ###reference_###)\u2013(4 ###reference_###),\nfor a deterministic ,\nMoreover, in general, the task of determining whether there exists a feasible satisfying the maximum degree constraint is NP-hard\nbecause deciding the existence of a connected spanning subgraph with maximum degree no more than is NP-hard.\nThe following theorem provides NP-hardness for a slightly more general problem; its proof is provided in Appendix A ###reference_###.\nGiven a graph , and a degree constraint for each vertex ,\nthen it is NP-hard to decide the existence of such that is a connected graph and for all vertices .\nFinding a feasible graph sparsifier under degree constraints is equivalent to finding a connected spanning subgraph under the same constraints, as\nand we can set the link weights to be small enough so that , under which if and only if .\nTherefore, for a general base topology with general costs and a general budget constraint,\nit is algorithmically intractable to find a feasible graph sparsifier."
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "3.3.3",
|
| 91 |
+
"parent_section_id": "3.3",
|
| 92 |
+
"section_name": "3.3.3 Greedy Heuristic for General Case",
|
| 93 |
+
"text": "For the general case, ideally we want to sparsify a weighted graph with link weights such that the sparsified graph with link weights will approximate the eigenvalues of while satisfying the constraint for each . While this remains an NP-hard problem for general graphs, we propose a greedy heuristic based on the intuition that the importance of a link is reflected in its absolute weight. Specifically, we will find the link with the minimum absolute weight according to the solution to (6 ###reference_###) such that the cost for either node or node exceeds the budget , set , and then find the next link by re-solving (6 ###reference_###) under this additional constraint,\nuntil either all the nodes satisfy the budget or the graph becomes disconnected;\nin the latter case, the algorithm reports failure to find a sparsifier under budget ."
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "4",
|
| 97 |
+
"parent_section_id": null,
|
| 98 |
+
"section_name": "Performance Evaluation",
|
| 99 |
+
"text": "We evaluate the proposed solution for the general case based on a real dataset and the topology of a real wireless network. We defer the evaluation in the special case to\nAppendix B ###reference_###.\nExperiment setting:\nWe consider training for image classification based on CIFAR-10, which consists of 60,000 color images in 10 classes. We train the ResNet-50 model over its training dataset with 50,000 images, and then test the trained model over the testing dataset with 10,000 images.\nWe use the topology of Roofnet [33 ###reference_b33###] at data rate 1 Mbps as the base topology, which contains 33 nodes and 187 links.\nTo evaluate the cost, we set the computation energy as (Wh) and the communication energy as (Wh) based on our parameters and the parameters from [34 ###reference_b34###]333Our model size is MB, batch size is 32, and processing speed is 8ms per sample. Assuming 1Mbps links and TX2 as the hardware, whose power is 4.7W during computation and 1.35W during communication [34 ###reference_b34###], we estimate the computation energy by Wh, and the communication energy with each neighbor by Wh, where the multiplication by 2 is because this testbed uses WiFi, which is half-duplex..\nFollowing [7 ###reference_b7###], we set the learning rate as 0.8 at the beginning and reduce it by 10X after 100, 150, 180, 200 epochs, and the mini-batch size to 32.\nBenchmarks:\nWe compare the proposed solution with with four benchmarks: \u2018Vanilla D-PSGD\u2019 [6 ###reference_b6###] where all the neighbors communicate in all the iterations, \u2018Periodic\u2019 where all the neighbors communicate periodically, \u2018MATCHA\u2019 [8 ###reference_b8###] which was designed to minimize training time, and Algorithm 1 in [7 ###reference_b7###] (\u2018Greedy total\u2019 or \u2018Gt\u2019 ) for the cost model (2 ###reference_###) which was designed to minimize the total energy consumption444While the final solution in [7 ###reference_b7###] was randomized over a set of mixing matrices, we only use the deterministic design by Algorithm 1 for a fair comparison, as the same randomization can be applied to the proposed solution..\nIn \u2018Vanilla D-PSGD\u2019, \u2018Periodic\u2019, and \u2018MATCHA\u2019, identical weights are assigned to every activated link, whereas in \u2018Greedy total\u2019 and the proposed algorithm, heterogeneous link weights are parts of the designs.\nWe first tune MATCHA to minimize its loss at convergence, and then tune the other benchmarks to activate the same number of links on the average. We evaluate two versions of the proposed algorithm (\u2018Greedy per-node\u2019 or \u2018Gp\u2019): one with the same maximum energy consumption per node as the best-performing benchmark (leading to a budget that amounts to of maximum degree) and the other with the same accuracy as the best-performing benchmark at convergence (leading to a budget that amounts to of maximum degree).\n\n###figure_1### Results: Fig. 1 ###reference_### shows the loss and accuracy of the trained model, with respect to both the epochs and the maximum energy consumption per node. 
We see that: (i) instead of activating all the links as in \u2018Vanilla D-PSGD\u2019, it is possible to activate fewer (weighted) links without degrading the quality of the trained model; (ii) different ways of selecting the links to activate lead to different quality-cost tradeoffs; (iii) the algorithm designed to optimize the total energy consumption (\u2018Greedy total\u2019) performs the best among the benchmarks; (iv) however, by balancing the energy consumption across nodes, the proposed algorithm (\u2018Greedy per-node\u2019) can achieve either a better loss/accuracy at the same maximum energy consumption per node, or a lower maximum energy consumption per node at the same loss and accuracy. In particular, the proposed algorithm (at maximum degree)\ncan save energy at the busiest node compared to the best-performing benchmark (\u2018Greedy total\u2019) and compared to \u2018Vanilla D-PSGD\u2019, while producing a model of the same quality.\nMeanwhile, the proposed algorithm also saves \u2013 of the total energy consumption compared to the benchmarks, as shown in\nTable 1 ###reference_###."
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "5",
|
| 103 |
+
"parent_section_id": null,
|
| 104 |
+
"section_name": "Conclusion",
|
| 105 |
+
"text": "Based on an explicit characterization of how the mixing matrix affects the convergence rate in decentralized learning, we proposed a bi-level optimization for mixing matrix design, with the lower level solved by graph sparsification. This enabled us to develop a solution with guaranteed performance for a special case and a heuristic for the general case. Our solution greatly reduced the energy consumption at the busiest node while maintaining the quality of the trained model."
|
| 106 |
+
}
|
| 107 |
+
],
|
| 108 |
+
"appendix": [
|
| 109 |
+
{
|
| 110 |
+
"section_id": "Appendix 1",
|
| 111 |
+
"parent_section_id": null,
|
| 112 |
+
"section_name": "Appendix A Supporting Proofs",
|
| 113 |
+
"text": "We first recall the following result from [25 ###reference_b25###].\n[25 ###reference_b25###, Theorem 2] \nLet . Under assumptions (1)\u2013(3), if there exist a constant such that the mixing matrices , each being symmetric with each row/column summing to one555Originally, [25 ###reference_b25###, Theorem 2] had a stronger assumption that each mixing matrix is doubly stochastic, but we have verified that it suffices to have each row/column summing to one., satisfy\nfor all and integer , then D-PSGD can achieve for any given () when the number of iterations reaches\nRemark: Originally, [25 ###reference_b25###, Theorem 2] only mandates (8 ###reference_###) for the product of mixing matrices, but we consider the case of for the tractability of mixing matrix design.\nSince are i.i.d. copies of a random matrix in our case, we first rewrite (8 ###reference_###) as\nYet (9 ###reference_###) is not an explicit function of the mixing matrix, so in the next lemma, we relate it to an equivalent quantity that is an explicit function of and thus easier to handle.\nFor any randomized mixing matrix that is symmetric with every row/column summing to one, defined in (9 ###reference_###) satisfies for .\nTheorem 2.1 ###reference_theorem1### follows from Theorem A.1 ###reference_theorem1### and Lemma A.2 ###reference_theorem2###,\nso it remains to prove Lemma A.2 ###reference_theorem2###.\nOne direction was proved in [7 ###reference_b7###], and we will prove that , or equivalently .\nFor this, we rely on the following fact (see [8 ###reference_b8###, Lemma 1]666Although [8 ###reference_b8###, Lemma 1] originally assumed to be doubly stochastic, we have verified that having each row/column summing to one is sufficient.): for any matrix ,\nWe fix a matrix .\nNow set , and\n(10 ###reference_###) yields that\nNote that for our choice of matrix , . Hence,\nThus, by (11 ###reference_###) and (12 ###reference_###),\nwe establish that\nSince is an arbitrary nonzero matrix, it follows from (9 ###reference_###) that\n.\n\u220e\nAs the spectral norm is convex, Jensen\u2019s inequality implies\nFor a given Laplacian matrix ,\nwhere the first \u201c\u201d is because the eigenvalues of are squares of the eigenvalues of as shown by eigenvalue decomposition.\nBy Lemma IV.2 in [7 ###reference_b7###],\nCombining (14 ###reference_###)\u2013(16 ###reference_###) yields (4 ###reference_###).\n\u220e\nIt suffices to reduce from the Hamiltonian path problem [35 ###reference_b35###] to our problem.\nRecall the Hamiltonian path problem for a given graph is a problem of determining the existence of a path visiting each vertex exactly once;\nsuch path is referred to as Hamiltonian path.\nFirst, we construct an input graph \nfor each instance of the Hamiltonian path problem.\nLet , and\n for all .\nSecond,\nwe show there exists such that\nif then\n for each vertex and\n if and only if has a Hamiltonian path.\nIf there exists a Hamiltonian path , then since the degree of each vertex in would be less than or equal to 2, satisfies the constraint ;\nmoreover, since is connected.\nConversely, suppose satisfies , for each vertex and\n.\nThen apparently is connected, and\na connected graph with degrees less than or equal to 2 can only be a path or a cycle. A path must be a Hamiltonian path in since it connects all the vertices; similarly, a cycle must contain a Hamiltonian path.\nThe proof is now complete as the Hamiltonian path problem is NP-complete [35 ###reference_b35###].\n\u220e"
|
| 114 |
+
},
|
| 115 |
+
{
|
| 116 |
+
"section_id": "Appendix 2",
|
| 117 |
+
"parent_section_id": null,
|
| 118 |
+
"section_name": "Appendix B Additional Evaluation Results",
|
| 119 |
+
"text": "###figure_2### In addition to the evaluation of the general case in Section 4 ###reference_###, we also evaluate the special case of a fully-connected base topology. We use the same experiment setting and benchmarks as in Section 4 ###reference_###, except that the base topology is a -node complete graph. The proposed solution in this case is the Ramanujan-graph-based design in Section 3.3.1 ###reference_.SSS1###. We still evaluate two versions of this solution: one with the same maximum energy consumption per node as the best-performing benchmark (leading to a budget that amounts to of node degree in the base topology) and the other with an accuracy no worse than vanilla D-PSGD (leading to a budget that amounts to of node degree).\nThe results in Fig. 2 ###reference_### show that: (i) similar to Fig. 1 ###reference_###, careful selection of the links to activate can notably improve the quality-cost tradeoff in decentralized learning; (ii) however, the best-performing benchmark under fully-connected base topology becomes \u2018MATCHA\u2019 even if it was designed for a different objective [8 ###reference_b8###]; (iii) nevertheless, by intentionally optimizing a parameter (4 ###reference_###) controlling the convergence rate while balancing the communication load across nodes, the proposed Ramanujan-graph-based solution can achieve a better loss/accuracy at the same maximum energy consumption per node (\u2018Ramanujan \u2019), or lower maximum energy consumption per node with a loss/accuracy no worse than \u2018Vanilla D-PSGD\u2019 (\u2018Ramanujan \u2019).\nCompared with the results in Fig. 1 ###reference_###, the proposed solution delivers less energy saving at the busiest node under a fully-connected base topology. Intuitively, this phenomenon is because the symmetry of the base topology leads to naturally balanced loads across nodes even if this is not considered by the benchmarks, which indicates that there is more room for improvement in cases with asymmetric base topology."
|
| 120 |
+
}
|
| 121 |
+
],
|
| 122 |
+
"tables": {
|
| 123 |
+
"1": {
|
| 124 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.1.1\">Table 1</span>: </span>Stats at epoch 200.</figcaption>\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.3\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.3.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S4.T1.3.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.3.1.1.1.1\" style=\"font-size:90%;\">Method</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.3.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.3.1.1.2.1\" style=\"font-size:90%;\">Loss</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.3.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.3.1.1.3.1\" style=\"font-size:90%;\">Acc.</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.3.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.3.1.1.4.1\" style=\"font-size:90%;\">Per-node Ene.</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.3.1.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.3.1.1.5.1\" style=\"font-size:90%;\">Total Ene.</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.3.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T1.3.2.1.1\"><span class=\"ltx_text\" id=\"S4.T1.3.2.1.1.1\" style=\"font-size:90%;\">Vanilla</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.3.2.1.2\"><span class=\"ltx_text\" id=\"S4.T1.3.2.1.2.1\" style=\"font-size:90%;\">0.277</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.3.2.1.3\"><span class=\"ltx_text\" id=\"S4.T1.3.2.1.3.1\" style=\"font-size:90%;\">87.6%</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.3.2.1.4\"><span class=\"ltx_text\" id=\"S4.T1.3.2.1.4.1\" style=\"font-size:90%;\">280kWh</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.3.2.1.5\"><span class=\"ltx_text\" id=\"S4.T1.3.2.1.5.1\" style=\"font-size:90%;\">4980kWh</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.3.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.3.3.2.1\"><span class=\"ltx_text\" id=\"S4.T1.3.3.2.1.1\" style=\"font-size:90%;\">Periodic</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.2.2\"><span class=\"ltx_text\" id=\"S4.T1.3.3.2.2.1\" style=\"font-size:90%;\">0.350</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.2.3\"><span class=\"ltx_text\" id=\"S4.T1.3.3.2.3.1\" style=\"font-size:90%;\">85.4%</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.2.4\"><span class=\"ltx_text\" id=\"S4.T1.3.3.2.4.1\" style=\"font-size:90%;\">140kWh</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.2.5\"><span class=\"ltx_text\" id=\"S4.T1.3.3.2.5.1\" style=\"font-size:90%;\">2490kWh</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.3.4.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.3.4.3.1\"><span class=\"ltx_text\" id=\"S4.T1.3.4.3.1.1\" style=\"font-size:90%;\">MATCHA</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.4.3.2\"><span class=\"ltx_text\" id=\"S4.T1.3.4.3.2.1\" style=\"font-size:90%;\">0.313</span></td>\n<td 
class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.4.3.3\"><span class=\"ltx_text\" id=\"S4.T1.3.4.3.3.1\" style=\"font-size:90%;\">86.4%</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.4.3.4\"><span class=\"ltx_text\" id=\"S4.T1.3.4.3.4.1\" style=\"font-size:90%;\">213kWh</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.4.3.5\"><span class=\"ltx_text\" id=\"S4.T1.3.4.3.5.1\" style=\"font-size:90%;\">2490kWh</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.3.5.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.3.5.4.1\"><span class=\"ltx_text\" id=\"S4.T1.3.5.4.1.1\" style=\"font-size:90%;\">Gt, 50%</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.5.4.2\"><span class=\"ltx_text\" id=\"S4.T1.3.5.4.2.1\" style=\"font-size:90%;\">0.236</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.5.4.3\"><span class=\"ltx_text\" id=\"S4.T1.3.5.4.3.1\" style=\"font-size:90%;\">88.0%</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.5.4.4\"><span class=\"ltx_text\" id=\"S4.T1.3.5.4.4.1\" style=\"font-size:90%;\">147kWh</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.5.4.5\"><span class=\"ltx_text\" id=\"S4.T1.3.5.4.5.1\" style=\"font-size:90%;\">2477kWh</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.3.6.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.3.6.5.1\"><span class=\"ltx_text\" id=\"S4.T1.3.6.5.1.1\" style=\"font-size:90%;\">Gp, 25%</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.6.5.2\"><span class=\"ltx_text\" id=\"S4.T1.3.6.5.2.1\" style=\"font-size:90%;\">0.244</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.6.5.3\"><span class=\"ltx_text\" id=\"S4.T1.3.6.5.3.1\" style=\"font-size:90%;\">87.7%</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.6.5.4\"><span class=\"ltx_text\" id=\"S4.T1.3.6.5.4.1\" style=\"font-size:90%;\">67kWh</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.6.5.5\"><span class=\"ltx_text\" id=\"S4.T1.3.6.5.5.1\" style=\"font-size:90%;\">1465kWh</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.3.7.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S4.T1.3.7.6.1\"><span class=\"ltx_text\" id=\"S4.T1.3.7.6.1.1\" style=\"font-size:90%;\">Gp, 55%</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.3.7.6.2\"><span class=\"ltx_text\" id=\"S4.T1.3.7.6.2.1\" style=\"font-size:90%;\">0.192</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.3.7.6.3\"><span class=\"ltx_text\" id=\"S4.T1.3.7.6.3.1\" style=\"font-size:90%;\">88.2%</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.3.7.6.4\"><span class=\"ltx_text\" id=\"S4.T1.3.7.6.4.1\" style=\"font-size:90%;\">147kWh</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.3.7.6.5\"><span class=\"ltx_text\" id=\"S4.T1.3.7.6.5.1\" style=\"font-size:90%;\">3169kWh</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 125 |
+
"capture": "Table 1: Stats at epoch 200."
|
| 126 |
+
}
|
| 127 |
+
},
|
| 128 |
+
"image_paths": {
|
| 129 |
+
"1": {
|
| 130 |
+
"figure_path": "2401.03083v2_figure_1.png",
|
| 131 |
+
"caption": "Fig. 1: Training loss and testing accuracy for decentralized learning over Roofnet.",
|
| 132 |
+
"url": "http://arxiv.org/html/2401.03083v2/extracted/2401.03083v2/general_case2.png"
|
| 133 |
+
},
|
| 134 |
+
"2": {
|
| 135 |
+
"figure_path": "2401.03083v2_figure_2.png",
|
| 136 |
+
"caption": "Fig. 2: Training loss and testing accuracy for decentralized learning over a complete graph.",
|
| 137 |
+
"url": "http://arxiv.org/html/2401.03083v2/extracted/2401.03083v2/complete_graph2.png"
|
| 138 |
+
}
|
| 139 |
+
},
|
| 140 |
+
"validation": true,
|
| 141 |
+
"references": [
|
| 142 |
+
{
|
| 143 |
+
"1": {
|
| 144 |
+
"title": "\u201cCommunication-efficient learning of deep networks from\ndecentralized data,\u201d",
|
| 145 |
+
"author": "H. McMahan, Eider Moore, D. Ramage, S. Hampson, and Blaise Ag\u00fcera y Arcas,",
|
| 146 |
+
"venue": "in AISTATS, 2017.",
|
| 147 |
+
"url": null
|
| 148 |
+
}
|
| 149 |
+
},
|
| 150 |
+
{
|
| 151 |
+
"2": {
|
| 152 |
+
"title": "Advances and Open Problems in Federated Learning,",
|
| 153 |
+
"author": "Peter Kairouz et al.,",
|
| 154 |
+
"venue": "Now Foundations and Trends, 2021.",
|
| 155 |
+
"url": null
|
| 156 |
+
}
|
| 157 |
+
},
|
| 158 |
+
{
|
| 159 |
+
"3": {
|
| 160 |
+
"title": "\u201cDecentralized deep learning with arbitrary communication\ncompression,\u201d",
|
| 161 |
+
"author": "Anastasia Koloskova, Tao Lin, Sebastian U Stich, and Martin Jagg,",
|
| 162 |
+
"venue": "in The International Conference on Learning Representations\n(ICLR), 2020.",
|
| 163 |
+
"url": null
|
| 164 |
+
}
|
| 165 |
+
},
|
| 166 |
+
{
|
| 167 |
+
"4": {
|
| 168 |
+
"title": "\u201cCommunication-efficient federated learning for resource-constrained\nedge devices,\u201d",
|
| 169 |
+
"author": "Guangchen Lan, Xiao-Yang Liu, Yijing Zhang, and Xiaodong Wang,",
|
| 170 |
+
"venue": "IEEE Transactions on Machine Learning in Communications and\nNetworking, vol. 1, pp. 210\u2013224, 2023.",
|
| 171 |
+
"url": null
|
| 172 |
+
}
|
| 173 |
+
},
|
| 174 |
+
{
|
| 175 |
+
"5": {
|
| 176 |
+
"title": "\u201cAdaptive federated learning in resource constrained edge computing\nsystems,\u201d",
|
| 177 |
+
"author": "Shiqiang Wang, Tiffany Tuor, Theodoros Salonidis, Kin K. Leung, Christian\nMakaya, Ting He, and Kevin Chan,",
|
| 178 |
+
"venue": "IEEE Journal on Selected Areas in Communications, vol. 37, no.\n6, pp. 1205\u20131221, 2019.",
|
| 179 |
+
"url": null
|
| 180 |
+
}
|
| 181 |
+
},
|
| 182 |
+
{
|
| 183 |
+
"6": {
|
| 184 |
+
"title": "\u201cCan decentralized algorithms outperform centralized algorithms? a\ncase study for decentralized parallel stochastic gradient descent,\u201d",
|
| 185 |
+
"author": "Xiangru Lian, Ce Zhang, Huan Zhang, Cho-Jui Hsieh, Wei Zhang, and Ji Liu,",
|
| 186 |
+
"venue": "in Proceedings of the 31st International Conference on Neural\nInformation Processing Systems, 2017, p. 5336\u20135346.",
|
| 187 |
+
"url": null
|
| 188 |
+
}
|
| 189 |
+
},
|
| 190 |
+
{
|
| 191 |
+
"7": {
|
| 192 |
+
"title": "\u201cLaplacian matrix sampling for communication- efficient\ndecentralized learning,\u201d",
|
| 193 |
+
"author": "Cho-Chun Chiu, Xusheng Zhang, Ting He, Shiqiang Wang, and Ananthram Swami,",
|
| 194 |
+
"venue": "IEEE Journal on Selected Areas in Communications, vol. 41, no.\n4, pp. 887\u2013901, 2023.",
|
| 195 |
+
"url": null
|
| 196 |
+
}
|
| 197 |
+
},
|
| 198 |
+
{
|
| 199 |
+
"8": {
|
| 200 |
+
"title": "\u201cMATCHA: Speeding up decentralized SGD via matching\ndecomposition sampling,\u201d",
|
| 201 |
+
"author": "J. Wang, A. K. Sahu, Z. Yang, G. Joshi, and S. Kar,",
|
| 202 |
+
"venue": "in NeurIPS Workshop on Federated Learning, 2019.",
|
| 203 |
+
"url": null
|
| 204 |
+
}
|
| 205 |
+
},
|
| 206 |
+
{
|
| 207 |
+
"9": {
|
| 208 |
+
"title": "\u201cOptimal complexity in decentralized training,\u201d",
|
| 209 |
+
"author": "Yucheng Lu and Christopher De Sa,",
|
| 210 |
+
"venue": "in International Conference on Machine Learning (ICML), 2021.",
|
| 211 |
+
"url": null
|
| 212 |
+
}
|
| 213 |
+
},
|
| 214 |
+
{
|
| 215 |
+
"10": {
|
| 216 |
+
"title": "\u201cMoniqua: Modulo quantized communication in decentralized SGD,\u201d",
|
| 217 |
+
"author": "Yucheng Lu and Christopher De Sa,",
|
| 218 |
+
"venue": "in International Conference on Machine Learning (ICML), 2020.",
|
| 219 |
+
"url": null
|
| 220 |
+
}
|
| 221 |
+
},
|
| 222 |
+
{
|
| 223 |
+
"11": {
|
| 224 |
+
"title": "\u201cCommunication compression for decentralized training,\u201d",
|
| 225 |
+
"author": "Hanlin Tang, Shaoduo Gan, Ce Zhang, Tong Zhang, and Ji Liu,",
|
| 226 |
+
"venue": "in Advances in Neural Information Processing Systems (NeurIPS),\n2018.",
|
| 227 |
+
"url": null
|
| 228 |
+
}
|
| 229 |
+
},
|
| 230 |
+
{
|
| 231 |
+
"12": {
|
| 232 |
+
"title": "\u201cCommunication-efficient network-distributed optimization with\ndifferential-coded compressors,\u201d",
|
| 233 |
+
"author": "Xin Zhang, Jia Liu, Zhengyuan Zhu, and Elizabeth S. Bentley,",
|
| 234 |
+
"venue": "in IEEE INFOCOM, 2020, p. 317\u2013326.",
|
| 235 |
+
"url": null
|
| 236 |
+
}
|
| 237 |
+
},
|
| 238 |
+
{
|
| 239 |
+
"13": {
|
| 240 |
+
"title": "\u201cAdaptive communication strategies to achieve the best error-runtime\ntrade-off in local-update SGD,\u201d",
|
| 241 |
+
"author": "Jianyu Wang and Gauri Joshi,",
|
| 242 |
+
"venue": "in Systems for ML, 2019.",
|
| 243 |
+
"url": null
|
| 244 |
+
}
|
| 245 |
+
},
|
| 246 |
+
{
|
| 247 |
+
"14": {
|
| 248 |
+
"title": "\u201cFederated learning over wireless networks: Optimization model\ndesign and analysis,\u201d",
|
| 249 |
+
"author": "Nguyen H. Tran, Wei Bao, Albert Zomaya, Minh N.H. Nguyen, and Choong Seon Hong,",
|
| 250 |
+
"venue": "in IEEE INFOCOM, 2019.",
|
| 251 |
+
"url": null
|
| 252 |
+
}
|
| 253 |
+
},
|
| 254 |
+
{
|
| 255 |
+
"15": {
|
| 256 |
+
"title": "\u201cCooperative sgd: A unified framework for the design and analysis of\nlocal-update sgd algorithms,\u201d",
|
| 257 |
+
"author": "Jianyu Wang and Gauri Joshi,",
|
| 258 |
+
"venue": "Journal of Machine Learning Research, vol. 22, no. 213, pp.\n1\u201350, 2021.",
|
| 259 |
+
"url": null
|
| 260 |
+
}
|
| 261 |
+
},
|
| 262 |
+
{
|
| 263 |
+
"16": {
|
| 264 |
+
"title": "\u201cSPARQ-SGD: Event-triggered and compressed communication in\ndecentralized optimization,\u201d",
|
| 265 |
+
"author": "Navjot Singh, Deepesh Data, Jemin George, and Suhas Diggavi,",
|
| 266 |
+
"venue": "in IEEE CDC, 2020.",
|
| 267 |
+
"url": null
|
| 268 |
+
}
|
| 269 |
+
},
|
| 270 |
+
{
|
| 271 |
+
"17": {
|
| 272 |
+
"title": "\u201cSQuARM-SGD: Communication-efficient momentum SGD for\ndecentralized optimization,\u201d",
|
| 273 |
+
"author": "Navjot Singh, Deepesh Data, Jemin George, and Suhas Diggavi,",
|
| 274 |
+
"venue": "IEEE Journal on Selected Areas in Information Theory, vol. 2,\nno. 3, pp. 954\u2013969, 2021.",
|
| 275 |
+
"url": null
|
| 276 |
+
}
|
| 277 |
+
},
|
| 278 |
+
{
|
| 279 |
+
"18": {
|
| 280 |
+
"title": "\u201cFast linear iterations for distributed averaging,\u201d",
|
| 281 |
+
"author": "Lin Xiao and Stephen Boyd,",
|
| 282 |
+
"venue": "Systems & Control Letters, vol. 53, pp. 65\u201378, September\n2004.",
|
| 283 |
+
"url": null
|
| 284 |
+
}
|
| 285 |
+
},
|
| 286 |
+
{
|
| 287 |
+
"19": {
|
| 288 |
+
"title": "\u201cRandomized gossip algorithms,\u201d",
|
| 289 |
+
"author": "Stephen Boyd, Arpita Ghosh, Balaji Prabhakar, and Devavrat Shah,",
|
| 290 |
+
"venue": "in IEEE Transactions on Information Theory, 2006, vol. 52.",
|
| 291 |
+
"url": null
|
| 292 |
+
}
|
| 293 |
+
},
|
| 294 |
+
{
|
| 295 |
+
"20": {
|
| 296 |
+
"title": "\u201cGraph diameter, eigenvalues, and minimum-time consensus,\u201d",
|
| 297 |
+
"author": "Julien M Hendrickx, Rapha\u00ebl M Jungers, Alexander Olshevsky, and Guillaume\nVankeerberghen,",
|
| 298 |
+
"venue": "Automatica, pp. 635\u2013640, 2014.",
|
| 299 |
+
"url": null
|
| 300 |
+
}
|
| 301 |
+
},
|
| 302 |
+
{
|
| 303 |
+
"21": {
|
| 304 |
+
"title": "\u201cThe role of network topology for distributed machine learning,\u201d",
|
| 305 |
+
"author": "G. Neglia, G. Calbi, D. Towsley, and G. Vardoyan,",
|
| 306 |
+
"venue": "in IEEE INFOCOM, 2019.",
|
| 307 |
+
"url": null
|
| 308 |
+
}
|
| 309 |
+
},
|
| 310 |
+
{
|
| 311 |
+
"22": {
|
| 312 |
+
"title": "\u201cThroughput-optimal topology design for cross-silo federated\nlearning,\u201d",
|
| 313 |
+
"author": "Othmane Marfoq, Chuan Xu, Giovanni Neglia, and Richard Vidal,",
|
| 314 |
+
"venue": "in Proceedings of the 34th International Conference on Neural\nInformation Processing Systems, Red Hook, NY, USA, 2020, NIPS\u201920, Curran\nAssociates Inc.",
|
| 315 |
+
"url": null
|
| 316 |
+
}
|
| 317 |
+
},
|
| 318 |
+
{
|
| 319 |
+
"23": {
|
| 320 |
+
"title": "\u201cRefined convergence and topology learning for decentralized sgd\nwith heterogeneous data,\u201d",
|
| 321 |
+
"author": "Batiste Le Bars, Aur\u00e9lien Bellet, Marc Tommasi, Erick Lavoie, and Anne-Marie\nKermarrec,",
|
| 322 |
+
"venue": "in Proceedings of The 26th International Conference on\nArtificial Intelligence and Statistics, Francisco Ruiz, Jennifer Dy, and\nJan-Willem van de Meent, Eds. 25\u201327 Apr 2023, vol. 206 of Proceedings\nof Machine Learning Research, pp. 1672\u20131702, PMLR.",
|
| 323 |
+
"url": null
|
| 324 |
+
}
|
| 325 |
+
},
|
| 326 |
+
{
|
| 327 |
+
"24": {
|
| 328 |
+
"title": "\u201cBeyond spectral gap: the role of the topology in decentralized\nlearning,\u201d",
|
| 329 |
+
"author": "Thijs Vogels, Hadrien Hendrikx, and Martin Jaggi,",
|
| 330 |
+
"venue": "in Advances in Neural Information Processing Systems,\nS. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, Eds. 2022,\nvol. 35, pp. 15039\u201315050, Curran Associates, Inc.",
|
| 331 |
+
"url": null
|
| 332 |
+
}
|
| 333 |
+
},
|
| 334 |
+
{
|
| 335 |
+
"25": {
|
| 336 |
+
"title": "\u201cA unified theory of decentralized SGD with changing topology and\nlocal updates,\u201d",
|
| 337 |
+
"author": "Anastasia Koloskova, Nicolas Loizou, Sadra Boreiri, Martin Jaggi, and Sebastian\nStich,",
|
| 338 |
+
"venue": "in ICML, 2020.",
|
| 339 |
+
"url": null
|
| 340 |
+
}
|
| 341 |
+
},
|
| 342 |
+
{
|
| 343 |
+
"26": {
|
| 344 |
+
"title": "Modern Graph Theory,",
|
| 345 |
+
"author": "B\u00e9la Bollob\u00e1s,",
|
| 346 |
+
"venue": "Graduate texts in mathematics. Springer, 2013.",
|
| 347 |
+
"url": null
|
| 348 |
+
}
|
| 349 |
+
},
|
| 350 |
+
{
|
| 351 |
+
"27": {
|
| 352 |
+
"title": "\u201cA faster interior point method for semidefinite programming,\u201d",
|
| 353 |
+
"author": "Haotian Jiang, Tarun Kathuria, Yin Tat Lee, Swati Padmanabhan, and Zhao Song,",
|
| 354 |
+
"venue": "in 2020 IEEE 61st Annual Symposium on Foundations of Computer\nScience (FOCS), 2020, pp. 910\u2013918.",
|
| 355 |
+
"url": null
|
| 356 |
+
}
|
| 357 |
+
},
|
| 358 |
+
{
|
| 359 |
+
"28": {
|
| 360 |
+
"title": "\u201cGraph sparsification by effective resistances,\u201d",
|
| 361 |
+
"author": "Daniel A. Spielman and Nikhil Srivastava,",
|
| 362 |
+
"venue": "in ACM STOC, 2008.",
|
| 363 |
+
"url": null
|
| 364 |
+
}
|
| 365 |
+
},
|
| 366 |
+
{
|
| 367 |
+
"29": {
|
| 368 |
+
"title": "\u201cExpander graphs and their applications,\u201d",
|
| 369 |
+
"author": "Shlomo Hoory, Nathan Linial, and Avi Wigderson,",
|
| 370 |
+
"venue": "Bull. Amer. Math. Soc., vol. 43, no. 04, pp. 439\u2013562, Aug.\n2006.",
|
| 371 |
+
"url": null
|
| 372 |
+
}
|
| 373 |
+
},
|
| 374 |
+
{
|
| 375 |
+
"30": {
|
| 376 |
+
"title": "\u201cRelative expanders or weakly relatively Ramanujan graphs,\u201d",
|
| 377 |
+
"author": "Joel Friedman,",
|
| 378 |
+
"venue": "Duke Mathematical Journal, vol. 118, no. 1, pp. 19 \u2013 35, 2003.",
|
| 379 |
+
"url": null
|
| 380 |
+
}
|
| 381 |
+
},
|
| 382 |
+
{
|
| 383 |
+
"31": {
|
| 384 |
+
"title": "\u201cGenerating random regular graphs,\u201d",
|
| 385 |
+
"author": "Jeong Han Kim and Van H. Vu,",
|
| 386 |
+
"venue": "in Proceedings of the Thirty-Fifth Annual ACM Symposium on\nTheory of Computing, New York, NY, USA, 2003, STOC \u201903, p. 213\u2013222,\nAssociation for Computing Machinery.",
|
| 387 |
+
"url": null
|
| 388 |
+
}
|
| 389 |
+
},
|
| 390 |
+
{
|
| 391 |
+
"32": {
|
| 392 |
+
"title": "\u201cUniform generation of random regular graphs,\u201d",
|
| 393 |
+
"author": "Pu Gao and Nicholas Wormald,",
|
| 394 |
+
"venue": "SIAM Journal on Computing, vol. 46, no. 4, pp. 1395\u20131427,\n2017.",
|
| 395 |
+
"url": null
|
| 396 |
+
}
|
| 397 |
+
},
|
| 398 |
+
{
|
| 399 |
+
"33": {
|
| 400 |
+
"title": "\u201cLink-level measurements from an 802.11b mesh network,\u201d",
|
| 401 |
+
"author": "Daniel Aguayo, John Bicket, Sanjit Biswas, Glenn Judd, and Robert Morris,",
|
| 402 |
+
"venue": "in SIGCOMM, 2004.",
|
| 403 |
+
"url": null
|
| 404 |
+
}
|
| 405 |
+
},
|
| 406 |
+
{
|
| 407 |
+
"34": {
|
| 408 |
+
"title": "\u201cA first look into the carbon footprint of federated learning,\u201d",
|
| 409 |
+
"author": "Xinchi Qiu, Titouan Parcollet, Javier Fernandez-Marques, Pedro P. B. Gusmao,\nDaniel J. Beutel, Taner Topal, Akhil Mathur, and Nicholas D. Lane,",
|
| 410 |
+
"venue": "2021.",
|
| 411 |
+
"url": null
|
| 412 |
+
}
|
| 413 |
+
},
|
| 414 |
+
{
|
| 415 |
+
"35": {
|
| 416 |
+
"title": "Computers and Intractability; A Guide to the Theory of\nNP-Completeness,",
|
| 417 |
+
"author": "Michael R. Garey and David S. Johnson,",
|
| 418 |
+
"venue": "W. H. Freeman I& Co., USA, 1990.",
|
| 419 |
+
"url": null
|
| 420 |
+
}
|
| 421 |
+
}
|
| 422 |
+
],
|
| 423 |
+
"url": "http://arxiv.org/html/2401.03083v2"
|
| 424 |
+
}
|
20240522/2401.08361v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240522/2401.08539v2.json
ADDED
|
@@ -0,0 +1,243 @@
| 1 |
+
{
|
| 2 |
+
"title": "Mapping low-resolution edges to high-resolution paths: the case of traffic measurements in cities",
|
| 3 |
+
"abstract": "We consider the following problem : we have a high-resolution street network of a given city, and low-resolution measurements of traffic within this city. We want to associate to each measurement the set of streets corresponding to the observed traffic. To do so, we take benefit of specific properties of these data to match measured links to links in the street network. We propose several success criteria for the obtained matching. They show that the matching algorithm generally performs very well, and they give complementary ways to detect data discrepancies that makes any matching highly dubious.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Matching items based on their approximate coordinates in some space is a classical but challenging task. It plays a key role in geographical studies, where items often have similar but different coordinates in various databases. The task becomes even more challenging when the databases have different resolutions.\nWe consider here such a situation: a high-resolution map of a city is given, as well as low-resolution measurements performed on some of its main streets. Theses measurements are partial: only a minority of the city streets are included. More importantly, these measurements have a low resolution: each measured street corresponds to several edges within the map.\nThen, the question we address is the following: how to map the low-resolution measurement data onto the high-resolution edges of the city map? This is a crucial preliminary step for any work dealing with real-world traffic measurements in cities.\nBecause measured streets indeed correspond to higher-resolution edges within the city map, a natural approach consists in modeling the city map as a high-resolution urban network of streets and crossings; then matching the extremities of measured streets to nodes of the urban network; and matching each measured street to a shortest path between these two nodes. Indeed, considering the low-resolution measured streets and the urban network have strong topological similarities and the quite obvious fact that streets are mostly straights lines between crossings (or crossings linked to each others by straight lines depending on one\u2019s point of view), we assess this method provides a useful and relevant tool for urban network analysis using real traffic data which will be used for future works on traffic measures network analysis."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Available data",
|
| 15 |
+
"text": "OpenStreetMap [1 ###reference_b1###] is a collaborative project that provides free and open map data at world scale. It relies on open databases provided by various institutions as well as data entered by its users/contributors. In France, most data come from land registry or from IGN111IGN is a French public institution producing and maintaining geographical information for France, see https://ign.fr/institut/identity-card ###reference_###, and they are regularly updated by OSM contributors. This ensures a high reliability for OSM data on France. The OSMnx Python library built on top of OSM allows to easily use those data and perform network analysis on them [2 ###reference_b2###].\nIn an effort to develop open data and related applications, more and more administrations and cities in the world publicly provide their data on dedicated platforms. In particular, many cities provide traffic measurements composed of the coordinates of some sensors deployed in the city and the traffic they observe over time. For instance, Paris 222https://opendata.paris.fr/explore/dataset/referentiel-comptages-routiers/information/ ###reference_referentiel-comptages-routiers/information/###, Berlin 333https://api.viz.berlin.de/daten/verkehrsdetektion ###reference_tektion###, Lyon 444https://www.data.gouv.fr/fr/datasets/comptage-criter-de-la-metropole-de-lyon/ ###reference_age-criter-de-la-metropole-de-lyon/###, Montreal 555https://donnees.montreal.ca/dataset/geobase ###reference_e### or Geneva 666https://ge.ch/sitg/sitg\u02d9catalog/sitg\u02d9donnees?keyword=&geodataid=1530&topic=tous&service=tous&datatype=tous&distribution=tous&sort=auto ###reference_es?keyword=&geodataid=1530&topic=tous&service=tous&datatype=tous&distribution=tous&sort=auto### provide such data.\nIn general, these measures are carefully scrutinized by city hall traffic control officials who provide the data.\nWe take the case of Paris as a paradigmatic example of a large western city for our work. In this case, OSM street network data are very complete and accurate, and the city publicly provides reliable traffic measurements.\nWe obtain the Paris street network using OSMnx as follows.We do not use the OSMnx simplification feature as it deeply modifies the graph. We do use the OSMnx consolidation feature with a tolerance distance of 4 meters - the average width of a road. It is needed to merge some similar OSM nodes which actually are duplicates, but it preserves specific structures such as roundabouts at \u00c9toile in Paris. A more precise description both features can be found OSMnx documentation. We also had to buffer the map by 350 meters in order to include roads slightly outside the city where measurements are provided, typically the ring road entrances and exits. With these parameters, OSMnx provides a directed network of 40,198 nodes and 58,727 links although it is not connected.\nFor Paris traffic measurements, we use the data provided by the city of Paris open-data platform [10 ###reference_b10###]. It relies on a set of more than 3000 sensors, each giving traffic measurements on a sequence of segments that represents a street. Here we consider each segment independently (by cutting sequences into simple segments when required) and associate each sensor to each segment representing its street.\nWe obtain the measurement graph in which edges represent these segments and nodes represent segment extremities. This graph is undirected and not connected, and it has 5594 edges and 5271 nodes."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "Problem and framework",
|
| 21 |
+
"text": "We display in Figure 1 ###reference_### a drawing of the Paris street network obtained from OSM, together with the traffic measurement network. The two networks are defined over the same geographical area but the nodes representing a same entity (e.g., a street extremity) in both networks generally have different coordinates. The goal of this paper is to provide methods to match the links of the measurement network to links in the street network.\n###figure_1### We say that the measurement network is a low-resolution network because its links generally correspond to several links in the street network. Conversely, we say that the street network is a high-resolution network. Figure 2 ###reference_### shows that the measurement links are indeed longer than the links of the street network, in general.\n###figure_2### This leads to the following problem statement.\nInput. We consider a high-resolution directed street network and a low-resolution measurement network with , . In addition, each node in or has coordinates in the Euclidean plane, and each link in or has a length in meters.\nOutput. For each link in we give a subset of of links that we consider to be the streets in that correspond to the measurement of . We also provide a multi-criteria assessment of each proposed matching.\nThis problem is very general, and may be very challenging. We however observe in Figure 1 ###reference_### that the two networks are strongly related, and that for each measurement link there is a path in the street network that follows it closely. We therefore make the following assumptions, that we will use to propose a matching algorithm and assess its results.\nOur first assumption is that the correct/best matching of a measurement link in is a path in from a node close to to a node close to , or conversely. Indeed, each measurement link in a priori consists of a coarse-grain street that may be divided into a sequence of smaller streets in . Therefore, for a given integer , we will consider in the nearest nodes of each extremity of the measurement link, and match this link with paths in between these nodes.\nGoing further, the measurements links correspond to straight pieces of streets and the matching paths should therefore have a length (in meters) similar to the one of the measurement link. This is our second assumption, therefore we will only consider shortest paths, as their length is minimal, like the length of the straight line corresponding to the measurement link.\nFor the same reason, our third assumption states that the considered shortest paths should be close to straight lines. Therefore, we will avoid taking paths with edges that form important angles, either between them or with the measurement link.\nTo capture this, we introduce the following notations. Let us consider a candidate path made of nodes in for matching a measurement link in . We denote by the angle between links and viewed as segments, and we call it the -th running angle. We denote by the angle between the segment corresponding to the measurement link and the one corresponding to the link , and we call it the -th straight-line angle. Notice that running angles and straight-line angles are related by: , , which implies that , .\nLast but not least, our forth assumption is that the matching path remains close to the considered measurement links all along/throughout the path, therefore we will try to minimize the area between the measurement link and the chosen path."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "4",
|
| 25 |
+
"parent_section_id": null,
|
| 26 |
+
"section_name": "Matching algorithm",
|
| 27 |
+
"text": "We consider two input networks and , as well as an integer . For any node in , we denote by the set of the nearest neighbors (with respect to node coordinates) of in . Then, for each link in we compute the set of all shortest paths in from any node in to any node in . We also compute the similar set of paths in the other direction. Finally, output the set of links in that correspond to the path in that minimizes a given criterion, excluding path with length equal to 0 (ie we took the same node in and ).\nWe consider the following library of criteria, in which each link is viewed as a segment:\nthe length difference between the considered measurement link and the considered path (where we added to the path length the distance between the path endpoints and the measurement link endpoints)\nthe average running angle with the measurement link over the path;\nthe average straight-line angle between the links of the path and the measurement link;\nthe area between the path and the measurement link.\nWe run the algorithm using one of these criteria but, for each obtained matching of a measurement link, we also output its relevance with respect to all other criteria. In this way, we give precious indications on the quality of proposed matchings, as we illustrate in next section."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "5",
|
| 31 |
+
"parent_section_id": null,
|
| 32 |
+
"section_name": "Results and discussion",
|
| 33 |
+
"text": "We present here the results obtained by running the algorithm above on the Paris street and measurement networks with , which is representative of a wide variety of cases.\nAs we compute for each measurement link shortest paths, we obtain here candidate paths for matching each of them. Each criterion provides a score for any shortest path, enabling us to analyze and compare them.\nWe noticed 3 edges from the measurement networks were not successfully matched whichever criterion was used : two are on Rue de Rivoli, and the remaining one is located in Bagnolet interchange. Due to Rue de Rivoli now being unavailable for most vehicules but buses and bikes, it has recently been partly removed from the Paris street graph obtained through OSMnx. Hence, connectivity issues explain the matching failure here as some one-way streets remain around there. As for Bagnolet it clearly is a side effect : increasing the buffering even more would help here as it is in fact impossible to find a path from the points selected during the matching, but it might also possibly create similar issues elsewhere. Overall, it mostly reminds us we need to be aware of structural modifications of street networks over time. These unmatched edges thus explain why we will now consider 5591 (successfully) matched low-res measurement edges out of 5594."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "5.1",
|
| 37 |
+
"parent_section_id": "5",
|
| 38 |
+
"section_name": "Score evaluation",
|
| 39 |
+
"text": "###figure_3### ###figure_4### For all cases, we observe in Figure 3 ###reference_### that scores are overall very low (that is logical as we wanted to minimize scores) for most of the paths before a salient increase at the very end. The meaning behind is that the matching algorithm is relevant in picking paths that are overall good candidates for our criteria, and only struggle for a small part of the edges. \nIt is remarkable that both angle criteria have seemingly quite similar behaviour, while the two others do even more : we might wonder if they provide complementary information, but also if LC and AC are just too strong at discriminating as about 99% of the edges have an almost null score while RC and SC work slightly better at ranking edges. \nWhile no rank correlations were observed for any criterion, the scores are very low similar overall for most of the paths selected by the matching algorithm, making it difficult to unravel any real distinction between them but also to have one standing out. Nonetheless, we can still notice from Figure 3 ###reference_###, especially on the right plot, that three different regimes can be observed : the lowest scores on the very left, the highest scores on the very right and then all the others inbetween which are in fact most of the paths."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "5.2",
|
| 43 |
+
"parent_section_id": "5",
|
| 44 |
+
"section_name": "Correlation between criteria scores",
|
| 45 |
+
"text": "###figure_5### ###figure_6### ###figure_7### ###figure_8### On Figure 4 ###reference_###, we provide the correlations obtained between the scores of the set of edges matched for a given criterion and the three other scores for that same set. We can notice that LC and AC appear to have a correlated behaviour whatever the criteria used for matching. Applying matching with SC as criterion also seem to efficiently choose edges with fairly good LC and AC scores while on the opposite using AC or LC as criterion for the algorithm picks a set of edges with a wide range of scores - and not necessarily good ones - for RC and SC. We also notice that minimizing RC provides similar yet better results than SC for most criteria, as SC is seemingly more difficult to minimize than others : this is the criterion with the most low-scored edges. On Figure 5 ###reference_###, we see the output of the matching minimizing RC, where all the lowest-scoring edges are highlighted for each criteria. Unsurprisingly, lowest-scoring edges are the same for LC and AC. We remark here that all criteria but RC point ring-road edges as part of the worst scores, whereas RC mostly has its worst scores on various places inside the city. It might lead to consider applying AC inside the city and RC on the ring road could be an relevant strategy.\n###figure_9### ###figure_10### ###figure_11### ###figure_12### We can also observe on Figure 6 ###reference_### the location of the lowest-scoring edges for RC but considering all four different matching outputs. First of all, we can simply note that no major mistakes seem to be observed if we compare them to Figure 1 ###reference_###. On the one hand, we observe from Figure 6 ###reference_### that LC and AC matching outputs are quite similar yet they both have a tendency to include small links or dead-ends - especially compared to to the two others - as if it was buffered around major roads all around the network, even though AC does it less than LC. On the other hand, RC and SC outputs are also similar and pick more edges than required for the matching but on different places, for example on the outer edge of the ring road where some unrequired loop structures can be seen.\nFocusing now on the location of lowest-scoring edges for each output, there are a lot of similarities and more specifically we can find some of the largest road infrastructures such as Place de l\u2019Etoile or Porte Maillot, meanwhile the Bercy interchange has an extremely complex topology yet it does not seem to be such an issue. We might need to check closer to understand what is really at stake here.\n###figure_13### ###figure_14### ###figure_15### ###figure_16###"
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "5.3",
|
| 49 |
+
"parent_section_id": "5",
|
| 50 |
+
"section_name": " Visual inspection of limit cases",
|
| 51 |
+
"text": "To provide a deeper understanding of the matching on Figure 6 ###reference_### for each criteria, we study more precisely its behaviour on three cases on Figures 8 ###reference_### 7 ###reference_### 9 ###reference_### : a significant roundabout at Etoile and two massive interchanges at Bercy and Porte de Maillot.\nThe best output for Etoile clearly is AC as we can observe small mistakes on all the others. This is way more confusing for the two other cases, although we might want to exclude LC as a relevant criterion for such areas considering missing yet expected edges for Bercy, or RC as relevant due to unexpected edges at Etoile and Maillot. However, it is not surprising that the criteria that mostly value straight-lines fail in complex structures such as interchanges. We could even say that the opposite would have been worrying : in fact we are able to detect areas where in any case here, human intervention would be mandatory to ensure no significant mistake is done. It also seem relevant to consider in such infrastructures, our main hypothesis about the difference in resolution between the two networks is no longer valid or at least weaker.\n###figure_17### ###figure_18### ###figure_19### ###figure_20### ###figure_21### ###figure_22### ###figure_23### ###figure_24### ###figure_25### ###figure_26### ###figure_27### ###figure_28### ###figure_29### ###figure_30### ###figure_31### ###figure_32### ###figure_33### ###figure_34###"
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "6",
|
| 55 |
+
"parent_section_id": null,
|
| 56 |
+
"section_name": "Related work",
|
| 57 |
+
"text": "Several works explore the limits of OSM [3 ###reference_b3###] [4 ###reference_b4###], including ways to overcome missing data [5 ###reference_b5###]. Based on OSM, OSMnx enables a wide range of studies on urban networks dynamics and properties [6 ###reference_b6###] [7 ###reference_b7###] [8 ###reference_b8###]. Among those, Paris street network and congestions dynamics were previously studied by Taillanter [9 ###reference_b9###] with a network analysis restricted to the measurement network of Paris [10 ###reference_b10###]. Although these results are of interest, it may suffer from sensors geographical distribution heterogeneity and some major traffic road may have been cut, with unknown impact on obtained results. We also note that in spatial network theory and geomatics, Lagesse [11 ###reference_b11###] defined the notion of \u201dway\u201d to overcome edges in network theory as they induce side-effects due to the arbitrary selection of edges and nodes inside a specific area while ignoring everything around. Ways are designed in order to avoid significant angular variations along a path.\nThe problem we consider in this paper is close to map matching: the problem of matching a curve in an embedded graph. Wenk et al.studied graph modeling of transport data and geometrical algorithms for road networks and it can be encapsulated as map-matching [12 ###reference_b12###] [13 ###reference_b13###], where their goal was to match traffic traces such as GPS trajectories to edges on a graph considering all types of errors that can be found in real datasets. This might include sampling errors, measurement errors, sensors malfunction or more simply lack of precision which all have a strong influence on map-matching [14 ###reference_b14###]. This recent survey on map-matching methods [15 ###reference_b15###] completes what had been done by Houssou in his thesis [16 ###reference_b16###], especially the part about map matching under network topological constraints. What stands out overall is that map-matching is an easy but tedious task for a human operator that needs to be at least partially automated. This is also a classic task in geography and geomatics, where specialists often use GIS (Geographic Information System), such as QGIS 777https://www.qgis.org/ ###reference_www.qgis.org/###, as they allow to tackle our problem by spatially joining two networks [17 ###reference_b17###] although it works a little too much like a black box for beginners and experienced users alike. These methods were further adapted and improved for the specific case of Volunteered Geographic Information such as OSM but only using geometric features [18 ###reference_b18###] or buffering around a link to find matching elements [19 ###reference_b19###]. Nonetheless, we claim matching nodes to edges is not relevant here (although sensors location is provided) as it would necessarily lead to fallacious results, hence edges matching is by far the most appropriate technique for our problem. What makes our work different from classical map-matching, graph matching or GIS methods is that we intend to map low-resolution edges on high-resolution edges : whereas trajectories might be both long and serpentine, measurements are done on smaller - and necessarily way more straight - edges to provide accurate results, and we take advantage of this topological specificity linking both graphs by using shortest paths in our algorithm."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "7",
|
| 61 |
+
"parent_section_id": null,
|
| 62 |
+
"section_name": "Conclusion",
|
| 63 |
+
"text": "We designed a method taking advantage of the specific features of urban networks to accurately solve our problem. We observed in most cases and for most criteria, this task can be automated for a significant part of the graph even if the very end inevitably requires human verification. A significant asset of our method is that anyone can add its own criteria to be tested and compared to the previous ones in order to improve or adapt the algorithm to some specific situation.\nWe also gained insight on all of our assumptions. Nonetheless, it seems quite obvious some external factors might have a strong influence on the results such as map data resolution and precision, the non-planarity of street networks (that may add some confusion to the matching when picking nearest nodes, especially for bridges, tunnels, or interchanges), and the fact that both datasets might differ over time. A major perspective would be to focus on the unambiguity of the matching as it would be valuable to avoid any high-resolution edge to be matched more than once. Otherwise, how can we find which measurement sensor was the most relevant one among several possibilities for any edge ? Indeed, network topology implies that human verification is utterly mandatory to check how the matching is done, making it sometimes difficult to simply design a relevant criteria working perfectly anywhere in the graph.\nHowever, most of these problems can be solved by hand and, on the whole, we are able to understand where and why the algorithm works correctly or not. We assume it might be improved either by a relevant combination of criteria with specifically required properties (if not all at the same time) or by hand to avoid significant mistakes. Further work could now explore similar cases in other cities to assess those criteria on various street network topologies and deepen our analysis.\nAcknowledgements.\nThis project has received financial support from the CNRS through the MITI interdisciplinary programs and from AID/DGA (Direction G\u00e9n\u00e9rale de l\u2019Armement).\nWe thank Eric Colin de Verdi\u00e8re and Claire Lagesse for helpful discussions."
|
| 64 |
+
}
|
| 65 |
+
],
|
| 66 |
+
"appendix": [],
|
| 67 |
+
"tables": {},
|
| 68 |
+
"image_paths": {
|
| 69 |
+
"1": {
|
| 70 |
+
"figure_path": "2401.08539v2_figure_1.png",
|
| 71 |
+
"caption": "Figure 1: Overlayed drawings of OSM Paris street network (in black) and traffic sensor network (in blue).",
|
| 72 |
+
"url": "http://arxiv.org/html/2401.08539v2/"
|
| 73 |
+
},
|
| 74 |
+
"2": {
|
| 75 |
+
"figure_path": "2401.08539v2_figure_2.png",
|
| 76 |
+
"caption": "Figure 2: Cumulative distribution of link length (in meters) for both networks in Paris",
|
| 77 |
+
"url": "http://arxiv.org/html/2401.08539v2/"
|
| 78 |
+
},
|
| 79 |
+
"3(a)": {
|
| 80 |
+
"figure_path": "2401.08539v2_figure_3(a).png",
|
| 81 |
+
"caption": "Figure 3: Normalized scores for each criterion with all 5591 matched edges ranked in ascending score order (logarithmic scale of the y-axis is used on the right)",
|
| 82 |
+
"url": "http://arxiv.org/html/2401.08539v2/"
|
| 83 |
+
},
|
| 84 |
+
"3(b)": {
|
| 85 |
+
"figure_path": "2401.08539v2_figure_3(b).png",
|
| 86 |
+
"caption": "Figure 3: Normalized scores for each criterion with all 5591 matched edges ranked in ascending score order (logarithmic scale of the y-axis is used on the right)",
|
| 87 |
+
"url": "http://arxiv.org/html/2401.08539v2/"
|
| 88 |
+
},
|
| 89 |
+
"4(a)": {
|
| 90 |
+
"figure_path": "2401.08539v2_figure_4(a).png",
|
| 91 |
+
"caption": "Figure 4: Correlation scores (normalized to [0,1] for both axis each time) of paths matched using a criterion (on the x-axis) with all other criteria one by one (on the y-axis) : first line is LC, then RC, SC, AC. Red dots correspond to correlation with RC, green with SC, blue with AC, orange with LC. Scores are normalized : both axis are [0,1]. Hence the first red plot on top left corner is the correlation between LC and RC scores for edges matched by minimizing LC. The green one just on its right is the correlation between LC and SC scores from matching minimizing LC, and so forth. To understand how to analyze this figure, from the correlation between LC and RC (top-left corner) we focus on the few reds dots around (1,0). These dots are edges with good RC score, but extremely bad LC score (though LC was the focus of the matching here) : this path has very little angular\nvariation yet it is way too short or long.",
|
| 92 |
+
"url": "http://arxiv.org/html/2401.08539v2/"
|
| 93 |
+
},
|
| 94 |
+
"4(b)": {
|
| 95 |
+
"figure_path": "2401.08539v2_figure_4(b).png",
|
| 96 |
+
"caption": "Figure 4: Correlation scores (normalized to [0,1] for both axis each time) of paths matched using a criterion (on the x-axis) with all other criteria one by one (on the y-axis) : first line is LC, then RC, SC, AC. Red dots correspond to correlation with RC, green with SC, blue with AC, orange with LC. Scores are normalized : both axis are [0,1]. Hence the first red plot on top left corner is the correlation between LC and RC scores for edges matched by minimizing LC. The green one just on its right is the correlation between LC and SC scores from matching minimizing LC, and so forth. To understand how to analyze this figure, from the correlation between LC and RC (top-left corner) we focus on the few reds dots around (1,0). These dots are edges with good RC score, but extremely bad LC score (though LC was the focus of the matching here) : this path has very little angular\nvariation yet it is way too short or long.",
|
| 97 |
+
"url": "http://arxiv.org/html/2401.08539v2/"
|
| 98 |
+
},
|
| 99 |
+
"4(c)": {
|
| 100 |
+
"figure_path": "2401.08539v2_figure_4(c).png",
|
| 101 |
+
"caption": "Figure 4: Correlation scores (normalized to [0,1] for both axis each time) of paths matched using a criterion (on the x-axis) with all other criteria one by one (on the y-axis) : first line is LC, then RC, SC, AC. Red dots correspond to correlation with RC, green with SC, blue with AC, orange with LC. Scores are normalized : both axis are [0,1]. Hence the first red plot on top left corner is the correlation between LC and RC scores for edges matched by minimizing LC. The green one just on its right is the correlation between LC and SC scores from matching minimizing LC, and so forth. To understand how to analyze this figure, from the correlation between LC and RC (top-left corner) we focus on the few reds dots around (1,0). These dots are edges with good RC score, but extremely bad LC score (though LC was the focus of the matching here) : this path has very little angular\nvariation yet it is way too short or long.",
|
| 102 |
+
"url": "http://arxiv.org/html/2401.08539v2/"
|
| 103 |
+
},
|
| 104 |
+
"4(d)": {
|
| 105 |
+
"figure_path": "2401.08539v2_figure_4(d).png",
|
| 106 |
+
"caption": "Figure 4: Correlation scores (normalized to [0,1] for both axis each time) of paths matched using a criterion (on the x-axis) with all other criteria one by one (on the y-axis) : first line is LC, then RC, SC, AC. Red dots correspond to correlation with RC, green with SC, blue with AC, orange with LC. Scores are normalized : both axis are [0,1]. Hence the first red plot on top left corner is the correlation between LC and RC scores for edges matched by minimizing LC. The green one just on its right is the correlation between LC and SC scores from matching minimizing LC, and so forth. To understand how to analyze this figure, from the correlation between LC and RC (top-left corner) we focus on the few reds dots around (1,0). These dots are edges with good RC score, but extremely bad LC score (though LC was the focus of the matching here) : this path has very little angular\nvariation yet it is way too short or long.",
|
| 107 |
+
"url": "http://arxiv.org/html/2401.08539v2/"
|
| 108 |
+
},
|
| 109 |
+
"5(a)": {
|
| 110 |
+
"figure_path": "2401.08539v2_figure_5(a).png",
|
| 111 |
+
"caption": "Figure 5: Output of the matching for RC, where the lowest-rated edges for each criteria are highlighted (50 for LC/AC, 300 for RC/SC due to scores on Figure 3).",
|
| 112 |
+
"url": "http://arxiv.org/html/2401.08539v2/"
|
| 113 |
+
},
|
| 114 |
+
"5(b)": {
|
| 115 |
+
"figure_path": "2401.08539v2_figure_5(b).png",
|
| 116 |
+
"caption": "Figure 5: Output of the matching for RC, where the lowest-rated edges for each criteria are highlighted (50 for LC/AC, 300 for RC/SC due to scores on Figure 3).",
|
| 117 |
+
"url": "http://arxiv.org/html/2401.08539v2/"
|
| 118 |
+
},
|
| 119 |
+
"5(c)": {
|
| 120 |
+
"figure_path": "2401.08539v2_figure_5(c).png",
|
| 121 |
+
"caption": "Figure 5: Output of the matching for RC, where the lowest-rated edges for each criteria are highlighted (50 for LC/AC, 300 for RC/SC due to scores on Figure 3).",
|
| 122 |
+
"url": "http://arxiv.org/html/2401.08539v2/"
|
| 123 |
+
},
|
| 124 |
+
"5(d)": {
|
| 125 |
+
"figure_path": "2401.08539v2_figure_5(d).png",
|
| 126 |
+
"caption": "Figure 5: Output of the matching for RC, where the lowest-rated edges for each criteria are highlighted (50 for LC/AC, 300 for RC/SC due to scores on Figure 3).",
|
| 127 |
+
"url": "http://arxiv.org/html/2401.08539v2/"
|
| 128 |
+
},
|
| 129 |
+
"6(a)": {
|
| 130 |
+
"figure_path": "2401.08539v2_figure_6(a).png",
|
| 131 |
+
"caption": "Figure 6: Output of the matching for all criteria, where the 300 lowest-scoring edges for RC are specifically highlighted in red. Top-left is for LC, top-right for RC, bottom-left for SC and bottom-right for AC.",
|
| 132 |
+
"url": "http://arxiv.org/html/2401.08539v2/"
|
| 133 |
+
},
|
| 134 |
+
"6(b)": {
|
| 135 |
+
"figure_path": "2401.08539v2_figure_6(b).png",
|
| 136 |
+
"caption": "Figure 6: Output of the matching for all criteria, where the 300 lowest-scoring edges for RC are specifically highlighted in red. Top-left is for LC, top-right for RC, bottom-left for SC and bottom-right for AC.",
|
| 137 |
+
"url": "http://arxiv.org/html/2401.08539v2/"
|
| 138 |
+
},
|
| 139 |
+
"6(c)": {
|
| 140 |
+
"figure_path": "2401.08539v2_figure_6(c).png",
|
| 141 |
+
"caption": "Figure 6: Output of the matching for all criteria, where the 300 lowest-scoring edges for RC are specifically highlighted in red. Top-left is for LC, top-right for RC, bottom-left for SC and bottom-right for AC.",
|
| 142 |
+
"url": "http://arxiv.org/html/2401.08539v2/"
|
| 143 |
+
},
|
| 144 |
+
"6(d)": {
|
| 145 |
+
"figure_path": "2401.08539v2_figure_6(d).png",
|
| 146 |
+
"caption": "Figure 6: Output of the matching for all criteria, where the 300 lowest-scoring edges for RC are specifically highlighted in red. Top-left is for LC, top-right for RC, bottom-left for SC and bottom-right for AC.",
|
| 147 |
+
"url": "http://arxiv.org/html/2401.08539v2/"
|
| 148 |
+
},
|
| 149 |
+
"7(a)": {
|
| 150 |
+
"figure_path": "2401.08539v2_figure_7(a).png",
|
| 151 |
+
"caption": "Figure 7: Etoile : OSM and measurements networks - Outputs for LC/RC/SC/AC",
|
| 152 |
+
"url": "http://arxiv.org/html/2401.08539v2/extracted/2401.08539v2/courbes_tolerance4/limit_cases/etoiles/etoiles_osm.png"
|
| 153 |
+
},
|
| 154 |
+
"7(b)": {
|
| 155 |
+
"figure_path": "2401.08539v2_figure_7(b).png",
|
| 156 |
+
"caption": "Figure 7: Etoile : OSM and measurements networks - Outputs for LC/RC/SC/AC",
|
| 157 |
+
"url": "http://arxiv.org/html/2401.08539v2/extracted/2401.08539v2/courbes_tolerance4/limit_cases/etoiles/etoiles_mesures.png"
|
| 158 |
+
},
|
| 159 |
+
"7(c)": {
|
| 160 |
+
"figure_path": "2401.08539v2_figure_7(c).png",
|
| 161 |
+
"caption": "Figure 7: Etoile : OSM and measurements networks - Outputs for LC/RC/SC/AC",
|
| 162 |
+
"url": "http://arxiv.org/html/2401.08539v2/extracted/2401.08539v2/courbes_tolerance4/limit_cases/etoiles/etoile_LC.png"
|
| 163 |
+
},
|
| 164 |
+
"7(d)": {
|
| 165 |
+
"figure_path": "2401.08539v2_figure_7(d).png",
|
| 166 |
+
"caption": "Figure 7: Etoile : OSM and measurements networks - Outputs for LC/RC/SC/AC",
|
| 167 |
+
"url": "http://arxiv.org/html/2401.08539v2/extracted/2401.08539v2/courbes_tolerance4/limit_cases/etoiles/etoile_RC.png"
|
| 168 |
+
},
|
| 169 |
+
"7(e)": {
|
| 170 |
+
"figure_path": "2401.08539v2_figure_7(e).png",
|
| 171 |
+
"caption": "Figure 7: Etoile : OSM and measurements networks - Outputs for LC/RC/SC/AC",
|
| 172 |
+
"url": "http://arxiv.org/html/2401.08539v2/extracted/2401.08539v2/courbes_tolerance4/limit_cases/etoiles/etoile_SC.png"
|
| 173 |
+
},
|
| 174 |
+
"7(f)": {
|
| 175 |
+
"figure_path": "2401.08539v2_figure_7(f).png",
|
| 176 |
+
"caption": "Figure 7: Etoile : OSM and measurements networks - Outputs for LC/RC/SC/AC",
|
| 177 |
+
"url": "http://arxiv.org/html/2401.08539v2/extracted/2401.08539v2/courbes_tolerance4/limit_cases/etoiles/etoile_AC.png"
|
| 178 |
+
},
|
| 179 |
+
"8(a)": {
|
| 180 |
+
"figure_path": "2401.08539v2_figure_8(a).png",
|
| 181 |
+
"caption": "Figure 8: Bercy : OSM and measurements networks - Outputs for LC/RC/SC/AC",
|
| 182 |
+
"url": "http://arxiv.org/html/2401.08539v2/extracted/2401.08539v2/courbes_tolerance4/limit_cases/bercy/bercy_osm.png"
|
| 183 |
+
},
|
| 184 |
+
"8(b)": {
|
| 185 |
+
"figure_path": "2401.08539v2_figure_8(b).png",
|
| 186 |
+
"caption": "Figure 8: Bercy : OSM and measurements networks - Outputs for LC/RC/SC/AC",
|
| 187 |
+
"url": "http://arxiv.org/html/2401.08539v2/extracted/2401.08539v2/courbes_tolerance4/limit_cases/bercy/bercy_measures.png"
|
| 188 |
+
},
|
| 189 |
+
"8(c)": {
|
| 190 |
+
"figure_path": "2401.08539v2_figure_8(c).png",
|
| 191 |
+
"caption": "Figure 8: Bercy : OSM and measurements networks - Outputs for LC/RC/SC/AC",
|
| 192 |
+
"url": "http://arxiv.org/html/2401.08539v2/extracted/2401.08539v2/courbes_tolerance4/limit_cases/bercy/LC_bercy.png"
|
| 193 |
+
},
|
| 194 |
+
"8(d)": {
|
| 195 |
+
"figure_path": "2401.08539v2_figure_8(d).png",
|
| 196 |
+
"caption": "Figure 8: Bercy : OSM and measurements networks - Outputs for LC/RC/SC/AC",
|
| 197 |
+
"url": "http://arxiv.org/html/2401.08539v2/extracted/2401.08539v2/courbes_tolerance4/limit_cases/bercy/RC_bercy.png"
|
| 198 |
+
},
|
| 199 |
+
"8(e)": {
|
| 200 |
+
"figure_path": "2401.08539v2_figure_8(e).png",
|
| 201 |
+
"caption": "Figure 8: Bercy : OSM and measurements networks - Outputs for LC/RC/SC/AC",
|
| 202 |
+
"url": "http://arxiv.org/html/2401.08539v2/extracted/2401.08539v2/courbes_tolerance4/limit_cases/bercy/SC_bercy.png"
|
| 203 |
+
},
|
| 204 |
+
"8(f)": {
|
| 205 |
+
"figure_path": "2401.08539v2_figure_8(f).png",
|
| 206 |
+
"caption": "Figure 8: Bercy : OSM and measurements networks - Outputs for LC/RC/SC/AC",
|
| 207 |
+
"url": "http://arxiv.org/html/2401.08539v2/extracted/2401.08539v2/courbes_tolerance4/limit_cases/bercy/AC_bercy.png"
|
| 208 |
+
},
|
| 209 |
+
"9(a)": {
|
| 210 |
+
"figure_path": "2401.08539v2_figure_9(a).png",
|
| 211 |
+
"caption": "Figure 9: Maillot : OSM and measurements networks - Outputs for LC/RC/SC/AC",
|
| 212 |
+
"url": "http://arxiv.org/html/2401.08539v2/extracted/2401.08539v2/courbes_tolerance4/limit_cases/maillot/maillot_osm.png"
|
| 213 |
+
},
|
| 214 |
+
"9(b)": {
|
| 215 |
+
"figure_path": "2401.08539v2_figure_9(b).png",
|
| 216 |
+
"caption": "Figure 9: Maillot : OSM and measurements networks - Outputs for LC/RC/SC/AC",
|
| 217 |
+
"url": "http://arxiv.org/html/2401.08539v2/extracted/2401.08539v2/courbes_tolerance4/limit_cases/maillot/maillot_bornes.png"
|
| 218 |
+
},
|
| 219 |
+
"9(c)": {
|
| 220 |
+
"figure_path": "2401.08539v2_figure_9(c).png",
|
| 221 |
+
"caption": "Figure 9: Maillot : OSM and measurements networks - Outputs for LC/RC/SC/AC",
|
| 222 |
+
"url": "http://arxiv.org/html/2401.08539v2/extracted/2401.08539v2/courbes_tolerance4/limit_cases/maillot/maillot_LC.png"
|
| 223 |
+
},
|
| 224 |
+
"9(d)": {
|
| 225 |
+
"figure_path": "2401.08539v2_figure_9(d).png",
|
| 226 |
+
"caption": "Figure 9: Maillot : OSM and measurements networks - Outputs for LC/RC/SC/AC",
|
| 227 |
+
"url": "http://arxiv.org/html/2401.08539v2/extracted/2401.08539v2/courbes_tolerance4/limit_cases/maillot/maillot_RC.png"
|
| 228 |
+
},
|
| 229 |
+
"9(e)": {
|
| 230 |
+
"figure_path": "2401.08539v2_figure_9(e).png",
|
| 231 |
+
"caption": "Figure 9: Maillot : OSM and measurements networks - Outputs for LC/RC/SC/AC",
|
| 232 |
+
"url": "http://arxiv.org/html/2401.08539v2/extracted/2401.08539v2/courbes_tolerance4/limit_cases/maillot/maillot_SC.png"
|
| 233 |
+
},
|
| 234 |
+
"9(f)": {
|
| 235 |
+
"figure_path": "2401.08539v2_figure_9(f).png",
|
| 236 |
+
"caption": "Figure 9: Maillot : OSM and measurements networks - Outputs for LC/RC/SC/AC",
|
| 237 |
+
"url": "http://arxiv.org/html/2401.08539v2/extracted/2401.08539v2/courbes_tolerance4/limit_cases/maillot/maillot_AC.png"
|
| 238 |
+
}
|
| 239 |
+
},
|
| 240 |
+
"validation": true,
|
| 241 |
+
"references": [],
|
| 242 |
+
"url": "http://arxiv.org/html/2401.08539v2"
|
| 243 |
+
}
|
20240522/2401.09962v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240522/2401.15330v3.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240522/2402.00853v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240522/2402.01965v3.json
ADDED
|
@@ -0,0 +1,394 @@
| 1 |
+
{
|
| 2 |
+
"title": "Analyzing Neural Network-Based Generative Diffusion Models through Convex Optimization",
|
| 3 |
+
"abstract": "Diffusion models are gaining widespread use in cutting-edge image, video, and audio generation. Score-based diffusion models stand out among these methods, necessitating the estimation of score function of the input data distribution. In this study, we present a theoretical framework to analyze two-layer neural network-based diffusion models by reframing score matching and denoising score matching as convex optimization. We prove that training shallow neural networks for score prediction can be done by solving a single convex program. Although most analyses of diffusion models operate in the asymptotic setting or rely on approximations, we characterize the exact predicted score function and establish convergence results for neural network-based diffusion models with finite data. Our results provide a precise characterization of what neural network-based diffusion models learn in non-asymptotic settings.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Diffusion models [19 ###reference_b19###] were proposed to tackle the problem of sampling from an unknown distribution and is later shown to be able to generate high quality images [10 ###reference_b10###]. Song et al. [22 ###reference_b22###] recognize diffusion model as an example of score-based models which iteratively exploit Langevin dynamics to produce data from an unknown distribution. This approach only requires the estimation of the score function of the data distribution. Specifically, the simplest form of Langevin Monte Carlo procedure involves first sampling from an initial distribution, then repeating the following steps\nwhere is an independently generated i.i.d. Gaussian noise and is a small constant. Here, is known as the score function of the distribution we desire to sample from. It can be shown that under certain conditions [2 ###reference_b2###], we obtain iterates distributed according to the target distribution as tends to zero and number of iterations tends to infinity. Langevin dynamics sampling procedure suggests that we can attempt to sample from an unknown distribution as long as we can estimate the score function of this distribution at each data point, which is the key observation in current diffusion models designed for generative tasks. In practice, deep neural networks are trained to minimize variants of score matching objective for fitting the score function.\nExisting literature on the theory of diffusion models typically establish convergence of diffusion process when the learned score function approximates the score of unknown data distribution well, but in reality only empirical approximation is available due to finite training samples and limited neural network (NN) capacity. Current literature falls short in understanding the role of NN approximation error for score-based generative models and it is also difficult to characterize the distribution from which these models sample in practice. However, in [15 ###reference_b15###, 27 ###reference_b27###, 28 ###reference_b28###], the authors show NN-based score-based generative models given finite training data usually generalize well due to approximation errors introduced by limited model capacity and also optimization errors, recognizing the critical role NN approximation error plays in effectiveness of current large diffusion models. This work contributes to understanding neural network approximation error in finite data regime when trained with score matching or denoising score matching objective, which is crucial for understanding neural network-based score-based generative models. Specifically, we answer the following question:\nHow do NNs approximate the distribution when trained with a (denoising) score matching objective given finite samples and a limited number of neurons?\nTo summarize, we reframe the (denoising) score matching problem with two-layer neural network as a convex program and characterize the exact form of predicted score function for two-layer neural networks with finite data samples. We establish convergence result for neural network-based Langevin sampling, which serves as a core backbone of nowadays generative models used in application. Our convex program for score matching objective bypasses the Jacobian computation issue for piecewise linear activation function such as ReLU, which stabilizes the training procedure and can have practical benefits. All theoretic findings are corroborated with simulation resutls."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Background",
|
| 15 |
+
"text": "Diffusion model has been shown to be useful in various generative tasks including image generation [10 ###reference_b10###], audio generation [30 ###reference_b30###], and text generation [26 ###reference_b26###]. Variants of diffusion models such as denoising diffusion implicit model [20 ###reference_b20###] have been designed to speedup sample generation procedure. The key to score-based diffusion model is the estimation of score function at any data point. In practice, a deep neural network model is trained to minimize variants of the score matching objective and is used for score function estimation. The score matching objective can be shown to be equivalent up to a constant to\nwhich is more practical since is usually not directly available. To help alleviate the computation overhead in computing trace of Jacobian in (1 ###reference_###) for deep neural network and high dimensional data, sliced score matching [21 ###reference_b21###] which exploits trace estimation method for trace of Jacobian evaluation has been proposed. Another variant used more commonly nowadays in conjunction with annealed Langevin sampling is denoising score matching [24 ###reference_b24###] which considers sampling from a perturbed distribution and totally circumvents the computation of trace of Jacobian.\nTheory guarantees for diffusion models relate to convergence of log-concave sampling procedure which contains Langevin dynamics. Prior literature establishes convergence of final distribution of log-concave sampling procedure to ground truth data distribution under mild condition with exact score function at each sample point being known [2 ###reference_b2###]. Recent work [12 ###reference_b12###] characterizes the generalization error for NN-based score models with bounds in number of neurons and obtain vanishing generalization gap when number of neurons tends to infinity, i.e., when the approximation error vanishes. Though neural network approximation error has been recognized to be core to the generalization capability of diffusion models in deep learning, existing work falls short in characterizing the exact score function learned by neural network with finite samples. Our work focuses on analyzing what neural network-based score model learns in finite regime. Specifically, we show that the score matching objective fitted with two-layer neural network can be reparametrized as a quadratic convex program and solved directly to global optimality and the predicted score function will be piecewise linear with kinks only at training data points. We also investigate cases where the convex program can be solved analytically and we observe that the predicted score function may not integrate to be concave and thus only convergence to local stationary point is guaranteed.\nBesides theoretic interest mentioned above, our convex programs may have practical benefit since they stabilize the training procedure due to convexity. Moreover, for commonly used activation function such as ReLU, trace of Jacobian involves threshold function which has zero gradient almost everywhere. Therefore, conventional gradient-based optimizers may face difficulties minimizing the training objective. Our convex programs bypass this Jacobian computation issue and thus gain advantage.\nTo our best knowledge, this is the first work characterizing the exact score function learned by two-layer neural network with finite data samples and this is also the first convex program derived for score-matching objective. 
Our work is closely related to prior convex neural network theories [17 ###reference_b17###, 18 ###reference_b18###, 7 ###reference_b7###] which consider mainly squared loss instead. A recent work [29 ###reference_b29###] tackles very similar problems as ours and also studies the NN approximation error in score matching fitting, though there are several key differences which we defer to Appendix A ###reference_### for sake of page limitation. In below, we present the convex program derived for score matching objective in Section 3 ###reference_### with exact score characterization and convergence results established. We further delve into the denoising score matching fitting in Section 4 ###reference_###. We present simulation results with both Langevin Monte Carlo and annealed Langevin sampling with our convex score predictor in Section 5 ###reference_###. Conclusion and future work is discussed in Section 6 ###reference_###.\nNotation. We first introduce some notations we will use in later sections.\nWe use to denote the sign function taking value when and otherwise, and to denote the - valued indicator function taking value when the argument is a true Boolean statement. For any vector , and applies elementwise. We denote the pseudoinverse of matrix as . We denote subgradient of a convex function at as . For any vector , len() denote the dimension of Standard asymptotic notation is used, i.e., for any sequence and any given , we use and to represent and respectively for some We use (also ) when both and ."
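For concreteness, here is a PyTorch sketch (ours, not the paper's code) of the empirical score matching objective (1), with the trace of the Jacobian computed exactly by autograd; for ReLU networks this trace term has zero gradient almost everywhere, which is exactly the optimization difficulty noted above. `model` is assumed to map a vector in R^d to a score vector in R^d:

```python
import torch

def score_matching_loss(model, X):
    """Empirical score matching objective (1):
    mean over samples of tr(J_s(x)) + 0.5 * ||s(x)||^2."""
    total = 0.0
    for x in X:  # X has shape (n, d)
        x = x.detach().requires_grad_(True)
        s = model(x)  # s : R^d -> R^d
        # exact Jacobian trace: sum of diagonal entries d s_i / d x_i
        trace = sum(torch.autograd.grad(s[i], x, create_graph=True)[0][i]
                    for i in range(x.numel()))
        total = total + trace + 0.5 * s.pow(2).sum()
    return total / len(X)
```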
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "Score Matching",
|
| 21 |
+
"text": "In this section, we derive convex program for score matching fitting problem with two-layer neural network and establish convergence results for neural network-based Langevin sampling procedure. We detail the neural network architecture being studied in Section 3.1 ###reference_### and present the corresponding convex program in Section 3.2 ###reference_###, score prediction characterization and convergence theory is included in Section 3.3 ###reference_###.\nFor sake of clarity, we present results for NN without skip connection here in the main content and leave results with more general architecture in Appendix C.3 ###reference_###."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "3.1",
|
| 25 |
+
"parent_section_id": "3",
|
| 26 |
+
"section_name": "Score Matching Problem and Neural Network Architectures",
|
| 27 |
+
"text": "Let denote a neural network parameterized by parameter with output dimension the same as input data dimension, which is required for score matching estimation and is captured by specific U-Net used in nowadays diffusion model implementation. With data samples, the empirical version of score matching objective (1 ###reference_###) is\nThe final training loss we consider is the above score matching objective together with weight decay term, which writes\nwhere denotes the parameters to be regularized. We note that a non-zero weight decay is indeed core for the optimal value to stay finite, see Appendix B ###reference_### for explanation, which rationalizes the additional weight decay term involved here. Let denote number of hidden neurons. Consider two-layer neural network of general form as below\nwith activation function , parameter and where is the input data, is the first-layer weight, is the first-layer bias, is the second-layer weight, is the second-layer bias and is the skip connection coefficient."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3.2",
|
| 31 |
+
"parent_section_id": "3",
|
| 32 |
+
"section_name": "Convex Programs",
|
| 33 |
+
"text": "We describe separately the derived convex program for univariate data and multivariate data here since our score prediction characterization and convergence result in below (Section 3.3 ###reference_###) focuses on univariate data and thus presenting the univariate data convex program explicitly helps improve readability.\nUnivariate Data. Consider training data . Score matching fitting with objective (2 ###reference_###) is equivalent to solving a quadratic convex program in the sense that both problems have same optimal value and an optimal NN parameter set which achieves minimal loss can be derived from the solution to the corresponding convex program. We detail this finding in the following theorem,\nWhen is ReLU or absolute value activation and , denote the optimal score matching objective value (2 ###reference_###) with specified in (3 ###reference_###) as when and 111Note when , the optimal value to problem (2 ###reference_###) may be unbounded, see Appendix B ###reference_### for explanation.\nwhere the entries of are determined by the pairwise distances between data points, and the entries of correspond to the derivative of evaluated at entries of (see Appendix C.2 ###reference_### for the formulas).\nSee Appendix C.2 ###reference_###.\n\u220e\nMore precisely, when is absolute value function, quadratic term coefficient is formed by first taking pairwise distance between data points as then we normalize column-wise with mean reduced to form , the desired is simply a concatenation of two copies of as Linear term coefficient is column sum of two matrices and and formally writes . Once an optimal solution to the convex program (4 ###reference_###) has been derived, we can reconstruct an optimal NN parameter set that achieves minimal training loss simply from data points and See Appendix C.2 ###reference_### for the reconstruction procedure. Given all this, with known, for any test data , the predicted score is given by\nwhere Remarkbly, the optimal score is a piecewise linear function with breakpoints only at a subset of data points. When training data points are highly separated, the optimal score approximately corresponds to the score function of a mixture of Gaussians with centroids at . The breakpoints delineate the ranges of each Gaussian component.\nMultivariate Data. To state the convex program for multivariate data, we first introduce the concept of arrangement matrices. When is arbitrary, for data matrix and any arbitrary vector , We consider the set of diagonal matrices\nwhich takes value or along the diagonal that indicates the set of possible arrangement activation patterns for the ReLU activation. Indeed, we can enumerate the set of sign patterns as where is bounded by\nfor [17 ###reference_b17###, 23 ###reference_b23###]. Since the proof of Theorem 3.1 ###reference_theorem1### is closely tied to reconstruction of optimal neurons and does not trivially extend to multivariate data, we instead build on [14 ###reference_b14###] and employ an alternative duality-free proof to derive our conclusion for multivariate data. The result holds for zero and , i.e., when there is no bias term, skip connection, and weight decay added. We present here result for ReLU . 
See Appendix C.4 ###reference_### for conclusion for the case when is absolute value activation.\nWhen is ReLU, and\n all zero, denote the optimal score matching objective value (2 ###reference_###) with specified in (3 ###reference_###) as , when , under Assumption C.6 ###reference_theorem6###,\nSee Appendix C.4 ###reference_###.\n\u220e\nPrior work [11 ###reference_b11###, 13 ###reference_b13###] observes that with linear activation, the optimal weight matrix of score fitting reduces to empirical precision matrix which models the correlation between data points and the authors exploit this fact in graphical model construction. Here we show that the optimal \u2019s solved for (5 ###reference_###) correspond to piecewise empirical covariance estimator and therefore the non-linear two-layer NN is a more expressive model compared to prior linear models. To see this, we first write , then the convex program (5 ###reference_###) can be rewritten as\nWhen the optimal value is finite, e.g., , an optimal solution to (6 ###reference_###) is given by\nwhere and is the generator of . The above expression for can be seen as a (negative) piecewise empirical covariance estimator which partitions the space with hyperplane arrangements. When and , reduces to the empirical precision matrix corresponding to linear activation model."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3.3",
|
| 37 |
+
"parent_section_id": "3",
|
| 38 |
+
"section_name": "Score Prediction and Convergence Result",
|
| 39 |
+
"text": "In this section, we delve into the convex program (4 ###reference_###) and show that with distinct data points and large weight decay, (4 ###reference_###) can be solved analytically and the integration of predicted score function is always concave for ReLU activation, which aligns with theoretic assumptions for Langevin sampling procedures. We then establish convergence result for Langevin dynamics in this regime. Though the same observation does not persist for absolute value activation, where the predicted score function may integrate to be non-concave and thus only convergence to stationary points holds. All notations in this section follow Section 3.2 ###reference_###.\nScore Prediction. Consider the case when is ReLU and , let denote the sample mean and denote the sample variance. We know and following Appendix C.2 ###reference_###. When is optimal and the neural network will always predict zero score no matter what input it takes. When for some threshold 222See Appendix D.1.1 ###reference_.SSS1### for value of ., is all zero except for the first and the -th entry, which have value and for some respectively 333See Appendix D.1.1 ###reference_.SSS1### for proof.. Therefore, for any input data point the predicted score is\n###figure_1### for some Left plot in Figure 1 ###reference_### provides a visualization of (7 ###reference_###) and its integration. Note within sampled data range, the predicted score function aligns with score function of Gaussian distribution parameterized by sample mean and sample variance ; outside data range, the predicted score function is a linear interpolation. The integration of score function is always concave in this case, and therefore Langevin dynamics sampling with predicted score function has well-established convergence guarantees [6 ###reference_b6###, 3 ###reference_b3###, 5 ###reference_b5###].\nContrarily, when is absolute value activation and still , when goes below and stays above some threshold (see Appendix D.1.2 ###reference_.SSS2### for details), for any test data , the corresponding predicted score is given by\nRight plot in Figure 1 ###reference_### depicts the score prediction and its integration. Within the sampled data range, the score prediction corresponds to score of Gaussian distribution parameterized by sample mean and sample variance which is the same as the score predicted by ReLU neural network. The score prediction outside sampled data range is still a linear interpolation but with a different slope from what is predicted by the ReLU neural network. This underscores the distinction between absolute value activation and ReLU activation. The corresponding probability density when being absolute value activation is log-concave only when . Notably, the solution with corresponds to the unique minimum norm solution of the convex program (4 ###reference_###), highlighting its significance. Here the score prediction no longer corresponds to log-concave distribution except for the min-norm case and classic convergence theory has only theoretic assurance for converging to stationary points [2 ###reference_b2###].\nWhen skip connection is added, i.e., the optimal score prediction corresponding to is no longer zero and the corresponding optimal neural network parameter set is given by . 
For any test data , the predicted score is given by\nwhich aligns with the score function of Gaussian distribution with mean being sample mean and variance being sample variance.Therefore, adding skip connection would change the zero score prediction to a linear function parameterized by sample mean and variance in the large weight decay regime. See Appendix D.1.3 ###reference_.SSS3### and D.1.4 ###reference_.SSS4### for details.\nConvergence Result. Here we state our convergence result for Langevin sampling with NN-based score predictor. Strong convergence guarantees for Langevin Monte Carlo method are often contingent upon the log-concavity of the target distribution. Consider two-layer ReLU network without skip connection and consider training points . We have derived that the NN-predicted score function for any input distribution is always concave given , thus we can exploit existing convergence results for log-concave sampling to derive the convergence of Langevin dynamics with a neural network-based score function, which we state formally as the below theorem.\nWhen used in Algoritm 2 ###reference_### is of two-layer ReLU (without skip connection) trained to optimal with Algorithm 1 ###reference_### and let denote the target distribution (defined below). In Algorithm 2 ###reference_###, for any , if we take step size then for , it holds that after\nwhere denotes 2-Wasserstein distance and satisfies\nfor some .\nSee Appendix D.2 ###reference_###.\n\u220e\nTo the best of our knowledge, prior to our study, there has been no characterization of the sample distribution generated by Algorithm 2 ###reference_### when the score model is trained using Algorithm 1 ###reference_###."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "4",
|
| 43 |
+
"parent_section_id": null,
|
| 44 |
+
"section_name": "Denoising Score Matching",
|
| 45 |
+
"text": "To tackle the difficulty in computation of trace of Jacobian required in score matching objective (1 ###reference_###), denoising score matching has been proposed in [24 ###reference_b24###]. It then becomes widely used in practical generative models, especially for its natural conjunction with annealed Langevin sampling procedure, which forms the current mainstream noising/denoising paradigm of large-scale diffusion models being used in popular applications. In this section, we reframe denoising score matching fitting problem with two-layer neural network as a convex program which can be solved to global optimality stably. We empirically verify the validity of our theoretic findings in Section 5 ###reference_###.\nTo briefly review, denoising score matching first perturbs data points with a predefined noise distribution and then estimates the score of the perturbed data distribution. When the noise distribution is chosen to be standard Gaussian, for some noise level , the objective is equivalent to\nwith the empirical version given by\nwhere are samples from and are samples from standard Gaussian. The final training loss we consider is the above score matching objective together with weight decay term, which writes\nwhere denotes the parameters to be regularized. Unlike for score matching objective where weight decay is important for optimal objective value to stay finite, here for denoising objective, weight decay is unnecessary and can be removed. In our derived convex program, we allow to be arbitrarily close to zero so the result is general. Note (9 ###reference_###) circumvents the computation of trace of Jacobian and is thus more applicable for training tasks in large data regime. We consider the same neural network architecture described in Section 3.1 ###reference_### except that here we only consider case for . Like in Section 3.2 ###reference_###, we still present our conclusion for univariate data and multivariate data separately so that we can easily demonstrate deeper investigation on univariate data findings.\nUnivariate Data. Consider training data . Denoising score matching fitting with objective (9 ###reference_###) is equivalent to solving a lasso problem in the sense that both problems have same optimal value and an optimal NN parameter set which achieves minimal loss can be derived from the solution to the corresponding lasso program. The difference between convex program of denoising score matching fitting and that of score matching fitting is that no linear term is included in this scenario. We detail this finding in the following theorem,\nWhen is ReLU or absolute value activation and , denote the optimal denoising score matching objective value (9 ###reference_###) with specified in (3 ###reference_###) as when and ,\nwhere the entries of are determined by the pairwise distances between data points.\nSee Appendix E.1 ###reference_###.\n\u220e\nFor demonstration, consider being ReLU, then coefficient matrix is concatenation of two matrices, i.e., where is the column-mean-subtracted version of and measures pairwise distance between data points. Similarly, we have with Label vector is the mean-subtracted version of original training label Once an optimal to (10 ###reference_###) has been derived, we can construct an optimal NN parameter set that achieves minimal training loss out of and data points. See Appendix E.1 ###reference_### for details. Under this construction, with value of known, given any test data , NN-predicted score is\nwith . 
We then proceed to present our multivariate data result, which holds for and being all zero due to a change of our proof paradigm.\nMultivariate Data. Let denote the label matrix, i.e., , and be the arrangement activation patterns for ReLU activation as defined in Section 3.2 ###reference_###, we have the following result for ReLU . See Appendix E.2 ###reference_### for also convex program for absolute value activation.\nWhen is ReLU, and\n all zero, denote the optimal denoising score matching objective value (9 ###reference_###) with specified in (3 ###reference_###) as , when , under Assumption C.6 ###reference_theorem6###,\nSee Appendix E.2 ###reference_###.\n\u220e\nThe derived convex program (11 ###reference_###) is a simple least square fitting and any convex program solver can be used to solve it efficiently."
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "Numerical Results",
"text": "Finally, we corroborate our previous findings with synthetic data simulation. For sake of page limitation, we present here in main text some of our simulation results for score matching fitting (Section 3.2 ###reference_###) with Langevin sampling and denoising score matching fitting (Section 4 ###reference_###) with annealed Langevin sampling. We defer more empirical observations to Appendix G ###reference_###."
},
{
"section_id": "5.1",
"parent_section_id": "5",
"section_name": "Score Matching Simulations",
"text": "###figure_2### For score matching fitting problems, we verify both the validity of our convex program (Equation 4 ###reference_###) and our score prediction characterization (Figure 1 ###reference_###) with univariate Gaussian data and we show that the derived convex program (Equation 4 ###reference_###) is also able to capture two-component Gaussian mixture distribution. We also present sampling histograms with Langevin dynamics (non-annealed) aided by our convex score predictor. For univariate Gaussian data simulation, we set . Plot (1) in Figure 2 ###reference_### compares objective value of non-convex training with Adam optimizer and our convex program loss solved via CVXPY [4 ###reference_b4###]. The dashed blue line denotes our convex program objective value which solves the training problem globally and stably. Plot (2) is for score prediction, which verifies our analytical characterization in Section 3.3 ###reference_### and aligns with Figure 1 ###reference_###. Plot (3) shows sampling histogram via Langevin dynamics which recognizes the underline Gaussian as desired. The right figure in Figure 2 ###reference_### repeats the same experiments for two-component Gaussian mixture distribution with a slightly small value since we known from Section 3.3 ###reference_### that cannot capture Gaussian mixture distribution. Our convex program identifies the underline distribution accurately. See Appendix F.1 ###reference_### for more experimental details."
},
{
"section_id": "5.2",
"parent_section_id": "5",
"section_name": "Denoising Score Matching Simulations",
"text": "###figure_3### For denoising score matching fitting problems, we verify our derived program (11 ###reference_###) for spiral data and present sampling results with annealed Langevin process integrated with our convex score predictor. For easier computation, we switch to a variant of program (11 ###reference_###) in our implementation, see Appendix F.2.1 ###reference_.SSS1### for details. The left plot in Figure 3 ###reference_### shows the spiral training data and the second left plot depicts the score predicted by our convex score predictor solved with CVXPY [4 ###reference_b4###]. It can be clearly observed that the score prediction already aligns with the training data shape.The five right plots in Figure 3 ###reference_### are sampling results with annealed Langevin process aided with our convex score predictor after different levels of denoising, see Appendix F.2 ###reference_### for experimental details. Our convex program for denoising score matching works well in capturing training data distribution."
},
{
"section_id": "6",
"parent_section_id": null,
"section_name": "Conclusion and Future Work",
"text": "In this work, we analyze neural network-based diffusion models via lens of convex optimization. We derive equivalent convex programs for (denoising) score matching fitted with two-layer neural network. For Langevin dynamics with NN-based score predictor, we first characterize the score prediction by solving the derived convex program analytically and we then establish convergence result based on existing convergence theory for log-concave sampling procedure. Notably for univariate data, for certain weight decay range, the predicted score would capture Gaussian distribution characterized by sample mean and sample variance no matter what input distribution is and for general weight decay, score prediction aligns with Gaussian mixture distribution when training data is highly separated. Besides theoretic interest, the derived convex program has potential empirical benefits since it bypasses the difficulty of using gradient-based optimizers due to the Jacobian terms. Our theoretic findings are corroborated with simulation results. For future work, our proof technique can be extend easily to networks of arbitrary depth by considering convex reparameterizations (see e.g., [8 ###reference_b8###, 25 ###reference_b25###])."
},
{
"section_id": "7",
"parent_section_id": null,
"section_name": "Acknowledgement",
"text": "This work was supported in part by the National Science Foundation (NSF) under Grant DMS-2134248; in part by the NSF CAREER Award under Grant CCF-2236829; in part by the U.S. Army Research Office Early Career Award under Grant W911NF-21-1-0242; and in part by the Office of Naval Research under Grant N00014-24-1-2164."
}
],
"appendix": [
{
"section_id": "Appendix 1",
"parent_section_id": null,
"section_name": "Appendix A More on Prior Work",
"text": "Here we make a note on difference between our work and work [29 ###reference_b29###], which tackles very similar problems as ours though there are several key differences. In [29 ###reference_b29###], the authors study shallow neural network trained for score denoisers and characterize the exact neural network output. The authors show contractive property for NN-based denoiser and prove NN-based denoiser is advantageous against eMMSE denoiser. In our work, we study the exact score matching objective (1 ###reference_###) which has not been considered in the other work and establish convergence result for NN-based score predictor which will be much harder to prove for NN-based denoiser due to involvement of noise. Moreover, for denoising score matching objective, we derive convex programs for arbitrary weight decay for multivariate data while the characterization in [29 ###reference_b29###] is for vanishing weight decay. For multivariate data, the authors of [29 ###reference_b29###] only consider modified objective with data belongs to special subspaces while our convex program holds in general. Finally, our analysis is based on convex optimization theory and no convexification is considered in [29 ###reference_b29###]. Our work can be viewed as complementary to [29 ###reference_b29###] in the sense that we study similar problems with different objectives and constraints from different angles.\nAnother work [9 ###reference_b9###] establishes approximation error bound between true score function and the GD minimizer to score-matching objective, which also serve as an approximation error bound between our convex program and true score in some cases while our method bypasses the potential local optimum problem caused by GD and derives the exact score function being predicted. The final result in [9 ###reference_b9###] is presented as error bound on predicted score and true score asymptotically in number of hidden neurons and number of data samples with expectation over noises added to data and random initialization. Our work doesn\u2019t have such a bound while we can solve the training problem globally and thus escape local minimum with a convex program (which only holds with Assumption 3.8 in the other work) and derive the score function for finite data and neurons, i.e., we know exactly what\u2019s the predicted score function analytically (see Section 3.3 ###reference_### in our work) in certain regime while the other work only has an error bound on this."
},
{
"section_id": "Appendix 2",
"parent_section_id": null,
"section_name": "Appendix B Explanation for Unbounded Objective Value",
"text": "Here we illustrate via a simple example that weight decay is necessary for the optimal objective value to stay finite. Follow notation in Section 3 ###reference_###,\nconsider for example only one data point and one hidden neuron, then objective function for the neural network with ReLU activation and no skip connection would be\nWLOG consider then when weight decay parameter set and above, we get\nThen we can set and the above expression becomes . Thus the objective goes to minus infinity when goes to infinity."
},
{
"section_id": "Appendix 3",
"parent_section_id": null,
"section_name": "Appendix C Proof in Section 3.2",
"text": "The below constraint set is strictly feasible only when\nConsider without loss of generality that Let for some , the first four constraints with are then . When the first constraint is . Thus is necessary for the constraint set to be strictly feasible. Since we can always find satisfying\nNote such satisfies all constraints in the original constraint set when . Therefore when the original constraint is strictly feasible.\n\u220e\nThe below constraint set is strictly feasible only when\nConsider without loss of generality that Then taking and in the first constraint gives and . It\u2019s necessary to have and to have both constraints strictly satisfiable. Since we can always find satisfying the below linear system\nNote such also satisfies\nTherefore when the original constraint set is strictly feasible.\n\u220e\nThe below constraint set is strictly feasible only when\nConsider without loss of generality that Then taking in the first constraint gives which indicates that is necessary for the constraint set to be strictly feasible. Since we can always find satisfying\nNote such also satisfies\nTherefore when , the original constraint set is strictly feasible.\n\u220e\nHere we first depict the assumption required for Theorem 3.2 ###reference_theorem2### to hold in Assumption C.6 ###reference_theorem6###. Note if Assumption C.6 ###reference_theorem6### is not true, original Theorem 3.2 ###reference_theorem2### still holds with equal sign replaced by greater than or equal to, which can be trivially seen from our proof of Theorem 3.2 ###reference_theorem2### below. Assumption C.6 ###reference_theorem6### has already been characterized in Proposition 3.1 in [14 ###reference_b14###], here we restate it for sake of completeness. First we define for each activation pattern the set of vectors that induce as\nLet denote the activation pattern set induced by dataset , assume for any\nAccording to Proposition 3.1 in [14 ###reference_b14###], Assumption C.6 ###reference_theorem6### is satisfied whenever data matrix is full row-rank. Empirically, Assumption C.6 ###reference_theorem6### holds with high probability according to experiments in [14 ###reference_b14###]."
},
{
"section_id": "Appendix 4",
"parent_section_id": null,
"section_name": "Appendix D Proof in Section 3.3",
"text": "When , the predicted score function is differentiable almost everywhere with least slope and largest slope . Then since the integrated score function is weakly concave, Theorem 3.3 ###reference_theorem3### follows case 1 in Theorem 4.3.6 in [2 ###reference_b2###].\n\u220e"
},
{
"section_id": "Appendix 5",
"parent_section_id": null,
"section_name": "Appendix E Proof in Section 4",
"text": "When for some , when the score matching objective can be reduced to\nwhich can be rewritten as\nLet , then problem (31 ###reference_###) is equivalent to\nTherefore,\nwhere enumerate all possible sign patterns of \nUnder Assumption C.6 ###reference_theorem6###, the construction of optimal parameter set follows Appendix C.4.1 ###reference_.SSS1###. With absolute value activation, the same conclusion holds by replacing to be and enumerate all possible sign patterns of \n\u220e"
},
{
"section_id": "Appendix 6",
"parent_section_id": null,
"section_name": "Appendix F Details for Numerical Experiments in Section 5",
"text": "For Gaussian data experiment, the training dataset contains data points sampled from standard Gaussian. For non-convex neural network training, we run 10 trials with different random parameter initiations and solve with Adam optimizer with step size . We train for 500 epochs. We run Langevin dynamics sampling (Algorithm 2 ###reference_###) with convex score predictor with data points and iterations, we take to be uniform distribution from to and\nFor Gaussian mixture experiment, the training dataset contains two Gaussian component each containing data points, with centers at and and both have standard variance. We take .\nFor spiral data simulation, we first generate data points forming a spiral as shown in the left most plot in Figure 3 ###reference_###. We then add five levels of Gaussian noise with mean zero and standard deviation . Thus the training data set contains noisy data points. We fit five convex score predictors corresponding to each noise level. We solve the convex program with CVXPY [4 ###reference_b4###] with MOSEK solver [1 ###reference_b1###]. The score plot corresponding to fitting our convex program with noise level For annealed Langevin sampling, we sample data points in total, starting from uniform distribution on interval. We set in each single Langevin process in Algorithm 2 ###reference_###. We present the sample scatter plots sequentially after sample with noise level score predictor for steps (Level 1), noise level score predictor for steps (Level 2), noise level score predictor for steps (Level 3), noise level score predictor for steps (Level 4), and finally noise level score predictor for steps (Level 5)."
},
{
"section_id": "Appendix 7",
"parent_section_id": null,
"section_name": "Appendix G Additional Simulation Results",
"text": "In this section, we give more simulation results besides those discussed in main text in Section 5 ###reference_###. In Section G.1 ###reference_### we show simulation results for score matching tasks with more neural network types and data distributions. In section G.2 ###reference_### we show more simulation results for denoising score matching tasks.\n\n###figure_4### Here we verify our findings in Section 3.3 ###reference_### with more experiments. The upper left plot in Figure 4 ###reference_### shows results for two-layer ReLU network without skip connection and with training data of uniform distribution on range . Here we still set as in Section 5.1 ###reference_###. Our theoretic analysis in Section 3.3 ###reference_### reveals that for this value, the predicted score corresponding to Gaussian distribution characterized with sample mean and sample variance, which is corroborated by our simulation results here, i.e., the mid-subplot shows score function contained in left plot in Figure 1 ###reference_###. The upper right plot follows same experimental setup except that here we experiment with absolute value activation instead of ReLU and the training data is standard normal. The predicted score is aligned with our theoretic derivation in right plot in Figure 1 ###reference_###.\nThe bottom left and bottom right plots are for networks with skip connection. We set . Our theory in Section 3.3 ###reference_### concludes that for this value, NNs without skip connection would predict zero scores while NNs with skip connection predict linear score corresponding to Gaussian distribution chractrized with sample mean and sample variance, which is supported by our simulation results.\nFor non-convex training, we run 10 trials with different random parameter initiations. It can be observed that our convex program always solves the training problem globally. Note for absolute value activation NNs (top right and bottom right plots), non-convex training sometimes sticks with local optimality, reflected by the gap of convergence value between non-convex training and our convex fitting.\n\n###figure_5### For experiment in Figure 5 ###reference_###, training data is standard Gaussian and is adopted. we take ten noise levels with standard deviation being the uniform grid from to . For each noise level, we sample steps. Initial sample points follow uniform distribution in range . The non-convex training uses Adam optimizer and takes epochs. Left most plot in Figure 5 ###reference_### shows the training loss of non-convex fittings with stepsize and our convex fitting. It can be observed that our convex fitting achieves lower training loss than all non-convex fittings. The second plot in Figure 5 ###reference_### shows annealed Langevin sampling histogram using our convex score predictor, which captures the underline Gaussian distribution. The right three plots show annealed Langevin sampling histograms with non-convex fitted score predictor trained with different learning rates. With training loss diverges and thus the predict score diverges from the true score of training data. Thus the sample histogram diverges from Gaussian. With non-convex fitted NN recognizes the desired distribution while with the NN is not trained enough thus the sampling results resemble Gaussian to some extent but not accurately. 
These results show that non-convex fitted score predictor is sometimes unstable due to training hyperparameter setting while convex fitted score predictor is usually much more reliable and thus gains empirical advantage."
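For context, the non-convex baselines in these comparisons are ordinary gradient-trained two-layer networks. The sketch below is a minimal PyTorch training loop for a denoising score matching objective of this kind; the width, learning rate, noise level, and epoch count are placeholders, not the paper's exact settings.

```python
import torch

torch.manual_seed(0)
# Two-layer ReLU score network for univariate inputs.
net = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.ReLU(),
                          torch.nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

x = torch.randn(100, 1)   # training data
sigma = 0.5               # noise level
for epoch in range(400):
    eps = torch.randn_like(x)
    x_tilde = x + sigma * eps
    # Regression target: score of the Gaussian perturbation kernel,
    # (x - x_tilde) / sigma**2 = -eps / sigma.
    target = -eps / sigma
    loss = ((net(x_tilde) - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final training loss:", loss.item())
```

A mis-set learning rate here reproduces exactly the failure modes shown in Figure 5, which is the instability the convex formulation avoids.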
}
],
"tables": {},
"image_paths": {
"1": {
"figure_path": "2402.01965v3_figure_1.png",
"caption": "Figure 1: Predicted score function and its integration for univariate data with two-layer neural network with ReLU activation (left) and absolut value activation (right). The left subplot shows all optimal score predictions by convex score predictor for univariate input data of arbitrary distribution for certain weight decay range and the right subplot shows its integration. See Section 3.3 for details.",
"url": "http://arxiv.org/html/2402.01965v3/extracted/2402.01965v3/relu_abs-01.png"
},
"2": {
"figure_path": "2402.01965v3_figure_2.png",
"caption": "Figure 2: Simulation results for\nscore matching tasks with two-layer ReLU neural network. Left figure is for Gaussian data, right figure is for two-component Gaussian mixture. Sampling histogram is with Langevin dynamics. See Section 5.1 for details.",
"url": "http://arxiv.org/html/2402.01965v3/extracted/2402.01965v3/sm_simu8-01.png"
},
"3": {
"figure_path": "2402.01965v3_figure_3.png",
"caption": "Figure 3: 2D simulation results for\ndenoising score matching tasks with our convex score predictor. The second figure shows vector field plot for score predicted by our convex score predictor. The right plots show denoising procedure with different noise levels in annealed Langevin sampling. See Section 5.2 for details.",
"url": "http://arxiv.org/html/2402.01965v3/extracted/2402.01965v3/spiral2-01.png"
},
"4": {
"figure_path": "2402.01965v3_figure_4.png",
"caption": "Figure 4: Simulation results for\nscore matching tasks with two-layer neural network. The left subplots for all four categories show training loss where the dashed blue lines indicate loss of convex score predictor. The middle plots show score prediction by convex score predictor. The right plots show sampling histograms via plain Langevin process with convex score predictor. See Appendix G.1 for details.",
"url": "http://arxiv.org/html/2402.01965v3/extracted/2402.01965v3/sm_3-01.png"
},
"5": {
"figure_path": "2402.01965v3_figure_5.png",
"caption": "Figure 5: Simulation results for denoising score matching tasks with two-layer ReLU neural network. The left plot shows training loss where the dashed blue line indicates loss of convex score predictor (10). The second plot shows sampling histogram via annealed Langevin process with convex score predictor. The third, fourth, and fifth plots show sampling histograms via annealed Langevin process with non-convex score predictors trained with learning rates 1,1\u2062e\u22122,1\u2062e\u2212611\ud835\udc5221\ud835\udc5261,1e-2,1e-61 , 1 italic_e - 2 , 1 italic_e - 6 respectively. The ground truth distribution is standard Gaussian, which is recovered by our model.",
"url": "http://arxiv.org/html/2402.01965v3/extracted/2402.01965v3/dsm_3-01.png"
}
},
"validation": true,
"references": [
{
"1": {
"title": "The MOSEK optimization toolbox, 2019.",
"author": "M. ApS.",
"venue": null,
"url": null
}
},
{
"2": {
"title": "Log-concave sampling, 2023.",
"author": "S. Chewi.",
"venue": null,
"url": null
}
},
{
"3": {
"title": "Theoretical guarantees for approximate sampling from smooth and log-concave densities, 2016.",
"author": "A. S. Dalalyan.",
"venue": null,
"url": null
}
},
{
"4": {
"title": "CVXPY: A Python-embedded modeling language for convex optimization.",
"author": "S. Diamond and S. Boyd.",
"venue": "Journal of Machine Learning Research, 17(83):1\u20135, 2016.",
"url": null
}
},
{
"5": {
"title": "Non-asymptotic convergence analysis for the unadjusted Langevin algorithm, 2016.",
"author": "A. Durmus and E. Moulines.",
"venue": null,
"url": null
}
},
{
"6": {
"title": "Sampling from a strongly log-concave distribution with the unadjusted Langevin algorithm.",
"author": "A. Durmus and \u00c9. Moulines.",
"venue": "arXiv: Statistics Theory, 2016.",
"url": null
}
},
{
"7": {
"title": "Globally optimal training of neural networks with threshold activation functions, 2023.",
"author": "T. Ergen, H. I. Gulluk, J. Lacotte, and M. Pilanci.",
"venue": null,
"url": null
}
},
{
"8": {
"title": "Path regularization: A convexity and sparsity inducing regularization for parallel ReLU networks, 2023.",
"author": "T. Ergen and M. Pilanci.",
"venue": null,
"url": null
}
},
{
"9": {
"title": "Neural network-based score estimation in diffusion models: Optimization and generalization, 2024.",
"author": "Y. Han, M. Razaviyayn, and R. Xu.",
"venue": null,
"url": null
}
},
{
"10": {
"title": "Denoising diffusion probabilistic models, 2020.",
"author": "J. Ho, A. Jain, and P. Abbeel.",
"venue": null,
"url": null
}
},
{
"11": {
"title": "Estimation of non-normalized statistical models by score matching.",
"author": "A. Hyv\u00e4rinen.",
"venue": "Journal of Machine Learning Research, 6(24):695\u2013709, 2005.",
"url": null
}
},
{
"12": {
"title": "On the generalization properties of diffusion models, 2023.",
"author": "P. Li, Z. Li, H. Zhang, and J. Bian.",
"venue": null,
"url": null
}
},
{
"13": {
"title": "Estimation of high-dimensional graphical models using regularized score matching, 2016.",
"author": "L. Lin, M. Drton, and A. Shojaie.",
"venue": null,
"url": null
}
},
{
"14": {
"title": "Fast convex optimization for two-layer ReLU networks: Equivalent model classes and cone decompositions, 2022.",
"author": "A. Mishkin, A. Sahiner, and M. Pilanci.",
"venue": null,
"url": null
}
},
{
"15": {
"title": "Score-based generative models detect manifolds, 2022.",
"author": "J. Pidstrigach.",
"venue": null,
"url": null
}
},
{
"16": {
"title": "From complexity to clarity: Analytical expressions of deep neural network weights via Clifford\u2019s geometric algebra and convexity, 2024.",
"author": "M. Pilanci.",
"venue": null,
"url": null
}
},
{
"17": {
"title": "Neural networks are convex regularizers: Exact polynomial-time convex optimization formulations for two-layer networks, 2020.",
"author": "M. Pilanci and T. Ergen.",
"venue": null,
"url": null
}
},
{
"18": {
"title": "Vector-output ReLU neural network problems are copositive programs: Convex analysis of two layer networks and polynomial-time algorithms, 2021.",
"author": "A. Sahiner, T. Ergen, J. Pauly, and M. Pilanci.",
"venue": null,
"url": null
}
},
{
"19": {
"title": "Deep unsupervised learning using nonequilibrium thermodynamics, 2015.",
"author": "J. Sohl-Dickstein, E. A. Weiss, N. Maheswaranathan, and S. Ganguli.",
"venue": null,
"url": null
}
},
{
"20": {
"title": "Denoising diffusion implicit models, 2022.",
"author": "J. Song, C. Meng, and S. Ermon.",
"venue": null,
"url": null
}
},
{
"21": {
"title": "Sliced score matching: A scalable approach to density and score estimation, 2019.",
"author": "Y. Song, S. Garg, J. Shi, and S. Ermon.",
"venue": null,
"url": null
}
},
{
"22": {
"title": "Score-based generative modeling through stochastic differential equations, 2021.",
"author": "Y. Song, J. Sohl-Dickstein, D. P. Kingma, A. Kumar, S. Ermon, and B. Poole.",
"venue": null,
"url": null
}
},
{
"23": {
"title": "An introduction to hyperplane arrangements.",
"author": "R. P. Stanley et al.",
"venue": "Geometric combinatorics, 13(389-496):24, 2004.",
"url": null
}
},
{
"24": {
"title": "A connection between score matching and denoising autoencoders.",
"author": "P. Vincent.",
"venue": "Neural computation, 23(7):1661\u20131674, 2011.",
"url": null
}
},
{
"25": {
"title": "Parallel deep neural networks have zero duality gap, 2023.",
"author": "Y. Wang, T. Ergen, and M. Pilanci.",
"venue": null,
"url": null
}
},
{
"26": {
"title": "AR-Diffusion: Auto-regressive diffusion model for text generation, 2023.",
"author": "T. Wu, Z. Fan, X. Liu, Y. Gong, Y. Shen, J. Jiao, H.-T. Zheng, J. Li, Z. Wei, J. Guo, N. Duan, and W. Chen.",
"venue": null,
"url": null
}
},
{
"27": {
"title": "On the generalization of diffusion model, 2023.",
"author": "M. Yi, J. Sun, and Z. Li.",
"venue": null,
"url": null
}
},
{
"28": {
"title": "Diffusion probabilistic models generalize when they fail to memorize.",
"author": "T. Yoon, J. Y. Choi, S. Kwon, and E. K. Ryu.",
"venue": "In ICML 2023 Workshop on Structured Probabilistic Inference & Generative Modeling, 2023.",
"url": null
}
},
{
"29": {
"title": "How do minimum-norm shallow denoisers look in function space?, 2023.",
"author": "C. Zeno, G. Ongie, Y. Blumenfeld, N. Weinberger, and D. Soudry.",
"venue": null,
"url": null
}
},
{
"30": {
"title": "A survey on audio diffusion models: Text to speech synthesis and enhancement in generative AI, 2023.",
"author": "C. Zhang, C. Zhang, S. Zheng, M. Zhang, M. Qamar, S.-H. Bae, and I. S. Kweon.",
"venue": null,
"url": null
}
}
],
"url": "http://arxiv.org/html/2402.01965v3"
}
20240522/2402.02592v2.json
ADDED
The diff for this file is too large to render.
See raw diff
20240522/2402.02675v2.json
ADDED
The diff for this file is too large to render.
See raw diff
20240522/2402.09346v3.json
ADDED
The diff for this file is too large to render.
See raw diff
20240522/2402.11489v2.json
ADDED
The diff for this file is too large to render.
See raw diff
20240522/2402.17205v3.json
ADDED
The diff for this file is too large to render.
See raw diff